# File: bindings/python/ensmallen/datasets/string/gammaproteobacteriumnor53.py
# Repo: AnacletoLAB/ensmallen_graph (license: MIT)
"""
This file offers the methods to automatically retrieve the graph gamma proteobacterium NOR53.
The graph is automatically retrieved from the STRING repository.
References
---------------------
Please cite the following if you use the data:
```bib
@article{szklarczyk2019string,
title={STRING v11: protein--protein association networks with increased coverage, supporting functional discovery in genome-wide experimental datasets},
author={Szklarczyk, Damian and Gable, Annika L and Lyon, David and Junge, Alexander and Wyder, Stefan and Huerta-Cepas, Jaime and Simonovic, Milan and Doncheva, Nadezhda T and Morris, John H and Bork, Peer and others},
journal={Nucleic acids research},
volume={47},
number={D1},
pages={D607--D613},
year={2019},
publisher={Oxford University Press}
}
```
"""
from typing import Dict
from ..automatic_graph_retrieval import AutomaticallyRetrievedGraph
from ...ensmallen import Graph # pylint: disable=import-error
def GammaProteobacteriumNor53(
directed: bool = False,
preprocess: bool = True,
load_nodes: bool = True,
verbose: int = 2,
cache: bool = True,
cache_path: str = "graphs/string",
version: str = "links.v11.5",
**additional_graph_kwargs: Dict
) -> Graph:
"""Return new instance of the gamma proteobacterium NOR53 graph.
The graph is automatically retrieved from the STRING repository.
Parameters
-------------------
directed: bool = False
Whether to load the graph as directed or undirected.
By default false.
preprocess: bool = True
Whether to preprocess the graph to be loaded in
optimal time and memory.
load_nodes: bool = True
Whether to load the nodes vocabulary or treat the nodes
simply as a numeric range.
verbose: int = 2
Whether to show loading bars during the retrieval and building
of the graph.
cache: bool = True
Whether to use cache, i.e. download files only once
and preprocess them only once.
cache_path: str = "graphs"
Where to store the downloaded graphs.
version: str = "links.v11.5"
The version of the graph to retrieve.
The available versions are:
- homology.v11.5
- physical.links.v11.5
- links.v11.5
additional_graph_kwargs: Dict
Additional graph kwargs.
Returns
-----------------------
Instance of the gamma proteobacterium NOR53 graph.
References
---------------------
Please cite the following if you use the data:
```bib
@article{szklarczyk2019string,
title={STRING v11: protein--protein association networks with increased coverage, supporting functional discovery in genome-wide experimental datasets},
author={Szklarczyk, Damian and Gable, Annika L and Lyon, David and Junge, Alexander and Wyder, Stefan and Huerta-Cepas, Jaime and Simonovic, Milan and Doncheva, Nadezhda T and Morris, John H and Bork, Peer and others},
journal={Nucleic acids research},
volume={47},
number={D1},
pages={D607--D613},
year={2019},
publisher={Oxford University Press}
}
```
"""
return AutomaticallyRetrievedGraph(
graph_name="GammaProteobacteriumNor53",
repository="string",
version=version,
directed=directed,
preprocess=preprocess,
load_nodes=load_nodes,
verbose=verbose,
cache=cache,
cache_path=cache_path,
additional_graph_kwargs=additional_graph_kwargs
)()

# File: tests/unit/resolution_resolvelib/conftest.py
# Repo: colesbury/pip (license: MIT)
import pytest
from pip._internal.cli.req_command import RequirementCommand
from pip._internal.commands.install import InstallCommand
from pip._internal.index.collector import LinkCollector
from pip._internal.index.package_finder import PackageFinder
# from pip._internal.models.index import PyPI
from pip._internal.models.search_scope import SearchScope
from pip._internal.models.selection_prefs import SelectionPreferences
from pip._internal.network.session import PipSession
from pip._internal.req.constructors import install_req_from_line
from pip._internal.req.req_tracker import get_requirement_tracker
from pip._internal.resolution.resolvelib.factory import Factory
from pip._internal.resolution.resolvelib.provider import PipProvider
from pip._internal.utils.temp_dir import TempDirectory, global_tempdir_manager
@pytest.fixture
def finder(data):
session = PipSession()
scope = SearchScope([str(data.packages)], [])
collector = LinkCollector(session, scope)
prefs = SelectionPreferences(allow_yanked=False)
finder = PackageFinder.create(collector, prefs)
yield finder
@pytest.fixture
def preparer(finder):
session = PipSession()
rc = InstallCommand("x", "y")
o = rc.parse_args([])
with global_tempdir_manager():
with TempDirectory() as tmp:
with get_requirement_tracker() as tracker:
preparer = RequirementCommand.make_requirement_preparer(
tmp,
options=o[0],
req_tracker=tracker,
session=session,
finder=finder,
use_user_site=False,
)
yield preparer
@pytest.fixture
def factory(finder, preparer):
yield Factory(
finder=finder,
preparer=preparer,
make_install_req=install_req_from_line,
wheel_cache=None,
use_user_site=False,
force_reinstall=False,
ignore_installed=False,
ignore_requires_python=False,
py_version_info=None,
)
@pytest.fixture
def provider(factory):
yield PipProvider(
factory=factory,
constraints={},
ignore_dependencies=False,
upgrade_strategy="to-satisfy-only",
user_requested={},
)

#!/usr/bin/python
# File: sar/scripts/modules/run.py
# Repo: cpswarm/complex_behaviors (license: Apache-2.0)
import sys
import numpy as np
import shapely.geometry as geo
from modules import bag
class Run:
'''
a class for storing the bag file contents of one (simulation) run with multiple UAVs
'''
def __init__(self, folder, bag_files, launch_file, area_poly):
'''
initialize class
:param string folder: absolute path of the bag file directory
:param string bag_files: name of the bag files for this run
:param string launch_file: launch file that was launched to generate the bag files
:param polygon area_poly: maximum coverable area
'''
# date and time of run
self.id = bag_files[0].split('_')[4].split('.')[0]
# list of bags
self.bags = [bag.Bag(folder, bf, launch_file) for bf in bag_files]
# number of UAVs in this run
self.length = len(self.bags)
# maximum coverable area
self.area_poly = area_poly
self.area_max = area_poly.area
# time when the first UAV started
self.tmin = .0
# time when the last UAV stopped
self.tmax = .0
# first target found time
self.found = 0.0
# first target rescued time
self.rescued = 0.0
# average field of view of the UAVs
self.fov = .0
def process(self, name_space, res_space, fov, verbose=False):
'''
process all bags of this run
:param string name_space: name space of topics
:param float res_space: spatial resolution in meter to filter the input data by (minimum distance between two consecutive coordinates)
:param float fov: field of view of the UAVs in radian
:param bool verbose: whether to be verbose (default False)
'''
for i,b in enumerate(self.bags):
if not verbose:
sys.stdout.write("{0}... ".format(i+1))
sys.stdout.flush()
# get bag information
b.info(name_space, verbose)
# parse bag file
b.parse(name_space, res_space, verbose)
# process bag file data
b.process(fov, verbose)
# time when the first UAV started
self.tmin = min([b.tstart for b in self.bags])
# time when the last UAV stopped
self.tmax = max([b.tstop for b in self.bags])
# time when the first target was found
self.found = min([b.found for b in self.bags])
# time when the first target was rescued
self.rescued = min([b.rescued for b in self.bags])
# average field of view of the UAVs
self.fov = np.mean([b.fov for b in self.bags])
def area(self, t):
'''
calculate the percentage of area covered by all UAVs after a specific time
:param float t: time up to which to calculate the covered area, values less than 0 will return the total covered area
'''
# total covered area
if t < 0:
return geo.MultiLineString([geo.LineString(b.path(b.tstop)) for b in self.bags]).buffer(self.fov).intersection(self.area_poly).area / self.area_max
# no more area covered after UAV stopped
elif t > self.tmax:
return np.nan
# calculate area
else:
return geo.MultiLineString([geo.LineString(b.path(t)) for b in self.bags]).buffer(self.fov).intersection(self.area_poly).area / self.area_max
def time(self, p, tmin=-1, tmax=-1):
'''
calculate the time it took all UAVs to cover a specific area
:param float p: percentage of covered area, 0 will return the starting time, values less than 0 will return the stopping time
:param float tmin: lower bound of time (default self.tmin)
:param float tmax: upper bound of time (default self.tmax)
'''
# default values
if tmin < self.tmin:
tmin = self.tmin
if tmax < self.tmin:
tmax = self.tmax
# starting time
if p == 0:
return tmin
# stopping time
elif p < 0:
return tmax
# find time
else:
return self.regula_falsi(p, tmin, tmax)
def regula_falsi(self, p, a, b, e=0.001, m=5):
'''
find time that corresponds approximately to the given coverage percentage
:param float p: percentage to find time for
:param float a: lower bound of time (default self.tmin)
:param float b: upper bound of time (default self.tmax)
:param float e: half of upper bound for error (default: 0.001)
:param int m: maximum number of iterations (default: 5)
'''
# initialize areas of bounds
area_a = self.area(a)
area_b = self.area(b)
# initialize sought time
c = a
# search time for maximum number if iterations
for _ in range(m):  # Python 3: xrange no longer exists; loop index is unused
# sought time out of bounds
if area_a == area_b:
return np.nan
# update estimate of time
c = (b*area_a - a*area_b + a*p - b*p) / (area_a - area_b)
area_c = self.area(c)
# time close enough
if np.abs(p - area_c) < e:
break
# update bounds
if area_c < p:
a = c
area_a = area_c
else:
b = c
area_b = area_c
return c
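The `regula_falsi` method above is the classical false-position root finder, applied here to the (monotonic) coverage curve. A standalone sketch of the same update rule on a simple monotonic function — `f`, `target`, and the bracket are illustrative, not part of the module:

```python
def regula_falsi(f, target, a, b, e=0.001, m=50):
    """Find x in [a, b] with f(x) ~= target by the false-position method.

    Assumes f is monotonically increasing on [a, b].
    """
    fa, fb = f(a), f(b)
    c = a
    for _ in range(m):
        if fa == fb:  # bracket collapsed; no secant can be drawn
            return float('nan')
        # Secant through (a, fa) and (b, fb), solved for f(c) == target;
        # same expression as in Run.regula_falsi above.
        c = (b * fa - a * fb + a * target - b * target) / (fa - fb)
        fc = f(c)
        if abs(target - fc) < e:  # close enough to the target value
            break
        if fc < target:           # tighten the bracket around the target
            a, fa = c, fc
        else:
            b, fb = c, fc
    return c

# Example: solve x**2 == 2 on [1, 2], i.e. approximate sqrt(2)
root = regula_falsi(lambda x: x * x, 2.0, 1.0, 2.0)
```

Unlike bisection, each step uses the function values at both ends of the bracket, which converges faster when the curve is close to linear — as a coverage-over-time curve typically is locally.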

# File: examples/django_subscriptions/django_subscriptions/template.py
# Repo: Dessix/graphql-ws (license: MIT)
from string import Template
def render_graphiql():
return Template('''
<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8" />
<title>GraphiQL</title>
<meta name="robots" content="noindex" />
<style>
html, body {
height: 100%;
margin: 0;
overflow: hidden;
width: 100%;
}
</style>
<link href="https://cdnjs.cloudflare.com/ajax/libs/graphiql/0.11.10/graphiql.css" rel="stylesheet" />
<script src="//cdn.jsdelivr.net/fetch/0.9.0/fetch.min.js"></script>
<script src="//cdn.jsdelivr.net/react/15.0.0/react.min.js"></script>
<script src="//cdn.jsdelivr.net/react/15.0.0/react-dom.min.js"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/graphiql/0.11.10/graphiql.js"></script>
<script src="//unpkg.com/subscriptions-transport-ws@${SUBSCRIPTIONS_TRANSPORT_VERSION}/browser/client.js"></script>
<script src="//unpkg.com/graphiql-subscriptions-fetcher@0.0.2/browser/client.js"></script>
</head>
<body>
<script>
// Collect the URL parameters
var parameters = {};
window.location.search.substr(1).split('&').forEach(function (entry) {
var eq = entry.indexOf('=');
if (eq >= 0) {
parameters[decodeURIComponent(entry.slice(0, eq))] =
decodeURIComponent(entry.slice(eq + 1));
}
});
// Produce a Location query string from a parameter object.
function locationQuery(params, location) {
return (location ? location: '') + '?' + Object.keys(params).map(function (key) {
return encodeURIComponent(key) + '=' +
encodeURIComponent(params[key]);
}).join('&');
}
// Derive a fetch URL from the current URL, sans the GraphQL parameters.
var graphqlParamNames = {
query: true,
variables: true,
operationName: true
};
var otherParams = {};
for (var k in parameters) {
if (parameters.hasOwnProperty(k) && graphqlParamNames[k] !== true) {
otherParams[k] = parameters[k];
}
}
var fetcher;
if (true) {
var subscriptionsClient = new window.SubscriptionsTransportWs.SubscriptionClient('${subscriptionsEndpoint}', {
reconnect: true
});
fetcher = window.GraphiQLSubscriptionsFetcher.graphQLFetcher(subscriptionsClient, graphQLFetcher);
} else {
fetcher = graphQLFetcher;
}
// We don't use safe-serialize for location, because it's not client input.
var fetchURL = locationQuery(otherParams, '${endpointURL}');
// Defines a GraphQL fetcher using the fetch API.
function graphQLFetcher(graphQLParams) {
return fetch(fetchURL, {
method: 'post',
headers: {
'Accept': 'application/json',
'Content-Type': 'application/json',
},
body: JSON.stringify(graphQLParams),
credentials: 'include',
}).then(function (response) {
return response.text();
}).then(function (responseBody) {
try {
return JSON.parse(responseBody);
} catch (error) {
return responseBody;
}
});
}
// When the query and variables string is edited, update the URL bar so
// that it can be easily shared.
function onEditQuery(newQuery) {
parameters.query = newQuery;
updateURL();
}
function onEditVariables(newVariables) {
parameters.variables = newVariables;
updateURL();
}
function onEditOperationName(newOperationName) {
parameters.operationName = newOperationName;
updateURL();
}
function updateURL() {
history.replaceState(null, null, locationQuery(parameters) + window.location.hash);
}
// Render <GraphiQL /> into the body.
ReactDOM.render(
React.createElement(GraphiQL, {
fetcher: fetcher,
onEditQuery: onEditQuery,
onEditVariables: onEditVariables,
onEditOperationName: onEditOperationName,
}),
document.body
);
</script>
</body>
</html>''').substitute(
GRAPHIQL_VERSION='0.11.10',
SUBSCRIPTIONS_TRANSPORT_VERSION='0.7.0',
subscriptionsEndpoint='ws://localhost:8000/subscriptions',
# subscriptionsEndpoint='ws://localhost:5000/',
endpointURL='/graphql',
)
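`render_graphiql` builds the page with `string.Template` rather than `str.format`: `$`-substitution only touches `${...}` placeholders, so the literal `{`/`}` braces of the embedded JavaScript and CSS pass through unescaped. A minimal self-contained illustration of that behavior:

```python
from string import Template

# JS object braces coexist with a ${...} placeholder in the same string
page = Template('var cfg = { ws: "${subscriptionsEndpoint}" };')
rendered = page.substitute(
    subscriptionsEndpoint='ws://localhost:8000/subscriptions',
)
# Only the ${...} placeholder was replaced; the braces survive intact
```

With `str.format`, every `{` and `}` in the HTML/JS would have to be doubled, which is why `Template` is the usual choice for templating pages that contain script blocks.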

# File: tests/unit/objects/test_lifecyleconfiguration.py
# Repo: senstb/aws-elastic-beanstalk-cli (license: Apache-2.0)
# Copyright 2017 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"). You
# may not use this file except in compliance with the License. A copy of
# the License is located at
#
# http://aws.amazon.com/apache2.0/
#
# or in the "license" file accompanying this file. This file is
# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
# ANY KIND, either express or implied. See the License for the specific
# language governing permissions and limitations under the License.
import unittest
import copy
from datetime import datetime
import mock
from dateutil.tz import tzutc
from ebcli.objects.exceptions import InvalidOptionsError, NotFoundError
from ebcli.objects import lifecycleconfiguration
from ebcli.objects.lifecycleconfiguration import LifecycleConfiguration
class TestLifecycleConfiguration(unittest.TestCase):
current_time = datetime.now()
region = 'us-foo-1'
app_name = 'foo_app'
file_location = 'local/app/'
service_role = 'arn:aws:iam::123123123123:role/aws-elasticbeanstalk-service-role'
get_role_response = {u'Arn': service_role}
api_model = {u'ApplicationName': app_name, u'Description': 'Application created from the EB CLI using "eb init"',
u'Versions': ['Sample Application'],
u'DateCreated': datetime(2016, 12, 20, 2, 48, 7, 938000, tzinfo=tzutc()),
u'ConfigurationTemplates': [],
u'DateUpdated': datetime(2016, 12, 20, 2, 48, 7, 938000, tzinfo=tzutc()),
u'ResourceLifecycleConfig': {u'VersionLifecycleConfig':
{u'MaxCountRule': {u'DeleteSourceFromS3': False, u'Enabled': False,
u'MaxCount': 200},
u'MaxAgeRule': {u'DeleteSourceFromS3': False, u'Enabled': False,
u'MaxAgeInDays': 180}},
u'ServiceRole': service_role}}
usr_model = {'ApplicationName': app_name,
'DateUpdated': datetime(2016, 12, 20, 2, 48, 7, 938000, tzinfo=tzutc()),
'Configurations': {u'VersionLifecycleConfig':
{u'MaxCountRule': {u'DeleteSourceFromS3': False, u'Enabled': False,
u'MaxCount': 200},
u'MaxAgeRule': {u'DeleteSourceFromS3': False, u'Enabled': False,
u'MaxAgeInDays': 180}},
u'ServiceRole': service_role}}
def setUp(self):
self.patcher_get_role = mock.patch('ebcli.objects.lifecycleconfiguration.get_role')
self.mock_get_role = self.patcher_get_role.start()
def tearDown(self):
self.patcher_get_role.stop()
'''
Testing collect_changes
'''
def test_collect_changes_no_service_role(self):
no_service_role_model = copy.deepcopy(self.usr_model)
del no_service_role_model['Configurations']['ServiceRole']
expected_changes = no_service_role_model['Configurations']
lifecycle_config = LifecycleConfiguration(self.api_model)
changes = lifecycle_config.collect_changes(no_service_role_model)
self.assertEqual(expected_changes, changes,
"Expected changes to be: {0}\n But was: {1}".format(expected_changes, changes))
self.mock_get_role.assert_not_called()
def test_collect_changes_with_service_role(self):
lifecycle_config = LifecycleConfiguration(self.api_model)
changes = lifecycle_config.collect_changes(self.usr_model)
expected_changed = self.usr_model['Configurations']
self.assertEqual(expected_changed, changes,
"Expected changes to be: {0}\n But was: {1}".format(expected_changed, changes))
self.mock_get_role.assert_called_with(self.service_role.split('/')[-1])
def test_collect_changes_service_role_not_found(self):
self.mock_get_role.side_effect = NotFoundError("Could not find role")
lifecycle_config = LifecycleConfiguration(self.api_model)
self.assertRaises(InvalidOptionsError, lifecycle_config.collect_changes, self.usr_model)
self.mock_get_role.assert_called_with(self.service_role.split('/')[-1])
'''
Testing convert_api_to_usr_model
'''
def test_convert_api_to_usr_model_no_service_role(self):
self.mock_get_role.return_value = self.get_role_response
no_service_role_model = copy.deepcopy(self.api_model)
del no_service_role_model['ResourceLifecycleConfig']['ServiceRole']
lifecycle_config = LifecycleConfiguration(no_service_role_model)
actual_usr_model = lifecycle_config.convert_api_to_usr_model()
self.assertEqual(self.usr_model, actual_usr_model,
"Expected changes to be: {0}\n But was: {1}".format(self.usr_model, actual_usr_model))
self.mock_get_role.assert_called_with(lifecycleconfiguration.DEFAULT_LIFECYCLE_SERVICE_ROLE)
def test_convert_api_to_usr_model_default_role_does_not_exist(self):
self.mock_get_role.side_effect = NotFoundError("Could not find role")
no_service_role_model = copy.deepcopy(self.api_model)
del no_service_role_model['ResourceLifecycleConfig']['ServiceRole']
lifecycle_config = LifecycleConfiguration(no_service_role_model)
actual_usr_model = lifecycle_config.convert_api_to_usr_model()
expected_usr_model = copy.deepcopy(self.usr_model)
expected_usr_model['Configurations'][u'ServiceRole'] = lifecycleconfiguration.DEFAULT_ARN_STRING
self.assertEqual(expected_usr_model, actual_usr_model,
"Expected changes to be: {0}\n But was: {1}".format(expected_usr_model, actual_usr_model))
self.mock_get_role.assert_called_with(lifecycleconfiguration.DEFAULT_LIFECYCLE_SERVICE_ROLE)
def test_convert_api_to_usr_model_with_service_role(self):
lifecycle_config = LifecycleConfiguration(self.api_model)
actual_usr_model = lifecycle_config.convert_api_to_usr_model()
self.assertEqual(self.usr_model, actual_usr_model,
"Expected changes to be: {0}\n But was: {1}".format(self.usr_model, actual_usr_model))
self.mock_get_role.assert_not_called()
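The tests above repeatedly `copy.deepcopy` the shared `api_model`/`usr_model` fixtures before deleting nested keys. A shallow copy would not be enough, because the nested dicts would still be shared and one test's mutation would leak into the class-level fixture. A small demonstration of the difference (the dict values are illustrative):

```python
import copy

model = {'ResourceLifecycleConfig': {'ServiceRole': 'arn:aws:iam::123:role/x'}}

shallow = copy.copy(model)      # new outer dict, shared inner dict
deep = copy.deepcopy(model)     # fully independent structure

del deep['ResourceLifecycleConfig']['ServiceRole']
# deep copy: the original nested dict is untouched
assert 'ServiceRole' in model['ResourceLifecycleConfig']

del shallow['ResourceLifecycleConfig']['ServiceRole']
# shallow copy: the delete went through the shared nested dict
assert 'ServiceRole' not in model['ResourceLifecycleConfig']
```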

# File: bbevoting_project/settings.py
# Repo: abhisunkewar/Blockchain-based-E-Voting-Simulation (license: MIT)
"""
Django settings for bbevoting_project project.
Generated by 'django-admin startproject' using Django 2.1.2.
For more information on this file, see
https://docs.djangoproject.com/en/2.1/topics/settings/
For the full list of settings and their values, see
https://docs.djangoproject.com/en/2.1/ref/settings/
"""
import os, math
# Build paths inside the project like this: os.path.join(BASE_DIR, ...)
BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
# Quick-start development settings - unsuitable for production
# See https://docs.djangoproject.com/en/2.1/howto/deployment/checklist/
# SECURITY WARNING: keep the secret key used in production secret!
SECRET_KEY = '9%k^*0(qi*oktfa+#a3pp9g3eo-9vd(kws8+$t#lf2u&b#+u-*'
# SECURITY WARNING: don't run with debug turned on in production!
DEBUG = True
ALLOWED_HOSTS = ['localhost', '127.0.0.1', ]
# Application definition
INSTALLED_APPS = [
'django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
'ballot',
'simulation',
'welcome',
]
MIDDLEWARE = [
'django.middleware.security.SecurityMiddleware',
'django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.common.CommonMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
'django.middleware.clickjacking.XFrameOptionsMiddleware',
]
ROOT_URLCONF = 'bbevoting_project.urls'
TEMPLATES = [
{
'BACKEND': 'django.template.backends.django.DjangoTemplates',
'DIRS': ['templates', ],
'APP_DIRS': True,
'OPTIONS': {
'context_processors': [
'django.template.context_processors.debug',
'django.template.context_processors.request',
'django.contrib.auth.context_processors.auth',
'django.contrib.messages.context_processors.messages',
],
},
},
]
WSGI_APPLICATION = 'bbevoting_project.wsgi.application'
# Database
# https://docs.djangoproject.com/en/2.1/ref/settings/#databases
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.sqlite3',
'NAME': os.path.join(BASE_DIR, 'db.sqlite3'),
}
}
# Password validation
# https://docs.djangoproject.com/en/2.1/ref/settings/#auth-password-validators
AUTH_PASSWORD_VALIDATORS = [
{
'NAME': 'django.contrib.auth.password_validation.UserAttributeSimilarityValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.MinimumLengthValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.CommonPasswordValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.NumericPasswordValidator',
},
]
# Internationalization
# https://docs.djangoproject.com/en/2.1/topics/i18n/
LANGUAGE_CODE = 'en-us'
TIME_ZONE = 'Asia/Jakarta'
USE_I18N = True
USE_L10N = True
USE_TZ = True
# Static files (CSS, JavaScript, Images)
# https://docs.djangoproject.com/en/2.1/howto/static-files/
STATIC_URL = '/static/'
STATICFILES_DIRS = [
os.path.join(BASE_DIR, 'static'),
]
# FOR DEMO PURPOSE
# IMPORTANT: Re-run the simulation after modifying these values.
# Public key
PUBLIC_KEY = """-----BEGIN PUBLIC KEY-----
MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEJ5r+Cab+pDjJm1INuWDJFfTPcvqJ
lIEPNL/gFRyz9sl55N8jENmCslOSpNCkJUEb879x+0Jx0pg9POeOAT7Xrw==
-----END PUBLIC KEY-----"""
# PUZZLE
PUZZLE = '0000' # or more zeros
PLENGTH = len(PUZZLE)
# BATCH SIMULATION
N_TRANSACTIONS = 50
N_TX_PER_BLOCK = 5
N_BLOCKS = math.ceil(N_TRANSACTIONS/N_TX_PER_BLOCK) # Number of required blocks as int
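`N_BLOCKS` rounds the batch count up so a final partial batch of transactions still gets its own block; with the values above, 50 transactions at 5 per block need exactly 10 blocks. A quick standalone check of the ceiling arithmetic (not importing the settings module):

```python
import math

n_transactions = 50
n_tx_per_block = 5
n_blocks = math.ceil(n_transactions / n_tx_per_block)

# A partial final batch still requires one more block
assert math.ceil(7 / n_tx_per_block) == 2
```

Note this relies on Python 3's true division; under Python 2, `50/5` would be integer division and `math.ceil` would return a float.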

# File: Articles/urls.py
# Repo: Abdulrahmannaser/Journal (license: MIT)
from django.urls import path
from . import views
app_name = 'Articles'
urlpatterns = [
path('', views.Articles, name='Main'),
path('MakeArticle', views.MakeArticle, name='MakeArticle'),
path('Article/<int:Article_id>', views.Article, name='Article'),
path('EditArticle/<int:Article_id>', views.EditArticle, name='EditArticle')
]

# File: resources/dot_PyCharm/system/python_stubs/-762174762/PySide/QtGui/__init__.py
# Repo: basepipe/developer_onboarding (license: MIT)
# encoding: utf-8
# module PySide.QtGui
# from C:\Python27\lib\site-packages\PySide\QtGui.pyd
# by generator 1.147
# no doc
# imports
import PySide.QtCore as __PySide_QtCore
import Shiboken as __Shiboken
# Variables with simple values
qApp = None
# functions
def qAlpha(*args, **kwargs): # real signature unknown
pass
def qBlue(*args, **kwargs): # real signature unknown
pass
def qGray(*args, **kwargs): # real signature unknown
pass
def qGreen(*args, **kwargs): # real signature unknown
pass
def qIsGray(*args, **kwargs): # real signature unknown
pass
def qRed(*args, **kwargs): # real signature unknown
pass
def qRgb(*args, **kwargs): # real signature unknown
pass
def qRgba(*args, **kwargs): # real signature unknown
pass
# classes
from QPaintDevice import QPaintDevice
from QWidget import QWidget
from QAbstractButton import QAbstractButton
from QGraphicsItem import QGraphicsItem
from QAbstractGraphicsShapeItem import QAbstractGraphicsShapeItem
from QAbstractItemDelegate import QAbstractItemDelegate
from QFrame import QFrame
from QAbstractScrollArea import QAbstractScrollArea
from QAbstractItemView import QAbstractItemView
from QDialog import QDialog
from QAbstractPageSetupDialog import QAbstractPageSetupDialog
from QAbstractPrintDialog import QAbstractPrintDialog
from QAbstractProxyModel import QAbstractProxyModel
from QAbstractSlider import QAbstractSlider
from QAbstractSpinBox import QAbstractSpinBox
from QAbstractTextDocumentLayout import QAbstractTextDocumentLayout
from QAccessibleEvent import QAccessibleEvent
from QAction import QAction
from QActionEvent import QActionEvent
from QActionGroup import QActionGroup
from QApplication import QApplication
from QPixmap import QPixmap
from QBitmap import QBitmap
from QLayoutItem import QLayoutItem
from QLayout import QLayout
from QBoxLayout import QBoxLayout
from QBrush import QBrush
from QButtonGroup import QButtonGroup
from QCalendarWidget import QCalendarWidget
from QStyle import QStyle
from QCommonStyle import QCommonStyle
from QMotifStyle import QMotifStyle
from QCDEStyle import QCDEStyle
from QCheckBox import QCheckBox
from QWindowsStyle import QWindowsStyle
from QCleanlooksStyle import QCleanlooksStyle
from QClipboard import QClipboard
from QClipboardEvent import QClipboardEvent
from QCloseEvent import QCloseEvent
from QColor import QColor
from QColorDialog import QColorDialog
from QColumnView import QColumnView
from QComboBox import QComboBox
from QPushButton import QPushButton
from QCommandLinkButton import QCommandLinkButton
from QCompleter import QCompleter
from QGradient import QGradient
from QConicalGradient import QConicalGradient
from QInputEvent import QInputEvent
from QContextMenuEvent import QContextMenuEvent
from QCursor import QCursor
from QDataWidgetMapper import QDataWidgetMapper
from QDateTimeEdit import QDateTimeEdit
from QDateEdit import QDateEdit
from QDesktopServices import QDesktopServices
from QDesktopWidget import QDesktopWidget
from QDial import QDial
from QDialogButtonBox import QDialogButtonBox
from QDirModel import QDirModel
from QDockWidget import QDockWidget
from QDoubleSpinBox import QDoubleSpinBox
from QValidator import QValidator
from QDoubleValidator import QDoubleValidator
from QDrag import QDrag
from QDropEvent import QDropEvent
from QDragMoveEvent import QDragMoveEvent
from QDragEnterEvent import QDragEnterEvent
from QDragLeaveEvent import QDragLeaveEvent
from QErrorMessage import QErrorMessage
from QFileDialog import QFileDialog
from QFileIconProvider import QFileIconProvider
from QFileOpenEvent import QFileOpenEvent
from QFileSystemModel import QFileSystemModel
from QFocusEvent import QFocusEvent
from QFocusFrame import QFocusFrame
from QFont import QFont
from QFontComboBox import QFontComboBox
from QFontDatabase import QFontDatabase
from QFontDialog import QFontDialog
from QFontInfo import QFontInfo
from QFontMetrics import QFontMetrics
from QFontMetricsF import QFontMetricsF
from QFormLayout import QFormLayout
from QGesture import QGesture
from QGestureEvent import QGestureEvent
from QGestureRecognizer import QGestureRecognizer
from QGraphicsAnchor import QGraphicsAnchor
from QGraphicsLayoutItem import QGraphicsLayoutItem
from QGraphicsLayout import QGraphicsLayout
from QGraphicsAnchorLayout import QGraphicsAnchorLayout
from QGraphicsEffect import QGraphicsEffect
from QGraphicsBlurEffect import QGraphicsBlurEffect
from QGraphicsColorizeEffect import QGraphicsColorizeEffect
from QGraphicsDropShadowEffect import QGraphicsDropShadowEffect
from QGraphicsEllipseItem import QGraphicsEllipseItem
from QGraphicsGridLayout import QGraphicsGridLayout
from QGraphicsItemAnimation import QGraphicsItemAnimation
from QGraphicsItemGroup import QGraphicsItemGroup
from QGraphicsLinearLayout import QGraphicsLinearLayout
from QGraphicsLineItem import QGraphicsLineItem
from QGraphicsObject import QGraphicsObject
from QGraphicsOpacityEffect import QGraphicsOpacityEffect
from QGraphicsPathItem import QGraphicsPathItem
from QGraphicsPixmapItem import QGraphicsPixmapItem
from QGraphicsPolygonItem import QGraphicsPolygonItem
from QGraphicsWidget import QGraphicsWidget
from QGraphicsProxyWidget import QGraphicsProxyWidget
from QGraphicsRectItem import QGraphicsRectItem
from QGraphicsTransform import QGraphicsTransform
from QGraphicsRotation import QGraphicsRotation
from QGraphicsScale import QGraphicsScale
from QGraphicsScene import QGraphicsScene
from QGraphicsSceneEvent import QGraphicsSceneEvent
from QGraphicsSceneContextMenuEvent import QGraphicsSceneContextMenuEvent
from QGraphicsSceneDragDropEvent import QGraphicsSceneDragDropEvent
from QGraphicsSceneHelpEvent import QGraphicsSceneHelpEvent
from QGraphicsSceneHoverEvent import QGraphicsSceneHoverEvent
from QGraphicsSceneMouseEvent import QGraphicsSceneMouseEvent
from QGraphicsSceneMoveEvent import QGraphicsSceneMoveEvent
from QGraphicsSceneResizeEvent import QGraphicsSceneResizeEvent
from QGraphicsSceneWheelEvent import QGraphicsSceneWheelEvent
from QGraphicsSimpleTextItem import QGraphicsSimpleTextItem
from QGraphicsTextItem import QGraphicsTextItem
from QGraphicsView import QGraphicsView
from QGridLayout import QGridLayout
from QGroupBox import QGroupBox
from QHBoxLayout import QHBoxLayout
from QHeaderView import QHeaderView
from QHelpEvent import QHelpEvent
from QHideEvent import QHideEvent
from QHoverEvent import QHoverEvent
from QIcon import QIcon
from QIconDragEvent import QIconDragEvent
from QIconEngine import QIconEngine
from QIconEngineV2 import QIconEngineV2
from QImage import QImage
from QImageIOHandler import QImageIOHandler
from QImageReader import QImageReader
from QImageWriter import QImageWriter
from QInputContext import QInputContext
from QInputContextFactory import QInputContextFactory
from QInputDialog import QInputDialog
from QInputMethodEvent import QInputMethodEvent
from QIntValidator import QIntValidator
from QItemDelegate import QItemDelegate
from QItemEditorCreatorBase import QItemEditorCreatorBase
from QItemEditorFactory import QItemEditorFactory
from QItemSelection import QItemSelection
from QItemSelectionModel import QItemSelectionModel
from QItemSelectionRange import QItemSelectionRange
from QKeyEvent import QKeyEvent
from QKeyEventTransition import QKeyEventTransition
from QKeySequence import QKeySequence
from QLabel import QLabel
from QLCDNumber import QLCDNumber
from QLinearGradient import QLinearGradient
from QLineEdit import QLineEdit
from QListView import QListView
from QListWidget import QListWidget
from QListWidgetItem import QListWidgetItem
from QMainWindow import QMainWindow
from QMatrix import QMatrix
from QMatrix2x2 import QMatrix2x2
from QMatrix2x3 import QMatrix2x3
from QMatrix2x4 import QMatrix2x4
from QMatrix3x2 import QMatrix3x2
from QMatrix3x3 import QMatrix3x3
from QMatrix3x4 import QMatrix3x4
from QMatrix4x2 import QMatrix4x2
from QMatrix4x3 import QMatrix4x3
from QMatrix4x4 import QMatrix4x4
from QMdiArea import QMdiArea
from QMdiSubWindow import QMdiSubWindow
from QMenu import QMenu
from QMenuBar import QMenuBar
from QMessageBox import QMessageBox
from QMouseEvent import QMouseEvent
from QMouseEventTransition import QMouseEventTransition
from QMoveEvent import QMoveEvent
from QMovie import QMovie
from QPageSetupDialog import QPageSetupDialog
from QPaintEngine import QPaintEngine
from QPaintEngineState import QPaintEngineState
from QPainter import QPainter
from QPainterPath import QPainterPath
from QPainterPathStroker import QPainterPathStroker
from QPaintEvent import QPaintEvent
from QPalette import QPalette
from QPanGesture import QPanGesture
from QPen import QPen
from QPicture import QPicture
from QPictureIO import QPictureIO
from QPinchGesture import QPinchGesture
from QPixmapCache import QPixmapCache
from QPlainTextDocumentLayout import QPlainTextDocumentLayout
from QPlainTextEdit import QPlainTextEdit
from QPlastiqueStyle import QPlastiqueStyle
from QPolygon import QPolygon
from QPolygonF import QPolygonF
from QPrintDialog import QPrintDialog
from QPrintEngine import QPrintEngine
from QPrinter import QPrinter
from QPrinterInfo import QPrinterInfo
from QPrintPreviewDialog import QPrintPreviewDialog
from QPrintPreviewWidget import QPrintPreviewWidget
from QProgressBar import QProgressBar
from QProgressDialog import QProgressDialog
from QProxyModel import QProxyModel
from QTextObjectInterface import QTextObjectInterface
from QPyTextObject import QPyTextObject
from QQuaternion import QQuaternion
from QRadialGradient import QRadialGradient
from QRadioButton import QRadioButton
from QRegExpValidator import QRegExpValidator
from QRegion import QRegion
from QResizeEvent import QResizeEvent
from QRubberBand import QRubberBand
from QScrollArea import QScrollArea
from QScrollBar import QScrollBar
from QSessionManager import QSessionManager
from QShortcut import QShortcut
from QShortcutEvent import QShortcutEvent
from QShowEvent import QShowEvent
from QSizeGrip import QSizeGrip
from QSizePolicy import QSizePolicy
from QSlider import QSlider
from QSortFilterProxyModel import QSortFilterProxyModel
from QSound import QSound
from QSpacerItem import QSpacerItem
from QSpinBox import QSpinBox
from QSplashScreen import QSplashScreen
from QSplitter import QSplitter
from QSplitterHandle import QSplitterHandle
from QStackedLayout import QStackedLayout
from QStackedWidget import QStackedWidget
from QStandardItem import QStandardItem
from QStandardItemModel import QStandardItemModel
from QStatusBar import QStatusBar
from QStatusTipEvent import QStatusTipEvent
from QStringListModel import QStringListModel
from QStyledItemDelegate import QStyledItemDelegate
from QStyleFactory import QStyleFactory
from QStyleHintReturn import QStyleHintReturn
from QStyleHintReturnMask import QStyleHintReturnMask
from QStyleHintReturnVariant import QStyleHintReturnVariant
from QStyleOption import QStyleOption
from QStyleOptionButton import QStyleOptionButton
from QStyleOptionComplex import QStyleOptionComplex
from QStyleOptionComboBox import QStyleOptionComboBox
from QStyleOptionDockWidget import QStyleOptionDockWidget
from QStyleOptionDockWidgetV2 import QStyleOptionDockWidgetV2
from QStyleOptionFocusRect import QStyleOptionFocusRect
from QStyleOptionFrame import QStyleOptionFrame
from QStyleOptionFrameV2 import QStyleOptionFrameV2
from QStyleOptionFrameV3 import QStyleOptionFrameV3
from QStyleOptionGraphicsItem import QStyleOptionGraphicsItem
from QStyleOptionGroupBox import QStyleOptionGroupBox
from QStyleOptionHeader import QStyleOptionHeader
from QStyleOptionMenuItem import QStyleOptionMenuItem
from QStyleOptionProgressBar import QStyleOptionProgressBar
from QStyleOptionProgressBarV2 import QStyleOptionProgressBarV2
from QStyleOptionRubberBand import QStyleOptionRubberBand
from QStyleOptionSizeGrip import QStyleOptionSizeGrip
from QStyleOptionSlider import QStyleOptionSlider
from QStyleOptionSpinBox import QStyleOptionSpinBox
from QStyleOptionTab import QStyleOptionTab
from QStyleOptionTabBarBase import QStyleOptionTabBarBase
from QStyleOptionTabBarBaseV2 import QStyleOptionTabBarBaseV2
from QStyleOptionTabV2 import QStyleOptionTabV2
from QStyleOptionTabV3 import QStyleOptionTabV3
from QStyleOptionTabWidgetFrame import QStyleOptionTabWidgetFrame
from QStyleOptionTitleBar import QStyleOptionTitleBar
from QStyleOptionToolBar import QStyleOptionToolBar
from QStyleOptionToolBox import QStyleOptionToolBox
from QStyleOptionToolBoxV2 import QStyleOptionToolBoxV2
from QStyleOptionToolButton import QStyleOptionToolButton
from QStyleOptionViewItem import QStyleOptionViewItem
from QStyleOptionViewItemV2 import QStyleOptionViewItemV2
from QStyleOptionViewItemV3 import QStyleOptionViewItemV3
from QStyleOptionViewItemV4 import QStyleOptionViewItemV4
from QStylePainter import QStylePainter
from QSwipeGesture import QSwipeGesture
from QSyntaxHighlighter import QSyntaxHighlighter
from QSystemTrayIcon import QSystemTrayIcon
from QTabBar import QTabBar
from QTabletEvent import QTabletEvent
from QTableView import QTableView
from QTableWidget import QTableWidget
from QTableWidgetItem import QTableWidgetItem
from QTableWidgetSelectionRange import QTableWidgetSelectionRange
from QTabWidget import QTabWidget
from QTapAndHoldGesture import QTapAndHoldGesture
from QTapGesture import QTapGesture
from QTextBlock import QTextBlock
from QTextFormat import QTextFormat
from QTextBlockFormat import QTextBlockFormat
from QTextObject import QTextObject
from QTextBlockGroup import QTextBlockGroup
from QTextBlockUserData import QTextBlockUserData
from QTextEdit import QTextEdit
from QTextBrowser import QTextBrowser
from QTextCharFormat import QTextCharFormat
from QTextCursor import QTextCursor
from QTextDocument import QTextDocument
from QTextDocumentFragment import QTextDocumentFragment
from QTextFragment import QTextFragment
from QTextFrame import QTextFrame
from QTextFrameFormat import QTextFrameFormat
from QTextImageFormat import QTextImageFormat
from QTextInlineObject import QTextInlineObject
from QTextItem import QTextItem
from QTextLayout import QTextLayout
from QTextLength import QTextLength
from QTextLine import QTextLine
from QTextList import QTextList
from QTextListFormat import QTextListFormat
from QTextOption import QTextOption
from QTextTable import QTextTable
from QTextTableCell import QTextTableCell
from QTextTableCellFormat import QTextTableCellFormat
from QTextTableFormat import QTextTableFormat
from QTileRules import QTileRules
from QTimeEdit import QTimeEdit
from QToolBar import QToolBar
from QToolBarChangeEvent import QToolBarChangeEvent
from QToolBox import QToolBox
from QToolButton import QToolButton
from QToolTip import QToolTip
from QTouchEvent import QTouchEvent
from QTransform import QTransform
from QTreeView import QTreeView
from QTreeWidget import QTreeWidget
from QTreeWidgetItem import QTreeWidgetItem
from QTreeWidgetItemIterator import QTreeWidgetItemIterator
from QUndoCommand import QUndoCommand
from QUndoGroup import QUndoGroup
from QUndoStack import QUndoStack
from QUndoView import QUndoView
from QVBoxLayout import QVBoxLayout
from QVector2D import QVector2D
from QVector3D import QVector3D
from QVector4D import QVector4D
from QWhatsThis import QWhatsThis
from QWhatsThisClickedEvent import QWhatsThisClickedEvent
from QWheelEvent import QWheelEvent
from QWidgetAction import QWidgetAction
from QWidgetItem import QWidgetItem
from QWindowStateChangeEvent import QWindowStateChangeEvent
from QWizard import QWizard
from QWizardPage import QWizardPage
from QWorkspace import QWorkspace
| 39.362025 | 73 | 0.89452 |
# File: custom/icds_reports/reports/enrolled_children.py (dannyroberts/commcare-hq, BSD-3-Clause)
from __future__ import absolute_import, division
from __future__ import unicode_literals
from collections import OrderedDict, defaultdict
from datetime import datetime
import six
from django.db.models.aggregates import Sum
from django.utils.translation import ugettext as _
from corehq.util.quickcache import quickcache
from custom.icds_reports.const import LocationTypes, ChartColors, MapColors
from custom.icds_reports.messages import percent_children_enrolled_help_text
from custom.icds_reports.models import AggChildHealthMonthly
from custom.icds_reports.utils import apply_exclude, match_age, chosen_filters_to_labels, \
indian_formatted_number, get_child_locations
@quickcache(['domain', 'config', 'loc_level', 'show_test'], timeout=30 * 60)
def get_enrolled_children_data_map(domain, config, loc_level, show_test=False):
def get_data_for(filters):
filters['month'] = datetime(*filters['month'])
queryset = AggChildHealthMonthly.objects.filter(
**filters
).values(
'%s_name' % loc_level, '%s_map_location_name' % loc_level
).annotate(
valid=Sum('valid_in_month'),
all=Sum('valid_all_registered_in_month')
).order_by('%s_name' % loc_level, '%s_map_location_name' % loc_level)
if not show_test:
queryset = apply_exclude(domain, queryset)
return queryset
data_for_map = defaultdict(lambda: {
'valid': 0,
'all': 0,
'original_name': [],
'fillKey': 'Children'
})
average = []
total_valid = 0
total = 0
for row in get_data_for(config):
valid = row['valid'] or 0
name = row['%s_name' % loc_level]
all_children = row['all'] or 0
on_map_name = row['%s_map_location_name' % loc_level] or name
average.append(valid)
total_valid += valid
total += all_children
data_for_map[on_map_name]['valid'] += valid
data_for_map[on_map_name]['all'] += all_children
data_for_map[on_map_name]['original_name'].append(name)
fills = OrderedDict()
fills.update({'Children': MapColors.BLUE})
fills.update({'defaultFill': MapColors.GREY})
gender_ignored, age_label, chosen_filters = chosen_filters_to_labels(config, default_interval='0 - 6 years')
return {
"slug": "enrolled_children",
"label": "",
"fills": fills,
"rightLegend": {
"average": '%.2f' % (total_valid * 100 / float(total or 1)),
"info": percent_children_enrolled_help_text(age_label=age_label),
"extended_info": [
{
'indicator':
'Number of children{} who are enrolled for Anganwadi Services:'
.format(chosen_filters),
'value': indian_formatted_number(total_valid)
},
{
'indicator': (
'Total number of children{} who are registered: '
.format(chosen_filters)
),
'value': indian_formatted_number(total)
},
{
'indicator': (
'Percentage of registered children{} who are enrolled for Anganwadi Services:'
.format(chosen_filters)
),
'value': '%.2f%%' % (total_valid * 100 / float(total or 1))
}
]
},
"data": dict(data_for_map),
    }


@quickcache(['domain', 'config', 'loc_level', 'show_test'], timeout=30 * 60)
def get_enrolled_children_data_chart(domain, config, loc_level, show_test=False):
config['month'] = datetime(*config['month'])
chart_data = AggChildHealthMonthly.objects.filter(
**config
).values(
'month', 'age_tranche', '%s_name' % loc_level
).annotate(
valid=Sum('valid_in_month'),
).order_by('month')
if not show_test:
chart_data = apply_exclude(domain, chart_data)
chart = OrderedDict()
chart.update({'0-1 month': 0})
chart.update({'1-6 months': 0})
chart.update({'6-12 months': 0})
chart.update({'1-3 years': 0})
chart.update({'3-6 years': 0})
all = 0
best_worst = {}
for row in chart_data:
location = row['%s_name' % loc_level]
if not row['age_tranche']:
continue
age = int(row['age_tranche'])
valid = row['valid']
all += valid
chart[match_age(age)] += valid
if location in best_worst:
best_worst[location] += valid
else:
best_worst[location] = valid
return {
"chart_data": [
{
"values": [
{
'x': key,
'y': value,
'all': all
} for key, value in six.iteritems(chart)
],
"key": "Children (0-6 years) who are enrolled",
"strokeWidth": 2,
"classed": "dashed",
"color": ChartColors.BLUE
}
],
"location_type": loc_level.title() if loc_level != LocationTypes.SUPERVISOR else 'Sector'
    }


@quickcache(['domain', 'config', 'loc_level', 'location_id', 'show_test'], timeout=30 * 60)
def get_enrolled_children_sector_data(domain, config, loc_level, location_id, show_test=False):
group_by = ['%s_name' % loc_level]
config['month'] = datetime(*config['month'])
data = AggChildHealthMonthly.objects.filter(
**config
).values(
*group_by
).annotate(
valid=Sum('valid_in_month'),
all=Sum('valid_all_registered_in_month')
).order_by('%s_name' % loc_level)
if not show_test:
data = apply_exclude(domain, data)
chart_data = {
'blue': []
}
tooltips_data = defaultdict(lambda: {
'valid': 0,
'all': 0
})
loc_children = get_child_locations(domain, location_id, show_test)
result_set = set()
for row in data:
valid = row['valid'] or 0
all_children = row['all'] or 0
name = row['%s_name' % loc_level]
result_set.add(name)
row_values = {
'valid': valid,
'all': all_children
}
for prop, value in six.iteritems(row_values):
tooltips_data[name][prop] += value
chart_data['blue'].append([
name, valid
])
for sql_location in loc_children:
if sql_location.name not in result_set:
chart_data['blue'].append([sql_location.name, 0])
chart_data['blue'] = sorted(chart_data['blue'])
return {
"tooltips_data": dict(tooltips_data),
"format": "number",
"info": percent_children_enrolled_help_text(),
"chart_data": [
{
"values": chart_data['blue'],
"key": "",
"strokeWidth": 2,
"classed": "dashed",
"color": MapColors.BLUE
}
]
}
| 31.653333 | 112 | 0.565993 |
#!/usr/bin/env python
# File: examples/preprocessing/plot_all_scaling.py (MarcinKonowalczyk/scikit-learn, BSD-3-Clause)
# -*- coding: utf-8 -*-
"""
=============================================================
Compare the effect of different scalers on data with outliers
=============================================================
Feature 0 (median income in a block) and feature 5 (average house occupancy) of
the :ref:`california_housing_dataset` have very
different scales and contain some very large outliers. These two
characteristics lead to difficulties to visualize the data and, more
importantly, they can degrade the predictive performance of many machine
learning algorithms. Unscaled data can also slow down or even prevent the
convergence of many gradient-based estimators.
Indeed many estimators are designed with the assumption that each feature takes
values close to zero or more importantly that all features vary on comparable
scales. In particular, metric-based and gradient-based estimators often assume
approximately standardized data (centered features with unit variances). A
notable exception are decision tree-based estimators that are robust to
arbitrary scaling of the data.
This example uses different scalers, transformers, and normalizers to bring the
data within a pre-defined range.
Scalers are linear (or more precisely affine) transformers and differ from each
other in the way they estimate the parameters used to shift and scale each
feature.
:class:`~sklearn.preprocessing.QuantileTransformer` provides non-linear
transformations in which distances
between marginal outliers and inliers are shrunk.
:class:`~sklearn.preprocessing.PowerTransformer` provides
non-linear transformations in which data is mapped to a normal distribution to
stabilize variance and minimize skewness.
Unlike the previous transformations, normalization refers to a per sample
transformation instead of a per feature transformation.
The following code is a bit verbose, feel free to jump directly to the analysis
of the results_.
"""
# Author: Raghav RV <rvraghav93@gmail.com>
# Guillaume Lemaitre <g.lemaitre58@gmail.com>
# Thomas Unterthiner
# License: BSD 3 clause
import numpy as np
import matplotlib as mpl
from matplotlib import pyplot as plt
from matplotlib import cm
from sklearn.preprocessing import MinMaxScaler
from sklearn.preprocessing import minmax_scale
from sklearn.preprocessing import MaxAbsScaler
from sklearn.preprocessing import StandardScaler
from sklearn.preprocessing import RobustScaler
from sklearn.preprocessing import Normalizer
from sklearn.preprocessing import QuantileTransformer
from sklearn.preprocessing import PowerTransformer
from sklearn.datasets import fetch_california_housing
print(__doc__)
dataset = fetch_california_housing()
X_full, y_full = dataset.data, dataset.target
feature_names = dataset.feature_names
feature_mapping = {
"MedInc": "Median income in block",
"HousAge": "Median house age in block",
"AveRooms": "Average number of rooms",
"AveBedrms": "Average number of bedrooms",
"Population": "Block population",
"AveOccup": "Average house occupancy",
"Latitude": "House block latitude",
"Longitude": "House block longitude",
}
# Take only 2 features to make visualization easier
# Feature MedInc has a long tail distribution.
# Feature AveOccup has a few but very large outliers.
features = ["MedInc", "AveOccup"]
features_idx = [feature_names.index(feature) for feature in features]
X = X_full[:, features_idx]
distributions = [
("Unscaled data", X),
("Data after standard scaling", StandardScaler().fit_transform(X)),
("Data after min-max scaling", MinMaxScaler().fit_transform(X)),
("Data after max-abs scaling", MaxAbsScaler().fit_transform(X)),
(
"Data after robust scaling",
RobustScaler(quantile_range=(25, 75)).fit_transform(X),
),
(
"Data after power transformation (Yeo-Johnson)",
PowerTransformer(method="yeo-johnson").fit_transform(X),
),
(
"Data after power transformation (Box-Cox)",
PowerTransformer(method="box-cox").fit_transform(X),
),
(
"Data after quantile transformation (uniform pdf)",
QuantileTransformer(output_distribution="uniform").fit_transform(X),
),
(
"Data after quantile transformation (gaussian pdf)",
QuantileTransformer(output_distribution="normal").fit_transform(X),
),
("Data after sample-wise L2 normalizing", Normalizer().fit_transform(X)),
]
# scale the output between 0 and 1 for the colorbar
y = minmax_scale(y_full)
# plasma does not exist in matplotlib < 1.5
cmap = getattr(cm, "plasma_r", cm.hot_r)
def create_axes(title, figsize=(16, 6)):
fig = plt.figure(figsize=figsize)
fig.suptitle(title)
# define the axis for the first plot
left, width = 0.1, 0.22
bottom, height = 0.1, 0.7
bottom_h = height + 0.15
left_h = left + width + 0.02
rect_scatter = [left, bottom, width, height]
rect_histx = [left, bottom_h, width, 0.1]
rect_histy = [left_h, bottom, 0.05, height]
ax_scatter = plt.axes(rect_scatter)
ax_histx = plt.axes(rect_histx)
ax_histy = plt.axes(rect_histy)
# define the axis for the zoomed-in plot
left = width + left + 0.2
left_h = left + width + 0.02
rect_scatter = [left, bottom, width, height]
rect_histx = [left, bottom_h, width, 0.1]
rect_histy = [left_h, bottom, 0.05, height]
ax_scatter_zoom = plt.axes(rect_scatter)
ax_histx_zoom = plt.axes(rect_histx)
ax_histy_zoom = plt.axes(rect_histy)
# define the axis for the colorbar
left, width = width + left + 0.13, 0.01
rect_colorbar = [left, bottom, width, height]
ax_colorbar = plt.axes(rect_colorbar)
return (
(ax_scatter, ax_histy, ax_histx),
(ax_scatter_zoom, ax_histy_zoom, ax_histx_zoom),
ax_colorbar,
    )


def plot_distribution(axes, X, y, hist_nbins=50, title="", x0_label="", x1_label=""):
ax, hist_X1, hist_X0 = axes
ax.set_title(title)
ax.set_xlabel(x0_label)
ax.set_ylabel(x1_label)
# The scatter plot
colors = cmap(y)
ax.scatter(X[:, 0], X[:, 1], alpha=0.5, marker="o", s=5, lw=0, c=colors)
# Removing the top and the right spine for aesthetics
# make nice axis layout
ax.spines["top"].set_visible(False)
ax.spines["right"].set_visible(False)
ax.get_xaxis().tick_bottom()
ax.get_yaxis().tick_left()
ax.spines["left"].set_position(("outward", 10))
ax.spines["bottom"].set_position(("outward", 10))
# Histogram for axis X1 (feature 5)
hist_X1.set_ylim(ax.get_ylim())
hist_X1.hist(
X[:, 1], bins=hist_nbins, orientation="horizontal", color="grey", ec="grey"
)
hist_X1.axis("off")
# Histogram for axis X0 (feature 0)
hist_X0.set_xlim(ax.get_xlim())
hist_X0.hist(
X[:, 0], bins=hist_nbins, orientation="vertical", color="grey", ec="grey"
)
hist_X0.axis("off")
# %%
# Two plots will be shown for each scaler/normalizer/transformer. The left
# figure will show a scatter plot of the full data set while the right figure
# will exclude the extreme values considering only 99 % of the data set,
# excluding marginal outliers. In addition, the marginal distributions for each
# feature will be shown on the sides of the scatter plot.
def make_plot(item_idx):
title, X = distributions[item_idx]
ax_zoom_out, ax_zoom_in, ax_colorbar = create_axes(title)
axarr = (ax_zoom_out, ax_zoom_in)
plot_distribution(
axarr[0],
X,
y,
hist_nbins=200,
x0_label=feature_mapping[features[0]],
x1_label=feature_mapping[features[1]],
title="Full data",
)
# zoom-in
zoom_in_percentile_range = (0, 99)
cutoffs_X0 = np.percentile(X[:, 0], zoom_in_percentile_range)
cutoffs_X1 = np.percentile(X[:, 1], zoom_in_percentile_range)
non_outliers_mask = np.all(X > [cutoffs_X0[0], cutoffs_X1[0]], axis=1) & np.all(
X < [cutoffs_X0[1], cutoffs_X1[1]], axis=1
)
plot_distribution(
axarr[1],
X[non_outliers_mask],
y[non_outliers_mask],
hist_nbins=50,
x0_label=feature_mapping[features[0]],
x1_label=feature_mapping[features[1]],
title="Zoom-in",
)
norm = mpl.colors.Normalize(y_full.min(), y_full.max())
mpl.colorbar.ColorbarBase(
ax_colorbar,
cmap=cmap,
norm=norm,
orientation="vertical",
label="Color mapping for values of y",
)
# %%
# .. _results:
#
# Original data
# -------------
#
# Each transformation is plotted showing two transformed features, with the
# left plot showing the entire dataset, and the right zoomed-in to show the
# dataset without the marginal outliers. A large majority of the samples are
# compacted to a specific range, [0, 10] for the median income and [0, 6] for
# the average house occupancy. Note that there are some marginal outliers (some
# blocks have average occupancy of more than 1200). Therefore, a specific
# pre-processing can be very beneficial depending of the application. In the
# following, we present some insights and behaviors of those pre-processing
# methods in the presence of marginal outliers.
make_plot(0)
# %%
# StandardScaler
# --------------
#
# :class:`~sklearn.preprocessing.StandardScaler` removes the mean and scales
# the data to unit variance. The scaling shrinks the range of the feature
# values as shown in the left figure below.
# However, the outliers have an influence when computing the empirical mean and
# standard deviation. Note in particular that because the outliers on each
# feature have different magnitudes, the spread of the transformed data on
# each feature is very different: most of the data lie in the [-2, 4] range for
# the transformed median income feature while the same data is squeezed in the
# smaller [-0.2, 0.2] range for the transformed average house occupancy.
#
# :class:`~sklearn.preprocessing.StandardScaler` therefore cannot guarantee
# balanced feature scales in the
# presence of outliers.
make_plot(1)
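The sensitivity described above is easy to reproduce on a toy feature (illustrative values, not the housing data): a single outlier inflates the estimated mean and standard deviation, squeezing the inliers into a sliver.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

# Nine inliers near 1.0 plus one extreme outlier.
X = np.array([[1.0], [1.1], [0.9], [1.2], [0.8],
              [1.0], [1.1], [0.9], [1.0], [1000.0]])

scaled = StandardScaler().fit_transform(X)

# The outlier inflates the mean (~100.9) and the standard deviation
# (~300), so the nine inliers collapse into a narrow band near -0.33
# while the outlier sits near +3 (with the biased std used by
# StandardScaler, |z| is bounded by sqrt(n - 1), roughly 3 for n = 10).
inlier_spread = scaled[:9].max() - scaled[:9].min()
```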
# %%
# MinMaxScaler
# ------------
#
# :class:`~sklearn.preprocessing.MinMaxScaler` rescales the data set such that
# all feature values are in
# the range [0, 1] as shown in the right panel below. However, this scaling
# compresses all inliers into the narrow range [0, 0.005] for the transformed
# average house occupancy.
#
# Both :class:`~sklearn.preprocessing.StandardScaler` and
# :class:`~sklearn.preprocessing.MinMaxScaler` are very sensitive to the
# presence of outliers.
make_plot(2)
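As a concrete (made-up) illustration of that compression: min-max scaling computes (x - min) / (max - min) per feature, so a single large value pins the denominator.

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler

X = np.array([[1.0], [2.0], [3.0], [4.0], [1000.0]])
scaled = MinMaxScaler().fit_transform(X)

# (x - min) / (max - min): the outlier becomes 1.0 while the four
# inliers are squeezed into [0, 0.004].
```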
# %%
# MaxAbsScaler
# ------------
#
# :class:`~sklearn.preprocessing.MaxAbsScaler` is similar to
# :class:`~sklearn.preprocessing.MinMaxScaler` except that the
# values are mapped to the range [-1, 1] by dividing through each feature's
# maximum absolute value, so no shifting is applied. On positive-only data,
# both scalers behave similarly.
# :class:`~sklearn.preprocessing.MaxAbsScaler` therefore also suffers from
# the presence of large outliers.
make_plot(3)
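A minimal sketch of the scaling rule on toy values: each feature is divided by its maximum absolute value, so signs are preserved and no shifting is applied.

```python
import numpy as np
from sklearn.preprocessing import MaxAbsScaler

X = np.array([[-2.0], [1.0], [4.0]])
scaled = MaxAbsScaler().fit_transform(X)
# Division by max(|x|) = 4 keeps the sign: [-0.5, 0.25, 1.0].
```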
# %%
# RobustScaler
# ------------
#
# Unlike the previous scalers, the centering and scaling statistics of
# :class:`~sklearn.preprocessing.RobustScaler`
# is based on percentiles and are therefore not influenced by a few
# number of very large marginal outliers. Consequently, the resulting range of
# the transformed feature values is larger than for the previous scalers and,
# more importantly, are approximately similar: for both features most of the
# transformed values lie in a [-2, 3] range as seen in the zoomed-in figure.
# Note that the outliers themselves are still present in the transformed data.
# If a separate outlier clipping is desirable, a non-linear transformation is
# required (see below).
make_plot(4)
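Because centering uses the median and scaling uses the interquartile range, the statistics below barely react to a huge value (toy data for illustration); the outlier is shifted and scaled but not clipped.

```python
import numpy as np
from sklearn.preprocessing import RobustScaler

X = np.array([[1.0], [2.0], [3.0], [4.0], [5.0],
              [6.0], [7.0], [8.0], [9.0], [1000.0]])

scaled = RobustScaler(quantile_range=(25, 75)).fit_transform(X)

# Median = 5.5 and IQR = 4.5 are determined by the inliers, so the
# inliers keep a sensible spread around zero while the outlier remains
# clearly an outlier after transformation.
```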
# %%
# PowerTransformer
# ----------------
#
# :class:`~sklearn.preprocessing.PowerTransformer` applies a power
# transformation to each feature to make the data more Gaussian-like in order
# to stabilize variance and minimize skewness. Currently the Yeo-Johnson
# and Box-Cox transforms are supported and the optimal
# scaling factor is determined via maximum likelihood estimation in both
# methods. By default, :class:`~sklearn.preprocessing.PowerTransformer` applies
# zero-mean, unit variance normalization. Note that
# Box-Cox can only be applied to strictly positive data. Income and average
# house occupancy happen to be strictly positive, but if negative values are
# present, the Yeo-Johnson transform is preferred.
make_plot(5)
make_plot(6)
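A small sketch of both constraints mentioned above, on toy data: Box-Cox rejects non-positive input while Yeo-Johnson accepts it, and both standardize their output by default.

```python
import numpy as np
from sklearn.preprocessing import PowerTransformer

rng = np.random.default_rng(0)
X_pos = rng.lognormal(size=(1000, 1))   # strictly positive, right-skewed

# Box-Cox works on strictly positive data and, with the default
# standardize=True, returns zero-mean / unit-variance output.
X_bc = PowerTransformer(method="box-cox").fit_transform(X_pos)

# The same transformer raises on data containing non-positive values ...
box_cox_rejects_negatives = False
try:
    PowerTransformer(method="box-cox").fit(np.array([[-1.0], [2.0]]))
except ValueError:
    box_cox_rejects_negatives = True

# ... while Yeo-Johnson handles them.
X_yj = PowerTransformer(method="yeo-johnson").fit_transform(
    np.array([[-1.0], [0.0], [1.0], [2.0]]))
```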
# %%
# QuantileTransformer (uniform output)
# ------------------------------------
#
# :class:`~sklearn.preprocessing.QuantileTransformer` applies a non-linear
# transformation such that the
# probability density function of each feature will be mapped to a uniform
# or Gaussian distribution. In this case, all the data, including outliers,
# will be mapped to a uniform distribution with the range [0, 1], making
# outliers indistinguishable from inliers.
#
# :class:`~sklearn.preprocessing.RobustScaler` and
# :class:`~sklearn.preprocessing.QuantileTransformer` are robust to outliers in
# the sense that adding or removing outliers in the training set will yield
# approximately the same transformation. But contrary to
# :class:`~sklearn.preprocessing.RobustScaler`,
# :class:`~sklearn.preprocessing.QuantileTransformer` will also automatically
# collapse any outliers by setting them to the a priori defined range boundaries
# (0 and 1). This can result in saturation artifacts for extreme values.
make_plot(7)
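The clipping behavior can be sketched on synthetic data: after fitting, every value is mapped to its empirical quantile in [0, 1], so an extreme point becomes indistinguishable from a merely large one.

```python
import numpy as np
from sklearn.preprocessing import QuantileTransformer

rng = np.random.default_rng(0)
X = np.append(rng.normal(size=999), 1_000_000.0).reshape(-1, 1)

qt = QuantileTransformer(n_quantiles=100, output_distribution="uniform")
X_q = qt.fit_transform(X)

# All outputs lie in [0, 1]; the huge outlier is simply the top quantile.
```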
# %%
# QuantileTransformer (Gaussian output)
# -------------------------------------
#
# To map to a Gaussian distribution, set the parameter
# ``output_distribution='normal'``.
make_plot(8)
# %%
# Normalizer
# ----------
#
# The :class:`~sklearn.preprocessing.Normalizer` rescales the vector for each
# sample to have unit norm,
# independently of the distribution of the samples. It can be seen on both
# figures below where all samples are mapped onto the unit circle. In our
# example the two selected features have only positive values; therefore the
# transformed data only lie in the positive quadrant. This would not be the
# case if some original features had a mix of positive and negative values.
make_plot(9)
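A minimal check of the unit-norm property (synthetic data; assumes scikit-learn is installed):

```python
import numpy as np
from sklearn.preprocessing import Normalizer

X = np.array([[3.0, 4.0], [1.0, 1.0], [0.5, 2.0]])
Xt = Normalizer(norm="l2").fit_transform(X)

# Each sample is rescaled independently to unit Euclidean norm,
# i.e. mapped onto the unit circle.
print(np.linalg.norm(Xt, axis=1))  # → [1. 1. 1.]
```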
plt.show()
| 35.485075 | 85 | 0.714686 |
45b8845d7e791a4b7b55ec72aafbef5bfd9ca9f2 | 5,301 | py | Python | implementations/DDPM/utils.py | STomoya/animeface | 37b3cd26097d7874559d4c152e41e5712b7a1a42 | [
"MIT"
] | 61 | 2020-06-06T08:25:09.000Z | 2022-03-28T13:30:10.000Z | implementations/DDPM/utils.py | OrigamiXx/animeface | 8724006df99ba7ef369e837d8294350ea733611b | [
"MIT"
] | 13 | 2020-07-02T02:41:14.000Z | 2021-05-09T14:24:58.000Z | implementations/DDPM/utils.py | OrigamiXx/animeface | 8724006df99ba7ef369e837d8294350ea733611b | [
"MIT"
] | 8 | 2020-10-03T18:51:16.000Z | 2022-02-05T18:18:01.000Z |
import torch
import torch.nn as nn
import torch.optim as optim
from torch.cuda.amp import autocast, GradScaler
from torchvision.utils import save_image
from dataset import AnimeFace, DanbooruPortrait
from utils import Status, save_args, add_args
from nnutils import get_device, update_ema, freeze
from .model import UNet, GaussianDiffusion
def train(
max_iters, dataset, timesteps, test_shape,
denoise_model, ema_model, gaussian_diffusion,
optimizer,
device, amp, save, sample=1000
):
status = Status(max_iters)
scaler = GradScaler() if amp else None
loss_fn = nn.MSELoss()
const_z = torch.randn(test_shape, device=device)
while status.batches_done < max_iters:
for real in dataset:
optimizer.zero_grad()
real = real.to(device)
t = torch.randint(0, timesteps, (real.size(0), ), device=device)
with autocast(amp):
# add noise
# q(xt|x0)
# √α・x0 + √1-α・ε; ε~N(0,I)
x_noisy, noise = gaussian_diffusion.q_sample(real, t)
# denoise
# εθ(√α・x0 + √1-α・ε)
recon = denoise_model(x_noisy, t)
# pθ(xt-1|xt)
# ||ε - εθ(√α・x0 + √1-α・ε)||
loss = loss_fn(recon, noise)
if scaler is not None:
scaler.scale(loss).backward()
scaler.step(optimizer)
else:
loss.backward()
optimizer.step()
update_ema(denoise_model, ema_model)
if status.batches_done % sample == 0 and status.batches_done != 0:
images = gaussian_diffusion.p_sample_loop(
ema_model, test_shape, const_z)
save_image(
images, f'implementations/DDPM/result/{status.batches_done}.jpg',
normalize=True, value_range=(-1, 1), nrow=2*4)
if status.batches_done % save == 0:
torch.save(
dict(denoise_model=denoise_model.state_dict(), gaussian_diffusion=gaussian_diffusion.state_dict()),
f'implementations/DDPM/result/DDPM_{status.batches_done}.pt')
status.update(loss=loss.item() if not torch.any(loss.isnan()) else 0.)
if scaler is not None:
scaler.update()
if status.batches_done == max_iters:
break
status.plot_loss()
def main(parser):
parser = add_args(parser,
dict(
            num_test = [16, 'number of test samples'],
image_channels = [3, 'image channels'],
# model
bottom = [16, 'bottom width'],
            channels = [32, 'channel width multiplier'],
attn_resls = [[16], 'resolution to apply attention'],
attn_head = [8, 'heads for MHA'],
time_affine = [False, 'adaptive normalization'],
dropout = [0., 'dropout'],
num_res = [1, 'number of residual blocks in one resolution'],
norm_name = ['gn', 'normalization layer name'],
act_name = ['swish', 'activation layer name'],
# diffusion
timesteps = [1000, 'number of time steps in forward/backward diffusion process'],
# optimization
lr = [2e-5, 'learning rate'],
betas = [[0.9, 0.999], 'betas'],
            sample = [10000, 'sample every N iterations. inference takes time, hence a separate arg for testing.']
))
args = parser.parse_args()
save_args(args)
device = get_device(not args.disable_gpu)
amp = not args.disable_gpu and not args.disable_amp
# dataset
if args.dataset == 'animeface':
dataset = AnimeFace.asloader(
args.batch_size, (args.image_size, args.min_year),
pin_memory=not args.disable_gpu)
elif args.dataset == 'danbooru':
dataset = DanbooruPortrait.asloader(
args.batch_size, (args.image_size, args.min_year),
pin_memory=not args.disable_gpu)
test_shape = (args.num_test, args.image_channels, args.image_size, args.image_size)
# model
denoise_model = UNet(
args.image_size, args.bottom, args.image_channels, args.channels,
args.attn_resls, args.attn_head, args.time_affine,
args.dropout, args.num_res, args.norm_name, args.act_name)
ema_model = UNet(
args.image_size, args.bottom, args.image_channels, args.channels,
args.attn_resls, args.attn_head, args.time_affine,
args.dropout, args.num_res, args.norm_name, args.act_name)
freeze(ema_model)
update_ema(denoise_model, ema_model, 0.)
# diffusion
gaussian_diffusion = GaussianDiffusion(
args.timesteps)
denoise_model.to(device)
ema_model.to(device)
gaussian_diffusion.to(device)
# optimizer
optimizer = optim.Adam(denoise_model.parameters(), lr=args.lr, betas=args.betas)
if args.max_iters < 0:
args.max_iters = len(dataset) * args.default_epochs
train(
args.max_iters, dataset, args.timesteps, test_shape,
denoise_model, ema_model, gaussian_diffusion,
optimizer, device, amp, args.save, args.sample)
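For reference, the forward-noising step commented in the training loop above — q(xt|x0) = √ᾱt·x0 + √(1−ᾱt)·ε — can be sketched standalone in NumPy; the linear beta schedule below is an assumption for illustration, not necessarily the schedule `GaussianDiffusion` uses:

```python
import numpy as np

def q_sample_sketch(x0, t, betas, rng):
    # q(x_t | x_0): sqrt(alpha_bar_t) * x0 + sqrt(1 - alpha_bar_t) * eps
    alpha_bar = np.cumprod(1.0 - betas)[t].reshape(-1, 1, 1, 1)
    noise = rng.standard_normal(x0.shape)
    x_noisy = np.sqrt(alpha_bar) * x0 + np.sqrt(1.0 - alpha_bar) * noise
    return x_noisy, noise

rng = np.random.default_rng(0)
betas = np.linspace(1e-4, 0.02, 1000)     # assumed linear schedule
x0 = rng.standard_normal((4, 3, 16, 16))  # a fake image batch
t = rng.integers(0, 1000, size=4)         # one timestep per sample
x_noisy, noise = q_sample_sketch(x0, t, betas, rng)
print(x_noisy.shape)  # (4, 3, 16, 16)
```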
| 37.330986 | 119 | 0.591209 |
7bf0d294812be3be75a909fb78fbc7b0d7be9c7f | 1,679 | py | Python | Code-Code/Method-Generation/evaluator/evaluator.py | LIANGQINGYUAN/CodeXGLUE | b9405a7d83248983b36834fcecc94335c89635f7 | [
"CC0-1.0",
"MIT"
] | 2 | 2022-02-14T13:43:12.000Z | 2022-02-14T14:45:09.000Z | Code-Code/Method-Generation/evaluator/evaluator.py | 1749740778/CodeXGLUE | 50a483084fd5641889e1afb1c4c253f3987b5660 | [
"CC0-1.0",
"MIT"
] | null | null | null | Code-Code/Method-Generation/evaluator/evaluator.py | 1749740778/CodeXGLUE | 50a483084fd5641889e1afb1c4c253f3987b5660 | [
"CC0-1.0",
"MIT"
] | null | null | null | # Copyright (c) Microsoft Corporation.
# Licensed under the MIT license.
import os
import logging
import argparse
from fuzzywuzzy import fuzz
import json
import re
from bleu import _bleu
logger = logging.getLogger(__name__)
logging.basicConfig(level=logging.INFO)
def post_process(code):
code = code.replace("<EOL>", "\n").replace("<INDENT>", " ").replace("<DEDENT>", " ")
code = code.replace("<NUM_LIT>", "0").replace("<STR_LIT>", "").replace("<CHAR_LIT>", "")
pattern = re.compile(r"<(STR|NUM|CHAR)_LIT:(.*?)>", re.S)
lits = re.findall(pattern, code)
for lit in lits:
code = code.replace(f"<{lit[0]}_LIT:{lit[1]}>", lit[1])
return " ".join(code.split())
def main():
parser = argparse.ArgumentParser(description='Evaluate leaderboard predictions for code completion (line level).')
parser.add_argument('--answers', '-a', required=True, help="filename of the labels, in txt format.")
parser.add_argument('--predictions', '-p', required=True, help="filename of the leaderboard predictions, in txt format.")
args = parser.parse_args()
preds = open(args.predictions, "r").readlines()
gts = open(args.answers, "r").readlines()
assert len(preds) == len(gts), f"Samples of predictions and answers are not equal, {len(preds)}: {len(gts)}"
total = len(gts)
edit_sim = 0.0
for pred, gt in zip(preds, gts):
pred = post_process(pred.strip())
gt = post_process(gt.strip())
edit_sim += fuzz.ratio(pred, gt)
bleu_score = round(_bleu(args.answers, args.predictions), 2)
logger.info(f"Edit sim: {round(edit_sim/total, 2)}, BLEU: {bleu_score}")
if __name__ == "__main__":
main()
| 36.5 | 125 | 0.660512 |
120c592324207fd5e11aea94305287d17bc13bdd | 160 | py | Python | payment_maintenance/payment_maintenance/doctype/bakery_cus_payment/test_bakery_cus_payment.py | Srijenanithish/Payment_Maintenance_System | d0be4eabfb04edfb10b6293e23eae13213c0a26a | [
"MIT"
] | null | null | null | payment_maintenance/payment_maintenance/doctype/bakery_cus_payment/test_bakery_cus_payment.py | Srijenanithish/Payment_Maintenance_System | d0be4eabfb04edfb10b6293e23eae13213c0a26a | [
"MIT"
] | null | null | null | payment_maintenance/payment_maintenance/doctype/bakery_cus_payment/test_bakery_cus_payment.py | Srijenanithish/Payment_Maintenance_System | d0be4eabfb04edfb10b6293e23eae13213c0a26a | [
"MIT"
] | null | null | null | # Copyright (c) 2021, Srijena_Nithish and Contributors
# See license.txt
# import frappe
import unittest
class TestBakeryCusPayment(unittest.TestCase):
pass
| 17.777778 | 54 | 0.8 |
26ab727cb2f72d7f2e8927906af160a0406be12f | 285 | py | Python | src/reporter/api.py | agaldemas/ngsi-timeseries-api | 0ef1e2dae2e22e8b033b0d7f92f57825c1f15bc1 | [
"MIT"
] | null | null | null | src/reporter/api.py | agaldemas/ngsi-timeseries-api | 0ef1e2dae2e22e8b033b0d7f92f57825c1f15bc1 | [
"MIT"
] | null | null | null | src/reporter/api.py | agaldemas/ngsi-timeseries-api | 0ef1e2dae2e22e8b033b0d7f92f57825c1f15bc1 | [
"MIT"
] | null | null | null | def list_of_api():
return {
'notify_url': '/v2/notify',
'subscriptions_url': '/v2/subscriptions',
'entities_url': '/v2/entities',
'types_url': '/v2/types',
'attributes_url': '/v2/attrs',
        'entitiesArray_url': '/v2/entitiesArray'
}
| 28.5 | 49 | 0.564912 |
47ec205516e059093b32edc6df089be5ca75d871 | 1,421 | py | Python | cpmpy/all_different_pairs.py | hakank/hakank | 313e5c0552569863047f6ce9ae48ea0f6ec0c32b | [
"MIT"
] | 279 | 2015-01-10T09:55:35.000Z | 2022-03-28T02:34:03.000Z | cpmpy/all_different_pairs.py | hakank/hakank | 313e5c0552569863047f6ce9ae48ea0f6ec0c32b | [
"MIT"
] | 10 | 2017-10-05T15:48:50.000Z | 2021-09-20T12:06:52.000Z | cpmpy/all_different_pairs.py | hakank/hakank | 313e5c0552569863047f6ce9ae48ea0f6ec0c32b | [
"MIT"
] | 83 | 2015-01-20T03:44:00.000Z | 2022-03-13T23:53:06.000Z | """
All different pairs in cpmpy.
Assumption: a is a k by 2 matrix. n is the number of nodes.
This model implements these decompositions:
- pairs(x,n): function which returns the pairs of matrix x
in 'integer representation': a[k,1]*(n-1) + a[k,2]
- all_different_pairs(sol, x,n): all the pairs in x must be different
- increasing_pairs(sol, x,n): the pairs in x is in increasing order
- decreasing_pairs(sol, x,n): the pairs in x is in decreasing order
n #solutions
-------------
1 0
2 1
3 12
4 377
5 53834
6
This cpmpy model was written by Hakan Kjellerstrand (hakank@gmail.com)
See also my cpmpy page: http://hakank.org/cpmpy/
"""
from cpmpy import *
import cpmpy.solvers
import numpy as np
from cpmpy_hakank import *
def all_different_pairs_test(n=5):
model = Model()
# data
m = n*(n - 1) // 2 # number of pairs
print("n:",n, "m:", m)
# variables
x = intvar(1,n,shape=(m,2),name="x")
# constraints
model += [all_different_pairs(x, n)]
model += [increasing_pairs(x, n)]
for k in range(m):
model += [x[(k,0)] != x[(k,1)]]
ss = CPM_ortools(model)
num_solutions = 0
while ss.solve():
num_solutions += 1
for i in range(m):
for j in range(2):
print(x[i,j].value(),end=" ")
print()
print()
get_different_solution(ss,x.flat)
print("num_solutions:", num_solutions)
n = 5
all_different_pairs_test(n)
| 19.736111 | 70 | 0.636875 |
d7e75ff524bfb6e0c4a3c4b92d965fc452cae99e | 1,764 | py | Python | tests/test_f2py_atomenf.py | fury106/ProjectParallelProgrammeren | fd3c198edaca5bcb19d8e665561e8cd14824e894 | [
"MIT"
] | null | null | null | tests/test_f2py_atomenf.py | fury106/ProjectParallelProgrammeren | fd3c198edaca5bcb19d8e665561e8cd14824e894 | [
"MIT"
] | null | null | null | tests/test_f2py_atomenf.py | fury106/ProjectParallelProgrammeren | fd3c198edaca5bcb19d8e665561e8cd14824e894 | [
"MIT"
] | null | null | null | #!/usr/bin/env python
# -*- coding: utf-8 -*-
"""Tests for f2py module `projectparallelprogrammeren.atomenf`."""
import projectparallelprogrammeren as ppp
import math
# create an alias for the binary extension cpp module
f90 = ppp.atomenf.f90_module
def test_lj2atomen():
"""Test om de potentiaal tussen 2 atomen te berekenen"""
afstand = 1.5
pot = 4*((1/afstand)**12 - (1/afstand)**6)
potf = f90.ljpot2atomen(afstand)
assert round(pot, 7) == round(potf, 7)
def test_lj2atomen_afstand0():
"""Test om na te gaan dat er geen errors verschijnen wanneer de afstand 0 is"""
afstand = 0
pot = f90.ljpot2atomen(afstand)
assert pot == 1.7976931348623157e+308
def test_alle_jlpot():
"""test om alle potentialen te berekenen"""
atomen = [[1,2,3],[4,5,6],[7,8,9]]
r12 = math.sqrt(math.pow(2-1,2)+math.pow(5-4,2)+math.pow(8-7,2))
r13 = math.sqrt(math.pow(3-1,2)+math.pow(6-4,2)+math.pow(9-7,2))
r23 = math.sqrt(math.pow(3-2,2)+math.pow(6-5,2)+math.pow(9-8,2))
pot12 = 4*(math.pow(1/r12,12)-math.pow(1/r12,6))
pot13 = 4*(math.pow(1/r13,12)-math.pow(1/r13,6))
pot23 = 4*(math.pow(1/r23,12)-math.pow(1/r23,6))
pot = f90.ljpotalleatomen(atomen,3)
assert round(pot12+pot13+pot23,7) == round(pot,7)
#===============================================================================
# The code below is for debugging a particular test in eclipse/pydev.
# (normally all tests are run with pytest)
#===============================================================================
if __name__ == "__main__":
    the_test_you_want_to_debug = test_lj2atomen
print(f"__main__ running {the_test_you_want_to_debug} ...")
the_test_you_want_to_debug()
print('-*# finished #*-')
#===============================================================================
| 37.531915 | 80 | 0.598073 |
d5b091e0a78fffd085801d9d1c83641b0aec3cd2 | 540 | py | Python | celeryapp/tasks.py | jirenmaa/twitter-clone | de211a7d73ef455f5759eba69cdceb4b51f5a9b0 | [
"MIT"
] | 5 | 2021-10-12T06:40:51.000Z | 2022-02-23T13:37:40.000Z | celeryapp/tasks.py | jirenmaa/twitter-clone | de211a7d73ef455f5759eba69cdceb4b51f5a9b0 | [
"MIT"
] | null | null | null | celeryapp/tasks.py | jirenmaa/twitter-clone | de211a7d73ef455f5759eba69cdceb4b51f5a9b0 | [
"MIT"
] | 1 | 2022-02-02T22:36:00.000Z | 2022-02-02T22:36:00.000Z | from celeryapp.artisan import app as artisan
from celeryapp.workers import mailer
@artisan.task(name="send_email_activation")
def send_email_activation(recepients: str, hashkey: str, **kwargs):
"""Send activation link to user email"""
mailer.email_activation_link(recepients, hashkey, **kwargs)
@artisan.task(name="send_email_resetpassword")
def send_email_resetpassword(recepients: str, hashkey: str, **kwargs):
"""Send resetpassword link to user email"""
mailer.email_resetpassword_link(recepients, hashkey, **kwargs)
| 36 | 70 | 0.774074 |
a1114288b6f4f9435f7819fac16ff1c2cb458a33 | 1,006 | py | Python | setup.py | kilataban/africastalking-python | 7de119e88165015b93f74804884d301a6dbcd623 | [
"MIT"
] | null | null | null | setup.py | kilataban/africastalking-python | 7de119e88165015b93f74804884d301a6dbcd623 | [
"MIT"
] | null | null | null | setup.py | kilataban/africastalking-python | 7de119e88165015b93f74804884d301a6dbcd623 | [
"MIT"
] | null | null | null | #!/usr/bin/env python
from setuptools import setup
import sys
import os
version = '1.1.7'
long_description = open('README.md').read()
if sys.argv[-1] == 'publish':
os.system('python setup.py sdist upload')
sys.exit()
setup(
name='africastalking',
version=version,
packages=['africastalking'],
description='Official Africa\'s Talking Python SDK',
data_files=[('', ['README.md'])],
license='MIT',
author='Africa\'s Talking',
install_requires=[
'requests>=v2.18.4',
'schema>=0.6.7'
],
python_requires=">=2.7.10",
author_email='info@africastalking.com',
url='https://github.com/AfricasTalkingLtd/africastalking-python',
download_url='https://codeload.github.com/AfricasTalkingLtd/africastalking-python/tar.gz/' + version,
keywords='ussd voice sms mpesa card bank b2b b2c sender_id payments airtime africastalking',
classifiers=[],
long_description=long_description,
long_description_content_type='text/markdown'
)
| 28.742857 | 105 | 0.687873 |
4fbe6281726fc941102081253793a2754f41f19e | 15,110 | py | Python | graphs/models/decentralplanner_GAT_bottleneck.py | proroklab/magat_pathplanning | a2cab3b11abc46904bc45be1762a780becb1e8c7 | [
"MIT"
] | 40 | 2021-07-01T03:14:20.000Z | 2022-03-23T23:45:22.000Z | graphs/models/decentralplanner_GAT_bottleneck.py | QingbiaoLi/magat_pathplanning | f28429b1a2ab7866c3001b82e6ae9ca3f072c106 | [
"MIT"
] | null | null | null | graphs/models/decentralplanner_GAT_bottleneck.py | QingbiaoLi/magat_pathplanning | f28429b1a2ab7866c3001b82e6ae9ca3f072c106 | [
"MIT"
] | 13 | 2021-07-14T07:57:16.000Z | 2022-03-03T10:43:25.000Z | """
An example for the model class
"""
import torch
import torch.nn as nn
import torch.nn.functional as F
from graphs.weights_initializer import weights_init
import numpy as np
import utils.graphUtils.graphML as gml
import utils.graphUtils.graphTools
from torchsummaryX import summary
from graphs.models.resnet_pytorch import *
class DecentralPlannerGATNet(nn.Module):
def __init__(self, config):
super().__init__()
self.config = config
self.S = None
self.numAgents = self.config.num_agents
# inW = self.config.map_w
# inH = self.config.map_h
inW = self.config.FOV + 2
inH = self.config.FOV + 2
# invW = 11
# inH = 11
convW = [inW]
convH = [inH]
numAction = 5
use_vgg = False
# ------------------ DCP v1.4 - with maxpool + non stride in CNN - less feature
numChannel = [3] + [32, 32, 64, 64, 128]
numStride = [1, 1, 1, 1, 1]
dimCompressMLP = 1
numCompressFeatures = [self.config.bottleneckFeature]
nMaxPoolFilterTaps = 2
numMaxPoolStride = 2
# # 1 layer origin
dimNodeSignals = [self.config.bottleneckFeature]
# # 2 layer - upsampling
# dimNodeSignals = [256, 2 ** 7]
# # 2 layer - down sampling
# dimNodeSignals = [64, 2 ** 7]
#
# # 2 layer - down sampling -v2
# dimNodeSignals = [64, 32]
#
## ------------------ GCN -------------------- ##
# dimNodeSignals = [2 ** 7]
# nGraphFilterTaps = [self.config.nGraphFilterTaps,self.config.nGraphFilterTaps] # [2]
nGraphFilterTaps = [self.config.nGraphFilterTaps]
nAttentionHeads = [self.config.nAttentionHeads]
# --- actionMLP
if self.config.use_dropout:
dimActionMLP = 2
numActionFeatures = [self.config.numInputFeatures, numAction]
else:
dimActionMLP = 1
numActionFeatures = [numAction]
#####################################################################
# #
# CNN to extract feature #
# #
#####################################################################
if use_vgg:
self.ConvLayers = self.make_layers(cfg, batch_norm=True)
self.compressMLP = nn.Sequential(
nn.Linear(512, 4096),
nn.ReLU(inplace=True),
nn.Dropout(),
nn.Linear(4096, 4096),
nn.ReLU(inplace=True),
nn.Dropout(),
nn.Linear(4096, 128)
)
numCompressFeatures = [128]
else:
if self.config.CNN_mode == 'ResNetSlim_withMLP':
convl = []
convl.append(ResNetSlim(BasicBlock, [1, 1], out_map=False))
convl.append(nn.Dropout(0.2))
convl.append(nn.Flatten())
convl.append(nn.Linear(in_features=1152, out_features=self.config.numInputFeatures, bias=True))
self.ConvLayers = nn.Sequential(*convl)
numFeatureMap = self.config.numInputFeatures
elif self.config.CNN_mode == 'ResNetLarge_withMLP':
convl = []
convl.append(ResNet(BasicBlock, [1, 1, 1], out_map=False))
convl.append(nn.Dropout(0.2))
convl.append(nn.Flatten())
convl.append(nn.Linear(in_features=1152, out_features=self.config.numInputFeatures, bias=True))
self.ConvLayers = nn.Sequential(*convl)
numFeatureMap = self.config.numInputFeatures
elif self.config.CNN_mode == 'ResNetSlim':
convl = []
convl.append(ResNetSlim(BasicBlock, [1, 1], out_map=False))
convl.append(nn.Dropout(0.2))
self.ConvLayers = nn.Sequential(*convl)
numFeatureMap = 1152
elif self.config.CNN_mode == 'ResNetLarge':
convl = []
convl.append(ResNet(BasicBlock, [1, 1, 1], out_map=False))
convl.append(nn.Dropout(0.2))
self.ConvLayers = nn.Sequential(*convl)
numFeatureMap = 1152
else:
convl = []
numConv = len(numChannel) - 1
nFilterTaps = [3] * numConv
nPaddingSzie = [1] * numConv
for l in range(numConv):
convl.append(nn.Conv2d(in_channels=numChannel[l], out_channels=numChannel[l + 1],
kernel_size=nFilterTaps[l], stride=numStride[l], padding=nPaddingSzie[l],
bias=True))
convl.append(nn.BatchNorm2d(num_features=numChannel[l + 1]))
convl.append(nn.ReLU(inplace=True))
# if self.config.use_dropout:
# convl.append(nn.Dropout(p=0.2))
# print('Dropout is add on CNN')
W_tmp = int((convW[l] - nFilterTaps[l] + 2 * nPaddingSzie[l]) / numStride[l]) + 1
H_tmp = int((convH[l] - nFilterTaps[l] + 2 * nPaddingSzie[l]) / numStride[l]) + 1
# Adding maxpooling
if l % 2 == 0:
convl.append(nn.MaxPool2d(kernel_size=2))
W_tmp = int((W_tmp - nMaxPoolFilterTaps) / numMaxPoolStride) + 1
H_tmp = int((H_tmp - nMaxPoolFilterTaps) / numMaxPoolStride) + 1
# http://cs231n.github.io/convolutional-networks/
convW.append(W_tmp)
convH.append(H_tmp)
self.ConvLayers = nn.Sequential(*convl)
numFeatureMap = numChannel[-1] * convW[-1] * convH[-1]
#####################################################################
# #
# MLP-feature compression #
# #
#####################################################################
numCompressFeatures = [numFeatureMap] + numCompressFeatures
compressmlp = []
for l in range(dimCompressMLP):
compressmlp.append(
nn.Linear(in_features=numCompressFeatures[l], out_features=numCompressFeatures[l + 1], bias=True))
compressmlp.append(nn.ReLU(inplace=True))
# if self.config.use_dropout:
# compressmlp.append(nn.Dropout(p=0.2))
self.compressMLP = nn.Sequential(*compressmlp)
self.numFeatures2Share = numCompressFeatures[-1]
#####################################################################
# #
# graph neural network #
# #
#####################################################################
self.L = len(nGraphFilterTaps) # Number of graph filtering layers
self.F = [numCompressFeatures[-1]] + dimNodeSignals # Features
# self.F = [numFeatureMap] + dimNodeSignals # Features
self.K = nGraphFilterTaps # nFilterTaps # Filter taps
self.P = nAttentionHeads
self.E = 1 # Number of edge features
self.bias = True
gfl = [] # Graph Filtering Layers
for l in range(self.L):
# \\ Graph filtering stage:
if self.config.attentionMode == 'GAT_origin':
gfl.append(gml.GraphFilterBatchAttentional_Origin(self.F[l], self.F[l + 1], self.K[l], self.P[l], self.E,self.bias,
concatenate=self.config.AttentionConcat,attentionMode=self.config.attentionMode))
elif self.config.attentionMode == 'GAT_modified' or self.config.attentionMode == 'KeyQuery':
gfl.append(gml.GraphFilterBatchAttentional(self.F[l], self.F[l + 1], self.K[l], self.P[l], self.E, self.bias,concatenate=self.config.AttentionConcat,
attentionMode=self.config.attentionMode))
elif self.config.attentionMode == 'GAT_Similarity':
gfl.append(gml.GraphFilterBatchSimilarityAttentional(self.F[l], self.F[l + 1], self.K[l], self.P[l], self.E, self.bias,concatenate=self.config.AttentionConcat,
attentionMode=self.config.attentionMode))
# gfl.append(gml.GraphFilterBatchAttentional_Origin(self.F[l], self.F[l + 1], self.K[l], self.P[l], self.E, self.bias, concatenate=self.config.AttentionConcat))
# gfl.append(
# gml.GraphFilterBatchSimilarityAttentional(self.F[l], self.F[l + 1], self.K[l], self.P[l], self.E, self.bias,
# concatenate=self.config.AttentionConcat))
# There is a 2*l below here, because we have three elements per
# layer: graph filter, nonlinearity and pooling, so after each layer
# we're actually adding elements to the (sequential) list.
# \\ Nonlinearity
# gfl.append(nn.ReLU(inplace=True))
# And now feed them into the sequential
self.GFL = nn.Sequential(*gfl) # Graph Filtering Layers
#####################################################################
# #
# MLP --- map to actions #
# #
#####################################################################
if self.config.AttentionConcat:
numActionFeatures = [self.F[-1]*self.config.nAttentionHeads] + numActionFeatures
else:
numActionFeatures = [self.F[-1]] + numActionFeatures
actionsfc = []
for l in range(dimActionMLP):
if l < (dimActionMLP - 1):
actionsfc.append(
nn.Linear(in_features=numActionFeatures[l], out_features=numActionFeatures[l + 1], bias=True))
actionsfc.append(nn.ReLU(inplace=True))
else:
actionsfc.append(
nn.Linear(in_features=numActionFeatures[l], out_features=numActionFeatures[l + 1], bias=True))
if self.config.use_dropout:
actionsfc.append(nn.Dropout(p=0.2))
print('Dropout is add on MLP')
self.actionsMLP = nn.Sequential(*actionsfc)
self.apply(weights_init)
def make_layers(self, cfg, batch_norm=False):
layers = []
input_channel = 3
for l in cfg:
if l == 'M':
layers += [nn.MaxPool2d(kernel_size=2, stride=2)]
continue
layers += [nn.Conv2d(input_channel, l, kernel_size=3, padding=1)]
if batch_norm:
layers += [nn.BatchNorm2d(l)]
layers += [nn.ReLU(inplace=True)]
input_channel = l
return nn.Sequential(*layers)
def addGSO(self, S):
# We add the GSO on real time, this GSO also depends on time and has
# shape either B x N x N or B x E x N x N
if self.E == 1: # It is B x T x N x N
assert len(S.shape) == 3
self.S = S.unsqueeze(1) # B x E x N x N
else:
assert len(S.shape) == 4
assert S.shape[1] == self.E
self.S = S
# Remove nan data
self.S[torch.isnan(self.S)] = 0
if self.config.GSO_mode == 'dist_GSO_one':
self.S[self.S > 0] = 1
elif self.config.GSO_mode == 'full_GSO':
self.S = torch.ones_like(self.S).to(self.config.device)
# self.S[self.S > 0] = 1
def forward(self, inputTensor):
B = inputTensor.shape[0] # batch size
# N = inputTensor.shape[1]
# C =
(B,N,C,W,H) = inputTensor.shape
# print(inputTensor.shape)
# print(B,N,C,W,H)
# B x G x N
# extractFeatureMap = torch.zeros(B, self.numFeatures2Share, self.numAgents).to(self.config.device)
input_currentAgent = inputTensor.reshape(B*N,C,W,H).to(self.config.device)
# print("input_currentAgent:", input_currentAgent.shape)
featureMap = self.ConvLayers(input_currentAgent).to(self.config.device)
# print("featureMap:", featureMap.shape)
featureMapFlatten = featureMap.view(featureMap.size(0), -1).to(self.config.device)
# print("featureMapFlatten:", featureMapFlatten.shape)
compressfeature = self.compressMLP(featureMapFlatten).to(self.config.device)
# print("compressfeature:", compressfeature.shape)
extractFeatureMap = compressfeature.reshape(B,N,self.numFeatures2Share).to(self.config.device).permute([0,2,1])
# extractFeatureMap_old = compressfeature.reshape(B,N,self.numFeatures2Share).to(self.config.device)
# print("extractFeatureMap_old:", extractFeatureMap_old.shape)
# extractFeatureMap = extractFeatureMap_old.permute([0,2,1]).to(self.config.device)
# print("extractFeatureMap:", extractFeatureMap.shape)
# DCP
for l in range(self.L):
# \\ Graph filtering stage:
# There is a 3*l below here, because we have three elements per
# layer: graph filter, nonlinearity and pooling, so after each layer
# we're actually adding elements to the (sequential) list.
# self.GFL[2 * l].addGSO(self.S) # add GSO for GraphFilter
self.GFL[l].addGSO(self.S) # add GSO for GraphFilter
# B x F x N - > B x G x N,
sharedFeature = self.GFL(extractFeatureMap)
(_, num_G, _) = sharedFeature.shape
sharedFeature_stack =sharedFeature.permute([0,2,1]).to(self.config.device).reshape(B*N,num_G)
# sharedFeature_permute = sharedFeature.permute([0, 2, 1]).to(self.config.device)
# sharedFeature_stack = sharedFeature_permute.reshape(B*N,num_G)
# print(sharedFeature_stack.shape)
action_predict = self.actionsMLP(sharedFeature_stack)
# print(action_predict)
# print(action_predict.shape)
return action_predict
| 44.181287 | 176 | 0.501257 |
39700f666416c129f4f948832429c4bf0609adf8 | 1,386 | py | Python | devops-toolbox/aws/node_operations.py | varmarakesh/devops-toolbox | 5ece7fb75f89ae23ca41571222bab382624b0462 | [
"ISC"
] | null | null | null | devops-toolbox/aws/node_operations.py | varmarakesh/devops-toolbox | 5ece7fb75f89ae23ca41571222bab382624b0462 | [
"ISC"
] | 1 | 2016-05-30T15:24:06.000Z | 2016-05-30T15:24:06.000Z | devops-toolbox/aws/node_operations.py | varmarakesh/devops-toolbox | 5ece7fb75f89ae23ca41571222bab382624b0462 | [
"ISC"
] | 1 | 2018-09-20T02:32:53.000Z | 2018-09-20T02:32:53.000Z | __author__ = 'rakesh.varma'
from ConfigParser import SafeConfigParser
import os
class Node:
name = None
ip_address = None
private_ip_address = None
dns_name = None
def __init__(self, name, ip_address, private_ip_address, dns_name):
self.name = name
self.ip_address = ip_address
self.private_ip_address = private_ip_address
self.dns_name = dns_name
def __str__(self):
        return 'name: {0}, ip_address: {1}, private_ip_address: {2}, dns_name: {3}'.format(self.name, self.ip_address, self.private_ip_address, self.dns_name)
class Cluster:
config = None
nodes = []
def __init__(self, config = None):
self.config = config
c = SafeConfigParser()
c.read(self.config)
        for name, value in c.items("main"):
            info = eval(value)  # each config value is a Python dict literal
            node = Node(name=name, ip_address=info['ip_address'],
                        private_ip_address=info['private_ip_address'],
                        dns_name=info['dns_name'])
            self.nodes.append(node)
def __getitem__(self, item):
return filter(lambda node:node.name == item, self.nodes)[0]
def __str__(self):
return 'config:{0}\n'.format(self.config) + '\n'.join(str(node) for node in self.nodes)
| 30.130435 | 159 | 0.637085 |
b5eab263852da28921c95dda3288daedfacbe22a | 3,912 | py | Python | app.py | bsezgin/ms-identity-python-webapp | 914e2b874288a344f74498bc83b547a1e8d885ff | [
"MIT"
] | null | null | null | app.py | bsezgin/ms-identity-python-webapp | 914e2b874288a344f74498bc83b547a1e8d885ff | [
"MIT"
] | null | null | null | app.py | bsezgin/ms-identity-python-webapp | 914e2b874288a344f74498bc83b547a1e8d885ff | [
"MIT"
] | null | null | null | import uuid
import os
import requests
from flask import Flask, render_template, session, request, redirect, url_for
from flask_session import Session # https://pythonhosted.org/Flask-Session
import msal
from dotenv import load_dotenv
#import app_config
app = Flask(__name__)
app.config.from_object(os.environ)
Session(app)
@app.route("/")
def index():
if not session.get("user"):
return redirect(url_for("login"))
return render_template('index.html', user=session["user"], version=msal.__version__)
@app.route("/login")
def login():
session["state"] = str(uuid.uuid4())
# Technically we could use empty list [] as scopes to do just sign in,
# here we choose to also collect end user consent upfront
    # MSAL expects a list of scopes; SCOPE is read here as a space-separated string.
    auth_url = _build_auth_url(scopes=os.environ.get('SCOPE', '').split(), state=session["state"])
return render_template("login.html", auth_url=auth_url, version=msal.__version__)
@app.route(os.environ.get('REDIRECT_PATH')) # Its absolute URL must match your app's redirect_uri set in AAD
def authorized():
if request.args.get('state') != session.get("state"):
return redirect(url_for("index")) # No-OP. Goes back to Index page
if "error" in request.args: # Authentication/Authorization failure
return render_template("auth_error.html", result=request.args)
if request.args.get('code'):
cache = _load_cache()
result = _build_msal_app(cache=cache).acquire_token_by_authorization_code(
request.args['code'],
            scopes=os.environ.get('SCOPE', '').split(),  # Misspelled scope would cause an HTTP 400 error here
redirect_uri=url_for("authorized", _external=True))
if "error" in result:
return render_template("auth_error.html", result=result)
session["user"] = result.get("id_token_claims")
_save_cache(cache)
return redirect(url_for("index"))
@app.route("/logout")
def logout():
session.clear() # Wipe out user and its token cache from session
return redirect( # Also logout from your tenant's web session
        os.environ.get('AUTHORITY') + "/oauth2/v2.0/logout" +
"?post_logout_redirect_uri=" + url_for("index", _external=True))
@app.route("/graphcall")
def graphcall():
    token = _get_token_from_cache(os.environ.get('SCOPE', '').split())
if not token:
return redirect(url_for("login"))
graph_data = requests.get( # Use token to call downstream service
os.environ.get('ENDPOINT'),
headers={'Authorization': 'Bearer ' + token['access_token']},
).json()
return render_template('display.html', result=graph_data)
def _load_cache():
cache = msal.SerializableTokenCache()
if session.get("token_cache"):
cache.deserialize(session["token_cache"])
return cache
def _save_cache(cache):
if cache.has_state_changed:
session["token_cache"] = cache.serialize()
def _build_msal_app(cache=None, authority=None):
return msal.ConfidentialClientApplication(
os.environ.get('CLIENT_ID'), authority=authority or os.environ.get('AUTHORITY'),
client_credential=os.environ.get('CLIENT_SECRET'), token_cache=cache)
def _build_auth_url(authority=None, scopes=None, state=None):
return _build_msal_app(authority=authority).get_authorization_request_url(
scopes or [],
state=state or str(uuid.uuid4()),
redirect_uri=url_for("authorized", _external=True))
def _get_token_from_cache(scope=None):
cache = _load_cache() # This web app maintains one cache per session
cca = _build_msal_app(cache=cache)
accounts = cca.get_accounts()
if accounts: # So all account(s) belong to the current signed-in user
result = cca.acquire_token_silent(scope, account=accounts[0])
_save_cache(cache)
return result
app.jinja_env.globals.update(_build_auth_url=_build_auth_url) # Used in template
if __name__ == "__main__":
app.run()
| 38.352941 | 109 | 0.70092 |
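A note on the MSAL sample above: the token-acquisition methods take `scopes` as a list of strings, while an environment variable is always a single string, so a small helper is needed to bridge the two. A minimal sketch, assuming a space-separated `SCOPE` variable (the variable name and scope values here are illustrative):

```python
import os


def scopes_from_env(var_name="SCOPE"):
    """Split a space-separated scope string from the environment into the
    list form that MSAL's acquire_token_* methods expect."""
    raw = os.environ.get(var_name) or ""
    return raw.split()


# A space-separated value yields a list of individual scopes;
# a missing variable yields an empty list.
os.environ["SCOPE"] = "User.Read openid profile"
print(scopes_from_env())  # → ['User.Read', 'openid', 'profile']
```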
7a2715eb86ff7859114bf973b1013dca359ce76a | 442 | py | Python | djangocms_baseplugins/teaser_section/tests/test_plugin.py | benzkji/djangocms-baseplugins | 7f041a030ed93dcdec70e4ca777b841846b8f2f2 | [
"MIT"
] | 2 | 2019-04-14T01:31:22.000Z | 2020-03-05T13:06:57.000Z | djangocms_baseplugins/teaser_section/tests/test_plugin.py | benzkji/djangocms-baseplugins | 7f041a030ed93dcdec70e4ca777b841846b8f2f2 | [
"MIT"
] | 32 | 2017-04-04T09:28:06.000Z | 2021-08-18T16:23:02.000Z | djangocms_baseplugins/teaser_section/tests/test_plugin.py | bnzk/djangocms-baseplugins | 7f041a030ed93dcdec70e4ca777b841846b8f2f2 | [
"MIT"
] | null | null | null | from __future__ import unicode_literals
from django.test import TestCase
from djangocms_baseplugins.baseplugin.tests.base import BasePluginTestCase
from djangocms_baseplugins.teaser_section.cms_plugins import TeaserSectionPlugin
class TeaserSectionPluginTests(BasePluginTestCase, TestCase):
plugin_class = TeaserSectionPlugin
plugin_settings_prefix = 'TEASERSECTIONPLUGIN'
plugin_path = 'djangocms_baseplugins.teaser_section'
| 34 | 80 | 0.859729 |
02692e36b0d27031f4b4f93b1b17410787a12462 | 1,879 | py | Python | setup.py | aurule/npc | cbddc0b075dc364d564c11f734b41f277a3e0511 | [
"MIT"
] | 13 | 2016-02-23T08:15:22.000Z | 2021-07-17T20:54:57.000Z | setup.py | aurule/npc | cbddc0b075dc364d564c11f734b41f277a3e0511 | [
"MIT"
] | 1 | 2017-03-30T08:11:40.000Z | 2017-09-07T15:01:08.000Z | setup.py | aurule/npc | cbddc0b075dc364d564c11f734b41f277a3e0511 | [
"MIT"
] | 1 | 2020-02-21T09:44:40.000Z | 2020-02-21T09:44:40.000Z | #!/usr/bin/env python
# -*- coding: utf-8 -*-
import io
import os
import sys
from shutil import rmtree
from setuptools import setup, find_packages
# Package meta-data.
NAME = 'npc'
DESCRIPTION = "Game master's tool to manage characters and game files"
URL = 'https://github.com/aurule/npc'
EMAIL = 'pmandrews@gmail.com'
AUTHOR = 'Peter Andrews'
# What packages are required for this module to be executed?
REQUIRED = [
'mako', 'markdown'
]
# ------------------------------------------------
here = os.path.abspath(os.path.dirname(__file__))
# Import the README and use it as the long-description.
# Note: this will only work if 'README.md' is present in your MANIFEST.in file!
with io.open(os.path.join(here, 'README.md'), encoding='utf-8') as f:
long_description = '\n' + f.read()
about = {}
with io.open('npc/__version__.py') as f:
exec(f.read(), about)
setup(
name=NAME,
version=about["__version__"],
description=DESCRIPTION,
long_description=long_description,
url=URL,
author=AUTHOR,
author_email=EMAIL,
license="MIT",
classifiers=[
'Development Status :: 5 - Production/Stable',
'License :: OSI Approved :: MIT License',
'Programming Language :: Python :: 3',
'Programming Language :: Python :: 3.5',
],
keywords="npc tabletop gaming gm campaign",
packages=find_packages(exclude=['tests']),
install_requires=[
"Mako>=1.0.0",
"Markdown>=2.6.0"
],
    extras_require={
"test": [
"pytest>=3.9.1",
]
},
package_data={
'npc': [
'settings/*.json',
'templates/*.nwod',
'templates/*.mako',
'templates/*.md',
'templates/listing/*.mako'
]
},
entry_points={
'console_scripts': [
'npc=npc.cli:start',
],
}
)
| 24.402597 | 79 | 0.582757 |
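The setup script above avoids importing the package by `exec`-ing `npc/__version__.py` into a plain dict, so metadata stays readable even when install prerequisites are missing. A self-contained sketch of that pattern (the file name and version string are made up for the demo):

```python
import io
import os
import tempfile

# Write a stand-in version module, as a package would ship it.
tmp = tempfile.mkdtemp()
version_file = os.path.join(tmp, "__version__.py")
with io.open(version_file, "w", encoding="utf-8") as f:
    f.write(u'__version__ = "1.2.3"\n')

# Execute it into a dict instead of importing the package.
about = {}
with io.open(version_file, encoding="utf-8") as f:
    exec(f.read(), about)

print(about["__version__"])  # → 1.2.3
```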
a22db47661a968f3b82103e8c71812f260af3502 | 5,429 | py | Python | catkin_ws/src/parallel_autonomy/src/lane_supervisor_node.py | DiegoOrtegoP/Software | 4a07dd2dab29db910ca2e26848fa6b53b7ab00cd | [
"CC-BY-2.0"
] | 12 | 2016-04-14T12:21:46.000Z | 2021-06-18T07:51:40.000Z | catkin_ws/src/parallel_autonomy/src/lane_supervisor_node.py | DiegoOrtegoP/Software | 4a07dd2dab29db910ca2e26848fa6b53b7ab00cd | [
"CC-BY-2.0"
] | 14 | 2017-03-03T23:33:05.000Z | 2018-04-03T18:07:53.000Z | catkin_ws/src/parallel_autonomy/src/lane_supervisor_node.py | DiegoOrtegoP/Software | 4a07dd2dab29db910ca2e26848fa6b53b7ab00cd | [
"CC-BY-2.0"
] | 113 | 2016-05-03T06:11:42.000Z | 2019-06-01T14:37:38.000Z | #!/usr/bin/env python
import rospy
import numpy as np
import math
from std_msgs.msg import Bool
from duckietown_msgs.msg import Twist2DStamped, LanePose, StopLineReading
from sensor_msgs.msg import Joy
class lane_supervisor(object):
def __init__(self):
self.node_name = rospy.get_name()
self.lane_reading = LanePose()
self.car_control_lane = Twist2DStamped()
self.car_control_joy = Twist2DStamped()
self.safe = True
# TODO: properly encode states in state machine
self.state = 0
self.in_lane = True
self.at_stop_line = False
self.stop = False
# Params:
self.max_cross_track_error = self.setupParameter("~max_cross_track_error", 0.1)
self.max_heading_error = self.setupParameter("~max_heading_error", math.pi / 4)
self.min_speed = self.setupParameter("~min_speed", 0.1)
self.max_speed = self.setupParameter("~max_speed", 0.3)
self.max_steer = self.setupParameter("~max_steer", 0.2)
# Publication
self.pub_car_cmd = rospy.Publisher("~car_cmd", Twist2DStamped, queue_size=1)
self.pub_safe = rospy.Publisher("~safe", Bool, queue_size=1)
# Subscriptions
self.sub_lane_pose = rospy.Subscriber("~lane_pose", LanePose, self.cbLanePose, queue_size=1)
self.sub_lane_control = rospy.Subscriber("~car_cmd_lane", Twist2DStamped, self.cbLaneControl, queue_size=1)
self.sub_joy_control = rospy.Subscriber("~car_cmd_joy", Twist2DStamped, self.cbJoyControl, queue_size=1)
self.sub_at_stop_line = rospy.Subscriber("~stop_line_reading", StopLineReading, self.cbStopLine, queue_size=1)
self.params_update = rospy.Timer(rospy.Duration.from_sec(1.0), self.updateParams)
def setupParameter(self, param_name, default_value):
value = rospy.get_param(param_name, default_value)
        rospy.set_param(param_name, value)  # Write to parameter server for transparency
rospy.loginfo("[%s] %s = %s " % (self.node_name, param_name, value))
return value
def updateParams(self, event):
self.max_cross_track_error = rospy.get_param("~max_cross_track_error")
self.max_heading_error = rospy.get_param("~max_heading_error")
self.min_speed = rospy.get_param("~min_speed")
self.max_speed = rospy.get_param("~max_speed")
self.max_steer = rospy.get_param("~max_steer")
def cbStopLine(self, stop_line_msg):
if not stop_line_msg.at_stop_line:
self.at_stop_line = False
self.stop = False
else:
if not self.at_stop_line:
self.at_stop_line = True
self.stop = True
rospy.sleep(2)
self.stop = False
def cbLanePose(self, lane_pose_msg):
self.in_lane = lane_pose_msg.in_lane
self.lane_reading = lane_pose_msg
cross_track_err = math.fabs(lane_pose_msg.d)
heading_err = math.fabs(lane_pose_msg.phi)
if cross_track_err > self.max_cross_track_error or heading_err > self.max_heading_error:
self.safe = False
else:
self.safe = True
self.pub_safe.publish(self.safe)
def cbLaneControl(self, lane_control_msg):
self.car_control_lane = lane_control_msg
def cbJoyControl(self, joy_control_msg):
self.car_control_joy = joy_control_msg
car_cmd_msg = self.mergeJoyAndLaneControl()
self.pub_car_cmd.publish(car_cmd_msg)
def mergeJoyAndLaneControl(self):
car_cmd_msg = Twist2DStamped()
if self.stop:
if self.state != 1:
rospy.loginfo("[PA] stopped at stop line")
self.state = 1
car_cmd_msg.v = 0
car_cmd_msg.omega = 0
elif self.safe: # or not self.in_lane:
if self.state != 2:
rospy.loginfo("[PA] in safe mode")
self.state = 2
self.car_control_joy.v = min(self.car_control_joy.v, self.max_speed)
self.car_control_joy.omega = np.clip(self.car_control_joy.omega, -self.max_steer, self.max_steer)
if self.car_control_joy.v < self.min_speed:
# sets the speeds to 0:
# TODO: reformat
self.car_control_joy.v = 0.0
self.car_control_joy.omega = 0.0
car_cmd_msg = self.car_control_joy
car_cmd_msg.header.stamp = self.car_control_joy.header.stamp
else:
if self.state != 3:
rospy.loginfo("[PA] not safe - merge control inputs")
self.state = 3
car_control_merged = Twist2DStamped()
if self.car_control_joy.v < self.min_speed:
# sets the speeds to 0:
car_control_merged.v = 0.0
car_control_merged.omega = 0.0
else:
# take the speed from the joystick:
car_control_merged.v = min(self.car_control_joy.v, self.max_speed)
# take the omega from the lane controller:
car_control_merged.omega = self.car_control_lane.omega
car_cmd_msg = car_control_merged
car_cmd_msg.header.stamp = self.car_control_joy.header.stamp
return car_cmd_msg
if __name__ == "__main__":
rospy.init_node("lane_supervisor", anonymous=False)
lane_supervisor_node = lane_supervisor()
rospy.spin()
| 42.085271 | 118 | 0.640265 |
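The "not safe" branch of `mergeJoyAndLaneControl` above takes linear speed from the joystick (clamped) but steering from the lane controller, stopping entirely below a minimum speed. Stripped of ROS, that merge rule is a small pure function — a sketch assuming the node's default thresholds:

```python
def merge_controls(joy_v, joy_omega, lane_omega,
                   min_speed=0.1, max_speed=0.3):
    """Blend joystick and lane-controller commands: clamped speed from the
    joystick, steering from the lane controller; full stop below min_speed."""
    if joy_v < min_speed:
        return 0.0, 0.0           # too slow: stop completely
    v = min(joy_v, max_speed)     # clamp forward speed
    return v, lane_omega          # steering comes from the lane controller


print(merge_controls(0.05, 0.5, -0.2))  # → (0.0, 0.0)
print(merge_controls(0.5, 0.5, -0.2))   # → (0.3, -0.2)
```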
4b7b66f644e72693f3faf80b4b8306b4badc2d88 | 1,259 | py | Python | tests/test_title_matcher.py | ninoNinkovic/python-edl | c7f5cbb524194a070d892137a46902f7a89a930a | [
"MIT"
] | 3 | 2018-02-16T13:10:31.000Z | 2021-03-09T15:51:19.000Z | tests/test_title_matcher.py | ninoNinkovic/python-edl | c7f5cbb524194a070d892137a46902f7a89a930a | [
"MIT"
] | null | null | null | tests/test_title_matcher.py | ninoNinkovic/python-edl | c7f5cbb524194a070d892137a46902f7a89a930a | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
import unittest
import re
from edl import EDL
from edl.matchers import TitleMatcher
class TitleMatcherTestCase(unittest.TestCase):
"""tests the edl.edl.TitleMatcher class
"""
def runTest(self):
self.test_TitleMatcher_regex_is_working_properly()
self.test_TitleMatcher_apply_is_working_properly()
def test_TitleMatcher_regex_is_working_properly(self):
"""testing if the TitleMatcher.regex is working properly
"""
test_line = 'TITLE: Sequence 01'
e = TitleMatcher()
m = re.search(e.regex, test_line)
self.assertIsNotNone(m)
self.assertEqual(
'Sequence 01',
m.group(1).strip()
)
def test_TitleMatcher_apply_is_working_properly(self):
"""testing if the TitleMatcher.apply() is working properly
"""
ed_list = EDL('24')
e = TitleMatcher()
test_line = 'TITLE: Sequence 01'
e.apply(ed_list, test_line)
self.assertEqual(
'Sequence 01',
ed_list.title
)
test_line = 'TITLE: Test EDL 24'
e.apply(ed_list, test_line)
self.assertEqual(
'Test EDL 24',
ed_list.title
)
| 23.314815 | 66 | 0.603654 |
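The tests above only require that the matcher's regex captures everything after `TITLE:` in group 1. A pattern of the following shape would satisfy them (the exact regex used by `edl.matchers.TitleMatcher` is an assumption here):

```python
import re

TITLE_RE = re.compile(r"TITLE:(.+)")  # assumed shape of TitleMatcher.regex

m = TITLE_RE.search("TITLE: Sequence 01")
print(m.group(1).strip())  # → Sequence 01
```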
e53d1c7e4036d45e87517b10beeac3bfd2f72931 | 2,094 | py | Python | setup.py | intgr/pgxnclient | b440b3b95329b109fa984acbbf98ef878facd6a5 | [
"BSD-3-Clause"
] | 1 | 2018-05-31T07:59:30.000Z | 2018-05-31T07:59:30.000Z | setup.py | umitanuki/pgxnclient | c6f8cdd124f55e07b1dad7e07c10ace431eb037f | [
"BSD-3-Clause"
] | null | null | null | setup.py | umitanuki/pgxnclient | c6f8cdd124f55e07b1dad7e07c10ace431eb037f | [
"BSD-3-Clause"
] | null | null | null | #!/usr/bin/env python
"""
pgxnclient -- setup script
"""
# Copyright (C) 2011 Daniele Varrazzo
# This file is part of the PGXN client
import os
import sys
from setuptools import setup, find_packages
# Grab the version without importing the module
# or we will get import errors on install if prerequisites are still missing
fn = os.path.join(os.path.dirname(__file__), 'pgxnclient', '__init__.py')
f = open(fn)
try:
for line in f:
if line.startswith('__version__ ='):
version = line.split("'")[1]
break
else:
raise ValueError('cannot find __version__ in the pgxnclient module')
finally:
f.close()
# External dependencies, depending on the Python version
requires = []
tests_require = []
if sys.version_info < (2, 5):
requires.append('simplejson<=2.0.9')
elif sys.version_info < (2, 7):
requires.append('simplejson>=2.1')
tests_require.append('mock')
if sys.version_info < (2, 7):
tests_require.append('unittest2')
classifiers = """
Development Status :: 5 - Production/Stable
Environment :: Console
Intended Audience :: Developers
Intended Audience :: System Administrators
License :: OSI Approved :: BSD License
Operating System :: POSIX
Programming Language :: Python :: 2
Programming Language :: Python :: 3
Topic :: Database
"""
setup(
name = 'pgxnclient',
description = 'A command line tool to interact with the PostgreSQL Extension Network.',
author = 'Daniele Varrazzo',
author_email = 'daniele.varrazzo@gmail.com',
url = 'http://pgxnclient.projects.postgresql.org/',
license = 'BSD',
packages = find_packages(),
package_data = {'pgxnclient': ['libexec/*']},
entry_points = {'console_scripts': [
'pgxn = pgxnclient.cli:command_dispatch',
'pgxnclient = pgxnclient.cli:script', ]},
test_suite = 'pgxnclient.tests',
classifiers = [x for x in classifiers.split('\n') if x],
zip_safe = False, # because we dynamically look for commands
install_requires = requires,
tests_require = tests_require,
version = version,
use_2to3 = True,
)
| 27.552632 | 91 | 0.684814 |
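Unlike the `exec`-based approach, this setup script recovers the version by scanning the module text for the `__version__ =` line and splitting on quotes. The same scan as a self-contained function (the module contents are simulated):

```python
def parse_version(lines):
    """Return the quoted version from a "__version__ = '...'" line,
    mirroring the scan in the setup script above."""
    for line in lines:
        if line.startswith("__version__ ="):
            return line.split("'")[1]
    raise ValueError("cannot find __version__ in the module")


module_text = "# comment\n__version__ = '1.3.2'\n"
print(parse_version(module_text.splitlines()))  # → 1.3.2
```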
44a9cf9e02a8ee5bd8ad55c785e0dcc7c29cb07a | 5,097 | py | Python | kubernetes/client/models/networking_v1beta1_ingress_rule.py | Prahladk09/python-1 | 2dfb3035535e4be52ba549f1ff47acbe573b73f6 | [
"Apache-2.0"
] | 11 | 2020-10-13T05:27:59.000Z | 2021-09-23T02:56:32.000Z | kubernetes/client/models/networking_v1beta1_ingress_rule.py | Prahladk09/python-1 | 2dfb3035535e4be52ba549f1ff47acbe573b73f6 | [
"Apache-2.0"
] | 48 | 2020-10-15T09:53:36.000Z | 2021-07-05T15:33:24.000Z | kubernetes/client/models/networking_v1beta1_ingress_rule.py | Prahladk09/python-1 | 2dfb3035535e4be52ba549f1ff47acbe573b73f6 | [
"Apache-2.0"
] | 4 | 2020-12-04T08:51:35.000Z | 2022-03-27T09:42:20.000Z | # coding: utf-8
"""
Kubernetes
No description provided (generated by Swagger Codegen https://github.com/swagger-api/swagger-codegen)
OpenAPI spec version: v1.14.4
Generated by: https://github.com/swagger-api/swagger-codegen.git
"""
from pprint import pformat
from six import iteritems
import re
class NetworkingV1beta1IngressRule(object):
"""
NOTE: This class is auto generated by the swagger code generator program.
Do not edit the class manually.
"""
"""
Attributes:
swagger_types (dict): The key is attribute name
and the value is attribute type.
attribute_map (dict): The key is attribute name
and the value is json key in definition.
"""
swagger_types = {
'host': 'str',
'http': 'NetworkingV1beta1HTTPIngressRuleValue'
}
attribute_map = {
'host': 'host',
'http': 'http'
}
def __init__(self, host=None, http=None):
"""
NetworkingV1beta1IngressRule - a model defined in Swagger
"""
self._host = None
self._http = None
self.discriminator = None
if host is not None:
self.host = host
if http is not None:
self.http = http
@property
def host(self):
"""
Gets the host of this NetworkingV1beta1IngressRule.
Host is the fully qualified domain name of a network host, as defined by RFC 3986. Note the following deviations from the \"host\" part of the URI as defined in the RFC: 1. IPs are not allowed. Currently an IngressRuleValue can only apply to the IP in the Spec of the parent Ingress. 2. The `:` delimiter is not respected because ports are not allowed. Currently the port of an Ingress is implicitly :80 for http and :443 for https. Both these may change in the future. Incoming requests are matched against the host before the IngressRuleValue. If the host is unspecified, the Ingress routes all traffic based on the specified IngressRuleValue.
:return: The host of this NetworkingV1beta1IngressRule.
:rtype: str
"""
return self._host
@host.setter
def host(self, host):
"""
Sets the host of this NetworkingV1beta1IngressRule.
Host is the fully qualified domain name of a network host, as defined by RFC 3986. Note the following deviations from the \"host\" part of the URI as defined in the RFC: 1. IPs are not allowed. Currently an IngressRuleValue can only apply to the IP in the Spec of the parent Ingress. 2. The `:` delimiter is not respected because ports are not allowed. Currently the port of an Ingress is implicitly :80 for http and :443 for https. Both these may change in the future. Incoming requests are matched against the host before the IngressRuleValue. If the host is unspecified, the Ingress routes all traffic based on the specified IngressRuleValue.
:param host: The host of this NetworkingV1beta1IngressRule.
:type: str
"""
self._host = host
@property
def http(self):
"""
Gets the http of this NetworkingV1beta1IngressRule.
:return: The http of this NetworkingV1beta1IngressRule.
:rtype: NetworkingV1beta1HTTPIngressRuleValue
"""
return self._http
@http.setter
def http(self, http):
"""
Sets the http of this NetworkingV1beta1IngressRule.
:param http: The http of this NetworkingV1beta1IngressRule.
:type: NetworkingV1beta1HTTPIngressRuleValue
"""
self._http = http
def to_dict(self):
"""
Returns the model properties as a dict
"""
result = {}
for attr, _ in iteritems(self.swagger_types):
value = getattr(self, attr)
if isinstance(value, list):
result[attr] = list(map(
lambda x: x.to_dict() if hasattr(x, "to_dict") else x,
value
))
elif hasattr(value, "to_dict"):
result[attr] = value.to_dict()
elif isinstance(value, dict):
result[attr] = dict(map(
lambda item: (item[0], item[1].to_dict())
if hasattr(item[1], "to_dict") else item,
value.items()
))
else:
result[attr] = value
return result
def to_str(self):
"""
Returns the string representation of the model
"""
return pformat(self.to_dict())
def __repr__(self):
"""
For `print` and `pprint`
"""
return self.to_str()
def __eq__(self, other):
"""
Returns true if both objects are equal
"""
if not isinstance(other, NetworkingV1beta1IngressRule):
return False
return self.__dict__ == other.__dict__
def __ne__(self, other):
"""
Returns true if both objects are not equal
"""
return not self == other
| 33.313725 | 662 | 0.611732 |
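The generated `to_dict()` above walks `swagger_types`, recursing into any attribute that itself exposes `to_dict`, so nested models serialize in a single call. A minimal stdlib re-creation of that idea (these tiny model classes are invented for the demo; the real generated code also handles dict-valued attributes):

```python
class MiniModel(object):
    """Tiny stand-in for a generated swagger model."""
    swagger_types = {}

    def to_dict(self):
        result = {}
        for attr in self.swagger_types:
            value = getattr(self, attr)
            if hasattr(value, "to_dict"):        # nested model
                result[attr] = value.to_dict()
            elif isinstance(value, list):        # list of models or scalars
                result[attr] = [v.to_dict() if hasattr(v, "to_dict") else v
                                for v in value]
            else:                                # plain scalar
                result[attr] = value
        return result


class Http(MiniModel):
    swagger_types = {"paths": "list[str]"}

    def __init__(self, paths):
        self.paths = paths


class Rule(MiniModel):
    swagger_types = {"host": "str", "http": "Http"}

    def __init__(self, host, http):
        self.host = host
        self.http = http


rule = Rule("example.com", Http(["/api"]))
print(rule.to_dict())  # → {'host': 'example.com', 'http': {'paths': ['/api']}}
```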
b7aa4051dcae572d94a1b018c96355230faf7660 | 583 | py | Python | app/schemas.py | abmorte/PrivacyChain | 3b7157ad59a333e08e2504f01b5aa94f02f79b26 | [
"MIT"
] | 2 | 2022-02-18T11:55:38.000Z | 2022-02-18T12:14:17.000Z | app/schemas.py | abmorte/PrivacyChain | 3b7157ad59a333e08e2504f01b5aa94f02f79b26 | [
"MIT"
] | null | null | null | app/schemas.py | abmorte/PrivacyChain | 3b7157ad59a333e08e2504f01b5aa94f02f79b26 | [
"MIT"
] | null | null | null | from typing import List, Optional
from pydantic import BaseModel
class TrackingBase(BaseModel):
canonical_data: str
anonymized_data: str
blockchain_id: int
transaction_id: str
salt: str
hash_method: str
tracking_dt: str
locator: str
class TrackingCreate(TrackingBase):
pass
class Tracking(TrackingBase):
    tracking_id: int
class Config:
orm_mode = True | 20.103448 | 35 | 0.692967 |
950497c9146cf36401ad4e39fedfd100129fecf9 | 2,852 | py | Python | AdminLTE/settings.py | StephenPCG/django-adminlte-templates | 183ace7c0bb36886e37fec80cad3d20ec1a99783 | [
"MIT"
] | 36 | 2015-09-04T13:28:39.000Z | 2021-04-14T07:23:04.000Z | AdminLTE/settings.py | StephenPCG/django-adminlte-templates | 183ace7c0bb36886e37fec80cad3d20ec1a99783 | [
"MIT"
] | 7 | 2015-09-04T13:28:18.000Z | 2017-01-19T22:19:05.000Z | AdminLTE/settings.py | StephenPCG/django-adminlte-templates | 183ace7c0bb36886e37fec80cad3d20ec1a99783 | [
"MIT"
] | 19 | 2015-09-30T05:04:40.000Z | 2018-08-22T12:55:11.000Z | # -*- coding: utf-8 -*-
from django.conf import settings as djangosettings
from .widgets.sidebar import Sidebar
from .default_settings import DEFAULT_SETTINGS
def store_get(store, key, default=None):
frags = key.split('.')
for frag in frags:
try:
if frag.isdigit():
frag = int(frag)
store = store[frag]
except:
return default
return store
def store_set(store, key, value, only_if_nonexist=False):
frags = key.split('.')
for frag in frags[:-1]:
        if frag in store:
            store = store[frag]
        else:
            store[frag] = dict()
            store = store[frag]
    if only_if_nonexist and frags[-1] in store:
return
store[frags[-1]] = value
class SettingsBase(object):
store = dict()
instance_store = None
loaded = False
def iget(self, key, default=None):
if not self.instance_store:
self.instance_store = dict()
class FakeDefault(): pass
ret1 = store_get(self.instance_store, key, default=FakeDefault())
ret2 = store_get(self.store, key, default=FakeDefault())
if isinstance(ret1, FakeDefault):
if isinstance(ret2, FakeDefault):
return default
else:
return ret2
if isinstance(ret1, dict) and isinstance(ret2, dict):
ret1.update(ret2)
return ret1
def iset(self, key, value, only_if_nonexist=False):
if not self.instance_store:
self.instance_store = dict()
return store_set(self.instance_store, key, value, only_if_nonexist)
@classmethod
def get(cls, key, default=None):
return store_get(cls.store, key, default)
@classmethod
def set(cls, key, value, only_if_nonexist=False):
store_set(cls.store, key, value, only_if_nonexist)
@classmethod
def _load(cls, settings, prefix=None):
for key, value in settings.items():
if not prefix:
next_prefix = key
else:
next_prefix = prefix + '.' + key
if isinstance(value, dict):
cls._load(value, next_prefix)
else:
cls.set(next_prefix, value)
@classmethod
def load(cls, settings, prefix=None):
if not cls.loaded:
cls._load(settings, prefix)
cls.loaded = True
# adding extra settings
cls.set('LOGIN.HAS_SOCIAL', len(cls.get('LOGIN.SOCIALS', []))>0)
cls.set('LANGUAGE_CODE', getattr(djangosettings, 'LANGUAGE_CODE', 'en-us'), True)
class Meta(type):
def __getitem__(self, key):
return self.get(key)
class Settings(SettingsBase):
store = DEFAULT_SETTINGS
__metaclass__ = Meta
def __getitem__(self, key):
return self.iget(key)
| 27.68932 | 89 | 0.593619 |
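`store_get` and `store_set` above treat a dotted key as a path through nested containers, with numeric fragments indexing into lists. A trimmed, Python-3 sketch of the same accessors:

```python
def store_get(store, key, default=None):
    """Walk nested dicts/lists along a dotted key path."""
    for frag in key.split("."):
        try:
            if frag.isdigit():       # numeric fragment: list index
                frag = int(frag)
            store = store[frag]
        except (TypeError, KeyError, IndexError):
            return default
    return store


def store_set(store, key, value):
    """Create intermediate dicts as needed, then set the leaf value."""
    frags = key.split(".")
    for frag in frags[:-1]:
        store = store.setdefault(frag, {})
    store[frags[-1]] = value


data = {"LOGIN": {"SOCIALS": ["github", "google"]}}
print(store_get(data, "LOGIN.SOCIALS.1"))     # → google
store_set(data, "LOGIN.HAS_SOCIAL", True)
print(store_get(data, "LOGIN.HAS_SOCIAL"))    # → True
print(store_get(data, "MISSING.KEY", "n/a"))  # → n/a
```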
3964a41e850ae898b8e10eab5610d719d214cd67 | 14,880 | py | Python | python/ray/tune/cloud.py | yuanchi2807/ray | cf512254bb4bcd71ff1818dff5c868ab10c5f620 | [
"Apache-2.0"
] | 1 | 2022-03-28T06:18:30.000Z | 2022-03-28T06:18:30.000Z | python/ray/tune/cloud.py | yuanchi2807/ray | cf512254bb4bcd71ff1818dff5c868ab10c5f620 | [
"Apache-2.0"
] | null | null | null | python/ray/tune/cloud.py | yuanchi2807/ray | cf512254bb4bcd71ff1818dff5c868ab10c5f620 | [
"Apache-2.0"
] | 1 | 2022-03-24T22:48:21.000Z | 2022-03-24T22:48:21.000Z | import os
import shutil
import tempfile
import warnings
from typing import Optional
from ray import logger
from ray.ml.checkpoint import (
Checkpoint,
_get_local_path,
_get_external_path,
)
from ray.util import log_once
from ray.util.annotations import Deprecated
from ray.util.ml_utils.cloud import (
download_from_bucket,
clear_bucket,
upload_to_bucket,
is_cloud_target,
)
@Deprecated
class _TrialCheckpoint(os.PathLike):
def __init__(
self, local_path: Optional[str] = None, cloud_path: Optional[str] = None
):
self._local_path = local_path
self._cloud_path_tcp = cloud_path
@property
def local_path(self):
return self._local_path
@local_path.setter
def local_path(self, path: str):
self._local_path = path
@property
def cloud_path(self):
return self._cloud_path_tcp
@cloud_path.setter
def cloud_path(self, path: str):
self._cloud_path_tcp = path
# The following magic methods are implemented to keep backwards
# compatibility with the old path-based return values.
def __str__(self):
return self.local_path or self.cloud_path
def __fspath__(self):
return self.local_path
    def __eq__(self, other):
        if isinstance(other, str):
            return self.local_path == other
        if isinstance(other, TrialCheckpoint):
            return (
                self.local_path == other.local_path
                and self.cloud_path == other.cloud_path
            )
        return NotImplemented
def __add__(self, other):
if isinstance(other, str):
return self.local_path + other
raise NotImplementedError
def __radd__(self, other):
if isinstance(other, str):
return other + self.local_path
raise NotImplementedError
def __repr__(self):
return (
f"<TrialCheckpoint "
f"local_path={self.local_path}, "
f"cloud_path={self.cloud_path}"
f">"
)
def download(
self,
cloud_path: Optional[str] = None,
local_path: Optional[str] = None,
overwrite: bool = False,
) -> str:
"""Download checkpoint from cloud.
This will fetch the checkpoint directory from cloud storage
and save it to ``local_path``.
If a ``local_path`` argument is provided and ``self.local_path``
is unset, it will be set to ``local_path``.
Args:
cloud_path (Optional[str]): Cloud path to load checkpoint from.
Defaults to ``self.cloud_path``.
local_path (Optional[str]): Local path to save checkpoint at.
Defaults to ``self.local_path``.
overwrite (bool): If True, overwrites potential existing local
checkpoint. If False, exits if ``self.local_dir`` already
exists and has files in it.
"""
cloud_path = cloud_path or self.cloud_path
if not cloud_path:
raise RuntimeError(
"Could not download trial checkpoint: No cloud "
"path is set. Fix this by either passing a "
"`cloud_path` to your call to `download()` or by "
"passing a `cloud_path` into the constructor. The latter "
"should automatically be done if you pass the correct "
"`tune.SyncConfig`."
)
local_path = local_path or self.local_path
if not local_path:
raise RuntimeError(
"Could not download trial checkpoint: No local "
"path is set. Fix this by either passing a "
"`local_path` to your call to `download()` or by "
"passing a `local_path` into the constructor."
)
# Only update local path if unset
if not self.local_path:
self.local_path = local_path
if (
not overwrite
and os.path.exists(local_path)
and len(os.listdir(local_path)) > 0
):
# Local path already exists and we should not overwrite,
# so return.
return local_path
# Else: Actually download
# Delete existing dir
shutil.rmtree(local_path, ignore_errors=True)
# Re-create
os.makedirs(local_path, 0o755, exist_ok=True)
# Here we trigger the actual download
download_from_bucket(cloud_path, local_path)
# Local dir exists and is not empty
return local_path
def upload(
self,
cloud_path: Optional[str] = None,
local_path: Optional[str] = None,
clean_before: bool = False,
):
"""Upload checkpoint to cloud.
This will push the checkpoint directory from local storage
to ``cloud_path``.
If a ``cloud_path`` argument is provided and ``self.cloud_path``
is unset, it will be set to ``cloud_path``.
Args:
cloud_path (Optional[str]): Cloud path to load checkpoint from.
Defaults to ``self.cloud_path``.
local_path (Optional[str]): Local path to save checkpoint at.
Defaults to ``self.local_path``.
clean_before (bool): If True, deletes potentially existing
cloud bucket before storing new data.
"""
local_path = local_path or self.local_path
if not local_path:
raise RuntimeError(
"Could not upload trial checkpoint: No local "
"path is set. Fix this by either passing a "
"`local_path` to your call to `upload()` or by "
"passing a `local_path` into the constructor."
)
cloud_path = cloud_path or self.cloud_path
if not cloud_path:
raise RuntimeError(
"Could not download trial checkpoint: No cloud "
"path is set. Fix this by either passing a "
"`cloud_path` to your call to `download()` or by "
"passing a `cloud_path` into the constructor. The latter "
"should automatically be done if you pass the correct "
"`tune.SyncConfig`."
)
if not self.cloud_path:
self.cloud_path = cloud_path
if clean_before:
logger.info(f"Clearing bucket contents before upload: {cloud_path}")
clear_bucket(cloud_path)
# Actually upload
upload_to_bucket(cloud_path, local_path)
return cloud_path
def save(self, path: Optional[str] = None, force_download: bool = False):
"""Save trial checkpoint to directory or cloud storage.
If the ``path`` is a local target and the checkpoint already exists
on local storage, the local directory is copied. Else, the checkpoint
is downloaded from cloud storage.
If the ``path`` is a cloud target and the checkpoint does not already
exist on local storage, it is downloaded from cloud storage before.
That way checkpoints can be transferred across cloud storage providers.
Args:
path (Optional[str]): Path to save checkpoint at. If empty,
the default cloud storage path is saved to the default
local directory.
force_download (bool): If ``True``, forces (re-)download of
the checkpoint. Defaults to ``False``.
"""
temp_dirs = set()
# Per default, save cloud checkpoint
if not path:
if self.cloud_path and self.local_path:
path = self.local_path
elif not self.cloud_path:
raise RuntimeError(
"Cannot save trial checkpoint: No cloud path "
"found. If the checkpoint is already on the node, "
"you can pass a `path` argument to save it at another "
"location."
)
else:
# No self.local_path
raise RuntimeError(
"Cannot save trial checkpoint: No target path "
"specified and no default local directory available. "
"Please pass a `path` argument to `save()`."
)
elif not self.local_path and not self.cloud_path:
raise RuntimeError(
f"Cannot save trial checkpoint to cloud target "
f"`{path}`: No existing local or cloud path was "
f"found. This indicates an error when loading "
f"the checkpoints. Please report this issue."
)
if is_cloud_target(path):
# Storing on cloud
if not self.local_path:
# No local copy, yet. Download to temp dir
local_path = tempfile.mkdtemp(prefix="tune_checkpoint_")
temp_dirs.add(local_path)
else:
local_path = self.local_path
if self.cloud_path:
# Do not update local path as it might be a temp file
local_path = self.download(
local_path=local_path, overwrite=force_download
)
# Remove pointer to a temporary directory
if self.local_path in temp_dirs:
self.local_path = None
# We should now have a checkpoint available locally
if not os.path.exists(local_path) or len(os.listdir(local_path)) == 0:
raise RuntimeError(
f"No checkpoint found in directory `{local_path}` after "
f"download - maybe the bucket is empty or downloading "
f"failed?"
)
# Only update cloud path if it wasn't set before
cloud_path = self.upload(
cloud_path=path, local_path=local_path, clean_before=True
)
# Clean up temporary directories
for temp_dir in temp_dirs:
shutil.rmtree(temp_dir)
return cloud_path
local_path_exists = (
self.local_path
and os.path.exists(self.local_path)
and len(os.listdir(self.local_path)) > 0
)
# Else: path is a local target
if self.local_path and local_path_exists and not force_download:
# If we have a local copy, use it
if path == self.local_path:
# Nothing to do
return self.local_path
# Both local, just copy tree
if os.path.exists(path):
shutil.rmtree(path)
shutil.copytree(self.local_path, path)
return path
# Else: Download
try:
return self.download(local_path=path, overwrite=force_download)
except Exception as e:
raise RuntimeError(
"Cannot save trial checkpoint to local target as downloading "
"from cloud failed. Did you pass the correct `SyncConfig`?"
) from e
@Deprecated
class TrialCheckpoint(Checkpoint, _TrialCheckpoint):
def __init__(
self,
local_path: Optional[str] = None,
cloud_path: Optional[str] = None,
):
_TrialCheckpoint.__init__(self)
# Checkpoint does not allow empty data, but TrialCheckpoint
# did. To keep backwards compatibility, we use a placeholder URI
# here, and manually set self._uri and self._local_dir later.
PLACEHOLDER = "s3://placeholder"
Checkpoint.__init__(self, uri=PLACEHOLDER)
# Reset local variables
self._uri = None
self._local_path = None
self._cloud_path_tcp = None
self._local_path_tcp = None
locations = set()
if local_path:
# Add _tcp to not conflict with Checkpoint._local_path
self._local_path_tcp = local_path
if os.path.exists(local_path):
self._local_path = local_path
locations.add(local_path)
if cloud_path:
self._cloud_path_tcp = cloud_path
self._uri = cloud_path
locations.add(cloud_path)
self._locations = locations
@property
def local_path(self):
local_path = _get_local_path(self._local_path)
if not local_path:
for candidate in self._locations:
local_path = _get_local_path(candidate)
if local_path:
break
return local_path or self._local_path_tcp
@local_path.setter
def local_path(self, path: str):
self._local_path = path
if not path or not os.path.exists(path):
return
self._locations.add(path)
@property
def cloud_path(self):
cloud_path = _get_external_path(self._uri)
if not cloud_path:
for candidate in self._locations:
cloud_path = _get_external_path(candidate)
if cloud_path:
break
return cloud_path or self._cloud_path_tcp
@cloud_path.setter
def cloud_path(self, path: str):
self._cloud_path_tcp = path
if not self._uri:
self._uri = path
self._locations.add(path)
def download(
self,
cloud_path: Optional[str] = None,
local_path: Optional[str] = None,
overwrite: bool = False,
) -> str:
if log_once("trial_checkpoint_download_deprecated"):
warnings.warn(
"`checkpoint.download()` is deprecated and will be removed in "
"the future. Please use `checkpoint.to_directory()` instead.",
DeprecationWarning,
)
return _TrialCheckpoint.download(self, cloud_path, local_path, overwrite)
def upload(
self,
cloud_path: Optional[str] = None,
local_path: Optional[str] = None,
clean_before: bool = False,
):
if log_once("trial_checkpoint_upload_deprecated"):
warnings.warn(
"`checkpoint.upload()` is deprecated and will be removed in "
"the future. Please use `checkpoint.to_uri()` instead.",
DeprecationWarning,
)
return _TrialCheckpoint.upload(self, cloud_path, local_path, clean_before)
def save(self, path: Optional[str] = None, force_download: bool = False):
if log_once("trial_checkpoint_save_deprecated"):
warnings.warn(
"`checkpoint.save()` is deprecated and will be removed in "
"the future. Please use `checkpoint.to_directory()` or"
"`checkpoint.to_uri()` instead.",
DeprecationWarning,
)
return _TrialCheckpoint.save(self, path, force_download)
| 34.685315 | 82 | 0.583199 |
727bd6b0511931106f4e3c1a977668baf0fa008d | 1,816 | py | Python | Python/djaccard_simmiliarity.py | muthusk07/cs-algorithms | 5e05de538e672cf49dd6c689a317c32959a67902 | ["MIT"] | 239 | 2019-10-07T11:01:56.000Z | 2022-01-27T19:08:55.000Z | Python/djaccard_simmiliarity.py | muthusk07/cs-algorithms | 5e05de538e672cf49dd6c689a317c32959a67902 | ["MIT"] | 176 | 2019-10-07T06:59:49.000Z | 2020-09-30T08:16:22.000Z | Python/djaccard_simmiliarity.py | muthusk07/cs-algorithms | 5e05de538e672cf49dd6c689a317c32959a67902 | ["MIT"] | 441 | 2019-10-07T07:34:08.000Z | 2022-03-15T07:19:58.000Z |
'''
Given a list of sentences on the same topic (e.g. from a news article), this function
returns the sentence with the highest average similarity to the other sentences.
It is useful for finding the main sentence.
'''
import numpy as np
def jaccard_similarity(str_1, str_2):
""" compute of intersection to get similraity score between words """
str_1 = set(str_1.split())
str_2 = set(str_2.split())
intersect = str_1.intersection(str_2)
return float(len(intersect)) / (len(str_1) + len(str_2) - len(intersect))
def max_avg_jaccard_sim(sentence_list, show_avg=False):
""" Compute pairwise similarities for each sentence in the cluster and return the sentence with the maximum average similarity score """
text_avg_sim = {}
for idx in range(len(sentence_list)):
sim = []  # reset the scores for each candidate sentence
for text in sentence_list:
if len(text) < 2:
continue
similarity = jaccard_similarity(sentence_list[idx], text)
sim.append(similarity)
text_avg_sim[sentence_list[idx]] = sum(sim) / len(sim)
# key of max values
if show_avg:
return max(text_avg_sim, key=text_avg_sim.get), max(text_avg_sim.values())
else:
return max(text_avg_sim, key=text_avg_sim.get)
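As a quick sanity check of the intersection-over-union ratio above, here is a minimal self-contained sketch; the function is restated so the snippet runs on its own, and the example strings are invented:

```python
def jaccard_similarity(str_1, str_2):
    # |A ∩ B| / |A ∪ B|, where A and B are the word sets of the two strings
    str_1 = set(str_1.split())
    str_2 = set(str_2.split())
    intersect = str_1.intersection(str_2)
    return float(len(intersect)) / (len(str_1) + len(str_2) - len(intersect))

# "the" and "cat" are shared (2 words); the union has 4 words -> 2 / 4 = 0.5
print(jaccard_similarity("the cat sat", "the cat ran"))  # -> 0.5
```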
if __name__ == "__main__":
sentences = ["Manchester United midfielder Scott McTominay secured the Man-of-the Match award after the Reds’ 3-1 win at Norwich City.",
"The 21-year-old took home 50 per cent of your vote after netting our 21st-minute opener – which was our 2,000th strike in Premier League history.",
"The Scotland international said he was delighted with the team's performance, as well as securing his place in the record books."]
main_sent = max_avg_jaccard_sim(sentence_list=sentences)
print(main_sent)
| 49.081081 | 164 | 0.686123 |
4dbd86f50c793900b9d5f316d2948c4a20a0360f | 2,866 | py | Python | tests/base.py | vishalbelsare/neupy | 684313cdaddcad326f2169384fb15ec3aa29d991 | ["MIT"] | null | null | null | tests/base.py | vishalbelsare/neupy | 684313cdaddcad326f2169384fb15ec3aa29d991 | ["MIT"] | null | null | null | tests/base.py | vishalbelsare/neupy | 684313cdaddcad326f2169384fb15ec3aa29d991 | ["MIT"] | null | null | null |
import pickle
import inspect
import logging
import unittest
import numpy as np
from neupy import environment, layers
from utils import vectors_for_testing
class BaseTestCase(unittest.TestCase):
verbose = False
random_seed = 0
use_sandbox_mode = True
def setUp(self):
environment.reproducible(seed=self.random_seed)
if not self.verbose:
logging.disable(logging.CRITICAL)
if self.use_sandbox_mode:
# Optimize unit tests speed. In general all task very
# simple so some Theano optimizations can be redundant.
environment.sandbox()
# Clean identifiers map for each test
layers.BaseLayer.global_identifiers_map = {}
def assertItemsEqual(self, list1, list2):
self.assertEqual(sorted(list1), sorted(list2))
def assertInvalidVectorTrain(self, network, input_vector, target=None,
decimal=5, is_feature1d=True, **train_kwargs):
"""
Method helps test network prediction training using different
types of row or column vector.
"""
input_vectors = vectors_for_testing(input_vector, is_feature1d)
if target is not None:
target_vectors = vectors_for_testing(target, is_feature1d)
input_vectors = zip(input_vectors, target_vectors)
train_args = inspect.getargspec(network.train).args
if 'epochs' in train_args and 'epochs' not in train_kwargs:
train_kwargs['epochs'] = 5
elif 'epsilon' in train_args and 'epsilon' not in train_kwargs:
train_kwargs['epsilon'] = 0.1
for i, input_data in enumerate(input_vectors, start=1):
if target is None:
network.train(input_data, **train_kwargs)
else:
network.train(*input_data, **train_kwargs)
def assertInvalidVectorPred(self, network, input_vector, target,
decimal=5, is_feature1d=True):
"""
Method helps test network prediction procedure using different
types of row or column vector.
"""
test_vectors = vectors_for_testing(input_vector, is_feature1d)
for i, test_vector in enumerate(test_vectors, start=1):
predicted_vector = network.predict(test_vector)
np.testing.assert_array_almost_equal(predicted_vector, target,
decimal=decimal)
def assertPickledNetwork(self, network, input_data):
stored_network = pickle.dumps(network)
loaded_network = pickle.loads(stored_network)
network_prediction = network.predict(input_data)
loaded_network_prediction = loaded_network.predict(input_data)
np.testing.assert_array_almost_equal(
loaded_network_prediction, network_prediction)
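The `decimal` argument passed to `np.testing.assert_array_almost_equal` in the helpers above controls how tight the comparison is. A small stand-alone illustration (the arrays are arbitrary):

```python
import numpy as np

# Passes: |1.00001 - 1.0| = 1e-5 is below the decimal=4 tolerance (1.5 * 10**-4)
np.testing.assert_array_almost_equal([1.00001], [1.0], decimal=4)

# Fails: the same difference exceeds the decimal=6 tolerance (1.5 * 10**-6)
try:
    np.testing.assert_array_almost_equal([1.00001], [1.0], decimal=6)
except AssertionError:
    print("not equal to 6 decimals")
```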
| 34.53012 | 79 | 0.656315 |
839e58a10c44347e9044a1e2fb7a3e29e990df87 | 582 | py | Python | python/admin/tables/messages_table/delete_message.py | OSAMAMOHAMED1234/E-Commerce_Blueprint | eca5d0c2eb22a0e6a30bfd2499e85775b43ef919 | ["MIT"] | 1 | 2019-05-04T11:52:49.000Z | 2019-05-04T11:52:49.000Z | python/admin/tables/messages_table/delete_message.py | osama-mohamed/E-Commerce_Blueprint | eca5d0c2eb22a0e6a30bfd2499e85775b43ef919 | ["MIT"] | null | null | null | python/admin/tables/messages_table/delete_message.py | osama-mohamed/E-Commerce_Blueprint | eca5d0c2eb22a0e6a30bfd2499e85775b43ef919 | ["MIT"] | null | null | null |
from python.admin.login.login_check import *
from python.database.flask_database import *
delete_message_admin = Blueprint('delete_message_admin', __name__)
# delete message
@delete_message_admin.route('/admin/delete_message/<id>', methods=['post', 'get'])
@is_admin_logged_in
def admin_delete_message(id):
cur = mysql.connection.cursor()
cur.execute("DELETE FROM contact_us WHERE id = %s ;", [id])
mysql.connection.commit()
cur.close()
flash('You have successfully deleted the message!', 'success')
return redirect(url_for('dashboard.admin_dashboard'))
| 32.333333 | 82 | 0.745704 |
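The `%s` placeholder in the `DELETE` statement above lets the MySQL driver bind `id` as a parameter instead of interpolating it into the SQL string, which prevents SQL injection. The same idea sketched with the stdlib `sqlite3` driver (which uses `?` placeholders; the table contents are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE contact_us (id INTEGER PRIMARY KEY, body TEXT)")
conn.execute("INSERT INTO contact_us (id, body) VALUES (1, 'hello')")

# The driver binds the value, so a malicious id cannot alter the statement
conn.execute("DELETE FROM contact_us WHERE id = ?", (1,))
conn.commit()

remaining = conn.execute("SELECT COUNT(*) FROM contact_us").fetchone()[0]
print(remaining)  # -> 0
```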
b4cd7ac6e205acfde072f46e5617cc106153d70d | 10,689 | py | Python | magnum/tests/unit/objects/test_cluster.py | piersharding/magnum | 451358a57c4bd8fd93bab8121cfb5698d5471568 | ["Apache-2.0"] | 1 | 2020-01-15T08:25:38.000Z | 2020-01-15T08:25:38.000Z | magnum/tests/unit/objects/test_cluster.py | HNicholas/magnum | 410b7fd1055aa99d517ee82c6dcc2dba53457179 | ["Apache-2.0"] | 2 | 2019-06-14T12:13:53.000Z | 2020-07-10T00:30:53.000Z | magnum/tests/unit/objects/test_cluster.py | zonca/magnum | 3217e75b63fb9ebeb37b462677cdb72315ca1067 | ["Apache-2.0"] | null | null | null |
# Copyright 2015 OpenStack Foundation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import mock
from oslo_utils import uuidutils
from testtools.matchers import HasLength
from magnum.common import exception
from magnum import objects
from magnum.tests.unit.db import base
from magnum.tests.unit.db import utils
class TestClusterObject(base.DbTestCase):
def setUp(self):
super(TestClusterObject, self).setUp()
self.fake_cluster = utils.get_test_cluster()
self.fake_nodegroups = utils.get_nodegroups_for_cluster()
self.fake_cluster['trust_id'] = 'trust_id'
self.fake_cluster['trustee_username'] = 'trustee_user'
self.fake_cluster['trustee_user_id'] = 'trustee_user_id'
self.fake_cluster['trustee_password'] = 'password'
self.fake_cluster['coe_version'] = 'fake-coe-version'
self.fake_cluster['container_version'] = 'fake-container-version'
cluster_template_id = self.fake_cluster['cluster_template_id']
self.fake_cluster_template = objects.ClusterTemplate(
uuid=cluster_template_id)
self.fake_cluster['keypair'] = 'keypair1'
self.fake_cluster['docker_volume_size'] = 3
self.fake_cluster['labels'] = {}
self.fake_cluster['health_status'] = 'HEALTHY'
self.fake_cluster['health_status_reason'] = {}
@mock.patch('magnum.objects.ClusterTemplate.get_by_uuid')
def test_get_by_id(self, mock_cluster_template_get):
cluster_id = self.fake_cluster['id']
with mock.patch.object(self.dbapi, 'get_cluster_by_id',
autospec=True) as mock_get_cluster:
mock_cluster_template_get.return_value = self.fake_cluster_template
mock_get_cluster.return_value = self.fake_cluster
cluster = objects.Cluster.get(self.context, cluster_id)
mock_get_cluster.assert_called_once_with(self.context, cluster_id)
self.assertEqual(self.context, cluster._context)
self.assertEqual(cluster.cluster_template_id,
cluster.cluster_template.uuid)
@mock.patch('magnum.objects.ClusterTemplate.get_by_uuid')
def test_get_by_uuid(self, mock_cluster_template_get):
uuid = self.fake_cluster['uuid']
with mock.patch.object(self.dbapi, 'get_cluster_by_uuid',
autospec=True) as mock_get_cluster:
mock_cluster_template_get.return_value = self.fake_cluster_template
mock_get_cluster.return_value = self.fake_cluster
cluster = objects.Cluster.get(self.context, uuid)
mock_get_cluster.assert_called_once_with(self.context, uuid)
self.assertEqual(self.context, cluster._context)
self.assertEqual(cluster.cluster_template_id,
cluster.cluster_template.uuid)
@mock.patch('magnum.objects.ClusterTemplate.get_by_uuid')
def test_get_by_name(self, mock_cluster_template_get):
name = self.fake_cluster['name']
with mock.patch.object(self.dbapi, 'get_cluster_by_name',
autospec=True) as mock_get_cluster:
mock_cluster_template_get.return_value = self.fake_cluster_template
mock_get_cluster.return_value = self.fake_cluster
cluster = objects.Cluster.get_by_name(self.context, name)
mock_get_cluster.assert_called_once_with(self.context, name)
self.assertEqual(self.context, cluster._context)
self.assertEqual(cluster.cluster_template_id,
cluster.cluster_template.uuid)
def test_get_bad_id_and_uuid(self):
self.assertRaises(exception.InvalidIdentity,
objects.Cluster.get, self.context, 'not-a-uuid')
@mock.patch('magnum.objects.ClusterTemplate.get_by_uuid')
def test_list(self, mock_cluster_template_get):
with mock.patch.object(self.dbapi, 'get_cluster_list',
autospec=True) as mock_get_list:
mock_get_list.return_value = [self.fake_cluster]
mock_cluster_template_get.return_value = self.fake_cluster_template
clusters = objects.Cluster.list(self.context)
self.assertEqual(1, mock_get_list.call_count)
self.assertThat(clusters, HasLength(1))
self.assertIsInstance(clusters[0], objects.Cluster)
self.assertEqual(self.context, clusters[0]._context)
self.assertEqual(clusters[0].cluster_template_id,
clusters[0].cluster_template.uuid)
@mock.patch('magnum.objects.ClusterTemplate.get_by_uuid')
def test_list_all(self, mock_cluster_template_get):
with mock.patch.object(self.dbapi, 'get_cluster_list',
autospec=True) as mock_get_list:
mock_get_list.return_value = [self.fake_cluster]
mock_cluster_template_get.return_value = self.fake_cluster_template
self.context.all_tenants = True
clusters = objects.Cluster.list(self.context)
mock_get_list.assert_called_once_with(
self.context, limit=None, marker=None, filters=None,
sort_dir=None, sort_key=None)
self.assertEqual(1, mock_get_list.call_count)
self.assertThat(clusters, HasLength(1))
self.assertIsInstance(clusters[0], objects.Cluster)
self.assertEqual(self.context, clusters[0]._context)
@mock.patch('magnum.objects.ClusterTemplate.get_by_uuid')
def test_list_with_filters(self, mock_cluster_template_get):
with mock.patch.object(self.dbapi, 'get_cluster_list',
autospec=True) as mock_get_list:
mock_get_list.return_value = [self.fake_cluster]
mock_cluster_template_get.return_value = self.fake_cluster_template
filters = {'name': 'cluster1'}
clusters = objects.Cluster.list(self.context, filters=filters)
mock_get_list.assert_called_once_with(self.context, sort_key=None,
sort_dir=None,
filters=filters, limit=None,
marker=None)
self.assertEqual(1, mock_get_list.call_count)
self.assertThat(clusters, HasLength(1))
self.assertIsInstance(clusters[0], objects.Cluster)
self.assertEqual(self.context, clusters[0]._context)
@mock.patch('magnum.objects.ClusterTemplate.get_by_uuid')
def test_create(self, mock_cluster_template_get):
with mock.patch.object(self.dbapi, 'create_cluster',
autospec=True) as mock_create_cluster:
mock_cluster_template_get.return_value = self.fake_cluster_template
mock_create_cluster.return_value = self.fake_cluster
cluster = objects.Cluster(self.context, **self.fake_cluster)
cluster.create()
mock_create_cluster.assert_called_once_with(self.fake_cluster)
self.assertEqual(self.context, cluster._context)
@mock.patch('magnum.objects.ClusterTemplate.get_by_uuid')
def test_destroy(self, mock_cluster_template_get):
uuid = self.fake_cluster['uuid']
with mock.patch.object(self.dbapi, 'get_cluster_by_uuid',
autospec=True) as mock_get_cluster:
mock_get_cluster.return_value = self.fake_cluster
mock_cluster_template_get.return_value = self.fake_cluster_template
with mock.patch.object(self.dbapi, 'destroy_cluster',
autospec=True) as mock_destroy_cluster:
cluster = objects.Cluster.get_by_uuid(self.context, uuid)
cluster.destroy()
mock_get_cluster.assert_called_once_with(self.context, uuid)
mock_destroy_cluster.assert_called_once_with(uuid)
self.assertEqual(self.context, cluster._context)
@mock.patch('magnum.objects.ClusterTemplate.get_by_uuid')
def test_save(self, mock_cluster_template_get):
uuid = self.fake_cluster['uuid']
with mock.patch.object(self.dbapi, 'get_cluster_by_uuid',
autospec=True) as mock_get_cluster:
mock_cluster_template_get.return_value = self.fake_cluster_template
mock_get_cluster.return_value = self.fake_cluster
with mock.patch.object(self.dbapi, 'update_cluster',
autospec=True) as mock_update_cluster:
cluster = objects.Cluster.get_by_uuid(self.context, uuid)
cluster.status = 'DELETE_IN_PROGRESS'
cluster.save()
mock_get_cluster.assert_called_once_with(self.context, uuid)
mock_update_cluster.assert_called_once_with(
uuid, {'status': 'DELETE_IN_PROGRESS',
'cluster_template': self.fake_cluster_template})
self.assertEqual(self.context, cluster._context)
@mock.patch('magnum.objects.ClusterTemplate.get_by_uuid')
def test_refresh(self, mock_cluster_template_get):
uuid = self.fake_cluster['uuid']
new_uuid = uuidutils.generate_uuid()
returns = [dict(self.fake_cluster, uuid=uuid),
dict(self.fake_cluster, uuid=new_uuid)]
expected = [mock.call(self.context, uuid),
mock.call(self.context, uuid)]
with mock.patch.object(self.dbapi, 'get_cluster_by_uuid',
side_effect=returns,
autospec=True) as mock_get_cluster:
mock_cluster_template_get.return_value = self.fake_cluster_template
cluster = objects.Cluster.get_by_uuid(self.context, uuid)
self.assertEqual(uuid, cluster.uuid)
cluster.refresh()
self.assertEqual(new_uuid, cluster.uuid)
self.assertEqual(expected, mock_get_cluster.call_args_list)
self.assertEqual(self.context, cluster._context)
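The `mock.patch.object(..., autospec=True)` pattern used throughout these tests can be reduced to a small generic sketch; the `Api` class here is invented purely for illustration:

```python
from unittest import mock

class Api:
    def fetch(self, key):
        return "real:" + key

# autospec=True makes the mock enforce the real signature of Api.fetch,
# so calls with wrong arguments fail loudly instead of passing silently.
with mock.patch.object(Api, "fetch", autospec=True) as mock_fetch:
    mock_fetch.return_value = "stubbed"
    result = Api().fetch("some-key")

print(result)  # -> stubbed
mock_fetch.assert_called_once()
```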
| 53.179104 | 79 | 0.659744 |
a709a2e15038f9d38ef36baac86a0f748b5ee6fa | 960 | py | Python | blender/arm/logicnode/input/LN_get_mouse_movement.py | Lykdraft/armory | da1cf33930ce9a8b1865d35c128fe4842bef2933 | ["Zlib"] | null | null | null | blender/arm/logicnode/input/LN_get_mouse_movement.py | Lykdraft/armory | da1cf33930ce9a8b1865d35c128fe4842bef2933 | ["Zlib"] | null | null | null | blender/arm/logicnode/input/LN_get_mouse_movement.py | Lykdraft/armory | da1cf33930ce9a8b1865d35c128fe4842bef2933 | ["Zlib"] | null | null | null |
from arm.logicnode.arm_nodes import *
class GetMouseMovementNode(ArmLogicTreeNode):
"""Get the movement coordinates of the mouse."""
bl_idname = 'LNGetMouseMovementNode'
bl_label = 'Get Mouse Movement'
arm_version = 1
def init(self, context):
super(GetMouseMovementNode, self).init(context)
self.add_input('NodeSocketFloat', 'X Multiplier' , default_value=1.0)
self.add_input('NodeSocketFloat', 'Y Multiplier', default_value=-1.0)
self.add_input('NodeSocketFloat', 'Delta Multiplier', default_value=1.0)
self.add_output('NodeSocketFloat', 'X')
self.add_output('NodeSocketFloat', 'Y')
self.add_output('NodeSocketInt', 'Delta')
self.add_output('NodeSocketFloat', 'Multiplied X')
self.add_output('NodeSocketFloat', 'Multiplied Y')
self.add_output('NodeSocketFloat', 'Multiplied Delta')
add_node(GetMouseMovementNode, category=PKG_AS_CATEGORY, section='mouse')
| 43.636364 | 80 | 0.705208 |
86c7919db8dea4d024ba383014f629ec4c8841f3 | 1,314 | py | Python | app/core/tests/test_admin.py | aodanosullivan/recipe-app-api | 34f650ed23f009f5e0a3a0289566dd895498189b | ["Apache-2.0"] | null | null | null | app/core/tests/test_admin.py | aodanosullivan/recipe-app-api | 34f650ed23f009f5e0a3a0289566dd895498189b | ["Apache-2.0"] | null | null | null | app/core/tests/test_admin.py | aodanosullivan/recipe-app-api | 34f650ed23f009f5e0a3a0289566dd895498189b | ["Apache-2.0"] | null | null | null |
from django.test import TestCase, Client
from django.contrib.auth import get_user_model
from django.urls import reverse
class AdminSiteTests(TestCase):
def setUp(self):
self.client = Client()
self.admin_user = get_user_model().objects.create_superuser(
email='admin@test.com',
password='Password123',
)
self.client.force_login(self.admin_user)
self.user = get_user_model().objects.create_user(
email='test@test.com',
password='Password234',
name="Regular User"
)
def test_users_listed(self):
"""Test that users are listed on User page"""
url = reverse('admin:core_user_changelist')
resp = self.client.get(url)
self.assertContains(resp, self.user.name)
self.assertContains(resp, self.user.email)
def test_user_change_page(self):
"""Test that the user edit page works"""
url = reverse('admin:core_user_change', args=[self.user.id])
resp = self.client.get(url)
self.assertEqual(resp.status_code, 200)
def test_create_user_page(self):
"""Test that the create user page works"""
url = reverse('admin:core_user_add')
resp = self.client.get(url)
self.assertEqual(resp.status_code, 200)
| 32.85 | 68 | 0.640791 |
21f8f358b3e1ee7002c383ab8bec15e1914a3c70 | 5,926 | py | Python | app.py | bocheng47/LINE_bocheng | 87c0e605bd922ae2002f103ff16f0b0707794b50 | ["Apache-2.0"] | null | null | null | app.py | bocheng47/LINE_bocheng | 87c0e605bd922ae2002f103ff16f0b0707794b50 | ["Apache-2.0"] | null | null | null | app.py | bocheng47/LINE_bocheng | 87c0e605bd922ae2002f103ff16f0b0707794b50 | ["Apache-2.0"] | null | null | null |
import requests
import json
from flask import Flask, request, abort
from linebot import (
LineBotApi, WebhookHandler
)
from linebot.exceptions import (
InvalidSignatureError, LineBotApiError
)
from linebot.models import *
app = Flask(__name__)
# Channel Access Token
line_bot_api = LineBotApi('h36DEiW/iRpAgib0ryeNQmVAuKX6eaqEKzCj5QoT/2oXYXnRwFj9hCz0xeAflXyzPsk3lnrjZdsDlYGgXv56B7NMDSDHxgW4/AkLuhl8oNES0ts6/LkRY8EHcHl9YAgLdzn9mm1hw6BfC/psgq7figdB04t89/1O/w1cDnyilFU=')
# Channel Secret
handler = WebhookHandler('d57f500b5ac8ede281c2b196e1ad547b')
# Listen for all POST requests sent to /callback
@app.route("/callback", methods=['POST'])
def callback():
# get X-Line-Signature header value
signature = request.headers['X-Line-Signature']
# get request body as text
body = request.get_data(as_text=True)
app.logger.info("Request body: " + body)
# handle webhook body
try:
handler.handle(body, signature)
except InvalidSignatureError:
abort(400)
return 'OK'
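For reference, the `X-Line-Signature` header that `handler.handle` validates is an HMAC-SHA256 of the raw request body keyed with the channel secret, base64-encoded. A minimal sketch of that check (the function name is ours, not part of the SDK):

```python
import base64
import hashlib
import hmac

def valid_line_signature(channel_secret: str, body: str, signature: str) -> bool:
    # HMAC-SHA256 over the raw request body, keyed with the channel secret
    digest = hmac.new(channel_secret.encode("utf-8"),
                      body.encode("utf-8"), hashlib.sha256).digest()
    expected = base64.b64encode(digest).decode("utf-8")
    # compare_digest avoids timing side channels
    return hmac.compare_digest(expected, signature)
```

When this check fails, the handler raises `InvalidSignatureError`, which the route above maps to HTTP 400.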
# Default message (sent when the bot is followed)
@handler.add(FollowEvent)
def handle_follow(event):
print("in Follow")
default = """您好!我是柏丞 \U0001F604
可以輸入下列關鍵字,獲得更多資訊喔!
輸入:自我介紹、程式語言、工作經驗、GitHub
或是透過下方選單選取喔~"""
rich_menu_to_create = RichMenu(
size=RichMenuSize(width=2500, height=843),
selected=True,
name="Nice richmenu",
chat_bar_text="Tap here",
areas=[RichMenuArea(
bounds=RichMenuBounds(x=0, y=0, width=2500, height=843),
action=URIAction(label='Go to line.me', uri='https://line.me'))]
)
rich_menu_id = line_bot_api.create_rich_menu(rich_menu=rich_menu_to_create)
rich_menu = line_bot_api.get_rich_menu(rich_menu_id)
button_template_message =ButtonsTemplate(
thumbnail_image_url="https://i.imgur.com/mNnvVVe.jpg",
title='施柏丞自我介紹Line Rob',
text='可以透過下列選項了解我喔!',
image_size="cover",
actions=[
MessageTemplateAction(
label='自我介紹', text='自我介紹'
),
MessageTemplateAction(
label='程式語言', text='程式語言'
),
MessageTemplateAction(
label='工作經驗', text='工作經驗'
),
MessageTemplateAction(
label='Github', text='Github'
),
]
)
line_bot_api.reply_message(
event.reply_token,[
TextSendMessage(text=default),
TemplateSendMessage(alt_text = "電腦板無法顯示選單\U0001F614",template=button_template_message),
StickerSendMessage(package_id=1, sticker_id=13),
])
# Handle incoming text messages
@handler.add(MessageEvent, message=TextMessage)
def handle_message(event):
msg = (event.message.text).lower()
default = """您好!我是柏丞 \U0001F604
可以輸入下列關鍵字,獲得更多資訊喔!
輸入:自我介紹、程式語言、工作經驗、GitHub
或是透過下方選單選取喔~"""
intro = """我是施柏丞,目前就讀於中央大學資管系
個性外向、熱愛挑戰、充滿好奇心 \U0001F606
平常喜歡看小說、美劇、打排球
也是系學會活動部與資管系排球隊的一員!
很高興認識你!"""
ability = """我會的程式語言有:Python、PHP、Java、C、SQL、SAS
我會的軟體有:Weka、Xampp、Git、Postman、Anaconda、GitHub Desktop、Sublime
使用過的資料庫有:MySQL
使用過的框架有:Laravel
"""
experience = """(一)中央資管系網站開發
時間:2019/07-2019/09
職責:負責系網站全端開發與架設、資料上傳、提供網頁架構圖與程式註解。藉此深入學習PHP、MySQL、Laravel、Git共同開發、MVC架構;在這5人團隊中,我負責團隊的組織,以及與系辦接洽的工作。並從中學習團隊間的互相合作,以及一起討論UI/UX設計,取得共識,並互相指導,提升技術能力。
(二) 您好健康有限公司(程式設計實習生)
時間:2018/08-2019/02
職責:負責網頁前端與後端設計、管理與開發後台、產品測試、UI設計;在團隊中,我常常負責支援公司的工程師,以及對產品提出具體性的建議,也因為我的實質建議,在產品及Code設計也獲得改善,在大多時候需要獨自思考並解決問題,以及如何跳脫框架思考。在過程中建立工作上的自主性、創意和成長經驗。
"""
github = """Line Rob: https://github.com/bocheng47/LINE_bocheng
fifa19資料分析: https://github.com/bocheng47/fifa19_data_analysis
演算法視覺化 Sort_Visualization: https://github.com/bocheng47/Sort_Visualization
Knight Tour List Comprehension: https://github.com/bocheng47/Knight_tour_List_Comprehension"""
button_template_message =ButtonsTemplate(
thumbnail_image_url="https://i.imgur.com/mNnvVVe.jpg",
title='施柏丞自我介紹Line Rob',
text='可以透過下列選項了解我喔!',
image_size="cover",
actions=[
MessageTemplateAction(
label='自我介紹', text='自我介紹'
),
MessageTemplateAction(
label='程式語言', text='程式語言'
),
MessageTemplateAction(
label='工作經驗', text='工作經驗'
),
MessageTemplateAction(
label='Github', text='Github'
),
]
)
if ('hello' in msg) or ('早安' in msg) or ('你好' in msg):
line_bot_api.reply_message(event.reply_token,
TextSendMessage(text="哈囉, 祝你有愉快的一天"))
elif '自我介紹' in msg :
line_bot_api.reply_message(event.reply_token,[
TextSendMessage(text=intro),
StickerSendMessage(package_id=1, sticker_id=114),])
elif '程式語言' in msg :
line_bot_api.reply_message(event.reply_token,
TextSendMessage(text=ability))
elif '工作經驗' in msg :
line_bot_api.reply_message(event.reply_token,
TextSendMessage(text=experience))
elif 'github' in msg :
line_bot_api.reply_message(event.reply_token,
TextSendMessage(text=github))
else :
line_bot_api.reply_message(event.reply_token,[
TextSendMessage(text=default),
TemplateSendMessage(alt_text=default,template=button_template_message)
])
import os
if __name__ == "__main__":
port = int(os.environ.get('PORT', 5000))
app.run(host='0.0.0.0', port=port)
| 30.081218 | 201 | 0.60513 |
e636f2ad2b2f649e549d8ab48d40eb9fefc4eb38 | 9,480 | py | Python | buttersalt_ldap/views.py | lfzyx/ButterSalt-LDAP | 95ad25433422b3227277edb5053c15e109731dd5 | ["MIT"] | 1 | 2018-03-30T06:07:20.000Z | 2018-03-30T06:07:20.000Z | buttersalt_ldap/views.py | lfzyx/ButterSalt-LDAP | 95ad25433422b3227277edb5053c15e109731dd5 | ["MIT"] | null | null | null | buttersalt_ldap/views.py | lfzyx/ButterSalt-LDAP | 95ad25433422b3227277edb5053c15e109731dd5 | ["MIT"] | null | null | null |
import json
import os
import hashlib
import yaml
from base64 import encodebytes
from flask import Blueprint, render_template, flash, redirect, url_for
from flask_wtf import FlaskForm
from wtforms import StringField, SubmitField, PasswordField, TextAreaField, SelectMultipleField, widgets, SelectField
from wtforms.validators import InputRequired, Length, Email, Regexp, EqualTo, Optional, DataRequired
from flask_login import login_required
from ButterSalt import salt
from .ldap3 import Ldap3
class LdapAccount(FlaskForm):
cn = StringField('姓名拼音', validators=[InputRequired('姓名拼音是必须的'), Length(1, 64),
Regexp('^[A-Za-z]*$', 0,
'姓名拼音只能包含拼音')], render_kw={"placeholder": "zhangsan"})
ou = SelectField('部门', validators=[DataRequired()], default=None)
o = StringField('组 (可填)', validators=[Optional()])
mail = StringField('Email', validators=[InputRequired('Email是必须的'), Length(1, 64), Email()])
userPassword0 = PasswordField('密码', validators=[InputRequired('密码是必须的'),
EqualTo('userPassword1', message='密码必须相同.')])
userPassword1 = PasswordField('验证密码', validators=[InputRequired('验证密码是必须的')])
key = TextAreaField('Key (可填)', validators=[Optional()],
render_kw={"placeholder": "Begins with 'ssh-rsa', 'ssh-dss', 'ssh-ed25519', 'ecdsa-sha2-nistp25"
"6', 'ecdsa-sha2-nistp384', or 'ecdsa-sha2-nistp521'", "rows": "15"})
submit = SubmitField('提交')
class LdapAccountEdit(FlaskForm):
ou = SelectField('部门')
o = StringField('组 (可填)', validators=[Optional()])
userPassword0 = PasswordField('密码', validators=[Optional(),
EqualTo('userPassword1', message='密码必须相同.')],
render_kw={"placeholder": "**********"})
userPassword1 = PasswordField('验证密码', validators=[Optional(),
EqualTo('userPassword0', message='密码必须相同.')],
render_kw={"placeholder": "**********"})
key = TextAreaField('Key (可填)', validators=[Optional()],
render_kw={"placeholder": "Begins with 'ssh-rsa', 'ssh-dss', 'ssh-ed25519', 'ecdsa-sha2-nistp25"
"6', 'ecdsa-sha2-nistp384', or 'ecdsa-sha2-nistp521'", "rows": "15"})
minion = SelectMultipleField('主机登陆授权')
submit = SubmitField('提交')
ldap = Blueprint('ldap', __name__, url_prefix='/ldap', template_folder='templates')
@ldap.route('/', methods=['GET', 'POST'])
@login_required
def index():
ldap3 = Ldap3()
accounts = ldap3.search(scope='subtree', filterstr='(objectClass=organizationalPerson)')
return render_template('ldap/index.html', Data=accounts)
@ldap.route('/signup/', methods=['GET', 'POST'])
@login_required
def signup():
""" salt.modules.ldap3.add
"""
form = LdapAccount()
ldap3 = Ldap3()
ou_data = ldap3.search(scope='onelevel', filterstr='(objectClass=organizationalUnit)')
ou_list = [(ou_data.get(n).get('ou')[0], ou_data.get(n).get('ou')[0]) for n in ou_data]
ou_list.append((None, ''))
form.ou.choices = ou_list
if form.validate_on_submit():
def makessha(password):
salt = os.urandom(4)
h = hashlib.sha1(password.encode())
h.update(salt)
return "{SSHA}" + encodebytes(h.digest() + salt).decode()[:-1]
cn = form.cn.data
ou = form.ou.data
o = form.o.data
mail = form.mail.data
userpassword = makessha(form.userPassword0.data)
key = form.key.data
ldap3.add(cn=cn, ou=ou, o=o, userpassword=userpassword, mail=mail, key=key)
flash('Signup successfully')
return redirect(url_for('ldap.index'))
return render_template('ldap/signup.html', form=form)
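The `makessha` helper above appends the 4-byte salt to the SHA-1 digest before base64-encoding, so verification just has to split the decoded value back apart. A self-contained sketch of both directions (`check_ssha` is our name, not part of the app):

```python
import hashlib
import os
from base64 import decodebytes, encodebytes

def make_ssha(password: str) -> str:
    # SHA-1 of password + 4 random salt bytes; salt travels inside the encoded value
    salt = os.urandom(4)
    h = hashlib.sha1(password.encode())
    h.update(salt)
    return "{SSHA}" + encodebytes(h.digest() + salt).decode()[:-1]

def check_ssha(password: str, ssha_value: str) -> bool:
    raw = decodebytes(ssha_value[len("{SSHA}"):].encode())
    digest, salt = raw[:20], raw[20:]  # SHA-1 digests are 20 bytes long
    h = hashlib.sha1(password.encode())
    h.update(salt)
    return h.digest() == digest
```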
@ldap.route('/account/<name>', methods=['GET', 'POST'])
@login_required
def account_detail(name):
ldap3 = Ldap3()
account = ldap3.search(scope='subtree', filterstr='(cn=%s)' % (name,))
minion_list = json.loads(salt.get_accepted_keys())
belong_minion_list = list()
for minion in minion_list:
text = salt.read_pillar_file('user/%s.sls' % (minion,)).get('return')[0].get(
'/srv/pillar/user/%s.sls' % (minion,))
text2yaml = yaml.load(text)
if name in text2yaml.get('users'):
belong_minion_list.append(minion)
return render_template('ldap/account_detail.html', Data=list(account.values()),
belong_minion_list=belong_minion_list, minion_list=minion_list)
@ldap.route('/account/<name>/edit', methods=['GET', 'POST'])
@login_required
def account_edit(name):
minion_list = json.loads(salt.get_accepted_keys())
form_choices_list = list()
for n in minion_list:
form_choices_list.append((n, n))
belong_minion_list = list()
for minion in minion_list:
text = salt.read_pillar_file('user/%s.sls' % (minion,)).get('return')[0].get(
'/srv/pillar/user/%s.sls' % (minion,))
text2yaml = yaml.load(text)
if name in text2yaml.get('users'):
belong_minion_list.append(minion)
LdapAccountEdit.minion = SelectMultipleField('主机登陆授权', option_widget=widgets.CheckboxInput(),
widget=widgets.ListWidget(prefix_label=False),
choices=form_choices_list,
default=belong_minion_list)
ldap3 = Ldap3()
ou_data = ldap3.search(scope='onelevel', filterstr='(objectClass=organizationalUnit)')
ou_list = [(ou_data.get(n).get('ou')[0], ou_data.get(n).get('ou')[0]) for n in ou_data]
account = ldap3.search(scope='subtree', filterstr='(cn=%s)' % (name,))
default_ou = list(account.values())[0].get('ou')[0]
try:
default_o = list(account.values())[0].get('o')[0]
except (TypeError, IndexError):
default_o = ''
try:
default_key = list(account.values())[0].get('userPKCS12')[0]
except (TypeError, IndexError):
default_key = ''
LdapAccountEdit.ou = SelectField('部门', choices=ou_list, default=default_ou)
form = LdapAccountEdit()
if form.validate_on_submit():
def makessha(password):
salt = os.urandom(4)
h = hashlib.sha1(password.encode())
h.update(salt)
return "{SSHA}" + encodebytes(h.digest() + salt).decode()[:-1]
ldap3.modify(dn=list(account.keys())[0], op='replace', attr='ou', vals=form.ou.data)
if form.userPassword0.data:
userpassword = makessha(form.userPassword0.data)
ldap3.modify(dn=list(account.keys())[0], op='replace', attr='userPassword',vals=userpassword)
if form.key.data:
ldap3.modify(dn=list(account.keys())[0], op='replace', attr='userPKCS12', vals=form.key.data)
if form.o.data:
ldap3.modify(dn=list(account.keys())[0], op='replace', attr='o', vals=form.o.data)
minion_absent_list = set(minion_list) - set(form.minion.data)
for minion_absent in minion_absent_list:
text = salt.read_pillar_file('user/%s.sls' % (minion_absent,)).get('return')[0].get(
'/srv/pillar/user/%s.sls' % (minion_absent,))
text2yaml = yaml.load(text)
if name in text2yaml.get('users'):
text2yaml.get('users').pop(name)
yaml2text = yaml.dump(text2yaml)
salt.write_pillar_file(yaml2text, 'user/%s.sls' % (minion_absent,))
salt.execution_command_low(tgt=minion_absent, fun='user.delete', args=[name])
for minion in form.minion.data:
text = salt.read_pillar_file('user/%s.sls' % (minion,)).get('return')[0].get(
'/srv/pillar/user/%s.sls' % (minion,))
text2yaml = yaml.load(text)
text2yaml.get('users').update(
{name: {'shell': '/bin/bash', 'fullname': name, 'name': name, 'groups': [form.ou.data]}})
yaml2text = yaml.dump(text2yaml)
salt.write_pillar_file(yaml2text, 'user/%s.sls' % (minion,))
salt.execution_command_minions(tgt=minion, fun='ssh.set_auth_key', kwargs={'user':name, 'key':form.key.data.split()[1]})
salt.execution_command_minions(tgt='*', fun='state.apply', args='user')
return redirect(url_for('ldap.account_detail', name=name))
return render_template('ldap/account_edit.html', form=form, default_o=default_o, default_key=default_key)
@ldap.route('/account/<name>/delete', methods=['GET', 'POST'])
@login_required
def account_delete(name):
    ldap3 = Ldap3()
    account = ldap3.search(scope='subtree', filterstr='(cn=%s)' % (name,))
    ldap3.delete(dn=list(account.keys())[0])
    minion_list = json.loads(salt.get_accepted_keys())
    for minion in minion_list:
        text = salt.read_pillar_file('user/%s.sls' % (minion,)).get('return')[0].get(
            '/srv/pillar/user/%s.sls' % (minion,))
        text2yaml = yaml.load(text)
        if name in text2yaml.get('users'):
            text2yaml.get('users').pop(name)
            yaml2text = yaml.dump(text2yaml)
            salt.write_pillar_file(yaml2text, 'user/%s.sls' % (minion,))
            salt.execution_command_low(tgt=minion, fun='user.delete', args=[name])
    return redirect(url_for('ldap.index'))
| 45.358852 | 132 | 0.603481 |
9481dcaf0c996a6dcd7ef4e4223c451298aa561b | 1,181 | py | Python | service/surf/apps/locale/models.py | surfedushare/search-portal | 708a0d05eee13c696ca9abd7e84ab620d3900fbe | [
"MIT"
] | 2 | 2021-08-19T09:40:59.000Z | 2021-12-14T11:08:20.000Z | service/surf/apps/locale/models.py | surfedushare/search-portal | f5486d6b07b7b04a46ce707cee5174db4f8da222 | [
"MIT"
] | 159 | 2020-05-14T14:17:34.000Z | 2022-03-23T10:28:13.000Z | service/surf/apps/locale/models.py | nppo/search-portal | aedf21e334f178c049f9d6cf37cafd6efc07bc0d | [
"MIT"
] | 1 | 2021-11-11T13:37:22.000Z | 2021-11-11T13:37:22.000Z | from django.db import models
from surf.apps.core.models import UUIDModel
class Locale(UUIDModel):
    asset = models.CharField('Asset ID', max_length=512, unique=True)
    en = models.CharField('English, en', max_length=5120, null=True, blank=True)
    nl = models.CharField('Dutch, nl', max_length=5120, null=False, blank=False)
    is_fuzzy = models.BooleanField(default=False)
    @property
    def translation_key(self):
        return f"{self.asset}"
    def __str__(self):
        return self.asset
    class Meta:
        verbose_name = "Localization"
        verbose_name_plural = "Localizations"
class LocaleHTML(UUIDModel):
    asset = models.CharField('Asset ID', max_length=512, unique=True)
    en = models.TextField('English, en', max_length=16384, null=True, blank=True)
    nl = models.TextField('Dutch, nl', max_length=16384, null=False, blank=False)
    is_fuzzy = models.BooleanField(default=False)
    @property
    def translation_key(self):
        return f"html-{self.asset}"
    def __str__(self):
        return self.asset
    class Meta:
        verbose_name = "Localization with HTML"
        verbose_name_plural = "Localizations with HTML"
| 28.119048 | 81 | 0.686706 |
ba735e6aab6b01e75ef41247820ce4b3b4a8b294 | 55,924 | py | Python | mayan/apps/documents/views.py | Dave360-crypto/mayan-edms | 9cd37537461347f79ff0429e4b8b16fd2446798d | [
"Apache-2.0"
] | 3 | 2020-02-03T11:58:51.000Z | 2020-10-20T03:52:21.000Z | mayan/apps/documents/views.py | Dave360-crypto/mayan-edms | 9cd37537461347f79ff0429e4b8b16fd2446798d | [
"Apache-2.0"
] | null | null | null | mayan/apps/documents/views.py | Dave360-crypto/mayan-edms | 9cd37537461347f79ff0429e4b8b16fd2446798d | [
"Apache-2.0"
] | 2 | 2020-10-24T11:10:06.000Z | 2021-03-03T20:05:38.000Z | from __future__ import absolute_import
import copy
import logging
import urlparse
from django.conf import settings
from django.contrib import messages
from django.core.exceptions import PermissionDenied
from django.core.urlresolvers import reverse
from django.http import HttpResponseRedirect
from django.shortcuts import render_to_response, get_object_or_404
from django.template import RequestContext
from django.utils.http import urlencode
from django.utils.translation import ugettext_lazy as _
import sendfile
from acls.models import AccessEntry
from common.compressed_files import CompressedFile
from common.literals import (PAGE_ORIENTATION_LANDSCAPE, PAGE_ORIENTATION_PORTRAIT,
    PAGE_SIZE_DIMENSIONS)
from common.utils import (encapsulate, pretty_size, parse_range, return_diff,
    urlquote)
from common.widgets import two_state_template
from common.conf.settings import DEFAULT_PAPER_SIZE
from converter.literals import (DEFAULT_FILE_FORMAT_MIMETYPE, DEFAULT_PAGE_NUMBER,
    DEFAULT_ROTATION, DEFAULT_ZOOM_LEVEL)
from converter.office_converter import OfficeConverter
from filetransfers.api import serve_file
from history.api import create_history
from navigation.utils import resolve_to_name
from permissions.models import Permission
from .events import HISTORY_DOCUMENT_EDITED
from .conf.settings import (PREVIEW_SIZE, RECENT_COUNT, ROTATION_STEP,
    ZOOM_PERCENT_STEP, ZOOM_MAX_LEVEL, ZOOM_MIN_LEVEL)
from .forms import (DocumentForm_edit, DocumentPropertiesForm,
    DocumentPreviewForm, DocumentPageForm,
    DocumentPageTransformationForm, DocumentContentForm,
    DocumentPageForm_edit, DocumentPageForm_text, PrintForm,
    DocumentTypeForm, DocumentTypeFilenameForm,
    DocumentTypeFilenameForm_create, DocumentDownloadForm)
from .models import (Document, DocumentType, DocumentPage,
    DocumentPageTransformation, DocumentTypeFilename,
    DocumentVersion, RecentDocument)
from .permissions import (PERMISSION_DOCUMENT_PROPERTIES_EDIT,
    PERMISSION_DOCUMENT_VIEW, PERMISSION_DOCUMENT_DELETE,
    PERMISSION_DOCUMENT_DOWNLOAD, PERMISSION_DOCUMENT_TRANSFORM,
    PERMISSION_DOCUMENT_TOOLS, PERMISSION_DOCUMENT_EDIT,
    PERMISSION_DOCUMENT_VERSION_REVERT, PERMISSION_DOCUMENT_TYPE_EDIT,
    PERMISSION_DOCUMENT_TYPE_DELETE, PERMISSION_DOCUMENT_TYPE_CREATE,
    PERMISSION_DOCUMENT_TYPE_VIEW)
from .runtime import storage_backend
logger = logging.getLogger(__name__)
def document_list(request, object_list=None, title=None, extra_context=None):
    pre_object_list = object_list if object_list is not None else Document.objects.all()
    try:
        Permission.objects.check_permissions(request.user, [PERMISSION_DOCUMENT_VIEW])
    except PermissionDenied:
        # If the user doesn't have the global permission, get the list of
        # documents for which he/she does have access and use it to filter
        # the provided object_list
        final_object_list = AccessEntry.objects.filter_objects_by_access(
            PERMISSION_DOCUMENT_VIEW, request.user, pre_object_list)
    else:
        final_object_list = pre_object_list
    context = {
        'object_list': final_object_list,
        'title': title if title else _(u'documents'),
        'multi_select_as_buttons': True,
        'hide_links': True,
    }
    if extra_context:
        context.update(extra_context)
    return render_to_response('generic_list.html', context,
        context_instance=RequestContext(request))
def document_view(request, document_id, advanced=False):
document = get_object_or_404(Document, pk=document_id)
try:
Permission.objects.check_permissions(request.user, [PERMISSION_DOCUMENT_VIEW])
except PermissionDenied:
AccessEntry.objects.check_access(PERMISSION_DOCUMENT_VIEW, request.user, document)
RecentDocument.objects.add_document_for_user(request.user, document)
subtemplates_list = []
if advanced:
document_fields = [
{'label': _(u'Date added'), 'field': lambda x: x.date_added.date()},
{'label': _(u'Time added'), 'field': lambda x: unicode(x.date_added.time()).split('.')[0]},
{'label': _(u'UUID'), 'field': 'uuid'},
]
if document.latest_version:
document_fields.extend([
{'label': _(u'Filename'), 'field': 'filename'},
{'label': _(u'File mimetype'), 'field': lambda x: x.file_mimetype or _(u'None')},
{'label': _(u'File mime encoding'), 'field': lambda x: x.file_mime_encoding or _(u'None')},
{'label': _(u'File size'), 'field': lambda x: pretty_size(x.size) if x.size else '-'},
{'label': _(u'Exists in storage'), 'field': 'exists'},
{'label': _(u'File path in storage'), 'field': 'file'},
{'label': _(u'Checksum'), 'field': 'checksum'},
{'label': _(u'Pages'), 'field': 'page_count'},
])
document_properties_form = DocumentPropertiesForm(instance=document, extra_fields=document_fields)
subtemplates_list.append(
{
'name': 'generic_form_subtemplate.html',
'context': {
'form': document_properties_form,
'object': document,
'title': _(u'document properties for: %s') % document,
}
},
)
else:
preview_form = DocumentPreviewForm(document=document)
subtemplates_list.append(
{
'name': 'generic_form_subtemplate.html',
'context': {
'form': preview_form,
'object': document,
}
},
)
content_form = DocumentContentForm(document=document)
subtemplates_list.append(
{
'name': 'generic_form_subtemplate.html',
'context': {
'title': _(u'document data'),
'form': content_form,
'object': document,
},
}
)
return render_to_response('generic_detail.html', {
'object': document,
'document': document,
'subtemplates_list': subtemplates_list,
'disable_auto_focus': True,
}, context_instance=RequestContext(request))
def document_delete(request, document_id=None, document_id_list=None):
    post_action_redirect = None
    if document_id:
        documents = [get_object_or_404(Document, pk=document_id)]
        post_action_redirect = reverse('document_list_recent')
    elif document_id_list:
        documents = [get_object_or_404(Document, pk=document_id) for document_id in document_id_list.split(',')]
    else:
        messages.error(request, _(u'Must provide at least one document.'))
        return HttpResponseRedirect(request.META.get('HTTP_REFERER', '/'))
    try:
        Permission.objects.check_permissions(request.user, [PERMISSION_DOCUMENT_DELETE])
    except PermissionDenied:
        documents = AccessEntry.objects.filter_objects_by_access(PERMISSION_DOCUMENT_DELETE, request.user, documents, exception_on_empty=True)
    previous = request.POST.get('previous', request.GET.get('previous', request.META.get('HTTP_REFERER', '/')))
    next = request.POST.get('next', request.GET.get('next', post_action_redirect if post_action_redirect else request.META.get('HTTP_REFERER', '/')))
    if request.method == 'POST':
        for document in documents:
            try:
                document.delete()
                # create_history(HISTORY_DOCUMENT_DELETED, data={'user': request.user, 'document': document})
                messages.success(request, _(u'Document deleted successfully.'))
            except Exception as exception:
                messages.error(request, _(u'Document: %(document)s delete error: %(error)s') % {
                    'document': document, 'error': exception
                })
        return HttpResponseRedirect(next)
    context = {
        'object_name': _(u'document'),
        'delete_view': True,
        'previous': previous,
        'next': next,
        'form_icon': u'page_delete.png',
    }
    if len(documents) == 1:
        context['object'] = documents[0]
        context['title'] = _(u'Are you sure you wish to delete the document: %s?') % ', '.join([unicode(d) for d in documents])
    elif len(documents) > 1:
        context['title'] = _(u'Are you sure you wish to delete the documents: %s?') % ', '.join([unicode(d) for d in documents])
    return render_to_response('generic_confirm.html', context,
        context_instance=RequestContext(request))
def document_multiple_delete(request):
    return document_delete(
        request, document_id_list=request.GET.get('id_list', [])
    )
def document_edit(request, document_id):
document = get_object_or_404(Document, pk=document_id)
try:
Permission.objects.check_permissions(request.user, [PERMISSION_DOCUMENT_PROPERTIES_EDIT])
except PermissionDenied:
AccessEntry.objects.check_access(PERMISSION_DOCUMENT_PROPERTIES_EDIT, request.user, document)
if request.method == 'POST':
old_document = copy.copy(document)
form = DocumentForm_edit(request.POST, instance=document)
if form.is_valid():
document.filename = form.cleaned_data['new_filename']
document.description = form.cleaned_data['description']
if 'document_type_available_filenames' in form.cleaned_data:
if form.cleaned_data['document_type_available_filenames']:
document.filename = form.cleaned_data['document_type_available_filenames'].filename
document.save()
create_history(HISTORY_DOCUMENT_EDITED, document, {'user': request.user, 'diff': return_diff(old_document, document, ['filename', 'description'])})
RecentDocument.objects.add_document_for_user(request.user, document)
messages.success(request, _(u'Document "%s" edited successfully.') % document)
return HttpResponseRedirect(document.get_absolute_url())
else:
if document.latest_version:
form = DocumentForm_edit(instance=document, initial={
'new_filename': document.filename, 'description': document.description})
else:
form = DocumentForm_edit(instance=document, initial={
'description': document.description})
return render_to_response('generic_form.html', {
'form': form,
'object': document,
}, context_instance=RequestContext(request))
def get_document_image(request, document_id, size=PREVIEW_SIZE):
    document = get_object_or_404(Document, pk=document_id)
    try:
        Permission.objects.check_permissions(request.user, [PERMISSION_DOCUMENT_VIEW])
    except PermissionDenied:
        AccessEntry.objects.check_access(PERMISSION_DOCUMENT_VIEW, request.user, document)
    page = int(request.GET.get('page', DEFAULT_PAGE_NUMBER))
    zoom = int(request.GET.get('zoom', DEFAULT_ZOOM_LEVEL))
    version = int(request.GET.get('version', document.latest_version.pk))
    if zoom < ZOOM_MIN_LEVEL:
        zoom = ZOOM_MIN_LEVEL
    if zoom > ZOOM_MAX_LEVEL:
        zoom = ZOOM_MAX_LEVEL
    rotation = int(request.GET.get('rotation', DEFAULT_ROTATION)) % 360
    return sendfile.sendfile(request, document.get_image(size=size, page=page, zoom=zoom, rotation=rotation, version=version), mimetype=DEFAULT_FILE_FORMAT_MIMETYPE)
def document_download(request, document_id=None, document_id_list=None, document_version_pk=None):
previous = request.POST.get('previous', request.GET.get('previous', request.META.get('HTTP_REFERER', '/')))
if document_id:
document_versions = [get_object_or_404(Document, pk=document_id).latest_version]
elif document_id_list:
document_versions = [get_object_or_404(Document, pk=document_id).latest_version for document_id in document_id_list.split(',')]
elif document_version_pk:
document_versions = [get_object_or_404(DocumentVersion, pk=document_version_pk)]
try:
Permission.objects.check_permissions(request.user, [PERMISSION_DOCUMENT_DOWNLOAD])
except PermissionDenied:
document_versions = AccessEntry.objects.filter_objects_by_access(PERMISSION_DOCUMENT_DOWNLOAD, request.user, document_versions, related='document', exception_on_empty=True)
subtemplates_list = []
subtemplates_list.append(
{
'name': 'generic_list_subtemplate.html',
'context': {
'title': _(u'documents to be downloaded'),
'object_list': document_versions,
'hide_link': True,
'hide_object': True,
'hide_links': True,
'scrollable_content': True,
'scrollable_content_height': '200px',
'extra_columns': [
{'name': _(u'document'), 'attribute': 'document'},
{'name': _(u'version'), 'attribute': encapsulate(lambda x: x.get_formated_version())},
],
}
}
)
if request.method == 'POST':
form = DocumentDownloadForm(request.POST, document_versions=document_versions)
if form.is_valid():
if form.cleaned_data['compressed'] or len(document_versions) > 1:
try:
compressed_file = CompressedFile()
for document_version in document_versions:
descriptor = document_version.open()
compressed_file.add_file(descriptor, arcname=document_version.filename)
descriptor.close()
compressed_file.close()
return serve_file(
request,
compressed_file.as_file(form.cleaned_data['zip_filename']),
save_as=u'"%s"' % form.cleaned_data['zip_filename'],
content_type='application/zip'
)
# TODO: DO a redirection afterwards
except Exception as exception:
if settings.DEBUG:
raise
else:
messages.error(request, exception)
return HttpResponseRedirect(request.META['HTTP_REFERER'])
else:
try:
# Test permissions and trigger exception
fd = document_versions[0].open()
fd.close()
return serve_file(
request,
document_versions[0].file,
save_as=u'"%s"' % document_versions[0].filename,
content_type=document_versions[0].mimetype if document_versions[0].mimetype else 'application/octet-stream'
)
except Exception as exception:
if settings.DEBUG:
raise
else:
messages.error(request, exception)
return HttpResponseRedirect(request.META['HTTP_REFERER'])
else:
form = DocumentDownloadForm(document_versions=document_versions)
context = {
'form': form,
'subtemplates_list': subtemplates_list,
'title': _(u'Download documents'),
'submit_label': _(u'Download'),
'previous': previous,
'cancel_label': _(u'Return'),
'disable_auto_focus': True,
}
if len(document_versions) == 1:
context['object'] = document_versions[0].document
return render_to_response(
'generic_form.html',
context,
context_instance=RequestContext(request)
)
def document_multiple_download(request):
    return document_download(
        request, document_id_list=request.GET.get('id_list', [])
    )
def document_find_duplicates(request, document_id):
document = get_object_or_404(Document, pk=document_id)
try:
Permission.objects.check_permissions(request.user, [PERMISSION_DOCUMENT_VIEW])
except PermissionDenied:
AccessEntry.objects.check_access(PERMISSION_DOCUMENT_VIEW, request.user, document)
extra_context = {
'title': _(u'duplicates of: %s') % document,
'object': document,
}
return _find_duplicate_list(request, [document], include_source=True, confirmation=False, extra_context=extra_context)
def _find_duplicate_list(request, source_document_list=Document.objects.all(), include_source=False, confirmation=True, extra_context=None):
previous = request.POST.get('previous', request.GET.get('previous', request.META.get('HTTP_REFERER', None)))
if confirmation and request.method != 'POST':
return render_to_response('generic_confirm.html', {
'previous': previous,
'title': _(u'Are you sure you wish to find all duplicates?'),
'message': _(u'On large databases this operation may take some time to execute.'),
'form_icon': u'page_refresh.png',
}, context_instance=RequestContext(request))
else:
duplicated = []
for document in source_document_list:
if document.pk not in duplicated:
results = DocumentVersion.objects.filter(checksum=document.latest_version.checksum).exclude(id__in=duplicated).exclude(pk=document.pk).values_list('document__pk', flat=True)
duplicated.extend(results)
if include_source and results:
duplicated.append(document.pk)
context = {
'hide_links': True,
'multi_select_as_buttons': True,
}
if extra_context:
context.update(extra_context)
return document_list(
request,
object_list=Document.objects.filter(pk__in=duplicated),
title=_(u'duplicated documents'),
extra_context=context
)
def document_find_all_duplicates(request):
return _find_duplicate_list(request, include_source=True)
def document_update_page_count(request):
Permission.objects.check_permissions(request.user, [PERMISSION_DOCUMENT_TOOLS])
office_converter = OfficeConverter()
qs = DocumentVersion.objects.exclude(filename__iendswith='dxf').filter(mimetype__in=office_converter.mimetypes())
previous = request.POST.get('previous', request.GET.get('previous', request.META.get('HTTP_REFERER', '/')))
if request.method == 'POST':
updated = 0
processed = 0
for document_version in qs:
old_page_count = document_version.pages.count()
document_version.update_page_count()
processed += 1
if old_page_count != document_version.pages.count():
updated += 1
messages.success(request, _(u'Page count update complete. Documents processed: %(total)d, documents with changed page count: %(change)d') % {
'total': processed,
'change': updated
})
return HttpResponseRedirect(previous)
return render_to_response('generic_confirm.html', {
'previous': previous,
'title': _(u'Are you sure you wish to update the page count for the office documents (%d)?') % qs.count(),
'message': _(u'On large databases this operation may take some time to execute.'),
'form_icon': u'page_white_csharp.png',
}, context_instance=RequestContext(request))
def document_clear_transformations(request, document_id=None, document_id_list=None):
if document_id:
documents = [get_object_or_404(Document.objects, pk=document_id)]
post_redirect = reverse('document_view_simple', args=[documents[0].pk])
elif document_id_list:
documents = [get_object_or_404(Document, pk=document_id) for document_id in document_id_list.split(',')]
post_redirect = None
else:
messages.error(request, _(u'Must provide at least one document.'))
return HttpResponseRedirect(request.META.get('HTTP_REFERER', u'/'))
try:
Permission.objects.check_permissions(request.user, [PERMISSION_DOCUMENT_TRANSFORM])
except PermissionDenied:
documents = AccessEntry.objects.filter_objects_by_access(PERMISSION_DOCUMENT_TRANSFORM, request.user, documents, exception_on_empty=True)
previous = request.POST.get('previous', request.GET.get('previous', request.META.get('HTTP_REFERER', post_redirect or reverse('document_list'))))
next = request.POST.get('next', request.GET.get('next', request.META.get('HTTP_REFERER', post_redirect or reverse('document_list'))))
if request.method == 'POST':
for document in documents:
try:
for document_page in document.pages.all():
document_page.document.invalidate_cached_image(document_page.page_number)
for transformation in document_page.documentpagetransformation_set.all():
transformation.delete()
messages.success(request, _(u'All the page transformations for document: %s, have been deleted successfully.') % document)
except Exception as exception:
messages.error(request, _(u'Error deleting the page transformations for document: %(document)s; %(error)s.') % {
'document': document, 'error': exception})
return HttpResponseRedirect(next)
context = {
'object_name': _(u'document transformation'),
'delete_view': True,
'previous': previous,
'next': next,
'form_icon': u'page_paintbrush.png',
}
if len(documents) == 1:
context['object'] = documents[0]
context['title'] = _(u'Are you sure you wish to clear all the page transformations for document: %s?') % ', '.join([unicode(d) for d in documents])
elif len(documents) > 1:
context['title'] = _(u'Are you sure you wish to clear all the page transformations for documents: %s?') % ', '.join([unicode(d) for d in documents])
return render_to_response('generic_confirm.html', context,
context_instance=RequestContext(request))
def document_multiple_clear_transformations(request):
return document_clear_transformations(request, document_id_list=request.GET.get('id_list', []))
def document_missing_list(request):
Permission.objects.check_permissions(request.user, [PERMISSION_DOCUMENT_VIEW])
previous = request.POST.get('previous', request.GET.get('previous', request.META.get('HTTP_REFERER', None)))
if request.method != 'POST':
return render_to_response('generic_confirm.html', {
'previous': previous,
'message': _(u'On large databases this operation may take some time to execute.'),
}, context_instance=RequestContext(request))
else:
missing_id_list = []
for document in Document.objects.only('id',):
if not storage_backend.exists(document.file):
missing_id_list.append(document.pk)
return render_to_response('generic_list.html', {
'object_list': Document.objects.in_bulk(missing_id_list).values(),
'title': _(u'missing documents'),
}, context_instance=RequestContext(request))
def document_page_view(request, document_page_id):
document_page = get_object_or_404(DocumentPage, pk=document_page_id)
try:
Permission.objects.check_permissions(request.user, [PERMISSION_DOCUMENT_VIEW])
except PermissionDenied:
AccessEntry.objects.check_access(PERMISSION_DOCUMENT_VIEW, request.user, document_page.document)
zoom = int(request.GET.get('zoom', DEFAULT_ZOOM_LEVEL))
rotation = int(request.GET.get('rotation', DEFAULT_ROTATION))
document_page_form = DocumentPageForm(instance=document_page, zoom=zoom, rotation=rotation)
base_title = _(u'details for: %s') % document_page
if zoom != DEFAULT_ZOOM_LEVEL:
zoom_text = u'(%d%%)' % zoom
else:
zoom_text = u''
if rotation != 0 and rotation != 360:
rotation_text = u'(%d°)' % rotation
else:
rotation_text = u''
return render_to_response('generic_detail.html', {
'page': document_page,
'access_object': document_page.document,
'navigation_object_name': 'page',
'web_theme_hide_menus': True,
'form': document_page_form,
'title': u' '.join([base_title, zoom_text, rotation_text]),
'zoom': zoom,
'rotation': rotation,
}, context_instance=RequestContext(request))
def document_page_view_reset(request, document_page_id):
return HttpResponseRedirect(reverse('document_page_view', args=[document_page_id]))
def document_page_text(request, document_page_id):
document_page = get_object_or_404(DocumentPage, pk=document_page_id)
try:
Permission.objects.check_permissions(request.user, [PERMISSION_DOCUMENT_VIEW])
except PermissionDenied:
AccessEntry.objects.check_access(PERMISSION_DOCUMENT_VIEW, request.user, document_page.document)
document_page_form = DocumentPageForm_text(instance=document_page)
return render_to_response('generic_detail.html', {
'page': document_page,
'navigation_object_name': 'page',
'web_theme_hide_menus': True,
'form': document_page_form,
'title': _(u'details for: %s') % document_page,
'access_object': document_page.document,
}, context_instance=RequestContext(request))
def document_page_edit(request, document_page_id):
document_page = get_object_or_404(DocumentPage, pk=document_page_id)
try:
Permission.objects.check_permissions(request.user, [PERMISSION_DOCUMENT_EDIT])
except PermissionDenied:
AccessEntry.objects.check_access(PERMISSION_DOCUMENT_EDIT, request.user, document_page.document)
if request.method == 'POST':
form = DocumentPageForm_edit(request.POST, instance=document_page)
if form.is_valid():
document_page.page_label = form.cleaned_data['page_label']
document_page.content = form.cleaned_data['content']
document_page.save()
messages.success(request, _(u'Document page edited successfully.'))
return HttpResponseRedirect(document_page.get_absolute_url())
else:
form = DocumentPageForm_edit(instance=document_page)
return render_to_response('generic_form.html', {
'form': form,
'page': document_page,
'navigation_object_name': 'page',
'title': _(u'edit: %s') % document_page,
'web_theme_hide_menus': True,
'access_object': document_page.document,
}, context_instance=RequestContext(request))
def document_page_navigation_next(request, document_page_id):
document_page = get_object_or_404(DocumentPage, pk=document_page_id)
try:
Permission.objects.check_permissions(request.user, [PERMISSION_DOCUMENT_VIEW])
except PermissionDenied:
AccessEntry.objects.check_access(PERMISSION_DOCUMENT_VIEW, request.user, document_page.document)
view = resolve_to_name(urlparse.urlparse(request.META.get('HTTP_REFERER', u'/')).path)
if document_page.page_number >= document_page.siblings.count():
messages.warning(request, _(u'There are no more pages in this document'))
return HttpResponseRedirect(request.META.get('HTTP_REFERER', u'/'))
else:
document_page = get_object_or_404(document_page.siblings, page_number=document_page.page_number + 1)
return HttpResponseRedirect(reverse(view, args=[document_page.pk]))
def document_page_navigation_previous(request, document_page_id):
document_page = get_object_or_404(DocumentPage, pk=document_page_id)
try:
Permission.objects.check_permissions(request.user, [PERMISSION_DOCUMENT_VIEW])
except PermissionDenied:
AccessEntry.objects.check_access(PERMISSION_DOCUMENT_VIEW, request.user, document_page.document)
view = resolve_to_name(urlparse.urlparse(request.META.get('HTTP_REFERER', u'/')).path)
if document_page.page_number <= 1:
messages.warning(request, _(u'You are already at the first page of this document'))
return HttpResponseRedirect(request.META.get('HTTP_REFERER', u'/'))
else:
document_page = get_object_or_404(document_page.siblings, page_number=document_page.page_number - 1)
return HttpResponseRedirect(reverse(view, args=[document_page.pk]))
def document_page_navigation_first(request, document_page_id):
document_page = get_object_or_404(DocumentPage, pk=document_page_id)
document_page = get_object_or_404(document_page.siblings, page_number=1)
try:
Permission.objects.check_permissions(request.user, [PERMISSION_DOCUMENT_VIEW])
except PermissionDenied:
AccessEntry.objects.check_access(PERMISSION_DOCUMENT_VIEW, request.user, document_page.document)
view = resolve_to_name(urlparse.urlparse(request.META.get('HTTP_REFERER', u'/')).path)
return HttpResponseRedirect(reverse(view, args=[document_page.pk]))
def document_page_navigation_last(request, document_page_id):
document_page = get_object_or_404(DocumentPage, pk=document_page_id)
document_page = get_object_or_404(document_page.siblings, page_number=document_page.siblings.count())
try:
Permission.objects.check_permissions(request.user, [PERMISSION_DOCUMENT_VIEW])
except PermissionDenied:
AccessEntry.objects.check_access(PERMISSION_DOCUMENT_VIEW, request.user, document_page.document)
view = resolve_to_name(urlparse.urlparse(request.META.get('HTTP_REFERER', u'/')).path)
return HttpResponseRedirect(reverse(view, args=[document_page.pk]))
def document_list_recent(request):
return document_list(
request,
object_list=RecentDocument.objects.get_for_user(request.user),
title=_(u'recent documents'),
extra_context={
'recent_count': RECENT_COUNT
}
)
def transform_page(request, document_page_id, zoom_function=None, rotation_function=None):
    document_page = get_object_or_404(DocumentPage, pk=document_page_id)
    try:
        Permission.objects.check_permissions(request.user, [PERMISSION_DOCUMENT_VIEW])
    except PermissionDenied:
        AccessEntry.objects.check_access(PERMISSION_DOCUMENT_VIEW, request.user, document_page.document)
    view = resolve_to_name(urlparse.urlparse(request.META.get('HTTP_REFERER', u'/')).path)
    # Get the query string from the referer url
    query = urlparse.urlparse(request.META.get('HTTP_REFERER', u'/')).query
    # Parse the query string and get the zoom value;
    # parse_qs returns a dictionary whose values are lists
    zoom = int(urlparse.parse_qs(query).get('zoom', ['100'])[0])
    rotation = int(urlparse.parse_qs(query).get('rotation', ['0'])[0])
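    # Illustrative example (not part of the original view): for a referer
    # such as '/page/1/?zoom=150&rotation=90', the query string parses as
    #     parse_qs('zoom=150&rotation=90') -> {'zoom': ['150'], 'rotation': ['90']}
    # which is why the defaults above are list-wrapped ('[100]', '[0]') and
    # why the first element '[0]' is taken before the int() conversion.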
    if zoom_function:
        zoom = zoom_function(zoom)
    if rotation_function:
        rotation = rotation_function(rotation)
    return HttpResponseRedirect(
        u'?'.join([
            reverse(view, args=[document_page.pk]),
            urlencode({'zoom': zoom, 'rotation': rotation})
        ])
    )
def document_page_zoom_in(request, document_page_id):
    return transform_page(
        request,
        document_page_id,
        zoom_function=lambda x: ZOOM_MAX_LEVEL if x + ZOOM_PERCENT_STEP > ZOOM_MAX_LEVEL else x + ZOOM_PERCENT_STEP
    )
def document_page_zoom_out(request, document_page_id):
    return transform_page(
        request,
        document_page_id,
        zoom_function=lambda x: ZOOM_MIN_LEVEL if x - ZOOM_PERCENT_STEP < ZOOM_MIN_LEVEL else x - ZOOM_PERCENT_STEP
    )
def document_page_rotate_right(request, document_page_id):
    return transform_page(
        request,
        document_page_id,
        rotation_function=lambda x: (x + ROTATION_STEP) % 360
    )
def document_page_rotate_left(request, document_page_id):
    return transform_page(
        request,
        document_page_id,
        rotation_function=lambda x: (x - ROTATION_STEP) % 360
    )
def document_print(request, document_id):
    document = get_object_or_404(Document, pk=document_id)
    try:
        Permission.objects.check_permissions(request.user, [PERMISSION_DOCUMENT_VIEW])
    except PermissionDenied:
        AccessEntry.objects.check_access(PERMISSION_DOCUMENT_VIEW, request.user, document)
    RecentDocument.objects.add_document_for_user(request.user, document)
    post_redirect = None
    next = request.POST.get('next', request.GET.get('next', request.META.get('HTTP_REFERER', post_redirect or document.get_absolute_url())))
    new_window_url = None
    html_redirect = None
    if request.method == 'POST':
        form = PrintForm(request.POST)
        if form.is_valid():
            hard_copy_arguments = {}
            # Get page range
            if form.cleaned_data['page_range']:
                hard_copy_arguments['page_range'] = form.cleaned_data['page_range']
            new_url = [reverse('document_hard_copy', args=[document_id])]
            if hard_copy_arguments:
                new_url.append(urlquote(hard_copy_arguments))
            new_window_url = u'?'.join(new_url)
            new_window_url_name = u'document_hard_copy'
    else:
        form = PrintForm()
    return render_to_response('generic_form.html', {
        'form': form,
        'object': document,
        'title': _(u'print: %s') % document,
        'next': next,
        'html_redirect': html_redirect,
        'new_window_url': new_window_url,
    }, context_instance=RequestContext(request))
def document_hard_copy(request, document_id):
# TODO: FIXME
document = get_object_or_404(Document, pk=document_id)
try:
Permission.objects.check_permissions(request.user, [PERMISSION_DOCUMENT_VIEW])
except PermissionDenied:
AccessEntry.objects.check_access(PERMISSION_DOCUMENT_VIEW, request.user, document)
RecentDocument.objects.add_document_for_user(request.user, document)
# Extract dimension values ignoring any unit
page_width = request.GET.get('page_width', dict(PAGE_SIZE_DIMENSIONS)[DEFAULT_PAPER_SIZE][0])
page_height = request.GET.get('page_height', dict(PAGE_SIZE_DIMENSIONS)[DEFAULT_PAPER_SIZE][1])
# TODO: Replace with regex to extact numeric portion
width = float(page_width.split('i')[0].split('c')[0].split('m')[0])
height = float(page_height.split('i')[0].split('c')[0].split('m')[0])
page_range = request.GET.get('page_range', u'')
if page_range:
page_range = parse_range(page_range)
pages = document.pages.filter(page_number__in=page_range)
else:
pages = document.pages.all()
return render_to_response('document_print.html', {
'object': document,
'page_aspect': width / height,
'page_orientation': PAGE_ORIENTATION_LANDSCAPE if width / height > 1 else PAGE_ORIENTATION_PORTRAIT,
        'page_orientation_landscape': width / height > 1,
        'page_orientation_portrait': width / height <= 1,
'page_range': page_range,
'page_width': page_width,
'page_height': page_height,
'pages': pages,
}, context_instance=RequestContext(request))
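The orientation flags passed to the template above reduce to a single aspect-ratio comparison. A minimal standalone sketch of that rule (the constant values and the inch dimensions below are illustrative stand-ins, not taken from the app's settings):

```python
PAGE_ORIENTATION_LANDSCAPE = 'landscape'  # placeholder values; the app
PAGE_ORIENTATION_PORTRAIT = 'portrait'    # defines its own constants

def page_orientation(width, height):
    # Same rule as document_hard_copy: an aspect ratio above 1 is landscape.
    return PAGE_ORIENTATION_LANDSCAPE if width / height > 1 else PAGE_ORIENTATION_PORTRAIT

print(page_orientation(11.0, 8.5))    # US Letter rotated: landscape
print(page_orientation(8.27, 11.69))  # A4 upright: portrait
```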
def document_type_list(request):
Permission.objects.check_permissions(request.user, [PERMISSION_DOCUMENT_TYPE_VIEW])
context = {
'object_list': DocumentType.objects.all(),
'title': _(u'document types'),
'hide_link': True,
'list_object_variable_name': 'document_type',
}
return render_to_response('generic_list.html', context,
context_instance=RequestContext(request))
def document_type_edit(request, document_type_id):
Permission.objects.check_permissions(request.user, [PERMISSION_DOCUMENT_TYPE_EDIT])
document_type = get_object_or_404(DocumentType, pk=document_type_id)
next = request.POST.get('next', request.GET.get('next', request.META.get('HTTP_REFERER', reverse('document_type_list'))))
if request.method == 'POST':
form = DocumentTypeForm(instance=document_type, data=request.POST)
if form.is_valid():
try:
form.save()
messages.success(request, _(u'Document type edited successfully'))
return HttpResponseRedirect(next)
except Exception as exception:
messages.error(request, _(u'Error editing document type; %s') % exception)
else:
form = DocumentTypeForm(instance=document_type)
return render_to_response('generic_form.html', {
'title': _(u'edit document type: %s') % document_type,
'form': form,
'object_name': _(u'document type'),
'navigation_object_name': 'document_type',
'document_type': document_type,
'next': next
},
context_instance=RequestContext(request))
def document_type_delete(request, document_type_id):
Permission.objects.check_permissions(request.user, [PERMISSION_DOCUMENT_TYPE_DELETE])
document_type = get_object_or_404(DocumentType, pk=document_type_id)
post_action_redirect = reverse('document_type_list')
previous = request.POST.get('previous', request.GET.get('previous', request.META.get('HTTP_REFERER', '/')))
next = request.POST.get('next', request.GET.get('next', post_action_redirect if post_action_redirect else request.META.get('HTTP_REFERER', '/')))
if request.method == 'POST':
try:
Document.objects.filter(document_type=document_type).update(document_type=None)
document_type.delete()
messages.success(request, _(u'Document type: %s deleted successfully.') % document_type)
except Exception as exception:
messages.error(request, _(u'Document type: %(document_type)s delete error: %(error)s') % {
'document_type': document_type, 'error': exception})
return HttpResponseRedirect(next)
context = {
'object_name': _(u'document type'),
'delete_view': True,
'previous': previous,
'next': next,
'navigation_object_name': 'document_type',
'document_type': document_type,
'title': _(u'Are you sure you wish to delete the document type: %s?') % document_type,
'message': _(u'The document type of all documents using this document type will be set to none.'),
'form_icon': u'layout_delete.png',
}
return render_to_response('generic_confirm.html', context,
context_instance=RequestContext(request))
def document_type_create(request):
Permission.objects.check_permissions(request.user, [PERMISSION_DOCUMENT_TYPE_CREATE])
if request.method == 'POST':
form = DocumentTypeForm(request.POST)
if form.is_valid():
try:
form.save()
messages.success(request, _(u'Document type created successfully'))
return HttpResponseRedirect(reverse('document_type_list'))
except Exception as exception:
messages.error(request, _(u'Error creating document type; %(error)s') % {
'error': exception})
else:
form = DocumentTypeForm()
return render_to_response('generic_form.html', {
'title': _(u'create document type'),
'form': form,
},
context_instance=RequestContext(request))
def document_type_filename_list(request, document_type_id):
Permission.objects.check_permissions(request.user, [PERMISSION_DOCUMENT_TYPE_VIEW])
document_type = get_object_or_404(DocumentType, pk=document_type_id)
context = {
'object_list': document_type.documenttypefilename_set.all(),
'title': _(u'filenames for document type: %s') % document_type,
'object_name': _(u'document type'),
'navigation_object_name': 'document_type',
'document_type': document_type,
'list_object_variable_name': 'filename',
'hide_link': True,
'extra_columns': [
{
'name': _(u'enabled'),
'attribute': encapsulate(lambda x: two_state_template(x.enabled)),
}
]
}
return render_to_response('generic_list.html', context,
context_instance=RequestContext(request))
def document_type_filename_edit(request, document_type_filename_id):
Permission.objects.check_permissions(request.user, [PERMISSION_DOCUMENT_TYPE_EDIT])
document_type_filename = get_object_or_404(DocumentTypeFilename, pk=document_type_filename_id)
next = request.POST.get('next', request.GET.get('next', request.META.get('HTTP_REFERER', reverse('document_type_filename_list', args=[document_type_filename.document_type_id]))))
if request.method == 'POST':
form = DocumentTypeFilenameForm(instance=document_type_filename, data=request.POST)
if form.is_valid():
try:
document_type_filename.filename = form.cleaned_data['filename']
document_type_filename.enabled = form.cleaned_data['enabled']
document_type_filename.save()
messages.success(request, _(u'Document type filename edited successfully'))
return HttpResponseRedirect(next)
except Exception as exception:
messages.error(request, _(u'Error editing document type filename; %s') % exception)
else:
form = DocumentTypeFilenameForm(instance=document_type_filename)
return render_to_response('generic_form.html', {
'title': _(u'edit filename "%(filename)s" from document type "%(document_type)s"') % {
'document_type': document_type_filename.document_type, 'filename': document_type_filename
},
'form': form,
'next': next,
'filename': document_type_filename,
'document_type': document_type_filename.document_type,
'navigation_object_list': [
{'object': 'document_type', 'name': _(u'document type')},
{'object': 'filename', 'name': _(u'document type filename')}
],
},
context_instance=RequestContext(request))
def document_type_filename_delete(request, document_type_filename_id):
Permission.objects.check_permissions(request.user, [PERMISSION_DOCUMENT_TYPE_EDIT])
document_type_filename = get_object_or_404(DocumentTypeFilename, pk=document_type_filename_id)
post_action_redirect = reverse('document_type_filename_list', args=[document_type_filename.document_type_id])
previous = request.POST.get('previous', request.GET.get('previous', request.META.get('HTTP_REFERER', '/')))
next = request.POST.get('next', request.GET.get('next', post_action_redirect if post_action_redirect else request.META.get('HTTP_REFERER', '/')))
if request.method == 'POST':
try:
document_type_filename.delete()
messages.success(request, _(u'Document type filename: %s deleted successfully.') % document_type_filename)
except Exception as exception:
messages.error(request, _(u'Document type filename: %(document_type_filename)s delete error: %(error)s') % {
'document_type_filename': document_type_filename, 'error': exception})
return HttpResponseRedirect(next)
context = {
'object_name': _(u'document type filename'),
'delete_view': True,
'previous': previous,
'next': next,
'filename': document_type_filename,
'document_type': document_type_filename.document_type,
'navigation_object_list': [
{'object': 'document_type', 'name': _(u'document type')},
{'object': 'filename', 'name': _(u'document type filename')}
],
'title': _(u'Are you sure you wish to delete the filename: %(filename)s, from document type "%(document_type)s"?') % {
'document_type': document_type_filename.document_type, 'filename': document_type_filename
},
'form_icon': u'database_delete.png',
}
return render_to_response('generic_confirm.html', context,
context_instance=RequestContext(request))
def document_type_filename_create(request, document_type_id):
Permission.objects.check_permissions(request.user, [PERMISSION_DOCUMENT_TYPE_EDIT])
document_type = get_object_or_404(DocumentType, pk=document_type_id)
if request.method == 'POST':
form = DocumentTypeFilenameForm_create(request.POST)
if form.is_valid():
try:
document_type_filename = DocumentTypeFilename(
document_type=document_type,
filename=form.cleaned_data['filename'],
enabled=True
)
document_type_filename.save()
messages.success(request, _(u'Document type filename created successfully'))
return HttpResponseRedirect(reverse('document_type_filename_list', args=[document_type_id]))
except Exception as exception:
messages.error(request, _(u'Error creating document type filename; %(error)s') % {
'error': exception})
else:
form = DocumentTypeFilenameForm_create()
return render_to_response('generic_form.html', {
'title': _(u'create filename for document type: %s') % document_type,
'form': form,
'document_type': document_type,
'navigation_object_list': [
{'object': 'document_type', 'name': _(u'document type')},
],
},
context_instance=RequestContext(request))
def document_clear_image_cache(request):
Permission.objects.check_permissions(request.user, [PERMISSION_DOCUMENT_TOOLS])
previous = request.POST.get('previous', request.GET.get('previous', request.META.get('HTTP_REFERER', '/')))
if request.method == 'POST':
try:
Document.clear_image_cache()
messages.success(request, _(u'Document image cache cleared successfully'))
except Exception as exception:
messages.error(request, _(u'Error clearing document image cache; %s') % exception)
return HttpResponseRedirect(previous)
return render_to_response('generic_confirm.html', {
'previous': previous,
'title': _(u'Are you sure you wish to clear the document image cache?'),
'form_icon': u'camera_delete.png',
}, context_instance=RequestContext(request))
def document_version_list(request, document_pk):
document = get_object_or_404(Document, pk=document_pk)
try:
Permission.objects.check_permissions(request.user, [PERMISSION_DOCUMENT_VIEW])
except PermissionDenied:
AccessEntry.objects.check_access(PERMISSION_DOCUMENT_VIEW, request.user, document)
RecentDocument.objects.add_document_for_user(request.user, document)
context = {
'object_list': document.versions.order_by('-timestamp'),
'title': _(u'versions for document: %s') % document,
'hide_object': True,
'object': document,
'access_object': document,
'extra_columns': [
{
'name': _(u'version'),
'attribute': 'get_formated_version',
},
{
'name': _(u'time and date'),
'attribute': 'timestamp',
},
{
'name': _(u'mimetype'),
'attribute': 'mimetype',
},
{
'name': _(u'encoding'),
'attribute': 'encoding',
},
{
'name': _(u'filename'),
'attribute': 'filename',
},
{
'name': _(u'comment'),
'attribute': 'comment',
},
]
}
return render_to_response('generic_list.html', context,
context_instance=RequestContext(request))
def document_version_revert(request, document_version_pk):
document_version = get_object_or_404(DocumentVersion, pk=document_version_pk)
try:
Permission.objects.check_permissions(request.user, [PERMISSION_DOCUMENT_VERSION_REVERT])
except PermissionDenied:
AccessEntry.objects.check_access(PERMISSION_DOCUMENT_VERSION_REVERT, request.user, document_version.document)
previous = request.POST.get('previous', request.GET.get('previous', request.META.get('HTTP_REFERER', '/')))
if request.method == 'POST':
try:
document_version.revert()
messages.success(request, _(u'Document version reverted successfully'))
except Exception as exception:
messages.error(request, _(u'Error reverting document version; %s') % exception)
return HttpResponseRedirect(previous)
return render_to_response('generic_confirm.html', {
'previous': previous,
'object': document_version.document,
'title': _(u'Are you sure you wish to revert to this version?'),
        'message': _(u'All versions later than this one will be deleted too.'),
'form_icon': u'page_refresh.png',
}, context_instance=RequestContext(request))
# DEPRECATION: These document page transformation views are scheduled to be deleted once the transformations app is merged
def document_page_transformation_list(request, document_page_id):
document_page = get_object_or_404(DocumentPage, pk=document_page_id)
try:
Permission.objects.check_permissions(request.user, [PERMISSION_DOCUMENT_TRANSFORM])
except PermissionDenied:
AccessEntry.objects.check_access(PERMISSION_DOCUMENT_TRANSFORM, request.user, document_page.document)
context = {
'object_list': document_page.documentpagetransformation_set.all(),
'page': document_page,
'navigation_object_name': 'page',
'title': _(u'transformations for: %s') % document_page,
'web_theme_hide_menus': True,
'list_object_variable_name': 'transformation',
'extra_columns': [
{'name': _(u'order'), 'attribute': 'order'},
{'name': _(u'transformation'), 'attribute': encapsulate(lambda x: x.get_transformation_display())},
{'name': _(u'arguments'), 'attribute': 'arguments'}
],
'hide_link': True,
'hide_object': True,
}
return render_to_response(
'generic_list.html', context, context_instance=RequestContext(request)
)
def document_page_transformation_create(request, document_page_id):
document_page = get_object_or_404(DocumentPage, pk=document_page_id)
try:
Permission.objects.check_permissions(request.user, [PERMISSION_DOCUMENT_TRANSFORM])
except PermissionDenied:
AccessEntry.objects.check_access(PERMISSION_DOCUMENT_TRANSFORM, request.user, document_page.document)
if request.method == 'POST':
form = DocumentPageTransformationForm(request.POST, initial={'document_page': document_page})
if form.is_valid():
document_page.document.invalidate_cached_image(document_page.page_number)
form.save()
messages.success(request, _(u'Document page transformation created successfully.'))
return HttpResponseRedirect(reverse('document_page_transformation_list', args=[document_page_id]))
else:
form = DocumentPageTransformationForm(initial={'document_page': document_page})
return render_to_response('generic_form.html', {
'form': form,
'page': document_page,
'navigation_object_name': 'page',
'title': _(u'Create new transformation for page: %(page)s of document: %(document)s') % {
'page': document_page.page_number, 'document': document_page.document},
'web_theme_hide_menus': True,
}, context_instance=RequestContext(request))
def document_page_transformation_edit(request, document_page_transformation_id):
document_page_transformation = get_object_or_404(DocumentPageTransformation, pk=document_page_transformation_id)
try:
Permission.objects.check_permissions(request.user, [PERMISSION_DOCUMENT_TRANSFORM])
except PermissionDenied:
AccessEntry.objects.check_access(PERMISSION_DOCUMENT_TRANSFORM, request.user, document_page_transformation.document_page.document)
if request.method == 'POST':
form = DocumentPageTransformationForm(request.POST, instance=document_page_transformation)
if form.is_valid():
document_page_transformation.document_page.document.invalidate_cached_image(document_page_transformation.document_page.page_number)
form.save()
messages.success(request, _(u'Document page transformation edited successfully.'))
return HttpResponseRedirect(reverse('document_page_transformation_list', args=[document_page_transformation.document_page_id]))
else:
form = DocumentPageTransformationForm(instance=document_page_transformation)
return render_to_response('generic_form.html', {
'form': form,
'transformation': document_page_transformation,
'page': document_page_transformation.document_page,
'navigation_object_list': [
{'object': 'page'},
{'object': 'transformation', 'name': _(u'transformation')}
],
'title': _(u'Edit transformation "%(transformation)s" for: %(document_page)s') % {
'transformation': document_page_transformation.get_transformation_display(),
'document_page': document_page_transformation.document_page},
'web_theme_hide_menus': True,
}, context_instance=RequestContext(request))
def document_page_transformation_delete(request, document_page_transformation_id):
document_page_transformation = get_object_or_404(DocumentPageTransformation, pk=document_page_transformation_id)
try:
Permission.objects.check_permissions(request.user, [PERMISSION_DOCUMENT_TRANSFORM])
except PermissionDenied:
AccessEntry.objects.check_access(PERMISSION_DOCUMENT_TRANSFORM, request.user, document_page_transformation.document_page.document)
redirect_view = reverse('document_page_transformation_list', args=[document_page_transformation.document_page_id])
previous = request.POST.get('previous', request.GET.get('previous', request.META.get('HTTP_REFERER', redirect_view)))
if request.method == 'POST':
document_page_transformation.document_page.document.invalidate_cached_image(document_page_transformation.document_page.page_number)
document_page_transformation.delete()
messages.success(request, _(u'Document page transformation deleted successfully.'))
return HttpResponseRedirect(redirect_view)
return render_to_response('generic_confirm.html', {
'delete_view': True,
'page': document_page_transformation.document_page,
'transformation': document_page_transformation,
'navigation_object_list': [
{'object': 'page'},
{'object': 'transformation', 'name': _(u'transformation')}
],
'title': _(u'Are you sure you wish to delete transformation "%(transformation)s" for: %(document_page)s') % {
'transformation': document_page_transformation.get_transformation_display(),
'document_page': document_page_transformation.document_page},
'web_theme_hide_menus': True,
'previous': previous,
'form_icon': u'pencil_delete.png',
}, context_instance=RequestContext(request))
# File: tests/__init__.py (nprapps/elections16, FSFAP)
import app_config
app_config.configure_targets('test')
from fabfile import data
data.bootstrap_db()
# File: src/files/migrations/0001_initial.py (gkrnours/file-viewer, MIT)
# Generated by Django 2.1.3 on 2018-11-17 14:55
from django.db import migrations, models
import django.db.models.deletion
class Migration(migrations.Migration):
initial = True
dependencies = [
]
operations = [
migrations.CreateModel(
name='File',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('name', models.CharField(max_length=150)),
('type', models.CharField(max_length=10)),
('ctime', models.DateTimeField(auto_now=True)),
('file', models.FileField(upload_to='')),
],
),
migrations.CreateModel(
name='CSVFile',
fields=[
('file_ptr', models.OneToOneField(auto_created=True, on_delete=django.db.models.deletion.CASCADE, parent_link=True, primary_key=True, serialize=False, to='files.File')),
('head', models.TextField()),
],
bases=('files.file',),
),
migrations.CreateModel(
name='ImageFile',
fields=[
('file_ptr', models.OneToOneField(auto_created=True, on_delete=django.db.models.deletion.CASCADE, parent_link=True, primary_key=True, serialize=False, to='files.File')),
('thumbnail', models.FileField(null=True, upload_to='')),
],
bases=('files.file',),
),
]
# File: ambari-agent/src/main/python/ambari_agent/NetUtil.py (zhanganha/ambari, Apache-2.0)
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from urlparse import urlparse
import time
import logging
import httplib
logger = logging.getLogger()
class NetUtil:
CONNECT_SERVER_RETRY_INTERVAL_SEC = 10
HEARTBEAT_IDDLE_INTERVAL_SEC = 10
HEARTBEAT_NOT_IDDLE_INTERVAL_SEC = 5
# Url within server to request during status check. This url
# should return HTTP code 200
SERVER_STATUS_REQUEST = "{0}/cert/ca"
# For testing purposes
DEBUG_STOP_RETRIES_FLAG = False
def checkURL(self, url):
    """Try to connect to a given url. The result is True if the url returns
    HTTP code 200; in any other case (unreachable server, wrong HTTP code)
    the result is False.
"""
logger.info("Connecting to the following url " + url);
try:
parsedurl = urlparse(url)
ca_connection = httplib.HTTPSConnection(parsedurl[1])
ca_connection.request("GET", parsedurl[2])
response = ca_connection.getresponse()
status = response.status
logger.info("Calling url received " + str(status))
if status == 200:
return True
else:
return False
except Exception, e:
logger.info("Failed to connect to " + str(url) + " due to " + str(e))
return False
def try_to_connect(self, server_url, max_retries, logger = None):
    """Try to connect to a given url, sleeping for CONNECT_SERVER_RETRY_INTERVAL_SEC seconds
    between retries. No more than max_retries attempts are performed. If max_retries is -1,
    connection attempts are repeated until the server becomes reachable.
    Returns the count of retries.
"""
if logger is not None:
logger.info("DEBUG: Trying to connect to the server at " + server_url)
retries = 0
while (max_retries == -1 or retries < max_retries) and not self.DEBUG_STOP_RETRIES_FLAG:
server_is_up = self.checkURL(self.SERVER_STATUS_REQUEST.format(server_url))
if server_is_up:
break
else:
if logger is not None:
logger.info('Server at {0} is not reachable, sleeping for {1} seconds...'.format(server_url,
self.CONNECT_SERVER_RETRY_INTERVAL_SEC))
retries += 1
time.sleep(self.CONNECT_SERVER_RETRY_INTERVAL_SEC)
return retries
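The retry loop in try_to_connect is easier to see in isolation. A small sketch with a stand-in check callable (checkURL is replaced by a plain function here, and the sleep interval is shortened to zero for illustration):

```python
import time

def try_to_connect(check, max_retries, interval=0.0):
    # Mirrors NetUtil.try_to_connect: max_retries == -1 means retry until
    # the check passes. Returns the number of failed attempts made.
    retries = 0
    while max_retries == -1 or retries < max_retries:
        if check():
            break
        retries += 1
        time.sleep(interval)
    return retries

# A fake server that only answers on the third probe:
state = {'probes': 0}
def probe():
    state['probes'] += 1
    return state['probes'] >= 3

print(try_to_connect(probe, max_retries=10))  # 2 failed attempts, then success
```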
# File: lib/src/align/optimization/optimize_parameters.py (thormeier-fhnw-repos/ip619bb-i4ds02-audio-text-alignment, MIT)
from bin._bin import bin_print
from lib.src.align.aligner.google.GoogleFilesAligner import GoogleFilesAligner
from lib.src.align.compare.compare_alignments import compare_alignments
from typing import Dict, Any, List
from GPyOpt.methods import BayesianOptimization
import numpy as np
def optimize_parameters(
input_path: str,
output_path: str,
google_files_aligner: GoogleFilesAligner,
alignment_parameters: Dict[str, Any],
convergence_plot_file: str,
verbosity: int
) -> None:
"""
Tries to find the best parameters for google alignment.
:param input_path: Path to load all alignments from
:param output_path: Path to write the alignments to
:param google_files_aligner: GoogleFLiesAligner to re-align every epoch
:param alignment_parameters: Alignment parameters for comparison
:param convergence_plot_file: Where to save the convergence plot
:param verbosity: Verbosity of the output
:return: None
"""
def optimize_function(params: List) -> float:
"""
Function to optimize against
:param params: Parameters given by BOpt
:return: Calculated score
"""
bin_print(verbosity, 1, "Starting new iteration...")
google_files_aligner.alignment_parameters["algorithm"]["match_reward"] = params[0][0]
google_files_aligner.alignment_parameters["algorithm"]["mismatch_penalty"] = params[0][1]
google_files_aligner.alignment_parameters["algorithm"]["gap_penalty"] = params[0][2]
bin_print(verbosity, 3, "Configured params: ", google_files_aligner.alignment_parameters)
google_files_aligner.align_files(input_path, output_path, 0)
# Not "training_only", because we're using a further boiled down training set.
result = compare_alignments(input_path, 0, "hand", "google", False, alignment_parameters)
# Configurable, see config.example.yml
score = eval(google_files_aligner.alignment_parameters["optimize_params_formula"], {"__builtins__": None}, {
"deviation": result["scores"]["deviation"]["mean"],
"iou": result["ious"]["mean"],
"f1": result["appearance"]["f1_score"],
"precision": result["appearance"]["precision"],
"recall": result["appearance"]["recall"],
})
bin_print(verbosity, 1, "Parameters: ", params)
bin_print(verbosity, 1, "Achieved score (smaller == better): ", score)
return score
domain = [
{"name": "match_reward", "type": "continuous", "domain": (0, 100)},
{"name": "mismatch_penalty", "type": "continuous", "domain": (-100, 0)},
{"name": "gap_penalty", "type": "continuous", "domain": (-100, 0)},
]
bopt = BayesianOptimization(
f=optimize_function,
domain=domain,
model_type="GP",
acquisition_type="EI",
acquisition_jitter=0.05
)
bopt.run_optimization(max_iter=25)
bopt.plot_convergence(filename=convergence_plot_file)
bin_print(verbosity, 0, "Best values:", bopt.x_opt)
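The configurable score above is computed by eval with an empty __builtins__ and a locals dict of metric names. A standalone sketch of that pattern; only the metric names come from the code above, while the formula string and the metric values are made-up examples of what the config could contain:

```python
# Metric names mirror the locals passed to eval(); the values and the
# formula string itself are illustrative stand-ins for config values.
metrics = {
    'deviation': 1.5,
    'iou': 0.8,
    'f1': 0.75,
    'precision': 0.7,
    'recall': 0.81,
}
formula = "deviation * (1 - iou) + (1 - f1)"  # hypothetical optimize_params_formula
# The empty __builtins__ blocks open(), __import__(), etc. inside the formula.
score = eval(formula, {'__builtins__': None}, metrics)
print(round(score, 6))  # 0.55
```

Note that this restricts only builtins; it is not a full sandbox, so the formula source should still be trusted (here it comes from the operator's own config file).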
# File: test/test_client.py (klinger/EsiPy, BSD-3-Clause)
# -*- encoding: utf-8 -*-
# pylint: skip-file
from __future__ import absolute_import
from .mock import _all_auth_mock_
from .mock import public_incursion
from .mock import public_incursion_expired
from .mock import public_incursion_no_expires
from .mock import public_incursion_no_expires_second
from .mock import public_incursion_server_error
from .mock import public_incursion_warning
from esipy import App
from esipy import EsiClient
from esipy import EsiSecurity
from esipy.cache import BaseCache
from esipy.cache import DictCache
from esipy.cache import DummyCache
from requests.adapters import HTTPAdapter
from requests.exceptions import ConnectionError
import httmock
import mock
import six
import time
import unittest
import warnings
import logging
# set pyswagger logger to error, as it displays too much thing for test needs
pyswagger_logger = logging.getLogger('pyswagger')
pyswagger_logger.setLevel(logging.ERROR)
class TestEsiPy(unittest.TestCase):
CALLBACK_URI = "https://foo.bar/baz/callback"
LOGIN_EVE = "https://login.eveonline.com"
OAUTH_VERIFY = "%s/oauth/verify" % LOGIN_EVE
OAUTH_TOKEN = "%s/oauth/token" % LOGIN_EVE
CLIENT_ID = 'foo'
SECRET_KEY = 'bar'
BASIC_TOKEN = six.u('Zm9vOmJhcg==')
SECURITY_NAME = 'evesso'
@mock.patch('six.moves.urllib.request.urlopen')
def setUp(self, urlopen_mock):
# I hate those mock... thx urlopen instead of requests...
urlopen_mock.return_value = open('test/resources/swagger.json')
warnings.simplefilter('ignore')
self.app = App.create(
'https://esi.evetech.net/latest/swagger.json'
)
self.security = EsiSecurity(
app=self.app,
redirect_uri=TestEsiPy.CALLBACK_URI,
client_id=TestEsiPy.CLIENT_ID,
secret_key=TestEsiPy.SECRET_KEY,
)
self.cache = DictCache()
self.client = EsiClient(self.security, cache=self.cache)
self.client_no_auth = EsiClient(cache=self.cache, retry_requests=True)
def tearDown(self):
""" clear the cache so we don't have residual data """
self.cache._dict = {}
def test_esipy_client_no_args(self):
client_no_args = EsiClient()
self.assertIsNone(client_no_args.security)
self.assertTrue(isinstance(client_no_args.cache, DictCache))
self.assertEqual(
client_no_args._session.headers['User-Agent'],
'EsiPy/Client - https://github.com/Kyria/EsiPy'
)
self.assertEqual(client_no_args.raw_body_only, False)
def test_esipy_client_with_headers(self):
client_with_headers = EsiClient(headers={'User-Agent': 'foobar'})
self.assertEqual(
client_with_headers._session.headers['User-Agent'],
'foobar'
)
def test_esipy_client_with_adapter(self):
transport_adapter = HTTPAdapter()
client_with_adapters = EsiClient(
transport_adapter=transport_adapter
)
self.assertEqual(
client_with_adapters._session.get_adapter('http://'),
transport_adapter
)
self.assertEqual(
client_with_adapters._session.get_adapter('https://'),
transport_adapter
)
def test_esipy_client_without_cache(self):
client_without_cache = EsiClient(cache=None)
self.assertTrue(isinstance(client_without_cache.cache, DummyCache))
def test_esipy_client_with_cache(self):
cache = DictCache()
client_with_cache = EsiClient(cache=cache)
self.assertTrue(isinstance(client_with_cache.cache, BaseCache))
self.assertEqual(client_with_cache.cache, cache)
def test_esipy_client_wrong_cache(self):
with self.assertRaises(ValueError):
EsiClient(cache=DictCache)
def test_esipy_request_public(self):
with httmock.HTTMock(public_incursion):
incursions = self.client_no_auth.request(
self.app.op['get_incursions']()
)
self.assertEqual(incursions.data[0].type, 'Incursion')
self.assertEqual(incursions.data[0].faction_id, 500019)
def test_esipy_request_authed(self):
with httmock.HTTMock(*_all_auth_mock_):
self.security.auth('let it bee')
char_location = self.client.request(
self.app.op['get_characters_character_id_location'](
character_id=123456789
)
)
self.assertEqual(char_location.data.station_id, 60004756)
# force expire
self.security.token_expiry = 0
char_location_with_refresh = self.client.request(
self.app.op['get_characters_character_id_location'](
character_id=123456789
)
)
self.assertEqual(
char_location_with_refresh.data.station_id,
60004756
)
def test_client_cache_request(self):
@httmock.all_requests
def fail_if_request(url, request):
self.fail('Cached data is not supposed to do requests')
incursion_operation = self.app.op['get_incursions']
with httmock.HTTMock(public_incursion_no_expires):
incursions = self.client_no_auth.request(incursion_operation())
self.assertEqual(incursions.data[0].state, 'mobilizing')
with httmock.HTTMock(public_incursion_no_expires_second):
incursions = self.client_no_auth.request(incursion_operation())
self.assertEqual(incursions.data[0].state, 'established')
with httmock.HTTMock(public_incursion):
incursions = self.client_no_auth.request(incursion_operation())
self.assertEqual(incursions.data[0].state, 'mobilizing')
with httmock.HTTMock(fail_if_request):
incursions = self.client_no_auth.request(incursion_operation())
self.assertEqual(incursions.data[0].state, 'mobilizing')
def test_client_warning_header(self):
# deprecated warning
warnings.simplefilter('error')
with httmock.HTTMock(public_incursion_warning):
incursion_operation = self.app.op['get_incursions']
with self.assertRaises(UserWarning):
self.client_no_auth.request(incursion_operation())
def test_client_raw_body_only(self):
client = EsiClient(raw_body_only=True)
self.assertEqual(client.raw_body_only, True)
with httmock.HTTMock(public_incursion):
incursions = client.request(self.app.op['get_incursions']())
self.assertIsNone(incursions.data)
self.assertTrue(len(incursions.raw) > 0)
incursions = client.request(
self.app.op['get_incursions'](),
raw_body_only=False
)
self.assertIsNotNone(incursions.data)
def test_esipy_reuse_operation(self):
operation = self.app.op['get_incursions']()
with httmock.HTTMock(public_incursion):
incursions = self.client_no_auth.request(operation)
self.assertEqual(incursions.data[0].faction_id, 500019)
# this shouldn't create any errors
incursions = self.client_no_auth.request(operation)
self.assertEqual(incursions.data[0].faction_id, 500019)
def test_esipy_multi_request(self):
operation = self.app.op['get_incursions']()
with httmock.HTTMock(public_incursion):
count = 0
for req, incursions in self.client_no_auth.multi_request(
[operation, operation, operation], threads=2):
self.assertEqual(incursions.data[0].faction_id, 500019)
count += 1
# Check we made 3 requests
self.assertEqual(count, 3)
def test_esipy_backoff(self):
operation = self.app.op['get_incursions']()
start_calls = time.time()
with httmock.HTTMock(public_incursion_server_error):
incursions = self.client_no_auth.request(operation)
self.assertEqual(incursions.data.error, 'broke')
end_calls = time.time()
# Check we retried 5 times
self.assertEqual(incursions.data.count, 5)
# Check that backoff slept for a sum > 2 seconds
self.assertTrue(end_calls - start_calls > 2)
def test_esipy_timeout(self):
def send_function(*args, **kwargs):
""" manually create a ConnectionError to test the retry and be sure
no exception is thrown """
send_function.count += 1
raise ConnectionError
send_function.count = 0
self.client_no_auth._session.send = mock.MagicMock(
side_effect=send_function
)
operation = self.app.op['get_incursions']()
with httmock.HTTMock(public_incursion):
incursions = self.client_no_auth.request(operation)
# there shouldn't be any exceptions
self.assertEqual(incursions.status, 500)
self.assertEqual(send_function.count, 5)
def test_esipy_expired_response(self):
operation = self.app.op['get_incursions']
with httmock.HTTMock(public_incursion_expired):
warnings.filterwarnings('error', '.*returned expired result')
with self.assertRaises(UserWarning):
self.client_no_auth.request(operation())
warnings.resetwarnings()
warnings.simplefilter('ignore')
incursions = self.client_no_auth.request(operation())
            self.assertEqual(incursions.status, 200)
| 36.396226 | 79 | 0.659513 |
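The backoff test above asserts exactly five attempts and more than two seconds of cumulative sleep when the server keeps returning 5xx. A minimal, hypothetical sketch of such a retry loop — the name `request_with_backoff`, the injectable `sleep`, and the delay schedule are assumptions for illustration, not EsiPy's actual implementation:

```python
import time


def request_with_backoff(send, max_attempts=5, base_delay=0.1, sleep=time.sleep):
    """Retry `send` while it reports a server error (status >= 500),
    sleeping base_delay * 2**attempt between tries, and return the
    last response after max_attempts failures."""
    response = None
    for attempt in range(max_attempts):
        response = send()
        if response.get('status', 500) < 500:
            return response
        if attempt < max_attempts - 1:
            sleep(base_delay * (2 ** attempt))
    return response


# Simulate a server that always fails, counting calls and recorded delays.
calls = []
naps = []
result = request_with_backoff(
    lambda: (calls.append(1) or {'status': 500}),
    sleep=naps.append,
)
print(len(calls), result['status'])  # five attempts, last response returned
```

Injecting `sleep` keeps the sketch testable without real waiting, which is the same reason the original test measures wall-clock time instead of mocking.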
feb03dd961e78df977c4f495465f0e7d3a3fb25f | 136 | py | Python | allennlp/evaluation/__init__.py | HOZHENWAI/allennlp | 0d25f967c7996ad4980c7ee2f4c71294f51fef80 | [
"Apache-2.0"
] | null | null | null | allennlp/evaluation/__init__.py | HOZHENWAI/allennlp | 0d25f967c7996ad4980c7ee2f4c71294f51fef80 | [
"Apache-2.0"
] | null | null | null | allennlp/evaluation/__init__.py | HOZHENWAI/allennlp | 0d25f967c7996ad4980c7ee2f4c71294f51fef80 | [
"Apache-2.0"
] | null | null | null | from allennlp.evaluation.evaluator import Evaluator, SimpleEvaluator
from allennlp.evaluation.serializers.serializers import Serializer
| 45.333333 | 68 | 0.889706 |
5712dac7fe669984bc2f9c35fef13e256ddadb3f | 547 | py | Python | arc/arc001/arc001b-2.py | c-yan/atcoder | 940e49d576e6a2d734288fadaf368e486480a948 | [
"MIT"
] | 1 | 2019-08-21T00:49:34.000Z | 2019-08-21T00:49:34.000Z | arc/arc001/arc001b-2.py | c-yan/atcoder | 940e49d576e6a2d734288fadaf368e486480a948 | [
"MIT"
] | null | null | null | arc/arc001/arc001b-2.py | c-yan/atcoder | 940e49d576e6a2d734288fadaf368e486480a948 | [
"MIT"
] | null | null | null | # Warshall-Floyd (all-pairs shortest paths)
def warshall_floyd(n, d):
for i in range(n):
for j in range(n):
for k in range(n):
d[j][k] = min(d[j][k], d[j][i] + d[i][k])
A, B = map(int, input().split())
N = 41
d = [[float('inf')] * N for _ in range(N)]
for i in range(N):
d[i][i] = 0
for i in range(N):
for j in [1, 5, 10]:
if 0 <= i + j <= 40:
d[i][i + j] = 1
d[i+j][i] = 1
if 0 <= i - j <= 40:
d[i][i - j] = 1
d[i-j][i] = 1
warshall_floyd(N, d)
print(d[A][B])
| 19.535714 | 57 | 0.398537 |
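As a quick sanity check, the same Warshall-Floyd routine applied to a small 4-node path graph; the graph below is illustrative and not part of the ARC problem:

```python
INF = float('inf')


def warshall_floyd(n, d):
    # Identical loop structure to the solution above: the outer index i
    # is the intermediate vertex being allowed into paths j -> k.
    for i in range(n):
        for j in range(n):
            for k in range(n):
                d[j][k] = min(d[j][k], d[j][i] + d[i][k])


# Path graph 0 - 1 - 2 - 3 with edge weights 1, 2, 1.
d = [[0, 1, INF, INF],
     [1, 0, 2, INF],
     [INF, 2, 0, 1],
     [INF, INF, 1, 0]]
warshall_floyd(4, d)
print(d[0][3])  # 0 -> 1 -> 2 -> 3 costs 1 + 2 + 1 = 4
```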
c13965e9759ee3fc3a8496b466deee064c31d187 | 6,073 | py | Python | modin/engines/ray/pandas_on_ray/frame/partition_manager.py | alclol/modin | 91e89eafc71f6ef03eb44456d646a6c5971b1c4c | [
"Apache-2.0"
] | 1 | 2020-04-08T17:26:53.000Z | 2020-04-08T17:26:53.000Z | modin/engines/ray/pandas_on_ray/frame/partition_manager.py | alclol/modin | 91e89eafc71f6ef03eb44456d646a6c5971b1c4c | [
"Apache-2.0"
] | 1 | 2020-11-05T13:32:08.000Z | 2020-11-05T13:32:08.000Z | modin/engines/ray/pandas_on_ray/frame/partition_manager.py | alclol/modin | 91e89eafc71f6ef03eb44456d646a6c5971b1c4c | [
"Apache-2.0"
] | null | null | null | # Licensed to Modin Development Team under one or more contributor license agreements.
# See the NOTICE file distributed with this work for additional information regarding
# copyright ownership. The Modin Development Team licenses this file to you under the
# Apache License, Version 2.0 (the "License"); you may not use this file except in
# compliance with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software distributed under
# the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
# ANY KIND, either express or implied. See the License for the specific language
# governing permissions and limitations under the License.
import numpy as np
from modin.engines.ray.generic.frame.partition_manager import RayFrameManager
from .axis_partition import (
PandasOnRayFrameColumnPartition,
PandasOnRayFrameRowPartition,
)
from .partition import PandasOnRayFramePartition
from modin.error_message import ErrorMessage
from modin import __execution_engine__
if __execution_engine__ == "Ray":
import ray
@ray.remote
def func(df, other, apply_func, call_queue_df=None, call_queue_other=None):
if call_queue_df is not None and len(call_queue_df) > 0:
for call, kwargs in call_queue_df:
if isinstance(call, ray.ObjectID):
call = ray.get(call)
if isinstance(kwargs, ray.ObjectID):
kwargs = ray.get(kwargs)
df = call(df, **kwargs)
if call_queue_other is not None and len(call_queue_other) > 0:
for call, kwargs in call_queue_other:
if isinstance(call, ray.ObjectID):
call = ray.get(call)
if isinstance(kwargs, ray.ObjectID):
kwargs = ray.get(kwargs)
other = call(other, **kwargs)
return apply_func(df, other)
class PandasOnRayFrameManager(RayFrameManager):
"""This method implements the interface in `BaseFrameManager`."""
# This object uses RayRemotePartition objects as the underlying store.
_partition_class = PandasOnRayFramePartition
_column_partitions_class = PandasOnRayFrameColumnPartition
_row_partition_class = PandasOnRayFrameRowPartition
@classmethod
def get_indices(cls, axis, partitions, index_func=None):
"""This gets the internal indices stored in the partitions.
Note: These are the global indices of the object. This is mostly useful
when you have deleted rows/columns internally, but do not know
which ones were deleted.
Args:
            axis: The axis to extract the labels from (0 - index, 1 - columns).
            partitions: The array of partitions to extract the labels from.
            index_func: The function to be used to extract the labels.
Returns:
A Pandas Index object.
"""
ErrorMessage.catch_bugs_and_request_email(not callable(index_func))
func = cls.preprocess_func(index_func)
if axis == 0:
# We grab the first column of blocks and extract the indices
new_idx = (
[idx.apply(func).oid for idx in partitions.T[0]]
if len(partitions.T)
else []
)
else:
new_idx = (
[idx.apply(func).oid for idx in partitions[0]]
if len(partitions)
else []
)
new_idx = ray.get(new_idx)
return new_idx[0].append(new_idx[1:]) if len(new_idx) else new_idx
@classmethod
def groupby_reduce(
cls, axis, partitions, by, map_func, reduce_func
): # pragma: no cover
map_func = ray.put(map_func)
by_parts = np.squeeze(by)
if len(by_parts.shape) == 0:
by_parts = np.array([by_parts.item()])
new_partitions = np.array(
[
[
PandasOnRayFramePartition(
func.remote(
part.oid,
by_parts[col_idx].oid if axis else by_parts[row_idx].oid,
map_func,
part.call_queue,
by_parts[col_idx].call_queue
if axis
else by_parts[row_idx].call_queue,
)
)
for col_idx, part in enumerate(partitions[row_idx])
]
for row_idx in range(len(partitions))
]
)
return cls.map_axis_partitions(axis, new_partitions, reduce_func)
@classmethod
def broadcast_apply(cls, axis, apply_func, left, right):
map_func = ray.put(apply_func)
right_parts = np.squeeze(right)
if len(right_parts.shape) == 0:
right_parts = np.array([right_parts.item()])
assert (
len(right_parts.shape) == 1
), "Invalid broadcast partitions shape {}\n{}".format(
right_parts.shape, [[i.get() for i in j] for j in right_parts]
)
return np.array(
[
[
PandasOnRayFramePartition(
func.remote(
part.oid,
right_parts[col_idx].oid
if axis
else right_parts[row_idx].oid,
map_func,
part.call_queue,
right_parts[col_idx].call_queue
if axis
else right_parts[row_idx].call_queue,
)
)
for col_idx, part in enumerate(left[row_idx])
]
for row_idx in range(len(left))
]
)
| 40.218543 | 87 | 0.574675 |
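`groupby_reduce` above schedules a map over every partition followed by an axis-wise reduce, dispatched through Ray. A plain-NumPy sketch of that map-then-reduce shape, with no Ray involved and hypothetical names, to show the data flow only:

```python
import numpy as np


def map_then_reduce(parts, map_func, reduce_func):
    """Apply map_func to each block, then reduce_func down each column of
    blocks -- the same map/reduce pattern the partition manager runs remotely."""
    mapped = [[map_func(block) for block in row] for row in parts]
    # Group the blocks by column position, stack each column, and reduce it.
    cols = list(zip(*mapped))
    return [reduce_func(np.concatenate(col, axis=0)) for col in cols]


# A 2x2 grid of partitions holding row fragments of one logical frame.
parts = [[np.array([[1, 2]]), np.array([[3]])],
         [np.array([[4, 5]]), np.array([[6]])]]
out = map_then_reduce(parts, lambda b: b * 2, lambda b: b.sum())
print(out)  # per-column sums of the doubled blocks
```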
7570f4110fff98f51d67a4e7e0595efcba54a689 | 3,932 | py | Python | lib/carbon/amqp_publisher.py | postwait/carbon | 5e4a0aab4df5a7a1aef45f6a9b096dd580cdbabf | [
"Apache-2.0"
] | null | null | null | lib/carbon/amqp_publisher.py | postwait/carbon | 5e4a0aab4df5a7a1aef45f6a9b096dd580cdbabf | [
"Apache-2.0"
] | null | null | null | lib/carbon/amqp_publisher.py | postwait/carbon | 5e4a0aab4df5a7a1aef45f6a9b096dd580cdbabf | [
"Apache-2.0"
] | null | null | null | #!/usr/bin/env python
"""
Copyright 2009 Lucio Torre <lucio.torre@canonical.com>
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
Will publish metrics over AMQP
"""
import os
import time
from optparse import OptionParser
from twisted.internet.defer import inlineCallbacks
from twisted.internet import reactor
from twisted.internet.protocol import ClientCreator
from txamqp.protocol import AMQClient
from txamqp.client import TwistedDelegate
from txamqp.content import Content
import txamqp.spec
@inlineCallbacks
def writeMetric(metric_path, value, timestamp, host, port, username, password,
vhost, exchange, spec=None, channel_number=1, ssl=False):
if not spec:
spec = txamqp.spec.load(os.path.normpath(
os.path.join(os.path.dirname(__file__), 'amqp0-8.xml')))
delegate = TwistedDelegate()
connector = ClientCreator(reactor, AMQClient, delegate=delegate,
vhost=vhost, spec=spec)
if ssl:
from twisted.internet.ssl import ClientContextFactory
conn = yield connector.connectSSL(host, port, ClientContextFactory())
else:
conn = yield connector.connectTCP(host, port)
yield conn.authenticate(username, password)
channel = yield conn.channel(channel_number)
yield channel.channel_open()
yield channel.exchange_declare(exchange=exchange, type="topic",
durable=True, auto_delete=False)
message = Content("%f %d" % (value, timestamp))
message["delivery mode"] = 2
channel.basic_publish(exchange=exchange, content=message, routing_key=metric_path)
yield channel.channel_close()
def main():
parser = OptionParser(usage="%prog [options] <metric> <value> [timestamp]")
parser.add_option("-t", "--host", dest="host",
help="host name", metavar="HOST", default="localhost")
parser.add_option("-p", "--port", dest="port", type=int,
help="port number", metavar="PORT",
default=5672)
parser.add_option("-u", "--user", dest="username",
help="username", metavar="USERNAME",
default="guest")
parser.add_option("-w", "--password", dest="password",
help="password", metavar="PASSWORD",
default="guest")
parser.add_option("-v", "--vhost", dest="vhost",
help="vhost", metavar="VHOST",
default="/")
parser.add_option("-s", "--ssl", dest="ssl",
help="ssl", metavar="SSL", action="store_true",
default=False)
parser.add_option("-e", "--exchange", dest="exchange",
help="exchange", metavar="EXCHANGE",
default="graphite")
(options, args) = parser.parse_args()
try:
metric_path = args[0]
value = float(args[1])
if len(args) > 2:
timestamp = int(args[2])
else:
timestamp = time.time()
except ValueError:
parser.print_usage()
raise SystemExit(1)
d = writeMetric(metric_path, value, timestamp, options.host, options.port,
options.username, options.password, vhost=options.vhost,
exchange=options.exchange, ssl=options.ssl)
d.addErrback(lambda f: f.printTraceback())
d.addBoth(lambda _: reactor.stop())
reactor.run()
if __name__ == "__main__":
main()
| 33.606838 | 86 | 0.638606 |
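The publisher above routes each metric to a topic exchange using the metric path as the routing key. For illustration only, a hedged sketch of how AMQP topic bindings match routing keys (`*` matches exactly one dot-separated word, `#` matches zero or more); the real matching happens broker-side, and this helper is not part of carbon:

```python
def topic_matches(binding, routing_key):
    """Minimal AMQP topic-exchange match: '*' = one word, '#' = zero+ words."""
    def match(pat, key):
        if not pat:
            return not key
        if pat[0] == '#':
            # '#' may absorb any number of remaining words, including none.
            return any(match(pat[1:], key[i:]) for i in range(len(key) + 1))
        if not key:
            return False
        return (pat[0] == '*' or pat[0] == key[0]) and match(pat[1:], key[1:])
    return match(binding.split('.'), routing_key.split('.'))


print(topic_matches('carbon.*.cpu', 'carbon.host1.cpu'))   # True
print(topic_matches('carbon.#', 'carbon.host1.cpu.user'))  # True
```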
e20eb9000ae8789fe2fc8e41d5aadb1473670a00 | 2,172 | py | Python | docs/source/conf.py | Casper-Smet/turing-route-66 | b797485586c3491ddbcd76367aff88b7672d8d9a | [
"MIT"
] | 3 | 2019-12-03T09:47:02.000Z | 2019-12-03T09:47:51.000Z | docs/source/conf.py | Casper-Smet/turing-route-66 | b797485586c3491ddbcd76367aff88b7672d8d9a | [
"MIT"
] | null | null | null | docs/source/conf.py | Casper-Smet/turing-route-66 | b797485586c3491ddbcd76367aff88b7672d8d9a | [
"MIT"
] | null | null | null | # Configuration file for the Sphinx documentation builder.
#
# This file only contains a selection of the most common options. For a full
# list see the documentation:
# https://www.sphinx-doc.org/en/master/usage/configuration.html
# -- Path setup --------------------------------------------------------------
# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
#
import os
import os.path as path
import sys
sys.path.insert(0, path.abspath(path.join(os.getcwd(), "../../")))
# -- Project information -----------------------------------------------------
project = 'Turing Route 66'
copyright = '2019, Stan Meyberg, Thijs van den Berg, Casper Smet'
author = 'Stan Meyberg, Thijs van den Berg, Casper Smet'
# The full version, including alpha/beta/rc tags
release = '2019.12.13.1'
# -- General configuration ---------------------------------------------------
# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
# ones.
extensions = [
'sphinx.ext.autodoc',
'sphinx.ext.coverage',
]
# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']
# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
# This pattern also affects html_static_path and html_extra_path.
exclude_patterns = []
# -- Options for HTML output -------------------------------------------------
# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
#
html_theme = 'classic'
# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
html_static_path = ['_static']
# Starting document, not needed in Sphinx >2.0 but readthedocs.io runs Sphinx 1.8.5
master_doc = 'index'
| 35.032258 | 83 | 0.67081 |
da4152a87beeb3962b32e7b5ac79e3ff9bd9746a | 256 | py | Python | templates.py | scienceburger/universe | 57b33c2f79d2e650631acbb382b54445fcd0b8ec | [
"MIT"
] | null | null | null | templates.py | scienceburger/universe | 57b33c2f79d2e650631acbb382b54445fcd0b8ec | [
"MIT"
] | 1 | 2020-03-28T13:14:58.000Z | 2020-03-28T13:14:58.000Z | templates.py | scienceburger/universe | 57b33c2f79d2e650631acbb382b54445fcd0b8ec | [
"MIT"
] | null | null | null | UNIVERSE_TITLE = "U N I V E R S E"
SIGNUP_BUTTON = '''<input type='submit' name='signup' value='Sign me up!'>'''
LOGIN_BUTTON = '''<input type='submit' name='login' value='Log me in!'>'''
LOGOUT_LINK = '''<a href='logout'>Log out!</a>'''
EOL = '''<br/>'''
| 42.666667 | 77 | 0.605469 |
a6279a19edcd47f0f6c3c592bbe259b912b20b80 | 285 | py | Python | nautils/config.py | Hackerjef/nadie-utils | 0c43c3b375cb29ff4103ef6625651a162fe22741 | [
"Unlicense"
] | null | null | null | nautils/config.py | Hackerjef/nadie-utils | 0c43c3b375cb29ff4103ef6625651a162fe22741 | [
"Unlicense"
] | null | null | null | nautils/config.py | Hackerjef/nadie-utils | 0c43c3b375cb29ff4103ef6625651a162fe22741 | [
"Unlicense"
] | null | null | null | import yaml
from functools import reduce
with open('config.yaml', 'r') as f:
config = yaml.load(f.read(), Loader=yaml.FullLoader)
def Getcfgvalue(keys, default):
return reduce(lambda d, key: d.get(key, default) if isinstance(d, dict) else default, keys.split("."), config)
| 25.909091 | 114 | 0.701754 |
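`Getcfgvalue` resolves dotted keys with `functools.reduce`. The same lookup isolated from the config-file I/O, under an illustrative name, shows the traversal: each step descends one dict level, falling back to the default as soon as a non-dict is reached.

```python
from functools import reduce


def get_by_path(mapping, dotted_key, default=None):
    """Reduce-based dotted lookup, e.g. 'db.port' -> mapping['db']['port']."""
    return reduce(
        lambda d, key: d.get(key, default) if isinstance(d, dict) else default,
        dotted_key.split('.'),
        mapping,
    )


cfg = {'db': {'host': 'localhost', 'port': 5432}}
print(get_by_path(cfg, 'db.port'))       # 5432
print(get_by_path(cfg, 'db.user', '?'))  # '?'
```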
edc45a266a29443711b41b8b90ae77b97e93270a | 2,932 | py | Python | law/target/base.py | HerrHorizontal/law | c31091d3bf39a25e79b3796ed5742346ddff8b77 | [
"BSD-3-Clause"
] | null | null | null | law/target/base.py | HerrHorizontal/law | c31091d3bf39a25e79b3796ed5742346ddff8b77 | [
"BSD-3-Clause"
] | null | null | null | law/target/base.py | HerrHorizontal/law | c31091d3bf39a25e79b3796ed5742346ddff8b77 | [
"BSD-3-Clause"
] | null | null | null | # coding: utf-8
"""
Custom base target definition.
"""
__all__ = ["Target"]
import logging
from abc import abstractmethod
import luigi
from law.config import Config
from law.util import colored, create_hash
logger = logging.getLogger(__name__)
class Target(luigi.target.Target):
def __init__(self, *args, **kwargs):
self.optional = kwargs.pop("optional", False)
self.external = kwargs.pop("external", False)
luigi.target.Target.__init__(self, *args, **kwargs)
def __repr__(self):
color = Config.instance().get_expanded_boolean("target", "colored_repr")
return self.repr(color=color)
def __str__(self):
color = Config.instance().get_expanded_boolean("target", "colored_str")
return self.repr(color=color)
def __hash__(self):
return self.hash
@property
def hash(self):
return create_hash(self.uri(), to_int=True)
def repr(self, color=None):
if color is None:
color = Config.instance().get_expanded_boolean("target", "colored_repr")
class_name = self._repr_class_name(self.__class__.__name__, color=color)
parts = [self._repr_pair(*pair, color=color) for pair in self._repr_pairs()]
parts += [self._repr_flag(flag, color=color) for flag in self._repr_flags()]
return "{}({})".format(class_name, ", ".join(parts))
def colored_repr(self):
# deprecation warning until v0.1
logger.warning("the use of {0}.colored_repr() is deprecated, please use "
"{0}.repr(color=True) instead".format(self.__class__.__name__))
return self.repr(color=True)
def _repr_pairs(self):
return []
def _repr_flags(self):
flags = []
if self.optional:
flags.append("optional")
if self.external:
flags.append("external")
return flags
def _repr_class_name(self, name, color=False):
return colored(name, "cyan") if color else name
def _repr_pair(self, key, value, color=False):
return "{}={}".format(colored(key, color="blue", style="bright") if color else key, value)
def _repr_flag(self, name, color=False):
return colored(name, color="magenta") if color else name
def _copy_kwargs(self):
return {"optional": self.optional, "external": self.external}
def status_text(self, max_depth=0, flags=None, color=False, exists=None):
if exists is None:
exists = self.exists()
if exists:
text = "existent"
_color = "green"
else:
text = "absent"
_color = "grey" if self.optional else "red"
return colored(text, _color, style="bright") if color else text
@abstractmethod
def exists(self):
return
@abstractmethod
def remove(self, silent=True):
return
@abstractmethod
def uri(self):
return
| 26.654545 | 98 | 0.627558 |
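`Target.hash` above delegates to `law.util.create_hash(self.uri(), to_int=True)`. A hypothetical stand-in showing one way such a deterministic URI hash could work — the real law implementation may differ, so treat this purely as a sketch of the contract (stable across runs, optionally an int for `__hash__`):

```python
import hashlib


def create_hash(value, to_int=False):
    """Hypothetical stand-in for law.util.create_hash: a short, deterministic
    digest of a target's URI, optionally returned as an int."""
    digest = hashlib.sha256(str(value).encode('utf-8')).hexdigest()[:10]
    return int(digest, 16) if to_int else digest


h1 = create_hash('file:///data/a.txt', to_int=True)
h2 = create_hash('file:///data/a.txt', to_int=True)
print(h1 == h2, isinstance(h1, int))  # deterministic int hash
```

Using a cryptographic digest instead of Python's builtin `hash()` matters here: builtin string hashing is randomized per process, which would break cross-run identity.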
3f50828836ba462a0d0366e42cda3bfdaf5eacf6 | 2,881 | py | Python | flask_syllabus.py | tcloaken/cs322proj2 | 2180d165d6424bbb6ca574892e7e43dde8f3932d | [
"Artistic-2.0"
] | null | null | null | flask_syllabus.py | tcloaken/cs322proj2 | 2180d165d6424bbb6ca574892e7e43dde8f3932d | [
"Artistic-2.0"
] | null | null | null | flask_syllabus.py | tcloaken/cs322proj2 | 2180d165d6424bbb6ca574892e7e43dde8f3932d | [
"Artistic-2.0"
] | null | null | null | """
Very simple Flask web site, with one page
displaying a course schedule.
"""
import flask
from flask import render_template
from flask import request
from flask import url_for
import json
import logging
# Date handling
import arrow # Replacement for datetime, based on moment.js
import datetime # But we still need time
from dateutil import tz # For interpreting local times
# Our own module
import pre # Preprocess schedule file
###
# Globals
###
app = flask.Flask(__name__)
import CONFIG
currentDate = arrow.now()
###
# Pages
###
@app.route("/")
@app.route("/index")
@app.route("/schedule")
def index():
app.logger.debug("Main page entry")
if 'schedule' not in flask.session:
app.logger.debug("Processing raw schedule file")
raw = open(CONFIG.schedule)
flask.session['schedule'] = pre.process(raw)
print(format_arrow_date(currentDate))
flask.g.today = arrow.now()
return flask.render_template('syllabus.html')
@app.errorhandler(404)
def page_not_found(error):
app.logger.debug("Page not found")
flask.session['linkback'] = flask.url_for("index")
return flask.render_template('page_not_found.html'), 404
#################
#
# Functions used within the templates
#
#################
@app.template_filter( 'fmtdate' )
def format_arrow_date( date ):
try:
normal = arrow.get( date )
return normal.format("ddd MM/DD/YYYY")
except:
return "(bad date)"
@app.template_filter( 'isweek' )
def format_week( i ):
"""
args : a string of date (i)
returns : a bool value
True if current date is within 7 days past date (i)
"""
refdate = i.split()
date = format_arrow_date(currentDate).split()
isSeven = False
fmt = '%m/%d/%Y'
dtweek = datetime.datetime.strptime(date[1],fmt)
refweek = datetime.datetime.strptime(refdate[1],fmt)
dtweek = dtweek.timetuple()
refweek = refweek.timetuple()
if (dtweek.tm_yday - refweek.tm_yday <= 7) and (dtweek.tm_yday - refweek.tm_yday >=0):
isSeven = True
return isSeven
#############
#
# Set up to run from cgi-bin script, from
# gunicorn, or stand-alone.
#
if __name__ == "__main__":
# Standalone, with a dynamically generated
# secret key, accessible outside only if debugging is not on
import uuid
app.secret_key = str(uuid.uuid4())
app.debug=CONFIG.DEBUG
app.logger.setLevel(logging.DEBUG)
print("Opening for global access on port {}".format(CONFIG.PORT))
app.run(port=CONFIG.PORT, host="0.0.0.0")
else:
# Running from cgi-bin or from gunicorn WSGI server,
# which makes the call to app.run. Gunicorn may invoke more than
# one instance for concurrent service. It is essential all
# instances share the same secret key for session management.
app.secret_key = CONFIG.secret_key
app.debug=False
| 24.415254 | 90 | 0.668518 |
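The `isweek` filter above compares `tm_yday` values, which fails when the two dates straddle a year boundary (e.g. Dec 30 vs. Jan 2). A small sketch of the same "0–7 days after the reference date" window using `timedelta` arithmetic, which handles that case; the helper name is illustrative:

```python
from datetime import date, timedelta


def within_week(ref, today):
    """True if `today` falls 0-7 days after `ref`, inclusive."""
    delta = today - ref
    return timedelta(0) <= delta <= timedelta(days=7)


print(within_week(date(2019, 12, 9), date(2019, 12, 13)))  # True
print(within_week(date(2019, 12, 30), date(2020, 1, 2)))   # True, across years
print(within_week(date(2019, 12, 9), date(2019, 12, 20)))  # False
```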
633e3df73b4d5e590d355641e8f1560d3c07264c | 245 | py | Python | examples/models/bernoulli/create_dropout_rbm.py | anukaal/learnergy | 704fc2b3fcb80df41ed28d750dc4e6475df23315 | [
"Apache-2.0"
] | 39 | 2020-02-27T00:47:45.000Z | 2022-03-28T14:57:26.000Z | examples/models/bernoulli/create_dropout_rbm.py | anukaal/learnergy | 704fc2b3fcb80df41ed28d750dc4e6475df23315 | [
"Apache-2.0"
] | 5 | 2021-05-11T08:23:37.000Z | 2022-01-20T12:50:59.000Z | examples/models/bernoulli/create_dropout_rbm.py | anukaal/learnergy | 704fc2b3fcb80df41ed28d750dc4e6475df23315 | [
"Apache-2.0"
] | 6 | 2020-04-15T00:23:13.000Z | 2022-01-29T16:22:05.000Z | from learnergy.models.bernoulli import DropoutRBM
# Creates a DropoutRBM-based class
model = DropoutRBM(n_visible=784, n_hidden=128, steps=1, learning_rate=0.1,
momentum=0, decay=0, temperature=1, dropout=0.5, use_gpu=False)
| 40.833333 | 82 | 0.730612 |
a0ea5cfd8f3b437f288730a881367083e1a2b93f | 1,721 | py | Python | projects/capstone/open_projects/robot_motion_planning/robot.py | anandsaha/ml-nanodegree | b16c98eb7f8580f64ead501de5eb5d57e07b0275 | [
"MIT"
] | 2 | 2018-12-17T21:42:36.000Z | 2018-12-18T15:22:20.000Z | projects/capstone/open_projects/robot_motion_planning/robot.py | anandsaha/ml-nanodegree | b16c98eb7f8580f64ead501de5eb5d57e07b0275 | [
"MIT"
] | 13 | 2020-11-13T17:59:17.000Z | 2022-03-11T23:29:33.000Z | projects/capstone/open_projects/robot_motion_planning/robot.py | anandsaha/ml-nanodegree | b16c98eb7f8580f64ead501de5eb5d57e07b0275 | [
"MIT"
] | null | null | null | import numpy as np
class Robot(object):
def __init__(self, maze_dim):
'''
Use the initialization function to set up attributes that your robot
will use to learn and navigate the maze. Some initial attributes are
provided based on common information, including the size of the maze
the robot is placed in.
'''
self.location = [0, 0]
self.heading = 'up'
self.maze_dim = maze_dim
def next_move(self, sensors):
'''
Use this function to determine the next move the robot should make,
based on the input from the sensors after its previous move. Sensor
inputs are a list of three distances from the robot's left, front, and
right-facing sensors, in that order.
Outputs should be a tuple of two values. The first value indicates
robot rotation (if any), as a number: 0 for no rotation, +90 for a
90-degree rotation clockwise, and -90 for a 90-degree rotation
counterclockwise. Other values will result in no rotation. The second
value indicates robot movement, and the robot will attempt to move the
number of indicated squares: a positive number indicates forwards
movement, while a negative number indicates backwards movement. The
robot may move a maximum of three units per turn. Any excess movement
is ignored.
If the robot wants to end a run (e.g. during the first training run in
        the maze) then returning the tuple ('Reset', 'Reset') will indicate to
the tester to end the run and return the robot to the start.
'''
rotation = 0
movement = 0
return rotation, movement | 41.97561 | 78 | 0.664149 |
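A `next_move` implementation must keep `self.heading` consistent with the rotation it returns (+90 clockwise, -90 counterclockwise, 0 none). A hypothetical helper sketching that bookkeeping, separate from the skeleton above:

```python
# Headings listed in clockwise order, matching a +90 rotation.
HEADINGS = ['up', 'right', 'down', 'left']


def turn(heading, rotation):
    """Return the new heading after applying a rotation of 0, +90, or -90."""
    step = {0: 0, 90: 1, -90: -1}[rotation]
    return HEADINGS[(HEADINGS.index(heading) + step) % 4]


print(turn('up', 90))    # right
print(turn('up', -90))   # left
print(turn('left', 90))  # up
```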
d428579c07da80390d9718b1b6e566bcfe1e64c1 | 5,866 | py | Python | pypcd/tests/test_pypcd.py | JeremyBYU/pypcd | a535a7b5f8b1adf12019c35e84612c6395765544 | [
"BSD-3-Clause"
] | 20 | 2019-03-22T05:02:13.000Z | 2022-03-30T06:52:11.000Z | pypcd/tests/test_pypcd.py | JeremyBYU/pypcd | a535a7b5f8b1adf12019c35e84612c6395765544 | [
"BSD-3-Clause"
] | 6 | 2020-01-09T15:06:26.000Z | 2022-03-24T07:42:00.000Z | pypcd/tests/test_pypcd.py | JeremyBYU/pypcd | a535a7b5f8b1adf12019c35e84612c6395765544 | [
"BSD-3-Clause"
] | 16 | 2019-01-24T12:59:42.000Z | 2021-12-07T23:28:40.000Z | """
this is just a basic sanity check, not a really legit test suite.
TODO maybe download data here instead of having it in repo
"""
import pytest
import numpy as np
import os
import shutil
import tempfile
header1 = """\
# .PCD v0.7 - Point Cloud Data file format
VERSION 0.7
FIELDS x y z i
SIZE 4 4 4 4
TYPE F F F F
COUNT 1 1 1 1
WIDTH 500028
HEIGHT 1
VIEWPOINT 0 0 0 1 0 0 0
POINTS 500028
DATA binary_compressed
"""
header2 = """\
VERSION .7
FIELDS x y z normal_x normal_y normal_z curvature boundary k vp_x vp_y vp_z principal_curvature_x principal_curvature_y principal_curvature_z pc1 pc2
SIZE 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4
TYPE F F F F F F F F F F F F F F F F F
COUNT 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
WIDTH 19812
HEIGHT 1
VIEWPOINT 0.0 0.0 0.0 1.0 0.0 0.0 0.0
POINTS 19812
DATA ascii
"""
@pytest.fixture
def pcd_fname():
import pypcd
return os.path.join(pypcd.__path__[0], 'test_data',
'partial_cup_model.pcd')
@pytest.fixture
def ascii_pcd_fname():
import pypcd
return os.path.join(pypcd.__path__[0], 'test_data',
'ascii.pcd')
@pytest.fixture
def bin_pcd_fname():
import pypcd
return os.path.join(pypcd.__path__[0], 'test_data',
'bin.pcd')
def cloud_centroid(pc):
    xyz = np.empty((pc.points, 3), dtype=np.float64)
xyz[:, 0] = pc.pc_data['x']
xyz[:, 1] = pc.pc_data['y']
xyz[:, 2] = pc.pc_data['z']
return xyz.mean(0)
def test_parse_header():
from pypcd.pypcd import parse_header
lines = header1.split('\n')
md = parse_header(lines)
assert (md['version'] == '0.7')
assert (md['fields'] == ['x', 'y', 'z', 'i'])
assert (md['size'] == [4, 4, 4, 4])
assert (md['type'] == ['F', 'F', 'F', 'F'])
assert (md['count'] == [1, 1, 1, 1])
assert (md['width'] == 500028)
assert (md['height'] == 1)
assert (md['viewpoint'] == [0, 0, 0, 1, 0, 0, 0])
assert (md['points'] == 500028)
assert (md['data'] == 'binary_compressed')
def test_from_path(pcd_fname):
from pypcd import pypcd
pc = pypcd.PointCloud.from_path(pcd_fname)
fields = 'x y z normal_x normal_y normal_z curvature boundary k vp_x vp_y vp_z principal_curvature_x principal_curvature_y principal_curvature_z pc1 pc2'.split()
for fld1, fld2 in zip(pc.fields, fields):
assert(fld1 == fld2)
assert (pc.width == 19812)
assert (len(pc.pc_data) == 19812)
def test_add_fields(pcd_fname):
from pypcd import pypcd
pc = pypcd.PointCloud.from_path(pcd_fname)
old_md = pc.get_metadata()
# new_dt = [(f, pc.pc_data.dtype[f]) for f in pc.pc_data.dtype.fields]
# new_data = [pc.pc_data[n] for n in pc.pc_data.dtype.names]
md = {'fields': ['bla', 'bar'], 'count': [1, 1], 'size': [4, 4],
'type': ['F', 'F']}
d = np.rec.fromarrays((np.random.random(len(pc.pc_data)),
np.random.random(len(pc.pc_data))))
newpc = pypcd.add_fields(pc, md, d)
new_md = newpc.get_metadata()
# print len(old_md['fields']), len(md['fields']), len(new_md['fields'])
# print old_md['fields'], md['fields'], new_md['fields']
assert(len(old_md['fields'])+len(md['fields']) == len(new_md['fields']))
def test_path_roundtrip_ascii(pcd_fname):
from pypcd import pypcd
pc = pypcd.PointCloud.from_path(pcd_fname)
md = pc.get_metadata()
tmp_dirname = tempfile.mkdtemp(suffix='_pypcd', prefix='tmp')
tmp_fname = os.path.join(tmp_dirname, 'out.pcd')
pc.save_pcd(tmp_fname, compression='ascii')
assert(os.path.exists(tmp_fname))
pc2 = pypcd.PointCloud.from_path(tmp_fname)
md2 = pc2.get_metadata()
assert(md == md2)
np.testing.assert_equal(pc.pc_data, pc2.pc_data)
if os.path.exists(tmp_fname):
os.unlink(tmp_fname)
os.removedirs(tmp_dirname)
def test_path_roundtrip_binary(pcd_fname):
from pypcd import pypcd
pc = pypcd.PointCloud.from_path(pcd_fname)
md = pc.get_metadata()
tmp_dirname = tempfile.mkdtemp(suffix='_pypcd', prefix='tmp')
tmp_fname = os.path.join(tmp_dirname, 'out.pcd')
pc.save_pcd(tmp_fname, compression='binary')
assert(os.path.exists(tmp_fname))
pc2 = pypcd.PointCloud.from_path(tmp_fname)
md2 = pc2.get_metadata()
for k, v in md2.items():
if k == 'data':
assert v == 'binary'
else:
assert v == md[k]
np.testing.assert_equal(pc.pc_data, pc2.pc_data)
if os.path.exists(tmp_fname):
os.unlink(tmp_fname)
os.removedirs(tmp_dirname)
def test_path_roundtrip_binary_compressed(pcd_fname):
from pypcd import pypcd
pc = pypcd.PointCloud.from_path(pcd_fname)
md = pc.get_metadata()
tmp_dirname = tempfile.mkdtemp(suffix='_pypcd', prefix='tmp')
tmp_fname = os.path.join(tmp_dirname, 'out.pcd')
pc.save_pcd(tmp_fname, compression='binary_compressed')
assert(os.path.exists(tmp_fname))
pc2 = pypcd.PointCloud.from_path(tmp_fname)
md2 = pc2.get_metadata()
for k, v in md2.items():
if k == 'data':
assert v == 'binary_compressed'
else:
assert v == md[k]
np.testing.assert_equal(pc.pc_data, pc2.pc_data)
if os.path.exists(tmp_dirname):
shutil.rmtree(tmp_dirname)
def test_cat_pointclouds(pcd_fname):
from pypcd import pypcd
pc = pypcd.PointCloud.from_path(pcd_fname)
pc2 = pc.copy()
pc2.pc_data['x'] += 0.1
pc3 = pypcd.cat_point_clouds(pc, pc2)
for fld, fld3 in zip(pc.fields, pc3.fields):
assert(fld == fld3)
assert(pc3.width == pc.width+pc2.width)
def test_ascii_bin1(ascii_pcd_fname, bin_pcd_fname):
from pypcd import pypcd
apc1 = pypcd.point_cloud_from_path(ascii_pcd_fname)
bpc1 = pypcd.point_cloud_from_path(bin_pcd_fname)
am = cloud_centroid(apc1)
bm = cloud_centroid(bpc1)
assert(np.allclose(am, bm))
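The three roundtrip tests above share one pattern: serialize to a file in a fresh temp directory, re-load, compare, and clean up. A minimal standalone sketch of that pattern, using JSON instead of PCD so it runs without pypcd or fixture files:

```python
import json
import os
import shutil
import tempfile

def roundtrip(obj):
    """Write obj to a temp file, read it back, and clean up, mirroring
    the save_pcd / from_path / removedirs flow in the tests above."""
    tmp_dirname = tempfile.mkdtemp(suffix='_roundtrip', prefix='tmp')
    tmp_fname = os.path.join(tmp_dirname, 'out.json')
    try:
        with open(tmp_fname, 'w') as f:
            json.dump(obj, f)
        assert os.path.exists(tmp_fname)
        with open(tmp_fname) as f:
            return json.load(f)
    finally:
        # rmtree also handles leftover files, unlike unlink + removedirs
        shutil.rmtree(tmp_dirname)

md = {'fields': ['x', 'y', 'z'], 'width': 4, 'data': 'ascii'}
assert roundtrip(md) == md
```

Using `shutil.rmtree` in a `finally` block (as the binary_compressed test does) keeps the temp directory from leaking even when an assertion fails mid-test.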
| 27.539906 | 165 | 0.647119 |
af3f8a1a3f0f2b0478e2a004ba0498d78895b348 | 747 | py | Python | src/33. Search in Rotated Sorted Array.py | xiaonanln/myleetcode-python | 95d282f21a257f937cd22ef20c3590a69919e307 | ["Apache-2.0"] | null | null | null |
from bisect import bisect_left
class Solution(object):
def search(self, nums, target):
"""
:type nums: List[int]
:type target: int
:rtype: int
"""
i, j = 0, len(nums)
while i < j:
if nums[i] <= nums[j-1]:
k = bisect_left(nums, target, i, j )
return k if k != j and nums[k] == target else -1
if j-i < 3:
try: return nums[i:j].index(target) + i
except ValueError: return -1
m = (i+j) // 2
if nums[m] == target:
return m
if nums[i] <= nums[m-1]:
# left is in order
if nums[i] <= target <= nums[m-1]:
j = m
else:
i = m+1
else:
# right is in order
if nums[m+1] <= target <= nums[j-1]:
i = m+1
else:
j = m
return -1
print(Solution().search([1], 0))
| 18.675 | 52 | 0.531459 |
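The comments in the solution rely on the invariant that at least one half of a rotated sorted array is always in order. A standalone re-sketch of that idea (not the exact class method above), cross-checked by brute force over every rotation of a small array:

```python
def search_rotated(nums, target):
    """Binary search in a rotated sorted array with no duplicates.
    At each step at least one half is sorted; keep whichever half
    could contain the target."""
    i, j = 0, len(nums) - 1
    while i <= j:
        m = (i + j) // 2
        if nums[m] == target:
            return m
        if nums[i] <= nums[m]:            # left half [i..m] is sorted
            if nums[i] <= target < nums[m]:
                j = m - 1
            else:
                i = m + 1
        else:                             # right half [m..j] is sorted
            if nums[m] < target <= nums[j]:
                i = m + 1
            else:
                j = m - 1
    return -1

# Brute-force cross-check over all rotations of a sorted base array.
base = [0, 1, 2, 4, 5, 6, 7]
for k in range(len(base)):
    rotated = base[k:] + base[:k]
    for t in range(-1, 9):
        expected = rotated.index(t) if t in rotated else -1
        assert search_rotated(rotated, t) == expected
```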
93d7183036fa2d29e0c01f2c88608025d9a86733 | 8,770 | py | Python | sdk/python/pulumi_azure/appservice/get_app_service_plan.py | suresh198526/pulumi-azure | bf27206a38d7a5c58b3c2c57ec8769fe3d0fc5d7 | ["ECL-2.0", "Apache-2.0"] | null | null | null |
# coding=utf-8
# *** WARNING: this file was generated by the Pulumi Terraform Bridge (tfgen) Tool. ***
# *** Do not edit by hand unless you're certain you know what you are doing! ***
import warnings
import pulumi
import pulumi.runtime
from typing import Any, Mapping, Optional, Sequence, Union
from .. import _utilities, _tables
from . import outputs
__all__ = [
'GetAppServicePlanResult',
'AwaitableGetAppServicePlanResult',
'get_app_service_plan',
]
@pulumi.output_type
class GetAppServicePlanResult:
"""
A collection of values returned by getAppServicePlan.
"""
def __init__(__self__, app_service_environment_id=None, id=None, is_xenon=None, kind=None, location=None, maximum_elastic_worker_count=None, maximum_number_of_workers=None, name=None, per_site_scaling=None, reserved=None, resource_group_name=None, sku=None, tags=None):
if app_service_environment_id and not isinstance(app_service_environment_id, str):
raise TypeError("Expected argument 'app_service_environment_id' to be a str")
pulumi.set(__self__, "app_service_environment_id", app_service_environment_id)
if id and not isinstance(id, str):
raise TypeError("Expected argument 'id' to be a str")
pulumi.set(__self__, "id", id)
if is_xenon and not isinstance(is_xenon, bool):
raise TypeError("Expected argument 'is_xenon' to be a bool")
pulumi.set(__self__, "is_xenon", is_xenon)
if kind and not isinstance(kind, str):
raise TypeError("Expected argument 'kind' to be a str")
pulumi.set(__self__, "kind", kind)
if location and not isinstance(location, str):
raise TypeError("Expected argument 'location' to be a str")
pulumi.set(__self__, "location", location)
if maximum_elastic_worker_count and not isinstance(maximum_elastic_worker_count, int):
raise TypeError("Expected argument 'maximum_elastic_worker_count' to be a int")
pulumi.set(__self__, "maximum_elastic_worker_count", maximum_elastic_worker_count)
if maximum_number_of_workers and not isinstance(maximum_number_of_workers, int):
raise TypeError("Expected argument 'maximum_number_of_workers' to be a int")
pulumi.set(__self__, "maximum_number_of_workers", maximum_number_of_workers)
if name and not isinstance(name, str):
raise TypeError("Expected argument 'name' to be a str")
pulumi.set(__self__, "name", name)
if per_site_scaling and not isinstance(per_site_scaling, bool):
raise TypeError("Expected argument 'per_site_scaling' to be a bool")
pulumi.set(__self__, "per_site_scaling", per_site_scaling)
if reserved and not isinstance(reserved, bool):
raise TypeError("Expected argument 'reserved' to be a bool")
pulumi.set(__self__, "reserved", reserved)
if resource_group_name and not isinstance(resource_group_name, str):
raise TypeError("Expected argument 'resource_group_name' to be a str")
pulumi.set(__self__, "resource_group_name", resource_group_name)
if sku and not isinstance(sku, dict):
raise TypeError("Expected argument 'sku' to be a dict")
pulumi.set(__self__, "sku", sku)
if tags and not isinstance(tags, dict):
raise TypeError("Expected argument 'tags' to be a dict")
pulumi.set(__self__, "tags", tags)
@property
@pulumi.getter(name="appServiceEnvironmentId")
def app_service_environment_id(self) -> str:
"""
The ID of the App Service Environment where the App Service Plan is located.
"""
return pulumi.get(self, "app_service_environment_id")
@property
@pulumi.getter
def id(self) -> str:
"""
The provider-assigned unique ID for this managed resource.
"""
return pulumi.get(self, "id")
@property
@pulumi.getter(name="isXenon")
def is_xenon(self) -> bool:
"""
A flag that indicates if it's a xenon plan (support for Windows Container)
"""
return pulumi.get(self, "is_xenon")
@property
@pulumi.getter
def kind(self) -> str:
"""
The Operating System type of the App Service Plan
"""
return pulumi.get(self, "kind")
@property
@pulumi.getter
def location(self) -> str:
"""
The Azure location where the App Service Plan exists
"""
return pulumi.get(self, "location")
@property
@pulumi.getter(name="maximumElasticWorkerCount")
def maximum_elastic_worker_count(self) -> int:
"""
The maximum number of total workers allowed for this ElasticScaleEnabled App Service Plan.
"""
return pulumi.get(self, "maximum_elastic_worker_count")
@property
@pulumi.getter(name="maximumNumberOfWorkers")
def maximum_number_of_workers(self) -> int:
"""
The maximum number of workers supported with the App Service Plan's sku.
"""
return pulumi.get(self, "maximum_number_of_workers")
@property
@pulumi.getter
def name(self) -> str:
return pulumi.get(self, "name")
@property
@pulumi.getter(name="perSiteScaling")
def per_site_scaling(self) -> bool:
"""
Can Apps assigned to this App Service Plan be scaled independently?
"""
return pulumi.get(self, "per_site_scaling")
@property
@pulumi.getter
def reserved(self) -> bool:
"""
Is this App Service Plan `Reserved`?
"""
return pulumi.get(self, "reserved")
@property
@pulumi.getter(name="resourceGroupName")
def resource_group_name(self) -> str:
return pulumi.get(self, "resource_group_name")
@property
@pulumi.getter
def sku(self) -> 'outputs.GetAppServicePlanSkuResult':
"""
A `sku` block as documented below.
"""
return pulumi.get(self, "sku")
@property
@pulumi.getter
def tags(self) -> Mapping[str, str]:
"""
A mapping of tags assigned to the resource.
"""
return pulumi.get(self, "tags")
class AwaitableGetAppServicePlanResult(GetAppServicePlanResult):
# pylint: disable=using-constant-test
def __await__(self):
if False:
yield self
return GetAppServicePlanResult(
app_service_environment_id=self.app_service_environment_id,
id=self.id,
is_xenon=self.is_xenon,
kind=self.kind,
location=self.location,
maximum_elastic_worker_count=self.maximum_elastic_worker_count,
maximum_number_of_workers=self.maximum_number_of_workers,
name=self.name,
per_site_scaling=self.per_site_scaling,
reserved=self.reserved,
resource_group_name=self.resource_group_name,
sku=self.sku,
tags=self.tags)
def get_app_service_plan(name: Optional[str] = None,
resource_group_name: Optional[str] = None,
opts: Optional[pulumi.InvokeOptions] = None) -> AwaitableGetAppServicePlanResult:
"""
Use this data source to access information about an existing App Service Plan (formerly known as a `Server Farm`).
## Example Usage
```python
import pulumi
import pulumi_azure as azure
example = azure.appservice.get_app_service_plan(name="search-app-service-plan",
resource_group_name="search-service")
pulumi.export("appServicePlanId", example.id)
```
:param str name: The name of the App Service Plan.
:param str resource_group_name: The Name of the Resource Group where the App Service Plan exists.
"""
__args__ = dict()
__args__['name'] = name
__args__['resourceGroupName'] = resource_group_name
if opts is None:
opts = pulumi.InvokeOptions()
if opts.version is None:
opts.version = _utilities.get_version()
__ret__ = pulumi.runtime.invoke('azure:appservice/getAppServicePlan:getAppServicePlan', __args__, opts=opts, typ=GetAppServicePlanResult).value
return AwaitableGetAppServicePlanResult(
app_service_environment_id=__ret__.app_service_environment_id,
id=__ret__.id,
is_xenon=__ret__.is_xenon,
kind=__ret__.kind,
location=__ret__.location,
maximum_elastic_worker_count=__ret__.maximum_elastic_worker_count,
maximum_number_of_workers=__ret__.maximum_number_of_workers,
name=__ret__.name,
per_site_scaling=__ret__.per_site_scaling,
reserved=__ret__.reserved,
resource_group_name=__ret__.resource_group_name,
sku=__ret__.sku,
tags=__ret__.tags)
| 38.464912 | 273 | 0.668985 |
c2251459300e46a6e680f2a096192055947ba118 | 338 | py | Python | setup.py | noboevbo/nobos_torch_lib | 11bc0a06b4cb5c273905d23c592cb3d847149a31 | ["MIT"] | 2 | 2020-10-08T12:50:50.000Z | 2020-12-14T13:36:30.000Z |
from setuptools import setup, find_packages
setup(name='nobos_torch_lib',
version='0.1',
description='Nobos PyTorch Library. Various datasets, models and utils for PyTorch models.',
author='Dennis Ludl',
author_email='dennis@noboevbo.de',
license='MIT',
packages=find_packages(),
      zip_safe=False)
| 33.8 | 98 | 0.692308 |
3f606057171ca85bd4c8826ba073286acd9f6e1e | 2,075 | py | Python | python/classifications_userids.py | willettk/rgz-analysis | 11c34b1b2d0eb8b9c1c71757e6e2f771c169e993 | ["MIT"] | 3 | 2016-02-23T01:24:38.000Z | 2017-07-09T03:34:29.000Z |
from matplotlib import pyplot as plt
import numpy as np
import datetime
'''
Data made with following commands:
within Mongo shell:
> use radio
> var result = db.radio_classifications.aggregate({$match:{user_id:{$exists:false}}},{$group:{_id:{year:{$year:"$created_at"},month:{$month:"$created_at"},day:{$dayOfMonth:"$created_at"}},count:{$sum:1}}})
> db.nouserid.insert(result.result)
> var result = db.radio_classifications.aggregate({$match:{user_id:{$exists:true}}},{$group:{_id:{year:{$year:"$created_at"},month:{$month:"$created_at"},day:{$dayOfMonth:"$created_at"}},count:{$sum:1}}})
> db.userid.insert(result.result)
command line:
> mongoexport -d radio -c nouserid -f "_id.year,_id.month,_id.day,count" --csv -o ~/Astronomy/Research/GalaxyZoo/rgz-analysis/csv/nouserid.csv
> mongoexport -d radio -c userid -f "_id.year,_id.month,_id.day,count" --csv -o ~/Astronomy/Research/GalaxyZoo/rgz-analysis/csv/userid.csv
'''
path = '/Users/willettk/Astronomy/Research/GalaxyZoo/rgz-analysis'
with open('%s/csv/nouserid.csv' % path, 'r') as f:
    header = next(f)
dates_nouserid = []
count_nouserid = []
for line in f:
l = line.strip().split(',')
dates_nouserid.append(datetime.datetime(int(l[0]),int(l[1]),int(l[2])))
count_nouserid.append(float(l[-1]))
with open('%s/csv/userid.csv' % path, 'r') as f:
    header = next(f)
dates_userid = []
count_userid = []
for line in f:
l = line.strip().split(',')
dates_userid.append(datetime.datetime(int(l[0]),int(l[1]),int(l[2])))
count_userid.append(float(l[-1]))
fig = plt.figure(figsize=(6,6))
'''
ax1 = fig.add_subplot(121)
ax1.plot_date(dates_nouserid,count_nouserid,'-',alpha=1.0)
ax1.plot_date(dates_userid,count_userid,'-',alpha=1.0)
'''
ax2 = fig.add_subplot(111)
ax2.plot_date(dates_nouserid,np.cumsum(count_nouserid),'-',alpha=1.0,label='user_id')
ax2.plot_date(dates_userid,np.cumsum(count_userid),'-',alpha=1.0,label='No user_id')
ax2.legend()
plt.show()
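The script above boils down to parsing each exported row into a date and a count, then cumulatively summing for the plot. A self-contained sketch of that step on synthetic rows (column layout assumed to match the mongoexport fields described in the docstring):

```python
import datetime
from itertools import accumulate

rows = [
    "2013,12,17,1200",   # _id.year, _id.month, _id.day, count
    "2013,12,18,950",
    "2013,12,19,400",
]

dates, counts = [], []
for line in rows:
    y, m, d, n = line.strip().split(',')
    dates.append(datetime.datetime(int(y), int(m), int(d)))
    counts.append(float(n))

# Running total of classifications, as plotted with np.cumsum above.
cumulative = list(accumulate(counts))
assert cumulative == [1200.0, 2150.0, 2550.0]
```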
| 35.169492 | 213 | 0.657349 |
12b3ca6c22a1158dc0a2dcae5fc9d5a7f4a8b15a | 1,482 | py | Python | st2rbac_backend/backend.py | StackStorm/st2-rbac-backend | 4caaefb0c1cef4cb4cdc2d6d3ab28ef914675e03 | ["Apache-2.0"] | 1 | 2020-09-21T16:05:31.000Z | 2020-09-21T16:05:31.000Z |
# Copyright 2020 The StackStorm Authors
# Copyright (C) 2020 Extreme Networks, Inc - All Rights Reserved
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from st2common.rbac.backends.base import BaseRBACBackend
from st2rbac_backend import resolvers
from st2rbac_backend.service import RBACService
from st2rbac_backend.utils import RBACUtils
from st2rbac_backend.syncer import RBACRemoteGroupToRoleSyncer
__all__ = ["RBACBackend"]
class RBACBackend(BaseRBACBackend):
def get_resolver_for_resource_type(self, resource_type):
return resolvers.get_resolver_for_resource_type(resource_type=resource_type)
def get_resolver_for_permission_type(self, permission_type):
return resolvers.get_resolver_for_permission_type(permission_type=permission_type)
def get_remote_group_to_role_syncer(self):
return RBACRemoteGroupToRoleSyncer()
def get_service_class(self):
return RBACService
def get_utils_class(self):
return RBACUtils
| 36.146341 | 90 | 0.790823 |
3367314e693383e3378edadbcac324cc4e295212 | 5,175 | py | Python | google-cloud-sdk/lib/surface/deployment_manager/types/list.py | KaranToor/MA450 | c98b58aeb0994e011df960163541e9379ae7ea06 | ["Apache-2.0"] | 1 | 2017-11-29T18:52:27.000Z | 2017-11-29T18:52:27.000Z |
# Copyright 2016 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""types list command."""
from apitools.base.py import list_pager
from googlecloudsdk.api_lib.deployment_manager import dm_v2_util
from googlecloudsdk.calliope import base
from googlecloudsdk.command_lib.deployment_manager import dm_base
from googlecloudsdk.command_lib.deployment_manager import dm_beta_base
from googlecloudsdk.core import log
@base.ReleaseTracks(base.ReleaseTrack.GA)
class List(base.ListCommand):
"""List types in a project.
Prints a list of the available resource types.
"""
detailed_help = {
'EXAMPLES': """\
To print out a list of all available type names, run:
$ {command}
""",
}
@staticmethod
def Args(parser):
base.SORT_BY_FLAG.RemoveFromParser(parser)
base.URI_FLAG.RemoveFromParser(parser)
def Run(self, args):
"""Run 'types list'.
Args:
args: argparse.Namespace, The arguments that this command was invoked
with.
Returns:
The list of types for this project.
Raises:
HttpException: An http error response was received while executing api
request.
"""
request = dm_base.GetMessages().DeploymentmanagerTypesListRequest(
project=dm_base.GetProject())
return dm_v2_util.YieldWithHttpExceptions(
list_pager.YieldFromList(dm_base.GetClient().types, request,
field='types', batch_size=args.page_size,
limit=args.limit))
def Collection(self):
return 'deploymentmanager.types'
def Epilog(self, resources_were_displayed):
if not resources_were_displayed:
log.status.Print('No types were found for your project!')
@base.ReleaseTracks(base.ReleaseTrack.ALPHA)
class ListALPHA(base.ListCommand):
"""Describe a type provider type."""
detailed_help = {
'EXAMPLES': """\
To print out a list of all available type names, run:
$ {command}
If you only want the types for a specific provider, you can specify
which one using --provider
$ {command} --provider=PROVIDER
""",
}
@staticmethod
def Args(parser):
parser.add_argument('--provider', help='Type provider name.')
def Run(self, args):
"""Run 'types list'.
Args:
args: argparse.Namespace, The arguments that this command was invoked
with.
Returns:
The list of types for this project.
Raises:
HttpException: An http error response was received while executing api
request.
"""
type_provider_ref = dm_beta_base.GetResources().Parse(
args.provider if args.provider else 'NOT_A_PROVIDER',
collection='deploymentmanager.typeProviders')
if not args.provider:
type_providers = self.GetTypeProviders(type_provider_ref.project,
args.page_size,
args.limit)
else:
type_providers = [type_provider_ref.typeProvider]
return dm_v2_util.YieldWithHttpExceptions(
self.YieldTypes(type_providers,
type_provider_ref.project,
args.page_size,
args.limit))
def GetTypeProviders(self, project, page_size, limit):
request = (dm_beta_base.GetMessages().
DeploymentmanagerTypeProvidersListRequest(
project=project))
type_providers = []
paginated_providers = dm_v2_util.YieldWithHttpExceptions(
list_pager.YieldFromList(dm_beta_base.GetClient().typeProviders,
request,
field='typeProviders',
batch_size=page_size,
limit=limit))
for type_provider in paginated_providers:
type_providers.append(type_provider.name)
return type_providers
def YieldTypes(self, type_providers, project, page_size, limit):
for type_provider in type_providers:
request = (dm_beta_base.GetMessages().
DeploymentmanagerTypeProvidersListTypesRequest(
project=project,
typeProvider=type_provider))
paginated_types = list_pager.YieldFromList(
dm_beta_base.GetClient().typeProviders,
request,
method='ListTypes',
field='types',
batch_size=page_size,
limit=limit)
for t in paginated_types:
yield {'type': t, 'provider': type_provider}
def Format(self, unused_args):
return 'table(type.name, provider)'
| 32.34375 | 77 | 0.650048 |
906b0ba185a70f096874e34ce32e01a97f27096d | 1,237 | py | Python | recimies_site/social/endpoints.py | steeznson/recimies | 7318eeffeb82b5918145787bebeaaacd2f44adce | ["MIT"] | null | null | null |
from .models import Recipe
from django.http import JsonResponse
from django.core import serializers
from django.contrib.auth.models import User
def recipes_endpoint(request):
recipes = Recipe.objects.all()
if request.user.is_authenticated:
curuser = User.objects.get(username=request.user.username)
followingusers = curuser.profile.followers.all()
following_ids = []
for followinguser in followingusers.values():
following_ids.append(followinguser["id"])
following_recipes = Recipe.objects.filter(user__in=following_ids)
other_recipes = Recipe.objects.exclude(user__in=following_ids)
recipe_list = {
'friends_recipes': serializers.serialize('json',following_recipes, use_natural_foreign_keys=True),
'other_recipes': serializers.serialize('json',other_recipes, use_natural_foreign_keys=True),
'recipes': serializers.serialize('json',recipes, use_natural_foreign_keys=True)
}
return JsonResponse(recipe_list)
else:
recipe_list = {
'recipes': serializers.serialize('json',recipes, use_natural_foreign_keys=True)
}
        return JsonResponse(recipe_list)
| 39.903226 | 114 | 0.696847 |
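The view above partitions recipes by whether the author is followed, via `filter(user__in=...)` and `exclude(user__in=...)`. The same split can be sketched without the Django ORM, using plain dicts in place of Recipe rows (field names here are hypothetical):

```python
def partition_recipes(recipes, following_ids):
    """Split recipes into those by followed users and the rest,
    mirroring filter(user__in=...) / exclude(user__in=...)."""
    following = [r for r in recipes if r["user_id"] in following_ids]
    others = [r for r in recipes if r["user_id"] not in following_ids]
    return following, others

recipes = [
    {"id": 1, "user_id": 10, "title": "soup"},
    {"id": 2, "user_id": 11, "title": "stew"},
    {"id": 3, "user_id": 12, "title": "pie"},
]
friends, rest = partition_recipes(recipes, following_ids={10, 12})
assert [r["id"] for r in friends] == [1, 3]
assert [r["id"] for r in rest] == [2]
```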
28c13a5eaf8eeac54381c1be062a6e604f7b0c13 | 11,500 | py | Python | Text Categorisation/NaiveBayes.py | Bennygmate/Machine-Learning | 0dc29d0b28a7e69aebb9a7b6627ab8955b4f82da | ["MIT"] | null | null | null |
# Benjamin Cheung
import re
import os
import math
import string
from collections import Counter
from prettytable import PrettyTable
vocab=dict()
vocab_MI=dict()
prob=dict() #probability for p(w|c)
prob_class=dict() # prob for class
total_distinctwordpos=dict()
IG=dict()
#Train_path = "20news-bydate-train"
Train_path = "cross_valid/10/train"
#Test_path = "20news-bydate-test"
Test_path = "cross_valid/10/test"
# FEATURE SELECTION VARIABLES
least_wordfrequency = 2
mostcommon_words = 100
simple_average_common = 5000
def main():
# Training folder
Train = Train_path
# V is the set of all possible target values.
V = os.listdir(Train)
learn_naive_bayes_text(Train, V)
naive_bayes_test(V)
def naive_bayes_test(V):
print "Split of 60% training and 40% testing"
total_accuracy = 0
# Number of items correctly labelled as belonging to positive class
true_pos=dict()
# Items incorrectly labelled as belonging to the class
false_pos=dict()
# Items which were not labelled as belonging to positive class but should have been
false_neg=dict()
# Precision - true positive / (true pos + false pos)
precision=dict()
# Recall - true pos / (true pos + false neg)
recall=dict()
# F1 = 2 * ((precision * recall) / (precision + recall))
f1_score=dict()
for vj in V:
true_pos[vj] = 0
false_pos[vj] = 0
false_neg[vj] = 0
precision[vj] = 0
recall[vj] = 0
f1_score[vj] = 0
for vj in V:
correct_classification = 0
path, direc, docs = os.walk(Test_path+"/"+vj).next()
for doc_name in docs:
doc_path = "" + path + "/" + doc_name
target_value = classify_naive_bayes_text(doc_path, V)
if target_value == vj:
true_pos[vj] += 1
else:
false_neg[vj] += 1
false_pos[target_value] += 1
for vj in V:
precision[vj] = float(true_pos[vj]) / (true_pos[vj] + false_pos[vj])
recall[vj] = float(true_pos[vj]) / (true_pos[vj] + false_neg[vj])
f1_score[vj] = 2 * ((precision[vj] * recall[vj]) / (precision[vj] + recall[vj]))
space = ""
    print "Naive Bayes macro-averaged recall: %f" %(sum(recall.values()) / 20)
print "%26sprecision%4srecall%2sf1-score%3ssupport\n" %(space, space, space, space)
for vj in V:
print "%24s%7s%.2f%6s%.2f%6s%.2f%7s%d" %(vj, space, round(precision[vj], 2),
space, round(recall[vj], 2),
space, round(f1_score[vj],2),
space, true_pos[vj])
avg = "avg / total"
print "\n%24s%7s%.2f%6s%.2f%6s%.2f%7s%d" %(avg, space, round((sum(precision.values()) / 20), 2),
space, round((sum(recall.values()) / 20), 2),
space, round((sum(f1_score.values()) / 20), 2),
space, (sum(true_pos.values()) / 20))
def classify_naive_bayes_text(Doc, V):
#Calculate probabilities for words in a document belonging to a class
# Initialise test_prob to multiply later for arg_max of class
test_prob = dict()
# Grab classes
for vj in V:
test_prob[vj] = 0
with open(Doc, 'r') as docsj:
for textj in docsj:
words = tokenize(textj)
for word in words:
if word in vocab:
# Then this meets the position
# positions - all word positions in Doc that contain tokens found in vocabulary
for vj in V:
# Apparently there is a problem with floats tending to 0 when reaching e-320
# Looked at internet for help and apparently sum the logarithms instead
# when comparing which would be larger, some math theorem i don't understand
# see https://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.html
if (test_prob[vj] == 0):
#test_prob[vj] = prob[vj][word] * prob_class[vj]
test_prob[vj] = math.log(prob[vj][word]+IG[vj][word]) + math.log(prob_class[vj])
else:
#test_prob[vj] = test_prob[vj] * prob[vj][word]
test_prob[vj] += math.log(prob[vj][word]+IG[vj][word])
# Take the argmax P(vj)
max_prob = -1000000000
for vj in test_prob:
if test_prob[vj] > max_prob:
max_prob = test_prob[vj]
vNB = vj
# Return the estimated target value for the document Doc
return vNB
def learn_naive_bayes_text(Examples, V):
# Examples is a set of text documents along with their target values (training data)
print "Building initial vocabulary and calculating probabilities"
print "========================================================="
# For each vj in V do
for vj in V:
# Grab path to file and docs
path, direc, docs = os.walk(Examples+"/"+vj).next() # seperating files from folders
# docsj
prob_class[vj] = len(docs)
prob[vj], total_distinctwordpos[vj] = build_vocabulary(path, docs)
# Vocabulary is created
print "Vocabulary is created with length %d" %len(vocab)
print "========================================================="
# P(vj) = |docsj| / |Examples| (The class prior probabilties)
total_examples = sum(prob_class.values())
for vj in V:
prob_class[vj] = float(prob_class[vj]) / total_examples
feature_selection(V)
#Calculating probabilities for each class and its words.
print "====================Training Classes===================="
print V
for vj in V:
n_words = total_distinctwordpos[vj]
words_vocab = len(vocab)
total_count = n_words + words_vocab
for word in vocab:
if word in prob[vj]:
count = prob[vj][word]
else:
count = 0
# P(wk|vj) = (nk + 1) / (n + |Vocabulary|)
prob[vj][word] = float(count+1)/total_count
print "====================Training Finished===================="
def build_vocabulary(path, docs):
# Counting words and building vocab
# nk_wordk - number of times word wk occurs in Textj
nk_wordk=dict()
# n_words - total number of distinct word positions in Textj (add for all text)
n_words = 0
# docsj - the subset of documents from Examples for which the target value is vj
for doc in docs:
with open(path+'/'+doc,'r') as docsj:
for textj in docsj:
#field = re.match(r'From:', textj)
#if (field):
# continue
#field = re.match(r'Organization:', textj)
#if (field):
# continue
#field = re.match(r'Lines:', textj)
#if (field):
# continue
words = tokenize(textj)
for word in words:
# Remove digit words
field = re.match(r'.*(\d+).*', word)
if (field):
continue
if word not in vocab:
vocab[word] = 1 # Add to vocab
nk_wordk[word] = 1 # Add to count
elif word != '':
nk_wordk.setdefault(word, 0)
vocab[word] += 1
nk_wordk[word] += 1
n_words += 1
return nk_wordk, n_words
def feature_selection(V):
print "Feature Selection:"
print "1. Pruning words with frequency less than %d" %least_wordfrequency
# Deleting less frequent words with count less than 3
infrequent_words()
print "2. Pruning most %d common words" %mostcommon_words
# Deleting 100 most frequent examples
high_frequency_words()
print "3. Choosing words with high mutual information with class"
mutual_information(V)
print "========================================================="
def mutual_information(V):
prob_MI=dict()
for vj in V:
if vj not in prob_MI:
prob_MI[vj] = {}
n_words = total_distinctwordpos[vj]
words_vocab = len(vocab)
total_count = n_words + words_vocab
for word in vocab:
if word in prob[vj]:
count = prob[vj][word]
else:
count = 0
# P(wk|vj) = (nk + 1) / (n + |Vocabulary|)
prob_MI[vj][word] = float(count)/total_count
for vj in V:
if vj not in IG:
IG[vj] = {}
for word in vocab:
class_log = math.log(prob_class[vj])
Entropy_C = (prob_class[vj] * class_log)
notword_log = math.log((1 - prob_MI[vj][word]) * prob_class[vj])
Entropy_NW = ((1-prob_MI[vj][word]) * prob_class[vj]) * notword_log
if prob_MI[vj][word] == 0:
word_log = 0
else:
word_log = math.log(prob_MI[vj][word] * prob_class[vj])
Entropy_W = prob_MI[vj][word] * prob_class[vj] * word_log
if word not in IG[vj]:
IG[vj][word] = Entropy_C - Entropy_NW - Entropy_W
#print IG[vj][word]
else:
IG[vj][word] += Entropy_C - Entropy_NW - Entropy_W
# OPTIONAL TO AVERAGE COMMON INFORMATION GAIN, to help discriminate
if simple_average_common != 0:
vocab_mostcommon=dict()
common_word=dict()
for vj in V:
d = Counter(IG[vj]).most_common(simple_average_common)
i = 0
while (i < simple_average_common):
word = d[i][0]
if word not in vocab_mostcommon:
vocab_mostcommon[word] = 0
else:
if word not in common_word:
common_word[word] = 0
i += 1
for word in common_word.keys():
average = 0
counter = 0
for vj in V:
if IG[vj][word] > 0:
average += IG[vj][word]
counter += 1
for vj in V:
if IG[vj][word] > 0:
IG[vj][word] = average / counter
print "Simple Average for %d common words mutual in class" %(simple_average_common)
print "Vocabulary for information gain complete"
def infrequent_words():
mark_del = []
for word in vocab:
if vocab[word] < least_wordfrequency:
mark_del.append(word)
for word in mark_del:
del vocab[word]
print "Vocabulary is pruned with length %d" %len(vocab)
def high_frequency_words():
mark_del = []
d = Counter(vocab).most_common(mostcommon_words)
for word in vocab:
i = 0
while (i < mostcommon_words):
if word == d[i][0]:
mark_del.append(word)
i += 1
for word in mark_del:
del vocab[word]
print "Vocabulary is pruned with length %d" %len(vocab)
def tokenize(text):
text = remove_punctuation(text)
text = text.lower()
return re.split("\W+", text)
def remove_punctuation(s):
table = string.maketrans("","")
return s.translate(table, string.punctuation)
main()
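The classifier above combines Laplace smoothing, P(wk|vj) = (nk + 1) / (n + |Vocabulary|), with summing log-probabilities to avoid float underflow. A compact standalone sketch of that core (my own toy corpus and helper names, not the project's data):

```python
import math
from collections import Counter, defaultdict

def train_nb(examples):
    """examples: list of (class_label, word_list).
    Returns class priors, per-class word counts, class word totals, vocab."""
    priors = Counter()
    word_counts = defaultdict(Counter)
    totals = Counter()
    vocab = set()
    for cls, words in examples:
        priors[cls] += 1
        for w in words:
            word_counts[cls][w] += 1
            totals[cls] += 1
            vocab.add(w)
    n_docs = sum(priors.values())
    priors = {c: n / n_docs for c, n in priors.items()}
    return priors, word_counts, totals, vocab

def classify_nb(doc, priors, word_counts, totals, vocab):
    """argmax over classes of log P(c) + sum_w log P(w|c), where
    P(w|c) = (count(w, c) + 1) / (total(c) + |vocab|)."""
    best_cls, best_score = None, -math.inf
    for cls, prior in priors.items():
        score = math.log(prior)
        denom = totals[cls] + len(vocab)
        for w in doc:
            if w in vocab:  # skip out-of-vocabulary tokens, as above
                score += math.log((word_counts[cls][w] + 1) / denom)
        if score > best_score:
            best_cls, best_score = cls, score
    return best_cls

examples = [
    ("sport", "goal match team win".split()),
    ("sport", "team goal score".split()),
    ("space", "rocket orbit launch".split()),
]
priors, wc, totals, vocab = train_nb(examples)
assert classify_nb("goal team".split(), priors, wc, totals, vocab) == "sport"
assert classify_nb("rocket launch".split(), priors, wc, totals, vocab) == "space"
```

Summing logs gives the same argmax as multiplying raw probabilities, without the underflow near 1e-320 that the inline comments above mention.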
| 37.828947 | 108 | 0.536522 |
cfd6f3ea1c0711897d854b030a423f51beb02901 | 1,409 | py | Python | engine/rules.py | Guillaume-Docquier/cribbage | 521add66c63e463a467a2d8d269ef2347e676b98 | ["MIT"] | null | null | null |
from typing import List
from models.card import Card
class Rules:
PLAYING_MAX_HAND_SIZE = 4
DEFAULT_WINNING_SCORE = 161
SHORT_WINNING_SCORE = 61
MAX_RUNNING_COUNT = 31
MIN_SEQUENCE_LENGTH = 3
MAX_SEQUENCE_LENGTH = 5
MIN_FLUSH_LENGTH = 4
@staticmethod
def get_card_value(card: Card):
return min(10, card.number)
@staticmethod
def get_cards_value(cards: List[Card]):
return sum([Rules.get_card_value(card) for card in cards])
@staticmethod
def game_is_over(players, winning_score):
for player in players:
if player.score >= winning_score:
return True
return False
@staticmethod
def is_sequence(cards: List[Card]) -> bool:
if len(cards) < Rules.MIN_SEQUENCE_LENGTH:
return False
sorted_card_numbers = sorted([card.number for card in cards])
last_card_number = sorted_card_numbers[0]
for card_number in sorted_card_numbers[1:]:
if card_number != last_card_number + 1:
return False
last_card_number = card_number
return True
@staticmethod
def is_flush(cards: List[Card]) -> bool:
if len(cards) < Rules.MIN_FLUSH_LENGTH:
return False
colors = set([card.color for card in cards])
if len(colors) > 1:
return False
return True
| 24.719298 | 69 | 0.631654 |
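The sequence and flush checks in `Rules` only read a card's `number` and `color` attributes, so they can be exercised without the real `models.card.Card` class (which is not shown in this excerpt). The `SimpleCard` namedtuple below is a hypothetical stand-in for illustration:

```python
from collections import namedtuple
from typing import List

# Hypothetical stand-in for models.card.Card: only the attributes the
# Rules class actually reads (number, color) are modeled here.
SimpleCard = namedtuple("SimpleCard", ["number", "color"])

def is_sequence(cards: List[SimpleCard]) -> bool:
    # Mirrors Rules.is_sequence: at least 3 cards with consecutive numbers.
    if len(cards) < 3:
        return False
    numbers = sorted(card.number for card in cards)
    return all(b == a + 1 for a, b in zip(numbers, numbers[1:]))

def is_flush(cards: List[SimpleCard]) -> bool:
    # Mirrors Rules.is_flush: at least 4 cards sharing a single color.
    if len(cards) < 4:
        return False
    return len({card.color for card in cards}) == 1
```

Note that order does not matter for a run: `is_sequence` sorts the numbers first, just as `Rules.is_sequence` does.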
c292ea630689e7e163f045ecebdb0397703dd707 | 7,253 | py | Python | homeassistant/components/config/entity_registry.py | aomann/core | 5e71e7b775461cd4849c36075c6a1459a7d0ad22 | [
"Apache-2.0"
] | null | null | null | homeassistant/components/config/entity_registry.py | aomann/core | 5e71e7b775461cd4849c36075c6a1459a7d0ad22 | [
"Apache-2.0"
] | 24 | 2021-11-11T03:58:57.000Z | 2022-03-31T06:24:13.000Z | homeassistant/components/config/entity_registry.py | aomann/core | 5e71e7b775461cd4849c36075c6a1459a7d0ad22 | [
"Apache-2.0"
] | null | null | null | """HTTP views to interact with the entity registry."""
import voluptuous as vol
from homeassistant import config_entries
from homeassistant.components import websocket_api
from homeassistant.components.websocket_api.const import ERR_NOT_FOUND
from homeassistant.components.websocket_api.decorators import (
async_response,
require_admin,
)
from homeassistant.core import callback
from homeassistant.helpers import config_validation as cv
from homeassistant.helpers.entity_registry import (
RegistryEntryDisabler,
async_get_registry,
)
async def async_setup(hass):
"""Enable the Entity Registry views."""
websocket_api.async_register_command(hass, websocket_list_entities)
websocket_api.async_register_command(hass, websocket_get_entity)
websocket_api.async_register_command(hass, websocket_update_entity)
websocket_api.async_register_command(hass, websocket_remove_entity)
return True
@async_response
@websocket_api.websocket_command({vol.Required("type"): "config/entity_registry/list"})
async def websocket_list_entities(hass, connection, msg):
"""Handle list registry entries command.
Async friendly.
"""
registry = await async_get_registry(hass)
connection.send_message(
websocket_api.result_message(
msg["id"], [_entry_dict(entry) for entry in registry.entities.values()]
)
)
@async_response
@websocket_api.websocket_command(
{
vol.Required("type"): "config/entity_registry/get",
vol.Required("entity_id"): cv.entity_id,
}
)
async def websocket_get_entity(hass, connection, msg):
"""Handle get entity registry entry command.
Async friendly.
"""
registry = await async_get_registry(hass)
if (entry := registry.entities.get(msg["entity_id"])) is None:
connection.send_message(
websocket_api.error_message(msg["id"], ERR_NOT_FOUND, "Entity not found")
)
return
connection.send_message(
websocket_api.result_message(msg["id"], _entry_ext_dict(entry))
)
@require_admin
@async_response
@websocket_api.websocket_command(
{
vol.Required("type"): "config/entity_registry/update",
vol.Required("entity_id"): cv.entity_id,
# If passed in, we update value. Passing None will remove old value.
vol.Optional("area_id"): vol.Any(str, None),
vol.Optional("device_class"): vol.Any(str, None),
vol.Optional("icon"): vol.Any(str, None),
vol.Optional("name"): vol.Any(str, None),
vol.Optional("new_entity_id"): str,
# We only allow setting disabled_by user via API.
vol.Optional("disabled_by"): vol.Any(
None,
vol.All(
vol.Coerce(RegistryEntryDisabler), RegistryEntryDisabler.USER.value
),
),
vol.Inclusive("options_domain", "entity_option"): str,
vol.Inclusive("options", "entity_option"): vol.Any(None, dict),
}
)
async def websocket_update_entity(hass, connection, msg):
"""Handle update entity websocket command.
Async friendly.
"""
registry = await async_get_registry(hass)
entity_id = msg["entity_id"]
if entity_id not in registry.entities:
connection.send_message(
websocket_api.error_message(msg["id"], ERR_NOT_FOUND, "Entity not found")
)
return
changes = {}
for key in ("area_id", "device_class", "disabled_by", "icon", "name"):
if key in msg:
changes[key] = msg[key]
if "new_entity_id" in msg and msg["new_entity_id"] != entity_id:
changes["new_entity_id"] = msg["new_entity_id"]
if hass.states.get(msg["new_entity_id"]) is not None:
connection.send_message(
websocket_api.error_message(
msg["id"],
"invalid_info",
"Entity with this ID is already registered",
)
)
return
if "disabled_by" in msg and msg["disabled_by"] is None:
entity = registry.entities[entity_id]
if entity.device_id:
device_registry = await hass.helpers.device_registry.async_get_registry()
device = device_registry.async_get(entity.device_id)
if device.disabled:
connection.send_message(
websocket_api.error_message(
msg["id"], "invalid_info", "Device is disabled"
)
)
return
try:
if changes:
registry.async_update_entity(entity_id, **changes)
except ValueError as err:
connection.send_message(
websocket_api.error_message(msg["id"], "invalid_info", str(err))
)
return
if "new_entity_id" in msg:
entity_id = msg["new_entity_id"]
try:
if "options_domain" in msg:
registry.async_update_entity_options(
entity_id, msg["options_domain"], msg["options"]
)
except ValueError as err:
connection.send_message(
websocket_api.error_message(msg["id"], "invalid_info", str(err))
)
return
entry = registry.async_get(entity_id)
result = {"entity_entry": _entry_ext_dict(entry)}
if "disabled_by" in changes and changes["disabled_by"] is None:
config_entry = hass.config_entries.async_get_entry(entry.config_entry_id)
if config_entry and not config_entry.supports_unload:
result["require_restart"] = True
else:
result["reload_delay"] = config_entries.RELOAD_AFTER_UPDATE_DELAY
connection.send_result(msg["id"], result)
@require_admin
@async_response
@websocket_api.websocket_command(
{
vol.Required("type"): "config/entity_registry/remove",
vol.Required("entity_id"): cv.entity_id,
}
)
async def websocket_remove_entity(hass, connection, msg):
"""Handle remove entity websocket command.
Async friendly.
"""
registry = await async_get_registry(hass)
if msg["entity_id"] not in registry.entities:
connection.send_message(
websocket_api.error_message(msg["id"], ERR_NOT_FOUND, "Entity not found")
)
return
registry.async_remove(msg["entity_id"])
connection.send_message(websocket_api.result_message(msg["id"]))
@callback
def _entry_dict(entry):
"""Convert entry to API format."""
return {
"area_id": entry.area_id,
"config_entry_id": entry.config_entry_id,
"device_id": entry.device_id,
"disabled_by": entry.disabled_by,
"entity_category": entry.entity_category,
"entity_id": entry.entity_id,
"icon": entry.icon,
"name": entry.name,
"platform": entry.platform,
}
@callback
def _entry_ext_dict(entry):
"""Convert entry to API format."""
data = _entry_dict(entry)
data["capabilities"] = entry.capabilities
data["device_class"] = entry.device_class
data["options"] = entry.options
data["original_device_class"] = entry.original_device_class
data["original_icon"] = entry.original_icon
data["original_name"] = entry.original_name
data["unique_id"] = entry.unique_id
return data
| 32.524664 | 87 | 0.655591 |
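The serialization pattern at the bottom of the file — a base `_entry_dict()` payload that `_entry_ext_dict()` extends with item-only fields — can be sketched standalone. `FakeEntry` below is a trimmed, hypothetical stand-in; the real Home Assistant `RegistryEntry` carries many more attributes.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical, trimmed stand-in for a registry entry.
@dataclass
class FakeEntry:
    entity_id: str
    name: Optional[str] = None
    icon: Optional[str] = None
    unique_id: Optional[str] = None

def entry_dict(entry):
    # Base payload, mirroring the shape of _entry_dict() above:
    # the fields shared by list and item responses.
    return {
        "entity_id": entry.entity_id,
        "icon": entry.icon,
        "name": entry.name,
    }

def entry_ext_dict(entry):
    # Extended payload: start from the base dict and layer on the
    # item-only fields, as _entry_ext_dict() does.
    data = entry_dict(entry)
    data["unique_id"] = entry.unique_id
    return data
```

Building the extended dict on top of the base one keeps the two response shapes from drifting apart as fields are added.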
a7b646f598303dd4b246fe194b81c8ad417e5db9 | 53,017 | py | Python | docs/manual/_ext/webapidocs.py | grimmy/reviewboard | ca673e1ba77985a5dc1f3261595ba0389b48e093 | [
"MIT"
] | 921 | 2015-01-01T15:26:28.000Z | 2022-03-29T11:30:38.000Z | docs/manual/_ext/webapidocs.py | grimmy/reviewboard | ca673e1ba77985a5dc1f3261595ba0389b48e093 | [
"MIT"
] | 5 | 2015-03-17T18:57:47.000Z | 2020-10-02T13:24:31.000Z | docs/manual/_ext/webapidocs.py | grimmy/reviewboard | ca673e1ba77985a5dc1f3261595ba0389b48e093 | [
"MIT"
] | 285 | 2015-01-12T06:24:36.000Z | 2022-03-29T11:03:50.000Z | """Sphinx plugins for web API docs."""
from __future__ import unicode_literals
import ast
import inspect
import json
import logging
import os
import re
import sys
from importlib import import_module
# Initialize Review Board before we load anything from Django.
import reviewboard
reviewboard.initialize()
from beanbag_docutils.sphinx.ext.http_role import (
DEFAULT_HTTP_STATUS_CODES_URL, HTTP_STATUS_CODES)
from django.contrib.auth.models import User
from django.core.exceptions import ObjectDoesNotExist
from django.http import HttpRequest
from django.template.defaultfilters import title
from django.utils import six
from djblets.features.testing import override_feature_checks
from djblets.util.http import is_mimetype_a
from djblets.webapi.fields import (BaseAPIFieldType,
ChoiceFieldType,
DateTimeFieldType,
ResourceFieldType,
ResourceListFieldType)
from djblets.webapi.resources import get_resource_from_class, WebAPIResource
from djblets.webapi.responses import WebAPIResponseError
from docutils import nodes
from docutils.parsers.rst import Directive, DirectiveError, directives
from docutils.statemachine import StringList, ViewList, string2lines
from reviewboard.scmtools.models import Repository
from reviewboard.webapi.resources import resources
from sphinx import addnodes
from sphinx.util import docname_join
from sphinx.util.docstrings import prepare_docstring
# Mapping of mimetypes to language names for syntax highlighting.
MIMETYPE_LANGUAGES = [
('application/json', 'javascript'),
('application/xml', 'xml'),
('text/x-patch', 'diff'),
]
# Build the list of parents.
resources.root.get_url_patterns()
class ResourceNotFound(Exception):
def __init__(self, directive, classname):
self.classname = classname
self.error_node = [
directive.state_machine.reporter.error(
str(self),
line=directive.lineno)
]
def __str__(self):
return ('Unable to import the web API resource class "%s"'
% self.classname)
class ErrorNotFound(Exception):
def __init__(self, directive, classname):
self.error_node = [
directive.state_machine.reporter.error(
'Unable to import the web API error class "%s"' % classname,
line=directive.lineno)
]
class DummyRequest(HttpRequest):
def __init__(self, *args, **kwargs):
super(DummyRequest, self).__init__(*args, **kwargs)
self.method = 'GET'
self.path = ''
self.user = User.objects.all()[0]
self.session = {}
self._local_site_name = None
self.local_site = None
# This is normally set internally by Djblets, but we don't
# go through the standard __call__ flow.
self._djblets_webapi_object_cache = {}
def build_absolute_uri(self, location=None):
if not self.path and not location:
return '/api/'
if not location:
location = self.path
if not location.startswith('http://'):
location = 'http://reviews.example.com' + location
return location
class ResourceDirective(Directive):
has_content = True
required_arguments = 0
option_spec = {
'classname': directives.unchanged_required,
'is-list': directives.flag,
'hide-links': directives.flag,
'hide-examples': directives.flag,
}
item_http_methods = set(['GET', 'DELETE', 'PUT'])
list_http_methods = set(['GET', 'POST'])
FILTERED_MIMETYPES = [
'application/json',
'application/xml',
]
def run(self):
try:
resource_class = self.get_resource_class(self.options['classname'])
except ResourceNotFound as e:
return e.error_node
# Add the class's file and this extension to the dependencies.
env = self.state.document.settings.env
env.note_dependency(__file__)
env.note_dependency(sys.modules[resource_class.__module__].__file__)
resource = get_resource_from_class(resource_class)
is_list = 'is-list' in self.options
docname = 'webapi2.0-%s-resource' % \
get_resource_docname(env.app, resource, is_list)
resource_title = get_resource_title(resource, is_list)
targetnode = nodes.target('', '', ids=[docname], names=[docname])
self.state.document.note_explicit_target(targetnode)
main_section = nodes.section(ids=[docname])
# Main section
main_section += nodes.title(text=resource_title)
for attr_name, text_fmt in (('added_in', 'Added in %s'),
('deprecated_in', 'Deprecated in %s'),
('removed_in', 'Removed in %s')):
version = getattr(resource, attr_name, None)
if not version:
if is_list:
prefix = 'list_resource'
else:
prefix = 'item_resource'
version = getattr(resource, '%s_%s' % (prefix, attr_name),
None)
if version:
paragraph = nodes.paragraph()
paragraph += nodes.emphasis(text=text_fmt % version,
classes=['resource-versioning'])
main_section += paragraph
main_section += parse_text(
self, inspect.getdoc(resource),
where='%s class docstring' % self.options['classname'])
if getattr(resource, 'required_features', False):
required_features = nodes.important()
required_features += nodes.inline(
text='Using this resource requires extra features to be '
'enabled on the server. See "Required Features" below.')
main_section += required_features
# Details section
details_section = nodes.section(ids=['details'])
main_section += details_section
details_section += nodes.title(text='Details')
details_section += self.build_details_table(resource)
# Fields section
if (resource.fields and
(not is_list or resource.singleton)):
fields_section = nodes.section(ids=['fields'])
main_section += fields_section
fields_section += nodes.title(text='Fields')
fields_section += self.build_fields_table(resource.fields)
# Links section
if 'hide-links' not in self.options:
fields_section = nodes.section(ids=['links'])
main_section += fields_section
fields_section += nodes.title(text='Links')
fields_section += self.build_links_table(resource)
# HTTP method descriptions
for http_method in self.get_http_methods(resource, is_list):
method_section = nodes.section(ids=[http_method])
main_section += method_section
method_section += nodes.title(text='HTTP %s' % http_method)
method_section += self.build_http_method_section(resource,
http_method)
if 'hide-examples' not in self.options:
examples_section = nodes.section(ids=['examples'])
examples_section += nodes.title(text='Examples')
has_examples = False
if is_list:
mimetype_key = 'list'
else:
mimetype_key = 'item'
for mimetype in resource.allowed_mimetypes:
try:
mimetype = mimetype[mimetype_key]
except KeyError:
continue
if mimetype in self.FILTERED_MIMETYPES:
# Resources have more specific mimetypes. We want to
# filter out the general ones (like application/json)
# so we don't show redundant examples.
continue
if mimetype.endswith('xml'):
# JSON is preferred. While we support XML, let's not
# continue to advertise it.
continue
url, headers, data = \
self.fetch_resource_data(resource, mimetype)
example_node = build_example(headers, data, mimetype)
if example_node:
example_section = \
nodes.section(ids=['example_' + mimetype],
classes=['examples', 'requests-example'])
examples_section += example_section
example_section += nodes.title(text=mimetype)
accept_mimetype = mimetype
if (mimetype.startswith('application/') and
mimetype.endswith('+json')):
# Instead of telling the user to ask for a specific
# mimetype on the request, show them that asking for
# application/json works fine.
accept_mimetype = 'application/json'
curl_text = (
'$ curl http://reviews.example.com%s -H "Accept: %s"'
% (url, accept_mimetype)
)
example_section += nodes.literal_block(
curl_text, curl_text, classes=['cmdline'])
example_section += nodes.literal_block(
headers, headers, classes=['http-headers'])
example_section += example_node
has_examples = True
if has_examples:
main_section += examples_section
return [targetnode, main_section]
def build_details_table(self, resource):
env = self.state.document.settings.env
app = env.app
is_list = 'is-list' in self.options
table = nodes.table(classes=['resource-info'])
tgroup = nodes.tgroup(cols=2)
table += tgroup
tgroup += nodes.colspec(colwidth=30, classes=['field'])
tgroup += nodes.colspec(colwidth=70, classes=['value'])
tbody = nodes.tbody()
tgroup += tbody
# Name
if is_list:
resource_name = resource.name_plural
else:
resource_name = resource.name
append_detail_row(tbody, "Name", nodes.literal(text=resource_name))
# URI
uri_template = get_resource_uri_template(resource, not is_list)
append_detail_row(tbody, "URI", nodes.literal(text=uri_template))
# Required features
if getattr(resource, 'required_features', False):
feature_list = nodes.bullet_list()
for feature in resource.required_features:
item = nodes.list_item()
paragraph = nodes.paragraph()
paragraph += nodes.inline(text=feature.feature_id)
item += paragraph
feature_list += item
append_detail_row(tbody, 'Required Features', feature_list)
# Token Policy ID
if hasattr(resource, 'policy_id'):
append_detail_row(tbody, "Token Policy ID",
nodes.literal(text=resource.policy_id))
# HTTP Methods
allowed_http_methods = self.get_http_methods(resource, is_list)
bullet_list = nodes.bullet_list()
for http_method in allowed_http_methods:
item = nodes.list_item()
bullet_list += item
paragraph = nodes.paragraph()
item += paragraph
ref = nodes.reference(text=http_method, refid=http_method)
paragraph += ref
doc_summary = self.get_doc_for_http_method(resource, http_method)
i = doc_summary.find('.')
if i != -1:
doc_summary = doc_summary[:i + 1]
paragraph += nodes.inline(text=" - ")
paragraph += parse_text(
self, doc_summary,
wrapper_node_type=nodes.inline,
where='HTTP %s handler summary for %s'
% (http_method, self.options['classname']))
append_detail_row(tbody, "HTTP Methods", bullet_list)
# Parent Resource
if is_list or resource.uri_object_key is None:
parent_resource = resource._parent_resource
is_parent_list = False
else:
parent_resource = resource
is_parent_list = True
if parent_resource:
paragraph = nodes.paragraph()
paragraph += get_ref_to_resource(app, parent_resource,
is_parent_list)
else:
paragraph = 'None.'
append_detail_row(tbody, "Parent Resource", paragraph)
# Child Resources
if is_list:
child_resources = list(resource.list_child_resources)
if resource.name != resource.name_plural:
if resource.uri_object_key:
child_resources.append(resource)
are_children_lists = False
else:
are_children_lists = True
else:
child_resources = resource.item_child_resources
are_children_lists = True
if child_resources:
tocnode = addnodes.toctree()
tocnode['glob'] = None
tocnode['maxdepth'] = 1
tocnode['hidden'] = False
docnames = sorted([
docname_join(env.docname,
get_resource_docname(app, child_resource,
are_children_lists))
for child_resource in child_resources
])
tocnode['includefiles'] = docnames
tocnode['entries'] = [(None, docname) for docname in docnames]
else:
tocnode = nodes.paragraph(text="None")
append_detail_row(tbody, "Child Resources", tocnode)
# Anonymous Access
if is_list and not resource.singleton:
getter = resource.get_list
else:
getter = resource.get
if getattr(getter, 'login_required', False):
anonymous_access = 'No'
elif getattr(getter, 'checks_login_required', False):
anonymous_access = 'Yes, if anonymous site access is enabled'
else:
anonymous_access = 'Yes'
append_detail_row(tbody, "Anonymous Access", anonymous_access)
return table
def build_fields_table(self, fields, required_field_names=None):
"""Build a table representing a list of fields.
Args:
fields (dict):
The fields to display.
required_field_names (set of unicode, optional):
The field names that are required.
Returns:
list of docutils.nodes.Node:
The resulting list of nodes for the fields table.
"""
options = {
'fields': fields,
}
if required_field_names is not None:
options.update({
'show-requirement-labels': True,
'required-field-names': set(required_field_names),
})
return run_directive(self, 'webapi-resource-field-list',
options=options)
def build_links_table(self, resource):
is_list = 'is-list' in self.options
table = nodes.table()
tgroup = nodes.tgroup(cols=3)
table += tgroup
tgroup += nodes.colspec(colwidth=25)
tgroup += nodes.colspec(colwidth=15)
tgroup += nodes.colspec(colwidth=60)
thead = nodes.thead()
tgroup += thead
append_row(thead, ['Name', 'Method', 'Resource'])
tbody = nodes.tbody()
tgroup += tbody
request = DummyRequest()
if is_list:
child_resources = resource.list_child_resources
else:
child_resources = resource.item_child_resources
names_to_resource = {}
for child in child_resources:
names_to_resource[child.name_plural] = (child, True)
if not is_list and resource.model:
child_keys = {}
create_fake_resource_path(request, resource, child_keys, True)
obj = resource.get_queryset(request, **child_keys)[0]
else:
obj = None
related_links = resource.get_related_links(request=request, obj=obj)
for key, info in six.iteritems(related_links):
if 'resource' in info:
names_to_resource[key] = \
(info['resource'], info.get('list-resource', False))
links = resource.get_links(child_resources, request=DummyRequest(),
obj=obj)
app = self.state.document.settings.env.app
for linkname in sorted(six.iterkeys(links)):
info = links[linkname]
child, is_child_link = \
names_to_resource.get(linkname, (resource, is_list))
paragraph = nodes.paragraph()
paragraph += get_ref_to_resource(app, child, is_child_link)
append_row(tbody,
[nodes.strong(text=linkname),
info['method'],
paragraph])
return table
def build_http_method_section(self, resource, http_method):
doc = self.get_doc_for_http_method(resource, http_method)
http_method_func = self.get_http_method_func(resource, http_method)
# Description text
returned_nodes = [
parse_text(self, doc,
wrapper_node_type=nodes.paragraph,
where='HTTP %s doc' % http_method),
]
# Request Parameters section
required_fields = getattr(http_method_func, 'required_fields', [])
optional_fields = getattr(http_method_func, 'optional_fields', [])
if required_fields or optional_fields:
all_fields = dict(required_fields)
all_fields.update(optional_fields)
fields_section = nodes.section(ids=['%s_params' % http_method])
returned_nodes.append(fields_section)
fields_section += nodes.title(text='Request Parameters')
table = self.build_fields_table(
all_fields,
required_field_names=set(six.iterkeys(required_fields)))
fields_section += table
# Errors section
errors = getattr(http_method_func, 'response_errors', [])
if errors:
errors_section = nodes.section(ids=['%s_errors' % http_method])
returned_nodes.append(errors_section)
errors_section += nodes.title(text='Errors')
errors_section += self.build_errors_table(errors)
return returned_nodes
def build_errors_table(self, errors):
"""Build a table representing a list of errors.
Args:
errors (list of djblets.webapi.errors.WebAPIError):
The errors to display.
Returns:
list of docutils.nodes.Node:
The resulting list of nodes for the errors table.
"""
table = nodes.table(classes=['api-errors'])
tgroup = nodes.tgroup(cols=2)
table += tgroup
tgroup += nodes.colspec(colwidth=25)
tgroup += nodes.colspec(colwidth=75)
tbody = nodes.tbody()
tgroup += tbody
for error in sorted(errors, key=lambda x: x.code):
http_code = nodes.inline(classes=['http-error'])
http_code += nodes.reference(
text='HTTP %s - %s' % (error.http_status,
HTTP_STATUS_CODES[error.http_status]),
refuri=(DEFAULT_HTTP_STATUS_CODES_URL
                        % error.http_status))
error_code = nodes.inline(classes=['api-error'])
error_code += get_ref_to_error(error)
error_info = nodes.inline()
error_info += error_code
error_info += http_code
append_row(
tbody,
[
error_info,
nodes.inline(text=error.msg),
])
return table
def fetch_resource_data(self, resource, mimetype):
features = {
feature.feature_id: True
for feature in resource.required_features
}
with override_feature_checks(features):
kwargs = {}
request = DummyRequest()
request.path = create_fake_resource_path(
request, resource, kwargs, 'is-list' not in self.options)
headers, data = fetch_response_data(resource, mimetype, request,
**kwargs)
return request.path, headers, data
def get_resource_class(self, classname):
try:
return get_from_module(classname)
except ImportError:
raise ResourceNotFound(self, classname)
def get_http_method_func(self, resource, http_method):
if (http_method == 'GET' and 'is-list' in self.options and
not resource.singleton):
method_name = 'get_list'
else:
method_name = resource.method_mapping[http_method]
# Change "put" and "post" to "update" and "create", respectively.
# "put" and "post" are just wrappers and we don't want to show
# their documentation.
if method_name == 'put':
method_name = 'update'
elif method_name == 'post':
method_name = 'create'
return getattr(resource, method_name)
def get_doc_for_http_method(self, resource, http_method):
return inspect.getdoc(self.get_http_method_func(resource,
http_method)) or ''
def get_http_methods(self, resource, is_list):
if is_list:
possible_http_methods = self.list_http_methods
else:
possible_http_methods = self.item_http_methods
return sorted(
set(resource.allowed_methods).intersection(possible_http_methods))
class ResourceFieldListDirective(Directive):
"""Directive for listing fields in a resource.
This directive can be used to list the fields belonging to a resource,
the fields within part of a resource's payload, or fields accepted by
an operation on a resource.
The fields can be provided directly (if being called by Python code)
through the ``fields`` and ``required-field-names`` options. Otherwise,
this will parse the content of the directive for any
``webapi-resource-field`` directives and use those instead.
"""
has_content = True
option_spec = {
'fields': directives.unchanged,
'required-field-names': directives.unchanged,
}
def run(self):
"""Run the directive and render the resulting fields.
Returns:
list of docutils.nodes.Node:
The resulting nodes.
"""
fields = self.options.get('fields')
required_fields = self.options.get('required-field-names')
table = nodes.table(classes=['resource-fields'])
tgroup = nodes.tgroup(cols=3)
table += tgroup
tgroup += nodes.colspec(colwidth=15, classes=['field'])
tgroup += nodes.colspec(colwidth=85, classes=['description'])
tbody = nodes.tbody()
tgroup += tbody
if fields is not None:
assert isinstance(fields, dict)
if required_fields is not None:
field_keys = sorted(
six.iterkeys(fields),
key=lambda field: (field not in required_fields, field))
else:
field_keys = sorted(six.iterkeys(fields))
for field in field_keys:
info = fields[field]
options = {
'name': field,
'type': info['type'],
'field-info': info,
}
if info.get('supports_text_types'):
options['supports-text-types'] = True
if required_fields is not None and field in required_fields:
options['show-required'] = True
if info.get('added_in'):
options['added-in'] = info['added_in']
if info.get('deprecated_in'):
options['deprecated-in'] = info['deprecated_in']
if info.get('removed_in'):
options['removed-in'] = info['removed_in']
field_row = run_directive(
self,
'webapi-resource-field',
content='\n'.join(prepare_docstring(info['description'])),
options=options)
tbody += field_row
elif self.content:
node = nodes.Element()
self.state.nested_parse(self.content, self.content_offset,
node)
# ResourceFieldDirective outputs two fields (two table cells) per
# field. We want to loop through and grab each.
tbody += node.children
return [table]
class ResourceFieldDirective(Directive):
"""Directive for displaying information on a field in a resource.
This directive can be used to display details about a specific field
belonging to a resource, a part of a resource's payload, or a field
accepted by an operation on a resource.
This is expected to be added into a ``webapi-resource-field-list``
directive. The resulting node is a table row.
"""
has_content = True
option_spec = {
'name': directives.unchanged_required,
'type': directives.unchanged_required,
'field-info': directives.unchanged,
'show-required': directives.flag,
'supports-text-types': directives.flag,
'added-in': directives.unchanged,
'deprecated-in': directives.unchanged,
'removed-in': directives.unchanged,
}
type_mapping = {
int: 'Integer',
bytes: 'Byte String',
six.text_type: 'String',
bool: 'Boolean',
dict: 'Dictionary',
list: 'List',
}
type_name_mapping = {
'int': int,
'bytes': bytes,
'str': six.text_type,
'unicode': six.text_type,
'bool': bool,
'dict': dict,
'list': list,
}
def run(self):
"""Run the directive and render the resulting fields.
Returns:
list of docutils.nodes.Node:
The resulting nodes.
"""
self.assert_has_content()
name = self.options['name']
# Field/type information
field_node = nodes.inline()
field_node += nodes.strong(text=name, classes=['field-name'])
type_node = nodes.inline(classes=['field-type'])
field_node += type_node
if 'supports-text-types' in self.options:
type_node += get_ref_to_doc('webapi2.0-text-fields', 'Rich Text')
else:
type_node += self._get_type_name(
self.options['type'],
self.options.get('field-info', {}))
# Description/required/versioning information
description_node = nodes.inline()
if 'show-required' in self.options:
description_node += nodes.inline(text='Required',
classes=['field-required'])
if 'deprecated-in' in self.options:
description_node += nodes.inline(text='Deprecated',
classes=['field-deprecated'])
if isinstance(self.content, StringList):
description = '\n'.join(self.content)
else:
description = self.content
description_node += parse_text(self, description)
if 'added-in' in self.options:
paragraph = nodes.paragraph()
paragraph += nodes.emphasis(
text='Added in %s\n' % self.options['added-in'],
classes=['field-versioning'])
description_node += paragraph
if 'deprecated-in' in self.options:
paragraph = nodes.paragraph()
paragraph += nodes.emphasis(
text='Deprecated in %s\n' % self.options['deprecated-in'],
classes=['field-versioning'])
description_node += paragraph
if 'removed-in' in self.options:
paragraph = nodes.paragraph()
paragraph += nodes.emphasis(
text='Removed in %s\n' % self.options['removed-in'],
classes=['field-versioning'])
description_node += paragraph
row = nodes.row()
entry = nodes.entry()
entry += field_node
row += entry
entry = nodes.entry()
entry += description_node
row += entry
return [row]
def _get_type_name(self, field_type, field_info, nested=False):
"""Return the displayed name for a given type.
This will attempt to take a type (either a string representation or
a Python structure) and return a string that can be used for display
in the API docs.
This may also be provided a Python class path for a resource.
Args:
field_type (object):
The type of field (as a Python structure), a string
representing a Python structure, or the class path to a
resource.
field_info (dict):
The metadata on the field.
nested (bool, optional):
Whether this call is nested within another call to this
function.
Returns:
unicode:
The resulting string used for display.
Raises:
ResourceNotFound:
A resource path appeared to be provided, but a resource was
not found.
ValueError:
The type is unsupported.
"""
if (inspect.isclass(field_type) and
issubclass(field_type, BaseAPIFieldType)):
field_type = field_type(field_info)
if isinstance(field_type, ResourceFieldType):
result = []
if isinstance(field_type, ResourceListFieldType):
result.append(nodes.inline(text='List of '))
result.append(get_ref_to_resource(
self.state.document.settings.env.app,
field_type.resource,
False))
return result
elif isinstance(field_type, ChoiceFieldType):
value_nodes = []
for value in field_type.choices:
if value_nodes:
value_nodes.append(nodes.inline(text=', '))
value_nodes.append(nodes.literal(text=value))
return [nodes.inline(text='One of ')] + value_nodes
elif isinstance(field_type, DateTimeFieldType):
return parse_text(self,
':term:`%s <ISO8601 format>`' % field_type)
else:
return [
nodes.inline(text=six.text_type(field_type)),
]
if (isinstance(field_type, six.text_type) and
field_type is not six.text_type):
# First see if this is a string name for a type. This would be
# coming from a docstring.
try:
field_type = self.type_name_mapping[field_type]
except KeyError:
if '.' in field_type:
# We may be dealing with a forward-declared class.
try:
field_type = get_from_module(field_type)
except ImportError:
raise ResourceNotFound(self, field_type)
else:
# Maybe we can parse this?
field_type = self._parse_type_string(field_type)
if type(field_type) is list:
result = []
if not nested:
result.append(nodes.inline(text='List of '))
if len(field_type) > 1:
result.append(nodes.inline(text='['))
first = True
for item in field_type:
if not first:
result.append(nodes.inline(text=', '))
result += self._get_type_name(item, field_info, nested=True)
first = False
if len(field_type) > 1:
result.append(nodes.inline(text=']'))
return result
elif type(field_type) is tuple:
value_nodes = []
for value in field_type:
if value_nodes:
value_nodes.append(nodes.inline(text=', '))
value_nodes.append(nodes.literal(text=value))
return [nodes.inline(text='One of ')] + value_nodes
elif field_type in self.type_mapping:
return [nodes.inline(text=self.type_mapping[field_type])]
else:
raise ValueError('Unsupported type %r' % (field_type,))
def _parse_type_string(self, type_str):
"""Parse a string representing a given type.
The string can represent a simple Python primitive (``list``, ``dict``,
etc.) or a nested structure (``list[dict]``, ``list[[int, unicode]]``,
etc.).
Args:
type_str (unicode):
The string to parse.
Returns:
object
The resulting Python structure for the given type string.
Raises:
ValueError:
The type is unsupported.
"""
def _parse_node(node):
if isinstance(node, ast.Str):
return node.s
elif isinstance(node, ast.Num):
return node.n
elif isinstance(node, ast.Tuple):
return tuple(_parse_node(item) for item in node.elts)
elif isinstance(node, ast.List):
return list(_parse_node(item) for item in node.elts)
            elif isinstance(node, ast.Dict):
                # ast.Dict stores parallel .keys and .values lists
                # (it has no .elts attribute).
                return dict(
                    (_parse_node(key), _parse_node(value))
                    for key, value in zip(node.keys, node.values)
                )
elif isinstance(node, ast.Name):
try:
return self.type_name_mapping[node.id]
except KeyError:
raise ValueError(
'Unsupported node name "%s" for type string %r'
% (node.id, type_str))
elif isinstance(node, ast.Subscript):
return _parse_node(node.value)([_parse_node(node.slice.value)])
raise ValueError('Unsupported node type %r for type string %r'
% (node, type_str))
return _parse_node(ast.parse(type_str, mode='eval').body)
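A minimal, standalone sketch of the `ast`-based parsing idea used by `_parse_type_string` (this hypothetical mini version maps a few hard-coded names instead of the class's `type_name_mapping`, and only handles names, lists, and subscripts):

```python
import ast

# Illustrative stand-in for _parse_type_string; not the real method.
def parse_type_string(type_str):
    names = {'int': int, 'dict': dict, 'list': list, 'unicode': str}

    def _parse_node(node):
        if isinstance(node, ast.Name):
            return names[node.id]
        elif isinstance(node, ast.List):
            return [_parse_node(item) for item in node.elts]
        elif isinstance(node, ast.Subscript):
            sl = node.slice
            if type(sl).__name__ == 'Index':  # Python < 3.9 wraps the slice
                sl = sl.value
            # e.g. 'list[dict]' -> list([dict]) -> [dict]
            return _parse_node(node.value)([_parse_node(sl)])
        raise ValueError('Unsupported node %r' % node)

    return _parse_node(ast.parse(type_str, mode='eval').body)

print(parse_type_string('list[dict]'))         # [<class 'dict'>]
print(parse_type_string('list[[int, dict]]'))  # [[<class 'int'>, <class 'dict'>]]
```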
class ResourceTreeDirective(Directive):
has_content = True
def run(self):
bullet_list = nodes.bullet_list()
self._output_resource(resources.root, bullet_list, True)
return [bullet_list]
def _output_resource(self, resource, parent, is_list):
item = nodes.list_item()
parent += item
paragraph = nodes.paragraph()
item += paragraph
paragraph += parse_text(
self,
':ref:`%s <%s>`' %
(get_resource_title(resource, is_list, False),
'webapi2.0-%s-resource'
% get_resource_docname(self.state.document.settings.env.app,
resource, is_list)))
bullet_list = nodes.bullet_list()
item += bullet_list
if is_list:
if resource.uri_object_key:
self._output_resource(resource, bullet_list, False)
for child in resource.list_child_resources:
self._output_resource(child, bullet_list, True)
else:
for child in resource.item_child_resources:
self._output_resource(child, bullet_list, True)
class ErrorDirective(Directive):
has_content = True
final_argument_whitespace = True
option_spec = {
'instance': directives.unchanged_required,
'example-data': directives.unchanged,
'title': directives.unchanged,
}
MIMETYPES = [
'application/json',
]
def run(self):
try:
error_obj = self.get_error_object(self.options['instance'])
except ErrorNotFound as e:
return e.error_node
# Add the class's file and this extension to the dependencies.
self.state.document.settings.env.note_dependency(__file__)
self.state.document.settings.env.note_dependency(
sys.modules[error_obj.__module__].__file__)
docname = 'webapi2.0-error-%s' % error_obj.code
error_title = self.get_error_title(error_obj)
targetnode = nodes.target('', '', ids=[docname], names=[docname])
self.state.document.note_explicit_target(targetnode)
main_section = nodes.section(ids=[docname])
# Details section
main_section += nodes.title(text=error_title)
main_section += self.build_details_table(error_obj)
# Example section
examples_section = nodes.section(ids=['examples'])
examples_section += nodes.title(text='Examples')
extra_params = {}
if 'example-data' in self.options:
extra_params = json.loads(self.options['example-data'])
has_examples = False
for mimetype in self.MIMETYPES:
headers, data = \
fetch_response_data(WebAPIResponseError, mimetype,
err=error_obj,
extra_params=extra_params)
example_node = build_example(headers, data, mimetype)
if example_node:
example_section = nodes.section(ids=['example_' + mimetype])
examples_section += example_section
example_section += nodes.title(text=mimetype)
example_section += example_node
has_examples = True
if has_examples:
main_section += examples_section
return [targetnode, main_section]
def build_details_table(self, error_obj):
table = nodes.table()
tgroup = nodes.tgroup(cols=2)
table += tgroup
tgroup += nodes.colspec(colwidth=20)
tgroup += nodes.colspec(colwidth=80)
tbody = nodes.tbody()
tgroup += tbody
# API Error Code
append_detail_row(tbody, 'API Error Code',
nodes.literal(text=error_obj.code))
# HTTP Status Code
ref = parse_text(self, ':http:`%s`' % error_obj.http_status)
append_detail_row(tbody, 'HTTP Status Code', ref)
# Error Text
append_detail_row(tbody, 'Error Text',
nodes.literal(text=error_obj.msg))
        if error_obj.headers:
            headers = error_obj.headers
            if callable(headers):
                headers = headers(DummyRequest())
# HTTP Headers
header_keys = list(six.iterkeys(headers))
if len(header_keys) == 1:
content = nodes.literal(text=header_keys[0])
else:
content = nodes.bullet_list()
for header in header_keys:
item = nodes.list_item()
content += item
literal = nodes.literal(text=header)
item += literal
append_detail_row(tbody, 'HTTP Headers', content)
# Description
append_detail_row(
tbody, 'Description',
parse_text(self, '\n'.join(self.content),
where='API error %s description' % error_obj.code))
return table
def get_error_title(self, error_obj):
if 'title' in self.options:
error_title = self.options['title']
else:
name = self.options['instance'].split('.')[-1]
error_title = name.replace('_', ' ').title()
return '%s - %s' % (error_obj.code, error_title)
def get_error_object(self, name):
try:
return get_from_module(name)
except ImportError:
raise ErrorNotFound(self, name)
def parse_text(directive, text, wrapper_node_type=None, where=None):
"""Parse text in ReST format and return a node with the content.
Args:
directive (docutils.parsers.rst.Directive):
The directive that will contain the resulting nodes.
text (unicode):
The text to parse.
wrapper_node_type (docutils.nodes.Node, optional):
An optional node type used to contain the children.
where (unicode, optional):
Information on the location being parsed in case there's a
failure.
Returns:
list of docutils.nodes.Node:
The resulting list of parsed nodes.
"""
assert text is not None, 'Missing text during parse_text in %s' % where
if wrapper_node_type:
node_type = wrapper_node_type
else:
node_type = nodes.container
node = node_type(rawsource=text)
directive.state.nested_parse(ViewList(string2lines(text), source=''),
0, node)
if wrapper_node_type:
return node
else:
return node.children
def run_directive(parent_directive, name, content='', options={}):
"""Run and render a directive.
Args:
parent_directive (docutils.parsers.rst.Directive):
The directive running another directive.
name (unicode):
The name of the directive to run.
content (unicode, optional):
The content to pass to the directive.
options (dict, optional):
The options to pass to the directive.
Returns:
list of docutils.nodes.Node:
The resulting list of nodes from the directive.
"""
state = parent_directive.state
directive_class, messages = directives.directive(name,
state.memo.language,
state.document)
state.parent += messages
if not directive_class:
return state.unknown_directive(name)
state_machine = state.state_machine
lineno = state_machine.abs_line_number()
directive = directive_class(
name=name,
arguments=[],
options=options,
content=content,
lineno=lineno,
content_offset=0,
block_text='',
state=parent_directive.state,
state_machine=state_machine)
try:
return directive.run()
except DirectiveError as e:
return [
parent_directive.reporter.system_message(e.level, e.msg,
line=lineno),
]
def get_from_module(name):
i = name.rfind('.')
module, attr = name[:i], name[i + 1:]
try:
mod = import_module(module)
return getattr(mod, attr)
except AttributeError:
raise ImportError('Unable to load "%s" from "%s"' % (attr, module))
def append_row(tbody, cells):
row = nodes.row()
tbody += row
for cell in cells:
entry = nodes.entry()
row += entry
if isinstance(cell, six.text_type):
node = nodes.paragraph(text=cell)
else:
node = cell
entry += node
def append_detail_row(tbody, header_text, detail):
header_node = nodes.strong(text=header_text)
if isinstance(detail, six.text_type):
detail_node = [nodes.paragraph(text=text)
for text in detail.split('\n\n')]
else:
detail_node = detail
append_row(tbody, [header_node, detail_node])
FIRST_CAP_RE = re.compile(r'(.)([A-Z][a-z]+)')
ALL_CAP_RE = re.compile(r'([a-z0-9])([A-Z])')
def uncamelcase(name, separator='_'):
"""
Converts a string from CamelCase into a lowercase name separated by
a provided separator.
"""
s1 = FIRST_CAP_RE.sub(r'\1%s\2' % separator, name)
return ALL_CAP_RE.sub(r'\1%s\2' % separator, s1).lower()
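For reference, the two regexes split at boundaries before a capitalized word and between a lowercase/digit character and an uppercase one. A standalone copy (same regexes as above) behaves like this:

```python
import re

FIRST_CAP_RE = re.compile(r'(.)([A-Z][a-z]+)')
ALL_CAP_RE = re.compile(r'([a-z0-9])([A-Z])')

def uncamelcase(name, separator='_'):
    # Insert the separator before capitalized words, then before any
    # remaining lowercase/digit-to-uppercase transition, and lowercase.
    s1 = FIRST_CAP_RE.sub(r'\1%s\2' % separator, name)
    return ALL_CAP_RE.sub(r'\1%s\2' % separator, s1).lower()

print(uncamelcase('ReviewRequestDraft'))  # review_request_draft
print(uncamelcase('HTTPResponse', '-'))   # http-response
```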
def get_resource_title(resource, is_list, append_resource=True):
"""Returns a human-readable name for the resource."""
if hasattr(resource, 'verbose_name'):
normalized_title = resource.verbose_name
else:
class_name = resource.__class__.__name__
class_name = class_name.replace('Resource', '')
normalized_title = title(uncamelcase(class_name, ' '))
if is_list:
s = '%s List' % normalized_title
else:
s = normalized_title
if append_resource:
s += ' Resource'
return s
def get_resource_docname(app, resource, is_list):
"""Returns the name of the page used for a resource's documentation."""
if inspect.isclass(resource):
class_name = resource.__name__
else:
class_name = resource.__class__.__name__
class_name = class_name.replace('Resource', '')
docname = uncamelcase(class_name, '-')
docname = app.config.webapi_docname_map.get(docname, docname)
if is_list and resource.name != resource.name_plural:
docname = '%s-list' % docname
return docname
def get_ref_to_doc(refname, title=''):
"""Returns a node that links to a document with the given ref name."""
ref = addnodes.pending_xref(reftype='ref', reftarget=refname,
refexplicit=(title != ''), refdomain='std')
ref += nodes.literal(title, title, classes=['xref'])
return ref
def get_ref_to_resource(app, resource, is_list):
"""Returns a node that links to a resource's documentation."""
return get_ref_to_doc('webapi2.0-%s-resource' %
get_resource_docname(app, resource, is_list))
def get_ref_to_error(error, title=''):
"""Returns a node that links to an error's documentation."""
return get_ref_to_doc('webapi2.0-error-%s' % error.code,
title=title)
def get_resource_uri_template(resource, include_child):
"""Returns the URI template for a resource.
This will go up the resource tree, building a URI based on the URIs
of the parents.
"""
if resource.name == 'root':
path = '/api/'
else:
if resource._parent_resource:
path = get_resource_uri_template(resource._parent_resource, True)
path += '%s/' % resource.uri_name
if not resource.singleton and include_child and resource.model:
path += '{%s}/' % resource.uri_object_key
return path
def create_fake_resource_path(request, resource, child_keys, include_child):
"""Create a fake path to a resource.
Args:
request (DummyRequest):
A request-like object that will be passed to resources to generate
the path.
resource (reviewboard.webapi.resources.base.WebAPIResource):
The resource to generate the path to.
child_keys (dict):
A dictionary that will contain the URI object keys and their values
corresponding to the generated path.
include_child (bool):
Whether or not to include child resources.
Returns:
unicode:
The generated path.
Raises:
django.core.exceptions.ObjectDoesNotExist:
A required model does not exist.
"""
iterator = iterate_fake_resource_paths(request, resource, child_keys,
include_child)
try:
path, new_child_keys = next(iterator)
except ObjectDoesNotExist as e:
logging.critical('Could not generate path for resource %r: %s',
resource, e)
raise
child_keys.update(new_child_keys)
return path
def iterate_fake_resource_paths(request, resource, child_keys, include_child):
"""Iterate over all possible fake resource paths using backtracking.
Args:
request (DummyRequest):
A request-like object that will be passed to resources to generate
the path.
resource (reviewboard.webapi.resources.base.WebAPIResource):
The resource to generate the path to.
child_keys (dict):
A dictionary that will contain the URI object keys and their values
corresponding to the generated path.
include_child (bool):
Whether or not to include child resources.
Yields:
tuple:
A 2-tuple of:
* The generated path (:py:class:`unicode`).
* The new child keys (:py:class:`dict`).
Raises:
django.core.exceptions.ObjectDoesNotExist:
A required model does not exist.
"""
if resource.name == 'root':
yield '/api', child_keys
else:
if (resource._parent_resource and
resource._parent_resource.name != 'root'):
parents = iterate_fake_resource_paths(
request, resource._parent_resource, child_keys, True)
else:
parents = [('/api', child_keys)]
iterate_children = (
not resource.singleton and
include_child and
resource.model and
resource.uri_object_key
)
for parent_path, parent_keys in parents:
if iterate_children:
q = resource.get_queryset(request, **parent_keys)
if q.count() == 0:
continue
for obj in q:
value = getattr(obj, resource.model_object_key)
parent_keys[resource.uri_object_key] = value
path = '%s%s/' % (parent_path, value)
yield path, parent_keys
else:
yield parent_path, child_keys
# Only the non-recursive calls to this function will reach here. This
# means that there is no suitable set of parent models that match this
# resource.
raise ObjectDoesNotExist(
'No %s objects in the database match %s.get_queryset().'
        % (resource.model.__name__, type(resource).__name__))
def build_example(headers, data, mimetype):
if not data:
return None
language = None
for base_mimetype, lang in MIMETYPE_LANGUAGES:
if is_mimetype_a(mimetype, base_mimetype):
language = lang
break
if language == 'javascript':
code = json.dumps(json.loads(data), sort_keys=True, indent=2)
else:
code = data
return nodes.literal_block(code, code, language=language or 'text',
classes=['example-payload'])
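The `javascript` branch of `build_example` normalizes the payload by round-tripping it through `json` with sorted keys and two-space indentation. A quick standalone illustration (sample data is made up):

```python
import json

data = '{"b": 1, "a": {"c": 2}}'
code = json.dumps(json.loads(data), sort_keys=True, indent=2)
print(code)
```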
def fetch_response_data(response_class, mimetype, request=None, **kwargs):
if not request:
request = DummyRequest()
request.META['HTTP_ACCEPT'] = mimetype
result = bytes(response_class(request, **kwargs))
headers, data = result.split(b'\r\n\r\n', 1)
return headers.decode('utf-8'), data.decode('utf-8')
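`fetch_response_data` relies on the blank line (`\r\n\r\n`) that separates HTTP headers from the body. A stand-in byte string (not a real Review Board response) shows the split:

```python
# Minimal illustration of the header/body split used above.
raw = b'HTTP/1.1 200 OK\r\nContent-Type: application/json\r\n\r\n{"ok": true}'
headers, data = raw.split(b'\r\n\r\n', 1)
print(headers.decode('utf-8'))
print(data.decode('utf-8'))  # {"ok": true}
```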
def setup(app):
app.add_config_value(str('webapi_docname_map'), {}, str('env'))
app.add_directive('webapi-resource', ResourceDirective)
app.add_directive('webapi-resource-field-list', ResourceFieldListDirective)
app.add_directive('webapi-resource-field', ResourceFieldDirective)
app.add_directive('webapi-resource-tree', ResourceTreeDirective)
app.add_directive('webapi-error', ErrorDirective)
app.add_crossref_type(str('webapi2.0'), str('webapi2.0'),
str('single: %s'), nodes.emphasis)
# Filter out some additional log messages.
for name in ('djblets.util.templatetags.djblets_images',):
logging.getLogger(name).disabled = True
# Our fixtures include a Git Repository that is intended to point at the
# git_repo test data. However, the path field of a repository *must*
# contain an absolute path, so we cannot include the real path in the
# fixtures. Instead we include a placeholder path and replace it when we go
# to build docs, as we know then what the path will be.
Repository.objects.filter(name='Git Repo', path='/placeholder').update(
path=os.path.abspath(os.path.join(
os.path.dirname(reviewboard.__file__),
'scmtools',
'testdata',
'git_repo')))
# app/user/urls.py (szymonborkowski/recipe-app-api, MIT License)
from django.urls import path  # allows you to define different paths in the app
from user import views
app_name = "user"
urlpatterns = [
path('create/', views.CreateUserView.as_view(), name='create'),
path('token/', views.CreateTokenView.as_view(), name='token'),
path('me/', views.ManageUserView.as_view(), name='me'),
]
# Simulation/Train_CcGAN_ndim.py (asatk/improved_CcGAN, MIT License)
'''
2D Gaussian grid (phi x omega) ML generation
'''
import numpy as np
import os
import subprocess
import json
import random
import torch
import torch.backends.cudnn as cudnn
import matplotlib.pyplot as plt
from opts import parse_opts
from PIL import Image  # used by SampCcGAN_given_label when saving images
import timeit
from models import cont_cond_GAN as CCGAN
from defs import PROJECT_DIR  # local module; gen_G writes under PROJECT_DIR/out/
# system
NGPU = torch.cuda.device_count()
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
NCPU = 8
# seeds
seed = 100
random.seed(seed)
torch.manual_seed(seed)
torch.backends.cudnn.deterministic = True
cudnn.benchmark = False
np.random.seed(seed)
# labels
# labels_phi_real = np.linspace(300,3000,num=10,endpoint=True)
# labels_phi_all = np.linspace(0,3000,num=1000+1,endpoint=True)
# labels_omega_real = np.linspace(0.2,2.0,num=10,endpoint=True)
# labels_omega_all = np.linspace(0,2.0,num=1000+1,endpoint=True)
# # load numpy histogram data
# gaus_grid = []
# for f in os.listdir(PROJECT_DIR+"/out/hist_npy/"):
# arr_temp = np.load(PROJECT_DIR+"/out/hist_npy/"+f)
# gaus_grid.append(arr_temp)
# load json hyperparameters
# hpars_path = PROJECT_DIR+"/src/hyperparams.json"
# with open(hpars_path) as hpars_json:
# hpars = json.load(hpars_json)
# vars = hpars['vars']
# kappas = np.array(hpars['kappas'])
# sigmas = np.array(hpars['sigmas'])
# learning parameters
args = parse_opts()
niters = args.niters_gan
resume_niters = args.resume_niters_gan
dim_gan = args.dim_gan
lr_g = args.lr_gan
lr_d = args.lr_gan
save_niters_freq = args.save_niters_freq
batch_size_D = args.batch_size_disc
batch_size_G = args.batch_size_gene
# n_vars = 2
n_vars = 1
# n_samples = len(os.listdir(PROJECT_DIR+"/out/hist_npy/"))
# n_samples = int(subprocess.check_output('ls -alFh ../out/hist_npy/ | grep "1800" | wc -l',shell=True))
# n_epochs = int(1e2)
# lr_g = 5e-5
# lr_d = 5e-5
# batch_size_D = 5
# batch_size_G = 128
use_hard_vicinity = True
soft_threshold = 1e-3
dim_latent_space = 128
# load data and labels in
# train_data = [0]*n_samples
# train_labels = np.zeros((n_samples,n_vars))
# second_label = 0.8e0
# associate each file's data with its label
# data_mapping_path = "data_mapping.json"
# with open(data_mapping_path) as json_file:
# data_label_map = json.load(json_file)
# # load each file from the data mapping
# count = 0
# for i,f in enumerate(data_label_map.keys()):
# # train_labels[i] = data_label_map[f]
# # train_data[i] = np.load(f)
# temp_label = data_label_map[f]
# if temp_label[1] == second_label:
# train_labels[count] = float(temp_label[0])
# train_data[count] = np.load(f)
# count+=1
# train_data = np.array(train_data)
# for 1-D only
var = 0
# label = 1.8e3
# train_index = np.where(train_labels == 1.8e3)[0]
# train_labels = train_labels[train_index]
# train_data = train_data[train_index,:,:]
# output_shape = train_data[0].shape
# out_dim = np.prod(output_shape)
# print(output_shape)
# print(out_dim)
# print(train_index)
# print(train_labels)
# print(train_data)
noise_dim = 128
# gan
# netG = CCGAN.cont_cond_generator(nz=noise_dim,nlabels=1,out_dim=out_dim)
# netD = CCGAN.cont_cond_discriminator(input_dim=out_dim)
# netG.float()
# netD.float()
def train_CcGAN(kernel_sigma, kappa, train_samples, train_labels, netG, netD, save_images_folder, save_models_folder = None, plot_in_train=False, samples_tar_eval = None, angle_grid_eval = None, fig_size=5, point_size=None):
#-----------TRAIN D-----------#
netG.float()
netD.float()
sigmas = np.array([kernel_sigma])
kappas = np.array([kappa])
optimizerD = torch.optim.Adam(netD.parameters(), lr=lr_d, betas=(0.5,0.999))
# generate a batch of target labels from all real labels
batch_target_labels_raw = np.ndarray((batch_size_D,n_vars))
for v in range(n_vars):
batch_target_labels_raw[:,v] = np.random.choice(train_labels[:,v],size=batch_size_D,replace=True).transpose()
# give each target label some noise
batch_epsilons = np.ndarray((batch_size_D,n_vars))
for v in range(n_vars):
batch_epsilons[:,v] = np.random.normal(0,sigmas[v],batch_size_D).transpose()
batch_target_labels = batch_target_labels_raw + batch_epsilons
# find real data within bounds
batch_real_index = np.zeros((batch_size_D),dtype=int)
batch_fake_labels = np.zeros((batch_size_D,n_vars))
def vicinity_hard(x,sample):
return np.sum(np.square(np.divide(x-sample,kappas)))
def vicinity_soft(x,sample):
return np.exp(-1*vicinity_hard(x,sample))
# iterate over the whole batch
j=0
reshuffle_count = 0
while j < batch_size_D:
# print(j)
batch_target_label = batch_target_labels[j]
# print("\n")
# print("batch target label",batch_target_label)
label_vicinity = np.ndarray(train_labels.shape)
# iterate over every training label to find which are within the vicinity of the batch labels
for i in range(len(train_labels)):
train_label = train_labels[i]
if use_hard_vicinity:
hard = np.apply_along_axis(vicinity_hard,0,train_label,sample=batch_target_label)
label_vicinity[i] = hard
else:
soft = np.apply_along_axis(vicinity_soft,0,train_label,sample=batch_target_label)
label_vicinity[i] = soft
if use_hard_vicinity:
indices = np.unique(np.where(label_vicinity <= 1)[0])
else:
indices = np.unique(np.where(label_vicinity >= soft_threshold)[0])
# reshuffle the batch target labels, redo that sample
if len(indices) < 1:
reshuffle_count += 1
# print("RESHUFFLE COUNT",reshuffle_count)
batch_epsilons_j = np.zeros((n_vars))
for v in range(n_vars):
batch_epsilons_j[v] = np.random.normal(0,sigmas[v],1).transpose()
batch_target_labels[j] = batch_target_labels_raw[j] + batch_epsilons_j
continue
# print("RESHUFFLE COUNT",reshuffle_count)
# print("BATCH SAMPLE",batch_target_label)
# set the bounds for random draw of possible fake labels
        if use_hard_vicinity:
lb = batch_target_labels[j] - kappas
ub = batch_target_labels[j] + kappas
else:
lb = batch_target_labels[j] - np.sqrt(-1*np.log(soft_threshold)*kappas)
ub = batch_target_labels[j] + np.sqrt(-1*np.log(soft_threshold)*kappas)
# pick real sample in vicinity
batch_real_index[j] = np.random.choice(indices,size=1)[0]
# generate fake labels
for v in range(n_vars):
batch_fake_labels[j,v] = np.random.uniform(lb[v],ub[v],size=1)[0]
j += 1
reshuffle_count = 0
    batch_real_samples = train_samples[batch_real_index]
batch_real_labels = train_labels[batch_real_index]
batch_real_samples = torch.from_numpy(batch_real_samples).type(torch.float).to(device)
batch_real_labels = torch.from_numpy(batch_real_labels).type(torch.float).to(device)
print("BATCH REAL INDEX:\n",batch_real_index)
print("BATCH REAL LABELS:\n",batch_real_labels)
print("BATCH FAKE LABELS:\n",batch_fake_labels)
print("BATCH TARGET LABELS:\n",batch_target_labels)
batch_fake_labels = torch.from_numpy(batch_fake_labels).type(torch.float).to(device)
z = torch.randn(batch_size_D,dim_latent_space,dtype=torch.float).to(device)
batch_fake_samples = netG(z, batch_fake_labels)
batch_target_labels = torch.from_numpy(batch_target_labels).type(torch.float).to(device)
if use_hard_vicinity:
real_weights = torch.ones(batch_size_D, dtype=torch.float).to(device)
fake_weights = torch.ones(batch_size_D, dtype=torch.float).to(device)
    else:
        # Soft vicinity: per-sample Gaussian weights, computed on-device
        # against each sample's own target label (the labels are already
        # torch tensors at this point).
        kappas_t = torch.from_numpy(kappas).type(torch.float).to(device)
        real_weights = torch.exp(-torch.sum(
            ((batch_real_labels - batch_target_labels) / kappas_t) ** 2, dim=1))
        fake_weights = torch.exp(-torch.sum(
            ((batch_fake_labels - batch_target_labels) / kappas_t) ** 2, dim=1))
real_dis_out = netD(batch_real_samples, batch_target_labels)
fake_dis_out = netD(batch_fake_samples.detach(), batch_target_labels)
d_loss = - torch.mean(real_weights.view(-1) * torch.log(real_dis_out.view(-1)+1e-20)) - \
torch.mean(fake_weights.view(-1) * torch.log(1-fake_dis_out.view(-1)+1e-20))
optimizerD.zero_grad()
d_loss.backward()
optimizerD.step()
#-----------TRAIN G-----------#
optimizerG = torch.optim.Adam(netG.parameters(), lr=lr_g, betas=(0.5,0.999))
# generate a batch of target labels from all real labels
batch_target_labels_raw = np.ndarray((batch_size_G,n_vars))
for v in range(n_vars):
batch_target_labels_raw[:,v] = np.random.choice(train_labels[:,v],size=batch_size_G,replace=True).transpose()
# give each target label some noise
batch_epsilons = np.ndarray((batch_size_G,n_vars))
for v in range(n_vars):
batch_epsilons[:,v] = np.random.normal(0,sigmas[v],batch_size_G).transpose()
batch_target_labels = batch_target_labels_raw + batch_epsilons
batch_target_labels = torch.from_numpy(batch_target_labels).type(torch.float).to(device)
z = torch.randn(batch_size_G,noise_dim,dtype=torch.float).to(device)
batch_fake_samples = netG(z,batch_target_labels)
dis_out = netD(batch_fake_samples,batch_target_labels)
g_loss = - torch.mean(torch.log(dis_out+1e-20))
optimizerG.zero_grad()
g_loss.backward()
optimizerG.step()
return netG, netD
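As a hedged NumPy-only sketch (illustrative values, not the project's real labels), the hard/soft vicinity test used inside `train_CcGAN` reduces to a kappa-scaled squared distance and its Gaussian weight:

```python
import numpy as np

kappas = np.array([0.02])                      # assumed vicinity width
target = np.array([0.50])                      # one target label
train_labels = np.array([[0.49], [0.51], [0.60]])

# Hard vicinity: scaled squared distance <= 1.
hard = np.sum(np.square((train_labels - target) / kappas), axis=1)
# Soft vicinity: Gaussian weight above a threshold.
soft = np.exp(-hard)

in_hard_vicinity = hard <= 1.0       # [ True  True False]
in_soft_vicinity = soft >= 1e-3      # [ True  True False]
print(in_hard_vicinity, in_soft_vicinity)
```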
def SampCcGAN_given_label(netG, label, path=None, NFAKE = 10000, batch_size = 500, num_features=2):
'''
label: normalized label in [0,1]
'''
if batch_size>NFAKE:
batch_size = NFAKE
    fake_samples = np.zeros((NFAKE+batch_size, num_features), dtype=float)
netG=netG.to(device)
netG.eval()
with torch.no_grad():
tmp = 0
while tmp < NFAKE:
z = torch.randn(batch_size, dim_gan, dtype=torch.float).to(device)
y = np.ones(batch_size) * label
y = torch.from_numpy(y).type(torch.float).view(-1,1).to(device)
batch_fake_samples = netG(z, y)
fake_samples[tmp:(tmp+batch_size)] = batch_fake_samples.cpu().detach().numpy()
tmp += batch_size
#remove extra entries
fake_samples = fake_samples[0:NFAKE]
fake_angles = np.ones(NFAKE) * label #use assigned label
if path is not None:
raw_fake_samples = (fake_samples*0.5+0.5)*255.0
raw_fake_samples = raw_fake_samples.astype(np.uint8)
for i in range(NFAKE):
filename = path + '/' + str(i) + '.jpg'
im = Image.fromarray(raw_fake_samples[i][0], mode='L')
im = im.save(filename)
return fake_samples, fake_angles
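The batching pattern in `SampCcGAN_given_label` (over-allocate by one batch, fill in fixed-size chunks, trim to `NFAKE`) can be sketched without torch; `fake_generator` below is a hypothetical stand-in for `netG`:

```python
import numpy as np

def fake_generator(z, y):
    # Stand-in for netG: pretend 2-feature output conditioned on y.
    return z[:, :2] + y

NFAKE, batch_size, dim_gan = 10, 4, 8
samples = np.zeros((NFAKE + batch_size, 2))    # over-allocate one batch
tmp = 0
while tmp < NFAKE:
    z = np.random.randn(batch_size, dim_gan)
    y = np.full((batch_size, 1), 0.5)          # same label for every sample
    samples[tmp:tmp + batch_size] = fake_generator(z, y)
    tmp += batch_size
samples = samples[:NFAKE]                      # trim the extra entries
print(samples.shape)                           # (10, 2)
```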
def loss_D():
#-----------TRAIN D-----------#
optimizerD = torch.optim.Adam(netD.parameters(), lr=lr_d, betas=(0.5,0.999))
# generate a batch of target labels from all real labels
batch_target_labels_raw = np.ndarray((batch_size_D,n_vars))
for v in range(n_vars):
batch_target_labels_raw[:,v] = np.random.choice(train_labels[:,v],size=batch_size_D,replace=True).transpose()
# give each target label some noise
batch_epsilons = np.ndarray((batch_size_D,n_vars))
for v in range(n_vars):
batch_epsilons[:,v] = np.random.normal(0,sigmas[v],batch_size_D).transpose()
batch_target_labels = batch_target_labels_raw + batch_epsilons
# find real data within bounds
batch_real_index = np.zeros((batch_size_D),dtype=int)
batch_fake_labels = np.zeros((batch_size_D,n_vars))
def vicinity_hard(x,sample):
return np.sum(np.square(np.divide(x-sample,kappas)))
def vicinity_soft(x,sample):
return np.exp(-1*vicinity_hard(x,sample))
# iterate over the whole batch
j=0
reshuffle_count = 0
while j < batch_size_D:
# print(j)
batch_target_label = batch_target_labels[j]
# print("\n")
# print("batch target label",batch_target_label)
label_vicinity = np.ndarray(train_labels.shape)
# iterate over every training label to find which are within the vicinity of the batch labels
for i in range(len(train_labels)):
train_label = train_labels[i]
if use_hard_vicinity:
hard = np.apply_along_axis(vicinity_hard,0,train_label,sample=batch_target_label)
label_vicinity[i] = hard
else:
soft = np.apply_along_axis(vicinity_soft,0,train_label,sample=batch_target_label)
label_vicinity[i] = soft
if use_hard_vicinity:
indices = np.unique(np.where(label_vicinity <= 1)[0])
else:
indices = np.unique(np.where(label_vicinity >= soft_threshold)[0])
# reshuffle the batch target labels, redo that sample
if len(indices) < 1:
reshuffle_count += 1
# print("RESHUFFLE COUNT",reshuffle_count)
batch_epsilons_j = np.zeros((n_vars))
for v in range(n_vars):
batch_epsilons_j[v] = np.random.normal(0,sigmas[v],1).transpose()
batch_target_labels[j] = batch_target_labels_raw[j] + batch_epsilons_j
continue
# print("RESHUFFLE COUNT",reshuffle_count)
# print("BATCH SAMPLE",batch_target_label)
# set the bounds for random draw of possible fake labels
        if use_hard_vicinity:
lb = batch_target_labels[j] - kappas
ub = batch_target_labels[j] + kappas
else:
lb = batch_target_labels[j] - np.sqrt(-1*np.log(soft_threshold)*kappas)
ub = batch_target_labels[j] + np.sqrt(-1*np.log(soft_threshold)*kappas)
# pick real sample in vicinity
batch_real_index[j] = np.random.choice(indices,size=1)[0]
# generate fake labels
for v in range(n_vars):
batch_fake_labels[j,v] = np.random.uniform(lb[v],ub[v],size=1)[0]
j += 1
reshuffle_count = 0
batch_real_samples = train_data[batch_real_index]
batch_real_labels = train_labels[batch_real_index]
batch_real_samples = torch.from_numpy(batch_real_samples).type(torch.float).to(device)
batch_real_labels = torch.from_numpy(batch_real_labels).type(torch.float).to(device)
print("BATCH REAL INDEX:\n",batch_real_index)
print("BATCH REAL LABELS:\n",batch_real_labels)
print("BATCH FAKE LABELS:\n",batch_fake_labels)
print("BATCH TARGET LABELS:\n",batch_target_labels)
batch_fake_labels = torch.from_numpy(batch_fake_labels).type(torch.float).to(device)
z = torch.randn(batch_size_D,dim_latent_space,dtype=torch.float).to(device)
batch_fake_samples = netG(z, batch_fake_labels)
batch_target_labels = torch.from_numpy(batch_target_labels).type(torch.float).to(device)
if use_hard_vicinity:
real_weights = torch.ones(batch_size_D, dtype=torch.float).to(device)
fake_weights = torch.ones(batch_size_D, dtype=torch.float).to(device)
    else:
        # Soft vicinity: per-sample Gaussian weights, computed on-device
        # against each sample's own target label.
        kappas_t = torch.from_numpy(kappas).type(torch.float).to(device)
        real_weights = torch.exp(-torch.sum(
            ((batch_real_labels - batch_target_labels) / kappas_t) ** 2, dim=1))
        fake_weights = torch.exp(-torch.sum(
            ((batch_fake_labels - batch_target_labels) / kappas_t) ** 2, dim=1))
real_dis_out = netD(batch_real_samples, batch_target_labels)
fake_dis_out = netD(batch_fake_samples.detach(), batch_target_labels)
d_loss = - torch.mean(real_weights.view(-1) * torch.log(real_dis_out.view(-1)+1e-20)) - \
torch.mean(fake_weights.view(-1) * torch.log(1-fake_dis_out.view(-1)+1e-20))
optimizerD.zero_grad()
d_loss.backward()
optimizerD.step()
    return netD
def loss_G():
#-----------TRAIN G-----------#
optimizerG = torch.optim.Adam(netG.parameters(), lr=lr_g, betas=(0.5,0.999))
# generate a batch of target labels from all real labels
batch_target_labels_raw = np.ndarray((batch_size_G,n_vars))
for v in range(n_vars):
batch_target_labels_raw[:,v] = np.random.choice(train_labels[:,v],size=batch_size_G,replace=True).transpose()
# give each target label some noise
batch_epsilons = np.ndarray((batch_size_G,n_vars))
for v in range(n_vars):
batch_epsilons[:,v] = np.random.normal(0,sigmas[v],batch_size_G).transpose()
batch_target_labels = batch_target_labels_raw + batch_epsilons
batch_target_labels = torch.from_numpy(batch_target_labels).type(torch.float).to(device)
z = torch.randn(batch_size_G,noise_dim,dtype=torch.float).to(device)
batch_fake_samples = netG(z,batch_target_labels)
dis_out = netD(batch_fake_samples,batch_target_labels)
g_loss = - torch.mean(torch.log(dis_out+1e-20))
optimizerG.zero_grad()
g_loss.backward()
optimizerG.step()
return netG
def gen_G(netG,batch_size,label):
NFAKE = int(1e2)
path = PROJECT_DIR+'/out/1DGAN/'
fake_samples = np.zeros((NFAKE,output_shape[0],output_shape[1]),dtype=float)
netG = netG.to(device)
netG.eval()
with torch.no_grad():
tmp = 0
while tmp < NFAKE:
# print(tmp)
z = torch.randn(batch_size,dim_latent_space,dtype=float).to(device)
y = np.ones(batch_size) * label
y = torch.from_numpy(y).type(torch.float).view(-1,1).to(device)
batch_fake_samples = netG(z.float(),y.float())
batch_fake_samples = batch_fake_samples.view(-1,output_shape[0],output_shape[1])
fake_samples[tmp:(tmp+batch_size)] = batch_fake_samples.cpu().detach().numpy()
tmp += batch_size
fake_samples = fake_samples[0:NFAKE]
fake_labels = np.ones(NFAKE) * label
mass_pair_str = "%04.0f,%1.2f"%(label,second_label)
out_file_fstr = PROJECT_DIR+"/out/%s/gaus_%iepc_%s.%s"
if path is not None:
for i in range(batch_size):
fake_sample = fake_samples[i]
# print(fake_sample)
# out_file_jpg = out_file_fstr%("gen_jpg",mass_pair_str,"jpg")
out_file_npy = out_file_fstr%("1DGAN/gen_npy",int(n_epochs),mass_pair_str+"_"+str(i),"npy")
out_file_png = out_file_fstr%("1DGAN/gen_png",int(n_epochs),mass_pair_str+"_"+str(i),"png")
np.save(out_file_npy,fake_sample,allow_pickle=False)
plt.imsave(out_file_png,fake_sample.T,cmap="gray",vmin=0.,vmax=1.,format="png",origin="lower")
return fake_samples,fake_labels
# start = timeit.default_timer()
# for epoch in range(n_epochs):
# print("------------EPOCH: " + str(epoch) + " ------------")
# netD = loss_D()
# netD.float()
# netG = loss_G()
# netG.float()
# stop = timeit.default_timer()
# time_diff = stop-start
# print("training took: " + str(time_diff) + "s\t" + str(time_diff/60) + "m\t" + str(time_diff/3600) + "h")
# gen_G(netG,5,(0.6e3)) | 36.562016 | 224 | 0.673752 |
0e097437149bdaea7346010d3bc2ee480529ffb1 | 5,634 | py | Python | tests/nnapi/specs/V1_0/conv_1_h3_w2_VALID.mod.py | periannath/ONE | 61e0bdf2bcd0bc146faef42b85d469440e162886 | [
"Apache-2.0"
] | 255 | 2020-05-22T07:45:29.000Z | 2022-03-29T23:58:22.000Z | tests/nnapi/specs/V1_0/conv_1_h3_w2_VALID.mod.py | periannath/ONE | 61e0bdf2bcd0bc146faef42b85d469440e162886 | [
"Apache-2.0"
] | 5,102 | 2020-05-22T07:48:33.000Z | 2022-03-31T23:43:39.000Z | test/cts/tool/CTSConverter/src/nn/specs/V1_0/conv_1_h3_w2_VALID.mod.py | ibelem/webml-polyfill | aaf1ba4f5357eaf6e89bf9990f5bdfb543cd2bc2 | [
"Apache-2.0"
] | 120 | 2020-05-22T07:51:08.000Z | 2022-02-16T19:08:05.000Z | model = Model()
i4 = Int32Scalar("b4", 2)
i5 = Int32Scalar("b5", 1)
i6 = Int32Scalar("b6", 1)
i7 = Int32Scalar("b7", 0)
i2 = Input("op2", "TENSOR_FLOAT32", "{1, 8, 8, 3}") # input 0
i3 = Output("op3", "TENSOR_FLOAT32", "{1, 6, 7, 1}") # output 0
i0 = Parameter("op0", "TENSOR_FLOAT32", "{1, 3, 2, 3}", [-0.966213, -0.467474, -0.82203, -0.579455, 0.0278809, -0.79946, -0.684259, 0.563238, 0.37289, 0.738216, 0.386045, -0.917775, 0.184325, -0.270568, 0.82236, 0.0973683, -0.941308, -0.144706]) # parameters
i1 = Parameter("op1", "TENSOR_FLOAT32", "{1}", [0]) # parameters
model = model.Operation("CONV_2D", i2, i0, i1, i4, i5, i6, i7).To(i3)
input0 = {i2: [-0.869931, 0.644628, -0.918393, 0.153672, 0.868562, -0.358177, -0.134931, -0.247565, 0.22174, -0.259157, -0.284296, -0.538065, 0.765559, 0.41986, -0.556241, 0.658494, 0.214355, -0.850169, -0.252893, -0.478935, 0.530526, -0.0700663, -0.988729, -0.303061, 0.150845, 0.829915, 0.476349, 0.406537, -0.355343, 0.757145, -0.356362, 0.800482, -0.713861, 0.210483, -0.634303, 0.718236, -0.752038, 0.457547, -0.550769, -0.551178, 0.446766, -0.227462, 0.216348, -0.852806, -0.351486, 0.55906, -0.668493, -0.303493, -0.363763, -0.162837, 0.0701012, 0.756097, -0.142269, 0.329724, -0.656317, -0.998086, -0.652949, -0.40316, -0.893682, 0.432744, 0.612362, -0.869588, -0.71327, -0.398092, -0.0423559, 0.436576, -0.925272, 0.176549, 0.822904, 0.096833, -0.296802, -0.427195, 0.031654, -0.254479, 0.244905, 0.0948254, 0.643769, -0.90391, 0.352665, -0.901179, 0.266159, -0.968068, -0.615401, -0.388975, 0.939052, -0.116289, 0.107523, -0.0582711, 0.435172, 0.334675, 0.459711, 0.717436, 0.496627, -0.680175, -0.415066, 0.339848, 0.506004, -0.337808, -0.107218, -0.172496, 0.870638, 0.931872, -0.953884, 0.903042, 0.760078, 0.209727, -0.285384, -0.45514, 0.113194, 0.0756611, 0.0924435, -0.472863, 0.960609, -0.160385, -0.839445, 0.457097, 0.163348, 0.344867, -0.131619, 0.688715, -0.540827, 0.571259, -0.95587, 0.506164, -0.155839, 0.0789621, 0.756772, -0.662069, 0.242908, 0.460821, 0.177872, -0.289839, -0.640603, 0.702598, -0.506406, -0.568262, -0.0713716, 0.413792, 0.159673, -0.305208, 0.133816, -0.160254, 0.787323, -0.753244, 0.600721, 0.263186, -0.162387, 0.477962, -0.702951, -0.731036, -0.939481, -0.524519, 0.934072, -0.511637, -0.503499, 0.106236, -0.323684, 0.534444, -0.843745, 0.364171, 0.0370358, -0.168801, -0.404559, -0.814178, 0.91745, -0.334276, 0.66925, -0.801201, 0.156511, -0.427949, 0.379153, 0.818597, -0.649902, 0.427087, -0.586015, -0.559789, -0.833923, 0.0892409, -0.621251, 0.213826, 0.465509, 0.4704, 0.380261, 0.413067, 0.180822, 0.172866, 0.59614, 0.825575, 0.662916, 
-0.704381, -0.297631, 0.697778]}
output0 = {i3: [1.72003, 1.55816, 0.667546, 2.23663, 0.0661516, 0.290254, 0.770222, -1.58197, -0.850595, -0.484224, 0.949967, -0.577263, -0.871949, 2.34132, -0.135965, -0.985713, 0.815147, 1.03114, -1.41915, -0.515534, -0.373639, -1.50604, 0.673113, 3.06139, -0.388578, -1.76707, -0.315667, -1.03815, 0.432787, -1.41643, 1.12944, -0.175806, -0.846415, 1.40095, 0.70832, 2.19562, -2.61266, -0.705383, 1.26124, 1.46545, -2.35761, 2.04494, ]}
input1 = {i2: [-0.295335, -0.00387601, -0.552251, 0.166084, -0.28482, -0.152143, -0.719885, -0.869386, -0.745598, 0.823947, 0.473183, -0.331337, 0.187631, 0.0426571, -0.826897, -0.755085, -0.472453, -0.0233656, 0.0483436, 0.933418, -0.961974, 0.0125783, 0.219742, 0.342604, -0.15166, 0.0934905, 0.783221, 0.129664, 0.838844, -0.271388, 0.924519, 0.342843, 0.274418, 0.350817, 0.841638, -0.543993, -0.00283395, -0.128467, -0.682943, -0.319117, 0.84634, 0.283003, 0.32865, 0.0293755, -0.0335696, 0.591266, -0.0743476, -0.741271, 0.462056, -0.583625, -0.590183, 0.6234, 0.535269, -0.670818, -0.955642, -0.770173, 0.479986, 0.664377, 0.399445, -0.968874, -0.276263, -0.901951, 0.544104, -0.958981, 0.482658, -0.807284, 0.305369, -0.947818, 0.827498, -0.382887, -0.805741, -0.796678, -0.299804, -0.229828, 0.818783, -0.103055, -0.45568, -0.227827, 0.543743, -0.96073, 0.946747, -0.857182, -0.96426, -0.292411, -0.715614, 0.765278, -0.475043, -0.590142, -0.238507, 0.673002, -0.473357, -0.319626, 0.936014, 0.486607, 0.580844, 0.425352, -0.800994, 0.290763, -0.494953, -0.441162, 0.718677, -0.828427, 0.96965, 7.53637e-05, -0.699973, -0.526886, -0.352682, 0.799466, 0.332789, 0.723389, 0.407659, -0.934084, -0.284705, 0.961484, -0.700395, -0.985808, -0.595342, -0.691721, 0.49448, -0.0842649, 0.0390966, 0.298938, -0.128094, -0.97158, 0.86393, 0.270606, -0.468986, -0.256605, 0.47215, -0.273117, -0.590343, -0.826529, -0.725381, -0.194821, -0.259661, -0.0949207, -0.180302, 0.0446834, -0.222133, -0.40393, 0.295772, -0.92949, 0.580079, -0.169856, 0.330311, 0.0173551, -0.635823, 0.475942, 0.907175, 0.242777, -0.512208, 0.362463, 0.0496289, 0.65171, 0.990057, 0.690733, -0.469013, -0.101311, -0.68372, -0.157841, -0.677711, -0.708224, -0.659437, -0.407607, 0.677033, 0.89032, 0.228307, -0.749514, 0.772958, 0.054701, 0.551705, 0.917052, -0.895022, -0.702397, 0.484142, 0.108648, 0.833347, 0.478872, -0.984112, 0.387176, -0.73299, 0.7526, 0.443312, -0.0987856, 0.125415, 0.10876, -0.498108, 0.43209, 
0.344609, 0.928941, -0.130732, -0.0569167]}
output1 = {i3: [1.28735, 1.91315, 2.51734, 0.375841, 0.637563, 2.653, 2.72959, 1.17389, -2.12119, 2.91417, -2.24246, 0.0497045, -0.127107, -0.144473, -0.393284, -2.02346, -0.239178, -0.246508, 1.29277, 1.32963, 0.117521, 0.0665713, 1.09438, -1.31426, 2.52594, -0.969211, 0.515478, -1.60926, 0.135211, 0.786415, -1.14382, -0.739102, -1.01731, 0.281615, 2.36311, 1.93872, -0.150491, 3.45217, 2.28219, 1.18282, -2.25086, 3.05468]}
Example((input0, output0))
Example((input1, output1))
| 256.090909 | 2,036 | 0.65584 |
0b94b5ac655d7f0cab98a70df3150b7b18070e2a | 885 | py | Python | jubox/utils/accessor.py | Miksus/jubox | daaf1e223e0a7c0a3bf9ae03b88d629c0f99d4d5 | [
"MIT"
] | 1 | 2020-04-26T05:18:45.000Z | 2020-04-26T05:18:45.000Z | jubox/utils/accessor.py | Miksus/jubox | daaf1e223e0a7c0a3bf9ae03b88d629c0f99d4d5 | [
"MIT"
] | null | null | null | jubox/utils/accessor.py | Miksus/jubox | daaf1e223e0a7c0a3bf9ae03b88d629c0f99d4d5 | [
"MIT"
] | null | null | null |
class Accessor:
def __init__(self, cls):
self.cls = cls
def __get__(self, instance, owner):
# instance.self
# where self is CLASS attribute of owner
# and instance is instance of owner class
return self.cls(instance)
def __set__(self, instance, value):
# instance.self = value
# where self is CLASS attribute
return self.cls(instance).__set__(instance, value)
def __delete__(self, instance):
# del instance.self
# where self is CLASS attribute
return self.cls(instance).__delete__(instance)
def register_accessor(cls_parent, name):
# Inspiration: https://github.com/pandas-dev/pandas/blob/c21be0562a33d149b62735fc82aff80e4d5942f5/pandas/core/accessor.py#L197
def wrapper(cls):
setattr(cls_parent, name, Accessor(cls))
return cls
return wrapper | 27.65625 | 130 | 0.665537 |
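The accessor/descriptor pattern above can be exercised with a minimal, self-contained sketch; the `Notebook` and `StatsAccessor` names are hypothetical, and the descriptor is restated so the snippet runs on its own:

```python
class Accessor:                      # restated here to keep the sketch runnable
    def __init__(self, cls):
        self.cls = cls
    def __get__(self, instance, owner):
        # Each attribute access wraps the owning instance in the accessor class.
        return self.cls(instance)

def register_accessor(cls_parent, name):
    def wrapper(cls):
        setattr(cls_parent, name, Accessor(cls))
        return cls
    return wrapper

class Notebook:                      # hypothetical host class
    def __init__(self, cells):
        self.cells = cells

@register_accessor(Notebook, "stats")
class StatsAccessor:                 # hypothetical accessor class
    def __init__(self, notebook):
        self.notebook = notebook
    def n_cells(self):
        return len(self.notebook.cells)

nb = Notebook(["a", "b", "c"])
print(nb.stats.n_cells())            # -> 3
```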
f621cc3a49a42356e2d4dfd7db4c3194a7744a34 | 2,001 | py | Python | build.py | DevGev/Lynx | 11ceddf63ee5e863b89aa1ff56ba9edd1bfecb07 | [
"Unlicense"
] | 9 | 2020-12-14T18:37:10.000Z | 2021-06-18T08:06:18.000Z | build.py | DevGev/Lynx | 11ceddf63ee5e863b89aa1ff56ba9edd1bfecb07 | [
"Unlicense"
] | 1 | 2021-07-15T09:07:45.000Z | 2021-12-12T20:21:21.000Z | build.py | DevGev/Lynx | 11ceddf63ee5e863b89aa1ff56ba9edd1bfecb07 | [
"Unlicense"
] | 1 | 2021-03-18T22:01:01.000Z | 2021-03-18T22:01:01.000Z | import PyInstaller.__main__
import platform
import shutil
import os
import zipfile
import subprocess
import json
import sys
def gversion(path):
with open(path) as f:
data = json.load(f)
if "{git-commit-hash}" in data["package"]["version"]:
lv = subprocess.check_output(["git", "describe", "--always"]).strip()
return data["package"]["version"].replace(
"{git-commit-hash}", lv.decode()
)
return data["package"]["version"]
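The version-templating step in `gversion` can be sketched without shelling out to git; the `commit` default below is a stand-in for the `git describe --always` output:

```python
def resolve_version(raw_version, commit="deadbee"):
    # "{git-commit-hash}" placeholders are filled with the current commit;
    # `commit` here is a stand-in rather than real `git describe` output.
    if "{git-commit-hash}" in raw_version:
        return raw_version.replace("{git-commit-hash}", commit)
    return raw_version

print(resolve_version("1.2.0+{git-commit-hash}"))  # -> 1.2.0+deadbee
print(resolve_version("1.2.0"))                    # -> 1.2.0
```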
def package(version, source, destination, profile):
with open(source) as f:
data = json.load(f)
data["package"]["profile"] = profile
data["package"]["version"] = version
with open(destination, "w") as f:
json.dump(data, f)
def zipdir(path, ziph):
# ziph is zipfile handle
for root, dirs, files in os.walk(path):
for file in files:
ziph.write(os.path.join(root, file))
print(
"Building Lynx for",
platform.system(),
"estimated build time: ~2 min, Continue? [Y]/[n]",
end=": ",
)
answer = input()
if str(answer) == "n":
    sys.exit()
version = gversion("lynx/lynx.json")
zips = "Lynx " + str(platform.system()) + " " + version + ".zip"
fname = "Lynx"
os.mkdir(fname)
PyInstaller.__main__.run(
["lynx/main.py", "--noconsole", "--onefile", "-ilynx-profile/themes/icons/logo.ico", "--hidden-import=PyQt5.sip"]
)
# Rename Executable
if platform.system() == "Linux":
shutil.move("dist/main", "./" + fname + "/Lynx")
if platform.system() == "Windows":
shutil.move("dist/main.exe", "./" + fname + "/Lynx.exe")
# Copy Files
shutil.copytree("lynx-profile", "./" + fname + "/lynx-profile")
# Write Package Info
package(version, "lynx/lynx.json", "./" + fname + "/lynx.json", "lynx-profile/")
# Create Zip
zipf = zipfile.ZipFile(zips, "w", zipfile.ZIP_DEFLATED, compresslevel=1)
zipdir(fname, zipf)
zipf.close()
# Remove Temporary Directories
shutil.rmtree(fname)
shutil.rmtree("dist")
shutil.rmtree("build")
os.remove("main.spec")
| 24.108434 | 117 | 0.630685 |
478caa27a41376d8e17229fd4ccd59500d05a13e | 10,149 | py | Python | app/scrape/scrape_to_files.py | AndrewCMartin/marvelus | acb2d075f2f0e010ade7c842c6c3bb80b1895828 | [
"MIT"
] | null | null | null | app/scrape/scrape_to_files.py | AndrewCMartin/marvelus | acb2d075f2f0e010ade7c842c6c3bb80b1895828 | [
"MIT"
] | null | null | null | app/scrape/scrape_to_files.py | AndrewCMartin/marvelus | acb2d075f2f0e010ade7c842c6c3bb80b1895828 | [
"MIT"
] | null | null | null | import hashlib
import json
import os
import time
import requests
from app import db
from app.models import TvShow, Movie, Actor, ComicSeries, Character, Event
from app.scrape import tmdb
from config import BASE_DIR
models_list = [Actor, Character, ComicSeries, Event, Movie, TvShow]
DATA_DIR = os.path.join(BASE_DIR, "app", "data")
k_priv = 'fdf9c8bc5c83cbe565fdd6ddc4df9d0fb1e38a83'
k_pub = '7f39855a0661b5fe55f842d7afa8cd9f'
def compute_hash():
m = hashlib.md5()
ts = str(time.time())
m.update(ts.encode('utf-8'))
m.update(k_priv.encode('utf-8'))
m.update(k_pub.encode('utf-8'))
h = m.hexdigest()
return (ts, h)
def marvel_get(endpoint, params=None):
ts, h = compute_hash()
base_url = 'https://gateway.marvel.com/v1/public/'
data = {'ts': ts, 'hash': h, 'apikey': k_pub}
if params:
data.update(params)
return requests.get(base_url + endpoint, params=data)
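The authentication scheme implemented by `compute_hash`/`marvel_get` — an md5 of timestamp + private key + public key — can be sketched standalone; the keys below are dummies, not real credentials:

```python
import hashlib

def marvel_auth_params(ts, private_key, public_key):
    # The Marvel API expects hash = md5(ts + privateKey + publicKey),
    # sent alongside ts and the public key.
    digest = hashlib.md5((ts + private_key + public_key).encode("utf-8")).hexdigest()
    return {"ts": ts, "apikey": public_key, "hash": digest}

# Dummy keys for illustration only.
params = marvel_auth_params("1", "abcd", "1234")
print(len(params["hash"]))  # -> 32
```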
def get_movie_data():
print("Scraping Movies")
movies_list = []
movie_ids = set()
marvel = tmdb.Companies(420)
result = marvel.movies()
total_pages = result["total_pages"]
for page in range(1, total_pages + 1):
result = marvel.movies(page=page)
for m in result["results"]:
movie_ids.add(m["id"])
for m_id in movie_ids:
time.sleep(0.04)
movie = tmdb.Movies(m_id)
info = movie.info(append_to_response="credits")
movies_list.append(info)
return movies_list
def get_tvshow_data():
print("Scraping TV Shows")
tv_ids = set()
s = tmdb.Search()
r = s.tv(query="marvel\'s")
for t in r["results"]:
tv_ids.add(t["id"])
for page in range(2, r["total_pages"] + 1):
r = s.tv(query="marvel\'s", page=page)
for t in r["results"]:
tv_ids.add(t["id"])
tv_info = []
for tv_id in tv_ids:
time.sleep(0.04)
tv = tmdb.TV(tv_id)
info = tv.info(append_to_response="credits")
tv_info.append(info)
return tv_info
def get_actor_data(*m):
print("Scraping Actor data (must first get movies and tv shows)")
actor_ids = set()
for l in m:
for movie in l:
cast = movie["credits"]["cast"]
for p in cast:
if p["order"] <= 20:
actor_ids.add(p["id"])
actor_data = []
for actor_id in actor_ids:
time.sleep(0.05)
actor = tmdb.People(actor_id)
info = actor.info()
actor_data.append(info)
return actor_data
def marvel_get_all(endpoint, chunk_size=20):
print("Scraping all " + endpoint)
data = []
r = marvel_get(endpoint)
d = json.loads(r.text)
total = d["data"]["total"]
for offset in range(0, total, chunk_size):
print("Fetching " + str(offset) + " of " + str(total))
r = marvel_get(endpoint, {'offset': str(offset), 'limit': str(chunk_size)})
d = json.loads(r.text)
if len(d['data']['results']) == 0:
break
data += d['data']['results']
return data
def save_results(**kwargs):
for file_name in kwargs:
data = json.dumps(kwargs[file_name])
if os.path.exists(os.path.join(DATA_DIR, file_name + '.json')):
os.remove(os.path.join(DATA_DIR, file_name + '.json'))
with open(os.path.join(DATA_DIR, file_name + '.json'), "w+") as f:
f.write(data)
print ("Saved file " + str(os.path.join(DATA_DIR, file_name + '.json')))
def scrape_all_and_save():
m = get_movie_data()
t = get_tvshow_data()
a = get_actor_data(m, t)
e = marvel_get_all('events')
s = marvel_get_all('series')
c = marvel_get_all('characters')
save_results(movies=m, actors=a, tvshows=t, events=e, series=s, characters=c)
def load_results(*models):
files = [f.split('.')[0] for f in os.listdir(DATA_DIR) if f.endswith(".json")]
if models:
files = [f for f in files if f in models]
results = {}
for file_name in files:
with open(os.path.join(DATA_DIR, file_name + '.json')) as f:
data = json.loads(f.read())
results[file_name] = data
return results
def add_characters(characters):
for c in characters:
path = c['thumbnail']['path']
if path == 'http://i.annihil.us/u/prod/marvel/i/mg/f/60/4c002e0305708.gif':
continue
if 'image_not_available' in path.split('/'):
continue
path = path + '.' + c['thumbnail']['extension']
stories = ", ".join([s["name"] for s in c['stories']['items']])
new_character = Character(c["id"], c["name"], c["description"][:1500], path, stories)
db.session.merge(new_character)
db.session.commit()
print("Added " + c["name"])
def add_actors(actors):
for a in actors:
if 'birthday' not in a or not a['birthday'] or len(a['birthday']) < 8:
continue
actor = Actor(a["id"], a["name"], a["birthday"], a["biography"][:3000], a["profile_path"])
db.session.merge(actor)
db.session.commit()
print("Added " + a["name"])
def get_character(character_name):
character_name = character_name.replace('(voice)', '')
character_name = character_name.replace('\"', '')
character_name = character_name.replace(' ', ' ')
character_name = character_name.replace('The', '')
character_name = character_name.replace('Dr. Stephen Strange', 'Doctor Strange')
if character_name.strip() == '':
return None
split_by_slash = [s.strip() for s in character_name.split('/')]
# Check full name split by slash
for s in reversed(split_by_slash):
c = Character.query.filter(Character.name.ilike(s)).first()
if c:
return c
# Check part of name from split by slash
for s in reversed(split_by_slash):
c = Character.query.filter(Character.name.ilike('%' + s + '%')).first()
if c:
return c
# Split each word
split_by_space = [s.strip() for s in character_name.split(' ')]
for s in split_by_space:
c = Character.query.filter(Character.name.ilike(s)).first()
if c:
return c
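The matching cascade in `get_character` (strip noise tokens, exact match on slash-separated parts, then substring match) can be sketched against a plain list instead of the SQLAlchemy query; the names and helper below are illustrative, not the app's database:

```python
def match_character(raw, known):
    # Strip the same noise tokens get_character() strips, then run the
    # cascade: exact match on slash-separated parts, then substring match.
    raw = raw.replace("(voice)", "").replace('"', "").strip()
    parts = [p.strip() for p in raw.split("/")]
    for p in reversed(parts):                      # exact match first
        for name in known:
            if name.lower() == p.lower():
                return name
    for p in reversed(parts):                      # then substring match
        for name in known:
            if p and p.lower() in name.lower():
                return name
    return None

known = ["Iron Man", "Doctor Strange", "Black Widow"]
print(match_character("Tony Stark / Iron Man", known))  # -> Iron Man
print(match_character("Strange", known))                # -> Doctor Strange
```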
def add_movies(movies):
for m in movies:
movie = Movie(m["id"], m["title"], m["overview"], m["adult"], m["poster_path"], m["runtime"], m["release_date"],
m["original_language"], m["vote_average"])
for c in m["credits"]["cast"]:
character_name = c["character"]
if c["order"] > 20:
continue
character = get_character(character_name)
if (character):
movie.characters.append(character)
actor_id = c["id"]
actor = Actor.query.filter_by(id=actor_id).first()
if actor:
movie.actors.append(actor)
if actor and character:
actor.characters.append(character)
db.session.merge(movie)
db.session.commit()
print ("Added " + m["title"])
def add_tvshows(tvshows):
for t in tvshows:
tvshow = TvShow(t["id"], t["name"], t["overview"], t["poster_path"], t["last_air_date"],
t["vote_average"], t["number_of_seasons"], t["number_of_episodes"])
for c in t["credits"]["cast"]:
character_name = c["character"]
if c["order"] > 20:
continue
character = get_character(character_name)
if (character):
tvshow.characters.append(character)
actor_id = c["id"]
actor = Actor.query.filter_by(id=actor_id).first()
if actor:
tvshow.actors.append(actor)
if actor and character:
actor.characters.append(character)
db.session.merge(tvshow)
try:
db.session.commit()
except Exception:
db.session.rollback()
print ("Added " + t["name"])
def add_events(events):
for e in events:
path = e['thumbnail']['path']
if path == 'http://i.annihil.us/u/prod/marvel/i/mg/f/60/4c002e0305708.gif':
continue
if 'image_not_available' in path.split('/'):
continue
if not e['start']:
continue
path = path + '.' + e['thumbnail']['extension']
event = Event(e["id"], e["title"], e["description"], path, e["start"][:4], "")
for c in e["characters"]["items"]:
char_id = int(c['resourceURI'].split('/')[-1])
character = Character.query.filter_by(id=char_id).first()
if character:
event.characters.append(character)
db.session.merge(event)
db.session.commit()
print("Added " + e["title"])
def add_series(series):
for s in series:
# print (s)
if ComicSeries.query.filter_by(id=s["id"]).first():
continue
path = s['thumbnail']['path']
if path == 'http://i.annihil.us/u/prod/marvel/i/mg/f/60/4c002e0305708.gif':
continue
if 'image_not_available' in path.split('/'):
continue
path = path + '.' + s['thumbnail']['extension']
series = ComicSeries(s["id"], s["title"], s["description"], path, s["startYear"], s["endYear"])
for c in s["characters"]["items"]:
char_id = int(c['resourceURI'].split('/')[-1])
character = Character.query.filter_by(id=char_id).first()
if character:
series.characters.append(character)
for e in s["events"]["items"]:
e_id = int(e['resourceURI'].split('/')[-1])
event = Event.query.filter_by(id=e_id).first()
if event:
series.events.append(event)
db.session.merge(series)
db.session.commit()
print("Added " + s["title"])
def load_all():
r = load_results()
# add_characters(r["characters"])
# add_actors(r["actors"])
# add_movies(r["movies"])
# add_tvshows(r["tvshows"])
add_events(r["events"])
add_series(r["series"])
if __name__ == '__main__':
load_all()
| 32.015773 | 120 | 0.57385 |
f24ffb45b028fa02634e409e0b8c0c85efdb9e27 | 4,951 | py | Python | Awwwards/settings.py | imekenye/Awwwards | bb495e091c8a81aff172528d233c41c36fc1d2aa | [
"Unlicense"
] | null | null | null | Awwwards/settings.py | imekenye/Awwwards | bb495e091c8a81aff172528d233c41c36fc1d2aa | [
"Unlicense"
] | 12 | 2020-02-12T00:22:15.000Z | 2022-03-11T23:47:32.000Z | Awwwards/settings.py | imekenye/Awwwards | bb495e091c8a81aff172528d233c41c36fc1d2aa | [
"Unlicense"
] | null | null | null | """
Django settings for Awwwards project.
Generated by 'django-admin startproject' using Django 2.2.1.
For more information on this file, see
https://docs.djangoproject.com/en/2.2/topics/settings/
For the full list of settings and their values, see
https://docs.djangoproject.com/en/2.2/ref/settings/
"""
import os
import django_heroku
import dj_database_url
from decouple import config,Csv
MODE = config("MODE", default="dev")
SECRET_KEY = config('SECRET_KEY')
DEBUG = config('DEBUG', default=False, cast=bool)
# development
if config('MODE') == "dev":
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.postgresql_psycopg2',
'NAME': config('DB_NAME'),
'USER': config('DB_USER'),
'PASSWORD': config('DB_PASSWORD'),
'HOST': config('DB_HOST'),
'PORT': '',
}
}
# production
else:
DATABASES = {
'default': dj_database_url.config(
default=config('DATABASE_URL')
)
}
db_from_env = dj_database_url.config(conn_max_age=500)
DATABASES['default'].update(db_from_env)
ALLOWED_HOSTS = config('ALLOWED_HOSTS', cast=Csv())
# Build paths inside the project like this: os.path.join(BASE_DIR, ...)
BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
# Quick-start development settings - unsuitable for production
# See https://docs.djangoproject.com/en/2.2/howto/deployment/checklist/
# SECURITY WARNING: keep the secret key used in production secret!
# SECRET_KEY = 'xp$)4&qat_g1)%txik6n+&&wh@rja@r&u#+ll3^=umgf@o@*f$'
# SECURITY WARNING: don't run with debug turned on in production!
# DEBUG and ALLOWED_HOSTS are already read from the environment via
# python-decouple above; a hard-coded DEBUG = True here would silently
# override that configuration.
# Application definition
INSTALLED_APPS = [
'projects.apps.ProjectsConfig',
'accounts.apps.AccountsConfig',
'crispy_forms',
'django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
'rest_framework',
]
MIDDLEWARE = [
'django.middleware.security.SecurityMiddleware',
'django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.common.CommonMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
'django.middleware.clickjacking.XFrameOptionsMiddleware',
'whitenoise.middleware.WhiteNoiseMiddleware',
]
ROOT_URLCONF = 'Awwwards.urls'
TEMPLATES = [
{
'BACKEND': 'django.template.backends.django.DjangoTemplates',
'DIRS': [os.path.join(BASE_DIR, 'templates'),
os.path.join(BASE_DIR, 'accounts/templates'), ]
,
'APP_DIRS': True,
'OPTIONS': {
'context_processors': [
'django.template.context_processors.debug',
'django.template.context_processors.request',
'django.contrib.auth.context_processors.auth',
'django.contrib.messages.context_processors.messages',
],
},
},
]
WSGI_APPLICATION = 'Awwwards.wsgi.application'
# Database
# https://docs.djangoproject.com/en/2.2/ref/settings/#databases
# DATABASES = {
# 'default': {
# 'ENGINE': 'django.db.backends.postgresql',
# 'NAME': 'projectsdb',
# 'USER': 'studiopx',
# 'PASSWORD': '',
# 'HOST': 'localhost',
# 'PORT': '5432',
# }
# }
# Password validation
# https://docs.djangoproject.com/en/2.2/ref/settings/#auth-password-validators
AUTH_PASSWORD_VALIDATORS = [
{
'NAME': 'django.contrib.auth.password_validation.UserAttributeSimilarityValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.MinimumLengthValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.CommonPasswordValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.NumericPasswordValidator',
},
]
# Internationalization
# https://docs.djangoproject.com/en/2.2/topics/i18n/
LANGUAGE_CODE = 'en-us'
TIME_ZONE = 'UTC'
USE_I18N = True
USE_L10N = True
USE_TZ = True
# Static files (CSS, JavaScript, Images)
# https://docs.djangoproject.com/en/2.2/howto/static-files/
STATICFILES_DIRS = [
os.path.join(BASE_DIR, 'Awwwards/static/'),
os.path.join(BASE_DIR, 'Awwwards/static/css/'),
os.path.join(BASE_DIR, 'Awwwards/static/js/'),
]
STATICFILES_STORAGE = 'whitenoise.storage.CompressedManifestStaticFilesStorage'
STATIC_ROOT = os.path.join(BASE_DIR, 'static')
STATIC_URL = '/static/'
CRISPY_TEMPLATE_PACK = 'bootstrap4'
LOGIN_REDIRECT_URL = 'projects-home'
LOGIN_URL = 'login'
# configuring the location for media
if DEBUG:
MEDIA_DIR = '/Users/user/Desktop/Moringa_Projects/Awwwards/'
MEDIA_URL = '/media/'
MEDIA_ROOT = os.path.join(BASE_DIR, 'media')
# Configure Django App for Heroku.
django_heroku.settings(locals())
| 27.353591 | 91 | 0.676833 |
64277f35a1cc2b331ec30df7494e342aa0bc3275 | 17,758 | py | Python | DarkCapPy/DarkPhoton.py | fliptanedo/DarkCapPy | f48e7634442801c89ee6a11139536285dcd0f23b | [
"MIT"
] | null | null | null | DarkCapPy/DarkPhoton.py | fliptanedo/DarkCapPy | f48e7634442801c89ee6a11139536285dcd0f23b | [
"MIT"
] | null | null | null | DarkCapPy/DarkPhoton.py | fliptanedo/DarkCapPy | f48e7634442801c89ee6a11139536285dcd0f23b | [
"MIT"
] | null | null | null |
################################################################
# Import Python Libraries
################################################################
import numpy as np
import scipy.integrate as integrate
import scipy.interpolate as interpolate
from DarkCapPy.Configure.Constants import *
from DarkCapPy.Configure.AtomicData import *
from DarkCapPy.Configure.PlanetData import *
from DarkCapPy.Configure.Conversions import amu2GeV
# import os
# this_dir, this_filename = os.path.split(__file__)
# DATA_PATH = os.path.join(this_dir, "brtoe.csv")
################################################################
# Capture Rate Functions
################################################################
########################
# Nuclear Form Factor
########################
def formFactor2(element, E):
'''
formFactor2(element,E)
Returns the form-factor squared of element N with recoil energy E
[E] = GeV
'''
E_N = 0.114/((atomicNumbers[element])**(5./3))
FN2 = np.exp(-E/E_N)
return FN2
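The exponential form factor above can be sanity-checked numerically; this sketch takes the mass number `A` directly instead of the `atomicNumbers` lookup:

```python
import numpy as np

def form_factor2(A, E):
    # Exponential nuclear form factor with E_N = 0.114 / A**(5/3) GeV,
    # as in formFactor2 above, but parameterized by the mass number A.
    E_N = 0.114 / (A ** (5.0 / 3.0))
    return np.exp(-E / E_N)

# Unity at zero recoil; suppressed at finite recoil energy.
print(form_factor2(16, 0.0))  # -> 1.0
```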
########################
# Photon Scattering Cross Sections
########################
def crossSection(element, m_A, E_R): # returns 1/GeV^3
'''
crossSection(element, m_A, E_R)
    Returns the differential scattering cross section for a massive dark photon
[m_A] = GeV
[E_R] = GeV
'''
m_N = amu2GeV(atomicNumbers[element])
FN2 = formFactor2(element, E_R)
function = ( FN2 ) / ((2 * m_N * E_R + m_A**2)**2)
return function
def crossSectionKappa0(element, E_R): # Dimensionless
'''
crossSectionKappa0(element, E_R)
Returns the cross section used in the kappa0 calculation
[E_R] = GeV
'''
FN2 = formFactor2(element, E_R)
function = FN2
return function
########################
# Dark Matter Velocity Distributions
########################
def normalization():
def function(u):
# The if-else structure accounts for the Heaviside function
if ((V_gal) - u < 0):
integrand = 0.
elif ( ((V_gal) - (u)) >= 0):
numerator = ((V_gal)**2 - (u)**2)
denominator = (k * (u_0)**2)
arg = ( numerator / denominator)
integrand = 4*np.pi* u**2 * (np.expm1(arg))** k
return integrand
tempA = integrate.quad(function, 0, V_gal)[0]
N_0 = 1./tempA
return N_0
# N_1 = normalization()
def normalizationChecker(u, N_0 = normalization()):
'''
normalizationChecker(u, N_0 = normalization())
    Exists only to check that the normalization constant N_0 of dMVelDist is correct:
    with the 4*pi*u**2 measure restored, the integrand should integrate to unity over [0, V_gal]
'''
if ( (V_gal - u) < 0):
integrand = 0.
elif ( (V_gal - u) >= 0):
numerator = ( (V_gal)**2 - (u)**2)
denominator = (k * (u_0)**2)
arg = ( numerator / denominator)
integrand = N_0 * 4*np.pi*u**2* (np.expm1(arg)) ** k
return integrand
def dMVelDist(u, N_0 = normalization()):
'''
    dMVelDist(u, N_0 = normalization())
Returns the fraction of DM particles with velocity u in the Galactic frame
N_0 is the normalization fixed by the function normalization
'''
# The if-else structure accounts for the Heaviside function
if ((V_gal - u) < 0):
integrand = 0
elif ((V_gal - u) >= 0):
numerator = ( (V_gal)**2 - (u)**2)
denominator = (k * (u_0)**2)
arg = ( numerator / denominator)
integrand = N_0 * (np.expm1(arg)) ** k
return integrand
########################
# Earth Frame Velocity Distribution
########################
def fCross(u):
'''
fCross(u)
Returns the fraction of DM particles with velocity u in the Earth frame
'''
def integrand(x,y): #x = cos(theta), y = cos(phi)
cosGamma = 0.51
return 0.25 * dMVelDist( ( u**2 + ((V_dot) + (V_cross*cosGamma) * y)**2 \
+ 2 * u * ((V_dot) + (V_cross*cosGamma) * y) *x)** 0.5 )
return integrate.dblquad(integrand, -1, 1, lambda y: -1, lambda y: 1)[0]
########################
# Interpolate the Velocity Distributions
########################
velRange = np.linspace(0, V_gal, 1000)
fCrossVect = []
DMVect = []
for vel in velRange:
DMVect.append(dMVelDist(vel))
fCrossVect.append(fCross(vel))
dMVelInterp = interpolate.interp1d(velRange, DMVect, kind = 'linear')
fCrossInterp = interpolate.interp1d(velRange, fCrossVect, kind='linear')
########################
# Kinematics
########################
def eMin(u, m_X):
'''
eMin(u, m_X)
    Returns the minimum kinetic energy a DM particle must lose to become gravitationally captured by the Earth
[m_X] = GeV
'''
function = (0.5) * m_X * u**2
# assert (function >=0), '(u, m_X): (%e,%e) result in a negative eMin' % (u, m_X)
return function
def eMax(element, m_X, rIndex, u):
'''
eMax(element, m_X, rIndex, u)
Returns the maximum kinetic energy allowed by the kinematics
[m_X] = GeV
rIndex specifies the index in the escape velocity array escVel2List[rIndex]
'''
m_N = amu2GeV(atomicNumbers[element])
mu = m_N*m_X / (m_N + m_X)
vCross2 = (escVel2List[rIndex])
function = 2 * mu**2 * (u**2 + vCross2) / m_N
# assert (function >= 0), '(element, m_X, rIndex, u): (%s, %e, %i, %e) result in negative eMax' %(element, m_X, rIndex, u)
return function
########################
# Intersection Velocity
########################
def EminEmaxIntersection(element, m_X, rIndex):
'''
EminEmaxIntersection(element, m_X, rIndex):
Returns the velocity uInt when eMin = eMax.
[m_X] = GeV
'''
m_N = amu2GeV(atomicNumbers[element])
mu = (m_N*m_X)/(m_N+m_X)
sqrtvCross2 = np.sqrt(escVel2List[rIndex])
# Calculate the intersection uInt of eMin and eMax given a specific rIndex
A = m_X/2.
B = 2. * mu**2 / m_N
uInt = np.sqrt( ( B ) / (A-B) ) * sqrtvCross2
return uInt
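The closed form returned by `EminEmaxIntersection` solves (m_X/2) u^2 = (2 mu^2 / m_N)(u^2 + v_esc^2) for u, giving u = sqrt(B/(A-B)) v_esc. A quick numerical check, with illustrative masses and escape velocity in natural units:

```python
import numpy as np

def u_intersect(m_N, m_X, v_esc2):
    # Solve (m_X/2) u**2 = (2 mu**2 / m_N) (u**2 + v_esc**2) for u.
    mu = m_N * m_X / (m_N + m_X)
    A = m_X / 2.0
    B = 2.0 * mu ** 2 / m_N
    return np.sqrt(B / (A - B)) * np.sqrt(v_esc2)

# Illustrative numbers (not the Earth's actual escape velocity):
m_N, m_X, v_esc2 = 16.0, 100.0, 1.0e-9
u = u_intersect(m_N, m_X, v_esc2)
mu = m_N * m_X / (m_N + m_X)
e_min = 0.5 * m_X * u ** 2
e_max = 2.0 * mu ** 2 * (u ** 2 + v_esc2) / m_N
print(np.isclose(e_min, e_max))  # -> True
```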
########################
# Photon Velocity and Energy Integration
########################
def intDuDEr(element, m_X, m_A, rIndex):
'''
    intDuDEr(element, m_X, m_A, rIndex)
Returns the velocity and recoil energy integral for dark photon scattering
[m_X] = GeV
[m_A] = GeV
'''
def integrand(E,u):
fu = fCrossInterp(u)
integrand = crossSection(element, m_A, E) * u * fu
return integrand
# Calculate the intersection uInt of eMin and eMax given a specific rIndex
uInt = EminEmaxIntersection(element, m_X, rIndex)
uLow = 0
uHigh = uInt
eLow = lambda u: eMin(u, m_X)
eHigh = lambda u: eMax(element, m_X, rIndex, u)
integral = integrate.dblquad(integrand, uLow, uHigh, eLow, eHigh)[0]
return integral
def intDuDErKappa0(element, m_X, rIndex):
'''
intDuDErKappa0(element, m_X, rIndex):
returns the velocity and recoil energy integration for dark photon scattering
used in the kappa0 calculation
[m_X] = GeV
'''
def integrand(E_R,u):
fu = fCrossInterp(u)
integrand = crossSectionKappa0(element, E_R) * u * fu
return integrand
uInt = EminEmaxIntersection(element, m_X, rIndex)
uLow = 0
uHigh = uInt
eLow = lambda u: eMin(u, m_X)
eHigh = lambda u: eMax(element, m_X, rIndex, u)
integral = integrate.dblquad(integrand, uLow, uHigh, eLow, eHigh)[0]
return integral
########################
# Sum Over Radii
########################
def sumOverR(element, m_X, m_A):
'''
sumOverR(element, m_X, m_A)
Returns the Summation over radius of the velocity and recoil energy integration
[m_X] = GeV
[m_A] = GeV
'''
tempSum = 0
for i in range(0, len(radiusList)):
r = radiusList[i]
deltaR = deltaRList[i]
n_N = numDensityList(element)[i]
summand = n_N * r**2 * intDuDEr(element, m_X, m_A, i) * deltaR
tempSum += summand
return tempSum
def sumOverRKappa0(element, m_X):
'''
sumOverR(element, m_X, m_A)
Returns the Summation over radius of the velocity and recoil energy integration
used in the kappa0 calculation
[m_X] = GeV
[m_A] = GeV
'''
tempSum = 0
for i in range(0,len(radiusList)):
r = radiusList[i]
deltaR = deltaRList[i]
n_N = numDensityList(element)[i]
summand = n_N * r**2 * intDuDErKappa0(element, m_X, i) * deltaR
tempSum += summand
return tempSum
########################
# Single Element Capture Rate
########################
def singleElementCap(element, m_X, m_A, epsilon, alpha, alpha_X):
'''
singleElementCap(element, m_X, m_A, epsilon, alpha, alpha_X)
Returns the capture rate due to a single element for the specified parameters
[m_X] = GeV
[m_A] = GeV
'''
Z_N = nProtons[element]
m_N = amu2GeV(atomicNumbers[element])
n_X = 0.3/m_X # GeV/cm^3
conversion = (5.06e13)**-3 * (1.52e24) # (cm^-3)(GeV^-2) -> s^-1
prefactors = (4*np.pi)**2
crossSectionFactors = 2 * (4*np.pi) * epsilon**2 * alpha_X * alpha * Z_N**2 * m_N
function = n_X * conversion* crossSectionFactors* prefactors * sumOverR(element, m_X, m_A)
return function
def singleElementCapKappa0(element, m_X, alpha):
'''
singleElementCapKappa0(element, m_X, alpha):
Returns a single kappa0 value for 'element' and the specified parameters
[m_X] = GeV
'''
Z_N = nProtons[element]
m_N = amu2GeV(atomicNumbers[element])
n_X = 0.3/m_X # 1/cm^3
conversion = (5.06e13)**-3 * (1.52e24) # cm^-3 GeV^-2 -> s^-1
crossSectionFactors = 2 * (4*np.pi) * alpha * Z_N**2 * m_N
prefactor = (4*np.pi)**2
function = n_X * conversion * prefactor * crossSectionFactors * sumOverRKappa0(element, m_X)
return function
########################
# Full Capture Rate
########################
def cCap(m_X, m_A, epsilon, alpha, alpha_X):
'''
cCap(m_X, m_A, epsilon, alpha, alpha_X)
returns the full capture rate in sec^-1 for the specified parameters
Note: This calculation is the "dumb" way to do this, as for every point in (m_A, epsilon) space,
you must calculate this quantity
[m_X] = GeV
[m_A] = GeV
'''
totalCap = 0
for element in elementList:
totalCap += singleElementCap(element, m_X, m_A, epsilon, alpha, alpha_X)
return totalCap
########################
# Kappa0
########################
def kappa_0(m_X, alpha):
'''
kappa_0(m_X, alpha)
Returns the kappa0 value for m_X and alpha
[m_X] = GeV
    This function encodes how the capture rate depends on m_X and alpha
'''
tempSum = 0
for element in elementList:
function = singleElementCapKappa0(element, m_X, alpha)
tempSum += function
return tempSum
########################
# Capture Rate the quick way
########################
def cCapQuick(m_X, m_A, epsilon, alpha_X, kappa0):
'''
cCapQuick(m_X, m_A, epsilon, alpha_X, kappa0):
Returns the Capture rate in a much more computationally efficient way
[m_X] = GeV
[m_A] = GeV
Provides a quick way to calculate the capture rate when only m_A and epsilon are changing.
    All the m_X dependence, which is assumed to be fixed, is contained in kappa0
'''
function = epsilon**2 * alpha_X * kappa0 / m_A**4
return function
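Because all of the m_X dependence is folded into kappa0, the quick form makes the parameter scaling explicit: at fixed m_X, the capture rate goes as epsilon**2 * alpha_X / m_A**4. A toy check of the quartic mediator-mass suppression (the kappa0 value is made up for illustration, not one computed by `kappa_0`):

```python
def c_cap_quick(m_A, epsilon, alpha_X, kappa0):
    """Same factorized form as cCapQuick above."""
    return epsilon**2 * alpha_X * kappa0 / m_A**4

kappa0 = 1.0e40    # hypothetical value, [GeV^5]
alpha_X = 0.02
base = c_cap_quick(m_A=1.0, epsilon=1e-8, alpha_X=alpha_X, kappa0=kappa0)
# Doubling the mediator mass suppresses capture by 2**4 = 16.
assert abs(c_cap_quick(2.0, 1e-8, alpha_X, kappa0) / base - 1 / 16) < 1e-12
```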
################################################################
# Thermal Relic
################################################################
def alphaTherm(m_X, m_A):
'''
alphaTherm(m_X,m_A)
This function sets alpha_X given the dark matter relic abundance
'''
conversion = (5.06e13)**3/ (1.52e24) # cm^3 Sec -> GeV^-2
thermAvgSigmaV = 2.2e-26 # cm^3/s from ArXiV: 1602.01465v3 between eqns (4) and (5)
function = conversion * thermAvgSigmaV * (m_X**2/np.pi) \
* (1 - 0.5*(m_A/m_X)**2)**2 / ((1 - (m_A/m_X)**2)**(3./2))
return np.sqrt(function)
# Thermal Relic for m_X >> m_A Approximation
def alphaThermApprox(m_X):
'''
alphaThermApprox(m_X)
    This function sets alpha_X given the dark matter relic abundance in the m_X >> m_A limit
'''
conversion = (5.06e13)**3/ (1.52e24) # cm^3 Sec -> GeV^-2
thermAvgSigmaV = 2.2e-26 # cm^3/s from ArXiV: 1602.01465v3 between eqns (4) and (5)
    function = conversion * thermAvgSigmaV * (m_X**2/np.pi)
return np.sqrt(function)
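As a sanity check on the relic-abundance condition, here is a self-contained version of the m_X >> m_A limit, with the cm^3/s -> GeV^-2 conversion applied once (mirroring `alphaTherm`). The 1 TeV test point is arbitrary:

```python
import math

conversion = (5.06e13)**3 / (1.52e24)   # cm^3/s -> GeV^-2
therm_avg_sigma_v = 2.2e-26             # cm^3/s

def alpha_therm_approx(m_X):
    """alpha_X = sqrt(<sigma v> * m_X**2 / pi) in the m_X >> m_A limit."""
    return math.sqrt(conversion * therm_avg_sigma_v * m_X**2 / math.pi)

# For weak-scale dark matter the resulting coupling stays perturbative.
alpha = alpha_therm_approx(1000.0)      # m_X = 1 TeV
assert 0.0 < alpha < 0.1
```

The coupling grows linearly with m_X, so heavier dark matter needs a larger alpha_X to annihilate efficiently enough.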
################################################################
# Annihilation Rate Functions
################################################################
########################
# V0 at center of Earth
########################
def v0func(m_X):
'''
v0func(m_X)
Returns the typical velocity of a dark matter particle with mass m_X at the center of the Earth
[m_X] = GeV
'''
return np.sqrt(2*TCross/m_X)
########################
# Tree-level annihilation cross section
########################
def sigmaVtree(m_X, m_A, alpha_X):
'''
sigmaVtree(m_X, m_A, alpha_X)
Returns the tree-level annihilation cross section for massive dark photons fixed by relic abundance
[m_X] = GeV
[m_A] = GeV
'''
numerator = (1 - (m_A/m_X)**2)**1.5
denominator = ( 1 - 0.5 * (m_A/m_X)**2 )**2
prefactor = np.pi*(alpha_X/m_X)**2
function = prefactor * numerator/denominator
return function
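The (1 - (m_A/m_X)**2)**1.5 phase-space factor in `sigmaVtree` shuts the annihilation channel off as m_A approaches m_X. A self-contained sketch of that kinematic behavior (parameter values are arbitrary illustrations):

```python
import math

def sigma_v_tree(m_X, m_A, alpha_X):
    """Tree-level sigma*v for XX -> A'A', same form as sigmaVtree above."""
    numerator = (1 - (m_A / m_X)**2)**1.5
    denominator = (1 - 0.5 * (m_A / m_X)**2)**2
    return math.pi * (alpha_X / m_X)**2 * numerator / denominator

light = sigma_v_tree(m_X=1000.0, m_A=1.0, alpha_X=0.02)
near_threshold = sigma_v_tree(m_X=1000.0, m_A=999.0, alpha_X=0.02)
assert near_threshold < light     # annihilation is phase-space suppressed
# For m_A << m_X the cross section reduces to pi * (alpha_X / m_X)**2.
assert abs(light - math.pi * (0.02 / 1000.0)**2) / light < 1e-5
```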
########################
# Sommerfeld Enhancement
########################
def sommerfeld(v, m_X, m_A, alpha_X):
'''
sommerfeld(v, m_X, m_A, alpha_X)
    Returns the Sommerfeld enhancement
[m_X] = GeV
[m_A] = GeV
'''
a = v / (2 * alpha_X)
c = 6 * alpha_X * m_X / (np.pi**2 * m_A)
# Kludge: Absolute value the argument of the square root inside Cos(...)
function = np.pi/a * np.sinh(2*np.pi*a*c) / \
( np.cosh(2*np.pi*a*c) - np.cos(2*np.pi*np.abs(np.sqrt(np.abs(c-(a*c)**2)) ) ) )
return function
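At low velocity the Sommerfeld factor should exceed unity (it reduces to 1 only when the long-range attraction is negligible). A self-contained sketch with the same Hulthen-style closed form, including the absolute-value kludge, at one arbitrary parameter point:

```python
import math

def sommerfeld_sketch(v, m_X, m_A, alpha_X):
    """Same closed form as sommerfeld() above, stdlib only."""
    a = v / (2 * alpha_X)
    c = 6 * alpha_X * m_X / (math.pi**2 * m_A)
    arg = 2 * math.pi * math.sqrt(abs(c - (a * c)**2))
    return (math.pi / a) * math.sinh(2 * math.pi * a * c) / (
        math.cosh(2 * math.pi * a * c) - math.cos(arg))

# v ~ 1e-3 c with a TeV-scale X and GeV-scale mediator: O(100) enhancement.
S = sommerfeld_sketch(v=1e-3, m_X=1000.0, m_A=1.0, alpha_X=0.035)
assert S > 1.0
```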
########################
# Thermal Average Sommerfeld
########################
def thermAvgSommerfeld(m_X, m_A, alpha_X):
'''
thermAvgSommerfeld(m_X, m_A, alpha_X):
Returns the Thermally-averaged Sommerfeld enhancement
[m_X] = GeV
[m_A] = GeV
'''
v0 = v0func(m_X)
def integrand(v):
# We perform d^3v in spherical velocity space.
# d^3v = v^2 dv * d(Omega)
prefactor = 4*np.pi/(2*np.pi*v0**2)**(1.5)
function = prefactor * v**2 * np.exp(-0.5*(v/v0)**2) * sommerfeld(v, m_X, m_A, alpha_X)
return function
lowV = 0
# Python doesn't like it when you integrate to infinity, so we integrate out to 10 standard deviations
highV = 10*(v0func(m_X))
integral = integrate.quad(integrand, lowV, highV)[0]
return integral
########################
# CAnnCalc
########################
def cAnn(m_X, sigmaVTree, thermAvgSomm = 1):
'''
    cAnn(m_X, sigmaVTree, thermAvgSomm = 1)
    Returns the annihilation rate in sec^-1 without Sommerfeld effects.
    To include Sommerfeld effects, set thermAvgSomm = thermAvgSommerfeld(m_X, m_A, alpha_X)
[m_X] = GeV
[sigmaVTree] = GeV^-2
'''
prefactor = (Gnat * m_X * rhoCross/ (3 * TCross) )**(3./2)
conversion = (1.52e24) # GeV -> Sec^-1
function = conversion * prefactor * sigmaVTree * thermAvgSomm
return function
################################################################
# Equilibrium and Signal Rate Functions
################################################################
########################
# Equilibrium Time
########################
def tau(CCap,CAnn):
'''
tau(CCap,CAnn)
    returns the equilibrium time in sec
[Ccap] = sec^-1
[Cann] = sec^-1
'''
function = 1./(np.sqrt(CCap*CAnn))
return function
########################
# Epsilon as a function of m_A
########################
def contourFunction(m_A, alpha_X, Cann0, Sommerfeld, kappa0, contourLevel):
'''
    contourFunction(m_A, alpha_X, Cann0, Sommerfeld, kappa0, contourLevel)
    Returns the value of epsilon as a function of mediator mass.
    [m_A] = GeV
    [Cann0] = sec^-1
    [kappa0] = GeV^5
    Note: The 10^n contour is input as contourLevel = n
'''
function = 2 * np.log10(m_A) - (0.5)*np.log10(alpha_X * kappa0 * Cann0 * Sommerfeld) \
- contourLevel - np.log10(tauCross)
return function
########################
# Annihilation Rate
########################
def gammaAnn(CCap, CAnn):
'''
gammaAnn(CCap, CAnn)
returns the solution to the differential rate equation for dark matter capture and annihilation
[Ccap] = sec^-1
[Cann] = sec^-1
'''
Tau = tau(CCap, CAnn)
EQRatio = tauCross/Tau
function = (0.5) * CCap * ((np.tanh(EQRatio))**2)
return function
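The tanh**2 solution above saturates once the system has been accumulating dark matter for many equilibrium times: for t >> tau the annihilation rate approaches C_cap / 2, i.e. annihilation exactly balances capture (two particles removed per annihilation). A self-contained check with hypothetical rates:

```python
import math

C_cap, C_ann = 1.0e15, 1.0e-45           # hypothetical rates, [s^-1]
tau_eq = 1.0 / math.sqrt(C_cap * C_ann)  # equilibrium time = 1e15 s here
t = 1.4e17                               # t >> tau_eq, so equilibrium holds
gamma = 0.5 * C_cap * math.tanh(t / tau_eq)**2
assert abs(gamma - 0.5 * C_cap) / (0.5 * C_cap) < 1e-9
```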
########################
# Decay Length
########################
def decayLength(m_X, m_A, epsilon, BR):
'''
decayLength(m_X, m_A, epsilon, BR)
returns the characteristic length of dark photons in cm
[m_X] = GeV
[m_A] = GeV
BR = Branching Ratio
'''
function = RCross * BR * (3.6e-9/epsilon)**2 * (m_X/m_A) * (1./1000) * (1./m_A)
return function
########################
# Decay Parameter
########################
def epsilonDecay(decayLength, effectiveDepth = 10**5): # Effective depth = 1 km
'''
epsilonDecay(decayLength, effectiveDepth = 10**5)
    Returns the probability for dark photons to decay between the surface of the Earth and the effective detector depth
[effectiveDepth] = cm, default value for the IceCube Neutrino Observatory is 1 km.
'''
arg1 = RCross
arg2 = RCross + effectiveDepth
function = np.exp(-arg1/decayLength) - np.exp(-arg2/decayLength)
return function
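The decay probability is a difference of two survival exponentials, so it is bounded in [0, 1] and vanishes in both limits: prompt decay inside the Earth (short decay length) and escape past the detector (long decay length). A self-contained check of that shape:

```python
import math

R = 6.371e8    # Earth radius [cm]
d = 1.0e5      # effective detector depth [cm], i.e. 1 km

def p_decay(L):
    """P(decay between R and R + d) for decay length L, as in epsilonDecay."""
    return math.exp(-R / L) - math.exp(-(R + d) / L)

assert all(0.0 <= p_decay(L) <= 1.0 for L in (1e6, 1e8, 1e9, 1e12))
assert p_decay(1e20) < 1e-10   # long-lived photons overshoot the detector
```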
########################
# Ice Cube Signal
########################
def iceCubeSignal(gammaAnn, epsilonDecay, T, Aeff = 10**10):
'''
    iceCubeSignal(gammaAnn, epsilonDecay, T, Aeff = 10**10)
    returns the signal rate for IceCube
    [gammaAnn] = sec^-1
    [T] = sec
[Aeff] = cm^2
'''
function = 2 * gammaAnn * (Aeff/ (4*np.pi*RCross**2) ) * epsilonDecay * T
return function
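The event count is two dark photons per annihilation, diluted over a sphere of Earth radius, weighted by the decay probability and multiplied by the live time. A self-contained sketch with purely hypothetical inputs:

```python
import math

R = 6.371e8          # Earth radius [cm]
A_eff = 1.0e10       # effective area [cm^2], roughly 1 km^2
gamma_ann = 5.0e14   # hypothetical annihilation rate [s^-1]
eps_decay = 1.0e-3   # hypothetical decay probability near the detector
T = 3.156e7          # one year of live time [s]

# N = 2 * Gamma_ann * (A_eff / 4 pi R^2) * P_decay * T, as in iceCubeSignal.
n_events = 2 * gamma_ann * (A_eff / (4 * math.pi * R**2)) * eps_decay * T
assert n_events > 0
```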
print("Dark Photon Module Imported")
# File: tcom_plot.py (repo: mfatihaktas/service-capacity, license: MIT)
from cap_finder import *
# from popularity import *
if __name__ == "__main__":
popmodel = PopModel_wZipf(k=2, zipf_tailindex_rv=TNormal(1, 2), arrate_rv=TNormal(2, 0.4) )
heatmap_grid_xmax_ymax = gen_pop_heatmap_grid_xmax_ymax(popmodel)
def plot_(G, savename):
cf = ConfInspector(G)
print("cf.to_sysrepr= {}".format(cf.to_sysrepr() ) )
cf.plot_servcap_2d(popmodel, heatmap_grid_xmax_ymax, savename)
plot_(G=custom_conf_matrix(4, k=2), savename='plot_servcap_rep42.png')
plot_(G=mds_conf_matrix(4, k=2), savename='plot_servcap_mds42.png')
| 37.8 | 93 | 0.738977 |
# File: pcdet/models/backbones_2d/base_bev_backbone.py (repo: kingbackyang/pcdet_fusion_slim, license: Apache-2.0)
import numpy as np
import torch
import torch.nn as nn
class BaseBEVBackbone(nn.Module):
def __init__(self, model_cfg, input_channels):
super().__init__()
self.model_cfg = model_cfg
if self.model_cfg.get('LAYER_NUMS', None) is not None:
assert len(self.model_cfg.LAYER_NUMS) == len(self.model_cfg.LAYER_STRIDES) == len(self.model_cfg.NUM_FILTERS)
layer_nums = self.model_cfg.LAYER_NUMS
layer_strides = self.model_cfg.LAYER_STRIDES
num_filters = self.model_cfg.NUM_FILTERS
else:
layer_nums = layer_strides = num_filters = []
if self.model_cfg.get('UPSAMPLE_STRIDES', None) is not None:
assert len(self.model_cfg.UPSAMPLE_STRIDES) == len(self.model_cfg.NUM_UPSAMPLE_FILTERS)
num_upsample_filters = self.model_cfg.NUM_UPSAMPLE_FILTERS
upsample_strides = self.model_cfg.UPSAMPLE_STRIDES
else:
upsample_strides = num_upsample_filters = []
num_levels = len(layer_nums)
c_in_list = [input_channels, *num_filters[:-1]]
self.blocks = nn.ModuleList()
self.deblocks = nn.ModuleList()
for idx in range(num_levels):
cur_layers = [
nn.ZeroPad2d(1),
nn.Conv2d(
c_in_list[idx], num_filters[idx], kernel_size=3,
stride=layer_strides[idx], padding=0, bias=False
),
nn.BatchNorm2d(num_filters[idx], eps=1e-3, momentum=0.01),
nn.ReLU()
]
for k in range(layer_nums[idx]):
cur_layers.extend([
nn.Conv2d(num_filters[idx], num_filters[idx], kernel_size=3, padding=1, bias=False),
nn.BatchNorm2d(num_filters[idx], eps=1e-3, momentum=0.01),
nn.ReLU()
])
self.blocks.append(nn.Sequential(*cur_layers))
if len(upsample_strides) > 0:
stride = upsample_strides[idx]
if stride >= 1:
self.deblocks.append(nn.Sequential(
nn.ConvTranspose2d(
num_filters[idx], num_upsample_filters[idx],
upsample_strides[idx],
stride=upsample_strides[idx], bias=False
),
nn.BatchNorm2d(num_upsample_filters[idx], eps=1e-3, momentum=0.01),
nn.ReLU()
))
else:
                    stride = int(np.round(1 / stride))
self.deblocks.append(nn.Sequential(
nn.Conv2d(
num_filters[idx], num_upsample_filters[idx],
stride,
stride=stride, bias=False
),
nn.BatchNorm2d(num_upsample_filters[idx], eps=1e-3, momentum=0.01),
nn.ReLU()
))
c_in = sum(num_upsample_filters)
if len(upsample_strides) > num_levels:
self.deblocks.append(nn.Sequential(
nn.ConvTranspose2d(c_in, c_in, upsample_strides[-1], stride=upsample_strides[-1], bias=False),
nn.BatchNorm2d(c_in, eps=1e-3, momentum=0.01),
nn.ReLU(),
))
self.num_bev_features = c_in
def forward(self, data_dict):
"""
Args:
            data_dict:
                spatial_features: stacked BEV feature map, shape (B, C, H, W)
        Returns:
            data_dict with 'spatial_features_2d' and per-stride 'spatial_features_Nx' entries added
"""
spatial_features = data_dict['spatial_features']
ups = []
ret_dict = {}
x = spatial_features
for i in range(len(self.blocks)):
x = self.blocks[i](x)
if i == 0:
data_dict["multi_small_input"] = x
stride = int(spatial_features.shape[2] / x.shape[2])
ret_dict['spatial_features_%dx' % stride] = x
if len(self.deblocks) > 0:
ups.append(self.deblocks[i](x))
else:
ups.append(x)
if len(ups) > 1:
x = torch.cat(ups, dim=1)
elif len(ups) == 1:
x = ups[0]
if len(self.deblocks) > len(self.blocks):
x = self.deblocks[-1](x)
data_dict['spatial_features_2d'] = x
return data_dict
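The configuration keys read in `__init__` (`LAYER_NUMS`, `LAYER_STRIDES`, `NUM_FILTERS`, `UPSAMPLE_STRIDES`, `NUM_UPSAMPLE_FILTERS`) suggest a YAML fragment of the following shape; the values here are illustrative, in the style of typical pcdet backbone configs, and are not taken from this repository:

```yaml
# Hypothetical BACKBONE_2D config fragment matching the keys read above.
BACKBONE_2D:
    NAME: BaseBEVBackbone
    LAYER_NUMS: [3, 5, 5]            # conv layers per block
    LAYER_STRIDES: [2, 2, 2]         # downsampling stride of each block
    NUM_FILTERS: [64, 128, 256]      # output channels per block
    UPSAMPLE_STRIDES: [1, 2, 4]      # deconv strides back to a common resolution
    NUM_UPSAMPLE_FILTERS: [128, 128, 128]
```

With equal `len()` for the first three lists and the last two lists, the assertions at the top of `__init__` pass, and `num_bev_features` comes out as the sum of `NUM_UPSAMPLE_FILTERS` (384 here).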
# File: sqlalchemy/dialects/postgresql/base.py (vendored venv copy in repo ijchavez/python, license: Unlicense)
# postgresql/base.py
# Copyright (C) 2005-2021 the SQLAlchemy authors and contributors
# <see AUTHORS file>
#
# This module is part of SQLAlchemy and is released under
# the MIT License: http://www.opensource.org/licenses/mit-license.php
r"""
.. dialect:: postgresql
:name: PostgreSQL
:full_support: 9.6, 10, 11, 12
:normal_support: 9.6+
:best_effort: 8+
.. _postgresql_sequences:
Sequences/SERIAL/IDENTITY
-------------------------
PostgreSQL supports sequences, and SQLAlchemy uses these as the default means
of creating new primary key values for integer-based primary key columns. When
creating tables, SQLAlchemy will issue the ``SERIAL`` datatype for
integer-based primary key columns, which generates a sequence and server side
default corresponding to the column.
To specify a specific named sequence to be used for primary key generation,
use the :func:`~sqlalchemy.schema.Sequence` construct::
Table('sometable', metadata,
Column('id', Integer, Sequence('some_id_seq'), primary_key=True)
)
When SQLAlchemy issues a single INSERT statement, to fulfill the contract of
having the "last insert identifier" available, a RETURNING clause is added to
the INSERT statement which specifies the primary key columns should be
returned after the statement completes. The RETURNING functionality only takes
place if PostgreSQL 8.2 or later is in use. As a fallback approach, the
sequence, whether specified explicitly or implicitly via ``SERIAL``, is
executed independently beforehand, the returned value to be used in the
subsequent insert. Note that when an
:func:`~sqlalchemy.sql.expression.insert()` construct is executed using
"executemany" semantics, the "last inserted identifier" functionality does not
apply; no RETURNING clause is emitted nor is the sequence pre-executed in this
case.
To force the usage of RETURNING by default off, specify the flag
``implicit_returning=False`` to :func:`_sa.create_engine`.
PostgreSQL 10 and above IDENTITY columns
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
PostgreSQL 10 and above have a new IDENTITY feature that supersedes the use
of SERIAL. The :class:`_schema.Identity` construct in a
:class:`_schema.Column` can be used to control its behavior::
from sqlalchemy import Table, Column, MetaData, Integer, Computed
metadata = MetaData()
data = Table(
"data",
metadata,
Column(
'id', Integer, Identity(start=42, cycle=True), primary_key=True
),
Column('data', String)
)
The CREATE TABLE for the above :class:`_schema.Table` object would be:
.. sourcecode:: sql
CREATE TABLE data (
id INTEGER GENERATED BY DEFAULT AS IDENTITY (START WITH 42 CYCLE),
data VARCHAR,
PRIMARY KEY (id)
)
.. versionchanged:: 1.4 Added :class:`_schema.Identity` construct
in a :class:`_schema.Column` to specify the option of an autoincrementing
column.
.. note::
Previous versions of SQLAlchemy did not have built-in support for rendering
of IDENTITY, and could use the following compilation hook to replace
occurrences of SERIAL with IDENTITY::
from sqlalchemy.schema import CreateColumn
from sqlalchemy.ext.compiler import compiles
@compiles(CreateColumn, 'postgresql')
def use_identity(element, compiler, **kw):
text = compiler.visit_create_column(element, **kw)
text = text.replace(
"SERIAL", "INT GENERATED BY DEFAULT AS IDENTITY"
)
return text
Using the above, a table such as::
t = Table(
't', m,
Column('id', Integer, primary_key=True),
Column('data', String)
)
Will generate on the backing database as::
CREATE TABLE t (
id INT GENERATED BY DEFAULT AS IDENTITY,
data VARCHAR,
PRIMARY KEY (id)
)
.. _postgresql_ss_cursors:
Server Side Cursors
-------------------
Server-side cursor support is available for the psycopg2, asyncpg
dialects and may also be available in others.
Server side cursors are enabled on a per-statement basis by using the
:paramref:`.Connection.execution_options.stream_results` connection execution
option::
with engine.connect() as conn:
result = conn.execution_options(stream_results=True).execute(text("select * from table"))
Note that some kinds of SQL statements may not be supported with
server side cursors; generally, only SQL statements that return rows should be
used with this option.
.. deprecated:: 1.4 The dialect-level server_side_cursors flag is deprecated
and will be removed in a future release. Please use the
:paramref:`_engine.Connection.stream_results` execution option for
unbuffered cursor support.
.. seealso::
:ref:`engine_stream_results`
.. _postgresql_isolation_level:
Transaction Isolation Level
---------------------------
Most SQLAlchemy dialects support setting of transaction isolation level
using the :paramref:`_sa.create_engine.execution_options` parameter
at the :func:`_sa.create_engine` level, and at the :class:`_engine.Connection`
level via the :paramref:`.Connection.execution_options.isolation_level`
parameter.
For PostgreSQL dialects, this feature works either by making use of the
DBAPI-specific features, such as psycopg2's isolation level flags which will
embed the isolation level setting inline with the ``"BEGIN"`` statement, or for
DBAPIs with no direct support by emitting ``SET SESSION CHARACTERISTICS AS
TRANSACTION ISOLATION LEVEL <level>`` ahead of the ``"BEGIN"`` statement
emitted by the DBAPI. For the special AUTOCOMMIT isolation level,
DBAPI-specific techniques are used which is typically an ``.autocommit``
flag on the DBAPI connection object.
To set isolation level using :func:`_sa.create_engine`::
engine = create_engine(
"postgresql+pg8000://scott:tiger@localhost/test",
execution_options={
"isolation_level": "REPEATABLE READ"
}
)
To set using per-connection execution options::
with engine.connect() as conn:
conn = conn.execution_options(
isolation_level="REPEATABLE READ"
)
with conn.begin():
# ... work with transaction
Valid values for ``isolation_level`` on most PostgreSQL dialects include:
* ``READ COMMITTED``
* ``READ UNCOMMITTED``
* ``REPEATABLE READ``
* ``SERIALIZABLE``
* ``AUTOCOMMIT``
.. seealso::
:ref:`postgresql_readonly_deferrable`
:ref:`dbapi_autocommit`
:ref:`psycopg2_isolation_level`
:ref:`pg8000_isolation_level`
.. _postgresql_readonly_deferrable:
Setting READ ONLY / DEFERRABLE
------------------------------
Most PostgreSQL dialects support setting the "READ ONLY" and "DEFERRABLE"
characteristics of the transaction, which is in addition to the isolation level
setting. These two attributes can be established either in conjunction with or
independently of the isolation level by passing the ``postgresql_readonly`` and
``postgresql_deferrable`` flags with
:meth:`_engine.Connection.execution_options`. The example below illustrates
passing the ``"SERIALIZABLE"`` isolation level at the same time as setting
"READ ONLY" and "DEFERRABLE"::
with engine.connect() as conn:
conn = conn.execution_options(
isolation_level="SERIALIZABLE",
postgresql_readonly=True,
postgresql_deferrable=True
)
with conn.begin():
# ... work with transaction
Note that some DBAPIs such as asyncpg only support "readonly" with
SERIALIZABLE isolation.
.. versionadded:: 1.4 added support for the ``postgresql_readonly``
and ``postgresql_deferrable`` execution options.
.. _postgresql_alternate_search_path:
Setting Alternate Search Paths on Connect
------------------------------------------
The PostgreSQL ``search_path`` variable refers to the list of schema names
that will be implicitly referred towards when a particular table or other
object is referenced in a SQL statement. As detailed in the next section
:ref:`postgresql_schema_reflection`, SQLAlchemy is generally organized around
the concept of keeping this variable at its default value of ``public``,
however, in order to have it set to any arbitrary name or names when connections
are used automatically, the "SET SESSION search_path" command may be invoked
for all connections in a pool using the following event handler, as discussed
at :ref:`schema_set_default_connections`::
from sqlalchemy import event
from sqlalchemy import create_engine
engine = create_engine("postgresql+psycopg2://scott:tiger@host/dbname")
@event.listens_for(engine, "connect", insert=True)
def set_search_path(dbapi_connection, connection_record):
existing_autocommit = dbapi_connection.autocommit
dbapi_connection.autocommit = True
cursor = dbapi_connection.cursor()
cursor.execute("SET SESSION search_path='%s'" % schema_name)
cursor.close()
dbapi_connection.autocommit = existing_autocommit
The reason the recipe is complicated by use of the ``.autocommit`` DBAPI
attribute is so that when the ``SET SESSION search_path`` directive is invoked,
it is invoked outside of the scope of any transaction and therefore will not
be reverted when the DBAPI connection has a rollback.
.. seealso::
:ref:`schema_set_default_connections` - in the :ref:`metadata_toplevel` documentation
.. _postgresql_schema_reflection:
Remote-Schema Table Introspection and PostgreSQL search_path
------------------------------------------------------------
**TL;DR;**: keep the ``search_path`` variable set to its default of ``public``,
name schemas **other** than ``public`` explicitly within ``Table`` definitions.
The PostgreSQL dialect can reflect tables from any schema. The
:paramref:`_schema.Table.schema` argument, or alternatively the
:paramref:`.MetaData.reflect.schema` argument determines which schema will
be searched for the table or tables. The reflected :class:`_schema.Table`
objects
will in all cases retain this ``.schema`` attribute as was specified.
However, with regards to tables which these :class:`_schema.Table`
objects refer to
via foreign key constraint, a decision must be made as to how the ``.schema``
is represented in those remote tables, in the case where that remote
schema name is also a member of the current
`PostgreSQL search path
<http://www.postgresql.org/docs/current/static/ddl-schemas.html#DDL-SCHEMAS-PATH>`_.
By default, the PostgreSQL dialect mimics the behavior encouraged by
PostgreSQL's own ``pg_get_constraintdef()`` builtin procedure. This function
returns a sample definition for a particular foreign key constraint,
omitting the referenced schema name from that definition when the name is
also in the PostgreSQL schema search path. The interaction below
illustrates this behavior::
test=> CREATE TABLE test_schema.referred(id INTEGER PRIMARY KEY);
CREATE TABLE
test=> CREATE TABLE referring(
test(> id INTEGER PRIMARY KEY,
test(> referred_id INTEGER REFERENCES test_schema.referred(id));
CREATE TABLE
test=> SET search_path TO public, test_schema;
test=> SELECT pg_catalog.pg_get_constraintdef(r.oid, true) FROM
test-> pg_catalog.pg_class c JOIN pg_catalog.pg_namespace n
test-> ON n.oid = c.relnamespace
test-> JOIN pg_catalog.pg_constraint r ON c.oid = r.conrelid
test-> WHERE c.relname='referring' AND r.contype = 'f'
test-> ;
pg_get_constraintdef
---------------------------------------------------
FOREIGN KEY (referred_id) REFERENCES referred(id)
(1 row)
Above, we created a table ``referred`` as a member of the remote schema
``test_schema``, however when we added ``test_schema`` to the
PG ``search_path`` and then asked ``pg_get_constraintdef()`` for the
``FOREIGN KEY`` syntax, ``test_schema`` was not included in the output of
the function.
On the other hand, if we set the search path back to the typical default
of ``public``::
test=> SET search_path TO public;
SET
The same query against ``pg_get_constraintdef()`` now returns the fully
schema-qualified name for us::
test=> SELECT pg_catalog.pg_get_constraintdef(r.oid, true) FROM
test-> pg_catalog.pg_class c JOIN pg_catalog.pg_namespace n
test-> ON n.oid = c.relnamespace
test-> JOIN pg_catalog.pg_constraint r ON c.oid = r.conrelid
test-> WHERE c.relname='referring' AND r.contype = 'f';
pg_get_constraintdef
---------------------------------------------------------------
FOREIGN KEY (referred_id) REFERENCES test_schema.referred(id)
(1 row)
SQLAlchemy will by default use the return value of ``pg_get_constraintdef()``
in order to determine the remote schema name. That is, if our ``search_path``
were set to include ``test_schema``, and we invoked a table
reflection process as follows::
>>> from sqlalchemy import Table, MetaData, create_engine, text
>>> engine = create_engine("postgresql://scott:tiger@localhost/test")
>>> with engine.connect() as conn:
... conn.execute(text("SET search_path TO test_schema, public"))
... meta = MetaData()
... referring = Table('referring', meta,
... autoload_with=conn)
...
<sqlalchemy.engine.result.CursorResult object at 0x101612ed0>
The above process would deliver to the :attr:`_schema.MetaData.tables`
collection
``referred`` table named **without** the schema::
>>> meta.tables['referred'].schema is None
True
To alter the behavior of reflection such that the referred schema is
maintained regardless of the ``search_path`` setting, use the
``postgresql_ignore_search_path`` option, which can be specified as a
dialect-specific argument to both :class:`_schema.Table` as well as
:meth:`_schema.MetaData.reflect`::
>>> with engine.connect() as conn:
... conn.execute(text("SET search_path TO test_schema, public"))
... meta = MetaData()
... referring = Table('referring', meta,
... autoload_with=conn,
... postgresql_ignore_search_path=True)
...
<sqlalchemy.engine.result.CursorResult object at 0x1016126d0>
We will now have ``test_schema.referred`` stored as schema-qualified::
>>> meta.tables['test_schema.referred'].schema
'test_schema'
.. sidebar:: Best Practices for PostgreSQL Schema reflection
The description of PostgreSQL schema reflection behavior is complex, and
is the product of many years of dealing with widely varied use cases and
user preferences. But in fact, there's no need to understand any of it if
you just stick to the simplest use pattern: leave the ``search_path`` set
to its default of ``public`` only, never refer to the name ``public`` as
an explicit schema name otherwise, and refer to all other schema names
explicitly when building up a :class:`_schema.Table` object. The options
described here are only for those users who can't, or prefer not to, stay
within these guidelines.
Note that **in all cases**, the "default" schema is always reflected as
``None``. The "default" schema on PostgreSQL is that which is returned by the
PostgreSQL ``current_schema()`` function. On a typical PostgreSQL
installation, this is the name ``public``. So a table that refers to another
which is in the ``public`` (i.e. default) schema will always have the
``.schema`` attribute set to ``None``.
.. versionadded:: 0.9.2 Added the ``postgresql_ignore_search_path``
dialect-level option accepted by :class:`_schema.Table` and
:meth:`_schema.MetaData.reflect`.
.. seealso::
`The Schema Search Path
<http://www.postgresql.org/docs/9.0/static/ddl-schemas.html#DDL-SCHEMAS-PATH>`_
- on the PostgreSQL website.
INSERT/UPDATE...RETURNING
-------------------------
The dialect supports PG 8.2's ``INSERT..RETURNING``, ``UPDATE..RETURNING`` and
``DELETE..RETURNING`` syntaxes. ``INSERT..RETURNING`` is used by default
for single-row INSERT statements in order to fetch newly generated
primary key identifiers. To specify an explicit ``RETURNING`` clause,
use the :meth:`._UpdateBase.returning` method on a per-statement basis::
# INSERT..RETURNING
result = table.insert().returning(table.c.col1, table.c.col2).\
values(name='foo')
print(result.fetchall())
# UPDATE..RETURNING
result = table.update().returning(table.c.col1, table.c.col2).\
where(table.c.name=='foo').values(name='bar')
print(result.fetchall())
# DELETE..RETURNING
result = table.delete().returning(table.c.col1, table.c.col2).\
where(table.c.name=='foo')
print(result.fetchall())
.. _postgresql_insert_on_conflict:
INSERT...ON CONFLICT (Upsert)
------------------------------
Starting with version 9.5, PostgreSQL allows "upserts" (update or insert) of
rows into a table via the ``ON CONFLICT`` clause of the ``INSERT`` statement. A
candidate row will only be inserted if that row does not violate any unique
constraints. In the case of a unique constraint violation, a secondary action
can occur which can be either "DO UPDATE", indicating that the data in the
target row should be updated, or "DO NOTHING", which indicates to silently skip
this row.
Conflicts are determined using existing unique constraints and indexes. These
constraints may be identified either using their name as stated in DDL,
or they may be inferred by stating the columns and conditions that comprise
the indexes.
SQLAlchemy provides ``ON CONFLICT`` support via the PostgreSQL-specific
:func:`_postgresql.insert()` function, which provides
the generative methods :meth:`_postgresql.Insert.on_conflict_do_update`
and :meth:`~.postgresql.Insert.on_conflict_do_nothing`:
.. sourcecode:: pycon+sql
>>> from sqlalchemy.dialects.postgresql import insert
>>> insert_stmt = insert(my_table).values(
... id='some_existing_id',
... data='inserted value')
>>> do_nothing_stmt = insert_stmt.on_conflict_do_nothing(
... index_elements=['id']
... )
>>> print(do_nothing_stmt)
{opensql}INSERT INTO my_table (id, data) VALUES (%(id)s, %(data)s)
ON CONFLICT (id) DO NOTHING
{stop}
>>> do_update_stmt = insert_stmt.on_conflict_do_update(
... constraint='pk_my_table',
... set_=dict(data='updated value')
... )
>>> print(do_update_stmt)
{opensql}INSERT INTO my_table (id, data) VALUES (%(id)s, %(data)s)
ON CONFLICT ON CONSTRAINT pk_my_table DO UPDATE SET data = %(param_1)s
.. versionadded:: 1.1
.. seealso::
`INSERT .. ON CONFLICT
<http://www.postgresql.org/docs/current/static/sql-insert.html#SQL-ON-CONFLICT>`_
- in the PostgreSQL documentation.
Specifying the Target
^^^^^^^^^^^^^^^^^^^^^
Both methods supply the "target" of the conflict using either the
named constraint or by column inference:
* The :paramref:`_postgresql.Insert.on_conflict_do_update.index_elements` argument
specifies a sequence containing string column names, :class:`_schema.Column`
objects, and/or SQL expression elements, which would identify a unique
index:
.. sourcecode:: pycon+sql
>>> do_update_stmt = insert_stmt.on_conflict_do_update(
... index_elements=['id'],
... set_=dict(data='updated value')
... )
>>> print(do_update_stmt)
{opensql}INSERT INTO my_table (id, data) VALUES (%(id)s, %(data)s)
ON CONFLICT (id) DO UPDATE SET data = %(param_1)s
{stop}
>>> do_update_stmt = insert_stmt.on_conflict_do_update(
... index_elements=[my_table.c.id],
... set_=dict(data='updated value')
... )
>>> print(do_update_stmt)
{opensql}INSERT INTO my_table (id, data) VALUES (%(id)s, %(data)s)
ON CONFLICT (id) DO UPDATE SET data = %(param_1)s
* When using :paramref:`_postgresql.Insert.on_conflict_do_update.index_elements` to
infer an index, a partial index can be inferred by also specifying the
:paramref:`_postgresql.Insert.on_conflict_do_update.index_where` parameter:
.. sourcecode:: pycon+sql
>>> stmt = insert(my_table).values(user_email='a@b.com', data='inserted data')
>>> stmt = stmt.on_conflict_do_update(
... index_elements=[my_table.c.user_email],
... index_where=my_table.c.user_email.like('%@gmail.com'),
... set_=dict(data=stmt.excluded.data)
... )
>>> print(stmt)
{opensql}INSERT INTO my_table (data, user_email)
VALUES (%(data)s, %(user_email)s) ON CONFLICT (user_email)
WHERE user_email LIKE %(user_email_1)s DO UPDATE SET data = excluded.data
* The :paramref:`_postgresql.Insert.on_conflict_do_update.constraint` argument is
used to specify an index directly rather than inferring it. This can be
the name of a UNIQUE constraint, a PRIMARY KEY constraint, or an INDEX:
.. sourcecode:: pycon+sql
>>> do_update_stmt = insert_stmt.on_conflict_do_update(
... constraint='my_table_idx_1',
... set_=dict(data='updated value')
... )
>>> print(do_update_stmt)
{opensql}INSERT INTO my_table (id, data) VALUES (%(id)s, %(data)s)
ON CONFLICT ON CONSTRAINT my_table_idx_1 DO UPDATE SET data = %(param_1)s
{stop}
>>> do_update_stmt = insert_stmt.on_conflict_do_update(
... constraint='my_table_pk',
... set_=dict(data='updated value')
... )
>>> print(do_update_stmt)
{opensql}INSERT INTO my_table (id, data) VALUES (%(id)s, %(data)s)
ON CONFLICT ON CONSTRAINT my_table_pk DO UPDATE SET data = %(param_1)s
{stop}
* The :paramref:`_postgresql.Insert.on_conflict_do_update.constraint` argument may
also refer to a SQLAlchemy construct representing a constraint,
e.g. :class:`.UniqueConstraint`, :class:`.PrimaryKeyConstraint`,
:class:`.Index`, or :class:`.ExcludeConstraint`. In this use,
if the constraint has a name, it is used directly. Otherwise, if the
constraint is unnamed, then inference will be used, where the expressions
and optional WHERE clause of the constraint will be spelled out in the
construct. This use is especially convenient
to refer to the named or unnamed primary key of a :class:`_schema.Table`
using the
:attr:`_schema.Table.primary_key` attribute:
.. sourcecode:: pycon+sql
>>> do_update_stmt = insert_stmt.on_conflict_do_update(
... constraint=my_table.primary_key,
... set_=dict(data='updated value')
... )
>>> print(do_update_stmt)
{opensql}INSERT INTO my_table (id, data) VALUES (%(id)s, %(data)s)
ON CONFLICT (id) DO UPDATE SET data = %(param_1)s
The SET Clause
^^^^^^^^^^^^^^^
``ON CONFLICT...DO UPDATE`` is used to perform an update of the already
existing row, using any combination of new values as well as values
from the proposed insertion. These values are specified using the
:paramref:`_postgresql.Insert.on_conflict_do_update.set_` parameter. This
parameter accepts a dictionary which consists of direct values
for UPDATE:
.. sourcecode:: pycon+sql
>>> stmt = insert(my_table).values(id='some_id', data='inserted value')
>>> do_update_stmt = stmt.on_conflict_do_update(
... index_elements=['id'],
... set_=dict(data='updated value')
... )
>>> print(do_update_stmt)
{opensql}INSERT INTO my_table (id, data) VALUES (%(id)s, %(data)s)
ON CONFLICT (id) DO UPDATE SET data = %(param_1)s
.. warning::
The :meth:`_expression.Insert.on_conflict_do_update`
method does **not** take into
account Python-side default UPDATE values or generation functions, e.g.
those specified using :paramref:`_schema.Column.onupdate`.
These values will not be exercised for an ON CONFLICT style of UPDATE,
unless they are manually specified in the
:paramref:`_postgresql.Insert.on_conflict_do_update.set_` dictionary.
Updating using the Excluded INSERT Values
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
In order to refer to the proposed insertion row, the special alias
:attr:`~.postgresql.Insert.excluded` is available as an attribute on
the :class:`_postgresql.Insert` object; this object is a
:class:`_expression.ColumnCollection` containing all columns of the
target table, addressed in SQL via the ``excluded`` alias:
.. sourcecode:: pycon+sql
>>> stmt = insert(my_table).values(
... id='some_id',
... data='inserted value',
... author='jlh'
... )
>>> do_update_stmt = stmt.on_conflict_do_update(
... index_elements=['id'],
... set_=dict(data='updated value', author=stmt.excluded.author)
... )
>>> print(do_update_stmt)
{opensql}INSERT INTO my_table (id, data, author)
VALUES (%(id)s, %(data)s, %(author)s)
ON CONFLICT (id) DO UPDATE SET data = %(param_1)s, author = excluded.author
Additional WHERE Criteria
^^^^^^^^^^^^^^^^^^^^^^^^^
The :meth:`_expression.Insert.on_conflict_do_update` method also accepts
a WHERE clause using the :paramref:`_postgresql.Insert.on_conflict_do_update.where`
parameter, which will limit those rows which receive an UPDATE:
.. sourcecode:: pycon+sql
>>> stmt = insert(my_table).values(
... id='some_id',
... data='inserted value',
... author='jlh'
... )
>>> on_update_stmt = stmt.on_conflict_do_update(
... index_elements=['id'],
... set_=dict(data='updated value', author=stmt.excluded.author),
... where=(my_table.c.status == 2)
... )
>>> print(on_update_stmt)
{opensql}INSERT INTO my_table (id, data, author)
VALUES (%(id)s, %(data)s, %(author)s)
ON CONFLICT (id) DO UPDATE SET data = %(param_1)s, author = excluded.author
WHERE my_table.status = %(status_1)s
Skipping Rows with DO NOTHING
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
``ON CONFLICT`` may be used to skip inserting a row entirely
if any conflict with a unique or exclusion constraint occurs; below
this is illustrated using the
:meth:`~.postgresql.Insert.on_conflict_do_nothing` method:
.. sourcecode:: pycon+sql
>>> stmt = insert(my_table).values(id='some_id', data='inserted value')
>>> stmt = stmt.on_conflict_do_nothing(index_elements=['id'])
>>> print(stmt)
{opensql}INSERT INTO my_table (id, data) VALUES (%(id)s, %(data)s)
ON CONFLICT (id) DO NOTHING
If ``DO NOTHING`` is used without specifying any columns or constraint,
it has the effect of skipping the INSERT for any unique or exclusion
constraint violation which occurs:
.. sourcecode:: pycon+sql
>>> stmt = insert(my_table).values(id='some_id', data='inserted value')
>>> stmt = stmt.on_conflict_do_nothing()
>>> print(stmt)
{opensql}INSERT INTO my_table (id, data) VALUES (%(id)s, %(data)s)
ON CONFLICT DO NOTHING
.. _postgresql_match:
Full Text Search
----------------
SQLAlchemy makes available the PostgreSQL ``@@`` operator via the
:meth:`_expression.ColumnElement.match`
method on any textual column expression.
On a PostgreSQL dialect, an expression like the following::
select(sometable.c.text.match("search string"))
will emit to the database::
SELECT text @@ to_tsquery('search string') FROM table
The PostgreSQL text search functions such as ``to_tsquery()``
and ``to_tsvector()`` are available
explicitly using the standard :data:`.func` construct. For example::
select(func.to_tsvector('fat cats ate rats').match('cat & rat'))
Emits the equivalent of::
SELECT to_tsvector('fat cats ate rats') @@ to_tsquery('cat & rat')
The :class:`_postgresql.TSVECTOR` type can provide for explicit CAST::
from sqlalchemy.dialects.postgresql import TSVECTOR
from sqlalchemy import select, cast
select(cast("some text", TSVECTOR))
produces a statement equivalent to::
SELECT CAST('some text' AS TSVECTOR) AS anon_1
Full Text Searches in PostgreSQL are influenced by a combination of: the
PostgreSQL setting of ``default_text_search_config``, the ``regconfig`` used
to build the GIN/GiST indexes, and the ``regconfig`` optionally passed in
during a query.
When performing a Full Text Search against a column that has a GIN or
GiST index that is already pre-computed (which is common on full text
searches) one may need to explicitly pass in a particular PostgreSQL
``regconfig`` value to ensure the query-planner utilizes the index and does
not re-compute the column on demand.
In order to provide for this explicit query planning, or to use different
search strategies, the ``match`` method accepts a ``postgresql_regconfig``
keyword argument::
select(mytable.c.id).where(
mytable.c.title.match('somestring', postgresql_regconfig='english')
)
Emits the equivalent of::
SELECT mytable.id FROM mytable
WHERE mytable.title @@ to_tsquery('english', 'somestring')
One can also specifically pass in a `'regconfig'` value to the
``to_tsvector()`` command as the initial argument::
select(mytable.c.id).where(
func.to_tsvector('english', mytable.c.title )\
.match('somestring', postgresql_regconfig='english')
)
produces a statement equivalent to::
SELECT mytable.id FROM mytable
WHERE to_tsvector('english', mytable.title) @@
to_tsquery('english', 'somestring')
It is recommended that you use the ``EXPLAIN ANALYZE...`` tool from
PostgreSQL to ensure that you are generating queries with SQLAlchemy that
take full advantage of any indexes you may have created for full text search.
FROM ONLY ...
-------------
The dialect supports PostgreSQL's ONLY keyword for targeting only a particular
table in an inheritance hierarchy. This can be used to produce the
``SELECT ... FROM ONLY``, ``UPDATE ONLY ...``, and ``DELETE FROM ONLY ...``
syntaxes. It uses SQLAlchemy's hints mechanism::
# SELECT ... FROM ONLY ...
result = table.select().with_hint(table, 'ONLY', 'postgresql')
print(result.fetchall())
# UPDATE ONLY ...
table.update(values=dict(foo='bar')).with_hint('ONLY',
dialect_name='postgresql')
# DELETE FROM ONLY ...
table.delete().with_hint('ONLY', dialect_name='postgresql')
.. _postgresql_indexes:
PostgreSQL-Specific Index Options
---------------------------------
Several extensions to the :class:`.Index` construct are available, specific
to the PostgreSQL dialect.
Covering Indexes
^^^^^^^^^^^^^^^^
The ``postgresql_include`` option renders INCLUDE(colname) for the given
string names::
Index("my_index", table.c.x, postgresql_include=['y'])
would render the index as ``CREATE INDEX my_index ON table (x) INCLUDE (y)``
Note that this feature requires PostgreSQL 11 or later.
.. versionadded:: 1.4
.. _postgresql_partial_indexes:
Partial Indexes
^^^^^^^^^^^^^^^
Partial indexes add criterion to the index definition so that the index is
applied to a subset of rows. These can be specified on :class:`.Index`
using the ``postgresql_where`` keyword argument::
Index('my_index', my_table.c.id, postgresql_where=my_table.c.value > 10)
.. _postgresql_operator_classes:
Operator Classes
^^^^^^^^^^^^^^^^
PostgreSQL allows the specification of an *operator class* for each column of
an index (see
http://www.postgresql.org/docs/8.3/interactive/indexes-opclass.html).
The :class:`.Index` construct allows these to be specified via the
``postgresql_ops`` keyword argument::
Index(
'my_index', my_table.c.id, my_table.c.data,
postgresql_ops={
'data': 'text_pattern_ops',
'id': 'int4_ops'
})
Note that the keys in the ``postgresql_ops`` dictionaries are the
"key" name of the :class:`_schema.Column`, i.e. the name used to access it from
the ``.c`` collection of :class:`_schema.Table`, which can be configured to be
different than the actual name of the column as expressed in the database.
If ``postgresql_ops`` is to be used against a complex SQL expression such
as a function call, then to apply to the column it must be given a label
that is identified in the dictionary by name, e.g.::
Index(
'my_index', my_table.c.id,
func.lower(my_table.c.data).label('data_lower'),
postgresql_ops={
'data_lower': 'text_pattern_ops',
'id': 'int4_ops'
})
Operator classes are also supported by the
:class:`_postgresql.ExcludeConstraint` construct using the
:paramref:`_postgresql.ExcludeConstraint.ops` parameter. See that parameter for
details.
.. versionadded:: 1.3.21 added support for operator classes with
:class:`_postgresql.ExcludeConstraint`.
Index Types
^^^^^^^^^^^
PostgreSQL provides several index types: B-Tree, Hash, GiST, and GIN, as well
as the ability for users to create their own (see
http://www.postgresql.org/docs/8.3/static/indexes-types.html). These can be
specified on :class:`.Index` using the ``postgresql_using`` keyword argument::
Index('my_index', my_table.c.data, postgresql_using='gin')
The value passed to the keyword argument will be simply passed through to the
underlying CREATE INDEX command, so it *must* be a valid index type for your
version of PostgreSQL.
.. _postgresql_index_storage:
Index Storage Parameters
^^^^^^^^^^^^^^^^^^^^^^^^
PostgreSQL allows storage parameters to be set on indexes. The storage
parameters available depend on the index method used by the index. Storage
parameters can be specified on :class:`.Index` using the ``postgresql_with``
keyword argument::
Index('my_index', my_table.c.data, postgresql_with={"fillfactor": 50})
.. versionadded:: 1.0.6
PostgreSQL allows specifying the tablespace in which to create the index.
The tablespace can be specified on :class:`.Index` using the
``postgresql_tablespace`` keyword argument::
Index('my_index', my_table.c.data, postgresql_tablespace='my_tablespace')
.. versionadded:: 1.1
Note that the same option is available on :class:`_schema.Table` as well.
.. _postgresql_index_concurrently:
Indexes with CONCURRENTLY
^^^^^^^^^^^^^^^^^^^^^^^^^
The PostgreSQL index option CONCURRENTLY is supported by passing the
flag ``postgresql_concurrently`` to the :class:`.Index` construct::
tbl = Table('testtbl', m, Column('data', Integer))
idx1 = Index('test_idx1', tbl.c.data, postgresql_concurrently=True)
The above index construct will render DDL for CREATE INDEX, assuming
PostgreSQL 8.2 or higher is detected or for a connection-less dialect, as::
CREATE INDEX CONCURRENTLY test_idx1 ON testtbl (data)
For DROP INDEX, assuming PostgreSQL 9.2 or higher is detected or for
a connection-less dialect, it will emit::
DROP INDEX CONCURRENTLY test_idx1
.. versionadded:: 1.1 support for CONCURRENTLY on DROP INDEX. The
CONCURRENTLY keyword is now only emitted if a high enough version
of PostgreSQL is detected on the connection (or for a connection-less
dialect).
When using CONCURRENTLY, the PostgreSQL database requires that the statement
be invoked outside of a transaction block. The Python DBAPI enforces that
even for a single statement, a transaction is present, so to use this
construct, the DBAPI's "autocommit" mode must be used::
metadata = MetaData()
table = Table(
"foo", metadata,
Column("id", String))
index = Index(
"foo_idx", table.c.id, postgresql_concurrently=True)
with engine.connect() as conn:
with conn.execution_options(isolation_level='AUTOCOMMIT'):
table.create(conn)
.. seealso::
:ref:`postgresql_isolation_level`
.. _postgresql_index_reflection:
PostgreSQL Index Reflection
---------------------------
The PostgreSQL database creates a UNIQUE INDEX implicitly whenever the
UNIQUE CONSTRAINT construct is used. When inspecting a table using
:class:`_reflection.Inspector`, the :meth:`_reflection.Inspector.get_indexes`
and the :meth:`_reflection.Inspector.get_unique_constraints`
will report on these
two constructs distinctly; in the case of the index, the key
``duplicates_constraint`` will be present in the index entry if it is
detected as mirroring a constraint. When performing reflection using
``Table(..., autoload_with=engine)``, the UNIQUE INDEX is **not** returned
in :attr:`_schema.Table.indexes` when it is detected as mirroring a
:class:`.UniqueConstraint` in the :attr:`_schema.Table.constraints` collection.
.. versionchanged:: 1.0.0 - :class:`_schema.Table` reflection now includes
:class:`.UniqueConstraint` objects present in the
:attr:`_schema.Table.constraints`
collection; the PostgreSQL backend will no longer include a "mirrored"
:class:`.Index` construct in :attr:`_schema.Table.indexes`
if it is detected
as corresponding to a unique constraint.
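The ``duplicates_constraint`` key can be used to filter reflected indexes
down to those that were created independently of any constraint. A minimal
sketch, using hypothetical reflected output of the shape returned by
:meth:`_reflection.Inspector.get_indexes`:

```python
# Hypothetical entries of the shape returned by Inspector.get_indexes();
# an index mirroring a UNIQUE CONSTRAINT carries 'duplicates_constraint'.
reflected = [
    {"name": "user_email_key", "unique": True,
     "duplicates_constraint": "user_email_key"},
    {"name": "ix_user_name", "unique": False},
]

# keep only indexes that do not merely mirror a constraint
standalone = [ix["name"] for ix in reflected
              if "duplicates_constraint" not in ix]
print(standalone)  # ['ix_user_name']
```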
Special Reflection Options
--------------------------
The :class:`_reflection.Inspector`
used for the PostgreSQL backend is an instance
of :class:`.PGInspector`, which offers additional methods::
from sqlalchemy import create_engine, inspect
engine = create_engine("postgresql+psycopg2://localhost/test")
insp = inspect(engine) # will be a PGInspector
print(insp.get_enums())
.. autoclass:: PGInspector
:members:
.. _postgresql_table_options:
PostgreSQL Table Options
------------------------
Several options for CREATE TABLE are supported directly by the PostgreSQL
dialect in conjunction with the :class:`_schema.Table` construct:
* ``TABLESPACE``::
Table("some_table", metadata, ..., postgresql_tablespace='some_tablespace')
The above option is also available on the :class:`.Index` construct.
* ``ON COMMIT``::
Table("some_table", metadata, ..., postgresql_on_commit='PRESERVE ROWS')
* ``WITH OIDS``::
Table("some_table", metadata, ..., postgresql_with_oids=True)
* ``WITHOUT OIDS``::
Table("some_table", metadata, ..., postgresql_with_oids=False)
* ``INHERITS``::
Table("some_table", metadata, ..., postgresql_inherits="some_supertable")
Table("some_table", metadata, ..., postgresql_inherits=("t1", "t2", ...))
.. versionadded:: 1.0.0
* ``PARTITION BY``::
Table("some_table", metadata, ...,
postgresql_partition_by='LIST (part_column)')
.. versionadded:: 1.2.6
.. seealso::
`PostgreSQL CREATE TABLE options
<http://www.postgresql.org/docs/current/static/sql-createtable.html>`_
.. _postgresql_table_valued_overview:
Table values, Table and Column valued functions, Row and Tuple objects
-----------------------------------------------------------------------
PostgreSQL makes great use of modern SQL forms such as table-valued functions,
tables and rows as values. These constructs are commonly used as part
of PostgreSQL's support for complex datatypes such as JSON, ARRAY, and other
datatypes. SQLAlchemy's SQL expression language has native support for
most table-valued and row-valued forms.
.. _postgresql_table_valued:
Table-Valued Functions
^^^^^^^^^^^^^^^^^^^^^^^
Many PostgreSQL built-in functions are intended to be used in the FROM clause
of a SELECT statement, and are capable of returning table rows or sets of table
rows. A large portion of PostgreSQL's JSON functions for example such as
``json_array_elements()``, ``json_object_keys()``, ``json_each_text()``,
``json_each()``, ``json_to_record()``, ``json_populate_recordset()`` use such
forms. These classes of SQL function calling forms in SQLAlchemy are available
using the :meth:`_functions.FunctionElement.table_valued` method in conjunction
with :class:`_functions.Function` objects generated from the :data:`_sql.func`
namespace.
Examples from PostgreSQL's reference documentation follow below:
* ``json_each()``::
>>> from sqlalchemy import select, func
>>> stmt = select(func.json_each('{"a":"foo", "b":"bar"}').table_valued("key", "value"))
>>> print(stmt)
SELECT anon_1.key, anon_1.value
FROM json_each(:json_each_1) AS anon_1
* ``json_populate_record()``::
>>> from sqlalchemy import select, func, literal_column
>>> stmt = select(
... func.json_populate_record(
... literal_column("null::myrowtype"),
... '{"a":1,"b":2}'
... ).table_valued("a", "b", name="x")
... )
>>> print(stmt)
SELECT x.a, x.b
FROM json_populate_record(null::myrowtype, :json_populate_record_1) AS x
* ``json_to_record()`` - this form uses a PostgreSQL specific form of derived
columns in the alias, where we may make use of :func:`_sql.column` elements with
types to produce them. The :meth:`_functions.FunctionElement.table_valued`
method produces a :class:`_sql.TableValuedAlias` construct, and the
:meth:`_sql.TableValuedAlias.render_derived` method sets up the derived
columns specification::
>>> from sqlalchemy import select, func, column, Integer, Text
>>> stmt = select(
... func.json_to_record('{"a":1,"b":[1,2,3],"c":"bar"}').table_valued(
... column("a", Integer), column("b", Text), column("d", Text),
... ).render_derived(name="x", with_types=True)
... )
>>> print(stmt)
SELECT x.a, x.b, x.d
FROM json_to_record(:json_to_record_1) AS x(a INTEGER, b TEXT, d TEXT)
* ``WITH ORDINALITY`` - part of the SQL standard, ``WITH ORDINALITY`` adds an
ordinal counter to the output of a function and is accepted by a limited set
of PostgreSQL functions including ``unnest()`` and ``generate_series()``. The
:meth:`_functions.FunctionElement.table_valued` method accepts a keyword
parameter ``with_ordinality`` for this purpose, which accepts the string name
that will be applied to the "ordinality" column::
>>> from sqlalchemy import select, func
>>> stmt = select(
... func.generate_series(4, 1, -1).table_valued("value", with_ordinality="ordinality")
... )
>>> print(stmt)
SELECT anon_1.value, anon_1.ordinality
FROM generate_series(:generate_series_1, :generate_series_2, :generate_series_3) WITH ORDINALITY AS anon_1
.. versionadded:: 1.4.0b2
.. seealso::
:ref:`tutorial_functions_table_valued` - in the :ref:`unified_tutorial`
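The ordinality counter pairs each returned row with a 1-based index;
Python's ``enumerate`` gives the same pairing, illustrated here against the
series 4, 3, 2, 1 produced by ``generate_series(4, 1, -1)`` above:

```python
# generate_series(4, 1, -1) yields 4, 3, 2, 1; WITH ORDINALITY attaches a
# 1-based row counter, which enumerate(..., start=1) mimics in Python.
series = list(range(4, 0, -1))  # Python's stop is exclusive, unlike SQL's
rows = [(value, ordinality)
        for ordinality, value in enumerate(series, start=1)]
print(rows)  # [(4, 1), (3, 2), (2, 3), (1, 4)]
```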
.. _postgresql_column_valued:
Column Valued Functions
^^^^^^^^^^^^^^^^^^^^^^^
Similar to the table valued function, a column valued function is present
in the FROM clause, but delivers itself to the columns clause as a single
scalar value. PostgreSQL functions such as ``json_array_elements()``,
``unnest()`` and ``generate_series()`` may use this form. Column valued functions are available using the
:meth:`_functions.FunctionElement.column_valued` method of :class:`_functions.FunctionElement`:
* ``json_array_elements()``::
>>> from sqlalchemy import select, func
>>> stmt = select(func.json_array_elements('["one", "two"]').column_valued("x"))
>>> print(stmt)
SELECT x
FROM json_array_elements(:json_array_elements_1) AS x
* ``unnest()`` - in order to generate a PostgreSQL ARRAY literal, the
:func:`_postgresql.array` construct may be used::
>>> from sqlalchemy.dialects.postgresql import array
>>> from sqlalchemy import select, func
>>> stmt = select(func.unnest(array([1, 2])).column_valued())
>>> print(stmt)
SELECT anon_1
FROM unnest(ARRAY[%(param_1)s, %(param_2)s]) AS anon_1
The function can of course be used against an existing table-bound column
that's of type :class:`_types.ARRAY`::
>>> from sqlalchemy import table, column, ARRAY, Integer
>>> from sqlalchemy import select, func
>>> t = table("t", column('value', ARRAY(Integer)))
>>> stmt = select(func.unnest(t.c.value).column_valued("unnested_value"))
>>> print(stmt)
SELECT unnested_value
FROM unnest(t.value) AS unnested_value
.. seealso::
:ref:`tutorial_functions_column_valued` - in the :ref:`unified_tutorial`
Row Types
^^^^^^^^^
Built-in support for rendering a ``ROW`` may be approximated using
``func.ROW`` with the :attr:`_sa.func` namespace, or by using the
:func:`_sql.tuple_` construct::
>>> from sqlalchemy import table, column, func, tuple_
>>> t = table("t", column("id"), column("fk"))
>>> stmt = t.select().where(
... tuple_(t.c.id, t.c.fk) > (1,2)
... ).where(
... func.ROW(t.c.id, t.c.fk) < func.ROW(3, 7)
... )
>>> print(stmt)
SELECT t.id, t.fk
FROM t
WHERE (t.id, t.fk) > (:param_1, :param_2) AND ROW(t.id, t.fk) < ROW(:ROW_1, :ROW_2)
.. seealso::
`PostgreSQL Row Constructors
<https://www.postgresql.org/docs/current/sql-expressions.html#SQL-SYNTAX-ROW-CONSTRUCTORS>`_
`PostgreSQL Row Constructor Comparison
<https://www.postgresql.org/docs/current/functions-comparisons.html#ROW-WISE-COMPARISON>`_
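Row constructor comparison with ``<`` and ``>`` proceeds element by
element, left to right, which matches the lexicographic ordering Python
applies to tuples; a quick stdlib illustration of the same semantics:

```python
# Row-wise comparison stops at the first unequal pair of elements,
# exactly like Python tuple comparison.
assert (1, 2) > (1, 1)        # second element decides
assert (1, 2) < (2, 0)        # first element decides; the rest is ignored
assert not ((3, 7) < (3, 7))  # equal rows are not strictly less-than
print("lexicographic")
```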
Table Types passed to Functions
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
PostgreSQL supports passing a table as an argument to a function, which it
refers to as a "record" type. SQLAlchemy :class:`_sql.FromClause` objects
such as :class:`_schema.Table` support this special form using the
:meth:`_sql.FromClause.table_valued` method, which is comparable to the
:meth:`_functions.FunctionElement.table_valued` method except that the collection
of columns is already established by that of the :class:`_sql.FromClause`
itself::
>>> from sqlalchemy import table, column, func, select
>>> a = table("a", column("id"), column("x"), column("y"))
>>> stmt = select(func.row_to_json(a.table_valued()))
>>> print(stmt)
SELECT row_to_json(a) AS row_to_json_1
FROM a
.. versionadded:: 1.4.0b2
ARRAY Types
-----------
The PostgreSQL dialect supports arrays, both as multidimensional column types
as well as array literals:
* :class:`_postgresql.ARRAY` - ARRAY datatype
* :class:`_postgresql.array` - array literal
* :func:`_postgresql.array_agg` - ARRAY_AGG SQL function
* :class:`_postgresql.aggregate_order_by` - helper for PG's ORDER BY aggregate
function syntax.
JSON Types
----------
The PostgreSQL dialect supports both JSON and JSONB datatypes, including
psycopg2's native support and support for all of PostgreSQL's special
operators:
* :class:`_postgresql.JSON`
* :class:`_postgresql.JSONB`
HSTORE Type
-----------
The PostgreSQL HSTORE type as well as hstore literals are supported:
* :class:`_postgresql.HSTORE` - HSTORE datatype
* :class:`_postgresql.hstore` - hstore literal
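The HSTORE text representation pairs double-quoted keys and values with
``=>``; a rough stdlib sketch of serializing a flat string dict into that
form (``to_hstore_literal`` is a name invented here for illustration, and
escaping of embedded quotes is omitted):

```python
def to_hstore_literal(mapping):
    # join '"key"=>"value"' pairs; real drivers additionally escape
    # backslashes and embedded double quotes
    return ", ".join('"%s"=>"%s"' % (k, v) for k, v in mapping.items())

print(to_hstore_literal({"pages": "300"}))  # "pages"=>"300"
```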
ENUM Types
----------
PostgreSQL has an independently creatable TYPE structure which is used
to implement an enumerated type. This approach introduces significant
complexity on the SQLAlchemy side in terms of when this type should be
CREATED and DROPPED. The type object is also an independently reflectable
entity. The following sections should be consulted:
* :class:`_postgresql.ENUM` - DDL and typing support for ENUM.
* :meth:`.PGInspector.get_enums` - retrieve a listing of current ENUM types
* :meth:`.postgresql.ENUM.create`, :meth:`.postgresql.ENUM.drop` - individual
CREATE and DROP commands for ENUM.
.. _postgresql_array_of_enum:
Using ENUM with ARRAY
^^^^^^^^^^^^^^^^^^^^^
The combination of ENUM and ARRAY is not directly supported by backend
DBAPIs at this time. Prior to SQLAlchemy 1.3.17, a special workaround
was needed in order to allow this combination to work, described below.
.. versionchanged:: 1.3.17 The combination of ENUM and ARRAY is now directly
handled by SQLAlchemy's implementation without any workarounds needed.
.. sourcecode:: python
import re
import sqlalchemy as sa
from sqlalchemy import TypeDecorator
from sqlalchemy.dialects.postgresql import ARRAY, ENUM
class ArrayOfEnum(TypeDecorator):
impl = ARRAY
def bind_expression(self, bindvalue):
return sa.cast(bindvalue, self)
def result_processor(self, dialect, coltype):
super_rp = super(ArrayOfEnum, self).result_processor(
dialect, coltype)
def handle_raw_string(value):
inner = re.match(r"^{(.*)}$", value).group(1)
return inner.split(",") if inner else []
def process(value):
if value is None:
return None
return super_rp(handle_raw_string(value))
return process
E.g.::
Table(
'mydata', metadata,
Column('id', Integer, primary_key=True),
Column('data', ArrayOfEnum(ENUM('a', 'b', 'c', name='myenum')))
)
This type is not included as a built-in type as it would be incompatible
with a DBAPI that suddenly decides to support ARRAY of ENUM directly in
a new version.
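The ``handle_raw_string`` helper in the historical workaround above boils
down to unwrapping the braces of a PostgreSQL array literal; that parsing
step can be exercised standalone (``parse_pg_array`` is a name used here
for illustration, with quoting/escaping of elements omitted):

```python
import re

def parse_pg_array(value):
    # strip the enclosing braces, then split on commas; an empty
    # literal '{}' yields an empty list
    inner = re.match(r"^{(.*)}$", value).group(1)
    return inner.split(",") if inner else []

print(parse_pg_array("{a,b,c}"))  # ['a', 'b', 'c']
print(parse_pg_array("{}"))       # []
```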
.. _postgresql_array_of_json:
Using JSON/JSONB with ARRAY
^^^^^^^^^^^^^^^^^^^^^^^^^^^
Similar to ENUM, prior to SQLAlchemy 1.3.17 an ARRAY of JSON/JSONB required
rendering an appropriate CAST. Current psycopg2 drivers accommodate the
result set correctly without any special steps.
.. versionchanged:: 1.3.17 The combination of JSON/JSONB and ARRAY is now
directly handled by SQLAlchemy's implementation without any workarounds
needed.
.. sourcecode:: python
import sqlalchemy as sa
from sqlalchemy.dialects.postgresql import ARRAY, JSONB

class CastingArray(ARRAY):
    def bind_expression(self, bindvalue):
        return sa.cast(bindvalue, self)
E.g.::
Table(
'mydata', metadata,
Column('id', Integer, primary_key=True),
Column('data', CastingArray(JSONB))
)
""" # noqa E501
from collections import defaultdict
import datetime as dt
import re
from uuid import UUID as _python_UUID
from . import array as _array
from . import hstore as _hstore
from . import json as _json
from . import ranges as _ranges
from ... import exc
from ... import schema
from ... import sql
from ... import util
from ...engine import characteristics
from ...engine import default
from ...engine import reflection
from ...sql import coercions
from ...sql import compiler
from ...sql import elements
from ...sql import expression
from ...sql import roles
from ...sql import sqltypes
from ...sql import util as sql_util
from ...sql.ddl import DDLBase
from ...types import BIGINT
from ...types import BOOLEAN
from ...types import CHAR
from ...types import DATE
from ...types import FLOAT
from ...types import INTEGER
from ...types import NUMERIC
from ...types import REAL
from ...types import SMALLINT
from ...types import TEXT
from ...types import VARCHAR
IDX_USING = re.compile(r"^(?:btree|hash|gist|gin|[\w_]+)$", re.I)
AUTOCOMMIT_REGEXP = re.compile(
r"\s*(?:UPDATE|INSERT|CREATE|DELETE|DROP|ALTER|GRANT|REVOKE|"
"IMPORT FOREIGN SCHEMA|REFRESH MATERIALIZED VIEW|TRUNCATE)",
re.I | re.UNICODE,
)
RESERVED_WORDS = set(
[
"all",
"analyse",
"analyze",
"and",
"any",
"array",
"as",
"asc",
"asymmetric",
"both",
"case",
"cast",
"check",
"collate",
"column",
"constraint",
"create",
"current_catalog",
"current_date",
"current_role",
"current_time",
"current_timestamp",
"current_user",
"default",
"deferrable",
"desc",
"distinct",
"do",
"else",
"end",
"except",
"false",
"fetch",
"for",
"foreign",
"from",
"grant",
"group",
"having",
"in",
"initially",
"intersect",
"into",
"leading",
"limit",
"localtime",
"localtimestamp",
"new",
"not",
"null",
"of",
"off",
"offset",
"old",
"on",
"only",
"or",
"order",
"placing",
"primary",
"references",
"returning",
"select",
"session_user",
"some",
"symmetric",
"table",
"then",
"to",
"trailing",
"true",
"union",
"unique",
"user",
"using",
"variadic",
"when",
"where",
"window",
"with",
"authorization",
"between",
"binary",
"cross",
"current_schema",
"freeze",
"full",
"ilike",
"inner",
"is",
"isnull",
"join",
"left",
"like",
"natural",
"notnull",
"outer",
"over",
"overlaps",
"right",
"similar",
"verbose",
]
)
_DECIMAL_TYPES = (1231, 1700)
_FLOAT_TYPES = (700, 701, 1021, 1022)
_INT_TYPES = (20, 21, 23, 26, 1005, 1007, 1016)
class BYTEA(sqltypes.LargeBinary):
__visit_name__ = "BYTEA"
class DOUBLE_PRECISION(sqltypes.Float):
__visit_name__ = "DOUBLE_PRECISION"
class INET(sqltypes.TypeEngine):
__visit_name__ = "INET"
PGInet = INET
class CIDR(sqltypes.TypeEngine):
__visit_name__ = "CIDR"
PGCidr = CIDR
class MACADDR(sqltypes.TypeEngine):
__visit_name__ = "MACADDR"
PGMacAddr = MACADDR
class MONEY(sqltypes.TypeEngine):
r"""Provide the PostgreSQL MONEY type.
Depending on driver, result rows using this type may return a
string value which includes currency symbols.
For this reason, it may be preferable to provide conversion to a
numerically-based currency datatype using :class:`_types.TypeDecorator`::
import re
import decimal
from typing import Any
from sqlalchemy import TypeDecorator
class NumericMoney(TypeDecorator):
impl = MONEY
def process_result_value(self, value: Any, dialect: Any) -> Any:
if value is not None:
# adjust this for the currency and numeric
m = re.match(r"\$([\d.]+)", value)
if m:
value = decimal.Decimal(m.group(1))
return value
Alternatively, the conversion may be applied as a CAST using
the :meth:`_types.TypeDecorator.column_expression` method as follows::
import decimal
from typing import Any
from sqlalchemy import cast
from sqlalchemy import Numeric
from sqlalchemy import TypeDecorator
class NumericMoney(TypeDecorator):
impl = MONEY
def column_expression(self, column: Any):
return cast(column, Numeric())
.. versionadded:: 1.2
"""
__visit_name__ = "MONEY"
class OID(sqltypes.TypeEngine):
"""Provide the PostgreSQL OID type.
.. versionadded:: 0.9.5
"""
__visit_name__ = "OID"
class REGCLASS(sqltypes.TypeEngine):
"""Provide the PostgreSQL REGCLASS type.
.. versionadded:: 1.2.7
"""
__visit_name__ = "REGCLASS"
class TIMESTAMP(sqltypes.TIMESTAMP):
def __init__(self, timezone=False, precision=None):
super(TIMESTAMP, self).__init__(timezone=timezone)
self.precision = precision
class TIME(sqltypes.TIME):
def __init__(self, timezone=False, precision=None):
super(TIME, self).__init__(timezone=timezone)
self.precision = precision
class INTERVAL(sqltypes.NativeForEmulated, sqltypes._AbstractInterval):
"""PostgreSQL INTERVAL type."""
__visit_name__ = "INTERVAL"
native = True
def __init__(self, precision=None, fields=None):
"""Construct an INTERVAL.
:param precision: optional integer precision value
:param fields: string fields specifier. Allows storage of fields
to be limited, such as ``"YEAR"``, ``"MONTH"``, ``"DAY TO HOUR"``,
etc.
.. versionadded:: 1.2
"""
self.precision = precision
self.fields = fields
@classmethod
def adapt_emulated_to_native(cls, interval, **kw):
return INTERVAL(precision=interval.second_precision)
@property
def _type_affinity(self):
return sqltypes.Interval
def as_generic(self, allow_nulltype=False):
return sqltypes.Interval(native=True, second_precision=self.precision)
@property
def python_type(self):
return dt.timedelta
PGInterval = INTERVAL
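# A minimal sketch of the DDL string this type renders to, mirroring
# PGTypeCompiler.visit_INTERVAL further down in this module (literal
# arguments here stand in for the type object's attributes):

```python
def render_interval(fields=None, precision=None):
    # Optional fields specifier first ("YEAR", "DAY TO HOUR", ...),
    # then an optional precision in parentheses.
    text = "INTERVAL"
    if fields is not None:
        text += " " + fields
    if precision is not None:
        text += " (%d)" % precision
    return text
```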
class BIT(sqltypes.TypeEngine):
__visit_name__ = "BIT"
def __init__(self, length=None, varying=False):
if not varying:
# BIT without VARYING defaults to length 1
self.length = length or 1
else:
# but BIT VARYING can be unlimited-length, so no default
self.length = length
self.varying = varying
PGBit = BIT
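# A minimal sketch of the DDL this type renders to, combining the length
# defaulting in BIT.__init__ with PGTypeCompiler.visit_BIT below (the
# function is illustrative, not part of the dialect):

```python
def render_bit(length=None, varying=False):
    # Plain BIT defaults to length 1; BIT VARYING may be unlimited.
    if varying:
        return "BIT VARYING" + ("(%d)" % length if length is not None else "")
    return "BIT(%d)" % (length or 1)
```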
class UUID(sqltypes.TypeEngine):
"""PostgreSQL UUID type.
Represents the UUID column type, interpreting
data either as natively returned by the DBAPI
or as Python uuid objects.
The UUID type may not be supported on all DBAPIs.
It is known to work on psycopg2 and not pg8000.
"""
__visit_name__ = "UUID"
def __init__(self, as_uuid=False):
"""Construct a UUID type.
:param as_uuid=False: if True, values will be interpreted
as Python uuid objects, converting to/from string via the
DBAPI.
"""
self.as_uuid = as_uuid
def coerce_compared_value(self, op, value):
"""See :meth:`.TypeEngine.coerce_compared_value` for a description."""
if isinstance(value, util.string_types):
return self
else:
return super(UUID, self).coerce_compared_value(op, value)
def bind_processor(self, dialect):
if self.as_uuid:
def process(value):
if value is not None:
value = util.text_type(value)
return value
return process
else:
return None
def result_processor(self, dialect, coltype):
if self.as_uuid:
def process(value):
if value is not None:
value = _python_UUID(value)
return value
return process
else:
return None
PGUuid = UUID
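# The as_uuid=True round trip above can be sketched with the stdlib uuid
# module; these are simplified stand-ins for the bind_processor and
# result_processor closures, not the dialect's own code:

```python
import uuid


def bind_uuid(value):
    # Bind direction: Python uuid.UUID -> string for the DBAPI.
    return str(value) if value is not None else None


def result_uuid(value):
    # Result direction: string from the DBAPI -> Python uuid.UUID.
    return uuid.UUID(value) if value is not None else None
```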
class TSVECTOR(sqltypes.TypeEngine):
"""The :class:`_postgresql.TSVECTOR` type implements the PostgreSQL
text search type TSVECTOR.
It can be used to do full text queries on natural language
documents.
.. versionadded:: 0.9.0
.. seealso::
:ref:`postgresql_match`
"""
__visit_name__ = "TSVECTOR"
class ENUM(sqltypes.NativeForEmulated, sqltypes.Enum):
"""PostgreSQL ENUM type.
This is a subclass of :class:`_types.Enum` which includes
support for PG's ``CREATE TYPE`` and ``DROP TYPE``.
When the builtin type :class:`_types.Enum` is used and the
:paramref:`.Enum.native_enum` flag is left at its default of
True, the PostgreSQL backend will use a :class:`_postgresql.ENUM`
type as the implementation, so the special create/drop rules
will be used.
The create/drop behavior of ENUM is necessarily intricate, due to the
awkward relationship the ENUM type has with the
parent table, in that it may be "owned" by just a single table, or
may be shared among many tables.
When using :class:`_types.Enum` or :class:`_postgresql.ENUM`
in an "inline" fashion, the ``CREATE TYPE`` and ``DROP TYPE`` is emitted
corresponding to when the :meth:`_schema.Table.create` and
:meth:`_schema.Table.drop`
methods are called::
table = Table('sometable', metadata,
Column('some_enum', ENUM('a', 'b', 'c', name='myenum'))
)
table.create(engine) # will emit CREATE ENUM and CREATE TABLE
table.drop(engine) # will emit DROP TABLE and DROP ENUM
To use a common enumerated type between multiple tables, the best
practice is to declare the :class:`_types.Enum` or
:class:`_postgresql.ENUM` independently, and associate it with the
:class:`_schema.MetaData` object itself::
my_enum = ENUM('a', 'b', 'c', name='myenum', metadata=metadata)
t1 = Table('sometable_one', metadata,
Column('some_enum', myenum)
)
t2 = Table('sometable_two', metadata,
Column('some_enum', myenum)
)
When this pattern is used, care must still be taken at the level
of individual table creates. Emitting CREATE TABLE without also
specifying ``checkfirst=True`` will still cause issues::
t1.create(engine) # will fail: no such type 'myenum'
If we specify ``checkfirst=True``, the individual table-level create
operation will check for the ``ENUM`` and create if not exists::
# will check if enum exists, and emit CREATE TYPE if not
t1.create(engine, checkfirst=True)
When using a metadata-level ENUM type, the type will always be created
and dropped if either the metadata-wide create/drop is called::
metadata.create_all(engine) # will emit CREATE TYPE
metadata.drop_all(engine) # will emit DROP TYPE
The type can also be created and dropped directly::
my_enum.create(engine)
my_enum.drop(engine)
.. versionchanged:: 1.0.0 The PostgreSQL :class:`_postgresql.ENUM` type
now behaves more strictly with regards to CREATE/DROP. A metadata-level
ENUM type will only be created and dropped at the metadata level,
not the table level, with the exception of
``table.create(checkfirst=True)``.
The ``table.drop()`` call will now emit a DROP TYPE for a table-level
enumerated type.
"""
native_enum = True
def __init__(self, *enums, **kw):
"""Construct an :class:`_postgresql.ENUM`.
Arguments are the same as those of
:class:`_types.Enum`, but also including
the following parameters.
:param create_type: Defaults to True.
Indicates that ``CREATE TYPE`` should be
emitted, after optionally checking for the
presence of the type, when the parent
table is being created; and additionally
that ``DROP TYPE`` is called when the table
is dropped. When ``False``, no check
will be performed and no ``CREATE TYPE``
or ``DROP TYPE`` is emitted, unless
:meth:`~.postgresql.ENUM.create`
or :meth:`~.postgresql.ENUM.drop`
are called directly.
Setting to ``False`` is helpful
when emitting a creation scheme to a SQL file
without access to the actual database -
the :meth:`~.postgresql.ENUM.create` and
:meth:`~.postgresql.ENUM.drop` methods can
be used to emit SQL to a target bind.
"""
self.create_type = kw.pop("create_type", True)
super(ENUM, self).__init__(*enums, **kw)
@classmethod
def adapt_emulated_to_native(cls, impl, **kw):
"""Produce a PostgreSQL native :class:`_postgresql.ENUM` from plain
:class:`.Enum`.
"""
kw.setdefault("validate_strings", impl.validate_strings)
kw.setdefault("name", impl.name)
kw.setdefault("schema", impl.schema)
kw.setdefault("inherit_schema", impl.inherit_schema)
kw.setdefault("metadata", impl.metadata)
kw.setdefault("_create_events", False)
kw.setdefault("values_callable", impl.values_callable)
return cls(**kw)
def create(self, bind=None, checkfirst=True):
"""Emit ``CREATE TYPE`` for this
:class:`_postgresql.ENUM`.
If the underlying dialect does not support
PostgreSQL CREATE TYPE, no action is taken.
:param bind: a connectable :class:`_engine.Engine`,
:class:`_engine.Connection`, or similar object to emit
SQL.
:param checkfirst: if ``True``, a query against
the PG catalog will first be performed to see
whether the type already exists before
creating it.
"""
if not bind.dialect.supports_native_enum:
return
bind._run_ddl_visitor(self.EnumGenerator, self, checkfirst=checkfirst)
def drop(self, bind=None, checkfirst=True):
"""Emit ``DROP TYPE`` for this
:class:`_postgresql.ENUM`.
If the underlying dialect does not support
PostgreSQL DROP TYPE, no action is taken.
:param bind: a connectable :class:`_engine.Engine`,
:class:`_engine.Connection`, or similar object to emit
SQL.
:param checkfirst: if ``True``, a query against
the PG catalog will first be performed to see
if the type actually exists before dropping it.
"""
if not bind.dialect.supports_native_enum:
return
bind._run_ddl_visitor(self.EnumDropper, self, checkfirst=checkfirst)
class EnumGenerator(DDLBase):
def __init__(self, dialect, connection, checkfirst=False, **kwargs):
super(ENUM.EnumGenerator, self).__init__(connection, **kwargs)
self.checkfirst = checkfirst
def _can_create_enum(self, enum):
if not self.checkfirst:
return True
effective_schema = self.connection.schema_for_object(enum)
return not self.connection.dialect.has_type(
self.connection, enum.name, schema=effective_schema
)
def visit_enum(self, enum):
if not self._can_create_enum(enum):
return
self.connection.execute(CreateEnumType(enum))
class EnumDropper(DDLBase):
def __init__(self, dialect, connection, checkfirst=False, **kwargs):
super(ENUM.EnumDropper, self).__init__(connection, **kwargs)
self.checkfirst = checkfirst
def _can_drop_enum(self, enum):
if not self.checkfirst:
return True
effective_schema = self.connection.schema_for_object(enum)
return self.connection.dialect.has_type(
self.connection, enum.name, schema=effective_schema
)
def visit_enum(self, enum):
if not self._can_drop_enum(enum):
return
self.connection.execute(DropEnumType(enum))
def _check_for_name_in_memos(self, checkfirst, kw):
"""Look in the 'ddl runner' for 'memos', then
note our name in that collection.
This is to ensure a particular named enum is operated
upon only once within any kind of create/drop
sequence without relying upon "checkfirst".
"""
if not self.create_type:
return True
if "_ddl_runner" in kw:
ddl_runner = kw["_ddl_runner"]
if "_pg_enums" in ddl_runner.memo:
pg_enums = ddl_runner.memo["_pg_enums"]
else:
pg_enums = ddl_runner.memo["_pg_enums"] = set()
present = (self.schema, self.name) in pg_enums
pg_enums.add((self.schema, self.name))
return present
else:
return False
def _on_table_create(self, target, bind, checkfirst=False, **kw):
if (
checkfirst
or (
not self.metadata
and not kw.get("_is_metadata_operation", False)
)
) and not self._check_for_name_in_memos(checkfirst, kw):
self.create(bind=bind, checkfirst=checkfirst)
def _on_table_drop(self, target, bind, checkfirst=False, **kw):
if (
not self.metadata
and not kw.get("_is_metadata_operation", False)
and not self._check_for_name_in_memos(checkfirst, kw)
):
self.drop(bind=bind, checkfirst=checkfirst)
def _on_metadata_create(self, target, bind, checkfirst=False, **kw):
if not self._check_for_name_in_memos(checkfirst, kw):
self.create(bind=bind, checkfirst=checkfirst)
def _on_metadata_drop(self, target, bind, checkfirst=False, **kw):
if not self._check_for_name_in_memos(checkfirst, kw):
self.drop(bind=bind, checkfirst=checkfirst)
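# The memo bookkeeping in _check_for_name_in_memos above reduces to a
# small set-based dedup; a standalone sketch (the names are illustrative
# stand-ins for the ddl runner's memo dictionary):

```python
def note_enum(memo, schema, name):
    # Record (schema, name) in the shared memo and report whether it
    # was already seen, so a named enum is created or dropped only
    # once within a create/drop sequence.
    pg_enums = memo.setdefault("_pg_enums", set())
    present = (schema, name) in pg_enums
    pg_enums.add((schema, name))
    return present
```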
colspecs = {
sqltypes.ARRAY: _array.ARRAY,
sqltypes.Interval: INTERVAL,
sqltypes.Enum: ENUM,
sqltypes.JSON.JSONPathType: _json.JSONPathType,
sqltypes.JSON: _json.JSON,
}
ischema_names = {
"_array": _array.ARRAY,
"hstore": _hstore.HSTORE,
"json": _json.JSON,
"jsonb": _json.JSONB,
"int4range": _ranges.INT4RANGE,
"int8range": _ranges.INT8RANGE,
"numrange": _ranges.NUMRANGE,
"daterange": _ranges.DATERANGE,
"tsrange": _ranges.TSRANGE,
"tstzrange": _ranges.TSTZRANGE,
"integer": INTEGER,
"bigint": BIGINT,
"smallint": SMALLINT,
"character varying": VARCHAR,
"character": CHAR,
'"char"': sqltypes.String,
"name": sqltypes.String,
"text": TEXT,
"numeric": NUMERIC,
"float": FLOAT,
"real": REAL,
"inet": INET,
"cidr": CIDR,
"uuid": UUID,
"bit": BIT,
"bit varying": BIT,
"macaddr": MACADDR,
"money": MONEY,
"oid": OID,
"regclass": REGCLASS,
"double precision": DOUBLE_PRECISION,
"timestamp": TIMESTAMP,
"timestamp with time zone": TIMESTAMP,
"timestamp without time zone": TIMESTAMP,
"time with time zone": TIME,
"time without time zone": TIME,
"date": DATE,
"time": TIME,
"bytea": BYTEA,
"boolean": BOOLEAN,
"interval": INTERVAL,
"tsvector": TSVECTOR,
}
class PGCompiler(compiler.SQLCompiler):
def visit_array(self, element, **kw):
return "ARRAY[%s]" % self.visit_clauselist(element, **kw)
def visit_slice(self, element, **kw):
return "%s:%s" % (
self.process(element.start, **kw),
self.process(element.stop, **kw),
)
def visit_json_getitem_op_binary(
self, binary, operator, _cast_applied=False, **kw
):
if (
not _cast_applied
and binary.type._type_affinity is not sqltypes.JSON
):
kw["_cast_applied"] = True
return self.process(sql.cast(binary, binary.type), **kw)
kw["eager_grouping"] = True
return self._generate_generic_binary(
binary, " -> " if not _cast_applied else " ->> ", **kw
)
def visit_json_path_getitem_op_binary(
self, binary, operator, _cast_applied=False, **kw
):
if (
not _cast_applied
and binary.type._type_affinity is not sqltypes.JSON
):
kw["_cast_applied"] = True
return self.process(sql.cast(binary, binary.type), **kw)
kw["eager_grouping"] = True
return self._generate_generic_binary(
binary, " #> " if not _cast_applied else " #>> ", **kw
)
def visit_getitem_binary(self, binary, operator, **kw):
return "%s[%s]" % (
self.process(binary.left, **kw),
self.process(binary.right, **kw),
)
def visit_aggregate_order_by(self, element, **kw):
return "%s ORDER BY %s" % (
self.process(element.target, **kw),
self.process(element.order_by, **kw),
)
def visit_match_op_binary(self, binary, operator, **kw):
if "postgresql_regconfig" in binary.modifiers:
regconfig = self.render_literal_value(
binary.modifiers["postgresql_regconfig"], sqltypes.STRINGTYPE
)
if regconfig:
return "%s @@ to_tsquery(%s, %s)" % (
self.process(binary.left, **kw),
regconfig,
self.process(binary.right, **kw),
)
return "%s @@ to_tsquery(%s)" % (
self.process(binary.left, **kw),
self.process(binary.right, **kw),
)
def visit_ilike_op_binary(self, binary, operator, **kw):
escape = binary.modifiers.get("escape", None)
return "%s ILIKE %s" % (
self.process(binary.left, **kw),
self.process(binary.right, **kw),
) + (
" ESCAPE " + self.render_literal_value(escape, sqltypes.STRINGTYPE)
if escape
else ""
)
def visit_not_ilike_op_binary(self, binary, operator, **kw):
escape = binary.modifiers.get("escape", None)
return "%s NOT ILIKE %s" % (
self.process(binary.left, **kw),
self.process(binary.right, **kw),
) + (
" ESCAPE " + self.render_literal_value(escape, sqltypes.STRINGTYPE)
if escape
else ""
)
def _regexp_match(self, base_op, binary, operator, kw):
flags = binary.modifiers["flags"]
if flags is None:
return self._generate_generic_binary(
binary, " %s " % base_op, **kw
)
if isinstance(flags, elements.BindParameter) and flags.value == "i":
return self._generate_generic_binary(
binary, " %s* " % base_op, **kw
)
flags = self.process(flags, **kw)
string = self.process(binary.left, **kw)
pattern = self.process(binary.right, **kw)
return "%s %s CONCAT('(?', %s, ')', %s)" % (
string,
base_op,
flags,
pattern,
)
def visit_regexp_match_op_binary(self, binary, operator, **kw):
return self._regexp_match("~", binary, operator, kw)
def visit_not_regexp_match_op_binary(self, binary, operator, **kw):
return self._regexp_match("!~", binary, operator, kw)
def visit_regexp_replace_op_binary(self, binary, operator, **kw):
string = self.process(binary.left, **kw)
pattern = self.process(binary.right, **kw)
flags = binary.modifiers["flags"]
if flags is not None:
flags = self.process(flags, **kw)
replacement = self.process(binary.modifiers["replacement"], **kw)
if flags is None:
return "REGEXP_REPLACE(%s, %s, %s)" % (
string,
pattern,
replacement,
)
else:
return "REGEXP_REPLACE(%s, %s, %s, %s)" % (
string,
pattern,
replacement,
flags,
)
def visit_empty_set_expr(self, element_types):
# cast the empty set to the type we are comparing against. if
# we are comparing against the null type, pick an arbitrary
# datatype for the empty set
return "SELECT %s WHERE 1!=1" % (
", ".join(
"CAST(NULL AS %s)"
% self.dialect.type_compiler.process(
INTEGER() if type_._isnull else type_
)
for type_ in element_types or [INTEGER()]
),
)
def render_literal_value(self, value, type_):
value = super(PGCompiler, self).render_literal_value(value, type_)
if self.dialect._backslash_escapes:
value = value.replace("\\", "\\\\")
return value
def visit_sequence(self, seq, **kw):
return "nextval('%s')" % self.preparer.format_sequence(seq)
def limit_clause(self, select, **kw):
text = ""
if select._limit_clause is not None:
text += " \n LIMIT " + self.process(select._limit_clause, **kw)
if select._offset_clause is not None:
if select._limit_clause is None:
text += "\n LIMIT ALL"
text += " OFFSET " + self.process(select._offset_clause, **kw)
return text
def format_from_hint_text(self, sqltext, table, hint, iscrud):
if hint.upper() != "ONLY":
raise exc.CompileError("Unrecognized hint: %r" % hint)
return "ONLY " + sqltext
def get_select_precolumns(self, select, **kw):
# Do not call super().get_select_precolumns because
# it will warn/raise when distinct on is present
if select._distinct or select._distinct_on:
if select._distinct_on:
return (
"DISTINCT ON ("
+ ", ".join(
[
self.process(col, **kw)
for col in select._distinct_on
]
)
+ ") "
)
else:
return "DISTINCT "
else:
return ""
def for_update_clause(self, select, **kw):
if select._for_update_arg.read:
if select._for_update_arg.key_share:
tmp = " FOR KEY SHARE"
else:
tmp = " FOR SHARE"
elif select._for_update_arg.key_share:
tmp = " FOR NO KEY UPDATE"
else:
tmp = " FOR UPDATE"
if select._for_update_arg.of:
tables = util.OrderedSet()
for c in select._for_update_arg.of:
tables.update(sql_util.surface_selectables_only(c))
tmp += " OF " + ", ".join(
self.process(table, ashint=True, use_schema=False, **kw)
for table in tables
)
if select._for_update_arg.nowait:
tmp += " NOWAIT"
if select._for_update_arg.skip_locked:
tmp += " SKIP LOCKED"
return tmp
def returning_clause(self, stmt, returning_cols):
columns = [
self._label_select_column(None, c, True, False, {})
for c in expression._select_iterables(returning_cols)
]
return "RETURNING " + ", ".join(columns)
def visit_substring_func(self, func, **kw):
s = self.process(func.clauses.clauses[0], **kw)
start = self.process(func.clauses.clauses[1], **kw)
if len(func.clauses.clauses) > 2:
length = self.process(func.clauses.clauses[2], **kw)
return "SUBSTRING(%s FROM %s FOR %s)" % (s, start, length)
else:
return "SUBSTRING(%s FROM %s)" % (s, start)
def _on_conflict_target(self, clause, **kw):
if clause.constraint_target is not None:
target_text = "ON CONSTRAINT %s" % clause.constraint_target
elif clause.inferred_target_elements is not None:
target_text = "(%s)" % ", ".join(
(
self.preparer.quote(c)
if isinstance(c, util.string_types)
else self.process(c, include_table=False, use_schema=False)
)
for c in clause.inferred_target_elements
)
if clause.inferred_target_whereclause is not None:
target_text += " WHERE %s" % self.process(
clause.inferred_target_whereclause,
include_table=False,
use_schema=False,
)
else:
target_text = ""
return target_text
def visit_on_conflict_do_nothing(self, on_conflict, **kw):
target_text = self._on_conflict_target(on_conflict, **kw)
if target_text:
return "ON CONFLICT %s DO NOTHING" % target_text
else:
return "ON CONFLICT DO NOTHING"
def visit_on_conflict_do_update(self, on_conflict, **kw):
clause = on_conflict
target_text = self._on_conflict_target(on_conflict, **kw)
action_set_ops = []
set_parameters = dict(clause.update_values_to_set)
# create a list of column assignment clauses as tuples
insert_statement = self.stack[-1]["selectable"]
cols = insert_statement.table.c
for c in cols:
col_key = c.key
if col_key in set_parameters:
value = set_parameters.pop(col_key)
elif c in set_parameters:
value = set_parameters.pop(c)
else:
continue
if coercions._is_literal(value):
value = elements.BindParameter(None, value, type_=c.type)
else:
if (
isinstance(value, elements.BindParameter)
and value.type._isnull
):
value = value._clone()
value.type = c.type
value_text = self.process(value.self_group(), use_schema=False)
key_text = self.preparer.quote(col_key)
action_set_ops.append("%s = %s" % (key_text, value_text))
# check for names that don't match columns
if set_parameters:
util.warn(
"Additional column names not matching "
"any column keys in table '%s': %s"
% (
self.current_executable.table.name,
(", ".join("'%s'" % c for c in set_parameters)),
)
)
for k, v in set_parameters.items():
key_text = (
self.preparer.quote(k)
if isinstance(k, util.string_types)
else self.process(k, use_schema=False)
)
value_text = self.process(
coercions.expect(roles.ExpressionElementRole, v),
use_schema=False,
)
action_set_ops.append("%s = %s" % (key_text, value_text))
action_text = ", ".join(action_set_ops)
if clause.update_whereclause is not None:
action_text += " WHERE %s" % self.process(
clause.update_whereclause, include_table=True, use_schema=False
)
return "ON CONFLICT %s DO UPDATE SET %s" % (target_text, action_text)
def update_from_clause(
self, update_stmt, from_table, extra_froms, from_hints, **kw
):
return "FROM " + ", ".join(
t._compiler_dispatch(self, asfrom=True, fromhints=from_hints, **kw)
for t in extra_froms
)
def delete_extra_from_clause(
self, delete_stmt, from_table, extra_froms, from_hints, **kw
):
"""Render the DELETE .. USING clause specific to PostgreSQL."""
return "USING " + ", ".join(
t._compiler_dispatch(self, asfrom=True, fromhints=from_hints, **kw)
for t in extra_froms
)
def fetch_clause(self, select, **kw):
# PG requires parens for non-literal clauses. They're also required
# for bind parameters if a ::type cast is used by the driver
# (asyncpg), so it's easiest to just always add them
text = ""
if select._offset_clause is not None:
text += "\n OFFSET (%s) ROWS" % self.process(
select._offset_clause, **kw
)
if select._fetch_clause is not None:
text += "\n FETCH FIRST (%s)%s ROWS %s" % (
self.process(select._fetch_clause, **kw),
" PERCENT" if select._fetch_clause_options["percent"] else "",
"WITH TIES"
if select._fetch_clause_options["with_ties"]
else "ONLY",
)
return text
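# The LIMIT/OFFSET handling in PGCompiler.limit_clause above can be
# sketched on plain strings; note the explicit "LIMIT ALL" PostgreSQL
# needs when only an OFFSET is present (literal values here stand in for
# compiled clause elements):

```python
def render_limit_offset(limit=None, offset=None):
    # Simplified stand-in for PGCompiler.limit_clause.
    text = ""
    if limit is not None:
        text += " \n LIMIT %s" % limit
    if offset is not None:
        if limit is None:
            text += "\n LIMIT ALL"
        text += " OFFSET %s" % offset
    return text
```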
class PGDDLCompiler(compiler.DDLCompiler):
def get_column_specification(self, column, **kwargs):
colspec = self.preparer.format_column(column)
impl_type = column.type.dialect_impl(self.dialect)
if isinstance(impl_type, sqltypes.TypeDecorator):
impl_type = impl_type.impl
if (
column.primary_key
and column is column.table._autoincrement_column
and (
self.dialect.supports_smallserial
or not isinstance(impl_type, sqltypes.SmallInteger)
)
and column.identity is None
and (
column.default is None
or (
isinstance(column.default, schema.Sequence)
and column.default.optional
)
)
):
if isinstance(impl_type, sqltypes.BigInteger):
colspec += " BIGSERIAL"
elif isinstance(impl_type, sqltypes.SmallInteger):
colspec += " SMALLSERIAL"
else:
colspec += " SERIAL"
else:
colspec += " " + self.dialect.type_compiler.process(
column.type,
type_expression=column,
identifier_preparer=self.preparer,
)
default = self.get_column_default_string(column)
if default is not None:
colspec += " DEFAULT " + default
if column.computed is not None:
colspec += " " + self.process(column.computed)
if column.identity is not None:
colspec += " " + self.process(column.identity)
if not column.nullable and not column.identity:
colspec += " NOT NULL"
elif column.nullable and column.identity:
colspec += " NULL"
return colspec
def visit_check_constraint(self, constraint):
if constraint._type_bound:
typ = list(constraint.columns)[0].type
if (
isinstance(typ, sqltypes.ARRAY)
and isinstance(typ.item_type, sqltypes.Enum)
and not typ.item_type.native_enum
):
raise exc.CompileError(
"PostgreSQL dialect cannot produce the CHECK constraint "
"for ARRAY of non-native ENUM; please specify "
"create_constraint=False on this Enum datatype."
)
return super(PGDDLCompiler, self).visit_check_constraint(constraint)
def visit_drop_table_comment(self, drop):
return "COMMENT ON TABLE %s IS NULL" % self.preparer.format_table(
drop.element
)
def visit_create_enum_type(self, create):
type_ = create.element
return "CREATE TYPE %s AS ENUM (%s)" % (
self.preparer.format_type(type_),
", ".join(
self.sql_compiler.process(sql.literal(e), literal_binds=True)
for e in type_.enums
),
)
def visit_drop_enum_type(self, drop):
type_ = drop.element
return "DROP TYPE %s" % (self.preparer.format_type(type_))
def visit_create_index(self, create):
preparer = self.preparer
index = create.element
self._verify_index_table(index)
text = "CREATE "
if index.unique:
text += "UNIQUE "
text += "INDEX "
if self.dialect._supports_create_index_concurrently:
concurrently = index.dialect_options["postgresql"]["concurrently"]
if concurrently:
text += "CONCURRENTLY "
if create.if_not_exists:
text += "IF NOT EXISTS "
text += "%s ON %s " % (
self._prepared_index_name(index, include_schema=False),
preparer.format_table(index.table),
)
using = index.dialect_options["postgresql"]["using"]
if using:
text += (
"USING %s "
% self.preparer.validate_sql_phrase(using, IDX_USING).lower()
)
ops = index.dialect_options["postgresql"]["ops"]
text += "(%s)" % (
", ".join(
[
self.sql_compiler.process(
expr.self_group()
if not isinstance(expr, expression.ColumnClause)
else expr,
include_table=False,
literal_binds=True,
)
+ (
(" " + ops[expr.key])
if hasattr(expr, "key") and expr.key in ops
else ""
)
for expr in index.expressions
]
)
)
includeclause = index.dialect_options["postgresql"]["include"]
if includeclause:
inclusions = [
index.table.c[col]
if isinstance(col, util.string_types)
else col
for col in includeclause
]
text += " INCLUDE (%s)" % ", ".join(
[preparer.quote(c.name) for c in inclusions]
)
withclause = index.dialect_options["postgresql"]["with"]
if withclause:
text += " WITH (%s)" % (
", ".join(
[
"%s = %s" % storage_parameter
for storage_parameter in withclause.items()
]
)
)
tablespace_name = index.dialect_options["postgresql"]["tablespace"]
if tablespace_name:
text += " TABLESPACE %s" % preparer.quote(tablespace_name)
whereclause = index.dialect_options["postgresql"]["where"]
if whereclause is not None:
whereclause = coercions.expect(
roles.DDLExpressionRole, whereclause
)
where_compiled = self.sql_compiler.process(
whereclause, include_table=False, literal_binds=True
)
text += " WHERE " + where_compiled
return text
def visit_drop_index(self, drop):
index = drop.element
text = "\nDROP INDEX "
if self.dialect._supports_drop_index_concurrently:
concurrently = index.dialect_options["postgresql"]["concurrently"]
if concurrently:
text += "CONCURRENTLY "
if drop.if_exists:
text += "IF EXISTS "
text += self._prepared_index_name(index, include_schema=True)
return text
def visit_exclude_constraint(self, constraint, **kw):
text = ""
if constraint.name is not None:
text += "CONSTRAINT %s " % self.preparer.format_constraint(
constraint
)
elements = []
for expr, name, op in constraint._render_exprs:
kw["include_table"] = False
exclude_element = self.sql_compiler.process(expr, **kw) + (
(" " + constraint.ops[expr.key])
if hasattr(expr, "key") and expr.key in constraint.ops
else ""
)
elements.append("%s WITH %s" % (exclude_element, op))
text += "EXCLUDE USING %s (%s)" % (
self.preparer.validate_sql_phrase(
constraint.using, IDX_USING
).lower(),
", ".join(elements),
)
if constraint.where is not None:
text += " WHERE (%s)" % self.sql_compiler.process(
constraint.where, literal_binds=True
)
text += self.define_constraint_deferrability(constraint)
return text
def post_create_table(self, table):
table_opts = []
pg_opts = table.dialect_options["postgresql"]
inherits = pg_opts.get("inherits")
if inherits is not None:
if not isinstance(inherits, (list, tuple)):
inherits = (inherits,)
table_opts.append(
"\n INHERITS ( "
+ ", ".join(self.preparer.quote(name) for name in inherits)
+ " )"
)
if pg_opts["partition_by"]:
table_opts.append("\n PARTITION BY %s" % pg_opts["partition_by"])
if pg_opts["with_oids"] is True:
table_opts.append("\n WITH OIDS")
elif pg_opts["with_oids"] is False:
table_opts.append("\n WITHOUT OIDS")
if pg_opts["on_commit"]:
on_commit_options = pg_opts["on_commit"].replace("_", " ").upper()
table_opts.append("\n ON COMMIT %s" % on_commit_options)
if pg_opts["tablespace"]:
tablespace_name = pg_opts["tablespace"]
table_opts.append(
"\n TABLESPACE %s" % self.preparer.quote(tablespace_name)
)
return "".join(table_opts)
def visit_computed_column(self, generated):
if generated.persisted is False:
raise exc.CompileError(
"PostgreSQL computed columns do not support 'virtual' "
"persistence; set the 'persisted' flag to None or True for "
"PostgreSQL support."
)
return "GENERATED ALWAYS AS (%s) STORED" % self.sql_compiler.process(
generated.sqltext, include_table=False, literal_binds=True
)
def visit_create_sequence(self, create, **kw):
prefix = None
if create.element.data_type is not None:
prefix = " AS %s" % self.type_compiler.process(
create.element.data_type
)
return super(PGDDLCompiler, self).visit_create_sequence(
create, prefix=prefix, **kw
)
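# The SERIAL keyword selection in get_column_specification above can be
# sketched as a simple decision function (the "kind" strings are
# illustrative, not part of the dialect):

```python
def serial_keyword(int_kind, supports_smallserial=True):
    # BigInteger -> BIGSERIAL, SmallInteger -> SMALLSERIAL; when the
    # server predates SMALLSERIAL the column falls back to a plain
    # typed column (None here); everything else gets SERIAL.
    if int_kind == "small":
        return "SMALLSERIAL" if supports_smallserial else None
    return "BIGSERIAL" if int_kind == "big" else "SERIAL"
```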
class PGTypeCompiler(compiler.GenericTypeCompiler):
def visit_TSVECTOR(self, type_, **kw):
return "TSVECTOR"
def visit_INET(self, type_, **kw):
return "INET"
def visit_CIDR(self, type_, **kw):
return "CIDR"
def visit_MACADDR(self, type_, **kw):
return "MACADDR"
def visit_MONEY(self, type_, **kw):
return "MONEY"
def visit_OID(self, type_, **kw):
return "OID"
def visit_REGCLASS(self, type_, **kw):
return "REGCLASS"
def visit_FLOAT(self, type_, **kw):
if not type_.precision:
return "FLOAT"
else:
return "FLOAT(%(precision)s)" % {"precision": type_.precision}
def visit_DOUBLE_PRECISION(self, type_, **kw):
return "DOUBLE PRECISION"
def visit_BIGINT(self, type_, **kw):
return "BIGINT"
def visit_HSTORE(self, type_, **kw):
return "HSTORE"
def visit_JSON(self, type_, **kw):
return "JSON"
def visit_JSONB(self, type_, **kw):
return "JSONB"
def visit_INT4RANGE(self, type_, **kw):
return "INT4RANGE"
def visit_INT8RANGE(self, type_, **kw):
return "INT8RANGE"
def visit_NUMRANGE(self, type_, **kw):
return "NUMRANGE"
def visit_DATERANGE(self, type_, **kw):
return "DATERANGE"
def visit_TSRANGE(self, type_, **kw):
return "TSRANGE"
def visit_TSTZRANGE(self, type_, **kw):
return "TSTZRANGE"
def visit_datetime(self, type_, **kw):
return self.visit_TIMESTAMP(type_, **kw)
def visit_enum(self, type_, **kw):
if not type_.native_enum or not self.dialect.supports_native_enum:
return super(PGTypeCompiler, self).visit_enum(type_, **kw)
else:
return self.visit_ENUM(type_, **kw)
def visit_ENUM(self, type_, identifier_preparer=None, **kw):
if identifier_preparer is None:
identifier_preparer = self.dialect.identifier_preparer
return identifier_preparer.format_type(type_)
def visit_TIMESTAMP(self, type_, **kw):
return "TIMESTAMP%s %s" % (
"(%d)" % type_.precision
if getattr(type_, "precision", None) is not None
else "",
(type_.timezone and "WITH" or "WITHOUT") + " TIME ZONE",
)
def visit_TIME(self, type_, **kw):
return "TIME%s %s" % (
"(%d)" % type_.precision
if getattr(type_, "precision", None) is not None
else "",
(type_.timezone and "WITH" or "WITHOUT") + " TIME ZONE",
)
def visit_INTERVAL(self, type_, **kw):
text = "INTERVAL"
if type_.fields is not None:
text += " " + type_.fields
if type_.precision is not None:
text += " (%d)" % type_.precision
return text
def visit_BIT(self, type_, **kw):
if type_.varying:
compiled = "BIT VARYING"
if type_.length is not None:
compiled += "(%d)" % type_.length
else:
compiled = "BIT(%d)" % type_.length
return compiled
def visit_UUID(self, type_, **kw):
return "UUID"
def visit_large_binary(self, type_, **kw):
return self.visit_BYTEA(type_, **kw)
def visit_BYTEA(self, type_, **kw):
return "BYTEA"
def visit_ARRAY(self, type_, **kw):
# TODO: pass **kw?
inner = self.process(type_.item_type)
return re.sub(
r"((?: COLLATE.*)?)$",
(
r"%s\1"
% (
"[]"
* (type_.dimensions if type_.dimensions is not None else 1)
)
),
inner,
count=1,
)
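# The COLLATE-aware splice in visit_ARRAY above is easier to see on
# concrete strings; a standalone sketch of the same re.sub call:

```python
import re


def add_array_suffix(inner, dimensions=None):
    # Append one "[]" per dimension, inserted *before* any trailing
    # COLLATE clause so 'VARCHAR(30) COLLATE "en_US"' stays valid DDL.
    suffix = "[]" * (dimensions if dimensions is not None else 1)
    return re.sub(r"((?: COLLATE.*)?)$", r"%s\1" % suffix, inner, count=1)
```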
class PGIdentifierPreparer(compiler.IdentifierPreparer):
reserved_words = RESERVED_WORDS
def _unquote_identifier(self, value):
if value[0] == self.initial_quote:
value = value[1:-1].replace(
self.escape_to_quote, self.escape_quote
)
return value
def format_type(self, type_, use_schema=True):
if not type_.name:
raise exc.CompileError("PostgreSQL ENUM type requires a name.")
name = self.quote(type_.name)
effective_schema = self.schema_for_object(type_)
if (
not self.omit_schema
and use_schema
and effective_schema is not None
):
name = self.quote_schema(effective_schema) + "." + name
return name
class PGInspector(reflection.Inspector):
def get_table_oid(self, table_name, schema=None):
"""Return the OID for the given table name."""
return self.dialect.get_table_oid(
self.bind, table_name, schema, info_cache=self.info_cache
)
def get_enums(self, schema=None):
"""Return a list of ENUM objects.
Each member is a dictionary containing these fields:
* name - name of the enum
* schema - the schema name for the enum.
* visible - boolean, whether or not this enum is visible
in the default search path.
* labels - a list of string labels that apply to the enum.
:param schema: schema name. If None, the default schema
(typically 'public') is used. May also be set to '*' to
indicate load enums for all schemas.
.. versionadded:: 1.0.0
"""
schema = schema or self.default_schema_name
return self.dialect._load_enums(self.bind, schema)
def get_foreign_table_names(self, schema=None):
"""Return a list of FOREIGN TABLE names.
Behavior is similar to that of
:meth:`_reflection.Inspector.get_table_names`,
except that the list is limited to those tables that report a
``relkind`` value of ``f``.
.. versionadded:: 1.0.0
"""
schema = schema or self.default_schema_name
return self.dialect._get_foreign_table_names(self.bind, schema)
def get_view_names(self, schema=None, include=("plain", "materialized")):
"""Return all view names in `schema`.
:param schema: Optional, retrieve names from a non-default schema.
For special quoting, use :class:`.quoted_name`.
:param include: specify which types of views to return. Passed
as a string value (for a single type) or a tuple (for any number
of types). Defaults to ``('plain', 'materialized')``.
.. versionadded:: 1.1
"""
return self.dialect.get_view_names(
self.bind, schema, info_cache=self.info_cache, include=include
)
class CreateEnumType(schema._CreateDropBase):
__visit_name__ = "create_enum_type"
class DropEnumType(schema._CreateDropBase):
__visit_name__ = "drop_enum_type"
class PGExecutionContext(default.DefaultExecutionContext):
def fire_sequence(self, seq, type_):
return self._execute_scalar(
(
"select nextval('%s')"
% self.identifier_preparer.format_sequence(seq)
),
type_,
)
def get_insert_default(self, column):
if column.primary_key and column is column.table._autoincrement_column:
if column.server_default and column.server_default.has_argument:
# pre-execute passive defaults on primary key columns
return self._execute_scalar(
"select %s" % column.server_default.arg, column.type
)
elif column.default is None or (
column.default.is_sequence and column.default.optional
):
# execute the sequence associated with a SERIAL primary
# key column. for non-primary-key SERIAL, the ID just
# generates server side.
try:
seq_name = column._postgresql_seq_name
except AttributeError:
tab = column.table.name
col = column.name
tab = tab[0 : 29 + max(0, (29 - len(col)))]
col = col[0 : 29 + max(0, (29 - len(tab)))]
name = "%s_%s_seq" % (tab, col)
column._postgresql_seq_name = seq_name = name
if column.table is not None:
effective_schema = self.connection.schema_for_object(
column.table
)
else:
effective_schema = None
if effective_schema is not None:
                    nextval_sql = 'select nextval(\'"%s"."%s"\')' % (
                        effective_schema,
                        seq_name,
                    )
                else:
                    nextval_sql = "select nextval('\"%s\"')" % (seq_name,)
                return self._execute_scalar(nextval_sql, column.type)
return super(PGExecutionContext, self).get_insert_default(column)
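# The table/column truncation in get_insert_default above mirrors how
# PostgreSQL builds SERIAL sequence names within the 63-character identifier
# limit; a standalone sketch (illustrative only):

```python
# Each part may borrow the other's unused room before
# "<table>_<column>_seq" is assembled, keeping the total at <= 63.
def serial_seq_name(tab, col):
    tab = tab[0 : 29 + max(0, (29 - len(col)))]
    col = col[0 : 29 + max(0, (29 - len(tab)))]
    return "%s_%s_seq" % (tab, col)

print(serial_seq_name("users", "id"))            # users_id_seq
print(len(serial_seq_name("x" * 40, "x" * 40)))  # 63 (29 + 1 + 29 + 4)
```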
def should_autocommit_text(self, statement):
return AUTOCOMMIT_REGEXP.match(statement)
class PGReadOnlyConnectionCharacteristic(
characteristics.ConnectionCharacteristic
):
transactional = True
def reset_characteristic(self, dialect, dbapi_conn):
dialect.set_readonly(dbapi_conn, False)
def set_characteristic(self, dialect, dbapi_conn, value):
dialect.set_readonly(dbapi_conn, value)
def get_characteristic(self, dialect, dbapi_conn):
return dialect.get_readonly(dbapi_conn)
class PGDeferrableConnectionCharacteristic(
characteristics.ConnectionCharacteristic
):
transactional = True
def reset_characteristic(self, dialect, dbapi_conn):
dialect.set_deferrable(dbapi_conn, False)
def set_characteristic(self, dialect, dbapi_conn, value):
dialect.set_deferrable(dbapi_conn, value)
def get_characteristic(self, dialect, dbapi_conn):
return dialect.get_deferrable(dbapi_conn)
class PGDialect(default.DefaultDialect):
name = "postgresql"
supports_alter = True
max_identifier_length = 63
supports_sane_rowcount = True
supports_native_enum = True
supports_native_boolean = True
supports_smallserial = True
supports_sequences = True
sequences_optional = True
preexecute_autoincrement_sequences = True
postfetch_lastrowid = False
supports_comments = True
supports_default_values = True
supports_empty_insert = False
supports_multivalues_insert = True
default_paramstyle = "pyformat"
ischema_names = ischema_names
colspecs = colspecs
statement_compiler = PGCompiler
ddl_compiler = PGDDLCompiler
type_compiler = PGTypeCompiler
preparer = PGIdentifierPreparer
execution_ctx_cls = PGExecutionContext
inspector = PGInspector
isolation_level = None
implicit_returning = True
full_returning = True
connection_characteristics = (
default.DefaultDialect.connection_characteristics
)
connection_characteristics = connection_characteristics.union(
{
"postgresql_readonly": PGReadOnlyConnectionCharacteristic(),
"postgresql_deferrable": PGDeferrableConnectionCharacteristic(),
}
)
construct_arguments = [
(
schema.Index,
{
"using": False,
"include": None,
"where": None,
"ops": {},
"concurrently": False,
"with": {},
"tablespace": None,
},
),
(
schema.Table,
{
"ignore_search_path": False,
"tablespace": None,
"partition_by": None,
"with_oids": None,
"on_commit": None,
"inherits": None,
},
),
]
reflection_options = ("postgresql_ignore_search_path",)
_backslash_escapes = True
_supports_create_index_concurrently = True
_supports_drop_index_concurrently = True
def __init__(
self,
isolation_level=None,
json_serializer=None,
json_deserializer=None,
**kwargs
):
default.DefaultDialect.__init__(self, **kwargs)
        # the isolation_level parameter to the PGDialect itself is legacy;
        # it still works, but the execution_options() method is the
        # documented way to set the isolation level.
self.isolation_level = isolation_level
self._json_deserializer = json_deserializer
self._json_serializer = json_serializer
def initialize(self, connection):
super(PGDialect, self).initialize(connection)
if self.server_version_info <= (8, 2):
self.full_returning = self.implicit_returning = False
self.supports_native_enum = self.server_version_info >= (8, 3)
if not self.supports_native_enum:
self.colspecs = self.colspecs.copy()
# pop base Enum type
self.colspecs.pop(sqltypes.Enum, None)
# psycopg2, others may have placed ENUM here as well
self.colspecs.pop(ENUM, None)
# http://www.postgresql.org/docs/9.3/static/release-9-2.html#AEN116689
self.supports_smallserial = self.server_version_info >= (9, 2)
if self.server_version_info < (8, 2):
self._backslash_escapes = False
else:
# ensure this query is not emitted on server version < 8.2
# as it will fail
std_string = connection.exec_driver_sql(
"show standard_conforming_strings"
).scalar()
self._backslash_escapes = std_string == "off"
self._supports_create_index_concurrently = (
self.server_version_info >= (8, 2)
)
self._supports_drop_index_concurrently = self.server_version_info >= (
9,
2,
)
def on_connect(self):
if self.isolation_level is not None:
def connect(conn):
self.set_isolation_level(conn, self.isolation_level)
return connect
else:
return None
    _isolation_lookup = {
        "SERIALIZABLE",
        "READ UNCOMMITTED",
        "READ COMMITTED",
        "REPEATABLE READ",
    }
def set_isolation_level(self, connection, level):
level = level.replace("_", " ")
if level not in self._isolation_lookup:
raise exc.ArgumentError(
"Invalid value '%s' for isolation_level. "
"Valid isolation levels for %s are %s"
% (level, self.name, ", ".join(self._isolation_lookup))
)
cursor = connection.cursor()
cursor.execute(
"SET SESSION CHARACTERISTICS AS TRANSACTION "
"ISOLATION LEVEL %s" % level
)
cursor.execute("COMMIT")
cursor.close()
def get_isolation_level(self, connection):
cursor = connection.cursor()
cursor.execute("show transaction isolation level")
val = cursor.fetchone()[0]
cursor.close()
return val.upper()
def set_readonly(self, connection, value):
raise NotImplementedError()
def get_readonly(self, connection):
raise NotImplementedError()
def set_deferrable(self, connection, value):
raise NotImplementedError()
def get_deferrable(self, connection):
raise NotImplementedError()
def do_begin_twophase(self, connection, xid):
self.do_begin(connection.connection)
def do_prepare_twophase(self, connection, xid):
connection.exec_driver_sql("PREPARE TRANSACTION '%s'" % xid)
def do_rollback_twophase(
self, connection, xid, is_prepared=True, recover=False
):
if is_prepared:
if recover:
# FIXME: ugly hack to get out of transaction
# context when committing recoverable transactions
# Must find out a way how to make the dbapi not
# open a transaction.
connection.exec_driver_sql("ROLLBACK")
connection.exec_driver_sql("ROLLBACK PREPARED '%s'" % xid)
connection.exec_driver_sql("BEGIN")
self.do_rollback(connection.connection)
else:
self.do_rollback(connection.connection)
def do_commit_twophase(
self, connection, xid, is_prepared=True, recover=False
):
if is_prepared:
if recover:
connection.exec_driver_sql("ROLLBACK")
connection.exec_driver_sql("COMMIT PREPARED '%s'" % xid)
connection.exec_driver_sql("BEGIN")
self.do_rollback(connection.connection)
else:
self.do_commit(connection.connection)
def do_recover_twophase(self, connection):
resultset = connection.execute(
sql.text("SELECT gid FROM pg_prepared_xacts")
)
return [row[0] for row in resultset]
def _get_default_schema_name(self, connection):
return connection.exec_driver_sql("select current_schema()").scalar()
def has_schema(self, connection, schema):
        query = (
            "select nspname from pg_namespace "
            "where lower(nspname)=:schema"
        )
cursor = connection.execute(
sql.text(query).bindparams(
sql.bindparam(
"schema",
util.text_type(schema.lower()),
type_=sqltypes.Unicode,
)
)
)
return bool(cursor.first())
def has_table(self, connection, table_name, schema=None):
# seems like case gets folded in pg_class...
if schema is None:
cursor = connection.execute(
sql.text(
"select relname from pg_class c join pg_namespace n on "
"n.oid=c.relnamespace where "
"pg_catalog.pg_table_is_visible(c.oid) "
"and relname=:name"
).bindparams(
sql.bindparam(
"name",
util.text_type(table_name),
type_=sqltypes.Unicode,
)
)
)
else:
cursor = connection.execute(
sql.text(
"select relname from pg_class c join pg_namespace n on "
"n.oid=c.relnamespace where n.nspname=:schema and "
"relname=:name"
).bindparams(
sql.bindparam(
"name",
util.text_type(table_name),
type_=sqltypes.Unicode,
),
sql.bindparam(
"schema",
util.text_type(schema),
type_=sqltypes.Unicode,
),
)
)
return bool(cursor.first())
def has_sequence(self, connection, sequence_name, schema=None):
if schema is None:
schema = self.default_schema_name
cursor = connection.execute(
sql.text(
"SELECT relname FROM pg_class c join pg_namespace n on "
"n.oid=c.relnamespace where relkind='S' and "
"n.nspname=:schema and relname=:name"
).bindparams(
sql.bindparam(
"name",
util.text_type(sequence_name),
type_=sqltypes.Unicode,
),
sql.bindparam(
"schema",
util.text_type(schema),
type_=sqltypes.Unicode,
),
)
)
return bool(cursor.first())
def has_type(self, connection, type_name, schema=None):
if schema is not None:
query = """
SELECT EXISTS (
SELECT * FROM pg_catalog.pg_type t, pg_catalog.pg_namespace n
WHERE t.typnamespace = n.oid
AND t.typname = :typname
AND n.nspname = :nspname
)
"""
query = sql.text(query)
else:
query = """
SELECT EXISTS (
SELECT * FROM pg_catalog.pg_type t
WHERE t.typname = :typname
AND pg_type_is_visible(t.oid)
)
"""
query = sql.text(query)
query = query.bindparams(
sql.bindparam(
"typname", util.text_type(type_name), type_=sqltypes.Unicode
)
)
if schema is not None:
query = query.bindparams(
sql.bindparam(
"nspname", util.text_type(schema), type_=sqltypes.Unicode
)
)
cursor = connection.execute(query)
return bool(cursor.scalar())
def _get_server_version_info(self, connection):
v = connection.exec_driver_sql("select version()").scalar()
m = re.match(
r".*(?:PostgreSQL|EnterpriseDB) "
r"(\d+)\.?(\d+)?(?:\.(\d+))?(?:\.\d+)?(?:devel|beta)?",
v,
)
if not m:
raise AssertionError(
"Could not determine version from string '%s'" % v
)
return tuple([int(x) for x in m.group(1, 2, 3) if x is not None])
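# The version regex in _get_server_version_info above can be exercised
# standalone against a typical "select version()" banner:

```python
import re

# Same pattern as above: capture up to three numeric components, tolerating
# EnterpriseDB banners and devel/beta suffixes.
pattern = (
    r".*(?:PostgreSQL|EnterpriseDB) "
    r"(\d+)\.?(\d+)?(?:\.(\d+))?(?:\.\d+)?(?:devel|beta)?"
)
banner = "PostgreSQL 12.3 on x86_64-pc-linux-gnu, compiled by gcc"
m = re.match(pattern, banner)
print(tuple(int(x) for x in m.group(1, 2, 3) if x is not None))  # (12, 3)
```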
@reflection.cache
def get_table_oid(self, connection, table_name, schema=None, **kw):
"""Fetch the oid for schema.table_name.
Several reflection methods require the table oid. The idea for using
this method is that it can be fetched one time and cached for
subsequent calls.
"""
table_oid = None
if schema is not None:
schema_where_clause = "n.nspname = :schema"
else:
schema_where_clause = "pg_catalog.pg_table_is_visible(c.oid)"
query = (
"""
SELECT c.oid
FROM pg_catalog.pg_class c
LEFT JOIN pg_catalog.pg_namespace n ON n.oid = c.relnamespace
WHERE (%s)
AND c.relname = :table_name AND c.relkind in
('r', 'v', 'm', 'f', 'p')
"""
% schema_where_clause
)
# Since we're binding to unicode, table_name and schema_name must be
# unicode.
table_name = util.text_type(table_name)
if schema is not None:
schema = util.text_type(schema)
s = sql.text(query).bindparams(table_name=sqltypes.Unicode)
s = s.columns(oid=sqltypes.Integer)
if schema:
s = s.bindparams(sql.bindparam("schema", type_=sqltypes.Unicode))
c = connection.execute(s, dict(table_name=table_name, schema=schema))
table_oid = c.scalar()
if table_oid is None:
raise exc.NoSuchTableError(table_name)
return table_oid
@reflection.cache
def get_schema_names(self, connection, **kw):
result = connection.execute(
sql.text(
"SELECT nspname FROM pg_namespace "
"WHERE nspname NOT LIKE 'pg_%' "
"ORDER BY nspname"
).columns(nspname=sqltypes.Unicode)
)
return [name for name, in result]
@reflection.cache
def get_table_names(self, connection, schema=None, **kw):
result = connection.execute(
sql.text(
"SELECT c.relname FROM pg_class c "
"JOIN pg_namespace n ON n.oid = c.relnamespace "
"WHERE n.nspname = :schema AND c.relkind in ('r', 'p')"
).columns(relname=sqltypes.Unicode),
dict(
schema=schema
if schema is not None
else self.default_schema_name
),
)
return [name for name, in result]
@reflection.cache
def _get_foreign_table_names(self, connection, schema=None, **kw):
result = connection.execute(
sql.text(
"SELECT c.relname FROM pg_class c "
"JOIN pg_namespace n ON n.oid = c.relnamespace "
"WHERE n.nspname = :schema AND c.relkind = 'f'"
).columns(relname=sqltypes.Unicode),
            dict(
                schema=schema
                if schema is not None
                else self.default_schema_name
            ),
)
return [name for name, in result]
@reflection.cache
def get_view_names(
self, connection, schema=None, include=("plain", "materialized"), **kw
):
include_kind = {"plain": "v", "materialized": "m"}
try:
kinds = [include_kind[i] for i in util.to_list(include)]
except KeyError:
raise ValueError(
"include %r unknown, needs to be a sequence containing "
"one or both of 'plain' and 'materialized'" % (include,)
)
if not kinds:
raise ValueError(
"empty include, needs to be a sequence containing "
"one or both of 'plain' and 'materialized'"
)
result = connection.execute(
sql.text(
"SELECT c.relname FROM pg_class c "
"JOIN pg_namespace n ON n.oid = c.relnamespace "
"WHERE n.nspname = :schema AND c.relkind IN (%s)"
% (", ".join("'%s'" % elem for elem in kinds))
).columns(relname=sqltypes.Unicode),
dict(
schema=schema
if schema is not None
else self.default_schema_name
),
)
return [name for name, in result]
@reflection.cache
def get_sequence_names(self, connection, schema=None, **kw):
if not schema:
schema = self.default_schema_name
cursor = connection.execute(
sql.text(
"SELECT relname FROM pg_class c join pg_namespace n on "
"n.oid=c.relnamespace where relkind='S' and "
"n.nspname=:schema"
).bindparams(
sql.bindparam(
"schema",
util.text_type(schema),
type_=sqltypes.Unicode,
),
)
)
return [row[0] for row in cursor]
@reflection.cache
def get_view_definition(self, connection, view_name, schema=None, **kw):
view_def = connection.scalar(
sql.text(
"SELECT pg_get_viewdef(c.oid) view_def FROM pg_class c "
"JOIN pg_namespace n ON n.oid = c.relnamespace "
"WHERE n.nspname = :schema AND c.relname = :view_name "
"AND c.relkind IN ('v', 'm')"
).columns(view_def=sqltypes.Unicode),
dict(
schema=schema
if schema is not None
else self.default_schema_name,
view_name=view_name,
),
)
return view_def
@reflection.cache
def get_columns(self, connection, table_name, schema=None, **kw):
table_oid = self.get_table_oid(
connection, table_name, schema, info_cache=kw.get("info_cache")
)
generated = (
"a.attgenerated as generated"
if self.server_version_info >= (12,)
else "NULL as generated"
)
if self.server_version_info >= (10,):
# a.attidentity != '' is required or it will reflect also
# serial columns as identity.
identity = """\
(SELECT json_build_object(
'always', a.attidentity = 'a',
'start', s.seqstart,
'increment', s.seqincrement,
'minvalue', s.seqmin,
'maxvalue', s.seqmax,
'cache', s.seqcache,
'cycle', s.seqcycle)
FROM pg_catalog.pg_sequence s
JOIN pg_catalog.pg_class c on s.seqrelid = c."oid"
WHERE c.relkind = 'S'
AND a.attidentity != ''
AND s.seqrelid = pg_catalog.pg_get_serial_sequence(
a.attrelid::regclass::text, a.attname
)::regclass::oid
) as identity_options\
"""
else:
identity = "NULL as identity_options"
SQL_COLS = """
SELECT a.attname,
pg_catalog.format_type(a.atttypid, a.atttypmod),
(
SELECT pg_catalog.pg_get_expr(d.adbin, d.adrelid)
FROM pg_catalog.pg_attrdef d
WHERE d.adrelid = a.attrelid AND d.adnum = a.attnum
AND a.atthasdef
) AS DEFAULT,
a.attnotnull,
a.attrelid as table_oid,
pgd.description as comment,
%s,
%s
FROM pg_catalog.pg_attribute a
LEFT JOIN pg_catalog.pg_description pgd ON (
pgd.objoid = a.attrelid AND pgd.objsubid = a.attnum)
WHERE a.attrelid = :table_oid
AND a.attnum > 0 AND NOT a.attisdropped
ORDER BY a.attnum
""" % (
generated,
identity,
)
s = (
sql.text(SQL_COLS)
.bindparams(sql.bindparam("table_oid", type_=sqltypes.Integer))
.columns(attname=sqltypes.Unicode, default=sqltypes.Unicode)
)
c = connection.execute(s, dict(table_oid=table_oid))
rows = c.fetchall()
# dictionary with (name, ) if default search path or (schema, name)
# as keys
domains = self._load_domains(connection)
# dictionary with (name, ) if default search path or (schema, name)
# as keys
enums = dict(
((rec["name"],), rec)
if rec["visible"]
else ((rec["schema"], rec["name"]), rec)
for rec in self._load_enums(connection, schema="*")
)
# format columns
columns = []
for (
name,
format_type,
default_,
notnull,
table_oid,
comment,
generated,
identity,
) in rows:
column_info = self._get_column_info(
name,
format_type,
default_,
notnull,
domains,
enums,
schema,
comment,
generated,
identity,
)
columns.append(column_info)
return columns
def _get_column_info(
self,
name,
format_type,
default,
notnull,
domains,
enums,
schema,
comment,
generated,
identity,
):
def _handle_array_type(attype):
return (
# strip '[]' from integer[], etc.
re.sub(r"\[\]$", "", attype),
attype.endswith("[]"),
)
# strip (*) from character varying(5), timestamp(5)
# with time zone, geometry(POLYGON), etc.
attype = re.sub(r"\(.*\)", "", format_type)
# strip '[]' from integer[], etc. and check if an array
attype, is_array = _handle_array_type(attype)
# strip quotes from case sensitive enum or domain names
enum_or_domain_key = tuple(util.quoted_token_parser(attype))
nullable = not notnull
charlen = re.search(r"\(([\d,]+)\)", format_type)
if charlen:
charlen = charlen.group(1)
args = re.search(r"\((.*)\)", format_type)
if args and args.group(1):
args = tuple(re.split(r"\s*,\s*", args.group(1)))
else:
args = ()
kwargs = {}
if attype == "numeric":
if charlen:
prec, scale = charlen.split(",")
args = (int(prec), int(scale))
else:
args = ()
elif attype == "double precision":
args = (53,)
elif attype == "integer":
args = ()
elif attype in ("timestamp with time zone", "time with time zone"):
kwargs["timezone"] = True
if charlen:
kwargs["precision"] = int(charlen)
args = ()
elif attype in (
"timestamp without time zone",
"time without time zone",
"time",
):
kwargs["timezone"] = False
if charlen:
kwargs["precision"] = int(charlen)
args = ()
elif attype == "bit varying":
kwargs["varying"] = True
if charlen:
args = (int(charlen),)
else:
args = ()
elif attype.startswith("interval"):
field_match = re.match(r"interval (.+)", attype, re.I)
if charlen:
kwargs["precision"] = int(charlen)
if field_match:
kwargs["fields"] = field_match.group(1)
attype = "interval"
args = ()
elif charlen:
args = (int(charlen),)
while True:
# looping here to suit nested domains
if attype in self.ischema_names:
coltype = self.ischema_names[attype]
break
elif enum_or_domain_key in enums:
enum = enums[enum_or_domain_key]
coltype = ENUM
kwargs["name"] = enum["name"]
if not enum["visible"]:
kwargs["schema"] = enum["schema"]
args = tuple(enum["labels"])
break
elif enum_or_domain_key in domains:
domain = domains[enum_or_domain_key]
attype = domain["attype"]
attype, is_array = _handle_array_type(attype)
# strip quotes from case sensitive enum or domain names
enum_or_domain_key = tuple(util.quoted_token_parser(attype))
# A table can't override a not null on the domain,
# but can override nullable
nullable = nullable and domain["nullable"]
if domain["default"] and not default:
# It can, however, override the default
# value, but can't set it to null.
default = domain["default"]
continue
else:
coltype = None
break
if coltype:
coltype = coltype(*args, **kwargs)
if is_array:
coltype = self.ischema_names["_array"](coltype)
else:
util.warn(
"Did not recognize type '%s' of column '%s'" % (attype, name)
)
coltype = sqltypes.NULLTYPE
# If a zero byte or blank string depending on driver (is also absent
# for older PG versions), then not a generated column. Otherwise, s =
# stored. (Other values might be added in the future.)
if generated not in (None, "", b"\x00"):
computed = dict(
sqltext=default, persisted=generated in ("s", b"s")
)
default = None
else:
computed = None
# adjust the default value
autoincrement = False
if default is not None:
match = re.search(r"""(nextval\(')([^']+)('.*$)""", default)
if match is not None:
if issubclass(coltype._type_affinity, sqltypes.Integer):
autoincrement = True
# the default is related to a Sequence
sch = schema
if "." not in match.group(2) and sch is not None:
# unconditionally quote the schema name. this could
# later be enhanced to obey quoting rules /
# "quote schema"
default = (
match.group(1)
+ ('"%s"' % sch)
+ "."
+ match.group(2)
+ match.group(3)
)
column_info = dict(
name=name,
type=coltype,
nullable=nullable,
default=default,
autoincrement=autoincrement or identity is not None,
comment=comment,
)
if computed is not None:
column_info["computed"] = computed
if identity is not None:
column_info["identity"] = identity
return column_info
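# A simplified standalone sketch (plain built-in types only; the real
# _get_column_info above also resolves domains, enums and quoted names) of
# how the pg_catalog.format_type() output is split apart:

```python
import re

def parse_format_type(format_type):
    # strip "(...)" args, e.g. "(30)" or "(10,2)"
    attype = re.sub(r"\(.*\)", "", format_type)
    # detect and strip a trailing "[]" array marker
    is_array = attype.endswith("[]")
    attype = re.sub(r"\[\]$", "", attype)
    # pull the length/precision digits back out of the original string
    charlen = re.search(r"\(([\d,]+)\)", format_type)
    return attype, is_array, charlen.group(1) if charlen else None

print(parse_format_type("character varying(30)[]"))  # ('character varying', True, '30')
print(parse_format_type("numeric(10,2)"))            # ('numeric', False, '10,2')
```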
@reflection.cache
def get_pk_constraint(self, connection, table_name, schema=None, **kw):
table_oid = self.get_table_oid(
connection, table_name, schema, info_cache=kw.get("info_cache")
)
if self.server_version_info < (8, 4):
PK_SQL = """
SELECT a.attname
FROM
pg_class t
join pg_index ix on t.oid = ix.indrelid
join pg_attribute a
on t.oid=a.attrelid AND %s
WHERE
t.oid = :table_oid and ix.indisprimary = 't'
ORDER BY a.attnum
""" % self._pg_index_any(
"a.attnum", "ix.indkey"
)
else:
# unnest() and generate_subscripts() both introduced in
# version 8.4
PK_SQL = """
SELECT a.attname
FROM pg_attribute a JOIN (
SELECT unnest(ix.indkey) attnum,
generate_subscripts(ix.indkey, 1) ord
FROM pg_index ix
WHERE ix.indrelid = :table_oid AND ix.indisprimary
) k ON a.attnum=k.attnum
WHERE a.attrelid = :table_oid
ORDER BY k.ord
"""
t = sql.text(PK_SQL).columns(attname=sqltypes.Unicode)
c = connection.execute(t, dict(table_oid=table_oid))
cols = [r[0] for r in c.fetchall()]
PK_CONS_SQL = """
SELECT conname
FROM pg_catalog.pg_constraint r
WHERE r.conrelid = :table_oid AND r.contype = 'p'
ORDER BY 1
"""
t = sql.text(PK_CONS_SQL).columns(conname=sqltypes.Unicode)
c = connection.execute(t, dict(table_oid=table_oid))
name = c.scalar()
return {"constrained_columns": cols, "name": name}
@reflection.cache
def get_foreign_keys(
self,
connection,
table_name,
schema=None,
postgresql_ignore_search_path=False,
**kw
):
preparer = self.identifier_preparer
table_oid = self.get_table_oid(
connection, table_name, schema, info_cache=kw.get("info_cache")
)
FK_SQL = """
SELECT r.conname,
pg_catalog.pg_get_constraintdef(r.oid, true) as condef,
n.nspname as conschema
FROM pg_catalog.pg_constraint r,
pg_namespace n,
pg_class c
WHERE r.conrelid = :table AND
r.contype = 'f' AND
c.oid = confrelid AND
n.oid = c.relnamespace
ORDER BY 1
"""
# http://www.postgresql.org/docs/9.0/static/sql-createtable.html
FK_REGEX = re.compile(
r"FOREIGN KEY \((.*?)\) REFERENCES (?:(.*?)\.)?(.*?)\((.*?)\)"
r"[\s]?(MATCH (FULL|PARTIAL|SIMPLE)+)?"
r"[\s]?(ON UPDATE "
r"(CASCADE|RESTRICT|NO ACTION|SET NULL|SET DEFAULT)+)?"
r"[\s]?(ON DELETE "
r"(CASCADE|RESTRICT|NO ACTION|SET NULL|SET DEFAULT)+)?"
r"[\s]?(DEFERRABLE|NOT DEFERRABLE)?"
r"[\s]?(INITIALLY (DEFERRED|IMMEDIATE)+)?"
)
t = sql.text(FK_SQL).columns(
conname=sqltypes.Unicode, condef=sqltypes.Unicode
)
c = connection.execute(t, dict(table=table_oid))
fkeys = []
for conname, condef, conschema in c.fetchall():
m = re.search(FK_REGEX, condef).groups()
(
constrained_columns,
referred_schema,
referred_table,
referred_columns,
_,
match,
_,
onupdate,
_,
ondelete,
deferrable,
_,
initially,
) = m
if deferrable is not None:
deferrable = True if deferrable == "DEFERRABLE" else False
constrained_columns = [
preparer._unquote_identifier(x)
for x in re.split(r"\s*,\s*", constrained_columns)
]
if postgresql_ignore_search_path:
# when ignoring search path, we use the actual schema
# provided it isn't the "default" schema
if conschema != self.default_schema_name:
referred_schema = conschema
else:
referred_schema = schema
elif referred_schema:
# referred_schema is the schema that we regexp'ed from
# pg_get_constraintdef(). If the schema is in the search
# path, pg_get_constraintdef() will give us None.
referred_schema = preparer._unquote_identifier(referred_schema)
elif schema is not None and schema == conschema:
# If the actual schema matches the schema of the table
# we're reflecting, then we will use that.
referred_schema = schema
referred_table = preparer._unquote_identifier(referred_table)
referred_columns = [
preparer._unquote_identifier(x)
                for x in re.split(r"\s*,\s*", referred_columns)
]
options = {
k: v
for k, v in [
("onupdate", onupdate),
("ondelete", ondelete),
("initially", initially),
("deferrable", deferrable),
("match", match),
]
if v is not None and v != "NO ACTION"
}
fkey_d = {
"name": conname,
"constrained_columns": constrained_columns,
"referred_schema": referred_schema,
"referred_table": referred_table,
"referred_columns": referred_columns,
"options": options,
}
fkeys.append(fkey_d)
return fkeys
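# FK_REGEX above can be checked standalone against a typical
# pg_get_constraintdef() string (group numbers as unpacked in the loop):

```python
import re

# Same pattern as FK_REGEX above.
FK_REGEX = re.compile(
    r"FOREIGN KEY \((.*?)\) REFERENCES (?:(.*?)\.)?(.*?)\((.*?)\)"
    r"[\s]?(MATCH (FULL|PARTIAL|SIMPLE)+)?"
    r"[\s]?(ON UPDATE "
    r"(CASCADE|RESTRICT|NO ACTION|SET NULL|SET DEFAULT)+)?"
    r"[\s]?(ON DELETE "
    r"(CASCADE|RESTRICT|NO ACTION|SET NULL|SET DEFAULT)+)?"
    r"[\s]?(DEFERRABLE|NOT DEFERRABLE)?"
    r"[\s]?(INITIALLY (DEFERRED|IMMEDIATE)+)?"
)
condef = "FOREIGN KEY (user_id) REFERENCES public.users(id) ON DELETE CASCADE"
m = re.search(FK_REGEX, condef)
# groups 1-4: columns, schema, table, referred columns; group 10: ON DELETE action
print(m.group(1, 2, 3, 4, 10))  # ('user_id', 'public', 'users', 'id', 'CASCADE')
```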
def _pg_index_any(self, col, compare_to):
if self.server_version_info < (8, 1):
# http://www.postgresql.org/message-id/10279.1124395722@sss.pgh.pa.us
# "In CVS tip you could replace this with "attnum = ANY (indkey)".
# Unfortunately, most array support doesn't work on int2vector in
# pre-8.1 releases, so I think you're kinda stuck with the above
# for now.
# regards, tom lane"
return "(%s)" % " OR ".join(
"%s[%d] = %s" % (compare_to, ind, col) for ind in range(0, 10)
)
else:
return "%s = ANY(%s)" % (col, compare_to)
@reflection.cache
def get_indexes(self, connection, table_name, schema, **kw):
table_oid = self.get_table_oid(
connection, table_name, schema, info_cache=kw.get("info_cache")
)
# cast indkey as varchar since it's an int2vector,
# returned as a list by some drivers such as pypostgresql
if self.server_version_info < (8, 5):
IDX_SQL = """
SELECT
i.relname as relname,
                ix.indisunique, ix.indexprs,
                a.attname, a.attnum, NULL, ix.indkey%s,
                %s, %s, am.amname,
                pg_get_expr(ix.indpred, ix.indrelid),
                NULL as indnkeyatts
FROM
pg_class t
join pg_index ix on t.oid = ix.indrelid
join pg_class i on i.oid = ix.indexrelid
left outer join
pg_attribute a
on t.oid = a.attrelid and %s
left outer join
pg_am am
on i.relam = am.oid
WHERE
t.relkind IN ('r', 'v', 'f', 'm')
and t.oid = :table_oid
and ix.indisprimary = 'f'
ORDER BY
t.relname,
i.relname
""" % (
# version 8.3 here was based on observing the
# cast does not work in PG 8.2.4, does work in 8.3.0.
# nothing in PG changelogs regarding this.
"::varchar" if self.server_version_info >= (8, 3) else "",
"ix.indoption::varchar"
if self.server_version_info >= (8, 3)
else "NULL",
"i.reloptions"
if self.server_version_info >= (8, 2)
else "NULL",
self._pg_index_any("a.attnum", "ix.indkey"),
)
else:
IDX_SQL = """
SELECT
i.relname as relname,
ix.indisunique, ix.indexprs,
a.attname, a.attnum, c.conrelid, ix.indkey::varchar,
ix.indoption::varchar, i.reloptions, am.amname,
pg_get_expr(ix.indpred, ix.indrelid),
%s as indnkeyatts
FROM
pg_class t
join pg_index ix on t.oid = ix.indrelid
join pg_class i on i.oid = ix.indexrelid
left outer join
pg_attribute a
on t.oid = a.attrelid and a.attnum = ANY(ix.indkey)
left outer join
pg_constraint c
on (ix.indrelid = c.conrelid and
ix.indexrelid = c.conindid and
c.contype in ('p', 'u', 'x'))
left outer join
pg_am am
on i.relam = am.oid
WHERE
t.relkind IN ('r', 'v', 'f', 'm', 'p')
and t.oid = :table_oid
and ix.indisprimary = 'f'
ORDER BY
t.relname,
i.relname
""" % (
"ix.indnkeyatts"
if self.server_version_info >= (11, 0)
else "NULL",
)
t = sql.text(IDX_SQL).columns(
relname=sqltypes.Unicode, attname=sqltypes.Unicode
)
c = connection.execute(t, dict(table_oid=table_oid))
indexes = defaultdict(lambda: defaultdict(dict))
sv_idx_name = None
for row in c.fetchall():
(
idx_name,
unique,
expr,
col,
col_num,
conrelid,
idx_key,
idx_option,
options,
amname,
filter_definition,
indnkeyatts,
) = row
if expr:
if idx_name != sv_idx_name:
util.warn(
"Skipped unsupported reflection of "
"expression-based index %s" % idx_name
)
sv_idx_name = idx_name
continue
has_idx = idx_name in indexes
index = indexes[idx_name]
if col is not None:
index["cols"][col_num] = col
if not has_idx:
idx_keys = idx_key.split()
# "The number of key columns in the index, not counting any
# included columns, which are merely stored and do not
# participate in the index semantics"
if indnkeyatts and idx_keys[indnkeyatts:]:
# this is a "covering index" which has INCLUDE columns
# as well as regular index columns
inc_keys = idx_keys[indnkeyatts:]
idx_keys = idx_keys[:indnkeyatts]
else:
inc_keys = []
index["key"] = [int(k.strip()) for k in idx_keys]
index["inc"] = [int(k.strip()) for k in inc_keys]
# (new in pg 8.3)
# "pg_index.indoption" is list of ints, one per column/expr.
# int acts as bitmask: 0x01=DESC, 0x02=NULLSFIRST
sorting = {}
for col_idx, col_flags in enumerate(
(idx_option or "").split()
):
col_flags = int(col_flags.strip())
col_sorting = ()
# try to set flags only if they differ from PG defaults...
if col_flags & 0x01:
col_sorting += ("desc",)
if not (col_flags & 0x02):
col_sorting += ("nulls_last",)
else:
if col_flags & 0x02:
col_sorting += ("nulls_first",)
if col_sorting:
sorting[col_idx] = col_sorting
if sorting:
index["sorting"] = sorting
index["unique"] = unique
if conrelid is not None:
index["duplicates_constraint"] = idx_name
if options:
index["options"] = dict(
[option.split("=") for option in options]
)
# it *might* be nice to include that this is 'btree' in the
# reflection info. But we don't want an Index object
# to have a ``postgresql_using`` in it that is just the
# default, so for the moment leaving this out.
if amname and amname != "btree":
index["amname"] = amname
if filter_definition:
index["postgresql_where"] = filter_definition
result = []
for name, idx in indexes.items():
entry = {
"name": name,
"unique": idx["unique"],
"column_names": [idx["cols"][i] for i in idx["key"]],
}
if self.server_version_info >= (11, 0):
entry["include_columns"] = [idx["cols"][i] for i in idx["inc"]]
if "duplicates_constraint" in idx:
entry["duplicates_constraint"] = idx["duplicates_constraint"]
if "sorting" in idx:
entry["column_sorting"] = dict(
(idx["cols"][idx["key"][i]], value)
for i, value in idx["sorting"].items()
)
if "options" in idx:
entry.setdefault("dialect_options", {})[
"postgresql_with"
] = idx["options"]
if "amname" in idx:
entry.setdefault("dialect_options", {})[
"postgresql_using"
] = idx["amname"]
if "postgresql_where" in idx:
entry.setdefault("dialect_options", {})[
"postgresql_where"
] = idx["postgresql_where"]
result.append(entry)
return result
@reflection.cache
def get_unique_constraints(
self, connection, table_name, schema=None, **kw
):
table_oid = self.get_table_oid(
connection, table_name, schema, info_cache=kw.get("info_cache")
)
UNIQUE_SQL = """
SELECT
cons.conname as name,
cons.conkey as key,
a.attnum as col_num,
a.attname as col_name
FROM
pg_catalog.pg_constraint cons
join pg_attribute a
on cons.conrelid = a.attrelid AND
a.attnum = ANY(cons.conkey)
WHERE
cons.conrelid = :table_oid AND
cons.contype = 'u'
"""
t = sql.text(UNIQUE_SQL).columns(col_name=sqltypes.Unicode)
c = connection.execute(t, dict(table_oid=table_oid))
uniques = defaultdict(lambda: defaultdict(dict))
for row in c.fetchall():
uc = uniques[row.name]
uc["key"] = row.key
uc["cols"][row.col_num] = row.col_name
return [
{"name": name, "column_names": [uc["cols"][i] for i in uc["key"]]}
for name, uc in uniques.items()
]
@reflection.cache
def get_table_comment(self, connection, table_name, schema=None, **kw):
table_oid = self.get_table_oid(
connection, table_name, schema, info_cache=kw.get("info_cache")
)
COMMENT_SQL = """
SELECT
pgd.description as table_comment
FROM
pg_catalog.pg_description pgd
WHERE
pgd.objsubid = 0 AND
pgd.objoid = :table_oid
"""
c = connection.execute(
sql.text(COMMENT_SQL), dict(table_oid=table_oid)
)
return {"text": c.scalar()}
@reflection.cache
def get_check_constraints(self, connection, table_name, schema=None, **kw):
table_oid = self.get_table_oid(
connection, table_name, schema, info_cache=kw.get("info_cache")
)
CHECK_SQL = """
SELECT
cons.conname as name,
pg_get_constraintdef(cons.oid) as src
FROM
pg_catalog.pg_constraint cons
WHERE
cons.conrelid = :table_oid AND
cons.contype = 'c'
"""
c = connection.execute(sql.text(CHECK_SQL), dict(table_oid=table_oid))
ret = []
for name, src in c:
# samples:
# "CHECK (((a > 1) AND (a < 5)))"
# "CHECK (((a = 1) OR ((a > 2) AND (a < 5))))"
# "CHECK (((a > 1) AND (a < 5))) NOT VALID"
# "CHECK (some_boolean_function(a))"
# "CHECK (((a\n < 1)\n OR\n (a\n >= 5))\n)"
m = re.match(
r"^CHECK *\((.+)\)( NOT VALID)?$", src, flags=re.DOTALL
)
if not m:
util.warn("Could not parse CHECK constraint text: %r" % src)
sqltext = ""
else:
sqltext = re.compile(
r"^[\s\n]*\((.+)\)[\s\n]*$", flags=re.DOTALL
).sub(r"\1", m.group(1))
entry = {"name": name, "sqltext": sqltext}
if m and m.group(2):
entry["dialect_options"] = {"not_valid": True}
ret.append(entry)
return ret
def _load_enums(self, connection, schema=None):
schema = schema or self.default_schema_name
if not self.supports_native_enum:
return {}
# Load data types for enums:
SQL_ENUMS = """
SELECT t.typname as "name",
-- no enum defaults in 8.4 at least
-- t.typdefault as "default",
pg_catalog.pg_type_is_visible(t.oid) as "visible",
n.nspname as "schema",
e.enumlabel as "label"
FROM pg_catalog.pg_type t
LEFT JOIN pg_catalog.pg_namespace n ON n.oid = t.typnamespace
LEFT JOIN pg_catalog.pg_enum e ON t.oid = e.enumtypid
WHERE t.typtype = 'e'
"""
if schema != "*":
SQL_ENUMS += "AND n.nspname = :schema "
# e.oid gives us label order within an enum
SQL_ENUMS += 'ORDER BY "schema", "name", e.oid'
s = sql.text(SQL_ENUMS).columns(
attname=sqltypes.Unicode, label=sqltypes.Unicode
)
if schema != "*":
s = s.bindparams(schema=schema)
c = connection.execute(s)
enums = []
enum_by_name = {}
for enum in c.fetchall():
key = (enum.schema, enum.name)
if key in enum_by_name:
enum_by_name[key]["labels"].append(enum.label)
else:
enum_by_name[key] = enum_rec = {
"name": enum.name,
"schema": enum.schema,
"visible": enum.visible,
"labels": [],
}
if enum.label is not None:
enum_rec["labels"].append(enum.label)
enums.append(enum_rec)
return enums
def _load_domains(self, connection):
# Load data types for domains:
SQL_DOMAINS = """
SELECT t.typname as "name",
pg_catalog.format_type(t.typbasetype, t.typtypmod) as "attype",
not t.typnotnull as "nullable",
t.typdefault as "default",
pg_catalog.pg_type_is_visible(t.oid) as "visible",
n.nspname as "schema"
FROM pg_catalog.pg_type t
LEFT JOIN pg_catalog.pg_namespace n ON n.oid = t.typnamespace
WHERE t.typtype = 'd'
"""
s = sql.text(SQL_DOMAINS)
c = connection.execution_options(future_result=True).execute(s)
domains = {}
for domain in c.mappings():
# strip (30) from character varying(30)
attype = re.search(r"([^\(]+)", domain["attype"]).group(1)
# 'visible' just means whether or not the domain is in a
# schema that's on the search path -- or not overridden by
# a schema with higher precedence. If it's not visible,
# it will be prefixed with the schema-name when it's used.
if domain["visible"]:
key = (domain["name"],)
else:
key = (domain["schema"], domain["name"])
domains[key] = {
"attype": attype,
"nullable": domain["nullable"],
"default": domain["default"],
}
return domains
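# The sorting loop in the index-reflection code above decodes PostgreSQL's
# ``pg_index.indoption`` bitmask (0x01 = DESC, 0x02 = NULLS FIRST). As a
# self-contained sketch — not part of SQLAlchemy; ``decode_indoption`` is a
# hypothetical helper named here only for illustration — the same decoding
# can be exercised on its own:

```python
def decode_indoption(idx_option):
    """Decode a space-separated pg_index.indoption string into per-column flags."""
    sorting = {}
    for col_idx, col_flags in enumerate((idx_option or "").split()):
        col_flags = int(col_flags.strip())
        col_sorting = ()
        if col_flags & 0x01:        # bit 0 set: column sorts DESC
            col_sorting += ("desc",)
        if not (col_flags & 0x02):  # bit 1 unset: NULLS LAST
            col_sorting += ("nulls_last",)
        else:                       # bit 1 set: NULLS FIRST
            col_sorting += ("nulls_first",)
        if col_sorting:
            sorting[col_idx] = col_sorting
    return sorting
```

# For example, an indoption string of "3 0" describes a two-column index whose
# first column is DESC NULLS FIRST and whose second column is ASC NULLS LAST.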
#-----------------------------------------------------------------------------
# File: examples/computer_vision/mmdetection_pytorch/configs/rpn/rpn_x101_64x4d_fpn_1x_coco.py
# Repo: RAbraham/determined (Apache-2.0)
#-----------------------------------------------------------------------------
_base_ = "./rpn_r50_fpn_1x_coco.py"
model = dict(
pretrained="open-mmlab://resnext101_64x4d",
backbone=dict(
type="ResNeXt",
depth=101,
groups=64,
base_width=4,
num_stages=4,
out_indices=(0, 1, 2, 3),
frozen_stages=1,
norm_cfg=dict(type="BN", requires_grad=True),
style="pytorch",
),
)
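# MMDetection configs compose through ``_base_``: the dict above only
# overrides the backbone of ``rpn_r50_fpn_1x_coco.py`` and inherits everything
# else. A minimal sketch of the merge semantics — ``merge_cfg`` is a
# simplified stand-in, not mmcv's actual ``Config`` implementation, which also
# supports ``_delete_`` keys and other features:

```python
def merge_cfg(base, override):
    """Recursively merge an override dict into a base config dict."""
    merged = dict(base)
    for key, value in override.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            # nested dicts merge key-by-key
            merged[key] = merge_cfg(merged[key], value)
        else:
            # scalars (and new keys) in the child replace the base value
            merged[key] = value
    return merged

base_cfg = {"model": {"backbone": {"type": "ResNet", "depth": 50},
                      "neck": {"type": "FPN"}}}
child_cfg = {"model": {"backbone": {"type": "ResNeXt", "depth": 101, "groups": 64}}}
cfg = merge_cfg(base_cfg, child_cfg)
```

# After merging, the backbone is fully replaced/extended by the child's keys
# while untouched sections such as the neck survive from the base.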
#-----------------------------------------------------------------------------
# File: WrestlingNerd_wdr.py
# Repo: parente/wnerd (MIT)
#-----------------------------------------------------------------------------
# -*- coding: iso-8859-15 -*-
#-----------------------------------------------------------------------------
# Python source generated by wxDesigner from file: WrestlingNerd.wdr
# Do not modify this file, all changes will be lost!
#-----------------------------------------------------------------------------
# Include wxWindows' modules
import wx
import wx.grid
# Custom source
ID_DELETE_MATCH_MENU = wx.NewId()
ID_DELETEALL_MATCH_MENU = wx.NewId()
ID_MOVEIN_MATCH_MENU = wx.NewId()
ID_DELETE_SEED_MENU = wx.NewId()
ID_DELETEMOVEUP_SEED_MENU = wx.NewId()
ID_INSERTMOVEDOWN_SEED_MENU = wx.NewId()
ID_SETLAST_SEED_MENU = wx.NewId()
ID_SWAPUP_SEED_MENU = wx.NewId()
ID_SWAPDOWN_SEED_MENU = wx.NewId()
ID_SWAPTO_SEED_MENU = wx.NewId()
start_caption = '''Welcome to the new tournament wizard.
Use the Next and Previous buttons below to move through the steps necessary to create a new tournament. Directions are given at the start of each step.
Press the Next button to begin.
'''
name_caption = '''First, enter a name for the tournament.
Press the Next button when you are done.
'''
teams_caption = '''Enter the names of all the teams in the tournament in the box below. Type the name of the team and then press the Add button. Remove an incorrect team by selecting it and pressing Remove. Pressing Enter after typing a team name is a shortcut for clicking the Add button.
Press the Next button when you are done.
'''
weights_caption = '''Enter the weight classes in the tournament. Type the name and use the Add and Remove buttons or Enter key as you did to enter the teams. Pressing the Add Standard button will enter all of the U.S. standard weight classes for you.
Press the Next button when you are done.
'''
layout_caption = '''Select the layout for this tournament. Choose a layout name from the box on the left. A description of the layout appears to the right.
Press the Next button when you are done.
'''
finished_caption = '''You have finished creating a new tournament. Press the Previous button to go back and make changes. When you are satisfied, press the Finish button to complete this wizard.'''
# Window functions
ID_WEIGHTS_CHOICE = 10000
ID_TEAMS_LIST = 10001
def CreateSidePanel( parent, call_fit = True, set_sizer = True ):
item0 = wx.BoxSizer( wx.VERTICAL )
item2 = wx.StaticBox( parent, -1, "Weight" )
item1 = wx.StaticBoxSizer( item2, wx.VERTICAL )
item3 = wx.Choice( parent, ID_WEIGHTS_CHOICE, wx.DefaultPosition, [100,-1], [], 0 )
item1.Add( item3, 0, wx.GROW|wx.ALIGN_CENTER_VERTICAL|wx.ALL, 5 )
item0.Add( item1, 0, wx.GROW|wx.ALIGN_CENTER_VERTICAL|wx.ALL, 5 )
item5 = wx.StaticBox( parent, -1, "Scores" )
item4 = wx.StaticBoxSizer( item5, wx.VERTICAL )
item6 = wx.ListCtrl( parent, ID_TEAMS_LIST, wx.DefaultPosition, [180,300], wx.LC_REPORT|wx.SUNKEN_BORDER )
item4.Add( item6, 0, wx.GROW|wx.ALL, 5 )
item0.Add( item4, 0, wx.ALIGN_CENTER|wx.ALL, 5 )
if set_sizer == True:
parent.SetSizer( item0 )
if call_fit == True:
item0.SetSizeHints( parent )
return item0
ID_TEXT = 10002
ID_START_CAPTION = 10003
ID_LINE = 10004
def WizardStartPanel( parent, call_fit = True, set_sizer = True ):
item0 = wx.BoxSizer( wx.VERTICAL )
item1 = wx.StaticText( parent, ID_TEXT, "New Tournament", wx.DefaultPosition, wx.DefaultSize, 0 )
item1.SetFont( wx.Font( 16, wx.SWISS, wx.NORMAL, wx.BOLD ) )
item0.Add( item1, 0, wx.ALIGN_CENTER_VERTICAL|wx.ALL, 5 )
item2 = wx.StaticText( parent, ID_START_CAPTION, "", wx.DefaultPosition, [-1,80], wx.ST_NO_AUTORESIZE )
item0.Add( item2, 0, wx.GROW|wx.ALIGN_CENTER_VERTICAL|wx.ALL, 5 )
item3 = wx.StaticLine( parent, ID_LINE, wx.DefaultPosition, [20,-1], wx.LI_HORIZONTAL )
item0.Add( item3, 0, wx.GROW|wx.ALIGN_CENTER_VERTICAL|wx.ALL, 5 )
if set_sizer == True:
parent.SetSizer( item0 )
if call_fit == True:
item0.SetSizeHints( parent )
return item0
ID_NAME_CAPTION = 10005
ID_NAME_TEXT = 10006
def WizardNamePanel( parent, call_fit = True, set_sizer = True ):
item0 = wx.BoxSizer( wx.VERTICAL )
item1 = wx.StaticText( parent, ID_TEXT, "Name", wx.DefaultPosition, wx.DefaultSize, 0 )
item1.SetFont( wx.Font( 16, wx.SWISS, wx.NORMAL, wx.BOLD ) )
item0.Add( item1, 0, wx.ALIGN_CENTER_VERTICAL|wx.ALL, 5 )
item2 = wx.StaticText( parent, ID_NAME_CAPTION, "", wx.DefaultPosition, wx.DefaultSize, wx.ST_NO_AUTORESIZE )
item0.Add( item2, 0, wx.GROW|wx.ALIGN_CENTER_VERTICAL|wx.ALL, 5 )
item3 = wx.StaticLine( parent, ID_LINE, wx.DefaultPosition, [20,-1], wx.LI_HORIZONTAL )
item0.Add( item3, 0, wx.GROW|wx.ALIGN_CENTER_VERTICAL|wx.ALL, 5 )
item4 = wx.StaticText( parent, ID_TEXT, "Tournament name", wx.DefaultPosition, wx.DefaultSize, 0 )
item0.Add( item4, 0, wx.ALIGN_CENTER_VERTICAL|wx.ALL, 5 )
item5 = wx.TextCtrl( parent, ID_NAME_TEXT, "", wx.DefaultPosition, [200,-1], 0 )
item0.Add( item5, 0, wx.ALIGN_CENTER_VERTICAL|wx.ALL, 5 )
if set_sizer == True:
parent.SetSizer( item0 )
if call_fit == True:
item0.SetSizeHints( parent )
return item0
ID_TEAMS_CAPTION = 10007
ID_TEAMS_TEXT = 10008
ID_ADD_TEAM = 10009
ID_REMOVE_TEAM = 10010
def WizardTeamsPanel( parent, call_fit = True, set_sizer = True ):
item0 = wx.BoxSizer( wx.VERTICAL )
item1 = wx.StaticText( parent, ID_TEXT, "Teams", wx.DefaultPosition, wx.DefaultSize, 0 )
item1.SetFont( wx.Font( 16, wx.SWISS, wx.NORMAL, wx.BOLD ) )
item0.Add( item1, 0, wx.ALIGN_CENTER_VERTICAL|wx.ALL, 5 )
item2 = wx.StaticText( parent, ID_TEAMS_CAPTION, "", wx.DefaultPosition, [-1,70], wx.ST_NO_AUTORESIZE )
item0.Add( item2, 0, wx.GROW|wx.ALIGN_CENTER_VERTICAL|wx.ALL, 5 )
item3 = wx.StaticLine( parent, ID_LINE, wx.DefaultPosition, [20,-1], wx.LI_HORIZONTAL )
item0.Add( item3, 0, wx.GROW|wx.ALIGN_CENTER_VERTICAL|wx.ALL, 5 )
item4 = wx.BoxSizer( wx.HORIZONTAL )
item5 = wx.BoxSizer( wx.VERTICAL )
item6 = wx.TextCtrl( parent, ID_TEAMS_TEXT, "", wx.DefaultPosition, [200,-1], wx.TE_PROCESS_ENTER )
item5.Add( item6, 0, wx.GROW|wx.ALIGN_CENTER_VERTICAL|wx.LEFT|wx.RIGHT|wx.TOP, 5 )
item7 = wx.ListBox( parent, ID_TEAMS_LIST, wx.DefaultPosition, [80,250], [], wx.LB_SINGLE )
item5.Add( item7, 0, wx.GROW|wx.ALIGN_CENTER_VERTICAL|wx.LEFT|wx.RIGHT|wx.BOTTOM, 5 )
item4.Add( item5, 0, wx.ALIGN_CENTER|wx.ALL, 5 )
item8 = wx.BoxSizer( wx.VERTICAL )
item9 = wx.Button( parent, ID_ADD_TEAM, "Add", wx.DefaultPosition, wx.DefaultSize, 0 )
item8.Add( item9, 0, wx.GROW|wx.ALIGN_CENTER_VERTICAL|wx.ALL, 5 )
item10 = wx.Button( parent, ID_REMOVE_TEAM, "Remove", wx.DefaultPosition, wx.DefaultSize, 0 )
item8.Add( item10, 0, wx.GROW|wx.ALIGN_CENTER_VERTICAL|wx.ALL, 5 )
item4.Add( item8, 0, wx.GROW|wx.ALIGN_CENTER_HORIZONTAL|wx.ALL, 5 )
item0.Add( item4, 0, wx.GROW|wx.ALIGN_CENTER_VERTICAL, 5 )
if set_sizer == True:
parent.SetSizer( item0 )
if call_fit == True:
item0.SetSizeHints( parent )
return item0
ID_WEIGHTS_CAPTION = 10011
ID_WEIGHTS_TEXT = 10012
ID_WEIGHTS_LIST = 10013
ID_ADD_WEIGHT = 10014
ID_REMOVE_WEIGHT = 10015
ID_ADD_STANDARD_WEIGHTS = 10016
def WizardWeightsPanel( parent, call_fit = True, set_sizer = True ):
item0 = wx.BoxSizer( wx.VERTICAL )
item1 = wx.StaticText( parent, ID_TEXT, "Weight Classes", wx.DefaultPosition, wx.DefaultSize, 0 )
item1.SetFont( wx.Font( 16, wx.SWISS, wx.NORMAL, wx.BOLD ) )
item0.Add( item1, 0, wx.ALIGN_CENTER_VERTICAL|wx.ALL, 5 )
item2 = wx.StaticText( parent, ID_WEIGHTS_CAPTION, "", wx.DefaultPosition, [-1,70], wx.ST_NO_AUTORESIZE )
item0.Add( item2, 0, wx.GROW|wx.ALIGN_CENTER_VERTICAL|wx.ALL, 5 )
item3 = wx.StaticLine( parent, ID_LINE, wx.DefaultPosition, [20,-1], wx.LI_HORIZONTAL )
item0.Add( item3, 0, wx.GROW|wx.ALIGN_CENTER_VERTICAL|wx.ALL, 5 )
item4 = wx.BoxSizer( wx.HORIZONTAL )
item5 = wx.BoxSizer( wx.VERTICAL )
item6 = wx.TextCtrl( parent, ID_WEIGHTS_TEXT, "", wx.DefaultPosition, [200,-1], wx.TE_PROCESS_ENTER )
item5.Add( item6, 0, wx.GROW|wx.ALIGN_CENTER_VERTICAL|wx.LEFT|wx.RIGHT|wx.TOP, 5 )
item7 = wx.ListBox( parent, ID_WEIGHTS_LIST, wx.DefaultPosition, [80,250], [], wx.LB_SINGLE )
item5.Add( item7, 0, wx.GROW|wx.ALIGN_CENTER_VERTICAL|wx.LEFT|wx.RIGHT|wx.BOTTOM, 5 )
item4.Add( item5, 0, wx.ALIGN_CENTER|wx.ALL, 5 )
item8 = wx.BoxSizer( wx.VERTICAL )
item9 = wx.Button( parent, ID_ADD_WEIGHT, "Add", wx.DefaultPosition, wx.DefaultSize, 0 )
item8.Add( item9, 0, wx.GROW|wx.ALIGN_CENTER_VERTICAL|wx.ALL, 5 )
item10 = wx.Button( parent, ID_REMOVE_WEIGHT, "Remove", wx.DefaultPosition, wx.DefaultSize, 0 )
item8.Add( item10, 0, wx.GROW|wx.ALIGN_CENTER_VERTICAL|wx.ALL, 5 )
item11 = wx.Button( parent, ID_ADD_STANDARD_WEIGHTS, "Add Standard", wx.DefaultPosition, wx.DefaultSize, 0 )
item8.Add( item11, 0, wx.GROW|wx.ALIGN_CENTER_VERTICAL|wx.ALL, 5 )
item4.Add( item8, 0, wx.GROW|wx.ALIGN_CENTER_HORIZONTAL|wx.ALL, 5 )
item0.Add( item4, 0, wx.GROW|wx.ALIGN_CENTER_VERTICAL, 5 )
if set_sizer == True:
parent.SetSizer( item0 )
if call_fit == True:
item0.SetSizeHints( parent )
return item0
ID_LAYOUT_CAPTION = 10017
ID_LAYOUT_LIST = 10018
ID_LAYOUT_TEXT = 10019
def WizardLayoutPanel( parent, call_fit = True, set_sizer = True ):
item0 = wx.BoxSizer( wx.VERTICAL )
item1 = wx.StaticText( parent, ID_TEXT, "Layout", wx.DefaultPosition, wx.DefaultSize, 0 )
item1.SetFont( wx.Font( 16, wx.SWISS, wx.NORMAL, wx.BOLD ) )
item0.Add( item1, 0, wx.ALIGN_CENTER_VERTICAL|wx.ALL, 5 )
item2 = wx.StaticText( parent, ID_LAYOUT_CAPTION, "", wx.DefaultPosition, [-1,60], wx.ST_NO_AUTORESIZE )
item0.Add( item2, 0, wx.GROW|wx.ALIGN_CENTER_VERTICAL|wx.ALL, 5 )
item3 = wx.StaticLine( parent, ID_LINE, wx.DefaultPosition, [20,-1], wx.LI_HORIZONTAL )
item0.Add( item3, 0, wx.GROW|wx.ALIGN_CENTER_VERTICAL|wx.ALL, 5 )
item4 = wx.BoxSizer( wx.HORIZONTAL )
item6 = wx.StaticBox( parent, -1, "Available brackets" )
item5 = wx.StaticBoxSizer( item6, wx.HORIZONTAL )
item7 = wx.ListBox( parent, ID_LAYOUT_LIST, wx.DefaultPosition, [180,200], [], wx.LB_SINGLE|wx.LB_SORT )
item5.Add( item7, 0, wx.GROW|wx.ALL, 5 )
item4.Add( item5, 0, wx.GROW|wx.ALIGN_CENTER_HORIZONTAL|wx.ALL, 5 )
item9 = wx.StaticBox( parent, -1, "Description" )
item8 = wx.StaticBoxSizer( item9, wx.HORIZONTAL )
item10 = wx.TextCtrl( parent, ID_LAYOUT_TEXT, "", wx.DefaultPosition, [250,40], wx.TE_MULTILINE|wx.TE_READONLY )
item8.Add( item10, 0, wx.GROW|wx.ALIGN_CENTER_VERTICAL|wx.ALL, 5 )
item4.Add( item8, 0, wx.GROW|wx.ALIGN_CENTER_HORIZONTAL|wx.ALL, 5 )
item0.Add( item4, 0, wx.GROW|wx.ALIGN_CENTER_VERTICAL, 5 )
if set_sizer == True:
parent.SetSizer( item0 )
if call_fit == True:
item0.SetSizeHints( parent )
return item0
ID_FINISHED_CAPTION = 10020
def WizardFinishedPanel( parent, call_fit = True, set_sizer = True ):
item0 = wx.BoxSizer( wx.VERTICAL )
item1 = wx.StaticText( parent, ID_TEXT, "Finished", wx.DefaultPosition, wx.DefaultSize, 0 )
item1.SetFont( wx.Font( 16, wx.SWISS, wx.NORMAL, wx.BOLD ) )
item0.Add( item1, 0, wx.ALIGN_CENTER_VERTICAL|wx.ALL, 5 )
item2 = wx.StaticText( parent, ID_FINISHED_CAPTION, "", wx.DefaultPosition, [-1,40], wx.ST_NO_AUTORESIZE )
item0.Add( item2, 0, wx.GROW|wx.ALIGN_CENTER_VERTICAL|wx.ALL, 5 )
item3 = wx.StaticLine( parent, ID_LINE, wx.DefaultPosition, [20,-1], wx.LI_HORIZONTAL )
item0.Add( item3, 0, wx.GROW|wx.ALIGN_CENTER_VERTICAL|wx.ALL, 5 )
if set_sizer == True:
parent.SetSizer( item0 )
if call_fit == True:
item0.SetSizeHints( parent )
return item0
ID_WINNER_LIST = 10021
ID_SCOREPOINTS_CHECK = 10022
ID_RESULT_TYPE_RADIO = 10023
ID_RESULT_PANEL = 10024
wxID_OK = 5100
wxID_CANCEL = 5101
def CreateMatchDialog( parent, call_fit = True, set_sizer = True ):
item0 = wx.BoxSizer( wx.VERTICAL )
item2 = wx.StaticBox( parent, -1, "Winner" )
item1 = wx.StaticBoxSizer( item2, wx.VERTICAL )
item3 = wx.ListBox( parent, ID_WINNER_LIST, wx.DefaultPosition, [60,30], [], wx.LB_SINGLE )
item1.Add( item3, 0, wx.GROW|wx.ALIGN_CENTER_VERTICAL|wx.ALL, 5 )
item4 = wx.CheckBox( parent, ID_SCOREPOINTS_CHECK, "Score team points", wx.DefaultPosition, wx.DefaultSize, 0 )
item1.Add( item4, 0, wx.ALIGN_CENTER_VERTICAL|wx.ALL, 5 )
item0.Add( item1, 0, wx.GROW|wx.ALIGN_CENTER_VERTICAL|wx.ALL, 5 )
item5 = wx.RadioBox( parent, ID_RESULT_TYPE_RADIO, "Win type", wx.DefaultPosition, wx.DefaultSize,
["None","Decision","Pin","Default","Bye"] , 2, wx.RA_SPECIFY_ROWS )
item0.Add( item5, 0, wx.GROW|wx.ALIGN_CENTER_VERTICAL|wx.ALL, 5 )
item7 = wx.StaticBox( parent, -1, "Result" )
item6 = wx.StaticBoxSizer( item7, wx.VERTICAL )
parent.result_sizer = item6
item8 = wx.Panel( parent, ID_RESULT_PANEL, wx.DefaultPosition, [200,30], 0 )
item6.Add( item8, 0, wx.ALIGN_CENTER_VERTICAL|wx.ALL, 5 )
item0.Add( item6, 0, wx.GROW|wx.ALIGN_CENTER_VERTICAL|wx.ALL, 5 )
item9 = wx.BoxSizer( wx.HORIZONTAL )
item10 = wx.Button( parent, wxID_OK, "OK", wx.DefaultPosition, wx.DefaultSize, 0 )
item10.SetDefault()
item9.Add( item10, 0, wx.ALIGN_CENTER|wx.ALL, 5 )
item11 = wx.Button( parent, wxID_CANCEL, "Cancel", wx.DefaultPosition, wx.DefaultSize, 0 )
item9.Add( item11, 0, wx.ALIGN_CENTER|wx.ALL, 5 )
item0.Add( item9, 0, wx.ALIGN_RIGHT|wx.ALIGN_CENTER_VERTICAL|wx.ALL, 5 )
if set_sizer == True:
parent.SetSizer( item0 )
if call_fit == True:
item0.SetSizeHints( parent )
return item0
ID_TYPE_RADIOBOX = 10025
ID_ROUNDS_LIST = 10026
wxID_OK = 5100
wxID_CANCEL = 5101
def CreatePrintDialog( parent, call_fit = True, set_sizer = True ):
item0 = wx.BoxSizer( wx.VERTICAL )
item1 = wx.RadioBox( parent, ID_TYPE_RADIOBOX, "Document", wx.DefaultPosition, wx.DefaultSize,
["&Brackets","B&outs","&Scores","&Places"] , 1, wx.RA_SPECIFY_ROWS )
item0.Add( item1, 0, wx.GROW|wx.ALIGN_CENTER_VERTICAL|wx.ALL, 5 )
item2 = wx.BoxSizer( wx.HORIZONTAL )
item4 = wx.StaticBox( parent, -1, "Weights" )
item3 = wx.StaticBoxSizer( item4, wx.VERTICAL )
item5 = wx.ListBox( parent, ID_WEIGHTS_LIST, wx.DefaultPosition, [200,200], [], wx.LB_EXTENDED )
item3.Add( item5, 0, wx.ALIGN_CENTER|wx.ALL, 5 )
item2.Add( item3, 0, wx.ALIGN_CENTER|wx.ALL, 5 )
item7 = wx.StaticBox( parent, -1, "Rounds" )
item6 = wx.StaticBoxSizer( item7, wx.VERTICAL )
item8 = wx.ListBox( parent, ID_ROUNDS_LIST, wx.DefaultPosition, [200,200], [], wx.LB_EXTENDED )
item8.Enable(False)
item6.Add( item8, 0, wx.ALIGN_CENTER|wx.ALL, 5 )
item2.Add( item6, 0, wx.ALIGN_CENTER|wx.ALL, 5 )
item0.Add( item2, 0, wx.ALIGN_CENTER|wx.ALL, 5 )
item9 = wx.BoxSizer( wx.HORIZONTAL )
item10 = wx.Button( parent, wxID_OK, "OK", wx.DefaultPosition, wx.DefaultSize, 0 )
item10.SetDefault()
item9.Add( item10, 0, wx.ALIGN_CENTER|wx.ALL, 5 )
item11 = wx.Button( parent, wxID_CANCEL, "Cancel", wx.DefaultPosition, wx.DefaultSize, 0 )
item9.Add( item11, 0, wx.ALIGN_CENTER|wx.ALL, 5 )
item0.Add( item9, 0, wx.ALIGN_RIGHT|wx.ALIGN_CENTER_VERTICAL|wx.ALL, 5 )
if set_sizer == True:
parent.SetSizer( item0 )
if call_fit == True:
item0.SetSizeHints( parent )
return item0
ID_POINTADJUST_TEXT = 10027
ID_POINTADJUST_SPIN = 10028
ID_WRESTLERS_LIST = 10029
wxID_OK = 5100
wxID_CANCEL = 5101
def CreateTeamDialog( parent, call_fit = True, set_sizer = True ):
item0 = wx.BoxSizer( wx.VERTICAL )
item2 = wx.StaticBox( parent, -1, "Point adjustment" )
item1 = wx.StaticBoxSizer( item2, wx.HORIZONTAL )
item3 = wx.TextCtrl( parent, ID_POINTADJUST_TEXT, "", wx.DefaultPosition, [60,-1], wx.TE_READONLY )
item1.Add( item3, 0, wx.ALIGN_CENTER|wx.LEFT|wx.TOP|wx.BOTTOM, 5 )
item4 = wx.SpinButton( parent, ID_POINTADJUST_SPIN, wx.DefaultPosition, [20,22], wx.SP_WRAP )
item4.SetRange( 0, 100 )
item4.SetValue( 0 )
item1.Add( item4, 0, wx.ALIGN_CENTER|wx.RIGHT|wx.TOP|wx.BOTTOM, 5 )
item0.Add( item1, 0, wx.GROW|wx.ALIGN_CENTER_VERTICAL|wx.ALL, 5 )
item6 = wx.StaticBox( parent, -1, "Wrestlers" )
item5 = wx.StaticBoxSizer( item6, wx.VERTICAL )
item7 = wx.ListCtrl( parent, ID_WRESTLERS_LIST, wx.DefaultPosition, [200,200], wx.LC_REPORT|wx.SUNKEN_BORDER )
item5.Add( item7, 0, wx.ALIGN_CENTER|wx.ALL, 5 )
item0.Add( item5, 0, wx.ALIGN_CENTER|wx.ALL, 5 )
item8 = wx.BoxSizer( wx.HORIZONTAL )
item9 = wx.Button( parent, wxID_OK, "OK", wx.DefaultPosition, wx.DefaultSize, 0 )
item9.SetDefault()
item8.Add( item9, 0, wx.ALIGN_CENTER|wx.ALL, 5 )
item10 = wx.Button( parent, wxID_CANCEL, "Cancel", wx.DefaultPosition, wx.DefaultSize, 0 )
item8.Add( item10, 0, wx.ALIGN_CENTER|wx.ALL, 5 )
item0.Add( item8, 0, wx.ALIGN_RIGHT|wx.ALIGN_CENTER_VERTICAL|wx.ALL, 5 )
if set_sizer == True:
parent.SetSizer( item0 )
if call_fit == True:
item0.SetSizeHints( parent )
return item0
ID_RESULTS_LIST = 10030
wxID_OK = 5100
def CreateFastFallDialog( parent, call_fit = True, set_sizer = True ):
item0 = wx.BoxSizer( wx.VERTICAL )
item1 = wx.ListCtrl( parent, ID_RESULTS_LIST, wx.DefaultPosition, [450,300], wx.LC_REPORT|wx.SUNKEN_BORDER )
item0.Add( item1, 0, wx.ALIGN_CENTER|wx.ALL, 5 )
item2 = wx.BoxSizer( wx.HORIZONTAL )
item3 = wx.Button( parent, wxID_OK, "OK", wx.DefaultPosition, wx.DefaultSize, 0 )
item3.SetDefault()
item2.Add( item3, 0, wx.ALIGN_CENTER|wx.ALL, 5 )
item0.Add( item2, 0, wx.ALIGN_RIGHT|wx.ALIGN_CENTER_VERTICAL|wx.ALL, 5 )
if set_sizer == True:
parent.SetSizer( item0 )
if call_fit == True:
item0.SetSizeHints( parent )
return item0
ID_STATICBITMAP = 10031
def CreateAboutDialog( parent, call_fit = True, set_sizer = True ):
item0 = wx.BoxSizer( wx.VERTICAL )
item1 = wx.StaticBitmap( parent, ID_STATICBITMAP, LogoBitmaps( 0 ), wx.DefaultPosition, wx.DefaultSize )
item0.Add( item1, 0, wx.GROW|wx.ALIGN_CENTER_VERTICAL|wx.ALL, 5 )
item2 = wx.StaticText( parent, ID_TEXT,
"Dedicated to my father, my mother, and my brother, who all know the glory and pain of wrestling.\n"
"Special thanks to David Greenleaf, the very first wrestling nerd.\n"
"\n"
"Copyright (c) 2003, 2004, 2005 Peter Parente under the terms of the MIT public license.\n"
"\n"
"See the included LICENSE.txt file included with this software for restrictions on the use and\n"
"distribution of this software.",
wx.DefaultPosition, wx.DefaultSize, 0 )
item2.SetBackgroundColour( wx.WHITE )
item0.Add( item2, 0, wx.ALIGN_CENTER|wx.ALL, 5 )
if set_sizer == True:
parent.SetSizer( item0 )
if call_fit == True:
item0.SetSizeHints( parent )
return item0
ID_SCORES_LIST = 10032
def CreateScoreFrame( parent, call_fit = True, set_sizer = True ):
item0 = wx.FlexGridSizer( 0, 2, 0, 0 )
item0.AddGrowableCol( 0 )
item0.AddGrowableRow( 0 )
item1 = wx.ListCtrl( parent, ID_SCORES_LIST, wx.DefaultPosition, [160,120], wx.LC_REPORT|wx.SUNKEN_BORDER )
item1.SetFont( wx.Font( 14, wx.SWISS, wx.NORMAL, wx.NORMAL ) )
item0.Add( item1, 0, wx.GROW|wx.ALIGN_CENTER_VERTICAL, 5 )
if set_sizer == True:
parent.SetSizer( item0 )
if call_fit == True:
item0.SetSizeHints( parent )
return item0
ID_TEAMS_CHOICE = 10033
wxID_OK = 5100
wxID_CANCEL = 5101
def CreateTeamSpellingDialog( parent, call_fit = True, set_sizer = True ):
item0 = wx.BoxSizer( wx.VERTICAL )
item1 = wx.StaticText( parent, ID_TEXT, "Current name", wx.DefaultPosition, wx.DefaultSize, 0 )
item0.Add( item1, 0, wx.ALIGN_CENTER_VERTICAL|wx.LEFT|wx.RIGHT|wx.TOP, 5 )
item2 = wx.Choice( parent, ID_TEAMS_CHOICE, wx.DefaultPosition, [200,-1], [], 0 )
item0.Add( item2, 0, wx.ALIGN_CENTER|wx.ALL, 5 )
item3 = wx.StaticText( parent, ID_TEXT, "New name", wx.DefaultPosition, wx.DefaultSize, 0 )
item0.Add( item3, 0, wx.ALIGN_CENTER_VERTICAL|wx.LEFT|wx.RIGHT|wx.TOP, 5 )
item4 = wx.TextCtrl( parent, ID_NAME_TEXT, "", wx.DefaultPosition, [200,-1], 0 )
item0.Add( item4, 0, wx.ALIGN_CENTER_VERTICAL|wx.ALL, 5 )
item5 = wx.BoxSizer( wx.HORIZONTAL )
item6 = wx.Button( parent, wxID_OK, "OK", wx.DefaultPosition, wx.DefaultSize, 0 )
item6.SetDefault()
item5.Add( item6, 0, wx.ALIGN_CENTER|wx.ALL, 5 )
item7 = wx.Button( parent, wxID_CANCEL, "Cancel", wx.DefaultPosition, wx.DefaultSize, 0 )
item5.Add( item7, 0, wx.ALIGN_CENTER|wx.ALL, 5 )
item0.Add( item5, 0, wx.ALIGN_RIGHT|wx.ALIGN_CENTER_VERTICAL|wx.ALL, 5 )
if set_sizer == True:
parent.SetSizer( item0 )
if call_fit == True:
item0.SetSizeHints( parent )
return item0
ID_WRESTLERS_CHOICE = 10034
wxID_OK = 5100
wxID_CANCEL = 5101
def CreateWrestlerSpellingDialog( parent, call_fit = True, set_sizer = True ):
item0 = wx.BoxSizer( wx.VERTICAL )
item1 = wx.StaticText( parent, ID_TEXT, "Team", wx.DefaultPosition, wx.DefaultSize, 0 )
item0.Add( item1, 0, wx.ALIGN_CENTER_VERTICAL|wx.LEFT|wx.RIGHT|wx.TOP, 5 )
item2 = wx.Choice( parent, ID_TEAMS_CHOICE, wx.DefaultPosition, [200,-1], [], 0 )
item0.Add( item2, 0, wx.ALIGN_CENTER|wx.ALL, 5 )
item3 = wx.StaticText( parent, ID_TEXT, "Current wrestler name", wx.DefaultPosition, wx.DefaultSize, 0 )
item0.Add( item3, 0, wx.ALIGN_CENTER_VERTICAL|wx.LEFT|wx.RIGHT|wx.TOP, 5 )
item4 = wx.Choice( parent, ID_WRESTLERS_CHOICE, wx.DefaultPosition, [200,-1], [], 0 )
item0.Add( item4, 0, wx.ALIGN_CENTER|wx.ALL, 5 )
item5 = wx.StaticText( parent, ID_TEXT, "New wrestler name", wx.DefaultPosition, wx.DefaultSize, 0 )
item0.Add( item5, 0, wx.ALIGN_CENTER_VERTICAL|wx.LEFT|wx.RIGHT|wx.TOP, 5 )
item6 = wx.TextCtrl( parent, ID_NAME_TEXT, "", wx.DefaultPosition, [200,-1], 0 )
item0.Add( item6, 0, wx.ALIGN_CENTER_VERTICAL|wx.ALL, 5 )
item7 = wx.BoxSizer( wx.HORIZONTAL )
item8 = wx.Button( parent, wxID_OK, "OK", wx.DefaultPosition, wx.DefaultSize, 0 )
item8.SetDefault()
item7.Add( item8, 0, wx.ALIGN_CENTER|wx.ALL, 5 )
item9 = wx.Button( parent, wxID_CANCEL, "Cancel", wx.DefaultPosition, wx.DefaultSize, 0 )
item7.Add( item9, 0, wx.ALIGN_CENTER|wx.ALL, 5 )
item0.Add( item7, 0, wx.ALIGN_RIGHT|wx.ALIGN_CENTER_VERTICAL|wx.ALL, 5 )
if set_sizer == True:
parent.SetSizer( item0 )
if call_fit == True:
item0.SetSizeHints( parent )
return item0
# Menubar functions
ID_NEW_MENU = 10035
ID_OPEN_MENU = 10036
ID_MENU = 10037
ID_SAVE_MENU = 10038
ID_SAVEAS_MENU = 10039
ID_BACKUP_MENU = 10040
ID_EXPORT_MENU = 10041
ID_PRINT_MENU = 10042
ID_PRINTPREVIEW_MENU = 10043
ID_EXIT_MENU = 10044
ID_FILE_MENU = 10045
ID_FASTFALL_MENU = 10046
ID_NUMBOUTS_MENU = 10047
ID_SCOREWIN_MENU = 10048
ID_QUERY_MENU = 10049
ID_ADDTEAM_MENU = 10050
ID_REMOVETEAM_MENU = 10051
ID_TEAMSPELLING_MENU = 10052
ID_WRESTLERSPELLING_MENU = 10053
ID_TOURNAMENTNAME_MENU = 10054
ID_TOOLS_MENU = 10055
ID_ABOUT_MENU = 10056
ID_HELP_MENU = 10057
def CreateMenuBar():
item0 = wx.MenuBar()
item1 = wx.Menu()
item1.Append( ID_NEW_MENU, "&New...\tCtrl-N", "" )
item1.Append( ID_OPEN_MENU, "&Open...\tCtrl-O", "" )
item1.AppendSeparator()
item1.Append( ID_SAVE_MENU, "&Save\tCtrl-S", "" )
item1.Append( ID_SAVEAS_MENU, "Save &as...\tShift-Ctrl-S", "" )
item1.Append( ID_BACKUP_MENU, "&Backup...", "" )
item1.AppendSeparator()
item1.Append( ID_EXPORT_MENU, "Export...\tCtrl-E", "" )
item1.AppendSeparator()
item1.Append( ID_PRINT_MENU, "&Print...\tCtrl-P", "" )
item1.Append( ID_PRINTPREVIEW_MENU, "Print previe&w...", "" )
item1.AppendSeparator()
item1.Append( ID_EXIT_MENU, "E&xit", "" )
item0.Append( item1, "&File" )
item2 = wx.Menu()
item2.Append( ID_FASTFALL_MENU, "&Fast fall...\tCtrl-F", "" )
item2.Append( ID_NUMBOUTS_MENU, "&Bout count...\tCtrl-B", "" )
item2.AppendSeparator()
item2.Append( ID_SCOREWIN_MENU, "Score &window...\tCtrl-W", "" )
item0.Append( item2, "&Query" )
item3 = wx.Menu()
item3.Append( ID_ADDTEAM_MENU, "&Add a team...", "" )
item3.Append( ID_REMOVETEAM_MENU, "&Remove a team...", "" )
item3.AppendSeparator()
item3.Append( ID_TEAMSPELLING_MENU, "Change team spelling...", "" )
item3.Append( ID_WRESTLERSPELLING_MENU, "Change wrestler spelling...", "" )
item3.Append( ID_TOURNAMENTNAME_MENU, "Change tournament name...", "" )
item0.Append( item3, "&Tools" )
item4 = wx.Menu()
item4.Append( ID_ABOUT_MENU, "&About", "" )
item0.Append( item4, "&Help" )
return item0
# Toolbar functions
# Bitmap functions
def LogoBitmaps( index ):
if index == 0:
return wx.Image( "WrestlingNerd_wdr/LogoBitmaps_0.png", wx.BITMAP_TYPE_PNG ).ConvertToBitmap()
return wx.NullBitmap
# End of generated file
#-----------------------------------------------------------------------------
# File: inql/generators/query.py
# Repo: AmesCornish/inql (Apache-2.0)
#-----------------------------------------------------------------------------
from __future__ import print_function
import json
from inql.utils import open, simplify_introspection
ORDER = {
"scalar": 0,
"enum": 1,
"type": 2,
"input": 3,
"interface": 4,
"union": 5
}
MINUS_INFINITE = -10000
def reverse_lookup_order(field, reverse_lookup):
try:
if field['required']:
ret = 0
else:
ret = 10
if field['array']:
ret += 100
if 'args' in field:
ret += 1000
ret += ORDER[reverse_lookup[field['type']]]
return ret
except KeyError:
return 10000
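# The ranking above orders GraphQL fields so that required scalars expand
# first and optional, array, or parameterized object fields later, with
# unknown types pushed to the very end. A self-contained restatement — the
# names ``ORDER_DEMO`` and ``rank`` are illustrative duplicates so the sketch
# runs on its own:

```python
ORDER_DEMO = {"scalar": 0, "enum": 1, "type": 2, "input": 3,
              "interface": 4, "union": 5}

def rank(field, reverse_lookup):
    """Lower rank = expanded earlier; mirrors reverse_lookup_order above."""
    try:
        ret = 0 if field["required"] else 10   # required fields first
        if field["array"]:
            ret += 100                          # arrays after plain fields
        if "args" in field:
            ret += 1000                         # parameterized fields last
        ret += ORDER_DEMO[reverse_lookup[field["type"]]]
        return ret
    except KeyError:
        return 10000                            # unknown shapes sink to the end

lookup = {"String": "scalar", "User": "type"}
required_scalar = {"required": True, "array": False, "type": "String"}
optional_obj_with_args = {"required": False, "array": False,
                          "type": "User", "args": {}}
```

# Sorting a type's fields by this key is what makes the generated queries
# prefer cheap, mandatory leaves before descending into nested objects.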
def recurse_fields(schema, reverse_lookup, t, max_nest=7, non_required_levels=1, dinput=None,
params_replace=lambda schema, reverse_lookup, elem: elem, recursed=0):
"""
Generates a JSON representation of the AST object representing a query
:param schema:
the output of a simplified schema
:param reverse_lookup:
a support hash that goes from typename to graphql type, useful to navigate the schema in O(1)
:param t:
type that you need to generate the AST for, since it is recursive it may be anything inside the graph
:param max_nest:
        maximum number of recursive calls before returning the type name; this is needed in particularly broken cases
        where recurse_fields may not exit autonomously (e.g. hackerone.com uses unions to create sql or/and/not
        statements). Consider that this will partially break params_replace calls.
:param non_required_levels:
expand up to non_required_levels levels automatically.
:param dinput:
        the output object; it may also be provided from the outside.
:param params_replace:
        a callback that takes (schema, reverse_lookup, elem) as parameters and returns a replacement for the element.
        Needed in case you want to generate real parameters for queries.
"""
if max_nest == 0:
return params_replace(schema, reverse_lookup, t)
if t not in reverse_lookup:
return params_replace(schema, reverse_lookup, t)
if dinput is None:
dinput = {}
if reverse_lookup[t] in ['type', 'interface', 'input']:
for inner_t, v in sorted(schema[reverse_lookup[t]][t].items(), key=lambda kv: reverse_lookup_order(kv[1], reverse_lookup)):
if inner_t == '__implements':
for iface in v.keys():
interface_recurse_fields = recurse_fields(schema, reverse_lookup, iface, max_nest=max_nest,
non_required_levels=non_required_levels,
params_replace=params_replace)
dinput.update(interface_recurse_fields)
continue
# try to add at least one required inner, if you should not recurse anymore
recurse = non_required_levels > 0 or (v['required'] and recursed <= 0) # required_only => v['required']
if recurse:
dinput[inner_t] = recurse_fields(schema, reverse_lookup, v['type'], max_nest=max_nest - 1,
non_required_levels=non_required_levels - 1,
params_replace=params_replace)
recursed += 1
if recurse and 'args' in v:
if inner_t not in dinput or type(dinput[inner_t]) is not dict:
dinput[inner_t] = {}
dinput[inner_t]["args"] = {}
for inner_a, inner_v in sorted(v['args'].items(), key=lambda kv: reverse_lookup_order(kv[1], reverse_lookup)):
# try to add at least a parameter, even if there are no required parameters
recurse_inner = non_required_levels > 0 or inner_v['required'] # required_only => v['required']
if recurse_inner:
arg = recurse_fields(schema, reverse_lookup, inner_v['type'], max_nest=max_nest-1, recursed=MINUS_INFINITE,
non_required_levels=non_required_levels-1, params_replace=params_replace)
if 'array' in inner_v and inner_v['array']:
if type(arg) is dict:
arg = [arg]
else:
arg = "[%s]" % arg
if 'required' in inner_v and inner_v['required']:
if type(arg) is not dict:
arg = "!%s" % arg
else:
                                pass # XXX: don't handle required array markers; this is a bug, but it simplifies the code a lot
dinput[inner_t]['args'][inner_a] = arg
if len(dinput[inner_t]["args"]) == 0:
del dinput[inner_t]["args"]
if len(dinput[inner_t]) == 0:
del dinput[inner_t]
if len(dinput) == 0 and (t not in reverse_lookup or reverse_lookup[t] not in ['enum', 'scalar']):
items = list(schema[reverse_lookup[t]][t].items())
if len(items) > 0:
inner_t, v = items[0]
dinput[inner_t] = recurse_fields(schema, reverse_lookup, v['type'], max_nest=max_nest - 1,
non_required_levels=non_required_levels - 1, params_replace=params_replace)
elif reverse_lookup[t] == 'union':
# select the first type of the union
for union in schema['union'][t].keys():
dinput["... on %s" % union] = recurse_fields(schema, reverse_lookup, union, max_nest=max_nest,
non_required_levels=non_required_levels,
params_replace=params_replace)
elif reverse_lookup[t] in ['enum', 'scalar']:
# return the type since it is an enum
return params_replace(schema, reverse_lookup, t)
return dinput
def dict_to_args(d):
"""
Generates a string representing query arguments from an AST dict.
:param d: AST dict
"""
args = []
for k, v in d.items():
args.append("%s:%s" % (k, json.dumps(v).replace('"', '').replace("u'", "").replace("'", "").replace('@', '"')))
if len(args) > 0:
return "(%s)" % ', '.join(args)
else:
return ""
def dict_to_qbody(d, prefix=''):
"""
Generates a string representing a query body from an AST dict.
:param d: AST dict
:param prefix: needed in case it will recurse
"""
if type(d) is not dict:
return ''
s = ''
iprefix = prefix + '\t'
args = ''
for k, v in d.items():
if k == 'args':
args = dict_to_args(v)
elif type(v) is dict:
s += '\n' + iprefix + k + dict_to_qbody(v, prefix=iprefix)
else:
s += '\n' + iprefix + k
if len(s) > 0:
return "%s {%s\n%s}" % (args, s, prefix)
else:
return args
def preplace(schema, reverse_lookup, t):
"""
Replaces basic types and enums with default values.
:param schema:
the output of a simplified schema
:param reverse_lookup:
a support hash that goes from typename to graphql type, useful to navigate the schema in O(1)
:param t:
type that you need to generate the AST for, since it is recursive it may be anything inside the graph
"""
if t == 'String':
return '@code@'
elif t == 'Int':
return 1334
elif t == 'Boolean':
return 'true'
elif t == 'Float':
return 0.1334
elif t == 'ID':
return 14
elif reverse_lookup[t] == 'enum':
return list(schema['enum'][t].keys())[0]
elif reverse_lookup[t] == 'scalar':
# scalar may be any type, so the AST can be anything as well
# since the logic is custom implemented I have no generic way of replacing them
# for this reason we return it back as they are
return t
else:
return t
def generate(argument, qpath="%s/%s", detect=True, green_print=lambda s: print(s)):
"""
Generate query templates
:param argument: introspection query result
:param qpath:
directory template where to output the queries, first parameter is type of query and second is query name
:param detect:
retrieve placeholders according to arg type
:param green_print:
implements print in green
:return: None
"""
s = simplify_introspection(argument)
rev = {
"String": 'scalar',
"Int": 'scalar',
"Float": 'scalar',
"Boolean": 'scalar',
"ID": 'scalar',
}
for t, v in s.items():
for k in v.keys():
rev[k] = t
for qtype, qvalues in s['schema'].items():
green_print("Writing %s Templates" % qtype)
if detect:
rec = recurse_fields(s, rev, qvalues['type'], non_required_levels=2, params_replace=preplace)
else:
rec = recurse_fields(s, rev, qvalues['type'], non_required_levels=2)
for qname, qval in rec.items():
print("Writing %s %s" % (qname, qtype))
with open(qpath % (qtype, '%s.query' % qname), 'w') as ofile:
body = "%s {\n\t%s%s\n}" % (qtype, qname, dict_to_qbody(qval, prefix='\t'))
if detect:
body = body.replace('!', '')
query = {"query": body}
ofile.write(json.dumps(query))
green_print("DONE") | 38.504 | 131 | 0.559942 |
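As a standalone illustration of how the AST dict built by `recurse_fields` is rendered into a GraphQL query, here is a self-contained sketch of the `dict_to_args`/`dict_to_qbody` logic (reproduced without the InQL-specific placeholder handling; the sample `ast` dict is hypothetical):

```python
import json

def dict_to_args(d):
    # Render an "args" sub-dict as a GraphQL argument list, e.g. (id:1334)
    args = ["%s:%s" % (k, json.dumps(v).replace('"', '').replace('@', '"'))
            for k, v in d.items()]
    return "(%s)" % ', '.join(args) if args else ""

def dict_to_qbody(d, prefix=''):
    # Recursively render nested selection sets, one tab per nesting level
    if not isinstance(d, dict):
        return ''
    s, args, iprefix = '', '', prefix + '\t'
    for k, v in d.items():
        if k == 'args':
            args = dict_to_args(v)
        elif isinstance(v, dict):
            s += '\n' + iprefix + k + dict_to_qbody(v, prefix=iprefix)
        else:
            s += '\n' + iprefix + k
    return "%s {%s\n%s}" % (args, s, prefix) if s else args

ast = {'user': {'args': {'id': 1334}, 'name': 'String', 'email': 'String'}}
query = "query {\n\tuser" + dict_to_qbody(ast['user'], prefix='\t') + "\n}"
print(query)
```

This mirrors the traversal order of the real generator: arguments are emitted first, then each scalar leaf becomes a line in the selection set.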
f2cd5c078f0ce3b37448fd7ae0ad166b83aed443 | 3,090 | py | Python | src/azure-cli/azure/cli/command_modules/acr/_constants.py | YuanyuanNi/azure-cli | 63844964374858bfacd209bfe1b69eb456bd64ca | [
"MIT"
] | 3,287 | 2016-07-26T17:34:33.000Z | 2022-03-31T09:52:13.000Z | src/azure-cli/azure/cli/command_modules/acr/_constants.py | YuanyuanNi/azure-cli | 63844964374858bfacd209bfe1b69eb456bd64ca | [
"MIT"
] | 19,206 | 2016-07-26T07:04:42.000Z | 2022-03-31T23:57:09.000Z | src/azure-cli/azure/cli/command_modules/acr/_constants.py | YuanyuanNi/azure-cli | 63844964374858bfacd209bfe1b69eb456bd64ca | [
"MIT"
] | 2,575 | 2016-07-26T06:44:40.000Z | 2022-03-31T22:56:06.000Z | # --------------------------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for license information.
# --------------------------------------------------------------------------------------------
# pylint: disable=line-too-long
from azure.cli.core.profiles import ResourceType
ACR_RESOURCE_PROVIDER = 'Microsoft.ContainerRegistry'
REGISTRY_RESOURCE_TYPE = ACR_RESOURCE_PROVIDER + '/registries'
WEBHOOK_RESOURCE_TYPE = REGISTRY_RESOURCE_TYPE + '/webhooks'
REPLICATION_RESOURCE_TYPE = REGISTRY_RESOURCE_TYPE + '/replications'
TASK_RESOURCE_TYPE = REGISTRY_RESOURCE_TYPE + '/tasks'
TASK_VALID_VSTS_URLS = ['visualstudio.com', 'dev.azure.com']
TASK_RESOURCE_ID_TEMPLATE = '/subscriptions/{sub_id}/resourceGroups/{rg}/providers/Microsoft.ContainerRegistry/registries/{reg}/tasks/{name}'
TASKRUN_RESOURCE_TYPE = REGISTRY_RESOURCE_TYPE + '/taskruns'
ACR_TASK_YAML_DEFAULT_NAME = 'acb.yaml'
ACR_CACHED_BUILDER_IMAGES = ('cloudfoundry/cnb:bionic',)
ACR_NULL_CONTEXT = '/dev/null'
ACR_TASK_QUICKTASK = 'quicktask'
ACR_RUN_DEFAULT_TIMEOUT_IN_SEC = 60 * 60 # 60 minutes
def get_classic_sku(cmd):
SkuName = cmd.get_models('SkuName')
return [SkuName.classic.value]
def get_managed_sku(cmd):
SkuName = cmd.get_models('SkuName')
return [SkuName.basic.value, SkuName.standard.value, SkuName.premium.value]
def get_premium_sku(cmd):
SkuName = cmd.get_models('SkuName')
return [SkuName.premium.value]
def get_valid_os(cmd):
OS = cmd.get_models('OS', operation_group='task_runs')
return [item.value.lower() for item in OS]
def get_valid_architecture(cmd):
Architecture = cmd.get_models('Architecture', operation_group='task_runs')
return [item.value.lower() for item in Architecture]
def get_valid_variant(cmd):
Variant = cmd.get_models('Variant', operation_group='task_runs')
return [item.value.lower() for item in Variant]
def get_finished_run_status(cmd):
RunStatus = cmd.get_models('RunStatus', operation_group='task_runs')
return [RunStatus.succeeded.value,
RunStatus.failed.value,
RunStatus.canceled.value,
RunStatus.error.value,
RunStatus.timeout.value]
def get_succeeded_run_status(cmd):
RunStatus = cmd.get_models('RunStatus', operation_group='task_runs')
return [RunStatus.succeeded.value]
def get_acr_task_models(cmd):
from azure.cli.core.profiles import get_sdk
return get_sdk(cmd.cli_ctx, ResourceType.MGMT_CONTAINERREGISTRY, 'models', operation_group='tasks')
def get_succeeded_agentpool_status(cmd):
AgentPoolStatus = cmd.get_models('ProvisioningState', operation_group='agent_pools')
return [AgentPoolStatus.succeeded.value]
def get_finished_agentpool_status(cmd):
AgentPoolStatus = cmd.get_models('ProvisioningState', operation_group='agent_pools')
return [AgentPoolStatus.succeeded.value,
AgentPoolStatus.failed.value,
AgentPoolStatus.canceled.value]
| 34.333333 | 141 | 0.714239 |
5f655a225d27fdc3e1f8385af997646071b6a0e7 | 1,549 | py | Python | filters/mcInterface.py | MestreLion/mcedit | e998a47b7f702c352d642fdda76239162e9face3 | [
"0BSD"
] | 146 | 2015-01-02T18:11:31.000Z | 2022-03-01T09:05:55.000Z | filters/mcInterface.py | MestreLion/mcedit | e998a47b7f702c352d642fdda76239162e9face3 | [
"0BSD"
] | 4 | 2015-02-10T21:56:31.000Z | 2016-09-23T05:47:30.000Z | filters/mcInterface.py | MestreLion/mcedit | e998a47b7f702c352d642fdda76239162e9face3 | [
"0BSD"
] | 39 | 2015-01-06T21:27:46.000Z | 2022-03-27T16:30:51.000Z | #dummy mcInterface to adapt dudecon's interface to MCEdit's
class MCLevelAdapter(object):
def __init__(self, level, box):
self.level = level
self.box = box
def check_box_2d(self, x, z):
box = self.box
if x < box.minx or x >= box.maxx:
return False
if z < box.minz or z >= box.maxz:
return False
return True
def check_box_3d(self, x, y, z):
'''If the coordinates are within the box, return True, else return False'''
box = self.box
if not self.check_box_2d(x, z):
return False
if y < box.miny or y >= box.maxy:
return False
return True
def block(self, x, y, z):
if not self.check_box_3d(x, y, z):
return None
d = {}
d['B'] = self.level.blockAt(x, y, z)
d['D'] = self.level.blockDataAt(x, y, z)
return d
def set_block(self, x, y, z, d):
if not self.check_box_3d(x, y, z):
return None
if 'B' in d:
self.level.setBlockAt(x, y, z, d['B'])
if 'D' in d:
self.level.setBlockDataAt(x, y, z, d['D'])
def surface_block(self, x, z):
if not self.check_box_2d(x, z):
return None
y = self.level.heightMapAt(x, z)
y = max(0, y - 1)
d = self.block(x, y, z)
if d:
d['y'] = y
return d
SaveFile = MCLevelAdapter
#dict['L'] = self.level.blockLightAt(x,y,z)
#dict['S'] = self.level.skyLightAt(x,y,z)
| 26.706897 | 83 | 0.514526 |
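The adapter's half-open bounds checks can be sketched independently of MCEdit; `Box` here is a hypothetical stand-in for the selection-box object the filter receives:

```python
class Box:
    # Hypothetical stand-in for MCEdit's selection box (half-open extents)
    def __init__(self, minx, maxx, miny, maxy, minz, maxz):
        self.minx, self.maxx = minx, maxx
        self.miny, self.maxy = miny, maxy
        self.minz, self.maxz = minz, maxz

def check_box_3d(box, x, y, z):
    # True only when (x, y, z) lies inside the box; max bounds are exclusive
    return (box.minx <= x < box.maxx
            and box.miny <= y < box.maxy
            and box.minz <= z < box.maxz)

box = Box(0, 16, 0, 128, 0, 16)
print(check_box_3d(box, 8, 64, 8))    # True
print(check_box_3d(box, 16, 64, 8))   # False: x == maxx is outside
```

The exclusive upper bound matches the adapter's `x >= box.maxx` rejection test, which is why `block()` returns `None` for coordinates on the far faces.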
8f310ac0587f47743359581f766f0bcedeb34a89 | 991 | py | Python | wce_triage/components/sound.py | pfrouleau/wce-triage-v2 | 25610cda55f5cb2170e13e121ae1cbaa92ef7626 | [
"MIT"
] | 3 | 2019-07-25T03:24:23.000Z | 2021-06-23T14:01:34.000Z | wce_triage/components/sound.py | pfrouleau/wce-triage-v2 | 25610cda55f5cb2170e13e121ae1cbaa92ef7626 | [
"MIT"
] | 1 | 2019-12-20T16:04:19.000Z | 2019-12-20T16:04:19.000Z | wce_triage/components/sound.py | pfrouleau/wce-triage-v2 | 25610cda55f5cb2170e13e121ae1cbaa92ef7626 | [
"MIT"
] | 2 | 2019-07-25T03:24:26.000Z | 2021-02-14T05:27:11.000Z | #!/usr/bin/python3
# Copyright (c) 2019 Naoyuki tai
# MIT license - see LICENSE
import os
from .component import Component
def detect_sound_device():
detected = False
try:
for snd_dev in os.listdir("/dev/snd"):
if snd_dev[0:3] == 'pcm':
detected = True
break
pass
pass
except:
pass
return detected
class Sound(Component):
def __init__(self):
self.dev = detect_sound_device()
pass
def get_component_type(self):
return "Sound"
def decision(self, **kwargs):
if not self.dev:
return [{"component": self.get_component_type(),
"result": False,
"message": "Sound card: NOT DETECTED -- INSTALL SOUND CARD"}]
pass
else:
return [{"component": self.get_component_type(),
"result": False,
"message": "Sound card detected -- Hit [play] button"}]
pass
pass
#
if __name__ == "__main__":
sound = Sound()
print(sound.decision())
pass
| 19.431373 | 75 | 0.601413 |
9d1e82db76a0d57154e3592d136e027a040614b3 | 7,823 | py | Python | train_net.py | linjian93/tensorflow_homographynet | a1f0f9c6d16cc13559e7758f16b56b8086c2ad27 | [
"MIT"
] | 47 | 2017-03-30T08:41:00.000Z | 2021-08-25T03:52:12.000Z | train_net.py | linjian93/tensorflow_homographynet | a1f0f9c6d16cc13559e7758f16b56b8086c2ad27 | [
"MIT"
] | 4 | 2017-05-18T14:46:00.000Z | 2019-10-14T07:24:53.000Z | train_net.py | linjian93/tensorflow_homographynet | a1f0f9c6d16cc13559e7758f16b56b8086c2ad27 | [
"MIT"
] | 12 | 2017-06-20T13:59:55.000Z | 2020-03-31T11:40:25.000Z | import tensorflow as tf
import os
import cv2
import random
import numpy as np
from models.homographynet import HomographyNet as HomoNet
import shutil
iter_max = 90000
save_iter = 2000
batch_size = 64
pairs_per_img = 1
lr_base = 5e-3
lr_decay_iter = 200000
dir_train = '/media/csc105/Data/dataset/ms-coco/train2014' # dir of train2014
dir_val = '/media/csc105/Data/dataset/ms-coco/val2014' # dir of val2014
dir_model = 'model_net/20170322_1' # dir of model to be saved
log_train = 'log_net/train_0322_1' # dir of train loss to be saved
log_val = 'log_net/val_0322_1' # dir of val loss to be saved
if os.path.exists(dir_model):
shutil.rmtree(dir_model)
if os.path.exists(log_train):
shutil.rmtree(log_train)
if os.path.exists(log_val):
shutil.rmtree(log_val)
os.mkdir(dir_model)
os.mkdir(log_train)
os.mkdir(log_val)
def load_data(raw_data_path):
dir_list_out = []
dir_list = os.listdir(raw_data_path)
if '.' in dir_list:
dir_list.remove('.')
if '..' in dir_list:
dir_list.remove('.')
if '.DS_Store' in dir_list:
dir_list.remove('.DS_Store')
dir_list.sort()
for i in range(len(dir_list)):
dir_list_out.append(os.path.join(raw_data_path, dir_list[i]))
return dir_list_out
def generate_data(img_path):
data_re = []
label_re = []
random_list = []
img = cv2.resize(cv2.imread(img_path, 0), (320, 240))
i = 1
while i < pairs_per_img + 1:
data = []
label = []
y_start = random.randint(32, 80)
y_end = y_start + 128
x_start = random.randint(32, 160)
x_end = x_start + 128
y_1 = y_start
x_1 = x_start
y_2 = y_end
x_2 = x_start
y_3 = y_end
x_3 = x_end
y_4 = y_start
x_4 = x_end
img_patch = img[y_start:y_end, x_start:x_end] # patch 1
y_1_offset = random.randint(-32, 32)
x_1_offset = random.randint(-32, 32)
y_2_offset = random.randint(-32, 32)
x_2_offset = random.randint(-32, 32)
y_3_offset = random.randint(-32, 32)
x_3_offset = random.randint(-32, 32)
y_4_offset = random.randint(-32, 32)
x_4_offset = random.randint(-32, 32)
y_1_p = y_1 + y_1_offset
x_1_p = x_1 + x_1_offset
y_2_p = y_2 + y_2_offset
x_2_p = x_2 + x_2_offset
y_3_p = y_3 + y_3_offset
x_3_p = x_3 + x_3_offset
y_4_p = y_4 + y_4_offset
x_4_p = x_4 + x_4_offset
pts_img_patch = np.array([[y_1,x_1],[y_2,x_2],[y_3,x_3],[y_4,x_4]]).astype(np.float32)
pts_img_patch_perturb = np.array([[y_1_p,x_1_p],[y_2_p,x_2_p],[y_3_p,x_3_p],[y_4_p,x_4_p]]).astype(np.float32)
h,status = cv2.findHomography(pts_img_patch, pts_img_patch_perturb, cv2.RANSAC)
img_perburb = cv2.warpPerspective(img, h, (320, 240))
img_perburb_patch = img_perburb[y_start:y_end, x_start:x_end] # patch 2
if not [y_1,x_1,y_2,x_2,y_3,x_3,y_4,x_4] in random_list:
data.append(img_patch)
data.append(img_perburb_patch) # [2, 128, 128]
random_list.append([y_1,x_1,y_2,x_2,y_3,x_3,y_4,x_4])
h_4pt = np.array([y_1_offset,x_1_offset,y_2_offset,x_2_offset,y_3_offset,x_3_offset,y_4_offset,x_4_offset])
# h_4pt = np.array([y_1_p,x_1_p,y_2_p,x_2_p,y_3_p,x_3_p,y_4_p,x_4_p]) # labels
label.append(h_4pt) # [1, 8]
i += 1
data_re.append(data) # [4, 2, 128, 128]
label_re.append(label) # [4, 1, 8]
return data_re, label_re
class DataSet(object):
def __init__(self, img_path_list):
self.img_path_list = img_path_list
self.index_in_epoch = 0
self.count = 0
self.number = len(img_path_list)
def next_batch(self):
self.count += 1
# print self.count
start = self.index_in_epoch
        self.index_in_epoch += batch_size // pairs_per_img
if self.index_in_epoch > self.number:
self.index_in_epoch = 0
start = self.index_in_epoch
            self.index_in_epoch += batch_size // pairs_per_img
end = self.index_in_epoch
data_batch, label_batch = generate_data(self.img_path_list[start])
for i in range(start+1, end):
data, label = generate_data(self.img_path_list[i]) # [4, 2, 128, 128], [4, 1, 8]
data_batch = np.concatenate((data_batch, data)) # [64, 2, 128, 128]
label_batch = np.concatenate((label_batch, label)) # [64, 1, 8]
data_batch = np.array(data_batch).transpose([0, 2, 3, 1]) # (64, 128, 128, 2)
# cv2.imshow('window2', data_batch[1,:,:,1].squeeze())
# cv2.waitKey()
label_batch = np.array(label_batch).squeeze() # (64, 1, 8)
return data_batch, label_batch
def main(_):
train_img_list = load_data(dir_train)
val_img_list = load_data(dir_val)
x1 = tf.placeholder(tf.float32, [None, 128, 128, 2])
x2 = tf.placeholder(tf.float32, [None, 8])
x3 = tf.placeholder(tf.float32, [])
x4 = tf.placeholder(tf.float32, []) # 1.0: use dropout; 0.0: turn off dropout
net = HomoNet({'data': x1, 'use_dropout': x4})
net_out = net.layers['fc2']
loss = tf.reduce_sum(tf.square(tf.sub(net_out, x2))) / 2 / batch_size
tvars = tf.trainable_variables()
# grads, _ = tf.clip_by_global_norm(tf.gradients(loss, tvars), 5.0)
# grads = tf.gradients(loss, tvars)
# optimizer = tf.train.GradientDescentOptimizer(x3)
# train_op = optimizer.apply_gradients(zip(grads, tvars))
# train_op = tf.train.AdamOptimizer(x3).minimize(loss)
train_op = tf.train.MomentumOptimizer(learning_rate=x3, momentum=0.9).minimize(loss)
# tensor board
tf.scalar_summary('loss',loss) # record loss
tf.summary.histogram('t0', tvars[0])
tf.summary.histogram('t1', tvars[1])
tf.summary.histogram('t34', tvars[34])
tf.summary.histogram('t35', tvars[35])
merged = tf.merge_all_summaries()
# gpu configuration
tf_config = tf.ConfigProto()
tf_config.gpu_options.allow_growth = True
# gpu_opinions = tf.GPUOptions(per_process_gpu_memory_fraction=0.333)
init = tf.initialize_all_variables()
saver = tf.train.Saver(max_to_keep=None)
with tf.Session(config=tf_config) as sess:
sess.run(init)
writer_train = tf.train.SummaryWriter(log_train, sess.graph) # use writer1 to record loss when train
writer_val = tf.train.SummaryWriter(log_val, sess.graph) # use writer2 to record loss when val
train_model = DataSet(train_img_list)
val_model = DataSet(val_img_list)
x_batch_val, y_batch_val = val_model.next_batch() # fix the val data
for i in range(iter_max):
            lr_decay = 0.1 ** (i // lr_decay_iter)
lr = lr_base * lr_decay
x_batch_train, y_batch_train = train_model.next_batch()
sess.run(train_op, feed_dict={x1: x_batch_train, x2: y_batch_train, x3: lr, x4: 1.0})
# display
if not (i+1) % 5:
result1, loss_train = sess.run([merged, loss], feed_dict={x1: x_batch_train, x2: y_batch_train, x4: 0.0})
print ('iter %05d, lr = %.8f, train loss = %.5f' % ((i+1), lr, loss_train))
writer_train.add_summary(result1, i+1)
if not (i+1) % 20:
result2, loss_val = sess.run([merged, loss], feed_dict={x1: x_batch_val, x2: y_batch_val, x4: 0.0})
print ('iter %05d, lr = %.8f, val loss = %.5f' % ((i+1), lr, loss_val))
                print("============================")
writer_val.add_summary(result2, i+1)
# save model
if not (i+1) % save_iter:
saver.save(sess, (dir_model + "/model_%d.ckpt") % (i+1))
if __name__ == "__main__":
tf.app.run()
| 35.885321 | 121 | 0.621884 |
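The label construction above (the 4-point homography parameterization) can be demonstrated without OpenCV or TensorFlow: sample a patch origin, perturb the four corners by up to ±32 px, and flatten the offsets into the 8-value regression target. This is a minimal sketch of the sampling ranges used in `generate_data`, not the full pipeline:

```python
import random

random.seed(0)  # deterministic for the demonstration
# Patch origin chosen so a 128x128 crop plus a 32 px perturbation stays
# inside a 320x240 image, mirroring the ranges used in generate_data()
y0, x0 = random.randint(32, 80), random.randint(32, 160)
corners = [(y0, x0), (y0 + 128, x0), (y0 + 128, x0 + 128), (y0, x0 + 128)]
offsets = [(random.randint(-32, 32), random.randint(-32, 32)) for _ in corners]
perturbed = [(y + dy, x + dx) for (y, x), (dy, dx) in zip(corners, offsets)]
# H_4pt label: the 8 corner offsets, which the network regresses directly
h4pt = [v for pair in offsets for v in pair]
print(h4pt)
```

In the full pipeline these corner pairs are passed to `cv2.findHomography` to warp the image, and the offsets become the training label.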
ec49939f07d320961eb7e0a3d815a6de0d52e38c | 780 | py | Python | server/handlers/SubmitHandler.py | AKAMEDIASYSTEM/tuned-resonator | 01e47c518b60b746675ab3aef4ee135e594b3908 | [
"MIT"
] | 1 | 2015-08-07T23:54:47.000Z | 2015-08-07T23:54:47.000Z | server/handlers/SubmitHandler.py | AKAMEDIASYSTEM/tuned-resonator | 01e47c518b60b746675ab3aef4ee135e594b3908 | [
"MIT"
] | 3 | 2015-03-09T02:05:05.000Z | 2015-03-10T18:44:53.000Z | server/handlers/SubmitHandler.py | AKAMEDIASYSTEM/tuned-resonator | 01e47c518b60b746675ab3aef4ee135e594b3908 | [
"MIT"
] | null | null | null | #!/usr/bin/env python
# tuned-resonator
# experiments with google physical-web mdns broadcast
from handlers.BaseHandler import BaseHandler
from ResponseObject import ResponseObject
import beanstalkc
class SubmitHandler(BaseHandler):
"""json submission to curriculum-insular store"""
def post(self):
beanstalk = beanstalkc.Connection(host='localhost', port=14711, parse_yaml=False)
url = self.get_argument('url', None)
print 'inside curriculum-insular SubmitHandler', url
if url is not None:
try:
beanstalk.put(str(url))
except:
print 'there was a big problem with ', url
self.response = ResponseObject('200', 'Success')
self.write_response()
self.finish()
| 31.2 | 89 | 0.664103 |
459182b138685dfeaf6b52aa011a51248f0dcb84 | 13,577 | py | Python | jsonmodels/fields.py | robertofd1995/jsonmodels | 5054aa5cafd0a135d06462f554d181abc4ceaebb | [
"BSD-3-Clause"
] | null | null | null | jsonmodels/fields.py | robertofd1995/jsonmodels | 5054aa5cafd0a135d06462f554d181abc4ceaebb | [
"BSD-3-Clause"
] | null | null | null | jsonmodels/fields.py | robertofd1995/jsonmodels | 5054aa5cafd0a135d06462f554d181abc4ceaebb | [
"BSD-3-Clause"
] | 1 | 2021-03-23T11:28:35.000Z | 2021-03-23T11:28:35.000Z | import datetime
import re
from weakref import WeakKeyDictionary
import six
from dateutil.parser import parse
from .errors import ValidationError
from .collections import ModelCollection
# unique marker for "no default value specified". None is not good enough since
# it is a completely valid default value.
NotSet = object()
class BaseField(object):
"""Base class for all fields."""
types = None
def __init__(
self,
required=False,
nullable=False,
help_text=None,
validators=None,
default=NotSet,
name=None):
self.memory = WeakKeyDictionary()
self.required = required
self.help_text = help_text
self.nullable = nullable
self._assign_validators(validators)
self.name = name
self._validate_name()
if default is not NotSet:
self.validate(default)
self._default = default
@property
def has_default(self):
return self._default is not NotSet
def _assign_validators(self, validators):
if validators and not isinstance(validators, list):
validators = [validators]
self.validators = validators or []
def __set__(self, instance, value):
self._finish_initialization(type(instance))
value = self.parse_value(value)
self.validate(value)
self.memory[instance._cache_key] = value
def __get__(self, instance, owner=None):
if instance is None:
self._finish_initialization(owner)
return self
self._finish_initialization(type(instance))
self._check_value(instance)
return self.memory[instance._cache_key]
def _finish_initialization(self, owner):
pass
def _check_value(self, obj):
if obj._cache_key not in self.memory:
self.__set__(obj, self.get_default_value())
def validate_for_object(self, obj):
value = self.__get__(obj)
self.validate(value)
def validate(self, value):
self._check_types()
self._validate_against_types(value)
self._check_against_required(value)
self._validate_with_custom_validators(value)
def _check_against_required(self, value):
if value is None and self.required:
raise ValidationError('Field is required!')
def _validate_against_types(self, value):
if value is not None and not isinstance(value, self.types):
raise ValidationError(
'Value is wrong, expected type "{types}"'.format(
types=', '.join([t.__name__ for t in self.types])
),
value,
)
def _check_types(self):
if self.types is None:
raise ValidationError(
'Field "{type}" is not usable, try '
'different field type.'.format(type=type(self).__name__))
def to_struct(self, value):
"""Cast value to Python structure."""
return value
def parse_value(self, value):
"""Parse value from primitive to desired format.
Each field can parse value to form it wants it to be (like string or
int).
"""
return value
def _validate_with_custom_validators(self, value):
if value is None and self.nullable:
return
for validator in self.validators:
try:
validator.validate(value)
except AttributeError:
validator(value)
def get_default_value(self):
"""Get default value for field.
Each field can specify its default.
"""
return self._default if self.has_default else None
def _validate_name(self):
if self.name is None:
return
if not re.match(r'^[A-Za-z_](([\w\-]*)?\w+)?$', self.name):
raise ValueError('Wrong name', self.name)
def structue_name(self, default):
return self.name if self.name is not None else default
class StringField(BaseField):
"""String field."""
types = six.string_types
class IntField(BaseField):
"""Integer field."""
types = (int,)
def parse_value(self, value):
"""Cast value to `int`, e.g. from string or long"""
parsed = super(IntField, self).parse_value(value)
if parsed is None:
return parsed
return int(parsed)
class FloatField(BaseField):
"""Float field."""
types = (float, int)
class BoolField(BaseField):
"""Bool field."""
types = (bool,)
def parse_value(self, value):
"""Cast value to `bool`."""
parsed = super(BoolField, self).parse_value(value)
return bool(parsed) if parsed is not None else None
class DictField(BaseField):
"""Dict field."""
types = (dict, )
class ListField(BaseField):
"""List field."""
types = (list,)
def __init__(self, items_types=None, *args, **kwargs):
"""Init.
`ListField` is **always not required**. If you want to control number
of items use validators.
"""
self._assign_types(items_types)
super(ListField, self).__init__(*args, **kwargs)
self.required = False
def get_default_value(self):
default = super(ListField, self).get_default_value()
if default is None:
return ModelCollection(self)
return default
def _assign_types(self, items_types):
if items_types:
try:
self.items_types = tuple(items_types)
except TypeError:
self.items_types = items_types,
else:
self.items_types = tuple()
types = []
for type_ in self.items_types:
if isinstance(type_, six.string_types):
types.append(_LazyType(type_))
else:
types.append(type_)
self.items_types = tuple(types)
def validate(self, value):
super(ListField, self).validate(value)
if len(self.items_types) == 0:
return
for item in value:
self.validate_single_value(item)
def validate_single_value(self, item):
if len(self.items_types) == 0:
return
if not isinstance(item, self.items_types):
raise ValidationError(
'All items must be instances '
'of "{types}", and not "{type}".'.format(
types=', '.join([t.__name__ for t in self.items_types]),
type=type(item).__name__,
))
def parse_value(self, values):
"""Cast value to proper collection."""
result = self.get_default_value()
if not values:
return result
if not isinstance(values, list):
return values
return [self._cast_value(value) for value in values]
def _cast_value(self, value):
if isinstance(value, self.items_types):
return value
else:
if len(self.items_types) != 1:
tpl = 'Cannot decide which type to choose from "{types}".'
raise ValidationError(
tpl.format(
types=', '.join([t.__name__ for t in self.items_types])
)
)
return self.items_types[0](**value)
def _finish_initialization(self, owner):
super(ListField, self)._finish_initialization(owner)
types = []
for type in self.items_types:
if isinstance(type, _LazyType):
types.append(type.evaluate(owner))
else:
types.append(type)
self.items_types = tuple(types)
def _elem_to_struct(self, value):
try:
return value.to_struct()
except AttributeError:
return value
def to_struct(self, values):
return [self._elem_to_struct(v) for v in values]
class EmbeddedField(BaseField):
"""Field for embedded models."""
def __init__(self, model_types, *args, **kwargs):
self._assign_model_types(model_types)
super(EmbeddedField, self).__init__(*args, **kwargs)
def _assign_model_types(self, model_types):
if not isinstance(model_types, (list, tuple)):
model_types = (model_types,)
types = []
for type_ in model_types:
if isinstance(type_, six.string_types):
types.append(_LazyType(type_))
else:
types.append(type_)
self.types = tuple(types)
def _finish_initialization(self, owner):
super(EmbeddedField, self)._finish_initialization(owner)
types = []
for type in self.types:
if isinstance(type, _LazyType):
types.append(type.evaluate(owner))
else:
types.append(type)
self.types = tuple(types)
def validate(self, value):
super(EmbeddedField, self).validate(value)
try:
value.validate()
except AttributeError:
pass
def parse_value(self, value):
"""Parse value to proper model type."""
if not isinstance(value, dict):
return value
embed_type = self._get_embed_type()
return embed_type(**value)
def _get_embed_type(self):
if len(self.types) != 1:
raise ValidationError(
'Cannot decide which type to choose from "{types}".'.format(
types=', '.join([t.__name__ for t in self.types])
)
)
return self.types[0]
def to_struct(self, value):
return value.to_struct()
class _LazyType(object):
def __init__(self, path):
self.path = path
def evaluate(self, base_cls):
module, type_name = _evaluate_path(self.path, base_cls)
return _import(module, type_name)
def _evaluate_path(relative_path, base_cls):
base_module = base_cls.__module__
modules = _get_modules(relative_path, base_module)
type_name = modules.pop()
module = '.'.join(modules)
if not module:
module = base_module
return module, type_name
def _get_modules(relative_path, base_module):
canonical_path = relative_path.lstrip('.')
canonical_modules = canonical_path.split('.')
if not relative_path.startswith('.'):
return canonical_modules
parents_amount = len(relative_path) - len(canonical_path)
parent_modules = base_module.split('.')
parents_amount = max(0, parents_amount - 1)
if parents_amount > len(parent_modules):
raise ValueError("Can't evaluate path '{}'".format(relative_path))
return parent_modules[:parents_amount * -1] + canonical_modules
def _import(module_name, type_name):
module = __import__(module_name, fromlist=[type_name])
try:
return getattr(module, type_name)
except AttributeError:
raise ValueError(
"Can't find type '{}.{}'.".format(module_name, type_name))
class TimeField(StringField):
"""Time field."""
types = (datetime.time,)
def __init__(self, str_format=None, *args, **kwargs):
"""Init.
:param str str_format: Format to cast time to (if `None` - casting to
ISO 8601 format).
"""
self.str_format = str_format
super(TimeField, self).__init__(*args, **kwargs)
def to_struct(self, value):
"""Cast `time` object to string."""
if self.str_format:
return value.strftime(self.str_format)
return value.isoformat()
def parse_value(self, value):
"""Parse string into instance of `time`."""
if value is None:
return value
if isinstance(value, datetime.time):
return value
return parse(value).timetz()
class DateField(StringField):
"""Date field."""
types = (datetime.date,)
default_format = '%Y-%m-%d'
def __init__(self, str_format=None, *args, **kwargs):
"""Init.
:param str str_format: Format to cast date to (if `None` - casting to
%Y-%m-%d format).
"""
self.str_format = str_format
super(DateField, self).__init__(*args, **kwargs)
def to_struct(self, value):
"""Cast `date` object to string."""
if self.str_format:
return value.strftime(self.str_format)
return value.strftime(self.default_format)
def parse_value(self, value):
"""Parse string into instance of `date`."""
if value is None:
return value
if isinstance(value, datetime.date):
return value
return parse(value).date()
class DateTimeField(StringField):
"""Datetime field."""
types = (datetime.datetime,)
def __init__(self, str_format=None, *args, **kwargs):
"""Init.
:param str str_format: Format to cast datetime to (if `None` - casting
to ISO 8601 format).
"""
self.str_format = str_format
super(DateTimeField, self).__init__(*args, **kwargs)
def to_struct(self, value):
"""Cast `datetime` object to string."""
if self.str_format:
return value.strftime(self.str_format)
return value.isoformat()
def parse_value(self, value):
"""Parse string into instance of `datetime`."""
if isinstance(value, datetime.datetime):
return value
if value:
return parse(value)
else:
return None
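A minimal, stdlib-only round-trip sketch of what the field classes above do. The real fields fall back to dateutil's `parse` (assumed to be imported earlier in this file); `fromisoformat` is used here purely as an illustrative stand-in:

```python
import datetime

# Hypothetical round trip mirroring DateTimeField.to_struct / parse_value.
value = datetime.datetime(2001, 1, 1, 12, 30)

struct = value.isoformat()                        # cast to ISO 8601 string
parsed = datetime.datetime.fromisoformat(struct)  # parse it back

print(struct)           # 2001-01-01T12:30:00
print(parsed == value)  # True
```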
# --- scripts/combine_rst_files.py (hainm/rosseta_amber, BSD-2-Clause) ---
#!/usr/bin/env python3
'''combine a bunch of restart files to a single trajectory for mmgbsa (to do energy decomposition)
'''
import pytraj as pt
# add your files here
rstlist = ['f1.rst', 'f2.rst']
parm = 'f1.parm'
traj = pt.iterload(rstlist, parm)
# save to netcdf format
traj.save('traj.nc')
# --- a10sdk/core/router/router_ospf_area.py (deepfield/a10sdk-python, Apache-2.0) ---
from a10sdk.common.A10BaseClass import A10BaseClass
class NssaCfg(A10BaseClass):
"""This class does not support CRUD Operations please use parent.
:param no_redistribution: {"default": 0, "type": "number", "description": "No redistribution into this NSSA area", "format": "flag"}
:param translator_role: {"default": "candidate", "enum": ["always", "candidate", "never"], "type": "string", "description": "'always': Translate always; 'candidate': Candidate for translator (default); 'never': Do not translate; ", "format": "enum"}
:param metric: {"description": "OSPF default metric (OSPF metric)", "format": "number", "default": 1, "maximum": 16777214, "minimum": 0, "type": "number"}
:param nssa: {"default": 0, "type": "number", "description": "Specify a NSSA area", "format": "flag"}
:param default_information_originate: {"default": 0, "type": "number", "description": "Originate Type 7 default into NSSA area", "format": "flag"}
:param no_summary: {"default": 0, "type": "number", "description": "Do not send summary LSA into NSSA", "format": "flag"}
:param metric_type: {"description": "OSPF metric type (OSPF metric type for default routes)", "format": "number", "default": 2, "maximum": 2, "minimum": 1, "type": "number"}
:param DeviceProxy: The device proxy for REST operations and session handling. Refer to `common/device_proxy.py`
"""
def __init__(self, **kwargs):
self.ERROR_MSG = ""
self.b_key = "nssa-cfg"
self.DeviceProxy = ""
self.no_redistribution = ""
self.translator_role = ""
self.metric = ""
self.nssa = ""
self.default_information_originate = ""
self.no_summary = ""
self.metric_type = ""
for keys, value in kwargs.items():
setattr(self,keys, value)
class FilterLists(A10BaseClass):
"""This class does not support CRUD Operations please use parent.
:param acl_direction: {"enum": ["in", "out"], "type": "string", "description": "'in': Filter networks sent to this area; 'out': Filter networks sent from this area; ", "format": "enum"}
:param plist_direction: {"enum": ["in", "out"], "type": "string", "description": "'in': Filter networks sent to this area; 'out': Filter networks sent from this area; ", "format": "enum"}
:param acl_name: {"minLength": 1, "maxLength": 128, "type": "string", "description": "Filter networks by access-list (Name of an access-list)", "format": "string"}
:param plist_name: {"minLength": 1, "maxLength": 128, "type": "string", "description": "Filter networks by prefix-list (Name of an IP prefix-list)", "format": "string"}
:param filter_list: {"default": 0, "type": "number", "description": "Filter networks between OSPF areas", "format": "flag"}
:param DeviceProxy: The device proxy for REST operations and session handling. Refer to `common/device_proxy.py`
"""
def __init__(self, **kwargs):
self.ERROR_MSG = ""
self.b_key = "filter-lists"
self.DeviceProxy = ""
self.acl_direction = ""
self.plist_direction = ""
self.acl_name = ""
self.plist_name = ""
self.filter_list = ""
for keys, value in kwargs.items():
setattr(self,keys, value)
class VirtualLinkList(A10BaseClass):
"""This class does not support CRUD Operations please use parent.
:param dead_interval: {"description": "Dead router detection time (Seconds)", "minimum": 1, "type": "number", "maximum": 65535, "format": "number"}
:param message_digest_key: {"description": "Set message digest key (Key ID)", "minimum": 1, "type": "number", "maximum": 255, "format": "number"}
:param hello_interval: {"description": "Hello packet interval (Seconds)", "minimum": 1, "type": "number", "maximum": 65535, "format": "number"}
:param bfd: {"default": 0, "type": "number", "description": "Bidirectional Forwarding Detection (BFD)", "format": "flag"}
:param transmit_delay: {"description": "LSA transmission delay (Seconds)", "format": "number", "default": 1, "maximum": 3600, "minimum": 1, "type": "number"}
:param virtual_link_authentication: {"default": 0, "type": "number", "description": "Enable authentication", "format": "flag"}
:param virtual_link_ip_addr: {"type": "string", "description": "ID (IP addr) associated with virtual link neighbor", "format": "ipv4-address"}
:param virtual_link_auth_type: {"enum": ["message-digest", "null"], "type": "string", "description": "'message-digest': Use message-digest authentication; 'null': Use null authentication; ", "format": "enum"}
:param authentication_key: {"minLength": 1, "maxLength": 8, "type": "string", "description": "Set authentication key (Authentication key (8 chars))", "format": "string-rlx"}
:param retransmit_interval: {"description": "LSA retransmit interval (Seconds)", "minimum": 1, "type": "number", "maximum": 3600, "format": "number"}
:param md5: {"minLength": 1, "maxLength": 16, "type": "string", "description": "Use MD5 algorithm (Authentication key (16 chars))", "format": "string-rlx"}
:param DeviceProxy: The device proxy for REST operations and session handling. Refer to `common/device_proxy.py`
"""
def __init__(self, **kwargs):
self.ERROR_MSG = ""
self.b_key = "virtual-link-list"
self.DeviceProxy = ""
self.dead_interval = ""
self.message_digest_key = ""
self.hello_interval = ""
self.bfd = ""
self.transmit_delay = ""
self.virtual_link_authentication = ""
self.virtual_link_ip_addr = ""
self.virtual_link_auth_type = ""
self.authentication_key = ""
self.retransmit_interval = ""
self.md5 = ""
for keys, value in kwargs.items():
setattr(self,keys, value)
class StubCfg(A10BaseClass):
"""This class does not support CRUD Operations please use parent.
:param stub: {"default": 0, "type": "number", "description": "Configure OSPF area as stub", "format": "flag"}
:param no_summary: {"default": 0, "type": "number", "description": "Do not inject inter-area routes into area", "format": "flag"}
:param DeviceProxy: The device proxy for REST operations and session handling. Refer to `common/device_proxy.py`
"""
def __init__(self, **kwargs):
self.ERROR_MSG = ""
self.b_key = "stub-cfg"
self.DeviceProxy = ""
self.stub = ""
self.no_summary = ""
for keys, value in kwargs.items():
setattr(self,keys, value)
class AuthCfg(A10BaseClass):
"""This class does not support CRUD Operations please use parent.
:param authentication: {"default": 0, "type": "number", "description": "Enable authentication", "format": "flag"}
:param message_digest: {"default": 0, "type": "number", "description": "Use message-digest authentication", "format": "flag"}
:param DeviceProxy: The device proxy for REST operations and session handling. Refer to `common/device_proxy.py`
"""
def __init__(self, **kwargs):
self.ERROR_MSG = ""
self.b_key = "auth-cfg"
self.DeviceProxy = ""
self.authentication = ""
self.message_digest = ""
for keys, value in kwargs.items():
setattr(self,keys, value)
class RangeList(A10BaseClass):
"""This class does not support CRUD Operations please use parent.
:param area_range_prefix: {"type": "string", "description": "Area range for IPv4 prefix", "format": "ipv4-cidr"}
:param option: {"default": "advertise", "enum": ["advertise", "not-advertise"], "type": "string", "description": "'advertise': Advertise this range (default); 'not-advertise': DoNotAdvertise this range; ", "format": "enum"}
:param DeviceProxy: The device proxy for REST operations and session handling. Refer to `common/device_proxy.py`
"""
def __init__(self, **kwargs):
self.ERROR_MSG = ""
self.b_key = "range-list"
self.DeviceProxy = ""
self.area_range_prefix = ""
self.option = ""
for keys, value in kwargs.items():
setattr(self,keys, value)
class Area(A10BaseClass):
"""Class Description::
OSPF area parameters.
Class area supports CRUD Operations and inherits from `common/A10BaseClass`.
This class is the `"PARENT"` class for this module.`
:param filter_lists: {"minItems": 1, "items": {"type": "object"}, "uniqueItems": true, "type": "array", "array": [{"properties": {"acl-direction": {"enum": ["in", "out"], "type": "string", "description": "'in': Filter networks sent to this area; 'out': Filter networks sent from this area; ", "format": "enum"}, "plist-direction": {"enum": ["in", "out"], "type": "string", "description": "'in': Filter networks sent to this area; 'out': Filter networks sent from this area; ", "format": "enum"}, "acl-name": {"minLength": 1, "maxLength": 128, "type": "string", "description": "Filter networks by access-list (Name of an access-list)", "format": "string"}, "plist-name": {"minLength": 1, "maxLength": 128, "type": "string", "description": "Filter networks by prefix-list (Name of an IP prefix-list)", "format": "string"}, "filter-list": {"default": 0, "type": "number", "description": "Filter networks between OSPF areas", "format": "flag"}, "optional": true}}]}
:param area_ipv4: {"optional": false, "type": "string", "description": "OSPF area ID in IP address format", "format": "ipv4-address"}
:param virtual_link_list: {"minItems": 1, "items": {"type": "object"}, "uniqueItems": true, "type": "array", "array": [{"properties": {"dead-interval": {"description": "Dead router detection time (Seconds)", "minimum": 1, "type": "number", "maximum": 65535, "format": "number"}, "message-digest-key": {"description": "Set message digest key (Key ID)", "minimum": 1, "type": "number", "maximum": 255, "format": "number"}, "hello-interval": {"description": "Hello packet interval (Seconds)", "minimum": 1, "type": "number", "maximum": 65535, "format": "number"}, "bfd": {"default": 0, "type": "number", "description": "Bidirectional Forwarding Detection (BFD)", "format": "flag"}, "transmit-delay": {"description": "LSA transmission delay (Seconds)", "format": "number", "default": 1, "maximum": 3600, "minimum": 1, "type": "number"}, "virtual-link-authentication": {"default": 0, "type": "number", "description": "Enable authentication", "format": "flag"}, "virtual-link-ip-addr": {"type": "string", "description": "ID (IP addr) associated with virtual link neighbor", "format": "ipv4-address"}, "virtual-link-auth-type": {"enum": ["message-digest", "null"], "type": "string", "description": "'message-digest': Use message-digest authentication; 'null': Use null authentication; ", "format": "enum"}, "authentication-key": {"minLength": 1, "maxLength": 8, "type": "string", "description": "Set authentication key (Authentication key (8 chars))", "format": "string-rlx"}, "retransmit-interval": {"description": "LSA retransmit interval (Seconds)", "minimum": 1, "type": "number", "maximum": 3600, "format": "number"}, "optional": true, "md5": {"minLength": 1, "maxLength": 16, "type": "string", "description": "Use MD5 algorithm (Authentication key (16 chars))", "format": "string-rlx"}}}]}
:param shortcut: {"description": "'default': Set default shortcutting behavior; 'disable': Disable shortcutting through the area; 'enable': Enable shortcutting through the area; ", "format": "enum", "default": "default", "type": "string", "enum": ["default", "disable", "enable"], "optional": true}
:param range_list: {"minItems": 1, "items": {"type": "object"}, "uniqueItems": true, "type": "array", "array": [{"properties": {"area-range-prefix": {"type": "string", "description": "Area range for IPv4 prefix", "format": "ipv4-cidr"}, "optional": true, "option": {"default": "advertise", "enum": ["advertise", "not-advertise"], "type": "string", "description": "'advertise': Advertise this range (default); 'not-advertise': DoNotAdvertise this range; ", "format": "enum"}}}]}
:param default_cost: {"description": "Set the summary-default cost of a NSSA or stub area (Stub's advertised default summary cost)", "format": "number", "default": 1, "optional": true, "maximum": 16777215, "minimum": 0, "type": "number"}
:param area_num: {"description": "OSPF area ID as a decimal value", "format": "number", "type": "number", "maximum": 4294967295, "minimum": 0, "optional": false}
:param DeviceProxy: The device proxy for REST operations and session handling. Refer to `common/device_proxy.py`
URL for this object::
`https://<Hostname|Ip address>//axapi/v3/router/ospf/{process_id}/area/{area_ipv4}+{area_num}`.
"""
def __init__(self, **kwargs):
self.ERROR_MSG = ""
self.required = [ "area_ipv4","area_num"]
self.b_key = "area"
self.a10_url="/axapi/v3/router/ospf/{process_id}/area/{area_ipv4}+{area_num}"
self.DeviceProxy = ""
self.nssa_cfg = {}
self.filter_lists = []
self.area_ipv4 = ""
self.virtual_link_list = []
self.stub_cfg = {}
self.shortcut = ""
self.auth_cfg = {}
self.range_list = []
self.default_cost = ""
self.area_num = ""
for keys, value in kwargs.items():
setattr(self,keys, value)
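Every class in this module ends its `__init__` with the same loop that copies arbitrary keyword arguments onto the instance. A stripped-down sketch of that pattern in isolation (the class and attribute names here are illustrative, not part of the a10sdk API):

```python
class KwargsBag:
    """Copy every keyword argument onto the instance, as the a10sdk
    classes above do at the end of their __init__ methods."""

    def __init__(self, **kwargs):
        for key, value in kwargs.items():
            setattr(self, key, value)


area = KwargsBag(area_ipv4="10.0.0.1", area_num=0, shortcut="default")
print(area.area_ipv4)  # 10.0.0.1
print(area.area_num)   # 0
```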
# --- src/arch/arm/ArmSemihosting.py (fei-shan/gem5-experiment, BSD-3-Clause) ---
# Copyright (c) 2018, 2019 ARM Limited
# All rights reserved.
#
# The license below extends only to copyright in the software and shall
# not be construed as granting a license to any other intellectual
# property including but not limited to intellectual property relating
# to a hardware implementation of the functionality of the software
# licensed hereunder. You may use the software subject to the license
# terms below provided that you ensure that this notice is replicated
# unmodified and in its entirety in all distributions of the software,
# modified or unmodified, in source code or in binary form.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are
# met: redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer;
# redistributions in binary form must reproduce the above copyright
# notice, this list of conditions and the following disclaimer in the
# documentation and/or other materials provided with the distribution;
# neither the name of the copyright holders nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
from m5.params import *
from m5.SimObject import *
from m5.objects.Serial import SerialDevice
from m5.objects.Terminal import Terminal
class ArmSemihosting(SimObject):
type = 'ArmSemihosting'
cxx_header = "arch/arm/semihosting.hh"
cmd_line = Param.String("", "Command line to report to guest");
stdin = Param.String("stdin",
"Standard input (stdin for gem5's terminal)")
stdout = Param.String("stdout",
"Standard output (stdout for gem5's terminal)")
stderr = Param.String("stderr",
"Standard error (stderr for gem5's terminal)")
files_root_dir = Param.String("",
"Host root directory for files handled by Semihosting")
mem_reserve = Param.MemorySize("32MiB",
"Amount of memory to reserve at the start of the address map. This "
"memory won't be used by the heap reported to an application.");
stack_size = Param.MemorySize("32MiB", "Application stack size");
time = Param.Time('01/01/2009',
"System time to use ('Now' for actual time)")
# --- framework/file/hash.py (jarret/bitcoin_helpers, MIT) ---
#!/usr/bin/env python3
# Copyright (c) 2017 The Bitcoin Core developers
# Distributed under the MIT software license, see the accompanying
# file COPYING or http://www.opensource.org/licenses/mit-license.php.
import hashlib
BUF_SIZE = 65536
class FileHash(object):
"""
Calculates and represents the SHA256 hash of the contents of the file
"""
def __init__(self, file_path):
self.sha256 = hashlib.sha256()
f = open(file_path, 'rb')
while True:
data = f.read(BUF_SIZE)
if not data:
break
self.sha256.update(data)
def __str__(self):
return self.sha256.hexdigest()
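A self-contained sketch of the same chunked-hashing loop, handy for checking that `FileHash` agrees with `hashlib` run over the whole file at once. The temporary file and its contents are illustrative only:

```python
import hashlib
import os
import tempfile

BUF_SIZE = 65536

# Write a small throwaway file, then hash it in BUF_SIZE chunks exactly
# as FileHash.__init__ does, so large files never load fully into memory.
fd, path = tempfile.mkstemp()
with os.fdopen(fd, "wb") as f:
    f.write(b"hello world")

sha256 = hashlib.sha256()
with open(path, "rb") as f:
    for chunk in iter(lambda: f.read(BUF_SIZE), b""):
        sha256.update(chunk)
os.remove(path)

digest = sha256.hexdigest()
print(digest)
```

The chunked result matches `hashlib.sha256(b"hello world").hexdigest()` computed in one shot.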
# --- cpro/migrations/0019_auto_20181002_0245.py (FlockyChou/CinderellaProducers, Apache-2.0) ---
# -*- coding: utf-8 -*-
from __future__ import unicode_literals
from django.db import models, migrations
import django.core.validators
class Migration(migrations.Migration):
dependencies = [
('cpro', '0018_auto_20180912_0108'),
]
operations = [
migrations.AlterField(
model_name='account',
name='_cache_owner_email',
field=models.EmailField(max_length=75, null=True),
preserve_default=True,
),
migrations.AlterField(
model_name='account',
name='level',
field=models.PositiveIntegerField(null=True, verbose_name='Producer Level', validators=[django.core.validators.MaxValueValidator(500)]),
preserve_default=True,
),
]
# --- chatterbot/ext/django_chatterbot/migrations/0009_tags.py (priyashengole/Chatterbot, BSD-3-Clause) ---
# -*- coding: utf-8 -*-
# Generated by Django 1.11a1 on 2017-07-07 00:12
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('django_chatterbot', '0008_update_conversations'),
]
operations = [
migrations.CreateModel(
name='Tag',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('name', models.SlugField()),
],
options={
'abstract': False,
},
),
migrations.AlterField(
model_name='statement',
name='text',
field=models.CharField(max_length=255, unique=True),
),
migrations.AddField(
model_name='tag',
name='statements',
field=models.ManyToManyField(related_name='tags', to='django_chatterbot.Statement'),
),
]
# --- config.py (AntonDemchenko/voiceprint_maker, MIT) ---
import configparser
import os
import re
import numpy as np
class SincNetConfigParser(configparser.ConfigParser):
def getintlist(self, section, option, size):
return list(map(int, self.getlist(section, option, size)))
def getbooleanlist(self, section, option, size):
return list(map(self._str_to_bool, self.getlist(section, option, size)))
def getfloatlist(self, section, option, size):
return list(map(float, self.getlist(section, option, size)))
def getlist(self, section, option, size):
value = self.get(section, option)
result = value.split(',')
if len(result) == 1:
result *= size
if len(result) != size:
raise ValueError(
'Invalid length of {}.{} list ({} is expected, {} is found)'\
.format(section, option, size, len(result))
)
return result
def _str_to_bool(self, s):
if s == 'True':
return True
elif s == 'False':
return False
else:
raise ValueError
class SincNetCfg:
def __init__(self, cfg_file):
if cfg_file is None:
raise ValueError
config = SincNetConfigParser()
config.read(cfg_file)
base_path = os.path.dirname(os.path.realpath(__file__))
# [data]
self.train_list_file = os.path.join(base_path, config.get('data', 'train_list_file'))
self.test_list_file = os.path.join(base_path, config.get('data', 'test_list_file'))
self.val_list_file = os.path.join(base_path, config.get('data', 'val_list_file'))
self.path_to_label_file = os.path.join(base_path, config.get('data', 'path_to_label_file'))
self.dataset_folder = config.get('data', 'dataset_folder', fallback=None)
if self.dataset_folder is not None:
self.dataset_folder = os.path.join(base_path, self.dataset_folder)
self.output_folder = os.path.join(base_path, config.get('data', 'output_folder'))
self.checkpoint_file = config.get('data', 'checkpoint_file', fallback=None)
if self.checkpoint_file:
self.checkpoint_file = os.path.join(base_path, self.checkpoint_file)
# [windowing]
self.sample_rate = config.getint('windowing', 'sample_rate')
self.window_len_ms = config.getint('windowing', 'window_len_ms')
self.window_shift_ms = config.getint('windowing', 'window_shift_ms')
# [cnn]
self.cnn_n_layers = config.getint('cnn', 'cnn_n_layers')
self.cnn_n_filters = config.getintlist('cnn', 'cnn_n_filters', self.cnn_n_layers)
self.cnn_filter_len = config.getintlist('cnn', 'cnn_filter_len', self.cnn_n_layers)
self.cnn_max_pool_len = config.getintlist('cnn', 'cnn_max_pool_len', self.cnn_n_layers)
self.cnn_use_layer_norm_before = config.getboolean('cnn', 'cnn_use_layer_norm_before')
self.cnn_use_batch_norm_before = config.getboolean('cnn', 'cnn_use_batch_norm_before')
self.cnn_use_layer_norm = config.getbooleanlist('cnn', 'cnn_use_layer_norm', self.cnn_n_layers)
self.cnn_use_batch_norm = config.getbooleanlist('cnn', 'cnn_use_batch_norm', self.cnn_n_layers)
self.cnn_act = config.getlist('cnn', 'cnn_act', self.cnn_n_layers)
self.cnn_drop = config.getfloatlist('cnn', 'cnn_drop', self.cnn_n_layers)
# [dnn]
self.fc_n_layers = config.getint('dnn', 'fc_n_layers')
self.fc_size = config.getintlist('dnn', 'fc_size', self.fc_n_layers)
self.fc_use_layer_norm_before = config.getboolean('dnn', 'fc_use_layer_norm_before')
self.fc_use_batch_norm_before = config.getboolean('dnn', 'fc_use_batch_norm_before')
self.fc_use_batch_norm = config.getbooleanlist('dnn', 'fc_use_batch_norm', self.fc_n_layers)
self.fc_use_layer_norm = config.getbooleanlist('dnn', 'fc_use_layer_norm', self.fc_n_layers)
self.fc_act = config.getlist('dnn', 'fc_act', self.fc_n_layers)
self.fc_drop = config.getfloatlist('dnn', 'fc_drop', self.fc_n_layers)
# [class]
self.n_classes = config.getint('class', 'n_classes')
self.class_use_layer_norm_before = config.getboolean('class', 'class_use_layer_norm_before')
self.class_use_batch_norm_before = config.getboolean(
'class', 'class_use_batch_norm_before'
)
# [optimization]
self.optimizer = config.get('optimization', 'optimizer')
self.lr = config.getfloat('optimization', 'lr')
self.batch_size = config.getint('optimization', 'batch_size')
self.n_epochs = config.getint('optimization', 'n_epochs')
self.n_batches = config.getint('optimization', 'n_batches')
self.val_freq = config.getint('optimization', 'val_freq')
self.seed = config.getint('optimization', 'seed')
self.n_val_windows_per_sample = config.getint('optimization', 'n_val_windows_per_sample')
self.batch_size_test = config.getint('optimization', 'batch_size_test')
# [callbacks]
self.best_checkpoint_freq = config.getint('callbacks', 'best_checkpoint_freq')
self.use_tensorboard_logger = config.getboolean('callbacks', 'use_tensorboard_logger')
self.save_checkpoints = config.getboolean('callbacks', 'save_checkpoints')
# [testing]
self.max_top = config.getint('testing', 'max_top')
self.checkpoint_folder = os.path.join(base_path, os.path.join(self.output_folder, 'checkpoints'))
self.best_checkpoint_path = os.path.join(self.checkpoint_folder, 'best_checkpoint.hdf5')
self.last_checkpoint_path = os.path.join(self.checkpoint_folder, 'last_checkpoint.hdf5')
self.window_len = int(self.sample_rate * self.window_len_ms / 1000.00)
self.window_shift = int(self.sample_rate * self.window_shift_ms / 1000.00)  # use shift, not window length
self.out_dim = 100
self.input_shape = (self.window_len, 1)
self.train_list = self._read_list_file(self.train_list_file)
self.test_list = self._read_list_file(self.test_list_file)
self.val_list = self._read_list_file(self.val_list_file)
self.path_to_label = np.load(self.path_to_label_file, allow_pickle=True).item()
self.fact_amp = 0.2
self.initial_epoch = self._get_initial_epoch()
def _read_list_file(self, list_file):
list_sig = []
with open(list_file, 'r') as f:
lines = f.readlines()
for x in lines:
list_sig.append(x.rstrip())
return list_sig
def _get_initial_epoch(self):
result = 0
log_path = os.path.join(self.output_folder, 'log.csv')
if self.checkpoint_file and os.path.exists(log_path):
s = ''
with open(log_path, 'r') as f:
for line in f:
s = line.split(',', 1)[0]
try:
result = int(s) + 1
except ValueError:
pass
return result
def read_config():
from optparse import OptionParser
parser = OptionParser()
parser.add_option('--cfg')
options, _ = parser.parse_args()
cfg = SincNetCfg(options.cfg)
return cfg
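The list options above follow a broadcast-or-validate rule: a single value is repeated `size` times, and anything else must match `size` exactly. A compact, runnable sketch of just that rule:

```python
import configparser


class ListConfigParser(configparser.ConfigParser):
    # Same broadcast-or-validate behavior as SincNetConfigParser.getlist.
    def getlist(self, section, option, size):
        result = self.get(section, option).split(',')
        if len(result) == 1:
            result *= size  # broadcast a single value to the full length
        if len(result) != size:
            raise ValueError(
                'Invalid length of {}.{} list'.format(section, option))
        return result


cp = ListConfigParser()
cp.read_string("[cnn]\ncnn_act = relu\ncnn_n_filters = 80,60,60\n")
print(cp.getlist('cnn', 'cnn_act', 3))        # ['relu', 'relu', 'relu']
print(cp.getlist('cnn', 'cnn_n_filters', 3))  # ['80', '60', '60']
```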
# --- pandas/tests/series/methods/test_convert.py (CJL89/pandas, BSD-3-Clause) ---
from datetime import datetime
import numpy as np
import pytest
from pandas import Series, Timestamp
import pandas._testing as tm
class TestConvert:
def test_convert(self):
# GH#10265
dt = datetime(2001, 1, 1, 0, 0)
td = dt - datetime(2000, 1, 1, 0, 0)
# Test coercion with mixed types
ser = Series(["a", "3.1415", dt, td])
results = ser._convert(numeric=True)
expected = Series([np.nan, 3.1415, np.nan, np.nan])
tm.assert_series_equal(results, expected)
# Test standard conversion returns original
results = ser._convert(datetime=True)
tm.assert_series_equal(results, ser)
results = ser._convert(numeric=True)
expected = Series([np.nan, 3.1415, np.nan, np.nan])
tm.assert_series_equal(results, expected)
results = ser._convert(timedelta=True)
tm.assert_series_equal(results, ser)
# test pass-through and non-conversion when other types selected
ser = Series(["1.0", "2.0", "3.0"])
results = ser._convert(datetime=True, numeric=True, timedelta=True)
expected = Series([1.0, 2.0, 3.0])
tm.assert_series_equal(results, expected)
results = ser._convert(True, False, True)
tm.assert_series_equal(results, ser)
ser = Series(
[datetime(2001, 1, 1, 0, 0), datetime(2001, 1, 1, 0, 0)], dtype="O"
)
results = ser._convert(datetime=True, numeric=True, timedelta=True)
expected = Series([datetime(2001, 1, 1, 0, 0), datetime(2001, 1, 1, 0, 0)])
tm.assert_series_equal(results, expected)
results = ser._convert(datetime=False, numeric=True, timedelta=True)
tm.assert_series_equal(results, ser)
td = datetime(2001, 1, 1, 0, 0) - datetime(2000, 1, 1, 0, 0)
ser = Series([td, td], dtype="O")
results = ser._convert(datetime=True, numeric=True, timedelta=True)
expected = Series([td, td])
tm.assert_series_equal(results, expected)
results = ser._convert(True, True, False)
tm.assert_series_equal(results, ser)
ser = Series([1.0, 2, 3], index=["a", "b", "c"])
result = ser._convert(numeric=True)
tm.assert_series_equal(result, ser)
# force numeric conversion
res = ser.copy().astype("O")
res["a"] = "1"
result = res._convert(numeric=True)
tm.assert_series_equal(result, ser)
res = ser.copy().astype("O")
res["a"] = "1."
result = res._convert(numeric=True)
tm.assert_series_equal(result, ser)
res = ser.copy().astype("O")
res["a"] = "garbled"
result = res._convert(numeric=True)
expected = ser.copy()
expected["a"] = np.nan
tm.assert_series_equal(result, expected)
# GH 4119, not converting a mixed type (e.g.floats and object)
ser = Series([1, "na", 3, 4])
result = ser._convert(datetime=True, numeric=True)
expected = Series([1, np.nan, 3, 4])
tm.assert_series_equal(result, expected)
ser = Series([1, "", 3, 4])
result = ser._convert(datetime=True, numeric=True)
tm.assert_series_equal(result, expected)
# dates
ser = Series(
[
datetime(2001, 1, 1, 0, 0),
datetime(2001, 1, 2, 0, 0),
datetime(2001, 1, 3, 0, 0),
]
)
result = ser._convert(datetime=True)
expected = Series(
[Timestamp("20010101"), Timestamp("20010102"), Timestamp("20010103")],
dtype="M8[ns]",
)
tm.assert_series_equal(result, expected)
result = ser._convert(datetime=True)
tm.assert_series_equal(result, expected)
# preserve if non-object
ser = Series([1], dtype="float32")
result = ser._convert(datetime=True)
tm.assert_series_equal(result, ser)
# FIXME: dont leave commented-out
# res = ser.copy()
# r[0] = np.nan
# result = res._convert(convert_dates=True,convert_numeric=False)
# assert result.dtype == 'M8[ns]'
def test_convert_no_arg_error(self):
ser = Series(["1.0", "2"])
msg = r"At least one of datetime, numeric or timedelta must be True\."
with pytest.raises(ValueError, match=msg):
ser._convert()
def test_convert_preserve_bool(self):
ser = Series([1, True, 3, 5], dtype=object)
res = ser._convert(datetime=True, numeric=True)
expected = Series([1, 1, 3, 5], dtype="i8")
tm.assert_series_equal(res, expected)
def test_convert_preserve_all_bool(self):
ser = Series([False, True, False, False], dtype=object)
res = ser._convert(datetime=True, numeric=True)
expected = Series([False, True, False, False], dtype=bool)
tm.assert_series_equal(res, expected)
avg_line_length: 36.110294 | max_line_length: 83 | alphanum_fraction: 0.593158
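The `_convert` assertions above all exercise pandas' "soft" numeric coercion: strings that parse as numbers are converted, and unparseable values such as `"garbled"` become NaN. A minimal stdlib-only sketch of that rule follows; `soft_to_numeric` is a hypothetical helper, not pandas' actual implementation.

```python
import math

def soft_to_numeric(values):
    """Coerce each value to float where possible; unparseable values become NaN."""
    out = []
    for v in values:
        try:
            out.append(float(v))
        except (TypeError, ValueError):
            out.append(math.nan)
    return out

print(soft_to_numeric(["1", "1.", "garbled"]))  # [1.0, 1.0, nan]
```

This mirrors the assertions above, where `res["a"] = "1"` and `res["a"] = "1."` convert cleanly but `res["a"] = "garbled"` maps to `np.nan`.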
60581a552439b695fea4493d183dff4e63a0283d | 1,054 | py | Python | {{cookiecutter.directory_name}}/apps/core/models/api_key.py | backbonesk/django-project-template | df5e1dcd9d67926234e52024b608cd69f5824d4e | ["MIT"] | stars: null | issues: null | forks: 2 (2021-05-19T10:13:54.000Z to 2022-03-17T11:54:15.000Z)

from django.db import models
from django.utils.translation import gettext_lazy as _

from apps.core.models.base import BaseModel, UpdatedAtMixin


class ApiKey(BaseModel, UpdatedAtMixin):
    class Meta:
        app_label = 'core'
        db_table = 'api_keys'
        default_permissions = ()
        verbose_name = _('api_key')
        verbose_name_plural = _('api_keys')

    class DevicePlatform(models.TextChoices):
        WEB = 'web', _('web')
        ANDROID = 'android', _('android')
        IOS = 'ios', _('ios')
        DEBUG = 'debug', _('debug')

    name = models.CharField(max_length=200, null=True, verbose_name=_('apikey_name'))
    platform = models.CharField(
        max_length=10,
        null=False,
        choices=DevicePlatform.choices,
        default=DevicePlatform.DEBUG,
        verbose_name=_('apikey_platform')
    )
    secret = models.CharField(max_length=30, null=False, verbose_name=_('apikey_secret'))
    is_active = models.BooleanField(default=False, verbose_name=_('apikey_is_active'))


__all__ = [
    'ApiKey'
]
avg_line_length: 29.277778 | max_line_length: 89 | alphanum_fraction: 0.658444
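The `ApiKey.secret` field above is a 30-character `CharField`, but the model does not show how secrets are produced. One plausible approach, shown here only as a sketch (`generate_api_secret` is a hypothetical helper, not part of this project), uses Python's `secrets` module:

```python
import secrets
import string

ALPHABET = string.ascii_letters + string.digits

def generate_api_secret(length: int = 30) -> str:
    """Return a random alphanumeric token sized to fit CharField(max_length=30)."""
    return ''.join(secrets.choice(ALPHABET) for _ in range(length))

print(len(generate_api_secret()))  # 30
```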
0b396aade554c822796b02f03322d0cc841c6dc3 | 9,913 | py | Python | app/recipe/tests/test_recipe_api.py | beckshaur23/recipe-app-api | a20d22d15aadd194790e6e3a3625a470ae64c58c | ["MIT"] | stars: null | issues: null | forks: null

import tempfile
import os

from PIL import Image

from django.contrib.auth import get_user_model
from django.test import TestCase
from django.urls import reverse

from rest_framework import status
from rest_framework.test import APIClient

from core.models import Recipe, Tag, Ingredient

from recipe.serializers import RecipeSerializer, RecipeDetailSerializer


RECIPES_URL = reverse('recipe:recipe-list')


def image_upload_url(recipe_id):
    """Return URL for the recipe image upload"""
    return reverse('recipe:recipe-upload-image', args=[recipe_id])


def detail_url(recipe_id):
    """Return recipe detail URL"""
    return reverse('recipe:recipe-detail', args=[recipe_id])


def sample_tag(user, name='Main course'):
    """Helper function: create and return a sample tag"""
    return Tag.objects.create(user=user, name=name)


def sample_ingredient(user, name='Cinnamon'):
    """Helper function: create and return a sample ingredient"""
    return Ingredient.objects.create(user=user, name=name)


def sample_recipe(user, **params):
    """Helper function: create and return a sample recipe"""
    defaults = {
        'title': 'Sample recipe',
        'time_minutes': 10,
        'price': 5.00
    }
    defaults.update(params)

    return Recipe.objects.create(user=user, **defaults)
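`sample_recipe` follows a defaults-then-override pattern: a dict of default fields is built, caller overrides are merged in with `update`, and the result feeds the ORM call. The merge itself is plain Python and can be shown without Django; `make_recipe_fields` below is a hypothetical stand-in for the `Recipe.objects.create` call.

```python
def make_recipe_fields(**params):
    # Same merge logic as sample_recipe above, minus the ORM call.
    defaults = {
        'title': 'Sample recipe',
        'time_minutes': 10,
        'price': 5.00,
    }
    defaults.update(params)
    return defaults

print(make_recipe_fields(title='Thai prawn red curry'))
# {'title': 'Thai prawn red curry', 'time_minutes': 10, 'price': 5.0}
```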
class PublicRecipeApiTests(TestCase):
    """Test unauthenticated recipe API access"""

    def setUp(self):
        self.client = APIClient()

    def test_auth_required(self):
        """Test that authentication is required"""
        res = self.client.get(RECIPES_URL)

        self.assertEqual(res.status_code, status.HTTP_401_UNAUTHORIZED)


class PrivateRecipeApiTests(TestCase):
    """Test authenticated recipe API access"""

    def setUp(self):
        self.client = APIClient()
        self.user = get_user_model().objects.create_user(
            'test@londonappdev.com',
            'testpass'
        )
        self.client.force_authenticate(self.user)

    def test_retrieve_recipes(self):
        """Test retrieving a list of recipes"""
        sample_recipe(user=self.user)
        sample_recipe(user=self.user)

        res = self.client.get(RECIPES_URL)

        recipes = Recipe.objects.all().order_by('-id')
        serializer = RecipeSerializer(recipes, many=True)
        self.assertEqual(res.status_code, status.HTTP_200_OK)
        self.assertEqual(res.data, serializer.data)

    def test_recipes_limited_to_user(self):
        """Test retrieving recipes for the authenticated user only"""
        user2 = get_user_model().objects.create_user(
            'other@londonappdev.com',
            'password123'
        )
        sample_recipe(user=user2)
        sample_recipe(user=self.user)

        res = self.client.get(RECIPES_URL)

        recipes = Recipe.objects.filter(user=self.user)
        serializer = RecipeSerializer(recipes, many=True)
        self.assertEqual(res.status_code, status.HTTP_200_OK)
        self.assertEqual(len(res.data), 1)
        self.assertEqual(res.data, serializer.data)

    def test_view_recipe_detail(self):
        """Test viewing a recipe detail"""
        recipe = sample_recipe(user=self.user)
        recipe.tags.add(sample_tag(user=self.user))
        recipe.ingredients.add(sample_ingredient(user=self.user))

        url = detail_url(recipe.id)
        res = self.client.get(url)

        serializer = RecipeDetailSerializer(recipe)
        self.assertEqual(res.data, serializer.data)

    def test_create_basic_recipe(self):
        """Test creating recipe"""
        payload = {
            'title': 'Chocolate cheesecake',
            'time_minutes': 30,
            'price': 5.00
        }
        res = self.client.post(RECIPES_URL, payload)

        self.assertEqual(res.status_code, status.HTTP_201_CREATED)
        recipe = Recipe.objects.get(id=res.data['id'])
        for key in payload.keys():
            self.assertEqual(payload[key], getattr(recipe, key))

    def test_create_recipe_with_tags(self):
        """Test creating a recipe with tags"""
        tag1 = sample_tag(user=self.user, name='Vegan')
        tag2 = sample_tag(user=self.user, name='Dessert')
        payload = {
            'title': 'Avocado lime cheesecake',
            'tags': [tag1.id, tag2.id],
            'time_minutes': 60,
            'price': 20.00
        }
        res = self.client.post(RECIPES_URL, payload)

        self.assertEqual(res.status_code, status.HTTP_201_CREATED)
        recipe = Recipe.objects.get(id=res.data['id'])
        tags = recipe.tags.all()
        self.assertEqual(tags.count(), 2)
        self.assertIn(tag1, tags)
        self.assertIn(tag2, tags)

    def test_create_recipe_with_ingredients(self):
        """Test creating recipe with ingredients"""
        ingredient1 = sample_ingredient(user=self.user, name='Prawns')
        ingredient2 = sample_ingredient(user=self.user, name='Ginger')
        payload = {
            'title': 'Thai prawn red curry',
            'ingredients': [ingredient1.id, ingredient2.id],
            'time_minutes': 20,
            'price': 7.00
        }
        res = self.client.post(RECIPES_URL, payload)

        self.assertEqual(res.status_code, status.HTTP_201_CREATED)
        recipe = Recipe.objects.get(id=res.data['id'])
        ingredients = recipe.ingredients.all()
        self.assertEqual(ingredients.count(), 2)
        self.assertIn(ingredient1, ingredients)
        self.assertIn(ingredient2, ingredients)

    def test_partial_update_recipe(self):
        """Test updating a recipe with patch"""
        recipe = sample_recipe(user=self.user)
        recipe.tags.add(sample_tag(user=self.user))
        new_tag = sample_tag(user=self.user, name='Curry')

        payload = {'title': 'chicken tikka', 'tags': [new_tag.id]}
        url = detail_url(recipe.id)
        self.client.patch(url, payload)

        recipe.refresh_from_db()
        self.assertEqual(recipe.title, payload['title'])
        tags = recipe.tags.all()
        self.assertEqual(len(tags), 1)
        self.assertIn(new_tag, tags)

    def test_full_update_recipe(self):
        """Test updating a recipe with put"""
        recipe = sample_recipe(user=self.user)
        recipe.tags.add(sample_tag(user=self.user))
        payload = {
            'title': 'Spaghetti',
            'time_minutes': 25,
            'price': 5.00
        }
        url = detail_url(recipe.id)
        self.client.put(url, payload)

        recipe.refresh_from_db()
        self.assertEqual(recipe.title, payload['title'])
        self.assertEqual(recipe.time_minutes, payload['time_minutes'])
        self.assertEqual(recipe.price, payload['price'])
        tags = recipe.tags.all()
        self.assertEqual(len(tags), 0)


class RecipeImageUploadTests(TestCase):

    def setUp(self):
        self.client = APIClient()
        self.user = get_user_model().objects.create_user(
            'user@londonappdev.com',
            'testpass'
        )
        self.client.force_authenticate(self.user)
        self.recipe = sample_recipe(user=self.user)

    def tearDown(self):
        self.recipe.image.delete()

    def test_upload_image_to_recipe(self):
        """Test uploading image to recipe"""
        url = image_upload_url(self.recipe.id)
        with tempfile.NamedTemporaryFile(suffix='.jpg') as ntf:
            img = Image.new('RGB', (10, 10))
            img.save(ntf, format='JPEG')
            ntf.seek(0)
            res = self.client.post(url, {'image': ntf}, format='multipart')

        self.recipe.refresh_from_db()
        self.assertEqual(res.status_code, status.HTTP_200_OK)
        self.assertIn('image', res.data)
        self.assertTrue(os.path.exists(self.recipe.image.path))
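`test_upload_image_to_recipe` depends on `ntf.seek(0)`: after `img.save(ntf, ...)` the file position sits at the end of the written bytes, so posting without rewinding would upload an empty body. The same mechanics in a stdlib-only form (no PIL or Django required):

```python
import tempfile

with tempfile.NamedTemporaryFile(suffix='.jpg') as ntf:
    ntf.write(b'fake image bytes')   # position is now at end of file
    ntf.seek(0)                      # rewind so the next read starts at byte 0
    data = ntf.read()

print(data)  # b'fake image bytes'
```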
    def test_upload_image_bad_request(self):
        """Test uploading an invalid image"""
        url = image_upload_url(self.recipe.id)
        res = self.client.post(url, {'image': 'notimage'}, format='multipart')

        self.assertEqual(res.status_code, status.HTTP_400_BAD_REQUEST)

    def test_filter_recipes_by_tags(self):
        """Test returning recipes with specific tags"""
        recipe1 = sample_recipe(user=self.user, title='Thai vege curry')
        recipe2 = sample_recipe(user=self.user, title='Aubergine with tahini')
        tag1 = sample_tag(user=self.user, name='Vegan')
        tag2 = sample_tag(user=self.user, name='Vegetarian')
        recipe1.tags.add(tag1)
        recipe2.tags.add(tag2)
        recipe3 = sample_recipe(user=self.user, title='Fish and chips')

        res = self.client.get(
            RECIPES_URL,
            {'tags': '{},{}'.format(tag1.id, tag2.id)}
        )

        serializer1 = RecipeSerializer(recipe1)
        serializer2 = RecipeSerializer(recipe2)
        serializer3 = RecipeSerializer(recipe3)
        self.assertIn(serializer1.data, res.data)
        self.assertIn(serializer2.data, res.data)
        self.assertNotIn(serializer3.data, res.data)

    def test_filter_recipes_by_ingredients(self):
        """Test returning recipes with specific ingredients"""
        recipe1 = sample_recipe(user=self.user, title='Posh beans on toast')
        recipe2 = sample_recipe(user=self.user, title='Chicken cacciatore')
        ingredient1 = sample_ingredient(user=self.user, name='Feta cheese')
        ingredient2 = sample_ingredient(user=self.user, name='Chicken')
        recipe1.ingredients.add(ingredient1)
        recipe2.ingredients.add(ingredient2)
        recipe3 = sample_recipe(user=self.user, title='Steak and mushrooms')

        res = self.client.get(
            RECIPES_URL,
            {'ingredients': '{},{}'.format(ingredient1.id, ingredient2.id)}
        )

        serializer1 = RecipeSerializer(recipe1)
        serializer2 = RecipeSerializer(recipe2)
        serializer3 = RecipeSerializer(recipe3)
        self.assertIn(serializer1.data, res.data)
        self.assertIn(serializer2.data, res.data)
        self.assertNotIn(serializer3.data, res.data)
avg_line_length: 34.782456 | max_line_length: 78 | alphanum_fraction: 0.646222
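Both filter tests above send IDs as a single comma-joined query value such as `'3,7'`. On the server side the viewset has to split that back into integers before filtering; a sketch of that round-trip follows (`params_to_ints` is illustrative — the actual viewset helper is not shown in this file).

```python
def params_to_ints(qs):
    """Convert a comma-joined string of IDs like '3,7' into [3, 7]."""
    return [int(str_id) for str_id in qs.split(',')]

tag_ids = [3, 7]
qs = '{},{}'.format(*tag_ids)   # same formatting the tests use
print(params_to_ints(qs))       # [3, 7]
```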
547ef90fec19bcd136ddaac7fe5f6192ff69d154 | 4,741 | py | Python | open_cp/gui/tk/config_view.py | BubbleStar/PredictCode | 1c6a5544b1d9185a4547c54fddc630a3592da3ba | ["Artistic-2.0"] | stars: 1 (2019-03-24T07:06:25.000Z to 2019-03-24T07:06:25.000Z) | issues: null | forks: null

"""
config_view
~~~~~~~~~~~
"""

import tkinter as tk
import tkinter.ttk as ttk

import open_cp.gui.tk.util as util
import open_cp.gui.tk.richtext as richtext
import open_cp.gui.tk.tooltips as tooltips
import open_cp.gui.funcs as funcs

import numpy as np
import scipy as scipy
try:
    import geopandas as gpd
except ImportError:
    gpd = None
try:
    import pyproj
except ImportError:
    pyproj = None
import sys

_text = {
    "config" : "Configuration",
    "ttk" : "ttk 'theme':",
    "ttk_tt" : "Select the ttk theme to use. On Windows / OS X leaving this alone is strongly recommended",
    "info" : "System information",
    "info_tt" : "Information about the Python system you are running",
    "plat" : "Platform: {}\n",
    "tcl" : "TCL version in use: {}\n",
    "pyplat" : "Python platform: {}\n",
    "pyroot" : "Python root path: {}\n",
    "np" : "Numpy version: {}\n",
    "scipy" : "Scipy version: {}\n",
    "gpd" : "GeoPandas version: {}\n",
    "gpd_none" : "GeoPandas could not be loaded\n",
    "pyproj" : "Pyproj version: {}\n",
    "pyproj_none" : "Pyproj could not be loaded\n",
    "okay" : "Okay",
    "cancel" : "Cancel",
    "sf" : "Settings filename: {}",
}


class ConfigView(tk.Frame):
    def __init__(self, parent, model, controller):
        super().__init__(parent)
        self._parent = parent
        self.controller = controller
        self.model = model
        self.master.protocol("WM_DELETE_WINDOW", self._cancel)
        self.grid(sticky=util.NSEW)
        util.stretchy_rows_cols(self, [11], [0])
        self._add_widgets()
        self.resize()

    def resize(self, final=False):
        self.update_idletasks()
        util.centre_window(self._parent, self._parent.winfo_reqwidth(), self._parent.winfo_reqheight())
        if not final:
            self.after_idle(lambda : self.resize(True))

    def _add_widgets(self):
        frame = ttk.Frame(self)
        frame.grid(row=0, column=0, padx=2, pady=2, sticky=tk.W)
        la = ttk.Label(frame, text=_text["sf"].format(funcs.string_ellipse(self.model.settings_filename, 80)))
        la.grid(row=0, column=0)

        frame = ttk.LabelFrame(self, text=_text["config"])
        frame.grid(row=10, column=0, padx=2, pady=2, sticky=tk.NSEW)
        self._add_config_box(frame)

        frame = ttk.LabelFrame(self, text=_text["info"])
        frame.grid(row=11, column=0, padx=2, pady=2, sticky=tk.NSEW)
        tooltips.ToolTipYellow(frame, _text["info_tt"])
        util.stretchy_rows(frame, [0])
        self._add_info_box(frame)

        frame = ttk.Frame(self)
        frame.grid(row=20, column=0, padx=2, pady=2, sticky=tk.EW)
        util.stretchy_columns(frame, [0, 1])
        b = ttk.Button(frame, text=_text["okay"], command=self._okay)
        b.grid(row=0, column=0, padx=2, pady=2, sticky=tk.NSEW)
        b = ttk.Button(frame, text=_text["cancel"], command=self._cancel)
        b.grid(row=0, column=1, padx=2, pady=2, sticky=tk.NSEW)

    def _add_info_box(self, frame):
        self._text = richtext.RichText(frame, height=10, scroll="v")
        self._text.grid(row=0, column=0, padx=1, pady=1, sticky=tk.NSEW)
        self._text.add_text(_text["plat"].format(sys.platform))
        self._text.add_text(_text["tcl"].format(tk.Tcl().eval('info patchlevel')))
        self._text.add_text(_text["pyplat"].format(sys.implementation))
        self._text.add_text(_text["pyroot"].format(sys.base_prefix))
        self._text.add_text(_text["np"].format(np.__version__))
        self._text.add_text(_text["scipy"].format(scipy.__version__))
        if gpd is not None:
            self._text.add_text(_text["gpd"].format(gpd.__version__))
        else:
            self._text.add_text(_text["gpd_none"])
        if pyproj is not None:
            self._text.add_text(_text["pyproj"].format(pyproj.__version__))
        else:
            self._text.add_text(_text["pyproj_none"])

    def _add_config_box(self, frame):
        label = ttk.Label(frame, text=_text["ttk"])
        label.grid(row=0, column=0, padx=2, pady=2)
        tooltips.ToolTipYellow(label, _text["ttk_tt"])
        self.theme_cbox = ttk.Combobox(frame, height=5, state="readonly")
        self.theme_cbox.bind("<<ComboboxSelected>>", self._theme_selected)
        self.theme_cbox.grid(row=0, column=1, padx=2, pady=2)
        self.theme_cbox["values"] = list(self.model.themes)

    def set_theme_selected(self, choice):
        self.theme_cbox.current(choice)

    def _theme_selected(self, event):
        index = int(self.theme_cbox.current())
        self.controller.selected_theme(index)

    def _okay(self):
        self.controller.okay()
        self.destroy()

    def _cancel(self):
        self.controller.cancel()
        self.destroy()
avg_line_length: 37.626984 | max_line_length: 110 | alphanum_fraction: 0.626872
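`funcs.string_ellipse`, used above to shorten the settings filename to 80 characters, is not defined in this file. A stdlib sketch of what such a helper might look like follows — the real open_cp implementation may differ, so treat this as an assumption:

```python
def string_ellipse(text, max_length):
    """Shorten `text` to at most `max_length` characters, marking any cut with '...'."""
    if len(text) <= max_length:
        return text
    # Keep the tail of the string, since the end of a path is usually the useful part.
    return '...' + text[-(max_length - 3):]

print(string_ellipse('C:/very/long/path/to/settings.json', 20))  # '.../to/settings.json'
```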
ab4420ce4fa9c7162dea270c3827ffef9d317be0 | 7,886 | py | Python | docs/conf.py | anosillus/thank-you_letter_generator | cfa650e0708fe5c594b7165276eecaa22e31aca5 | ["MIT"] | stars: null | issues: null | forks: null

# -*- coding: utf-8 -*-
#
# auto_thank-you_letter documentation build configuration file, created by
# sphinx-quickstart.
#
# This file is execfile()d with the current directory set to its containing dir.
#
# Note that not all possible configuration values are present in this
# autogenerated file.
#
# All configuration values have a default; values that are commented out
# serve to show the default.
import os
import sys
# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
# sys.path.insert(0, os.path.abspath('.'))
# -- General configuration -----------------------------------------------------
# If your documentation needs a minimal Sphinx version, state it here.
# needs_sphinx = '1.0'
# Add any Sphinx extension module names here, as strings. They can be extensions
# coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
extensions = []
# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']
# The suffix of source filenames.
source_suffix = '.rst'
# The encoding of source files.
# source_encoding = 'utf-8-sig'
# The master toctree document.
master_doc = 'index'
# General information about the project.
project = u'auto_thank-you_letter'
# The version info for the project you're documenting, acts as replacement for
# |version| and |release|, also used in various other places throughout the
# built documents.
#
# The short X.Y version.
version = '0.1'
# The full version, including alpha/beta/rc tags.
release = '0.1'
# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
# language = None
# There are two options for replacing |today|: either, you set today to some
# non-false value, then it is used:
# today = ''
# Else, today_fmt is used as the format for a strftime call.
# today_fmt = '%B %d, %Y'
# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
exclude_patterns = ['_build']
# The reST default role (used for this markup: `text`) to use for all documents.
# default_role = None
# If true, '()' will be appended to :func: etc. cross-reference text.
# add_function_parentheses = True
# If true, the current module name will be prepended to all description
# unit titles (such as .. function::).
# add_module_names = True
# If true, sectionauthor and moduleauthor directives will be shown in the
# output. They are ignored by default.
# show_authors = False
# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'sphinx'
# A list of ignored prefixes for module index sorting.
# modindex_common_prefix = []
# -- Options for HTML output ---------------------------------------------------
# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
html_theme = 'default'
# Theme options are theme-specific and customize the look and feel of a theme
# further. For a list of options available for each theme, see the
# documentation.
# html_theme_options = {}
# Add any paths that contain custom themes here, relative to this directory.
# html_theme_path = []
# The name for this set of Sphinx documents. If None, it defaults to
# "<project> v<release> documentation".
# html_title = None
# A shorter title for the navigation bar. Default is the same as html_title.
# html_short_title = None
# The name of an image file (relative to this directory) to place at the top
# of the sidebar.
# html_logo = None
# The name of an image file (within the static path) to use as favicon of the
# docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32
# pixels large.
# html_favicon = None
# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
html_static_path = ['_static']
# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,
# using the given strftime format.
# html_last_updated_fmt = '%b %d, %Y'
# If true, SmartyPants will be used to convert quotes and dashes to
# typographically correct entities.
# html_use_smartypants = True
# Custom sidebar templates, maps document names to template names.
# html_sidebars = {}
# Additional templates that should be rendered to pages, maps page names to
# template names.
# html_additional_pages = {}
# If false, no module index is generated.
# html_domain_indices = True
# If false, no index is generated.
# html_use_index = True
# If true, the index is split into individual pages for each letter.
# html_split_index = False
# If true, links to the reST sources are added to the pages.
# html_show_sourcelink = True
# If true, "Created using Sphinx" is shown in the HTML footer. Default is True.
# html_show_sphinx = True
# If true, "(C) Copyright ..." is shown in the HTML footer. Default is True.
# html_show_copyright = True
# If true, an OpenSearch description file will be output, and all pages will
# contain a <link> tag referring to it. The value of this option must be the
# base URL from which the finished HTML is served.
# html_use_opensearch = ''
# This is the file name suffix for HTML files (e.g. ".xhtml").
# html_file_suffix = None
# Output file base name for HTML help builder.
htmlhelp_basename = 'auto_thank-you_letterdoc'
# -- Options for LaTeX output --------------------------------------------------
latex_elements = {
    # The paper size ('letterpaper' or 'a4paper').
    # 'papersize': 'letterpaper',

    # The font size ('10pt', '11pt' or '12pt').
    # 'pointsize': '10pt',

    # Additional stuff for the LaTeX preamble.
    # 'preamble': '',
}
# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title, author, documentclass [howto/manual]).
latex_documents = [
    ('index',
     'auto_thank-you_letter.tex',
     u'auto_thank-you_letter Documentation',
     u"anosillus", 'manual'),
]
# The name of an image file (relative to this directory) to place at the top of
# the title page.
# latex_logo = None
# For "manual" documents, if this is true, then toplevel headings are parts,
# not chapters.
# latex_use_parts = False
# If true, show page references after internal links.
# latex_show_pagerefs = False
# If true, show URL addresses after external links.
# latex_show_urls = False
# Documents to append as an appendix to all manuals.
# latex_appendices = []
# If false, no module index is generated.
# latex_domain_indices = True
# -- Options for manual page output --------------------------------------------
# One entry per manual page. List of tuples
# (source start file, name, description, authors, manual section).
man_pages = [
    ('index', 'auto_thank-you_letter', u'auto_thank-you_letter Documentation',
     [u"anosillus"], 1)
]
# If true, show URL addresses after external links.
# man_show_urls = False
# -- Options for Texinfo output ------------------------------------------------
# Grouping the document tree into Texinfo files. List of tuples
# (source start file, target name, title, author,
# dir menu entry, description, category)
texinfo_documents = [
    ('index', 'auto_thank-you_letter', u'auto_thank-you_letter Documentation',
     u"anosillus", 'auto_thank-you_letter',
     'A short description of the project.', 'Miscellaneous'),
]
# Documents to append as an appendix to all manuals.
# texinfo_appendices = []
# If false, no module index is generated.
# texinfo_domain_indices = True
# How to display URL addresses: 'footnote', 'no', or 'inline'.
# texinfo_show_urls = 'footnote'
| 32.187755 | 80 | 0.709992 |