{
"drips": {
"ethereum": {
"ownedBy": "0xCE08E02c37d90d75C2bf7D9e55f7606C8DB80E70"
}
}
} | json | github | https://github.com/sveltejs/svelte | FUNDING.json |
'''
Metrics
=======
.. versionadded:: 1.5.0
A screen is defined by its physical size, density and resolution. These
factors are essential for creating UIs with the correct size everywhere.
In Kivy, all the graphics pipelines work with pixels. But using pixels as a
measurement unit is problematic because sizes change according to the
screen.
Dimensions
----------
If you want to design your UI for different screen sizes, you will want better
measurement units to work with. Kivy provides some more scalable alternatives.
:Units:
`pt`
Points - 1/72 of an inch based on the physical size of the screen.
Prefer to use sp instead of pt.
`mm`
Millimeters - Based on the physical size of the screen.
`cm`
Centimeters - Based on the physical size of the screen.
`in`
Inches - Based on the physical size of the screen.
`dp`
Density-independent Pixels - An abstract unit that is based on the
physical density of the screen. With a :data:`~MetricsBase.density` of
1, 1dp is equal to 1px. When running on a higher-density screen, the
number of pixels used to draw 1dp is scaled up by a factor appropriate
to the screen's dpi, and the inverse for a lower dpi.
The ratio of dp-to-pixels will change with the screen density, but not
necessarily in direct proportion. Using the dp unit is a simple way to
make the view dimensions in your layout resize properly for different
screen densities. In other words, it provides consistency for the
real-world size of your UI across different devices.
`sp`
Scale-independent Pixels - This is like the dp unit, but it is also
scaled by the user's font size preference. We recommend you use this
unit when specifying font sizes, so the font size will be adjusted to
both the screen density and the user's preference.
Examples
--------
Here is an example of creating a label with a sp font_size and setting the
height manually with a 10dp margin::
#:kivy 1.5.0
<MyWidget>:
Label:
text: 'Hello world'
font_size: '15sp'
size_hint_y: None
height: self.texture_size[1] + dp(10)
Manual control of metrics
-------------------------
The metrics cannot be changed at runtime. Once a value has been converted to
pixels, you can't retrieve the original value anymore. This stems from the fact
that the DPI and density of a device cannot be changed at runtime.
We provide some environment variables to control metrics:
- `KIVY_METRICS_DENSITY`: if set, this value will be used for
:data:`~MetricsBase.density` instead of the system's value. On android, the
value varies between 0.75, 1, 1.5 and 2.
- `KIVY_METRICS_FONTSCALE`: if set, this value will be used for
:data:`~MetricsBase.fontscale` instead of the system's value. On android, the
value varies between 0.8 and 1.2.
- `KIVY_DPI`: if set, this value will be used for :data:`~MetricsBase.dpi`.
Please note that setting the DPI will not impact the dp/sp notation
because these are based on the screen density.
For example, if you want to simulate a high-density screen (like the HTC One
X)::
KIVY_DPI=320 KIVY_METRICS_DENSITY=2 python main.py --size 1280x720
Or a medium-density (like Motorola Droid 2)::
KIVY_DPI=240 KIVY_METRICS_DENSITY=1.5 python main.py --size 854x480
You can also simulate an alternative user preference for fontscale as follows::
KIVY_METRICS_FONTSCALE=1.2 python main.py
'''
__all__ = ('Metrics', 'MetricsBase', 'pt', 'inch', 'cm', 'mm', 'dp', 'sp',
'metrics')
from os import environ
from kivy.utils import reify, platform
from kivy.properties import dpi2px
def pt(value):
'''Convert from points to pixels
'''
return dpi2px(value, 'pt')
def inch(value):
'''Convert from inches to pixels
'''
return dpi2px(value, 'in')
def cm(value):
'''Convert from centimeters to pixels
'''
return dpi2px(value, 'cm')
def mm(value):
'''Convert from millimeters to pixels
'''
return dpi2px(value, 'mm')
def dp(value):
'''Convert from density-independent pixels to pixels
'''
return dpi2px(value, 'dp')
def sp(value):
'''Convert from scale-independent pixels to pixels
'''
return dpi2px(value, 'sp')
class MetricsBase(object):
'''Class that contains the default attributes for Metrics. Don't use this
class directly, but use the `Metrics` instance.
'''
@reify
def dpi(self):
'''Return the DPI of the screen. Depending on the platform, the DPI can
be taken from the Window provider (Desktop mainly) or from a
platform-specific module (like android/ios).
'''
custom_dpi = environ.get('KIVY_DPI')
if custom_dpi:
return float(custom_dpi)
if platform == 'android':
import android
return android.get_dpi()
elif platform == 'ios':
import ios
return ios.get_dpi()
# for all other platforms..
from kivy.base import EventLoop
EventLoop.ensure_window()
return EventLoop.window.dpi
@reify
def dpi_rounded(self):
'''Return the DPI of the screen, rounded to the nearest of 120, 160,
240 or 320.
'''
dpi = self.dpi
if dpi < 140:
return 120
elif dpi < 200:
return 160
elif dpi < 280:
return 240
return 320
@reify
def density(self):
'''Return the density of the screen. This value is 1 by default
on desktops but varies on android depending on the screen.
'''
custom_density = environ.get('KIVY_METRICS_DENSITY')
if custom_density:
return float(custom_density)
if platform == 'android':
import jnius
Hardware = jnius.autoclass('org.renpy.android.Hardware')
return Hardware.metrics.scaledDensity
elif platform == 'ios':
# 0.75 maps the iOS scale factor onto the same density scale as android tablets
import ios
return ios.get_scale() * 0.75
return 1.0
@reify
def fontscale(self):
'''Return the fontscale user preference. This value is 1 by default but
can vary between 0.8 and 1.2.
'''
custom_fontscale = environ.get('KIVY_METRICS_FONTSCALE')
if custom_fontscale:
return float(custom_fontscale)
if platform == 'android':
import jnius
PythonActivity = jnius.autoclass('org.renpy.android.PythonActivity')
config = PythonActivity.mActivity.getResources().getConfiguration()
return config.fontScale
return 1.0
#: Default instance of :class:`MetricsBase`, used everywhere in the code
#: .. versionadded:: 1.7.0
Metrics = MetricsBase()
#: Default instance of :class:`MetricsBase`, used everywhere in the code
#: (deprecated; use `Metrics` instead).
metrics = Metrics | unknown | codeparrot/codeparrot-clean | ||
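Once dpi, density, and fontscale are fixed, the unit conversions documented in the module above reduce to plain arithmetic. Below is a minimal standalone sketch of that arithmetic with hard-coded assumed values (dpi 160, density 1, fontscale 1) in place of Kivy's runtime detection; the formulas follow the standard definitions of the units rather than Kivy's internal `dpi2px`, so treat the exact scaling as an assumption:

```python
# Standalone sketch of the metric unit conversions. DPI, DENSITY and
# FONTSCALE are assumed constants here; Kivy detects them at runtime.
DPI = 160.0        # dots per inch
DENSITY = 1.0      # dp-to-px scale factor
FONTSCALE = 1.0    # user font-size preference

def inch(v): return v * DPI                  # inches -> pixels
def pt(v):   return v * DPI / 72.0           # points (1/72 in) -> pixels
def cm(v):   return v * DPI / 2.54           # centimeters -> pixels
def mm(v):   return v * DPI / 25.4           # millimeters -> pixels
def dp(v):   return v * DENSITY              # density-independent px -> px
def sp(v):   return v * DENSITY * FONTSCALE  # scale-independent px -> px

print(dp(10), sp(15), pt(72))  # 10.0 15.0 160.0
```

With density 1 and fontscale 1, dp and sp are identity functions, which matches the docstring's statement that 1dp equals 1px at a density of 1.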
// Copyright 2018 The Abseil Authors.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// https://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
#ifndef ABSL_CONTAINER_INTERNAL_UNORDERED_SET_LOOKUP_TEST_H_
#define ABSL_CONTAINER_INTERNAL_UNORDERED_SET_LOOKUP_TEST_H_
#include "gmock/gmock.h"
#include "gtest/gtest.h"
#include "absl/container/internal/hash_generator_testing.h"
#include "absl/container/internal/hash_policy_testing.h"
namespace absl {
ABSL_NAMESPACE_BEGIN
namespace container_internal {
template <class UnordSet>
class LookupTest : public ::testing::Test {};
TYPED_TEST_SUITE_P(LookupTest);
TYPED_TEST_P(LookupTest, Count) {
using T = hash_internal::GeneratedType<TypeParam>;
std::vector<T> values;
std::generate_n(std::back_inserter(values), 10,
hash_internal::Generator<T>());
TypeParam m;
for (const auto& v : values)
EXPECT_EQ(0, m.count(v)) << ::testing::PrintToString(v);
m.insert(values.begin(), values.end());
for (const auto& v : values)
EXPECT_EQ(1, m.count(v)) << ::testing::PrintToString(v);
}
TYPED_TEST_P(LookupTest, Find) {
using T = hash_internal::GeneratedType<TypeParam>;
std::vector<T> values;
std::generate_n(std::back_inserter(values), 10,
hash_internal::Generator<T>());
TypeParam m;
for (const auto& v : values)
EXPECT_TRUE(m.end() == m.find(v)) << ::testing::PrintToString(v);
m.insert(values.begin(), values.end());
for (const auto& v : values) {
typename TypeParam::iterator it = m.find(v);
static_assert(std::is_same<const typename TypeParam::value_type&,
decltype(*it)>::value,
"");
static_assert(std::is_same<const typename TypeParam::value_type*,
decltype(it.operator->())>::value,
"");
EXPECT_TRUE(m.end() != it) << ::testing::PrintToString(v);
EXPECT_EQ(v, *it) << ::testing::PrintToString(v);
}
}
TYPED_TEST_P(LookupTest, EqualRange) {
using T = hash_internal::GeneratedType<TypeParam>;
std::vector<T> values;
std::generate_n(std::back_inserter(values), 10,
hash_internal::Generator<T>());
TypeParam m;
for (const auto& v : values) {
auto r = m.equal_range(v);
ASSERT_EQ(0, std::distance(r.first, r.second));
}
m.insert(values.begin(), values.end());
for (const auto& v : values) {
auto r = m.equal_range(v);
ASSERT_EQ(1, std::distance(r.first, r.second));
EXPECT_EQ(v, *r.first);
}
}
REGISTER_TYPED_TEST_SUITE_P(LookupTest, Count, Find, EqualRange);
} // namespace container_internal
ABSL_NAMESPACE_END
} // namespace absl
#endif // ABSL_CONTAINER_INTERNAL_UNORDERED_SET_LOOKUP_TEST_H_ | c | github | https://github.com/mysql/mysql-server | extra/abseil/abseil-cpp-20230802.1/absl/container/internal/unordered_set_lookup_test.h |
// errorcheck
// Copyright 2012 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// Verify that the Go compiler will not
// die after running into an undefined
// type in the argument list for a
// function.
// Does not compile.
package main
func mine(int b) int { // ERROR "undefined.*b"
return b + 2 // ERROR "undefined.*b"
}
func main() {
mine() // ERROR "not enough arguments"
c = mine() // ERROR "undefined.*c|not enough arguments"
} | go | github | https://github.com/golang/go | test/typecheck.go |
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from threading import Thread
from six import StringIO
from six.moves.queue import Queue, Empty
class WinPty(object):
def __init__(self, stdin):
self._s = stdin
self._q = Queue()
def _read_next_char(stdin, queue):
while True:
char = stdin.read(1) # potentially blocking read
if char:
queue.put(char)
else:
break
self._t = Thread(target=_read_next_char, args=(self._s, self._q))
self._t.daemon = True
self._t.start() # read characters asynchronously from stdin
def read(self, blksize=-1, timeout=1):
buf = StringIO()
count = 0
try:
while count < blksize or blksize == -1:
char = self._q.get(block=timeout is not None, timeout=timeout)
buf.write(char)
count += 1
except Empty:
pass
return buf.getvalue() | unknown | codeparrot/codeparrot-clean | ||
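The WinPty class above implements a common pattern: a daemon thread pumps a blocking character stream into a queue so the main thread can poll it with a timeout. A self-contained sketch of the same pattern, using `io.StringIO` as a stand-in for stdin and a hypothetical `NonBlockingReader` name:

```python
import io
from queue import Empty, Queue
from threading import Thread

class NonBlockingReader:
    """Same idea as WinPty: a daemon thread pumps a blocking stream
    into a Queue; the main thread polls the Queue with a timeout."""

    def __init__(self, stream):
        self._q = Queue()

        def pump(s, q):
            while True:
                ch = s.read(1)  # potentially blocking read
                if not ch:
                    break       # EOF: stop the pump thread
                q.put(ch)

        Thread(target=pump, args=(stream, self._q), daemon=True).start()

    def read(self, blksize=-1, timeout=0.5):
        buf = io.StringIO()
        count = 0
        try:
            while count < blksize or blksize == -1:
                buf.write(self._q.get(block=timeout is not None,
                                      timeout=timeout))
                count += 1
        except Empty:
            pass  # nothing more arrived within the timeout
        return buf.getvalue()

reader = NonBlockingReader(io.StringIO("hello"))
print(reader.read())  # "hello" once the pump thread drains the stream
```

The timeout bounds how long `read` blocks when the queue runs dry, which is the whole point of the pattern: a plain `stdin.read()` could block the main thread indefinitely.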
// Copyright IBM Corp. 2016, 2025
// SPDX-License-Identifier: MPL-2.0
package api
import (
"context"
"errors"
"fmt"
"net/http"
"github.com/mitchellh/mapstructure"
)
// GetPluginRuntimeInput is used as input to the GetPluginRuntime function.
type GetPluginRuntimeInput struct {
Name string `json:"-"`
// Type of the plugin runtime. Required.
Type PluginRuntimeType `json:"type"`
}
// GetPluginRuntimeResponse is the response from the GetPluginRuntime call.
type GetPluginRuntimeResponse struct {
Type string `json:"type"`
Name string `json:"name"`
OCIRuntime string `json:"oci_runtime"`
CgroupParent string `json:"cgroup_parent"`
CPU int64 `json:"cpu_nanos"`
Memory int64 `json:"memory_bytes"`
}
// GetPluginRuntime retrieves information about the plugin runtime.
func (c *Sys) GetPluginRuntime(ctx context.Context, i *GetPluginRuntimeInput) (*GetPluginRuntimeResponse, error) {
ctx, cancelFunc := c.c.withConfiguredTimeout(ctx)
defer cancelFunc()
path := pluginRuntimeCatalogPathByType(i.Type, i.Name)
req := c.c.NewRequest(http.MethodGet, path)
resp, err := c.c.rawRequestWithContext(ctx, req)
if err != nil {
return nil, err
}
defer resp.Body.Close()
var result struct {
Data *GetPluginRuntimeResponse
}
err = resp.DecodeJSON(&result)
if err != nil {
return nil, err
}
return result.Data, nil
}
// RegisterPluginRuntimeInput is used as input to the RegisterPluginRuntime function.
type RegisterPluginRuntimeInput struct {
// Name is the name of the plugin. Required.
Name string `json:"-"`
// Type of the plugin. Required.
Type PluginRuntimeType `json:"type"`
OCIRuntime string `json:"oci_runtime,omitempty"`
CgroupParent string `json:"cgroup_parent,omitempty"`
CPU int64 `json:"cpu_nanos,omitempty"`
Memory int64 `json:"memory_bytes,omitempty"`
Rootless bool `json:"rootless,omitempty"`
}
// RegisterPluginRuntime registers the plugin with the given information.
func (c *Sys) RegisterPluginRuntime(ctx context.Context, i *RegisterPluginRuntimeInput) error {
ctx, cancelFunc := c.c.withConfiguredTimeout(ctx)
defer cancelFunc()
path := pluginRuntimeCatalogPathByType(i.Type, i.Name)
req := c.c.NewRequest(http.MethodPut, path)
if err := req.SetJSONBody(i); err != nil {
return err
}
resp, err := c.c.rawRequestWithContext(ctx, req)
if err == nil {
defer resp.Body.Close()
}
return err
}
// DeregisterPluginRuntimeInput is used as input to the DeregisterPluginRuntime function.
type DeregisterPluginRuntimeInput struct {
// Name is the name of the plugin runtime. Required.
Name string `json:"-"`
// Type of the plugin. Required.
Type PluginRuntimeType `json:"type"`
}
// DeregisterPluginRuntime removes the plugin runtime with the given name from
// the catalog.
func (c *Sys) DeregisterPluginRuntime(ctx context.Context, i *DeregisterPluginRuntimeInput) error {
ctx, cancelFunc := c.c.withConfiguredTimeout(ctx)
defer cancelFunc()
path := pluginRuntimeCatalogPathByType(i.Type, i.Name)
req := c.c.NewRequest(http.MethodDelete, path)
resp, err := c.c.rawRequestWithContext(ctx, req)
if err == nil {
defer resp.Body.Close()
}
return err
}
type PluginRuntimeDetails struct {
Type string `json:"type" mapstructure:"type"`
Name string `json:"name" mapstructure:"name"`
OCIRuntime string `json:"oci_runtime" mapstructure:"oci_runtime"`
CgroupParent string `json:"cgroup_parent" mapstructure:"cgroup_parent"`
CPU int64 `json:"cpu_nanos" mapstructure:"cpu_nanos"`
Memory int64 `json:"memory_bytes" mapstructure:"memory_bytes"`
}
// ListPluginRuntimesInput is used as input to the ListPluginRuntimes function.
type ListPluginRuntimesInput struct {
// Type of the plugin. Required.
Type PluginRuntimeType `json:"type"`
}
// ListPluginRuntimesResponse is the response from the ListPluginRuntimes call.
type ListPluginRuntimesResponse struct {
// Runtimes is the list of plugin runtimes.
Runtimes []PluginRuntimeDetails `json:"runtimes"`
}
// ListPluginRuntimes lists all plugin runtimes in the catalog, optionally
// filtered by the type given in the input.
func (c *Sys) ListPluginRuntimes(ctx context.Context, input *ListPluginRuntimesInput) (*ListPluginRuntimesResponse, error) {
ctx, cancelFunc := c.c.withConfiguredTimeout(ctx)
defer cancelFunc()
if input != nil && input.Type == PluginRuntimeTypeUnsupported {
return nil, fmt.Errorf("%q is not a supported runtime type", input.Type.String())
}
resp, err := c.c.rawRequestWithContext(ctx, c.c.NewRequest(http.MethodGet, "/v1/sys/plugins/runtimes/catalog"))
if err != nil && resp == nil {
return nil, err
}
if resp == nil {
return nil, nil
}
defer resp.Body.Close()
secret, err := ParseSecret(resp.Body)
if err != nil {
return nil, err
}
if secret == nil || secret.Data == nil {
return nil, errors.New("data from server response is empty")
}
if _, ok := secret.Data["runtimes"]; !ok {
return nil, fmt.Errorf("data from server response does not contain runtimes")
}
var runtimes []PluginRuntimeDetails
if err = mapstructure.Decode(secret.Data["runtimes"], &runtimes); err != nil {
return nil, err
}
// return all runtimes in the catalog
if input == nil {
return &ListPluginRuntimesResponse{Runtimes: runtimes}, nil
}
result := &ListPluginRuntimesResponse{
Runtimes: []PluginRuntimeDetails{},
}
for _, runtime := range runtimes {
if runtime.Type == input.Type.String() {
result.Runtimes = append(result.Runtimes, runtime)
}
}
return result, nil
}
// pluginRuntimeCatalogPathByType is a helper to construct the proper API path by plugin type
func pluginRuntimeCatalogPathByType(runtimeType PluginRuntimeType, name string) string {
return fmt.Sprintf("/v1/sys/plugins/runtimes/catalog/%s/%s", runtimeType, name)
} | go | github | https://github.com/hashicorp/vault | api/sys_plugins_runtimes.go |
# Copyright 2015 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
r"""Transforms a float-trained graph into an equivalent quantized version.
An example of command-line usage is:
bazel build tensorflow/tools/quantization:quantize_graph \
&& bazel-bin/tensorflow/tools/quantization/quantize_graph \
--input=tensorflow_inception_graph.pb \
--output_node_names="softmax2" --print_nodes --output=/tmp/quantized_graph.pb \
--mode=eightbit --logtostderr
"""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import collections
import re
import numpy as np
from tensorflow.core.framework import attr_value_pb2
from tensorflow.core.framework import graph_pb2
from tensorflow.core.framework import node_def_pb2
from tensorflow.python.client import session
from tensorflow.python.framework import constant_op
from tensorflow.python.framework import dtypes
from tensorflow.python.framework import graph_util
from tensorflow.python.framework import importer
from tensorflow.python.framework import ops
from tensorflow.python.framework import tensor_shape
from tensorflow.python.framework import tensor_util
from tensorflow.python.ops import array_ops
from tensorflow.python.platform import app
from tensorflow.python.platform import flags as flags_lib
from tensorflow.python.platform import gfile
flags = flags_lib
FLAGS = flags.FLAGS
flags.DEFINE_boolean("print_nodes", False, """Lists all nodes in the model.""")
flags.DEFINE_string("input", "", """TensorFlow 'GraphDef' file to load.""")
flags.DEFINE_string("output_node_names", "",
"""Output node names, comma separated.""")
flags.DEFINE_string("output", "", """File to save the output graph to.""")
flags.DEFINE_integer("bitdepth", 8,
"""How many bits to quantize the graph to.""")
flags.DEFINE_string("mode", "round",
"""What transformation to apply (round, quantize,"""
""" eightbit, weights, or weights_rounded).""")
flags.DEFINE_string("test_input_dims", "1,224,224,3",
"""The size of the input tensor to use when testing a"""
""" graph loaded from a file.""")
flags.DEFINE_boolean("strip_redundant_quantization", True,
"""Removes redundant dequantize/quantize pairs.""")
flags.DEFINE_boolean("quantized_input", False,
"If true, assume Placeholders are quantized with values "
"covering [--quantized_input_min,--quantized_input_max]. "
"Only supported when --mode=eightbit")
flags.DEFINE_float("quantized_input_min", 0,
"The minimum of the actual input range when "
"--quantized_input")
flags.DEFINE_float("quantized_input_max", 1,
"The maximum of the actual input range when "
"--quantized_input")
flags.DEFINE_float(
"quantized_fallback_min", None,
"The fallback 'min' value to use for layers which lack min-max "
"information. Note: this should be considered a coarse tool just good "
"enough for experimentation purposes, since graphs quantized in this way "
"would be very inaccurate.")
flags.DEFINE_float(
"quantized_fallback_max", None,
"The fallback 'max' value to use for layers which lack min-max "
"information. Note: this should be considered a coarse tool just good "
"enough for experimentation purposes, since graphs quantized in this way "
"would be very inaccurate.")
def print_input_nodes(current_node, nodes_map, indent, already_visited):
print(" " * indent + current_node.op + ":" + current_node.name)
already_visited[current_node.name] = True
for input_node_name in current_node.input:
if input_node_name in already_visited:
continue
input_node = nodes_map[input_node_name]
print_input_nodes(input_node, nodes_map, indent + 1, already_visited)
def create_node(op, name, inputs):
new_node = node_def_pb2.NodeDef()
new_node.op = op
new_node.name = name
for input_name in inputs:
new_node.input.extend([input_name])
return new_node
def create_constant_node(name, value, dtype, shape=None):
node = create_node("Const", name, [])
set_attr_dtype(node, "dtype", dtype)
set_attr_tensor(node, "value", value, dtype, shape)
return node
def copy_attr(node, key, attr_value):
try:
node.attr[key].CopyFrom(attr_value)
except KeyError:
pass
def set_attr_dtype(node, key, value):
try:
node.attr[key].CopyFrom(
attr_value_pb2.AttrValue(type=value.as_datatype_enum))
except KeyError:
pass
def set_attr_shape(node, key, value):
try:
node.attr[key].CopyFrom(
attr_value_pb2.AttrValue(shape=tensor_shape.as_shape(value).as_proto()))
except KeyError:
pass
def set_attr_tensor(node, key, value, dtype, shape=None):
try:
node.attr[key].CopyFrom(
attr_value_pb2.AttrValue(tensor=tensor_util.make_tensor_proto(
value, dtype=dtype, shape=shape)))
except KeyError:
pass
def set_attr_string(node, key, value):
try:
node.attr[key].CopyFrom(attr_value_pb2.AttrValue(s=value))
except KeyError:
pass
def set_attr_int_list(node, key, value):
list_value = attr_value_pb2.AttrValue.ListValue(i=value)
try:
node.attr[key].CopyFrom(attr_value_pb2.AttrValue(list=list_value))
except KeyError:
pass
def set_attr_bool(node, key, value):
try:
node.attr[key].CopyFrom(attr_value_pb2.AttrValue(b=value))
except KeyError:
pass
def set_attr_int(node, key, value):
try:
node.attr[key].CopyFrom(attr_value_pb2.AttrValue(i=value))
except KeyError:
pass
def set_attr_float(node, key, value):
try:
node.attr[key].CopyFrom(attr_value_pb2.AttrValue(f=value))
except KeyError:
pass
def node_name_from_input(node_name):
"""Strips off ports and other decorations to get the underlying node name."""
if node_name.startswith("^"):
node_name = node_name[1:]
m = re.search(r"(.*):\d+$", node_name)
if m:
node_name = m.group(1)
return node_name
def ensure_tensor_name_has_port(node_name):
"""Makes sure that a tensor name has :0 if no explicit port exists."""
m = re.search(r"(.*):\d+$", node_name)
if m:
name_with_port = node_name
else:
name_with_port = node_name + ":0"
return name_with_port
def unique_node_name_from_input(node_name):
"""Replaces invalid characters in input names to get a unique node name."""
return node_name.replace(":", "__port__").replace("^", "__hat__")
def quantize_array(arr, num_buckets):
"""Quantizes a numpy array.
This function maps each scalar in arr to the center of one of num_buckets
buckets. For instance,
quantize_array([0, 0.3, 0.6, 1], 2) => [0.25, 0.25, 0.75, 0.75]
Args:
arr: The numpy array to quantize.
num_buckets: The number of buckets to map "var" to.
Returns:
The quantized numpy array.
Raises:
ValueError: when num_buckets < 1.
"""
if num_buckets < 1:
raise ValueError("num_buckets must be >= 1")
arr_max = arr.max()
arr_min = arr.min()
if arr_max == arr_min:
return arr
bucket_width = (arr_max - arr_min) / num_buckets
# Map scalars to bucket indices. Take special care of max(arr).
bucket_indices = np.floor((arr - arr_min) / bucket_width)
bucket_indices[bucket_indices == num_buckets] = num_buckets - 1
# Map each scalar to the center of a bucket.
arr = arr_min + bucket_width * (bucket_indices + 0.5)
return arr
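The docstring example for quantize_array can be checked standalone. The snippet below re-derives it with the same arithmetic (equal-width buckets, the top index clamped so max(arr) lands in the last bucket, values mapped to bucket centers), outside the function:

```python
import numpy as np

# Two equal-width buckets over [0, 1]; each value maps to its bucket's center.
arr = np.array([0.0, 0.3, 0.6, 1.0])
num_buckets = 2
width = (arr.max() - arr.min()) / num_buckets   # bucket width: 0.5
idx = np.floor((arr - arr.min()) / width)       # raw indices: [0, 0, 1, 2]
idx = np.minimum(idx, num_buckets - 1)          # clamp max(arr) into last bucket
centers = arr.min() + width * (idx + 0.5)
print(centers)  # [0.25 0.25 0.75 0.75]
```

This reproduces the `quantize_array([0, 0.3, 0.6, 1], 2) => [0.25, 0.25, 0.75, 0.75]` example from the docstring above.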
def quantize_weight_rounded(input_node):
"""Returns a replacement node for input_node containing bucketed floats."""
input_tensor = input_node.attr["value"].tensor
tensor_value = tensor_util.MakeNdarray(input_tensor)
shape = input_tensor.tensor_shape
# Currently, the parameter FLAGS.bitdepth is used to compute the
# number of buckets as 1 << FLAGS.bitdepth, meaning the number of
# buckets can only be a power of 2.
# This could be fixed by introducing a new parameter, num_buckets,
# which would allow for more flexibility in choosing the right model
# size/accuracy tradeoff. But I didn't want to add more parameters
# to this script than absolutely necessary.
num_buckets = 1 << FLAGS.bitdepth
tensor_value_rounded = quantize_array(tensor_value, num_buckets)
tensor_shape_list = tensor_util.TensorShapeProtoToList(shape)
return [
create_constant_node(
input_node.name,
tensor_value_rounded,
dtypes.float32,
shape=tensor_shape_list)
]
def quantize_weight_eightbit(input_node, quantization_mode):
"""Returns replacement nodes for input_node using the Dequantize op."""
base_name = input_node.name + "_"
quint8_const_name = base_name + "quint8_const"
min_name = base_name + "min"
max_name = base_name + "max"
float_tensor = tensor_util.MakeNdarray(input_node.attr["value"].tensor)
min_value = np.min(float_tensor.flatten())
max_value = np.max(float_tensor.flatten())
# Make sure that the range includes zero.
if min_value > 0.0:
min_value = 0.0
# min_value == max_value is a tricky case. It can occur for general
# tensors, and of course for scalars. The quantized ops cannot deal
# with this case, so we set max_value to something else.
# It's a tricky question what is the numerically best solution to
# deal with this degeneracy.
# TODO(petewarden): Better use a tolerance than a hard comparison?
if min_value == max_value:
if abs(min_value) < 0.000001:
max_value = min_value + 1.0
elif min_value > 0:
max_value = 2 * min_value
else:
max_value = min_value / 2.0
sess = session.Session()
with sess.as_default():
quantize_op = array_ops.quantize_v2(
float_tensor,
min_value,
max_value,
dtypes.quint8,
mode=quantization_mode)
quint8_tensor = quantize_op[0].eval()
shape = tensor_util.TensorShapeProtoToList(input_node.attr["value"]
.tensor.tensor_shape)
quint8_const_node = create_constant_node(
quint8_const_name, quint8_tensor, dtypes.quint8, shape=shape)
min_node = create_constant_node(min_name, min_value, dtypes.float32)
max_node = create_constant_node(max_name, max_value, dtypes.float32)
dequantize_node = create_node("Dequantize", input_node.name,
[quint8_const_name, min_name, max_name])
set_attr_dtype(dequantize_node, "T", dtypes.quint8)
set_attr_string(dequantize_node, "mode", quantization_mode)
return [quint8_const_node, min_node, max_node, dequantize_node]
EightbitizeRecursionState = collections.namedtuple(
"EightbitizeRecursionState",
["already_visited", "output_node_stack", "merged_with_fake_quant"])
class GraphRewriter(object):
"""Takes a float graph, and rewrites it in quantized form."""
def __init__(self,
input_graph,
mode,
quantized_input_range,
fallback_quantization_range=None):
"""Sets up the class to rewrite a float graph.
Args:
input_graph: A float graph to transform.
mode: A string controlling how quantization is performed -
round, quantize, eightbit, or weights.
quantized_input_range: if set, assume the input is
quantized and represents the range
[quantized_input_range[0], quantized_input_range[1]]
fallback_quantization_range: if set, then for nodes where the quantization
range can't be inferred from the graph, use the range
[fallback_quantization_range[0], fallback_quantization_range[1]) instead
of using a RequantizationRange node in the graph.
Raises:
ValueError: Two nodes with the same name were found in the graph.
"""
self.input_graph = input_graph
self.nodes_map = self.create_nodes_map(input_graph)
self.output_graph = None
self.mode = mode
self.final_node_renames = {}
if quantized_input_range:
self.input_range = (quantized_input_range[0], quantized_input_range[1])
if self.input_range[0] >= self.input_range[1]:
raise ValueError("Invalid quantized_input_range: [%s,%s]" %
self.input_range)
if self.mode != "eightbit":
raise ValueError(
"quantized_input_range can only be specified in eightbit mode")
else:
self.input_range = None
if fallback_quantization_range:
self.fallback_quantization_range = [
fallback_quantization_range[0], fallback_quantization_range[1]
]
if (self.fallback_quantization_range[0] >=
self.fallback_quantization_range[1]):
raise ValueError("Invalid fallback_quantization_range: [%s,%s]" %
self.fallback_quantization_range)
if self.mode != "eightbit":
raise ValueError("fallback_quantization_range can only be "
"specified in eightbit mode")
else:
self.fallback_quantization_range = None
# Data that is valid only during the recursive call to rewrite the graph.
self.state = None
def create_nodes_map(self, graph):
"""Builds a mapping of node names to their defs from the graph."""
nodes_map = {}
for node in graph.node:
if node.name not in nodes_map.keys():
nodes_map[node.name] = node
else:
raise ValueError("Duplicate node names detected.")
return nodes_map
def rewrite(self, output_node_names):
"""Triggers rewriting of the float graph.
Args:
output_node_names: A list of names of the nodes that produce the final
results.
Returns:
A quantized version of the float graph.
"""
self.output_graph = graph_pb2.GraphDef()
output_nodes = [
self.nodes_map[output_node_name]
for output_node_name in output_node_names
]
if self.mode == "round":
self.already_visited = {}
for output_node in output_nodes:
self.round_nodes_recursively(output_node)
elif self.mode == "quantize":
self.already_visited = {}
self.already_quantized = {}
for output_node in output_nodes:
self.quantize_nodes_recursively(output_node)
elif self.mode == "eightbit":
self.set_input_graph(graph_util.remove_training_nodes(self.input_graph))
output_nodes = [
self.nodes_map[output_node_name]
for output_node_name in output_node_names
]
self.state = EightbitizeRecursionState(
already_visited={}, output_node_stack=[], merged_with_fake_quant={})
for output_node in output_nodes:
self.eightbitize_nodes_recursively(output_node)
self.state = None
if self.input_range:
self.add_output_graph_node(
create_constant_node("quantized_input_min_value", self.input_range[
0], dtypes.float32, []))
self.add_output_graph_node(
create_constant_node("quantized_input_max_value", self.input_range[
1], dtypes.float32, []))
if self.fallback_quantization_range:
self.add_output_graph_node(
create_constant_node("fallback_quantization_min_value",
self.fallback_quantization_range[0],
dtypes.float32, []))
self.add_output_graph_node(
create_constant_node("fallback_quantization_max_value",
self.fallback_quantization_range[1],
dtypes.float32, []))
if FLAGS.strip_redundant_quantization:
self.output_graph = self.remove_redundant_quantization(
self.output_graph)
self.remove_dead_nodes(output_node_names)
self.apply_final_node_renames()
elif self.mode == "weights":
self.output_graph = self.quantize_weights(self.input_graph,
b"MIN_COMBINED")
self.remove_dead_nodes(output_node_names)
elif self.mode == "weights_rounded":
self.output_graph = self.quantize_weights(self.input_graph, self.mode)
self.remove_dead_nodes(output_node_names)
else:
print("Bad mode - " + self.mode + ".")
return self.output_graph
def round_nodes_recursively(self, current_node):
"""The entry point for simple rounding quantization."""
    if current_node.name in self.already_visited:
return
self.already_visited[current_node.name] = True
for input_node_name in current_node.input:
input_node_name = node_name_from_input(input_node_name)
input_node = self.nodes_map[input_node_name]
self.round_nodes_recursively(input_node)
nodes_to_quantize = ["Conv2D", "BiasAdd", "MatMul"]
    if current_node.op in nodes_to_quantize:
new_node = node_def_pb2.NodeDef()
new_node.CopyFrom(current_node)
new_node.name = current_node.name + "_original"
self.add_output_graph_node(new_node)
levels = 1 << FLAGS.bitdepth
constant_name = current_node.name + "_round_depth"
constant_tensor = constant_op.constant(
levels, dtype=dtypes.int32, name=constant_name)
constant_node = constant_tensor.op.node_def
self.add_output_graph_node(constant_node)
quantize_node = node_def_pb2.NodeDef()
quantize_node.op = "RoundToSteps"
quantize_node.name = current_node.name
quantize_node.input.extend([current_node.name + "_original"])
quantize_node.input.extend([constant_node.name])
self.add_output_graph_node(quantize_node)
else:
new_node = node_def_pb2.NodeDef()
new_node.CopyFrom(current_node)
self.add_output_graph_node(new_node)
def quantize_nodes_recursively(self, current_node):
"""The entry point for quantizing nodes to eight bit and back."""
    if current_node.name in self.already_visited:
return
self.already_visited[current_node.name] = True
for input_node_name in current_node.input:
input_node_name = node_name_from_input(input_node_name)
input_node = self.nodes_map[input_node_name]
self.quantize_nodes_recursively(input_node)
nodes_to_quantize = ["Conv2D", "BiasAdd", "MatMul"]
    if current_node.op in nodes_to_quantize:
for input_name in current_node.input:
input_name = node_name_from_input(input_name)
input_node = self.nodes_map[input_name]
self.quantize_node(input_node)
self.quantize_node(current_node)
else:
new_node = node_def_pb2.NodeDef()
new_node.CopyFrom(current_node)
self.add_output_graph_node(new_node)
def quantize_node(self, input_node):
"""Handles quantizing a single node."""
input_name = input_node.name
if input_name in self.already_quantized:
return
self.already_quantized[input_name] = True
original_input_name = input_name + "_original"
reshape_name = input_name + "_reshape"
reshape_dims_name = input_name + "_reshape_dims"
max_name = input_name + "_max"
min_name = input_name + "_min"
dims_name = input_name + "_dims"
quantize_name = input_name + "_quantize"
dequantize_name = input_name
original_input_node = node_def_pb2.NodeDef()
original_input_node.CopyFrom(input_node)
original_input_node.name = original_input_name
self.add_output_graph_node(original_input_node)
reshape_dims_node = create_constant_node(reshape_dims_name, -1,
dtypes.int32, [1])
self.add_output_graph_node(reshape_dims_node)
reshape_node = create_node("Reshape", reshape_name,
[original_input_name, reshape_dims_name])
set_attr_dtype(reshape_node, "T", dtypes.float32)
self.add_output_graph_node(reshape_node)
dims_node = create_constant_node(dims_name, 0, dtypes.int32, [1])
self.add_output_graph_node(dims_node)
max_node = create_node("Max", max_name, [reshape_name, dims_name])
set_attr_dtype(max_node, "T", dtypes.float32)
set_attr_bool(max_node, "keep_dims", False)
self.add_output_graph_node(max_node)
min_node = create_node("Min", min_name, [reshape_name, dims_name])
set_attr_dtype(min_node, "T", dtypes.float32)
set_attr_bool(min_node, "keep_dims", False)
self.add_output_graph_node(min_node)
quantize_node = create_node("Quantize", quantize_name,
[original_input_name, min_name, max_name])
set_attr_dtype(quantize_node, "T", dtypes.quint8)
set_attr_string(quantize_node, "mode", b"MIN_FIRST")
self.add_output_graph_node(quantize_node)
dequantize_node = create_node("Dequantize", dequantize_name,
[quantize_name, min_name, max_name])
set_attr_dtype(dequantize_node, "T", dtypes.quint8)
set_attr_string(dequantize_node, "mode", b"MIN_FIRST")
self.add_output_graph_node(dequantize_node)
def should_merge_with_fake_quant_node(self):
"""Should the current node merge with self.state.output_node_stack[-1]?"""
if not self.state.output_node_stack:
return False
top = self.state.output_node_stack[-1]
return top[1] == 0 and top[0].op in ["FakeQuantWithMinMaxVars"]
def should_quantize_const(self, node):
if not self.state.output_node_stack:
return False
top = self.state.output_node_stack[-1]
if not top[2]:
return False
dtype = dtypes.as_dtype(node.attr["dtype"].type)
    assert dtype == dtypes.float32, (
        "Failed to quantize constant %s of type %s" % (node.name, dtype))
return True
def eightbitize_nodes_recursively(self, current_node):
"""The entry point for transforming a graph into full eight bit."""
if current_node.name in self.state.already_visited:
if (self.should_merge_with_fake_quant_node() or
current_node.name in self.state.merged_with_fake_quant):
        raise ValueError("Unsupported graph structure: output of node %s "
                         "is processed by a FakeQuant* node and should have "
                         "no other outputs." % current_node.name)
return
self.state.already_visited[current_node.name] = True
for i, input_node_name in enumerate(current_node.input):
quantize_input = False
if current_node.op in ("MatMul", "Conv2D", "BiasAdd", "MaxPool",
"AvgPool", "Relu", "Relu6",
"BatchNormWithGlobalNormalization"):
quantize_input = True
elif current_node.op == "Concat" and i > 0:
quantize_input = (
dtypes.as_dtype(current_node.attr["T"].type) == dtypes.float32)
elif current_node.op == "Reshape" and i == 0:
quantize_input = (
dtypes.as_dtype(current_node.attr["T"].type) == dtypes.float32)
self.state.output_node_stack.append((current_node, i, quantize_input))
input_node_name = node_name_from_input(input_node_name)
input_node = self.nodes_map[input_node_name]
self.eightbitize_nodes_recursively(input_node)
self.state.output_node_stack.pop()
if current_node.op == "MatMul":
self.eightbitize_mat_mul_node(current_node)
elif current_node.op == "Conv2D":
self.eightbitize_conv_node(current_node)
elif current_node.op == "BiasAdd":
self.eightbitize_bias_add_node(current_node)
elif current_node.op == "MaxPool" or current_node.op == "AvgPool":
self.eightbitize_single_input_tensor_node(current_node,
self.add_pool_function)
elif current_node.op == "Relu" or current_node.op == "Relu6":
self.eightbitize_single_input_tensor_node(current_node,
self.add_relu_function)
elif (current_node.op == "Concat" and
dtypes.as_dtype(current_node.attr["T"].type) == dtypes.float32):
self.eightbitize_concat_node(current_node)
elif current_node.op == "BatchNormWithGlobalNormalization":
self.eightbitize_batch_norm_node(current_node)
elif (current_node.op == "Reshape" and
dtypes.as_dtype(current_node.attr["T"].type) == dtypes.float32):
self.eightbitize_reshape_node(current_node)
elif (self.input_range and
current_node.op in ("Placeholder", "PlaceholderV2")):
self.eightbitize_placeholder_node(current_node)
elif current_node.op == "FakeQuantWithMinMaxVars":
# It will have been merged into the underlying node.
pass
elif current_node.op == "Const":
if self.should_quantize_const(current_node):
for n in quantize_weight_eightbit(current_node, b"MIN_FIRST"):
self.add_output_graph_node(n)
else:
new_node = node_def_pb2.NodeDef()
new_node.CopyFrom(current_node)
self.add_output_graph_node(new_node)
###################################################################
# Note: if more cases are added here, you may need to update the op
# name lists in the loop over children at the start of the function.
###################################################################
else:
new_node = node_def_pb2.NodeDef()
new_node.CopyFrom(current_node)
self.add_output_graph_node(new_node)
if (self.should_merge_with_fake_quant_node() and
current_node.name not in self.state.merged_with_fake_quant):
      raise ValueError(
          "FakeQuant* node %s failed to merge with node %s of type %s" %
          (self.state.output_node_stack[-1][0].name, current_node.name,
           current_node.op))
def add_eightbit_prologue_nodes(self, original_node):
"""Adds input conversion nodes to handle quantizing the underlying node."""
namespace_prefix = original_node.name + "_eightbit"
reshape_dims_name, reduction_dims_name = self.add_common_quantization_nodes(
namespace_prefix)
input_names = []
min_max_names = []
for original_input_name in original_node.input:
quantize_input_name, min_input_name, max_input_name = (
self.eightbitize_input_to_node(namespace_prefix, original_input_name,
reshape_dims_name,
reduction_dims_name))
input_names.append(quantize_input_name)
min_max_names.append(min_input_name)
min_max_names.append(max_input_name)
all_input_names = []
all_input_names.extend(input_names)
all_input_names.extend(min_max_names)
return all_input_names
def add_common_quantization_nodes(self, namespace_prefix):
"""Builds constant nodes needed for quantization of inputs."""
reshape_dims_name = namespace_prefix + "_reshape_dims"
reduction_dims_name = namespace_prefix + "_reduction_dims"
reshape_dims_node = create_constant_node(reshape_dims_name, -1,
dtypes.int32, [1])
self.add_output_graph_node(reshape_dims_node)
reduction_dims_node = create_constant_node(reduction_dims_name, 0,
dtypes.int32, [1])
self.add_output_graph_node(reduction_dims_node)
return reshape_dims_name, reduction_dims_name
def eightbitize_input_to_node(self, namespace_prefix, original_input_name,
reshape_dims_name, reduction_dims_name):
"""Takes one float input to an op, and converts it to quantized form."""
unique_input_name = unique_node_name_from_input(original_input_name)
reshape_input_name = namespace_prefix + "_reshape_" + unique_input_name
min_input_name = namespace_prefix + "_min_" + unique_input_name
max_input_name = namespace_prefix + "_max_" + unique_input_name
quantize_input_name = namespace_prefix + "_quantize_" + unique_input_name
reshape_input_node = create_node("Reshape", reshape_input_name,
[original_input_name, reshape_dims_name])
set_attr_dtype(reshape_input_node, "T", dtypes.float32)
self.add_output_graph_node(reshape_input_node)
min_input_node = create_node("Min", min_input_name,
[reshape_input_name, reduction_dims_name])
set_attr_dtype(min_input_node, "T", dtypes.float32)
set_attr_bool(min_input_node, "keep_dims", False)
self.add_output_graph_node(min_input_node)
max_input_node = create_node("Max", max_input_name,
[reshape_input_name, reduction_dims_name])
set_attr_dtype(max_input_node, "T", dtypes.float32)
set_attr_bool(max_input_node, "keep_dims", False)
self.add_output_graph_node(max_input_node)
quantize_input_node = create_node(
"QuantizeV2", quantize_input_name,
[original_input_name, min_input_name, max_input_name])
set_attr_dtype(quantize_input_node, "T", dtypes.quint8)
set_attr_string(quantize_input_node, "mode", b"MIN_FIRST")
self.add_output_graph_node(quantize_input_node)
min_output_name = quantize_input_name + ":1"
max_output_name = quantize_input_name + ":2"
return quantize_input_name, min_output_name, max_output_name
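As a numeric intuition for what the Reshape/Min/Max/QuantizeV2 sub-graph built above computes, here is a minimal pure-Python sketch of a MIN_FIRST-style float-to-quint8 mapping. This is illustrative only; the real QuantizeV2 kernel's rounding and range-nudging details differ.

```python
def quantize_min_first(values, min_v, max_v):
  # Map each float linearly into [0, 255], anchoring min_v at 0
  # (the "MIN_FIRST" idea). Not the actual kernel arithmetic.
  scale = 255.0 / (max_v - min_v)
  return [int(round((v - min_v) * scale)) for v in values]
```

The min and max fed to the op come from the Min/Max nodes over the reshaped input, so the mapping always covers the tensor's observed range.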
def add_quantize_down_nodes(self, original_node, quantized_output_name):
quantized_outputs = [
quantized_output_name, quantized_output_name + ":1",
quantized_output_name + ":2"
]
min_max_inputs = None
if self.should_merge_with_fake_quant_node():
# Use the inputs to the FakeQuantWithMinMaxVars node as the inputs to
# Requantize.
fake_quant_node = self.state.output_node_stack[-1][0]
min_max_inputs = [fake_quant_node.input[1], fake_quant_node.input[2]]
assert original_node.name not in self.state.merged_with_fake_quant
self.state.merged_with_fake_quant[original_node.name] = True
elif self.fallback_quantization_range:
min_max_inputs = [
"fallback_quantization_min_value:0",
"fallback_quantization_max_value:0"
]
else:
# Add a RequantizationRange node for finding the min and max values.
requant_range_node = create_node(
"RequantizationRange", original_node.name + "_eightbit_requant_range",
quantized_outputs)
set_attr_dtype(requant_range_node, "Tinput", dtypes.qint32)
self.add_output_graph_node(requant_range_node)
min_max_inputs = [
requant_range_node.name + ":0", requant_range_node.name + ":1"
]
requantize_node = create_node("Requantize",
original_node.name + "_eightbit_requantize",
quantized_outputs + min_max_inputs)
set_attr_dtype(requantize_node, "Tinput", dtypes.qint32)
set_attr_dtype(requantize_node, "out_type", dtypes.quint8)
self.add_output_graph_node(requantize_node)
return requantize_node.name
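The Requantize step above rescales 32-bit accumulator results back down to eight bits using an output min/max. A rough numeric sketch of that two-step conversion follows; it assumes a simple linear qint32 mapping over the full int32 range and glosses over the real op's rounding details.

```python
def requantize(int32_vals, in_min, in_max, out_min, out_max):
  # Step 1: interpret qint32 values as floats in [in_min, in_max]
  # (simplified convention: the int32 range spans the float range).
  in_scale = (in_max - in_min) / float(2**32)
  floats = [v * in_scale for v in int32_vals]
  # Step 2: quantize those floats to quint8 over [out_min, out_max].
  out_scale = 255.0 / (out_max - out_min)
  return [max(0, min(255, int(round((f - out_min) * out_scale))))
          for f in floats]
```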
def add_dequantize_result_node(self,
quantized_output_name,
original_node_name,
min_tensor_index=1):
min_max_inputs = [
"%s:%s" % (quantized_output_name, min_tensor_index),
"%s:%s" % (quantized_output_name, (min_tensor_index + 1))
]
dequantize_name = original_node_name
if self.should_merge_with_fake_quant_node():
fake_quant_node = self.state.output_node_stack[-1][0]
if original_node_name not in self.state.merged_with_fake_quant:
min_max_inputs = [fake_quant_node.input[1], fake_quant_node.input[2]]
self.state.merged_with_fake_quant[original_node_name] = True
dequantize_name = fake_quant_node.name
dequantize_node = create_node(
"Dequantize", dequantize_name,
[quantized_output_name, min_max_inputs[0], min_max_inputs[1]])
set_attr_dtype(dequantize_node, "T", dtypes.quint8)
set_attr_string(dequantize_node, "mode", b"MIN_FIRST")
self.add_output_graph_node(dequantize_node)
def eightbitize_mat_mul_node(self, original_node):
"""Replaces a MatMul node with the eight bit equivalent sub-graph."""
quantized_mat_mul_name = original_node.name + "_eightbit_quantized_mat_mul"
all_input_names = self.add_eightbit_prologue_nodes(original_node)
quantized_mat_mul_node = create_node("QuantizedMatMul",
quantized_mat_mul_name,
all_input_names)
set_attr_dtype(quantized_mat_mul_node, "T1", dtypes.quint8)
set_attr_dtype(quantized_mat_mul_node, "T2", dtypes.quint8)
set_attr_dtype(quantized_mat_mul_node, "Toutput", dtypes.qint32)
copy_attr(quantized_mat_mul_node, "transpose_a",
original_node.attr["transpose_a"])
copy_attr(quantized_mat_mul_node, "transpose_b",
original_node.attr["transpose_b"])
self.add_output_graph_node(quantized_mat_mul_node)
quantize_down_name = self.add_quantize_down_nodes(original_node,
quantized_mat_mul_name)
self.add_dequantize_result_node(quantize_down_name, original_node.name)
def eightbitize_conv_node(self, original_node):
"""Replaces a Conv2D node with the eight bit equivalent sub-graph."""
all_input_names = self.add_eightbit_prologue_nodes(original_node)
quantized_conv_name = original_node.name + "_eightbit_quantized_conv"
quantized_conv_node = create_node("QuantizedConv2D", quantized_conv_name,
all_input_names)
copy_attr(quantized_conv_node, "strides", original_node.attr["strides"])
copy_attr(quantized_conv_node, "padding", original_node.attr["padding"])
set_attr_dtype(quantized_conv_node, "Tinput", dtypes.quint8)
set_attr_dtype(quantized_conv_node, "Tfilter", dtypes.quint8)
set_attr_dtype(quantized_conv_node, "out_type", dtypes.qint32)
self.add_output_graph_node(quantized_conv_node)
quantize_down_name = self.add_quantize_down_nodes(original_node,
quantized_conv_name)
self.add_dequantize_result_node(quantize_down_name, original_node.name)
def eightbitize_bias_add_node(self, original_node):
"""Replaces a BiasAdd node with the eight bit equivalent sub-graph."""
quantized_bias_add_name = (
original_node.name + "_eightbit_quantized_bias_add")
all_input_names = self.add_eightbit_prologue_nodes(original_node)
quantized_bias_add_node = create_node("QuantizedBiasAdd",
quantized_bias_add_name,
all_input_names)
set_attr_dtype(quantized_bias_add_node, "T1", dtypes.quint8)
set_attr_dtype(quantized_bias_add_node, "T2", dtypes.quint8)
set_attr_dtype(quantized_bias_add_node, "out_type", dtypes.qint32)
self.add_output_graph_node(quantized_bias_add_node)
quantize_down_name = self.add_quantize_down_nodes(original_node,
quantized_bias_add_name)
self.add_dequantize_result_node(quantize_down_name, original_node.name)
def eightbitize_single_input_tensor_node(self, original_node,
add_op_function):
"""Replaces a single-tensor node with the eight bit equivalent sub-graph.
Converts a node like this:
Shape(f) Input(f)
| |
+--------v v
Operation
|
v
(f)
Into a quantized equivalent:
Input(f) ReshapeDims
+------v v-------------+
| Reshape
| |
| | ReductionDims
| +-----+ |
| | +---c---------+
| v v v v-------+
| Min Max
| +----+ |
v v v--------+
Quantize
|
v
QuantizedOperation
| | |
v v v
Dequantize
|
v
(f)
Args:
original_node: Float node to be converted.
add_op_function: Function to create the actual node.
Returns:
Subgraph representing the quantized version of the original node.
"""
quantized_op_name = original_node.name + "_eightbit_quantized"
quantized_op_type = "Quantized" + original_node.op
all_input_names = self.add_eightbit_prologue_nodes(original_node)
quantized_op_node = create_node(quantized_op_type, quantized_op_name,
all_input_names)
add_op_function(original_node, quantized_op_node)
self.add_output_graph_node(quantized_op_node)
self.add_dequantize_result_node(quantized_op_name, original_node.name)
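Numerically, the quantize/op/dequantize wrapper this method builds should act as an approximate identity on the float values (to within one quantization step), since the quantized op itself preserves the quantized representation. A pure-Python sketch, using simplified MIN_FIRST-style arithmetic rather than the real kernels:

```python
def fake_quant_roundtrip(values):
  # Quantize to quint8 over [min, max], then dequantize back to float.
  # The round trip error is bounded by one quantization step.
  lo, hi = min(values), max(values)
  if hi == lo:
    return list(values)
  scale = 255.0 / (hi - lo)
  quantized = [int(round((v - lo) * scale)) for v in values]  # Quantize
  return [lo + q / scale for q in quantized]                  # Dequantize
```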
def add_pool_function(self, original_node, quantized_op_node):
set_attr_dtype(quantized_op_node, "T", dtypes.quint8)
copy_attr(quantized_op_node, "ksize", original_node.attr["ksize"])
copy_attr(quantized_op_node, "strides", original_node.attr["strides"])
copy_attr(quantized_op_node, "padding", original_node.attr["padding"])
def add_relu_function(self, unused_arg_node, quantized_op_node):
set_attr_dtype(quantized_op_node, "Tinput", dtypes.quint8)
def eightbitize_concat_node(self, original_node):
"""Replaces a Concat node with the eight bit equivalent sub-graph.
Converts a node like this:
Shape(f) Input0(f) Input1(f)
| | |
+--------v v v----------+
Concat
|
v
(f)
Into a quantized equivalent:
Shape(f) Input0(f) ReshapeDims Input1(f)
| +------v v--------------+------------------v v------+
| | Reshape Reshape |
| | | | |
| | | ReductionDims | |
| | +------+ | +--------+ |
| | | +---c---------+-----------c-----+ | |
| | +v v v v-------+---------v v v v+ |
| | Min Max Min Max |
| | +----+ | | +-----+ |
| v v v--------+ +----------v v v
| Quantize Quantize
| +------------------+ +----------------------+
+-------------------------------+ | |
v v v
QuantizedConcat
| | |
v v v
Dequantize
|
v
(f)
Args:
original_node: Float node to be converted.
Returns:
Subgraph representing the quantized version of the original node.
"""
namespace_prefix = original_node.name + "_eightbit"
quantized_concat_name = namespace_prefix + "_quantized_concat"
reshape_dims_name, reduction_dims_name = self.add_common_quantization_nodes(
namespace_prefix)
shape_input_name = original_node.input[0]
original_inputs = original_node.input[1:]
input_names = []
min_names = []
max_names = []
for original_input_name in original_inputs:
quantize_input_name, min_input_name, max_input_name = (
self.eightbitize_input_to_node(namespace_prefix, original_input_name,
reshape_dims_name,
reduction_dims_name))
input_names.append(quantize_input_name)
min_names.append(min_input_name)
max_names.append(max_input_name)
all_input_names = [shape_input_name]
all_input_names.extend(input_names)
all_input_names.extend(min_names)
all_input_names.extend(max_names)
quantized_concat_node = create_node("QuantizedConcat",
quantized_concat_name, all_input_names)
set_attr_int(quantized_concat_node, "N", len(original_inputs))
set_attr_dtype(quantized_concat_node, "T", dtypes.quint8)
self.add_output_graph_node(quantized_concat_node)
self.add_dequantize_result_node(quantized_concat_name, original_node.name)
def eightbitize_placeholder_node(self, current_node):
"""Replaces a placeholder node with a quint8 placeholder node+dequantize."""
name = current_node.name
# Convert the placeholder into a quantized type.
output_node = node_def_pb2.NodeDef()
output_node.CopyFrom(current_node)
set_attr_dtype(output_node, "dtype", dtypes.quint8)
output_node.name += "_original_input"
self.add_output_graph_node(output_node)
# Add a dequantize to convert back to float.
dequantize_node = create_node("Dequantize", name, [
output_node.name, "quantized_input_min_value",
"quantized_input_max_value"
])
set_attr_dtype(dequantize_node, "T", dtypes.quint8)
set_attr_string(dequantize_node, "mode", b"MIN_FIRST")
self.add_output_graph_node(dequantize_node)
# For the descent over the graph to work, the dequantize node must be named
# current_node.name. However, for the feeding of the graph to work, the
# placeholder must have the name current_node.name; so record a final set
# of renames to apply after all processing has been done.
self.final_node_renames[output_node.name] = name
self.final_node_renames[dequantize_node.name] = name + "_dequantize"
def eightbitize_reshape_node(self, original_node):
"""Replaces a Reshape node with the eight bit equivalent sub-graph.
Args:
original_node: Float node to be converted.
Returns:
Subgraph representing the quantized version of the original node.
"""
namespace_prefix = original_node.name + "_eightbit"
quantized_reshape_name = namespace_prefix + "_quantized_reshape"
reshape_dims_name, reduction_dims_name = self.add_common_quantization_nodes(
namespace_prefix)
shape_input_name = original_node.input[1]
quantize_input_name, min_input_name, max_input_name = (
self.eightbitize_input_to_node(namespace_prefix, original_node.input[0],
reshape_dims_name, reduction_dims_name))
quantized_reshape_node = create_node(
"QuantizedReshape", quantized_reshape_name,
[quantize_input_name, shape_input_name, min_input_name, max_input_name])
set_attr_dtype(quantized_reshape_node, "T", dtypes.quint8)
self.add_output_graph_node(quantized_reshape_node)
self.add_dequantize_result_node(quantized_reshape_name, original_node.name)
def eightbitize_batch_norm_node(self, original_node):
    """Replaces a BatchNorm node with the eight bit equivalent sub-graph."""
namespace_prefix = original_node.name + "_eightbit"
original_input_name = original_node.input[0]
original_mean_name = original_node.input[1]
original_variance_name = original_node.input[2]
original_beta_name = original_node.input[3]
original_gamma_name = original_node.input[4]
quantized_batch_norm_name = namespace_prefix + "_quantized_batch_norm"
reshape_dims_name, reduction_dims_name = self.add_common_quantization_nodes(
namespace_prefix)
quantize_input_name, min_input_name, max_input_name = (
self.eightbitize_input_to_node(namespace_prefix, original_input_name,
reshape_dims_name, reduction_dims_name))
quantize_mean_name, min_mean_name, max_mean_name = (
self.eightbitize_input_to_node(namespace_prefix, original_mean_name,
reshape_dims_name, reduction_dims_name))
quantize_variance_name, min_variance_name, max_variance_name = (
self.eightbitize_input_to_node(namespace_prefix, original_variance_name,
reshape_dims_name, reduction_dims_name))
quantize_beta_name, min_beta_name, max_beta_name = (
self.eightbitize_input_to_node(namespace_prefix, original_beta_name,
reshape_dims_name, reduction_dims_name))
quantize_gamma_name, min_gamma_name, max_gamma_name = (
self.eightbitize_input_to_node(namespace_prefix, original_gamma_name,
reshape_dims_name, reduction_dims_name))
quantized_batch_norm_node = create_node(
"QuantizedBatchNormWithGlobalNormalization", quantized_batch_norm_name,
[
quantize_input_name, min_input_name, max_input_name,
quantize_mean_name, min_mean_name, max_mean_name,
quantize_variance_name, min_variance_name, max_variance_name,
quantize_beta_name, min_beta_name, max_beta_name,
quantize_gamma_name, min_gamma_name, max_gamma_name
])
set_attr_dtype(quantized_batch_norm_node, "Tinput", dtypes.quint8)
set_attr_dtype(quantized_batch_norm_node, "out_type", dtypes.qint32)
copy_attr(quantized_batch_norm_node, "scale_after_normalization",
original_node.attr["scale_after_normalization"])
copy_attr(quantized_batch_norm_node, "variance_epsilon",
original_node.attr["variance_epsilon"])
self.add_output_graph_node(quantized_batch_norm_node)
quantize_down_name = self.add_quantize_down_nodes(original_node,
quantized_batch_norm_name)
self.add_dequantize_result_node(quantize_down_name, original_node.name)
def add_output_graph_node(self, output_node):
"""Inserts one node into the new graph."""
self.output_graph.node.extend([output_node])
def remove_redundant_quantization(self, old_graph):
"""Removes unneeded pairs of quantize/dequantize ops from the graph.
    This is a bit of a tricky function, because it's attempting to spot the
    pattern of dequantizing from eight-bit up to float, and then immediately
    quantizing back down to eight bits again, which is introduced by earlier
    passes that do 'keyhole' conversions of individual nodes but have to
    convert back to float to match the previous output interface, since they
    don't know that the next op can handle quantized tensors.
It works by:
- Looking for Quantize nodes.
- Checking to see if their first input is a Dequantize node.
- Seeing if their min/max inputs come from Min/Max nodes.
- Making sure those Min/Max nodes are being fed from the same Dequantize.
- Or that the Min is indirectly being fed from the same Dequantize as Max.
- Making sure the Dequantize is going through a Reshape (which we add
during the previous pass when we create the quantize sub-graph).
- Looking for the dims Const op for the Min/Max dims.
If all of these conditions are met, then it's a sub-graph pattern that
we know how to optimize out (and is likely the common one we've introduced).
We then rewire the graph to skip it entirely, and then rely on the dead node
removal pass to get rid of any nodes that are no longer needed.
Args:
old_graph: The model we'll be stripping redundant nodes from.
Returns:
A graph with the unnecessary nodes removed.
Raises:
ValueError: Two nodes with the same name were found in the graph.
"""
old_nodes_map = self.create_nodes_map(old_graph)
self.output_graph = graph_pb2.GraphDef()
inputs_to_rename = {}
# We go through all the nodes, looking for any that match the patterns we
# know how to optimize away.
for node in old_graph.node:
# We always start with a Quantize node, and examine its inputs to see if
# they are in a form that can be removed.
if node.op not in ["Quantize", "QuantizeV2"]:
continue
dequantize_node_name = node_name_from_input(node.input[0])
if dequantize_node_name not in old_nodes_map:
raise ValueError("Input node name '" + dequantize_node_name +
"' not found in node '" + node.name + "'")
dequantize_node = old_nodes_map[dequantize_node_name]
# Do we have a Dequantize feeding in, with the same type as the Quantize?
if dequantize_node.op != "Dequantize":
continue
if node.attr["T"] != dequantize_node.attr["T"]:
continue
# Now look at the other inputs, and ensure they're Min/Max nodes.
min_node_name = node_name_from_input(node.input[1])
max_node_name = node_name_from_input(node.input[2])
min_node = old_nodes_map[min_node_name]
max_node = old_nodes_map[max_node_name]
is_min_right_type = (min_node.op in ["Min", "Dequantize"])
is_max_right_type = (max_node.op in ["Max", "Dequantize"])
if not is_min_right_type or not is_max_right_type:
print("Didn't find expected types on inputs : %s, %s." % (min_node.op,
max_node.op))
continue
min_node_input_name = node_name_from_input(min_node.input[0])
max_node_input_name = node_name_from_input(max_node.input[0])
# There are two different patterns for Min nodes we can recognize, one
# where the input comes directly from the same one as the Max, and
# another where we run it through another Min first, so check for both.
is_same_input = False
if min_node_input_name == max_node_input_name:
is_same_input = True
else:
first_min_node_input = old_nodes_map[min_node_input_name]
if first_min_node_input.op == "Concat":
second_min_node_name = node_name_from_input(
first_min_node_input.input[1])
second_min_node = old_nodes_map[second_min_node_name]
if second_min_node.op == "Min":
second_min_node_input_name = node_name_from_input(
second_min_node.input[0])
is_same_input = (second_min_node_input_name == max_node_input_name)
if not is_same_input:
print("Different min/max inputs: " + min_node_input_name)
continue
# We recognize this pattern, so mark the graph edges to be rewired to
# route around it entirely, since we know it's a no-op.
dequantize_source_name = node_name_from_input(dequantize_node.input[0])
node_tensor_name = ensure_tensor_name_has_port(node.name)
min_tensor_name = node.name + ":1"
max_tensor_name = node.name + ":2"
inputs_to_rename[node_tensor_name] = dequantize_source_name
inputs_to_rename[min_tensor_name] = dequantize_node.input[1]
inputs_to_rename[max_tensor_name] = dequantize_node.input[2]
# Finally we apply all the rewiring we've marked to the graph.
for node in old_graph.node:
for index, input_full_name in enumerate(node.input):
input_name = ensure_tensor_name_has_port(input_full_name)
if input_name in inputs_to_rename:
node.input[index] = inputs_to_rename[input_name]
self.add_output_graph_node(node)
return self.output_graph
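The final rewiring loop in this method can be illustrated with plain dicts standing in for NodeDef protos. `rewire_inputs` below is a hypothetical helper, not part of this tool; a bare input name is normalized to "name:0" before lookup, mirroring ensure_tensor_name_has_port.

```python
def rewire_inputs(nodes, inputs_to_rename):
  # nodes: list of {"name": ..., "input": [...]} dicts (stand-ins for
  # NodeDef protos). Inputs matching a key in inputs_to_rename are
  # rerouted; everything else is left untouched.
  for node in nodes:
    node["input"] = [
        inputs_to_rename.get(n if ":" in n else n + ":0", n)
        for n in node["input"]
    ]
  return nodes
```

This is how the recognized Dequantize/Quantize pairs get bypassed: their consumers are pointed straight at the original quantized tensors, and dead-node removal later cleans up the orphans.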
def apply_final_node_renames(self):
"""Applies node renames in self.final_node_renames to self.output_graph."""
old_graph = self.output_graph
self.output_graph = graph_pb2.GraphDef()
for node in old_graph.node:
node.name = self.final_node_renames.get(node.name, node.name)
for index, input_name in enumerate(node.input):
node_name = node_name_from_input(input_name)
input_full_name = ensure_tensor_name_has_port(input_name)
if node_name in self.final_node_renames:
node.input[index] = "%s%s" % (self.final_node_renames[node_name],
input_full_name[len(node_name):])
self.add_output_graph_node(node)
return self.output_graph
def remove_dead_nodes(self, output_names):
"""Removes nodes that are no longer needed for inference from the graph."""
old_output_graph = self.output_graph
self.output_graph = graph_util.extract_sub_graph(old_output_graph,
output_names)
def quantize_weights(self, input_graph, quantization_mode):
"""Quantize float Const ops.
    There are two modes of operation, and both replace float Const ops with
    quantized values.
1. If quantization_mode is "weights_rounded", this function replaces float
Const ops with quantized float Const ops - same as the original op, but
float values being mapped to the center of one of 1<<FLAGS.bitdepth buckets.
This does not change the raw model size, but compression algorithms such as
zip (as used for compressing apks) or bzip2 will achieve a very good
compression ratio.
2. For other quantization modes ("MIN_COMBINED" or "MIN_FIRST"), float
Const ops are quantized and replaced by a tuple of four ops to perform
the dequantization at runtime:
      * eight-bit Const (bucket indices, same shape as original float Const op)
* two float Const ops (min and max value of original float Const op)
* Dequantize op to convert the eight-bit consts to float tensors.
The quantization mode is important because we see accuracy problems when
quantizing weights for different situations depending on the algorithm
used. We haven't figured out exactly what the underlying cause is yet,
unfortunately.
Args:
input_graph: A GraphDef of the model containing float Const ops.
quantization_mode: How to quantize and dequantize the values.
Returns:
A GraphDef of the converted graph.
Raises:
ValueError: If quantization_mode is unsupported.
"""
output_graph = graph_pb2.GraphDef()
for input_node in input_graph.node:
should_quantize = False
if input_node.op == "Const":
dtype = dtypes.as_dtype(input_node.attr["dtype"].type)
if dtype == dtypes.float32:
should_quantize = True
if should_quantize:
if quantization_mode == "weights_rounded":
output_graph.node.extend(quantize_weight_rounded(input_node))
elif quantization_mode in (b"MIN_COMBINED", b"MIN_FIRST"):
output_graph.node.extend(
quantize_weight_eightbit(input_node, quantization_mode))
else:
raise ValueError("Unsupported quantization mode %s." %
quantization_mode)
else:
output_node = node_def_pb2.NodeDef()
output_node.CopyFrom(input_node)
output_graph.node.extend([output_node])
return output_graph
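For intuition, the "weights_rounded" bucketing described in the docstring can be sketched in pure Python. `quantize_weight_rounded_sketch` is an illustrative stand-in, not the actual quantize_weight_rounded implementation: it maps each float to the center of one of 1 << bitdepth equal buckets over [min, max], which leaves values as floats but makes them highly compressible.

```python
def quantize_weight_rounded_sketch(values, bitdepth=8):
  # Snap each value to the center of its bucket; at most
  # 1 << bitdepth distinct float values survive.
  levels = 1 << bitdepth
  lo, hi = min(values), max(values)
  if hi == lo:
    return list(values)
  step = (hi - lo) / levels
  return [lo + (min(int((v - lo) / step), levels - 1) + 0.5) * step
          for v in values]
```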
def set_input_graph(self, new_input_graph):
self.input_graph = new_input_graph
self.nodes_map = self.create_nodes_map(self.input_graph)
def main(unused_args):
if not gfile.Exists(FLAGS.input):
print("Input graph file '" + FLAGS.input + "' does not exist!")
return -1
known_modes = [
"round", "quantize", "eightbit", "weights", "test", "weights_rounded"
]
if FLAGS.mode not in known_modes:
print("mode is '" + FLAGS.mode + "', not in " + ", ".join(known_modes) +
".")
return -1
tf_graph = graph_pb2.GraphDef()
with gfile.Open(FLAGS.input, "rb") as f:
data = f.read()
tf_graph.ParseFromString(data)
graph = ops.Graph()
with graph.as_default():
importer.import_graph_def(tf_graph, input_map={}, name="")
quantized_input_range = None
if FLAGS.quantized_input:
quantized_input_range = [
FLAGS.quantized_input_min, FLAGS.quantized_input_max
]
fallback_quantization_range = None
if (FLAGS.quantized_fallback_min is not None or
FLAGS.quantized_fallback_max is not None):
assert FLAGS.quantized_fallback_min is not None
assert FLAGS.quantized_fallback_max is not None
fallback_quantization_range = [
FLAGS.quantized_fallback_min, FLAGS.quantized_fallback_max
]
rewriter = GraphRewriter(tf_graph, FLAGS.mode, quantized_input_range,
fallback_quantization_range)
output_graph = rewriter.rewrite(FLAGS.output_node_names.split(","))
with gfile.FastGFile(FLAGS.output, "wb") as f:
f.write(output_graph.SerializeToString())
return 0
if __name__ == "__main__":
app.run() | unknown | codeparrot/codeparrot-clean | ||
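The docstring in the quantization script above describes the MIN_COMBINED scheme: each float weight tensor is replaced by eight-bit bucket indices plus min/max Const ops, and a Dequantize op maps the indices back to floats at runtime. A rough standalone sketch of that mapping (the function name and arrays here are illustrative; TensorFlow's actual Dequantize kernel supports more modes and dtypes):

```python
import numpy as np

def dequantize_min_combined(quantized, min_value, max_value):
    """Map uint8 bucket indices back to floats spanning [min_value, max_value].

    A sketch of the MIN_COMBINED dequantization described in the docstring
    above, not TensorFlow's real kernel.
    """
    scale = (max_value - min_value) / 255.0
    return min_value + quantized.astype(np.float32) * scale

weights = np.array([0, 128, 255], dtype=np.uint8)
print(dequantize_min_combined(weights, -1.0, 1.0))  # ≈ [-1.0, 0.0039, 1.0]
```

Because every tensor carries its own min/max Consts, the eight bits are spent on that tensor's actual value range rather than a fixed global scale.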
# ----------------------------------------------------------------------
# Numenta Platform for Intelligent Computing (NuPIC)
# Copyright (C) 2014, Numenta, Inc. Unless you have an agreement
# with Numenta, Inc., for a separate license for this software code, the
# following terms and conditions apply:
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero Public License version 3 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
# See the GNU Affero Public License for more details.
#
# You should have received a copy of the GNU Affero Public License
# along with this program. If not, see http://www.gnu.org/licenses.
#
# http://numenta.org/licenses/
# ----------------------------------------------------------------------
import numpy as np
import tempfile
import unittest
from nupic.encoders.base import defaultDtype
from nupic.encoders.geospatial_coordinate import GeospatialCoordinateEncoder
try:
import capnp
except ImportError:
capnp = None
if capnp:
from nupic.encoders.geospatial_coordinate_capnp import (
GeospatialCoordinateEncoderProto
)
# Disable warnings about accessing protected members
# pylint: disable=W0212
class GeospatialCoordinateEncoderTest(unittest.TestCase):
"""Unit tests for GeospatialCoordinateEncoder class"""
def testCoordinateForPosition(self):
scale = 30 # meters
encoder = GeospatialCoordinateEncoder(scale, 60)
coordinate = encoder.coordinateForPosition(
-122.229194, 37.486782
)
self.assertEqual(coordinate.tolist(), [-453549, 150239])
def testCoordinateForPosition3D(self):
scale = 30 # meters
encoder = GeospatialCoordinateEncoder(scale, 60)
coordinate = encoder.coordinateForPosition(
-122.229194, 37.486782, 1500
)
self.assertEqual(coordinate.tolist(), [-90102, -142918, 128710])
def testCoordinateForPositionOrigin3D(self):
scale = 1 # meters
encoder = GeospatialCoordinateEncoder(scale, 60)
coordinate = encoder.coordinateForPosition(0, 0, 0)
# see the WGS 84 defining parameters (semi-major axis) on
# http://en.wikipedia.org/wiki/Geodetic_datum#Parameters_for_some_geodetic_systems
self.assertEqual(coordinate.tolist(), [6378137, 0, 0])
def testCoordinateForPositionOrigin(self):
scale = 30 # meters
encoder = GeospatialCoordinateEncoder(scale, 60)
coordinate = encoder.coordinateForPosition(0, 0)
self.assertEqual(coordinate.tolist(), [0, 0])
def testRadiusForSpeed(self):
scale = 30 # meters
timestep = 60 # seconds
speed = 50 # meters per second
encoder = GeospatialCoordinateEncoder(scale, timestep)
radius = encoder.radiusForSpeed(speed)
self.assertEqual(radius, 75)
def testRadiusForSpeed0(self):
scale = 30 # meters
timestep = 60 # seconds
speed = 0 # meters per second
n = 999
w = 27
encoder = GeospatialCoordinateEncoder(scale, timestep, n=n, w=w)
radius = encoder.radiusForSpeed(speed)
self.assertEqual(radius, 3)
def testRadiusForSpeedInt(self):
"""Test that radius will round to the nearest integer"""
scale = 30 # meters
timestep = 62 # seconds
speed = 25 # meters per second
encoder = GeospatialCoordinateEncoder(scale, timestep)
radius = encoder.radiusForSpeed(speed)
self.assertEqual(radius, 38)
def testEncodeIntoArray(self):
scale = 30 # meters
timestep = 60 # seconds
speed = 2.5 # meters per second
encoder = GeospatialCoordinateEncoder(scale, timestep,
n=999,
w=25)
encoding1 = encode(encoder, speed, -122.229194, 37.486782)
encoding2 = encode(encoder, speed, -122.229294, 37.486882)
encoding3 = encode(encoder, speed, -122.229294, 37.486982)
overlap1 = overlap(encoding1, encoding2)
overlap2 = overlap(encoding1, encoding3)
self.assertTrue(overlap1 > overlap2)
def testEncodeIntoArrayAltitude(self):
scale = 30 # meters
timestep = 60 # seconds
speed = 2.5 # meters per second
longitude, latitude = -122.229294, 37.486782
encoder = GeospatialCoordinateEncoder(scale, timestep,
n=999,
w=25)
encoding1 = encode(encoder, speed, longitude, latitude, 0)
encoding2 = encode(encoder, speed, longitude, latitude, 100)
encoding3 = encode(encoder, speed, longitude, latitude, 1000)
overlap1 = overlap(encoding1, encoding2)
overlap2 = overlap(encoding1, encoding3)
self.assertGreater(overlap1, overlap2)
def testEncodeIntoArray3D(self):
scale = 30 # meters
timestep = 60 # seconds
speed = 2.5 # meters per second
encoder = GeospatialCoordinateEncoder(scale, timestep,
n=999,
w=25)
encoding1 = encode(encoder, speed, -122.229194, 37.486782, 0)
encoding2 = encode(encoder, speed, -122.229294, 37.486882, 100)
encoding3 = encode(encoder, speed, -122.229294, 37.486982, 1000)
overlap1 = overlap(encoding1, encoding2)
overlap2 = overlap(encoding1, encoding3)
self.assertGreater(overlap1, overlap2)
@unittest.skipUnless(
capnp, "pycapnp is not installed, skipping serialization test.")
def testReadWrite(self):
scale = 30 # meters
timestep = 60 # seconds
speed = 2.5 # meters per second
original = GeospatialCoordinateEncoder(scale, timestep, n=999, w=25)
encode(original, speed, -122.229194, 37.486782, 0)
encode(original, speed, -122.229294, 37.486882, 100)
proto1 = GeospatialCoordinateEncoderProto.new_message()
original.write(proto1)
# Write the proto to a temp file and read it back into a new proto
with tempfile.TemporaryFile() as f:
proto1.write(f)
f.seek(0)
proto2 = GeospatialCoordinateEncoderProto.read(f)
encoder = GeospatialCoordinateEncoder.read(proto2)
self.assertIsInstance(encoder, GeospatialCoordinateEncoder)
self.assertEqual(encoder.w, original.w)
self.assertEqual(encoder.n, original.n)
self.assertEqual(encoder.name, original.name)
self.assertEqual(encoder.verbosity, original.verbosity)
# Compare a new value with the original and deserialized.
encoding3 = encode(original, speed, -122.229294, 37.486982, 1000)
encoding4 = encode(encoder, speed, -122.229294, 37.486982, 1000)
self.assertTrue(np.array_equal(encoding3, encoding4))
def encode(encoder, speed, longitude, latitude, altitude=None):
output = np.zeros(encoder.getWidth(), dtype=defaultDtype)
encoder.encodeIntoArray((speed, longitude, latitude, altitude), output)
return output
def overlap(sdr1, sdr2):
assert sdr1.size == sdr2.size
return float((sdr1 & sdr2).sum()) / sdr1.sum()
if __name__ == "__main__":
unittest.main() | unknown | codeparrot/codeparrot-clean | ||
/*
* Copyright (c) 2022 Mockito contributors
* This program is made available under the terms of the MIT License.
*/
package org.mockitousage.annotation;
import static org.junit.Assert.assertEquals;
import org.junit.Before;
import org.junit.Test;
import org.mockito.Mock;
import org.mockito.MockMakers;
import org.mockito.Mockito;
import org.mockito.MockitoAnnotations;
public class ProgrammaticMockMakerAnnotationTest {
@Mock(mockMaker = MockMakers.INLINE)
ClassWithFinalMethod inlineMock;
@Mock(mockMaker = MockMakers.SUBCLASS)
ClassWithFinalMethod subclassMock;
@Before
public void init() {
MockitoAnnotations.openMocks(this);
}
@Test
public void test_mock_uses_given_mock_maker() {
Mockito.when(inlineMock.finalMethodCallingNonFinal()).thenReturn("MOCKED");
Mockito.when(subclassMock.finalMethodCallingNonFinal()).thenReturn("MOCKED");
assertEquals("MOCKED", inlineMock.finalMethodCallingNonFinal());
assertEquals("ORIGINAL", subclassMock.finalMethodCallingNonFinal());
assertEquals("MOCKED", subclassMock.nonFinal());
}
private static class ClassWithFinalMethod {
final String finalMethodCallingNonFinal() {
nonFinal();
return "ORIGINAL";
}
String nonFinal() {
return "ORIGINAL";
}
}
} | java | github | https://github.com/mockito/mockito | mockito-integration-tests/programmatic-tests/src/test/java/org/mockitousage/annotation/ProgrammaticMockMakerAnnotationTest.java |
import re
import select
import socket
import subprocess
import sys
from multiprocessing import Value
import json
import base64
import requests
from flask import Flask, abort
from flask import jsonify
from flask import render_template
from flask import request, send_from_directory
from flask_cors import CORS
from flask_scss import Scss
import config as conf
import jsonpickle
import helpers
from controller.chatbot_controller import chatbot_api
from controller.churn_controller import churn_api
from controller.argumentation_controller import argumentation_api
from controller.kp_extraction_controller import kp_extraction_api
from controller.machine_translation_controller import machine_translation_api
from controller.gsw_controller import gsw_api
#from controller.ner_controller import ner_api
from controller.neural_programmer_controller import neural_programmer_api
from controller.opinion_target_controller import opinion_target_api
from controller.sfid_controller import sfid_api
from controller.slot_filling_controller import slot_filling_api
from controller.seq2sql_controller import seq2sql_api
from controller.sid_controller import sid_api
from sfid import sfid
from argumentation import argumentation
from summarization import summarization
from grocery import grocery
from emotion import emotion
from go_chatbot import go_chatbot
from material import material
from chestxray import chestxray
from sid import sid
from data_selection import data_selection
app = Flask(__name__)
CORS(app)
Scss(app, static_dir='static/ui-kit/custom/css', asset_dir='static/ui-kit/custom/scss')
app.register_blueprint(seq2sql_api, url_prefix='/seq2sql')
app.register_blueprint(chatbot_api, url_prefix='/chatbot')
app.register_blueprint(neural_programmer_api, url_prefix='/neural_programmer')
app.register_blueprint(opinion_target_api, url_prefix='/opinion')
app.register_blueprint(churn_api, url_prefix='/churn')
#app.register_blueprint(ner_api, url_prefix='/ner')
app.register_blueprint(kp_extraction_api, url_prefix='/kp')
app.register_blueprint(machine_translation_api, url_prefix='/translate')
app.register_blueprint(gsw_api, url_prefix='/gsw')
app.register_blueprint(argumentation, url_prefix='/argumentation')
app.register_blueprint(slot_filling_api, url_prefix='/slotfilling')
app.register_blueprint(sfid, url_prefix='/sfid')
app.register_blueprint(go_chatbot, url_prefix='/go_chatbot')
app.register_blueprint(summarization, url_prefix='/summarization')
app.register_blueprint(sfid_api, url_prefix='/sfid_old')
app.register_blueprint(grocery, url_prefix='/grocery')
app.register_blueprint(emotion, url_prefix='/emotion')
app.register_blueprint(material, url_prefix='/material')
app.register_blueprint(chestxray, url_prefix='/chestxray')
app.register_blueprint(data_selection, url_prefix='/data_selection')
app.register_blueprint(sid, url_prefix='/sid')
@app.route('/')
def getIndex():
return render_template('index.html')
@app.errorhandler(404)
def page_not_found(e):
return render_template('error.html'), 404
if __name__ == '__main__':
app.run(host='127.0.0.1') | unknown | codeparrot/codeparrot-clean | ||
# Copyright (c) 2012, GPy authors (see AUTHORS.txt).
# Licensed under the BSD 3-clause license (see LICENSE.txt)
from .gp_regression import GPRegression
from .gp_classification import GPClassification
from .sparse_gp_regression import SparseGPRegression
from .sparse_gp_classification import SparseGPClassification, SparseGPClassificationUncertainInput
from .gplvm import GPLVM
from .bcgplvm import BCGPLVM
from .sparse_gplvm import SparseGPLVM
from .warped_gp import WarpedGP
from .bayesian_gplvm import BayesianGPLVM
from .mrd import MRD
from .gradient_checker import GradientChecker, HessianChecker, SkewChecker
from .ss_gplvm import SSGPLVM
from .gp_coregionalized_regression import GPCoregionalizedRegression
from .sparse_gp_coregionalized_regression import SparseGPCoregionalizedRegression
from .gp_heteroscedastic_regression import GPHeteroscedasticRegression
from .ss_mrd import SSMRD
from .gp_kronecker_gaussian_regression import GPKroneckerGaussianRegression
from .gp_var_gauss import GPVariationalGaussianApproximation
from .one_vs_all_classification import OneVsAllClassification
from .one_vs_all_sparse_classification import OneVsAllSparseClassification
from .dpgplvm import DPBayesianGPLVM
from .state_space_model import StateSpace
from .ibp_lfm import IBPLFM
from .gp_offset_regression import GPOffsetRegression
from .gp_grid_regression import GPRegressionGrid | unknown | codeparrot/codeparrot-clean | ||
#!/usr/bin/env python
# Copyright (c) 2013 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# THIS FILE IS MANAGED BY THE GLOBAL REQUIREMENTS REPO - DO NOT EDIT
import setuptools
# In python < 2.7.4, a lazy loading of package `pbr` will break
# setuptools if some other modules registered functions in `atexit`.
# solution from: http://bugs.python.org/issue15881#msg170215
try:
import multiprocessing # noqa
except ImportError:
pass
setuptools.setup(
setup_requires=['pbr'],
pbr=True) | unknown | codeparrot/codeparrot-clean | ||
# -*- coding: utf-8 -*-
# Image Occlusion Enhanced Add-on for Anki
#
# Copyright (C) 2016-2020 Aristotelis P. <https://glutanimate.com/>
# Copyright (C) 2012-2015 Tiago Barroso <tmbb@campus.ul.pt>
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version, with the additions
# listed at the end of the license file that accompanied this program.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
#
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
#
# NOTE: This program is subject to certain additional terms pursuant to
# Section 7 of the GNU Affero General Public License. You should have
# received a copy of these additional terms immediately following the
# terms and conditions of the GNU Affero General Public License that
# accompanied this program.
#
# If not, please request a copy through one of the means of contact
# listed here: <https://glutanimate.com/contact/>.
#
# Any modifications to this file must keep this entire header intact.
"""
Handles the IO note type and card template
"""
from .config import *
# DEFAULT CARD TEMPLATES
iocard_front = """\
{{#%(src_img)s}}
<div id="io-header">{{%(header)s}}</div>
<div id="io-wrapper">
<div id="io-overlay">{{%(que)s}}</div>
<div id="io-original">{{%(src_img)s}}</div>
</div>
<div id="io-footer">{{%(footer)s}}</div>
<script>
// Prevent original image from loading before mask
aFade = 50, qFade = 0;
var mask = document.querySelector('#io-overlay>img');
function loaded() {
var original = document.querySelector('#io-original');
original.style.visibility = "visible";
}
if (mask === null || mask.complete) {
loaded();
} else {
mask.addEventListener('load', loaded);
}
</script>
{{/%(src_img)s}}
""" % \
{'que': IO_FLDS['qm'],
'ans': IO_FLDS['am'],
'svg': IO_FLDS['om'],
'src_img': IO_FLDS['im'],
'header': IO_FLDS['hd'],
'footer': IO_FLDS['ft'],
'remarks': IO_FLDS['rk'],
'sources': IO_FLDS['sc'],
'extraone': IO_FLDS['e1'],
'extratwo': IO_FLDS['e2']}
iocard_back = """\
{{#%(src_img)s}}
<div id="io-header">{{%(header)s}}</div>
<div id="io-wrapper">
<div id="io-overlay">{{%(ans)s}}</div>
<div id="io-original">{{%(src_img)s}}</div>
</div>
{{#%(footer)s}}<div id="io-footer">{{%(footer)s}}</div>{{/%(footer)s}}
<button id="io-revl-btn" onclick="toggle();">Toggle Masks</button>
<div id="io-extra-wrapper">
<div id="io-extra">
{{#%(remarks)s}}
<div class="io-extra-entry">
<div class="io-field-descr">%(remarks)s</div>{{%(remarks)s}}
</div>
{{/%(remarks)s}}
{{#%(sources)s}}
<div class="io-extra-entry">
<div class="io-field-descr">%(sources)s</div>{{%(sources)s}}
</div>
{{/%(sources)s}}
{{#%(extraone)s}}
<div class="io-extra-entry">
<div class="io-field-descr">%(extraone)s</div>{{%(extraone)s}}
</div>
{{/%(extraone)s}}
{{#%(extratwo)s}}
<div class="io-extra-entry">
<div class="io-field-descr">%(extratwo)s</div>{{%(extratwo)s}}
</div>
{{/%(extratwo)s}}
</div>
</div>
<script>
// Toggle answer mask on clicking the image
var toggle = function() {
var amask = document.getElementById('io-overlay');
if (amask.style.display === 'block' || amask.style.display === '')
amask.style.display = 'none';
else
amask.style.display = 'block'
}
// Prevent original image from loading before mask
aFade = 50, qFade = 0;
var mask = document.querySelector('#io-overlay>img');
function loaded() {
var original = document.querySelector('#io-original');
original.style.visibility = "visible";
}
if (mask === null || mask.complete) {
loaded();
} else {
mask.addEventListener('load', loaded);
}
</script>
{{/%(src_img)s}}
""" % \
{'que': IO_FLDS['qm'],
'ans': IO_FLDS['am'],
'svg': IO_FLDS['om'],
'src_img': IO_FLDS['im'],
'header': IO_FLDS['hd'],
'footer': IO_FLDS['ft'],
'remarks': IO_FLDS['rk'],
'sources': IO_FLDS['sc'],
'extraone': IO_FLDS['e1'],
'extratwo': IO_FLDS['e2']}
iocard_css = """\
/* GENERAL CARD STYLE */
.card {
font-family: "Helvetica LT Std", Helvetica, Arial, Sans;
font-size: 150%;
text-align: center;
color: black;
background-color: white;
}
/* OCCLUSION CSS START - don't edit this */
#io-overlay {
position:absolute;
top:0;
width:100%;
z-index:3
}
#io-original {
position:relative;
top:0;
width:100%;
z-index:2;
visibility: hidden;
}
#io-wrapper {
position:relative;
width: 100%;
}
/* OCCLUSION CSS END */
/* OTHER STYLES */
#io-header{
font-size: 1.1em;
margin-bottom: 0.2em;
}
#io-footer{
max-width: 80%;
margin-left: auto;
margin-right: auto;
margin-top: 0.8em;
font-style: italic;
}
#io-extra-wrapper{
/* the wrapper is needed to center the
left-aligned blocks below it */
width: 80%;
margin-left: auto;
margin-right: auto;
margin-top: 0.5em;
}
#io-extra{
text-align:center;
display: inline-block;
}
.io-extra-entry{
margin-top: 0.8em;
font-size: 0.9em;
text-align:left;
}
.io-field-descr{
margin-bottom: 0.2em;
font-weight: bold;
font-size: 1em;
}
#io-revl-btn {
font-size: 0.5em;
}
/* ADJUSTMENTS FOR MOBILE DEVICES */
.mobile .card, .mobile #content {
font-size: 120%;
margin: 0;
}
.mobile #io-extra-wrapper {
width: 95%;
}
.mobile #io-revl-btn {
font-size: 0.8em;
}
"""
# INCREMENTAL UPDATES
html_overlay_onload = """\
<script>
// Prevent original image from loading before mask
aFade = 50, qFade = 0;
var mask = document.querySelector('#io-overlay>img');
function loaded() {
var original = document.querySelector('#io-original');
original.style.visibility = "visible";
}
if (mask.complete) {
loaded();
} else {
mask.addEventListener('load', loaded);
}
</script>\
"""
css_original_hide = """\
/* Anki 2.1 additions */
#io-original {
visibility: hidden;
}\
"""
# List structure:
# (<version addition was introduced in>,
# (<qfmt_addition>, <afmt_addition>, <css_addition>))
# versions need to be ordered by semantic versioning
additions_by_version = [
(
1.30,
(html_overlay_onload, html_overlay_onload, css_original_hide)
),
]
def add_io_model(col):
models = col.models
io_model = models.new(IO_MODEL_NAME)
# Add fields:
for i in IO_FLDS_IDS:
fld = models.newField(IO_FLDS[i])
if i == "note_id":
fld['size'] = 0
models.addField(io_model, fld)
# Add template
template = models.newTemplate(IO_CARD_NAME)
template['qfmt'] = iocard_front
template['afmt'] = iocard_back
io_model['css'] = iocard_css
io_model['sortf'] = 1 # set sortfield to header
models.addTemplate(io_model, template)
models.add(io_model)
return io_model
def reset_template(col):
print("Resetting IO Enhanced card template to defaults")
io_model = col.models.byName(IO_MODEL_NAME)
template = io_model['tmpls'][0]
template['qfmt'] = iocard_front
template['afmt'] = iocard_back
io_model['css'] = iocard_css
col.models.save()
return io_model
def update_template(col, old_version):
print("Updating IO Enhanced card template")
additions = [[], [], []]
for version, components in additions_by_version:
if old_version >= version:
continue
for lst, addition in zip(additions, components):
lst.append(addition)
io_model = col.models.byName(IO_MODEL_NAME)
if not io_model:
return add_io_model(col)
template = io_model['tmpls'][0]
template['qfmt'] += "\n".join(additions[0])
template['afmt'] += "\n".join(additions[1])
io_model['css'] += "\n".join(additions[2])
col.models.save()
return io_model | unknown | codeparrot/codeparrot-clean | ||
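The `additions_by_version` list in the add-on above pairs each release with the qfmt/afmt/css snippets it introduced, and `update_template` appends only those newer than the stored version. That gating logic can be sketched on its own (the version numbers and snippet strings below are hypothetical placeholders, not the add-on's real data):

```python
# (version the addition was introduced in, (qfmt, afmt, css) snippets),
# ordered by version, mirroring the structure in the add-on above.
additions_by_version = [
    (1.30, ("front-fix", "back-fix", "css-fix")),
    (1.40, ("front-new", "back-new", "css-new")),
]

def collect_additions(old_version):
    """Gather every template addition introduced after old_version."""
    qfmt, afmt, css = [], [], []
    for version, components in additions_by_version:
        if old_version >= version:
            continue  # the user already has this addition
        for lst, addition in zip((qfmt, afmt, css), components):
            lst.append(addition)
    return qfmt, afmt, css

print(collect_additions(1.30))  # only the 1.40 additions apply
```

Keeping the list ordered by version means the collected snippets can simply be joined and appended, as `update_template` does.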
package v2
import (
"errors"
"reflect"
"testing"
)
func TestNewSettable(t *testing.T) {
contexts := []struct {
arg string
name string
field string
value string
err error
}{
{"name=value", "name", "", "value", nil},
{"name", "name", "", "", nil},
{"name.field=value", "name", "field", "value", nil},
{"name.field", "name", "field", "", nil},
{"=value", "", "", "", errInvalidFormat},
{"=", "", "", "", errInvalidFormat},
}
for _, c := range contexts {
s, err := newSettable(c.arg)
if !errors.Is(err, c.err) {
t.Fatalf("expected error to be %v, got %v", c.err, err)
}
if s.name != c.name {
t.Fatalf("expected name to be %q, got %q", c.name, s.name)
}
if s.field != c.field {
t.Fatalf("expected field to be %q, got %q", c.field, s.field)
}
if s.value != c.value {
t.Fatalf("expected value to be %q, got %q", c.value, s.value)
}
}
}
func TestIsSettable(t *testing.T) {
contexts := []struct {
allowedSettableFields []string
set settable
settable []string
result bool
err error
}{
{allowedSettableFieldsEnv, settable{}, []string{}, false, nil},
{allowedSettableFieldsEnv, settable{field: "value"}, []string{}, false, nil},
{allowedSettableFieldsEnv, settable{}, []string{"value"}, true, nil},
{allowedSettableFieldsEnv, settable{field: "value"}, []string{"value"}, true, nil},
{allowedSettableFieldsEnv, settable{field: "foo"}, []string{"value"}, false, nil},
{allowedSettableFieldsEnv, settable{field: "foo"}, []string{"foo"}, false, nil},
{allowedSettableFieldsEnv, settable{}, []string{"value1", "value2"}, false, errMultipleFields},
}
for _, c := range contexts {
if res, err := c.set.isSettable(c.allowedSettableFields, c.settable); res != c.result {
t.Fatalf("expected result to be %t, got %t", c.result, res)
} else if !errors.Is(err, c.err) {
t.Fatalf("expected error to be %v, got %v", c.err, err)
}
}
}
func TestUpdateSettingsEnv(t *testing.T) {
contexts := []struct {
env []string
set settable
newEnv []string
}{
{[]string{}, settable{name: "DEBUG", value: "1"}, []string{"DEBUG=1"}},
{[]string{"DEBUG=0"}, settable{name: "DEBUG", value: "1"}, []string{"DEBUG=1"}},
{[]string{"FOO=0"}, settable{name: "DEBUG", value: "1"}, []string{"FOO=0", "DEBUG=1"}},
{[]string{"FOO=0", "DEBUG=0"}, settable{name: "DEBUG", value: "1"}, []string{"FOO=0", "DEBUG=1"}},
{[]string{"FOO=0", "DEBUG=0", "BAR=1"}, settable{name: "DEBUG", value: "1"}, []string{"FOO=0", "DEBUG=1", "BAR=1"}},
}
for _, c := range contexts {
updateSettingsEnv(&c.env, &c.set)
if !reflect.DeepEqual(c.env, c.newEnv) {
t.Fatalf("expected env to be %q, got %q", c.newEnv, c.env)
}
}
} | go | github | https://github.com/moby/moby | daemon/pkg/plugin/v2/settable_test.go |
{
"private": true,
"scripts": {
"dev": "next dev",
"build": "next build",
"start": "next start"
},
"dependencies": {
"mobx": "^6.6.1",
"mobx-react-lite": "^3.4.0",
"next": "latest",
"react": "^18.2.0",
"react-dom": "^18.2.0"
}
} | json | github | https://github.com/vercel/next.js | examples/with-mobx/package.json |
#include "git-compat-util.h"
#include "gettext.h"
#include "parse.h"
static uintmax_t get_unit_factor(const char *end)
{
if (!*end)
return 1;
else if (!strcasecmp(end, "k"))
return 1024;
else if (!strcasecmp(end, "m"))
return 1024 * 1024;
else if (!strcasecmp(end, "g"))
return 1024 * 1024 * 1024;
return 0;
}
int git_parse_signed(const char *value, intmax_t *ret, intmax_t max)
{
if (value && *value) {
char *end;
intmax_t val;
intmax_t factor;
if (max < 0)
BUG("max must be a positive integer");
errno = 0;
val = strtoimax(value, &end, 0);
if (errno == ERANGE)
return 0;
if (end == value) {
errno = EINVAL;
return 0;
}
factor = get_unit_factor(end);
if (!factor) {
errno = EINVAL;
return 0;
}
if ((val < 0 && (-max - 1) / factor > val) ||
(val > 0 && max / factor < val)) {
errno = ERANGE;
return 0;
}
val *= factor;
*ret = val;
return 1;
}
errno = EINVAL;
return 0;
}
int git_parse_unsigned(const char *value, uintmax_t *ret, uintmax_t max)
{
if (value && *value) {
char *end;
uintmax_t val;
uintmax_t factor;
/* negative values would be accepted by strtoumax */
if (strchr(value, '-')) {
errno = EINVAL;
return 0;
}
errno = 0;
val = strtoumax(value, &end, 0);
if (errno == ERANGE)
return 0;
if (end == value) {
errno = EINVAL;
return 0;
}
factor = get_unit_factor(end);
if (!factor) {
errno = EINVAL;
return 0;
}
if (unsigned_mult_overflows(factor, val) ||
factor * val > max) {
errno = ERANGE;
return 0;
}
val *= factor;
*ret = val;
return 1;
}
errno = EINVAL;
return 0;
}
int git_parse_int(const char *value, int *ret)
{
intmax_t tmp;
if (!git_parse_signed(value, &tmp, maximum_signed_value_of_type(int)))
return 0;
*ret = tmp;
return 1;
}
int git_parse_int64(const char *value, int64_t *ret)
{
intmax_t tmp;
if (!git_parse_signed(value, &tmp, maximum_signed_value_of_type(int64_t)))
return 0;
*ret = tmp;
return 1;
}
int git_parse_ulong(const char *value, unsigned long *ret)
{
uintmax_t tmp;
if (!git_parse_unsigned(value, &tmp, maximum_unsigned_value_of_type(long)))
return 0;
*ret = tmp;
return 1;
}
int git_parse_ssize_t(const char *value, ssize_t *ret)
{
intmax_t tmp;
if (!git_parse_signed(value, &tmp, maximum_signed_value_of_type(ssize_t)))
return 0;
*ret = tmp;
return 1;
}
int git_parse_double(const char *value, double *ret)
{
char *end;
double val;
uintmax_t factor;
if (!value || !*value) {
errno = EINVAL;
return 0;
}
errno = 0;
val = strtod(value, &end);
if (errno == ERANGE)
return 0;
if (end == value) {
errno = EINVAL;
return 0;
}
factor = get_unit_factor(end);
if (!factor) {
errno = EINVAL;
return 0;
}
val *= factor;
*ret = val;
return 1;
}
int git_parse_maybe_bool_text(const char *value)
{
if (!value)
return 1;
if (!*value)
return 0;
if (!strcasecmp(value, "true")
|| !strcasecmp(value, "yes")
|| !strcasecmp(value, "on"))
return 1;
if (!strcasecmp(value, "false")
|| !strcasecmp(value, "no")
|| !strcasecmp(value, "off"))
return 0;
return -1;
}
int git_parse_maybe_bool(const char *value)
{
int v = git_parse_maybe_bool_text(value);
if (0 <= v)
return v;
if (git_parse_int(value, &v))
return !!v;
return -1;
}
/*
* Parse environment variable 'k' as a boolean (in various
* possible spellings); if missing, use the default value 'def'.
*/
int git_env_bool(const char *k, int def)
{
const char *v = getenv(k);
int val;
if (!v)
return def;
val = git_parse_maybe_bool(v);
if (val < 0)
die(_("bad boolean environment value '%s' for '%s'"),
v, k);
return val;
}
/*
* Parse environment variable 'k' as ulong with possibly a unit
* suffix; if missing, use the default value 'val'.
*/
unsigned long git_env_ulong(const char *k, unsigned long val)
{
const char *v = getenv(k);
if (v && !git_parse_ulong(v, &val))
die(_("failed to parse %s"), k);
return val;
} | c | github | https://github.com/git/git | parse.c |
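The `get_unit_factor` helper in parse.c above implements a common pattern: strip an optional case-insensitive k/m/g suffix, then scale the parsed number. A minimal Python sketch of the same idea (`parse_size` is a hypothetical helper, not part of git, and raises `ValueError` instead of using errno as the C code does):

```python
UNIT_FACTORS = {"": 1, "k": 1024, "m": 1024 ** 2, "g": 1024 ** 3}

def parse_size(text):
    """Parse an integer with an optional k/m/g unit suffix.

    Mirrors the suffix handling of get_unit_factor() in parse.c above;
    base prefixes like 0x are honored via int(..., 0), much like strtoimax
    with base 0.
    """
    text = text.strip()
    suffix = text[-1:].lower() if text[-1:].isalpha() else ""
    number = text[:-1] if suffix else text
    if suffix not in UNIT_FACTORS:
        raise ValueError("unknown unit suffix: %r" % suffix)
    return int(number, 0) * UNIT_FACTORS[suffix]

print(parse_size("2k"))    # 2048
print(parse_size("0x10"))  # 16
```

Unlike the C version, this sketch does not range-check the scaled result; Python integers are arbitrary precision, whereas parse.c must detect overflow against the caller-supplied max before multiplying.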
from random import randrange
from sympy.simplify.hyperexpand import (ShiftA, ShiftB, UnShiftA, UnShiftB,
MeijerShiftA, MeijerShiftB, MeijerShiftC, MeijerShiftD,
MeijerUnShiftA, MeijerUnShiftB, MeijerUnShiftC,
MeijerUnShiftD,
ReduceOrder, reduce_order, apply_operators,
devise_plan, make_derivative_operator, Formula,
hyperexpand, Hyper_Function, G_Function,
reduce_order_meijer,
build_hypergeometric_formula)
from sympy import hyper, I, S, meijerg, Piecewise
from sympy.abc import z, a, b, c
from sympy.utilities.pytest import XFAIL, raises, slow
from sympy.utilities.randtest import verify_numerically as tn
from sympy.core.compatibility import range
from sympy import (cos, sin, log, exp, asin, lowergamma, atanh, besseli,
gamma, sqrt, pi, erf, exp_polar)
def test_branch_bug():
assert hyperexpand(hyper((-S(1)/3, S(1)/2), (S(2)/3, S(3)/2), -z)) == \
-z**S('1/3')*lowergamma(exp_polar(I*pi)/3, z)/5 \
+ sqrt(pi)*erf(sqrt(z))/(5*sqrt(z))
assert hyperexpand(meijerg([S(7)/6, 1], [], [S(2)/3], [S(1)/6, 0], z)) == \
2*z**S('2/3')*(2*sqrt(pi)*erf(sqrt(z))/sqrt(z) - 2*lowergamma(
S(2)/3, z)/z**S('2/3'))*gamma(S(2)/3)/gamma(S(5)/3)
def test_hyperexpand():
# Luke, Y. L. (1969), The Special Functions and Their Approximations,
# Volume 1, section 6.2
assert hyperexpand(hyper([], [], z)) == exp(z)
assert hyperexpand(hyper([1, 1], [2], -z)*z) == log(1 + z)
assert hyperexpand(hyper([], [S.Half], -z**2/4)) == cos(z)
assert hyperexpand(z*hyper([], [S('3/2')], -z**2/4)) == sin(z)
assert hyperexpand(hyper([S('1/2'), S('1/2')], [S('3/2')], z**2)*z) \
== asin(z)
def can_do(ap, bq, numerical=True, div=1, lowerplane=False):
from sympy import exp_polar, exp
r = hyperexpand(hyper(ap, bq, z))
if r.has(hyper):
return False
if not numerical:
return True
repl = {}
for n, a in enumerate(r.free_symbols - set([z])):
repl[a] = randcplx(n)/div
[a, b, c, d] = [2, -1, 3, 1]
if lowerplane:
[a, b, c, d] = [2, -2, 3, -1]
return tn(
hyper(ap, bq, z).subs(repl),
r.replace(exp_polar, exp).subs(repl),
z, a=a, b=b, c=c, d=d)
def test_roach():
# Kelly B. Roach. Meijer G Function Representations.
# Section "Gallery"
assert can_do([S(1)/2], [S(9)/2])
assert can_do([], [1, S(5)/2, 4])
assert can_do([-S.Half, 1, 2], [3, 4])
assert can_do([S(1)/3], [-S(2)/3, -S(1)/2, S(1)/2, 1])
assert can_do([-S(3)/2, -S(1)/2], [-S(5)/2, 1])
assert can_do([-S(3)/2, ], [-S(1)/2, S(1)/2]) # shine-integral
assert can_do([-S(3)/2, -S(1)/2], [2]) # elliptic integrals
@XFAIL
def test_roach_fail():
assert can_do([-S(1)/2, 1], [S(1)/4, S(1)/2, S(3)/4]) # PFDD
assert can_do([S(3)/2], [S(5)/2, 5]) # struve function
assert can_do([-S(1)/2, S(1)/2, 1], [S(3)/2, S(5)/2]) # polylog, pfdd
assert can_do([1, 2, 3], [S(1)/2, 4]) # XXX ?
assert can_do([S(1)/2], [-S(1)/3, -S(1)/2, -S(2)/3]) # PFDD ?
# For the long table tests, see end of file
def test_polynomial():
from sympy import oo
assert hyperexpand(hyper([], [-1], z)) == oo
assert hyperexpand(hyper([-2], [-1], z)) == oo
assert hyperexpand(hyper([0, 0], [-1], z)) == 1
assert can_do([-5, -2, randcplx(), randcplx()], [-10, randcplx()])
def test_hyperexpand_bases():
assert hyperexpand(hyper([2], [a], z)) == \
a + z**(-a + 1)*(-a**2 + 3*a + z*(a - 1) - 2)*exp(z)* \
lowergamma(a - 1, z) - 1
# TODO [a+1, a-S.Half], [2*a]
assert hyperexpand(hyper([1, 2], [3], z)) == -2/z - 2*log(-z + 1)/z**2
assert hyperexpand(hyper([S.Half, 2], [S(3)/2], z)) == \
-1/(2*z - 2) + atanh(sqrt(z))/sqrt(z)/2
assert hyperexpand(hyper([S(1)/2, S(1)/2], [S(5)/2], z)) == \
(-3*z + 3)/4/(z*sqrt(-z + 1)) \
+ (6*z - 3)*asin(sqrt(z))/(4*z**(S(3)/2))
assert hyperexpand(hyper([1, 2], [S(3)/2], z)) == -1/(2*z - 2) \
- asin(sqrt(z))/(sqrt(z)*(2*z - 2)*sqrt(-z + 1))
assert hyperexpand(hyper([-S.Half - 1, 1, 2], [S.Half, 3], z)) == \
sqrt(z)*(6*z/7 - S(6)/5)*atanh(sqrt(z)) \
+ (-30*z**2 + 32*z - 6)/35/z - 6*log(-z + 1)/(35*z**2)
assert hyperexpand(hyper([1 + S.Half, 1, 1], [2, 2], z)) == \
-4*log(sqrt(-z + 1)/2 + S(1)/2)/z
# TODO hyperexpand(hyper([a], [2*a + 1], z))
# TODO [S.Half, a], [S(3)/2, a+1]
assert hyperexpand(hyper([2], [b, 1], z)) == \
z**(-b/2 + S(1)/2)*besseli(b - 1, 2*sqrt(z))*gamma(b) \
+ z**(-b/2 + 1)*besseli(b, 2*sqrt(z))*gamma(b)
# TODO [a], [a - S.Half, 2*a]
def test_hyperexpand_parametric():
assert hyperexpand(hyper([a, S(1)/2 + a], [S(1)/2], z)) \
== (1 + sqrt(z))**(-2*a)/2 + (1 - sqrt(z))**(-2*a)/2
assert hyperexpand(hyper([a, -S(1)/2 + a], [2*a], z)) \
== 2**(2*a - 1)*((-z + 1)**(S(1)/2) + 1)**(-2*a + 1)
def test_shifted_sum():
from sympy import simplify
assert simplify(hyperexpand(z**4*hyper([2], [3, S('3/2')], -z**2))) \
== z*sin(2*z) + (-z**2 + S.Half)*cos(2*z) - S.Half
def _randrat():
""" Steer clear of integers. """
return S(randrange(25) + 10)/50
def randcplx(offset=-1):
""" Polys is not good with real coefficients. """
return _randrat() + I*_randrat() + I*(1 + offset)
@slow
def test_formulae():
from sympy.simplify.hyperexpand import FormulaCollection
formulae = FormulaCollection().formulae
for formula in formulae:
h = formula.func(formula.z)
rep = {}
for n, sym in enumerate(formula.symbols):
rep[sym] = randcplx(n)
# NOTE hyperexpand returns truly branched functions. We know we are
# on the main sheet, but numerical evaluation can still go wrong
# (e.g. if exp_polar cannot be evalf'd).
# Just replace all exp_polar by exp, this usually works.
# first test if the closed-form is actually correct
h = h.subs(rep)
closed_form = formula.closed_form.subs(rep).rewrite('nonrepsmall')
z = formula.z
assert tn(h, closed_form.replace(exp_polar, exp), z)
# now test the computed matrix
cl = (formula.C * formula.B)[0].subs(rep).rewrite('nonrepsmall')
assert tn(closed_form.replace(
exp_polar, exp), cl.replace(exp_polar, exp), z)
deriv1 = z*formula.B.applyfunc(lambda t: t.rewrite(
'nonrepsmall')).diff(z)
deriv2 = formula.M * formula.B
for d1, d2 in zip(deriv1, deriv2):
assert tn(d1.subs(rep).replace(exp_polar, exp),
d2.subs(rep).rewrite('nonrepsmall').replace(exp_polar, exp), z)
def test_meijerg_formulae():
from sympy.simplify.hyperexpand import MeijerFormulaCollection
formulae = MeijerFormulaCollection().formulae
for sig in formulae:
for formula in formulae[sig]:
g = meijerg(formula.func.an, formula.func.ap,
formula.func.bm, formula.func.bq,
formula.z)
rep = {}
for sym in formula.symbols:
rep[sym] = randcplx()
# first test if the closed-form is actually correct
g = g.subs(rep)
closed_form = formula.closed_form.subs(rep)
z = formula.z
assert tn(g, closed_form, z)
# now test the computed matrix
cl = (formula.C * formula.B)[0].subs(rep)
assert tn(closed_form, cl, z)
deriv1 = z*formula.B.diff(z)
deriv2 = formula.M * formula.B
for d1, d2 in zip(deriv1, deriv2):
assert tn(d1.subs(rep), d2.subs(rep), z)
def op(f):
return z*f.diff(z)
def test_plan():
assert devise_plan(Hyper_Function([0], ()),
Hyper_Function([0], ()), z) == []
with raises(ValueError):
devise_plan(Hyper_Function([1], ()), Hyper_Function((), ()), z)
with raises(ValueError):
devise_plan(Hyper_Function([2], [1]), Hyper_Function([2], [2]), z)
with raises(ValueError):
devise_plan(Hyper_Function([2], []), Hyper_Function([S("1/2")], []), z)
# We cannot use pi/(10000 + n) because polys is insanely slow.
a1, a2, b1 = (randcplx(n) for n in range(3))
b1 += 2*I
h = hyper([a1, a2], [b1], z)
h2 = hyper((a1 + 1, a2), [b1], z)
assert tn(apply_operators(h,
devise_plan(Hyper_Function((a1 + 1, a2), [b1]),
Hyper_Function((a1, a2), [b1]), z), op),
h2, z)
h2 = hyper((a1 + 1, a2 - 1), [b1], z)
assert tn(apply_operators(h,
devise_plan(Hyper_Function((a1 + 1, a2 - 1), [b1]),
Hyper_Function((a1, a2), [b1]), z), op),
h2, z)
def test_plan_derivatives():
a1, a2, a3 = 1, 2, S('1/2')
b1, b2 = 3, S('5/2')
h = Hyper_Function((a1, a2, a3), (b1, b2))
h2 = Hyper_Function((a1 + 1, a2 + 1, a3 + 2), (b1 + 1, b2 + 1))
ops = devise_plan(h2, h, z)
f = Formula(h, z, h(z), [])
deriv = make_derivative_operator(f.M, z)
assert tn((apply_operators(f.C, ops, deriv)*f.B)[0], h2(z), z)
h2 = Hyper_Function((a1, a2 - 1, a3 - 2), (b1 - 1, b2 - 1))
ops = devise_plan(h2, h, z)
assert tn((apply_operators(f.C, ops, deriv)*f.B)[0], h2(z), z)
def test_reduction_operators():
a1, a2, b1 = (randcplx(n) for n in range(3))
h = hyper([a1], [b1], z)
assert ReduceOrder(2, 0) is None
assert ReduceOrder(2, -1) is None
assert ReduceOrder(1, S('1/2')) is None
h2 = hyper((a1, a2), (b1, a2), z)
assert tn(ReduceOrder(a2, a2).apply(h, op), h2, z)
h2 = hyper((a1, a2 + 1), (b1, a2), z)
assert tn(ReduceOrder(a2 + 1, a2).apply(h, op), h2, z)
h2 = hyper((a2 + 4, a1), (b1, a2), z)
assert tn(ReduceOrder(a2 + 4, a2).apply(h, op), h2, z)
# test several step order reduction
ap = (a2 + 4, a1, b1 + 1)
bq = (a2, b1, b1)
func, ops = reduce_order(Hyper_Function(ap, bq))
assert func.ap == (a1,)
assert func.bq == (b1,)
assert tn(apply_operators(h, ops, op), hyper(ap, bq, z), z)
def test_shift_operators():
a1, a2, b1, b2, b3 = (randcplx(n) for n in range(5))
h = hyper((a1, a2), (b1, b2, b3), z)
raises(ValueError, lambda: ShiftA(0))
raises(ValueError, lambda: ShiftB(1))
assert tn(ShiftA(a1).apply(h, op), hyper((a1 + 1, a2), (b1, b2, b3), z), z)
assert tn(ShiftA(a2).apply(h, op), hyper((a1, a2 + 1), (b1, b2, b3), z), z)
assert tn(ShiftB(b1).apply(h, op), hyper((a1, a2), (b1 - 1, b2, b3), z), z)
assert tn(ShiftB(b2).apply(h, op), hyper((a1, a2), (b1, b2 - 1, b3), z), z)
assert tn(ShiftB(b3).apply(h, op), hyper((a1, a2), (b1, b2, b3 - 1), z), z)
def test_ushift_operators():
a1, a2, b1, b2, b3 = (randcplx(n) for n in range(5))
h = hyper((a1, a2), (b1, b2, b3), z)
raises(ValueError, lambda: UnShiftA((1,), (), 0, z))
raises(ValueError, lambda: UnShiftB((), (-1,), 0, z))
raises(ValueError, lambda: UnShiftA((1,), (0, -1, 1), 0, z))
raises(ValueError, lambda: UnShiftB((0, 1), (1,), 0, z))
s = UnShiftA((a1, a2), (b1, b2, b3), 0, z)
assert tn(s.apply(h, op), hyper((a1 - 1, a2), (b1, b2, b3), z), z)
s = UnShiftA((a1, a2), (b1, b2, b3), 1, z)
assert tn(s.apply(h, op), hyper((a1, a2 - 1), (b1, b2, b3), z), z)
s = UnShiftB((a1, a2), (b1, b2, b3), 0, z)
assert tn(s.apply(h, op), hyper((a1, a2), (b1 + 1, b2, b3), z), z)
s = UnShiftB((a1, a2), (b1, b2, b3), 1, z)
assert tn(s.apply(h, op), hyper((a1, a2), (b1, b2 + 1, b3), z), z)
s = UnShiftB((a1, a2), (b1, b2, b3), 2, z)
assert tn(s.apply(h, op), hyper((a1, a2), (b1, b2, b3 + 1), z), z)
def can_do_meijer(a1, a2, b1, b2, numeric=True):
"""
This helper function tries to hyperexpand() the meijer g-function
corresponding to the parameters a1, a2, b1, b2.
It returns False if this expansion still contains g-functions.
If numeric is True, it also tests the so-obtained formula numerically
(at random values) and returns False if the test fails.
Else it returns True.
"""
from sympy import unpolarify, expand
r = hyperexpand(meijerg(a1, a2, b1, b2, z))
if r.has(meijerg):
return False
# NOTE hyperexpand() returns a truly branched function, whereas numerical
# evaluation only works on the main branch. Since we are evaluating on
# the main branch, this should not be a problem, but expressions like
# exp_polar(I*pi/2*x)**a are evaluated incorrectly. We thus have to get
# rid of them. The expand heuristically does this...
r = unpolarify(expand(r, force=True, power_base=True, power_exp=False,
mul=False, log=False, multinomial=False, basic=False))
if not numeric:
return True
repl = {}
for n, a in enumerate(meijerg(a1, a2, b1, b2, z).free_symbols - set([z])):
repl[a] = randcplx(n)
return tn(meijerg(a1, a2, b1, b2, z).subs(repl), r.subs(repl), z)
@slow
def test_meijerg_expand():
from sympy import combsimp, simplify
# from mpmath docs
assert hyperexpand(meijerg([[], []], [[0], []], -z)) == exp(z)
assert hyperexpand(meijerg([[1, 1], []], [[1], [0]], z)) == \
log(z + 1)
assert hyperexpand(meijerg([[1, 1], []], [[1], [1]], z)) == \
z/(z + 1)
assert hyperexpand(meijerg([[], []], [[S(1)/2], [0]], (z/2)**2)) \
== sin(z)/sqrt(pi)
assert hyperexpand(meijerg([[], []], [[0], [S(1)/2]], (z/2)**2)) \
== cos(z)/sqrt(pi)
assert can_do_meijer([], [a], [a - 1, a - S.Half], [])
assert can_do_meijer([], [], [a/2], [-a/2], False) # branches...
assert can_do_meijer([a], [b], [a], [b, a - 1])
# wikipedia
assert hyperexpand(meijerg([1], [], [], [0], z)) == \
Piecewise((0, abs(z) < 1), (1, abs(1/z) < 1),
(meijerg([1], [], [], [0], z), True))
assert hyperexpand(meijerg([], [1], [0], [], z)) == \
Piecewise((1, abs(z) < 1), (0, abs(1/z) < 1),
(meijerg([], [1], [0], [], z), True))
# The Special Functions and their Approximations
assert can_do_meijer([], [], [a + b/2], [a, a - b/2, a + S.Half])
assert can_do_meijer(
[], [], [a], [b], False) # branches only agree for small z
assert can_do_meijer([], [S.Half], [a], [-a])
assert can_do_meijer([], [], [a, b], [])
assert can_do_meijer([], [], [a, b], [])
assert can_do_meijer([], [], [a, a + S.Half], [b, b + S.Half])
    assert can_do_meijer([], [], [a, -a], [0, S.Half], False) # ditto
assert can_do_meijer([], [], [a, a + S.Half, b, b + S.Half], [])
assert can_do_meijer([S.Half], [], [0], [a, -a])
    assert can_do_meijer([S.Half], [], [a], [0, -a], False) # ditto
assert can_do_meijer([], [a - S.Half], [a, b], [a - S.Half], False)
assert can_do_meijer([], [a + S.Half], [a + b, a - b, a], [], False)
assert can_do_meijer([a + S.Half], [], [b, 2*a - b, a], [], False)
# This for example is actually zero.
assert can_do_meijer([], [], [], [a, b])
# Testing a bug:
assert hyperexpand(meijerg([0, 2], [], [], [-1, 1], z)) == \
Piecewise((0, abs(z) < 1),
(z/2 - 1/(2*z), abs(1/z) < 1),
(meijerg([0, 2], [], [], [-1, 1], z), True))
# Test that the simplest possible answer is returned:
assert combsimp(simplify(hyperexpand(
meijerg([1], [1 - a], [-a/2, -a/2 + S(1)/2], [], 1/z)))) == \
-2*sqrt(pi)*(sqrt(z + 1) + 1)**a/a
# Test that hyper is returned
assert hyperexpand(meijerg([1], [], [a], [0, 0], z)) == hyper(
(a,), (a + 1, a + 1), z*exp_polar(I*pi))*z**a*gamma(a)/gamma(a + 1)**2
def test_meijerg_lookup():
from sympy import uppergamma, Si, Ci
assert hyperexpand(meijerg([a], [], [b, a], [], z)) == \
z**b*exp(z)*gamma(-a + b + 1)*uppergamma(a - b, z)
assert hyperexpand(meijerg([0], [], [0, 0], [], z)) == \
exp(z)*uppergamma(0, z)
assert can_do_meijer([a], [], [b, a + 1], [])
assert can_do_meijer([a], [], [b + 2, a], [])
assert can_do_meijer([a], [], [b - 2, a], [])
assert hyperexpand(meijerg([a], [], [a, a, a - S(1)/2], [], z)) == \
-sqrt(pi)*z**(a - S(1)/2)*(2*cos(2*sqrt(z))*(Si(2*sqrt(z)) - pi/2)
- 2*sin(2*sqrt(z))*Ci(2*sqrt(z))) == \
hyperexpand(meijerg([a], [], [a, a - S(1)/2, a], [], z)) == \
hyperexpand(meijerg([a], [], [a - S(1)/2, a, a], [], z))
assert can_do_meijer([a - 1], [], [a + 2, a - S(3)/2, a + 1], [])
@XFAIL
def test_meijerg_expand_fail():
# These basically test hyper([], [1/2 - a, 1/2 + 1, 1/2], z),
# which is *very* messy. But since the meijer g actually yields a
# sum of bessel functions, things can sometimes be simplified a lot and
# are then put into tables...
assert can_do_meijer([], [], [a + S.Half], [a, a - b/2, a + b/2])
assert can_do_meijer([], [], [0, S.Half], [a, -a])
assert can_do_meijer([], [], [3*a - S.Half, a, -a - S.Half], [a - S.Half])
assert can_do_meijer([], [], [0, a - S.Half, -a - S.Half], [S.Half])
assert can_do_meijer([], [], [a, b + S(1)/2, b], [2*b - a])
assert can_do_meijer([], [], [a, b + S(1)/2, b, 2*b - a])
assert can_do_meijer([S.Half], [], [-a, a], [0])
@slow
def test_meijerg():
# carefully set up the parameters.
# NOTE: this used to fail sometimes. I believe it is fixed, but if you
# hit an inexplicable test failure here, please let me know the seed.
a1, a2 = (randcplx(n) - 5*I - n*I for n in range(2))
b1, b2 = (randcplx(n) + 5*I + n*I for n in range(2))
b3, b4, b5, a3, a4, a5 = (randcplx() for n in range(6))
g = meijerg([a1], [a3, a4], [b1], [b3, b4], z)
assert ReduceOrder.meijer_minus(3, 4) is None
assert ReduceOrder.meijer_plus(4, 3) is None
g2 = meijerg([a1, a2], [a3, a4], [b1], [b3, b4, a2], z)
assert tn(ReduceOrder.meijer_plus(a2, a2).apply(g, op), g2, z)
g2 = meijerg([a1, a2], [a3, a4], [b1], [b3, b4, a2 + 1], z)
assert tn(ReduceOrder.meijer_plus(a2, a2 + 1).apply(g, op), g2, z)
g2 = meijerg([a1, a2 - 1], [a3, a4], [b1], [b3, b4, a2 + 2], z)
assert tn(ReduceOrder.meijer_plus(a2 - 1, a2 + 2).apply(g, op), g2, z)
g2 = meijerg([a1], [a3, a4, b2 - 1], [b1, b2 + 2], [b3, b4], z)
assert tn(ReduceOrder.meijer_minus(
b2 + 2, b2 - 1).apply(g, op), g2, z, tol=1e-6)
# test several-step reduction
an = [a1, a2]
bq = [b3, b4, a2 + 1]
ap = [a3, a4, b2 - 1]
bm = [b1, b2 + 1]
niq, ops = reduce_order_meijer(G_Function(an, ap, bm, bq))
assert niq.an == (a1,)
assert set(niq.ap) == set([a3, a4])
assert niq.bm == (b1,)
assert set(niq.bq) == set([b3, b4])
assert tn(apply_operators(g, ops, op), meijerg(an, ap, bm, bq, z), z)
def test_meijerg_shift_operators():
# carefully set up the parameters. XXX this still fails sometimes
a1, a2, a3, a4, a5, b1, b2, b3, b4, b5 = (randcplx(n) for n in range(10))
g = meijerg([a1], [a3, a4], [b1], [b3, b4], z)
assert tn(MeijerShiftA(b1).apply(g, op),
meijerg([a1], [a3, a4], [b1 + 1], [b3, b4], z), z)
assert tn(MeijerShiftB(a1).apply(g, op),
meijerg([a1 - 1], [a3, a4], [b1], [b3, b4], z), z)
assert tn(MeijerShiftC(b3).apply(g, op),
meijerg([a1], [a3, a4], [b1], [b3 + 1, b4], z), z)
assert tn(MeijerShiftD(a3).apply(g, op),
meijerg([a1], [a3 - 1, a4], [b1], [b3, b4], z), z)
s = MeijerUnShiftA([a1], [a3, a4], [b1], [b3, b4], 0, z)
assert tn(
s.apply(g, op), meijerg([a1], [a3, a4], [b1 - 1], [b3, b4], z), z)
s = MeijerUnShiftC([a1], [a3, a4], [b1], [b3, b4], 0, z)
assert tn(
s.apply(g, op), meijerg([a1], [a3, a4], [b1], [b3 - 1, b4], z), z)
s = MeijerUnShiftB([a1], [a3, a4], [b1], [b3, b4], 0, z)
assert tn(
s.apply(g, op), meijerg([a1 + 1], [a3, a4], [b1], [b3, b4], z), z)
s = MeijerUnShiftD([a1], [a3, a4], [b1], [b3, b4], 0, z)
assert tn(
s.apply(g, op), meijerg([a1], [a3 + 1, a4], [b1], [b3, b4], z), z)
@slow
def test_meijerg_confluence():
def t(m, a, b):
from sympy import sympify, Piecewise
a, b = sympify([a, b])
m_ = m
m = hyperexpand(m)
if not m == Piecewise((a, abs(z) < 1), (b, abs(1/z) < 1), (m_, True)):
return False
if not (m.args[0].args[0] == a and m.args[1].args[0] == b):
return False
z0 = randcplx()/10
if abs(m.subs(z, z0).n() - a.subs(z, z0).n()).n() > 1e-10:
return False
if abs(m.subs(z, 1/z0).n() - b.subs(z, 1/z0).n()).n() > 1e-10:
return False
return True
assert t(meijerg([], [1, 1], [0, 0], [], z), -log(z), 0)
assert t(meijerg(
[], [3, 1], [0, 0], [], z), -z**2/4 + z - log(z)/2 - S(3)/4, 0)
assert t(meijerg([], [3, 1], [-1, 0], [], z),
z**2/12 - z/2 + log(z)/2 + S(1)/4 + 1/(6*z), 0)
assert t(meijerg([], [1, 1, 1, 1], [0, 0, 0, 0], [], z), -log(z)**3/6, 0)
assert t(meijerg([1, 1], [], [], [0, 0], z), 0, -log(1/z))
assert t(meijerg([1, 1], [2, 2], [1, 1], [0, 0], z),
-z*log(z) + 2*z, -log(1/z) + 2)
assert t(meijerg([S(1)/2], [1, 1], [0, 0], [S(3)/2], z), log(z)/2 - 1, 0)
def u(an, ap, bm, bq):
m = meijerg(an, ap, bm, bq, z)
m2 = hyperexpand(m, allow_hyper=True)
if m2.has(meijerg) and not (m2.is_Piecewise and len(m2.args) == 3):
return False
return tn(m, m2, z)
assert u([], [1], [0, 0], [])
assert u([1, 1], [], [], [0])
assert u([1, 1], [2, 2, 5], [1, 1, 6], [0, 0])
assert u([1, 1], [2, 2, 5], [1, 1, 6], [0])
def test_lerchphi():
from sympy import combsimp, exp_polar, polylog, log, lerchphi
assert hyperexpand(hyper([1, a], [a + 1], z)/a) == lerchphi(z, 1, a)
assert hyperexpand(
hyper([1, a, a], [a + 1, a + 1], z)/a**2) == lerchphi(z, 2, a)
assert hyperexpand(hyper([1, a, a, a], [a + 1, a + 1, a + 1], z)/a**3) == \
lerchphi(z, 3, a)
assert hyperexpand(hyper([1] + [a]*10, [a + 1]*10, z)/a**10) == \
lerchphi(z, 10, a)
assert combsimp(hyperexpand(meijerg([0, 1 - a], [], [0],
[-a], exp_polar(-I*pi)*z))) == lerchphi(z, 1, a)
assert combsimp(hyperexpand(meijerg([0, 1 - a, 1 - a], [], [0],
[-a, -a], exp_polar(-I*pi)*z))) == lerchphi(z, 2, a)
assert combsimp(hyperexpand(meijerg([0, 1 - a, 1 - a, 1 - a], [], [0],
[-a, -a, -a], exp_polar(-I*pi)*z))) == lerchphi(z, 3, a)
    assert hyperexpand(z*hyper([1, 1], [2], z)) == -log(1 - z)
assert hyperexpand(z*hyper([1, 1, 1], [2, 2], z)) == polylog(2, z)
assert hyperexpand(z*hyper([1, 1, 1, 1], [2, 2, 2], z)) == polylog(3, z)
assert hyperexpand(hyper([1, a, 1 + S(1)/2], [a + 1, S(1)/2], z)) == \
-2*a/(z - 1) + (-2*a**2 + a)*lerchphi(z, 1, a)
# Now numerical tests. These make sure reductions etc are carried out
# correctly
# a rational function (polylog at negative integer order)
assert can_do([2, 2, 2], [1, 1])
# NOTE these contain log(1-x) etc ... better make sure we have |z| < 1
# reduction of order for polylog
assert can_do([1, 1, 1, b + 5], [2, 2, b], div=10)
# reduction of order for lerchphi
# XXX lerchphi in mpmath is flaky
assert can_do(
[1, a, a, a, b + 5], [a + 1, a + 1, a + 1, b], numerical=False)
# test a bug
from sympy import Abs
assert hyperexpand(hyper([S(1)/2, S(1)/2, S(1)/2, 1],
[S(3)/2, S(3)/2, S(3)/2], S(1)/4)) == \
Abs(-polylog(3, exp_polar(I*pi)/2) + polylog(3, S(1)/2))
def test_partial_simp():
# First test that hypergeometric function formulae work.
a, b, c, d, e = (randcplx() for _ in range(5))
for func in [Hyper_Function([a, b, c], [d, e]),
Hyper_Function([], [a, b, c, d, e])]:
f = build_hypergeometric_formula(func)
z = f.z
assert f.closed_form == func(z)
deriv1 = f.B.diff(z)*z
deriv2 = f.M*f.B
for func1, func2 in zip(deriv1, deriv2):
assert tn(func1, func2, z)
# Now test that formulae are partially simplified.
from sympy.abc import a, b, z
assert hyperexpand(hyper([3, a], [1, b], z)) == \
(-a*b/2 + a*z/2 + 2*a)*hyper([a + 1], [b], z) \
+ (a*b/2 - 2*a + 1)*hyper([a], [b], z)
assert tn(
hyperexpand(hyper([3, d], [1, e], z)), hyper([3, d], [1, e], z), z)
assert hyperexpand(hyper([3], [1, a, b], z)) == \
hyper((), (a, b), z) \
+ z*hyper((), (a + 1, b), z)/(2*a) \
- z*(b - 4)*hyper((), (a + 1, b + 1), z)/(2*a*b)
assert tn(
hyperexpand(hyper([3], [1, d, e], z)), hyper([3], [1, d, e], z), z)
def test_hyperexpand_special():
assert hyperexpand(hyper([a, b], [c], 1)) == \
gamma(c)*gamma(c - a - b)/gamma(c - a)/gamma(c - b)
assert hyperexpand(hyper([a, b], [1 + a - b], -1)) == \
gamma(1 + a/2)*gamma(1 + a - b)/gamma(1 + a)/gamma(1 + a/2 - b)
assert hyperexpand(hyper([a, b], [1 + b - a], -1)) == \
gamma(1 + b/2)*gamma(1 + b - a)/gamma(1 + b)/gamma(1 + b/2 - a)
assert hyperexpand(meijerg([1 - z - a/2], [1 - z + a/2], [b/2], [-b/2], 1)) == \
gamma(1 - 2*z)*gamma(z + a/2 + b/2)/gamma(1 - z + a/2 - b/2) \
/gamma(1 - z - a/2 + b/2)/gamma(1 - z + a/2 + b/2)
assert hyperexpand(hyper([a], [b], 0)) == 1
assert hyper([a], [b], 0) != 0
def test_Mod1_behavior():
from sympy import Symbol, simplify, lowergamma
n = Symbol('n', integer=True)
# Note: this should not hang.
assert simplify(hyperexpand(meijerg([1], [], [n + 1], [0], z))) == \
lowergamma(n + 1, z)
@slow
def test_prudnikov_misc():
assert can_do([1, (3 + I)/2, (3 - I)/2], [S(3)/2, 2])
assert can_do([S.Half, a - 1], [S(3)/2, a + 1], lowerplane=True)
assert can_do([], [b + 1])
assert can_do([a], [a - 1, b + 1])
assert can_do([a], [a - S.Half, 2*a])
assert can_do([a], [a - S.Half, 2*a + 1])
assert can_do([a], [a - S.Half, 2*a - 1])
assert can_do([a], [a + S.Half, 2*a])
assert can_do([a], [a + S.Half, 2*a + 1])
assert can_do([a], [a + S.Half, 2*a - 1])
assert can_do([S.Half], [b, 2 - b])
assert can_do([S.Half], [b, 3 - b])
assert can_do([1], [2, b])
assert can_do([a, a + S.Half], [2*a, b, 2*a - b + 1])
assert can_do([a, a + S.Half], [S.Half, 2*a, 2*a + S.Half])
assert can_do([a], [a + 1], lowerplane=True) # lowergamma
@slow
def test_prudnikov_1():
# A. P. Prudnikov, Yu. A. Brychkov and O. I. Marichev (1990).
    # Integrals and Series: More Special Functions, Vol. 3.
    # Gordon and Breach Science Publishers
# 7.3.1
assert can_do([a, -a], [S.Half])
assert can_do([a, 1 - a], [S.Half])
assert can_do([a, 1 - a], [S(3)/2])
assert can_do([a, 2 - a], [S.Half])
assert can_do([a, 2 - a], [S(3)/2])
assert can_do([a, 2 - a], [S(3)/2])
assert can_do([a, a + S(1)/2], [2*a - 1])
assert can_do([a, a + S(1)/2], [2*a])
assert can_do([a, a + S(1)/2], [2*a + 1])
assert can_do([a, a + S(1)/2], [S(1)/2])
assert can_do([a, a + S(1)/2], [S(3)/2])
assert can_do([a, a/2 + 1], [a/2])
assert can_do([1, b], [2])
assert can_do([1, b], [b + 1], numerical=False) # Lerch Phi
# NOTE: branches are complicated for |z| > 1
assert can_do([a], [2*a])
assert can_do([a], [2*a + 1])
assert can_do([a], [2*a - 1])
@slow
def test_prudnikov_2():
h = S.Half
assert can_do([-h, -h], [h])
assert can_do([-h, h], [3*h])
assert can_do([-h, h], [5*h])
assert can_do([-h, h], [7*h])
assert can_do([-h, 1], [h])
for p in [-h, h]:
for n in [-h, h, 1, 3*h, 2, 5*h, 3, 7*h, 4]:
for m in [-h, h, 3*h, 5*h, 7*h]:
assert can_do([p, n], [m])
for n in [1, 2, 3, 4]:
for m in [1, 2, 3, 4]:
assert can_do([p, n], [m])
@slow
def test_prudnikov_3():
h = S.Half
assert can_do([S(1)/4, S(3)/4], [h])
assert can_do([S(1)/4, S(3)/4], [3*h])
assert can_do([S(1)/3, S(2)/3], [3*h])
assert can_do([S(3)/4, S(5)/4], [h])
assert can_do([S(3)/4, S(5)/4], [3*h])
for p in [1, 2, 3, 4]:
for n in [-h, h, 1, 3*h, 2, 5*h, 3, 7*h, 4, 9*h]:
for m in [1, 3*h, 2, 5*h, 3, 7*h, 4]:
assert can_do([p, m], [n])
@slow
def test_prudnikov_4():
h = S.Half
for p in [3*h, 5*h, 7*h]:
for n in [-h, h, 3*h, 5*h, 7*h]:
for m in [3*h, 2, 5*h, 3, 7*h, 4]:
assert can_do([p, m], [n])
for n in [1, 2, 3, 4]:
for m in [2, 3, 4]:
assert can_do([p, m], [n])
@slow
def test_prudnikov_5():
h = S.Half
for p in [1, 2, 3]:
for q in range(p, 4):
for r in [1, 2, 3]:
for s in range(r, 4):
assert can_do([-h, p, q], [r, s])
for p in [h, 1, 3*h, 2, 5*h, 3]:
for q in [h, 3*h, 5*h]:
for r in [h, 3*h, 5*h]:
for s in [h, 3*h, 5*h]:
if s <= q and s <= r:
assert can_do([-h, p, q], [r, s])
for p in [h, 1, 3*h, 2, 5*h, 3]:
for q in [1, 2, 3]:
for r in [h, 3*h, 5*h]:
for s in [1, 2, 3]:
assert can_do([-h, p, q], [r, s])
@slow
def test_prudnikov_6():
h = S.Half
for m in [3*h, 5*h]:
for n in [1, 2, 3]:
for q in [h, 1, 2]:
for p in [1, 2, 3]:
assert can_do([h, q, p], [m, n])
for q in [1, 2, 3]:
for p in [3*h, 5*h]:
assert can_do([h, q, p], [m, n])
for q in [1, 2]:
for p in [1, 2, 3]:
for m in [1, 2, 3]:
for n in [1, 2, 3]:
assert can_do([h, q, p], [m, n])
assert can_do([h, h, 5*h], [3*h, 3*h])
assert can_do([h, 1, 5*h], [3*h, 3*h])
assert can_do([h, 2, 2], [1, 3])
# pages 435 to 457 contain more PFDD and stuff like this
@slow
def test_prudnikov_7():
assert can_do([3], [6])
h = S.Half
for n in [h, 3*h, 5*h, 7*h]:
assert can_do([-h], [n])
for m in [-h, h, 1, 3*h, 2, 5*h, 3, 7*h, 4]: # HERE
for n in [-h, h, 3*h, 5*h, 7*h, 1, 2, 3, 4]:
assert can_do([m], [n])
@slow
def test_prudnikov_8():
h = S.Half
# 7.12.2
for a in [1, 2, 3]:
for b in [1, 2, 3]:
for c in range(1, a + 1):
for d in [h, 1, 3*h, 2, 5*h, 3]:
assert can_do([a, b], [c, d])
for b in [3*h, 5*h]:
for c in [h, 1, 3*h, 2, 5*h, 3]:
for d in [1, 2, 3]:
assert can_do([a, b], [c, d])
for a in [-h, h, 3*h, 5*h]:
for b in [1, 2, 3]:
for c in [h, 1, 3*h, 2, 5*h, 3]:
for d in [1, 2, 3]:
assert can_do([a, b], [c, d])
for b in [h, 3*h, 5*h]:
for c in [h, 3*h, 5*h, 3]:
for d in [h, 1, 3*h, 2, 5*h, 3]:
if c <= b:
assert can_do([a, b], [c, d])
def test_prudnikov_9():
# 7.13.1 [we have a general formula ... so this is a bit pointless]
for i in range(9):
assert can_do([], [(S(i) + 1)/2])
for i in range(5):
assert can_do([], [-(2*S(i) + 1)/2])
@slow
def test_prudnikov_10():
# 7.14.2
h = S.Half
for p in [-h, h, 1, 3*h, 2, 5*h, 3, 7*h, 4]:
for m in [1, 2, 3, 4]:
for n in range(m, 5):
assert can_do([p], [m, n])
for p in [1, 2, 3, 4]:
for n in [h, 3*h, 5*h, 7*h]:
for m in [1, 2, 3, 4]:
assert can_do([p], [n, m])
for p in [3*h, 5*h, 7*h]:
for m in [h, 1, 2, 5*h, 3, 7*h, 4]:
assert can_do([p], [h, m])
assert can_do([p], [3*h, m])
for m in [h, 1, 2, 5*h, 3, 7*h, 4]:
assert can_do([7*h], [5*h, m])
    assert can_do([-S(1)/2], [S(1)/2, S(1)/2]) # sinh-integral Shi
def test_prudnikov_11():
# 7.15
assert can_do([a, a + S.Half], [2*a, b, 2*a - b])
assert can_do([a, a + S.Half], [S(3)/2, 2*a, 2*a - S(1)/2])
assert can_do([S(1)/4, S(3)/4], [S(1)/2, S(1)/2, 1])
assert can_do([S(5)/4, S(3)/4], [S(3)/2, S(1)/2, 2])
assert can_do([S(5)/4, S(3)/4], [S(3)/2, S(3)/2, 1])
assert can_do([S(5)/4, S(7)/4], [S(3)/2, S(5)/2, 2])
assert can_do([1, 1], [S(3)/2, 2, 2]) # cosh-integral chi
@slow
def test_prudnikov_12():
# 7.16
assert can_do(
[], [a, a + S.Half, 2*a], False) # branches only agree for some z!
    assert can_do([], [a, a + S.Half, 2*a + 1], False) # ditto
assert can_do([], [S.Half, a, a + S.Half])
assert can_do([], [S(3)/2, a, a + S.Half])
assert can_do([], [S(1)/4, S(1)/2, S(3)/4])
assert can_do([], [S(1)/2, S(1)/2, 1])
assert can_do([], [S(1)/2, S(3)/2, 1])
assert can_do([], [S(3)/4, S(3)/2, S(5)/4])
assert can_do([], [1, 1, S(3)/2])
assert can_do([], [1, 2, S(3)/2])
assert can_do([], [1, S(3)/2, S(3)/2])
assert can_do([], [S(5)/4, S(3)/2, S(7)/4])
assert can_do([], [2, S(3)/2, S(3)/2])
@slow
def test_prudnikov_2F1():
h = S.Half
# Elliptic integrals
for p in [-h, h]:
for m in [h, 3*h, 5*h, 7*h]:
for n in [1, 2, 3, 4]:
assert can_do([p, m], [n])
@XFAIL
def test_prudnikov_fail_2F1():
assert can_do([a, b], [b + 1]) # incomplete beta function
assert can_do([-1, b], [c]) # Poly. also -2, -3 etc
# TODO polys
# Legendre functions:
assert can_do([a, b], [a + b + S.Half])
assert can_do([a, b], [a + b - S.Half])
assert can_do([a, b], [a + b + S(3)/2])
assert can_do([a, b], [(a + b + 1)/2])
assert can_do([a, b], [(a + b)/2 + 1])
assert can_do([a, b], [a - b + 1])
assert can_do([a, b], [a - b + 2])
assert can_do([a, b], [2*b])
assert can_do([a, b], [S.Half])
assert can_do([a, b], [S(3)/2])
assert can_do([a, 1 - a], [c])
assert can_do([a, 2 - a], [c])
assert can_do([a, 3 - a], [c])
assert can_do([a, a + S(1)/2], [c])
assert can_do([1, b], [c])
assert can_do([1, b], [S(3)/2])
assert can_do([S(1)/4, S(3)/4], [1])
# PFDD
o = S(1)
assert can_do([o/8, 1], [o/8*9])
assert can_do([o/6, 1], [o/6*7])
assert can_do([o/6, 1], [o/6*13])
assert can_do([o/5, 1], [o/5*6])
assert can_do([o/5, 1], [o/5*11])
assert can_do([o/4, 1], [o/4*5])
assert can_do([o/4, 1], [o/4*9])
assert can_do([o/3, 1], [o/3*4])
assert can_do([o/3, 1], [o/3*7])
assert can_do([o/8*3, 1], [o/8*11])
assert can_do([o/5*2, 1], [o/5*7])
assert can_do([o/5*2, 1], [o/5*12])
assert can_do([o/5*3, 1], [o/5*8])
assert can_do([o/5*3, 1], [o/5*13])
assert can_do([o/8*5, 1], [o/8*13])
assert can_do([o/4*3, 1], [o/4*7])
assert can_do([o/4*3, 1], [o/4*11])
assert can_do([o/3*2, 1], [o/3*5])
assert can_do([o/3*2, 1], [o/3*8])
assert can_do([o/5*4, 1], [o/5*9])
assert can_do([o/5*4, 1], [o/5*14])
assert can_do([o/6*5, 1], [o/6*11])
assert can_do([o/6*5, 1], [o/6*17])
assert can_do([o/8*7, 1], [o/8*15])
@XFAIL
def test_prudnikov_fail_3F2():
assert can_do([a, a + S(1)/3, a + S(2)/3], [S(1)/3, S(2)/3])
assert can_do([a, a + S(1)/3, a + S(2)/3], [S(2)/3, S(4)/3])
assert can_do([a, a + S(1)/3, a + S(2)/3], [S(4)/3, S(5)/3])
# page 421
assert can_do([a, a + S(1)/3, a + S(2)/3], [3*a/2, (3*a + 1)/2])
# pages 422 ...
assert can_do([-S.Half, S.Half, S.Half], [1, 1]) # elliptic integrals
assert can_do([-S.Half, S.Half, 1], [S(3)/2, S(3)/2])
# TODO LOTS more
# PFDD
assert can_do([S(1)/8, S(3)/8, 1], [S(9)/8, S(11)/8])
assert can_do([S(1)/8, S(5)/8, 1], [S(9)/8, S(13)/8])
assert can_do([S(1)/8, S(7)/8, 1], [S(9)/8, S(15)/8])
assert can_do([S(1)/6, S(1)/3, 1], [S(7)/6, S(4)/3])
assert can_do([S(1)/6, S(2)/3, 1], [S(7)/6, S(5)/3])
assert can_do([S(1)/6, S(2)/3, 1], [S(5)/3, S(13)/6])
assert can_do([S.Half, 1, 1], [S(1)/4, S(3)/4])
# LOTS more
@XFAIL
def test_prudnikov_fail_other():
# 7.11.2
# 7.12.1
assert can_do([1, a], [b, 1 - 2*a + b]) # ???
# 7.14.2
assert can_do([-S(1)/2], [S(1)/2, 1]) # struve
assert can_do([1], [S(1)/2, S(1)/2]) # struve
assert can_do([S(1)/4], [S(1)/2, S(5)/4]) # PFDD
assert can_do([S(3)/4], [S(3)/2, S(7)/4]) # PFDD
assert can_do([1], [S(1)/4, S(3)/4]) # PFDD
assert can_do([1], [S(3)/4, S(5)/4]) # PFDD
assert can_do([1], [S(5)/4, S(7)/4]) # PFDD
# TODO LOTS more
# 7.15.2
assert can_do([S(1)/2, 1], [S(3)/4, S(5)/4, S(3)/2]) # PFDD
assert can_do([S(1)/2, 1], [S(7)/4, S(5)/4, S(3)/2]) # PFDD
# 7.16.1
    assert can_do([], [S(1)/3, S(2)/3]) # PFDD
    assert can_do([], [S(2)/3, S(4)/3]) # PFDD
    assert can_do([], [S(5)/3, S(4)/3]) # PFDD
# XXX this does not *evaluate* right??
assert can_do([], [a, a + S.Half, 2*a - 1])
def test_bug():
h = hyper([-1, 1], [z], -1)
assert hyperexpand(h) == (z + 1)/z | unknown | codeparrot/codeparrot-clean | ||
# hw1.py
# Name: Connor Durkin
# netID: cwd28
# Date: 29 September 2015
# Class: CPSC 458
# Instructor: Prof. Stephen Slade
import random
import numpy
import json
# initialize some useful global variables
global in_play
in_play = False
global outcome
outcome = " start game"
score = 0
# define globals for cards
SUITS = ('C', 'S', 'H', 'D')
RANKS = ('A', '2', '3', '4', '5', '6', '7', '8', '9', 'T', 'J', 'Q', 'K')
VALUES = {'A':1, '2':2, '3':3, '4':4, '5':5, '6':6, '7':7, '8':8, '9':9, 'T':10, 'J':10, 'Q':10, 'K':10}
# define card class
class Card:
def __init__(self, suit, rank):
if (suit in SUITS) and (rank in RANKS):
self.suit = suit
self.rank = rank
else:
self.suit = None
self.rank = None
            print("Invalid card:", suit, rank)
def __str__(self):
return self.suit + self.rank
def get_suit(self):
return self.suit
def get_rank(self):
return self.rank
# define hand class
class Hand:
def __init__(self):
self.cards = []
def __str__(self):
ans = "Hand contains "
for i in range(len(self.cards)):
ans += str(self.cards[i]) + " "
return ans
# return a string representation of a hand
def add_card(self, card):
self.cards.append(card)
# add a card object to a hand
def get_value(self):
value = 0
aces = False
for c in self.cards:
rank = c.get_rank()
v = VALUES[rank]
if rank == 'A': aces = True
value += v
if aces and value < 12: value += 10
return value
# count aces as 1, if the hand has an ace, then add 10 to hand value if it doesn't bust
# compute the value of the hand, see Blackjack video
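The ace rule described above (count every ace as 1, then add 10 if the hand holds an ace and doing so does not bust) can be sketched as a standalone helper. `hand_value` is a hypothetical function for illustration, not part of the assignment:

```python
def hand_value(ranks):
    # Standalone version of the Hand.get_value() logic: aces count
    # as 1; if the hand holds an ace and the total is under 12,
    # one ace can safely be promoted to 11 by adding 10.
    values = {'A': 1, '2': 2, '3': 3, '4': 4, '5': 5, '6': 6, '7': 7,
              '8': 8, '9': 9, 'T': 10, 'J': 10, 'Q': 10, 'K': 10}
    total = sum(values[r] for r in ranks)
    if 'A' in ranks and total < 12:
        total += 10
    return total
```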
# define deck class
class Deck:
def __init__(self):
self.deck = []
for s in SUITS:
for r in RANKS:
self.deck.append(Card(s, r))
# create a Deck object
def shuffle(self):
random.shuffle(self.deck)
# shuffle the deck
def deal_card(self):
return self.deck.pop()
# deal a card object from the deck
def __str__(self):
ans = "The deck: "
for c in self.deck:
ans += str(c) + " "
return ans
# return a string representing the deck
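The Deck is simply the Cartesian product of SUITS and RANKS, shuffled in place and dealt from the end; a compact standalone equivalent (strings instead of Card objects, for illustration only):

```python
import itertools
import random

SUITS = ('C', 'S', 'H', 'D')
RANKS = ('A', '2', '3', '4', '5', '6', '7', '8', '9', 'T', 'J', 'Q', 'K')

# A full 52-card deck is the product of the 4 suits and 13 ranks.
deck = [s + r for s, r in itertools.product(SUITS, RANKS)]
random.shuffle(deck)   # shuffle in place, like Deck.shuffle()
top_card = deck.pop()  # deal from the end, like Deck.deal_card()
```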
#define event handlers for buttons
def deal():
global outcome, in_play, theDeck, playerhand, househand, score
if in_play:
        outcome = "House wins by default!"
score -= 1
else:
outcome = "Hit or stand?"
in_play = True
theDeck = Deck()
theDeck.shuffle()
#print theDeck
playerhand = Hand()
househand = Hand()
playerhand.add_card(theDeck.deal_card())
playerhand.add_card(theDeck.deal_card())
househand.add_card(theDeck.deal_card())
househand.add_card(theDeck.deal_card())
#print "Player", playerhand, "Value:", playerhand.get_value()
#print "House", househand, "Value:", househand.get_value()
#print theDeck
def hit():
global in_play, score, outcome
if in_play:
playerhand.add_card(theDeck.deal_card())
val = playerhand.get_value()
#print "Player", playerhand, "Value:", val
if val > 21:
outcome = "You are busted! House wins!"
in_play = False
score -= 1
#print outcome, "Score:", score
# if the hand is in play, hit the player
# if busted, assign a message to outcome, update in_play and score
def stand():
    global score, in_play, outcome
    if playerhand.get_value() > 21:
        outcome = "You are busted."
        return None
    if not in_play:
        outcome = "Game is over."
        return None
    val = househand.get_value()
    while val < 17:
        househand.add_card(theDeck.deal_card())
        val = househand.get_value()
        # print "House:", househand, "Value:", val
    if val > 21:
        # print "House is busted!"
        if playerhand.get_value() > 21:
            outcome = "House is busted, but House wins tie game!"
            score -= 1
        else:
            outcome = "House is busted! Player wins!"
            score += 1
    else:
        if val == playerhand.get_value():
            outcome = "House wins ties!"
            score -= 1
        elif val > playerhand.get_value():
            outcome = "House wins!"
            score -= 1
        else:
            outcome = "Player wins!"
            score += 1
    in_play = False
    # print outcome, "Score:", score
    # if hand is in play, repeatedly hit dealer until his hand has value 17 or more
    # assign a message to outcome, update in_play and score
# sim
# performs Monte Carlo simulation to generate transcript
def sim(trials):
    transcript = {}
    for dealer_face_score in range(1, 11):
        for player_hand_value in range(1, 22):
            matrix_key = '{0}{1}'.format(player_hand_value, dealer_face_score)
            transcript[matrix_key] = []
    for i in range(trials):
        s = score
        bust = False
        deal()
        matrix_key = '{0}{1}'.format(playerhand.get_value(),
                                     VALUES[househand.cards[0].get_rank()])
        while not bust:
            hit()
            if (score - s) >= 0:
                # the hit did not bust this hand: record a success for this state
                transcript.setdefault(matrix_key, []).append(1)
            else:
                # the hit busted the hand: record a failure and end this trial
                transcript.setdefault(matrix_key, []).append(0)
                bust = True
            matrix_key = '{0}{1}'.format(playerhand.get_value(),
                                         VALUES[househand.cards[0].get_rank()])
    # Collapse the recorded win/loss lists into mean success probabilities
    # (states never visited keep a probability of 0.0)
    transcript.update({n: (numpy.mean(transcript[n]) if transcript[n] else 0.0)
                       for n in transcript.keys()})
    json.dump(transcript, open("transcript", 'w'))
# hitme
# performs lookup function to transcript
def hitme(player_hand, dealerfacecard):
    transcript = json.load(open("transcript", "r"))
    matrix_key = '{0}{1}'.format(player_hand, dealerfacecard)
    # hit whenever the recorded probability of surviving a hit exceeds 1/2
    should_hit = (transcript[matrix_key] > .5)
    return should_hit
# play
# plays blackjack many times using the hitme function to determine whether or
# not to hit and returns win ratio
wins = []
def play(trials):
    global in_play, score
    score = 0
    in_play = False
    for i in range(trials):
        deal()
        s = score
        while in_play:
            player_hand = playerhand.get_value()
            dealerfacecard = VALUES[househand.cards[0].get_rank()]
            if hitme(player_hand, dealerfacecard):
                hit()
            else:
                stand()
        if (score - s) > 0:
            wins.append(1)
        else:
            wins.append(0)
    print numpy.mean(wins)
    return numpy.mean(wins)
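The ace-handling rule used by `Hand.get_value` above (count every ace as 1, then promote a single ace to 11 when that cannot bust the hand) can be sketched in isolation. The `VALUES` table below mirrors the one the file relies on but never shows, so treat it as an assumption:

```python
# Rank-to-value table assumed by the Blackjack code above (not shown in the file).
VALUES = {'A': 1, '2': 2, '3': 3, '4': 4, '5': 5, '6': 6, '7': 7,
          '8': 8, '9': 9, 'T': 10, 'J': 10, 'Q': 10, 'K': 10}

def hand_value(ranks):
    """Value of a Blackjack hand given a list of rank characters."""
    value = sum(VALUES[r] for r in ranks)
    # Promote one ace from 1 to 11 when that cannot bust the hand:
    # value < 12 guarantees value + 10 <= 21.
    if 'A' in ranks and value < 12:
        value += 10
    return value
```

Note that at most one ace is ever counted as 11: promoting two would add 22 and always bust.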
# coding: utf-8
#-----------------------------------------------------------------------------
# Copyright (C) 2013 The IPython Development Team
#
# Distributed under the terms of the BSD License. The full license is in
# the file COPYING, distributed as part of this software.
#-----------------------------------------------------------------------------

import requests

from unittest import SkipTest

from ....tests.base import NBViewerTestCase, FormatHTMLMixin


class GitHubTestCase(NBViewerTestCase):

    def ipython_example(self, *parts, **kwargs):
        ref = kwargs.get('ref', 'rel-2.0.0')
        return self.url('github/ipython/ipython/blob/%s/examples' % ref, *parts)

    def test_github(self):
        url = self.ipython_example('Index.ipynb')
        r = requests.get(url)
        self.assertEqual(r.status_code, 200)

    def test_github_unicode(self):
        url = self.url('github/tlapicka/IPythonNotebooks/blob',
                       'ee6d2d13b96023e5f5e38e4516803eb22ede977e',
                       u'Matplotlib -- osy a mřížka.ipynb',
                       )
        r = requests.get(url)
        self.assertEqual(r.status_code, 200)

    def test_github_blob_redirect_unicode(self):
        url = self.url('/urls/github.com/tlapicka/IPythonNotebooks/blob',
                       'ee6d2d13b96023e5f5e38e4516803eb22ede977e',
                       u'Matplotlib -- osy a mřížka.ipynb',
                       )
        r = requests.get(url)
        self.assertEqual(r.status_code, 200)
        # verify redirect
        self.assertIn('/github/tlapicka/IPythonNotebooks/blob/', r.request.url)

    def test_github_raw_redirect_unicode(self):
        url = self.url('/url/raw.github.com/tlapicka/IPythonNotebooks',
                       'ee6d2d13b96023e5f5e38e4516803eb22ede977e',
                       u'Matplotlib -- osy a mřížka.ipynb',
                       )
        r = requests.get(url)
        self.assertEqual(r.status_code, 200)
        # verify redirect
        self.assertIn('/github/tlapicka/IPythonNotebooks/blob/', r.request.url)

    def test_github_tag(self):
        url = self.ipython_example('Index.ipynb', ref='rel-2.0.0')
        r = requests.get(url)
        self.assertEqual(r.status_code, 200)

    def test_github_commit(self):
        url = self.ipython_example('Index.ipynb',
                                   ref='7f5cbd622058396f1f33c4b26c8d205a8dd26d16'
                                   )
        r = requests.get(url)
        self.assertEqual(r.status_code, 200)
    def test_github_blob_redirect(self):
        url = self.url(
            'urls/github.com/ipython/ipython/blob/rel-2.0.0/examples',
            'Index.ipynb',
        )
        r = requests.get(url)
        self.assertEqual(r.status_code, 200)
        # verify redirect
        self.assertIn('/github/ipython/ipython/blob/master', r.request.url)

    def test_github_raw_redirect(self):
        url = self.url(
            'urls/raw.github.com/ipython/ipython/rel-2.0.0/examples',
            'Index.ipynb',
        )
        r = requests.get(url)
        self.assertEqual(r.status_code, 200)
        # verify redirect
        self.assertIn('/github/ipython/ipython/blob/rel-2.0.0/examples', r.request.url)

    def test_github_rawusercontent_redirect(self):
        """Test GitHub's new raw domain"""
        url = self.url(
            'urls/raw.githubusercontent.com/ipython/ipython/rel-2.0.0/examples',
            'Index.ipynb',
        )
        r = requests.get(url)
        self.assertEqual(r.status_code, 200)
        # verify redirect
        self.assertIn('/github/ipython/ipython/blob/rel-2.0.0/examples', r.request.url)

    def test_github_raw_redirect_2(self):
        """test /url/github.com/u/r/raw/ redirects"""
        url = self.url(
            "url/github.com/ipython/ipython/blob/rel-2.0.0/examples",
            "Index.ipynb"
        )
        r = requests.get(url)
        self.assertEqual(r.status_code, 200)
        # verify redirect
        self.assertIn('/github/ipython/ipython/blob/rel-2.0.0', r.request.url)

    def test_github_repo_redirect(self):
        url = self.url("github/ipython/ipython")
        r = requests.get(url)
        self.assertEqual(r.status_code, 200)
        # verify redirect
        self.assertIn('/github/ipython/ipython/tree/master', r.request.url)

    def test_github_tree(self):
        url = self.url("github/ipython/ipython/tree/rel-2.0.0/IPython/")
        r = requests.get(url)
        self.assertEqual(r.status_code, 200)
        self.assertIn("__init__.py", r.text)

    def test_github_tree_redirect(self):
        url = self.url("github/ipython/ipython/tree/rel-2.0.0/MANIFEST.in")
        r = requests.get(url)
        self.assertEqual(r.status_code, 200)
        # verify redirect
        self.assertIn('/github/ipython/ipython/blob/rel-2.0.0', r.request.url)
        self.assertIn('global-exclude', r.text)

    def test_github_blob_to_tree_redirect(self):
        url = self.url("github/ipython/ipython/blob/rel-2.0.0/IPython")
        r = requests.get(url)
        self.assertEqual(r.status_code, 200)
        # verify redirect
        self.assertIn('/github/ipython/ipython/tree/rel-2.0.0/IPython', r.request.url)
        self.assertIn('__init__.py', r.text)

    def test_github_ref_list(self):
        url = self.url('github/ipython/ipython/tree/master')
        r = requests.get(url)
        self.assertEqual(r.status_code, 200)
        html = r.text
        # verify branch is linked
        self.assertIn('/github/ipython/ipython/tree/2.x/', html)
        # verify tag is linked
        self.assertIn('/github/ipython/ipython/tree/rel-2.3.0/', html)


class FormatHTMLGitHubTestCase(NBViewerTestCase, FormatHTMLMixin):
    pass
# -*- coding:utf-8 -*-
'''
Unit tests for InCar.
'''
import inspect
import os
import unittest

from ..incar import InCar
from . import path


class InCarTest(unittest.TestCase):

    def setUp(self):
        # Create an InCar object.
        self.maxDiff = True

    def test_rdata(self):
        " Test a data line in INCAR can be read correctly. "
        filename = path + "/INCAR"
        incar = InCar(filename)

        # Test integer parameter.
        ref_line = "ISTART = 0 # 0 = new job, 1 = restart"
        pnames, datas = incar.rdata(ref_line)
        self.assertListEqual(pnames, ["ISTART"])
        self.assertListEqual(datas, ["0"])

        # Test string parameter.
        ref_line = "PREC = Normal # [Low/Medium/High/Accurate/Normal]"
        pnames, datas = incar.rdata(ref_line)
        self.assertListEqual(pnames, ["PREC"])
        self.assertListEqual(datas, ["Normal"])

        # Test comment line.
        ref_line = "! Electronic Structure"
        result = incar.rdata(ref_line)
        self.assertIsNone(result)

        # Test multi-parameter line.
        ref_line = "LHFCALC = .TRUE. ; HFSCREEN = 0.2 # HSE"
        pnames, datas = incar.rdata(ref_line)
        self.assertListEqual(pnames, ["LHFCALC", "HFSCREEN"])
        self.assertListEqual(datas, [".TRUE.", "0.2"])

    def test_load(self):
        " Test all data in INCAR can be loaded. "
        filename = path + "/INCAR"
        incar = InCar(filename)
        ref_pnames = ['SYSTEM', 'ISTART', 'ISPIN', 'PREC', 'ENCUT',
                      'NELM', 'NELMIN', 'ISMEAR', 'SIGMA', 'LREAL',
                      'EDIFFG', 'ALGO', 'ISIF', 'NSW', 'IBRION', 'POTIM',
                      'ISYM', 'NWRITE', 'LCHARG', 'LWAVE', 'NCORE']
        ref_datas = ['per', '0', '2', 'Normal', '450', '400', '3',
                     '1', '0.1', 'A', '-0.05', 'Fast', '2', '900',
                     '1', '0.2', '0', '1', '.False.', '.False.', '4']
        for pname, data in zip(ref_pnames, ref_datas):
            self.assertEqual(getattr(incar, pname), data)

    def test_parameter_set(self):
        " Test an existing parameter can be set correctly. "
        filename = path + "/INCAR"
        incar = InCar(filename)
        self.assertTrue(incar.ISIF, "2")
        incar.set("ISIF", 3)
        self.assertTrue(incar.ISIF, "3")

    def test_parameter_add(self):
        " Test a new parameter can be added correctly. "
        filename = path + "/INCAR"
        incar = InCar(filename)
        self.assertFalse(hasattr(incar, "TEST_zjshao"))
        incar.add("TEST_zjshao", "True")
        self.assertTrue(incar.TEST_zjshao, "True")

    def test_parameter_del(self):
        " Make sure we can remove parameters correctly. "
        filename = path + "/INCAR"
        incar = InCar(filename)

        # Check before deletion.
        self.assertTrue(hasattr(incar, "ISIF"))
        self.assertTrue("ISIF" in incar.pnames)

        pname, value = incar.pop("ISIF")

        # Check after deletion.
        self.assertEqual(pname, "ISIF")
        self.assertEqual(value, "2")
        self.assertFalse(hasattr(incar, "ISIF"))
        self.assertFalse("ISIF" in incar.pnames)

    def test_compare(self):
        " Make sure we can compare two InCar objects correctly. "
        # Two equal INCAR.
        filename1 = path + "/INCAR"
        filename2 = path + "/INCAR2"
        incar1 = InCar(filename1)
        incar2 = InCar(filename1)
        a_dict, b_dict = incar1.compare(incar2)
        self.assertDictEqual(a_dict, {})
        self.assertDictEqual(b_dict, {})

        # Different INCAR.
        incar1 = InCar(filename1)
        incar2 = InCar(filename2)
        a_dict, b_dict = incar1.compare(incar2)
        self.assertDictEqual(a_dict, {'ISMEAR': '1', 'LREAL': 'A'})
        self.assertDictEqual(b_dict, {'ISMEAR': '2', 'LREAL': ''})

    def test_eq(self):
        " Test __eq__() function."
        # Two equal INCAR.
        filename1 = path + "/INCAR"
        filename2 = path + "/INCAR2"
        incar1 = InCar(filename1)
        incar2 = InCar(filename1)
        self.assertTrue(incar1 == incar2)

        # Different INCAR.
        incar1 = InCar(filename1)
        incar2 = InCar(filename2)
        self.assertFalse(incar1 == incar2)

    def test_ne(self):
        " Test __ne__() function."
        # Two equal INCAR.
        filename1 = path + "/INCAR"
        filename2 = path + "/INCAR2"
        incar1 = InCar(filename1)
        incar2 = InCar(filename1)
        self.assertFalse(incar1 != incar2)

        # Different INCAR.
        incar1 = InCar(filename1)
        incar2 = InCar(filename2)
        self.assertTrue(incar1 != incar2)

    def test_tofile(self):
        " Test INCAR content can be written to file. "
        # NEED IMPLEMENTATION
        pass
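For reference, the line-splitting behavior that `test_rdata` above exercises can be sketched as a standalone function. This is a minimal stand-in consistent with the assertions in the tests, not VASPy's actual `InCar.rdata` implementation:

```python
def parse_incar_line(line):
    """Split an INCAR data line such as 'ISTART = 0  # comment' into
    parameter-name and value lists; return None for comment lines."""
    line = line.strip()
    if not line or line.startswith(('!', '#')):
        return None                        # comment or empty line
    line = line.split('#', 1)[0]           # drop a trailing comment
    names, values = [], []
    for chunk in line.split(';'):          # handle 'A = x ; B = y' lines
        name, _, value = chunk.partition('=')
        names.append(name.strip())
        values.append(value.strip())
    return names, values
```

The semicolon split mirrors the multi-parameter case (`LHFCALC = .TRUE. ; HFSCREEN = 0.2`) checked by the last assertion in `test_rdata`.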
import pytest

from openshift_checks.package_version import PackageVersion, OpenShiftCheckException


def task_vars_for(openshift_release, deployment_type):
    return dict(
        ansible_pkg_mgr='yum',
        openshift=dict(common=dict(service_type=deployment_type)),
        openshift_release=openshift_release,
        openshift_image_tag='v' + openshift_release,
        openshift_deployment_type=deployment_type,
    )


def test_openshift_version_not_supported():
    check = PackageVersion(None, task_vars_for("1.2.3", 'origin'))
    check.get_openshift_version_tuple = lambda: (3, 4, 1)  # won't be in the dict

    with pytest.raises(OpenShiftCheckException) as excinfo:
        check.get_required_ovs_version()

    assert "no recommended version of Open vSwitch" in str(excinfo.value)


def test_invalid_openshift_release_format():
    task_vars = dict(
        ansible_pkg_mgr='yum',
        openshift=dict(common=dict(service_type='origin')),
        openshift_image_tag='v0',
        openshift_deployment_type='origin',
    )

    check = PackageVersion(lambda *_: {}, task_vars)
    with pytest.raises(OpenShiftCheckException) as excinfo:
        check.run()
    assert "invalid version" in str(excinfo.value)


@pytest.mark.parametrize('openshift_release', [
    "111.7.0",
    "3.7",
    "3.6",
    "3.5.1.2.3",
    "3.5",
    "3.4",
    "3.3",
    "2.1.0",
])
def test_package_version(openshift_release):
    return_value = {"foo": object()}

    def execute_module(module_name=None, module_args=None, tmp=None, task_vars=None, *_):
        assert module_name == 'aos_version'
        assert "package_list" in module_args
        for pkg in module_args["package_list"]:
            if "-master" in pkg["name"] or "-node" in pkg["name"]:
                assert pkg["version"] == task_vars["openshift_release"]
        return return_value

    check = PackageVersion(execute_module, task_vars_for(openshift_release, 'origin'))
    result = check.run()
    assert result == return_value


@pytest.mark.parametrize('group_names,is_containerized,is_active', [
    (['oo_masters_to_config'], False, True),
    # ensure check is skipped on containerized installs
    (['oo_masters_to_config'], True, False),
    (['oo_nodes_to_config'], False, True),
    (['oo_masters_to_config', 'oo_nodes_to_config'], False, True),
    (['oo_masters_to_config', 'oo_etcd_to_config'], False, True),
    ([], False, False),
    (['oo_etcd_to_config'], False, False),
    (['lb'], False, False),
    (['nfs'], False, False),
])
def test_package_version_skip_when_not_master_nor_node(group_names, is_containerized, is_active):
    task_vars = dict(
        group_names=group_names,
        openshift=dict(common=dict(is_containerized=is_containerized)),
    )

    assert PackageVersion(None, task_vars).is_active() == is_active
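The version strings fed to the checks above are parsed into numeric tuples before any lookup. A minimal sketch of that parsing follows; the real logic lives in `PackageVersion.get_openshift_version_tuple`, whose implementation is not shown here, so the function name and exact behavior below are assumptions consistent with the tests:

```python
def openshift_version_tuple(image_tag):
    """Turn an image tag such as 'v3.6.1' into a numeric tuple.

    Raises ValueError for tags like 'v0' that carry no minor version,
    mirroring the "invalid version" failure the test above expects.
    """
    parts = image_tag.lstrip('v').split('.')
    if len(parts) < 2:
        raise ValueError('invalid version: %s' % image_tag)
    # keep at most major.minor.patch; longer tags like '3.5.1.2.3' are truncated
    return tuple(int(p) for p in parts[:3])
```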
#!/usr/bin/python
# -*- coding: utf-8 -*-
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
# this is a windows documentation stub. actual code lives in the .ps1
# file of the same name
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'}
DOCUMENTATION = r'''
---
module: win_domain_user
version_added: '2.4'
short_description: Manages Windows Active Directory user accounts
description:
- Manages Windows Active Directory user accounts.
options:
name:
description:
- Name of the user to create, remove or modify.
type: str
required: true
state:
description:
- When C(present), creates or updates the user account.
- When C(absent), removes the user account if it exists.
- When C(query), retrieves the user account details without making any changes.
type: str
choices: [ absent, present, query ]
default: present
enabled:
description:
- C(yes) will enable the user account.
- C(no) will disable the account.
type: bool
default: yes
account_locked:
description:
- C(no) will unlock the user account if locked.
- Note that there is not a way to lock an account as an administrator.
- Accounts are locked due to user actions; as an admin, you may only unlock a locked account.
- If you wish to administratively disable an account, set I(enabled) to C(no).
choices: [ no ]
description:
description:
- Description of the user
type: str
groups:
description:
- Adds or removes the user from this list of groups,
depending on the value of I(groups_action).
- To remove all but the Principal Group, set C(groups=<principal group name>) and
I(groups_action=replace).
- Note that users cannot be removed from their principal group (for example, "Domain Users").
type: list
groups_action:
description:
- If C(add), the user is added to each group in I(groups) where not already a member.
- If C(remove), the user is removed from each group in I(groups).
- If C(replace), the user is added as a member of each group in
I(groups) and removed from any other groups.
type: str
choices: [ add, remove, replace ]
default: replace
password:
description:
- Optionally set the user's password to this (plain text) value.
- To enable an account - I(enabled) - a password must already be
configured on the account, or you must provide a password here.
type: str
update_password:
description:
- C(always) will update passwords if they differ.
- C(on_create) will only set the password for newly created users.
- Note that C(always) will always report an Ansible status of 'changed'
because we cannot determine whether the new password differs from
the old password.
type: str
choices: [ always, on_create ]
default: always
password_expired:
description:
- C(yes) will require the user to change their password at next login.
- C(no) will clear the expired password flag.
- This is mutually exclusive with I(password_never_expires).
type: bool
password_never_expires:
description:
- C(yes) will set the password to never expire.
- C(no) will allow the password to expire.
- This is mutually exclusive with I(password_expired).
type: bool
user_cannot_change_password:
description:
- C(yes) will prevent the user from changing their password.
- C(no) will allow the user to change their password.
type: bool
firstname:
description:
- Configures the user's first name (given name).
type: str
surname:
description:
- Configures the user's last name (surname).
type: str
company:
description:
- Configures the user's company name.
type: str
upn:
description:
- Configures the User Principal Name (UPN) for the account.
- This is not required, but is best practice to configure for modern
versions of Active Directory.
- The format is C(<username>@<domain>).
type: str
email:
description:
- Configures the user's email address.
- This is a record in AD and does not do anything to configure any email
servers or systems.
type: str
street:
description:
- Configures the user's street address.
type: str
city:
description:
- Configures the user's city.
type: str
state_province:
description:
- Configures the user's state or province.
type: str
postal_code:
description:
- Configures the user's postal code / zip code.
type: str
country:
description:
- Configures the user's country code.
- Note that this is a two-character ISO 3166 code.
type: str
path:
description:
- Container or OU for the new user; if you do not specify this, the
user will be placed in the default container for users in the domain.
- Setting the path is only available when a new user is created;
if you specify a path on an existing user, the user's path will not
be updated - you must delete (e.g., C(state=absent)) the user and
then re-add the user with the appropriate path.
type: str
attributes:
description:
- A dict of custom LDAP attributes to set on the user.
- This can be used to set custom attributes that are not exposed as module
parameters, e.g. C(telephoneNumber).
- See the examples on how to format this parameter.
type: str
version_added: '2.5'
domain_username:
description:
- The username to use when interacting with AD.
- If this is not set then the user Ansible used to log in with will be
used instead when using CredSSP or Kerberos with credential delegation.
type: str
version_added: '2.5'
domain_password:
description:
- The password for I(username).
type: str
version_added: '2.5'
domain_server:
description:
- Specifies the Active Directory Domain Services instance to connect to.
- Can be in the form of an FQDN or NetBIOS name.
- If not specified then the value is based on the domain of the computer
running PowerShell.
type: str
version_added: '2.5'
notes:
- Works with Windows 2012R2 and newer.
- If running on a server that is not a Domain Controller, credential
delegation through CredSSP or Kerberos with delegation must be used or the
I(domain_username), I(domain_password) must be set.
- Note that some individuals have confirmed successful operation on Windows
2008R2 servers with AD and AD Web Services enabled, but this has not
received the same degree of testing as Windows 2012R2.
seealso:
- module: win_domain
- module: win_domain_controller
- module: win_domain_computer
- module: win_domain_group
- module: win_domain_membership
- module: win_user
- module: win_user_profile
author:
- Nick Chandler (@nwchandler)
'''
EXAMPLES = r'''
- name: Ensure user bob is present with address information
win_domain_user:
name: bob
firstname: Bob
surname: Smith
company: BobCo
password: B0bP4ssw0rd
state: present
groups:
- Domain Admins
street: 123 4th St.
city: Sometown
state_province: IN
postal_code: 12345
country: US
attributes:
telephoneNumber: 555-123456
- name: Ensure user bob is created and use custom credentials to create the user
win_domain_user:
name: bob
firstname: Bob
surname: Smith
password: B0bP4ssw0rd
state: present
domain_username: DOMAIN\admin-account
domain_password: SomePas2w0rd
domain_server: domain@DOMAIN.COM
- name: Ensure user bob is present in OU ou=test,dc=domain,dc=local
win_domain_user:
name: bob
password: B0bP4ssw0rd
state: present
path: ou=test,dc=domain,dc=local
groups:
- Domain Admins
- name: Ensure user bob is absent
win_domain_user:
name: bob
state: absent
'''
RETURN = r'''
account_locked:
description: true if the account is locked
returned: always
type: bool
sample: false
changed:
description: true if the account changed during execution
returned: always
type: bool
sample: false
city:
description: The user city
returned: always
type: str
sample: Indianapolis
company:
description: The user company
returned: always
type: str
sample: RedHat
country:
description: The user country
returned: always
type: str
sample: US
description:
description: A description of the account
returned: always
type: str
sample: Server Administrator
distinguished_name:
description: DN of the user account
returned: always
type: str
sample: CN=nick,OU=test,DC=domain,DC=local
email:
description: The user email address
returned: always
type: str
sample: nick@domain.local
enabled:
description: true if the account is enabled and false if disabled
returned: always
type: str
sample: true
firstname:
description: The user first name
returned: always
type: str
sample: Nick
groups:
description: AD Groups to which the account belongs
returned: always
type: list
sample: [ "Domain Admins", "Domain Users" ]
msg:
description: Summary message of whether the user is present or absent
returned: always
type: str
sample: User nick is present
name:
description: The username on the account
returned: always
type: str
sample: nick
password_expired:
description: true if the account password has expired
returned: always
type: bool
sample: false
password_updated:
description: true if the password changed during this execution
returned: always
type: bool
sample: true
postal_code:
description: The user postal code
returned: always
type: str
sample: 46033
sid:
description: The SID of the account
returned: always
type: str
sample: S-1-5-21-2752426336-228313920-2202711348-1175
state:
description: The state of the user account
returned: always
type: str
sample: present
state_province:
description: The user state or province
returned: always
type: str
sample: IN
street:
description: The user street address
returned: always
type: str
sample: 123 4th St.
surname:
description: The user last name
returned: always
type: str
sample: Doe
upn:
description: The User Principal Name of the account
returned: always
type: str
sample: nick@domain.local
user_cannot_change_password:
description: true if the user is not allowed to change password
returned: always
type: str
sample: false
''' | unknown | codeparrot/codeparrot-clean | ||
- name: Test no warnings ref "http://github.com/ansible/ansible/issues/37535"
  hosts: testhost
  gather_facts: false
  tasks:
    - name: set ssh jump host args
      set_fact:
        ansible_ssh_common_args: "-o ProxyCommand='ssh -W %h:%p -q root@localhost'"

    - name: set ssh jump host args (FQCN)
      ansible.builtin.set_fact:
        ansible_ssh_common_args: "-o ProxyCommand='ssh -W %h:%p -q root@localhost'"
/* Copyright 2019 The TensorFlow Authors. All Rights Reserved.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
==============================================================================*/
#ifndef TENSORFLOW_COMPILER_MLIR_TENSORFLOW_TRANSFORMS_PASSES_H_
#define TENSORFLOW_COMPILER_MLIR_TENSORFLOW_TRANSFORMS_PASSES_H_
#include <cstdint>
#include <memory>
#include <string>
#include "llvm/ADT/ArrayRef.h"
#include "llvm/ADT/STLFunctionalExtras.h"
#include "llvm/Support/CommandLine.h"
#include "mlir/Dialect/Func/IR/FuncOps.h" // from @llvm-project
#include "mlir/IR/BuiltinAttributeInterfaces.h" // from @llvm-project
#include "mlir/IR/BuiltinOps.h" // from @llvm-project
#include "mlir/IR/MLIRContext.h" // from @llvm-project
#include "mlir/IR/Operation.h" // from @llvm-project
#include "mlir/IR/PatternMatch.h" // from @llvm-project
#include "mlir/Pass/Pass.h" // from @llvm-project
#include "mlir/Pass/PassOptions.h" // from @llvm-project
#include "mlir/Support/LLVM.h" // from @llvm-project
#include "mlir/Support/LogicalResult.h" // from @llvm-project
#include "shardy/dialect/sdy/ir/dialect.h" // from @shardy // IWYU pragma: keep
#include "tensorflow/compiler/mlir/tensorflow/ir/tf_device.h"
namespace mlir {
// Creates a pass that breaks up an island with multiple ops into multiple
// islands, each with a single op.
std::unique_ptr<OperationPass<ModuleOp>> CreateBreakUpIslandsPass();
// Creates a pass that converts mlir functions consisting of mlir ops into a
// tf_executor dialect as a single island.
std::unique_ptr<OperationPass<func::FuncOp>>
CreateFunctionalToExecutorDialectConversionPass();
// Creates a pass that lifts inner ops of tf_executor.island ops in
// tf_executor.graph into the same block as the tf_executor.graph.
std::unique_ptr<OperationPass<func::FuncOp>>
CreateExecutorDialectToFunctionalConversionPass();
namespace TF {
// Creates a pass that canonicalizes legacy compilation and replication
// attributes.
std::unique_ptr<OperationPass<func::FuncOp>>
CreateCanonicalizeCompileAndReplicateAttributesPass();
// Creates a pass that drops `shape_invariant` attribute from While/WhileRegion
// ops.
std::unique_ptr<OperationPass<func::FuncOp>>
CreateDropWhileShapeInvariantPass();
// Creates a pass that drops `shape_invariant` attribute from While/WhileRegion
// ops within device cluster.
std::unique_ptr<OperationPass<func::FuncOp>>
CreateDropWhileShapeInvariantInDeviceClusterPass();
// Creates a pass that moves writes to replicate invariant resource variables
// outside tf_device.replicate op.
std::unique_ptr<OperationPass<func::FuncOp>>
CreateHoistReplicateInvariantResourceWritesPass();
// Transforms functional control flow operations in the TensorFlow dialect to
// MLIR Control Flow Graph (CFG) form.
std::unique_ptr<OperationPass<func::FuncOp>>
CreateTFFunctionalControlFlowToCFG();
// Transforms functional control flow operations in the TensorFlow dialect to
// their region based counterparts.
std::unique_ptr<OperationPass<ModuleOp>>
CreateTFFunctionalControlFlowToRegions();
std::unique_ptr<OperationPass<ModuleOp>> CreateTFFunctionalControlFlowToRegions(
bool allow_passthrough_args);
// Transforms region bases control flow operations in the TensorFlow dialect to
// their functional counterparts.
std::unique_ptr<OperationPass<ModuleOp>>
CreateTFRegionControlFlowToFunctional();
// Materialize the MlirPassthroughOp by replacing it with the MLIR module
// attached as an attribute.
std::unique_ptr<OperationPass<func::FuncOp>>
CreateMaterializePassthroughOpPass();
// Replicates the TensorList init op by undoing some CSE needed for correct
// shape assignment in shape_inference.
std::unique_ptr<OperationPass<func::FuncOp>>
CreateReplicateTensorListInitOpsPass();
// Performs Shape Inference on the TensorFlow dialect using the global registry.
std::unique_ptr<OperationPass<ModuleOp>> CreateTFShapeInferencePass(
ArrayRef<ArrayRef<int64_t>> input_shapes = {},
bool enable_stablehlo_propagation = false);
// Performs TF.data optimizations.
std::unique_ptr<OperationPass<func::FuncOp>> CreateTFDataOptimizationPass();
std::unique_ptr<OperationPass<func::FuncOp>> CreateMoveTransposesPass();
std::unique_ptr<OperationPass<func::FuncOp>> CreateLayoutAssignmentPass();
// Guarantee that all FuncOp's have a single use.
std::unique_ptr<OperationPass<ModuleOp>> CreateGuaranteeAllFuncsOneUsePass();
// Optional pass which will unroll BatchMatMul and use only MatMul
std::unique_ptr<OperationPass<func::FuncOp>> CreateUnrollBatchMatMulPassPass();
// Optional pass which will map TF BatchMatMul to TF Einsum
std::unique_ptr<OperationPass<func::FuncOp>> CreateBatchMatMulToEinsumPass();
// Pass that transform Einsum to other TF Ops for the supported variants.
std::unique_ptr<OperationPass<func::FuncOp>> CreateTransformEinsumPass();
// Optimizes Tensorflow graph.
std::unique_ptr<OperationPass<func::FuncOp>> CreateTFOptimizePass();
void RegisterTFOptimizePassPipeline();
// Creates pass to rewrite RecvTPUEmbeddingActivationsOp and
// SendTPUEmbeddingGradients ops to internal variants.
std::unique_ptr<OperationPass<func::FuncOp>> CreateRewriteTPUEmbeddingOpsPass();
// Performs specific fusion for GPU targets.
std::unique_ptr<OperationPass<func::FuncOp>> CreateGpuOpFusionPass();
// Creates a pass that decomposes to be compiled ReduceDataset ops into a while
// loop that iterates the dataset and calls the reduction function.
std::unique_ptr<OperationPass<func::FuncOp>> CreateDecomposeReduceDatasetPass();
// Create a pass that convert ops that copy tensors between devices, e.g.
// tf.Identity.
std::unique_ptr<OperationPass<mlir::func::FuncOp>>
CreateTensorDeviceCopyConversionPass();
// Returns a pass that folds tf.BroadcastTo nodes with subsequent nodes if they
// have built in broadcasting support.
std::unique_ptr<OperationPass<func::FuncOp>> CreateBroadcastFoldPass();
void populateTfControlFlowToScfPatterns(MLIRContext* context,
RewritePatternSet* patterns);
// Create a pass to convert TensorFlow control flow to SCF.
std::unique_ptr<OperationPass<ModuleOp>> createConvertTfControlFlowToScfPass();
struct LayoutOptimizationPipelineOptions
: public PassPipelineOptions<LayoutOptimizationPipelineOptions> {
Option<std::string> force_data_format{
*this, "force-data-format",
llvm::cl::desc("Force data format for all layout sensitive ops")};
Option<bool> skip_fold_transpose_in_ops{
*this, "skip-fold-transpose-in-ops",
llvm::cl::desc("Skip folding transpose operands in Ops which can support "
"different layouts.")};
};
// Layout optimization assigns optimal data layout for layout sensitive
// operations, and cancels all redundant transposes.
void CreateLayoutOptimizationPipeline(
OpPassManager& pm, // NOLINT - MLIR contract is pass by mutable reference.
const LayoutOptimizationPipelineOptions& options);
struct StandardPipelineOptions
: public PassPipelineOptions<StandardPipelineOptions> {
Option<bool> enable_inliner{*this, "enable-inliner",
llvm::cl::desc("Enable inliner."),
llvm::cl::init(false)};
Option<bool> form_clusters{*this, "form-clusters",
llvm::cl::desc("Enable Cluster Formation pass."),
llvm::cl::init(false)};
Option<bool> enable_stablehlo_shape_propagation{
*this, "enable-stablehlo-shape-propagation",
llvm::cl::desc(
"Enable StableHLO shape propagation in the TF shape inference pass."),
llvm::cl::init(false)};
ListOption<std::string> ops_to_preserve{
*this, "ops-to-preserve",
llvm::cl::desc(
"list of ops to preserve during graph pruning. This is "
"useful for keeping ops with side effects, e.g. DebugIdentityOp.")};
};
// Propagates the pass manager with the passes involved in transforming or
// optimizing an MLIR graph without any target specialization.
// NOLINTNEXTLINE - MLIR contract is pass by mutable reference.
void CreateTFStandardPipeline(OpPassManager& pm,
const StandardPipelineOptions& options);
// Propagates device attributes of resources from callers to callees.
std::unique_ptr<OperationPass<ModuleOp>> CreateResourceDeviceInferencePass();
// Creates a pass that promotes resource reads/writes in `functions` to inputs
// and outputs of `functions`, assuming that resource operations have already
// been decomposed and function calls have already been inlined. If `functions`
// is empty, the pass is applied to the main function by default. The pass also
// annotates the input arguments for resources with the indices of their
// aliasing output arguments.
std::unique_ptr<OperationPass<ModuleOp>> CreatePromoteResourcesToArgsPass(
llvm::ArrayRef<std::string> functions = {});
// Creates a pass that promotes tf.VarHandleOp to resource arguments for all
// functions.
std::unique_ptr<OperationPass<ModuleOp>> CreatePromoteVarHandlesToArgsPass();
// Creates a pass that converts readonly reference variables to the
// corresponding resource variables.
std::unique_ptr<OperationPass<func::FuncOp>>
CreateConvertReadonlyReferenceVariablesToResourceVariablesPass();
// Creates a simple device assignment pass on TF dialect for CoreRT use case.
std::unique_ptr<OperationPass<func::FuncOp>> CreateSimpleTFDeviceAssignmentPass(
llvm::StringRef default_device = "cpu");
// Creates a pass to perform device assignment for TF dialect ops that do not
// have device assignment, by using the device attribute of the function.
std::unique_ptr<OperationPass<func::FuncOp>>
CreateTFDeviceAssignmentByFuncAttrPass();
// Performs resource lifting on the function body to hoist resource variable
// accesses outside all control flow statements.
LogicalResult ResourceLiftingForFunctionalControlFlow(func::FuncOp function);
// Converts stack ops into operations on local variables, which can later be
// removed by resource lifting. Requires known maximum sizes of stacks and
// known element shapes of push ops.
std::unique_ptr<OperationPass<ModuleOp>> CreateStackOpsDecompositionPass();
// Creates a pass to strip the "tf._noinline" attribute from the functions in
// the module.
std::unique_ptr<OperationPass<ModuleOp>> CreateStripNoinlineAttributePass();
// Converts tensor list operations into operations on buffers and sizes. Needs
// static shapes and known max element count.
std::unique_ptr<OperationPass<ModuleOp>> CreateTensorListOpsDecompositionPass();
// Converts tensor array ops into operations on local variables, which can later
// be removed by resource lifting. Requires known sizes and known element shapes
// (either defined in TensorArrayV3 or implied in the first write).
std::unique_ptr<OperationPass<ModuleOp>>
CreateTensorArrayOpsDecompositionPass();
// Create a pass that legalize TFG to TF dialect.
std::unique_ptr<Pass> CreateLegalizeTFGToTFEPass();
// Matches sequences of ops to TensorFlow fused kernels. This pass should not
// generally be used beyond exporting to runtimes that support these ops. In the
// future these fusions may be codegen'd automatically.
std::unique_ptr<OperationPass<func::FuncOp>> CreateFusedKernelMatcherPass();
// Creates function pass to select device index/fold tf.DeviceIndex.
std::unique_ptr<OperationPass<func::FuncOp>> CreateDeviceIndexSelectorPass();
// Creates function pass to replace InitializeTableFromTextFileV2Ops with
// LookupTableImportV2Op ops.
std::unique_ptr<OperationPass<func::FuncOp>> CreateInitTextFileToImportPass(
std::string saved_model_dir = "");
// Creates function pass to cluster TensorFlow ops by host. The program
// generated by this pass will have one function per host where all operations
// in the same function are placed on the same host. Each result of the per-host
// function will have a "tf.device" attribute which specifies the device
// assignment of the result.
std::unique_ptr<OperationPass<mlir::ModuleOp>> CreateClusterTFOpsByHostPass();
// Creates a pass to insert tf_device.send and tf_device.receive ops to make
// sure any argument of any op is on the same host of the op itself.
std::unique_ptr<OperationPass<mlir::ModuleOp>> CreateCrossHostTransferPass();
// Creates a pass that adds the device attribute to every tf.Const op based on
// the device attribute of the operations that read its result. If the result of
// a tf.Const op is read by operations placed on multiple devices, then the pass
// will replicate the tf.Const op once for each device.
std::unique_ptr<OperationPass<ModuleOp>> CreateConstantOpDeviceAssignmentPass();
// Returns a pass that verifies that every function in the module consists of a
// single tf_executor.graph and that each tf_executor.island in the graph
// contains only a single op.
std::unique_ptr<OperationPass<ModuleOp>> CreateVerifySuitableForExportPass();
// Creates an op ordering favorable for the EmbeddingProgramKey pass.
std::unique_ptr<OperationPass<ModuleOp>> CreateOrderForProgramKeyPass();
// Returns pass that prepares TPU computation to be legal for export to
// TensorFlow.
std::unique_ptr<OperationPass<ModuleOp>>
CreatePrepareTpuComputationForTfExportPass();
// Rewrites ops that require quantized inputs or outputs to ops that allow
// non-quantized inputs and outputs.
std::unique_ptr<OperationPass<func::FuncOp>> CreateLowerQuantizedPass();
// Reorders ops so ops of the same dialect are next to each other.
std::unique_ptr<Pass> CreateOrderByDialectPass();
// Groups ops into functions that only contain one dialect.
std::unique_ptr<Pass> CreateGroupByDialectPass();
// Removes unused parameters from functions & their callers.
std::unique_ptr<OperationPass<ModuleOp>> CreateRemoveUnusedArgumentsPass();
// Removes unused results from WhileRegion ops.
std::unique_ptr<OperationPass<func::FuncOp>>
CreateRemoveUnusedWhileResultsPass();
// Hoists loop invariant ops to the outside of the loop.
std::unique_ptr<OperationPass<func::FuncOp>> CreateHoistLoopInvariantPass();
// Creates VarHandleOps right next to the operations that use them.
std::unique_ptr<OperationPass<ModuleOp>> CreateLocalizeVarHandlesPass();
// Removes all TF attributes.
std::unique_ptr<OperationPass<ModuleOp>> CreateStripTfAttributesPass();
// Converts AnonymousIteratorOps to (named) IteratorOps.
std::unique_ptr<OperationPass<ModuleOp>> CreateNameAnonymousIteratorsPass();
// Creates a pass that breaks up an island with multiple ops into multiple
// islands, each with a single op. This pass intentionally does not propagate
// control dependencies across the newly created islands; that is handled by
// CreateTFExecutorUpdateControlDependenciesPass.
std::unique_ptr<OperationPass<func::FuncOp>> CreateSplitIntoIslandPerOpPass();
// Prints, but otherwise pipes through without changes, the current module.
std::unique_ptr<OperationPass<ModuleOp>> CreatePrintPass(
raw_ostream* os = nullptr);
// Moves TPUCompileMlir ops as far to the front as possible.
std::unique_ptr<OperationPass<func::FuncOp>> CreateMoveTpuCompileToFrontPass();
// Decomposes OptionalFromValue, OptionalGetValue, OptionalNone,
// and OptionalHasValue.
std::unique_ptr<OperationPass<ModuleOp>> CreateDecomposeOptionalsPass();
//===----------------------------------------------------------------------===//
// XlaCallModule
//===----------------------------------------------------------------------===//
// Creates a pass that deserializes functions in the StableHLO modules from
// `tf.XlaCallModule` to the top-level module.
std::unique_ptr<OperationPass<ModuleOp>>
CreateXlaCallModuleDeserializationPass();
// Creates a pass that serializes StableHLO functions referenced by
// `tf.XlaCallModule` from the top-level module to `tf.XlaCallModule`'s
// `module` attribute.
std::unique_ptr<OperationPass<ModuleOp>> CreateXlaCallModuleSerializationPass();
} // namespace TF
namespace tf_executor {
// Creates a pass to chain control outputs of while loop body.
std::unique_ptr<OperationPass<ModuleOp>>
CreateTFExecutorConvertControlToDataOutputsPass();
std::unique_ptr<OperationPass<ModuleOp>>
CreateTFExecutorConvertControlToDataOutputsPass(
bool composite_tpuexecute_side_effects);
std::unique_ptr<OperationPass<ModuleOp>>
CreateTFExecutorCheckControlDependenciesPass();
// Creates a pass to merge IslandOps from TFExecutor dialect.
std::unique_ptr<OperationPass<func::FuncOp>>
CreateTFExecutorIslandCoarseningPass();
// Creates a pass to merge IslandOps for operations marked for execution on TPU.
// This is for V1 backward compatibility.
std::unique_ptr<OperationPass<ModuleOp>>
CreateTFExecutorTPUV1IslandCoarseningPass();
// Creates a pass to outline TPU clusters from a single IslandOp into a nested
// module suitable for being processed as if it were a V2 module.
// This is for V1 backward compatibility.
std::unique_ptr<OperationPass<ModuleOp>>
CreateTFExecutorTPUV1IslandOutliningPass();
// Creates a pass to inline calls to the nested TPU module; this reverses the
// effect of the `TFExecutorTPUV1IslandOutlining` pass above.
// This is for V1 backward compatibility.
std::unique_ptr<OperationPass<ModuleOp>>
CreateTFExecutorTPUV1IslandInliningPass();
// Creates a pass to prune tf_executor.graph from dead nodes.
std::unique_ptr<OperationPass<func::FuncOp>> CreateTFExecutorGraphPruningPass(
llvm::ArrayRef<std::string> ops_to_preserve = {});
// Creates a pass to update control dependencies.
std::unique_ptr<OperationPass<ModuleOp>>
CreateTFExecutorUpdateControlDependenciesPass();
} // namespace tf_executor
namespace TFDevice {
// Creates a pass that forms clusters from instructions that are assigned to the
// same device.
std::unique_ptr<OperationPass<ModuleOp>> CreateClusterFormationPass();
// Sinks `tf.Const` operations into the ClusterOp regions that use them. This is
// performed in order to limit the number of values implicitly captured in the
// region before outlining.
std::unique_ptr<OperationPass<func::FuncOp>> CreateClusterConstantSinkingPass(
llvm::function_ref<bool(tf_device::ClusterOp, ElementsAttr)> filter = {});
// Creates a pass that outlines regions of tf_device.cluster operations.
std::unique_ptr<OperationPass<ModuleOp>> CreateClusterOutliningPass();
// Creates a pass that outlines regions of tf_device.launch operations.
std::unique_ptr<OperationPass<ModuleOp>> CreateLaunchOutliningPass();
// Creates a pass that converts tf_device::LaunchFuncOp into
// TF::PartitionedCallOp.
std::unique_ptr<OperationPass<ModuleOp>> CreateConvertLaunchFuncToTFCallPass();
// A pass that decomposes composite resource operations into primitive ones like
// ReadVariableOp, AssignVariableOp and other computations to facilitate
// transformations like resource op lifting.
std::unique_ptr<OperationPass<func::FuncOp>> CreateDecomposeResourceOpsPass();
// A pass that decomposes composite resource operations in device cluster
// (tf_device.cluster op) into primitive ones like ReadVariableOp,
// AssignVariableOp and other computations to facilitate transformations like
// resource op lifting.
std::unique_ptr<OperationPass<ModuleOp>>
CreateDecomposeResourceOpsInClusterPass();
// Creates a pass that marks TPU cluster input-output pairs that read from and
// write to the same resource variable as aliases.
std::unique_ptr<OperationPass<ModuleOp>> CreateMarkInputOutputAliasesPass();
// Creates a pass that lifts operations on external resource variables from
// device computation nested in `tf_device::LaunchOp` out so that resource
// variable load operations are all before device computation while resource
// variable store operations are all after device computation. After this pass,
// device computation no longer interacts with external resource variables.
std::unique_ptr<OperationPass<ModuleOp>> CreateResourceOpLiftingPass();
// Creates a pass that lifts operations from the main function.
std::unique_ptr<OperationPass<ModuleOp>>
CreateResourceOpLiftingForMainFunctionPass();
// Lifts resource operations out of tf_device.launch_func ops nested in `op`.
// Returns a failure if there are remaining resource-type values that cannot be
// lifted.
LogicalResult LiftResourceOps(Operation* op);
// Creates a pass that hoists invariant operations in a `tf_device.replicate`.
std::unique_ptr<OperationPass<func::FuncOp>>
CreateReplicateInvariantOpHoistingPass();
// Creates a pass that forms replica `tf_executor.island` from a single
// `tf_device.replicate` island.
std::unique_ptr<OperationPass<func::FuncOp>> CreateReplicateToIslandPass(
bool legacy_graph_export = true);
// Creates a pass that sets the device ordinal attribute of the required op
// using the replica id attribute.
std::unique_ptr<OperationPass<func::FuncOp>>
CreateReplicaIDToDeviceOrdinalPass();
// Creates a pass that creates `tf_executor.island` from a single
// `tf_device.parallel_execute` island.
std::unique_ptr<OperationPass<func::FuncOp>> CreateParallelExecuteToIslandsPass(
bool legacy_graph_export = true);
// Creates a pass that annotates whether a LaunchFuncOp's parameters have the
// same data across replicas.
std::unique_ptr<OperationPass<ModuleOp>>
CreateAnnotateParameterReplicationPass();
// Creates a pass that merges control flow with similar predicates.
std::unique_ptr<OperationPass<ModuleOp>> CreateMergeControlFlowPass();
// Creates a pass that wraps each TensorFlow dialect op with a `device` attribute
// in a `tf_device.launch` op with the same `device` attribute.
std::unique_ptr<OperationPass<func::FuncOp>>
CreateDeviceAttributeToLaunchPass();
// Creates a pass that hoists a `tf_device.launch` body and assigns a `device`
// attribute to each TensorFlow dialect op in the body based on the `device`
// attribute on the `tf_device.launch`.
std::unique_ptr<OperationPass<func::FuncOp>> CreateLaunchToDeviceAttributePass(
bool legacy_graph_export = true);
// Creates a pass to ensure that the `_xla_outside_compilation` attribute and
// tf_device.launch ops no longer exist after outside compilation is complete.
std::unique_ptr<OperationPass<func::FuncOp>>
CreateVerifyNoOutsideCompilationMarkersPass();
// Creates a pass that inlines StatefulPartitionedCallOp ops into the parent
// region.
std::unique_ptr<OperationPass<ModuleOp>> CreateXlaInlineDeviceOpsPass();
// Creates a pass that rewrites partitioned calls with a
// `_xla_compile_device_type` attribute into `tf.XlaLaunch` ops.
std::unique_ptr<OperationPass<ModuleOp>> CreateXlaRewritePass();
// Create a pass that validates the input graph to the CPU/GPU bridge.
std::unique_ptr<OperationPass<ModuleOp>> CreateXlaValidateInputsPass();
} // namespace TFDevice
namespace TFTPU {
// Creates a pass that converts unified compilation and replication
// attributes back to legacy attributes.
std::unique_ptr<OperationPass<func::FuncOp>>
CreateConvertToLegacyCompileAndReplicateAttributesPass();
// Creates a pass that converts all TPUPartitionedInput to TPUPartitionedInputV2.
std::unique_ptr<OperationPass<func::FuncOp>>
CreateTPUPartitionedOpConversionPass();
// Creates a pass that cleans up `_replication_info` attribute on operations
// that are inside a cluster.
std::unique_ptr<OperationPass<ModuleOp>>
CreateTPUClusterCleanupAttributesPass();
// Creates a pass that removes Identity/IdentityN ops from a cluster.
std::unique_ptr<OperationPass<ModuleOp>> CreateTPUIdentityPruningPass();
// Creates a pass that allows TPU program inputs to have layouts determined at
// run time.
std::unique_ptr<OperationPass<ModuleOp>> CreateTPUDynamicLayoutPass();
// Creates a pass that adds `tf.ReadVariableOp` to a TPU cluster for resources
// the cluster only writes to.
std::unique_ptr<OperationPass<ModuleOp>> CreateTPUResourceReadForWritePass();
// Creates a pass that reorders partitioned resource reads and replicated
// inputs.
std::unique_ptr<OperationPass<func::FuncOp>>
CreateTPUReorderReplicateAndPartitionedInputsPass();
// Creates a pass that partitions unpartitioned resource read/write to
// partitioned resource variables.
std::unique_ptr<OperationPass<func::FuncOp>>
CreateTPUResourceReadsWritesPartitioningPass();
// Creates a pass that looks for usages of the result of
// TPUCopyWithDynamicShapeOp and annotates these values as dynamically shaped.
// This ensures that the generated TPU program has the correct input annotations.
std::unique_ptr<OperationPass<ModuleOp>>
CreateTPUAnnotateDynamicShapeInputsPass();
// Creates a pass that moves `tf.AssignVariableOp` into a
// `tf_device.parallel_execute` region if the `tf.AssignVariableOp` is the
// only consumer of a `tf_device.parallel_execute` result.
std::unique_ptr<OperationPass<func::FuncOp>>
CreateTPUParallelExecuteSinkResourceWritePass();
// Creates a pass that extracts TPUCopyWithDynamicShapeOp ops from the host
// launch op and wraps them in a device launch op. This allows these ops to be
// executed on TPU while still being compiled on the host.
std::unique_ptr<OperationPass<func::FuncOp>>
CreateExtractTPUCopyWithDynamicShapeOpPass();
// Creates a pass that wraps each ReadVariableOp/AssignVariableOp that consumes
// a packed tensor so that it has the same device placement as the underlying
// TPU device.
std::unique_ptr<OperationPass<func::FuncOp>>
CreateTPUColocateCompositeResourceOps();
// Creates a pass that expands outside compilation cluster at the head/tail of
// TPU computation by adding outside compilation attribute to identity/cast ops
// that are only used for host computation.
std::unique_ptr<OperationPass<func::FuncOp>>
CreateTPUHostComputationExpansionPass();
// Creates a pass that updates inputs to TPU embedding layer enqueue ops so that
// correct ops are invoked during training and evaluation.
std::unique_ptr<OperationPass<func::FuncOp>>
CreateTPUUpdateEmbeddingEnqueueOpInputsPass();
// Creates a pass that propagates TPU devices to users.
std::unique_ptr<OperationPass<func::FuncOp>> CreateTPUDevicePropagationPass();
// Create a pass that colocates each `Split` with its predecessor.
std::unique_ptr<OperationPass<func::FuncOp>> CreateTPUColocateSplitsPass();
// Creates a pass that replicates the tf._TPUCompileMlir op on each host that
// needs the compiled program. It helps avoid transferring the compiled binary
// between hosts.
std::unique_ptr<OperationPass<mlir::ModuleOp>>
CreateTPUCompileOpReplicationPass();
// Creates a pass that applies the space-to-depth transform to the first or
// frontier convolutions that consume host inputs on TPU.
std::unique_ptr<OperationPass<ModuleOp>> CreateTPUSpaceToDepthPass();
// Adjusts the device on TPUCopyWithDynamicShape ops.
std::unique_ptr<OperationPass<ModuleOp>>
CreateColocateTPUCopyWithDynamicShapePass();
} // namespace TFTPU
// Define the registrations in a detail namespace, just so that we can overload
// the main entry point `registerTensorFlowPasses` to inject
// RegisterTFOptimizePassPipeline.
namespace detail {
// Direction in which to move transposes in MoveTransposePass.
enum MoveTransposeDirection { kBegin, kEnd };
#define GEN_PASS_REGISTRATION
#define GEN_PASS_DECL_BATCHMATMULTOEINSUMPASS
#define GEN_PASS_DECL_BREAKUPISLANDSPASS
#define GEN_PASS_DECL_BROADCASTFOLDPASS
#define GEN_PASS_DECL_CANONICALIZECOMPILEANDREPLICATEATTRIBUTESPASS
#define GEN_PASS_DECL_CLUSTERCONSTANTSINKINGPASS
#define GEN_PASS_DECL_CLUSTERFORMATIONPASS
#define GEN_PASS_DECL_CLUSTEROUTLININGPASS
#define GEN_PASS_DECL_CLUSTERTFOPSBYHOSTPASS
#define GEN_PASS_DECL_CONSTANTOPDEVICEASSIGNMENTPASS
#define GEN_PASS_DECL_CONVERTLAUNCHFUNCTOTFCALLPASS
#define GEN_PASS_DECL_CONVERTREADONLYREFERENCEVARIABLESTORESOURCEVARIABLESPASS
#define GEN_PASS_DECL_CONVERTTFCONTROLFLOWTOSCFPASS
#define GEN_PASS_DECL_CONVERTTOLEGACYCOMPILEANDREPLICATEATTRIBUTESPASS
#define GEN_PASS_DECL_DECOMPOSEREDUCEDATASETPASS
#define GEN_PASS_DECL_DEVICEINDEXSELECTORPASS
#define GEN_PASS_DECL_DROPWHILESHAPEINVARIANTINDEVICECLUSTERPASS
#define GEN_PASS_DECL_DROPWHILESHAPEINVARIANTPASS
#define GEN_PASS_DECL_EXECUTORCHECKCONTROLDEPENDENCIESPASS
#define GEN_PASS_DECL_EXECUTORCONVERTCONTROLTODATAOUTPUTSPASS
#define GEN_PASS_DECL_EXECUTORDIALECTTOFUNCTIONALPASS
#define GEN_PASS_DECL_EXECUTORGRAPHPRUNINGPASS
#define GEN_PASS_DECL_EXECUTORISLANDCOARSENINGPASS
#define GEN_PASS_DECL_EXECUTORTPUV1ISLANDINLININGPASS
#define GEN_PASS_DECL_EXECUTORUPDATECONTROLDEPENDENCIESPASS
#define GEN_PASS_DECL_FUNCTIONALCONTROLFLOWTOCFGPASS
#define GEN_PASS_DECL_FUNCTIONALCONTROLFLOWTOREGIONSPASS
#define GEN_PASS_DECL_FUNCTIONALTOEXECUTORDIALECTCONVERSIONPASS
#define GEN_PASS_DECL_FUSEDKERNELMATCHERPASS
#define GEN_PASS_DECL_GROUPBYDIALECTPASS
#define GEN_PASS_DECL_GUARANTEEALLFUNCSONEUSEPASS
#define GEN_PASS_DECL_HOISTREPLICATEINVARIANTRESOURCEWRITESPASS
#define GEN_PASS_DECL_INITTEXTFILETOIMPORTPASS
#define GEN_PASS_DECL_LAUNCHOUTLININGPASS
#define GEN_PASS_DECL_LAYOUTASSIGNMENTPASS
#define GEN_PASS_DECL_LEGALIZEHLOTOTFPASS
#define GEN_PASS_DECL_LEGALIZETFGTOTFPASS
#define GEN_PASS_DECL_LOCALIZEVARHANDLESPASS
#define GEN_PASS_DECL_LOWERQUANTIZEDPASS
#define GEN_PASS_DECL_MARKINPUTOUTPUTALIASESPASS
#define GEN_PASS_DECL_MATERIALIZEPASSTHROUGHOP
#define GEN_PASS_DECL_MERGECONTROLFLOWPASS
#define GEN_PASS_DECL_MOVETRANSPOSESPASS
#define GEN_PASS_DECL_ORDERBYDIALECTPASS
#define GEN_PASS_DECL_PARALLELEXECUTETOISLANDSPASS
#define GEN_PASS_DECL_PREPARETPUCOMPUTATIONFORTFEXPORTPASS
#define GEN_PASS_DECL_PROMOTERESOURCESTOARGSPASS
#define GEN_PASS_DECL_PROMOTEVARHANDLESTOARGSPASS
#define GEN_PASS_DECL_REGIONCONTROLFLOWTOFUNCTIONALPASS
#define GEN_PASS_DECL_REMOVEUNUSEDARGUMENTSPASS
#define GEN_PASS_DECL_REMOVEUNUSEDWHILERESULTSPASS
#define GEN_PASS_DECL_REPLICAIDTODEVICEORDINALPASS
#define GEN_PASS_DECL_REPLICATEINVARIANTOPHOISTINGPASS
#define GEN_PASS_DECL_REPLICATETOISLANDPASS
#define GEN_PASS_DECL_RESOURCEDEVICEINFERENCEPASS
#define GEN_PASS_DECL_REWRITETPUEMBEDDINGOPSPASS
#define GEN_PASS_DECL_SIMPLETFDEVICEASSIGNMENTPASS
#define GEN_PASS_DECL_SPLITINTOISLANDPEROPPASS
#define GEN_PASS_DECL_STACKOPSDECOMPOSITIONPASS
#define GEN_PASS_DECL_STRIPNOINLINEATTRIBUTEPASS
#define GEN_PASS_DECL_TFDATAOPTIMIZATIONPASS
#define GEN_PASS_DECL_TFDEVICEASSIGNMENTBYFUNCATTRPASS
#define GEN_PASS_DECL_TPUBRIDGEEXECUTORISLANDOUTLININGPASS
#define GEN_PASS_DECL_TPUCLEANUPCLUSTERATTRIBUTESPASS
#define GEN_PASS_DECL_TPUCLUSTERFORMATIONPASS
#define GEN_PASS_DECL_TPUCOLOCATECOMPOSITERESOURCEOPSPASS
#define GEN_PASS_DECL_TPUDEVICEPROPAGATIONPASS
#define GEN_PASS_DECL_TPUDYNAMICLAYOUTPASS
#define GEN_PASS_DECL_TPUHOSTCOMPUTATIONEXPANSIONPASS
#define GEN_PASS_DECL_TPUIDENTITYPRUNINGPASS
#define GEN_PASS_DECL_EXTRACTTPUCOPYWITHDYNAMICSHAPEOPPASS
#define GEN_PASS_DECL_TPUPARALLELEXECUTESINKRESOURCEWRITEPASS
#define GEN_PASS_DECL_TPUREORDERREPLICATEANDPARTITIONEDINPUTSPASS
#define GEN_PASS_DECL_TPURESOURCEREADFORWRITEPASS
#define GEN_PASS_DECL_TPURESOURCEREADSWRITESPARTITIONINGPASS
#define GEN_PASS_DECL_TPUSPACETODEPTHPASS
#define GEN_PASS_DECL_TPUUPDATEEMBEDDINGENQUEUEOPINPUTSPASS
#define GEN_PASS_DECL_TENSORARRAYOPSDECOMPOSITIONPASS
#define GEN_PASS_DECL_TENSORDEVICECOPYCONVERSIONPASS
#define GEN_PASS_DECL_TENSORFLOWOPTIMIZEPASS
#define GEN_PASS_DECL_TENSORFLOWSHAPEINFERENCEPASS
#define GEN_PASS_DECL_TENSORLISTOPSDECOMPOSITIONPASS
#define GEN_PASS_DECL_TENSORFLOWGPUFUSION
#define GEN_PASS_DECL_TPUV1BRIDGEEXECUTORISLANDCOARSENINGPASS
#define GEN_PASS_DECL_TRANSFORMEINSUMPASS
#define GEN_PASS_DECL_UNROLLBATCHMATMULPASS
#define GEN_PASS_DECL_VERIFYSUITABLEFOREXPORTPASS
#define GEN_PASS_DECL_XLACALLMODULEDESERIALIZATIONPASS
#define GEN_PASS_DECL_XLACALLMODULESERIALIZATIONPASS
#define GEN_PASS_DECL_XLACALLMODULECUSTOMCALLTFFUNCTIONRENAMINGPASS
#include "tensorflow/compiler/mlir/tensorflow/transforms/tf_passes.h.inc"
} // namespace detail
using namespace detail; // NOLINT
inline void registerTensorFlowPasses() {
detail::registerTensorFlowPasses();
TF::RegisterTFOptimizePassPipeline();
}
namespace TFDevice {
#define GEN_PASS_REGISTRATION
#define GEN_PASS_DECL_ANNOTATEPARAMETERREPLICATIONPASS
#define GEN_PASS_DECL_DECOMPOSERESOURCEOPSINCLUSTERPASS
#define GEN_PASS_DECL_DECOMPOSERESOURCEOPSPASS
#define GEN_PASS_DECL_DEVICEATTRIBUTETOLAUNCHPASS
#define GEN_PASS_DECL_HOSTLAUNCHTOOUTSIDECOMPILEDPASS
#define GEN_PASS_DECL_LAUNCHTODEVICEATTRIBUTEPASS
#define GEN_PASS_DECL_OUTSIDECOMPILEDTOHOSTLAUNCHPASS
#define GEN_PASS_DECL_RESOURCEOPLIFTINGFORMAINFUNCTIONPASS
#define GEN_PASS_DECL_RESOURCEOPLIFTINGPASS
#define GEN_PASS_DECL_VERIFYNOOUTSIDECOMPILATIONMARKERSPASS
#define GEN_PASS_DECL_XLACLUSTERFORMATIONPASS
#define GEN_PASS_DECL_XLAINLINEDEVICEOPSPASS
#define GEN_PASS_DECL_XLAREWRITEPASS
#define GEN_PASS_DECL_XLAREWRITEV2PASS
#define GEN_PASS_DECL_XLAVALIDATEINPUTSPASS
#include "tensorflow/compiler/mlir/tensorflow/transforms/tf_device_passes.h.inc"
} // namespace TFDevice
} // namespace mlir
#endif // TENSORFLOW_COMPILER_MLIR_TENSORFLOW_TRANSFORMS_PASSES_H_ | c | github | https://github.com/tensorflow/tensorflow | tensorflow/compiler/mlir/tensorflow/transforms/passes.h |
// Copyright 2016 The etcd Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package naming_test
import (
"bytes"
"fmt"
"strconv"
"testing"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"google.golang.org/grpc"
"google.golang.org/grpc/credentials/insecure"
testpb "google.golang.org/grpc/interop/grpc_testing"
"go.etcd.io/etcd/client/v3/naming/endpoints"
"go.etcd.io/etcd/client/v3/naming/resolver"
"go.etcd.io/etcd/pkg/v3/grpctesting"
"go.etcd.io/etcd/tests/v3/framework/integration"
)
func testEtcdGRPCResolver(t *testing.T, lbPolicy string) {
// Setup two new dummy stub servers
payloadBody := []byte{'1'}
s1 := grpctesting.NewDummyStubServer(payloadBody)
if err := s1.Start(nil); err != nil {
t.Fatal("failed to start dummy grpc server (s1)", err)
}
defer s1.Stop()
s2 := grpctesting.NewDummyStubServer(payloadBody)
if err := s2.Start(nil); err != nil {
t.Fatal("failed to start dummy grpc server (s2)", err)
}
defer s2.Stop()
// Create new cluster with endpoint manager with two endpoints
clus := integration.NewCluster(t, &integration.ClusterConfig{Size: 3})
defer clus.Terminate(t)
em, err := endpoints.NewManager(clus.Client(0), "foo")
if err != nil {
t.Fatal("failed to create EndpointManager", err)
}
e1 := endpoints.Endpoint{Addr: s1.Addr()}
e2 := endpoints.Endpoint{Addr: s2.Addr()}
err = em.AddEndpoint(t.Context(), "foo/e1", e1)
if err != nil {
t.Fatal("failed to add foo", err)
}
err = em.AddEndpoint(t.Context(), "foo/e2", e2)
if err != nil {
t.Fatal("failed to add foo", err)
}
b, err := resolver.NewBuilder(clus.Client(1))
if err != nil {
t.Fatal("failed to new resolver builder", err)
}
// Create connection with provided lb policy
conn, err := grpc.Dial("etcd:///foo", grpc.WithTransportCredentials(insecure.NewCredentials()), grpc.WithResolvers(b), //nolint:staticcheck // TODO: remove for a supported version
grpc.WithDefaultServiceConfig(fmt.Sprintf(`{"loadBalancingPolicy":"%s"}`, lbPolicy)))
if err != nil {
t.Fatal("failed to connect to foo", err)
}
defer conn.Close()
// Send an initial request that should go to e1
c := testpb.NewTestServiceClient(conn)
resp, err := c.UnaryCall(t.Context(), &testpb.SimpleRequest{}, grpc.WaitForReady(true))
if err != nil {
t.Fatal("failed to invoke rpc to foo (e1)", err)
}
if resp.GetPayload() == nil || !bytes.Equal(resp.GetPayload().GetBody(), payloadBody) {
t.Fatalf("unexpected response from foo (e1): %s", resp.GetPayload().GetBody())
}
// Send more requests
lastResponse := []byte{'1'}
totalRequests := 3500
for i := 1; i < totalRequests; i++ {
resp, err := c.UnaryCall(t.Context(), &testpb.SimpleRequest{}, grpc.WaitForReady(true))
if err != nil {
t.Fatal("failed to invoke rpc to foo", err)
}
t.Logf("Response: %v", string(resp.GetPayload().GetBody()))
require.NotNilf(t, resp.GetPayload(), "unexpected response from foo: %s", resp.GetPayload().GetBody())
lastResponse = resp.GetPayload().GetBody()
}
	// If the load balancing policy is pick_first, the returned payload should equal the number of requests
t.Logf("Last response: %v", string(lastResponse))
if lbPolicy == "pick_first" {
require.Equalf(t, "3500", string(lastResponse), "unexpected total responses from foo: %s", lastResponse)
}
	// If the load balancing policy is round_robin, we should see roughly half the total requests served by each server
if lbPolicy == "round_robin" {
responses, err := strconv.Atoi(string(lastResponse))
require.NoErrorf(t, err, "couldn't convert to int: %s", lastResponse)
// Allow 25% tolerance as round robin is not perfect and we don't want the test to flake
expected := float64(totalRequests) * 0.5
assert.InEpsilonf(t, expected, float64(responses), 0.25, "unexpected total responses from foo: %s", lastResponse)
}
}
// TestEtcdGrpcResolverPickFirst mimics scenarios described in grpc_naming.md doc.
func TestEtcdGrpcResolverPickFirst(t *testing.T) {
integration.BeforeTest(t)
// Pick first is the default load balancer policy for grpc-go
testEtcdGRPCResolver(t, "pick_first")
}
// TestEtcdGrpcResolverRoundRobin mimics scenarios described in grpc_naming.md doc.
func TestEtcdGrpcResolverRoundRobin(t *testing.T) {
integration.BeforeTest(t)
// Round robin is a common alternative for more production oriented scenarios
testEtcdGRPCResolver(t, "round_robin")
}
func TestEtcdEndpointManager(t *testing.T) {
integration.BeforeTest(t)
s1PayloadBody := []byte{'1'}
s1 := grpctesting.NewDummyStubServer(s1PayloadBody)
err := s1.Start(nil)
require.NoError(t, err)
defer s1.Stop()
s2PayloadBody := []byte{'2'}
s2 := grpctesting.NewDummyStubServer(s2PayloadBody)
err = s2.Start(nil)
require.NoError(t, err)
defer s2.Stop()
clus := integration.NewCluster(t, &integration.ClusterConfig{Size: 3})
defer clus.Terminate(t)
	// Check that an endpoint with the same prefix "foo" does not break the logic with multiple endpoints
em, err := endpoints.NewManager(clus.Client(0), "foo")
require.NoError(t, err)
emOther, err := endpoints.NewManager(clus.Client(1), "foo_other")
require.NoError(t, err)
e1 := endpoints.Endpoint{Addr: s1.Addr()}
e2 := endpoints.Endpoint{Addr: s2.Addr()}
	err = em.AddEndpoint(t.Context(), "foo/e1", e1)
	require.NoError(t, err)
	err = emOther.AddEndpoint(t.Context(), "foo_other/e2", e2)
	require.NoError(t, err)
epts, err := em.List(t.Context())
require.NoError(t, err)
eptsOther, err := emOther.List(t.Context())
require.NoError(t, err)
assert.Len(t, epts, 1)
assert.Len(t, eptsOther, 1)
} | go | github | https://github.com/etcd-io/etcd | tests/integration/clientv3/naming/resolver_test.go |
# Copyright 2017 Capital One Services, LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from __future__ import absolute_import, division, print_function, unicode_literals
from .common import BaseTest
import botocore.exceptions as b_exc
class TestNotebookInstance(BaseTest):
def test_list_notebook_instances(self):
session_factory = self.replay_flight_data("test_sagemaker_notebook_instances")
p = self.load_policy(
{
"name": "list-sagemaker-notebooks",
"resource": "sagemaker-notebook",
"filters": [
{"type": "value", "key": "SubnetId", "value": "subnet-efbcccb7"}
],
},
session_factory=session_factory,
)
resources = p.run()
self.assertEqual(len(resources), 1)
def test_tag_notebook_instances(self):
session_factory = self.replay_flight_data(
"test_sagemaker_tag_notebook_instances"
)
p = self.load_policy(
{
"name": "tag-sagemaker-notebooks",
"resource": "sagemaker-notebook",
"filters": [{"tag:Category": "absent"}],
"actions": [{"type": "tag", "key": "Category", "value": "TestValue"}],
},
session_factory=session_factory,
)
resources = p.run()
self.assertEqual(len(resources), 1)
client = session_factory().client("sagemaker")
tags = client.list_tags(ResourceArn=resources[0]["NotebookInstanceArn"])["Tags"]
self.assertEqual(tags[0]["Value"], "TestValue")
def test_remove_tag_notebook_instance(self):
session_factory = self.replay_flight_data(
"test_sagemaker_remove_tag_notebook_instances"
)
p = self.load_policy(
{
"name": "untag-sagemaker-notebooks",
"resource": "sagemaker-notebook",
"filters": [{"tag:Category": "TestValue"}],
"actions": [{"type": "remove-tag", "tags": ["Category"]}],
},
session_factory=session_factory,
)
resources = p.run()
self.assertEqual(len(resources), 1)
client = session_factory().client("sagemaker")
tags = client.list_tags(ResourceArn=resources[0]["NotebookInstanceArn"])["Tags"]
self.assertEqual(len(tags), 0)
def test_mark_for_op_notebook_instance(self):
session_factory = self.replay_flight_data(
"test_sagemaker_mark_for_op_notebook_instance"
)
p = self.load_policy(
{
"name": "sagemaker-notebooks-untagged-delete",
"resource": "sagemaker-notebook",
"filters": [
{"tag:Category": "absent"},
{"tag:custodian_cleanup": "absent"},
{"NotebookInstanceStatus": "InService"},
],
"actions": [
{
"type": "mark-for-op",
"tag": "custodian_cleanup",
"op": "stop",
"days": 1,
}
],
},
session_factory=session_factory,
)
resources = p.run()
self.assertEqual(len(resources), 1)
client = session_factory().client("sagemaker")
tags = client.list_tags(ResourceArn=resources[0]["NotebookInstanceArn"])["Tags"]
        self.assertEqual(tags[0]["Key"], "custodian_cleanup")
def test_marked_for_op_notebook_instance(self):
session_factory = self.replay_flight_data(
"test_sagemaker_marked_for_op_notebook_instance"
)
p = self.load_policy(
{
"name": "sagemaker-notebooks-untagged-delete",
"resource": "sagemaker-notebook",
"filters": [
{
"type": "marked-for-op",
"tag": "custodian_cleanup",
"op": "stop",
"skew": 1,
}
],
},
session_factory=session_factory,
)
resources = p.run()
self.assertEqual(len(resources), 1)
def test_start_notebook_instance(self):
session_factory = self.replay_flight_data(
"test_sagemaker_start_notebook_instance"
)
p = self.load_policy(
{
"name": "start-sagemaker-notebook",
"resource": "sagemaker-notebook",
"actions": [{"type": "start"}],
},
session_factory=session_factory,
)
resources = p.run()
        self.assertEqual(len(resources), 1)
client = session_factory().client("sagemaker")
notebook = client.describe_notebook_instance(
NotebookInstanceName=resources[0]["NotebookInstanceName"]
)
        self.assertEqual(notebook["NotebookInstanceStatus"], "Pending")
def test_stop_notebook_instance(self):
session_factory = self.replay_flight_data(
"test_sagemaker_stop_notebook_instance"
)
p = self.load_policy(
{
"name": "stop-invalid-sagemaker-notebook",
"resource": "sagemaker-notebook",
"filters": [{"tag:Category": "absent"}],
"actions": [{"type": "stop"}],
},
session_factory=session_factory,
)
resources = p.run()
        self.assertEqual(len(resources), 1)
client = session_factory().client("sagemaker")
notebook = client.describe_notebook_instance(
NotebookInstanceName=resources[0]["NotebookInstanceName"]
)
        self.assertEqual(notebook["NotebookInstanceStatus"], "Stopping")
def test_delete_notebook_instance(self):
session_factory = self.replay_flight_data(
"test_sagemaker_delete_notebook_instance"
)
p = self.load_policy(
{
"name": "delete-unencrypted-sagemaker-notebook",
"resource": "sagemaker-notebook",
"filters": [{"KmsKeyId": "empty"}],
"actions": [{"type": "delete"}],
},
session_factory=session_factory,
)
resources = p.run()
        self.assertEqual(len(resources), 1)
client = session_factory().client("sagemaker")
notebook = client.describe_notebook_instance(
NotebookInstanceName=resources[0]["NotebookInstanceName"]
)
        self.assertEqual(notebook["NotebookInstanceStatus"], "Deleting")
def test_notebook_subnet(self):
nb = "c7n-test-nb"
session_factory = self.replay_flight_data(
"test_sagemaker_notebook_subnet_filter"
)
p = self.load_policy(
{
"name": "sagemaker-notebook",
"resource": "sagemaker-notebook",
"filters": [{"type": "subnet", "key": "tag:Name", "value": "Pluto"}],
},
session_factory=session_factory,
)
resources = p.run()
self.assertEqual(len(resources), 1)
self.assertEqual(resources[0]["NotebookInstanceName"], nb)
def test_notebook_security_group(self):
nb = "c7n-test-nb"
session_factory = self.replay_flight_data(
"test_sagemaker_notebook_security_group_filter"
)
p = self.load_policy(
{
"name": "sagemaker-notebook",
"resource": "sagemaker-notebook",
"filters": [
{"type": "security-group", "key": "GroupName", "value": "SGW-SG"}
],
},
session_factory=session_factory,
)
resources = p.run()
self.assertEqual(len(resources), 1)
self.assertEqual(resources[0]["NotebookInstanceName"], nb)
class TestModelInstance(BaseTest):
def test_list_model(self):
session_factory = self.replay_flight_data("test_sagemaker_model")
p = self.load_policy(
{"name": "list-sagemaker-model", "resource": "sagemaker-model"},
session_factory=session_factory,
)
resources = p.run()
self.assertGreaterEqual(len(resources), 1)
def test_delete_model(self):
session_factory = self.replay_flight_data("test_sagemaker_delete_model")
p = self.load_policy(
{
"name": "delete-invalid-sagemaker-model",
"resource": "sagemaker-model",
"filters": [{"tag:DeleteMe": "present"}],
"actions": [{"type": "delete"}],
},
session_factory=session_factory,
)
resources = p.run()
self.assertEqual(len(resources), 1)
client = session_factory().client("sagemaker")
try:
client.describe_model(ModelName=resources[0]["ModelName"])
except b_exc.ClientError as e:
if e.response["Error"]["Code"] != "ValidationException":
                self.fail("Bad Error: " + e.response["Error"]["Code"])
else:
self.assertEqual(e.response["Error"]["Code"], "ValidationException")
else:
self.fail("Resource still exists")
def test_tag_model(self):
session_factory = self.replay_flight_data("test_sagemaker_tag_model")
p = self.load_policy(
{
"name": "tag-sagemaker-model",
"resource": "sagemaker-model",
"filters": [{"tag:Category": "absent"}],
"actions": [{"type": "tag", "key": "Category", "value": "TestValue"}],
},
session_factory=session_factory,
)
resources = p.run()
self.assertEqual(len(resources), 1)
client = session_factory().client("sagemaker")
tags = client.list_tags(ResourceArn=resources[0]["ModelArn"])["Tags"]
self.assertEqual(tags[0]["Value"], "TestValue")
def test_remove_tag_model(self):
session_factory = self.replay_flight_data("test_sagemaker_remove_tag_model")
p = self.load_policy(
{
"name": "untag-sagemaker-model",
"resource": "sagemaker-model",
"filters": [{"tag:Category": "TestValue"}],
"actions": [{"type": "remove-tag", "tags": ["Category"]}],
},
session_factory=session_factory,
)
resources = p.run()
self.assertEqual(len(resources), 1)
client = session_factory().client("sagemaker")
tags = client.list_tags(ResourceArn=resources[0]["ModelArn"])["Tags"]
self.assertEqual(len(tags), 0)
def test_model_mark_for_op(self):
session_factory = self.replay_flight_data("test_model_mark_for_op")
p = self.load_policy(
{
"name": "mark-failed-model-delete",
"resource": "sagemaker-model",
"filters": [{"tag:OpMe": "present"}],
"actions": [
{
"type": "mark-for-op",
"tag": "custodian_cleanup",
"op": "delete",
"days": 1,
}
],
},
session_factory=session_factory,
)
resources = p.run()
        self.assertEqual(len(resources), 1)
client = session_factory(region="us-east-1").client("sagemaker")
tags = client.list_tags(ResourceArn=resources[0]["ModelArn"])["Tags"]
        self.assertEqual(tags[0]["Key"], "custodian_cleanup")
def test_model_marked_for_op(self):
session_factory = self.replay_flight_data("test_model_marked_for_op")
p = self.load_policy(
{
"name": "marked-failed-endpoints-delete",
"resource": "sagemaker-model",
"filters": [
{
"type": "marked-for-op",
"tag": "custodian_cleanup",
"op": "delete",
"skew": 1,
}
],
},
session_factory=session_factory,
)
resources = p.run()
self.assertEqual(len(resources), 1)
class TestSagemakerJob(BaseTest):
def test_sagemaker_training_job_query(self):
session_factory = self.replay_flight_data("test_sagemaker_training_job_query")
p = self.load_policy(
{
"name": "query-training-jobs",
"resource": "sagemaker-job",
"query": [{"StatusEquals": "Failed"}],
},
session_factory=session_factory,
)
resources = p.run()
self.assertEqual(len(resources), 1)
def test_stop_job(self):
session_factory = self.replay_flight_data("test_sagemaker_training_job_stop")
client = session_factory(region="us-east-1").client("sagemaker")
p = self.load_policy(
{
"name": "stop-training-job",
"resource": "sagemaker-job",
"filters": [
{
"type": "value",
"key": "InputDataConfig[].ChannelName",
"value": "train",
"op": "contains",
}
],
"actions": [{"type": "stop"}],
},
session_factory=session_factory,
)
resources = p.run()
        self.assertEqual(len(resources), 1)
job = client.describe_training_job(
TrainingJobName=resources[0]["TrainingJobName"]
)
self.assertEqual(job["TrainingJobStatus"], "Stopping")
def test_tag_job(self):
session_factory = self.replay_flight_data("test_sagemaker_training_job_tag")
p = self.load_policy(
{
"name": "tag-training-job",
"resource": "sagemaker-job",
"filters": [{"tag:JobTag": "absent"}],
"actions": [{"type": "tag", "key": "JobTag", "value": "JobTagValue"}],
},
session_factory=session_factory,
)
resources = p.run()
        self.assertEqual(len(resources), 1)
client = session_factory(region="us-east-1").client("sagemaker")
tags = client.list_tags(ResourceArn=resources[0]["TrainingJobArn"])["Tags"]
self.assertEqual([tags[0]["Key"], tags[0]["Value"]], ["JobTag", "JobTagValue"])
def test_untag_job(self):
session_factory = self.replay_flight_data(
"test_sagemaker_training_job_remove_tag"
)
p = self.load_policy(
{
"name": "remove-training-job-tag",
"resource": "sagemaker-job",
"filters": [{"tag:JobTag": "JobTagValue"}],
"actions": [{"type": "remove-tag", "tags": ["JobTag"]}],
},
session_factory=session_factory,
)
resources = p.run()
        self.assertEqual(len(resources), 1)
client = session_factory(region="us-east-1").client("sagemaker")
tags = client.list_tags(ResourceArn=resources[0]["TrainingJobArn"])["Tags"]
self.assertEqual(len(tags), 0)
class TestSagemakerTransformJob(BaseTest):
def test_sagemaker_transform_job_query(self):
session_factory = self.replay_flight_data("test_sagemaker_transform_job_query")
p = self.load_policy(
{
"name": "query-transform-jobs",
"resource": "sagemaker-transform-job",
"query": [{"StatusEquals": "Completed"}],
},
session_factory=session_factory,
)
resources = p.run()
self.assertEqual(len(resources), 1)
def test_stop_transform_job(self):
session_factory = self.replay_flight_data("test_sagemaker_transform_job_stop")
client = session_factory(region="us-east-1").client("sagemaker")
p = self.load_policy(
{
"name": "stop-transform-job",
"resource": "sagemaker-transform-job",
"filters": [
{
"type": "value",
"key": "ModelName",
"value": "kmeans",
"op": "contains",
}
],
"actions": [{"type": "stop"}],
},
session_factory=session_factory,
)
resources = p.run()
        self.assertEqual(len(resources), 1)
job = client.describe_transform_job(
TransformJobName=resources[0]["TransformJobName"]
)
self.assertEqual(job["TransformJobStatus"], "Stopping")
def test_tag_transform_job(self):
session_factory = self.replay_flight_data("test_sagemaker_transform_job_tag")
p = self.load_policy(
{
"name": "tag-transform-job",
"resource": "sagemaker-transform-job",
"filters": [{"tag:JobTag": "absent"}],
"actions": [{"type": "tag", "key": "JobTag", "value": "JobTagValue"}],
},
session_factory=session_factory,
)
resources = p.run()
        self.assertEqual(len(resources), 1)
client = session_factory(region="us-east-1").client("sagemaker")
tags = client.list_tags(ResourceArn=resources[0]["TransformJobArn"])["Tags"]
self.assertEqual([tags[0]["Key"], tags[0]["Value"]], ["JobTag", "JobTagValue"])
def test_untag_transform_job(self):
session_factory = self.replay_flight_data(
"test_sagemaker_transform_job_remove_tag"
)
p = self.load_policy(
{
"name": "remove-transform-job-tag",
"resource": "sagemaker-transform-job",
"filters": [{"tag:JobTag": "JobTagValue"}],
"actions": [{"type": "remove-tag", "tags": ["JobTag"]}],
},
session_factory=session_factory,
)
resources = p.run()
        self.assertEqual(len(resources), 1)
client = session_factory(region="us-east-1").client("sagemaker")
tags = client.list_tags(ResourceArn=resources[0]["TransformJobArn"])["Tags"]
self.assertEqual(len(tags), 0)
class TestSagemakerEndpoint(BaseTest):
def test_sagemaker_endpoints(self):
session_factory = self.replay_flight_data("test_sagemaker_endpoints")
p = self.load_policy(
{"name": "list-endpoints", "resource": "sagemaker-endpoint"},
session_factory=session_factory,
)
resources = p.run()
self.assertEqual(len(resources), 1)
def test_sagemaker_endpoint_delete(self):
session_factory = self.replay_flight_data("test_sagemaker_endpoint_delete")
client = session_factory(region="us-east-1").client("sagemaker")
p = self.load_policy(
{
"name": "delete-endpoint-by-config",
"resource": "sagemaker-endpoint",
"filters": [{"EndpointConfigName": "kmeans-2018-01-18-19-25-36-887"}],
"actions": [{"type": "delete"}],
},
session_factory=session_factory,
)
resources = p.run()
self.assertEqual(len(resources), 1)
status = client.describe_endpoint(EndpointName=resources[0]["EndpointName"])[
"EndpointStatus"
]
self.assertEqual(status, "Deleting")
def test_sagemaker_endpoint_tag(self):
session_factory = self.replay_flight_data("test_sagemaker_endpoint_tag")
p = self.load_policy(
{
"name": "endpoint-tag-missing",
"resource": "sagemaker-endpoint",
"filters": [{"tag:required-tag": "absent"}],
"actions": [
{"type": "tag", "key": "required-tag", "value": "required-value"}
],
},
session_factory=session_factory,
)
resources = p.run()
        self.assertEqual(len(resources), 1)
client = session_factory(region="us-east-1").client("sagemaker")
tags = client.list_tags(ResourceArn=resources[0]["EndpointArn"])["Tags"]
        self.assertEqual(tags[0]["Key"], "required-tag")
        self.assertEqual(tags[0]["Value"], "required-value")
def test_sagemaker_endpoint_remove_tag(self):
session_factory = self.replay_flight_data("test_sagemaker_endpoint_remove_tag")
p = self.load_policy(
{
"name": "endpoint-required-tag-obsolete",
"resource": "sagemaker-endpoint",
"filters": [{"tag:expired-tag": "present"}],
"actions": [{"type": "remove-tag", "tags": ["expired-tag"]}],
},
session_factory=session_factory,
)
resources = p.run()
        self.assertEqual(len(resources), 1)
client = session_factory(region="us-east-1").client("sagemaker")
tags = client.list_tags(ResourceArn=resources[0]["EndpointArn"])["Tags"]
self.assertEqual(len(tags), 0)
def test_sagemaker_endpoint_mark_for_op(self):
session_factory = self.replay_flight_data("test_sagemaker_endpoint_mark_for_op")
p = self.load_policy(
{
"name": "mark-failed-endpoints-delete",
"resource": "sagemaker-endpoint",
"filters": [{"EndpointStatus": "Failed"}],
"actions": [
{
"type": "mark-for-op",
"tag": "custodian_cleanup",
"op": "delete",
"days": 1,
}
],
},
session_factory=session_factory,
)
resources = p.run()
        self.assertEqual(len(resources), 1)
client = session_factory(region="us-east-1").client("sagemaker")
tags = client.list_tags(ResourceArn=resources[0]["EndpointArn"])["Tags"]
        self.assertEqual(tags[0]["Key"], "custodian_cleanup")
def test_sagemaker_endpoint_marked_for_op(self):
session_factory = self.replay_flight_data(
"test_sagemaker_endpoint_marked_for_op"
)
p = self.load_policy(
{
"name": "marked-failed-endpoints-delete",
"resource": "sagemaker-endpoint",
"filters": [
{
"type": "marked-for-op",
"tag": "custodian_cleanup",
"op": "delete",
"skew": 1,
}
],
},
session_factory=session_factory,
)
resources = p.run()
self.assertEqual(len(resources), 1)
class TestSagemakerEndpointConfig(BaseTest):
def test_sagemaker_endpoint_config(self):
session_factory = self.replay_flight_data("test_sagemaker_endpoint_config")
p = self.load_policy(
{"name": "list-endpoint-configs", "resource": "sagemaker-endpoint-config"},
session_factory=session_factory,
)
resources = p.run()
self.assertEqual(len(resources), 1)
def test_sagemaker_endpoint_config_delete(self):
session_factory = self.replay_flight_data(
"test_sagemaker_endpoint_config_delete"
)
client = session_factory(region="us-east-1").client("sagemaker")
p = self.load_policy(
{
"name": "delete-endpoint-config",
"resource": "sagemaker-endpoint-config",
"filters": [
{
"type": "value",
"key": "ProductionVariants[].InstanceType",
"value": "ml.m4.xlarge",
"op": "contains",
}
],
"actions": [{"type": "delete"}],
},
session_factory=session_factory,
)
resources = p.run()
self.assertEqual(len(resources), 1)
configs = client.list_endpoint_configs()["EndpointConfigs"]
self.assertEqual(len(configs), 0)
def test_sagemaker_endpoint_config_tag(self):
session_factory = self.replay_flight_data("test_sagemaker_endpoint_config_tag")
p = self.load_policy(
{
"name": "endpoint-config-tag-missing",
"resource": "sagemaker-endpoint-config",
"filters": [{"tag:required-tag": "absent"}],
"actions": [
{"type": "tag", "key": "required-tag", "value": "required-value"}
],
},
session_factory=session_factory,
)
resources = p.run()
        self.assertEqual(len(resources), 1)
client = session_factory(region="us-east-1").client("sagemaker")
tags = client.list_tags(ResourceArn=resources[0]["EndpointConfigArn"])["Tags"]
self.assertEqual(
[tags[0]["Key"], tags[0]["Value"]], ["required-tag", "required-value"]
)
def test_sagemaker_endpoint_config_remove_tag(self):
session_factory = self.replay_flight_data(
"test_sagemaker_endpoint_config_remove_tag"
)
p = self.load_policy(
{
"name": "endpoint-config-required-tag-obsolete",
"resource": "sagemaker-endpoint-config",
"filters": [{"tag:expired-tag": "present"}],
"actions": [{"type": "remove-tag", "tags": ["expired-tag"]}],
},
session_factory=session_factory,
)
resources = p.run()
        self.assertEqual(len(resources), 1)
client = session_factory(region="us-east-1").client("sagemaker")
tags = client.list_tags(ResourceArn=resources[0]["EndpointConfigArn"])["Tags"]
self.assertEqual(len(tags), 0)
def test_sagemaker_endpoint_config_mark_for_op(self):
session_factory = self.replay_flight_data(
"test_sagemaker_endpoint_config_mark_for_op"
)
p = self.load_policy(
{
"name": "mark-endpoint-config-mark-for-op-delete",
"resource": "sagemaker-endpoint-config",
"filters": [
{
"type": "value",
"key": "ProductionVariants[].InstanceType",
"value": "ml.m4.xlarge",
"op": "contains",
}
],
"actions": [
{
"type": "mark-for-op",
"tag": "custodian_cleanup",
"op": "delete",
"days": 1,
}
],
},
session_factory=session_factory,
)
resources = p.run()
        self.assertEqual(len(resources), 1)
client = session_factory(region="us-east-1").client("sagemaker")
tags = client.list_tags(ResourceArn=resources[0]["EndpointConfigArn"])["Tags"]
        self.assertEqual(tags[0]["Key"], "custodian_cleanup")
def test_sagemaker_endpoint_config_marked_for_op(self):
session_factory = self.replay_flight_data(
"test_sagemaker_endpoint_config_marked_for_op"
)
p = self.load_policy(
{
"name": "marked-failed-endpoint-config-delete",
"resource": "sagemaker-endpoint-config",
"filters": [
{
"type": "marked-for-op",
"tag": "custodian_cleanup",
"op": "delete",
"skew": 1,
}
],
},
session_factory=session_factory,
)
resources = p.run()
self.assertEqual(len(resources), 1) | unknown | codeparrot/codeparrot-clean | ||
{
"private": true,
"scripts": {
"predev": "npm run i18n:compile",
"dev": "next dev",
"prebuild": "npm run i18n:compile",
"build": "next build",
"start": "next start",
"lint": "eslint .",
"i18n:extract": "formatjs extract 'pages/**/*.ts*' 'components/**/*.ts*' --out-file lang/en.json",
"i18n:compile": "formatjs compile-folder lang compiled-lang"
},
"dependencies": {
"next": "latest",
"react": "18.2.0",
"react-dom": "18.2.0",
"react-intl": "6.1.1"
},
"devDependencies": {
"@formatjs/cli": "5.1.0",
"@types/node": "18.7.23",
"@types/react": "18.2.8",
"babel-plugin-formatjs": "10.3.28",
"eslint-plugin-formatjs": "4.3.1",
"typescript": "4.8.4"
}
} | json | github | https://github.com/vercel/next.js | examples/with-react-intl/package.json |
exports.abc = "abc";
Object.defineProperty(module, "exports", {
value: {
abc: "abc",
def: "def"
}
}); | javascript | github | https://github.com/webpack/webpack | test/cases/cjs-tree-shaking/bailouts/define-module-property.js |
import socket
try:
from select import poll, POLLIN
except ImportError: # `poll` doesn't exist on OSX and other platforms
poll = False
try:
from select import select
except ImportError: # `select` doesn't exist on AppEngine.
select = False
def is_connection_dropped(conn): # Platform-specific
"""
Returns True if the connection is dropped and should be closed.
:param conn:
:class:`httplib.HTTPConnection` object.
Note: For platforms like AppEngine, this will always return ``False`` to
let the platform handle connection recycling transparently for us.
"""
sock = getattr(conn, 'sock', False)
if sock is False: # Platform-specific: AppEngine
return False
if sock is None: # Connection already closed (such as by httplib).
return True
if not poll:
if not select: # Platform-specific: AppEngine
return False
try:
return select([sock], [], [], 0.0)[0]
except socket.error:
return True
# This version is better on platforms that support it.
p = poll()
p.register(sock, POLLIN)
for (fno, ev) in p.poll(0.0):
        if fno == sock.fileno():
            # Either data is buffered (bad), or the connection is dropped.
            return True
    return False
# This function is copied from socket.py in the Python 2.7 standard
# library test suite. Added to its signature is only `socket_options`.
def create_connection(address, timeout=socket._GLOBAL_DEFAULT_TIMEOUT,
source_address=None, socket_options=None):
"""Connect to *address* and return the socket object.
Convenience function. Connect to *address* (a 2-tuple ``(host,
port)``) and return the socket object. Passing the optional
*timeout* parameter will set the timeout on the socket instance
before attempting to connect. If no *timeout* is supplied, the
global default timeout setting returned by :func:`getdefaulttimeout`
is used. If *source_address* is set it must be a tuple of (host, port)
for the socket to bind as a source address before making the connection.
    A host of '' or port 0 tells the OS to use the default.
"""
host, port = address
err = None
for res in socket.getaddrinfo(host, port, 0, socket.SOCK_STREAM):
af, socktype, proto, canonname, sa = res
sock = None
try:
sock = socket.socket(af, socktype, proto)
# If provided, set socket level options before connecting.
# This is the only addition urllib3 makes to this function.
_set_socket_options(sock, socket_options)
if timeout is not socket._GLOBAL_DEFAULT_TIMEOUT:
sock.settimeout(timeout)
if source_address:
sock.bind(source_address)
sock.connect(sa)
return sock
except socket.error as _:
err = _
if sock is not None:
sock.close()
if err is not None:
raise err
else:
raise socket.error("getaddrinfo returns an empty list")
def _set_socket_options(sock, options):
if options is None:
return
for opt in options:
sock.setsockopt(*opt) | unknown | codeparrot/codeparrot-clean | ||
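The `socket_options` hook above is the one addition urllib3 makes to the stdlib's `create_connection`: each option tuple is unpacked into `setsockopt` before `connect`. A minimal self-contained sketch of that pattern (not the urllib3 code itself; the local listener exists only so the example can run without a network):

```python
import socket

def connect_with_options(address, socket_options=None, timeout=None):
    """Open a TCP connection, applying (level, optname, value) tuples first."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    for opt in (socket_options or []):
        sock.setsockopt(*opt)          # e.g. (IPPROTO_TCP, TCP_NODELAY, 1)
    if timeout is not None:
        sock.settimeout(timeout)
    sock.connect(address)
    return sock

# Local listener so the example is self-contained.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(1)

conn = connect_with_options(
    server.getsockname(),
    socket_options=[(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)],
    timeout=5.0,
)
nodelay = conn.getsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY)
conn.close()
server.close()
```

Setting options before `connect` matters because some options (such as `TCP_NODELAY`) should be in place for the very first segments sent on the connection.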
# -*- coding: utf-8 -*-
from __future__ import unicode_literals
from django.db import models, migrations
class Migration(migrations.Migration):
dependencies = [
('app', '0011_contactinformation_referenceitem_slideritem'),
]
operations = [
migrations.AlterModelOptions(
name='contactinformation',
options={'verbose_name': 'İletişim Bilgisi'},
),
migrations.AlterModelOptions(
name='referenceitem',
options={'verbose_name': 'Referans', 'verbose_name_plural': 'Referanslar'},
),
migrations.AlterField(
model_name='contactinformation',
name='address',
field=models.CharField(verbose_name='Adres Bilgisi', max_length=50),
),
migrations.AlterField(
model_name='referenceitem',
name='reference_explanation',
field=models.CharField(verbose_name='Referans Açıklaması', max_length=50),
),
migrations.AlterField(
model_name='slideritem',
name='slider_item_explanation',
field=models.CharField(verbose_name='Kayan Menü Açıklaması', max_length=50),
),
] | unknown | codeparrot/codeparrot-clean | ||
'use strict';
const common = require('../common.js');
const { AsyncLocalStorage } = require('async_hooks');
/**
* This benchmark verifies the performance degradation of
* async resource propagation on the increasing number of
* active `AsyncLocalStorage`s.
*
* - AsyncLocalStorage.run()
* - Promise
* - Promise
* ...
* - Promise
*/
const bench = common.createBenchmark(main, {
storageCount: [0, 1, 10, 100],
n: [1e5],
});
function runStores(stores, value, cb, idx = 0) {
if (idx === stores.length) {
cb();
} else {
stores[idx].run(value, () => {
runStores(stores, value, cb, idx + 1);
});
}
}
async function runBenchmark(n) {
for (let i = 0; i < n; i++) {
// Avoid creating additional ticks.
await undefined;
}
}
function main({ n, storageCount }) {
const stores = new Array(storageCount).fill(0).map(() => new AsyncLocalStorage());
const contextValue = {};
runStores(stores, contextValue, () => {
bench.start();
runBenchmark(n).then(() => {
bench.end(n);
});
});
} | javascript | github | https://github.com/nodejs/node | benchmark/async_hooks/async-local-storage-propagate-promise.js |
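The nested `AsyncLocalStorage.run()` recursion benchmarked above has a rough Python analogue in `contextvars`: each "store" becomes a `ContextVar`, and `run_stores` sets every one before invoking the callback. This is an illustrative sketch of the same nesting shape, not a port of the benchmark:

```python
import contextvars

def run_stores(stores, value, cb, idx=0):
    """Set stores[idx:] to `value`, run cb() innermost, then unwind."""
    if idx == len(stores):
        return cb()
    token = stores[idx].set(value)     # enter this store's context
    try:
        return run_stores(stores, value, cb, idx + 1)
    finally:
        stores[idx].reset(token)       # leave it on the way out

stores = [contextvars.ContextVar("store%d" % i) for i in range(10)]
sentinel = object()
# Inside the innermost frame, every store sees the value ...
seen = run_stores(stores, sentinel, lambda: [s.get() for s in stores])
```

After `run_stores` returns, every `reset(token)` has run, so the stores are empty again, mirroring how each `AsyncLocalStorage.run()` scope ends when its callback returns.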
# -*- coding: utf-8 -*-
"""
===============================
Computing source space SNR
===============================
This example shows how to compute and plot source space SNR as in [1]_.
"""
# Author: Padma Sundaram <tottochan@gmail.com>
# Kaisu Lankinen <klankinen@mgh.harvard.edu>
#
# License: BSD (3-clause)
# sphinx_gallery_thumbnail_number = 2
import mne
from mne.datasets import sample
from mne.minimum_norm import make_inverse_operator, apply_inverse
import numpy as np
import matplotlib.pyplot as plt
print(__doc__)
data_path = sample.data_path()
subjects_dir = data_path + '/subjects'
# Read data
fname_evoked = data_path + '/MEG/sample/sample_audvis-ave.fif'
evoked = mne.read_evokeds(fname_evoked, condition='Left Auditory',
baseline=(None, 0))
fname_fwd = data_path + '/MEG/sample/sample_audvis-meg-eeg-oct-6-fwd.fif'
fname_cov = data_path + '/MEG/sample/sample_audvis-cov.fif'
fwd = mne.read_forward_solution(fname_fwd)
cov = mne.read_cov(fname_cov)
# Read inverse operator:
inv_op = make_inverse_operator(evoked.info, fwd, cov, fixed=True, verbose=True)
# Calculate MNE:
snr = 3.0
lambda2 = 1.0 / snr ** 2
stc = apply_inverse(evoked, inv_op, lambda2, 'MNE', verbose=True)
# Calculate SNR in source space:
snr_stc = stc.estimate_snr(evoked.info, fwd, cov)
# Plot an average SNR across source points over time:
ave = np.mean(snr_stc.data, axis=0)
fig, ax = plt.subplots()
ax.plot(evoked.times, ave)
ax.set(xlabel='Time (sec)', ylabel='SNR MEG-EEG')
fig.tight_layout()
# Find time point of maximum SNR:
maxidx = np.argmax(ave)
# Plot SNR on source space at the time point of maximum SNR:
kwargs = dict(initial_time=evoked.times[maxidx], hemi='split',
views=['lat', 'med'], subjects_dir=subjects_dir, size=(600, 600),
clim=dict(kind='value', lims=(-100, -70, -40)),
transparent=True, colormap='viridis')
brain = snr_stc.plot(**kwargs)
###############################################################################
# EEG
# ---
# Next we do the same for EEG and plot the result on the cortex:
evoked_eeg = evoked.copy().pick_types(eeg=True, meg=False)
inv_op_eeg = make_inverse_operator(evoked_eeg.info, fwd, cov, fixed=True,
verbose=True)
stc_eeg = apply_inverse(evoked_eeg, inv_op_eeg, lambda2, 'MNE', verbose=True)
snr_stc_eeg = stc_eeg.estimate_snr(evoked_eeg.info, fwd, cov)
brain = snr_stc_eeg.plot(**kwargs)
###############################################################################
# The same can be done for MEG, which looks more similar to the MEG-EEG case
# than the EEG case does.
#
# References
# ----------
# .. [1] Goldenholz, D. M., Ahlfors, S. P., Hämäläinen, M. S., Sharon, D.,
# Ishitobi, M., Vaina, L. M., & Stufflebeam, S. M. (2009). Mapping the
# Signal-To-Noise-Ratios of Cortical Sources in Magnetoencephalography
# and Electroencephalography. Human Brain Mapping, 30(4), 1077–1086.
# doi:10.1002/hbm.20571 | unknown | codeparrot/codeparrot-clean | ||
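The colour limits used in the plots above (`lims=(-100, -70, -40)`) are on a decibel scale. As a reminder of what those numbers mean, here is the generic power-ratio-to-dB formula; the example values are illustrative and not taken from the MNE internals:

```python
import math

def snr_db(signal_power, noise_power):
    """Power ratio expressed in decibels: 10 * log10(P_signal / P_noise)."""
    return 10.0 * math.log10(signal_power / noise_power)

# A source point whose signal power is 1e-7 of the noise power sits at
# -70 dB, the midpoint of the colour scale used in the plots above.
mid = snr_db(1e-7, 1.0)
```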
# -*- coding: utf-8 -*-
##############################################################################
#
# OpenERP, Open Source Management Solution
# Copyright (C) 2004-2009 Tiny SPRL (<http://tiny.be>).
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
#
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
#
##############################################################################
import sys
import copy
import reportlab
import re
from reportlab.pdfgen import canvas
from reportlab import platypus
import utils
import color
import os
import logging
from lxml import etree
import base64
from reportlab.platypus.doctemplate import ActionFlowable
from openerp.tools.safe_eval import safe_eval as eval
from reportlab.lib.units import inch,cm,mm
from openerp.tools.misc import file_open
from reportlab.pdfbase import pdfmetrics
from reportlab.lib.pagesizes import A4, letter
try:
from cStringIO import StringIO
_hush_pyflakes = [ StringIO ]
except ImportError:
from StringIO import StringIO
_logger = logging.getLogger(__name__)
encoding = 'utf-8'
def _open_image(filename, path=None):
"""Attempt to open a binary file and return the descriptor
"""
if os.path.isfile(filename):
return open(filename, 'rb')
for p in (path or []):
if p and os.path.isabs(p):
fullpath = os.path.join(p, filename)
if os.path.isfile(fullpath):
return open(fullpath, 'rb')
try:
if p:
fullpath = os.path.join(p, filename)
else:
fullpath = filename
return file_open(fullpath)
except IOError:
pass
raise IOError("File %s cannot be found in image path" % filename)
class NumberedCanvas(canvas.Canvas):
def __init__(self, *args, **kwargs):
canvas.Canvas.__init__(self, *args, **kwargs)
self._codes = []
self._flag=False
self._pageCount=0
self._currentPage =0
self._pageCounter=0
self.pages={}
def showPage(self):
self._currentPage +=1
if not self._flag:
self._pageCount += 1
else:
self.pages.update({self._currentPage:self._pageCount})
self._codes.append({'code': self._code, 'stack': self._codeStack})
self._startPage()
self._flag=False
def pageCount(self):
if self.pages.get(self._pageCounter,False):
self._pageNumber=0
self._pageCounter +=1
key=self._pageCounter
if not self.pages.get(key,False):
while not self.pages.get(key,False):
key = key + 1
self.setFont("Helvetica", 8)
self.drawRightString((self._pagesize[0]-30), (self._pagesize[1]-40),
" %(this)i / %(total)i" % {
'this': self._pageNumber+1,
'total': self.pages.get(key,False),
}
)
def save(self):
"""add page info to each page (page x of y)"""
# reset page counter
self._pageNumber = 0
for code in self._codes:
self._code = code['code']
self._codeStack = code['stack']
self.pageCount()
canvas.Canvas.showPage(self)
# self.restoreState()
self._doc.SaveToFile(self._filename, self)
class PageCount(platypus.Flowable):
def __init__(self, story_count=0):
platypus.Flowable.__init__(self)
self.story_count = story_count
def draw(self):
self.canv.beginForm("pageCount%d" % (self.story_count))
self.canv.setFont("Helvetica", utils.unit_get(str(8)))
self.canv.drawString(0, 0, str(self.canv.getPageNumber()))
self.canv.endForm()
class PageReset(platypus.Flowable):
def draw(self):
self.canv._doPageReset = True
class _rml_styles(object,):
def __init__(self, nodes, localcontext):
self.localcontext = localcontext
self.styles = {}
self.styles_obj = {}
self.names = {}
self.table_styles = {}
self.default_style = reportlab.lib.styles.getSampleStyleSheet()
for node in nodes:
for style in node.findall('blockTableStyle'):
self.table_styles[style.get('id')] = self._table_style_get(style)
for style in node.findall('paraStyle'):
sname = style.get('name')
self.styles[sname] = self._para_style_update(style)
self.styles_obj[sname] = reportlab.lib.styles.ParagraphStyle(sname, self.default_style["Normal"], **self.styles[sname])
for variable in node.findall('initialize'):
for name in variable.findall('name'):
self.names[ name.get('id')] = name.get('value')
def _para_style_update(self, node):
data = {}
for attr in ['textColor', 'backColor', 'bulletColor', 'borderColor']:
if node.get(attr):
data[attr] = color.get(node.get(attr))
for attr in ['fontName', 'bulletFontName', 'bulletText']:
if node.get(attr):
data[attr] = node.get(attr)
for attr in ['fontSize', 'leftIndent', 'rightIndent', 'spaceBefore', 'spaceAfter',
'firstLineIndent', 'bulletIndent', 'bulletFontSize', 'leading',
'borderWidth','borderPadding','borderRadius']:
if node.get(attr):
data[attr] = utils.unit_get(node.get(attr))
if node.get('alignment'):
align = {
'right':reportlab.lib.enums.TA_RIGHT,
'center':reportlab.lib.enums.TA_CENTER,
'justify':reportlab.lib.enums.TA_JUSTIFY
}
data['alignment'] = align.get(node.get('alignment').lower(), reportlab.lib.enums.TA_LEFT)
return data
def _table_style_get(self, style_node):
styles = []
for node in style_node:
start = utils.tuple_int_get(node, 'start', (0,0) )
stop = utils.tuple_int_get(node, 'stop', (-1,-1) )
if node.tag=='blockValign':
styles.append(('VALIGN', start, stop, str(node.get('value'))))
elif node.tag=='blockFont':
styles.append(('FONT', start, stop, str(node.get('name'))))
elif node.tag=='blockTextColor':
styles.append(('TEXTCOLOR', start, stop, color.get(str(node.get('colorName')))))
elif node.tag=='blockLeading':
styles.append(('LEADING', start, stop, utils.unit_get(node.get('length'))))
elif node.tag=='blockAlignment':
styles.append(('ALIGNMENT', start, stop, str(node.get('value'))))
elif node.tag=='blockSpan':
styles.append(('SPAN', start, stop))
elif node.tag=='blockLeftPadding':
styles.append(('LEFTPADDING', start, stop, utils.unit_get(node.get('length'))))
elif node.tag=='blockRightPadding':
styles.append(('RIGHTPADDING', start, stop, utils.unit_get(node.get('length'))))
elif node.tag=='blockTopPadding':
styles.append(('TOPPADDING', start, stop, utils.unit_get(node.get('length'))))
elif node.tag=='blockBottomPadding':
styles.append(('BOTTOMPADDING', start, stop, utils.unit_get(node.get('length'))))
elif node.tag=='blockBackground':
styles.append(('BACKGROUND', start, stop, color.get(node.get('colorName'))))
if node.get('size'):
styles.append(('FONTSIZE', start, stop, utils.unit_get(node.get('size'))))
elif node.tag=='lineStyle':
kind = node.get('kind')
kind_list = [ 'GRID', 'BOX', 'OUTLINE', 'INNERGRID', 'LINEBELOW', 'LINEABOVE','LINEBEFORE', 'LINEAFTER' ]
assert kind in kind_list
thick = 1
if node.get('thickness'):
thick = float(node.get('thickness'))
styles.append((kind, start, stop, thick, color.get(node.get('colorName'))))
return platypus.tables.TableStyle(styles)
def para_style_get(self, node):
style = False
sname = node.get('style')
if sname:
if sname in self.styles_obj:
style = self.styles_obj[sname]
else:
                _logger.warning('Style %s not found - using default', sname)
if not style:
style = self.default_style['Normal']
para_update = self._para_style_update(node)
if para_update:
            # update style only if necessary
style = copy.deepcopy(style)
style.__dict__.update(para_update)
return style
class _rml_doc(object):
def __init__(self, node, localcontext=None, images=None, path='.', title=None):
if images is None:
images = {}
if localcontext is None:
localcontext = {}
self.localcontext = localcontext
self.etree = node
self.filename = self.etree.get('filename')
self.images = images
self.path = path
self.title = title
def docinit(self, els):
from reportlab.lib.fonts import addMapping
from reportlab.pdfbase import pdfmetrics
from reportlab.pdfbase.ttfonts import TTFont
for node in els:
for font in node.findall('registerFont'):
name = font.get('fontName').encode('ascii')
fname = font.get('fontFile').encode('ascii')
if name not in pdfmetrics._fonts:
pdfmetrics.registerFont(TTFont(name, fname))
addMapping(name, 0, 0, name) #normal
addMapping(name, 0, 1, name) #italic
addMapping(name, 1, 0, name) #bold
addMapping(name, 1, 1, name) #italic and bold
def setTTFontMapping(self,face, fontname, filename, mode='all'):
from reportlab.lib.fonts import addMapping
from reportlab.pdfbase import pdfmetrics
from reportlab.pdfbase.ttfonts import TTFont
if fontname not in pdfmetrics._fonts:
pdfmetrics.registerFont(TTFont(fontname, filename))
if (mode == 'all'):
addMapping(face, 0, 0, fontname) #normal
addMapping(face, 0, 1, fontname) #italic
addMapping(face, 1, 0, fontname) #bold
addMapping(face, 1, 1, fontname) #italic and bold
elif (mode== 'normal') or (mode == 'regular'):
addMapping(face, 0, 0, fontname) #normal
elif (mode == 'italic'):
addMapping(face, 0, 1, fontname) #italic
elif (mode == 'bold'):
addMapping(face, 1, 0, fontname) #bold
elif (mode == 'bolditalic'):
addMapping(face, 1, 1, fontname) #italic and bold
    def _textual_image(self, node):
        rc = ''
        for n in node:
            rc += (etree.tostring(n) or '') + (n.tail or '')
        return base64.decodestring(rc)
def _images(self, el):
result = {}
for node in el.findall('.//image'):
rc =( node.text or '')
result[node.get('name')] = base64.decodestring(rc)
return result
def render(self, out):
el = self.etree.findall('.//docinit')
if el:
self.docinit(el)
el = self.etree.findall('.//stylesheet')
self.styles = _rml_styles(el,self.localcontext)
el = self.etree.findall('.//images')
if el:
self.images.update( self._images(el[0]) )
el = self.etree.findall('.//template')
if len(el):
pt_obj = _rml_template(self.localcontext, out, el[0], self, images=self.images, path=self.path, title=self.title)
el = utils._child_get(self.etree, self, 'story')
pt_obj.render(el)
else:
self.canvas = canvas.Canvas(out)
pd = self.etree.find('pageDrawing')[0]
pd_obj = _rml_canvas(self.canvas, self.localcontext, None, self, self.images, path=self.path, title=self.title)
pd_obj.render(pd)
self.canvas.showPage()
self.canvas.save()
class _rml_canvas(object):
def __init__(self, canvas, localcontext, doc_tmpl=None, doc=None, images=None, path='.', title=None):
if images is None:
images = {}
self.localcontext = localcontext
self.canvas = canvas
self.styles = doc.styles
self.doc_tmpl = doc_tmpl
self.doc = doc
self.images = images
self.path = path
self.title = title
if self.title:
self.canvas.setTitle(self.title)
def _textual(self, node, x=0, y=0):
text = node.text and node.text.encode('utf-8') or ''
rc = utils._process_text(self, text)
for n in node:
if n.tag == 'seq':
from reportlab.lib.sequencer import getSequencer
seq = getSequencer()
rc += str(seq.next(n.get('id')))
if n.tag == 'pageCount':
if x or y:
self.canvas.translate(x,y)
self.canvas.doForm('pageCount%s' % (self.canvas._storyCount,))
if x or y:
self.canvas.translate(-x,-y)
if n.tag == 'pageNumber':
rc += str(self.canvas.getPageNumber())
rc += utils._process_text(self, n.tail)
return rc.replace('\n','')
def _drawString(self, node):
v = utils.attr_get(node, ['x','y'])
text=self._textual(node, **v)
text = utils.xml2str(text)
self.canvas.drawString(text=text, **v)
def _drawCenteredString(self, node):
v = utils.attr_get(node, ['x','y'])
text=self._textual(node, **v)
text = utils.xml2str(text)
self.canvas.drawCentredString(text=text, **v)
def _drawRightString(self, node):
v = utils.attr_get(node, ['x','y'])
text=self._textual(node, **v)
text = utils.xml2str(text)
self.canvas.drawRightString(text=text, **v)
def _rect(self, node):
if node.get('round'):
self.canvas.roundRect(radius=utils.unit_get(node.get('round')), **utils.attr_get(node, ['x','y','width','height'], {'fill':'bool','stroke':'bool'}))
else:
self.canvas.rect(**utils.attr_get(node, ['x','y','width','height'], {'fill':'bool','stroke':'bool'}))
def _ellipse(self, node):
x1 = utils.unit_get(node.get('x'))
x2 = utils.unit_get(node.get('width'))
y1 = utils.unit_get(node.get('y'))
y2 = utils.unit_get(node.get('height'))
self.canvas.ellipse(x1,y1,x2,y2, **utils.attr_get(node, [], {'fill':'bool','stroke':'bool'}))
    def _curves(self, node):
        line_str = node.text.split()
        while len(line_str) > 7:
            self.canvas.bezier(*[utils.unit_get(l) for l in line_str[0:8]])
            line_str = line_str[8:]
def _lines(self, node):
line_str = node.text.split()
lines = []
while len(line_str)>3:
lines.append([utils.unit_get(l) for l in line_str[0:4]])
line_str = line_str[4:]
self.canvas.lines(lines)
def _grid(self, node):
xlist = [utils.unit_get(s) for s in node.get('xs').split(',')]
ylist = [utils.unit_get(s) for s in node.get('ys').split(',')]
self.canvas.grid(xlist, ylist)
def _translate(self, node):
dx = utils.unit_get(node.get('dx')) or 0
dy = utils.unit_get(node.get('dy')) or 0
self.canvas.translate(dx,dy)
def _circle(self, node):
self.canvas.circle(x_cen=utils.unit_get(node.get('x')), y_cen=utils.unit_get(node.get('y')), r=utils.unit_get(node.get('radius')), **utils.attr_get(node, [], {'fill':'bool','stroke':'bool'}))
def _place(self, node):
flows = _rml_flowable(self.doc, self.localcontext, images=self.images, path=self.path, title=self.title).render(node)
infos = utils.attr_get(node, ['x','y','width','height'])
infos['y']+=infos['height']
for flow in flows:
w,h = flow.wrap(infos['width'], infos['height'])
if w<=infos['width'] and h<=infos['height']:
infos['y']-=h
flow.drawOn(self.canvas,infos['x'],infos['y'])
infos['height']-=h
else:
                raise ValueError("Not enough space")
def _line_mode(self, node):
ljoin = {'round':1, 'mitered':0, 'bevelled':2}
lcap = {'default':0, 'round':1, 'square':2}
if node.get('width'):
self.canvas.setLineWidth(utils.unit_get(node.get('width')))
if node.get('join'):
self.canvas.setLineJoin(ljoin[node.get('join')])
if node.get('cap'):
self.canvas.setLineCap(lcap[node.get('cap')])
        if node.get('miterLimit'):
            self.canvas.setMiterLimit(utils.unit_get(node.get('miterLimit')))
        if node.get('dash'):
            dashes = [utils.unit_get(d) for d in node.get('dash').split(',')]
            self.canvas.setDash(dashes)
def _image(self, node):
import urllib
import urlparse
from reportlab.lib.utils import ImageReader
nfile = node.get('file')
if not nfile:
if node.get('name'):
image_data = self.images[node.get('name')]
_logger.debug("Image %s used", node.get('name'))
s = StringIO(image_data)
else:
newtext = node.text
if self.localcontext:
res = utils._regex.findall(newtext)
for key in res:
newtext = eval(key, {}, self.localcontext) or ''
image_data = None
if newtext:
image_data = base64.decodestring(newtext)
if image_data:
s = StringIO(image_data)
else:
_logger.debug("No image data!")
return False
else:
if nfile in self.images:
s = StringIO(self.images[nfile])
else:
try:
up = urlparse.urlparse(str(nfile))
except ValueError:
up = False
if up and up.scheme:
# RFC: do we really want to open external URLs?
# Are we safe from cross-site scripting or attacks?
_logger.debug("Retrieve image from %s", nfile)
u = urllib.urlopen(str(nfile))
s = StringIO(u.read())
else:
_logger.debug("Open image file %s ", nfile)
s = _open_image(nfile, path=self.path)
try:
img = ImageReader(s)
(sx,sy) = img.getSize()
_logger.debug("Image is %dx%d", sx, sy)
args = { 'x': 0.0, 'y': 0.0 }
for tag in ('width','height','x','y'):
if node.get(tag):
args[tag] = utils.unit_get(node.get(tag))
            if ('width' in args) and ('height' not in args):
                args['height'] = sy * args['width'] / sx
            elif ('height' in args) and ('width' not in args):
                args['width'] = sx * args['height'] / sy
            elif ('width' in args) and ('height' in args):
                # fit inside the requested box while preserving aspect ratio
                if (float(args['width']) / args['height']) > (float(sx) / sy):
                    args['width'] = sx * args['height'] / sy
                else:
                    args['height'] = sy * args['width'] / sx
self.canvas.drawImage(img, **args)
finally:
s.close()
# self.canvas._doc.SaveToFile(self.canvas._filename, self.canvas)
def _path(self, node):
self.path = self.canvas.beginPath()
self.path.moveTo(**utils.attr_get(node, ['x','y']))
for n in utils._child_get(node, self):
if not n.text :
if n.tag=='moveto':
vals = utils.text_get(n).split()
self.path.moveTo(utils.unit_get(vals[0]), utils.unit_get(vals[1]))
elif n.tag=='curvesto':
vals = utils.text_get(n).split()
while len(vals)>5:
pos=[]
while len(pos)<6:
pos.append(utils.unit_get(vals.pop(0)))
self.path.curveTo(*pos)
elif n.text:
data = n.text.split() # Not sure if I must merge all TEXT_NODE ?
while len(data)>1:
x = utils.unit_get(data.pop(0))
y = utils.unit_get(data.pop(0))
self.path.lineTo(x,y)
if (not node.get('close')) or utils.bool_get(node.get('close')):
self.path.close()
self.canvas.drawPath(self.path, **utils.attr_get(node, [], {'fill':'bool','stroke':'bool'}))
def setFont(self, node):
fontname = node.get('name')
        if fontname not in pdfmetrics.getRegisteredFontNames() \
                and fontname not in pdfmetrics.standardFonts:
# let reportlab attempt to find it
try:
pdfmetrics.getFont(fontname)
except Exception:
_logger.debug('Could not locate font %s, substituting default: %s',
fontname,
self.canvas._fontname)
fontname = self.canvas._fontname
return self.canvas.setFont(fontname, utils.unit_get(node.get('size')))
def render(self, node):
tags = {
'drawCentredString': self._drawCenteredString,
'drawRightString': self._drawRightString,
'drawString': self._drawString,
'rect': self._rect,
'ellipse': self._ellipse,
'lines': self._lines,
'grid': self._grid,
'curves': self._curves,
'fill': lambda node: self.canvas.setFillColor(color.get(node.get('color'))),
'stroke': lambda node: self.canvas.setStrokeColor(color.get(node.get('color'))),
'setFont': self.setFont ,
'place': self._place,
'circle': self._circle,
'lineMode': self._line_mode,
'path': self._path,
'rotate': lambda node: self.canvas.rotate(float(node.get('degrees'))),
'translate': self._translate,
'image': self._image
}
for n in utils._child_get(node, self):
if n.tag in tags:
tags[n.tag](n)
class _rml_draw(object):
def __init__(self, localcontext, node, styles, images=None, path='.', title=None):
if images is None:
images = {}
self.localcontext = localcontext
self.node = node
self.styles = styles
self.canvas = None
self.images = images
self.path = path
self.canvas_title = title
def render(self, canvas, doc):
canvas.saveState()
cnv = _rml_canvas(canvas, self.localcontext, doc, self.styles, images=self.images, path=self.path, title=self.canvas_title)
cnv.render(self.node)
canvas.restoreState()
class _rml_Illustration(platypus.flowables.Flowable):
def __init__(self, node, localcontext, styles, self2):
self.localcontext = (localcontext or {}).copy()
self.node = node
self.styles = styles
self.width = utils.unit_get(node.get('width'))
self.height = utils.unit_get(node.get('height'))
self.self2 = self2
def wrap(self, *args):
return (self.width, self.height)
def draw(self):
drw = _rml_draw(self.localcontext ,self.node,self.styles, images=self.self2.images, path=self.self2.path, title=self.self2.title)
drw.render(self.canv, None)
class _rml_flowable(object):
def __init__(self, doc, localcontext, images=None, path='.', title=None):
if images is None:
images = {}
self.localcontext = localcontext
self.doc = doc
self.styles = doc.styles
self.images = images
self.path = path
self.title = title
def _textual(self, node):
rc1 = utils._process_text(self, node.text or '')
for n in utils._child_get(node,self):
txt_n = copy.deepcopy(n)
for key in txt_n.attrib.keys():
if key in ('rml_except', 'rml_loop', 'rml_tag'):
del txt_n.attrib[key]
if not n.tag == 'bullet':
txt_n.text = utils.xml2str(self._textual(n))
txt_n.tail = n.tail and utils.xml2str(utils._process_text(self, n.tail.replace('\n',''))) or ''
rc1 += etree.tostring(txt_n)
return rc1
def _table(self, node):
children = utils._child_get(node,self,'tr')
if not children:
return None
length = 0
colwidths = None
rowheights = None
data = []
styles = []
posy = 0
for tr in children:
paraStyle = None
if tr.get('style'):
st = copy.deepcopy(self.styles.table_styles[tr.get('style')])
for si in range(len(st._cmds)):
s = list(st._cmds[si])
s[1] = (s[1][0],posy)
s[2] = (s[2][0],posy)
st._cmds[si] = tuple(s)
styles.append(st)
if tr.get('paraStyle'):
paraStyle = self.styles.styles[tr.get('paraStyle')]
data2 = []
posx = 0
for td in utils._child_get(tr, self,'td'):
                if td.get('style'):
                    st = copy.deepcopy(self.styles.table_styles[td.get('style')])
                    # TableStyle commands are tuples (immutable); rebuild each
                    # command with the current cell coordinates, as done for rows.
                    for si in range(len(st._cmds)):
                        s = list(st._cmds[si])
                        s[1] = (posx, posy)
                        s[2] = (posx, posy)
                        st._cmds[si] = tuple(s)
                    styles.append(st)
styles.append(st)
if td.get('paraStyle'):
# TODO: merge styles
paraStyle = self.styles.styles[td.get('paraStyle')]
posx += 1
flow = []
for n in utils._child_get(td, self):
if n.tag == etree.Comment:
n.text = ''
continue
fl = self._flowable(n, extra_style=paraStyle)
if isinstance(fl,list):
flow += fl
else:
flow.append( fl )
if not len(flow):
flow = self._textual(td)
data2.append( flow )
if len(data2)>length:
length=len(data2)
for ab in data:
while len(ab)<length:
ab.append('')
while len(data2)<length:
data2.append('')
data.append( data2 )
posy += 1
if node.get('colWidths'):
assert length == len(node.get('colWidths').split(','))
colwidths = [utils.unit_get(f.strip()) for f in node.get('colWidths').split(',')]
if node.get('rowHeights'):
rowheights = [utils.unit_get(f.strip()) for f in node.get('rowHeights').split(',')]
if len(rowheights) == 1:
rowheights = rowheights[0]
table = platypus.LongTable(data = data, colWidths=colwidths, rowHeights=rowheights, **(utils.attr_get(node, ['splitByRow'] ,{'repeatRows':'int','repeatCols':'int'})))
if node.get('style'):
table.setStyle(self.styles.table_styles[node.get('style')])
for s in styles:
table.setStyle(s)
return table
def _illustration(self, node):
return _rml_Illustration(node, self.localcontext, self.styles, self)
def _textual_image(self, node):
return base64.decodestring(node.text)
def _pto(self, node):
sub_story = []
pto_header = None
pto_trailer = None
for node in utils._child_get(node, self):
if node.tag == etree.Comment:
node.text = ''
continue
elif node.tag=='pto_header':
pto_header = self.render(node)
elif node.tag=='pto_trailer':
pto_trailer = self.render(node)
else:
flow = self._flowable(node)
if flow:
if isinstance(flow,list):
sub_story = sub_story + flow
else:
sub_story.append(flow)
return platypus.flowables.PTOContainer(sub_story, trailer=pto_trailer, header=pto_header)
def _flowable(self, node, extra_style=None):
if node.tag=='pto':
return self._pto(node)
if node.tag=='para':
style = self.styles.para_style_get(node)
if extra_style:
style.__dict__.update(extra_style)
result = []
for i in self._textual(node).split('\n'):
result.append(platypus.Paragraph(i, style, **(utils.attr_get(node, [], {'bulletText':'str'}))))
return result
elif node.tag=='barCode':
try:
from reportlab.graphics.barcode import code128
from reportlab.graphics.barcode import code39
from reportlab.graphics.barcode import code93
from reportlab.graphics.barcode import common
from reportlab.graphics.barcode import fourstate
from reportlab.graphics.barcode import usps
from reportlab.graphics.barcode import createBarcodeDrawing
except ImportError:
_logger.warning("Cannot use barcode renderers:", exc_info=True)
return None
args = utils.attr_get(node, [], {'ratio':'float','xdim':'unit','height':'unit','checksum':'int','quiet':'int','width':'unit','stop':'bool','bearers':'int','barWidth':'float','barHeight':'float'})
codes = {
'codabar': lambda x: common.Codabar(x, **args),
'code11': lambda x: common.Code11(x, **args),
'code128': lambda x: code128.Code128(str(x), **args),
'standard39': lambda x: code39.Standard39(str(x), **args),
'standard93': lambda x: code93.Standard93(str(x), **args),
'i2of5': lambda x: common.I2of5(x, **args),
'extended39': lambda x: code39.Extended39(str(x), **args),
'extended93': lambda x: code93.Extended93(str(x), **args),
'msi': lambda x: common.MSI(x, **args),
'fim': lambda x: usps.FIM(x, **args),
'postnet': lambda x: usps.POSTNET(x, **args),
'ean13': lambda x: createBarcodeDrawing('EAN13', value=str(x), **args),
'qrcode': lambda x: createBarcodeDrawing('QR', value=x, **args),
}
code = 'code128'
if node.get('code'):
code = node.get('code').lower()
return codes[code](self._textual(node))
elif node.tag=='name':
self.styles.names[ node.get('id')] = node.get('value')
return None
elif node.tag=='xpre':
style = self.styles.para_style_get(node)
return platypus.XPreformatted(self._textual(node), style, **(utils.attr_get(node, [], {'bulletText':'str','dedent':'int','frags':'int'})))
elif node.tag=='pre':
style = self.styles.para_style_get(node)
return platypus.Preformatted(self._textual(node), style, **(utils.attr_get(node, [], {'bulletText':'str','dedent':'int'})))
elif node.tag=='illustration':
return self._illustration(node)
elif node.tag=='blockTable':
return self._table(node)
elif node.tag=='title':
styles = reportlab.lib.styles.getSampleStyleSheet()
style = styles['Title']
return platypus.Paragraph(self._textual(node), style, **(utils.attr_get(node, [], {'bulletText':'str'})))
elif re.match('^h([1-9]+[0-9]*)$', (node.tag or '')):
styles = reportlab.lib.styles.getSampleStyleSheet()
style = styles['Heading'+str(node.tag[1:])]
return platypus.Paragraph(self._textual(node), style, **(utils.attr_get(node, [], {'bulletText':'str'})))
elif node.tag=='image':
image_data = False
if not node.get('file'):
if node.get('name'):
if node.get('name') in self.doc.images:
_logger.debug("Image %s read ", node.get('name'))
image_data = self.doc.images[node.get('name')].read()
else:
_logger.warning("Image %s not defined", node.get('name'))
return False
else:
import base64
newtext = node.text
if self.localcontext:
newtext = utils._process_text(self, node.text or '')
image_data = base64.decodestring(newtext)
if not image_data:
_logger.debug("No inline image data")
return False
image = StringIO(image_data)
else:
_logger.debug("Image get from file %s", node.get('file'))
image = _open_image(node.get('file'), path=self.doc.path)
return platypus.Image(image, mask=(250,255,250,255,250,255), **(utils.attr_get(node, ['width','height'])))
elif node.tag=='spacer':
if node.get('width'):
width = utils.unit_get(node.get('width'))
else:
width = utils.unit_get('1cm')
length = utils.unit_get(node.get('length'))
return platypus.Spacer(width=width, height=length)
elif node.tag=='section':
return self.render(node)
elif node.tag == 'pageNumberReset':
return PageReset()
elif node.tag in ('pageBreak', 'nextPage'):
return platypus.PageBreak()
elif node.tag=='condPageBreak':
return platypus.CondPageBreak(**(utils.attr_get(node, ['height'])))
elif node.tag=='setNextTemplate':
return platypus.NextPageTemplate(str(node.get('name')))
elif node.tag=='nextFrame':
return platypus.CondPageBreak(1000) # TODO: change the 1000 !
elif node.tag == 'setNextFrame':
from reportlab.platypus.doctemplate import NextFrameFlowable
return NextFrameFlowable(str(node.get('name')))
elif node.tag == 'currentFrame':
from reportlab.platypus.doctemplate import CurrentFrameFlowable
return CurrentFrameFlowable(str(node.get('name')))
elif node.tag == 'frameEnd':
return EndFrameFlowable()
elif node.tag == 'hr':
width_hr=node.get('width') or '100%'
color_hr=node.get('color') or 'black'
thickness_hr=node.get('thickness') or 1
lineCap_hr=node.get('lineCap') or 'round'
return platypus.flowables.HRFlowable(width=width_hr,color=color.get(color_hr),thickness=float(thickness_hr),lineCap=str(lineCap_hr))
else:
sys.stderr.write('Warning: flowable not yet implemented: %s !\n' % (node.tag,))
return None
def render(self, node_story):
def process_story(node_story):
sub_story = []
for node in utils._child_get(node_story, self):
if node.tag == etree.Comment:
node.text = ''
continue
flow = self._flowable(node)
if flow:
if isinstance(flow,list):
sub_story = sub_story + flow
else:
sub_story.append(flow)
return sub_story
return process_story(node_story)
class EndFrameFlowable(ActionFlowable):
def __init__(self,resume=0):
ActionFlowable.__init__(self,('frameEnd',resume))
class TinyDocTemplate(platypus.BaseDocTemplate):
def beforeDocument(self):
# Store some useful value directly inside canvas, so it's available
# on flowable drawing (needed for proper PageCount handling)
self.canv._doPageReset = False
self.canv._storyCount = 0
def ___handle_pageBegin(self):
self.page = self.page + 1
self.pageTemplate.beforeDrawPage(self.canv,self)
self.pageTemplate.checkPageSize(self.canv,self)
self.pageTemplate.onPage(self.canv,self)
for f in self.pageTemplate.frames: f._reset()
self.beforePage()
self._curPageFlowableCount = 0
if hasattr(self,'_nextFrameIndex'):
del self._nextFrameIndex
for f in self.pageTemplate.frames:
if f.id == 'first':
self.frame = f
break
self.handle_frameBegin()
def afterPage(self):
if self.canv._doPageReset:
            # Following a <pageReset/> tag:
            # - reset the page number to 0
            # - add a new PageCount flowable (relative to the current story
            #   number), except for NumberedCanvas, which handles the page
            #   count itself.
            # NOTE: the _rml_template render() method adds a PageReset flowable
            # at the end of each story, so we pass here at least once per story.
if not isinstance(self.canv, NumberedCanvas):
self.handle_flowable([ PageCount(story_count=self.canv._storyCount) ])
self.canv._pageCount = self.page
self.page = 0
self.canv._flag = True
self.canv._pageNumber = 0
self.canv._doPageReset = False
self.canv._storyCount += 1
class _rml_template(object):
def __init__(self, localcontext, out, node, doc, images=None, path='.', title=None):
if images is None:
images = {}
if not localcontext:
localcontext={'internal_header':True}
self.localcontext = localcontext
self.images= images
self.path = path
self.title = title
pagesize_map = {'a4': A4,
'us_letter': letter
}
pageSize = A4
if self.localcontext.get('company'):
pageSize = pagesize_map.get(self.localcontext.get('company').paper_format, A4)
if node.get('pageSize'):
ps = map(lambda x:x.strip(), node.get('pageSize').replace(')', '').replace('(', '').split(','))
pageSize = ( utils.unit_get(ps[0]),utils.unit_get(ps[1]) )
self.doc_tmpl = TinyDocTemplate(out, pagesize=pageSize, **utils.attr_get(node, ['leftMargin','rightMargin','topMargin','bottomMargin'], {'allowSplitting':'int','showBoundary':'bool','rotation':'int','title':'str','author':'str'}))
self.page_templates = []
self.styles = doc.styles
self.doc = doc
self.image=[]
pts = node.findall('pageTemplate')
for pt in pts:
frames = []
for frame_el in pt.findall('frame'):
frame = platypus.Frame( **(utils.attr_get(frame_el, ['x1','y1', 'width','height', 'leftPadding', 'rightPadding', 'bottomPadding', 'topPadding'], {'id':'str', 'showBoundary':'bool'})) )
if utils.attr_get(frame_el, ['last']):
frame.lastFrame = True
frames.append( frame )
try :
gr = pt.findall('pageGraphics')\
or pt[1].findall('pageGraphics')
except Exception: # FIXME: be even more specific, perhaps?
gr=''
if len(gr):
# self.image=[ n for n in utils._child_get(gr[0], self) if n.tag=='image' or not self.localcontext]
drw = _rml_draw(self.localcontext,gr[0], self.doc, images=images, path=self.path, title=self.title)
self.page_templates.append( platypus.PageTemplate(frames=frames, onPage=drw.render, **utils.attr_get(pt, [], {'id':'str'}) ))
else:
drw = _rml_draw(self.localcontext,node,self.doc,title=self.title)
self.page_templates.append( platypus.PageTemplate(frames=frames,onPage=drw.render, **utils.attr_get(pt, [], {'id':'str'}) ))
self.doc_tmpl.addPageTemplates(self.page_templates)
def render(self, node_stories):
        if self.localcontext and not self.localcontext.get('internal_header', False):
            # use pop() so a missing key does not raise KeyError
            self.localcontext.pop('internal_header', None)
fis = []
r = _rml_flowable(self.doc,self.localcontext, images=self.images, path=self.path, title=self.title)
story_cnt = 0
for node_story in node_stories:
if story_cnt > 0:
fis.append(platypus.PageBreak())
fis += r.render(node_story)
# Reset Page Number with new story tag
fis.append(PageReset())
story_cnt += 1
if self.localcontext and self.localcontext.get('internal_header',False):
self.doc_tmpl.afterFlowable(fis)
self.doc_tmpl.build(fis,canvasmaker=NumberedCanvas)
else:
self.doc_tmpl.build(fis)
def parseNode(rml, localcontext=None, fout=None, images=None, path='.', title=None):
node = etree.XML(rml)
r = _rml_doc(node, localcontext, images, path, title=title)
#try to override some font mappings
try:
from customfonts import SetCustomFonts
SetCustomFonts(r)
except ImportError:
# means there is no custom fonts mapping in this system.
pass
except Exception:
_logger.warning('Cannot set font mapping', exc_info=True)
pass
fp = StringIO()
r.render(fp)
return fp.getvalue()
def parseString(rml, localcontext=None, fout=None, images=None, path='.', title=None):
node = etree.XML(rml)
r = _rml_doc(node, localcontext, images, path, title=title)
#try to override some font mappings
try:
from customfonts import SetCustomFonts
SetCustomFonts(r)
except Exception:
pass
if fout:
        fp = open(fout, 'wb')
r.render(fp)
fp.close()
return fout
else:
fp = StringIO()
r.render(fp)
return fp.getvalue()
def trml2pdf_help():
print 'Usage: trml2pdf input.rml >output.pdf'
print 'Render the standard input (RML) and output a PDF file'
sys.exit(0)
if __name__=="__main__":
if len(sys.argv)>1:
if sys.argv[1]=='--help':
trml2pdf_help()
        print parseString(open(sys.argv[1], 'r').read()),
else:
print 'Usage: trml2pdf input.rml >output.pdf'
print 'Try \'trml2pdf --help\' for more information.'
# vim:expandtab:smartindent:tabstop=4:softtabstop=4:shiftwidth=4: | unknown | codeparrot/codeparrot-clean | ||
#!/usr/bin/python
# (c) 2017, NetApp, Inc
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['deprecated'],
'supported_by': 'community'}
DOCUMENTATION = '''
module: sf_volume_access_group_manager
deprecated:
removed_in: "2.11"
why: This Module has been replaced
alternative: please use M(na_elementsw_access_group)
short_description: Manage SolidFire Volume Access Groups
extends_documentation_fragment:
- netapp.solidfire
version_added: '2.3'
author: Sumit Kumar (@timuster) <sumit4@netapp.com>
description:
- Create, destroy, or update volume access groups on SolidFire
options:
state:
description:
- Whether the specified volume access group should exist or not.
required: true
choices: ['present', 'absent']
name:
description:
- Name of the volume access group. It is not required to be unique, but recommended.
required: true
initiators:
description:
- List of initiators to include in the volume access group. If unspecified, the access group will start out without configured initiators.
volumes:
description:
- List of volumes to initially include in the volume access group. If unspecified, the access group will start without any volumes.
virtual_network_id:
description:
- The ID of the SolidFire Virtual Network ID to associate the volume access group with.
virtual_network_tags:
description:
- The ID of the VLAN Virtual Network Tag to associate the volume access group with.
attributes:
description: List of Name/Value pairs in JSON object format.
volume_access_group_id:
description:
- The ID of the volume access group to modify or delete.
'''
EXAMPLES = """
- name: Create Volume Access Group
sf_volume_access_group_manager:
hostname: "{{ solidfire_hostname }}"
username: "{{ solidfire_username }}"
password: "{{ solidfire_password }}"
state: present
name: AnsibleVolumeAccessGroup
volumes: [7,8]
- name: Modify Volume Access Group
sf_volume_access_group_manager:
hostname: "{{ solidfire_hostname }}"
username: "{{ solidfire_username }}"
password: "{{ solidfire_password }}"
state: present
volume_access_group_id: 1
name: AnsibleVolumeAccessGroup-Renamed
attributes: {"volumes": [1,2,3], "virtual_network_id": 12345}
- name: Delete Volume Access Group
sf_volume_access_group_manager:
hostname: "{{ solidfire_hostname }}"
username: "{{ solidfire_username }}"
password: "{{ solidfire_password }}"
state: absent
volume_access_group_id: 1
"""
RETURN = """
"""
import traceback
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils._text import to_native
import ansible.module_utils.netapp as netapp_utils
HAS_SF_SDK = netapp_utils.has_sf_sdk()
class SolidFireVolumeAccessGroup(object):
def __init__(self):
self.argument_spec = netapp_utils.ontap_sf_host_argument_spec()
self.argument_spec.update(dict(
state=dict(required=True, choices=['present', 'absent']),
name=dict(required=True, type='str'),
volume_access_group_id=dict(required=False, type='int', default=None),
initiators=dict(required=False, type='list', default=None),
volumes=dict(required=False, type='list', default=None),
virtual_network_id=dict(required=False, type='list', default=None),
virtual_network_tags=dict(required=False, type='list', default=None),
attributes=dict(required=False, type='dict', default=None),
))
self.module = AnsibleModule(
argument_spec=self.argument_spec,
supports_check_mode=True
)
p = self.module.params
# set up state variables
self.state = p['state']
self.name = p['name']
self.volume_access_group_id = p['volume_access_group_id']
self.initiators = p['initiators']
self.volumes = p['volumes']
self.virtual_network_id = p['virtual_network_id']
self.virtual_network_tags = p['virtual_network_tags']
self.attributes = p['attributes']
if HAS_SF_SDK is False:
self.module.fail_json(msg="Unable to import the SolidFire Python SDK")
else:
self.sfe = netapp_utils.create_sf_connection(module=self.module)
def get_volume_access_group(self):
access_groups_list = self.sfe.list_volume_access_groups()
for group in access_groups_list.volume_access_groups:
if group.name == self.name:
# Update self.volume_access_group_id:
if self.volume_access_group_id is not None:
if group.volume_access_group_id == self.volume_access_group_id:
return group
else:
self.volume_access_group_id = group.volume_access_group_id
return group
return None
def create_volume_access_group(self):
try:
self.sfe.create_volume_access_group(name=self.name,
initiators=self.initiators,
volumes=self.volumes,
virtual_network_id=self.virtual_network_id,
virtual_network_tags=self.virtual_network_tags,
attributes=self.attributes)
except Exception as e:
self.module.fail_json(msg="Error creating volume access group %s: %s" %
(self.name, to_native(e)), exception=traceback.format_exc())
def delete_volume_access_group(self):
try:
self.sfe.delete_volume_access_group(volume_access_group_id=self.volume_access_group_id)
except Exception as e:
self.module.fail_json(msg="Error deleting volume access group %s: %s" %
(self.volume_access_group_id, to_native(e)),
exception=traceback.format_exc())
def update_volume_access_group(self):
try:
self.sfe.modify_volume_access_group(volume_access_group_id=self.volume_access_group_id,
virtual_network_id=self.virtual_network_id,
virtual_network_tags=self.virtual_network_tags,
name=self.name,
initiators=self.initiators,
volumes=self.volumes,
attributes=self.attributes)
except Exception as e:
self.module.fail_json(msg="Error updating volume access group %s: %s" %
(self.volume_access_group_id, to_native(e)), exception=traceback.format_exc())
def apply(self):
changed = False
group_exists = False
update_group = False
group_detail = self.get_volume_access_group()
if group_detail:
group_exists = True
if self.state == 'absent':
changed = True
elif self.state == 'present':
# Check if we need to update the group
if self.volumes is not None and group_detail.volumes != self.volumes:
update_group = True
changed = True
elif self.initiators is not None and group_detail.initiators != self.initiators:
update_group = True
changed = True
elif self.virtual_network_id is not None or self.virtual_network_tags is not None or \
self.attributes is not None:
update_group = True
changed = True
else:
if self.state == 'present':
changed = True
if changed:
if self.module.check_mode:
pass
else:
if self.state == 'present':
if not group_exists:
self.create_volume_access_group()
elif update_group:
self.update_volume_access_group()
elif self.state == 'absent':
self.delete_volume_access_group()
self.module.exit_json(changed=changed)
def main():
v = SolidFireVolumeAccessGroup()
v.apply()
if __name__ == '__main__':
main() | unknown | codeparrot/codeparrot-clean | ||
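The `apply()` method above follows the standard Ansible idempotency pattern: look up the current state, decide whether a change is needed, and perform the mutation only when not in check mode. A minimal, library-free Python sketch of that decision logic (function and argument names are illustrative, not part of the module above):

```python
def apply(state, existing, check_mode=False):
    """Return (changed, action) for an idempotent present/absent resource.

    `existing` is the looked-up resource (or None if it does not exist).
    In check mode we still report `changed`, but perform no action.
    """
    if state == "present":
        # Create only when the resource is missing; otherwise no-op.
        changed = existing is None
        action = "create" if changed else None
    else:  # state == "absent"
        # Delete only when the resource actually exists.
        changed = existing is not None
        action = "delete" if changed else None
    if check_mode:
        action = None  # report the would-be change without performing it
    return changed, action
```

An update path (compare desired fields against `existing` and choose `"update"`) slots into the `"present"` branch the same way the real module's `update_group` flag does.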
/*-------------------------------------------------------------------------
*
* binaryheap.c
* A simple binary heap implementation
*
* Portions Copyright (c) 2012-2026, PostgreSQL Global Development Group
*
* IDENTIFICATION
* src/common/binaryheap.c
*
*-------------------------------------------------------------------------
*/
#ifdef FRONTEND
#include "postgres_fe.h"
#else
#include "postgres.h"
#endif
#ifdef FRONTEND
#include "common/logging.h"
#endif
#include "lib/binaryheap.h"
static void sift_down(binaryheap *heap, int node_off);
static void sift_up(binaryheap *heap, int node_off);
/*
* binaryheap_allocate
*
* Returns a pointer to a newly-allocated heap that has the capacity to
* store the given number of nodes, with the heap property defined by
* the given comparator function, which will be invoked with the additional
* argument specified by 'arg'.
*/
binaryheap *
binaryheap_allocate(int capacity, binaryheap_comparator compare, void *arg)
{
int sz;
binaryheap *heap;
sz = offsetof(binaryheap, bh_nodes) + sizeof(bh_node_type) * capacity;
heap = (binaryheap *) palloc(sz);
heap->bh_space = capacity;
heap->bh_compare = compare;
heap->bh_arg = arg;
heap->bh_size = 0;
heap->bh_has_heap_property = true;
return heap;
}
/*
* binaryheap_reset
*
* Resets the heap to an empty state, losing its data content but not the
* parameters passed at allocation.
*/
void
binaryheap_reset(binaryheap *heap)
{
heap->bh_size = 0;
heap->bh_has_heap_property = true;
}
/*
* binaryheap_free
*
* Releases memory used by the given binaryheap.
*/
void
binaryheap_free(binaryheap *heap)
{
pfree(heap);
}
/*
* These utility functions return the offset of the left child, right
* child, and parent of the node at the given index, respectively.
*
* The heap is represented as an array of nodes, with the root node
* stored at index 0. The left child of node i is at index 2*i+1, and
* the right child at 2*i+2. The parent of node i is at index (i-1)/2.
*/
static inline int
left_offset(int i)
{
return 2 * i + 1;
}
static inline int
right_offset(int i)
{
return 2 * i + 2;
}
static inline int
parent_offset(int i)
{
return (i - 1) / 2;
}
/*
* binaryheap_add_unordered
*
* Adds the given datum to the end of the heap's list of nodes in O(1) without
* preserving the heap property. This is a convenience to add elements quickly
* to a new heap. To obtain a valid heap, one must call binaryheap_build()
* afterwards.
*/
void
binaryheap_add_unordered(binaryheap *heap, bh_node_type d)
{
if (heap->bh_size >= heap->bh_space)
{
#ifdef FRONTEND
pg_fatal("out of binary heap slots");
#else
elog(ERROR, "out of binary heap slots");
#endif
}
heap->bh_has_heap_property = false;
heap->bh_nodes[heap->bh_size] = d;
heap->bh_size++;
}
/*
* binaryheap_build
*
* Assembles a valid heap in O(n) from the nodes added by
* binaryheap_add_unordered(). Not needed otherwise.
*/
void
binaryheap_build(binaryheap *heap)
{
int i;
for (i = parent_offset(heap->bh_size - 1); i >= 0; i--)
sift_down(heap, i);
heap->bh_has_heap_property = true;
}
/*
* binaryheap_add
*
* Adds the given datum to the heap in O(log n) time, while preserving
* the heap property.
*/
void
binaryheap_add(binaryheap *heap, bh_node_type d)
{
if (heap->bh_size >= heap->bh_space)
{
#ifdef FRONTEND
pg_fatal("out of binary heap slots");
#else
elog(ERROR, "out of binary heap slots");
#endif
}
heap->bh_nodes[heap->bh_size] = d;
heap->bh_size++;
sift_up(heap, heap->bh_size - 1);
}
/*
* binaryheap_first
*
* Returns a pointer to the first (root, topmost) node in the heap
* without modifying the heap. The caller must ensure that this
* routine is not used on an empty heap. Always O(1).
*/
bh_node_type
binaryheap_first(binaryheap *heap)
{
Assert(!binaryheap_empty(heap) && heap->bh_has_heap_property);
return heap->bh_nodes[0];
}
/*
* binaryheap_remove_first
*
* Removes the first (root, topmost) node in the heap and returns a
* pointer to it after rebalancing the heap. The caller must ensure
* that this routine is not used on an empty heap. O(log n) worst
* case.
*/
bh_node_type
binaryheap_remove_first(binaryheap *heap)
{
bh_node_type result;
Assert(!binaryheap_empty(heap) && heap->bh_has_heap_property);
/* extract the root node, which will be the result */
result = heap->bh_nodes[0];
/* easy if heap contains one element */
if (heap->bh_size == 1)
{
heap->bh_size--;
return result;
}
/*
* Remove the last node, placing it in the vacated root entry, and sift
* the new root node down to its correct position.
*/
heap->bh_nodes[0] = heap->bh_nodes[--heap->bh_size];
sift_down(heap, 0);
return result;
}
/*
* binaryheap_remove_node
*
* Removes the nth (zero based) node from the heap. The caller must ensure
* that there are at least (n + 1) nodes in the heap. O(log n) worst case.
*/
void
binaryheap_remove_node(binaryheap *heap, int n)
{
int cmp;
Assert(!binaryheap_empty(heap) && heap->bh_has_heap_property);
Assert(n >= 0 && n < heap->bh_size);
/* compare last node to the one that is being removed */
cmp = heap->bh_compare(heap->bh_nodes[--heap->bh_size],
heap->bh_nodes[n],
heap->bh_arg);
/* remove the last node, placing it in the vacated entry */
heap->bh_nodes[n] = heap->bh_nodes[heap->bh_size];
/* sift as needed to preserve the heap property */
if (cmp > 0)
sift_up(heap, n);
else if (cmp < 0)
sift_down(heap, n);
}
/*
* binaryheap_replace_first
*
* Replace the topmost element of a non-empty heap, preserving the heap
* property. O(1) in the best case, or O(log n) if it must fall back to
* sifting the new node down.
*/
void
binaryheap_replace_first(binaryheap *heap, bh_node_type d)
{
Assert(!binaryheap_empty(heap) && heap->bh_has_heap_property);
heap->bh_nodes[0] = d;
if (heap->bh_size > 1)
sift_down(heap, 0);
}
/*
* Sift a node up to the highest position it can hold according to the
* comparator.
*/
static void
sift_up(binaryheap *heap, int node_off)
{
bh_node_type node_val = heap->bh_nodes[node_off];
/*
* Within the loop, the node_off'th array entry is a "hole" that
* notionally holds node_val, but we don't actually store node_val there
* till the end, saving some unnecessary data copying steps.
*/
while (node_off != 0)
{
int cmp;
int parent_off;
bh_node_type parent_val;
/*
* If this node is smaller than its parent, the heap condition is
* satisfied, and we're done.
*/
parent_off = parent_offset(node_off);
parent_val = heap->bh_nodes[parent_off];
cmp = heap->bh_compare(node_val,
parent_val,
heap->bh_arg);
if (cmp <= 0)
break;
/*
* Otherwise, swap the parent value with the hole, and go on to check
* the node's new parent.
*/
heap->bh_nodes[node_off] = parent_val;
node_off = parent_off;
}
/* Re-fill the hole */
heap->bh_nodes[node_off] = node_val;
}
/*
* Sift a node down from its current position to satisfy the heap
* property.
*/
static void
sift_down(binaryheap *heap, int node_off)
{
bh_node_type node_val = heap->bh_nodes[node_off];
/*
* Within the loop, the node_off'th array entry is a "hole" that
* notionally holds node_val, but we don't actually store node_val there
* till the end, saving some unnecessary data copying steps.
*/
while (true)
{
int left_off = left_offset(node_off);
int right_off = right_offset(node_off);
int swap_off = left_off;
/* Is the right child larger than the left child? */
if (right_off < heap->bh_size &&
heap->bh_compare(heap->bh_nodes[left_off],
heap->bh_nodes[right_off],
heap->bh_arg) < 0)
swap_off = right_off;
/*
* If no children or parent is >= the larger child, heap condition is
* satisfied, and we're done.
*/
if (left_off >= heap->bh_size ||
heap->bh_compare(node_val,
heap->bh_nodes[swap_off],
heap->bh_arg) >= 0)
break;
/*
* Otherwise, swap the hole with the child that violates the heap
* property; then go on to check its children.
*/
heap->bh_nodes[node_off] = heap->bh_nodes[swap_off];
node_off = swap_off;
}
/* Re-fill the hole */
heap->bh_nodes[node_off] = node_val;
} | c | github | https://github.com/postgres/postgres | src/common/binaryheap.c |
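The index arithmetic documented in binaryheap.c (left child at 2*i+1, right child at 2*i+2, parent at (i-1)/2) and the O(n) bottom-up build can be sketched compactly in Python. This mirrors the C routines above under a max-heap comparator; it is an illustration of the technique, not a drop-in replacement:

```python
def left(i):
    return 2 * i + 1

def right(i):
    return 2 * i + 2

def parent(i):
    return (i - 1) // 2

def sift_down(nodes, i, size):
    """Move nodes[i] down until it is >= both of its children (max-heap)."""
    val = nodes[i]  # the "hole" notionally holds val until the end
    while True:
        l, r, swap = left(i), right(i), left(i)
        # Prefer the larger child, as the C sift_down does.
        if r < size and nodes[l] < nodes[r]:
            swap = r
        # No children, or the hole's value dominates the larger child: done.
        if l >= size or val >= nodes[swap]:
            break
        nodes[i] = nodes[swap]
        i = swap
    nodes[i] = val  # re-fill the hole

def build(nodes):
    """O(n) heapify, mirroring binaryheap_build(): sift down from the
    last internal node back to the root."""
    for i in range(parent(len(nodes) - 1), -1, -1):
        sift_down(nodes, i, len(nodes))
    return nodes
```

Removing the root then follows the same shape as `binaryheap_remove_first`: move the last element into slot 0 and `sift_down(nodes, 0, size - 1)`.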
/*
* Copyright 2002-present the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.build.multirelease;
import javax.inject.Inject;
import org.gradle.api.JavaVersion;
import org.gradle.api.Plugin;
import org.gradle.api.Project;
import org.gradle.api.artifacts.ConfigurationContainer;
import org.gradle.api.artifacts.dsl.DependencyHandler;
import org.gradle.api.model.ObjectFactory;
import org.gradle.api.plugins.ExtensionContainer;
import org.gradle.api.plugins.JavaPlugin;
import org.gradle.api.plugins.JavaPluginExtension;
import org.gradle.api.tasks.TaskContainer;
import org.gradle.api.tasks.TaskProvider;
import org.gradle.api.tasks.bundling.AbstractArchiveTask;
import org.gradle.jvm.tasks.Jar;
import org.gradle.jvm.toolchain.JavaLanguageVersion;
import org.gradle.jvm.toolchain.JavaToolchainService;
/**
* A plugin which adds support for building multi-release jars
* with Gradle.
* @author Cedric Champeau
* @author Brian Clozel
* @see <a href="https://github.com/melix/mrjar-gradle-plugin">original project</a>
*/
public class MultiReleaseJarPlugin implements Plugin<Project> {
public static String VALIDATE_JAR_TASK_NAME = "validateMultiReleaseJar";
@Inject
protected JavaToolchainService getToolchains() {
throw new UnsupportedOperationException();
}
public void apply(Project project) {
project.getPlugins().apply(JavaPlugin.class);
ExtensionContainer extensions = project.getExtensions();
JavaPluginExtension javaPluginExtension = extensions.getByType(JavaPluginExtension.class);
ConfigurationContainer configurations = project.getConfigurations();
TaskContainer tasks = project.getTasks();
DependencyHandler dependencies = project.getDependencies();
ObjectFactory objects = project.getObjects();
extensions.create("multiRelease", MultiReleaseExtension.class,
javaPluginExtension.getSourceSets(),
configurations,
tasks,
dependencies,
objects);
if (JavaVersion.current().isCompatibleWith(JavaVersion.VERSION_25)) {
TaskProvider<MultiReleaseJarValidateTask> validateJarTask = tasks.register(VALIDATE_JAR_TASK_NAME, MultiReleaseJarValidateTask.class, (task) -> {
task.getJar().set(tasks.named("jar", Jar.class).flatMap(AbstractArchiveTask::getArchiveFile));
task.getJavaLauncher().set(task.getJavaToolchainService().launcherFor(spec -> spec.getLanguageVersion().set(JavaLanguageVersion.of(25))));
});
tasks.named("check", task -> task.dependsOn(validateJarTask));
}
}
} | java | github | https://github.com/spring-projects/spring-framework | buildSrc/src/main/java/org/springframework/build/multirelease/MultiReleaseJarPlugin.java |
#!/usr/bin/env python
# Copyright 2015 IIX Inc.
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
"""
ovirt external inventory script
=================================
Generates inventory that Ansible can understand by making API requests to
oVirt via the ovirt-engine-sdk-python library.
When run against a specific host, this script returns the following variables
based on the data obtained from the ovirt_sdk Node object:
- ovirt_uuid
- ovirt_id
- ovirt_image
- ovirt_machine_type
- ovirt_ips
- ovirt_name
- ovirt_description
- ovirt_status
- ovirt_zone
- ovirt_tags
- ovirt_stats
When run in --list mode, instances are grouped by the following categories:
- zone:
zone group name.
- instance tags:
An entry is created for each tag. For example, if you have two instances
with a common tag called 'foo', they will both be grouped together under
the 'tag_foo' name.
- network name:
the name of the network is appended to 'network_' (e.g. the 'default'
network will result in a group named 'network_default')
- running status:
group name prefixed with 'status_' (e.g. status_up, status_down,..)
Examples:
Execute uname on all instances in the us-central1-a zone
$ ansible -i ovirt.py us-central1-a -m shell -a "/bin/uname -a"
Use the ovirt inventory script to print out instance specific information
$ contrib/inventory/ovirt.py --host my_instance
Author: Josha Inglis <jinglis@iix.net> based on the gce.py by Eric Johnson <erjohnso@google.com>
Version: 0.0.1
"""
USER_AGENT_PRODUCT = "Ansible-ovirt_inventory_plugin"
USER_AGENT_VERSION = "v1"
import sys
import os
import argparse
import ConfigParser
from collections import defaultdict
try:
import json
except ImportError:
# noinspection PyUnresolvedReferences,PyPackageRequirements
import simplejson as json
try:
# noinspection PyUnresolvedReferences
from ovirtsdk.api import API
# noinspection PyUnresolvedReferences
from ovirtsdk.xml import params
except ImportError:
print("ovirt inventory script requires ovirt-engine-sdk-python")
sys.exit(1)
class OVirtInventory(object):
def __init__(self):
# Read settings and parse CLI arguments
self.args = self.parse_cli_args()
self.driver = self.get_ovirt_driver()
# Just display data for specific host
if self.args.host:
print(self.json_format_dict(
self.node_to_dict(self.get_instance(self.args.host)),
pretty=self.args.pretty
))
sys.exit(0)
# Otherwise, assume user wants all instances grouped
print(
self.json_format_dict(
data=self.group_instances(),
pretty=self.args.pretty
)
)
sys.exit(0)
@staticmethod
def get_ovirt_driver():
"""
Determine the ovirt authorization settings and return a ovirt_sdk driver.
:rtype : ovirtsdk.api.API
"""
kwargs = {}
ovirt_ini_default_path = os.path.join(
os.path.dirname(os.path.realpath(__file__)), "ovirt.ini")
ovirt_ini_path = os.environ.get('OVIRT_INI_PATH', ovirt_ini_default_path)
# Create a ConfigParser.
# This provides empty defaults to each key, so that environment
# variable configuration (as opposed to INI configuration) is able
# to work.
config = ConfigParser.SafeConfigParser(defaults={
'ovirt_url': '',
'ovirt_username': '',
'ovirt_password': '',
'ovirt_api_secrets': '',
})
if 'ovirt' not in config.sections():
config.add_section('ovirt')
config.read(ovirt_ini_path)
# Attempt to get ovirt params from a configuration file, if one
# exists.
secrets_path = config.get('ovirt', 'ovirt_api_secrets')
secrets_found = False
try:
# noinspection PyUnresolvedReferences,PyPackageRequirements
import secrets
kwargs = getattr(secrets, 'OVIRT_KEYWORD_PARAMS', {})
secrets_found = True
except ImportError:
pass
if not secrets_found and secrets_path:
if not secrets_path.endswith('secrets.py'):
err = "Must specify ovirt_sdk secrets file as /absolute/path/to/secrets.py"
print(err)
sys.exit(1)
sys.path.append(os.path.dirname(secrets_path))
try:
# noinspection PyUnresolvedReferences,PyPackageRequirements
import secrets
kwargs = getattr(secrets, 'OVIRT_KEYWORD_PARAMS', {})
except ImportError:
pass
if not secrets_found:
kwargs = {
'url': config.get('ovirt', 'ovirt_url'),
'username': config.get('ovirt', 'ovirt_username'),
'password': config.get('ovirt', 'ovirt_password'),
}
# If the appropriate environment variables are set, they override
# other configuration; process those into our args and kwargs.
kwargs['url'] = os.environ.get('OVIRT_URL', kwargs['url'])
kwargs['username'] = next(val for val in [os.environ.get('OVIRT_EMAIL'), os.environ.get('OVIRT_USERNAME'), kwargs['username']] if val is not None)
kwargs['password'] = next(val for val in [os.environ.get('OVIRT_PASS'), os.environ.get('OVIRT_PASSWORD'), kwargs['password']] if val is not None)
# Retrieve and return the ovirt driver.
return API(insecure=True, **kwargs)
@staticmethod
def parse_cli_args():
"""
Command line argument processing
:rtype : argparse.Namespace
"""
parser = argparse.ArgumentParser(description='Produce an Ansible Inventory file based on ovirt')
parser.add_argument('--list', action='store_true', default=True, help='List instances (default: True)')
parser.add_argument('--host', action='store', help='Get all information about an instance')
parser.add_argument('--pretty', action='store_true', default=False, help='Pretty format (default: False)')
return parser.parse_args()
def node_to_dict(self, inst):
"""
:type inst: params.VM
"""
if inst is None:
return {}
inst.get_custom_properties()
ips = [ip.get_address() for ip in inst.get_guest_info().get_ips().get_ip()] \
if inst.get_guest_info() is not None else []
stats = {}
for stat in inst.get_statistics().list():
stats[stat.get_name()] = stat.get_values().get_value()[0].get_datum()
return {
'ovirt_uuid': inst.get_id(),
'ovirt_id': inst.get_id(),
'ovirt_image': inst.get_os().get_type(),
'ovirt_machine_type': inst.get_instance_type(),
'ovirt_ips': ips,
'ovirt_name': inst.get_name(),
'ovirt_description': inst.get_description(),
'ovirt_status': inst.get_status().get_state(),
'ovirt_zone': inst.get_cluster().get_id(),
'ovirt_tags': self.get_tags(inst),
'ovirt_stats': stats,
# Hosts don't have a public name, so we add an IP
'ansible_ssh_host': ips[0] if len(ips) > 0 else None
}
@staticmethod
def get_tags(inst):
"""
:type inst: params.VM
"""
return [x.get_name() for x in inst.get_tags().list()]
# noinspection PyBroadException,PyUnusedLocal
def get_instance(self, instance_name):
"""Gets details about a specific instance """
try:
return self.driver.vms.get(name=instance_name)
except Exception as e:
return None
def group_instances(self):
"""Group all instances"""
groups = defaultdict(list)
meta = {"hostvars": {}}
for node in self.driver.vms.list():
assert isinstance(node, params.VM)
name = node.get_name()
meta["hostvars"][name] = self.node_to_dict(node)
zone = node.get_cluster().get_name()
groups[zone].append(name)
tags = self.get_tags(node)
for t in tags:
tag = 'tag_%s' % t
groups[tag].append(name)
nets = [x.get_name() for x in node.get_nics().list()]
for net in nets:
net = 'network_%s' % net
groups[net].append(name)
            status = node.get_status().get_state()
            # groups is a defaultdict(list), so we can append without an
            # existence check
            groups['status_%s' % status.lower()].append(name)
groups["_meta"] = meta
return groups
@staticmethod
def json_format_dict(data, pretty=False):
""" Converts a dict to a JSON object and dumps it as a formatted
string """
if pretty:
return json.dumps(data, sort_keys=True, indent=2)
else:
return json.dumps(data)
# Run the script
OVirtInventory() | unknown | codeparrot/codeparrot-clean | ||
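The `--list` output described in the ovirt.py docstring groups every VM under its zone, `tag_*`, `network_*`, and `status_*` keys, plus a `_meta.hostvars` map that Ansible consumes directly. A minimal sketch of that grouping logic, detached from the SDK (the record fields here are invented for illustration):

```python
from collections import defaultdict

def group_instances(vms):
    """Build an Ansible dynamic-inventory dict from simple VM records.

    Each record is a dict with 'name', 'zone', 'tags' (list) and 'status'.
    """
    groups = defaultdict(list)
    meta = {"hostvars": {}}
    for vm in vms:
        name = vm["name"]
        meta["hostvars"][name] = vm          # per-host variables
        groups[vm["zone"]].append(name)      # zone group
        for tag in vm["tags"]:
            groups["tag_%s" % tag].append(name)
        groups["status_%s" % vm["status"].lower()].append(name)
    result = dict(groups)
    result["_meta"] = meta
    return result
```

Dumping `result` with `json.dumps` yields exactly the shape Ansible expects from an inventory script's `--list` mode.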
# frozen_string_literal: true
# :markup: markdown
require "rack/utils"
require "rack/request"
require "rack/session/abstract/id"
require "action_dispatch/middleware/cookies"
module ActionDispatch
module Session
class SessionRestoreError < StandardError # :nodoc:
def initialize
super("Session contains objects whose class definition isn't available.\n" \
"Remember to require the classes for all objects kept in the session.\n" \
"(Original exception: #{$!.message} [#{$!.class}])\n")
set_backtrace $!.backtrace
end
end
module Compatibility
def initialize(app, options = {})
options[:key] ||= "_session_id"
super
end
def generate_sid
sid = SecureRandom.hex(16)
sid.encode!(Encoding::UTF_8)
sid
end
private
def initialize_sid # :doc:
@default_options.delete(:sidbits)
@default_options.delete(:secure_random)
end
def make_request(env)
ActionDispatch::Request.new env
end
end
module StaleSessionCheck
def load_session(env)
stale_session_check! { super }
end
def extract_session_id(env)
stale_session_check! { super }
end
def stale_session_check!
yield
rescue ArgumentError => argument_error
if argument_error.message =~ %r{undefined class/module ([\w:]*\w)}
begin
# Note that the regexp does not allow $1 to end with a ':'.
$1.constantize
rescue LoadError, NameError
raise ActionDispatch::Session::SessionRestoreError
end
retry
else
raise
end
end
end
module SessionObject # :nodoc:
def commit_session(req, res)
req.commit_csrf_token
super(req, res)
end
def prepare_session(req)
Request::Session.create(self, req, @default_options)
end
def loaded_session?(session)
!session.is_a?(Request::Session) || session.loaded?
end
end
class AbstractStore < Rack::Session::Abstract::Persisted
include Compatibility
include StaleSessionCheck
include SessionObject
private
def set_cookie(request, response, cookie)
request.cookie_jar[key] = cookie
end
end
class AbstractSecureStore < Rack::Session::Abstract::PersistedSecure
include Compatibility
include StaleSessionCheck
include SessionObject
def generate_sid
Rack::Session::SessionId.new(super)
end
private
def set_cookie(request, response, cookie)
request.cookie_jar[key] = cookie
end
end
end
end | ruby | github | https://github.com/rails/rails | actionpack/lib/action_dispatch/middleware/session/abstract_store.rb |
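The `stale_session_check!` pattern above — rescue the deserialization error, constantize the class named in the error message, then `retry` — generalizes beyond Ruby. A minimal Python sketch of the same resolve-and-retry idea (all names here are illustrative, not part of Rails):

```python
def stale_check(load, resolve):
    """Run load(); if it fails with a NameError naming a missing class,
    resolve that name once and retry, mirroring Ruby's `retry`."""
    try:
        return load()
    except NameError as err:
        resolve(str(err))  # e.g. import/define the class named in the error
        return load()      # single retry; a second failure propagates
```

The single-retry structure matters: if resolving the name does not fix the load, the original error surfaces instead of looping forever.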
{
"compilerOptions": {
"target": "es5",
"lib": ["dom", "dom.iterable", "esnext"],
"allowJs": true,
"skipLibCheck": true,
"strict": true,
"forceConsistentCasingInFileNames": true,
"noEmit": true,
"esModuleInterop": true,
"module": "esnext",
"moduleResolution": "node",
"resolveJsonModule": true,
"isolatedModules": true,
"jsx": "react-jsx",
"incremental": true
},
"include": ["next-env.d.ts", "**/*.ts", "**/*.tsx", "yoga.d.ts"],
"exclude": ["node_modules"]
} | json | github | https://github.com/vercel/next.js | examples/with-yoga/tsconfig.json |
# Copyright 2008-2013 Nokia Siemens Networks Oyj
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Implementation of the public test library logging API.
This is exposed via :py:mod:`robot.api.logger`. Implementation must reside
here to avoid cyclic imports.
"""
import sys
import threading
from robot.utils import unic, encode_output
from .logger import LOGGER
from .loggerhelper import Message
LOGGING_THREADS = ('MainThread', 'RobotFrameworkTimeoutThread')
def write(msg, level, html=False):
# Callable messages allow lazy logging internally, but we don't want to
# expose this functionality publicly. See the following issue for details:
# http://code.google.com/p/robotframework/issues/detail?id=1505
if callable(msg):
msg = unic(msg)
if threading.currentThread().getName() in LOGGING_THREADS:
LOGGER.log_message(Message(msg, level, html))
def trace(msg, html=False):
write(msg, 'TRACE', html)
def debug(msg, html=False):
write(msg, 'DEBUG', html)
def info(msg, html=False, also_console=False):
write(msg, 'INFO', html)
if also_console:
console(msg)
def warn(msg, html=False):
write(msg, 'WARN', html)
def console(msg, newline=True, stream='stdout'):
msg = unic(msg)
if newline:
msg += '\n'
stream = sys.__stdout__ if stream.lower() != 'stderr' else sys.__stderr__
stream.write(encode_output(msg))
stream.flush() | unknown | codeparrot/codeparrot-clean | ||
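The `write()` function above only forwards a message when the current thread's name is in `LOGGING_THREADS`, so output from stray background threads is silently dropped. A minimal sketch of that gate, with an in-memory sink standing in for the real LOGGER (the sink is illustrative):

```python
import threading

LOGGING_THREADS = ('MainThread', 'RobotFrameworkTimeoutThread')

def write(msg, level, sink):
    """Append (level, msg) to sink only when called from an allowed thread."""
    if threading.current_thread().name in LOGGING_THREADS:
        sink.append((level, str(msg)))
```

A call made from a `threading.Thread` with any other name is a no-op, which is exactly how the module keeps log output deterministic.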
import AxiosHeaders from '../../../lib/core/AxiosHeaders.js';
import assert from 'assert';
const [nodeMajorVersion] = process.versions.node.split('.').map(v => parseInt(v, 10));
describe('AxiosHeaders', function () {
it('should support headers argument', function () {
const headers = new AxiosHeaders({
x: 1,
y: 2
});
assert.strictEqual(headers.get('x'), '1');
assert.strictEqual(headers.get('y'), '2');
})
describe('set', function () {
it('should support adding a single header', function(){
const headers = new AxiosHeaders();
headers.set('foo', 'bar');
assert.strictEqual(headers.get('foo'), 'bar');
})
it('should support adding multiple headers', function(){
const headers = new AxiosHeaders();
headers.set({
foo: 'value1',
bar: 'value2',
});
assert.strictEqual(headers.get('foo'), 'value1');
assert.strictEqual(headers.get('bar'), 'value2');
});
it('should support adding multiple headers from raw headers string', function(){
const headers = new AxiosHeaders();
headers.set(`foo:value1\nbar:value2`);
assert.strictEqual(headers.get('foo'), 'value1');
assert.strictEqual(headers.get('bar'), 'value2');
});
    it('should not rewrite the header if the rewrite argument is false', function(){
const headers = new AxiosHeaders();
headers.set('foo', 'value1');
headers.set('foo', 'value2', false);
assert.strictEqual(headers.get('foo'), 'value1');
headers.set('foo', 'value2');
assert.strictEqual(headers.get('foo'), 'value2');
headers.set('foo', 'value3', true);
assert.strictEqual(headers.get('foo'), 'value3');
});
    it('should not rewrite the header if its value is false, unless the rewrite option is set to true', function(){
const headers = new AxiosHeaders();
headers.set('foo', false);
headers.set('foo', 'value2');
assert.strictEqual(headers.get('foo'), false);
headers.set('foo', 'value2', true);
assert.strictEqual(headers.get('foo'), 'value2');
});
it('should support iterables as a key-value source object', function () {
const headers = new AxiosHeaders();
headers.set(new Map([['x', '123']]));
assert.strictEqual(headers.get('x'), '123');
});
it('should support setting multiple header values from an iterable source', function () {
if (nodeMajorVersion < 18) {
this.skip();
return;
}
const headers = new AxiosHeaders();
const nativeHeaders = new Headers();
nativeHeaders.append('set-cookie', 'foo');
nativeHeaders.append('set-cookie', 'bar');
nativeHeaders.append('set-cookie', 'baz');
nativeHeaders.append('y', 'qux');
headers.set(nativeHeaders);
assert.deepStrictEqual(headers.get('set-cookie'), ['foo', 'bar', 'baz']);
assert.strictEqual(headers.get('y'), 'qux');
});
});
it('should support uppercase name mapping for names overlapped by class methods', () => {
const headers = new AxiosHeaders({
set: 'foo'
});
headers.set('get', 'bar');
assert.strictEqual(headers.get('Set'), 'foo');
assert.strictEqual(headers.get('Get'), 'bar');
});
describe('get', function () {
describe('filter', function() {
it('should support RegExp', function () {
const headers = new AxiosHeaders();
headers.set('foo', 'bar=value1');
assert.strictEqual(headers.get('foo', /^bar=(\w+)/)[1], 'value1');
assert.strictEqual(headers.get('foo', /^foo=/), null);
});
it('should support function', function () {
const headers = new AxiosHeaders();
headers.set('foo', 'bar=value1');
assert.strictEqual(headers.get('foo', (value, header) => {
assert.strictEqual(value, 'bar=value1');
assert.strictEqual(header, 'foo');
return value;
}), 'bar=value1');
assert.strictEqual(headers.get('foo', () => false), false);
});
});
});
describe('has', function () {
it('should return true if the header is defined, otherwise false', function () {
const headers = new AxiosHeaders();
headers.set('foo', 'bar=value1');
assert.strictEqual(headers.has('foo'), true);
assert.strictEqual(headers.has('bar'), false);
});
describe('filter', function () {
it('should support RegExp', function () {
const headers = new AxiosHeaders();
headers.set('foo', 'bar=value1');
assert.strictEqual(headers.has('foo', /^bar=(\w+)/), true);
assert.strictEqual(headers.has('foo', /^foo=/), false);
});
it('should support function', function () {
const headers = new AxiosHeaders();
headers.set('foo', 'bar=value1');
assert.strictEqual(headers.has('foo', (value, header, headers) => {
assert.strictEqual(value, 'bar=value1');
assert.strictEqual(header, 'foo');
return true;
}), true);
assert.strictEqual(headers.has('foo', () => false), false);
});
it('should support string pattern', function () {
const headers = new AxiosHeaders();
headers.set('foo', 'bar=value1');
assert.strictEqual(headers.has('foo', 'value1'), true);
assert.strictEqual(headers.has('foo', 'value2'), false);
});
});
});
describe('delete', function () {
it('should delete the header', function () {
const headers = new AxiosHeaders();
headers.set('foo', 'bar=value1');
assert.strictEqual(headers.has('foo'), true);
headers.delete('foo');
assert.strictEqual(headers.has('foo'), false);
});
it('should return true if the header has been deleted, otherwise false', function () {
const headers = new AxiosHeaders();
headers.set('foo', 'bar=value1');
assert.strictEqual(headers.delete('bar'), false);
assert.strictEqual(headers.delete('foo'), true);
});
it('should support headers array', function () {
const headers = new AxiosHeaders();
headers.set('foo', 'x');
headers.set('bar', 'y');
headers.set('baz', 'z');
assert.strictEqual(headers.delete(['foo', 'baz']), true);
assert.strictEqual(headers.has('foo'), false);
assert.strictEqual(headers.has('bar'), true);
assert.strictEqual(headers.has('baz'), false);
});
describe('filter', function () {
it('should support RegExp', function () {
const headers = new AxiosHeaders();
headers.set('foo', 'bar=value1');
assert.strictEqual(headers.has('foo'), true);
headers.delete('foo', /baz=/);
assert.strictEqual(headers.has('foo'), true);
headers.delete('foo', /bar=/);
assert.strictEqual(headers.has('foo'), false);
});
it('should support function', function () {
const headers = new AxiosHeaders();
headers.set('foo', 'bar=value1');
headers.delete('foo', (value, header) => {
assert.strictEqual(value, 'bar=value1');
assert.strictEqual(header, 'foo');
return false;
});
assert.strictEqual(headers.has('foo'), true);
assert.strictEqual(headers.delete('foo', () => true), true);
assert.strictEqual(headers.has('foo'), false);
});
it('should support string pattern', function () {
const headers = new AxiosHeaders();
headers.set('foo', 'bar=value1');
assert.strictEqual(headers.has('foo'), true);
headers.delete('foo', 'baz');
assert.strictEqual(headers.has('foo'), true);
headers.delete('foo', 'bar');
assert.strictEqual(headers.has('foo'), false);
});
});
});
describe('clear', () => {
it('should clear all headers', () => {
const headers = new AxiosHeaders({x: 1, y:2});
headers.clear();
assert.deepStrictEqual({...headers.toJSON()}, {});
});
it('should clear matching headers if a matcher was specified', () => {
const headers = new AxiosHeaders({foo: 1, 'x-foo': 2, bar: 3});
assert.deepStrictEqual({...headers.toJSON()}, {foo: '1', 'x-foo': '2', bar: '3'});
headers.clear(/^x-/);
assert.deepStrictEqual({...headers.toJSON()}, {foo: '1', bar: '3'});
});
});
describe('toJSON', function () {
it('should return headers object with original headers case', function () {
const headers = new AxiosHeaders({
Foo: 'x',
bAr: 'y'
});
assert.deepStrictEqual({...headers.toJSON()}, {
Foo: 'x',
bAr: 'y'
});
});
});
describe('accessors', function () {
it('should support get accessor', function () {
const headers = new AxiosHeaders({
foo: 1
});
headers.constructor.accessor('foo');
assert.strictEqual(typeof headers.getFoo, 'function');
assert.strictEqual(headers.getFoo(), '1');
});
it('should support set accessor', function () {
const headers = new AxiosHeaders({
foo: 1
});
headers.constructor.accessor('foo');
assert.strictEqual(typeof headers.setFoo, 'function');
headers.setFoo(2);
assert.strictEqual(headers.getFoo(), '2');
});
it('should support has accessor', function () {
const headers = new AxiosHeaders({
foo: 1
});
headers.constructor.accessor('foo');
assert.strictEqual(typeof headers.hasFoo, 'function');
assert.strictEqual(headers.hasFoo(), true);
});
});
it('should be caseless', function () {
const headers = new AxiosHeaders({
fOo: 1
});
assert.strictEqual(headers.get('Foo'), '1');
assert.strictEqual(headers.get('foo'), '1');
headers.set('foo', 2);
assert.strictEqual(headers.get('foO'), '2');
assert.strictEqual(headers.get('fOo'), '2');
assert.strictEqual(headers.has('fOo'), true);
headers.delete('FOO');
assert.strictEqual(headers.has('fOo'), false);
});
describe('normalize()', function () {
it('should support auto-formatting', function () {
const headers = new AxiosHeaders({
fOo: 1,
'x-foo': 2,
'y-bar-bAz': 3
});
assert.deepStrictEqual({...headers.normalize(true).toJSON()}, {
Foo: '1',
'X-Foo': '2',
'Y-Bar-Baz': '3'
});
});
it('should support external defined values', function () {
const headers = new AxiosHeaders({
foo: '1'
});
headers['Foo'] = 2;
headers['bar'] = 3;
assert.deepStrictEqual({...headers.normalize().toJSON()}, {
foo: '2',
bar: '3'
});
});
it('should support array values', function () {
const headers = new AxiosHeaders({
foo: [1,2,3]
});
assert.deepStrictEqual({...headers.normalize().toJSON()}, {
foo: ['1','2','3']
});
});
});
describe('AxiosHeaders.concat', function () {
it('should concatenate plain headers into an AxiosHeaders instance', function () {
const a = {a: 1};
const b = {b: 2};
const c = {c: 3};
const headers = AxiosHeaders.concat(a, b, c);
assert.deepStrictEqual({...headers.toJSON()}, {
a: '1',
b: '2',
c: '3'
});
});
it('should concatenate raw headers into an AxiosHeaders instance', function () {
const a = 'a:1\nb:2';
const b = 'c:3\nx:4';
const headers = AxiosHeaders.concat(a, b);
assert.deepStrictEqual({...headers.toJSON()}, {
a: '1',
b: '2',
c: '3',
x: '4'
});
});
it('should concatenate Axios headers into a new AxiosHeaders instance', function () {
const a = new AxiosHeaders({x: 1});
const b = new AxiosHeaders({y: 2});
const headers = AxiosHeaders.concat(a, b);
assert.deepStrictEqual({...headers.toJSON()}, {
x: '1',
y: '2'
});
});
});
describe('toString', function () {
it('should serialize an AxiosHeaders instance to a raw headers string', function () {
assert.deepStrictEqual(new AxiosHeaders({x:1, y:2}).toString(), 'x: 1\ny: 2');
});
});
describe('getSetCookie', function () {
it('should return set-cookie', function () {
const headers = new AxiosHeaders(
'Set-Cookie: key=val;\n' +
'Set-Cookie: key2=val2;\n'
);
assert.deepStrictEqual(headers.getSetCookie(), ['key=val;', 'key2=val2;']);
});
it('should return empty set-cookie', function () {
assert.deepStrictEqual(new AxiosHeaders().getSetCookie(), []);
});
});
}); | javascript | github | https://github.com/axios/axios | test/unit/core/AxiosHeaders.js |
#!/usr/bin/env python
# -*- coding: utf-8 -*-
"""
tests a test set of data using a specified, pre-trained model and weights
python -c "import ibeis_cnn"
"""
from __future__ import absolute_import, division, print_function, unicode_literals
# from ibeis_cnn import utils
from ibeis_cnn import models
#from ibeis_cnn import test
from ibeis_cnn import _plugin_grabmodels as grabmodels
import utool as ut
import cv2
import numpy as np
import random
import ibeis.constants as const
print, rrr, profile = ut.inject2(__name__, '[ibeis_cnn._plugin]')
try:
from ibeis.control.controller_inject import make_ibs_register_decorator
from ibeis.constants import VIEWTEXT_TO_YAW_RADIANS
CLASS_INJECT_KEY, register_ibs_method = make_ibs_register_decorator(__name__)
except ImportError as ex:
register_ibs_method = ut.identity
raise
def convert_species_viewpoint(species, viewpoint):
species_mapping = {
'ZEBRA_PLAINS': 'zebra_plains',
'ZEBRA_GREVYS': 'zebra_grevys',
'ELEPHANT_SAVANNA': 'elephant_savanna',
'GIRAFFE_RETICULATED': 'giraffe_reticulated',
'GIRAFFE_MASAI': 'giraffe_masai',
}
viewpoint_list = list(VIEWTEXT_TO_YAW_RADIANS.keys())  # list() needed; dict views are not indexable in Python 3
viewpoint_mapping = {
'LEFT': viewpoint_list[4],
'FRONT_LEFT': viewpoint_list[3],
'FRONT': viewpoint_list[2],
'FRONT_RIGHT': viewpoint_list[1],
'RIGHT': viewpoint_list[0],
'BACK_RIGHT': viewpoint_list[7],
'BACK': viewpoint_list[6],
'BACK_LEFT': viewpoint_list[5],
}
species_ = species_mapping[species]
viewpoint_ = viewpoint_mapping[viewpoint]
return species_, viewpoint_
def convert_label(label):
species, viewpoint = label.strip().split(':')
species = species.strip()
viewpoint = viewpoint.strip()
species_, viewpoint_ = convert_species_viewpoint(species, viewpoint)
return species_, viewpoint_
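The parsing step in `convert_label` above can be exercised stand-alone. This is a minimal sketch of the expected `'SPECIES:VIEWPOINT'` input shape; the `parse_label` name is illustrative only and not part of the ibeis API:

```python
def parse_label(label):
    """Split a 'SPECIES:VIEWPOINT' string into its two stripped components."""
    species, viewpoint = label.strip().split(':')
    return species.strip(), viewpoint.strip()

# Whitespace around either component is tolerated, mirroring convert_label.
print(parse_label('  ZEBRA_PLAINS : LEFT '))
```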
@register_ibs_method
def get_neuralnet_dir(ibs):
nets_dir = ut.unixjoin(ibs.get_cachedir(), ibs.const.PATH_NAMES.nets)
return nets_dir
@register_ibs_method
def get_verified_aid_pairs(ibs):
"""
Example:
>>> # DISABLE_DOCTEST
>>> from ibeis_cnn.train import * # NOQA
>>> import ibeis
>>> ibs = ibeis.opendb('NNP_Master3')
>>> verified_aid1_list, verified_aid2_list = get_verified_aid_pairs(ibs)
"""
# Grab marked hard cases
am_rowids = ibs._get_all_annotmatch_rowids()
remove_photobombs = True
if remove_photobombs:
flags = ibs.get_annotmatch_is_photobomb(am_rowids)
am_rowids = ut.filterfalse_items(am_rowids, flags)
verified_aid1_list = ibs.get_annotmatch_aid1(am_rowids)
verified_aid2_list = ibs.get_annotmatch_aid2(am_rowids)
return verified_aid1_list, verified_aid2_list
@register_ibs_method
def detect_annot_zebra_background_mask(ibs, aid_list, species=None, config2_=None):
r"""
Args:
ibs (IBEISController): ibeis controller object
aid_list (int): list of annotation ids
Returns:
list: mask_list
"""
# Read the data
print('\n[harness] Loading chips...')
chip_list = ibs.get_annot_chips(aid_list, verbose=True, config2_=config2_)
mask_list = list(generate_species_background(ibs, chip_list, species=species))
return mask_list
@register_ibs_method
def detect_annot_whale_fluke_background_mask(ibs, aid_list, species='whale_fluke', config2_=None):
r"""
Args:
ibs (IBEISController): ibeis controller object
aid_list (int): list of annotation ids
Returns:
list: mask_list
"""
# Read the data
print('\n[harness] Loading chips...')
chip_list = ibs.get_annot_chips(aid_list, verbose=True, config2_=config2_)
mask_list = list(generate_species_background(ibs, chip_list, species=species))
return mask_list
@register_ibs_method
def generate_species_background_mask(ibs, chip_fpath_list, species=None):
r"""
Args:
ibs (IBEISController): ibeis controller object
aid_list (int): list of annotation ids
Returns:
list: species_viewpoint_list
CommandLine:
python -m ibeis_cnn._plugin --exec-generate_species_background_mask --show --db PZ_Master1
python -m ibeis_cnn --tf generate_species_background_mask --show --db PZ_Master1 --aid 9970
Example:
>>> # DISABLE_DOCTEST
>>> import ibeis_cnn
>>> import ibeis
>>> from ibeis_cnn._plugin import * # NOQA
>>> ibs = ibeis.opendb(defaultdb='testdb1')
>>> aid_list = ut.get_argval(('--aids', '--aid'), type_=list, default=ibs.get_valid_aids()[0:10])
>>> chip_fpath_list = ibs.get_annot_chip_fpath(aid_list)
>>> species = ibs.const.TEST_SPECIES.ZEB_PLAIN
>>> mask_list = generate_species_background_mask(ibs, chip_fpath_list, species)
>>> ut.quit_if_noshow()
>>> import plottool as pt
>>> iteract_obj = pt.interact_multi_image.MultiImageInteraction(mask_list, nPerPage=4)
>>> #pt.imshow(mask_list[0])
>>> ut.show_if_requested()
#>>> from ibeis_cnn.draw_results import * # NOQA
#>>> from ibeis_cnn import ingest_data
#>>> data, labels = ingest_data.testdata_patchmatch2()
#>>> flat_metadata = {'fs': np.arange(len(labels))}
#>>> result = interact_siamsese_data_patches(labels, data, flat_metadata)
#>>> ut.show_if_requested()
"""
# Read the data
print('\n[harness] Loading chips...')
import vtool as vt
nInput = len(chip_fpath_list)
def bufgen2(_iter, size=64, nInput=None, **kwargs):
nTotal = None if nInput is None else int(np.ceil(nInput / size))
chunk_iter = ut.ichunks(_iter, size)
chunk_iter_ = ut.ProgressIter(chunk_iter, nTotal=nTotal, **kwargs)
for chunk in chunk_iter_:
for item in chunk:
yield item
chip_list = bufgen2(
(vt.imread(fpath) for fpath in chip_fpath_list),
lbl='loading chip chunk', nInput=nInput, adjust=True, time_thresh=30.0)
#mask_list = list(generate_species_background(ibs, chip_list, species=species, nInput=nInput))
mask_gen = generate_species_background(ibs, chip_list, species=species, nInput=nInput)
return mask_gen
@register_ibs_method
def generate_species_background(ibs, chip_list, species=None, nInput=None):
"""
TODO: Use this as the primary function
CommandLine:
python -m ibeis_cnn._plugin --exec-generate_species_background --show
python -m ibeis_cnn._plugin --exec-generate_species_background --db GZ_Master1 --species=zebra_grevys --save cnn_detect_results_gz.png --diskshow --clipwhite
python -m ibeis_cnn._plugin --exec-generate_species_background --db PZ_Master1 --species=zebra_plains --save cnn_detect_results_pz.png --diskshow --clipwhite
python -m ibeis_cnn._plugin --exec-generate_species_background --db PZ_Master1 --show
python -m ibeis_cnn._plugin --exec-generate_species_background --db GZ_Master1 --show
python -m ibeis_cnn._plugin --exec-generate_species_background --db GIRM_Master1 --show --species=giraffe_masai
Example:
>>> # ENABLE_DOCTEST
>>> import ibeis_cnn
>>> import ibeis
>>> from ibeis_cnn._plugin import * # NOQA
>>> ibs = ibeis.opendb(defaultdb='testdb1')
>>> aid_list = ibs.get_valid_aids()[0:8]
>>> species = ut.get_argval('--species', type_=str, default=None)
>>> config2_ = None
>>> nInput = len(aid_list)
>>> chip_iter = ibs.get_annot_chips(aid_list, verbose=True, config2_=config2_, eager=False)
>>> mask_iter = generate_species_background(ibs, chip_iter, species=species, nInput=nInput)
>>> mask_list = list(mask_iter)
>>> ut.quit_if_noshow()
>>> import plottool as pt
>>> import vtool as vt
>>> chip_list = ibs.get_annot_chips(aid_list, verbose=True, config2_=config2_, eager=True)
>>> stacked_list = [vt.stack_images(chip, mask)[0] for chip, mask in zip(chip_list, mask_list)]
>>> iteract_obj = pt.interact_multi_image.MultiImageInteraction(stacked_list, nPerPage=4)
>>> #hough_cpath = ibs.get_annot_probchip_fpath(aid_list, config2_=config2_)
>>> #iteract_obj2 = pt.interact_multi_image.MultiImageInteraction(hough_cpath, nPerPage=4)
>>> #pt.imshow(mask_list[0])
>>> ut.show_if_requested()
Ignore:
#>>> from ibeis_cnn.draw_results import * # NOQA
#>>> from ibeis_cnn import ingest_data
#>>> data, labels = ingest_data.testdata_patchmatch2()
#>>> flat_metadata = {'fs': np.arange(len(labels))}
#>>> result = interact_siamsese_data_patches(labels, data, flat_metadata)
#>>> ut.show_if_requested()
"""
from ibeis_cnn import harness
if species is None:
species = 'zebra_plains'
# Load chips and resize to the target
data_shape = (256, 256, 3)
# Define model and load weights
print('\n[harness] Loading model...')
if nInput is None:
try:
nInput = len(chip_list)
except TypeError:
print('Warning: passed a generator without specifying the nInput hint')
print('Explicitly evaluating generator')
print('type(chip_list) = %r' % (type(chip_list),))
chip_list = list(chip_list)
nInput = len(chip_list)
# batch_size = int(min(128, 2 ** np.floor(np.log2(nInput))))
batch_size = None
NEW = True
print(species)
if species in ['zebra_plains', 'zebra_grevys']:
if NEW:
assert species in ['zebra_plains', 'zebra_grevys']
model = models.BackgroundModel(batch_size=batch_size, data_shape=data_shape, num_output=3)
weights_path = grabmodels.ensure_model('background_zebra_plains_grevys', redownload=False)
canvas_key = species
else:
assert species in ['zebra_plains']
model = models.BackgroundModel(batch_size=batch_size, data_shape=data_shape)
weights_path = grabmodels.ensure_model('background_zebra_plains', redownload=False)
canvas_key = 'positive'
elif species in ['giraffe_masai']:
model = models.BackgroundModel(batch_size=batch_size, data_shape=data_shape)
weights_path = grabmodels.ensure_model('background_giraffe_masai', redownload=False)
canvas_key = species
elif species in ['whale_fluke', 'whale_humpback']:
species = 'whale_fluke'
model = models.BackgroundModel(batch_size=batch_size, data_shape=data_shape)
weights_path = grabmodels.ensure_model('background_whale_fluke', redownload=False)
canvas_key = species
else:
raise ValueError('species key does not have a trained model')
old_weights_fpath = weights_path
model.load_old_weights_kw2(old_weights_fpath)
# Create the Theano primitives
# create theano symbolic expressions that define the network
print('\n[harness] --- COMPILING SYMBOLIC THEANO FUNCTIONS ---')
print('[model] creating Theano primitives...')
theano_funcs = model.build_theano_funcs(request_predict=True,
request_forward=False,
request_backprop=False)
theano_backprop, theano_forward, theano_predict, updates = theano_funcs
print('[harness] Performing inference...')
_iter = ut.ProgressIter(chip_list, nTotal=nInput, lbl=species + ' fgdetect', adjust=True, freq=10, time_thresh=30.0)
for chip in _iter:
try:
samples, canvas_dict = harness.test_convolutional(model, theano_predict, chip, padding=24)
if NEW:
mask = np.maximum(255 - canvas_dict['negative'], canvas_dict[canvas_key])
else:
mask = canvas_dict[canvas_key]
except Exception as ex:
ut.printex(ex, ('Error running convnet with '
'chip.shape=%r, chip.dtype=%r') % (
chip.shape, chip.dtype))
raise
yield mask
@register_ibs_method
def fix_annot_species_viewpoint_quality_cnn(ibs, aid_list, min_conf=0.8):
r"""
Args:
ibs (IBEISController): ibeis controller object
aid_list (int): list of annotation ids
"""
# Load chips and resize to the target
data_shape = (96, 96, 3)
# Define model and load weights
print('Loading model...')
batch_size = int(min(128, 2 ** np.floor(np.log2(len(aid_list)))))
model = models.ViewpointModel(batch_size=batch_size, data_shape=data_shape)
weights_path = grabmodels.ensure_model('viewpoint', redownload=False)
old_weights_fpath = weights_path
model.load_old_weights_kw(old_weights_fpath)
# Read the data
target = data_shape[0:2]
print('Loading chips...')
chip_list = ibs.get_annot_chips(aid_list, verbose=True)
print('Resizing chips...')
chip_list_resized = [
cv2.resize(chip, target, interpolation=cv2.INTER_LANCZOS4)
for chip in ut.ProgressIter(chip_list, lbl='resizing chips')
]
# Build data for network
X_test = np.array(chip_list_resized, dtype=np.uint8)
y_test = None
from ibeis_cnn import harness
# Predict on the data and convert labels to IBEIS namespace
test_outputs = harness.test_data2(model, X_test, y_test)
label_list = test_outputs['labeled_predictions']
conf_list = test_outputs['confidences']
species_viewpoint_list = [ convert_label(label) for label in label_list ]
zipped = zip(aid_list, species_viewpoint_list, conf_list)
skipped_list = []
for aid, (species, viewpoint), conf in zipped:
if conf >= min_conf:
species_ = species
viewpoint_ = viewpoint
quality_ = const.QUAL_GOOD
else:
skipped_list.append(aid)
species_ = const.UNKNOWN
viewpoint_ = None
quality_ = const.QUAL_UNKNOWN
ibs.set_annot_species([aid], [species_])
ibs.set_annot_yaw_texts([aid], [viewpoint_])
ibs.set_annot_quality_texts([aid], [quality_])
return skipped_list
@register_ibs_method
def detect_annot_species_viewpoint_cnn(ibs, aid_list):
r"""
Args:
ibs (IBEISController): ibeis controller object
aid_list (int): list of annotation ids
Returns:
list: species_viewpoint_list
CommandLine:
python -m ibeis_cnn._plugin --exec-detect_annot_species_viewpoint_cnn
Example:
>>> # DISABLE_DOCTEST
>>> from ibeis_cnn._plugin import * # NOQA
>>> import ibeis
>>> ibs = ibeis.opendb(defaultdb='testdb1')
>>> aid_list = ibs.get_valid_aids()
>>> species_viewpoint_list = detect_annot_species_viewpoint_cnn(ibs, aid_list)
>>> result = ('species_viewpoint_list = %s' % (str(species_viewpoint_list),))
>>> print(result)
"""
# Load chips and resize to the target
data_shape = (96, 96, 3)
# Define model and load weights
print('Loading model...')
batch_size = int(min(128, 2 ** np.floor(np.log2(len(aid_list)))))
model = models.ViewpointModel(batch_size=batch_size, data_shape=data_shape)
weights_path = grabmodels.ensure_model('viewpoint', redownload=False)
old_weights_fpath = weights_path
model.load_old_weights_kw(old_weights_fpath)
# Read the data
target = data_shape[0:2]
print('Loading chips...')
chip_list = ibs.get_annot_chips(aid_list, verbose=True)
print('Resizing chips...')
chip_list_resized = [
cv2.resize(chip, target, interpolation=cv2.INTER_LANCZOS4)
for chip in ut.ProgressIter(chip_list, lbl='resizing chips')
]
# Build data for network
X_test = np.array(chip_list_resized, dtype=np.uint8)
y_test = None
from ibeis_cnn import harness
# Predict on the data and convert labels to IBEIS namespace
test_outputs = harness.test_data2(model, X_test, y_test)
label_list = test_outputs['labeled_predictions']
species_viewpoint_list = [ convert_label(label) for label in label_list ]
#pred_list, label_list, conf_list = test.test_data(X_test, y_test, model, weights_path)
#species_viewpoint_list = [ convert_label(label) for label in label_list ]
return species_viewpoint_list
@register_ibs_method
def validate_annot_species_viewpoint_cnn(ibs, aid_list, verbose=False):
r"""
Args:
ibs (IBEISController): ibeis controller object
aid_list (int): list of annotation ids
verbose (bool): verbosity flag(default = False)
Returns:
tuple: (bad_species_list, bad_viewpoint_list)
CommandLine:
python -m ibeis_cnn._plugin --exec-validate_annot_species_viewpoint_cnn --db PZ_FlankHack
python -m ibeis_cnn._plugin --exec-validate_annot_species_viewpoint_cnn --db GZ_Master1
Example:
>>> # DISABLE_DOCTEST
>>> from ibeis_cnn._plugin import * # NOQA
>>> import ibeis
>>> ibs = ibeis.opendb(defaultdb='testdb1')
>>> aid_list = ibs.get_valid_aids()
>>> verbose = False
>>> (bad_species_list, bad_viewpoint_list) = validate_annot_species_viewpoint_cnn(ibs, aid_list, verbose)
>>> print('bad_species_list = %s' % (bad_species_list,))
>>> print('bad_viewpoint_list = %s' % (bad_viewpoint_list,))
Ignore:
bad_viewpoint_list_ = [item for item in bad_viewpoint_list if item[2] is not None and item[0] > 1200]
grouped_dict = ut.group_items(bad_viewpoint_list, ut.get_list_column(bad_viewpoint_list_, 3))
grouped_list = grouped_dict.values()
regrouped_items = ut.flatten(ut.sortedby(grouped_list, map(len, grouped_list)))
candidate_aid_list = ut.get_list_column(regrouped_items, 0)
print('candidate_aid_list = %r' % (candidate_aid_list,))
"""
# Load chips and metadata
species_list = ibs.get_annot_species(aid_list)
viewpoint_list = ibs.get_annot_yaw_texts(aid_list)
species_viewpoint_list = ibs.detect_annot_species_viewpoint_cnn(aid_list)
# Find all bad
bad_species_list = []
bad_viewpoint_list = []
data = zip(aid_list, species_list, viewpoint_list, species_viewpoint_list)
for aid, species, viewpoint, (species_, viewpoint_) in data:
if species != species_:
bad_species_list.append( (aid, species, species_) )
continue
if viewpoint != viewpoint_:
bad_viewpoint_list.append( (aid, species, viewpoint, viewpoint_) )
continue
# Print bad if verbose
if verbose:
print('Found conflicting species:')
for bad_species in bad_species_list:
print(' AID %4d (%r) should be %r' % bad_species)
print('Found conflicting viewpoints:')
for bad_viewpoint in bad_viewpoint_list:
print(' AID %4d (%r, %r) should be %r' % bad_viewpoint)
# Return bad
return bad_species_list, bad_viewpoint_list
@register_ibs_method
def detect_yolo(ibs, gid_list):
r"""
Args:
ibs (IBEISController): ibeis controller object
gid_list (int): list of image ids
Returns:
list: aid_list
CommandLine:
python -m ibeis_cnn._plugin --exec-detect_yolo
Example:
>>> # DISABLE_DOCTEST
>>> from ibeis_cnn._plugin import * # NOQA
>>> import ibeis
>>> ibs = ibeis.opendb(defaultdb='PZ_MTEST')
>>> gid_list = ibs.get_valid_gids()
>>> aid_list = detect_yolo(ibs, gid_list)
>>> print(aid_list)
"""
# Load images and resize to the target
# Define model and load weights
print('Loading model...')
# batch_size = int(min(128, 2 ** np.floor(np.log2(len(gid_list)))))
batch_size = 1
model = models.DetectYoloModel(batch_size=batch_size)
model.print_layer_info()
# from ibeis_cnn.__LASAGNE__ import layers
# from ibeis_cnn.draw_net import show_convolutional_weights
# output_layer = model.get_output_layer()
# nn_layers = layers.get_all_layers(output_layer)
# weighted_layers = [layer for layer in nn_layers if hasattr(layer, 'W')]
# index = ut.get_argval('--index', type_=int, default=0)
# all_weights = weighted_layers[index].W.get_value()
# print('all_weights.shape = %r' % (all_weights.shape,))
# use_color = None
# limit = 12
# fig = show_convolutional_weights(all_weights, use_color, limit) # NOQA
# ut.show_if_requested()
# Read the data
target = (448, 448)
print('Loading images...')
# image_list = ibs.get_images(gid_list)
image_list = [
cv2.imread('/Users/bluemellophone/code/darknet-clean/test.jpg'),
cv2.imread('/Users/bluemellophone/code/darknet-clean/test.jpg'),
]
print('Resizing images...')
image_list_resized = [
cv2.resize(image, target, interpolation=cv2.INTER_LANCZOS4)
for image in ut.ProgressIter(image_list, lbl='resizing images')
]
# Build data for network
X_test = np.array(image_list_resized, dtype=np.uint8)
y_test = None
from ibeis_cnn import harness
# Predict on the data and convert labels to IBEIS namespace
test_outputs = harness.test_data2(model, X_test, y_test)
raw_output_list = test_outputs['network_output_determ']
side = 7
num = 2
classes = 5
square = True
for image, raw_output in zip(image_list, raw_output_list):
print(raw_output.shape)
box_list = []
probs_list = []
h, w = image.shape[:2]
min_, max_ = 1.0, 0.0
for i in range(side * side):
row = i // side  # integer row index; plain / is true division under __future__.division
col = i % side
for n in range(num):
index = i * num + n
p_index = side * side * classes + index
scale = raw_output[p_index]
box_index = side * side * (classes + num) + (index) * 4
box = [
(raw_output[box_index + 0] + col) / side * w,
(raw_output[box_index + 1] + row) / side * h,
raw_output[box_index + 2] ** (2 if square else 1) * w,
raw_output[box_index + 3] ** (2 if square else 1) * h,
]
box_list.append(box)
prob_list = []
for j in range(classes):
class_index = i * classes
prob = scale * raw_output[class_index + j]
min_ = min(min_, prob)
max_ = max(max_, prob)
prob_list.append(prob)
probs_list.append(prob_list)
box_list = np.array(box_list)
probs_list = np.array(probs_list)
for (xc, yc, w, h), prob_list in zip(box_list, probs_list):
prob = max(prob_list)
point1 = (int(xc - w / 2), int(yc - h / 2))
point2 = (int(xc + w / 2), int(yc + h / 2))
width = (prob ** 0.5) * 10 + 1
width = 1 if np.isnan(width) or width < 1.0 else int(width)
cv2.rectangle(image, point1, point2, (255, 0, 0), width)
image = cv2.resize(image, target, interpolation=cv2.INTER_LANCZOS4)
cv2.imshow('', image)
cv2.waitKey(0)
# raise AssertionError
return True
def _suggest_random_candidate_regions(ibs, image, min_size, num_candidates=2000):
h, w, c = image.shape
h -= 1
w -= 1
min_x, min_y = min_size
def _candidate():
x0, y0, x1, y1 = 0, 0, 0, 0
while x1 - x0 < min_x or y1 - y0 < min_y:
x0 = int(random.uniform(0, w))
y0 = int(random.uniform(0, h))
x1 = int(random.uniform(0, w))
y1 = int(random.uniform(0, h))
if x0 > x1:
x0, x1 = x1, x0
if y0 > y1:
y0, y1 = y1, y0
return x0, y0, x1, y1
candidate_list = [ _candidate() for _ in range(num_candidates) ]
return candidate_list
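The rejection-sampling idea used above (redraw corners until the box meets the minimum size) can be sketched stand-alone; the `random_box` helper below is hypothetical, not part of the ibeis API:

```python
import random

def random_box(w, h, min_w, min_h, rng=random):
    """Sample one axis-aligned box inside a w x h image, at least min_w x min_h."""
    while True:
        # Draw two x and two y coordinates; sorting orders each corner pair.
        x0, x1 = sorted(int(rng.uniform(0, w)) for _ in range(2))
        y0, y1 = sorted(int(rng.uniform(0, h)) for _ in range(2))
        # Rejection step: retry until the box is large enough.
        if x1 - x0 >= min_w and y1 - y0 >= min_h:
            return x0, y0, x1, y1

print(random_box(640, 480, 32, 32))
```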
def _suggest_bing_candidate_regions(ibs, image_path_list):
def _dedictify(dict_list):
return [ [d_['minx'], d_['miny'], d_['maxx'], d_['maxy']] for d_ in dict_list ]
from pybing import BING_Detector
detector = BING_Detector()
results_list = detector.detect(image_path_list)
result_list = [ _dedictify(results[1]) for results in results_list ]
return result_list
def non_max_suppression_fast(box_list, conf_list, overlapThresh=0.5):
"""
Python version of Malisiewicz's Matlab code:
https://github.com/quantombone/exemplarsvm
NOTE: This is adapted from Pedro Felzenszwalb's version (nms.m),
but an inner loop has been eliminated to significantly speed it
up in the case of a large number of boxes
Reference: https://github.com/rbgirshick/rcnn/blob/master/nms/nms.m
Reference: http://www.pyimagesearch.com/2015/02/16/faster-non-maximum-suppression-python/
"""
# if there are no boxes, return an empty list
if len(box_list) == 0:
return []
# Convert to Numpy
box_list = np.array(box_list)
conf_list = np.array(conf_list)
# if the bounding boxes integers, convert them to floats --
# this is important since we'll be doing a bunch of divisions
if box_list.dtype.kind == "i":
box_list = box_list.astype("float")
# initialize the list of picked indexes
pick = []
# grab the coordinates of the bounding boxes
# Our boxes are stored as y1, y2, x1, x2 to be in-line with OpenCV indexing
x1 = box_list[:, 0]
y1 = box_list[:, 1]
x2 = box_list[:, 2]
y2 = box_list[:, 3]
s = conf_list
# compute the area of the bounding boxes and sort the bounding
# boxes by the bottom-right y-coordinate of the bounding box
area = (x2 - x1 + 1) * (y2 - y1 + 1)
idxs = np.argsort(s)
# keep looping while some indexes still remain in the indexes
# list
while len(idxs) > 0:
# grab the last index in the indexes list and add the
# index value to the list of picked indexes
last = len(idxs) - 1
i = idxs[last]
pick.append(i)
# find the largest (x, y) coordinates for the start of
# the bounding box and the smallest (x, y) coordinates
# for the end of the bounding box
xx1 = np.maximum(x1[i], x1[idxs[:last]])
yy1 = np.maximum(y1[i], y1[idxs[:last]])
xx2 = np.minimum(x2[i], x2[idxs[:last]])
yy2 = np.minimum(y2[i], y2[idxs[:last]])
# compute the width and height of the bounding box
w = np.maximum(0, xx2 - xx1 + 1)
h = np.maximum(0, yy2 - yy1 + 1)
# compute the ratio of overlap
overlap = (w * h) / area[idxs[:last]]
# delete all indexes from the index list that have an
# overlap greater than the provided threshold
idxs = np.delete(idxs, np.concatenate(([last], np.where(overlap > overlapThresh)[0])))
# return only the bounding boxes that were picked using the
# integer data type
return pick
@register_ibs_method
def detect_image_cnn(ibs, gid, confidence=0.90, extraction='bing'):
r"""
Args:
ibs (IBEISController): ibeis controller object
gid (?):
confidence (float): (default = 0.9)
extraction (str): (default = 'bing')
CommandLine:
python -m ibeis_cnn._plugin --exec-detect_image_cnn
Example:
>>> # DISABLE_DOCTEST
>>> from ibeis_cnn._plugin import * # NOQA
>>> from ibeis_cnn._plugin import _suggest_random_candidate_regions, _suggest_bing_candidate_regions # NOQA
>>> import ibeis
>>> ibs = ibeis.opendb(defaultdb='testdb1')
>>> gid = 1
>>> confidence = 0.9
>>> extraction = 'bing'
>>> result = detect_image_cnn(ibs, gid, confidence, extraction)
>>> print(result)
"""
# Load chips and resize to the target
target = (96, 96)
targetx, targety = target
# gid = gid_list[random.randint(0, len(gid_list))]
# gid = gid_list[0]
print('Detecting with gid=%r...' % (gid, ))
image = ibs.get_images(gid)
rects = np.copy(image)
h, w, c = image.shape
print('Querying for candidate regions...')
image_path = ibs.get_image_paths(gid)
if extraction == 'random':
candidate_list = _suggest_random_candidate_regions(ibs, image, (32, 32))
else:
candidate_list = _suggest_bing_candidate_regions(ibs, [image_path])[0]
print('Num candidates: %r' % (len(candidate_list), ))
chip_list_resized = []
print('Extracting candidate regions...')
for candidate in candidate_list:
x0, y0, x1, y1 = candidate
chip = image[y0 : y1, x0 : x1]
chip = cv2.resize(chip, target, interpolation=cv2.INTER_LANCZOS4)
chip_list_resized.append(chip)
color = (255, 0, 0)
# cv2.rectangle(rects, (x0, y0), (x1, y1), color)
mx = int((x1 - x0) * 0.5)
my = int((y1 - y0) * 0.5)
cv2.circle(rects, (x0 + mx, y0 + my), 5, color, -1)
# cv2.imshow('', rects)
# cv2.waitKey(0)
# cv2.destroyAllWindows()
# Build data for network
X_test = np.array(chip_list_resized, dtype=np.uint8)
y_test = None
# Define model and load weights
print('Loading model...')
from ibeis_cnn import harness
data_shape = (96, 96, 3)
# Define model and load weights
print('Loading model...')
# batch_size = int(min(128, 2 ** np.floor(np.log2(len(chip_list_resized)))))
batch_size = None
model = models.ViewpointModel(batch_size=batch_size, data_shape=data_shape)
weights_path = grabmodels.ensure_model('viewpoint', redownload=False)
old_weights_fpath = weights_path
model.load_old_weights_kw(old_weights_fpath)
# Predict on the data and convert labels to IBEIS namespace
test_outputs = harness.test_data2(model, X_test, y_test)
conf_list = test_outputs['confidences']
label_list = test_outputs['labeled_predictions']
pred_list = test_outputs['predictions']
#pred_list, label_list, conf_list = test.test_data(X_test, y_test, model, weights_path)
species_viewpoint_list = [ convert_label(label) for label in label_list ]
num_all_candidates = len(conf_list)
index_list = non_max_suppression_fast(candidate_list, conf_list)
print('Surviving candidates: %r' % (index_list, ))
num_suppressed_candidates = num_all_candidates - len(index_list)
print('Suppressed: %d candidates' % (num_suppressed_candidates, ))
candidate_list = np.take(candidate_list, index_list, axis=0)
pred_list = np.take(pred_list, index_list, axis=0)
species_viewpoint_list = np.take(species_viewpoint_list, index_list, axis=0)
conf_list = np.take(conf_list, index_list, axis=0)
    values = list(zip(candidate_list, pred_list, species_viewpoint_list, conf_list))  # list() so len(values) below works on Python 3
rects = np.copy(image)
color_dict = {
'giraffe': (255, 0, 0),
'giraffe_masai': (255, 255, 0),
'zebra_plains': (0, 0, 255),
'zebra_grevys': (0, 255, 0),
'elephant_savanna': (0, 0, 0),
}
skipped = 0
for candidate, pred, species_viewpoint, conf in values:
x0, y0, x1, y1 = tuple(candidate)
species, viewpoint = species_viewpoint
if conf < confidence:
skipped += 1
continue
print('%r Found %s (%s, %s) at %s' % (candidate, pred, species, viewpoint, conf, ))
color = color_dict[species]
cv2.rectangle(rects, (x0, y0), (x1, y1), color)
# mx = int((x1 - x0) * 0.5)
# my = int((y1 - y0) * 0.5)
# cv2.circle(rects, (x0 + mx, y0 + my), 5, color, -1)
print('Skipped [ %d / %d ]' % (skipped, len(values), ))
cv2.imshow('', rects)
cv2.waitKey(0)
cv2.destroyAllWindows()
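The call to non_max_suppression_fast above prunes overlapping candidate boxes before the surviving detections are drawn. As a minimal sketch of the idea (not the implementation used here, which takes confidences rather than an explicit IoU threshold), a greedy IoU-based suppression can be written as:

```python
import numpy as np

def nms_sketch(boxes, scores, iou_threshold=0.5):
    """Greedy non-maximum suppression sketch: repeatedly keep the
    highest-scoring box and drop remaining boxes whose IoU with it
    exceeds iou_threshold.  Boxes are (x0, y0, x1, y1); returns the
    indices of the kept boxes."""
    boxes = np.asarray(boxes, dtype=float)
    order = np.argsort(scores)[::-1]
    keep = []
    while order.size:
        i = order[0]
        keep.append(int(i))
        # Intersection of box i with every remaining box
        xx0 = np.maximum(boxes[i, 0], boxes[order[1:], 0])
        yy0 = np.maximum(boxes[i, 1], boxes[order[1:], 1])
        xx1 = np.minimum(boxes[i, 2], boxes[order[1:], 2])
        yy1 = np.minimum(boxes[i, 3], boxes[order[1:], 3])
        inter = np.maximum(0, xx1 - xx0) * np.maximum(0, yy1 - yy0)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        areas = ((boxes[order[1:], 2] - boxes[order[1:], 0]) *
                 (boxes[order[1:], 3] - boxes[order[1:], 1]))
        iou = inter / (area_i + areas - inter)
        # Keep only boxes that do not overlap box i too strongly
        order = order[1:][iou <= iou_threshold]
    return keep
```

For boxes [[0, 0, 10, 10], [1, 1, 11, 11], [50, 50, 60, 60]] with scores [0.9, 0.8, 0.7], the second box overlaps the first above the threshold and is suppressed, leaving indices 0 and 2.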
def get_siam_l2_model():
"""
model.show_weights_image()
"""
model_url = 'https://lev.cs.rpi.edu/public/models/siaml2_128_model_state.pkl'
model_dpath = ut.ensure_app_resource_dir('ibeis_cnn', 'models')
model_fpath = ut.grab_file_url(model_url, download_dir=model_dpath)
model_state = ut.load_cPkl(model_fpath)
import ibeis_cnn
ibeis_cnn.models
model = models.SiameseL2(
input_shape=model_state['input_shape'],
arch_tag=model_state['arch_tag'], autoinit=True)
model.load_model_state(fpath=model_fpath)
return model
def generate_siam_l2_128_feats(ibs, cid_list, config2_=None):
r"""
Args:
ibs (IBEISController): ibeis controller object
cid_list (list):
config2_ (dict): (default = None)
CommandLine:
python -m ibeis_cnn._plugin --test-generate_siam_l2_128_feats
python -m ibeis_cnn._plugin --test-generate_siam_l2_128_feats --db PZ_Master0
SeeAlso:
~/code/ibeis/ibeis/algo/preproc/preproc_feat.py
Example:
>>> # DISABLE_DOCTEST
>>> from ibeis_cnn._plugin import * # NOQA
>>> import ibeis
>>> ibs = ibeis.opendb(defaultdb='testdb1')
>>> cid_list = ibs.depc_annot.get_rowids('chips', ibs.get_valid_aids())
>>> config2_ = None
>>> # megahack
>>> config2_ = dict(feat_type='hesaff+siam128',
>>> feat_cfgstr=ibs.cfg.feat_cfg.get_cfgstr().replace('sift', 'siam128'),
>>> hesaff_params=ibs.cfg.feat_cfg.get_hesaff_params())
>>> featgen = generate_siam_l2_128_feats(ibs, cid_list, config2_)
>>> result = ut.depth_profile(list(featgen))
>>> print(result)
"""
#if config2_ is not None:
# # Get config from config2_ object
# #print('id(config2_) = ' + str(id(config2_)))
# feat_cfgstr = config2_.get('feat_cfgstr')
# hesaff_params = config2_.get('hesaff_params')
# assert feat_cfgstr is not None
# assert hesaff_params is not None
#else:
# # Get config from IBEIS controller
# feat_cfgstr = ibs.cfg.feat_cfg.get_cfgstr()
# hesaff_params = ibs.cfg.feat_cfg.get_hesaff_params()
# hack because we need the old features
import vtool as vt
import ibeis_cnn
model = get_siam_l2_model()
colorspace = 'gray' if model.input_shape[1] else None # 'bgr'
patch_size = model.input_shape[-1]
if config2_ is not None:
# Get config from config2_ object
#print('id(config2_) = ' + str(id(config2_)))
feat_cfgstr = config2_.get('feat_cfgstr')
hesaff_params = config2_.get('hesaff_params')
assert feat_cfgstr is not None
assert hesaff_params is not None
else:
# Get config from IBEIS controller
feat_cfgstr = ibs.cfg.feat_cfg.get_cfgstr()
hesaff_params = ibs.cfg.feat_cfg.get_hesaff_params()
hack_config2_ = dict(feat_type='hesaff+sift',
feat_cfgstr=feat_cfgstr.replace('siam128', 'sift'),
hesaff_params=hesaff_params)
print('Generating siam128 features for %d chips' % (len(cid_list),))
BATCHED = True
if BATCHED:
ibs.get_chip_feat_rowid(cid_list, config2_=hack_config2_, ensure=True)
for cid_batch in ut.ProgressIter(list(ut.ichunks(cid_list, 128)), lbl='siam128 chip chunk'):
sift_fid_list = ibs.get_chip_feat_rowid(cid_batch, config2_=hack_config2_)
print('Reading keypoints')
kpts_list = ibs.get_feat_kpts(sift_fid_list)
print('Reading chips')
chip_list = vt.convert_image_list_colorspace(
ibs.get_chips(cid_batch, ensure=True), colorspace)
print('Warping patches')
warped_patches_list = [vt.get_warped_patches(chip, kpts, patch_size=patch_size)[0]
for chip, kpts in zip(chip_list, kpts_list)]
flat_list, cumlen_list = ut.invertible_flatten2(warped_patches_list)
stacked_patches = np.transpose(np.array(flat_list)[None, :], (1, 2, 3, 0))
test_outputs = ibeis_cnn.harness.test_data2(model, stacked_patches, None)
network_output_determ = test_outputs['network_output_determ']
#network_output_determ.min()
#network_output_determ.max()
siam128_vecs_list = ut.unflatten2(network_output_determ, cumlen_list)
for cid, kpts, vecs in zip(cid_batch, kpts_list, siam128_vecs_list):
yield cid, len(kpts), kpts, vecs
else:
sift_fid_list = ibs.get_chip_feat_rowid(cid_list, config2_=hack_config2_, ensure=True) # NOQA
print('Reading keypoints')
kpts_list = ibs.get_feat_kpts(sift_fid_list)
print('Reading chips')
chip_list = vt.convert_image_list_colorspace(
ibs.get_chips(cid_list, ensure=True), colorspace)
print('Warping patches')
warped_patches_list = [vt.get_warped_patches(chip, kpts, patch_size=patch_size)[0]
for chip, kpts in zip(chip_list, kpts_list)]
flat_list, cumlen_list = ut.invertible_flatten2(warped_patches_list)
stacked_patches = np.transpose(np.array(flat_list)[None, :], (1, 2, 3, 0))
test_outputs = ibeis_cnn.harness.test_data2(model, stacked_patches, None)
network_output_determ = test_outputs['network_output_determ']
#network_output_determ.min()
#network_output_determ.max()
siam128_vecs_list = ut.unflatten2(network_output_determ, cumlen_list)
for cid, kpts, vecs in zip(cid_list, kpts_list, siam128_vecs_list):
yield cid, len(kpts), kpts, vecs
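generate_siam_l2_128_feats flattens the per-chip patch lists with ut.invertible_flatten2, runs the network once over the stacked patches, and splits the outputs back per chip with ut.unflatten2. A toy sketch of that flatten/unflatten contract (an assumption about utool's behavior inferred from how it is used here, not utool's actual code):

```python
def invertible_flatten_sketch(list_of_lists):
    """Flatten a list of lists while recording cumulative sublist
    lengths, so the flat result can be split back later."""
    flat, cumlen = [], []
    total = 0
    for sub in list_of_lists:
        flat.extend(sub)
        total += len(sub)
        cumlen.append(total)
    return flat, cumlen

def unflatten_sketch(flat, cumlen):
    """Invert invertible_flatten_sketch using the cumulative offsets."""
    out, start = [], 0
    for end in cumlen:
        out.append(flat[start:end])
        start = end
    return out
```

This round-trips: flattening [[1, 2], [3], [4, 5, 6]] yields the flat list plus offsets [2, 3, 6], and unflattening restores the original grouping, which is exactly how the per-keypoint descriptors are regrouped per chip above.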
def extract_siam128_vecs(chip_list, kpts_list):
"""
Duplicate testing func for vtool
"""
import vtool as vt
import ibeis_cnn
model = get_siam_l2_model()
colorspace = 'gray' if model.input_shape[1] else None # 'bgr'
patch_size = model.input_shape[-1]
chip_list_ = vt.convert_image_list_colorspace(chip_list, colorspace)
warped_patches_list = [vt.get_warped_patches(chip, kpts, patch_size=patch_size)[0]
for chip, kpts in zip(chip_list_, kpts_list)]
flat_list, cumlen_list = ut.invertible_flatten2(warped_patches_list)
stacked_patches = np.transpose(np.array(flat_list)[None, :], (1, 2, 3, 0))
test_outputs = ibeis_cnn.harness.test_data2(model, stacked_patches, None)
network_output_determ = test_outputs['network_output_determ']
#network_output_determ.min()
#network_output_determ.max()
siam128_vecs_list = ut.unflatten2(network_output_determ, cumlen_list)
return siam128_vecs_list
if __name__ == '__main__':
"""
CommandLine:
python -m ibeis_cnn._plugin
python -m ibeis_cnn._plugin --allexamples
python -m ibeis_cnn._plugin --allexamples --noface --nosrc
"""
import multiprocessing
multiprocessing.freeze_support() # for win32
import utool as ut # NOQA
ut.doctest_funcs() | unknown | codeparrot/codeparrot-clean | ||
# Copyright (c) 2013, FinByz Tech Pvt. Ltd. and contributors
# For license information, please see license.txt
from __future__ import unicode_literals
import frappe
import json
import re
from frappe import _
from frappe.utils import nowdate
def execute(filters=None):
	if not filters: filters = frappe._dict(posting_date=[nowdate(), nowdate()])
columns, data = [], []
columns = get_columns()
data = get_data(filters)
return columns, data
def get_data(filters):
conditions = get_conditions(filters)
data = frappe.db.sql("""
SELECT
dn.name as dn_id, dn.posting_date, dn.company, dn.company_gstin, dn.customer, dn.customer_gstin, dni.item_code, dni.item_name, dni.description, dni.gst_hsn_code, dni.uom, dni.qty, dni.amount, dn.mode_of_transport, dn.distance, dn.transporter_name, dn.gst_transporter_id, dn.lr_no, dn.lr_date, dn.vehicle_no, dn.gst_vehicle_type, dn.company_address, dn.shipping_address_name
FROM
`tabDelivery Note` AS dn join `tabDelivery Note Item` AS dni on (dni.parent = dn.name)
WHERE
dn.docstatus < 2
%s """ % conditions, as_dict=1)
unit = {
'Bag': "BAGS",
'Bottle': "BOTTLES",
'Kg': "KILOGRAMS",
'Liter': "LITERS",
'Meter': "METERS",
'Nos': "NUMBERS",
'PKT': "PACKS",
'Roll': "ROLLS",
'Set': "SETS"
}
# Regular expression set to remove all the special characters
special_characters = "[$%^*()+\\[\]{};':\"\\|<>.?]"
for row in data:
set_defaults(row)
set_taxes(row, filters)
set_address_details(row, special_characters)
		# e-Way Bill expects dates as dd/mm/yyyy, so convert from ISO yyyy-mm-dd
row.posting_date = '/'.join(str(row.posting_date).replace("-", "/").split('/')[::-1])
row.lr_date = '/'.join(str(row.lr_date).replace("-", "/").split('/')[::-1])
if row.gst_vehicle_type == 'Over Dimensional Cargo (ODC)':
row.gst_vehicle_type = 'ODC'
row.item_name = re.sub(special_characters, " ", row.item_name)
row.description = row.item_name
row.uom = unit.get(row.uom, row.uom)
		# Remove special characters and digits from the customer name.
row.customer = re.sub(special_characters[:-1] + "&0-9" + "]", "", row.customer)
return data
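The string-reversal trick above turns ISO dates (yyyy-mm-dd) into the dd/mm/yyyy form the e-Way Bill template expects. A more explicit equivalent (a sketch with an illustrative name, not the report's code) parses the date instead of splitting strings:

```python
from datetime import datetime

def to_eway_date(iso_date):
    """Convert a 'yyyy-mm-dd' date (or date-like object whose str()
    has that form) to the 'dd/mm/yyyy' format used by the e-Way Bill."""
    return datetime.strptime(str(iso_date), "%Y-%m-%d").strftime("%d/%m/%Y")
```

Unlike the split/reverse approach, this raises a clear ValueError on malformed input rather than silently producing a scrambled string.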
def get_conditions(filters):
conditions = ""
conditions += filters.get('company') and " AND dn.company = '%s' " % filters.get('company') or ""
conditions += filters.get('posting_date') and " AND dn.posting_date >= '%s' AND dn.posting_date <= '%s' " % (filters.get('posting_date')[0], filters.get('posting_date')[1]) or ""
conditions += filters.get('delivery_note') and " AND dn.name = '%s' " % filters.get('delivery_note') or ""
	conditions += filters.get('customer') and " AND dn.customer = '%s' " % filters.get('customer').replace("'", "\\'") or ""
return conditions
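get_conditions interpolates filter values directly into the SQL string, which is fragile with quotes and unsafe with untrusted input. A hedged alternative (a sketch; the function name is illustrative) builds placeholder conditions and returns the values separately, matching the %(name)s substitution style that frappe.db.sql accepts:

```python
def get_conditions_parameterized(filters):
    """Return (conditions_sql, values) using %(name)s placeholders
    instead of string-interpolated literals, suitable for passing as
    frappe.db.sql(query % conditions_sql is NOT needed -- pass values
    as the second argument instead)."""
    conditions, values = [], {}
    if filters.get('company'):
        conditions.append("dn.company = %(company)s")
        values['company'] = filters['company']
    if filters.get('posting_date'):
        conditions.append(
            "dn.posting_date BETWEEN %(from_date)s AND %(to_date)s")
        values['from_date'], values['to_date'] = filters['posting_date']
    if filters.get('delivery_note'):
        conditions.append("dn.name = %(delivery_note)s")
        values['delivery_note'] = filters['delivery_note']
    if filters.get('customer'):
        conditions.append("dn.customer = %(customer)s")
        values['customer'] = filters['customer']
    sql = (" AND " + " AND ".join(conditions)) if conditions else ""
    return sql, values
```

A customer name like "O'Brien" then needs no manual quote escaping, because the database driver handles the quoting.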
def set_defaults(row):
row.setdefault(u'supply_type', "Outward")
row.setdefault(u'sub_type', "Supply")
row.setdefault(u'doc_type', "Delivery Challan")
def set_address_details(row, special_characters):
if row.get('company_address'):
address_line1, address_line2, city, pincode, state = frappe.db.get_value("Address", row.get('company_address'), ['address_line1', 'address_line2', 'city', 'pincode', 'state'])
row.update({'from_address_1': re.sub(special_characters, "", address_line1 or '')})
row.update({'from_address_2': re.sub(special_characters, "", address_line2 or '')})
row.update({'from_place': city and city.upper() or ''})
row.update({'from_pin_code': pincode and pincode.replace(" ", "") or ''})
row.update({'from_state': state and state.upper() or ''})
row.update({'dispatch_state': row.from_state})
if row.get('shipping_address_name'):
address_line1, address_line2, city, pincode, state = frappe.db.get_value("Address", row.get('shipping_address_name'), ['address_line1', 'address_line2', 'city', 'pincode', 'state'])
row.update({'to_address_1': re.sub(special_characters, "", address_line1 or '')})
row.update({'to_address_2': re.sub(special_characters, "", address_line2 or '')})
row.update({'to_place': city and city.upper() or ''})
row.update({'to_pin_code': pincode and pincode.replace(" ", "") or ''})
row.update({'to_state': state and state.upper() or ''})
row.update({'ship_to_state': row.to_state})
def set_taxes(row, filters):
taxes = frappe.get_list("Sales Taxes and Charges",
filters={
'parent': row.dn_id
},
fields=('item_wise_tax_detail', 'account_head'))
account_list = ["cgst_account", "sgst_account", "igst_account", "cess_account"]
taxes_list = frappe.get_list("GST Account",
filters={
"parent": "GST Settings",
"company": filters.company
},
fields=account_list)
if not taxes_list:
frappe.throw(_("Please set GST Accounts in GST Settings"))
item_tax_rate = {}
for tax in taxes:
item_wise_tax = json.loads(tax.item_wise_tax_detail)
item_tax_rate[tax.account_head] = item_wise_tax.get(row.item_code)
tax_rate = []
tax = taxes_list[0]
for key in account_list:
		if tax[key] not in item_tax_rate:
item_tax_rate[tax[key]] = [0.0, 0.0]
tax_rate.append(str(item_tax_rate[tax[key]][0]))
row.update({key[:5] + "amount": round(item_tax_rate.get(tax[key], 0.0)[1], 2)})
item_tax_rate.pop(tax[key])
row.amount = float(row.amount) + sum(i[1] for i in item_tax_rate.values())
row.update({'tax_rate': '+'.join(tax_rate)})
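set_taxes reads each tax row's item_wise_tax_detail, a JSON map of {item_code: [rate, amount]}, and emits a combined rate string (e.g. "9.0+9.0+0.0+0.0") plus a per-account amount. A toy illustration of that aggregation shape (function and account names are examples, not the report's code):

```python
import json

def summarize_item_taxes(tax_rows, item_code, account_order):
    """tax_rows: list of {'account_head': str, 'item_wise_tax_detail': json}.
    Returns (rate_string, {account: rounded_amount}) for one item,
    defaulting missing accounts to [0.0, 0.0] like set_taxes does."""
    per_account = {}
    for row in tax_rows:
        detail = json.loads(row['item_wise_tax_detail'])
        per_account[row['account_head']] = detail.get(item_code, [0.0, 0.0])
    rates, amounts = [], {}
    for account in account_order:
        rate, amount = per_account.get(account, [0.0, 0.0])
        rates.append(str(rate))
        amounts[account] = round(amount, 2)
    return '+'.join(rates), amounts
```

With two 9% tax rows on amount 90.0 each, this yields "9.0+9.0" and the two amounts, mirroring the tax_rate string and cgst_amount/sgst_amount columns filled in above.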
def get_columns():
columns = [
{
"fieldname": "supply_type",
"label": _("Supply Type"),
"fieldtype": "Data",
"width": 100
},
{
"fieldname": "sub_type",
"label": _("Sub Type"),
"fieldtype": "Data",
"width": 100
},
{
"fieldname": "doc_type",
"label": _("Doc Type"),
"fieldtype": "Data",
"width": 100
},
{
"fieldname": "dn_id",
"label": _("Doc Name"),
"fieldtype": "Link",
"options": "Delivery Note",
"width": 140
},
{
"fieldname": "posting_date",
"label": _("Doc Date"),
"fieldtype": "Data",
"width": 100
},
{
"fieldname": "company",
"label": _("From Party Name"),
"fieldtype": "Link",
"options": "Company",
"width": 120
},
{
"fieldname": "company_gstin",
"label": _("From GSTIN"),
"fieldtype": "Data",
"width": 100
},
{
"fieldname": "from_address_1",
"label": _("From Address 1"),
"fieldtype": "Data",
"width": 120
},
{
"fieldname": "from_address_2",
"label": _("From Address 2"),
"fieldtype": "Data",
"width": 120
},
{
"fieldname": "from_place",
"label": _("From Place"),
"fieldtype": "Data",
"width": 80
},
{
"fieldname": "from_pin_code",
"label": _("From Pin Code"),
"fieldtype": "Data",
"width": 80
},
{
"fieldname": "from_state",
"label": _("From State"),
"fieldtype": "Data",
"width": 80
},
{
"fieldname": "dispatch_state",
"label": _("Dispatch State"),
"fieldtype": "Data",
"width": 100
},
{
"fieldname": "customer",
"label": _("To Party Name"),
"fieldtype": "Data",
"width": 120
},
{
"fieldname": "customer_gstin",
"label": _("To GSTIN"),
"fieldtype": "Data",
"width": 120
},
{
"fieldname": "to_address_1",
"label": _("To Address 1"),
"fieldtype": "Data",
"width": 120
},
{
"fieldname": "to_address_2",
"label": _("To Address 2"),
"fieldtype": "Data",
"width": 120
},
{
"fieldname": "to_place",
"label": _("To Place"),
"fieldtype": "Data",
"width": 80
},
{
"fieldname": "to_pin_code",
"label": _("To Pin Code"),
"fieldtype": "Data",
"width": 80
},
{
"fieldname": "to_state",
"label": _("To State"),
"fieldtype": "Data",
"width": 80
},
{
"fieldname": "ship_to_state",
"label": _("Ship To State"),
"fieldtype": "Data",
"width": 100
},
{
"fieldname": "item_name",
"label": _("Product"),
"fieldtype": "Link",
"options": "Item",
"width": 120
},
{
"fieldname": "description",
"label": _("Description"),
"fieldtype": "Data",
"width": 100
},
{
"fieldname": "gst_hsn_code",
"label": _("HSN"),
"fieldtype": "Data",
"width": 120
},
{
"fieldname": "uom",
"label": _("Unit"),
"fieldtype": "Data",
"width": 100
},
{
"fieldname": "qty",
"label": _("Qty"),
"fieldtype": "Float",
"width": 100
},
{
"fieldname": "amount",
			"label": _("Assessable Value"),
"fieldtype": "Float",
"width": 120
},
{
"fieldname": "tax_rate",
"label": _("Tax Rate"),
"fieldtype": "Data",
"width": 100
},
{
"fieldname": "cgst_amount",
"label": _("CGST Amount"),
"fieldtype": "Data",
"width": 100
},
{
"fieldname": "sgst_amount",
"label": _("SGST Amount"),
"fieldtype": "Data",
"width": 100
},
{
"fieldname": "igst_amount",
"label": _("IGST Amount"),
"fieldtype": "Data",
"width": 100
},
{
"fieldname": "cess_amount",
"label": _("CESS Amount"),
"fieldtype": "Data",
"width": 100
},
{
"fieldname": "mode_of_transport",
"label": _("Mode of Transport"),
"fieldtype": "Data",
"width": 100
},
{
"fieldname": "distance",
"label": _("Distance"),
"fieldtype": "Data",
"width": 100
},
{
"fieldname": "transporter_name",
"label": _("Transporter Name"),
"fieldtype": "Data",
"width": 120
},
{
"fieldname": "gst_transporter_id",
"label": _("Transporter ID"),
"fieldtype": "Data",
"width": 100
},
{
"fieldname": "lr_no",
"label": _("Transport Receipt No"),
"fieldtype": "Data",
"width": 120
},
{
"fieldname": "lr_date",
"label": _("Transport Receipt Date"),
"fieldtype": "Data",
"width": 120
},
{
"fieldname": "vehicle_no",
"label": _("Vehicle No"),
"fieldtype": "Data",
"width": 100
},
{
"fieldname": "gst_vehicle_type",
"label": _("Vehicle Type"),
"fieldtype": "Data",
"width": 100
},
]
return columns | unknown | codeparrot/codeparrot-clean | ||
# Copyright 2013 Red Hat, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import mock
from jacket.db import compute
from jacket.compute import exception
from jacket.objects.compute import flavor as flavor_obj
from jacket.tests.compute.unit.objects import test_objects
fake_flavor = {
'created_at': None,
'updated_at': None,
'deleted_at': None,
'deleted': 0,
'id': 1,
'name': 'm1.foo',
'memory_mb': 1024,
'vcpus': 4,
'root_gb': 20,
'ephemeral_gb': 0,
'flavorid': 'm1.foo',
'swap': 0,
'rxtx_factor': 1.0,
'vcpu_weight': 1,
'disabled': False,
'is_public': True,
'extra_specs': {'foo': 'bar'},
}
class _TestFlavor(object):
@staticmethod
def _compare(test, compute, obj):
for field, value in compute.items():
test.assertEqual(compute[field], obj[field])
def test_get_by_id(self):
with mock.patch.object(compute, 'flavor_get') as get:
get.return_value = fake_flavor
flavor = flavor_obj.Flavor.get_by_id(self.context, 1)
self._compare(self, fake_flavor, flavor)
def test_get_by_name(self):
with mock.patch.object(compute, 'flavor_get_by_name') as get_by_name:
get_by_name.return_value = fake_flavor
flavor = flavor_obj.Flavor.get_by_name(self.context, 'm1.foo')
self._compare(self, fake_flavor, flavor)
def test_get_by_flavor_id(self):
with mock.patch.object(compute, 'flavor_get_by_flavor_id') as get_by_id:
get_by_id.return_value = fake_flavor
flavor = flavor_obj.Flavor.get_by_flavor_id(self.context,
'm1.foo')
self._compare(self, fake_flavor, flavor)
def test_add_access(self):
elevated = self.context.elevated()
flavor = flavor_obj.Flavor(context=elevated, flavorid='123')
with mock.patch.object(compute, 'flavor_access_add') as add:
flavor.add_access('456')
add.assert_called_once_with(elevated, '123', '456')
def test_add_access_with_dirty_projects(self):
flavor = flavor_obj.Flavor(context=self.context, projects=['1'])
self.assertRaises(exception.ObjectActionError,
flavor.add_access, '2')
def test_remove_access(self):
elevated = self.context.elevated()
flavor = flavor_obj.Flavor(context=elevated, flavorid='123')
with mock.patch.object(compute, 'flavor_access_remove') as remove:
flavor.remove_access('456')
remove.assert_called_once_with(elevated, '123', '456')
def test_create(self):
flavor = flavor_obj.Flavor(context=self.context)
flavor.name = 'm1.foo'
flavor.extra_specs = fake_flavor['extra_specs']
with mock.patch.object(compute, 'flavor_create') as create:
create.return_value = fake_flavor
flavor.create()
self.assertEqual(self.context, flavor._context)
# NOTE(danms): Orphan this to avoid lazy-loads
flavor._context = None
self._compare(self, fake_flavor, flavor)
def test_create_with_projects(self):
context = self.context.elevated()
flavor = flavor_obj.Flavor(context=context)
flavor.name = 'm1.foo'
flavor.extra_specs = fake_flavor['extra_specs']
flavor.projects = ['project-1', 'project-2']
db_flavor = dict(fake_flavor, projects=list(flavor.projects))
with mock.patch.multiple(compute, flavor_create=mock.DEFAULT,
flavor_access_get_by_flavor_id=mock.DEFAULT
) as methods:
methods['flavor_create'].return_value = db_flavor
methods['flavor_access_get_by_flavor_id'].return_value = [
{'project_id': 'project-1'},
{'project_id': 'project-2'}]
flavor.create()
methods['flavor_create'].assert_called_once_with(
context,
{'name': 'm1.foo',
'extra_specs': fake_flavor['extra_specs']},
projects=['project-1', 'project-2'])
self.assertEqual(context, flavor._context)
# NOTE(danms): Orphan this to avoid lazy-loads
flavor._context = None
self._compare(self, fake_flavor, flavor)
self.assertEqual(['project-1', 'project-2'], flavor.projects)
def test_create_with_id(self):
flavor = flavor_obj.Flavor(context=self.context, id=123)
self.assertRaises(exception.ObjectActionError, flavor.create)
@mock.patch('compute.compute.flavor_access_add')
@mock.patch('compute.compute.flavor_access_remove')
@mock.patch('compute.compute.flavor_extra_specs_delete')
@mock.patch('compute.compute.flavor_extra_specs_update_or_create')
def test_save(self, mock_update, mock_delete, mock_remove, mock_add):
ctxt = self.context.elevated()
extra_specs = {'key1': 'value1', 'key2': 'value2'}
projects = ['project-1', 'project-2']
flavor = flavor_obj.Flavor(context=ctxt, flavorid='foo',
extra_specs=extra_specs, projects=projects)
flavor.obj_reset_changes()
# Test deleting an extra_specs key and project
del flavor.extra_specs['key1']
del flavor.projects[-1]
self.assertEqual(set(['extra_specs', 'projects']),
flavor.obj_what_changed())
flavor.save()
self.assertEqual({'key2': 'value2'}, flavor.extra_specs)
mock_delete.assert_called_once_with(ctxt, 'foo', 'key1')
self.assertEqual(['project-1'], flavor.projects)
mock_remove.assert_called_once_with(ctxt, 'foo', 'project-2')
# Test updating an extra_specs key value
flavor.extra_specs['key2'] = 'foobar'
self.assertEqual(set(['extra_specs']), flavor.obj_what_changed())
flavor.save()
self.assertEqual({'key2': 'foobar'}, flavor.extra_specs)
mock_update.assert_called_with(ctxt, 'foo', {'key2': 'foobar'})
# Test adding an extra_specs and project
flavor.extra_specs['key3'] = 'value3'
flavor.projects.append('project-3')
self.assertEqual(set(['extra_specs', 'projects']),
flavor.obj_what_changed())
flavor.save()
self.assertEqual({'key2': 'foobar', 'key3': 'value3'},
flavor.extra_specs)
mock_update.assert_called_with(ctxt, 'foo', {'key2': 'foobar',
'key3': 'value3'})
self.assertEqual(['project-1', 'project-3'], flavor.projects)
mock_add.assert_called_once_with(ctxt, 'foo', 'project-3')
@mock.patch('compute.compute.flavor_create')
@mock.patch('compute.compute.flavor_extra_specs_delete')
@mock.patch('compute.compute.flavor_extra_specs_update_or_create')
def test_save_deleted_extra_specs(self, mock_update, mock_delete,
mock_create):
mock_create.return_value = dict(fake_flavor,
extra_specs={'key1': 'value1'})
ctxt = self.context.elevated()
flavor = flavor_obj.Flavor(context=ctxt)
flavor.flavorid = 'test'
flavor.extra_specs = {'key1': 'value1'}
flavor.create()
flavor.extra_specs = {}
flavor.save()
mock_delete.assert_called_once_with(ctxt, flavor.flavorid,
'key1')
self.assertFalse(mock_update.called)
def test_save_invalid_fields(self):
flavor = flavor_obj.Flavor(id=123)
self.assertRaises(exception.ObjectActionError, flavor.save)
def test_destroy(self):
flavor = flavor_obj.Flavor(context=self.context, id=123, name='foo')
with mock.patch.object(compute, 'flavor_destroy') as destroy:
flavor.destroy()
destroy.assert_called_once_with(self.context, flavor.name)
def test_load_projects(self):
flavor = flavor_obj.Flavor(context=self.context, flavorid='foo')
with mock.patch.object(compute, 'flavor_access_get_by_flavor_id') as get:
get.return_value = [{'project_id': 'project-1'}]
projects = flavor.projects
self.assertEqual(['project-1'], projects)
self.assertNotIn('projects', flavor.obj_what_changed())
def test_load_anything_else(self):
flavor = flavor_obj.Flavor()
self.assertRaises(exception.ObjectActionError,
getattr, flavor, 'name')
class TestFlavor(test_objects._LocalTest, _TestFlavor):
pass
class TestFlavorRemote(test_objects._RemoteTest, _TestFlavor):
pass
class _TestFlavorList(object):
def test_get_all(self):
with mock.patch.object(compute, 'flavor_get_all') as get_all:
get_all.return_value = [fake_flavor]
filters = {'min_memory_mb': 4096}
flavors = flavor_obj.FlavorList.get_all(self.context,
inactive=False,
filters=filters,
sort_key='id',
sort_dir='asc')
self.assertEqual(1, len(flavors))
_TestFlavor._compare(self, fake_flavor, flavors[0])
get_all.assert_called_once_with(self.context, inactive=False,
filters=filters, sort_key='id',
sort_dir='asc', limit=None,
marker=None)
class TestFlavorList(test_objects._LocalTest, _TestFlavorList):
pass
class TestFlavorListRemote(test_objects._RemoteTest, _TestFlavorList):
pass | unknown | codeparrot/codeparrot-clean | ||
// SPDX-License-Identifier: GPL-2.0
#include <linux/kernel.h>
#include <linux/mm.h>
#include <linux/slab.h>
#include <linux/uaccess.h>
#include <linux/ktime.h>
#include <linux/debugfs.h>
#include <linux/highmem.h>
#include "gup_test.h"
static void put_back_pages(unsigned int cmd, struct page **pages,
unsigned long nr_pages, unsigned int gup_test_flags)
{
unsigned long i;
switch (cmd) {
case GUP_FAST_BENCHMARK:
case GUP_BASIC_TEST:
for (i = 0; i < nr_pages; i++)
put_page(pages[i]);
break;
case PIN_FAST_BENCHMARK:
case PIN_BASIC_TEST:
case PIN_LONGTERM_BENCHMARK:
unpin_user_pages(pages, nr_pages);
break;
case DUMP_USER_PAGES_TEST:
if (gup_test_flags & GUP_TEST_FLAG_DUMP_PAGES_USE_PIN) {
unpin_user_pages(pages, nr_pages);
} else {
for (i = 0; i < nr_pages; i++)
put_page(pages[i]);
}
break;
}
}
static void verify_dma_pinned(unsigned int cmd, struct page **pages,
unsigned long nr_pages)
{
unsigned long i;
struct folio *folio;
switch (cmd) {
case PIN_FAST_BENCHMARK:
case PIN_BASIC_TEST:
case PIN_LONGTERM_BENCHMARK:
for (i = 0; i < nr_pages; i++) {
folio = page_folio(pages[i]);
if (WARN(!folio_maybe_dma_pinned(folio),
"pages[%lu] is NOT dma-pinned\n", i)) {
dump_page(&folio->page, "gup_test failure");
break;
} else if (cmd == PIN_LONGTERM_BENCHMARK &&
WARN(!folio_is_longterm_pinnable(folio),
"pages[%lu] is NOT pinnable but pinned\n",
i)) {
dump_page(&folio->page, "gup_test failure");
break;
}
}
break;
}
}
static void dump_pages_test(struct gup_test *gup, struct page **pages,
unsigned long nr_pages)
{
unsigned int index_to_dump;
unsigned int i;
/*
* Zero out any user-supplied page index that is out of range. Remember:
* .which_pages[] contains a 1-based set of page indices.
*/
for (i = 0; i < GUP_TEST_MAX_PAGES_TO_DUMP; i++) {
if (gup->which_pages[i] > nr_pages) {
pr_warn("ZEROING due to out of range: .which_pages[%u]: %u\n",
i, gup->which_pages[i]);
gup->which_pages[i] = 0;
}
}
for (i = 0; i < GUP_TEST_MAX_PAGES_TO_DUMP; i++) {
index_to_dump = gup->which_pages[i];
if (index_to_dump) {
index_to_dump--; // Decode from 1-based, to 0-based
pr_info("---- page #%u, starting from user virt addr: 0x%llx\n",
index_to_dump, gup->addr);
dump_page(pages[index_to_dump],
"gup_test: dump_pages() test");
}
}
}
static int __gup_test_ioctl(unsigned int cmd,
struct gup_test *gup)
{
ktime_t start_time, end_time;
unsigned long i, nr_pages, addr, next;
long nr;
struct page **pages;
int ret = 0;
bool needs_mmap_lock =
cmd != GUP_FAST_BENCHMARK && cmd != PIN_FAST_BENCHMARK;
if (gup->size > ULONG_MAX)
return -EINVAL;
nr_pages = gup->size / PAGE_SIZE;
pages = kvcalloc(nr_pages, sizeof(void *), GFP_KERNEL);
if (!pages)
return -ENOMEM;
if (needs_mmap_lock && mmap_read_lock_killable(current->mm)) {
ret = -EINTR;
goto free_pages;
}
i = 0;
nr = gup->nr_pages_per_call;
start_time = ktime_get();
for (addr = gup->addr; addr < gup->addr + gup->size; addr = next) {
if (nr != gup->nr_pages_per_call)
break;
next = addr + nr * PAGE_SIZE;
if (next > gup->addr + gup->size) {
next = gup->addr + gup->size;
nr = (next - addr) / PAGE_SIZE;
}
switch (cmd) {
case GUP_FAST_BENCHMARK:
nr = get_user_pages_fast(addr, nr, gup->gup_flags,
pages + i);
break;
case GUP_BASIC_TEST:
nr = get_user_pages(addr, nr, gup->gup_flags, pages + i);
break;
case PIN_FAST_BENCHMARK:
nr = pin_user_pages_fast(addr, nr, gup->gup_flags,
pages + i);
break;
case PIN_BASIC_TEST:
nr = pin_user_pages(addr, nr, gup->gup_flags, pages + i);
break;
case PIN_LONGTERM_BENCHMARK:
nr = pin_user_pages(addr, nr,
gup->gup_flags | FOLL_LONGTERM,
pages + i);
break;
case DUMP_USER_PAGES_TEST:
if (gup->test_flags & GUP_TEST_FLAG_DUMP_PAGES_USE_PIN)
nr = pin_user_pages(addr, nr, gup->gup_flags,
pages + i);
else
nr = get_user_pages(addr, nr, gup->gup_flags,
pages + i);
break;
default:
ret = -EINVAL;
goto unlock;
}
if (nr <= 0)
break;
i += nr;
}
end_time = ktime_get();
	/* Shifting the meaning of nr_pages: now it is the actual number pinned: */
nr_pages = i;
gup->get_delta_usec = ktime_us_delta(end_time, start_time);
gup->size = addr - gup->addr;
/*
* Take an un-benchmark-timed moment to verify DMA pinned
* state: print a warning if any non-dma-pinned pages are found:
*/
verify_dma_pinned(cmd, pages, nr_pages);
if (cmd == DUMP_USER_PAGES_TEST)
dump_pages_test(gup, pages, nr_pages);
start_time = ktime_get();
put_back_pages(cmd, pages, nr_pages, gup->test_flags);
end_time = ktime_get();
gup->put_delta_usec = ktime_us_delta(end_time, start_time);
unlock:
if (needs_mmap_lock)
mmap_read_unlock(current->mm);
free_pages:
kvfree(pages);
return ret;
}
static DEFINE_MUTEX(pin_longterm_test_mutex);
static struct page **pin_longterm_test_pages;
static unsigned long pin_longterm_test_nr_pages;
static inline void pin_longterm_test_stop(void)
{
if (pin_longterm_test_pages) {
if (pin_longterm_test_nr_pages)
unpin_user_pages(pin_longterm_test_pages,
pin_longterm_test_nr_pages);
kvfree(pin_longterm_test_pages);
pin_longterm_test_pages = NULL;
pin_longterm_test_nr_pages = 0;
}
}
static inline int pin_longterm_test_start(unsigned long arg)
{
long nr_pages, cur_pages, addr, remaining_pages;
int gup_flags = FOLL_LONGTERM;
struct pin_longterm_test args;
struct page **pages;
int ret = 0;
bool fast;
if (pin_longterm_test_pages)
return -EINVAL;
if (copy_from_user(&args, (void __user *)arg, sizeof(args)))
return -EFAULT;
if (args.flags &
~(PIN_LONGTERM_TEST_FLAG_USE_WRITE|PIN_LONGTERM_TEST_FLAG_USE_FAST))
return -EINVAL;
if (!IS_ALIGNED(args.addr | args.size, PAGE_SIZE))
return -EINVAL;
if (args.size > LONG_MAX)
return -EINVAL;
nr_pages = args.size / PAGE_SIZE;
if (!nr_pages)
return -EINVAL;
pages = kvcalloc(nr_pages, sizeof(void *), GFP_KERNEL);
if (!pages)
return -ENOMEM;
if (args.flags & PIN_LONGTERM_TEST_FLAG_USE_WRITE)
gup_flags |= FOLL_WRITE;
fast = !!(args.flags & PIN_LONGTERM_TEST_FLAG_USE_FAST);
if (!fast && mmap_read_lock_killable(current->mm)) {
kvfree(pages);
return -EINTR;
}
pin_longterm_test_pages = pages;
pin_longterm_test_nr_pages = 0;
while (nr_pages - pin_longterm_test_nr_pages) {
remaining_pages = nr_pages - pin_longterm_test_nr_pages;
addr = args.addr + pin_longterm_test_nr_pages * PAGE_SIZE;
if (fast)
cur_pages = pin_user_pages_fast(addr, remaining_pages,
gup_flags, pages);
else
cur_pages = pin_user_pages(addr, remaining_pages,
gup_flags, pages);
if (cur_pages < 0) {
pin_longterm_test_stop();
ret = cur_pages;
break;
}
pin_longterm_test_nr_pages += cur_pages;
pages += cur_pages;
}
if (!fast)
mmap_read_unlock(current->mm);
return ret;
}
static inline int pin_longterm_test_read(unsigned long arg)
{
__u64 user_addr;
unsigned long i;
if (!pin_longterm_test_pages)
return -EINVAL;
if (copy_from_user(&user_addr, (void __user *)arg, sizeof(user_addr)))
return -EFAULT;
for (i = 0; i < pin_longterm_test_nr_pages; i++) {
void *addr = kmap_local_page(pin_longterm_test_pages[i]);
unsigned long ret;
ret = copy_to_user((void __user *)(unsigned long)user_addr, addr,
PAGE_SIZE);
kunmap_local(addr);
if (ret)
return -EFAULT;
user_addr += PAGE_SIZE;
}
return 0;
}
static long pin_longterm_test_ioctl(struct file *filep, unsigned int cmd,
unsigned long arg)
{
int ret = -EINVAL;
if (mutex_lock_killable(&pin_longterm_test_mutex))
return -EINTR;
switch (cmd) {
case PIN_LONGTERM_TEST_START:
ret = pin_longterm_test_start(arg);
break;
case PIN_LONGTERM_TEST_STOP:
pin_longterm_test_stop();
ret = 0;
break;
case PIN_LONGTERM_TEST_READ:
ret = pin_longterm_test_read(arg);
break;
}
mutex_unlock(&pin_longterm_test_mutex);
return ret;
}
static long gup_test_ioctl(struct file *filep, unsigned int cmd,
unsigned long arg)
{
struct gup_test gup;
int ret;
switch (cmd) {
case GUP_FAST_BENCHMARK:
case PIN_FAST_BENCHMARK:
case PIN_LONGTERM_BENCHMARK:
case GUP_BASIC_TEST:
case PIN_BASIC_TEST:
case DUMP_USER_PAGES_TEST:
break;
case PIN_LONGTERM_TEST_START:
case PIN_LONGTERM_TEST_STOP:
case PIN_LONGTERM_TEST_READ:
return pin_longterm_test_ioctl(filep, cmd, arg);
default:
return -EINVAL;
}
if (copy_from_user(&gup, (void __user *)arg, sizeof(gup)))
return -EFAULT;
ret = __gup_test_ioctl(cmd, &gup);
if (ret)
return ret;
if (copy_to_user((void __user *)arg, &gup, sizeof(gup)))
return -EFAULT;
return 0;
}
static int gup_test_release(struct inode *inode, struct file *file)
{
pin_longterm_test_stop();
return 0;
}
static const struct file_operations gup_test_fops = {
.open = nonseekable_open,
.unlocked_ioctl = gup_test_ioctl,
.compat_ioctl = compat_ptr_ioctl,
.release = gup_test_release,
};
static int __init gup_test_init(void)
{
debugfs_create_file_unsafe("gup_test", 0600, NULL, NULL,
&gup_test_fops);
return 0;
}
late_initcall(gup_test_init); | c | github | https://github.com/torvalds/linux | mm/gup_test.c |
"""local_session_caching.py
Grok everything so far ? This example
creates a new dogpile.cache backend that will persist data in a dictionary
which is local to the current session. remove() the session
and the cache is gone.
Create a new Dogpile cache backend that will store
cached data local to the current Session.
This is an advanced example which assumes familiarity
with the basic operation of CachingQuery.
"""
from dogpile.cache.api import CacheBackend, NO_VALUE
from dogpile.cache.region import register_backend
class ScopedSessionBackend(CacheBackend):
"""A dogpile backend which will cache objects locally on
the current session.
When used with the query_cache system, the effect is that the objects
in the cache are the same as those within the session - the merge()
is a formality that doesn't actually create a second instance.
This makes it safe to use for updates of data from an identity
perspective (still not ideal for deletes though).
When the session is removed, the cache is gone too, so the cache
is automatically disposed upon session.remove().
"""
def __init__(self, arguments):
self.scoped_session = arguments['scoped_session']
def get(self, key):
return self._cache_dictionary.get(key, NO_VALUE)
def set(self, key, value):
self._cache_dictionary[key] = value
def delete(self, key):
self._cache_dictionary.pop(key, None)
@property
def _cache_dictionary(self):
"""Return the cache dictionary linked to the current Session."""
sess = self.scoped_session()
try:
cache_dict = sess._cache_dictionary
except AttributeError:
sess._cache_dictionary = cache_dict = {}
return cache_dict
register_backend("sqlalchemy.session", __name__, "ScopedSessionBackend")
if __name__ == '__main__':
from .environment import Session, regions
from .caching_query import FromCache
from dogpile.cache import make_region
# set up a region based on the ScopedSessionBackend,
# pointing to the scoped_session declared in the example
# environment.
regions['local_session'] = make_region().configure(
'sqlalchemy.session',
arguments={
"scoped_session": Session
}
)
from .model import Person
# query to load Person by name, with criterion
# of "person 10"
q = Session.query(Person).\
options(FromCache("local_session")).\
filter(Person.name == "person 10")
# load from DB
person10 = q.one()
# next call, the query is cached.
person10 = q.one()
# clear out the Session. The "_cache_dictionary" dictionary
# disappears with it.
Session.remove()
# query calls from DB again
person10 = q.one()
# identity is preserved - person10 is the *same* object that's
# ultimately inside the cache. So it is safe to manipulate
# the not-queried-for attributes of objects when using such a
# cache without the need to invalidate - however, any change
# that would change the results of a cached query, such as
# inserts, deletes, or modification to attributes that are
# part of query criterion, still require careful invalidation.
cache, key = q._get_cache_plus_key()
assert person10 is cache.get(key)[0] | unknown | codeparrot/codeparrot-clean | ||
# (c) 2014, Brian Coca, Josh Drake, et al
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
from ansible.plugins.cache.base import BaseCacheModule
class CacheModule(BaseCacheModule):
def __init__(self, *args, **kwargs):
self._cache = {}
def get(self, key):
return self._cache.get(key)
def set(self, key, value):
self._cache[key] = value
def keys(self):
return self._cache.keys()
def contains(self, key):
return key in self._cache
def delete(self, key):
del self._cache[key]
def flush(self):
self._cache = {}
def copy(self):
return self._cache.copy()
def __getstate__(self):
return self.copy()
def __setstate__(self, data):
self._cache = data | unknown | codeparrot/codeparrot-clean | ||
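The plugin above is a thin wrapper over a plain dict, with `__getstate__`/`__setstate__` delegating pickling to that dict. A minimal standalone sketch (a hypothetical `MemoryCache`, independent of Ansible's `BaseCacheModule`) shows the same behavior, including the pickle round-trip that the dunder methods enable:

```python
import pickle


class MemoryCache:
    """Hypothetical standalone sketch of the dict-backed cache above."""

    def __init__(self):
        self._cache = {}

    def get(self, key):
        return self._cache.get(key)

    def set(self, key, value):
        self._cache[key] = value

    def contains(self, key):
        return key in self._cache

    def flush(self):
        self._cache = {}

    # Delegating pickling to a plain dict keeps cached facts serializable.
    def __getstate__(self):
        return self._cache.copy()

    def __setstate__(self, data):
        self._cache = data


cache = MemoryCache()
cache.set("host1", {"ansible_os_family": "Debian"})
clone = pickle.loads(pickle.dumps(cache))
print(clone.get("host1"))  # the round-trip preserves the cached dict
```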
/*
* Copyright 2012-present the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.boot.buildpack.platform.docker.type;
import org.junit.jupiter.api.Test;
import static org.assertj.core.api.Assertions.assertThat;
import static org.assertj.core.api.Assertions.assertThatIllegalArgumentException;
/**
* Tests for {@link VolumeName}.
*
* @author Phillip Webb
*/
class VolumeNameTests {
@Test
@SuppressWarnings("NullAway") // Test null check
void randomWhenPrefixIsNullThrowsException() {
assertThatIllegalArgumentException().isThrownBy(() -> VolumeName.random(null))
.withMessage("'prefix' must not be null");
}
@Test
void randomGeneratesRandomString() {
VolumeName v1 = VolumeName.random("abc-");
VolumeName v2 = VolumeName.random("abc-");
assertThat(v1.toString()).startsWith("abc-").hasSize(14);
assertThat(v2.toString()).startsWith("abc-").hasSize(14);
assertThat(v1).isNotEqualTo(v2);
assertThat(v1.toString()).isNotEqualTo(v2.toString());
}
@Test
void randomStringWithLengthGeneratesRandomString() {
VolumeName v1 = VolumeName.random("abc-", 20);
VolumeName v2 = VolumeName.random("abc-", 20);
assertThat(v1.toString()).startsWith("abc-").hasSize(24);
assertThat(v2.toString()).startsWith("abc-").hasSize(24);
assertThat(v1).isNotEqualTo(v2);
assertThat(v1.toString()).isNotEqualTo(v2.toString());
}
@Test
@SuppressWarnings("NullAway") // Test null check
void basedOnWhenSourceIsNullThrowsException() {
assertThatIllegalArgumentException().isThrownBy(() -> VolumeName.basedOn(null, "prefix", "suffix", 6))
.withMessage("'source' must not be null");
}
@Test
@SuppressWarnings("NullAway") // Test null check
void basedOnWhenNameExtractorIsNullThrowsException() {
assertThatIllegalArgumentException().isThrownBy(() -> VolumeName.basedOn("test", null, "prefix", "suffix", 6))
.withMessage("'nameExtractor' must not be null");
}
@Test
@SuppressWarnings("NullAway") // Test null check
void basedOnWhenPrefixIsNullThrowsException() {
assertThatIllegalArgumentException().isThrownBy(() -> VolumeName.basedOn("test", null, "suffix", 6))
.withMessage("'prefix' must not be null");
}
@Test
@SuppressWarnings("NullAway") // Test null check
void basedOnWhenSuffixIsNullThrowsException() {
assertThatIllegalArgumentException().isThrownBy(() -> VolumeName.basedOn("test", "prefix", null, 6))
.withMessage("'suffix' must not be null");
}
@Test
void basedOnGeneratesHashBasedName() {
VolumeName name = VolumeName.basedOn("index.docker.io/library/myapp:latest", "pack-cache-", ".build", 6);
assertThat(name).hasToString("pack-cache-40a311b545d7.build");
}
@Test
void basedOnWhenSizeIsTooBigThrowsException() {
assertThatIllegalArgumentException().isThrownBy(() -> VolumeName.basedOn("name", "prefix", "suffix", 33))
.withMessage("'digestLength' must be less than or equal to 32");
}
@Test
@SuppressWarnings("NullAway") // Test null check
void ofWhenValueIsNullThrowsException() {
assertThatIllegalArgumentException().isThrownBy(() -> VolumeName.of(null))
.withMessage("'value' must not be null");
}
@Test
void ofGeneratesValue() {
VolumeName name = VolumeName.of("test");
assertThat(name).hasToString("test");
}
@Test
void equalsAndHashCode() {
VolumeName n1 = VolumeName.of("test1");
VolumeName n2 = VolumeName.of("test1");
VolumeName n3 = VolumeName.of("test2");
assertThat(n1).hasSameHashCodeAs(n2);
assertThat(n1).isEqualTo(n1).isEqualTo(n2).isNotEqualTo(n3);
}
} | java | github | https://github.com/spring-projects/spring-boot | buildpack/spring-boot-buildpack-platform/src/test/java/org/springframework/boot/buildpack/platform/docker/type/VolumeNameTests.java |
# -*- coding: utf-8 -*-
# vi:si:et:sw=4:sts=4:ts=4
##
## Copyright (C) 2013 Async Open Source
##
## This program is free software; you can redistribute it and/or
## modify it under the terms of the GNU Lesser General Public License
## as published by the Free Software Foundation; either version 2
## of the License, or (at your option) any later version.
##
## This program is distributed in the hope that it will be useful,
## but WITHOUT ANY WARRANTY; without even the implied warranty of
## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
## GNU Lesser General Public License for more details.
##
## You should have received a copy of the GNU Lesser General Public License
## along with this program; if not, write to the Free Software
## Foundation, Inc., or visit: http://www.gnu.org/.
##
##
## Author(s): Stoq Team <stoq-devel@async.com.br>
##
"""Utilities for manipulating strings"""
def _increment(value):
# Make sure the new value is at least the same size the old one was.
# For example, this will make '009' become '010' instead of just '10'
return unicode(int(value) + 1).zfill(len(value))
def next_value_for(value):
"""Generate the next value for value.
For instance 4 -> 5, 99 -> 100, A83 -> A84 etc::
>>> next_value_for(u'999')
u'1000'
>>> next_value_for(u'1')
u'2'
>>> next_value_for(u'abc')
u'abd'
>>> next_value_for(u'XYZ')
u'XZ0'
>>> next_value_for(u'AB00099')
u'AB00100'
:param unicode value:
:returns:
:rtype: unicode
"""
if not value:
return u'1'
if value.isdigit():
return _increment(value)
last = value[-1]
if last.isdigit():
l = u''
# Get the greatest part in the string's end that is a number.
# For instance: 'ABC123' will get '123'
for c in reversed(value):
if not c.isdigit():
break
l = c + l
value = value[:-len(l)] + _increment(l)
elif last.isalpha():
last = chr(ord(last) + 1)
if last.isalpha():
value = value[:-1] + last
else:
value_len = len(value)
value = next_value_for(value[:-1])
# If the next_value_for didn't increase the string length, we
# need to. For instance: 'ABZ' would make the line above return
# 'AC' and thus the next value for the sequence is 'AC0'. It should
# be fine for '99Z' because it would generate '100'
if len(value) <= value_len:
value += u'0'
else:
value += u'0'
return value
def max_value_for(values):
"""Get the maximum value from the values
Python compares strings from left to right and thus comparisons
like '9' > '10' would be true.
This avoids that problem by zero-padding the strings to the same length
of the longest string on the sequence. Because of that, the return value
will be in that format. For instance::
>>> max_value_for([u'1', u'2'])
u'2'
>>> max_value_for([u'99', u'100'])
u'100'
>>> max_value_for([u'99', u'0001'])
u'0099'
:param values: a sequence of strings
:returns: the greatest string on the sequence
"""
max_length = max(len(v) for v in values)
return max(v.zfill(max_length) for v in values) | unknown | codeparrot/codeparrot-clean | ||
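The helpers above are Python 2 (`unicode`). A standalone Python 3 sketch mirroring the same increment logic and the documented examples:

```python
def _increment(value: str) -> str:
    # Preserve zero-padding: '009' -> '010', not '10'.
    return str(int(value) + 1).zfill(len(value))


def next_value_for(value: str) -> str:
    """Python 3 sketch of the sequence logic above."""
    if not value:
        return '1'
    if value.isdigit():
        return _increment(value)
    last = value[-1]
    if last.isdigit():
        # Increment the trailing run of digits, e.g. 'AB00099' -> 'AB00100'.
        tail = ''
        for c in reversed(value):
            if not c.isdigit():
                break
            tail = c + tail
        return value[:-len(tail)] + _increment(tail)
    if last.isalpha():
        nxt = chr(ord(last) + 1)
        if nxt.isalpha():
            return value[:-1] + nxt
        # Past 'Z': recurse on the prefix, then pad (mirrors the original
        # length check), e.g. 'XYZ' -> 'XZ0'.
        head = next_value_for(value[:-1])
        return head + '0' if len(head) <= len(value) else head
    return value + '0'


def max_value_for(values):
    # Zero-pad before comparing so '100' beats '99'.
    width = max(len(v) for v in values)
    return max(v.zfill(width) for v in values)


print(next_value_for('AB00099'))  # AB00100
print(max_value_for(['99', '0001']))  # 0099
```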
from coloredcoinlib import (ColorSet, IncompatibleTypesError, InvalidValueError,
SimpleColorValue, ColorValue)
from coloredcoinlib.comparable import ComparableMixin
from decimal import Decimal
from functools import reduce  # harmless on Python 2; required on Python 3
class AssetDefinition(object):
"""Stores the definition of a particular asset, including its color set,
it's name (moniker), and the wallet model that represents it.
"""
def __init__(self, colormap, params):
"""Create an Asset for a color map <colormap> and configuration
<params>. Note params has the color definitions used for this
Asset.
"""
self.colormap = colormap
self.monikers = params.get('monikers', [])
# currently only single-color assets are supported
assert len(params.get('color_set')) == 1
self.color_set = ColorSet(colormap, params.get('color_set'))
self.unit = int(params.get('unit', 1))
def __repr__(self):
return "%s: %s" % (self.monikers, self.color_set)
def get_id(self):
return self.color_set.get_color_hash()
def get_all_ids(self):
return [self.get_id()]
def get_monikers(self):
"""Returns the list of monikers for this asset.
"""
return self.monikers
def get_color_id(self):
return list(self.get_color_set().color_id_set)[0]
def has_color_id(self, color_id):
return self.get_color_set().has_color_id(color_id)
def get_color_set(self):
"""Returns the list of colors for this asset.
"""
return self.color_set
def get_color_def(self):
color_set = self.get_color_set()
assert len(color_set.color_desc_list) == 1
return self.colormap.get_color_def(color_set.color_desc_list[0])
def get_null_colorvalue(self):
cd = self.get_color_def()
return SimpleColorValue(colordef=cd, value=0)
def get_colorvalue(self, utxo):
""" return colorvalue for a given utxo"""
if utxo.colorvalues:
for cv in utxo.colorvalues:
if self.has_color_id(cv.get_color_id()):
return cv
raise Exception("Cannot get colorvalue for UTXO!")
def validate_value(self, portion):
"""Returns True if the portion is an exact multiple of the Asset atoms.
"""
if isinstance(portion, ColorValue) or isinstance(portion, AssetValue):
portion = portion.get_value()
atom = Decimal("1") / Decimal(self.unit)
return Decimal(portion) % atom == Decimal("0")
def parse_value(self, portion):
"""Returns actual number of Satoshis for this Asset
given the <portion> of the asset.
"""
return int(Decimal(portion) * Decimal(self.unit))
def format_value(self, value):
"""Returns a string representation of the portion of the asset.
can involve rounding. doesn't display insignificant zeros
"""
if isinstance(value, ColorValue) or isinstance(value, AssetValue):
value = value.get_value()
return str(Decimal(value) / Decimal(self.unit))
def get_atom(self):
return self.format_value(1)
def get_data(self):
"""Returns a JSON-compatible object that represents this Asset
"""
return {
"monikers": self.monikers,
"assetid" : self.get_color_set().get_color_hash(),
"color_set": self.color_set.get_data(),
"unit": self.unit
}
class AssetValue(object):
def __init__(self, **kwargs):
self.asset = kwargs.pop('asset')
def get_kwargs(self):
kwargs = {}
kwargs['asset'] = self.get_asset()
return kwargs
def clone(self):
kwargs = self.get_kwargs()
return self.__class__(**kwargs)
def check_compatibility(self, other):
if self.get_color_set() != other.get_color_set():
raise IncompatibleTypesError
def get_asset(self):
return self.asset
def get_color_set(self):
return self.asset.get_color_set()
class AdditiveAssetValue(AssetValue, ComparableMixin):
def __init__(self, **kwargs):
super(AdditiveAssetValue, self).__init__(**kwargs)
self.value = kwargs.pop('value')
if not isinstance(self.value, int):
raise InvalidValueError('Value is not an int!')
def get_kwargs(self):
kwargs = super(AdditiveAssetValue, self).get_kwargs()
kwargs['value'] = self.get_value()
return kwargs
def get_value(self):
return self.value
def get_formatted_value(self):
return self.asset.format_value(self.get_value())
def __add__(self, other):
if isinstance(other, int) and other == 0:
return self
self.check_compatibility(other)
kwargs = self.get_kwargs()
kwargs['value'] = self.get_value() + other.get_value()
return self.__class__(**kwargs)
def __radd__(self, other):
return self + other
def __sub__(self, other):
if isinstance(other, int) and other == 0:
return self
self.check_compatibility(other)
kwargs = self.get_kwargs()
kwargs['value'] = self.get_value() - other.get_value()
return self.__class__(**kwargs)
def __iadd__(self, other):
self.check_compatibility(other)
self.value += other.value
return self
def __lt__(self, other):
self.check_compatibility(other)
return self.get_value() < other.get_value()
def __eq__(self, other):
if self.get_color_set() != other.get_color_set():
return False
else:
return self.get_value() == other.get_value()
def __gt__(self, other):
if isinstance(other, int) and other == 0:
return self.get_value() > 0
return other < self
def __repr__(self):
return "Asset Value: %s" % (self.get_value())
@classmethod
def sum(cls, items):
return reduce(lambda x,y:x + y, items)
class AssetTarget(object):
def __init__(self, address, assetvalue):
self.address = address
self.assetvalue = assetvalue
def get_asset(self):
return self.assetvalue.get_asset()
def get_color_set(self):
return self.assetvalue.get_color_set()
def get_address(self):
return self.address
def get_value(self):
return self.assetvalue.get_value()
def get_formatted_value(self):
return self.assetvalue.get_formatted_value()
def __repr__(self):
return "%s: %s" % (self.get_address(), self.assetvalue)
@classmethod
def sum(cls, targets):
if len(targets) == 0:
return 0
c = targets[0].assetvalue.__class__
return c.sum([t.assetvalue for t in targets])
class AssetDefinitionManager(object):
"""Manager for asset definitions. Useful for interacting with
various Assets.
"""
def __init__(self, colormap, config):
"""Given a color map <colormap> and a configuration <config>,
create a new asset definition manager.
"""
self.config = config
self.colormap = colormap
self.asset_definitions = []
self.lookup_by_moniker = {}
self.lookup_by_id = {}
for ad_params in config.get('asset_definitions', []):
self.register_asset_definition(
AssetDefinition(self.colormap, ad_params))
# add bitcoin as a definition
if "bitcoin" not in self.lookup_by_moniker:
btcdef = AssetDefinition(
self.colormap, {
"monikers": ["bitcoin"],
"color_set": [""],
"unit": 100000000,
})
self.lookup_by_moniker["bitcoin"] = btcdef
self.asset_definitions.append(btcdef)
self.update_config()
def register_asset_definition(self, assdef):
"""Given an asset definition <assdef> in JSON-compatible format,
register the asset with the manager. Note AssetDefinition's
get_data can be used to get this definition for persistence.
"""
self.asset_definitions.append(assdef)
for moniker in assdef.get_monikers():
if moniker in self.lookup_by_moniker:
msg = 'More than one asset definition has the same moniker!'
raise Exception(msg)
self.lookup_by_moniker[moniker] = assdef
for aid in assdef.get_all_ids():
if aid in self.lookup_by_id:
msg = 'More than one asset definition has the same id!'
raise Exception(msg)
self.lookup_by_id[aid] = assdef
def add_asset_definition(self, params):
"""Create a new asset with given <params>.
params needs the following:
monikers - list of names (e.g. ["red", "blue"])
color_set - list of color sets
(e.g. ["obc:f0bd5...a5:0:128649", "obc:a..0:0:147477"])
"""
assdef = AssetDefinition(self.colormap, params)
self.register_asset_definition(assdef)
self.update_config()
return assdef
def get_asset_by_moniker(self, moniker):
"""Given a color name <moniker>, return the actual Asset Definition
"""
return self.lookup_by_moniker.get(moniker)
def get_asset_by_id(self, asset_id):
return self.lookup_by_id.get(asset_id)
def find_asset_by_color_set(self, color_set):
assets = [asset
for asset in self.asset_definitions
if asset.color_set.intersects(color_set)]
assert len(assets) <= 1
if assets:
return assets[0]
else:
return None
def update_config(self):
"""Write the current asset definitions to the persistent data-store
"""
self.config['asset_definitions'] = \
[assdef.get_data() for assdef in self.asset_definitions]
def get_all_assets(self):
"""Returns a list of all assets managed by this manager.
"""
return self.asset_definitions
def get_asset_and_address(self, color_address):
"""Given a color address <color_address> return the asset
and bitcoin address associated with the address. If the color
described in the address isn't managed by this object,
throw an exception.
"""
if color_address.find('@') == -1:
return (self.lookup_by_moniker.get('bitcoin'), color_address)
color_set_hash, address = color_address.split('@')
asset = self.get_asset_by_id(color_set_hash)
if asset:
return (asset, address)
msg = "No asset has a color set with this : %s"
raise Exception(msg % color_set_hash)
def get_asset_by_color_id(self, colorid):
colorset = ColorSet.from_color_ids(self.colormap, [colorid])
asset = self.find_asset_by_color_set(colorset)
if not asset:
raise Exception('Asset not found!')
return asset
def get_assetvalue_for_assetid_value(self, assetid, value):
asset = self.get_asset_by_id(assetid)
return AdditiveAssetValue(asset=asset, value=value)
def get_assetvalue_for_colorid_value(self, colorid, colorvalue):
asset = self.get_asset_by_color_id(colorid)
return AdditiveAssetValue(asset=asset, value=colorvalue)
def get_assetvalue_for_colorvalue(self, colorvalue):
return self.get_assetvalue_for_colorid_value(
colorvalue.get_color_id(),
colorvalue.get_value()
) | unknown | codeparrot/codeparrot-clean | ||
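The `unit`/atom arithmetic in `AssetDefinition` (`validate_value`, `parse_value`, `format_value`) can be sketched standalone with `Decimal`; `UNIT` here is a hypothetical stand-in for an asset's configured unit (satoshi-like, as in the built-in bitcoin definition):

```python
from decimal import Decimal

UNIT = 100000000  # hypothetical: atoms per displayed asset unit


def validate_value(portion) -> bool:
    # True when portion is an exact multiple of one atom (1/UNIT).
    atom = Decimal(1) / Decimal(UNIT)
    return Decimal(portion) % atom == Decimal(0)


def parse_value(portion) -> int:
    # Display units -> integer atom count.
    return int(Decimal(portion) * Decimal(UNIT))


def format_value(value) -> str:
    # Atom count -> display string without insignificant zeros.
    return str(Decimal(value) / Decimal(UNIT))


print(parse_value("1.5"))             # 150000000
print(format_value(150000000))        # 1.5
print(validate_value("0.000000001"))  # False: finer than one atom
```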
/*
* Copyright 2012-present the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.boot.testcontainers.service.connection;
import org.springframework.boot.autoconfigure.service.connection.ConnectionDetails;
public interface DatabaseConnectionDetails extends ConnectionDetails {
String getJdbcUrl();
} | java | github | https://github.com/spring-projects/spring-boot | core/spring-boot-testcontainers/src/test/java/org/springframework/boot/testcontainers/service/connection/DatabaseConnectionDetails.java |
from pyramid.renderers import render
def includeme(config):
config.add_renderer(name='chart',
factory='chartfood.InlineChartRenderer')
config.add_renderer(name='chart_response',
factory='chartfood.ChartResponseRenderer')
class ChartRenderer(object):
def cache_get(self, args):
try:
return args['cache'].get(args['datasource_url'])
except KeyError:
return None
def cache_set(self, args, data):
try:
args['cache'].put(args['datasource_url'], data)
except KeyError:
return
class InlineChartRenderer(ChartRenderer):
def __init__(self, info):
self._info = info
def __call__(self, args, system):
if isinstance(args, dict):
tpl_vars = args.copy()
cached_data = self.cache_get(tpl_vars)
if cached_data:
tpl_vars['data_table'] = cached_data
else:
tpl_vars = {'data_table': args}
cached_data = None
tpl_vars.setdefault('container_id', 'chart')
if 'data_table' in tpl_vars:
data = tpl_vars['data_table'].ToJSon().decode('utf-8')
if not cached_data:
self.cache_set(tpl_vars, tpl_vars['data_table'])
tpl_vars['data_line'] = "dataTable: {}".format(data)
elif 'datasource_url' in tpl_vars:
tpl_vars['data_line'] = "dataSourceUrl: '{}'".format(
tpl_vars['datasource_url'])
return render('chartfood:templates/inline_chart.pt', tpl_vars)
class ChartResponseRenderer(ChartRenderer):
def __init__(self, info):
self._info = info
def __call__(self, args, system):
if not isinstance(args, dict):
args = {'data_table': args}
args.setdefault('datasource_url', system['request'].path_url)
self.cache_set(args, args['data_table'])
tqx = system['request'].GET.get('tqx')
return args['data_table'].ToResponse(tqx=tqx) | unknown | codeparrot/codeparrot-clean | ||
'''
Contains the plotting tools portion of the analysis toolbox.
Note: There is an equivalent file for analysis v2; include your new code there,
unless it is only intended for analysis v1.
'''
import lmfit
import matplotlib.pyplot as plt
import matplotlib
from matplotlib import cm
import numpy as np
import matplotlib.colors as col
import hsluv
from scipy.interpolate import interp1d
from matplotlib.patches import Rectangle, ConnectionPatch
golden_mean = (np.sqrt(5)-1.0)/2.0 # Aesthetic ratio
single_col_figsize = (3.39, golden_mean*3.39)
double_col_figsize = (6.9, golden_mean*6.9)
thesis_col_figsize = (12.2/2.54, golden_mean*12.2/2.54)
def set_xlabel(axis, label, unit=None, latexify_ticks=False, **kw):
"""
Add a unit aware x-label to an axis object.
Args:
axis: matplotlib axis object to set label on
label: the desired label
unit: the unit
**kw : keyword argument to be passed to matplotlib.set_xlabel
"""
if unit is not None and unit != '':
xticks = axis.get_xticks()
scale_factor, unit = SI_prefix_and_scale_factor(
val=max(abs(xticks)), unit=unit)
tick_str = '{:.4g}' if not latexify_ticks else r'${:.4g}$'
formatter = matplotlib.ticker.FuncFormatter(
lambda x, pos: tick_str.format(x * scale_factor))
axis.xaxis.set_major_formatter(formatter)
axis.set_xlabel(label + ' ({})'.format(unit), **kw)
else:
axis.set_xlabel(label, **kw)
return axis
def set_ylabel(axis, label, unit=None, latexify_ticks=False, **kw):
"""
Add a unit aware y-label to an axis object.
Args:
axis: matplotlib axis object to set label on
label: the desired label
unit: the unit
**kw : keyword argument to be passed to matplotlib.set_ylabel
"""
if unit is not None and unit != '':
yticks = axis.get_yticks()
scale_factor, unit = SI_prefix_and_scale_factor(
val=max(abs(yticks)), unit=unit)
tick_str = '{:.6g}' if not latexify_ticks else r'${:.6g}$'
formatter = matplotlib.ticker.FuncFormatter(
lambda x, pos: tick_str.format(x * scale_factor))
axis.yaxis.set_major_formatter(formatter)
axis.set_ylabel(label + ' ({})'.format(unit), **kw)
else:
axis.set_ylabel(label, **kw)
return axis
def set_cbarlabel(cbar, label, unit=None, **kw):
"""
Add a unit aware z-label to a colorbar object
Args:
cbar: colorbar object to set label on
label: the desired label
unit: the unit
**kw : keyword argument to be passed to cbar.set_label
"""
if unit is not None and unit != '':
zticks = cbar.get_ticks()
scale_factor, unit = SI_prefix_and_scale_factor(
val=max(abs(zticks)), unit=unit)
cbar.set_ticks(zticks)
cbar.set_ticklabels(zticks*scale_factor)
cbar.set_label(label + ' ({})'.format(unit))
else:
cbar.set_label(label, **kw)
return cbar
SI_PREFIXES = dict(zip(range(-24, 25, 3), 'yzafpnμm kMGTPEZY'))
SI_PREFIXES[0] = ""
# N.B. not all of these are SI units, however, all of these support SI prefixes
SI_UNITS = r'm,s,g,W,J,V,A,F,T,Hz,Ohm,S,N,C,px,b,B,K,Bar,Vpeak,Vpp,Vp,Vrms,$\Phi_0$,A/s'.split(
',')
def SI_prefix_and_scale_factor(val, unit=None):
"""
Takes in a value and unit and if applicable returns the proper
scale factor and SI prefix.
Args:
val (float) : the value
unit (str) : the unit of the value
returns:
scale_factor (float) : scale_factor needed to convert value
unit (str) : unit including the prefix
"""
if unit in SI_UNITS:
try:
with np.errstate(all="ignore"):
prefix_power = np.log10(abs(val))//3 * 3
prefix = SI_PREFIXES[prefix_power]
# Greek symbols not supported in tex
if plt.rcParams['text.usetex'] and prefix == 'μ':
prefix = r'$\mu$'
return 10 ** -prefix_power, prefix + unit
except (KeyError, TypeError):
pass
return 1, unit if unit is not None else ""
def SI_val_to_msg_str(val: float, unit: str=None, return_type=str):
"""
Takes in a value with optional unit and returns a string tuple consisting
of (value_str, unit) where the value and unit are rescaled according to
SI prefixes, IF the unit is an SI unit (according to the comprehensive list
of SI units in this file ;).
the value_str is of the type specified in return_type (str) by default.
"""
sc, new_unit = SI_prefix_and_scale_factor(val, unit)
try:
new_val = sc*val
except TypeError:
return return_type(val), unit
return return_type(new_val), new_unit
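`SI_prefix_and_scale_factor` picks the prefix from the power-of-1000 bucket of the value's magnitude. A standalone sketch using only the standard library (the clamp to ±24 is an added safeguard, not in the original):

```python
import math

# Same prefix table as above: one symbol per power-of-1000 step.
PREFIXES = dict(zip(range(-24, 25, 3), 'yzafpnμm kMGTPEZY'))
PREFIXES[0] = ''  # no prefix at 10**0


def si_scale(val, unit):
    """Return (scale_factor, prefixed_unit) so that val * scale_factor
    lands in [1, 1000) where possible."""
    if val == 0:
        return 1, unit
    power = math.floor(math.log10(abs(val)) / 3) * 3
    power = max(-24, min(24, power))  # stay inside the known prefixes
    return 10 ** -power, PREFIXES[power] + unit


print(si_scale(0.00032, 'V'))  # (1000000, 'μV') -> 320 μV
print(si_scale(2.1e9, 'Hz'))   # scale ≈ 1e-9, unit 'GHz' -> 2.1 GHz
```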
def format_lmfit_par(par_name: str, lmfit_par, end_char=''):
"""Format an lmfit par to a string of value with uncertainty."""
val_string = par_name
val_string += ': {:.4f}'.format(lmfit_par.value)
if lmfit_par.stderr is not None:
val_string += r'$\pm$' + '{:.4f}'.format(lmfit_par.stderr)
else:
val_string += r'$\pm$' + 'NaN'
val_string += end_char
return val_string
def data_to_table_png(data: list, filename: str, title: str='',
close_fig: bool=True):
"""
Takes in a list of list containing the data to be
put in a table and saves this as a png.
"""
# Determine the shape of the table
nrows, ncols = np.shape(data)
hcell, wcell = 0.3, 2.
hpad, wpad = 0.5, 0
fig = plt.figure(figsize=(ncols*wcell+wpad, nrows*hcell+hpad))
ax = fig.add_subplot(111)
ax.axis('off')
# make the table
table = ax.table(cellText=data,
loc='center')
# rescale to make it more readable
table.scale(1, 1.5)
ax.set_title(title)
fig.tight_layout()
plt.savefig(filename, dpi=450)
if close_fig:
plt.close(fig)
def annotate_point_pair(ax, text, xy_start, xy_end, xycoords='data',
text_offset=(-10, -5), arrowprops=None, **kw):
'''
Annotates two points by connecting them with an arrow.
The annotation text is placed near the center of the arrow.
Function copied from "http://stackoverflow.com/questions/14612637/
plotting-distance-arrows-in-technical-drawing/32522399#32522399"
Modified by Adriaan to allows specifying offset of text in two directions.
'''
if arrowprops is None:
arrowprops = dict(arrowstyle='<->')
assert isinstance(text, str)
xy_text = ((xy_start[0] + xy_end[0])/2., (xy_start[1] + xy_end[1])/2.)
arrow_vector = xy_end[0]-xy_start[0] + (xy_end[1] - xy_start[1]) * 1j
arrow_angle = np.angle(arrow_vector)
text_angle = arrow_angle - 0.5*np.pi
ax.annotate(
'', xy=xy_end, xycoords=xycoords,
xytext=xy_start, textcoords=xycoords,
arrowprops=arrowprops, **kw)
label = ax.annotate(
text,
xy=xy_text,
xycoords=xycoords,
xytext=(text_offset[0] * np.cos(text_angle) +
text_offset[1] * np.sin(text_angle),
text_offset[0] * np.sin(text_angle) +
text_offset[1] * np.cos(text_angle)),
textcoords='offset points', **kw)
return label
def get_color_order(i, max_num, cmap='viridis'):
# take a blue to red scale from 0 to max_num
# uses HSV system, H_red = 0, H_green = 1/3 H_blue=2/3
# return colors.hsv_to_rgb(2.*float(i)/(float(max_num)*3.), 1., 1.)
print('It is recommended to use the updated function "get_color_cycle".')
if isinstance(cmap, str):
cmap = cm.get_cmap(cmap)
return cmap((i/max_num) % 1)
def get_color_from_cmap(i, max_num):
pass
def plot_lmfit_res(fit_res, ax, plot_init: bool=False,
plot_numpoints: int=1000,
plot_kw: dict ={}, plot_init_kw: dict = {}, **kw):
"""
Plot the result of an lmfit optimization.
Args:
fit_res: lmfit result object.
ax: matplotlib axis object to plot on.
plot_init: if True plots the initial guess of the fit.
plot_numpoints: number of points to use for interpolating the fit.
plot_kw: dictionary of options to pass to the plot of the fit.
plot_init_kw: dictionary of options to pass to the plot of the
initial guess.
**kw: keyword arguments, unused; present only to match the call signature.
Return:
axis : Returns matplotlib axis object on which the plot
was performed.
"""
if hasattr(fit_res, 'model'):
model = fit_res.model
# Testing input
if not (isinstance(model, lmfit.model.Model) or
isinstance(model, lmfit.model.ModelResult)):
raise TypeError(
'The passed item in "fit_res" needs to be'
' a fitting model, but is {}'.format(type(model)))
if len(model.independent_vars) == 1:
independent_var = model.independent_vars[0]
else:
raise ValueError('Fit can only be plotted if the model function'
' has one independent variable.')
x_arr = fit_res.userkws[independent_var]
xvals = np.linspace(np.min(x_arr), np.max(x_arr),
plot_numpoints)
yvals = model.eval(fit_res.params,
**{independent_var: xvals})
if plot_init:
yvals_init = model.eval(fit_res.init_params,
**{independent_var: xvals})
else: # case for the minimizer fit
# testing input
fit_xvals = fit_res.userkws
if len(fit_xvals.keys()) == 1:
independent_var = list(fit_xvals.keys())[0]
else:
raise ValueError('Fit can only be plotted if the model function'
' has one independent variable.')
x_arr = fit_res.userkws[independent_var]
xvals = np.linspace(np.min(x_arr), np.max(x_arr),
plot_numpoints)
fit_fn = fit_res.fit_fn
yvals = fit_fn(**fit_res.params,
**{independent_var: xvals})
if plot_init:
yvals_init = fit_fn(**fit_res.init_params,
**{independent_var: xvals})
    # actual plotting
ax.plot(xvals, yvals, **plot_kw)
if plot_init:
ax.plot(xvals, yvals_init, **plot_init_kw)
return ax
def flex_color_plot_vs_x(xvals, yvals, zvals, ax=None,
xwidth=None,
normalize=False, log=False,
save_name=None,
cmap='viridis',
clim=[None, None],
alpha=1,
**kw):
"""
Display a color figure for something like a tracked DAC sweep.
xvals should be a single vector with values for the primary sweep.
yvals and zvals should be a list of arrays with the sweep points and
measured values.
"""
# create a figure and set of axes
if ax is None:
fig = plt.figure(figsize=(12, 7))
ax = fig.add_subplot(111)
# calculate coordinates for corners of color blocks
# x coordinates
if xwidth is None:
xvals = np.array(xvals)
xvertices = np.zeros(np.array(xvals.shape)+1)
dx = abs(np.max(xvals)-np.min(xvals))/len(xvals)
xvertices[1:-1] = (xvals[:-1]+xvals[1:])/2.
xvertices[0] = xvals[0] - dx/2
xvertices[-1] = xvals[-1] + dx/2
else:
xvertices = []
for xval in xvals:
xvertices.append(xval+np.array([-0.5, 0.5])*xwidth)
# y coordinates
yvertices = []
for xx in range(len(xvals)):
# Important to sort arguments in case unsorted (e.g., FFT freqs)
sorted_yarguments = yvals[xx].argsort()
yvals[xx] = yvals[xx][sorted_yarguments]
zvals[xx] = zvals[xx][sorted_yarguments]
yvertices.append(np.zeros(np.array(yvals[xx].shape)+1))
yvertices[xx][1:-1] = (yvals[xx][:-1]+yvals[xx][1:])/2.
yvertices[xx][0] = yvals[xx][0] - (yvals[xx][1]-yvals[xx][0])/2
yvertices[xx][-1] = yvals[xx][-1] + (yvals[xx][-1]-yvals[xx][-2])/2
# normalized plot
if normalize:
zvals[xx] /= np.mean(zvals[xx])
# logarithmic plot
if log:
zvals[xx] = np.log(zvals[xx])/np.log(10)
# add blocks to plot
colormaps = []
for xx in range(len(xvals)):
tempzvals = np.array(
[np.append(zvals[xx], np.array(0)),
np.append(zvals[xx], np.array(0))]).transpose()
if xwidth is None:
colormaps.append(ax.pcolor(xvertices[xx:xx+2],
yvertices[xx],
tempzvals,
cmap=cmap, vmin=clim[0], vmax=clim[1],
alpha=alpha))
else:
colormaps.append(
ax.pcolor(xvertices[xx], yvertices[xx], tempzvals, cmap=cmap,
alpha=alpha))
return {'fig': ax.figure, 'ax': ax,
'cmap': colormaps[0], 'cmaps': colormaps}
def flex_colormesh_plot_vs_xy(xvals, yvals, zvals, ax=None,
normalize=False, log=False,
save_name=None, **kw):
"""
Add a rectangular block to a color plot using pcolormesh.
xvals and yvals should be single vectors with values for the
two sweep points.
zvals should be a list of arrays with the measured values with shape
(len(yvals), len(xvals)).
**grid-orientation**
The grid orientation for the zvals is the same as is used in
ax.pcolormesh.
Note that the column index corresponds to the x-coordinate,
and the row index corresponds to y.
This can be counterintuitive: zvals(y_idx, x_idx)
and can be inconsistent with some arrays of zvals
(such as a 2D histogram from numpy).
"""
xvals = np.array(xvals)
yvals = np.array(yvals)
    # First, we need to sort the data, as unsorted axes produce odd plotting
    # artifacts, e.g. when plotting a Fourier transform
sorted_x_arguments = xvals.argsort()
xvals = xvals[sorted_x_arguments]
sorted_y_arguments = yvals.argsort()
yvals = yvals[sorted_y_arguments]
zvals = zvals[:, sorted_x_arguments]
zvals = zvals[sorted_y_arguments, :]
# create a figure and set of axes
if ax is None:
fig = plt.figure(figsize=(12, 7))
ax = fig.add_subplot(111)
# convert xvals and yvals to single dimension arrays
xvals = np.squeeze(np.array(xvals))
yvals = np.squeeze(np.array(yvals))
# calculate coordinates for corners of color blocks
# x coordinates
xvertices = np.zeros(np.array(xvals.shape)+1)
xvertices[1:-1] = (xvals[:-1]+xvals[1:])/2.
xvertices[0] = xvals[0] - (xvals[1]-xvals[0])/2
xvertices[-1] = xvals[-1] + (xvals[-1]-xvals[-2])/2
# y coordinates
yvertices = np.zeros(np.array(yvals.shape)+1)
yvertices[1:-1] = (yvals[:-1]+yvals[1:])/2.
yvertices[0] = yvals[0] - (yvals[1]-yvals[0])/2
yvertices[-1] = yvals[-1] + (yvals[-1]-yvals[-2])/2
xgrid, ygrid = np.meshgrid(xvertices, yvertices)
# various plot options
# define colormap
cmap = plt.get_cmap(kw.pop('cmap', 'viridis'))
clim = kw.pop('clim', [None, None])
# normalized plot
if normalize:
zvals /= np.mean(zvals, axis=0)
# logarithmic plot
    if log:
        # note: zvals has shape (len(yvals), len(xvals)); take the log of
        # the whole array rather than looping over len(xvals) rows
        zvals = np.log10(zvals)
# add blocks to plot
do_transpose = kw.pop('transpose', False)
if do_transpose:
colormap = ax.pcolormesh(ygrid.transpose(),
xgrid.transpose(),
zvals.transpose(),
cmap=cmap, vmin=clim[0], vmax=clim[1])
else:
colormap = ax.pcolormesh(xgrid, ygrid, zvals, cmap=cmap,
vmin=clim[0], vmax=clim[1])
return {'fig': ax.figure, 'ax': ax, 'cmap': colormap}
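# Example usage (hedged sketch; the variable names are illustrative):
# zvals must have shape (len(yvals), len(xvals)), i.e. row index = y and
# column index = x, matching ax.pcolormesh:
#     xvals = np.linspace(0, 1, 11)
#     yvals = np.linspace(0, 2, 21)
#     zvals = np.random.rand(21, 11)  # (len(yvals), len(xvals))
#     res = flex_colormesh_plot_vs_xy(xvals, yvals, zvals)
#     res['fig'].colorbar(res['cmap'], ax=res['ax'])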
def autolabel_barplot(ax, rects, rotation=90):
"""
    Attach a text label halfway up each bar displaying its height
"""
for rect in rects:
height = rect.get_height()
ax.text(rect.get_x() + rect.get_width()/2., 0.5*height,
'%.2f' % (height),
ha='center', va='bottom', rotation=rotation)
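# Example usage (hedged; assumes an existing matplotlib axis `ax`):
#     rects = ax.bar(range(3), [1.0, 2.5, 0.7])
#     autolabel_barplot(ax, rects)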
def set_axeslabel_color(ax, color):
'''
Ad hoc function to set the labels, ticks, ticklabels and title to a color.
This is useful when e.g., making a presentation on a dark background
'''
ax.tick_params(color=color, which='both') # both major and minor ticks
plt.setp(ax.get_xticklabels(), color=color)
plt.setp(ax.get_yticklabels(), color=color)
plt.setp(ax.yaxis.get_label(), color=color)
plt.setp(ax.xaxis.get_label(), color=color)
plt.setp(ax.title, color=color)
# generate custom colormaps
# Inspired by
# https://stackoverflow.com/questions/23712207/cyclic-colormap-without-visual-distortions-for-use-in-phase-angle-plots
def make_segmented_cmap():
white = '#ffffff'
black = '#000000'
red = '#ff0000'
blue = '#0000ff'
anglemap = col.LinearSegmentedColormap.from_list(
'anglemap', [black, red, white, blue, black], N=256, gamma=1)
return anglemap
def make_anglemap_colorlist(N=256, use_hpl=True):
hue = np.ones(N) # hue
hue[:N // 2] = 11.6 # red
hue[N // 2:] = 258.6 # blue
s = 100 # saturation
lum = np.linspace(0, 100, N // 2) # luminosity
lum = np.hstack((lum, lum[::-1]))
colorlist = np.zeros((N, 3))
for ii in range(N):
if use_hpl:
colorlist[ii, :] = hsluv.hpluv_to_rgb((hue[ii], s, lum[ii]))
else:
colorlist[ii, :] = hsluv.hsluv_to_rgb((hue[ii], s, lum[ii]))
colorlist[colorlist > 1] = 1 # correct numeric errors
colorlist[colorlist < 0] = 0
return colorlist
def make_anglemap(N=256, use_hpl=True):
colorlist = make_anglemap_colorlist(N=N, use_hpl=use_hpl)
return col.ListedColormap(colorlist)
hsluv_anglemap = make_anglemap(use_hpl=False)
def circ_interp(x, y_deg, kind='linear'):
phases = np.deg2rad(y_deg)
newdata_cos = np.cos(phases)
newdata_sin = np.sin(phases)
ip_cos = interp1d(x, newdata_cos, kind=kind)
ip_sin = interp1d(x, newdata_sin, kind=kind)
return lambda interp_at: np.rad2deg(np.arctan2(ip_sin(interp_at), ip_cos(interp_at))) % 360
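# Example (hedged): circular interpolation handles the 360->0 degree
# wrap-around that naive linear interpolation gets wrong. Halfway between
# 350 deg and 10 deg:
#     f = circ_interp([0.0, 1.0], [350.0, 10.0])
#     f(0.5)  # -> 0.0, whereas linear interpolation would give 180.0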
def make_anglemap45_colorlist(N=256, use_hpl=True):
col_space = 'hpluv' if use_hpl else 'hsluv'
colspace_to_rgb = getattr(hsluv, col_space + '_to_rgb')
rgb_to_colspace = getattr(hsluv, 'rgb_to_' + col_space)
black = [0., 0., 0.]
blue = [0.34, 0.86, 0.70]
violet = [0.34, 0.34, 0.86]
magenta = [0.86, 0.34, 0.86]
pink = [1.00, 0.90, 0.92]
red = [0.86, 0.34, 0.34]
yellow = [0.86, 0.86, 0.34]
green = [0.34, 0.86, 0.34]
rgb_list = [
black,
blue,
violet,
magenta,
pink,
red,
yellow,
green,
black
]
col_pos = np.linspace(0, 1, 9)
[hsl_hue, hsl_sat, hsl_lum] = np.array([rgb_to_colspace(np.array(rgb_col)) for rgb_col in rgb_list]).T
f_circ_interp = circ_interp(col_pos, hsl_hue)
f_hsl_sat = interp1d(col_pos, hsl_sat, kind='linear')
f_hsl_lum = interp1d(col_pos, hsl_lum, kind='linear')
pnts = np.linspace(0, 1, N)
new_col = [
f_circ_interp(pnts),
np.clip(f_hsl_sat(pnts), a_min=0, a_max=100),
np.clip(f_hsl_lum(pnts), a_min=0, a_max=100)
]
new_col = np.array([colspace_to_rgb(np.array(rgb_col)) for rgb_col in np.array(new_col).T])
new_col[new_col < 0] = 0
new_col[new_col > 1] = 1
return new_col
def make_anglemap45(N=256, use_hpl=True):
colorlist = make_anglemap45_colorlist(N=N, use_hpl=use_hpl)
return col.ListedColormap(colorlist)
hsluv_anglemap45 = make_anglemap45(use_hpl=False)
def plot_fit(xvals, fit_res, ax, **plot_kws):
"""
Evaluates a fit result at specified values to plot the fit.
"""
model = fit_res.model
independent_var = model.independent_vars[0]
yvals = model.eval(fit_res.params, **{independent_var: xvals})
ax.plot(xvals, yvals, **plot_kws)
def cmap_to_alpha(cmap):
"""
    Takes a cmap and makes the transparency (alpha) of the cmap
    increase linearly across its elements.
"""
my_cmap = cmap(np.arange(cmap.N))
# Set alpha
my_cmap[:, -1] = np.linspace(0, 1, cmap.N)
# Create new colormap
my_cmap = col.ListedColormap(my_cmap)
return my_cmap
def cmap_first_to_alpha(cmap):
"""
    Makes the first element of a cmap transparent.
"""
my_cmap = cmap(np.arange(cmap.N))
# Set alpha
my_cmap[0, -1] = 0
my_cmap[1:, -1] = 1
# Create new colormap
my_cmap = col.ListedColormap(my_cmap)
return my_cmap
def latexify(fig_width=None, fig_height=None, columns=1):
"""Set up matplotlib's RC params for LaTeX plotting.
Call this before plotting a figure.
Parameters
----------
fig_width : float, optional, inches
fig_height : float, optional, inches
columns : {1, 2}
"""
# code adapted from http://www.scipy.org/Cookbook/Matplotlib/LaTeX_Examples
# Width and max height in inches for IEEE journals taken from
# computer.org/cms/Computer.org/Journal%20templates/transactions_art_guide.pdf
assert(columns in [1, 2])
if fig_width is None:
fig_width = 3.39 if columns == 1 else 6.9 # width in inches
if fig_height is None:
fig_height = fig_width*golden_mean # height in inches
MAX_HEIGHT_INCHES = 8.0
if fig_height > MAX_HEIGHT_INCHES:
        print("WARNING: fig_height too large: %s, "
              "so will reduce to %s inches." % (fig_height, MAX_HEIGHT_INCHES))
fig_height = MAX_HEIGHT_INCHES
params = {'backend': 'ps',
'text.latex.preamble': [r'\usepackage{gensymb}'],
'axes.labelsize': 8, # fontsize for x and y labels (was 10)
'axes.titlesize': 8,
# 'text.fontsize': 8, # was 10
'legend.fontsize': 8, # was 10
'xtick.labelsize': 8,
'ytick.labelsize': 8,
'text.usetex': True,
'figure.figsize': [fig_width, fig_height],
'font.family': 'serif'
}
matplotlib.rcParams.update(params)
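# Example usage (hedged): call before creating any figures, e.g.
#     latexify(columns=2)   # IEEE double-column width, 6.9 in
#     fig, ax = plt.subplots()
# restore_default_plot_params() (defined below) undoes these rcParams changes.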
def lighten_color(color, amount=0.5):
"""
Lightens the given color by multiplying (1-luminosity) by the given amount.
Input can be matplotlib color string, hex string, or RGB tuple.
Examples:
>> lighten_color('g', 0.3)
>> lighten_color('#F034A3', 0.6)
>> lighten_color((.3,.55,.1), 0.5)
"""
import matplotlib.colors as mc
import colorsys
    try:
        c = mc.cnames[color]
    except KeyError:
        c = color
c = colorsys.rgb_to_hls(*mc.to_rgb(c))
return colorsys.hls_to_rgb(c[0], 1 - amount * (1 - c[1]), c[2])
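# The core lightening trick above, isolated as a stdlib-only sketch
# (hedged; `_lighten_rgb_demo` is an illustrative helper, not part of the
# module API): move the HLS lightness towards 1 by `amount`, keeping hue
# and saturation fixed.
def _lighten_rgb_demo(rgb, amount=0.5):
    import colorsys
    h, l, s = colorsys.rgb_to_hls(*rgb)
    # new lightness interpolates between the original lightness and white
    return colorsys.hls_to_rgb(h, 1 - amount * (1 - l), s)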
def connected_zoombox(ax0, ins_ax,
corner_a=(1, 1), corner_b=(2, 2),
square_kws={}, line_kws={}):
"""
Create a rectangle in ax0 corresponding to the ins_ax and connect corners.
Parameters
----------
ax0 : matplotlib axis
The parent axis on which to draw the square and connecting lines.
ins_ax : matplotlib axis
The inset axis. The limits of this axis are taken to determine the
location of the square.
    corner_a, corner_b : tuple of ints
        Tuples of location codes (inset corner, rectangle corner) used to
        determine which corners to connect:
        'upper right' 1
        'upper left' 2
        'lower left' 3
        'lower right' 4
"""
x_ins = ins_ax.get_xlim()
y_ins = ins_ax.get_ylim()
# xy coordinates corresponding to counterclockwise locations.
# this order is chosen to be consistent with ax.legend()
xy1 = (x_ins[1], y_ins[1]) # upper right
xy2 = (x_ins[0], y_ins[1]) # upper left
xy3 = (x_ins[0], y_ins[0]) # lower left
xy4 = (x_ins[1], y_ins[0]) # lower right
xy_corners = [xy1, xy2, xy3, xy4]
# ensures we have sensible defaults that can be overwritten
def_line_kws = dict(
color='grey',
arrowstyle='-', zorder=0, lw=1.5, ls=':')
def_line_kws.update(line_kws)
conA = ConnectionPatch(xy_corners[corner_a[0]-1],
xy_corners[corner_a[1]-1],
'data', 'data',
axesA=ins_ax, axesB=ax0, **def_line_kws)
ins_ax.add_artist(conA)
conB = ConnectionPatch(xy_corners[corner_b[0]-1],
xy_corners[corner_b[1]-1],
'data', 'data',
axesA=ins_ax, axesB=ax0, **def_line_kws)
ins_ax.add_artist(conB)
def_sq_kws = dict(ec='k', lw=0.5, fill=0, zorder=4)
def_sq_kws.update(square_kws)
rect = Rectangle((x_ins[0], y_ins[0]),
x_ins[1]-x_ins[0], y_ins[1]-y_ins[0],
**def_sq_kws)
ax0.add_patch(rect)
def restore_default_plot_params():
"""
Restore the matplotlib rcParams to their default values
"""
matplotlib.rcParams.update(matplotlib.rcParamsDefault) | unknown | codeparrot/codeparrot-clean | ||
#!/usr/bin/env python
########################################################################
#
# (C) 2013, James Cammarata <jcammarata@ansible.com>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
#
########################################################################
import json
from urllib2 import quote as urlquote, HTTPError
from urlparse import urlparse
from ansible.errors import AnsibleError, AnsibleOptionsError
from ansible.module_utils.urls import open_url
class GalaxyAPI(object):
    ''' This class is meant to be used as an API client for an Ansible Galaxy server '''
SUPPORTED_VERSIONS = ['v1']
def __init__(self, galaxy, api_server):
self.galaxy = galaxy
try:
urlparse(api_server, scheme='https')
except:
raise AnsibleError("Invalid server API url passed: %s" % api_server)
server_version = self.get_server_api_version('%s/api/' % (api_server))
if not server_version:
raise AnsibleError("Could not retrieve server API version: %s" % api_server)
if server_version in self.SUPPORTED_VERSIONS:
self.baseurl = '%s/api/%s' % (api_server, server_version)
self.version = server_version # for future use
self.galaxy.display.vvvvv("Base API: %s" % self.baseurl)
else:
raise AnsibleError("Unsupported Galaxy server API version: %s" % server_version)
def get_server_api_version(self, api_server):
"""
Fetches the Galaxy API current version to ensure
the API server is up and reachable.
"""
#TODO: fix galaxy server which returns current_version path (/api/v1) vs actual version (v1)
# also should set baseurl using supported_versions which has path
        return 'v1'  # NOTE: hard-coded until the above TODO is resolved; the code below is currently unreachable
try:
data = json.load(open_url(api_server, validate_certs=self.galaxy.options.validate_certs))
return data.get("current_version", 'v1')
except Exception as e:
# TODO: report error
return None
def lookup_role_by_name(self, role_name, notify=True):
"""
Find a role by name
"""
role_name = urlquote(role_name)
try:
parts = role_name.split(".")
user_name = ".".join(parts[0:-1])
role_name = parts[-1]
if notify:
self.galaxy.display.display("- downloading role '%s', owned by %s" % (role_name, user_name))
except:
raise AnsibleError("- invalid role name (%s). Specify role as format: username.rolename" % role_name)
url = '%s/roles/?owner__username=%s&name=%s' % (self.baseurl, user_name, role_name)
self.galaxy.display.vvvv("- %s" % (url))
try:
data = json.load(open_url(url, validate_certs=self.galaxy.options.validate_certs))
if len(data["results"]) != 0:
return data["results"][0]
except:
# TODO: report on connection/availability errors
pass
return None
def fetch_role_related(self, related, role_id):
"""
Fetch the list of related items for the given role.
The url comes from the 'related' field of the role.
"""
try:
url = '%s/roles/%d/%s/?page_size=50' % (self.baseurl, int(role_id), related)
data = json.load(open_url(url, validate_certs=self.galaxy.options.validate_certs))
results = data['results']
done = (data.get('next', None) == None)
while not done:
url = '%s%s' % (self.baseurl, data['next'])
self.galaxy.display.display(url)
data = json.load(open_url(url, validate_certs=self.galaxy.options.validate_certs))
results += data['results']
done = (data.get('next', None) == None)
return results
except:
return None
def get_list(self, what):
"""
Fetch the list of items specified.
"""
try:
url = '%s/%s/?page_size' % (self.baseurl, what)
data = json.load(open_url(url, validate_certs=self.galaxy.options.validate_certs))
if "results" in data:
results = data['results']
else:
results = data
done = True
if "next" in data:
done = (data.get('next', None) == None)
while not done:
url = '%s%s' % (self.baseurl, data['next'])
self.galaxy.display.display(url)
data = json.load(open_url(url, validate_certs=self.galaxy.options.validate_certs))
results += data['results']
done = (data.get('next', None) == None)
return results
except Exception as error:
raise AnsibleError("Failed to download the %s list: %s" % (what, str(error)))
def search_roles(self, search, platforms=None, tags=None):
search_url = self.baseurl + '/roles/?page=1'
if search:
search_url += '&search=' + urlquote(search)
if tags is None:
tags = []
elif isinstance(tags, basestring):
tags = tags.split(',')
for tag in tags:
search_url += '&chain__tags__name=' + urlquote(tag)
if platforms is None:
platforms = []
elif isinstance(platforms, basestring):
platforms = platforms.split(',')
for plat in platforms:
search_url += '&chain__platforms__name=' + urlquote(plat)
self.galaxy.display.debug("Executing query: %s" % search_url)
try:
data = json.load(open_url(search_url, validate_certs=self.galaxy.options.validate_certs))
except HTTPError as e:
raise AnsibleError("Unsuccessful request to server: %s" % str(e))
return data | unknown | codeparrot/codeparrot-clean | ||
"""turtledemo/chaos.py
A demonstration of chaos.
"""
from turtle import *
N = 80
# f, g and h are algebraically identical logistic maps (r = 3.9); they
# differ only in floating-point rounding, which the chaotic iteration
# amplifies until the three plotted curves visibly diverge.
def f(x):
    return 3.9*x*(1-x)
def g(x):
    return 3.9*(x-x**2)
def h(x):
    return 3.9*x-3.9*x*x
def jumpto(x, y):
penup(); goto(x,y)
def line(x1, y1, x2, y2):
jumpto(x1, y1)
pendown()
goto(x2, y2)
def coosys():
line(-1, 0, N+1, 0)
line(0, -0.1, 0, 1.1)
def plot(fun, start, color):
pencolor(color)
x = start
jumpto(0, x)
pendown()
dot(5)
for i in range(N):
x=fun(x)
goto(i+1,x)
dot(5)
def main():
reset()
setworldcoordinates(-1.0,-0.1, N+1, 1.1)
speed(0)
hideturtle()
coosys()
plot(f, 0.35, "blue")
plot(g, 0.35, "green")
plot(h, 0.35, "red")
# Now zoom in:
for s in range(100):
setworldcoordinates(0.5*s,-0.1, N+1, 1.1)
return "Done!"
if __name__ == "__main__":
main()
mainloop() | python | github | https://github.com/python/cpython | Lib/turtledemo/chaos.py |
/*
* Copyright 2010-2024 JetBrains s.r.o. and Kotlin Programming Language contributors.
* Use of this source code is governed by the Apache 2.0 license that can be found in the license/LICENSE.txt file.
*/
package org.jetbrains.kotlin.analysis.api.fir.test.cases.generated.cases.components.substitutorProvider;
import com.intellij.testFramework.TestDataPath;
import org.jetbrains.kotlin.test.util.KtTestUtil;
import org.jetbrains.annotations.NotNull;
import org.jetbrains.kotlin.analysis.api.fir.test.configurators.AnalysisApiFirTestConfiguratorFactory;
import org.jetbrains.kotlin.analysis.test.framework.test.configurators.AnalysisApiTestConfiguratorFactoryData;
import org.jetbrains.kotlin.analysis.test.framework.test.configurators.AnalysisApiTestConfigurator;
import org.jetbrains.kotlin.analysis.test.framework.test.configurators.TestModuleKind;
import org.jetbrains.kotlin.analysis.test.framework.test.configurators.FrontendKind;
import org.jetbrains.kotlin.analysis.test.framework.test.configurators.AnalysisSessionMode;
import org.jetbrains.kotlin.analysis.test.framework.test.configurators.AnalysisApiMode;
import org.jetbrains.kotlin.analysis.api.impl.base.test.cases.components.substitutorProvider.AbstractCreateInheritanceTypeSubstitutorTest;
import org.jetbrains.kotlin.test.TestMetadata;
import org.junit.jupiter.api.Test;
import java.io.File;
import java.util.regex.Pattern;
/** This class is generated by {@link org.jetbrains.kotlin.generators.tests.analysis.api.GenerateAnalysisApiTestsKt}. DO NOT MODIFY MANUALLY */
@SuppressWarnings("all")
@TestMetadata("analysis/analysis-api/testData/components/substitutorProvider/createInheritanceTypeSubstitutor")
@TestDataPath("$PROJECT_ROOT")
public class FirIdeDependentAnalysisScriptSourceModuleCreateInheritanceTypeSubstitutorTestGenerated extends AbstractCreateInheritanceTypeSubstitutorTest {
@NotNull
@Override
public AnalysisApiTestConfigurator getConfigurator() {
return AnalysisApiFirTestConfiguratorFactory.INSTANCE.createConfigurator(
new AnalysisApiTestConfiguratorFactoryData(
FrontendKind.Fir,
TestModuleKind.ScriptSource,
AnalysisSessionMode.Dependent,
AnalysisApiMode.Ide
)
);
}
@Test
public void testAllFilesPresentInCreateInheritanceTypeSubstitutor() {
KtTestUtil.assertAllTestsPresentByMetadataWithExcluded(this.getClass(), new File("analysis/analysis-api/testData/components/substitutorProvider/createInheritanceTypeSubstitutor"), Pattern.compile("^(.+)\\.kts$"), null, true);
}
} | java | github | https://github.com/JetBrains/kotlin | analysis/analysis-api-fir/tests-gen/org/jetbrains/kotlin/analysis/api/fir/test/cases/generated/cases/components/substitutorProvider/FirIdeDependentAnalysisScriptSourceModuleCreateInheritanceTypeSubstitutorTestGenerated.java |
# coding=utf-8
"""Test file for transform counts to ratios."""
import unittest
from safe.test.utilities import qgis_iface, load_test_vector_layer
from safe.definitions.fields import (
female_count_field, female_ratio_field, size_field, population_count_field)
from safe.gis.vector.from_counts_to_ratios import from_counts_to_ratios
from safe.gis.vector.prepare_vector_layer import prepare_vector_layer
__copyright__ = "Copyright 2016, The InaSAFE Project"
__license__ = "GPL version 3"
__email__ = "info@inasafe.org"
__revision__ = '$Format:%H$'
iface = qgis_iface()
class TestRecomputeCounts(unittest.TestCase):
"""Test class."""
def test_recompute_counts(self):
"""Test we can recompute counts in a layer."""
layer = load_test_vector_layer(
'gisv4', 'exposure', 'population.geojson',
clone=True)
self.assertIn(
female_count_field['key'], layer.keywords['inasafe_fields'])
layer = prepare_vector_layer(layer)
layer = from_counts_to_ratios(layer)
self.assertIn(
female_count_field['key'], layer.keywords['inasafe_fields'])
self.assertIn(
female_ratio_field['key'], layer.keywords['inasafe_fields'])
# Check that each feature has correct ratio
for feature in layer.getFeatures():
female_count = feature[female_count_field['field_name']]
population_count = feature[population_count_field['field_name']]
manual_ratio = female_count / float(population_count)
computing_ratio = feature[female_ratio_field['field_name']]
diff = abs(manual_ratio - computing_ratio)
message = 'The ratio difference is too big, diff = %s' % diff
self.assertTrue(diff < 10 ** -2, message) | unknown | codeparrot/codeparrot-clean | ||
#!/usr/bin/python
#
# Peteris Krumins (peter@catonmat.net)
# http://www.catonmat.net -- good coders code, great reuse
#
# http://www.catonmat.net/blog/python-library-for-google-sponsored-links-search/
#
# Code is licensed under MIT license.
#
import re
import urllib
import random
from htmlentitydefs import name2codepoint
from BeautifulSoup import BeautifulSoup
from browser import Browser, BrowserError
#
# TODO: join GoogleSearch and SponsoredLinks classes under a single base class
#
class SLError(Exception):
""" Sponsored Links Error """
pass
class SLParseError(Exception):
"""
Parse error in Google results.
self.msg attribute contains explanation why parsing failed
self.tag attribute contains BeautifulSoup object with the most relevant tag that failed to parse
Thrown only in debug mode
"""
def __init__(self, msg, tag):
self.msg = msg
self.tag = tag
def __str__(self):
return self.msg
def html(self):
return self.tag.prettify()
GET_ALL_SLEEP_FUNCTION = object()
class SponsoredLink(object):
""" a single sponsored link """
def __init__(self, title, url, display_url, desc):
self.title = title
self.url = url
self.display_url = display_url
self.desc = desc
class SponsoredLinks(object):
SEARCH_URL_0 = "http://www.google.com/sponsoredlinks?q=%(query)s&btnG=Search+Sponsored+Links&hl=en"
NEXT_PAGE_0 = "http://www.google.com/sponsoredlinks?q=%(query)s&sa=N&start=%(start)d&hl=en"
SEARCH_URL_1 = "http://www.google.com/sponsoredlinks?q=%(query)s&num=%(num)d&btnG=Search+Sponsored+Links&hl=en"
NEXT_PAGE_1 = "http://www.google.com/sponsoredlinks?q=%(query)s&num=%(num)d&sa=N&start=%(start)d&hl=en"
def __init__(self, query, random_agent=False, debug=False):
self.query = query
self.debug = debug
self.browser = Browser(debug=debug)
self._page = 0
self.eor = False
self.results_info = None
self._results_per_page = 10
if random_agent:
self.browser.set_random_user_agent()
@property
def num_results(self):
if not self.results_info:
page = self._get_results_page()
self.results_info = self._extract_info(page)
if self.results_info['total'] == 0:
self.eor = True
return self.results_info['total']
def _get_results_per_page(self):
return self._results_per_page
def _set_results_par_page(self, rpp):
self._results_per_page = rpp
results_per_page = property(_get_results_per_page, _set_results_par_page)
def get_results(self):
if self.eor:
return []
page = self._get_results_page()
info = self._extract_info(page)
if self.results_info is None:
self.results_info = info
if info['to'] == info['total']:
self.eor = True
results = self._extract_results(page)
if not results:
self.eor = True
return []
self._page += 1
return results
def _get_all_results_sleep_fn(self):
return random.random()*5 + 1 # sleep from 1 - 6 seconds
    def get_all_results(self, sleep_function=None):
        import time  # local import keeps this fix self-contained
        if sleep_function is GET_ALL_SLEEP_FUNCTION:
            sleep_function = self._get_all_results_sleep_fn
        if sleep_function is None:
            sleep_function = lambda: None
        ret_results = []
        while True:
            res = self.get_results()
            if not res:
                return ret_results
            ret_results.extend(res)
            # sleep between page fetches to avoid hammering the server
            delay = sleep_function()
            if delay:
                time.sleep(delay)
def _maybe_raise(self, cls, *arg):
if self.debug:
raise cls(*arg)
def _extract_info(self, soup):
empty_info = { 'from': 0, 'to': 0, 'total': 0 }
stats_span = soup.find('span', id='stats')
if not stats_span:
return empty_info
txt = ''.join(stats_span.findAll(text=True))
txt = txt.replace(',', '').replace(" ", ' ')
matches = re.search(r'Results (\d+) - (\d+) of (?:about )?(\d+)', txt)
if not matches:
return empty_info
return {'from': int(matches.group(1)), 'to': int(matches.group(2)), 'total': int(matches.group(3))}
def _get_results_page(self):
if self._page == 0:
if self._results_per_page == 10:
url = SponsoredLinks.SEARCH_URL_0
else:
url = SponsoredLinks.SEARCH_URL_1
else:
if self._results_per_page == 10:
url = SponsoredLinks.NEXT_PAGE_0
else:
url = SponsoredLinks.NEXT_PAGE_1
safe_url = url % { 'query': urllib.quote_plus(self.query),
'start': self._page * self._results_per_page,
'num': self._results_per_page }
try:
page = self.browser.get_page(safe_url)
except BrowserError, e:
raise SLError, "Failed getting %s: %s" % (e.url, e.error)
return BeautifulSoup(page)
def _extract_results(self, soup):
results = soup.findAll('div', {'class': 'g'})
ret_res = []
for result in results:
eres = self._extract_result(result)
if eres:
ret_res.append(eres)
return ret_res
def _extract_result(self, result):
title, url = self._extract_title_url(result)
display_url = self._extract_display_url(result) # Warning: removes 'cite' from the result
desc = self._extract_description(result)
if not title or not url or not display_url or not desc:
return None
return SponsoredLink(title, url, display_url, desc)
def _extract_title_url(self, result):
title_a = result.find('a')
if not title_a:
self._maybe_raise(SLParseError, "Title tag in sponsored link was not found", result)
return None, None
title = ''.join(title_a.findAll(text=True))
title = self._html_unescape(title)
url = title_a['href']
match = re.search(r'q=(http[^&]+)&', url)
if not match:
self._maybe_raise(SLParseError, "URL inside a sponsored link was not found", result)
return None, None
url = urllib.unquote(match.group(1))
return title, url
def _extract_display_url(self, result):
cite = result.find('cite')
if not cite:
self._maybe_raise(SLParseError, "<cite> not found inside result", result)
return None
return ''.join(cite.findAll(text=True))
def _extract_description(self, result):
cite = result.find('cite')
if not cite:
return None
cite.extract()
desc_div = result.find('div', {'class': 'line23'})
if not desc_div:
            self._maybe_raise(SLParseError, "Description tag not found in sponsored link", result)
return None
desc_strs = desc_div.findAll(text=True)[0:-1]
desc = ''.join(desc_strs)
desc = desc.replace("\n", " ")
desc = desc.replace(" ", " ")
return self._html_unescape(desc)
def _html_unescape(self, str):
def entity_replacer(m):
entity = m.group(1)
if entity in name2codepoint:
return unichr(name2codepoint[entity])
else:
return m.group(0)
def ascii_replacer(m):
cp = int(m.group(1))
if cp <= 255:
return unichr(cp)
else:
return m.group(0)
        s = re.sub(r'&#(\d+);', ascii_replacer, str, flags=re.U)
        return re.sub(r'&([^;]+);', entity_replacer, s, flags=re.U)
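# Example usage (hedged sketch, Python 2 era like the rest of this module):
#     sl = SponsoredLinks('laptop', random_agent=True)
#     for link in sl.get_all_results(sleep_function=GET_ALL_SLEEP_FUNCTION):
#         print link.title, link.display_url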
# MySQL-specific implementations for south
# Original author: Andrew Godwin
# Patches by: F. Gabriel Gosselin <gabrielNOSPAM@evidens.ca>
from south.db import generic
from south.db.generic import DryRunError, INVALID
from south.logger import get_logger
def delete_column_constraints(func):
"""
Decorates column operation functions for MySQL.
Deletes the constraints from the database and clears local cache.
"""
def _column_rm(self, table_name, column_name, *args, **opts):
# Delete foreign key constraints
try:
self.delete_foreign_key(table_name, column_name)
except ValueError:
pass # If no foreign key on column, OK because it checks first
# Delete constraints referring to this column
try:
reverse = self._lookup_reverse_constraint(table_name, column_name)
for cname, rtable, rcolumn in reverse:
self.delete_foreign_key(rtable, rcolumn)
except DryRunError:
pass
return func(self, table_name, column_name, *args, **opts)
return _column_rm
def copy_column_constraints(func):
"""
Decorates column operation functions for MySQL.
Determines existing constraints and copies them to a new column
"""
def _column_cp(self, table_name, column_old, column_new, *args, **opts):
# Copy foreign key constraint
try:
constraint = self._find_foreign_constraints(
table_name, column_old)[0]
refs = self._lookup_constraint_references(table_name, constraint)
if refs is not None:
(ftable, fcolumn) = refs
if ftable and fcolumn:
fk_sql = self.foreign_key_sql(
table_name, column_new, ftable, fcolumn)
get_logger().debug("Foreign key SQL: " + fk_sql)
self.add_deferred_sql(fk_sql)
except IndexError:
pass # No constraint exists so ignore
except DryRunError:
pass
# Copy constraints referring to this column
try:
reverse = self._lookup_reverse_constraint(table_name, column_old)
for cname, rtable, rcolumn in reverse:
fk_sql = self.foreign_key_sql(
rtable, rcolumn, table_name, column_new)
self.add_deferred_sql(fk_sql)
except DryRunError:
pass
return func(self, table_name, column_old, column_new, *args, **opts)
return _column_cp
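# Example (hedged): these decorators are meant to wrap DatabaseOperations
# column methods, e.g.
#
#     @copy_column_constraints
#     @delete_column_constraints
#     def rename_column(self, table_name, old, new, ...):
#         ...
#
# so that foreign-key constraints referencing the column survive the rename.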
def invalidate_table_constraints(func):
"""
    For MySQL we grab all table constraints simultaneously, so clearing the
    whole per-database cache here is effective.
    It also takes care of invalidating constraints on referring tables.
"""
def _cache_clear(self, table, *args, **opts):
db_name = self._get_setting('NAME')
if db_name in self._constraint_cache:
del self._constraint_cache[db_name]
if db_name in self._reverse_cache:
del self._reverse_cache[db_name]
if db_name in self._constraint_references:
del self._constraint_references[db_name]
return func(self, table, *args, **opts)
return _cache_clear
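The three decorators above are stacked on `rename_column` later in this module. A minimal, self-contained sketch (all names hypothetical) of how such stacking orders the side effects: the decorator listed first wraps the others, so its pre-work runs first when the decorated operation is called.

```python
calls = []  # records the order in which each wrapper fires

def copy_constraints(func):
    def wrapper(*args, **kwargs):
        calls.append("copy")        # runs first: snapshot constraints
        return func(*args, **kwargs)
    return wrapper

def delete_constraints(func):
    def wrapper(*args, **kwargs):
        calls.append("delete")      # runs second: drop old constraints
        return func(*args, **kwargs)
    return wrapper

def invalidate_cache(func):
    def wrapper(*args, **kwargs):
        calls.append("invalidate")  # runs third: clear the constraint cache
        return func(*args, **kwargs)
    return wrapper

@copy_constraints
@delete_constraints
@invalidate_cache
def rename_column(table, old, new):
    calls.append("rename")          # the real operation runs last
    return (table, old, new)

rename_column("t", "a", "b")
# calls is now ["copy", "delete", "invalidate", "rename"]
```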
class DatabaseOperations(generic.DatabaseOperations):
"""
MySQL implementation of database operations.
MySQL has no DDL transaction support. This can confuse people when they ask
how to roll back - hence the dry runs, etc., found in the migration code.
"""
backend_name = "mysql"
alter_string_set_type = ''
alter_string_set_null = 'MODIFY %(column)s %(type)s NULL;'
alter_string_drop_null = 'MODIFY %(column)s %(type)s NOT NULL;'
drop_index_string = 'DROP INDEX %(index_name)s ON %(table_name)s'
delete_primary_key_sql = "ALTER TABLE %(table)s DROP PRIMARY KEY"
delete_foreign_key_sql = "ALTER TABLE %(table)s DROP FOREIGN KEY %(constraint)s"
delete_unique_sql = "ALTER TABLE %s DROP INDEX %s"
rename_table_sql = "RENAME TABLE %s TO %s;"
allows_combined_alters = False
has_check_constraints = False
raises_default_errors = False
geom_types = ['geometry', 'point', 'linestring', 'polygon']
text_types = ['text', 'blob']
def __init__(self, db_alias):
self._constraint_references = {}
self._reverse_cache = {}
super(DatabaseOperations, self).__init__(db_alias)
if self._has_setting('STORAGE_ENGINE') and self._get_setting('STORAGE_ENGINE'):
self.create_table_sql = self.create_table_sql + ' ENGINE=%s' % self._get_setting('STORAGE_ENGINE')
def _is_valid_cache(self, db_name, table_name):
cache = self._constraint_cache
# We cache the whole db, so if the db is cached, table_name is valid unless marked INVALID
return db_name in cache and cache[db_name].get(table_name, None) is not INVALID
def _fill_constraint_cache(self, db_name, table_name):
# for MySQL grab all constraints for this database. It's just as cheap as a single column.
self._constraint_cache[db_name] = {}
self._constraint_cache[db_name][table_name] = {}
self._reverse_cache[db_name] = {}
self._constraint_references[db_name] = {}
name_query = """
SELECT kc.`constraint_name`, kc.`column_name`, kc.`table_name`,
kc.`referenced_table_name`, kc.`referenced_column_name`
FROM information_schema.key_column_usage AS kc
WHERE
kc.table_schema = %s
"""
rows = self.execute(name_query, [db_name])
if not rows:
return
cnames = {}
for constraint, column, table, ref_table, ref_column in rows:
key = (table, constraint)
cnames.setdefault(key, set())
cnames[key].add((column, ref_table, ref_column))
type_query = """
SELECT c.constraint_name, c.table_name, c.constraint_type
FROM information_schema.table_constraints AS c
WHERE
c.table_schema = %s
"""
rows = self.execute(type_query, [db_name])
for constraint, table, kind in rows:
key = (table, constraint)
self._constraint_cache[db_name].setdefault(table, {})
try:
cols = cnames[key]
except KeyError:
cols = set()
for column_set in cols:
(column, ref_table, ref_column) = column_set
self._constraint_cache[db_name][table].setdefault(column, set())
if kind == 'FOREIGN KEY':
self._constraint_cache[db_name][table][column].add((kind,
constraint))
# Create constraint lookup, see constraint_references
self._constraint_references[db_name][(table,
constraint)] = (ref_table, ref_column)
# Create reverse table lookup, reverse_lookup
self._reverse_cache[db_name].setdefault(ref_table, {})
self._reverse_cache[db_name][ref_table].setdefault(ref_column,
set())
self._reverse_cache[db_name][ref_table][ref_column].add(
(constraint, table, column))
else:
self._constraint_cache[db_name][table][column].add((kind,
constraint))
def connection_init(self):
"""
Run before any SQL to let database-specific config be sent as a command,
e.g. which storage engine (MySQL) or transaction serialisability level.
"""
cursor = self._get_connection().cursor()
if cursor.execute("SHOW variables WHERE Variable_Name='default_storage_engine';"):
engine_var = 'default_storage_engine'
else:
engine_var = 'storage_engine'
if self._has_setting('STORAGE_ENGINE') and self._get_setting('STORAGE_ENGINE'):
cursor.execute("SET %s=%s;" % (engine_var, self._get_setting('STORAGE_ENGINE')))
def start_transaction(self):
super(DatabaseOperations, self).start_transaction()
self.execute("SET FOREIGN_KEY_CHECKS=0;")
@copy_column_constraints
@delete_column_constraints
@invalidate_table_constraints
def rename_column(self, table_name, old, new):
if old == new or self.dry_run:
return []
rows = [x for x in self.execute('DESCRIBE %s' % (self.quote_name(table_name),)) if x[0] == old]
if not rows:
raise ValueError("No column '%s' in '%s'." % (old, table_name))
params = (
self.quote_name(table_name),
self.quote_name(old),
self.quote_name(new),
rows[0][1],
rows[0][2] == "YES" and "NULL" or "NOT NULL",
rows[0][4] and "DEFAULT " or "",
rows[0][4] and "%s" or "",
rows[0][5] or "",
)
sql = 'ALTER TABLE %s CHANGE COLUMN %s %s %s %s %s %s %s;' % params
if rows[0][4]:
self.execute(sql, (rows[0][4],))
else:
self.execute(sql)
@delete_column_constraints
def delete_column(self, table_name, name):
super(DatabaseOperations, self).delete_column(table_name, name)
@invalidate_table_constraints
def rename_table(self, old_table_name, table_name):
super(DatabaseOperations, self).rename_table(old_table_name,
table_name)
@invalidate_table_constraints
def delete_table(self, table_name):
super(DatabaseOperations, self).delete_table(table_name)
def _lookup_constraint_references(self, table_name, cname):
"""
Provided an existing table and constraint, returns tuple of (foreign
table, column)
"""
db_name = self._get_setting('NAME')
try:
return self._constraint_references[db_name][(table_name, cname)]
except KeyError:
return None
def _lookup_reverse_constraint(self, table_name, column_name=None):
"""Look for the column referenced by a foreign constraint"""
db_name = self._get_setting('NAME')
if self.dry_run:
raise DryRunError("Cannot get constraints for columns.")
if not self._is_valid_cache(db_name, table_name):
# Piggy-back on lookup_constraint, ensures cache exists
self.lookup_constraint(db_name, table_name)
try:
table = self._reverse_cache[db_name][table_name]
if column_name is None:
return [(y, tuple(y)) for x, y in table.items()]
else:
return tuple(table[column_name])
except KeyError:
return []
def _field_sanity(self, field):
"""
This particular override stops us sending DEFAULTs for BLOB/TEXT columns.
"""
# MySQL also does not support defaults for geometry columns
type = self._db_type_for_alter_column(field).lower()
is_geom = True in [type.find(t) > -1 for t in self.geom_types]
is_text = True in [type.find(t) > -1 for t in self.text_types]
if is_geom or is_text:
field._suppress_default = True
return field
def _alter_set_defaults(self, field, name, params, sqls):
"""
MySQL does not support defaults on text or blob columns.
"""
type = params['type']
# MySQL also does not support defaults for geometry columns
is_geom = True in [type.find(t) > -1 for t in self.geom_types]
is_text = True in [type.find(t) > -1 for t in self.text_types]
if not is_geom and not is_text:
super(DatabaseOperations, self)._alter_set_defaults(field, name, params, sqls) | unknown | codeparrot/codeparrot-clean | ||
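Both `_field_sanity` and `_alter_set_defaults` above test the column type with `True in [type.find(t) > -1 for t in ...]`, which is a verbose spelling of a substring check. A sketch of the same logic with `any()` (function name is hypothetical):

```python
geom_types = ['geometry', 'point', 'linestring', 'polygon']
text_types = ['text', 'blob']

def needs_default_suppressed(column_type):
    # True when the type contains any geometry or text/blob marker,
    # i.e. when MySQL cannot accept a DEFAULT for the column.
    column_type = column_type.lower()
    is_geom = any(t in column_type for t in geom_types)
    is_text = any(t in column_type for t in text_types)
    return is_geom or is_text

print(needs_default_suppressed("LONGTEXT"))      # True  (contains "text")
print(needs_default_suppressed("VARCHAR(255)"))  # False
```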
'''
Created on 10.01.2011
@author: michi
'''
from PyQt4.QtGui import QStyledItemDelegate
from PyQt4.QtCore import Qt
from ems import qt4
class ColumnNameDelegate(QStyledItemDelegate):
def __init__(self, parent=None):
super(ColumnNameDelegate, self).__init__(parent)
self.delegates = {}
def sizeHint(self, option, index):
delegate = self.delegates.get(index.column())
if delegate is not None:
return delegate.sizeHint(option, index)
else:
return QStyledItemDelegate.sizeHint(self, option, index)
def insertColumnDelegate(self, column, delegate):
delegate.setParent(self)
self.delegates[column] = delegate
def removeColumnDelegate(self, column):
if column in self.delegates:
del self.delegates[column]
def paint(self, painter, option, index):
delegate = self.delegates.get(index.column())
if delegate is not None:
return delegate.paint(painter, option, index)
else:
return QStyledItemDelegate.paint(self, painter, option, index)
def createEditor(self, parent, option, index):
delegate = self.delegates.get(index.column())
if delegate is not None:
return delegate.createEditor(parent, option, index)
else:
return QStyledItemDelegate.createEditor(self, parent, option,
index)
def setEditorData(self, editor, index):
delegate = self.delegates.get(index.column())
if delegate is not None:
return delegate.setEditorData(editor, index)
else:
return QStyledItemDelegate.setEditorData(self, editor, index)
def setModelData(self, editor, model, index):
delegate = self.delegates.get(index.column())
if delegate is not None:
return delegate.setModelData(editor, model, index)
else:
return QStyledItemDelegate.setModelData(self, editor, model, index) | unknown | codeparrot/codeparrot-clean | ||
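Every method of the delegate above follows the same pattern: look up a per-column delegate in a dict and fall back to the base-class behaviour when none is registered. A GUI-free sketch of that dispatch pattern (class and method names hypothetical):

```python
class FallbackDispatcher:
    def __init__(self, default):
        self.default = default   # base behaviour, like QStyledItemDelegate
        self.handlers = {}       # column index -> handler callable

    def insert(self, column, handler):
        self.handlers[column] = handler

    def remove(self, column):
        self.handlers.pop(column, None)

    def render(self, column, value):
        # Same shape as paint/createEditor/... above: delegate if registered,
        # otherwise fall back to the default.
        handler = self.handlers.get(column)
        if handler is not None:
            return handler(value)
        return self.default(value)

d = FallbackDispatcher(str)
d.insert(1, lambda v: "%.2f" % v)
print(d.render(0, 3.14159))  # '3.14159' via the default
print(d.render(1, 3.14159))  # '3.14' via the column-1 handler
```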
# Copyright (c) 2015, Frappe Technologies Pvt. Ltd. and Contributors
# License: GNU General Public License v3. See license.txt
# ERPNext - web based ERP (http://erpnext.com)
# For license information, please see license.txt
from __future__ import unicode_literals
import frappe
from frappe.utils import cstr, flt, getdate, new_line_sep, nowdate, add_days
from frappe import msgprint, _
from frappe.model.mapper import get_mapped_doc
from erpnext.stock.stock_balance import update_bin_qty, get_indented_qty
from erpnext.controllers.buying_controller import BuyingController
from erpnext.manufacturing.doctype.work_order.work_order import get_item_details
from erpnext.buying.utils import check_for_closed_status, validate_for_items
from erpnext.stock.doctype.item.item import get_item_defaults
from six import string_types
form_grid_templates = {
"items": "templates/form_grid/material_request_grid.html"
}
class MaterialRequest(BuyingController):
def get_feed(self):
return _("{0}: {1}").format(self.status, self.material_request_type)
def check_if_already_pulled(self):
pass
def validate_qty_against_so(self):
so_items = {} # Format --> {'SO/00001': {'Item/001': 120, 'Item/002': 24}}
for d in self.get('items'):
if d.sales_order:
if not d.sales_order in so_items:
so_items[d.sales_order] = {d.item_code: flt(d.qty)}
else:
if not d.item_code in so_items[d.sales_order]:
so_items[d.sales_order][d.item_code] = flt(d.qty)
else:
so_items[d.sales_order][d.item_code] += flt(d.qty)
for so_no in so_items.keys():
for item in so_items[so_no].keys():
already_indented = frappe.db.sql("""select sum(qty)
from `tabMaterial Request Item`
where item_code = %s and sales_order = %s and
docstatus = 1 and parent != %s""", (item, so_no, self.name))
already_indented = already_indented and flt(already_indented[0][0]) or 0
actual_so_qty = frappe.db.sql("""select sum(stock_qty) from `tabSales Order Item`
where parent = %s and item_code = %s and docstatus = 1""", (so_no, item))
actual_so_qty = actual_so_qty and flt(actual_so_qty[0][0]) or 0
if actual_so_qty and (flt(so_items[so_no][item]) + already_indented > actual_so_qty):
frappe.throw(_("Material Request of maximum {0} can be made for Item {1} against Sales Order {2}").format(actual_so_qty - already_indented, item, so_no))
# Validate
# ---------------------
def validate(self):
super(MaterialRequest, self).validate()
self.validate_schedule_date()
self.validate_uom_is_integer("uom", "qty")
if not self.status:
self.status = "Draft"
from erpnext.controllers.status_updater import validate_status
validate_status(self.status,
["Draft", "Submitted", "Stopped", "Cancelled", "Pending",
"Partially Ordered", "Ordered", "Issued", "Transferred"])
validate_for_items(self)
self.set_title()
# self.validate_qty_against_so()
# NOTE: Since Item BOM and FG quantities are combined, using current data, it cannot be validated
# Though the creation of Material Request from a Production Plan can be rethought to fix this
def set_title(self):
'''Set title as comma separated list of items'''
items = ', '.join([d.item_name for d in self.items][:4])
self.title = _('{0} for {1}'.format(self.material_request_type, items))[:100]
def on_submit(self):
# frappe.db.set(self, 'status', 'Submitted')
self.update_requested_qty()
self.update_requested_qty_in_production_plan()
if self.material_request_type == 'Purchase':
self.validate_budget()
def before_save(self):
self.set_status(update=True)
def before_submit(self):
self.set_status(update=True)
def before_cancel(self):
# if MRQ is already closed, no point saving the document
check_for_closed_status(self.doctype, self.name)
self.set_status(update=True, status='Cancelled')
def check_modified_date(self):
mod_db = frappe.db.sql("""select modified from `tabMaterial Request` where name = %s""",
self.name)
date_diff = frappe.db.sql("""select TIMEDIFF('%s', '%s')"""
% (mod_db[0][0], cstr(self.modified)))
if date_diff and date_diff[0][0]:
frappe.throw(_("{0} {1} has been modified. Please refresh.").format(_(self.doctype), self.name))
def update_status(self, status):
self.check_modified_date()
self.status_can_change(status)
self.set_status(update=True, status=status)
self.update_requested_qty()
def status_can_change(self, status):
"""
validates that `status` is acceptable for the present controller status
and throws an Exception if otherwise.
"""
if self.status and self.status == 'Cancelled':
# cancelled documents cannot change
if status != self.status:
frappe.throw(
_("{0} {1} is cancelled so the action cannot be completed").
format(_(self.doctype), self.name),
frappe.InvalidStatusError
)
elif self.status and self.status == 'Draft':
# draft document to pending only
if status != 'Pending':
frappe.throw(
_("{0} {1} has not been submitted so the action cannot be completed").
format(_(self.doctype), self.name),
frappe.InvalidStatusError
)
def on_cancel(self):
self.update_requested_qty()
self.update_requested_qty_in_production_plan()
def update_completed_qty(self, mr_items=None, update_modified=True):
if self.material_request_type == "Purchase":
return
if not mr_items:
mr_items = [d.name for d in self.get("items")]
for d in self.get("items"):
if d.name in mr_items:
if self.material_request_type in ("Material Issue", "Material Transfer"):
d.ordered_qty = flt(frappe.db.sql("""select sum(transfer_qty)
from `tabStock Entry Detail` where material_request = %s
and material_request_item = %s and docstatus = 1""",
(self.name, d.name))[0][0])
if d.ordered_qty and d.ordered_qty > d.stock_qty:
frappe.throw(_("The total Issue / Transfer quantity {0} in Material Request {1} \
cannot be greater than requested quantity {2} for Item {3}").format(d.ordered_qty, d.parent, d.qty, d.item_code))
elif self.material_request_type == "Manufacture":
d.ordered_qty = flt(frappe.db.sql("""select sum(qty)
from `tabWork Order` where material_request = %s
and material_request_item = %s and docstatus = 1""",
(self.name, d.name))[0][0])
frappe.db.set_value(d.doctype, d.name, "ordered_qty", d.ordered_qty)
target_ref_field = 'qty' if self.material_request_type == "Manufacture" else 'stock_qty'
self._update_percent_field({
"target_dt": "Material Request Item",
"target_parent_dt": self.doctype,
"target_parent_field": "per_ordered",
"target_ref_field": target_ref_field,
"target_field": "ordered_qty",
"name": self.name,
}, update_modified)
def update_requested_qty(self, mr_item_rows=None):
"""update requested qty (before ordered_qty is updated)"""
item_wh_list = []
for d in self.get("items"):
if (not mr_item_rows or d.name in mr_item_rows) and [d.item_code, d.warehouse] not in item_wh_list \
and frappe.db.get_value("Item", d.item_code, "is_stock_item") == 1 and d.warehouse:
item_wh_list.append([d.item_code, d.warehouse])
for item_code, warehouse in item_wh_list:
update_bin_qty(item_code, warehouse, {
"indented_qty": get_indented_qty(item_code, warehouse)
})
def update_requested_qty_in_production_plan(self):
production_plans = []
for d in self.get('items'):
if d.production_plan and d.material_request_plan_item:
qty = d.qty if self.docstatus == 1 else 0
frappe.db.set_value('Material Request Plan Item',
d.material_request_plan_item, 'requested_qty', qty)
if d.production_plan not in production_plans:
production_plans.append(d.production_plan)
for production_plan in production_plans:
doc = frappe.get_doc('Production Plan', production_plan)
doc.set_status()
doc.db_set('status', doc.status)
def update_completed_and_requested_qty(stock_entry, method):
if stock_entry.doctype == "Stock Entry":
material_request_map = {}
for d in stock_entry.get("items"):
if d.material_request:
material_request_map.setdefault(d.material_request, []).append(d.material_request_item)
for mr, mr_item_rows in material_request_map.items():
if mr and mr_item_rows:
mr_obj = frappe.get_doc("Material Request", mr)
if mr_obj.status in ["Stopped", "Cancelled"]:
frappe.throw(_("{0} {1} is cancelled or stopped").format(_("Material Request"), mr),
frappe.InvalidStatusError)
mr_obj.update_completed_qty(mr_item_rows)
mr_obj.update_requested_qty(mr_item_rows)
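`update_completed_and_requested_qty` groups stock entry rows by their Material Request using `dict.setdefault`. A minimal sketch of that grouping with plain tuples standing in for stock entry detail rows:

```python
rows = [
    ("MR-001", "row-a"),
    ("MR-002", "row-c"),
    ("MR-001", "row-b"),
]

material_request_map = {}
for mr, mr_item in rows:
    # setdefault creates the list on first sight of a request, then appends.
    material_request_map.setdefault(mr, []).append(mr_item)

print(material_request_map)
# {'MR-001': ['row-a', 'row-b'], 'MR-002': ['row-c']}
```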
def set_missing_values(source, target_doc):
target_doc.run_method("set_missing_values")
target_doc.run_method("calculate_taxes_and_totals")
def update_item(obj, target, source_parent):
target.conversion_factor = obj.conversion_factor
target.qty = flt(flt(obj.stock_qty) - flt(obj.ordered_qty)) / target.conversion_factor
target.stock_qty = (target.qty * target.conversion_factor)
@frappe.whitelist()
def update_status(name, status):
material_request = frappe.get_doc('Material Request', name)
material_request.check_permission('write')
material_request.update_status(status)
@frappe.whitelist()
def make_purchase_order(source_name, target_doc=None):
def postprocess(source, target_doc):
if frappe.flags.args and frappe.flags.args.default_supplier:
# items only for given default supplier
supplier_items = []
for d in target_doc.items:
default_supplier = get_item_defaults(d.item_code, target_doc.company).get('default_supplier')
if frappe.flags.args.default_supplier == default_supplier:
supplier_items.append(d)
target_doc.items = supplier_items
set_missing_values(source, target_doc)
def select_item(d):
return d.ordered_qty < d.stock_qty
doclist = get_mapped_doc("Material Request", source_name, {
"Material Request": {
"doctype": "Purchase Order",
"validation": {
"docstatus": ["=", 1],
"material_request_type": ["=", "Purchase"]
}
},
"Material Request Item": {
"doctype": "Purchase Order Item",
"field_map": [
["name", "material_request_item"],
["parent", "material_request"],
["uom", "stock_uom"],
["uom", "uom"],
["sales_order", "sales_order"],
["sales_order_item", "sales_order_item"]
],
"postprocess": update_item,
"condition": select_item
}
}, target_doc, postprocess)
return doclist
@frappe.whitelist()
def make_request_for_quotation(source_name, target_doc=None):
doclist = get_mapped_doc("Material Request", source_name, {
"Material Request": {
"doctype": "Request for Quotation",
"validation": {
"docstatus": ["=", 1],
"material_request_type": ["=", "Purchase"]
}
},
"Material Request Item": {
"doctype": "Request for Quotation Item",
"field_map": [
["name", "material_request_item"],
["parent", "material_request"],
["uom", "uom"]
]
}
}, target_doc)
return doclist
@frappe.whitelist()
def make_purchase_order_based_on_supplier(source_name, target_doc=None):
if target_doc:
if isinstance(target_doc, string_types):
import json
target_doc = frappe.get_doc(json.loads(target_doc))
target_doc.set("items", [])
material_requests, supplier_items = get_material_requests_based_on_supplier(source_name)
def postprocess(source, target_doc):
target_doc.supplier = source_name
target_doc.schedule_date = add_days(nowdate(), 1)
target_doc.set("items", [d for d in target_doc.get("items")
if d.get("item_code") in supplier_items and d.get("qty") > 0])
set_missing_values(source, target_doc)
for mr in material_requests:
target_doc = get_mapped_doc("Material Request", mr, {
"Material Request": {
"doctype": "Purchase Order",
},
"Material Request Item": {
"doctype": "Purchase Order Item",
"field_map": [
["name", "material_request_item"],
["parent", "material_request"],
["uom", "stock_uom"],
["uom", "uom"]
],
"postprocess": update_item,
"condition": lambda doc: doc.ordered_qty < doc.qty
}
}, target_doc, postprocess)
return target_doc
def get_material_requests_based_on_supplier(supplier):
supplier_items = [d.parent for d in frappe.db.get_all("Item Default",
{"default_supplier": supplier}, 'parent')]
if supplier_items:
material_requests = frappe.db.sql_list("""select distinct mr.name
from `tabMaterial Request` mr, `tabMaterial Request Item` mr_item
where mr.name = mr_item.parent
and mr_item.item_code in (%s)
and mr.material_request_type = 'Purchase'
and mr.per_ordered < 99.99
and mr.docstatus = 1
and mr.status != 'Stopped'
order by mr_item.item_code ASC""" % ', '.join(['%s']*len(supplier_items)),
tuple(supplier_items))
else:
material_requests = []
return material_requests, supplier_items
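`get_material_requests_based_on_supplier` builds its `in (...)` clause by repeating the `%s` placeholder once per item while the values themselves stay parameterized. A sketch of just that placeholder expansion:

```python
supplier_items = ["ITEM-001", "ITEM-002", "ITEM-003"]

# One %s per item; the driver later substitutes the tuple of values safely.
placeholders = ', '.join(['%s'] * len(supplier_items))
query = "select name from `tabMaterial Request Item` where item_code in (%s)" % placeholders

print(query)
# select name from `tabMaterial Request Item` where item_code in (%s, %s, %s)
```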
@frappe.whitelist()
def make_supplier_quotation(source_name, target_doc=None):
def postprocess(source, target_doc):
set_missing_values(source, target_doc)
doclist = get_mapped_doc("Material Request", source_name, {
"Material Request": {
"doctype": "Supplier Quotation",
"validation": {
"docstatus": ["=", 1],
"material_request_type": ["=", "Purchase"]
}
},
"Material Request Item": {
"doctype": "Supplier Quotation Item",
"field_map": {
"name": "material_request_item",
"parent": "material_request",
"sales_order": "sales_order"
}
}
}, target_doc, postprocess)
return doclist
@frappe.whitelist()
def make_stock_entry(source_name, target_doc=None):
def update_item(obj, target, source_parent):
qty = flt(flt(obj.stock_qty) - flt(obj.ordered_qty))/ target.conversion_factor \
if flt(obj.stock_qty) > flt(obj.ordered_qty) else 0
target.qty = qty
target.transfer_qty = qty * obj.conversion_factor
target.conversion_factor = obj.conversion_factor
if source_parent.material_request_type == "Material Transfer":
target.t_warehouse = obj.warehouse
else:
target.s_warehouse = obj.warehouse
def set_missing_values(source, target):
target.purpose = source.material_request_type
if source.job_card:
target.purpose = 'Material Transfer for Manufacture'
target.run_method("calculate_rate_and_amount")
target.set_job_card_data()
doclist = get_mapped_doc("Material Request", source_name, {
"Material Request": {
"doctype": "Stock Entry",
"validation": {
"docstatus": ["=", 1],
"material_request_type": ["in", ["Material Transfer", "Material Issue"]]
}
},
"Material Request Item": {
"doctype": "Stock Entry Detail",
"field_map": {
"name": "material_request_item",
"parent": "material_request",
"uom": "stock_uom",
},
"postprocess": update_item,
"condition": lambda doc: doc.ordered_qty < doc.stock_qty
}
}, target_doc, set_missing_values)
return doclist
@frappe.whitelist()
def raise_work_orders(material_request):
mr = frappe.get_doc("Material Request", material_request)
errors = []
work_orders = []
default_wip_warehouse = frappe.db.get_single_value("Manufacturing Settings", "default_wip_warehouse")
for d in mr.items:
if (d.qty - d.ordered_qty) > 0:
if frappe.db.exists("BOM", {"item": d.item_code, "is_default": 1}):
wo_order = frappe.new_doc("Work Order")
wo_order.update({
"production_item": d.item_code,
"qty": d.qty - d.ordered_qty,
"fg_warehouse": d.warehouse,
"wip_warehouse": default_wip_warehouse,
"description": d.description,
"stock_uom": d.stock_uom,
"expected_delivery_date": d.schedule_date,
"sales_order": d.sales_order,
"bom_no": get_item_details(d.item_code).bom_no,
"material_request": mr.name,
"material_request_item": d.name,
"planned_start_date": mr.transaction_date,
"company": mr.company
})
wo_order.set_work_order_operations()
wo_order.save()
work_orders.append(wo_order.name)
else:
errors.append(_("Row {0}: Bill of Materials not found for the Item {1}").format(d.idx, d.item_code))
if work_orders:
message = ["""<a href="#Form/Work Order/%s" target="_blank">%s</a>""" % \
(p, p) for p in work_orders]
msgprint(_("The following Work Orders were created:") + '\n' + new_line_sep(message))
if errors:
frappe.throw(_("Work Orders cannot be raised for:") + '\n' + new_line_sep(errors))
return work_orders | unknown | codeparrot/codeparrot-clean | ||
# Copyright 2015 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""layers module with higher level CudnnRNN primitives."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import sys
# pylint: disable=unused-import,wildcard-import
from tensorflow.contrib.cudnn_rnn.python.layers.cudnn_rnn import *
# pylint: enable=unused-import,wildcard-import
from tensorflow.contrib.cudnn_rnn.python.ops.cudnn_rnn_ops import CudnnCompatibleGRUCell
from tensorflow.contrib.cudnn_rnn.python.ops.cudnn_rnn_ops import CudnnCompatibleLSTMCell
from tensorflow.contrib.cudnn_rnn.python.ops.cudnn_rnn_ops import CudnnGRUSaveable
from tensorflow.contrib.cudnn_rnn.python.ops.cudnn_rnn_ops import CudnnLSTMSaveable
from tensorflow.contrib.cudnn_rnn.python.ops.cudnn_rnn_ops import CudnnRNNReluSaveable
from tensorflow.contrib.cudnn_rnn.python.ops.cudnn_rnn_ops import CudnnRNNTanhSaveable | unknown | codeparrot/codeparrot-clean | ||
#
# Copyright (C) 2008 The Android Open Source Project
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import re
import sys
from formatter import AbstractFormatter, DumbWriter
from color import Coloring
from command import PagedCommand, MirrorSafeCommand
class Help(PagedCommand, MirrorSafeCommand):
common = False
helpSummary = "Display detailed help on a command"
helpUsage = """
%prog [--all|command]
"""
helpDescription = """
Displays detailed usage information about a command.
"""
def _PrintAllCommands(self):
print 'usage: repo COMMAND [ARGS]'
print """
The complete list of recognized repo commands are:
"""
commandNames = self.commands.keys()
commandNames.sort()
maxlen = 0
for name in commandNames:
maxlen = max(maxlen, len(name))
fmt = ' %%-%ds %%s' % maxlen
for name in commandNames:
command = self.commands[name]
try:
summary = command.helpSummary.strip()
except AttributeError:
summary = ''
print fmt % (name, summary)
print """
See 'repo help <command>' for more information on a specific command.
"""
def _PrintCommonCommands(self):
print 'usage: repo COMMAND [ARGS]'
print """
The most commonly used repo commands are:
"""
commandNames = [name
for name in self.commands.keys()
if self.commands[name].common]
commandNames.sort()
maxlen = 0
for name in commandNames:
maxlen = max(maxlen, len(name))
fmt = ' %%-%ds %%s' % maxlen
for name in commandNames:
command = self.commands[name]
try:
summary = command.helpSummary.strip()
except AttributeError:
summary = ''
print fmt % (name, summary)
print """
See 'repo help <command>' for more information on a specific command.
See 'repo help --all' for a complete list of recognized commands.
"""
def _PrintCommandHelp(self, cmd):
class _Out(Coloring):
def __init__(self, gc):
Coloring.__init__(self, gc, 'help')
self.heading = self.printer('heading', attr='bold')
self.wrap = AbstractFormatter(DumbWriter())
def _PrintSection(self, heading, bodyAttr):
try:
body = getattr(cmd, bodyAttr)
except AttributeError:
return
if body == '' or body is None:
return
self.nl()
self.heading('%s', heading)
self.nl()
self.heading('%s', ''.ljust(len(heading), '-'))
self.nl()
me = 'repo %s' % cmd.NAME
body = body.strip()
body = body.replace('%prog', me)
asciidoc_hdr = re.compile(r'^\n?([^\n]{1,})\n([=~-]{2,})$')
for para in body.split("\n\n"):
if para.startswith(' '):
self.write('%s', para)
self.nl()
self.nl()
continue
m = asciidoc_hdr.match(para)
if m:
title = m.group(1)
section_type = m.group(2)
if section_type[0] in ('=', '-'):
p = self.heading
else:
def _p(fmt, *args):
self.write(' ')
self.heading(fmt, *args)
p = _p
p('%s', title)
self.nl()
p('%s', ''.ljust(len(title),section_type[0]))
self.nl()
continue
self.wrap.add_flowing_data(para)
self.wrap.end_paragraph(1)
self.wrap.end_paragraph(0)
out = _Out(self.manifest.globalConfig)
out._PrintSection('Summary', 'helpSummary')
cmd.OptionParser.print_help()
out._PrintSection('Description', 'helpDescription')
def _Options(self, p):
p.add_option('-a', '--all',
dest='show_all', action='store_true',
help='show the complete list of commands')
def Execute(self, opt, args):
if len(args) == 0:
if opt.show_all:
self._PrintAllCommands()
else:
self._PrintCommonCommands()
elif len(args) == 1:
name = args[0]
try:
cmd = self.commands[name]
except KeyError:
print >>sys.stderr, "repo: '%s' is not a repo command." % name
sys.exit(1)
cmd.manifest = self.manifest
self._PrintCommandHelp(cmd)
else:
self._PrintCommandHelp(self) | unknown | codeparrot/codeparrot-clean | ||
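The `_PrintAllCommands`/`_PrintCommonCommands` methods above align the command list by first measuring the longest name, then building a format string dynamically (`' %%-%ds %%s' % maxlen`). A Python 3 sketch of that two-pass layout:

```python
commands = {"sync": "Update working tree", "help": "Display detailed help"}

names = sorted(commands)
maxlen = max(len(n) for n in names)
# '%%' survives the first substitution as a literal '%', so with maxlen=4
# this yields ' %-4s %s': left-justify the name, then the summary.
fmt = ' %%-%ds %%s' % maxlen
lines = [fmt % (n, commands[n]) for n in names]
print('\n'.join(lines))
#  help Display detailed help
#  sync Update working tree
```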
from __future__ import unicode_literals
import datetime
from dateutil.parser import parse
from decimal import Decimal
import re
from django.core.exceptions import ObjectDoesNotExist, MultipleObjectsReturned
from django.utils import datetime_safe, importlib
from django.utils import six
from tastypie.bundle import Bundle
from tastypie.exceptions import ApiFieldError, NotFound
from tastypie.utils import dict_strip_unicode_keys, make_aware
class NOT_PROVIDED:
def __str__(self):
return 'No default provided.'
DATE_REGEX = re.compile(r'^(?P<year>\d{4})-(?P<month>\d{2})-(?P<day>\d{2}).*?$')
DATETIME_REGEX = re.compile(r'^(?P<year>\d{4})-(?P<month>\d{2})-(?P<day>\d{2})(T|\s+)(?P<hour>\d{2}):(?P<minute>\d{2}):(?P<second>\d{2}).*?$')
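A quick demonstration of what `DATE_REGEX` above captures from an ISO-style timestamp; the named groups make the year/month/day available by name:

```python
import re

DATE_REGEX = re.compile(r'^(?P<year>\d{4})-(?P<month>\d{2})-(?P<day>\d{2}).*?$')

m = DATE_REGEX.match('2015-04-30T18:30:00')
# The lazy .*? tail lets any time suffix through without capturing it.
print(m.group('year'), m.group('month'), m.group('day'))  # 2015 04 30
```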
# All the ApiField variants.
class ApiField(object):
"""The base implementation of a field used by the resources."""
dehydrated_type = 'string'
help_text = ''
def __init__(self, attribute=None, default=NOT_PROVIDED, null=False, blank=False, readonly=False, unique=False, help_text=None, use_in='all'):
"""
Sets up the field. This is generally called when the containing
``Resource`` is initialized.
Optionally accepts an ``attribute``, which should be a string naming
either an instance attribute or a callable on the object; it is used to
pull data off the object during ``dehydrate`` and to push data onto the
object during ``hydrate``. Defaults to ``None``, meaning data will be
manually accessed.
Optionally accepts a ``default``, which provides default data when the
object being ``dehydrated``/``hydrated`` has no data on the field.
Defaults to ``NOT_PROVIDED``.
Optionally accepts a ``null``, which indicates whether or not a
``None`` is allowable data on the field. Defaults to ``False``.
Optionally accepts a ``blank``, which indicates whether or not
data may be omitted on the field. Defaults to ``False``.
Optionally accepts a ``readonly``, which indicates whether the field
is used during the ``hydrate`` or not. Defaults to ``False``.
Optionally accepts a ``unique``, which indicates if the field is a
unique identifier for the object.
Optionally accepts ``help_text``, which lets you provide a
human-readable description of the field exposed at the schema level.
Defaults to the per-Field definition.
Optionally accepts ``use_in``. This may be one of ``list``, ``detail``,
``all`` or a callable which accepts a ``bundle`` and returns
``True`` or ``False``. Indicates whether this field will be included
during dehydration of a list of objects or a single object. If ``use_in``
is a callable, and returns ``True``, the field will be included during
dehydration.
Defaults to ``all``.
"""
# Track what the index thinks this field is called.
self.instance_name = None
self._resource = None
self.attribute = attribute
self._default = default
self.null = null
self.blank = blank
self.readonly = readonly
self.value = None
self.unique = unique
self.use_in = 'all'
if use_in in ['all', 'detail', 'list'] or callable(use_in):
self.use_in = use_in
if help_text:
self.help_text = help_text
def contribute_to_class(self, cls, name):
# Do the least we can here so that we don't hate ourselves in the
# morning.
self.instance_name = name
self._resource = cls
def has_default(self):
"""Returns a boolean of whether this field has a default value."""
return self._default is not NOT_PROVIDED
@property
def default(self):
"""Returns the default value for the field."""
if callable(self._default):
return self._default()
return self._default
def dehydrate(self, bundle, for_list=True):
"""
Takes data from the provided object and prepares it for the
resource.
"""
if self.attribute is not None:
# Check for `__` in the field for looking through the relation.
attrs = self.attribute.split('__')
current_object = bundle.obj
for attr in attrs:
previous_object = current_object
current_object = getattr(current_object, attr, None)
if current_object is None:
if self.has_default():
current_object = self._default
# Fall out of the loop, given any further attempts at
# accesses will fail miserably.
break
elif self.null:
current_object = None
# Fall out of the loop, given any further attempts at
# accesses will fail miserably.
break
else:
raise ApiFieldError("The object '%r' has an empty attribute '%s' and doesn't allow a default or null value." % (previous_object, attr))
if callable(current_object):
current_object = current_object()
return self.convert(current_object)
if self.has_default():
return self.convert(self.default)
else:
return None
def convert(self, value):
"""
Handles conversion between the data found and the type of the field.
Extending classes should override this method and provide correct
data coercion.
"""
return value
def hydrate(self, bundle):
"""
Takes data stored in the bundle for the field and returns it. Used for
taking simple data and building an instance object.
"""
if self.readonly:
return None
if self.instance_name not in bundle.data:
if getattr(self, 'is_related', False) and not getattr(self, 'is_m2m', False):
# We've got an FK (or alike field) & a possible parent object.
# Check for it.
if bundle.related_obj and bundle.related_name in (self.attribute, self.instance_name):
return bundle.related_obj
if self.blank:
return None
elif self.attribute and getattr(bundle.obj, self.attribute, None):
return getattr(bundle.obj, self.attribute)
elif self.instance_name and hasattr(bundle.obj, self.instance_name):
return getattr(bundle.obj, self.instance_name)
elif self.has_default():
if callable(self._default):
return self._default()
return self._default
elif self.null:
return None
else:
raise ApiFieldError("The '%s' field has no data and doesn't allow a default or null value." % self.instance_name)
return bundle.data[self.instance_name]
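The ``__``-separated attribute traversal used by ``dehydrate`` above can be sketched in isolation. ``traverse`` is a hypothetical helper, simplified to always fall back to ``None`` rather than raising:

```python
def traverse(obj, attribute):
    """Follow a '__'-separated attribute path across related objects,
    returning None as soon as a link in the chain is missing, and
    calling the final value if it turns out to be callable."""
    current = obj
    for attr in attribute.split('__'):
        current = getattr(current, attr, None)
        if current is None:
            return None
    if callable(current):
        current = current()
    return current
```

This mirrors the loop in ``ApiField.dehydrate``, minus the default/null/error handling that the real method layers on top.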
class CharField(ApiField):
"""
A text field of arbitrary length.
Covers both ``models.CharField`` and ``models.TextField``.
"""
dehydrated_type = 'string'
help_text = 'Unicode string data. Ex: "Hello World"'
def convert(self, value):
if value is None:
return None
return six.text_type(value)
class FileField(ApiField):
"""
A file-related field.
Covers both ``models.FileField`` and ``models.ImageField``.
"""
dehydrated_type = 'string'
help_text = 'A file URL as a string. Ex: "http://media.example.com/media/photos/my_photo.jpg"'
def convert(self, value):
if value is None:
return None
try:
# Try to return the URL if it's a ``File``, falling back to the string
# itself if it's been overridden or is a default.
return getattr(value, 'url', value)
except ValueError:
return None
class IntegerField(ApiField):
"""
An integer field.
Covers ``models.IntegerField``, ``models.PositiveIntegerField``,
``models.PositiveSmallIntegerField`` and ``models.SmallIntegerField``.
"""
dehydrated_type = 'integer'
help_text = 'Integer data. Ex: 2673'
def convert(self, value):
if value is None:
return None
return int(value)
class FloatField(ApiField):
"""
A floating point field.
"""
dehydrated_type = 'float'
help_text = 'Floating point numeric data. Ex: 26.73'
def convert(self, value):
if value is None:
return None
return float(value)
class DecimalField(ApiField):
"""
A decimal field.
"""
dehydrated_type = 'decimal'
help_text = 'Fixed precision numeric data. Ex: 26.73'
def convert(self, value):
if value is None:
return None
return Decimal(value)
def hydrate(self, bundle):
value = super(DecimalField, self).hydrate(bundle)
if value and not isinstance(value, Decimal):
value = Decimal(value)
return value
class BooleanField(ApiField):
"""
A boolean field.
Covers both ``models.BooleanField`` and ``models.NullBooleanField``.
"""
dehydrated_type = 'boolean'
help_text = 'Boolean data. Ex: True'
def convert(self, value):
if value is None:
return None
return bool(value)
class ListField(ApiField):
"""
A list field.
"""
dehydrated_type = 'list'
help_text = "A list of data. Ex: ['abc', 26.73, 8]"
def convert(self, value):
if value is None:
return None
return list(value)
class DictField(ApiField):
"""
A dictionary field.
"""
dehydrated_type = 'dict'
help_text = "A dictionary of data. Ex: {'price': 26.73, 'name': 'Daniel'}"
def convert(self, value):
if value is None:
return None
return dict(value)
class DateField(ApiField):
"""
A date field.
"""
dehydrated_type = 'date'
help_text = 'A date as a string. Ex: "2010-11-10"'
def convert(self, value):
if value is None:
return None
if isinstance(value, six.string_types):
match = DATE_REGEX.search(value)
if match:
data = match.groupdict()
return datetime_safe.date(int(data['year']), int(data['month']), int(data['day']))
else:
raise ApiFieldError("Date provided to '%s' field doesn't appear to be a valid date string: '%s'" % (self.instance_name, value))
return value
def hydrate(self, bundle):
value = super(DateField, self).hydrate(bundle)
if value and not hasattr(value, 'year'):
try:
# Try to rip a date/datetime out of it.
value = make_aware(parse(value))
if hasattr(value, 'hour'):
value = value.date()
except ValueError:
pass
return value
class DateTimeField(ApiField):
"""
A datetime field.
"""
dehydrated_type = 'datetime'
help_text = 'A date & time as a string. Ex: "2010-11-10T03:07:43"'
def convert(self, value):
if value is None:
return None
if isinstance(value, six.string_types):
match = DATETIME_REGEX.search(value)
if match:
data = match.groupdict()
return make_aware(datetime_safe.datetime(int(data['year']), int(data['month']), int(data['day']), int(data['hour']), int(data['minute']), int(data['second'])))
else:
raise ApiFieldError("Datetime provided to '%s' field doesn't appear to be a valid datetime string: '%s'" % (self.instance_name, value))
return value
def hydrate(self, bundle):
value = super(DateTimeField, self).hydrate(bundle)
if value and not hasattr(value, 'year'):
if isinstance(value, six.string_types):
try:
# Try to rip a date/datetime out of it.
value = make_aware(parse(value))
except (ValueError, TypeError):
raise ApiFieldError("Datetime provided to '%s' field doesn't appear to be a valid datetime string: '%s'" % (self.instance_name, value))
else:
raise ApiFieldError("Datetime provided to '%s' field must be a string: %s" % (self.instance_name, value))
return value
class RelatedField(ApiField):
"""
Provides access to data that is related within the database.
The ``RelatedField`` base class is not intended for direct use but provides
functionality that ``ToOneField`` and ``ToManyField`` build upon.
The contents of this field actually point to another ``Resource``,
rather than the related object. This allows the field to represent its data
in different ways.
The abstractions based around this are "leaky" in that, unlike the other
fields provided by ``tastypie``, these fields don't handle arbitrary objects
very well. The subclasses use Django's ORM layer to make things go, though
there is no ORM-specific code at this level.
"""
dehydrated_type = 'related'
is_related = True
self_referential = False
help_text = 'A related resource. Can be either a URI or set of nested resource data.'
def __init__(self, to, attribute, related_name=None, default=NOT_PROVIDED, null=False, blank=False, readonly=False, full=False, unique=False, help_text=None, use_in='all', full_list=True, full_detail=True):
"""
Builds the field and prepares it to access to related data.
The ``to`` argument should point to a ``Resource`` class, NOT
to a ``Model``. Required.
The ``attribute`` argument should specify what field/callable points to
the related data on the instance object. Required.
Optionally accepts a ``related_name`` argument. Currently unused, as
unlike Django's ORM layer, reverse relations between ``Resource``
classes are not automatically created. Defaults to ``None``.
Optionally accepts a ``null``, which indicates whether or not a
``None`` is allowable data on the field. Defaults to ``False``.
Optionally accepts a ``blank``, which indicates whether or not
data may be omitted on the field. Defaults to ``False``.
Optionally accepts a ``readonly``, which indicates whether the field
is used during the ``hydrate`` or not. Defaults to ``False``.
Optionally accepts a ``full``, which indicates how the related
``Resource`` will appear post-``dehydrate``. If ``False``, the
related ``Resource`` will appear as a URL to the endpoint of that
resource. If ``True``, the result of the sub-resource's
``dehydrate`` will be included in full.
Optionally accepts a ``unique``, which indicates if the field is a
unique identifier for the object.
Optionally accepts ``help_text``, which lets you provide a
human-readable description of the field exposed at the schema level.
Defaults to the per-Field definition.
Optionally accepts ``use_in``. This may be one of ``list``, ``detail``,
``all`` or a callable which accepts a ``bundle`` and returns
``True`` or ``False``. Indicates whether this field will be included
during dehydration of a list of objects or a single object. If ``use_in``
is a callable, and returns ``True``, the field will be included during
dehydration.
Defaults to ``all``.
Optionally accepts a ``full_list``, which indicates whether or not
data should be fully dehydrated when the request is for a list of
resources. Accepts ``True``, ``False`` or a callable that accepts
a bundle and returns ``True`` or ``False``. Depends on ``full``
being ``True``. Defaults to ``True``.
Optionally accepts a ``full_detail``, which indicates whether or not
data should be fully dehydrated when the request is for a single
resource. Accepts ``True``, ``False`` or a callable that accepts a
bundle and returns ``True`` or ``False``. Depends on ``full``
being ``True``. Defaults to ``True``.
"""
self.instance_name = None
self._resource = None
self.to = to
self.attribute = attribute
self.related_name = related_name
self._default = default
self.null = null
self.blank = blank
self.readonly = readonly
self.full = full
self.api_name = None
self.resource_name = None
self.unique = unique
self._to_class = None
self.use_in = 'all'
self.full_list = full_list
self.full_detail = full_detail
if use_in in ['all', 'detail', 'list'] or callable(use_in):
self.use_in = use_in
if self.to == 'self':
self.self_referential = True
self._to_class = self.__class__
if help_text:
self.help_text = help_text
def contribute_to_class(self, cls, name):
super(RelatedField, self).contribute_to_class(cls, name)
# Check if we're self-referential and hook it up.
# We can't do this quite like Django because there's no ``AppCache``
# here (which I think we should avoid as long as possible).
if self.self_referential or self.to == 'self':
self._to_class = cls
def get_related_resource(self, related_instance):
"""
Instantiates the related resource.
"""
related_resource = self.to_class()
# Fix the ``api_name`` if it's not present.
if related_resource._meta.api_name is None:
if self._resource and self._resource._meta.api_name is not None:
related_resource._meta.api_name = self._resource._meta.api_name
# Try to be efficient about DB queries.
related_resource.instance = related_instance
return related_resource
@property
def to_class(self):
# We need to be lazy here, because when the metaclass constructs the
# Resources, other classes may not exist yet.
# That said, memoize this so we never have to relookup/reimport.
if self._to_class:
return self._to_class
if not isinstance(self.to, six.string_types):
self._to_class = self.to
return self._to_class
# It's a string. Let's figure it out.
if '.' in self.to:
# Try to import.
module_bits = self.to.split('.')
module_path, class_name = '.'.join(module_bits[:-1]), module_bits[-1]
module = importlib.import_module(module_path)
else:
# We've got a bare class name here, which won't work (No AppCache
# to rely on). Try to throw a useful error.
raise ImportError("Tastypie requires a Python-style path (<module.module.Class>) to lazy load related resources. Only given '%s'." % self.to)
self._to_class = getattr(module, class_name, None)
if self._to_class is None:
raise ImportError("Module '%s' does not appear to have a class called '%s'." % (module_path, class_name))
return self._to_class
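The lazy dotted-path lookup in ``to_class`` above boils down to a small import routine. ``resolve_dotted_path`` is a hypothetical standalone sketch of that logic:

```python
import importlib

def resolve_dotted_path(path):
    """Resolve 'package.module.ClassName' to the class object, mirroring
    the lazy lookup in ``to_class``. A bare class name can't be resolved
    (there is no app registry to consult), so it raises ImportError."""
    if '.' not in path:
        raise ImportError(
            "A full Python-style path (module.Class) is required, got %r" % path)
    module_path, class_name = path.rsplit('.', 1)
    module = importlib.import_module(module_path)
    cls = getattr(module, class_name, None)
    if cls is None:
        raise ImportError(
            "Module %r has no class %r" % (module_path, class_name))
    return cls
```

The real property additionally memoizes the result on ``self._to_class`` so the import happens at most once per field.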
def dehydrate_related(self, bundle, related_resource, for_list=True):
"""
Based on the ``full`` settings, returns either the endpoint URI or the
data from ``full_dehydrate`` for the related resource.
"""
should_dehydrate_full_resource = self.should_full_dehydrate(bundle, for_list=for_list)
if not should_dehydrate_full_resource:
# Be a good netizen.
return related_resource.get_resource_uri(bundle)
else:
# ZOMG extra data and big payloads.
bundle = related_resource.build_bundle(
obj=related_resource.instance,
request=bundle.request,
objects_saved=bundle.objects_saved
)
return related_resource.full_dehydrate(bundle)
def resource_from_uri(self, fk_resource, uri, request=None, related_obj=None, related_name=None):
"""
Given a URI, attempts to load the related resource using the
identifiers in the URI.
"""
try:
obj = fk_resource.get_via_uri(uri, request=request)
bundle = fk_resource.build_bundle(
obj=obj,
request=request
)
return fk_resource.full_dehydrate(bundle)
except ObjectDoesNotExist:
raise ApiFieldError("Could not find the provided object via resource URI '%s'." % uri)
def resource_from_data(self, fk_resource, data, request=None, related_obj=None, related_name=None):
"""
Given a dictionary-like structure, creates a fresh related
resource using that data.
"""
# Try to hydrate the data provided.
data = dict_strip_unicode_keys(data)
fk_bundle = fk_resource.build_bundle(
data=data,
request=request
)
if related_obj:
fk_bundle.related_obj = related_obj
fk_bundle.related_name = related_name
unique_keys = dict((k, v) for k, v in data.items() if k == 'pk' or (hasattr(fk_resource, k) and getattr(fk_resource, k).unique))
# If we have no unique keys, we shouldn't go look for some resource that
# happens to match other kwargs. In the case of a create, it might be the
# completely wrong resource.
# We also need to check to see if updates are allowed on the FK resource.
if unique_keys and fk_resource.can_update():
try:
return fk_resource.obj_update(fk_bundle, skip_errors=True, **data)
except (NotFound, TypeError):
try:
# Attempt lookup by primary key
return fk_resource.obj_update(fk_bundle, skip_errors=True, **unique_keys)
except NotFound:
pass
except MultipleObjectsReturned:
pass
# If we shouldn't update a resource, or we couldn't find a matching
# resource we'll just return a populated bundle instead
# of mistakenly updating something that should be read-only.
fk_bundle = fk_resource.full_hydrate(fk_bundle)
fk_resource.is_valid(fk_bundle)
return fk_bundle
def resource_from_pk(self, fk_resource, obj, request=None, related_obj=None, related_name=None):
"""
Given an object with a ``pk`` attribute, attempts to load the
related resource via that PK.
"""
bundle = fk_resource.build_bundle(
obj=obj,
request=request
)
return fk_resource.full_dehydrate(bundle)
def build_related_resource(self, value, request=None, related_obj=None, related_name=None):
"""
Returns a bundle of data built by the related resource, usually via
``hydrate`` with the data provided.
Accepts either a URI, a data dictionary (or dictionary-like structure)
or an object with a ``pk``.
"""
self.fk_resource = self.to_class()
kwargs = {
'request': request,
'related_obj': related_obj,
'related_name': related_name,
}
if isinstance(value, Bundle):
# Already hydrated, probably nested bundles. Just return.
return value
elif isinstance(value, six.string_types):
# We got a URI. Load the object and assign it.
return self.resource_from_uri(self.fk_resource, value, **kwargs)
elif hasattr(value, 'items'):
# We've got a data dictionary.
# Since this leads to creation, this is the only one of these
# methods that might care about "parent" data.
return self.resource_from_data(self.fk_resource, value, **kwargs)
elif hasattr(value, 'pk'):
# We've got an object with a primary key.
return self.resource_from_pk(self.fk_resource, value, **kwargs)
else:
raise ApiFieldError("The '%s' field was given data that was not a URI, not a dictionary-like structure and does not have a 'pk' attribute: %s." % (self.instance_name, value))
def should_full_dehydrate(self, bundle, for_list):
"""
Based on ``full``, ``full_list`` and ``full_detail``, returns ``True`` or ``False``
indicating whether the resource should be fully dehydrated.
"""
should_dehydrate_full_resource = False
if self.full:
is_details_view = not for_list
if is_details_view:
if (not callable(self.full_detail) and self.full_detail) or (callable(self.full_detail) and self.full_detail(bundle)):
should_dehydrate_full_resource = True
else:
if (not callable(self.full_list) and self.full_list) or (callable(self.full_list) and self.full_list(bundle)):
should_dehydrate_full_resource = True
return should_dehydrate_full_resource
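The decision logic in ``should_full_dehydrate`` can be condensed into one pure function. ``wants_full`` is a hypothetical helper, not tastypie API, that collapses the two parallel ``callable``/non-callable branches:

```python
def wants_full(full, flag, bundle=None):
    """Evaluate one of the full_list/full_detail switches: ``full`` gates
    everything; a callable flag is handed the bundle and its return value
    decides; otherwise the flag's plain truthiness decides."""
    if not full:
        return False
    if callable(flag):
        return bool(flag(bundle))
    return bool(flag)
```

For a detail view you would pass ``full_detail`` as the flag; for a list view, ``full_list``.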
class ToOneField(RelatedField):
"""
Provides access to related data via foreign key.
This subclass requires Django's ORM layer to work properly.
"""
help_text = 'A single related resource. Can be either a URI or set of nested resource data.'
def __init__(self, to, attribute, related_name=None, default=NOT_PROVIDED,
null=False, blank=False, readonly=False, full=False,
unique=False, help_text=None, use_in='all', full_list=True, full_detail=True):
super(ToOneField, self).__init__(
to, attribute, related_name=related_name, default=default,
null=null, blank=blank, readonly=readonly, full=full,
unique=unique, help_text=help_text, use_in=use_in,
full_list=full_list, full_detail=full_detail
)
self.fk_resource = None
def dehydrate(self, bundle, for_list=True):
foreign_obj = None
error_to_raise = None
if isinstance(self.attribute, six.string_types):
attrs = self.attribute.split('__')
foreign_obj = bundle.obj
for attr in attrs:
previous_obj = foreign_obj
try:
foreign_obj = getattr(foreign_obj, attr, None)
except ObjectDoesNotExist:
foreign_obj = None
elif callable(self.attribute):
previous_obj = bundle.obj
foreign_obj = self.attribute(bundle)
if not foreign_obj:
if not self.null:
if callable(self.attribute):
raise ApiFieldError("The related resource for resource %s could not be found." % (previous_obj))
else:
raise ApiFieldError("The model '%r' has an empty attribute '%s' and doesn't allow a null value." % (previous_obj, attr))
return None
self.fk_resource = self.get_related_resource(foreign_obj)
fk_bundle = Bundle(obj=foreign_obj, request=bundle.request)
return self.dehydrate_related(fk_bundle, self.fk_resource, for_list=for_list)
def hydrate(self, bundle):
value = super(ToOneField, self).hydrate(bundle)
if value is None:
return value
return self.build_related_resource(value, request=bundle.request)
class ForeignKey(ToOneField):
"""
A convenience subclass for those who prefer to mirror ``django.db.models``.
"""
pass
class OneToOneField(ToOneField):
"""
A convenience subclass for those who prefer to mirror ``django.db.models``.
"""
pass
class ToManyField(RelatedField):
"""
Provides access to related data via a join table.
This subclass requires Django's ORM layer to work properly.
Note that the ``hydrate`` portions of this field are quite different than
any other field. ``hydrate_m2m`` actually handles the data and relations.
This is due to the way Django implements M2M relationships.
"""
is_m2m = True
help_text = 'Many related resources. Can be either a list of URIs or list of individually nested resource data.'
def __init__(self, to, attribute, related_name=None, default=NOT_PROVIDED,
null=False, blank=False, readonly=False, full=False,
unique=False, help_text=None, use_in='all', full_list=True, full_detail=True):
super(ToManyField, self).__init__(
to, attribute, related_name=related_name, default=default,
null=null, blank=blank, readonly=readonly, full=full,
unique=unique, help_text=help_text, use_in=use_in,
full_list=full_list, full_detail=full_detail
)
self.m2m_bundles = []
def dehydrate(self, bundle, for_list=True):
if not bundle.obj or not bundle.obj.pk:
if not self.null:
raise ApiFieldError("The model '%r' does not have a primary key and can not be used in a ToMany context." % bundle.obj)
return []
the_m2ms = None
previous_obj = bundle.obj
attr = self.attribute
if isinstance(self.attribute, six.string_types):
attrs = self.attribute.split('__')
the_m2ms = bundle.obj
for attr in attrs:
previous_obj = the_m2ms
try:
the_m2ms = getattr(the_m2ms, attr, None)
except ObjectDoesNotExist:
the_m2ms = None
if not the_m2ms:
break
elif callable(self.attribute):
the_m2ms = self.attribute(bundle)
if not the_m2ms:
if not self.null:
raise ApiFieldError("The model '%r' has an empty attribute '%s' and doesn't allow a null value." % (previous_obj, attr))
return []
self.m2m_resources = []
m2m_dehydrated = []
# TODO: Also model-specific and leaky. Relies on there being a
# ``Manager`` there.
for m2m in the_m2ms.all():
m2m_resource = self.get_related_resource(m2m)
m2m_bundle = Bundle(obj=m2m, request=bundle.request)
self.m2m_resources.append(m2m_resource)
m2m_dehydrated.append(self.dehydrate_related(m2m_bundle, m2m_resource, for_list=for_list))
return m2m_dehydrated
def hydrate(self, bundle):
pass
def hydrate_m2m(self, bundle):
if self.readonly:
return None
if bundle.data.get(self.instance_name) is None:
if self.blank:
return []
elif self.null:
return []
else:
raise ApiFieldError("The '%s' field has no data and doesn't allow a null value." % self.instance_name)
m2m_hydrated = []
for value in bundle.data.get(self.instance_name):
if value is None:
continue
kwargs = {
'request': bundle.request,
}
if self.related_name:
kwargs['related_obj'] = bundle.obj
kwargs['related_name'] = self.related_name
m2m_hydrated.append(self.build_related_resource(value, **kwargs))
return m2m_hydrated
class ManyToManyField(ToManyField):
"""
A convenience subclass for those who prefer to mirror ``django.db.models``.
"""
pass
class OneToManyField(ToManyField):
"""
A convenience subclass for those who prefer to mirror ``django.db.models``.
"""
pass
class TimeField(ApiField):
dehydrated_type = 'time'
help_text = 'A time as string. Ex: "20:05:23"'
def dehydrate(self, obj, for_list=True):
return self.convert(super(TimeField, self).dehydrate(obj))
def convert(self, value):
if isinstance(value, six.string_types):
return self.to_time(value)
return value
def to_time(self, s):
try:
dt = parse(s)
except (ValueError, TypeError) as e:
raise ApiFieldError(str(e))
else:
return datetime.time(dt.hour, dt.minute, dt.second)
def hydrate(self, bundle):
value = super(TimeField, self).hydrate(bundle)
if value and not isinstance(value, datetime.time):
value = self.to_time(value)
return value | unknown | codeparrot/codeparrot-clean | ||
# Volatility
# Copyright (C) 2007-2013 Volatility Foundation
#
# This file is part of Volatility.
#
# Volatility is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2 of the License, or
# (at your option) any later version.
#
# Volatility is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Volatility. If not, see <http://www.gnu.org/licenses/>.
#
"""
@author: Andrew Case
@license: GNU General Public License 2.0
@contact: atcuno@gmail.com
@organization:
"""
import struct, string
import volatility.obj as obj
import volatility.debug as debug
import volatility.addrspace as addrspace
import volatility.plugins.mac.common as mac_common
import volatility.plugins.mac.pstasks as mac_tasks
from volatility.renderers import TreeGrid
bash_vtypes = {
'bash32_hist_entry': [ 0xc, {
'line': [0x0, ['pointer', ['String', dict(length = 1024)]]],
'timestamp': [0x4, ['pointer', ['String', dict(length = 1024)]]],
'data': [0x8, ['pointer', ['void']]],
}],
'bash64_hist_entry': [ 24, {
'line': [0, ['pointer', ['String', dict(length = 1024)]]],
'timestamp': [8, ['pointer', ['String', dict(length = 1024)]]],
'data': [16, ['pointer', ['void']]],
}],
}
class _mac_hist_entry(obj.CType):
"""A class for history entries"""
def is_valid(self):
line_addr = self.line_ptr()
time_addr = self.time_ptr()
if (not obj.CType.is_valid(self) or
not self.obj_vm.is_valid_address(line_addr) or
not self.obj_vm.is_valid_address(time_addr)):
return False
ts = self.obj_vm.read(time_addr, 256)
if not ts:
return False
idx = ts.find("\x00")
if idx != -1:
ts = ts[:idx]
# At this point in time, the epoch timestamp string will
# never be shorter than 10 characters, and the stamp is
# always preceded by a pound/hash character.
if len(ts) < 10 or str(ts)[0] != "#":
return False
# The final check is to make sure the entire string
# is composed of numbers. Try to convert to an int.
try:
int(str(ts)[1:])
except ValueError:
return False
return True
def line(self):
line_addr = self.line_ptr()
buf = self.obj_vm.read(line_addr, 256)
if buf:
idx = buf.find("\x00")
if idx != -1:
buf = buf[:idx]
ret = "".join([c for c in buf if c in string.printable])
else:
ret = ""
return ret
@property
def time_as_integer(self):
# Get the string and remove the leading "#" from the timestamp
time_addr = self.time_ptr()
ts = self.obj_vm.read(time_addr, 256)
ts = ts[1:]
idx = ts.find("\x00")
if idx != -1:
ts = ts[:idx]
# Convert the string into an integer (number of seconds)
return int(ts)
def time_object(self):
nsecs = self.time_as_integer
# Build a timestamp object from the integer
time_val = struct.pack("<I", nsecs)
time_buf = addrspace.BufferAddressSpace(self.obj_vm.get_config(), data = time_val)
time_obj = obj.Object("UnixTimeStamp", offset = 0, vm = time_buf, is_utc = True)
return time_obj
def line_ptr(self):
addr = self.m("line").obj_offset
return self.read_ptr(addr)
def time_ptr(self):
addr = self.m("timestamp").obj_offset
return self.read_ptr(addr)
class bash64_hist_entry(_mac_hist_entry):
def read_ptr(self, addr):
addr = self.obj_vm.read(addr, 8)
addr = struct.unpack("<Q", addr)[0]
return addr
class bash32_hist_entry(_mac_hist_entry):
def read_ptr(self, addr):
addr = self.obj_vm.read(addr, 4)
addr = struct.unpack("<I", addr)[0]
return addr
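The two ``read_ptr`` methods above differ only in pointer width and struct format. A hypothetical standalone sketch of the same decode, for illustration:

```python
import struct

def read_le_pointer(buf, width):
    """Decode a little-endian pointer of the given width (4 or 8 bytes)
    from a raw byte buffer, as bash32_hist_entry and bash64_hist_entry
    do with "<I" and "<Q" respectively."""
    fmt = {4: "<I", 8: "<Q"}[width]
    return struct.unpack(fmt, buf[:width])[0]
```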
class MacBashTypes(obj.ProfileModification):
conditions = {"os" : lambda x : x in ["mac"]}
def modification(self, profile):
profile.vtypes.update(bash_vtypes)
profile.object_classes.update({"bash32_hist_entry": bash32_hist_entry, "bash64_hist_entry": bash64_hist_entry})
class mac_bash(mac_tasks.mac_tasks):
"""Recover bash history from bash process memory"""
def __init__(self, config, *args, **kwargs):
mac_tasks.mac_tasks.__init__(self, config, *args, **kwargs)
self._config.add_option('SCAN_ALL', short_option = 'A', default = False, help = 'scan all processes, not just those named bash', action = 'store_true')
def unified_output(self, data):
return TreeGrid([("Pid", int),
("Name", str),
("Command Time", str),
("Command", str),
], self.generator(data))
def generator(self, data):
for task in data:
if not (self._config.SCAN_ALL or str(task.p_comm) == "bash"):
continue
for hist_entry in task.bash_history_entries():
yield (0, [
int(task.p_pid),
str(task.p_comm),
str(hist_entry.time_object()),
str(hist_entry.line()),
])
def render_text(self, outfd, data):
self.table_header(outfd, [("Pid", "8"),
("Name", "20"),
("Command Time", "30"),
("Command", ""),])
for task in data:
if not (self._config.SCAN_ALL or str(task.p_comm) == "bash"):
continue
for hist_entry in task.bash_history_entries():
self.table_row(outfd, task.p_pid, task.p_comm,
hist_entry.time_object(),
hist_entry.line()) | unknown | codeparrot/codeparrot-clean | ||
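The timestamp sanity checks performed by ``_mac_hist_entry.is_valid`` can be expressed as one predicate over the raw bytes read from memory. ``looks_like_bash_timestamp`` is a hypothetical helper sketching that logic:

```python
def looks_like_bash_timestamp(raw):
    """Apply the same checks as is_valid() above: truncate at the first
    NUL, require at least 10 characters, a leading '#', and only digits
    after the '#' (bash writes history stamps as '#<epoch seconds>')."""
    if not raw:
        return False
    idx = raw.find("\x00")
    if idx != -1:
        raw = raw[:idx]
    if len(raw) < 10 or not raw.startswith("#"):
        return False
    try:
        int(raw[1:])
    except ValueError:
        return False
    return True
```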
# Copyright 2012 OpenStack Foundation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import datetime
from oslo_config import cfg
import webob.exc
from nova.api.openstack import extensions
from nova.api.openstack import wsgi
from nova import compute
from nova.i18n import _
from nova import utils
CONF = cfg.CONF
CONF.import_opt('compute_topic', 'nova.compute.rpcapi')
ALIAS = 'os-instance-usage-audit-log'
authorize = extensions.os_compute_authorizer(ALIAS)
class InstanceUsageAuditLogController(wsgi.Controller):
def __init__(self):
self.host_api = compute.HostAPI()
@extensions.expected_errors(())
def index(self, req):
context = req.environ['nova.context']
authorize(context)
task_log = self._get_audit_task_logs(context)
return {'instance_usage_audit_logs': task_log}
@extensions.expected_errors(400)
def show(self, req, id):
context = req.environ['nova.context']
authorize(context)
try:
if '.' in id:
before_date = datetime.datetime.strptime(str(id),
"%Y-%m-%d %H:%M:%S.%f")
else:
before_date = datetime.datetime.strptime(str(id),
"%Y-%m-%d %H:%M:%S")
except ValueError:
msg = _("Invalid timestamp for date %s") % id
raise webob.exc.HTTPBadRequest(explanation=msg)
task_log = self._get_audit_task_logs(context,
before=before_date)
return {'instance_usage_audit_log': task_log}
def _get_audit_task_logs(self, context, begin=None, end=None,
before=None):
"""Returns a full log for all instance usage audit tasks on all
computes.
:param begin: datetime beginning of audit period to get logs for.
Defaults to the beginning of the most recently completed
audit period prior to the 'before' date.
:param end: datetime ending of audit period to get logs for.
Defaults to the ending of the most recently completed
audit period prior to the 'before' date.
:param before: By default we look for the audit period most recently
completed before this datetime. Has no effect if both begin and end
are specified.
"""
defbegin, defend = utils.last_completed_audit_period(before=before)
if begin is None:
begin = defbegin
if end is None:
end = defend
task_logs = self.host_api.task_log_get_all(context,
"instance_usage_audit",
begin, end)
# We do it this way to include disabled compute services,
# which can have instances on them. (mdragon)
filters = {'topic': CONF.compute_topic}
services = self.host_api.service_get_all(context, filters=filters)
hosts = set(serv['host'] for serv in services)
seen_hosts = set()
done_hosts = set()
running_hosts = set()
total_errors = 0
total_items = 0
for tlog in task_logs:
seen_hosts.add(tlog['host'])
if tlog['state'] == "DONE":
done_hosts.add(tlog['host'])
if tlog['state'] == "RUNNING":
running_hosts.add(tlog['host'])
total_errors += tlog['errors']
total_items += tlog['task_items']
log = {tl['host']: dict(state=tl['state'],
instances=tl['task_items'],
errors=tl['errors'],
message=tl['message'])
for tl in task_logs}
missing_hosts = hosts - seen_hosts
overall_status = "%s hosts done. %s errors." % (
'ALL' if len(done_hosts) == len(hosts)
else "%s of %s" % (len(done_hosts), len(hosts)),
total_errors)
return dict(period_beginning=str(begin),
period_ending=str(end),
num_hosts=len(hosts),
num_hosts_done=len(done_hosts),
num_hosts_running=len(running_hosts),
num_hosts_not_run=len(missing_hosts),
hosts_not_run=list(missing_hosts),
total_instances=total_items,
total_errors=total_errors,
overall_status=overall_status,
log=log)
class InstanceUsageAuditLog(extensions.V3APIExtensionBase):
"""Admin-only Task Log Monitoring."""
name = "OSInstanceUsageAuditLog"
alias = ALIAS
version = 1
def get_resources(self):
ext = extensions.ResourceExtension('os-instance_usage_audit_log',
InstanceUsageAuditLogController())
return [ext]
def get_controller_extensions(self):
return [] | unknown | codeparrot/codeparrot-clean | ||
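The overall-status line assembled near the end of `_get_audit_task_logs` can be sketched as a standalone helper (hypothetical name, not part of the Nova API):

```python
def summarize_audit_status(done_hosts, hosts, total_errors):
    # Mirrors the formatting above: "ALL hosts done." when every host
    # finished its audit task, otherwise "k of n hosts done."
    done = ('ALL' if len(done_hosts) == len(hosts)
            else "%s of %s" % (len(done_hosts), len(hosts)))
    return "%s hosts done. %s errors." % (done, total_errors)
```

Note the original string does not pluralize, so a single error still renders as "1 errors."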
# -*- coding:utf-8 -*-
import gettext
import json
import os
from os import path
import unittest
from django.conf import settings
from django.core.urlresolvers import reverse
from django.test import (
LiveServerTestCase, TestCase, modify_settings, override_settings)
from django.utils import six
from django.utils._os import upath
from django.utils.module_loading import import_string
from django.utils.translation import override, LANGUAGE_SESSION_KEY
from ..urls import locale_dir
class I18NTests(TestCase):
""" Tests django views in django/views/i18n.py """
urls = 'view_tests.urls'
def test_setlang(self):
"""
The set_language view can be used to change the session language.
The user is redirected to the 'next' argument if provided.
"""
for lang_code, lang_name in settings.LANGUAGES:
post_data = dict(language=lang_code, next='/')
response = self.client.post('/i18n/setlang/', data=post_data)
self.assertRedirects(response, 'http://testserver/')
self.assertEqual(self.client.session[LANGUAGE_SESSION_KEY], lang_code)
def test_setlang_unsafe_next(self):
"""
The set_language view only redirects to the 'next' argument if it is
"safe".
"""
lang_code, lang_name = settings.LANGUAGES[0]
post_data = dict(language=lang_code, next='//unsafe/redirection/')
response = self.client.post('/i18n/setlang/', data=post_data)
self.assertEqual(response.url, 'http://testserver/')
self.assertEqual(self.client.session[LANGUAGE_SESSION_KEY], lang_code)
def test_setlang_reversal(self):
self.assertEqual(reverse('set_language'), '/i18n/setlang/')
def test_setlang_cookie(self):
# we force saving language to a cookie rather than a session
# by excluding session middleware and those which do require it
test_settings = dict(
MIDDLEWARE_CLASSES=('django.middleware.common.CommonMiddleware',),
LANGUAGE_COOKIE_NAME='mylanguage',
LANGUAGE_COOKIE_AGE=3600 * 7 * 2,
LANGUAGE_COOKIE_DOMAIN='.example.com',
LANGUAGE_COOKIE_PATH='/test/',
)
with self.settings(**test_settings):
post_data = dict(language='pl', next='/views/')
response = self.client.post('/i18n/setlang/', data=post_data)
language_cookie = response.cookies.get('mylanguage')
self.assertEqual(language_cookie.value, 'pl')
self.assertEqual(language_cookie['domain'], '.example.com')
self.assertEqual(language_cookie['path'], '/test/')
self.assertEqual(language_cookie['max-age'], 3600 * 7 * 2)
def test_jsi18n(self):
"""The javascript_catalog can be deployed with language settings"""
for lang_code in ['es', 'fr', 'ru']:
with override(lang_code):
catalog = gettext.translation('djangojs', locale_dir, [lang_code])
if six.PY3:
trans_txt = catalog.gettext('this is to be translated')
else:
trans_txt = catalog.ugettext('this is to be translated')
response = self.client.get('/jsi18n/')
# response content must include a line like:
# "this is to be translated": <value of trans_txt Python variable>
# json.dumps() is used to be able to check unicode strings
self.assertContains(response, json.dumps(trans_txt), 1)
if lang_code == 'fr':
# Message with context (msgctxt)
self.assertContains(response, r'"month name\u0004May": "mai"', 1)
class JsI18NTests(TestCase):
"""
Tests django views in django/views/i18n.py that need to change
settings.LANGUAGE_CODE.
"""
urls = 'view_tests.urls'
def test_jsi18n_with_missing_en_files(self):
"""
The javascript_catalog shouldn't load the fallback language in the
case that the current selected language is actually the one translated
from, and hence missing translation files completely.
This happens easily when you're translating from English to other
languages and you've set settings.LANGUAGE_CODE to some other language
than English.
"""
with self.settings(LANGUAGE_CODE='es'), override('en-us'):
response = self.client.get('/jsi18n/')
self.assertNotContains(response, 'esto tiene que ser traducido')
def test_jsi18n_fallback_language(self):
"""
Let's make sure that the fallback language is still working properly
in cases where the selected language cannot be found.
"""
with self.settings(LANGUAGE_CODE='fr'), override('fi'):
response = self.client.get('/jsi18n/')
self.assertContains(response, 'il faut le traduire')
def test_i18n_language_non_english_default(self):
"""
Check if the Javascript i18n view returns an empty language catalog
if the default language is non-English, the selected language
        is English and there is no 'en' translation available. See #13388,
#3594 and #13726 for more details.
"""
with self.settings(LANGUAGE_CODE='fr'), override('en-us'):
response = self.client.get('/jsi18n/')
self.assertNotContains(response, 'Choisir une heure')
@modify_settings(INSTALLED_APPS={'append': 'view_tests.app0'})
def test_non_english_default_english_userpref(self):
"""
Same as above with the difference that there IS an 'en' translation
available. The Javascript i18n view must return a NON empty language catalog
with the proper English translations. See #13726 for more details.
"""
with self.settings(LANGUAGE_CODE='fr'), override('en-us'):
response = self.client.get('/jsi18n_english_translation/')
self.assertContains(response, 'this app0 string is to be translated')
def test_i18n_language_non_english_fallback(self):
"""
Makes sure that the fallback language is still working properly
in cases where the selected language cannot be found.
"""
with self.settings(LANGUAGE_CODE='fr'), override('none'):
response = self.client.get('/jsi18n/')
self.assertContains(response, 'Choisir une heure')
def test_escaping(self):
# Force a language via GET otherwise the gettext functions are a noop!
response = self.client.get('/jsi18n_admin/?language=de')
self.assertContains(response, '\\x04')
@modify_settings(INSTALLED_APPS={'append': ['view_tests.app5']})
def test_non_BMP_char(self):
"""
Non-BMP characters should not break the javascript_catalog (#21725).
"""
with self.settings(LANGUAGE_CODE='en-us'), override('fr'):
response = self.client.get('/jsi18n/app5/')
self.assertEqual(response.status_code, 200)
self.assertContains(response, 'emoji')
self.assertContains(response, '\\ud83d\\udca9')
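The escaped sequence asserted above is simply how Python's `json` module (with its default `ensure_ascii=True`) serializes a non-BMP code point as a UTF-16 surrogate pair:

```python
import json

# U+1F4A9 lies outside the Basic Multilingual Plane, so the ASCII-safe
# encoder splits it into the surrogate pair \ud83d\udca9.
encoded = json.dumps('\U0001F4A9')
```

That escaped pair is exactly the substring the test looks for in the catalog response.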
class JsI18NTestsMultiPackage(TestCase):
urls = 'view_tests.urls'
"""
Tests for django views in django/views/i18n.py that need to change
settings.LANGUAGE_CODE and merge JS translation from several packages.
"""
@modify_settings(INSTALLED_APPS={'append': ['view_tests.app1', 'view_tests.app2']})
def test_i18n_language_english_default(self):
"""
Check if the JavaScript i18n view returns a complete language catalog
if the default language is en-us, the selected language has a
translation available and a catalog composed by djangojs domain
translations of multiple Python packages is requested. See #13388,
#3594 and #13514 for more details.
"""
with self.settings(LANGUAGE_CODE='en-us'), override('fr'):
response = self.client.get('/jsi18n_multi_packages1/')
self.assertContains(response, 'il faut traduire cette cha\\u00eene de caract\\u00e8res de app1')
@modify_settings(INSTALLED_APPS={'append': ['view_tests.app3', 'view_tests.app4']})
def test_i18n_different_non_english_languages(self):
"""
        Similar to above but with neither default nor requested language being
English.
"""
with self.settings(LANGUAGE_CODE='fr'), override('es-ar'):
response = self.client.get('/jsi18n_multi_packages2/')
self.assertContains(response, 'este texto de app3 debe ser traducido')
def test_i18n_with_locale_paths(self):
extended_locale_paths = settings.LOCALE_PATHS + (
path.join(path.dirname(
path.dirname(path.abspath(upath(__file__)))), 'app3', 'locale'),)
with self.settings(LANGUAGE_CODE='es-ar', LOCALE_PATHS=extended_locale_paths):
with override('es-ar'):
response = self.client.get('/jsi18n/')
self.assertContains(response,
'este texto de app3 debe ser traducido')
skip_selenium = not os.environ.get('DJANGO_SELENIUM_TESTS', False)
@unittest.skipIf(skip_selenium, 'Selenium tests not requested')
class JavascriptI18nTests(LiveServerTestCase):
# The test cases use translations from these apps.
available_apps = ['django.contrib.admin', 'view_tests']
urls = 'view_tests.urls'
webdriver_class = 'selenium.webdriver.firefox.webdriver.WebDriver'
@classmethod
def setUpClass(cls):
try:
cls.selenium = import_string(cls.webdriver_class)()
except Exception as e:
raise unittest.SkipTest('Selenium webdriver "%s" not installed or '
'not operational: %s' % (cls.webdriver_class, str(e)))
super(JavascriptI18nTests, cls).setUpClass()
@classmethod
def tearDownClass(cls):
cls.selenium.quit()
super(JavascriptI18nTests, cls).tearDownClass()
@override_settings(LANGUAGE_CODE='de')
def test_javascript_gettext(self):
self.selenium.get('%s%s' % (self.live_server_url, '/jsi18n_template/'))
elem = self.selenium.find_element_by_id("gettext")
self.assertEqual(elem.text, "Entfernen")
elem = self.selenium.find_element_by_id("ngettext_sing")
self.assertEqual(elem.text, "1 Element")
elem = self.selenium.find_element_by_id("ngettext_plur")
self.assertEqual(elem.text, "455 Elemente")
elem = self.selenium.find_element_by_id("pgettext")
self.assertEqual(elem.text, "Kann")
elem = self.selenium.find_element_by_id("npgettext_sing")
self.assertEqual(elem.text, "1 Resultat")
elem = self.selenium.find_element_by_id("npgettext_plur")
self.assertEqual(elem.text, "455 Resultate")
class JavascriptI18nChromeTests(JavascriptI18nTests):
webdriver_class = 'selenium.webdriver.chrome.webdriver.WebDriver'
class JavascriptI18nIETests(JavascriptI18nTests):
webdriver_class = 'selenium.webdriver.ie.webdriver.WebDriver' | unknown | codeparrot/codeparrot-clean | ||
from pygame import Rect
from pygame.draw import lines
from thorpy.painting.painters.basicframe import BasicFrame
from thorpy.miscgui import style
from thorpy._utils.rectscomputing import get_top_coords, get_bottom_coords
from thorpy._utils.colorscomputing import grow_color, normalize_color
class ClassicFrame(BasicFrame):
def __init__(self, size=None, color=None, pressed=False, dark=None,
hovered=False, light=None, thick=1, clip="auto"):
if clip == "auto":
inflation = -2 * thick
clip = (inflation, inflation)
BasicFrame.__init__(self,
size=size,
color=color,
clip=clip,
pressed=pressed,
hovered=hovered)
self.dark = dark
self.light = light
self.thick = thick
self.light = style.LIGHT_FACTOR if light is None else light
self.dark = style.DARK_FACTOR if dark is None else dark
## if self.light is None:
## white = make_compatible(constants.WHITE, self.color)
## self.light = mid_color(self.color, white)
if isinstance(self.light, float):
self.light = normalize_color(grow_color(self.light, self.color))
## if self.dark is None:
## black = make_compatible(constants.BLACK, self.color)
## self.dark = mid_color(self.color, black)
if isinstance(self.dark, float):
self.dark = normalize_color(grow_color(self.dark, self.color))
def blit_borders_on(self, surface):
rect = Rect((0, 0), self.size)
for x in range(0, self.thick):
r = rect.inflate(-x, -x)
tc = get_top_coords(r)
bc = get_bottom_coords(r)
if self.pressed:
lines(surface, self.dark, False, tc, 1)
lines(surface, self.light, False, bc, 1)
else:
lines(surface, self.light, False, tc, 1)
lines(surface, self.dark, False, bc, 1)
def draw(self):
surface = BasicFrame.draw(self)
self.blit_borders_on(surface)
return surface
def get_fusion(self, title, center_title):
"""Fusion the painter.img and the title.img and returns this fusion"""
if self.pressed:
if center_title is True: # center the title on the element rect
title.center_on(self.size)
title._pos = (title._pos[0], title._pos[1] + self.thick)
# center_title is the topleft argument
elif center_title is not False:
title._pos = center_title
else:
title._pos = (0, 0)
painter_img = self.get_surface()
title.blit_on(painter_img)
return painter_img
else:
return BasicFrame.get_fusion(self, title, center_title)
def set_color(self, color):
self.color = color
if len(color) == 4:
self.dark = tuple(list(self.dark) + [color[3]])
self.light = tuple(list(self.light) + [color[3]]) | unknown | codeparrot/codeparrot-clean | ||
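`grow_color` and `normalize_color` are imported from thorpy's color utilities; a minimal sketch of what this scale-then-clamp pattern computes (assumed behavior only — the real thorpy helpers may differ) is:

```python
def grow_color(factor, color):
    # Scale every RGB channel by `factor`: >1 lightens, <1 darkens.
    return tuple(c * factor for c in color)

def normalize_color(color):
    # Clamp each channel back into the valid 0-255 range as integers.
    return tuple(min(255, max(0, int(round(c)))) for c in color)

light = normalize_color(grow_color(1.2, (100, 100, 100)))
dark = normalize_color(grow_color(0.8, (100, 100, 100)))
```

The lighter and darker variants are what the frame uses for its beveled top and bottom edges.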
{
"name": "@swc/wasm-typescript",
"collaborators": [
"강동윤 <kdy1997.dev@gmail.com>"
],
"description": "wasm module for swc",
"version": "1.15.11",
"license": "Apache-2.0",
"repository": {
"type": "git",
"url": "https://github.com/swc-project/swc.git"
},
"files": [
"wasm.js",
"wasm.d.ts"
],
"main": "wasm.js",
"types": "wasm.d.ts"
} | json | github | https://github.com/nodejs/node | deps/amaro/dist/package.json |
# Copyright 2013 IBM Corp.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo.serialization import jsonutils
import webob
from nova.api.openstack.compute.contrib import extended_virtual_interfaces_net
from nova import compute
from nova import network
from nova import test
from nova.tests.unit.api.openstack import fakes
FAKE_UUID = 'aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa'
FAKE_VIFS = [{'uuid': '00000000-0000-0000-0000-00000000000000000',
'address': '00-00-00-00-00-00',
'net_uuid': '00000000-0000-0000-0000-00000000000000001'},
{'uuid': '11111111-1111-1111-1111-11111111111111111',
'address': '11-11-11-11-11-11',
'net_uuid': '11111111-1111-1111-1111-11111111111111112'}]
EXPECTED_NET_UUIDS = ['00000000-0000-0000-0000-00000000000000001',
'11111111-1111-1111-1111-11111111111111112']
def compute_api_get(self, context, instance_id, expected_attrs=None,
want_objects=False):
return dict(uuid=FAKE_UUID, id=instance_id, instance_type_id=1, host='bob')
def get_vifs_by_instance(self, context, instance_id):
return FAKE_VIFS
def get_vif_by_mac_address(self, context, mac_address):
if mac_address == "00-00-00-00-00-00":
return {'net_uuid': '00000000-0000-0000-0000-00000000000000001'}
else:
return {'net_uuid': '11111111-1111-1111-1111-11111111111111112'}
class ExtendedServerVIFNetTest(test.NoDBTestCase):
content_type = 'application/json'
prefix = "%s:" % extended_virtual_interfaces_net. \
Extended_virtual_interfaces_net.alias
def setUp(self):
super(ExtendedServerVIFNetTest, self).setUp()
self.stubs.Set(compute.api.API, "get",
compute_api_get)
self.stubs.Set(network.api.API, "get_vifs_by_instance",
get_vifs_by_instance)
self.stubs.Set(network.api.API, "get_vif_by_mac_address",
get_vif_by_mac_address)
self.flags(
osapi_compute_extension=[
'nova.api.openstack.compute.contrib.select_extensions'],
osapi_compute_ext_list=['Virtual_interfaces',
'Extended_virtual_interfaces_net'])
def _make_request(self, url):
req = webob.Request.blank(url)
req.headers['Accept'] = self.content_type
res = req.get_response(fakes.wsgi_app(init_only=(
'os-virtual-interfaces', 'OS-EXT-VIF-NET')))
return res
def _get_vifs(self, body):
return jsonutils.loads(body).get('virtual_interfaces')
def _get_net_id(self, vifs):
for vif in vifs:
yield vif['%snet_id' % self.prefix]
def assertVIFs(self, vifs):
result = []
for net_id in self._get_net_id(vifs):
result.append(net_id)
        result.sort()
for i, net_uuid in enumerate(result):
self.assertEqual(net_uuid, EXPECTED_NET_UUIDS[i])
def test_get_extend_virtual_interfaces_list(self):
res = self._make_request('/v2/fake/servers/abcd/os-virtual-interfaces')
self.assertEqual(res.status_int, 200)
self.assertVIFs(self._get_vifs(res.body)) | unknown | codeparrot/codeparrot-clean | ||
#!/usr/bin/env python
# -*- coding: utf-8 -*-
#
# openvz.py
#
# Copyright 2014 jordonr <jordon@beamsyn.net>
#
# This file is part of Ansible.
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
#
# Inspired by libvirt_lxc.py inventory script
# https://github.com/ansible/ansible/blob/e5ef0eca03cbb6c8950c06dc50d0ca22aa8902f4/plugins/inventory/libvirt_lxc.py
#
# Groups are determined by the description field of openvz guests
# multiple groups can be separated by commas: webserver,dbserver
from subprocess import Popen,PIPE
import sys
import json
#List openvz hosts
vzhosts = ['vzhost1','vzhost2','vzhost3']
#Add openvz hosts to the inventory and Add "_meta" trick
inventory = {'vzhosts': {'hosts': vzhosts}, '_meta': {'hostvars': {}}}
#default group, when description not defined
default_group = ['vzguest']
def get_guests():
#Loop through vzhosts
for h in vzhosts:
#SSH to vzhost and get the list of guests in json
pipe = Popen(['ssh', h,'vzlist','-j'], stdout=PIPE, universal_newlines=True)
#Load Json info of guests
json_data = json.loads(pipe.stdout.read())
#loop through guests
for j in json_data:
#Add information to host vars
inventory['_meta']['hostvars'][j['hostname']] = {'ctid': j['ctid'], 'veid': j['veid'], 'vpsid': j['vpsid'], 'private_path': j['private'], 'root_path': j['root'], 'ip': j['ip']}
#determine group from guest description
if j['description'] is not None:
groups = j['description'].split(",")
else:
groups = default_group
#add guest to inventory
for g in groups:
if g not in inventory:
inventory[g] = {'hosts': []}
inventory[g]['hosts'].append(j['hostname'])
return inventory
if len(sys.argv) == 2 and sys.argv[1] == '--list':
inv_json = get_guests()
print(json.dumps(inv_json, sort_keys=True))
elif len(sys.argv) == 3 and sys.argv[1] == '--host':
print(json.dumps({}))
else:
print("Need an argument, either --list or --host <host>") | unknown | codeparrot/codeparrot-clean | ||
#!/usr/bin/env python3
# Copyright (c) 2014-2016 The Bitcoin Core developers
# Distributed under the MIT software license, see the accompanying
# file COPYING or http://www.opensource.org/licenses/mit-license.php.
"""
Test that mining RPC continues to supply correct transaction metadata after
the Nov 2018 protocol upgrade which engages canonical transaction ordering
"""
import decimal
import random
import time
from test_framework.test_framework import BitcoinTestFramework
class CTORMiningTest(BitcoinTestFramework):
def set_test_params(self):
# Setup two nodes so we can getblocktemplate
# it errors out if it is not connected to other nodes
self.num_nodes = 2
self.setup_clean_chain = True
self.block_heights = {}
self.tip = None
self.blocks = {}
self.mocktime = int(time.time()) - 600 * 100
extra_arg = ['-spendzeroconfchange=0', '-whitelist=noban@127.0.0.1']
self.extra_args = [extra_arg, extra_arg]
def skip_test_if_missing_module(self):
self.skip_if_no_wallet()
def run_test(self):
mining_node = self.nodes[0]
# Helper for updating the times
def update_time():
mining_node.setmocktime(self.mocktime)
self.mocktime = self.mocktime + 600
mining_node.getnewaddress()
# Generate some unspent utxos and also
# activate magnetic anomaly
for x in range(150):
update_time()
mining_node.generate(1)
update_time()
unspent = mining_node.listunspent()
transactions = {}
# Spend all our coinbases
while len(unspent):
inputs = []
# Grab a random number of inputs
for _ in range(random.randrange(1, 5)):
txin = unspent.pop()
inputs.append({
'txid': txin['txid'],
'vout': 0 # This is a coinbase
})
if len(unspent) == 0:
break
outputs = {}
# Calculate a unique fee for this transaction
fee = decimal.Decimal(random.randint(
1000, 2000)) / decimal.Decimal(1e2)
# Spend to the same number of outputs as inputs, so we can leave
# the amounts unchanged and avoid rounding errors. This also ensures
# the number of sigops == number of sigchecks.
#
# NOTE: There will be 1 sigop per output (which equals the number
# of inputs now). We need this randomization to ensure the
# numbers are properly following the transactions in the block
# template metadata
addr = ""
for _ in range(len(inputs)):
addr = mining_node.getnewaddress()
output = {
# 50 BCH per coinbase
addr: decimal.Decimal(50000000)
}
outputs.update(output)
            # Take the fee off the last output to avoid rounding errors;
            # we need the exact fee later for assertions
outputs[addr] -= fee
rawtx = mining_node.createrawtransaction(inputs, outputs)
signedtx = mining_node.signrawtransactionwithwallet(rawtx)
txid = mining_node.sendrawtransaction(signedtx['hex'])
# number of outputs is the same as the number of sigops in this
# case
transactions.update({txid: {'fee': fee, 'sigops': len(outputs)}})
tmpl = mining_node.getblocktemplate()
assert 'proposal' in tmpl['capabilities']
# Check the template transaction metadata and ordering
last_txid = 0
for txn in tmpl['transactions'][1:]:
txid = txn['txid']
txnMetadata = transactions[txid]
expectedFeeSats = int(txnMetadata['fee'] * 10**2)
expectedSigOps = txnMetadata['sigops']
txid_decoded = int(txid, 16)
# Assert we got the expected metadata
assert expectedFeeSats == txn['fee']
assert expectedSigOps == txn['sigops']
# Assert transaction ids are in order
assert last_txid == 0 or last_txid < txid_decoded
last_txid = txid_decoded
if __name__ == '__main__':
CTORMiningTest().main() | unknown | codeparrot/codeparrot-clean | ||
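The ordering assertion in the loop above boils down to a simple predicate: under CTOR, non-coinbase transactions appear in a block sorted by txid, ascending. A standalone sketch:

```python
def is_ctor_ordered(txids):
    # Interpret each hex txid as an integer and require a strictly
    # increasing sequence, as the test's last_txid comparison does.
    values = [int(txid, 16) for txid in txids]
    return all(a < b for a, b in zip(values, values[1:]))
```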
#!/usr/bin/python2
#LumexData.py
import datetime as dt
import numpy as np
import scipy.optimize as opt
import matplotlib.pyplot as plt
omitFlagged = True
def storeAsDatetime(date, time):
day = int(date.split(".")[0])
month = int(date.split(".")[1])
year = int(date.split(".")[2])
hour = int(time.split(":")[0])
minute = int(time.split(":")[1])
second = int(time.split(":")[2])
return dt.datetime(year, month, day, hour, minute, second)
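The same parse can be expressed with `datetime.strptime`, which validates the `dd.mm.yyyy` / `HH:MM:SS` layout in one call (a sketch alongside the manual splitting above, not a replacement):

```python
import datetime as dt

def store_as_datetime(date, time):
    # Equivalent one-liner for the "dd.mm.yyyy" and "HH:MM:SS" fields
    return dt.datetime.strptime(date + " " + time, "%d.%m.%Y %H:%M:%S")
```

Unlike the split-and-int version, `strptime` raises `ValueError` on malformed fields instead of silently misparsing them.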
#Calculate the passed days of the year (including leap year)
def month(mon, yea):
if(mon == 1):
days = 0
elif(mon == 2):#January 31 days
days = 31
elif(mon == 3):#February 28 days
days = 59
elif(mon == 4):#March 31 days
days = 90
elif(mon == 5):#April 30 days
days = 120
elif(mon == 6):#May 31 days
days = 151
elif(mon == 7):#June 30 days
days = 181
elif(mon == 8):#July 31 days
days = 212
elif(mon == 9):#August 31 days
days = 243
elif(mon == 10):#September 30 days
days = 273
elif(mon == 11):#October 31 days
days = 304
elif(mon == 12):#November 30 days
days = 334
    # Add the leap day only once February has passed
    if(mon > 2 and (yea % 4 == 0 and yea % 100 != 0 or yea % 400 == 0)):
        days = days + 1
return days
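The hand-maintained table in `month` can be cross-checked against the standard library, which already accounts for leap years:

```python
import datetime

def days_before_month(mon, year):
    # Days of `year` that have fully elapsed before the 1st of `mon`,
    # leap day included when applicable.
    return datetime.date(year, mon, 1).timetuple().tm_yday - 1
```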
def calcDays(date):
years = 365*(date.year)
months = month(date.month, date.year)
days = date.day
return years+months+days
def calcDatetime(days):
pass
##
#
# This program contains the class LumexData. This class
#
# - stores the content of the calibration file (__init__)
# - flags the data (flagging) (Not implemented, because no flagging criteria are available)
# - arranges the accepted and flagged data in columns of time and concentration (storeTimeConc)
# - calculates the number of accepted measurements per day and calculates the daily means of the data set (length & averaging)
# - creates a column with date and the daily means, and flags the daily means for which the number of accepted measurements is smaller than 216 (averaging)
class LumexData:
## Constructor of LumexData
#
    # Reads the content of the calibration file and stores it as a list of dicts
def __init__(self, calibrationfile="none", filedescriptor="none"):
self.__lumexdata = []
self.averaged = False
        #Open the calibration file and read the content.
#The file should be stored as a plain text format
if(calibrationfile != "none"):
f = open(calibrationfile, "r")
calibration = f.readlines()
elif(filedescriptor != "none"):
calibration = filedescriptor.readlines()
for line in calibration:
if(len(line.split(" ")) != 7):
continue
#Store date and time as datetime
x = storeAsDatetime(line.split(" ")[0], line.split(" ")[1])
self.__lumexdata.append({"date": x, "time_dec": float(line.split(" ")[2]), "zero_span": float(line.split(" ")[3]), \
"calib_factor": float(line.split(" ")[4]), "temperature": float(line.split(" ")[5]), \
"concentration": float(line.split(" ")[6]), "flag": -1, "standarddeviation": 0, "counter": 0})
return
#END OF __init__()
## Helpfunction of LumexData
#
# Explains the class LumexData
def help(self):
f = open("README", "r")
cont = f.readlines()
for element in cont:
print(element)
return
#END OF help()
## Getter of LumexData
#
# Return the data of __lumexdata
def get(self, elementnumber, key):
if(type(elementnumber) is str):
output = []
try:
start = int(elementnumber.split(":")[0])
end = int(elementnumber.split(":")[1])
except AttributeError:
pass
raw_keys = key.split(",")
keys = []
for element in raw_keys:
keys.append(element)
for i in range(start, end):
for element in keys:
output.append(self.__lumexdata[i][element])
return output
elif(elementnumber != -1 and key != "all"):
return self.__lumexdata[elementnumber][key]
elif(elementnumber != -1 and key == "all"):
return self.__lumexdata[elementnumber]
elif(elementnumber == -1 and key != "all"):
output = []
for i in range(len(self.__lumexdata)):
output.append(self.__lumexdata[i][key])
return output
elif(elementnumber == -1 and key == "all"):
output = []
for i in range(len(self.__lumexdata)):
output.append(self.__lumexdata[i])
return output
return
#END OF get()
    ## Length of LumexData.__lumexdata
#
# Return the number of values in the object
def length(self):
return len(self.__lumexdata)
#END OF length()
## Save the data to a txt-file
#
# Stores the time and the concentration
def storeTimeConc(self, filename, ran="all"):
f = open(filename, "w")
g = open("flagged_"+filename,"w")
if(ran != "all"):
start = int(ran.split(":")[0])
end = int(ran.split(":")[1])
else:
start = 0
end = len(self.__lumexdata)
f.write("1. Date\n2. Time in decimal\n3. Concentration\n")
for i in range(len(self.__lumexdata)):
if(i >= start and i < end):
if(self.__lumexdata[i]["flag"] == -1):
f.write("{} {}\t{}\n".format(self.__lumexdata[i]["date"], self.__lumexdata[i]["time_dec"], self.__lumexdata[i]["concentration"]))
else:
g.write("{} {}\t{}\t{}\n".format(self.__lumexdata[i]["date"], self.__lumexdata[i]["time_dec"], self.__lumexdata[i]["concentration"], self.__lumexdata[i]["flag"]))
f.close()
return
## Flag the data
#
# Flags the data by the given criteria. criteria has to be a textfile
def flagging(self, filename="Flagged.dat", criteria=None):
f = open(filename, "w")
flag = [0 for x in range(len(self.__lumexdata))]
#Here flag the data by the given criteria
for line in self.__lumexdata:
f.write("{}\t{}\t{}\t{}\t{}\t{}\t{}\n".format(line["date"], line["time_dec"], line["zero_span"], line["calib_factor"], line["temperature"], line["concentration"], line["flag"]))
f.close()
for i in range(len(self.__lumexdata)):
self.__lumexdata[i]["flag"] = flag[i]
return
## Averaging the data
#
# Group the data for each day and calculate the mean, print them to a file (and return the output to the calling function)
def averaging(self, ran="all", overwrite=False):
f = open("averagedOutput.txt", "w")
givendate = calcDays(self.__lumexdata[0]["date"])
print(givendate)
#givendate = 365*(self.__lumexdata[0]["date"].year - 1900) + month(self.__lumexdata[0]["date"].month, self.__lumexdata[0]["date"].year) + self.__lumexdata[0]["date"].day
dummylist = []
averaged = []
errors = []
dates = []
flag = []
counter = 0
if(ran != "all"):
start = int(ran.split(":")[0])
end = int(ran.split(":")[1])
else:
start = 0
end = len(self.__lumexdata)
i = start
#for i in range(start, end): #Iterate over the whole data
while(i < end):
#mydate = 365*(self.__lumexdata[i]["date"].year - 1900) + month(self.__lumexdata[i]["date"].month, self.__lumexdata[i]["date"].year) + self.__lumexdata[i]["date"].day
mydate = calcDays(self.__lumexdata[i]["date"])
#print(mydate)
if(mydate == givendate):
#Omit the flagged data (if any)
if(omitFlagged and self.__lumexdata[i]["flag"] == -1):
dummylist.append(self.__lumexdata[i]["concentration"])
counter = counter + 1
else:
date = "{}.{}.{}".format(self.__lumexdata[i-1]["date"].day, self.__lumexdata[i-1]["date"].month, self.__lumexdata[i-1]["date"].year)
if(counter >= 216):
f.write("{}\t{}\t{}\t{}\n".format(date, np.mean(dummylist), np.std(dummylist), counter))
else:
f.write("{}\t{}\t{}\t{}\t###\n".format(date, np.mean(dummylist), np.std(dummylist), counter))
givendate = mydate
averaged.append(np.mean(dummylist))
errors.append(np.std(dummylist))
dates.append(dt.datetime.strptime(date, "%d.%m.%Y"))
                flag.append(0 if counter < 216 else -1)
if(omitFlagged and self.__lumexdata[i]["flag"] == -1):
dummylist = [self.__lumexdata[i]["concentration"]]
counter = 1
i = i + 1
if(counter != 0):
date = "{}.{}.{}".format(self.__lumexdata[end-1]["date"].day, self.__lumexdata[end-1]["date"].month, self.__lumexdata[end-1]["date"].year)
if(counter >= 216):
f.write("{}\t{}\t{}\t{}\n".format(date, np.mean(dummylist), np.std(dummylist), counter))
else:
f.write("{}\t{}\t{}\t{}\t###\n".format(date, np.mean(dummylist), np.std(dummylist), counter))
averaged.append(np.mean(dummylist))
errors.append(np.std(dummylist))
dates.append(dt.datetime.strptime(date, "%d.%m.%Y"))
            flag.append(0 if counter < 216 else -1)
dummylist = []
counter = 0
f.close()
#Overwrite the content of lumexdata
if(overwrite):
f = open("averagedOutput.txt", "r")
content = f.readlines()
f.close()
self.__lumexdata = [dict([("date", 0), ("concentration", 0), ("standarddeviation", 0), ("counter", 0), ("flag", 0)]) for x in range(len(content))]
for i in range(len(content)):
self.__lumexdata[i]["date"] = dt.datetime(int(content[i].split("\t")[0].split(".")[2]), int(content[i].split("\t")[0].split(".")[1]), int(content[i].split("\t")[0].split(".")[0]))
self.__lumexdata[i]["concentration"] = float(content[i].split("\t")[1])
self.__lumexdata[i]["standarddeviation"] = float(content[i].split("\t")[2])
self.__lumexdata[i]["counter"] = int(content[i].split("\t")[3])
self.__lumexdata[i]["flag"] = 99 if int(content[i].split("\t")[3]) < 216 else -1
self.averaged = True
return [averaged, errors, dates, flag]
#END OF average()
#Calculate fit using scipy.optimize.leastsq. The x-axis is the number of measurements. Fit as
#sinusoidal function
#linear
#polynomial (2 - 6)
#exponential
#logarithm
#gauss
#Parameter has to be a list with initial parameters
def __fitting(self, parameter, daily=False, ran="all", typ="trig", averaged=0, errors=0, av_date=0, flag=0):
dates = []
conc = []
standarddeviation = []
if(ran != "all"):
begin = ran.split(":")[0]
end = ran.split(":")[1]
begin = dt.datetime(int(begin.split(".")[2]), int(begin.split(".")[1]), int(begin.split(".")[0]))
end = dt.datetime(int(end.split(".")[2]), int(end.split(".")[1]), int(end.split(".")[0]))
else:
begin = self.__lumexdata[0]["date"]
end = self.__lumexdata[-1]["date"]
if(averaged == 0):
for i in range(len(self.__lumexdata)):
if(self.__lumexdata[i]["date"] >= begin and self.__lumexdata[i]["date"] < end or ran == "all"):
if(self.__lumexdata[i]["flag"] == -1):
dates.append(self.__lumexdata[i]["date"])
conc.append(self.__lumexdata[i]["concentration"])
else:
#[averaged, errors, av_date, flag] = self.averaging()
standarddeviation = []
for i in range(len(averaged)):
if(av_date[i] >= begin and av_date[i] < end or ran == "all"):
if(flag[i] == -1):
dates.append(av_date[i])
conc.append(averaged[i])
standarddeviation.append(errors[i])
array = np.linspace(0,len(dates)-1,len(dates))
#FITTING
if(typ == "trig"):
fitfunc = lambda parameter, x: parameter[0] * np.cos(2*np.pi / parameter[1]*x + parameter[2]) + parameter[3]*x
elif(typ == "lin"):
fitfunc = lambda parameter, x: parameter[0] * x + parameter[1]
elif(typ == "poly2"):
fitfunc = lambda parameter, x: parameter[0] * x**2 + parameter[1] * x + parameter[2]
elif(typ == "poly3"):
fitfunc = lambda parameter, x: parameter[0] * x**3 + parameter[1] * x**2 + parameter[2] * x + parameter[3]
elif(typ == "poly4"):
fitfunc = lambda parameter, x: parameter[0] * x**4 + parameter[1] * x**3 + parameter[2] * x**2 + parameter[3] * x + parameter[4]
elif(typ == "poly5"):
fitfunc = lambda parameter, x: parameter[0] * x**5 + parameter[1] * x**4 + parameter[2] * x**3 + parameter[3] * x**2 + parameter[4] * x + parameter[5]
elif(typ == "poly6"):
fitfunc = lambda parameter, x: parameter[0] * x**6 + parameter[1] * x**5 + parameter[2] * x**4 + parameter[3] * x**3 + parameter[4] * x**2 + parameter[5] * x + parameter[6]
elif(typ == "exp"):
fitfunc = lambda parameter, x: parameter[0] * np.exp(parameter[1] * x + parameter[2]) + parameter[3] * x + parameter[4]
elif(typ == "log"):
fitfunc = lambda parameter, x: parameter[0] * np.log(x) / np.log(parameter[1]) + parameter[2] * x + parameter[3]
elif(typ == "gauss"):
fitfunc = lambda parameter, x: 1 / (parameter[0] * np.sqrt(2 * np.pi)) * np.exp(-0.5 * ((x - parameter[1])/(parameter[0]))**2)
errfunc = lambda parameter, x, y: fitfunc(parameter,x) - y
p1, success = opt.leastsq(errfunc, parameter[:], args=(array, conc))
return [fitfunc, p1, array]
#Plotting without fit. If daily=True, use the daily mean, otherwise use all unflagged data
def plotting(self, title="Default", xlabel="x-Axis", ylabel="y-Axis", daily=False, ran="all", axessize=10, fsize=10, msize=10, colour="#000000", markerstyle="h", leastsq=False, typ="lin", parameter=[1,1], averaged=0, errors=0, av_date=0, flag=[]):
dates = []
conc = []
if(ran != "all"):
begin = ran.split(":")[0]
end = ran.split(":")[1]
begin = dt.datetime(int(begin.split(".")[2]), int(begin.split(".")[1]), int(begin.split(".")[0]))
end = dt.datetime(int(end.split(".")[2]), int(end.split(".")[1]), int(end.split(".")[0]))
else:
begin = self.__lumexdata[0]["date"]
end = self.__lumexdata[-1]["date"]
if(averaged == 0):
for i in range(len(self.__lumexdata)):
if(self.__lumexdata[i]["date"] >= begin and self.__lumexdata[i]["date"] < end or ran == "all"):
if(self.__lumexdata[i]["flag"] == -1):
dates.append(self.__lumexdata[i]["date"])
conc.append(self.__lumexdata[i]["concentration"])
else:
standarddeviation = []
for i in range(len(averaged)):
if(av_date[i] >= begin and av_date[i] < end or ran == "all"):
if(flag[i] == -1):
dates.append(av_date[i])
conc.append(averaged[i])
standarddeviation.append(errors[i])
fig = plt.figure()
#NotForUsing, (sp1) = plt.subplots(1, 1, sharey=False)
#sp1.set_title(title, fontsize=fsize)
plt.title(title, fontsize=fsize)
if(averaged == 0):
plt.plot(dates, conc, ls="", marker=markerstyle, markersize=msize, color=colour)
# sp1.plot(dates, conc, ls=".", marker=markerstyle, markersize=msize, color=colour)
else:
plt.errorbar(dates, conc, yerr=standarddeviation, fmt=markerstyle, markersize=msize, color=colour)
# sp1.errorbar(dates, conc, yerr=standarddeviation, fmt=markerstyle, markersize=msize, color=colour)
if(leastsq):
[fitfunc, p1, array] = self.__fitting(parameter, daily, ran, typ=typ, averaged=averaged, errors=errors, av_date=av_date, flag=flag)
#Write fitparameters to a file
f = open("fitparams.txt", "w")
for i in range(len(p1)):
f.write("p{} = {}\n".format(i, p1[i]))
f.close()
plt.plot(dates, fitfunc(p1, array))
# sp1.plot(dates, fitfunc(p1, array))
#sp1.tick_params(labelsize=axessize)
plt.tick_params(labelsize=axessize)
plt.xlabel(xlabel, fontsize=fsize)
plt.ylabel(ylabel, fontsize=fsize)
#sp1.set_xlabel(xlabel, fontsize=fsize)
#sp1.set_ylabel(ylabel, fontsize=fsize)
#sp1.grid(True)
plt.grid(True)
plt.show()
#!/usr/bin/env python3
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import argparse
from kmeans import KMeans
from mixture import GaussianMixtureModel
def parse_args(*argument_array):
parser = argparse.ArgumentParser()
parser.add_argument('data_csv')
parser.add_argument('--num-clusters', type=int, default=15)
parser.add_argument('--algorithm', choices=['k-means', 'gmm'],
default='k-means')
args = parser.parse_args(*argument_array)
return args
def main(args):
df = pd.read_csv(args.data_csv)
data = np.array(df[['X', 'Y']])
plt.clf()
plt.scatter(data[:, 0], data[:, 1], s=3, color='blue')
if args.algorithm == 'gmm':
gmm = GaussianMixtureModel(args.num_clusters)
gmm.fit(data)
y = gmm.predict_cluster(data)
else:
km = KMeans(args.num_clusters)
km.fit(data)
y = km.predict(data)
plt.scatter(data[:, 0], data[:, 1], c=y)
plt.show()
if __name__ == '__main__':
args = parse_args()
main(args)
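# Example usage (illustrative; the script filename is an assumption):
#   python cluster.py data.csv --algorithm gmm --num-clusters 5
# The CSV is expected to provide 'X' and 'Y' columns, as read in main() above.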
# -*- encoding: utf-8 -*-
##############################################################################
#
# OpenERP, Open Source Management Solution
#
# Copyright (c) 2011 Noviat nv/sa (www.noviat.be). All rights reserved.
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
#
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
#
##############################################################################
{
'name': 'Belgium - Import Bank CODA Statements',
'version': '2.1',
'author': 'University of Namur',
'category': 'Accounting & Finance',
'description': '''
Module to import CODA bank statements.
======================================
Supported are CODA flat files in V2 format from Belgian bank accounts.
----------------------------------------------------------------------
* CODA v1 support.
* CODA v2.2 support.
* Foreign Currency support.
* Support for all data record types (0, 1, 2, 3, 4, 8, 9).
* Parsing & logging of all Transaction Codes and Structured Format
Communications.
* Support for multiple Journals per Bank Account Number.
* Support for multiple statements from different bank accounts in a single
CODA file.
The machine-readable CODA files are parsed, and Bank Statements are generated containing a subset of
the CODA information (only those transaction lines that are required for the
creation of the Financial Accounting records).
Remark on CODA V1 support:
~~~~~~~~~~~~~~~~~~~~~~~~~~
In some cases a transaction code, transaction category or structured
communication code has been given a new or clearer description in CODA V2. The
description provided by the CODA configuration tables is based upon the CODA
V2.2 specifications.
If required, you can manually adjust the descriptions via the CODA configuration menu.
''',
'depends': ['account_accountant', 'account_bank_statement_import', 'l10n_be'],
'demo': [
],
'data': [
'views/l10n_be_coda_view.xml',
'views/bank_statement_line_view.xml'
],
'auto_install': False,
'website': 'https://www.odoo.com/page/accounting',
'installable': True,
'license': 'AGPL-3',
}
# vim:expandtab:smartindent:tabstop=4:softtabstop=4:shiftwidth=4:
#!/usr/bin/env python
# Copyright (c) 2012 The Chromium Authors. All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.
"""The deep heap profiler script for Chrome."""
from collections import defaultdict
import os
import re
import subprocess
import sys
import tempfile
BUCKET_ID = 5
VIRTUAL = 0
COMMITTED = 1
ALLOC_COUNT = 2
FREE_COUNT = 3
NULL_REGEX = re.compile('')
PPROF_PATH = os.path.join(os.path.dirname(__file__),
os.pardir,
os.pardir,
'third_party',
'tcmalloc',
'chromium',
'src',
'pprof')
# Heap Profile Dump versions
# DUMP_DEEP_1 DOES NOT distinct mmap regions and malloc chunks.
# Their stacktraces DO contain mmap* or tc-* at their tops.
# They should be processed by POLICY_DEEP_1.
DUMP_DEEP_1 = 'DUMP_DEEP_1'
# DUMP_DEEP_2 DOES distinct mmap regions and malloc chunks.
# Their stacktraces still DO contain mmap* or tc-*.
# They should be processed by POLICY_DEEP_1.
DUMP_DEEP_2 = 'DUMP_DEEP_2'
# DUMP_DEEP_3 DOES distinct mmap regions and malloc chunks.
# Their stacktraces DO NOT contain mmap* or tc-*.
# They should be processed by POLICY_DEEP_2.
DUMP_DEEP_3 = 'DUMP_DEEP_3'
# Heap Profile Policy versions
# POLICY_DEEP_1 DOES NOT include allocation_type columns.
# mmap regions are distincted w/ mmap frames in the pattern column.
POLICY_DEEP_1 = 'POLICY_DEEP_1'
# POLICY_DEEP_2 DOES include allocation_type columns.
# mmap regions are distincted w/ the allocation_type column.
POLICY_DEEP_2 = 'POLICY_DEEP_2'
# TODO(dmikurube): Avoid global variables.
address_symbol_dict = {}
components = []
class Policy(object):
def __init__(self, name, mmap, pattern):
self.name = name
self.mmap = mmap
self.condition = re.compile(pattern + r'\Z')
def get_component(policy_list, bucket, mmap):
"""Returns a component name which a given bucket belongs to.
Args:
policy_list: A list containing Policy objects. (Parsed policy data by
parse_policy.)
bucket: A Bucket object to be searched for.
mmap: True if searching for a mmap region.
Returns:
A string representing a component name.
"""
if not bucket:
return 'no-bucket'
if bucket.component:
return bucket.component
stacktrace = ''.join(
address_symbol_dict[a] + ' ' for a in bucket.stacktrace).strip()
for policy in policy_list:
if mmap == policy.mmap and policy.condition.match(stacktrace):
bucket.component = policy.name
return policy.name
assert False
class Bucket(object):
def __init__(self, stacktrace):
self.stacktrace = stacktrace
self.component = ''
class Log(object):
"""A class representing one dumped log data."""
def __init__(self, log_path, buckets):
self.log_path = log_path
with open(self.log_path, mode='r') as log_f:
self.log_lines = log_f.readlines()
self.log_version = ''
sys.stderr.write('parsing a log file:%s\n' % log_path)
self.mmap_stacktrace_lines = []
self.malloc_stacktrace_lines = []
self.counters = {}
self.log_time = os.stat(self.log_path).st_mtime
self.parse_log(buckets)
@staticmethod
def dump_stacktrace_lines(stacktrace_lines, buckets):
"""Prints a given stacktrace.
Args:
stacktrace_lines: A list of strings which are valid as stacktraces.
buckets: A dict mapping bucket ids and their corresponding Bucket
objects.
"""
for l in stacktrace_lines:
words = l.split()
bucket = buckets[int(words[BUCKET_ID])]
if not bucket:
continue
for i in range(0, BUCKET_ID - 1):
sys.stdout.write(words[i] + ' ')
for address in bucket.stacktrace:
sys.stdout.write((address_symbol_dict.get(address) or address) + ' ')
sys.stdout.write('\n')
def dump_stacktrace(self, buckets):
"""Prints stacktraces contained in the log.
Args:
buckets: A dict mapping bucket ids and their corresponding Bucket
objects.
"""
self.dump_stacktrace_lines(self.mmap_stacktrace_lines, buckets)
self.dump_stacktrace_lines(self.malloc_stacktrace_lines, buckets)
@staticmethod
def accumulate_size_for_pprof(stacktrace_lines, policy_list, buckets,
component_name, mmap):
"""Accumulates size of committed chunks and the number of allocated chunks.
Args:
stacktrace_lines: A list of strings which are valid as stacktraces.
policy_list: A list containing Policy objects. (Parsed policy data by
parse_policy.)
buckets: A dict mapping bucket ids and their corresponding Bucket
objects.
component_name: A name of component for filtering.
mmap: True if searching for a mmap region.
Returns:
Two integers which are the accumulated size of committed regions and the
number of allocated chunks, respectively.
"""
com_committed = 0
com_allocs = 0
for l in stacktrace_lines:
words = l.split()
bucket = buckets[int(words[BUCKET_ID])]
if (not bucket or
(component_name and
component_name != get_component(policy_list, bucket, mmap))):
continue
com_committed += int(words[COMMITTED])
com_allocs += int(words[ALLOC_COUNT]) - int(words[FREE_COUNT])
return com_committed, com_allocs
@staticmethod
def dump_stacktrace_lines_for_pprof(stacktrace_lines, policy_list,
buckets, component_name, mmap):
"""Prints information of stacktrace lines for pprof.
Args:
stacktrace_lines: A list of strings which are valid as stacktraces.
policy_list: A list containing Policy objects. (Parsed policy data by
parse_policy.)
buckets: A dict mapping bucket ids and their corresponding Bucket
objects.
component_name: A name of component for filtering.
mmap: True if searching for a mmap region.
"""
for l in stacktrace_lines:
words = l.split()
bucket = buckets[int(words[BUCKET_ID])]
if (not bucket or
(component_name and
component_name != get_component(policy_list, bucket, mmap))):
continue
sys.stdout.write('%6d: %8s [%6d: %8s] @' % (
int(words[ALLOC_COUNT]) - int(words[FREE_COUNT]),
words[COMMITTED],
int(words[ALLOC_COUNT]) - int(words[FREE_COUNT]),
words[COMMITTED]))
for address in bucket.stacktrace:
sys.stdout.write(' ' + address)
sys.stdout.write('\n')
def dump_for_pprof(self, policy_list, buckets, mapping_lines, component_name):
"""Converts the log file so it can be processed by pprof.
Args:
policy_list: A list containing Policy objects. (Parsed policy data by
parse_policy.)
buckets: A dict mapping bucket ids and their corresponding Bucket
objects.
mapping_lines: A list of strings containing /proc/.../maps.
component_name: A name of component for filtering.
"""
sys.stdout.write('heap profile: ')
com_committed, com_allocs = self.accumulate_size_for_pprof(
self.mmap_stacktrace_lines, policy_list, buckets, component_name,
True)
add_committed, add_allocs = self.accumulate_size_for_pprof(
self.malloc_stacktrace_lines, policy_list, buckets, component_name,
False)
com_committed += add_committed
com_allocs += add_allocs
sys.stdout.write('%6d: %8s [%6d: %8s] @ heapprofile\n' % (
com_allocs, com_committed, com_allocs, com_committed))
self.dump_stacktrace_lines_for_pprof(
self.mmap_stacktrace_lines, policy_list, buckets, component_name,
True)
self.dump_stacktrace_lines_for_pprof(
self.malloc_stacktrace_lines, policy_list, buckets, component_name,
False)
sys.stdout.write('MAPPED_LIBRARIES:\n')
for l in mapping_lines:
sys.stdout.write(l)
@staticmethod
def check_stacktrace_line(stacktrace_line, buckets):
"""Checks if a given stacktrace_line is valid as stacktrace.
Args:
stacktrace_line: A string to be checked.
buckets: A dict mapping bucket ids and their corresponding Bucket
objects.
Returns:
True if the given stacktrace_line is valid.
"""
words = stacktrace_line.split()
if len(words) < BUCKET_ID + 1:
return False
if words[BUCKET_ID - 1] != '@':
return False
bucket = buckets[int(words[BUCKET_ID])]
if bucket:
for address in bucket.stacktrace:
address_symbol_dict[address] = ''
return True
@staticmethod
def skip_lines_while(line_number, max_line_number, skipping_condition):
"""Increments line_number until skipping_condition(line_number) is false.
"""
while skipping_condition(line_number):
line_number += 1
if line_number >= max_line_number:
sys.stderr.write('invalid heap profile dump.')
return line_number
return line_number
def parse_stacktraces_while_valid(self, buckets, log_lines, ln):
"""Parses stacktrace lines while the lines are valid.
Args:
buckets: A dict mapping bucket ids and their corresponding Bucket
objects.
log_lines: A list of lines to be parsed.
ln: An integer representing the starting line number in log_lines.
Returns:
A pair of a list of valid lines and an integer representing the last
line number in log_lines.
"""
ln = self.skip_lines_while(
ln, len(log_lines), lambda n: not log_lines[n].split()[0].isdigit())
stacktrace_lines_start = ln
ln = self.skip_lines_while(
ln, len(log_lines),
lambda n: self.check_stacktrace_line(log_lines[n], buckets))
return (log_lines[stacktrace_lines_start:ln], ln)
def parse_stacktraces(self, buckets):
"""Parses lines in self.log_lines as stacktrace.
Valid stacktrace lines are stored into self.mmap_stacktrace_lines and
self.malloc_stacktrace_lines.
Args:
buckets: A dict mapping bucket ids and their corresponding Bucket
objects.
Returns:
A string representing a version of the stacktrace dump. '' for invalid
dump.
"""
version = ''
# Skip until an identifiable line.
headers = ('STACKTRACES:\n', 'MMAP_STACKTRACES:\n', 'heap profile: ')
ln = self.skip_lines_while(
0, len(self.log_lines),
lambda n: not self.log_lines[n].startswith(headers))
# Identify a version.
if self.log_lines[ln].startswith('heap profile: '):
version = self.log_lines[ln][13:].strip()
if version == DUMP_DEEP_2 or version == DUMP_DEEP_3:
ln = self.skip_lines_while(
ln, len(self.log_lines),
lambda n: self.log_lines[n] != 'MMAP_STACKTRACES:\n')
else:
sys.stderr.write(' invalid heap profile dump version:%s\n' % version)
return ''
elif self.log_lines[ln] == 'STACKTRACES:\n':
version = DUMP_DEEP_1
elif self.log_lines[ln] == 'MMAP_STACKTRACES:\n':
version = DUMP_DEEP_2
if version == DUMP_DEEP_3:
sys.stderr.write(' heap profile dump version: %s\n' % version)
(self.mmap_stacktrace_lines, ln) = self.parse_stacktraces_while_valid(
buckets, self.log_lines, ln)
ln = self.skip_lines_while(
ln, len(self.log_lines),
lambda n: self.log_lines[n] != 'MALLOC_STACKTRACES:\n')
(self.malloc_stacktrace_lines, ln) = self.parse_stacktraces_while_valid(
buckets, self.log_lines, ln)
return version
elif version == DUMP_DEEP_2:
sys.stderr.write(' heap profile dump version: %s\n' % version)
(self.mmap_stacktrace_lines, ln) = self.parse_stacktraces_while_valid(
buckets, self.log_lines, ln)
ln = self.skip_lines_while(
ln, len(self.log_lines),
lambda n: self.log_lines[n] != 'MALLOC_STACKTRACES:\n')
(self.malloc_stacktrace_lines, ln) = self.parse_stacktraces_while_valid(
buckets, self.log_lines, ln)
self.malloc_stacktrace_lines.extend(self.mmap_stacktrace_lines)
self.mmap_stacktrace_lines = []
return version
elif version == DUMP_DEEP_1:
sys.stderr.write(' heap profile dump version: %s\n' % version)
(self.malloc_stacktrace_lines, ln) = self.parse_stacktraces_while_valid(
buckets, self.log_lines, ln)
return version
else:
sys.stderr.write(' invalid heap profile dump version:%s\n' % version)
return ''
def parse_global_stats(self):
"""Parses lines in self.log_lines as global stats."""
ln = self.skip_lines_while(
0, len(self.log_lines),
lambda n: self.log_lines[n] != 'GLOBAL_STATS:\n')
for prefix in ['total', 'file', 'anonymous', 'other', 'mmap', 'tcmalloc']:
ln = self.skip_lines_while(
ln, len(self.log_lines),
lambda n: self.log_lines[n].split()[0] != prefix)
words = self.log_lines[ln].split()
self.counters[prefix + '_virtual'] = int(words[-2])
self.counters[prefix + '_committed'] = int(words[-1])
def parse_log(self, buckets):
self.parse_global_stats()
self.log_version = self.parse_stacktraces(buckets)
@staticmethod
def accumulate_size_for_policy(stacktrace_lines,
policy_list, buckets, sizes, mmap):
for l in stacktrace_lines:
words = l.split()
bucket = buckets[int(words[BUCKET_ID])]
component_match = get_component(policy_list, bucket, mmap)
sizes[component_match] += int(words[COMMITTED])
if component_match.startswith('tc-'):
sizes['tc-total-log'] += int(words[COMMITTED])
elif component_match.startswith('mmap-'):
sizes['mmap-total-log'] += int(words[COMMITTED])
else:
sizes['other-total-log'] += int(words[COMMITTED])
def apply_policy(self, policy_list, buckets, first_log_time):
"""Aggregates the total memory size of each component.
Iterate through all stacktraces and attribute them to one of the components
based on the policy. It is important to apply policy in right order.
Args:
policy_list: A list containing Policy objects. (Parsed policy data by
parse_policy.)
buckets: A dict mapping bucket ids and their corresponding Bucket
objects.
first_log_time: An integer representing time when the first log is
dumped.
Returns:
A dict mapping components and their corresponding sizes.
"""
sys.stderr.write('apply policy:%s\n' % (self.log_path))
sizes = dict((c, 0) for c in components)
self.accumulate_size_for_policy(self.mmap_stacktrace_lines,
policy_list, buckets, sizes, True)
self.accumulate_size_for_policy(self.malloc_stacktrace_lines,
policy_list, buckets, sizes, False)
sizes['mmap-no-log'] = self.counters['mmap_committed'] - sizes[
'mmap-total-log']
sizes['mmap-total-record'] = self.counters['mmap_committed']
sizes['mmap-total-record-vm'] = self.counters['mmap_virtual']
sizes['tc-no-log'] = self.counters['tcmalloc_committed'] - sizes[
'tc-total-log']
sizes['tc-total-record'] = self.counters['tcmalloc_committed']
sizes['tc-unused'] = sizes['mmap-tcmalloc'] - self.counters[
'tcmalloc_committed']
sizes['tc-total'] = sizes['mmap-tcmalloc']
for key, value in { 'total': 'total_committed',
'filemapped': 'file_committed',
'anonymous': 'anonymous_committed',
'other': 'other_committed',
'total-vm': 'total_virtual',
'filemapped-vm': 'file_virtual',
'anonymous-vm': 'anonymous_virtual',
'other-vm': 'other_virtual' }.items():
if key in sizes:
sizes[key] = self.counters[value]
if 'unknown' in sizes:
sizes['unknown'] = self.counters['total_committed'] - self.counters[
'mmap_committed']
if 'total-exclude-profiler' in sizes:
sizes['total-exclude-profiler'] = self.counters[
'total_committed'] - sizes['mmap-profiler']
if 'hour' in sizes:
sizes['hour'] = (self.log_time - first_log_time) / 60.0 / 60.0
if 'minute' in sizes:
sizes['minute'] = (self.log_time - first_log_time) / 60.0
if 'second' in sizes:
sizes['second'] = self.log_time - first_log_time
return sizes
@staticmethod
def accumulate_size_for_expand(stacktrace_lines, policy_list, buckets,
component_name, depth, sizes, mmap):
for line in stacktrace_lines:
words = line.split()
bucket = buckets[int(words[BUCKET_ID])]
component_match = get_component(policy_list, bucket, mmap)
if component_match == component_name:
stacktrace_sequence = ''
for address in bucket.stacktrace[1 : min(len(bucket.stacktrace),
1 + depth)]:
stacktrace_sequence += address_symbol_dict[address] + ' '
if not stacktrace_sequence in sizes:
sizes[stacktrace_sequence] = 0
sizes[stacktrace_sequence] += int(words[COMMITTED])
def expand(self, policy_list, buckets, component_name, depth):
"""Prints all stacktraces in a given component of given depth.
Args:
policy_list: A list containing Policy objects. (Parsed policy data by
parse_policy.)
buckets: A dict mapping bucket ids and their corresponding Bucket
objects.
component_name: A name of component for filtering.
depth: An integer representing depth to be printed.
"""
sizes = {}
self.accumulate_size_for_expand(
self.mmap_stacktrace_lines, policy_list, buckets, component_name,
depth, sizes, True)
self.accumulate_size_for_expand(
self.malloc_stacktrace_lines, policy_list, buckets, component_name,
depth, sizes, False)
sorted_sizes_list = sorted(
sizes.iteritems(), key=(lambda x: x[1]), reverse=True)
total = 0
for size_pair in sorted_sizes_list:
sys.stdout.write('%10d %s\n' % (size_pair[1], size_pair[0]))
total += size_pair[1]
sys.stderr.write('total: %d\n' % (total))
def read_symbols(symbol_path, mapping_lines, chrome_path):
"""Reads symbol names from a .symbol file or a Chrome binary with pprof.
Args:
symbol_path: A string representing a path for a .symbol file.
mapping_lines: A list of strings containing /proc/.../maps.
chrome_path: A string representing a path for a Chrome binary.
"""
with open(symbol_path, mode='a+') as symbol_f:
symbol_lines = symbol_f.readlines()
if not symbol_lines:
with tempfile.NamedTemporaryFile(
suffix='maps', prefix="dmprof", mode='w+') as pprof_in:
with tempfile.NamedTemporaryFile(
suffix='symbols', prefix="dmprof", mode='w+') as pprof_out:
for line in mapping_lines:
pprof_in.write(line)
address_list = sorted(address_symbol_dict)
for key in address_list:
pprof_in.write(key + '\n')
pprof_in.seek(0)
p = subprocess.Popen(
'%s --symbols %s' % (PPROF_PATH, chrome_path),
shell=True, stdin=pprof_in, stdout=pprof_out)
p.wait()
pprof_out.seek(0)
symbols = pprof_out.readlines()
for address, symbol in zip(address_list, symbols):
address_symbol_dict[address] = symbol.strip()
for address, symbol in address_symbol_dict.iteritems():
symbol_f.write('%s %s\n' % (address, symbol))
else:
for l in symbol_lines:
items = l.split()
address_symbol_dict[items[0]] = items[1]
def parse_policy(policy_path):
"""Parses policy file.
A policy file contains component names and their
stacktrace patterns written as regular expressions.
The patterns are matched against the symbols of each
stacktrace, in the order they are written in the policy file.
Args:
policy_path: A path for a policy file.
Returns:
A list of Policy objects parsed from the policy file.
"""
with open(policy_path, mode='r') as policy_f:
policy_lines = policy_f.readlines()
policy_version = POLICY_DEEP_1
if policy_lines[0].startswith('heap profile policy: '):
policy_version = policy_lines[0][21:].strip()
policy_lines.pop(0)
policy_list = []
if policy_version == POLICY_DEEP_2 or policy_version == POLICY_DEEP_1:
sys.stderr.write(' heap profile policy version: %s\n' % policy_version)
for line in policy_lines:
if line[0] == '#':
continue
if policy_version == POLICY_DEEP_2:
(name, allocation_type, pattern) = line.strip().split(None, 2)
mmap = False
if allocation_type == 'mmap':
mmap = True
elif policy_version == POLICY_DEEP_1:
name = line.split()[0]
pattern = line[len(name) : len(line)].strip()
mmap = False
if pattern != 'default':
policy_list.append(Policy(name, mmap, pattern))
if components.count(name) == 0:
components.append(name)
else:
sys.stderr.write(' invalid heap profile policy version: %s\n' % (
policy_version))
return policy_list
def main():
if (len(sys.argv) < 5) or (not (sys.argv[1] in ['--csv',
'--expand',
'--list',
'--stacktrace',
'--pprof'])):
sys.stderr.write("""Usage:
%s [options] <chrome-binary> <policy> <profile> [component-name] [depth]
Options:
--csv Output result in csv format
--stacktrace Convert raw address to symbol names
--list Lists components and their sizes
--expand Show all stacktraces in the specified component
of given depth with their sizes
--pprof Format the profile file so it can be processed
by pprof
Examples:
dmprof --csv Debug/chrome dmpolicy hprof.12345.0001.heap > result.csv
dmprof --list Debug/chrome dmpolicy hprof.12345.0012.heap
dmprof --expand Debug/chrome dmpolicy hprof.12345.0012.heap tc-webkit 4
dmprof --pprof Debug/chrome dmpolicy hprof.12345.0012.heap > for_pprof.txt
""" % (sys.argv[0]))
sys.exit(1)
action = sys.argv[1]
chrome_path = sys.argv[2]
policy_path = sys.argv[3]
log_path = sys.argv[4]
sys.stderr.write('parsing a policy file\n')
policy_list = parse_policy(policy_path)
p = re.compile(r'\.[0-9][0-9][0-9][0-9]\.heap')
prefix = p.sub('', log_path)
symbol_path = prefix + '.symbols'
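# e.g. (illustrative) log_path "hprof.12345.0001.heap" gives prefix
# "hprof.12345"; symbols are cached in "hprof.12345.symbols" and the process
# mappings are read from "hprof.12345.maps" below.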
sys.stderr.write('parsing the maps file\n')
maps_path = prefix + '.maps'
with open(maps_path, mode='r') as maps_f:
maps_lines = maps_f.readlines()
# Reading buckets
sys.stderr.write('parsing the bucket file\n')
buckets = defaultdict(lambda: None)
bucket_count = 0
n = 0
while True:
buckets_path = '%s.%04d.buckets' % (prefix, n)
if not os.path.exists(buckets_path):
if n > 10:
break
n += 1
continue
sys.stderr.write('reading buckets from %s\n' % (buckets_path))
with open(buckets_path, mode='r') as buckets_f:
for l in buckets_f:
words = l.split()
st = []
for i in range(1, len(words)):
st.append(words[i])
buckets[int(words[0])] = Bucket(st)
bucket_count += 1
n += 1
sys.stderr.write('the number of buckets: %d\n' % (bucket_count))
log_path_list = []
log_path_list.append(log_path)
if action == '--csv':
# search for the sequence of files
n = int(log_path[len(log_path) - 9 : len(log_path) - 5])
n += 1 # skip current file
while True:
p = '%s.%04d.heap' % (prefix, n)
if os.path.exists(p):
log_path_list.append(p)
else:
break
n += 1
logs = []
for path in log_path_list:
logs.append(Log(path, buckets))
sys.stderr.write('getting symbols\n')
read_symbols(symbol_path, maps_lines, chrome_path)
if action == '--stacktrace':
logs[0].dump_stacktrace(buckets)
elif action == '--csv':
sys.stdout.write(','.join(components))
sys.stdout.write('\n')
for log in logs:
component_sizes = log.apply_policy(policy_list, buckets, logs[0].log_time)
s = []
for c in components:
if c in ['hour', 'minute', 'second']:
s.append('%05.5f' % (component_sizes[c]))
else:
s.append('%05.5f' % (component_sizes[c] / 1024.0 / 1024.0))
sys.stdout.write(','.join(s))
sys.stdout.write('\n')
elif action == '--list':
component_sizes = logs[0].apply_policy(
policy_list, buckets, logs[0].log_time)
for c in components:
if c in ['hour', 'minute', 'second']:
sys.stdout.write('%30s %10.3f\n' % (c, component_sizes[c]))
else:
sys.stdout.write('%30s %10.3f\n' % (
c, component_sizes[c] / 1024.0 / 1024.0))
elif action == '--expand':
component_name = sys.argv[5]
depth = sys.argv[6]
logs[0].expand(policy_list, buckets, component_name, int(depth))
elif action == '--pprof':
if len(sys.argv) > 5:
logs[0].dump_for_pprof(policy_list, buckets, maps_lines, sys.argv[5])
else:
logs[0].dump_for_pprof(policy_list, buckets, maps_lines, None)
if __name__ == '__main__':
sys.exit(main())
import sys, unittest, re, os.path
sys.path.insert(0, os.path.join(os.path.dirname(__file__), '..', '..', 'src'))
from tempfile import mkdtemp
from shutil import rmtree
from Exscript import Host
from Exscript.FileLogger import FileLogger
from LoggerTest import LoggerTest, FakeJob
class FakeError(Exception):
pass
class FileLoggerTest(LoggerTest):
CORRELATE = FileLogger
def setUp(self):
self.tempdir = mkdtemp()
self.logdir = os.path.join(self.tempdir, 'non-existent')
self.logger = FileLogger(self.logdir, clearmem = False)
self.job = FakeJob('fake')
self.logfile = os.path.join(self.logdir, 'fake.log')
self.errfile = self.logfile + '.error'
def tearDown(self):
LoggerTest.tearDown(self)
rmtree(self.tempdir)
def testConstructor(self):
self.assert_(os.path.isdir(self.tempdir))
self.failIf(os.path.exists(self.logfile))
self.failIf(os.path.exists(self.errfile))
def testAddLog(self):
log = LoggerTest.testAddLog(self)
self.assert_(os.path.isfile(self.logfile), 'No such file: ' + self.logfile)
self.failIf(os.path.exists(self.errfile))
return log
def testLog(self):
log = LoggerTest.testLog(self)
self.assert_(os.path.isfile(self.logfile))
self.failIf(os.path.exists(self.errfile))
return log
def testLogAborted(self):
log = LoggerTest.testLogAborted(self)
self.assert_(os.path.isfile(self.logfile))
self.assert_(os.path.isfile(self.errfile))
return log
def testLogSucceeded(self):
log = LoggerTest.testLogSucceeded(self)
self.assert_(os.path.isfile(self.logfile))
self.failIf(os.path.isfile(self.errfile))
return log
def testAddLog2(self):
# Like testAddLog(), but with attempt = 2.
self.logfile = os.path.join(self.logdir, self.job.name + '_retry1.log')
self.errfile = self.logfile + '.error'
self.failIf(os.path.exists(self.logfile))
self.failIf(os.path.exists(self.errfile))
self.logger.add_log(id(self.job), self.job.name, 2)
self.assert_(os.path.isfile(self.logfile))
self.failIf(os.path.exists(self.errfile))
content = open(self.logfile).read()
self.assertEqual(content, '')
def testLog2(self):
# Like testLog(), but with attempt = 2.
self.testAddLog2()
self.logger.log(id(self.job), 'hello world')
self.assert_(os.path.isfile(self.logfile))
self.failIf(os.path.exists(self.errfile))
content = open(self.logfile).read()
self.assertEqual(content, 'hello world')
def testLogSucceeded2(self):
# With attempt = 2.
self.testLog2()
self.logger.log_succeeded(id(self.job))
self.assert_(os.path.isfile(self.logfile))
self.failIf(os.path.exists(self.errfile))
def suite():
return unittest.TestLoader().loadTestsFromTestCase(FileLoggerTest)
if __name__ == '__main__':
unittest.TextTestRunner(verbosity = 2).run(suite()) | unknown | codeparrot/codeparrot-clean | ||
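The naming convention asserted by `testAddLog2`/`testLog2` above — the first attempt logs to `<name>.log`, attempt N (N > 1) to `<name>_retryN-1.log`, with a `.error` sibling marking a failed job — can be sketched in isolation. The helper below is an illustration inferred from the test assertions, not `FileLogger`'s actual implementation:

```python
import os
import tempfile

def log_paths(logdir, job_name, attempt, failed=False):
    """Reproduce the log-file naming the tests above assert:
    attempt 1 -> '<name>.log', attempt N>1 -> '<name>_retryN-1.log',
    plus a '<logfile>.error' sibling when the job failed."""
    suffix = '' if attempt == 1 else '_retry%d' % (attempt - 1)
    logfile = os.path.join(logdir, job_name + suffix + '.log')
    return (logfile, logfile + '.error') if failed else (logfile, None)

logdir = tempfile.mkdtemp()
first, _ = log_paths(logdir, 'fake', attempt=1)
retry, err = log_paths(logdir, 'fake', attempt=2, failed=True)
print(os.path.basename(first))   # fake.log
print(os.path.basename(retry))   # fake_retry1.log
print(os.path.basename(err))     # fake_retry1.log.error
```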
//===----------------------------------------------------------------------===//
//
// Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions.
// See https://llvm.org/LICENSE.txt for license information.
// SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
//
//===----------------------------------------------------------------------===//
#ifndef LLVM_CLANG_TOOLS_EXTRA_CLANG_TIDY_BUGPRONE_POINTERARITHMETICONPOLYMORPHICOBJECTCHECK_H
#define LLVM_CLANG_TOOLS_EXTRA_CLANG_TIDY_BUGPRONE_POINTERARITHMETICONPOLYMORPHICOBJECTCHECK_H
#include "../ClangTidyCheck.h"
namespace clang::tidy::bugprone {
/// Finds pointer arithmetic performed on classes that contain a
/// virtual function.
///
/// For the user-facing documentation see:
/// https://clang.llvm.org/extra/clang-tidy/checks/bugprone/pointer-arithmetic-on-polymorphic-object.html
class PointerArithmeticOnPolymorphicObjectCheck : public ClangTidyCheck {
public:
PointerArithmeticOnPolymorphicObjectCheck(StringRef Name,
ClangTidyContext *Context);
void storeOptions(ClangTidyOptions::OptionMap &Opts) override;
void registerMatchers(ast_matchers::MatchFinder *Finder) override;
void check(const ast_matchers::MatchFinder::MatchResult &Result) override;
bool isLanguageVersionSupported(const LangOptions &LangOpts) const override {
return LangOpts.CPlusPlus;
}
std::optional<TraversalKind> getCheckTraversalKind() const override {
return TK_IgnoreUnlessSpelledInSource;
}
private:
const bool IgnoreInheritedVirtualFunctions;
};
} // namespace clang::tidy::bugprone
#endif // LLVM_CLANG_TOOLS_EXTRA_CLANG_TIDY_BUGPRONE_POINTERARITHMETICONPOLYMORPHICOBJECTCHECK_H | c | github | https://github.com/llvm/llvm-project | clang-tools-extra/clang-tidy/bugprone/PointerArithmeticOnPolymorphicObjectCheck.h |
<?php
/*
* This file is part of the Symfony package.
*
* (c) Fabien Potencier <fabien@symfony.com>
*
* For the full copyright and license information, please view the LICENSE
* file that was distributed with this source code.
*/
namespace Symfony\Bridge\Twig\Validator\Constraints;
use Symfony\Component\Validator\Constraint;
use Symfony\Component\Validator\ConstraintValidator;
use Symfony\Component\Validator\Exception\UnexpectedTypeException;
use Symfony\Component\Validator\Exception\UnexpectedValueException;
use Twig\Environment;
use Twig\Error\Error;
use Twig\Loader\ArrayLoader;
use Twig\Source;
/**
* @author Mokhtar Tlili <tlili.mokhtar@gmail.com>
*/
class TwigValidator extends ConstraintValidator
{
public function __construct(private Environment $twig)
{
}
public function validate(mixed $value, Constraint $constraint): void
{
if (!$constraint instanceof Twig) {
throw new UnexpectedTypeException($constraint, Twig::class);
}
if (null === $value || '' === $value) {
return;
}
if (!\is_scalar($value) && !$value instanceof \Stringable) {
throw new UnexpectedValueException($value, 'string');
}
$value = (string) $value;
$realLoader = $this->twig->getLoader();
try {
$temporaryLoader = new ArrayLoader([$value]);
$this->twig->setLoader($temporaryLoader);
if (!$constraint->skipDeprecations) {
$prevErrorHandler = set_error_handler(static function ($level, $message, $file, $line) use (&$prevErrorHandler) {
if (\E_USER_DEPRECATED !== $level) {
return $prevErrorHandler ? $prevErrorHandler($level, $message, $file, $line) : false;
}
$templateLine = 0;
if (preg_match('/ at line (\d+)[ .]/', $message, $matches)) {
$templateLine = $matches[1];
}
throw new Error($message, $templateLine);
});
}
try {
$this->twig->parse($this->twig->tokenize(new Source($value, '')));
} finally {
if (!$constraint->skipDeprecations) {
restore_error_handler();
}
}
} catch (Error $e) {
$this->context->buildViolation($constraint->message)
->setParameter('{{ error }}', $e->getMessage())
->setParameter('{{ line }}', $e->getTemplateLine())
->setCode(Twig::INVALID_TWIG_ERROR)
->addViolation();
} finally {
$this->twig->setLoader($realLoader);
}
}
} | php | github | https://github.com/symfony/symfony | src/Symfony/Bridge/Twig/Validator/Constraints/TwigValidator.php |
//===----------------------------------------------------------------------===//
//
// Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions.
// See https://llvm.org/LICENSE.txt for license information.
// SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
//
//===----------------------------------------------------------------------===//
#ifndef LLVM_CLANG_TOOLS_EXTRA_CLANG_TIDY_LLVM_TYPESWITCHCASETYPESCHECK_H
#define LLVM_CLANG_TOOLS_EXTRA_CLANG_TIDY_LLVM_TYPESWITCHCASETYPESCHECK_H
#include "../ClangTidyCheck.h"
namespace clang::tidy::llvm_check {
/// Simplifies llvm::TypeSwitch Case calls by removing redundant explicit
/// template arguments or replacing 'auto' lambda parameters with explicit
/// types.
///
/// For the user-facing documentation see:
/// https://clang.llvm.org/extra/clang-tidy/checks/llvm/type-switch-case-types.html
class TypeSwitchCaseTypesCheck : public ClangTidyCheck {
public:
using ClangTidyCheck::ClangTidyCheck;
void registerMatchers(ast_matchers::MatchFinder *Finder) override;
void check(const ast_matchers::MatchFinder::MatchResult &Result) override;
bool isLanguageVersionSupported(const LangOptions &LangOpts) const override {
return LangOpts.CPlusPlus;
}
};
} // namespace clang::tidy::llvm_check
#endif // LLVM_CLANG_TOOLS_EXTRA_CLANG_TIDY_LLVM_TYPESWITCHCASETYPESCHECK_H | c | github | https://github.com/llvm/llvm-project | clang-tools-extra/clang-tidy/llvm/TypeSwitchCaseTypesCheck.h |
# -*- coding: utf-8 -*-
# Stores chart blocks for the various pages
class PageBlockLookup:
#
# MAIN DATA PAGES CHART BLOCKS
#
def get_uof_blocks(short_name):
''' Use of Force main data page blocks
'''
if short_name == 'BPD':
return {
'introduction': 'uof-introduction',
'first-block': 'uof-by-month',
'blocks': [
'uof-force-type',
'uof-by-assignment',
'officer-demographics',
'uof-race'
]
}
if short_name == 'LMPD':
return {
'introduction': 'uof-introduction',
'first-block': 'uof-by-month',
'blocks': [
'uof-force-type',
'uof-by-division',
'officer-demographics',
'uof-race'
]
}
if short_name == 'SRPD':
return {
'introduction': 'uof-introduction',
'first-block': 'uof-by-month',
'blocks': [
'uof-incident-force-type',
'uof-by-team',
'officer-demographics'
]
}
# IMPD's blocks are the default
return {
'introduction': 'uof-introduction',
'first-block': 'uof-force-type',
'blocks': [
'uof-by-inc-district',
'officer-demographics',
'uof-race'
]
}
def get_ois_blocks(short_name):
''' Officer-Involved Shooting main data page blocks
'''
if short_name == 'BPD':
return {
'introduction': 'ois-introduction',
'first-block': 'ois-by-month',
'blocks': [
'ois-by-assignment',
'officer-demographics',
'ois-race'
]
}
if short_name == 'SRPD':
return {
'introduction': 'ois-introduction',
'first-block': 'ois-by-month',
'blocks': [
'ois-by-type',
'ois-by-team',
'officer-demographics'
]
}
# IMPD's blocks are the default
return {
'introduction': 'ois-introduction',
'first-block': 'ois-by-inc-district',
'blocks': [
'ois-weapon-type',
'officer-demographics',
'ois-race'
]
}
def get_complaints_blocks(short_name):
''' Citizen Complaints main data page blocks
'''
if short_name == 'BPD':
return {
'introduction': 'complaints-introduction',
'first-block': 'complaints-by-month',
'blocks': [
'complaints-by-allegation',
'complaints-by-disposition',
'complaints-by-assignment',
'officer-demographics',
'complaints-by-demographic',
'complaints-by-officer-with-cap',
]
}
if short_name == 'SRPD':
return {
'introduction': 'complaints-introduction',
'first-block': 'complaints-by-month',
'blocks': [
'complaints-by-allegation',
'complaints-by-disposition',
'complaints-by-team',
'officer-demographics'
]
}
if short_name == 'WPD':
return {
'introduction': 'complaints-introduction',
'first-block': 'complaints-by-month',
'blocks': [
'complaints-by-allegation',
'complaints-by-allegation-type',
'complaints-by-finding',
'complaints-by-precinct',
'officer-demographics',
'complaints-by-demographic'
]
}
# IMPD's blocks are the default
return {
'introduction': 'complaints-introduction',
'first-block': 'complaints-by-month',
'blocks': [
'complaints-by-allegation',
'complaints-by-allegation-type',
'complaints-by-finding',
'complaints-by-precinct',
'officer-demographics',
'complaints-by-demographic',
'complaints-by-officer'
]
}
def get_pursuits_blocks(short_name):
''' Pursuits main data page blocks
'''
return {
'introduction': 'pursuits-introduction',
'first-block': 'pursuits-by-month',
'blocks': [
'pursuits-by-reason',
'pursuits-by-distance',
'pursuits-by-team'
]
}
def get_assaults_blocks(short_name):
''' Assaults on Officers main data page blocks
'''
return {
'introduction': 'assaults-introduction',
'first-block': 'assaults-by-service-type',
'blocks': [
'assaults-by-force-type',
'assaults-by-officer'
]
}
#
# SCHEMA PAGES CHART BLOCKS
#
def get_complaint_schema_blocks(short_name):
''' Citizen Complaint schema page blocks
'''
return {
'introduction': 'complaints-schema-introduction',
'footer': 'complaints-schema-footer',
'disclaimer': 'complaints-schema-disclaimer',
'blocks': 'complaints-schema-field-'
}
def get_uof_schema_blocks(short_name):
''' Use of Force schema page blocks
'''
return {
'introduction': 'uof-schema-introduction',
'footer': 'uof-schema-footer',
'disclaimer': 'uof-schema-disclaimer',
'blocks': 'uof-schema-field-'
}
def get_ois_schema_blocks(short_name):
''' Officer-Involved Shooting schema page blocks
'''
return {
'introduction': 'ois-schema-introduction',
'footer': 'ois-schema-footer',
'disclaimer': 'ois-schema-disclaimer',
'blocks': 'ois-schema-field-'
}
def get_pursuits_schema_blocks(short_name):
''' Pursuits schema page blocks
'''
return {
'introduction': 'pursuits-schema-introduction',
'footer': 'pursuits-schema-footer',
'disclaimer': 'pursuits-schema-disclaimer',
'blocks': 'pursuits-schema-field-'
}
def get_assaults_schema_blocks(short_name):
''' Assaults on Officers schema page blocks
'''
return {
'introduction': 'assaults-schema-introduction',
'footer': 'assaults-schema-footer',
'disclaimer': 'assaults-schema-disclaimer',
'blocks': 'assaults-schema-field-'
} | unknown | codeparrot/codeparrot-clean | ||
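Every method above follows the same pattern: a chain of `if short_name == ...: return {...}` overrides with IMPD's blocks as the fallback. A minimal refactoring sketch of that dispatch as a dict lookup with a default, using data copied from the use-of-force method above (the names `UOF_OVERRIDES` and `DEFAULT_UOF_BLOCKS` are illustrative, not from the original):

```python
# Per-department overrides keyed by short name, falling back to the
# IMPD blocks, which the original class treats as the default.
DEFAULT_UOF_BLOCKS = {
    'introduction': 'uof-introduction',
    'first-block': 'uof-force-type',
    'blocks': ['uof-by-inc-district', 'officer-demographics', 'uof-race'],
}

UOF_OVERRIDES = {
    'SRPD': {
        'introduction': 'uof-introduction',
        'first-block': 'uof-by-month',
        'blocks': ['uof-incident-force-type', 'uof-by-team',
                   'officer-demographics'],
    },
}

def get_uof_blocks(short_name):
    # dict.get() falls back to the default when there is no override.
    return UOF_OVERRIDES.get(short_name, DEFAULT_UOF_BLOCKS)

print(get_uof_blocks('SRPD')['first-block'])  # uof-by-month
print(get_uof_blocks('IMPD')['first-block'])  # uof-force-type
```

Keeping the fallback in one place avoids repeating it at the bottom of every method and makes adding a department a data change rather than a code change.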
"""
This module implements a class to deal with Uniform Diffie-Hellman handshakes.
The class `UniformDH' is used by the server as well as by the client to handle
the Uniform Diffie-Hellman handshake used by ScrambleSuit.
"""
import const
import random
import binascii
import Crypto.Hash.SHA256
import util
import mycrypto
import obfsproxy.transports.obfs3_dh as obfs3_dh
import obfsproxy.transports.base as base
import obfsproxy.common.log as logging
log = logging.get_obfslogger()
class UniformDH( object ):
"""
Provide methods to deal with Uniform Diffie-Hellman handshakes.
The class provides methods to extract public keys and to generate public
keys wrapped in a valid UniformDH handshake.
"""
def __init__( self, sharedSecret, weAreServer ):
"""
Initialise a UniformDH object.
"""
# `True' if we are the server; `False' otherwise.
self.weAreServer = weAreServer
# The shared UniformDH secret.
self.sharedSecret = sharedSecret
# Cache a UniformDH public key until it's added to the replay table.
self.remotePublicKey = None
# Uniform Diffie-Hellman object (implemented in obfs3_dh.py).
self.udh = None
# Used by the server so it can simply echo the client's epoch.
self.echoEpoch = None
def getRemotePublicKey( self ):
"""
Return the cached remote UniformDH public key.
"""
return self.remotePublicKey
def receivePublicKey( self, data, callback, srvState=None ):
"""
Extract the public key and invoke a callback with the master secret.
First, the UniformDH public key is extracted out of `data'. Then, the
shared master secret is computed and `callback' is invoked with the
master secret as argument. If any of this fails, `False' is returned.
"""
# Extract the public key sent by the remote host.
remotePublicKey = self.extractPublicKey(data, srvState)
if not remotePublicKey:
return False
if self.weAreServer:
self.remotePublicKey = remotePublicKey
# As server, we need a DH object; as client, we already have one.
self.udh = obfs3_dh.UniformDH()
assert self.udh is not None
try:
uniformDHSecret = self.udh.get_secret(remotePublicKey)
except ValueError:
raise base.PluggableTransportError("Corrupted public key.")
# First, hash the 4096-bit UniformDH secret to obtain the master key.
masterKey = Crypto.Hash.SHA256.new(uniformDHSecret).digest()
# Second, session keys are now derived from the master key.
callback(masterKey)
return True
def extractPublicKey( self, data, srvState=None ):
"""
Extract and return a UniformDH public key out of `data'.
Before the public key is touched, the HMAC is verified. If the HMAC is
invalid or some other error occurs, `False' is returned. Otherwise,
the public key is returned. The extracted data is finally drained from
the given `data' object.
"""
assert self.sharedSecret is not None
# Do we already have the minimum amount of data?
if len(data) < (const.PUBLIC_KEY_LENGTH + const.MARK_LENGTH +
const.HMAC_SHA256_128_LENGTH):
return False
log.debug("Attempting to extract the remote machine's UniformDH "
"public key out of %d bytes of data." % len(data))
handshake = data.peek()
# First, find the mark to efficiently locate the HMAC.
publicKey = handshake[:const.PUBLIC_KEY_LENGTH]
mark = mycrypto.HMAC_SHA256_128(self.sharedSecret, publicKey)
index = util.locateMark(mark, handshake)
if not index:
return False
# Now that we know where the authenticating HMAC is: verify it.
hmacStart = index + const.MARK_LENGTH
existingHMAC = handshake[hmacStart:
(hmacStart + const.HMAC_SHA256_128_LENGTH)]
authenticated = False
for epoch in util.expandedEpoch():
myHMAC = mycrypto.HMAC_SHA256_128(self.sharedSecret,
handshake[0 : hmacStart] + epoch)
if util.isValidHMAC(myHMAC, existingHMAC, self.sharedSecret):
self.echoEpoch = epoch
authenticated = True
break
log.debug("HMAC invalid. Trying next epoch value.")
if not authenticated:
log.warning("Could not verify the authentication message's HMAC.")
return False
        # Do nothing if the handshake is replayed.  Immediately closing
        # the connection would be suspicious.
if srvState is not None and srvState.isReplayed(existingHMAC):
log.warning("The HMAC was already present in the replay table.")
return False
data.drain(index + const.MARK_LENGTH + const.HMAC_SHA256_128_LENGTH)
if srvState is not None:
log.debug("Adding the HMAC authenticating the UniformDH message " \
"to the replay table: %s." % existingHMAC.encode('hex'))
srvState.registerKey(existingHMAC)
return handshake[:const.PUBLIC_KEY_LENGTH]
def createHandshake( self ):
"""
Create and return a ready-to-be-sent UniformDH handshake.
The returned handshake data includes the public key, pseudo-random
padding, the mark and the HMAC. If a UniformDH object has not been
initialised yet, a new instance is created.
"""
assert self.sharedSecret is not None
log.debug("Creating UniformDH handshake message.")
if self.udh is None:
self.udh = obfs3_dh.UniformDH()
publicKey = self.udh.get_public()
assert (const.MAX_PADDING_LENGTH - const.PUBLIC_KEY_LENGTH) >= 0
# Subtract the length of the public key to make the handshake on
# average as long as a redeemed ticket. That should thwart statistical
# length-based attacks.
padding = mycrypto.strongRandom(random.randint(0,
const.MAX_PADDING_LENGTH -
const.PUBLIC_KEY_LENGTH))
# Add a mark which enables efficient location of the HMAC.
mark = mycrypto.HMAC_SHA256_128(self.sharedSecret, publicKey)
if self.echoEpoch is None:
epoch = util.getEpoch()
else:
epoch = self.echoEpoch
log.debug("Echoing epoch rather than recreating it.")
# Authenticate the handshake including the current approximate epoch.
mac = mycrypto.HMAC_SHA256_128(self.sharedSecret,
publicKey + padding + mark + epoch)
return publicKey + padding + mark + mac
# Alias class name in order to provide a more intuitive API.
new = UniformDH | unknown | codeparrot/codeparrot-clean | ||
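The mark-then-MAC scheme that `createHandshake()` and `extractPublicKey()` describe — append an HMAC of the public key as a locator mark, then authenticate everything up to and including the mark — can be sketched with the standard library. This is an illustration only: the 16-byte truncation follows the `HMAC_SHA256_128` naming, and the 192-byte key and fixed 11-byte padding are placeholder sizes (the real code pads by a random amount and additionally checks epochs and a replay table):

```python
import hmac
import hashlib
import os

def hmac_sha256_128(key, data):
    # HMAC-SHA256 truncated to 128 bits, as the constant names suggest.
    return hmac.new(key, data, hashlib.sha256).digest()[:16]

def build_handshake(secret, public_key, epoch):
    padding = os.urandom(11)  # random-length in the original
    mark = hmac_sha256_128(secret, public_key)
    mac = hmac_sha256_128(secret, public_key + padding + mark + epoch)
    return public_key + padding + mark + mac

def locate_public_key(secret, blob, key_len, epoch):
    # Find the mark to locate the HMAC, then verify it, mirroring
    # extractPublicKey() above.
    mark = hmac_sha256_128(secret, blob[:key_len])
    index = blob.find(mark, key_len)
    if index < 0:
        return None
    expected = hmac_sha256_128(secret, blob[:index + 16] + epoch)
    received = blob[index + 16:index + 32]
    return blob[:key_len] if hmac.compare_digest(expected, received) else None

secret, pub, epoch = os.urandom(32), os.urandom(192), b'12345'
blob = build_handshake(secret, pub, epoch)
print(locate_public_key(secret, blob, 192, epoch) == pub)  # True
```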
import builtins
open = builtins.open
# for seek()
SEEK_SET = 0
SEEK_CUR = 1
SEEK_END = 2
r"""File-like objects that read from or write to a string buffer.
This implements (nearly) all stdio methods.
f = StringIO() # ready for writing
f = StringIO(buf) # ready for reading
f.close() # explicitly release resources held
flag = f.isatty() # always false
pos = f.tell() # get current position
f.seek(pos) # set current position
f.seek(pos, mode) # mode 0: absolute; 1: relative; 2: relative to EOF
buf = f.read() # read until EOF
buf = f.read(n) # read up to n bytes
buf = f.readline() # read until end of line ('\n') or EOF
list = f.readlines()# list of f.readline() results until EOF
    f.truncate([size]) # truncate file to at most size (default: current pos)
f.write(buf) # write at current position
f.writelines(list) # for line in list: f.write(line)
f.getvalue() # return whole file's contents as a string
Notes:
- Using a real file is often faster (but less convenient).
- There's also a much faster implementation in C, called cStringIO, but
it's not subclassable.
- fileno() is left unimplemented so that code which uses it triggers
an exception early.
- Seeking far beyond EOF and then writing will insert real null
bytes that occupy space in the buffer.
- There's a simple test set (see end of this file).
"""
try:
from errno import EINVAL
except ImportError:
EINVAL = 22
__all__ = ["StringIO"]
def _complain_ifclosed(closed):
if closed:
raise ValueError("I/O operation on closed file")
class StringIO:
"""class StringIO([buffer])
When a StringIO object is created, it can be initialized to an existing
string by passing the string to the constructor. If no string is given,
the StringIO will start empty.
The StringIO object can accept either Unicode or 8-bit strings, but
mixing the two may take some care. If both are used, 8-bit strings that
cannot be interpreted as 7-bit ASCII (that use the 8th bit) will cause
a UnicodeError to be raised when getvalue() is called.
"""
def __init__(self, buf = ''):
self.buf = buf
self.len = len(buf)
self.buflist = []
self.pos = 0
self.closed = False
self.softspace = 0
def __iter__(self):
return self
    def __next__(self):
        """A file object is its own iterator, for example iter(f) returns f
        (unless f is closed). When a file is used as an iterator, typically
        in a for loop (for example, for line in f: print(line)), the
        __next__() method is called repeatedly. This method returns the next
        input line, or raises StopIteration when EOF is hit.
        """
        _complain_ifclosed(self.closed)
        r = self.readline()
        if not r:
            raise StopIteration
        return r

    next = __next__  # keep the Python 2 method name as an alias
def close(self):
"""Free the memory buffer.
"""
if not self.closed:
self.closed = True
del self.buf, self.pos
def isatty(self):
"""Returns False because StringIO objects are not connected to a
tty-like device.
"""
_complain_ifclosed(self.closed)
return False
def seek(self, pos, mode = 0):
"""Set the file's current position.
The mode argument is optional and defaults to 0 (absolute file
positioning); other values are 1 (seek relative to the current
position) and 2 (seek relative to the file's end).
There is no return value.
"""
_complain_ifclosed(self.closed)
if self.buflist:
self.buf += ''.join(self.buflist)
self.buflist = []
if mode == 1:
pos += self.pos
elif mode == 2:
pos += self.len
self.pos = max(0, pos)
def tell(self):
"""Return the file's current position."""
_complain_ifclosed(self.closed)
return self.pos
def read(self, n = -1):
"""Read at most size bytes from the file
(less if the read hits EOF before obtaining size bytes).
If the size argument is negative or omitted, read all data until EOF
is reached. The bytes are returned as a string object. An empty
string is returned when EOF is encountered immediately.
"""
_complain_ifclosed(self.closed)
if self.buflist:
self.buf += ''.join(self.buflist)
self.buflist = []
if n is None or n < 0:
newpos = self.len
else:
newpos = min(self.pos+n, self.len)
r = self.buf[self.pos:newpos]
self.pos = newpos
return r
def readline(self, length=None):
r"""Read one entire line from the file.
A trailing newline character is kept in the string (but may be absent
when a file ends with an incomplete line). If the size argument is
present and non-negative, it is a maximum byte count (including the
trailing newline) and an incomplete line may be returned.
An empty string is returned only when EOF is encountered immediately.
Note: Unlike stdio's fgets(), the returned string contains null
characters ('\0') if they occurred in the input.
"""
_complain_ifclosed(self.closed)
if self.buflist:
self.buf += ''.join(self.buflist)
self.buflist = []
i = self.buf.find('\n', self.pos)
if i < 0:
newpos = self.len
else:
newpos = i+1
if length is not None and length >= 0:
if self.pos + length < newpos:
newpos = self.pos + length
r = self.buf[self.pos:newpos]
self.pos = newpos
return r
def readlines(self, sizehint = 0):
"""Read until EOF using readline() and return a list containing the
lines thus read.
        If the optional sizehint argument is present, instead of reading up
        to EOF, whole lines totalling approximately sizehint bytes are read
        (possibly more, to accommodate a final whole line).
"""
total = 0
lines = []
line = self.readline()
while line:
lines.append(line)
total += len(line)
if 0 < sizehint <= total:
break
line = self.readline()
return lines
def truncate(self, size=None):
"""Truncate the file's size.
If the optional size argument is present, the file is truncated to
(at most) that size. The size defaults to the current position.
The current file position is not changed unless the position
is beyond the new file size.
If the specified size exceeds the file's current size, the
file remains unchanged.
"""
_complain_ifclosed(self.closed)
if size is None:
size = self.pos
elif size < 0:
raise IOError(EINVAL, "Negative size not allowed")
elif size < self.pos:
self.pos = size
self.buf = self.getvalue()[:size]
self.len = size
def write(self, s):
"""Write a string to the file.
There is no return value.
"""
_complain_ifclosed(self.closed)
if not s: return
spos = self.pos
slen = self.len
if spos == slen:
self.buflist.append(s)
self.len = self.pos = spos + len(s)
return
if spos > slen:
self.buflist.append('\0'*(spos - slen))
slen = spos
newpos = spos + len(s)
if spos < slen:
if self.buflist:
self.buf += ''.join(self.buflist)
self.buflist = [self.buf[:spos], s, self.buf[newpos:]]
self.buf = ''
if newpos > slen:
slen = newpos
else:
self.buflist.append(s)
slen = newpos
self.len = slen
self.pos = newpos
def writelines(self, iterable):
"""Write a sequence of strings to the file. The sequence can be any
iterable object producing strings, typically a list of strings. There
is no return value.
(The name is intended to match readlines(); writelines() does not add
line separators.)
"""
write = self.write
for line in iterable:
write(line)
def flush(self):
"""Flush the internal buffer
"""
_complain_ifclosed(self.closed)
def getvalue(self):
"""
Retrieve the entire contents of the "file" at any time before
the StringIO object's close() method is called.
The StringIO object can accept either Unicode or 8-bit strings,
but mixing the two may take some care. If both are used, 8-bit
strings that cannot be interpreted as 7-bit ASCII (that use the
8th bit) will cause a UnicodeError to be raised when getvalue()
is called.
"""
_complain_ifclosed(self.closed)
if self.buflist:
self.buf += ''.join(self.buflist)
self.buflist = []
return self.buf
TextIOWrapper = StringIO
class RawIOBase:
def read(self,n=-1):
pass
def readall(self):
pass
def readinto(self,b):
pass
def write(self,b):
pass
BufferedReader = RawIOBase | unknown | codeparrot/codeparrot-clean | ||
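The module notes above warn that "seeking far beyond EOF and then writing will insert real null bytes that occupy space in the buffer". The stdlib `io.StringIO` behaves the same way, which makes the semantics easy to verify in isolation:

```python
import io

buf = io.StringIO()
buf.write('abc')
buf.seek(6)              # seek 3 positions past EOF
buf.write('xyz')         # the gap is filled with real null characters
value = buf.getvalue()
print(len(value))        # 9
print(repr(value[3:6]))  # '\x00\x00\x00'
```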
# Copyright 2018 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Stateless random ops which take seed as a tensor input."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import numpy as np
from tensorflow.python.compat import compat
from tensorflow.python.framework import dtypes
from tensorflow.python.framework import ops
from tensorflow.python.framework import tensor_util
from tensorflow.python.ops import array_ops
from tensorflow.python.ops import gen_stateless_random_ops
from tensorflow.python.ops import gen_stateless_random_ops_v2
from tensorflow.python.ops import math_ops
from tensorflow.python.util import deprecation
from tensorflow.python.util import dispatch
from tensorflow.python.util.tf_export import tf_export
ops.NotDifferentiable("StatelessMultinomial")
ops.NotDifferentiable("StatelessRandomBinomial")
ops.NotDifferentiable("StatelessRandomNormal")
ops.NotDifferentiable("StatelessRandomPoisson")
ops.NotDifferentiable("StatelessRandomUniform")
ops.NotDifferentiable("StatelessRandomUniformInt")
ops.NotDifferentiable("StatelessRandomUniformFullInt")
ops.NotDifferentiable("StatelessTruncatedNormal")
ops.NotDifferentiable("StatelessRandomNormalV2")
ops.NotDifferentiable("StatelessRandomUniformV2")
ops.NotDifferentiable("StatelessRandomUniformIntV2")
ops.NotDifferentiable("StatelessRandomUniformFullIntV2")
ops.NotDifferentiable("StatelessTruncatedNormalV2")
@tf_export("random.experimental.stateless_split")
@dispatch.add_dispatch_support
def split(seed, num=2):
"""Splits an RNG seed into `num` new seeds by adding a leading axis.
Example:
>>> seed = [1, 2]
>>> new_seeds = tf.random.experimental.stateless_split(seed, num=3)
>>> print(new_seeds)
tf.Tensor(
[[1105988140 1738052849]
[-335576002 370444179]
[ 10670227 -246211131]], shape=(3, 2), dtype=int32)
>>> tf.random.stateless_normal(shape=[3], seed=new_seeds[0, :])
<tf.Tensor: shape=(3,), dtype=float32, numpy=array([-0.59835213, -0.9578608 ,
0.9002807 ], dtype=float32)>
Args:
seed: an RNG seed (a tensor with shape [2] and dtype `int32` or
`int64`). (When using XLA, only `int32` is allowed.)
num: optional, a positive integer or scalar tensor indicating the number of
seeds to produce (default 2).
Returns:
A tensor with shape [num, 2] representing `num` new seeds. It will have the
    same dtype as `seed` (if `seed` doesn't have an explicit dtype, the dtype
will be determined by `tf.convert_to_tensor`).
"""
seed = ops.convert_to_tensor(seed)
return stateless_random_uniform(shape=[num, 2], seed=seed, dtype=seed.dtype,
minval=None, maxval=None)
@tf_export("random.experimental.stateless_fold_in")
@dispatch.add_dispatch_support
def fold_in(seed, data):
"""Folds in data to an RNG seed to form a new RNG seed.
For example, in a distributed-training setting, suppose we have a master seed
and a replica ID. We want to fold the replica ID into the master seed to
form a "replica seed" to be used by that replica later on, so that different
replicas will generate different random numbers but the reproducibility of the
whole system can still be controlled by the master seed:
>>> master_seed = [1, 2]
>>> replica_id = 3
>>> replica_seed = tf.random.experimental.stateless_fold_in(
... master_seed, replica_id)
>>> print(replica_seed)
tf.Tensor([1105988140 3], shape=(2,), dtype=int32)
>>> tf.random.stateless_normal(shape=[3], seed=replica_seed)
<tf.Tensor: shape=(3,), dtype=float32, numpy=array([0.03197195, 0.8979765 ,
0.13253039], dtype=float32)>
Args:
seed: an RNG seed (a tensor with shape [2] and dtype `int32` or
`int64`). (When using XLA, only `int32` is allowed.)
data: an `int32` or `int64` scalar representing data to be folded in to the
seed.
Returns:
A new RNG seed that is a deterministic function of the inputs and is
statistically safe for producing a stream of new pseudo-random values. It
    will have the same dtype as `data` (if `data` doesn't have an explicit dtype,
the dtype will be determined by `tf.convert_to_tensor`).
"""
data = ops.convert_to_tensor(data)
seed1 = stateless_random_uniform(shape=[], seed=seed, dtype=data.dtype,
minval=None, maxval=None)
return array_ops.stack([seed1, data])
def _get_key_counter_alg(seed):
if compat.forward_compatible(2021, 3, 1):
key, counter = gen_stateless_random_ops_v2.stateless_random_get_key_counter(
seed)
alg = gen_stateless_random_ops_v2.stateless_random_get_alg()
return key, counter, alg
else:
return gen_stateless_random_ops_v2.stateless_random_get_key_counter_alg(
seed)
@tf_export("random.stateless_uniform")
@dispatch.add_dispatch_support
def stateless_random_uniform(shape,
seed,
minval=0,
maxval=None,
dtype=dtypes.float32,
name=None):
"""Outputs deterministic pseudorandom values from a uniform distribution.
This is a stateless version of `tf.random.uniform`: if run twice with the
same seeds and shapes, it will produce the same pseudorandom numbers. The
output is consistent across multiple runs on the same hardware (and between
CPU and GPU), but may change between versions of TensorFlow or on non-CPU/GPU
hardware.
The generated values follow a uniform distribution in the range
`[minval, maxval)`. The lower bound `minval` is included in the range, while
the upper bound `maxval` is excluded.
For floats, the default range is `[0, 1)`. For ints, at least `maxval` must
be specified explicitly.
In the integer case, the random integers are slightly biased unless
`maxval - minval` is an exact power of two. The bias is small for values of
`maxval - minval` significantly smaller than the range of the output (either
`2**32` or `2**64`).
For full-range (i.e. inclusive of both max and min) random integers, pass
`minval=None` and `maxval=None` with an integer `dtype`. For an integer dtype
either both `minval` and `maxval` must be `None` or neither may be `None`. For
example:
```python
ints = tf.random.stateless_uniform(
[10], seed=(2, 3), minval=None, maxval=None, dtype=tf.int32)
```
Args:
shape: A 1-D integer Tensor or Python array. The shape of the output tensor.
seed: A shape [2] Tensor, the seed to the random number generator. Must have
dtype `int32` or `int64`. (When using XLA, only `int32` is allowed.)
minval: A Tensor or Python value of type `dtype`, broadcastable with
`shape` (for integer types, broadcasting is not supported, so it needs to
be a scalar). The lower bound on the range of random values to
generate. Pass `None` for full-range integers. Defaults to 0.
maxval: A Tensor or Python value of type `dtype`, broadcastable with
`shape` (for integer types, broadcasting is not supported, so it needs to
be a scalar). The upper bound on the range of random values to generate.
Defaults to 1 if `dtype` is floating point. Pass `None` for full-range
integers.
dtype: The type of the output: `float16`, `float32`, `float64`, `int32`, or
`int64`. For unbounded uniform ints (`minval`, `maxval` both `None`),
`uint32` and `uint64` may be used.
name: A name for the operation (optional).
Returns:
A tensor of the specified shape filled with random uniform values.
Raises:
ValueError: If `dtype` is integral and only one of `minval` or `maxval` is
specified.
"""
dtype = dtypes.as_dtype(dtype)
if dtype not in (dtypes.float16, dtypes.bfloat16, dtypes.float32,
dtypes.float64, dtypes.int32, dtypes.int64, dtypes.uint32,
dtypes.uint64):
raise ValueError("Invalid dtype %r" % dtype)
if dtype.is_integer:
if (minval is None) != (maxval is None):
raise ValueError("For integer dtype {}, minval and maxval must be both "
"`None` or both non-`None`.".format(dtype))
if minval is not None and dtype in (dtypes.uint32, dtypes.uint64):
raise ValueError("Invalid dtype for bounded uniform integers: %r" % dtype)
elif maxval is None:
maxval = 1
with ops.name_scope(name, "stateless_random_uniform",
[shape, seed, minval, maxval]) as name:
shape = tensor_util.shape_tensor(shape)
if dtype.is_integer and minval is None:
if compat.forward_compatible(2020, 10, 25):
key, counter, alg = _get_key_counter_alg(seed)
result = (gen_stateless_random_ops_v2
.stateless_random_uniform_full_int_v2(
shape, key=key, counter=counter, dtype=dtype, alg=alg,
name=name))
else:
result = gen_stateless_random_ops.stateless_random_uniform_full_int(
shape, seed=seed, dtype=dtype, name=name)
else:
minval = ops.convert_to_tensor(minval, dtype=dtype, name="min")
maxval = ops.convert_to_tensor(maxval, dtype=dtype, name="max")
if dtype.is_integer:
if compat.forward_compatible(2020, 10, 25):
key, counter, alg = _get_key_counter_alg(seed)
result = gen_stateless_random_ops_v2.stateless_random_uniform_int_v2(
shape, key=key, counter=counter, minval=minval, maxval=maxval,
alg=alg, name=name)
else:
result = gen_stateless_random_ops.stateless_random_uniform_int(
shape, seed=seed, minval=minval, maxval=maxval, name=name)
else:
if compat.forward_compatible(2020, 10, 25):
key, counter, alg = _get_key_counter_alg(seed)
rnd = gen_stateless_random_ops_v2.stateless_random_uniform_v2(
shape, key=key, counter=counter, dtype=dtype, alg=alg)
else:
rnd = gen_stateless_random_ops.stateless_random_uniform(
shape, seed=seed, dtype=dtype)
result = math_ops.add(rnd * (maxval - minval), minval, name=name)
tensor_util.maybe_set_static_shape(result, shape)
return result
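The docstring above notes that bounded integer sampling is slightly biased
unless `maxval - minval` is an exact power of two. A minimal pure-Python
sketch (independent of TensorFlow) of why naive modulo reduction introduces
that bias:

```python
from collections import Counter

def modulo_residue_counts(word_bits, span):
    # Map every raw word in [0, 2**word_bits) into [0, span) by modulo,
    # exactly as a naive bounded-integer sampler would.
    return Counter(raw % span for raw in range(2 ** word_bits))

# With an 8-bit raw word and span 6 (not a power of two), the residues are
# not equally likely: 256 = 6 * 42 + 4, so residues 0-3 occur 43 times and
# residues 4-5 only 42 times.
counts = modulo_residue_counts(8, 6)
```

The bias shrinks as the raw word range grows relative to the span, which is
why the docstring calls it small for spans far below `2**32` or `2**64`.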
@tf_export("random.stateless_binomial")
@dispatch.add_dispatch_support
def stateless_random_binomial(shape,
seed,
counts,
probs,
output_dtype=dtypes.int32,
name=None):
"""Outputs deterministic pseudorandom values from a binomial distribution.
The generated values follow a binomial distribution with specified count and
probability of success parameters.
This is a stateless version of `tf.random.Generator.binomial`: if run twice
with the same seeds and shapes, it will produce the same pseudorandom numbers.
The output is consistent across multiple runs on the same hardware (and
between CPU and GPU), but may change between versions of TensorFlow or on
non-CPU/GPU hardware.
Example:
```python
counts = [10., 20.]
# Probability of success.
probs = [0.8]
binomial_samples = tf.random.stateless_binomial(
shape=[2], seed=[123, 456], counts=counts, probs=probs)
counts = ... # Shape [3, 1, 2]
probs = ... # Shape [1, 4, 2]
shape = [3, 4, 3, 4, 2]
# Sample shape will be [3, 4, 3, 4, 2]
binomial_samples = tf.random.stateless_binomial(
shape=shape, seed=[123, 456], counts=counts, probs=probs)
```
Args:
shape: A 1-D integer Tensor or Python array. The shape of the output tensor.
seed: A shape [2] Tensor, the seed to the random number generator. Must have
dtype `int32` or `int64`. (When using XLA, only `int32` is allowed.)
counts: Tensor. The counts of the binomial distribution. Must be
broadcastable with `probs`, and broadcastable with the rightmost
dimensions of `shape`.
probs: Tensor. The probability of success for the binomial distribution.
Must be broadcastable with `counts` and broadcastable with the rightmost
dimensions of `shape`.
output_dtype: The type of the output. Default: tf.int32
name: A name for the operation (optional).
Returns:
samples: A Tensor of the specified shape filled with random binomial
values. For each i, each samples[..., i] is an independent draw from
the binomial distribution on counts[i] trials with probability of
success probs[i].
"""
with ops.name_scope(name, "stateless_random_binomial",
[shape, seed, counts, probs]) as name:
shape = tensor_util.shape_tensor(shape)
probs = ops.convert_to_tensor(
probs, dtype_hint=dtypes.float32, name="probs")
counts = ops.convert_to_tensor(
counts, dtype_hint=probs.dtype, name="counts")
result = gen_stateless_random_ops.stateless_random_binomial(
shape=shape, seed=seed, counts=counts, probs=probs, dtype=output_dtype)
tensor_util.maybe_set_static_shape(result, shape)
return result
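The `counts`/`probs` rule above (broadcast together, then match the rightmost
dimensions of `shape`) can be sketched without TensorFlow. `broadcast_shapes`
here is a hypothetical stand-in for the op's internal shape check, not part of
this module:

```python
from itertools import zip_longest

def broadcast_shapes(a, b):
    # NumPy-style broadcasting: align shapes on the right, pad with 1s,
    # and require equal or 1-valued sizes in every aligned position.
    out = []
    for x, y in zip_longest(reversed(a), reversed(b), fillvalue=1):
        if x != y and 1 not in (x, y):
            raise ValueError("incompatible shapes %r and %r" % (a, b))
        out.append(max(x, y))
    return tuple(reversed(out))

# counts with shape [3, 1, 2] and probs with shape [1, 4, 2] broadcast to
# [3, 4, 2], which must match the rightmost dimensions of `shape`.
param_shape = broadcast_shapes((3, 1, 2), (1, 4, 2))
sample_shape = (3, 4, 3, 4, 2)
trailing_ok = sample_shape[-len(param_shape):] == param_shape
```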
@tf_export("random.stateless_gamma")
@dispatch.add_dispatch_support
def stateless_random_gamma(shape,
seed,
alpha,
beta=None,
dtype=dtypes.float32,
name=None):
"""Outputs deterministic pseudorandom values from a gamma distribution.
The generated values follow a gamma distribution with specified concentration
(`alpha`) and inverse scale (`beta`) parameters.
This is a stateless version of `tf.random.gamma`: if run twice with the same
seeds and shapes, it will produce the same pseudorandom numbers. The output is
consistent across multiple runs on the same hardware (and between CPU and
GPU),
but may change between versions of TensorFlow or on non-CPU/GPU hardware.
A slight difference exists in the interpretation of the `shape` parameter
between `stateless_gamma` and `gamma`: in `gamma`, the `shape` is always
prepended to the shape of the broadcast of `alpha` with `beta`; whereas in
`stateless_gamma` the `shape` parameter must always encompass the shapes of
each of `alpha` and `beta` (which must broadcast together to match the
trailing dimensions of `shape`).
Note: Because internal calculations are done using `float64` and casting has
`floor` semantics, we must manually map zero outcomes to the smallest
possible positive floating-point value, i.e., `np.finfo(dtype).tiny`. This
means that `np.finfo(dtype).tiny` occurs more frequently than it otherwise
  should. This bias can only happen for small values of `alpha`, i.e.,
  `alpha << 1`, or large values of `beta`, i.e., `beta >> 1`.
The samples are differentiable w.r.t. alpha and beta.
The derivatives are computed using the approach described in
(Figurnov et al., 2018).
Example:
```python
samples = tf.random.stateless_gamma([10, 2], seed=[12, 34], alpha=[0.5, 1.5])
# samples has shape [10, 2], where each slice [:, 0] and [:, 1] represents
# the samples drawn from each distribution
samples = tf.random.stateless_gamma([7, 5, 2], seed=[12, 34], alpha=[.5, 1.5])
# samples has shape [7, 5, 2], where each slice [:, :, 0] and [:, :, 1]
# represents the 7x5 samples drawn from each of the two distributions
alpha = tf.constant([[1.], [3.], [5.]])
beta = tf.constant([[3., 4.]])
samples = tf.random.stateless_gamma(
[30, 3, 2], seed=[12, 34], alpha=alpha, beta=beta)
# samples has shape [30, 3, 2], with 30 samples each of 3x2 distributions.
with tf.GradientTape() as tape:
tape.watch([alpha, beta])
loss = tf.reduce_mean(tf.square(tf.random.stateless_gamma(
[30, 3, 2], seed=[12, 34], alpha=alpha, beta=beta)))
dloss_dalpha, dloss_dbeta = tape.gradient(loss, [alpha, beta])
# unbiased stochastic derivatives of the loss function
alpha.shape == dloss_dalpha.shape # True
beta.shape == dloss_dbeta.shape # True
```
Args:
shape: A 1-D integer Tensor or Python array. The shape of the output tensor.
seed: A shape [2] Tensor, the seed to the random number generator. Must have
dtype `int32` or `int64`. (When using XLA, only `int32` is allowed.)
alpha: Tensor. The concentration parameter of the gamma distribution. Must
be broadcastable with `beta`, and broadcastable with the rightmost
dimensions of `shape`.
beta: Tensor. The inverse scale parameter of the gamma distribution. Must be
broadcastable with `alpha` and broadcastable with the rightmost dimensions
of `shape`.
dtype: Floating point dtype of `alpha`, `beta`, and the output.
name: A name for the operation (optional).
Returns:
    samples: A Tensor of the specified shape filled with random gamma values.
      For each i, `samples[..., i]` is an independent draw from the gamma
      distribution with concentration `alpha[i]` and inverse scale `beta[i]`.
"""
with ops.name_scope(name, "stateless_random_gamma",
[shape, seed, alpha, beta]) as name:
shape = tensor_util.shape_tensor(shape)
alpha = ops.convert_to_tensor(alpha, dtype=dtype, name="alpha")
beta = ops.convert_to_tensor(
beta if beta is not None else 1, name="beta", dtype=dtype)
broadcast_shape = array_ops.broadcast_dynamic_shape(
array_ops.shape(alpha), array_ops.shape(beta))
alpha_broadcast = array_ops.broadcast_to(alpha, broadcast_shape)
result = math_ops.maximum(
np.finfo(alpha.dtype.as_numpy_dtype).tiny,
gen_stateless_random_ops.stateless_random_gamma_v2(
shape, seed=seed, alpha=alpha_broadcast) / beta)
tensor_util.maybe_set_static_shape(result, shape)
return result
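The body above samples `Gamma(alpha, 1)` and divides by `beta`, relying on the
scaling property of the gamma distribution. A small stdlib sketch of the same
identity (illustrative only, not the op's kernel):

```python
import random

# Draw Gamma(concentration=2, rate=4) samples by dividing unit-scale gamma
# draws by the inverse-scale parameter, as stateless_random_gamma does.
alpha, beta = 2.0, 4.0
rng = random.Random(0)
samples = [rng.gammavariate(alpha, 1.0) / beta for _ in range(10000)]

# The resulting mean should be close to alpha / beta = 0.5.
mean = sum(samples) / len(samples)
```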
@tf_export("random.stateless_poisson")
@dispatch.add_dispatch_support
def stateless_random_poisson(shape,
seed,
lam,
dtype=dtypes.int32,
name=None):
"""Outputs deterministic pseudorandom values from a Poisson distribution.
The generated values follow a Poisson distribution with specified rate
parameter.
This is a stateless version of `tf.random.poisson`: if run twice with the same
seeds and shapes, it will produce the same pseudorandom numbers. The output is
consistent across multiple runs on the same hardware, but may change between
versions of TensorFlow or on non-CPU/GPU hardware.
A slight difference exists in the interpretation of the `shape` parameter
between `stateless_poisson` and `poisson`: in `poisson`, the `shape` is always
prepended to the shape of `lam`; whereas in `stateless_poisson` the shape of
`lam` must match the trailing dimensions of `shape`.
Example:
```python
samples = tf.random.stateless_poisson([10, 2], seed=[12, 34], lam=[5, 15])
# samples has shape [10, 2], where each slice [:, 0] and [:, 1] represents
# the samples drawn from each distribution
samples = tf.random.stateless_poisson([7, 5, 2], seed=[12, 34], lam=[5, 15])
# samples has shape [7, 5, 2], where each slice [:, :, 0] and [:, :, 1]
# represents the 7x5 samples drawn from each of the two distributions
rate = tf.constant([[1.], [3.], [5.]])
samples = tf.random.stateless_poisson([30, 3, 1], seed=[12, 34], lam=rate)
# samples has shape [30, 3, 1], with 30 samples each of 3x1 distributions.
```
Args:
shape: A 1-D integer Tensor or Python array. The shape of the output tensor.
seed: A shape [2] Tensor, the seed to the random number generator. Must have
dtype `int32` or `int64`. (When using XLA, only `int32` is allowed.)
lam: Tensor. The rate parameter "lambda" of the Poisson distribution. Shape
must match the rightmost dimensions of `shape`.
dtype: Dtype of the samples (int or float dtypes are permissible, as samples
are discrete). Default: int32.
name: A name for the operation (optional).
Returns:
samples: A Tensor of the specified shape filled with random Poisson values.
For each i, each `samples[..., i]` is an independent draw from the Poisson
distribution with rate `lam[i]`.
"""
with ops.name_scope(name, "stateless_random_poisson",
[shape, seed, lam]) as name:
shape = tensor_util.shape_tensor(shape)
result = gen_stateless_random_ops.stateless_random_poisson(
shape, seed=seed, lam=lam, dtype=dtype)
tensor_util.maybe_set_static_shape(result, shape)
return result
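For intuition about the values this op produces, Knuth's multiplication method
is a classic way to draw Poisson variates for small rates. This is only an
illustrative stdlib sketch, not the kernel used by the op above:

```python
import math
import random

def poisson_sample(lam, rng):
    # Knuth's algorithm: count how many uniform draws it takes for the
    # running product to fall below exp(-lam). Suitable for small lam.
    limit = math.exp(-lam)
    k, product = 0, rng.random()
    while product > limit:
        k += 1
        product *= rng.random()
    return k

rng = random.Random(42)
samples = [poisson_sample(5.0, rng) for _ in range(10000)]
mean = sum(samples) / len(samples)  # should be close to lam = 5.0
```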
@tf_export("random.stateless_normal")
@dispatch.add_dispatch_support
def stateless_random_normal(shape,
seed,
mean=0.0,
stddev=1.0,
dtype=dtypes.float32,
name=None):
"""Outputs deterministic pseudorandom values from a normal distribution.
This is a stateless version of `tf.random.normal`: if run twice with the
same seeds and shapes, it will produce the same pseudorandom numbers. The
output is consistent across multiple runs on the same hardware (and between
CPU and GPU), but may change between versions of TensorFlow or on non-CPU/GPU
hardware.
Args:
shape: A 1-D integer Tensor or Python array. The shape of the output tensor.
seed: A shape [2] Tensor, the seed to the random number generator. Must have
dtype `int32` or `int64`. (When using XLA, only `int32` is allowed.)
mean: A 0-D Tensor or Python value of type `dtype`. The mean of the normal
distribution.
stddev: A 0-D Tensor or Python value of type `dtype`. The standard deviation
of the normal distribution.
dtype: The type of the output.
name: A name for the operation (optional).
Returns:
A tensor of the specified shape filled with random normal values.
"""
with ops.name_scope(name, "stateless_random_normal",
[shape, seed, mean, stddev]) as name:
shape = tensor_util.shape_tensor(shape)
mean = ops.convert_to_tensor(mean, dtype=dtype, name="mean")
stddev = ops.convert_to_tensor(stddev, dtype=dtype, name="stddev")
if compat.forward_compatible(2021, 3, 1):
key, counter, alg = _get_key_counter_alg(seed)
rnd = gen_stateless_random_ops_v2.stateless_random_normal_v2(
shape, key=key, counter=counter, dtype=dtype, alg=alg)
else:
rnd = gen_stateless_random_ops.stateless_random_normal(shape, seed, dtype)
result = math_ops.add(rnd * stddev, mean, name=name)
tensor_util.maybe_set_static_shape(result, shape)
return result
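As in the body above, a general normal sample is an affine transform of a
standard-normal draw (`rnd * stddev + mean`). A stdlib sketch of the same
transform:

```python
import random

def normal_sample(mean, stddev, rng):
    # Draw from N(0, 1) and apply the scale-and-shift used above.
    return rng.gauss(0.0, 1.0) * stddev + mean

rng = random.Random(123)
samples = [normal_sample(10.0, 2.0, rng) for _ in range(20000)]
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
```

The sample mean and variance should be close to 10 and 4 respectively.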
@tf_export("random.stateless_truncated_normal")
@dispatch.add_dispatch_support
def stateless_truncated_normal(shape,
seed,
mean=0.0,
stddev=1.0,
dtype=dtypes.float32,
name=None):
"""Outputs deterministic pseudorandom values, truncated normally distributed.
This is a stateless version of `tf.random.truncated_normal`: if run twice with
the same seeds and shapes, it will produce the same pseudorandom numbers. The
output is consistent across multiple runs on the same hardware (and between
CPU and GPU), but may change between versions of TensorFlow or on non-CPU/GPU
hardware.
The generated values follow a normal distribution with specified mean and
standard deviation, except that values whose magnitude is more than 2 standard
deviations from the mean are dropped and re-picked.
Args:
shape: A 1-D integer Tensor or Python array. The shape of the output tensor.
seed: A shape [2] Tensor, the seed to the random number generator. Must have
dtype `int32` or `int64`. (When using XLA, only `int32` is allowed.)
mean: A 0-D Tensor or Python value of type `dtype`. The mean of the
truncated normal distribution.
stddev: A 0-D Tensor or Python value of type `dtype`. The standard deviation
of the normal distribution, before truncation.
dtype: The type of the output.
name: A name for the operation (optional).
Returns:
A tensor of the specified shape filled with random truncated normal values.
"""
with ops.name_scope(name, "stateless_truncated_normal",
[shape, seed, mean, stddev]) as name:
shape = tensor_util.shape_tensor(shape)
mean = ops.convert_to_tensor(mean, dtype=dtype, name="mean")
stddev = ops.convert_to_tensor(stddev, dtype=dtype, name="stddev")
if compat.forward_compatible(2020, 10, 25):
key, counter, alg = _get_key_counter_alg(seed)
rnd = gen_stateless_random_ops_v2.stateless_truncated_normal_v2(
shape, key=key, counter=counter, dtype=dtype, alg=alg)
else:
rnd = gen_stateless_random_ops.stateless_truncated_normal(
shape, seed, dtype)
result = math_ops.add(rnd * stddev, mean, name=name)
tensor_util.maybe_set_static_shape(result, shape)
return result
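The truncation rule described above (drop and re-pick draws more than 2
standard deviations from the mean) is plain rejection sampling; a stdlib
sketch:

```python
import random

def truncated_normal_sample(mean, stddev, rng):
    # Re-draw until the standard-normal value lies within 2 standard
    # deviations, then scale and shift as the op above does.
    while True:
        z = rng.gauss(0.0, 1.0)
        if abs(z) <= 2.0:
            return z * stddev + mean

rng = random.Random(7)
samples = [truncated_normal_sample(0.0, 3.0, rng) for _ in range(5000)]
```

Every sample therefore lands in `[mean - 2 * stddev, mean + 2 * stddev]`.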
@tf_export(v1=["random.stateless_multinomial"])
@dispatch.add_dispatch_support
@deprecation.deprecated(
date=None, instructions="Use `tf.random.stateless_categorical` instead.")
def stateless_multinomial(logits,
num_samples,
seed,
output_dtype=dtypes.int64,
name=None):
"""Draws deterministic pseudorandom samples from a multinomial distribution.
This is a stateless version of `tf.random.categorical`: if run twice with the
same seeds and shapes, it will produce the same pseudorandom numbers. The
output is consistent across multiple runs on the same hardware (and between
CPU and GPU), but may change between versions of TensorFlow or on non-CPU/GPU
hardware.
Example:
```python
# samples has shape [1, 5], where each value is either 0 or 1 with equal
# probability.
samples = tf.random.stateless_categorical(
tf.math.log([[0.5, 0.5]]), 5, seed=[7, 17])
```
Args:
logits: 2-D Tensor with shape `[batch_size, num_classes]`. Each slice
`[i, :]` represents the unnormalized log-probabilities for all classes.
num_samples: 0-D. Number of independent samples to draw for each row slice.
seed: A shape [2] Tensor, the seed to the random number generator. Must have
dtype `int32` or `int64`. (When using XLA, only `int32` is allowed.)
output_dtype: integer type to use for the output. Defaults to int64.
name: Optional name for the operation.
Returns:
The drawn samples of shape `[batch_size, num_samples]`.
"""
with ops.name_scope(name, "stateless_multinomial", [logits, seed]):
return stateless_multinomial_categorical_impl(logits, num_samples,
output_dtype, seed)
@tf_export("random.stateless_categorical")
@dispatch.add_dispatch_support
def stateless_categorical(logits,
num_samples,
seed,
dtype=dtypes.int64,
name=None):
"""Draws deterministic pseudorandom samples from a categorical distribution.
  This is a stateless version of `tf.random.categorical`: if run twice with the
same seeds and shapes, it will produce the same pseudorandom numbers. The
output is consistent across multiple runs on the same hardware (and between
CPU and GPU), but may change between versions of TensorFlow or on non-CPU/GPU
hardware.
Example:
```python
# samples has shape [1, 5], where each value is either 0 or 1 with equal
# probability.
samples = tf.random.stateless_categorical(
tf.math.log([[0.5, 0.5]]), 5, seed=[7, 17])
```
Args:
logits: 2-D Tensor with shape `[batch_size, num_classes]`. Each slice
`[i, :]` represents the unnormalized log-probabilities for all classes.
num_samples: 0-D. Number of independent samples to draw for each row slice.
seed: A shape [2] Tensor, the seed to the random number generator. Must have
dtype `int32` or `int64`. (When using XLA, only `int32` is allowed.)
dtype: integer type to use for the output. Defaults to int64.
name: Optional name for the operation.
Returns:
The drawn samples of shape `[batch_size, num_samples]`.
"""
with ops.name_scope(name, "stateless_categorical", [logits, seed]):
return stateless_multinomial_categorical_impl(logits, num_samples, dtype,
seed)
def stateless_multinomial_categorical_impl(logits, num_samples, dtype, seed):
"""Implementation for stateless multinomial/categorical ops (v1/v2)."""
logits = ops.convert_to_tensor(logits, name="logits")
return gen_stateless_random_ops.stateless_multinomial(
logits, num_samples, seed, output_dtype=dtype)
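Both wrappers above sample class indices from unnormalized log-probabilities.
One classic way to do that is the Gumbel-max trick, shown here as an
illustrative stdlib sketch (not necessarily the kernel's actual method):

```python
import math
import random

def categorical_sample(logits, rng):
    # Gumbel-max trick: add independent Gumbel noise to each logit and take
    # the argmax; the winner is distributed according to softmax(logits)
    # without ever normalizing explicitly.
    noisy = [l - math.log(-math.log(rng.random())) for l in logits]
    return max(range(len(noisy)), key=noisy.__getitem__)

rng = random.Random(0)
logits = [math.log(0.5), math.log(0.5)]  # two equally likely classes
samples = [categorical_sample(logits, rng) for _ in range(10000)]
frac_ones = sum(samples) / len(samples)
```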
@dispatch.add_dispatch_support
@tf_export("random.stateless_parameterized_truncated_normal")
def stateless_parameterized_truncated_normal(shape,
seed,
means=0.0,
stddevs=1.0,
minvals=-2.0,
maxvals=2.0,
name=None):
"""Outputs random values from a truncated normal distribution.
The generated values follow a normal distribution with specified mean and
standard deviation, except that values whose magnitude is more than 2 standard
deviations from the mean are dropped and re-picked.
Examples:
  Sample from a truncated normal, with shape parameters that broadcast.
>>> means = 0.
>>> stddevs = tf.math.exp(tf.random.uniform(shape=[2, 3]))
>>> minvals = [-1., -2., -1000.]
>>> maxvals = [[10000.], [1.]]
>>> y = tf.random.stateless_parameterized_truncated_normal(
... shape=[10, 2, 3], seed=[7, 17],
... means=means, stddevs=stddevs, minvals=minvals, maxvals=maxvals)
>>> y.shape
TensorShape([10, 2, 3])
Args:
shape: A 1-D integer `Tensor` or Python array. The shape of the output
tensor.
seed: A shape [2] Tensor, the seed to the random number generator. Must have
dtype `int32` or `int64`. (When using XLA, only `int32` is allowed.)
means: A `Tensor` or Python value of type `dtype`. The mean of the truncated
normal distribution. This must broadcast with `stddevs`, `minvals` and
`maxvals`, and the broadcasted shape must be dominated by `shape`.
stddevs: A `Tensor` or Python value of type `dtype`. The standard deviation
of the truncated normal distribution. This must broadcast with `means`,
`minvals` and `maxvals`, and the broadcasted shape must be dominated by
`shape`.
minvals: A `Tensor` or Python value of type `dtype`. The minimum value of
the truncated normal distribution. This must broadcast with `means`,
`stddevs` and `maxvals`, and the broadcasted shape must be dominated by
`shape`.
maxvals: A `Tensor` or Python value of type `dtype`. The maximum value of
the truncated normal distribution. This must broadcast with `means`,
`stddevs` and `minvals`, and the broadcasted shape must be dominated by
`shape`.
name: A name for the operation (optional).
Returns:
A tensor of the specified shape filled with random truncated normal values.
"""
with ops.name_scope(name, "stateless_parameterized_truncated_normal",
[shape, means, stddevs, minvals, maxvals]) as name:
shape_tensor = tensor_util.shape_tensor(shape)
means_tensor = ops.convert_to_tensor(means, name="means")
stddevs_tensor = ops.convert_to_tensor(stddevs, name="stddevs")
minvals_tensor = ops.convert_to_tensor(minvals, name="minvals")
maxvals_tensor = ops.convert_to_tensor(maxvals, name="maxvals")
rnd = gen_stateless_random_ops.stateless_parameterized_truncated_normal(
shape_tensor, seed, means_tensor, stddevs_tensor, minvals_tensor,
maxvals_tensor)
tensor_util.maybe_set_static_shape(rnd, shape)
    return rnd
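Unlike `stateless_truncated_normal` above, this variant truncates to explicit
`[minvals, maxvals]` bounds. The drop-and-re-pick rule generalizes to
two-sided rejection, sketched here with the stdlib for scalar parameters
(illustrative only, not the op's kernel):

```python
import random

def param_truncated_normal(mean, stddev, minval, maxval, rng):
    # Reject draws outside [minval, maxval] and re-pick: the two-sided
    # analogue of the fixed +/-2-stddev truncation used elsewhere.
    while True:
        x = rng.gauss(mean, stddev)
        if minval <= x <= maxval:
            return x

rng = random.Random(11)
samples = [param_truncated_normal(0.0, 1.0, -1.0, 1000.0, rng)
           for _ in range(2000)]
```

Cutting off the left tail at -1 while leaving the right tail essentially
unbounded pulls the sample mean above zero.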
# Copyright (c) 2015 Red Hat, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import collections
import mock
import uuid
from neutron.agent.common import ovs_lib
from neutron.agent.linux import ip_lib
from neutron.tests import base as tests_base
from neutron.tests.common import net_helpers
from neutron.tests.functional.agent.linux import base
class OVSBridgeTestBase(base.BaseOVSLinuxTestCase):
# TODO(twilson) So far, only ovsdb-related tests are written. It would be
# good to also add the openflow-related functions
def setUp(self):
super(OVSBridgeTestBase, self).setUp()
self.ovs = ovs_lib.BaseOVS()
self.br = self.useFixture(net_helpers.OVSBridgeFixture()).bridge
def create_ovs_port(self, *interface_attrs):
# Convert ((a, b), (c, d)) to {a: b, c: d} and add 'type' by default
attrs = collections.OrderedDict(interface_attrs)
attrs.setdefault('type', 'internal')
port_name = tests_base.get_rand_device_name(net_helpers.PORT_PREFIX)
return (port_name, self.br.add_port(port_name, *attrs.items()))
def create_ovs_vif_port(self, iface_id=None, mac=None,
iface_field='iface-id'):
if iface_id is None:
iface_id = base.get_rand_name()
if mac is None:
mac = base.get_rand_name()
attrs = ('external_ids', {iface_field: iface_id, 'attached-mac': mac})
port_name, ofport = self.create_ovs_port(attrs)
return ovs_lib.VifPort(port_name, ofport, iface_id, mac, self.br)
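The attribute handling in `create_ovs_port` above can be exercised standalone;
this mirrors the comment's `((a, b), (c, d))` to `{a: b, c: d}` conversion
with the `'type'` default applied:

```python
import collections

def build_port_attrs(*interface_attrs):
    # Convert ((a, b), (c, d)) into an ordered {a: b, c: d} mapping and
    # default 'type' to 'internal' when the caller did not set it.
    attrs = collections.OrderedDict(interface_attrs)
    attrs.setdefault('type', 'internal')
    return attrs

default_attrs = build_port_attrs(('external_ids', {'iface-id': 'x'}))
explicit_attrs = build_port_attrs(('type', 'patch'))
```

Because an `OrderedDict` is used, caller-supplied attributes keep their
position and the defaulted `'type'` is appended last.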
class OVSBridgeTestCase(OVSBridgeTestBase):
def test_port_lifecycle(self):
(port_name, ofport) = self.create_ovs_port(('type', 'internal'))
# ofport should always be an integer string with value -1 or > 0.
self.assertTrue(int(ofport))
self.assertTrue(int(self.br.get_port_ofport(port_name)))
self.assertTrue(self.br.port_exists(port_name))
self.assertEqual(self.br.br_name,
self.br.get_bridge_for_iface(port_name))
self.br.delete_port(port_name)
self.assertFalse(self.br.port_exists(port_name))
def test_duplicate_port_may_exist_false(self):
port_name, ofport = self.create_ovs_port(('type', 'internal'))
cmd = self.br.ovsdb.add_port(self.br.br_name,
port_name, may_exist=False)
self.assertRaises(RuntimeError, cmd.execute, check_error=True)
def test_delete_port_if_exists_false(self):
        cmd = self.br.ovsdb.del_port('nonexistentport', if_exists=False)
self.assertRaises(RuntimeError, cmd.execute, check_error=True)
def test_replace_port(self):
port_name = tests_base.get_rand_device_name(net_helpers.PORT_PREFIX)
self.br.replace_port(port_name, ('type', 'internal'))
self.assertTrue(self.br.port_exists(port_name))
self.assertEqual('internal',
self.br.db_get_val('Interface', port_name, 'type'))
self.br.replace_port(port_name, ('type', 'internal'),
('external_ids', {'test': 'test'}))
self.assertTrue(self.br.port_exists(port_name))
self.assertEqual('test', self.br.db_get_val('Interface', port_name,
'external_ids')['test'])
def test_attribute_lifecycle(self):
(port_name, ofport) = self.create_ovs_port()
tag = 42
self.ovs.set_db_attribute('Port', port_name, 'tag', tag)
self.assertEqual(tag, self.ovs.db_get_val('Port', port_name, 'tag'))
self.assertEqual(tag, self.br.get_port_tag_dict()[port_name])
self.ovs.clear_db_attribute('Port', port_name, 'tag')
self.assertEqual(self.ovs.db_get_val('Port', port_name, 'tag'), [])
self.assertEqual(self.br.get_port_tag_dict()[port_name], [])
def test_get_bridge_external_bridge_id(self):
self.ovs.set_db_attribute('Bridge', self.br.br_name,
'external_ids',
{'bridge-id': self.br.br_name})
self.assertEqual(
self.br.br_name,
self.ovs.get_bridge_external_bridge_id(self.br.br_name))
def test_controller_lifecycle(self):
controllers = {'tcp:127.0.0.1:6633', 'tcp:172.17.16.10:55'}
self.br.set_controller(controllers)
self.assertSetEqual(controllers, set(self.br.get_controller()))
self.br.del_controller()
self.assertEqual([], self.br.get_controller())
def test_non_index_queries(self):
controllers = ['tcp:127.0.0.1:6633']
self.br.set_controller(controllers)
cmd = self.br.ovsdb.db_set('Controller', self.br.br_name,
('connection_mode', 'out-of-band'))
cmd.execute(check_error=True)
self.assertEqual('out-of-band',
self.br.db_get_val('Controller', self.br.br_name,
'connection_mode'))
def test_set_fail_mode_secure(self):
self.br.set_secure_mode()
self._assert_br_fail_mode(ovs_lib.FAILMODE_SECURE)
def test_set_fail_mode_standalone(self):
self.br.set_standalone_mode()
self._assert_br_fail_mode(ovs_lib.FAILMODE_STANDALONE)
def _assert_br_fail_mode(self, fail_mode):
self.assertEqual(
self.br.db_get_val('Bridge', self.br.br_name, 'fail_mode'),
fail_mode)
def test_set_protocols(self):
self.br.set_protocols('OpenFlow10')
self.assertEqual(
self.br.db_get_val('Bridge', self.br.br_name, 'protocols'),
"OpenFlow10")
def test_get_datapath_id(self):
brdev = ip_lib.IPDevice(self.br.br_name)
dpid = brdev.link.attributes['link/ether'].replace(':', '')
self.br.set_db_attribute('Bridge',
self.br.br_name, 'datapath_id', dpid)
self.assertIn(dpid, self.br.get_datapath_id())
def test_add_tunnel_port(self):
attrs = {
'remote_ip': '192.0.2.1', # RFC 5737 TEST-NET-1
'local_ip': '198.51.100.1', # RFC 5737 TEST-NET-2
}
port_name = tests_base.get_rand_device_name(net_helpers.PORT_PREFIX)
self.br.add_tunnel_port(port_name, attrs['remote_ip'],
attrs['local_ip'])
self.assertEqual(self.ovs.db_get_val('Interface', port_name, 'type'),
'gre')
options = self.ovs.db_get_val('Interface', port_name, 'options')
for attr, val in attrs.items():
self.assertEqual(val, options[attr])
def test_add_patch_port(self):
local = tests_base.get_rand_device_name(net_helpers.PORT_PREFIX)
peer = 'remotepeer'
self.br.add_patch_port(local, peer)
self.assertEqual(self.ovs.db_get_val('Interface', local, 'type'),
'patch')
options = self.ovs.db_get_val('Interface', local, 'options')
self.assertEqual(peer, options['peer'])
def test_get_port_name_list(self):
# Note that ovs-vsctl's list-ports does not include the port created
# with the same name as the bridge
ports = {self.create_ovs_port()[0] for i in range(5)}
self.assertSetEqual(ports, set(self.br.get_port_name_list()))
def test_get_iface_name_list(self):
ifaces = {self.create_ovs_port()[0] for i in range(5)}
self.assertSetEqual(ifaces, set(self.br.get_iface_name_list()))
def test_get_port_stats(self):
# Nothing seems to use this function?
(port_name, ofport) = self.create_ovs_port()
stats = set(self.br.get_port_stats(port_name).keys())
self.assertTrue(set(['rx_packets', 'tx_packets']).issubset(stats))
def test_get_vif_ports(self):
for i in range(2):
self.create_ovs_port()
vif_ports = [self.create_ovs_vif_port() for i in range(3)]
ports = self.br.get_vif_ports()
self.assertEqual(3, len(ports))
self.assertTrue(all([isinstance(x, ovs_lib.VifPort) for x in ports]))
self.assertEqual(sorted([x.port_name for x in vif_ports]),
sorted([x.port_name for x in ports]))
def test_get_vif_ports_with_bond(self):
for i in range(2):
self.create_ovs_port()
vif_ports = [self.create_ovs_vif_port() for i in range(3)]
# bond ports don't have records in the Interface table but they do in
# the Port table
orig = self.br.get_port_name_list
new_port_name_list = lambda: orig() + ['bondport']
mock.patch.object(self.br, 'get_port_name_list',
new=new_port_name_list).start()
ports = self.br.get_vif_ports()
self.assertEqual(3, len(ports))
self.assertTrue(all([isinstance(x, ovs_lib.VifPort) for x in ports]))
self.assertEqual(sorted([x.port_name for x in vif_ports]),
sorted([x.port_name for x in ports]))
def test_get_vif_port_set(self):
for i in range(2):
self.create_ovs_port()
vif_ports = [self.create_ovs_vif_port() for i in range(2)]
ports = self.br.get_vif_port_set()
expected = set([x.vif_id for x in vif_ports])
self.assertEqual(expected, ports)
def test_get_vif_port_set_with_missing_port(self):
self.create_ovs_port()
vif_ports = [self.create_ovs_vif_port()]
# return an extra port to make sure the db list ignores it
orig = self.br.get_port_name_list
new_port_name_list = lambda: orig() + ['anotherport']
mock.patch.object(self.br, 'get_port_name_list',
new=new_port_name_list).start()
ports = self.br.get_vif_port_set()
expected = set([vif_ports[0].vif_id])
self.assertEqual(expected, ports)
def test_get_ports_attributes(self):
port_names = [self.create_ovs_port()[0], self.create_ovs_port()[0]]
db_ports = self.br.get_ports_attributes('Interface', columns=['name'])
db_ports_names = [p['name'] for p in db_ports]
self.assertEqual(sorted(port_names), sorted(db_ports_names))
def test_get_port_tag_dict(self):
# Simple case tested in port test_set_get_clear_db_val
pass
def test_get_vif_port_by_id(self):
for i in range(2):
self.create_ovs_port()
vif_ports = [self.create_ovs_vif_port() for i in range(3)]
for vif in vif_ports:
self.assertEqual(self.br.get_vif_port_by_id(vif.vif_id).vif_id,
vif.vif_id)
def test_get_vifs_by_ids(self):
for i in range(2):
self.create_ovs_port()
vif_ports = [self.create_ovs_vif_port() for i in range(3)]
by_id = self.br.get_vifs_by_ids([v.vif_id for v in vif_ports])
# convert to str for comparison of VifPorts
by_id = {vid: str(vport) for vid, vport in by_id.items()}
self.assertEqual({v.vif_id: str(v) for v in vif_ports}, by_id)
def test_delete_ports(self):
# TODO(twilson) I intensely dislike the current delete_ports function
# as the default behavior is really delete_vif_ports(), then it acts
# more like a delete_ports() seems like it should if all_ports=True is
# passed
# Create 2 non-vif ports and 2 vif ports
nonvifs = {self.create_ovs_port()[0] for i in range(2)}
vifs = {self.create_ovs_vif_port().port_name for i in range(2)}
self.assertSetEqual(nonvifs.union(vifs),
set(self.br.get_port_name_list()))
self.br.delete_ports()
self.assertSetEqual(nonvifs, set(self.br.get_port_name_list()))
self.br.delete_ports(all_ports=True)
self.assertEqual(len(self.br.get_port_name_list()), 0)
def test_reset_bridge(self):
self.create_ovs_port()
self.br.reset_bridge()
self.assertEqual(len(self.br.get_port_name_list()), 0)
self._assert_br_fail_mode([])
def test_reset_bridge_secure_mode(self):
self.br.reset_bridge(secure_mode=True)
self._assert_br_fail_mode(ovs_lib.FAILMODE_SECURE)
def test_set_controller_connection_mode(self):
controllers = ['tcp:192.0.2.0:6633']
self._set_controllers_connection_mode(controllers)
def test_set_multi_controllers_connection_mode(self):
controllers = ['tcp:192.0.2.0:6633', 'tcp:192.0.2.1:55']
self._set_controllers_connection_mode(controllers)
def _set_controllers_connection_mode(self, controllers):
self.br.set_controller(controllers)
self.assertEqual(sorted(controllers), sorted(self.br.get_controller()))
self.br.set_controllers_connection_mode('out-of-band')
self._assert_controllers_connection_mode('out-of-band')
self.br.del_controller()
self.assertEqual([], self.br.get_controller())
def _assert_controllers_connection_mode(self, connection_mode):
controllers = self.br.db_get_val('Bridge', self.br.br_name,
'controller')
controllers = [controllers] if isinstance(
controllers, uuid.UUID) else controllers
for controller in controllers:
self.assertEqual(connection_mode,
self.br.db_get_val('Controller',
controller,
'connection_mode'))
def test_egress_bw_limit(self):
port_name, _ = self.create_ovs_port()
self.br.create_egress_bw_limit_for_port(port_name, 700, 70)
max_rate, burst = self.br.get_egress_bw_limit_for_port(port_name)
self.assertEqual(700, max_rate)
self.assertEqual(70, burst)
self.br.delete_egress_bw_limit_for_port(port_name)
max_rate, burst = self.br.get_egress_bw_limit_for_port(port_name)
self.assertIsNone(max_rate)
self.assertIsNone(burst)
class OVSLibTestCase(base.BaseOVSLinuxTestCase):
def setUp(self):
super(OVSLibTestCase, self).setUp()
self.ovs = ovs_lib.BaseOVS()
def test_bridge_lifecycle_baseovs(self):
name = base.get_rand_name(prefix=net_helpers.BR_PREFIX)
self.addCleanup(self.ovs.delete_bridge, name)
br = self.ovs.add_bridge(name)
self.assertEqual(br.br_name, name)
self.assertTrue(self.ovs.bridge_exists(name))
self.ovs.delete_bridge(name)
self.assertFalse(self.ovs.bridge_exists(name))
def test_get_bridges(self):
bridges = {
self.useFixture(net_helpers.OVSBridgeFixture()).bridge.br_name
for i in range(5)}
self.assertTrue(set(self.ovs.get_bridges()).issuperset(bridges))
def test_bridge_lifecycle_ovsbridge(self):
name = base.get_rand_name(prefix=net_helpers.BR_PREFIX)
br = ovs_lib.OVSBridge(name)
self.assertEqual(br.br_name, name)
        # Make sure that instantiating an OVSBridge does not actually create the bridge
self.assertFalse(self.ovs.bridge_exists(name))
self.addCleanup(self.ovs.delete_bridge, name)
br.create()
self.assertTrue(self.ovs.bridge_exists(name))
br.destroy()
self.assertFalse(self.ovs.bridge_exists(name)) | unknown | codeparrot/codeparrot-clean | ||
/*
* Copyright 2002-present the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.docs.web.websocket.websocketserverruntimeconfiguration;
import org.springframework.web.socket.handler.AbstractWebSocketHandler;
public class MyEchoHandler extends AbstractWebSocketHandler {
} | java | github | https://github.com/spring-projects/spring-framework | framework-docs/src/main/java/org/springframework/docs/web/websocket/websocketserverruntimeconfiguration/MyEchoHandler.java |
# -*- coding: utf-8 -*-
# Generated by Django 1.11.1 on 2017-05-30 19:21
from __future__ import unicode_literals
from urllib2 import quote, unquote
from django_bulk_update.helper import bulk_update
from django.db import migrations
def unquote_folder_paths(state, schema):
try:
NodeSettings = state.get_model('addons_googledrive', 'nodesettings')
targets = NodeSettings.objects.filter(folder_path__isnull=False)
except LookupError:
return
for obj in targets:
try:
obj.folder_path = unquote(obj.folder_path).decode('utf-8')
except UnicodeEncodeError:
obj.folder_path = unquote(obj.folder_path)
bulk_update(targets, update_fields=['folder_path'])
def quote_folder_paths(state, schema):
try:
NodeSettings = state.get_model('addons_googledrive', 'nodesettings')
targets = NodeSettings.objects.filter(folder_path__isnull=False)
except LookupError:
return
for obj in targets:
obj.folder_path = quote(obj.folder_path.encode('utf-8'))
bulk_update(targets, update_fields=['folder_path'])
class Migration(migrations.Migration):
dependencies = [
('osf', '0031_preprintprovider_share_source'),
]
operations = [
migrations.RunPython(unquote_folder_paths, quote_folder_paths),
] | unknown | codeparrot/codeparrot-clean | ||
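The migration above percent-decodes stored Google Drive folder paths (and re-encodes them on reverse). A minimal sketch of that round trip, using Python 3's `urllib.parse` rather than the Python 2 `urllib2` re-exports the migration itself relies on; the sample path is illustrative:

```python
from urllib.parse import quote, unquote

# quote_folder_paths() stores the percent-encoded form; unquote_folder_paths()
# reverses it back to the human-readable UTF-8 path.
original = '/My Drive/Été 2017'
encoded = quote(original.encode('utf-8'))  # what the reverse migration writes
decoded = unquote(encoded)                 # what the forward migration restores

print(encoded)  # /My%20Drive/%C3%89t%C3%A9%202017
print(decoded == original)  # True
```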
# File generated from our OpenAPI spec by Stainless. See CONTRIBUTING.md for details.
from typing_extensions import Literal
from ..._models import BaseModel
__all__ = ["InputAudioBufferSpeechStoppedEvent"]
class InputAudioBufferSpeechStoppedEvent(BaseModel):
"""
Returned in `server_vad` mode when the server detects the end of speech in
    the audio buffer. The server will also send a `conversation.item.created`
event with the user message item that is created from the audio buffer.
"""
audio_end_ms: int
"""Milliseconds since the session started when speech stopped.
This will correspond to the end of audio sent to the model, and thus includes
the `min_silence_duration_ms` configured in the Session.
"""
event_id: str
"""The unique ID of the server event."""
item_id: str
"""The ID of the user message item that will be created."""
type: Literal["input_audio_buffer.speech_stopped"]
"""The event type, must be `input_audio_buffer.speech_stopped`.""" | python | github | https://github.com/openai/openai-python | src/openai/types/realtime/input_audio_buffer_speech_stopped_event.py |
//// [tests/cases/conformance/internalModules/exportDeclarations/ExportClassWithAccessibleTypesInTypeParameterConstraintsClassHeritageListMemberTypeAnnotations.ts] ////
//// [ExportClassWithAccessibleTypesInTypeParameterConstraintsClassHeritageListMemberTypeAnnotations.ts]
namespace A {
export class Point {
x: number;
y: number;
}
export var Origin: Point = { x: 0, y: 0 };
export class Point3d extends Point {
z: number;
}
export var Origin3d: Point3d = { x: 0, y: 0, z: 0 };
export class Line<TPoint extends Point>{
constructor(public start: TPoint, public end: TPoint) { }
}
}
//// [ExportClassWithAccessibleTypesInTypeParameterConstraintsClassHeritageListMemberTypeAnnotations.js]
"use strict";
var A;
(function (A) {
class Point {
}
A.Point = Point;
A.Origin = { x: 0, y: 0 };
class Point3d extends Point {
}
A.Point3d = Point3d;
A.Origin3d = { x: 0, y: 0, z: 0 };
class Line {
constructor(start, end) {
this.start = start;
this.end = end;
}
}
A.Line = Line;
})(A || (A = {})); | javascript | github | https://github.com/microsoft/TypeScript | tests/baselines/reference/ExportClassWithAccessibleTypesInTypeParameterConstraintsClassHeritageListMemberTypeAnnotations.js |
<?php
namespace Illuminate\Cache;
class RedisLock extends Lock
{
/**
* The Redis factory implementation.
*
* @var \Illuminate\Redis\Connections\Connection
*/
protected $redis;
/**
* Create a new lock instance.
*
* @param \Illuminate\Redis\Connections\Connection $redis
* @param string $name
* @param int $seconds
* @param string|null $owner
*/
public function __construct($redis, $name, $seconds, $owner = null)
{
parent::__construct($name, $seconds, $owner);
$this->redis = $redis;
}
/**
* Attempt to acquire the lock.
*
* @return bool
*/
public function acquire()
{
if ($this->seconds > 0) {
return $this->redis->set($this->name, $this->owner, 'EX', $this->seconds, 'NX') == true;
}
return $this->redis->setnx($this->name, $this->owner) === 1;
}
/**
* Release the lock.
*
* @return bool
*/
public function release()
{
return (bool) $this->redis->eval(LuaScripts::releaseLock(), 1, $this->name, $this->owner);
}
/**
* Releases this lock in disregard of ownership.
*
* @return void
*/
public function forceRelease()
{
$this->redis->del($this->name);
}
/**
* Returns the owner value written into the driver for this lock.
*
* @return string
*/
protected function getCurrentOwner()
{
return $this->redis->get($this->name);
}
/**
* Get the name of the Redis connection being used to manage the lock.
*
* @return string
*/
public function getConnectionName()
{
return $this->redis->getName();
}
} | php | github | https://github.com/laravel/framework | src/Illuminate/Cache/RedisLock.php |
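The `RedisLock` above relies on two Redis idioms: `SET key value NX` so only the first caller acquires the lock, and a Lua script so only the current owner can release it. A minimal Python sketch of the same semantics, with a plain dict standing in for Redis (the class and names here are illustrative, not part of the Laravel API):

```python
# In-memory stand-in for the SET ... NX / owner-checked-release pattern.
class DictLock:
    def __init__(self, store, name, owner):
        self.store, self.name, self.owner = store, name, owner

    def acquire(self):
        # Mirrors SET key owner NX: succeed only if the key is absent.
        if self.name in self.store:
            return False
        self.store[self.name] = self.owner
        return True

    def release(self):
        # Mirrors the release Lua script: delete only if we still own it.
        if self.store.get(self.name) == self.owner:
            del self.store[self.name]
            return True
        return False

store = {}
a = DictLock(store, 'job', 'owner-a')
b = DictLock(store, 'job', 'owner-b')
print(a.acquire())  # True: first holder wins
print(b.acquire())  # False: NX semantics reject a second holder
print(b.release())  # False: b does not own the lock
print(a.release())  # True
```

The owner check in `release()` is why the real implementation needs Lua: the get-compare-delete must be atomic on a shared Redis server, which a single script invocation guarantees.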
from django.db import models
from django.utils import timezone
from edc_base.model.models import BaseUuidModel
from edc_export.models import ExportTrackingFieldsMixin
from edc_sync.models import SyncModelMixin, SyncHistoricalRecords
from ..managers import OrderItemManager
from .aliquot import Aliquot
from .order import Order
from .panel import Panel
class OrderItem(SyncModelMixin, ExportTrackingFieldsMixin, BaseUuidModel):
order = models.ForeignKey(Order)
aliquot = models.ForeignKey(Aliquot)
panel = models.ForeignKey(
Panel,
null=True,
blank=False,
)
order_identifier = models.CharField(
max_length=25,
null=True,
help_text='',
)
order_datetime = models.DateTimeField(
default=timezone.now
)
subject_identifier = models.CharField(
max_length=50,
null=True,
help_text="non-user helper field to simplify search and filtering")
objects = OrderItemManager()
history = SyncHistoricalRecords()
def save(self, *args, **kwargs):
self.subject_identifier = self.aliquot.receive.registered_subject.subject_identifier
super(OrderItem, self).save(*args, **kwargs)
def natural_key(self):
return (self.order_identifier, )
class Meta:
app_label = 'td_lab'
ordering = ['-order_datetime', ] | unknown | codeparrot/codeparrot-clean | ||
// Copyright IBM Corp. 2016, 2025
// SPDX-License-Identifier: BUSL-1.1
package command
import (
"context"
"fmt"
"strings"
"github.com/hashicorp/cli"
"github.com/hashicorp/go-secure-stdlib/strutil"
"github.com/posener/complete"
)
var (
_ cli.Command = (*MonitorCommand)(nil)
_ cli.CommandAutocomplete = (*MonitorCommand)(nil)
)
type MonitorCommand struct {
*BaseCommand
logLevel string
logFormat string
// ShutdownCh is used to capture interrupt signal and end streaming
ShutdownCh chan struct{}
}
func (c *MonitorCommand) Synopsis() string {
return "Stream log messages from a Vault server"
}
func (c *MonitorCommand) Help() string {
helpText := `
Usage: vault monitor [options]
Stream log messages of a Vault server. The monitor command lets you listen
for log levels that may be filtered out of the server logs. For example,
the server may be logging at the INFO level, but with the monitor command
you can set -log-level=DEBUG.
` + c.Flags().Help()
return strings.TrimSpace(helpText)
}
func (c *MonitorCommand) Flags() *FlagSets {
set := c.flagSet(FlagSetHTTP)
f := set.NewFlagSet("Monitor Options")
f.StringVar(&StringVar{
Name: "log-level",
Target: &c.logLevel,
Default: "info",
Completion: complete.PredictSet("trace", "debug", "info", "warn", "error"),
Usage: "If passed, the log level to monitor logs. Supported values" +
"(in order of detail) are \"trace\", \"debug\", \"info\", \"warn\"" +
" and \"error\". These are not case sensitive.",
})
f.StringVar(&StringVar{
Name: "log-format",
Target: &c.logFormat,
Default: "standard",
Completion: complete.PredictSet("standard", "json"),
Usage: "Output format of logs. Supported values are \"standard\" and \"json\".",
})
return set
}
func (c *MonitorCommand) AutocompleteArgs() complete.Predictor {
return complete.PredictNothing
}
func (c *MonitorCommand) AutocompleteFlags() complete.Flags {
return c.Flags().Completions()
}
func (c *MonitorCommand) Run(args []string) int {
f := c.Flags()
if err := f.Parse(args); err != nil {
c.UI.Error(err.Error())
return 1
}
parsedArgs := f.Args()
if len(parsedArgs) > 0 {
c.UI.Error(fmt.Sprintf("Too many arguments (expected 0, got %d)", len(parsedArgs)))
return 1
}
c.logLevel = strings.ToLower(c.logLevel)
validLevels := []string{"trace", "debug", "info", "warn", "error"}
if !strutil.StrListContains(validLevels, c.logLevel) {
c.UI.Error(fmt.Sprintf("%s is an unknown log level. Valid log levels are: %s", c.logLevel, validLevels))
return 1
}
c.logFormat = strings.ToLower(c.logFormat)
validFormats := []string{"standard", "json"}
if !strutil.StrListContains(validFormats, c.logFormat) {
c.UI.Error(fmt.Sprintf("%s is an unknown log format. Valid log formats are: %s", c.logFormat, validFormats))
return 1
}
client, err := c.Client()
if err != nil {
c.UI.Error(err.Error())
return 2
}
// Remove the default 60 second timeout so we can stream indefinitely
client.SetClientTimeout(0)
var logCh chan string
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
logCh, err = client.Sys().Monitor(ctx, c.logLevel, c.logFormat)
if err != nil {
c.UI.Error(fmt.Sprintf("Error starting monitor: %s", err))
return 1
}
for {
select {
case log, ok := <-logCh:
if !ok {
return 0
}
c.UI.Info(log)
case <-c.ShutdownCh:
return 0
}
}
} | go | github | https://github.com/hashicorp/vault | command/monitor.go |
import re
import os
# The regular expression for freeze directives. These are comments with the
# word macfreeze immediately followed by a colon, followed by a directive,
# followed by argument(s)
#
# The directives supported are
# include - Include a module or file
# exclude - Exclude a module
# optional - Include a module if it is found, but don't complain if it isn't
# path - Add sys.path entries. Relative paths are relative to the source file.
#
# See the macfreeze.py main program for a real live example.
#
DIRECTIVE_RE=r'^\s*#\s*macfreeze:\s*(\S*)\s*(.*)\s*$'
REPROG=re.compile(DIRECTIVE_RE)
def findfreezedirectives(program):
extra_modules = []
exclude_modules = []
optional_modules = []
extra_path = []
progdir, filename = os.path.split(program)
fp = open(program)
for line in fp.readlines():
match = REPROG.match(line)
if match:
directive = match.group(1)
argument = match.group(2)
if directive == 'include':
extra_modules.append(argument)
elif directive == 'exclude':
exclude_modules.append(argument)
elif directive == 'optional':
optional_modules.append(argument)
elif directive == 'path':
argument = os.path.join(progdir, argument)
extra_path.append(argument)
else:
print '** Unknown directive', line
return extra_modules, exclude_modules, optional_modules, extra_path | unknown | codeparrot/codeparrot-clean | ||
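The comment block above defines the directive grammar via `DIRECTIVE_RE`. A quick demonstration of how that pattern splits a freeze comment into its directive and argument groups (the sample module names are made up):

```python
import re

# Same pattern as in macmodulefinder above: group 1 is the directive,
# group 2 the argument(s).
DIRECTIVE_RE = r'^\s*#\s*macfreeze:\s*(\S*)\s*(.*)\s*$'
REPROG = re.compile(DIRECTIVE_RE)

m = REPROG.match('# macfreeze: include mymodule')
print(m.group(1), m.group(2))  # include mymodule

# Ordinary comments don't match, so they are skipped by the scanner.
print(REPROG.match('# just a comment') is None)  # True
```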
// Copyright 2017 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package math_test
import (
"fmt"
"math"
)
func ExampleAcos() {
fmt.Printf("%.2f", math.Acos(1))
// Output: 0.00
}
func ExampleAcosh() {
fmt.Printf("%.2f", math.Acosh(1))
// Output: 0.00
}
func ExampleAsin() {
fmt.Printf("%.2f", math.Asin(0))
// Output: 0.00
}
func ExampleAsinh() {
fmt.Printf("%.2f", math.Asinh(0))
// Output: 0.00
}
func ExampleAtan() {
fmt.Printf("%.2f", math.Atan(0))
// Output: 0.00
}
func ExampleAtan2() {
fmt.Printf("%.2f", math.Atan2(0, 0))
// Output: 0.00
}
func ExampleAtanh() {
fmt.Printf("%.2f", math.Atanh(0))
// Output: 0.00
}
func ExampleCopysign() {
fmt.Printf("%.2f", math.Copysign(3.2, -1))
// Output: -3.20
}
func ExampleCos() {
fmt.Printf("%.2f", math.Cos(math.Pi/2))
// Output: 0.00
}
func ExampleCosh() {
fmt.Printf("%.2f", math.Cosh(0))
// Output: 1.00
}
func ExampleSin() {
fmt.Printf("%.2f", math.Sin(math.Pi))
// Output: 0.00
}
func ExampleSincos() {
sin, cos := math.Sincos(0)
fmt.Printf("%.2f, %.2f", sin, cos)
// Output: 0.00, 1.00
}
func ExampleSinh() {
fmt.Printf("%.2f", math.Sinh(0))
// Output: 0.00
}
func ExampleTan() {
fmt.Printf("%.2f", math.Tan(0))
// Output: 0.00
}
func ExampleTanh() {
fmt.Printf("%.2f", math.Tanh(0))
// Output: 0.00
}
func ExampleSqrt() {
const (
a = 3
b = 4
)
c := math.Sqrt(a*a + b*b)
fmt.Printf("%.1f", c)
// Output: 5.0
}
func ExampleCeil() {
c := math.Ceil(1.49)
fmt.Printf("%.1f", c)
// Output: 2.0
}
func ExampleFloor() {
c := math.Floor(1.51)
fmt.Printf("%.1f", c)
// Output: 1.0
}
func ExamplePow() {
c := math.Pow(2, 3)
fmt.Printf("%.1f", c)
// Output: 8.0
}
func ExamplePow10() {
c := math.Pow10(2)
fmt.Printf("%.1f", c)
// Output: 100.0
}
func ExampleRound() {
p := math.Round(10.5)
fmt.Printf("%.1f\n", p)
n := math.Round(-10.5)
fmt.Printf("%.1f\n", n)
// Output:
// 11.0
// -11.0
}
func ExampleRoundToEven() {
u := math.RoundToEven(11.5)
fmt.Printf("%.1f\n", u)
d := math.RoundToEven(12.5)
fmt.Printf("%.1f\n", d)
// Output:
// 12.0
// 12.0
}
func ExampleLog() {
x := math.Log(1)
fmt.Printf("%.1f\n", x)
y := math.Log(2.7183)
fmt.Printf("%.1f\n", y)
// Output:
// 0.0
// 1.0
}
func ExampleLog2() {
fmt.Printf("%.1f", math.Log2(256))
// Output: 8.0
}
func ExampleLog10() {
fmt.Printf("%.1f", math.Log10(100))
// Output: 2.0
}
func ExampleRemainder() {
fmt.Printf("%.1f", math.Remainder(100, 30))
// Output: 10.0
}
func ExampleMod() {
c := math.Mod(7, 4)
fmt.Printf("%.1f", c)
// Output: 3.0
}
func ExampleAbs() {
x := math.Abs(-2)
fmt.Printf("%.1f\n", x)
y := math.Abs(2)
fmt.Printf("%.1f\n", y)
// Output:
// 2.0
// 2.0
}
func ExampleDim() {
fmt.Printf("%.2f\n", math.Dim(4, -2))
fmt.Printf("%.2f\n", math.Dim(-4, 2))
// Output:
// 6.00
// 0.00
}
func ExampleExp() {
fmt.Printf("%.2f\n", math.Exp(1))
fmt.Printf("%.2f\n", math.Exp(2))
fmt.Printf("%.2f\n", math.Exp(-1))
// Output:
// 2.72
// 7.39
// 0.37
}
func ExampleExp2() {
fmt.Printf("%.2f\n", math.Exp2(1))
fmt.Printf("%.2f\n", math.Exp2(-3))
// Output:
// 2.00
// 0.12
}
func ExampleExpm1() {
fmt.Printf("%.6f\n", math.Expm1(0.01))
fmt.Printf("%.6f\n", math.Expm1(-1))
// Output:
// 0.010050
// -0.632121
}
func ExampleTrunc() {
fmt.Printf("%.2f\n", math.Trunc(math.Pi))
fmt.Printf("%.2f\n", math.Trunc(-1.2345))
// Output:
// 3.00
// -1.00
}
func ExampleCbrt() {
fmt.Printf("%.2f\n", math.Cbrt(8))
fmt.Printf("%.2f\n", math.Cbrt(27))
// Output:
// 2.00
// 3.00
}
func ExampleModf() {
int, frac := math.Modf(3.14)
fmt.Printf("%.2f, %.2f\n", int, frac)
int, frac = math.Modf(-2.71)
fmt.Printf("%.2f, %.2f\n", int, frac)
// Output:
// 3.00, 0.14
// -2.00, -0.71
} | go | github | https://github.com/golang/go | src/math/example_test.go |
#!/usr/bin/python
# -*- coding: utf-8 -*-
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
ANSIBLE_METADATA = {'status': ['preview'],
'supported_by': 'community',
'version': '1.0'}
DOCUMENTATION = '''
---
module: digital_ocean_sshkey
short_description: Create/delete an SSH key in DigitalOcean
description:
- Create/delete an SSH key.
version_added: "1.6"
author: "Michael Gregson (@mgregson)"
options:
state:
description:
- Indicate desired state of the target.
default: present
choices: ['present', 'absent']
client_id:
description:
- DigitalOcean manager id.
api_key:
description:
- DigitalOcean api key.
id:
description:
- Numeric, the SSH key id you want to operate on.
name:
description:
- String, this is the name of an SSH key to create or destroy.
ssh_pub_key:
description:
- The public SSH key you want to add to your account.
notes:
- Two environment variables can be used, DO_CLIENT_ID and DO_API_KEY.
- Version 1 of DigitalOcean API is used.
requirements:
- "python >= 2.6"
- dopy
'''
EXAMPLES = '''
# Ensure a SSH key is present
# If a key matches this name, will return the ssh key id and changed = False
# If no existing key matches this name, a new key is created, the ssh key id is returned and changed = True
- digital_ocean_sshkey:
state: present
name: my_ssh_key
ssh_pub_key: 'ssh-rsa AAAA...'
client_id: XXX
api_key: XXX
'''
import os
import traceback
try:
from dopy.manager import DoError, DoManager
HAS_DOPY = True
except ImportError:
HAS_DOPY = False
from ansible.module_utils.basic import AnsibleModule
class JsonfyMixIn(object):
def to_json(self):
return self.__dict__
class SSH(JsonfyMixIn):
manager = None
def __init__(self, ssh_key_json):
self.__dict__.update(ssh_key_json)
update_attr = __init__
def destroy(self):
self.manager.destroy_ssh_key(self.id)
return True
@classmethod
def setup(cls, client_id, api_key):
cls.manager = DoManager(client_id, api_key)
@classmethod
def find(cls, name):
if not name:
return False
keys = cls.list_all()
for key in keys:
if key.name == name:
return key
return False
@classmethod
def list_all(cls):
json = cls.manager.all_ssh_keys()
return map(cls, json)
@classmethod
def add(cls, name, key_pub):
json = cls.manager.new_ssh_key(name, key_pub)
return cls(json)
def core(module):
def getkeyordie(k):
v = module.params[k]
if v is None:
module.fail_json(msg='Unable to load %s' % k)
return v
try:
# params['client_id'] will be None even if client_id is not passed in
client_id = module.params['client_id'] or os.environ['DO_CLIENT_ID']
api_key = module.params['api_key'] or os.environ['DO_API_KEY']
except KeyError as e:
module.fail_json(msg='Unable to load %s' % e.message)
state = module.params['state']
SSH.setup(client_id, api_key)
name = getkeyordie('name')
if state in ('present'):
key = SSH.find(name)
if key:
module.exit_json(changed=False, ssh_key=key.to_json())
key = SSH.add(name, getkeyordie('ssh_pub_key'))
module.exit_json(changed=True, ssh_key=key.to_json())
elif state in ('absent'):
key = SSH.find(name)
if not key:
module.exit_json(changed=False, msg='SSH key with the name of %s is not found.' % name)
key.destroy()
module.exit_json(changed=True)
def main():
module = AnsibleModule(
argument_spec = dict(
state = dict(choices=['present', 'absent'], default='present'),
client_id = dict(aliases=['CLIENT_ID'], no_log=True),
api_key = dict(aliases=['API_KEY'], no_log=True),
name = dict(type='str'),
id = dict(aliases=['droplet_id'], type='int'),
ssh_pub_key = dict(type='str'),
),
required_one_of = (
['id', 'name'],
),
)
if not HAS_DOPY:
module.fail_json(msg='dopy required for this module')
try:
core(module)
except (DoError, Exception) as e:
module.fail_json(msg=str(e), exception=traceback.format_exc())
if __name__ == '__main__':
main() | unknown | codeparrot/codeparrot-clean | ||
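The `state: present` branch of `core()` above is a classic idempotent find-or-create: an existing key is returned with `changed=False`, otherwise the key is created and `changed=True` is reported. A stripped-down sketch of that contract, with a dict standing in for the DigitalOcean key store (names here are illustrative):

```python
# Idempotent "ensure present" logic, mirroring SSH.find / SSH.add above.
def ensure_present(keys: dict, name: str, pub_key: str) -> dict:
    if name in keys:                      # SSH.find(name) succeeded
        return {'changed': False, 'ssh_key': keys[name]}
    keys[name] = pub_key                  # SSH.add(name, ssh_pub_key)
    return {'changed': True, 'ssh_key': pub_key}

keys = {}
first = ensure_present(keys, 'my_ssh_key', 'ssh-rsa AAAA...')
second = ensure_present(keys, 'my_ssh_key', 'ssh-rsa AAAA...')
print(first['changed'], second['changed'])  # True False
```

Running the play twice therefore reports a change only on the first run, which is what makes the module safe to re-apply.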
# -*- coding: utf-8 -*-
from __future__ import with_statement
import json
import datetime
from cms import api
from cms.utils.urlutils import admin_reverse
from djangocms_text_ckeditor.cms_plugins import TextPlugin
from djangocms_text_ckeditor.models import Text
from django.contrib import admin
from django.contrib.admin.models import LogEntry
from django.contrib.admin.sites import site
from django.contrib.auth import get_user_model
from django.contrib.auth.models import Permission, AnonymousUser
from django.contrib.sites.models import Site
from django.core.urlresolvers import reverse
from django.http import (Http404, HttpResponseBadRequest, HttpResponseForbidden, HttpResponse,
QueryDict, HttpResponseNotFound)
from django.utils.datastructures import MultiValueDictKeyError
from django.utils.encoding import force_text, smart_str
from django.utils import timezone
from django.utils.six.moves.urllib.parse import urlparse
from cms.admin.change_list import CMSChangeList
from cms.admin.forms import PageForm, AdvancedSettingsForm
from cms.admin.pageadmin import PageAdmin
from cms.admin.permissionadmin import PagePermissionInlineAdmin
from cms.api import create_page, create_title, add_plugin, assign_user_to_page, publish_page
from cms.constants import PLUGIN_MOVE_ACTION
from cms.models import UserSettings, StaticPlaceholder
from cms.models.pagemodel import Page
from cms.models.permissionmodels import GlobalPagePermission, PagePermission
from cms.models.placeholdermodel import Placeholder
from cms.models.pluginmodel import CMSPlugin
from cms.models.titlemodels import Title
from cms.test_utils import testcases as base
from cms.test_utils.testcases import CMSTestCase, URL_CMS_PAGE_DELETE, URL_CMS_PAGE, URL_CMS_TRANSLATION_DELETE
from cms.test_utils.util.fuzzy_int import FuzzyInt
from cms.utils import get_cms_setting
from cms.utils.compat import DJANGO_1_6
class AdminTestsBase(CMSTestCase):
@property
def admin_class(self):
return site._registry[Page]
def _get_guys(self, admin_only=False, use_global_permissions=True):
        admin_user = self.get_superuser()
        if admin_only:
            return admin_user
        USERNAME = 'test'
        if get_user_model().USERNAME_FIELD == 'email':
            normal_guy = get_user_model().objects.create_user(USERNAME, 'test@test.com', 'test@test.com')
        else:
            normal_guy = get_user_model().objects.create_user(USERNAME, 'test@test.com', USERNAME)
        normal_guy.is_staff = True
        normal_guy.is_active = True
        normal_guy.save()
        normal_guy.user_permissions = Permission.objects.filter(
            codename__in=['change_page', 'change_title', 'add_page', 'add_title', 'delete_page', 'delete_title']
        )
        if use_global_permissions:
            gpp = GlobalPagePermission.objects.create(
                user=normal_guy,
                can_change=True,
                can_delete=True,
                can_change_advanced_settings=False,
                can_publish=True,
                can_change_permissions=False,
                can_move_page=True,
            )
            gpp.sites = Site.objects.all()
        return admin_user, normal_guy
class AdminTestCase(AdminTestsBase):
def test_extension_not_in_admin(self):
admin_user, staff = self._get_guys()
with self.login_user_context(admin_user):
request = self.get_request('/admin/cms/page/1/', 'en',)
response = site.index(request)
self.assertNotContains(response, '/mytitleextension/')
self.assertNotContains(response, '/mypageextension/')
def test_permissioned_page_list(self):
"""
Makes sure that a user with restricted page permissions can view
the page list.
"""
admin_user, normal_guy = self._get_guys(use_global_permissions=False)
current_site = Site.objects.get(pk=1)
page = create_page("Test page", "nav_playground.html", "en",
site=current_site, created_by=admin_user)
PagePermission.objects.create(page=page, user=normal_guy)
with self.login_user_context(normal_guy):
resp = self.client.get(URL_CMS_PAGE)
self.assertEqual(resp.status_code, 200)
def test_edit_does_not_reset_page_adv_fields(self):
"""
Makes sure that if a non-superuser with no rights to edit advanced page
fields edits a page, those advanced fields are not touched.
"""
OLD_PAGE_NAME = 'Test Page'
NEW_PAGE_NAME = 'Test page 2'
REVERSE_ID = 'Test'
OVERRIDE_URL = 'my/override/url'
admin_user, normal_guy = self._get_guys()
current_site = Site.objects.get(pk=1)
# The admin creates the page
page = create_page(OLD_PAGE_NAME, "nav_playground.html", "en",
site=current_site, created_by=admin_user)
page.reverse_id = REVERSE_ID
page.save()
title = page.get_title_obj()
title.has_url_overwrite = True
title.path = OVERRIDE_URL
title.save()
self.assertEqual(page.get_title(), OLD_PAGE_NAME)
self.assertEqual(page.reverse_id, REVERSE_ID)
self.assertEqual(title.overwrite_url, OVERRIDE_URL)
# The user edits the page (change the page name for ex.)
page_data = {
'title': NEW_PAGE_NAME,
'slug': page.get_slug(),
'language': title.language,
'site': page.site.pk,
'template': page.template,
'pagepermission_set-TOTAL_FORMS': 0,
'pagepermission_set-INITIAL_FORMS': 0,
'pagepermission_set-MAX_NUM_FORMS': 0,
'pagepermission_set-2-TOTAL_FORMS': 0,
'pagepermission_set-2-INITIAL_FORMS': 0,
'pagepermission_set-2-MAX_NUM_FORMS': 0
}
        # required only if user has can_change_permission
with self.login_user_context(normal_guy):
resp = self.client.post(base.URL_CMS_PAGE_CHANGE % page.pk, page_data,
follow=True)
self.assertEqual(resp.status_code, 200)
self.assertTemplateNotUsed(resp, 'admin/login.html')
page = Page.objects.get(pk=page.pk)
self.assertEqual(page.get_title(), NEW_PAGE_NAME)
self.assertEqual(page.reverse_id, REVERSE_ID)
title = page.get_title_obj()
self.assertEqual(title.overwrite_url, OVERRIDE_URL)
# The admin edits the page (change the page name for ex.)
page_data = {
'title': OLD_PAGE_NAME,
'slug': page.get_slug(),
'language': title.language,
'site': page.site.pk,
'template': page.template,
'reverse_id': page.reverse_id,
            'pagepermission_set-TOTAL_FORMS': 0,  # required only if user has can_change_permission
'pagepermission_set-INITIAL_FORMS': 0,
'pagepermission_set-MAX_NUM_FORMS': 0,
'pagepermission_set-2-TOTAL_FORMS': 0,
'pagepermission_set-2-INITIAL_FORMS': 0,
'pagepermission_set-2-MAX_NUM_FORMS': 0
}
with self.login_user_context(admin_user):
resp = self.client.post(base.URL_CMS_PAGE_CHANGE % page.pk, page_data,
follow=True)
self.assertEqual(resp.status_code, 200)
self.assertTemplateNotUsed(resp, 'admin/login.html')
page = Page.objects.get(pk=page.pk)
self.assertEqual(page.get_title(), OLD_PAGE_NAME)
self.assertEqual(page.reverse_id, REVERSE_ID)
title = page.get_title_obj()
self.assertEqual(title.overwrite_url, OVERRIDE_URL)
def test_edit_does_not_reset_apphook(self):
"""
        Makes sure that if a non-superuser with no rights to edit advanced page
        fields edits a page, the page's apphook settings are not touched.
"""
OLD_PAGE_NAME = 'Test Page'
NEW_PAGE_NAME = 'Test page 2'
REVERSE_ID = 'Test'
APPLICATION_URLS = 'project.sampleapp.urls'
admin_user, normal_guy = self._get_guys()
current_site = Site.objects.get(pk=1)
# The admin creates the page
page = create_page(OLD_PAGE_NAME, "nav_playground.html", "en",
site=current_site, created_by=admin_user)
page.reverse_id = REVERSE_ID
page.save()
title = page.get_title_obj()
title.has_url_overwrite = True
title.save()
page.application_urls = APPLICATION_URLS
page.save()
self.assertEqual(page.get_title(), OLD_PAGE_NAME)
self.assertEqual(page.reverse_id, REVERSE_ID)
self.assertEqual(page.application_urls, APPLICATION_URLS)
# The user edits the page (change the page name for ex.)
page_data = {
'title': NEW_PAGE_NAME,
'slug': page.get_slug(),
'language': title.language,
'site': page.site.pk,
'template': page.template,
'pagepermission_set-TOTAL_FORMS': 0,
'pagepermission_set-INITIAL_FORMS': 0,
'pagepermission_set-MAX_NUM_FORMS': 0,
'pagepermission_set-2-TOTAL_FORMS': 0,
'pagepermission_set-2-INITIAL_FORMS': 0,
'pagepermission_set-2-MAX_NUM_FORMS': 0,
}
with self.login_user_context(normal_guy):
resp = self.client.post(base.URL_CMS_PAGE_CHANGE % page.pk, page_data,
follow=True)
self.assertEqual(resp.status_code, 200)
self.assertTemplateNotUsed(resp, 'admin/login.html')
page = Page.objects.get(pk=page.pk)
self.assertEqual(page.get_title(), NEW_PAGE_NAME)
self.assertEqual(page.reverse_id, REVERSE_ID)
self.assertEqual(page.application_urls, APPLICATION_URLS)
title = page.get_title_obj()
# The admin edits the page (change the page name for ex.)
page_data = {
'title': OLD_PAGE_NAME,
'slug': page.get_slug(),
'language': title.language,
'site': page.site.pk,
'template': page.template,
'reverse_id': page.reverse_id,
}
with self.login_user_context(admin_user):
resp = self.client.post(base.URL_CMS_PAGE_ADVANCED_CHANGE % page.pk, page_data,
follow=True)
self.assertEqual(resp.status_code, 200)
self.assertTemplateNotUsed(resp, 'admin/login.html')
resp = self.client.post(base.URL_CMS_PAGE_CHANGE % page.pk, page_data,
follow=True)
self.assertEqual(resp.status_code, 200)
self.assertTemplateNotUsed(resp, 'admin/login.html')
page = Page.objects.get(pk=page.pk)
self.assertEqual(page.get_title(), OLD_PAGE_NAME)
self.assertEqual(page.reverse_id, REVERSE_ID)
self.assertEqual(page.application_urls, '')
def test_2apphooks_with_same_namespace(self):
PAGE1 = 'Test Page'
PAGE2 = 'Test page 2'
APPLICATION_URLS = 'project.sampleapp.urls'
admin_user, normal_guy = self._get_guys()
current_site = Site.objects.get(pk=1)
# The admin creates the page
page = create_page(PAGE1, "nav_playground.html", "en",
site=current_site, created_by=admin_user)
page2 = create_page(PAGE2, "nav_playground.html", "en",
site=current_site, created_by=admin_user)
page.application_urls = APPLICATION_URLS
page.application_namespace = "space1"
page.save()
page2.application_urls = APPLICATION_URLS
page2.save()
# The admin edits the page (change the page name for ex.)
page_data = {
'title': PAGE2,
'slug': page2.get_slug(),
'language': 'en',
'site': page.site.pk,
'template': page2.template,
'application_urls': 'SampleApp',
'application_namespace': 'space1',
}
with self.login_user_context(admin_user):
resp = self.client.post(base.URL_CMS_PAGE_ADVANCED_CHANGE % page.pk, page_data)
self.assertEqual(resp.status_code, 302)
self.assertEqual(Page.objects.filter(application_namespace="space1").count(), 1)
resp = self.client.post(base.URL_CMS_PAGE_ADVANCED_CHANGE % page2.pk, page_data)
self.assertEqual(resp.status_code, 200)
page_data['application_namespace'] = 'space2'
resp = self.client.post(base.URL_CMS_PAGE_ADVANCED_CHANGE % page2.pk, page_data)
self.assertEqual(resp.status_code, 302)
def test_delete(self):
admin_user = self.get_superuser()
create_page("home", "nav_playground.html", "en",
created_by=admin_user, published=True)
page = create_page("delete-page", "nav_playground.html", "en",
created_by=admin_user, published=True)
create_page('child-page', "nav_playground.html", "en",
created_by=admin_user, published=True, parent=page)
body = page.placeholders.get(slot='body')
add_plugin(body, 'TextPlugin', 'en', body='text')
page.publish('en')
with self.login_user_context(admin_user):
data = {'post': 'yes'}
with self.assertNumQueries(FuzzyInt(300, 407)):
response = self.client.post(URL_CMS_PAGE_DELETE % page.pk, data)
self.assertRedirects(response, URL_CMS_PAGE)
def test_delete_diff_language(self):
admin_user = self.get_superuser()
create_page("home", "nav_playground.html", "en",
created_by=admin_user, published=True)
page = create_page("delete-page", "nav_playground.html", "en",
created_by=admin_user, published=True)
create_page('child-page', "nav_playground.html", "de",
created_by=admin_user, published=True, parent=page)
body = page.placeholders.get(slot='body')
add_plugin(body, 'TextPlugin', 'en', body='text')
page.publish('en')
with self.login_user_context(admin_user):
data = {'post': 'yes'}
with self.assertNumQueries(FuzzyInt(300, 394)):
response = self.client.post(URL_CMS_PAGE_DELETE % page.pk, data)
self.assertRedirects(response, URL_CMS_PAGE)
def test_search_fields(self):
superuser = self.get_superuser()
from django.contrib.admin import site
with self.login_user_context(superuser):
for model, admin_instance in site._registry.items():
if model._meta.app_label != 'cms':
continue
if not admin_instance.search_fields:
continue
url = admin_reverse('cms_%s_changelist' % model._meta.model_name)
response = self.client.get('%s?q=1' % url)
errmsg = response.content
self.assertEqual(response.status_code, 200, errmsg)
def test_delete_translation(self):
admin_user = self.get_superuser()
page = create_page("delete-page-translation", "nav_playground.html", "en",
created_by=admin_user, published=True)
create_title("de", "delete-page-translation-2", page, slug="delete-page-translation-2")
create_title("es-mx", "delete-page-translation-es", page, slug="delete-page-translation-es")
with self.login_user_context(admin_user):
response = self.client.get(URL_CMS_TRANSLATION_DELETE % page.pk, {'language': 'de'})
self.assertEqual(response.status_code, 200)
response = self.client.post(URL_CMS_TRANSLATION_DELETE % page.pk, {'language': 'de'})
self.assertRedirects(response, URL_CMS_PAGE)
response = self.client.get(URL_CMS_TRANSLATION_DELETE % page.pk, {'language': 'es-mx'})
self.assertEqual(response.status_code, 200)
response = self.client.post(URL_CMS_TRANSLATION_DELETE % page.pk, {'language': 'es-mx'})
self.assertRedirects(response, URL_CMS_PAGE)
def test_change_dates(self):
admin_user, staff = self._get_guys()
page = create_page('test-page', 'nav_playground.html', 'en')
page.publish('en')
draft = page.get_draft_object()
with self.settings(USE_TZ=False):
original_date = draft.publication_date
original_end_date = draft.publication_end_date
new_date = timezone.now() - datetime.timedelta(days=1)
new_end_date = timezone.now() + datetime.timedelta(days=1)
url = admin_reverse('cms_page_dates', args=(draft.pk,))
with self.login_user_context(admin_user):
response = self.client.post(url, {
'language': 'en',
'site': draft.site.pk,
'publication_date_0': new_date.date(),
'publication_date_1': new_date.strftime("%H:%M:%S"),
'publication_end_date_0': new_end_date.date(),
'publication_end_date_1': new_end_date.strftime("%H:%M:%S"),
})
self.assertEqual(response.status_code, 302)
draft = Page.objects.get(pk=draft.pk)
self.assertNotEqual(draft.publication_date.timetuple(), original_date.timetuple())
self.assertEqual(draft.publication_date.timetuple(), new_date.timetuple())
self.assertEqual(draft.publication_end_date.timetuple(), new_end_date.timetuple())
if original_end_date:
self.assertNotEqual(draft.publication_end_date.timetuple(), original_end_date.timetuple())
with self.settings(USE_TZ=True):
original_date = draft.publication_date
original_end_date = draft.publication_end_date
new_date = timezone.localtime(timezone.now()) - datetime.timedelta(days=1)
new_end_date = timezone.localtime(timezone.now()) + datetime.timedelta(days=1)
url = admin_reverse('cms_page_dates', args=(draft.pk,))
with self.login_user_context(admin_user):
response = self.client.post(url, {
'language': 'en',
'site': draft.site.pk,
'publication_date_0': new_date.date(),
'publication_date_1': new_date.strftime("%H:%M:%S"),
'publication_end_date_0': new_end_date.date(),
'publication_end_date_1': new_end_date.strftime("%H:%M:%S"),
})
self.assertEqual(response.status_code, 302)
draft = Page.objects.get(pk=draft.pk)
self.assertNotEqual(draft.publication_date.timetuple(), original_date.timetuple())
self.assertEqual(timezone.localtime(draft.publication_date).timetuple(), new_date.timetuple())
self.assertEqual(timezone.localtime(draft.publication_end_date).timetuple(), new_end_date.timetuple())
if original_end_date:
self.assertNotEqual(draft.publication_end_date.timetuple(), original_end_date.timetuple())
def test_change_template(self):
admin_user, staff = self._get_guys()
request = self.get_request('/admin/cms/page/1/', 'en')
request.method = "POST"
pageadmin = site._registry[Page]
with self.login_user_context(staff):
self.assertRaises(Http404, pageadmin.change_template, request, 1)
page = create_page('test-page', 'nav_playground.html', 'en')
response = pageadmin.change_template(request, page.pk)
self.assertEqual(response.status_code, 403)
url = admin_reverse('cms_page_change_template', args=(page.pk,))
with self.login_user_context(admin_user):
response = self.client.post(url, {'template': 'doesntexist'})
self.assertEqual(response.status_code, 400)
response = self.client.post(url, {'template': get_cms_setting('TEMPLATES')[0][0]})
self.assertEqual(response.status_code, 200)
def test_get_permissions(self):
page = create_page('test-page', 'nav_playground.html', 'en')
url = admin_reverse('cms_page_get_permissions', args=(page.pk,))
response = self.client.get(url)
if DJANGO_1_6:
self.assertEqual(response.status_code, 200)
self.assertTemplateUsed(response, 'admin/login.html')
else:
self.assertEqual(response.status_code, 302)
self.assertRedirects(response, '/en/admin/login/?next=/en/admin/cms/page/%s/permissions/' % page.pk)
admin_user = self.get_superuser()
with self.login_user_context(admin_user):
response = self.client.get(url)
self.assertEqual(response.status_code, 200)
self.assertTemplateNotUsed(response, 'admin/login.html')
def test_changelist_items(self):
admin_user = self.get_superuser()
first_level_page = create_page('level1', 'nav_playground.html', 'en')
second_level_page_top = create_page('level21', "nav_playground.html", "en",
created_by=admin_user, published=True, parent=first_level_page)
second_level_page_bottom = create_page('level22', "nav_playground.html", "en",
created_by=admin_user, published=True,
parent=self.reload(first_level_page))
third_level_page = create_page('level3', "nav_playground.html", "en",
created_by=admin_user, published=True, parent=second_level_page_top)
self.assertEqual(Page.objects.all().count(), 4)
url = admin_reverse('cms_%s_changelist' % Page._meta.model_name)
request = self.get_request(url)
request.session = {}
request.user = admin_user
page_admin = site._registry[Page]
cl_params = [request, page_admin.model, page_admin.list_display,
page_admin.list_display_links, page_admin.list_filter,
page_admin.date_hierarchy, page_admin.search_fields,
page_admin.list_select_related, page_admin.list_per_page]
if hasattr(page_admin, 'list_max_show_all'): # django 1.4
cl_params.append(page_admin.list_max_show_all)
cl_params.extend([page_admin.list_editable, page_admin])
cl = CMSChangeList(*tuple(cl_params))
cl.set_items(request)
root_page = cl.get_items()[0]
self.assertEqual(root_page, first_level_page)
self.assertEqual(root_page.get_children()[0], second_level_page_top)
self.assertEqual(root_page.get_children()[1], second_level_page_bottom)
self.assertEqual(root_page.get_children()[0].get_children()[0], third_level_page)
def test_changelist_tree(self):
""" This test checks for proper jstree cookie unquoting.
It should be converted to a Selenium test to actually exercise the jstree behaviour.
The cookie set below is just a forged example (taken from a live session).
"""
admin_user = self.get_superuser()
first_level_page = create_page('level1', 'nav_playground.html', 'en')
second_level_page_top = create_page('level21', "nav_playground.html", "en",
created_by=admin_user, published=True, parent=first_level_page)
second_level_page_bottom = create_page('level22', "nav_playground.html", "en",
created_by=admin_user, published=True,
parent=self.reload(first_level_page))
third_level_page = create_page('level3', "nav_playground.html", "en",
created_by=admin_user, published=True, parent=second_level_page_top)
url = admin_reverse('cms_%s_changelist' % Page._meta.model_name)
if get_user_model().USERNAME_FIELD == 'email':
self.client.login(username='admin@django-cms.org', password='admin@django-cms.org')
else:
self.client.login(username='admin', password='admin')
self.client.cookies['djangocms_nodes_open'] = 'page_1%2Cpage_2'
response = self.client.get(url)
self.assertEqual(response.status_code, 200)
self.assertEqual(response.context["open_menu_trees"], [1, 2])
# tests descendants method for the lazy load ajax call
url = "%s%d/en/descendants/" % (url, first_level_page.pk)
response = self.client.get(url)
self.assertEqual(response.status_code, 200)
# should include both direct descendant pages
self.assertContains(response, 'id="page_%s"' % second_level_page_top.pk)
self.assertContains(response, 'id="page_%s"' % second_level_page_bottom.pk)
# but not any further down the tree
self.assertNotContains(response, 'id="page_%s"' % third_level_page.pk)
self.assertNotContains(response, 'None')
def test_unihandecode_doesnt_break_404_in_admin(self):
self.get_superuser()
if get_user_model().USERNAME_FIELD == 'email':
self.client.login(username='admin@django-cms.org', password='admin@django-cms.org')
else:
self.client.login(username='admin', password='admin')
response = self.client.get('/en/admin/cms/page/1/?language=en')
self.assertEqual(response.status_code, 404)
def test_tree_displays_in_correct_language(self):
'''
Ensure that the page titles in the admin tree are displayed in the
currently selected language.
'''
admin_guy, normal_guy = self._get_guys(use_global_permissions=False)
site = Site.objects.get(pk=1)
en_title = "EN Page"
es_title = "ES Pagina"
# Create a page in en
page = create_page(en_title, "nav_playground.html", "en", site=site, created_by=admin_guy)
# Add a es-mx translation for this page
create_title("es-mx", es_title, page, slug="es_pagina")
url = admin_reverse('cms_%s_changelist' % Page._meta.model_name)
url_pat = '<a href="{0}/{1}/preview/"[^>]*>{2}</a>'
with self.login_user_context(admin_guy):
# Check the EN version of the tree...
response = self.client.get(url, {'language': 'en'})
self.assertRegexpMatches(str(response.content), url_pat.format(page.pk, 'en', en_title, ))
# Check the ES version of the tree...
response = self.client.get(url, {'language': 'es-mx'})
self.assertRegexpMatches(str(response.content), url_pat.format(page.pk, 'es-mx', es_title, ))
def test_empty_placeholder_in_correct_language(self):
"""
Test that clearing a placeholder only affects the current language's contents
"""
# create some objects
page_en = create_page("EmptyPlaceholderTestPage (EN)", "nav_playground.html", "en")
ph = page_en.placeholders.get(slot="body")
# add the text plugin to the en version of the page
add_plugin(ph, "TextPlugin", "en", body="Hello World EN 1")
add_plugin(ph, "TextPlugin", "en", body="Hello World EN 2")
# creating a de title of the page and adding plugins to it
create_title("de", page_en.get_title(), page_en, slug=page_en.get_slug())
add_plugin(ph, "TextPlugin", "de", body="Hello World DE")
add_plugin(ph, "TextPlugin", "de", body="Hello World DE 2")
add_plugin(ph, "TextPlugin", "de", body="Hello World DE 3")
# before cleaning the de placeholder
self.assertEqual(ph.get_plugins('en').count(), 2)
self.assertEqual(ph.get_plugins('de').count(), 3)
admin_user, staff = self._get_guys()
with self.login_user_context(admin_user):
url = '%s?language=de' % admin_reverse('cms_page_clear_placeholder', args=[ph.pk])
response = self.client.post(url, {'test': 0})
self.assertEqual(response.status_code, 302)
# After cleaning the de placeholder, en placeholder must still have all the plugins
self.assertEqual(ph.get_plugins('en').count(), 2)
self.assertEqual(ph.get_plugins('de').count(), 0)
class AdminTests(AdminTestsBase):
# TODO: needs tests for actual permissions, not only superuser/normaluser
def setUp(self):
self.page = create_page("testpage", "nav_playground.html", "en")
def get_admin(self):
User = get_user_model()
fields = dict(email="admin@django-cms.org", is_staff=True, is_superuser=True)
if User.USERNAME_FIELD != 'email':
fields[User.USERNAME_FIELD] = "admin"
usr = User(**fields)
usr.set_password(getattr(usr, User.USERNAME_FIELD))
usr.save()
return usr
def get_permless(self):
User = get_user_model()
fields = dict(email="permless@django-cms.org", is_staff=True)
if User.USERNAME_FIELD != 'email':
fields[User.USERNAME_FIELD] = "permless"
usr = User(**fields)
usr.set_password(getattr(usr, User.USERNAME_FIELD))
usr.save()
return usr
def get_page(self):
return self.page
def test_change_publish_unpublish(self):
page = self.get_page()
permless = self.get_permless()
with self.login_user_context(permless):
request = self.get_request()
response = self.admin_class.publish_page(request, page.pk, "en")
self.assertEqual(response.status_code, 403)
page = self.reload(page)
self.assertFalse(page.is_published('en'))
request = self.get_request(post_data={'no': 'data'})
response = self.admin_class.publish_page(request, page.pk, "en")
# Forbidden
self.assertEqual(response.status_code, 403)
self.assertFalse(page.is_published('en'))
admin_user = self.get_admin()
with self.login_user_context(admin_user):
request = self.get_request(post_data={'no': 'data'})
response = self.admin_class.publish_page(request, page.pk, "en")
self.assertEqual(response.status_code, 302)
page = self.reload(page)
self.assertTrue(page.is_published('en'))
response = self.admin_class.unpublish(request, page.pk, "en")
self.assertEqual(response.status_code, 302)
page = self.reload(page)
self.assertFalse(page.is_published('en'))
def test_change_status_adds_log_entry(self):
page = self.get_page()
admin_user = self.get_admin()
with self.login_user_context(admin_user):
request = self.get_request(post_data={'no': 'data'})
self.assertFalse(LogEntry.objects.count())
response = self.admin_class.publish_page(request, page.pk, "en")
self.assertEqual(response.status_code, 302)
self.assertEqual(1, LogEntry.objects.count())
self.assertEqual(page.pk, int(LogEntry.objects.all()[0].object_id))
def test_change_innavigation(self):
page = self.get_page()
permless = self.get_permless()
admin_user = self.get_admin()
with self.login_user_context(permless):
request = self.get_request()
response = self.admin_class.change_innavigation(request, page.pk)
self.assertEqual(response.status_code, 403)
with self.login_user_context(permless):
request = self.get_request(post_data={'no': 'data'})
self.assertRaises(Http404, self.admin_class.change_innavigation,
request, page.pk + 100)
with self.login_user_context(permless):
request = self.get_request(post_data={'no': 'data'})
response = self.admin_class.change_innavigation(request, page.pk)
self.assertEqual(response.status_code, 403)
with self.login_user_context(admin_user):
request = self.get_request(post_data={'no': 'data'})
old = page.in_navigation
response = self.admin_class.change_innavigation(request, page.pk)
# These asserts are for #3589
self.assertContains(response, 'lang="en"')
self.assertContains(response, './%s/en/preview/' % page.pk)
self.assertEqual(response.status_code, 200)
page = self.reload(page)
self.assertEqual(old, not page.in_navigation)
def test_publish_page_requires_perms(self):
permless = self.get_permless()
with self.login_user_context(permless):
request = self.get_request()
request.method = "POST"
response = self.admin_class.publish_page(request, Page.objects.all()[0].pk, "en")
self.assertEqual(response.status_code, 403)
def test_revert_page(self):
self.page.publish('en')
title = self.page.title_set.get(language='en')
title.title = 'new'
title.save()
self.assertEqual(Title.objects.all().count(), 2)
self.assertEqual(Page.objects.all().count(), 2)
with self.login_user_context(self.get_superuser()):
request = self.get_request()
request.method = "POST"
response = self.admin_class.revert_page(request, Page.objects.all()[0].pk, "en")
self.assertEqual(response.status_code, 302)
self.assertEqual(Title.objects.all().count(), 2)
self.assertEqual(Page.objects.all().count(), 2)
new_title = Title.objects.get(pk=title.pk)
self.assertNotEqual(title.title, new_title.title)
self.assertTrue(title.publisher_is_draft)
self.assertTrue(new_title.publisher_is_draft)
def test_revert_page_requires_perms(self):
permless = self.get_permless()
with self.login_user_context(permless):
request = self.get_request()
request.method = "POST"
response = self.admin_class.revert_page(request, Page.objects.all()[0].pk, 'en')
self.assertEqual(response.status_code, 403)
def test_revert_page_redirects(self):
admin_user = self.get_admin()
self.page.publish("en") # Ensure public copy exists before reverting
with self.login_user_context(admin_user):
response = self.client.get(admin_reverse('cms_page_revert_page', args=(self.page.pk, 'en')))
self.assertEqual(response.status_code, 302)
url = response['Location']
self.assertTrue(url.endswith('?%s' % get_cms_setting('CMS_TOOLBAR_URL__EDIT_OFF')))
def test_remove_plugin_requires_post(self):
ph = Placeholder.objects.create(slot='test')
plugin = add_plugin(ph, 'TextPlugin', 'en', body='test')
admin_user = self.get_admin()
with self.login_user_context(admin_user):
request = self.get_request()
response = self.admin_class.delete_plugin(request, plugin.pk)
self.assertEqual(response.status_code, 200)
def test_move_plugin(self):
ph = Placeholder.objects.create(slot='test')
plugin = add_plugin(ph, 'TextPlugin', 'en', body='test')
page = self.get_page()
source, target = list(page.placeholders.all())[:2]
pageplugin = add_plugin(source, 'TextPlugin', 'en', body='test')
plugin_class = pageplugin.get_plugin_class_instance()
expected = {'reload': plugin_class.requires_reload(PLUGIN_MOVE_ACTION)}
placeholder = Placeholder.objects.all()[0]
permless = self.get_permless()
admin_user = self.get_admin()
with self.login_user_context(permless):
request = self.get_request()
response = self.admin_class.move_plugin(request)
self.assertEqual(response.status_code, 405)
request = self.get_request(post_data={'not_usable': '1'})
self.assertRaises(MultiValueDictKeyError, self.admin_class.move_plugin, request)
with self.login_user_context(admin_user):
request = self.get_request(post_data={'ids': plugin.pk})
self.assertRaises(MultiValueDictKeyError, self.admin_class.move_plugin, request)
with self.login_user_context(admin_user):
request = self.get_request(post_data={'plugin_id': pageplugin.pk,
'placeholder_id': 'invalid-placeholder', 'plugin_language': 'en'})
self.assertRaises(ValueError, self.admin_class.move_plugin, request)
with self.login_user_context(permless):
request = self.get_request(post_data={'plugin_id': pageplugin.pk,
'placeholder_id': placeholder.pk, 'plugin_parent': '', 'plugin_language': 'en'})
self.assertEqual(self.admin_class.move_plugin(request).status_code, HttpResponseForbidden.status_code)
with self.login_user_context(admin_user):
request = self.get_request(post_data={'plugin_id': pageplugin.pk,
'placeholder_id': placeholder.pk, 'plugin_parent': '', 'plugin_language': 'en'})
response = self.admin_class.move_plugin(request)
self.assertEqual(response.status_code, 200)
self.assertEqual(json.loads(response.content.decode('utf8')), expected)
with self.login_user_context(permless):
request = self.get_request(post_data={'plugin_id': pageplugin.pk,
'placeholder_id': placeholder.id, 'plugin_parent': '', 'plugin_language': 'en'})
self.assertEqual(self.admin_class.move_plugin(request).status_code, HttpResponseForbidden.status_code)
with self.login_user_context(admin_user):
request = self.get_request(post_data={'plugin_id': pageplugin.pk,
'placeholder_id': placeholder.id, 'plugin_parent': '', 'plugin_language': 'en'})
response = self.admin_class.move_plugin(request)
self.assertEqual(response.status_code, 200)
self.assertEqual(json.loads(response.content.decode('utf8')), expected)
def test_move_language(self):
page = self.get_page()
source, target = list(page.placeholders.all())[:2]
col = add_plugin(source, 'MultiColumnPlugin', 'en')
sub_col = add_plugin(source, 'ColumnPlugin', 'en', target=col)
col2 = add_plugin(source, 'MultiColumnPlugin', 'de')
admin_user = self.get_admin()
with self.login_user_context(admin_user):
request = self.get_request(post_data={'plugin_id': sub_col.pk,
'placeholder_id': source.id, 'plugin_parent': col2.pk, 'plugin_language': 'de'})
response = self.admin_class.move_plugin(request)
self.assertEqual(response.status_code, 200)
sub_col = CMSPlugin.objects.get(pk=sub_col.pk)
self.assertEqual(sub_col.language, "de")
self.assertEqual(sub_col.parent_id, col2.pk)
def test_preview_page(self):
permless = self.get_permless()
with self.login_user_context(permless):
request = self.get_request()
self.assertRaises(Http404, self.admin_class.preview_page, request, 404, "en")
page = self.get_page()
page.publish("en")
base_url = page.get_absolute_url()
with self.login_user_context(permless):
request = self.get_request('/?public=true')
response = self.admin_class.preview_page(request, page.pk, 'en')
self.assertEqual(response.status_code, 302)
self.assertEqual(response['Location'], '%s?%s&language=en' % (base_url, get_cms_setting('CMS_TOOLBAR_URL__EDIT_ON')))
request = self.get_request()
response = self.admin_class.preview_page(request, page.pk, 'en')
self.assertEqual(response.status_code, 302)
self.assertEqual(response['Location'], '%s?%s&language=en' % (base_url, get_cms_setting('CMS_TOOLBAR_URL__EDIT_ON')))
current_site = Site.objects.create(domain='django-cms.org', name='django-cms')
page.site = current_site
page.save()
page.publish("en")
self.assertTrue(page.is_home)
response = self.admin_class.preview_page(request, page.pk, 'en')
self.assertEqual(response.status_code, 302)
self.assertEqual(response['Location'],
'http://django-cms.org%s?%s&language=en' % (base_url, get_cms_setting('CMS_TOOLBAR_URL__EDIT_ON')))
def test_too_many_plugins_global(self):
conf = {
'body': {
'limits': {
'global': 1,
},
},
}
admin_user = self.get_admin()
url = admin_reverse('cms_page_add_plugin')
with self.settings(CMS_PERMISSION=False, CMS_PLACEHOLDER_CONF=conf):
page = create_page('somepage', 'nav_playground.html', 'en')
body = page.placeholders.get(slot='body')
add_plugin(body, 'TextPlugin', 'en', body='text')
with self.login_user_context(admin_user):
data = {
'plugin_type': 'TextPlugin',
'placeholder_id': body.pk,
'plugin_language': 'en',
}
response = self.client.post(url, data)
self.assertEqual(response.status_code, HttpResponseBadRequest.status_code)
def test_too_many_plugins_type(self):
conf = {
'body': {
'limits': {
'TextPlugin': 1,
},
},
}
admin_user = self.get_admin()
url = admin_reverse('cms_page_add_plugin')
with self.settings(CMS_PERMISSION=False, CMS_PLACEHOLDER_CONF=conf):
page = create_page('somepage', 'nav_playground.html', 'en')
body = page.placeholders.get(slot='body')
add_plugin(body, 'TextPlugin', 'en', body='text')
with self.login_user_context(admin_user):
data = {
'plugin_type': 'TextPlugin',
'placeholder_id': body.pk,
'plugin_language': 'en',
'plugin_parent': '',
}
response = self.client.post(url, data)
self.assertEqual(response.status_code, HttpResponseBadRequest.status_code)
def test_edit_title_dirty_bit(self):
language = "en"
admin_user = self.get_admin()
page = create_page('A', 'nav_playground.html', language)
page_admin = PageAdmin(Page, None)
page_admin._current_page = page
page.publish("en")
draft_page = page.get_draft_object()
admin_url = reverse("admin:cms_page_edit_title_fields", args=(
draft_page.pk, language
))
post_data = {
'title': "A Title"
}
with self.login_user_context(admin_user):
self.client.post(admin_url, post_data)
draft_page = Page.objects.get(pk=page.pk).get_draft_object()
self.assertTrue(draft_page.is_dirty('en'))
def test_page_form_leak(self):
language = "en"
admin_user = self.get_admin()
request = self.get_request('/', 'en')
request.user = admin_user
page = create_page('A', 'nav_playground.html', language, menu_title='menu title')
page_admin = PageAdmin(Page, site)
page_admin._current_page = page
edit_form = page_admin.get_form(request, page)
add_form = page_admin.get_form(request, None)
self.assertEqual(edit_form.base_fields['menu_title'].initial, 'menu title')
self.assertEqual(add_form.base_fields['menu_title'].initial, None)
class NoDBAdminTests(CMSTestCase):
@property
def admin_class(self):
return site._registry[Page]
def test_lookup_allowed_site__exact(self):
self.assertTrue(self.admin_class.lookup_allowed('site__exact', '1'))
def test_lookup_allowed_published(self):
self.assertTrue(self.admin_class.lookup_allowed('published', value='1'))
class PluginPermissionTests(AdminTestsBase):
def setUp(self):
self._page = create_page('test page', 'nav_playground.html', 'en')
self._placeholder = self._page.placeholders.all()[0]
def _get_admin(self):
User = get_user_model()
fields = dict(email="admin@django-cms.org", is_staff=True, is_active=True)
if User.USERNAME_FIELD != 'email':
fields[User.USERNAME_FIELD] = "admin"
admin_user = User(**fields)
admin_user.set_password('admin')
admin_user.save()
return admin_user
def _get_page_admin(self):
return admin.site._registry[Page]
def _give_permission(self, user, model, permission_type, save=True):
codename = '%s_%s' % (permission_type, model._meta.object_name.lower())
user.user_permissions.add(Permission.objects.get(codename=codename))
def _give_page_permission_rights(self, user):
self._give_permission(user, PagePermission, 'add')
self._give_permission(user, PagePermission, 'change')
self._give_permission(user, PagePermission, 'delete')
def _get_change_page_request(self, user, page):
return type('Request', (object,), {
'user': user,
'path': base.URL_CMS_PAGE_CHANGE % page.pk
})
def _give_cms_permissions(self, user, save=True):
for perm_type in ['add', 'change', 'delete']:
for model in [Page, Title]:
self._give_permission(user, model, perm_type, False)
gpp = GlobalPagePermission.objects.create(
user=user,
can_change=True,
can_delete=True,
can_change_advanced_settings=False,
can_publish=True,
can_change_permissions=False,
can_move_page=True,
)
gpp.sites = Site.objects.all()
if save:
user.save()
def _create_plugin(self):
plugin = add_plugin(self._placeholder, 'TextPlugin', 'en')
return plugin
def test_plugin_add_requires_permissions(self):
"""A user without permission cannot add a plugin; once the permission is granted, the add succeeds"""
admin = self._get_admin()
self._give_cms_permissions(admin)
if get_user_model().USERNAME_FIELD == 'email':
self.client.login(username='admin@django-cms.org', password='admin')
else:
self.client.login(username='admin', password='admin')
url = admin_reverse('cms_page_add_plugin')
data = {
'plugin_type': 'TextPlugin',
'placeholder_id': self._placeholder.pk,
'plugin_language': 'en',
'plugin_parent': '',
}
response = self.client.post(url, data)
self.assertEqual(response.status_code, HttpResponseForbidden.status_code)
self._give_permission(admin, Text, 'add')
response = self.client.post(url, data)
self.assertEqual(response.status_code, HttpResponse.status_code)
def test_plugin_edit_requires_permissions(self):
"""A user without permission cannot edit a plugin; once the permission is granted, the edit succeeds"""
plugin = self._create_plugin()
_, normal_guy = self._get_guys()
if get_user_model().USERNAME_FIELD == 'email':
self.client.login(username='test@test.com', password='test@test.com')
else:
self.client.login(username='test', password='test')
url = admin_reverse('cms_page_edit_plugin', args=[plugin.id])
response = self.client.post(url, dict())
self.assertEqual(response.status_code, HttpResponseForbidden.status_code)
# After he got the permissions, he can edit the plugin
self._give_permission(normal_guy, Text, 'change')
response = self.client.post(url, dict())
self.assertEqual(response.status_code, HttpResponse.status_code)
def test_plugin_edit_wrong_url(self):
"""Editing a plugin via a malformed URL returns a 404 response"""
plugin = self._create_plugin()
_, normal_guy = self._get_guys()
if get_user_model().USERNAME_FIELD == 'email':
self.client.login(username='test@test.com', password='test@test.com')
else:
self.client.login(username='test', password='test')
self._give_permission(normal_guy, Text, 'change')
url = '%s/edit-plugin/%s/' % (admin_reverse('cms_page_edit_plugin', args=[plugin.id]), plugin.id)
response = self.client.post(url, dict())
self.assertEqual(response.status_code, HttpResponseNotFound.status_code)
self.assertTrue("Plugin not found" in force_text(response.content))
def test_plugin_remove_requires_permissions(self):
"""A user without permission cannot remove a plugin; once the permission is granted, the removal succeeds"""
plugin = self._create_plugin()
_, normal_guy = self._get_guys()
if get_user_model().USERNAME_FIELD == 'email':
self.client.login(username='test@test.com', password='test@test.com')
else:
self.client.login(username='test', password='test')
url = admin_reverse('cms_page_delete_plugin', args=[plugin.pk])
data = dict(plugin_id=plugin.id)
response = self.client.post(url, data)
self.assertEqual(response.status_code, HttpResponseForbidden.status_code)
# After being granted the permission, the user can delete the plugin
self._give_permission(normal_guy, Text, 'delete')
response = self.client.post(url, data)
self.assertEqual(response.status_code, 302)
def test_plugin_move_requires_permissions(self):
"""A user without permission cannot move a plugin; once the permission is granted, the move succeeds"""
plugin = self._create_plugin()
_, normal_guy = self._get_guys()
if get_user_model().USERNAME_FIELD == 'email':
self.client.login(username='test@test.com', password='test@test.com')
else:
self.client.login(username='test', password='test')
url = admin_reverse('cms_page_move_plugin')
data = dict(plugin_id=plugin.id,
placeholder_id=self._placeholder.pk,
plugin_parent='',
)
response = self.client.post(url, data)
self.assertEqual(response.status_code, HttpResponseForbidden.status_code)
# After being granted the permission, the user can move the plugin
self._give_permission(normal_guy, Text, 'change')
response = self.client.post(url, data)
self.assertEqual(response.status_code, HttpResponse.status_code)
def test_plugins_copy_requires_permissions(self):
"""A user without permission cannot copy plugins; once the permission is granted, the copy succeeds"""
plugin = self._create_plugin()
_, normal_guy = self._get_guys()
if get_user_model().USERNAME_FIELD == 'email':
self.client.login(username='test@test.com', password='test@test.com')
else:
self.client.login(username='test', password='test')
url = admin_reverse('cms_page_copy_plugins')
data = dict(source_plugin_id=plugin.id,
source_placeholder_id=self._placeholder.pk,
source_language='en',
target_language='fr',
target_placeholder_id=self._placeholder.pk,
)
response = self.client.post(url, data)
self.assertEqual(response.status_code, HttpResponseForbidden.status_code)
# After the permissions are granted, he can copy the plugin
self._give_permission(normal_guy, Text, 'add')
response = self.client.post(url, data)
self.assertEqual(response.status_code, HttpResponse.status_code)
def test_plugins_copy_placeholder_ref(self):
"""User copies a placeholder into a clipboard. A PlaceholderReferencePlugin is created. Afterwards he copies this
into a placeholder and the PlaceholderReferencePlugin unpacks its content. After that he clear the clipboard"""
self.assertEqual(Placeholder.objects.count(), 2)
self._create_plugin()
self._create_plugin()
admin_user = self.get_superuser()
clipboard = Placeholder()
clipboard.save()
self.assertEqual(CMSPlugin.objects.count(), 2)
settings = UserSettings(language="fr", clipboard=clipboard, user=admin_user)
settings.save()
self.assertEqual(Placeholder.objects.count(), 3)
if get_user_model().USERNAME_FIELD == 'email':
self.client.login(username='admin@django-cms.org', password='admin@django-cms.org')
else:
self.client.login(username='admin', password='admin')
url = admin_reverse('cms_page_copy_plugins')
data = dict(source_plugin_id='',
source_placeholder_id=self._placeholder.pk,
source_language='en',
target_language='en',
target_placeholder_id=clipboard.pk,
)
response = self.client.post(url, data)
self.assertEqual(response.status_code, HttpResponse.status_code)
clipboard_plugins = clipboard.get_plugins()
self.assertEqual(CMSPlugin.objects.count(), 5)
self.assertEqual(clipboard_plugins.count(), 1)
self.assertEqual(clipboard_plugins[0].plugin_type, "PlaceholderPlugin")
placeholder_plugin, _ = clipboard_plugins[0].get_plugin_instance()
ref_placeholder = placeholder_plugin.placeholder_ref
copied_plugins = ref_placeholder.get_plugins()
self.assertEqual(copied_plugins.count(), 2)
data = dict(source_plugin_id=placeholder_plugin.pk,
source_placeholder_id=clipboard.pk,
source_language='en',
target_language='fr',
target_placeholder_id=self._placeholder.pk,
)
response = self.client.post(url, data)
self.assertEqual(response.status_code, HttpResponse.status_code)
plugins = self._placeholder.get_plugins()
self.assertEqual(plugins.count(), 4)
self.assertEqual(CMSPlugin.objects.count(), 7)
self.assertEqual(Placeholder.objects.count(), 4)
url = admin_reverse('cms_page_clear_placeholder', args=[clipboard.pk])
with self.assertNumQueries(FuzzyInt(70, 80)):
response = self.client.post(url, {'test': 0})
self.assertEqual(response.status_code, 302)
self.assertEqual(CMSPlugin.objects.count(), 4)
self.assertEqual(Placeholder.objects.count(), 3)
def test_plugins_copy_language(self):
"""User tries to copy plugin but has no permissions. He can copy plugins after he got the permissions"""
self._create_plugin()
_, normal_guy = self._get_guys()
if get_user_model().USERNAME_FIELD != 'email':
self.client.login(username='test', password='test')
else:
self.client.login(username='test@test.com', password='test@test.com')
self.assertEqual(1, CMSPlugin.objects.all().count())
url = admin_reverse('cms_page_copy_language', args=[self._page.pk])
data = dict(
source_language='en',
target_language='fr',
)
response = self.client.post(url, data)
self.assertEqual(response.status_code, HttpResponseForbidden.status_code)
# After the permissions are granted, he can copy plugins to the other language
self._give_permission(normal_guy, Text, 'add')
response = self.client.post(url, data)
self.assertEqual(response.status_code, HttpResponse.status_code)
self.assertEqual(2, CMSPlugin.objects.all().count())
def test_page_permission_inline_visibility(self):
User = get_user_model()
fields = dict(email='user@domain.com', password='user', is_staff=True)
if get_user_model().USERNAME_FIELD != 'email':
fields[get_user_model().USERNAME_FIELD] = 'user'
user = User(**fields)
user.save()
self._give_page_permission_rights(user)
page = create_page('A', 'nav_playground.html', 'en')
page_permission = PagePermission.objects.create(
can_change_permissions=True, user=user, page=page)
request = self._get_change_page_request(user, page)
page_admin = PageAdmin(Page, None)
page_admin._current_page = page
# user has can_change_permission
# => must see the PagePermissionInline
self.assertTrue(
any(type(inline) is PagePermissionInlineAdmin
for inline in page_admin.get_inline_instances(request, page)))
page = Page.objects.get(pk=page.pk)
# remove can_change_permission
page_permission.can_change_permissions = False
page_permission.save()
request = self._get_change_page_request(user, page)
page_admin = PageAdmin(Page, None)
page_admin._current_page = page
# => PagePermissionInline is no longer visible
self.assertFalse(
any(type(inline) is PagePermissionInlineAdmin
for inline in page_admin.get_inline_instances(request, page)))
def test_edit_title_is_allowed_for_staff_user(self):
"""
We check here both the permission on a single page, and the global permissions
"""
user = self._create_user('user', is_staff=True)
another_user = self._create_user('another_user', is_staff=True)
page = create_page('A', 'nav_playground.html', 'en')
admin_url = reverse("admin:cms_page_edit_title_fields", args=(
page.pk, 'en'
))
page_admin = PageAdmin(Page, None)
page_admin._current_page = page
username = getattr(user, get_user_model().USERNAME_FIELD)
self.client.login(username=username, password=username)
response = self.client.get(admin_url)
self.assertEqual(response.status_code, HttpResponseForbidden.status_code)
assign_user_to_page(page, user, grant_all=True)
username = getattr(user, get_user_model().USERNAME_FIELD)
self.client.login(username=username, password=username)
response = self.client.get(admin_url)
self.assertEqual(response.status_code, HttpResponse.status_code)
self._give_cms_permissions(another_user)
username = getattr(another_user, get_user_model().USERNAME_FIELD)
self.client.login(username=username, password=username)
response = self.client.get(admin_url)
self.assertEqual(response.status_code, HttpResponse.status_code)
def test_plugin_add_returns_valid_pk_for_plugin(self):
admin_user = self._get_admin()
self._give_cms_permissions(admin_user)
self._give_permission(admin_user, Text, 'add')
username = getattr(admin_user, get_user_model().USERNAME_FIELD)
self.client.login(username=username, password='admin')
url = admin_reverse('cms_page_add_plugin')
data = {
'plugin_type': 'TextPlugin',
'placeholder_id': self._placeholder.pk,
'plugin_language': 'en',
'plugin_parent': '',
}
response = self.client.post(url, data)
self.assertEqual(response.status_code, HttpResponse.status_code)
self.assertEqual(response['content-type'], 'application/json')
pk = response.content.decode('utf8').split("edit-plugin/")[1].split("/")[0]
self.assertTrue(CMSPlugin.objects.filter(pk=int(pk)).exists())
class AdminFormsTests(AdminTestsBase):
def test_clean_overwrite_url(self):
user = AnonymousUser()
user.is_superuser = True
user.pk = 1
request = type('Request', (object,), {'user': user})
with self.settings():
data = {
'title': 'TestPage',
'slug': 'test-page',
'language': 'en',
'overwrite_url': '/overwrite/url/',
'site': Site.objects.get_current().pk,
'template': get_cms_setting('TEMPLATES')[0][0],
'published': True
}
form = PageForm(data)
self.assertTrue(form.is_valid(), form.errors.as_text())
instance = form.save()
instance.permission_user_cache = user
instance.permission_advanced_settings_cache = True
Title.objects.set_or_create(request, instance, form, 'en')
form = PageForm(data, instance=instance)
self.assertTrue(form.is_valid(), form.errors.as_text())
def test_mismatching_site_parent_dotsite(self):
site0 = Site.objects.create(domain='foo.com', name='foo.com')
site1 = Site.objects.create(domain='foo.com', name='foo.com')
parent_page = Page.objects.create(
template='nav_playground.html',
site=site0)
new_page_data = {
'title': 'Title',
'slug': 'slug',
'language': 'en',
'site': site1.pk,
'template': get_cms_setting('TEMPLATES')[0][0],
'reverse_id': '',
'parent': parent_page.pk,
}
form = PageForm(data=new_page_data, files=None)
self.assertFalse(form.is_valid())
self.assertIn(u"Site doesn't match the parent's page site",
form.errors['__all__'])
def test_form_errors(self):
new_page_data = {
'title': 'Title',
'slug': 'home',
'language': 'en',
'site': 10,
'template': get_cms_setting('TEMPLATES')[0][0],
'reverse_id': '',
}
form = PageForm(data=new_page_data, files=None)
self.assertFalse(form.is_valid())
site0 = Site.objects.create(domain='foo.com', name='foo.com')
page1 = api.create_page("test", get_cms_setting('TEMPLATES')[0][0], "fr", site=site0)
new_page_data = {
'title': 'Title',
'slug': 'home',
'language': 'en',
'site': 1,
'template': get_cms_setting('TEMPLATES')[0][0],
'reverse_id': '',
'parent': page1.pk,
}
form = PageForm(data=new_page_data, files=None)
self.assertFalse(form.is_valid())
new_page_data = {
'title': 'Title',
'slug': '#',
'language': 'en',
'site': 1,
'template': get_cms_setting('TEMPLATES')[0][0],
'reverse_id': '',
}
form = PageForm(data=new_page_data, files=None)
self.assertFalse(form.is_valid())
new_page_data = {
'title': 'Title',
'slug': 'home',
'language': 'pp',
'site': 1,
'template': get_cms_setting('TEMPLATES')[0][0],
'reverse_id': '',
'parent':'',
}
form = PageForm(data=new_page_data, files=None)
self.assertFalse(form.is_valid())
page2 = api.create_page("test", get_cms_setting('TEMPLATES')[0][0], "en")
new_page_data = {
'title': 'Title',
'slug': 'test',
'language': 'en',
'site': 1,
'template': get_cms_setting('TEMPLATES')[0][0],
'reverse_id': '',
'parent':'',
}
form = PageForm(data=new_page_data, files=None)
self.assertFalse(form.is_valid())
page3 = api.create_page("test", get_cms_setting('TEMPLATES')[0][0], "en", parent=page2)
page3.title_set.update(path="hello/")
page3 = page3.reload()
new_page_data = {
'title': 'Title',
'slug': 'test',
'language': 'en',
'site': 1,
'template': get_cms_setting('TEMPLATES')[0][0],
'reverse_id': '',
'parent':'',
}
form = PageForm(data=new_page_data, files=None, instance=page3)
self.assertFalse(form.is_valid())
def test_reverse_id_error_location(self):
''' Test moving the reverse_id validation error to a field-specific one '''
# this is the Reverse ID we'll re-use to break things.
dupe_id = 'p1'
current_site = Site.objects.get_current()
create_page('Page 1', 'nav_playground.html', 'en', reverse_id=dupe_id)
page2 = create_page('Page 2', 'nav_playground.html', 'en')
# Assemble a bunch of data to test the page form
page2_data = {
'language': 'en',
'site': current_site.pk,
'reverse_id': dupe_id,
'template': 'col_two.html',
}
form = AdvancedSettingsForm(data=page2_data, files=None)
self.assertFalse(form.is_valid())
# reverse_id is the only item that is in __all__ as every other field
# has its own clean method. Moving it to be a field error means
# __all__ is now not available.
self.assertNotIn('__all__', form.errors)
# In moving it to its own field, it should be in form.errors, and
# the values contained therein should match these.
self.assertIn('reverse_id', form.errors)
self.assertEqual(1, len(form.errors['reverse_id']))
self.assertEqual([u'A page with this reverse URL id exists already.'],
form.errors['reverse_id'])
page2_data['reverse_id'] = ""
form = AdvancedSettingsForm(data=page2_data, files=None)
self.assertTrue(form.is_valid())
admin_user = self._get_guys(admin_only=True)
# reset some of page2_data so we can use cms.api.create_page
page2 = page2.reload()
page2.site = current_site
page2.save()
with self.login_user_context(admin_user):
# re-reset the page2_data for the admin form instance.
page2_data['reverse_id'] = dupe_id
page2_data['site'] = current_site.pk
# post to the admin change form for page 2, and test that the
# reverse_id form row has an errors class. Django's admin avoids
# collapsing these, so that the error is visible.
resp = self.client.post(base.URL_CMS_PAGE_ADVANCED_CHANGE % page2.pk, page2_data)
self.assertContains(resp, '<div class="form-row errors reverse_id">')
def test_create_page_type(self):
page = create_page('Test', 'static.html', 'en', published=True, reverse_id="home")
for placeholder in Placeholder.objects.all():
add_plugin(placeholder, TextPlugin, 'en', body='<b>Test</b>')
page.publish('en')
self.assertEqual(Page.objects.count(), 2)
self.assertEqual(CMSPlugin.objects.count(), 4)
superuser = self.get_superuser()
with self.login_user_context(superuser):
response = self.client.get(
"%s?copy_target=%s&language=%s" % (admin_reverse("cms_page_add_page_type"), page.pk, 'en'))
self.assertEqual(response.status_code, 302)
self.assertEqual(Page.objects.count(), 3)
self.assertEqual(Page.objects.filter(reverse_id="page_types").count(), 1)
page_types = Page.objects.get(reverse_id='page_types')
url = response.url if hasattr(response, 'url') else response['Location']
expected_url_params = QueryDict(
'target=%s&position=first-child&add_page_type=1©_target=%s&language=en' % (page_types.pk, page.pk))
response_url_params = QueryDict(urlparse(url).query)
self.assertDictEqual(expected_url_params, response_url_params)
response = self.client.get("%s?copy_target=%s&language=%s" % (
admin_reverse("cms_page_add_page_type"), page.pk, 'en'), follow=True)
self.assertEqual(response.status_code, 200)
# test that no page types are shown if none exist yet
response = self.client.get(admin_reverse('cms_page_add'))
self.assertNotContains(response, "page_type")
# create our first page type
page_data = {
'title': 'type1', 'slug': 'type1', '_save': 1, 'template': 'static.html', 'site': 1,
'language': 'en'
}
response = self.client.post(
"/en/admin/cms/page/add/?target=%s&position=first-child&add_page_type=1©_target=%s&language=en" % (
page_types.pk, page.pk), data=page_data)
self.assertEqual(response.status_code, 302)
self.assertEqual(Page.objects.count(), 4)
self.assertEqual(CMSPlugin.objects.count(), 6)
response = self.client.get(admin_reverse('cms_page_add'))
self.assertContains(response, "page_type")
# no page types available if you use the copy_target
response = self.client.get("%s?copy_target=%s&language=en" % (admin_reverse('cms_page_add'), page.pk))
self.assertNotContains(response, "page_type")
def test_render_edit_mode(self):
from django.core.cache import cache
cache.clear()
create_page('Test', 'static.html', 'en', published=True)
for placeholder in Placeholder.objects.all():
add_plugin(placeholder, TextPlugin, 'en', body='<b>Test</b>')
user = self.get_superuser()
self.assertEqual(Placeholder.objects.all().count(), 4)
with self.login_user_context(user):
with self.assertNumQueries(FuzzyInt(40, 66)):
output = force_text(self.client.get('/en/?%s' % get_cms_setting('CMS_TOOLBAR_URL__EDIT_ON')).content)
self.assertIn('<b>Test</b>', output)
self.assertEqual(Placeholder.objects.all().count(), 9)
self.assertEqual(StaticPlaceholder.objects.count(), 2)
for placeholder in Placeholder.objects.all():
add_plugin(placeholder, TextPlugin, 'en', body='<b>Test</b>')
with self.assertNumQueries(FuzzyInt(40, 72)):
output = force_text(self.client.get('/en/?%s' % get_cms_setting('CMS_TOOLBAR_URL__EDIT_ON')).content)
self.assertIn('<b>Test</b>', output)
with self.assertNumQueries(FuzzyInt(18, 45)):
force_text(self.client.get('/en/?%s' % get_cms_setting('CMS_TOOLBAR_URL__EDIT_ON')).content)
with self.assertNumQueries(FuzzyInt(11, 29)):
force_text(self.client.get('/en/').content)
def test_tree_view_queries(self):
from django.core.cache import cache
cache.clear()
for i in range(10):
create_page('Test%s' % i, 'col_two.html', 'en', published=True)
for placeholder in Placeholder.objects.all():
add_plugin(placeholder, TextPlugin, 'en', body='<b>Test</b>')
user = self.get_superuser()
with self.login_user_context(user):
with self.assertNumQueries(FuzzyInt(18, 33)):
force_text(self.client.get('/en/admin/cms/page/'))
def test_smart_link_published_pages(self):
admin, staff_guy = self._get_guys()
page_url = '/en/admin/cms/page/published-pages/' # Not sure how to achieve this with reverse...
with self.login_user_context(staff_guy):
multi_title_page = create_page('main_title', 'col_two.html', 'en', published=True,
overwrite_url='overwritten_url',
menu_title='menu_title')
title = multi_title_page.get_title_obj()
title.page_title = 'page_title'
title.save()
multi_title_page.save()
publish_page(multi_title_page, admin, 'en')
# A non-AJAX call should return a 403, as this page should only be accessed via AJAX queries
self.assertEqual(403, self.client.get(page_url).status_code)
self.assertEqual(200,
self.client.get(page_url, HTTP_X_REQUESTED_WITH='XMLHttpRequest').status_code
)
# Test that the query param is working as expected.
self.assertEqual(1, len(json.loads(self.client.get(page_url, {'q':'main_title'},
HTTP_X_REQUESTED_WITH='XMLHttpRequest').content.decode("utf-8"))))
self.assertEqual(1, len(json.loads(self.client.get(page_url, {'q':'menu_title'},
HTTP_X_REQUESTED_WITH='XMLHttpRequest').content.decode("utf-8"))))
self.assertEqual(1, len(json.loads(self.client.get(page_url, {'q':'overwritten_url'},
HTTP_X_REQUESTED_WITH='XMLHttpRequest').content.decode("utf-8"))))
self.assertEqual(1, len(json.loads(self.client.get(page_url, {'q':'page_title'},
HTTP_X_REQUESTED_WITH='XMLHttpRequest').content.decode("utf-8"))))
class AdminPageEditContentSizeTests(AdminTestsBase):
"""
The number of system users influences the size of the page edit page,
but each username should appear only twice on the page.
This relates to extra=0 on PagePermissionInlineAdminForm and
ViewRestrictionInlineAdmin.
"""
def test_editpage_contentsize(self):
"""
Expect a username to appear only twice in the content, while the
overall page size grows with the user count.
"""
with self.settings(CMS_PERMISSION=True):
admin_user = self.get_superuser()
PAGE_NAME = 'TestPage'
USER_NAME = 'test_size_user_0'
current_site = Site.objects.get(pk=1)
page = create_page(PAGE_NAME, "nav_playground.html", "en", site=current_site, created_by=admin_user)
page.save()
self._page = page
with self.login_user_context(admin_user):
url = base.URL_CMS_PAGE_PERMISSION_CHANGE % self._page.pk
response = self.client.get(url)
self.assertEqual(response.status_code, 200)
old_response_size = len(response.content)
old_user_count = get_user_model().objects.count()
# create an additional user and reload the page
get_user_model().objects.create_user(username=USER_NAME, email=USER_NAME + '@django-cms.org',
password=USER_NAME)
user_count = get_user_model().objects.count()
more_users_in_db = old_user_count < user_count
# we have more users
self.assertTrue(more_users_in_db, "New users got NOT created")
response = self.client.get(url)
new_response_size = len(response.content)
page_size_grown = old_response_size < new_response_size
# expect the page size to grow with the number of users in the system
self.assertTrue(page_size_grown, "Page size has not grown after user creation")
# usernames are only 2 times in content
text = smart_str(response.content, response._charset)
foundcount = text.count(USER_NAME)
# 2 forms contain usernames as options
self.assertEqual(foundcount, 2,
"Username %s appeared %s times in response.content, expected 2 times" % (
USER_NAME, foundcount)) | unknown | codeparrot/codeparrot-clean | ||
# Copyright (C) 2018 PrivacyScore Contributors
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
from time import sleep
from django.core.management import BaseCommand
from privacyscore.backend.models import Site, ScanList
class Command(BaseCommand):
help = 'Rescan all sites in an existing ScanList.'
def add_arguments(self, parser):
parser.add_argument('scan_list_id')
parser.add_argument('-s', '--sleep-between-scans', type=float, default=0)
def handle(self, *args, **options):
scan_list = ScanList.objects.get(id=options['scan_list_id'])
sites = scan_list.sites.all()
scan_count = 0
for site in sites:
status_code = site.scan()
if status_code == Site.SCAN_COOLDOWN:
self.stdout.write(
'Rate limiting -- Not scanning site {}'.format(site))
continue
if status_code == Site.SCAN_BLACKLISTED:
self.stdout.write(
'Blacklisted -- Not scanning site {}'.format(site))
continue
scan_count += 1
self.stdout.write('Scanning site {}'.format(
site))
if options['sleep_between_scans']:
self.stdout.write('Sleeping {}'.format(options['sleep_between_scans']))
sleep(options['sleep_between_scans'])
self.stdout.write('read {} sites, scanned {}'.format(
len(sites), scan_count)) | unknown | codeparrot/codeparrot-clean | ||
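A side note on the command above: Django builds its command-line options with argparse, so the flag `-s`/`--sleep-between-scans` added in `add_arguments()` reaches `handle()` as `options['sleep_between_scans']` (hyphens are normalised to underscores). A minimal standalone sketch of that mapping, using plain argparse without Django:

```python
import argparse

# Mirror of add_arguments() above, outside Django: the dashed option
# name is stored under an underscored destination in the options dict.
parser = argparse.ArgumentParser()
parser.add_argument('scan_list_id')
parser.add_argument('-s', '--sleep-between-scans', type=float, default=0)

options = vars(parser.parse_args(['42', '-s', '1.5']))
# argparse normalises '--sleep-between-scans' to 'sleep_between_scans'
scan_list_id = options['scan_list_id']
sleep_between = options['sleep_between_scans']
```

This is why the code above can index `options['sleep_between_scans']` even though the flag itself contains hyphens.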
/*
* Copyright 2002-present the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.context.annotation;
import java.io.Closeable;
import java.util.concurrent.CompletableFuture;
import org.junit.jupiter.api.Test;
import reactor.core.publisher.Mono;
import org.springframework.beans.factory.DisposableBean;
import org.springframework.context.ConfigurableApplicationContext;
import org.springframework.context.support.GenericXmlApplicationContext;
import static org.assertj.core.api.Assertions.assertThat;
/**
* @author Chris Beams
* @author Juergen Hoeller
* @author Stephane Nicoll
*/
class DestroyMethodInferenceTests {
@Test
void beanMethods() {
ConfigurableApplicationContext ctx = new AnnotationConfigApplicationContext(Config.class);
WithExplicitDestroyMethod c0 = ctx.getBean(WithExplicitDestroyMethod.class);
WithLocalCloseMethod c1 = ctx.getBean("c1", WithLocalCloseMethod.class);
WithLocalCloseMethod c2 = ctx.getBean("c2", WithLocalCloseMethod.class);
WithInheritedCloseMethod c3 = ctx.getBean("c3", WithInheritedCloseMethod.class);
WithInheritedCloseMethod c4 = ctx.getBean("c4", WithInheritedCloseMethod.class);
WithInheritedCloseMethod c5 = ctx.getBean("c5", WithInheritedCloseMethod.class);
WithNoCloseMethod c6 = ctx.getBean("c6", WithNoCloseMethod.class);
WithLocalShutdownMethod c7 = ctx.getBean("c7", WithLocalShutdownMethod.class);
WithInheritedCloseMethod c8 = ctx.getBean("c8", WithInheritedCloseMethod.class);
WithDisposableBean c9 = ctx.getBean("c9", WithDisposableBean.class);
WithAutoCloseable c10 = ctx.getBean("c10", WithAutoCloseable.class);
WithCompletableFutureMethod c11 = ctx.getBean("c11", WithCompletableFutureMethod.class);
WithReactorMonoMethod c12 = ctx.getBean("c12", WithReactorMonoMethod.class);
assertThat(c0.closed).as("c0").isFalse();
assertThat(c1.closed).as("c1").isFalse();
assertThat(c2.closed).as("c2").isFalse();
assertThat(c3.closed).as("c3").isFalse();
assertThat(c4.closed).as("c4").isFalse();
assertThat(c5.closed).as("c5").isFalse();
assertThat(c6.closed).as("c6").isFalse();
assertThat(c7.closed).as("c7").isFalse();
assertThat(c8.closed).as("c8").isFalse();
assertThat(c9.closed).as("c9").isFalse();
assertThat(c10.closed).as("c10").isFalse();
assertThat(c11.closed).as("c11").isFalse();
assertThat(c12.closed).as("c12").isFalse();
ctx.close();
assertThat(c0.closed).as("c0").isTrue();
assertThat(c1.closed).as("c1").isTrue();
assertThat(c2.closed).as("c2").isTrue();
assertThat(c3.closed).as("c3").isTrue();
assertThat(c4.closed).as("c4").isTrue();
assertThat(c5.closed).as("c5").isTrue();
assertThat(c6.closed).as("c6").isFalse();
assertThat(c7.closed).as("c7").isTrue();
assertThat(c8.closed).as("c8").isFalse();
assertThat(c9.closed).as("c9").isTrue();
assertThat(c10.closed).as("c10").isTrue();
assertThat(c11.closed).as("c11").isTrue();
assertThat(c12.closed).as("c12").isTrue();
}
@Test
void xml() {
ConfigurableApplicationContext ctx = new GenericXmlApplicationContext(
getClass(), "DestroyMethodInferenceTests-context.xml");
WithLocalCloseMethod x1 = ctx.getBean("x1", WithLocalCloseMethod.class);
WithLocalCloseMethod x2 = ctx.getBean("x2", WithLocalCloseMethod.class);
WithLocalCloseMethod x3 = ctx.getBean("x3", WithLocalCloseMethod.class);
WithNoCloseMethod x4 = ctx.getBean("x4", WithNoCloseMethod.class);
WithInheritedCloseMethod x8 = ctx.getBean("x8", WithInheritedCloseMethod.class);
WithDisposableBean x9 = ctx.getBean("x9", WithDisposableBean.class);
WithAutoCloseable x10 = ctx.getBean("x10", WithAutoCloseable.class);
assertThat(x1.closed).isFalse();
assertThat(x2.closed).isFalse();
assertThat(x3.closed).isFalse();
assertThat(x4.closed).isFalse();
assertThat(x8.closed).isFalse();
assertThat(x9.closed).isFalse();
assertThat(x10.closed).isFalse();
ctx.close();
assertThat(x1.closed).isFalse();
assertThat(x2.closed).isTrue();
assertThat(x3.closed).isTrue();
assertThat(x4.closed).isFalse();
assertThat(x8.closed).isFalse();
assertThat(x9.closed).isTrue();
assertThat(x10.closed).isTrue();
}
@Configuration(proxyBeanMethods = false)
static class Config {
@Bean(destroyMethod = "explicitClose")
public WithExplicitDestroyMethod c0() {
return new WithExplicitDestroyMethod();
}
@Bean
public WithLocalCloseMethod c1() {
return new WithLocalCloseMethod();
}
@Bean
public Object c2() {
return new WithLocalCloseMethod();
}
@Bean
public WithInheritedCloseMethod c3() {
return new WithInheritedCloseMethod();
}
@Bean
public Closeable c4() {
return new WithInheritedCloseMethod();
}
@Bean(destroyMethod = "other")
public WithInheritedCloseMethod c5() {
return new WithInheritedCloseMethod() {
@Override
public void close() {
throw new IllegalStateException("close() should not be called");
}
@SuppressWarnings("unused")
public void other() {
this.closed = true;
}
};
}
@Bean
public WithNoCloseMethod c6() {
return new WithNoCloseMethod();
}
@Bean
public WithLocalShutdownMethod c7() {
return new WithLocalShutdownMethod();
}
@Bean(destroyMethod = "")
public WithInheritedCloseMethod c8() {
return new WithInheritedCloseMethod();
}
@Bean(destroyMethod = "")
public WithDisposableBean c9() {
return new WithDisposableBean();
}
@Bean
public WithAutoCloseable c10() {
return new WithAutoCloseable();
}
@Bean
public WithCompletableFutureMethod c11() {
return new WithCompletableFutureMethod();
}
@Bean
public WithReactorMonoMethod c12() {
return new WithReactorMonoMethod();
}
}
static class WithExplicitDestroyMethod {
boolean closed = false;
public void explicitClose() {
closed = true;
}
}
static class WithLocalCloseMethod {
boolean closed = false;
public void close() {
closed = true;
}
}
static class WithInheritedCloseMethod implements Closeable {
boolean closed = false;
@Override
public void close() {
closed = true;
}
}
static class WithNoCloseMethod {
boolean closed = false;
}
static class WithLocalShutdownMethod {
boolean closed = false;
public void shutdown() {
closed = true;
}
}
static class WithDisposableBean implements DisposableBean {
boolean closed = false;
@Override
public void destroy() {
closed = true;
}
}
static class WithAutoCloseable implements AutoCloseable {
boolean closed = false;
@Override
public void close() {
closed = true;
}
}
static class WithCompletableFutureMethod {
boolean closed = false;
public CompletableFuture<Void> close() {
return CompletableFuture.runAsync(() -> {
try {
Thread.sleep(100);
}
catch (InterruptedException e) {
Thread.currentThread().interrupt();
}
closed = true;
});
}
}
static class WithReactorMonoMethod {
boolean closed = false;
public Mono<Void> close() {
try {
Thread.sleep(100);
}
catch (InterruptedException e) {
Thread.currentThread().interrupt();
}
return Mono.fromRunnable(() -> closed = true);
}
}
} | java | github | https://github.com/spring-projects/spring-framework | spring-context/src/test/java/org/springframework/context/annotation/DestroyMethodInferenceTests.java |
# (C) British Crown Copyright 2013 - 2015, Met Office
#
# This file is part of Iris.
#
# Iris is free software: you can redistribute it and/or modify it under
# the terms of the GNU Lesser General Public License as published by the
# Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Iris is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Lesser General Public License for more details.
#
# You should have received a copy of the GNU Lesser General Public License
# along with Iris. If not, see <http://www.gnu.org/licenses/>.
from __future__ import (absolute_import, division, print_function)
from six.moves import (filter, input, map, range, zip) # noqa
from datetime import datetime
import os
import os.path
HEADER = \
'''# (C) British Crown Copyright 2013 - {}, Met Office
#
# This file is part of Iris.
#
# Iris is free software: you can redistribute it and/or modify it under
# the terms of the GNU Lesser General Public License as published by the
# Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Iris is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Lesser General Public License for more details.
#
# You should have received a copy of the GNU Lesser General Public License
# along with Iris. If not, see <http://www.gnu.org/licenses/>.
#
# DO NOT EDIT: AUTO-GENERATED'''
def absolute_path(path):
return os.path.abspath(os.path.join(os.path.dirname(__file__), path))
def prep_module_file(module_path):
"""
Prepare a module file, creating its directory if needed and writing
the license header into it.
"""
module_path = absolute_path(module_path)
module_dir = os.path.dirname(module_path)
if not os.path.isdir(module_dir):
os.makedirs(module_dir)
with open(module_path, 'w') as module_file:
module_file.write(HEADER.format(datetime.utcnow().year)) | unknown | codeparrot/codeparrot-clean | ||
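To illustrate what `prep_module_file` does, here is a small self-contained re-creation of its behaviour run against a temporary directory. The `HEADER_STUB` name and the shortened header string are ours, not part of Iris:

```python
import os
import tempfile
from datetime import datetime

# Shortened stand-in for the Iris HEADER constant above.
HEADER_STUB = '# (C) British Crown Copyright 2013 - {}, Met Office'

def prep_module_file_demo(module_path):
    # Create the parent directory if needed, then write the header,
    # mirroring the logic of prep_module_file above.
    module_dir = os.path.dirname(module_path)
    if not os.path.isdir(module_dir):
        os.makedirs(module_dir)
    with open(module_path, 'w') as module_file:
        module_file.write(HEADER_STUB.format(datetime.utcnow().year))

tmp = tempfile.mkdtemp()
target = os.path.join(tmp, 'generated', 'pkg', '_module.py')
prep_module_file_demo(target)

with open(target) as f:
    written = f.read()
```

Note that `os.makedirs` creates the whole nested `generated/pkg` path in one call, and the current year is interpolated into the copyright line at write time.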
//===- bolt/Passes/RegReAssign.h --------------------------------*- C++ -*-===//
//
// Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions.
// See https://llvm.org/LICENSE.txt for license information.
// SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
//
//===----------------------------------------------------------------------===//
#ifndef BOLT_PASSES_REGREASSIGN_H
#define BOLT_PASSES_REGREASSIGN_H
#include "bolt/Core/BinaryFunctionCallGraph.h"
#include "bolt/Passes/BinaryPasses.h"
#include "bolt/Passes/RegAnalysis.h"
namespace llvm {
namespace bolt {
class RegReAssign : public BinaryFunctionPass {
std::vector<int64_t> RegScore;
std::vector<size_t> RankedRegs;
BitVector ClassicRegs;
BitVector CalleeSaved;
BitVector ClassicCSR;
BitVector ExtendedCSR;
BitVector GPRegs;
/// Hooks to other passes
std::unique_ptr<RegAnalysis> RA;
std::unique_ptr<BinaryFunctionCallGraph> CG;
/// Stats
DenseSet<const BinaryFunction *> FuncsChanged;
int64_t StaticBytesSaved{0};
int64_t DynBytesSaved{0};
void swap(BinaryFunction &Function, MCPhysReg A, MCPhysReg B);
void rankRegisters(BinaryFunction &Function);
void aggressivePassOverFunction(BinaryFunction &Function);
bool conservativePassOverFunction(BinaryFunction &Function);
void setupAggressivePass(BinaryContext &BC,
std::map<uint64_t, BinaryFunction> &BFs);
void setupConservativePass(BinaryContext &BC,
std::map<uint64_t, BinaryFunction> &BFs);
public:
/// BinaryPass public interface
explicit RegReAssign(const cl::opt<bool> &PrintPass)
: BinaryFunctionPass(PrintPass) {}
const char *getName() const override { return "regreassign"; }
bool shouldPrint(const BinaryFunction &BF) const override {
return BinaryFunctionPass::shouldPrint(BF) && FuncsChanged.count(&BF) > 0;
}
Error runOnFunctions(BinaryContext &BC) override;
};
} // namespace bolt
} // namespace llvm
#endif | c | github | https://github.com/llvm/llvm-project | bolt/include/bolt/Passes/RegReAssign.h |