Dataset schema:

| Column | Type | Range / Values |
|---|---|---|
| hexsha | string | length 40 |
| size | int64 | 1 to 1.03M |
| ext | string | 10 classes |
| lang | string | 1 class |
| max_stars_repo_path | string | length 3 to 239 |
| max_stars_repo_name | string | length 5 to 130 |
| max_stars_repo_head_hexsha | string | length 40 to 78 |
| max_stars_repo_licenses | list | length 1 to 10 |
| max_stars_count | int64 | 1 to 191k, nullable |
| max_stars_repo_stars_event_min_datetime | string | length 24, nullable |
| max_stars_repo_stars_event_max_datetime | string | length 24, nullable |
| max_issues_repo_path | string | length 3 to 239 |
| max_issues_repo_name | string | length 5 to 130 |
| max_issues_repo_head_hexsha | string | length 40 to 78 |
| max_issues_repo_licenses | list | length 1 to 10 |
| max_issues_count | int64 | 1 to 67k, nullable |
| max_issues_repo_issues_event_min_datetime | string | length 24, nullable |
| max_issues_repo_issues_event_max_datetime | string | length 24, nullable |
| max_forks_repo_path | string | length 3 to 239 |
| max_forks_repo_name | string | length 5 to 130 |
| max_forks_repo_head_hexsha | string | length 40 to 78 |
| max_forks_repo_licenses | list | length 1 to 10 |
| max_forks_count | int64 | 1 to 105k, nullable |
| max_forks_repo_forks_event_min_datetime | string | length 24, nullable |
| max_forks_repo_forks_event_max_datetime | string | length 24, nullable |
| content | string | length 1 to 1.03M |
| avg_line_length | float64 | 1 to 958k |
| max_line_length | int64 | 1 to 1.03M |
| alphanum_fraction | float64 | 0 to 1 |
hexsha: 4a124e29d399bdce0752d2746d43fa4acbad4bb6 | size: 55,566 | ext: py | lang: Python
repo_path: training/retrain.py | repo_name: namtran98/Tensorflow_Research | repo_head_hexsha: 958cbb0d9e36c21d59d29c4cc289abdf8cf2e687 | licenses: ["Apache-2.0"]
(same values repeat across the max_stars, max_issues, and max_forks column groups; all counts and event datetimes are null)
content:
# Copyright 2015 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
# NOTICE: This work was derived from tensorflow/examples/image_retraining
# and modified to use TensorFlow Hub modules.
# pylint: disable=line-too-long
r"""Simple transfer learning with image modules from TensorFlow Hub.
This example shows how to train an image classifier based on any
TensorFlow Hub module that computes image feature vectors. By default,
it uses the feature vectors computed by Inception V3 trained on ImageNet.
See https://github.com/tensorflow/hub/blob/master/docs/modules/image.md
for more options.
The top layer receives as input a 2048-dimensional vector (assuming
Inception V3) for each image. We train a softmax layer on top of this
representation. If the softmax layer contains N labels, this corresponds
to learning N + 2048*N model parameters for the biases and weights.
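For example, with 5 labels this comes to 5 + 2048*5 = 10,245 trainable
parameters in the new layer.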
Here's an example, which assumes you have a folder containing class-named
subfolders, each full of images for each label. The example folder flower_photos
should have a structure like this:
~/flower_photos/daisy/photo1.jpg
~/flower_photos/daisy/photo2.jpg
...
~/flower_photos/rose/anotherphoto77.jpg
...
~/flower_photos/sunflower/somepicture.jpg
The subfolder names are important, since they define what label is applied to
each image, but the filenames themselves don't matter. (For a working example,
download http://download.tensorflow.org/example_images/flower_photos.tgz
and run tar xzf flower_photos.tgz to unpack it.)
Once your images are prepared, and you have pip-installed tensorflow-hub and
a sufficiently recent version of tensorflow, you can run the training with a
command like this:
```bash
python retrain.py --image_dir ~/flower_photos
```
You can replace the image_dir argument with any folder containing subfolders of
images. The label for each image is taken from the name of the subfolder it's
in.
This produces a new model file that can be loaded and run by any TensorFlow
program, for example the tensorflow/examples/label_image sample code.
By default this script will use the highly accurate, but comparatively large and
slow Inception V3 model architecture. It's recommended that you start with this
to validate that you have gathered good training data, but if you want to deploy
on resource-limited platforms, you can try the `--tfhub_module` flag with a
Mobilenet model. For more information on Mobilenet, see
https://research.googleblog.com/2017/06/mobilenets-open-source-models-for.html
For example:
Run floating-point version of Mobilenet:
```bash
python retrain.py --image_dir ~/flower_photos \
--tfhub_module https://tfhub.dev/google/imagenet/mobilenet_v1_100_224/feature_vector/1
```
Run Mobilenet, instrumented for quantization:
```bash
python retrain.py --image_dir ~/flower_photos/ \
--tfhub_module https://tfhub.dev/google/imagenet/mobilenet_v1_100_224/quantops/feature_vector/1
```
These instrumented models can be converted to fully quantized mobile models via
TensorFlow Lite.
There are different Mobilenet models to choose from, with a variety of file
size and latency options.
- The first number can be '100', '075', '050', or '025' to control the number
of neurons (activations of hidden layers); the number of weights (and hence
to some extent the file size and speed) shrinks with the square of that
fraction.
- The second number is the input image size. You can choose '224', '192',
'160', or '128', with smaller sizes giving faster speeds.
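For example, a module named mobilenet_v1_050_160 would combine 50% of the
neurons with 160x160 input images.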
To use with TensorBoard:
By default, this script will log summaries to /tmp/retrain_logs directory
Visualize the summaries with this command:
tensorboard --logdir /tmp/retrain_logs
To use with TensorFlow Serving, run this tool with --saved_model_dir set
to some increasingly numbered export location under the model base path, e.g.:
```bash
python retrain.py (... other args as before ...) \
--saved_model_dir=/tmp/saved_models/$(date +%s)/
tensorflow_model_server --port=9000 --model_name=my_image_classifier \
--model_base_path=/tmp/saved_models/
```
"""
# pylint: enable=line-too-long
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import argparse
import collections
from datetime import datetime
import hashlib
import os.path
import random
import re
import sys
import numpy as np
import tensorflow as tf
import tensorflow_hub as hub
FLAGS = None
MAX_NUM_IMAGES_PER_CLASS = 2 ** 27 - 1 # ~134M
# The location where variable checkpoints will be stored.
CHECKPOINT_NAME = '/tmp/_retrain_checkpoint'
# A module is understood as instrumented for quantization with TF-Lite
# if it contains any of these ops.
FAKE_QUANT_OPS = ('FakeQuantWithMinMaxVars',
'FakeQuantWithMinMaxVarsPerChannel')
def create_image_lists(image_dir, testing_percentage, validation_percentage):
"""Builds a list of training images from the file system.
Analyzes the sub folders in the image directory, splits them into stable
training, testing, and validation sets, and returns a data structure
describing the lists of images for each label and their paths.
Args:
image_dir: String path to a folder containing subfolders of images.
testing_percentage: Integer percentage of the images to reserve for tests.
validation_percentage: Integer percentage of images reserved for validation.
Returns:
An OrderedDict containing an entry for each label subfolder, with images
split into training, testing, and validation sets within each label.
The order of items defines the class indices.
"""
if not tf.gfile.Exists(image_dir):
tf.logging.error("Image directory '" + image_dir + "' not found.")
return None
result = collections.OrderedDict()
sub_dirs = sorted(x[0] for x in tf.gfile.Walk(image_dir))
# The root directory comes first, so skip it.
is_root_dir = True
for sub_dir in sub_dirs:
if is_root_dir:
is_root_dir = False
continue
    extensions = ['jpg', 'jpeg', 'JPG', 'JPEG']  # 'png'/'PNG' could be added here; the original script handled JPEG only.
file_list = []
dir_name = os.path.basename(sub_dir)
if dir_name == image_dir:
continue
tf.logging.info("Looking for images in '" + dir_name + "'")
for extension in extensions:
file_glob = os.path.join(image_dir, dir_name, '*.' + extension)
file_list.extend(tf.gfile.Glob(file_glob))
if not file_list:
tf.logging.warning('No files found')
continue
if len(file_list) < 20:
tf.logging.warning(
'WARNING: Folder has less than 20 images, which may cause issues.')
elif len(file_list) > MAX_NUM_IMAGES_PER_CLASS:
tf.logging.warning(
'WARNING: Folder {} has more than {} images. Some images will '
'never be selected.'.format(dir_name, MAX_NUM_IMAGES_PER_CLASS))
label_name = re.sub(r'[^a-z0-9]+', ' ', dir_name.lower())
training_images = []
testing_images = []
validation_images = []
for file_name in file_list:
base_name = os.path.basename(file_name)
      # We want to ignore anything after '_nohash_' in the file name when
      # deciding which set to put an image in, so that the data set creator
      # has a way of grouping photos that are close variations of each other.
      # For example, this is used in the plant disease data set to group
      # multiple pictures of the same leaf.
hash_name = re.sub(r'_nohash_.*$', '', file_name)
# This looks a bit magical, but we need to decide whether this file should
# go into the training, testing, or validation sets, and we want to keep
# existing files in the same set even if more files are subsequently
# added.
# To do that, we need a stable way of deciding based on just the file name
# itself, so we do a hash of that and then use that to generate a
# probability value that we use to assign it.
hash_name_hashed = hashlib.sha1(tf.compat.as_bytes(hash_name)).hexdigest()
percentage_hash = ((int(hash_name_hashed, 16) %
(MAX_NUM_IMAGES_PER_CLASS + 1)) *
(100.0 / MAX_NUM_IMAGES_PER_CLASS))
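      # percentage_hash is a deterministic value in [0, 100] derived from the
      # file name hash, so each file keeps landing in the same split even as
      # other files are added or removed.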
if percentage_hash < validation_percentage:
validation_images.append(base_name)
elif percentage_hash < (testing_percentage + validation_percentage):
testing_images.append(base_name)
else:
training_images.append(base_name)
result[label_name] = {
'dir': dir_name,
'training': training_images,
'testing': testing_images,
'validation': validation_images,
}
return result
def get_image_path(image_lists, label_name, index, image_dir, category):
"""Returns a path to an image for a label at the given index.
Args:
image_lists: OrderedDict of training images for each label.
label_name: Label string we want to get an image for.
index: Int offset of the image we want. This will be moduloed by the
available number of images for the label, so it can be arbitrarily large.
image_dir: Root folder string of the subfolders containing the training
images.
category: Name string of set to pull images from - training, testing, or
validation.
Returns:
File system path string to an image that meets the requested parameters.
"""
if label_name not in image_lists:
tf.logging.fatal('Label does not exist %s.', label_name)
label_lists = image_lists[label_name]
if category not in label_lists:
tf.logging.fatal('Category does not exist %s.', category)
category_list = label_lists[category]
if not category_list:
tf.logging.fatal('Label %s has no images in the category %s.',
label_name, category)
mod_index = index % len(category_list)
base_name = category_list[mod_index]
sub_dir = label_lists['dir']
full_path = os.path.join(image_dir, sub_dir, base_name)
return full_path
def get_bottleneck_path(image_lists, label_name, index, bottleneck_dir,
category, module_name):
"""Returns a path to a bottleneck file for a label at the given index.
Args:
image_lists: OrderedDict of training images for each label.
label_name: Label string we want to get an image for.
index: Integer offset of the image we want. This will be moduloed by the
available number of images for the label, so it can be arbitrarily large.
bottleneck_dir: Folder string holding cached files of bottleneck values.
category: Name string of set to pull images from - training, testing, or
validation.
module_name: The name of the image module being used.
Returns:
File system path string to an image that meets the requested parameters.
"""
module_name = (module_name.replace('://', '~') # URL scheme.
.replace('/', '~') # URL and Unix paths.
.replace(':', '~').replace('\\', '~')) # Windows paths.
return get_image_path(image_lists, label_name, index, bottleneck_dir,
category) + '_' + module_name + '.txt'
def create_module_graph(module_spec):
"""Creates a graph and loads Hub Module into it.
Args:
module_spec: the hub.ModuleSpec for the image module being used.
Returns:
graph: the tf.Graph that was created.
bottleneck_tensor: the bottleneck values output by the module.
resized_input_tensor: the input images, resized as expected by the module.
wants_quantization: a boolean, whether the module has been instrumented
with fake quantization ops.
"""
height, width = hub.get_expected_image_size(module_spec)
with tf.Graph().as_default() as graph:
resized_input_tensor = tf.placeholder(tf.float32, [None, height, width, 3])
m = hub.Module(module_spec)
bottleneck_tensor = m(resized_input_tensor)
wants_quantization = any(node.op in FAKE_QUANT_OPS
for node in graph.as_graph_def().node)
return graph, bottleneck_tensor, resized_input_tensor, wants_quantization
def run_bottleneck_on_image(sess, image_data, image_data_tensor,
decoded_image_tensor, resized_input_tensor,
bottleneck_tensor):
"""Runs inference on an image to extract the 'bottleneck' summary layer.
Args:
sess: Current active TensorFlow Session.
image_data: String of raw JPEG data.
image_data_tensor: Input data layer in the graph.
decoded_image_tensor: Output of initial image resizing and preprocessing.
resized_input_tensor: The input node of the recognition graph.
bottleneck_tensor: Layer before the final softmax.
Returns:
Numpy array of bottleneck values.
"""
# First decode the JPEG image, resize it, and rescale the pixel values.
resized_input_values = sess.run(decoded_image_tensor,
{image_data_tensor: image_data})
# Then run it through the recognition network.
bottleneck_values = sess.run(bottleneck_tensor,
{resized_input_tensor: resized_input_values})
bottleneck_values = np.squeeze(bottleneck_values)
return bottleneck_values
def ensure_dir_exists(dir_name):
"""Makes sure the folder exists on disk.
Args:
dir_name: Path string to the folder we want to create.
"""
if not os.path.exists(dir_name):
os.makedirs(dir_name)
def create_bottleneck_file(bottleneck_path, image_lists, label_name, index,
image_dir, category, sess, jpeg_data_tensor,
decoded_image_tensor, resized_input_tensor,
bottleneck_tensor):
"""Create a single bottleneck file."""
tf.logging.info('Creating bottleneck at ' + bottleneck_path)
image_path = get_image_path(image_lists, label_name, index,
image_dir, category)
if not tf.gfile.Exists(image_path):
tf.logging.fatal('File does not exist %s', image_path)
image_data = tf.gfile.FastGFile(image_path, 'rb').read()
try:
bottleneck_values = run_bottleneck_on_image(
sess, image_data, jpeg_data_tensor, decoded_image_tensor,
resized_input_tensor, bottleneck_tensor)
except Exception as e:
raise RuntimeError('Error during processing file %s (%s)' % (image_path,
str(e)))
bottleneck_string = ','.join(str(x) for x in bottleneck_values)
with open(bottleneck_path, 'w') as bottleneck_file:
bottleneck_file.write(bottleneck_string)
def get_or_create_bottleneck(sess, image_lists, label_name, index, image_dir,
category, bottleneck_dir, jpeg_data_tensor,
decoded_image_tensor, resized_input_tensor,
bottleneck_tensor, module_name):
"""Retrieves or calculates bottleneck values for an image.
If a cached version of the bottleneck data exists on-disk, return that,
otherwise calculate the data and save it to disk for future use.
Args:
sess: The current active TensorFlow Session.
image_lists: OrderedDict of training images for each label.
label_name: Label string we want to get an image for.
    index: Integer offset of the image we want. This will be moduloed by the
available number of images for the label, so it can be arbitrarily large.
image_dir: Root folder string of the subfolders containing the training
images.
category: Name string of which set to pull images from - training, testing,
or validation.
bottleneck_dir: Folder string holding cached files of bottleneck values.
jpeg_data_tensor: The tensor to feed loaded jpeg data into.
decoded_image_tensor: The output of decoding and resizing the image.
resized_input_tensor: The input node of the recognition graph.
bottleneck_tensor: The output tensor for the bottleneck values.
module_name: The name of the image module being used.
Returns:
Numpy array of values produced by the bottleneck layer for the image.
"""
label_lists = image_lists[label_name]
sub_dir = label_lists['dir']
sub_dir_path = os.path.join(bottleneck_dir, sub_dir)
ensure_dir_exists(sub_dir_path)
bottleneck_path = get_bottleneck_path(image_lists, label_name, index,
bottleneck_dir, category, module_name)
if not os.path.exists(bottleneck_path):
create_bottleneck_file(bottleneck_path, image_lists, label_name, index,
image_dir, category, sess, jpeg_data_tensor,
decoded_image_tensor, resized_input_tensor,
bottleneck_tensor)
with open(bottleneck_path, 'r') as bottleneck_file:
bottleneck_string = bottleneck_file.read()
did_hit_error = False
try:
bottleneck_values = [float(x) for x in bottleneck_string.split(',')]
except ValueError:
tf.logging.warning('Invalid float found, recreating bottleneck')
did_hit_error = True
if did_hit_error:
create_bottleneck_file(bottleneck_path, image_lists, label_name, index,
image_dir, category, sess, jpeg_data_tensor,
decoded_image_tensor, resized_input_tensor,
bottleneck_tensor)
with open(bottleneck_path, 'r') as bottleneck_file:
bottleneck_string = bottleneck_file.read()
# Allow exceptions to propagate here, since they shouldn't happen after a
# fresh creation
bottleneck_values = [float(x) for x in bottleneck_string.split(',')]
return bottleneck_values
def cache_bottlenecks(sess, image_lists, image_dir, bottleneck_dir,
jpeg_data_tensor, decoded_image_tensor,
resized_input_tensor, bottleneck_tensor, module_name):
"""Ensures all the training, testing, and validation bottlenecks are cached.
Because we're likely to read the same image multiple times (if there are no
distortions applied during training) it can speed things up a lot if we
calculate the bottleneck layer values once for each image during
preprocessing, and then just read those cached values repeatedly during
training. Here we go through all the images we've found, calculate those
values, and save them off.
Args:
sess: The current active TensorFlow Session.
image_lists: OrderedDict of training images for each label.
image_dir: Root folder string of the subfolders containing the training
images.
bottleneck_dir: Folder string holding cached files of bottleneck values.
jpeg_data_tensor: Input tensor for jpeg data from file.
decoded_image_tensor: The output of decoding and resizing the image.
resized_input_tensor: The input node of the recognition graph.
bottleneck_tensor: The penultimate output layer of the graph.
module_name: The name of the image module being used.
Returns:
Nothing.
"""
how_many_bottlenecks = 0
ensure_dir_exists(bottleneck_dir)
for label_name, label_lists in image_lists.items():
for category in ['training', 'testing', 'validation']:
category_list = label_lists[category]
for index, unused_base_name in enumerate(category_list):
get_or_create_bottleneck(
sess, image_lists, label_name, index, image_dir, category,
bottleneck_dir, jpeg_data_tensor, decoded_image_tensor,
resized_input_tensor, bottleneck_tensor, module_name)
how_many_bottlenecks += 1
if how_many_bottlenecks % 100 == 0:
tf.logging.info(
str(how_many_bottlenecks) + ' bottleneck files created.')
def get_random_cached_bottlenecks(sess, image_lists, how_many, category,
bottleneck_dir, image_dir, jpeg_data_tensor,
decoded_image_tensor, resized_input_tensor,
bottleneck_tensor, module_name):
"""Retrieves bottleneck values for cached images.
If no distortions are being applied, this function can retrieve the cached
bottleneck values directly from disk for images. It picks a random set of
images from the specified category.
Args:
sess: Current TensorFlow Session.
image_lists: OrderedDict of training images for each label.
how_many: If positive, a random sample of this size will be chosen.
If negative, all bottlenecks will be retrieved.
category: Name string of which set to pull from - training, testing, or
validation.
bottleneck_dir: Folder string holding cached files of bottleneck values.
image_dir: Root folder string of the subfolders containing the training
images.
jpeg_data_tensor: The layer to feed jpeg image data into.
decoded_image_tensor: The output of decoding and resizing the image.
resized_input_tensor: The input node of the recognition graph.
bottleneck_tensor: The bottleneck output layer of the CNN graph.
module_name: The name of the image module being used.
Returns:
List of bottleneck arrays, their corresponding ground truths, and the
relevant filenames.
"""
class_count = len(image_lists.keys())
bottlenecks = []
ground_truths = []
filenames = []
if how_many >= 0:
# Retrieve a random sample of bottlenecks.
for unused_i in range(how_many):
label_index = random.randrange(class_count)
label_name = list(image_lists.keys())[label_index]
image_index = random.randrange(MAX_NUM_IMAGES_PER_CLASS + 1)
image_name = get_image_path(image_lists, label_name, image_index,
image_dir, category)
bottleneck = get_or_create_bottleneck(
sess, image_lists, label_name, image_index, image_dir, category,
bottleneck_dir, jpeg_data_tensor, decoded_image_tensor,
resized_input_tensor, bottleneck_tensor, module_name)
bottlenecks.append(bottleneck)
ground_truths.append(label_index)
filenames.append(image_name)
else:
# Retrieve all bottlenecks.
for label_index, label_name in enumerate(image_lists.keys()):
for image_index, image_name in enumerate(
image_lists[label_name][category]):
image_name = get_image_path(image_lists, label_name, image_index,
image_dir, category)
bottleneck = get_or_create_bottleneck(
sess, image_lists, label_name, image_index, image_dir, category,
bottleneck_dir, jpeg_data_tensor, decoded_image_tensor,
resized_input_tensor, bottleneck_tensor, module_name)
bottlenecks.append(bottleneck)
ground_truths.append(label_index)
filenames.append(image_name)
return bottlenecks, ground_truths, filenames
def get_random_distorted_bottlenecks(
sess, image_lists, how_many, category, image_dir, input_jpeg_tensor,
distorted_image, resized_input_tensor, bottleneck_tensor):
"""Retrieves bottleneck values for training images, after distortions.
If we're training with distortions like crops, scales, or flips, we have to
recalculate the full model for every image, and so we can't use cached
bottleneck values. Instead we find random images for the requested category,
run them through the distortion graph, and then the full graph to get the
bottleneck results for each.
Args:
sess: Current TensorFlow Session.
image_lists: OrderedDict of training images for each label.
how_many: The integer number of bottleneck values to return.
category: Name string of which set of images to fetch - training, testing,
or validation.
image_dir: Root folder string of the subfolders containing the training
images.
input_jpeg_tensor: The input layer we feed the image data to.
distorted_image: The output node of the distortion graph.
resized_input_tensor: The input node of the recognition graph.
bottleneck_tensor: The bottleneck output layer of the CNN graph.
Returns:
List of bottleneck arrays and their corresponding ground truths.
"""
class_count = len(image_lists.keys())
bottlenecks = []
ground_truths = []
for unused_i in range(how_many):
label_index = random.randrange(class_count)
label_name = list(image_lists.keys())[label_index]
image_index = random.randrange(MAX_NUM_IMAGES_PER_CLASS + 1)
image_path = get_image_path(image_lists, label_name, image_index, image_dir,
category)
if not tf.gfile.Exists(image_path):
tf.logging.fatal('File does not exist %s', image_path)
jpeg_data = tf.gfile.FastGFile(image_path, 'rb').read()
# Note that we materialize the distorted_image_data as a numpy array before
    # running inference on the image. This involves two memory copies and
# might be optimized in other implementations.
distorted_image_data = sess.run(distorted_image,
{input_jpeg_tensor: jpeg_data})
bottleneck_values = sess.run(bottleneck_tensor,
{resized_input_tensor: distorted_image_data})
bottleneck_values = np.squeeze(bottleneck_values)
bottlenecks.append(bottleneck_values)
ground_truths.append(label_index)
return bottlenecks, ground_truths
def should_distort_images(flip_left_right, random_crop, random_scale,
random_brightness):
"""Whether any distortions are enabled, from the input flags.
Args:
flip_left_right: Boolean whether to randomly mirror images horizontally.
random_crop: Integer percentage setting the total margin used around the
crop box.
random_scale: Integer percentage of how much to vary the scale by.
random_brightness: Integer range to randomly multiply the pixel values by.
Returns:
Boolean value indicating whether any distortions should be applied.
"""
return (flip_left_right or (random_crop != 0) or (random_scale != 0) or
(random_brightness != 0))
def add_input_distortions(flip_left_right, random_crop, random_scale,
random_brightness, module_spec):
"""Creates the operations to apply the specified distortions.
During training it can help to improve the results if we run the images
through simple distortions like crops, scales, and flips. These reflect the
kind of variations we expect in the real world, and so can help train the
model to cope with natural data more effectively. Here we take the supplied
parameters and construct a network of operations to apply them to an image.
Cropping
~~~~~~~~
Cropping is done by placing a bounding box at a random position in the full
image. The cropping parameter controls the size of that box relative to the
input image. If it's zero, then the box is the same size as the input and no
cropping is performed. If the value is 50%, then the crop box will be half the
width and height of the input. In a diagram it looks like this:
< width >
+---------------------+
| |
| width - crop% |
| < > |
| +------+ |
| | | |
| | | |
| | | |
| +------+ |
| |
| |
+---------------------+
Scaling
~~~~~~~
Scaling is a lot like cropping, except that the bounding box is always
centered and its size varies randomly within the given range. For example if
the scale percentage is zero, then the bounding box is the same size as the
input and no scaling is applied. If it's 50%, then the bounding box will be in
a random range between half the width and height and full size.
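  For example, random_crop=20 gives a margin scale of 1.2, and random_scale=30
  draws a resize scale from [1.0, 1.3], so the image is first resized to
  between 1.2x and 1.56x the input size and then randomly cropped back down to
  the input size.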
Args:
flip_left_right: Boolean whether to randomly mirror images horizontally.
random_crop: Integer percentage setting the total margin used around the
crop box.
random_scale: Integer percentage of how much to vary the scale by.
random_brightness: Integer range to randomly multiply the pixel values by.
module_spec: The hub.ModuleSpec for the image module being used.
Returns:
The jpeg input layer and the distorted result tensor.
"""
input_height, input_width = hub.get_expected_image_size(module_spec)
input_depth = hub.get_num_image_channels(module_spec)
jpeg_data = tf.placeholder(tf.string, name='DistortJPGInput')
decoded_image = tf.image.decode_jpeg(jpeg_data, channels=input_depth)
# Convert from full range of uint8 to range [0,1] of float32.
decoded_image_as_float = tf.image.convert_image_dtype(decoded_image,
tf.float32)
decoded_image_4d = tf.expand_dims(decoded_image_as_float, 0)
margin_scale = 1.0 + (random_crop / 100.0)
resize_scale = 1.0 + (random_scale / 100.0)
margin_scale_value = tf.constant(margin_scale)
resize_scale_value = tf.random_uniform(shape=[],
minval=1.0,
maxval=resize_scale)
scale_value = tf.multiply(margin_scale_value, resize_scale_value)
precrop_width = tf.multiply(scale_value, input_width)
precrop_height = tf.multiply(scale_value, input_height)
precrop_shape = tf.stack([precrop_height, precrop_width])
precrop_shape_as_int = tf.cast(precrop_shape, dtype=tf.int32)
precropped_image = tf.image.resize_bilinear(decoded_image_4d,
precrop_shape_as_int)
precropped_image_3d = tf.squeeze(precropped_image, axis=[0])
cropped_image = tf.random_crop(precropped_image_3d,
[input_height, input_width, input_depth])
if flip_left_right:
flipped_image = tf.image.random_flip_left_right(cropped_image)
else:
flipped_image = cropped_image
brightness_min = 1.0 - (random_brightness / 100.0)
brightness_max = 1.0 + (random_brightness / 100.0)
brightness_value = tf.random_uniform(shape=[],
minval=brightness_min,
maxval=brightness_max)
brightened_image = tf.multiply(flipped_image, brightness_value)
distort_result = tf.expand_dims(brightened_image, 0, name='DistortResult')
return jpeg_data, distort_result
def variable_summaries(var):
"""Attach a lot of summaries to a Tensor (for TensorBoard visualization)."""
with tf.name_scope('summaries'):
mean = tf.reduce_mean(var)
tf.summary.scalar('mean', mean)
with tf.name_scope('stddev'):
stddev = tf.sqrt(tf.reduce_mean(tf.square(var - mean)))
tf.summary.scalar('stddev', stddev)
tf.summary.scalar('max', tf.reduce_max(var))
tf.summary.scalar('min', tf.reduce_min(var))
tf.summary.histogram('histogram', var)
def add_final_retrain_ops(class_count, final_tensor_name, bottleneck_tensor,
quantize_layer, is_training):
"""Adds a new softmax and fully-connected layer for training and eval.
We need to retrain the top layer to identify our new classes, so this function
adds the right operations to the graph, along with some variables to hold the
weights, and then sets up all the gradients for the backward pass.
The set up for the softmax and fully-connected layers is based on:
https://www.tensorflow.org/tutorials/mnist/beginners/index.html
Args:
class_count: Integer of how many categories of things we're trying to
recognize.
final_tensor_name: Name string for the new final node that produces results.
bottleneck_tensor: The output of the main CNN graph.
quantize_layer: Boolean, specifying whether the newly added layer should be
instrumented for quantization with TF-Lite.
    is_training: Boolean, specifying whether the newly added layer is for training
or eval.
Returns:
The tensors for the training and cross entropy results, and tensors for the
bottleneck input and ground truth input.
"""
batch_size, bottleneck_tensor_size = bottleneck_tensor.get_shape().as_list()
assert batch_size is None, 'We want to work with arbitrary batch size.'
with tf.name_scope('input'):
bottleneck_input = tf.placeholder_with_default(
bottleneck_tensor,
shape=[batch_size, bottleneck_tensor_size],
name='BottleneckInputPlaceholder')
ground_truth_input = tf.placeholder(
tf.int64, [batch_size], name='GroundTruthInput')
# Organizing the following ops so they are easier to see in TensorBoard.
layer_name = 'final_retrain_ops'
with tf.name_scope(layer_name):
with tf.name_scope('weights'):
initial_value = tf.truncated_normal(
[bottleneck_tensor_size, class_count], stddev=0.001)
layer_weights = tf.Variable(initial_value, name='final_weights')
variable_summaries(layer_weights)
with tf.name_scope('biases'):
layer_biases = tf.Variable(tf.zeros([class_count]), name='final_biases')
variable_summaries(layer_biases)
with tf.name_scope('Wx_plus_b'):
logits = tf.matmul(bottleneck_input, layer_weights) + layer_biases
tf.summary.histogram('pre_activations', logits)
final_tensor = tf.nn.softmax(logits, name=final_tensor_name)
# The tf.contrib.quantize functions rewrite the graph in place for
# quantization. The imported model graph has already been rewritten, so upon
# calling these rewrites, only the newly added final layer will be
# transformed.
if quantize_layer:
if is_training:
tf.contrib.quantize.create_training_graph()
else:
tf.contrib.quantize.create_eval_graph()
tf.summary.histogram('activations', final_tensor)
# If this is an eval graph, we don't need to add loss ops or an optimizer.
if not is_training:
return None, None, bottleneck_input, ground_truth_input, final_tensor
with tf.name_scope('cross_entropy'):
cross_entropy_mean = tf.losses.sparse_softmax_cross_entropy(
labels=ground_truth_input, logits=logits)
tf.summary.scalar('cross_entropy', cross_entropy_mean)
with tf.name_scope('train'):
optimizer = tf.train.GradientDescentOptimizer(FLAGS.learning_rate)
train_step = optimizer.minimize(cross_entropy_mean)
return (train_step, cross_entropy_mean, bottleneck_input, ground_truth_input,
final_tensor)
def add_evaluation_step(result_tensor, ground_truth_tensor):
"""Inserts the operations we need to evaluate the accuracy of our results.
Args:
result_tensor: The new final node that produces results.
ground_truth_tensor: The node we feed ground truth data
into.
Returns:
Tuple of (evaluation step, prediction).
"""
with tf.name_scope('accuracy'):
with tf.name_scope('correct_prediction'):
prediction = tf.argmax(result_tensor, 1)
correct_prediction = tf.equal(prediction, ground_truth_tensor)
with tf.name_scope('accuracy'):
evaluation_step = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
tf.summary.scalar('accuracy', evaluation_step)
return evaluation_step, prediction
def run_final_eval(train_session, module_spec, class_count, image_lists,
jpeg_data_tensor, decoded_image_tensor,
resized_image_tensor, bottleneck_tensor):
"""Runs a final evaluation on an eval graph using the test data set.
Args:
train_session: Session for the train graph with the tensors below.
module_spec: The hub.ModuleSpec for the image module being used.
class_count: Number of classes
image_lists: OrderedDict of training images for each label.
jpeg_data_tensor: The layer to feed jpeg image data into.
decoded_image_tensor: The output of decoding and resizing the image.
resized_image_tensor: The input node of the recognition graph.
bottleneck_tensor: The bottleneck output layer of the CNN graph.
"""
test_bottlenecks, test_ground_truth, test_filenames = (
get_random_cached_bottlenecks(train_session, image_lists,
FLAGS.test_batch_size,
'testing', FLAGS.bottleneck_dir,
FLAGS.image_dir, jpeg_data_tensor,
decoded_image_tensor, resized_image_tensor,
bottleneck_tensor, FLAGS.tfhub_module))
(eval_session, _, bottleneck_input, ground_truth_input, evaluation_step,
prediction) = build_eval_session(module_spec, class_count)
test_accuracy, predictions = eval_session.run(
[evaluation_step, prediction],
feed_dict={
bottleneck_input: test_bottlenecks,
ground_truth_input: test_ground_truth
})
tf.logging.info('Final test accuracy = %.1f%% (N=%d)' %
(test_accuracy * 100, len(test_bottlenecks)))
if FLAGS.print_misclassified_test_images:
tf.logging.info('=== MISCLASSIFIED TEST IMAGES ===')
for i, test_filename in enumerate(test_filenames):
if predictions[i] != test_ground_truth[i]:
tf.logging.info('%70s %s' % (test_filename,
list(image_lists.keys())[predictions[i]]))
def build_eval_session(module_spec, class_count):
"""Builds an restored eval session without train operations for exporting.
Args:
module_spec: The hub.ModuleSpec for the image module being used.
class_count: Number of classes
Returns:
Eval session containing the restored eval graph.
The bottleneck input, ground truth, eval step, and prediction tensors.
"""
# If quantized, we need to create the correct eval graph for exporting.
eval_graph, bottleneck_tensor, resized_input_tensor, wants_quantization = (
create_module_graph(module_spec))
eval_sess = tf.Session(graph=eval_graph)
with eval_graph.as_default():
# Add the new layer for exporting.
(_, _, bottleneck_input,
ground_truth_input, final_tensor) = add_final_retrain_ops(
class_count, FLAGS.final_tensor_name, bottleneck_tensor,
wants_quantization, is_training=False)
# Now we need to restore the values from the training graph to the eval
# graph.
tf.train.Saver().restore(eval_sess, CHECKPOINT_NAME)
evaluation_step, prediction = add_evaluation_step(final_tensor,
ground_truth_input)
return (eval_sess, resized_input_tensor, bottleneck_input, ground_truth_input,
evaluation_step, prediction)
def save_graph_to_file(graph, graph_file_name, module_spec, class_count):
"""Saves an graph to file, creating a valid quantized one if necessary."""
sess, _, _, _, _, _ = build_eval_session(module_spec, class_count)
graph = sess.graph
output_graph_def = tf.graph_util.convert_variables_to_constants(
sess, graph.as_graph_def(), [FLAGS.final_tensor_name])
with tf.gfile.FastGFile(graph_file_name, 'wb') as f:
f.write(output_graph_def.SerializeToString())
def prepare_file_system():
# Set up the directory we'll write summaries to for TensorBoard
if tf.gfile.Exists(FLAGS.summaries_dir):
tf.gfile.DeleteRecursively(FLAGS.summaries_dir)
tf.gfile.MakeDirs(FLAGS.summaries_dir)
if FLAGS.intermediate_store_frequency > 0:
ensure_dir_exists(FLAGS.intermediate_output_graphs_dir)
return
def add_png_decoding(module_spec):
  """Adds operations that perform PNG decoding and resizing to the graph.
  Args:
    module_spec: The hub.ModuleSpec for the image module being used.
  Returns:
    Tensors for the node to feed PNG data into, and the output of the
    preprocessing steps.
  """
  # PNG counterpart of add_jpeg_decoding below.
  input_height, input_width = hub.get_expected_image_size(module_spec)
  input_depth = hub.get_num_image_channels(module_spec)
  png_data = tf.placeholder(tf.string, name='DecodePNGInput')
  decoded_image = tf.image.decode_png(png_data, channels=input_depth)
  decoded_image_as_float = tf.image.convert_image_dtype(decoded_image, tf.float32)
  decoded_image_4d = tf.expand_dims(decoded_image_as_float, 0)
  resize_shape_as_int = tf.cast(tf.stack([input_height, input_width]), tf.int32)
  resized_image = tf.image.resize_bilinear(decoded_image_4d, resize_shape_as_int)
  return png_data, resized_image
def add_jpeg_decoding(module_spec):
"""Adds operations that perform JPEG decoding and resizing to the graph..
Args:
module_spec: The hub.ModuleSpec for the image module being used.
Returns:
Tensors for the node to feed JPEG data into, and the output of the
preprocessing steps.
"""
input_height, input_width = hub.get_expected_image_size(module_spec)
input_depth = hub.get_num_image_channels(module_spec)
jpeg_data = tf.placeholder(tf.string, name='DecodeJPGInput')
decoded_image = tf.image.decode_jpeg(jpeg_data, channels=input_depth)
# Convert from full range of uint8 to range [0,1] of float32.
decoded_image_as_float = tf.image.convert_image_dtype(decoded_image,
tf.float32)
decoded_image_4d = tf.expand_dims(decoded_image_as_float, 0)
resize_shape = tf.stack([input_height, input_width])
resize_shape_as_int = tf.cast(resize_shape, dtype=tf.int32)
resized_image = tf.image.resize_bilinear(decoded_image_4d,
resize_shape_as_int)
return jpeg_data, resized_image
def export_model(module_spec, class_count, saved_model_dir):
"""Exports model for serving.
Args:
module_spec: The hub.ModuleSpec for the image module being used.
class_count: The number of classes.
saved_model_dir: Directory in which to save exported model and variables.
"""
# The SavedModel should hold the eval graph.
sess, in_image, _, _, _, _ = build_eval_session(module_spec, class_count)
graph = sess.graph
with graph.as_default():
inputs = {'image': tf.saved_model.utils.build_tensor_info(in_image)}
out_classes = sess.graph.get_tensor_by_name('final_result:0')
outputs = {
'prediction': tf.saved_model.utils.build_tensor_info(out_classes)
}
signature = tf.saved_model.signature_def_utils.build_signature_def(
inputs=inputs,
outputs=outputs,
method_name=tf.saved_model.signature_constants.PREDICT_METHOD_NAME)
legacy_init_op = tf.group(tf.tables_initializer(), name='legacy_init_op')
# Save out the SavedModel.
builder = tf.saved_model.builder.SavedModelBuilder(saved_model_dir)
builder.add_meta_graph_and_variables(
sess, [tf.saved_model.tag_constants.SERVING],
signature_def_map={
tf.saved_model.signature_constants.
DEFAULT_SERVING_SIGNATURE_DEF_KEY:
signature
},
legacy_init_op=legacy_init_op)
builder.save()
def main(_):
# Needed to make sure the logging output is visible.
# See https://github.com/tensorflow/tensorflow/issues/3047
tf.logging.set_verbosity(tf.logging.INFO)
if not FLAGS.image_dir:
tf.logging.error('Must set flag --image_dir.')
return -1
# Prepare necessary directories that can be used during training
prepare_file_system()
# Look at the folder structure, and create lists of all the images.
image_lists = create_image_lists(FLAGS.image_dir, FLAGS.testing_percentage,
FLAGS.validation_percentage)
class_count = len(image_lists.keys())
if class_count == 0:
tf.logging.error('No valid folders of images found at ' + FLAGS.image_dir)
return -1
if class_count == 1:
tf.logging.error('Only one valid folder of images found at ' +
FLAGS.image_dir +
' - multiple classes are needed for classification.')
return -1
# See if the command-line flags mean we're applying any distortions.
do_distort_images = should_distort_images(
FLAGS.flip_left_right, FLAGS.random_crop, FLAGS.random_scale,
FLAGS.random_brightness)
# Set up the pre-trained graph.
module_spec = hub.load_module_spec(FLAGS.tfhub_module)
graph, bottleneck_tensor, resized_image_tensor, wants_quantization = (
create_module_graph(module_spec))
# Add the new layer that we'll be training.
with graph.as_default():
(train_step, cross_entropy, bottleneck_input,
ground_truth_input, final_tensor) = add_final_retrain_ops(
class_count, FLAGS.final_tensor_name, bottleneck_tensor,
wants_quantization, is_training=True)
with tf.Session(graph=graph) as sess:
# Initialize all weights: for the module to their pretrained values,
# and for the newly added retraining layer to random initial values.
init = tf.global_variables_initializer()
sess.run(init)
# Set up the image decoding sub-graph.
jpeg_data_tensor, decoded_image_tensor = add_jpeg_decoding(module_spec)
if do_distort_images:
# We will be applying distortions, so set up the operations we'll need.
(distorted_jpeg_data_tensor,
distorted_image_tensor) = add_input_distortions(
FLAGS.flip_left_right, FLAGS.random_crop, FLAGS.random_scale,
FLAGS.random_brightness, module_spec)
else:
# We'll make sure we've calculated the 'bottleneck' image summaries and
# cached them on disk.
cache_bottlenecks(sess, image_lists, FLAGS.image_dir,
FLAGS.bottleneck_dir, jpeg_data_tensor,
decoded_image_tensor, resized_image_tensor,
bottleneck_tensor, FLAGS.tfhub_module)
# Create the operations we need to evaluate the accuracy of our new layer.
evaluation_step, _ = add_evaluation_step(final_tensor, ground_truth_input)
# Merge all the summaries and write them out to the summaries_dir
merged = tf.summary.merge_all()
train_writer = tf.summary.FileWriter(FLAGS.summaries_dir + '/train',
sess.graph)
validation_writer = tf.summary.FileWriter(
FLAGS.summaries_dir + '/validation')
# Create a train saver that is used to restore values into an eval graph
# when exporting models.
train_saver = tf.train.Saver()
# Run the training for as many cycles as requested on the command line.
for i in range(FLAGS.how_many_training_steps):
# Get a batch of input bottleneck values, either calculated fresh every
# time with distortions applied, or from the cache stored on disk.
if do_distort_images:
(train_bottlenecks,
train_ground_truth) = get_random_distorted_bottlenecks(
sess, image_lists, FLAGS.train_batch_size, 'training',
FLAGS.image_dir, distorted_jpeg_data_tensor,
distorted_image_tensor, resized_image_tensor, bottleneck_tensor)
else:
(train_bottlenecks,
train_ground_truth, _) = get_random_cached_bottlenecks(
sess, image_lists, FLAGS.train_batch_size, 'training',
FLAGS.bottleneck_dir, FLAGS.image_dir, jpeg_data_tensor,
decoded_image_tensor, resized_image_tensor, bottleneck_tensor,
FLAGS.tfhub_module)
# Feed the bottlenecks and ground truth into the graph, and run a training
# step. Capture training summaries for TensorBoard with the `merged` op.
train_summary, _ = sess.run(
[merged, train_step],
feed_dict={bottleneck_input: train_bottlenecks,
ground_truth_input: train_ground_truth})
train_writer.add_summary(train_summary, i)
# Every so often, print out how well the graph is training.
is_last_step = (i + 1 == FLAGS.how_many_training_steps)
if (i % FLAGS.eval_step_interval) == 0 or is_last_step:
train_accuracy, cross_entropy_value = sess.run(
[evaluation_step, cross_entropy],
feed_dict={bottleneck_input: train_bottlenecks,
ground_truth_input: train_ground_truth})
tf.logging.info('%s: Step %d: Train accuracy = %.1f%%' %
(datetime.now(), i, train_accuracy * 100))
tf.logging.info('%s: Step %d: Cross entropy = %f' %
(datetime.now(), i, cross_entropy_value))
# TODO: Make this use an eval graph, to avoid quantization
# moving averages being updated by the validation set, though in
        # practice this makes a negligible difference.
validation_bottlenecks, validation_ground_truth, _ = (
get_random_cached_bottlenecks(
sess, image_lists, FLAGS.validation_batch_size, 'validation',
FLAGS.bottleneck_dir, FLAGS.image_dir, jpeg_data_tensor,
decoded_image_tensor, resized_image_tensor, bottleneck_tensor,
FLAGS.tfhub_module))
# Run a validation step and capture training summaries for TensorBoard
# with the `merged` op.
validation_summary, validation_accuracy = sess.run(
[merged, evaluation_step],
feed_dict={bottleneck_input: validation_bottlenecks,
ground_truth_input: validation_ground_truth})
validation_writer.add_summary(validation_summary, i)
tf.logging.info('%s: Step %d: Validation accuracy = %.1f%% (N=%d)' %
(datetime.now(), i, validation_accuracy * 100,
len(validation_bottlenecks)))
# Store intermediate results
intermediate_frequency = FLAGS.intermediate_store_frequency
if (intermediate_frequency > 0 and (i % intermediate_frequency == 0)
and i > 0):
# If we want to do an intermediate save, save a checkpoint of the train
# graph, to restore into the eval graph.
train_saver.save(sess, CHECKPOINT_NAME)
intermediate_file_name = (FLAGS.intermediate_output_graphs_dir +
'intermediate_' + str(i) + '.pb')
        tf.logging.info('Save intermediate result to: ' +
intermediate_file_name)
save_graph_to_file(graph, intermediate_file_name, module_spec,
class_count)
# After training is complete, force one last save of the train checkpoint.
train_saver.save(sess, CHECKPOINT_NAME)
# We've completed all our training, so run a final test evaluation on
# some new images we haven't used before.
run_final_eval(sess, module_spec, class_count, image_lists,
jpeg_data_tensor, decoded_image_tensor, resized_image_tensor,
bottleneck_tensor)
# Write out the trained graph and labels with the weights stored as
# constants.
    tf.logging.info('Save final result to: ' + FLAGS.output_graph)
if wants_quantization:
tf.logging.info('The model is instrumented for quantization with TF-Lite')
save_graph_to_file(graph, FLAGS.output_graph, module_spec, class_count)
with tf.gfile.FastGFile(FLAGS.output_labels, 'w') as f:
f.write('\n'.join(image_lists.keys()) + '\n')
if FLAGS.saved_model_dir:
export_model(module_spec, class_count, FLAGS.saved_model_dir)
if __name__ == '__main__':
parser = argparse.ArgumentParser()
parser.add_argument(
'--image_dir',
type=str,
default='',
help='Path to folders of labeled images.'
)
parser.add_argument(
'--output_graph',
type=str,
default='/tmp/output_graph.pb',
help='Where to save the trained graph.'
)
parser.add_argument(
'--intermediate_output_graphs_dir',
type=str,
default='/tmp/intermediate_graph/',
help='Where to save the intermediate graphs.'
)
parser.add_argument(
'--intermediate_store_frequency',
type=int,
default=0,
help="""\
How many steps to store intermediate graph. If "0" then will not
store.\
"""
)
parser.add_argument(
'--output_labels',
type=str,
default='/tmp/output_labels.txt',
help='Where to save the trained graph\'s labels.'
)
parser.add_argument(
'--summaries_dir',
type=str,
default='/tmp/retrain_logs',
help='Where to save summary logs for TensorBoard.'
)
parser.add_argument(
'--how_many_training_steps',
type=int,
default=4000,
help='How many training steps to run before ending.'
)
parser.add_argument(
'--learning_rate',
type=float,
default=0.01,
help='How large a learning rate to use when training.'
)
parser.add_argument(
'--testing_percentage',
type=int,
default=10,
help='What percentage of images to use as a test set.'
)
parser.add_argument(
'--validation_percentage',
type=int,
default=10,
help='What percentage of images to use as a validation set.'
)
parser.add_argument(
'--eval_step_interval',
type=int,
default=10,
help='How often to evaluate the training results.'
)
parser.add_argument(
'--train_batch_size',
type=int,
default=100,
help='How many images to train on at a time.'
)
parser.add_argument(
'--test_batch_size',
type=int,
default=-1,
help="""\
How many images to test on. This test set is only used once, to evaluate
the final accuracy of the model after training completes.
A value of -1 causes the entire test set to be used, which leads to more
stable results across runs.\
"""
)
parser.add_argument(
'--validation_batch_size',
type=int,
default=100,
help="""\
How many images to use in an evaluation batch. This validation set is
used much more often than the test set, and is an early indicator of how
accurate the model is during training.
A value of -1 causes the entire validation set to be used, which leads to
more stable results across training iterations, but may be slower on large
training sets.\
"""
)
parser.add_argument(
'--print_misclassified_test_images',
default=False,
help="""\
Whether to print out a list of all misclassified test images.\
""",
action='store_true'
)
parser.add_argument(
'--bottleneck_dir',
type=str,
default='/tmp/bottleneck',
help='Path to cache bottleneck layer values as files.'
)
parser.add_argument(
'--final_tensor_name',
type=str,
default='final_result',
help="""\
The name of the output classification layer in the retrained graph.\
"""
)
parser.add_argument(
'--flip_left_right',
default=False,
help="""\
Whether to randomly flip half of the training images horizontally.\
""",
action='store_true'
)
parser.add_argument(
'--random_crop',
type=int,
default=0,
help="""\
A percentage determining how much of a margin to randomly crop off the
training images.\
"""
)
parser.add_argument(
'--random_scale',
type=int,
default=0,
help="""\
A percentage determining how much to randomly scale up the size of the
training images by.\
"""
)
parser.add_argument(
'--random_brightness',
type=int,
default=0,
help="""\
A percentage determining how much to randomly multiply the training image
input pixels up or down by.\
"""
)
parser.add_argument(
'--tfhub_module',
type=str,
default=(
'https://tfhub.dev/google/imagenet/inception_v3/feature_vector/1'),
help="""\
Which TensorFlow Hub module to use.
See https://github.com/tensorflow/hub/blob/master/docs/modules/image.md
for some publicly available ones.\
""")
parser.add_argument(
'--saved_model_dir',
type=str,
default='',
help='Where to save the exported graph.')
FLAGS, unparsed = parser.parse_known_args()
tf.app.run(main=main, argv=[sys.argv[0]] + unparsed)
avg_line_length: 43.615385 | max_line_length: 152 | alphanum_fraction: 0.699241

hexsha: 4a124e93b906bfc4ef4b2ab0066fe063f200d67b | size: 4,049 | ext: py | lang: Python
repo_path: neutron/conf/quota.py | repo_name: 10088/neutron | repo_head_hexsha: 38cdfb88558887b830514b4ef390fd535c11942c | licenses: ["Apache-2.0"]
(same values repeat across the max_stars, max_issues, and max_forks column groups; all counts and event datetimes are null)
content:
# Copyright 2016 Intel Corporation.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo_config import cfg
from neutron._i18n import _
QUOTA_DB_DIRECTORY = 'neutron.db.quota.'
QUOTA_DB_DRIVER_LEGACY = QUOTA_DB_DIRECTORY + 'driver.DbQuotaDriver'
QUOTA_DB_DRIVER_NO_LOCK = (QUOTA_DB_DIRECTORY +
'driver_nolock.DbQuotaNoLockDriver')
QUOTA_DB_DRIVER_NULL = QUOTA_DB_DIRECTORY + 'driver_null.DbQuotaDriverNull'
QUOTA_DB_DRIVER = QUOTA_DB_DRIVER_NO_LOCK
QUOTAS_CFG_GROUP = 'QUOTAS'
DEFAULT_QUOTA = -1
DEFAULT_QUOTA_NETWORK = 100
DEFAULT_QUOTA_SUBNET = 100
DEFAULT_QUOTA_PORT = 500
DEFAULT_QUOTA_SG = 10
DEFAULT_QUOTA_SG_RULE = 100
DEFAULT_QUOTA_ROUTER = 10
DEFAULT_QUOTA_FIP = 50
DEFAULT_QUOTA_RBAC = 10
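# Illustrative neutron.conf snippet (assumed values, not from this module)
# showing how these options appear once registered under the QUOTAS group:
#
#     [QUOTAS]
#     quota_driver = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver
#     quota_network = 200
#     quota_port = -1  # a negative value means unlimited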
# quota_opts from neutron/quota/__init__.py
# renamed quota_opts to core_quota_opts
core_quota_opts = [
cfg.IntOpt('default_quota',
default=DEFAULT_QUOTA,
help=_('Default number of resource allowed per tenant. '
'A negative value means unlimited.')),
cfg.IntOpt('quota_network',
default=DEFAULT_QUOTA_NETWORK,
help=_('Number of networks allowed per tenant. '
'A negative value means unlimited.')),
cfg.IntOpt('quota_subnet',
default=DEFAULT_QUOTA_SUBNET,
               help=_('Number of subnets allowed per tenant. '
'A negative value means unlimited.')),
cfg.IntOpt('quota_port',
default=DEFAULT_QUOTA_PORT,
help=_('Number of ports allowed per tenant. '
'A negative value means unlimited.')),
cfg.StrOpt('quota_driver',
default=QUOTA_DB_DRIVER,
help=_('Default driver to use for quota checks.')),
cfg.BoolOpt('track_quota_usage',
default=True,
                help=_('Keep track in the database of current resource '
'quota usage. Plugins which do not leverage the '
'neutron database should set this flag to False.')),
]
# security_group_quota_opts from neutron/extensions/securitygroup.py
security_group_quota_opts = [
cfg.IntOpt('quota_security_group',
default=DEFAULT_QUOTA_SG,
help=_('Number of security groups allowed per tenant. '
'A negative value means unlimited.')),
cfg.IntOpt('quota_security_group_rule',
default=DEFAULT_QUOTA_SG_RULE,
help=_('Number of security rules allowed per tenant. '
'A negative value means unlimited.')),
]
# l3_quota_opts from neutron/extensions/l3.py
l3_quota_opts = [
cfg.IntOpt('quota_router',
default=DEFAULT_QUOTA_ROUTER,
help=_('Number of routers allowed per tenant. '
'A negative value means unlimited.')),
cfg.IntOpt('quota_floatingip',
default=DEFAULT_QUOTA_FIP,
help=_('Number of floating IPs allowed per tenant. '
'A negative value means unlimited.')),
]
# rbac_quota_opts from neutron/extensions/rbac.py
rbac_quota_opts = [
cfg.IntOpt('quota_rbac_policy', default=DEFAULT_QUOTA_RBAC,
deprecated_name='quota_rbac_entry',
help=_('Default number of RBAC entries allowed per tenant. '
'A negative value means unlimited.'))
]
def register_quota_opts(opts, cfg=cfg.CONF):
cfg.register_opts(opts, QUOTAS_CFG_GROUP)
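# Illustrative usage sketch (assumed caller code): registering the core
# options and reading one back through oslo.config:
#
#     from neutron.conf import quota as quota_conf
#     quota_conf.register_quota_opts(quota_conf.core_quota_opts)
#     max_networks = cfg.CONF.QUOTAS.quota_network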
avg_line_length: 38.198113 | max_line_length: 78 | alphanum_fraction: 0.65794

hexsha: 4a124ea7ae0ae6857c7715b0d2e0a69b2ba29d50 | size: 6,823 | ext: py | lang: Python
repo_path: analysis-tools/Python3/Kwik.py | repo_name: RynzzZ/DCN | repo_head_hexsha: 0dfacca9ac984f4429c9d4b30f45b5ff44b7a4e7 | licenses: ["MIT"]
max_stars_count: 1 | stars_event: 2016-09-09T02:51:05.000Z to 2016-09-09T02:51:05.000Z
max_issues_count: null | issues_event: null
max_forks_count: 1 | forks_event: 2016-09-20T06:21:18.000Z to 2016-09-20T06:21:18.000Z
content:
# -*- coding: utf-8 -*-
"""
Created on Wed Oct 8 12:05:54 2014
@author: Josh Siegle
Loads .kwd, .kwe, .kwik and .kwx files
Examples:
# load recording dataset 0
Raw = Kwik.load('experiment1_100.raw.kwd')
# load a specific dataset
Raw = Kwik.load('experiment1_100.raw.kwd', 7)
# load all datasets
Raw = Kwik.load('experiment1_100.raw.kwd', 'all')
# load spikes and events
Events = Kwik.load('experiment1.kwe')
Spks = Kwik.load('experiment1.kwx')
# load all files in a folder:
Raw, Events, Spks, Files = load_all_files(folder)
"""
import glob
import h5py
import numpy as np
def load(filename, dataset=0):
f = h5py.File(filename, 'r')
if filename[-4:] == '.kwd':
data = {}
if dataset == 'all':
data['info'] = {Rec: f['recordings'][Rec].attrs
for Rec in f['recordings'].keys()}
data['data'] = {Rec: f['recordings'][Rec]['data']
for Rec in f['recordings'].keys()}
R = list(f['recordings'])[0]
if 'channel_bit_volts' in f['recordings'][R]\
['application_data'].keys():
data['channel_bit_volts'] = {Rec: f['recordings'][Rec]\
['application_data']\
['channel_bit_volts']
for Rec in f['recordings'].keys()}
else:
# Old OE versions do not have channel_bit_volts info.
# Assuming bit volt = 0.195 (Intan headstages).
# Keep in mind that analog inputs have a different value!
                # In our system it is 0.00015258789
data['channel_bit_volts'] = {Rec: [0.195]*len(
data['data'][Rec][1, :]
)
for Rec in f['recordings'].keys()}
data['timestamps'] = {Rec: ((
np.arange(0,data['data'][Rec].shape[0])
+ data['info'][Rec]['start_time'])
/ data['info'][Rec]['sample_rate'])
for Rec in f['recordings']}
else:
data['info'] = f['recordings'][str(dataset)].attrs
data['channel_bit_volts'] = f['recordings'][str(dataset)]\
['application_data']\
['channel_bit_volts']
data['data'] = f['recordings'][str(dataset)]['data']
data['timestamps'] = ((np.arange(0,data['data'].shape[0])
+ data['info']['start_time'])
/ data['info']['sample_rate'])
return(data)
    elif filename[-4:] == '.kwe' or filename[-5:] == '.kwik':
        # .kwe and .kwik files share the same event layout.
        data = {}
        data['Messages'] = f['event_types']['Messages']['events']
        data['TTLs'] = f['event_types']['TTL']['events']
        return(data)
elif filename[-4:] == '.kwx':
data = f['channel_groups']
return(data)
else:
print('Supported files: .kwd, .kwe, .kwik, .kwx')
def load_all_files(folder, dataset='all'):
"""
Load kwd, kwe, kwik and/or kwx files in a folder.
Returns:
Raw: dict containing info, timestamps and raw data from one or all
datasets
Events: dict containing messages and TTLs info
Spks: dict containing spike info
"""
FilesList = glob.glob(folder+'/*'); FilesList.sort()
Raw, Events, Spks, Files = {}, {}, {}, {}
for File in FilesList:
if '.kwd' in File:
try:
Raw[File[-11:-8]] = load(File, dataset)
Files[File[-11:-8]+'_kwd'] = File
except OSError:
print('File', File, "is corrupted :'(")
elif '.kwe' in File:
try:
Events = load(File)
Files['kwe'] = File
except OSError:
print('File ', File, " is corrupted :'(")
elif '.kwik' in File:
try:
Events = load(File)
Files['kwik'] = File
except OSError:
print('File ', File, " is corrupted :'(")
elif '.kwx' in File:
try:
Spks = load(File)
Files['kwx'] = File
except OSError:
print('File ', File, " is corrupted :'(")
Spks = []
return(Raw, Events, Spks, Files)
def convert(filename, filetype='dat', dataset=0):
f = h5py.File(filename, 'r')
fnameout = filename[:-3] + filetype
if filetype == 'dat':
data = f['recordings'][str(dataset)]['data'][:,:]
data.tofile(fnameout)
def write(filename, dataset=0, bit_depth=1.0, sample_rate=25000.0):
f = h5py.File(filename, 'w-')
f.attrs['kwik_version'] = 2
grp = f.create_group("/recordings/0")
dset = grp.create_dataset("data", dataset.shape, dtype='i16')
dset[:,:] = dataset
grp.attrs['start_time'] = 0.0
grp.attrs['start_sample'] = 0
grp.attrs['sample_rate'] = sample_rate
grp.attrs['bit_depth'] = bit_depth
f.close()
def get_sample_rate(f):
return f['recordings']['0'].attrs['sample_rate']
def get_edge_times(f, TTLchan, rising=True):
events_for_chan = np.where(np.squeeze(f['event_types']['TTL']['events']['user_data']['event_channels']) == TTLchan)
edges = np.where(np.squeeze(f['event_types']['TTL']['events']['user_data']['eventID']) == 1*rising)
edges_for_chan = np.intersect1d(events_for_chan, edges)
edge_samples = np.squeeze(f['event_types']['TTL']['events']['time_samples'][:])[edges_for_chan]
edge_times = edge_samples / get_sample_rate(f)
return edge_times
def get_rising_edge_times(filename, TTLchan):
f = h5py.File(filename, 'r')
return get_edge_times(f, TTLchan, True)
def get_falling_edge_times(filename, TTLchan):
f = h5py.File(filename, 'r')
return get_edge_times(f, TTLchan, False)
def get_experiment_start_time(filename):
f = h5py.File(filename, 'r')
return f['event_types']['Messages']['events']['time_samples'][1]/ get_sample_rate(f)
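# A minimal usage sketch (an assumption, not part of the original module);
# the file name and TTL channel below are placeholders.
if __name__ == '__main__':
    Rising = get_rising_edge_times('experiment1.kwik', 0)
    Falling = get_falling_edge_times('experiment1.kwik', 0)
    print(len(Rising), 'rising and', len(Falling), 'falling edges on channel 0')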
| 32.032864
| 119
| 0.487909
|
4a124eecf58174226269caac104f0e73af6258a7
| 1,123
|
py
|
Python
|
WideAndDeep/TreeUtil.py
|
JYLFamily/Porto_Seguro_Safe_Driver_Prediction
|
13dfb8073cbd2a3c06a34f59b7dc446bc25bc937
|
[
"Apache-2.0"
] | 1
|
2019-05-20T06:32:27.000Z
|
2019-05-20T06:32:27.000Z
|
WideAndDeep/TreeUtil.py
|
JYLFamily/Porto_Seguro_Safe_Driver_Prediction
|
13dfb8073cbd2a3c06a34f59b7dc446bc25bc937
|
[
"Apache-2.0"
] | null | null | null |
WideAndDeep/TreeUtil.py
|
JYLFamily/Porto_Seguro_Safe_Driver_Prediction
|
13dfb8073cbd2a3c06a34f59b7dc446bc25bc937
|
[
"Apache-2.0"
] | null | null | null |
# coding:utf-8
from sklearn.model_selection import cross_val_score
from sklearn.ensemble import RandomForestClassifier
from bayes_opt import BayesianOptimization
def rf_cv(min_samples_leaf, n_estimators, feature, target):
rf = RandomForestClassifier(
min_samples_leaf=min_samples_leaf,
n_estimators=n_estimators,
n_jobs=2,
random_state=7
)
val = cross_val_score(rf, feature, target, scoring="roc_auc", cv=3)
return val.mean()
def optimize_rf(feature, target):
def rf_crossval(min_samples_leaf, n_estimators):
return rf_cv(
            min_samples_leaf=min(max(min_samples_leaf, 1e-3), 0.5),  # keep inside a valid fraction range
n_estimators=max(int(round(n_estimators)), 1),
feature=feature,
target=target
)
optimizer = BayesianOptimization(
f=rf_crossval,
pbounds={
"min_samples_leaf": (0.005, 0.05),
"n_estimators": (15, 25),
},
random_state=7,
verbose=2
)
optimizer.maximize(init_points=2, n_iter=2, alpha=1e-4)
return optimizer.max["target"], optimizer.max["params"]
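# A minimal driver sketch (not part of the original file); make_classification
# stands in for the real Porto Seguro feature/target arrays.
if __name__ == "__main__":
    from sklearn.datasets import make_classification
    feature, target = make_classification(n_samples=500, n_features=10,
                                          random_state=7)
    best_auc, best_params = optimize_rf(feature, target)
    print("best AUC:", best_auc, "best params:", best_params)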
| 27.390244
| 71
| 0.651825
|
4a124fd047d294edd222b8ad6a764723dc46e169
| 1,875
|
py
|
Python
|
examples/sample_usage.py
|
JasmineBhalla17/ml-pipeline-analyzer
|
9beb94925b77ba4d50007d8f6fcde05d086bb361
|
[
"MIT"
] | 5
|
2022-02-14T19:27:33.000Z
|
2022-03-29T01:38:45.000Z
|
examples/sample_usage.py
|
JasmineBhalla17/ml-pipeline-analyzer
|
9beb94925b77ba4d50007d8f6fcde05d086bb361
|
[
"MIT"
] | null | null | null |
examples/sample_usage.py
|
JasmineBhalla17/ml-pipeline-analyzer
|
9beb94925b77ba4d50007d8f6fcde05d086bb361
|
[
"MIT"
] | 3
|
2022-02-19T20:05:52.000Z
|
2022-03-08T09:31:36.000Z
|
from sklearn.svm import SVC
from sklearn.pipeline import *
from sklearn.preprocessing import *
from sklearn.discriminant_analysis import *
from sklearn.impute import *
from sklearn.feature_selection import RFE
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
import numpy as np
from mlpipeline_analyzer import PipelineDiagram
import joblib
def custom_function(times=1):
    # Returns None deliberately: a None step acts as a passthrough
    # placeholder in the Pipeline below.
    _ = 'Hello' * times
    return None
model = SVC(C=1.0, kernel='poly', degree=5, gamma='scale')
sklearn_pipeline = Pipeline(
    [('custom', custom_function(times=10)), ('labelencoder', LabelEncoder()), # -- Pipe Transformer 1
("imputer", SimpleImputer(missing_values=np.nan, strategy='mean')), # -- Pipe Transformer 2
('scale', FeatureUnion([
('minmax', MinMaxScaler()), # -- Parallel Transformer 3
('standardscaler', StandardScaler()), # -- Parallel Transformer 4
('normalize', Normalizer())])), # -- Parallel Transformer 5
('feature_select', RFE(estimator=model, n_features_to_select=1)), # -- Pipe Transformer 6
('PCA', PCA(n_components=1)), # -- Pipe Transformer 7
("LDA", LinearDiscriminantAnalysis()), # -- Pipe Transformer 8
# ('classifier', model), #-- Pipe Classifier/Predictor 9
('voting', RandomForestClassifier(n_estimators=10))]) # -- Pipe Classifier/Predictor 10
joblib.dump(sklearn_pipeline, 'sample_models/ml_pipeline.pkl')
sklearn_pipeline = joblib.load('sample_models/ml_pipeline.pkl')
a = PipelineDiagram(sklearn_pipeline)
a.show(title='Sklearn ML Pipeline Diagram')
a.show_params(title='Sklearn Machine Learning Parameters Pipeline')
# evalml_pipeline = joblib.load('sample_models/automl_pipeline.pkl')
# b = PipelineDiagram(evalml_pipeline)
# b.show(title='Evalml ML Pipeline Diagram')
# b.show_params(title='Evalml Machine Learning Parameters Pipeline')
| 42.613636
| 101
| 0.725333
|
4a12501bbf64d6733d18ee660bb21c7436926ea2
| 311
|
py
|
Python
|
api/forms/room_form.py
|
huyhoang17/Word2Vec_Recommender_System
|
69e3b3e438f802e405fbf6496360c152d9e3c939
|
[
"MIT"
] | 8
|
2019-04-17T13:38:24.000Z
|
2021-07-24T06:46:29.000Z
|
api/forms/room_form.py
|
huyhoang17/Word2Vec_Recommender_System
|
69e3b3e438f802e405fbf6496360c152d9e3c939
|
[
"MIT"
] | 6
|
2020-02-12T00:49:16.000Z
|
2021-06-11T01:42:18.000Z
|
api/forms/room_form.py
|
huyhoang17/Word2Vec_Recommender_System
|
69e3b3e438f802e405fbf6496360c152d9e3c939
|
[
"MIT"
] | 3
|
2020-02-28T15:00:28.000Z
|
2021-12-08T07:06:55.000Z
|
from django import forms
from api.forms.abstract_form import AbstractForm
class LuxstayRoomForm(AbstractForm):
custom_session_id = forms.CharField(required=False, initial=None)
ip_address = forms.CharField(required=False, initial=None)
room_id = forms.IntegerField(required=False, initial=None)
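# Usage sketch (an assumption, not in the original file): validate incoming
# request parameters before passing them to the recommender.
# form = LuxstayRoomForm({'room_id': 123, 'ip_address': '127.0.0.1'})
# if form.is_valid():
#     room_id = form.cleaned_data['room_id']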
| 28.272727
| 69
| 0.794212
|
4a12505b82a3a30031591a8cfae55ddea0b1e877
| 978
|
py
|
Python
|
mul_func.py
|
motokimura/shake_shake_chainer
|
3b87193dbfcf58723586dfc34c9bc21a900da327
|
[
"MIT"
] | 2
|
2018-11-26T13:51:56.000Z
|
2019-08-12T00:22:20.000Z
|
mul_func.py
|
motokimura/shake_shake_chainer
|
3b87193dbfcf58723586dfc34c9bc21a900da327
|
[
"MIT"
] | null | null | null |
mul_func.py
|
motokimura/shake_shake_chainer
|
3b87193dbfcf58723586dfc34c9bc21a900da327
|
[
"MIT"
] | null | null | null |
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import chainer
from chainer import cuda
from chainer import configuration
class Mul(chainer.function.Function):
def __init__(self):
return
def forward(self, inputs):
x1, x2 = inputs
        xp = cuda.get_array_module(x1)  # numpy on CPU, cupy on GPU
        # Use the expected value (0.5) at test time and a fresh random mixing
        # coefficient per sample while training.
        alpha = xp.ones(x1.shape, dtype=x1.dtype) * 0.5
        if configuration.config.train:
for i in range(len(alpha)):
alpha[i] = xp.random.rand()
return x1 * alpha + x2 * (xp.ones(x1.shape, dtype=x1.dtype) - alpha),
def backward(self, inputs, grad_outputs):
gx, = grad_outputs
xp = cuda.get_array_module(gx)
beta = xp.empty(gx.shape, dtype=gx.dtype)
for i in range(len(beta)):
beta[i] = xp.random.rand()
return gx * beta, gx * (xp.ones(gx.shape, dtype=gx.dtype) - beta)
def mul(x1, x2):
return Mul()(x1, x2)
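# A short usage sketch (an assumption, not part of the original file): mixing
# two branch outputs of equal shape, as in shake-shake regularization.
if __name__ == '__main__':
    import numpy as np
    x1 = chainer.Variable(np.random.rand(4, 3).astype(np.float32))
    x2 = chainer.Variable(np.random.rand(4, 3).astype(np.float32))
    y = mul(x1, x2)
    print(y.shape)  # (4, 3)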
| 26.432432
| 81
| 0.578732
|
4a1250c74cb85854f8782a2eab9ba0b33c458e05
| 724
|
py
|
Python
|
tests/conftest.py
|
kei0822kei/twinpy
|
14b47df1fa5b57a54f57d5c2120ed3fe9502a9bc
|
[
"MIT"
] | null | null | null |
tests/conftest.py
|
kei0822kei/twinpy
|
14b47df1fa5b57a54f57d5c2120ed3fe9502a9bc
|
[
"MIT"
] | 5
|
2021-01-19T13:08:28.000Z
|
2021-02-20T12:03:59.000Z
|
tests/conftest.py
|
kei0822kei/twinpy
|
14b47df1fa5b57a54f57d5c2120ed3fe9502a9bc
|
[
"MIT"
] | null | null | null |
#!/usr/bin/env python
# -*- coding: utf-8 -*-
"""
This is for pytest fixtures.
"""
import pytest
from twinpy.properties.hexagonal import get_hcp_cell
a = 2.93
c = 4.65
symbol = 'Ti'
@pytest.fixture(autouse=True, scope='session')
def ti_cell_wyckoff_c() -> tuple:
"""
Ti hexagonal cell.
Returns:
tuple: Ti hexagonal cell.
"""
wyckoff = 'c'
cell = get_hcp_cell(a=a, c=c, symbol=symbol, wyckoff=wyckoff)
return cell
@pytest.fixture(autouse=True, scope='session')
def ti_cell_wyckoff_d() -> tuple:
"""
Ti hexagonal cell.
Returns:
tuple: Ti hexagonal cell.
"""
wyckoff = 'd'
cell = get_hcp_cell(a=a, c=c, symbol=symbol, wyckoff=wyckoff)
return cell
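# A sketch of a test consuming these fixtures (an assumption about the cell
# tuple layout returned by get_hcp_cell):
# def test_hcp_has_two_atoms(ti_cell_wyckoff_c):
#     lattice, scaled_positions, symbols = ti_cell_wyckoff_c
#     assert len(symbols) == 2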
| 18.564103
| 65
| 0.631215
|
4a125144d66da9b8047be1c368f33878cb21a580
| 5,473
|
py
|
Python
|
gaokao.py
|
firmianay/gaokao
|
871e8d9df7ba15ecfbf1c16107e985f8ffc4d2be
|
[
"MIT"
] | 27
|
2017-12-06T06:12:07.000Z
|
2021-12-22T10:59:20.000Z
|
gaokao.py
|
firmianay/gaokao
|
871e8d9df7ba15ecfbf1c16107e985f8ffc4d2be
|
[
"MIT"
] | null | null | null |
gaokao.py
|
firmianay/gaokao
|
871e8d9df7ba15ecfbf1c16107e985f8ffc4d2be
|
[
"MIT"
] | 8
|
2017-12-06T06:12:08.000Z
|
2021-10-31T03:19:55.000Z
|
# -*- coding: utf-8 -*-
import config
import json
from itertools import islice
class ZhiYuan():
def __init__(self, config):
self.region = config.get('region')
self.subject = config.get('subject')
self.firstLine = config.get('firstLine')
self.score = config.get('score')
self.rank = config.get('rank')
self.preScore = {
'2016':'',
'2015':'',
'2014':''
}
self.preRank = {
'2016':'',
'2015':'',
'2014':''
}
self.majors_1 = {}
self.majors_2 = {}
self.majors_3 = {}
self.ratio = [0.4, 0.3, 0.3]
self.probability = {}
def generate(self):
        # All three statistics files share one format; derive the matching
        # score band and rank band for each year from the candidate's rank.
        for year in ('2016', '2015', '2014'):
            with open('./data/%stj.txt' % year, 'r') as f:
                for line in islice(f, 1, None):
                    line = line.split()
                    if self.rank > int(line[4])-int(line[3]) and self.rank < int(line[4]):
                        self.preScore[year] = line[0]
                        if self.subject == '理科':
                            self.preRank[year] = "%s~%s" % (int(line[4])-int(line[3]), line[4])
                        elif self.subject == '文科':
                            self.preRank[year] = "%s~%s" % (int(line[2])-int(line[1]), line[2])
        # Mark, per year, whether each major's cutoff is within reach of the
        # estimated score band (1 = admitted, 0 = not admitted).
        for year, majors in (('2016', self.majors_1),
                             ('2015', self.majors_2),
                             ('2014', self.majors_3)):
            with open('./data/%szy.txt' % year, 'r') as f:
                for line in islice(f, 1, None):
                    line = line.split()
                    if int(self.preScore[year].split("~")[1]) >= int(line[1]):
                        majors[line[0]] = 1
                    else:
                        majors[line[0]] = 0
major_1 = self.majors_1.keys()
major_2 = self.majors_2.keys()
major_3 = self.majors_3.keys()
all_majors = list(major_1 | major_2 | major_3)
        for m in all_majors:
            prob = 0
            # num is a bitmask of the years that list this major:
            # bit 1 = 2016, bit 2 = 2015, bit 4 = 2014.
            num = 0
if m in major_1:
num += 1
if m in major_2:
num += 2
if m in major_3:
num += 4
if num == 7:
if self.majors_1[m] == 1:
prob += 0.4
if self.majors_2[m] == 1:
prob += 0.3
if self.majors_3[m] == 1:
prob += 0.3
elif num == 6:
if self.majors_2[m] == 1:
prob += 0.5
if self.majors_3[m] == 1:
prob += 0.5
elif num == 5:
if self.majors_1[m] == 1:
prob += 0.6
if self.majors_3[m] == 1:
prob += 0.4
elif num == 4:
if self.majors_3[m] == 1:
prob += 1
elif num == 3:
if self.majors_1[m] == 1:
prob += 0.5
if self.majors_2[m] == 1:
prob += 0.5
elif num == 2:
if self.majors_2[m] == 1:
prob += 1
elif num == 1:
if self.majors_1[m] == 1:
prob += 1
self.probability[m] = prob
def printResult(self):
obj = {
'省份':self.region,
'科目':self.subject,
'你的分数':self.score,
'你的排名':self.rank,
'对应往年成绩':self.preScore,
'对应往年排名':self.preRank,
'2016录取情况':self.majors_1,
'2015录取情况':self.majors_2,
'2014录取情况':self.majors_3,
'2017录取预测':self.probability
}
print(json.dumps(obj, indent=4, ensure_ascii=False))
def start(self):
self.generate()
self.printResult()
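# config.config is expected to look roughly like this (an assumption inferred
# from the keys read in __init__; the values below are placeholders):
# config = {
#     'region': '北京',
#     'subject': '理科',
#     'firstLine': 548,
#     'score': 620,
#     'rank': 5000,
# }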
if __name__ == "__main__":
Z = ZhiYuan(config.config)
Z.start()
| 34.20625
| 93
| 0.415129
|
4a1251464623aa111780c07ccf743d6cc7ae3281
| 9,205
|
py
|
Python
|
examples/binary_examples/PyFstat_example_binary_mcmc_vs_grid_comparison.py
|
RobertRosca/PyFstat
|
1c9568bb3dc87c3d33aeb41b3f572e9990665372
|
[
"MIT"
] | null | null | null |
examples/binary_examples/PyFstat_example_binary_mcmc_vs_grid_comparison.py
|
RobertRosca/PyFstat
|
1c9568bb3dc87c3d33aeb41b3f572e9990665372
|
[
"MIT"
] | 1
|
2021-02-11T16:16:26.000Z
|
2021-02-11T16:16:26.000Z
|
examples/binary_examples/PyFstat_example_binary_mcmc_vs_grid_comparison.py
|
RobertRosca/PyFstat
|
1c9568bb3dc87c3d33aeb41b3f572e9990665372
|
[
"MIT"
] | null | null | null |
#!/usr/bin/env python
"""
Binary CW example: Comparison between MCMC and grid search
==========================================================
Comparison of the semicoherent F-statistic MCMC search algorithm
to a simple grid search across the parameter space corresponding
to a CW source in a binary system.
"""
import pyfstat
import os
import numpy as np
import matplotlib.pyplot as plt
# Set to false to include eccentricity
circular_orbit = False
label = "PyFstat_example_binary_mcmc_vs_grid_comparison" + (
"_circular_orbit" if circular_orbit else ""
)
outdir = os.path.join("PyFstat_example_data", label)
# Parameters to generate a data set
data_parameters = {
"sqrtSX": 1e-22,
"tstart": 1000000000,
"duration": 90 * 86400,
"detectors": "H1,L1",
"Tsft": 3600,
"Band": 4,
}
# Injected signal parameters
tend = data_parameters["tstart"] + data_parameters["duration"]
mid_time = 0.5 * (data_parameters["tstart"] + tend)
depth = 10.0
signal_parameters = {
"tref": data_parameters["tstart"],
"F0": 40.0,
"F1": 0,
"F2": 0,
"Alpha": 0.5,
"Delta": 0.5,
"period": 85 * 24 * 3600.0,
"asini": 4.0,
"tp": mid_time * 1.05,
"argp": 0.0 if circular_orbit else 0.54,
"ecc": 0.0 if circular_orbit else 0.7,
"h0": data_parameters["sqrtSX"] / depth,
"cosi": 1.0,
}
print("Generating SFTs with injected signal...")
writer = pyfstat.BinaryModulatedWriter(
label="simulated_signal",
outdir=outdir,
**data_parameters,
**signal_parameters,
)
writer.make_data()
print("")
print("Performing Grid Search...")
# Create ad-hoc grid and compute Fstatistic around injection point
# There's no class supporting a binary search in the same way as
# grid_based_searches.GridSearch, so we do it by hand constructing
# a grid and using core.ComputeFstat.
half_points_per_dimension = 2
search_keys = ["period", "asini", "tp", "argp", "ecc"]
search_keys_label = [
r"$P$ [s]",
r"$a_p$ [s]",
r"$t_{p}$ [s]",
r"$\omega$ [rad]",
r"$e$",
]
grid_arrays = np.meshgrid(
*[
signal_parameters[key]
* (
1
+ 0.01
* np.arange(-half_points_per_dimension, half_points_per_dimension + 1, 1)
)
for key in search_keys
]
)
grid_points = np.hstack(
[grid_arrays[i].reshape(-1, 1) for i in range(len(grid_arrays))]
)
compute_f_stat = pyfstat.ComputeFstat(
sftfilepattern=os.path.join(outdir, "*simulated_signal*sft"),
tref=signal_parameters["tref"],
binary=True,
minCoverFreq=-0.5,
maxCoverFreq=-0.5,
)
twoF_values = np.zeros(grid_points.shape[0])
for ind in range(grid_points.shape[0]):
point = grid_points[ind]
twoF_values[ind] = compute_f_stat.get_fullycoherent_twoF(
F0=signal_parameters["F0"],
F1=signal_parameters["F1"],
F2=signal_parameters["F2"],
Alpha=signal_parameters["Alpha"],
Delta=signal_parameters["Delta"],
period=point[0],
asini=point[1],
tp=point[2],
argp=point[3],
ecc=point[4],
)
print(f"2Fstat computed on {grid_points.shape[0]} points")
print("")
print("Plotting results...")
dim = len(search_keys)
fig, ax = plt.subplots(dim, 1, figsize=(10, 10))
for ind in range(dim):
a = ax.ravel()[ind]
a.grid()
a.set(xlabel=search_keys_label[ind], ylabel=r"$2 \mathcal{F}$", yscale="log")
a.plot(grid_points[:, ind], twoF_values, "o")
a.axvline(signal_parameters[search_keys[ind]], label="Injection", color="orange")
plt.tight_layout()
fig.savefig(os.path.join(outdir, "grid_twoF_per_dimension.png"))
print("Performing MCMCSearch...")
# Fixed points in frequency and sky parameters
theta_prior = {
"F0": signal_parameters["F0"],
"F1": signal_parameters["F1"],
"F2": signal_parameters["F2"],
"Alpha": signal_parameters["Alpha"],
"Delta": signal_parameters["Delta"],
}
# Set up priors for the binary parameters
for key in search_keys:
theta_prior.update(
{
key: {
"type": "unif",
"lower": 0.999 * signal_parameters[key],
"upper": 1.001 * signal_parameters[key],
}
}
)
if circular_orbit:
for key in ["ecc", "argp"]:
theta_prior[key] = 0
search_keys.remove(key)
# ptemcee sampler settings - in a real application we might want higher values
ntemps = 2
log10beta_min = -1
nwalkers = 100
nsteps = [100, 100] # [burnin,production]
mcmcsearch = pyfstat.MCMCSearch(
label="mcmc_search",
outdir=outdir,
sftfilepattern=os.path.join(outdir, "*simulated_signal*sft"),
theta_prior=theta_prior,
tref=signal_parameters["tref"],
nsteps=nsteps,
nwalkers=nwalkers,
ntemps=ntemps,
log10beta_min=log10beta_min,
binary=True,
)
# walker plot is generated during main run of the search class
mcmcsearch.run(
plot_walkers=True,
walker_plot_args={"plot_det_stat": True, "injection_parameters": signal_parameters},
)
mcmcsearch.print_summary()
# call some built-in plotting methods
# these can all highlight the injection parameters, too
print("Making MCMCSearch {:s} corner plot...".format("-".join(search_keys)))
mcmcsearch.plot_corner(truths=signal_parameters)
print("Making MCMCSearch prior-posterior comparison plot...")
mcmcsearch.plot_prior_posterior(injection_parameters=signal_parameters)
print("")
print("*" * 20)
print("Quantitative comparisons:")
print("*" * 20)
# some informative command-line output comparing search results and injection
# get max twoF and binary Doppler parameters
max_grid_index = np.argmax(twoF_values)
max_grid_2F = twoF_values[max_grid_index]
max_grid_parameters = grid_points[max_grid_index]
# same for MCMCSearch, here twoF is separate, and non-sampled parameters are not included either
max_dict_mcmc, max_2F_mcmc = mcmcsearch.get_max_twoF()
print(
"Grid Search:\n\tmax2F={:.4f}\n\tOffsets from injection parameters (relative error): {:s}.".format(
max_grid_2F,
", ".join(
[
"\n\t\t{1:s}: {0:.4e} ({2:.4f}%)".format(
max_grid_parameters[search_keys.index(key)]
- signal_parameters[key],
key,
100
* (
max_grid_parameters[search_keys.index(key)]
- signal_parameters[key]
)
/ signal_parameters[key],
)
for key in search_keys
]
),
)
)
print(
"Max 2F candidate from MCMC Search:\n\tmax2F={:.4f}"
"\n\tOffsets from injection parameters (relative error): {:s}.".format(
max_2F_mcmc,
", ".join(
[
"\n\t\t{1:s}: {0:.4e} ({2:.4f}%)".format(
max_dict_mcmc[key] - signal_parameters[key],
key,
100
* (max_dict_mcmc[key] - signal_parameters[key])
/ signal_parameters[key],
)
for key in search_keys
]
),
)
)
# get additional point and interval estimators
stats_dict_mcmc = mcmcsearch.get_summary_stats()
print(
"Mean from MCMCSearch:\n\tOffset from injection parameters (relative error): {:s}"
"\n\tExpressed as fractions of 2sigma intervals: {:s}.".format(
", ".join(
[
"\n\t\t{1:s}: {0:.4e} ({2:.4f}%)".format(
stats_dict_mcmc[key]["mean"] - signal_parameters[key],
key,
100
* (stats_dict_mcmc[key]["mean"] - signal_parameters[key])
/ signal_parameters[key],
)
for key in search_keys
]
),
", ".join(
[
"\n\t\t{1:s}: {0:.4f}%".format(
100
* np.abs(stats_dict_mcmc[key]["mean"] - signal_parameters[key])
/ (2 * stats_dict_mcmc[key]["std"]),
key,
)
for key in search_keys
]
),
)
)
print(
"Median from MCMCSearch:\n\tOffset from injection parameters (relative error): {:s},"
"\n\tExpressed as fractions of 90% confidence intervals: {:s}.".format(
", ".join(
[
"\n\t\t{1:s}: {0:.4e} ({2:.4f}%)".format(
stats_dict_mcmc[key]["median"] - signal_parameters[key],
key,
100
* (stats_dict_mcmc[key]["median"] - signal_parameters[key])
/ signal_parameters[key],
)
for key in search_keys
]
),
", ".join(
[
"\n\t\t{1:s}: {0:.4f}%".format(
100
* np.abs(stats_dict_mcmc[key]["median"] - signal_parameters[key])
/ (
stats_dict_mcmc[key]["upper90"]
- stats_dict_mcmc[key]["lower90"]
),
key,
)
for key in search_keys
]
),
)
)
| 30.180328
| 103
| 0.576534
|
4a1252d11d25a536d51d4fc9102311fa9ed53ab4
| 3,778
|
py
|
Python
|
tests/www/www/settings.py
|
grantjenks/django-modelqueue
|
6e46760cd6da1060d8e7a011c7c8f160f905bddc
|
[
"Apache-2.0"
] | 3
|
2018-03-27T01:12:03.000Z
|
2021-03-07T17:06:09.000Z
|
tests/www/www/settings.py
|
grantjenks/django-modelqueue
|
6e46760cd6da1060d8e7a011c7c8f160f905bddc
|
[
"Apache-2.0"
] | 2
|
2021-12-13T16:11:13.000Z
|
2021-12-13T16:13:39.000Z
|
tests/www/www/settings.py
|
grantjenks/django-modelqueue
|
6e46760cd6da1060d8e7a011c7c8f160f905bddc
|
[
"Apache-2.0"
] | 1
|
2020-06-13T10:02:51.000Z
|
2020-06-13T10:02:51.000Z
|
"""
Django settings for www project.
Generated by 'django-admin startproject' using Django 1.11.11.
For more information on this file, see
https://docs.djangoproject.com/en/1.11/topics/settings/
For the full list of settings and their values, see
https://docs.djangoproject.com/en/1.11/ref/settings/
"""
import os
# Build paths inside the project like this: os.path.join(BASE_DIR, ...)
BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
# Quick-start development settings - unsuitable for production
# See https://docs.djangoproject.com/en/1.11/howto/deployment/checklist/
# SECURITY WARNING: keep the secret key used in production secret!
SECRET_KEY = '!6vkwfc7+yq3bqi2j50w_err_8-z=#ny5l13by*w&j%(cif7td'
# SECURITY WARNING: don't run with debug turned on in production!
DEBUG = False
ALLOWED_HOSTS = ['*']
# Application definition
INSTALLED_APPS = [
'django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
'www',
]
MIDDLEWARE = [
'django.middleware.security.SecurityMiddleware',
'django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.common.CommonMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
'django.middleware.clickjacking.XFrameOptionsMiddleware',
]
ROOT_URLCONF = 'www.urls'
TEMPLATES = [
{
'BACKEND': 'django.template.backends.django.DjangoTemplates',
'DIRS': [],
'APP_DIRS': True,
'OPTIONS': {
'context_processors': [
'django.template.context_processors.debug',
'django.template.context_processors.request',
'django.contrib.auth.context_processors.auth',
'django.contrib.messages.context_processors.messages',
],
},
},
]
WSGI_APPLICATION = 'www.wsgi.application'
# Database
# https://docs.djangoproject.com/en/1.11/ref/settings/#databases
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.sqlite3',
'NAME': os.path.join(BASE_DIR, 'db.sqlite3'),
}
}
# DATABASES = {
# 'default': {
# 'ENGINE': 'django.db.backends.postgresql_psycopg2',
# 'NAME': 'modelqueue',
# 'USER': 'django',
# 'PASSWORD': 'django',
# 'HOST': 'localhost',
# 'PORT': '5432',
# }
# }
# Password validation
# https://docs.djangoproject.com/en/1.11/ref/settings/#auth-password-validators
AUTH_PASSWORD_VALIDATORS = [
{
'NAME': 'django.contrib.auth.password_validation.UserAttributeSimilarityValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.MinimumLengthValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.CommonPasswordValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.NumericPasswordValidator',
},
]
# Internationalization
# https://docs.djangoproject.com/en/1.11/topics/i18n/
LANGUAGE_CODE = 'en-us'
TIME_ZONE = 'UTC'
USE_I18N = True
USE_L10N = True
USE_TZ = True
# Static files (CSS, JavaScript, Images)
# https://docs.djangoproject.com/en/1.11/howto/static-files/
STATIC_URL = '/static/'
# Logging
# LOGGING = {
# 'version': 1,
# 'disable_existing_loggers': False,
# 'handlers': {
# 'console': {
# 'level': 'DEBUG',
# 'class': 'logging.StreamHandler',
# },
# },
# 'loggers': {
# 'django.db.backends': {
# 'handlers': ['console'],
# 'level': 'DEBUG',
# 'propagate': False,
# },
# },
# }
| 24.69281
| 91
| 0.644521
|
4a125476ca8aaa4222f7e6ffc370e96ef167c571
| 8,489
|
py
|
Python
|
qtile_extras/resources/stravadata/sync.py
|
joefiorini/qtile-extras
|
6ba140281b4ae52576198e1dffd969cb41b4cf7f
|
[
"MIT"
] | 30
|
2021-09-01T18:16:45.000Z
|
2022-03-27T02:19:53.000Z
|
qtile_extras/resources/stravadata/sync.py
|
joefiorini/qtile-extras
|
6ba140281b4ae52576198e1dffd969cb41b4cf7f
|
[
"MIT"
] | 23
|
2021-10-11T00:43:02.000Z
|
2022-03-19T06:29:49.000Z
|
qtile_extras/resources/stravadata/sync.py
|
joefiorini/qtile-extras
|
6ba140281b4ae52576198e1dffd969cb41b4cf7f
|
[
"MIT"
] | 4
|
2021-11-05T16:52:05.000Z
|
2022-03-04T01:48:28.000Z
|
# Copyright (c) 2016-21 elParaguayo
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.
import datetime
import json
import os
import pickle
import time
from stravalib import Client
from stravalib.model import Activity
from units import unit
from qtile_extras.resources.stravadata.locations import AUTH, CACHE, CREDS, TIMESTAMP
NUM_EVENTS = 5
APP_ID = AUTH.get("id", False)
SECRET = AUTH.get("secret", False)
SHOW_EXTRA_MONTHS = 5
KM = unit("km")
class ActivityHistory(object):
def __init__(self):
self.current = None
self.previous = []
self.year = None
self.alltime = None
def add_month(self, actsum):
self.previous.append(actsum)
class ActivitySummary(object):
def __init__(self, distance_unit=unit("km"), groupdate=None, child=False):
self.activities = []
self.distance_unit = distance_unit
self.dist = distance_unit(0)
self.time = 0
self._date = datetime.datetime.now()
self.groupdate = groupdate
self.paceformat = "{min:2.0f}:{sec:02.0f}"
self.timeformat = "{hr:0d}:{min:02d}:{sec:02d}"
self.child = child
self.children = []
self._name = ""
@classmethod
def from_activity(cls, activity, distance_unit=unit("km"), child=False):
act = cls(distance_unit=distance_unit, child=child)
act.add_activity(activity)
return act
@classmethod
def from_activities(cls, activities, distance_unit=unit("km"), child=False):
act = cls(distance_unit=distance_unit, child=child)
act.add_activities(activities)
return act
def _is_activity(self, activity):
        return isinstance(activity, Activity) and activity.type == "Run"
def create_child(self, activity):
if not self.child:
self.children.append(ActivitySummary.from_activity(activity, child=True))
def add_activity(self, activity):
if self._is_activity(activity):
self.activities.append(activity)
if not self.is_multi_activity:
self._date = activity.start_date_local
self._name = activity.name
self.dist += activity.distance
self.time += activity.moving_time.total_seconds()
self.create_child(activity)
def add_activities(self, activities):
for act in activities:
self.add_activity(act)
@property
def is_multi_activity(self):
return len(self.activities) > 1
@property
def pace(self):
if self.distance > 0:
pace = self.time / self.distance
else:
pace = 0
m, s = divmod(pace, 60)
return (int(m), int(s))
@property
def distance(self):
return self.dist.num
@property
def format_pace(self):
min, sec = self.pace
return self.paceformat.format(min=min, sec=sec)
@property
def format_time(self):
return self.timeformat.format(hr=self.hours, min=self.mins, sec=self.secs)
@property
def elapsed_time_hms(self):
m, s = divmod(self.time, 60)
h, m = divmod(m, 60)
return (int(h), int(m), int(s))
@property
def hours(self):
return self.elapsed_time_hms[0]
@property
def mins(self):
return self.elapsed_time_hms[1]
@property
def secs(self):
return self.elapsed_time_hms[2]
@property
def date(self):
if self.is_multi_activity:
return self.groupdate
else:
return self._date
@property
def is_plural(self):
return len(self.activities) != 1
@property
def name(self):
if self.is_multi_activity or self.groupdate:
runs = "runs" if self.is_plural else "run"
return "{} {}".format(len(self.activities), runs)
        else:
            # _name is set when the first activity is added; fall back to a
            # placeholder when nothing has been recorded yet.
            return self._name or "No activity"
@property
def count(self):
return len(self.activities)
def refresh_token(client):
token = client.refresh_access_token(
client_id=APP_ID, client_secret=SECRET, refresh_token=client.refresh_token
)
with open(CREDS, "w") as out:
json.dump(token, out)
return token
def load_token():
with open(CREDS, "r") as f:
token = json.load(f)
return token
def current_month():
return datetime.datetime.now()
def previous_month(curmonth=None):
if curmonth is None:
cur = current_month()
else:
cur = curmonth
cur = cur.replace(day=1) - datetime.timedelta(days=1)
return cur
def same_month(source, ref):
return (source.month == ref.month) and (source.year == ref.year)
def same_year(source, ref):
return source.year == ref.year
def pace(mtime, distance):
secs = mtime.total_seconds()
pace = secs / KM(distance).num
m, s = divmod(pace, 60)
return (int(m), int(s))
def get_activities(activities):
data = ActivityHistory()
act_sum = ActivitySummary
cmonth = current_month()
current = [a for a in activities if same_month(a.start_date_local, cmonth)]
curacs = ActivitySummary.from_activities(current)
curacs.groupdate = cmonth
data.current = curacs
month = cmonth
for _ in range(SHOW_EXTRA_MONTHS):
month = previous_month(month)
previous = [a for a in activities if same_month(a.start_date_local, month)]
summary = act_sum()
summary.add_activities(previous)
summary.groupdate = month
data.add_month(summary)
ysum = act_sum()
yacts = [a for a in activities if same_year(a.start_date_local, cmonth)]
ysum.add_activities(yacts)
ysum.groupdate = cmonth
data.year = ysum
summary = act_sum()
summary.add_activities(activities)
summary.groupdate = month
data.alltime = summary
return data
def get_client():
client = Client()
token = load_token()
client.refresh_token = token["refresh_token"]
if token["expires_at"] < time.time():
token = refresh_token(client)
client.access_token = token["access_token"]
return client
def update(interval=900):
fetch = check_last_update(interval)
if fetch:
return fetch_data()
else:
return read_cache()
def check_last_update(interval):
fetch = True
now = time.time()
    if os.path.isfile(TIMESTAMP):
        with open(TIMESTAMP, "r") as ts:
            stamp = ts.read().strip()
        try:
            last = float(stamp)
            if (now - last) < interval:
                fetch = False
        except ValueError:
            pass
return fetch
def fetch_data():
if not (APP_ID and SECRET):
return (False, "Cannot read app_id and secret.")
try:
client = get_client()
except Exception as e:
return (False, e)
acs = list(client.get_activities())
data = get_activities(acs)
cache_data(data)
return (True, data)
def read_cache():
try:
with open(CACHE, "rb") as saved:
data = pickle.load(saved)
return (True, data)
except pickle.PickleError as e:
return (False, e)
except FileNotFoundError:
return (False, "Pickled data not found")
def cache_data(data):
now = time.time()
with open(TIMESTAMP, "w") as ts:
ts.write(str(now))
with open(CACHE, "wb") as pick:
pickle.dump(data, pick)
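# A minimal usage sketch (an assumption, not part of the original module):
if __name__ == "__main__":
    ok, result = update(interval=900)
    if ok:
        print(result.current.name, "at", result.current.format_pace, "min/km")
    else:
        print("Sync failed:", result)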
| 25.802432
| 85
| 0.639651
|
4a12549d687e794dcd54854411c42df74b7f245f
| 1,576
|
py
|
Python
|
interactive-deep-colorization/caffe_files/color_quantization.py
|
arthw/colorization
|
e7f85ec307c9d27a16a87276beaaf2dee5492292
|
[
"BSD-2-Clause"
] | 2
|
2018-08-10T13:15:11.000Z
|
2022-01-15T02:04:18.000Z
|
interactive-deep-colorization/caffe_files/color_quantization.py
|
arthw/colorization
|
e7f85ec307c9d27a16a87276beaaf2dee5492292
|
[
"BSD-2-Clause"
] | null | null | null |
interactive-deep-colorization/caffe_files/color_quantization.py
|
arthw/colorization
|
e7f85ec307c9d27a16a87276beaaf2dee5492292
|
[
"BSD-2-Clause"
] | 1
|
2022-02-06T16:00:10.000Z
|
2022-02-06T16:00:10.000Z
|
import numpy as np
from IPython.core.debugger import Pdb as pdb
import sklearn.neighbors as nn
import util
import caffe
class NNEncode():
# Encode points as a linear combination of unordered points
# using NN search and RBF kernel
def __init__(self,NN,sigma,km_filepath='./data/color_bins/pts_in_hull.npy',cc=-1):
if(util.check_value(cc,-1)):
self.cc = np.load(km_filepath)
else:
self.cc = cc
self.K = self.cc.shape[0]
self.NN = int(NN)
self.sigma = sigma
self.nbrs = nn.NearestNeighbors(n_neighbors=self.NN, algorithm='auto').fit(self.cc)
def encode_points_mtx_nd(self,pts_nd,axis=1,returnSparse=False):
t = util.Timer()
pts_flt = util.flatten_nd_array(pts_nd,axis=axis)
P = pts_flt.shape[0]
(dists,inds) = self.nbrs.kneighbors(pts_flt)
pts_enc_flt = np.zeros((P,self.K))
wts = np.exp(-dists**2/(2*self.sigma**2))
wts = wts/np.sum(wts,axis=1)[:,util.na()]
pts_enc_flt[np.arange(0,P,dtype='int')[:,util.na()],inds] = wts
pts_enc_nd = util.unflatten_2d_array(pts_enc_flt,pts_nd,axis=axis)
return pts_enc_nd
def decode_points_mtx_nd(self,pts_enc_nd,axis=1):
pts_enc_flt = util.flatten_nd_array(pts_enc_nd,axis=axis)
pts_dec_flt = np.dot(pts_enc_flt,self.cc)
pts_dec_nd = util.unflatten_2d_array(pts_dec_flt,pts_enc_nd,axis=axis)
return pts_dec_nd
def decode_1hot_mtx_nd(self,pts_enc_nd,axis=1,returnEncode=False):
pts_1hot_nd = nd_argmax_1hot(pts_enc_nd,axis=axis)
pts_dec_nd = self.decode_points_mtx_nd(pts_1hot_nd,axis=axis)
if(returnEncode):
return (pts_dec_nd,pts_1hot_nd)
else:
return pts_dec_nd
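# A self-contained sketch of the soft-encoding idea above (an assumption; it
# bypasses the project's util helpers and uses a random stand-in codebook).
if __name__ == '__main__':
    cc = np.random.uniform(-110, 110, (313, 2))  # stand-in for pts_in_hull.npy
    pts = np.random.uniform(-110, 110, (8, 2))
    nbrs = nn.NearestNeighbors(n_neighbors=5).fit(cc)
    dists, inds = nbrs.kneighbors(pts)
    wts = np.exp(-dists**2 / (2 * 5.0**2))  # RBF kernel, sigma=5
    wts = wts / np.sum(wts, axis=1, keepdims=True)
    pts_enc = np.zeros((pts.shape[0], cc.shape[0]))
    pts_enc[np.arange(pts.shape[0])[:, None], inds] = wts
    print(pts_enc.sum(axis=1))  # each row sums to 1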
| 30.901961
| 85
| 0.742386
|
4a125543e9c1119cc557a6f7addbaca45565ffc0
| 8,420
|
py
|
Python
|
tools/plain_train_net.py
|
Zeinab-Haroon/detectron2
|
6a56c9cadaf392697c4bdef00325e415d07a459f
|
[
"Apache-2.0"
] | 5
|
2021-06-16T04:40:44.000Z
|
2022-02-09T11:20:10.000Z
|
tools/plain_train_net.py
|
Zeinab-Haroon/detectron2
|
6a56c9cadaf392697c4bdef00325e415d07a459f
|
[
"Apache-2.0"
] | 1
|
2022-02-16T12:06:15.000Z
|
2022-02-16T12:06:15.000Z
|
tools/plain_train_net.py
|
Zeinab-Haroon/detectron2
|
6a56c9cadaf392697c4bdef00325e415d07a459f
|
[
"Apache-2.0"
] | 3
|
2021-09-26T14:42:00.000Z
|
2022-02-09T11:20:14.000Z
|
#!/usr/bin/env python
# Copyright (c) Facebook, Inc. and its affiliates.
"""
Detectron2 training script with a plain training loop.
This script reads a given config file and runs the training or evaluation.
It is an entry point that is able to train standard models in detectron2.
In order to let one script support training of many models,
this script contains logic that is specific to these built-in models and therefore
may not be suitable for your own project.
For example, your research project perhaps only needs a single "evaluator".
Therefore, we recommend using detectron2 as a library and taking
this file as an example of how to use the library.
You may want to write your own script with your datasets and other customizations.
Compared to "train_net.py", this script supports fewer default features.
It also includes fewer abstractions and is therefore easier to add custom logic to.
"""
import logging
import os
from collections import OrderedDict
import torch
from torch.nn.parallel import DistributedDataParallel
import detectron2.utils.comm as comm
from detectron2.checkpoint import DetectionCheckpointer, PeriodicCheckpointer
from detectron2.config import get_cfg
from detectron2.data import (
MetadataCatalog,
build_detection_test_loader,
build_detection_train_loader,
)
from detectron2.engine import default_argument_parser, default_setup, launch
from detectron2.evaluation import (
CityscapesInstanceEvaluator,
CityscapesSemSegEvaluator,
COCOEvaluator,
COCOPanopticEvaluator,
DatasetEvaluators,
LVISEvaluator,
PascalVOCDetectionEvaluator,
SemSegEvaluator,
inference_on_dataset,
print_csv_format,
)
from detectron2.modeling import build_model
from detectron2.solver import build_lr_scheduler, build_optimizer
from detectron2.utils.events import (
CommonMetricPrinter,
EventStorage,
JSONWriter,
TensorboardXWriter,
)
logger = logging.getLogger("detectron2")
def get_evaluator(cfg, dataset_name, output_folder=None):
"""
Create evaluator(s) for a given dataset.
This uses the special metadata "evaluator_type" associated with each builtin dataset.
For your own dataset, you can simply create an evaluator manually in your
script and do not have to worry about the hacky if-else logic here.
"""
if output_folder is None:
output_folder = os.path.join(cfg.OUTPUT_DIR, "inference")
evaluator_list = []
evaluator_type = MetadataCatalog.get(dataset_name).evaluator_type
if evaluator_type in ["sem_seg", "coco_panoptic_seg"]:
evaluator_list.append(
SemSegEvaluator(
dataset_name,
distributed=True,
output_dir=output_folder,
)
)
if evaluator_type in ["coco", "coco_panoptic_seg"]:
evaluator_list.append(COCOEvaluator(dataset_name, output_dir=output_folder))
if evaluator_type == "coco_panoptic_seg":
evaluator_list.append(COCOPanopticEvaluator(dataset_name, output_folder))
if evaluator_type == "cityscapes_instance":
assert (
torch.cuda.device_count() >= comm.get_rank()
), "CityscapesEvaluator currently do not work with multiple machines."
return CityscapesInstanceEvaluator(dataset_name)
if evaluator_type == "cityscapes_sem_seg":
assert (
torch.cuda.device_count() >= comm.get_rank()
), "CityscapesEvaluator currently do not work with multiple machines."
return CityscapesSemSegEvaluator(dataset_name)
if evaluator_type == "pascal_voc":
return PascalVOCDetectionEvaluator(dataset_name)
if evaluator_type == "lvis":
return LVISEvaluator(dataset_name, cfg, True, output_folder)
if len(evaluator_list) == 0:
raise NotImplementedError(
"no Evaluator for the dataset {} with the type {}".format(dataset_name, evaluator_type)
)
if len(evaluator_list) == 1:
return evaluator_list[0]
return DatasetEvaluators(evaluator_list)
def do_test(cfg, model):
results = OrderedDict()
for dataset_name in cfg.DATASETS.TEST:
data_loader = build_detection_test_loader(cfg, dataset_name)
evaluator = get_evaluator(
cfg, dataset_name, os.path.join(cfg.OUTPUT_DIR, "inference", dataset_name)
)
results_i = inference_on_dataset(model, data_loader, evaluator)
results[dataset_name] = results_i
if comm.is_main_process():
logger.info("Evaluation results for {} in csv format:".format(dataset_name))
print_csv_format(results_i)
if len(results) == 1:
results = list(results.values())[0]
return results
def do_train(cfg, model, resume=False):
model.train()
optimizer = build_optimizer(cfg, model)
scheduler = build_lr_scheduler(cfg, optimizer)
checkpointer = DetectionCheckpointer(
model, cfg.OUTPUT_DIR, optimizer=optimizer, scheduler=scheduler
)
start_iter = (
checkpointer.resume_or_load(cfg.MODEL.WEIGHTS, resume=resume).get("iteration", -1) + 1
)
max_iter = cfg.SOLVER.MAX_ITER
periodic_checkpointer = PeriodicCheckpointer(
checkpointer, cfg.SOLVER.CHECKPOINT_PERIOD, max_iter=max_iter
)
writers = (
[
CommonMetricPrinter(max_iter),
JSONWriter(os.path.join(cfg.OUTPUT_DIR, "metrics.json")),
TensorboardXWriter(cfg.OUTPUT_DIR),
]
if comm.is_main_process()
else []
)
# compared to "train_net.py", we do not support accurate timing and
# precise BN here, because they are not trivial to implement in a small training loop
data_loader = build_detection_train_loader(cfg)
logger.info("Starting training from iteration {}".format(start_iter))
with EventStorage(start_iter) as storage:
for data, iteration in zip(data_loader, range(start_iter, max_iter)):
storage.iter = iteration
loss_dict = model(data)
losses = sum(loss_dict.values())
assert torch.isfinite(losses).all(), loss_dict
loss_dict_reduced = {k: v.item() for k, v in comm.reduce_dict(loss_dict).items()}
losses_reduced = sum(loss for loss in loss_dict_reduced.values())
if comm.is_main_process():
storage.put_scalars(total_loss=losses_reduced, **loss_dict_reduced)
optimizer.zero_grad()
losses.backward()
optimizer.step()
storage.put_scalar("lr", optimizer.param_groups[0]["lr"], smoothing_hint=False)
scheduler.step()
if (
cfg.TEST.EVAL_PERIOD > 0
and (iteration + 1) % cfg.TEST.EVAL_PERIOD == 0
and iteration != max_iter - 1
):
do_test(cfg, model)
# Compared to "train_net.py", the test results are not dumped to EventStorage
comm.synchronize()
if iteration - start_iter > 5 and (
(iteration + 1) % 20 == 0 or iteration == max_iter - 1
):
for writer in writers:
writer.write()
periodic_checkpointer.step(iteration)
def setup(args):
"""
Create configs and perform basic setups.
"""
cfg = get_cfg()
cfg.merge_from_file(args.config_file)
cfg.merge_from_list(args.opts)
cfg.freeze()
default_setup(
cfg, args
) # if you don't like any of the default setup, write your own setup code
return cfg
def main(args):
cfg = setup(args)
model = build_model(cfg)
logger.info("Model:\n{}".format(model))
if args.eval_only:
DetectionCheckpointer(model, save_dir=cfg.OUTPUT_DIR).resume_or_load(
cfg.MODEL.WEIGHTS, resume=args.resume
)
return do_test(cfg, model)
distributed = comm.get_world_size() > 1
if distributed:
model = DistributedDataParallel(
model, device_ids=[comm.get_local_rank()], broadcast_buffers=False
)
do_train(cfg, model, resume=args.resume)
return do_test(cfg, model)
if __name__ == "__main__":
args = default_argument_parser().parse_args()
print("Command Line Args:", args)
launch(
main,
args.num_gpus,
num_machines=args.num_machines,
machine_rank=args.machine_rank,
dist_url=args.dist_url,
args=(args,),
)
| 35.527426
| 99
| 0.67981
|
4a125558f73775f5099682c5f40af9d6869fc3e5
| 124
|
py
|
Python
|
declare_qtquick/widgets/api/qtquick3d/effects/__init__.py
|
likianta/declare-qtquick
|
93c2ce49d841ccdeb0272085c5f731139927f0d7
|
[
"MIT"
] | 3
|
2021-11-02T03:45:27.000Z
|
2022-03-27T05:33:36.000Z
|
declare_qtquick/widgets/api/qtquick3d/effects/__init__.py
|
likianta/declare-qtquick
|
93c2ce49d841ccdeb0272085c5f731139927f0d7
|
[
"MIT"
] | null | null | null |
declare_qtquick/widgets/api/qtquick3d/effects/__init__.py
|
likianta/declare-qtquick
|
93c2ce49d841ccdeb0272085c5f731139927f0d7
|
[
"MIT"
] | null | null | null |
from __declare_qtquick_internals__ import qml_imports
qml_imports.add("QtQuick3D.Effects")
from .__list__ import * # noqa
| 24.8
| 53
| 0.822581
|
4a1256b0ec9adc40e1e507862095f133af318e44
| 348
|
py
|
Python
|
wesp32_light_server_socket/boot.py
|
ocdtrekkie/wesp32-demos
|
d1340c555488011db12cebfa7e110fe6f233bd75
|
[
"MIT"
] | 18
|
2018-10-16T19:14:16.000Z
|
2021-11-29T21:58:55.000Z
|
wesp32_light_server_socket/boot.py
|
ocdtrekkie/wesp32-demos
|
d1340c555488011db12cebfa7e110fe6f233bd75
|
[
"MIT"
] | 4
|
2019-02-11T20:59:30.000Z
|
2021-05-23T03:13:08.000Z
|
wesp32_light_server_socket/boot.py
|
ocdtrekkie/wesp32-demos
|
d1340c555488011db12cebfa7e110fe6f233bd75
|
[
"MIT"
] | 5
|
2019-02-11T17:27:58.000Z
|
2021-09-25T10:08:02.000Z
|
# This file is executed on every boot (including wake-boot from deepsleep)
import machine
import network
# Connect to LAN
lan = network.LAN(mdc=machine.Pin(16), mdio=machine.Pin(17),
                  power=None, phy_type=network.PHY_LAN8720, phy_addr=0)
lan.active(True)
# Define convenient reset function
def reset():
machine.reset()
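# Optional sanity check (an assumption, not in the original file): show the
# interface configuration once the LAN is active.
print(lan.ifconfig())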
| 26.769231
| 75
| 0.70977
|
4a12573725b3323dd3382e787e0e77f4c66f7f21
| 4,702
|
py
|
Python
|
myapp/time_series/heavy_tailed.py
|
rnagumo/numpyro_example
|
04820753586658e621c6537ee63a29274f8f427e
|
[
"MIT"
] | null | null | null |
myapp/time_series/heavy_tailed.py
|
rnagumo/numpyro_example
|
04820753586658e621c6537ee63a29274f8f427e
|
[
"MIT"
] | null | null | null |
myapp/time_series/heavy_tailed.py
|
rnagumo/numpyro_example
|
04820753586658e621c6537ee63a29274f8f427e
|
[
"MIT"
] | null | null | null |
"""Univariate, heavy tailed time series.
ref)
https://pyro.ai/examples/forecasting_i.html
data)
https://www.bart.gov/about/reports/ridership
"""
import pathlib
from typing import Dict, Optional, Tuple
import jax.numpy as jnp
import matplotlib.pyplot as plt
import numpy as np
import numpyro
import numpyro.distributions as dist
from jax import random
from numpyro import diagnostics, infer
from numpyro.contrib.control_flow import scan
def model(
covariates: jnp.ndarray,
x: Optional[jnp.ndarray] = None,
x_dim: int = 1,
z_dim: int = 1,
) -> None:
if x is not None:
x_dim = x.shape[-1]
seq_len, batch, c_dim = covariates.shape
weight = numpyro.sample(
"weight", dist.Normal(np.zeros((c_dim, x_dim)), np.ones((c_dim, x_dim)) * 0.1)
)
bias = numpyro.sample("bias", dist.Normal(np.zeros(x_dim), np.ones(x_dim) * 10))
sigma = numpyro.sample("sigma", dist.LogNormal(-5 * np.ones(x_dim), 5 * np.ones(x_dim)))
def transition_fn(
carry: Tuple[jnp.ndarray], t: jnp.ndarray
) -> Tuple[Tuple[jnp.ndarray], jnp.ndarray]:
z_prev, *_ = carry
z = numpyro.sample("z", dist.Normal(z_prev, jnp.ones(z_dim)))
numpyro.sample("x", dist.Cauchy(z + jnp.matmul(covariates[t], weight) + bias, sigma))
return (z,), None
with numpyro.handlers.condition(data={"x": x}):
scan(transition_fn, (jnp.zeros((batch, z_dim)),), jnp.arange(seq_len))
def _load_data(
num_seasons: int = 10, batch: int = 1, x_dim: int = 1
) -> Tuple[jnp.ndarray, jnp.ndarray]:
"""Load sequential data with peaky noize.
ref) http://docs.pyro.ai/en/stable/_modules/pyro/contrib/examples/bart.html
Returns:
Time series data with shape of `(seq_len, batch, data_dim)`.
"""
rng_key = random.PRNGKey(1234)
rng_key_0, rng_key_1 = random.split(rng_key, 2)
x = dist.Poisson(100).sample(rng_key_0, (70 * num_seasons, batch, x_dim))
x += jnp.array(([1] * 65 + [50] * 5) * num_seasons)[:, None, None] * random.normal(
rng_key_1, (70 * num_seasons, batch, x_dim)
)
t = jnp.arange(len(x))[:, None, None]
t = t.repeat(batch, axis=1)
assert isinstance(x, jnp.ndarray)
assert isinstance(t, jnp.ndarray)
assert x.shape[0] == t.shape[0]
assert x.shape[1] == t.shape[1]
return x, t
def _save_results(
x: jnp.ndarray,
prior_samples: Dict[str, jnp.ndarray],
posterior_samples: Dict[str, jnp.ndarray],
posterior_predictive: Dict[str, jnp.ndarray],
num_train: int,
) -> None:
root = pathlib.Path("./data/heavy_tailed")
root.mkdir(exist_ok=True)
jnp.savez(root / "piror_samples.npz", **prior_samples)
jnp.savez(root / "posterior_samples.npz", **posterior_samples)
jnp.savez(root / "posterior_predictive.npz", **posterior_predictive)
x_pred = posterior_predictive["x"]
x_pred_trn = x_pred[:, :num_train]
x_hpdi_trn = diagnostics.hpdi(x_pred_trn)
t_train = np.arange(num_train)
x_pred_tst = x_pred[:, num_train:]
x_hpdi_tst = diagnostics.hpdi(x_pred_tst)
num_test = x_pred_tst.shape[1]
t_test = np.arange(num_train, num_train + num_test)
prop_cycle = plt.rcParams["axes.prop_cycle"]
colors = prop_cycle.by_key()["color"]
plt.figure(figsize=(12, 6))
plt.plot(x.ravel(), label="ground truth", color=colors[0])
plt.plot(t_train, x_pred_trn.mean(0).ravel(), label="prediction", color=colors[1])
plt.fill_between(
t_train, x_hpdi_trn[0].ravel(), x_hpdi_trn[1].ravel(), alpha=0.3, color=colors[1]
)
plt.plot(t_test, x_pred_tst.mean(0).ravel(), label="forecast", color=colors[2])
plt.fill_between(
t_test, x_hpdi_tst[0].ravel(), x_hpdi_tst[1].ravel(), alpha=0.3, color=colors[2]
)
plt.legend()
plt.tight_layout()
plt.savefig(root / "data.png")
plt.close()
def main() -> None:
# Data
x, t = _load_data(5)
num_train = int(len(x) * 0.8)
x_train = x[:num_train]
t_train = t[:num_train]
rng_key = random.PRNGKey(0)
rng_key, rng_key_prior, rng_key_infer, rng_key_posterior = random.split(rng_key, 4)
# prior
predictive = infer.Predictive(model, num_samples=10)
prior_samples = predictive(rng_key_prior, t)
# Inference
kernel = infer.NUTS(model)
    mcmc = infer.MCMC(kernel, num_warmup=100, num_samples=100)
mcmc.run(rng_key_infer, t_train, x_train)
posterior_samples = mcmc.get_samples()
# Posterior prediction
predictive = infer.Predictive(model, posterior_samples=posterior_samples)
posterior_predictive = predictive(rng_key_posterior, t)
_save_results(x, prior_samples, posterior_samples, posterior_predictive, num_train)
if __name__ == "__main__":
main()
| 29.3875
| 93
| 0.664185
|
4a12574eb6f303f6dd67536ead479cfd9d81faf4
| 1,171
|
py
|
Python
|
videoKernelPseudoRGB/scripts/genColorMixes.py
|
Vortetty/AreOS
|
6705561986cc3e5228303d7ff11e255ba950aff3
|
[
"Apache-2.0"
] | null | null | null |
videoKernelPseudoRGB/scripts/genColorMixes.py
|
Vortetty/AreOS
|
6705561986cc3e5228303d7ff11e255ba950aff3
|
[
"Apache-2.0"
] | null | null | null |
videoKernelPseudoRGB/scripts/genColorMixes.py
|
Vortetty/AreOS
|
6705561986cc3e5228303d7ff11e255ba950aff3
|
[
"Apache-2.0"
] | null | null | null |
from itertools import permutations, combinations
colors = [
(0, 0, 0),
(0, 0, 168),
(0, 168, 0),
(0, 168, 168),
(168, 0, 0),
(168, 0, 168),
(168, 168, 0),
(168, 168, 168),
(87, 87, 87),
(87, 87, 255),
(87, 255, 87),
(87, 255, 255),
(255, 87, 87),
(255, 87, 255),
(255, 255, 87),
(255, 255, 255)
]
possible_colors = list(set(permutations(colors, 2)))
combos = []
for c in possible_colors:
    for i in [176, 177, 178, 219]:
        combos.append((tuple(c), i))
possible_colors.clear()
# Approximate foreground coverage of the CP437 shade glyphs used for mixing
# (176 = ░, 177 = ▒, 178 = ▓, 219 = █).
ratios = {
    176: 0.33333333333,
    177: 0.5,
    178: 0.73333333333,
    219: 1
}
def mergeColor(c, c1, c2):
p = ratios[c]
return tuple([max(0, min(255, int(a+b))) for a,b in zip(tuple([p * x for x in c1]), (tuple([(1 - p) * x for x in c2])))])
def vga_entry_color(fg, bg):
return fg | bg << 4
def vga_entry(uc, color):
return uc | color << 8
out_colors = {}
for i in combos:
out_colors[mergeColor(i[1], i[0][0], i[0][1])] = vga_entry(i[1], vga_entry_color(colors.index(i[0][0]), colors.index(i[0][1])))
f = open("vgaColors.py", "w+")
f.write("VGA_COLOR_COMBOS = " + repr(out_colors))
| 22.09434
| 131
| 0.551665
|
4a125875c5224f3a40699943dc6ef767359bf48d
| 18,316
|
py
|
Python
|
dash_fcast/distributions/table.py
|
dsbowen/dash-fcast
|
b589dfa370e8170f76893ba2f484bf03eaca7522
|
[
"MIT"
] | null | null | null |
dash_fcast/distributions/table.py
|
dsbowen/dash-fcast
|
b589dfa370e8170f76893ba2f484bf03eaca7522
|
[
"MIT"
] | 1
|
2021-05-26T12:55:28.000Z
|
2021-05-27T12:15:48.000Z
|
dash_fcast/distributions/table.py
|
dsbowen/dash-fcast
|
b589dfa370e8170f76893ba2f484bf03eaca7522
|
[
"MIT"
] | null | null | null |
"""# Tabular distribution
Examples
--------
In `app.py`:
```python
import dash_fcast.distributions as dist
import dash
import dash_bootstrap_components as dbc
import dash_core_components as dcc
import dash_html_components as html
import plotly.graph_objects as go
from dash.dependencies import Input, Output
app = dash.Dash(__name__, external_stylesheets=[dbc.themes.BOOTSTRAP])
app.layout = html.Div([
\ html.Br(),
\ dist.Table(
\ id='Forecast',
\ datatable={'editable': True, 'row_deletable': True},
\ row_addable=True,
\ smoother=True
\ ),
\ html.Div(id='graphs')
], className='container')
dist.Table.register_callbacks(app)
@app.callback(
\ Output('graphs', 'children'),
\ [Input(dist.Table.get_id('Forecast'), 'children')]
)
def update_graphs(dist_state):
\ distribution = dist.Table.load(dist_state)
\ pdf = go.Figure([distribution.pdf_plot(), distribution.bar_plot()])
\ pdf.update_layout(transition_duration=500, title='PDF')
\ cdf = go.Figure([distribution.cdf_plot()])
\ cdf.update_layout(transition_duration=500, title='CDF')
\ return [dcc.Graph(figure=pdf), dcc.Graph(figure=cdf)]
if __name__ == '__main__':
\ app.run_server(debug=True)
```
Run the app with:
```bash
$ python app.py
```
Open your browser and navigate to <http://localhost:8050/>.
"""
from .base import Base
from ..utils import get_changed_cell, get_deleted_row, get_trigger_ids
import dash
import dash_bootstrap_components as dbc
import dash_html_components as html
import dash_table
import numpy as np
import plotly.graph_objects as go
from dash.dependencies import MATCH, Input, Output, State
from dash_table.Format import Format, Scheme
from smoother import Smoother, DerivativeObjective, MassConstraint
import json
class Table(Base):
"""
Tabular distribution elicitation.
Parameters and attributes
-------------------------
id : str, default
Distribution identifier.
bins : list of scalars, default=[0, .25, .5, .75, 1]
List of 'break points' for the bins. The first bin starts at
`bins[0]`. The last bin ends at `bins[-1]`.
prob : list of scalars, default=[.25, .25, .25, .25]
Probability density function. This is the amount of probability mass
in each bin. Must sum to 1 and `len(prob)` must be `len(bins)-1`.
datatable : dict, default={}
Keyword arguments for the datatable associated with the table
distribution. See <https://dash.plotly.com/datatable>.
row_addable : bool, default=False
Indicates whether the forecaster can add rows.
scalable : bool, default=False
Provides a scaling function for the table bins.
smoother : bool, default=False
Indicates whether to use a smoother for interpolation. See
<https://dsbowen.github.io/smoother/>.
\*args, \*\*kwargs :
Arguments and keyword arguments passed to `super().__init__`.
"""
def __init__(
self,
id,
bins=[0, .25, .5, .75, 1],
prob=[.25, .25, .25, .25],
editable_cols=['bin-start', 'bin-end', 'pdf', 'cdf'],
datatable={},
row_addable=False,
scalable=False,
smoother=False,
*args, **kwargs
):
super().__init__(id, *args, **kwargs)
self.bins = bins
self.prob = prob
self.editable_cols = editable_cols
self.datatable = datatable
self.row_addable = row_addable
self.scalable = scalable
self.smoother = smoother
# underlying distribution if using smoother
self._dist = Smoother()
def to_plotly_json(self):
return {
'props': {
'children': self.elicitation(
self.bins,
self.prob,
self.editable_cols,
self.datatable,
self.row_addable,
self.scalable
)
},
'type': 'Div',
'namespace': 'dash_html_components'
}
def elicitation(
self,
bins=[0, .25, .5, .75, 1],
prob=[.25, .25, .25, .25],
editable_cols=['bin-start', 'bin-end', 'pdf', 'cdf'],
datatable={},
row_addable=False,
scalable=False
):
"""
Parameters
----------
bins : list of scalars or numpy.array, default=[0, .25, .5, .75, 1]
prob : list of scalars or numpy.array, default=[.25, .25, .25, .25]
datatable : dict, default={}
row_addable : bool, default=False
scalable : bool, default=False
Returns
-------
elicitation elements : list of dash elements
Dash elements used to elicit the distribution.
"""
def gen_formgroup(label, type, value):
id = self.get_id(self.id, type)
return dbc.FormGroup([
dbc.Label(label, html_for=id, width=6),
dbc.Col([
dbc.Input(
id=id,
value=value,
type='number',
style={'text-align': 'right'}
)
], width=6)
], row=True)
return [
# hidden state div
html.Div(
self.dump(),
id=self.get_id(self.id, 'state'),
style={'display': 'none'}
),
html.Div([
gen_formgroup('Lower bound', 'lb', self.bins[0]),
gen_formgroup('Upper bound', 'ub', self.bins[-1]),
dbc.Button(
'Rescale',
id=self.get_id(self.id, 'rescale'),
color='primary',
style={'margin-bottom': '1em'}
),
], style={} if scalable else {'display': 'none'}),
dash_table.DataTable(
id=self.get_id(self.id, 'table'),
columns=self.get_columns(editable_cols),
data=self.get_data(bins, prob),
**datatable
),
html.Div([
html.Br(),
dbc.Button(
'Add row',
id=self.get_id(self.id, 'row-add'),
color='primary',
)
], style={} if self.row_addable else {'display': 'none'})
]
def get_columns(
self, editable_cols=['bin-start', 'bin-end', 'pdf', 'cdf']
):
"""
Returns
-------
columns : list of dict
List of dictionaries specifying the datatable columns. See
<https://dash.plotly.com/datatable>.
"""
format = Format(scheme=Scheme.fixed, precision=2)
cols = [
{
'id': 'bin-start',
'name': 'Bin start',
'type': 'numeric'
},
{
'id': 'bin-end',
'name': 'Bin end',
'type': 'numeric'
},
{
'id': 'pdf',
'name': 'Probability',
'type': 'numeric',
'format': format
},
{
'id': 'cdf',
'name': 'Probability (cum)',
'type': 'numeric',
'format': format
}
]
for col in cols:
col['editable'] = col['id'] in editable_cols
return cols
def get_data(self, bins=None, prob=None):
"""
Parameters
----------
bins : list of scalars or numpy.array or None, default=None
If `None`, use `self.bins`.
prob : list of scalars or numpy.array or None, default=None
If `None`, use `self.prob`.
Returns
-------
records : list of dict
Datatable data in records format.
"""
def get_record(bin_start, bin_end, pdf_i, cdf_i):
return {
'bin-start': bin_start,
'bin-end': bin_end,
'pdf': 100*pdf_i,
'cdf': 100*cdf_i
}
bins = self.bins if bins is None else bins
pdf = self.prob if prob is None else prob
cdf = np.cumsum(pdf)
assert len(bins)-1 == len(pdf)
return [
get_record(*args) for args in zip(bins[:-1], bins[1:], pdf, cdf)
]
@classmethod
def register_callbacks(cls, app):
"""
Register dash callbacks for table distributions.
Parameters
----------
app : dash.Dash
App with which to register callbacks.
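Usage (a sketch)::

    app = dash.Dash(__name__)
    Table.register_callbacks(app)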
"""
@app.callback(
[
Output(cls.get_id(MATCH, 'state'), 'children'),
Output(cls.get_id(MATCH, 'table'), 'data')
],
[
Input(cls.get_id(MATCH, 'table'), 'data_timestamp'),
Input(cls.get_id(MATCH, 'rescale'), 'n_clicks'),
Input(cls.get_id(MATCH, 'row-add'), 'n_clicks')
],
[
State(cls.get_id(MATCH, 'state'), 'children'),
State(cls.get_id(MATCH, 'lb'), 'value'),
State(cls.get_id(MATCH, 'ub'), 'value'),
State(cls.get_id(MATCH, 'table'), 'data')
]
)
def update_table_state(
_, rescale, add_row, table_state, lb, ub, data
):
trigger_ids = get_trigger_ids(dash.callback_context)
table = cls.load(table_state)
table._handle_rescale(rescale, lb, ub, trigger_ids)
table._handle_data_updates(data, trigger_ids)
table._handle_row_add(add_row, trigger_ids)
return table.dump(), table.get_data()
def fit(self, bins=None, prob=None, derivative=2):
"""
Fit the smoother given mass constraints.
Parameters
----------
bins : list of scalars or numpy.array
Ordered list of bin break points. If `None`, use `self.bins`.
prob : list of scalars or numpy.array
Probability density function. This is the amount of probability mass
in each bin. Must sum to 1 and `len(prob)` should be `len(bins)-1`.
If `None`, use `self.prob`.
derivative : int, default=2
Order of the derivative to penalize, e.g. `2` means the smoother
will minimize the mean square second derivative.
Returns
-------
self
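Examples
--------
A minimal sketch with the default bins and probabilities::

    table = Table('my-dist', smoother=True)
    table.fit()
    table.pdf(.5)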
"""
bins = np.array(self.bins if bins is None else bins)
pdf = self.prob if prob is None else prob
# 0-1 scaling; ensures consistent smoother fitting at different scales
loc, scale = bins[0], bins[-1] - bins[0]
bins = (bins - loc) / scale
# fit smoother
params = zip(bins[:-1], bins[1:], pdf)
self._dist.fit(
0, 1, [MassConstraint(lb, ub, mass) for lb, ub, mass in params],
DerivativeObjective(derivative)
)
# restore to original scale
self._dist.x = scale * self._dist.x + loc
return self
def dump(self):
"""
Dump the table distribution state dictionary in JSON format.
Returns
-------
state : dict, JSON
"""
return json.dumps({
'cls': self.__class__.__name__,
'id': self.id,
'bins': self.bins,
'prob': self.prob,
'datatable': self.datatable,
'editable_cols': self.editable_cols,
'row_addable': self.row_addable,
'scalable': self.scalable,
'smoother': self.smoother,
'x': list(self._dist.x),
'_f_x': list(self._dist._f_x)
})
@classmethod
def load(cls, state_dict):
"""
Load a table distribution from its state dictionary.
Parameters
----------
state_dict : dict
Output of `Table.dump`.
Returns
-------
table : `Table`
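Examples
--------
A round-trip sketch::

    table = Table.load(Table('my-dist').dump())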
"""
state = json.loads(state_dict)
table = cls(
id=state['id'],
bins=state['bins'],
prob=state['prob'],
datatable=state['datatable'],
editable_cols=state['editable_cols'],
row_addable=state['row_addable'],
scalable=state['scalable'],
smoother=state['smoother']
)
table._dist.x = np.array(state['x'])
table._dist._f_x = np.array(state['_f_x'])
return table
def _handle_rescale(self, rescale, lb, ub, trigger_ids):
"""
Helper method for the callback that rescales the table bins.
"""
def rescale_f(x):
x = np.array(x)
return (ub-lb) * (x-curr_lb) / (curr_ub - curr_lb) + lb
if rescale and self.get_id(self.id, 'rescale') in trigger_ids:
curr_lb, curr_ub = self.bins[0], self.bins[-1]
self.bins = list(rescale_f(self.bins))
self._dist.x = rescale_f(self._dist.x)
def _handle_data_updates(self, data, trigger_ids):
"""
Helper method for the table-state callback which handles updates to
the data in the datatable.
"""
def handle_row_delete():
"""
Handle a row being deleted.
"""
i = get_deleted_row(data, prev_data)
pdf_i = self.prob.pop(i)
if i < len(self.prob):
self.prob[i] += pdf_i
handle_bin_update()
def handle_data_update():
"""
Handle data updates.
"""
# Strictly speaking, it should be sufficient to handle updates for
# only the changed column. But it's often useful to check that the
# columns are consistent because of asynchronous updating.
_, changed_col = get_changed_cell(data, prev_data)
handle_bin_update(end_updated=changed_col=='bin-end')
if changed_col == 'pdf':
self.prob = [d['pdf']/100. for d in data]
else:
cdf = np.insert([d['cdf'] for d in data], 0, 0)
self.prob = list(np.diff(cdf)/100.)
def handle_bin_update(end_updated=True):
"""
Handle bin updates.
"""
bin_start = [d['bin-start'] for d in data]
bin_end = [d['bin-end'] for d in data]
self.bins = (
bin_start[:1] + bin_end if end_updated
else bin_start + bin_end[-1:]
)
if self.get_id(self.id, 'table') not in trigger_ids:
return
prev_data = self.get_data()
if len(data) < len(prev_data):
handle_row_delete()
else:
handle_data_update()
if self.smoother:
try:
self.fit()
except Exception:
pass
return self
def _handle_row_add(self, add_row, trigger_ids):
"""
Helper method for the table-state callback which handles adding rows.
"""
if add_row and self.get_id(self.id, 'row-add') in trigger_ids:
self.bins.append(self.bins[-1])
self.prob.append(0)
def pdf(self, x):
if self.smoother:
return self._dist.pdf(x)
if x <= self.bins[0] or self.bins[-1] < x:
return 0
params = zip(self.bins[:-1], self.bins[1:], self.prob)
for bin_start, bin_end, pdf in params:
if bin_start < x <= bin_end:
return pdf / (bin_end - bin_start)
def cdf(self, x):
if self.smoother:
return self._dist.cdf(x)
if x <= self.bins[0]:
return 0
if x >= self.bins[-1]:
return 1
cdf = 0
params = zip(self.bins[:-1], self.bins[1:], self.prob)
for bin_start, bin_end, pdf in params:
if bin_start < x <= bin_end:
return cdf + pdf * (x-bin_start) / (bin_end - bin_start)
cdf += pdf
def pdf_plot(self, **kwargs):
"""
Parameters
----------
\*\*kwargs :
Keyword arguments for `go.Scatter`.
Returns
-------
scatter : go.Scatter.
Scatter plot of the pdf.
"""
name = kwargs.pop('name', self.id)
if self.smoother:
return go.Scatter(
x=self._dist.x, y=self._dist.f_x, name=name, **kwargs
)
heights = np.array(self.prob) / np.diff(self.bins)
x, y = [self.bins[0]], [heights[0]]
values = zip(self.bins[1:], heights[:-1], heights[1:])
for x_i, height_prev, height_curr in values:
x += [x_i, x_i]
y += [height_prev, height_curr]
x.append(self.bins[-1])
y.append(heights[-1])
return go.Scatter(x=x, y=y, name=name, **kwargs)
def cdf_plot(self, **kwargs):
"""
Parameters
----------
\*\*kwargs :
Keyword arguments for `go.Scatter`.
Returns
-------
scatter : go.Scatter
Scatter plot of the cdf.
"""
name = kwargs.pop('name', self.id)
if self.smoother:
return go.Scatter(
x=self._dist.x, y=self._dist.F_x, name=name, **kwargs
)
F_x = np.insert(np.cumsum(self.prob), 0, 0)
return go.Scatter(x=self.bins, y=F_x, name=name, **kwargs)
def bar_plot(self, **kwargs):
"""
Parameters
----------
\*\*kwargs :
Keyword arguments for `go.Bar`.
Returns
-------
bar plot : go.Bar
Bar plot of the pdf in the datatable.
"""
name = kwargs.pop('name', self.id)
return go.Bar(
x=(np.array(self.bins[1:]) + np.array(self.bins[:-1])) / 2.,
y=np.array(self.prob) / np.diff(self.bins),
width=np.diff(self.bins),
name=name,
**kwargs
)
| 31.41681
| 80
| 0.510483
|
4a125891b82449b927cc423573d0f38e18684c79
| 959
|
py
|
Python
|
sopel/util/pyexec.py
|
Ameenekosan/Yumiko
|
16624f0b3f5c94262104b85866ce2cf7fd96f0db
|
[
"EFL-2.0"
] | null | null | null |
sopel/util/pyexec.py
|
Ameenekosan/Yumiko
|
16624f0b3f5c94262104b85866ce2cf7fd96f0db
|
[
"EFL-2.0"
] | null | null | null |
sopel/util/pyexec.py
|
Ameenekosan/Yumiko
|
16624f0b3f5c94262104b85866ce2cf7fd96f0db
|
[
"EFL-2.0"
] | null | null | null |
import http
import web
def eval_py(code, paste_multiline=True):
attempts = 0
while True:
try:
output = http.get("http://eval.appspot.com/eval", statement=code).rstrip('\n')
# sometimes the API returns a blank string on the first attempt; retry
# once to make sure it is actually supposed to be a blank string
if output == "":
output = http.get("http://eval.appspot.com/eval", statement=code).rstrip('\n')
break
except http.HTTPError:
if attempts > 2:
return "Failed to execute code."
else:
attempts += 1
continue
if "Traceback (most recent call last):" in output:
status = "Python error: "
else:
status = "Code executed sucessfully: "
if "\n" in output and paste_multiline:
return status + web.haste(output)
else:
return output
| 29.96875
| 94
| 0.561001
|
4a12590d9af5739763d15bc1082209a83f0165af
| 10,877
|
py
|
Python
|
tests/test_nhg.py
|
YuriiHordynskyi/sonic-swss
|
17a2f93a545b8669e44a62231c340ba518272ed7
|
[
"Apache-2.0"
] | null | null | null |
tests/test_nhg.py
|
YuriiHordynskyi/sonic-swss
|
17a2f93a545b8669e44a62231c340ba518272ed7
|
[
"Apache-2.0"
] | null | null | null |
tests/test_nhg.py
|
YuriiHordynskyi/sonic-swss
|
17a2f93a545b8669e44a62231c340ba518272ed7
|
[
"Apache-2.0"
] | 2
|
2020-06-22T11:50:22.000Z
|
2021-04-13T12:40:13.000Z
|
import os
import re
import time
import json
import pytest
import ipaddress
from swsscommon import swsscommon
class TestNextHopGroup(object):
def test_route_nhg(self, dvs, testlog):
config_db = swsscommon.DBConnector(swsscommon.CONFIG_DB, dvs.redis_sock, 0)
intf_tbl = swsscommon.Table(config_db, "INTERFACE")
fvs = swsscommon.FieldValuePairs([("NULL","NULL")])
intf_tbl.set("Ethernet0", fvs)
intf_tbl.set("Ethernet4", fvs)
intf_tbl.set("Ethernet8", fvs)
intf_tbl.set("Ethernet0|10.0.0.0/31", fvs)
intf_tbl.set("Ethernet4|10.0.0.2/31", fvs)
intf_tbl.set("Ethernet8|10.0.0.4/31", fvs)
dvs.runcmd("config interface startup Ethernet0")
dvs.runcmd("config interface startup Ethernet4")
dvs.runcmd("config interface startup Ethernet8")
dvs.runcmd("arp -s 10.0.0.1 00:00:00:00:00:01")
dvs.runcmd("arp -s 10.0.0.3 00:00:00:00:00:02")
dvs.runcmd("arp -s 10.0.0.5 00:00:00:00:00:03")
assert dvs.servers[0].runcmd("ip link set down dev eth0") == 0
assert dvs.servers[1].runcmd("ip link set down dev eth0") == 0
assert dvs.servers[2].runcmd("ip link set down dev eth0") == 0
assert dvs.servers[0].runcmd("ip link set up dev eth0") == 0
assert dvs.servers[1].runcmd("ip link set up dev eth0") == 0
assert dvs.servers[2].runcmd("ip link set up dev eth0") == 0
db = swsscommon.DBConnector(0, dvs.redis_sock, 0)
ps = swsscommon.ProducerStateTable(db, "ROUTE_TABLE")
fvs = swsscommon.FieldValuePairs([("nexthop","10.0.0.1,10.0.0.3,10.0.0.5"), ("ifname", "Ethernet0,Ethernet4,Ethernet8")])
ps.set("2.2.2.0/24", fvs)
time.sleep(1)
# check if route was propagated to ASIC DB
adb = swsscommon.DBConnector(1, dvs.redis_sock, 0)
rtbl = swsscommon.Table(adb, "ASIC_STATE:SAI_OBJECT_TYPE_ROUTE_ENTRY")
nhgtbl = swsscommon.Table(adb, "ASIC_STATE:SAI_OBJECT_TYPE_NEXT_HOP_GROUP")
nhg_member_tbl = swsscommon.Table(adb, "ASIC_STATE:SAI_OBJECT_TYPE_NEXT_HOP_GROUP_MEMBER")
keys = rtbl.getKeys()
found_route = False
for k in keys:
rt_key = json.loads(k)
if rt_key['dest'] == "2.2.2.0/24":
found_route = True
break
assert found_route
# assert the route points to next hop group
(status, fvs) = rtbl.get(k)
for v in fvs:
if v[0] == "SAI_ROUTE_ENTRY_ATTR_NEXT_HOP_ID":
nhgid = v[1]
(status, fvs) = nhgtbl.get(nhgid)
assert status
keys = nhg_member_tbl.getKeys()
assert len(keys) == 3
for k in keys:
(status, fvs) = nhg_member_tbl.get(k)
for v in fvs:
if v[0] == "SAI_NEXT_HOP_GROUP_MEMBER_ATTR_NEXT_HOP_GROUP_ID":
assert v[1] == nhgid
# bring links down one-by-one
for i in [0, 1, 2]:
dvs.servers[i].runcmd("ip link set down dev eth0") == 0
time.sleep(1)
tbl = swsscommon.Table(db, "PORT_TABLE")
(status, fvs) = tbl.get("Ethernet%d" % (i * 4))
assert status
oper_status = "unknown"
for v in fvs:
if v[0] == "oper_status":
oper_status = v[1]
break
assert oper_status == "down"
keys = nhg_member_tbl.getKeys()
assert len(keys) == 2 - i
# bring links up one-by-one
for i in [0, 1, 2]:
dvs.servers[i].runcmd("ip link set up dev eth0") == 0
time.sleep(1)
tbl = swsscommon.Table(db, "PORT_TABLE")
(status, fvs) = tbl.get("Ethernet%d" % (i * 4))
assert status
oper_status = "unknown"
for v in fvs:
if v[0] == "oper_status":
oper_status = v[1]
break
assert oper_status == "up"
keys = nhg_member_tbl.getKeys()
assert len(keys) == i + 1
for k in keys:
(status, fvs) = nhg_member_tbl.get(k)
for v in fvs:
if v[0] == "SAI_NEXT_HOP_GROUP_MEMBER_ATTR_NEXT_HOP_GROUP_ID":
assert v[1] == nhgid
def test_route_nhg_exhaust(self, dvs, testlog):
"""
Test exhausting the ECMP group capacity, assuming SAI_SWITCH_ATTR_NUMBER_OF_ECMP_GROUPS is 512.
In order to achieve that, we configure:
1. 10 ports
2. 512 routes, each with a different nexthop group
See Also
--------
SwitchStateBase::set_number_of_ecmp_groups()
https://github.com/Azure/sonic-sairedis/blob/master/vslib/src/SwitchStateBase.cpp
"""
# TODO: check ECMP 512
def port_name(i):
return "Ethernet" + str(i * 4)
def port_ip(i):
return "10.0.0." + str(i * 2)
def peer_ip(i):
return "10.0.0." + str(i * 2 + 1)
def port_ipprefix(i):
return port_ip(i) + "/31"
def port_mac(i):
return "00:00:00:00:00:0" + str(i)
def gen_ipprefix(r):
""" Construct route like 2.X.X.0/24 """
ip = ipaddress.IPv4Address(IP_INTEGER_BASE + r * 256)
ip = str(ip)
ipprefix = ip + "/24"
return ipprefix
def gen_nhg_fvs(binary):
nexthop = []
ifname = []
for i in range(MAX_PORT_COUNT):
if binary[i] == '1':
nexthop.append(peer_ip(i))
ifname.append(port_name(i))
nexthop = ','.join(nexthop)
ifname = ','.join(ifname)
fvs = swsscommon.FieldValuePairs([("nexthop", nexthop), ("ifname", ifname)])
return fvs
def asic_route_exists(keys, ipprefix):
for k in keys:
rt_key = json.loads(k)
if rt_key['dest'] == ipprefix:
return k
else:
return None
def asic_route_nhg_fvs(k):
fvs = asic_db.get_entry("ASIC_STATE:SAI_OBJECT_TYPE_ROUTE_ENTRY", k)
if not fvs:
return None
print(fvs)
nhgid = fvs.get("SAI_ROUTE_ENTRY_ATTR_NEXT_HOP_ID")
if nhgid is None:
return None
fvs = asic_db.get_entry("ASIC_STATE:SAI_OBJECT_TYPE_NEXT_HOP_GROUP", nhgid)
return fvs
MAX_ECMP_COUNT = 512
MAX_PORT_COUNT = 10
IP_INTEGER_BASE = int(ipaddress.IPv4Address(u"2.2.2.0"))
config_db = dvs.get_config_db()
fvs = {"NULL": "NULL"}
for i in range(MAX_PORT_COUNT):
config_db.create_entry("INTERFACE", port_name(i), fvs)
config_db.create_entry("INTERFACE", "{}|{}".format(port_name(i), port_ipprefix(i)), fvs)
dvs.runcmd("config interface startup " + port_name(i))
dvs.runcmd("arp -s {} {}".format(peer_ip(i), port_mac(i)))
assert dvs.servers[i].runcmd("ip link set down dev eth0") == 0
assert dvs.servers[i].runcmd("ip link set up dev eth0") == 0
app_db = dvs.get_app_db()
ps = swsscommon.ProducerStateTable(app_db.db_connection, "ROUTE_TABLE")
# Add first batch of routes with unique nexthop groups in AppDB
route_count = 0
r = 0
while route_count < MAX_ECMP_COUNT:
r += 1
fmt = '{{0:0{}b}}'.format(MAX_PORT_COUNT)
binary = fmt.format(r)
# We need at least 2 ports for a nexthop group
if binary.count('1') <= 1:
continue
fvs = gen_nhg_fvs(binary)
route_ipprefix = gen_ipprefix(route_count)
ps.set(route_ipprefix, fvs)
route_count += 1
asic_db = dvs.get_asic_db()
# Wait and check ASIC DB the count of nexthop groups used
asic_db.wait_for_n_keys("ASIC_STATE:SAI_OBJECT_TYPE_NEXT_HOP_GROUP", MAX_ECMP_COUNT)
asic_routes_count = len(asic_db.get_keys("ASIC_STATE:SAI_OBJECT_TYPE_ROUTE_ENTRY"))
# Add a second batch of routes with new nexthop groups in AppDB
route_ipprefix = gen_ipprefix(route_count)
base_ipprefix = route_ipprefix
base = route_count
route_count = 0
while route_count < 10:
r += 1
fmt = '{{0:0{}b}}'.format(MAX_PORT_COUNT)
binary = fmt.format(r)
# We need at least 2 ports for a nexthop group
if binary.count('1') <= 1:
continue
fvs = gen_nhg_fvs(binary)
route_ipprefix = gen_ipprefix(base + route_count)
ps.set(route_ipprefix, fvs)
route_count += 1
last_ipprefix = route_ipprefix
# Wait until we get expected routes and check ASIC DB on the count of nexthop groups used, and it should not increase
keys = asic_db.wait_for_n_keys("ASIC_STATE:SAI_OBJECT_TYPE_ROUTE_ENTRY", asic_routes_count + 10)
asic_db.wait_for_n_keys("ASIC_STATE:SAI_OBJECT_TYPE_NEXT_HOP_GROUP", MAX_ECMP_COUNT)
# Check the route points to next hop group
# Note: no need to wait here
k = asic_route_exists(keys, "2.2.2.0/24")
assert k is not None
fvs = asic_route_nhg_fvs(k)
assert fvs is not None
# Check the second batch does not point to next hop group
k = asic_route_exists(keys, base_ipprefix)
assert k is not None
fvs = asic_route_nhg_fvs(k)
assert not fvs
# Remove first batch of routes with unique nexthop groups in AppDB
route_count = 0
r = 0
while route_count < MAX_ECMP_COUNT:
r += 1
fmt = '{{0:0{}b}}'.format(MAX_PORT_COUNT)
binary = fmt.format(r)
# We need at least 2 ports for a nexthop group
if binary.count('1') <= 1:
continue
route_ipprefix = gen_ipprefix(route_count)
ps._del(route_ipprefix)
route_count += 1
# Wait and check the second batch points to next hop group
# Check ASIC DB on the count of nexthop groups used, and it should not increase or decrease
asic_db.wait_for_n_keys("ASIC_STATE:SAI_OBJECT_TYPE_NEXT_HOP_GROUP", 10)
keys = asic_db.get_keys("ASIC_STATE:SAI_OBJECT_TYPE_ROUTE_ENTRY")
k = asic_route_exists(keys, base_ipprefix)
assert k is not None
fvs = asic_route_nhg_fvs(k)
assert fvs is not None
k = asic_route_exists(keys, last_ipprefix)
assert k is not None
fvs = asic_route_nhg_fvs(k)
assert fvs is not None
| 34.204403
| 129
| 0.57001
|
4a125916e35c005ae3766a8dcbd3dd2ee14254a7
| 948
|
py
|
Python
|
ui/About_logic.py
|
wendyltan/ZebraCrossDensityDetector
|
b69d4adca403c3404738e088934627d60d244a90
|
[
"MIT"
] | 5
|
2019-03-11T11:12:31.000Z
|
2020-08-04T07:33:49.000Z
|
ui/About_logic.py
|
wendyltan/ZebraCrossDensityDetector
|
b69d4adca403c3404738e088934627d60d244a90
|
[
"MIT"
] | null | null | null |
ui/About_logic.py
|
wendyltan/ZebraCrossDensityDetector
|
b69d4adca403c3404738e088934627d60d244a90
|
[
"MIT"
] | 4
|
2019-03-12T15:44:45.000Z
|
2020-08-04T07:33:50.000Z
|
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# @Time : 2019/3/14 15:45
# @Author : wendy
# @Usage :
# @File : About_logic.py
# @Software: PyCharm
from PyQt5 import QtWidgets, QtCore
from PyQt5.QtCore import QUrl
from PyQt5.QtWebEngineWidgets import QWebEngineView
from utils import helper as hp
from ui.AboutDialog import Ui_Dialog
from utils import markdown_convertor as md
class AboutDialog(QtWidgets.QDialog,Ui_Dialog):
def __init__(self):
super(AboutDialog,self).__init__()
self.setupUi(self)
self.setFixedSize(501, 371)
self.webView = QWebEngineView(self)
self.webView.setGeometry(QtCore.QRect(0, 20, 481, 281))
self.webView.setObjectName("webView")
filenames = 'README'
md.convert(filenames)
path = hp.load_file(filenames+'.html')
self.webView.load(QUrl.fromLocalFile(path))
self.webView.show()
self.setWindowTitle(filenames)
| 29.625
| 63
| 0.681435
|
4a12598758c0b7c90685a5b7b4b440c973c98bdf
| 6,690
|
py
|
Python
|
unoffical_api_wrapper/database.py
|
DrArtemi/riot-api
|
a68bf94061a3c63e511418669097499c3e2c055d
|
[
"MIT"
] | null | null | null |
unoffical_api_wrapper/database.py
|
DrArtemi/riot-api
|
a68bf94061a3c63e511418669097499c3e2c055d
|
[
"MIT"
] | null | null | null |
unoffical_api_wrapper/database.py
|
DrArtemi/riot-api
|
a68bf94061a3c63e511418669097499c3e2c055d
|
[
"MIT"
] | null | null | null |
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker
from models import Leagues, Matches, Players, Stages, Teams, Tournaments, create_table
import dateparser
import json
class DBUtility:
"""This class is used to store league data to database.
"""
def __init__(self, user: str, password: str, database: str) -> None:
engine = create_engine(f'postgresql://{user}:{password}@localhost:5432/{database}')
create_table(engine)
self.Session = sessionmaker(bind=engine)
def add_league(self, league):
session = self.Session()
exist_league = session.query(Leagues).filter_by(
riot_id=league["id"]
).first()
league_obj = exist_league if exist_league is not None else Leagues()
if exist_league is None:
league_obj.riot_id = league["id"]
league_obj.slug = league["slug"]
league_obj.name = league["name"]
league_obj.region = league["region"]
league_obj.image_url = league["image"]
league_obj.priority = league["priority"]
league_obj.priority_position = league["displayPriority"]["position"]
league_obj.priority_status = league["displayPriority"]["status"]
session.add(league_obj)
session.commit()
session.close()
def add_tournament(self, tournament, league_id):
session = self.Session()
league = session.query(Leagues).filter_by(
riot_id=league_id
).first()
exist_tournament = session.query(Tournaments).filter_by(
riot_id=tournament["id"]
).first()
tournament_obj = exist_tournament if exist_tournament is not None else Tournaments()
if exist_tournament is None:
tournament_obj.riot_id = tournament["id"]
tournament_obj.slug = tournament["slug"]
tournament_obj.start_date = dateparser.parse(tournament["startDate"])
tournament_obj.end_date = dateparser.parse(tournament["endDate"])
if league:
tournament_obj.league = league
session.add(tournament_obj)
session.commit()
session.close()
def add_stage(self, stage, tournament_id):
session = self.Session()
tournament = session.query(Tournaments).filter_by(
riot_id=tournament_id
).first()
exist_stage = session.query(Stages).join(Stages.tournament).\
filter((Tournaments.riot_id == tournament_id), (Stages.slug == stage["slug"])).first()
stage_obj = exist_stage if exist_stage is not None else Stages()
if exist_stage is None:
stage_obj.slug = stage["slug"]
stage_obj.name = stage["name"]
stage_obj.type = stage["type"]
if tournament:
stage_obj.tournament = tournament
session.add(stage_obj)
session.commit()
session.close()
def add_team(self, team):
session = self.Session()
exist_team = session.query(Teams).filter_by(
riot_id=team["id"]
).first()
team_obj = exist_team if exist_team is not None else Teams()
if exist_team is None:
team_obj.riot_id = team["id"]
team_obj.slug = team["slug"]
team_obj.name = team["name"]
team_obj.code = team["code"]
team_obj.image = team["image"]
team_obj.alt_image = team["alternativeImage"]
team_obj.bg_image = team["backgroundImage"]
team_obj.status = team["status"]
session.add(team_obj)
session.commit()
session.close()
def add_player(self, player, team_id):
session = self.Session()
exist_team = session.query(Teams).filter_by(
riot_id=team_id
).first()
exist_player = session.query(Players).filter_by(
riot_id=player["id"]
).first()
player_obj = exist_player if exist_player is not None else Players()
if exist_player is None:
player_obj.riot_id = player["id"]
if exist_team:
player_obj.current_team = exist_team
player_obj.summoner_name = player["summonerName"]
player_obj.first_name = player["firstName"]
player_obj.last_name = player["lastName"]
player_obj.image = player["image"]
player_obj.role = player["role"]
if exist_team:
player_obj.teams.append(exist_team)
session.add(player_obj)
session.commit()
session.close()
def add_match(self, match, stage, tournament_id, league_id, match_final_state=None, match_evolution=None):
session = self.Session()
with session.no_autoflush:
exist_stage = session.query(Stages).join(Stages.tournament).\
filter((Tournaments.riot_id == tournament_id), (Stages.slug == stage["slug"])).first()
exist_match = session.query(Matches).filter_by(
riot_id=match["id"]
).first()
match_obj = exist_match if exist_match is not None else Matches()
if exist_match is None:
match_obj.riot_id = match["id"]
match_obj.state = match["state"]
if match_final_state:
match_obj.final_state = json.dumps(match_final_state, separators=(',', ':'))
match_obj.date = dateparser.parse(match_final_state["timestamp"])
if match_evolution:
match_obj.evolution = json.dumps(match_evolution, separators=(',', ':'))
if exist_stage:
match_obj.stage = exist_stage
teams = match.pop("teams")
for i, team in enumerate(teams):
exist_team = session.query(Teams).filter_by(
riot_id=team["id"]
).first()
if exist_team is None:
continue
if exist_team.league is None:
league = session.query(Leagues).filter_by(
riot_id=league_id
).first()
exist_team.league = league
if i == 0:
match_obj.team_1 = exist_team
match_obj.team_1_win = team["result"]["outcome"] == "win"
elif i == 1:
match_obj.team_2 = exist_team
match_obj.team_2_win = team["result"]["outcome"] == "win"
# Ignore incomplete matches for now
if match_obj.team_1 and match_obj.team_2:
session.add(match_obj)
session.commit()
session.close()
| 40.301205
| 110
| 0.589387
|
4a125a24e6f63898df95fb1e4a101091f1c8c356
| 227
|
py
|
Python
|
tests/check_framework/urls/no_warnings_i18n.py
|
jpmallarino/django
|
659d2421c7adbbcd205604002d521d82d6b0b465
|
[
"BSD-3-Clause",
"0BSD"
] | 16
|
2019-08-10T12:24:06.000Z
|
2020-05-21T09:11:14.000Z
|
tests/check_framework/urls/no_warnings_i18n.py
|
jpmallarino/django
|
659d2421c7adbbcd205604002d521d82d6b0b465
|
[
"BSD-3-Clause",
"0BSD"
] | 12
|
2019-08-10T11:55:29.000Z
|
2020-05-21T04:46:30.000Z
|
tests/check_framework/urls/no_warnings_i18n.py
|
jpmallarino/django
|
659d2421c7adbbcd205604002d521d82d6b0b465
|
[
"BSD-3-Clause",
"0BSD"
] | 3
|
2019-08-20T13:29:34.000Z
|
2020-01-30T22:05:10.000Z
|
from django.conf.urls.i18n import i18n_patterns
from django.urls import path
from django.utils.translation import gettext_lazy as _
urlpatterns = i18n_patterns(
path(_("translated/"), lambda x: x, name="i18n_prefixed"),
)
| 28.375
| 62
| 0.77533
|
4a125a56d67ed784287969d1f1d164aae8ae04d5
| 1,378
|
py
|
Python
|
Superhero-Statistics/code.py
|
shyam8898/ga-learner-dsmp-repo
|
c104c9fed903aec579cdbcda7c84591a81377de3
|
[
"MIT"
] | null | null | null |
Superhero-Statistics/code.py
|
shyam8898/ga-learner-dsmp-repo
|
c104c9fed903aec579cdbcda7c84591a81377de3
|
[
"MIT"
] | null | null | null |
Superhero-Statistics/code.py
|
shyam8898/ga-learner-dsmp-repo
|
c104c9fed903aec579cdbcda7c84591a81377de3
|
[
"MIT"
] | null | null | null |
# --------------
#Header files
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
#path of the data file- path
data=pd.read_csv(path)
data['Gender'].replace('_','Agender',inplace=True)
gender_count=data['Gender'].value_counts()
#Code starts here
# --------------
# #Code starts here
alignment=data['Alignment'].value_counts()
# label slices with the value_counts index so labels match the counts
plt.pie(alignment,labels=alignment.index)
# --------------
#Code starts here
sc_df=data[['Strength','Combat']]
sc_covariance=sc_df['Strength'].cov(sc_df['Combat'])
sc_strength=sc_df['Strength'].std()
sc_combat=sc_df['Combat'].std()
sc_pearson=sc_covariance/(sc_strength*sc_combat)
ic_df=data[['Intelligence','Combat']]
ic_covariance=ic_df['Intelligence'].cov(ic_df['Combat'])
ic_intelligence=ic_df['Intelligence'].std()
ic_combat=ic_df['Combat'].std()
ic_pearson=ic_covariance/(ic_intelligence*ic_combat)
# --------------
#Code starts here
total_high=data['Total'].quantile(0.99)
super_best=data[data['Total']>total_high]
super_best_names=list(super_best['Name'])
# --------------
#Code starts here
fig = plt.figure()
ax_1 = fig.add_subplot(131)
plt.boxplot(data['Intelligence'])
plt.title('Intelligence')
ax_2 = fig.add_subplot(132)
plt.boxplot(data['Speed'])
plt.title('Speed')
ax_3 = fig.add_subplot(133)
plt.boxplot(data['Power'])
plt.title('Power')
| 21.53125
| 57
| 0.680697
|
4a125cb3f4d770b13b02397dc764d90f978b3019
| 7,116
|
py
|
Python
|
simplekml/styleselector.py
|
unhcfreg/DROP
|
80dac9dcfeb8ed16d262d78f053464a2f4f10caf
|
[
"MIT"
] | 40
|
2017-08-28T01:55:27.000Z
|
2022-03-16T12:35:14.000Z
|
simplekml/styleselector.py
|
adamjaffeback/DROP
|
80dac9dcfeb8ed16d262d78f053464a2f4f10caf
|
[
"MIT"
] | 2
|
2018-11-02T15:48:16.000Z
|
2021-09-25T11:01:41.000Z
|
simplekml/styleselector.py
|
adamjaffeback/DROP
|
80dac9dcfeb8ed16d262d78f053464a2f4f10caf
|
[
"MIT"
] | 21
|
2017-05-11T20:59:34.000Z
|
2022-02-02T09:35:51.000Z
|
"""
Copyright 2011-2015 Kyle Lancaster
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program. If not, see <http://www.gnu.org/licenses/>.
Contact me at kyle.lan@gmail.com
"""
from simplekml.base import Kmlable, check
from simplekml.substyle import IconStyle, LabelStyle, LineStyle, PolyStyle, BalloonStyle, ListStyle
class StyleSelector(Kmlable):
"""Abstract style class, extended by :class:`simplekml.Style` and :class:`simplekml.StyleMap`
There are no arguments.
"""
_id = 0
def __init__(self):
super(StyleSelector, self).__init__()
self._id = "stylesel_{0}".format(StyleSelector._id)
StyleSelector._id += 1
@property
def id(self):
"""The id of the style, read-only."""
return self._id
class Style(StyleSelector):
"""Styles affect how Geometry is presented.
Arguments are the same as the properties.
Usage::
import simplekml
kml = simplekml.Kml()
pnt = kml.newpoint(name='A Point')
pnt.coords = [(1.0, 2.0)]
pnt.style.labelstyle.color = simplekml.Color.red # Make the text red
pnt.style.labelstyle.scale = 2 # Make the text twice as big
pnt.style.iconstyle.icon.href = 'http://maps.google.com/mapfiles/kml/shapes/placemark_circle.png'
kml.save("Style.kml")
"""
def __init__(self,
iconstyle=None,
labelstyle=None,
linestyle=None,
polystyle=None,
balloonstyle=None,
liststyle=None):
super(Style, self).__init__()
self._kml["IconStyle_"] = iconstyle
self._kml["LabelStyle_"] = labelstyle
self._kml["LineStyle_"] = linestyle
self._kml["PolyStyle_"] = polystyle
self._kml["BalloonStyle"] = balloonstyle
self._kml["ListStyle"] = liststyle
def __str__(self):
return '<Style id="{0}">{1}</Style>'.format(self._id, super(Style, self).__str__())
@property
def iconstyle(self):
"""The iconstyle, accepts :class:`simplekml.IconStyle`."""
if self._kml["IconStyle_"] is None:
self._kml["IconStyle_"] = IconStyle()
return self._kml["IconStyle_"]
@iconstyle.setter
@check(IconStyle)
def iconstyle(self, iconstyle):
self._kml["IconStyle_"] = iconstyle
@property
def labelstyle(self):
"""The labelstyle, accepts :class:`simplekml.LabelStyle`."""
if self._kml["LabelStyle_"] is None:
self._kml["LabelStyle_"] = LabelStyle()
return self._kml["LabelStyle_"]
@labelstyle.setter
@check(LabelStyle)
def labelstyle(self, labelstyle):
self._kml["LabelStyle_"] = labelstyle
@property
def linestyle(self):
"""The linestyle, accepts :class:`simplekml.LineStyle`."""
if self._kml["LineStyle_"] is None:
self._kml["LineStyle_"] = LineStyle()
return self._kml["LineStyle_"]
@linestyle.setter
@check(LineStyle)
def linestyle(self, linestyle):
self._kml["LineStyle_"] = linestyle
@property
def polystyle(self):
"""The polystyle, accepts :class:`simplekml.PolyStyle`."""
if self._kml["PolyStyle_"] is None:
self._kml["PolyStyle_"] = PolyStyle()
return self._kml["PolyStyle_"]
@polystyle.setter
@check(PolyStyle)
def polystyle(self, polystyle):
self._kml["PolyStyle_"] = polystyle
@property
def balloonstyle(self):
"""The balloonstyle, accepts :class:`simplekml.BalloonStyle`."""
if self._kml["BalloonStyle"] is None:
self._kml["BalloonStyle"] = BalloonStyle()
return self._kml["BalloonStyle"]
@balloonstyle.setter
@check(BalloonStyle)
def balloonstyle(self, balloonstyle):
self._kml["BalloonStyle"] = balloonstyle
@property
def liststyle(self):
"""The liststyle, accepts :class:`simplekml.ListStyle`."""
if self._kml["ListStyle"] is None:
self._kml["ListStyle"] = ListStyle()
return self._kml["ListStyle"]
@liststyle.setter
@check(ListStyle)
def liststyle(self, liststyle):
self._kml["ListStyle"] = liststyle
class StyleMap(StyleSelector):
"""Styles affect how Geometry is presented.
Arguments are the same as the properties.
Usage::
import simplekml
kml = simplekml.Kml()
pnt = kml.newpoint(coords=[(18.432314,-33.988862)])
pnt.stylemap.normalstyle.labelstyle.color = simplekml.Color.blue
pnt.stylemap.highlightstyle.labelstyle.color = simplekml.Color.red
kml.save("StyleMap.kml")
"""
def __init__(self,
normalstyle=None,
highlightstyle=None):
super(StyleMap, self).__init__()
self._pairnormal = None
self._pairhighlight = None
self.normalstyle = normalstyle
self.highlightstyle = highlightstyle
def __str__(self):
buf = ['<StyleMap id="{0}">'.format(self._id),
super(StyleMap, self).__str__()]
if self._pairnormal is not None:
buf.append("<Pair>")
buf.append("<key>normal</key>")
buf.append("<styleUrl>#{0}</styleUrl>".format(self._pairnormal._id))
buf.append("</Pair>")
if self._pairhighlight is not None:
buf.append("<Pair>")
buf.append("<key>highlight</key>")
buf.append("<styleUrl>#{0}</styleUrl>".format(self._pairhighlight._id))
buf.append("</Pair>")
buf.append("</StyleMap>")
return "".join(buf)
@property
def normalstyle(self):
"""The normal :class:`simplekml.Style`, accepts :class:`simplekml.Style`."""
if self._pairnormal is None:
self._pairnormal = Style()
return self._pairnormal
@normalstyle.setter
@check(Style)
def normalstyle(self, normal):
self._pairnormal = normal
@property
def highlightstyle(self):
"""The highlighted :class:`simplekml.Style`, accepts :class:`simplekml.Style`."""
if self._pairhighlight is None:
self._pairhighlight = Style()
return self._pairhighlight
@highlightstyle.setter
@check(Style)
def highlightstyle(self, highlighturl):
self._pairhighlight = highlighturl
| 33.885714
| 106
| 0.607223
|
4a125dc5f98629828682f45728f632627c91c6e9
| 9,382
|
py
|
Python
|
simdkalman/primitives.py
|
microprediction/simdkalman
|
2bf7d69d319eec465eb023d81b7aa3ff1c58da7e
|
[
"MIT"
] | null | null | null |
simdkalman/primitives.py
|
microprediction/simdkalman
|
2bf7d69d319eec465eb023d81b7aa3ff1c58da7e
|
[
"MIT"
] | null | null | null |
simdkalman/primitives.py
|
microprediction/simdkalman
|
2bf7d69d319eec465eb023d81b7aa3ff1c58da7e
|
[
"MIT"
] | 1
|
2021-12-19T16:22:16.000Z
|
2021-12-19T16:22:16.000Z
|
"""
Low-level Kalman filter computation steps with multi-dimensional input arrays.
Unlike with the `KalmanFilter <index.html#simdkalman.KalmanFilter>`_ class,
all inputs must be numpy arrays. However, their dimensions can flexibly vary
from 1 to 3 as long as they are reasonable from the point of view of matrix
multiplication and numpy broadcasting rules. Matrix operations are applied on
the *last* two axes of the arrays.
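
A minimal sketch (shapes chosen for illustration)::

    import numpy as np
    from simdkalman.primitives import predict

    mean = np.zeros((100, 2, 1))                  # 100 independent 2-d states
    covariance = np.tile(np.eye(2), (100, 1, 1))  # one 2x2 covariance each
    A = np.eye(2)                                 # shared state transition
    Q = 0.01 * np.eye(2)                          # shared process noise
    prior_mean, prior_cov = predict(mean, covariance, A, Q)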
"""
import numpy as np
from functools import wraps
# work around some numpy glitches associated with different versions
from numpy.lib import NumpyVersion
_HAVE_MATMUL = NumpyVersion(np.__version__) >= '1.10.0'
_EINSUM_OPTS = {}
if NumpyVersion(np.__version__) == '1.14.0':
# https://github.com/numpy/numpy/issues/10343
_EINSUM_OPTS = { 'optimize': False }
def ddot(A, B):
"Matrix multiplication over last two axes"
if _HAVE_MATMUL:
return np.matmul(A, B)
else:
return np.einsum('...ij,...jk->...ik', A, B)
def ddot_t_right(A, B):
"Matrix multiplication over last 2 axes with right operand transposed"
return np.einsum('...ij,...kj->...ik', A, B, **_EINSUM_OPTS)
def douter(a, b):
"Outer product, last two axes"
return a * b.transpose((0,2,1))
def dinv(A):
"Matrix inverse applied to last two axes"
return np.linalg.inv(A)
def autoshape(func):
"Automatically shape arguments and return values"
def to_3d_array(v):
if len(v.shape) == 1:
return v[np.newaxis,:,np.newaxis]
elif len(v.shape) == 2:
return v[np.newaxis,...]
else:
return v
@wraps(func)
def reshaped_func(*args, **kwargs):
any_tensor = any([len(x.shape) > 2 for x in args])
outputs = func(*[to_3d_array(a) for a in args], **kwargs)
if not any_tensor:
outputs = [mat[0,...] for mat in outputs]
return outputs
return reshaped_func
@autoshape
def predict(mean, covariance, state_transition, process_noise):
"""
Kalman filter prediction step
:param mean: :math:`{\\mathbb E}[x_{j-1}]`,
the filtered mean from the previous step
:param covariance: :math:`{\\rm Cov}[x_{j-1}]`,
the filtered covariance from the previous step
:param state_transition: matrix :math:`A`
:param process_noise: matrix :math:`Q`
:rtype: ``(prior_mean, prior_cov)`` predicted mean and covariance
:math:`{\\mathbb E}[x_j]`, :math:`{\\rm Cov}[x_j]`
"""
n = mean.shape[1]
assert(covariance.shape[-2:] == (n,n))
assert(process_noise.shape[-2:] == (n,n))
assert(state_transition.shape[-2:] == (n,n))
# mp = A * m
prior_mean = ddot(state_transition, mean)
# Pp = A * P * A.t + Q
prior_cov = ddot(state_transition, ddot_t_right(covariance, state_transition)) + process_noise
return prior_mean, prior_cov
@autoshape
def _update(prior_mean, prior_covariance, observation_model, observation_noise, measurement, log_likelihood=False):
n = prior_mean.shape[1]
m = observation_model.shape[1]
assert(measurement.shape[-2:] == (m,1))
assert(prior_covariance.shape[-2:] == (n,n))
assert(observation_model.shape[-2:] == (m,n))
assert(observation_noise.shape[-2:] == (m,m))
# y - H * mp
v = measurement - ddot(observation_model, prior_mean)
# H * Pp * H.t + R
S = ddot(observation_model, ddot_t_right(prior_covariance, observation_model)) + observation_noise
invS = dinv(S)
# Kalman gain: Pp * H.t * invS
K = ddot(ddot_t_right(prior_covariance, observation_model), invS)
# K * v + mp
posterior_mean = ddot(K, v) + prior_mean
# Pp - K * H * Pp
posterior_covariance = prior_covariance - ddot(K, ddot(observation_model, prior_covariance))
# inv-chi2 test var
# outlier_test = np.sum(v * ddot(invS, v), axis=0)
if log_likelihood:
l = np.ravel(ddot(v.transpose((0,2,1)), ddot(invS, v)))
l += np.log(np.linalg.det(S))
l *= -0.5
return posterior_mean, posterior_covariance, K, l
else:
return posterior_mean, posterior_covariance, K
def update(prior_mean, prior_covariance, observation_model, observation_noise, measurement):
"""
Kalman filter update step
:param prior_mean: :math:`{\\mathbb E}[x_j|y_1,\\ldots,y_{j-1}]`,
the prior mean of :math:`x_j`
:param prior_covariance: :math:`{\\rm Cov}[x_j|y_1,\\ldots,y_{j-1}]`,
the prior covariance of :math:`x_j`
:param observation_model: matrix :math:`H`
:param observation_noise: matrix :math:`R`
:param measurement: observation :math:`y_j`
:rtype: ``(posterior_mean, posterior_covariance)``
posterior mean and covariance
:math:`{\\mathbb E}[x_j|y_1,\\ldots,y_j]`,
:math:`{\\rm Cov}[x_j|y_1,\\ldots,y_j]`
after observing :math:`y_j`
"""
return _update(prior_mean, prior_covariance, observation_model, observation_noise, measurement)[:2]
@autoshape
def priv_smooth(posterior_mean, posterior_covariance, state_transition, process_noise, next_smooth_mean, next_smooth_covariance):
n = posterior_mean.shape[1]
assert(posterior_covariance.shape[-2:] == (n,n))
assert(process_noise.shape[-2:] == (n,n))
assert(state_transition.shape[-2:] == (n,n))
assert(next_smooth_mean.shape == posterior_mean.shape)
assert(next_smooth_covariance.shape == posterior_covariance.shape)
# re-predict a priori estimates for the next state
# A * m
mp = ddot(state_transition, posterior_mean)
# A * P * A.t + Q
Pp = ddot(state_transition, ddot_t_right(posterior_covariance, state_transition)) + process_noise
# Kalman smoothing gain: P * A.t * inv(Pp)
C = ddot(ddot_t_right(posterior_covariance, state_transition), dinv(Pp))
# m + C * (ms - mp)
smooth_mean = posterior_mean + ddot(C, next_smooth_mean - mp)
# P + C * (Ps - Pp) * C.t
smooth_covariance = posterior_covariance + ddot(C, ddot_t_right(next_smooth_covariance - Pp, C))
return smooth_mean, smooth_covariance, C
def smooth(posterior_mean, posterior_covariance, state_transition, process_noise, next_smooth_mean, next_smooth_covariance):
"""
Kalman smoother backwards step
:param posterior_mean: :math:`{\\mathbb E}[x_j|y_1,\\ldots,y_j]`,
the filtered mean of :math:`x_j`
:param posterior_covariance: :math:`{\\rm Cov}[x_j|y_1,\\ldots,y_j]`,
the filtered covariance of :math:`x_j`
:param state_transition: matrix :math:`A`
:param process_noise: matrix :math:`Q`
:param next_smooth_mean:
:math:`{\\mathbb E}[x_{j+1}|y_1,\\ldots,y_T]`
:param next_smooth_covariance:
:math:`{\\rm Cov}[x_{j+1}|y_1,\\ldots,y_T]`
:rtype: ``(smooth_mean, smooth_covariance, smoothing_gain)``
smoothed mean :math:`{\\mathbb E}[x_j|y_1,\\ldots,y_T]`,
and covariance :math:`{\\rm Cov}[x_j|y_1,\\ldots,y_T]`
"""
return priv_smooth(posterior_mean, posterior_covariance, state_transition, process_noise, next_smooth_mean, next_smooth_covariance)[:2]
@autoshape
def predict_observation(mean, covariance, observation_model, observation_noise):
"""
Compute probability distribution of the observation :math:`y`, given
the distribution of :math:`x`.
:param mean: :math:`{\\mathbb E}[x]`
:param covariance: :math:`{\\rm Cov}[x]`
:param observation_model: matrix :math:`H`
:param observation_noise: matrix :math:`R`
:rtype: mean :math:`{\\mathbb E}[y]` and covariance :math:`{\\rm Cov}[y]`
"""
n = mean.shape[1]
m = observation_model.shape[1]
assert(observation_model.shape[-2:] == (m,n))
assert(covariance.shape[-2:] == (n,n))
# H * m
obs_mean = ddot(observation_model, mean)
# H * P * H^T + R
obs_cov = ddot(observation_model,
ddot_t_right(covariance, observation_model)) + observation_noise
return obs_mean, obs_cov
@autoshape
def priv_update_with_nan_check(
prior_mean,
prior_covariance,
observation_model,
observation_noise,
measurement,
log_likelihood=False):
tup = _update(
prior_mean,
prior_covariance,
observation_model,
observation_noise,
measurement,
log_likelihood=log_likelihood)
m1, P1, K = tup[:3]
is_nan = np.ravel(np.any(np.isnan(m1), axis=1))
m1[is_nan,...] = prior_mean[is_nan,...]
P1[is_nan,...] = prior_covariance[is_nan,...]
K[is_nan,...] = 0
if log_likelihood:
l = tup[-1]
l[is_nan] = 0
return m1, P1, K, l
else:
return m1, P1, K
def update_with_nan_check(
prior_mean,
prior_covariance,
observation_model,
observation_noise,
measurement):
"""
Kalman filter update with a check for NaN observations. Like ``update`` but
returns ``(prior_mean, prior_covariance)`` if ``measurement`` is NaN
"""
return priv_update_with_nan_check(
prior_mean,
prior_covariance,
observation_model,
observation_noise,
measurement)[:2]
def ensure_matrix(x, dim=1):
# pylint: disable=W0702,W0104,E1136
try:
y = np.array(x)
y.shape[0]
x = y
except:
x = np.eye(dim)*x
return x
| 32.919298
| 139
| 0.651567
|
4a126006b69c70d7780a310de46c0c2e0a0495ba
| 1,976
|
py
|
Python
|
mmocr/utils/model.py
|
yuexy/mmocr
|
82488024db159266e66ea6b0d6f84a5a18e87362
|
[
"Apache-2.0"
] | 2,261
|
2021-04-08T03:45:41.000Z
|
2022-03-31T23:37:46.000Z
|
mmocr/utils/model.py
|
yuexy/mmocr
|
82488024db159266e66ea6b0d6f84a5a18e87362
|
[
"Apache-2.0"
] | 789
|
2021-04-08T05:40:13.000Z
|
2022-03-31T09:42:39.000Z
|
mmocr/utils/model.py
|
yuexy/mmocr
|
82488024db159266e66ea6b0d6f84a5a18e87362
|
[
"Apache-2.0"
] | 432
|
2021-04-08T03:56:16.000Z
|
2022-03-30T18:44:43.000Z
|
# Copyright (c) OpenMMLab. All rights reserved.
import torch
class _BatchNormXd(torch.nn.modules.batchnorm._BatchNorm):
"""A general BatchNorm layer without input dimension check.
Reproduced from @kapily's work:
(https://github.com/pytorch/pytorch/issues/41081#issuecomment-783961547)
The only difference between BatchNorm1d, BatchNorm2d, BatchNorm3d, etc
is `_check_input_dim` that is designed for tensor sanity checks.
The check has been bypassed in this class for the convenience of converting
SyncBatchNorm.
"""
def _check_input_dim(self, input):
return
def revert_sync_batchnorm(module):
"""Helper function to convert all `SyncBatchNorm` layers in the model to
`BatchNormXd` layers.
Adapted from @kapily's work:
(https://github.com/pytorch/pytorch/issues/41081#issuecomment-783961547)
Args:
module (nn.Module): The module containing `SyncBatchNorm` layers.
Returns:
module_output: The converted module with `BatchNormXd` layers.
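
Example:
    >>> sync_bn = torch.nn.SyncBatchNorm(8)
    >>> bn = revert_sync_batchnorm(sync_bn)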
"""
module_output = module
if isinstance(module, torch.nn.modules.batchnorm.SyncBatchNorm):
module_output = _BatchNormXd(module.num_features, module.eps,
module.momentum, module.affine,
module.track_running_stats)
if module.affine:
with torch.no_grad():
module_output.weight = module.weight
module_output.bias = module.bias
module_output.running_mean = module.running_mean
module_output.running_var = module.running_var
module_output.num_batches_tracked = module.num_batches_tracked
module_output.training = module.training
if hasattr(module, 'qconfig'):
module_output.qconfig = module.qconfig
for name, child in module.named_children():
module_output.add_module(name, revert_sync_batchnorm(child))
del module
return module_output
| 38
| 79
| 0.690283
|
4a12626a242a94a98adbcccc61c17949a824194b
| 2,626
|
py
|
Python
|
h2o-py/tests/testdir_algos/automl/pyunit_automl_params_attributes.py
|
JannisBush/h2o-3
|
30aa2a86e6bfa1febb5f95f3cb43811337895f7f
|
[
"Apache-2.0"
] | null | null | null |
h2o-py/tests/testdir_algos/automl/pyunit_automl_params_attributes.py
|
JannisBush/h2o-3
|
30aa2a86e6bfa1febb5f95f3cb43811337895f7f
|
[
"Apache-2.0"
] | null | null | null |
h2o-py/tests/testdir_algos/automl/pyunit_automl_params_attributes.py
|
JannisBush/h2o-3
|
30aa2a86e6bfa1febb5f95f3cb43811337895f7f
|
[
"Apache-2.0"
] | 1
|
2021-09-09T03:47:11.000Z
|
2021-09-09T03:47:11.000Z
|
import os
import sys
sys.path.insert(1, os.path.join("..","..",".."))
import h2o
from h2o.automl import H2OAutoML
from h2o.exceptions import H2OValueError
from tests import pyunit_utils as pu
def import_dataset(seed=0):
df = h2o.import_file(path=pu.locate("smalldata/prostate/prostate.csv"))
target = "CAPSULE"
df[target] = df[target].asfactor()
fr = df.split_frame(ratios=[.8,.1], seed=seed)
return pu.ns(train=fr[0], valid=fr[1], test=fr[2], target=target, target_idx=1)
def test_params_can_be_set_as_attributes():
aml = H2OAutoML()
aml.max_models = 4
aml.seed = 42
aml.nfolds = 0
ds = import_dataset()
aml.train(y=ds.target, training_frame=ds.train, validation_frame=ds.valid)
assert aml.leaderboard.nrows == aml.max_models == 4
assert aml.project_name is not None
def test_params_are_validated_in_setter():
aml = H2OAutoML()
try:
aml.nfolds = 1
assert False, "should have raised"
except AssertionError as e:
assert aml.nfolds == 5, "nfolds should have remained to default value"
assert "nfolds set to 1; use nfolds >=2 if you want cross-validated metrics and Stacked Ensembles or use nfolds = 0 to disable." == str(e)
aml.nfolds = 3
assert aml.nfolds == 3
def test_non_train_params_are_frozen_after_first_train():
aml = H2OAutoML(max_models=2, nfolds=3, seed=42, keep_cross_validation_predictions=True)
ds = import_dataset()
aml.train(y=ds.target, training_frame=ds.train, validation_frame=ds.valid)
assert aml.leaderboard.nrows == aml.max_models+1 == 3 # only 1 SE as we have only one type of models
assert aml.leaderboard.columns[1] == 'auc'
try:
aml.nfolds = 0
assert False, "should have raised"
except H2OValueError as e:
assert "Param ``nfolds`` can not be modified after the first call to ``train``." == str(e)
assert aml.nfolds == 3
try:
aml.seed = 24
assert False, "should have raised"
except H2OValueError as e:
assert "Param ``seed`` can not be modified after the first call to ``train``." == str(e)
assert aml.seed == 42
assert aml.sort_metric == 'AUTO'
aml.sort_metric = 'logloss'
aml.train(y=ds.target, training_frame=ds.train, validation_frame=ds.valid)
print(aml.leaderboard)
assert aml.leaderboard.nrows == (aml.max_models+1)*2 == 6
assert aml.leaderboard.columns[1] == 'logloss'
pu.run_tests([
test_params_can_be_set_as_attributes,
test_params_are_validated_in_setter,
test_non_train_params_are_frozen_after_first_train,
])
| 33.666667
| 146
| 0.680503
|
4a1262f58bd2b0f5a8f5d89f03facccc8ccdc5b1
| 176
|
py
|
Python
|
openiPrototype/openiPrototype/APIS/Products_and_Services/Card/admin.py
|
OPENi-ict/ntua_demo
|
104118fbe1f54db35386ca96286317ceb64cb658
|
[
"Apache-2.0"
] | null | null | null |
openiPrototype/openiPrototype/APIS/Products_and_Services/Card/admin.py
|
OPENi-ict/ntua_demo
|
104118fbe1f54db35386ca96286317ceb64cb658
|
[
"Apache-2.0"
] | null | null | null |
openiPrototype/openiPrototype/APIS/Products_and_Services/Card/admin.py
|
OPENi-ict/ntua_demo
|
104118fbe1f54db35386ca96286317ceb64cb658
|
[
"Apache-2.0"
] | null | null | null |
__author__ = 'mpetyx'
from django.contrib import admin
from .models import OpeniCard
class CardAdmin(admin.ModelAdmin):
pass
admin.site.register(OpeniCard, CardAdmin)
| 14.666667
| 41
| 0.778409
|
4a12630acaf39d35ccbaf6b5b990dbc644982cd0
| 698
|
py
|
Python
|
Demo/util.py
|
matthagy/rtchemstats
|
4233020779b5fd98ef2bf5908cbc787bc762030b
|
[
"Apache-2.0"
] | 2
|
2020-08-24T19:51:18.000Z
|
2020-08-24T20:18:27.000Z
|
Demo/util.py
|
matthagy/rtchemstats
|
4233020779b5fd98ef2bf5908cbc787bc762030b
|
[
"Apache-2.0"
] | null | null | null |
Demo/util.py
|
matthagy/rtchemstats
|
4233020779b5fd98ef2bf5908cbc787bc762030b
|
[
"Apache-2.0"
] | null | null | null |
from pyljfluid.components import LJForceField, Config, MDSimulator
N = 1000
forcefield = LJForceField(sigma=1.0, epsilon=1.0, r_cutoff=2.5)
r_neighbor_skin = 1.0 * forcefield.sigma
mass = 48 * forcefield.epsilon * forcefield.sigma ** -2
dt = 0.032
def get_equilibrated_simulation(rho, T):
config0 = Config.create(N=N, rho=rho, dt=dt, sigma=forcefield.sigma, T=T, mass=mass)
sim = MDSimulator(config0, forcefield, mass=mass, r_skin=r_neighbor_skin)
for i in range(500):
if not i%10:
sim.config.randomize_velocities(T=T, mass=mass)
sim.cycle(50)
U = sim.evaluate_potential()
print('equilibrate cycle i=%03d U=%.3f' % (i, U))
return sim
| 33.238095
| 88
| 0.681948
|
4a1263155473ea49f5444937843ea92ec649e264
| 806
|
py
|
Python
|
excel_sheet_column_title.py
|
fossilet/leetcode
|
4cf787c74fc339dc6aee6a0b633ca15b38ac18a1
|
[
"MIT"
] | 5
|
2015-12-10T14:19:02.000Z
|
2021-07-02T01:23:34.000Z
|
excel_sheet_column_title.py
|
fossilet/leetcode
|
4cf787c74fc339dc6aee6a0b633ca15b38ac18a1
|
[
"MIT"
] | null | null | null |
excel_sheet_column_title.py
|
fossilet/leetcode
|
4cf787c74fc339dc6aee6a0b633ca15b38ac18a1
|
[
"MIT"
] | 1
|
2015-10-01T01:43:14.000Z
|
2015-10-01T01:43:14.000Z
|
"""
https://oj.leetcode.com/problems/excel-sheet-column-title/
Given a positive integer, return its corresponding column title as appear in an Excel sheet.
For example:
1 -> A
2 -> B
3 -> C
...
26 -> Z
27 -> AA
28 -> AB
"""
class Solution:
# @return a string
def convertToTitle(self, num):
l = []
a = ord('A')
while num != 0:
num, r = divmod(num, 26)
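# bijective base-26 has no zero digit: a remainder of 0 means the
# current digit is 'Z' and we borrow 1 from the quotient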
if r != 0:
l.insert(0, chr(r + a - 1))
else:
l.insert(0, 'Z')
num -= 1
return ''.join(l)
if __name__ == '__main__':
s = Solution()
assert s.convertToTitle(1) == 'A'
assert s.convertToTitle(26) == 'Z'
assert s.convertToTitle(27) == 'AA'
assert s.convertToTitle(1381) == 'BAC'
| 20.15
| 92
| 0.5
|
4a1263831d4105dfe997b5e956f58093160127cb
| 3,770
|
py
|
Python
|
SynthesiaKontrol-MK1.py
|
a1vv/KompleteKontrolLightGuide
|
8b0953a1067a99d55c719a878ffc0a4e58fc334f
|
[
"MIT"
] | 3
|
2019-01-20T22:38:21.000Z
|
2021-07-20T08:48:58.000Z
|
SynthesiaKontrol-MK1.py
|
a1vv/KompleteKontrolLightGuide
|
8b0953a1067a99d55c719a878ffc0a4e58fc334f
|
[
"MIT"
] | null | null | null |
SynthesiaKontrol-MK1.py
|
a1vv/KompleteKontrolLightGuide
|
8b0953a1067a99d55c719a878ffc0a4e58fc334f
|
[
"MIT"
] | 2
|
2019-02-13T15:57:01.000Z
|
2020-04-02T12:17:17.000Z
|
# The MIT License
# Modified to work with MK1 keyboards
import hid
import mido
from msvcrt import getch
numkeys = 88 #change this to the number of keys on your keyboard
offset = -(108-numkeys+1)
pid = 0x1410 #change this to the product id of your keyboard
def init():
"""Connect to the keyboard, switch all lights off"""
global bufferC # Buffer with the full key/lights mapping
global device
device=hid.device()
# 0x17cc: Native Instruments. 0x1410: KK S88 MK1
device.open(0x17cc, pid)
device.write([0xa0])
bufferC = [0x00] * numkeys
notes_off()
return True
def notes_off():
"""Turn off lights for all notes"""
bufferC = [0x00] * numkeys
device.write([0x82] + bufferC)
def accept_notes(port):
"""Only let note_on and note_off messages through."""
for message in port:
if message.type in ('note_on', 'note_off'):
yield message
if message.type == 'control_change' and message.channel == 0 and message.control == 16:
if (message.value & 4):
print ("User is playing")
if (message.value & 1):
print ("Playing Right Hand")
if (message.value & 2):
print ("Playing Left Hand")
notes_off()
def LightNote(note, status, channel, velocity):
"""Light a note ON or OFF"""
key = (note + offset)
if key < 0 or key >= numkeys:
return
# Determine color
left = [0x00, 0x00, 0xFF] # Blue
left_thumb = [0x00, 0x00, 0x80] # Lighter Blue
right = [0x00, 0xFF, 0x00] # Green
right_thumb = [0x00, 0x80, 0x00] # Lighter Green
default = right
color = default
# Finger based channel protocol from Synthesia
# Reference: https://www.synthesiagame.com/forum/viewtopic.php?p=43585#p43585
if channel == 0:
# we don't know who or what this note belongs to, but light something up anyway
color = default
if channel >= 1 and channel <= 5:
# left hand fingers, thumb through pinky
if channel == 1:
color = left_thumb
else:
color = left
if channel >= 6 and channel <= 10:
# right hand fingers, thumb through pinky
if channel == 6:
color = right_thumb
else:
color = right
if channel == 11:
# left hand, unknown finger
color = left
if channel == 12:
# right hand, unknown finger
color = right
black = [0x00] * 3
if status == 'note_on' :
colors[3*key:3*key+3]=color #set the three color values of the key to the desired color
if status == 'note_off' :
colors[3*key:3*key+3]=black #set the key back to black
device.write([0x82] + colors) #changes the color of pressed key
if __name__ == '__main__':
"""Main: connect to keyboard, open midi input port, listen to midi"""
print ("Connecting to Komplete Kontrol Keyboard")
connected = init()
if connected:
print ("Opening LoopBe input port")
ports = mido.get_input_names()
for port in ports:
if "LoopBe" in port:
portName = port
print ("Listening to Midi")
with mido.open_input(portName) as midiPort:
black = [0x00] * 3 #color, R + G + B (in this case black)
colors = black * numkeys #sets the color to all 88 keys (when it gets written to kontrol)
for message in accept_notes(midiPort):
print('Received {}'.format(message))
LightNote(message.note, message.type, message.channel, message.velocity)
| 34.272727
| 109
| 0.577188
|
4a12638d1a91c76a3a86d3012fe89033695be65a
| 6,784
|
py
|
Python
|
src/toil/server/app.py
|
BD2KGenomics/slugflow
|
ec83920e1636fd24814688bf1569feebfae73620
|
[
"Apache-2.0"
] | 516
|
2015-07-30T19:08:55.000Z
|
2018-07-03T20:53:42.000Z
|
src/toil/server/app.py
|
BD2KGenomics/toil
|
88d73fbfd42ec22fd692940fc1efdacaf53b1b76
|
[
"Apache-2.0"
] | 1,949
|
2015-07-29T23:38:49.000Z
|
2018-07-05T12:42:04.000Z
|
src/toil/server/app.py
|
BD2KGenomics/slugflow
|
ec83920e1636fd24814688bf1569feebfae73620
|
[
"Apache-2.0"
] | 193
|
2015-07-31T18:52:57.000Z
|
2018-07-05T08:54:11.000Z
|
# Copyright (C) 2015-2021 Regents of the University of California
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import argparse
import logging
import os
from typing import Type
import connexion # type: ignore
from toil.lib.misc import get_public_ip
from toil.server.wes.toil_backend import ToilBackend
from toil.server.wsgi_app import run_app
from toil.version import version
from toil.lib.aws import running_on_ec2, running_on_ecs, get_current_aws_region
logger = logging.getLogger(__name__)
def parser_with_server_options() -> argparse.ArgumentParser:
parser = argparse.ArgumentParser(description="Toil server mode.")
parser.add_argument("--debug", action="store_true", default=False)
parser.add_argument("--bypass_celery", action="store_true", default=False,
help="Skip sending workflows to Celery and just run them under the"
"server. For testing.")
parser.add_argument("--host", type=str, default="127.0.0.1",
help="The host interface that the Toil server binds on. (default: '127.0.0.1').")
parser.add_argument("--port", type=int, default=8080,
help="The port that the Toil server listens on. (default: 8080).")
parser.add_argument("--swagger_ui", action="store_true", default=False,
help="If True, the swagger UI will be enabled and hosted on the "
"`{api_base_path}/ui` endpoint. (default: False)")
# CORS
parser.add_argument("--cors", action="store_true", default=False,
help="Enable Cross Origin Resource Sharing (CORS). This should only be turned on "
"if the server is intended to be used by a website or domain.")
parser.add_argument("--cors_origins", type=str, default="*",
help="Ignored if --cors is False. This sets the allowed origins for CORS. "
"For details about CORS and its security risks, see: "
"https://w3id.org/ga4gh/product-approval-support/cors. (default: '*')")
# production only
parser.add_argument("-w", "--workers", dest='workers', type=int, default=2,
help="Ignored if --debug is True. The number of worker processes launched by the "
"WSGI server. (default: 2).")
parser.add_argument("--work_dir", type=str, default=os.path.join(os.getcwd(), "workflows"),
help="The directory where workflows should be stored. This directory should be "
"empty or only contain previous workflows. (default: './workflows').")
parser.add_argument("--state_store", type=str, default=None,
help="The local path or S3 URL where workflow state metadata should be stored. "
"(default: in --work_dir)")
parser.add_argument("--opt", "-o", type=str, action="append", default=[],
help="Specify the default parameters to be sent to the workflow engine for each "
"run. Options taking arguments must use = syntax. Accepts multiple values.\n"
"Example: '--opt=--logLevel=CRITICAL --opt=--workDir=/tmp'.")
parser.add_argument("--dest_bucket_base", type=str, default=None,
help="Direct CWL workflows to save output files to dynamically generated "
"unique paths under the given URL. Supports AWS S3.")
parser.add_argument("--version", action='version', version=version)
return parser
def create_app(args: argparse.Namespace) -> "connexion.FlaskApp":
"""
Create a "connexion.FlaskApp" instance with Toil server configurations.
"""
flask_app = connexion.FlaskApp(__name__,
specification_dir='api_spec/',
options={"swagger_ui": args.swagger_ui})
flask_app.app.config['JSON_SORT_KEYS'] = False
if args.cors:
# enable cross origin resource sharing
from flask_cors import CORS # type: ignore
CORS(flask_app.app, resources={r"/ga4gh/*": {"origins": args.cors_origins}})
# add workflow execution service (WES) API endpoints
backend = ToilBackend(work_dir=args.work_dir,
state_store=args.state_store,
options=args.opt,
dest_bucket_base=args.dest_bucket_base,
bypass_celery=args.bypass_celery)
flask_app.add_api('workflow_execution_service.swagger.yaml',
resolver=connexion.Resolver(backend.resolve_operation_id)) # noqa
# add custom endpoints
if isinstance(backend, ToilBackend):
# We extend the WES API to allow presenting log data
base_url = "/toil/wes/v1"
flask_app.app.add_url_rule(f"{base_url}/logs/<run_id>/stdout", view_func=backend.get_stdout)
flask_app.app.add_url_rule(f"{base_url}/logs/<run_id>/stderr", view_func=backend.get_stderr)
# To be a well-behaved AGC engine we can implement the default status check endpoint
flask_app.app.add_url_rule("/engine/v1/status", view_func=backend.get_health)
# And we can provide lost humans some information on what they are looking at
flask_app.app.add_url_rule("/", view_func=backend.get_homepage)
return flask_app
def start_server(args: argparse.Namespace) -> None:
""" Start a Toil server."""
# Explain a bit about who and where we are
logger.info("Toil WES server version %s starting...", version)
if running_on_ecs():
logger.info("Environment appears to be Amazon ECS")
if running_on_ec2():
logger.info("Environment appears to be Amazon EC2")
aws_region = get_current_aws_region()
if aws_region:
logger.info("AWS region appears to be: %s", aws_region)
flask_app = create_app(args)
host = args.host
port = args.port
if args.debug:
flask_app.run(host=host, port=port)
else:
# start a production WSGI server
run_app(flask_app.app, options={
"bind": f"{host}:{port}",
"workers": args.workers,
})
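# Hedged usage sketch (not part of the original module): one way a small
# launcher script might wire the pieces above together. The flag values are
# illustrative, not recommendations.
def _example_launch() -> None:
    """Illustrative only: parse server options and start a local debug server."""
    parser = parser_with_server_options()
    args = parser.parse_args(["--debug", "--port", "8080", "--bypass_celery"])
    start_server(args)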
| 48.457143
| 106
| 0.638267
|
4a1263cbf7666c292320f8a80580ae8c05546c8b
| 6,870
|
py
|
Python
|
google/cloud/asset_v1p4beta1/services/asset_service/transports/base.py
|
renovate-bot/python-asset
|
8222f9c247e6a27dc5b74452866c01cd11f86f8c
|
[
"Apache-2.0"
] | 24
|
2020-04-24T04:15:30.000Z
|
2022-03-23T12:24:00.000Z
|
google/cloud/asset_v1p4beta1/services/asset_service/transports/base.py
|
renovate-bot/python-asset
|
8222f9c247e6a27dc5b74452866c01cd11f86f8c
|
[
"Apache-2.0"
] | 115
|
2020-02-07T03:12:01.000Z
|
2022-03-07T16:39:29.000Z
|
google/cloud/asset_v1p4beta1/services/asset_service/transports/base.py
|
renovate-bot/python-asset
|
8222f9c247e6a27dc5b74452866c01cd11f86f8c
|
[
"Apache-2.0"
] | 21
|
2020-02-08T04:14:50.000Z
|
2022-01-29T08:07:32.000Z
|
# -*- coding: utf-8 -*-
# Copyright 2020 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
import abc
from typing import Awaitable, Callable, Dict, Optional, Sequence, Union
import pkg_resources
import google.auth # type: ignore
import google.api_core
from google.api_core import exceptions as core_exceptions
from google.api_core import gapic_v1
from google.api_core import retry as retries
from google.api_core import operations_v1
from google.auth import credentials as ga_credentials # type: ignore
from google.oauth2 import service_account # type: ignore
from google.cloud.asset_v1p4beta1.types import asset_service
from google.longrunning import operations_pb2 # type: ignore
try:
DEFAULT_CLIENT_INFO = gapic_v1.client_info.ClientInfo(
gapic_version=pkg_resources.get_distribution("google-cloud-asset",).version,
)
except pkg_resources.DistributionNotFound:
DEFAULT_CLIENT_INFO = gapic_v1.client_info.ClientInfo()
class AssetServiceTransport(abc.ABC):
"""Abstract transport class for AssetService."""
AUTH_SCOPES = ("https://www.googleapis.com/auth/cloud-platform",)
DEFAULT_HOST: str = "cloudasset.googleapis.com"
def __init__(
self,
*,
host: str = DEFAULT_HOST,
credentials: ga_credentials.Credentials = None,
credentials_file: Optional[str] = None,
scopes: Optional[Sequence[str]] = None,
quota_project_id: Optional[str] = None,
client_info: gapic_v1.client_info.ClientInfo = DEFAULT_CLIENT_INFO,
always_use_jwt_access: Optional[bool] = False,
**kwargs,
) -> None:
"""Instantiate the transport.
Args:
host (Optional[str]):
The hostname to connect to.
credentials (Optional[google.auth.credentials.Credentials]): The
authorization credentials to attach to requests. These
credentials identify the application to the service; if none
are specified, the client will attempt to ascertain the
credentials from the environment.
credentials_file (Optional[str]): A file with credentials that can
be loaded with :func:`google.auth.load_credentials_from_file`.
This argument is mutually exclusive with credentials.
scopes (Optional[Sequence[str]]): A list of scopes.
quota_project_id (Optional[str]): An optional project to use for billing
and quota.
client_info (google.api_core.gapic_v1.client_info.ClientInfo):
The client info used to send a user-agent string along with
API requests. If ``None``, then default info will be used.
Generally, you only need to set this if you're developing
your own client library.
always_use_jwt_access (Optional[bool]): Whether self signed JWT should
be used for service account credentials.
"""
# Save the hostname. Default to port 443 (HTTPS) if none is specified.
if ":" not in host:
host += ":443"
self._host = host
scopes_kwargs = {"scopes": scopes, "default_scopes": self.AUTH_SCOPES}
# Save the scopes.
self._scopes = scopes
# If no credentials are provided, then determine the appropriate
# defaults.
if credentials and credentials_file:
raise core_exceptions.DuplicateCredentialArgs(
"'credentials_file' and 'credentials' are mutually exclusive"
)
if credentials_file is not None:
credentials, _ = google.auth.load_credentials_from_file(
credentials_file, **scopes_kwargs, quota_project_id=quota_project_id
)
elif credentials is None:
credentials, _ = google.auth.default(
**scopes_kwargs, quota_project_id=quota_project_id
)
# If the credentials are service account credentials, then always try to use self signed JWT.
if (
always_use_jwt_access
and isinstance(credentials, service_account.Credentials)
and hasattr(service_account.Credentials, "with_always_use_jwt_access")
):
credentials = credentials.with_always_use_jwt_access(True)
# Save the credentials.
self._credentials = credentials
def _prep_wrapped_messages(self, client_info):
# Precompute the wrapped methods.
self._wrapped_methods = {
self.analyze_iam_policy: gapic_v1.method.wrap_method(
self.analyze_iam_policy,
default_retry=retries.Retry(
initial=0.1,
maximum=60.0,
multiplier=1.3,
predicate=retries.if_exception_type(
core_exceptions.ServiceUnavailable,
),
deadline=300.0,
),
default_timeout=300.0,
client_info=client_info,
),
self.export_iam_policy_analysis: gapic_v1.method.wrap_method(
self.export_iam_policy_analysis,
default_timeout=60.0,
client_info=client_info,
),
}
def close(self):
"""Closes resources associated with the transport.
.. warning::
Only call this method if the transport is NOT shared
with other clients - this may cause errors in other clients!
"""
raise NotImplementedError()
@property
def operations_client(self):
"""Return the client designed to process long-running operations."""
raise NotImplementedError()
@property
def analyze_iam_policy(
self,
) -> Callable[
[asset_service.AnalyzeIamPolicyRequest],
Union[
asset_service.AnalyzeIamPolicyResponse,
Awaitable[asset_service.AnalyzeIamPolicyResponse],
],
]:
raise NotImplementedError()
@property
def export_iam_policy_analysis(
self,
) -> Callable[
[asset_service.ExportIamPolicyAnalysisRequest],
Union[operations_pb2.Operation, Awaitable[operations_pb2.Operation]],
]:
raise NotImplementedError()
__all__ = ("AssetServiceTransport",)
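# Hedged illustration (not part of the original file): the credential
# precedence implemented in AssetServiceTransport.__init__, restated as a
# standalone sketch. The inputs are hypothetical placeholders.
def _credentials_precedence(credentials=None, credentials_file=None) -> str:
    """Illustrative only: mirrors the branch order used in __init__ above."""
    if credentials and credentials_file:
        return "error: 'credentials' and 'credentials_file' are mutually exclusive"
    if credentials_file is not None:
        return "load credentials from the given file"
    if credentials is None:
        return "fall back to ambient default credentials (google.auth.default)"
    return "use the explicitly provided credentials"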
| 37.747253
| 101
| 0.648326
|
4a1263d3b37ff62b5c8cd7f9cb7b2bc2231e0cf7
| 565
|
py
|
Python
|
env/lib/python3.8/site-packages/plotly/validators/surface/contours/z/_highlightwidth.py
|
acrucetta/Chicago_COVI_WebApp
|
a37c9f492a20dcd625f8647067394617988de913
|
[
"MIT",
"Unlicense"
] | 76
|
2020-07-06T14:44:05.000Z
|
2022-02-14T15:30:21.000Z
|
env/lib/python3.8/site-packages/plotly/validators/surface/contours/z/_highlightwidth.py
|
acrucetta/Chicago_COVI_WebApp
|
a37c9f492a20dcd625f8647067394617988de913
|
[
"MIT",
"Unlicense"
] | 11
|
2020-08-09T02:30:14.000Z
|
2022-03-12T00:50:14.000Z
|
env/lib/python3.8/site-packages/plotly/validators/surface/contours/z/_highlightwidth.py
|
acrucetta/Chicago_COVI_WebApp
|
a37c9f492a20dcd625f8647067394617988de913
|
[
"MIT",
"Unlicense"
] | 11
|
2020-07-12T16:18:07.000Z
|
2022-02-05T16:48:35.000Z
|
import _plotly_utils.basevalidators
class HighlightwidthValidator(_plotly_utils.basevalidators.NumberValidator):
def __init__(
self, plotly_name="highlightwidth", parent_name="surface.contours.z", **kwargs
):
super(HighlightwidthValidator, self).__init__(
plotly_name=plotly_name,
parent_name=parent_name,
edit_type=kwargs.pop("edit_type", "calc"),
max=kwargs.pop("max", 16),
min=kwargs.pop("min", 1),
role=kwargs.pop("role", "style"),
**kwargs
)
| 33.235294
| 86
| 0.623009
|
4a12641bc56deb2b6e58e77e3fd7bc6b81dce2f9
| 327
|
py
|
Python
|
home/migrations/0034_rename_event_events1.py
|
ianshulx/egiportal
|
3a147a4a61e58f2a6229e08a5a4c256d7b674b81
|
[
"MIT"
] | 1
|
2022-02-17T10:34:21.000Z
|
2022-02-17T10:34:21.000Z
|
home/migrations/0034_rename_event_events1.py
|
ianshulx/egiportal
|
3a147a4a61e58f2a6229e08a5a4c256d7b674b81
|
[
"MIT"
] | null | null | null |
home/migrations/0034_rename_event_events1.py
|
ianshulx/egiportal
|
3a147a4a61e58f2a6229e08a5a4c256d7b674b81
|
[
"MIT"
] | null | null | null |
# Generated by Django 3.2.9 on 2021-12-26 08:04
from django.db import migrations
class Migration(migrations.Migration):
dependencies = [
('home', '0033_auto_20211226_1323'),
]
operations = [
migrations.RenameModel(
old_name='event',
new_name='events1',
),
]
| 18.166667
| 47
| 0.590214
|
4a1264d59397461ec6b3d2b85c32cf493d50f823
| 33,338
|
py
|
Python
|
LightPipes/core.py
|
jccmak/lightpipes
|
1a296fe08bdd97fc9a0e11f92bab25c85f68e57d
|
[
"BSD-3-Clause"
] | null | null | null |
LightPipes/core.py
|
jccmak/lightpipes
|
1a296fe08bdd97fc9a0e11f92bab25c85f68e57d
|
[
"BSD-3-Clause"
] | null | null | null |
LightPipes/core.py
|
jccmak/lightpipes
|
1a296fe08bdd97fc9a0e11f92bab25c85f68e57d
|
[
"BSD-3-Clause"
] | null | null | null |
# -*- coding: utf-8 -*-
import numpy as _np
from scipy.special import hermite, genlaguerre
from scipy.interpolate import RectBivariateSpline
from .misc import backward_compatible
USE_CV2 = False
if USE_CV2:
import cv2
USE_SKIMAGE = False
if USE_SKIMAGE:
from skimage.restoration import unwrap_phase as _unwrap_phase
else:
#used in PhaseUnwrap
# own implementation currently slower, but seems a little more stable
# with jumpy phases and of course removes dependency on the extra package
from .unwrap import unwrap_phase as _unwrap_phase
from .units import deg
from .field import Field
from .subs import Inv_Squares
def BeamMix(Fin1, Fin2):
"""
*Addition of the fields Fin1 and Fin2.*
:param Fin1: First field.
:type Fin1: Field
    :param Fin2: Second field.
    :type Fin2: Field
:return: output field (N x N square array of complex numbers).
:rtype: `LightPipes.field.Field`
:Example:
>>> F = BeamMix(F1 , F2)
.. seealso::
* :ref:`Manual: Splitting and mixing beams. <Splitting and mixing beams.>`
* :ref:`Examples: Young's experiment. <Young's experiment.>`
"""
if Fin1.field.shape != Fin2.field.shape:
raise ValueError('Field sizes do not match')
Fout = Field.copy(Fin1)
Fout.field += Fin2.field
return Fout
@backward_compatible
def CircAperture(Fin, R, x_shift = 0.0, y_shift = 0.0):
"""
*Inserts a circular aperture in the field.*
:param R: radius of the aperture
:type R: int, float
:param x_shift: shift in x direction (default = 0.0)
:param y_shift: shift in y direction (default = 0.0)
:type x_shift: int, float
:type y_shift: int, float
:param Fin: input field
:type Fin: Field
:return: output field (N x N square array of complex numbers).
:rtype: `LightPipes.field.Field`
:Example:
>>> F = CircAperture(F, 3*mm) # A 3 mm radius circular aperture in the center of the grid.
>>> # alternative notations:
>>> F = CircAperture(F, 3*mm, 0, -3*mm) # Shifted -3 mm in the y-direction.
>>> F = CircAperture(F, R = 3*mm, y_shift = -3*mm) # Idem
>>> F = CircAperture(3*mm, 0.0, -3*mm, F) # Idem, old order of arguments for backward compatibility.
.. seealso::
* :ref:`Manual: Apertures and screens<Apertures and screens.>`
* :ref:`Examples: Diffraction from a circular aperture.<Diffraction from a circular aperture.>`
"""
#from
#https://stackoverflow.com/questions/44865023/
# circular-masking-an-image-in-python-using-numpy-arrays
Fout = Field.copy(Fin)
Y, X = Fout.mgrid_cartesian
Y = Y - y_shift
X = X - x_shift
dist_sq = X**2 + Y**2 #squared, no need for sqrt
Fout.field[dist_sq > R**2] = 0.0
return Fout
@backward_compatible
def CircScreen(Fin, R, x_shift=0.0, y_shift=0.0):
"""
*Inserts a circular screen in the field.*
:param Fin: input field
:type Fin: Field
:param R: radius of the screen
:type R: int, float
:param x_shift: shift in x direction (default = 0.0)
:param y_shift: shift in y direction (default = 0.0)
:type x_shift: int, float
:type y_shift: int, float
:return: output field (N x N square array of complex numbers).
:rtype: `LightPipes.field.Field`
:Example:
>>> F = CircScreen(F, 3*mm) # A 3 mm radius circular screen in the center of the grid.
>>> # alternative notations:
>>> F = CircScreen(F, 3*mm, 0, -3*mm) # Shifted -3 mm in the y-direction.
>>> F = CircScreen(F, R = 3*mm, y_shift = -3*mm) # Idem
>>> F = CircScreen(3*mm, 0.0, -3*mm, F) # Idem, old order of arguments for backward compatibility.
.. seealso::
* :ref:`Manual: Apertures and screens<Apertures and screens.>`
* :ref:`Examples: Spot of Poisson <Spot of Poisson.>`
"""
#from
#https://stackoverflow.com/questions/44865023/
# circular-masking-an-image-in-python-using-numpy-arrays
Fout = Field.copy(Fin)
Y, X = Fout.mgrid_cartesian
Y = Y - y_shift
X = X - x_shift
dist_sq = X**2 + Y**2 #squared, no need for sqrt
Fout.field[dist_sq <= R**2] = 0.0
return Fout
@backward_compatible
def GaussAperture(Fin, w, x_shift = 0.0, y_shift = 0.0, T = 1.0, ):
"""
*Inserts an aperture with a Gaussian shape in the field.*
:math:`F_{out}(x,y)= \\sqrt{T}e^{ -\\frac{ x^{2}+y^{2} }{2w^{2}} } F_{in}(x,y)`
:param Fin: input field
:type Fin: Field
:param w: 1/e intensity width
:type w: int, float
:param x_shift: shift in x direction (default = 0.0)
:param y_shift: shift in y direction (default = 0.0)
:type x_shift: int, float
:type y_shift: int, float
:param T: center intensity transmission (default = 1.0)
:type T: int, float
:return: output field (N x N square array of complex numbers).
:rtype: `LightPipes.field.Field`
:Example:
>>> F = GaussAperture(Fin, w) # centered, T=1.0, width = w
>>> F = GaussAperture(Fin, w, T = 0.5) # idem, transmission = 0.5
>>> F = GaussAperture(Fin, w, T = 0.5, y_shift = -3 *mm) # idem, shifted in y direction
>>> F = GaussAperture(Fin, w, 0.0, -3.0*mm, 0.5) # idem
.. seealso::
* :ref:`Manual: Apertures and screens.<Apertures and screens.>`
"""
Fout = Field.copy(Fin)
Y, X = Fout.mgrid_cartesian
Y = Y - y_shift
X = X - x_shift
w2=w*w*2
SqrtT=_np.sqrt(T)
Fout.field*=SqrtT*_np.exp(-(X*X+Y*Y)/w2)
return Fout
def SuperGaussAperture(Fin, w, n = 2.0, x_shift = 0.0, y_shift = 0.0, T = 1.0 ):
"""
*Inserts an aperture with a super-Gaussian shape in the field.*
:math:`F_{out}(x,y)= \\sqrt{T}e^{ -\\left [ \\frac{ x^{2}+y^{2} }{2w^{2}} \\right ]^n } F_{in}(x,y)`
:param Fin: input field
:type Fin: Field
:param w: 1/e intensity width
:type w: int, float
:param n: power of the super Gauss (default = 2.0)
:type n: int, float
:param x_shift: shift in x direction (default = 0.0)
:param y_shift: shift in y direction (default = 0.0)
:type x_shift: int, float
:type y_shift: int, float
:param T: center intensity transmission (default = 1.0)
:type T: int, float
:return: output field (N x N square array of complex numbers).
:rtype: `LightPipes.field.Field`
:Example:
>>> F = SuperGaussAperture(Fin, w) # centered, T=1.0, width = w, power = 2.0
>>> F = SuperGaussAperture(Fin, w, n = 21) # idem, power = 21
>>> F = SuperGaussAperture(Fin, w, n = 21, y_shift = -3 *mm) # idem, shifted in y direction
>>> F = SuperGaussAperture(Fin, w, 21, 0.0, -3.0*mm, 0.5) # idem
.. seealso::
* :ref:`Manual: Apertures and screens.<Apertures and screens.>`
"""
Fout = Field.copy(Fin)
Y, X = Fout.mgrid_cartesian
Y = Y - y_shift
X = X - x_shift
w2=w*w*2
SqrtT=_np.sqrt(T)
Fout.field*=SqrtT*_np.exp(-((X*X+Y*Y)/w2)**n)
return Fout
@backward_compatible
def GaussScreen(Fin, w, x_shift = 0.0, y_shift = 0.0, T = 0.0 ):
"""
*Inserts a screen with a Gaussian shape in the field.*
:math:`F_{out}(x,y)= \\sqrt{1-(1-T)e^{ -\\frac{ x^{2}+y^{2} }{w^{2}} }} F_{in}(x,y)`
:param Fin: input field
:type Fin: Field
:param w: 1/e intensity width
:type w: int, float
:param x_shift: shift in x direction (default = 0.0)
:param y_shift: shift in y direction (default = 0.0)
:type x_shift: int, float
:type y_shift: int, float
:param T: center intensity transmission (default = 0.0)
:type T: int, float
:return: output field (N x N square array of complex numbers).
:rtype: `LightPipes.field.Field`
:Example:
    >>> F = GaussScreen(Fin, w) # centered, T=0.0, width = w
    >>> F = GaussScreen(Fin, w, T = 0.5) # idem, transmission = 0.5
    >>> F = GaussScreen(Fin, w, T = 0.5, y_shift = -3 *mm) # idem, shifted in y direction
    >>> F = GaussScreen(Fin, w, 0.0, -3.0*mm, 0.5) # idem
.. seealso::
* :ref:`Manual: Apertures and screens.<Apertures and screens.>`
"""
Fout = Field.copy(Fin)
Y, X = Fout.mgrid_cartesian
Y = Y - y_shift
X = X - x_shift
w2=w*w
Fout.field*=_np.sqrt(1-(1-T)*_np.exp(-(X*X+Y*Y)/w2))
return Fout
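# Note (added): in intensity, GaussScreen with T=0 is the complement of
# GaussAperture with T=1; at every point their transmissions sum to 1.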
def GaussHermite(Fin, w0, m = 0, n = 0, A = 1.0):
"""
*Substitutes a Hermite-Gauss mode (beam waist) in the field.*
:math:`F_{m,n}(x,y,z=0) = A H_m\\left(\\dfrac{\\sqrt{2}x}{w_0}\\right)H_n\\left(\\dfrac{\\sqrt{2}y}{w_0}\\right)e^{-\\frac{x^2+y^2}{w_0^2}}`
:param Fin: input field
:type Fin: Field
:param w0: Gaussian spot size parameter in the beam waist (1/e amplitude point)
:type w0: int, float
    :param m: mode index (default = 0)
    :param n: mode index (default = 0)
:type m: int, float
:type n: int, float
:param A: amplitude (default = 1.0)
:type A: int, float
:return: output field (N x N square array of complex numbers).
:rtype: `LightPipes.field.Field`
:Example:
>>> F = GaussHermite(F, 3*mm) # Fundamental Gauss mode, HG0,0 with a beam radius of 3 mm
>>> F = GaussHermite(F, 3*mm, m=3) # Idem, HG3,0
>>> F = GaussHermite(F, 3*mm, m=3, n=1, A=2.0) # Idem, HG3,1, amplitude 2.0
>>> F = GaussHermite(F, 3*mm, 3, 1, 2.0) # Idem
.. seealso::
* :ref:`Examples: Hermite Gauss modes.<Hermite Gauss modes.>`
Reference::
A. Siegman, "Lasers", p. 642
"""
# ************* Backward compatibility section ****************
#The general backward_compatible decorator does not work for this command,
#because of the positional argument w0.
_using_oldstyle = False
if not isinstance(Fin, Field):
#first arg is not a field, either backward compat syntax or
# complete usage error -> find out if Field is last, else error
if isinstance(A, Field):
#found field in last arg
_using_oldstyle = True #just in case code wants to know this later
# in function
Fin, w0, m, n, A = A, n, Fin, w0, m
#caution: python can swap the values only if written on single
# line, if split up a temporary assignment is necessary
# (since a=b, b=a would not work, only temp=a, a=b, b=temp)
#-> now all the variables contain what is expected in new style
else:
raise ValueError('GaussHermite: Field is neither first nor '
+ 'last parameter (backward compatibility check)'
+ ', please check syntax/usage.')
# ************* end of Backward compatibility section *********
Fout = Field.copy(Fin)
Y, X = Fout.mgrid_cartesian
#Y = Y - y_shift
#X = X - x_shift
sqrt2w0=_np.sqrt(2.0)/w0
w02=w0*w0
Fout.field = A * hermite(m)(sqrt2w0*X)*hermite(n)(sqrt2w0*Y)*_np.exp(-(X*X+Y*Y)/w02)
return Fout
def GaussLaguerre(Fin, w0, p = 0, l = 0, A = 1.0 ):
"""
*Substitutes a Laguerre-Gauss mode (beam waist) in the field.*
:math:`F_{p,l}(x,y,z=0) = A \\left(\\frac{\\rho}{2}\\right)^{\\frac{|l|}{2} }L^p_l\\left(\\rho\\right)e^{-\\frac{\\rho}{2}}\\cos(l\\theta)`,
with: :math:`\\rho=\\frac{2(x^2+y^2)}{w_0^2}`
:param Fin: input field
:type Fin: Field
:param w0: Gaussian spot size parameter in the beam waist (1/e amplitude point)
:type w0: int, float
    :param p: mode index (default = 0)
    :param l: mode index (default = 0)
:type p: int, float
:type l: int, float
:param A: amplitude (default = 1.0)
:type A: int, float
:return: output field (N x N square array of complex numbers).
:rtype: `LightPipes.field.Field`
:Example:
>>> F = GaussLaguerre(F, 3*mm) # Fundamental Gauss mode, LG0,0 with a beam radius of 3 mm
    >>> F = GaussLaguerre(F, 3*mm, p=3) # Idem, LG3,0
    >>> F = GaussLaguerre(F, 3*mm, p=3, l=1, A=2.0) # Idem, LG3,1, amplitude 2.0
    >>> F = GaussLaguerre(F, 3*mm, 3, 1, 2.0) # Idem
.. seealso::
* :ref:`Examples: Laguerre Gauss modes.<Laguerre Gauss modes.>`
Reference::
A. Siegman, "Lasers", p. 642
"""
# ************* Backward compatibility section ****************
#The general backward_compatible decorator does not work for this command,
#because of the positional argument w0.
#Old style: GaussLaguerre(p, l, A, w0,Fin)
#New style: GaussLaguerre(Fin, w0, p=0, l=0, A=1.0)
_using_oldstyle = False
if not isinstance(Fin, Field):
#first arg is not a field, either backward compat syntax or
# complete usage error -> find out if Field is last, else error
if isinstance(A, Field):
#found field in last arg
_using_oldstyle = True #just in case code wants to know this later
# in function
Fin, w0, p, l, A = A, l, Fin, w0, p
#caution: python can swap the values only if written on single
# line, if split up a temporary assignment is necessary
# (since a=b, b=a would not work, only temp=a, a=b, b=temp)
#-> now all the variables contain what is expected in new style
else:
raise ValueError('GaussLaguerre: Field is neither first nor '
+ 'last parameter (backward compatibility check)'
+ ', please check syntax/usage.')
# ************* end of Backward compatibility section *********
Fout = Field.copy(Fin)
R, Phi = Fout.mgrid_polar
w02=w0*w0
la=abs(l)
rho = 2*R*R/w02
Fout.field = A * rho**(la/2) * genlaguerre(p,la)(rho) * _np.exp(-rho/2) * _np.cos(l*Phi)
return Fout
@backward_compatible
def IntAttenuator(Fin, att = 0.5 ):
"""
*Attenuates the intensity of the field.*
:math:`F_{out}(x,y)=\\sqrt{att}F_{in}(x,y)`
:param Fin: input field
:type Fin: Field
:param att: intensity attenuation factor (default = 0.5)
:type att: int, float
:return: output field (N x N square array of complex numbers).
:rtype: `LightPipes.field.Field`
:Example:
>>> F = IntAttenuator(F) # attenuates the intensity of the field with a factor 0.5
>>> F = IntAttenuator(F, att = 0.2) # Idem, with a factor 0.2
>>> F = IntAttenuator(F, 0.2) # Idem
.. seealso::
* :ref:`Manual: Splitting and mixing beams.<Splitting and mixing beams.>`
* :ref:`Examples: Michelson interferometer.<Michelson interferometer.>`
"""
Efactor = _np.sqrt(att) #att. given as intensity
Fout = Field.copy(Fin)
Fout.field *= Efactor
return Fout
@backward_compatible
def Intensity(Fin, flag = 0):
"""
*Calculates the intensity of the field.*
:math:`I(x,y)=F_{in}(x,y).F_{in}(x,y)^*`
:param Fin: input field
:type Fin: Field
:param flag: 0: no normalisation, 1: normalisation to 1, 2: normalized to 255 (for bitmaps) (default = 0)
:type flag: int, float
:return: output intensity distribution (N x N square array of real numbers).
:rtype: `numpy.ndarray`
:Example:
>>> I = Intensity(F) # intensity of the field, no normalisation
>>> I = Intensity(F, flag=1) # Idem, normalized to 1
>>> I = Intensity(F, 2) # Idem, normalized to 255
.. seealso::
* :ref:`Manual: Graphing and visualisation.<Graphing and visualisation.>`
"""
I = _np.abs(Fin.field)**2
if flag > 0:
Imax = I.max()
if Imax == 0.0:
raise ValueError('Cannot normalize because of 0 beam power.')
I = I/Imax
if flag == 2:
I = I*255
return I
@backward_compatible
def Interpol(Fin, new_size, new_N, x_shift = 0.0, y_shift = 0.0, angle = 0.0, magnif = 1.0 ):
"""
*Interpolates the field to a new grid size, grid dimension.*
:param Fin: input field
:type Fin: Field
:param new_size: new grid size
:type new_size: int, float
:param new_N: new grid dimension
:type new_N: int, float
:param x_shift: shift of the field in x direction (default = 0.0)
:type x_shift: int, float
:param y_shift: shift of the field in y direction (default = 0.0)
:type y_shift: int, float
:param angle: rotation of the field in degrees (default = 0.0)
:type angle: int, float
:param magnif: magnification of the field amplitude (default = 1.0)
:return: output field (N x N square array of complex numbers).
:rtype: `LightPipes.field.Field`
:Example:
>>> F = Interpol(F, 50*mm, 200) # interpolates the field to a grid size of 50 mm and a grid dimension of 200
>>> F = Interpol(F, 50*mm, 200, y_shift = 2*mm) # Idem, shifted 2 mm in the y direction
>>> F = Interpol(F, 50*mm, 200, y_shift = 2*mm, magnif = 2.0) # Idem, magnifizes the field a factor 2.0
>>> F = Interpol(F, 50*mm, 200, 0.0, 2*mm, 0.0, 2.0) # Idem
.. seealso::
* :ref:`Manual: Interpolation.<Interpolation.>`
"""
Fout = Field.begin(new_size, Fin.lam, new_N)
Fout.field[:,:] = 0.0
legacy = True
if legacy:
Pi = 3.141592654 #compare Cpp results numerically
else:
Pi = _np.pi #more accurate, but slightly different results
angle *= Pi/180.
cc=_np.cos(angle)
ss=_np.sin(angle)
if legacy:
#dx defined differently
size_old = Fin.siz
old_number = Fin.N
dx_old = size_old/(old_number-1)
on21 = int(old_number/2)
Xold = dx_old * _np.arange(-on21, old_number-on21)
Yold = dx_old * _np.arange(-on21, old_number-on21)
else:
Xold = Fin.xvalues
Yold = Fin.yvalues
if legacy:
dx_new = new_size/(new_N-1) #TODO legacy, once again without -1 seems correct
nn21 = int(new_N/2)
X0 = dx_new * _np.arange(-nn21, new_N-nn21)
Y0 = dx_new * _np.arange(-nn21, new_N-nn21)
X0, Y0 = _np.meshgrid(X0, Y0)
else:
dx_new = Fout.dx
Y0, X0 = Fout.mgrid_cartesian #note swapped order!
X0 -= x_shift
Y0 -= y_shift
Xnew = (X0*cc + Y0*ss)/magnif
Ynew = (X0*(-ss) + Y0* cc)/magnif
xmin, xmax = Xold[0], Xold[-1]
ymin, ymax = Yold[0], Yold[-1]
#filter strictly inside (not <=) since edge pixels seem wrong in interp
filtmask = ((Xnew > xmin) & (Xnew < xmax) &
(Ynew > ymin) & (Ynew < ymax))
# same goes for Cpp lightpipes, interpolating a 20x20 grid to a 20x20 grid
# of same size will have 0s along the edges and only 18x18 useful pixels
#instead of calling interp for all pixels, only call for those new pixels
    # whose coordinates (transformed to old) are inside old grid box
Xmask = Xnew[filtmask] #flat list of X-values, not meshgrid anymore
Ymask = Ynew[filtmask]
use_scipy_interp = False
if use_scipy_interp:
ks = 1 #spline order: linear or higher
interp_real = RectBivariateSpline(Xold, Yold, Fin.field.real,
kx=ks, ky=ks)
interp_imag = RectBivariateSpline(Xold, Yold, Fin.field.imag,
kx=ks, ky=ks)
out_real = interp_real(Xmask, Ymask, grid=False)
out_imag = interp_imag(Xmask, Ymask, grid=False)
out_comp = out_real + 1j* out_imag
Fout.field[filtmask] = out_comp
else:
out_z = Inv_Squares(Xmask, Ymask, Fin.field, dx_old)
Fout.field[filtmask] = out_z
Fout.field /= magnif
return Fout
@backward_compatible
def MultIntensity( Fin, Intens):
"""
*Multiplies the field with a given intensity distribution.*
:param Fin: input field
:type Fin: Field
:param Intens: N x N square array of real numbers or scalar
:type Intens: numpy.ndarray, float, int
:return: output field (N x N square array of complex numbers).
:rtype: `LightPipes.field.Field`
:Example:
>>> import numpy as np
>>> Int=np.empty([N,N])
>>> for i in range(1,N):
>>> for j in range(1,N):
>>> Int[i][j]=math.fabs(math.sin(i/10.0)*math.cos(j/5.0))
>>> F = MultIntensity(F, Int)
.. seealso::
* :ref:`Manual: User defined phase and intensity filters.<User defined phase and intensity filters.>`
"""
if not _np.isscalar(Intens):
if Intens.shape != Fin.field.shape:
raise ValueError('Intensity pattern shape does not match field size')
Fout = Field.copy(Fin)
Efield = _np.sqrt(Intens)
Fout.field *= Efield
return Fout
@backward_compatible
def MultPhase( Fin, Phi):
"""
*Multiplies the field with a given phase distribution.*
:param Fin: input field
:type Fin: Field
:param Phi: N x N square array of real numbers or scalar
:type Phi: numpy.ndarray, int, float
:return: output field (N x N square array of complex numbers).
:rtype: `LightPipes.field.Field`
:Example:
>>> # multiply with a phase distribution:
>>> #
>>> import numpy as np
>>> Phi=np.empty([N,N])
>>> for i in range(1,N):
>>> for j in range(1,N):
>>> Phi[i][j]=math.fabs(math.sin(i/10.0)*math.cos(j/5.0))
>>> F = MultPhase(F, Phi)
>>> #
>>> # multiply with a scalar:
>>> F = MultPhase(F, 0.12345*rad) # multiplies the field with a constant phase factor of 0.12345 rad
.. seealso::
* :ref:`Manual: User defined phase and intensity filters.<User defined phase and intensity filters.>`
"""
if not _np.isscalar(Phi):
if Phi.shape != Fin.field.shape:
raise ValueError('Phase pattern shape does not match field size')
Fout = Field.copy(Fin)
Fout.field *= _np.exp(1j*Phi)
return Fout
def Normal(Fin):
"""
*Normalizes the field using beam power.*
:math:`F_{out}(x,y)= \\frac{F_{in}(x,y)}{\\sqrt{P}}`
    with: :math:`P=\\int \\int |F_{in}(x,y)|^2 dx dy`
:param Fin: input field
:type Fin: Field
:return: output field (N x N square array of complex numbers).
:rtype: `LightPipes.field.Field`
:Example:
>>> F = Normal(F)
.. seealso::
* :ref:`Manual: Diagnostics: Strehl ratio, beam power.<Diagnostics: Strehl ratio, beam power.>`
"""
Fabs = _np.abs(Fin.field)**2
Fabs *= Fin.dx**2
Ptot = Fabs.sum()
if Ptot == 0.0:
raise ValueError('Error in Normal(Fin): Zero beam power!')
Fout = Field.copy(Fin)
Fout.field *= _np.sqrt(1/Ptot)
return Fout
def Phase(Fin, unwrap = False, units='rad', blank_eps=0):
"""
*Calculates the phase of the field.*
:param Fin: input field
:type Fin: Field
:param unwrap: Call PhaseUnwrap on the extracted Phase (default = False)
:type unwrap: bool
:param units: 'opd': returned in [meters] of optical path length
'lam': returned in multiples of lambda
'rad': returned in multiples of 2pi phase jumps (default)
:type units: string
:param blank_eps: [fraction] of max. Intensity at which to blank the phase
and replace the value with numpy.nan (e.g. 1e-3==0.1%)
Set to 0 or None to disable
    :type blank_eps: float, None
:return: output phase distribution (N x N square array of real numbers).
:rtype: `numpy.ndarray`
:Example:
>>> Phi = Phase(F) # returns phase distribution
>>> Phi = Phase(F, unwrap = True) # Idem, phase unwrapped
>>> Phi = Phase(F, units = 'lam') # phase in multiples of wavelength
.. seealso::
* :ref:`Manual: Graphing and visualisation.<Graphing and visualisation.>`
"""
_2pi = 2*_np.pi
Phi = _np.angle(Fin.field)
if unwrap:
Phi = PhaseUnwrap(Phi)
if units=='opd':
Phi = Phi/_2pi*Fin.lam #a PtV of 2pi will yield e.g. 1*lam=1e-6=1um
elif units=='lam':
Phi = Phi/_2pi #a PtV of 2pi=6.28 will yield 1 (as in 1 lambda)
elif units=='rad':
pass #a PtV of 2pi will yield 6.28 as requested
else:
raise ValueError('Unknown value for option units={}'.format(units))
if blank_eps:
        I = Intensity(Fin)
Phi[I<blank_eps*I.max()] = _np.nan
return Phi
def PhaseSpiral(Fin, m = 1):
"""
*Multiplies Fin with a spiral phase distribution.*
:param Fin: input field
:type Fin: Field
:param m: Order of the spiral (default = 1)
:type m: int, float
:return: output field (N x N square array of complex numbers).
:rtype: `LightPipes.field.Field`
:Example:
>>> order = 2
>>> F=PhaseSpiral(F,m=order) # multiplies the field with a spiral phase distribution of order 2
"""
Fout = Field.copy(Fin)
R, Phi = Fout.mgrid_polar
Fout.field *= _np.exp(1j * m * Phi)
return Fout
def PhaseUnwrap(Phi):
"""
    *Unwraps the phase (removes jumps of 2 pi radians).*
    :param Phi: input phase distribution
    :type Phi: numpy.ndarray
:return: output phase distribution (N x N square array of real numbers).
:rtype: `numpy.ndarray`
:Example:
>>> Phi = PhaseUnwrap(Phi) # unwraps the phase distribution Phi
"""
PhiU = _unwrap_phase(Phi)
return PhiU
def Power(Fin):
"""
*Calculates the total power.*
    .. math:: P=\\int \\int |F_{in}(x,y)|^2 dx dy
:param Fin: input field
:type Fin: Field
:return: output power
:rtype: float
:Example:
>>> P = Power(F) # returns the power of the field F
"""
#TODO why does Normal() also sum dx**2 (==integral) while this does not??
I = _np.abs(Fin.field)**2
return I.sum()
@backward_compatible
def RandomIntensity(Fin, seed = 123, noise = 1.0, ):
"""
*Adds random intensity to the field*
:param Fin: input field
:type Fin: Field
:param seed: seed number for the random noise generator (default = 123)
:type seed: int, float
:param noise: level of the noise (default = 1.0)
:type noise: int, float
:return: output field (N x N square array of complex numbers).
:rtype: `LightPipes.field.Field`
:Example:
>>> F = RandomIntensity(F) # adds noise to the field
>>> F = RandomIntensity(F, seed = 49) # Idem, with seed 49
>>> F = RandomIntensity(F, noise = 0.1) # adds noise to the field with amplitude 0.1
.. seealso::
* :ref:`Manual: Random filters.<Random filters.>`
"""
#TODO implementation error in original LP: field error, not I error!
# need to sqrt for that
Fout = Field.copy(Fin)
_np.random.seed(int(seed))
N = Fout.N
ranint = _np.random.rand(N, N)*noise
Fout.field += ranint
return Fout
@backward_compatible
def RandomPhase(Fin, seed =456, maxPhase = _np.pi ):
"""
*Adds random phase to the field*
:param Fin: input field
:type Fin: Field
:param seed: seed number for the random noise generator (default = 456)
:type seed: int, float
:param maxPhase: max value of the phase (default = 3.1415 (pi))
:type maxPhase: int, float
:return: output field (N x N square array of complex numbers).
:rtype: `LightPipes.field.Field`
:Example:
>>> F = RandomPhase(F) # adds noise to the phase of the field
>>> F = RandomPhase(F, seed = 49) # Idem, with seed 49
>>> F = RandomPhase(F, maxPhase = 0.1) # adds phase-noise to the field with maximum value 0.1
.. seealso::
* :ref:`Manual: Random filters.<Random filters.>`
"""
#2020023 - ldo - tested similar result as Cpp version, although not
# 1:1 since seed is different in numpy
Fout = Field.copy(Fin)
_np.random.seed(int(seed))
N = Fout.N
ranphase = (_np.random.rand(N, N)-0.5)*maxPhase
Fout.field *= _np.exp(1j * ranphase)
return Fout
@backward_compatible
def RectAperture(Fin, sx, sy, x_shift = 0.0, y_shift = 0.0, angle = 0.0 ):
"""
*Inserts a rectangular aperture in the field.*
:param Fin: input field
:type Fin: Field
:param sx: width of the aperture
:type sx: int, float
:param sy: height of the aperture
:type sy: int, float
:param x_shift: shift in x direction (default = 0.0)
:param y_shift: shift in y direction (default = 0.0)
:type x_shift: int, float
:type y_shift: int, float
:param angle: rotation angle in degrees (default = 0.0)
:type angle: int, float
:return: output field (N x N square array of complex numbers).
:rtype: `LightPipes.field.Field`
:Example:
>>> F = RectAperture(F, 3*mm, 4*mm) # A 3 x 4 mm rectangular aperture in the center of the grid.
>>> F = RectAperture(F, 3*mm, 4*mm, 0, -3*mm) # Idem, shifted -3 mm in the y-direction.
>>> F = RectAperture(F, 3*mm, 4*mm, y_shift = -3*mm) # Idem
.. seealso::
* :ref:`Manual: Apertures and screens<Apertures and screens.>`
"""
Fout = Field.copy(Fin)
yy, xx = Fout.mgrid_cartesian
yy = yy - y_shift
xx = xx - x_shift
if angle!=0.0:
ang_rad = -1*angle*deg #-1 copied from Cpp convention
cc = _np.cos(ang_rad)
ss = _np.sin(ang_rad)
xxr = cc * xx + ss * yy
yyr = -ss * xx + cc * yy
yy, xx = yyr, xxr
matchx = _np.abs(xx) > sx/2
matchy = _np.abs(yy) > sy/2
Fout.field[matchx | matchy] = 0.0
return Fout
@backward_compatible
def RectScreen(Fin, sx, sy, x_shift = 0.0, y_shift = 0.0, angle = 0.0 ):
"""
*Inserts a rectangular screen in the field.*
:param Fin: input field
:type Fin: Field
:param sx: width of the screen
:type sx: int, float
:param sy: height of the screen
:type sy: int, float
:param x_shift: shift in x direction (default = 0.0)
:param y_shift: shift in y direction (default = 0.0)
:type x_shift: int, float
:type y_shift: int, float
:param angle: rotation angle in degrees (default = 0.0)
:type angle: int, float
:return: output field (N x N square array of complex numbers).
:rtype: `LightPipes.field.Field`
:Example:
>>> F = RectScreen(F, 3*mm, 4*mm) # A 3 x 4 mm rectangular screen in the center of the grid.
>>> F = RectScreen(F, 3*mm, 4*mm, 0, -3*mm) # Idem, shifted -3 mm in the y-direction.
>>> F = RectScreen(F, 3*mm, 4*mm, y_shift = -3*mm) # Idem
.. seealso::
* :ref:`Manual: Apertures and screens<Apertures and screens.>`
"""
Fout = Field.copy(Fin)
yy, xx = Fout.mgrid_cartesian
yy = yy - y_shift
xx = xx - x_shift
if angle!=0.0:
ang_rad = -1*angle*deg #-1 copied from Cpp convention
cc = _np.cos(ang_rad)
ss = _np.sin(ang_rad)
xxr = cc * xx + ss * yy
yyr = -ss * xx + cc * yy
yy, xx = yyr, xxr
matchx = _np.abs(xx) <= sx/2
matchy = _np.abs(yy) <= sy/2
Fout.field[matchx & matchy] = 0.0
return Fout
def Strehl(Fin):
"""
*Calculates the Strehl value of the field*
:param Fin: input field
:type Fin: Field
:return: Strehl value of the field
:rtype: float
:Example:
>>> S = Strehl(F) # returns the Strehl value of the field
.. seealso::
* :ref:`Manual: Diagnostics: Strehl ratio, beam power.<Diagnostics: Strehl ratio, beam power.>`
"""
normsq = _np.abs(Fin.field).sum()**2
if normsq == 0.0:
raise ValueError('Error in Strehl: Zero beam power')
strehl = _np.real(Fin.field).sum()**2 + _np.imag(Fin.field).sum()**2
strehl = strehl/normsq
return strehl
@backward_compatible
def SubIntensity(Fin, Intens ):
"""
    *Substitutes a given intensity distribution in the field.*
:param Fin: input field
:type Fin: Field
:param Intens: N x N square array of real numbers or scalar
:type Intens: numpy.ndarray, int, float
:return: output field (N x N square array of complex numbers).
:rtype: `LightPipes.field.Field`
:Example:
.. seealso::
    * :ref:`Manual: User defined phase and intensity filters.<User defined phase and intensity filters.>`
"""
Fout = Field.copy(Fin)
Intens = _np.asarray(Intens)
if Intens.shape != Fout.field.shape:
raise ValueError('Intensity map has wrong shape')
phi = _np.angle(Fout.field)
Efield = _np.sqrt(Intens)
Fout.field = Efield * _np.exp(1j * phi)
return Fout
@backward_compatible
def SubPhase( Fin, Phi):
"""
    *Substitutes a given phase distribution in the field.*
    :param Fin: input field
    :type Fin: Field
    :param Phi: N x N square array of real numbers or scalar
:type Phi: numpy.ndarray, int, float
:return: output field (N x N square array of complex numbers).
:rtype: `LightPipes.field.Field`
:Example:
.. seealso::
* :ref:`Manual: User defined phase and intensity filters.<User defined phase and intensity filters.>`
"""
Fout = Field.copy(Fin)
if not _np.isscalar(Phi):
Phi = _np.asarray(Phi)
if Phi.shape != Fin.field.shape:
            raise ValueError('Phase map has wrong shape')
oldabs = _np.abs(Fout.field)
Fout.field = oldabs * _np.exp(1j * Phi)
return Fout
| 33.539235
| 148
| 0.598446
|
4a12650f398eca17ed0a9b8dd83350929ae6f12f
| 752
|
py
|
Python
|
WebMirror/management/rss_parser_funcs/feed_parse_extract夢見る世界.py
|
fake-name/ReadableWebProxy
|
ed5c7abe38706acc2684a1e6cd80242a03c5f010
|
[
"BSD-3-Clause"
] | 193
|
2016-08-02T22:04:35.000Z
|
2022-03-09T20:45:41.000Z
|
WebMirror/management/rss_parser_funcs/feed_parse_extract夢見る世界.py
|
fake-name/ReadableWebProxy
|
ed5c7abe38706acc2684a1e6cd80242a03c5f010
|
[
"BSD-3-Clause"
] | 533
|
2016-08-23T20:48:23.000Z
|
2022-03-28T15:55:13.000Z
|
WebMirror/management/rss_parser_funcs/feed_parse_extract夢見る世界.py
|
fake-name/ReadableWebProxy
|
ed5c7abe38706acc2684a1e6cd80242a03c5f010
|
[
"BSD-3-Clause"
] | 19
|
2015-08-13T18:01:08.000Z
|
2021-07-12T17:13:09.000Z
|
def extract夢見る世界(item):
"""
Parser for '夢見る世界'
"""
if 'Otome Games' in item['tags']:
return None
if 'Drama CDs' in item['tags']:
return None
vol, chp, frag, postfix = extractVolChapterFragmentPostfix(item['title'])
if not (chp or vol) or 'preview' in item['title'].lower():
return None
tagmap = [
('Miss Appraiser and Gallery Demon', 'Miss Appraiser and Gallery Demon', 'translated'),
('Light Beyond Road\'s End', 'Light Beyond (LN)', 'translated'),
]
for tagname, name, tl_type in tagmap:
if tagname in item['tags']:
return buildReleaseMessageWithType(item, name, vol, chp, frag=frag, postfix=postfix, tl_type=tl_type)
return False
| 30.08
| 117
| 0.607713
|
4a1265c8a6badf0011a0a26f3bfd00c2fb78e479
| 4,366
|
py
|
Python
|
extra_apps/xadmin/plugins/quickform.py
|
txqzzz/831net-backend
|
c73167124b6a10a774e873389900d31fb15a842c
|
[
"CC0-1.0"
] | null | null | null |
extra_apps/xadmin/plugins/quickform.py
|
txqzzz/831net-backend
|
c73167124b6a10a774e873389900d31fb15a842c
|
[
"CC0-1.0"
] | null | null | null |
extra_apps/xadmin/plugins/quickform.py
|
txqzzz/831net-backend
|
c73167124b6a10a774e873389900d31fb15a842c
|
[
"CC0-1.0"
] | null | null | null |
import copy
from django import forms
from django.db import models
from django.forms.models import modelform_factory
from django.utils.safestring import mark_safe
from django.utils.translation import ugettext as _
from xadmin.layout import Layout
from xadmin.sites import site
from xadmin.util import get_model_from_relation, vendor
from xadmin.views import BaseAdminPlugin, ModelFormAdminView
class QuickFormPlugin(BaseAdminPlugin):
def init_request(self, *args, **kwargs):
        if self.request.method == 'GET' and (self.request.is_ajax() or self.request.GET.get('_ajax')):
self.admin_view.add_form_template = 'xadmin/views/quick_form.html'
self.admin_view.change_form_template = 'xadmin/views/quick_form.html'
return True
return False
def get_model_form(self, __, **kwargs):
if '_field' in self.request.GET:
defaults = {
"form": self.admin_view.form,
"fields": self.request.GET['_field'].split(','),
"formfield_callback": self.admin_view.formfield_for_dbfield,
}
return modelform_factory(self.model, **defaults)
return __()
def get_form_layout(self, __):
if '_field' in self.request.GET:
return Layout(*self.request.GET['_field'].split(','))
return __()
def get_context(self, context):
context['form_url'] = self.request.path
return context
class RelatedFieldWidgetWrapper(forms.Widget):
"""
This class is a wrapper to a given widget to add the add icon for the
admin interface.
"""
def __init__(self, widget, rel, add_url, rel_add_url):
self.needs_multipart_form = widget.needs_multipart_form
self.attrs = widget.attrs
self.choices = widget.choices
self.is_required = widget.is_required
self.widget = widget
self.rel = rel
self.add_url = add_url
self.rel_add_url = rel_add_url
if hasattr(self, 'input_type'):
self.input_type = widget.input_type
def __deepcopy__(self, memo):
obj = copy.copy(self)
obj.widget = copy.deepcopy(self.widget, memo)
obj.attrs = self.widget.attrs
memo[id(self)] = obj
return obj
@property
def media(self):
media = self.widget.media + vendor('xadmin.plugin.quick-form.js')
return media
def render(self, name, value, renderer=None, *args, **kwargs):
self.widget.choices = self.choices
output = []
if self.add_url:
output.append(
u'<a href="%s" title="%s" class="btn btn-primary btn-sm btn-ajax pull-right" data-for-id="id_%s" data-refresh-url="%s"><i class="fa fa-plus"></i></a>'
% (
self.add_url, (_('Create New %s') % self.rel.to._meta.verbose_name), name,
"%s?_field=%s&%s=" % (self.rel_add_url, name, name)))
output.extend(['<div class="control-wrap" id="id_%s_wrap_container">' % name,
self.widget.render(name, value, *args, **kwargs), '</div>'])
return mark_safe(u''.join(output))
def build_attrs(self, extra_attrs=None, **kwargs):
"Helper function for building an attribute dictionary."
        self.attrs = self.widget.build_attrs(extra_attrs=extra_attrs, **kwargs)
return self.attrs
def value_from_datadict(self, data, files, name):
return self.widget.value_from_datadict(data, files, name)
def id_for_label(self, id_):
return self.widget.id_for_label(id_)
class QuickAddBtnPlugin(BaseAdminPlugin):
def formfield_for_dbfield(self, formfield, db_field, **kwargs):
if formfield and self.model in self.admin_site._registry and isinstance(db_field, (
models.ForeignKey, models.ManyToManyField)):
rel_model = get_model_from_relation(db_field)
if rel_model in self.admin_site._registry and self.has_model_perm(rel_model, 'add'):
add_url = self.get_model_url(rel_model, 'add')
formfield.widget = RelatedFieldWidgetWrapper(
formfield.widget, db_field.rel, add_url, self.get_model_url(self.model, 'add'))
return formfield
site.register_plugin(QuickFormPlugin, ModelFormAdminView)
site.register_plugin(QuickAddBtnPlugin, ModelFormAdminView)
| 37.965217
| 166
| 0.649336
|
4a126626ea8013b7a7ce44f0434e182ef8128f73
| 7,120
|
py
|
Python
|
src/robot/parsing/lexer/lexer.py
|
timgates42/robotframework
|
53fa94a667317621808ee41f274580b3162a0188
|
[
"ECL-2.0",
"Apache-2.0"
] | null | null | null |
src/robot/parsing/lexer/lexer.py
|
timgates42/robotframework
|
53fa94a667317621808ee41f274580b3162a0188
|
[
"ECL-2.0",
"Apache-2.0"
] | null | null | null |
src/robot/parsing/lexer/lexer.py
|
timgates42/robotframework
|
53fa94a667317621808ee41f274580b3162a0188
|
[
"ECL-2.0",
"Apache-2.0"
] | null | null | null |
# Copyright 2008-2015 Nokia Networks
# Copyright 2016- Robot Framework Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from itertools import chain
from robot.errors import DataError
from robot.utils import get_error_message, FileReader
from .blocklexers import FileLexer
from .context import InitFileContext, TestCaseFileContext, ResourceFileContext
from .tokenizer import Tokenizer
from .tokens import EOS, Token
def get_tokens(source, data_only=False):
"""Parses the given source to tokens.
    :param source: The source to read the data from. Can be a path to
        a source file (as a string or a ``pathlib.Path`` object), an already
        opened file object, or Unicode text containing the data directly.
Source files must be UTF-8 encoded.
:param data_only: When ``False`` (default), returns all tokens. When set
to ``True``, omits separators, comments, continuations, and other
non-data tokens.
Returns a generator that yields :class:`~robot.parsing.lexer.tokens.Token`
instances.
"""
lexer = Lexer(TestCaseFileContext(), data_only)
lexer.input(source)
return lexer.get_tokens()
def get_resource_tokens(source, data_only=False):
"""Parses the given source to resource file tokens.
Otherwise same as :func:`get_tokens` but the source is considered to be
a resource file. This affects, for example, what settings are valid.
"""
lexer = Lexer(ResourceFileContext(), data_only)
lexer.input(source)
return lexer.get_tokens()
def get_init_tokens(source, data_only=False):
"""Parses the given source to init file tokens.
Otherwise same as :func:`get_tokens` but the source is considered to be
a suite initialization file. This affects, for example, what settings are
valid.
"""
lexer = Lexer(InitFileContext(), data_only)
lexer.input(source)
return lexer.get_tokens()
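# Hedged usage sketch (not part of the original module): dumping data-only
# tokens from an in-memory test case file. The source text is illustrative.
def _example_token_dump():
    """Illustrative only: print the type and value of each data token."""
    source = u'*** Test Cases ***\nExample\n    Log    Hello\n'
    for token in get_tokens(source, data_only=True):
        print(token.type, token.value)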
class Lexer(object):
def __init__(self, ctx, data_only=False):
self.lexer = FileLexer(ctx)
self.data_only = data_only
self.statements = []
def input(self, source):
for statement in Tokenizer().tokenize(self._read(source),
self.data_only):
# Store all tokens but pass only DATA tokens to lexer.
self.statements.append(statement)
if self.data_only:
data = statement[:]
else:
data = [t for t in statement if t.type == Token.DATA]
if data:
self.lexer.input(data)
def _read(self, source):
try:
with FileReader(source, accept_text=True) as reader:
return reader.read()
        except Exception:
raise DataError(get_error_message())
def get_tokens(self):
self.lexer.lex()
if self.data_only:
ignore = {Token.IGNORE, Token.COMMENT_HEADER, Token.COMMENT,
Token.OLD_FOR_INDENT}
else:
ignore = {Token.IGNORE}
statements = self._handle_old_for(self.statements)
if not self.data_only:
statements = chain.from_iterable(
self._split_trailing_commented_and_empty_lines(s)
for s in statements
)
# Setting local variables is performance optimization to avoid
# unnecessary lookups and attribute access.
name_types = (Token.TESTCASE_NAME, Token.KEYWORD_NAME)
separator_type = Token.SEPARATOR
eol_type = Token.EOL
for statement in statements:
name_seen = False
separator_after_name = None
prev_token = None
for token in statement:
type = token.type # Performance optimization.
if type in ignore:
continue
if name_seen:
if type == separator_type:
separator_after_name = token
continue
if type != eol_type:
yield EOS.from_token(prev_token)
if separator_after_name:
yield separator_after_name
name_seen = False
if type in name_types:
name_seen = True
prev_token = token
yield token
if prev_token:
yield EOS.from_token(prev_token)
def _handle_old_for(self, statements):
end_statement = [Token(Token.SEPARATOR), Token(Token.END)]
old_for = False
for statement in statements:
marker = self._get_first_data_token(statement)
if marker:
if marker.type == Token.OLD_FOR_INDENT:
old_for = True
elif old_for:
if marker.type == Token.END:
# We get here if block has been indented with '\' but
# there is also 'END'. The former is deprecated and
# removing the value causes a deprecation warning.
marker.value = ''
else:
yield end_statement
old_for = False
yield statement
if old_for:
yield end_statement
def _get_first_data_token(self, statement):
for token in statement:
if token.type not in Token.NON_DATA_TOKENS:
return token
return None
def _split_trailing_commented_and_empty_lines(self, statement):
lines = self._split_to_lines(statement)
commented_or_empty = []
for line in reversed(lines):
if not self._is_commented_or_empty(line):
break
commented_or_empty.append(line)
lines.pop()
yield list(chain.from_iterable(lines))
for line in reversed(commented_or_empty):
yield line
def _split_to_lines(self, statement):
lines = []
current = []
for token in statement:
current.append(token)
if token.type == Token.EOL:
lines.append(current)
current = []
if current:
lines.append(current)
return lines
def _is_commented_or_empty(self, line):
separator_or_ignore = (Token.SEPARATOR, Token.IGNORE)
comment_or_eol = (Token.COMMENT, Token.EOL)
for token in line:
if token.type not in separator_or_ignore:
return token.type in comment_or_eol
return False
| 36.512821
| 78
| 0.605197
|
4a1266799b31f6ae81fd0dd5dda1fe984a8f664a
| 2,443
|
py
|
Python
|
day21/day21.py
|
alcatrazEscapee/AdventOfCode2021
|
a473b01b8931791b4a1fd03bf05b286ed0ac9f85
|
[
"MIT"
] | 4
|
2021-12-07T22:25:51.000Z
|
2021-12-22T18:15:25.000Z
|
day21/day21.py
|
alcatrazEscapee/AdventOfCode2021
|
a473b01b8931791b4a1fd03bf05b286ed0ac9f85
|
[
"MIT"
] | null | null | null |
day21/day21.py
|
alcatrazEscapee/AdventOfCode2021
|
a473b01b8931791b4a1fd03bf05b286ed0ac9f85
|
[
"MIT"
] | 1
|
2021-12-17T00:47:26.000Z
|
2021-12-17T00:47:26.000Z
|
from utils import get_input, ints, cyclic_mod
from collections import Counter
from functools import lru_cache
from typing import Tuple
def main(text: str):
p1, p2 = (ints(line)[-1] for line in text.split('\n'))
print('Part 1:', play_deterministic_dice(p1, p2))
print('Part 2:', max(play_dirac_dice(p1, p2)))
def play_deterministic_dice(p1: int, p2: int, dice_state: int = 1, total_dice: int = 0, p1s: int = 0, p2s: int = 0) -> int:
""" p1, p2 are the player positions, p1s, p2s are the player's scores. The current turn to be taken is by player one.
Returns the product of total dice rolled * losing player's score
"""
p1_dice, dice_state = deterministic_dice(dice_state)
total_dice += 3
p1 = cyclic_mod(p1 + p1_dice, 1, 10)
p1s += p1
if p1s >= 1000:
return total_dice * p2s
return play_deterministic_dice(p2, p1, dice_state, total_dice, p2s, p1s)
def deterministic_dice(state: int) -> Tuple[int, int]:
""" Rolls a deterministic dice three times, returning the dice sum and the state value of the dice """
new_state = cyclic_mod(state + 3, 1, 100)
return 3 * state + 3, new_state
@lru_cache(None)
def play_dirac_dice(p1: int, p2: int, p1s: int = 0, p2s: int = 0) -> Tuple[int, int]:
""" p1, p2 are the player positions. p1s, p2s are the player's scores. The current turn to be taken is by player one
Returns the pair of counts of universes in which p1 and p2 win, respectively.
"""
p1_wins = p2_wins = 0 # Counts of the universes in which each player wins
for roll, count in dirac_dice().items():
p1_next = cyclic_mod(p1 + roll, 1, 10)
p1s_next = p1s + p1_next
if p1s_next < 21:
# player 1 has not yet won. We swap the players and calculate wins, then swap the sums
a, b = play_dirac_dice(p2, p1_next, p2s, p1s_next)
p1_wins += b * count
p2_wins += a * count
else:
# player 1 has won in these universes, so we just increment their wins
p1_wins += count
return p1_wins, p2_wins
@lru_cache(1)
def dirac_dice() -> Counter:
""" The map of roll value -> universe count of a set of three dirac dice """
counts = Counter()
for a in range(1, 1 + 3):
for b in range(1, 1 + 3):
for c in range(1, 1 + 3):
counts[a + b + c] += 1
return counts
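# Hedged self-check sketch (not in the original file): the 3^3 = 27 dirac
# universes per turn collapse to seven distinct roll sums with these counts.
def _dirac_distribution_demo() -> None:
    """Illustrative only: verify the multiplicities of the roll sums."""
    assert dict(dirac_dice()) == {3: 1, 4: 3, 5: 6, 6: 7, 7: 6, 8: 3, 9: 1}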
if __name__ == '__main__':
main(get_input())
| 36.462687
| 123
| 0.636103
|
4a126725c5f2b18676a4abfdacd87a84ff6bdac3
| 2,146
|
py
|
Python
|
resume_parser/cli.py
|
kailaspathi/ResumeParser
|
6457100a5cbd1d5131a31c5a1b3f221b100db516
|
[
"MIT"
] | 215
|
2018-12-12T05:36:27.000Z
|
2022-03-12T06:33:29.000Z
|
resume_parser/cli.py
|
kailaspathi/ResumeParser
|
6457100a5cbd1d5131a31c5a1b3f221b100db516
|
[
"MIT"
] | 40
|
2018-12-19T01:01:36.000Z
|
2022-03-25T05:32:19.000Z
|
resume_parser/cli.py
|
kailaspathi/ResumeParser
|
6457100a5cbd1d5131a31c5a1b3f221b100db516
|
[
"MIT"
] | 153
|
2018-12-18T22:17:24.000Z
|
2022-03-25T08:11:30.000Z
|
# Author: Omkar Pathak
import os
import argparse
from pprint import pprint
from resume_parser.resume_parser import ResumeParser
import multiprocessing as mp
def print_cyan(text):
print("\033[96m {}\033[00m" .format(text))
class ResumeParserCli(object):
def __init__(self):
self.__parser = argparse.ArgumentParser()
self.__parser.add_argument('-f', '--file', help="resume file to be extracted")
self.__parser.add_argument('-d', '--directory', help="directory containing all the resumes to be extracted")
def extract_resume_data(self):
args = self.__parser.parse_args()
if args.file and not args.directory:
return self.__extract_from_file(args.file)
elif args.directory and not args.file:
return self.__extract_from_directory(args.directory)
else:
return 'Invalid option. Please provide a valid option.'
def __extract_from_file(self, file):
if os.path.exists(file):
print_cyan('Extracting data from: {}'.format(file))
resume_parser = ResumeParser(file)
return [resume_parser.get_extracted_data()]
else:
return 'File not found. Please provide a valid file name.'
def __extract_from_directory(self, directory):
if os.path.exists(directory):
pool = mp.Pool(mp.cpu_count())
resumes = []
for root, directories, filenames in os.walk(directory):
for filename in filenames:
file = os.path.join(root, filename)
resumes.append(file)
results = pool.map(resume_result_wrapper, resumes)
pool.close()
pool.join()
return results
else:
return 'Directory not found. Please provide a valid directory.'
def resume_result_wrapper(resume):
print_cyan('Extracting data from: {}'.format(resume))
parser = ResumeParser(resume)
return parser.get_extracted_data()
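# Illustrative invocations (the file and directory names are hypothetical):
#   python cli.py -f resume.pdf
#   python cli.py -d ./resumes/
# Exactly one of -f/--file or -d/--directory must be supplied; anything else
# falls through to the 'Invalid option' message in extract_resume_data.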
if __name__ == '__main__':
cli_obj = ResumeParserCli()
pprint(cli_obj.extract_resume_data())
| 34.063492
| 116
| 0.638397
|
4a12673a2fd4282eebe72167aa51a3c3c7a807fc
| 5,074
|
py
|
Python
|
tools/perf/page_sets/rendering/tough_scrollbar_scrolling_cases.py
|
zealoussnow/chromium
|
fd8a8914ca0183f0add65ae55f04e287543c7d4a
|
[
"BSD-3-Clause-No-Nuclear-License-2014",
"BSD-3-Clause"
] | 14,668
|
2015-01-01T01:57:10.000Z
|
2022-03-31T23:33:32.000Z
|
tools/perf/page_sets/rendering/tough_scrollbar_scrolling_cases.py
|
zealoussnow/chromium
|
fd8a8914ca0183f0add65ae55f04e287543c7d4a
|
[
"BSD-3-Clause-No-Nuclear-License-2014",
"BSD-3-Clause"
] | 86
|
2015-10-21T13:02:42.000Z
|
2022-03-14T07:50:50.000Z
|
tools/perf/page_sets/rendering/tough_scrollbar_scrolling_cases.py
|
zealoussnow/chromium
|
fd8a8914ca0183f0add65ae55f04e287543c7d4a
|
[
"BSD-3-Clause-No-Nuclear-License-2014",
"BSD-3-Clause"
] | 5,941
|
2015-01-02T11:32:21.000Z
|
2022-03-31T16:35:46.000Z
|
# Copyright 2021 The Chromium Authors. All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.
import time
from telemetry.internal.actions import page_action
from telemetry.page import shared_page_state
from page_sets.rendering import rendering_story
from page_sets.rendering import story_tags
TOUGH_SCROLLBAR_UMA = [
'Graphics.Smoothness.Checkerboarding.ScrollbarScroll',
]
class ToughFastScrollbarScrollingPage(rendering_story.RenderingStory):
ABSTRACT_STORY = True
SPEED_IN_PIXELS_PER_SECOND = None
SYNTHETIC_GESTURE_SOURCE = page_action.GESTURE_SOURCE_MOUSE
TAGS = [story_tags.GPU_RASTERIZATION, story_tags.TOUGH_SCROLLBAR_SCROLLING]
def __init__(self,
page_set,
shared_page_state_class=shared_page_state.SharedPageState,
name_suffix='',
extra_browser_args=None):
super(ToughFastScrollbarScrollingPage,
self).__init__(page_set=page_set,
shared_page_state_class=shared_page_state_class,
name_suffix=name_suffix,
extra_browser_args=extra_browser_args)
def RunPageInteractions(self, action_runner):
scrollbar_x, scrollbar_top, scrollbar_bottom = self._ScrollBarRatios(
action_runner)
start = time.time()
with action_runner.CreateGestureInteraction('DragAction'):
direction = 'down'
while time.time() - start < 15:
if direction == 'down':
action_runner.DragPage(
left_start_ratio=scrollbar_x,
top_start_ratio=scrollbar_top,
left_end_ratio=scrollbar_x,
top_end_ratio=scrollbar_bottom,
speed_in_pixels_per_second=self.SPEED_IN_PIXELS_PER_SECOND)
else:
action_runner.DragPage(
left_start_ratio=scrollbar_x,
top_start_ratio=scrollbar_bottom,
left_end_ratio=scrollbar_x,
top_end_ratio=scrollbar_top,
speed_in_pixels_per_second=self.SPEED_IN_PIXELS_PER_SECOND)
direction = 'up' if direction == 'down' else 'down'
def _ScrollBarRatios(self, action_runner):
# Calculate scrollbar thickness by adding an element to the page.
# Record the client width of that element, and then add a long child
# element so that the first element includes a scrollbar. Record the
# client width again and then calculate the difference to get the
# scrollbar thickness.
scrollbar_thickness = float(
action_runner.EvaluateJavaScript('''
(function() {
window.__scrollableElementForTelemetry = document.scrollingElement;
var container = document.createElement("div");
container.style.width = "100px";
container.style.height = "100px";
container.style.position = "absolute";
container.style.visibility = "hidden";
container.style.overflow = "auto";
document.body.appendChild(container);
var widthBefore = container.clientWidth;
var longContent = document.createElement("div");
longContent.style.height = "1000px";
container.appendChild(longContent);
var widthAfter = container.clientWidth;
container.remove();
window.__scrollbarThickness = widthBefore - widthAfter;
return window.__scrollbarThickness;
})();'''))
window_width = float(action_runner.EvaluateJavaScript('window.innerWidth'))
window_height = float(
action_runner.EvaluateJavaScript('window.innerHeight'))
top = 0
bottom = 1 - (scrollbar_thickness * 1.5) / (window_height +
scrollbar_thickness)
scrollbar_mid_x = (window_width + (scrollbar_thickness / 2)) / window_width
return scrollbar_mid_x, top, bottom
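  # Worked numbers (illustrative, not from the original file): with
  # window.innerWidth = window.innerHeight = 1000 and a 15 px scrollbar,
  # bottom = 1 - 22.5 / 1015 ~= 0.978 and scrollbar_mid_x = 1007.5 / 1000
  # ~= 1.008. The x ratio slightly above 1 only lands on the scrollbar if
  # DragPage resolves ratios against a content box that excludes the
  # scrollbar; that reading is an assumption, not something stated here.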
def WillStartTracing(self, chrome_trace_config):
chrome_trace_config.EnableUMAHistograms(*TOUGH_SCROLLBAR_UMA)
class ScrollbarScrollingText100Page(ToughFastScrollbarScrollingPage):
BASE_NAME = 'text_scrollbar_100_pixels_per_second'
URL = 'file://../tough_scrolling_cases/text.html'
SPEED_IN_PIXELS_PER_SECOND = 100
class ScrollbarScrollingText400Page(ToughFastScrollbarScrollingPage):
  BASE_NAME = 'text_scrollbar_400_pixels_per_second'
URL = 'file://../tough_scrolling_cases/text.html'
SPEED_IN_PIXELS_PER_SECOND = 400
class ScrollbarScrollingText700Page(ToughFastScrollbarScrollingPage):
BASE_NAME = 'text_scrollbar_700_pixels_per_second'
URL = 'file://../tough_scrolling_cases/text.html'
SPEED_IN_PIXELS_PER_SECOND = 700
class ScrollbarScrollingText1200Page(ToughFastScrollbarScrollingPage):
BASE_NAME = 'text_scrollbar_1200_pixels_per_second'
URL = 'file://../tough_scrolling_cases/text.html'
SPEED_IN_PIXELS_PER_SECOND = 1200
class ScrollbarScrollingText2300Page(ToughFastScrollbarScrollingPage):
BASE_NAME = 'text_scrollbar_2300_pixels_per_second'
URL = 'file://../tough_scrolling_cases/text.html'
SPEED_IN_PIXELS_PER_SECOND = 2300
| 39.030769
| 79
| 0.715609
|
4a1267577ff7cca44769131af60a554bb53aae75
| 2,003
|
py
|
Python
|
src/merge.py
|
vishalmhjn/sumo_network_detectors
|
4fd267f535e93c97c0aa0236a91a6c159f43654b
|
[
"MIT"
] | null | null | null |
src/merge.py
|
vishalmhjn/sumo_network_detectors
|
4fd267f535e93c97c0aa0236a91a6c159f43654b
|
[
"MIT"
] | null | null | null |
src/merge.py
|
vishalmhjn/sumo_network_detectors
|
4fd267f535e93c97c0aa0236a91a6c159f43654b
|
[
"MIT"
] | null | null | null |
import geojson
import pandas as pd
import sys
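# Illustrative invocation ("munich" is the only scenario handled below):
#   python merge.py munich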
scenario = sys.argv[1]
if scenario =="munich":
OUT_FILE="../sharedstreets/detector_sumo_mapping_munich.csv"
with open('../sharedstreets/bast_munich.matched.geojson') as f:
munich_detectors = geojson.load(f)
with open('../sharedstreets/sumo_network_munich.matched.geojson') as f:
sumo_network = geojson.load(f)
df_detectors = pd.DataFrame(munich_detectors)
df_detectors['geometry'] = df_detectors['features'].apply(lambda x: x['geometry'])
df_detectors['shst_id'] = df_detectors['features'].apply(lambda x: x['properties']['shstReferenceId'])
df_detectors['length'] = df_detectors['features'].apply(lambda x: x['properties']['referenceLength'])
df_detectors['munich_id'] = df_detectors['features'].apply(lambda x: x['properties']['pp_id'])
df_detectors['direction'] = df_detectors['features'].apply(lambda x: x['properties']['pp_direction'])
df_detectors['detector_lat'] = df_detectors['features'].apply(lambda x: x['properties']['pp_lat'])
df_detectors['detector_lon'] = df_detectors['features'].apply(lambda x: x['properties']['pp_lon'])
df_detectors.drop(columns=['type'], inplace=True)
df_sumo_network = pd.DataFrame(sumo_network)
df_sumo_network['geometry'] = df_sumo_network['features'].apply(lambda x: x['geometry'])
df_sumo_network['shst_id'] = df_sumo_network['features'].apply(lambda x: x['properties']['shstReferenceId'])
df_sumo_network['sumo_id'] = df_sumo_network['features'].apply(lambda x: x['properties']['pp_id'])
df_sumo_network.drop(columns=['type'], inplace=True)
df_detector_sumo_map = pd.merge(df_detectors, df_sumo_network, left_on='shst_id', right_on='shst_id')
    df_detector_sumo_map = df_detector_sumo_map[~df_detector_sumo_map[['shst_id', 'length', 'munich_id', 'sumo_id']].duplicated(subset=None, keep='first')]
df_detector_sumo_map.to_csv(OUT_FILE, index=False)
else:
raise("Specify a valid scenario in CLI")
print("Finished!")
| 47.690476
| 132
| 0.717424
|
4a1267689e525b4856893f57c6c0b5dbb4f98936
| 3,192
|
py
|
Python
|
scripts/unused/unused-whole_brain_segmentation/02_winner_map.py
|
ofgulban/meso-MRI
|
15ef8e19aae6218833a06bf01418d3d83eafd8c7
|
[
"BSD-3-Clause"
] | 1
|
2022-01-21T13:48:01.000Z
|
2022-01-21T13:48:01.000Z
|
scripts/unused/unused-whole_brain_segmentation/02_winner_map.py
|
ofgulban/meso-MRI
|
15ef8e19aae6218833a06bf01418d3d83eafd8c7
|
[
"BSD-3-Clause"
] | null | null | null |
scripts/unused/unused-whole_brain_segmentation/02_winner_map.py
|
ofgulban/meso-MRI
|
15ef8e19aae6218833a06bf01418d3d83eafd8c7
|
[
"BSD-3-Clause"
] | 1
|
2022-01-21T13:48:08.000Z
|
2022-01-21T13:48:08.000Z
|
"""Winner maps from tissue probabilities."""
import os
import numpy as np
import nibabel as nb
# =============================================================================
NII_NAMES = [
'/home/faruk/data/DATA_MRI_NIFTI/derived/sub-01/T1_wholebrain/01_DNN_Segmentator/sub-01_ses-T2s_MP2RAGE_uni_pt7_reframed_tissue-probs-slow_map-1_Background.nii.gz',
'/home/faruk/data/DATA_MRI_NIFTI/derived/sub-01/T1_wholebrain/01_DNN_Segmentator/sub-01_ses-T2s_MP2RAGE_uni_pt7_reframed_tissue-probs-slow_map-2_White_matter.nii.gz',
'/home/faruk/data/DATA_MRI_NIFTI/derived/sub-01/T1_wholebrain/01_DNN_Segmentator/sub-01_ses-T2s_MP2RAGE_uni_pt7_reframed_tissue-probs-slow_map-3_Grey_matter.nii.gz',
'/home/faruk/data/DATA_MRI_NIFTI/derived/sub-01/T1_wholebrain/01_DNN_Segmentator/sub-01_ses-T2s_MP2RAGE_uni_pt7_reframed_tissue-probs-slow_map-4_CSF_extra-cerebral.nii.gz',
'/home/faruk/data/DATA_MRI_NIFTI/derived/sub-01/T1_wholebrain/01_DNN_Segmentator/sub-01_ses-T2s_MP2RAGE_uni_pt7_reframed_tissue-probs-slow_map-5_Ventricles_Lateral_5th.nii.gz',
'/home/faruk/data/DATA_MRI_NIFTI/derived/sub-01/T1_wholebrain/01_DNN_Segmentator/sub-01_ses-T2s_MP2RAGE_uni_pt7_reframed_tissue-probs-slow_map-6_Subcortical_structures.nii.gz',
'/home/faruk/data/DATA_MRI_NIFTI/derived/sub-01/T1_wholebrain/01_DNN_Segmentator/sub-01_ses-T2s_MP2RAGE_uni_pt7_reframed_tissue-probs-slow_map-7_Blood_vessels.nii.gz',
'/home/faruk/data/DATA_MRI_NIFTI/derived/sub-01/T1_wholebrain/01_DNN_Segmentator/sub-01_ses-T2s_MP2RAGE_uni_pt7_reframed_tissue-probs-slow_map-8_Sagittal_sinus.nii.gz',
]
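# Winner index -> tissue class, following the file order above:
# 0 background, 1 white matter, 2 grey matter, 3 CSF (extra-cerebral),
# 4 ventricles, 5 subcortical structures, 6 blood vessels, 7 sagittal sinus.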
OUTDIR = "/home/faruk/data/DATA_MRI_NIFTI/derived/sub-01/T1_wholebrain/02_segmentation"
OUT_NAME = "sub-01_ses-T2s_MP2RAGE_uni_segm"
# =============================================================================
print("Step_02: Create winner maps.")
# Output directory
if not os.path.exists(OUTDIR):
os.makedirs(OUTDIR)
print(" Output directory: {}\n".format(OUTDIR))
# -----------------------------------------------------------------------------
# Prepare
nii = nb.load(NII_NAMES[0])
dims = nii.shape
nr_files = len(NII_NAMES)
# Load probability maps
temp = np.zeros(dims + (nr_files,))
for i in range(nr_files):
nii_temp = nb.load(NII_NAMES[i])
temp[..., i] = nii_temp.get_fdata()
# Tweaks
if i == 1: # WM
temp[..., i] *= 2
elif i == 3: # CSF
temp[..., i] *= 0.1
# Winner maps
winner = np.argmax(temp, axis=-1)
img = nb.Nifti1Image(winner, affine=nii.affine, header=nii.header)
nb.save(img, os.path.join(OUTDIR, '{}_winner.nii.gz'.format(OUT_NAME)))
# Brain mask
mask = np.zeros(dims)
for i in [1, 2, 3, 6, 7]:
mask[winner == i] = 1
img = nb.Nifti1Image(mask, affine=nii.affine, header=nii.header)
nb.save(img, os.path.join(OUTDIR, '{}_brainmask.nii.gz'.format(OUT_NAME)))
# Rim file
rim = np.zeros(dims)
rim[winner == 1] = 2 # inner gm
rim[winner == 2] = 1 # pure gm
rim[winner == 3] = 3 # outer gm
rim[winner == 6] = 3 # outer gm
rim[winner == 7] = 3 # outer gm
img = nb.Nifti1Image(rim, affine=nii.affine, header=nii.header)
nb.save(img, os.path.join(OUTDIR, '{}_rim.nii.gz'.format(OUT_NAME)))
print('Finished.')
| 43.726027
| 180
| 0.696115
|
4a12678a79d932bf6bd6afcccefb4290c671e7b2
| 7,591
|
py
|
Python
|
tests/python/relay/aot/test_cpp_aot.py
|
rkimball/incubator-tvm
|
85e42b6af38ea3bd0c99c8208d7baed5086a8959
|
[
"Apache-2.0"
] | null | null | null |
tests/python/relay/aot/test_cpp_aot.py
|
rkimball/incubator-tvm
|
85e42b6af38ea3bd0c99c8208d7baed5086a8959
|
[
"Apache-2.0"
] | null | null | null |
tests/python/relay/aot/test_cpp_aot.py
|
rkimball/incubator-tvm
|
85e42b6af38ea3bd0c99c8208d7baed5086a8959
|
[
"Apache-2.0"
] | null | null | null |
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
import re
import sys
import textwrap
import numpy as np
import pytest
import tvm
from tvm import IRModule
from tvm import relay
from tvm.relay import backend, testing
from tvm.testing.aot import generate_ref_data
from tvm.micro.testing.aot_test_utils import AOT_DEFAULT_RUNNER
def test_error_c_interface():
interface_api = "c"
use_unpacked_api = False
test_runner = AOT_DEFAULT_RUNNER
two = relay.add(relay.const(1), relay.const(1))
func = relay.Function([], two)
with pytest.raises(
tvm.TVMError,
match=re.escape(
'Need unpacked-api == false (got: 0) and interface-api == "packed" (got: c) when '
"targeting c++ runtime"
),
):
tvm.relay.build(
IRModule.from_expr(func),
target="llvm",
executor=backend.Executor("aot", {"interface-api": "c"}),
)
enable_usmp = tvm.testing.parameter(True, False)
target_kind = tvm.testing.parameter("c", "llvm")
def test_conv2d(enable_usmp, target_kind):
RELAY_MODEL = textwrap.dedent(
"""\
#[version = "0.0.5"]
def @main(%data : Tensor[(1, 3, 64, 64), uint8], %weight : Tensor[(3, 3, 5, 5), int8]) {
%1 = nn.conv2d(
%data,
%weight,
padding=[2, 2],
channels=3,
kernel_size=[5, 5],
data_layout="NCHW",
kernel_layout="OIHW",
out_dtype="int32");
%2 = cast(nn.max_pool2d(%1, pool_size=[3, 3]), dtype="int8");
%3 = nn.conv2d(
%2,
%weight,
padding=[2, 2],
channels=3,
kernel_size=[5, 5],
data_layout="NCHW",
kernel_layout="OIHW",
out_dtype="int32");
%4 = nn.max_pool2d(%3, pool_size=[3, 3]);
%4
}
"""
)
ir_mod = tvm.parser.fromtext(RELAY_MODEL)
main_func = ir_mod["main"]
shape_dict = {p.name_hint: p.checked_type.concrete_shape for p in main_func.params}
type_dict = {p.name_hint: p.checked_type.dtype for p in main_func.params}
weight_data = np.ones(shape_dict["weight"]).astype(type_dict["weight"])
input_data = np.ones(shape_dict["data"]).astype(type_dict["data"])
params = {"weight": weight_data}
inputs = {"data": input_data}
ref_outputs = generate_ref_data(ir_mod, inputs, params)
with tvm.transform.PassContext(
opt_level=3, config={"tir.disable_vectorize": True, "tir.usmp.enable": enable_usmp}
):
mod = tvm.relay.build(
ir_mod,
params=params,
target=target_kind,
executor=backend.Executor("aot", {"interface-api": "packed"}),
)
temp_dir = tvm.contrib.utils.TempDirectory()
test_so_path = temp_dir / "test.so"
mod.export_library(test_so_path, cc="gcc", options=["-std=c11"])
loaded_mod = tvm.runtime.load_module(test_so_path)
runner = tvm.runtime.executor.AotModule(loaded_mod["default"](tvm.cpu(0)))
runner.set_input(**inputs)
runner.run()
assert (runner.get_output(0).asnumpy() == list(ref_outputs.values())[0]).all()
def test_mobilenet(enable_usmp, target_kind):
ir_mod, params = testing.mobilenet.get_workload(batch_size=1)
data_shape = [int(x) for x in ir_mod["main"].checked_type.arg_types[0].shape]
data = np.random.uniform(size=data_shape).astype("float32")
inputs = {"data": data}
ref_outputs = generate_ref_data(ir_mod, inputs, params)
with tvm.transform.PassContext(
opt_level=3, config={"tir.disable_vectorize": True, "tir.usmp.enable": enable_usmp}
):
mod = tvm.relay.build(
ir_mod,
params=params,
target=target_kind,
executor=backend.Executor("aot", {"interface-api": "packed"}),
)
temp_dir = tvm.contrib.utils.TempDirectory()
test_so_path = temp_dir / "test.so"
mod.export_library(test_so_path, cc="gcc", options=["-std=c11"])
loaded_mod = tvm.runtime.load_module(test_so_path)
runner = tvm.runtime.executor.AotModule(loaded_mod["default"](tvm.cpu(0)))
runner.set_input(**inputs)
runner.run()
assert (runner.get_output(0).asnumpy() == list(ref_outputs.values())[0]).all()
def test_module_list():
x = tvm.relay.var("x", tvm.relay.TensorType([1], dtype="float32"))
expr = tvm.relay.add(x, tvm.relay.Constant(tvm.nd.array(np.array([1], dtype="float32"))))
mod = tvm.relay.build(
tvm.IRModule.from_expr(tvm.relay.Function([x], expr)),
target="c",
executor=tvm.relay.backend.Executor("aot", {"interface-api": "packed"}),
mod_name="unusual_module_name_fred",
)
temp_dir = tvm.contrib.utils.TempDirectory()
test_so_path = temp_dir / "test.so"
mod.export_library(test_so_path, cc="gcc", options=["-std=c11"])
loaded_mod = tvm.runtime.load_module(test_so_path)
list_module_names = loaded_mod.get_function("list_module_names")
names_expected = ["unusual_module_name_fred"]
assert list(sorted(names_expected)) == list(sorted(list_module_names()))
def test_create_executor():
x = tvm.relay.var("x", tvm.relay.TensorType([1], dtype="float32"))
expr = tvm.relay.add(x, tvm.relay.Constant(tvm.nd.array(np.array([1], dtype="float32"))))
actual = relay.create_executor(
"aot", mod=tvm.IRModule.from_expr(tvm.relay.Function([x], expr)), target="c -executor=aot"
).evaluate()(np.array([2], dtype="float32"))
np.testing.assert_allclose(actual.numpy(), np.array([3], dtype="float32"))
def test_pass_wrong_device_arg():
x = tvm.relay.var("x", tvm.relay.TensorType([1], dtype="float32"))
expr = tvm.relay.add(x, tvm.relay.Constant(tvm.nd.array(np.array([1], dtype="float32"))))
with tvm.transform.PassContext(opt_level=3, config={"tir.disable_vectorize": True}):
mod = tvm.relay.build(
tvm.IRModule.from_expr(tvm.relay.Function([x], expr)),
target="c",
executor=backend.Executor("aot", {"interface-api": "packed"}),
)
temp_dir = tvm.contrib.utils.TempDirectory()
test_so_path = temp_dir / "test.so"
mod.export_library(test_so_path, cc="gcc", options=["-std=c11"])
loaded_mod = tvm.runtime.load_module(test_so_path)
with pytest.raises(tvm.TVMError) as cm:
tvm.runtime.executor.AotModule(loaded_mod["default"](tvm.cpu(0), tvm.cpu(0)))
    assert (
        "Check failed: devices_.size() == 1 (2 vs. 1) : Expect exactly 1 device passed."
        in str(cm.value)
    )
# TODO write asserts for # and type of device.
if __name__ == "__main__":
sys.exit(pytest.main([__file__] + sys.argv[1:]))
| 36.849515
| 98
| 0.636016
|
4a126826edc9dc3afa64f70a1e40c4536771f95b
| 1,588
|
py
|
Python
|
src/climsoft_api/api/obsscheduleclass/schema.py
|
faysal-ishtiaq/climsoft-api
|
46dacdeba5d935ee3b944df00731640170b87ccd
|
[
"MIT"
] | null | null | null |
src/climsoft_api/api/obsscheduleclass/schema.py
|
faysal-ishtiaq/climsoft-api
|
46dacdeba5d935ee3b944df00731640170b87ccd
|
[
"MIT"
] | 2
|
2022-01-16T15:41:27.000Z
|
2022-01-30T18:37:13.000Z
|
src/climsoft_api/api/obsscheduleclass/schema.py
|
openclimateinitiative/climsoft-api
|
3591d7499dd7777617b8086332dc83fab1af9588
|
[
"MIT"
] | 2
|
2021-12-22T21:50:19.000Z
|
2022-01-28T12:53:32.000Z
|
from typing import List
import climsoft_api.api.station.schema as station_schema
from climsoft_api.api.schema import BaseSchema, Response
from pydantic import constr, Field
class CreateObsScheduleClass(BaseSchema):
scheduleClass: constr(max_length=255) = Field(title="Schedule Class")
description: constr(max_length=255) = Field(title="Description")
refersTo: constr(max_length=255) = Field(title="Refers To")
class Config:
fields = {
"scheduleClass": "schedule_class",
"refersTo": "refers_to"
}
class UpdateObsScheduleClass(BaseSchema):
description: constr(max_length=255) = Field(title="Description")
refersTo: constr(max_length=255) = Field(title="Refers To")
class Config:
fields = {
"refersTo": "refers_to"
}
class ObsScheduleClass(CreateObsScheduleClass):
class Config:
orm_mode = True
allow_population_by_field_name = True
fields = {
"scheduleClass": "schedule_class",
"refersTo": "refers_to"
}
class ObsScheduleClassResponse(Response):
result: List[ObsScheduleClass] = Field(title="Result")
class ObsScheduleClassWithStation(ObsScheduleClass):
station: station_schema.Station = Field(title="Station")
class ObsScheduleClassWithStationResponse(Response):
result: List[ObsScheduleClassWithStation] = Field(title="Result")
class ObsScheduleClassQueryResponse(ObsScheduleClassResponse):
limit: int = Field(title="Limit")
page: int = Field(title="Page")
pages: int = Field(title="Pages")
| 28.357143
| 73
| 0.700252
|
4a12689caf78c60ffcbb4584cae4fe8899871c10
| 1,646
|
py
|
Python
|
tests/tests_conversations/tests_view_transactions/test_utils.py
|
borissimkin/moneykeeper-bot
|
45f7ed92be187db71d28c5326a5b62cb587c88bf
|
[
"MIT"
] | 2
|
2021-08-04T08:04:31.000Z
|
2022-01-21T13:00:28.000Z
|
tests/tests_conversations/tests_view_transactions/test_utils.py
|
borissimkin/moneykeeper-bot
|
45f7ed92be187db71d28c5326a5b62cb587c88bf
|
[
"MIT"
] | 2
|
2021-06-08T21:07:54.000Z
|
2021-09-08T01:46:50.000Z
|
tests/tests_conversations/tests_view_transactions/test_utils.py
|
borissimkin/moneykeeper-bot
|
45f7ed92be187db71d28c5326a5b62cb587c88bf
|
[
"MIT"
] | null | null | null |
import datetime
import unittest
from sqlalchemy.orm import sessionmaker
from bot.conversations.view_transactions.utils import make_list_transactions
from bot.models import Base, Earning, Consumption
from tests.test_models import engine
from tests.utils_models import example_user, add_example_user
Session = sessionmaker(bind=engine)
session = Session()
class TestMakeListTransactions(unittest.TestCase):
def setUp(self):
Base.metadata.create_all(engine)
def tearDown(self):
Base.metadata.drop_all(engine)
def test_make_list_transactions(self):
add_example_user(session)
session.add(Consumption(id=1,
user_id=1,
time_creation=datetime.datetime(2020, 1, 10, 12, 30, 00)))
session.add(Consumption(id=2,
user_id=1,
time_creation=datetime.datetime(2020, 1, 8, 12, 25, 00)))
session.add(Earning(id=1,
user_id=1,
time_creation=datetime.datetime(2020, 1, 9, 12, 25, 00)))
session.add(Consumption(id=3,
user_id=1,
time_creation=datetime.datetime(2020, 1, 12, 12, 25, 00)))
session.commit()
answer = make_list_transactions(session, example_user['telegram_user_id'])
expected = [Consumption(id=3), Consumption(id=1), Earning(id=1), Consumption(id=2)]
self.assertEqual(answer, expected)
| 41.15
| 98
| 0.634265
|
4a1269fb7062e258a27ae185b86810b11e124880
| 375
|
py
|
Python
|
script/deduplicate.py
|
SYSU-zhanglab/tRNA-m5C
|
25f77801fecdb06449c9a4754688180fa3a23006
|
[
"MIT"
] | 1
|
2021-07-14T10:01:38.000Z
|
2021-07-14T10:01:38.000Z
|
script/deduplicate.py
|
SYSU-zhanglab/tRNA-m5C
|
25f77801fecdb06449c9a4754688180fa3a23006
|
[
"MIT"
] | null | null | null |
script/deduplicate.py
|
SYSU-zhanglab/tRNA-m5C
|
25f77801fecdb06449c9a4754688180fa3a23006
|
[
"MIT"
] | 1
|
2022-03-04T04:18:56.000Z
|
2022-03-04T04:18:56.000Z
|
from Bio import SeqIO
import sys
data = {}
for seq in SeqIO.parse(sys.argv[1],"fasta"):
    # e.g. "tRNA-Ala-AGC-1-1" -> "tRNA-Ala-AGC-1" (the trailing copy number is dropped)
    ID = seq.id.split("-")
    new_id = "-".join(ID[:-1])
if new_id not in data:
data[new_id] = str(seq.seq).upper()
else:
if data[new_id] != str(seq.seq).upper():
data[seq.id] = str(seq.seq).upper()
for key, values in data.items():
print ">"+key
print values
| 22.058824
| 44
| 0.626667
|
4a126acb6b91dff87d9d5bfd94f440bf65b27b84
| 2,972
|
py
|
Python
|
thejoker/tests/test_data.py
|
stephtdouglas/thejoker
|
b1f2681cd72b6c04d19b24aadf818639c5f59ad0
|
[
"MIT"
] | null | null | null |
thejoker/tests/test_data.py
|
stephtdouglas/thejoker
|
b1f2681cd72b6c04d19b24aadf818639c5f59ad0
|
[
"MIT"
] | null | null | null |
thejoker/tests/test_data.py
|
stephtdouglas/thejoker
|
b1f2681cd72b6c04d19b24aadf818639c5f59ad0
|
[
"MIT"
] | null | null | null |
# Third-party
import astropy.time as atime
import astropy.units as u
import numpy as np
import pytest
try:
import matplotlib.pyplot as plt
HAS_MPL = True
except ImportError:
    HAS_MPL = False
# Package
from ..data import RVData
def test_rvdata():
# test various initializations
t = np.random.uniform(55555., 56012., size=1024)
rv = 100 * np.sin(0.5*t) * u.km/u.s
ivar = 1 / (np.random.normal(0,5,size=1024)*u.km/u.s)**2
RVData(t=t, rv=rv, ivar=ivar)
t = atime.Time(t, format='mjd', scale='utc')
RVData(t=t, rv=rv, ivar=ivar)
with pytest.raises(TypeError):
RVData(t=t, rv=rv.value, ivar=ivar)
with pytest.raises(TypeError):
RVData(t=t, rv=rv, ivar=ivar.value)
# pass both
with pytest.raises(ValueError):
RVData(t=t, rv=rv, ivar=ivar, stddev=np.sqrt(1/ivar))
# not velocity units
with pytest.raises(u.UnitsError):
RVData(t=t, rv=rv, ivar=ivar.value*u.km)
# no error
data = RVData(t=t, rv=rv)
assert np.isnan(data.stddev.value).all()
# shapes must be consistent
with pytest.raises(ValueError):
RVData(t=t[:-1], rv=rv, ivar=ivar)
with pytest.raises(ValueError):
RVData(t=t, rv=rv[:-1], ivar=ivar)
with pytest.raises(ValueError):
RVData(t=t, rv=rv, ivar=ivar[:-1])
# check that copy works
t = atime.Time(t, format='mjd', scale='utc')
data1 = RVData(t=t, rv=rv, ivar=ivar)
data2 = data1.copy()
data1._t_bmjd += 1.5
data1.rv *= 1.5
data1.ivar *= 1.5
assert np.all(data2._t_bmjd != data1._t_bmjd)
assert np.all(data2.rv != data1.rv)
assert np.all(data2.ivar != data1.ivar)
# check slicing
t = atime.Time(t, format='mjd', scale='utc')
data1 = RVData(t=t, rv=rv, ivar=ivar)
data2 = data1[:16]
assert len(data2) == 16
assert len(data2.t) == 16
assert len(data2.rv) == 16
assert len(data2.ivar) == 16
# check filtering NaN's
t = np.random.uniform(55555., 56012., size=128)
rv = 100 * np.sin(0.5*t)
rv[:16] = np.nan
rv = rv * u.km/u.s
ivar = 1 / (np.random.normal(0,5,size=t.size)*u.km/u.s)**2
data = RVData(t=t, rv=rv, ivar=ivar)
assert len(data) == (128-16)
@pytest.mark.skipif(not HAS_MPL, reason='matplotlib not installed')
def test_plotting():
# check that plotting at least succeeds with allowed arguments
t = np.random.uniform(55555., 56012., size=128)
rv = 100 * np.sin(0.5*t) * u.km/u.s
ivar = 1 / (np.random.normal(0,5,size=t.size)*u.km/u.s)**2
data = RVData(t=t, rv=rv, ivar=ivar)
data.plot()
# style
data.plot(color='r')
# custom axis
    fig, ax = plt.subplots(1, 1)
    data.plot(ax=ax)
# formatting
data.plot(rv_unit=u.m/u.s)
data.plot(rv_unit=u.m/u.s, time_format='jd')
data.plot(rv_unit=u.m/u.s, time_format=lambda x: x.utc.mjd)
# try with no errors
data = RVData(t=t, rv=rv)
data.plot()
data.plot(ecolor='r')
plt.close('all')
| 25.843478
| 67
| 0.60969
|
4a126c3781c64234251f71066acec5a68dee0bb3
| 1,282
|
py
|
Python
|
tests/models/test_rutherford_reimer_tem.py
|
drix00/pyelectroncrosssections
|
b9090f458f3e69af80d103d3883444248715d100
|
[
"Apache-2.0"
] | null | null | null |
tests/models/test_rutherford_reimer_tem.py
|
drix00/pyelectroncrosssections
|
b9090f458f3e69af80d103d3883444248715d100
|
[
"Apache-2.0"
] | null | null | null |
tests/models/test_rutherford_reimer_tem.py
|
drix00/pyelectroncrosssections
|
b9090f458f3e69af80d103d3883444248715d100
|
[
"Apache-2.0"
] | null | null | null |
#!/usr/bin/env python
# -*- coding: utf-8 -*-
"""
.. py:currentmodule:: tests.models.test_rutherford_reimer_tem
.. moduleauthor:: Hendrix Demers <hendrix.demers@mail.mcgill.ca>
Tests for the :py:mod:`eecs.models.rutherford_reimer_tem` module.
"""
###############################################################################
# Copyright 2021 Hendrix Demers
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###############################################################################
# Standard library modules.
# Third party modules.
# Local modules.
# Project modules.
# Globals and constants variables.
def test_is_discovered():
"""
Test used to validate the file is included in the tests
by the test framework.
"""
# assert False
assert True
| 27.869565
| 79
| 0.639626
|
4a126c5727d1712a568cdbc82ecbb856eb7fc3fb
| 8,755
|
py
|
Python
|
workBousaiTYO_baseline/predflowio/STDN_attention.py
|
deepkashiwa20/DeepCrowd
|
847bcc20ca36b521eead4ededa5d11b2fd2af30a
|
[
"MIT"
] | 4
|
2021-09-07T09:29:43.000Z
|
2022-03-28T07:18:16.000Z
|
workBousaiOSA_baseline/predflowio/STDN_attention.py
|
deepkashiwa20/DeepCrowd
|
847bcc20ca36b521eead4ededa5d11b2fd2af30a
|
[
"MIT"
] | null | null | null |
workBousaiOSA_baseline/predflowio/STDN_attention.py
|
deepkashiwa20/DeepCrowd
|
847bcc20ca36b521eead4ededa5d11b2fd2af30a
|
[
"MIT"
] | 2
|
2021-06-18T01:28:16.000Z
|
2021-08-10T01:24:49.000Z
|
from keras.layers import Layer
import keras.backend as K
class Attention(Layer):
def __init__(self, method=None, **kwargs):
self.supports_masking = True
        if method != 'lba' and method != 'ga' and method != 'cba' and method is not None:
raise ValueError('attention method is not supported')
self.method = method
super(Attention, self).__init__(**kwargs)
def build(self, input_shape):
if isinstance(input_shape, list):
self.att_size = input_shape[0][-1]
self.query_dim = input_shape[1][-1]
if self.method == 'ga' or self.method == 'cba':
self.Wq = self.add_weight(name='kernal_query_features', shape=(self.query_dim, self.att_size), initializer='glorot_normal', trainable=True)
else:
self.att_size = input_shape[-1]
if self.method == 'cba':
self.Wh = self.add_weight(name='kernal_hidden_features', shape=(self.att_size,self.att_size), initializer='glorot_normal', trainable=True)
if self.method == 'lba' or self.method == 'cba':
self.v = self.add_weight(name='query_vector', shape=(self.att_size, 1), initializer='zeros', trainable=True)
super(Attention, self).build(input_shape)
def call(self, inputs, mask=None):
'''
:param inputs: a list of tensor of length not larger than 2, or a memory tensor of size BxTXD1.
If a list, the first entry is memory, and the second one is query tensor of size BxD2 if any
:param mask: the masking entry will be directly discarded
:return: a tensor of size BxD1, weighted summing along the sequence dimension
'''
if isinstance(inputs, list) and len(inputs) == 2:
memory, query = inputs
if self.method is None:
return memory[:,-1,:]
elif self.method == 'cba':
hidden = K.dot(memory, self.Wh) + K.expand_dims(K.dot(query, self.Wq), 1)
hidden = K.tanh(hidden)
s = K.squeeze(K.dot(hidden, self.v), -1)
elif self.method == 'ga':
s = K.sum(K.expand_dims(K.dot(query, self.Wq), 1) * memory, axis=-1)
else:
s = K.squeeze(K.dot(memory, self.v), -1)
if mask is not None:
mask = mask[0]
else:
if isinstance(inputs, list):
if len(inputs) != 1:
raise ValueError('inputs length should not be larger than 2')
memory = inputs[0]
else:
memory = inputs
if self.method is None:
return memory[:,-1,:]
elif self.method == 'cba':
hidden = K.dot(memory, self.Wh)
hidden = K.tanh(hidden)
s = K.squeeze(K.dot(hidden, self.v), -1)
elif self.method == 'ga':
raise ValueError('general attention needs the second input')
else:
s = K.squeeze(K.dot(memory, self.v), -1)
s = K.softmax(s)
if mask is not None:
s *= K.cast(mask, dtype='float32')
sum_by_time = K.sum(s, axis=-1, keepdims=True)
s = s / (sum_by_time + K.epsilon())
return K.sum(memory * K.expand_dims(s), axis=1)
def compute_mask(self, inputs, mask=None):
return None
def compute_output_shape(self, input_shape):
if isinstance(input_shape, list):
att_size = input_shape[0][-1]
batch = input_shape[0][0]
else:
att_size = input_shape[-1]
batch = input_shape[0]
return (batch, att_size)
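# Hedged usage sketch (not part of the original file; shapes are
# illustrative). With the Keras functional API, Attention('cba') consumes a
# recurrent memory of shape (batch, time, dim) plus a query vector:
#
#   from keras.layers import Input, LSTM
#   seq = Input(shape=(10, 32))
#   query = Input(shape=(16,))
#   memory = LSTM(64, return_sequences=True)(seq)
#   context = Attention(method='cba')([memory, query])  # -> (batch, 64)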
class SimpleAttention(Layer):
def __init__(self, method=None, **kwargs):
self.supports_masking = True
        if method != 'lba' and method != 'ga' and method != 'cba' and method is not None:
raise ValueError('attention method is not supported')
self.method = method
super(SimpleAttention, self).__init__(**kwargs)
def build(self, input_shape):
if isinstance(input_shape, list):
self.att_size = input_shape[0][-1]
self.query_dim = input_shape[1][-1] + self.att_size
else:
self.att_size = input_shape[-1]
self.query_dim = self.att_size
if self.method == 'cba' or self.method == 'ga':
self.Wq = self.add_weight(name='kernal_query_features', shape=(self.query_dim, self.att_size),
initializer='glorot_normal', trainable=True)
if self.method == 'cba':
self.Wh = self.add_weight(name='kernal_hidden_features', shape=(self.att_size, self.att_size), initializer='glorot_normal', trainable=True)
if self.method == 'lba' or self.method == 'cba':
self.v = self.add_weight(name='query_vector', shape=(self.att_size, 1), initializer='zeros', trainable=True)
super(SimpleAttention, self).build(input_shape)
def call(self, inputs, mask=None):
'''
:param inputs: a list of tensor of length not larger than 2, or a memory tensor of size BxTXD1.
If a list, the first entry is memory, and the second one is query tensor of size BxD2 if any
:param mask: the masking entry will be directly discarded
:return: a tensor of size BxD1, weighted summing along the sequence dimension
'''
query = None
if isinstance(inputs, list):
memory = inputs[0]
            if len(inputs) > 2:
                raise ValueError('inputs length should not be larger than 2')
            elif len(inputs) > 1:
                query = inputs[1]
if isinstance(mask, list):
mask = mask[0]
else:
memory = inputs
input_shape = K.int_shape(memory)
        if len(input_shape) > 3:
input_length = input_shape[1]
memory = K.reshape(memory, (-1,) + input_shape[2:])
if mask is not None:
mask = K.reshape(mask, (-1,) + input_shape[2:-1])
if query is not None:
raise ValueError('query can be not supported')
last = memory[:,-1,:]
memory = memory[:,:-1,:]
if query is None:
query = last
else:
query = K.concatenate([query, last], axis=-1)
if self.method is None:
if len(input_shape) > 3:
output_shape = K.int_shape(last)
return K.reshape(last, (-1, input_shape[1], output_shape[-1]))
else:
return last
elif self.method == 'cba':
hidden = K.dot(memory, self.Wh) + K.expand_dims(K.dot(query, self.Wq), 1)
hidden = K.tanh(hidden)
s = K.squeeze(K.dot(hidden, self.v), -1)
elif self.method == 'ga':
s = K.sum(K.expand_dims(K.dot(query, self.Wq), 1) * memory, axis=-1)
else:
s = K.squeeze(K.dot(memory, self.v), -1)
s = K.softmax(s)
if mask is not None:
mask = mask[:,:-1]
s *= K.cast(mask, dtype='float32')
sum_by_time = K.sum(s, axis=-1, keepdims=True)
s = s / (sum_by_time + K.epsilon())
#return [K.concatenate([K.sum(memory * K.expand_dims(s), axis=1), last], axis=-1), s]
result = K.concatenate([K.sum(memory * K.expand_dims(s), axis=1), last], axis=-1)
        if len(input_shape) > 3:
output_shape = K.int_shape(result)
return K.reshape(result, (-1, input_shape[1], output_shape[-1]))
else:
return result
def compute_mask(self, inputs, mask=None):
if isinstance(inputs, list):
memory = inputs[0]
else:
memory = inputs
if len(K.int_shape(memory)) > 3 and mask is not None:
return K.all(mask, axis=-1)
else:
return None
def compute_output_shape(self, input_shape):
if isinstance(input_shape, list):
att_size = input_shape[0][-1]
seq_len = input_shape[0][1]
batch = input_shape[0][0]
else:
att_size = input_shape[-1]
seq_len = input_shape[1]
batch = input_shape[0]
#shape2 = (batch, seq_len, 1)
        if len(input_shape) > 3:
if self.method is not None:
shape1 = (batch, seq_len, att_size*2)
else:
shape1 = (batch, seq_len, att_size)
#return [shape1, shape2]
return shape1
else:
if self.method is not None:
shape1 = (batch, att_size*2)
else:
shape1 = (batch, att_size)
#return [shape1, shape2]
return shape1
| 41.29717
| 155
| 0.552142
|
4a126d7ae31577eb871f956b17604f9c769cb135
| 3,771
|
py
|
Python
|
barf/utils/utils.py
|
IMULMUL/barf-project
|
9547ef843b8eb021c2c32c140e36173c0b4eafa3
|
[
"BSD-2-Clause"
] | 1,395
|
2015-01-02T11:43:30.000Z
|
2022-03-30T01:15:26.000Z
|
barf/utils/utils.py
|
IMULMUL/barf-project
|
9547ef843b8eb021c2c32c140e36173c0b4eafa3
|
[
"BSD-2-Clause"
] | 54
|
2015-02-11T05:18:05.000Z
|
2021-12-10T08:45:39.000Z
|
barf/utils/utils.py
|
IMULMUL/barf-project
|
9547ef843b8eb021c2c32c140e36173c0b4eafa3
|
[
"BSD-2-Clause"
] | 207
|
2015-01-05T09:47:54.000Z
|
2022-03-30T01:15:29.000Z
|
# Copyright (c) 2014, Fundacion Dr. Manuel Sadosky
# All rights reserved.
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are met:
# 1. Redistributions of source code must retain the above copyright notice, this
# list of conditions and the following disclaimer.
# 2. Redistributions in binary form must reproduce the above copyright notice,
# this list of conditions and the following disclaimer in the documentation
# and/or other materials provided with the distribution.
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
# DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
# SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
# OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
def extract_sign_bit(value, size):
return value >> (size-1)
def twos_complement(value, size):
return 2**size - value
def extract_value(main_value, offset, size):
return (main_value >> offset) & 2**size-1
def insert_value(main_value, value_to_insert, offset, size):
main_value &= ~((2**size-1) << offset)
main_value |= (value_to_insert & 2**size-1) << offset
return main_value
def read_c_string(emulator, address, max_length=1024):
i = 0
data = bytearray()
while i < max_length:
b = emulator.read_memory(address + i, 1)
if b == 0:
break
data.append(b)
i += 1
return data.decode()
def write_c_string(emulator, address, string):
for i, b in enumerate(bytearray(string + "\x00", encoding='ascii')):
emulator.write_memory(address + i, 1, b)
class VariableNamer(object):
"""Variable Name Generator."""
def __init__(self, base_name, separator="_", counter=0):
self._base_name = base_name
self._counter_init = counter
self._counter_curr = counter
self._separator = separator
def get_init(self):
"""Return initial name.
"""
suffix = self._separator + "%s" % str(self._counter_init)
return self._base_name + suffix
def get_current(self):
"""Return current name.
"""
suffix = self._separator + "%s" % str(self._counter_curr)
return self._base_name + suffix
def get_next(self):
"""Return next name.
"""
self._counter_curr += 1
suffix = self._separator + "%s" % str(self._counter_curr)
return self._base_name + suffix
def reset(self):
"""Restart name counter.
"""
self._counter_curr = self._counter_init
class InvalidAddressError(Exception):
pass
class ExecutionCache(object):
def __init__(self):
self.__container = {}
def add(self, address, instruction, container):
# NOTE Does not take into account self modifying code.
if address in self.__container:
raise Exception("Invalid instruction")
self.__container[address] = (instruction, container)
def retrieve(self, address):
if address not in self.__container:
# print("cache miss!")
raise InvalidAddressError()
# print("cache hit!")
return self.__container[address]
| 30.41129
| 80
| 0.679661
|
4a126d9b80820403a87e6fa9374a27438d68d55b
| 5,316
|
py
|
Python
|
GPyOpt/acquisitions/LCB_oneq.py
|
FredericSauv/GPyOpt
|
ce59d70e890c95c65ab6b8c1d803d364fc825df5
|
[
"BSD-3-Clause"
] | 1
|
2020-04-02T21:05:51.000Z
|
2020-04-02T21:05:51.000Z
|
GPyOpt/acquisitions/LCB_oneq.py
|
FredericSauv/GPyOpt
|
ce59d70e890c95c65ab6b8c1d803d364fc825df5
|
[
"BSD-3-Clause"
] | null | null | null |
GPyOpt/acquisitions/LCB_oneq.py
|
FredericSauv/GPyOpt
|
ce59d70e890c95c65ab6b8c1d803d364fc825df5
|
[
"BSD-3-Clause"
] | null | null | null |
# Copyright (c) 2016, the GPyOpt Authors
# Licensed under the BSD 3-clause license (see LICENSE.txt)
from .base import AcquisitionBase
from ..util.general import change_of_var_Phi,change_of_var_Phi_withGradients
from ..models import gpmodel
import numpy as np
class AcquisitionLCB_oneq(AcquisitionBase):
"""
    GP-Lower Confidence Bound acquisition function, customized for one qubit:
    F_1q = 1/2 (1 + <x>_tgt.<x> + <y>_tgt.<y> + <z>_tgt.<z>)
         = 1/2 (1 + <x>_tgt (2.p_x - 1) + <y>_tgt (2.p_y - 1) + <z>_tgt (2.p_z - 1))
         = 1/2 (1 - <x>_tgt - <y>_tgt - <z>_tgt) + <x>_tgt.p_x + <y>_tgt.p_y + <z>_tgt.p_z
-- target is a list of [px, py, pz]
"""
analytical_gradient_prediction = False
def __init__(self, model, space, optimizer=None, cost_withGradients=None, exploration_weight=2, target = None, nb_output=1, acq_nbq = 1):
self.optimizer = optimizer
super(AcquisitionLCB_oneq, self).__init__(model, space, optimizer, nb_output=nb_output)
self.exploration_weight = exploration_weight
self.target = np.array(target)
self.tgt_p = np.array(target)
self.tgt_sigmas = 2 * self.tgt_p - 1
self.acq_nbq = acq_nbq
self.coeff_dim = 1/(2**acq_nbq)
if cost_withGradients is not None:
print('The set cost function is ignored! LCB acquisition does not make sense with cost.')
def _compute_acq(self, x):
"""
Computes the GP-Lower Confidence Bound
Use the mean and var of the folded distrib
"""
m, s = self.model.predict(x) #mean and std of the
if type(self.model) == gpmodel.GPModelCustomLik:
m_p, s_p = change_of_var_Phi(m, s)
else:
m_p, s_p = m, s
m_sigmas, s_sigmas = 2 * m_p - 1, 2 * s_p
m_acq = self.coeff_dim * (1 + np.sum(m_sigmas * self.tgt_sigmas, 1))
s_acq = self.coeff_dim * np.sqrt(np.sum(np.square(s_sigmas * self.tgt_sigmas), 1))
f_acqu = m_acq + self.exploration_weight * s_acq
return f_acqu[:, np.newaxis]
def _compute_acq_withGradients(self, x):
"""
Computes the GP-Lower Confidence Bound and its derivative
Use the mean and var of the folded distrib
"""
m, s, dmdx, dsdx = self.model.predict_withGradients(x)
if type(self.model) == gpmodel.GPModelCustomLik:
mp, vp, dmpdx, dvpdx = change_of_var_Phi_withGradients(m, s, dmdx, dsdx)
else:
mp, vp, dmpdx, dvpdx = m, np.square(s), dmdx, 2 * s * dsdx
msigma, vsigma, dmsigmadx, dvsigmadx = (2*mp -1), 4*vp, 2*dmpdx, 4*dvpdx
macq = self.coeff_dim * (1 + np.dot(msigma, self.tgt_sigmas))
sacq = self.coeff_dim * np.sqrt(np.dot(vsigma, np.square(self.tgt_sigmas)))
dmacqdx = self.coeff_dim * np.einsum('j,ijk', self.tgt_sigmas, dmsigmadx)
dsacqdx = self.coeff_dim**2 * np.einsum('j,ijk', np.square(self.tgt_sigmas), dvsigmadx)/(2*sacq)
#dsacqdx = np.einsum('j,ijk', np.square(self.coeff_dim * self.tgt_sigmas), dvsigmadx * vsigma[:,:, np.newaxis]) /sacq
f_acqu = macq + self.exploration_weight * sacq
df_acqu = dmacqdx + self.exploration_weight * dsacqdx
return f_acqu[:, np.newaxis], df_acqu[:, np.newaxis]
# eps = 1e-6
# x_eps = x + eps * np.ones(np.shape(x))
# m_eps, s_eps, dmdx_eps, dsdx_eps = self.model.predict_withGradients(x_eps)
# mp_eps, vp_eps, dmpdx_eps, dvpdx_eps = change_of_var_Phi_withGradients(m_eps, s_eps, dmdx_eps, dsdx_eps)
# (m_eps - m) /eps
# np.sum(dmdx, 1)
# (mp_eps - mp) /eps
# np.sum(dmpdx, 2)
# (vp_eps - vp) /eps
# np.sum(dvpdx, 2)
# msigma_eps, vsigma_eps, dmsigmadx_eps, dvsigmadx_eps = (2*mp_eps -1), 4*vp_eps, 2*dmpdx_eps, 4*dvpdx_eps
# (msigma_eps - msigma)/eps
# np.sum(dmsigmadx, 2)
# macq_eps = self.coeff_dim * (1 + np.dot(msigma_eps, self.tgt_sigmas))
# sacq_eps = self.coeff_dim * np.sqrt(np.dot(vsigma_eps, np.square(self.tgt_sigmas)))
# dmacqdx_eps = self.coeff_dim * np.einsum('j,ijk', self.tgt_sigmas, dmsigmadx_eps)
# dsacqdx_eps = self.coeff_dim * np.einsum('j,ijk', np.square(self.tgt_sigmas), dvsigmadx_eps)/(2*sacq_eps)
#
def _compute_acq_novar(self, x):
"""
Computes the acquisition function without the uncertainty part i.e. the expected value of the fom
"""
m, s = self.model.predict(x) #mean and std of the
m_p, s_p = change_of_var_Phi(m, s)
m_sigmas = 2 * m_p - 1
m_acq = self.coeff_dim * (1 + np.sum(m_sigmas * self.tgt_sigmas, 1))
return -m_acq[:, np.newaxis]
def _compute_acq_splitted(self, x):
"""
Computes the two parts (expected value, std) used in the acqu function
"""
m, s = self.model.predict(x) #mean and std of the
if type(self.model) == gpmodel.GPModelCustomLik:
m_p, s_p = change_of_var_Phi(m, s)
else:
m_p, s_p = m, s
m_sigmas, s_sigmas = 2 * m_p - 1, 2 * s_p
m_acq = self.coeff_dim * (1 + np.sum(m_sigmas * self.tgt_sigmas, 1))
s_acq = self.coeff_dim * np.sqrt(np.sum(np.square(s_sigmas * self.tgt_sigmas), 1))
return m_acq[:,np.newaxis], s_acq[:,np.newaxis]
| 45.435897
| 141
| 0.617381
|
4a126d9cd660685dd476a374006967a14f1596e4
| 905
|
py
|
Python
|
sunscraper/ecb/tasks.py
|
dekoza/demo_scraper
|
bd7eb97def49e8d284e0db05c89a86b8523843de
|
[
"MIT"
] | null | null | null |
sunscraper/ecb/tasks.py
|
dekoza/demo_scraper
|
bd7eb97def49e8d284e0db05c89a86b8523843de
|
[
"MIT"
] | null | null | null |
sunscraper/ecb/tasks.py
|
dekoza/demo_scraper
|
bd7eb97def49e8d284e0db05c89a86b8523843de
|
[
"MIT"
] | null | null | null |
from decimal import Decimal
import feedparser
import pendulum
import logging
from sunscraper.taskapp.celery import app
from . import models
logger = logging.getLogger('celery')
@app.task(bind=True)
def scrape_currency_rsses(self):
# TODO: parallelize
for currency in models.Currency.objects.all():
feed = feedparser.parse(currency.rss_link)
for entry in feed.entries:
date = pendulum.parse(entry.updated).in_tz('utc').date()
currency_code = entry.cb_targetcurrency
if currency_code != currency.code:
                logger.error(
                    f"Currency code mismatch parsing {currency.rss_link}: expected {currency.code}, got {currency_code}")
break
rate_value = Decimal(entry.cb_exchangerate.split('\n')[0])
models.Rate.objects.get_or_create(date=date, currency=currency, defaults={'value': rate_value})
| 33.518519
| 112
| 0.668508
|
4a126dde619afedf37c5a3aebecdab9ccbadf55f
| 102
|
py
|
Python
|
geral/livro_python_eficaz/src/capitulo1/item6_strider.py
|
flaviogf/Cursos
|
2b120dbcd24a907121f58482fdcdfa01b164872c
|
[
"MIT"
] | 2
|
2021-02-20T23:50:07.000Z
|
2021-08-15T03:04:35.000Z
|
geral/livro_python_eficaz/src/capitulo1/item6_strider.py
|
flaviogf/Cursos
|
2b120dbcd24a907121f58482fdcdfa01b164872c
|
[
"MIT"
] | 18
|
2019-08-07T02:33:00.000Z
|
2021-03-18T22:52:38.000Z
|
geral/livro_python_eficaz/src/capitulo1/item6_strider.py
|
flaviogf/Cursos
|
2b120dbcd24a907121f58482fdcdfa01b164872c
|
[
"MIT"
] | 2
|
2020-09-28T13:00:09.000Z
|
2021-12-30T12:21:08.000Z
|
a = ['red', 'blue', 'yellow', 'green']
print(a[::2])
print(a[1::2])
b = b'mongoose'
print(b[::-1])
| 11.333333
| 38
| 0.5
|
4a126e9d7ad426d4e9e6072e2c98f57eeaa9db74
| 60,591
|
py
|
Python
|
tensorflow/python/ops/resource_variable_ops.py
|
hsm207/tensorflow
|
8ab4678ba216c3ec8fa32f417cb667b056689939
|
[
"Apache-2.0"
] | 4
|
2021-06-15T17:26:07.000Z
|
2021-11-17T10:58:08.000Z
|
tensorflow/python/ops/resource_variable_ops.py
|
hsm207/tensorflow
|
8ab4678ba216c3ec8fa32f417cb667b056689939
|
[
"Apache-2.0"
] | 4
|
2020-09-26T00:55:50.000Z
|
2022-02-10T01:53:06.000Z
|
tensorflow/python/ops/resource_variable_ops.py
|
hsm207/tensorflow
|
8ab4678ba216c3ec8fa32f417cb667b056689939
|
[
"Apache-2.0"
] | 6
|
2018-12-20T01:35:20.000Z
|
2020-07-10T17:29:57.000Z
|
# Copyright 2016 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Ops to use variables as resources."""
# pylint: disable=g-bad-name
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import contextlib
from tensorflow.core.framework import attr_value_pb2
from tensorflow.core.framework import variable_pb2
from tensorflow.python import pywrap_tensorflow
from tensorflow.python.eager import context
from tensorflow.python.eager import tape
from tensorflow.python.framework import cpp_shape_inference_pb2
from tensorflow.python.framework import dtypes
from tensorflow.python.framework import ops
from tensorflow.python.framework import tensor_shape
from tensorflow.python.ops import array_ops
from tensorflow.python.ops import gen_array_ops
from tensorflow.python.ops import gen_resource_variable_ops
from tensorflow.python.ops import gen_state_ops
from tensorflow.python.ops import math_ops
from tensorflow.python.ops import variables
# go/tf-wildcard-import
# pylint: disable=wildcard-import
from tensorflow.python.ops.gen_resource_variable_ops import *
# pylint: enable=wildcard-import
from tensorflow.python.training.checkpointable import base as checkpointable
from tensorflow.python.util import compat
def get_resource_handle_data(graph_op):
assert ops._USE_C_SHAPES # pylint: disable=protected-access
assert type(graph_op) == ops.Tensor # pylint: disable=unidiomatic-typecheck
handle_data = pywrap_tensorflow.GetHandleShapeAndType(
graph_op.graph._c_graph, graph_op._as_tf_output()) # pylint: disable=protected-access
return cpp_shape_inference_pb2.CppShapeInferenceResult.HandleData.FromString(
compat.as_bytes(handle_data))
def eager_safe_variable_handle(shape, dtype, shared_name, name, graph_mode):
"""Creates a variable handle with information to do shape inference."""
container = ops.get_default_graph()._container # pylint: disable=protected-access
if container is None:
container = ""
handle = gen_resource_variable_ops.var_handle_op(shape=shape, dtype=dtype,
shared_name=shared_name,
name=name,
container=container)
if graph_mode:
return handle
# We do not want two distinct ResourceVariable objects for the same
# underlying resource in the runtime.
# When in eager mode, explicitly ensure so here. When in graph mode, it's
# ensured by always generating different variable names.
exists = gen_resource_variable_ops.var_is_initialized_op(handle)
if exists:
raise ValueError("variable object with name '%s' already created. Use "
"get_variable() if reuse is desired." %
shared_name)
with context.graph_mode(), ops.Graph().as_default() as graph:
h = gen_resource_variable_ops.var_handle_op(shape=shape, dtype=dtype,
shared_name=shared_name,
name=name,
container=container)
# Tensor._handle_data contains information for the shape-inference code to
# know the shape and dtype of the variable pointed to by a handle. Since
# shape inference doesn't run in eager mode we copy this data here for when
# the handle is captured by an eager mode function.
# pylint: disable=protected-access
if ops._USE_C_SHAPES:
handle._handle_data = get_resource_handle_data(h)
else:
if h._handle_data is None:
ops.set_shape_and_handle_data_for_outputs(h.op)
handle._handle_data = h._handle_data
# pylint: enable=protected-access
# Clean up op->graph->op reference cycles.
ops.dismantle_graph(graph)
return handle
@contextlib.contextmanager
def _handle_graph(handle):
# Note: might have an eager tensor but not be executing eagerly when building
# functions.
if (context.executing_eagerly() or isinstance(handle, ops.EagerTensor)
or ops.has_default_graph()):
yield
else:
with handle.graph.as_default():
yield
class EagerResourceDeleter(object):
"""An object which cleans up a resource handle.
An alternative to defining a __del__ method on an object. The intended use is
that ResourceVariables or other objects with resource handles will maintain a
single reference to this object. When the parent object is collected, this
object will be too. Even if the parent object is part of a reference cycle,
the cycle will be collectable.
"""
def __init__(self, handle, handle_device):
if not isinstance(handle, ops.Tensor):
raise ValueError(
("Passed handle=%s to EagerResourceDeleter. Was expecting a handle "
"Tensor." % (handle,)))
self._handle = handle
self._handle_device = handle_device
def __del__(self):
# Resources follow object-identity when executing eagerly, so it is safe to
# delete the resource we have a handle to.
try:
# This resource was created in eager mode. However, this destructor may be
# running in graph mode (especially during unit tests). To clean up
# successfully, we switch back into eager mode temporarily.
with context.eager_mode():
with ops.device(self._handle_device):
gen_resource_variable_ops.destroy_resource_op(
self._handle, ignore_lookup_error=True)
except TypeError:
# Suppress some exceptions, mainly for the case when we're running on
# module deletion. Things that can go wrong include the context module
# already being unloaded, self._handle._handle_data no longer being
# valid, and so on. Printing warnings in these cases is silly
# (exceptions raised from __del__ are printed as warnings to stderr).
pass # 'NoneType' object is not callable when the handle has been
# partially unloaded.
except AttributeError:
pass # 'NoneType' object has no attribute 'eager_mode' when context has
# been unloaded. Will catch other module unloads as well.
def shape_safe_assign_variable_handle(handle, shape, value, name=None):
"""Helper that checks shape compatibility and assigns variable."""
with _handle_graph(handle):
value_tensor = ops.convert_to_tensor(value)
shape.assert_is_compatible_with(value_tensor.shape)
return gen_resource_variable_ops.assign_variable_op(handle,
value_tensor,
name=name)
# TODO(apassos) make this be variables.Variable
class ResourceVariable(variables.RefVariable):
"""Variable based on resource handles.
See the [Variables How To](https://tensorflow.org/guide/variables)
for a high level overview.
A `ResourceVariable` allows you to maintain state across subsequent calls to
session.run.
The `ResourceVariable` constructor requires an initial value for the variable,
which can be a `Tensor` of any type and shape. The initial value defines the
type and shape of the variable. After construction, the type and shape of
the variable are fixed. The value can be changed using one of the assign
methods.
Just like any `Tensor`, variables created with
`tf.Variable(use_resource=True)` can be used as inputs for other Ops in the
graph. Additionally, all the operators overloaded for the `Tensor` class are
carried over to variables, so you can also add nodes to the graph by just
doing arithmetic on variables.
Unlike ref-based variable, a ResourceVariable has well-defined semantics. Each
usage of a ResourceVariable in a TensorFlow graph adds a read_value operation
to the graph. The Tensors returned by a read_value operation are guaranteed to
see all modifications to the value of the variable which happen in any
operation on which the read_value depends (either directly, indirectly, or
via a control dependency) and guaranteed to not see any modification to the
value of the variable from operations that depend on the read_value operation.
Updates from operations that have no dependency relationship to the read_value
operation might or might not be visible to read_value.
For example, if there is more than one assignment to a ResourceVariable in
a single session.run call there is a well-defined value for each operation
which uses the variable's value if the assignments and the read are connected
by edges in the graph. Consider the following example, in which two writes
can cause tf.Variable and tf.ResourceVariable to behave differently:
```python
a = tf.Variable(1.0, use_resource=True)
a.initializer.run()
assign = a.assign(2.0)
with tf.control_dependencies([assign]):
b = a.read_value()
with tf.control_dependencies([b]):
other_assign = a.assign(3.0)
with tf.control_dependencies([other_assign]):
# Will print 2.0 because the value was read before other_assign ran. If
# `a` was a tf.Variable instead, 2.0 or 3.0 could be printed.
tf.Print(b, [b]).eval()
```
"""
def __init__(self,
initial_value=None,
trainable=True,
collections=None,
validate_shape=True,
caching_device=None,
name=None,
dtype=None,
variable_def=None,
import_scope=None,
constraint=None):
"""Creates a variable.
Args:
initial_value: A `Tensor`, or Python object convertible to a `Tensor`,
which is the initial value for the Variable. The initial value must have
a shape specified unless `validate_shape` is set to False. Can also be a
callable with no argument that returns the initial value when called.
(Note that initializer functions from init_ops.py must first be bound
to a shape before being used here.)
trainable: If `True`, the default, also adds the variable to the graph
collection `GraphKeys.TRAINABLE_VARIABLES`. This collection is used as
the default list of variables to use by the `Optimizer` classes.
collections: List of graph collections keys. The new variable is added to
these collections. Defaults to `[GraphKeys.GLOBAL_VARIABLES]`.
validate_shape: Ignored. Provided for compatibility with tf.Variable.
caching_device: Optional device string or function describing where the
Variable should be cached for reading. Defaults to the Variable's
device. If not `None`, caches on another device. Typical use is to
cache on the device where the Ops using the Variable reside, to
deduplicate copying through `Switch` and other conditional statements.
name: Optional name for the variable. Defaults to `'Variable'` and gets
uniquified automatically.
dtype: If set, initial_value will be converted to the given type.
If None, either the datatype will be kept (if initial_value is
a Tensor) or float32 will be used (if it is a Python object convertible
to a Tensor).
variable_def: `VariableDef` protocol buffer. If not None, recreates the
`ResourceVariable` object with its contents. `variable_def` and other
arguments (except for import_scope) are mutually exclusive.
import_scope: Optional `string`. Name scope to add to the
ResourceVariable. Only used when `variable_def` is provided.
constraint: An optional projection function to be applied to the variable
after being updated by an `Optimizer` (e.g. used to implement norm
constraints or value constraints for layer weights). The function must
take as input the unprojected Tensor representing the value of the
variable and return the Tensor for the projected value
(which must have the same shape). Constraints are not safe to
use when doing asynchronous distributed training.
Raises:
ValueError: If the initial value is not specified, or does not have a
shape and `validate_shape` is `True`.
@compatibility(eager)
When Eager Execution is enabled, the default for the `collections` argument
is `None`, which signifies that this `Variable` will not be added to any
collections.
@end_compatibility
"""
if variable_def:
if initial_value is not None:
raise ValueError("variable_def and initial_value are mutually "
"exclusive.")
if context.executing_eagerly():
raise ValueError("Creating ResourceVariable from variable_def is "
"not supported when eager execution is enabled.")
self._init_from_proto(variable_def, import_scope=import_scope)
else:
self._init_from_args(
initial_value=initial_value,
trainable=trainable,
collections=collections,
validate_shape=validate_shape,
caching_device=caching_device,
name=name,
dtype=dtype,
constraint=constraint)
# pylint: disable=unused-argument
def _init_from_args(self,
initial_value=None,
trainable=True,
collections=None,
validate_shape=True,
caching_device=None,
name=None,
dtype=None,
constraint=None):
"""Creates a variable.
Args:
initial_value: A `Tensor`, or Python object convertible to a `Tensor`,
which is the initial value for the Variable. The initial value must have
a shape specified unless `validate_shape` is set to False. Can also be a
callable with no argument that returns the initial value when called.
(Note that initializer functions from init_ops.py must first be bound
to a shape before being used here.)
trainable: If `True`, the default, also adds the variable to the graph
collection `GraphKeys.TRAINABLE_VARIABLES`. This collection is used as
the default list of variables to use by the `Optimizer` classes.
collections: List of graph collections keys. The new variable is added to
these collections. Defaults to `[GraphKeys.GLOBAL_VARIABLES]`.
validate_shape: Ignored. Provided for compatibility with tf.Variable.
caching_device: Optional device string or function describing where the
Variable should be cached for reading. Defaults to the Variable's
device. If not `None`, caches on another device. Typical use is to
cache on the device where the Ops using the Variable reside, to
deduplicate copying through `Switch` and other conditional statements.
name: Optional name for the variable. Defaults to `'Variable'` and gets
uniquified automatically.
dtype: If set, initial_value will be converted to the given type.
If None, either the datatype will be kept (if initial_value is
a Tensor) or float32 will be used (if it is a Python object convertible
to a Tensor).
constraint: An optional projection function to be applied to the variable
after being updated by an `Optimizer` (e.g. used to implement norm
constraints or value constraints for layer weights). The function must
take as input the unprojected Tensor representing the value of the
variable and return the Tensor for the projected value
(which must have the same shape). Constraints are not safe to
use when doing asynchronous distributed training.
Raises:
ValueError: If the initial value is not specified, or does not have a
shape and `validate_shape` is `True`.
@compatibility(eager)
When Eager Execution is enabled, variables are never added to collections.
It is not implicitly added to the `GLOBAL_VARIABLES` or
`TRAINABLE_VARIABLES` collections, and the `collections` argument is
ignored.
@end_compatibility
"""
if initial_value is None:
raise ValueError("initial_value must be specified.")
init_from_fn = callable(initial_value)
if isinstance(initial_value, ops.Tensor) and hasattr(
initial_value, "graph") and initial_value.graph.building_function:
raise ValueError("Tensor-typed variable initializers must either be "
"wrapped in an init_scope or callable "
"(e.g., `tf.Variable(lambda : "
"tf.truncated_normal([10, 40]))`) when building "
"functions. Please file a feature request if this "
"restriction inconveniences you.")
if collections is None:
collections = [ops.GraphKeys.GLOBAL_VARIABLES]
if not isinstance(collections, (list, tuple, set)):
raise ValueError(
"collections argument to Variable constructor must be a list, tuple, "
"or set. Got %s of type %s" % (collections, type(collections)))
if constraint is not None and not callable(constraint):
raise ValueError("The `constraint` argument must be a callable.")
if isinstance(initial_value, checkpointable.CheckpointInitialValue):
self._maybe_initialize_checkpointable()
self._update_uid = initial_value.checkpoint_position.restore_uid
initial_value = initial_value.wrapped_value
self._trainable = trainable
if trainable and ops.GraphKeys.TRAINABLE_VARIABLES not in collections:
collections = list(collections) + [ops.GraphKeys.TRAINABLE_VARIABLES]
self._save_slice_info = None
# Store the graph key so optimizers know how to only retrieve variables from
# this graph.
self._graph_key = ops.get_default_graph()._graph_key # pylint: disable=protected-access
with ops.init_scope():
self._in_graph_mode = not context.executing_eagerly()
with ops.name_scope(name, "Variable", []
if init_from_fn else [initial_value]) as name:
# pylint: disable=protected-access
handle_name = ops._name_from_scope_name(name)
if self._in_graph_mode:
shared_name = handle_name
else:
# When in eager mode use a uid for the shared_name, to prevent
# accidental sharing.
shared_name = "%s_%d" % (handle_name, ops.uid())
# Use attr_scope and device(None) to simulate the behavior of
# colocate_with when the variable we want to colocate with doesn't
# yet exist.
attr = attr_value_pb2.AttrValue(
list=attr_value_pb2.AttrValue.ListValue(
s=[compat.as_bytes("loc:@%s" % handle_name)]))
with ops.get_default_graph()._attr_scope({"_class": attr}):
with ops.name_scope("Initializer"), ops.device(None):
initial_value = ops.convert_to_tensor(
initial_value() if init_from_fn else initial_value,
name="initial_value", dtype=dtype)
self._handle = eager_safe_variable_handle(
shape=initial_value.get_shape(),
dtype=initial_value.dtype.base_dtype,
shared_name=shared_name,
name=name,
graph_mode=self._in_graph_mode)
self._shape = initial_value.shape
# pylint: disable=protected-access
if (self._in_graph_mode and initial_value is not None and
initial_value.op._get_control_flow_context() is not None):
raise ValueError(
"Initializer for variable %s is from inside a control-flow "
"construct, such as a loop or conditional. When creating a "
"variable inside a loop or conditional, use a lambda as the "
"initializer." % name)
# pylint: enable=protected-access
self._unique_id = shared_name
self._initial_value = initial_value if self._in_graph_mode else None
self._handle_name = handle_name + ":0"
self._dtype = initial_value.dtype.base_dtype
self._constraint = constraint
if self._in_graph_mode:
with ops.name_scope("IsInitialized"):
self._is_initialized_op = (
gen_resource_variable_ops.var_is_initialized_op(self._handle))
if initial_value is not None:
with ops.name_scope("Assign") as n, ops.colocate_with(self._handle):
self._initializer_op = (
gen_resource_variable_ops.assign_variable_op(
self._handle,
self._try_guard_against_uninitialized_dependencies(
initial_value),
name=n))
with ops.name_scope("Read"), ops.colocate_with(self._handle):
# Manually assign reads to the handle's device to avoid log
# messages.
with ops.device(self._handle.device):
value = self._read_variable_op()
self._graph_element = value
if caching_device is not None:
# Variables may be created in a tf.device() or ops.colocate_with()
# context. At the same time, users would expect caching device to
# be independent of this context, and/or would not expect the
# current device context to be merged with the caching device
# spec. Therefore we reset the colocation stack before creating
# the cached value. Note that resetting the colocation stack will
# also reset the device stack.
with ops.colocate_with(None, ignore_existing=True):
with ops.device(caching_device):
self._cached_value = array_ops.identity(value)
else:
self._cached_value = None
else:
gen_resource_variable_ops.assign_variable_op(self._handle,
initial_value)
self._is_initialized_op = None
self._initializer_op = None
self._graph_element = None
if caching_device:
with ops.device(caching_device):
self._cached_value = self._read_variable_op()
else:
self._cached_value = None
if not context.executing_eagerly():
# Eager variables are only added to collections if they are part of an
# eager variable store (otherwise in an interactive session they would
# hog memory and cause OOM). This is done in ops/variable_scope.py.
ops.add_to_collections(collections, self)
elif ops.GraphKeys.GLOBAL_STEP in collections:
ops.add_to_collections(ops.GraphKeys.GLOBAL_STEP, self)
if not self._in_graph_mode:
# After the handle has been created, set up a way to clean it up when
# executing eagerly. We'll hold the only reference to the deleter, so that
# when this object is garbage collected the deleter will be too. This
# means ResourceVariables can be part of reference cycles without those
# cycles being uncollectable, and means that no __del__ will be defined at
# all in graph mode.
self._handle_deleter = EagerResourceDeleter(
handle=self._handle, handle_device=self._handle.device)
self._cached_shape_as_list = None
def _init_from_proto(self, variable_def, import_scope=None):
"""Initializes from `VariableDef` proto."""
# Note that init_from_proto is currently not supported in Eager mode.
assert not context.executing_eagerly()
self._in_graph_mode = True
assert isinstance(variable_def, variable_pb2.VariableDef)
if not variable_def.is_resource:
raise ValueError("Trying to restore Variable as ResourceVariable.")
# Create from variable_def.
g = ops.get_default_graph()
self._handle = g.as_graph_element(
ops.prepend_name_scope(
variable_def.variable_name, import_scope=import_scope))
self._shape = tensor_shape.TensorShape(
self._handle.op.get_attr("shape"))
self._handle_name = self._handle.name
self._unique_id = self._handle_name
self._initializer_op = g.as_graph_element(
ops.prepend_name_scope(
variable_def.initializer_name, import_scope=import_scope))
# Check whether initial_value_name exists for backwards compatibility.
if (hasattr(variable_def, "initial_value_name") and
variable_def.initial_value_name):
self._initial_value = g.as_graph_element(
ops.prepend_name_scope(variable_def.initial_value_name,
import_scope=import_scope))
else:
self._initial_value = None
self._trainable = getattr(variable_def, "trainable", True)
if variable_def.snapshot_name:
snapshot = g.as_graph_element(
ops.prepend_name_scope(
variable_def.snapshot_name, import_scope=import_scope))
self._cached_value = snapshot
while snapshot.op.type != "ReadVariableOp":
snapshot = snapshot.op.inputs[0]
self._graph_element = snapshot
else:
self._cached_value = None
# Legacy case for protos without the snapshot name; assume it's the
# following.
self._graph_element = g.get_tensor_by_name(
self._handle.op.name + "/Read/ReadVariableOp:0")
if variable_def.HasField("save_slice_info_def"):
self._save_slice_info = variables.Variable.SaveSliceInfo(
save_slice_info_def=variable_def.save_slice_info_def,
import_scope=import_scope)
else:
self._save_slice_info = None
self._caching_device = None
self._dtype = dtypes.as_dtype(self._handle.op.get_attr("dtype"))
self._constraint = None
self._cached_shape_as_list = None
@contextlib.contextmanager
def _assign_dependencies(self):
"""Makes assignments depend on the cached value, if any.
    This prevents undefined behavior with reads not ordered with respect to
    writes.
Yields:
None.
"""
if self._cached_value is not None:
with ops.control_dependencies([self._cached_value]):
yield
else:
yield
def __nonzero__(self):
return self.__bool__()
def __bool__(self):
return bool(self.read_value())
def __copy__(self):
return self
def __deepcopy__(self, memo):
if not context.executing_eagerly():
raise NotImplementedError(
"__deepcopy__() is only available when eager execution is enabled.")
copied_variable = ResourceVariable(
initial_value=self.read_value(),
trainable=self._trainable,
constraint=self._constraint,
dtype=self._dtype,
name=self._shared_name + "_copy")
memo[self._unique_id] = copied_variable
return copied_variable
@property
def dtype(self):
"""The dtype of this variable."""
return self._dtype
@property
def device(self):
"""The device this variable is on."""
return self._handle.device
@property
def graph(self):
"""The `Graph` of this variable."""
return self._handle.graph
@property
def name(self):
"""The name of the handle for this variable."""
return self._handle_name
@property
def shape(self):
"""The shape of this variable."""
return self._shape
def _shape_as_list(self):
if self._cached_shape_as_list:
return self._cached_shape_as_list
if self.shape.ndims is None:
return None
self._cached_shape_as_list = [dim.value for dim in self.shape.dims]
return self._cached_shape_as_list
def _shape_tuple(self):
shape = self._shape_as_list()
if shape is None:
return None
return tuple(shape)
@property
def create(self):
"""The op responsible for initializing this variable."""
if not self._in_graph_mode:
raise RuntimeError("Calling create is not supported when eager execution"
" is enabled.")
return self._initializer_op
@property
def handle(self):
"""The handle by which this variable can be accessed."""
return self._handle
def value(self):
"""A cached operation which reads the value of this variable."""
if self._cached_value is not None:
return self._cached_value
with ops.colocate_with(None, ignore_existing=True):
with ops.device(self._handle.device):
return self._read_variable_op()
def _as_graph_element(self):
"""Conversion function for Graph.as_graph_element()."""
return self._graph_element
@property
def initializer(self):
"""The op responsible for initializing this variable."""
return self._initializer_op
@property
def initial_value(self):
"""Returns the Tensor used as the initial value for the variable."""
if context.executing_eagerly():
raise RuntimeError("initial_value not supported in EAGER mode.")
return self._initial_value
@property
def constraint(self):
"""Returns the constraint function associated with this variable.
Returns:
The constraint function that was passed to the variable constructor.
Can be `None` if no constraint was passed.
"""
return self._constraint
@property
def op(self):
"""The op for this variable."""
return self._handle.op
def eval(self, session=None):
"""Evaluates and returns the value of this variable."""
if context.executing_eagerly():
raise RuntimeError("Trying to eval in EAGER mode")
return self._graph_element.eval(session=session)
def numpy(self):
if context.executing_eagerly():
return self.read_value().numpy()
raise NotImplementedError(
"numpy() is only available when eager execution is enabled.")
def count_up_to(self, limit):
"""Increments this variable until it reaches `limit`.
When that Op is run it tries to increment the variable by `1`. If
incrementing the variable would bring it above `limit` then the Op raises
the exception `OutOfRangeError`.
If no error is raised, the Op outputs the value of the variable before
the increment.
    This is essentially a shortcut for `tf.count_up_to(self, limit)`.
Args:
limit: value at which incrementing the variable raises an error.
Returns:
A `Tensor` that will hold the variable value before the increment. If no
other Op modifies this variable, the values produced will all be
distinct.
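    Example (a minimal sketch, assuming eager execution):
    ```python
    v = tf.Variable(0, dtype=tf.int32, use_resource=True)
    v.count_up_to(2)  # ==> 0; the variable is now 1
    v.count_up_to(2)  # ==> 1; the variable is now 2
    v.count_up_to(2)  # raises OutOfRangeError
    ```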
"""
return gen_state_ops.resource_count_up_to(self.handle, limit=limit,
T=self.dtype)
def _set_save_slice_info(self, save_slice_info):
"""Sets the slice info for this `ResourceVariable`.
Args:
save_slice_info: A `Variable.SaveSliceInfo` object.
"""
self._save_slice_info = save_slice_info
def _get_save_slice_info(self):
return self._save_slice_info
def _read_variable_op(self):
if self.trainable:
tape.variable_accessed(self)
result = gen_resource_variable_ops.read_variable_op(self._handle,
self._dtype)
if not context.executing_eagerly():
# Note that if a control flow context is active the input of the read op
# might not actually be the handle. This line bypasses it.
tape.record_operation(
"ReadVariableOp", [result], [self._handle], lambda x: [x])
return result
def read_value(self):
"""Constructs an op which reads the value of this variable.
Should be used when there are multiple reads, or when it is desirable to
read the value only after some condition is true.
Returns:
the read operation.
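    Example (a minimal sketch, assuming eager execution):
    ```python
    v = tf.Variable(1.0, use_resource=True)
    v.assign_add(1.0)
    v.read_value()  # ==> 2.0
    ```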
"""
with ops.name_scope("Read"):
# Ensure we read the variable in the same device as the handle.
with ops.device(self._handle.device):
value = self._read_variable_op()
# Return an identity so it can get placed on whatever device the context
# specifies instead of the device where the variable is.
return array_ops.identity(value)
def sparse_read(self, indices, name=None):
"""Reads the value of this variable sparsely, using `gather`."""
with ops.name_scope("Gather" if name is None else name) as name:
if self.trainable:
tape.variable_accessed(self)
value = gen_resource_variable_ops.resource_gather(
self._handle, indices, dtype=self._dtype, name=name)
return array_ops.identity(value)
def to_proto(self, export_scope=None):
"""Converts a `ResourceVariable` to a `VariableDef` protocol buffer.
Args:
export_scope: Optional `string`. Name scope to remove.
Raises:
RuntimeError: If run in EAGER mode.
Returns:
A `VariableDef` protocol buffer, or `None` if the `Variable` is not
in the specified name scope.
"""
if context.executing_eagerly():
raise RuntimeError("to_proto not supported in EAGER mode.")
if export_scope is None or self.handle.name.startswith(export_scope):
var_def = variable_pb2.VariableDef()
var_def.variable_name = ops.strip_name_scope(self.handle.name,
export_scope)
if self._initial_value is not None:
# This is inside an if-statement for backwards compatibility, since
# self._initial_value might be None for variables constructed from old
# protos.
var_def.initial_value_name = ops.strip_name_scope(
self._initial_value.name, export_scope)
var_def.initializer_name = ops.strip_name_scope(self.initializer.name,
export_scope)
if self._cached_value is not None:
var_def.snapshot_name = ops.strip_name_scope(self._cached_value.name,
export_scope)
else:
# Store the graph_element here
var_def.snapshot_name = ops.strip_name_scope(self._graph_element.name,
export_scope)
var_def.is_resource = True
var_def.trainable = self.trainable
if self._save_slice_info:
var_def.save_slice_info_def.MergeFrom(
self._save_slice_info.to_proto(export_scope=export_scope))
return var_def
else:
return None
@staticmethod
def from_proto(variable_def, import_scope=None):
if context.executing_eagerly():
raise RuntimeError("from_proto not supported in EAGER mode.")
return ResourceVariable(
variable_def=variable_def, import_scope=import_scope)
@staticmethod
def _OverloadAllOperators(): # pylint: disable=invalid-name
"""Register overloads for all operators."""
for operator in ops.Tensor.OVERLOADABLE_OPERATORS:
ResourceVariable._OverloadOperator(operator)
# For slicing, bind getitem differently than a tensor (use SliceHelperVar
# instead)
# pylint: disable=protected-access
setattr(ResourceVariable, "__getitem__", array_ops._SliceHelperVar)
def _AsTensor(self):
return self.value()
def _ref(self):
"""Unsupported."""
raise NotImplementedError("ResourceVariable does not implement _ref()")
def set_shape(self, shape):
"""Unsupported."""
raise NotImplementedError("ResourceVariable does not implement set_shape()")
@staticmethod
def _OverloadOperator(operator): # pylint: disable=invalid-name
"""Defer an operator overload to `ops.Tensor`.
We pull the operator out of ops.Tensor dynamically to avoid ordering issues.
Args:
operator: string. The operator name.
"""
tensor_oper = getattr(ops.Tensor, operator)
def _run_op(a, *args):
# pylint: disable=protected-access
value = a._AsTensor()
return tensor_oper(value, *args)
# Propagate __doc__ to wrapper
try:
_run_op.__doc__ = tensor_oper.__doc__
except AttributeError:
pass
setattr(ResourceVariable, operator, _run_op)
__array_priority__ = 100
def is_initialized(self, name=None):
"""Checks whether a resource variable has been initialized.
    Outputs a boolean scalar indicating whether the variable has been
    initialized.
Args:
name: A name for the operation (optional).
Returns:
A `Tensor` of type `bool`.
"""
return gen_resource_variable_ops.var_is_initialized_op(self.handle, name)
def assign_sub(self, delta, use_locking=None, name=None, read_value=True):
"""Subtracts a value from this variable.
Args:
delta: A `Tensor`. The value to subtract from this variable.
use_locking: If `True`, use locking during the operation.
name: The name to use for the operation.
read_value: A `bool`. Whether to read and return the new value of the
variable or not.
Returns:
If `read_value` is `True`, this method will return the new value of the
variable after the assignment has completed. Otherwise, when in graph mode
it will return the `Operation` that does the assignment, and when in eager
mode it will return `None`.
"""
# TODO(apassos): this here and below is not atomic. Consider making it
# atomic if there's a way to do so without a performance cost for those who
# don't need it.
with _handle_graph(self.handle), self._assign_dependencies():
assign_sub_op = gen_resource_variable_ops.assign_sub_variable_op(
self.handle, ops.convert_to_tensor(delta, dtype=self.dtype),
name=name)
if read_value:
return self._lazy_read(assign_sub_op)
return assign_sub_op
def assign_add(self, delta, use_locking=None, name=None, read_value=True):
"""Adds a value to this variable.
Args:
delta: A `Tensor`. The value to add to this variable.
use_locking: If `True`, use locking during the operation.
name: The name to use for the operation.
read_value: A `bool`. Whether to read and return the new value of the
variable or not.
Returns:
If `read_value` is `True`, this method will return the new value of the
variable after the assignment has completed. Otherwise, when in graph mode
it will return the `Operation` that does the assignment, and when in eager
mode it will return `None`.
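    Example (a minimal sketch, assuming eager execution):
    ```python
    v = tf.Variable(1.0, use_resource=True)
    v.assign_add(2.0)  # returns the updated value, 3.0
    v.assign_add(2.0, read_value=False)  # returns None in eager mode
    ```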
"""
with _handle_graph(self.handle), self._assign_dependencies():
assign_add_op = gen_resource_variable_ops.assign_add_variable_op(
self.handle, ops.convert_to_tensor(delta, dtype=self.dtype),
name=name)
if read_value:
return self._lazy_read(assign_add_op)
return assign_add_op
def _lazy_read(self, op):
if self.trainable:
tape.variable_accessed(self)
return _UnreadVariable(
handle=self._handle, dtype=self.dtype, shape=self._shape,
in_graph_mode=self._in_graph_mode,
deleter=self._handle_deleter if not self._in_graph_mode else None,
parent_op=op, unique_id=self._unique_id)
def assign(self, value, use_locking=None, name=None, read_value=True):
"""Assigns a new value to this variable.
Args:
value: A `Tensor`. The new value for this variable.
use_locking: If `True`, use locking during the assignment.
name: The name to use for the assignment.
read_value: A `bool`. Whether to read and return the new value of the
variable or not.
Returns:
If `read_value` is `True`, this method will return the new value of the
variable after the assignment has completed. Otherwise, when in graph mode
it will return the `Operation` that does the assignment, and when in eager
mode it will return `None`.
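    Example (a minimal sketch, assuming eager execution):
    ```python
    v = tf.Variable(1.0, use_resource=True)
    v.assign(5.0)
    v.read_value()  # ==> 5.0
    ```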
"""
    # Note: not depending on the cached value here since this can be used to
# initialize the variable.
with _handle_graph(self.handle):
value_tensor = ops.convert_to_tensor(value, dtype=self.dtype)
self._shape.assert_is_compatible_with(value_tensor.shape)
assign_op = gen_resource_variable_ops.assign_variable_op(
self.handle, value_tensor, name=name)
if read_value:
return self._lazy_read(assign_op)
return assign_op
def __reduce__(self):
return (ResourceVariable, (self.numpy(),))
def scatter_sub(self, sparse_delta, use_locking=False, name=None):
"""Subtracts `IndexedSlices` from this variable.
Args:
sparse_delta: `IndexedSlices` to be subtracted from this variable.
use_locking: If `True`, use locking during the operation.
name: the name of the operation.
Returns:
A `Tensor` that will hold the new value of this variable after
the scattered subtraction has completed.
Raises:
ValueError: if `sparse_delta` is not an `IndexedSlices`.
"""
if not isinstance(sparse_delta, ops.IndexedSlices):
raise ValueError("sparse_delta is not IndexedSlices: %s" % sparse_delta)
return self._lazy_read(gen_resource_variable_ops.resource_scatter_sub(
self.handle, sparse_delta.indices,
ops.convert_to_tensor(sparse_delta.values, self.dtype), name=name))
def scatter_add(self, sparse_delta, use_locking=False, name=None):
"""Adds `IndexedSlices` from this variable.
Args:
sparse_delta: `IndexedSlices` to be added to this variable.
use_locking: If `True`, use locking during the operation.
name: the name of the operation.
Returns:
A `Tensor` that will hold the new value of this variable after
      the scattered addition has completed.
Raises:
ValueError: if `sparse_delta` is not an `IndexedSlices`.
"""
if not isinstance(sparse_delta, ops.IndexedSlices):
raise ValueError("sparse_delta is not IndexedSlices: %s" % sparse_delta)
return self._lazy_read(gen_resource_variable_ops.resource_scatter_add(
self.handle, sparse_delta.indices,
ops.convert_to_tensor(sparse_delta.values, self.dtype), name=name))
def scatter_update(self, sparse_delta, use_locking=False, name=None):
"""Assigns `IndexedSlices` to this variable.
Args:
sparse_delta: `IndexedSlices` to be assigned to this variable.
use_locking: If `True`, use locking during the operation.
name: the name of the operation.
Returns:
A `Tensor` that will hold the new value of this variable after
      the scattered assignment has completed.
Raises:
ValueError: if `sparse_delta` is not an `IndexedSlices`.
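    Example (a minimal sketch, assuming eager execution):
    ```python
    v = tf.Variable([1.0, 2.0, 3.0], use_resource=True)
    delta = tf.IndexedSlices(values=tf.constant([9.0]),
                             indices=tf.constant([1]))
    v.scatter_update(delta)
    v.read_value()  # ==> [1.0, 9.0, 3.0]
    ```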
"""
if not isinstance(sparse_delta, ops.IndexedSlices):
raise ValueError("sparse_delta is not IndexedSlices: %s" % sparse_delta)
return self._lazy_read(gen_resource_variable_ops.resource_scatter_update(
self.handle, sparse_delta.indices,
ops.convert_to_tensor(sparse_delta.values, self.dtype), name=name))
def scatter_nd_sub(self, indices, updates, name=None):
"""Applies sparse subtraction to individual values or slices in a Variable.
`ref` is a `Tensor` with rank `P` and `indices` is a `Tensor` of rank `Q`.
    `indices` must be an integer tensor, containing indices into `ref`.
    It must be of shape `[d_0, ..., d_{Q-2}, K]` where `0 < K <= P`.
The innermost dimension of `indices` (with length `K`) corresponds to
indices into elements (if `K = P`) or slices (if `K < P`) along the `K`th
dimension of `ref`.
    `updates` is a `Tensor` of rank `Q-1+P-K` with shape:
```
[d_0, ..., d_{Q-2}, ref.shape[K], ..., ref.shape[P-1]].
```
    For example, say we want to subtract 4 scattered elements from a rank-1
    tensor with 8 elements. In Python, that subtraction would look like this:
```python
ref = tf.Variable([1, 2, 3, 4, 5, 6, 7, 8])
    indices = tf.constant([[4], [3], [1], [7]])
updates = tf.constant([9, 10, 11, 12])
op = ref.scatter_nd_sub(indices, updates)
with tf.Session() as sess:
      print(sess.run(op))
```
The resulting update to ref would look like this:
    [1, -9, 3, -6, -4, 6, 7, -4]
See `tf.scatter_nd` for more details about how to make updates to
slices.
Args:
indices: The indices to be used in the operation.
updates: The values to be used in the operation.
name: the name of the operation.
Returns:
A `Tensor` that will hold the new value of this variable after
the scattered subtraction has completed.
"""
return self._lazy_read(gen_state_ops.resource_scatter_nd_sub(
self.handle, indices, ops.convert_to_tensor(updates, self.dtype),
name=name))
def scatter_nd_add(self, indices, updates, name=None):
"""Applies sparse addition to individual values or slices in a Variable.
`ref` is a `Tensor` with rank `P` and `indices` is a `Tensor` of rank `Q`.
    `indices` must be an integer tensor, containing indices into `ref`.
    It must be of shape `[d_0, ..., d_{Q-2}, K]` where `0 < K <= P`.
The innermost dimension of `indices` (with length `K`) corresponds to
indices into elements (if `K = P`) or slices (if `K < P`) along the `K`th
dimension of `ref`.
    `updates` is a `Tensor` of rank `Q-1+P-K` with shape:
```
[d_0, ..., d_{Q-2}, ref.shape[K], ..., ref.shape[P-1]].
```
    For example, say we want to add 4 scattered elements to a rank-1 tensor
    with 8 elements. In Python, that addition would look like this:
```python
ref = tf.Variable([1, 2, 3, 4, 5, 6, 7, 8])
    indices = tf.constant([[4], [3], [1], [7]])
updates = tf.constant([9, 10, 11, 12])
add = ref.scatter_nd_add(indices, updates)
with tf.Session() as sess:
      print(sess.run(add))
```
The resulting update to ref would look like this:
[1, 13, 3, 14, 14, 6, 7, 20]
See `tf.scatter_nd` for more details about how to make updates to
slices.
Args:
indices: The indices to be used in the operation.
updates: The values to be used in the operation.
name: the name of the operation.
Returns:
A `Tensor` that will hold the new value of this variable after
      the scattered addition has completed.
"""
return self._lazy_read(gen_state_ops.resource_scatter_nd_add(
self.handle, indices, ops.convert_to_tensor(updates, self.dtype),
name=name))
def scatter_nd_update(self, indices, updates, name=None):
"""Applies sparse assignment to individual values or slices in a Variable.
`ref` is a `Tensor` with rank `P` and `indices` is a `Tensor` of rank `Q`.
    `indices` must be an integer tensor, containing indices into `ref`.
    It must be of shape `[d_0, ..., d_{Q-2}, K]` where `0 < K <= P`.
The innermost dimension of `indices` (with length `K`) corresponds to
indices into elements (if `K = P`) or slices (if `K < P`) along the `K`th
dimension of `ref`.
    `updates` is a `Tensor` of rank `Q-1+P-K` with shape:
```
[d_0, ..., d_{Q-2}, ref.shape[K], ..., ref.shape[P-1]].
```
    For example, say we want to assign 4 scattered updates to a rank-1 tensor
    with 8 elements. In Python, that update would look like this:
```python
ref = tf.Variable([1, 2, 3, 4, 5, 6, 7, 8])
    indices = tf.constant([[4], [3], [1], [7]])
updates = tf.constant([9, 10, 11, 12])
op = ref.scatter_nd_update(indices, updates)
with tf.Session() as sess:
      print(sess.run(op))
```
The resulting update to ref would look like this:
[1, 11, 3, 10, 9, 6, 7, 12]
See `tf.scatter_nd` for more details about how to make updates to
slices.
Args:
indices: The indices to be used in the operation.
updates: The values to be used in the operation.
name: the name of the operation.
Returns:
A `Tensor` that will hold the new value of this variable after
      the scattered assignment has completed.
"""
return self._lazy_read(gen_state_ops.resource_scatter_nd_update(
self.handle, indices, ops.convert_to_tensor(updates, self.dtype),
name=name))
def _strided_slice_assign(self, begin, end, strides, value, name, begin_mask,
end_mask, ellipsis_mask, new_axis_mask,
shrink_axis_mask):
with _handle_graph(self.handle), self._assign_dependencies():
return self._lazy_read(
gen_array_ops.resource_strided_slice_assign(
ref=self.handle,
begin=begin,
end=end,
strides=strides,
value=ops.convert_to_tensor(value, dtype=self.dtype),
name=name,
begin_mask=begin_mask,
end_mask=end_mask,
ellipsis_mask=ellipsis_mask,
new_axis_mask=new_axis_mask,
shrink_axis_mask=shrink_axis_mask))
def __int__(self):
if self.dtype != dtypes.int32 and self.dtype != dtypes.int64:
raise TypeError("Non-integer variable can't be converted to integer.")
return int(self.value().numpy())
def _dense_var_to_tensor(self, dtype=None, name=None, as_ref=False):
del name
if dtype is not None and dtype != self.dtype:
return NotImplemented
if as_ref:
return self.read_value().op.inputs[0]
else:
return self.value()
def __iadd__(self, unused_other):
raise RuntimeError("Variable += value not supported. Use "
"variable.assign_add(value) to modify the variable "
"value and variable = variable + value to get a new "
"Tensor object.")
def __isub__(self, unused_other):
raise RuntimeError("Variable -= value not supported. Use "
"variable.assign_sub(value) to modify the variable "
"value and variable = variable - value to get a new "
"Tensor object.")
def __imul__(self, unused_other):
raise RuntimeError("Variable *= value not supported. Use "
"`var.assign(var * value)` to modify the variable or "
"`var = var * value` to get a new Tensor object.")
def __idiv__(self, unused_other):
raise RuntimeError("Variable /= value not supported. Use "
"`var.assign(var / value)` to modify the variable or "
"`var = var / value` to get a new Tensor object.")
def __itruediv__(self, unused_other):
raise RuntimeError("Variable /= value not supported. Use "
"`var.assign(var / value)` to modify the variable or "
"`var = var / value` to get a new Tensor object.")
def __irealdiv__(self, unused_other):
raise RuntimeError("Variable /= value not supported. Use "
"`var.assign(var / value)` to modify the variable or "
"`var = var / value` to get a new Tensor object.")
def __ipow__(self, unused_other):
raise RuntimeError("Variable **= value not supported. Use "
"`var.assign(var ** value)` to modify the variable or "
"`var = var ** value` to get a new Tensor object.")
pywrap_tensorflow.TFE_Py_RegisterResourceVariableType(ResourceVariable)
math_ops._resource_variable_type = ResourceVariable # pylint: disable=protected-access
def _dense_var_to_tensor(var, dtype=None, name=None, as_ref=False):
return var._dense_var_to_tensor(dtype=dtype, name=name, as_ref=as_ref) # pylint: disable=protected-access
class _UnreadVariable(ResourceVariable):
"""Represents a future for a read of a variable.
Pretends to be the tensor if anyone looks.
"""
def __init__(self, handle, dtype, # pylint: disable=super-init-not-called
shape, in_graph_mode, deleter, parent_op, unique_id):
# We do not call super init on purpose.
self._trainable = False
self._save_slice_info = None
self._graph_key = ops.get_default_graph()._graph_key # pylint: disable=protected-access
self._in_graph_mode = in_graph_mode
self._handle = handle
self._shape = shape
self._initial_value = None
if isinstance(self._handle, ops.EagerTensor):
self._handle_name = ""
else:
self._handle_name = self._handle.name
self._unique_id = unique_id
self._dtype = dtype
self._constraint = None
self._cached_value = None
self._is_initialized_op = None
self._initializer_op = None
self._parent_op = parent_op
if context.executing_eagerly():
self._graph_element = None
else:
self._graph_element = self.read_value()
self._handle_deleter = deleter
@property
def name(self):
if self._in_graph_mode:
return self._parent_op.name
else:
return "UnreadVariable"
def value(self):
return self._read_variable_op()
def read_value(self):
return self._read_variable_op()
def _read_variable_op(self):
with ops.control_dependencies([self._parent_op]):
return gen_resource_variable_ops.read_variable_op(self._handle,
self._dtype)
def set_shape(self, shape):
self._shape = shape
self._cached_shape_as_list = None
@property
def op(self):
"""The op for this variable."""
return self._parent_op
ops.register_tensor_conversion_function(_UnreadVariable, _dense_var_to_tensor)
ops.register_dense_tensor_like_type(_UnreadVariable)
class _MixedPrecisionVariable(ResourceVariable):
"""Represents a variable that can return in desired dtype when read.
In mixed precision training, it is usually desirable to use different dtypes
for variables and computation. This class will be used to wrap created
ResourceVariable when mixed precision training is enabled. It allows layers to
perform computation in a different dtype than their variable dtypes, in order
to achieve higher performance without causing quality loss.
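  For example (a minimal sketch; this class is internal to this module and
  `dtypes` refers to the module-level import):
  ```python
  v = ResourceVariable(1.0, dtype=dtypes.float32)
  mp = _MixedPrecisionVariable(v, read_dtype=dtypes.float16)
  mp.read_value()  # a float16 tensor
  ```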
"""
def __init__(self, var, read_dtype):
"""Creates a MixedPrecisionVariable.
Args:
var: A ResourceVariable instance.
      read_dtype: A `tf.DType`, the dtype returned when the variable is read.
        Casting is performed if `read_dtype` differs from `var.dtype`.
Returns:
      A `_MixedPrecisionVariable` instance.
Raises:
ValueError: if var is not a ResourceVariable instance, or read_dtype is
not a tf.DType instance.
"""
# pylint: disable=super-init-not-called
# We do not call super init on purpose.
if not isinstance(var, ResourceVariable):
raise ValueError("InvalidArgument: var must be a ResourceVariable type.")
if not isinstance(read_dtype, dtypes.DType):
raise ValueError("InvalidArgument: read_dtype must be a tf.DType type.")
self._var = var
self._trainable = var.trainable
self._save_slice_info = None
self._graph_key = ops.get_default_graph()._graph_key # pylint: disable=protected-access
self._in_graph_mode = var._in_graph_mode # pylint: disable=protected-access
self._handle = var.handle
self._shape = var.shape
self._initial_value = None
if isinstance(self.handle, ops.EagerTensor):
self._handle_name = ""
else:
self._handle_name = self.handle.name
self._unique_id = var._unique_id # pylint: disable=protected-access
self._dtype = var.dtype
self._constraint = None
self._cached_value = None
self._is_initialized_op = var._is_initialized_op # pylint: disable=protected-access
self._initializer_op = var._initializer_op # pylint: disable=protected-access
# This needs to be set before read_value() is called.
self._read_dtype = read_dtype
if context.executing_eagerly():
self._graph_element = None
else:
self._graph_element = self.read_value()
self._handle_deleter = (
var._handle_deleter if not self._in_graph_mode # pylint: disable=protected-access
else None)
# pylint: enable=super-init-not-called
@property
def name(self):
return self._var.name
def value(self):
return self._read_variable_op()
def read_value(self):
return self._read_variable_op()
def _read_variable_op(self):
with ops.colocate_with(self._handle):
res = gen_resource_variable_ops.read_variable_op(self._handle,
self._dtype)
if self._read_dtype != self._dtype:
return math_ops.cast(res, self._read_dtype)
else:
return res
def set_shape(self, shape):
self._shape = shape
self._cached_shape_as_list = None
@property
def op(self):
"""The op for this variable."""
return self._var.op
@property
def read_dtype(self):
"""The dtype of the returned tensor when reading the var."""
return self._read_dtype
def _dense_var_to_tensor(self, dtype=None, name=None, as_ref=False):
del name
dtype = dtype or self.read_dtype
if dtype != self.read_dtype or as_ref:
return NotImplemented
else:
res = self.value()
return res
def _should_act_as_resource_variable(self):
"""To pass resource_variable_ops.is_resource_variable check."""
pass
# Register a conversion function which reads the value of the variable,
# allowing instances of the class to be used as tensors.
# Note: registering for Variable after ResourceVariable because inheritance will
# otherwise lead to the wrong behavior.
ops.register_tensor_conversion_function(ResourceVariable, _dense_var_to_tensor)
ops.register_tensor_conversion_function(
variables.Variable, variables.Variable._TensorConversionFunction) # pylint: disable=protected-access
# pylint: disable=protected-access
ResourceVariable._OverloadAllOperators()
ops.register_dense_tensor_like_type(ResourceVariable)
@ops.RegisterGradient("ReadVariableOp")
def _ReadGrad(_, grad):
"""Gradient for read op."""
return grad
@ops.RegisterGradient("ResourceGather")
def _GatherGrad(op, grad):
"""Gradient for gather op."""
# Build appropriately shaped IndexedSlices
handle = op.inputs[0]
indices = op.inputs[1]
params_shape = gen_resource_variable_ops.variable_shape(handle)
size = array_ops.expand_dims(array_ops.size(indices), 0)
values_shape = array_ops.concat([size, params_shape[1:]], 0)
values = array_ops.reshape(grad, values_shape)
indices = array_ops.reshape(indices, size)
return (ops.IndexedSlices(values, indices, params_shape), None)
def _to_proto_fn(v, export_scope=None):
"""Converts Variable and ResourceVariable to VariableDef for collections."""
return v.to_proto(export_scope=export_scope)
def _from_proto_fn(v, import_scope=None):
"""Creates Variable or ResourceVariable from VariableDef as needed."""
if v.is_resource:
return ResourceVariable.from_proto(v, import_scope=import_scope)
return variables.Variable.from_proto(v, import_scope=import_scope)
ops.register_proto_function(
ops.GraphKeys.GLOBAL_VARIABLES,
proto_type=variable_pb2.VariableDef,
to_proto=_to_proto_fn,
from_proto=_from_proto_fn)
ops.register_proto_function(
ops.GraphKeys.TRAINABLE_VARIABLES,
proto_type=variable_pb2.VariableDef,
to_proto=_to_proto_fn,
from_proto=_from_proto_fn)
ops.register_proto_function(
ops.GraphKeys.MOVING_AVERAGE_VARIABLES,
proto_type=variable_pb2.VariableDef,
to_proto=_to_proto_fn,
from_proto=_from_proto_fn)
ops.register_proto_function(
ops.GraphKeys.LOCAL_VARIABLES,
proto_type=variable_pb2.VariableDef,
to_proto=_to_proto_fn,
from_proto=_from_proto_fn)
ops.register_proto_function(
ops.GraphKeys.MODEL_VARIABLES,
proto_type=variable_pb2.VariableDef,
to_proto=_to_proto_fn,
from_proto=_from_proto_fn)
ops.register_proto_function(
ops.GraphKeys.GLOBAL_STEP,
proto_type=variable_pb2.VariableDef,
to_proto=_to_proto_fn,
from_proto=_from_proto_fn)
def is_resource_variable(var):
""""Returns True if `var` is to be considered a ResourceVariable."""
return isinstance(var, ResourceVariable) or hasattr(
var, "_should_act_as_resource_variable")
# ==== File: Rekursif/liveCoding/2-rekursive-format-duration.py (repo: SyamsulAlterra/Alta, license: MIT) ====
def hour(num):  # recursively count whole hours
    if num < 3600:
        return 0
    num = num - 3600
    return 1 + hour(num)
def minutes(num):  # recursively count whole minutes
    if num < 60:
        return 0
    num = num - 60
    return 1 + minutes(num)
def recursiveFormatDuration(num):
    if num < 1 or num > 86400:
        return "number out of range, enter a number from 1 to 86400 inclusive"
    h = hour(num)
    if h != 0:
        stringH = str(h) + " hours "
    else:
        stringH = ""
    m = minutes(num % 3600)
    if m != 0:
        stringM = str(m) + " minutes "
    else:
        stringM = ""
    s = (num % 3600) % 60
    if s != 0:
        stringS = str(s) + " seconds "
    else:
        stringS = ""
    return stringH + stringM + stringS
print(recursiveFormatDuration(86400))  # 24 hours
print(recursiveFormatDuration(60))  # 1 minutes
print(recursiveFormatDuration(6))  # 6 seconds
print(recursiveFormatDuration(3660))  # 1 hours 1 minutes
print(recursiveFormatDuration(62))  # 1 minutes 2 seconds
print(recursiveFormatDuration(7324))  # 2 hours 2 minutes 4 seconds
# ==== File: saas/backend/apps/role/serializers.py (repo: Canway-shiisa/bk-iam-saas, license: MIT) ====
# -*- coding: utf-8 -*-
"""
TencentBlueKing is pleased to support the open source community by making 蓝鲸智云-权限中心(BlueKing-IAM) available.
Copyright (C) 2017-2021 THL A29 Limited, a Tencent company. All rights reserved.
Licensed under the MIT License (the "License"); you may not use this file except in compliance with the License.
You may obtain a copy of the License at http://opensource.org/licenses/MIT
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
"""
import time
from django.conf import settings
from rest_framework import serializers
from backend.apps.application.base_serializers import BaseAggActionListSLZ, SystemInfoSLZ, validate_action_repeat
from backend.apps.policy.serializers import ConditionSLZ, InstanceSLZ, ResourceGroupSLZ, ResourceSLZ, ResourceTypeSLZ
from backend.apps.role.models import Role, RoleCommonAction, RoleUser
from backend.biz.role import RoleBiz, RoleCheckBiz
from backend.biz.subject import SubjectInfoList
from backend.common.time import PERMANENT_SECONDS
from backend.service.constants import (
ADMIN_USER,
ANY_ID,
SUBJECT_ALL,
SUBJECT_TYPE_ALL,
GroupMemberType,
RoleScopeSubjectType,
SubjectType,
)
from backend.service.models import Subject
from .constants import PermissionTypeEnum
class RoleScopeSubjectSLZ(serializers.Serializer):
type = serializers.ChoiceField(label="成员类型", choices=RoleScopeSubjectType.get_choices())
id = serializers.CharField(label="成员id")
def validate(self, attrs):
"""校验type和id"""
_id, _type = attrs["id"], attrs["type"]
        # When id is "*", type must also be "*".
if _id == SUBJECT_ALL and _type != SUBJECT_TYPE_ALL:
raise serializers.ValidationError("type must be * when id is *")
        # When type is department, id must be a numeric string or "*".
if _type == RoleScopeSubjectType.DEPARTMENT.value and _id != SUBJECT_ALL:
if not _id.isdigit():
raise serializers.ValidationError("department id can only be a string consisting of numbers only")
return attrs
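# A minimal usage sketch (hypothetical payloads, not part of the original
# module):
#   slz = RoleScopeSubjectSLZ(data={"type": "department", "id": "123"})
#   slz.is_valid()  # ==> True
#   slz = RoleScopeSubjectSLZ(data={"type": "department", "id": "*"})
#   slz.is_valid()  # ==> False, because id "*" requires type "*"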
class RoleResourceSLZ(ResourceSLZ):
child_type = serializers.CharField(label="子资源类型", required=False, allow_blank=True, default="")
class RoleInstanceSLZ(InstanceSLZ):
path = serializers.ListField(
label="层级链路",
child=serializers.ListField(label="链路", child=RoleResourceSLZ(label="节点"), allow_empty=False),
required=True,
allow_empty=False,
)
def validate(self, data):
"""
        For a grade manager's auth scope, ignore a trailing "any" node at the
        last level of each resource path.
"""
paths = data["path"]
for i in range(len(paths)):
path = paths[i]
if len(path) > 1 and path[-1]["id"] == ANY_ID:
paths[i] = path[:-1]
data["path"] = paths
return data
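# A minimal sketch of the truncation above (hypothetical node dicts, assuming
# ANY_ID == "*"):
#   data = {"path": [[{"id": "1", "type": "set"}, {"id": "*", "type": "module"}]]}
#   RoleInstanceSLZ().validate(data)["path"]  # ==> [[{"id": "1", "type": "set"}]]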
class RoleConditionSLZ(ConditionSLZ):
instances = serializers.ListField(label="拓扑选择", child=RoleInstanceSLZ(label="拓扑实例"))
class RoleResourceTypeSLZ(ResourceTypeSLZ):
condition = serializers.ListField(label="生效条件", child=RoleConditionSLZ(label="条件"))
class RoleResourceGroupSLZ(ResourceGroupSLZ):
related_resource_types = serializers.ListField(label="资源类型条件", child=RoleResourceTypeSLZ(label="资源类型"))
class GradeManagerActionSLZ(serializers.Serializer):
id = serializers.CharField(label="操作ID")
resource_groups = serializers.ListField(label="资源条件组", child=RoleResourceGroupSLZ(label="资源条件组"))
class RoleScopeAuthorizationSLZ(serializers.Serializer):
system_id = serializers.CharField(label="系统id", max_length=32)
actions = serializers.ListField(label="操作策略", child=GradeManagerActionSLZ(label="策略"))
aggregations = serializers.ListField(
label="聚合操作", child=BaseAggActionListSLZ(label="聚合操作"), required=False, default=list
)
def validate(self, data):
        # Check for duplicated actions.
validate_action_repeat(data)
if len(data["actions"]) == 0 and len(data["aggregations"]) == 0:
raise serializers.ValidationError("actions must not be empty")
return data
class RatingMangerBaseInfoSZL(serializers.Serializer):
name = serializers.CharField(label="分级管理员名称", max_length=128)
description = serializers.CharField(label="描述", allow_blank=True)
members = serializers.ListField(
label="成员列表",
child=serializers.CharField(label="用户ID", max_length=64),
max_length=settings.SUBJECT_AUTHORIZATION_LIMIT["grade_manager_member_limit"],
)
def validate(self, data):
"""
        Check that members do not exceed the limit on the number of grade
        managers they may join.
"""
role_check_biz = RoleCheckBiz()
for username in data["members"]:
            # A subject may not join more grade managers than the maximum allowed.
role_check_biz.check_subject_grade_manager_limit(Subject(type=SubjectType.USER.value, id=username))
return data
class RatingMangerCreateSLZ(RatingMangerBaseInfoSZL):
authorization_scopes = serializers.ListField(
label="系统操作", child=RoleScopeAuthorizationSLZ(label="系统操作"), allow_empty=False
)
subject_scopes = serializers.ListField(label="授权对象", child=RoleScopeSubjectSLZ(label="授权对象"), allow_empty=False)
def validate(self, data):
if len(data["authorization_scopes"]) != len({sys["system_id"] for sys in data["authorization_scopes"]}):
raise serializers.ValidationError({"authorization_scopes": ["system must not repeat"]})
return data
class RoleIdSLZ(serializers.Serializer):
"""
    Role ID.
"""
id = serializers.IntegerField(label="角色ID")
class RatingMangerListSchemaSLZ(serializers.Serializer):
members = serializers.ListField(label="成员列表", child=serializers.CharField(label="用户ID", max_length=128))
class Meta:
model = Role
fields = ("id", "name", "description", "updated_time", "creator", "members")
class RoleScopeAuthorizationSchemaSLZ(serializers.Serializer):
system = SystemInfoSLZ(label="系统信息")
actions = serializers.ListField(label="操作策略", child=GradeManagerActionSLZ(label="策略"))
class RatingMangerDetailSchemaSLZ(RatingMangerListSchemaSLZ):
authorization_scopes = serializers.ListField(
label="系统操作", child=RoleScopeAuthorizationSchemaSLZ(label="系统操作"), allow_empty=False
)
subject_scopes = serializers.ListField(label="授权对象", child=RoleScopeSubjectSLZ(label="授权对象"), allow_empty=False)
class Meta:
model = Role
fields = (
"id",
"name",
"description",
"updated_time",
"creator",
"members",
"authorization_scopes",
"subject_scopes",
)
class RatingMangerListSLZ(serializers.ModelSerializer):
members = serializers.SerializerMethodField(label="成员列表")
class Meta:
model = Role
fields = ("id", "name", "description", "creator", "created_time", "updated_time", "updater", "members")
def get_members(self, obj):
return list(RoleUser.objects.filter(role_id=obj.id).values_list("username", flat=True))
class RatingMangerDetailSLZ(RatingMangerListSLZ):
authorization_scopes = serializers.SerializerMethodField(label="系统操作")
subject_scopes = serializers.SerializerMethodField(label="授权对象")
class Meta:
model = Role
fields = (
"id",
"name",
"description",
"updated_time",
"creator",
"members",
"authorization_scopes",
"subject_scopes",
)
def get_authorization_scopes(self, obj):
# ResourceNameAutoUpdate
scope_systems = RoleBiz().list_auth_scope_bean(obj.id, should_auto_update_resource_name=True)
return [one.dict() for one in scope_systems]
def get_subject_scopes(self, obj):
subjects = RoleBiz().list_subject_scope(obj.id)
subject_list = SubjectInfoList(subjects)
return [one.dict() for one in subject_list.subjects]
class SystemManagerSLZ(RatingMangerListSLZ):
system_permission_global_enabled = serializers.SerializerMethodField(label="是否拥有系统所有权限")
class Meta:
model = Role
fields = ("id", "name", "name_en", "description", "members", "system_permission_global_enabled")
def get_system_permission_global_enabled(self, obj):
return obj.system_permission_enabled_content.global_enabled
class SystemManagerMemberUpdateSLZ(serializers.Serializer):
members = serializers.ListField(label="成员列表", child=serializers.CharField(label="用户ID", max_length=128))
class MemberSystemPermissionUpdateSLZ(serializers.Serializer):
system_permission_global_enabled = serializers.BooleanField(label="是否拥有系统所有权限")
class SuperManagerMemberSLZ(serializers.Serializer):
username = serializers.CharField(label="用户名")
system_permission_enabled = serializers.BooleanField(label="是否拥有系统所有权限")
class SuperManagerMemberDeleteSLZ(serializers.Serializer):
username = serializers.CharField(label="用户名")
def validate_username(self, value):
if value == ADMIN_USER:
raise serializers.ValidationError("admin cannot be deleted")
return value
class RoleCommonActionSLZ(serializers.ModelSerializer):
class Meta:
model = RoleCommonAction
fields = ("id", "system_id", "name", "name_en", "action_ids")
class RoleCommonCreateSLZ(serializers.Serializer):
system_id = serializers.CharField(label="系统id", max_length=32)
name = serializers.CharField(label="名称", max_length=128)
action_ids = serializers.ListField(label="操作ID", child=serializers.CharField(), allow_empty=False)
def validate(self, data):
action_id_set = set(data["action_ids"])
        # Check whether any action is duplicated.
if len(action_id_set) != len(data["action_ids"]):
raise serializers.ValidationError({"action_ids": ["action_ids must not repeat"]})
return data
def create(self, validated_data):
action_ids = validated_data.pop("action_ids")
instance = RoleCommonAction(**validated_data)
instance.action_ids = action_ids
instance.save()
return instance
class RoleGroupMemberRenewSLZ(serializers.Serializer):
type = serializers.ChoiceField(label="成员类型", choices=GroupMemberType.get_choices())
id = serializers.CharField(label="成员id")
parent_type = serializers.ChoiceField(label="父级类型", choices=[(SubjectType.GROUP.value, "用户组")])
parent_id = serializers.CharField(label="父级ID")
expired_at = serializers.IntegerField(label="过期时间", max_value=PERMANENT_SECONDS)
def validate_expired_at(self, value):
"""
        Validate the expiration time.
"""
        if value <= time.time():
            raise serializers.ValidationError("expired_at must be greater than the current timestamp")
return value
class RoleGroupMembersRenewSLZ(serializers.Serializer):
members = serializers.ListField(label="续期成员", child=RoleGroupMemberRenewSLZ(), allow_empty=False)
class ResourceInstancePathSLZ(serializers.Serializer):
id = serializers.CharField(label="资源实例ID", max_length=settings.MAX_LENGTH_OF_RESOURCE_ID)
type = serializers.CharField(label="资源实例类型")
name = serializers.CharField(label="资源实例名")
class ResourceInstancesSLZ(serializers.Serializer):
system_id = serializers.CharField(label="系统ID", required=True)
id = serializers.CharField(label="资源实例ID", required=True, max_length=settings.MAX_LENGTH_OF_RESOURCE_ID)
type = serializers.CharField(label="资源实例类型", required=True)
name = serializers.CharField(label="资源实例名", required=True)
path = serializers.ListField(
label="资源实例路径", required=False, child=ResourceInstancePathSLZ(label="资源实例路径"), default=list
)
class QueryAuthorizedSubjectsSLZ(serializers.Serializer):
    system_id = serializers.CharField(label="system ID")
    action_id = serializers.CharField(label="action ID")
    limit = serializers.IntegerField(label="maximum number of results", min_value=10, max_value=1000)
    resource_instances = serializers.ListField(
        label="resource instances", required=False, child=ResourceInstancesSLZ(label="resource instance info"), default=list
    )
    permission_type = serializers.ChoiceField(label="permission type", choices=PermissionTypeEnum.get_choices())
    def validate(self, data):
        if data["permission_type"] == PermissionTypeEnum.RESOURCE_INSTANCE.value and not data.get("resource_instances"):
            data["resource_instances"] = []
        return data
class AuthorizedSubjectsSLZ(serializers.Serializer):
    type = serializers.CharField(label="subject type")
    id = serializers.CharField(label="subject ID")
    name = serializers.CharField(label="subject name")
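# --- Illustrative sketch (not part of the original module) ---
# A minimal, framework-free rendition of two validation rules defined above:
# the duplicate check in RoleCommonCreateSLZ.validate and the future-timestamp
# check in RoleGroupMemberRenewSLZ.validate_expired_at. The sentinel default
# below is a hypothetical stand-in for PERMANENT_SECONDS; `time` is already
# imported at the top of this module.
def _check_action_ids(action_ids):
    """Raise ValueError if action_ids contains duplicates."""
    if len(set(action_ids)) != len(action_ids):
        raise ValueError("action_ids must not repeat")
    return action_ids


def _check_expired_at(expired_at, permanent_seconds=4102444800):
    """Raise ValueError unless now < expired_at <= permanent_seconds."""
    if expired_at <= time.time():
        raise ValueError("expired_at must be greater than the current timestamp")
    if expired_at > permanent_seconds:
        raise ValueError("expired_at must not exceed the permanent sentinel")
    return expired_at


_check_action_ids(["view", "edit"])         # ok: unique IDs
_check_expired_at(int(time.time()) + 3600)  # ok: one hour from now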
| 37.260234
| 117
| 0.710351
|
4a126fe6048e0559a34f1c911cbe7a43c808754c
| 12,617
|
py
|
Python
|
models/pix2pix_tm2_mc_reg_in2_multi_model.py
|
tkuri/pytorch-CycleGAN-and-pix2pix
|
b00b3f0bcebfb12d3f026c2a61c98ff63175a583
|
[
"BSD-3-Clause"
] | null | null | null |
models/pix2pix_tm2_mc_reg_in2_multi_model.py
|
tkuri/pytorch-CycleGAN-and-pix2pix
|
b00b3f0bcebfb12d3f026c2a61c98ff63175a583
|
[
"BSD-3-Clause"
] | null | null | null |
models/pix2pix_tm2_mc_reg_in2_multi_model.py
|
tkuri/pytorch-CycleGAN-and-pix2pix
|
b00b3f0bcebfb12d3f026c2a61c98ff63175a583
|
[
"BSD-3-Clause"
] | null | null | null |
import torch
from .base_model import BaseModel
from . import networks
from torch.nn import functional as F
class Pix2PixTm2McRegIn2MultiModel(BaseModel):
""" This class implements the pix2pix model, for learning a mapping from input images to output images given paired data.
The model training requires '--dataset_mode aligned' dataset.
By default, it uses a '--netG unet256' U-Net generator,
a '--netD basic' discriminator (PatchGAN),
    and a '--gan_mode' vanilla GAN loss (the cross-entropy objective used in the original GAN paper).
pix2pix paper: https://arxiv.org/pdf/1611.07004.pdf
"""
@staticmethod
def modify_commandline_options(parser, is_train=True):
"""Add new dataset-specific options, and rewrite default values for existing options.
Parameters:
parser -- original option parser
is_train (bool) -- whether training phase or test phase. You can use this flag to add training-specific or test-specific options.
Returns:
the modified parser.
        For pix2pix, we do not use an image buffer.
The training objective is: GAN Loss + lambda_L1 * ||G(A)-B||_1
By default, we use vanilla GAN loss, UNet with batchnorm, and aligned datasets.
"""
# changing the default values to match the pix2pix paper (https://phillipi.github.io/pix2pix/)
parser.set_defaults(norm='batch', netG='unet_256', dataset_mode='aligned3')
if is_train:
parser.set_defaults(pool_size=0, gan_mode='vanilla')
parser.add_argument('--lambda_L1', type=float, default=100.0, help='weight for L1 loss')
parser.add_argument('--lambda_LTMReg_1', type=float, default=100.0, help='weight for LTM Regularization 1')
parser.add_argument('--lambda_LTMReg_2', type=float, default=100.0, help='weight for LTM Regularization 2')
return parser
def __init__(self, opt):
"""Initialize the pix2pix class.
Parameters:
opt (Option class)-- stores all the experiment flags; needs to be a subclass of BaseOptions
"""
BaseModel.__init__(self, opt)
# specify the training losses you want to print out. The training/test scripts will call <BaseModel.get_current_losses>
self.loss_names = ['G_GAN', 'G_L1', 'G_LTMReg_1', 'G_LTMReg_2', 'D_real', 'D_fake']
# specify the images you want to save/display. The training/test scripts will call <BaseModel.get_current_visuals>
# self.visual_names = ['real_A', 'fake_B', 'real_B']
self.visual_names = ['real_A', 'fake_B', 'real_B', 'real_C', 'real_C_itp2', 'matrix_1_0', 'matrix_1_1', 'matrix_1_2', 'matrix_1_3', 'matrix_2_0', 'matrix_2_1', 'matrix_2_2', 'matrix_2_3']
# self.visual_names = ['real_A', 'fake_B', 'real_B', 'real_C']
# specify the models you want to save to the disk. The training/test scripts will call <BaseModel.save_networks> and <BaseModel.load_networks>
if self.isTrain:
# self.model_names = ['G', 'D']
self.model_names = ['G', 'G2', 'D']
else: # during test time, only load G
self.model_names = ['G', 'G2']
# define networks (both generator and discriminator)
self.output_nc = opt.output_nc
self.light_res = opt.light_res
self.intermediate_nc = opt.intermediate_nc
print('opt.output_nc', opt.output_nc)
print('light_res', self.light_res)
print('intermediate_nc', self.intermediate_nc)
self.netG = networks.define_G(opt.input_nc + opt.input2_nc, opt.output_nc*self.intermediate_nc, opt.ngf, 'unet_256_lastrelu', opt.norm,
not opt.no_dropout, opt.init_type, opt.init_gain, self.gpu_ids)
self.netG2 = networks.define_G(opt.input_nc + opt.input2_nc, self.intermediate_nc, opt.ngf, 'unet_256_lastrelu', opt.norm,
not opt.no_dropout, opt.init_type, opt.init_gain, self.gpu_ids)
        if self.isTrain:  # define a discriminator; conditional GANs need to take both input and output images; therefore, the number of channels for D is input_nc + output_nc
self.netD = networks.define_D(opt.input_nc + opt.input2_nc + opt.output_nc, opt.ndf, opt.netD,
opt.n_layers_D, opt.norm, opt.init_type, opt.init_gain, self.gpu_ids)
if self.isTrain:
# define loss functions
self.criterionGAN = networks.GANLoss(opt.gan_mode).to(self.device)
self.criterionL1 = torch.nn.L1Loss()
# initialize optimizers; schedulers will be automatically created by function <BaseModel.setup>.
self.optimizer_G = torch.optim.Adam(self.netG.parameters(), lr=opt.lr, betas=(opt.beta1, 0.999))
self.optimizer_G2 = torch.optim.Adam(self.netG2.parameters(), lr=opt.lr, betas=(opt.beta1, 0.999))
self.optimizer_D = torch.optim.Adam(self.netD.parameters(), lr=opt.lr, betas=(opt.beta1, 0.999))
self.optimizers.append(self.optimizer_G)
self.optimizers.append(self.optimizer_G2)
self.optimizers.append(self.optimizer_D)
def set_input(self, input):
"""Unpack input data from the dataloader and perform necessary pre-processing steps.
Parameters:
input (dict): include the data itself and its metadata information.
The option 'direction' can be used to swap images in domain A and domain B.
"""
AtoB = self.opt.direction == 'AtoB'
self.real_A = torch.squeeze(input['A'],0).to(self.device) # [25, 3, 256, 256]
self.real_B = torch.squeeze(input['B'],0).to(self.device) # [25, 3, 256, 256]
self.real_C = torch.squeeze(input['C'],0).to(self.device) # [25, 1, 256, 256]
self.real_C_itp = F.interpolate(self.real_C, (self.light_res, self.light_res), mode='bilinear', align_corners=False)
self.real_C_itp_flat = self.real_C_itp.view(-1, self.light_res**2, 1) # [1, lsxls, 1]
self.real_C_itp2 = torch.clamp((F.interpolate(self.real_C_itp, (self.real_C.size(-2), self.real_C.size(-1)), mode='nearest')-0.5)/0.5, min=-1.0, max=1.0)
self.real_AC = torch.cat([self.real_A, self.real_C], dim=1)
self.image_paths = input['A_paths' if AtoB else 'B_paths']
# self.matrix_1_gain = 0.25
# self.matrix_2_gain = 64.0
self.matrix_1_gain = 1.0
self.matrix_2_gain = 1.0
def forward(self):
# print("test")
"""Run forward pass; called by both functions <optimize_parameters> and <test>."""
sub_matrix1 = self.netG(self.real_AC) # [1, 3xmc, 256, 256]
sub_matrix2 = self.netG2(self.real_AC) # [1, mc, 256, 256]
sub_matrix2 = F.interpolate(sub_matrix2, (self.light_res, self.light_res), mode='bilinear', align_corners=False)# [1, mc, ls, ls]
self.sub_matrix_1 = sub_matrix1.clone()
self.sub_matrix_2 = sub_matrix2.clone()
self.matrix_1 = torch.clamp((sub_matrix1*self.matrix_1_gain-0.5)/0.5, min=-1.0, max=1.0)
self.matrix_1_0 = self.matrix_1[:, [0, self.intermediate_nc, self.intermediate_nc*2], :, :]
self.matrix_1_1 = self.matrix_1[:, [1, 1 + self.intermediate_nc, 1 + self.intermediate_nc*2], :, :]
        self.matrix_1_2 = self.matrix_1[:, [2, 2 + self.intermediate_nc, 2 + self.intermediate_nc*2], :, :]
self.matrix_1_3 = self.matrix_1[:, [3, 3 + self.intermediate_nc, 3 + self.intermediate_nc*2], :, :]
self.matrix_2 = torch.clamp((F.interpolate(sub_matrix2, (self.real_B.size(-2), self.real_B.size(-1)), mode='nearest')*self.matrix_2_gain-0.5)/0.5, min=-1.0, max=1.0)
self.matrix_2_0 = self.matrix_2[:, 0, :, :]
self.matrix_2_1 = self.matrix_2[:, 1, :, :]
self.matrix_2_2 = self.matrix_2[:, 2, :, :]
self.matrix_2_3 = self.matrix_2[:, 3, :, :]
sub_matrix1 = sub_matrix1.view(-1, sub_matrix1.size(1), sub_matrix1.size(2)*sub_matrix1.size(3)) # [1, 3xmc, 256x256]
sub_matrix2 = sub_matrix2.view(-1, sub_matrix2.size(1), sub_matrix2.size(2)*sub_matrix2.size(3)) # [1, mc, lsxls]
sub_matrix3 = torch.matmul(sub_matrix2, self.real_C_itp_flat) # [1, mc, 1]
sub_matrix1 = torch.transpose(sub_matrix1, 1, 2) # [1, 256x256, 3xmc]
sm1R = sub_matrix1[:, :, 0:self.intermediate_nc] # [1, 256x256, mc]
sm1G = sub_matrix1[:, :, self.intermediate_nc:self.intermediate_nc*2]
sm1B = sub_matrix1[:, :, self.intermediate_nc*2:self.intermediate_nc*3]
bufR = torch.matmul(sm1R, sub_matrix3) # [1, 256x256, 1]
bufG = torch.matmul(sm1G, sub_matrix3)
bufB = torch.matmul(sm1B, sub_matrix3)
buf = torch.cat([bufR, bufG, bufB], dim=2) # [1, 256x256, 3]
buf = torch.transpose(buf, 1, 2) # [1, 3, 256x256]
buf = (buf - 0.5) / 0.5
buf = torch.clamp(buf, min=-1.0, max=1.0)
# print('buf:', buf.size())
self.fake_B = buf.view(self.real_B.size()) # [1, 3, 256, 256]
def backward_D(self):
"""Calculate GAN loss for the discriminator"""
# Fake; stop backprop to the generator by detaching fake_B
# fake_AB = torch.cat((self.real_A, self.fake_B), 1) # we use conditional GANs; we need to feed both input and output to the discriminator
# pred_fake = self.netD(fake_AB.detach())
fake_ACB = torch.cat((self.real_AC, self.fake_B), 1) # we use conditional GANs; we need to feed both input and output to the discriminator
pred_fake = self.netD(fake_ACB.detach())
self.loss_D_fake = self.criterionGAN(pred_fake, False)
# Real
# real_AB = torch.cat((self.real_A, self.real_B), 1)
# pred_real = self.netD(real_AB)
real_ACB = torch.cat((self.real_AC, self.real_B), 1)
pred_real = self.netD(real_ACB)
self.loss_D_real = self.criterionGAN(pred_real, True)
# combine loss and calculate gradients
self.loss_D = (self.loss_D_fake + self.loss_D_real) * 0.5
self.loss_D.backward()
def backward_G(self):
"""Calculate GAN and L1 loss for the generator"""
# First, G(A) should fake the discriminator
# fake_AB = torch.cat((self.real_A, self.fake_B), 1)
# pred_fake = self.netD(fake_AB)
fake_ACB = torch.cat((self.real_AC, self.fake_B), 1)
pred_fake = self.netD(fake_ACB)
self.loss_G_GAN = self.criterionGAN(pred_fake, True)
# Second, G(A) = B
self.loss_G_L1 = self.criterionL1(self.fake_B, self.real_B) * self.opt.lambda_L1
trans_mean_1 = torch.mean(self.sub_matrix_1, dim=0, keepdim=True) # [1, 75, 256, 256]
trans_mean_1 = trans_mean_1.expand(self.sub_matrix_1.size(0), trans_mean_1.size(1), trans_mean_1.size(2), trans_mean_1.size(3)) # [25, 75, 256, 256]
self.loss_G_LTMReg_1 = self.criterionL1(self.sub_matrix_1, trans_mean_1) * self.opt.lambda_LTMReg_1
trans_mean_2 = torch.mean(self.sub_matrix_2, dim=0, keepdim=True) # [1, 75, 256, 256]
trans_mean_2 = trans_mean_2.expand(self.sub_matrix_2.size(0), trans_mean_2.size(1), trans_mean_2.size(2), trans_mean_2.size(3)) # [25, 75, 256, 256]
self.loss_G_LTMReg_2 = self.criterionL1(self.sub_matrix_2, trans_mean_2) * self.opt.lambda_LTMReg_2
# combine loss and calculate gradients
self.loss_G = self.loss_G_GAN + self.loss_G_L1 + self.loss_G_LTMReg_1 + self.loss_G_LTMReg_2
self.loss_G.backward()
def optimize_parameters(self):
self.forward() # compute fake images: G(A)
# update D
self.set_requires_grad(self.netD, True) # enable backprop for D
self.optimizer_D.zero_grad() # set D's gradients to zero
self.backward_D() # calculate gradients for D
self.optimizer_D.step() # update D's weights
# update G
self.set_requires_grad(self.netD, False) # D requires no gradients when optimizing G
        self.optimizer_G.zero_grad()         # set G's gradients to zero
        self.optimizer_G2.zero_grad()        # set G2's gradients to zero
        self.backward_G()                    # calculate gradients for G and G2
        self.optimizer_G.step()              # update G's weights
        self.optimizer_G2.step()             # update G2's weights
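# --- Illustrative sketch (not part of the original model) ---
# A standalone walkthrough of the tensor-shape algebra in forward(): per-pixel
# RGB coefficient maps (netG) are contracted with a light-space map (netG2)
# applied to the flattened light condition C. Sizes here (mc=4, ls=8, H=W=16)
# are arbitrary stand-ins for intermediate_nc, light_res and the image size.
if __name__ == "__main__":
    mc, ls, H, W = 4, 8, 16, 16
    m1 = torch.rand(1, 3 * mc, H, W)                 # netG output: [1, 3*mc, H, W]
    m2 = torch.rand(1, mc, ls, ls)                   # netG2 output after resize
    c = torch.rand(1, ls * ls, 1)                    # flattened light condition
    m1 = m1.view(1, 3 * mc, H * W).transpose(1, 2)   # [1, H*W, 3*mc]
    m3 = torch.matmul(m2.view(1, mc, ls * ls), c)    # [1, mc, 1]
    rgb = [torch.matmul(m1[:, :, i * mc:(i + 1) * mc], m3) for i in range(3)]
    out = torch.cat(rgb, dim=2).transpose(1, 2).view(1, 3, H, W)
    print(out.shape)  # torch.Size([1, 3, 16, 16])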
| 58.957944
| 195
| 0.6389
|
4a1271051f59cc429d7e9a430cccaa6dc41dff23
| 668
|
py
|
Python
|
lib/model/nms_poly/nms_poly_wrapper.py
|
VDIGPKU/SReN_MM
|
aeb95305d489d0c4ea18bbf15b3f39387b73738d
|
[
"MIT"
] | 1
|
2021-11-05T03:29:28.000Z
|
2021-11-05T03:29:28.000Z
|
lib/model/nms_poly/nms_poly_wrapper.py
|
VDIGPKU/SReN_MM
|
aeb95305d489d0c4ea18bbf15b3f39387b73738d
|
[
"MIT"
] | null | null | null |
lib/model/nms_poly/nms_poly_wrapper.py
|
VDIGPKU/SReN_MM
|
aeb95305d489d0c4ea18bbf15b3f39387b73738d
|
[
"MIT"
] | null | null | null |
# --------------------------------------------------------
# Fast R-CNN
# Copyright (c) 2015 Microsoft
# Licensed under The MIT License [see LICENSE for details]
# Written by Ross Girshick
# --------------------------------------------------------
import torch
from lib.model.utils.config import cfg
from lib.model.nms_poly.nms_poly_gpu import nms_poly_gpu
def nms_poly(dets, thresh, force_cpu=False):
"""Dispatch to either CPU or GPU NMS implementations."""
if dets.shape[0] == 0:
return []
# ---numpy version---
# original: return gpu_nms(dets, thresh, device_id=cfg.GPU_ID)
# ---pytorch version---
return nms_poly_gpu(dets, thresh)
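# --- Illustrative usage sketch (not part of the original module) ---
# The exact column layout of `dets` is an assumption: eight vertex
# coordinates of a quadrilateral followed by a confidence score, a common
# convention for polygon detections. Running this requires a GPU and the
# compiled nms_poly_gpu extension.
if __name__ == "__main__":
    import numpy as np
    dets = np.array(
        [[0, 0, 10, 0, 10, 10, 0, 10, 0.9],
         [1, 1, 11, 1, 11, 11, 1, 11, 0.8]],
        dtype=np.float32,
    )
    keep = nms_poly(torch.from_numpy(dets).cuda(), thresh=0.5)
    print(keep)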
| 35.157895
| 66
| 0.583832
|
4a1271f036e10e95cf781747d5f3a517fd97dc81
| 1,673
|
py
|
Python
|
client/osx/objc_test.py
|
nahidupa/grr
|
100a9d85ef2abb234e12e3ac2623caffb4116be7
|
[
"Apache-2.0"
] | 6
|
2015-04-03T02:25:28.000Z
|
2021-11-17T21:42:59.000Z
|
client/osx/objc_test.py
|
nahidupa/grr
|
100a9d85ef2abb234e12e3ac2623caffb4116be7
|
[
"Apache-2.0"
] | 3
|
2020-02-11T22:29:15.000Z
|
2021-06-10T17:44:31.000Z
|
client/osx/objc_test.py
|
nahidupa/grr
|
100a9d85ef2abb234e12e3ac2623caffb4116be7
|
[
"Apache-2.0"
] | null | null | null |
#!/usr/bin/env python
# Copyright 2012 Google Inc. All Rights Reserved.
"""Tests for grr.client.lib.osx.objc.
These tests don't have OS X dependencies and will run on Linux.
"""
import ctypes
import mox
from grr.client.osx import objc
from grr.lib import flags
from grr.lib import test_lib
class ObjcTest(test_lib.GRRBaseTest):
def setUp(self):
super(ObjcTest, self).setUp()
self.mox = mox.Mox()
self.mox.StubOutWithMock(objc.ctypes.util, 'find_library')
self.mox.StubOutWithMock(objc.ctypes.cdll, 'LoadLibrary')
self.dll = self.mox.CreateMockAnything()
self.function = self.mox.CreateMockAnything()
self.dll.CFMockFunc = self.function
self.argtypes = [ctypes.c_void_p, ctypes.c_void_p]
self.restype = ctypes.c_void_p
self.cftable = [
('CFMockFunc',
self.argtypes,
self.restype)
]
def tearDown(self):
self.mox.UnsetStubs()
def testSetCTypesForLibraryLibNotFound(self):
objc.ctypes.util.find_library('mock').AndReturn(None)
self.mox.ReplayAll()
self.assertRaises(objc.ErrorLibNotFound, objc.SetCTypesForLibrary,
'mock', self.cftable)
self.mox.VerifyAll()
def testSetCTypesForLibrary(self):
objc.ctypes.util.find_library('mock').AndReturn('/mock/path')
objc.ctypes.cdll.LoadLibrary('/mock/path').AndReturn(self.dll)
self.mox.ReplayAll()
dll = objc.SetCTypesForLibrary('mock', self.cftable)
self.assertEqual(dll.CFMockFunc.argtypes, self.argtypes)
self.assertEqual(dll.CFMockFunc.restype, self.restype)
self.mox.VerifyAll()
def main(argv):
test_lib.main(argv)
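# --- Illustrative sketch (not part of the original tests) ---
# A mock-free rendition of the cftable pattern the tests above exercise:
# each (name, argtypes, restype) row is applied to a loaded library handle.
# The DLL below is a plain stand-in object, not a real ctypes CDLL.
class _FakeFunc(object):
  argtypes = None
  restype = None


class _FakeDLL(object):
  CFMockFunc = _FakeFunc()


def _apply_cftable(dll, cftable):
  for name, argtypes, restype in cftable:
    func = getattr(dll, name)
    func.argtypes = argtypes
    func.restype = restype
  return dll


_dll = _apply_cftable(
    _FakeDLL(), [('CFMockFunc', [ctypes.c_void_p], ctypes.c_void_p)])
assert _dll.CFMockFunc.restype is ctypes.c_void_p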
if __name__ == '__main__':
flags.StartMain(main)
| 26.983871
| 70
| 0.706515
|
4a12734401fea3206ce88dcf9ebb5adc9b10539e
| 16,675
|
py
|
Python
|
research/object_detection/model_lib_test.py
|
luk1684tw/models
|
7e7776e1ce0db64cdf22d6de9f1a9848e5a71b2c
|
[
"Apache-2.0"
] | 51
|
2018-06-06T06:58:12.000Z
|
2021-09-29T18:43:03.000Z
|
research/object_detection/model_lib_test.py
|
luk1684tw/models
|
7e7776e1ce0db64cdf22d6de9f1a9848e5a71b2c
|
[
"Apache-2.0"
] | 3
|
2018-07-17T12:53:05.000Z
|
2019-03-29T04:43:37.000Z
|
research/object_detection/model_lib_test.py
|
luk1684tw/models
|
7e7776e1ce0db64cdf22d6de9f1a9848e5a71b2c
|
[
"Apache-2.0"
] | 39
|
2018-06-06T04:49:05.000Z
|
2021-03-25T12:25:14.000Z
|
# Copyright 2017 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Tests for object detection model library."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import functools
import os
import numpy as np
import tensorflow as tf
from tensorflow.contrib.tpu.python.tpu import tpu_config
from tensorflow.contrib.tpu.python.tpu import tpu_estimator
from object_detection import inputs
from object_detection import model_hparams
from object_detection import model_lib
from object_detection.builders import model_builder
from object_detection.core import standard_fields as fields
from object_detection.utils import config_util
# Model for test. Options are:
# 'ssd_inception_v2_pets', 'faster_rcnn_resnet50_pets'
MODEL_NAME_FOR_TEST = 'ssd_inception_v2_pets'
def _get_data_path():
"""Returns an absolute path to TFRecord file."""
return os.path.join(tf.resource_loader.get_data_files_path(), 'test_data',
'pets_examples.record')
def get_pipeline_config_path(model_name):
"""Returns path to the local pipeline config file."""
return os.path.join(tf.resource_loader.get_data_files_path(), 'samples',
'configs', model_name + '.config')
def _get_labelmap_path():
"""Returns an absolute path to label map file."""
return os.path.join(tf.resource_loader.get_data_files_path(), 'data',
'pet_label_map.pbtxt')
def _get_configs_for_model(model_name):
"""Returns configurations for model."""
filename = get_pipeline_config_path(model_name)
data_path = _get_data_path()
label_map_path = _get_labelmap_path()
configs = config_util.get_configs_from_pipeline_file(filename)
configs = config_util.merge_external_params_with_configs(
configs,
train_input_path=data_path,
eval_input_path=data_path,
label_map_path=label_map_path)
return configs
class ModelLibTest(tf.test.TestCase):
@classmethod
def setUpClass(cls):
tf.reset_default_graph()
def _assert_model_fn_for_train_eval(self, configs, mode,
class_agnostic=False):
model_config = configs['model']
train_config = configs['train_config']
with tf.Graph().as_default():
if mode == 'train':
features, labels = inputs.create_train_input_fn(
configs['train_config'],
configs['train_input_config'],
configs['model'])()
model_mode = tf.estimator.ModeKeys.TRAIN
batch_size = train_config.batch_size
elif mode == 'eval':
features, labels = inputs.create_eval_input_fn(
configs['eval_config'],
configs['eval_input_config'],
configs['model'])()
model_mode = tf.estimator.ModeKeys.EVAL
batch_size = 1
elif mode == 'eval_on_train':
features, labels = inputs.create_eval_input_fn(
configs['eval_config'],
configs['train_input_config'],
configs['model'])()
model_mode = tf.estimator.ModeKeys.EVAL
batch_size = 1
detection_model_fn = functools.partial(
model_builder.build, model_config=model_config, is_training=True)
hparams = model_hparams.create_hparams(
hparams_overrides='load_pretrained=false')
model_fn = model_lib.create_model_fn(detection_model_fn, configs, hparams)
estimator_spec = model_fn(features, labels, model_mode)
self.assertIsNotNone(estimator_spec.loss)
self.assertIsNotNone(estimator_spec.predictions)
if class_agnostic:
self.assertNotIn('detection_classes', estimator_spec.predictions)
else:
detection_classes = estimator_spec.predictions['detection_classes']
self.assertEqual(batch_size, detection_classes.shape.as_list()[0])
self.assertEqual(tf.float32, detection_classes.dtype)
detection_boxes = estimator_spec.predictions['detection_boxes']
detection_scores = estimator_spec.predictions['detection_scores']
num_detections = estimator_spec.predictions['num_detections']
self.assertEqual(batch_size, detection_boxes.shape.as_list()[0])
self.assertEqual(tf.float32, detection_boxes.dtype)
self.assertEqual(batch_size, detection_scores.shape.as_list()[0])
self.assertEqual(tf.float32, detection_scores.dtype)
self.assertEqual(tf.float32, num_detections.dtype)
if model_mode == tf.estimator.ModeKeys.TRAIN:
self.assertIsNotNone(estimator_spec.train_op)
return estimator_spec
def _assert_model_fn_for_predict(self, configs):
model_config = configs['model']
with tf.Graph().as_default():
features, _ = inputs.create_eval_input_fn(
configs['eval_config'],
configs['eval_input_config'],
configs['model'])()
detection_model_fn = functools.partial(
model_builder.build, model_config=model_config, is_training=False)
hparams = model_hparams.create_hparams(
hparams_overrides='load_pretrained=false')
model_fn = model_lib.create_model_fn(detection_model_fn, configs, hparams)
estimator_spec = model_fn(features, None, tf.estimator.ModeKeys.PREDICT)
self.assertIsNone(estimator_spec.loss)
self.assertIsNone(estimator_spec.train_op)
self.assertIsNotNone(estimator_spec.predictions)
self.assertIsNotNone(estimator_spec.export_outputs)
self.assertIn(tf.saved_model.signature_constants.PREDICT_METHOD_NAME,
estimator_spec.export_outputs)
def test_model_fn_in_train_mode(self):
"""Tests the model function in TRAIN mode."""
configs = _get_configs_for_model(MODEL_NAME_FOR_TEST)
self._assert_model_fn_for_train_eval(configs, 'train')
def test_model_fn_in_eval_mode(self):
"""Tests the model function in EVAL mode."""
configs = _get_configs_for_model(MODEL_NAME_FOR_TEST)
self._assert_model_fn_for_train_eval(configs, 'eval')
def test_model_fn_in_eval_on_train_mode(self):
"""Tests the model function in EVAL mode with train data."""
configs = _get_configs_for_model(MODEL_NAME_FOR_TEST)
self._assert_model_fn_for_train_eval(configs, 'eval_on_train')
def test_model_fn_in_predict_mode(self):
"""Tests the model function in PREDICT mode."""
configs = _get_configs_for_model(MODEL_NAME_FOR_TEST)
self._assert_model_fn_for_predict(configs)
def test_create_estimator_and_inputs(self):
"""Tests that Estimator and input function are constructed correctly."""
run_config = tf.estimator.RunConfig()
hparams = model_hparams.create_hparams(
hparams_overrides='load_pretrained=false')
pipeline_config_path = get_pipeline_config_path(MODEL_NAME_FOR_TEST)
train_steps = 20
eval_steps = 10
train_and_eval_dict = model_lib.create_estimator_and_inputs(
run_config,
hparams,
pipeline_config_path,
train_steps=train_steps,
eval_steps=eval_steps)
estimator = train_and_eval_dict['estimator']
train_steps = train_and_eval_dict['train_steps']
eval_steps = train_and_eval_dict['eval_steps']
self.assertIsInstance(estimator, tf.estimator.Estimator)
self.assertEqual(20, train_steps)
self.assertEqual(10, eval_steps)
self.assertIn('train_input_fn', train_and_eval_dict)
self.assertIn('eval_input_fn', train_and_eval_dict)
self.assertIn('eval_on_train_input_fn', train_and_eval_dict)
def test_create_estimator_with_default_train_eval_steps(self):
"""Tests that number of train/eval defaults to config values."""
run_config = tf.estimator.RunConfig()
hparams = model_hparams.create_hparams(
hparams_overrides='load_pretrained=false')
pipeline_config_path = get_pipeline_config_path(MODEL_NAME_FOR_TEST)
configs = config_util.get_configs_from_pipeline_file(pipeline_config_path)
config_train_steps = configs['train_config'].num_steps
config_eval_steps = configs['eval_config'].num_examples
train_and_eval_dict = model_lib.create_estimator_and_inputs(
run_config, hparams, pipeline_config_path)
estimator = train_and_eval_dict['estimator']
train_steps = train_and_eval_dict['train_steps']
eval_steps = train_and_eval_dict['eval_steps']
self.assertIsInstance(estimator, tf.estimator.Estimator)
self.assertEqual(config_train_steps, train_steps)
self.assertEqual(config_eval_steps, eval_steps)
def test_create_tpu_estimator_and_inputs(self):
"""Tests that number of train/eval defaults to config values."""
run_config = tpu_config.RunConfig()
hparams = model_hparams.create_hparams(
hparams_overrides='load_pretrained=false')
pipeline_config_path = get_pipeline_config_path(MODEL_NAME_FOR_TEST)
train_steps = 20
eval_steps = 10
train_and_eval_dict = model_lib.create_estimator_and_inputs(
run_config,
hparams,
pipeline_config_path,
train_steps=train_steps,
eval_steps=eval_steps,
use_tpu_estimator=True)
estimator = train_and_eval_dict['estimator']
train_steps = train_and_eval_dict['train_steps']
eval_steps = train_and_eval_dict['eval_steps']
self.assertIsInstance(estimator, tpu_estimator.TPUEstimator)
self.assertEqual(20, train_steps)
self.assertEqual(10, eval_steps)
def test_create_train_and_eval_specs(self):
"""Tests that `TrainSpec` and `EvalSpec` is created correctly."""
run_config = tf.estimator.RunConfig()
hparams = model_hparams.create_hparams(
hparams_overrides='load_pretrained=false')
pipeline_config_path = get_pipeline_config_path(MODEL_NAME_FOR_TEST)
train_steps = 20
eval_steps = 10
train_and_eval_dict = model_lib.create_estimator_and_inputs(
run_config,
hparams,
pipeline_config_path,
train_steps=train_steps,
eval_steps=eval_steps)
train_input_fn = train_and_eval_dict['train_input_fn']
eval_input_fn = train_and_eval_dict['eval_input_fn']
eval_on_train_input_fn = train_and_eval_dict['eval_on_train_input_fn']
predict_input_fn = train_and_eval_dict['predict_input_fn']
train_steps = train_and_eval_dict['train_steps']
eval_steps = train_and_eval_dict['eval_steps']
train_spec, eval_specs = model_lib.create_train_and_eval_specs(
train_input_fn,
eval_input_fn,
eval_on_train_input_fn,
predict_input_fn,
train_steps,
eval_steps,
eval_on_train_data=True,
final_exporter_name='exporter',
eval_spec_name='holdout')
self.assertEqual(train_steps, train_spec.max_steps)
self.assertEqual(2, len(eval_specs))
self.assertEqual(eval_steps, eval_specs[0].steps)
self.assertEqual('holdout', eval_specs[0].name)
self.assertEqual('exporter', eval_specs[0].exporters[0].name)
self.assertEqual(eval_steps, eval_specs[1].steps)
self.assertEqual('eval_on_train', eval_specs[1].name)
def test_experiment(self):
"""Tests that the `Experiment` object is constructed correctly."""
run_config = tf.estimator.RunConfig()
hparams = model_hparams.create_hparams(
hparams_overrides='load_pretrained=false')
pipeline_config_path = get_pipeline_config_path(MODEL_NAME_FOR_TEST)
experiment = model_lib.populate_experiment(
run_config,
hparams,
pipeline_config_path,
train_steps=10,
eval_steps=20)
self.assertEqual(10, experiment.train_steps)
self.assertEqual(20, experiment.eval_steps)
class UnbatchTensorsTest(tf.test.TestCase):
def test_unbatch_without_unpadding(self):
image_placeholder = tf.placeholder(tf.float32, [2, None, None, None])
groundtruth_boxes_placeholder = tf.placeholder(tf.float32, [2, None, None])
groundtruth_classes_placeholder = tf.placeholder(tf.float32,
[2, None, None])
groundtruth_weights_placeholder = tf.placeholder(tf.float32, [2, None])
tensor_dict = {
fields.InputDataFields.image:
image_placeholder,
fields.InputDataFields.groundtruth_boxes:
groundtruth_boxes_placeholder,
fields.InputDataFields.groundtruth_classes:
groundtruth_classes_placeholder,
fields.InputDataFields.groundtruth_weights:
groundtruth_weights_placeholder
}
unbatched_tensor_dict = model_lib.unstack_batch(
tensor_dict, unpad_groundtruth_tensors=False)
with self.test_session() as sess:
unbatched_tensor_dict_out = sess.run(
unbatched_tensor_dict,
feed_dict={
image_placeholder:
np.random.rand(2, 4, 4, 3).astype(np.float32),
groundtruth_boxes_placeholder:
np.random.rand(2, 5, 4).astype(np.float32),
groundtruth_classes_placeholder:
np.random.rand(2, 5, 6).astype(np.float32),
groundtruth_weights_placeholder:
np.random.rand(2, 5).astype(np.float32)
})
for image_out in unbatched_tensor_dict_out[fields.InputDataFields.image]:
self.assertAllEqual(image_out.shape, [4, 4, 3])
for groundtruth_boxes_out in unbatched_tensor_dict_out[
fields.InputDataFields.groundtruth_boxes]:
self.assertAllEqual(groundtruth_boxes_out.shape, [5, 4])
for groundtruth_classes_out in unbatched_tensor_dict_out[
fields.InputDataFields.groundtruth_classes]:
self.assertAllEqual(groundtruth_classes_out.shape, [5, 6])
for groundtruth_weights_out in unbatched_tensor_dict_out[
fields.InputDataFields.groundtruth_weights]:
self.assertAllEqual(groundtruth_weights_out.shape, [5])
def test_unbatch_and_unpad_groundtruth_tensors(self):
image_placeholder = tf.placeholder(tf.float32, [2, None, None, None])
groundtruth_boxes_placeholder = tf.placeholder(tf.float32, [2, 5, None])
groundtruth_classes_placeholder = tf.placeholder(tf.float32, [2, 5, None])
groundtruth_weights_placeholder = tf.placeholder(tf.float32, [2, 5])
num_groundtruth_placeholder = tf.placeholder(tf.int32, [2])
tensor_dict = {
fields.InputDataFields.image:
image_placeholder,
fields.InputDataFields.groundtruth_boxes:
groundtruth_boxes_placeholder,
fields.InputDataFields.groundtruth_classes:
groundtruth_classes_placeholder,
fields.InputDataFields.groundtruth_weights:
groundtruth_weights_placeholder,
fields.InputDataFields.num_groundtruth_boxes:
num_groundtruth_placeholder
}
unbatched_tensor_dict = model_lib.unstack_batch(
tensor_dict, unpad_groundtruth_tensors=True)
with self.test_session() as sess:
unbatched_tensor_dict_out = sess.run(
unbatched_tensor_dict,
feed_dict={
image_placeholder:
np.random.rand(2, 4, 4, 3).astype(np.float32),
groundtruth_boxes_placeholder:
np.random.rand(2, 5, 4).astype(np.float32),
groundtruth_classes_placeholder:
np.random.rand(2, 5, 6).astype(np.float32),
groundtruth_weights_placeholder:
np.random.rand(2, 5).astype(np.float32),
num_groundtruth_placeholder:
np.array([3, 3], np.int32)
})
for image_out in unbatched_tensor_dict_out[fields.InputDataFields.image]:
self.assertAllEqual(image_out.shape, [4, 4, 3])
for groundtruth_boxes_out in unbatched_tensor_dict_out[
fields.InputDataFields.groundtruth_boxes]:
self.assertAllEqual(groundtruth_boxes_out.shape, [3, 4])
for groundtruth_classes_out in unbatched_tensor_dict_out[
fields.InputDataFields.groundtruth_classes]:
self.assertAllEqual(groundtruth_classes_out.shape, [3, 6])
for groundtruth_weights_out in unbatched_tensor_dict_out[
fields.InputDataFields.groundtruth_weights]:
self.assertAllEqual(groundtruth_weights_out.shape, [3])
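# --- Illustrative sketch (not part of the original tests) ---
# A NumPy analogue of what unstack_batch(..., unpad_groundtruth_tensors=True)
# is asserted to do above: split along the batch dimension, then trim each
# padded groundtruth tensor down to its true row count.
def _unstack_and_unpad(boxes, num_boxes):
  # boxes: [batch, max_boxes, 4]; num_boxes: [batch]
  return [b[:n] for b, n in zip(boxes, num_boxes)]


_padded = np.random.rand(2, 5, 4).astype(np.float32)
for _unpadded in _unstack_and_unpad(_padded, np.array([3, 3])):
  assert _unpadded.shape == (3, 4)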
if __name__ == '__main__':
tf.test.main()
| 41.583541
| 80
| 0.7209
|
4a1273d9d3c5b67e034f194e0b599a4eeef2fffd
| 3,354
|
py
|
Python
|
OfficeCloudToolsX/settings.py
|
thomsbe/OfficeCloudToolsX
|
ffb1c69216cc960d5cee99051666d184c0117705
|
[
"Apache-2.0"
] | null | null | null |
OfficeCloudToolsX/settings.py
|
thomsbe/OfficeCloudToolsX
|
ffb1c69216cc960d5cee99051666d184c0117705
|
[
"Apache-2.0"
] | null | null | null |
OfficeCloudToolsX/settings.py
|
thomsbe/OfficeCloudToolsX
|
ffb1c69216cc960d5cee99051666d184c0117705
|
[
"Apache-2.0"
] | null | null | null |
"""
Django settings for OfficeCloudToolsX project.
Generated by 'django-admin startproject' using Django 1.9.4.
For more information on this file, see
https://docs.djangoproject.com/en/1.9/topics/settings/
For the full list of settings and their values, see
https://docs.djangoproject.com/en/1.9/ref/settings/
"""
import os
# Build paths inside the project like this: os.path.join(BASE_DIR, ...)
BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
# Quick-start development settings - unsuitable for production
# See https://docs.djangoproject.com/en/1.9/howto/deployment/checklist/
# SECURITY WARNING: keep the secret key used in production secret!
SECRET_KEY = '!96=w6$7t!o7*+3h)$swx_(ux-&^0@w$0^#7#m7$l=5@(hx2o6'
# SECURITY WARNING: don't run with debug turned on in production!
DEBUG = True
ALLOWED_HOSTS = []
# Application definition
INSTALLED_APPS = [
'material',
'material.frontend',
'material.admin',
'django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
'officecloud',
'kaffeekasse',
]
MIDDLEWARE_CLASSES = [
'django.middleware.security.SecurityMiddleware',
'django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.common.CommonMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.auth.middleware.SessionAuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
'django.middleware.clickjacking.XFrameOptionsMiddleware',
]
ROOT_URLCONF = 'OfficeCloudToolsX.urls'
TEMPLATES = [
{
'BACKEND': 'django.template.backends.django.DjangoTemplates',
        'DIRS': [os.path.join(BASE_DIR, 'templates')],
'APP_DIRS': True,
'OPTIONS': {
'context_processors': [
'django.template.context_processors.debug',
'django.template.context_processors.request',
'django.contrib.auth.context_processors.auth',
'django.contrib.messages.context_processors.messages',
],
},
},
]
WSGI_APPLICATION = 'OfficeCloudToolsX.wsgi.application'
# Database
# https://docs.djangoproject.com/en/1.9/ref/settings/#databases
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.sqlite3',
'NAME': os.path.join(BASE_DIR, 'db.sqlite3'),
}
}
# Password validation
# https://docs.djangoproject.com/en/1.9/ref/settings/#auth-password-validators
AUTH_PASSWORD_VALIDATORS = [
{
'NAME': 'django.contrib.auth.password_validation.UserAttributeSimilarityValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.MinimumLengthValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.CommonPasswordValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.NumericPasswordValidator',
},
]
# Internationalization
# https://docs.djangoproject.com/en/1.9/topics/i18n/
LANGUAGE_CODE = 'de-de'
TIME_ZONE = 'Europe/Berlin'
USE_I18N = True
USE_L10N = True
USE_TZ = True
# Static files (CSS, JavaScript, Images)
# https://docs.djangoproject.com/en/1.9/howto/static-files/
STATIC_URL = '/static/'
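# --- Illustrative sketch (not part of the original settings) ---
# One common way to honour the SECURITY WARNING above without changing the
# development default: prefer a key from the environment when one is set.
# The variable name DJANGO_SECRET_KEY is a hypothetical choice.
SECRET_KEY = os.environ.get('DJANGO_SECRET_KEY', SECRET_KEY)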
| 26.203125
| 91
| 0.695886
|
4a1274b6436b9cb5f165ba22a069aab5b047c1b6
| 1,267
|
py
|
Python
|
demo/link_arm_control/camera.py
|
shigeyukioba/matchernet
|
5aec7abd3b157846ca406e3c5103e3a1bebc711e
|
[
"Apache-2.0"
] | 1
|
2020-02-26T13:34:23.000Z
|
2020-02-26T13:34:23.000Z
|
demo/link_arm_control/camera.py
|
shigeyukioba/matchernet
|
5aec7abd3b157846ca406e3c5103e3a1bebc711e
|
[
"Apache-2.0"
] | 39
|
2019-11-28T07:48:48.000Z
|
2020-05-23T14:13:38.000Z
|
demo/link_arm_control/camera.py
|
shigeyukioba/matchernet
|
5aec7abd3b157846ca406e3c5103e3a1bebc711e
|
[
"Apache-2.0"
] | 1
|
2020-05-22T05:11:14.000Z
|
2020-05-22T05:11:14.000Z
|
# -*- coding: utf-8 -*-
import sys
import numpy as np
from geom import Matrix4
class Camera(object):
""" 3D camera class. """
def __init__(self, eye_from, eye_to):
up = np.array([0.0, 1.0, 0.0], dtype=np.float32)
self.m = self.get_lookat_mat(eye_from, eye_to, up)
self.m_inv = self.m.invert()
def get_lookat_mat(self, eye_from, eye_to, up):
def normalize_vec(v):
v /= np.linalg.norm(v)
return v
        forward = eye_to - eye_from
        forward = normalize_vec(forward)
        side = np.cross(forward, up)
        side = normalize_vec(side)
        new_up = np.cross(side, forward)
        m = Matrix4()
        m.m[0, 0] = side[0]
        m.m[1, 0] = side[1]
        m.m[2, 0] = side[2]
        m.m[0, 1] = new_up[0]
        m.m[1, 1] = new_up[1]
        m.m[2, 1] = new_up[2]
        m.m[0, 2] = -forward[0]
        m.m[1, 2] = -forward[1]
        m.m[2, 2] = -forward[2]
        m.m[0, 3] = eye_from[0]
        m.m[1, 3] = eye_from[1]
        m.m[2, 3] = eye_from[2]
        return m
def get_inv_mat(self):
""" Get invererted camera matrix
Returns:
numpy ndarray: inverted camera matrix
"""
return self.m_inv
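# --- Illustrative usage sketch (not part of the original module) ---
# Assumes geom.Matrix4 (imported above) is available. Build a camera looking
# from +Z toward the origin and fetch the inverse matrix; the return type
# mirrors whatever Matrix4.invert() yields.
if __name__ == "__main__":
    eye_from = np.array([0.0, 0.0, 5.0], dtype=np.float32)
    eye_to = np.array([0.0, 0.0, 0.0], dtype=np.float32)
    cam = Camera(eye_from, eye_to)
    print(cam.m.m)            # 4x4 camera (look-at) matrix
    print(cam.get_inv_mat())  # its inverse, as produced by Matrix4.invert()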
| 23.90566
| 58
| 0.502762
|
4a1274e9f74c433a18cd0d371cb7f9b673e990d4
| 1,404
|
py
|
Python
|
test/test_pricing_component_value_request.py
|
billforward/bf-python
|
d2b812329ca3ed1fd94364d7f46f69ad74665596
|
[
"Apache-2.0"
] | 2
|
2016-11-23T17:32:37.000Z
|
2022-02-24T05:13:20.000Z
|
test/test_pricing_component_value_request.py
|
billforward/bf-python
|
d2b812329ca3ed1fd94364d7f46f69ad74665596
|
[
"Apache-2.0"
] | null | null | null |
test/test_pricing_component_value_request.py
|
billforward/bf-python
|
d2b812329ca3ed1fd94364d7f46f69ad74665596
|
[
"Apache-2.0"
] | 1
|
2016-12-30T20:02:48.000Z
|
2016-12-30T20:02:48.000Z
|
# coding: utf-8
"""
BillForward REST API
OpenAPI spec version: 1.0.0
Generated by: https://github.com/swagger-api/swagger-codegen.git
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
"""
from __future__ import absolute_import
import os
import sys
import unittest
import billforward
from billforward.rest import ApiException
from billforward.models.pricing_component_value_request import PricingComponentValueRequest
class TestPricingComponentValueRequest(unittest.TestCase):
""" PricingComponentValueRequest unit test stubs """
def setUp(self):
pass
def tearDown(self):
pass
def testPricingComponentValueRequest(self):
"""
Test PricingComponentValueRequest
"""
model = billforward.models.pricing_component_value_request.PricingComponentValueRequest()
if __name__ == '__main__':
unittest.main()
| 26.490566
| 97
| 0.736467
|
4a1274fc8241a5176fd2a90cae5b96b284d10fef
| 3,513
|
py
|
Python
|
apps/rois.py
|
skd862/streamlit-geospatial
|
1e618c87e9138b60a84155cb6af20bbce7d77d3c
|
[
"MIT"
] | null | null | null |
apps/rois.py
|
skd862/streamlit-geospatial
|
1e618c87e9138b60a84155cb6af20bbce7d77d3c
|
[
"MIT"
] | null | null | null |
apps/rois.py
|
skd862/streamlit-geospatial
|
1e618c87e9138b60a84155cb6af20bbce7d77d3c
|
[
"MIT"
] | null | null | null |
""" A module for storing some sample ROIs for creating Landsat/GOES timelapse.
"""
from shapely.geometry import Polygon
goes_rois = {
"Creek Fire, CA (2020-09-05)": {
"region": Polygon(
[
[-121.003418, 36.848857],
[-121.003418, 39.049052],
[-117.905273, 39.049052],
[-117.905273, 36.848857],
[-121.003418, 36.848857],
]
),
"start_time": "2020-09-05T15:00:00",
"end_time": "2020-09-06T02:00:00",
},
"Bomb Cyclone (2021-10-24)": {
"region": Polygon(
[
[-159.5954, 60.4088],
[-159.5954, 24.5178],
[-114.2438, 24.5178],
[-114.2438, 60.4088],
]
),
"start_time": "2021-10-24T14:00:00",
"end_time": "2021-10-25T01:00:00",
},
}
landsat_rois = {
"Aral Sea": Polygon(
[
[57.667236, 43.834527],
[57.667236, 45.996962],
[61.12793, 45.996962],
[61.12793, 43.834527],
[57.667236, 43.834527],
]
),
"Dubai": Polygon(
[
[54.541626, 24.763044],
[54.541626, 25.427152],
[55.632019, 25.427152],
[55.632019, 24.763044],
[54.541626, 24.763044],
]
),
"Hong Kong International Airport": Polygon(
[
[113.825226, 22.198849],
[113.825226, 22.349758],
[114.085121, 22.349758],
[114.085121, 22.198849],
[113.825226, 22.198849],
]
),
"Las Vegas, NV": Polygon(
[
[-115.554199, 35.804449],
[-115.554199, 36.558188],
[-113.903503, 36.558188],
[-113.903503, 35.804449],
[-115.554199, 35.804449],
]
),
"Pucallpa, Peru": Polygon(
[
[-74.672699, -8.600032],
[-74.672699, -8.254983],
[-74.279938, -8.254983],
[-74.279938, -8.600032],
]
),
}
modis_rois = {
"World": Polygon(
[
[-171.210938, -57.136239],
[-171.210938, 79.997168],
[177.539063, 79.997168],
[177.539063, -57.136239],
[-171.210938, -57.136239],
]
),
"Africa": Polygon(
[
[-18.6983, 38.1446],
[-18.6983, -36.1630],
[52.2293, -36.1630],
[52.2293, 38.1446],
]
),
"USA": Polygon(
[
[-127.177734, 23.725012],
[-127.177734, 50.792047],
[-66.269531, 50.792047],
[-66.269531, 23.725012],
[-127.177734, 23.725012],
]
),
}
ocean_rois = {
"Gulf of Mexico": Polygon(
[
[-101.206055, 15.496032],
[-101.206055, 32.361403],
[-75.673828, 32.361403],
[-75.673828, 15.496032],
[-101.206055, 15.496032],
]
),
"North Atlantic Ocean": Polygon(
[
[-85.341797, 24.046464],
[-85.341797, 45.02695],
[-55.810547, 45.02695],
[-55.810547, 24.046464],
[-85.341797, 24.046464],
]
),
"World": Polygon(
[
[-171.210938, -57.136239],
[-171.210938, 79.997168],
[177.539063, 79.997168],
[177.539063, -57.136239],
[-171.210938, -57.136239],
]
),
}
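# --- Illustrative usage sketch (not part of the original module) ---
# How a caller might consume these ROIs: pull one region and read its
# bounding box via shapely's standard Polygon.bounds property.
if __name__ == "__main__":
    roi = goes_rois["Creek Fire, CA (2020-09-05)"]
    minx, miny, maxx, maxy = roi["region"].bounds
    print("bbox:", (minx, miny, maxx, maxy))
    print("window:", roi["start_time"], "->", roi["end_time"])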
| 25.092857
| 78
| 0.418446
|
4a12772db03419281268bc22f7fee068ef438262
| 425
|
py
|
Python
|
BackEnd(Django)/playlistBackend/categories/urls.py
|
Maxyee/Julhas_Playlist_Angular7_Django
|
e2ed7ec34ab2317f5bf8202bb122b1f42ade771f
|
[
"MIT"
] | 2
|
2019-05-17T17:36:41.000Z
|
2019-07-03T05:51:38.000Z
|
BackEnd(Django)/playlistBackend/categories/urls.py
|
Maxyee/Julhas_Playlist_Angular7_Django
|
e2ed7ec34ab2317f5bf8202bb122b1f42ade771f
|
[
"MIT"
] | 8
|
2019-07-07T10:39:27.000Z
|
2022-02-10T11:05:45.000Z
|
BackEnd(Django)/playlistBackend/categories/urls.py
|
Maxyee/Julhas_Playlist_Angular7_Django
|
e2ed7ec34ab2317f5bf8202bb122b1f42ade771f
|
[
"MIT"
] | 2
|
2019-05-22T08:44:18.000Z
|
2021-04-02T13:08:31.000Z
|
from django.urls import include, path, re_path
from . import views
urlpatterns = [
    re_path(r'^api/v1/categories/(?P<pk>[0-9]+)$',  # URL to get, update, or delete a category
views.GetDeleteUpdateCategories.as_view(),
name='get_delete_update_movie'
),
    path('api/v1/categories/',  # URL to list all categories or create a new one
views.GetPostCategories.as_view(),
name='get_post_movies'
)
]
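# --- Illustrative sketch (not part of the original module) ---
# An equivalent detail route written with Django's built-in int path
# converter instead of the re_path regex above. It is defined here for
# comparison only and is intentionally not added to urlpatterns.
_category_detail_alternative = path(
    'api/v1/categories/<int:pk>',
    views.GetDeleteUpdateCategories.as_view(),
    name='get_delete_update_category_alt',
)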
| 28.333333
| 89
| 0.651765
|
4a12776c9c0df652ff2b6562b6020882282064e2
| 223
|
py
|
Python
|
scripts/update_flair.py
|
erxclau/rare-houseplants-bst-bot
|
562b872359df6faa98c7f0c54ff702b9deac0e6a
|
[
"MIT"
] | 1
|
2020-09-04T16:30:57.000Z
|
2020-09-04T16:30:57.000Z
|
scripts/update_flair.py
|
erxclau/rare-houseplants-bst-bot
|
562b872359df6faa98c7f0c54ff702b9deac0e6a
|
[
"MIT"
] | 7
|
2020-07-26T02:46:21.000Z
|
2020-09-11T19:21:36.000Z
|
scripts/update_flair.py
|
erxclau/rare-houseplants-bst-bot
|
562b872359df6faa98c7f0c54ff702b9deac0e6a
|
[
"MIT"
] | null | null | null |
from utl import utility
config = utility.get_json("config.json")
flair_tiers = config["USER_FLAIRS"]
reddit, subreddit = utility.get_reddit(config)
subreddit.flair.set(
"USERNAME", flair_template_id=flair_tiers[0]
)
| 20.272727
| 48
| 0.766816
|
4a1278ca6049da13c665c281d28ea93a1961c18a
| 255,440
|
py
|
Python
|
swig/python/osgeo/ogr.py
|
VisualAwarenessTech/gdal-2.3.1
|
a3a57f5e651596d453d2b380c337bce5505b13c8
|
[
"Apache-2.0"
] | 62
|
2018-01-25T10:46:13.000Z
|
2022-03-20T23:49:57.000Z
|
swig/python/osgeo/ogr.py
|
VisualAwarenessTech/gdal-2.3.1
|
a3a57f5e651596d453d2b380c337bce5505b13c8
|
[
"Apache-2.0"
] | 45
|
2018-07-10T10:37:19.000Z
|
2022-02-17T14:41:06.000Z
|
swig/python/osgeo/ogr.py
|
VisualAwarenessTech/gdal-2.3.1
|
a3a57f5e651596d453d2b380c337bce5505b13c8
|
[
"Apache-2.0"
] | 18
|
2018-02-09T08:57:30.000Z
|
2022-03-31T16:57:05.000Z
|
# This file was automatically generated by SWIG (http://www.swig.org).
# Version 3.0.8
#
# Do not make changes to this file unless you know what you are doing--modify
# the SWIG interface file instead.
from sys import version_info
if version_info >= (2, 6, 0):
def swig_import_helper():
from os.path import dirname
import imp
fp = None
try:
fp, pathname, description = imp.find_module('_ogr', [dirname(__file__)])
except ImportError:
import _ogr
return _ogr
if fp is not None:
try:
_mod = imp.load_module('_ogr', fp, pathname, description)
finally:
fp.close()
return _mod
_ogr = swig_import_helper()
del swig_import_helper
else:
import _ogr
del version_info
try:
_swig_property = property
except NameError:
pass # Python < 2.2 doesn't have 'property'.
def _swig_setattr_nondynamic(self, class_type, name, value, static=1):
if (name == "thisown"):
return self.this.own(value)
if (name == "this"):
if type(value).__name__ == 'SwigPyObject':
self.__dict__[name] = value
return
method = class_type.__swig_setmethods__.get(name, None)
if method:
return method(self, value)
if (not static):
if _newclass:
object.__setattr__(self, name, value)
else:
self.__dict__[name] = value
else:
raise AttributeError("You cannot add attributes to %s" % self)
def _swig_setattr(self, class_type, name, value):
return _swig_setattr_nondynamic(self, class_type, name, value, 0)
def _swig_getattr_nondynamic(self, class_type, name, static=1):
if (name == "thisown"):
return self.this.own()
method = class_type.__swig_getmethods__.get(name, None)
if method:
return method(self)
if (not static):
return object.__getattr__(self, name)
else:
raise AttributeError(name)
def _swig_getattr(self, class_type, name):
return _swig_getattr_nondynamic(self, class_type, name, 0)
def _swig_repr(self):
try:
strthis = "proxy of " + self.this.__repr__()
except Exception:
strthis = ""
return "<%s.%s; %s >" % (self.__class__.__module__, self.__class__.__name__, strthis,)
try:
_object = object
_newclass = 1
except AttributeError:
class _object:
pass
_newclass = 0
_ogr.wkb25DBit_swigconstant(_ogr)
wkb25DBit = _ogr.wkb25DBit
_ogr.wkb25Bit_swigconstant(_ogr)
wkb25Bit = _ogr.wkb25Bit
_ogr.wkbUnknown_swigconstant(_ogr)
wkbUnknown = _ogr.wkbUnknown
_ogr.wkbPoint_swigconstant(_ogr)
wkbPoint = _ogr.wkbPoint
_ogr.wkbLineString_swigconstant(_ogr)
wkbLineString = _ogr.wkbLineString
_ogr.wkbPolygon_swigconstant(_ogr)
wkbPolygon = _ogr.wkbPolygon
_ogr.wkbMultiPoint_swigconstant(_ogr)
wkbMultiPoint = _ogr.wkbMultiPoint
_ogr.wkbMultiLineString_swigconstant(_ogr)
wkbMultiLineString = _ogr.wkbMultiLineString
_ogr.wkbMultiPolygon_swigconstant(_ogr)
wkbMultiPolygon = _ogr.wkbMultiPolygon
_ogr.wkbGeometryCollection_swigconstant(_ogr)
wkbGeometryCollection = _ogr.wkbGeometryCollection
_ogr.wkbCircularString_swigconstant(_ogr)
wkbCircularString = _ogr.wkbCircularString
_ogr.wkbCompoundCurve_swigconstant(_ogr)
wkbCompoundCurve = _ogr.wkbCompoundCurve
_ogr.wkbCurvePolygon_swigconstant(_ogr)
wkbCurvePolygon = _ogr.wkbCurvePolygon
_ogr.wkbMultiCurve_swigconstant(_ogr)
wkbMultiCurve = _ogr.wkbMultiCurve
_ogr.wkbMultiSurface_swigconstant(_ogr)
wkbMultiSurface = _ogr.wkbMultiSurface
_ogr.wkbCurve_swigconstant(_ogr)
wkbCurve = _ogr.wkbCurve
_ogr.wkbSurface_swigconstant(_ogr)
wkbSurface = _ogr.wkbSurface
_ogr.wkbPolyhedralSurface_swigconstant(_ogr)
wkbPolyhedralSurface = _ogr.wkbPolyhedralSurface
_ogr.wkbTIN_swigconstant(_ogr)
wkbTIN = _ogr.wkbTIN
_ogr.wkbTriangle_swigconstant(_ogr)
wkbTriangle = _ogr.wkbTriangle
_ogr.wkbNone_swigconstant(_ogr)
wkbNone = _ogr.wkbNone
_ogr.wkbLinearRing_swigconstant(_ogr)
wkbLinearRing = _ogr.wkbLinearRing
_ogr.wkbCircularStringZ_swigconstant(_ogr)
wkbCircularStringZ = _ogr.wkbCircularStringZ
_ogr.wkbCompoundCurveZ_swigconstant(_ogr)
wkbCompoundCurveZ = _ogr.wkbCompoundCurveZ
_ogr.wkbCurvePolygonZ_swigconstant(_ogr)
wkbCurvePolygonZ = _ogr.wkbCurvePolygonZ
_ogr.wkbMultiCurveZ_swigconstant(_ogr)
wkbMultiCurveZ = _ogr.wkbMultiCurveZ
_ogr.wkbMultiSurfaceZ_swigconstant(_ogr)
wkbMultiSurfaceZ = _ogr.wkbMultiSurfaceZ
_ogr.wkbCurveZ_swigconstant(_ogr)
wkbCurveZ = _ogr.wkbCurveZ
_ogr.wkbSurfaceZ_swigconstant(_ogr)
wkbSurfaceZ = _ogr.wkbSurfaceZ
_ogr.wkbPolyhedralSurfaceZ_swigconstant(_ogr)
wkbPolyhedralSurfaceZ = _ogr.wkbPolyhedralSurfaceZ
_ogr.wkbTINZ_swigconstant(_ogr)
wkbTINZ = _ogr.wkbTINZ
_ogr.wkbTriangleZ_swigconstant(_ogr)
wkbTriangleZ = _ogr.wkbTriangleZ
_ogr.wkbPointM_swigconstant(_ogr)
wkbPointM = _ogr.wkbPointM
_ogr.wkbLineStringM_swigconstant(_ogr)
wkbLineStringM = _ogr.wkbLineStringM
_ogr.wkbPolygonM_swigconstant(_ogr)
wkbPolygonM = _ogr.wkbPolygonM
_ogr.wkbMultiPointM_swigconstant(_ogr)
wkbMultiPointM = _ogr.wkbMultiPointM
_ogr.wkbMultiLineStringM_swigconstant(_ogr)
wkbMultiLineStringM = _ogr.wkbMultiLineStringM
_ogr.wkbMultiPolygonM_swigconstant(_ogr)
wkbMultiPolygonM = _ogr.wkbMultiPolygonM
_ogr.wkbGeometryCollectionM_swigconstant(_ogr)
wkbGeometryCollectionM = _ogr.wkbGeometryCollectionM
_ogr.wkbCircularStringM_swigconstant(_ogr)
wkbCircularStringM = _ogr.wkbCircularStringM
_ogr.wkbCompoundCurveM_swigconstant(_ogr)
wkbCompoundCurveM = _ogr.wkbCompoundCurveM
_ogr.wkbCurvePolygonM_swigconstant(_ogr)
wkbCurvePolygonM = _ogr.wkbCurvePolygonM
_ogr.wkbMultiCurveM_swigconstant(_ogr)
wkbMultiCurveM = _ogr.wkbMultiCurveM
_ogr.wkbMultiSurfaceM_swigconstant(_ogr)
wkbMultiSurfaceM = _ogr.wkbMultiSurfaceM
_ogr.wkbCurveM_swigconstant(_ogr)
wkbCurveM = _ogr.wkbCurveM
_ogr.wkbSurfaceM_swigconstant(_ogr)
wkbSurfaceM = _ogr.wkbSurfaceM
_ogr.wkbPolyhedralSurfaceM_swigconstant(_ogr)
wkbPolyhedralSurfaceM = _ogr.wkbPolyhedralSurfaceM
_ogr.wkbTINM_swigconstant(_ogr)
wkbTINM = _ogr.wkbTINM
_ogr.wkbTriangleM_swigconstant(_ogr)
wkbTriangleM = _ogr.wkbTriangleM
_ogr.wkbPointZM_swigconstant(_ogr)
wkbPointZM = _ogr.wkbPointZM
_ogr.wkbLineStringZM_swigconstant(_ogr)
wkbLineStringZM = _ogr.wkbLineStringZM
_ogr.wkbPolygonZM_swigconstant(_ogr)
wkbPolygonZM = _ogr.wkbPolygonZM
_ogr.wkbMultiPointZM_swigconstant(_ogr)
wkbMultiPointZM = _ogr.wkbMultiPointZM
_ogr.wkbMultiLineStringZM_swigconstant(_ogr)
wkbMultiLineStringZM = _ogr.wkbMultiLineStringZM
_ogr.wkbMultiPolygonZM_swigconstant(_ogr)
wkbMultiPolygonZM = _ogr.wkbMultiPolygonZM
_ogr.wkbGeometryCollectionZM_swigconstant(_ogr)
wkbGeometryCollectionZM = _ogr.wkbGeometryCollectionZM
_ogr.wkbCircularStringZM_swigconstant(_ogr)
wkbCircularStringZM = _ogr.wkbCircularStringZM
_ogr.wkbCompoundCurveZM_swigconstant(_ogr)
wkbCompoundCurveZM = _ogr.wkbCompoundCurveZM
_ogr.wkbCurvePolygonZM_swigconstant(_ogr)
wkbCurvePolygonZM = _ogr.wkbCurvePolygonZM
_ogr.wkbMultiCurveZM_swigconstant(_ogr)
wkbMultiCurveZM = _ogr.wkbMultiCurveZM
_ogr.wkbMultiSurfaceZM_swigconstant(_ogr)
wkbMultiSurfaceZM = _ogr.wkbMultiSurfaceZM
_ogr.wkbCurveZM_swigconstant(_ogr)
wkbCurveZM = _ogr.wkbCurveZM
_ogr.wkbSurfaceZM_swigconstant(_ogr)
wkbSurfaceZM = _ogr.wkbSurfaceZM
_ogr.wkbPolyhedralSurfaceZM_swigconstant(_ogr)
wkbPolyhedralSurfaceZM = _ogr.wkbPolyhedralSurfaceZM
_ogr.wkbTINZM_swigconstant(_ogr)
wkbTINZM = _ogr.wkbTINZM
_ogr.wkbTriangleZM_swigconstant(_ogr)
wkbTriangleZM = _ogr.wkbTriangleZM
_ogr.wkbPoint25D_swigconstant(_ogr)
wkbPoint25D = _ogr.wkbPoint25D
_ogr.wkbLineString25D_swigconstant(_ogr)
wkbLineString25D = _ogr.wkbLineString25D
_ogr.wkbPolygon25D_swigconstant(_ogr)
wkbPolygon25D = _ogr.wkbPolygon25D
_ogr.wkbMultiPoint25D_swigconstant(_ogr)
wkbMultiPoint25D = _ogr.wkbMultiPoint25D
_ogr.wkbMultiLineString25D_swigconstant(_ogr)
wkbMultiLineString25D = _ogr.wkbMultiLineString25D
_ogr.wkbMultiPolygon25D_swigconstant(_ogr)
wkbMultiPolygon25D = _ogr.wkbMultiPolygon25D
_ogr.wkbGeometryCollection25D_swigconstant(_ogr)
wkbGeometryCollection25D = _ogr.wkbGeometryCollection25D
_ogr.OFTInteger_swigconstant(_ogr)
OFTInteger = _ogr.OFTInteger
_ogr.OFTIntegerList_swigconstant(_ogr)
OFTIntegerList = _ogr.OFTIntegerList
_ogr.OFTReal_swigconstant(_ogr)
OFTReal = _ogr.OFTReal
_ogr.OFTRealList_swigconstant(_ogr)
OFTRealList = _ogr.OFTRealList
_ogr.OFTString_swigconstant(_ogr)
OFTString = _ogr.OFTString
_ogr.OFTStringList_swigconstant(_ogr)
OFTStringList = _ogr.OFTStringList
_ogr.OFTWideString_swigconstant(_ogr)
OFTWideString = _ogr.OFTWideString
_ogr.OFTWideStringList_swigconstant(_ogr)
OFTWideStringList = _ogr.OFTWideStringList
_ogr.OFTBinary_swigconstant(_ogr)
OFTBinary = _ogr.OFTBinary
_ogr.OFTDate_swigconstant(_ogr)
OFTDate = _ogr.OFTDate
_ogr.OFTTime_swigconstant(_ogr)
OFTTime = _ogr.OFTTime
_ogr.OFTDateTime_swigconstant(_ogr)
OFTDateTime = _ogr.OFTDateTime
_ogr.OFTInteger64_swigconstant(_ogr)
OFTInteger64 = _ogr.OFTInteger64
_ogr.OFTInteger64List_swigconstant(_ogr)
OFTInteger64List = _ogr.OFTInteger64List
_ogr.OFSTNone_swigconstant(_ogr)
OFSTNone = _ogr.OFSTNone
_ogr.OFSTBoolean_swigconstant(_ogr)
OFSTBoolean = _ogr.OFSTBoolean
_ogr.OFSTInt16_swigconstant(_ogr)
OFSTInt16 = _ogr.OFSTInt16
_ogr.OFSTFloat32_swigconstant(_ogr)
OFSTFloat32 = _ogr.OFSTFloat32
_ogr.OJUndefined_swigconstant(_ogr)
OJUndefined = _ogr.OJUndefined
_ogr.OJLeft_swigconstant(_ogr)
OJLeft = _ogr.OJLeft
_ogr.OJRight_swigconstant(_ogr)
OJRight = _ogr.OJRight
_ogr.wkbXDR_swigconstant(_ogr)
wkbXDR = _ogr.wkbXDR
_ogr.wkbNDR_swigconstant(_ogr)
wkbNDR = _ogr.wkbNDR
_ogr.NullFID_swigconstant(_ogr)
NullFID = _ogr.NullFID
_ogr.ALTER_NAME_FLAG_swigconstant(_ogr)
ALTER_NAME_FLAG = _ogr.ALTER_NAME_FLAG
_ogr.ALTER_TYPE_FLAG_swigconstant(_ogr)
ALTER_TYPE_FLAG = _ogr.ALTER_TYPE_FLAG
_ogr.ALTER_WIDTH_PRECISION_FLAG_swigconstant(_ogr)
ALTER_WIDTH_PRECISION_FLAG = _ogr.ALTER_WIDTH_PRECISION_FLAG
_ogr.ALTER_NULLABLE_FLAG_swigconstant(_ogr)
ALTER_NULLABLE_FLAG = _ogr.ALTER_NULLABLE_FLAG
_ogr.ALTER_DEFAULT_FLAG_swigconstant(_ogr)
ALTER_DEFAULT_FLAG = _ogr.ALTER_DEFAULT_FLAG
_ogr.ALTER_ALL_FLAG_swigconstant(_ogr)
ALTER_ALL_FLAG = _ogr.ALTER_ALL_FLAG
_ogr.F_VAL_NULL_swigconstant(_ogr)
F_VAL_NULL = _ogr.F_VAL_NULL
_ogr.F_VAL_GEOM_TYPE_swigconstant(_ogr)
F_VAL_GEOM_TYPE = _ogr.F_VAL_GEOM_TYPE
_ogr.F_VAL_WIDTH_swigconstant(_ogr)
F_VAL_WIDTH = _ogr.F_VAL_WIDTH
_ogr.F_VAL_ALLOW_NULL_WHEN_DEFAULT_swigconstant(_ogr)
F_VAL_ALLOW_NULL_WHEN_DEFAULT = _ogr.F_VAL_ALLOW_NULL_WHEN_DEFAULT
_ogr.F_VAL_ALL_swigconstant(_ogr)
F_VAL_ALL = _ogr.F_VAL_ALL
_ogr.OLCRandomRead_swigconstant(_ogr)
OLCRandomRead = _ogr.OLCRandomRead
_ogr.OLCSequentialWrite_swigconstant(_ogr)
OLCSequentialWrite = _ogr.OLCSequentialWrite
_ogr.OLCRandomWrite_swigconstant(_ogr)
OLCRandomWrite = _ogr.OLCRandomWrite
_ogr.OLCFastSpatialFilter_swigconstant(_ogr)
OLCFastSpatialFilter = _ogr.OLCFastSpatialFilter
_ogr.OLCFastFeatureCount_swigconstant(_ogr)
OLCFastFeatureCount = _ogr.OLCFastFeatureCount
_ogr.OLCFastGetExtent_swigconstant(_ogr)
OLCFastGetExtent = _ogr.OLCFastGetExtent
_ogr.OLCCreateField_swigconstant(_ogr)
OLCCreateField = _ogr.OLCCreateField
_ogr.OLCDeleteField_swigconstant(_ogr)
OLCDeleteField = _ogr.OLCDeleteField
_ogr.OLCReorderFields_swigconstant(_ogr)
OLCReorderFields = _ogr.OLCReorderFields
_ogr.OLCAlterFieldDefn_swigconstant(_ogr)
OLCAlterFieldDefn = _ogr.OLCAlterFieldDefn
_ogr.OLCTransactions_swigconstant(_ogr)
OLCTransactions = _ogr.OLCTransactions
_ogr.OLCDeleteFeature_swigconstant(_ogr)
OLCDeleteFeature = _ogr.OLCDeleteFeature
_ogr.OLCFastSetNextByIndex_swigconstant(_ogr)
OLCFastSetNextByIndex = _ogr.OLCFastSetNextByIndex
_ogr.OLCStringsAsUTF8_swigconstant(_ogr)
OLCStringsAsUTF8 = _ogr.OLCStringsAsUTF8
_ogr.OLCIgnoreFields_swigconstant(_ogr)
OLCIgnoreFields = _ogr.OLCIgnoreFields
_ogr.OLCCreateGeomField_swigconstant(_ogr)
OLCCreateGeomField = _ogr.OLCCreateGeomField
_ogr.OLCCurveGeometries_swigconstant(_ogr)
OLCCurveGeometries = _ogr.OLCCurveGeometries
_ogr.OLCMeasuredGeometries_swigconstant(_ogr)
OLCMeasuredGeometries = _ogr.OLCMeasuredGeometries
_ogr.ODsCCreateLayer_swigconstant(_ogr)
ODsCCreateLayer = _ogr.ODsCCreateLayer
_ogr.ODsCDeleteLayer_swigconstant(_ogr)
ODsCDeleteLayer = _ogr.ODsCDeleteLayer
_ogr.ODsCCreateGeomFieldAfterCreateLayer_swigconstant(_ogr)
ODsCCreateGeomFieldAfterCreateLayer = _ogr.ODsCCreateGeomFieldAfterCreateLayer
_ogr.ODsCCurveGeometries_swigconstant(_ogr)
ODsCCurveGeometries = _ogr.ODsCCurveGeometries
_ogr.ODsCTransactions_swigconstant(_ogr)
ODsCTransactions = _ogr.ODsCTransactions
_ogr.ODsCEmulatedTransactions_swigconstant(_ogr)
ODsCEmulatedTransactions = _ogr.ODsCEmulatedTransactions
_ogr.ODsCMeasuredGeometries_swigconstant(_ogr)
ODsCMeasuredGeometries = _ogr.ODsCMeasuredGeometries
_ogr.ODsCRandomLayerRead_swigconstant(_ogr)
ODsCRandomLayerRead = _ogr.ODsCRandomLayerRead
_ogr.ODsCRandomLayerWrite_swigconstant(_ogr)
ODsCRandomLayerWrite = _ogr.ODsCRandomLayerWrite
_ogr.ODrCCreateDataSource_swigconstant(_ogr)
ODrCCreateDataSource = _ogr.ODrCCreateDataSource
_ogr.ODrCDeleteDataSource_swigconstant(_ogr)
ODrCDeleteDataSource = _ogr.ODrCDeleteDataSource
_ogr.OLMD_FID64_swigconstant(_ogr)
OLMD_FID64 = _ogr.OLMD_FID64
_ogr.OGRERR_NONE_swigconstant(_ogr)
OGRERR_NONE = _ogr.OGRERR_NONE
_ogr.OGRERR_NOT_ENOUGH_DATA_swigconstant(_ogr)
OGRERR_NOT_ENOUGH_DATA = _ogr.OGRERR_NOT_ENOUGH_DATA
_ogr.OGRERR_NOT_ENOUGH_MEMORY_swigconstant(_ogr)
OGRERR_NOT_ENOUGH_MEMORY = _ogr.OGRERR_NOT_ENOUGH_MEMORY
_ogr.OGRERR_UNSUPPORTED_GEOMETRY_TYPE_swigconstant(_ogr)
OGRERR_UNSUPPORTED_GEOMETRY_TYPE = _ogr.OGRERR_UNSUPPORTED_GEOMETRY_TYPE
_ogr.OGRERR_UNSUPPORTED_OPERATION_swigconstant(_ogr)
OGRERR_UNSUPPORTED_OPERATION = _ogr.OGRERR_UNSUPPORTED_OPERATION
_ogr.OGRERR_CORRUPT_DATA_swigconstant(_ogr)
OGRERR_CORRUPT_DATA = _ogr.OGRERR_CORRUPT_DATA
_ogr.OGRERR_FAILURE_swigconstant(_ogr)
OGRERR_FAILURE = _ogr.OGRERR_FAILURE
_ogr.OGRERR_UNSUPPORTED_SRS_swigconstant(_ogr)
OGRERR_UNSUPPORTED_SRS = _ogr.OGRERR_UNSUPPORTED_SRS
_ogr.OGRERR_INVALID_HANDLE_swigconstant(_ogr)
OGRERR_INVALID_HANDLE = _ogr.OGRERR_INVALID_HANDLE
_ogr.OGRERR_NON_EXISTING_FEATURE_swigconstant(_ogr)
OGRERR_NON_EXISTING_FEATURE = _ogr.OGRERR_NON_EXISTING_FEATURE
def GetUseExceptions(*args):
"""GetUseExceptions() -> int"""
return _ogr.GetUseExceptions(*args)
def UseExceptions(*args):
"""UseExceptions()"""
return _ogr.UseExceptions(*args)
def DontUseExceptions(*args):
"""DontUseExceptions()"""
return _ogr.DontUseExceptions(*args)
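
# Illustrative sketch (not part of the generated bindings): toggling between
# error-code and exception based error handling. Open() is defined later in
# this module; the path used here is hypothetical.
def _example_use_exceptions(path="missing.shp"):
    """Open a datasource with exceptions enabled, then restore the default."""
    UseExceptions()            # failed operations now raise RuntimeError
    try:
        ds = Open(path)
    except RuntimeError as e:
        ds = None
        print("open failed:", e)
    finally:
        DontUseExceptions()    # back to returning None on failure
    return ds
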
import osr
class MajorObject(_object):
"""Proxy of C++ GDALMajorObjectShadow class."""
__swig_setmethods__ = {}
__setattr__ = lambda self, name, value: _swig_setattr(self, MajorObject, name, value)
__swig_getmethods__ = {}
__getattr__ = lambda self, name: _swig_getattr(self, MajorObject, name)
def __init__(self, *args, **kwargs):
raise AttributeError("No constructor defined")
__repr__ = _swig_repr
def GetDescription(self, *args):
"""GetDescription(MajorObject self) -> char const *"""
return _ogr.MajorObject_GetDescription(self, *args)
def SetDescription(self, *args):
"""SetDescription(MajorObject self, char const * pszNewDesc)"""
return _ogr.MajorObject_SetDescription(self, *args)
def GetMetadataDomainList(self, *args):
"""GetMetadataDomainList(MajorObject self) -> char **"""
return _ogr.MajorObject_GetMetadataDomainList(self, *args)
def GetMetadata_Dict(self, *args):
"""GetMetadata_Dict(MajorObject self, char const * pszDomain) -> char **"""
return _ogr.MajorObject_GetMetadata_Dict(self, *args)
def GetMetadata_List(self, *args):
"""GetMetadata_List(MajorObject self, char const * pszDomain) -> char **"""
return _ogr.MajorObject_GetMetadata_List(self, *args)
def SetMetadata(self, *args):
"""
SetMetadata(MajorObject self, char ** papszMetadata, char const * pszDomain) -> CPLErr
SetMetadata(MajorObject self, char * pszMetadataString, char const * pszDomain) -> CPLErr
"""
return _ogr.MajorObject_SetMetadata(self, *args)
def GetMetadataItem(self, *args):
"""GetMetadataItem(MajorObject self, char const * pszName, char const * pszDomain) -> char const *"""
return _ogr.MajorObject_GetMetadataItem(self, *args)
def SetMetadataItem(self, *args):
"""SetMetadataItem(MajorObject self, char const * pszName, char const * pszValue, char const * pszDomain) -> CPLErr"""
return _ogr.MajorObject_SetMetadataItem(self, *args)
def GetMetadata( self, domain = '' ):
if domain[:4] == 'xml:':
return self.GetMetadata_List( domain )
return self.GetMetadata_Dict( domain )
MajorObject_swigregister = _ogr.MajorObject_swigregister
MajorObject_swigregister(MajorObject)
class StyleTable(_object):
"""Proxy of C++ OGRStyleTableShadow class."""
__swig_setmethods__ = {}
__setattr__ = lambda self, name, value: _swig_setattr(self, StyleTable, name, value)
__swig_getmethods__ = {}
__getattr__ = lambda self, name: _swig_getattr(self, StyleTable, name)
__repr__ = _swig_repr
def __init__(self, *args):
"""__init__(OGRStyleTableShadow self) -> StyleTable"""
this = _ogr.new_StyleTable(*args)
try:
self.this.append(this)
except Exception:
self.this = this
__swig_destroy__ = _ogr.delete_StyleTable
__del__ = lambda self: None
def AddStyle(self, *args):
"""AddStyle(StyleTable self, char const * pszName, char const * pszStyleString) -> int"""
return _ogr.StyleTable_AddStyle(self, *args)
def LoadStyleTable(self, *args):
"""LoadStyleTable(StyleTable self, char const * utf8_path) -> int"""
return _ogr.StyleTable_LoadStyleTable(self, *args)
def SaveStyleTable(self, *args):
"""SaveStyleTable(StyleTable self, char const * utf8_path) -> int"""
return _ogr.StyleTable_SaveStyleTable(self, *args)
def Find(self, *args):
"""Find(StyleTable self, char const * pszName) -> char const *"""
return _ogr.StyleTable_Find(self, *args)
def ResetStyleStringReading(self, *args):
"""ResetStyleStringReading(StyleTable self)"""
return _ogr.StyleTable_ResetStyleStringReading(self, *args)
def GetNextStyle(self, *args):
"""GetNextStyle(StyleTable self) -> char const *"""
return _ogr.StyleTable_GetNextStyle(self, *args)
def GetLastStyleName(self, *args):
"""GetLastStyleName(StyleTable self) -> char const *"""
return _ogr.StyleTable_GetLastStyleName(self, *args)
StyleTable_swigregister = _ogr.StyleTable_swigregister
StyleTable_swigregister(StyleTable)
class Driver(MajorObject):
"""Proxy of C++ OGRDriverShadow class."""
__swig_setmethods__ = {}
for _s in [MajorObject]:
__swig_setmethods__.update(getattr(_s, '__swig_setmethods__', {}))
__setattr__ = lambda self, name, value: _swig_setattr(self, Driver, name, value)
__swig_getmethods__ = {}
for _s in [MajorObject]:
__swig_getmethods__.update(getattr(_s, '__swig_getmethods__', {}))
__getattr__ = lambda self, name: _swig_getattr(self, Driver, name)
def __init__(self, *args, **kwargs):
raise AttributeError("No constructor defined")
__repr__ = _swig_repr
__swig_getmethods__["name"] = _ogr.Driver_name_get
if _newclass:
name = _swig_property(_ogr.Driver_name_get)
def CreateDataSource(self, *args, **kwargs):
"""CreateDataSource(Driver self, char const * utf8_path, char ** options=None) -> DataSource"""
return _ogr.Driver_CreateDataSource(self, *args, **kwargs)
def CopyDataSource(self, *args, **kwargs):
"""CopyDataSource(Driver self, DataSource copy_ds, char const * utf8_path, char ** options=None) -> DataSource"""
return _ogr.Driver_CopyDataSource(self, *args, **kwargs)
def Open(self, *args, **kwargs):
"""Open(Driver self, char const * utf8_path, int update=0) -> DataSource"""
return _ogr.Driver_Open(self, *args, **kwargs)
def DeleteDataSource(self, *args):
"""DeleteDataSource(Driver self, char const * utf8_path) -> int"""
return _ogr.Driver_DeleteDataSource(self, *args)
def TestCapability(self, *args):
"""TestCapability(Driver self, char const * cap) -> bool"""
return _ogr.Driver_TestCapability(self, *args)
def GetName(self, *args):
"""GetName(Driver self) -> char const *"""
return _ogr.Driver_GetName(self, *args)
def Register(self, *args):
"""Register(Driver self)"""
return _ogr.Driver_Register(self, *args)
def Deregister(self, *args):
"""Deregister(Driver self)"""
return _ogr.Driver_Deregister(self, *args)
Driver_swigregister = _ogr.Driver_swigregister
Driver_swigregister(Driver)
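
# Illustrative sketch (not part of the generated bindings): looking up a
# driver by name and creating a datasource with it. GetDriverByName() is
# defined later in this module; the output path is hypothetical.
def _example_create_datasource(path="/tmp/example_ds"):
    """Create an empty datasource with the ESRI Shapefile driver."""
    drv = GetDriverByName("ESRI Shapefile")
    if drv is None or not drv.TestCapability(ODrCCreateDataSource):
        return None
    return drv.CreateDataSource(path)
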
class DataSource(MajorObject):
"""Proxy of C++ OGRDataSourceShadow class."""
__swig_setmethods__ = {}
for _s in [MajorObject]:
__swig_setmethods__.update(getattr(_s, '__swig_setmethods__', {}))
__setattr__ = lambda self, name, value: _swig_setattr(self, DataSource, name, value)
__swig_getmethods__ = {}
for _s in [MajorObject]:
__swig_getmethods__.update(getattr(_s, '__swig_getmethods__', {}))
__getattr__ = lambda self, name: _swig_getattr(self, DataSource, name)
def __init__(self, *args, **kwargs):
raise AttributeError("No constructor defined")
__repr__ = _swig_repr
__swig_getmethods__["name"] = _ogr.DataSource_name_get
if _newclass:
name = _swig_property(_ogr.DataSource_name_get)
__swig_destroy__ = _ogr.delete_DataSource
__del__ = lambda self: None
def GetRefCount(self, *args):
"""
GetRefCount(DataSource self) -> int
int
OGR_DS_GetRefCount(OGRDataSourceH hDataSource)
"""
return _ogr.DataSource_GetRefCount(self, *args)
def GetSummaryRefCount(self, *args):
"""
GetSummaryRefCount(DataSource self) -> int
int
OGR_DS_GetSummaryRefCount(OGRDataSourceH hDataSource)
"""
return _ogr.DataSource_GetSummaryRefCount(self, *args)
def GetLayerCount(self, *args):
"""
GetLayerCount(DataSource self) -> int
int
OGR_DS_GetLayerCount(OGRDataSourceH hDS)
Get the number of layers in this data source.
Deprecated Use GDALDatasetGetLayerCount() in GDAL 2.0
Parameters:
-----------
hDS: handle to the data source from which to get the number of
layers.
layer count.
"""
return _ogr.DataSource_GetLayerCount(self, *args)
def GetDriver(self, *args):
"""
GetDriver(DataSource self) -> Driver
OGRSFDriverH
OGR_DS_GetDriver(OGRDataSourceH hDS)
Returns the driver that the dataset was opened with.
NOTE: Starting with GDAL 2.0, it is NOT safe to cast the returned
handle to OGRSFDriver*. If a C++ object is needed, the handle should
be cast to GDALDriver*.
Deprecated Use GDALGetDatasetDriver() in GDAL 2.0
Parameters:
-----------
hDS: handle to the datasource
NULL if driver info is not available, or pointer to a driver owned by
the OGRSFDriverManager.
"""
return _ogr.DataSource_GetDriver(self, *args)
def GetName(self, *args):
"""
GetName(DataSource self) -> char const *
const char*
OGR_DS_GetName(OGRDataSourceH hDS)
Returns the name of the data source.
This string should be sufficient to open the data source if passed to
the same OGRSFDriver that this data source was opened with, but it
need not be exactly the same string that was used to open the data
source. Normally this is a filename.
Deprecated Use GDALGetDescription() in GDAL 2.0
Parameters:
-----------
hDS: handle to the data source to get the name from.
pointer to an internal name string which should not be modified or
freed by the caller.
"""
return _ogr.DataSource_GetName(self, *args)
def DeleteLayer(self, *args):
"""
DeleteLayer(DataSource self, int index) -> OGRErr
OGRErr
OGR_DS_DeleteLayer(OGRDataSourceH hDS, int iLayer)
Delete the indicated layer from the datasource.
If this method is supported the ODsCDeleteLayer capability will test
TRUE on the OGRDataSource.
Deprecated Use GDALDatasetDeleteLayer() in GDAL 2.0
Parameters:
-----------
hDS: handle to the datasource
iLayer: the index of the layer to delete.
OGRERR_NONE on success, or OGRERR_UNSUPPORTED_OPERATION if deleting
layers is not supported for this datasource.
"""
return _ogr.DataSource_DeleteLayer(self, *args)
def SyncToDisk(self, *args):
"""
SyncToDisk(DataSource self) -> OGRErr
OGRErr
OGR_DS_SyncToDisk(OGRDataSourceH hDS)
Flush pending changes to disk.
See GDALDataset::FlushCache()
"""
return _ogr.DataSource_SyncToDisk(self, *args)
def FlushCache(self, *args):
"""FlushCache(DataSource self)"""
return _ogr.DataSource_FlushCache(self, *args)
def CreateLayer(self, *args, **kwargs):
"""
        CreateLayer(DataSource self, char const * name, SpatialReference srs=None, OGRwkbGeometryType geom_type=wkbUnknown, char ** options=None) -> Layer
OGRLayerH
OGR_DS_CreateLayer(OGRDataSourceH hDS, const char *pszName,
OGRSpatialReferenceH hSpatialRef, OGRwkbGeometryType eType, char
**papszOptions)
This function attempts to create a new layer on the data source with
the indicated name, coordinate system, geometry type.
The papszOptions argument can be used to control driver specific
creation options. These options are normally documented in the format
specific documentation.
Deprecated Use GDALDatasetCreateLayer() in GDAL 2.0
Parameters:
-----------
hDS: The dataset handle.
pszName: the name for the new layer. This should ideally not match
any existing layer on the datasource.
hSpatialRef: handle to the coordinate system to use for the new
layer, or NULL if no coordinate system is available. The driver might
only increase the reference counter of the object to take ownership,
and not make a full copy, so do not use OSRDestroySpatialReference(),
but OSRRelease() instead when you are done with the object.
eType: the geometry type for the layer. Use wkbUnknown if there are
        no constraints on the types of geometry to be written.
papszOptions: a StringList of name=value options. Options are driver
specific, and driver information can be found at the following
url:http://www.gdal.org/ogr_formats.html
NULL is returned on failure, or a new OGRLayer handle on success.
Example:
"""
return _ogr.DataSource_CreateLayer(self, *args, **kwargs)
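    # Illustrative sketch (not generated by SWIG): creating a WGS84 point
    # layer. wkbPoint is a constant defined elsewhere in this module; the
    # layer name is hypothetical.
    #
    #   srs = osr.SpatialReference()
    #   srs.ImportFromEPSG(4326)
    #   layer = ds.CreateLayer("points", srs=srs, geom_type=wkbPoint)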
def CopyLayer(self, *args, **kwargs):
"""
CopyLayer(DataSource self, Layer src_layer, char const * new_name, char ** options=None) -> Layer
OGRLayerH
OGR_DS_CopyLayer(OGRDataSourceH hDS, OGRLayerH hSrcLayer, const char
*pszNewName, char **papszOptions)
Duplicate an existing layer.
        This function creates a new layer, duplicates the field definitions of
        the source layer and then duplicates each feature of the source layer.
The papszOptions argument can be used to control driver specific
creation options. These options are normally documented in the format
specific documentation. The source layer may come from another
dataset.
Deprecated Use GDALDatasetCopyLayer() in GDAL 2.0
Parameters:
-----------
hDS: handle to the data source where to create the new layer
hSrcLayer: handle to the source layer.
pszNewName: the name of the layer to create.
papszOptions: a StringList of name=value options. Options are driver
specific.
        a handle to the layer, or NULL if an error occurs.
"""
return _ogr.DataSource_CopyLayer(self, *args, **kwargs)
def GetLayerByIndex(self, *args):
"""GetLayerByIndex(DataSource self, int index=0) -> Layer"""
return _ogr.DataSource_GetLayerByIndex(self, *args)
def GetLayerByName(self, *args):
"""
GetLayerByName(DataSource self, char const * layer_name) -> Layer
OGRLayerH
OGR_DS_GetLayerByName(OGRDataSourceH hDS, const char *pszName)
Fetch a layer by name.
The returned layer remains owned by the OGRDataSource and should not
be deleted by the application.
Deprecated Use GDALDatasetGetLayerByName() in GDAL 2.0
Parameters:
-----------
hDS: handle to the data source from which to get the layer.
pszLayerName: Layer the layer name of the layer to fetch.
        a handle to the layer, or NULL if the layer is not found or an error
occurs.
"""
return _ogr.DataSource_GetLayerByName(self, *args)
def TestCapability(self, *args):
"""
TestCapability(DataSource self, char const * cap) -> bool
int
OGR_DS_TestCapability(OGRDataSourceH hDS, const char *pszCap)
Test if capability is available.
One of the following data source capability names can be passed into
this function, and a TRUE or FALSE value will be returned indicating
whether or not the capability is available for this object.
ODsCCreateLayer: True if this datasource can create new layers.
ODsCDeleteLayer: True if this datasource can delete existing layers.
ODsCCreateGeomFieldAfterCreateLayer: True if the layers of this
datasource support CreateGeomField() just after layer creation.
ODsCCurveGeometries: True if this datasource supports writing curve
geometries. (GDAL 2.0). In that case, OLCCurveGeometries must also be
declared in layers of that dataset.
The #define macro forms of the capability names should be used in
preference to the strings themselves to avoid misspelling.
Deprecated Use GDALDatasetTestCapability() in GDAL 2.0
Parameters:
-----------
hDS: handle to the data source against which to test the capability.
pszCapability: the capability to test.
TRUE if capability available otherwise FALSE.
"""
return _ogr.DataSource_TestCapability(self, *args)
def ExecuteSQL(self, *args, **kwargs):
"""
        ExecuteSQL(DataSource self, char const * statement, Geometry spatialFilter=None, char const * dialect="") -> Layer
OGRLayerH
OGR_DS_ExecuteSQL(OGRDataSourceH hDS, const char *pszStatement,
OGRGeometryH hSpatialFilter, const char *pszDialect)
Execute an SQL statement against the data store.
The result of an SQL query is either NULL for statements that are in
        error, or that have no result set, or an OGRLayer handle representing
        a result set from the query. Note that this OGRLayer is in addition
to the layers in the data store and must be destroyed with
OGR_DS_ReleaseResultSet() before the data source is closed
(destroyed).
For more information on the SQL dialect supported internally by OGR
        review the OGR SQL document. Some drivers (i.e. Oracle and PostGIS)
        pass the SQL directly through to the underlying RDBMS.
        Starting with OGR 1.10, the SQLITE dialect can also be used.
Deprecated Use GDALDatasetExecuteSQL() in GDAL 2.0
Parameters:
-----------
hDS: handle to the data source on which the SQL query is executed.
pszSQLCommand: the SQL statement to execute.
hSpatialFilter: handle to a geometry which represents a spatial
filter. Can be NULL.
pszDialect: allows control of the statement dialect. If set to NULL,
the OGR SQL engine will be used, except for RDBMS drivers that will
use their dedicated SQL engine, unless OGRSQL is explicitly passed as
the dialect. Starting with OGR 1.10, the SQLITE dialect can also be
used.
        a handle to an OGRLayer containing the results of the query.
Deallocate with OGR_DS_ReleaseResultSet().
"""
return _ogr.DataSource_ExecuteSQL(self, *args, **kwargs)
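    # Illustrative sketch (not generated by SWIG): run a query and release
    # the result layer before the datasource is closed. The table name is
    # hypothetical.
    #
    #   sql_lyr = ds.ExecuteSQL("SELECT COUNT(*) FROM points")
    #   if sql_lyr is not None:
    #       feat = sql_lyr.GetNextFeature()
    #       print(feat.GetField(0))
    #       ds.ReleaseResultSet(sql_lyr)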
def ReleaseResultSet(self, *args):
"""
ReleaseResultSet(DataSource self, Layer layer)
void
OGR_DS_ReleaseResultSet(OGRDataSourceH hDS, OGRLayerH hLayer)
Release results of OGR_DS_ExecuteSQL().
This function should only be used to deallocate OGRLayers resulting
from an OGR_DS_ExecuteSQL() call on the same OGRDataSource. Failure to
deallocate a results set before destroying the OGRDataSource may cause
errors.
Deprecated Use GDALDatasetReleaseResultSet() in GDAL 2.0
Parameters:
-----------
        hDS: a handle to the data source on which the SQL query was executed.
hLayer: handle to the result of a previous OGR_DS_ExecuteSQL() call.
"""
return _ogr.DataSource_ReleaseResultSet(self, *args)
def GetStyleTable(self, *args):
"""
GetStyleTable(DataSource self) -> StyleTable
OGRStyleTableH
OGR_DS_GetStyleTable(OGRDataSourceH hDS)
Get style table.
"""
return _ogr.DataSource_GetStyleTable(self, *args)
def SetStyleTable(self, *args):
"""
SetStyleTable(DataSource self, StyleTable table)
void
OGR_DS_SetStyleTable(OGRDataSourceH hDS, OGRStyleTableH hStyleTable)
Set style table.
"""
return _ogr.DataSource_SetStyleTable(self, *args)
def StartTransaction(self, *args, **kwargs):
"""StartTransaction(DataSource self, int force=False) -> OGRErr"""
return _ogr.DataSource_StartTransaction(self, *args, **kwargs)
def CommitTransaction(self, *args):
"""CommitTransaction(DataSource self) -> OGRErr"""
return _ogr.DataSource_CommitTransaction(self, *args)
def RollbackTransaction(self, *args):
"""RollbackTransaction(DataSource self) -> OGRErr"""
return _ogr.DataSource_RollbackTransaction(self, *args)
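    # Illustrative sketch (not generated by SWIG): dataset-level transaction,
    # guarded by the ODsCTransactions capability defined above.
    #
    #   if ds.TestCapability(ODsCTransactions):
    #       ds.StartTransaction()
    #       try:
    #           ...  # create / modify features here
    #           ds.CommitTransaction()
    #       except Exception:
    #           ds.RollbackTransaction()
    #           raise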
def Destroy(self):
"Once called, self has effectively been destroyed. Do not access. For backwards compatibility only"
_ogr.delete_DataSource( self )
self.thisown = 0
def Release(self):
"Once called, self has effectively been destroyed. Do not access. For backwards compatibility only"
_ogr.delete_DataSource( self )
self.thisown = 0
    def Reference(self):
        "For backwards compatibility only."
        # The generated code used to call self.Reference() here, which
        # recursed forever; kept as a harmless no-op instead.
        pass

    def Dereference(self):
        "For backwards compatibility only."
        # See Reference(): the original self.Dereference() call recursed.
        pass
def __len__(self):
"""Returns the number of layers on the datasource"""
return self.GetLayerCount()
def __getitem__(self, value):
"""Support dictionary, list, and slice -like access to the datasource.
ds[0] would return the first layer on the datasource.
ds['aname'] would return the layer named "aname".
ds[0:4] would return a list of the first four layers."""
        if isinstance(value, slice):
            # slice.start/stop/step may be None (e.g. ds[0:4]) and xrange()
            # is Python 2 only, so normalize with slice.indices(). The
            # original "except OGRError" named an undefined exception class.
            output = []
            for i in range(*value.indices(len(self))):
                try:
                    output.append(self.GetLayer(i))
                except Exception:  # we're done because we're off the end
                    return output
            return output
if isinstance(value, int):
if value > len(self)-1:
raise IndexError
return self.GetLayer(value)
elif isinstance(value, str):
return self.GetLayer(value)
else:
raise TypeError('Input %s is not of String or Int type' % type(value))
def GetLayer(self,iLayer=0):
"""Return the layer given an index or a name"""
if isinstance(iLayer, str):
return self.GetLayerByName(str(iLayer))
elif isinstance(iLayer, int):
return self.GetLayerByIndex(iLayer)
else:
raise TypeError("Input %s is not of String or Int type" % type(iLayer))
def DeleteLayer(self, value):
"""Deletes the layer given an index or layer name"""
if isinstance(value, str):
for i in range(self.GetLayerCount()):
name = self.GetLayer(i).GetName()
if name == value:
return _ogr.DataSource_DeleteLayer(self, i)
raise ValueError("Layer %s not found to delete" % value)
elif isinstance(value, int):
return _ogr.DataSource_DeleteLayer(self, value)
else:
raise TypeError("Input %s is not of String or Int type" % type(value))
DataSource_swigregister = _ogr.DataSource_swigregister
DataSource_swigregister(DataSource)
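
# Illustrative sketch (not part of the generated bindings): the Python-side
# conveniences defined on DataSource above. The layer name is hypothetical.
def _example_datasource_access(ds):
    """Demonstrate len(), index, name and slice access to layers."""
    print(len(ds))             # layer count, via __len__
    first = ds[0]              # by index, via __getitem__
    named = ds["points"]       # by name
    some = ds[0:2]             # slice -> list of layers
    return first, named, some
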
class Layer(MajorObject):
"""Proxy of C++ OGRLayerShadow class."""
__swig_setmethods__ = {}
for _s in [MajorObject]:
__swig_setmethods__.update(getattr(_s, '__swig_setmethods__', {}))
__setattr__ = lambda self, name, value: _swig_setattr(self, Layer, name, value)
__swig_getmethods__ = {}
for _s in [MajorObject]:
__swig_getmethods__.update(getattr(_s, '__swig_getmethods__', {}))
__getattr__ = lambda self, name: _swig_getattr(self, Layer, name)
def __init__(self, *args, **kwargs):
raise AttributeError("No constructor defined")
__repr__ = _swig_repr
def GetRefCount(self, *args):
"""
GetRefCount(Layer self) -> int
int OGR_L_GetRefCount(OGRLayerH
hLayer)
"""
return _ogr.Layer_GetRefCount(self, *args)
def SetSpatialFilter(self, *args):
"""
SetSpatialFilter(Layer self, Geometry filter)
SetSpatialFilter(Layer self, int iGeomField, Geometry filter)
void
OGR_L_SetSpatialFilter(OGRLayerH hLayer, OGRGeometryH hGeom)
Set a new spatial filter.
        This function sets the geometry to be used as a spatial filter when
        fetching features via the OGR_L_GetNextFeature() function. Only
        features that geometrically intersect the filter geometry will be
        returned.
        Currently this test may be inaccurately implemented, but it is
        guaranteed that all features whose envelope (as returned by
        OGR_G_GetEnvelope()) overlaps the envelope of the spatial filter will
        be returned. This can result in more shapes being returned than
        should strictly be the case.
Starting with GDAL 2.3, features with null or empty geometries will
never be considered as matching a spatial filter.
This function makes an internal copy of the passed geometry. The
passed geometry remains the responsibility of the caller, and may be
safely destroyed.
For the time being the passed filter geometry should be in the same
SRS as the layer (as returned by OGR_L_GetSpatialRef()). In the future
this may be generalized.
This function is the same as the C++ method
OGRLayer::SetSpatialFilter.
Parameters:
-----------
hLayer: handle to the layer on which to set the spatial filter.
hGeom: handle to the geometry to use as a filtering region. NULL may
be passed indicating that the current spatial filter should be
cleared, but no new one instituted.
"""
return _ogr.Layer_SetSpatialFilter(self, *args)
def SetSpatialFilterRect(self, *args):
"""
SetSpatialFilterRect(Layer self, double minx, double miny, double maxx, double maxy)
SetSpatialFilterRect(Layer self, int iGeomField, double minx, double miny, double maxx, double maxy)
void
OGR_L_SetSpatialFilterRect(OGRLayerH hLayer, double dfMinX, double
dfMinY, double dfMaxX, double dfMaxY)
Set a new rectangular spatial filter.
        This method sets the rectangle to be used as a spatial filter when
        fetching
features via the OGR_L_GetNextFeature() method. Only features that
geometrically intersect the given rectangle will be returned.
The x/y values should be in the same coordinate system as the layer as
a whole (as returned by OGRLayer::GetSpatialRef()). Internally this
method is normally implemented as creating a 5 vertex closed
rectangular polygon and passing it to OGRLayer::SetSpatialFilter(). It
exists as a convenience.
The only way to clear a spatial filter set with this method is to call
OGRLayer::SetSpatialFilter(NULL).
This method is the same as the C++ method
OGRLayer::SetSpatialFilterRect().
Parameters:
-----------
hLayer: handle to the layer on which to set the spatial filter.
dfMinX: the minimum X coordinate for the rectangular region.
dfMinY: the minimum Y coordinate for the rectangular region.
dfMaxX: the maximum X coordinate for the rectangular region.
dfMaxY: the maximum Y coordinate for the rectangular region.
"""
return _ogr.Layer_SetSpatialFilterRect(self, *args)
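    # Illustrative sketch (not generated by SWIG): count features inside a
    # bounding box, then clear the filter. Coordinates are hypothetical and
    # must be in the layer's SRS.
    #
    #   layer.SetSpatialFilterRect(-10.0, 40.0, 5.0, 55.0)
    #   n_inside = layer.GetFeatureCount()
    #   layer.SetSpatialFilter(None)   # clear the filter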
def GetSpatialFilter(self, *args):
"""
GetSpatialFilter(Layer self) -> Geometry
OGRGeometryH
OGR_L_GetSpatialFilter(OGRLayerH hLayer)
This function returns the current spatial filter for this layer.
The returned pointer is to an internally owned object, and should not
be altered or deleted by the caller.
This function is the same as the C++ method
OGRLayer::GetSpatialFilter().
Parameters:
-----------
hLayer: handle to the layer to get the spatial filter from.
        a handle to the spatial filter geometry.
"""
return _ogr.Layer_GetSpatialFilter(self, *args)
def SetAttributeFilter(self, *args):
"""
SetAttributeFilter(Layer self, char * filter_string) -> OGRErr
OGRErr
OGR_L_SetAttributeFilter(OGRLayerH hLayer, const char *pszQuery)
Set a new attribute query.
This function sets the attribute query string to be used when fetching
features via the OGR_L_GetNextFeature() function. Only features for
which the query evaluates as true will be returned.
The query string should be in the format of an SQL WHERE clause. For
instance "population > 1000000 and population < 5000000" where
population is an attribute in the layer. The query format is a
        restricted form of SQL WHERE clause as defined by
        "eq_format=restricted_where" about halfway through this document:
http://ogdi.sourceforge.net/prop/6.2.CapabilitiesMetadata.html
Note that installing a query string will generally result in resetting
the current reading position (ala OGR_L_ResetReading()).
This function is the same as the C++ method
OGRLayer::SetAttributeFilter().
Parameters:
-----------
hLayer: handle to the layer on which attribute query will be
executed.
pszQuery: query in restricted SQL WHERE format, or NULL to clear the
current query.
OGRERR_NONE if successfully installed, or an error code if the query
expression is in error, or some other failure occurs.
"""
return _ogr.Layer_SetAttributeFilter(self, *args)
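    # Illustrative sketch (not generated by SWIG): restricted-WHERE filtering,
    # assuming the layer has a "population" field.
    #
    #   layer.SetAttributeFilter("population > 1000000")
    #   layer.ResetReading()
    #   matches = [f for f in iter(layer.GetNextFeature, None)]
    #   layer.SetAttributeFilter(None)  # clear the filter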
def ResetReading(self, *args):
"""
ResetReading(Layer self)
void
OGR_L_ResetReading(OGRLayerH hLayer)
Reset feature reading to start on the first feature.
This affects GetNextFeature().
This function is the same as the C++ method OGRLayer::ResetReading().
Parameters:
-----------
hLayer: handle to the layer on which features are read.
"""
return _ogr.Layer_ResetReading(self, *args)
def GetName(self, *args):
"""
GetName(Layer self) -> char const *
const char* OGR_L_GetName(OGRLayerH
hLayer)
Return the layer name.
This returns the same content as
OGR_FD_GetName(OGR_L_GetLayerDefn(hLayer)), but for a few drivers,
calling OGR_L_GetName() directly can avoid lengthy layer definition
initialization.
This function is the same as the C++ method OGRLayer::GetName().
Parameters:
-----------
hLayer: handle to the layer.
        the layer name (must not be freed)
OGR 1.8.0
"""
return _ogr.Layer_GetName(self, *args)
def GetGeomType(self, *args):
"""
GetGeomType(Layer self) -> OGRwkbGeometryType
OGRwkbGeometryType
OGR_L_GetGeomType(OGRLayerH hLayer)
Return the layer geometry type.
This returns the same result as
OGR_FD_GetGeomType(OGR_L_GetLayerDefn(hLayer)), but for a few drivers,
calling OGR_L_GetGeomType() directly can avoid lengthy layer
definition initialization.
For layers with multiple geometry fields, this method only returns the
geometry type of the first geometry column. For other columns, use
OGR_GFld_GetType(OGR_FD_GetGeomFieldDefn(OGR_L_GetLayerDefn(hLayer),
i)). For layers without any geometry field, this method returns
wkbNone.
This function is the same as the C++ method OGRLayer::GetGeomType().
Parameters:
-----------
hLayer: handle to the layer.
the geometry type
OGR 1.8.0
"""
return _ogr.Layer_GetGeomType(self, *args)
def GetGeometryColumn(self, *args):
"""
GetGeometryColumn(Layer self) -> char const *
const char*
OGR_L_GetGeometryColumn(OGRLayerH hLayer)
This method returns the name of the underlying database column being
used as the geometry column, or "" if not supported.
        For layers with multiple geometry fields, this method only returns
        the name of the first geometry column. For other columns, use
        OGR_GFld_GetNameRef(OGR_FD_GetGeomFieldDefn(OGR_L_GetLayerDefn(hLayer),
        i)).
This method is the same as the C++ method
OGRLayer::GetGeometryColumn()
Parameters:
-----------
hLayer: handle to the layer
geometry column name.
"""
return _ogr.Layer_GetGeometryColumn(self, *args)
def GetFIDColumn(self, *args):
"""
GetFIDColumn(Layer self) -> char const *
const char*
OGR_L_GetFIDColumn(OGRLayerH hLayer)
This method returns the name of the underlying database column being
used as the FID column, or "" if not supported.
This method is the same as the C++ method OGRLayer::GetFIDColumn()
Parameters:
-----------
hLayer: handle to the layer
fid column name.
"""
return _ogr.Layer_GetFIDColumn(self, *args)
def GetFeature(self, *args):
"""
GetFeature(Layer self, GIntBig fid) -> Feature
OGRFeatureH
OGR_L_GetFeature(OGRLayerH hLayer, GIntBig nFeatureId)
Fetch a feature by its identifier.
This function will attempt to read the identified feature. The nFID
value cannot be OGRNullFID. Success or failure of this operation is
unaffected by the spatial or attribute filters (and specialized
implementations in drivers should make sure that they do not take into
account spatial or attribute filters).
If this function returns a non-NULL feature, it is guaranteed that its
feature id ( OGR_F_GetFID()) will be the same as nFID.
Use OGR_L_TestCapability(OLCRandomRead) to establish if this layer
supports efficient random access reading via OGR_L_GetFeature();
however, the call should always work if the feature exists as a
fallback implementation just scans all the features in the layer
looking for the desired feature.
Sequential reads (with OGR_L_GetNextFeature()) are generally
considered interrupted by a OGR_L_GetFeature() call.
        The returned feature should be freed with OGR_F_Destroy().
This function is the same as the C++ method OGRLayer::GetFeature( ).
Parameters:
-----------
hLayer: handle to the layer that owned the feature.
nFeatureId: the feature id of the feature to read.
        a handle to a feature now owned by the caller, or NULL on failure.
"""
return _ogr.Layer_GetFeature(self, *args)
def GetNextFeature(self, *args):
"""
GetNextFeature(Layer self) -> Feature
OGRFeatureH
OGR_L_GetNextFeature(OGRLayerH hLayer)
Fetch the next available feature from this layer.
The returned feature becomes the responsibility of the caller to
delete with OGR_F_Destroy(). It is critical that all features
associated with an OGRLayer (more specifically an OGRFeatureDefn) be
deleted before that layer/datasource is deleted.
Only features matching the current spatial filter (set with
SetSpatialFilter()) will be returned.
This function implements sequential access to the features of a layer.
The OGR_L_ResetReading() function can be used to start at the
beginning again.
        Features returned by OGR_L_GetNextFeature() may or may not be affected
        by concurrent modifications depending on drivers. A guaranteed way of
        seeing modifications in effect is to call OGR_L_ResetReading() on
        layers where OGR_L_GetNextFeature() has been called, before reading
again. Structural changes in layers (field addition, deletion, ...)
when a read is in progress may or may not be possible depending on
drivers. If a transaction is committed/aborted, the current sequential
reading may or may not be valid after that operation and a call to
OGR_L_ResetReading() might be needed.
This function is the same as the C++ method
OGRLayer::GetNextFeature().
Parameters:
-----------
hLayer: handle to the layer from which feature are read.
        a handle to a feature, or NULL if no more features are available.
"""
return _ogr.Layer_GetNextFeature(self, *args)
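    # Illustrative sketch (not generated by SWIG): the canonical sequential
    # read loop described above.
    #
    #   layer.ResetReading()
    #   feat = layer.GetNextFeature()
    #   while feat is not None:
    #       print(feat.GetFID())
    #       feat = layer.GetNextFeature()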
def SetNextByIndex(self, *args):
"""
SetNextByIndex(Layer self, GIntBig new_index) -> OGRErr
OGRErr
OGR_L_SetNextByIndex(OGRLayerH hLayer, GIntBig nIndex)
Move read cursor to the nIndex'th feature in the current resultset.
This method allows positioning of a layer such that the
GetNextFeature() call will read the requested feature, where nIndex is
an absolute index into the current result set. So, setting it to 3
would mean the next feature read with GetNextFeature() would have been
the 4th feature to have been read if sequential reading took place
from the beginning of the layer, including accounting for spatial and
attribute filters.
Only in rare circumstances is SetNextByIndex() efficiently
implemented. In all other cases the default implementation which calls
ResetReading() and then calls GetNextFeature() nIndex times is used.
To determine if fast seeking is available on the current layer use the
TestCapability() method with a value of OLCFastSetNextByIndex.
This method is the same as the C++ method OGRLayer::SetNextByIndex()
Parameters:
-----------
hLayer: handle to the layer
nIndex: the index indicating how many steps into the result set to
seek.
OGRERR_NONE on success or an error code.
"""
return _ogr.Layer_SetNextByIndex(self, *args)
def SetFeature(self, *args):
"""
SetFeature(Layer self, Feature feature) -> OGRErr
OGRErr OGR_L_SetFeature(OGRLayerH
hLayer, OGRFeatureH hFeat)
Rewrite an existing feature.
This function will write a feature to the layer, based on the feature
id within the OGRFeature.
Use OGR_L_TestCapability(OLCRandomWrite) to establish if this layer
supports random access writing via OGR_L_SetFeature().
This function is the same as the C++ method OGRLayer::SetFeature().
Parameters:
-----------
hLayer: handle to the layer to write the feature.
hFeat: the feature to write.
OGRERR_NONE if the operation works, otherwise an appropriate error
code (e.g OGRERR_NON_EXISTING_FEATURE if the feature does not exist).
"""
return _ogr.Layer_SetFeature(self, *args)
def CreateFeature(self, *args):
"""
CreateFeature(Layer self, Feature feature) -> OGRErr
OGRErr
OGR_L_CreateFeature(OGRLayerH hLayer, OGRFeatureH hFeat)
Create and write a new feature within a layer.
The passed feature is written to the layer as a new feature, rather
than overwriting an existing one. If the feature has a feature id
other than OGRNullFID, then the native implementation may use that as
the feature id of the new feature, but not necessarily. Upon
successful return the passed feature will have been updated with the
new feature id.
This function is the same as the C++ method OGRLayer::CreateFeature().
Parameters:
-----------
hLayer: handle to the layer to write the feature to.
hFeat: the handle of the feature to write to disk.
OGRERR_NONE on success.
"""
return _ogr.Layer_CreateFeature(self, *args)
def DeleteFeature(self, *args):
"""
DeleteFeature(Layer self, GIntBig fid) -> OGRErr
OGRErr
OGR_L_DeleteFeature(OGRLayerH hLayer, GIntBig nFID)
Delete feature from layer.
The feature with the indicated feature id is deleted from the layer if
supported by the driver. Most drivers do not support feature deletion,
and will return OGRERR_UNSUPPORTED_OPERATION. The
OGR_L_TestCapability() function may be called with OLCDeleteFeature to
check if the driver supports feature deletion.
This method is the same as the C++ method OGRLayer::DeleteFeature().
Parameters:
-----------
hLayer: handle to the layer
nFID: the feature id to be deleted from the layer
OGRERR_NONE if the operation works, otherwise an appropriate error
code (e.g OGRERR_NON_EXISTING_FEATURE if the feature does not exist).
"""
return _ogr.Layer_DeleteFeature(self, *args)
def SyncToDisk(self, *args):
"""
SyncToDisk(Layer self) -> OGRErr
OGRErr OGR_L_SyncToDisk(OGRLayerH
hLayer)
Flush pending changes to disk.
This call is intended to force the layer to flush any pending writes
to disk, and leave the disk file in a consistent state. It would not
normally have any effect on read-only datasources.
Some layers do not implement this method, and will still return
OGRERR_NONE. The default implementation just returns OGRERR_NONE. An
error is only returned if an error occurs while attempting to flush to
disk.
In any event, you should always close any opened datasource with
OGR_DS_Destroy() that will ensure all data is correctly flushed.
This method is the same as the C++ method OGRLayer::SyncToDisk()
Parameters:
-----------
hLayer: handle to the layer
OGRERR_NONE if no error occurs (even if nothing is done) or an error
code.
"""
return _ogr.Layer_SyncToDisk(self, *args)
def GetLayerDefn(self, *args):
"""
GetLayerDefn(Layer self) -> FeatureDefn
OGRFeatureDefnH
OGR_L_GetLayerDefn(OGRLayerH hLayer)
Fetch the schema information for this layer.
The returned handle to the OGRFeatureDefn is owned by the OGRLayer,
and should not be modified or freed by the application. It
encapsulates the attribute schema of the features of the layer.
This function is the same as the C++ method OGRLayer::GetLayerDefn().
Parameters:
-----------
hLayer: handle to the layer to get the schema information.
        a handle to the feature definition.
"""
return _ogr.Layer_GetLayerDefn(self, *args)
def GetFeatureCount(self, *args, **kwargs):
"""
GetFeatureCount(Layer self, int force=1) -> GIntBig
GIntBig
OGR_L_GetFeatureCount(OGRLayerH hLayer, int bForce)
Fetch the feature count in this layer.
        Returns the number of features in the layer. For dynamic databases the
        count may not be exact. If bForce is FALSE, and it would be expensive
        to establish the feature count, a value of -1 may be returned,
        indicating that the count isn't known. If bForce is TRUE, some
        implementations will actually scan the entire layer once to count
        objects.
The returned count takes the spatial filter into account.
Note that some implementations of this method may alter the read
cursor of the layer.
This function is the same as the CPP OGRLayer::GetFeatureCount().
Note: since GDAL 2.0, this method returns a GIntBig (previously a int)
Parameters:
-----------
hLayer: handle to the layer that owned the features.
bForce: Flag indicating whether the count should be computed even if
it is expensive.
feature count, -1 if count not known.
"""
return _ogr.Layer_GetFeatureCount(self, *args, **kwargs)
def GetExtent(self, *args, **kwargs):
"""
GetExtent(Layer self, int force=1, int can_return_null=0, int geom_field=0)
OGRErr OGR_L_GetExtent(OGRLayerH
hLayer, OGREnvelope *psExtent, int bForce)
Fetch the extent of this layer.
Returns the extent (MBR) of the data in the layer. If bForce is FALSE,
and it would be expensive to establish the extent then OGRERR_FAILURE
        will be returned indicating that the extent isn't known. If bForce is
TRUE then some implementations will actually scan the entire layer
once to compute the MBR of all the features in the layer.
Depending on the drivers, the returned extent may or may not take the
spatial filter into account. So it is safer to call OGR_L_GetExtent()
without setting a spatial filter.
Layers without any geometry may return OGRERR_FAILURE just indicating
that no meaningful extents could be collected.
Note that some implementations of this method may alter the read
cursor of the layer.
This function is the same as the C++ method OGRLayer::GetExtent().
Parameters:
-----------
hLayer: handle to the layer from which to get extent.
psExtent: the structure in which the extent value will be returned.
bForce: Flag indicating whether the extent should be computed even if
it is expensive.
OGRERR_NONE on success, OGRERR_FAILURE if extent not known.
"""
return _ogr.Layer_GetExtent(self, *args, **kwargs)
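    # Illustrative sketch (not generated by SWIG): cheap-if-possible layer
    # statistics. GetExtent() returns (minx, maxx, miny, maxy).
    #
    #   n = layer.GetFeatureCount(force=0)   # may be -1 if expensive
    #   minx, maxx, miny, maxy = layer.GetExtent(force=1)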
def TestCapability(self, *args):
"""
TestCapability(Layer self, char const * cap) -> bool
int
OGR_L_TestCapability(OGRLayerH hLayer, const char *pszCap)
Test if this layer supported the named capability.
The capability codes that can be tested are represented as strings,
but #defined constants exists to ensure correct spelling. Specific
layer types may implement class specific capabilities, but this can't
generally be discovered by the caller.
OLCRandomRead / "RandomRead": TRUE if the GetFeature() method is
implemented in an optimized way for this layer, as opposed to the
default implementation using ResetReading() and GetNextFeature() to
find the requested feature id.
OLCSequentialWrite / "SequentialWrite": TRUE if the CreateFeature()
method works for this layer. Note this means that this particular
        layer is writable. The same OGRLayer class may return FALSE for
other layer instances that are effectively read-only.
OLCRandomWrite / "RandomWrite": TRUE if the SetFeature() method is
operational on this layer. Note this means that this particular layer
        is writable. The same OGRLayer class may return FALSE for other
layer instances that are effectively read-only.
OLCFastSpatialFilter / "FastSpatialFilter": TRUE if this layer
implements spatial filtering efficiently. Layers that effectively read
all features, and test them with the OGRFeature intersection methods
should return FALSE. This can be used as a clue by the application
whether it should build and maintain its own spatial index for
features in this layer.
OLCFastFeatureCount / "FastFeatureCount": TRUE if this layer can
return a feature count (via OGR_L_GetFeatureCount()) efficiently, i.e.
without counting the features. In some cases this will return TRUE
until a spatial filter is installed after which it will return FALSE.
OLCFastGetExtent / "FastGetExtent": TRUE if this layer can return
its data extent (via OGR_L_GetExtent()) efficiently, i.e. without
scanning all the features. In some cases this will return TRUE until a
spatial filter is installed after which it will return FALSE.
OLCFastSetNextByIndex / "FastSetNextByIndex": TRUE if this layer can
perform the SetNextByIndex() call efficiently, otherwise FALSE.
OLCCreateField / "CreateField": TRUE if this layer can create new
fields on the current layer using CreateField(), otherwise FALSE.
OLCCreateGeomField / "CreateGeomField": (GDAL >= 1.11) TRUE if this
layer can create new geometry fields on the current layer using
CreateGeomField(), otherwise FALSE.
OLCDeleteField / "DeleteField": TRUE if this layer can delete
existing fields on the current layer using DeleteField(), otherwise
FALSE.
OLCReorderFields / "ReorderFields": TRUE if this layer can reorder
existing fields on the current layer using ReorderField() or
ReorderFields(), otherwise FALSE.
OLCAlterFieldDefn / "AlterFieldDefn": TRUE if this layer can alter
the definition of an existing field on the current layer using
AlterFieldDefn(), otherwise FALSE.
OLCDeleteFeature / "DeleteFeature": TRUE if the DeleteFeature()
method is supported on this layer, otherwise FALSE.
OLCStringsAsUTF8 / "StringsAsUTF8": TRUE if values of OFTString
fields are assured to be in UTF-8 format. If FALSE the encoding of
fields is uncertain, though it might still be UTF-8.
OLCTransactions / "Transactions": TRUE if the StartTransaction(),
CommitTransaction() and RollbackTransaction() methods work in a
meaningful way, otherwise FALSE.
OLCCurveGeometries / "CurveGeometries": TRUE if this layer supports
writing curve geometries or may return such geometries. (GDAL 2.0).
This function is the same as the C++ method
OGRLayer::TestCapability().
Parameters:
-----------
hLayer: handle to the layer to get the capability from.
pszCap: the name of the capability to test.
TRUE if the layer has the requested capability, or FALSE otherwise.
OGRLayers will return FALSE for any unrecognized capabilities.
"""
return _ogr.Layer_TestCapability(self, *args)
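    # Illustrative sketch (not generated by SWIG): guard optional operations
    # behind capability tests, using the constants defined at module level.
    #
    #   if layer.TestCapability(OLCDeleteFeature):
    #       layer.DeleteFeature(42)            # FID 42 is hypothetical
    #   if layer.TestCapability(OLCFastFeatureCount):
    #       print(layer.GetFeatureCount())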
def CreateField(self, *args, **kwargs):
"""
CreateField(Layer self, FieldDefn field_def, int approx_ok=1) -> OGRErr
OGRErr
OGR_L_CreateField(OGRLayerH hLayer, OGRFieldDefnH hField, int
bApproxOK)
Create a new field on a layer.
You must use this to create new fields on a real layer. Internally the
OGRFeatureDefn for the layer will be updated to reflect the new field.
Applications should never modify the OGRFeatureDefn used by a layer
directly.
This function should not be called while there are feature objects in
existence that were obtained or created with the previous layer
definition.
Not all drivers support this function. You can query a layer to check
if it supports it with the OLCCreateField capability. Some drivers may
only support this method while there are still no features in the
layer. When it is supported, the existing features of the backing
file/database should be updated accordingly.
        Drivers may or may not support not-null constraints. If they support
        creating fields with not-null constraints, this generally must be done
        before any features have been written to the layer.
This function is the same as the C++ method OGRLayer::CreateField().
Parameters:
-----------
hLayer: handle to the layer to write the field definition.
hField: handle of the field definition to write to disk.
bApproxOK: If TRUE, the field may be created in a slightly different
form depending on the limitations of the format driver.
OGRERR_NONE on success.
"""
return _ogr.Layer_CreateField(self, *args, **kwargs)
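    # Illustrative sketch (not generated by SWIG): adding a 32-character
    # string field. FieldDefn and OFTString are defined elsewhere in this
    # module; the field name is hypothetical.
    #
    #   fld = FieldDefn("name", OFTString)
    #   fld.SetWidth(32)
    #   if layer.CreateField(fld) != OGRERR_NONE:
    #       raise RuntimeError("CreateField failed")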
def DeleteField(self, *args):
"""
DeleteField(Layer self, int iField) -> OGRErr
OGRErr
OGR_L_DeleteField(OGRLayerH hLayer, int iField)
Delete an existing field on a layer.
You must use this to delete existing fields on a real layer.
Internally the OGRFeatureDefn for the layer will be updated to reflect
the deleted field. Applications should never modify the OGRFeatureDefn
used by a layer directly.
This function should not be called while there are feature objects in
existence that were obtained or created with the previous layer
definition.
Not all drivers support this function. You can query a layer to check
if it supports it with the OLCDeleteField capability. Some drivers may
only support this method while there are still no features in the
layer. When it is supported, the existing features of the backing
file/database should be updated accordingly.
This function is the same as the C++ method OGRLayer::DeleteField().
Parameters:
-----------
hLayer: handle to the layer.
iField: index of the field to delete.
OGRERR_NONE on success.
OGR 1.9.0
"""
return _ogr.Layer_DeleteField(self, *args)
def ReorderField(self, *args):
"""
ReorderField(Layer self, int iOldFieldPos, int iNewFieldPos) -> OGRErr
OGRErr
OGR_L_ReorderField(OGRLayerH hLayer, int iOldFieldPos, int
iNewFieldPos)
Reorder an existing field on a layer.
        This function is a convenience wrapper of OGR_L_ReorderFields()
        dedicated to moving a single field.
You must use this to reorder existing fields on a real layer.
Internally the OGRFeatureDefn for the layer will be updated to reflect
the reordering of the fields. Applications should never modify the
OGRFeatureDefn used by a layer directly.
This function should not be called while there are feature objects in
existence that were obtained or created with the previous layer
definition.
The field definition that was at initial position iOldFieldPos will be
moved at position iNewFieldPos, and elements between will be shuffled
accordingly.
        For example, let us suppose the fields were "0","1","2","3","4"
initially. ReorderField(1, 3) will reorder them as
"0","2","3","1","4".
Not all drivers support this function. You can query a layer to check
if it supports it with the OLCReorderFields capability. Some drivers
may only support this method while there are still no features in the
layer. When it is supported, the existing features of the backing
file/database should be updated accordingly.
This function is the same as the C++ method OGRLayer::ReorderField().
Parameters:
-----------
hLayer: handle to the layer.
iOldFieldPos: previous position of the field to move. Must be in the
range [0,GetFieldCount()-1].
iNewFieldPos: new position of the field to move. Must be in the range
[0,GetFieldCount()-1].
OGRERR_NONE on success.
OGR 1.9.0
"""
return _ogr.Layer_ReorderField(self, *args)
def ReorderFields(self, *args):
"""
ReorderFields(Layer self, int nList) -> OGRErr
OGRErr
OGR_L_ReorderFields(OGRLayerH hLayer, int *panMap)
Reorder all the fields of a layer.
You must use this to reorder existing fields on a real layer.
Internally the OGRFeatureDefn for the layer will be updated to reflect
the reordering of the fields. Applications should never modify the
OGRFeatureDefn used by a layer directly.
This function should not be called while there are feature objects in
existence that were obtained or created with the previous layer
definition.
        panMap is such that, for each field definition at position i after
        reordering, its position before reordering was panMap[i].
        For example, let us suppose the fields were "0","1","2","3","4"
initially. ReorderFields([0,2,3,1,4]) will reorder them as
"0","2","3","1","4".
Not all drivers support this function. You can query a layer to check
if it supports it with the OLCReorderFields capability. Some drivers
may only support this method while there are still no features in the
layer. When it is supported, the existing features of the backing
file/database should be updated accordingly.
This function is the same as the C++ method OGRLayer::ReorderFields().
Parameters:
-----------
hLayer: handle to the layer.
        panMap: an array of GetLayerDefn()->GetFieldCount() elements which
        is a permutation of [0, GetLayerDefn()->GetFieldCount()-1].
OGRERR_NONE on success.
OGR 1.9.0
"""
return _ogr.Layer_ReorderFields(self, *args)
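    # Illustrative sketch (not generated by SWIG): with fields "0","1","2",
    # "3","4", the call below yields "0","2","3","1","4", since the field at
    # new position i is the one previously at position panMap[i].
    #
    #   layer.ReorderFields([0, 2, 3, 1, 4])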
def AlterFieldDefn(self, *args):
"""
AlterFieldDefn(Layer self, int iField, FieldDefn field_def, int nFlags) -> OGRErr
OGRErr
OGR_L_AlterFieldDefn(OGRLayerH hLayer, int iField, OGRFieldDefnH
hNewFieldDefn, int nFlags)
Alter the definition of an existing field on a layer.
You must use this to alter the definition of an existing field of a
real layer. Internally the OGRFeatureDefn for the layer will be
updated to reflect the altered field. Applications should never modify
the OGRFeatureDefn used by a layer directly.
This function should not be called while there are feature objects in
existence that were obtained or created with the previous layer
definition.
Not all drivers support this function. You can query a layer to check
if it supports it with the OLCAlterFieldDefn capability. Some drivers
may only support this method while there are still no features in the
layer. When it is supported, the existing features of the backing
file/database should be updated accordingly. Some drivers might also
not support all update flags.
This function is the same as the C++ method
OGRLayer::AlterFieldDefn().
Parameters:
-----------
hLayer: handle to the layer.
iField: index of the field whose definition must be altered.
hNewFieldDefn: new field definition
nFlags: combination of ALTER_NAME_FLAG, ALTER_TYPE_FLAG,
ALTER_WIDTH_PRECISION_FLAG, ALTER_NULLABLE_FLAG and ALTER_DEFAULT_FLAG
to indicate which of the name and/or type and/or width and precision
fields and/or nullability from the new field definition must be taken
into account.
OGRERR_NONE on success.
OGR 1.9.0
"""
return _ogr.Layer_AlterFieldDefn(self, *args)
def CreateGeomField(self, *args, **kwargs):
"""
CreateGeomField(Layer self, GeomFieldDefn field_def, int approx_ok=1) -> OGRErr
OGRErr
OGR_L_CreateGeomField(OGRLayerH hLayer, OGRGeomFieldDefnH hField, int
bApproxOK)
Create a new geometry field on a layer.
You must use this to create new geometry fields on a real layer.
Internally the OGRFeatureDefn for the layer will be updated to reflect
the new field. Applications should never modify the OGRFeatureDefn
used by a layer directly.
This function should not be called while there are feature objects in
existence that were obtained or created with the previous layer
definition.
Not all drivers support this function. You can query a layer to check
if it supports it with the OLCCreateField capability. Some drivers may
only support this method while there are still no features in the
layer. When it is supported, the existing features of the backing
file/database should be updated accordingly.
        Drivers may or may not support not-null constraints. If they support
        creating fields with not-null constraints, this generally must be done
        before any features have been written to the layer.
This function is the same as the C++ method OGRLayer::CreateField().
Parameters:
-----------
hLayer: handle to the layer to write the field definition.
hField: handle of the geometry field definition to write to disk.
bApproxOK: If TRUE, the field may be created in a slightly different
form depending on the limitations of the format driver.
OGRERR_NONE on success.
OGR 1.11
"""
return _ogr.Layer_CreateGeomField(self, *args, **kwargs)
def StartTransaction(self, *args):
"""
StartTransaction(Layer self) -> OGRErr
OGRErr
OGR_L_StartTransaction(OGRLayerH hLayer)
For datasources which support transactions, StartTransaction creates a
transaction.
If starting the transaction fails, will return OGRERR_FAILURE.
Datasources which do not support transactions will always return
OGRERR_NONE.
Note: as of GDAL 2.0, use of this API is discouraged when the dataset
offers dataset level transaction with GDALDataset::StartTransaction().
The reason is that most drivers can only offer transactions at dataset
level, and not layer level. Very few drivers really support
transactions at layer scope.
This function is the same as the C++ method
OGRLayer::StartTransaction().
Parameters:
-----------
hLayer: handle to the layer
OGRERR_NONE on success.
"""
return _ogr.Layer_StartTransaction(self, *args)
def CommitTransaction(self, *args):
"""
CommitTransaction(Layer self) -> OGRErr
OGRErr
OGR_L_CommitTransaction(OGRLayerH hLayer)
For datasources which support transactions, CommitTransaction commits
a transaction.
If no transaction is active, or the commit fails, will return
OGRERR_FAILURE. Datasources which do not support transactions will
always return OGRERR_NONE.
This function is the same as the C++ method
OGRLayer::CommitTransaction().
Parameters:
-----------
hLayer: handle to the layer
OGRERR_NONE on success.
"""
return _ogr.Layer_CommitTransaction(self, *args)
def RollbackTransaction(self, *args):
"""
RollbackTransaction(Layer self) -> OGRErr
OGRErr
OGR_L_RollbackTransaction(OGRLayerH hLayer)
For datasources which support transactions, RollbackTransaction will
roll back a datasource to its state before the start of the current
transaction.
If no transaction is active, or the rollback fails, will return
OGRERR_FAILURE. Datasources which do not support transactions will
always return OGRERR_NONE.
This function is the same as the C++ method
OGRLayer::RollbackTransaction().
Parameters:
-----------
hLayer: handle to the layer
OGRERR_NONE on success.
"""
return _ogr.Layer_RollbackTransaction(self, *args)
def FindFieldIndex(self, *args):
"""
FindFieldIndex(Layer self, char const * pszFieldName, int bExactMatch) -> int
int
OGR_L_FindFieldIndex(OGRLayerH hLayer, const char *pszFieldName, int
bExactMatch)
Find the index of field in a layer.
        The returned number is the index of the field in the layer, or -1 if
        the field doesn't exist.
        If bExactMatch is set to FALSE and the field doesn't exist in the
        given form, the driver might apply some changes to make it match, like
        those it might do if the layer was created (e.g. like LAUNDER in the
        OCI driver).
This method is the same as the C++ method OGRLayer::FindFieldIndex().
field index, or -1 if the field doesn't exist
"""
return _ogr.Layer_FindFieldIndex(self, *args)
def GetSpatialRef(self, *args):
"""
GetSpatialRef(Layer self) -> SpatialReference
OGRSpatialReferenceH
OGR_L_GetSpatialRef(OGRLayerH hLayer)
Fetch the spatial reference system for this layer.
The returned object is owned by the OGRLayer and should not be
modified or freed by the application.
This function is the same as the C++ method OGRLayer::GetSpatialRef().
Parameters:
-----------
hLayer: handle to the layer to get the spatial reference from.
spatial reference, or NULL if there isn't one.
"""
return _ogr.Layer_GetSpatialRef(self, *args)
def GetFeaturesRead(self, *args):
"""
GetFeaturesRead(Layer self) -> GIntBig
GIntBig
OGR_L_GetFeaturesRead(OGRLayerH hLayer)
"""
return _ogr.Layer_GetFeaturesRead(self, *args)
def SetIgnoredFields(self, *args):
"""
SetIgnoredFields(Layer self, char const ** options) -> OGRErr
OGRErr
OGR_L_SetIgnoredFields(OGRLayerH hLayer, const char **papszFields)
Set which fields can be omitted when retrieving features from the
layer.
If the driver supports this functionality (testable using
OLCIgnoreFields capability), it will not fetch the specified fields in
subsequent calls to GetFeature() / GetNextFeature() and thus save some
processing time and/or bandwidth.
Besides field names of the layers, the following special fields can be
passed: "OGR_GEOMETRY" to ignore geometry and "OGR_STYLE" to
ignore layer style.
By default, no fields are ignored.
This method is the same as the C++ method OGRLayer::SetIgnoredFields()
Parameters:
-----------
        papszFields: an array of field names terminated by a NULL item. If NULL
is passed, the ignored list is cleared.
OGRERR_NONE if all field names have been resolved (even if the driver
does not support this method)
"""
return _ogr.Layer_SetIgnoredFields(self, *args)
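    # Illustrative sketch (not generated by SWIG): skip fetching style and a
    # (hypothetical) heavy attribute column during a scan, then reset.
    #
    #   if layer.TestCapability(OLCIgnoreFields):
    #       layer.SetIgnoredFields(["OGR_STYLE", "description"])
    #   layer.SetIgnoredFields(None)   # clear the ignored list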
def Intersection(self, *args, **kwargs):
"""
Intersection(Layer self, Layer method_layer, Layer result_layer, char ** options=None, GDALProgressFunc callback=0, void * callback_data=None) -> OGRErr
OGRErr
OGR_L_Intersection(OGRLayerH pLayerInput, OGRLayerH pLayerMethod,
OGRLayerH pLayerResult, char **papszOptions, GDALProgressFunc
pfnProgress, void *pProgressArg)
Intersection of two layers.
The result layer contains features whose geometries represent areas
that are common between features in the input layer and in the method
layer. The features in the result layer have attributes from both
input and method layers. The schema of the result layer can be set by
the user or, if it is empty, is initialized to contain all fields in
the input and method layers.
If the schema of the result is set by user and contains fields that
have the same name as a field in input and in method layer, then the
attribute in the result feature will get the value from the feature of
the method layer.
For best performance use the minimum amount of features in the method
layer and copy it into a memory layer.
This method relies on GEOS support. Do not use unless the GEOS support
        is compiled in. The recognized list of options is:
SKIP_FAILURES=YES/NO. Set it to YES to go on, even when a feature
could not be inserted or a GEOS call failed.
PROMOTE_TO_MULTI=YES/NO. Set it to YES to convert Polygons into
MultiPolygons, or LineStrings to MultiLineStrings.
INPUT_PREFIX=string. Set a prefix for the field names that will be
created from the fields of the input layer.
METHOD_PREFIX=string. Set a prefix for the field names that will be
created from the fields of the method layer.
USE_PREPARED_GEOMETRIES=YES/NO. Set to NO to not use prepared
geometries to pretest intersection of features of method layer with
features of this layer.
PRETEST_CONTAINMENT=YES/NO. Set to YES to pretest the containment of
features of method layer within the features of this layer. This will
speed up the method significantly in some cases. Requires that the
prepared geometries are in effect.
KEEP_LOWER_DIMENSION_GEOMETRIES=YES/NO. Set to NO to skip result
features with lower dimension geometry that would otherwise be added
to the result layer. The default is to add but only if the result
layer has an unknown geometry type.
This function is the same as the C++ method OGRLayer::Intersection().
Parameters:
-----------
pLayerInput: the input layer. Should not be NULL.
pLayerMethod: the method layer. Should not be NULL.
pLayerResult: the layer where the features resulting from the
operation are inserted. Should not be NULL. See above the note about
the schema.
papszOptions: NULL terminated list of options (may be NULL).
pfnProgress: a GDALProgressFunc() compatible callback function for
reporting progress or NULL.
pProgressArg: argument to be passed to pfnProgress. May be NULL.
an error code if there was an error or the execution was interrupted,
OGRERR_NONE otherwise.
The first geometry field is always used.
OGR 1.10
"""
return _ogr.Layer_Intersection(self, *args, **kwargs)
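    # Hedged usage sketch for the overlay family (Intersection, Union,
    # SymDifference, ...): writing the intersection of two layers into an
    # in-memory result layer. Layer names and the option choices are
    # illustrative; `input_lyr` and `method_lyr` are assumed open layers:
    #
    #   mem_ds = ogr.GetDriverByName("Memory").CreateDataSource("tmp")
    #   result = mem_ds.CreateLayer("inter", geom_type=ogr.wkbUnknown)
    #   err = input_lyr.Intersection(method_lyr, result,
    #                                options=["SKIP_FAILURES=YES",
    #                                         "PROMOTE_TO_MULTI=YES"])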
def Union(self, *args, **kwargs):
"""
Union(Layer self, Layer method_layer, Layer result_layer, char ** options=None, GDALProgressFunc callback=0, void * callback_data=None) -> OGRErr
OGRErr OGR_L_Union(OGRLayerH
pLayerInput, OGRLayerH pLayerMethod, OGRLayerH pLayerResult, char
**papszOptions, GDALProgressFunc pfnProgress, void *pProgressArg)
Union of two layers.
The result layer contains features whose geometries represent areas
        that are either in the input layer, in the method layer, or in
both. The features in the result layer have attributes from both input
and method layers. For features which represent areas that are only in
the input or in the method layer the respective attributes have
undefined values. The schema of the result layer can be set by the
user or, if it is empty, is initialized to contain all fields in the
input and method layers.
If the schema of the result is set by user and contains fields that
have the same name as a field in input and in method layer, then the
attribute in the result feature will get the value from the feature of
the method layer (even if it is undefined).
For best performance use the minimum amount of features in the method
layer and copy it into a memory layer.
This method relies on GEOS support. Do not use unless the GEOS support
        is compiled in. The recognized list of options is:
SKIP_FAILURES=YES/NO. Set it to YES to go on, even when a feature
could not be inserted or a GEOS call failed.
PROMOTE_TO_MULTI=YES/NO. Set it to YES to convert Polygons into
MultiPolygons, or LineStrings to MultiLineStrings.
INPUT_PREFIX=string. Set a prefix for the field names that will be
created from the fields of the input layer.
METHOD_PREFIX=string. Set a prefix for the field names that will be
created from the fields of the method layer.
USE_PREPARED_GEOMETRIES=YES/NO. Set to NO to not use prepared
geometries to pretest intersection of features of method layer with
features of this layer.
KEEP_LOWER_DIMENSION_GEOMETRIES=YES/NO. Set to NO to skip result
features with lower dimension geometry that would otherwise be added
to the result layer. The default is to add but only if the result
layer has an unknown geometry type.
This function is the same as the C++ method OGRLayer::Union().
Parameters:
-----------
pLayerInput: the input layer. Should not be NULL.
pLayerMethod: the method layer. Should not be NULL.
pLayerResult: the layer where the features resulting from the
operation are inserted. Should not be NULL. See above the note about
the schema.
papszOptions: NULL terminated list of options (may be NULL).
pfnProgress: a GDALProgressFunc() compatible callback function for
reporting progress or NULL.
pProgressArg: argument to be passed to pfnProgress. May be NULL.
an error code if there was an error or the execution was interrupted,
OGRERR_NONE otherwise.
The first geometry field is always used.
OGR 1.10
"""
return _ogr.Layer_Union(self, *args, **kwargs)
def SymDifference(self, *args, **kwargs):
"""
SymDifference(Layer self, Layer method_layer, Layer result_layer, char ** options=None, GDALProgressFunc callback=0, void * callback_data=None) -> OGRErr
OGRErr
OGR_L_SymDifference(OGRLayerH pLayerInput, OGRLayerH pLayerMethod,
OGRLayerH pLayerResult, char **papszOptions, GDALProgressFunc
pfnProgress, void *pProgressArg)
Symmetrical difference of two layers.
The result layer contains features whose geometries represent areas
        that are either in the input layer or in the method layer but not
in both. The features in the result layer have attributes from both
input and method layers. For features which represent areas that are
only in the input or in the method layer the respective attributes
have undefined values. The schema of the result layer can be set by
the user or, if it is empty, is initialized to contain all fields in
the input and method layers.
If the schema of the result is set by user and contains fields that
have the same name as a field in input and in method layer, then the
attribute in the result feature will get the value from the feature of
the method layer (even if it is undefined).
For best performance use the minimum amount of features in the method
layer and copy it into a memory layer.
This method relies on GEOS support. Do not use unless the GEOS support
        is compiled in. The recognized list of options is:
SKIP_FAILURES=YES/NO. Set it to YES to go on, even when a feature
could not be inserted or a GEOS call failed.
PROMOTE_TO_MULTI=YES/NO. Set it to YES to convert Polygons into
MultiPolygons, or LineStrings to MultiLineStrings.
INPUT_PREFIX=string. Set a prefix for the field names that will be
created from the fields of the input layer.
METHOD_PREFIX=string. Set a prefix for the field names that will be
created from the fields of the method layer.
This function is the same as the C++ method OGRLayer::SymDifference().
Parameters:
-----------
pLayerInput: the input layer. Should not be NULL.
pLayerMethod: the method layer. Should not be NULL.
pLayerResult: the layer where the features resulting from the
operation are inserted. Should not be NULL. See above the note about
the schema.
papszOptions: NULL terminated list of options (may be NULL).
pfnProgress: a GDALProgressFunc() compatible callback function for
reporting progress or NULL.
pProgressArg: argument to be passed to pfnProgress. May be NULL.
an error code if there was an error or the execution was interrupted,
OGRERR_NONE otherwise.
The first geometry field is always used.
OGR 1.10
"""
return _ogr.Layer_SymDifference(self, *args, **kwargs)
def Identity(self, *args, **kwargs):
"""
Identity(Layer self, Layer method_layer, Layer result_layer, char ** options=None, GDALProgressFunc callback=0, void * callback_data=None) -> OGRErr
OGRErr OGR_L_Identity(OGRLayerH
pLayerInput, OGRLayerH pLayerMethod, OGRLayerH pLayerResult, char
**papszOptions, GDALProgressFunc pfnProgress, void *pProgressArg)
Identify the features of this layer with the ones from the identity
layer.
The result layer contains features whose geometries represent areas
that are in the input layer. The features in the result layer have
attributes from both input and method layers. The schema of the result
layer can be set by the user or, if it is empty, is initialized to
contain all fields in input and method layers.
If the schema of the result is set by user and contains fields that
have the same name as a field in input and in method layer, then the
attribute in the result feature will get the value from the feature of
the method layer (even if it is undefined).
For best performance use the minimum amount of features in the method
layer and copy it into a memory layer.
This method relies on GEOS support. Do not use unless the GEOS support
        is compiled in. The recognized list of options is:
SKIP_FAILURES=YES/NO. Set it to YES to go on, even when a feature
could not be inserted or a GEOS call failed.
PROMOTE_TO_MULTI=YES/NO. Set it to YES to convert Polygons into
MultiPolygons, or LineStrings to MultiLineStrings.
INPUT_PREFIX=string. Set a prefix for the field names that will be
created from the fields of the input layer.
METHOD_PREFIX=string. Set a prefix for the field names that will be
created from the fields of the method layer.
USE_PREPARED_GEOMETRIES=YES/NO. Set to NO to not use prepared
geometries to pretest intersection of features of method layer with
features of this layer.
KEEP_LOWER_DIMENSION_GEOMETRIES=YES/NO. Set to NO to skip result
features with lower dimension geometry that would otherwise be added
to the result layer. The default is to add but only if the result
layer has an unknown geometry type.
This function is the same as the C++ method OGRLayer::Identity().
Parameters:
-----------
pLayerInput: the input layer. Should not be NULL.
pLayerMethod: the method layer. Should not be NULL.
pLayerResult: the layer where the features resulting from the
operation are inserted. Should not be NULL. See above the note about
the schema.
papszOptions: NULL terminated list of options (may be NULL).
pfnProgress: a GDALProgressFunc() compatible callback function for
reporting progress or NULL.
pProgressArg: argument to be passed to pfnProgress. May be NULL.
an error code if there was an error or the execution was interrupted,
OGRERR_NONE otherwise.
The first geometry field is always used.
OGR 1.10
"""
return _ogr.Layer_Identity(self, *args, **kwargs)
def Update(self, *args, **kwargs):
"""
Update(Layer self, Layer method_layer, Layer result_layer, char ** options=None, GDALProgressFunc callback=0, void * callback_data=None) -> OGRErr
OGRErr OGR_L_Update(OGRLayerH
pLayerInput, OGRLayerH pLayerMethod, OGRLayerH pLayerResult, char
**papszOptions, GDALProgressFunc pfnProgress, void *pProgressArg)
Update this layer with features from the update layer.
The result layer contains features whose geometries represent areas
that are either in the input layer or in the method layer. The
        features in the result layer have the areas of the features of the method
        layer, or those areas of the features of the input layer that are not
        covered by the method layer. The features of the result layer get
their attributes from the input layer. The schema of the result layer
can be set by the user or, if it is empty, is initialized to contain
all fields in the input layer.
If the schema of the result is set by user and contains fields that
have the same name as a field in the method layer, then the attribute
        in the result feature that originates from the method layer will get
the value from the feature of the method layer.
For best performance use the minimum amount of features in the method
layer and copy it into a memory layer.
This method relies on GEOS support. Do not use unless the GEOS support
        is compiled in. The recognized list of options is:
SKIP_FAILURES=YES/NO. Set it to YES to go on, even when a feature
could not be inserted or a GEOS call failed.
PROMOTE_TO_MULTI=YES/NO. Set it to YES to convert Polygons into
MultiPolygons, or LineStrings to MultiLineStrings.
INPUT_PREFIX=string. Set a prefix for the field names that will be
created from the fields of the input layer.
METHOD_PREFIX=string. Set a prefix for the field names that will be
created from the fields of the method layer.
This function is the same as the C++ method OGRLayer::Update().
Parameters:
-----------
pLayerInput: the input layer. Should not be NULL.
pLayerMethod: the method layer. Should not be NULL.
pLayerResult: the layer where the features resulting from the
operation are inserted. Should not be NULL. See above the note about
the schema.
papszOptions: NULL terminated list of options (may be NULL).
pfnProgress: a GDALProgressFunc() compatible callback function for
reporting progress or NULL.
pProgressArg: argument to be passed to pfnProgress. May be NULL.
an error code if there was an error or the execution was interrupted,
OGRERR_NONE otherwise.
The first geometry field is always used.
OGR 1.10
"""
return _ogr.Layer_Update(self, *args, **kwargs)
def Clip(self, *args, **kwargs):
"""
Clip(Layer self, Layer method_layer, Layer result_layer, char ** options=None, GDALProgressFunc callback=0, void * callback_data=None) -> OGRErr
OGRErr OGR_L_Clip(OGRLayerH pLayerInput,
OGRLayerH pLayerMethod, OGRLayerH pLayerResult, char **papszOptions,
GDALProgressFunc pfnProgress, void *pProgressArg)
Clip off areas that are not covered by the method layer.
The result layer contains features whose geometries represent areas
that are in the input layer and in the method layer. The features in
the result layer have the (possibly clipped) areas of features in the
input layer and the attributes from the same features. The schema of
the result layer can be set by the user or, if it is empty, is
initialized to contain all fields in the input layer.
For best performance use the minimum amount of features in the method
layer and copy it into a memory layer.
This method relies on GEOS support. Do not use unless the GEOS support
        is compiled in. The recognized list of options is:
SKIP_FAILURES=YES/NO. Set it to YES to go on, even when a feature
could not be inserted or a GEOS call failed.
PROMOTE_TO_MULTI=YES/NO. Set it to YES to convert Polygons into
MultiPolygons, or LineStrings to MultiLineStrings.
INPUT_PREFIX=string. Set a prefix for the field names that will be
created from the fields of the input layer.
METHOD_PREFIX=string. Set a prefix for the field names that will be
created from the fields of the method layer.
This function is the same as the C++ method OGRLayer::Clip().
Parameters:
-----------
pLayerInput: the input layer. Should not be NULL.
pLayerMethod: the method layer. Should not be NULL.
pLayerResult: the layer where the features resulting from the
operation are inserted. Should not be NULL. See above the note about
the schema.
papszOptions: NULL terminated list of options (may be NULL).
pfnProgress: a GDALProgressFunc() compatible callback function for
reporting progress or NULL.
pProgressArg: argument to be passed to pfnProgress. May be NULL.
an error code if there was an error or the execution was interrupted,
OGRERR_NONE otherwise.
The first geometry field is always used.
OGR 1.10
"""
return _ogr.Layer_Clip(self, *args, **kwargs)
def Erase(self, *args, **kwargs):
"""
Erase(Layer self, Layer method_layer, Layer result_layer, char ** options=None, GDALProgressFunc callback=0, void * callback_data=None) -> OGRErr
OGRErr OGR_L_Erase(OGRLayerH
pLayerInput, OGRLayerH pLayerMethod, OGRLayerH pLayerResult, char
**papszOptions, GDALProgressFunc pfnProgress, void *pProgressArg)
Remove areas that are covered by the method layer.
The result layer contains features whose geometries represent areas
that are in the input layer but not in the method layer. The features
in the result layer have attributes from the input layer. The schema
of the result layer can be set by the user or, if it is empty, is
initialized to contain all fields in the input layer.
For best performance use the minimum amount of features in the method
layer and copy it into a memory layer.
This method relies on GEOS support. Do not use unless the GEOS support
        is compiled in. The recognized list of options is:
SKIP_FAILURES=YES/NO. Set it to YES to go on, even when a feature
could not be inserted or a GEOS call failed.
PROMOTE_TO_MULTI=YES/NO. Set it to YES to convert Polygons into
MultiPolygons, or LineStrings to MultiLineStrings.
INPUT_PREFIX=string. Set a prefix for the field names that will be
created from the fields of the input layer.
METHOD_PREFIX=string. Set a prefix for the field names that will be
created from the fields of the method layer.
This function is the same as the C++ method OGRLayer::Erase().
Parameters:
-----------
pLayerInput: the input layer. Should not be NULL.
pLayerMethod: the method layer. Should not be NULL.
pLayerResult: the layer where the features resulting from the
operation are inserted. Should not be NULL. See above the note about
the schema.
papszOptions: NULL terminated list of options (may be NULL).
pfnProgress: a GDALProgressFunc() compatible callback function for
reporting progress or NULL.
pProgressArg: argument to be passed to pfnProgress. May be NULL.
an error code if there was an error or the execution was interrupted,
OGRERR_NONE otherwise.
The first geometry field is always used.
OGR 1.10
"""
return _ogr.Layer_Erase(self, *args, **kwargs)
def GetStyleTable(self, *args):
"""
GetStyleTable(Layer self) -> StyleTable
OGRStyleTableH
OGR_L_GetStyleTable(OGRLayerH hLayer)
Get style table.
"""
return _ogr.Layer_GetStyleTable(self, *args)
def SetStyleTable(self, *args):
"""
SetStyleTable(Layer self, StyleTable table)
void
OGR_L_SetStyleTable(OGRLayerH hLayer, OGRStyleTableH hStyleTable)
Set style table.
"""
return _ogr.Layer_SetStyleTable(self, *args)
def Reference(self):
"For backwards compatibility only."
pass
def Dereference(self):
"For backwards compatibility only."
pass
def __len__(self):
"""Returns the number of features in the layer"""
return self.GetFeatureCount()
# To avoid __len__ being called when testing boolean value
# which can have side effects (#4758)
def __nonzero__(self):
return True
# For Python 3 compat
__bool__ = __nonzero__
    def __getitem__(self, value):
        """Support list and slice -like access to the layer.
        layer[0] would return the first feature on the layer.
        layer[0:4] would return a list of the first four features."""
        if isinstance(value, slice):
            output = []
            # slice.indices() clamps an unbounded stop (sys.maxsize) to the
            # feature count and substitutes defaults for a missing start or
            # step, so GDAL never writes an out-of-range error to stdout.
            for i in range(*value.indices(len(self))):
                feature = self.GetFeature(i)
                if feature:
                    output.append(feature)
                else:
                    return output
            return output
        if isinstance(value, int):
            if value > len(self) - 1:
                raise IndexError
            return self.GetFeature(value)
        else:
            raise TypeError("Input %s is not of IntType or SliceType" % type(value))
def CreateFields(self, fields):
"""Create a list of fields on the Layer"""
for i in fields:
self.CreateField(i)
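    # Hedged usage sketch: building a schema from a list of FieldDefn
    # objects; the field names and types below are illustrative:
    #
    #   fields = [ogr.FieldDefn("name", ogr.OFTString),
    #             ogr.FieldDefn("population", ogr.OFTInteger64)]
    #   lyr.CreateFields(fields)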
def __iter__(self):
return self
    def next(self):
        feature = self.GetNextFeature()
        if not feature:
            raise StopIteration
        else:
            return feature
    # For Python 3 compat
    __next__ = next
def schema(self):
output = []
defn = self.GetLayerDefn()
for n in range(defn.GetFieldCount()):
output.append(defn.GetFieldDefn(n))
return output
schema = property(schema)
Layer_swigregister = _ogr.Layer_swigregister
Layer_swigregister(Layer)
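# Hedged usage sketch of the Pythonic Layer conveniences defined above
# (__len__, __getitem__, __iter__ and the `schema` property); the file
# name is hypothetical:
#
#   from osgeo import ogr
#   lyr = ogr.Open("example.shp").GetLayer(0)
#   print(len(lyr))                       # feature count
#   first = lyr[0]                        # feature with FID 0, if present
#   names = [fd.GetName() for fd in lyr.schema]
#   for feat in lyr:                      # uses GetNextFeature() internally
#       pass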
class Feature(_object):
"""Proxy of C++ OGRFeatureShadow class."""
__swig_setmethods__ = {}
__setattr__ = lambda self, name, value: _swig_setattr(self, Feature, name, value)
__swig_getmethods__ = {}
__getattr__ = lambda self, name: _swig_getattr(self, Feature, name)
__repr__ = _swig_repr
__swig_destroy__ = _ogr.delete_Feature
__del__ = lambda self: None
def __init__(self, *args, **kwargs):
"""__init__(OGRFeatureShadow self, FeatureDefn feature_def) -> Feature"""
this = _ogr.new_Feature(*args, **kwargs)
try:
self.this.append(this)
except Exception:
self.this = this
def GetDefnRef(self, *args):
"""
GetDefnRef(Feature self) -> FeatureDefn
OGRFeatureDefnH
OGR_F_GetDefnRef(OGRFeatureH hFeat)
Fetch feature definition.
This function is the same as the C++ method OGRFeature::GetDefnRef().
Parameters:
-----------
hFeat: handle to the feature to get the feature definition from.
        a handle to the feature definition object on which the feature depends.
"""
return _ogr.Feature_GetDefnRef(self, *args)
def SetGeometry(self, *args):
"""
SetGeometry(Feature self, Geometry geom) -> OGRErr
OGRErr
OGR_F_SetGeometry(OGRFeatureH hFeat, OGRGeometryH hGeom)
Set feature geometry.
        This function updates the feature's geometry, and operates exactly as
        SetGeometryDirectly(), except that this function does not assume
        ownership of the passed geometry, but instead makes a copy of it.
        This function is the same as the C++ method OGRFeature::SetGeometry().
This method has only an effect on the in-memory feature object. If
this object comes from a layer and the modifications must be
serialized back to the datasource, OGR_L_SetFeature() must be used
afterwards. Or if this is a new feature, OGR_L_CreateFeature() must be
used afterwards.
Parameters:
-----------
hFeat: handle to the feature on which new geometry is applied to.
hGeom: handle to the new geometry to apply to feature.
OGRERR_NONE if successful, or OGR_UNSUPPORTED_GEOMETRY_TYPE if the
geometry type is illegal for the OGRFeatureDefn (checking not yet
implemented).
"""
return _ogr.Feature_SetGeometry(self, *args)
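    # Hedged usage sketch: assigning a copied geometry and persisting the
    # change back to the layer, as the note above requires:
    #
    #   geom = ogr.CreateGeometryFromWkt("POINT (1 2)")
    #   feat.SetGeometry(geom)   # feature keeps a copy; geom still owned here
    #   lyr.SetFeature(feat)     # serialize the change back to the datasource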
def SetGeometryDirectly(self, *args):
"""
SetGeometryDirectly(Feature self, Geometry geom) -> OGRErr
OGRErr
OGR_F_SetGeometryDirectly(OGRFeatureH hFeat, OGRGeometryH hGeom)
Set feature geometry.
        This function updates the feature's geometry, and operates exactly as
        SetGeometry(), except that this function assumes ownership of the
        passed geometry (even in case of failure of that function).
        This function is the same as the C++ method
        OGRFeature::SetGeometryDirectly().
This method has only an effect on the in-memory feature object. If
this object comes from a layer and the modifications must be
serialized back to the datasource, OGR_L_SetFeature() must be used
afterwards. Or if this is a new feature, OGR_L_CreateFeature() must be
used afterwards.
Parameters:
-----------
hFeat: handle to the feature on which to apply the geometry.
hGeom: handle to the new geometry to apply to feature.
OGRERR_NONE if successful, or OGR_UNSUPPORTED_GEOMETRY_TYPE if the
geometry type is illegal for the OGRFeatureDefn (checking not yet
implemented).
"""
return _ogr.Feature_SetGeometryDirectly(self, *args)
def GetGeometryRef(self, *args):
"""
GetGeometryRef(Feature self) -> Geometry
OGRGeometryH
OGR_F_GetGeometryRef(OGRFeatureH hFeat)
        Fetch a handle to the feature geometry.
This function is essentially the same as the C++ method
OGRFeature::GetGeometryRef() (the only difference is that this C
function honours OGRGetNonLinearGeometriesEnabledFlag())
Parameters:
-----------
hFeat: handle to the feature to get geometry from.
        a handle to the internal feature geometry. This object should not be
modified.
"""
return _ogr.Feature_GetGeometryRef(self, *args)
def SetGeomField(self, *args):
"""
SetGeomField(Feature self, int iField, Geometry geom) -> OGRErr
SetGeomField(Feature self, char const * field_name, Geometry geom) -> OGRErr
OGRErr
OGR_F_SetGeomField(OGRFeatureH hFeat, int iField, OGRGeometryH hGeom)
Set feature geometry of a specified geometry field.
        This function updates the feature's geometry, and operates exactly as
        SetGeomFieldDirectly(), except that this function does not assume
        ownership of the passed geometry, but instead makes a copy of it.
        This function is the same as the C++ method OGRFeature::SetGeomField().
Parameters:
-----------
hFeat: handle to the feature on which new geometry is applied to.
iField: geometry field to set.
hGeom: handle to the new geometry to apply to feature.
OGRERR_NONE if successful, or OGR_UNSUPPORTED_GEOMETRY_TYPE if the
geometry type is illegal for the OGRFeatureDefn (checking not yet
implemented).
"""
return _ogr.Feature_SetGeomField(self, *args)
def SetGeomFieldDirectly(self, *args):
"""
SetGeomFieldDirectly(Feature self, int iField, Geometry geom) -> OGRErr
SetGeomFieldDirectly(Feature self, char const * field_name, Geometry geom) -> OGRErr
OGRErr
OGR_F_SetGeomFieldDirectly(OGRFeatureH hFeat, int iField, OGRGeometryH
hGeom)
Set feature geometry of a specified geometry field.
        This function updates the feature's geometry, and operates exactly as
        SetGeomField(), except that this function assumes ownership of the
        passed geometry (even in case of failure of that function).
        This function is the same as the C++ method
        OGRFeature::SetGeomFieldDirectly().
Parameters:
-----------
hFeat: handle to the feature on which to apply the geometry.
iField: geometry field to set.
hGeom: handle to the new geometry to apply to feature.
OGRERR_NONE if successful, or OGRERR_FAILURE if the index is invalid,
or OGR_UNSUPPORTED_GEOMETRY_TYPE if the geometry type is illegal for
the OGRFeatureDefn (checking not yet implemented).
GDAL 1.11
"""
return _ogr.Feature_SetGeomFieldDirectly(self, *args)
def GetGeomFieldRef(self, *args):
"""
GetGeomFieldRef(Feature self, int iField) -> Geometry
GetGeomFieldRef(Feature self, char const * field_name) -> Geometry
OGRGeometryH
OGR_F_GetGeomFieldRef(OGRFeatureH hFeat, int iField)
        Fetch a handle to the feature geometry.
This function is the same as the C++ method
OGRFeature::GetGeomFieldRef().
Parameters:
-----------
hFeat: handle to the feature to get geometry from.
iField: geometry field to get.
        a handle to the internal feature geometry. This object should not be
modified.
GDAL 1.11
"""
return _ogr.Feature_GetGeomFieldRef(self, *args)
def Clone(self, *args):
"""
Clone(Feature self) -> Feature
OGRFeatureH OGR_F_Clone(OGRFeatureH
hFeat)
Duplicate feature.
        The newly created feature is owned by the caller, and will have its
        own reference to the OGRFeatureDefn.
This function is the same as the C++ method OGRFeature::Clone().
Parameters:
-----------
hFeat: handle to the feature to clone.
        a handle to the new feature, exactly matching this feature.
"""
return _ogr.Feature_Clone(self, *args)
def Equal(self, *args):
"""
Equal(Feature self, Feature feature) -> bool
int OGR_F_Equal(OGRFeatureH hFeat,
OGRFeatureH hOtherFeat)
Test if two features are the same.
        Two features are considered equal if they share the same
        OGRFeatureDefn (handle equality), have the same field values, and the
        same geometry (as tested by OGR_G_Equal()), as well as the same
        feature id.
This function is the same as the C++ method OGRFeature::Equal().
Parameters:
-----------
hFeat: handle to one of the feature.
hOtherFeat: handle to the other feature to test this one against.
TRUE if they are equal, otherwise FALSE.
"""
return _ogr.Feature_Equal(self, *args)
def GetFieldCount(self, *args):
"""
GetFieldCount(Feature self) -> int
int
OGR_F_GetFieldCount(OGRFeatureH hFeat)
        Fetch the number of fields on this feature. This will always be the same as
        the field count for the OGRFeatureDefn.
This function is the same as the C++ method
OGRFeature::GetFieldCount().
Parameters:
-----------
hFeat: handle to the feature to get the fields count from.
count of fields.
"""
return _ogr.Feature_GetFieldCount(self, *args)
def GetFieldDefnRef(self, *args):
"""
GetFieldDefnRef(Feature self, int id) -> FieldDefn
GetFieldDefnRef(Feature self, char const * field_name) -> FieldDefn
OGRFieldDefnH
OGR_F_GetFieldDefnRef(OGRFeatureH hFeat, int i)
Fetch definition for this field.
This function is the same as the C++ method
OGRFeature::GetFieldDefnRef().
Parameters:
-----------
hFeat: handle to the feature on which the field is found.
i: the field to fetch, from 0 to GetFieldCount()-1.
        a handle to the field definition (from the OGRFeatureDefn). This is
an internal reference, and should not be deleted or modified.
"""
return _ogr.Feature_GetFieldDefnRef(self, *args)
def GetGeomFieldCount(self, *args):
"""
GetGeomFieldCount(Feature self) -> int
int
OGR_F_GetGeomFieldCount(OGRFeatureH hFeat)
        Fetch the number of geometry fields on this feature. This will always be
        the same as the geometry field count for the OGRFeatureDefn.
This function is the same as the C++ method
OGRFeature::GetGeomFieldCount().
Parameters:
-----------
hFeat: handle to the feature to get the geometry fields count from.
count of geometry fields.
GDAL 1.11
"""
return _ogr.Feature_GetGeomFieldCount(self, *args)
def GetGeomFieldDefnRef(self, *args):
"""
GetGeomFieldDefnRef(Feature self, int id) -> GeomFieldDefn
GetGeomFieldDefnRef(Feature self, char const * field_name) -> GeomFieldDefn
OGRGeomFieldDefnH
OGR_F_GetGeomFieldDefnRef(OGRFeatureH hFeat, int i)
Fetch definition for this geometry field.
This function is the same as the C++ method
OGRFeature::GetGeomFieldDefnRef().
Parameters:
-----------
hFeat: handle to the feature on which the field is found.
i: the field to fetch, from 0 to GetGeomFieldCount()-1.
        a handle to the field definition (from the OGRFeatureDefn). This is
an internal reference, and should not be deleted or modified.
GDAL 1.11
"""
return _ogr.Feature_GetGeomFieldDefnRef(self, *args)
def GetFieldAsString(self, *args):
"""
        GetFieldAsString(Feature self, int id) -> char const *
GetFieldAsString(Feature self, char const * field_name) -> char const *
const char*
OGR_F_GetFieldAsString(OGRFeatureH hFeat, int iField)
Fetch field value as a string.
OFTReal and OFTInteger fields will be translated to string using
sprintf(), but not necessarily using the established formatting rules.
Other field types, or errors will result in a return value of zero.
This function is the same as the C++ method
OGRFeature::GetFieldAsString().
Parameters:
-----------
hFeat: handle to the feature that owned the field.
iField: the field to fetch, from 0 to GetFieldCount()-1.
the field value. This string is internal, and should not be modified,
or freed. Its lifetime may be very brief.
"""
return _ogr.Feature_GetFieldAsString(self, *args)
def GetFieldAsInteger(self, *args):
"""
GetFieldAsInteger(Feature self, int id) -> int
GetFieldAsInteger(Feature self, char const * field_name) -> int
int
OGR_F_GetFieldAsInteger(OGRFeatureH hFeat, int iField)
Fetch field value as integer.
OFTString features will be translated using atoi(). OFTReal fields
will be cast to integer. Other field types, or errors will result in a
return value of zero.
This function is the same as the C++ method
OGRFeature::GetFieldAsInteger().
Parameters:
-----------
hFeat: handle to the feature that owned the field.
iField: the field to fetch, from 0 to GetFieldCount()-1.
the field value.
"""
return _ogr.Feature_GetFieldAsInteger(self, *args)
def GetFieldAsInteger64(self, *args):
"""
GetFieldAsInteger64(Feature self, int id) -> GIntBig
GetFieldAsInteger64(Feature self, char const * field_name) -> GIntBig
GIntBig
OGR_F_GetFieldAsInteger64(OGRFeatureH hFeat, int iField)
        Fetch field value as a 64 bit integer.
        OFTInteger fields are promoted to 64 bit. OFTString features will be
translated using CPLAtoGIntBig(). OFTReal fields will be cast to
integer. Other field types, or errors will result in a return value of
zero.
This function is the same as the C++ method
OGRFeature::GetFieldAsInteger64().
Parameters:
-----------
hFeat: handle to the feature that owned the field.
iField: the field to fetch, from 0 to GetFieldCount()-1.
the field value.
GDAL 2.0
"""
return _ogr.Feature_GetFieldAsInteger64(self, *args)
def GetFieldAsDouble(self, *args):
"""
GetFieldAsDouble(Feature self, int id) -> double
GetFieldAsDouble(Feature self, char const * field_name) -> double
double
OGR_F_GetFieldAsDouble(OGRFeatureH hFeat, int iField)
Fetch field value as a double.
OFTString features will be translated using CPLAtof(). OFTInteger
fields will be cast to double. Other field types, or errors will
result in a return value of zero.
This function is the same as the C++ method
OGRFeature::GetFieldAsDouble().
Parameters:
-----------
hFeat: handle to the feature that owned the field.
iField: the field to fetch, from 0 to GetFieldCount()-1.
the field value.
"""
return _ogr.Feature_GetFieldAsDouble(self, *args)
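    # Hedged usage sketch of the typed getters above; the field names are
    # illustrative and assume a feature `feat` read from a layer. Each
    # getter accepts either a field index or a field name:
    #
    #   name = feat.GetFieldAsString("name")
    #   pop = feat.GetFieldAsInteger64("population")
    #   area = feat.GetFieldAsDouble(feat.GetFieldIndex("area"))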
def GetFieldAsDateTime(self, *args):
"""
GetFieldAsDateTime(Feature self, int id)
GetFieldAsDateTime(Feature self, char const * field_name)
int
OGR_F_GetFieldAsDateTime(OGRFeatureH hFeat, int iField, int *pnYear,
int *pnMonth, int *pnDay, int *pnHour, int *pnMinute, int *pnSecond,
int *pnTZFlag)
Fetch field value as date and time.
Currently this method only works for OFTDate, OFTTime and OFTDateTime
fields.
This function is the same as the C++ method
OGRFeature::GetFieldAsDateTime().
Parameters:
-----------
hFeat: handle to the feature that owned the field.
iField: the field to fetch, from 0 to GetFieldCount()-1.
pnYear: (including century)
pnMonth: (1-12)
pnDay: (1-31)
pnHour: (0-23)
pnMinute: (0-59)
pnSecond: (0-59)
pnTZFlag: (0=unknown, 1=localtime, 100=GMT, see data model for
details)
TRUE on success or FALSE on failure.
        See: Use OGR_F_GetFieldAsDateTimeEx() for seconds with millisecond
        accuracy.
"""
return _ogr.Feature_GetFieldAsDateTime(self, *args)
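    # Hedged usage sketch: in the Python bindings the output parameters of
    # the C function are returned as a sequence, roughly
    # [year, month, day, hour, minute, second, tzflag] (exact element types
    # may vary by GDAL version); the field name "updated" is illustrative:
    #
    #   dt = feat.GetFieldAsDateTime("updated")
    #   year, month, day = dt[0], dt[1], dt[2]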
def GetFieldAsIntegerList(self, *args):
"""
GetFieldAsIntegerList(Feature self, int id)
GetFieldAsIntegerList(Feature self, char const * field_name)
const int*
OGR_F_GetFieldAsIntegerList(OGRFeatureH hFeat, int iField, int
*pnCount)
Fetch field value as a list of integers.
Currently this function only works for OFTIntegerList fields.
This function is the same as the C++ method
OGRFeature::GetFieldAsIntegerList().
Parameters:
-----------
hFeat: handle to the feature that owned the field.
iField: the field to fetch, from 0 to GetFieldCount()-1.
pnCount: an integer to put the list count (number of integers) into.
the field value. This list is internal, and should not be modified, or
freed. Its lifetime may be very brief. If *pnCount is zero on return
the returned pointer may be NULL or non-NULL.
"""
return _ogr.Feature_GetFieldAsIntegerList(self, *args)
def GetFieldAsInteger64List(self, *args):
"""
GetFieldAsInteger64List(Feature self, int id)
const GIntBig*
OGR_F_GetFieldAsInteger64List(OGRFeatureH hFeat, int iField, int
*pnCount)
Fetch field value as a list of 64 bit integers.
Currently this function only works for OFTInteger64List fields.
This function is the same as the C++ method
OGRFeature::GetFieldAsInteger64List().
Parameters:
-----------
hFeat: handle to the feature that owned the field.
iField: the field to fetch, from 0 to GetFieldCount()-1.
pnCount: an integer to put the list count (number of integers) into.
the field value. This list is internal, and should not be modified, or
freed. Its lifetime may be very brief. If *pnCount is zero on return
the returned pointer may be NULL or non-NULL.
GDAL 2.0
"""
return _ogr.Feature_GetFieldAsInteger64List(self, *args)
def GetFieldAsDoubleList(self, *args):
"""
GetFieldAsDoubleList(Feature self, int id)
GetFieldAsDoubleList(Feature self, char const * field_name)
const double*
OGR_F_GetFieldAsDoubleList(OGRFeatureH hFeat, int iField, int
*pnCount)
Fetch field value as a list of doubles.
Currently this function only works for OFTRealList fields.
This function is the same as the C++ method
OGRFeature::GetFieldAsDoubleList().
Parameters:
-----------
hFeat: handle to the feature that owned the field.
iField: the field to fetch, from 0 to GetFieldCount()-1.
pnCount: an integer to put the list count (number of doubles) into.
the field value. This list is internal, and should not be modified, or
freed. Its lifetime may be very brief. If *pnCount is zero on return
the returned pointer may be NULL or non-NULL.
"""
return _ogr.Feature_GetFieldAsDoubleList(self, *args)
def GetFieldAsStringList(self, *args):
"""
GetFieldAsStringList(Feature self, int id) -> char **
char**
OGR_F_GetFieldAsStringList(OGRFeatureH hFeat, int iField)
Fetch field value as a list of strings.
Currently this method only works for OFTStringList fields.
The returned list is terminated by a NULL pointer. The number of
elements can also be calculated using CSLCount().
This function is the same as the C++ method
OGRFeature::GetFieldAsStringList().
Parameters:
-----------
hFeat: handle to the feature that owned the field.
iField: the field to fetch, from 0 to GetFieldCount()-1.
the field value. This list is internal, and should not be modified, or
freed. Its lifetime may be very brief.
"""
return _ogr.Feature_GetFieldAsStringList(self, *args)
def GetFieldAsBinary(self, *args):
"""
GetFieldAsBinary(Feature self, int id) -> OGRErr
GetFieldAsBinary(Feature self, char const * field_name) -> OGRErr
GByte*
OGR_F_GetFieldAsBinary(OGRFeatureH hFeat, int iField, int *pnBytes)
Fetch field value as binary.
This method only works for OFTBinary and OFTString fields.
This function is the same as the C++ method
OGRFeature::GetFieldAsBinary().
Parameters:
-----------
hFeat: handle to the feature that owned the field.
iField: the field to fetch, from 0 to GetFieldCount()-1.
pnBytes: location to place count of bytes returned.
the field value. This list is internal, and should not be modified, or
freed. Its lifetime may be very brief.
"""
return _ogr.Feature_GetFieldAsBinary(self, *args)
def IsFieldSet(self, *args):
"""
IsFieldSet(Feature self, int id) -> bool
IsFieldSet(Feature self, char const * field_name) -> bool
int OGR_F_IsFieldSet(OGRFeatureH
hFeat, int iField)
Test if a field has ever been assigned a value or not.
This function is the same as the C++ method OGRFeature::IsFieldSet().
Parameters:
-----------
hFeat: handle to the feature on which the field is.
iField: the field to test.
        TRUE if the field has been set, otherwise FALSE.
"""
return _ogr.Feature_IsFieldSet(self, *args)
def IsFieldNull(self, *args):
"""
IsFieldNull(Feature self, int id) -> bool
IsFieldNull(Feature self, char const * field_name) -> bool
int OGR_F_IsFieldNull(OGRFeatureH
hFeat, int iField)
Test if a field is null.
This function is the same as the C++ method OGRFeature::IsFieldNull().
Parameters:
-----------
hFeat: handle to the feature on which the field is.
iField: the field to test.
        TRUE if the field is null, otherwise FALSE.
GDAL 2.2
"""
return _ogr.Feature_IsFieldNull(self, *args)
def IsFieldSetAndNotNull(self, *args):
"""
IsFieldSetAndNotNull(Feature self, int id) -> bool
IsFieldSetAndNotNull(Feature self, char const * field_name) -> bool
int
OGR_F_IsFieldSetAndNotNull(OGRFeatureH hFeat, int iField)
Test if a field is set and not null.
This function is the same as the C++ method
OGRFeature::IsFieldSetAndNotNull().
Parameters:
-----------
hFeat: handle to the feature on which the field is.
iField: the field to test.
        TRUE if the field is set and not null, otherwise FALSE.
GDAL 2.2
"""
return _ogr.Feature_IsFieldSetAndNotNull(self, *args)
def GetFieldIndex(self, *args):
"""
GetFieldIndex(Feature self, char const * field_name) -> int
int
OGR_F_GetFieldIndex(OGRFeatureH hFeat, const char *pszName)
Fetch the field index given field name.
This is a cover for the OGRFeatureDefn::GetFieldIndex() method.
This function is the same as the C++ method
OGRFeature::GetFieldIndex().
Parameters:
-----------
hFeat: handle to the feature on which the field is found.
pszName: the name of the field to search for.
the field index, or -1 if no matching field is found.
"""
return _ogr.Feature_GetFieldIndex(self, *args)
def GetGeomFieldIndex(self, *args):
"""
GetGeomFieldIndex(Feature self, char const * field_name) -> int
int
OGR_F_GetGeomFieldIndex(OGRFeatureH hFeat, const char *pszName)
Fetch the geometry field index given geometry field name.
This is a cover for the OGRFeatureDefn::GetGeomFieldIndex() method.
This function is the same as the C++ method
OGRFeature::GetGeomFieldIndex().
Parameters:
-----------
hFeat: handle to the feature on which the geometry field is found.
pszName: the name of the geometry field to search for.
the geometry field index, or -1 if no matching geometry field is
found.
GDAL 1.11
"""
return _ogr.Feature_GetGeomFieldIndex(self, *args)
def GetFID(self, *args):
"""
GetFID(Feature self) -> GIntBig
GIntBig OGR_F_GetFID(OGRFeatureH
hFeat)
Get feature identifier.
This function is the same as the C++ method OGRFeature::GetFID().
Note: since GDAL 2.0, this method returns a GIntBig (previously a
long)
Parameters:
-----------
hFeat: handle to the feature from which to get the feature
identifier.
feature id or OGRNullFID if none has been assigned.
"""
return _ogr.Feature_GetFID(self, *args)
def SetFID(self, *args):
"""
SetFID(Feature self, GIntBig fid) -> OGRErr
OGRErr OGR_F_SetFID(OGRFeatureH hFeat,
GIntBig nFID)
Set the feature identifier.
        For specific types of features this operation may fail on illegal
        feature ids. Generally it always succeeds. Feature ids should be
greater than or equal to zero, with the exception of OGRNullFID (-1)
indicating that the feature id is unknown.
This function is the same as the C++ method OGRFeature::SetFID().
Parameters:
-----------
hFeat: handle to the feature to set the feature id to.
nFID: the new feature identifier value to assign.
On success OGRERR_NONE, or on failure some other value.
"""
return _ogr.Feature_SetFID(self, *args)
def DumpReadable(self, *args):
"""
DumpReadable(Feature self)
void
OGR_F_DumpReadable(OGRFeatureH hFeat, FILE *fpOut)
Dump this feature in a human readable form.
        This dumps the attributes and geometry; however, it doesn't dump
        definition information (other than field types and names), nor does it
        report the geometry's spatial reference system.
This function is the same as the C++ method
OGRFeature::DumpReadable().
Parameters:
-----------
hFeat: handle to the feature to dump.
        fpOut: the stream to write to, such as stdout.
"""
return _ogr.Feature_DumpReadable(self, *args)
def UnsetField(self, *args):
"""
UnsetField(Feature self, int id)
UnsetField(Feature self, char const * field_name)
void OGR_F_UnsetField(OGRFeatureH
hFeat, int iField)
Clear a field, marking it as unset.
This function is the same as the C++ method OGRFeature::UnsetField().
Parameters:
-----------
hFeat: handle to the feature on which the field is.
iField: the field to unset.
"""
return _ogr.Feature_UnsetField(self, *args)
def SetFieldNull(self, *args):
"""
SetFieldNull(Feature self, int id)
SetFieldNull(Feature self, char const * field_name)
void
OGR_F_SetFieldNull(OGRFeatureH hFeat, int iField)
Clear a field, marking it as null.
This function is the same as the C++ method
OGRFeature::SetFieldNull().
Parameters:
-----------
hFeat: handle to the feature on which the field is.
iField: the field to set to null.
GDAL 2.2
"""
return _ogr.Feature_SetFieldNull(self, *args)
def SetFieldInteger64(self, *args):
"""
SetFieldInteger64(Feature self, int id, GIntBig value)
void
OGR_F_SetFieldInteger64(OGRFeatureH hFeat, int iField, GIntBig nValue)
Set field to 64 bit integer value.
OFTInteger, OFTInteger64 and OFTReal fields will be set directly.
OFTString fields will be assigned a string representation of the
value, but not necessarily taking into account formatting constraints
on this field. Other field types may be unaffected.
This function is the same as the C++ method OGRFeature::SetField().
This method has only an effect on the in-memory feature object. If
this object comes from a layer and the modifications must be
serialized back to the datasource, OGR_L_SetFeature() must be used
afterwards. Or if this is a new feature, OGR_L_CreateFeature() must be
used afterwards.
Parameters:
-----------
hFeat: handle to the feature that owned the field.
iField: the field to fetch, from 0 to GetFieldCount()-1.
nValue: the value to assign.
GDAL 2.0
"""
return _ogr.Feature_SetFieldInteger64(self, *args)
def SetField(self, *args):
"""
SetField(Feature self, int id, char const * value)
SetField(Feature self, char const * field_name, char const * value)
SetField(Feature self, int id, double value)
SetField(Feature self, char const * field_name, double value)
SetField(Feature self, int id, int year, int month, int day, int hour, int minute, float second, int tzflag)
SetField(Feature self, char const * field_name, int year, int month, int day, int hour, int minute, float second, int tzflag)
"""
return _ogr.Feature_SetField(self, *args)
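    # Hedged usage sketch of the SetField overloads listed above; the field
    # names are illustrative, and as noted elsewhere the change only affects
    # the in-memory feature until lyr.SetFeature() / lyr.CreateFeature():
    #
    #   feat.SetField("name", "Springfield")                  # string overload
    #   feat.SetField("area", 12.5)                           # double overload
    #   feat.SetField("updated", 2020, 1, 31, 12, 0, 0.0, 0)  # date/time overload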
def SetFieldIntegerList(self, *args):
"""
SetFieldIntegerList(Feature self, int id, int nList)
void
OGR_F_SetFieldIntegerList(OGRFeatureH hFeat, int iField, int nCount,
int *panValues)
Set field to list of integers value.
        This function currently only has an effect on OFTIntegerList,
        OFTInteger64List and OFTRealList fields.
This function is the same as the C++ method OGRFeature::SetField().
This method has only an effect on the in-memory feature object. If
this object comes from a layer and the modifications must be
serialized back to the datasource, OGR_L_SetFeature() must be used
afterwards. Or if this is a new feature, OGR_L_CreateFeature() must be
used afterwards.
Parameters:
-----------
hFeat: handle to the feature that owned the field.
iField: the field to set, from 0 to GetFieldCount()-1.
nCount: the number of values in the list being assigned.
panValues: the values to assign.
"""
return _ogr.Feature_SetFieldIntegerList(self, *args)
def SetFieldInteger64List(self, *args):
"""
SetFieldInteger64List(Feature self, int id, int nList)
void
OGR_F_SetFieldInteger64List(OGRFeatureH hFeat, int iField, int nCount,
const GIntBig *panValues)
Set field to list of 64 bit integers value.
        This function currently only has an effect on OFTIntegerList,
        OFTInteger64List and OFTRealList fields.
This function is the same as the C++ method OGRFeature::SetField().
This method has only an effect on the in-memory feature object. If
this object comes from a layer and the modifications must be
serialized back to the datasource, OGR_L_SetFeature() must be used
afterwards. Or if this is a new feature, OGR_L_CreateFeature() must be
used afterwards.
Parameters:
-----------
hFeat: handle to the feature that owned the field.
iField: the field to set, from 0 to GetFieldCount()-1.
nCount: the number of values in the list being assigned.
panValues: the values to assign.
GDAL 2.0
"""
return _ogr.Feature_SetFieldInteger64List(self, *args)
def SetFieldDoubleList(self, *args):
"""
SetFieldDoubleList(Feature self, int id, int nList)
void
OGR_F_SetFieldDoubleList(OGRFeatureH hFeat, int iField, int nCount,
double *padfValues)
Set field to list of doubles value.
        This function currently only has an effect on OFTIntegerList,
        OFTInteger64List and OFTRealList fields.
This function is the same as the C++ method OGRFeature::SetField().
This method has only an effect on the in-memory feature object. If
this object comes from a layer and the modifications must be
serialized back to the datasource, OGR_L_SetFeature() must be used
afterwards. Or if this is a new feature, OGR_L_CreateFeature() must be
used afterwards.
Parameters:
-----------
hFeat: handle to the feature that owned the field.
iField: the field to set, from 0 to GetFieldCount()-1.
nCount: the number of values in the list being assigned.
padfValues: the values to assign.
"""
return _ogr.Feature_SetFieldDoubleList(self, *args)
def SetFieldStringList(self, *args):
"""
SetFieldStringList(Feature self, int id, char ** pList)
void
OGR_F_SetFieldStringList(OGRFeatureH hFeat, int iField, char
**papszValues)
Set field to list of strings value.
        This function currently only has an effect on OFTStringList fields.
This function is the same as the C++ method OGRFeature::SetField().
This method has only an effect on the in-memory feature object. If
this object comes from a layer and the modifications must be
serialized back to the datasource, OGR_L_SetFeature() must be used
afterwards. Or if this is a new feature, OGR_L_CreateFeature() must be
used afterwards.
Parameters:
-----------
hFeat: handle to the feature that owned the field.
iField: the field to set, from 0 to GetFieldCount()-1.
papszValues: the values to assign.
"""
return _ogr.Feature_SetFieldStringList(self, *args)
def SetFieldBinaryFromHexString(self, *args):
"""
SetFieldBinaryFromHexString(Feature self, int id, char const * pszValue)
SetFieldBinaryFromHexString(Feature self, char const * field_name, char const * pszValue)
"""
return _ogr.Feature_SetFieldBinaryFromHexString(self, *args)
def SetFrom(self, *args, **kwargs):
"""
SetFrom(Feature self, Feature other, int forgiving=1) -> OGRErr
OGRErr OGR_F_SetFrom(OGRFeatureH
hFeat, OGRFeatureH hOtherFeat, int bForgiving)
Set one feature from another.
Overwrite the contents of this feature from the geometry and
attributes of another. The hOtherFeature does not need to have the
same OGRFeatureDefn. Field values are copied by corresponding field
names. Field types do not have to exactly match. OGR_F_SetField*()
function conversion rules will be applied as needed.
This function is the same as the C++ method OGRFeature::SetFrom().
Parameters:
-----------
hFeat: handle to the feature to set to.
hOtherFeat: handle to the feature from which geometry, and field
values will be copied.
bForgiving: TRUE if the operation should continue despite lacking
output fields matching some of the source fields.
OGRERR_NONE if the operation succeeds, even if some values are not
transferred, otherwise an error code.
"""
return _ogr.Feature_SetFrom(self, *args, **kwargs)
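    # Hedged usage sketch: copying features between layers with different
    # (but name-compatible) schemas; `src_lyr` and `dst_lyr` are assumed to
    # be open layers:
    #
    #   for src in src_lyr:
    #       dst = ogr.Feature(dst_lyr.GetLayerDefn())
    #       dst.SetFrom(src, forgiving=1)   # match fields by name
    #       dst_lyr.CreateFeature(dst)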
def SetFromWithMap(self, *args):
"""
SetFromWithMap(Feature self, Feature other, int forgiving, int nList) -> OGRErr
OGRErr
OGR_F_SetFromWithMap(OGRFeatureH hFeat, OGRFeatureH hOtherFeat, int
bForgiving, int *panMap)
Set one feature from another.
Overwrite the contents of this feature from the geometry and
attributes of another. The hOtherFeature does not need to have the
same OGRFeatureDefn. Field values are copied according to the provided
indices map. Field types do not have to exactly match.
OGR_F_SetField*() function conversion rules will be applied as needed.
        This is more efficient than OGR_F_SetFrom() in that it doesn't
        look up the fields by their names. Particularly useful when the field
names don't match.
This function is the same as the C++ method OGRFeature::SetFrom().
Parameters:
-----------
hFeat: handle to the feature to set to.
hOtherFeat: handle to the feature from which geometry, and field
values will be copied.
panMap: Array of the indices of the destination feature's fields
stored at the corresponding index of the source feature's fields. A
value of -1 should be used to ignore the source's field. The array
should not be NULL and be as long as the number of fields in the
source feature.
bForgiving: TRUE if the operation should continue despite lacking
output fields matching some of the source fields.
OGRERR_NONE if the operation succeeds, even if some values are not
transferred, otherwise an error code.
"""
return _ogr.Feature_SetFromWithMap(self, *args)
def GetStyleString(self, *args):
"""
GetStyleString(Feature self) -> char const *
const char*
OGR_F_GetStyleString(OGRFeatureH hFeat)
Fetch style string for this feature.
        See the OGR Feature Style Specification for details on the format of
this string, and ogr_featurestyle.h for services available to parse
it.
This function is the same as the C++ method
OGRFeature::GetStyleString().
Parameters:
-----------
hFeat: handle to the feature to get the style from.
a reference to a representation in string format, or NULL if there
isn't one.
"""
return _ogr.Feature_GetStyleString(self, *args)
def SetStyleString(self, *args):
"""
SetStyleString(Feature self, char const * the_string)
void
OGR_F_SetStyleString(OGRFeatureH hFeat, const char *pszStyle)
Set feature style string.
        This method operates exactly as OGR_F_SetStyleStringDirectly() except
that it does not assume ownership of the passed string, but instead
makes a copy of it.
This function is the same as the C++ method
OGRFeature::SetStyleString().
Parameters:
-----------
hFeat: handle to the feature to set style to.
pszStyle: the style string to apply to this feature, cannot be NULL.
"""
return _ogr.Feature_SetStyleString(self, *args)
def GetFieldType(self, *args):
"""
GetFieldType(Feature self, int id) -> OGRFieldType
GetFieldType(Feature self, char const * field_name) -> OGRFieldType
"""
return _ogr.Feature_GetFieldType(self, *args)
def Validate(self, *args):
"""
Validate(Feature self, int flags, int bEmitError=True) -> int
int OGR_F_Validate(OGRFeatureH
hFeat, int nValidateFlags, int bEmitError)
Validate that a feature meets constraints of its schema.
The scope of test is specified with the nValidateFlags parameter.
Regarding OGR_F_VAL_WIDTH, the test is done assuming the string width
must be interpreted as the number of UTF-8 characters. Some drivers
might interpret the width as the number of bytes instead. So this test
is rather conservative (if it fails, then it will fail for all
interpretations).
This function is the same as the C++ method OGRFeature::Validate().
Parameters:
-----------
hFeat: handle to the feature to validate.
nValidateFlags: OGR_F_VAL_ALL or combination of OGR_F_VAL_NULL,
OGR_F_VAL_GEOM_TYPE, OGR_F_VAL_WIDTH and
OGR_F_VAL_ALLOW_NULL_WHEN_DEFAULT with '|' operator
bEmitError: TRUE if a CPLError() must be emitted when a check fails
TRUE if all enabled validation tests pass.
GDAL 2.0
"""
return _ogr.Feature_Validate(self, *args)
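    # Hedged usage sketch: validating a feature against its schema before
    # writing it out; the flag constant names are assumed to be exposed on
    # the ogr module as in recent GDAL releases (e.g. ogr.F_VAL_ALL).
    # Arguments are positional because this wrapper takes *args:
    #
    #   if not feat.Validate(ogr.F_VAL_ALL, 0):   # 0: don't emit CPLError
    #       print("feature violates its schema constraints")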
def FillUnsetWithDefault(self, *args):
"""
FillUnsetWithDefault(Feature self, int bNotNullableOnly=False, char ** options=None)
void
OGR_F_FillUnsetWithDefault(OGRFeatureH hFeat, int bNotNullableOnly,
char **papszOptions)
Fill unset fields with default values that might be defined.
This function is the same as the C++ method
OGRFeature::FillUnsetWithDefault().
Parameters:
-----------
hFeat: handle to the feature.
bNotNullableOnly: if we should fill only unset fields with a not-null
constraint.
papszOptions: unused currently. Must be set to NULL.
GDAL 2.0
"""
return _ogr.Feature_FillUnsetWithDefault(self, *args)
def GetNativeData(self, *args):
"""
GetNativeData(Feature self) -> char const *
const char*
OGR_F_GetNativeData(OGRFeatureH hFeat)
Returns the native data for the feature.
The native data is the representation in a "natural" form that comes
from the driver that created this feature, or that is aimed at an
        output driver. The native data may be in a different format, which is
indicated by OGR_F_GetNativeMediaType().
Note that most drivers do not support storing the native data in the
feature object, and if they do, generally the NATIVE_DATA open option
must be passed at dataset opening.
The "native data" does not imply it is something more performant or
powerful than what can be obtained with the rest of the API, but it
may be useful in round-tripping scenarios where some characteristics
of the underlying format are not captured otherwise by the OGR
abstraction.
This function is the same as the C++ method
OGRFeature::GetNativeData().
Parameters:
-----------
hFeat: handle to the feature.
a string with the native data, or NULL if there is none.
GDAL 2.1
See:
https://trac.osgeo.org/gdal/wiki/rfc60_improved_roundtripping_in_ogr
"""
return _ogr.Feature_GetNativeData(self, *args)
def GetNativeMediaType(self, *args):
"""
GetNativeMediaType(Feature self) -> char const *
const char*
OGR_F_GetNativeMediaType(OGRFeatureH hFeat)
Returns the native media type for the feature.
The native media type is the identifier for the format of the native
data. It follows the IANA RFC 2045
        (see https://en.wikipedia.org/wiki/Media_type), e.g.
        "application/vnd.geo+json" for JSON.
        This function is the same as the C++ method
        OGRFeature::GetNativeMediaType().
Parameters:
-----------
hFeat: handle to the feature.
a string with the native media type, or NULL if there is none.
GDAL 2.1
See:
https://trac.osgeo.org/gdal/wiki/rfc60_improved_roundtripping_in_ogr
"""
return _ogr.Feature_GetNativeMediaType(self, *args)
def SetNativeData(self, *args):
"""
SetNativeData(Feature self, char const * nativeData)
void
OGR_F_SetNativeData(OGRFeatureH hFeat, const char *pszNativeData)
Sets the native data for the feature.
The native data is the representation in a "natural" form that comes
from the driver that created this feature, or that is aimed at an
        output driver. The native data may be in a different format, which is
indicated by OGR_F_GetNativeMediaType().
This function is the same as the C++ method
OGRFeature::SetNativeData().
Parameters:
-----------
hFeat: handle to the feature.
pszNativeData: a string with the native data, or NULL if there is
none.
GDAL 2.1
See:
https://trac.osgeo.org/gdal/wiki/rfc60_improved_roundtripping_in_ogr
"""
return _ogr.Feature_SetNativeData(self, *args)
def SetNativeMediaType(self, *args):
"""
SetNativeMediaType(Feature self, char const * nativeMediaType)
void
OGR_F_SetNativeMediaType(OGRFeatureH hFeat, const char
*pszNativeMediaType)
Sets the native media type for the feature.
The native media type is the identifier for the format of the native
data. It follows the IANA RFC 2045
        (see https://en.wikipedia.org/wiki/Media_type), e.g.
        "application/vnd.geo+json" for JSON.
This function is the same as the C++ method
OGRFeature::SetNativeMediaType().
Parameters:
-----------
hFeat: handle to the feature.
pszNativeMediaType: a string with the native media type, or NULL if
there is none.
GDAL 2.1
See:
https://trac.osgeo.org/gdal/wiki/rfc60_improved_roundtripping_in_ogr
"""
return _ogr.Feature_SetNativeMediaType(self, *args)
def SetFieldString(self, *args):
"""
SetFieldString(Feature self, int id, char const * value)
void
OGR_F_SetFieldString(OGRFeatureH hFeat, int iField, const char
*pszValue)
Set field to string value.
OFTInteger fields will be set based on an atoi() conversion of the
        string. OFTInteger64 fields will be set based on a CPLAtoGIntBig()
        conversion of the string. OFTReal fields will be set based on a
        CPLAtof() conversion of the string. Other field types may be
unaffected.
This function is the same as the C++ method OGRFeature::SetField().
This method has only an effect on the in-memory feature object. If
this object comes from a layer and the modifications must be
serialized back to the datasource, OGR_L_SetFeature() must be used
afterwards. Or if this is a new feature, OGR_L_CreateFeature() must be
used afterwards.
Parameters:
-----------
hFeat: handle to the feature that owned the field.
        iField: the field to set, from 0 to GetFieldCount()-1.
pszValue: the value to assign.
"""
return _ogr.Feature_SetFieldString(self, *args)
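    # Illustrative sketch of the conversion rules documented above (assumes
    # "feat" has an OFTInteger field named 'count'; the name is hypothetical):
    #     feat.SetFieldString(feat.GetFieldIndex('count'), '42')
    #     assert feat.GetFieldAsInteger('count') == 42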
def Reference(self):
pass
def Dereference(self):
pass
def Destroy(self):
"Once called, self has effectively been destroyed. Do not access. For backwards compatibility only"
_ogr.delete_Feature( self )
self.thisown = 0
def __cmp__(self, other):
"""Compares a feature to another for equality"""
return self.Equal(other)
def __copy__(self):
return self.Clone()
# This makes it possible to fetch fields in the form "feature.area".
# This has some risk of name collisions.
def __getattr__(self, key):
"""Returns the values of fields by the given name"""
if key == 'this':
return self.__dict__[key]
idx = self.GetFieldIndex(key)
if idx < 0:
idx = self.GetGeomFieldIndex(key)
if idx < 0:
raise AttributeError(key)
else:
return self.GetGeomFieldRef(idx)
else:
return self.GetField(idx)
# This makes it possible to set fields in the form "feature.area".
# This has some risk of name collisions.
def __setattr__(self, key, value):
"""Set the values of fields by the given name"""
if key == 'this' or key == 'thisown':
self.__dict__[key] = value
else:
idx = self.GetFieldIndex(key)
if idx != -1:
self.SetField2(idx,value)
else:
idx = self.GetGeomFieldIndex(key)
if idx != -1:
self.SetGeomField(idx, value)
else:
self.__dict__[key] = value
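    # Illustrative sketch of the attribute-style access implemented above
    # (assumes "feat" has a field named 'area'; the name is hypothetical):
    #     feat.area = 123.0   # dispatched to SetField2()
    #     print(feat.area)    # dispatched to GetField()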
# This makes it possible to fetch fields in the form "feature['area']".
def __getitem__(self, key):
"""Returns the values of fields by the given name / field_index"""
if isinstance(key, str) or isinstance(key, type(u'')):
fld_index = self.GetFieldIndex(key)
else:
fld_index = key
if key == self.GetFieldCount():
raise IndexError
if fld_index < 0:
if isinstance(key, str) or isinstance(key, type(u'')):
fld_index = self.GetGeomFieldIndex(key)
if fld_index < 0:
raise ValueError("Illegal field requested in GetField()")
else:
return self.GetGeomFieldRef(fld_index)
else:
return self.GetField(fld_index)
# This makes it possible to set fields in the form "feature['area'] = 123".
def __setitem__(self, key, value):
"""Returns the value of a field by field name / index"""
if isinstance(key, str) or isinstance(key, type(u'')):
fld_index = self.GetFieldIndex(key)
else:
fld_index = key
if key == self.GetFieldCount():
raise IndexError
if fld_index < 0:
if isinstance(key, str) or isinstance(key, type(u'')):
fld_index = self.GetGeomFieldIndex(key)
if fld_index < 0:
raise ValueError("Illegal field requested in SetField()")
else:
return self.SetGeomField( fld_index, value )
else:
return self.SetField2( fld_index, value )
def GetField(self, fld_index):
if isinstance(fld_index, str) or isinstance(fld_index, type(u'')):
fld_index = self.GetFieldIndex(fld_index)
if (fld_index < 0) or (fld_index > self.GetFieldCount()):
raise ValueError("Illegal field requested in GetField()")
if not (self.IsFieldSet(fld_index)) or self.IsFieldNull(fld_index):
return None
fld_type = self.GetFieldType(fld_index)
if fld_type == OFTInteger:
return self.GetFieldAsInteger(fld_index)
if fld_type == OFTInteger64:
return self.GetFieldAsInteger64(fld_index)
if fld_type == OFTReal:
return self.GetFieldAsDouble(fld_index)
if fld_type == OFTStringList:
return self.GetFieldAsStringList(fld_index)
if fld_type == OFTIntegerList:
return self.GetFieldAsIntegerList(fld_index)
if fld_type == OFTInteger64List:
return self.GetFieldAsInteger64List(fld_index)
if fld_type == OFTRealList:
return self.GetFieldAsDoubleList(fld_index)
        # if fld_type == OFTDateTime or fld_type == OFTDate or fld_type == OFTTime:
        #     return self.GetFieldAsDate(fld_index)
        # Default to returning as a string. Should we add more types?
try:
return self.GetFieldAsString(fld_index)
except:
# For Python3 on non-UTF8 strings
return self.GetFieldAsBinary(fld_index)
# With several override, SWIG cannot dispatch automatically unicode strings
# to the right implementation, so we have to do it at hand
def SetField(self, *args):
"""
SetField(self, int id, char value)
SetField(self, char name, char value)
SetField(self, int id, int value)
SetField(self, char name, int value)
SetField(self, int id, double value)
SetField(self, char name, double value)
SetField(self, int id, int year, int month, int day, int hour, int minute,
int second, int tzflag)
SetField(self, char name, int year, int month, int day, int hour,
int minute, int second, int tzflag)
"""
if len(args) == 2 and args[1] is None:
return _ogr.Feature_SetFieldNull(self, args[0])
if len(args) == 2 and (type(args[1]) == type(1) or type(args[1]) == type(12345678901234)):
fld_index = args[0]
if isinstance(fld_index, str) or isinstance(fld_index, type(u'')):
fld_index = self.GetFieldIndex(fld_index)
return _ogr.Feature_SetFieldInteger64(self, fld_index, args[1])
if len(args) == 2 and str(type(args[1])) == "<type 'unicode'>":
fld_index = args[0]
if isinstance(fld_index, str) or isinstance(fld_index, type(u'')):
fld_index = self.GetFieldIndex(fld_index)
return _ogr.Feature_SetFieldString(self, fld_index, args[1])
return _ogr.Feature_SetField(self, *args)
def SetField2(self, fld_index, value):
if isinstance(fld_index, str) or isinstance(fld_index, type(u'')):
fld_index = self.GetFieldIndex(fld_index)
if (fld_index < 0) or (fld_index > self.GetFieldCount()):
raise ValueError("Illegal field requested in SetField2()")
if value is None:
self.SetFieldNull( fld_index )
return
if isinstance(value,list):
if len(value) == 0:
self.SetFieldNull( fld_index )
return
if isinstance(value[0],type(1)) or isinstance(value[0],type(12345678901234)):
self.SetFieldInteger64List(fld_index,value)
return
elif isinstance(value[0],float):
self.SetFieldDoubleList(fld_index,value)
return
elif isinstance(value[0],str):
self.SetFieldStringList(fld_index,value)
return
else:
raise TypeError( 'Unsupported type of list in SetField2(). Type of element is %s' % str(type(value[0])) )
try:
self.SetField( fld_index, value )
except:
self.SetField( fld_index, str(value) )
return
def keys(self):
names = []
for i in range(self.GetFieldCount()):
fieldname = self.GetFieldDefnRef(i).GetName()
names.append(fieldname)
return names
def items(self):
keys = self.keys()
output = {}
for key in keys:
output[key] = self.GetField(key)
return output
def geometry(self):
return self.GetGeometryRef()
def ExportToJson(self, as_object = False, options = None):
"""Exports a GeoJSON object which represents the Feature. The
as_object parameter determines whether the returned value
should be a Python object instead of a string. Defaults to False.
The options parameter is passed to Geometry.ExportToJson()"""
try:
import simplejson
except ImportError:
try:
import json as simplejson
except ImportError:
raise ImportError("Unable to import simplejson or json, needed for ExportToJson.")
geom = self.GetGeometryRef()
if geom is not None:
if options is None:
options = []
geom_json_string = geom.ExportToJson(options = options)
geom_json_object = simplejson.loads(geom_json_string)
else:
geom_json_object = None
output = {'type':'Feature',
'geometry': geom_json_object,
'properties': {}
}
fid = self.GetFID()
if fid != NullFID:
output['id'] = fid
for key in self.keys():
fld_defn = self.GetFieldDefnRef(self.GetFieldIndex(key))
if fld_defn.GetType() == _ogr.OFTInteger and fld_defn.GetSubType() == _ogr.OFSTBoolean:
if self.GetField(key):
output['properties'][key] = True
else:
output['properties'][key] = False
else:
output['properties'][key] = self.GetField(key)
if not as_object:
output = simplejson.dumps(output)
return output
Feature_swigregister = _ogr.Feature_swigregister
Feature_swigregister(Feature)
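# A minimal end-to-end sketch of the Feature field API above (illustrative
# only; the definition name, field name and value are hypothetical):
def _example_feature_fields():
    defn = FeatureDefn('example')
    defn.AddFieldDefn(FieldDefn('name', OFTString))
    feat = Feature(defn)
    feat['name'] = 'hello'            # __setitem__ -> SetField2()
    assert feat.GetField('name') == 'hello'
    assert feat.keys() == ['name']
    return feat.ExportToJson()        # GeoJSON string for this feature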
class FeatureDefn(_object):
"""Proxy of C++ OGRFeatureDefnShadow class."""
__swig_setmethods__ = {}
__setattr__ = lambda self, name, value: _swig_setattr(self, FeatureDefn, name, value)
__swig_getmethods__ = {}
__getattr__ = lambda self, name: _swig_getattr(self, FeatureDefn, name)
__repr__ = _swig_repr
__swig_destroy__ = _ogr.delete_FeatureDefn
__del__ = lambda self: None
def __init__(self, *args, **kwargs):
"""__init__(OGRFeatureDefnShadow self, char const * name_null_ok=None) -> FeatureDefn"""
this = _ogr.new_FeatureDefn(*args, **kwargs)
try:
self.this.append(this)
except Exception:
self.this = this
def GetName(self, *args):
"""
GetName(FeatureDefn self) -> char const *
const char*
OGR_FD_GetName(OGRFeatureDefnH hDefn)
Get name of the OGRFeatureDefn passed as an argument.
This function is the same as the C++ method OGRFeatureDefn::GetName().
Parameters:
-----------
hDefn: handle to the feature definition to get the name from.
        the name. This name is internal and should not be modified or freed.
"""
return _ogr.FeatureDefn_GetName(self, *args)
def GetFieldCount(self, *args):
"""
GetFieldCount(FeatureDefn self) -> int
int
OGR_FD_GetFieldCount(OGRFeatureDefnH hDefn)
Fetch number of fields on the passed feature definition.
        This function is the same as the C++ method
        OGRFeatureDefn::GetFieldCount().
Parameters:
-----------
hDefn: handle to the feature definition to get the fields count from.
count of fields.
"""
return _ogr.FeatureDefn_GetFieldCount(self, *args)
def GetFieldDefn(self, *args):
"""
GetFieldDefn(FeatureDefn self, int i) -> FieldDefn
OGRFieldDefnH
OGR_FD_GetFieldDefn(OGRFeatureDefnH hDefn, int iField)
Fetch field definition of the passed feature definition.
This function is the same as the C++ method
OGRFeatureDefn::GetFieldDefn().
Starting with GDAL 1.7.0, this method will also issue an error if the
index is not valid.
Parameters:
-----------
hDefn: handle to the feature definition to get the field definition
from.
iField: the field to fetch, between 0 and GetFieldCount()-1.
        a handle to an internal field definition object or NULL if invalid
        index. This object should not be modified or freed by the application.
"""
return _ogr.FeatureDefn_GetFieldDefn(self, *args)
def GetFieldIndex(self, *args):
"""
GetFieldIndex(FeatureDefn self, char const * field_name) -> int
int
OGR_FD_GetFieldIndex(OGRFeatureDefnH hDefn, const char *pszFieldName)
Find field by name.
The field index of the first field matching the passed field name
(case insensitively) is returned.
This function is the same as the C++ method
OGRFeatureDefn::GetFieldIndex.
Parameters:
-----------
hDefn: handle to the feature definition to get field index from.
pszFieldName: the field name to search for.
the field index, or -1 if no match found.
"""
return _ogr.FeatureDefn_GetFieldIndex(self, *args)
def AddFieldDefn(self, *args):
"""
AddFieldDefn(FeatureDefn self, FieldDefn defn)
void
OGR_FD_AddFieldDefn(OGRFeatureDefnH hDefn, OGRFieldDefnH hNewField)
Add a new field definition to the passed feature definition.
To add a new field definition to a layer definition, do not use this
function directly, but use OGR_L_CreateField() instead.
This function should only be called while there are no OGRFeature
objects in existence based on this OGRFeatureDefn. The OGRFieldDefn
passed in is copied, and remains the responsibility of the caller.
This function is the same as the C++ method
OGRFeatureDefn::AddFieldDefn().
Parameters:
-----------
hDefn: handle to the feature definition to add the field definition
to.
hNewField: handle to the new field definition.
"""
return _ogr.FeatureDefn_AddFieldDefn(self, *args)
def GetGeomFieldCount(self, *args):
"""
GetGeomFieldCount(FeatureDefn self) -> int
int
OGR_FD_GetGeomFieldCount(OGRFeatureDefnH hDefn)
Fetch number of geometry fields on the passed feature definition.
        This function is the same as the C++ method
        OGRFeatureDefn::GetGeomFieldCount().
Parameters:
-----------
hDefn: handle to the feature definition to get the fields count from.
count of geometry fields.
GDAL 1.11
"""
return _ogr.FeatureDefn_GetGeomFieldCount(self, *args)
def GetGeomFieldDefn(self, *args):
"""
GetGeomFieldDefn(FeatureDefn self, int i) -> GeomFieldDefn
OGRGeomFieldDefnH
OGR_FD_GetGeomFieldDefn(OGRFeatureDefnH hDefn, int iGeomField)
Fetch geometry field definition of the passed feature definition.
This function is the same as the C++ method
OGRFeatureDefn::GetGeomFieldDefn().
Parameters:
-----------
hDefn: handle to the feature definition to get the field definition
from.
iGeomField: the geometry field to fetch, between 0 and
GetGeomFieldCount() - 1.
        a handle to an internal field definition object or NULL if invalid
        index. This object should not be modified or freed by the application.
GDAL 1.11
"""
return _ogr.FeatureDefn_GetGeomFieldDefn(self, *args)
def GetGeomFieldIndex(self, *args):
"""
GetGeomFieldIndex(FeatureDefn self, char const * field_name) -> int
int
OGR_FD_GetGeomFieldIndex(OGRFeatureDefnH hDefn, const char
*pszGeomFieldName)
Find geometry field by name.
The geometry field index of the first geometry field matching the
passed field name (case insensitively) is returned.
This function is the same as the C++ method
OGRFeatureDefn::GetGeomFieldIndex.
Parameters:
-----------
hDefn: handle to the feature definition to get field index from.
pszGeomFieldName: the geometry field name to search for.
the geometry field index, or -1 if no match found.
"""
return _ogr.FeatureDefn_GetGeomFieldIndex(self, *args)
def AddGeomFieldDefn(self, *args):
"""
AddGeomFieldDefn(FeatureDefn self, GeomFieldDefn defn)
void
OGR_FD_AddGeomFieldDefn(OGRFeatureDefnH hDefn, OGRGeomFieldDefnH
hNewGeomField)
Add a new field definition to the passed feature definition.
To add a new field definition to a layer definition, do not use this
function directly, but use OGR_L_CreateGeomField() instead.
This function should only be called while there are no OGRFeature
objects in existence based on this OGRFeatureDefn. The
OGRGeomFieldDefn passed in is copied, and remains the responsibility
of the caller.
This function is the same as the C++ method
OGRFeatureDefn::AddGeomFieldDefn().
Parameters:
-----------
hDefn: handle to the feature definition to add the geometry field
definition to.
hNewGeomField: handle to the new field definition.
GDAL 1.11
"""
return _ogr.FeatureDefn_AddGeomFieldDefn(self, *args)
def DeleteGeomFieldDefn(self, *args):
"""
DeleteGeomFieldDefn(FeatureDefn self, int idx) -> OGRErr
OGRErr
OGR_FD_DeleteGeomFieldDefn(OGRFeatureDefnH hDefn, int iGeomField)
Delete an existing geometry field definition.
To delete an existing geometry field definition from a layer
definition, do not use this function directly, but use
        OGR_L_DeleteGeomField() instead (not implemented yet).
This method should only be called while there are no OGRFeature
objects in existence based on this OGRFeatureDefn.
This method is the same as the C++ method
OGRFeatureDefn::DeleteGeomFieldDefn().
Parameters:
-----------
hDefn: handle to the feature definition.
iGeomField: the index of the geometry field definition.
OGRERR_NONE in case of success.
GDAL 1.11
"""
return _ogr.FeatureDefn_DeleteGeomFieldDefn(self, *args)
def GetGeomType(self, *args):
"""
GetGeomType(FeatureDefn self) -> OGRwkbGeometryType
OGRwkbGeometryType
OGR_FD_GetGeomType(OGRFeatureDefnH hDefn)
Fetch the geometry base type of the passed feature definition.
This function is the same as the C++ method
OGRFeatureDefn::GetGeomType().
Starting with GDAL 1.11, this method returns
GetGeomFieldDefn(0)->GetType().
Parameters:
-----------
hDefn: handle to the feature definition to get the geometry type
from.
the base type for all geometry related to this definition.
"""
return _ogr.FeatureDefn_GetGeomType(self, *args)
def SetGeomType(self, *args):
"""
SetGeomType(FeatureDefn self, OGRwkbGeometryType geom_type)
void
OGR_FD_SetGeomType(OGRFeatureDefnH hDefn, OGRwkbGeometryType eType)
Assign the base geometry type for the passed layer (the same as the
feature definition).
All geometry objects using this type must be of the defined type or a
derived type. The default upon creation is wkbUnknown which allows for
any geometry type. The geometry type should generally not be changed
after any OGRFeatures have been created against this definition.
This function is the same as the C++ method
OGRFeatureDefn::SetGeomType().
Starting with GDAL 1.11, this method calls
GetGeomFieldDefn(0)->SetType().
Parameters:
-----------
hDefn: handle to the layer or feature definition to set the geometry
type to.
eType: the new type to assign.
"""
return _ogr.FeatureDefn_SetGeomType(self, *args)
def GetReferenceCount(self, *args):
"""
GetReferenceCount(FeatureDefn self) -> int
int
OGR_FD_GetReferenceCount(OGRFeatureDefnH hDefn)
Fetch current reference count.
This function is the same as the C++ method
OGRFeatureDefn::GetReferenceCount().
Parameters:
-----------
        hDefn: handle to the feature definition on which OGRFeatures are
        based.
the current reference count.
"""
return _ogr.FeatureDefn_GetReferenceCount(self, *args)
def IsGeometryIgnored(self, *args):
"""
IsGeometryIgnored(FeatureDefn self) -> int
int
OGR_FD_IsGeometryIgnored(OGRFeatureDefnH hDefn)
Determine whether the geometry can be omitted when fetching features.
This function is the same as the C++ method
OGRFeatureDefn::IsGeometryIgnored().
Starting with GDAL 1.11, this method returns
GetGeomFieldDefn(0)->IsIgnored().
Parameters:
-----------
        hDefn: handle to the feature definition on which OGRFeatures are
        based.
ignore state
"""
return _ogr.FeatureDefn_IsGeometryIgnored(self, *args)
def SetGeometryIgnored(self, *args):
"""
SetGeometryIgnored(FeatureDefn self, int bIgnored)
void
OGR_FD_SetGeometryIgnored(OGRFeatureDefnH hDefn, int bIgnore)
Set whether the geometry can be omitted when fetching features.
This function is the same as the C++ method
OGRFeatureDefn::SetGeometryIgnored().
Starting with GDAL 1.11, this method calls
GetGeomFieldDefn(0)->SetIgnored().
Parameters:
-----------
        hDefn: handle to the feature definition on which OGRFeatures are
        based.
bIgnore: ignore state
"""
return _ogr.FeatureDefn_SetGeometryIgnored(self, *args)
def IsStyleIgnored(self, *args):
"""
IsStyleIgnored(FeatureDefn self) -> int
int
OGR_FD_IsStyleIgnored(OGRFeatureDefnH hDefn)
Determine whether the style can be omitted when fetching features.
This function is the same as the C++ method
OGRFeatureDefn::IsStyleIgnored().
Parameters:
-----------
        hDefn: handle to the feature definition on which OGRFeatures are
        based.
ignore state
"""
return _ogr.FeatureDefn_IsStyleIgnored(self, *args)
def SetStyleIgnored(self, *args):
"""
SetStyleIgnored(FeatureDefn self, int bIgnored)
void
OGR_FD_SetStyleIgnored(OGRFeatureDefnH hDefn, int bIgnore)
Set whether the style can be omitted when fetching features.
This function is the same as the C++ method
OGRFeatureDefn::SetStyleIgnored().
Parameters:
-----------
        hDefn: handle to the feature definition on which OGRFeatures are
        based.
bIgnore: ignore state
"""
return _ogr.FeatureDefn_SetStyleIgnored(self, *args)
def IsSame(self, *args):
"""
IsSame(FeatureDefn self, FeatureDefn other_defn) -> int
int OGR_FD_IsSame(OGRFeatureDefnH
hFDefn, OGRFeatureDefnH hOtherFDefn)
Test if the feature definition is identical to the other one.
Parameters:
-----------
        hFDefn: handle to the feature definition on which OGRFeatures are
        based.
hOtherFDefn: handle to the other feature definition to compare to.
TRUE if the feature definition is identical to the other one.
OGR 1.11
"""
return _ogr.FeatureDefn_IsSame(self, *args)
def Destroy(self):
"Once called, self has effectively been destroyed. Do not access. For backwards compatibility only"
_ogr.delete_FeatureDefn( self )
self.thisown = 0
FeatureDefn_swigregister = _ogr.FeatureDefn_swigregister
FeatureDefn_swigregister(FeatureDefn)
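# Illustrative sketch of walking a FeatureDefn with the accessors above
# (in practice the definition usually comes from Layer.GetLayerDefn()):
def _example_inspect_defn(defn):
    fields = []
    for i in range(defn.GetFieldCount()):
        fld = defn.GetFieldDefn(i)
        fields.append((fld.GetNameRef(), fld.GetTypeName()))
    return fields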
class FieldDefn(_object):
"""Proxy of C++ OGRFieldDefnShadow class."""
__swig_setmethods__ = {}
__setattr__ = lambda self, name, value: _swig_setattr(self, FieldDefn, name, value)
__swig_getmethods__ = {}
__getattr__ = lambda self, name: _swig_getattr(self, FieldDefn, name)
__repr__ = _swig_repr
__swig_destroy__ = _ogr.delete_FieldDefn
__del__ = lambda self: None
def __init__(self, *args, **kwargs):
"""__init__(OGRFieldDefnShadow self, char const * name_null_ok, OGRFieldType field_type) -> FieldDefn"""
this = _ogr.new_FieldDefn(*args, **kwargs)
try:
self.this.append(this)
except Exception:
self.this = this
def GetName(self, *args):
"""GetName(FieldDefn self) -> char const *"""
return _ogr.FieldDefn_GetName(self, *args)
def GetNameRef(self, *args):
"""
GetNameRef(FieldDefn self) -> char const *
const char*
OGR_Fld_GetNameRef(OGRFieldDefnH hDefn)
Fetch name of this field.
This function is the same as the CPP method
OGRFieldDefn::GetNameRef().
Parameters:
-----------
hDefn: handle to the field definition.
the name of the field definition.
"""
return _ogr.FieldDefn_GetNameRef(self, *args)
def SetName(self, *args):
"""
SetName(FieldDefn self, char const * name)
void OGR_Fld_SetName(OGRFieldDefnH
hDefn, const char *pszName)
Reset the name of this field.
This function is the same as the CPP method OGRFieldDefn::SetName().
Parameters:
-----------
hDefn: handle to the field definition to apply the new name to.
pszName: the new name to apply.
"""
return _ogr.FieldDefn_SetName(self, *args)
def GetType(self, *args):
"""
GetType(FieldDefn self) -> OGRFieldType
OGRFieldType
OGR_Fld_GetType(OGRFieldDefnH hDefn)
Fetch type of this field.
This function is the same as the CPP method OGRFieldDefn::GetType().
Parameters:
-----------
hDefn: handle to the field definition to get type from.
field type.
"""
return _ogr.FieldDefn_GetType(self, *args)
def SetType(self, *args):
"""
SetType(FieldDefn self, OGRFieldType type)
void OGR_Fld_SetType(OGRFieldDefnH
hDefn, OGRFieldType eType)
Set the type of this field.
This should never be done to an OGRFieldDefn that is already part of
an OGRFeatureDefn.
This function is the same as the CPP method OGRFieldDefn::SetType().
Parameters:
-----------
hDefn: handle to the field definition to set type to.
eType: the new field type.
"""
return _ogr.FieldDefn_SetType(self, *args)
def GetSubType(self, *args):
"""
GetSubType(FieldDefn self) -> OGRFieldSubType
OGRFieldSubType
OGR_Fld_GetSubType(OGRFieldDefnH hDefn)
Fetch subtype of this field.
This function is the same as the CPP method
OGRFieldDefn::GetSubType().
Parameters:
-----------
hDefn: handle to the field definition to get subtype from.
field subtype.
GDAL 2.0
"""
return _ogr.FieldDefn_GetSubType(self, *args)
def SetSubType(self, *args):
"""
SetSubType(FieldDefn self, OGRFieldSubType type)
void
OGR_Fld_SetSubType(OGRFieldDefnH hDefn, OGRFieldSubType eSubType)
Set the subtype of this field.
This should never be done to an OGRFieldDefn that is already part of
an OGRFeatureDefn.
This function is the same as the CPP method
OGRFieldDefn::SetSubType().
Parameters:
-----------
hDefn: handle to the field definition to set type to.
eSubType: the new field subtype.
GDAL 2.0
"""
return _ogr.FieldDefn_SetSubType(self, *args)
def GetJustify(self, *args):
"""
GetJustify(FieldDefn self) -> OGRJustification
OGRJustification
OGR_Fld_GetJustify(OGRFieldDefnH hDefn)
Get the justification for this field.
This function is the same as the CPP method
OGRFieldDefn::GetJustify().
        Note: no driver is known to use the concept of field justification.
Parameters:
-----------
hDefn: handle to the field definition to get justification from.
the justification.
"""
return _ogr.FieldDefn_GetJustify(self, *args)
def SetJustify(self, *args):
"""
SetJustify(FieldDefn self, OGRJustification justify)
void
OGR_Fld_SetJustify(OGRFieldDefnH hDefn, OGRJustification eJustify)
Set the justification for this field.
        Note: no driver is known to use the concept of field justification.
This function is the same as the CPP method
OGRFieldDefn::SetJustify().
Parameters:
-----------
hDefn: handle to the field definition to set justification to.
eJustify: the new justification.
"""
return _ogr.FieldDefn_SetJustify(self, *args)
def GetWidth(self, *args):
"""
GetWidth(FieldDefn self) -> int
int OGR_Fld_GetWidth(OGRFieldDefnH
hDefn)
Get the formatting width for this field.
This function is the same as the CPP method OGRFieldDefn::GetWidth().
Parameters:
-----------
hDefn: handle to the field definition to get width from.
the width, zero means no specified width.
"""
return _ogr.FieldDefn_GetWidth(self, *args)
def SetWidth(self, *args):
"""
SetWidth(FieldDefn self, int width)
void OGR_Fld_SetWidth(OGRFieldDefnH
hDefn, int nNewWidth)
Set the formatting width for this field in characters.
This function is the same as the CPP method OGRFieldDefn::SetWidth().
Parameters:
-----------
hDefn: handle to the field definition to set width to.
nNewWidth: the new width.
"""
return _ogr.FieldDefn_SetWidth(self, *args)
def GetPrecision(self, *args):
"""
GetPrecision(FieldDefn self) -> int
int
OGR_Fld_GetPrecision(OGRFieldDefnH hDefn)
Get the formatting precision for this field.
This should normally be zero for fields of types other than OFTReal.
This function is the same as the CPP method
OGRFieldDefn::GetPrecision().
Parameters:
-----------
hDefn: handle to the field definition to get precision from.
the precision.
"""
return _ogr.FieldDefn_GetPrecision(self, *args)
def SetPrecision(self, *args):
"""
SetPrecision(FieldDefn self, int precision)
void
OGR_Fld_SetPrecision(OGRFieldDefnH hDefn, int nPrecision)
Set the formatting precision for this field in characters.
This should normally be zero for fields of types other than OFTReal.
This function is the same as the CPP method
OGRFieldDefn::SetPrecision().
Parameters:
-----------
hDefn: handle to the field definition to set precision to.
nPrecision: the new precision.
"""
return _ogr.FieldDefn_SetPrecision(self, *args)
def GetTypeName(self, *args):
"""GetTypeName(FieldDefn self) -> char const *"""
return _ogr.FieldDefn_GetTypeName(self, *args)
def GetFieldTypeName(self, *args):
"""GetFieldTypeName(FieldDefn self, OGRFieldType type) -> char const *"""
return _ogr.FieldDefn_GetFieldTypeName(self, *args)
def IsIgnored(self, *args):
"""
IsIgnored(FieldDefn self) -> int
int OGR_Fld_IsIgnored(OGRFieldDefnH
hDefn)
Return whether this field should be omitted when fetching features.
This method is the same as the C++ method OGRFieldDefn::IsIgnored().
Parameters:
-----------
hDefn: handle to the field definition
ignore state
"""
return _ogr.FieldDefn_IsIgnored(self, *args)
def SetIgnored(self, *args):
"""
SetIgnored(FieldDefn self, int bIgnored)
void
OGR_Fld_SetIgnored(OGRFieldDefnH hDefn, int ignore)
Set whether this field should be omitted when fetching features.
This method is the same as the C++ method OGRFieldDefn::SetIgnored().
Parameters:
-----------
hDefn: handle to the field definition
ignore: ignore state
"""
return _ogr.FieldDefn_SetIgnored(self, *args)
def IsNullable(self, *args):
"""
IsNullable(FieldDefn self) -> int
int
OGR_Fld_IsNullable(OGRFieldDefnH hDefn)
Return whether this field can receive null values.
By default, fields are nullable.
        Even if this method returns FALSE (i.e. a not-nullable field), it
        doesn't mean that OGRFeature::IsFieldSet() will necessarily return
        TRUE, as fields can be temporarily unset and null/not-null validation
        is usually done when OGRLayer::CreateFeature()/SetFeature() is called.
This method is the same as the C++ method OGRFieldDefn::IsNullable().
Parameters:
-----------
hDefn: handle to the field definition
TRUE if the field is authorized to be null.
GDAL 2.0
"""
return _ogr.FieldDefn_IsNullable(self, *args)
def SetNullable(self, *args):
"""
SetNullable(FieldDefn self, int bNullable)
void
OGR_Fld_SetNullable(OGRFieldDefnH hDefn, int bNullableIn)
Set whether this field can receive null values.
By default, fields are nullable, so this method is generally called
with FALSE to set a not-null constraint.
        Drivers that support writing not-null constraints will advertise the
GDAL_DCAP_NOTNULL_FIELDS driver metadata item.
This method is the same as the C++ method OGRFieldDefn::SetNullable().
Parameters:
-----------
hDefn: handle to the field definition
bNullableIn: FALSE if the field must have a not-null constraint.
GDAL 2.0
"""
return _ogr.FieldDefn_SetNullable(self, *args)
def GetDefault(self, *args):
"""
GetDefault(FieldDefn self) -> char const *
const char*
OGR_Fld_GetDefault(OGRFieldDefnH hDefn)
Get default field value.
This function is the same as the C++ method
OGRFieldDefn::GetDefault().
Parameters:
-----------
hDefn: handle to the field definition.
default field value or NULL.
GDAL 2.0
"""
return _ogr.FieldDefn_GetDefault(self, *args)
def SetDefault(self, *args):
"""
SetDefault(FieldDefn self, char const * pszValue)
void
OGR_Fld_SetDefault(OGRFieldDefnH hDefn, const char *pszDefault)
Set default field value.
The default field value is taken into account by drivers (generally
those with a SQL interface) that support it at field creation time.
OGR will generally not automatically set the default field value to
null fields by itself when calling OGRFeature::CreateFeature() /
        OGRFeature::SetFeature(), but will let the low-level layers do the
job. So retrieving the feature from the layer is recommended.
The accepted values are NULL, a numeric value, a literal value
enclosed between single quote characters (and inner single quote
characters escaped by repetition of the single quote character),
CURRENT_TIMESTAMP, CURRENT_TIME, CURRENT_DATE or a driver specific
expression (that might be ignored by other drivers). For a datetime
literal value, format should be 'YYYY/MM/DD HH:MM:SS[.sss]'
(considered as UTC time).
        Drivers that support writing DEFAULT clauses will advertise the
GDAL_DCAP_DEFAULT_FIELDS driver metadata item.
This function is the same as the C++ method
OGRFieldDefn::SetDefault().
Parameters:
-----------
hDefn: handle to the field definition.
pszDefault: new default field value or NULL pointer.
GDAL 2.0
"""
return _ogr.FieldDefn_SetDefault(self, *args)
def IsDefaultDriverSpecific(self, *args):
"""
IsDefaultDriverSpecific(FieldDefn self) -> int
int
OGR_Fld_IsDefaultDriverSpecific(OGRFieldDefnH hDefn)
Returns whether the default value is driver specific.
Driver specific default values are those that are not NULL, a numeric
value, a literal value enclosed between single quote characters,
        CURRENT_TIMESTAMP, CURRENT_TIME, CURRENT_DATE or a datetime literal
value.
This function is the same as the C++ method
OGRFieldDefn::IsDefaultDriverSpecific().
Parameters:
-----------
hDefn: handle to the field definition
TRUE if the default value is driver specific.
GDAL 2.0
"""
return _ogr.FieldDefn_IsDefaultDriverSpecific(self, *args)
width = property(GetWidth, SetWidth)
type = property(GetType, SetType)
precision = property(GetPrecision, SetPrecision)
name = property(GetName, SetName)
justify = property(GetJustify, SetJustify)
def Destroy(self):
"Once called, self has effectively been destroyed. Do not access. For backwards compatibility only"
_ogr.delete_FieldDefn( self )
self.thisown = 0
FieldDefn_swigregister = _ogr.FieldDefn_swigregister
FieldDefn_swigregister(FieldDefn)
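# Illustrative sketch of configuring a FieldDefn with the setters documented
# above (the field name, width, precision and default are hypothetical):
def _example_make_field():
    fld = FieldDefn('area', OFTReal)
    fld.SetWidth(12)       # formatting width; zero would mean unspecified
    fld.SetPrecision(3)    # only meaningful for OFTReal fields
    fld.SetNullable(0)     # request a not-null constraint, driver permitting
    fld.SetDefault('0')    # default value, driver permitting
    return fld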
class GeomFieldDefn(_object):
"""Proxy of C++ OGRGeomFieldDefnShadow class."""
__swig_setmethods__ = {}
__setattr__ = lambda self, name, value: _swig_setattr(self, GeomFieldDefn, name, value)
__swig_getmethods__ = {}
__getattr__ = lambda self, name: _swig_getattr(self, GeomFieldDefn, name)
__repr__ = _swig_repr
__swig_destroy__ = _ogr.delete_GeomFieldDefn
__del__ = lambda self: None
def __init__(self, *args, **kwargs):
"""__init__(OGRGeomFieldDefnShadow self, char const * name_null_ok, OGRwkbGeometryType field_type) -> GeomFieldDefn"""
this = _ogr.new_GeomFieldDefn(*args, **kwargs)
try:
self.this.append(this)
except Exception:
self.this = this
def GetName(self, *args):
"""GetName(GeomFieldDefn self) -> char const *"""
return _ogr.GeomFieldDefn_GetName(self, *args)
def GetNameRef(self, *args):
"""GetNameRef(GeomFieldDefn self) -> char const *"""
return _ogr.GeomFieldDefn_GetNameRef(self, *args)
def SetName(self, *args):
"""SetName(GeomFieldDefn self, char const * name)"""
return _ogr.GeomFieldDefn_SetName(self, *args)
def GetType(self, *args):
"""GetType(GeomFieldDefn self) -> OGRwkbGeometryType"""
return _ogr.GeomFieldDefn_GetType(self, *args)
def SetType(self, *args):
"""SetType(GeomFieldDefn self, OGRwkbGeometryType type)"""
return _ogr.GeomFieldDefn_SetType(self, *args)
def GetSpatialRef(self, *args):
"""GetSpatialRef(GeomFieldDefn self) -> SpatialReference"""
return _ogr.GeomFieldDefn_GetSpatialRef(self, *args)
def SetSpatialRef(self, *args):
"""SetSpatialRef(GeomFieldDefn self, SpatialReference srs)"""
return _ogr.GeomFieldDefn_SetSpatialRef(self, *args)
def IsIgnored(self, *args):
"""IsIgnored(GeomFieldDefn self) -> int"""
return _ogr.GeomFieldDefn_IsIgnored(self, *args)
def SetIgnored(self, *args):
"""SetIgnored(GeomFieldDefn self, int bIgnored)"""
return _ogr.GeomFieldDefn_SetIgnored(self, *args)
def IsNullable(self, *args):
"""IsNullable(GeomFieldDefn self) -> int"""
return _ogr.GeomFieldDefn_IsNullable(self, *args)
def SetNullable(self, *args):
"""SetNullable(GeomFieldDefn self, int bNullable)"""
return _ogr.GeomFieldDefn_SetNullable(self, *args)
type = property(GetType, SetType)
name = property(GetName, SetName)
srs = property(GetSpatialRef, SetSpatialRef)
GeomFieldDefn_swigregister = _ogr.GeomFieldDefn_swigregister
GeomFieldDefn_swigregister(GeomFieldDefn)
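# Illustrative sketch of a geometry field definition with an assigned spatial
# reference (assumes the companion osgeo.osr module is available):
def _example_make_geom_field():
    from osgeo import osr
    srs = osr.SpatialReference()
    srs.ImportFromEPSG(4326)                # WGS 84
    gfld = GeomFieldDefn('geom', wkbPoint)
    gfld.SetSpatialRef(srs)
    return gfld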
def CreateGeometryFromWkb(*args, **kwargs):
"""CreateGeometryFromWkb(int len, SpatialReference reference=None) -> Geometry"""
return _ogr.CreateGeometryFromWkb(*args, **kwargs)
def CreateGeometryFromWkt(*args, **kwargs):
"""CreateGeometryFromWkt(char ** val, SpatialReference reference=None) -> Geometry"""
return _ogr.CreateGeometryFromWkt(*args, **kwargs)
def CreateGeometryFromGML(*args):
"""CreateGeometryFromGML(char const * input_string) -> Geometry"""
return _ogr.CreateGeometryFromGML(*args)
def CreateGeometryFromJson(*args):
"""CreateGeometryFromJson(char const * input_string) -> Geometry"""
return _ogr.CreateGeometryFromJson(*args)
def BuildPolygonFromEdges(*args, **kwargs):
"""BuildPolygonFromEdges(Geometry hLineCollection, int bBestEffort=0, int bAutoClose=0, double dfTolerance=0) -> Geometry"""
return _ogr.BuildPolygonFromEdges(*args, **kwargs)
def ApproximateArcAngles(*args, **kwargs):
"""ApproximateArcAngles(double dfCenterX, double dfCenterY, double dfZ, double dfPrimaryRadius, double dfSecondaryAxis, double dfRotation, double dfStartAngle, double dfEndAngle, double dfMaxAngleStepSizeDegrees) -> Geometry"""
return _ogr.ApproximateArcAngles(*args, **kwargs)
def ForceToPolygon(*args):
"""ForceToPolygon(Geometry geom_in) -> Geometry"""
return _ogr.ForceToPolygon(*args)
def ForceToLineString(*args):
"""ForceToLineString(Geometry geom_in) -> Geometry"""
return _ogr.ForceToLineString(*args)
def ForceToMultiPolygon(*args):
"""ForceToMultiPolygon(Geometry geom_in) -> Geometry"""
return _ogr.ForceToMultiPolygon(*args)
def ForceToMultiPoint(*args):
"""ForceToMultiPoint(Geometry geom_in) -> Geometry"""
return _ogr.ForceToMultiPoint(*args)
def ForceToMultiLineString(*args):
"""ForceToMultiLineString(Geometry geom_in) -> Geometry"""
return _ogr.ForceToMultiLineString(*args)
def ForceTo(*args):
"""ForceTo(Geometry geom_in, OGRwkbGeometryType eTargetType, char ** options=None) -> Geometry"""
return _ogr.ForceTo(*args)
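# Illustrative sketch of the geometry factory and conversion helpers above
# (the WKT literal is hypothetical):
def _example_geometry_factories():
    poly = CreateGeometryFromWkt('POLYGON ((0 0,0 1,1 1,1 0,0 0))')
    multi = ForceToMultiPolygon(poly)                  # wrap in a MultiPolygon
    generic = ForceTo(poly.Clone(), wkbMultiPolygon)   # generic equivalent
    return multi, generic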
class Geometry(_object):
"""Proxy of C++ OGRGeometryShadow class."""
__swig_setmethods__ = {}
__setattr__ = lambda self, name, value: _swig_setattr(self, Geometry, name, value)
__swig_getmethods__ = {}
__getattr__ = lambda self, name: _swig_getattr(self, Geometry, name)
__repr__ = _swig_repr
__swig_destroy__ = _ogr.delete_Geometry
__del__ = lambda self: None
def __init__(self, *args, **kwargs):
"""__init__(OGRGeometryShadow self, OGRwkbGeometryType type, char * wkt=None, int wkb=0, char * gml=None) -> Geometry"""
this = _ogr.new_Geometry(*args, **kwargs)
try:
self.this.append(this)
except Exception:
self.this = this
def ExportToWkt(self, *args):
"""
ExportToWkt(Geometry self) -> OGRErr
OGRErr
OGR_G_ExportToWkt(OGRGeometryH hGeom, char **ppszSrcText)
Convert a geometry into well known text format.
This function relates to the SFCOM IWks::ExportToWKT() method.
For backward compatibility purposes, it exports the Old-style 99-402
extended dimension (Z) WKB types for types Point, LineString, Polygon,
MultiPoint, MultiLineString, MultiPolygon and GeometryCollection. For
other geometry types, it is equivalent to OGR_G_ExportToIsoWkt().
This function is the same as the CPP method
OGRGeometry::exportToWkt().
Parameters:
-----------
hGeom: handle on the geometry to convert to a text format from.
        ppszSrcText: a text buffer is allocated by the program, and assigned
        to the passed pointer. After use, *ppszSrcText should be freed with
        CPLFree().
Currently OGRERR_NONE is always returned.
"""
return _ogr.Geometry_ExportToWkt(self, *args)
def ExportToIsoWkt(self, *args):
"""
ExportToIsoWkt(Geometry self) -> OGRErr
OGRErr
OGR_G_ExportToIsoWkt(OGRGeometryH hGeom, char **ppszSrcText)
Convert a geometry into SFSQL 1.2 / ISO SQL/MM Part 3 well known text
format.
This function relates to the SFCOM IWks::ExportToWKT() method. It
exports the SFSQL 1.2 and ISO SQL/MM Part 3 extended dimension (Z&M)
WKB types.
This function is the same as the CPP method
OGRGeometry::exportToWkt(wkbVariantIso).
Parameters:
-----------
hGeom: handle on the geometry to convert to a text format from.
        ppszSrcText: a text buffer is allocated by the program, and assigned
        to the passed pointer. After use, *ppszSrcText should be freed with
        CPLFree().
Currently OGRERR_NONE is always returned.
GDAL 2.0
"""
return _ogr.Geometry_ExportToIsoWkt(self, *args)
def ExportToWkb(self, *args, **kwargs):
"""
ExportToWkb(Geometry self, OGRwkbByteOrder byte_order) -> OGRErr
OGRErr
OGR_G_ExportToWkb(OGRGeometryH hGeom, OGRwkbByteOrder eOrder, unsigned
char *pabyDstBuffer)
        Convert a geometry into well known binary format.
This function relates to the SFCOM IWks::ExportToWKB() method.
For backward compatibility purposes, it exports the Old-style 99-402
extended dimension (Z) WKB types for types Point, LineString, Polygon,
MultiPoint, MultiLineString, MultiPolygon and GeometryCollection. For
other geometry types, it is equivalent to OGR_G_ExportToIsoWkb().
This function is the same as the CPP method
OGRGeometry::exportToWkb(OGRwkbByteOrder, unsigned char *,
OGRwkbVariant) with eWkbVariant = wkbVariantOldOgc.
Parameters:
-----------
        hGeom: handle on the geometry to convert to well known binary data
        from.
eOrder: One of wkbXDR or wkbNDR indicating MSB or LSB byte order
respectively.
pabyDstBuffer: a buffer into which the binary representation is
written. This buffer must be at least OGR_G_WkbSize() byte in size.
Currently OGRERR_NONE is always returned.
"""
return _ogr.Geometry_ExportToWkb(self, *args, **kwargs)
def ExportToIsoWkb(self, *args, **kwargs):
"""
ExportToIsoWkb(Geometry self, OGRwkbByteOrder byte_order) -> OGRErr
OGRErr
OGR_G_ExportToIsoWkb(OGRGeometryH hGeom, OGRwkbByteOrder eOrder,
unsigned char *pabyDstBuffer)
Convert a geometry into SFSQL 1.2 / ISO SQL/MM Part 3 well known
binary format.
This function relates to the SFCOM IWks::ExportToWKB() method. It
exports the SFSQL 1.2 and ISO SQL/MM Part 3 extended dimension (Z&M)
WKB types.
This function is the same as the CPP method
OGRGeometry::exportToWkb(OGRwkbByteOrder, unsigned char *,
OGRwkbVariant) with eWkbVariant = wkbVariantIso.
Parameters:
-----------
        hGeom: handle on the geometry to convert to well known binary data
        from.
eOrder: One of wkbXDR or wkbNDR indicating MSB or LSB byte order
respectively.
pabyDstBuffer: a buffer into which the binary representation is
written. This buffer must be at least OGR_G_WkbSize() byte in size.
Currently OGRERR_NONE is always returned.
GDAL 2.0
"""
return _ogr.Geometry_ExportToIsoWkb(self, *args, **kwargs)
def ExportToGML(self, *args, **kwargs):
"""ExportToGML(Geometry self, char ** options=None) -> retStringAndCPLFree *"""
return _ogr.Geometry_ExportToGML(self, *args, **kwargs)
def ExportToKML(self, *args):
"""ExportToKML(Geometry self, char const * altitude_mode=None) -> retStringAndCPLFree *"""
return _ogr.Geometry_ExportToKML(self, *args)
def ExportToJson(self, *args, **kwargs):
"""ExportToJson(Geometry self, char ** options=None) -> retStringAndCPLFree *"""
return _ogr.Geometry_ExportToJson(self, *args, **kwargs)
def AddPoint(self, *args, **kwargs):
"""AddPoint(Geometry self, double x, double y, double z=0)"""
return _ogr.Geometry_AddPoint(self, *args, **kwargs)
def AddPointM(self, *args, **kwargs):
"""AddPointM(Geometry self, double x, double y, double m)"""
return _ogr.Geometry_AddPointM(self, *args, **kwargs)
def AddPointZM(self, *args, **kwargs):
"""AddPointZM(Geometry self, double x, double y, double z, double m)"""
return _ogr.Geometry_AddPointZM(self, *args, **kwargs)
def AddPoint_2D(self, *args):
"""AddPoint_2D(Geometry self, double x, double y)"""
return _ogr.Geometry_AddPoint_2D(self, *args)
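    # Illustrative sketch of building a geometry point by point (coordinates
    # are hypothetical):
    #     line = Geometry(wkbLineString)
    #     line.AddPoint_2D(0.0, 0.0)
    #     line.AddPoint_2D(1.0, 1.0)
    #     line.AddPoint(2.0, 2.0, 5.0)   # adding a Z makes the line 2.5D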
def AddGeometryDirectly(self, *args):
"""AddGeometryDirectly(Geometry self, Geometry other_disown) -> OGRErr"""
return _ogr.Geometry_AddGeometryDirectly(self, *args)
def AddGeometry(self, *args):
"""AddGeometry(Geometry self, Geometry other) -> OGRErr"""
return _ogr.Geometry_AddGeometry(self, *args)
def RemoveGeometry(self, *args):
"""RemoveGeometry(Geometry self, int iSubGeom) -> OGRErr"""
return _ogr.Geometry_RemoveGeometry(self, *args)
def Clone(self, *args):
"""
Clone(Geometry self) -> Geometry
OGRGeometryH OGR_G_Clone(OGRGeometryH
hGeom)
Make a copy of this object.
This function relates to the SFCOM IGeometry::clone() method.
This function is the same as the CPP method OGRGeometry::clone().
Parameters:
-----------
hGeom: handle on the geometry to clone from.
        a handle on a copy of the geometry, with the same spatial reference
        system as the original.
"""
return _ogr.Geometry_Clone(self, *args)
def GetGeometryType(self, *args):
"""
GetGeometryType(Geometry self) -> OGRwkbGeometryType
OGRwkbGeometryType
OGR_G_GetGeometryType(OGRGeometryH hGeom)
Fetch geometry type.
Note that the geometry type may include the 2.5D flag. To get a 2D
flattened version of the geometry type apply the wkbFlatten() macro to
the return result.
This function is the same as the CPP method
OGRGeometry::getGeometryType().
Parameters:
-----------
hGeom: handle on the geometry to get type from.
the geometry type code.
"""
return _ogr.Geometry_GetGeometryType(self, *args)
def GetGeometryName(self, *args):
"""
GetGeometryName(Geometry self) -> char const *
const char*
OGR_G_GetGeometryName(OGRGeometryH hGeom)
Fetch WKT name for geometry type.
There is no SFCOM analog to this function.
This function is the same as the CPP method
OGRGeometry::getGeometryName().
Parameters:
-----------
hGeom: handle on the geometry to get name from.
name used for this geometry type in well known text format.
"""
return _ogr.Geometry_GetGeometryName(self, *args)
def Length(self, *args):
"""Length(Geometry self) -> double"""
return _ogr.Geometry_Length(self, *args)
def Area(self, *args):
"""Area(Geometry self) -> double"""
return _ogr.Geometry_Area(self, *args)
def GetArea(self, *args):
"""GetArea(Geometry self) -> double"""
return _ogr.Geometry_GetArea(self, *args)
def GetPointCount(self, *args):
"""GetPointCount(Geometry self) -> int"""
return _ogr.Geometry_GetPointCount(self, *args)
def GetPoints(self, *args, **kwargs):
"""GetPoints(Geometry self, int nCoordDimension=0)"""
return _ogr.Geometry_GetPoints(self, *args, **kwargs)
def GetX(self, *args, **kwargs):
"""GetX(Geometry self, int point=0) -> double"""
return _ogr.Geometry_GetX(self, *args, **kwargs)
def GetY(self, *args, **kwargs):
"""GetY(Geometry self, int point=0) -> double"""
return _ogr.Geometry_GetY(self, *args, **kwargs)
def GetZ(self, *args, **kwargs):
"""GetZ(Geometry self, int point=0) -> double"""
return _ogr.Geometry_GetZ(self, *args, **kwargs)
def GetM(self, *args, **kwargs):
"""GetM(Geometry self, int point=0) -> double"""
return _ogr.Geometry_GetM(self, *args, **kwargs)
def GetPoint(self, *args):
"""GetPoint(Geometry self, int iPoint=0)"""
return _ogr.Geometry_GetPoint(self, *args)
def GetPointZM(self, *args):
"""GetPointZM(Geometry self, int iPoint=0)"""
return _ogr.Geometry_GetPointZM(self, *args)
def GetPoint_2D(self, *args):
"""GetPoint_2D(Geometry self, int iPoint=0)"""
return _ogr.Geometry_GetPoint_2D(self, *args)
def GetGeometryCount(self, *args):
"""GetGeometryCount(Geometry self) -> int"""
return _ogr.Geometry_GetGeometryCount(self, *args)
def SetPoint(self, *args, **kwargs):
"""SetPoint(Geometry self, int point, double x, double y, double z=0)"""
return _ogr.Geometry_SetPoint(self, *args, **kwargs)
def SetPointM(self, *args, **kwargs):
"""SetPointM(Geometry self, int point, double x, double y, double m)"""
return _ogr.Geometry_SetPointM(self, *args, **kwargs)
def SetPointZM(self, *args, **kwargs):
"""SetPointZM(Geometry self, int point, double x, double y, double z, double m)"""
return _ogr.Geometry_SetPointZM(self, *args, **kwargs)
def SetPoint_2D(self, *args, **kwargs):
"""SetPoint_2D(Geometry self, int point, double x, double y)"""
return _ogr.Geometry_SetPoint_2D(self, *args, **kwargs)
def SwapXY(self, *args):
"""
SwapXY(Geometry self)
void OGR_G_SwapXY(OGRGeometryH hGeom)
Swap x and y coordinates.
Parameters:
-----------
hGeom: geometry.
OGR 2.3.0
"""
return _ogr.Geometry_SwapXY(self, *args)
def GetGeometryRef(self, *args):
"""GetGeometryRef(Geometry self, int geom) -> Geometry"""
return _ogr.Geometry_GetGeometryRef(self, *args)
def Simplify(self, *args):
"""
Simplify(Geometry self, double tolerance) -> Geometry
OGRGeometryH
OGR_G_Simplify(OGRGeometryH hThis, double dTolerance)
Compute a simplified geometry.
This function is the same as the C++ method OGRGeometry::Simplify().
This function is built on the GEOS library, check it for the
definition of the geometry operation. If OGR is built without the GEOS
library, this function will always fail, issuing a CPLE_NotSupported
error.
Parameters:
-----------
hThis: the geometry.
dTolerance: the distance tolerance for the simplification.
the simplified geometry or NULL if an error occurs.
OGR 1.8.0
"""
return _ogr.Geometry_Simplify(self, *args)
def SimplifyPreserveTopology(self, *args):
"""
SimplifyPreserveTopology(Geometry self, double tolerance) -> Geometry
OGRGeometryH
OGR_G_SimplifyPreserveTopology(OGRGeometryH hThis, double dTolerance)
Simplify the geometry while preserving topology.
This function is the same as the C++ method
OGRGeometry::SimplifyPreserveTopology().
This function is built on the GEOS library, check it for the
definition of the geometry operation. If OGR is built without the GEOS
library, this function will always fail, issuing a CPLE_NotSupported
error.
Parameters:
-----------
hThis: the geometry.
dTolerance: the distance tolerance for the simplification.
the simplified geometry or NULL if an error occurs.
OGR 1.9.0
"""
return _ogr.Geometry_SimplifyPreserveTopology(self, *args)
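    # Illustrative sketch (GEOS-enabled build assumed, per the notes above;
    # the tolerance is hypothetical):
    #     simplified = geom.Simplify(0.5)
    #     safer = geom.SimplifyPreserveTopology(0.5)  # keeps topology intact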
def DelaunayTriangulation(self, *args, **kwargs):
"""
DelaunayTriangulation(Geometry self, double dfTolerance=0.0, int bOnlyEdges=False) -> Geometry
OGRGeometryH
OGR_G_DelaunayTriangulation(OGRGeometryH hThis, double dfTolerance,
int bOnlyEdges)
Return a Delaunay triangulation of the vertices of the geometry.
This function is the same as the C++ method
OGRGeometry::DelaunayTriangulation().
This function is built on the GEOS library, v3.4 or above. If OGR is
built without the GEOS library, this function will always fail,
issuing a CPLE_NotSupported error.
Parameters:
-----------
hThis: the geometry.
dfTolerance: optional snapping tolerance to use for improved
robustness
bOnlyEdges: if TRUE, will return a MULTILINESTRING, otherwise it will
return a GEOMETRYCOLLECTION containing triangular POLYGONs.
the geometry resulting from the Delaunay triangulation or NULL if an
error occurs.
OGR 2.1
"""
return _ogr.Geometry_DelaunayTriangulation(self, *args, **kwargs)
def Polygonize(self, *args):
"""
Polygonize(Geometry self) -> Geometry
OGRGeometryH
OGR_G_Polygonize(OGRGeometryH hTarget)
Polygonizes a set of sparse edges.
A new geometry object is created and returned containing a collection
of reassembled Polygons: NULL will be returned if the input collection
        doesn't correspond to a MultiLineString, or when reassembling Edges
into Polygons is impossible due to topological inconsistencies.
This function is the same as the C++ method OGRGeometry::Polygonize().
This function is built on the GEOS library, check it for the
definition of the geometry operation. If OGR is built without the GEOS
library, this function will always fail, issuing a CPLE_NotSupported
error.
Parameters:
-----------
hTarget: The Geometry to be polygonized.
a handle to a newly allocated geometry now owned by the caller, or
NULL on failure.
OGR 1.9.0
"""
return _ogr.Geometry_Polygonize(self, *args)
def Boundary(self, *args):
"""
Boundary(Geometry self) -> Geometry
OGRGeometryH
OGR_G_Boundary(OGRGeometryH hTarget)
Compute boundary.
A new geometry object is created and returned containing the boundary
of the geometry on which the method is invoked.
        This function is the same as the C++ method OGRGeometry::Boundary().
This function is built on the GEOS library, check it for the
definition of the geometry operation. If OGR is built without the GEOS
library, this function will always fail, issuing a CPLE_NotSupported
error.
Parameters:
-----------
hTarget: The Geometry to calculate the boundary of.
a handle to a newly allocated geometry now owned by the caller, or
NULL on failure.
OGR 1.8.0
"""
return _ogr.Geometry_Boundary(self, *args)
def GetBoundary(self, *args):
"""
GetBoundary(Geometry self) -> Geometry
OGRGeometryH
OGR_G_GetBoundary(OGRGeometryH hTarget)
Compute boundary (deprecated)
Deprecated
See: OGR_G_Boundary()
"""
return _ogr.Geometry_GetBoundary(self, *args)
def ConvexHull(self, *args):
"""
ConvexHull(Geometry self) -> Geometry
OGRGeometryH
OGR_G_ConvexHull(OGRGeometryH hTarget)
Compute convex hull.
A new geometry object is created and returned containing the convex
hull of the geometry on which the method is invoked.
This function is the same as the C++ method OGRGeometry::ConvexHull().
This function is built on the GEOS library, check it for the
definition of the geometry operation. If OGR is built without the GEOS
library, this function will always fail, issuing a CPLE_NotSupported
error.
Parameters:
-----------
hTarget: The Geometry to calculate the convex hull of.
a handle to a newly allocated geometry now owned by the caller, or
NULL on failure.
"""
return _ogr.Geometry_ConvexHull(self, *args)
def Buffer(self, *args, **kwargs):
"""
Buffer(Geometry self, double distance, int quadsecs=30) -> Geometry
OGRGeometryH OGR_G_Buffer(OGRGeometryH
hTarget, double dfDist, int nQuadSegs)
Compute buffer of geometry.
Builds a new geometry containing the buffer region around the geometry
on which it is invoked. The buffer is a polygon containing the region
within the buffer distance of the original geometry.
Some buffer sections are properly described as curves, but are
converted to approximate polygons. The nQuadSegs parameter can be used
to control how many segments should be used to define a 90 degree
curve - a quadrant of a circle. A value of 30 is a reasonable default.
Large values result in large numbers of vertices in the resulting
buffer geometry while small numbers reduce the accuracy of the result.
This function is the same as the C++ method OGRGeometry::Buffer().
This function is built on the GEOS library, check it for the
definition of the geometry operation. If OGR is built without the GEOS
library, this function will always fail, issuing a CPLE_NotSupported
error.
Parameters:
-----------
hTarget: the geometry.
dfDist: the buffer distance to be applied. Should be expressed into
the same unit as the coordinates of the geometry.
nQuadSegs: the number of segments used to approximate a 90 degree
(quadrant) of curvature.
the newly created geometry, or NULL if an error occurs.
"""
return _ogr.Geometry_Buffer(self, *args, **kwargs)
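    # Illustrative sketch (GEOS-enabled build assumed; distance and segment
    # count are hypothetical):
    #     zone = point.Buffer(100.0, 30)  # 100-unit buffer, 30 segs/quadrant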
def Intersection(self, *args):
"""
Intersection(Geometry self, Geometry other) -> Geometry
OGRGeometryH
OGR_G_Intersection(OGRGeometryH hThis, OGRGeometryH hOther)
Compute intersection.
Generates a new geometry which is the region of intersection of the
two geometries operated on. The OGR_G_Intersects() function can be
used to test if two geometries intersect.
This function is the same as the C++ method
OGRGeometry::Intersection().
This function is built on the GEOS library, check it for the
definition of the geometry operation. If OGR is built without the GEOS
library, this function will always fail, issuing a CPLE_NotSupported
error.
Parameters:
-----------
hThis: the geometry.
hOther: the other geometry.
a new geometry representing the intersection or NULL if there is no
intersection or an error occurs.
"""
return _ogr.Geometry_Intersection(self, *args)
def Union(self, *args):
"""
Union(Geometry self, Geometry other) -> Geometry
OGRGeometryH OGR_G_Union(OGRGeometryH
hThis, OGRGeometryH hOther)
Compute union.
Generates a new geometry which is the region of union of the two
geometries operated on.
This function is the same as the C++ method OGRGeometry::Union().
This function is built on the GEOS library, check it for the
definition of the geometry operation. If OGR is built without the GEOS
library, this function will always fail, issuing a CPLE_NotSupported
error.
Parameters:
-----------
hThis: the geometry.
hOther: the other geometry.
a new geometry representing the union or NULL if an error occurs.
"""
return _ogr.Geometry_Union(self, *args)
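    # Illustrative sketch of the overlay operations above (GEOS-enabled build
    # assumed; "a" and "b" are arbitrary geometries):
    #     common = a.Intersection(b)
    #     merged = a.Union(b)
    #     if common is not None and not common.IsEmpty():
    #         print(common.ExportToWkt())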
def UnionCascaded(self, *args):
"""
UnionCascaded(Geometry self) -> Geometry
OGRGeometryH
OGR_G_UnionCascaded(OGRGeometryH hThis)
Compute union using cascading.
This function is the same as the C++ method
OGRGeometry::UnionCascaded().
This function is built on the GEOS library, check it for the
definition of the geometry operation. If OGR is built without the GEOS
library, this function will always fail, issuing a CPLE_NotSupported
error.
Parameters:
-----------
hThis: the geometry.
a new geometry representing the union or NULL if an error occurs.
"""
return _ogr.Geometry_UnionCascaded(self, *args)
def Difference(self, *args):
"""
Difference(Geometry self, Geometry other) -> Geometry
OGRGeometryH
OGR_G_Difference(OGRGeometryH hThis, OGRGeometryH hOther)
Compute difference.
Generates a new geometry which is the region of this geometry with the
region of the other geometry removed.
This function is the same as the C++ method OGRGeometry::Difference().
This function is built on the GEOS library, check it for the
definition of the geometry operation. If OGR is built without the GEOS
library, this function will always fail, issuing a CPLE_NotSupported
error.
Parameters:
-----------
hThis: the geometry.
hOther: the other geometry.
a new geometry representing the difference or NULL if the difference
is empty or an error occurs.
"""
return _ogr.Geometry_Difference(self, *args)
def SymDifference(self, *args):
"""
SymDifference(Geometry self, Geometry other) -> Geometry
OGRGeometryH
OGR_G_SymDifference(OGRGeometryH hThis, OGRGeometryH hOther)
Compute symmetric difference.
Generates a new geometry which is the symmetric difference of this
geometry and the other geometry.
This function is the same as the C++ method
OGRGeometry::SymmetricDifference().
This function is built on the GEOS library, check it for the
definition of the geometry operation. If OGR is built without the GEOS
library, this function will always fail, issuing a CPLE_NotSupported
error.
Parameters:
-----------
hThis: the geometry.
hOther: the other geometry.
a new geometry representing the symmetric difference or NULL if the
difference is empty or an error occurs.
OGR 1.8.0
"""
return _ogr.Geometry_SymDifference(self, *args)
def SymmetricDifference(self, *args):
"""
SymmetricDifference(Geometry self, Geometry other) -> Geometry
OGRGeometryH
OGR_G_SymmetricDifference(OGRGeometryH hThis, OGRGeometryH hOther)
Compute symmetric difference (deprecated)
Deprecated
See: OGR_G_SymmetricDifference()
"""
return _ogr.Geometry_SymmetricDifference(self, *args)
def Distance(self, *args):
"""
Distance(Geometry self, Geometry other) -> double
double OGR_G_Distance(OGRGeometryH
hFirst, OGRGeometryH hOther)
Compute distance between two geometries.
Returns the shortest distance between the two geometries. The distance
is expressed into the same unit as the coordinates of the geometries.
This function is the same as the C++ method OGRGeometry::Distance().
This function is built on the GEOS library, check it for the
definition of the geometry operation. If OGR is built without the GEOS
library, this function will always fail, issuing a CPLE_NotSupported
error.
Parameters:
-----------
hFirst: the first geometry to compare against.
hOther: the other geometry to compare against.
the distance between the geometries or -1 if an error occurs.
"""
return _ogr.Geometry_Distance(self, *args)
def Distance3D(self, *args):
"""
Distance3D(Geometry self, Geometry other) -> double
double
OGR_G_Distance3D(OGRGeometryH hFirst, OGRGeometryH hOther)
Returns the 3D distance between two geometries.
The distance is expressed into the same unit as the coordinates of the
geometries.
This method is built on the SFCGAL library, check it for the
definition of the geometry operation. If OGR is built without the
SFCGAL library, this method will always return -1.0
This function is the same as the C++ method OGRGeometry::Distance3D().
Parameters:
-----------
hFirst: the first geometry to compare against.
hOther: the other geometry to compare against.
distance between the two geometries
GDAL 2.2
the distance between the geometries or -1 if an error occurs.
"""
return _ogr.Geometry_Distance3D(self, *args)
def Empty(self, *args):
"""
Empty(Geometry self)
void OGR_G_Empty(OGRGeometryH hGeom)
Clear geometry information.
This restores the geometry to its initial state after construction,
and before assignment of actual geometry.
This function relates to the SFCOM IGeometry::Empty() method.
This function is the same as the CPP method OGRGeometry::empty().
Parameters:
-----------
hGeom: handle on the geometry to empty.
"""
return _ogr.Geometry_Empty(self, *args)
def IsEmpty(self, *args):
"""
IsEmpty(Geometry self) -> bool
int OGR_G_IsEmpty(OGRGeometryH hGeom)
Test if the geometry is empty.
This method is the same as the CPP method OGRGeometry::IsEmpty().
Parameters:
-----------
hGeom: The Geometry to test.
TRUE if the geometry has no points, otherwise FALSE.
"""
return _ogr.Geometry_IsEmpty(self, *args)
def IsValid(self, *args):
"""
IsValid(Geometry self) -> bool
int OGR_G_IsValid(OGRGeometryH hGeom)
Test if the geometry is valid.
This function is the same as the C++ method OGRGeometry::IsValid().
This function is built on the GEOS library, check it for the
definition of the geometry operation. If OGR is built without the GEOS
library, this function will always return FALSE.
Parameters:
-----------
hGeom: The Geometry to test.
TRUE if the geometry is valid, otherwise FALSE.
"""
return _ogr.Geometry_IsValid(self, *args)
def IsSimple(self, *args):
"""
IsSimple(Geometry self) -> bool
int OGR_G_IsSimple(OGRGeometryH
hGeom)
Returns TRUE if the geometry is simple.
Returns TRUE if the geometry has no anomalous geometric points, such
as self intersection or self tangency. The description of each
instantiable geometric class will include the specific conditions that
cause an instance of that class to be classified as not simple.
This function is the same as the C++ method OGRGeometry::IsSimple()
method.
If OGR is built without the GEOS library, this function will always
return FALSE.
Parameters:
-----------
hGeom: The Geometry to test.
TRUE if object is simple, otherwise FALSE.
"""
return _ogr.Geometry_IsSimple(self, *args)
def IsRing(self, *args):
"""
IsRing(Geometry self) -> bool
int OGR_G_IsRing(OGRGeometryH hGeom)
Test if the geometry is a ring.
This function is the same as the C++ method OGRGeometry::IsRing().
This function is built on the GEOS library, check it for the
definition of the geometry operation. If OGR is built without the GEOS
library, this function will always return FALSE.
Parameters:
-----------
hGeom: The Geometry to test.
TRUE if the geometry is a ring, otherwise FALSE.
"""
return _ogr.Geometry_IsRing(self, *args)
def Intersects(self, *args):
"""
Intersects(Geometry self, Geometry other) -> bool
int OGR_G_Intersects(OGRGeometryH
hGeom, OGRGeometryH hOtherGeom)
Do these features intersect?
Determines whether two geometries intersect. If GEOS is enabled, then
this is done in rigorous fashion otherwise TRUE is returned if the
envelopes (bounding boxes) of the two geometries overlap.
This function is the same as the CPP method OGRGeometry::Intersects.
Parameters:
-----------
hGeom: handle on the first geometry.
hOtherGeom: handle on the other geometry to test against.
TRUE if the geometries intersect, otherwise FALSE.
"""
return _ogr.Geometry_Intersects(self, *args)
def Intersect(self, *args):
"""Intersect(Geometry self, Geometry other) -> bool"""
return _ogr.Geometry_Intersect(self, *args)
def Equals(self, *args):
"""
Equals(Geometry self, Geometry other) -> bool
int OGR_G_Equals(OGRGeometryH hGeom,
OGRGeometryH hOther)
Returns TRUE if two geometries are equivalent.
This operation implements the SQL/MM ST_OrderingEquals() operation.
The comparison is done in a structural way, that is to say that the
geometry types must be identical, as well as the number and ordering
of sub-geometries and vertices. Or equivalently, two geometries are
considered equal by this method if their WKT/WKB representation is
equal. Note: this must be distinguished for equality in a spatial way
(which is the purpose of the ST_Equals() operation).
This function is the same as the CPP method OGRGeometry::Equals()
method.
Parameters:
-----------
hGeom: handle on the first geometry.
hOther: handle on the other geometry to test against.
TRUE if equivalent or FALSE otherwise.
"""
return _ogr.Geometry_Equals(self, *args)
def Equal(self, *args):
"""Equal(Geometry self, Geometry other) -> bool"""
return _ogr.Geometry_Equal(self, *args)
def Disjoint(self, *args):
"""
Disjoint(Geometry self, Geometry other) -> bool
int OGR_G_Disjoint(OGRGeometryH
hThis, OGRGeometryH hOther)
Test for disjointness.
Tests if this geometry and the other geometry are disjoint.
This function is the same as the C++ method OGRGeometry::Disjoint().
This function is built on the GEOS library, check it for the
definition of the geometry operation. If OGR is built without the GEOS
library, this function will always fail, issuing a CPLE_NotSupported
error.
Parameters:
-----------
hThis: the geometry to compare.
hOther: the other geometry to compare.
TRUE if they are disjoint, otherwise FALSE.
"""
return _ogr.Geometry_Disjoint(self, *args)
def Touches(self, *args):
"""
Touches(Geometry self, Geometry other) -> bool
int OGR_G_Touches(OGRGeometryH hThis,
OGRGeometryH hOther)
Test for touching.
Tests if this geometry and the other geometry are touching.
This function is the same as the C++ method OGRGeometry::Touches().
This function is built on the GEOS library, check it for the
definition of the geometry operation. If OGR is built without the GEOS
library, this function will always fail, issuing a CPLE_NotSupported
error.
Parameters:
-----------
hThis: the geometry to compare.
hOther: the other geometry to compare.
TRUE if they are touching, otherwise FALSE.
"""
return _ogr.Geometry_Touches(self, *args)
def Crosses(self, *args):
"""
Crosses(Geometry self, Geometry other) -> bool
int OGR_G_Crosses(OGRGeometryH hThis,
OGRGeometryH hOther)
Test for crossing.
Tests if this geometry and the other geometry are crossing.
This function is the same as the C++ method OGRGeometry::Crosses().
This function is built on the GEOS library, check it for the
definition of the geometry operation. If OGR is built without the GEOS
library, this function will always fail, issuing a CPLE_NotSupported
error.
Parameters:
-----------
hThis: the geometry to compare.
hOther: the other geometry to compare.
TRUE if they are crossing, otherwise FALSE.
"""
return _ogr.Geometry_Crosses(self, *args)
def Within(self, *args):
"""
Within(Geometry self, Geometry other) -> bool
int OGR_G_Within(OGRGeometryH hThis,
OGRGeometryH hOther)
Test for containment.
Tests if this geometry is within the other geometry.
This function is the same as the C++ method OGRGeometry::Within().
This function is built on the GEOS library, check it for the
definition of the geometry operation. If OGR is built without the GEOS
library, this function will always fail, issuing a CPLE_NotSupported
error.
Parameters:
-----------
hThis: the geometry to compare.
hOther: the other geometry to compare.
TRUE if hThis is within hOther, otherwise FALSE.
"""
return _ogr.Geometry_Within(self, *args)
def Contains(self, *args):
"""
Contains(Geometry self, Geometry other) -> bool
int OGR_G_Contains(OGRGeometryH
hThis, OGRGeometryH hOther)
Test for containment.
Tests if this geometry contains the other geometry.
This function is the same as the C++ method OGRGeometry::Contains().
This function is built on the GEOS library, check it for the
definition of the geometry operation. If OGR is built without the GEOS
library, this function will always fail, issuing a CPLE_NotSupported
error.
Parameters:
-----------
hThis: the geometry to compare.
hOther: the other geometry to compare.
TRUE if hThis contains hOther geometry, otherwise FALSE.
"""
return _ogr.Geometry_Contains(self, *args)
def Overlaps(self, *args):
"""
Overlaps(Geometry self, Geometry other) -> bool
int OGR_G_Overlaps(OGRGeometryH
hThis, OGRGeometryH hOther)
Test for overlap.
Tests if this geometry and the other geometry overlap, that is their
intersection has a non-zero area.
This function is the same as the C++ method OGRGeometry::Overlaps().
This function is built on the GEOS library, check it for the
definition of the geometry operation. If OGR is built without the GEOS
library, this function will always fail, issuing a CPLE_NotSupported
error.
Parameters:
-----------
hThis: the geometry to compare.
hOther: the other geometry to compare.
TRUE if they are overlapping, otherwise FALSE.
"""
return _ogr.Geometry_Overlaps(self, *args)
def TransformTo(self, *args):
"""
TransformTo(Geometry self, SpatialReference reference) -> OGRErr
OGRErr
OGR_G_TransformTo(OGRGeometryH hGeom, OGRSpatialReferenceH hSRS)
Transform geometry to new spatial reference system.
This function will transform the coordinates of a geometry from their
current spatial reference system to a new target spatial reference
system. Normally this means reprojecting the vectors, but it could
include datum shifts, and changes of units.
This function will only work if the geometry already has an assigned
spatial reference system, and if it is transformable to the target
coordinate system.
Because this function requires internal creation and initialization of
an OGRCoordinateTransformation object it is significantly more
expensive to use this function to transform many geometries than it is
to create the OGRCoordinateTransformation in advance, and call
transform() with that transformation. This function exists primarily
for convenience when only transforming a single geometry.
This function is the same as the CPP method OGRGeometry::transformTo.
Parameters:
-----------
hGeom: handle on the geometry to apply the transform to.
hSRS: handle on the spatial reference system to apply.
OGRERR_NONE on success, or an error code.
"""
return _ogr.Geometry_TransformTo(self, *args)
def Transform(self, *args):
"""
Transform(Geometry self, CoordinateTransformation trans) -> OGRErr
OGRErr OGR_G_Transform(OGRGeometryH
hGeom, OGRCoordinateTransformationH hTransform)
Apply arbitrary coordinate transformation to geometry.
This function will transform the coordinates of a geometry from their
current spatial reference system to a new target spatial reference
system. Normally this means reprojecting the vectors, but it could
include datum shifts, and changes of units.
Note that this function does not require that the geometry already
have a spatial reference system. It will be assumed that they can be
treated as having the source spatial reference system of the
OGRCoordinateTransformation object, and the actual SRS of the geometry
will be ignored. On successful completion the output
OGRSpatialReference of the OGRCoordinateTransformation will be
assigned to the geometry.
This function is the same as the CPP method OGRGeometry::transform.
Parameters:
-----------
hGeom: handle on the geometry to apply the transform to.
hTransform: handle on the transformation to apply.
OGRERR_NONE on success or an error code.
"""
return _ogr.Geometry_Transform(self, *args)
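    # Illustrative note (not part of the generated API): as the TransformTo()
    # docstring above explains, reprojecting many geometries is cheaper with a
    # single reusable transformation object, e.g.:
    #
    #   ct = osr.CoordinateTransformation(src_srs, dst_srs)
    #   for geom in geoms:
    #       geom.Transform(ct)
    #
    # rather than calling geom.TransformTo(dst_srs) per geometry, which
    # rebuilds the transformation each time.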
def GetSpatialReference(self, *args):
"""
GetSpatialReference(Geometry self) -> SpatialReference
OGRSpatialReferenceH
OGR_G_GetSpatialReference(OGRGeometryH hGeom)
Returns spatial reference system for geometry.
This function relates to the SFCOM IGeometry::get_SpatialReference()
method.
This function is the same as the CPP method
OGRGeometry::getSpatialReference().
Parameters:
-----------
hGeom: handle on the geometry to get spatial reference from.
a reference to the spatial reference geometry.
"""
return _ogr.Geometry_GetSpatialReference(self, *args)
def AssignSpatialReference(self, *args):
"""
AssignSpatialReference(Geometry self, SpatialReference reference)
void
OGR_G_AssignSpatialReference(OGRGeometryH hGeom, OGRSpatialReferenceH
hSRS)
Assign spatial reference to this object.
Any existing spatial reference is replaced, but under no circumstances
does this result in the object being reprojected. It is just changing
the interpretation of the existing geometry. Note that assigning a
spatial reference increments the reference count on the
OGRSpatialReference, but does not copy it.
Starting with GDAL 2.3, this will also assign the spatial reference to
potential sub-geometries of the geometry ( OGRGeometryCollection,
OGRCurvePolygon/OGRPolygon, OGRCompoundCurve, OGRPolyhedralSurface and
their derived classes).
This is similar to the SFCOM IGeometry::put_SpatialReference() method.
This function is the same as the CPP method
OGRGeometry::assignSpatialReference.
Parameters:
-----------
hGeom: handle on the geometry to apply the new spatial reference
system.
hSRS: handle on the new spatial reference system to apply.
"""
return _ogr.Geometry_AssignSpatialReference(self, *args)
def CloseRings(self, *args):
"""
CloseRings(Geometry self)
void OGR_G_CloseRings(OGRGeometryH
hGeom)
Force rings to be closed.
If this geometry, or any contained geometries has polygon rings that
are not closed, they will be closed by adding the starting point at
the end.
Parameters:
-----------
hGeom: handle to the geometry.
"""
return _ogr.Geometry_CloseRings(self, *args)
def FlattenTo2D(self, *args):
"""
FlattenTo2D(Geometry self)
void
OGR_G_FlattenTo2D(OGRGeometryH hGeom)
Convert geometry to strictly 2D.
In a sense this converts all Z coordinates to 0.0.
This function is the same as the CPP method
OGRGeometry::flattenTo2D().
Parameters:
-----------
hGeom: handle on the geometry to convert.
"""
return _ogr.Geometry_FlattenTo2D(self, *args)
def Segmentize(self, *args):
"""
Segmentize(Geometry self, double dfMaxLength)
void OGR_G_Segmentize(OGRGeometryH
hGeom, double dfMaxLength)
Modify the geometry such that it has no segment longer than the given
distance.
Interpolated points will have Z and M values (if needed) set to 0.
Distance computation is performed in 2d only.
This function is the same as the CPP method OGRGeometry::segmentize().
Parameters:
-----------
hGeom: handle on the geometry to segmentize
dfMaxLength: the maximum distance between 2 points after
segmentization
"""
return _ogr.Geometry_Segmentize(self, *args)
def GetEnvelope(self, *args):
"""
GetEnvelope(Geometry self)
void
OGR_G_GetEnvelope(OGRGeometryH hGeom, OGREnvelope *psEnvelope)
Computes and returns the bounding envelope for this geometry in the
passed psEnvelope structure.
This function is the same as the CPP method
OGRGeometry::getEnvelope().
Parameters:
-----------
hGeom: handle of the geometry to get envelope from.
psEnvelope: the structure in which to place the results.
"""
return _ogr.Geometry_GetEnvelope(self, *args)
def GetEnvelope3D(self, *args):
"""
GetEnvelope3D(Geometry self)
void
OGR_G_GetEnvelope3D(OGRGeometryH hGeom, OGREnvelope3D *psEnvelope)
Computes and returns the bounding envelope (3D) for this geometry in
the passed psEnvelope structure.
This function is the same as the CPP method
OGRGeometry::getEnvelope().
Parameters:
-----------
hGeom: handle of the geometry to get envelope from.
psEnvelope: the structure in which to place the results.
OGR 1.9.0
"""
return _ogr.Geometry_GetEnvelope3D(self, *args)
def Centroid(self, *args):
"""
Centroid(Geometry self) -> Geometry
int OGR_G_Centroid(OGRGeometryH
hGeom, OGRGeometryH hCentroidPoint)
Compute the geometry centroid.
The centroid location is applied to the passed in OGRPoint object. The
centroid is not necessarily within the geometry.
This method relates to the SFCOM ISurface::get_Centroid() method
however the current implementation based on GEOS can operate on other
geometry types such as multipoint, linestring, and geometrycollection
(e.g. multipolygons). OGC SF SQL 1.1 defines the operation for surfaces
(polygons). SQL/MM-Part 3 defines the operation for surfaces and
multisurfaces (multipolygons).
This function is the same as the C++ method OGRGeometry::Centroid().
This function is built on the GEOS library, check it for the
definition of the geometry operation. If OGR is built without the GEOS
library, this function will always fail, issuing a CPLE_NotSupported
error.
OGRERR_NONE on success or OGRERR_FAILURE on error.
"""
return _ogr.Geometry_Centroid(self, *args)
def PointOnSurface(self, *args):
"""
PointOnSurface(Geometry self) -> Geometry
OGRGeometryH
OGR_G_PointOnSurface(OGRGeometryH hGeom)
Returns a point guaranteed to lie on the surface.
This method relates to the SFCOM ISurface::get_PointOnSurface() method
however the current implementation based on GEOS can operate on other
geometry types than the types that are supported by SQL/MM-Part 3:
surfaces (polygons) and multisurfaces (multipolygons).
This method is built on the GEOS library, check it for the definition
of the geometry operation. If OGR is built without the GEOS library,
this method will always fail, issuing a CPLE_NotSupported error.
Parameters:
-----------
hGeom: the geometry to operate on.
a point guaranteed to lie on the surface or NULL if an error occurred.
OGR 1.10
"""
return _ogr.Geometry_PointOnSurface(self, *args)
def WkbSize(self, *args):
"""
WkbSize(Geometry self) -> int
int OGR_G_WkbSize(OGRGeometryH hGeom)
Returns size of related binary representation.
This function returns the exact number of bytes required to hold the
well known binary representation of this geometry object. Its
computation may be slightly expensive for complex geometries.
This function relates to the SFCOM IWks::WkbSize() method.
This function is the same as the CPP method OGRGeometry::WkbSize().
Parameters:
-----------
hGeom: handle on the geometry to get the binary size from.
size of binary representation in bytes.
"""
return _ogr.Geometry_WkbSize(self, *args)
def GetCoordinateDimension(self, *args):
"""
GetCoordinateDimension(Geometry self) -> int
int
OGR_G_GetCoordinateDimension(OGRGeometryH hGeom)
Get the dimension of the coordinates in this geometry.
This function is the same as the CPP method
OGRGeometry::getCoordinateDimension().
Parameters:
-----------
hGeom: handle on the geometry to get the dimension of the coordinates
from.
Deprecated; use OGR_G_CoordinateDimension(), OGR_G_Is3D(), or
OGR_G_IsMeasured().
this will return 2 or 3.
"""
return _ogr.Geometry_GetCoordinateDimension(self, *args)
def CoordinateDimension(self, *args):
"""
CoordinateDimension(Geometry self) -> int
int
OGR_G_CoordinateDimension(OGRGeometryH hGeom)
Get the dimension of the coordinates in this geometry.
This function is the same as the CPP method
OGRGeometry::CoordinateDimension().
Parameters:
-----------
hGeom: handle on the geometry to get the dimension of the coordinates
from.
this will return 2 for XY, 3 for XYZ and XYM, and 4 for XYZM data.
GDAL 2.1
"""
return _ogr.Geometry_CoordinateDimension(self, *args)
def Is3D(self, *args):
"""
Is3D(Geometry self) -> int
int OGR_G_Is3D(OGRGeometryH hGeom)
See whether this geometry has Z coordinates.
This function is the same as the CPP method OGRGeometry::Is3D().
Parameters:
-----------
hGeom: handle on the geometry to check whether it has Z coordinates.
TRUE if the geometry has Z coordinates.
GDAL 2.1
"""
return _ogr.Geometry_Is3D(self, *args)
def IsMeasured(self, *args):
"""
IsMeasured(Geometry self) -> int
int OGR_G_IsMeasured(OGRGeometryH
hGeom)
See whether this geometry is measured.
This function is the same as the CPP method OGRGeometry::IsMeasured().
Parameters:
-----------
hGeom: handle on the geometry to check whether it is measured.
TRUE if the geometry has M coordinates.
GDAL 2.1
"""
return _ogr.Geometry_IsMeasured(self, *args)
def SetCoordinateDimension(self, *args):
"""
SetCoordinateDimension(Geometry self, int dimension)
void
OGR_G_SetCoordinateDimension(OGRGeometryH hGeom, int nNewDimension)
Set the coordinate dimension.
This method sets the explicit coordinate dimension. Setting the
coordinate dimension of a geometry to 2 should zero out any existing Z
values. Setting the dimension of a geometry collection, a compound
curve, a polygon, etc. will affect the children geometries. This will
also remove the M dimension if present before this call.
Deprecated; use OGR_G_Set3D() or OGR_G_SetMeasured().
Parameters:
-----------
hGeom: handle on the geometry to set the dimension of the
coordinates.
nNewDimension: New coordinate dimension value, either 2 or 3.
"""
return _ogr.Geometry_SetCoordinateDimension(self, *args)
def Set3D(self, *args):
"""
Set3D(Geometry self, int b3D)
void OGR_G_Set3D(OGRGeometryH hGeom,
int bIs3D)
Add or remove the Z coordinate dimension.
This method adds or removes the explicit Z coordinate dimension.
Removing the Z coordinate dimension of a geometry will remove any
existing Z values. Adding the Z dimension to a geometry collection, a
compound curve, a polygon, etc. will affect the children geometries.
Parameters:
-----------
hGeom: handle on the geometry to set or unset the Z dimension.
bIs3D: Should the geometry have a Z dimension, either TRUE or FALSE.
GDAL 2.1
"""
return _ogr.Geometry_Set3D(self, *args)
def SetMeasured(self, *args):
"""
SetMeasured(Geometry self, int bMeasured)
void
OGR_G_SetMeasured(OGRGeometryH hGeom, int bIsMeasured)
Add or remove the M coordinate dimension.
This method adds or removes the explicit M coordinate dimension.
Removing the M coordinate dimension of a geometry will remove any
existing M values. Adding the M dimension to a geometry collection, a
compound curve, a polygon, etc. will affect the children geometries.
Parameters:
-----------
hGeom: handle on the geometry to set or unset the M dimension.
bIsMeasured: Should the geometry have a M dimension, either TRUE or
FALSE.
GDAL 2.1
"""
return _ogr.Geometry_SetMeasured(self, *args)
def GetDimension(self, *args):
"""
GetDimension(Geometry self) -> int
int
OGR_G_GetDimension(OGRGeometryH hGeom)
Get the dimension of this geometry.
This function corresponds to the SFCOM IGeometry::GetDimension()
method. It indicates the dimension of the geometry, but does not
indicate the dimension of the underlying space (as indicated by
OGR_G_GetCoordinateDimension() function).
This function is the same as the CPP method
OGRGeometry::getDimension().
Parameters:
-----------
hGeom: handle on the geometry to get the dimension from.
0 for points, 1 for lines and 2 for surfaces.
"""
return _ogr.Geometry_GetDimension(self, *args)
def HasCurveGeometry(self, *args):
"""HasCurveGeometry(Geometry self, int bLookForCircular=False) -> int"""
return _ogr.Geometry_HasCurveGeometry(self, *args)
def GetLinearGeometry(self, *args, **kwargs):
"""GetLinearGeometry(Geometry self, double dfMaxAngleStepSizeDegrees=0.0, char ** options=None) -> Geometry"""
return _ogr.Geometry_GetLinearGeometry(self, *args, **kwargs)
def GetCurveGeometry(self, *args, **kwargs):
"""GetCurveGeometry(Geometry self, char ** options=None) -> Geometry"""
return _ogr.Geometry_GetCurveGeometry(self, *args, **kwargs)
def Value(self, *args):
"""Value(Geometry self, double dfDistance) -> Geometry"""
return _ogr.Geometry_Value(self, *args)
def Destroy(self):
self.__swig_destroy__(self)
self.__del__()
self.thisown = 0
def __str__(self):
return self.ExportToWkt()
def __reduce__(self):
return (self.__class__, (), self.ExportToWkb())
def __setstate__(self, state):
result = CreateGeometryFromWkb(state)
self.this = result.this
def __iter__(self):
self.iter_subgeom = 0
return self
def next(self):
if self.iter_subgeom < self.GetGeometryCount():
subgeom = self.GetGeometryRef(self.iter_subgeom)
self.iter_subgeom += 1
return subgeom
else:
raise StopIteration
Geometry_swigregister = _ogr.Geometry_swigregister
Geometry_swigregister(Geometry)
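# A minimal usage sketch (illustrative only, not part of the generated
# bindings): exercising the GEOS-backed set operations and predicates
# documented on Geometry above. Assumes GDAL was built with GEOS support;
# the helper name is hypothetical.
def _example_geometry_overlay():
    a = CreateGeometryFromWkt('POLYGON ((0 0, 0 2, 2 2, 2 0, 0 0))')
    b = CreateGeometryFromWkt('POLYGON ((1 1, 1 3, 3 3, 3 1, 1 1))')
    assert a.Intersects(b) and not a.Contains(b)
    inter = a.Intersection(b)  # the shared 1x1 square
    union = a.Union(b)         # an L-shaped polygon covering both
    diff = a.Difference(b)     # a with the overlap removed
    return inter, union, diff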
def GetDriverCount(*args):
"""GetDriverCount() -> int"""
return _ogr.GetDriverCount(*args)
def GetOpenDSCount(*args):
"""GetOpenDSCount() -> int"""
return _ogr.GetOpenDSCount(*args)
def SetGenerate_DB2_V72_BYTE_ORDER(*args):
"""SetGenerate_DB2_V72_BYTE_ORDER(int bGenerate_DB2_V72_BYTE_ORDER) -> OGRErr"""
return _ogr.SetGenerate_DB2_V72_BYTE_ORDER(*args)
def RegisterAll(*args):
"""RegisterAll()"""
return _ogr.RegisterAll(*args)
def GeometryTypeToName(*args):
"""GeometryTypeToName(OGRwkbGeometryType eType) -> char const *"""
return _ogr.GeometryTypeToName(*args)
def GetFieldTypeName(*args):
"""GetFieldTypeName(OGRFieldType type) -> char const *"""
return _ogr.GetFieldTypeName(*args)
def GetFieldSubTypeName(*args):
"""GetFieldSubTypeName(OGRFieldSubType type) -> char const *"""
return _ogr.GetFieldSubTypeName(*args)
def GT_Flatten(*args):
"""GT_Flatten(OGRwkbGeometryType eType) -> OGRwkbGeometryType"""
return _ogr.GT_Flatten(*args)
def GT_SetZ(*args):
"""GT_SetZ(OGRwkbGeometryType eType) -> OGRwkbGeometryType"""
return _ogr.GT_SetZ(*args)
def GT_SetM(*args):
"""GT_SetM(OGRwkbGeometryType eType) -> OGRwkbGeometryType"""
return _ogr.GT_SetM(*args)
def GT_SetModifier(*args):
"""GT_SetModifier(OGRwkbGeometryType eType, int bSetZ, int bSetM=False) -> OGRwkbGeometryType"""
return _ogr.GT_SetModifier(*args)
def GT_HasZ(*args):
"""GT_HasZ(OGRwkbGeometryType eType) -> int"""
return _ogr.GT_HasZ(*args)
def GT_HasM(*args):
"""GT_HasM(OGRwkbGeometryType eType) -> int"""
return _ogr.GT_HasM(*args)
def GT_IsSubClassOf(*args):
"""GT_IsSubClassOf(OGRwkbGeometryType eType, OGRwkbGeometryType eSuperType) -> int"""
return _ogr.GT_IsSubClassOf(*args)
def GT_IsCurve(*args):
"""GT_IsCurve(OGRwkbGeometryType arg1) -> int"""
return _ogr.GT_IsCurve(*args)
def GT_IsSurface(*args):
"""GT_IsSurface(OGRwkbGeometryType arg1) -> int"""
return _ogr.GT_IsSurface(*args)
def GT_IsNonLinear(*args):
"""GT_IsNonLinear(OGRwkbGeometryType arg1) -> int"""
return _ogr.GT_IsNonLinear(*args)
def GT_GetCollection(*args):
"""GT_GetCollection(OGRwkbGeometryType eType) -> OGRwkbGeometryType"""
return _ogr.GT_GetCollection(*args)
def GT_GetCurve(*args):
"""GT_GetCurve(OGRwkbGeometryType eType) -> OGRwkbGeometryType"""
return _ogr.GT_GetCurve(*args)
def GT_GetLinear(*args):
"""GT_GetLinear(OGRwkbGeometryType eType) -> OGRwkbGeometryType"""
return _ogr.GT_GetLinear(*args)
def SetNonLinearGeometriesEnabledFlag(*args):
"""SetNonLinearGeometriesEnabledFlag(int bFlag)"""
return _ogr.SetNonLinearGeometriesEnabledFlag(*args)
def GetNonLinearGeometriesEnabledFlag(*args):
"""GetNonLinearGeometriesEnabledFlag() -> int"""
return _ogr.GetNonLinearGeometriesEnabledFlag(*args)
def GetOpenDS(*args):
"""GetOpenDS(int ds_number) -> DataSource"""
return _ogr.GetOpenDS(*args)
def Open(*args, **kwargs):
"""Open(char const * utf8_path, int update=0) -> DataSource"""
return _ogr.Open(*args, **kwargs)
def OpenShared(*args, **kwargs):
"""OpenShared(char const * utf8_path, int update=0) -> DataSource"""
return _ogr.OpenShared(*args, **kwargs)
def GetDriverByName(*args):
"""GetDriverByName(char const * name) -> Driver"""
return _ogr.GetDriverByName(*args)
def GetDriver(*args):
"""GetDriver(int driver_number) -> Driver"""
return _ogr.GetDriver(*args)
def GeneralCmdLineProcessor(*args):
"""GeneralCmdLineProcessor(char ** papszArgv, int nOptions=0) -> char **"""
return _ogr.GeneralCmdLineProcessor(*args)
def TermProgress_nocb(*args, **kwargs):
"""TermProgress_nocb(double dfProgress, char const * pszMessage=None, void * pData=None) -> int"""
return _ogr.TermProgress_nocb(*args, **kwargs)
_ogr.TermProgress_swigconstant(_ogr)
TermProgress = _ogr.TermProgress
# This file is compatible with both classic and new-style classes.
| 31.854346
| 231
| 0.659247
|
4a12798cc100290651e895dc3a34b951f10f6977
| 424
|
py
|
Python
|
accounts/forms.py
|
aleattene/django-for-beginners
|
e9277d38c274e9b3d4ef1f852f9f5cf9228a165d
|
[
"MIT"
] | null | null | null |
accounts/forms.py
|
aleattene/django-for-beginners
|
e9277d38c274e9b3d4ef1f852f9f5cf9228a165d
|
[
"MIT"
] | null | null | null |
accounts/forms.py
|
aleattene/django-for-beginners
|
e9277d38c274e9b3d4ef1f852f9f5cf9228a165d
|
[
"MIT"
] | null | null | null |
from django import forms
from django.contrib.auth.forms import UserCreationForm, UserChangeForm
from .models import CustomUser
class CustomUserCreationForm(UserCreationForm):
    class Meta(UserCreationForm.Meta):
model = CustomUser
fields = ('username', 'email', 'age',)
class CustomUserChangeForm(UserChangeForm):
class Meta:
model = CustomUser
fields = ('username', 'email', 'age',)
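# Illustrative usage sketch (assumes AUTH_USER_MODEL = 'accounts.CustomUser'
# in settings.py and a CustomUser model defining an 'age' field):
#   form = CustomUserCreationForm(
#       data={'username': 'alice', 'email': 'alice@example.com', 'age': 30,
#             'password1': 's3cret-pass', 'password2': 's3cret-pass'})
#   form.is_valid()  # True if the passwords match and all fields validate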
| 22.315789
| 70
| 0.705189
|
4a127af21e5c8cb8ed6fe6670486ce5fefb0334e
| 192
|
py
|
Python
|
src/kgmk/dsa/algebra/modular/inverse/fermat/jit.py
|
kagemeka/python
|
486ce39d97360b61029527bacf00a87fdbcf552c
|
[
"MIT"
] | null | null | null |
src/kgmk/dsa/algebra/modular/inverse/fermat/jit.py
|
kagemeka/python
|
486ce39d97360b61029527bacf00a87fdbcf552c
|
[
"MIT"
] | null | null | null |
src/kgmk/dsa/algebra/modular/inverse/fermat/jit.py
|
kagemeka/python
|
486ce39d97360b61029527bacf00a87fdbcf552c
|
[
"MIT"
] | null | null | null |
from kgmk.dsa.algebra.modular.pow.bottomup.jit import (
mod_pow,
)
# TODO cut below
import numba as nb
@nb.njit
def mod_inverse(n: int, p: int) -> int:
return mod_pow(n, p - 2, p)
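# Usage sketch (illustrative; assumes the kgmk package above is importable):
# by Fermat's little theorem, for prime p and n not divisible by p,
# n ** (p - 2) % p is the modular inverse of n modulo p.
if __name__ == '__main__':
    assert mod_inverse(3, 7) == 5  # 3 * 5 == 15 == 1 (mod 7)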
| 14.769231
| 55
| 0.661458
|
4a127d141a8ec9a743ce78688a79a137347f935d
| 1,036
|
py
|
Python
|
Alerts-Analysis.py
|
michaelduong/yolocapital
|
399a78cdfe1e2d2bad80547830947d364eec0e9c
|
[
"MIT"
] | null | null | null |
Alerts-Analysis.py
|
michaelduong/yolocapital
|
399a78cdfe1e2d2bad80547830947d364eec0e9c
|
[
"MIT"
] | null | null | null |
Alerts-Analysis.py
|
michaelduong/yolocapital
|
399a78cdfe1e2d2bad80547830947d364eec0e9c
|
[
"MIT"
] | null | null | null |
from pathlib import Path, PureWindowsPath
import pandas as pd
import numpy as np
# Path handling for either OS X or Windows
data_folder = Path.cwd() / "Options Alerts Dump" / "Options Alerts"
path_on_windows = PureWindowsPath(data_folder)
# Collect all the CSV files in the directory
excel_files = [file for file in data_folder.iterdir() if file.suffix == ".csv"]
result = []
# Read each CSV file into a dataframe and append it to the result list
for individual_files in excel_files:
data = pd.read_csv(individual_files)
result.append(data)
# Concatenate the dataframes into one dataframe
new_df = pd.concat(result)
# Drop irrelevant columns
new_df = new_df.iloc[:, 0:10]
new_df.drop(["SYMBOL", "TIME", "EXP", "STRIKE"], axis=1, inplace=True)
# Group data by the DETAILS column and compute %GAIN statistics
grouped_df = new_df.groupby("DETAILS").agg({"%GAIN":['count', 'max', 'mean']})
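# The agg above yields a frame with MultiIndex columns, roughly (values
# illustrative):
#                %GAIN
#                count   max    mean
# DETAILS
# <alert text>      12   3.40   1.15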
# Output to Excel file
grouped_df.to_excel("BB Alert Result Statistics.xlsx")
print("Finished filtering statistics data for alerts")
| 33.419355
| 91
| 0.750965
|
4a127d2fe570c9917efe679bf0a9c1370f59fee1
| 1,523
|
py
|
Python
|
medium/17-Letter Combinations of a Phone Number.py
|
Davidxswang/leetcode
|
d554b7f5228f14c646f726ddb91014a612673e06
|
[
"Apache-2.0"
] | 2
|
2020-05-08T02:17:17.000Z
|
2020-05-17T04:55:56.000Z
|
medium/17-Letter Combinations of a Phone Number.py
|
Davidxswang/leetcode
|
d554b7f5228f14c646f726ddb91014a612673e06
|
[
"Apache-2.0"
] | null | null | null |
medium/17-Letter Combinations of a Phone Number.py
|
Davidxswang/leetcode
|
d554b7f5228f14c646f726ddb91014a612673e06
|
[
"Apache-2.0"
] | null | null | null |
"""
https://leetcode.com/problems/letter-combinations-of-a-phone-number/
Given a string containing digits from 2-9 inclusive, return all possible letter combinations that the number could represent.
A mapping of digit to letters (just like on the telephone buttons) is given below. Note that 1 does not map to any letters.
Example:
Input: "23"
Output: ["ad", "ae", "af", "bd", "be", "bf", "cd", "ce", "cf"].
Note:
Although the above answer is in lexicographical order, your answer could be in any order you want.
"""
# time complexity: O(3^n * 4^m), space complexity: O(1), where n is the number of number '2' to '8' and m is the number of number '9'
# this is a very good example of recursion.
from typing import List


class Solution:
def letterCombinations(self, digits: str) -> List[str]:
if len(digits) == 0:
return []
result = []
dic = { '2': ['a', 'b', 'c'],
'3': ['d', 'e', 'f'],
'4': ['g', 'h', 'i'],
'5': ['j', 'k', 'l'],
'6': ['m', 'n', 'o'],
'7': ['p', 'q', 'r', 's'],
'8': ['t', 'u', 'v'],
'9': ['w', 'x', 'y', 'z']
}
def recurse(base: str, digit: str) -> None:
if len(digit) == 0:
result.append(base)
return
letter = digit[0]
for i in range(len(dic[letter])):
recurse(base+dic[letter][i], digit[1:])
recurse('', digits)
return result
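# Usage sketch (illustrative):
if __name__ == '__main__':
    # "2" -> {a, b, c}, "3" -> {d, e, f}: 3 * 3 = 9 combinations
    print(Solution().letterCombinations('23'))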
| 32.404255
| 133
| 0.502298
|
4a127dfab94c04e33eefe11b57a9f7f7f30bb59b
| 14,429
|
py
|
Python
|
application.py
|
jasonelijah/itemcatalog
|
ce271510bb576f6891a88223f97c7671e0034519
|
[
"MIT"
] | null | null | null |
application.py
|
jasonelijah/itemcatalog
|
ce271510bb576f6891a88223f97c7671e0034519
|
[
"MIT"
] | null | null | null |
application.py
|
jasonelijah/itemcatalog
|
ce271510bb576f6891a88223f97c7671e0034519
|
[
"MIT"
] | null | null | null |
import random
import string
import json
import httplib2
import requests
from flask import (Flask, render_template, url_for, request,
                   redirect, jsonify, make_response, flash)
from flask import session as login_session
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker
from database_setup import Base, Category, CategoryItem, User
from oauth2client.client import flow_from_clientsecrets, FlowExchangeError
app = Flask(__name__)
CLIENT_ID = json.loads(open('client_secrets.json', 'r').read())[
'web']['client_id']
# Connect to Database
#engine = create_engine('sqlite:///gamelibrarywithusers.db')
#Base.metadata.bind = engine
engine = create_engine('postgresql://catalog:grader@localhost/gameslibrarywithusers')
# Create database session
DBSession = sessionmaker(bind=engine)
session = DBSession()
# User Helper Functions
def createUser(login_session):
newUser = User(name=login_session['username'], email=login_session[
'email'], picture=login_session['picture'])
session.add(newUser)
session.commit()
user = session.query(User).filter_by(email=login_session['email']).one()
return user.id
def getUserInfo(user_id):
user = session.query(User).filter_by(id=user_id).one()
return user
def getUserID(email):
try:
user = session.query(User).filter_by(email=email).one()
return user.id
except:
return None
@app.route('/')
@app.route('/catalog')
def showCategories():
# Get all categories
categories = session.query(Category).all()
# Get lastest 5 category items added
categoryItems = session.query(CategoryItem).all()
return render_template(
'categories.html', categories=categories, categoryItems=categoryItems)
@app.route('/catalog/<int:catalog_id>')
@app.route('/catalog/<int:catalog_id>/items')
def showCategory(catalog_id):
# Get all categories
categories = session.query(Category).all()
# Get category
category = session.query(Category).filter_by(id=catalog_id).first()
# Get name of category
categoryName = category.name
# Get all items of a specific category
categoryItems = session.query(CategoryItem).filter_by(
category_id=catalog_id).all()
# Get count of category items
categoryItemsCount = session.query(
CategoryItem).filter_by(category_id=catalog_id).count()
return render_template(
'category.html', categories=categories, categoryItems=categoryItems,
categoryName=categoryName, categoryItemsCount=categoryItemsCount)
@app.route('/catalog/<int:catalog_id>/items/<int:item_id>')
def showCategoryItem(catalog_id, item_id):
# Get category item
categoryItem = session.query(CategoryItem).filter_by(id=item_id).first()
# Get creator of item
creator = getUserInfo(categoryItem.user_id)
return render_template(
'categoryItem.html',
categoryItem=categoryItem, creator=creator)
@app.route('/catalog/add', methods=['GET', 'POST'])
def addCategoryItem():
# Check if user is logged in
if 'username' not in login_session:
return redirect('/login')
if request.method == 'POST':
# TODO: Retain data when there is an error
if not request.form['name']:
flash('Please enter Game name')
return redirect(url_for('addCategoryItem'))
if not request.form['description']:
flash('Please add a description')
return redirect(url_for('addCategoryItem'))
# Add category item
        newCategoryItem = CategoryItem(
            name=request.form['name'],
            description=request.form['description'],
            category_id=request.form['category'],
            user_id=login_session['user_id'])
session.add(newCategoryItem)
session.commit()
return redirect(url_for('showCategories'))
else:
# Get all categories
categories = session.query(Category).all()
return render_template('addCategoryItem.html', categories=categories)
@app.route(
'/catalog/<int:catalog_id>/items/<int:item_id>/edit', methods=[
'GET', 'POST'])
def editCategoryItem(catalog_id, item_id):
# Check if user is logged in
if 'username' not in login_session:
return redirect('/login')
# Get category item
categoryItem = session.query(CategoryItem).filter_by(id=item_id).first()
# Get creator of item
creator = getUserInfo(categoryItem.user_id)
# Check if logged in user is creator of category item
if creator.id != login_session['user_id']:
return redirect('/login')
# Get all categories
categories = session.query(Category).all()
if request.method == 'POST':
if request.form['name']:
categoryItem.name = request.form['name']
if request.form['description']:
categoryItem.description = request.form['description']
if request.form['category']:
categoryItem.category_id = request.form['category']
return redirect(
url_for(
'showCategoryItem', catalog_id=categoryItem.category_id,
item_id=categoryItem.id))
else:
return render_template(
'editCategoryItem.html',
categories=categories,
categoryItem=categoryItem)
@app.route(
'/catalog/<int:catalog_id>/items/<int:item_id>/delete', methods=[
'GET', 'POST'])
def deleteCategoryItem(catalog_id, item_id):
# Check if user is logged in
if 'username' not in login_session:
return redirect('/login')
# Get category item
categoryItem = session.query(CategoryItem).filter_by(id=item_id).first()
# Get creator of item
creator = getUserInfo(categoryItem.user_id)
# Check if logged in user is creator of category item
if creator.id != login_session['user_id']:
return redirect('/login')
if request.method == 'POST':
session.delete(categoryItem)
session.commit()
return redirect(
url_for(
'showCategory', catalog_id=categoryItem.category_id))
else:
return render_template(
'deleteCategoryItem.html', categoryItem=categoryItem)
@app.route('/login')
def login():
# Create anti-forgery state token
state = ''.join(random.choice(string.ascii_uppercase + string.digits)
for x in xrange(32))
login_session['state'] = state
return render_template('login.html', STATE=state)
@app.route('/logout')
def logout():
if login_session['provider'] == 'facebook':
fbdisconnect()
del login_session['facebook_id']
if login_session['provider'] == 'google':
gdisconnect()
del login_session['gplus_id']
del login_session['access_token']
del login_session['username']
del login_session['email']
del login_session['picture']
del login_session['user_id']
del login_session['provider']
return redirect(url_for('showCategories'))
@app.route('/fbconnect', methods=['POST'])
def fbconnect():
# Validate anti-forgery state token
if request.args.get('state') != login_session['state']:
response = make_response(json.dumps('Invalid state parameter.'), 401)
response.headers['Content-Type'] = 'application/json'
return response
    # Get the access token
access_token = request.data
print "access token received %s " % access_token
    # Get app credentials from the Facebook client secrets file
app_id = json.loads(open('fb_client_secrets.json', 'r').read())[
'web']['app_id']
app_secret = json.loads(open('fb_client_secrets.json', 'r').read())[
'web']['app_secret']
url = 'https://graph.facebook.com/oauth/access_token?grant_type=fb_exchange_token&client_id=%s&client_secret=%s&fb_exchange_token=%s' % (
app_id, app_secret, access_token)
h = httplib2.Http()
result = h.request(url, 'GET')[1]
# Use token to get user info from API
userinfo_url = "https://graph.facebook.com/v2.4/me"
# strip expire tag from access token
token = result.split("&")[0]
url = 'https://graph.facebook.com/v2.4/me?%s&fields=name,id,email' % token
h = httplib2.Http()
result = h.request(url, 'GET')[1]
data = json.loads(result)
login_session['provider'] = 'facebook'
login_session['username'] = data["name"]
login_session['email'] = data["email"]
login_session['facebook_id'] = data["id"]
# Store token in login_session in order to logout
stored_token = token.split("=")[1]
login_session['access_token'] = stored_token
# Get user picture
url = 'https://graph.facebook.com/v2.4/me/picture?%s&redirect=0&height=200&width=200' % token
h = httplib2.Http()
result = h.request(url, 'GET')[1]
data = json.loads(result)
login_session['picture'] = data["data"]["url"]
# See if user exists
user_id = getUserID(login_session['email'])
if not user_id:
user_id = createUser(login_session)
login_session['user_id'] = user_id
return "Login Successful!"
@app.route('/fbdisconnect')
def fbdisconnect():
facebook_id = login_session['facebook_id']
access_token = login_session['access_token']
url = 'https://graph.facebook.com/%s/permissions?access_token=%s' % (
facebook_id, access_token)
h = httplib2.Http()
result = h.request(url, 'DELETE')[1]
return "you have been logged out"
@app.route('/gconnect', methods=['POST'])
def gconnect():
# Validate anti-forgery state token
if request.args.get('state') != login_session['state']:
response = make_response(json.dumps('Invalid state parameter.'), 401)
response.headers['Content-Type'] = 'application/json'
return response
# Obtain authorization code
code = request.data
try:
# Upgrade the authorization code into a credentials object
oauth_flow = flow_from_clientsecrets('client_secrets.json', scope='')
oauth_flow.redirect_uri = 'postmessage'
credentials = oauth_flow.step2_exchange(code)
except FlowExchangeError:
response = make_response(json.dumps(
'Failed to upgrade the authorization code.'), 401)
response.headers['Content-Type'] = 'application/json'
return response
# Check that the access token is valid.
access_token = credentials.access_token
url = ('https://www.googleapis.com/oauth2/v1/tokeninfo?access_token=%s' %
access_token)
h = httplib2.Http()
result = json.loads(h.request(url, 'GET')[1])
# If there was an error in the access token info, abort.
if result.get('error') is not None:
response = make_response(json.dumps(result.get('error')), 500)
response.headers['Content-Type'] = 'application/json'
return response
# Verify that the access token is used for the intended user.
gplus_id = credentials.id_token['sub']
if result['user_id'] != gplus_id:
response = make_response(json.dumps(
"Token's user ID doesn't match given user ID."), 401)
response.headers['Content-Type'] = 'application/json'
return response
# Verify that the access token is valid for this app.
if result['issued_to'] != CLIENT_ID:
response = make_response(json.dumps(
"Token's client ID does not match app's."), 401)
print "Token's client ID does not match app's."
response.headers['Content-Type'] = 'application/json'
return response
stored_access_token = login_session.get('access_token')
stored_gplus_id = login_session.get('gplus_id')
if stored_access_token is not None and gplus_id == stored_gplus_id:
response = make_response(json.dumps(
'Current user is already connected.'), 200)
response.headers['Content-Type'] = 'application/json'
return response
# Store the access token in the session for later use.
login_session['access_token'] = credentials.access_token
login_session['gplus_id'] = gplus_id
# Get user info
userinfo_url = "https://www.googleapis.com/oauth2/v1/userinfo"
params = {'access_token': credentials.access_token, 'alt': 'json'}
answer = requests.get(userinfo_url, params=params)
data = answer.json()
login_session['username'] = data['name']
login_session['picture'] = data['picture']
login_session['email'] = data['email']
login_session['provider'] = 'google'
# See if user exists
user_id = getUserID(data["email"])
if not user_id:
user_id = createUser(login_session)
login_session['user_id'] = user_id
return "Login Successful"
@app.route('/gdisconnect')
def gdisconnect():
# Only disconnect a connected user.
access_token = login_session.get('access_token')
if access_token is None:
response = make_response(json.dumps(
'Current user not connected.'), 401)
response.headers['Content-Type'] = 'application/json'
return response
url = 'https://accounts.google.com/o/oauth2/revoke?token=%s' % access_token
h = httplib2.Http()
result = h.request(url, 'GET')[0]
    if result['status'] != '200':
        # For whatever reason, the given token was invalid.
        response = make_response(json.dumps(
            'Failed to revoke token for given user.'), 400)
        response.headers['Content-Type'] = 'application/json'
        return response
    response = make_response(json.dumps('Successfully disconnected.'), 200)
    response.headers['Content-Type'] = 'application/json'
    return response
@app.route('/catalog/JSON')
def showCategoriesJSON():
categories = session.query(Category).all()
return jsonify(categories=[category.serialize for category in categories])
@app.route('/catalog/<int:catalog_id>/JSON')
@app.route('/catalog/<int:catalog_id>/items/JSON')
def showCategoryJSON(catalog_id):
categoryItems = session.query(CategoryItem).filter_by(
category_id=catalog_id).all()
return jsonify(
categoryItems=[
categoryItem.serialize for categoryItem in categoryItems])
@app.route('/catalog/<int:catalog_id>/items/<int:item_id>/JSON')
def showCategoryItemJSON(catalog_id, item_id):
categoryItem = session.query(CategoryItem).filter_by(id=item_id).first()
return jsonify(categoryItem=[categoryItem.serialize])
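# Illustrative usage of the JSON endpoints above (assumes the app is running
# locally on port 5000; the IDs are examples):
#   curl http://localhost:5000/catalog/JSON
#   curl http://localhost:5000/catalog/1/items/JSON
#   curl http://localhost:5000/catalog/1/items/2/JSON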
if __name__ == '__main__':
app.debug = True
app.secret_key = 'super_secret_key'
app.run(host='0.0.0.0', port=5000)
| 32.064444
| 141
| 0.67101
|
4a127e0b46a38e04c889ea4273a740aa27cafbcc
| 4,510
|
py
|
Python
|
videodet/pair_retinaMV2/retinaMV2_256_mxmx.py
|
ktw361/mmdetection_impl
|
d09f5320290699cdae0817c9f6a52e8e07c1e098
|
[
"Apache-2.0"
] | 2
|
2021-05-09T15:49:35.000Z
|
2021-05-22T02:16:14.000Z
|
videodet/pair_retinaMV2/retinaMV2_256_mxmx.py
|
ktw361/mmdetection_impl
|
d09f5320290699cdae0817c9f6a52e8e07c1e098
|
[
"Apache-2.0"
] | null | null | null |
videodet/pair_retinaMV2/retinaMV2_256_mxmx.py
|
ktw361/mmdetection_impl
|
d09f5320290699cdae0817c9f6a52e8e07c1e098
|
[
"Apache-2.0"
] | 1
|
2021-05-09T15:49:43.000Z
|
2021-05-09T15:49:43.000Z
|
# model settings
model = dict(
type='PairRetinaNet',
pretrained='zoo/mobilenet_v2.pth.tar',
backbone=dict(
type='SSDMobileNetV2',
input_size=-1,
frozen_stages=3,
out_layers=('layer4', 'layer7', 'layer14', 'layer19'),
with_extra=False,
norm_eval=True,
),
neck=dict(
type='FPN',
in_channels=[24, 32, 96, 1280],
out_channels=256,
start_level=1,
add_extra_convs=True,
num_outs=5),
pair_module=dict(
type='CorrAssemble',
disp=4,
neck_first=True,
use_mx_corr=True,
assemble_type='mx'),
bbox_head=dict(
type='RetinaHead',
num_classes=31,
in_channels=256,
stacked_convs=4,
feat_channels=256,
octave_base_scale=4,
scales_per_octave=3,
anchor_ratios=[0.5, 1.0, 2.0],
anchor_strides=[8, 16, 32, 64, 128],
target_means=[.0, .0, .0, .0],
target_stds=[1.0, 1.0, 1.0, 1.0],
loss_cls=dict(
type='FocalLoss',
use_sigmoid=True,
gamma=2.0,
alpha=0.25,
loss_weight=1.0),
loss_bbox=dict(type='SmoothL1Loss', beta=0.11, loss_weight=1.0)))
# training and testing settings
train_cfg = dict(
assigner=dict(
type='MaxIoUAssigner',
pos_iou_thr=0.5,
neg_iou_thr=0.4,
min_pos_iou=0,
ignore_iof_thr=-1),
allowed_border=-1,
pos_weight=-1,
debug=False)
test_cfg = dict(
nms_pre=1000,
min_bbox_size=0,
score_thr=0.05,
nms=dict(type='nms', iou_thr=0.5),
max_per_img=100)
# dataset settings
vid_dataset_type = 'PairVIDDataset'
det_dataset_type = 'PairDET30Dataset'
data_root = 'data/ILSVRC2015/'
img_norm_cfg = dict(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225], to_rgb=True)
train_pipeline = [
dict(type='LoadImageFromFile'),
dict(type='LoadAnnotations', with_bbox=True, skip_img_without_anno=False),
dict(type='Resize', img_scale=(256, 256), keep_ratio=False),
dict(type='RandomFlip', flip_ratio=0.5),
dict(type='Normalize', **img_norm_cfg),
dict(type='Pad', size_divisor=32),
dict(type='DefaultFormatBundle'),
dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels']),
]
test_pipeline = [
dict(type='LoadImageFromFile'),
dict(
type='MultiScaleFlipAug',
img_scale=(256, 256),
flip=False,
transforms=[
dict(type='Resize', keep_ratio=False),
dict(type='RandomFlip'),
dict(type='Normalize', **img_norm_cfg),
dict(type='Pad', size_divisor=32),
dict(type='ImageToTensor', keys=['img']),
dict(type='Collect', keys=['img'],
meta_keys=('filename', 'ori_shape', 'img_shape', 'pad_shape',
'scale_factor', 'flip', 'img_norm_cfg', 'is_first')),
])
]
data = dict(
imgs_per_gpu=16,
workers_per_gpu=2,
train=[
dict(
type=vid_dataset_type,
ann_file=data_root + 'ImageSets/VID/VID_train_15frames.txt',
img_prefix=data_root,
pipeline=train_pipeline),
dict(
type=det_dataset_type,
ann_file=data_root + 'ImageSets/VID/DET_train_30classes.txt',
img_prefix=data_root,
pipeline=train_pipeline),
],
val=dict(
type=vid_dataset_type,
ann_file=data_root + 'ImageSets/VID/VID_val_videos.txt',
img_prefix=data_root,
pipeline=test_pipeline),
test=dict(
type=vid_dataset_type,
ann_file=data_root + 'ImageSets/VID/VID_val_videos.txt',
img_prefix=data_root,
pipeline=test_pipeline))
# optimizer
optimizer = dict(type='SGD', lr=0.001, momentum=0.9, weight_decay=0.0001)
optimizer_config = dict(grad_clip=dict(max_norm=35, norm_type=2))
# learning policy
lr_config = dict(
policy='step',
warmup='linear',
warmup_iters=500,
warmup_ratio=1.0 / 3,
step=[8, 11])
checkpoint_config = dict(interval=1)
# yapf:disable
log_config = dict(
interval=100,
hooks=[
dict(type='TextLoggerHook'),
dict(type='TensorboardLoggerHook')
])
# yapf:enable
evaluation = dict(interval=12, num_evals=5000, shuffle=False)
# runtime settings
total_epochs = 12
device_ids = range(8)
dist_params = dict(backend='nccl')
log_level = 'INFO'
work_dir = './workpairs/retinaMV2_256_mxmx'
load_from = None
resume_from = None
workflow = [('train', 1)]
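# Illustrative launch command (assumes the standard mmdetection tools/ layout;
# adjust the config path to where this file lives):
#   python tools/train.py videodet/pair_retinaMV2/retinaMV2_256_mxmx.py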
| 30.472973
| 87
| 0.608426
|
4a127ef4a3800e87ea951b8539416ff5f37cce06
| 14,354
|
py
|
Python
|
webscan_backend/plugins/portscan/portscan.py
|
imfiver/Sec-Tools
|
a828e31c2e371c37f1256f0a574707a24776530d
|
[
"Apache-2.0"
] | 144
|
2021-11-05T10:45:05.000Z
|
2022-03-31T03:17:19.000Z
|
webscan_backend/plugins/portscan/portscan.py
|
imfiver/Sec-Tools
|
a828e31c2e371c37f1256f0a574707a24776530d
|
[
"Apache-2.0"
] | 6
|
2021-11-07T02:47:41.000Z
|
2022-03-06T05:50:15.000Z
|
webscan_backend/plugins/portscan/portscan.py
|
imfiver/Sec-Tools
|
a828e31c2e371c37f1256f0a574707a24776530d
|
[
"Apache-2.0"
] | 41
|
2021-11-07T13:35:02.000Z
|
2022-03-29T00:09:36.000Z
|
# -*- coding:utf-8 -*-
# Reference: https://github.com/AnthraX1/InsightScan/blob/master/scanner.py
import socket
import re
import concurrent.futures
import sys
import os
from urllib import parse
sys.path.append(os.getcwd())
THREADNUM = 64  # number of scanning threads
SIGNS = (
    # protocol | version | keyword
b'smb|smb|^\0\0\0.\xffSMBr\0\0\0\0.*',
b"xmpp|xmpp|^\<\?xml version='1.0'\?\>",
b'netbios|netbios|^\x79\x08.*BROWSE',
b'http|http|HTTP/1.1',
b'netbios|netbios|^\x79\x08.\x00\x00\x00\x00',
b'netbios|netbios|^\x05\x00\x0d\x03',
b'netbios|netbios|^\x82\x00\x00\x00',
b'netbios|netbios|\x83\x00\x00\x01\x8f',
b'backdoor|backdoor|^500 Not Loged in',
b'backdoor|backdoor|GET: command',
b'backdoor|backdoor|sh: GET:',
    b'backdoor|backdoor|[a-z]*sh: .* command not found',
b'backdoor|backdoor|^bash[$#]',
b'backdoor|backdoor|^sh[$#]',
b'backdoor|backdoor|^Microsoft Windows',
b'db2|db2|.*SQLDB2RA',
b'dell-openmanage|dell-openmanage|^\x4e\x00\x0d',
b'finger|finger|^\r\n Line User',
b'finger|finger|Line User',
b'finger|finger|Login name: ',
b'finger|finger|Login.*Name.*TTY.*Idle',
b'finger|finger|^No one logged on',
b'finger|finger|^\r\nWelcome',
b'finger|finger|^finger:',
b'finger|finger|^must provide username',
b'finger|finger|finger: GET: ',
b'ftp|ftp|^220.*\n331',
b'ftp|ftp|^220.*\n530',
b'ftp|ftp|^220.*FTP',
b'ftp|ftp|^220 .* Microsoft .* FTP',
b'ftp|ftp|^220 Inactivity timer',
b'ftp|ftp|^220 .* UserGate',
b'ftp|ftp|^220.*FileZilla Server',
b'ldap|ldap|^\x30\x0c\x02\x01\x01\x61',
b'ldap|ldap|^\x30\x32\x02\x01',
b'ldap|ldap|^\x30\x33\x02\x01',
b'ldap|ldap|^\x30\x38\x02\x01',
b'ldap|ldap|^\x30\x84',
b'ldap|ldap|^\x30\x45',
b'ldp|ldp|^\x00\x01\x00.*?\r\n\r\n$',
b'rdp|rdp|^\x03\x00\x00\x0b',
b'rdp|rdp|^\x03\x00\x00\x11',
b'rdp|rdp|^\x03\0\0\x0b\x06\xd0\0\0\x12.\0$',
b'rdp|rdp|^\x03\0\0\x17\x08\x02\0\0Z~\0\x0b\x05\x05@\x06\0\x08\x91J\0\x02X$',
b'rdp|rdp|^\x03\0\0\x11\x08\x02..}\x08\x03\0\0\xdf\x14\x01\x01$',
b'rdp|rdp|^\x03\0\0\x0b\x06\xd0\0\0\x03.\0$',
b'rdp|rdp|^\x03\0\0\x0b\x06\xd0\0\0\0\0\0',
b'rdp|rdp|^\x03\0\0\x0e\t\xd0\0\0\0[\x02\xa1]\0\xc0\x01\n$',
b'rdp|rdp|^\x03\0\0\x0b\x06\xd0\0\x004\x12\0',
b'rdp-proxy|rdp-proxy|^nmproxy: Procotol byte is not 8\n$',
b'msrpc|msrpc|^\x05\x00\x0d\x03\x10\x00\x00\x00\x18\x00\x00\x00\x00\x00',
b'msrpc|msrpc|\x05\0\r\x03\x10\0\0\0\x18\0\0\0....\x04\0\x01\x05\0\0\0\0$',
b'mssql|mssql|^\x05\x6e\x00',
b'mssql|mssql|^\x04\x01',
    b'mssql|mssql|;MSSQLSERVER;',
b'mysql|mysql|mysql_native_password',
b'mysql|mysql|^\x19\x00\x00\x00\x0a',
b'mysql|mysql|^\x2c\x00\x00\x00\x0a',
b'mysql|mysql|hhost \'',
b'mysql|mysql|khost \'',
b'mysql|mysql|mysqladmin',
b'mysql|mysql|whost \'',
b'mysql|mysql|^[.*]\x00\x00\x00\n.*?\x00',
b'mysql-secured|mysql|this MySQL server',
b'mysql-secured|MariaDB|MariaDB server',
b'mysql-secured|mysql-secured|\x00\x00\x00\xffj\x04Host',
b'db2jds|db2jds|^N\x00',
b'nagiosd|nagiosd|Sorry, you \(.*are not among the allowed hosts...',
b'nessus|nessus|< NTP 1.2 >\x0aUser:',
    b'oracle-tns-listener|oracle-tns-listener|\(ERROR_STACK=\(ERROR=\(CODE=',
    b'oracle-tns-listener|oracle-tns-listener|\(ADDRESS=\(PROTOCOL=',
    b'oracle-dbsnmp|oracle-dbsnmp|^\x00\x0c\x00\x00\x04\x00\x00\x00\x00',
    b'oracle-https|oracle-https|^220- ora',
b'rmi|rmi|\x00\x00\x00\x76\x49\x6e\x76\x61',
b'rmi|rmi|^\x4e\x00\x09',
b'postgresql|postgres|Invalid packet length',
b'postgresql|postgres|^EFATAL',
b'rpc-nfs|rpc-nfs|^\x02\x00\x00\x00\x00\x00\x00\x01\x00\x00\x00\x01\x00\x00\x00\x00',
b'rpc|rpc|\x01\x86\xa0',
b'rpc|rpc|\x03\x9b\x65\x42\x00\x00\x00\x01',
b'rpc|rpc|^\x80\x00\x00',
b'rsync|rsync|^@RSYNCD:',
b'smux|smux|^\x41\x01\x02\x00',
b'snmp-public|snmp-public|\x70\x75\x62\x6c\x69\x63\xa2',
b'snmp|snmp|\x41\x01\x02',
b'socks|socks|^\x05[\x00-\x08]\x00',
b'ssl|ssl|^..\x04\0.\0\x02',
b'ssl|ssl|^\x16\x03\x01..\x02...\x03\x01',
b'ssl|ssl|^\x16\x03\0..\x02...\x03\0',
b'ssl|ssl|SSL.*GET_CLIENT_HELLO',
b'ssl|ssl|^-ERR .*tls_start_servertls',
b'ssl|ssl|^\x16\x03\0\0J\x02\0\0F\x03\0',
b'ssl|ssl|^\x16\x03\0..\x02\0\0F\x03\0',
b'ssl|ssl|^\x15\x03\0\0\x02\x02\.*',
b'ssl|ssl|^\x16\x03\x01..\x02...\x03\x01',
b'ssl|ssl|^\x16\x03\0..\x02...\x03\0',
b'sybase|sybase|^\x04\x01\x00',
b'telnet|telnet|Telnet',
b'telnet|telnet|^\xff[\xfa-\xff]',
b'telnet|telnet|^\r\n%connection closed by remote host!\x00$',
b'rlogin|rlogin|login: ',
b'rlogin|rlogin|rlogind: ',
b'rlogin|rlogin|^\x01\x50\x65\x72\x6d\x69\x73\x73\x69\x6f\x6e\x20\x64\x65\x6e\x69\x65\x64\x2e\x0a',
b'tftp|tftp|^\x00[\x03\x05]\x00',
b'uucp|uucp|^login: password: ',
b'vnc|vnc|^RFB',
b'imap|imap|^\* OK.*?IMAP',
b'pop|pop|^\+OK.*?',
b'smtp|smtp|^220.*?SMTP',
b'smtp|smtp|^554 SMTP',
b'ftp|ftp|^220-',
b'ftp|ftp|^220.*?FTP',
b'ftp|ftp|^220.*?FileZilla',
b'ssh|ssh|^SSH-',
b'ssh|ssh|connection refused by remote host.',
b'rtsp|rtsp|^RTSP/',
b'sip|sip|^SIP/',
b'nntp|nntp|^200 NNTP',
b'sccp|sccp|^\x01\x00\x00\x00$',
b'webmin|webmin|.*MiniServ',
b'webmin|webmin|^0\.0\.0\.0:.*:[0-9]',
b'websphere-javaw|websphere-javaw|^\x15\x00\x00\x00\x02\x02\x0a',
b'smb|smb|^\x83\x00\x00\x01\x8f',
b'mongodb|mongodb|MongoDB',
b'Rsync|Rsync|@RSYNCD:',
b'Squid|Squid|X-Squid-Error',
b'mssql|Mssql|MSSQLSERVER',
b'Vmware|Vmware|VMware',
b'iscsi|iscsi|\x00\x02\x0b\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00',
b'redis|redis|^-ERR unknown command',
b'redis|redis|^-ERR wrong number of arguments',
b'redis|redis|^-DENIED Redis is running',
b'memcached|memcached|^ERROR\r\n',
b'websocket|websocket|Server: WebSocket',
    b'https|https|Instead use the HTTPS scheme to access',
    b'https|https|HTTPS port',
b'https|https|Location: https',
b'SVN|SVN|^\( success \( 2 2 \( \) \( edit-pipeline svndiff1',
b'dubbo|dubbo|^Unsupported command',
b'http|elasticsearch|cluster_name.*elasticsearch',
b'RabbitMQ|RabbitMQ|^AMQP\x00\x00\t\x01',
)
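# Each SIGNS entry is a bytes triple 'name|display-name|regex'; socket_scan()
# splits on b'|' and uses the middle field (pattern[1]) as the reported
# service label and the last field (pattern[-1]) as the banner regex.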
def get_server(port):
SERVER = {
'FTP': '21',
'SSH': '22',
'Telnet': '23',
'SMTP': '25',
'DNS': '53',
'DHCP': '68',
'HTTP': '80',
'TFTP': '69',
        'HTTP-Alt': '8080',
        'POP3S': '995',
'POP3': '110',
'NetBIOS': '139',
'IMAP': '143',
'HTTPS': '443',
'SNMP': '161',
        'LDAP': '389',
'SMB': '445',
'SMTPS': '465',
'Linux R RPE': '512',
'Linux R RLT': '513',
'Linux R cmd': '514',
        'Rsync': '873',
'IMAPS': '993',
'Proxy': '1080',
        'JavaRMI': '1099',
'Oracle EMCTL': '1158',
'Lotus': '1352',
'MSSQL': '1433',
'MSSQL Monitor': '1434',
'Oracle': '1521',
'PPTP': '1723',
'cPanel admin panel/CentOS web panel': '2082',
        'cPanel admin panel (SSL)/CentOS web panel': '2083',
'Oracle XDB FTP': '2100',
'Zookeeper': '2181',
'DA admin panel': '2222',
'Docker': '2375',
'Zebra': '2604',
'Gitea Web': '3000',
'Squid Proxy': '3128',
'MySQL/MariaDB': '3306',
'Kangle admin panel': '3312',
'RDP': '3389',
'SVN': '3690',
'Rundeck': '4440',
'GlassFish': '4848',
        'Sybase/DB2': '5000',
        'PostgreSQL': '5432',
'PcAnywhere': '5632',
'VNC': '5900',
'TeamViewer': '5938',
'CouchDB': '5984',
        'Varnish': '6082',
'Redis': '6379',
'Aria2': '6800',
'Weblogic': '9001',
'Kloxo admin panel': '7778',
'Zabbix': '8069',
'RouterOS/Winbox': '8291',
        'BT/BaoTa Panel': '8888',
'WebSphere': '9090',
'Elasticsearch': '9300',
'Virtualmin/Webmin': '10000',
'Zabbix agent': '10050',
'Zabbix server': '10051',
'Memcached': '11211',
'FileZilla Manager': '14147',
'MongoDB': '27017',
        'MongoDB Web': '28017',
'SAP NetWeaver': '50000',
'Hadoop': '50070',
        'HDFS': '9000',
}
for k, v in SERVER.items():
if v == port:
return '{}:{}'.format(k, port)
return 'Unknown:{}'.format(port)
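# Illustrative lookups: get_server('3306') -> 'MySQL/MariaDB:3306' and
# get_server('31337') -> 'Unknown:31337'.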
PORTS = [21, 22, 23, 25, 26, 37, 47, 49, 53, 69, 70, 79, 80, 81, 82, 83, 84, 88, 89, 110, 111, 119, 123, 129, 135,
137, 139, 143, 161, 175, 179, 195, 311, 389, 443, 444, 445, 465, 500, 502, 503, 512, 513, 514, 515, 520,
523, 530, 548, 554, 563, 587, 593, 623, 626, 631, 636, 660, 666, 749, 751, 771, 789, 873, 888, 901, 902, 990,
992, 993, 995, 1000, 1010, 1023, 1024, 1025, 1080, 1088, 1099, 1111, 1177, 1200, 1234, 1311, 1325, 1352,
1400, 1433, 1434, 1471, 1515, 1521, 1599, 1604, 1723, 1741, 1777, 1883, 1900, 1911, 1920, 1962, 1991,
2000, 2049, 2067, 2081, 2082, 2083, 2086, 2087, 2121, 2123, 2152, 2181, 2222, 2323, 2332, 2333, 2375,
2376, 2379, 2404, 2433, 2455, 2480, 2601, 2604, 2628, 3000, 3001, 3128, 3260, 3269, 3283, 3299, 3306,
3307, 3310, 3311, 3312, 3333, 3386, 3388, 3389, 3460, 3478, 3493, 3541, 3542, 3560, 3661, 3689, 3690,
3702,
3749, 3794, 3780, 3784, 3790, 4000, 4022, 4040, 4063, 4064, 4070, 4200, 4343, 4369, 4400, 4440, 4443,
4444,
4500, 4550, 4567, 4664, 4730, 4782, 4786, 4800, 4840, 4848, 4899, 4911, 4949, 5000, 5001, 5006, 5007,
5008,
5009, 5060, 5094, 5222, 5269, 5353, 5357, 5431, 5432, 5433, 5555, 5560, 5577, 5601, 5631, 5632, 5666,
5672,
5683, 5800, 5801, 5858, 5900, 5901, 5938, 5984, 5985, 5986, 6000, 6001, 6014, 6082, 6371, 6372, 6373, 6374,
6379, 6390, 6664,
6666, 6667, 6881, 6969, 7000, 7001, 7002, 7071, 7080, 7218, 7474, 7547, 7548, 7549, 7657, 7777, 7779,
7903,
7905, 8000, 8001, 8008, 8009, 8010, 8060, 8069, 8080, 8081, 8082, 8083, 8086, 8087, 8088, 8089, 8090,
8098,
8099, 8112, 8126, 8139, 8140, 8161, 8181, 8191, 8200, 8291, 8307, 8333, 8334, 8443, 8554, 8649, 8688,
8800, 8834, 8880, 8883, 8888, 8889, 8899, 9000, 9001, 9002, 9009, 9014, 9042, 9043, 9050, 9051, 9080,
9081, 9090, 9092, 9100, 9151, 9160, 9191, 9200, 9300, 9306, 9418, 9443, 9595, 9600, 9869, 9903, 9943,
9944, 9981, 9990, 9998, 9999, 10000, 10001, 10050, 10051, 10243, 10554, 11211, 11300, 12345, 13579, 14147,
16010, 16992, 16993, 17000, 17778, 18081, 18245, 18505, 20000, 20547, 21025, 21379, 21546, 22022, 22222,
23023, 23389, 23424, 25105, 25565, 27015, 27016, 27017, 27018, 27019, 28015, 28017, 28561, 30000, 30718,
32400,
32764, 32768, 32769, 32770, 32771, 33389, 33890, 33899, 37777, 38190, 40001, 40049, 40650, 41706, 42178,
43382, 44818, 47808, 48899, 49152, 49153, 50000, 50010, 50011, 50015, 50030, 50050, 50060, 50070, 50100,
51106, 53413, 54138, 55443, 55553, 55554, 62078, 64738, 65535]
PROBE = {
'GET / HTTP/1.0\r\n\r\n'
}
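# PROBE is a set of raw payloads sent to each open port. A plain HTTP GET is
# enough to elicit a banner from many services, and most non-HTTP daemons
# still answer (or error out) with bytes the SIGNS patterns can match.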
class ScanPort():
def __init__(self, ipaddr):
        '''
        Initialize scanner state.
        '''
self.ipaddr = ipaddr
self.port = []
self.out = []
self.num = 0
def socket_scan(self, hosts):
        '''
        Core port-scanning logic: connect to host:port, grab a banner and
        match it against the SIGNS fingerprints.
        '''
global PROBE
socket.setdefaulttimeout(1)
ip, port = hosts.split(':')
try:
if len(self.port) < 25:
                sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)  # create a TCP socket
                # Full-connect scan: complete the TCP three-way handshake;
                # connect_ex returns 0 if the port is open, an error code otherwise.
                result = sock.connect_ex((ip, int(port)))
                if result == 0:  # TCP connection established
                    self.port.append(port)  # record the open port
                    for i in PROBE:  # probe the service over HTTP
                        sock.sendall(i.encode())  # send the full probe payload
                        response = sock.recv(256)  # receive at most 256 bytes
sock.close()
if response:
break
if response:
for pattern in SIGNS:
pattern = pattern.split(b'|')
                            if re.search(pattern[-1], response, re.IGNORECASE):  # match the banner against known signatures
                                proto = '{}:{}'.format(pattern[1].decode(), port)
                                self.out.append(proto)  # add to the results
break
else:
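                    # 25 or more "open" ports is taken as a sign that a honeypot
                    # such as Portspoof is faking every port; pool() then reports
                    # ['Portspoof:0'] instead of a port list.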
self.num = 1
except (socket.timeout, ConnectionResetError):
pass
except:
pass
def run(self, ip):
        '''
        Scan all PORTS of the given IP using a thread pool.
        '''
hosts = []
global PORTS, THREADNUM
for i in PORTS:
hosts.append('{}:{}'.format(ip, i))
try:
with concurrent.futures.ThreadPoolExecutor(
max_workers=THREADNUM) as executor:
executor.map(self.socket_scan, hosts)
except EOFError:
pass
def pool(self):
out = []
try:
if (not parse.urlparse(self.ipaddr).path) and (parse.urlparse(self.ipaddr).path != '/'):
self.ipaddr = self.ipaddr.replace('http://', '').replace('https://', '').rstrip('/')
else:
self.ipaddr = self.ipaddr.replace('http://', '').replace('https://', '').rstrip('/')
self.ipaddr = re.sub('/\w+', '', self.ipaddr)
if re.search('\d+\.\d+\.\d+\.\d+', self.ipaddr):
ipaddr = self.ipaddr
else:
ipaddr = socket.gethostbyname(self.ipaddr)
if ':' in ipaddr:
ipaddr = re.sub(':\d+', '', ipaddr)
self.run(ipaddr)
        except Exception:
pass
for i in self.out:
_, port = i.split(':')
out.append(port)
for i in self.port:
if i not in out:
self.out.append(get_server(i))
if self.num == 0:
return list(set(self.out))
else:
return ['Portspoof:0']
if __name__ == "__main__":
print(ScanPort('127.0.0.1').pool())
| 38.482574
| 118
| 0.542218
|
4a1280128e555449666aa4264ca5afdbdc5e1cfe
| 4,827
|
py
|
Python
|
truecaser/Truecaser.py
|
jtbr/tv-news-quality
|
0aa57a67ff3f3207e5cdf4129d58ea967e698e90
|
[
"MIT"
] | null | null | null |
truecaser/Truecaser.py
|
jtbr/tv-news-quality
|
0aa57a67ff3f3207e5cdf4129d58ea967e698e90
|
[
"MIT"
] | null | null | null |
truecaser/Truecaser.py
|
jtbr/tv-news-quality
|
0aa57a67ff3f3207e5cdf4129d58ea967e698e90
|
[
"MIT"
] | null | null | null |
# -*- coding: utf-8 -*-
import string
import math
from unidecode import unidecode
try:
# Python 2
range = xrange
except NameError:
# Python 3
pass
"""
This file contains the functions to truecase a sentence.
"""
def getScore(prevToken, possibleToken, nextToken, wordCasingLookup, uniDist, backwardBiDist, forwardBiDist, trigramDist):
pseudoCount = 5.0
#Get Unigram Score
    numerator = uniDist[possibleToken] + pseudoCount
denominator = 0
# avoid python 2/3 difference by using unidecode
possibleTokenLower = possibleToken.lower()
if possibleTokenLower not in wordCasingLookup:
possibleTokenLower = unidecode(possibleTokenLower)
for alternativeToken in wordCasingLookup[possibleTokenLower]:
denominator += uniDist[alternativeToken]+pseudoCount
    unigramScore = numerator / denominator
#Get Backward Score
bigramBackwardScore = 1
if prevToken != None:
        numerator = backwardBiDist[prevToken+'_'+possibleToken] + pseudoCount
denominator = 0
for alternativeToken in wordCasingLookup[possibleTokenLower]:
denominator += backwardBiDist[prevToken+'_'+alternativeToken]+pseudoCount
        bigramBackwardScore = numerator / denominator
#Get Forward Score
bigramForwardScore = 1
if nextToken != None:
#Ensure it is lower case and in the dict
nextTokenLower = nextToken.lower()
# if nextTokenLower not in wordCasingLookup:
# nextTokenLower = unidecode(nextTokenLower)
        numerator = forwardBiDist[possibleToken+"_"+nextTokenLower] + pseudoCount
denominator = 0
for alternativeToken in wordCasingLookup[possibleTokenLower]:
denominator += forwardBiDist[alternativeToken+"_"+nextTokenLower]+pseudoCount
        bigramForwardScore = numerator / denominator
#Get Trigram Score
trigramScore = 1
if prevToken != None and nextToken != None:
        numerator = trigramDist[prevToken+"_"+possibleToken+"_"+nextTokenLower] + pseudoCount
denominator = 0
for alternativeToken in wordCasingLookup[possibleTokenLower]:
denominator += trigramDist[prevToken+"_"+alternativeToken+"_"+nextTokenLower]+pseudoCount
        trigramScore = numerator / denominator
result = math.log(unigramScore) + math.log(bigramBackwardScore) + math.log(bigramForwardScore) + math.log(trigramScore)
#print "Scores: %f %f %f %f = %f" % (unigramScore, bigramBackwardScore, bigramForwardScore, trigramScore, math.exp(result))
return result
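# The score above is log(unigram) + log(backward bigram) + log(forward bigram)
# + log(trigram): the log of a product of four add-5-smoothed relative
# frequencies. getTrueCase() below keeps the casing variant with the highest
# score.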
def getTrueCase(tokens, outOfVocabularyTokenOption, wordCasingLookup, uniDist, backwardBiDist, forwardBiDist, trigramDist):
"""
Returns the true case for the passed tokens.
@param tokens: Tokens in a single sentence
    @param outOfVocabularyTokenOption:
title: Returns out of vocabulary (OOV) tokens in 'title' format
lower: Returns OOV tokens in lower case
as-is: Returns OOV tokens as is
"""
tokensTrueCase = []
for tokenIdx in range(len(tokens)):
token = tokens[tokenIdx]
if token in string.punctuation or token.isdigit():
tokensTrueCase.append(token)
else:
if token in wordCasingLookup:
if len(wordCasingLookup[token]) == 1:
tokensTrueCase.append(list(wordCasingLookup[token])[0])
else:
prevToken = tokensTrueCase[tokenIdx-1] if tokenIdx > 0 else None
nextToken = tokens[tokenIdx+1] if tokenIdx < len(tokens)-1 else None
bestToken = None
highestScore = float("-inf")
for possibleToken in wordCasingLookup[token]:
score = getScore(prevToken, possibleToken, nextToken, wordCasingLookup, uniDist, backwardBiDist, forwardBiDist, trigramDist)
if score > highestScore:
bestToken = possibleToken
highestScore = score
tokensTrueCase.append(bestToken)
if tokenIdx == 0:
                        tokensTrueCase[0] = tokensTrueCase[0].title()
else: #Token out of vocabulary
if outOfVocabularyTokenOption == 'title':
tokensTrueCase.append(token.title())
elif outOfVocabularyTokenOption == 'lower':
tokensTrueCase.append(token.lower())
else:
tokensTrueCase.append(token)
return tokensTrueCase
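# Illustrative call (hypothetical data; the five lookup structures are built
# elsewhere in this project from a cased training corpus):
#
#   tokens = ['the', 'european', 'union', 'is', 'in', 'brussels']
#   cased = getTrueCase(tokens, 'title', wordCasingLookup, uniDist,
#                       backwardBiDist, forwardBiDist, trigramDist)
#   # -> something like ['The', 'European', 'Union', 'is', 'in', 'Brussels']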
| 40.563025
| 148
| 0.616739
|
4a128115ec592b25a01ee1c21bc22d950d217716
| 3,384
|
py
|
Python
|
tpdatasrc/tpgamefiles/rules/d20_ai/d20_ai_utils.py
|
anatoliy-savchak/TemplePlus
|
50922bb14cc2d7dcf8fceeccf45c3b905c1b512f
|
[
"MIT"
] | 2
|
2019-02-25T21:06:18.000Z
|
2019-02-25T21:06:18.000Z
|
tpdatasrc/tpgamefiles/rules/d20_ai/d20_ai_utils.py
|
anatoliy-savchak/TemplePlus
|
50922bb14cc2d7dcf8fceeccf45c3b905c1b512f
|
[
"MIT"
] | null | null | null |
tpdatasrc/tpgamefiles/rules/d20_ai/d20_ai_utils.py
|
anatoliy-savchak/TemplePlus
|
50922bb14cc2d7dcf8fceeccf45c3b905c1b512f
|
[
"MIT"
] | null | null | null |
from toee import *
# Not exactly sure of the purpose of this check, but it's
# used in considerTarget.
def isBusted(obj):
flags = obj.obj_get_int(obj_f_flags)
if not (flags & OF_DESTROYED):
typ = obj.obj_get_int(obj_f_type)
if typ == obj_t_portal:
return obj.obj_get_int(obj_f_portal_flags) & OPF_BUSTED
elif typ == obj_t_container:
return obj.obj_get_int(obj_f_container_flags) & OCOF_BUSTED
elif typ == obj_t_scenery:
return obj.obj_get_int(obj_f_scenery_flags) & OSF_BUSTED
elif typ == obj_t_trap:
return obj.obj_get_int(obj_f_trap_flags) & OTF_BUSTED
return 0
# Check if `target` is charmed by `critter`
def isCharmedBy(critter, target):
if target.d20_query(Q_Critter_Is_Charmed):
charmer = target.d20_query_get_obj(Q_Critter_Is_Charmed, 0)
return charmer == critter
return 0
def getLivingPartyMemberCount():
count = 0
for pc in game.party:
if not pc.is_dead_or_destroyed():
count += 1
return count
def partyHasNonCharmedLivingMembers():
count = 0
for pc in game.party:
if not pc.d20_query(Q_Critter_Is_Charmed):
if not pc.is_dead_or_destroyed():
count += 1
# TODO: this seems wrong, but it's what the original says
return count == 0
def targetIsPcPartyNotDead(target):
if target.obj_get_int(obj_f_type) == obj_t_pc:
if partyHasNonCharmedLivingMembers():
return True
# TODO: this <= also seems backwards
if getLivingPartyMemberCount() <= 1:
return True
return False
def isUnconcealed(critter):
flags = critter.obj_get_int(obj_f_flags)
if flags & OCF_MOVING_SILENTLY:
return 0
if flags & OCF_IS_CONCEALED:
return 0
return 1
def hasNullFaction(critter):
if not critter.type == obj_t_npc:
return True
elif critter.is_in_party:
return False
return critter.faction_has(0)
def getLeaderForNPC(critter):
if critter.type != obj_t_npc:
return OBJ_HANDLE_NULL
leader = critter
while leader != OBJ_HANDLE_NULL and leader.type != obj_t_pc:
leader = leader.leader_get
return leader
def getAiFightingStatus(critter):
crit_flags = critter.obj_get_int(obj_f_critter_flags)
if crit_flags & OCF_FLEEING:
return AIFS_FLEEING, critter.obj_get_obj(obj_f_critter_fleeing_from)
if crit_flags & OCF_SURRENDERED:
return AIFS_SURRENDERED, critter.obj_get_obj(obj_f_critter_fleeing_from)
ai_flags = critter.obj_get_int64(obj_f_npc_ai_flags64)
if ai_flags & AiFlag_Fighting:
return AIFS_FIGHTING, critter.obj_get_obj(obj_f_npc_combat_focus)
if ai_flags & AiFlag_FindingHelp:
return AIFS_FINDING_HELP, critter.obj_get_obj(obj_f_npc_combat_focus)
return AIFS_NONE, OBJ_HANDLE_NULL
class AiFightStatus:
def __init__(self, critter):
s, t = getAiFightingStatus(critter)
self.status = s
self.target = t
return
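# Convenience wrapper usage (a sketch; the AIFS_* constants come from toee):
#   if AiFightStatus(critter).status == AIFS_FLEEING: ...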
def aiListFind(obj, tgt, list_type):
'''
Finds tgt in obj's obj_f_npc_ai_list_idx, while also matching list_type
to the corresponding entry in obj_f_npc_ai_list_type_idx
'''
if obj == OBJ_HANDLE_NULL or obj.type != obj_t_npc: return False
N = obj.obj_get_idx_int_size(obj_f_npc_ai_list_type_idx)
N = min(N, obj.obj_get_idx_obj_size(obj_f_npc_ai_list_idx) )
if N == 0: return False
for i in range(N):
entry_type = obj.obj_get_idx_int(obj_f_npc_ai_list_type_idx, i)
if entry_type != list_type:
continue
list_obj = obj.obj_get_idx_int(obj_f_npc_ai_list_idx, i)
if list_obj == tgt:
return True
return False
| 25.066667
| 74
| 0.765662
|
4a1282a67c3772aea6fc03a0633fb891cefc1242
| 44,650
|
py
|
Python
|
python_modules/dagster/dagster/core/definitions/repository_definition.py
|
petehunt/dagster
|
1df8496851a7a50a19053759fdac32753cc087a1
|
[
"Apache-2.0"
] | null | null | null |
python_modules/dagster/dagster/core/definitions/repository_definition.py
|
petehunt/dagster
|
1df8496851a7a50a19053759fdac32753cc087a1
|
[
"Apache-2.0"
] | null | null | null |
python_modules/dagster/dagster/core/definitions/repository_definition.py
|
petehunt/dagster
|
1df8496851a7a50a19053759fdac32753cc087a1
|
[
"Apache-2.0"
] | null | null | null |
from abc import ABC, abstractmethod
from types import FunctionType
from typing import (
TYPE_CHECKING,
Any,
Callable,
Dict,
Generic,
List,
Mapping,
Optional,
Type,
TypeVar,
Union,
cast,
)
from dagster import check
from dagster.core.asset_defs.source_asset import SourceAsset
from dagster.core.errors import DagsterInvalidDefinitionError, DagsterInvariantViolationError
from dagster.utils import merge_dicts
from .events import AssetKey
from .graph_definition import GraphDefinition, SubselectedGraphDefinition
from .job_definition import JobDefinition
from .partition import PartitionScheduleDefinition, PartitionSetDefinition
from .pipeline_definition import PipelineDefinition
from .schedule_definition import ScheduleDefinition
from .sensor_definition import SensorDefinition
from .utils import check_valid_name
if TYPE_CHECKING:
from dagster.core.asset_defs.asset_group import AssetGroup
VALID_REPOSITORY_DATA_DICT_KEYS = {
"pipelines",
"partition_sets",
"schedules",
"sensors",
"jobs",
}
RepositoryLevelDefinition = TypeVar(
"RepositoryLevelDefinition",
PipelineDefinition,
JobDefinition,
PartitionSetDefinition,
ScheduleDefinition,
SensorDefinition,
)
class _CacheingDefinitionIndex(Generic[RepositoryLevelDefinition]):
def __init__(
self,
definition_class: Type[RepositoryLevelDefinition],
definition_class_name: str,
definition_kind: str,
definitions: Mapping[
str, Union[RepositoryLevelDefinition, Callable[[], RepositoryLevelDefinition]]
],
validation_fn: Callable[[RepositoryLevelDefinition], RepositoryLevelDefinition],
lazy_definitions_fn: Optional[Callable[[], List[RepositoryLevelDefinition]]] = None,
):
"""
Args:
definitions: A dictionary of definition names to definitions or functions that load
definitions.
lazy_definitions_fn: A function for loading a list of definitions whose names are not
even known until loaded.
"""
for key, definition in definitions.items():
check.invariant(
isinstance(definition, definition_class) or callable(definition),
"Bad definition for {definition_kind} {key}: must be {definition_class_name} or "
"callable, got {type_}".format(
definition_kind=definition_kind,
key=key,
definition_class_name=definition_class_name,
type_=type(definition),
),
)
self._definition_class: Type[RepositoryLevelDefinition] = definition_class
self._definition_class_name = definition_class_name
self._definition_kind = definition_kind
self._validation_fn: Callable[
[RepositoryLevelDefinition], RepositoryLevelDefinition
] = validation_fn
self._definitions: Mapping[
str, Union[RepositoryLevelDefinition, Callable[[], RepositoryLevelDefinition]]
] = definitions
self._definition_cache: Dict[str, RepositoryLevelDefinition] = {}
self._definition_names: Optional[List[str]] = None
self._lazy_definitions_fn: Callable[
[], List[RepositoryLevelDefinition]
] = lazy_definitions_fn or (lambda: [])
self._lazy_definitions: Optional[List[RepositoryLevelDefinition]] = None
self._all_definitions: Optional[List[RepositoryLevelDefinition]] = None
def _get_lazy_definitions(self) -> List[RepositoryLevelDefinition]:
if self._lazy_definitions is None:
self._lazy_definitions = self._lazy_definitions_fn()
for definition in self._lazy_definitions:
self._validate_and_cache_definition(definition, definition.name)
return self._lazy_definitions
def get_definition_names(self) -> List[str]:
if self._definition_names:
return self._definition_names
lazy_names = []
for definition in self._get_lazy_definitions():
strict_definition = self._definitions.get(definition.name)
if strict_definition:
check.invariant(
strict_definition == definition,
f"Duplicate definition found for {definition.name}",
)
else:
lazy_names.append(definition.name)
self._definition_names = list(self._definitions.keys()) + lazy_names
return self._definition_names
def has_definition(self, definition_name: str) -> bool:
check.str_param(definition_name, "definition_name")
return definition_name in self.get_definition_names()
def get_all_definitions(self) -> List[RepositoryLevelDefinition]:
if self._all_definitions is not None:
return self._all_definitions
self._all_definitions = list(
sorted(
map(self.get_definition, self.get_definition_names()),
key=lambda definition: definition.name,
)
)
return self._all_definitions
def get_definition(self, definition_name: str) -> RepositoryLevelDefinition:
check.str_param(definition_name, "definition_name")
if not self.has_definition(definition_name):
raise DagsterInvariantViolationError(
"Could not find {definition_kind} '{definition_name}'. Found: "
"{found_names}.".format(
definition_kind=self._definition_kind,
definition_name=definition_name,
found_names=", ".join(
[
"'{found_name}'".format(found_name=found_name)
for found_name in self.get_definition_names()
]
),
)
)
if definition_name in self._definition_cache:
return self._definition_cache[definition_name]
definition_source = self._definitions[definition_name]
if isinstance(definition_source, self._definition_class):
self._definition_cache[definition_name] = self._validation_fn(definition_source)
return definition_source
else:
definition = cast(Callable, definition_source)()
self._validate_and_cache_definition(definition, definition_name)
return definition
def _validate_and_cache_definition(
self, definition: RepositoryLevelDefinition, definition_dict_key: str
):
check.invariant(
isinstance(definition, self._definition_class),
"Bad constructor for {definition_kind} {definition_name}: must return "
"{definition_class_name}, got value of type {type_}".format(
definition_kind=self._definition_kind,
definition_name=definition_dict_key,
definition_class_name=self._definition_class_name,
type_=type(definition),
),
)
check.invariant(
definition.name == definition_dict_key,
"Bad constructor for {definition_kind} '{definition_name}': name in "
"{definition_class_name} does not match: got '{definition_def_name}'".format(
definition_kind=self._definition_kind,
definition_name=definition_dict_key,
definition_class_name=self._definition_class_name,
definition_def_name=definition.name,
),
)
self._definition_cache[definition_dict_key] = self._validation_fn(definition)
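# A sketch of the lazy pattern the index above supports (names here are
# illustrative, not part of the API):
#
#   def make_etl_job():
#       return etl_graph.to_job(name="etl")  # built only on first access
#
#   jobs_index = _CacheingDefinitionIndex(
#       JobDefinition, "JobDefinition", "job",
#       {"etl": make_etl_job}, validation_fn=lambda d: d,
#   )
#   jobs_index.get_definition("etl")  # constructs, validates, and caches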
class RepositoryData(ABC):
"""
Users should usually rely on the :py:func:`@repository <repository>` decorator to create new
repositories, which will in turn call the static constructors on this class. However, users may
subclass :py:class:`RepositoryData` for fine-grained control over access to and lazy creation
of repository members.
"""
@abstractmethod
def get_all_pipelines(self) -> List[PipelineDefinition]:
"""Return all pipelines/jobs in the repository as a list.
Returns:
List[PipelineDefinition]: All pipelines/jobs in the repository.
"""
def get_all_jobs(self) -> List[JobDefinition]:
"""Return all jobs in the repository as a list.
Returns:
List[JobDefinition]: All jobs in the repository.
"""
return [job for job in self.get_all_pipelines() if isinstance(job, JobDefinition)]
def get_pipeline_names(self) -> List[str]:
"""Get the names of all pipelines/jobs in the repository.
Returns:
List[str]
"""
return [pipeline_def.name for pipeline_def in self.get_all_pipelines()]
def get_job_names(self) -> List[str]:
"""Get the names of all jobs in the repository.
Returns:
List[str]
"""
return [job_def.name for job_def in self.get_all_jobs()]
def has_pipeline(self, pipeline_name: str) -> bool:
"""Check if a pipeline/job with a given name is present in the repository.
Args:
pipeline_name (str): The name of the pipeline/job.
Returns:
bool
"""
return pipeline_name in self.get_pipeline_names()
def has_job(self, job_name: str) -> bool:
"""Check if a job with a given name is present in the repository.
Args:
job_name (str): The name of the job.
Returns:
bool
"""
return job_name in self.get_job_names()
def get_pipeline(self, pipeline_name) -> PipelineDefinition:
"""Get a pipeline/job by name.
Args:
pipeline_name (str): Name of the pipeline/job to retrieve.
Returns:
PipelineDefinition: The pipeline/job definition corresponding to the given name.
"""
pipelines_with_name = [
pipeline for pipeline in self.get_all_pipelines() if pipeline.name == pipeline_name
]
if not pipelines_with_name:
raise DagsterInvariantViolationError(
f"Could not find pipeline/job {pipeline_name} in repository"
)
return pipelines_with_name[0]
def get_job(self, job_name: str) -> JobDefinition:
"""Get a job by name.
Args:
job_name (str): Name of the job to retrieve.
Returns:
JobDefinition: The job definition corresponding to the given name.
"""
        match = next((job for job in self.get_all_jobs() if job.name == job_name), None)
if match is None:
raise DagsterInvariantViolationError(f"Could not find job {job_name} in repository")
return match
def get_partition_set_names(self):
"""Get the names of all partition sets in the repository.
Returns:
List[str]
"""
return [partition_set.name for partition_set in self.get_all_partition_sets()]
def has_partition_set(self, partition_set_name: str) -> bool:
"""Check if a partition set with a given name is present in the repository.
Args:
partition_set_name (str): The name of the partition set.
Returns:
bool
"""
return partition_set_name in self.get_partition_set_names()
def get_all_partition_sets(self) -> List[PartitionSetDefinition]:
"""Return all partition sets in the repository as a list.
Returns:
List[PartitionSetDefinition]: All partition sets in the repository.
"""
return []
def get_partition_set(self, partition_set_name: str) -> PartitionSetDefinition:
"""Get a partition set by name.
Args:
partition_set_name (str): Name of the partition set to retrieve.
Returns:
PartitionSetDefinition: The partition set definition corresponding to the given name.
"""
partition_sets_with_name = [
partition_set
for partition_set in self.get_all_partition_sets()
if partition_set.name == partition_set_name
]
if not partition_sets_with_name:
raise DagsterInvariantViolationError(
f"Could not find partition set {partition_set_name} in repository"
)
return partition_sets_with_name[0]
def get_schedule_names(self) -> List[str]:
"""Get the names of all schedules in the repository.
Returns:
List[str]
"""
return [schedule.name for schedule in self.get_all_schedules()]
def get_all_schedules(self) -> List[ScheduleDefinition]:
"""Return all schedules in the repository as a list.
Returns:
            List[ScheduleDefinition]: All schedules in the repository.
"""
return []
def get_schedule(self, schedule_name: str) -> ScheduleDefinition:
"""Get a schedule by name.
        Args:
            schedule_name (str): Name of the schedule to retrieve.
Returns:
ScheduleDefinition: The schedule definition corresponding to the given name.
"""
schedules_with_name = [
schedule for schedule in self.get_all_schedules() if schedule.name == schedule_name
]
if not schedules_with_name:
raise DagsterInvariantViolationError(
f"Could not find schedule {schedule_name} in repository"
)
return schedules_with_name[0]
def has_schedule(self, schedule_name: str) -> bool:
return schedule_name in self.get_schedule_names()
def get_all_sensors(self) -> List[SensorDefinition]:
return []
def get_sensor_names(self) -> List[str]:
return [sensor.name for sensor in self.get_all_sensors()]
def get_sensor(self, sensor_name: str) -> SensorDefinition:
sensors_with_name = [
sensor for sensor in self.get_all_sensors() if sensor.name == sensor_name
]
if not sensors_with_name:
raise DagsterInvariantViolationError(
f"Could not find sensor {sensor_name} in repository"
)
return sensors_with_name[0]
def has_sensor(self, sensor_name: str) -> bool:
return sensor_name in self.get_sensor_names()
def get_source_assets_by_key(self) -> Mapping[AssetKey, SourceAsset]:
return {}
T = TypeVar("T")
Resolvable = Callable[[], T]
class CachingRepositoryData(RepositoryData):
"""Default implementation of RepositoryData used by the :py:func:`@repository <repository>` decorator."""
_all_jobs: Optional[List[JobDefinition]]
_all_pipelines: Optional[List[PipelineDefinition]]
def __init__(
self,
pipelines: Mapping[str, Union[PipelineDefinition, Resolvable[PipelineDefinition]]],
jobs: Mapping[str, Union[JobDefinition, Resolvable[JobDefinition]]],
partition_sets: Mapping[
str, Union[PartitionSetDefinition, Resolvable[PartitionSetDefinition]]
],
schedules: Mapping[str, Union[ScheduleDefinition, Resolvable[ScheduleDefinition]]],
sensors: Mapping[str, Union[SensorDefinition, Resolvable[SensorDefinition]]],
source_assets: Mapping[AssetKey, SourceAsset],
):
"""Constructs a new CachingRepositoryData object.
You may pass pipeline, job, partition_set, and schedule definitions directly, or you may pass
callables with no arguments that will be invoked to lazily construct definitions when
accessed by name. This can be helpful for performance when there are many definitions in a
repository, or when constructing the definitions is costly.
Note that when lazily constructing a definition, the name of the definition must match its
key in its dictionary index, or a :py:class:`DagsterInvariantViolationError` will be thrown
at retrieval time.
Args:
pipelines (Mapping[str, Union[PipelineDefinition, Callable[[], PipelineDefinition]]]):
The pipeline definitions belonging to the repository.
jobs (Mapping[str, Union[JobDefinition, Callable[[], JobDefinition]]]):
The job definitions belonging to the repository.
partition_sets (Mapping[str, Union[PartitionSetDefinition, Callable[[], PartitionSetDefinition]]]):
The partition sets belonging to the repository.
schedules (Mapping[str, Union[ScheduleDefinition, Callable[[], ScheduleDefinition]]]):
The schedules belonging to the repository.
sensors (Mapping[str, Union[SensorDefinition, Callable[[], SensorDefinition]]]):
The sensors belonging to a repository.
source_assets (Mapping[AssetKey, SourceAsset]): The source assets belonging to a repository.
"""
check.mapping_param(
pipelines, "pipelines", key_type=str, value_type=(PipelineDefinition, FunctionType)
)
check.mapping_param(jobs, "jobs", key_type=str, value_type=(JobDefinition, FunctionType))
check.mapping_param(
partition_sets,
"partition_sets",
key_type=str,
value_type=(PartitionSetDefinition, FunctionType),
)
check.mapping_param(
schedules, "schedules", key_type=str, value_type=(ScheduleDefinition, FunctionType)
)
check.mapping_param(
sensors, "sensors", key_type=str, value_type=(SensorDefinition, FunctionType)
)
check.mapping_param(
source_assets, "source_assets", key_type=AssetKey, value_type=SourceAsset
)
self._pipelines = _CacheingDefinitionIndex(
PipelineDefinition,
"PipelineDefinition",
"pipeline",
pipelines,
self._validate_pipeline,
)
self._jobs = _CacheingDefinitionIndex(
JobDefinition,
"JobDefinition",
"job",
jobs,
self._validate_job,
)
self._schedules = _CacheingDefinitionIndex(
ScheduleDefinition,
"ScheduleDefinition",
"schedule",
schedules,
self._validate_schedule,
)
schedule_partition_sets = [
schedule.get_partition_set()
for schedule in self._schedules.get_all_definitions()
if isinstance(schedule, PartitionScheduleDefinition)
]
self._source_assets = source_assets
def load_partition_sets_from_pipelines() -> List[PartitionSetDefinition]:
job_partition_sets = []
for pipeline in self.get_all_pipelines():
if isinstance(pipeline, JobDefinition):
job_partition_set = pipeline.get_partition_set_def()
if job_partition_set:
# should only return a partition set if this was constructed using the job
# API, with a partitioned config
job_partition_sets.append(job_partition_set)
return job_partition_sets
self._partition_sets = _CacheingDefinitionIndex(
PartitionSetDefinition,
"PartitionSetDefinition",
"partition set",
merge_dicts(
{partition_set.name: partition_set for partition_set in schedule_partition_sets},
partition_sets,
),
self._validate_partition_set,
load_partition_sets_from_pipelines,
)
self._sensors = _CacheingDefinitionIndex(
SensorDefinition,
"SensorDefinition",
"sensor",
sensors,
self._validate_sensor,
)
# load all sensors to force validation
self._sensors.get_all_definitions()
self._all_pipelines = None
self._all_jobs = None
@staticmethod
def from_dict(repository_definitions: Dict[str, Dict[str, Any]]) -> "CachingRepositoryData":
"""Static constructor.
Args:
repository_definition (Dict[str, Dict[str, ...]]): A dict of the form:
{
'pipelines': Dict[str, Callable[[], PipelineDefinition]],
'jobs': Dict[str, Callable[[], JobDefinition]],
'partition_sets': Dict[str, Callable[[], PartitionSetDefinition]],
'schedules': Dict[str, Callable[[], ScheduleDefinition]]
}
This form is intended to allow definitions to be created lazily when accessed by name,
which can be helpful for performance when there are many definitions in a repository, or
when constructing the definitions is costly.
"""
check.dict_param(repository_definitions, "repository_definitions", key_type=str)
check.invariant(
set(repository_definitions.keys()).issubset(VALID_REPOSITORY_DATA_DICT_KEYS),
"Bad dict: must not contain keys other than {{{valid_keys}}}: found {bad_keys}.".format(
valid_keys=", ".join(
["'{key}'".format(key=key) for key in VALID_REPOSITORY_DATA_DICT_KEYS]
),
bad_keys=", ".join(
[
"'{key}'"
for key in repository_definitions.keys()
if key not in VALID_REPOSITORY_DATA_DICT_KEYS
]
),
),
)
for key in VALID_REPOSITORY_DATA_DICT_KEYS:
if key not in repository_definitions:
repository_definitions[key] = {}
duplicate_keys = set(repository_definitions["schedules"].keys()).intersection(
set(repository_definitions["sensors"].keys())
)
if duplicate_keys:
raise DagsterInvalidDefinitionError(
f"Duplicate definitions between schedules and sensors found for keys: {', '.join(duplicate_keys)}"
)
# merge jobs in to pipelines while they are just implemented as pipelines
for key, job in repository_definitions["jobs"].items():
if key in repository_definitions["pipelines"]:
raise DagsterInvalidDefinitionError(
f'Conflicting entries for name {key} in "jobs" and "pipelines".'
)
if isinstance(job, GraphDefinition):
repository_definitions["jobs"][key] = job.coerce_to_job()
elif not isinstance(job, JobDefinition):
raise DagsterInvalidDefinitionError(
f"Object mapped to {key} is not an instance of JobDefinition or GraphDefinition."
)
return CachingRepositoryData(**repository_definitions, source_assets={})
@classmethod
def from_list(
cls,
repository_definitions: List[
Union[
PipelineDefinition,
PartitionSetDefinition,
ScheduleDefinition,
SensorDefinition,
"AssetGroup",
GraphDefinition,
]
],
) -> "CachingRepositoryData":
"""Static constructor.
Args:
repository_definitions (List[Union[PipelineDefinition, PartitionSetDefinition, ScheduleDefinition, SensorDefinition, AssetGroup, GraphDefinition]]):
Use this constructor when you have no need to lazy load pipelines/jobs or other
definitions.
"""
from dagster.core.asset_defs import AssetGroup, build_assets_job
pipelines_or_jobs: Dict[str, Union[PipelineDefinition, JobDefinition]] = {}
partition_sets: Dict[str, PartitionSetDefinition] = {}
schedules: Dict[str, ScheduleDefinition] = {}
sensors: Dict[str, SensorDefinition] = {}
source_assets: Dict[AssetKey, SourceAsset] = {}
for definition in repository_definitions:
if isinstance(definition, PipelineDefinition):
if (
definition.name in pipelines_or_jobs
and pipelines_or_jobs[definition.name] != definition
):
raise DagsterInvalidDefinitionError(
"Duplicate {target_type} definition found for {target}".format(
target_type=definition.target_type, target=definition.describe_target()
)
)
if definition.name == AssetGroup.all_assets_job_name():
raise DagsterInvalidDefinitionError(
f"Attempted to provide job called {AssetGroup.all_assets_job_name()} to repository, which is a reserved name. Please rename the job."
)
pipelines_or_jobs[definition.name] = definition
elif isinstance(definition, PartitionSetDefinition):
if definition.name in partition_sets:
raise DagsterInvalidDefinitionError(
"Duplicate partition set definition found for partition set "
"{partition_set_name}".format(partition_set_name=definition.name)
)
partition_sets[definition.name] = definition
elif isinstance(definition, SensorDefinition):
if definition.name in sensors or definition.name in schedules:
raise DagsterInvalidDefinitionError(
f"Duplicate definition found for {definition.name}"
)
sensors[definition.name] = definition
if definition.has_loadable_targets():
targets = definition.load_targets()
for target in targets:
pipelines_or_jobs[target.name] = target
elif isinstance(definition, ScheduleDefinition):
if definition.name in sensors or definition.name in schedules:
raise DagsterInvalidDefinitionError(
f"Duplicate definition found for {definition.name}"
)
schedules[definition.name] = definition
if definition.has_loadable_target():
target = definition.load_target()
pipelines_or_jobs[target.name] = target
if isinstance(definition, PartitionScheduleDefinition):
partition_set_def = definition.get_partition_set()
if (
partition_set_def.name in partition_sets
and partition_set_def != partition_sets[partition_set_def.name]
):
raise DagsterInvalidDefinitionError(
"Duplicate partition set definition found for partition set "
"{partition_set_name}".format(partition_set_name=partition_set_def.name)
)
partition_sets[partition_set_def.name] = partition_set_def
elif isinstance(definition, GraphDefinition):
coerced = definition.coerce_to_job()
if coerced.name in pipelines_or_jobs:
raise DagsterInvalidDefinitionError(
"Duplicate {target_type} definition found for graph '{name}'".format(
target_type=coerced.target_type, name=coerced.name
)
)
pipelines_or_jobs[coerced.name] = coerced
elif isinstance(definition, AssetGroup):
asset_group = definition
if asset_group.all_assets_job_name() in pipelines_or_jobs:
raise DagsterInvalidDefinitionError(
"When constructing repository, attempted to pass multiple AssetGroups. There can only be one AssetGroup per repository."
)
pipelines_or_jobs[asset_group.all_assets_job_name()] = build_assets_job(
asset_group.all_assets_job_name(),
assets=asset_group.assets,
source_assets=asset_group.source_assets,
resource_defs=asset_group.resource_defs,
executor_def=asset_group.executor_def,
)
source_assets = {
source_asset.key: source_asset for source_asset in asset_group.source_assets
}
else:
check.failed(f"Unexpected repository entry {definition}")
pipelines: Dict[str, PipelineDefinition] = {}
jobs: Dict[str, JobDefinition] = {}
for name, pipeline_or_job in pipelines_or_jobs.items():
if isinstance(pipeline_or_job, JobDefinition):
jobs[name] = pipeline_or_job
else:
pipelines[name] = pipeline_or_job
return CachingRepositoryData(
pipelines=pipelines,
jobs=jobs,
partition_sets=partition_sets,
schedules=schedules,
sensors=sensors,
source_assets=source_assets,
)
def get_pipeline_names(self) -> List[str]:
"""Get the names of all pipelines/jobs in the repository.
Returns:
List[str]
"""
return self._pipelines.get_definition_names() + self.get_job_names()
def get_job_names(self) -> List[str]:
"""Get the names of all jobs in the repository.
Returns:
List[str]
"""
return self._jobs.get_definition_names()
def has_pipeline(self, pipeline_name: str) -> bool:
"""Check if a pipeline/job with a given name is present in the repository.
Args:
pipeline_name (str): The name of the pipeline/job.
Returns:
bool
"""
check.str_param(pipeline_name, "pipeline_name")
return self._pipelines.has_definition(pipeline_name) or self._jobs.has_definition(
pipeline_name
)
def has_job(self, job_name: str) -> bool:
"""Check if a job with a given name is present in the repository.
Args:
job_name (str): The name of the job.
Returns:
bool
"""
check.str_param(job_name, "job_name")
return self._jobs.has_definition(job_name)
def get_all_pipelines(self) -> List[PipelineDefinition]:
"""Return all pipelines/jobs in the repository as a list.
Note that this will construct any pipeline/job that has not yet been constructed.
Returns:
List[PipelineDefinition]: All pipelines/jobs in the repository.
"""
if self._all_pipelines is not None:
return self._all_pipelines
self._all_jobs = self._jobs.get_all_definitions()
pipelines: List[PipelineDefinition] = [
*self._pipelines.get_all_definitions(),
*self._all_jobs,
]
self._check_solid_defs(pipelines)
self._all_pipelines = pipelines
return self._all_pipelines
def get_all_jobs(self) -> List[JobDefinition]:
"""Return all jobs in the repository as a list.
Note that this will construct any job that has not yet been constructed.
Returns:
List[JobDefinition]: All jobs in the repository.
"""
if self._all_jobs is not None:
return self._all_jobs
# _check_solid_defs enforces that pipeline and graph definition names are
# unique within a repository. Loads pipelines in the line below to enforce
# pipeline/job/graph uniqueness.
self.get_all_pipelines()
# The `get_all_pipelines` call ensures _all_jobs is set.
return cast(List[JobDefinition], self._all_jobs)
def get_pipeline(self, pipeline_name: str) -> PipelineDefinition:
"""Get a pipeline/job by name.
If this pipeline/job has not yet been constructed, only this pipeline/job is constructed, and will
be cached for future calls.
Args:
pipeline_name (str): Name of the pipeline/job to retrieve.
Returns:
PipelineDefinition: The pipeline/job definition corresponding to the given name.
"""
check.str_param(pipeline_name, "pipeline_name")
if self._jobs.has_definition(pipeline_name):
return self._jobs.get_definition(pipeline_name)
else:
return self._pipelines.get_definition(pipeline_name)
def get_job(self, job_name: str) -> JobDefinition:
"""Get a job by name.
If this job has not yet been constructed, only this job is constructed, and will
be cached for future calls.
Args:
job_name (str): Name of the job to retrieve.
Returns:
JobDefinition: The job definition corresponding to the given name.
"""
check.str_param(job_name, "job_name")
return self._jobs.get_definition(job_name)
def get_partition_set_names(self) -> List[str]:
"""Get the names of all partition sets in the repository.
Returns:
List[str]
"""
return self._partition_sets.get_definition_names()
def has_partition_set(self, partition_set_name: str) -> bool:
"""Check if a partition set with a given name is present in the repository.
Args:
partition_set_name (str): The name of the partition set.
Returns:
bool
"""
check.str_param(partition_set_name, "partition_set_name")
return self._partition_sets.has_definition(partition_set_name)
def get_all_partition_sets(self) -> List[PartitionSetDefinition]:
"""Return all partition sets in the repository as a list.
Note that this will construct any partition set that has not yet been constructed.
Returns:
List[PartitionSetDefinition]: All partition sets in the repository.
"""
return self._partition_sets.get_all_definitions()
def get_partition_set(self, partition_set_name: str) -> PartitionSetDefinition:
"""Get a partition set by name.
If this partition set has not yet been constructed, only this partition set is constructed,
and will be cached for future calls.
Args:
partition_set_name (str): Name of the partition set to retrieve.
Returns:
PartitionSetDefinition: The partition set definition corresponding to the given name.
"""
check.str_param(partition_set_name, "partition_set_name")
return self._partition_sets.get_definition(partition_set_name)
def get_schedule_names(self) -> List[str]:
"""Get the names of all schedules in the repository.
Returns:
List[str]
"""
return self._schedules.get_definition_names()
def get_all_schedules(self) -> List[ScheduleDefinition]:
"""Return all schedules in the repository as a list.
Note that this will construct any schedule that has not yet been constructed.
Returns:
List[ScheduleDefinition]: All schedules in the repository.
"""
return self._schedules.get_all_definitions()
def get_schedule(self, schedule_name: str) -> ScheduleDefinition:
"""Get a schedule by name.
        If this schedule has not yet been constructed, only this schedule is constructed, and will
        be cached for future calls.
        Args:
            schedule_name (str): Name of the schedule to retrieve.
Returns:
ScheduleDefinition: The schedule definition corresponding to the given name.
"""
check.str_param(schedule_name, "schedule_name")
return self._schedules.get_definition(schedule_name)
def has_schedule(self, schedule_name: str) -> bool:
check.str_param(schedule_name, "schedule_name")
return self._schedules.has_definition(schedule_name)
def get_all_sensors(self) -> List[SensorDefinition]:
return self._sensors.get_all_definitions()
def get_sensor_names(self) -> List[str]:
return self._sensors.get_definition_names()
def get_sensor(self, sensor_name: str) -> SensorDefinition:
return self._sensors.get_definition(sensor_name)
def has_sensor(self, sensor_name: str) -> bool:
return self._sensors.has_definition(sensor_name)
def get_source_assets_by_key(self) -> Mapping[AssetKey, SourceAsset]:
return self._source_assets
def _check_solid_defs(self, pipelines: List[PipelineDefinition]) -> None:
solid_defs = {}
solid_to_pipeline = {}
for pipeline in pipelines:
for solid_def in [*pipeline.all_node_defs, pipeline.graph]:
# skip checks for subselected graphs because they don't have their own names
if isinstance(solid_def, SubselectedGraphDefinition):
break
if solid_def.name not in solid_defs:
solid_defs[solid_def.name] = solid_def
solid_to_pipeline[solid_def.name] = pipeline.name
if not solid_defs[solid_def.name] is solid_def:
first_name, second_name = sorted(
[solid_to_pipeline[solid_def.name], pipeline.name]
)
raise DagsterInvalidDefinitionError(
(
f"Conflicting definitions found in repository with name '{solid_def.name}'. "
"Op/Graph/Solid definition names must be unique within a "
f"repository. {solid_def.__class__.__name__} is defined in {pipeline.target_type} "
f"'{first_name}' and in {pipeline.target_type} '{second_name}'."
)
)
def _validate_pipeline(self, pipeline: PipelineDefinition) -> PipelineDefinition:
return pipeline
def _validate_job(self, job: JobDefinition) -> JobDefinition:
return job
def _validate_schedule(self, schedule: ScheduleDefinition) -> ScheduleDefinition:
pipelines = self.get_pipeline_names()
if schedule.pipeline_name not in pipelines:
raise DagsterInvalidDefinitionError(
f'ScheduleDefinition "{schedule.name}" targets job/pipeline "{schedule.pipeline_name}" '
"which was not found in this repository."
)
return schedule
def _validate_sensor(self, sensor: SensorDefinition) -> SensorDefinition:
pipelines = self.get_pipeline_names()
if len(sensor.targets) == 0:
# skip validation when the sensor does not target a pipeline
return sensor
for target in sensor.targets:
if target.pipeline_name not in pipelines:
raise DagsterInvalidDefinitionError(
                    f'SensorDefinition "{sensor.name}" targets job/pipeline "{target.pipeline_name}" '
"which was not found in this repository."
)
return sensor
def _validate_partition_set(
self, partition_set: PartitionSetDefinition
) -> PartitionSetDefinition:
return partition_set
class RepositoryDefinition:
"""Define a repository that contains a group of definitions.
Users should typically not create objects of this class directly. Instead, use the
:py:func:`@repository` decorator.
Args:
name (str): The name of the repository.
repository_data (RepositoryData): Contains the definitions making up the repository.
description (Optional[str]): A string description of the repository.
"""
def __init__(
self,
name,
repository_data,
description=None,
):
self._name = check_valid_name(name)
self._description = check.opt_str_param(description, "description")
self._repository_data = check.inst_param(repository_data, "repository_data", RepositoryData)
@property
def name(self) -> str:
return self._name
@property
def description(self) -> Optional[str]:
return self._description
@property
def pipeline_names(self) -> List[str]:
"""List[str]: Names of all pipelines/jobs in the repository"""
return self._repository_data.get_pipeline_names()
@property
def job_names(self) -> List[str]:
"""List[str]: Names of all jobs in the repository"""
return self._repository_data.get_job_names()
def has_pipeline(self, name: str) -> bool:
"""Check if a pipeline/job with a given name is present in the repository.
Args:
name (str): The name of the pipeline/job.
Returns:
bool
"""
return self._repository_data.has_pipeline(name)
def get_pipeline(self, name: str) -> PipelineDefinition:
"""Get a pipeline/job by name.
If this pipeline/job is present in the lazily evaluated dictionary passed to the
constructor, but has not yet been constructed, only this pipeline/job is constructed, and will
be cached for future calls.
Args:
name (str): Name of the pipeline/job to retrieve.
Returns:
PipelineDefinition: The pipeline/job definition corresponding to the given name.
"""
return self._repository_data.get_pipeline(name)
def get_all_pipelines(self) -> List[PipelineDefinition]:
"""Return all pipelines/jobs in the repository as a list.
Note that this will construct any pipeline/job in the lazily evaluated dictionary that
has not yet been constructed.
Returns:
List[PipelineDefinition]: All pipelines/jobs in the repository.
"""
return self._repository_data.get_all_pipelines()
def has_job(self, name: str) -> bool:
"""Check if a job with a given name is present in the repository.
Args:
name (str): The name of the job.
Returns:
bool
"""
return self._repository_data.has_job(name)
def get_job(self, name: str) -> JobDefinition:
"""Get a job by name.
If this job is present in the lazily evaluated dictionary passed to the
constructor, but has not yet been constructed, only this job is constructed, and
will be cached for future calls.
Args:
name (str): Name of the job to retrieve.
Returns:
JobDefinition: The job definition corresponding to
the given name.
"""
return self._repository_data.get_job(name)
def get_all_jobs(self) -> List[JobDefinition]:
"""Return all jobs in the repository as a list.
Note that this will construct any job in the lazily evaluated dictionary that has
not yet been constructed.
Returns:
List[JobDefinition]: All jobs in the repository.
"""
return self._repository_data.get_all_jobs()
@property
def partition_set_defs(self) -> List[PartitionSetDefinition]:
return self._repository_data.get_all_partition_sets()
def get_partition_set_def(self, name: str) -> PartitionSetDefinition:
return self._repository_data.get_partition_set(name)
@property
def schedule_defs(self) -> List[ScheduleDefinition]:
return self._repository_data.get_all_schedules()
def get_schedule_def(self, name: str) -> ScheduleDefinition:
return self._repository_data.get_schedule(name)
def has_schedule_def(self, name: str) -> bool:
return self._repository_data.has_schedule(name)
@property
def sensor_defs(self) -> List[SensorDefinition]:
return self._repository_data.get_all_sensors()
def get_sensor_def(self, name: str) -> SensorDefinition:
return self._repository_data.get_sensor(name)
def has_sensor_def(self, name: str) -> bool:
return self._repository_data.has_sensor(name)
@property
def source_assets_by_key(self) -> Dict[AssetKey, SourceAsset]:
return self._repository_data.get_source_assets_by_key()
# If definition comes from the @repository decorator, then the __call__ method will be
# overwritten. Therefore, we want to maintain the call-ability of repository definitions.
def __call__(self, *args, **kwargs):
return self
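# Repositories are normally built via the decorator rather than by
# instantiating this class directly, e.g. (a sketch):
#
#   @repository
#   def my_repo():
#       return [my_job, my_schedule, my_sensor]
#
# which constructs a RepositoryDefinition backed by
# CachingRepositoryData.from_list.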
| 38.359107
| 160
| 0.629698
|
4a1283491dc5a6c3005a47f97eb0962ad069b0b7
| 53,476
|
py
|
Python
|
website/routes.py
|
alexschiller/osf.io
|
4122d4be152c6189142c2ebb19cfdee09c77035d
|
[
"Apache-2.0"
] | null | null | null |
website/routes.py
|
alexschiller/osf.io
|
4122d4be152c6189142c2ebb19cfdee09c77035d
|
[
"Apache-2.0"
] | null | null | null |
website/routes.py
|
alexschiller/osf.io
|
4122d4be152c6189142c2ebb19cfdee09c77035d
|
[
"Apache-2.0"
] | null | null | null |
# -*- coding: utf-8 -*-
from __future__ import absolute_import
import os
import httplib as http
from flask import request
from flask import send_from_directory
from geoip import geolite2
from framework import status
from framework import sentry
from framework.auth import cas
from framework.routing import Rule
from framework.flask import redirect
from framework.routing import WebRenderer
from framework.exceptions import HTTPError
from framework.auth import get_display_name
from framework.routing import json_renderer
from framework.routing import process_rules
from framework.auth import views as auth_views
from framework.routing import render_mako_string
from framework.auth.core import _get_current_user
from modularodm import Q
from modularodm.exceptions import QueryException, NoResultsFound
from website import util
from website import prereg
from website import settings
from website import language
from website.util import metrics
from website.util import paths
from website.util import sanitize
from website import maintenance
from website.models import Institution
from website import landing_pages as landing_page_views
from website import views as website_views
from website.citations import views as citation_views
from website.search import views as search_views
from website.oauth import views as oauth_views
from website.profile import views as profile_views
from website.project import views as project_views
from addons.base import views as addon_views
from website.discovery import views as discovery_views
from website.conferences import views as conference_views
from website.preprints import views as preprint_views
from website.registries import views as registries_views
from website.institutions import views as institution_views
from website.notifications import views as notification_views
def get_globals():
"""Context variables that are available for every template rendered by
OSFWebRenderer.
"""
user = _get_current_user()
user_institutions = [{'id': inst._id, 'name': inst.name, 'logo_path': inst.logo_path_rounded_corners} for inst in user.affiliated_institutions.all()] if user else []
location = geolite2.lookup(request.remote_addr) if request.remote_addr else None
if request.host_url != settings.DOMAIN:
try:
inst_id = (Institution.find_one(Q('domains', 'eq', request.host.lower())))._id
request_login_url = '{}institutions/{}'.format(settings.DOMAIN, inst_id)
except NoResultsFound:
request_login_url = request.url.replace(request.host_url, settings.DOMAIN)
else:
request_login_url = request.url
return {
'private_link_anonymous': is_private_link_anonymous_view(),
'user_name': user.username if user else '',
'user_full_name': user.fullname if user else '',
'user_id': user._id if user else '',
'user_locale': user.locale if user and user.locale else '',
'user_timezone': user.timezone if user and user.timezone else '',
'user_url': user.url if user else '',
'user_gravatar': profile_views.current_user_gravatar(size=25)['gravatar_url'] if user else '',
'user_email_verifications': user.unconfirmed_email_info if user else [],
'user_api_url': user.api_url if user else '',
'user_entry_point': metrics.get_entry_point(user) if user else '',
'user_institutions': user_institutions if user else None,
'display_name': get_display_name(user.fullname) if user else '',
'anon': {
'continent': getattr(location, 'continent', None),
'country': getattr(location, 'country', None),
},
'use_cdn': settings.USE_CDN_FOR_CLIENT_LIBS,
'sentry_dsn_js': settings.SENTRY_DSN_JS if sentry.enabled else None,
'dev_mode': settings.DEV_MODE,
'allow_login': settings.ALLOW_LOGIN,
'cookie_name': settings.COOKIE_NAME,
'status': status.pop_status_messages(),
'prev_status': status.pop_previous_status_messages(),
'domain': settings.DOMAIN,
'api_domain': settings.API_DOMAIN,
'disk_saving_mode': settings.DISK_SAVING_MODE,
'language': language,
'noteworthy_links_node': settings.NEW_AND_NOTEWORTHY_LINKS_NODE,
'popular_links_node': settings.POPULAR_LINKS_NODE,
'web_url_for': util.web_url_for,
'api_url_for': util.api_url_for,
'api_v2_url': util.api_v2_url, # URL function for templates
'api_v2_base': util.api_v2_url(''), # Base url used by JS api helper
'sanitize': sanitize,
'sjson': lambda s: sanitize.safe_json(s),
'webpack_asset': paths.webpack_asset,
'waterbutler_url': settings.WATERBUTLER_URL,
'login_url': cas.get_login_url(request_login_url),
'reauth_url': util.web_url_for('auth_logout', redirect_url=request.url, reauth=True),
'profile_url': cas.get_profile_url(),
'enable_institutions': settings.ENABLE_INSTITUTIONS,
'keen': {
'public': {
'project_id': settings.KEEN['public']['project_id'],
'write_key': settings.KEEN['public']['write_key'],
},
'private': {
'project_id': settings.KEEN['private']['project_id'],
'write_key': settings.KEEN['private']['write_key'],
},
},
'maintenance': maintenance.get_maintenance(),
'recaptcha_site_key': settings.RECAPTCHA_SITE_KEY
}
def is_private_link_anonymous_view():
try:
# Avoid circular import
from osf.models import PrivateLink
return PrivateLink.find_one(
Q('key', 'eq', request.args.get('view_only'))
).anonymous
except QueryException:
return False
class OsfWebRenderer(WebRenderer):
"""Render a Mako template with OSF context vars.
:param trust: Optional. If ``False``, markup-safe escaping will be enabled
"""
def __init__(self, *args, **kwargs):
kwargs['data'] = get_globals
super(OsfWebRenderer, self).__init__(*args, **kwargs)
#: Use if a view only redirects or raises error
notemplate = OsfWebRenderer('', renderer=render_mako_string, trust=False)
# Static files (robots.txt, etc.)
def favicon():
return send_from_directory(
settings.STATIC_FOLDER,
'favicon.ico',
mimetype='image/vnd.microsoft.icon'
)
def robots():
"""Serves the robots.txt file."""
# Allow local robots.txt
if os.path.exists(os.path.join(settings.STATIC_FOLDER,
'robots.local.txt')):
robots_file = 'robots.local.txt'
else:
robots_file = 'robots.txt'
return send_from_directory(
settings.STATIC_FOLDER,
robots_file,
mimetype='text/plain'
)
def ember_app(path=None):
"""Serve the contents of the ember application"""
ember_app_folder = None
fp = path or 'index.html'
for k in settings.EXTERNAL_EMBER_APPS.keys():
if request.path.strip('/').startswith(k):
ember_app_folder = os.path.abspath(os.path.join(os.getcwd(), settings.EXTERNAL_EMBER_APPS[k]['path']))
break
if not ember_app_folder:
raise HTTPError(http.NOT_FOUND)
if not os.path.abspath(os.path.join(ember_app_folder, fp)).startswith(ember_app_folder):
# Prevent accessing files outside of the ember build dir
raise HTTPError(http.NOT_FOUND)
if not os.path.isfile(os.path.join(ember_app_folder, fp)):
fp = 'index.html'
return send_from_directory(ember_app_folder, fp)
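# Note: ember_app falls back to index.html for unknown paths so the Ember
# router can handle client-side routes, and it rejects any path that resolves
# outside the configured build directory.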
def goodbye():
# Redirect to dashboard if logged in
if _get_current_user():
return redirect(util.web_url_for('index'))
status.push_status_message(language.LOGOUT, kind='success', trust=False)
return {}
def make_url_map(app):
"""Set up all the routes for the OSF app.
:param app: A Flask/Werkzeug app to bind the rules to.
"""
# Set default views to 404, using URL-appropriate renderers
process_rules(app, [
Rule(
'/<path:_>',
['get', 'post'],
HTTPError(http.NOT_FOUND),
OsfWebRenderer('', render_mako_string, trust=False)
),
Rule(
'/api/v1/<path:_>',
['get', 'post'],
HTTPError(http.NOT_FOUND),
json_renderer
),
])
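
    # Rules here and below follow the (routes, methods, view_or_data, renderer)
    # shape; where a plain dict (often {}) sits in the view position, it stands
    # in for a view and is passed to the renderer as template context, as the
    # /faq/ and /support/ rules below suggest.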
### GUID ###
process_rules(app, [
Rule(
[
'/<guid>/',
'/<guid>/<path:suffix>',
],
['get', 'post', 'put', 'patch', 'delete'],
website_views.resolve_guid,
notemplate,
),
Rule(
[
'/api/v1/<guid>/',
'/api/v1/<guid>/<path:suffix>',
],
['get', 'post', 'put', 'patch', 'delete'],
website_views.resolve_guid,
json_renderer,
),
])
# Static files
process_rules(app, [
Rule('/favicon.ico', 'get', favicon, json_renderer),
Rule('/robots.txt', 'get', robots, json_renderer),
])
if settings.USE_EXTERNAL_EMBER:
# Routes that serve up the Ember application. Hide behind feature flag.
rules = []
for prefix in settings.EXTERNAL_EMBER_APPS.keys():
rules += [
'/{}/'.format(prefix),
'/{}/<path:path>'.format(prefix),
]
process_rules(app, [
Rule(rules, 'get', ember_app, json_renderer),
])
### Base ###
process_rules(app, [
Rule(
'/dashboard/',
'get',
website_views.dashboard,
OsfWebRenderer('home.mako', trust=False)
),
Rule(
'/myprojects/',
'get',
website_views.my_projects,
OsfWebRenderer('my_projects.mako', trust=False)
),
Rule(
'/reproducibility/',
'get',
website_views.reproducibility,
notemplate
),
Rule('/about/', 'get', website_views.redirect_about, notemplate),
Rule('/help/', 'get', website_views.redirect_help, notemplate),
Rule('/faq/', 'get', {}, OsfWebRenderer('public/pages/faq.mako', trust=False)),
Rule(['/getting-started/', '/getting-started/email/', '/howosfworks/'], 'get', website_views.redirect_getting_started, notemplate),
Rule('/support/', 'get', {}, OsfWebRenderer('public/pages/support.mako', trust=False)),
Rule(
'/explore/',
'get',
{},
OsfWebRenderer('public/explore.mako', trust=False)
),
Rule(
[
'/messages/',
],
'get',
{},
OsfWebRenderer('public/comingsoon.mako', trust=False)
),
Rule(
'/view/<meeting>/',
'get',
conference_views.conference_results,
OsfWebRenderer('public/pages/meeting.mako', trust=False),
),
Rule(
'/view/<meeting>/plain/',
'get',
conference_views.conference_results,
OsfWebRenderer('public/pages/meeting_plain.mako', trust=False),
endpoint_suffix='__plain',
),
Rule(
'/api/v1/view/<meeting>/',
'get',
conference_views.conference_data,
json_renderer,
),
Rule(
'/meetings/',
'get',
conference_views.conference_view,
OsfWebRenderer('public/pages/meeting_landing.mako', trust=False),
),
Rule(
'/api/v1/meetings/submissions/',
'get',
conference_views.conference_submissions,
json_renderer,
),
Rule(
'/presentations/',
'get',
conference_views.redirect_to_meetings,
json_renderer,
),
Rule(
'/news/',
'get',
website_views.redirect_to_cos_news,
notemplate
),
Rule(
[
'/prereg/',
'/erpc/',
],
'get',
prereg.prereg_landing_page,
OsfWebRenderer('prereg_landing_page.mako', trust=False)
),
Rule(
'/preprints/',
'get',
preprint_views.preprint_landing_page,
OsfWebRenderer('public/pages/preprint_landing.mako', trust=False),
),
Rule(
'/registries/',
'get',
registries_views.registries_landing_page,
OsfWebRenderer('public/pages/registries_landing.mako', trust=False),
),
Rule(
'/preprint/',
'get',
preprint_views.preprint_redirect,
notemplate,
),
Rule(
'/api/v1/<campaign>/draft_registrations/',
'get',
prereg.prereg_draft_registrations,
json_renderer,
),
])
# Site-wide API routes
process_rules(app, [
Rule(
'/citations/styles/',
'get',
citation_views.list_citation_styles,
json_renderer,
),
], prefix='/api/v1')
process_rules(app, [
Rule(
[
'/project/<pid>/<addon>/settings/disable/',
'/project/<pid>/node/<nid>/<addon>/settings/disable/',
],
'post',
addon_views.disable_addon,
json_renderer,
),
Rule(
'/profile/<uid>/<addon>/settings/',
'get',
addon_views.get_addon_user_config,
json_renderer,
),
], prefix='/api/v1')
# OAuth
process_rules(app, [
Rule(
'/oauth/connect/<service_name>/',
'get',
oauth_views.oauth_connect,
json_renderer,
),
Rule(
'/oauth/callback/<service_name>/',
'get',
oauth_views.oauth_callback,
OsfWebRenderer('util/oauth_complete.mako', trust=False),
),
])
process_rules(app, [
Rule(
[
'/oauth/accounts/<external_account_id>/',
],
'delete',
oauth_views.oauth_disconnect,
json_renderer,
)
], prefix='/api/v1')
process_rules(app, [
Rule('/confirmed_emails/', 'put', auth_views.unconfirmed_email_add, json_renderer),
Rule('/confirmed_emails/', 'delete', auth_views.unconfirmed_email_remove, json_renderer)
], prefix='/api/v1')
### Metadata ###
process_rules(app, [
Rule(
[
'/project/<pid>/comments/timestamps/',
'/project/<pid>/node/<nid>/comments/timestamps/',
],
'put',
project_views.comment.update_comments_timestamp,
json_renderer,
),
Rule(
[
'/project/<pid>/citation/',
'/project/<pid>/node/<nid>/citation/',
],
'get',
citation_views.node_citation,
json_renderer,
),
], prefix='/api/v1')
### Forms ###
process_rules(app, [
Rule('/forms/signin/', 'get', website_views.signin_form, json_renderer),
Rule('/forms/forgot_password/', 'get', website_views.forgot_password_form, json_renderer),
], prefix='/api/v1')
### Discovery ###
process_rules(app, [
Rule(
'/explore/activity/',
'get',
discovery_views.activity,
OsfWebRenderer('public/pages/active_nodes.mako', trust=False)
),
])
### Auth ###
process_rules(app, [
# confirm email
Rule(
'/confirm/<uid>/<token>/',
'get',
auth_views.confirm_email_get,
notemplate
),
# confirm email for login through external identity provider
Rule(
'/confirm/external/<uid>/<token>/',
'get',
auth_views.external_login_confirm_email_get,
notemplate
),
# reset password get
Rule(
'/resetpassword/<uid>/<token>/',
'get',
auth_views.reset_password_get,
OsfWebRenderer('public/resetpassword.mako', render_mako_string, trust=False)
),
# reset password post
Rule(
'/resetpassword/<uid>/<token>/',
'post',
auth_views.reset_password_post,
OsfWebRenderer('public/resetpassword.mako', render_mako_string, trust=False)
),
# resend confirmation get
Rule(
'/resend/',
'get',
auth_views.resend_confirmation_get,
OsfWebRenderer('resend.mako', render_mako_string, trust=False)
),
# resend confirmation post
Rule(
'/resend/',
'post',
auth_views.resend_confirmation_post,
OsfWebRenderer('resend.mako', render_mako_string, trust=False)
),
# oauth user email get
Rule(
'/external-login/email',
'get',
auth_views.external_login_email_get,
OsfWebRenderer('external_login_email.mako', render_mako_string, trust=False)
),
# oauth user email post
Rule(
'/external-login/email',
'post',
auth_views.external_login_email_post,
OsfWebRenderer('external_login_email.mako', render_mako_string, trust=False)
),
# user sign up page
Rule(
'/register/',
'get',
auth_views.auth_register,
OsfWebRenderer('public/register.mako', trust=False)
),
# osf login and campaign login
Rule(
[
'/login/',
'/account/'
],
'get',
auth_views.auth_login,
notemplate
),
# create user account via api
Rule(
'/api/v1/register/',
'post',
auth_views.register_user,
json_renderer
),
# osf logout and cas logout
Rule(
'/logout/',
'get',
auth_views.auth_logout,
notemplate
),
# forgot password get
Rule(
'/forgotpassword/',
'get',
auth_views.forgot_password_get,
OsfWebRenderer('public/forgot_password.mako', trust=False)
),
# forgot password post
Rule(
'/forgotpassword/',
'post',
auth_views.forgot_password_post,
OsfWebRenderer('public/forgot_password.mako', trust=False)
),
Rule(
'/login/connected_tools/',
'get',
landing_page_views.connected_tools,
notemplate
),
Rule(
'/login/enriched_profile/',
'get',
landing_page_views.enriched_profile,
notemplate
),
])
### Profile ###
# Web
process_rules(app, [
Rule(
'/profile/',
'get',
profile_views.profile_view,
OsfWebRenderer('profile.mako', trust=False)
),
Rule(
'/profile/<uid>/',
'get',
profile_views.profile_view_id,
OsfWebRenderer('profile.mako', trust=False)
),
# unregistered user claim account (contributor-ship of a project)
# user will be required to set email and password
# claim token must be present in query parameter
Rule(
['/user/<uid>/<pid>/claim/'],
['get', 'post'],
project_views.contributor.claim_user_form,
OsfWebRenderer('claim_account.mako', trust=False)
),
# registered user claim account (contributor-ship of a project)
# user will be required to verify password
# claim token must be present in query parameter
Rule(
['/user/<uid>/<pid>/claim/verify/<token>/'],
['get', 'post'],
project_views.contributor.claim_user_registered,
OsfWebRenderer('claim_account_registered.mako', trust=False)
),
Rule(
'/settings/',
'get',
profile_views.user_profile,
OsfWebRenderer('profile/settings.mako', trust=False),
),
Rule(
'/settings/account/',
'get',
profile_views.user_account,
OsfWebRenderer('profile/account.mako', trust=False),
),
Rule(
'/settings/account/password',
'post',
profile_views.user_account_password,
OsfWebRenderer('profile/account.mako', trust=False),
),
Rule(
'/settings/addons/',
'get',
profile_views.user_addons,
OsfWebRenderer('profile/addons.mako', trust=False),
),
Rule(
'/settings/notifications/',
'get',
profile_views.user_notifications,
OsfWebRenderer('profile/notifications.mako', trust=False),
),
Rule(
'/settings/applications/',
'get',
profile_views.oauth_application_list,
OsfWebRenderer('profile/oauth_app_list.mako', trust=False)
),
Rule(
'/settings/applications/create/',
'get',
profile_views.oauth_application_register,
OsfWebRenderer('profile/oauth_app_detail.mako', trust=False)
),
Rule(
'/settings/applications/<client_id>/',
'get',
profile_views.oauth_application_detail,
OsfWebRenderer('profile/oauth_app_detail.mako', trust=False)
),
Rule(
'/settings/tokens/',
'get',
profile_views.personal_access_token_list,
OsfWebRenderer('profile/personal_tokens_list.mako', trust=False)
),
Rule(
'/settings/tokens/create/',
'get',
profile_views.personal_access_token_register,
OsfWebRenderer('profile/personal_tokens_detail.mako', trust=False)
),
Rule(
'/settings/tokens/<_id>/',
'get',
profile_views.personal_access_token_detail,
OsfWebRenderer('profile/personal_tokens_detail.mako', trust=False)
),
# TODO: Uncomment once outstanding issues with this feature are addressed
# Rule(
# '/@<twitter_handle>/',
# 'get',
# profile_views.redirect_to_twitter,
# OsfWebRenderer('error.mako', render_mako_string, trust=False)
# ),
])
# API
process_rules(app, [
Rule('/profile/', 'get', profile_views.profile_view, json_renderer),
Rule('/profile/', 'put', profile_views.update_user, json_renderer),
Rule('/resend/', 'put', profile_views.resend_confirmation, json_renderer),
Rule('/profile/<uid>/', 'get', profile_views.profile_view_id, json_renderer),
# Used by profile.html
Rule('/profile/<uid>/edit/', 'post', profile_views.edit_profile, json_renderer),
Rule('/profile/<uid>/public_projects/', 'get',
profile_views.get_public_projects, json_renderer),
Rule('/profile/<uid>/public_components/', 'get',
profile_views.get_public_components, json_renderer),
Rule('/profile/<user_id>/summary/', 'get',
profile_views.get_profile_summary, json_renderer),
Rule('/user/<uid>/<pid>/claim/email/', 'post',
project_views.contributor.claim_user_post, json_renderer),
Rule(
'/profile/export/',
'post',
profile_views.request_export,
json_renderer,
),
Rule(
'/profile/deactivate/',
'post',
profile_views.request_deactivation,
json_renderer,
),
Rule(
'/profile/logins/',
'patch',
profile_views.delete_external_identity,
json_renderer,
),
Rule(
[
'/profile/gravatar/',
'/users/gravatar/',
'/profile/gravatar/<size>',
'/users/gravatar/<size>',
],
'get',
profile_views.current_user_gravatar,
json_renderer,
),
Rule(
[
'/profile/<uid>/gravatar/',
'/users/<uid>/gravatar/',
'/profile/<uid>/gravatar/<size>',
'/users/<uid>/gravatar/<size>',
],
'get',
profile_views.get_gravatar,
json_renderer,
),
# Rules for user profile configuration
Rule('/settings/names/', 'get', profile_views.serialize_names, json_renderer),
Rule('/settings/names/', 'put', profile_views.unserialize_names, json_renderer),
Rule('/settings/names/impute/', 'get', profile_views.impute_names, json_renderer),
Rule(
[
'/settings/social/',
'/settings/social/<uid>/',
],
'get',
profile_views.serialize_social,
json_renderer,
),
Rule(
[
'/settings/jobs/',
'/settings/jobs/<uid>/',
],
'get',
profile_views.serialize_jobs,
json_renderer,
),
Rule(
[
'/settings/schools/',
'/settings/schools/<uid>/',
],
'get',
profile_views.serialize_schools,
json_renderer,
),
Rule(
[
'/settings/social/',
'/settings/social/<uid>/',
],
'put',
profile_views.unserialize_social,
json_renderer
),
Rule(
[
'/settings/jobs/',
'/settings/jobs/<uid>/',
],
'put',
profile_views.unserialize_jobs,
json_renderer
),
Rule(
[
'/settings/schools/',
'/settings/schools/<uid>/',
],
'put',
profile_views.unserialize_schools,
json_renderer
),
], prefix='/api/v1',)
### Search ###
# Web
process_rules(app, [
Rule(
'/search/',
'get',
{'shareUrl': settings.SHARE_URL},
OsfWebRenderer('search.mako', trust=False)
),
Rule(
'/share/registration/',
'get',
{'register': settings.SHARE_REGISTRATION_URL},
OsfWebRenderer('share_registration.mako', trust=False)
),
Rule(
'/api/v1/user/search/',
'get', search_views.search_contributor,
json_renderer
),
Rule(
'/api/v1/search/node/',
'post',
project_views.node.search_node,
json_renderer,
),
])
# API
process_rules(app, [
Rule(['/search/', '/search/<type>/'], ['get', 'post'], search_views.search_search, json_renderer),
Rule('/search/projects/', 'get', search_views.search_projects_by_title, json_renderer),
], prefix='/api/v1')
# Institution
process_rules(app, [
Rule('/institutions/<inst_id>/', 'get', institution_views.view_institution, OsfWebRenderer('institution.mako', trust=False))
])
# Project
# Web
process_rules(app, [
# '/' route loads home.mako if logged in, otherwise loads landing.mako
Rule('/', 'get', website_views.index, OsfWebRenderer('index.mako', trust=False)),
Rule('/goodbye/', 'get', goodbye, OsfWebRenderer('landing.mako', trust=False)),
Rule(
[
'/project/<pid>/',
'/project/<pid>/node/<nid>/',
],
'get',
project_views.node.view_project,
OsfWebRenderer('project/project.mako', trust=False)
),
# Create a new subproject/component
Rule(
'/project/<pid>/newnode/',
'post',
project_views.node.project_new_node,
notemplate
),
# # TODO: Add API endpoint for tags
# Rule('/tags/<tag>/', 'get', project_views.tag.project_tag, OsfWebRenderer('tags.mako', trust=False)),
Rule('/project/new/<pid>/beforeTemplate/', 'get',
project_views.node.project_before_template, json_renderer),
Rule(
[
'/project/<pid>/contributors/',
'/project/<pid>/node/<nid>/contributors/',
],
'get',
project_views.node.node_contributors,
OsfWebRenderer('project/contributors.mako', trust=False),
),
Rule(
[
'/project/<pid>/settings/',
'/project/<pid>/node/<nid>/settings/',
],
'get',
project_views.node.node_setting,
OsfWebRenderer('project/settings.mako', trust=False)
),
# Permissions
Rule( # TODO: Where, if anywhere, is this route used?
[
'/project/<pid>/permissions/<permissions>/',
'/project/<pid>/node/<nid>/permissions/<permissions>/',
],
'post',
project_views.node.project_set_privacy,
OsfWebRenderer('project/project.mako', trust=False)
),
### Logs ###
# View forks
Rule(
[
'/project/<pid>/forks/',
'/project/<pid>/node/<nid>/forks/',
],
'get',
project_views.node.node_forks,
OsfWebRenderer('project/forks.mako', trust=False)
),
# Registrations
Rule(
[
'/project/<pid>/register/',
'/project/<pid>/node/<nid>/register/',
],
'get',
project_views.register.node_register_page,
OsfWebRenderer('project/register.mako', trust=False)
),
Rule(
[
'/project/<pid>/register/<metaschema_id>/',
'/project/<pid>/node/<nid>/register/<metaschema_id>/',
],
'get',
project_views.register.node_register_template_page,
OsfWebRenderer('project/register.mako', trust=False)
),
Rule(
[
'/project/<pid>/registrations/',
'/project/<pid>/node/<nid>/registrations/',
],
'get',
project_views.node.node_registrations,
OsfWebRenderer('project/registrations.mako', trust=False)
),
Rule(
[
'/project/<pid>/registrations/',
'/project/<pid>/node/<nid>/registrations/',
],
'post',
project_views.drafts.new_draft_registration,
OsfWebRenderer('project/edit_draft_registration.mako', trust=False)),
Rule(
[
'/project/<pid>/drafts/<draft_id>/',
'/project/<pid>/node/<nid>/drafts/<draft_id>/',
],
'get',
project_views.drafts.edit_draft_registration_page,
OsfWebRenderer('project/edit_draft_registration.mako', trust=False)),
Rule(
[
'/project/<pid>/drafts/<draft_id>/register/',
'/project/<pid>/node/<nid>/drafts/<draft_id>/register/',
],
'get',
project_views.drafts.draft_before_register_page,
OsfWebRenderer('project/register_draft.mako', trust=False)),
Rule(
[
'/project/<pid>/retraction/',
'/project/<pid>/node/<nid>/retraction/',
],
'get',
project_views.register.node_registration_retraction_redirect,
notemplate,
),
Rule(
[
'/project/<pid>/withdraw/',
'/project/<pid>/node/<nid>/withdraw/',
],
'get',
project_views.register.node_registration_retraction_get,
OsfWebRenderer('project/retract_registration.mako', trust=False)
),
Rule(
'/ids/<category>/<path:value>/',
'get',
project_views.register.get_referent_by_identifier,
notemplate,
),
# Statistics
Rule(
[
'/project/<pid>/statistics/',
'/project/<pid>/node/<nid>/statistics/',
],
'get',
project_views.node.project_statistics_redirect,
notemplate,
),
Rule(
[
'/project/<pid>/analytics/',
'/project/<pid>/node/<nid>/analytics/',
],
'get',
project_views.node.project_statistics,
OsfWebRenderer('project/statistics.mako', trust=False)
),
### Files ###
# Note: Web endpoint for files view must pass `mode` = `page` to
# include project view data and JS includes
# TODO: Start waterbutler to test
Rule(
[
'/project/<pid>/files/',
'/project/<pid>/node/<nid>/files/',
],
'get',
project_views.file.collect_file_trees,
OsfWebRenderer('project/files.mako', trust=False),
view_kwargs={'mode': 'page'},
),
Rule(
[
'/project/<pid>/files/<provider>/<path:path>/',
'/project/<pid>/node/<nid>/files/<provider>/<path:path>/',
],
'get',
addon_views.addon_view_or_download_file,
OsfWebRenderer('project/view_file.mako', trust=False)
),
Rule(
[
'/api/v1/project/<pid>/files/<provider>/<path:path>/',
'/api/v1/project/<pid>/node/<nid>/files/<provider>/<path:path>/',
],
'get',
addon_views.addon_view_or_download_file,
json_renderer
),
Rule(
[
'/project/<pid>/files/deleted/<trashed_id>/',
'/project/<pid>/node/<nid>/files/deleted/<trashed_id>/',
],
'get',
addon_views.addon_deleted_file,
OsfWebRenderer('project/view_file.mako', trust=False)
),
Rule(
[
# Legacy Addon view file paths
'/project/<pid>/<provider>/files/<path:path>/',
'/project/<pid>/node/<nid>/<provider>/files/<path:path>/',
'/project/<pid>/<provider>/files/<path:path>/download/',
'/project/<pid>/node/<nid>/<provider>/files/<path:path>/download/',
# Legacy routes for `download_file`
'/project/<pid>/osffiles/<fid>/download/',
'/project/<pid>/node/<nid>/osffiles/<fid>/download/',
# Legacy routes for `view_file`
'/project/<pid>/osffiles/<fid>/',
'/project/<pid>/node/<nid>/osffiles/<fid>/',
# Note: Added these old URLs for backwards compatibility with
# hard-coded links.
'/project/<pid>/osffiles/download/<fid>/',
'/project/<pid>/node/<nid>/osffiles/download/<fid>/',
'/project/<pid>/files/<fid>/',
'/project/<pid>/node/<nid>/files/<fid>/',
'/project/<pid>/files/download/<fid>/',
'/project/<pid>/node/<nid>/files/download/<fid>/',
# Legacy routes for `download_file_by_version`
'/project/<pid>/osffiles/<fid>/version/<vid>/download/',
'/project/<pid>/node/<nid>/osffiles/<fid>/version/<vid>/download/',
# Note: Added these old URLs for backwards compatibility with
# hard-coded links.
'/project/<pid>/osffiles/<fid>/version/<vid>/',
'/project/<pid>/node/<nid>/osffiles/<fid>/version/<vid>/',
'/project/<pid>/osffiles/download/<fid>/version/<vid>/',
'/project/<pid>/node/<nid>/osffiles/download/<fid>/version/<vid>/',
'/project/<pid>/files/<fid>/version/<vid>/',
'/project/<pid>/node/<nid>/files/<fid>/version/<vid>/',
'/project/<pid>/files/download/<fid>/version/<vid>/',
'/project/<pid>/node/<nid>/files/download/<fid>/version/<vid>/',
],
'get',
addon_views.addon_view_or_download_file_legacy,
OsfWebRenderer('project/view_file.mako', trust=False),
),
Rule(
[
# api/v1 Legacy routes for `download_file`
'/api/v1/project/<pid>/osffiles/<fid>/',
'/api/v1/project/<pid>/node/<nid>/osffiles/<fid>/',
'/api/v1/project/<pid>/files/download/<fid>/',
'/api/v1/project/<pid>/node/<nid>/files/download/<fid>/',
            # api/v1 Legacy routes for `download_file_by_version`
'/api/v1/project/<pid>/osffiles/<fid>/version/<vid>/',
'/api/v1/project/<pid>/node/<nid>/osffiles/<fid>/version/<vid>/',
'/api/v1/project/<pid>/files/download/<fid>/version/<vid>/',
'/api/v1/project/<pid>/node/<nid>/files/download/<fid>/version/<vid>/',
],
'get',
addon_views.addon_view_or_download_file_legacy,
json_renderer
),
])
# API
process_rules(app, [
Rule(
'/email/meeting/',
'post',
conference_views.meeting_hook,
json_renderer,
),
Rule('/mailchimp/hooks/', 'get', profile_views.mailchimp_get_endpoint, json_renderer),
Rule('/mailchimp/hooks/', 'post', profile_views.sync_data_from_mailchimp, json_renderer),
# Create project, used by [coming replacement]
Rule('/project/new/', 'post', project_views.node.project_new_post, json_renderer),
Rule([
'/project/<pid>/contributors_abbrev/',
'/project/<pid>/node/<nid>/contributors_abbrev/',
], 'get', project_views.contributor.get_node_contributors_abbrev, json_renderer),
Rule('/tags/<tag>/', 'get', project_views.tag.project_tag, json_renderer),
Rule([
'/project/<pid>/',
'/project/<pid>/node/<nid>/',
], 'get', project_views.node.view_project, json_renderer),
Rule(
[
'/project/<pid>/pointer/',
'/project/<pid>/node/<nid>/pointer/',
],
'get',
project_views.node.get_pointed,
json_renderer,
),
Rule(
[
'/project/<pid>/pointer/',
'/project/<pid>/node/<nid>/pointer/',
],
'post',
project_views.node.add_pointers,
json_renderer,
),
Rule(
[
'/pointer/',
],
'post',
project_views.node.add_pointer,
json_renderer,
),
Rule(
[
'/pointers/move/',
],
'post',
project_views.node.move_pointers,
json_renderer,
),
Rule(
[
'/project/<pid>/pointer/',
                '/project/<pid>/node/<nid>/pointer/',
],
'delete',
project_views.node.remove_pointer,
json_renderer,
),
Rule(
[
'/folder/<pid>/pointer/<pointer_id>',
],
'delete',
project_views.node.remove_pointer_from_folder,
json_renderer,
),
Rule([
'/project/<pid>/get_summary/',
'/project/<pid>/node/<nid>/get_summary/',
], 'get', project_views.node.get_summary, json_renderer),
# TODO: [#OSF-6557] Route "get_children" is deprecated. Use get_readable_descendants.
Rule([
'/project/<pid>/get_children/',
'/project/<pid>/node/<nid>/get_children/',
'/project/<pid>/get_readable_descendants/',
'/project/<pid>/node/<nid>/get_readable_descendants/',
], 'get', project_views.node.get_readable_descendants, json_renderer),
Rule([
'/project/<pid>/get_forks/',
'/project/<pid>/node/<nid>/get_forks/',
], 'get', project_views.node.get_forks, json_renderer),
Rule([
'/project/<pid>/get_registrations/',
'/project/<pid>/node/<nid>/get_registrations/',
], 'get', project_views.node.get_registrations, json_renderer),
# Draft Registrations
Rule([
'/project/<pid>/drafts/',
], 'get', project_views.drafts.get_draft_registrations, json_renderer),
Rule([
'/project/<pid>/drafts/<draft_id>/',
], 'get', project_views.drafts.get_draft_registration, json_renderer),
Rule([
'/project/<pid>/drafts/<draft_id>/',
], 'put', project_views.drafts.update_draft_registration, json_renderer),
Rule([
'/project/<pid>/drafts/<draft_id>/',
], 'delete', project_views.drafts.delete_draft_registration, json_renderer),
Rule([
'/project/<pid>/drafts/<draft_id>/submit/',
], 'post', project_views.drafts.submit_draft_for_review, json_renderer),
# Meta Schemas
Rule([
'/project/drafts/schemas/',
], 'get', project_views.drafts.get_metaschemas, json_renderer),
Rule([
'/project/<pid>/get_contributors/',
'/project/<pid>/node/<nid>/get_contributors/',
], 'get', project_views.contributor.get_contributors, json_renderer),
Rule([
'/project/<pid>/get_contributors_from_parent/',
'/project/<pid>/node/<nid>/get_contributors_from_parent/',
], 'get', project_views.contributor.get_contributors_from_parent, json_renderer),
# Reorder contributors
Rule(
[
'/project/<pid>/contributors/manage/',
'/project/<pid>/node/<nid>/contributors/manage/',
],
            'post',
project_views.contributor.project_manage_contributors,
json_renderer,
),
Rule(
[
'/project/<pid>/contributor/remove/',
'/project/<pid>/node/<nid>/contributor/remove/',
],
            'post',
project_views.contributor.project_remove_contributor,
json_renderer,
),
Rule([
'/project/<pid>/get_editable_children/',
'/project/<pid>/node/<nid>/get_editable_children/',
], 'get', project_views.node.get_editable_children, json_renderer),
# Private Link
Rule([
'/project/<pid>/private_link/',
'/project/<pid>/node/<nid>/private_link/',
], 'post', project_views.node.project_generate_private_link_post, json_renderer),
Rule([
'/project/<pid>/private_link/edit/',
'/project/<pid>/node/<nid>/private_link/edit/',
], 'put', project_views.node.project_private_link_edit, json_renderer),
Rule([
'/project/<pid>/private_link/',
'/project/<pid>/node/<nid>/private_link/',
], 'delete', project_views.node.remove_private_link, json_renderer),
Rule([
'/project/<pid>/private_link/',
'/project/<pid>/node/<nid>/private_link/',
], 'get', project_views.node.private_link_table, json_renderer),
# Create, using existing project as a template
Rule([
'/project/new/<nid>/',
], 'post', project_views.node.project_new_from_template, json_renderer),
# Update
Rule(
[
'/project/<pid>/',
'/project/<pid>/node/<nid>/',
],
'put',
project_views.node.update_node,
json_renderer,
),
# Remove
Rule(
[
'/project/<pid>/',
'/project/<pid>/node/<nid>/',
],
'delete',
project_views.node.component_remove,
json_renderer,
),
# Reorder components
Rule('/project/<pid>/reorder_components/', 'post',
project_views.node.project_reorder_components, json_renderer),
# Edit node
Rule([
'/project/<pid>/edit/',
'/project/<pid>/node/<nid>/edit/',
], 'post', project_views.node.edit_node, json_renderer),
# Add / remove tags
Rule([
'/project/<pid>/tags/',
'/project/<pid>/node/<nid>/tags/',
'/project/<pid>/tags/<tag>/',
'/project/<pid>/node/<nid>/tags/<tag>/',
], 'post', project_views.tag.project_add_tag, json_renderer),
Rule([
'/project/<pid>/tags/',
'/project/<pid>/node/<nid>/tags/',
'/project/<pid>/tags/<tag>/',
'/project/<pid>/node/<nid>/tags/<tag>/',
], 'delete', project_views.tag.project_remove_tag, json_renderer),
# Add / remove contributors
Rule([
'/project/<pid>/contributors/',
'/project/<pid>/node/<nid>/contributors/',
], 'post', project_views.contributor.project_contributors_post, json_renderer),
# Forks
Rule(
[
'/project/<pid>/fork/before/',
'/project/<pid>/node/<nid>/fork/before/',
], 'get', project_views.node.project_before_fork, json_renderer,
),
Rule(
[
'/project/<pid>/fork/',
'/project/<pid>/node/<nid>/fork/',
], 'post', project_views.node.node_fork_page, json_renderer,
),
Rule(
[
'/project/<pid>/pointer/fork/',
'/project/<pid>/node/<nid>/pointer/fork/',
], 'post', project_views.node.fork_pointer, json_renderer,
),
# View forks
Rule([
'/project/<pid>/forks/',
'/project/<pid>/node/<nid>/forks/',
], 'get', project_views.node.node_forks, json_renderer),
# Registrations
Rule([
'/project/<pid>/beforeregister/',
            '/project/<pid>/node/<nid>/beforeregister/',
], 'get', project_views.register.project_before_register, json_renderer),
Rule([
'/project/<pid>/drafts/<draft_id>/register/',
'/project/<pid>/node/<nid>/drafts/<draft_id>/register/',
], 'post', project_views.drafts.register_draft_registration, json_renderer),
Rule([
'/project/<pid>/register/<template>/',
'/project/<pid>/node/<nid>/register/<template>/',
], 'get', project_views.register.node_register_template_page, json_renderer),
Rule([
'/project/<pid>/withdraw/',
'/project/<pid>/node/<nid>/withdraw/'
], 'post', project_views.register.node_registration_retraction_post, json_renderer),
Rule(
[
'/project/<pid>/identifiers/',
'/project/<pid>/node/<nid>/identifiers/',
],
'get',
project_views.register.node_identifiers_get,
json_renderer,
),
Rule(
[
'/project/<pid>/identifiers/',
'/project/<pid>/node/<nid>/identifiers/',
],
'post',
project_views.register.node_identifiers_post,
json_renderer,
),
# Statistics
Rule([
'/project/<pid>/statistics/',
'/project/<pid>/node/<nid>/statistics/',
], 'get', project_views.node.project_statistics, json_renderer),
# Permissions
Rule([
'/project/<pid>/permissions/<permissions>/',
'/project/<pid>/node/<nid>/permissions/<permissions>/',
], 'post', project_views.node.project_set_privacy, json_renderer),
Rule([
'/project/<pid>/permissions/beforepublic/',
'/project/<pid>/node/<nid>/permissions/beforepublic/',
], 'get', project_views.node.project_before_set_public, json_renderer),
### Watching ###
Rule([
'/project/<pid>/watch/',
'/project/<pid>/node/<nid>/watch/'
], 'post', project_views.node.watch_post, json_renderer),
Rule([
'/project/<pid>/unwatch/',
'/project/<pid>/node/<nid>/unwatch/'
], 'post', project_views.node.unwatch_post, json_renderer),
Rule([
'/project/<pid>/togglewatch/',
'/project/<pid>/node/<nid>/togglewatch/'
], 'post', project_views.node.togglewatch_post, json_renderer),
# Combined files
Rule(
[
'/project/<pid>/files/',
'/project/<pid>/node/<nid>/files/'
],
'get',
project_views.file.collect_file_trees,
json_renderer,
),
# Endpoint to fetch Rubeus.JS/Hgrid-formatted data
Rule(
[
'/project/<pid>/files/grid/',
'/project/<pid>/node/<nid>/files/grid/'
],
'get',
project_views.file.grid_data,
json_renderer
),
# Settings
Rule(
'/files/auth/',
'get',
addon_views.get_auth,
json_renderer,
),
Rule(
[
'/project/<pid>/waterbutler/logs/',
'/project/<pid>/node/<nid>/waterbutler/logs/',
],
'put',
addon_views.create_waterbutler_log,
json_renderer,
),
Rule(
[
'/registration/<pid>/callbacks/',
],
'put',
project_views.register.registration_callbacks,
json_renderer,
),
Rule(
'/settings/addons/',
'post',
profile_views.user_choose_addons,
json_renderer,
),
Rule(
'/settings/notifications/',
'get',
profile_views.user_notifications,
json_renderer,
),
Rule(
'/settings/notifications/',
'post',
profile_views.user_choose_mailing_lists,
json_renderer,
),
Rule(
'/subscriptions/',
'get',
notification_views.get_subscriptions,
json_renderer,
),
Rule(
[
'/project/<pid>/subscriptions/',
'/project/<pid>/node/<nid>/subscriptions/'
],
'get',
notification_views.get_node_subscriptions,
json_renderer,
),
Rule(
[
'/project/<pid>/tree/',
'/project/<pid>/node/<nid>/tree/'
],
'get',
project_views.node.get_node_tree,
json_renderer,
),
Rule(
'/subscriptions/',
'post',
notification_views.configure_subscription,
json_renderer,
),
Rule(
[
'/project/<pid>/settings/addons/',
'/project/<pid>/node/<nid>/settings/addons/',
],
'post',
project_views.node.node_choose_addons,
json_renderer,
),
Rule(
[
'/project/<pid>/settings/comments/',
'/project/<pid>/node/<nid>/settings/comments/',
],
'post',
project_views.node.configure_comments,
json_renderer,
),
# Invite Users
Rule(
[
'/project/<pid>/invite_contributor/',
'/project/<pid>/node/<nid>/invite_contributor/'
],
'post',
project_views.contributor.invite_contributor_post,
json_renderer
)
], prefix='/api/v1')
# Set up static routing for addons
# NOTE: We use nginx to serve static addon assets in production
addon_base_path = os.path.abspath('addons')
if settings.DEV_MODE:
@app.route('/static/addons/<addon>/<path:filename>')
def addon_static(addon, filename):
addon_path = os.path.join(addon_base_path, addon, 'static')
return send_from_directory(addon_path, filename)
# ---- end of file: avg_line_length 31.235981 | max_line_length 169 | alphanum_fraction 0.527564 ----

# ---- file: setup.py | repo: emre/safran | hexsha: 4a12841b62264b7d1849fd506b5338945bb7767e
# ---- head: 990e8b56400e291177f0b7b0348ca9c30a31b218 | size: 446 | ext: py | lang: Python | licenses: ["MIT"]
# ---- stars: 3 (2015-11-04 .. 2021-05-07) | issues: null | forks: 2 (2016-05-28 .. 2018-01-08)
from setuptools import setup
setup(
name='safran',
version='0.1',
packages=['safran'],
url='http://github.com/emre/safran',
license='MIT',
author='Emre Yilmaz',
author_email='mail@emreyilmaz.me',
description='safran.io CLI reader',
entry_points={
'console_scripts': [
'safran = safran.__main__:main',
],
},
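    # the console_scripts entry above installs a `safran` command that
    # dispatches to safran.__main__:main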
install_requires=[
'feedparser',
'clint',
],
)
# ---- end of file: avg_line_length 21.238095 | max_line_length 44 | alphanum_fraction 0.567265 ----

# ---- file: collage-master/collage/wsgi.py | repo: iyerikuzwe/The-moringa-tribune | hexsha: 4a1284edd8b3063f0f8b7c2a623f8e677fceea23
# ---- head: a234f4db9b0eb74deb278ab98b068b8392e18735 | size: 487 | ext: py | lang: Python | licenses: ["Unlicense"]
# ---- stars: null | issues: null | forks: null
"""
WSGI config for collage project.
It exposes the WSGI callable as a module-level variable named ``application``.
For more information on this file, see
https://docs.djangoproject.com/en/1.11/howto/deployment/wsgi/
"""
import os
from django.core.wsgi import get_wsgi_application
# from whitenoise.django import DjangoWhiteNoise
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "collage.settings")
application = get_wsgi_application()
# application = DjangoWhiteNoise(application)
# ---- end of file: avg_line_length 25.631579 | max_line_length 78 | alphanum_fraction 0.796715 ----

# ---- file: addons14/storage_media/tests/__init__.py | repo: odoochain/addons_oca | hexsha: 4a1285369230a6fee1ad8d1398ad5dff28c717c8
# ---- head: 55d456d798aebe16e49b4a6070765f206a8885ca | size: 33 | ext: py | lang: Python | licenses: ["MIT"]
# ---- stars: 1 (2021-06-10 .. 2021-06-10) | issues: null | forks: 1 (2021-04-09 .. 2021-04-09)
from . import test_storage_media
# ---- end of file: avg_line_length 16.5 | max_line_length 32 | alphanum_fraction 0.848485 ----

# ---- file: python-src/resources/storm.py | repo: AuthEceSoftEng/cenote-vital-write | hexsha: 4a12853f7aacc331c8ffeb94f37a4377133ca8e4
# ---- head: 160923d946b84e1195f8875ff7bfdc311a00aa08 | size: 6058 | ext: py | lang: Python | licenses: ["MIT"]
# ---- stars: null | issues: 2 (2020-04-01 .. 2020-04-10) | forks: null (forks record: AuthEceSoftEng/cenote-write, head 1e68a30a33c43d7c6362836e676a3da3df7cb7ca)
import os
import sys
import traceback
from collections import deque
try:
import simplejson as json
except ImportError:
import json
json_encode = lambda x: json.dumps(x)
json_decode = lambda x: json.loads(x)
# reads lines and reconstructs newlines appropriately
def readMsg():
msg = ""
while True:
line = sys.stdin.readline()
if not line:
raise Exception('Read EOF from stdin')
if line[0:-1] == "end":
break
msg = msg + line
return json_decode(msg[0:-1])
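
# Illustrative framing (the payload here is hypothetical): the parent process
# writes a JSON document followed by a line containing only "end", e.g.
#
#   {"command": "next"}
#   end
#
# for which readMsg() returns {"command": "next"}.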
MODE = None
ANCHOR_TUPLE = None
# queue up commands we read while trying to read taskids
pending_commands = deque()
def readTaskIds():
if pending_taskids:
return pending_taskids.popleft()
else:
msg = readMsg()
while type(msg) is not list:
pending_commands.append(msg)
msg = readMsg()
return msg
# queue up taskids we read while trying to read commands/tuples
pending_taskids = deque()
def readCommand():
if pending_commands:
return pending_commands.popleft()
else:
msg = readMsg()
while type(msg) is list:
pending_taskids.append(msg)
msg = readMsg()
return msg
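
# Task-id replies arrive as JSON lists while commands and tuples arrive as
# JSON objects, so readTaskIds() and readCommand() buffer whichever kind they
# were not asked for in the two deques above, preserving order within each kind.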
def readTuple():
cmd = readCommand()
return Tuple(cmd["id"], cmd["comp"], cmd["stream"], cmd["task"], cmd["tuple"])
def sendMsgToParent(msg):
print(json_encode(msg))
print("end")
sys.stdout.flush()
def sync():
sendMsgToParent({'command': 'sync'})
def sendpid(heartbeatdir):
pid = os.getpid()
sendMsgToParent({'pid': pid})
open(heartbeatdir + "/" + str(pid), "w").close()
def emit(*args, **kwargs):
__emit(*args, **kwargs)
return readTaskIds()
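
# For non-direct emits the parent replies with the list of task ids the tuple
# was routed to, which is why emit() finishes with readTaskIds(); emitDirect()
# targets a single known task and expects no such reply.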
def emitDirect(task, *args, **kwargs):
kwargs["directTask"] = task
__emit(*args, **kwargs)
def __emit(*args, **kwargs):
global MODE
if MODE == Bolt:
emitBolt(*args, **kwargs)
elif MODE == Spout:
emitSpout(*args, **kwargs)
def emitBolt(tup, stream=None, anchors=[], directTask=None):
global ANCHOR_TUPLE
if ANCHOR_TUPLE is not None:
anchors = [ANCHOR_TUPLE]
m = {"command": "emit"}
if stream is not None:
m["stream"] = stream
m["anchors"] = map(lambda a: a.id, anchors)
if directTask is not None:
m["task"] = directTask
m["tuple"] = tup
sendMsgToParent(m)
def emitSpout(tup, stream=None, id=None, directTask=None):
m = {"command": "emit"}
if id is not None:
m["id"] = id
if stream is not None:
m["stream"] = stream
if directTask is not None:
m["task"] = directTask
m["tuple"] = tup
sendMsgToParent(m)
def ack(tup):
sendMsgToParent({"command": "ack", "id": tup.id})
def fail(tup):
sendMsgToParent({"command": "fail", "id": tup.id})
def reportError(msg):
sendMsgToParent({"command": "error", "msg": msg})
def log(msg, level=2):
sendMsgToParent({"command": "log", "msg": msg, "level": level})
def logTrace(msg):
log(msg, 0)
def logDebug(msg):
log(msg, 1)
def logInfo(msg):
log(msg, 2)
def logWarn(msg):
log(msg, 3)
def logError(msg):
log(msg, 4)
def rpcMetrics(name, params):
sendMsgToParent({"command": "metrics", "name": name, "params": params})
def initComponent():
setupInfo = readMsg()
sendpid(setupInfo['pidDir'])
return [setupInfo['conf'], setupInfo['context']]
class Tuple(object):
def __init__(self, id, component, stream, task, values):
self.id = id
self.component = component
self.stream = stream
self.task = task
self.values = values
def __repr__(self):
return '<%s%s>' % (
self.__class__.__name__,
''.join(' %s=%r' % (k, self.__dict__[k]) for k in sorted(self.__dict__.keys())))
def is_heartbeat_tuple(self):
return self.task == -1 and self.stream == "__heartbeat"
class Bolt(object):
def initialize(self, stormconf, context):
pass
def process(self, tuple):
pass
def run(self):
global MODE
MODE = Bolt
conf, context = initComponent()
try:
self.initialize(conf, context)
while True:
tup = readTuple()
if tup.is_heartbeat_tuple():
sync()
else:
self.process(tup)
        except Exception:
            reportError(traceback.format_exc())
class BasicBolt(object):
def initialize(self, stormconf, context):
pass
def process(self, tuple):
pass
def run(self):
global MODE
MODE = Bolt
global ANCHOR_TUPLE
conf, context = initComponent()
try:
self.initialize(conf, context)
while True:
tup = readTuple()
if tup.is_heartbeat_tuple():
sync()
else:
ANCHOR_TUPLE = tup
try:
self.process(tup)
ack(tup)
                    except Exception:
                        reportError(traceback.format_exc())
fail(tup)
        except Exception:
            reportError(traceback.format_exc())
class Spout(object):
def initialize(self, conf, context):
pass
def ack(self, id):
pass
def fail(self, id):
pass
def nextTuple(self):
pass
def run(self):
global MODE
MODE = Spout
conf, context = initComponent()
try:
self.initialize(conf, context)
while True:
msg = readCommand()
if msg["command"] == "next":
self.nextTuple()
if msg["command"] == "ack":
self.ack(msg["id"])
if msg["command"] == "fail":
self.fail(msg["id"])
sync()
        except Exception:
            reportError(traceback.format_exc())
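
# A minimal sketch of a component built on this module (the class and its
# word-splitting logic are hypothetical, for illustration only):
#
#   class SplitSentenceBolt(BasicBolt):
#       def process(self, tup):
#           for word in tup.values[0].split(' '):
#               emit([word])
#
#   if __name__ == '__main__':
#       SplitSentenceBolt().run()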
# ---- end of file: avg_line_length 22.272059 | max_line_length 92 | alphanum_fraction 0.554308 ----

# ---- file: test/bridge/testpyscf.py | repo: chemistry-scripts/cclib | hexsha: 4a12857e637c1158ff83b367c5aa88b6306ee4f7
# ---- head: e8e0ea9b3e9b7091f8dfc4dd52d5e5e84a1cc258 | size: 1143 | ext: py | lang: Python | licenses: ["BSD-3-Clause"]
# ---- stars: null | issues: null | forks: null
# -*- coding: utf-8 -*-
#
# Copyright (c) 2020, the cclib development team
#
# This file is part of cclib (http://cclib.github.io) and is distributed under
# the terms of the BSD 3-Clause License.
import unittest
import numpy as np
from cclib.bridge import cclib2pyscf
from cclib.parser.utils import find_package
class PyscfTest(unittest.TestCase):
"""Tests for the cclib2pyscf bridge in cclib."""
def setUp(self):
super(PyscfTest, self).setUp()
if not find_package('pyscf'):
raise ImportError('Must install pyscf to run this test')
def test_makepyscf(self):
import pyscf
from pyscf import scf
atomnos = np.array([1, 8, 1], "i")
atomcoords = np.array([[-1, 1, 0], [0, 0, 0], [1, 1, 0]], "f")
pyscfmol = cclib2pyscf.makepyscf(atomcoords, atomnos)
pyscfmol.basis = "6-31G**"
pyscfmol.cart = True
pyscfmol.verbose = 0
pyscfmol.build()
mhf = pyscfmol.HF(conv_tol=1e-6)
en = mhf.kernel()
ref = -75.824754602
assert abs(en - ref) < 1.0e-6
if __name__ == "__main__":
unittest.main()
# ---- end of file: avg_line_length 25.977273 | max_line_length 78 | alphanum_fraction 0.613298 ----

# ---- file: lib/fama/diamond_parser/diamond_hit_list.py | repo: aekazakov/FamaProfiling | hexsha: 4a1285a6bae933977b9b5572b02c6d9e4ac42d28
# ---- head: d9db15ea217e3be2aab65c356564a6d345b4f410 | size: 5691 | ext: py | lang: Python | licenses: ["MIT"]
# ---- stars: null | issues: null | forks: null
"""Describes DiamondHitList class"""
import fama.diamond_parser.hit_utils as hit_utils
class DiamondHitList(object):
"""DiamondHitList stores a set of DiamondHit objects for one
query sequence (usually, query is a sequence read or a protein)
"""
def __init__(self, query_id=None):
""" Args:
query_id (str): query sequence identifier
"""
self._query_id = query_id
self.data = []
def add_hit(self, hit):
"""Adds a DiamondHit to the DiamondHitList.
Note: checks if query_id of DiamondHit is identical to query_id of
the DiamondHitList. If they don't, new DiamondHit will not be added.
Then, checks if a DiamondHit with the same subject_id, q_start, q_end
already exists in the DiamondHitList. If it does, new DiamondHit
will not be added.
Args:
hit (:obj:'DiamondHit'): DiamondHit to be added
"""
if hit.query_id == self.query_id:
hit_exists = False
for existing_hit in self.hits:
if existing_hit.subject_id == hit.subject_id and (
existing_hit.q_start == hit.q_start
) and (
existing_hit.q_end == hit.q_end
):
hit_exists = True
break
if not hit_exists:
self.hits.append(hit)
else:
            print('Diamond hit was not added to the list: different query IDs: '
                  + self.query_id + ', ' + hit.query_id)
@property
def hits(self):
""":obj:'list' of :obj:'DiamondHit': list of DIAMOND hits"""
return self.data
@hits.setter
def hits(self, var):
self.data = var
@property
def query_id(self):
"""str: query sequence identifier"""
return self._query_id
@query_id.setter
def query_id(self, var):
if self.hits:
for hit in self.hits:
if hit.query_id != var:
raise ValueError
self._query_id = var
@property
def hits_number(self):
""":obj:'int': number of DiamondHit objects in the list"""
return len(self.hits)
def filter_list(self, overlap_cutoff):
"""Filters list of DiamondHit objects in the DiamondHitList.
Removes hits, which overlap by more than 'overlap_cutoff' base
pairs with any hit with higher bit-score.
Args:
overlap_cutoff(int): minimal length of a common area between
hits to be considered overlapping
"""
temp_list = []
for hit in self.hits:
if not temp_list:
temp_list.append(hit)
elif not hit_utils.hit_overlaps_any_hits(hit, temp_list, overlap_cutoff):
temp_list.append(hit)
elif hit_utils.has_higher_score(hit, temp_list, overlap_cutoff):
temp_list = hit_utils.replace_hit(hit, temp_list, overlap_cutoff)
self.hits = temp_list
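
    # Illustrative outcome (hypothetical hits): with overlap_cutoff=10, a hit
    # spanning positions 1-100 with bit-score 90 and one spanning 60-150 with
    # bit-score 80 overlap by 41 positions, so only the higher-scoring hit
    # survives filter_list().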
def filter_list_by_identity(self, reference_data):
"""Filters list of DiamondHit objects in the DiamondHitList.
Removes hits, which have amino acid identity below threshold defined
in reference data or config file.
        WARNING: before filtering, functions must be assigned to all hits
        (i.e. call the annotate_hits function first)
Args:
reference_data (:obj:ReferenceData): functional reference data
"""
temp_list = []
for hit in self.hits:
max_threshold = 0.0
for function in hit.functions:
function_threshold = reference_data.lookup_identity_threshold(function=function)
if max_threshold < function_threshold:
max_threshold = function_threshold
if max_threshold == 0.0:
max_threshold = reference_data.lookup_identity_threshold()
if hit.identity >= max_threshold:
temp_list.append(hit)
self.hits = temp_list
def remove_hit(self, hit_to_remove):
"""Removes a DiamondHit from the DiamondHitList.
Finds identical DiamondHit object in the DiamondHitList and deletes it.
Args:
hit_to_remove(:obj:'DiamondHit'): a hit to be removed
"""
index_remove = None
for hit_index, hit in enumerate(self.hits):
if hit == hit_to_remove:
index_remove = hit_index
break
if index_remove is not None:
del self.hits[index_remove]
def remove_hit_by_index(self, index_remove):
"""Removes a DiamondHit from the DiamondHitList by its index.
Finds DiamondHit object in the DiamondHitList and deletes it.
Args:
index_remove(int): index of the hit to be removed
"""
try:
del self.hits[index_remove]
except IndexError:
print('Unable to remove hit with index', index_remove)
def remove_all_hits(self):
"""Assigns function to each DiamondHit in the DiamondHitList"""
self.hits = []
def annotate_hits(self, reference_data):
"""Assigns function to each DiamondHit in the DiamondHitList"""
for hit in self.hits:
hit.annotate_hit(reference_data)
def print_hits(self):
"""Prints string representation of each DiamondHit in the DiamondHitList"""
for hit in self.hits:
print(hit)
def __str__(self):
"""Returns string representation of DiamondHitList object"""
return 'Hit List '+'\n'.join(str(hit) for hit in self.hits)
# ---- end of file: avg_line_length 34.283133 | max_line_length 96 | alphanum_fraction 0.604287 ----

# ---- file: nemo/collections/nlp/nm/trainables/__init__.py | repo: ParikhKadam/NeMo | hexsha: 4a1285d3e474bc18c566bf409eb4b513b76d650b
# ---- head: ee11f7c4666d410d91f9da33c61f4819ea625013 | size: 1025 | ext: py | lang: Python | licenses: ["Apache-2.0"]
# ---- stars: 1 (2020-11-05 .. 2020-11-05) | issues: 1 (2020-06-11 .. 2020-06-11) | forks: 3 (2020-03-10 .. 2020-12-08)
# =============================================================================
# Copyright 2020 NVIDIA. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# =============================================================================
from nemo.collections.nlp.nm.trainables.common import *
from nemo.collections.nlp.nm.trainables.dialogue_state_tracking import *
from nemo.collections.nlp.nm.trainables.joint_intent_slot import *
from nemo.collections.nlp.nm.trainables.punctuation_capitalization import *
# ---- end of file: avg_line_length 48.809524 | max_line_length 79 | alphanum_fraction 0.667317 ----

# ---- file: dataset/augmentation.py | hexsha: 4a12869b9c02fdc59cfcf6d91a9e271402c227d1 | size: 10016 | ext: py | lang: Python | licenses: ["Apache-2.0"]
# ---- stars: LightTwist/RobustVideoMatting (head 79eb143fef3a4c58b4857c1a5a927a318f528093), 11 (2021-08-31 .. 2021-11-08)
# ---- issues: umit-ml/RobustVideoMatting (head 03096f23de1831b8181dadd5e165561c2759f9eb), 1 (2021-09-15 .. 2021-09-15) | forks: umit-ml/RobustVideoMatting, 21 (2021-08-31 .. 2021-09-16)
import easing_functions as ef
import random
import torch
from torchvision import transforms
from torchvision.transforms import functional as F
class MotionAugmentation:
def __init__(self,
size,
prob_fgr_affine,
prob_bgr_affine,
prob_noise,
prob_color_jitter,
prob_grayscale,
prob_sharpness,
prob_blur,
prob_hflip,
prob_pause,
static_affine=True,
aspect_ratio_range=(0.9, 1.1)):
self.size = size
self.prob_fgr_affine = prob_fgr_affine
self.prob_bgr_affine = prob_bgr_affine
self.prob_noise = prob_noise
self.prob_color_jitter = prob_color_jitter
self.prob_grayscale = prob_grayscale
self.prob_sharpness = prob_sharpness
self.prob_blur = prob_blur
self.prob_hflip = prob_hflip
self.prob_pause = prob_pause
self.static_affine = static_affine
self.aspect_ratio_range = aspect_ratio_range
def __call__(self, fgrs, phas, bgrs):
# Foreground affine
if random.random() < self.prob_fgr_affine:
fgrs, phas = self._motion_affine(fgrs, phas)
# Background affine
if random.random() < self.prob_bgr_affine / 2:
bgrs = self._motion_affine(bgrs)
if random.random() < self.prob_bgr_affine / 2:
fgrs, phas, bgrs = self._motion_affine(fgrs, phas, bgrs)
# Still Affine
if self.static_affine:
fgrs, phas = self._static_affine(fgrs, phas, scale_ranges=(0.5, 1))
bgrs = self._static_affine(bgrs, scale_ranges=(1, 1.5))
# To tensor
fgrs = torch.stack([F.to_tensor(fgr) for fgr in fgrs])
phas = torch.stack([F.to_tensor(pha) for pha in phas])
bgrs = torch.stack([F.to_tensor(bgr) for bgr in bgrs])
# Resize
params = transforms.RandomResizedCrop.get_params(fgrs, scale=(1, 1), ratio=self.aspect_ratio_range)
fgrs = F.resized_crop(fgrs, *params, self.size, interpolation=F.InterpolationMode.BILINEAR)
phas = F.resized_crop(phas, *params, self.size, interpolation=F.InterpolationMode.BILINEAR)
params = transforms.RandomResizedCrop.get_params(bgrs, scale=(1, 1), ratio=self.aspect_ratio_range)
bgrs = F.resized_crop(bgrs, *params, self.size, interpolation=F.InterpolationMode.BILINEAR)
# Horizontal flip
if random.random() < self.prob_hflip:
fgrs = F.hflip(fgrs)
phas = F.hflip(phas)
if random.random() < self.prob_hflip:
bgrs = F.hflip(bgrs)
# Noise
if random.random() < self.prob_noise:
fgrs, bgrs = self._motion_noise(fgrs, bgrs)
# Color jitter
if random.random() < self.prob_color_jitter:
fgrs = self._motion_color_jitter(fgrs)
if random.random() < self.prob_color_jitter:
bgrs = self._motion_color_jitter(bgrs)
# Grayscale
if random.random() < self.prob_grayscale:
fgrs = F.rgb_to_grayscale(fgrs, num_output_channels=3).contiguous()
bgrs = F.rgb_to_grayscale(bgrs, num_output_channels=3).contiguous()
# Sharpen
if random.random() < self.prob_sharpness:
sharpness = random.random() * 8
fgrs = F.adjust_sharpness(fgrs, sharpness)
phas = F.adjust_sharpness(phas, sharpness)
bgrs = F.adjust_sharpness(bgrs, sharpness)
# Blur
if random.random() < self.prob_blur / 3:
fgrs, phas = self._motion_blur(fgrs, phas)
if random.random() < self.prob_blur / 3:
bgrs = self._motion_blur(bgrs)
if random.random() < self.prob_blur / 3:
fgrs, phas, bgrs = self._motion_blur(fgrs, phas, bgrs)
# Pause
if random.random() < self.prob_pause:
fgrs, phas, bgrs = self._motion_pause(fgrs, phas, bgrs)
return fgrs, phas, bgrs
def _static_affine(self, *imgs, scale_ranges):
params = transforms.RandomAffine.get_params(
degrees=(-10, 10), translate=(0.1, 0.1), scale_ranges=scale_ranges,
shears=(-5, 5), img_size=imgs[0][0].size)
imgs = [[F.affine(t, *params, F.InterpolationMode.BILINEAR) for t in img] for img in imgs]
return imgs if len(imgs) > 1 else imgs[0]
def _motion_affine(self, *imgs):
config = dict(degrees=(-10, 10), translate=(0.1, 0.1),
scale_ranges=(0.9, 1.1), shears=(-5, 5), img_size=imgs[0][0].size)
angleA, (transXA, transYA), scaleA, (shearXA, shearYA) = transforms.RandomAffine.get_params(**config)
angleB, (transXB, transYB), scaleB, (shearXB, shearYB) = transforms.RandomAffine.get_params(**config)
T = len(imgs[0])
easing = random_easing_fn()
for t in range(T):
percentage = easing(t / (T - 1))
angle = lerp(angleA, angleB, percentage)
transX = lerp(transXA, transXB, percentage)
transY = lerp(transYA, transYB, percentage)
scale = lerp(scaleA, scaleB, percentage)
shearX = lerp(shearXA, shearXB, percentage)
shearY = lerp(shearYA, shearYB, percentage)
for img in imgs:
img[t] = F.affine(img[t], angle, (transX, transY), scale, (shearX, shearY), F.InterpolationMode.BILINEAR)
return imgs if len(imgs) > 1 else imgs[0]
def _motion_noise(self, *imgs):
grain_size = random.random() * 3 + 1 # range 1 ~ 4
monochrome = random.random() < 0.5
for img in imgs:
T, C, H, W = img.shape
noise = torch.randn((T, 1 if monochrome else C, round(H / grain_size), round(W / grain_size)))
noise.mul_(random.random() * 0.2 / grain_size)
if grain_size != 1:
noise = F.resize(noise, (H, W))
img.add_(noise).clamp_(0, 1)
return imgs if len(imgs) > 1 else imgs[0]
def _motion_color_jitter(self, *imgs):
brightnessA, brightnessB, contrastA, contrastB, saturationA, saturationB, hueA, hueB \
= torch.randn(8).mul(0.1).tolist()
strength = random.random() * 0.2
easing = random_easing_fn()
T = len(imgs[0])
for t in range(T):
percentage = easing(t / (T - 1)) * strength
for img in imgs:
img[t] = F.adjust_brightness(img[t], max(1 + lerp(brightnessA, brightnessB, percentage), 0.1))
img[t] = F.adjust_contrast(img[t], max(1 + lerp(contrastA, contrastB, percentage), 0.1))
                img[t] = F.adjust_saturation(img[t], max(1 + lerp(saturationA, saturationB, percentage), 0.1))
img[t] = F.adjust_hue(img[t], min(0.5, max(-0.5, lerp(hueA, hueB, percentage) * 0.1)))
return imgs if len(imgs) > 1 else imgs[0]
def _motion_blur(self, *imgs):
blurA = random.random() * 10
blurB = random.random() * 10
T = len(imgs[0])
easing = random_easing_fn()
for t in range(T):
percentage = easing(t / (T - 1))
blur = max(lerp(blurA, blurB, percentage), 0)
if blur != 0:
kernel_size = int(blur * 2)
if kernel_size % 2 == 0:
kernel_size += 1 # Make kernel_size odd
for img in imgs:
img[t] = F.gaussian_blur(img[t], kernel_size, sigma=blur)
return imgs if len(imgs) > 1 else imgs[0]
def _motion_pause(self, *imgs):
T = len(imgs[0])
pause_frame = random.choice(range(T - 1))
pause_length = random.choice(range(T - pause_frame))
for img in imgs:
img[pause_frame + 1 : pause_frame + pause_length] = img[pause_frame]
return imgs if len(imgs) > 1 else imgs[0]
def lerp(a, b, percentage):
return a * (1 - percentage) + b * percentage
def random_easing_fn():
if random.random() < 0.2:
return ef.LinearInOut()
else:
return random.choice([
ef.BackEaseIn,
ef.BackEaseOut,
ef.BackEaseInOut,
ef.BounceEaseIn,
ef.BounceEaseOut,
ef.BounceEaseInOut,
ef.CircularEaseIn,
ef.CircularEaseOut,
ef.CircularEaseInOut,
ef.CubicEaseIn,
ef.CubicEaseOut,
ef.CubicEaseInOut,
ef.ExponentialEaseIn,
ef.ExponentialEaseOut,
ef.ExponentialEaseInOut,
ef.ElasticEaseIn,
ef.ElasticEaseOut,
ef.ElasticEaseInOut,
ef.QuadEaseIn,
ef.QuadEaseOut,
ef.QuadEaseInOut,
ef.QuarticEaseIn,
ef.QuarticEaseOut,
ef.QuarticEaseInOut,
ef.QuinticEaseIn,
ef.QuinticEaseOut,
ef.QuinticEaseInOut,
ef.SineEaseIn,
ef.SineEaseOut,
ef.SineEaseInOut,
Step,
])()
class Step: # Custom easing function for sudden change.
def __call__(self, value):
return 0 if value < 0.5 else 1
# ---------------------------- Frame Sampler ----------------------------
class TrainFrameSampler:
def __init__(self, speed=[0.5, 1, 2, 3, 4, 5]):
self.speed = speed
def __call__(self, seq_length):
frames = list(range(seq_length))
# Speed up
speed = random.choice(self.speed)
frames = [int(f * speed) for f in frames]
# Shift
shift = random.choice(range(seq_length))
frames = [f + shift for f in frames]
# Reverse
if random.random() < 0.5:
frames = frames[::-1]
return frames
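
# Worked example for TrainFrameSampler (illustrative numbers): seq_length=4
# with speed=2 and shift=1 yields [1, 3, 5, 7], and the 50% reversal may turn
# that into [7, 5, 3, 1]. Sampled indices can exceed the source clip length,
# so the consuming dataset is expected to handle out-of-range frames.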
class ValidFrameSampler:
def __call__(self, seq_length):
return range(seq_length)
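
# A minimal usage sketch (the probability values below are made up for
# illustration, not tuned training settings):
#
#   aug = MotionAugmentation(
#       size=(256, 256),
#       prob_fgr_affine=0.3, prob_bgr_affine=0.3, prob_noise=0.1,
#       prob_color_jitter=0.3, prob_grayscale=0.02, prob_sharpness=0.1,
#       prob_blur=0.02, prob_hflip=0.5, prob_pause=0.03)
#   fgrs, phas, bgrs = aug(fgr_frames, pha_frames, bgr_frames)
#
# where the three inputs are equal-length lists of PIL images (foreground,
# alpha, background) and the outputs are stacked tensors resized to `size`.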
# ---- end of file: avg_line_length 38.375479 | max_line_length 121 | alphanum_fraction 0.568191 ----

# ---- file: API/app.py | repo: TitanLi/Stock-lab | hexsha: 4a12879c018e7249cb520206be3cdc8ea829c46c
# ---- head: fd0e0c2e1d89ff5019a5b17ca2e1665d840703bc | size: 11471 | ext: py | lang: Python | licenses: ["MIT"]
# ---- stars: null | issues: null | forks: null
from flask import Flask, request, render_template
import subprocess
import json
import twstock
from datetime import datetime
import requests
import time
app = Flask(__name__)
def Average(lst):
return sum(lst) / len(lst)
finmindtradeToken = ''
mongoToken = ''
stockMapping = {}
with open('./env.json') as json_file:
data = json.load(json_file)
# print('token => ' + data['finmindtradeToken'])
finmindtradeToken = data['finmindtradeToken']
mongoToken = data['mongoToken']
with open('./stockMapping.json') as json_file:
stockMapping = json.load(json_file)
# print(stockMapping)
numberList = []
with open('./number.json') as json_file:
numberList = json.load(json_file)
# print(numberList)
@app.route('/monthRevenueAndEPS')
def monthRevenueAndEPS():
for number in numberList:
monthRevenueAndEPS = {}
epsUrl = 'https://api.finmindtrade.com/api/v4/data?dataset=TaiwanStockFinancialStatements&token='+finmindtradeToken+'&data_id='+str(number)+'&start_date=2010-01-01'
eps = requests.get(epsUrl)
epsJSON = eps.json()
epsData = epsJSON['data']
for getEpsData in epsData:
if getEpsData['type'] == 'EPS':
date = getEpsData['date']
dateSplit = date.split('-')
year = int(dateSplit[0])
month = int(dateSplit[1])
element = int(month/3) - 1
value = getEpsData['value']
if year in monthRevenueAndEPS:
# print(str(year)+'is exist!!')
monthRevenueAndEPS[year]['eps'][element] = value
else:
# print(str(year)+'is not exist!!')
monthRevenueAndEPS[year] = {}
monthRevenueAndEPS[year]['eps'] = [0,0,0,0]
monthRevenueAndEPS[year]['eps'][element] = value
revenueUrl = 'https://api.finmindtrade.com/api/v4/data?dataset=TaiwanStockMonthRevenue&token='+finmindtradeToken+'&data_id='+str(number)+'&start_date=2010-02-01'
revenue = requests.get(revenueUrl)
revenueJSON = revenue.json()
revenueData = revenueJSON['data']
for getRevenueData in revenueData:
year = getRevenueData['revenue_year']
month = getRevenueData['revenue_month']
element = month - 1
value = getRevenueData['revenue']
if year in monthRevenueAndEPS:
if 'revenue' in monthRevenueAndEPS[year]:
monthRevenueAndEPS[year]['revenue'][element] = value
else:
monthRevenueAndEPS[year]['revenue'] = [0,0,0,0,0,0,0,0,0,0,0,0]
monthRevenueAndEPS[year]['revenue'][element] = value
else:
monthRevenueAndEPS[year] = {}
monthRevenueAndEPS[year]['revenue'] = [0,0,0,0,0,0,0,0,0,0,0,0]
monthRevenueAndEPS[year]['revenue'][element] = value
        print('Processing (' + str(number) + ')')
        stockName = stockMapping[str(number)]
        insertData = {
            "collection": "monthRevenueAndEPS",
            "database": "history",
            "dataSource": "Stock",
            "document": {}
        }
        insertData['document']['number'] = number
        insertData['document']['name'] = stockName
        insertData['document']['data'] = monthRevenueAndEPS
        headers = {
            'Content-Type': 'application/json',
            'api-key': mongoToken,
            'Access-Control-Request-Headers': '*'
        }
        resData = requests.post('https://data.mongodb-api.com/app/data-jbgqo/endpoint/data/beta/action/insertOne', data=json.dumps(insertData), headers=headers)
print(resData)
time.sleep(1)
return json.dumps(monthRevenueAndEPS)
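# Refactor sketch (added; the routes in this file do not call it): the same
# insertOne request against the MongoDB Data API is repeated verbatim in
# several routes, so a helper like this could centralize it. The endpoint URL
# and headers mirror the ones already used above.
def mongo_insert_one(collection, document):
    payload = {
        "collection": collection,
        "database": "history",
        "dataSource": "Stock",
        "document": document,
    }
    headers = {
        'Content-Type': 'application/json',
        'api-key': mongoToken,
        'Access-Control-Request-Headers': '*'
    }
    return requests.post(
        'https://data.mongodb-api.com/app/data-jbgqo/endpoint/data/beta/action/insertOne',
        data=json.dumps(payload),
        headers=headers,
    )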
@app.route('/dividend')
def dividend():
for number in numberList:
stockDividendJSON = {}
url = 'https://api.finmindtrade.com/api/v4/data?dataset=TaiwanStockDividend&token='+finmindtradeToken+'&data_id='+str(number)+'&start_date=2010-01-01'
dividend = requests.get(url)
dividendJSON = dividend.json()
data = dividendJSON['data']
for j in data:
            # Cash dividend
            cashDividend = j['CashEarningsDistribution']
            # Stock dividend
            stockDividend = j['StockEarningsDistribution']
            # Stock dividend from statutory surplus
            statutorySurplusDividend = j['StockStatutorySurplus']
            # Ex-dividend trading date
            dividendDealDate = j['CashExDividendTradingDate']
            # Cash dividend payment date
            dividendReleaseDate = j['CashDividendPaymentDate']
date = j['date']
print(date)
year = date.split('-')
stockDividendJSON[year[0]] = {
'cashDividend':cashDividend,
'stockDividend':stockDividend,
'surplusDividend':statutorySurplusDividend,
'totalDividend':round(cashDividend+stockDividend+statutorySurplusDividend,15),
'dealDate':dividendDealDate,
'releaseDate':dividendReleaseDate,
}
        print('Processing (' + str(number) + ')')
        stockName = stockMapping[str(number)]
        insertData = {
            "collection": "dividend",
            "database": "history",
            "dataSource": "Stock",
            "document": {}
        }
        insertData['document']['number'] = number
        insertData['document']['name'] = stockName
        insertData['document']['data'] = stockDividendJSON
        headers = {
            'Content-Type': 'application/json',
            'api-key': mongoToken,
            'Access-Control-Request-Headers': '*'
        }
        resData = requests.post('https://data.mongodb-api.com/app/data-jbgqo/endpoint/data/beta/action/insertOne', data=json.dumps(insertData), headers=headers)
print(resData)
time.sleep(1)
print(json.dumps(stockDividendJSON))
return json.dumps(number)
@app.route('/number')
def number():
queryData = {
"collection":"company",
"database":"history",
"dataSource":"Stock",
"filter": {}
}
headers = {
'Content-Type':'application/json',
'api-key':mongoToken,
'Access-Control-Request-Headers':'*'
}
resData = requests.post('https://data.mongodb-api.com/app/data-jbgqo/endpoint/data/beta/action/find', data=json.dumps(queryData), headers=headers)
data = resData.json()
number = []
for i in data['documents']:
number.append(i['number'])
return json.dumps(number)
# Company code (公司代號)
# Company name (公司名稱)
# Dividend year (股利年度)
# Dividend amount (配息金額)
@app.route('/opendata')
def opendata():
# 2019
dividend = requests.get(
'https://openapi.twse.com.tw/v1/opendata/t187ap40_L')
dividendJSON = dividend.json()
for company in dividendJSON:
number = company['公司代號']
name = company['公司名稱']
incomeYear = company['股利年度']
money = float(company['股東配發內容-盈餘分配之現金股利(元/股)']) + float(company['股東配發內容-法定盈餘公積、資本公積發放之現金(元/股)']) + float(company['股東配發內容-盈餘轉增資配股(元/股)']) + float(company['股東配發內容-法定盈餘公積、資本公積轉增資配股(元/股)'])
print(number)
print(name)
print(incomeYear)
print(money)
        insertData = {
            "collection": "company",
            "database": "history",
            "dataSource": "Stock",
            "document": {}
        }
        insertData['document']['number'] = number
        insertData['document']['name'] = name
        insertData['document']['incomeYear'] = incomeYear
        insertData['document']['money'] = money
        headers = {
            'Content-Type': 'application/json',
            'api-key': mongoToken,
            'Access-Control-Request-Headers': '*'
        }
        resData = requests.post('https://data.mongodb-api.com/app/data-jbgqo/endpoint/data/beta/action/insertOne', data=json.dumps(insertData), headers=headers)
print(resData)
return json.dumps([])
@app.route('/')
def home():
    # Dividend yield (殖利率)
# 2019
# dividendYield_2019 = requests.get(
# 'https://www.twse.com.tw/exchangeReport/BWIBBU_d?response=json&date=20191227&selectType=ALL&_=1646738579135')
# hightDividendYield_2019 = []
# companyNum_2019 = []
# for dividendYield_data in json.loads(dividendYield_2019.text)['data']:
# if float(dividendYield_data[2]) > 6:
# hightDividendYield_2019.append(dividendYield_data)
# companyNum_2019.append(dividendYield_data[0])
# # 2020
# dividendYield_2020 = requests.get(
# 'https://www.twse.com.tw/exchangeReport/BWIBBU_d?response=json&date=20201201&selectType=ALL&_=1646738579135')
# hightDividendYield_2020 = []
# companyNum_2020 = []
# for dividendYield_data in json.loads(dividendYield_2020.text)['data']:
# if float(dividendYield_data[2]) > 6:
# hightDividendYield_2020.append(dividendYield_data)
# companyNum_2020.append(dividendYield_data[0])
# # 2021
# dividendYield_2021 = requests.get(
# 'https://www.twse.com.tw/exchangeReport/BWIBBU_d?response=json&date=20211201&selectType=ALL&_=1646738579135')
# hightDividendYield_2021 = []
# companyNum_2021 = []
# for dividendYield_data in json.loads(dividendYield_2021.text)['data']:
# if float(dividendYield_data[2]) > 6:
# hightDividendYield_2021.append(dividendYield_data)
# companyNum_2021.append(dividendYield_data[0])
# set0 = set(companyNum_2019)
# set1 = set(companyNum_2020)
# set2 = set(companyNum_2021)
# set3 = set0 & set1 & set2
# list3 = list(set3)
# url = "https://api.finmindtrade.com/api/v4/data"
# parameter = {
# "dataset": "TaiwanStockFinancialStatements",
# "data_id": "2330",
# "start_date": "2019-01-01",
# "token": finmindtradeToken, # 參考登入,獲取金鑰
# }
# data = requests.get(url, params=parameter)
# data = data.json()
# print(data['data'])
# totalEPS2019 = 0
# totalEPS2020 = 0
# totalEPS2021 = 0
# for i in data['data']:
# if "2019" in i['date']:
# if "EPS" in i['type']:
# totalEPS2019 = totalEPS2019 + i['value']
# if "2020" in i['date']:
# if "EPS" in i['type']:
# totalEPS2020 = totalEPS2020 + i['value']
# if "2021" in i['date']:
# if "EPS" in i['type']:
# totalEPS2021 = totalEPS2021 + i['value']
# print("totalEPS2019" + str(totalEPS2019))
# print("totalEPS2020" + str(totalEPS2020))
# print("totalEPS2021" + str(totalEPS2021))
# for companyNum in list3:
    # # Query the stock price
# stock = twstock.realtime.get(companyNum)
# print(stock.get('realtime').get('open'))
    # Query the stock price
stock = twstock.Stock('1101')
stock.fetch_from(2021, 1)
data = stock.price
date = stock.date
dataList = []
for i in range(len(date)):
timestamp = int(datetime.timestamp(date[i]))
print(int(timestamp))
listElement = []
listElement.append(timestamp * 1000)
# listElement.append(datetime.fromtimestamp(timestamp).strftime('%y-%m-%d'))
listElement.append(data[i])
dataList.append(listElement)
# print(len(list3))
return json.dumps(dataList)
if __name__ == '__main__':
app.run(host='0.0.0.0', port=5000, debug=True)
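# Example invocation (added note): with the server running locally, the routes
# above can be exercised with, e.g.:
#   curl http://localhost:5000/number
#   curl http://localhost:5000/monthRevenueAndEPS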
avg_line_length: 38.364548 | max_line_length: 193 | alphanum_fraction: 0.590969
hexsha: 4a1287a9a1f303ec5d31e95b0e60f623296e2e49 | size: 1281 | ext: py | lang: Python
path: portfolio/app/models.py | repo: tarasivashchuk/portfolio | head_hexsha: 74939e175c77e2e5b3e428eda6319fe016e0ddf4 | licenses: ["Unlicense", "MIT"]
stars: null | issues: null | forks: null
from flask_appbuilder import Model
from sqlalchemy import Column, Integer, String, ForeignKey
from sqlalchemy.orm import relationship
"""
You can use the extra Flask-AppBuilder fields and Mixin's
AuditMixin will add automatic timestamp of created and modified by who
"""
avg_line_length: 1.25835 | max_line_length: 70 | alphanum_fraction: 0.173302
hexsha: 4a12890ca44f8317931504dc2d628b22fb6e3fe8 | size: 723 | ext: py | lang: Python
path: code/python/echomesh/util/string/UniqueName_test.py | repo: rec/echomesh | head_hexsha: be668971a687b141660fd2e5635d2fd598992a01 | licenses: ["MIT"]
stars: 30 (2015-02-18T14:07:00.000Z .. 2021-12-11T15:19:01.000Z) | issues: 16 (2015-01-01T23:17:24.000Z .. 2015-04-18T23:49:27.000Z) | forks: 31 (2015-03-11T20:04:07.000Z .. 2020-11-02T13:56:59.000Z)
from __future__ import absolute_import, division, print_function, unicode_literals
from echomesh.util.string.UniqueName import unique_name
from echomesh.util.TestCase import TestCase
class UniqueNameTest(TestCase):
def test_empty(self):
self.assertEqual(unique_name('', []), '')
def test_missing(self):
self.assertEqual(unique_name('foo', ['bar', 'baz']), 'foo')
def test_dupe(self):
self.assertEqual(unique_name('bar', ['bar', 'baz']), 'bar-2')
def test_dupe_two(self):
self.assertEqual(unique_name('bar', ['bar', 'baz', 'bar-1',]), 'bar-2')
def test_dupe_three(self):
self.assertEqual(
unique_name('bar-2', ['bar', 'baz', 'bar-2',]), 'bar-3')
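# Behavior sketch inferred from the tests above (added; not the actual
# echomesh implementation): strip any trailing "-N" suffix, then append the
# smallest "-N" (N >= 2) that is not already taken.
import re
def _unique_name_sketch(name, existing):
    if name not in existing:
        return name
    base = re.sub(r'-\d+$', '', name)
    n = 2
    while '%s-%d' % (base, n) in existing:
        n += 1
    return '%s-%d' % (base, n)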
avg_line_length: 32.863636 | max_line_length: 82 | alphanum_fraction: 0.651452
hexsha: 4a1289405ad66c99eeff299030c8f6f8f72d16e2 | size: 3178 | ext: py | lang: Python
path: sensor.py | repo: jcsogo/daikin_madoka | head_hexsha: 47d012330c4a7bdc061db66b8399c85ae9c502b7 | licenses: ["MIT"]
stars: null | issues: null | forks: null
"""Support for Daikin AC sensors."""
import logging
from homeassistant.const import (
CONF_DEVICE_CLASS,
CONF_ICON,
CONF_NAME,
CONF_TYPE,
CONF_UNIT_OF_MEASUREMENT,
TEMP_CELSIUS,
DEVICE_CLASS_TEMPERATURE,
)
from homeassistant.helpers.entity import Entity
from . import DOMAIN
from .const import (
ATTR_INSIDE_TEMPERATURE,
ATTR_OUTSIDE_TEMPERATURE,
SENSOR_TYPE_TEMPERATURE,
CONTROLLERS,
)
from pymadoka import Controller
from pymadoka.feature import ConnectionException, ConnectionStatus
_LOGGER = logging.getLogger(__name__)
async def async_setup_platform(hass, config, async_add_entities, discovery_info=None):
"""Old way of setting up the Daikin sensors.
Can only be called when a user accidentally mentions the platform in their
config. But even in that case it would have been ignored.
"""
async def async_setup_entry(hass, entry, async_add_entities):
"""Set up Daikin climate based on config_entry."""
ent = []
for controller in hass.data[DOMAIN][CONTROLLERS].values():
ent.append(MadokaSensor(controller))
async_add_entities(ent)
class MadokaSensor(Entity):
"""Representation of a Sensor."""
def __init__(self, controller: Controller) -> None:
"""Initialize the sensor."""
self.controller = controller
self._sensor = {
CONF_TYPE: SENSOR_TYPE_TEMPERATURE,
CONF_UNIT_OF_MEASUREMENT: TEMP_CELSIUS,
}
@property
def available(self):
"""Return the availability."""
return self.controller.connection.connection_status is ConnectionStatus.CONNECTED
@property
def unique_id(self):
"""Return a unique ID."""
return self.controller.connection.address
@property
def name(self):
"""Return the name of the thermostat, if any."""
return self.controller.connection.name if self.controller.connection.name is not None else self.controller.connection.address
@property
def state(self):
"""Return the internal state of the sensor."""
if self.controller.temperatures.status is None:
return None
return self.controller.temperatures.status.indoor
@property
def device_class(self):
"""Return the class of this device."""
return DEVICE_CLASS_TEMPERATURE
@property
def icon(self):
"""Return the icon of this device."""
return None
@property
def unit_of_measurement(self):
"""Return the unit of measurement."""
return TEMP_CELSIUS
async def async_update(self):
"""Retrieve latest state."""
try:
_LOGGER.debug(f"Getting temperature of device {self.name}")
await self.controller.temperatures.query()
except ConnectionAbortedError:
pass
except ConnectionException:
pass
@property
async def async_device_info(self):
"""Return a device description for device registry."""
try:
return await self.controller.read_info()
except ConnectionAbortedError:
pass
except ConnectionException:
pass
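# Note (added): async_update and async_device_info deliberately swallow
# connection errors; `available` then reports False through the connection
# status, which marks the entity unavailable in Home Assistant.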
avg_line_length: 28.375 | max_line_length: 133 | alphanum_fraction: 0.672435
hexsha: 4a1289a21c581f45cc0e856ca1fffe610ea4ae19 | size: 10093 | ext: py | lang: Python
path: supports/pyload/src/pyload/webui/app/blueprints/json_blueprint.py | repo: LuckyNicky/pycrawler | head_hexsha: 4b3fe2f6e8e51f236d95a64a89a44199e4e97743 | licenses: ["Apache-2.0"]
stars: 1 (2020-04-02T17:03:39.000Z .. 2020-04-02T17:03:39.000Z) | issues: null | forks: null
# -*- coding: utf-8 -*-
# AUTHOR: vuolter
import os
import flask
from flask.json import jsonify
from ....core.utils import format
from ..helpers import login_required, render_template
bp = flask.Blueprint("json", __name__, url_prefix="/json")
def format_time(seconds):
seconds = int(seconds)
hours, seconds = divmod(seconds, 3600)
minutes, seconds = divmod(seconds, 60)
return f"{hours:02}:{minutes:02}:{seconds:02}"
@bp.route("/status", methods=["GET", "POST"], endpoint="status")
# @apiver_check
@login_required("LIST")
def status():
api = flask.current_app.config["PYLOAD_API"]
data = api.status_server()
return jsonify(data)
@bp.route("/links", methods=["GET", "POST"], endpoint="links")
# @apiver_check
@login_required("LIST")
def links():
api = flask.current_app.config["PYLOAD_API"]
try:
links = api.status_downloads()
ids = []
for link in links:
ids.append(link["fid"])
if link["status"] == 12:
formatted_eta = link["format_eta"]
formatted_speed = format.speed(link["speed"])
link["info"] = f"{formatted_eta} @ {formatted_speed}"
elif link["status"] == 5:
link["percent"] = 0
link["size"] = 0
link["bleft"] = 0
link["info"] = api._("waiting {}").format(link["format_wait"])
else:
link["info"] = ""
return jsonify(links=links, ids=ids)
except Exception as exc:
flask.abort(500)
return jsonify(False)
@bp.route("/packages", endpoint="packages")
# @apiver_check
@login_required("LIST")
def packages():
api = flask.current_app.config["PYLOAD_API"]
try:
data = api.get_queue()
for package in data:
package["links"] = []
for file in api.get_package_files(package["id"]):
package["links"].append(api.get_file_info(file))
return jsonify(data)
except Exception:
flask.abort(500)
return jsonify(False)
@bp.route("/package?=<int:id>", endpoint="package")
# @apiver_check
@login_required("LIST")
def package(id):
api = flask.current_app.config["PYLOAD_API"]
try:
data = api.get_package_data(id)
for pyfile in data["links"]:
if pyfile["status"] == 0:
pyfile["icon"] = "status_finished.png"
elif pyfile["status"] in (2, 3):
pyfile["icon"] = "status_queue.png"
elif pyfile["status"] in (9, 1):
pyfile["icon"] = "status_offline.png"
elif pyfile["status"] == 5:
pyfile["icon"] = "status_waiting.png"
elif pyfile["status"] == 8:
pyfile["icon"] = "status_failed.png"
elif pyfile["status"] == 4:
pyfile["icon"] = "arrow_right.png"
elif pyfile["status"] in (11, 13):
pyfile["icon"] = "status_proc.png"
else:
pyfile["icon"] = "status_downloading.png"
tmp = data["links"]
tmp.sort(key=lambda entry: entry["order"])
data["links"] = tmp
return jsonify(data)
except Exception:
flask.abort(500)
return jsonify(False)
# NOTE: 'ids' is a string
@bp.route("/package_order?=<ids>", endpoint="package_order")
# @apiver_check
@login_required("ADD")
def package_order(ids):
api = flask.current_app.config["PYLOAD_API"]
try:
pid, pos = ids.split(",")
api.order_package(int(pid), int(pos))
return jsonify(response="success")
except Exception:
flask.abort(500)
return jsonify(False)
@bp.route("/abort_link?=<int:id>", endpoint="abort_link")
# @apiver_check
@login_required("DELETE")
def abort_link(id):
api = flask.current_app.config["PYLOAD_API"]
try:
api.stop_downloads([id])
return jsonify(response="success")
except Exception:
flask.abort(500)
return jsonify(False)
# NOTE: 'ids' is a string
@bp.route("/link_order?=<ids>", endpoint="link_order")
# @apiver_check
@login_required("ADD")
def link_order(ids):
api = flask.current_app.config["PYLOAD_API"]
try:
pid, pos = ids.split(",")
api.order_file(int(pid), int(pos))
return jsonify(response="success")
except Exception:
flask.abort(500)
return jsonify(False)
@bp.route("/add_package", methods=["POST"], endpoint="add_package")
# @apiver_check
@login_required("ADD")
def add_package():
api = flask.current_app.config["PYLOAD_API"]
name = flask.request.form.get("add_name", "New Package").strip()
queue = int(flask.request.form["add_dest"])
links = flask.request.form["add_links"].split("\n")
pw = flask.request.form.get("add_password", "").strip("\n\r")
try:
f = flask.request.files["add_file"]
if not name or name == "New Package":
name = f.name
fpath = os.path.join(
api.get_config_value("general", "storage_folder"), "tmp_" + f.filename
)
f.save(fpath)
links.insert(0, fpath)
except Exception:
pass
urls = [url for url in links if url.strip()]
pack = api.add_package(name, urls, queue)
if pw:
data = {"password": pw}
api.set_package_data(pack, data)
return jsonify(True)
@bp.route("/move_package?=<int:dest>,<int:id>", endpoint="move_package")
# @apiver_check
@login_required("MODIFY")
def move_package(dest, id):
api = flask.current_app.config["PYLOAD_API"]
try:
api.move_package(dest, id)
return jsonify(response="success")
except Exception:
flask.abort(500)
return jsonify(False)
@bp.route("/edit_package", methods=["POST"], endpoint="edit_package")
# @apiver_check
@login_required("MODIFY")
def edit_package():
api = flask.current_app.config["PYLOAD_API"]
try:
id = int(flask.request.form["pack_id"])
data = {
"name": flask.request.form["pack_name"],
"folder": flask.request.form["pack_folder"],
"password": flask.request.form["pack_pws"],
}
api.set_package_data(id, data)
return jsonify(response="success")
except Exception:
flask.abort(500)
return jsonify(False)
@bp.route("/set_captcha", methods=["GET", "POST"], endpoint="set_captcha")
# @apiver_check
@login_required("ADD")
def set_captcha():
api = flask.current_app.config["PYLOAD_API"]
if flask.request.method == "POST":
tid = int(flask.request.form["cap_id"])
result = flask.request.form["cap_result"]
api.set_captcha_result(tid, result)
task = api.get_captcha_task()
if task.tid >= 0:
data = {
"captcha": True,
"id": task.tid,
"params": task.data,
"result_type": task.result_type,
}
else:
data = {"captcha": False}
return jsonify(data)
@bp.route("/load_config?=<category>,<section>", endpoint="load_config")
# @apiver_check
@login_required("SETTINGS")
def load_config(category, section):
conf = None
api = flask.current_app.config["PYLOAD_API"]
if category == "general":
conf = api.get_config_dict()
elif category == "plugin":
conf = api.get_plugin_config_dict()
for key, option in conf[section].items():
if key in ("desc", "outline"):
continue
if ";" in option["type"]:
option["list"] = option["type"].split(";")
return render_template("settings_item.html", skey=section, section=conf[section])
@bp.route("/save_config?=<category>", methods=["POST"], endpoint="save_config")
# @apiver_check
@login_required("SETTINGS")
def save_config(category):
api = flask.current_app.config["PYLOAD_API"]
for key, value in flask.request.form.items():
try:
section, option = key.split("|")
except Exception:
continue
if category == "general":
category = "core"
api.set_config_value(section, option, value, category)
return jsonify(True)
@bp.route("/add_account", methods=["POST"], endpoint="add_account")
# @apiver_check
@login_required("ACCOUNTS")
# @fresh_login_required
def add_account():
api = flask.current_app.config["PYLOAD_API"]
login = flask.request.form["account_login"]
password = flask.request.form["account_password"]
type = flask.request.form["account_type"]
api.update_account(type, login, password)
return jsonify(True)
@bp.route("/update_accounts", methods=["POST"], endpoint="update_accounts")
# @apiver_check
@login_required("ACCOUNTS")
# @fresh_login_required
def update_accounts():
    deleted = []  #: don't update deleted accounts or they will be created again
api = flask.current_app.config["PYLOAD_API"]
for name, value in flask.request.form.items():
value = value.strip()
if not value:
continue
tmp, user = name.split(";")
plugin, action = tmp.split("|")
if (plugin, user) in deleted:
continue
if action == "password":
api.update_account(plugin, user, value)
elif action == "time" and "-" in value:
api.update_account(plugin, user, options={"time": [value]})
elif action == "limitdl" and value.isdigit():
api.update_account(plugin, user, options={"limit_dl": [value]})
elif action == "delete":
deleted.append((plugin, user))
api.remove_account(plugin, user)
return jsonify(True)
@bp.route("/change_password", methods=["POST"], endpoint="change_password")
# @apiver_check
# @fresh_login_required
@login_required("ACCOUNTS")
def change_password():
api = flask.current_app.config["PYLOAD_API"]
user = flask.request.form["user_login"]
oldpw = flask.request.form["login_current_password"]
newpw = flask.request.form["login_new_password"]
done = api.change_password(user, oldpw, newpw)
if not done:
return "Wrong password", 500
return jsonify(True)
avg_line_length: 27.576503 | max_line_length: 85 | alphanum_fraction: 0.608243
hexsha: 4a128a0d56eb266874b0bd72356ea4b906e79155 | size: 1000 | ext: py | lang: Python
path: test/test_modify_group.py | repo: galarina1880/python_training | head_hexsha: f8fefdd484f4409cb6f43be1d791b50306e5bb2d | licenses: ["Apache-2.0"]
stars: null | issues: null | forks: null
# -*- coding: utf-8 -*-
from model.group import Group
from random import randrange
def test_modify_group_name(app):
if app.group.count() == 0:
app.group.create(Group(name="test", header="headr", footer="footr"))
old_groups = app.group.get_group_list()
index = randrange(len(old_groups))
group = Group(name="New name")
group.id = old_groups[index].id
app.group.modify_group_by_index(index, group)
assert len(old_groups) == app.group.count()
new_groups = app.group.get_group_list()
old_groups[index] = group
assert sorted(old_groups, key=Group.id_or_max) == sorted(new_groups, key=Group.id_or_max)
# def test_modify_group_header(app):
# if app.group.count() == 0:
# app.group.create(Group(name="test", header="headr", footer="footr"))
# old_groups = app.group.get_group_list()
# app.group.modify_first_group(Group(header="New header"))
# new_groups = app.group.get_group_list()
# assert len(old_groups) == len(new_groups)
avg_line_length: 37.037037 | max_line_length: 93 | alphanum_fraction: 0.689
hexsha: 4a128a8d46ad0708a1ef68416c98d1202833b405 | size: 10749 | ext: py | lang: Python
path: pyscf/x2c/sfx2c1e.py | repo: QuESt-Calculator/pyscf | head_hexsha: 0ed03633b699505c7278f1eb501342667d0aa910 | licenses: ["Apache-2.0"]
stars: 501 (2018-12-06T23:48:17.000Z .. 2022-03-31T11:53:18.000Z) | issues: 710 (2018-11-26T22:04:52.000Z .. 2022-03-30T03:53:12.000Z) | forks: 273 (2018-11-26T10:10:24.000Z .. 2022-03-30T12:25:28.000Z)
#!/usr/bin/env python
# Copyright 2014-2019 The PySCF Developers. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
'''
1-electron Spin-free X2C approximation
'''
from functools import reduce
import numpy
import scipy.linalg
from pyscf import lib
from pyscf import gto
from pyscf.lib import logger
from pyscf.scf import hf
from pyscf.scf import ghf
from pyscf.x2c import x2c
from pyscf.data import nist
def sfx2c1e(mf):
'''Spin-free X2C.
For the given SCF object, it updates the hcore constructor. All integrals
are computed in the real spherical GTO basis.
Args:
mf : an SCF object
Returns:
An SCF object
Examples:
>>> mol = gto.M(atom='H 0 0 0; F 0 0 1', basis='ccpvdz', verbose=0)
>>> mf = scf.RHF(mol).sfx2c1e()
>>> mf.scf()
>>> import pyscf.x2c.sfx2c1e
>>> mol.symmetry = 1
>>> mol.build(0, 0)
>>> mf = pyscf.x2c.sfx2c1e.sfx2c1e(scf.UHF(mol))
>>> mf.scf()
'''
if isinstance(mf, x2c._X2C_SCF):
if mf.with_x2c is None:
return mf.__class__(mf)
else:
return mf
assert(isinstance(mf, hf.SCF))
mf_class = mf.__class__
if mf_class.__doc__ is None:
doc = ''
else:
doc = mf_class.__doc__
class SFX2C1E_SCF(x2c._X2C_SCF, mf_class):
__doc__ = doc + '''
Attributes for spin-free X2C:
with_x2c : X2C object
'''
def __init__(self, mf):
self.__dict__.update(mf.__dict__)
self.with_x2c = SpinFreeX2C(mf.mol)
self._keys = self._keys.union(['with_x2c'])
def get_hcore(self, mol=None):
if self.with_x2c:
hcore = self.with_x2c.get_hcore(mol)
if isinstance(self, ghf.GHF):
hcore = scipy.linalg.block_diag(hcore, hcore)
return hcore
else:
return mf_class.get_hcore(self, mol)
def dump_flags(self, verbose=None):
mf_class.dump_flags(self, verbose)
if self.with_x2c:
self.with_x2c.dump_flags(verbose)
return self
def reset(self, mol):
self.with_x2c.reset(mol)
return mf_class.reset(self, mol)
def dip_moment(self, mol=None, dm=None, unit='Debye', verbose=logger.NOTE,
picture_change=True, **kwargs):
r''' Dipole moment calculation with picture change correction
Args:
mol: an instance of :class:`Mole`
dm : a 2D ndarrays density matrices
Kwarg:
picture_chang (bool) : Whether to compute the dipole moment with
picture change correction.
Return:
A list: the dipole moment on x, y and z component
'''
if mol is None: mol = self.mol
            if dm is None: dm = self.make_rdm1()
log = logger.new_logger(mol, verbose)
if 'unit_symbol' in kwargs: # pragma: no cover
log.warn('Kwarg "unit_symbol" was deprecated. It was replaced by kwarg '
'unit since PySCF-1.5.')
unit = kwargs['unit_symbol']
if not (isinstance(dm, numpy.ndarray) and dm.ndim == 2):
                # UHF density matrices
dm = dm[0] + dm[1]
with mol.with_common_orig((0,0,0)):
if picture_change:
xmol = self.with_x2c.get_xmol()[0]
nao = xmol.nao
prp = xmol.intor_symmetric('int1e_sprsp').reshape(3,4,nao,nao)[:,0]
ao_dip = self.with_x2c.picture_change(('int1e_r', prp))
else:
ao_dip = mol.intor_symmetric('int1e_r')
el_dip = numpy.einsum('xij,ji->x', ao_dip, dm).real
charges = mol.atom_charges()
coords = mol.atom_coords()
nucl_dip = numpy.einsum('i,ix->x', charges, coords)
mol_dip = nucl_dip - el_dip
if unit.upper() == 'DEBYE':
mol_dip *= nist.AU2DEBYE
log.note('Dipole moment(X, Y, Z, Debye): %8.5f, %8.5f, %8.5f', *mol_dip)
else:
log.note('Dipole moment(X, Y, Z, A.U.): %8.5f, %8.5f, %8.5f', *mol_dip)
return mol_dip
return SFX2C1E_SCF(mf)
sfx2c = sfx2c1e
class SpinFreeX2C(x2c.X2C):
'''1-component X2c (spin-free part only)
'''
def get_hcore(self, mol=None):
        '''1-component X2c Foldy-Wouthuysen (FW) Hamiltonian (spin-free part only)
'''
if mol is None: mol = self.mol
if mol.has_ecp():
raise NotImplementedError
xmol, contr_coeff = self.get_xmol(mol)
c = lib.param.LIGHT_SPEED
assert('1E' in self.approx.upper())
t = xmol.intor_symmetric('int1e_kin')
v = xmol.intor_symmetric('int1e_nuc')
s = xmol.intor_symmetric('int1e_ovlp')
w = xmol.intor_symmetric('int1e_pnucp')
if 'get_xmat' in self.__dict__:
# If the get_xmat method is overwritten by user, build the X
# matrix with the external get_xmat method
x = self.get_xmat(xmol)
h1 = x2c._get_hcore_fw(t, v, w, s, x, c)
elif 'ATOM' in self.approx.upper():
atom_slices = xmol.offset_nr_by_atom()
nao = xmol.nao_nr()
x = numpy.zeros((nao,nao))
for ia in range(xmol.natm):
ish0, ish1, p0, p1 = atom_slices[ia]
shls_slice = (ish0, ish1, ish0, ish1)
t1 = xmol.intor('int1e_kin', shls_slice=shls_slice)
s1 = xmol.intor('int1e_ovlp', shls_slice=shls_slice)
with xmol.with_rinv_at_nucleus(ia):
z = -xmol.atom_charge(ia)
v1 = z * xmol.intor('int1e_rinv', shls_slice=shls_slice)
w1 = z * xmol.intor('int1e_prinvp', shls_slice=shls_slice)
x[p0:p1,p0:p1] = x2c._x2c1e_xmatrix(t1, v1, w1, s1, c)
h1 = x2c._get_hcore_fw(t, v, w, s, x, c)
else:
h1 = x2c._x2c1e_get_hcore(t, v, w, s, c)
if self.basis is not None:
s22 = xmol.intor_symmetric('int1e_ovlp')
s21 = gto.intor_cross('int1e_ovlp', xmol, mol)
c = lib.cho_solve(s22, s21)
h1 = reduce(numpy.dot, (c.T, h1, c))
if self.xuncontract and contr_coeff is not None:
h1 = reduce(numpy.dot, (contr_coeff.T, h1, contr_coeff))
return h1
def picture_change(self, even_operator=(None, None), odd_operator=None):
'''Picture change for even_operator + odd_operator
even_operator has two terms at diagonal blocks
[ v 0 ]
[ 0 w ]
odd_operator has the term at off-diagonal blocks
[ 0 p ]
[ p^T 0 ]
v, w, and p can be strings (integral name) or matrices.
'''
mol = self.mol
xmol, c = self.get_xmol(mol)
pc_mat = self._picture_change(xmol, even_operator, odd_operator)
if self.basis is not None:
s22 = xmol.intor_symmetric('int1e_ovlp')
s21 = gto.mole.intor_cross('int1e_ovlp', xmol, mol)
c = lib.cho_solve(s22, s21)
elif self.xuncontract:
pass
else:
return pc_mat
if pc_mat.ndim == 2:
return lib.einsum('pi,pq,qj->ij', c, pc_mat, c)
else:
return lib.einsum('pi,xpq,qj->xij', c, pc_mat, c)
def get_xmat(self, mol=None):
if mol is None:
xmol = self.get_xmol(mol)[0]
else:
xmol = mol
c = lib.param.LIGHT_SPEED
assert('1E' in self.approx.upper())
if 'ATOM' in self.approx.upper():
atom_slices = xmol.offset_nr_by_atom()
nao = xmol.nao_nr()
x = numpy.zeros((nao,nao))
for ia in range(xmol.natm):
ish0, ish1, p0, p1 = atom_slices[ia]
shls_slice = (ish0, ish1, ish0, ish1)
t1 = xmol.intor('int1e_kin', shls_slice=shls_slice)
s1 = xmol.intor('int1e_ovlp', shls_slice=shls_slice)
with xmol.with_rinv_at_nucleus(ia):
z = -xmol.atom_charge(ia)
v1 = z * xmol.intor('int1e_rinv', shls_slice=shls_slice)
w1 = z * xmol.intor('int1e_prinvp', shls_slice=shls_slice)
x[p0:p1,p0:p1] = x2c._x2c1e_xmatrix(t1, v1, w1, s1, c)
else:
t = xmol.intor_symmetric('int1e_kin')
v = xmol.intor_symmetric('int1e_nuc')
s = xmol.intor_symmetric('int1e_ovlp')
w = xmol.intor_symmetric('int1e_pnucp')
x = x2c._x2c1e_xmatrix(t, v, w, s, c)
return x
def _get_rmat(self, x=None):
'''The matrix (in AO basis) that changes metric from NESC metric to NR metric'''
xmol = self.get_xmol()[0]
if x is None:
x = self.get_xmat(xmol)
c = lib.param.LIGHT_SPEED
s = xmol.intor_symmetric('int1e_ovlp')
t = xmol.intor_symmetric('int1e_kin')
s1 = s + reduce(numpy.dot, (x.conj().T, t, x)) * (.5/c**2)
return x2c._get_r(s, s1)
def hcore_deriv_generator(self, mol=None, deriv=1):
from pyscf.x2c import sfx2c1e_grad
from pyscf.x2c import sfx2c1e_hess
if deriv == 1:
return sfx2c1e_grad.hcore_grad_generator(self, mol)
elif deriv == 2:
return sfx2c1e_hess.hcore_hess_generator(self, mol)
else:
raise NotImplementedError
if __name__ == '__main__':
mol = gto.Mole()
mol.build(
verbose = 0,
atom = [["O" , (0. , 0. , 0.)],
[1 , (0. , -0.757 , 0.587)],
[1 , (0. , 0.757 , 0.587)] ],
basis = 'ccpvdz-dk',
)
method = hf.RHF(mol)
enr = method.kernel()
print('E(NR) = %.12g' % enr)
method = sfx2c1e(hf.RHF(mol))
esfx2c = method.kernel()
print('E(SFX2C1E) = %.12g' % esfx2c)
method.with_x2c.basis = 'unc-ccpvqz-dk'
print('E(SFX2C1E) = %.12g' % method.kernel())
method.with_x2c.approx = 'atom1e'
print('E(SFX2C1E) = %.12g' % method.kernel())
avg_line_length: 34.562701 | max_line_length: 88 | alphanum_fraction: 0.557726
hexsha: 4a128b8f72ed2a6d3767b2d8de330e4c7647352c | size: 462 | ext: py | lang: Python
path: omdbapi-python-tool/MISC Python scripts/produce_failed_imdb_id_list.py | repo: know-airl/Vote-Goat-Data | head_hexsha: d523d45107b5994b9135577db5e0269eb6d4c613 | licenses: ["MIT"]
stars: 1 (2020-06-12T16:58:19.000Z .. 2020-06-12T16:58:19.000Z) | issues: 2 (2018-07-04T11:19:13.000Z .. 2018-07-12T11:46:16.000Z) | forks: null
import json
import csv
with open('imdb_movies.json') as json_data:
data = json.load(json_data)
count = 0
imdb_ids = []
for entry in data:
if entry['Response'] == "False":
imdb_ids.append(entry['imdb_id'])
with open("failed_imdb_ids.csv", "w") as csv_file:
check = 0
for id in imdb_ids:
if (check == 0):
csv_file.write('id' + '\n')
else:
csv_file.write(id + '\n')
check += 1
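# Alternative sketch (added): the `csv` module imported above goes unused in
# the original script; an equivalent write using it would be:
#
# with open("failed_imdb_ids.csv", "w", newline="") as f:
#     writer = csv.writer(f)
#     writer.writerow(["id"])
#     writer.writerows([i] for i in imdb_ids)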
avg_line_length: 21 | max_line_length: 50 | alphanum_fraction: 0.554113
hexsha: 4a128bcb5318e9b4e561aad6934933b8089d933b | size: 2399 | ext: py | lang: Python
path: A_Web_Crawler_With_asyncio_Coroutines/simple_coroutine.py | repo: czs0x55aa/500lines_homework | head_hexsha: a67a144181afadae387e2889f5ae29565e76cdad | licenses: ["MIT"]
stars: null | issues: null | forks: null
# coding=utf8
import socket
from selectors2 import DefaultSelector, EVENT_READ, EVENT_WRITE
import time
url = 'xkcd.com'
selector = DefaultSelector()
stopped = False
res = []
def log_time(func):
def wrapper(*args, **kw):
start_time = time.time()
func(*args, **kw)
print(time.time() - start_time)
return wrapper
class Future(object):
def __init__(self):
self.result = None
self._callbacks = []
def add_done_callback(self, fn):
self._callbacks.append(fn)
def set_result(self, result):
self.result = result
for fn in self._callbacks:
fn(self)
class Task(object):
def __init__(self, coro):
self.coro = coro
f = Future()
# f.set_result(None)
self.step(f)
def step(self, future):
try:
next_future = self.coro.send(future.result)
except StopIteration:
return
next_future.add_done_callback(self.step)
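# Explanatory note (added): Task.step() sends the previous Future's result
# into the coroutine; the coroutine yields its next Future, and step()
# registers itself as that Future's done-callback. The initial step(Future())
# primes the generator with result=None, which is exactly what a fresh
# generator requires from its first send().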
class Crawler(object):
def __init__(self, url):
self.url = url
self.response = b''
def fetch(self):
sock = socket.socket()
sock.setblocking(False)
try:
sock.connect((self.url, 80))
except IOError:
pass
f = Future()
def on_connected():
f.set_result(None)
selector.register(sock.fileno(), EVENT_WRITE, on_connected)
yield f
selector.unregister(sock.fileno())
request = 'GET {} HTTP/1.0\r\nHost: xkcd.com\r\n\r\n'.format(self.url)
sock.send(request.encode('ascii'))
while True:
f = Future()
def on_readable():
f.set_result(sock.recv(4096))
selector.register(sock.fileno(), EVENT_READ, on_readable)
chunk = yield f
selector.unregister(sock.fileno())
if chunk:
self.response += chunk
else:
global res, stopped
res.append(self.response)
if len(res) >= 10:
stopped = True
break
@log_time
def loop():
for i in range(10):
crawler = Crawler(url)
Task(crawler.fetch())
while not stopped:
events = selector.select()
for event_key, event_mask in events:
callback = event_key.data
callback()
loop()
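# Timing note (added): log_time wraps loop(), so the printed figure is the
# wall-clock time for all ten fetches together; because the selector
# multiplexes the sockets, this is closer to the latency of one request than
# to ten sequential ones.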
avg_line_length: 24.232323 | max_line_length: 78 | alphanum_fraction: 0.55273
hexsha: 4a128bd1ad0c8239dd660d3f266bb58695c775cc | size: 1519 | ext: py | lang: Python
path: test-servo.py | repo: jweisz/tjlamp-client | head_hexsha: 2d5b39b88d12bb93f6c2abecc212a901b38e997e | licenses: ["MIT"]
stars: null | issues: null | forks: null
#!/usr/bin/env python3
import argparse
import wiringpi
import asyncio
from Servo import ServoPigpio
def armBack(pin):
wiringpi.pwmWrite(pin, 60)
def armUp(pin):
wiringpi.pwmWrite(pin, 140)
def armDown(pin):
wiringpi.pwmWrite(pin, 240)
async def main(pin):
# use 'GPIO naming'
wiringpi.wiringPiSetupGpio()
# set #13 to be a PWM output
wiringpi.pinMode(pin, wiringpi.GPIO.PWM_OUTPUT)
    # set the PWM mode to milliseconds style
#wiringpi.pwmSetMode(wiringpi.GPIO.PWM_MODE_MS)
# divide down clock
#wiringpi.pwmSetClock(192)
#wiringpi.pwmSetRange(2000)
try:
while True:
armBack(pin)
await asyncio.sleep(1)
armUp(pin)
await asyncio.sleep(1)
armDown(pin)
await asyncio.sleep(1)
except KeyboardInterrupt:
armBack(pin)
async def main2(pin):
arm = ServoPigpio(pin, enable=True)
try:
while True:
arm.wave(1)
await asyncio.sleep(2)
except KeyboardInterrupt:
arm.armUp()
# Main program logic follows:
if __name__ == '__main__':
# must be run as root
#if not os.geteuid() == 0:
# sys.exit('This script must be run as root in order to control the servo.')
parser = argparse.ArgumentParser()
parser.add_argument('--pin', type=int, help='BCM PIN number the servo is attached to', default=13)
args = parser.parse_args()
asyncio.get_event_loop().run_until_complete(main2(args.pin))
avg_line_length: 23.734375 | max_line_length: 102 | alphanum_fraction: 0.637261