hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | 
qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
a6e01799839e93c7540def880406e4bd14034a91 | 156 | py | Python | jennie/jennie_tools/__init__.py | Ask-Jennie/py-jennie | d33c7a9ad1d93db770e835fad31a179fd7cdf758 | [
"MIT"
] | null | null | null | jennie/jennie_tools/__init__.py | Ask-Jennie/py-jennie | d33c7a9ad1d93db770e835fad31a179fd7cdf758 | [
"MIT"
] | null | null | null | jennie/jennie_tools/__init__.py | Ask-Jennie/py-jennie | d33c7a9ad1d93db770e835fad31a179fd7cdf758 | [
"MIT"
] | null | null | null | import os
import requests
from jennie.jennie_tools.checks import *
from jennie.jennie_tools.filehandler import *
from jennie.jennie_tools.userinput import * | 31.2 | 45 | 0.846154 | 22 | 156 | 5.863636 | 0.409091 | 0.232558 | 0.372093 | 0.488372 | 0.418605 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.096154 | 156 | 5 | 46 | 31.2 | 0.914894 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 8 |
a6e178f0aa266f3e765567b30f8b6019bad6a987 | 104 | py | Python | gfcaccount/context_processors.py | gluwer/przepisymm | dc83fdc4068fb0102a87081bd519807fd66397c2 | [
"BSD-3-Clause"
] | null | null | null | gfcaccount/context_processors.py | gluwer/przepisymm | dc83fdc4068fb0102a87081bd519807fd66397c2 | [
"BSD-3-Clause"
] | null | null | null | gfcaccount/context_processors.py | gluwer/przepisymm | dc83fdc4068fb0102a87081bd519807fd66397c2 | [
"BSD-3-Clause"
] | null | null | null | from kay.conf import settings
def gfc_settings(request):
return {'gfc_site_id': settings.GFC_SITE_ID} | 26 | 46 | 0.798077 | 17 | 104 | 4.588235 | 0.647059 | 0.179487 | 0.230769 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.105769 | 104 | 4 | 46 | 26 | 0.83871 | 0 | 0 | 0 | 0 | 0 | 0.104762 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0.333333 | 0.333333 | 1 | 0 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 7 |
47085a6aeec6f13a7b897d09092c5d0546fb8447 | 56,722 | py | Python | locan/simulation/simulate_locdata.py | super-resolution/Locan | 94ed7759f7d7ceddee7c7feaabff80010cfedf30 | [
"BSD-3-Clause"
] | 8 | 2021-11-25T20:05:49.000Z | 2022-03-27T17:45:00.000Z | locan/simulation/simulate_locdata.py | super-resolution/Locan | 94ed7759f7d7ceddee7c7feaabff80010cfedf30 | [
"BSD-3-Clause"
] | 4 | 2021-12-15T22:39:20.000Z | 2022-03-11T17:35:34.000Z | locan/simulation/simulate_locdata.py | super-resolution/Locan | 94ed7759f7d7ceddee7c7feaabff80010cfedf30 | [
"BSD-3-Clause"
] | 1 | 2022-03-22T19:53:13.000Z | 2022-03-22T19:53:13.000Z | """
Simulate localization data.
This module provides functions to simulate localization data and return LocData objects.
Localizations are often distributed either by a spatial process of complete-spatial randomness or following a
Neyman-Scott process [1]_. For a Neyman-Scott process, parent events (representing single emitters) yield a random number
of offspring events (representing localizations due to repeated blinking). Related spatial point processes include
Matérn and Thomas processes.
Functions named make_* provide point data arrays. Functions named simulate_* provide
LocData objects.
Parts of this code are adapted from scikit-learn/sklearn/datasets/_samples_generator.py.
(BSD 3-Clause License, Copyright (c) 2007-2020 The scikit-learn developers.)
References
----------
.. [1] Neyman, J. & Scott, E. L.,
A Theory of the Spatial Distribution of Galaxies.
Astrophysical Journal 1952, vol. 116, p.144.
"""
import sys
import time
from itertools import chain
import numpy as np
import pandas as pd
from locan.data import metadata_pb2
from locan.data.locdata import LocData
from locan.data.region import (
AxisOrientedCuboid,
AxisOrientedHypercuboid,
Ellipse,
EmptyRegion,
Interval,
Rectangle,
Region,
)
from locan.data.region_utils import expand_region
__all__ = [
"make_uniform",
"make_Poisson",
"make_cluster",
"make_NeymanScott",
"make_Matern",
"make_Thomas",
"make_dstorm",
"simulate_uniform",
"simulate_Poisson",
"simulate_cluster",
"simulate_NeymanScott",
"simulate_Matern",
"simulate_Thomas",
"simulate_dstorm",
"simulate_tracks",
"resample",
"simulate_frame_numbers",
]
def make_uniform(n_samples, region=(0, 1), seed=None):
"""
Provide points that are distributed by a uniform (complete spatial randomness) point process
within the boundaries given by `region`.
Parameters
----------
n_samples : int
The total number of localizations of the point process
region : Region, array-like
The region (or support) for all features.
If array-like it must provide upper and lower bounds for each feature.
seed : None, int, array_like[ints], numpy.random.SeedSequence, numpy.random.BitGenerator, numpy.random.Generator
Random number generation seed
Returns
-------
numpy.ndarray of shape (n_samples, n_features)
The generated samples.
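    Examples
    --------
    A minimal usage sketch with illustrative values, assuming a 2D region given as per-feature bounds:
    >>> samples = make_uniform(n_samples=100, region=((0, 1), (0, 1)), seed=1)
    >>> samples.shape
    (100, 2)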
"""
rng = np.random.default_rng(seed)
if not isinstance(region, Region):
region = Region.from_intervals(region)
if isinstance(region, EmptyRegion):
samples = np.array([])
elif isinstance(
region, (Interval, Rectangle, AxisOrientedCuboid, AxisOrientedHypercuboid)
):
samples = rng.uniform(
region.bounds[: region.dimension],
region.bounds[region.dimension :],
size=(n_samples, region.dimension),
)
elif isinstance(region, Ellipse) and region.width == region.height:
radius = region.width / 2
# angular and radial coordinates of Poisson points
theta = rng.random(n_samples) * 2 * np.pi
rho = radius * np.sqrt(rng.random(n_samples))
# Convert from polar to Cartesian coordinates
xx = rho * np.cos(theta)
yy = rho * np.sin(theta)
samples = np.array((xx, yy)).T
else:
sampling_ratio = region.region_measure / region.bounding_box.region_measure
n_samples_updated = int(n_samples / sampling_ratio * 2)
samples = []
n_remaining = n_samples
while n_remaining > 0:
new_samples = rng.random(size=(n_samples_updated, region.dimension))
new_samples = region.extent * new_samples + region.bounding_box.corner
new_samples = new_samples[region.contains(new_samples)]
samples.append(new_samples)
n_remaining = n_remaining - len(new_samples)
samples = np.concatenate(samples)
samples = samples[0:n_samples]
return samples
def simulate_uniform(n_samples, region=(0, 1), seed=None):
"""
Provide points that are distributed by a uniform (complete spatial randomness) point process within the boundaries given by `region`.
Parameters
----------
n_samples : int
The total number of localizations of the point process
region : Region, array-like
The region (or support) for each feature.
If array-like it must provide upper and lower bounds for each feature.
seed : None, int, array_like[ints], numpy.random.SeedSequence, numpy.random.BitGenerator, numpy.random.Generator
random number generation seed
Returns
-------
LocData
The generated samples.
"""
parameter = locals()
samples = make_uniform(n_samples=n_samples, region=region, seed=seed)
region_ = region if isinstance(region, Region) else Region.from_intervals(region)
locdata = LocData.from_coordinates(coordinates=samples)
locdata.dimension = region_.dimension
locdata.region = region_
# metadata
locdata.meta.source = metadata_pb2.SIMULATION
del locdata.meta.history[:]
locdata.meta.history.add(name=make_uniform.__name__, parameter=str(parameter))
return locdata
def make_Poisson(intensity, region=(0, 1), seed=None):
"""
Provide points that are distributed by a uniform Poisson point process within the boundaries given by `region`.
Parameters
----------
intensity : int, float
The intensity (points per unit region measure) of the point process
region : Region, array-like
The region (or support) for all features.
If array-like it must provide upper and lower bounds for each feature.
seed : None, int, array_like[ints], numpy.random.SeedSequence, numpy.random.BitGenerator, numpy.random.Generator
random number generation seed
Returns
-------
numpy.ndarray of shape (n_samples, n_features)
The generated samples.
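    Examples
    --------
    A minimal usage sketch with illustrative values; the number of returned points is
    Poisson-distributed with mean intensity * region measure (here 0.5 * 100 = 50):
    >>> samples = make_Poisson(intensity=0.5, region=((0, 10), (0, 10)), seed=1)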
"""
rng = np.random.default_rng(seed)
if not isinstance(region, Region):
region = Region.from_intervals(region)
n_samples = rng.poisson(lam=intensity * region.region_measure)
if isinstance(region, EmptyRegion):
samples = np.array([])
elif n_samples == 0:
samples = np.array([])
for i in range(region.dimension):
samples = samples[..., np.newaxis]
elif isinstance(
region, (Interval, Rectangle, AxisOrientedCuboid, AxisOrientedHypercuboid)
):
samples = rng.uniform(
region.bounds[: region.dimension],
region.bounds[region.dimension :],
size=(n_samples, region.dimension),
)
elif isinstance(region, Ellipse) and region.width == region.height:
radius = region.width / 2
# angular and radial coordinates of Poisson points
theta = rng.random(n_samples) * 2 * np.pi
rho = radius * np.sqrt(rng.random(n_samples))
# Convert from polar to Cartesian coordinates
xx = rho * np.cos(theta)
yy = rho * np.sin(theta)
samples = np.array((xx, yy)).T
else:
sampling_ratio = region.region_measure / region.bounding_box.region_measure
n_samples_updated = int(n_samples / sampling_ratio * 2)
samples = []
n_remaining = n_samples
while n_remaining > 0:
new_samples = rng.random(size=(n_samples_updated, region.dimension))
new_samples = region.extent * new_samples + region.bounding_box.corner
new_samples = new_samples[region.contains(new_samples)]
samples.append(new_samples)
n_remaining = n_remaining - len(new_samples)
samples = np.concatenate(samples)
samples = samples[0:n_samples]
return samples
def simulate_Poisson(intensity, region=(0, 1), seed=None):
"""
Provide points that are distributed by a uniform Poisson point process within the boundaries given by `region`.
Parameters
----------
intensity : int, float
The intensity (points per unit region measure) of the point process
region : Region, array-like
The region (or support) for each feature.
If array-like it must provide upper and lower bounds for each feature.
seed : None, int, array_like[ints], numpy.random.SeedSequence, numpy.random.BitGenerator, numpy.random.Generator
random number generation seed
Returns
-------
LocData
The generated samples.
"""
parameter = locals()
samples = make_Poisson(intensity=intensity, region=region, seed=seed)
region_ = region if isinstance(region, Region) else Region.from_intervals(region)
locdata = LocData.from_coordinates(coordinates=samples)
locdata.dimension = region_.dimension
locdata.region = region_
# metadata
locdata.meta.source = metadata_pb2.SIMULATION
del locdata.meta.history[:]
locdata.meta.history.add(name=make_Poisson.__name__, parameter=str(parameter))
return locdata
def make_cluster(
centers=3,
region=(0, 1.0),
expansion_distance=0,
offspring=None,
clip=True,
shuffle=True,
seed=None,
):
"""
Parent positions are taken from `centers`
or are uniformly distributed with exactly `centers` parent events
within the boundaries given by `region` expanded by the expansion_distance.
Each parent position is then replaced by offspring points as passed or generated by a given function.
Offspring from parent events that are located outside the region are included.
Parameters
----------
centers : int, array-like
The number of parents or coordinates for parent events, where each parent represents a cluster center.
region : Region, array-like
The region (or support) for all features.
If array-like it must provide upper and lower bounds for each feature.
expansion_distance : float
The distance by which region is expanded on all boundaries.
offspring : array-like, callable, None
Points or function for point process to provide cluster.
Callable must take single parent point as parameter and return an iterable.
If array-like it must have the same length as parent events.
clip : bool
If True the result will be clipped to 'region'. If False the extended region will be kept.
shuffle : boolean
Shuffle the samples.
seed : None, int, array_like[ints], numpy.random.SeedSequence, numpy.random.BitGenerator, numpy.random.Generator
random number generation seed
Returns
-------
tuple of numpy.ndarray of shape (n_samples, n_features) and region
The generated samples, labels, parent_samples, region
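    Examples
    --------
    A minimal sketch using the array-like form of `offspring`, replacing each of three
    parent points by two offset points (the offsets are illustrative values):
    >>> offspring = [[[-0.01, 0.0], [0.01, 0.0]]] * 3
    >>> samples, labels, parents, region_ = make_cluster(
    ...     centers=3, region=((0, 1), (0, 1)), offspring=offspring, seed=1
    ... )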
"""
rng = np.random.default_rng(seed)
if isinstance(region, EmptyRegion):
samples, labels, parent_samples, region = (
np.array([]),
np.array([]),
np.array([]),
EmptyRegion(),
)
return samples, labels, parent_samples, region
elif not isinstance(region, Region):
region = Region.from_intervals(region)
expanded_region = expand_region(region, expansion_distance)
if isinstance(centers, (int, np.integer)):
n_centers = centers
parent_samples = make_uniform(
n_samples=n_centers, region=expanded_region, seed=rng
)
else: # if centers is array
parent_samples = np.array(centers)
centers_shape = np.shape(parent_samples)
centers_dimension = 1 if len(centers_shape) == 1 else centers_shape[1]
if region.dimension != centers_dimension:
raise ValueError(
f"Region dimensions must be the same as the dimensions for each center. "
f"Got region dimension: {region.dimension} and "
f"center dimensions: {centers_dimension} instead."
)
n_centers = centers_shape[0]
# replace parents by offspring samples
if offspring is None:
samples = parent_samples
labels = np.arange(0, len(parent_samples))
elif callable(offspring):
try:
offspring_samples = offspring(parent_samples)
labels = [[i] * len(os) for i, os in enumerate(offspring_samples)]
except TypeError:
offspring_samples = []
labels = []
for i, parent in enumerate(parent_samples):
offspring_samples_ = offspring(parent)
offspring_samples.append(offspring_samples_)
labels.append([i] * len(offspring_samples_))
samples = np.array(list(chain(*offspring_samples)))
labels = np.array(list(chain(*labels)))
elif len(offspring) >= len(parent_samples):
offspring_samples = []
labels = []
for i, (os, parent) in enumerate(zip(offspring[:n_centers], parent_samples)):
if len(os) > 0:
offspring_samples_ = np.asarray(os) + parent
offspring_samples.append(offspring_samples_)
labels.append([i] * len(offspring_samples_))
samples = np.array(list(chain(*offspring_samples)))
labels = np.array(list(chain(*labels)))
else:
raise TypeError(
f"offspring must be callable or array-like with length >= than n_centers {n_centers}."
)
if (
samples.ndim == 1
): # this is to convert 1-dimensional arrays into arrays with shape (n_samples, 1).
samples.shape = (len(samples), 1)
if clip is True:
if len(samples) != 0:
inside_indices = region.contains(samples)
samples = samples[inside_indices]
labels = labels[inside_indices]
region_ = region
else:
region_ = expanded_region
if shuffle:
shuffled_indices = rng.permutation(len(samples))
samples = samples[shuffled_indices]
labels = labels[shuffled_indices]
if (
len(samples) == 0
): # this is to convert empty arrays into arrays with shape (n_samples, n_features).
samples = np.array([])
samples = samples[:, np.newaxis]
return samples, labels, parent_samples, region_
def simulate_cluster(
centers=3,
region=(0, 1.0),
expansion_distance=0,
offspring=None,
clip=True,
shuffle=True,
seed=None,
):
"""
Generate clustered point data.
Parent positions are taken from `centers`
or are uniformly distributed with exactly `centers` parent events
within the boundaries given by `region` expanded by the expansion_distance.
Each parent position is then replaced by offspring points as passed or generated by a given function.
Offspring from parent events that are located outside the region are included.
Parameters
----------
centers : int, array-like
The number of parents or coordinates for parent events, where each parent represents a cluster center.
region : Region, array-like
The region (or support) for each feature.
If array-like it must provide upper and lower bounds for each feature.
expansion_distance : float
The distance by which region is expanded on all boundaries.
offspring : array-like, callable, None
Points or function for point process to provide cluster.
Callable must take single parent point as parameter and return an iterable.
If array-like it must have the same length as parent events.
clip : bool
If True the result will be clipped to 'region'. If False the extended region will be kept.
shuffle : boolean
Shuffle the samples.
seed : None, int, array_like[ints], numpy.random.SeedSequence, numpy.random.BitGenerator, numpy.random.Generator
random number generation seed
Returns
-------
LocData
The generated samples.
"""
parameter = locals()
samples, labels, _, region = make_cluster(
centers, region, expansion_distance, offspring, clip, shuffle, seed
)
region_ = region if isinstance(region, Region) else Region.from_intervals(region)
locdata = LocData.from_coordinates(coordinates=samples)
locdata.dimension = region_.dimension
locdata.region = region_
locdata.dataframe = locdata.dataframe.assign(cluster_label=labels)
# metadata
locdata.meta.source = metadata_pb2.SIMULATION
del locdata.meta.history[:]
locdata.meta.history.add(name=make_cluster.__name__, parameter=str(parameter))
return locdata
def make_NeymanScott(
parent_intensity=100,
region=(0, 1.0),
expansion_distance=0,
offspring=None,
clip=True,
shuffle=True,
seed=None,
):
"""
Generate clustered point data following a Neyman-Scott random point process.
Parent positions are distributed according to a homogeneous Poisson process with `parent_intensity`
within the boundaries given by `region` expanded by the expansion_distance.
Each parent position is then replaced by offspring points as passed or generated by a given function.
Offspring from parent events that are located outside the region are included.
Parameters
----------
parent_intensity : int, float
The intensity (points per unit region measure) of the Poisson point process for parent events.
region : Region, array-like
The region (or support) for all features.
If array-like it must provide upper and lower bounds for each feature.
expansion_distance : float
The distance by which region is expanded on all boundaries.
offspring : array-like, callable, None
Points or function for point process to provide offspring points.
Callable must take single parent point as parameter.
If array-like it must have enough elements to fit the randomly generated number of parent events.
clip : bool
If True the result will be clipped to 'region'. If False the extended region will be kept.
shuffle : boolean
Shuffle the samples.
seed : None, int, array_like[ints], numpy.random.SeedSequence, numpy.random.BitGenerator, numpy.random.Generator
random number generation seed
Returns
-------
tuple of numpy.ndarray of shape (n_samples, n_features)
The generated samples, labels, parent_samples
"""
rng = np.random.default_rng(seed)
if isinstance(region, EmptyRegion):
samples, labels, parent_samples, region = (
np.array([]),
np.array([]),
np.array([]),
EmptyRegion(),
)
return samples, labels, parent_samples, region
elif not isinstance(region, Region):
region = Region.from_intervals(region)
expanded_region = expand_region(region, expansion_distance)
parent_samples = make_Poisson(
intensity=parent_intensity, region=expanded_region, seed=rng
)
# replace parents by offspring samples
if offspring is None:
samples = parent_samples
labels = np.arange(0, len(parent_samples))
elif callable(offspring):
try:
offspring_samples = offspring(parent_samples)
labels = [[i] * len(os) for i, os in enumerate(offspring_samples)]
except TypeError:
offspring_samples = []
labels = []
for i, parent in enumerate(parent_samples):
offspring_samples_ = offspring(parent)
offspring_samples.append(offspring_samples_)
labels.append([i] * len(offspring_samples_))
samples = np.array(list(chain(*offspring_samples)))
labels = np.array(list(chain(*labels)))
elif len(offspring) >= len(parent_samples):
offspring_samples = []
labels = []
if isinstance(offspring, np.ndarray):
offspring_samples = (
np.asarray(offspring[: len(parent_samples)]) + parent_samples
)
labels = [[i] * len(os) for i, os in enumerate(offspring_samples)]
else:
for i, (os, parent) in enumerate(
zip(offspring[: len(parent_samples)], parent_samples)
):
if len(os) > 0:
offspring_samples_ = np.asarray(os) + parent
offspring_samples.append(offspring_samples_)
labels.append([i] * len(offspring_samples_))
samples = np.array(list(chain(*offspring_samples)))
labels = np.array(list(chain(*labels)))
else:
raise TypeError(
f"offspring must be callable or array-like with "
f"length >= n_centers {len(parent_samples)}."
)
if (
samples.ndim == 1
): # this is to convert 1-dimensional arrays into arrays with shape (n_samples, 1).
samples.shape = (len(samples), 1)
if clip is True:
if len(samples) != 0:
inside_indices = region.contains(samples)
samples = samples[inside_indices]
labels = labels[inside_indices]
region_ = region
else:
region_ = expanded_region
if shuffle:
shuffled_indices = rng.permutation(len(samples))
samples = samples[shuffled_indices]
labels = labels[shuffled_indices]
if (
len(samples) == 0
): # this is to convert empty arrays into arrays with shape (n_samples, n_features).
samples = np.array([])
samples = samples[:, np.newaxis]
return samples, labels, parent_samples, region_
def simulate_NeymanScott(
parent_intensity=100,
region=(0, 1.0),
expansion_distance=0,
offspring=None,
clip=True,
shuffle=True,
seed=None,
):
"""
Generate clustered point data following a Neyman-Scott random point process.
Parent positions are distributed according to a homogeneous Poisson process with `parent_intensity`
within the boundaries given by `region` expanded by the expansion_distance.
Each parent position is then replaced by offspring points as passed or generated by a given function.
Offspring from parent events that are located outside the region are included.
Parameters
----------
parent_intensity : int, float
The intensity (points per unit region measure) of the Poisson point process for parent events.
region : Region, array-like
The region (or support) for each feature.
If array-like it must provide upper and lower bounds for each feature.
expansion_distance : float
The distance by which region is expanded on all boundaries.
offspring : array-like, callable, None
Points or function for point process to provide offspring points.
Callable must take single parent point as parameter.
If array-like it must have enough elements to fit the randomly generated number of parent events.
clip : bool
If True the result will be clipped to 'region'. If False the extended region will be kept.
shuffle : boolean
Shuffle the samples.
seed : None, int, array_like[ints], numpy.random.SeedSequence, numpy.random.BitGenerator, numpy.random.Generator
random number generation seed
Returns
-------
LocData
The generated samples.
"""
parameter = locals()
samples, labels, _, region = make_NeymanScott(
parent_intensity, region, expansion_distance, offspring, clip, shuffle, seed
)
region_ = region if isinstance(region, Region) else Region.from_intervals(region)
locdata = LocData.from_coordinates(coordinates=samples)
locdata.dimension = region_.dimension
locdata.region = region_
locdata.dataframe = locdata.dataframe.assign(cluster_label=labels)
# metadata
locdata.meta.source = metadata_pb2.SIMULATION
del locdata.meta.history[:]
locdata.meta.history.add(name=make_NeymanScott.__name__, parameter=str(parameter))
return locdata
def make_Matern(
parent_intensity=1,
region=(0, 1.0),
cluster_mu=1,
radius=1.0,
clip=True,
shuffle=True,
seed=None,
):
"""
Generate clustered point data following a Matern cluster random point process.
Parent positions are distributed according to a homogeneous Poisson process with `parent_intensity`
within the boundaries given by `region` expanded by the maximum radius.
Each parent position is then replaced by spots of size `radius` with Poisson distributed points inside.
Offspring from parent events that are located outside the region are included.
Parameters
----------
parent_intensity : int, float
The intensity (points per unit region measure) of the Poisson point process for parent events.
region : Region, array-like
The region (or support) for all features.
If array-like it must provide upper and lower bounds for each feature.
cluster_mu : int, float
The mean number of points of the Poisson point process for offspring (cluster_mu) events.
radius : float or sequence of floats
The radius for the spots. If tuple, the number of elements must be larger than the expected number of parents.
clip : bool
If True the result will be clipped to 'region'. If False the extended region will be kept.
shuffle : boolean
Shuffle the samples.
seed : None, int, array_like[ints], numpy.random.SeedSequence, numpy.random.BitGenerator, numpy.random.Generator
random number generation seed
Returns
-------
tuple of numpy.ndarray of shape (n_samples, n_features)
The generated samples, labels, parent_samples
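    Examples
    --------
    A minimal usage sketch with illustrative values (on the order of ten parents, each
    with on average 5 points inside a disk of radius 0.05):
    >>> samples, labels, parents, region_ = make_Matern(
    ...     parent_intensity=10, region=((0, 1), (0, 1)), cluster_mu=5, radius=0.05, seed=1
    ... )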
"""
rng = np.random.default_rng(seed)
if isinstance(region, EmptyRegion):
samples, labels, parent_samples, region = (
np.array([]),
np.array([]),
np.array([]),
EmptyRegion(),
)
return samples, labels, parent_samples, region
elif not isinstance(region, Region):
region = Region.from_intervals(region)
expansion_distance = np.max(radius)
expanded_region = expand_region(region, expansion_distance)
parent_samples = make_Poisson(
intensity=parent_intensity, region=expanded_region, seed=rng
)
# radius: if radius is given as list, it must be consistent with the n_parent_samples
if hasattr(radius, "__len__"):
if len(radius) < len(parent_samples):
raise ValueError(
f"Length of `radius` {len(radius)} is less than "
f"the generated n_parent_samples {len(parent_samples)}."
)
else:
radii = radius[: len(parent_samples)]
else: # if isinstance(radius, float):
radii = np.full(len(parent_samples), radius)
# replace parents by offspring samples
samples = []
labels = []
for i, (parent, radius_) in enumerate(zip(parent_samples, radii)):
if region.dimension == 1:
offspring_region = Interval(-radius_, radius_)
offspring_intensity = cluster_mu / offspring_region.region_measure
offspring_samples = make_Poisson(
intensity=offspring_intensity, region=offspring_region, seed=rng
)
elif region.dimension == 2:
offspring_region = Ellipse((0, 0), 2 * radius_, 2 * radius_, 0)
offspring_intensity = cluster_mu / offspring_region.region_measure
offspring_samples = make_Poisson(
intensity=offspring_intensity, region=offspring_region, seed=rng
)
elif region.dimension == 3:
raise NotImplementedError
else:
raise ValueError("region dimension must be 1, 2, or 3.")
if len(offspring_samples) != 0:
offspring_samples = offspring_samples + parent
samples.append(offspring_samples)
labels += [i] * len(offspring_samples)
samples = np.array(list(chain(*samples)))
labels = np.array(labels)
if clip is True:
if len(samples) != 0:
inside_indices = region.contains(samples)
samples = samples[inside_indices]
labels = labels[inside_indices]
region_ = region
else:
region_ = expanded_region
if shuffle:
shuffled_indices = rng.permutation(len(samples))
samples = samples[shuffled_indices]
labels = labels[shuffled_indices]
if (
len(samples) == 0
): # this is to convert empty arrays into arrays with shape (n_samples, n_features).
samples = np.array([])
samples = samples[:, np.newaxis]
return samples, labels, parent_samples, region_
def simulate_Matern(
parent_intensity=1,
region=(0, 1.0),
cluster_mu=1,
radius=1.0,
clip=True,
shuffle=True,
seed=None,
):
"""
Generate clustered point data following a Matern cluster random point process.
Parent positions are distributed according to a homogeneous Poisson process with `parent_intensity`
within the boundaries given by `region` expanded by the maximum radius.
Each parent position is then replaced by spots of size `radius` with Poisson distributed points inside.
Offspring from parent events that are located outside the region are included.
Parameters
----------
parent_intensity : int, float
The intensity (points per unit region measure) of the Poisson point process for parent events.
region : Region, array-like
The region (or support) for each feature.
If array-like it must provide upper and lower bounds for each feature.
cluster_mu : int, float
The mean number of points of the Poisson point process for offspring (cluster_mu) events.
radius : float or sequence of floats
The radius for the spots. If tuple, the number of elements must be larger than the expected number of parents.
clip : bool
If True the result will be clipped to 'region'. If False the extended region will be kept.
shuffle : boolean
Shuffle the samples.
seed : None, int, array_like[ints], numpy.random.SeedSequence, numpy.random.BitGenerator, numpy.random.Generator
random number generation seed
Returns
-------
LocData
The generated samples.
"""
parameter = locals()
samples, labels, _, region = make_Matern(
parent_intensity, region, cluster_mu, radius, clip, shuffle, seed
)
region_ = region if isinstance(region, Region) else Region.from_intervals(region)
locdata = LocData.from_coordinates(coordinates=samples)
locdata.dimension = region_.dimension
locdata.region = region_
locdata.dataframe = locdata.dataframe.assign(cluster_label=labels)
# metadata
locdata.meta.source = metadata_pb2.SIMULATION
del locdata.meta.history[:]
locdata.meta.history.add(name=make_Matern.__name__, parameter=str(parameter))
return locdata
def make_Thomas(
parent_intensity=1,
region=(0, 1.0),
expansion_factor=6,
cluster_mu=1,
cluster_std=1.0,
clip=True,
shuffle=True,
seed=None,
):
"""
Generate clustered point data following a Thomas random point process.
Parent positions are distributed according to a homogeneous Poisson process with `parent_intensity`
within the boundaries given by `region` expanded by an expansion distance that equals
expansion_factor * max(cluster_std).
Each parent position is then replaced by n offspring points
where n is Poisson-distributed with mean number `cluster_mu`
and point coordinates are normal-distributed around the parent point with standard deviation `cluster_std`.
Offspring from parent events that are located outside the region are included.
Parameters
----------
parent_intensity : int, float
The intensity (points per unit region measure) of the Poisson point process for parent events.
region : Region, array-like
The region (or support) for all features.
If array-like it must provide upper and lower bounds for each feature.
expansion_factor : int, float
Factor by which the cluster_std is multiplied to set the region expansion distance.
cluster_mu : int, float, sequence of floats
The mean number of points for normal-distributed offspring points.
cluster_std : float, sequence of floats, sequence of sequence of floats
The standard deviation for normal-distributed offspring points.
clip : bool
If True the result will be clipped to 'region'. If False the extended region will be kept.
shuffle : boolean
Shuffle the samples.
seed : None, int, array_like[ints], numpy.random.SeedSequence, numpy.random.BitGenerator, numpy.random.Generator
random number generation seed
Returns
-------
tuple of numpy.ndarray of shape (n_samples, n_features)
The generated samples, labels, parent_samples
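    Examples
    --------
    A minimal usage sketch with illustrative values (Gaussian clusters of about 20 points
    with standard deviation 0.01):
    >>> samples, labels, parents, region_ = make_Thomas(
    ...     parent_intensity=10, region=((0, 1), (0, 1)), cluster_mu=20, cluster_std=0.01, seed=1
    ... )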
"""
rng = np.random.default_rng(seed)
if not isinstance(region, Region):
region = Region.from_intervals(region)
if (
parent_intensity == 0
or (np.size(cluster_mu) == 1 and cluster_mu == 0)
or isinstance(region, EmptyRegion)
):
samples, labels, parent_samples, region = (
np.array([]),
np.array([]),
np.array([]),
region,
)
if region.dimension and region.dimension > 0:
samples = samples[:, np.newaxis]
return samples, labels, parent_samples, region
# expand region
expansion_distance = expansion_factor * np.max(cluster_std)
expanded_region = expand_region(region, expansion_distance)
parent_samples = make_Poisson(
intensity=parent_intensity, region=expanded_region, seed=rng
)
n_cluster = len(parent_samples)
# check cluster_std consistent with n_centers or n_features
if len(np.shape(cluster_std)) == 0:
cluster_std_ = np.full(
shape=(n_cluster, region.dimension), fill_value=cluster_std
)
elif len(np.shape(cluster_std)) == 1: # iterate over cluster_std for each feature
if region.dimension == 1 or len(cluster_std) != region.dimension:
raise TypeError(
f"The shape of cluster_std {np.shape(cluster_std)} is incompatible "
f"with n_features {region.dimension}."
)
else:
cluster_std_ = np.empty(shape=(n_cluster, region.dimension))
for i, element in enumerate(cluster_std):
cluster_std_[:, i] = np.full((n_cluster,), element)
elif len(np.shape(cluster_std)) == 2: # iterate over cluster_std for each center
if np.shape(cluster_std) < (n_cluster, region.dimension):
raise TypeError(
f"The shape of cluster_std {np.shape(cluster_std)} is incompatible with "
f"n_cluster {n_cluster} or n_features {region.dimension}."
)
else:
cluster_std_ = cluster_std
else:
raise TypeError(
f"The shape of cluster_std {np.shape(cluster_std)} is incompatible."
)
# replace parents by normal-distributed offspring samples
try:
n_offspring_list = rng.poisson(lam=cluster_mu[:n_cluster], size=n_cluster)
except ValueError as e:
e.args += (f"Too few offspring events for n_cluster: {n_cluster}",)
raise
except (TypeError, IndexError):
n_offspring_list = rng.poisson(lam=cluster_mu, size=n_cluster)
samples = []
labels = []
for i, (parent, std, n_offspring) in enumerate(
zip(parent_samples, cluster_std_, n_offspring_list)
):
offspring_samples = rng.normal(
loc=parent, scale=std, size=(n_offspring, region.dimension)
)
samples.append(offspring_samples)
labels += [i] * len(offspring_samples)
samples = np.concatenate(samples) if len(samples) != 0 else np.array([])
labels = np.array(labels)
if clip is True:
if len(samples) != 0:
inside_indices = region.contains(samples)
samples = samples[inside_indices]
labels = labels[inside_indices]
region_ = region
else:
region_ = expanded_region
if shuffle:
shuffled_indices = rng.permutation(len(samples))
samples = samples[shuffled_indices]
labels = labels[shuffled_indices]
if (
len(samples) == 0
): # this is to convert empty arrays into arrays with shape (n_samples, n_features).
samples = np.array([])
samples = samples[:, np.newaxis]
return samples, labels, parent_samples, region_
def simulate_Thomas(
parent_intensity=1,
region=(0, 1.0),
expansion_factor=6,
cluster_mu=1,
cluster_std=1.0,
clip=True,
shuffle=True,
seed=None,
):
"""
Generate clustered point data following a Thomas random point process.
Parent positions are distributed according to a homogeneous Poisson process with `parent_intensity`
within the boundaries given by `region` expanded by an expansion distance that equals
expansion_factor * max(cluster_std).
Each parent position is then replaced by n offspring points
where n is Poisson-distributed with mean number `cluster_mu`
and point coordinates are normal-distributed around the parent point with standard deviation `cluster_std`.
Offspring from parent events that are located outside the region are included.
Parameters
----------
parent_intensity : int, float
The intensity (points per unit region measure) of the Poisson point process for parent events.
region : Region, array-like
The region (or support) for each feature.
If array-like it must provide upper and lower bounds for each feature.
expansion_factor : int, float
Factor by which the cluster_std is multiplied to set the region expansion distance.
cluster_mu : int, float, sequence of floats
The mean number of points for normal-distributed offspring points.
cluster_std : float, sequence of floats, sequence of sequence of floats
The standard deviation for normal-distributed offspring points.
clip : bool
If True the result will be clipped to 'region'. If False the extended region will be kept.
shuffle : boolean
Shuffle the samples.
seed : None, int, array_like[ints], numpy.random.SeedSequence, numpy.random.BitGenerator, numpy.random.Generator
random number generation seed
Returns
-------
LocData
The generated samples.
"""
parameter = locals()
samples, labels, _, region = make_Thomas(
parent_intensity,
region,
expansion_factor,
cluster_mu,
cluster_std,
clip,
shuffle,
seed,
)
region_ = region if isinstance(region, Region) else Region.from_intervals(region)
locdata = LocData.from_coordinates(coordinates=samples)
locdata.dimension = region_.dimension
locdata.region = region_
locdata.dataframe = locdata.dataframe.assign(cluster_label=labels)
# metadata
locdata.meta.source = metadata_pb2.SIMULATION
del locdata.meta.history[:]
locdata.meta.history.add(name=make_Thomas.__name__, parameter=str(parameter))
return locdata
def make_dstorm(
parent_intensity=1,
region=(0, 1.0),
expansion_factor=6,
cluster_mu=1,
min_points=0,
cluster_std=1.0,
clip=True,
shuffle=True,
seed=None,
):
"""
Generate clustered point data following a Thomas-like random point process.
Parent positions are distributed according to a homogeneous Poisson process with `parent_intensity`
within the boundaries given by `region` expanded by an expansion distance that equals
expansion_factor * max(cluster_std).
Each parent position is then replaced by n offspring points
where n is geometrically-distributed with mean number `cluster_mu`
and point coordinates are normal-distributed around the parent point with standard deviation `cluster_std`.
Offspring from parent events that are located outside the region are included.
Parameters
----------
parent_intensity : int, float
The intensity (points per unit region measure) of the Poisson point process for parent events.
region : Region, array-like
The region (or support) for all features.
If array-like it must provide upper and lower bounds for each feature.
expansion_factor : int, float
Factor by which the cluster_std is multiplied to set the region expansion distance.
cluster_mu : int, float, sequence of floats
The mean number of points for normal-distributed offspring points.
min_points : int
The minimum number of points per cluster.
cluster_std : float, sequence of floats, sequence of sequence of floats
The standard deviation for normal-distributed offspring points.
clip : bool
If True the result will be clipped to 'region'. If False the extended region will be kept.
shuffle : boolean
Shuffle the samples.
seed : None, int, array_like[ints], numpy.random.SeedSequence, numpy.random.BitGenerator, numpy.random.Generator
random number generation seed
Returns
-------
tuple of numpy.ndarray of shape (n_samples, n_features)
The generated samples, labels, parent_samples
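    Examples
    --------
    A minimal usage sketch with illustrative values (geometrically distributed cluster sizes
    with mean 10 and at least 2 points per cluster):
    >>> samples, labels, parents, region_ = make_dstorm(
    ...     parent_intensity=10, region=((0, 1), (0, 1)), cluster_mu=10, min_points=2,
    ...     cluster_std=0.01, seed=1
    ... )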
"""
rng = np.random.default_rng(seed)
if not isinstance(region, Region):
region = Region.from_intervals(region)
if (
parent_intensity == 0
or (np.size(cluster_mu) == 1 and cluster_mu == 0)
or isinstance(region, EmptyRegion)
):
samples, labels, parent_samples, region = (
np.array([]),
np.array([]),
np.array([]),
region,
)
if region.dimension and region.dimension > 0:
samples = samples[:, np.newaxis]
return samples, labels, parent_samples, region
# expand region
expansion_distance = expansion_factor * np.max(cluster_std)
expanded_region = expand_region(region, expansion_distance)
parent_samples = make_Poisson(
intensity=parent_intensity, region=expanded_region, seed=rng
)
n_cluster = len(parent_samples)
# check cluster_std consistent with n_centers or n_features
if len(np.shape(cluster_std)) == 0:
cluster_std_ = np.full(
shape=(n_cluster, region.dimension), fill_value=cluster_std
)
elif len(np.shape(cluster_std)) == 1: # iterate over cluster_std for each feature
if region.dimension == 1 or len(cluster_std) != region.dimension:
raise TypeError(
f"The shape of cluster_std {np.shape(cluster_std)} is incompatible "
f"with n_features {region.dimension}."
)
else:
cluster_std_ = np.empty(shape=(n_cluster, region.dimension))
for i, element in enumerate(cluster_std):
cluster_std_[:, i] = np.full((n_cluster,), element)
elif len(np.shape(cluster_std)) == 2: # iterate over cluster_std for each center
if np.shape(cluster_std) < (n_cluster, region.dimension):
raise TypeError(
f"The shape of cluster_std {np.shape(cluster_std)} is incompatible with "
f"n_cluster {n_cluster} or n_features {region.dimension}."
)
else:
cluster_std_ = cluster_std
else:
raise TypeError(
f"The shape of cluster_std {np.shape(cluster_std)} is incompatible."
)
# replace parents by normal-distributed offspring samples
try:
p_values = [1 / (mu + 1 - min_points) for mu in cluster_mu[:n_cluster]]
# p for a geometric distribution sampling points from min_points to inf is 1 / (mean + 1 - min_points)
n_offspring_list = rng.geometric(p=p_values, size=n_cluster) - 1 + min_points
# rng.geometric samples points from 1 to inf.
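        # Worked example with illustrative numbers: for cluster_mu = 10 and min_points = 2,
        # p = 1 / (10 + 1 - 2) = 1/9, rng.geometric(1/9) has mean 9, and
        # n_offspring = 9 - 1 + 2 = 10 on average, i.e. the requested cluster_mu.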
except ValueError as e:
e.args += (f"Too few offspring events for n_cluster: {n_cluster}",)
raise
except (TypeError, IndexError):
n_offspring_list = (
rng.geometric(p=1 / (cluster_mu + 1 - min_points), size=n_cluster)
- 1
+ min_points
)
samples = []
labels = []
for i, (parent, std, n_offspring) in enumerate(
zip(parent_samples, cluster_std_, n_offspring_list)
):
offspring_samples = rng.normal(
loc=parent, scale=std, size=(n_offspring, region.dimension)
)
samples.append(offspring_samples)
labels += [i] * len(offspring_samples)
samples = np.concatenate(samples) if len(samples) != 0 else np.array([])
labels = np.array(labels)
if clip is True:
if len(samples) != 0:
inside_indices = region.contains(samples)
samples = samples[inside_indices]
labels = labels[inside_indices]
region_ = region
else:
region_ = expanded_region
if shuffle:
shuffled_indices = rng.permutation(len(samples))
samples = samples[shuffled_indices]
labels = labels[shuffled_indices]
if (
len(samples) == 0
): # this is to convert empty arrays into arrays with shape (n_samples, n_features).
samples = np.array([])
samples = samples[:, np.newaxis]
return samples, labels, parent_samples, region_
def simulate_dstorm(
parent_intensity=1,
region=(0, 1.0),
expansion_factor=6,
cluster_mu=1,
min_points=0,
cluster_std=1.0,
clip=True,
shuffle=True,
seed=None,
):
"""
Generate clustered point data following a Thomas-like random point process.
Parent positions are distributed according to a homogeneous Poisson process with `parent_intensity`
within the boundaries given by `region` expanded by an expansion distance that equals
expansion_factor * max(cluster_std).
Each parent position is then replaced by n offspring points
where n is geometrically-distributed with mean number `cluster_mu`
and point coordinates are normal-distributed around the parent point with standard deviation `cluster_std`.
Offspring from parent events that are located outside the region are included.
Parameters
----------
parent_intensity : int, float
The intensity (points per unit region measure) of the Poisson point process for parent events.
region : Region, array-like
The region (or support) for each feature.
If array-like it must provide upper and lower bounds for each feature.
expansion_factor : int, float
Factor by which the cluster_std is multiplied to set the region expansion distance.
cluster_mu : int, float, sequence of floats
The mean number of points for normal-distributed offspring points.
min_points : int
The minimum number of points per cluster.
cluster_std : float, sequence of floats, sequence of sequence of floats
The standard deviation for normal-distributed offspring points.
clip : bool
If True the result will be clipped to 'region'. If False the extended region will be kept.
shuffle : boolean
Shuffle the samples.
seed : None, int, array_like[ints], numpy.random.SeedSequence, numpy.random.BitGenerator, numpy.random.Generator
random number generation seed
Returns
-------
LocData
The generated samples.
"""
parameter = locals()
samples, labels, _, region = make_dstorm(
parent_intensity,
region,
expansion_factor,
cluster_mu,
min_points,
cluster_std,
clip,
shuffle,
seed,
)
region_ = region if isinstance(region, Region) else Region.from_intervals(region)
locdata = LocData.from_coordinates(coordinates=samples)
locdata.dimension = region_.dimension
locdata.region = region_
locdata.dataframe = locdata.dataframe.assign(cluster_label=labels)
# metadata
locdata.meta.source = metadata_pb2.SIMULATION
del locdata.meta.history[:]
locdata.meta.history.add(name=make_dstorm.__name__, parameter=str(parameter))
return locdata
def _random_walk(
n_walks=1, n_steps=10, dimensions=2, diffusion_constant=1, time_step=10, seed=None
):
"""
Random walk simulation
Parameters
----------
n_walks: int
Number of walks
n_steps : int
Number of time steps (i.e. frames)
dimensions : int
spatial dimensions to simulate
diffusion_constant : int or float
Diffusion constant in units of length squared per second.
time_step : float
Time per frame (or simulation step) in seconds.
seed : None, int, array_like[ints], numpy.random.SeedSequence, numpy.random.BitGenerator, numpy.random.Generator
random number generation seed
Returns
-------
tuple of arrays
(times, positions), where times has shape (n_steps,) and positions has shape (n_walks, n_steps, dimensions)
"""
rng = np.random.default_rng(seed)
# equally spaced time steps
times = np.arange(n_steps) * time_step
# random step sizes according to the diffusion constant
random_numbers = rng.integers(
0, 2, size=(n_walks, n_steps, dimensions)
) # np.random.randint(0, 2, size=(n_walks, n_steps, dimensions))
step_size = np.sqrt(2 * dimensions * diffusion_constant * time_step)
steps = np.where(random_numbers == 0, -step_size, +step_size)
# walker positions
positions = np.cumsum(steps, axis=1)
return times, positions
def simulate_tracks(
n_walks=1,
n_steps=10,
ranges=((0, 10000), (0, 10000)),
diffusion_constant=1,
time_step=10,
seed=None,
):
"""
Provide a dataset of localizations representing random walks with starting points being spatially-distributed
on a rectangular shape or cubic volume by complete spatial randomness.
Parameters
----------
n_walks: int
Number of walks
n_steps : int
Number of time steps (i.e. frames)
ranges : tuple of tuples of two ints
the range for valid x[, y, z]-coordinates
diffusion_constant : int or float
Diffusion constant in units of length squared per second.
time_step : float
Time per frame (or simulation step) in seconds.
seed : None, int, array_like[ints], numpy.random.SeedSequence, numpy.random.BitGenerator, numpy.random.Generator
random number generation seed
Returns
-------
LocData
A new LocData instance with localization data.
"""
parameter = locals()
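    # Note: the start positions below are drawn with the module-level numpy RNG
    # (np.random.uniform), so they are not reproducible via the `seed` argument;
    # only the random-walk steps generated by _random_walk use `seed`.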
start_positions = np.array(
[np.random.uniform(*_range, size=n_walks) for _range in ranges]
).T
times, positions = _random_walk(
n_walks=n_walks,
n_steps=n_steps,
dimensions=len(ranges),
diffusion_constant=diffusion_constant,
time_step=time_step,
seed=seed,
)
new_positions = np.concatenate(
[
start_position + position
for start_position, position in zip(start_positions, positions)
]
)
locdata_dict = {
"position_" + label: position_values
for _, position_values, label in zip(ranges, new_positions.T, ("x", "y", "z"))
}
locdata_dict.update(frame=np.tile(range(len(times)), n_walks))
locdata = LocData.from_dataframe(dataframe=pd.DataFrame(locdata_dict))
# metadata
locdata.meta.source = metadata_pb2.SIMULATION
del locdata.meta.history[:]
locdata.meta.history.add(
name=sys._getframe().f_code.co_name, parameter=str(parameter)
)
return locdata
def resample(locdata, n_samples=10, seed=None):
"""
Resample locdata according to localization uncertainty. Per localization *n_samples* new localizations
are simulated normally distributed around the localization coordinates with a standard deviation set to the
uncertainty in each dimension.
The resulting LocData object carries new localizations with the following properties: 'position_x',
'position_y'[, 'position_z'], 'origin_index'
Parameters
----------
locdata : LocData
Localization data to be resampled
n_samples : int
The number of localizations generated for each original localization.
seed : None, int, array_like[ints], numpy.random.SeedSequence, numpy.random.BitGenerator, numpy.random.Generator
random number generation seed
Returns
-------
locdata : LocData
New localization data with simulated coordinates.
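    Examples
    --------
    A minimal sketch, assuming localization data that carries coordinates and uncertainties:
    >>> df = pd.DataFrame({"position_x": [1.0], "position_y": [2.0],
    ...                    "uncertainty_x": [0.1], "uncertainty_y": [0.1]})
    >>> new_locdata = resample(LocData.from_dataframe(dataframe=df), n_samples=10, seed=1)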
"""
rng = np.random.default_rng(seed)
# generate dataframe
list_ = []
for i in range(len(locdata)):
new_d = {}
new_d.update({"origin_index": np.full(n_samples, i)})
x_values = rng.normal(
loc=locdata.data.iloc[i]["position_x"],
scale=locdata.data.iloc[i]["uncertainty_x"],
size=n_samples,
)
new_d.update({"position_x": x_values})
y_values = rng.normal(
loc=locdata.data.iloc[i]["position_y"],
scale=locdata.data.iloc[i]["uncertainty_y"],
size=n_samples,
)
new_d.update({"position_y": y_values})
try:
z_values = rng.normal(
loc=locdata.data.iloc[i]["position_z"],
scale=locdata.data.iloc[i]["uncertainty_z"],
size=n_samples,
)
new_d.update({"position_z": z_values})
except KeyError:
pass
list_.append(pd.DataFrame(new_d))
dataframe = pd.concat(list_, ignore_index=True)
# metadata
meta_ = metadata_pb2.Metadata()
meta_.CopyFrom(locdata.meta)
try:
meta_.ClearField("identifier")
except ValueError:
pass
try:
meta_.ClearField("element_count")
except ValueError:
pass
try:
meta_.ClearField("frame_count")
except ValueError:
pass
meta_.modification_time.GetCurrentTime()
meta_.state = metadata_pb2.MODIFIED
meta_.ancestor_identifiers.append(locdata.meta.identifier)
meta_.history.add(
name="resample", parameter="locdata={}, n_samples={}".format(locdata, n_samples)
)
# instantiate
new_locdata = LocData.from_dataframe(dataframe=dataframe, meta=meta_)
return new_locdata
def _random_poisson_repetitions(n_samples, lam, seed=None):
"""
Return numpy.ndarray of sorted integers with each integer i being repeated n(i) times
where n(i) is drawn from a Poisson distribution with mean `lam`.
Parameters
----------
n_samples : int
number of elements to be returned
lam : float
mean of the Poisson distribution (lambda)
seed : None, int, array_like[ints], numpy.random.SeedSequence, numpy.random.BitGenerator, numpy.random.Generator
random number generation seed
Returns
-------
numpy.ndarray with shape (n_samples,)
The generated sequence of integers.
"""
rng = np.random.default_rng(seed)
frames = np.ones(n_samples, dtype=int)
n_random_numbers = n_samples if lam > 0 else int(n_samples / lam)
position = 0
current_number = 0
while position < n_samples:
repeats = rng.poisson(lam=lam, size=n_random_numbers)
for repeat in repeats:
try:
frames[position : position + repeat] = current_number
except ValueError:
frames[position:] = current_number
break
position += repeat
current_number += 1
return frames
def simulate_frame_numbers(n_samples, lam, seed=None):
"""
Simulate Poisson-distributed frame numbers for a list of localizations.
Return numpy.ndarray of sorted integers with each integer i being repeated n(i) times
where n(i) is drawn from a Poisson distribution with mean `lam`.
Use the following to add frame numbers to a given LocData object::
frames = simulate_frame_numbers(n_samples=len(locdata), lam=5)
locdata.dataframe = locdata.dataframe.assign(frame = frames)
Parameters
----------
n_samples : int
number of elements to be returned
lam : float
mean of the Poisson distribution (lambda)
seed : None, int, array_like[ints], numpy.random.SeedSequence, numpy.random.BitGenerator, numpy.random.Generator
random number generation seed
Returns
-------
numpy.ndarray with shape (n_samples,)
The generated sequence of integers.
"""
return _random_poisson_repetitions(n_samples, lam, seed=seed)
| 36.618464 | 120 | 0.665844 | 6,971 | 56,722 | 5.291206 | 0.060823 | 0.023749 | 0.00873 | 0.014098 | 0.848339 | 0.837956 | 0.824997 | 0.813393 | 0.796828 | 0.793032 | 0 | 0.005415 | 0.251243 | 56,722 | 1,548 | 121 | 36.642119 | 0.86306 | 0.423716 | 0 | 0.701564 | 0 | 0 | 0.051803 | 0.006651 | 0 | 0 | 0 | 0 | 0 | 1 | 0.022864 | false | 0.004813 | 0.01083 | 0 | 0.062575 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
5b3f28b871273d28508c176f6fa31250f7745a68 | 391 | py | Python | src/einsteinpy/plotting/__init__.py | bibek22/einsteinpy | 78bf5d942cbb12393852f8e4d7a8426f1ffe6f23 | [
"MIT"
] | 2 | 2019-04-07T04:01:57.000Z | 2019-07-11T11:59:55.000Z | src/einsteinpy/plotting/__init__.py | bibek22/einsteinpy | 78bf5d942cbb12393852f8e4d7a8426f1ffe6f23 | [
"MIT"
] | 2 | 2019-04-08T17:39:50.000Z | 2019-04-11T03:10:09.000Z | src/einsteinpy/plotting/__init__.py | bibek22/einsteinpy | 78bf5d942cbb12393852f8e4d7a8426f1ffe6f23 | [
"MIT"
] | null | null | null | from einsteinpy.plotting.fractal import fractal
from einsteinpy.plotting.geodesics.core import GeodesicPlotter
from einsteinpy.plotting.geodesics.interactive import InteractiveGeodesicPlotter
from einsteinpy.plotting.geodesics.static import StaticGeodesicPlotter
from einsteinpy.plotting.hypersurface.core import HypersurfacePlotter
from einsteinpy.plotting.rays.shadow import ShadowPlotter
| 55.857143 | 80 | 0.895141 | 41 | 391 | 8.536585 | 0.414634 | 0.24 | 0.377143 | 0.265714 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.061381 | 391 | 6 | 81 | 65.166667 | 0.953678 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
5b4e5e6bd5b1b46ef6c7f57818551a717e55f5db | 847 | py | Python | budget/models.py | adrianyim/Budget-Balance-Senior-Project | 75bd3e7399c96e424d2d00637364e83d3820b5ca | [
"Unlicense"
] | null | null | null | budget/models.py | adrianyim/Budget-Balance-Senior-Project | 75bd3e7399c96e424d2d00637364e83d3820b5ca | [
"Unlicense"
] | null | null | null | budget/models.py | adrianyim/Budget-Balance-Senior-Project | 75bd3e7399c96e424d2d00637364e83d3820b5ca | [
"Unlicense"
] | null | null | null | from django.db import models
from django.conf import settings
# Create your models here.
class Item(models.Model):
item = models.CharField(max_length=50)
item_type = models.CharField(max_length=50)
cost = models.DecimalField(max_digits=10, decimal_places=2)
cost_type = models.CharField(max_length=10)
remark = models.TextField()
date = models.DateField()
username = models.ForeignKey(settings.AUTH_USER_MODEL, on_delete=models.CASCADE)
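# Illustrative usage sketch for the Item model (hypothetical field values,
# e.g. from a view or the Django shell):
#
#   Item.objects.create(
#       item="Coffee", item_type="Food", cost="4.50", cost_type="Expense",
#       remark="Morning coffee", date="2021-01-15", username=request.user,
#   )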
class New_item(models.Model):
item = models.CharField(max_length=50)
item_type = models.CharField(max_length=50)
cost = models.DecimalField(max_digits=10, decimal_places=2)
cost_type = models.CharField(max_length=10)
remark = models.TextField()
date = models.DateField()
username = models.ForeignKey(settings.AUTH_USER_MODEL, on_delete=models.CASCADE) | 38.5 | 84 | 0.753247 | 115 | 847 | 5.365217 | 0.321739 | 0.145867 | 0.175041 | 0.233387 | 0.862237 | 0.862237 | 0.862237 | 0.862237 | 0.862237 | 0.862237 | 0 | 0.024793 | 0.142857 | 847 | 22 | 85 | 38.5 | 0.825069 | 0.028335 | 0 | 0.777778 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.111111 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 9 |
5bd0aa216d5d3bf3fb003e6f90b9c2010f0a4bca | 6,563 | py | Python | applications/gestiune/controllers/users.py | Vlad-Iliescu/gest | 32fbd3a859316727cd8564029d51b8d3c94cc0a0 | [
"BSD-3-Clause"
] | null | null | null | applications/gestiune/controllers/users.py | Vlad-Iliescu/gest | 32fbd3a859316727cd8564029d51b8d3c94cc0a0 | [
"BSD-3-Clause"
] | null | null | null | applications/gestiune/controllers/users.py | Vlad-Iliescu/gest | 32fbd3a859316727cd8564029d51b8d3c94cc0a0 | [
"BSD-3-Clause"
] | null | null | null | # coding: utf8
# try something like
def get_users():
if not request.post_vars.user_hash:
return dict(status=500, error="Hash nu exista!")
else:
ok, user_id, username, nivel = is_logged(request.post_vars.user_hash)
if not (ok):
return dict(status=401, error="Sesiunea a expirat!")
users = db(db.user.nivel < 5).select(db.user.id, db.user.username, db.user.nivel, db.user.nume,
db.user.email).as_list()
return dict(status=200, error="", users=users)
def delete_user():
if not request.post_vars.user_hash:
return dict(status=500, error="Hash nu exista!")
else:
ok, user_id, username, nivel = is_logged(request.post_vars.user_hash)
if not ok:
return dict(status=401, error="Sesiunea a expirat!")
if not request.post_vars.user_id:
return dict(status=500, error="Id User nu exista")
else:
_user = db(db.user.id == request.post_vars.user_id).select().first()
if not _user:
return dict(status=500, error="User cu ID-ul '{0}' nu exista".format(request.post_vars.user_id))
db(db.user.id == request.post_vars.user_id).delete()
###################### LOG #######################
log_user(
id=user_id,
name=username,
tabel='user',
contract='{0}'.format(request.post_vars.user_id),
ip=request.env.remote_addr,
action='Stergere User [{0}]'.format(_user.username),
detalii='{0}'.format(
dict(id=_user.id, username=_user.username, nivel=_user.nivel, nume=_user.nume, email=_user.email)),
query=db(db.user.id == request.post_vars.user_id)._delete()
)
###################################################
return dict(status=200, error="")
def add_user():
if not request.post_vars.user_hash:
return dict(status=500, error="Hash nu exista!");
else:
ok, user_id, username, nivel = is_logged(request.post_vars.user_hash)
if not ok:
return dict(status=401, error="Sesiunea a expirat!")
if not request.post_vars.username:
return dict(status=500, error="Username nu exista")
if not request.post_vars.password:
return dict(status=500, error="Password nu exista")
if not request.post_vars.nivel:
return dict(status=500, error="Nivel nu exista")
if not request.post_vars.nume:
nume = ""
else:
nume = request.post_vars.nume.title()
if not request.post_vars.email:
email = ""
else:
email = request.post_vars.email
user_check = db(db.user.username == request.post_vars.username).select(db.user.id).first()
if user_check:
return dict(status=409, error="Userul {0} exista deja in baza de date!".format(request.post_vars.username))
user_id = db.user.insert(username=request.post_vars.username, password=request.post_vars.password,
nivel=request.post_vars.nivel,
nume=nume, email=email)
###################### LOG #######################
log_user(
id=user_id,
name=username,
tabel='user',
contract='{0}'.format(user_id),
ip=request.env.remote_addr,
action='Adaugare User [{0}]'.format(request.post_vars.username),
detalii='{0}'.format(
dict(id=user_id, username=request.post_vars.username, nivel=request.post_vars.nivel, nume=nume,
email=email)),
query=db.user._insert(username=request.post_vars.username, password=request.post_vars.password,
nivel=request.post_vars.nivel,
nume=nume, email=email)
)
###################################################
return dict(status=200, error="")
def edit_user():
if not request.post_vars.user_hash:
return dict(status=500, error="Hash nu exista!")
else:
ok, user_id, username, nivel = is_logged(request.post_vars.user_hash)
if not ok:
return dict(status=401, error="Sesiunea a expirat!")
if not request.post_vars.username:
return dict(status=500, error="Username nu exista")
if not request.post_vars.nivel:
return dict(status=500, error="Nivel nu exista")
if not request.post_vars.user_id:
return dict(status=500, error="Id User nu exista")
else:
_user = db(db.user.id == request.post_vars.user_id).select().first()
if not request.post_vars.nume:
nume = ""
else:
nume = request.post_vars.nume.title()
if not (request.post_vars.email):
email = ""
else:
email = request.post_vars.email
user_check = db(
(db.user.username == request.post_vars.username) & (db.user.id != request.post_vars.user_id)).select(
db.user.id).first()
if user_check:
return dict(status=500, error="Userul {0} exista deja in baza de date!".format(request.post_vars.username))
if not request.post_vars.password:
db(db.user.id == request.post_vars.user_id).update(username=request.post_vars.username,
nivel=request.post_vars.nivel,
nume=nume, email=email)
return dict(status=200, error="")
db(db.user.id == request.post_vars.user_id).update(username=request.post_vars.username,
password=request.post_vars.password,
nivel=request.post_vars.nivel, nume=nume, email=email)
###################### LOG #######################
log_user(
id=user_id,
name=username,
tabel='user',
contract='{0}'.format(request.post_vars.user_id),
ip=request.env.remote_addr,
action='Editare User [{0}]'.format(_user.username),
detalii='{0}'.format(
dict(id=request.post_vars.user_id, username=request.post_vars.username, nivel=request.post_vars.nivel,
nume=nume, email=email)),
query=db(db.user.id == request.post_vars.user_id)._update(username=request.post_vars.username,
password=request.post_vars.password,
nivel=request.post_vars.nivel, nume=nume, email=email)
)
###################################################
return dict(status=200, error="")
| 37.936416 | 120 | 0.571233 | 817 | 6,563 | 4.438188 | 0.091799 | 0.178985 | 0.244071 | 0.115279 | 0.927744 | 0.900993 | 0.871208 | 0.871208 | 0.861831 | 0.84225 | 0 | 0.017274 | 0.267865 | 6,563 | 172 | 121 | 38.156977 | 0.737357 | 0.006857 | 0 | 0.713178 | 0 | 0 | 0.071796 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.031008 | false | 0.054264 | 0 | 0 | 0.209302 | 0 | 0 | 0 | 0 | null | 0 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 9 |
5be8e5b5f2e45dabc5f41f6b1f69968873695661 | 31,877 | py | Python | easy_vk/user/api/wall.py | UmbrellaMalware/easy_vk | 2a84b6bbf7fa9f65633a3fc1cbbe3235a6ee1651 | [
"MIT"
] | 5 | 2020-05-03T12:23:06.000Z | 2020-08-07T16:55:53.000Z | easy_vk/user/api/wall.py | UmbrellaMalware/easy_vk | 2a84b6bbf7fa9f65633a3fc1cbbe3235a6ee1651 | [
"MIT"
] | 4 | 2020-05-03T12:28:58.000Z | 2021-09-07T22:39:02.000Z | easy_vk/user/api/wall.py | UmbrellaMalware/easy_vk | 2a84b6bbf7fa9f65633a3fc1cbbe3235a6ee1651 | [
"MIT"
] | 3 | 2021-09-04T22:46:11.000Z | 2021-09-07T22:20:19.000Z | # This file was autogenerated from vk-api json schema
from typing import List, Union, Optional, overload
from easy_vk.types import objects
from easy_vk.types import responses
from easy_vk.api_category import BaseCategory
try:
from typing import Literal
except Exception:
from typing_extensions import Literal
class Wall(BaseCategory):
def __init__(self, session, access_token: str, v: str, last_call_timer, delay: float, auto_retry: bool, max_retries: int, timeout: float):
super().__init__(session, access_token, v, last_call_timer, delay, auto_retry, max_retries, timeout)
def close_comments(self, owner_id: int, post_id: int) -> responses.BaseBool:
"""
:param owner_id:
:param post_id:
"""
method_parameters = {k: v for k, v in locals().items() if k not in {'self', 'raw_response'}}
param_aliases = []
method_name = 'wall.closeComments'
response_type = responses.BaseBool
return self._call(method_name, method_parameters, param_aliases, response_type)
def create_comment(self, post_id: int, owner_id: Optional[int] = None, from_group: Optional[int] = None, message: Optional[str] = None, reply_to_comment: Optional[int] = None, attachments: Optional[List[str]] = None, sticker_id: Optional[int] = None, guid: Optional[str] = None) -> responses.WallCreateComment:
"""
Adds a comment to a post on a user wall or community wall.
:param post_id: Post ID.
:param owner_id: User ID or community ID. Use a negative value to designate a community ID.
:param from_group: Group ID.
:param message: (Required if 'attachments' is not set.) Text of the comment.
:param reply_to_comment: ID of comment to reply.
:param attachments: (Required if 'message' is not set.) List of media objects attached to the comment, in the following format: "<owner_id>_<media_id>,<owner_id>_<media_id>", '' — Type of media ojbect: 'photo' — photo, 'video' — video, 'audio' — audio, 'doc' — document, '<owner_id>' — ID of the media owner. '<media_id>' — Media ID. For example: "photo100172_166443618,photo66748_265827614"
:param sticker_id: Sticker ID.
:param guid: Unique identifier to avoid repeated comments.
"""
method_parameters = {k: v for k, v in locals().items() if k not in {'self', 'raw_response'}}
param_aliases = []
method_name = 'wall.createComment'
response_type = responses.WallCreateComment
return self._call(method_name, method_parameters, param_aliases, response_type)
def delete(self, owner_id: Optional[int] = None, post_id: Optional[int] = None) -> responses.BaseOk:
"""
Deletes a post from a user wall or community wall.
:param owner_id: User ID or community ID. Use a negative value to designate a community ID.
:param post_id: ID of the post to be deleted.
"""
method_parameters = {k: v for k, v in locals().items() if k not in {'self', 'raw_response'}}
param_aliases = []
method_name = 'wall.delete'
response_type = responses.BaseOk
return self._call(method_name, method_parameters, param_aliases, response_type)
def delete_comment(self, comment_id: int, owner_id: Optional[int] = None) -> responses.BaseOk:
"""
Deletes a comment on a post on a user wall or community wall.
:param comment_id: Comment ID.
:param owner_id: User ID or community ID. Use a negative value to designate a community ID.
"""
method_parameters = {k: v for k, v in locals().items() if k not in {'self', 'raw_response'}}
param_aliases = []
method_name = 'wall.deleteComment'
response_type = responses.BaseOk
return self._call(method_name, method_parameters, param_aliases, response_type)
def edit(self, post_id: int, owner_id: Optional[int] = None, friends_only: Optional[bool] = None, message: Optional[str] = None, attachments: Optional[List[str]] = None, services: Optional[str] = None, signed: Optional[bool] = None, publish_date: Optional[int] = None, lat: Optional[float] = None, long: Optional[float] = None, place_id: Optional[int] = None, mark_as_ads: Optional[bool] = None, close_comments: Optional[bool] = None, poster_bkg_id: Optional[int] = None, poster_bkg_owner_id: Optional[int] = None, poster_bkg_access_hash: Optional[str] = None, copyright_: Optional[str] = None) -> responses.WallEdit:
"""
Edits a post on a user wall or community wall.
:param post_id:
:param owner_id: User ID or community ID. Use a negative value to designate a community ID.
:param friends_only:
:param message: (Required if 'attachments' is not set.) Text of the post.
:param attachments: (Required if 'message' is not set.) List of objects attached to the post, in the following format: "<owner_id>_<media_id>,<owner_id>_<media_id>", '' — Type of media attachment: 'photo' — photo, 'video' — video, 'audio' — audio, 'doc' — document, '<owner_id>' — ID of the media application owner. '<media_id>' — Media application ID. Example: "photo100172_166443618,photo66748_265827614", May contain a link to an external page to include in the post. Example: "photo66748_265827614,http://habrahabr.ru", "NOTE: If more than one link is being attached, an error is thrown."
:param services:
:param signed:
:param publish_date:
:param lat:
:param long:
:param place_id:
:param mark_as_ads:
:param close_comments:
:param poster_bkg_id:
:param poster_bkg_owner_id:
:param poster_bkg_access_hash:
:param copyright_:
"""
method_parameters = {k: v for k, v in locals().items() if k not in {'self', 'raw_response'}}
param_aliases = [('copyright_', 'copyright')]
method_name = 'wall.edit'
response_type = responses.WallEdit
return self._call(method_name, method_parameters, param_aliases, response_type)
def edit_ads_stealth(self, post_id: int, owner_id: Optional[int] = None, message: Optional[str] = None, attachments: Optional[List[str]] = None, signed: Optional[bool] = None, lat: Optional[float] = None, long: Optional[float] = None, place_id: Optional[int] = None, link_button: Optional[str] = None, link_title: Optional[str] = None, link_image: Optional[str] = None, link_video: Optional[str] = None) -> responses.BaseOk:
"""
Allows to edit hidden post.
:param post_id: Post ID. Used for publishing of scheduled and suggested posts.
:param owner_id: User ID or community ID. Use a negative value to designate a community ID.
:param message: (Required if 'attachments' is not set.) Text of the post.
:param attachments: (Required if 'message' is not set.) List of objects attached to the post, in the following format: "<owner_id>_<media_id>,<owner_id>_<media_id>", '' — Type of media attachment: 'photo' — photo, 'video' — video, 'audio' — audio, 'doc' — document, 'page' — wiki-page, 'note' — note, 'poll' — poll, 'album' — photo album, '<owner_id>' — ID of the media application owner. '<media_id>' — Media application ID. Example: "photo100172_166443618,photo66748_265827614", May contain a link to an external page to include in the post. Example: "photo66748_265827614,http://habrahabr.ru", "NOTE: If more than one link is being attached, an error will be thrown."
:param signed: Only for posts in communities with 'from_group' set to '1': '1' — post will be signed with the name of the posting user, '0' — post will not be signed (default)
:param lat: Geographical latitude of a check-in, in degrees (from -90 to 90).
:param long: Geographical longitude of a check-in, in degrees (from -180 to 180).
:param place_id: ID of the location where the user was tagged.
:param link_button: Link button ID
:param link_title: Link title
:param link_image: Link image url
:param link_video: Link video ID in format "<owner_id>_<media_id>"
"""
method_parameters = {k: v for k, v in locals().items() if k not in {'self', 'raw_response'}}
param_aliases = []
method_name = 'wall.editAdsStealth'
response_type = responses.BaseOk
return self._call(method_name, method_parameters, param_aliases, response_type)
def edit_comment(self, comment_id: int, owner_id: Optional[int] = None, message: Optional[str] = None, attachments: Optional[List[str]] = None) -> responses.BaseOk:
"""
Edits a comment on a user wall or community wall.
:param comment_id: Comment ID.
:param owner_id: User ID or community ID. Use a negative value to designate a community ID.
:param message: New comment text.
:param attachments: List of objects attached to the comment, in the following format: , "<owner_id>_<media_id>,<owner_id>_<media_id>", '' — Type of media attachment: 'photo' — photo, 'video' — video, 'audio' — audio, 'doc' — document, '<owner_id>' — ID of the media attachment owner. '<media_id>' — Media attachment ID. For example: "photo100172_166443618,photo66748_265827614"
"""
method_parameters = {k: v for k, v in locals().items() if k not in {'self', 'raw_response'}}
param_aliases = []
method_name = 'wall.editComment'
response_type = responses.BaseOk
return self._call(method_name, method_parameters, param_aliases, response_type)
@overload
def get(self, owner_id: Optional[int] = None, domain: Optional[str] = None, offset: Optional[int] = None, count: Optional[int] = None, filter_: Optional[str] = None, extended: None = None, fields: Optional[List[Union[objects.BaseUserGroupFields, str]]] = None) -> responses.WallGet: ...
@overload
def get(self, extended: bool, owner_id: Optional[int] = None, domain: Optional[str] = None, offset: Optional[int] = None, count: Optional[int] = None, filter_: Optional[str] = None, fields: Optional[List[Union[objects.BaseUserGroupFields, str]]] = None) -> responses.WallGetExtended: ...
def get(self, owner_id: Optional[int] = None, domain: Optional[str] = None, offset: Optional[int] = None, count: Optional[int] = None, filter_: Optional[str] = None, extended: Optional[bool] = None, fields: Optional[List[Union[objects.BaseUserGroupFields, str]]] = None):
"""
Returns a list of posts on a user wall or community wall.
:param owner_id: ID of the user or community that owns the wall. By default, current user ID. Use a negative value to designate a community ID.
:param domain: User or community short address.
:param offset: Offset needed to return a specific subset of posts.
:param count: Number of posts to return (maximum 100).
:param filter_: Filter to apply: 'owner' — posts by the wall owner, 'others' — posts by someone else, 'all' — posts by the wall owner and others (default), 'postponed' — timed posts (only available for calls with an 'access_token'), 'suggests' — suggested posts on a community wall
:param extended: '1' — to return 'wall', 'profiles', and 'groups' fields, '0' — to return no additional fields (default)
:param fields:
"""
method_parameters = {k: v for k, v in locals().items() if k not in {'self', 'raw_response'}}
param_aliases = [('filter_', 'filter')]
method_name = 'wall.get'
if not extended:
response_type = responses.WallGet
if extended:
response_type = responses.WallGetExtended
return self._call(method_name, method_parameters, param_aliases, response_type)
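# Illustrative usage sketch (hypothetical owner id; assumes a configured
# client exposing this category, e.g. `api.wall`):
#
#   posts = api.wall.get(owner_id=1, count=10, filter_='owner')
#   posts_ext = api.wall.get(owner_id=1, count=10, extended=True)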
@overload
def get_by_id(self, posts: List[str], extended: None = None, copy_history_depth: Optional[int] = None, fields: Optional[List[Union[objects.BaseUserGroupFields, str]]] = None) -> responses.WallGetById: ...
@overload
def get_by_id(self, posts: List[str], extended: bool, copy_history_depth: Optional[int] = None, fields: Optional[List[Union[objects.BaseUserGroupFields, str]]] = None) -> responses.WallGetByIdExtended: ...
def get_by_id(self, posts: List[str], extended: Optional[bool] = None, copy_history_depth: Optional[int] = None, fields: Optional[List[Union[objects.BaseUserGroupFields, str]]] = None):
"""
Returns a list of posts from user or community walls by their IDs.
:param posts: User or community IDs and post IDs, separated by underscores. Use a negative value to designate a community ID. Example: "93388_21539,93388_20904,2943_4276,-1_1"
:param extended: '1' — to return user and community objects needed to display posts, '0' — no additional fields are returned (default)
:param copy_history_depth: Sets the number of parent elements to include in the array 'copy_history' that is returned if the post is a repost from another wall.
:param fields:
"""
method_parameters = {k: v for k, v in locals().items() if k not in {'self', 'raw_response'}}
param_aliases = []
method_name = 'wall.getById'
if not extended:
response_type = responses.WallGetById
if extended:
response_type = responses.WallGetByIdExtended
return self._call(method_name, method_parameters, param_aliases, response_type)
@overload
def get_comment(self, comment_id: int, owner_id: Optional[int] = None, extended: None = None, fields: Optional[List[Union[objects.BaseUserGroupFields, str]]] = None) -> responses.WallGetComment: ...
@overload
def get_comment(self, comment_id: int, extended: bool, owner_id: Optional[int] = None, fields: Optional[List[Union[objects.BaseUserGroupFields, str]]] = None) -> responses.WallGetCommentExtended: ...
def get_comment(self, comment_id: int, owner_id: Optional[int] = None, extended: Optional[bool] = None, fields: Optional[List[Union[objects.BaseUserGroupFields, str]]] = None):
"""
Returns a comment on a post on a user wall or community wall.
:param comment_id: Comment ID.
:param owner_id: User ID or community ID. Use a negative value to designate a community ID.
:param extended:
:param fields:
"""
method_parameters = {k: v for k, v in locals().items() if k not in {'self', 'raw_response'}}
param_aliases = []
method_name = 'wall.getComment'
if not extended:
response_type = responses.WallGetComment
if extended:
response_type = responses.WallGetCommentExtended
return self._call(method_name, method_parameters, param_aliases, response_type)
@overload
def get_comments(self, owner_id: Optional[int] = None, post_id: Optional[int] = None, need_likes: Optional[bool] = None, start_comment_id: Optional[int] = None, offset: Optional[int] = None, count: Optional[int] = None, sort: Optional[str] = None, preview_length: Optional[int] = None, extended: None = None, fields: Optional[List[Union[objects.BaseUserGroupFields, str]]] = None, comment_id: Optional[int] = None, thread_items_count: Optional[int] = None) -> responses.WallGetComments: ...
@overload
def get_comments(self, extended: bool, owner_id: Optional[int] = None, post_id: Optional[int] = None, need_likes: Optional[bool] = None, start_comment_id: Optional[int] = None, offset: Optional[int] = None, count: Optional[int] = None, sort: Optional[str] = None, preview_length: Optional[int] = None, fields: Optional[List[Union[objects.BaseUserGroupFields, str]]] = None, comment_id: Optional[int] = None, thread_items_count: Optional[int] = None) -> responses.WallGetCommentsExtended: ...
def get_comments(self, owner_id: Optional[int] = None, post_id: Optional[int] = None, need_likes: Optional[bool] = None, start_comment_id: Optional[int] = None, offset: Optional[int] = None, count: Optional[int] = None, sort: Optional[str] = None, preview_length: Optional[int] = None, extended: Optional[bool] = None, fields: Optional[List[Union[objects.BaseUserGroupFields, str]]] = None, comment_id: Optional[int] = None, thread_items_count: Optional[int] = None):
"""
Returns a list of comments on a post on a user wall or community wall.
:param owner_id: User ID or community ID. Use a negative value to designate a community ID.
:param post_id: Post ID.
:param need_likes: '1' — to return the 'likes' field, '0' — not to return the 'likes' field (default)
:param start_comment_id:
:param offset: Offset needed to return a specific subset of comments.
:param count: Number of comments to return (maximum 100).
:param sort: Sort order: 'asc' — chronological, 'desc' — reverse chronological
:param preview_length: Number of characters at which to truncate comments when previewed. By default, '90'. Specify '0' if you do not want to truncate comments.
:param extended:
:param fields:
:param comment_id: Comment ID.
:param thread_items_count: Count items in threads.
"""
method_parameters = {k: v for k, v in locals().items() if k not in {'self', 'raw_response'}}
param_aliases = []
method_name = 'wall.getComments'
if not extended:
response_type = responses.WallGetComments
if extended:
response_type = responses.WallGetCommentsExtended
return self._call(method_name, method_parameters, param_aliases, response_type)
def get_reposts(self, owner_id: Optional[int] = None, post_id: Optional[int] = None, offset: Optional[int] = None, count: Optional[int] = None) -> responses.WallGetReposts:
"""
Returns information about reposts of a post on user wall or community wall.
:param owner_id: User ID or community ID. By default, current user ID. Use a negative value to designate a community ID.
:param post_id: Post ID.
:param offset: Offset needed to return a specific subset of reposts.
:param count: Number of reposts to return.
"""
method_parameters = {k: v for k, v in locals().items() if k not in {'self', 'raw_response'}}
param_aliases = []
method_name = 'wall.getReposts'
response_type = responses.WallGetReposts
return self._call(method_name, method_parameters, param_aliases, response_type)
def open_comments(self, owner_id: int, post_id: int) -> responses.BaseBool:
"""
:param owner_id:
:param post_id:
"""
method_parameters = {k: v for k, v in locals().items() if k not in {'self', 'raw_response'}}
param_aliases = []
method_name = 'wall.openComments'
response_type = responses.BaseBool
return self._call(method_name, method_parameters, param_aliases, response_type)
def pin(self, post_id: int, owner_id: Optional[int] = None) -> responses.BaseOk:
"""
Pins the post on wall.
:param post_id: Post ID.
:param owner_id: ID of the user or community that owns the wall. By default, current user ID. Use a negative value to designate a community ID.
"""
method_parameters = {k: v for k, v in locals().items() if k not in {'self', 'raw_response'}}
param_aliases = []
method_name = 'wall.pin'
response_type = responses.BaseOk
return self._call(method_name, method_parameters, param_aliases, response_type)
def post(self, owner_id: Optional[int] = None, friends_only: Optional[bool] = None, from_group: Optional[bool] = None, message: Optional[str] = None, attachments: Optional[List[str]] = None, services: Optional[str] = None, signed: Optional[bool] = None, publish_date: Optional[int] = None, lat: Optional[float] = None, long: Optional[float] = None, place_id: Optional[int] = None, post_id: Optional[int] = None, guid: Optional[str] = None, mark_as_ads: Optional[bool] = None, close_comments: Optional[bool] = None, mute_notifications: Optional[bool] = None, copyright_: Optional[str] = None) -> responses.WallPost:
"""
Adds a new post on a user wall or community wall. Can also be used to publish suggested or scheduled posts.
:param owner_id: User ID or community ID. Use a negative value to designate a community ID.
:param friends_only: '1' — post will be available to friends only, '0' — post will be available to all users (default)
:param from_group: For a community: '1' — post will be published by the community, '0' — post will be published by the user (default)
:param message: (Required if 'attachments' is not set.) Text of the post.
:param attachments: (Required if 'message' is not set.) List of objects attached to the post, in the following format: "<owner_id>_<media_id>,<owner_id>_<media_id>", '' — Type of media attachment: 'photo' — photo, 'video' — video, 'audio' — audio, 'doc' — document, 'page' — wiki-page, 'note' — note, 'poll' — poll, 'album' — photo album, '<owner_id>' — ID of the media application owner. '<media_id>' — Media application ID. Example: "photo100172_166443618,photo66748_265827614", May contain a link to an external page to include in the post. Example: "photo66748_265827614,http://habrahabr.ru", "NOTE: If more than one link is being attached, an error will be thrown."
:param services: List of services or websites the update will be exported to, if the user has so requested. Sample values: 'twitter', 'facebook'.
:param signed: Only for posts in communities with 'from_group' set to '1': '1' — post will be signed with the name of the posting user, '0' — post will not be signed (default)
:param publish_date: Publication date (in Unix time). If used, posting will be delayed until the set time.
:param lat: Geographical latitude of a check-in, in degrees (from -90 to 90).
:param long: Geographical longitude of a check-in, in degrees (from -180 to 180).
:param place_id: ID of the location where the user was tagged.
:param post_id: Post ID. Used for publishing of scheduled and suggested posts.
:param guid:
:param mark_as_ads:
:param close_comments:
:param mute_notifications:
:param copyright_:
"""
method_parameters = {k: v for k, v in locals().items() if k not in {'self', 'raw_response'}}
param_aliases = [('copyright_', 'copyright')]
method_name = 'wall.post'
response_type = responses.WallPost
return self._call(method_name, method_parameters, param_aliases, response_type)
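# Illustrative sketch of the attachments format described above (IDs are
# made up; assumes the same `api.wall` client as in the earlier example):
#
#   api.wall.post(
#       owner_id=-123456,  # negative value designates a community wall
#       message="Release notes",
#       attachments=["photo100172_166443618", "http://habrahabr.ru"],
#   )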
def post_ads_stealth(self, owner_id: int, message: Optional[str] = None, attachments: Optional[List[str]] = None, signed: Optional[bool] = None, lat: Optional[float] = None, long: Optional[float] = None, place_id: Optional[int] = None, guid: Optional[str] = None, link_button: Optional[str] = None, link_title: Optional[str] = None, link_image: Optional[str] = None, link_video: Optional[str] = None) -> responses.WallPostAdsStealth:
"""
Allows to create hidden post which will not be shown on the community's wall and can be used for creating an ad with type "Community post".
:param owner_id: User ID or community ID. Use a negative value to designate a community ID.
:param message: (Required if 'attachments' is not set.) Text of the post.
:param attachments: (Required if 'message' is not set.) List of objects attached to the post, in the following format: "<owner_id>_<media_id>,<owner_id>_<media_id>", '' — Type of media attachment: 'photo' — photo, 'video' — video, 'audio' — audio, 'doc' — document, 'page' — wiki-page, 'note' — note, 'poll' — poll, 'album' — photo album, '<owner_id>' — ID of the media application owner. '<media_id>' — Media application ID. Example: "photo100172_166443618,photo66748_265827614", May contain a link to an external page to include in the post. Example: "photo66748_265827614,http://habrahabr.ru", "NOTE: If more than one link is being attached, an error will be thrown."
:param signed: Only for posts in communities with 'from_group' set to '1': '1' — post will be signed with the name of the posting user, '0' — post will not be signed (default)
:param lat: Geographical latitude of a check-in, in degrees (from -90 to 90).
:param long: Geographical longitude of a check-in, in degrees (from -180 to 180).
:param place_id: ID of the location where the user was tagged.
:param guid: Unique identifier to avoid duplication the same post.
:param link_button: Link button ID
:param link_title: Link title
:param link_image: Link image url
:param link_video: Link video ID in format "<owner_id>_<media_id>"
"""
method_parameters = {k: v for k, v in locals().items() if k not in {'self', 'raw_response'}}
param_aliases = []
method_name = 'wall.postAdsStealth'
response_type = responses.WallPostAdsStealth
return self._call(method_name, method_parameters, param_aliases, response_type)
def report_comment(self, owner_id: int, comment_id: int, reason: Optional[int] = None) -> responses.BaseOk:
"""
Reports (submits a complaint about) a comment on a post on a user wall or community wall.
:param owner_id: ID of the user or community that owns the wall.
:param comment_id: Comment ID.
:param reason: Reason for the complaint: '0' – spam, '1' – child pornography, '2' – extremism, '3' – violence, '4' – drug propaganda, '5' – adult material, '6' – insult, abuse
"""
method_parameters = {k: v for k, v in locals().items() if k not in {'self', 'raw_response'}}
param_aliases = []
method_name = 'wall.reportComment'
response_type = responses.BaseOk
return self._call(method_name, method_parameters, param_aliases, response_type)
def report_post(self, owner_id: int, post_id: int, reason: Optional[int] = None) -> responses.BaseOk:
"""
Reports (submits a complaint about) a post on a user wall or community wall.
:param owner_id: ID of the user or community that owns the wall.
:param post_id: Post ID.
:param reason: Reason for the complaint: '0' – spam, '1' – child pornography, '2' – extremism, '3' – violence, '4' – drug propaganda, '5' – adult material, '6' – insult, abuse
"""
method_parameters = {k: v for k, v in locals().items() if k not in {'self', 'raw_response'}}
param_aliases = []
method_name = 'wall.reportPost'
response_type = responses.BaseOk
return self._call(method_name, method_parameters, param_aliases, response_type)
def repost(self, object_: str, message: Optional[str] = None, group_id: Optional[int] = None, mark_as_ads: Optional[bool] = None, mute_notifications: Optional[bool] = None) -> responses.WallRepost:
"""
Reposts (copies) an object to a user wall or community wall.
:param object_: ID of the object to be reposted on the wall. Example: "wall66748_3675"
:param message: Comment to be added along with the reposted object.
:param group_id: Target community ID when reposting to a community.
:param mark_as_ads:
:param mute_notifications:
"""
method_parameters = {k: v for k, v in locals().items() if k not in {'self', 'raw_response'}}
param_aliases = [('object_', 'object')]
method_name = 'wall.repost'
response_type = responses.WallRepost
return self._call(method_name, method_parameters, param_aliases, response_type)
def restore(self, owner_id: Optional[int] = None, post_id: Optional[int] = None) -> responses.BaseOk:
"""
Restores a post deleted from a user wall or community wall.
:param owner_id: User ID or community ID from whose wall the post was deleted. Use a negative value to designate a community ID.
:param post_id: ID of the post to be restored.
"""
method_parameters = {k: v for k, v in locals().items() if k not in {'self', 'raw_response'}}
param_aliases = []
method_name = 'wall.restore'
response_type = responses.BaseOk
return self._call(method_name, method_parameters, param_aliases, response_type)
def restore_comment(self, comment_id: int, owner_id: Optional[int] = None) -> responses.BaseOk:
"""
Restores a comment deleted from a user wall or community wall.
:param comment_id: Comment ID.
:param owner_id: User ID or community ID. Use a negative value to designate a community ID.
"""
method_parameters = {k: v for k, v in locals().items() if k not in {'self', 'raw_response'}}
param_aliases = []
method_name = 'wall.restoreComment'
response_type = responses.BaseOk
return self._call(method_name, method_parameters, param_aliases, response_type)
@overload
def search(self, owner_id: Optional[int] = None, domain: Optional[str] = None, query: Optional[str] = None, owners_only: Optional[bool] = None, count: Optional[int] = None, offset: Optional[int] = None, extended: None = None, fields: Optional[List[Union[objects.BaseUserGroupFields, str]]] = None) -> responses.WallSearch: ...
@overload
def search(self, extended: bool, owner_id: Optional[int] = None, domain: Optional[str] = None, query: Optional[str] = None, owners_only: Optional[bool] = None, count: Optional[int] = None, offset: Optional[int] = None, fields: Optional[List[Union[objects.BaseUserGroupFields, str]]] = None) -> responses.WallSearchExtended: ...
def search(self, owner_id: Optional[int] = None, domain: Optional[str] = None, query: Optional[str] = None, owners_only: Optional[bool] = None, count: Optional[int] = None, offset: Optional[int] = None, extended: Optional[bool] = None, fields: Optional[List[Union[objects.BaseUserGroupFields, str]]] = None):
"""
Allows to search posts on user or community walls.
:param owner_id: user or community id. "Remember that for a community 'owner_id' must be negative."
:param domain: user or community screen name.
:param query: search query string.
:param owners_only: '1' – returns only page owner's posts.
:param count: count of posts to return.
:param offset: Offset needed to return a specific subset of posts.
:param extended: show extended post info.
:param fields:
"""
method_parameters = {k: v for k, v in locals().items() if k not in {'self', 'raw_response'}}
param_aliases = []
method_name = 'wall.search'
if not extended:
response_type = responses.WallSearch
if extended:
response_type = responses.WallSearchExtended
return self._call(method_name, method_parameters, param_aliases, response_type)
def unpin(self, post_id: int, owner_id: Optional[int] = None) -> responses.BaseOk:
"""
Unpins the post on wall.
:param post_id: Post ID.
:param owner_id: ID of the user or community that owns the wall. By default, current user ID. Use a negative value to designate a community ID.
"""
method_parameters = {k: v for k, v in locals().items() if k not in {'self', 'raw_response'}}
param_aliases = []
method_name = 'wall.unpin'
response_type = responses.BaseOk
return self._call(method_name, method_parameters, param_aliases, response_type)
| 69.448802 | 678 | 0.673432 | 4,415 | 31,877 | 4.749943 | 0.08675 | 0.041963 | 0.057222 | 0.036479 | 0.813361 | 0.792332 | 0.765963 | 0.752563 | 0.737209 | 0.72915 | 0 | 0.013764 | 0.220535 | 31,877 | 458 | 679 | 69.600437 | 0.826538 | 0.411958 | 0 | 0.538043 | 1 | 0 | 0.044536 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.184783 | false | 0 | 0.032609 | 0 | 0.347826 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
5be9ff973eb88426e6654a986b25c29f05657e5c | 5,090 | py | Python | custom_components/ready4sky/lib/test.py | suver/r4sky | 81402a22635a3aa75b5ca74b5a7182e3d7b3c50b | [
"Apache-2.0"
] | null | null | null | custom_components/ready4sky/lib/test.py | suver/r4sky | 81402a22635a3aa75b5ca74b5a7182e3d7b3c50b | [
"Apache-2.0"
] | null | null | null | custom_components/ready4sky/lib/test.py | suver/r4sky | 81402a22635a3aa75b5ca74b5a7182e3d7b3c50b | [
"Apache-2.0"
] | null | null | null | #!/usr/bin/env python3
from .kettle import RedmondKettle
CONF_DEVICE = 'hci0'
CONF_MAC = 'e0:04:66:69:51:fe'
CONF_PASSWORD = '1234567890123456'
CONF_SCAN_INTERVAL = 10
rk = RedmondKettle(mac=CONF_MAC, password=CONF_PASSWORD)
print(rk.onLight(rgb1='27FF00', rgb2='00FFEC', rgb3='000FFF'))
# rk.on()
# rk.onTemperatureToLight()
# rk.onModeHeat(temperature=70)
# sleep(5)
# rk.off()
# rk.onModeBoil()
# sleep(5)
# rk.off()
# rk.onLight()
# sleep(5)
# rk.offLight()
# RedmondKettle.scan()
# bte = RedmondKettleController(CONF_MAC, CONF_PASSWORD)
# Get the current operating mode
# print(bte.auth())
# print(bte.sync())
# print(bte.info())
# print(bte.stat())
# print(bte.RGBLight('boil'))
# print(bte.mode())
# Heat up to a target temperature
# print(bte.auth())
# print(bte.sync())
# print(bte.mode())
# print(bte.sendMode(mode='boil', temp=75, howMuchBoil=50))
# print(bte.onMode())
#
# print(bte.onTemperatureToLight())
# Night light
# print(bte.auth())
# print(bte.sync())
# print(bte.mode())
# print(bte.sendRGBLight('boil', rgb1='eeff00', rgb2='ffbb00', rgb3='ff3c00'))
# print(bte.sendMode(mode='light', temp=75, howMuchBoil=50))
# print(bte.onMode())
# print(bte.offMode())
# Set the colours used to display heating and start the kettle
# print(bte.auth())
# print(bte.sync())
# print(bte.mode())
# print(bte.RGBLight('boil'))
# # print(bte.sendRGBLight('boil', rgb1='eeff00', rgb2='ffbb00', rgb3='ff3c00'))
# print(bte.sendRGBLight('boil'))
# print(bte.RGBLight('boil'))
# print(bte.onTemperatureToLight())
# print(bte.onMode())
# Set the colours used to display heating
# print(bte.onTemperatureToLight())
# print(bte.offTemperatureToLight())
# print(bte.on())
# print(bte.off())
# print(bte.stat())
# print(bte.mode())
# 0 -
# 1 - Iteration / command number
# 2 - Command mode: the option being executed, i.e. what we want the kettle to do
# 3 - Operating mode: 00 - boil, 01 - heat to temp, 02 - ?, 03 - backlight (boil by default); temp - in HEX
# 4 -
# 5 - temperature to heat up to in the 'heat' mode; equals 00 in boil mode
# 6 -
# 7 -
# 8 - current water temperature (2a=42 in Celsius)
# 9 - light indication during sync: 00 - off, 0f (15) - on
# 11 - kettle status - 02 - on, 00 - off
# 16 - temperature hold time
#
# -- -- ?? M -- T ?? ?? CT ++ -- S -- -- -- -- TE -- -- --
# 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19
# ['55', '03', '50', '00', '00', '00', '9d', '02', '00', '00', '00', '00', '00', '00', '00', '00', '00', '00', '00', 'aa']
# ['55', '03', '06', '01', '00', '23', '00', '01', '2c', '0f', '00', '00', '00', '00', '00', '00', '80', '00', '00', 'aa']
# ['55', '03', '06', '00', '00', '00', '00', '01', '2b', '0f', '00', '00', '00', '00', '00', '00', '80', '00', '00', 'aa']
# ['55', '03', '06', '00', '00', '00', '00', '01', '5e', '0f', '00', '00', '00', '00', '00', '00', '80', '00', '00', 'aa']
# ['55', '03', '50', '00', '00', '00', '9e', '02', '00', '00', '00', '00', '00', '00', '00', '00', '00', '00', '00', 'aa']
# ['55', '03', '06', '01', '00', '23', '00', '01', '59', '0f', '00', '00', '00', '00', '00', '00', '80', '00', '00', 'aa']
# ['55', '03', '50', '00', '00', '00', '9e', '02', '00', '00', '00', '00', '00', '00', '00', '00', '00', '00', '00', 'aa']
# ['55', '03', '50', '00', '00', '00', '9e', '02', '00', '00', '00', '00', '00', '00', '00', '00', '00', '00', '00', 'aa']
# ['55', '03', '06', '01', '00', '46', '00', '01', '18', '0f', '00', '00', '00', '00', '00', '00', '80', '00', '00', 'aa'] - off
# ['55', '03', '06', '00', '00', '00', '00', '01', '33', '0f', '00', '02', '00', '00', '00', '00', '80', '00', '00', 'aa'] - on
# ['55', '03', '06', '00', '00', '00', '00', '01', '58', '0f', '00', '02', '00', '00', '00', '00', '80', '00', '00', 'aa'] - off after boiling
# ['55', '03', '06', '00', '00', '00', '00', '01', '2a', '0f', '00', '00', '00', '00', '00', '00', '80', '00', '00', 'aa'] - after the kettle was lifted and refilled
# ['55', '03', '06', '00', '00', '00', '00', '01', '28', '0f', '00', '00', '00', '00', '00', '00', '80', '00', '00', 'aa'] - 'fresh water' button released
# ['55', '03', '06', '02', '00', '23', '00', '01', '28', '0f', '00', '00', '00', '00', '00', '00', '80', '00', '00', 'aa'] - heating and boiling enabled
# ['55', '03', '06', '01', '00', '23', '00', '01', '28', '0f', '00', '00', '00', '00', '00', '00', '80', '00', '00', 'aa'] - heating enabled, boiling disabled
# ['55', '03', '50', '00', '00', '00', '9f', '02', '00', '00', '00', '00', '00', '00', '00', '00', '00', '00', '00', 'aa'] - heat to 80
# ['55', '03', '06', '01', '00', '32', '00', '01', '28', '0f', '00', '00', '00', '00', '00', '00', '80', '00', '00', 'aa'] - heat to 50
# ['55', '03', '06', '01', '00', '3c', '00', '01', '27', '0f', '00', '00', '00', '00', '00', '00', '80', '00', '00', 'aa'] - heat to 60
# ['55', '03', '06', '01', '00', '46', '00', '01', '27', '0f', '00', '00', '00', '00', '00', '00', '80', '00', '00', 'aa'] - heat to 70
#
#
#
#
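# Minimal parsing sketch based on the byte layout noted above (field meanings
# are assumptions taken from these comments, not from official documentation).
def parse_status_packet(packet):
    """packet: list of 20 hex-byte strings, e.g. ['55', '03', '06', ...]."""
    return {
        "command_id": int(packet[1], 16),
        "mode": int(packet[3], 16),               # 00 boil, 01 heat, 03 backlight
        "target_temperature": int(packet[5], 16),
        "current_temperature": int(packet[8], 16),
        "sync_light_on": int(packet[9], 16) == 0x0F,
        "is_on": int(packet[11], 16) == 0x02,
        "hold_time": int(packet[16], 16),
    }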
# | 43.87931 | 161 | 0.496267 | 753 | 5,090 | 3.345286 | 0.245684 | 0.250893 | 0.271536 | 0.266773 | 0.562922 | 0.475586 | 0.439063 | 0.439063 | 0.392219 | 0.356491 | 0 | 0.203414 | 0.182908 | 5,090 | 116 | 162 | 43.87931 | 0.401779 | 0.908251 | 0 | 0 | 0 | 0 | 0.147059 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.285714 | 0.142857 | 0 | 0.142857 | 0.142857 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 7 |
752581c8631b8ae0136f5b8850cd908c43e4d846 | 16,279 | py | Python | components/TeamEditorMenu.py | davibarberini/pythonbots-v1.1 | 3c1f47c2579e73e1e9211e3c6b85cf3c0e64dc71 | [
"MIT"
] | null | null | null | components/TeamEditorMenu.py | davibarberini/pythonbots-v1.1 | 3c1f47c2579e73e1e9211e3c6b85cf3c0e64dc71 | [
"MIT"
] | 1 | 2019-09-24T14:00:08.000Z | 2019-09-24T14:00:08.000Z | components/TeamEditorMenu.py | davibarberini/pythonbots-v1.1 | 3c1f47c2579e73e1e9211e3c6b85cf3c0e64dc71 | [
"MIT"
] | null | null | null | import sys, pygame,random, string, math, os
sys.path.append(sys.path[0] + "/robots")
#initiate the pygame library
pygame.init()
#Fonts
pygame.font.NameFont= pygame.font.SysFont("Tahoma", 16, bold=True, italic=False)
pygame.font.RobotNameFont= pygame.font.SysFont("Tahoma", 48, bold=True, italic=False)
pygame.font.StatsFont= pygame.font.SysFont("Tahoma", 18, bold=True, italic=False)
def ScreenSetUp():
#Set up the screen
size = width, height = 700, 500 #Set the size as a tuple with width and height
screen = pygame.display.set_mode(size) #Sets the screen display size
pygame.display.set_caption('Python Bots') #Sets window Title
pygame.display.flip() #Refreshes Screen
MainMenu(screen) #Sample function Call
#You will notice the initial screen gets passed to every function
#This is necessary unless you define the screen outside a function to begin with.
def MainMenu(screen):
menuimage = pygame.image.load('menuImages/TeamEditor.jpg') #Load in an image
screen.blit(menuimage,(0,0)) #Place your image
pygame.display.flip() #Refreshes Screen
while 1 :
#Pygame.event.get() takes in input from the user
#It records keyboard input and mouse clicks on the window
for event in pygame.event.get():
#If the user hits the X button on the window
if event.type == pygame.QUIT:
Exit(screen)
mouseButtons = pygame.mouse.get_pressed()
if mouseButtons == (1,0,0):
mouseX,mouseY = pygame.mouse.get_pos()
if (mouseX>76 and mouseX<255 and mouseY>160 and mouseY<209):
NewTeam(screen)
if (mouseX>77 and mouseX<254 and mouseY>221 and mouseY<258):
EditTeam(screen)
if (mouseX>77 and mouseX<254 and mouseY>260 and mouseY<302):
CopyTeam(screen)
if (mouseX>76 and mouseX<254 and mouseY>309 and mouseY<350):
Return(screen)
screen.blit(menuimage,(0,0))
pygame.display.flip() #Refreshes Screen
def NewTeam(screen):
botSelect(screen,"new")
def EditTeam(screen):
teamSelect(screen,"edit")
def CopyTeam(screen):
teamSelect(screen,"copy")
def parseForColour(filename):
roboFile = open(filename)
foundcolour = False
for line in roboFile:
if (foundcolour):
newline = line.strip()
newline = newline.lstrip("return")
newline = newline.strip()
newtuple = tuple(map(int,newline[1:-1].split(',')))
return newtuple
if line.startswith("def colour"):
foundcolour = True
return (0,0,0)
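# Example of the snippet this parser looks for inside a robot .py file
# (illustrative only): a "def colour" line followed by a "return (r, g, b)" line.
#
#   def colour(self):
#       return (195, 29, 29)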
def parseForRobots(filename):
roboFile = file("teams/"+ filename)
foundcolour = False
bots = []
for line in roboFile:
if (foundcolour):
newline = line.strip()
newline = newline.lstrip("return")
bots = newline.strip()
return bots
if line.startswith("def robots"):
foundcolour = True
return []
def botSelect(screen,action):
menuimage = pygame.image.load('menuImages/BotSelected.jpg') #Load in an image
robotSelectedImage = pygame.image.load('menuImages/botSelected.png') #Load in an image
robotTargettedImage = pygame.image.load('menuImages/botTargetted.png') #Load in an image
allimage = pygame.image.load('menuImages/botImageLarge.png') #Load in an image
randomimage = pygame.image.load('menuImages/botImageLarge.png') #Load in an image
hideimage = pygame.image.load('menuImages/startBattleHider.jpg') #Load in an image
screen.blit(menuimage,(0,0)) #Place your image
screen.blit(hideimage,(541,452)) #Place your image
pygame.display.flip() #Refreshes Screen
Robots = []
SelectedRobots = []
x = 15
y= 273
for filename in os.listdir("robots"): #Code for displaying robot .py files
if filename[-3:] == ".py" and filename != ".svn":
#Name
robotName = pygame.font.RobotNameFont.render(str(filename[:-3]), True, (255,255, 255))
colour = parseForColour("robots/"+filename)
Robots.append([filename,robotName,x,y,False,colour,False])
x = x+42
if x>658:
x = 18
y = y+41
Robots[0][6]=True
current = 0
while 1 :
#Pygame.event.get() takes in input from the user
#It records keyboard input and mouse clicks on the window
for event in pygame.event.get():
#If the user hits the X button on the window
if event.type == pygame.QUIT:
Exit(screen)
if event.type == pygame.KEYDOWN:
if event.key == pygame.K_LEFT:
Robots[current][6] = False
current = current-1
if current<0:
current = len(Robots)-1
Robots[current][6] = True
if event.key == pygame.K_RIGHT:
Robots[current][6] = False
current = current+1
if current>len(Robots)-1:
current = 0
Robots[current][6] = True
if event.key == pygame.K_UP:
Robots[current][6] = False
current = current-16
if current<0:
current = (int((len(Robots)-1)/16)*16)+(current-(int((current/16))*16))
if current> (len(Robots)-1):
current = len(Robots)-1
Robots[current][6] = True
if event.key == pygame.K_DOWN:
Robots[current][6] = False
current = current+16
if current>len(Robots)-1:
current = 0+(current-(int((current/16))*16))
Robots[current][6] = True
if event.key == pygame.K_BACKSPACE:#Back
ScreenSetUp()
if event.key == pygame.K_RETURN:#Start Editor
StartEditor(screen,Robots[current])
mouseButtons = pygame.mouse.get_pressed()
if mouseButtons == (1,0,0):
mouseX,mouseY = pygame.mouse.get_pos()
if (mouseX>11 and mouseX<155 and mouseY>452 and mouseY<482):#Back Button
ScreenSetUp()
counter=0
for robot in Robots:
if (mouseX>robot[2] and mouseX<(robot[2]+39) and mouseY>robot[3] and mouseY<(robot[3]+39)):
if robot[6]:
StartEditor(screen,Robots[current])
else:
for trobot in Robots:
if trobot[6]:
trobot[6]=False
robot[6]=True
current = counter
counter = counter+1
screen.blit(menuimage,(0,0))
screen.blit(hideimage,(541,452)) #Place your image
for robot in Robots:
if robot[6]:
target = robot
largeimage = pygame.image.load('menuImages/botImageLarge.png') #Load in an image
changedColor = pygame.PixelArray(largeimage)
changedColor.replace((195,29,29),target[5],0.11,(0.299,0.587,0.114))
robotImage = changedColor.make_surface()
screen.blit(robotImage,(23,63)) #Place your image
screen.blit(target[1],(223,63)) #Place your image
counter = 0
scounter = 0
for robot in Robots:
if robot[0]!="all" and robot[0]!="random":
smallimage = pygame.image.load('menuImages/botImage.png') #Load in an image
changedColor = pygame.PixelArray(smallimage)
changedColor.replace((195,29,29),robot[5],0.1,(0.299,0.587,0.114))
robotImage = changedColor.make_surface()
screen.blit(robotImage,(robot[2],robot[3])) #Place your image
if robot[4]:
screen.blit(robotSelectedImage,(robot[2]-1,robot[3]-1)) #Place your image
screen.blit(robotTargettedImage,(target[2]-1,target[3]-1)) #Place your image
pygame.display.flip() #Refreshes Screen
def Return(screen):
import setup
setup.ScreenSetUp()
Exit(screen)
def StartEditor(screen,robot):
roboname = ""
roboname = robot[0] #this code can be hashed back in to provide an argument
#the following code will only work on unix systems,
#we will need a switch case for windows
os.system("python2.6 RobotEditor.py "+roboname+" &")
print "Start the Editor"
def Exit(screen):
#Exit the system
pygame.quit()
sys.exit()
def teamSelect(screen,action):
menuimage = pygame.image.load('menuImages/BotSelected.jpg') #Load in an image
robotSelectedImage = pygame.image.load('menuImages/botSelected.png') #Load in an image
robotTargettedImage = pygame.image.load('menuImages/botTargetted.png') #Load in an image
allimage = pygame.image.load('menuImages/botImageLarge.png') #Load in an image
randomimage = pygame.image.load('menuImages/botImageLarge.png') #Load in an image
hideimage = pygame.image.load('menuImages/startBattleHider.jpg') #Load in an image
screen.blit(menuimage,(0,0))
screen.blit(hideimage,(541,452))#Place your image
pygame.display.flip() #Refreshes Screen
Robots = []
SelectedRobots = []
x = 15
y= 273
bots = []
for filename in os.listdir("teams"): #Code for displaying robot .py files
if filename[-3:] == ".py" and filename != ".svn":
#Name
robotName = pygame.font.RobotNameFont.render(str(filename[:-3]), True, (255,255, 255))
colour = parseForColour("teams/"+filename)
bots = parseForRobots(filename)
Robots.append([filename,robotName,x,y,False,colour,False,bots])
x = x+42
if x>658:
x = 18
y = y+41
Robots[0][6]=True
current = 0
while 1 :
#Pygame.event.get() takes in input from the user
#It records keyboard input and mouse clicks on the window
for event in pygame.event.get():
#If the user hits the X button on the window
if event.type == pygame.QUIT:
Exit(screen)
if event.type == pygame.KEYDOWN:
if event.key == pygame.K_LEFT:
Robots[current][6] = False
current = current-1
if current<0:
current = len(Robots)-1
Robots[current][6] = True
if event.key == pygame.K_RIGHT:
Robots[current][6] = False
current = current+1
if current>len(Robots)-1:
current = 0
Robots[current][6] = True
if event.key == pygame.K_UP:
Robots[current][6] = False
current = current-16
if current<0:
current = (int((len(Robots)-1)/16)*16)+(current-(int((current/16))*16))
if current> (len(Robots)-1):
current = len(Robots)-1
Robots[current][6] = True
if event.key == pygame.K_DOWN:
Robots[current][6] = False
current = current+16
if current>len(Robots)-1:
current = 0+(current-(int((current/16))*16))
Robots[current][6] = True
if event.key == pygame.K_SPACE:#Select
if Robots[current][4]:
Robots[current][4] = False
else:
Robots[current][4] = True
if event.key == pygame.K_BACKSPACE:#Back
ScreenSetUp()
if event.key == pygame.K_RETURN:#Start Battle
for robot in Robots:
if robot[4]:
SelectedRobots.append(robot)
if len(SelectedRobots)>1:
if BattleType=="Team":
TeamStartGame(screen,BattleType,Arena,SelectedRobots)
elif BattleType=="Dodgeball":
DodgeballStartGame(screen,BattleType,Arena,SelectedRobots)
else:
SelectedRobots = []
mouseButtons = pygame.mouse.get_pressed()
if mouseButtons == (1,0,0):
mouseX,mouseY = pygame.mouse.get_pos()
if (mouseX>11 and mouseX<155 and mouseY>452 and mouseY<482):#Back Button
ScreenSetUp()
if (mouseX>541 and mouseX<685 and mouseY>454 and mouseY<483):#Start Battle
for robot in Robots:
if robot[4]:
SelectedRobots.append(robot)
if len(SelectedRobots)>1:
if BattleType=="Team":
TeamStartGame(screen,BattleType,Arena,SelectedRobots)
elif BattleType=="Dodgeball":
DodgeballStartGame(screen,BattleType,Arena,SelectedRobots)
else:
SelectedRobots = []
counter = 0
for robot in Robots:
if (mouseX>robot[2] and mouseX<(robot[2]+39) and mouseY>robot[3] and mouseY<(robot[3]+39)):
if robot[6]:
if robot[0]!="all" and robot[0]!="random" and robot[0]!="new":
if robot[4]:
robot[4] = False
else:
robot[4] = True
else:
for trobot in Robots:
if trobot[6]:
trobot[6]=False
current = counter
robot[6]=True
counter = counter+1
screen.blit(menuimage,(0,0))
screen.blit(hideimage,(541,452))
for robot in Robots:
if robot[6]:
target = robot
if target[0]=="all":
screen.blit(allimage,(23,63)) #Place your image
elif target[0]=="random":
screen.blit(randomimage,(23,63)) #Place your image
elif target[0]=="new":
screen.blit(randomimage,(23,63)) #Place your image
else:
largeimage = pygame.image.load('menuImages/botImageLarge.png') #Load in an image
changedColor = pygame.PixelArray(largeimage)
changedColor.replace((195,29,29),target[5],0.11,(0.299,0.587,0.114))
robotImage = changedColor.make_surface()
screen.blit(robotImage,(23,63)) #Place your image
screen.blit(target[1],(223,63)) #Place your image
counter = 0
scounter = 0
for robot in Robots:
if robot[0]!="all" and robot[0]!="random" and robot[0]!="new":
smallimage = pygame.image.load('menuImages/botImage.png') #Load in an image
changedColor = pygame.PixelArray(smallimage)
changedColor.replace((195,29,29),robot[5],0.1,(0.299,0.587,0.114))
robotImage = changedColor.make_surface()
screen.blit(robotImage,(robot[2],robot[3])) #Place your image
if robot[4]:
screen.blit(robotSelectedImage,(robot[2]-1,robot[3]-1)) #Place your image
screen.blit(robotTargettedImage,(target[2]-1,target[3]-1)) #Place your image
pygame.display.flip() #Refreshes Screen
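# Illustrative entry point (assumption: this module is normally launched from
# the game's setup menu, as suggested by Return() above):
#
#   if __name__ == "__main__":
#       ScreenSetUp()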
| 42.839474 | 111 | 0.525892 | 1,787 | 16,279 | 4.776721 | 0.144936 | 0.026945 | 0.029522 | 0.049789 | 0.785848 | 0.765581 | 0.754335 | 0.754335 | 0.736762 | 0.719892 | 0 | 0.046019 | 0.367283 | 16,279 | 379 | 112 | 42.952507 | 0.782718 | 0.105658 | 0 | 0.772871 | 0 | 0 | 0.048125 | 0.031692 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.006309 | null | null | 0.003155 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
f35cc5189652de73b9019212c5150924653def28 | 2,647 | py | Python | pynos/versions/ver_6/ver_6_0_1/yang/brocade_ha.py | bdeetz/pynos | bd8a34e98f322de3fc06750827d8bbc3a0c00380 | [
"Apache-2.0"
] | 12 | 2015-09-21T23:56:09.000Z | 2018-03-30T04:35:32.000Z | pynos/versions/ver_6/ver_6_0_1/yang/brocade_ha.py | bdeetz/pynos | bd8a34e98f322de3fc06750827d8bbc3a0c00380 | [
"Apache-2.0"
] | 10 | 2016-09-15T19:03:27.000Z | 2017-07-17T23:38:01.000Z | pynos/versions/ver_6/ver_6_0_1/yang/brocade_ha.py | bdeetz/pynos | bd8a34e98f322de3fc06750827d8bbc3a0c00380 | [
"Apache-2.0"
] | 6 | 2015-08-14T08:05:23.000Z | 2022-02-03T15:33:54.000Z | #!/usr/bin/env python
import xml.etree.ElementTree as ET


class brocade_ha(object):
    """Auto generated class.
    """

    def __init__(self, **kwargs):
        self._callback = kwargs.pop('callback')

    def reload_input_rbridge_id(self, **kwargs):
        """Auto Generated Code
        """
        config = ET.Element("config")
        reload = ET.Element("reload")
        config = reload
        input = ET.SubElement(reload, "input")
        rbridge_id = ET.SubElement(input, "rbridge-id")
        rbridge_id.text = kwargs.pop('rbridge_id')
        callback = kwargs.pop('callback', self._callback)
        return callback(config)

    def reload_input_system(self, **kwargs):
        """Auto Generated Code
        """
        config = ET.Element("config")
        reload = ET.Element("reload")
        config = reload
        input = ET.SubElement(reload, "input")
        system = ET.SubElement(input, "system")
        callback = kwargs.pop('callback', self._callback)
        return callback(config)

    def reload_input_standby(self, **kwargs):
        """Auto Generated Code
        """
        config = ET.Element("config")
        reload = ET.Element("reload")
        config = reload
        input = ET.SubElement(reload, "input")
        standby = ET.SubElement(input, "standby")
        callback = kwargs.pop('callback', self._callback)
        return callback(config)

    def reload_input_rbridge_id(self, **kwargs):
        """Auto Generated Code
        """
        config = ET.Element("config")
        reload = ET.Element("reload")
        config = reload
        input = ET.SubElement(reload, "input")
        rbridge_id = ET.SubElement(input, "rbridge-id")
        rbridge_id.text = kwargs.pop('rbridge_id')
        callback = kwargs.pop('callback', self._callback)
        return callback(config)

    def reload_input_system(self, **kwargs):
        """Auto Generated Code
        """
        config = ET.Element("config")
        reload = ET.Element("reload")
        config = reload
        input = ET.SubElement(reload, "input")
        system = ET.SubElement(input, "system")
        callback = kwargs.pop('callback', self._callback)
        return callback(config)

    def reload_input_standby(self, **kwargs):
        """Auto Generated Code
        """
        config = ET.Element("config")
        reload = ET.Element("reload")
        config = reload
        input = ET.SubElement(reload, "input")
        standby = ET.SubElement(input, "standby")
        callback = kwargs.pop('callback', self._callback)
        return callback(config)
| 31.141176 | 57 | 0.58028 | 274 | 2,647 | 5.489051 | 0.124088 | 0.131649 | 0.079122 | 0.116356 | 0.914894 | 0.914894 | 0.914894 | 0.914894 | 0.914894 | 0.914894 | 0 | 0 | 0.291273 | 2,647 | 85 | 58 | 31.141176 | 0.801706 | 0.083491 | 0 | 0.925926 | 1 | 0 | 0.093685 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.12963 | false | 0 | 0.018519 | 0 | 0.277778 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
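The brocade_ha class above (including its three reload_* methods, which the code generator happens to emit twice, so the later identical definitions simply rebind the earlier ones) only builds an ElementTree config fragment and hands it to whatever callback it was constructed with. A minimal usage sketch, assuming the module is importable from the repository path shown in this row and substituting a callback that serializes the element instead of sending it over NETCONF:

import xml.etree.ElementTree as ET
from pynos.versions.ver_6.ver_6_0_1.yang.brocade_ha import brocade_ha  # path from this row

def show_config(config):
    # Stand-in for pynos' real transport callback: just serialize the element.
    return ET.tostring(config)

ha = brocade_ha(callback=show_config)
print(ha.reload_input_rbridge_id(rbridge_id="1"))
# -> '<reload><input><rbridge-id>1</rbridge-id></input></reload>' (bytes on Python 3)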
f3b795b3a3a5fff12fb874645ed25688ceee7d26 | 1,707 | py | Python | tests/integration/follow/test_follow.py | LeaveMyYard/Hillelgram | 4ba5131d84477ce5fb1479b4de2c2b2a1f09f8fd | [
"MIT"
] | null | null | null | tests/integration/follow/test_follow.py | LeaveMyYard/Hillelgram | 4ba5131d84477ce5fb1479b4de2c2b2a1f09f8fd | [
"MIT"
] | 2 | 2021-11-27T10:54:55.000Z | 2021-11-27T12:57:02.000Z | tests/integration/follow/test_follow.py | LeaveMyYard/Hillelgram | 4ba5131d84477ce5fb1479b4de2c2b2a1f09f8fd | [
"MIT"
] | null | null | null | from flask.testing import FlaskClient
def test_follow_normal(client: FlaskClient) -> None:
    test_login = "TestUser"
    test_password = "hiadihdai"
    client.post(
        "/api/user",
        json={"login": test_login, "password": test_password},
    )
    client.post(
        "/api/user",
        json={"login": f"{test_login}1", "password": f"{test_password}1"},
    )
    response = client.post(
        f"/api/user/{test_login}1/follow",
        auth=(
            test_login,
            test_password,
        ),
    )
    assert response.status_code == 200, response.data


def test_follow_self(client: FlaskClient) -> None:
    test_login = "TestUser"
    test_password = "hiadihdai"
    client.post(
        "/api/user",
        json={"login": test_login, "password": test_password},
    )
    response = client.post(
        f"/api/user/{test_login}/follow",
        auth=(
            test_login,
            test_password,
        ),
    )
    assert response.status_code == 403, response.data


def test_follow_twice(client: FlaskClient) -> None:
    test_login = "TestUser"
    test_password = "hiadihdai"
    client.post(
        "/api/user",
        json={"login": test_login, "password": test_password},
    )
    client.post(
        "/api/user",
        json={"login": f"{test_login}1", "password": f"{test_password}1"},
    )
    client.post(
        f"/api/user/{test_login}1/follow",
        auth=(
            test_login,
            test_password,
        ),
    )
    response = client.post(
        f"/api/user/{test_login}1/follow",
        auth=(
            test_login,
            test_password,
        ),
    )
    assert response.status_code == 409, response.data
| 24.385714 | 74 | 0.56239 | 186 | 1,707 | 4.962366 | 0.172043 | 0.156013 | 0.070423 | 0.092091 | 0.911159 | 0.856988 | 0.856988 | 0.856988 | 0.856988 | 0.856988 | 0 | 0.013356 | 0.298184 | 1,707 | 69 | 75 | 24.73913 | 0.757095 | 0 | 0 | 0.721311 | 0 | 0 | 0.198008 | 0.069713 | 0 | 0 | 0 | 0 | 0.04918 | 1 | 0.04918 | false | 0.196721 | 0.016393 | 0 | 0.065574 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 8 |
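The follow tests above depend on a client fixture typed as FlaskClient, which is not part of this row. A plausible conftest-style sketch of such a fixture, assuming the project exposes an application factory named create_app (that name and its import path are assumptions; the real Hillelgram conftest may differ). The tests also pass auth= to client.post, which relies on a test client recent enough to accept basic-auth credentials directly.

# Hypothetical conftest.py sketch; create_app and its import path are assumed.
import pytest
from flask.testing import FlaskClient

@pytest.fixture
def client():
    from app import create_app  # assumed application factory
    app = create_app()
    app.config["TESTING"] = True
    with app.test_client() as test_client:
        yield test_client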
cae67724a651c3fe4cb3716389bdd3fc558ccbf3 | 114 | py | Python | prices.py | zenux287/cyheck-2 | b5b54c10f697324a84505142cd6454a186835629 | [
"MIT"
] | null | null | null | prices.py | zenux287/cyheck-2 | b5b54c10f697324a84505142cd6454a186835629 | [
"MIT"
] | null | null | null | prices.py | zenux287/cyheck-2 | b5b54c10f697324a84505142cd6454a186835629 | [
"MIT"
] | null | null | null | from moneywagon import get_current_price
def get_price(cc, c):
    # Stores the looked-up price on the function object; callers read get_price.result after the call.
    get_price.result = get_current_price(cc, c)
| 22.8 | 46 | 0.763158 | 19 | 114 | 4.263158 | 0.526316 | 0.246914 | 0.37037 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.157895 | 114 | 4 | 47 | 28.5 | 0.84375 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
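get_price above never returns the looked-up price; it stashes it on the function object as get_price.result, so callers read the attribute after the call. A self-contained sketch of that pattern, using a stub lookup in place of moneywagon.get_current_price so it runs without network access:

def stub_lookup(cc, c):
    # Stand-in for moneywagon.get_current_price(); returns a fixed number.
    return 42.0

def get_price(cc, c):
    # Same pattern as prices.py: store the result on the function object.
    get_price.result = stub_lookup(cc, c)

get_price("btc", "usd")
print(get_price.result)  # -> 42.0

Returning the value directly would be the more conventional choice; the attribute approach works but is not thread-safe and hides the data flow.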
59f169c9208c9fbfe7dec790f6f7dd245b38276c | 1,288 | py | Python | tests/test_sync.py | DiamondLightSource/ispyb-datasync | f238b0becd79a9199f2f015262824ef1b2f3fd6c | [
"Apache-2.0"
] | 1 | 2018-02-16T23:19:33.000Z | 2018-02-16T23:19:33.000Z | tests/test_sync.py | DiamondLightSource/ispyb-propagation | f238b0becd79a9199f2f015262824ef1b2f3fd6c | [
"Apache-2.0"
] | null | null | null | tests/test_sync.py | DiamondLightSource/ispyb-propagation | f238b0becd79a9199f2f015262824ef1b2f3fd6c | [
"Apache-2.0"
] | 1 | 2019-06-27T16:45:06.000Z | 2019-06-27T16:45:06.000Z | import context
import datasync
def test_sync_proposals(testconfig):
    with datasync.open(conf_file = testconfig, source='dummyuas', target='ispyb') as ds:
        ds.sync_proposals()
    with datasync.open(conf_file = testconfig, source='dummyuas', target='ispyb') as ds:
        ds.sync_proposals()


def test_sync_proposals_have_persons(testconfig):
    with datasync.open(conf_file = testconfig, source='dummyuas', target='ispyb') as ds:
        ds.sync_proposals_have_persons()


def test_sync_sessions(testconfig):
    with datasync.open(conf_file = testconfig, source='dummyuas', target='ispyb') as ds:
        ds.sync_sessions()


def test_sync_persons(testconfig):
    with datasync.open(conf_file = testconfig, source='dummyuas', target='ispyb') as ds:
        ds.sync_persons()


def test_sync_session_types(testconfig):
    with datasync.open(conf_file = testconfig, source='dummyuas', target='ispyb') as ds:
        ds.sync_session_types()


def test_sync_sessions_have_persons(testconfig):
    with datasync.open(conf_file = testconfig, source='dummyuas', target='ispyb') as ds:
        ds.sync_sessions_have_persons()


def test_sync_components(testconfig):
    with datasync.open(conf_file = testconfig, source='dummyuas', target='ispyb') as ds:
        ds.sync_components()
| 37.882353 | 88 | 0.733696 | 169 | 1,288 | 5.35503 | 0.142012 | 0.106077 | 0.141436 | 0.176796 | 0.81547 | 0.766851 | 0.766851 | 0.766851 | 0.766851 | 0.766851 | 0 | 0 | 0.150621 | 1,288 | 33 | 89 | 39.030303 | 0.827239 | 0 | 0 | 0.4 | 0 | 0 | 0.080745 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.28 | false | 0 | 0.08 | 0 | 0.36 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
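Each test above drives the synchronisation through a context manager returned by datasync.open(...). The real ispyb-datasync implementation is not included in this row; a hypothetical minimal shape for such a context manager (all names and behaviour here are assumptions, not the project's actual code) could look like:

class _Sync(object):
    # Hypothetical stand-in for the object yielded by datasync.open().
    def __init__(self, conf_file, source, target):
        self.conf_file, self.source, self.target = conf_file, source, target

    def __enter__(self):
        # A real implementation would open source/target connections here.
        return self

    def __exit__(self, exc_type, exc, tb):
        # ... and close them (committing or rolling back) here.
        return False

    def sync_proposals(self):
        print("syncing proposals from %s to %s" % (self.source, self.target))

def open(conf_file, source, target):  # mirrors the datasync.open() signature used above
    return _Sync(conf_file, source, target)

with open(conf_file="test.cfg", source="dummyuas", target="ispyb") as ds:
    ds.sync_proposals()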
94733f933eddc3a74869c108d94ab1692bd7b8ea | 110,779 | py | Python | calendarserver/tools/test/test_calverify.py | backwardn/ccs-calendarserver | 13c706b985fb728b9aab42dc0fef85aae21921c3 | [
"Apache-2.0"
] | 462 | 2016-08-14T17:43:24.000Z | 2022-03-17T07:38:16.000Z | calendarserver/tools/test/test_calverify.py | backwardn/ccs-calendarserver | 13c706b985fb728b9aab42dc0fef85aae21921c3 | [
"Apache-2.0"
] | 72 | 2016-09-01T23:19:35.000Z | 2020-02-05T02:09:26.000Z | calendarserver/tools/test/test_calverify.py | backwardn/ccs-calendarserver | 13c706b985fb728b9aab42dc0fef85aae21921c3 | [
"Apache-2.0"
] | 171 | 2016-08-16T03:50:30.000Z | 2022-03-26T11:49:55.000Z | ##
# Copyright (c) 2012-2017 Apple Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
##
from __future__ import print_function
from twext.enterprise.dal.syntax import Update
from twext.enterprise.jobs.jobitem import JobItem
from txdav.common.datastore.sql_tables import schema
"""
Tests for calendarserver.tools.calverify
"""
from calendarserver.tools.calverify import BadDataService, \
SchedulingMismatchService, DoubleBookingService, DarkPurgeService, \
EventSplitService, MissingLocationService
from pycalendar.datetime import DateTime
from twisted.internet import reactor
from twisted.internet.defer import inlineCallbacks
from twistedcaldav.config import config
from twistedcaldav.ical import normalize_iCalStr
from twistedcaldav.test.util import StoreTestCase
from txdav.common.datastore.test.util import populateCalendarsFrom
from StringIO import StringIO
OK_ICS = """BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Apple Inc.//iCal 4.0.1//EN
CALSCALE:GREGORIAN
BEGIN:VEVENT
CREATED:20100303T181216Z
UID:OK
DTEND:20100307T151500Z
TRANSP:OPAQUE
SUMMARY:Ancient event
DTSTART:20100307T111500Z
DTSTAMP:20100303T181220Z
SEQUENCE:2
END:VEVENT
END:VCALENDAR
""".replace("\n", "\r\n")
# Missing DTSTAMP
BAD1_ICS = """BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Apple Inc.//iCal 4.0.1//EN
CALSCALE:GREGORIAN
BEGIN:VEVENT
CREATED:20100303T181216Z
UID:BAD1
DTEND:20100307T151500Z
TRANSP:OPAQUE
SUMMARY:Ancient event
DTSTART:20100307T111500Z
SEQUENCE:2
END:VEVENT
END:VCALENDAR
""".replace("\n", "\r\n")
# Bad recurrence
BAD2_ICS = """BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Apple Inc.//iCal 4.0.1//EN
CALSCALE:GREGORIAN
BEGIN:VEVENT
CREATED:20100303T181216Z
UID:BAD2
DTEND:20100307T151500Z
TRANSP:OPAQUE
SUMMARY:Ancient event
DTSTART:20100307T111500Z
DTSTAMP:20100303T181220Z
RRULE:FREQ=DAILY;COUNT=3
SEQUENCE:2
END:VEVENT
BEGIN:VEVENT
CREATED:20100303T181216Z
UID:BAD2
RECURRENCE-ID:20100307T120000Z
DTEND:20100307T151500Z
TRANSP:OPAQUE
SUMMARY:Ancient event
DTSTART:20100307T111500Z
DTSTAMP:20100303T181220Z
SEQUENCE:2
END:VEVENT
END:VCALENDAR
""".replace("\n", "\r\n")
# Bad recurrence
BAD3_ICS = """BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Apple Inc.//iCal 4.0.1//EN
CALSCALE:GREGORIAN
BEGIN:VEVENT
CREATED:20100303T181216Z
UID:BAD2
DTEND:20100307T151500Z
TRANSP:OPAQUE
SUMMARY:Ancient event
DTSTART:20100307T111500Z
DTSTAMP:20100303T181220Z
RRULE:FREQ=DAILY;COUNT=3
SEQUENCE:2
END:VEVENT
BEGIN:VEVENT
CREATED:20100303T181216Z
UID:BAD2
RECURRENCE-ID:20100307T120000Z
DTEND:20100307T151500Z
TRANSP:OPAQUE
SUMMARY:Ancient event
DTSTART:20100307T111500Z
DTSTAMP:20100303T181220Z
SEQUENCE:2
END:VEVENT
END:VCALENDAR
""".replace("\n", "\r\n")
# Missing Organizer
BAD3_ICS = """BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Apple Inc.//iCal 4.0.1//EN
CALSCALE:GREGORIAN
BEGIN:VEVENT
CREATED:20100303T181216Z
UID:BAD3
DTEND:20100307T151500Z
TRANSP:OPAQUE
SUMMARY:Ancient event
DTSTART:20100307T111500Z
DTSTAMP:20100303T181220Z
RRULE:FREQ=DAILY;COUNT=3
SEQUENCE:2
END:VEVENT
BEGIN:VEVENT
CREATED:20100303T181216Z
UID:BAD3
RECURRENCE-ID:20100307T111500Z
DTEND:20100307T151500Z
TRANSP:OPAQUE
SUMMARY:Ancient event
DTSTART:20100307T111500Z
DTSTAMP:20100303T181220Z
SEQUENCE:2
ORGANIZER:urn:x-uid:47B16BB4-DB5F-4BF6-85FE-A7DA54230F92
ATTENDEE:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0
ATTENDEE:urn:x-uid:47B16BB4-DB5F-4BF6-85FE-A7DA54230F92
END:VEVENT
END:VCALENDAR
""".replace("\n", "\r\n")
# https Organizer
BAD4_ICS = """BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Apple Inc.//iCal 4.0.1//EN
CALSCALE:GREGORIAN
BEGIN:VEVENT
CREATED:20100303T181216Z
UID:BAD4
DTEND:20100307T151500Z
TRANSP:OPAQUE
SUMMARY:Ancient event
DTSTART:20100307T111500Z
DTSTAMP:20100303T181220Z
ORGANIZER:http://demo.com:8008/principals/__uids__/D46F3D71-04B7-43C2-A7B6-6F92F92E61D0
ATTENDEE:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0
ATTENDEE:urn:x-uid:47B16BB4-DB5F-4BF6-85FE-A7DA54230F92
SEQUENCE:2
END:VEVENT
END:VCALENDAR
""".replace("\n", "\r\n")
# https Attendee
BAD5_ICS = """BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Apple Inc.//iCal 4.0.1//EN
CALSCALE:GREGORIAN
BEGIN:VEVENT
CREATED:20100303T181216Z
UID:BAD5
DTEND:20100307T151500Z
TRANSP:OPAQUE
SUMMARY:Ancient event
DTSTART:20100307T111500Z
DTSTAMP:20100303T181220Z
ORGANIZER:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0
ATTENDEE:http://demo.com:8008/principals/__uids__/D46F3D71-04B7-43C2-A7B6-6F92F92E61D0
ATTENDEE:urn:x-uid:47B16BB4-DB5F-4BF6-85FE-A7DA54230F92
SEQUENCE:2
END:VEVENT
END:VCALENDAR
""".replace("\n", "\r\n")
# https Organizer and Attendee
BAD6_ICS = """BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Apple Inc.//iCal 4.0.1//EN
CALSCALE:GREGORIAN
BEGIN:VEVENT
CREATED:20100303T181216Z
UID:BAD6
DTEND:20100307T151500Z
TRANSP:OPAQUE
SUMMARY:Ancient event
DTSTART:20100307T111500Z
DTSTAMP:20100303T181220Z
ORGANIZER:http://demo.com:8008/principals/__uids__/D46F3D71-04B7-43C2-A7B6-6F92F92E61D0
ATTENDEE:http://demo.com:8008/principals/__uids__/D46F3D71-04B7-43C2-A7B6-6F92F92E61D0
ATTENDEE:urn:x-uid:47B16BB4-DB5F-4BF6-85FE-A7DA54230F92
SEQUENCE:2
END:VEVENT
END:VCALENDAR
""".replace("\n", "\r\n")
# Base64 Organizer and Attendee parameter
OK8_ICS = """BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Apple Inc.//iCal 4.0.1//EN
CALSCALE:GREGORIAN
BEGIN:VEVENT
CREATED:20100303T181216Z
UID:OK8
DTEND:20100307T151500Z
TRANSP:OPAQUE
SUMMARY:Ancient event
DTSTART:20100307T111500Z
DTSTAMP:20100303T181220Z
ORGANIZER;CALENDARSERVER-OLD-CUA="base64-aHR0cDovL2RlbW8uY29tOjgwMDgvcHJpbm
NpcGFscy9fX3VpZHNfXy9ENDZGM0Q3MS0wNEI3LTQzQzItQTdCNi02RjkyRjkyRTYxRDA=":
urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0
ATTENDEE;CALENDARSERVER-OLD-CUA="base64-aHR0cDovL2RlbW8uY29tOjgwMDgvcHJpbmN
pcGFscy9fX3VpZHNfXy9ENDZGM0Q3MS0wNEI3LTQzQzItQTdCNi02RjkyRjkyRTYxRDA=":u
rn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0
ATTENDEE:urn:x-uid:47B16BB4-DB5F-4BF6-85FE-A7DA54230F92
SEQUENCE:2
END:VEVENT
END:VCALENDAR
""".replace("\n", "\r\n")
# Non-mailto: Organizer
BAD10_ICS = """BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Apple Inc.//iCal 4.0.1//EN
CALSCALE:GREGORIAN
BEGIN:VEVENT
CREATED:20100303T181216Z
UID:BAD10
DTEND:20100307T151500Z
TRANSP:OPAQUE
SUMMARY:Ancient event
DTSTART:20100307T111500Z
DTSTAMP:20100303T181220Z
ORGANIZER;CN=Example User1;SCHEDULE-AGENT=NONE:example1@example.com
ATTENDEE;CN=Example User1:example1@example.com
ATTENDEE;CN=Example User2:example2@example.com
ATTENDEE;CN=Example User3:/principals/users/example3
ATTENDEE;CN=Example User4:http://demo.com:8008/principals/users/example4
SEQUENCE:2
END:VEVENT
END:VCALENDAR
""".replace("\n", "\r\n")
# Bad recurrence EXDATE
BAD11_ICS = """BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Apple Inc.//iCal 4.0.1//EN
CALSCALE:GREGORIAN
BEGIN:VEVENT
CREATED:20100303T181216Z
UID:BAD11
DTEND:20100307T151500Z
TRANSP:OPAQUE
SUMMARY:Ancient event
DTSTART:20100307T111500Z
DTSTAMP:20100303T181220Z
EXDATE:20100314T111500Z
RRULE:FREQ=WEEKLY
SEQUENCE:2
END:VEVENT
BEGIN:VEVENT
CREATED:20100303T181216Z
UID:BAD11
RECURRENCE-ID:20100314T111500Z
DTEND:20100314T151500Z
TRANSP:OPAQUE
SUMMARY:Ancient event
DTSTART:20100314T111500Z
DTSTAMP:20100303T181220Z
SEQUENCE:2
END:VEVENT
END:VCALENDAR
""".replace("\n", "\r\n")
BAD12_ICS = """BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Apple Inc.//iCal 4.0.1//EN
CALSCALE:GREGORIAN
BEGIN:VEVENT
CREATED:20100303T181216Z
UID:BAD12
DTEND:20100307T151500Z
TRANSP:OPAQUE
SUMMARY:Ancient event
DTSTART:20100307T111500Z
DTSTAMP:20100303T181220Z
ORGANIZER:mailto:example2@example.com
ATTENDEE:mailto:example1@example.com
ATTENDEE:mailto:example2@example.com
SEQUENCE:2
END:VEVENT
END:VCALENDAR
""".replace("\n", "\r\n")
BAD13_ICS = """BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Apple Inc.//iCal 4.0.1//EN
CALSCALE:GREGORIAN
BEGIN:VEVENT
CREATED:20100303T181216Z
UID:BAD13
DTEND:20100307T151500Z
TRANSP:OPAQUE
SUMMARY:Ancient event
DTSTART:20100307T111500Z
DTSTAMP:20100303T181220Z
ATTENDEE:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0
ATTENDEE:urn:x-uid:47B16BB4-DB5F-4BF6-85FE-A7DA54230F92
SEQUENCE:2
END:VEVENT
END:VCALENDAR
""".replace("\n", "\r\n")
BAD14_ICS = """BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Apple Inc.//iCal 4.0.1//EN
CALSCALE:GREGORIAN
BEGIN:VEVENT
CREATED:20100303T181216Z
UID:BAD14
DTEND:20100307T151500Z
TRANSP:OPAQUE
SUMMARY:Ancient event
DTSTART:20100307T111500Z
DTSTAMP:20100303T181220Z
SEQUENCE:2
GEO:40.1;40.1
END:VEVENT
END:VCALENDAR
""".replace("\n", "\r\n")
CORRUPT14_ICS = """BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Apple Inc.//iCal 4.0.1//EN
CALSCALE:GREGORIAN
BEGIN:VEVENT
CREATED:20100303T181216Z
UID:BAD14
DTEND:20100307T151500Z
TRANSP:OPAQUE
SUMMARY:Ancient event
DTSTART:20100307T111500Z
DTSTAMP:20100303T181220Z
SEQUENCE:2
GEO:40.1
END:VEVENT
END:VCALENDAR
""".replace("\n", "\r\n")
class CalVerifyDataTests(StoreTestCase):
"""
Tests calverify for iCalendar data problems.
"""
metadata = {
"accessMode": "PUBLIC",
"isScheduleObject": True,
"scheduleTag": "abc",
"scheduleEtags": (),
"hasPrivateComment": False,
}
requirements = {
"home1": {
"calendar_1": {
"ok.ics": (OK_ICS, metadata,),
"bad1.ics": (BAD1_ICS, metadata,),
"bad2.ics": (BAD2_ICS, metadata,),
"bad3.ics": (BAD3_ICS, metadata,),
"bad4.ics": (BAD4_ICS, metadata,),
"bad5.ics": (BAD5_ICS, metadata,),
"bad6.ics": (BAD6_ICS, metadata,),
"ok8.ics": (OK8_ICS, metadata,),
"bad10.ics": (BAD10_ICS, metadata,),
"bad11.ics": (BAD11_ICS, metadata,),
"bad12.ics": (BAD12_ICS, metadata,),
"bad13.ics": (BAD13_ICS, metadata,),
"bad14.ics": (BAD14_ICS, metadata,),
}
},
}
number_to_process = len(requirements["home1"]["calendar_1"])
@inlineCallbacks
def populate(self):
# Need to bypass normal validation inside the store
yield populateCalendarsFrom(self.requirements, self.storeUnderTest())
# Have to manually write these into the database
populateTxn = self.storeUnderTest().newTransaction()
co = schema.CALENDAR_OBJECT
yield Update(
{co.ICALENDAR_TEXT: CORRUPT14_ICS},
Where=co.RESOURCE_NAME == "bad14.ics",
).on(populateTxn)
yield populateTxn.commit()
self.notifierFactory.reset()
def storeUnderTest(self):
"""
Create and return a L{CalendarStore} for testing.
"""
return self._sqlCalendarStore
def verifyResultsByUID(self, results, expected):
reported = set([(home, uid) for home, uid, _ignore_resid, _ignore_reason in results])
self.assertEqual(reported, expected)
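# Each entry in calverify.results["Bad iCalendar data"] is a
# (home uid, event UID, resource id, reason) tuple; verifyResultsByUID
# keeps only the (home, uid) pair before comparing against the expected set.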
@inlineCallbacks
def test_scanBadData(self):
"""
CalVerifyService.doScan without fix. Make sure it detects common errors.
Make sure sync-token is not changed.
"""
sync_token_old = (yield (yield self.calendarUnderTest()).syncToken())
yield self.commit()
options = {
"ical": True,
"fix": False,
"nobase64": False,
"verbose": False,
"uid": "",
"uuid": "",
"path": "",
"tzid": "",
}
output = StringIO()
calverify = BadDataService(self._sqlCalendarStore, options, output, reactor, config)
calverify.emailDomain = "example.com"
yield calverify.doAction()
self.assertEqual(calverify.results["Number of events to process"], self.number_to_process)
self.verifyResultsByUID(calverify.results["Bad iCalendar data"], set((
("home1", "BAD1",),
("home1", "BAD2",),
("home1", "BAD3",),
("home1", "BAD4",),
("home1", "BAD5",),
("home1", "BAD6",),
("home1", "BAD10",),
("home1", "BAD11",),
("home1", "BAD12",),
("home1", "BAD13",),
("home1", "BAD14",),
)))
sync_token_new = (yield (yield self.calendarUnderTest()).syncToken())
self.assertEqual(sync_token_old, sync_token_new)
@inlineCallbacks
def test_fixBadData(self):
"""
CalVerifyService.doScan with fix. Make sure it detects and fixes as much as it can.
Make sure sync-token is changed.
"""
sync_token_old = (yield (yield self.calendarUnderTest()).syncToken())
yield self.commit()
options = {
"ical": True,
"fix": True,
"nobase64": False,
"verbose": False,
"uid": "",
"uuid": "",
"path": "",
"tzid": "",
}
output = StringIO()
# Do fix
self.patch(config.Scheduling.Options, "PrincipalHostAliases", "demo.com")
self.patch(config, "HTTPPort", 8008)
calverify = BadDataService(self._sqlCalendarStore, options, output, reactor, config)
calverify.emailDomain = "example.com"
yield calverify.doAction()
self.assertEqual(calverify.results["Number of events to process"], self.number_to_process)
self.verifyResultsByUID(calverify.results["Bad iCalendar data"], set((
("home1", "BAD1",),
("home1", "BAD2",),
("home1", "BAD3",),
("home1", "BAD4",),
("home1", "BAD5",),
("home1", "BAD6",),
("home1", "BAD10",),
("home1", "BAD11",),
("home1", "BAD12",),
("home1", "BAD13",),
("home1", "BAD14",),
)))
# Do scan
options["fix"] = False
calverify = BadDataService(self._sqlCalendarStore, options, output, reactor, config)
calverify.emailDomain = "example.com"
yield calverify.doAction()
self.assertEqual(calverify.results["Number of events to process"], self.number_to_process)
self.verifyResultsByUID(calverify.results["Bad iCalendar data"], set((
("home1", "BAD1",),
("home1", "BAD13",),
)))
sync_token_new = (yield (yield self.calendarUnderTest()).syncToken())
self.assertNotEqual(sync_token_old, sync_token_new)
# Make sure mailto: fix results in urn:x-uid value without SCHEDULE-AGENT
obj = yield self.calendarObjectUnderTest(name="bad10.ics")
ical = yield obj.component()
org = ical.getOrganizerProperty()
self.assertEqual(org.value(), "urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0")
self.assertFalse(org.hasParameter("SCHEDULE-AGENT"))
for attendee in ical.getAllAttendeeProperties():
self.assertTrue(
attendee.value().startswith("urn:x-uid:") or
attendee.value().startswith("/principals")
)
@inlineCallbacks
def test_scanBadCuaOnly(self):
"""
CalVerifyService.doScan without fix for CALENDARSERVER-OLD-CUA only. Make sure it detects
and fixes as much as it can. Make sure sync-token is not changed.
"""
sync_token_old = (yield (yield self.calendarUnderTest()).syncToken())
yield self.commit()
options = {
"ical": False,
"fix": False,
"badcua": True,
"nobase64": False,
"verbose": False,
"uid": "",
"uuid": "",
"path": "",
"tzid": "",
}
output = StringIO()
calverify = BadDataService(self._sqlCalendarStore, options, output, reactor, config)
calverify.emailDomain = "example.com"
yield calverify.doAction()
self.assertEqual(calverify.results["Number of events to process"], self.number_to_process)
self.verifyResultsByUID(calverify.results["Bad iCalendar data"], set((
("home1", "BAD4",),
("home1", "BAD5",),
("home1", "BAD6",),
("home1", "BAD10",),
("home1", "BAD12",),
("home1", "BAD14",),
)))
sync_token_new = (yield (yield self.calendarUnderTest()).syncToken())
self.assertEqual(sync_token_old, sync_token_new)
@inlineCallbacks
def test_fixBadCuaOnly(self):
"""
CalVerifyService.doScan with fix for CALENDARSERVER-OLD-CUA only. Make sure it detects
and fixes as much as it can. Make sure sync-token is changed.
"""
sync_token_old = (yield (yield self.calendarUnderTest()).syncToken())
yield self.commit()
options = {
"ical": False,
"fix": True,
"badcua": True,
"nobase64": False,
"verbose": False,
"uid": "",
"uuid": "",
"path": "",
"tzid": "",
}
output = StringIO()
# Do fix
self.patch(config.Scheduling.Options, "PrincipalHostAliases", "demo.com")
self.patch(config, "HTTPPort", 8008)
calverify = BadDataService(self._sqlCalendarStore, options, output, reactor, config)
calverify.emailDomain = "example.com"
yield calverify.doAction()
self.assertEqual(calverify.results["Number of events to process"], self.number_to_process)
self.verifyResultsByUID(calverify.results["Bad iCalendar data"], set((
("home1", "BAD4",),
("home1", "BAD5",),
("home1", "BAD6",),
("home1", "BAD10",),
("home1", "BAD12",),
("home1", "BAD14",),
)))
# Do scan
options["fix"] = False
calverify = BadDataService(self._sqlCalendarStore, options, output, reactor, config)
calverify.emailDomain = "example.com"
yield calverify.doAction()
self.assertEqual(calverify.results["Number of events to process"], self.number_to_process)
self.verifyResultsByUID(calverify.results["Bad iCalendar data"], set((
)))
sync_token_new = (yield (yield self.calendarUnderTest()).syncToken())
self.assertNotEqual(sync_token_old, sync_token_new)
class CalVerifyMismatchTestsBase(StoreTestCase):
"""
Tests calverify for iCalendar mismatch problems.
"""
metadata = {
"accessMode": "PUBLIC",
"isScheduleObject": True,
"scheduleTag": "abc",
"scheduleEtags": (),
"hasPrivateComment": False,
}
uuid1 = "D46F3D71-04B7-43C2-A7B6-6F92F92E61D0"
uuid2 = "47B16BB4-DB5F-4BF6-85FE-A7DA54230F92"
uuid3 = "AC478592-7783-44D1-B2AE-52359B4E8415"
uuidl1 = "75EA36BE-F71B-40F9-81F9-CF59BF40CA8F"
uuidl2 = "CDAF464F-9C77-4F56-A7A6-98E4ED9903D6"
@inlineCallbacks
def populate(self):
# Need to bypass normal validation inside the store
yield populateCalendarsFrom(self.requirements, self.storeUnderTest())
self.notifierFactory.reset()
now = DateTime.getToday()
now.setDay(1)
now.offsetMonth(2)
nowYear = now.getYear()
nowMonth = now.getMonth()
class CalVerifyMismatchTestsNonRecurring(CalVerifyMismatchTestsBase):
"""
Tests calverify for iCalendar mismatch problems for non-recurring events.
"""
# Organizer has event, attendees do not
MISSING_ATTENDEE_1_ICS = """BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Apple Inc.//iCal 4.0.1//EN
CALSCALE:GREGORIAN
BEGIN:VEVENT
CREATED:20100303T181216Z
UID:MISSING_ATTENDEE_ICS
DTEND:%(year)s%(month)02d07T151500Z
TRANSP:OPAQUE
SUMMARY:Ancient event
DTSTART:%(year)s%(month)02d07T111500Z
DTSTAMP:20100303T181220Z
SEQUENCE:2
ORGANIZER:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0
ATTENDEE:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0
ATTENDEE:urn:x-uid:47B16BB4-DB5F-4BF6-85FE-A7DA54230F92
ATTENDEE:urn:x-uid:AC478592-7783-44D1-B2AE-52359B4E8415
END:VEVENT
END:VCALENDAR
""".replace("\n", "\r\n") % {"year": nowYear, "month": nowMonth}
# Attendees have event, organizer does not
MISSING_ORGANIZER_2_ICS = """BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Apple Inc.//iCal 4.0.1//EN
CALSCALE:GREGORIAN
BEGIN:VEVENT
CREATED:20100303T181216Z
UID:MISSING_ORGANIZER_ICS
DTEND:%(year)s%(month)02d07T151500Z
TRANSP:OPAQUE
SUMMARY:Ancient event
DTSTART:%(year)s%(month)02d07T111500Z
DTSTAMP:20100303T181220Z
SEQUENCE:2
ORGANIZER:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0
ATTENDEE:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0
ATTENDEE:urn:x-uid:47B16BB4-DB5F-4BF6-85FE-A7DA54230F92
ATTENDEE:urn:x-uid:AC478592-7783-44D1-B2AE-52359B4E8415
END:VEVENT
END:VCALENDAR
""".replace("\n", "\r\n") % {"year": nowYear, "month": nowMonth}
MISSING_ORGANIZER_3_ICS = """BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Apple Inc.//iCal 4.0.1//EN
CALSCALE:GREGORIAN
BEGIN:VEVENT
CREATED:20100303T181216Z
UID:MISSING_ORGANIZER_ICS
DTEND:%(year)s%(month)02d07T151500Z
TRANSP:OPAQUE
SUMMARY:Ancient event
DTSTART:%(year)s%(month)02d07T111500Z
DTSTAMP:20100303T181220Z
SEQUENCE:2
ORGANIZER:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0
ATTENDEE:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0
ATTENDEE:urn:x-uid:47B16BB4-DB5F-4BF6-85FE-A7DA54230F92
ATTENDEE:urn:x-uid:AC478592-7783-44D1-B2AE-52359B4E8415
END:VEVENT
END:VCALENDAR
""".replace("\n", "\r\n") % {"year": nowYear, "month": nowMonth}
# Attendee partstat mismatch
MISMATCH_ATTENDEE_1_ICS = """BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Apple Inc.//iCal 4.0.1//EN
CALSCALE:GREGORIAN
BEGIN:VEVENT
CREATED:20100303T181216Z
UID:MISMATCH_ATTENDEE_ICS
DTEND:%(year)s%(month)02d07T151500Z
TRANSP:OPAQUE
SUMMARY:Ancient event
DTSTART:%(year)s%(month)02d07T111500Z
DTSTAMP:20100303T181220Z
SEQUENCE:2
ORGANIZER:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0
ATTENDEE;PARTSTAT=ACCEPTED:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0
ATTENDEE;PARTSTAT=DECLINED:urn:x-uid:47B16BB4-DB5F-4BF6-85FE-A7DA54230F92
ATTENDEE;PARTSTAT=NEEDS-ACTION:urn:x-uid:AC478592-7783-44D1-B2AE-52359B4E8415
END:VEVENT
END:VCALENDAR
""".replace("\n", "\r\n") % {"year": nowYear, "month": nowMonth}
MISMATCH_ATTENDEE_2_ICS = """BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Apple Inc.//iCal 4.0.1//EN
CALSCALE:GREGORIAN
BEGIN:VEVENT
CREATED:20100303T181216Z
UID:MISMATCH_ATTENDEE_ICS
DTEND:%(year)s%(month)02d07T151500Z
TRANSP:OPAQUE
SUMMARY:Ancient event
DTSTART:%(year)s%(month)02d07T111500Z
DTSTAMP:20100303T181220Z
SEQUENCE:2
ORGANIZER:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0
ATTENDEE;PARTSTAT=ACCEPTED:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0
ATTENDEE;PARTSTAT=NEEDS-ACTION:urn:x-uid:47B16BB4-DB5F-4BF6-85FE-A7DA54230F92
ATTENDEE;PARTSTAT=NEEDS-ACTION:urn:x-uid:AC478592-7783-44D1-B2AE-52359B4E8415
END:VEVENT
END:VCALENDAR
""".replace("\n", "\r\n") % {"year": nowYear, "month": nowMonth}
MISMATCH_ATTENDEE_3_ICS = """BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Apple Inc.//iCal 4.0.1//EN
CALSCALE:GREGORIAN
BEGIN:VEVENT
CREATED:20100303T181216Z
UID:MISMATCH_ATTENDEE_ICS
DTEND:%(year)s%(month)02d07T151500Z
TRANSP:OPAQUE
SUMMARY:Ancient event
DTSTART:%(year)s%(month)02d07T111500Z
DTSTAMP:20100303T181220Z
SEQUENCE:2
ORGANIZER:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0
ATTENDEE;PARTSTAT=ACCEPTED:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0
ATTENDEE;PARTSTAT=NEEDS-ACTION:urn:x-uid:47B16BB4-DB5F-4BF6-85FE-A7DA54230F92
ATTENDEE;PARTSTAT=ACCEPTED:urn:x-uid:AC478592-7783-44D1-B2AE-52359B4E8415
END:VEVENT
END:VCALENDAR
""".replace("\n", "\r\n") % {"year": nowYear, "month": nowMonth}
# Attendee events outside time range
MISMATCH2_ATTENDEE_1_ICS = """BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Apple Inc.//iCal 4.0.1//EN
CALSCALE:GREGORIAN
BEGIN:VEVENT
CREATED:20100303T181216Z
UID:MISMATCH2_ATTENDEE_ICS
DTEND:%(year)s%(month)02d07T151500Z
TRANSP:OPAQUE
SUMMARY:Ancient event
DTSTART:%(year)s%(month)02d07T111500Z
DTSTAMP:20100303T181220Z
SEQUENCE:2
ORGANIZER:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0
ATTENDEE;PARTSTAT=ACCEPTED:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0
ATTENDEE;PARTSTAT=DECLINED:urn:x-uid:47B16BB4-DB5F-4BF6-85FE-A7DA54230F92
ATTENDEE;PARTSTAT=NEEDS-ACTION:urn:x-uid:AC478592-7783-44D1-B2AE-52359B4E8415
END:VEVENT
END:VCALENDAR
""".replace("\n", "\r\n") % {"year": nowYear, "month": nowMonth}
MISMATCH2_ATTENDEE_2_ICS = """BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Apple Inc.//iCal 4.0.1//EN
CALSCALE:GREGORIAN
BEGIN:VEVENT
CREATED:20100303T181216Z
UID:MISMATCH2_ATTENDEE_ICS
DTEND:%(year)s%(month)02d07T151500Z
TRANSP:OPAQUE
SUMMARY:Ancient event
DTSTART:%(year)s%(month)02d07T111500Z
DTSTAMP:20100303T181220Z
SEQUENCE:2
ORGANIZER:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0
ATTENDEE;PARTSTAT=ACCEPTED:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0
ATTENDEE;PARTSTAT=NEEDS-ACTION:urn:x-uid:47B16BB4-DB5F-4BF6-85FE-A7DA54230F92
ATTENDEE;PARTSTAT=NEEDS-ACTION:urn:x-uid:AC478592-7783-44D1-B2AE-52359B4E8415
END:VEVENT
END:VCALENDAR
""".replace("\n", "\r\n") % {"year": nowYear, "month": nowMonth}
MISMATCH2_ATTENDEE_3_ICS = """BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Apple Inc.//iCal 4.0.1//EN
CALSCALE:GREGORIAN
BEGIN:VEVENT
CREATED:20100303T181216Z
UID:MISMATCH2_ATTENDEE_ICS
DTEND:%(year)s%(month)02d07T151500Z
TRANSP:OPAQUE
SUMMARY:Ancient event
DTSTART:%(year)s%(month)02d07T111500Z
DTSTAMP:20100303T181220Z
SEQUENCE:2
ORGANIZER:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0
ATTENDEE;PARTSTAT=ACCEPTED:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0
ATTENDEE;PARTSTAT=NEEDS-ACTION:urn:x-uid:47B16BB4-DB5F-4BF6-85FE-A7DA54230F92
ATTENDEE;PARTSTAT=ACCEPTED:urn:x-uid:AC478592-7783-44D1-B2AE-52359B4E8415
END:VEVENT
END:VCALENDAR
""".replace("\n", "\r\n") % {"year": nowYear, "month": nowMonth}
# Organizer event outside time range
MISMATCH_ORGANIZER_1_ICS = """BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Apple Inc.//iCal 4.0.1//EN
CALSCALE:GREGORIAN
BEGIN:VEVENT
CREATED:20100303T181216Z
UID:MISMATCH_ORGANIZER_ICS
DTEND:%(year)s%(month)02d07T151500Z
TRANSP:OPAQUE
SUMMARY:Ancient event
DTSTART:%(year)s%(month)02d07T111500Z
DTSTAMP:20100303T181220Z
SEQUENCE:2
ORGANIZER:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0
ATTENDEE;PARTSTAT=ACCEPTED:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0
ATTENDEE;PARTSTAT=DECLINED:urn:x-uid:47B16BB4-DB5F-4BF6-85FE-A7DA54230F92
ATTENDEE;PARTSTAT=NEEDS-ACTION:urn:x-uid:AC478592-7783-44D1-B2AE-52359B4E8415
END:VEVENT
END:VCALENDAR
""".replace("\n", "\r\n") % {"year": nowYear - 1, "month": nowMonth}
MISMATCH_ORGANIZER_2_ICS = """BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Apple Inc.//iCal 4.0.1//EN
CALSCALE:GREGORIAN
BEGIN:VEVENT
CREATED:20100303T181216Z
UID:MISMATCH_ORGANIZER_ICS
DTEND:%(year)s%(month)02d07T151500Z
TRANSP:OPAQUE
SUMMARY:Ancient event
DTSTART:%(year)s%(month)02d07T111500Z
DTSTAMP:20100303T181220Z
SEQUENCE:2
ORGANIZER:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0
ATTENDEE;PARTSTAT=ACCEPTED:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0
ATTENDEE;PARTSTAT=NEEDS-ACTION:urn:x-uid:47B16BB4-DB5F-4BF6-85FE-A7DA54230F92
ATTENDEE;PARTSTAT=NEEDS-ACTION:urn:x-uid:AC478592-7783-44D1-B2AE-52359B4E8415
END:VEVENT
END:VCALENDAR
""".replace("\n", "\r\n") % {"year": nowYear, "month": nowMonth}
# Attendee uuid3 has event with different organizer
MISMATCH3_ATTENDEE_1_ICS = """BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Apple Inc.//iCal 4.0.1//EN
CALSCALE:GREGORIAN
BEGIN:VEVENT
CREATED:20100303T181216Z
UID:MISMATCH3_ATTENDEE_ICS
DTEND:%(year)s%(month)02d07T151500Z
TRANSP:OPAQUE
SUMMARY:Ancient event
DTSTART:%(year)s%(month)02d07T111500Z
DTSTAMP:20100303T181220Z
SEQUENCE:2
ORGANIZER:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0
ATTENDEE;PARTSTAT=ACCEPTED:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0
ATTENDEE;PARTSTAT=ACCEPTED:urn:x-uid:47B16BB4-DB5F-4BF6-85FE-A7DA54230F92
ATTENDEE;PARTSTAT=ACCEPTED:urn:x-uid:AC478592-7783-44D1-B2AE-52359B4E8415
END:VEVENT
END:VCALENDAR
""".replace("\n", "\r\n") % {"year": nowYear, "month": nowMonth}
MISMATCH3_ATTENDEE_2_ICS = """BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Apple Inc.//iCal 4.0.1//EN
CALSCALE:GREGORIAN
BEGIN:VEVENT
CREATED:20100303T181216Z
UID:MISMATCH3_ATTENDEE_ICS
DTEND:%(year)s%(month)02d07T151500Z
TRANSP:OPAQUE
SUMMARY:Ancient event
DTSTART:%(year)s%(month)02d07T111500Z
DTSTAMP:20100303T181220Z
SEQUENCE:2
ORGANIZER:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0
ATTENDEE;PARTSTAT=ACCEPTED:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0
ATTENDEE;PARTSTAT=ACCEPTED:urn:x-uid:47B16BB4-DB5F-4BF6-85FE-A7DA54230F92
END:VEVENT
END:VCALENDAR
""".replace("\n", "\r\n") % {"year": nowYear, "month": nowMonth}
MISMATCH3_ATTENDEE_3_ICS = """BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Apple Inc.//iCal 4.0.1//EN
CALSCALE:GREGORIAN
BEGIN:VEVENT
CREATED:20100303T181216Z
UID:MISMATCH3_ATTENDEE_ICS
DTEND:%(year)s%(month)02d07T151500Z
TRANSP:OPAQUE
SUMMARY:Ancient event
DTSTART:%(year)s%(month)02d07T111500Z
DTSTAMP:20100303T181220Z
SEQUENCE:2
ORGANIZER:urn:x-uid:47B16BB4-DB5F-4BF6-85FE-A7DA54230F92
ATTENDEE;PARTSTAT=ACCEPTED:urn:x-uid:47B16BB4-DB5F-4BF6-85FE-A7DA54230F92
END:VEVENT
END:VCALENDAR
""".replace("\n", "\r\n") % {"year": nowYear, "month": nowMonth}
MISMATCH_ORGANIZER_3_ICS = """BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Apple Inc.//iCal 4.0.1//EN
CALSCALE:GREGORIAN
BEGIN:VEVENT
CREATED:20100303T181216Z
UID:MISMATCH_ORGANIZER_ICS
DTEND:%(year)s%(month)02d07T151500Z
TRANSP:OPAQUE
SUMMARY:Ancient event
DTSTART:%(year)s%(month)02d07T111500Z
DTSTAMP:20100303T181220Z
SEQUENCE:2
ORGANIZER:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0
ATTENDEE;PARTSTAT=ACCEPTED:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0
ATTENDEE;PARTSTAT=NEEDS-ACTION:urn:x-uid:47B16BB4-DB5F-4BF6-85FE-A7DA54230F92
ATTENDEE;PARTSTAT=ACCEPTED:urn:x-uid:AC478592-7783-44D1-B2AE-52359B4E8415
END:VEVENT
END:VCALENDAR
""".replace("\n", "\r\n") % {"year": nowYear, "month": nowMonth}
# Attendee uuid3 has event they are not invited to
MISMATCH2_ORGANIZER_1_ICS = """BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Apple Inc.//iCal 4.0.1//EN
CALSCALE:GREGORIAN
BEGIN:VEVENT
CREATED:20100303T181216Z
UID:MISMATCH2_ORGANIZER_ICS
DTEND:%(year)s%(month)02d07T151500Z
TRANSP:OPAQUE
SUMMARY:Ancient event
DTSTART:%(year)s%(month)02d07T111500Z
DTSTAMP:20100303T181220Z
SEQUENCE:2
ORGANIZER:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0
ATTENDEE;PARTSTAT=ACCEPTED:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0
ATTENDEE;PARTSTAT=DECLINED:urn:x-uid:47B16BB4-DB5F-4BF6-85FE-A7DA54230F92
END:VEVENT
END:VCALENDAR
""".replace("\n", "\r\n") % {"year": nowYear, "month": nowMonth}
MISMATCH2_ORGANIZER_2_ICS = """BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Apple Inc.//iCal 4.0.1//EN
CALSCALE:GREGORIAN
BEGIN:VEVENT
CREATED:20100303T181216Z
UID:MISMATCH2_ORGANIZER_ICS
DTEND:%(year)s%(month)02d07T151500Z
TRANSP:OPAQUE
SUMMARY:Ancient event
DTSTART:%(year)s%(month)02d07T111500Z
DTSTAMP:20100303T181220Z
SEQUENCE:2
ORGANIZER:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0
ATTENDEE;PARTSTAT=ACCEPTED:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0
ATTENDEE;PARTSTAT=DECLINED:urn:x-uid:47B16BB4-DB5F-4BF6-85FE-A7DA54230F92
END:VEVENT
END:VCALENDAR
""".replace("\n", "\r\n") % {"year": nowYear, "month": nowMonth}
MISMATCH2_ORGANIZER_3_ICS = """BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Apple Inc.//iCal 4.0.1//EN
CALSCALE:GREGORIAN
BEGIN:VEVENT
CREATED:20100303T181216Z
UID:MISMATCH2_ORGANIZER_ICS
DTEND:%(year)s%(month)02d07T151500Z
TRANSP:OPAQUE
SUMMARY:Ancient event
DTSTART:%(year)s%(month)02d07T111500Z
DTSTAMP:20100303T181220Z
SEQUENCE:2
ORGANIZER:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0
ATTENDEE;PARTSTAT=ACCEPTED:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0
ATTENDEE;PARTSTAT=DECLINED:urn:x-uid:47B16BB4-DB5F-4BF6-85FE-A7DA54230F92
ATTENDEE;PARTSTAT=ACCEPTED:urn:x-uid:AC478592-7783-44D1-B2AE-52359B4E8415
END:VEVENT
END:VCALENDAR
""".replace("\n", "\r\n") % {"year": nowYear, "month": nowMonth}
requirements = {
CalVerifyMismatchTestsBase.uuid1: {
"calendar": {
"missing_attendee.ics": (MISSING_ATTENDEE_1_ICS, CalVerifyMismatchTestsBase.metadata,),
"mismatched_attendee.ics": (MISMATCH_ATTENDEE_1_ICS, CalVerifyMismatchTestsBase.metadata,),
"mismatched2_attendee.ics": (MISMATCH2_ATTENDEE_1_ICS, CalVerifyMismatchTestsBase.metadata,),
"mismatched3_attendee.ics": (MISMATCH3_ATTENDEE_1_ICS, CalVerifyMismatchTestsBase.metadata,),
"mismatched_organizer.ics": (MISMATCH_ORGANIZER_1_ICS, CalVerifyMismatchTestsBase.metadata,),
"mismatched2_organizer.ics": (MISMATCH2_ORGANIZER_1_ICS, CalVerifyMismatchTestsBase.metadata,),
},
"inbox": {},
},
CalVerifyMismatchTestsBase.uuid2: {
"calendar": {
"mismatched_attendee.ics": (MISMATCH_ATTENDEE_2_ICS, CalVerifyMismatchTestsBase.metadata,),
"mismatched2_attendee.ics": (MISMATCH2_ATTENDEE_2_ICS, CalVerifyMismatchTestsBase.metadata,),
"mismatched3_attendee.ics": (MISMATCH3_ATTENDEE_2_ICS, CalVerifyMismatchTestsBase.metadata,),
"missing_organizer.ics": (MISSING_ORGANIZER_2_ICS, CalVerifyMismatchTestsBase.metadata,),
"mismatched_organizer.ics": (MISMATCH_ORGANIZER_2_ICS, CalVerifyMismatchTestsBase.metadata,),
"mismatched2_organizer.ics": (MISMATCH2_ORGANIZER_2_ICS, CalVerifyMismatchTestsBase.metadata,),
},
"inbox": {},
},
CalVerifyMismatchTestsBase.uuid3: {
"calendar": {
"mismatched_attendee.ics": (MISMATCH_ATTENDEE_3_ICS, CalVerifyMismatchTestsBase.metadata,),
"mismatched3_attendee.ics": (MISMATCH3_ATTENDEE_3_ICS, CalVerifyMismatchTestsBase.metadata,),
"missing_organizer.ics": (MISSING_ORGANIZER_3_ICS, CalVerifyMismatchTestsBase.metadata,),
"mismatched2_organizer.ics": (MISMATCH2_ORGANIZER_3_ICS, CalVerifyMismatchTestsBase.metadata,),
},
"calendar2": {
"mismatched_organizer.ics": (MISMATCH_ORGANIZER_3_ICS, CalVerifyMismatchTestsBase.metadata,),
"mismatched2_attendee.ics": (MISMATCH2_ATTENDEE_3_ICS, CalVerifyMismatchTestsBase.metadata,),
},
"inbox": {},
},
}
@inlineCallbacks
def setUp(self):
yield super(CalVerifyMismatchTestsNonRecurring, self).setUp()
home = (yield self.homeUnderTest(name=self.uuid3))
calendar = (yield self.calendarUnderTest(name="calendar2", home=self.uuid3))
yield home.setDefaultCalendar(calendar, "VEVENT")
yield self.commit()
@inlineCallbacks
def test_scanMismatchOnly(self):
"""
CalVerifyService.doScan without fix for mismatches. Make sure it detects
as much as it can. Make sure sync-token is not changed.
"""
sync_token_old1 = (yield (yield self.calendarUnderTest(home=self.uuid1, name="calendar")).syncToken())
sync_token_old2 = (yield (yield self.calendarUnderTest(home=self.uuid2, name="calendar")).syncToken())
sync_token_old3 = (yield (yield self.calendarUnderTest(home=self.uuid3, name="calendar")).syncToken())
yield self.commit()
options = {
"ical": False,
"badcua": False,
"mismatch": True,
"nobase64": False,
"fix": False,
"verbose": False,
"details": False,
"uid": "",
"uuid": "",
"tzid": "",
"start": DateTime(nowYear, 1, 1, 0, 0, 0),
}
output = StringIO()
calverify = SchedulingMismatchService(self._sqlCalendarStore, options, output, reactor, config)
yield calverify.doAction()
self.assertEqual(calverify.results["Number of events to process"], 17)
self.assertEqual(calverify.results["Missing Attendee"], set((
("MISSING_ATTENDEE_ICS", self.uuid1, self.uuid2,),
("MISSING_ATTENDEE_ICS", self.uuid1, self.uuid3,),
)))
self.assertEqual(calverify.results["Mismatch Attendee"], set((
("MISMATCH_ATTENDEE_ICS", self.uuid1, self.uuid2,),
("MISMATCH_ATTENDEE_ICS", self.uuid1, self.uuid3,),
("MISMATCH2_ATTENDEE_ICS", self.uuid1, self.uuid2,),
("MISMATCH2_ATTENDEE_ICS", self.uuid1, self.uuid3,),
("MISMATCH3_ATTENDEE_ICS", self.uuid1, self.uuid3,),
)))
self.assertEqual(calverify.results["Missing Organizer"], set((
("MISSING_ORGANIZER_ICS", self.uuid2, self.uuid1,),
("MISSING_ORGANIZER_ICS", self.uuid3, self.uuid1,),
)))
self.assertEqual(calverify.results["Mismatch Organizer"], set((
("MISMATCH_ORGANIZER_ICS", self.uuid2, self.uuid1,),
("MISMATCH_ORGANIZER_ICS", self.uuid3, self.uuid1,),
("MISMATCH2_ORGANIZER_ICS", self.uuid3, self.uuid1,),
)))
self.assertTrue("Fix change event" not in calverify.results)
self.assertTrue("Fix add event" not in calverify.results)
self.assertTrue("Fix add inbox" not in calverify.results)
self.assertTrue("Fix remove" not in calverify.results)
self.assertTrue("Fix remove" not in calverify.results)
self.assertTrue("Fix failures" not in calverify.results)
self.assertTrue("Auto-Accepts" not in calverify.results)
sync_token_new1 = (yield (yield self.calendarUnderTest(home=self.uuid1, name="calendar")).syncToken())
sync_token_new2 = (yield (yield self.calendarUnderTest(home=self.uuid2, name="calendar")).syncToken())
sync_token_new3 = (yield (yield self.calendarUnderTest(home=self.uuid3, name="calendar")).syncToken())
self.assertEqual(sync_token_old1, sync_token_new1)
self.assertEqual(sync_token_old2, sync_token_new2)
self.assertEqual(sync_token_old3, sync_token_new3)
@inlineCallbacks
def test_fixMismatch(self):
"""
CalVerifyService.doScan with fix for mismatches. Make sure it detects
and fixes as much as it can. Make sure sync-tokens change only for the calendars that received fixes.
"""
sync_token_old1 = (yield (yield self.calendarUnderTest(home=self.uuid1, name="calendar")).syncToken())
sync_token_old2 = (yield (yield self.calendarUnderTest(home=self.uuid2, name="calendar")).syncToken())
sync_token_old3 = (yield (yield self.calendarUnderTest(home=self.uuid3, name="calendar")).syncToken())
yield self.commit()
options = {
"ical": False,
"badcua": False,
"mismatch": True,
"nobase64": False,
"fix": True,
"verbose": False,
"details": False,
"uid": "",
"uuid": "",
"tzid": "",
"start": DateTime(nowYear, 1, 1, 0, 0, 0),
}
output = StringIO()
calverify = SchedulingMismatchService(self._sqlCalendarStore, options, output, reactor, config)
yield calverify.doAction()
self.assertEqual(calverify.results["Number of events to process"], 17)
self.assertEqual(calverify.results["Missing Attendee"], set((
("MISSING_ATTENDEE_ICS", self.uuid1, self.uuid2,),
("MISSING_ATTENDEE_ICS", self.uuid1, self.uuid3,),
)))
self.assertEqual(calverify.results["Mismatch Attendee"], set((
("MISMATCH_ATTENDEE_ICS", self.uuid1, self.uuid2,),
("MISMATCH_ATTENDEE_ICS", self.uuid1, self.uuid3,),
("MISMATCH2_ATTENDEE_ICS", self.uuid1, self.uuid2,),
("MISMATCH2_ATTENDEE_ICS", self.uuid1, self.uuid3,),
("MISMATCH3_ATTENDEE_ICS", self.uuid1, self.uuid3,),
)))
self.assertEqual(calverify.results["Missing Organizer"], set((
("MISSING_ORGANIZER_ICS", self.uuid2, self.uuid1,),
("MISSING_ORGANIZER_ICS", self.uuid3, self.uuid1,),
)))
self.assertEqual(calverify.results["Mismatch Organizer"], set((
("MISMATCH_ORGANIZER_ICS", self.uuid2, self.uuid1,),
("MISMATCH_ORGANIZER_ICS", self.uuid3, self.uuid1,),
("MISMATCH2_ORGANIZER_ICS", self.uuid3, self.uuid1,),
)))
self.assertEqual(calverify.results["Fix change event"], set((
(self.uuid2, "calendar", "MISMATCH_ATTENDEE_ICS",),
(self.uuid3, "calendar", "MISMATCH_ATTENDEE_ICS",),
(self.uuid2, "calendar", "MISMATCH2_ATTENDEE_ICS",),
(self.uuid3, "calendar2", "MISMATCH2_ATTENDEE_ICS",),
(self.uuid3, "calendar", "MISMATCH3_ATTENDEE_ICS",),
(self.uuid2, "calendar", "MISMATCH_ORGANIZER_ICS",),
(self.uuid3, "calendar2", "MISMATCH_ORGANIZER_ICS",),
)))
self.assertEqual(calverify.results["Fix add event"], set((
(self.uuid2, "calendar", "MISSING_ATTENDEE_ICS",),
(self.uuid3, "calendar2", "MISSING_ATTENDEE_ICS",),
)))
self.assertEqual(calverify.results["Fix add inbox"], set((
(self.uuid2, "MISSING_ATTENDEE_ICS",),
(self.uuid3, "MISSING_ATTENDEE_ICS",),
(self.uuid2, "MISMATCH_ATTENDEE_ICS",),
(self.uuid3, "MISMATCH_ATTENDEE_ICS",),
(self.uuid2, "MISMATCH2_ATTENDEE_ICS",),
(self.uuid3, "MISMATCH2_ATTENDEE_ICS",),
(self.uuid3, "MISMATCH3_ATTENDEE_ICS",),
(self.uuid2, "MISMATCH_ORGANIZER_ICS",),
(self.uuid3, "MISMATCH_ORGANIZER_ICS",),
)))
self.assertEqual(calverify.results["Fix remove"], set((
(self.uuid2, "calendar", "missing_organizer.ics",),
(self.uuid3, "calendar", "missing_organizer.ics",),
(self.uuid3, "calendar", "mismatched2_organizer.ics",),
)))
obj = yield self.calendarObjectUnderTest(home=self.uuid2, calendar_name="calendar", name="missing_organizer.ics")
self.assertEqual(obj, None)
obj = yield self.calendarObjectUnderTest(home=self.uuid3, calendar_name="calendar", name="missing_organizer.ics")
self.assertEqual(obj, None)
obj = yield self.calendarObjectUnderTest(home=self.uuid3, calendar_name="calendar", name="mismatched2_organizer.ics")
self.assertEqual(obj, None)
self.assertEqual(calverify.results["Fix failures"], 0)
self.assertEqual(calverify.results["Auto-Accepts"], [])
sync_token_new1 = (yield (yield self.calendarUnderTest(home=self.uuid1, name="calendar")).syncToken())
sync_token_new2 = (yield (yield self.calendarUnderTest(home=self.uuid2, name="calendar")).syncToken())
sync_token_new3 = (yield (yield self.calendarUnderTest(home=self.uuid3, name="calendar")).syncToken())
self.assertEqual(sync_token_old1, sync_token_new1)
self.assertNotEqual(sync_token_old2, sync_token_new2)
self.assertNotEqual(sync_token_old3, sync_token_new3)
# Re-scan after changes to make sure there are no errors
yield self.commit()
options["fix"] = False
calverify = SchedulingMismatchService(self._sqlCalendarStore, options, output, reactor, config)
yield calverify.doAction()
self.assertEqual(calverify.results["Number of events to process"], 14)
self.assertTrue("Missing Attendee" not in calverify.results)
self.assertTrue("Mismatch Attendee" not in calverify.results)
self.assertTrue("Missing Organizer" not in calverify.results)
self.assertTrue("Mismatch Organizer" not in calverify.results)
self.assertTrue("Fix add event" not in calverify.results)
self.assertTrue("Fix add inbox" not in calverify.results)
self.assertTrue("Fix remove" not in calverify.results)
self.assertTrue("Fix failures" not in calverify.results)
self.assertTrue("Auto-Accepts" not in calverify.results)
class CalVerifyMismatchTestsAutoAccept(CalVerifyMismatchTestsBase):
"""
Tests calverify for iCalendar mismatch problems for auto-accept attendees.
"""
# Organizer has event, attendee does not
MISSING_ATTENDEE_1_ICS = """BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Apple Inc.//iCal 4.0.1//EN
CALSCALE:GREGORIAN
BEGIN:VEVENT
CREATED:20100303T181216Z
UID:MISSING_ATTENDEE_ICS
DTEND:%(year)s%(month)02d07T151500Z
TRANSP:OPAQUE
SUMMARY:Ancient event
DTSTART:%(year)s%(month)02d07T111500Z
DTSTAMP:20100303T181220Z
SEQUENCE:2
ORGANIZER:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0
ATTENDEE:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0
ATTENDEE:urn:x-uid:75EA36BE-F71B-40F9-81F9-CF59BF40CA8F
END:VEVENT
END:VCALENDAR
""".replace("\n", "\r\n") % {"year": nowYear, "month": nowMonth}
# Attendee partstat mismatch
MISMATCH_ATTENDEE_1_ICS = """BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Apple Inc.//iCal 4.0.1//EN
CALSCALE:GREGORIAN
BEGIN:VEVENT
CREATED:20100303T181216Z
UID:MISMATCH_ATTENDEE_ICS
DTEND:%(year)s%(month)02d07T151500Z
TRANSP:OPAQUE
SUMMARY:Ancient event
DTSTART:%(year)s%(month)02d07T111500Z
DTSTAMP:20100303T181220Z
SEQUENCE:2
ORGANIZER:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0
ATTENDEE;PARTSTAT=ACCEPTED:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0
ATTENDEE;PARTSTAT=NEEDS-ACTION:urn:x-uid:75EA36BE-F71B-40F9-81F9-CF59BF40CA8F
END:VEVENT
END:VCALENDAR
""".replace("\n", "\r\n") % {"year": nowYear, "month": nowMonth}
MISMATCH_ATTENDEE_L1_ICS = """BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Apple Inc.//iCal 4.0.1//EN
CALSCALE:GREGORIAN
BEGIN:VEVENT
CREATED:20100303T181216Z
UID:MISMATCH_ATTENDEE_ICS
DTEND:%(year)s%(month)02d07T151500Z
TRANSP:OPAQUE
SUMMARY:Ancient event
DTSTART:%(year)s%(month)02d07T111500Z
DTSTAMP:20100303T181220Z
SEQUENCE:2
ORGANIZER:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0
ATTENDEE;PARTSTAT=ACCEPTED:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0
ATTENDEE;PARTSTAT=ACCEPTED:urn:x-uid:75EA36BE-F71B-40F9-81F9-CF59BF40CA8F
END:VEVENT
END:VCALENDAR
""".replace("\n", "\r\n") % {"year": nowYear, "month": nowMonth}
requirements = {
CalVerifyMismatchTestsBase.uuid1: {
"calendar": {
"missing_attendee.ics": (MISSING_ATTENDEE_1_ICS, CalVerifyMismatchTestsBase.metadata,),
"mismatched_attendee.ics": (MISMATCH_ATTENDEE_1_ICS, CalVerifyMismatchTestsBase.metadata,),
},
"inbox": {},
},
CalVerifyMismatchTestsBase.uuid2: {
"calendar": {},
"inbox": {},
},
CalVerifyMismatchTestsBase.uuid3: {
"calendar": {},
"inbox": {},
},
CalVerifyMismatchTestsBase.uuidl1: {
"calendar": {
"mismatched_attendee.ics": (MISMATCH_ATTENDEE_L1_ICS, CalVerifyMismatchTestsBase.metadata,),
},
"inbox": {},
},
}
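# uuidl1 is presumably a location/resource principal configured to auto-accept
# invitations, which is why the fix run below reports its events under
# "Auto-Accepts"; that configuration lives outside this test file.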
@inlineCallbacks
def test_scanMismatchOnly(self):
"""
CalVerifyService.doScan without fix for mismatches. Make sure it detects
as much as it can. Make sure sync-token is not changed.
"""
sync_token_old1 = (yield (yield self.calendarUnderTest(home=self.uuid1, name="calendar")).syncToken())
sync_token_oldl1 = (yield (yield self.calendarUnderTest(home=self.uuidl1, name="calendar")).syncToken())
yield self.commit()
options = {
"ical": False,
"badcua": False,
"mismatch": True,
"nobase64": False,
"fix": False,
"verbose": False,
"details": False,
"uid": "",
"uuid": "",
"tzid": "",
"start": DateTime(nowYear, 1, 1, 0, 0, 0),
}
output = StringIO()
calverify = SchedulingMismatchService(self._sqlCalendarStore, options, output, reactor, config)
yield calverify.doAction()
self.assertEqual(calverify.results["Number of events to process"], 3)
self.assertEqual(calverify.results["Missing Attendee"], set((
("MISSING_ATTENDEE_ICS", self.uuid1, self.uuidl1,),
)))
self.assertEqual(calverify.results["Mismatch Attendee"], set((
("MISMATCH_ATTENDEE_ICS", self.uuid1, self.uuidl1,),
)))
self.assertTrue("Missing Organizer" not in calverify.results)
self.assertTrue("Mismatch Organizer" not in calverify.results)
self.assertTrue("Fix change event" not in calverify.results)
self.assertTrue("Fix add event" not in calverify.results)
self.assertTrue("Fix add inbox" not in calverify.results)
self.assertTrue("Fix remove" not in calverify.results)
self.assertTrue("Fix failures" not in calverify.results)
self.assertTrue("Auto-Accepts" not in calverify.results)
sync_token_new1 = (yield (yield self.calendarUnderTest(home=self.uuid1, name="calendar")).syncToken())
sync_token_newl1 = (yield (yield self.calendarUnderTest(home=self.uuidl1, name="calendar")).syncToken())
self.assertEqual(sync_token_old1, sync_token_new1)
self.assertEqual(sync_token_oldl1, sync_token_newl1)
@inlineCallbacks
def test_fixMismatch(self):
"""
CalVerifyService.doScan with fix for mismatches. Make sure it detects
and fixes as much as it can. Make sure sync-tokens change only for the calendars that received fixes.
"""
sync_token_old1 = (yield (yield self.calendarUnderTest(home=self.uuid1, name="calendar")).syncToken())
sync_token_oldl1 = (yield (yield self.calendarUnderTest(home=self.uuidl1, name="calendar")).syncToken())
yield self.commit()
options = {
"ical": False,
"badcua": False,
"mismatch": True,
"nobase64": False,
"fix": True,
"verbose": False,
"details": False,
"uid": "",
"uuid": "",
"tzid": "",
"start": DateTime(nowYear, 1, 1, 0, 0, 0),
}
output = StringIO()
calverify = SchedulingMismatchService(self._sqlCalendarStore, options, output, reactor, config)
yield calverify.doAction()
self.assertEqual(calverify.results["Number of events to process"], 3)
self.assertEqual(calverify.results["Missing Attendee"], set((
("MISSING_ATTENDEE_ICS", self.uuid1, self.uuidl1,),
)))
self.assertEqual(calverify.results["Mismatch Attendee"], set((
("MISMATCH_ATTENDEE_ICS", self.uuid1, self.uuidl1,),
)))
self.assertTrue("Missing Organizer" not in calverify.results)
self.assertTrue("Mismatch Organizer" not in calverify.results)
self.assertEqual(calverify.results["Fix change event"], set((
(self.uuidl1, "calendar", "MISMATCH_ATTENDEE_ICS",),
)))
self.assertEqual(calverify.results["Fix add event"], set((
(self.uuidl1, "calendar", "MISSING_ATTENDEE_ICS",),
)))
self.assertEqual(calverify.results["Fix add inbox"], set((
(self.uuidl1, "MISSING_ATTENDEE_ICS",),
(self.uuidl1, "MISMATCH_ATTENDEE_ICS",),
)))
self.assertTrue("Fix remove" not in calverify.results)
self.assertEqual(calverify.results["Fix failures"], 0)
testResults = sorted(calverify.results["Auto-Accepts"], key=lambda x: x["uid"])
self.assertEqual(testResults[0]["path"], "/calendars/__uids__/%s/calendar/mismatched_attendee.ics" % self.uuidl1)
self.assertEqual(testResults[0]["uid"], "MISMATCH_ATTENDEE_ICS")
self.assertEqual(testResults[0]["start"].getText()[:8], "%(year)s%(month)02d07" % {"year": nowYear, "month": nowMonth})
self.assertEqual(testResults[1]["uid"], "MISSING_ATTENDEE_ICS")
self.assertEqual(testResults[1]["start"].getText()[:8], "%(year)s%(month)02d07" % {"year": nowYear, "month": nowMonth})
sync_token_new1 = (yield (yield self.calendarUnderTest(home=self.uuid1, name="calendar")).syncToken())
sync_token_newl1 = (yield (yield self.calendarUnderTest(home=self.uuidl1, name="calendar")).syncToken())
self.assertEqual(sync_token_old1, sync_token_new1)
self.assertNotEqual(sync_token_oldl1, sync_token_newl1)
# Re-scan after changes to make sure there are no errors
yield self.commit()
options["fix"] = False
calverify = SchedulingMismatchService(self._sqlCalendarStore, options, output, reactor, config)
yield calverify.doAction()
self.assertEqual(calverify.results["Number of events to process"], 4)
self.assertTrue("Missing Attendee" not in calverify.results)
self.assertTrue("Mismatch Attendee" not in calverify.results)
self.assertTrue("Missing Organizer" not in calverify.results)
self.assertTrue("Mismatch Organizer" not in calverify.results)
self.assertTrue("Fix add event" not in calverify.results)
self.assertTrue("Fix add inbox" not in calverify.results)
self.assertTrue("Fix remove" not in calverify.results)
self.assertTrue("Fix failures" not in calverify.results)
self.assertTrue("Auto-Accepts" not in calverify.results)
class CalVerifyMismatchTestsUUID(CalVerifyMismatchTestsBase):
"""
Tests calverify for iCalendar mismatch problems for auto-accept attendees.
"""
# Organizer has event, attendee does not
MISSING_ATTENDEE_1_ICS = """BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Apple Inc.//iCal 4.0.1//EN
CALSCALE:GREGORIAN
BEGIN:VEVENT
CREATED:20100303T181216Z
UID:MISSING_ATTENDEE_ICS
DTEND:%(year)s%(month)02d07T151500Z
TRANSP:OPAQUE
SUMMARY:Ancient event
DTSTART:%(year)s%(month)02d07T111500Z
DTSTAMP:20100303T181220Z
SEQUENCE:2
ORGANIZER:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0
ATTENDEE:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0
ATTENDEE:urn:x-uid:75EA36BE-F71B-40F9-81F9-CF59BF40CA8F
END:VEVENT
END:VCALENDAR
""".replace("\n", "\r\n") % {"year": nowYear, "month": nowMonth}
# Attendee partstat mismatch
MISMATCH_ATTENDEE_1_ICS = """BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Apple Inc.//iCal 4.0.1//EN
CALSCALE:GREGORIAN
BEGIN:VEVENT
CREATED:20100303T181216Z
UID:MISMATCH_ATTENDEE_ICS
DTEND:%(year)s%(month)02d07T151500Z
TRANSP:OPAQUE
SUMMARY:Ancient event
DTSTART:%(year)s%(month)02d07T111500Z
DTSTAMP:20100303T181220Z
SEQUENCE:2
ORGANIZER:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0
ATTENDEE;PARTSTAT=ACCEPTED:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0
ATTENDEE;PARTSTAT=NEEDS-ACTION:urn:x-uid:75EA36BE-F71B-40F9-81F9-CF59BF40CA8F
END:VEVENT
END:VCALENDAR
""".replace("\n", "\r\n") % {"year": nowYear, "month": nowMonth}
MISMATCH_ATTENDEE_L1_ICS = """BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Apple Inc.//iCal 4.0.1//EN
CALSCALE:GREGORIAN
BEGIN:VEVENT
CREATED:20100303T181216Z
UID:MISMATCH_ATTENDEE_ICS
DTEND:%(year)s%(month)02d07T151500Z
TRANSP:OPAQUE
SUMMARY:Ancient event
DTSTART:%(year)s%(month)02d07T111500Z
DTSTAMP:20100303T181220Z
SEQUENCE:2
ORGANIZER:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0
ATTENDEE;PARTSTAT=ACCEPTED:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0
ATTENDEE;PARTSTAT=ACCEPTED:urn:x-uid:75EA36BE-F71B-40F9-81F9-CF59BF40CA8F
END:VEVENT
END:VCALENDAR
""".replace("\n", "\r\n") % {"year": nowYear, "month": nowMonth}
requirements = {
CalVerifyMismatchTestsBase.uuid1: {
"calendar": {
"missing_attendee.ics": (MISSING_ATTENDEE_1_ICS, CalVerifyMismatchTestsBase.metadata,),
"mismatched_attendee.ics": (MISMATCH_ATTENDEE_1_ICS, CalVerifyMismatchTestsBase.metadata,),
},
"inbox": {},
},
CalVerifyMismatchTestsBase.uuid2: {
"calendar": {},
"inbox": {},
},
CalVerifyMismatchTestsBase.uuid3: {
"calendar": {},
"inbox": {},
},
CalVerifyMismatchTestsBase.uuidl1: {
"calendar": {
"mismatched_attendee.ics": (MISMATCH_ATTENDEE_L1_ICS, CalVerifyMismatchTestsBase.metadata,),
},
"inbox": {},
},
}
@inlineCallbacks
def test_scanMismatchOnly(self):
"""
CalVerifyService.doScan without fix for mismatches. Make sure it detects
as much as it can. Make sure sync-token is not changed.
"""
sync_token_old1 = (yield (yield self.calendarUnderTest(home=self.uuid1, name="calendar")).syncToken())
sync_token_oldl1 = (yield (yield self.calendarUnderTest(home=self.uuidl1, name="calendar")).syncToken())
yield self.commit()
options = {
"ical": False,
"badcua": False,
"mismatch": True,
"nobase64": False,
"fix": False,
"verbose": False,
"details": False,
"uid": "",
"uuid": CalVerifyMismatchTestsBase.uuidl1,
"tzid": "",
"start": DateTime(nowYear, 1, 1, 0, 0, 0),
}
output = StringIO()
calverify = SchedulingMismatchService(self._sqlCalendarStore, options, output, reactor, config)
yield calverify.doAction()
self.assertEqual(calverify.results["Number of events to process"], 2)
self.assertTrue("Missing Attendee" not in calverify.results)
self.assertEqual(calverify.results["Mismatch Attendee"], set((
("MISMATCH_ATTENDEE_ICS", self.uuid1, self.uuidl1,),
)))
self.assertTrue("Missing Organizer" not in calverify.results)
self.assertTrue("Mismatch Organizer" not in calverify.results)
self.assertTrue("Fix change event" not in calverify.results)
self.assertTrue("Fix add event" not in calverify.results)
self.assertTrue("Fix add inbox" not in calverify.results)
self.assertTrue("Fix remove" not in calverify.results)
self.assertTrue("Fix failures" not in calverify.results)
self.assertTrue("Auto-Accepts" not in calverify.results)
sync_token_new1 = (yield (yield self.calendarUnderTest(home=self.uuid1, name="calendar")).syncToken())
sync_token_newl1 = (yield (yield self.calendarUnderTest(home=self.uuidl1, name="calendar")).syncToken())
self.assertEqual(sync_token_old1, sync_token_new1)
self.assertEqual(sync_token_oldl1, sync_token_newl1)
@inlineCallbacks
def test_fixMismatch(self):
"""
CalVerifyService.doScan with fix for mismatches. Make sure it detects
and fixes as much as it can. Make sure the organizer's sync-token is unchanged
while the repaired attendee calendar's sync-token changes.
"""
sync_token_old1 = (yield (yield self.calendarUnderTest(home=self.uuid1, name="calendar")).syncToken())
sync_token_oldl1 = (yield (yield self.calendarUnderTest(home=self.uuidl1, name="calendar")).syncToken())
yield self.commit()
options = {
"ical": False,
"badcua": False,
"mismatch": True,
"nobase64": False,
"fix": True,
"verbose": False,
"details": False,
"uid": "",
"uuid": CalVerifyMismatchTestsBase.uuidl1,
"tzid": "",
"start": DateTime(nowYear, 1, 1, 0, 0, 0),
}
output = StringIO()
calverify = SchedulingMismatchService(self._sqlCalendarStore, options, output, reactor, config)
yield calverify.doAction()
self.assertEqual(calverify.results["Number of events to process"], 2)
self.assertTrue("Missing Attendee" not in calverify.results)
self.assertEqual(calverify.results["Mismatch Attendee"], set((
("MISMATCH_ATTENDEE_ICS", self.uuid1, self.uuidl1,),
)))
self.assertTrue("Missing Organizer" not in calverify.results)
self.assertTrue("Mismatch Organizer" not in calverify.results)
self.assertEqual(calverify.results["Fix change event"], set((
(self.uuidl1, "calendar", "MISMATCH_ATTENDEE_ICS",),
)))
self.assertTrue("Fix add event" not in calverify.results)
self.assertEqual(calverify.results["Fix add inbox"], set((
(self.uuidl1, "MISMATCH_ATTENDEE_ICS",),
)))
self.assertTrue("Fix remove" not in calverify.results)
self.assertEqual(calverify.results["Fix failures"], 0)
testResults = sorted(calverify.results["Auto-Accepts"], key=lambda x: x["uid"])
self.assertEqual(testResults[0]["path"], "/calendars/__uids__/%s/calendar/mismatched_attendee.ics" % self.uuidl1)
self.assertEqual(testResults[0]["uid"], "MISMATCH_ATTENDEE_ICS")
self.assertEqual(testResults[0]["start"].getText()[:8], "%(year)s%(month)02d07" % {"year": nowYear, "month": nowMonth})
sync_token_new1 = (yield (yield self.calendarUnderTest(home=self.uuid1, name="calendar")).syncToken())
sync_token_newl1 = (yield (yield self.calendarUnderTest(home=self.uuidl1, name="calendar")).syncToken())
self.assertEqual(sync_token_old1, sync_token_new1)
self.assertNotEqual(sync_token_oldl1, sync_token_newl1)
# Re-scan after changes to make sure there are no errors
yield self.commit()
options["fix"] = False
options["uuid"] = CalVerifyMismatchTestsBase.uuidl1
calverify = SchedulingMismatchService(self._sqlCalendarStore, options, output, reactor, config)
yield calverify.doAction()
self.assertEqual(calverify.results["Number of events to process"], 2)
self.assertTrue("Missing Attendee" not in calverify.results)
self.assertTrue("Mismatch Attendee" not in calverify.results)
self.assertTrue("Missing Organizer" not in calverify.results)
self.assertTrue("Mismatch Organizer" not in calverify.results)
self.assertTrue("Fix add event" not in calverify.results)
self.assertTrue("Fix add inbox" not in calverify.results)
self.assertTrue("Fix remove" not in calverify.results)
self.assertTrue("Fix failures" not in calverify.results)
self.assertTrue("Auto-Accepts" not in calverify.results)
class CalVerifyDoubleBooked(CalVerifyMismatchTestsBase):
"""
Tests calverify for double-bookings.
"""
# No overlap
INVITE_NO_OVERLAP_ICS = """BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Apple Inc.//iCal 4.0.1//EN
CALSCALE:GREGORIAN
BEGIN:VEVENT
CREATED:20100303T181216Z
UID:INVITE_NO_OVERLAP_ICS
TRANSP:OPAQUE
SUMMARY:INVITE_NO_OVERLAP_ICS
DTSTART:%(year)s%(month)02d07T100000Z
DURATION:PT1H
DTSTAMP:20100303T181220Z
SEQUENCE:2
ORGANIZER:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0
ATTENDEE:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0
ATTENDEE:urn:x-uid:75EA36BE-F71B-40F9-81F9-CF59BF40CA8F
END:VEVENT
END:VCALENDAR
""".replace("\n", "\r\n") % {"year": nowYear, "month": nowMonth}
# Two overlapping
INVITE_NO_OVERLAP1_1_ICS = """BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Apple Inc.//iCal 4.0.1//EN
CALSCALE:GREGORIAN
BEGIN:VEVENT
CREATED:20100303T181216Z
UID:INVITE_NO_OVERLAP1_1_ICS
TRANSP:OPAQUE
SUMMARY:INVITE_NO_OVERLAP1_1_ICS
DTSTART:%(year)s%(month)02d07T110000Z
DURATION:PT2H
DTSTAMP:20100303T181220Z
SEQUENCE:2
ORGANIZER:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0
ATTENDEE:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0
ATTENDEE:urn:x-uid:75EA36BE-F71B-40F9-81F9-CF59BF40CA8F
END:VEVENT
END:VCALENDAR
""".replace("\n", "\r\n") % {"year": nowYear, "month": nowMonth}
INVITE_NO_OVERLAP1_2_ICS = """BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Apple Inc.//iCal 4.0.1//EN
CALSCALE:GREGORIAN
BEGIN:VEVENT
CREATED:20100303T181216Z
UID:INVITE_NO_OVERLAP1_2_ICS
TRANSP:OPAQUE
SUMMARY:INVITE_NO_OVERLAP1_2_ICS
DTSTART:%(year)s%(month)02d07T120000Z
DURATION:PT1H
DTSTAMP:20100303T181220Z
SEQUENCE:2
ORGANIZER:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0
ATTENDEE:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0
ATTENDEE:urn:x-uid:75EA36BE-F71B-40F9-81F9-CF59BF40CA8F
END:VEVENT
END:VCALENDAR
""".replace("\n", "\r\n") % {"year": nowYear, "month": nowMonth}
# Two overlapping with one transparent
INVITE_NO_OVERLAP2_1_ICS = """BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Apple Inc.//iCal 4.0.1//EN
CALSCALE:GREGORIAN
BEGIN:VEVENT
CREATED:20100303T181216Z
UID:INVITE_NO_OVERLAP2_1_ICS
TRANSP:OPAQUE
SUMMARY:INVITE_NO_OVERLAP2_1_ICS
DTSTART:%(year)s%(month)02d07T140000Z
DURATION:PT2H
DTSTAMP:20100303T181220Z
SEQUENCE:2
ORGANIZER:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0
ATTENDEE:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0
ATTENDEE:urn:x-uid:75EA36BE-F71B-40F9-81F9-CF59BF40CA8F
END:VEVENT
END:VCALENDAR
""".replace("\n", "\r\n") % {"year": nowYear, "month": nowMonth}
INVITE_NO_OVERLAP2_2_ICS = """BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Apple Inc.//iCal 4.0.1//EN
CALSCALE:GREGORIAN
BEGIN:VEVENT
CREATED:20100303T181216Z
UID:INVITE_NO_OVERLAP2_2_ICS
SUMMARY:INVITE_NO_OVERLAP2_2_ICS
DTSTART:%(year)s%(month)02d07T150000Z
DURATION:PT1H
DTSTAMP:20100303T181220Z
SEQUENCE:2
ORGANIZER:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0
ATTENDEE:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0
ATTENDEE:urn:x-uid:75EA36BE-F71B-40F9-81F9-CF59BF40CA8F
TRANSP:TRANSPARENT
END:VEVENT
END:VCALENDAR
""".replace("\n", "\r\n") % {"year": nowYear, "month": nowMonth}
# Two overlapping with one cancelled
INVITE_NO_OVERLAP3_1_ICS = """BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Apple Inc.//iCal 4.0.1//EN
CALSCALE:GREGORIAN
BEGIN:VEVENT
CREATED:20100303T181216Z
UID:INVITE_NO_OVERLAP3_1_ICS
TRANSP:OPAQUE
SUMMARY:Ancient event
DTSTART:%(year)s%(month)02d07T170000Z
DURATION:PT2H
DTSTAMP:20100303T181220Z
SEQUENCE:2
ORGANIZER:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0
ATTENDEE:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0
ATTENDEE:urn:x-uid:75EA36BE-F71B-40F9-81F9-CF59BF40CA8F
END:VEVENT
END:VCALENDAR
""".replace("\n", "\r\n") % {"year": nowYear, "month": nowMonth}
INVITE_NO_OVERLAP3_2_ICS = """BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Apple Inc.//iCal 4.0.1//EN
CALSCALE:GREGORIAN
BEGIN:VEVENT
CREATED:20100303T181216Z
UID:INVITE_NO_OVERLAP3_2_ICS
SUMMARY:INVITE_NO_OVERLAP3_2_ICS
DTSTART:%(year)s%(month)02d07T180000Z
DURATION:PT1H
DTSTAMP:20100303T181220Z
SEQUENCE:2
ORGANIZER:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0
ATTENDEE:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0
ATTENDEE:urn:x-uid:75EA36BE-F71B-40F9-81F9-CF59BF40CA8F
STATUS:CANCELLED
END:VEVENT
END:VCALENDAR
""".replace("\n", "\r\n") % {"year": nowYear, "month": nowMonth}
# Two overlapping recurring
INVITE_NO_OVERLAP4_1_ICS = """BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Apple Inc.//iCal 4.0.1//EN
CALSCALE:GREGORIAN
BEGIN:VEVENT
CREATED:20100303T181216Z
UID:INVITE_NO_OVERLAP4_1_ICS
TRANSP:OPAQUE
SUMMARY:INVITE_NO_OVERLAP4_1_ICS
DTSTART:%(year)s%(month)02d08T120000Z
DURATION:PT2H
DTSTAMP:20100303T181220Z
SEQUENCE:2
ORGANIZER:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0
ATTENDEE:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0
ATTENDEE:urn:x-uid:75EA36BE-F71B-40F9-81F9-CF59BF40CA8F
RRULE:FREQ=DAILY;COUNT=3
END:VEVENT
END:VCALENDAR
""".replace("\n", "\r\n") % {"year": nowYear, "month": nowMonth}
INVITE_NO_OVERLAP4_2_ICS = """BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Apple Inc.//iCal 4.0.1//EN
CALSCALE:GREGORIAN
BEGIN:VEVENT
CREATED:20100303T181216Z
UID:INVITE_NO_OVERLAP4_2_ICS
SUMMARY:INVITE_NO_OVERLAP4_2_ICS
DTSTART:%(year)s%(month)02d09T120000Z
DURATION:PT1H
DTSTAMP:20100303T181220Z
SEQUENCE:2
ORGANIZER:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0
ATTENDEE:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0
ATTENDEE:urn:x-uid:75EA36BE-F71B-40F9-81F9-CF59BF40CA8F
RRULE:FREQ=DAILY;COUNT=2
END:VEVENT
END:VCALENDAR
""".replace("\n", "\r\n") % {"year": nowYear, "month": nowMonth}
# Two overlapping on one recurrence instance
INVITE_NO_OVERLAP5_1_ICS = """BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Apple Inc.//iCal 4.0.1//EN
CALSCALE:GREGORIAN
BEGIN:VEVENT
CREATED:20100303T181216Z
UID:INVITE_NO_OVERLAP5_1_ICS
TRANSP:OPAQUE
SUMMARY:INVITE_NO_OVERLAP5_1_ICS
DTSTART:%(year)s%(month)02d12T120000Z
DURATION:PT2H
DTSTAMP:20100303T181220Z
SEQUENCE:2
ORGANIZER:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0
ATTENDEE:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0
ATTENDEE:urn:x-uid:75EA36BE-F71B-40F9-81F9-CF59BF40CA8F
RRULE:FREQ=DAILY;COUNT=3
END:VEVENT
END:VCALENDAR
""".replace("\n", "\r\n") % {"year": nowYear, "month": nowMonth}
INVITE_NO_OVERLAP5_2_ICS = """BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Apple Inc.//iCal 4.0.1//EN
CALSCALE:GREGORIAN
BEGIN:VEVENT
CREATED:20100303T181216Z
UID:INVITE_NO_OVERLAP5_2_ICS
SUMMARY:INVITE_NO_OVERLAP5_2_ICS
DTSTART:%(year)s%(month)02d13T140000Z
DURATION:PT1H
DTSTAMP:20100303T181220Z
SEQUENCE:2
ORGANIZER:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0
ATTENDEE:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0
ATTENDEE:urn:x-uid:75EA36BE-F71B-40F9-81F9-CF59BF40CA8F
RRULE:FREQ=DAILY;COUNT=2
END:VEVENT
BEGIN:VEVENT
CREATED:20100303T181216Z
UID:INVITE_NO_OVERLAP5_2_ICS
SUMMARY:INVITE_NO_OVERLAP5_2_ICS
RECURRENCE-ID:%(year)s%(month)02d14T140000Z
DTSTART:%(year)s%(month)02d14T130000Z
DURATION:PT1H
DTSTAMP:20100303T181220Z
SEQUENCE:2
ORGANIZER:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0
ATTENDEE:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0
ATTENDEE:urn:x-uid:75EA36BE-F71B-40F9-81F9-CF59BF40CA8F
END:VEVENT
END:VCALENDAR
""".replace("\n", "\r\n") % {"year": nowYear, "month": nowMonth}
# Two not overlapping - one all-day
INVITE_NO_OVERLAP6_1_ICS = """BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Apple Inc.//iCal 4.0.1//EN
CALSCALE:GREGORIAN
BEGIN:VTIMEZONE
TZID:America/Los_Angeles
BEGIN:DAYLIGHT
DTSTART:20070311T020000
RRULE:FREQ=YEARLY;BYDAY=2SU;BYMONTH=3
TZNAME:PDT
TZOFFSETFROM:-0800
TZOFFSETTO:-0700
END:DAYLIGHT
BEGIN:STANDARD
DTSTART:20071104T020000
RRULE:FREQ=YEARLY;BYDAY=1SU;BYMONTH=11
TZNAME:PST
TZOFFSETFROM:-0700
TZOFFSETTO:-0800
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
CREATED:20100303T181216Z
UID:INVITE_NO_OVERLAP6_1_ICS
TRANSP:OPAQUE
SUMMARY:INVITE_NO_OVERLAP6_1_ICS
DTSTART;TZID=America/Los_Angeles:%(year)s%(month)02d20T200000
DURATION:PT2H
DTSTAMP:20100303T181220Z
SEQUENCE:2
ORGANIZER:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0
ATTENDEE:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0
ATTENDEE:urn:x-uid:75EA36BE-F71B-40F9-81F9-CF59BF40CA8F
END:VEVENT
END:VCALENDAR
""".replace("\n", "\r\n") % {"year": nowYear, "month": nowMonth}
INVITE_NO_OVERLAP6_2_ICS = """BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Apple Inc.//iCal 4.0.1//EN
CALSCALE:GREGORIAN
BEGIN:VEVENT
CREATED:20100303T181216Z
UID:INVITE_NO_OVERLAP6_2_ICS
TRANSP:OPAQUE
SUMMARY:INVITE_NO_OVERLAP6_2_ICS
DTSTART;VALUE=DATE:%(year)s%(month)02d21
DTSTAMP:20100303T181220Z
SEQUENCE:2
ORGANIZER:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0
ATTENDEE:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0
ATTENDEE:urn:x-uid:75EA36BE-F71B-40F9-81F9-CF59BF40CA8F
END:VEVENT
END:VCALENDAR
""".replace("\n", "\r\n") % {"year": nowYear, "month": nowMonth}
# Two overlapping - same organizer and summary
INVITE_NO_OVERLAP7_1_ICS = """BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Apple Inc.//iCal 4.0.1//EN
CALSCALE:GREGORIAN
BEGIN:VEVENT
CREATED:20100303T181216Z
UID:INVITE_NO_OVERLAP7_1_ICS
TRANSP:OPAQUE
SUMMARY:INVITE_NO_OVERLAP7_1_ICS
DTSTART:%(year)s%(month)02d23T110000Z
DURATION:PT2H
DTSTAMP:20100303T181220Z
SEQUENCE:2
ORGANIZER:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0
ATTENDEE:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0
ATTENDEE:urn:x-uid:75EA36BE-F71B-40F9-81F9-CF59BF40CA8F
END:VEVENT
END:VCALENDAR
""".replace("\n", "\r\n") % {"year": nowYear, "month": nowMonth}
INVITE_NO_OVERLAP7_2_ICS = """BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Apple Inc.//iCal 4.0.1//EN
CALSCALE:GREGORIAN
BEGIN:VEVENT
CREATED:20100303T181216Z
UID:INVITE_NO_OVERLAP7_2_ICS
TRANSP:OPAQUE
SUMMARY:INVITE_NO_OVERLAP7_1_ICS
DTSTART:%(year)s%(month)02d23T120000Z
DURATION:PT1H
DTSTAMP:20100303T181220Z
SEQUENCE:2
ORGANIZER:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0
ATTENDEE:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0
ATTENDEE:urn:x-uid:75EA36BE-F71B-40F9-81F9-CF59BF40CA8F
END:VEVENT
END:VCALENDAR
""".replace("\n", "\r\n") % {"year": nowYear, "month": nowMonth}
allEvents = {
"invite1.ics": (INVITE_NO_OVERLAP_ICS, CalVerifyMismatchTestsBase.metadata,),
"invite2.ics": (INVITE_NO_OVERLAP1_1_ICS, CalVerifyMismatchTestsBase.metadata,),
"invite3.ics": (INVITE_NO_OVERLAP1_2_ICS, CalVerifyMismatchTestsBase.metadata,),
"invite4.ics": (INVITE_NO_OVERLAP2_1_ICS, CalVerifyMismatchTestsBase.metadata,),
"invite5.ics": (INVITE_NO_OVERLAP2_2_ICS, CalVerifyMismatchTestsBase.metadata,),
"invite6.ics": (INVITE_NO_OVERLAP3_1_ICS, CalVerifyMismatchTestsBase.metadata,),
"invite7.ics": (INVITE_NO_OVERLAP3_2_ICS, CalVerifyMismatchTestsBase.metadata,),
"invite8.ics": (INVITE_NO_OVERLAP4_1_ICS, CalVerifyMismatchTestsBase.metadata,),
"invite9.ics": (INVITE_NO_OVERLAP4_2_ICS, CalVerifyMismatchTestsBase.metadata,),
"invite10.ics": (INVITE_NO_OVERLAP5_1_ICS, CalVerifyMismatchTestsBase.metadata,),
"invite11.ics": (INVITE_NO_OVERLAP5_2_ICS, CalVerifyMismatchTestsBase.metadata,),
"invite12.ics": (INVITE_NO_OVERLAP6_1_ICS, CalVerifyMismatchTestsBase.metadata,),
"invite13.ics": (INVITE_NO_OVERLAP6_2_ICS, CalVerifyMismatchTestsBase.metadata,),
"invite14.ics": (INVITE_NO_OVERLAP7_1_ICS, CalVerifyMismatchTestsBase.metadata,),
"invite15.ics": (INVITE_NO_OVERLAP7_2_ICS, CalVerifyMismatchTestsBase.metadata,),
}
requirements = {
CalVerifyMismatchTestsBase.uuid1: {
"calendar": allEvents,
"inbox": {},
},
CalVerifyMismatchTestsBase.uuid2: {
"calendar": {},
"inbox": {},
},
CalVerifyMismatchTestsBase.uuid3: {
"calendar": {},
"inbox": {},
},
CalVerifyMismatchTestsBase.uuidl1: {
"calendar": allEvents,
"inbox": {},
},
}
@inlineCallbacks
def test_scanDoubleBookingOnly(self):
"""
CalVerifyService.doScan without fix, checking for double-bookings. Make sure it detects
as much as it can. Make sure sync-token is not changed.
"""
sync_token_old1 = (yield (yield self.calendarUnderTest(home=self.uuid1, name="calendar")).syncToken())
sync_token_oldl1 = (yield (yield self.calendarUnderTest(home=self.uuidl1, name="calendar")).syncToken())
yield self.commit()
options = {
"ical": False,
"badcua": False,
"mismatch": False,
"nobase64": False,
"double": True,
"fix": False,
"verbose": False,
"details": False,
"summary": False,
"days": 365,
"uid": "",
"uuid": self.uuidl1,
"tzid": "utc",
"start": DateTime(nowYear, 1, 1, 0, 0, 0),
}
output = StringIO()
calverify = DoubleBookingService(self._sqlCalendarStore, options, output, reactor, config)
yield calverify.doAction()
self.assertEqual(calverify.results["Number of events to process"], len(self.requirements[CalVerifyMismatchTestsBase.uuidl1]["calendar"]))
self.assertEqual(
[(sorted((i.uid1, i.uid2,)), str(i.start),) for i in calverify.results["Double-bookings"]],
[
(["INVITE_NO_OVERLAP1_1_ICS", "INVITE_NO_OVERLAP1_2_ICS"], "%(year)s%(month)02d07T120000Z" % {"year": nowYear, "month": nowMonth}),
(["INVITE_NO_OVERLAP4_1_ICS", "INVITE_NO_OVERLAP4_2_ICS"], "%(year)s%(month)02d09T120000Z" % {"year": nowYear, "month": nowMonth}),
(["INVITE_NO_OVERLAP4_1_ICS", "INVITE_NO_OVERLAP4_2_ICS"], "%(year)s%(month)02d10T120000Z" % {"year": nowYear, "month": nowMonth}),
(["INVITE_NO_OVERLAP5_1_ICS", "INVITE_NO_OVERLAP5_2_ICS"], "%(year)s%(month)02d14T130000Z" % {"year": nowYear, "month": nowMonth}),
],
)
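# Four overlapping instances in total across three unique event pairs: the daily-recurring pair INVITE_NO_OVERLAP4_1/4_2 overlaps on two separate instances.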
self.assertEqual(calverify.results["Number of double-bookings"], 4)
self.assertEqual(calverify.results["Number of unique double-bookings"], 3)
sync_token_new1 = (yield (yield self.calendarUnderTest(home=self.uuid1, name="calendar")).syncToken())
sync_token_newl1 = (yield (yield self.calendarUnderTest(home=self.uuidl1, name="calendar")).syncToken())
self.assertEqual(sync_token_old1, sync_token_new1)
self.assertEqual(sync_token_oldl1, sync_token_newl1)
class CalVerifyDarkPurge(CalVerifyMismatchTestsBase):
"""
Tests calverify dark-purge of events, i.e. events whose organizer is missing, invalid, or disabled.
"""
# No organizer
INVITE_NO_ORGANIZER_ICS = """BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Apple Inc.//iCal 4.0.1//EN
CALSCALE:GREGORIAN
BEGIN:VEVENT
CREATED:20100303T181216Z
UID:INVITE_NO_ORGANIZER_ICS
TRANSP:OPAQUE
SUMMARY:INVITE_NO_ORGANIZER_ICS
DTSTART:%(year)s%(month)02d07T100000Z
DURATION:PT1H
DTSTAMP:20100303T181220Z
SEQUENCE:2
END:VEVENT
END:VCALENDAR
""".replace("\n", "\r\n") % {"year": nowYear, "month": nowMonth}
# Valid organizer
INVITE_VALID_ORGANIZER_ICS = """BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Apple Inc.//iCal 4.0.1//EN
CALSCALE:GREGORIAN
BEGIN:VEVENT
CREATED:20100303T181216Z
UID:INVITE_VALID_ORGANIZER_ICS
TRANSP:OPAQUE
SUMMARY:INVITE_VALID_ORGANIZER_ICS
DTSTART:%(year)s%(month)02d08T100000Z
DURATION:PT1H
DTSTAMP:20100303T181220Z
SEQUENCE:2
ORGANIZER:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0
ATTENDEE:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0
ATTENDEE:urn:x-uid:75EA36BE-F71B-40F9-81F9-CF59BF40CA8F
END:VEVENT
END:VCALENDAR
""".replace("\n", "\r\n") % {"year": nowYear, "month": nowMonth}
# Invalid organizer #1
INVITE_INVALID_ORGANIZER_1_ICS = """BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Apple Inc.//iCal 4.0.1//EN
CALSCALE:GREGORIAN
BEGIN:VEVENT
CREATED:20100303T181216Z
UID:INVITE_INVALID_ORGANIZER_1_ICS
TRANSP:OPAQUE
SUMMARY:INVITE_INVALID_ORGANIZER_1_ICS
DTSTART:%(year)s%(month)02d09T100000Z
DURATION:PT1H
DTSTAMP:20100303T181220Z
SEQUENCE:2
ORGANIZER:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0-1
ATTENDEE:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0-1
ATTENDEE:urn:x-uid:75EA36BE-F71B-40F9-81F9-CF59BF40CA8F
END:VEVENT
END:VCALENDAR
""".replace("\n", "\r\n") % {"year": nowYear, "month": nowMonth}
# Invalid organizer #2
INVITE_INVALID_ORGANIZER_2_ICS = """BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Apple Inc.//iCal 4.0.1//EN
CALSCALE:GREGORIAN
BEGIN:VEVENT
CREATED:20100303T181216Z
UID:INVITE_INVALID_ORGANIZER_2_ICS
TRANSP:OPAQUE
SUMMARY:INVITE_INVALID_ORGANIZER_2_ICS
DTSTART:%(year)s%(month)02d10T100000Z
DURATION:PT1H
DTSTAMP:20100303T181220Z
SEQUENCE:2
ORGANIZER:mailto:foobar@example.com
ATTENDEE:mailto:foobar@example.com
ATTENDEE:urn:x-uid:75EA36BE-F71B-40F9-81F9-CF59BF40CA8F
END:VEVENT
END:VCALENDAR
""".replace("\n", "\r\n") % {"year": nowYear, "month": nowMonth}
allEvents = {
"invite1.ics": (INVITE_NO_ORGANIZER_ICS, CalVerifyMismatchTestsBase.metadata,),
"invite2.ics": (INVITE_VALID_ORGANIZER_ICS, CalVerifyMismatchTestsBase.metadata,),
"invite3.ics": (INVITE_INVALID_ORGANIZER_1_ICS, CalVerifyMismatchTestsBase.metadata,),
"invite4.ics": (INVITE_INVALID_ORGANIZER_2_ICS, CalVerifyMismatchTestsBase.metadata,),
}
requirements = {
CalVerifyMismatchTestsBase.uuid1: {
"calendar": {},
"inbox": {},
},
CalVerifyMismatchTestsBase.uuid2: {
"calendar": {},
"inbox": {},
},
CalVerifyMismatchTestsBase.uuid3: {
"calendar": {},
"inbox": {},
},
CalVerifyMismatchTestsBase.uuidl1: {
"calendar": allEvents,
"inbox": {},
},
}
@inlineCallbacks
def test_scanDarkEvents(self):
"""
CalVerifyService.doScan without fix for dark events. Make sure it detects
as much as it can. Make sure sync-token is not changed.
"""
sync_token_oldl1 = (yield (yield self.calendarUnderTest(home=self.uuidl1, name="calendar")).syncToken())
yield self.commit()
options = {
"ical": False,
"badcua": False,
"mismatch": False,
"nobase64": False,
"double": True,
"dark-purge": False,
"fix": False,
"verbose": False,
"details": False,
"summary": False,
"days": 365,
"uid": "",
"uuid": self.uuidl1,
"tzid": "utc",
"start": DateTime(nowYear, 1, 1, 0, 0, 0),
"no-organizer": False,
"invalid-organizer": False,
"disabled-organizer": False,
}
output = StringIO()
calverify = DarkPurgeService(self._sqlCalendarStore, options, output, reactor, config)
yield calverify.doAction()
self.assertEqual(calverify.results["Number of events to process"], len(self.requirements[CalVerifyMismatchTestsBase.uuidl1]["calendar"]))
self.assertEqual(
sorted([i.uid for i in calverify.results["Dark Events"]]),
["INVITE_INVALID_ORGANIZER_1_ICS", "INVITE_INVALID_ORGANIZER_2_ICS", ]
)
self.assertEqual(calverify.results["Number of dark events"], 2)
self.assertTrue("Fix dark events" not in calverify.results)
self.assertTrue("Fix remove" not in calverify.results)
sync_token_newl1 = (yield (yield self.calendarUnderTest(home=self.uuidl1, name="calendar")).syncToken())
self.assertEqual(sync_token_oldl1, sync_token_newl1)
@inlineCallbacks
def test_fixDarkEvents(self):
"""
CalVerifyService.doScan with fix for dark events. Make sure it detects
as much as it can. Make sure sync-token is changed.
"""
sync_token_oldl1 = (yield (yield self.calendarUnderTest(home=self.uuidl1, name="calendar")).syncToken())
yield self.commit()
options = {
"ical": False,
"badcua": False,
"mismatch": False,
"nobase64": False,
"double": True,
"dark-purge": False,
"fix": True,
"verbose": False,
"details": False,
"summary": False,
"days": 365,
"uid": "",
"uuid": self.uuidl1,
"tzid": "utc",
"start": DateTime(nowYear, 1, 1, 0, 0, 0),
"no-organizer": False,
"invalid-organizer": False,
"disabled-organizer": False,
}
output = StringIO()
calverify = DarkPurgeService(self._sqlCalendarStore, options, output, reactor, config)
yield calverify.doAction()
self.assertEqual(calverify.results["Number of events to process"], len(self.requirements[CalVerifyMismatchTestsBase.uuidl1]["calendar"]))
self.assertEqual(
sorted([i.uid for i in calverify.results["Dark Events"]]),
["INVITE_INVALID_ORGANIZER_1_ICS", "INVITE_INVALID_ORGANIZER_2_ICS", ]
)
self.assertEqual(calverify.results["Number of dark events"], 2)
self.assertEqual(calverify.results["Fix dark events"], 2)
self.assertTrue("Fix remove" in calverify.results)
sync_token_newl1 = (yield (yield self.calendarUnderTest(home=self.uuidl1, name="calendar")).syncToken())
self.assertNotEqual(sync_token_oldl1, sync_token_newl1)
# Re-scan after changes to make sure there are no errors
yield self.commit()
options["fix"] = False
options["uuid"] = self.uuidl1
calverify = DarkPurgeService(self._sqlCalendarStore, options, output, reactor, config)
yield calverify.doAction()
self.assertEqual(calverify.results["Number of events to process"], 2)
self.assertEqual(len(calverify.results["Dark Events"]), 0)
self.assertTrue("Fix dark events" not in calverify.results)
self.assertTrue("Fix remove" not in calverify.results)
@inlineCallbacks
def test_fixDarkEventsNoOrganizerOnly(self):
"""
CalVerifyService.doScan with fix for dark events, restricted to events with no organizer.
Make sure it detects as much as it can. Make sure sync-token is changed.
"""
sync_token_oldl1 = (yield (yield self.calendarUnderTest(home=self.uuidl1, name="calendar")).syncToken())
yield self.commit()
options = {
"ical": False,
"badcua": False,
"mismatch": False,
"nobase64": False,
"double": True,
"dark-purge": False,
"fix": True,
"verbose": False,
"details": False,
"summary": False,
"days": 365,
"uid": "",
"uuid": self.uuidl1,
"tzid": "utc",
"start": DateTime(nowYear, 1, 1, 0, 0, 0),
"no-organizer": True,
"invalid-organizer": False,
"disabled-organizer": False,
}
output = StringIO()
calverify = DarkPurgeService(self._sqlCalendarStore, options, output, reactor, config)
yield calverify.doAction()
self.assertEqual(calverify.results["Number of events to process"], len(self.requirements[CalVerifyMismatchTestsBase.uuidl1]["calendar"]))
self.assertEqual(
sorted([i.uid for i in calverify.results["Dark Events"]]),
["INVITE_NO_ORGANIZER_ICS", ]
)
self.assertEqual(calverify.results["Number of dark events"], 1)
self.assertEqual(calverify.results["Fix dark events"], 1)
self.assertTrue("Fix remove" in calverify.results)
sync_token_newl1 = (yield (yield self.calendarUnderTest(home=self.uuidl1, name="calendar")).syncToken())
self.assertNotEqual(sync_token_oldl1, sync_token_newl1)
# Re-scan after changes to make sure there are no errors
yield self.commit()
options["fix"] = False
options["uuid"] = self.uuidl1
calverify = DarkPurgeService(self._sqlCalendarStore, options, output, reactor, config)
yield calverify.doAction()
self.assertEqual(calverify.results["Number of events to process"], 3)
self.assertEqual(len(calverify.results["Dark Events"]), 0)
self.assertTrue("Fix dark events" not in calverify.results)
self.assertTrue("Fix remove" not in calverify.results)
@inlineCallbacks
def test_fixDarkEventsAllTypes(self):
"""
CalVerifyService.doScan with fix for all categories of dark events (no organizer, invalid
organizer, disabled organizer). Make sure it detects as much as it can. Make sure sync-token is changed.
"""
sync_token_oldl1 = (yield (yield self.calendarUnderTest(home=self.uuidl1, name="calendar")).syncToken())
yield self.commit()
options = {
"ical": False,
"badcua": False,
"mismatch": False,
"nobase64": False,
"double": True,
"dark-purge": False,
"fix": True,
"verbose": False,
"details": False,
"summary": False,
"days": 365,
"uid": "",
"uuid": self.uuidl1,
"tzid": "utc",
"start": DateTime(nowYear, 1, 1, 0, 0, 0),
"no-organizer": True,
"invalid-organizer": True,
"disabled-organizer": True,
}
output = StringIO()
calverify = DarkPurgeService(self._sqlCalendarStore, options, output, reactor, config)
yield calverify.doAction()
self.assertEqual(calverify.results["Number of events to process"], len(self.requirements[CalVerifyMismatchTestsBase.uuidl1]["calendar"]))
self.assertEqual(
sorted([i.uid for i in calverify.results["Dark Events"]]),
["INVITE_INVALID_ORGANIZER_1_ICS", "INVITE_INVALID_ORGANIZER_2_ICS", "INVITE_NO_ORGANIZER_ICS", ]
)
self.assertEqual(calverify.results["Number of dark events"], 3)
self.assertEqual(calverify.results["Fix dark events"], 3)
self.assertTrue("Fix remove" in calverify.results)
sync_token_newl1 = (yield (yield self.calendarUnderTest(home=self.uuidl1, name="calendar")).syncToken())
self.assertNotEqual(sync_token_oldl1, sync_token_newl1)
# Re-scan after changes to make sure there are no errors
yield self.commit()
options["fix"] = False
options["uuid"] = self.uuidl1
calverify = DarkPurgeService(self._sqlCalendarStore, options, output, reactor, config)
yield calverify.doAction()
self.assertEqual(calverify.results["Number of events to process"], 1)
self.assertEqual(len(calverify.results["Dark Events"]), 0)
self.assertTrue("Fix dark events" not in calverify.results)
self.assertTrue("Fix remove" not in calverify.results)
class CalVerifyEventPurge(CalVerifyMismatchTestsBase):
"""
Tests calverify event splitting via EventSplitService.
"""
# No organizer
NO_ORGANIZER_ICS = """BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Apple Inc.//iCal 4.0.1//EN
CALSCALE:GREGORIAN
BEGIN:VEVENT
CREATED:20100303T181216Z
UID:INVITE_NO_ORGANIZER_ICS
TRANSP:OPAQUE
SUMMARY:INVITE_NO_ORGANIZER_ICS
DTSTART:%(now_fwd10)s
DURATION:PT1H
DTSTAMP:20100303T181220Z
SEQUENCE:2
END:VEVENT
END:VCALENDAR
""".replace("\n", "\r\n")
# Valid organizer
VALID_ORGANIZER_ICS = """BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Apple Inc.//iCal 4.0.1//EN
BEGIN:VEVENT
UID:INVITE_VALID_ORGANIZER_ICS
DTSTART:%(now)s
DURATION:PT1H
ATTENDEE;PARTSTAT=ACCPETED:urn:x-uid:%(uuid1)s
ATTENDEE;RSVP=TRUE:urn:x-uid:%(uuid2)s
ORGANIZER:urn:x-uid:%(uuid1)s
RRULE:FREQ=DAILY
SUMMARY:INVITE_VALID_ORGANIZER_ICS
END:VEVENT
END:VCALENDAR
""".replace("\n", "\r\n")
# Valid attendee
VALID_ATTENDEE_ICS = """BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Apple Inc.//iCal 4.0.1//EN
BEGIN:VEVENT
UID:INVITE_VALID_ORGANIZER_ICS
DTSTART:%(now)s
DURATION:PT1H
ATTENDEE;PARTSTAT=ACCPETED:urn:x-uid:%(uuid1)s
ATTENDEE;RSVP=TRUE:urn:x-uid:%(uuid2)s
ORGANIZER:urn:x-uid:%(uuid1)s
RRULE:FREQ=DAILY
SUMMARY:INVITE_VALID_ORGANIZER_ICS
END:VEVENT
END:VCALENDAR
""".replace("\n", "\r\n")
# Valid organizer - expected future portion after split
VALID_ORGANIZER_FUTURE_ICS = """BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Apple Inc.//iCal 4.0.1//EN
BEGIN:VEVENT
UID:INVITE_VALID_ORGANIZER_ICS
DTSTART:%(now_fwd11)s
DURATION:PT1H
ATTENDEE;PARTSTAT=ACCPETED:urn:x-uid:%(uuid1)s
ATTENDEE;RSVP=TRUE:urn:x-uid:%(uuid2)s
ORGANIZER:urn:x-uid:%(uuid1)s
RELATED-TO;RELTYPE=X-CALENDARSERVER-RECURRENCE-SET:%(relID)s
RRULE:FREQ=DAILY
SEQUENCE:1
SUMMARY:INVITE_VALID_ORGANIZER_ICS
END:VEVENT
END:VCALENDAR
""".replace("\n", "\r\n")
# Valid attendee - expected future portion after split
VALID_ATTENDEE_FUTURE_ICS = """BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Apple Inc.//iCal 4.0.1//EN
BEGIN:VEVENT
UID:INVITE_VALID_ORGANIZER_ICS
DTSTART:%(now_fwd11)s
DURATION:PT1H
ATTENDEE;PARTSTAT=ACCPETED:urn:x-uid:%(uuid1)s
ATTENDEE;RSVP=TRUE:urn:x-uid:%(uuid2)s
ORGANIZER:urn:x-uid:%(uuid1)s
RELATED-TO;RELTYPE=X-CALENDARSERVER-RECURRENCE-SET:%(relID)s
RRULE:FREQ=DAILY
SEQUENCE:1
SUMMARY:INVITE_VALID_ORGANIZER_ICS
END:VEVENT
END:VCALENDAR
""".replace("\n", "\r\n")
# Valid organizer - expected past portion after split
VALID_ORGANIZER_PAST_ICS = """BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Apple Inc.//iCal 4.0.1//EN
BEGIN:VEVENT
UID:%(uid)s
DTSTART:%(now)s
DURATION:PT1H
ATTENDEE;PARTSTAT=ACCPETED:urn:x-uid:%(uuid1)s
ATTENDEE;RSVP=TRUE:urn:x-uid:%(uuid2)s
ORGANIZER:urn:x-uid:%(uuid1)s
RELATED-TO;RELTYPE=X-CALENDARSERVER-RECURRENCE-SET:%(relID)s
RRULE:FREQ=DAILY;UNTIL=%(now_fwd11_1)s
SEQUENCE:1
SUMMARY:INVITE_VALID_ORGANIZER_ICS
END:VEVENT
END:VCALENDAR
""".replace("\n", "\r\n")
# Valid attendee - expected past portion after split
VALID_ATTENDEE_PAST_ICS = """BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Apple Inc.//iCal 4.0.1//EN
BEGIN:VEVENT
UID:%(uid)s
DTSTART:%(now)s
DURATION:PT1H
ATTENDEE;PARTSTAT=ACCPETED:urn:x-uid:%(uuid1)s
ATTENDEE;RSVP=TRUE:urn:x-uid:%(uuid2)s
ORGANIZER:urn:x-uid:%(uuid1)s
RELATED-TO;RELTYPE=X-CALENDARSERVER-RECURRENCE-SET:%(relID)s
RRULE:FREQ=DAILY;UNTIL=%(now_fwd11_1)s
SEQUENCE:1
SUMMARY:INVITE_VALID_ORGANIZER_ICS
END:VEVENT
END:VCALENDAR
""".replace("\n", "\r\n")
# Valid organizer - with an overridden instance
VALID_ORGANIZER_OVERRIDE_ICS = """BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Apple Inc.//iCal 4.0.1//EN
BEGIN:VEVENT
UID:VALID_ORGANIZER_OVERRIDE_ICS
DTSTART:%(now)s
DURATION:PT1H
ATTENDEE;PARTSTAT=ACCPETED:urn:x-uid:%(uuid1)s
ATTENDEE;RSVP=TRUE:urn:x-uid:%(uuid2)s
ORGANIZER:urn:x-uid:%(uuid1)s
RRULE:FREQ=DAILY
SUMMARY:INVITE_VALID_ORGANIZER_ICS
END:VEVENT
BEGIN:VEVENT
UID:VALID_ORGANIZER_OVERRIDE_ICS
RECURRENCE-ID:%(now_fwd11)s
DTSTART:%(now_fwd11)s
DURATION:PT2H
ATTENDEE;PARTSTAT=ACCPETED:urn:x-uid:%(uuid1)s
ATTENDEE;RSVP=TRUE:urn:x-uid:%(uuid2)s
ORGANIZER:urn:x-uid:%(uuid1)s
RRULE:FREQ=DAILY
SUMMARY:INVITE_VALID_ORGANIZER_ICS
END:VEVENT
END:VCALENDAR
""".replace("\n", "\r\n")
@inlineCallbacks
def setUp(self):
self.subs = {
"uuid1": CalVerifyMismatchTestsBase.uuid1,
"uuid2": CalVerifyMismatchTestsBase.uuid2,
}
self.now = DateTime.getNowUTC()
self.now.setHHMMSS(0, 0, 0)
self.subs["now"] = self.now
for i in range(30):
attrname = "now_back%s" % (i + 1,)
setattr(self, attrname, self.now.duplicate())
getattr(self, attrname).offsetDay(-(i + 1))
self.subs[attrname] = getattr(self, attrname)
attrname_12h = "now_back%s_12h" % (i + 1,)
setattr(self, attrname_12h, getattr(self, attrname).duplicate())
getattr(self, attrname_12h).offsetHours(12)
self.subs[attrname_12h] = getattr(self, attrname_12h)
attrname_1 = "now_back%s_1" % (i + 1,)
setattr(self, attrname_1, getattr(self, attrname).duplicate())
getattr(self, attrname_1).offsetSeconds(-1)
self.subs[attrname_1] = getattr(self, attrname_1)
for i in range(30):
attrname = "now_fwd%s" % (i + 1,)
setattr(self, attrname, self.now.duplicate())
getattr(self, attrname).offsetDay(i + 1)
self.subs[attrname] = getattr(self, attrname)
attrname_12h = "now_fwd%s_12h" % (i + 1,)
setattr(self, attrname_12h, getattr(self, attrname).duplicate())
getattr(self, attrname_12h).offsetHours(12)
self.subs[attrname_12h] = getattr(self, attrname_12h)
attrname_1 = "now_fwd%s_1" % (i + 1,)
setattr(self, attrname_1, getattr(self, attrname).duplicate())
getattr(self, attrname_1).offsetSeconds(-1)
self.subs[attrname_1] = getattr(self, attrname_1)
self.requirements = {
CalVerifyMismatchTestsBase.uuid1: {
"calendar": {
"invite1.ics": (self.NO_ORGANIZER_ICS % self.subs, CalVerifyMismatchTestsBase.metadata,),
"invite2.ics": (self.VALID_ORGANIZER_ICS % self.subs, CalVerifyMismatchTestsBase.metadata,),
"invite3.ics": (self.VALID_ORGANIZER_OVERRIDE_ICS % self.subs, CalVerifyMismatchTestsBase.metadata,),
},
"inbox": {},
},
CalVerifyMismatchTestsBase.uuid2: {
"calendar": {
"invite2a.ics": (self.VALID_ATTENDEE_ICS % self.subs, CalVerifyMismatchTestsBase.metadata,),
},
"inbox": {},
},
CalVerifyMismatchTestsBase.uuid3: {
"calendar": {},
"inbox": {},
},
CalVerifyMismatchTestsBase.uuidl1: {
"calendar": {},
"inbox": {},
},
}
yield super(CalVerifyEventPurge, self).setUp()
@inlineCallbacks
def test_validSplit(self):
"""
EventSplitService.doAction to split a recurring event at a given RECURRENCE-ID. Make sure
both the organizer's and the attendee's copies are split into matching past and future resources.
"""
options = {
"nuke": False,
"missing": False,
"ical": False,
"mismatch": False,
"double": True,
"dark-purge": False,
"split": True,
"path": "/calendars/__uids__/%(uuid1)s/calendar/invite2.ics" % self.subs,
"rid": "%(now_fwd11)s" % self.subs,
"summary": False,
}
output = StringIO()
calverify = EventSplitService(self._sqlCalendarStore, options, output, reactor, config)
oldUID, oldRelatedTo = yield calverify.doAction()
relsubs = dict(self.subs)
relsubs["uid"] = oldUID
relsubs["relID"] = oldRelatedTo
calendar = yield self.calendarUnderTest(home=CalVerifyMismatchTestsBase.uuid1, name="calendar")
objs = yield calendar.listObjectResources()
self.assertEqual(len(objs), 4)
self.assertTrue("invite2.ics" in objs)
oldName = filter(lambda x: not x.startswith("invite"), objs)[0]
obj1 = yield calendar.objectResourceWithName("invite2.ics")
ical1 = yield obj1.component()
self.assertEqual(normalize_iCalStr(ical1), self.VALID_ORGANIZER_FUTURE_ICS % relsubs)
obj2 = yield calendar.objectResourceWithName(oldName)
ical2 = yield obj2.component()
self.assertEqual(normalize_iCalStr(ical2), self.VALID_ORGANIZER_PAST_ICS % relsubs)
calendar = yield self.calendarUnderTest(home=CalVerifyMismatchTestsBase.uuid2, name="calendar")
objs = yield calendar.listObjectResources()
self.assertEqual(len(objs), 2)
self.assertTrue("invite2a.ics" in objs)
oldName = filter(lambda x: not x.startswith("invite"), objs)[0]
obj1 = yield calendar.objectResourceWithName("invite2a.ics")
ical1 = yield obj1.component()
self.assertEqual(normalize_iCalStr(ical1), self.VALID_ATTENDEE_FUTURE_ICS % relsubs)
obj2 = yield calendar.objectResourceWithName(oldName)
ical2 = yield obj2.component()
self.assertEqual(normalize_iCalStr(ical2), self.VALID_ATTENDEE_PAST_ICS % relsubs)
@inlineCallbacks
def test_summary(self):
"""
EventSplitService.doAction in summary-only mode. Make sure the summary output lists
each instance start and flags the requested split point.
"""
options = {
"nuke": False,
"missing": False,
"ical": False,
"mismatch": False,
"double": True,
"dark-purge": False,
"split": True,
"path": "/calendars/__uids__/%(uuid1)s/calendar/invite3.ics" % self.subs,
"rid": "%(now_fwd11)s" % self.subs,
"summary": True,
}
output = StringIO()
calverify = EventSplitService(self._sqlCalendarStore, options, output, reactor, config)
yield calverify.doAction()
result = output.getvalue().splitlines()
self.assertTrue("%(now)s" % self.subs in result)
self.assertTrue("%(now_fwd10)s" % self.subs in result)
self.assertTrue("%(now_fwd11)s *" % self.subs in result)
self.assertTrue("%(now_fwd12)s" % self.subs in result)
class CalVerifyMissingLocations(CalVerifyMismatchTestsBase):
"""
Tests calverify for missing locations.
"""
subs = {
"year": nowYear,
"month": nowMonth,
"uuid1": CalVerifyMismatchTestsBase.uuid1,
"uuid2": CalVerifyMismatchTestsBase.uuid2,
"uuid3": CalVerifyMismatchTestsBase.uuid3,
"uuidl1": CalVerifyMismatchTestsBase.uuidl1,
"uuidl2": CalVerifyMismatchTestsBase.uuidl2,
}
# Valid event
VALID_ICS = """BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Apple Inc.//iCal 4.0.1//EN
CALSCALE:GREGORIAN
BEGIN:VEVENT
CREATED:20100303T181216Z
UID:VALID_ICS
TRANSP:OPAQUE
SUMMARY:VALID_ICS
DTSTART:%(year)s%(month)02d08T100000Z
DURATION:PT1H
DTSTAMP:20100303T181220Z
SEQUENCE:2
ORGANIZER:urn:x-uid:%(uuid1)s
ATTENDEE:urn:x-uid:%(uuid1)s
ATTENDEE:urn:x-uid:%(uuid2)s
ATTENDEE:urn:x-uid:%(uuidl1)s
LOCATION:Room 01
END:VEVENT
END:VCALENDAR
""".replace("\n", "\r\n") % subs
# Valid event
VALID_MULTI_ICS = """BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Apple Inc.//iCal 4.0.1//EN
CALSCALE:GREGORIAN
BEGIN:VEVENT
CREATED:20100303T181216Z
UID:VALID_MULTI_ICS
TRANSP:OPAQUE
SUMMARY:VALID_MULTI_ICS
DTSTART:%(year)s%(month)02d08T100000Z
DURATION:PT1H
DTSTAMP:20100303T181220Z
SEQUENCE:2
ORGANIZER:urn:x-uid:%(uuid1)s
ATTENDEE:urn:x-uid:%(uuid1)s
ATTENDEE:urn:x-uid:%(uuid2)s
ATTENDEE:urn:x-uid:%(uuidl1)s
ATTENDEE:urn:x-uid:%(uuidl2)s
LOCATION:Room 01\\;Room 02
END:VEVENT
END:VCALENDAR
""".replace("\n", "\r\n") % subs
# Invalid event
INVALID_ICS = """BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Apple Inc.//iCal 4.0.1//EN
CALSCALE:GREGORIAN
BEGIN:VEVENT
CREATED:20100303T181216Z
UID:INVALID_ICS
TRANSP:OPAQUE
SUMMARY:INVALID_ICS
DTSTART:%(year)s%(month)02d08T120000Z
DURATION:PT1H
DTSTAMP:20100303T181220Z
SEQUENCE:2
ORGANIZER:urn:x-uid:%(uuid1)s
ATTENDEE:urn:x-uid:%(uuid1)s
ATTENDEE:urn:x-uid:%(uuid2)s
ATTENDEE:urn:x-uid:%(uuidl1)s
END:VEVENT
END:VCALENDAR
""".replace("\n", "\r\n") % subs
# Invalid event
INVALID_MULTI_ICS = """BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Apple Inc.//iCal 4.0.1//EN
CALSCALE:GREGORIAN
BEGIN:VEVENT
CREATED:20100303T181216Z
UID:INVALID_MULTI_ICS
TRANSP:OPAQUE
SUMMARY:INVALID_MULTI_ICS
DTSTART:%(year)s%(month)02d08T120000Z
DURATION:PT1H
DTSTAMP:20100303T181220Z
SEQUENCE:2
ORGANIZER:urn:x-uid:%(uuid1)s
ATTENDEE:urn:x-uid:%(uuid1)s
ATTENDEE:urn:x-uid:%(uuid2)s
ATTENDEE:urn:x-uid:%(uuidl1)s
ATTENDEE:urn:x-uid:%(uuidl2)s
LOCATION:Room 02
END:VEVENT
END:VCALENDAR
""".replace("\n", "\r\n") % subs
# Room as organizer event
ROOM_ORGANIZER_ICS = """BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Apple Inc.//iCal 4.0.1//EN
CALSCALE:GREGORIAN
BEGIN:VEVENT
CREATED:20100303T181216Z
UID:ROOM_ORGANIZER_ICS
TRANSP:OPAQUE
SUMMARY:ROOM_ORGANIZER_ICS
DTSTART:%(year)s%(month)02d08T120000Z
DURATION:PT1H
DTSTAMP:20100303T181220Z
SEQUENCE:2
ORGANIZER:urn:x-uid:%(uuidl1)s
ATTENDEE:urn:x-uid:%(uuidl1)s
END:VEVENT
END:VCALENDAR
""".replace("\n", "\r\n") % subs
# Room event without organizer
ROOM_NO_ORGANIZER_ICS = """BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Apple Inc.//iCal 4.0.1//EN
CALSCALE:GREGORIAN
BEGIN:VEVENT
CREATED:20100303T181216Z
UID:ROOM_NO_ORGANIZER_ICS
TRANSP:OPAQUE
SUMMARY:ROOM_NO_ORGANIZER_ICS
DTSTART:%(year)s%(month)02d08T120000Z
DURATION:PT1H
DTSTAMP:20100303T181220Z
SEQUENCE:2
END:VEVENT
END:VCALENDAR
""".replace("\n", "\r\n") % subs
# Invalid event with no organizer copy
ROOM_MISSING_ORGANIZER_ICS = """BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Apple Inc.//iCal 4.0.1//EN
CALSCALE:GREGORIAN
BEGIN:VEVENT
CREATED:20100303T181216Z
UID:ROOM_MISSING_ORGANIZER_ICS
TRANSP:OPAQUE
SUMMARY:ROOM_MISSING_ORGANIZER_ICS
DTSTART:%(year)s%(month)02d08T120000Z
DURATION:PT1H
DTSTAMP:20100303T181220Z
SEQUENCE:2
ORGANIZER:urn:x-uid:%(uuid1)s
ATTENDEE:urn:x-uid:%(uuid1)s
ATTENDEE:urn:x-uid:%(uuid2)s
ATTENDEE:urn:x-uid:%(uuidl1)s
END:VEVENT
END:VCALENDAR
""".replace("\n", "\r\n") % subs
allEvents = {
"invite1.ics": (VALID_ICS, CalVerifyMismatchTestsBase.metadata,),
"invite2.ics": (INVALID_ICS, CalVerifyMismatchTestsBase.metadata,),
"invite3.ics": (VALID_MULTI_ICS, CalVerifyMismatchTestsBase.metadata,),
"invite4.ics": (INVALID_MULTI_ICS, CalVerifyMismatchTestsBase.metadata,),
}
allEvents_Room1 = {
"invite1.ics": (VALID_ICS, CalVerifyMismatchTestsBase.metadata,),
"invite2.ics": (INVALID_ICS, CalVerifyMismatchTestsBase.metadata,),
"invite3.ics": (VALID_MULTI_ICS, CalVerifyMismatchTestsBase.metadata,),
"invite4.ics": (INVALID_MULTI_ICS, CalVerifyMismatchTestsBase.metadata,),
"invite5.ics": (ROOM_ORGANIZER_ICS, CalVerifyMismatchTestsBase.metadata,),
"invite6.ics": (ROOM_NO_ORGANIZER_ICS, CalVerifyMismatchTestsBase.metadata,),
"invite7.ics": (ROOM_MISSING_ORGANIZER_ICS, CalVerifyMismatchTestsBase.metadata,),
}
allEvents_Room2 = {
"invite3.ics": (VALID_MULTI_ICS, CalVerifyMismatchTestsBase.metadata,),
"invite4.ics": (INVALID_MULTI_ICS, CalVerifyMismatchTestsBase.metadata,),
}
requirements = {
CalVerifyMismatchTestsBase.uuid1: {
"calendar": allEvents,
"inbox": {},
},
CalVerifyMismatchTestsBase.uuid2: {
"calendar": allEvents,
"inbox": {},
},
CalVerifyMismatchTestsBase.uuid3: {
"calendar": {},
"inbox": {},
},
CalVerifyMismatchTestsBase.uuidl1: {
"calendar": allEvents_Room1,
"inbox": {},
},
CalVerifyMismatchTestsBase.uuidl2: {
"calendar": allEvents_Room2,
"inbox": {},
},
}
badEvents = set(("INVALID_ICS", "INVALID_MULTI_ICS", "ROOM_MISSING_ORGANIZER_ICS", "ROOM_ORGANIZER_ICS",))
@inlineCallbacks
def test_scanMissingLocations(self):
"""
MissingLocationService.doAction without fix for missing locations. Make sure it detects
as much as it can. Make sure sync-token is not changed.
"""
sync_token_oldl1 = (yield (yield self.calendarUnderTest(home=self.uuidl1, name="calendar")).syncToken())
yield self.commit()
options = {
"ical": False,
"badcua": False,
"mismatch": False,
"nobase64": False,
"double": False,
"dark-purge": False,
"missing-location": True,
"fix": False,
"verbose": False,
"details": False,
"summary": False,
"days": 365,
"uid": "",
"uuid": self.uuidl1,
"tzid": "utc",
"start": DateTime(nowYear, 1, 1, 0, 0, 0),
"no-organizer": False,
"invalid-organizer": False,
"disabled-organizer": False,
}
output = StringIO()
calverify = MissingLocationService(self._sqlCalendarStore, options, output, reactor, config)
yield calverify.doAction()
self.assertEqual(calverify.results["Number of events to process"], len(self.requirements[CalVerifyMismatchTestsBase.uuidl1]["calendar"]))
self.assertEqual(
set([i.uid for i in calverify.results["Bad Events"]]),
self.badEvents,
)
self.assertEqual(calverify.results["Number of bad events"], len(self.badEvents))
self.assertTrue("Fix bad events" not in calverify.results)
sync_token_newl1 = (yield (yield self.calendarUnderTest(home=self.uuidl1, name="calendar")).syncToken())
self.assertEqual(sync_token_oldl1, sync_token_newl1)
@inlineCallbacks
def test_fixMissingLocations(self):
"""
MissingLocationService.doAction with fix for missing locations. Make sure it detects
as much as it can. Make sure sync-token is changed.
"""
# Make sure location is not yet in each user's copy of the event
for uid in (self.uuid1, self.uuid2, self.uuidl1,):
calobj = yield self.calendarObjectUnderTest(home=uid, calendar_name="calendar", name="invite2.ics")
caldata = yield calobj.componentForUser()
self.assertTrue("LOCATION:" not in str(caldata))
yield self.commit()
sync_token_oldl1 = (yield (yield self.calendarUnderTest(home=self.uuidl1, name="calendar")).syncToken())
yield self.commit()
options = {
"ical": False,
"badcua": False,
"mismatch": False,
"nobase64": False,
"double": False,
"dark-purge": False,
"missing-location": True,
"fix": True,
"verbose": False,
"details": False,
"summary": False,
"days": 365,
"uid": "",
"uuid": self.uuidl1,
"tzid": "utc",
"start": DateTime(nowYear, 1, 1, 0, 0, 0),
"no-organizer": False,
"invalid-organizer": False,
"disabled-organizer": False,
}
output = StringIO()
calverify = MissingLocationService(self._sqlCalendarStore, options, output, reactor, config)
yield calverify.doAction()
self.assertEqual(calverify.results["Number of events to process"], len(self.requirements[CalVerifyMismatchTestsBase.uuidl1]["calendar"]))
self.assertEqual(
set([i.uid for i in calverify.results["Bad Events"]]),
self.badEvents,
)
self.assertEqual(calverify.results["Number of bad events"], len(self.badEvents))
self.assertEqual(calverify.results["Fix bad events"], len(self.badEvents))
sync_token_newl1 = (yield (yield self.calendarUnderTest(home=self.uuidl1, name="calendar")).syncToken())
self.assertNotEqual(sync_token_oldl1, sync_token_newl1)
yield self.commit()
# Wait for it to complete
yield JobItem.waitEmpty(self._sqlCalendarStore.newTransaction, reactor, 60)
# Re-scan after changes to make sure there are no errors
options["fix"] = False
options["uuid"] = self.uuidl1
calverify = MissingLocationService(self._sqlCalendarStore, options, output, reactor, config)
yield calverify.doAction()
self.assertEqual(calverify.results["Number of events to process"], len(self.requirements[CalVerifyMismatchTestsBase.uuidl1]["calendar"]) - 2)
self.assertEqual(len(calverify.results["Bad Events"]), 0)
self.assertTrue("Fix bad events" not in calverify.results)
# Make sure location is now in each user's copy of the event
for uid in (self.uuid1, self.uuid2, self.uuidl1,):
calobj = yield self.calendarObjectUnderTest(home=uid, calendar_name="calendar", name="invite2.ics")
caldata = yield calobj.componentForUser()
self.assertTrue("LOCATION:" in str(caldata))
calobj = yield self.calendarObjectUnderTest(home=self.uuidl1, calendar_name="calendar", name="invite3.ics")
self.assertTrue(calobj is not None)
calobj = yield self.calendarObjectUnderTest(home=self.uuidl2, calendar_name="calendar", name="invite3.ics")
self.assertTrue(calobj is not None)
calobj = yield self.calendarObjectUnderTest(home=self.uuidl1, calendar_name="calendar", name="invite4.ics")
self.assertTrue(calobj is None)
calobj = yield self.calendarObjectUnderTest(home=self.uuidl2, calendar_name="calendar", name="invite4.ics")
self.assertTrue(calobj is not None)
calobj = yield self.calendarObjectUnderTest(home=self.uuidl1, calendar_name="calendar", name="invite5.ics")
caldata = yield calobj.componentForUser()
self.assertTrue("LOCATION:" in str(caldata))
calobj = yield self.calendarObjectUnderTest(home=self.uuidl1, calendar_name="calendar", name="invite6.ics")
caldata = yield calobj.componentForUser()
self.assertTrue("LOCATION:" not in str(caldata))
calobj = yield self.calendarObjectUnderTest(home=self.uuidl1, calendar_name="calendar", name="invite7.ics")
self.assertTrue(calobj is None)
yield self.commit()
| 34.716076 | 149 | 0.686385 | 12,988 | 110,779 | 5.756314 | 0.046658 | 0.011021 | 0.019007 | 0.025146 | 0.903241 | 0.876476 | 0.847184 | 0.832123 | 0.810856 | 0.804489 | 0 | 0.092975 | 0.182011 | 110,779 | 3,190 | 150 | 34.726959 | 0.732078 | 0.049576 | 0 | 0.809225 | 0 | 0.019188 | 0.448457 | 0.236373 | 0 | 0 | 0 | 0 | 0.077491 | 1 | 0.009225 | false | 0 | 0.004797 | 0 | 0.047232 | 0.000369 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
849ea6b23b80a6a84ed5a4adc35f4a55ae877148 | 7,332 | py | Python | tests/performance_indicator/test_performance_indicator.py | gabicavalcante/pymoo | 1711ce3a96e5ef622d0116d6c7ea4d26cbe2c846 | [
"Apache-2.0"
] | 11 | 2018-05-22T17:38:02.000Z | 2022-02-28T03:34:33.000Z | tests/performance_indicator/test_performance_indicator.py | gabicavalcante/pymoo | 1711ce3a96e5ef622d0116d6c7ea4d26cbe2c846 | [
"Apache-2.0"
] | 15 | 2022-01-03T19:36:36.000Z | 2022-03-30T03:57:58.000Z | tests/performance_indicator/test_performance_indicator.py | gabicavalcante/pymoo | 1711ce3a96e5ef622d0116d6c7ea4d26cbe2c846 | [
"Apache-2.0"
] | 3 | 2021-11-22T08:01:47.000Z | 2022-03-11T08:53:58.000Z | import os
import unittest
import numpy as np
from pymoo.configuration import get_pymoo
from pymoo.factory import get_performance_indicator
from pymoo.performance_indicator.gd import GD
from pymoo.performance_indicator.igd import IGD
from tests.test_usage import test_usage
def get_indicators(pf):
gd = get_performance_indicator("gd", pf)
igd = get_performance_indicator("igd", pf)
gd_plus = get_performance_indicator("gd+", pf)
igd_plus = get_performance_indicator("igd+", pf)
return gd, igd, gd_plus, igd_plus
class PerformanceIndicatorTest(unittest.TestCase):
def test_usages(self):
test_usage([os.path.join(get_pymoo(), "pymoo", "usage", "usage_performance_indicator.py")])
# test whether they return the same as values from jmetalpy
def test_values_of_indicators(self):
l = [
(GD, "gd"),
(IGD, "igd")
]
folder = os.path.join(get_pymoo(), "tests", "performance_indicator")
pf = np.loadtxt(os.path.join(folder, "performance_indicators.pf"))
for indicator, ext in l:
for i in range(1, 5):
F = np.loadtxt(os.path.join(folder, "performance_indicators_%s.f" % i))
val = indicator(pf).calc(F)
correct = np.loadtxt(os.path.join(folder, "performance_indicators_%s.%s" % (i, ext)))
self.assertTrue(correct == val)
def test_performance_indicator_1(self):
A = np.array([2, 5])
B = np.array([3, 9])
D = np.array([2, 1])
pf = np.array([[1, 0], [0, 10]])
gd, igd, gd_plus, igd_plus = get_indicators(pf)
self.assertAlmostEqual(gd.calc(A), 5.099, places=2)
self.assertAlmostEqual(gd.calc(B), 3.162, places=2)
self.assertAlmostEqual(igd.calc(A), 5.242, places=2)
self.assertAlmostEqual(igd.calc(B), 6.191, places=2)
self.assertAlmostEqual(igd.calc(D), 5.32, places=2)
self.assertAlmostEqual(igd_plus.calc(A), 3.550, places=2)
self.assertAlmostEqual(igd_plus.calc(B), 6.110, places=2)
self.assertAlmostEqual(gd_plus.calc(A), 2.0, places=2)
self.assertAlmostEqual(gd_plus.calc(B), 3.0, places=2)
def test_performance_indicator_2(self):
A = np.array([5, 2])
B = np.array([11, 3])
pf = np.array([[0, 1], [10, 0]])
gd, igd, gd_plus, igd_plus = get_indicators(pf)
self.assertAlmostEqual(gd.calc(A), 5.099, places=2)
self.assertAlmostEqual(gd.calc(B), 3.162, places=2)
self.assertAlmostEqual(gd_plus.calc(A), 2.0, places=2)
self.assertAlmostEqual(gd_plus.calc(B), 3.162, places=2)
self.assertAlmostEqual(igd.calc(A), 5.242, places=2)
self.assertAlmostEqual(igd.calc(B), 7.171, places=2)
self.assertAlmostEqual(igd_plus.calc(A), 3.550, places=2)
self.assertAlmostEqual(igd_plus.calc(B), 7.171, places=2)
def test_performance_indicator_3(self):
A = np.array([2, 5])
B = np.array([3, 9])
C = np.array([10, 10])
D = np.array([2, 1])
pf = np.array([[1, 0], [0, 10]])
gd, igd, gd_plus, igd_plus = get_indicators(pf)
self.assertAlmostEqual(gd.calc(D), 1.414, places=2)
self.assertAlmostEqual(gd.calc(A), 5.099, places=2)
self.assertAlmostEqual(gd_plus.calc(D), 1.414, places=2)
self.assertAlmostEqual(gd_plus.calc(A), 2.00, places=2)
self.assertAlmostEqual(igd.calc(D), 5.317, places=2)
self.assertAlmostEqual(igd.calc(A), 5.242, places=2)
self.assertAlmostEqual(igd_plus.calc(D), 1.707, places=2)
self.assertAlmostEqual(igd_plus.calc(A), 3.550, places=2)
def test_performance_indicator_4(self):
A = np.array([[2, 4], [3, 3], [4, 2]])
B = np.array([[2, 8], [4, 4], [8, 2]])
pf = np.array([[0, 10], [1, 6], [2, 2], [6, 1], [10, 0]])
gd, igd, gd_plus, igd_plus = get_indicators(pf)
self.assertAlmostEqual(gd.calc(A), 1.805, places=2)
self.assertAlmostEqual(gd.calc(B), 2.434, places=2)
self.assertAlmostEqual(gd_plus.calc(A), 1.138, places=2)
self.assertAlmostEqual(gd_plus.calc(B), 2.276, places=2)
self.assertAlmostEqual(igd.calc(A), 3.707, places=2)
self.assertAlmostEqual(igd.calc(B), 2.591, places=2)
self.assertAlmostEqual(igd_plus.calc(A), 1.483, places=2)
self.assertAlmostEqual(igd_plus.calc(B), 2.260, places=2)
def test_performance_indicator_5(self):
A = np.array([[5, 2]])
B = np.array([[6, 4], [10, 3]])
pf = np.array([[0, 1], [10, 0]])
gd, igd, gd_plus, igd_plus = get_indicators(pf)
self.assertAlmostEqual(gd.calc(A), 5.099, places=2)
self.assertAlmostEqual(gd.calc(B), 4.328, places=2)
self.assertAlmostEqual(gd_plus.calc(A), 2.0, places=2)
self.assertAlmostEqual(gd_plus.calc(B), 3.5, places=2)
self.assertAlmostEqual(igd.calc(A), 5.242, places=2)
self.assertAlmostEqual(igd.calc(B), 4.854, places=2)
self.assertAlmostEqual(igd_plus.calc(A), 3.550, places=2)
self.assertAlmostEqual(igd_plus.calc(B), 4.854, places=2)
def test_performance_indicator_6(self):
A = np.array([[1, 5]])
B = np.array([[5, 6]])
pf = np.array([[4, 4]])
gd, igd, gd_plus, igd_plus = get_indicators(pf)
self.assertAlmostEqual(gd.calc(A), 3.162, places=2)
self.assertAlmostEqual(gd.calc(B), 2.236, places=2)
self.assertAlmostEqual(gd_plus.calc(A), 1.0, places=2)
self.assertAlmostEqual(gd_plus.calc(B), 2.236, places=2)
self.assertAlmostEqual(igd.calc(A), 3.162, places=2)
self.assertAlmostEqual(igd.calc(B), 2.236, places=2)
self.assertAlmostEqual(igd_plus.calc(A), 1.0, places=2)
self.assertAlmostEqual(igd_plus.calc(B), 2.236, places=2)
def test_performance_indicator_8(self):
A = np.array([[1, 8], [2, 2], [8, 1]])
B = np.array([[4, 3]])
pf = np.array([[0, 0]])
gd, igd, gd_plus, igd_plus = get_indicators(pf)
self.assertAlmostEqual(gd.calc(A), 6.318, places=2)
self.assertAlmostEqual(gd.calc(B), 5.0, places=2)
self.assertAlmostEqual(gd_plus.calc(A), 6.318, places=2)
self.assertAlmostEqual(gd_plus.calc(B), 5.0, places=2)
self.assertAlmostEqual(igd.calc(A), 2.828, places=2)
self.assertAlmostEqual(igd.calc(B), 5.0, places=2)
self.assertAlmostEqual(igd_plus.calc(A), 2.828, places=2)
self.assertAlmostEqual(igd_plus.calc(B), 5.0, places=2)
def test_performance_indicator_9(self):
A = np.array([[1, 8], [2, 2], [8, 1]])
B = np.array([[2, 2]])
pf = np.array([[0, 0]])
gd, igd, gd_plus, igd_plus = get_indicators(pf)
self.assertAlmostEqual(gd.calc(A), 6.318, places=2)
self.assertAlmostEqual(gd.calc(B), 2.828, places=2)
self.assertAlmostEqual(gd_plus.calc(A), 6.318, places=2)
self.assertAlmostEqual(gd_plus.calc(B), 2.828, places=2)
self.assertAlmostEqual(igd.calc(A), 2.828, places=2)
self.assertAlmostEqual(igd.calc(B), 2.828, places=2)
self.assertAlmostEqual(igd_plus.calc(A), 2.828, places=2)
self.assertAlmostEqual(igd_plus.calc(B), 2.828, places=2)
if __name__ == '__main__':
unittest.main()
| 37.218274 | 101 | 0.621386 | 1,096 | 7,332 | 4.044708 | 0.089416 | 0.307918 | 0.141439 | 0.360027 | 0.829686 | 0.800135 | 0.742387 | 0.719152 | 0.647192 | 0.516129 | 0 | 0.071292 | 0.21754 | 7,332 | 196 | 102 | 37.408163 | 0.701412 | 0.007774 | 0 | 0.335714 | 0 | 0 | 0.023512 | 0.018012 | 0 | 0 | 0 | 0 | 0.471429 | 1 | 0.078571 | false | 0 | 0.057143 | 0 | 0.15 | 0 | 0 | 0 | 0 | null | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
ca2cabfda78f90cacf9819d46b0636f5bc2f5f3a | 6,792 | py | Python | geometric2dr/embedding_methods/skipgram_trainer.py | paulmorio/geo2dr | 49d5f1cdc0a4aa0c2c19744f6b1c723fd5988955 | [
"MIT"
] | 32 | 2020-03-13T21:09:50.000Z | 2021-10-02T13:01:46.000Z | geometric2dr/embedding_methods/skipgram_trainer.py | paulmorio/geo2dr | 49d5f1cdc0a4aa0c2c19744f6b1c723fd5988955 | [
"MIT"
] | 3 | 2020-03-22T14:34:49.000Z | 2021-08-17T15:20:40.000Z | geometric2dr/embedding_methods/skipgram_trainer.py | paulmorio/geo2dr | 49d5f1cdc0a4aa0c2c19744f6b1c723fd5988955 | [
"MIT"
] | 5 | 2020-03-29T00:31:10.000Z | 2021-08-17T10:57:32.000Z | """
Module containing class definitions of trainers for skipgram models,
which are partly used by Deep Graph Kernels
Author: Paul Scherer
"""
import torch
import torch.optim as optim
from torch.utils.data import DataLoader
from tqdm import tqdm
# Internal
from .skipgram_data_reader import SkipgramCorpus, InMemorySkipgramCorpus
from .skipgram import Skipgram
from .utils import save_subgraph_embeddings
class Trainer(object):
"""Handles corpus construction (hard drive version), skipgram initialization and training.
Parameters
----------
corpus_dir : str
path to directory containing graph files
extension : str
extension used in graph documents produced after decomposition stage
max_files : int
the maximum number of graph files to consider, default of 0 uses all files
window_size : int
the size of the skipgram context window around each target substructure pattern
output_fh : str
the path to the file where embeddings should be saved
emb_dimension : int (default=128)
the desired dimension of the embeddings
batch_size : int (default=32)
the desired batch size
epochs : int (default=100)
the desired number of epochs for which the network should be trained
initial_lr : float (default=1e-3)
the initial learning rate
min_count : int (default=1)
the minimum number of times a pattern should occur across the dataset to
be considered part of the substructure pattern vocabulary
Returns
-------
self : Trainer
A Trainer instance
"""
def __init__(self, corpus_dir, extension, max_files, window_size, output_fh, emb_dimension=128, batch_size=32, epochs=100, initial_lr=1e-3, min_count=1):
self.corpus = SkipgramCorpus(corpus_dir, extension, max_files, min_count, window_size)
self.dataloader = DataLoader(self.corpus, batch_size, shuffle=False, num_workers=4, collate_fn = self.corpus.collate)
self.corpus_dir = corpus_dir
self.extension = extension
self.max_files = max_files
self.output_fh = output_fh
self.emb_dimension = emb_dimension
self.batch_size = batch_size
self.epochs = epochs
self.initial_lr = initial_lr
self.min_count = min_count
self.window_size = window_size
self.num_targets = self.corpus.num_subgraphs # the special feature here is that we are learning subgraph reps
self.vocab_size = self.corpus.num_subgraphs
self.skipgram = Skipgram(self.num_targets, self.vocab_size, self.emb_dimension)
if torch.cuda.is_available():
self.device = torch.device("cuda")
self.skipgram.cuda()
else:
self.device = torch.device("cpu")
def train(self):
"""Train the network with the settings used to initialise the Trainer"""
for epoch in range(self.epochs):
print("### Epoch: " + str(epoch))
optimizer = optim.Adagrad(self.skipgram.parameters(), lr=self.initial_lr)
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, len(self.dataloader))
running_loss = 0.0
for sample_batched in tqdm(self.dataloader):
if len(sample_batched[0]) > 1:
pos_target = sample_batched[0].to(self.device)
pos_context = sample_batched[1].to(self.device)
neg_context = sample_batched[2].to(self.device)
optimizer.zero_grad()
loss = self.skipgram.forward(pos_target, pos_context, neg_context) # the loss is integrated into the forward function
loss.backward()
optimizer.step()
scheduler.step()
running_loss = running_loss * 0.9 + loss.item() * 0.1
print(" Loss: " + str(running_loss))
final_embeddings = self.skipgram.target_embeddings.weight.cpu().data.numpy()
save_subgraph_embeddings(self.corpus, final_embeddings, self.output_fh)
return final_embeddings
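# A minimal usage sketch for the Trainer above (illustrative only: the corpus
# directory, graph-document extension, and output path are hypothetical
# placeholders, not files provided by this module):
#
#     trainer = Trainer(corpus_dir="data/graph_docs", extension=".wld2",
#                       max_files=0, window_size=5,
#                       output_fh="subgraph_embeddings.json",
#                       emb_dimension=64, batch_size=64, epochs=25,
#                       initial_lr=1e-3, min_count=1)
#     embeddings = trainer.train()  # trains the skipgram and saves/returns the embeddings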
class InMemoryTrainer(object):
"""Handles corpus construction (in-memory version), PVDBOW initialization and training.
Paramaters
----------
corpus_dir : str
path to directory containing graph files
extension : str
extension used in graph documents produced after decomposition stage
max_files : int
the maximum number of graph files to consider, default of 0 uses all files
window_size : int
the size of the skipgram context window around each target substructure pattern
output_fh : str
the path to the file where embeddings should be saved
emb_dimension : int (default=128)
the desired dimension of the embeddings
batch_size : int (default=32)
the desired batch size
epochs : int (default=100)
the desired number of epochs for which the network should be trained
initial_lr : float (default=1e-3)
the initial learning rate
min_count : int (default=1)
the minimum number of times a pattern should occur across the dataset to
be considered part of the substructure pattern vocabulary
Returns
-------
self : InMemoryTrainer
A trainer instance which has the dataset stored in memory for fast access
"""
def __init__(self, corpus_dir, extension, max_files, window_size, output_fh, emb_dimension=128, batch_size=32, epochs=100, initial_lr=1e-3, min_count=1):
self.corpus = InMemorySkipgramCorpus(corpus_dir, extension, max_files, min_count, window_size)
self.dataloader = DataLoader(self.corpus, batch_size, shuffle=False, num_workers=4, pin_memory=True, collate_fn = self.corpus.collate)
self.corpus_dir = corpus_dir
self.extension = extension
self.max_files = max_files
self.output_fh = output_fh
self.emb_dimension = emb_dimension
self.batch_size = batch_size
self.epochs = epochs
self.initial_lr = initial_lr
self.min_count = min_count
self.window_size = window_size
self.num_targets = self.corpus.num_subgraphs # the special feature here is that we are learning subgraph reps
self.vocab_size = self.corpus.num_subgraphs
self.skipgram = Skipgram(self.num_targets, self.vocab_size, self.emb_dimension)
if torch.cuda.is_available():
self.device = torch.device("cuda")
self.skipgram.cuda()
else:
self.device = torch.device("cpu")
def train(self):
"""Train the network with the settings used to initialise the Trainer"""
for epoch in range(self.epochs):
print("### Epoch: " + str(epoch))
optimizer = optim.Adagrad(self.skipgram.parameters(), lr=self.initial_lr)
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, len(self.dataloader))
running_loss = 0.0
for sample_batched in tqdm(self.dataloader):
if len(sample_batched[0]) > 1:
pos_target = sample_batched[0].to(self.device)
pos_context = sample_batched[1].to(self.device)
neg_context = sample_batched[2].to(self.device)
optimizer.zero_grad()
loss = self.skipgram.forward(pos_target, pos_context, neg_context) # the loss is integrated into the forward function
loss.backward()
optimizer.step()
scheduler.step()
running_loss = running_loss * 0.9 + loss.item() * 0.1
print(" Loss: " + str(running_loss))
final_embeddings = self.skipgram.target_embeddings.weight.cpu().data.numpy()
save_subgraph_embeddings(self.corpus, final_embeddings, self.output_fh)
return final_embeddings
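# InMemoryTrainer is a drop-in replacement for Trainer with the same constructor
# arguments; only the corpus is held in memory for faster sampling. A sketch,
# reusing the hypothetical paths from the example above:
#
#     trainer = InMemoryTrainer(corpus_dir="data/graph_docs", extension=".wld2",
#                               max_files=0, window_size=5,
#                               output_fh="subgraph_embeddings.json")
#     embeddings = trainer.train()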
if __name__ == '__main__':
pass
| 34.830769 | 154 | 0.752208 | 972 | 6,792 | 5.088477 | 0.193416 | 0.032349 | 0.014557 | 0.016983 | 0.871411 | 0.871411 | 0.871411 | 0.871411 | 0.871411 | 0.871411 | 0 | 0.012236 | 0.157686 | 6,792 | 194 | 155 | 35.010309 | 0.852299 | 0.376767 | 0 | 0.842105 | 0 | 0 | 0.013492 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.042105 | false | 0.010526 | 0.073684 | 0 | 0.136842 | 0.042105 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
ca80a942c592d60559a08153b385c5d4cc07ebb5 | 6,273 | py | Python | python_modules/dagster-graphql/dagster_graphql_tests/graphql/test_dynamic_pipeline.py | ericct/dagster | dd2c9f05751e1bae212a30dbc54381167a14f6c5 | [
"Apache-2.0"
] | null | null | null | python_modules/dagster-graphql/dagster_graphql_tests/graphql/test_dynamic_pipeline.py | ericct/dagster | dd2c9f05751e1bae212a30dbc54381167a14f6c5 | [
"Apache-2.0"
] | null | null | null | python_modules/dagster-graphql/dagster_graphql_tests/graphql/test_dynamic_pipeline.py | ericct/dagster | dd2c9f05751e1bae212a30dbc54381167a14f6c5 | [
"Apache-2.0"
] | null | null | null | from dagster.core.storage.tags import RESUME_RETRY_TAG
from dagster_graphql.client.query import (
LAUNCH_PIPELINE_EXECUTION_MUTATION,
LAUNCH_PIPELINE_REEXECUTION_MUTATION,
)
from dagster_graphql.test.utils import (
execute_dagster_graphql_and_finish_runs,
infer_pipeline_selector,
)
from .utils import (
get_all_logs_for_finished_run_via_subscription,
step_did_fail,
step_did_not_run,
step_did_succeed,
)
def test_dynamic_full_reexecution(graphql_context):
selector = infer_pipeline_selector(graphql_context, "dynamic_pipeline")
result = execute_dagster_graphql_and_finish_runs(
graphql_context,
LAUNCH_PIPELINE_EXECUTION_MUTATION,
variables={
"executionParams": {
"selector": selector,
"runConfigData": {
"solids": {"multiply_inputs": {"inputs": {"should_fail": {"value": True}}}},
"storage": {"filesystem": {}},
},
"mode": "default",
}
},
)
assert not result.errors
assert result.data
assert result.data["launchPipelineExecution"]["__typename"] == "LaunchPipelineRunSuccess"
assert result.data["launchPipelineExecution"]["run"]["pipeline"]["name"] == "dynamic_pipeline"
parent_run_id = result.data["launchPipelineExecution"]["run"]["runId"]
logs = get_all_logs_for_finished_run_via_subscription(graphql_context, parent_run_id)[
"pipelineRunLogs"
]["messages"]
assert step_did_succeed(logs, "emit")
assert step_did_succeed(logs, "multiply_inputs[0]")
assert step_did_succeed(logs, "multiply_inputs[1]")
assert step_did_fail(logs, "multiply_inputs[2]")
assert step_did_succeed(logs, "multiply_by_two[0]")
assert step_did_succeed(logs, "multiply_by_two[1]")
retry_one = execute_dagster_graphql_and_finish_runs(
graphql_context,
LAUNCH_PIPELINE_REEXECUTION_MUTATION,
variables={
"executionParams": {
"mode": "default",
"selector": selector,
"runConfigData": {
"solids": {"multiply_inputs": {"inputs": {"should_fail": {"value": True}}}},
"storage": {"filesystem": {}},
},
"executionMetadata": {
"rootRunId": parent_run_id,
"parentRunId": parent_run_id,
"tags": [{"key": RESUME_RETRY_TAG, "value": "true"}],
},
}
},
)
assert not retry_one.errors
assert retry_one.data
assert (
retry_one.data["launchPipelineReexecution"]["__typename"] == "LaunchPipelineRunSuccess"
), retry_one.data["launchPipelineReexecution"].get("message")
run_id = retry_one.data["launchPipelineReexecution"]["run"]["runId"]
logs = get_all_logs_for_finished_run_via_subscription(graphql_context, run_id)[
"pipelineRunLogs"
]["messages"]
assert step_did_not_run(logs, "emit")
assert step_did_not_run(logs, "multiply_inputs[0]")
assert step_did_not_run(logs, "multiply_inputs[1]")
assert step_did_succeed(logs, "multiply_inputs[2]")
assert step_did_not_run(logs, "multiply_by_two[0]")
assert step_did_not_run(logs, "multiply_by_two[1]")
assert step_did_succeed(logs, "multiply_by_two[2]")
def test_dynamic_subset(graphql_context):
selector = infer_pipeline_selector(graphql_context, "dynamic_pipeline")
result = execute_dagster_graphql_and_finish_runs(
graphql_context,
LAUNCH_PIPELINE_EXECUTION_MUTATION,
variables={
"executionParams": {
"selector": selector,
"runConfigData": {
"solids": {"multiply_inputs": {"inputs": {"should_fail": {"value": True}}}},
"storage": {"filesystem": {}},
},
"mode": "default",
}
},
)
assert not result.errors
assert result.data
assert result.data["launchPipelineExecution"]["__typename"] == "LaunchPipelineRunSuccess"
assert result.data["launchPipelineExecution"]["run"]["pipeline"]["name"] == "dynamic_pipeline"
parent_run_id = result.data["launchPipelineExecution"]["run"]["runId"]
logs = get_all_logs_for_finished_run_via_subscription(graphql_context, parent_run_id)[
"pipelineRunLogs"
]["messages"]
assert step_did_succeed(logs, "emit")
assert step_did_succeed(logs, "multiply_inputs[0]")
assert step_did_succeed(logs, "multiply_inputs[1]")
assert step_did_fail(logs, "multiply_inputs[2]")
assert step_did_succeed(logs, "multiply_by_two[0]")
assert step_did_succeed(logs, "multiply_by_two[1]")
retry_one = execute_dagster_graphql_and_finish_runs(
graphql_context,
LAUNCH_PIPELINE_REEXECUTION_MUTATION,
variables={
"executionParams": {
"mode": "default",
"selector": selector,
"runConfigData": {
"solids": {"multiply_inputs": {"inputs": {"should_fail": {"value": True}}}},
"storage": {"filesystem": {}},
},
"executionMetadata": {"rootRunId": parent_run_id, "parentRunId": parent_run_id,},
# manual version of from-failure above
"stepKeys": ["multiply_inputs[2]", "multiply_by_two[2]"],
}
},
)
assert not retry_one.errors
assert retry_one.data
assert (
retry_one.data["launchPipelineReexecution"]["__typename"] == "LaunchPipelineRunSuccess"
), retry_one.data["launchPipelineReexecution"].get("message")
run_id = retry_one.data["launchPipelineReexecution"]["run"]["runId"]
logs = get_all_logs_for_finished_run_via_subscription(graphql_context, run_id)[
"pipelineRunLogs"
]["messages"]
assert step_did_not_run(logs, "emit")
assert step_did_not_run(logs, "multiply_inputs[0]")
assert step_did_not_run(logs, "multiply_inputs[1]")
assert step_did_succeed(logs, "multiply_inputs[2]")
assert step_did_not_run(logs, "multiply_by_two[0]")
assert step_did_not_run(logs, "multiply_by_two[1]")
assert step_did_succeed(logs, "multiply_by_two[2]")
| 37.562874 | 98 | 0.642276 | 651 | 6,273 | 5.784946 | 0.133641 | 0.053903 | 0.08975 | 0.074349 | 0.894052 | 0.894052 | 0.885024 | 0.885024 | 0.874668 | 0.874668 | 0 | 0.005003 | 0.235294 | 6,273 | 166 | 99 | 37.789157 | 0.780071 | 0.005739 | 0 | 0.732394 | 0 | 0 | 0.261427 | 0.061588 | 0 | 0 | 0 | 0 | 0.28169 | 1 | 0.014085 | false | 0 | 0.028169 | 0 | 0.042254 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
0491ab085d8c2270c0984c253e7d73d82375bf66 | 2,660 | py | Python | tests/test_0action_mask.py | machineteaching-io/stable-baselines | 58c218de12d61d313bc5e9877a505668ff0bf661 | [
"MIT"
] | 1 | 2020-07-03T19:40:10.000Z | 2020-07-03T19:40:10.000Z | tests/test_0action_mask.py | machineteaching-io/stable-baselines | 58c218de12d61d313bc5e9877a505668ff0bf661 | [
"MIT"
] | null | null | null | tests/test_0action_mask.py | machineteaching-io/stable-baselines | 58c218de12d61d313bc5e9877a505668ff0bf661 | [
"MIT"
] | 1 | 2020-04-22T21:41:22.000Z | 2020-04-22T21:41:22.000Z | import os
import warnings
import pytest
from stable_baselines.common.policies import MlpPolicy
from stable_baselines.common.vec_env import SubprocVecEnv, DummyVecEnv
from stable_baselines import A2C, PPO2, SAC
from stable_baselines.common.action_mask_env import DiscreteActionMaskEnv, MultiDiscreteActionMaskEnv
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'
warnings.filterwarnings("ignore")
@pytest.mark.slow
@pytest.mark.parametrize('vec_env', [DummyVecEnv])
@pytest.mark.parametrize('policy', [MlpPolicy])
@pytest.mark.parametrize('env_class', [DiscreteActionMaskEnv, MultiDiscreteActionMaskEnv])
def test_action_mask_learn_ppo2(vec_env, policy, env_class):
env = vec_env([env_class])
model = PPO2(policy, env, verbose=0)
model.learn(total_timesteps=128)
env.close()
@pytest.mark.slow
@pytest.mark.parametrize('vec_env', [SubprocVecEnv, DummyVecEnv])
@pytest.mark.parametrize('policy', [MlpPolicy])
@pytest.mark.parametrize('env_class', [DiscreteActionMaskEnv, MultiDiscreteActionMaskEnv])
def test_action_mask_run_ppo2(vec_env, policy, env_class):
env = vec_env([env_class])
model = PPO2(policy, env, verbose=0, nminibatches=1)
obs, done, action_masks = env.reset(), [False], []
while not done[0]:
action, _states = model.predict(obs, action_mask=action_masks)
obs, _, done, infos = env.step(action)
action_masks.clear()
for info in infos:
env_action_mask = info.get('action_mask')
action_masks.append(env_action_mask)
env.close()
@pytest.mark.slow
@pytest.mark.parametrize('vec_env', [DummyVecEnv])
@pytest.mark.parametrize('policy', [MlpPolicy])
@pytest.mark.parametrize('env_class', [DiscreteActionMaskEnv, MultiDiscreteActionMaskEnv])
def test_action_mask_learn_a2c(vec_env, policy, env_class):
env = vec_env([env_class])
model = A2C(policy, env, verbose=0)
model.learn(total_timesteps=128)
env.close()
@pytest.mark.slow
@pytest.mark.parametrize('vec_env', [SubprocVecEnv, DummyVecEnv])
@pytest.mark.parametrize('policy', [MlpPolicy])
@pytest.mark.parametrize('env_class', [DiscreteActionMaskEnv, MultiDiscreteActionMaskEnv])
def test_action_mask_run_a2c(vec_env, policy, env_class):
env = vec_env([env_class])
model = A2C(policy, env, verbose=0)
obs, done, action_masks = env.reset(), [False], []
while not done[0]:
action, _states = model.predict(obs, action_mask=action_masks)
obs, _, done, infos = env.step(action)
action_masks.clear()
for info in infos:
env_action_mask = info.get('action_mask')
action_masks.append(env_action_mask)
env.close()
| 32.048193 | 101 | 0.730451 | 336 | 2,660 | 5.553571 | 0.193452 | 0.085745 | 0.135048 | 0.042872 | 0.819936 | 0.819936 | 0.819936 | 0.819936 | 0.819936 | 0.819936 | 0 | 0.010554 | 0.145113 | 2,660 | 82 | 102 | 32.439024 | 0.810026 | 0 | 0 | 0.745763 | 0 | 0 | 0.051543 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.067797 | false | 0 | 0.118644 | 0 | 0.186441 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
04adec2f0c858dbaf78e2176a31d2c67ae9569fc | 247 | py | Python | rx/internal/utils.py | rlugojr/RxPy | 9f9b1de0ab833e53b0d1626a3b43a6c9424f01ec | [
"ECL-2.0",
"Apache-2.0"
] | 78 | 2015-01-22T23:57:01.000Z | 2021-06-04T15:16:22.000Z | rx/internal/utils.py | rlugojr/RxPy | 9f9b1de0ab833e53b0d1626a3b43a6c9424f01ec | [
"ECL-2.0",
"Apache-2.0"
] | 1 | 2015-10-19T12:59:57.000Z | 2015-10-19T12:59:57.000Z | rx/internal/utils.py | rlugojr/RxPy | 9f9b1de0ab833e53b0d1626a3b43a6c9424f01ec | [
"ECL-2.0",
"Apache-2.0"
] | 11 | 2015-02-16T20:43:45.000Z | 2018-05-30T11:46:50.000Z | from rx import AnonymousObservable
from rx.disposables import CompositeDisposable
# Each subscription to the returned observable subscribes to xs and also holds
# r.disposable (r is typically a RefCountDisposable), so the shared resource
# stays alive for the lifetime of the subscription.
def add_ref(xs, r):
def subscribe(observer):
return CompositeDisposable(r.disposable, xs.subscribe(observer))
return AnonymousObservable(subscribe) | 30.875 | 72 | 0.789474 | 27 | 247 | 7.185185 | 0.555556 | 0.061856 | 0.237113 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.1417 | 247 | 8 | 73 | 30.875 | 0.915094 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0.333333 | 0.166667 | 1 | 0 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 7 |
04e1e7ab2059eedc9f69db92010c550ac514f759 | 81,747 | py | Python | tests/test_observable_multiple.py | rlugojr/RxPy | 9f9b1de0ab833e53b0d1626a3b43a6c9424f01ec | [
"ECL-2.0",
"Apache-2.0"
] | 78 | 2015-01-22T23:57:01.000Z | 2021-06-04T15:16:22.000Z | tests/test_observable_multiple.py | rlugojr/RxPy | 9f9b1de0ab833e53b0d1626a3b43a6c9424f01ec | [
"ECL-2.0",
"Apache-2.0"
] | 1 | 2015-10-19T12:59:57.000Z | 2015-10-19T12:59:57.000Z | tests/test_observable_multiple.py | rlugojr/RxPy | 9f9b1de0ab833e53b0d1626a3b43a6c9424f01ec | [
"ECL-2.0",
"Apache-2.0"
] | 11 | 2015-02-16T20:43:45.000Z | 2018-05-30T11:46:50.000Z | from rx import Observable
from rx.testing import TestScheduler, ReactiveTest, is_prime, MockDisposable
from rx.disposables import Disposable, SerialDisposable
on_next = ReactiveTest.on_next
on_completed = ReactiveTest.on_completed
on_error = ReactiveTest.on_error
subscribe = ReactiveTest.subscribe
subscribed = ReactiveTest.subscribed
disposed = ReactiveTest.disposed
created = ReactiveTest.created
class RxException(Exception):
pass
# Helper function for raising exceptions within lambdas
def _raise(ex):
raise RxException(ex)
def test_take_until_preempt_somedata_next():
scheduler = TestScheduler()
l_msgs = [on_next(150, 1), on_next(210, 2), on_next(220, 3), on_next(230, 4), on_next(240, 5), on_completed(250)]
r_msgs = [on_next(150, 1), on_next(225, 99), on_completed(230)]
l = scheduler.create_hot_observable(l_msgs)
r = scheduler.create_hot_observable(r_msgs)
def create():
return l.take_until(r)
results = scheduler.start(create)
results.messages.assert_equal(on_next(210, 2), on_next(220, 3), on_completed(225))
def test_take_until_preempt_somedata_error():
ex = 'ex'
scheduler = TestScheduler()
l_msgs = [on_next(150, 1), on_next(210, 2), on_next(220, 3), on_next(230, 4), on_next(240, 5), on_completed(250)]
r_msgs = [on_next(150, 1), on_error(225, ex)]
l = scheduler.create_hot_observable(l_msgs)
r = scheduler.create_hot_observable(r_msgs)
def create():
return l.take_until(r)
results = scheduler.start(create)
results.messages.assert_equal(on_next(210, 2), on_next(220, 3), on_error(225, ex))
def test_take_until_nopreempt_somedata_empty():
scheduler = TestScheduler()
l_msgs = [on_next(150, 1), on_next(210, 2), on_next(220, 3), on_next(230, 4), on_next(240, 5), on_completed(250)]
r_msgs = [on_next(150, 1), on_completed(225)]
l = scheduler.create_hot_observable(l_msgs)
r = scheduler.create_hot_observable(r_msgs)
def create():
return l.take_until(r)
results = scheduler.start(create)
results.messages.assert_equal(on_next(210, 2), on_next(220, 3), on_next(230, 4), on_next(240, 5), on_completed(250))
def test_take_until_nopreempt_somedata_never():
scheduler = TestScheduler()
l_msgs = [on_next(150, 1), on_next(210, 2), on_next(220, 3), on_next(230, 4), on_next(240, 5), on_completed(250)]
l = scheduler.create_hot_observable(l_msgs)
r = Observable.never()
def create():
return l.take_until(r)
results = scheduler.start(create)
results.messages.assert_equal(on_next(210, 2), on_next(220, 3), on_next(230, 4), on_next(240, 5), on_completed(250))
def test_take_until_preempt_never_next():
scheduler = TestScheduler()
r_msgs = [on_next(150, 1), on_next(225, 2), on_completed(250)]
l = Observable.never()
r = scheduler.create_hot_observable(r_msgs)
def create():
return l.take_until(r)
results = scheduler.start(create)
results.messages.assert_equal(on_completed(225))
def test_take_until_preempt_never_error():
ex = 'ex'
scheduler = TestScheduler()
r_msgs = [on_next(150, 1), on_error(225, ex)]
l = Observable.never()
r = scheduler.create_hot_observable(r_msgs)
def create():
return l.take_until(r)
results = scheduler.start(create)
results.messages.assert_equal(on_error(225, ex))
def test_take_until_nopreempt_never_empty():
scheduler = TestScheduler()
r_msgs = [on_next(150, 1), on_completed(225)]
l = Observable.never()
r = scheduler.create_hot_observable(r_msgs)
def create():
return l.take_until(r)
results = scheduler.start(create)
results.messages.assert_equal()
def test_take_until_nopreempt_never_never():
scheduler = TestScheduler()
l = Observable.never()
r = Observable.never()
def create():
return l.take_until(r)
results = scheduler.start(create)
results.messages.assert_equal()
def test_take_until_preempt_beforefirstproduced():
scheduler = TestScheduler()
l_msgs = [on_next(150, 1), on_next(230, 2), on_completed(240)]
r_msgs = [on_next(150, 1), on_next(210, 2), on_completed(220)]
l = scheduler.create_hot_observable(l_msgs)
r = scheduler.create_hot_observable(r_msgs)
def create():
return l.take_until(r)
results = scheduler.start(create)
results.messages.assert_equal(on_completed(210))
def test_take_until_preempt_beforefirstproduced_remain_silent_and_proper_disposed():
scheduler = TestScheduler()
l_msgs = [on_next(150, 1), on_error(215, 'ex'), on_completed(240)]
r_msgs = [on_next(150, 1), on_next(210, 2), on_completed(220)]
source_not_disposed = False
def action():
nonlocal source_not_disposed
source_not_disposed = True
l = scheduler.create_hot_observable(l_msgs).do_action(on_next=action)
r = scheduler.create_hot_observable(r_msgs)
def create():
return l.take_until(r)
results = scheduler.start(create)
results.messages.assert_equal(on_completed(210))
assert(not source_not_disposed)
def test_take_until_nopreempt_afterlastproduced_proper_disposed_signal():
scheduler = TestScheduler()
l_msgs = [on_next(150, 1), on_next(230, 2), on_completed(240)]
r_msgs = [on_next(150, 1), on_next(250, 2), on_completed(260)]
signal_not_disposed = False
l = scheduler.create_hot_observable(l_msgs)
def action():
nonlocal signal_not_disposed
signal_not_disposed = True
r = scheduler.create_hot_observable(r_msgs).do_action(on_next=action)
def create():
return l.take_until(r)
results = scheduler.start(create)
results.messages.assert_equal(on_next(230, 2), on_completed(240))
assert(not signal_not_disposed)
def test_skip_until_somedata_next():
scheduler = TestScheduler()
l_msgs = [on_next(150, 1), on_next(210, 2), on_next(220, 3), on_next(230, 4), on_next(240, 5), on_completed(250)]
r_msgs = [on_next(150, 1), on_next(225, 99), on_completed(230)]
l = scheduler.create_hot_observable(l_msgs)
r = scheduler.create_hot_observable(r_msgs)
def create():
return l.skip_until(r)
results = scheduler.start(create)
results.messages.assert_equal(on_next(230, 4), on_next(240, 5), on_completed(250))
def test_skip_until_somedata_error():
scheduler = TestScheduler()
ex = 'ex'
l_msgs = [on_next(150, 1), on_next(210, 2), on_next(220, 3), on_next(230, 4), on_next(240, 5), on_completed(250)]
r_msgs = [on_next(150, 1), on_error(225, ex)]
l = scheduler.create_hot_observable(l_msgs)
r = scheduler.create_hot_observable(r_msgs)
def create():
return l.skip_until(r)
results = scheduler.start(create)
results.messages.assert_equal(on_error(225, ex))
def test_skip_until_somedata_empty():
scheduler = TestScheduler()
l_msgs = [on_next(150, 1), on_next(210, 2), on_next(220, 3), on_next(230, 4), on_next(240, 5), on_completed(250)]
r_msgs = [on_next(150, 1), on_completed(225)]
l = scheduler.create_hot_observable(l_msgs)
r = scheduler.create_hot_observable(r_msgs)
def create():
return l.skip_until(r)
results = scheduler.start(create)
results.messages.assert_equal()
def test_skip_until_never_next():
scheduler = TestScheduler()
r_msgs = [on_next(150, 1), on_next(225, 2), on_completed(250)]
l = Observable.never()
r = scheduler.create_hot_observable(r_msgs)
def create():
return l.skip_until(r)
results = scheduler.start(create)
results.messages.assert_equal()
def test_skip_until_never_error():
ex = 'ex'
scheduler = TestScheduler()
r_msgs = [on_next(150, 1), on_error(225, ex)]
l = Observable.never()
r = scheduler.create_hot_observable(r_msgs)
def create():
return l.skip_until(r)
results = scheduler.start(create)
results.messages.assert_equal(on_error(225, ex))
def test_skip_until_somedata_never():
scheduler = TestScheduler()
l_msgs = [on_next(150, 1), on_next(210, 2), on_next(220, 3), on_next(230, 4), on_next(240, 5), on_completed(250)]
l = scheduler.create_hot_observable(l_msgs)
r = Observable.never()
def create():
return l.skip_until(r)
results = scheduler.start(create)
results.messages.assert_equal()
def test_skip_until_never_empty():
scheduler = TestScheduler()
r_msgs = [on_next(150, 1), on_completed(225)]
l = Observable.never()
r = scheduler.create_hot_observable(r_msgs)
def create():
return l.skip_until(r)
results = scheduler.start(create)
results.messages.assert_equal()
def test_skip_until_never_never():
scheduler = TestScheduler()
l = Observable.never()
r = Observable.never()
def create():
return l.skip_until(r)
results = scheduler.start(create)
results.messages.assert_equal()
def test_skip_until_has_completed_causes_disposal():
scheduler = TestScheduler()
l_msgs = [on_next(150, 1), on_next(210, 2), on_next(220, 3), on_next(230, 4), on_next(240, 5), on_completed(250)]
disposed = False
l = scheduler.create_hot_observable(l_msgs)
def subscribe(observer):
nonlocal disposed
disposed = True
r = Observable(subscribe)
def create():
return l.skip_until(r)
results = scheduler.start(create)
results.messages.assert_equal()
assert(disposed)
def test_merge_never2():
scheduler = TestScheduler()
n1 = Observable.never()
n2 = Observable.never()
def create():
return Observable.merge(scheduler, n1, n2)
results = scheduler.start(create)
results.messages.assert_equal()
def test_merge_never3():
scheduler = TestScheduler()
n1 = Observable.never()
n2 = Observable.never()
n3 = Observable.never()
def create():
return Observable.merge(scheduler, n1, n2, n3)
results = scheduler.start(create)
results.messages.assert_equal()
def test_merge_empty2():
scheduler = TestScheduler()
e1 = Observable.empty()
e2 = Observable.empty()
def create():
return Observable.merge(scheduler, e1, e2)
results = scheduler.start(create)
results.messages.assert_equal(on_completed(203))
def test_merge_empty3():
scheduler = TestScheduler()
e1 = Observable.empty()
e2 = Observable.empty()
e3 = Observable.empty()
def create():
return Observable.merge(scheduler, e1, e2, e3)
results = scheduler.start(create)
results.messages.assert_equal(on_completed(204))
def test_merge_empty_delayed2_right_last():
scheduler = TestScheduler()
l_msgs = [on_next(150, 1), on_completed(240)]
r_msgs = [on_next(150, 1), on_completed(250)]
e1 = scheduler.create_hot_observable(l_msgs)
e2 = scheduler.create_hot_observable(r_msgs)
def create():
return Observable.merge(scheduler, e1, e2)
results = scheduler.start(create)
results.messages.assert_equal(on_completed(250))
def test_merge_empty_delayed2_left_last():
scheduler = TestScheduler()
l_msgs = [on_next(150, 1), on_completed(250)]
r_msgs = [on_next(150, 1), on_completed(240)]
e1 = scheduler.create_hot_observable(l_msgs)
e2 = scheduler.create_hot_observable(r_msgs)
def create():
return Observable.merge(scheduler, e1, e2)
results = scheduler.start(create)
results.messages.assert_equal(on_completed(250))
def test_merge_empty_delayed3_middle_last():
scheduler = TestScheduler()
msgs1 = [on_next(150, 1), on_completed(245)]
msgs2 = [on_next(150, 1), on_completed(250)]
msgs3 = [on_next(150, 1), on_completed(240)]
e1 = scheduler.create_hot_observable(msgs1)
e2 = scheduler.create_hot_observable(msgs2)
e3 = scheduler.create_hot_observable(msgs3)
def create():
return Observable.merge(scheduler, e1, e2, e3)
results = scheduler.start(create)
results.messages.assert_equal(on_completed(250))
def test_merge_empty_never():
scheduler = TestScheduler()
msgs1 = [on_next(150, 1), on_completed(245)]
e1 = scheduler.create_hot_observable(msgs1)
n1 = Observable.never()
def create():
return Observable.merge(scheduler, e1, n1)
results = scheduler.start(create)
results.messages.assert_equal()
def test_merge_never_empty():
scheduler = TestScheduler()
msgs1 = [on_next(150, 1), on_completed(245)]
e1 = scheduler.create_hot_observable(msgs1)
n1 = Observable.never()
def create():
return Observable.merge(scheduler, n1, e1)
results = scheduler.start(create)
results.messages.assert_equal()
def test_merge_return_never():
scheduler = TestScheduler()
msgs1 = [on_next(150, 1), on_next(210, 2), on_completed(245)]
r1 = scheduler.create_hot_observable(msgs1)
n1 = Observable.never()
def create():
return Observable.merge(scheduler, r1, n1)
results = scheduler.start(create)
results.messages.assert_equal(on_next(210, 2))
def test_merge_never_return():
scheduler = TestScheduler()
msgs1 = [on_next(150, 1), on_next(210, 2), on_completed(245)]
r1 = scheduler.create_hot_observable(msgs1)
n1 = Observable.never()
def create():
return Observable.merge(scheduler, n1, r1)
results = scheduler.start(create)
results.messages.assert_equal(on_next(210, 2))
def test_merge_error_never():
ex = 'ex'
scheduler = TestScheduler()
msgs1 = [on_next(150, 1), on_next(210, 2), on_error(245, ex)]
e1 = scheduler.create_hot_observable(msgs1)
n1 = Observable.never()
def create():
return Observable.merge(scheduler, e1, n1)
results = scheduler.start(create)
results.messages.assert_equal(on_next(210, 2), on_error(245, ex))
def test_merge_never_error():
ex = 'ex'
scheduler = TestScheduler()
msgs1 = [on_next(150, 1), on_next(210, 2), on_error(245, ex)]
e1 = scheduler.create_hot_observable(msgs1)
n1 = Observable.never()
def create():
return Observable.merge(scheduler, n1, e1)
results = scheduler.start(create)
results.messages.assert_equal(on_next(210, 2), on_error(245, ex))
def test_merge_empty_return():
scheduler = TestScheduler()
msgs1 = [on_next(150, 1), on_completed(245)]
msgs2 = [on_next(150, 1), on_next(210, 2), on_completed(250)]
e1 = scheduler.create_hot_observable(msgs1)
r1 = scheduler.create_hot_observable(msgs2)
def create():
return Observable.merge(scheduler, e1, r1)
results = scheduler.start(create)
results.messages.assert_equal(on_next(210, 2), on_completed(250))
def test_merge_return_empty():
scheduler = TestScheduler()
msgs1 = [on_next(150, 1), on_completed(245)]
msgs2 = [on_next(150, 1), on_next(210, 2), on_completed(250)]
e1 = scheduler.create_hot_observable(msgs1)
r1 = scheduler.create_hot_observable(msgs2)
def create():
return Observable.merge(scheduler, r1, e1)
results = scheduler.start(create)
results.messages.assert_equal(on_next(210, 2), on_completed(250))
def test_merge_lots2():
scheduler = TestScheduler()
msgs1 = [on_next(150, 1), on_next(210, 2), on_next(220, 4), on_next(230, 6), on_next(240, 8), on_completed(245)]
msgs2 = [on_next(150, 1), on_next(215, 3), on_next(225, 5), on_next(235, 7), on_next(245, 9), on_completed(250)]
o1 = scheduler.create_hot_observable(msgs1)
o2 = scheduler.create_hot_observable(msgs2)
def create():
return Observable.merge(scheduler, o1, o2)
results = scheduler.start(create).messages
assert(len(results) == 9)
for i, result in enumerate(results[:-1]):
assert(result.value.kind == 'N')
assert(result.time == 210 + i * 5)
assert(result.value.value == i + 2)
assert(results[8].value.kind == 'C' and results[8].time == 250)
def test_merge_lots3():
scheduler = TestScheduler()
msgs1 = [on_next(150, 1), on_next(210, 2), on_next(225, 5), on_next(240, 8), on_completed(245)]
msgs2 = [on_next(150, 1), on_next(215, 3), on_next(230, 6), on_next(245, 9), on_completed(250)]
msgs3 = [on_next(150, 1), on_next(220, 4), on_next(235, 7), on_completed(240)]
o1 = scheduler.create_hot_observable(msgs1)
o2 = scheduler.create_hot_observable(msgs2)
o3 = scheduler.create_hot_observable(msgs3)
def create():
return Observable.merge(scheduler, o1, o2, o3)
results = scheduler.start(create).messages
assert(len(results) == 9)
for i, result in enumerate(results[:-1]):
assert(results[i].value.kind == 'N' and results[i].time == 210 + i * 5 and results[i].value.value == i + 2)
assert(results[8].value.kind == 'C' and results[8].time == 250)
def test_merge_error_left():
ex = 'ex'
scheduler = TestScheduler()
msgs1 = [on_next(150, 1), on_next(210, 2), on_error(245, ex)]
msgs2 = [on_next(150, 1), on_next(215, 3), on_completed(250)]
o1 = scheduler.create_hot_observable(msgs1)
o2 = scheduler.create_hot_observable(msgs2)
def create():
return Observable.merge(scheduler, o1, o2)
results = scheduler.start(create)
results.messages.assert_equal(on_next(210, 2), on_next(215, 3), on_error(245, ex))
def test_merge_error_causes_disposal():
ex = 'ex'
scheduler = TestScheduler()
msgs1 = [on_next(150, 1), on_error(210, ex)]
msgs2 = [on_next(150, 1), on_next(220, 1), on_completed(250)]
source_not_disposed = False
o1 = scheduler.create_hot_observable(msgs1)
def action():
nonlocal source_not_disposed
source_not_disposed = True
o2 = scheduler.create_hot_observable(msgs2).do_action(on_next=action)
def create():
return Observable.merge(scheduler, o1, o2)
results = scheduler.start(create)
results.messages.assert_equal(on_error(210, ex))
assert(not source_not_disposed)
def test_merge_observable_of_observable_data():
scheduler = TestScheduler()
xs = scheduler.create_hot_observable(on_next(300, scheduler.create_cold_observable(on_next(10, 101), on_next(20, 102), on_next(110, 103), on_next(120, 104), on_next(210, 105), on_next(220, 106), on_completed(230))), on_next(400, scheduler.create_cold_observable(on_next(10, 201), on_next(20, 202), on_next(30, 203), on_next(40, 204), on_completed(50))), on_next(500, scheduler.create_cold_observable(on_next(10, 301), on_next(20, 302), on_next(30, 303), on_next(40, 304), on_next(120, 305), on_completed(150))), on_completed(600))
def create():
return xs.merge_observable()
results = scheduler.start(create)
results.messages.assert_equal(on_next(310, 101), on_next(320, 102), on_next(410, 103), on_next(410, 201), on_next(420, 104), on_next(420, 202), on_next(430, 203), on_next(440, 204), on_next(510, 105), on_next(510, 301), on_next(520, 106), on_next(520, 302), on_next(530, 303), on_next(540, 304), on_next(620, 305), on_completed(650))
def test_merge_observable_of_observable_data_non_overlapped():
scheduler = TestScheduler()
xs = scheduler.create_hot_observable(on_next(300, scheduler.create_cold_observable(on_next(10, 101), on_next(20, 102), on_completed(230))), on_next(400, scheduler.create_cold_observable(on_next(10, 201), on_next(20, 202), on_next(30, 203), on_next(40, 204), on_completed(50))), on_next(500, scheduler.create_cold_observable(on_next(10, 301), on_next(20, 302), on_next(30, 303), on_next(40, 304), on_completed(50))), on_completed(600))
def create():
return xs.merge_observable()
results = scheduler.start(create)
results.messages.assert_equal(on_next(310, 101), on_next(320, 102), on_next(410, 201), on_next(420, 202), on_next(430, 203), on_next(440, 204), on_next(510, 301), on_next(520, 302), on_next(530, 303), on_next(540, 304), on_completed(600))
def test_merge_observable_of_observable_inner_throws():
ex = 'ex'
scheduler = TestScheduler()
xs = scheduler.create_hot_observable(on_next(300, scheduler.create_cold_observable(on_next(10, 101), on_next(20, 102), on_completed(230))), on_next(400, scheduler.create_cold_observable(on_next(10, 201), on_next(20, 202), on_next(30, 203), on_next(40, 204), on_error(50, ex))), on_next(500, scheduler.create_cold_observable(on_next(10, 301), on_next(20, 302), on_next(30, 303), on_next(40, 304), on_completed(50))), on_completed(600))
def create():
return xs.merge_observable()
results = scheduler.start(create)
results.messages.assert_equal(on_next(310, 101), on_next(320, 102), on_next(410, 201), on_next(420, 202), on_next(430, 203), on_next(440, 204), on_error(450, ex))
def test_merge_observable_of_observable_outer_throws():
ex = 'ex'
scheduler = TestScheduler()
xs = scheduler.create_hot_observable(on_next(300, scheduler.create_cold_observable(on_next(10, 101), on_next(20, 102), on_completed(230))), on_next(400, scheduler.create_cold_observable(on_next(10, 201), on_next(20, 202), on_next(30, 203), on_next(40, 204), on_completed(50))), on_error(500, ex))
def create():
return xs.merge_observable()
results = scheduler.start(create)
results.messages.assert_equal(on_next(310, 101), on_next(320, 102), on_next(410, 201), on_next(420, 202), on_next(430, 203), on_next(440, 204), on_error(500, ex))
def test_switch_data():
scheduler = TestScheduler()
xs = scheduler.create_hot_observable(on_next(300, scheduler.create_cold_observable(on_next(10, 101), on_next(20, 102), on_next(110, 103), on_next(120, 104), on_next(210, 105), on_next(220, 106), on_completed(230))), on_next(400, scheduler.create_cold_observable(on_next(10, 201), on_next(20, 202), on_next(30, 203), on_next(40, 204), on_completed(50))), on_next(500, scheduler.create_cold_observable(on_next(10, 301), on_next(20, 302), on_next(30, 303), on_next(40, 304), on_completed(150))), on_completed(600))
def create():
return xs.switch_latest()
results = scheduler.start(create)
results.messages.assert_equal(on_next(310, 101), on_next(320, 102), on_next(410, 201), on_next(420, 202), on_next(430, 203), on_next(440, 204), on_next(510, 301), on_next(520, 302), on_next(530, 303), on_next(540, 304), on_completed(650))
def test_switch_inner_throws():
ex = 'ex'
scheduler = TestScheduler()
xs = scheduler.create_hot_observable(on_next(300, scheduler.create_cold_observable(on_next(10, 101), on_next(20, 102), on_next(110, 103), on_next(120, 104), on_next(210, 105), on_next(220, 106), on_completed(230))), on_next(400, scheduler.create_cold_observable(on_next(10, 201), on_next(20, 202), on_next(30, 203), on_next(40, 204), on_error(50, ex))), on_next(500, scheduler.create_cold_observable(on_next(10, 301), on_next(20, 302), on_next(30, 303), on_next(40, 304), on_completed(150))), on_completed(600))
def create():
return xs.switch_latest()
results = scheduler.start(create)
results.messages.assert_equal(on_next(310, 101), on_next(320, 102), on_next(410, 201), on_next(420, 202), on_next(430, 203), on_next(440, 204), on_error(450, ex))
def test_switch_outer_throws():
ex = 'ex'
scheduler = TestScheduler()
xs = scheduler.create_hot_observable(on_next(300, scheduler.create_cold_observable(on_next(10, 101), on_next(20, 102), on_next(110, 103), on_next(120, 104), on_next(210, 105), on_next(220, 106), on_completed(230))), on_next(400, scheduler.create_cold_observable(on_next(10, 201), on_next(20, 202), on_next(30, 203), on_next(40, 204), on_completed(50))), on_error(500, ex))
def create():
return xs.switch_latest()
results = scheduler.start(create)
results.messages.assert_equal(on_next(310, 101), on_next(320, 102), on_next(410, 201), on_next(420, 202), on_next(430, 203), on_next(440, 204), on_error(500, ex))
def test_switch_no_inner():
scheduler = TestScheduler()
xs = scheduler.create_hot_observable(on_completed(500))
def create():
return xs.switch_latest()
results = scheduler.start(create)
results.messages.assert_equal(on_completed(500))
def test_switch_inner_completes():
scheduler = TestScheduler()
xs = scheduler.create_hot_observable(on_next(300, scheduler.create_cold_observable(on_next(10, 101), on_next(20, 102), on_next(110, 103), on_next(120, 104), on_next(210, 105), on_next(220, 106), on_completed(230))), on_completed(540))
def create():
return xs.switch_latest()
results = scheduler.start(create)
results.messages.assert_equal(on_next(310, 101), on_next(320, 102), on_next(410, 103), on_next(420, 104), on_next(510, 105), on_next(520, 106), on_completed(540))
def test_amb_never2():
scheduler = TestScheduler()
l = Observable.never()
r = Observable.never()
def create():
return l.amb(r)
results = scheduler.start(create)
results.messages.assert_equal()
def test_amb_never3():
scheduler = TestScheduler()
n1 = Observable.never()
n2 = Observable.never()
n3 = Observable.never()
def create():
return Observable.amb(n1, n2, n3)
results = scheduler.start(create)
results.messages.assert_equal()
def test_amb_never_empty():
scheduler = TestScheduler()
r_msgs = [on_next(150, 1), on_completed(225)]
n = Observable.never()
e = scheduler.create_hot_observable(r_msgs)
def create():
return n.amb(e)
results = scheduler.start(create)
results.messages.assert_equal(on_completed(225))
def test_amb_empty_never():
scheduler = TestScheduler()
r_msgs = [on_next(150, 1), on_completed(225)]
n = Observable.never()
e = scheduler.create_hot_observable(r_msgs)
def create():
return e.amb(n)
results = scheduler.start(create)
results.messages.assert_equal(on_completed(225))
def test_amb_regular_should_dispose_loser():
scheduler = TestScheduler()
msgs1 = [on_next(150, 1), on_next(210, 2), on_completed(240)]
msgs2 = [on_next(150, 1), on_next(220, 3), on_completed(250)]
source_not_disposed = False
o1 = scheduler.create_hot_observable(msgs1)
def action():
nonlocal source_not_disposed
source_not_disposed = True
o2 = scheduler.create_hot_observable(msgs2).do_action(on_next=action)
def create():
return o1.amb(o2)
results = scheduler.start(create)
results.messages.assert_equal(on_next(210, 2), on_completed(240))
assert(not source_not_disposed)
def test_amb_winner_throws():
ex = 'ex'
scheduler = TestScheduler()
msgs1 = [on_next(150, 1), on_next(210, 2), on_error(220, ex)]
msgs2 = [on_next(150, 1), on_next(220, 3), on_completed(250)]
source_not_disposed = False
o1 = scheduler.create_hot_observable(msgs1)
def action():
nonlocal source_not_disposed
source_not_disposed = True
o2 = scheduler.create_hot_observable(msgs2).do_action(on_next=action)
def create():
return o1.amb(o2)
results = scheduler.start(create)
results.messages.assert_equal(on_next(210, 2), on_error(220, ex))
assert(not source_not_disposed)
def test_amb_loser_throws():
ex = 'ex'
scheduler = TestScheduler()
msgs1 = [on_next(150, 1), on_next(220, 2), on_error(230, ex)]
msgs2 = [on_next(150, 1), on_next(210, 3), on_completed(250)]
source_not_disposed = False
def action():
nonlocal source_not_disposed
source_not_disposed = True
o1 = scheduler.create_hot_observable(msgs1).do_action(on_next=action)
o2 = scheduler.create_hot_observable(msgs2)
def create():
return o1.amb(o2)
results = scheduler.start(create)
results.messages.assert_equal(on_next(210, 3), on_completed(250))
assert(not source_not_disposed)
def test_amb_throws_before_election():
ex = 'ex'
scheduler = TestScheduler()
msgs1 = [on_next(150, 1), on_error(210, ex)]
msgs2 = [on_next(150, 1), on_next(220, 3), on_completed(250)]
source_not_disposed = False
o1 = scheduler.create_hot_observable(msgs1)
def action():
nonlocal source_not_disposed
source_not_disposed = True
o2 = scheduler.create_hot_observable(msgs2).do_action(on_next=action)
def create():
return o1.amb(o2)
results = scheduler.start(create)
results.messages.assert_equal(on_error(210, ex))
assert(not source_not_disposed)
def test_catch_no_errors():
scheduler = TestScheduler()
msgs1 = [on_next(150, 1), on_next(210, 2), on_next(220, 3), on_completed(230)]
msgs2 = [on_next(240, 5), on_completed(250)]
o1 = scheduler.create_hot_observable(msgs1)
o2 = scheduler.create_hot_observable(msgs2)
def create():
return o1.catch_exception(o2)
results = scheduler.start(create)
results.messages.assert_equal(on_next(210, 2), on_next(220, 3), on_completed(230))
def test_catch_never():
scheduler = TestScheduler()
msgs2 = [on_next(240, 5), on_completed(250)]
o1 = Observable.never()
o2 = scheduler.create_hot_observable(msgs2)
def create():
return o1.catch_exception(o2)
results = scheduler.start(create)
results.messages.assert_equal()
def test_catch_empty():
scheduler = TestScheduler()
msgs1 = [on_next(150, 1), on_completed(230)]
msgs2 = [on_next(240, 5), on_completed(250)]
o1 = scheduler.create_hot_observable(msgs1)
o2 = scheduler.create_hot_observable(msgs2)
def create():
return o1.catch_exception(o2)
results = scheduler.start(create)
results.messages.assert_equal(on_completed(230))
def test_catch_return():
scheduler = TestScheduler()
msgs1 = [on_next(150, 1), on_next(210, 2), on_completed(230)]
msgs2 = [on_next(240, 5), on_completed(250)]
o1 = scheduler.create_hot_observable(msgs1)
o2 = scheduler.create_hot_observable(msgs2)
def create():
return o1.catch_exception(o2)
results = scheduler.start(create)
results.messages.assert_equal(on_next(210, 2), on_completed(230))
def test_catch_error():
ex = 'ex'
scheduler = TestScheduler()
msgs1 = [on_next(150, 1), on_next(210, 2), on_next(220, 3), on_error(230, ex)]
msgs2 = [on_next(240, 5), on_completed(250)]
o1 = scheduler.create_hot_observable(msgs1)
o2 = scheduler.create_hot_observable(msgs2)
def create():
return o1.catch_exception(o2)
results = scheduler.start(create)
results.messages.assert_equal(on_next(210, 2), on_next(220, 3), on_next(240, 5), on_completed(250))
def test_catch_error_never():
ex = 'ex'
scheduler = TestScheduler()
msgs1 = [on_next(150, 1), on_next(210, 2), on_next(220, 3), on_error(230, ex)]
o1 = scheduler.create_hot_observable(msgs1)
o2 = Observable.never()
def create():
return o1.catch_exception(o2)
results = scheduler.start(create)
results.messages.assert_equal(on_next(210, 2), on_next(220, 3))
def test_catch_error_error():
ex = 'ex'
scheduler = TestScheduler()
msgs1 = [on_next(150, 1), on_next(210, 2), on_next(220, 3), on_error(230, 'ex1')]
msgs2 = [on_next(240, 4), on_error(250, ex)]
o1 = scheduler.create_hot_observable(msgs1)
o2 = scheduler.create_hot_observable(msgs2)
def create():
return o1.catch_exception(o2)
results = scheduler.start(create)
results.messages.assert_equal(on_next(210, 2), on_next(220, 3), on_next(240, 4), on_error(250, ex))
def test_catch_multiple():
ex = 'ex'
scheduler = TestScheduler()
msgs1 = [on_next(150, 1), on_next(210, 2), on_error(215, ex)]
msgs2 = [on_next(220, 3), on_error(225, ex)]
msgs3 = [on_next(230, 4), on_completed(235)]
o1 = scheduler.create_hot_observable(msgs1)
o2 = scheduler.create_hot_observable(msgs2)
o3 = scheduler.create_hot_observable(msgs3)
def create():
return Observable.catch_exception(o1, o2, o3)
results = scheduler.start(create)
results.messages.assert_equal(on_next(210, 2), on_next(220, 3), on_next(230, 4), on_completed(235))
def test_catch_error_specific_caught():
ex = 'ex'
handler_called = False
scheduler = TestScheduler()
msgs1 = [on_next(150, 1), on_next(210, 2), on_next(220, 3), on_error(230, ex)]
msgs2 = [on_next(240, 4), on_completed(250)]
o1 = scheduler.create_hot_observable(msgs1)
o2 = scheduler.create_hot_observable(msgs2)
def create():
def handler(e):
nonlocal handler_called
handler_called = True
return o2
return o1.catch_exception(handler)
results = scheduler.start(create)
results.messages.assert_equal(on_next(210, 2), on_next(220, 3), on_next(240, 4), on_completed(250))
assert(handler_called)
def test_catch_error_specific_caught_immediate():
ex = 'ex'
handler_called = False
scheduler = TestScheduler()
msgs2 = [on_next(240, 4), on_completed(250)]
o2 = scheduler.create_hot_observable(msgs2)
def create():
def handler(e):
nonlocal handler_called
handler_called = True
return o2
return Observable.throw_exception('ex').catch_exception(handler)
results = scheduler.start(create)
results.messages.assert_equal(on_next(240, 4), on_completed(250))
assert(handler_called)
def test_catch_handler_throws():
ex = 'ex'
ex2 = 'ex2'
handler_called = False
scheduler = TestScheduler()
msgs1 = [on_next(150, 1), on_next(210, 2), on_next(220, 3), on_error(230, ex)]
o1 = scheduler.create_hot_observable(msgs1)
def create():
def handler(e):
nonlocal handler_called
handler_called = True
raise Exception(ex2)
return o1.catch_exception(handler)
results = scheduler.start(create)
results.messages.assert_equal(on_next(210, 2), on_next(220, 3), on_error(230, ex2))
assert(handler_called)
def test_catch_nested_outer_catches():
ex = 'ex'
first_handler_called = False
second_handler_called = False
scheduler = TestScheduler()
msgs1 = [on_next(150, 1), on_next(210, 2), on_error(215, ex)]
msgs2 = [on_next(220, 3), on_completed(225)]
msgs3 = [on_next(220, 4), on_completed(225)]
o1 = scheduler.create_hot_observable(msgs1)
o2 = scheduler.create_hot_observable(msgs2)
o3 = scheduler.create_hot_observable(msgs3)
def create():
def handler1(e):
nonlocal first_handler_called
first_handler_called = True
return o2
def handler2(e):
nonlocal second_handler_called
second_handler_called = True
return o3
return o1.catch_exception(handler1).catch_exception(handler2)
results = scheduler.start(create)
results.messages.assert_equal(on_next(210, 2), on_next(220, 3), on_completed(225))
assert(first_handler_called)
assert(not second_handler_called)
def test_catch_throw_from_nested_catch():
ex = 'ex'
ex2 = 'ex2'
first_handler_called = False
second_handler_called = False
scheduler = TestScheduler()
msgs1 = [on_next(150, 1), on_next(210, 2), on_error(215, ex)]
msgs2 = [on_next(220, 3), on_error(225, ex2)]
msgs3 = [on_next(230, 4), on_completed(235)]
o1 = scheduler.create_hot_observable(msgs1)
o2 = scheduler.create_hot_observable(msgs2)
o3 = scheduler.create_hot_observable(msgs3)
def create():
def handler1(e):
nonlocal first_handler_called
first_handler_called = True
assert(e == ex)
return o2
def handler2(e):
nonlocal second_handler_called
second_handler_called = True
assert(e == ex2)
return o3
return o1.catch_exception(handler1).catch_exception(handler2)
results = scheduler.start(create)
results.messages.assert_equal(on_next(210, 2), on_next(220, 3), on_next(230, 4), on_completed(235))
assert(first_handler_called)
assert(second_handler_called)
def test_on_error_resume_next_no_errors():
scheduler = TestScheduler()
msgs1 = [on_next(150, 1), on_next(210, 2), on_next(220, 3), on_completed(230)]
msgs2 = [on_next(240, 4), on_completed(250)]
o1 = scheduler.create_hot_observable(msgs1)
o2 = scheduler.create_hot_observable(msgs2)
def create():
return o1.on_error_resume_next(o2)
results = scheduler.start(create)
results.messages.assert_equal(on_next(210, 2), on_next(220, 3), on_next(240, 4), on_completed(250))
def test_on_error_resume_next_error():
scheduler = TestScheduler()
msgs1 = [on_next(150, 1), on_next(210, 2), on_next(220, 3), on_error(230, 'ex')]
msgs2 = [on_next(240, 4), on_completed(250)]
o1 = scheduler.create_hot_observable(msgs1)
o2 = scheduler.create_hot_observable(msgs2)
def create():
return o1.on_error_resume_next(o2)
results = scheduler.start(create)
results.messages.assert_equal(on_next(210, 2), on_next(220, 3), on_next(240, 4), on_completed(250))
def test_on_error_resume_next_error_multiple():
scheduler = TestScheduler()
msgs1 = [on_next(150, 1), on_next(210, 2), on_error(220, 'ex')]
msgs2 = [on_next(230, 4), on_error(240, 'ex')]
msgs3 = [on_completed(250)]
o1 = scheduler.create_hot_observable(msgs1)
o2 = scheduler.create_hot_observable(msgs2)
o3 = scheduler.create_hot_observable(msgs3)
def create():
return Observable.on_error_resume_next(o1, o2, o3)
results = scheduler.start(create)
results.messages.assert_equal(on_next(210, 2), on_next(230, 4), on_completed(250))
def test_on_error_resume_next_empty_return_throw_and_more():
scheduler = TestScheduler()
msgs1 = [on_next(150, 1), on_completed(205)]
msgs2 = [on_next(215, 2), on_completed(220)]
msgs3 = [on_next(225, 3), on_next(230, 4), on_completed(235)]
msgs4 = [on_error(240, 'ex')]
msgs5 = [on_next(245, 5), on_completed(250)]
o1 = scheduler.create_hot_observable(msgs1)
o2 = scheduler.create_hot_observable(msgs2)
o3 = scheduler.create_hot_observable(msgs3)
o4 = scheduler.create_hot_observable(msgs4)
o5 = scheduler.create_hot_observable(msgs5)
def create():
return Observable.on_error_resume_next(o1, o2, o3, o4, o5)
results = scheduler.start(create)
results.messages.assert_equal(on_next(215, 2), on_next(225, 3), on_next(230, 4), on_next(245, 5), on_completed(250))
def test_on_error_resume_next_return_throw():
ex = 'ex'
scheduler = TestScheduler()
msgs1 = [on_next(150, 1), on_next(210, 2), on_completed(220)]
msgs2 = [on_error(230, ex)]
o1 = scheduler.create_hot_observable(msgs1)
o2 = scheduler.create_hot_observable(msgs2)
def create():
return o1.on_error_resume_next(o2)
results = scheduler.start(create)
results.messages.assert_equal(on_next(210, 2), on_completed(230))
def test_on_error_resume_next_single_source_throws():
ex = 'ex'
scheduler = TestScheduler()
msgs1 = [on_error(230, ex)]
o1 = scheduler.create_hot_observable(msgs1)
def create():
return Observable.on_error_resume_next(o1)
results = scheduler.start(create)
results.messages.assert_equal(on_completed(230))
def test_on_error_resume_next_end_with_never():
scheduler = TestScheduler()
msgs1 = [on_next(150, 1), on_next(210, 2), on_completed(220)]
o1 = scheduler.create_hot_observable(msgs1)
o2 = Observable.never()
def create():
return Observable.on_error_resume_next(o1, o2)
results = scheduler.start(create)
results.messages.assert_equal(on_next(210, 2))
def test_on_error_resume_next_start_with_never():
scheduler = TestScheduler()
msgs1 = [on_next(150, 1), on_next(210, 2), on_completed(220)]
o1 = Observable.never()
o2 = scheduler.create_hot_observable(msgs1)
def create():
return Observable.on_error_resume_next(o1, o2)
results = scheduler.start(create)
results.messages.assert_equal()
def test_zip_never_never():
scheduler = TestScheduler()
o1 = Observable.never()
o2 = Observable.never()
def create():
return o1.zip(o2, lambda x, y: x + y)
results = scheduler.start(create)
results.messages.assert_equal()
def test_zip_never_empty():
scheduler = TestScheduler()
msgs = [on_next(150, 1), on_completed(210)]
o1 = Observable.never()
o2 = scheduler.create_hot_observable(msgs)
def create():
return o1.zip(o2, lambda x, y: x + y)
results = scheduler.start(create)
results.messages.assert_equal()
def test_zip_empty_empty():
scheduler = TestScheduler()
msgs1 = [on_next(150, 1), on_completed(210)]
msgs2 = [on_next(150, 1), on_completed(210)]
e1 = scheduler.create_hot_observable(msgs1)
e2 = scheduler.create_hot_observable(msgs2)
def create():
return e1.zip(e2, lambda x, y: x + y)
results = scheduler.start(create)
results.messages.assert_equal(on_completed(210))
def test_zip_empty_non_empty():
scheduler = TestScheduler()
msgs1 = [on_next(150, 1), on_completed(210)]
msgs2 = [on_next(150, 1), on_next(215, 2), on_completed(220)]
e1 = scheduler.create_hot_observable(msgs1)
e2 = scheduler.create_hot_observable(msgs2)
def create():
return e1.zip(e2, lambda x, y: x + y)
results = scheduler.start(create)
results.messages.assert_equal(on_completed(215))
def test_zip_non_empty_empty():
scheduler = TestScheduler()
msgs1 = [on_next(150, 1), on_completed(210)]
msgs2 = [on_next(150, 1), on_next(215, 2), on_completed(220)]
e1 = scheduler.create_hot_observable(msgs1)
e2 = scheduler.create_hot_observable(msgs2)
def create():
return e2.zip(e1, lambda x, y: x + y)
results = scheduler.start(create)
results.messages.assert_equal(on_completed(215))
def test_zip_never_non_empty():
scheduler = TestScheduler()
msgs = [on_next(150, 1), on_next(215, 2), on_completed(220)]
e1 = scheduler.create_hot_observable(msgs)
e2 = Observable.never()
def create():
return e2.zip(e1, lambda x, y: x + y)
results = scheduler.start(create)
results.messages.assert_equal()
def test_zip_non_empty_never():
scheduler = TestScheduler()
msgs = [on_next(150, 1), on_next(215, 2), on_completed(220)]
e1 = scheduler.create_hot_observable(msgs)
e2 = Observable.never()
def create():
return e1.zip(e2, lambda x, y: x + y)
results = scheduler.start(create)
results.messages.assert_equal()
def test_zip_non_empty_non_empty():
scheduler = TestScheduler()
msgs1 = [on_next(150, 1), on_next(215, 2), on_completed(230)]
msgs2 = [on_next(150, 1), on_next(220, 3), on_completed(240)]
e1 = scheduler.create_hot_observable(msgs1)
e2 = scheduler.create_hot_observable(msgs2)
def create():
return e1.zip(e2, lambda x, y: x + y)
results = scheduler.start(create)
results.messages.assert_equal(on_next(220, 2 + 3), on_completed(240))
def test_zip_empty_error():
ex = 'ex'
scheduler = TestScheduler()
msgs1 = [on_next(150, 1), on_completed(230)]
msgs2 = [on_next(150, 1), on_error(220, ex)]
e1 = scheduler.create_hot_observable(msgs1)
e2 = scheduler.create_hot_observable(msgs2)
def create():
return e1.zip(e2, lambda x, y: x + y)
results = scheduler.start(create)
results.messages.assert_equal(on_error(220, ex))
def test_zip_error_empty():
ex = 'ex'
scheduler = TestScheduler()
msgs1 = [on_next(150, 1), on_completed(230)]
msgs2 = [on_next(150, 1), on_error(220, ex)]
e1 = scheduler.create_hot_observable(msgs1)
e2 = scheduler.create_hot_observable(msgs2)
def create():
return e2.zip(e1, lambda x, y: x + y)
results = scheduler.start(create)
results.messages.assert_equal(on_error(220, ex))
def test_zip_never_error():
ex = 'ex'
scheduler = TestScheduler()
msgs2 = [on_next(150, 1), on_error(220, ex)]
e1 = Observable.never()
e2 = scheduler.create_hot_observable(msgs2)
def create():
return e1.zip(e2, lambda x, y: x + y)
results = scheduler.start(create)
results.messages.assert_equal(on_error(220, ex))
def test_zip_error_never():
ex = 'ex'
scheduler = TestScheduler()
msgs2 = [on_next(150, 1), on_error(220, ex)]
e1 = Observable.never()
e2 = scheduler.create_hot_observable(msgs2)
def create():
return e2.zip(e1, lambda x, y: x + y)
results = scheduler.start(create)
results.messages.assert_equal(on_error(220, ex))
def test_zip_error_error():
ex1 = 'ex1'
ex2 = 'ex2'
scheduler = TestScheduler()
msgs1 = [on_next(150, 1), on_error(230, ex1)]
msgs2 = [on_next(150, 1), on_error(220, ex2)]
e1 = scheduler.create_hot_observable(msgs1)
e2 = scheduler.create_hot_observable(msgs2)
def create():
return e2.zip(e1, lambda x, y: x + y)
results = scheduler.start(create)
results.messages.assert_equal(on_error(220, ex2))
def test_zip_some_error():
ex = 'ex'
scheduler = TestScheduler()
msgs1 = [on_next(150, 1), on_next(215, 2), on_completed(230)]
msgs2 = [on_next(150, 1), on_error(220, ex)]
e1 = scheduler.create_hot_observable(msgs1)
e2 = scheduler.create_hot_observable(msgs2)
def create():
return e1.zip(e2, lambda x, y: x + y)
results = scheduler.start(create)
results.messages.assert_equal(on_error(220, ex))
def test_zip_error_some():
ex = 'ex'
scheduler = TestScheduler()
msgs1 = [on_next(150, 1), on_next(215, 2), on_completed(230)]
msgs2 = [on_next(150, 1), on_error(220, ex)]
e1 = scheduler.create_hot_observable(msgs1)
e2 = scheduler.create_hot_observable(msgs2)
def create():
return e2.zip(e1, lambda x, y: x + y)
results = scheduler.start(create)
results.messages.assert_equal(on_error(220, ex))
def test_zip_some_data_asymmetric1():
scheduler = TestScheduler()
def msgs1_factory():
results = []
for i in range(5):
results.append(on_next(205 + i * 5, i))
return results
msgs1 = msgs1_factory()
def msgs2_factory():
results = []
for i in range(10):
results.append(on_next(205 + i * 8, i))
return results
msgs2 = msgs2_factory()
length = min(len(msgs1), len(msgs2))
e1 = scheduler.create_hot_observable(msgs1)
e2 = scheduler.create_hot_observable(msgs2)
def create():
return e1.zip(e2, lambda x, y: x + y)
results = scheduler.start(create).messages
assert(length == len(results))
for i in range(length):
_sum = msgs1[i].value.value + msgs2[i].value.value
time = max(msgs1[i].time, msgs2[i].time)
assert(results[i].value.kind == 'N' and results[i].time == time and results[i].value.value == _sum)
def test_zip_some_data_asymmetric2():
scheduler = TestScheduler()
def msgs1_factory():
results = []
for i in range(10):
results.append(on_next(205 + i * 5, i))
return results
msgs1 = msgs1_factory()
def msgs2_factory():
results = []
for i in range(5):
results.append(on_next(205 + i * 8, i))
return results
msgs2 = msgs2_factory()
length = min(len(msgs1), len(msgs2))
e1 = scheduler.create_hot_observable(msgs1)
e2 = scheduler.create_hot_observable(msgs2)
def create():
return e1.zip(e2, lambda x, y: x + y)
results = scheduler.start(create).messages
assert(length == len(results))
for i in range(length):
_sum = msgs1[i].value.value + msgs2[i].value.value
time = max(msgs1[i].time, msgs2[i].time)
assert(results[i].value.kind == 'N' and results[i].time == time and results[i].value.value == _sum)
def test_zip_some_data_symmetric():
scheduler = TestScheduler()
def msgs1_factory():
results = []
for i in range(10):
results.append(on_next(205 + i * 5, i))
return results
msgs1 = msgs1_factory()
def msgs2_factory():
results = []
for i in range(10):
results.append(on_next(205 + i * 8, i))
return results
msgs2 = msgs2_factory()
length = min(len(msgs1), len(msgs2))
e1 = scheduler.create_hot_observable(msgs1)
e2 = scheduler.create_hot_observable(msgs2)
def create():
return e1.zip(e2, lambda x, y: x + y)
results = scheduler.start(create).messages
assert(length == len(results))
for i in range(length):
_sum = msgs1[i].value.value + msgs2[i].value.value
time = max(msgs1[i].time, msgs2[i].time)
assert(results[i].value.kind == 'N' and results[i].time == time and results[i].value.value == _sum)
def test_zip_selector_throws():
ex = 'ex'
scheduler = TestScheduler()
msgs1 = [on_next(150, 1), on_next(215, 2), on_next(225, 4), on_completed(240)]
msgs2 = [on_next(150, 1), on_next(220, 3), on_next(230, 5), on_completed(250)]
e1 = scheduler.create_hot_observable(msgs1)
e2 = scheduler.create_hot_observable(msgs2)
def create():
def selector(x, y):
if y == 5:
raise Exception(ex)
else:
return x + y
return e1.zip(e2, selector)
results = scheduler.start(create)
results.messages.assert_equal(on_next(220, 2 + 3), on_error(230, ex))
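# combine_latest tests: combine_latest emits whenever either source emits,
# combining the most recent value seen from each side with the selector.
# Nothing is emitted until both sides have produced at least one value, the
# result completes once no further combination can be produced, and an error
# on either side is forwarded immediately.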
def test_combine_latest_never_never():
scheduler = TestScheduler()
e1 = Observable.never()
e2 = Observable.never()
def create():
return e1.combine_latest(e2, lambda x, y: x + y)
results = scheduler.start(create)
results.messages.assert_equal()
def test_combine_latest_never_empty():
scheduler = TestScheduler()
msgs = [on_next(150, 1), on_completed(210)]
e1 = Observable.never()
e2 = scheduler.create_hot_observable(msgs)
def create():
return e1.combine_latest(e2, lambda x, y: x + y)
results = scheduler.start(create)
results.messages.assert_equal()
def test_combine_latest_empty_never():
scheduler = TestScheduler()
msgs = [on_next(150, 1), on_completed(210)]
e1 = Observable.never()
e2 = scheduler.create_hot_observable(msgs)
def create():
return e2.combine_latest(e1, lambda x, y: x + y)
results = scheduler.start(create)
results.messages.assert_equal()
def test_combine_latest_empty_empty():
scheduler = TestScheduler()
msgs1 = [on_next(150, 1), on_completed(210)]
msgs2 = [on_next(150, 1), on_completed(210)]
e1 = scheduler.create_hot_observable(msgs1)
e2 = scheduler.create_hot_observable(msgs2)
def create():
return e2.combine_latest(e1, lambda x, y: x + y)
results = scheduler.start(create)
results.messages.assert_equal(on_completed(210))
def test_combine_latest_empty_return():
scheduler = TestScheduler()
msgs1 = [on_next(150, 1), on_completed(210)]
msgs2 = [on_next(150, 1), on_next(215, 2), on_completed(220)]
e1 = scheduler.create_hot_observable(msgs1)
e2 = scheduler.create_hot_observable(msgs2)
def create():
return e1.combine_latest(e2, lambda x, y: x + y)
results = scheduler.start(create)
results.messages.assert_equal(on_completed(215))
def test_combine_latest_return_empty():
scheduler = TestScheduler()
msgs1 = [on_next(150, 1), on_completed(210)]
msgs2 = [on_next(150, 1), on_next(215, 2), on_completed(220)]
e1 = scheduler.create_hot_observable(msgs1)
e2 = scheduler.create_hot_observable(msgs2)
def create():
return e2.combine_latest(e1, lambda x, y: x + y)
results = scheduler.start(create)
results.messages.assert_equal(on_completed(215))
def test_combine_latest_never_return():
scheduler = TestScheduler()
msgs = [on_next(150, 1), on_next(215, 2), on_completed(220)]
e1 = scheduler.create_hot_observable(msgs)
e2 = Observable.never()
def create():
return e1.combine_latest(e2, lambda x, y: x + y)
results = scheduler.start(create)
results.messages.assert_equal()
def test_combine_latest_return_never():
scheduler = TestScheduler()
msgs = [on_next(150, 1), on_next(215, 2), on_completed(210)]
e1 = scheduler.create_hot_observable(msgs)
e2 = Observable.never()
def create():
return e2.combine_latest(e1, lambda x, y: x + y)
results = scheduler.start(create)
results.messages.assert_equal()
def test_combine_latest_return_return():
scheduler = TestScheduler()
msgs1 = [on_next(150, 1), on_next(215, 2), on_completed(230)]
msgs2 = [on_next(150, 1), on_next(220, 3), on_completed(240)]
e1 = scheduler.create_hot_observable(msgs1)
e2 = scheduler.create_hot_observable(msgs2)
def create():
return e1.combine_latest(e2, lambda x, y: x + y)
results = scheduler.start(create)
results.messages.assert_equal(on_next(220, 2 + 3), on_completed(240))
def test_combine_latest_empty_error():
ex = 'ex'
scheduler = TestScheduler()
msgs1 = [on_next(150, 1), on_completed(230)]
msgs2 = [on_next(150, 1), on_error(220, ex)]
e1 = scheduler.create_hot_observable(msgs1)
e2 = scheduler.create_hot_observable(msgs2)
def create():
return e1.combine_latest(e2, lambda x, y: x + y)
results = scheduler.start(create)
results.messages.assert_equal(on_error(220, ex))
def test_combine_latest_error_empty():
ex = 'ex'
scheduler = TestScheduler()
msgs1 = [on_next(150, 1), on_completed(230)]
msgs2 = [on_next(150, 1), on_error(220, ex)]
e1 = scheduler.create_hot_observable(msgs1)
e2 = scheduler.create_hot_observable(msgs2)
def create():
return e2.combine_latest(e1, lambda x, y: x + y)
results = scheduler.start(create)
results.messages.assert_equal(on_error(220, ex))
def test_combine_latest_return_throw():
ex = 'ex'
scheduler = TestScheduler()
msgs1 = [on_next(150, 1), on_next(210, 2), on_completed(230)]
msgs2 = [on_next(150, 1), on_error(220, ex)]
e1 = scheduler.create_hot_observable(msgs1)
e2 = scheduler.create_hot_observable(msgs2)
def create():
return e1.combine_latest(e2, lambda x, y: x + y)
results = scheduler.start(create)
results.messages.assert_equal(on_error(220, ex))
def test_combine_latest_throw_return():
ex = 'ex'
scheduler = TestScheduler()
msgs1 = [on_next(150, 1), on_next(210, 2), on_completed(230)]
msgs2 = [on_next(150, 1), on_error(220, ex)]
e1 = scheduler.create_hot_observable(msgs1)
e2 = scheduler.create_hot_observable(msgs2)
def create():
return e2.combine_latest(e1, lambda x, y: x + y)
results = scheduler.start(create)
results.messages.assert_equal(on_error(220, ex))
def test_combine_latest_throw_throw():
ex1 = 'ex1'
ex2 = 'ex2'
scheduler = TestScheduler()
msgs1 = [on_next(150, 1), on_error(220, ex1)]
msgs2 = [on_next(150, 1), on_error(230, ex2)]
e1 = scheduler.create_hot_observable(msgs1)
e2 = scheduler.create_hot_observable(msgs2)
def create():
return e1.combine_latest(e2, lambda x, y: x + y)
results = scheduler.start(create)
results.messages.assert_equal(on_error(220, ex1))
def test_combine_latest_error_throw():
ex1 = 'ex1'
ex2 = 'ex2'
scheduler = TestScheduler()
msgs1 = [on_next(150, 1), on_next(210, 2), on_error(220, ex1)]
msgs2 = [on_next(150, 1), on_error(230, ex2)]
e1 = scheduler.create_hot_observable(msgs1)
e2 = scheduler.create_hot_observable(msgs2)
def create():
return e1.combine_latest(e2, lambda x, y: x + y)
results = scheduler.start(create)
results.messages.assert_equal(on_error(220, ex1))
def test_combine_latest_throw_error():
ex1 = 'ex1'
ex2 = 'ex2'
scheduler = TestScheduler()
msgs1 = [on_next(150, 1), on_next(210, 2), on_error(220, ex1)]
msgs2 = [on_next(150, 1), on_error(230, ex2)]
e1 = scheduler.create_hot_observable(msgs1)
e2 = scheduler.create_hot_observable(msgs2)
def create():
return e2.combine_latest(e1, lambda x, y: x + y)
results = scheduler.start(create)
results.messages.assert_equal(on_error(220, ex1))
def test_combine_latest_never_throw():
ex = 'ex'
scheduler = TestScheduler()
msgs = [on_next(150, 1), on_error(220, ex)]
e1 = Observable.never()
e2 = scheduler.create_hot_observable(msgs)
def create():
return e1.combine_latest(e2, lambda x, y: x + y)
results = scheduler.start(create)
results.messages.assert_equal(on_error(220, ex))
def test_combine_latest_throw_never():
ex = 'ex'
scheduler = TestScheduler()
msgs = [on_next(150, 1), on_error(220, ex)]
e1 = Observable.never()
e2 = scheduler.create_hot_observable(msgs)
def create():
return e2.combine_latest(e1, lambda x, y: x + y)
results = scheduler.start(create)
results.messages.assert_equal(on_error(220, ex))
def test_combine_latest_some_throw():
ex = 'ex'
scheduler = TestScheduler()
msgs1 = [on_next(150, 1), on_next(215, 2), on_completed(230)]
msgs2 = [on_next(150, 1), on_error(220, ex)]
e1 = scheduler.create_hot_observable(msgs1)
e2 = scheduler.create_hot_observable(msgs2)
def create():
return e1.combine_latest(e2, lambda x, y: x + y)
results = scheduler.start(create)
results.messages.assert_equal(on_error(220, ex))
def test_combine_latest_throw_some():
ex = 'ex'
scheduler = TestScheduler()
msgs1 = [on_next(150, 1), on_next(215, 2), on_completed(230)]
msgs2 = [on_next(150, 1), on_error(220, ex)]
e1 = scheduler.create_hot_observable(msgs1)
e2 = scheduler.create_hot_observable(msgs2)
def create():
return e2.combine_latest(e1, lambda x, y: x + y)
results = scheduler.start(create)
results.messages.assert_equal(on_error(220, ex))
def test_combine_latest_throw_after_complete_left():
ex = 'ex'
scheduler = TestScheduler()
msgs1 = [on_next(150, 1), on_next(215, 2), on_completed(220)]
msgs2 = [on_next(150, 1), on_error(230, ex)]
e1 = scheduler.create_hot_observable(msgs1)
e2 = scheduler.create_hot_observable(msgs2)
def create():
return e1.combine_latest(e2, lambda x, y: x + y)
results = scheduler.start(create)
results.messages.assert_equal(on_error(230, ex))
def test_combine_latest_throw_after_complete_right():
ex = 'ex'
scheduler = TestScheduler()
msgs1 = [on_next(150, 1), on_next(215, 2), on_completed(220)]
msgs2 = [on_next(150, 1), on_error(230, ex)]
e1 = scheduler.create_hot_observable(msgs1)
e2 = scheduler.create_hot_observable(msgs2)
def create():
return e2.combine_latest(e1, lambda x, y: x + y)
results = scheduler.start(create)
results.messages.assert_equal(on_error(230, ex))
def test_combine_latest_interleaved_with_tail():
scheduler = TestScheduler()
msgs1 = [on_next(150, 1), on_next(215, 2), on_next(225, 4), on_completed(230)]
msgs2 = [on_next(150, 1), on_next(220, 3), on_next(230, 5), on_next(235, 6), on_next(240, 7), on_completed(250)]
e1 = scheduler.create_hot_observable(msgs1)
e2 = scheduler.create_hot_observable(msgs2)
def create():
return e1.combine_latest(e2, lambda x, y: x + y)
results = scheduler.start(create)
results.messages.assert_equal(on_next(220, 2 + 3), on_next(225, 3 + 4), on_next(230, 4 + 5), on_next(235, 4 + 6), on_next(240, 4 + 7), on_completed(250))
def test_combine_latest_consecutive():
scheduler = TestScheduler()
msgs1 = [on_next(150, 1), on_next(215, 2), on_next(225, 4), on_completed(230)]
msgs2 = [on_next(150, 1), on_next(235, 6), on_next(240, 7), on_completed(250)]
e1 = scheduler.create_hot_observable(msgs1)
e2 = scheduler.create_hot_observable(msgs2)
def create():
return e1.combine_latest(e2, lambda x, y: x + y)
results = scheduler.start(create)
results.messages.assert_equal(on_next(235, 4 + 6), on_next(240, 4 + 7), on_completed(250))
def test_combine_latest_consecutive_end_with_error_left():
ex = 'ex'
scheduler = TestScheduler()
msgs1 = [on_next(150, 1), on_next(215, 2), on_next(225, 4), on_error(230, ex)]
msgs2 = [on_next(150, 1), on_next(235, 6), on_next(240, 7), on_completed(250)]
e1 = scheduler.create_hot_observable(msgs1)
e2 = scheduler.create_hot_observable(msgs2)
def create():
return e1.combine_latest(e2, lambda x, y: x + y)
results = scheduler.start(create)
results.messages.assert_equal(on_error(230, ex))
def test_combine_latest_consecutive_end_with_error_right():
ex = 'ex'
scheduler = TestScheduler()
msgs1 = [on_next(150, 1), on_next(215, 2), on_next(225, 4), on_completed(230)]
msgs2 = [on_next(150, 1), on_next(235, 6), on_next(240, 7), on_error(245, ex)]
e1 = scheduler.create_hot_observable(msgs1)
e2 = scheduler.create_hot_observable(msgs2)
def create():
return e2.combine_latest(e1, lambda x, y: x + y)
results = scheduler.start(create)
results.messages.assert_equal(on_next(235, 4 + 6), on_next(240, 4 + 7), on_error(245, ex))
def test_combine_latest_selector_throws():
ex = 'ex'
scheduler = TestScheduler()
msgs1 = [on_next(150, 1), on_next(215, 2), on_completed(230)]
msgs2 = [on_next(150, 1), on_next(220, 3), on_completed(240)]
e1 = scheduler.create_hot_observable(msgs1)
e2 = scheduler.create_hot_observable(msgs2)
def create():
return e1.combine_latest(e2, lambda x, y: _raise(ex))
results = scheduler.start(create)
results.messages.assert_equal(on_error(220, ex))
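# concat tests: concat subscribes to the second sequence only after the first
# one completes, so for hot observables any values the second sequence emits
# while the first is still active are lost, and an error in the first sequence
# terminates the result without the second ever being subscribed.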
def test_concat_empty_empty():
scheduler = TestScheduler()
msgs1 = [on_next(150, 1), on_completed(230)]
msgs2 = [on_next(150, 1), on_completed(250)]
e1 = scheduler.create_hot_observable(msgs1)
e2 = scheduler.create_hot_observable(msgs2)
def create():
return e1.concat(e2)
results = scheduler.start(create)
results.messages.assert_equal(on_completed(250))
def test_concat_empty_never():
scheduler = TestScheduler()
msgs1 = [on_next(150, 1), on_completed(230)]
e1 = scheduler.create_hot_observable(msgs1)
e2 = Observable.never()
def create():
return e1.concat(e2)
results = scheduler.start(create)
results.messages.assert_equal()
def test_concat_never_empty():
scheduler = TestScheduler()
msgs1 = [on_next(150, 1), on_completed(230)]
e1 = scheduler.create_hot_observable(msgs1)
e2 = Observable.never()
def create():
return e2.concat(e1)
results = scheduler.start(create)
results.messages.assert_equal()
def test_concat_never_never():
scheduler = TestScheduler()
e1 = Observable.never()
e2 = Observable.never()
def create():
return e1.concat(e2)
results = scheduler.start(create)
results.messages.assert_equal()
def test_concat_empty_throw():
ex = 'ex'
scheduler = TestScheduler()
msgs1 = [on_next(150, 1), on_completed(230)]
msgs2 = [on_next(150, 1), on_error(250, ex)]
e1 = scheduler.create_hot_observable(msgs1)
e2 = scheduler.create_hot_observable(msgs2)
def create():
return e1.concat(e2)
results = scheduler.start(create)
results.messages.assert_equal(on_error(250, ex))
def test_concat_throw_empty():
ex = 'ex'
scheduler = TestScheduler()
msgs1 = [on_next(150, 1), on_error(230, ex)]
msgs2 = [on_next(150, 1), on_completed(250)]
e1 = scheduler.create_hot_observable(msgs1)
e2 = scheduler.create_hot_observable(msgs2)
def create():
return e1.concat(e2)
results = scheduler.start(create)
results.messages.assert_equal(on_error(230, ex))
def test_concat_throw_throw():
ex = 'ex'
scheduler = TestScheduler()
msgs1 = [on_next(150, 1), on_error(230, ex)]
msgs2 = [on_next(150, 1), on_error(250, 'ex2')]
e1 = scheduler.create_hot_observable(msgs1)
e2 = scheduler.create_hot_observable(msgs2)
def create():
return e1.concat(e2)
results = scheduler.start(create)
results.messages.assert_equal(on_error(230, ex))
def test_concat_return_empty():
scheduler = TestScheduler()
msgs1 = [on_next(150, 1), on_next(210, 2), on_completed(230)]
msgs2 = [on_next(150, 1), on_completed(250)]
e1 = scheduler.create_hot_observable(msgs1)
e2 = scheduler.create_hot_observable(msgs2)
def create():
return e1.concat(e2)
results = scheduler.start(create)
results.messages.assert_equal(on_next(210, 2), on_completed(250))
def test_concat_empty_return():
scheduler = TestScheduler()
msgs1 = [on_next(150, 1), on_completed(230)]
msgs2 = [on_next(150, 1), on_next(240, 2), on_completed(250)]
e1 = scheduler.create_hot_observable(msgs1)
e2 = scheduler.create_hot_observable(msgs2)
def create():
return e1.concat(e2)
results = scheduler.start(create)
results.messages.assert_equal(on_next(240, 2), on_completed(250))
def test_concat_return_never():
scheduler = TestScheduler()
msgs1 = [on_next(150, 1), on_next(210, 2), on_completed(230)]
e1 = scheduler.create_hot_observable(msgs1)
e2 = Observable.never()
def create():
return e1.concat(e2)
results = scheduler.start(create)
results.messages.assert_equal(on_next(210, 2))
def test_concat_never_return():
scheduler = TestScheduler()
msgs1 = [on_next(150, 1), on_next(210, 2), on_completed(230)]
e1 = scheduler.create_hot_observable(msgs1)
e2 = Observable.never()
def create():
return e2.concat(e1)
results = scheduler.start(create)
results.messages.assert_equal()
def test_concat_return_return():
scheduler = TestScheduler()
msgs1 = [on_next(150, 1), on_next(220, 2), on_completed(230)]
msgs2 = [on_next(150, 1), on_next(240, 3), on_completed(250)]
e1 = scheduler.create_hot_observable(msgs1)
e2 = scheduler.create_hot_observable(msgs2)
def create():
return e1.concat(e2)
results = scheduler.start(create)
results.messages.assert_equal(on_next(220, 2), on_next(240, 3), on_completed(250))
def test_concat_throw_return():
ex = 'ex'
scheduler = TestScheduler()
msgs1 = [on_next(150, 1), on_error(230, ex)]
msgs2 = [on_next(150, 1), on_next(240, 2), on_completed(250)]
e1 = scheduler.create_hot_observable(msgs1)
e2 = scheduler.create_hot_observable(msgs2)
def create():
return e1.concat(e2)
results = scheduler.start(create)
results.messages.assert_equal(on_error(230, ex))
def test_concat_return_throw():
ex = 'ex'
scheduler = TestScheduler()
msgs1 = [on_next(150, 1), on_next(220, 2), on_completed(230)]
msgs2 = [on_next(150, 1), on_error(250, ex)]
e1 = scheduler.create_hot_observable(msgs1)
e2 = scheduler.create_hot_observable(msgs2)
def create():
return e1.concat(e2)
results = scheduler.start(create)
results.messages.assert_equal(on_next(220, 2), on_error(250, ex))
def test_concat_some_data_some_data():
scheduler = TestScheduler()
msgs1 = [on_next(150, 1), on_next(210, 2), on_next(220, 3), on_completed(225)]
msgs2 = [on_next(150, 1), on_next(230, 4), on_next(240, 5), on_completed(250)]
e1 = scheduler.create_hot_observable(msgs1)
e2 = scheduler.create_hot_observable(msgs2)
def create():
return e1.concat(e2)
results = scheduler.start(create)
results.messages.assert_equal(on_next(210, 2), on_next(220, 3), on_next(230, 4), on_next(240, 5), on_completed(250))
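# The blocks below appear to be commented-out tests carried over from the RxJS
# suite (note the JavaScript "var" declarations and function syntax); they have
# not been ported to Python and are kept here only as reference material.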
# def test_MergeConcat_Basic():
# var results, scheduler, xs
# scheduler = TestScheduler()
# xs = scheduler.create_hot_observable(on_next(210, scheduler.create_cold_observable(on_next(50, 1), on_next(100, 2), on_next(120, 3), on_completed(140))), on_next(260, scheduler.create_cold_observable(on_next(20, 4), on_next(70, 5), on_completed(200))), on_next(270, scheduler.create_cold_observable(on_next(10, 6), on_next(90, 7), on_next(110, 8), on_completed(130))), on_next(320, scheduler.create_cold_observable(on_next(210, 9), on_next(240, 10), on_completed(300))), on_completed(400))
# results = scheduler.start(create)
# return xs.merge(2)
#
# results.messages.assert_equal(on_next(260, 1), on_next(280, 4), on_next(310, 2), on_next(330, 3), on_next(330, 5), on_next(360, 6), on_next(440, 7), on_next(460, 8), on_next(670, 9), on_next(700, 10), on_completed(760))
# xs.subscriptions.assert_equal(subscribe(200, 760))
#
# def test_MergeConcat_Basic_Long():
# var results, scheduler, xs
# scheduler = TestScheduler()
# xs = scheduler.create_hot_observable(on_next(210, scheduler.create_cold_observable(on_next(50, 1), on_next(100, 2), on_next(120, 3), on_completed(140))), on_next(260, scheduler.create_cold_observable(on_next(20, 4), on_next(70, 5), on_completed(300))), on_next(270, scheduler.create_cold_observable(on_next(10, 6), on_next(90, 7), on_next(110, 8), on_completed(130))), on_next(320, scheduler.create_cold_observable(on_next(210, 9), on_next(240, 10), on_completed(300))), on_completed(400))
# results = scheduler.start(create)
# return xs.merge(2)
#
# results.messages.assert_equal(on_next(260, 1), on_next(280, 4), on_next(310, 2), on_next(330, 3), on_next(330, 5), on_next(360, 6), on_next(440, 7), on_next(460, 8), on_next(690, 9), on_next(720, 10), on_completed(780))
# xs.subscriptions.assert_equal(subscribe(200, 780))
#
# def test_MergeConcat_Basic_Wide():
# var results, scheduler, xs
# scheduler = TestScheduler()
# xs = scheduler.create_hot_observable(on_next(210, scheduler.create_cold_observable(on_next(50, 1), on_next(100, 2), on_next(120, 3), on_completed(140))), on_next(260, scheduler.create_cold_observable(on_next(20, 4), on_next(70, 5), on_completed(300))), on_next(270, scheduler.create_cold_observable(on_next(10, 6), on_next(90, 7), on_next(110, 8), on_completed(130))), on_next(420, scheduler.create_cold_observable(on_next(210, 9), on_next(240, 10), on_completed(300))), on_completed(450))
# results = scheduler.start(create)
# return xs.merge(3)
#
# results.messages.assert_equal(on_next(260, 1), on_next(280, 4), on_next(280, 6), on_next(310, 2), on_next(330, 3), on_next(330, 5), on_next(360, 7), on_next(380, 8), on_next(630, 9), on_next(660, 10), on_completed(720))
# xs.subscriptions.assert_equal(subscribe(200, 720))
#
# def test_MergeConcat_Basic_Late():
# var results, scheduler, xs
# scheduler = TestScheduler()
# xs = scheduler.create_hot_observable(on_next(210, scheduler.create_cold_observable(on_next(50, 1), on_next(100, 2), on_next(120, 3), on_completed(140))), on_next(260, scheduler.create_cold_observable(on_next(20, 4), on_next(70, 5), on_completed(300))), on_next(270, scheduler.create_cold_observable(on_next(10, 6), on_next(90, 7), on_next(110, 8), on_completed(130))), on_next(420, scheduler.create_cold_observable(on_next(210, 9), on_next(240, 10), on_completed(300))), on_completed(750))
# results = scheduler.start(create)
# return xs.merge(3)
#
# results.messages.assert_equal(on_next(260, 1), on_next(280, 4), on_next(280, 6), on_next(310, 2), on_next(330, 3), on_next(330, 5), on_next(360, 7), on_next(380, 8), on_next(630, 9), on_next(660, 10), on_completed(750))
# xs.subscriptions.assert_equal(subscribe(200, 750))
#
# def test_MergeConcat_Disposed():
# var results, scheduler, xs
# scheduler = TestScheduler()
# xs = scheduler.create_hot_observable(on_next(210, scheduler.create_cold_observable(on_next(50, 1), on_next(100, 2), on_next(120, 3), on_completed(140))), on_next(260, scheduler.create_cold_observable(on_next(20, 4), on_next(70, 5), on_completed(200))), on_next(270, scheduler.create_cold_observable(on_next(10, 6), on_next(90, 7), on_next(110, 8), on_completed(130))), on_next(320, scheduler.create_cold_observable(on_next(210, 9), on_next(240, 10), on_completed(300))), on_completed(400))
# results = scheduler.startWithDispose(function () {
# return xs.merge(2)
# }, 450)
# results.messages.assert_equal(on_next(260, 1), on_next(280, 4), on_next(310, 2), on_next(330, 3), on_next(330, 5), on_next(360, 6), on_next(440, 7))
# xs.subscriptions.assert_equal(subscribe(200, 450))
#
# def test_MergeConcat_OuterError():
# var ex, results, scheduler, xs
# ex = 'ex'
# scheduler = TestScheduler()
# xs = scheduler.create_hot_observable(on_next(210, scheduler.create_cold_observable(on_next(50, 1), on_next(100, 2), on_next(120, 3), on_completed(140))), on_next(260, scheduler.create_cold_observable(on_next(20, 4), on_next(70, 5), on_completed(200))), on_next(270, scheduler.create_cold_observable(on_next(10, 6), on_next(90, 7), on_next(110, 8), on_completed(130))), on_next(320, scheduler.create_cold_observable(on_next(210, 9), on_next(240, 10), on_completed(300))), on_error(400, ex))
# results = scheduler.start(create)
# return xs.merge(2)
#
# results.messages.assert_equal(on_next(260, 1), on_next(280, 4), on_next(310, 2), on_next(330, 3), on_next(330, 5), on_next(360, 6), on_error(400, ex))
# xs.subscriptions.assert_equal(subscribe(200, 400))
#
# def test_MergeConcat_InnerError():
# var ex, results, scheduler, xs
# ex = 'ex'
# scheduler = TestScheduler()
# xs = scheduler.create_hot_observable(on_next(210, scheduler.create_cold_observable(on_next(50, 1), on_next(100, 2), on_next(120, 3), on_completed(140))), on_next(260, scheduler.create_cold_observable(on_next(20, 4), on_next(70, 5), on_completed(200))), on_next(270, scheduler.create_cold_observable(on_next(10, 6), on_next(90, 7), on_next(110, 8), on_error(140, ex))), on_next(320, scheduler.create_cold_observable(on_next(210, 9), on_next(240, 10), on_completed(300))), on_completed(400))
# results = scheduler.start(create)
# return xs.merge(2)
#
# results.messages.assert_equal(on_next(260, 1), on_next(280, 4), on_next(310, 2), on_next(330, 3), on_next(330, 5), on_next(360, 6), on_next(440, 7), on_next(460, 8), on_error(490, ex))
# xs.subscriptions.assert_equal(subscribe(200, 490))
#
# def test_ZipWithEnumerable_NeverEmpty():
# var n1, n2, results, scheduler
# scheduler = TestScheduler()
# n1 = scheduler.create_hot_observable(on_next(150, 1))
# n2 = []
# results = scheduler.start(create)
# return n1.zip(n2, function (x, y) {
# return x + y
#
#
# results.messages.assert_equal()
# n1.subscriptions.assert_equal(subscribe(200, 1000))
#
# def test_ZipWithEnumerable_EmptyEmpty():
# var n1, n2, results, scheduler
# scheduler = TestScheduler()
# n1 = scheduler.create_hot_observable(on_next(150, 1), on_completed(210))
# n2 = []
# results = scheduler.start(create)
# return n1.zip(n2, function (x, y) {
# return x + y
#
#
# results.messages.assert_equal(on_completed(210))
# n1.subscriptions.assert_equal(subscribe(200, 210))
#
# def test_ZipWithEnumerable_EmptyNonEmpty():
# var n1, n2, results, scheduler
# scheduler = TestScheduler()
# n1 = scheduler.create_hot_observable(on_next(150, 1), on_completed(210))
# n2 = [2]
# results = scheduler.start(create)
# return n1.zip(n2, function (x, y) {
# return x + y
#
#
# results.messages.assert_equal(on_completed(210))
# n1.subscriptions.assert_equal(subscribe(200, 210))
#
# def test_ZipWithEnumerable_NonEmptyEmpty():
# var n1, n2, results, scheduler
# scheduler = TestScheduler()
# n1 = scheduler.create_hot_observable(on_next(150, 1), on_next(215, 2), on_completed(220))
# n2 = []
# results = scheduler.start(create)
# return n1.zip(n2, function (x, y) {
# return x + y
#
#
# results.messages.assert_equal(on_completed(215))
# n1.subscriptions.assert_equal(subscribe(200, 215))
#
# def test_ZipWithEnumerable_NeverNonEmpty():
# var n1, n2, results, scheduler
# scheduler = TestScheduler()
# n1 = scheduler.create_hot_observable(on_next(150, 1))
# n2 = [2]
# results = scheduler.start(create)
# return n1.zip(n2, function (x, y) {
# return x + y
#
#
# results.messages.assert_equal()
# n1.subscriptions.assert_equal(subscribe(200, 1000))
#
# def test_ZipWithEnumerable_NonEmptyNonEmpty():
# var n1, n2, results, scheduler
# scheduler = TestScheduler()
# n1 = scheduler.create_hot_observable(on_next(150, 1), on_next(215, 2), on_completed(230))
# n2 = [3]
# results = scheduler.start(create)
# return n1.zip(n2, function (x, y) {
# return x + y
#
#
# results.messages.assert_equal(on_next(215, 2 + 3), on_completed(230))
# n1.subscriptions.assert_equal(subscribe(200, 230))
#
# def test_ZipWithEnumerable_ErrorEmpty():
# var ex, n1, n2, results, scheduler
# ex = 'ex'
# scheduler = TestScheduler()
# n1 = scheduler.create_hot_observable(on_next(150, 1), on_error(220, ex))
# n2 = []
# results = scheduler.start(create)
# return n1.zip(n2, function (x, y) {
# return x + y
#
#
# results.messages.assert_equal(on_error(220, ex))
# n1.subscriptions.assert_equal(subscribe(200, 220))
#
# def test_ZipWithEnumerable_ErrorSome():
# var ex, n1, n2, results, scheduler
# ex = 'ex'
# scheduler = TestScheduler()
# n1 = scheduler.create_hot_observable(on_next(150, 1), on_error(220, ex))
# n2 = [2]
# results = scheduler.start(create)
# return n1.zip(n2, function (x, y) {
# return x + y
#
#
# results.messages.assert_equal(on_error(220, ex))
# n1.subscriptions.assert_equal(subscribe(200, 220))
#
# def test_ZipWithEnumerable_SomeDataBothSides():
# var n1, n2, results, scheduler
# scheduler = TestScheduler()
# n1 = scheduler.create_hot_observable(on_next(150, 1), on_next(210, 2), on_next(220, 3), on_next(230, 4), on_next(240, 5))
# n2 = [5, 4, 3, 2]
# results = scheduler.start(create)
# return n1.zip(n2, function (x, y) {
# return x + y
#
#
# results.messages.assert_equal(on_next(210, 7), on_next(220, 7), on_next(230, 7), on_next(240, 7))
# n1.subscriptions.assert_equal(subscribe(200, 1000))
#
# def test_ZipWithEnumerable_SelectorThrows():
# var ex, n1, n2, results, scheduler
# ex = 'ex'
# scheduler = TestScheduler()
# n1 = scheduler.create_hot_observable(on_next(150, 1), on_next(215, 2), on_next(225, 4), on_completed(240))
# n2 = [3, 5]
# results = scheduler.start(create)
# return n1.zip(n2, function (x, y) {
# if (y == 5) {
# throw ex
# }
# return x + y
#
#
# results.messages.assert_equal(on_next(215, 2 + 3), on_error(225, ex))
# n1.subscriptions.assert_equal(subscribe(200, 225))
#
# test("Rx.Observable.catchException() does not lose subscription to underlying observable", 12, function () {
# var subscribes = 0,
# unsubscribes = 0,
# tracer = Rx.Observable.create(function (observer) { ++subscribes return function () { ++unsubscribes } ,
# s
# // Try it without catchException()
# s = tracer.subscribe()
# strictEqual(subscribes, 1, "1 subscribes")
# strictEqual(unsubscribes, 0, "0 unsubscribes")
# s.dispose()
# strictEqual(subscribes, 1, "After dispose: 1 subscribes")
# strictEqual(unsubscribes, 1, "After dispose: 1 unsubscribes")
# // Now try again with catchException(Observable):
# subscribes = unsubscribes = 0
# s = tracer.catchException(Rx.Observable.never()).subscribe()
# strictEqual(subscribes, 1, "catchException(Observable): 1 subscribes")
# strictEqual(unsubscribes, 0, "catchException(Observable): 0 unsubscribes")
# s.dispose()
# strictEqual(subscribes, 1, "catchException(Observable): After dispose: 1 subscribes")
# strictEqual(unsubscribes, 1, "catchException(Observable): After dispose: 1 unsubscribes")
# // And now try again with catchException(function()):
# subscribes = unsubscribes = 0
# s = tracer.catchException(function () { return Rx.Observable.never() .subscribe()
# strictEqual(subscribes, 1, "catchException(function): 1 subscribes")
# strictEqual(unsubscribes, 0, "catchException(function): 0 unsubscribes")
# s.dispose()
# strictEqual(subscribes, 1, "catchException(function): After dispose: 1 subscribes")
# strictEqual(unsubscribes, 1, "catchException(function): After dispose: 1 unsubscribes") // this one FAILS (unsubscribes is 0)
#
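# Manual entry point: running this module directly executes only the single
# test below; the full suite is normally collected by an external test runner.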
if __name__ == '__main__':
test_combine_latest_return_empty() | 36.856177 | 534 | 0.67222 | 11,480 | 81,747 | 4.54216 | 0.027526 | 0.093319 | 0.080086 | 0.124578 | 0.946168 | 0.93518 | 0.914985 | 0.899701 | 0.879775 | 0.873658 | 0 | 0.085987 | 0.196068 | 81,747 | 2,218 | 535 | 36.856177 | 0.707451 | 0.156177 | 0 | 0.809026 | 0 | 0 | 0.002386 | 0 | 0 | 0 | 0 | 0 | 0.10726 | 1 | 0.195553 | false | 0.000654 | 0.001962 | 0.086331 | 0.296926 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
04e38aa62d7b4177cb207d1768af9db21b681237 | 146 | py | Python | salstm/process_all.py | XIAOYEJIAYOU/GSAN | 8ca4fdf4c3d615af9cc10e1f9f22ceb7e27fe196 | [
"MIT"
] | 6 | 2021-10-01T11:42:59.000Z | 2021-10-04T23:33:43.000Z | salstm/process_all.py | XIAOYEJIAYOU/GSAN | 8ca4fdf4c3d615af9cc10e1f9f22ceb7e27fe196 | [
"MIT"
] | null | null | null | salstm/process_all.py | XIAOYEJIAYOU/GSAN | 8ca4fdf4c3d615af9cc10e1f9f22ceb7e27fe196 | [
"MIT"
] | 1 | 2022-03-15T13:06:29.000Z | 2022-03-15T13:06:29.000Z | import os
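# Run the three preprocessing stages in order. Note that os.system does not
# raise on a non-zero exit status, so a failure in an earlier stage will not
# stop the stages that follow.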
os.system("python process_1_csv2pkl.py")
os.system("python process_2_pklDataBalance.py")
os.system("python process_3_combineFeature.py") | 36.5 | 47 | 0.828767 | 23 | 146 | 5 | 0.521739 | 0.208696 | 0.365217 | 0.547826 | 0.4 | 0 | 0 | 0 | 0 | 0 | 0 | 0.028777 | 0.047945 | 146 | 4 | 48 | 36.5 | 0.798561 | 0 | 0 | 0 | 0 | 0 | 0.646259 | 0.367347 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.25 | 0 | 0.25 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
b6d9a77ebba7101c956c5f8cf989b97ad80960b2 | 122 | py | Python | CompareSegmentations_9de7a126150a11e98401005056c00008/__init__.py | daniella-patton/Bone_Micro_Strength | 5e01364f060ea2844898459184835d388d3f17e9 | [
"MIT"
] | null | null | null | CompareSegmentations_9de7a126150a11e98401005056c00008/__init__.py | daniella-patton/Bone_Micro_Strength | 5e01364f060ea2844898459184835d388d3f17e9 | [
"MIT"
] | null | null | null | CompareSegmentations_9de7a126150a11e98401005056c00008/__init__.py | daniella-patton/Bone_Micro_Strength | 5e01364f060ea2844898459184835d388d3f17e9 | [
"MIT"
] | null | null | null | from .CompareSegmentations_9de7a126150a11e98401005056c00008 import CompareSegmentations_9de7a126150a11e98401005056c00008
| 61 | 121 | 0.95082 | 6 | 122 | 19 | 0.666667 | 0.912281 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.444444 | 0.040984 | 122 | 1 | 122 | 122 | 0.529915 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 1 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 9 |
8e4c1e724ad1079d018b79fdfa12c74f307b6050 | 301 | py | Python | AEs/__init__.py | MaximeRedstone/UnstructuredCAE-DA | b54bd53540c11aa1b70e5160751905141f463217 | [
"MIT"
] | null | null | null | AEs/__init__.py | MaximeRedstone/UnstructuredCAE-DA | b54bd53540c11aa1b70e5160751905141f463217 | [
"MIT"
] | null | null | null | AEs/__init__.py | MaximeRedstone/UnstructuredCAE-DA | b54bd53540c11aa1b70e5160751905141f463217 | [
"MIT"
] | null | null | null | from UnstructuredCAEDA.AEs.AE_Base import BaseAE
from UnstructuredCAEDA.AEs.AE_Vanilla import VanillaAE
from UnstructuredCAEDA.AEs.CAE_3D import CAE_3D
from UnstructuredCAEDA.AEs.AE_Toy import ToyAE
from UnstructuredCAEDA.AEs.CAE_Toy import ToyCAE
from UnstructuredCAEDA.AEs.Jacobian import Jacobian
| 37.625 | 54 | 0.877076 | 42 | 301 | 6.142857 | 0.357143 | 0.488372 | 0.55814 | 0.302326 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.007246 | 0.083056 | 301 | 7 | 55 | 43 | 0.927536 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
6d2c9abff61d9c01266a030dd427bf4ed183f828 | 6,584 | py | Python | deepcrf/__init__.py | massongit/deep-crf | ff24ecc9e9238afef28d14a2f3c8b75307c9ceaf | [
"MIT"
] | 174 | 2017-03-16T12:28:11.000Z | 2021-09-06T09:11:55.000Z | deepcrf/__init__.py | massongit/deep-crf | ff24ecc9e9238afef28d14a2f3c8b75307c9ceaf | [
"MIT"
] | 30 | 2017-05-25T04:21:29.000Z | 2019-09-10T15:42:08.000Z | deepcrf/__init__.py | massongit/deep-crf | ff24ecc9e9238afef28d14a2f3c8b75307c9ceaf | [
"MIT"
] | 49 | 2017-05-17T08:07:48.000Z | 2020-02-03T04:26:16.000Z | #!/usr/bin/env python
# -*- coding: utf-8 -*-
import click
import logging
import deepcrf.main
import deepcrf.evaluate
@click.group()
def cli():
LOG_FORMAT = '[%(asctime)s] [%(levelname)s] %(message)s (%(funcName)s@%(filename)s:%(lineno)s)'
logging.basicConfig(level=logging.INFO, format=LOG_FORMAT)
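# Example invocations (a sketch only; assumes the package exposes a
# console-script entry point named "deepcrf" that points at this click group,
# and uses placeholder file names):
#     deepcrf train train.txt --save_dir save_model_dir --gpu 0
#     deepcrf predict input.txt --model_filename save_model_dir/bilstm-cnn-crf_adam.model
#     deepcrf eval gold.txt predicted.txt --tag_type BIOES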
@cli.command()
@click.argument('train_file', type=click.Path(exists=True))
@click.option('--save_dir', type=str, default='save_model_dir',
help='save model dir')
@click.option('--model_name', type=str, default='bilstm-cnn-crf',
help="select from [bilstm-cnn-crf, bilstm-cnn]")
@click.option('--batchsize', type=int, default=32, help='batch size')
@click.option('--max_iter', type=int, default=50, help='max iterations (default: 50)')
@click.option('--optimizer', type=str, default='adam',
help="select from [adam, adadelta, sgd, sgd_mom]")
@click.option('--init_lr', type=float, default=0.001, help='Initial Learning rate (default: 0.001)')
@click.option('--weight_decay', type=float, default=0.0)
@click.option('--use_lr_decay', type=int, default=0)
@click.option('--use_crf', type=int, default=1, help='use CRF flag.')
@click.option('--n_layer', type=int, default=1)
@click.option('--n_hidden', type=int, default=200)
@click.option('--n_vocab_min_cnt', type=int, default=0,
help='min count of vocab.')
@click.option('--n_word_emb', type=int, default=100,
help='word embedding size.')
@click.option('--n_add_feature_emb', type=int, default=100,
help='additional feature embedding size.')
@click.option('--n_char_emb', type=int, default=30,
help='character embedding size.')
@click.option('--n_char_hidden', type=int, default=30,
help='character hidden vector size.')
@click.option('--dropout_rate', type=float, default=0.33)
@click.option('--gpu', type=int, default=-1,
help='gpu ID. when gpu=-1 use CPU mode.')
@click.option('--word_emb_file', type=click.Path())
@click.option('--word_emb_vocab_type', type=str, default='replace_all',
help="select from [replace_all, replace_only, additional]")
@click.option('--vocab_file', type=click.Path())
@click.option('--vocab_char_file', type=click.Path())
@click.option('--dev_file', type=click.Path(), help='development file to use early stopping')
@click.option('--test_file', type=click.Path())
@click.option('--model_filename', type=click.Path())
@click.option('--input_idx', type=str, default='0', help='input_idx for features.')
@click.option('--output_idx', type=str, default='-1', help='output_idx for predicting.')
@click.option('--delimiter', type=str, default='\t',
help='delimiter string')
@click.option('--save_name', type=str, default='bilstm-cnn-crf_adam', help='save_name')
@click.option('--use_cudnn', type=int, default=1, help='use_cudnn = 0 or 1')
@click.option('--efficient_gpu', type=int, default=1,
help='efficient_gpu (if efficient_gpu == 1, it needs small GPU memory)')
@click.option('--use_list_files', type=int, default=0,
help='1: input file contains file path for each line.')
def train(train_file, **args):
# load the training file and run training
main.run(train_file, is_train=True, **args)
@cli.command()
@click.argument('input_file', type=click.Path(exists=True))
@click.option('--save_dir', type=str, default='save_model_dir',
help='save model dir')
@click.option('--model_name', type=str, default='bilstm-cnn-crf',
help="select from [bilstm-cnn-crf, bilstm-cnn]")
@click.option('--batchsize', type=int, default=32, help='batch size')
@click.option('--max_iter', type=int, default=50, help='max iterations (default: 50)')
@click.option('--optimizer', type=str, default='adam',
help="select from [adam, adadelta, sgd, sgd_mom]")
@click.option('--init_lr', type=float, default=0.001, help='Initial Learning rate (default: 0.001)')
@click.option('--weight_decay', type=float, default=0.0)
@click.option('--use_lr_decay', type=int, default=0)
@click.option('--use_crf', type=int, default=1, help='use CRF flag.')
@click.option('--n_layer', type=int, default=1)
@click.option('--n_hidden', type=int, default=200)
@click.option('--n_vocab_min_cnt', type=int, default=0,
help='min count of vocab.')
@click.option('--n_word_emb', type=int, default=100,
help='word embedding size.')
@click.option('--n_add_feature_emb', type=int, default=100,
help='additional feature embedding size.')
@click.option('--n_char_emb', type=int, default=30,
help='character embedding size.')
@click.option('--n_char_hidden', type=int, default=30,
help='character hidden vector size.')
@click.option('--dropout_rate', type=float, default=0.33)
@click.option('--gpu', type=int, default=-1,
help='gpu ID. when gpu=-1 use CPU mode.')
@click.option('--word_emb_file', type=click.Path())
@click.option('--word_emb_vocab_type', type=str, default='replace_all',
help="select from [replace_all, replace_only, additional]")
@click.option('--dev_file', type=click.Path(), help='development file to use early stopping')
@click.option('--test_file', type=click.Path())
@click.option('--vocab_file', type=click.Path())
@click.option('--vocab_char_file', type=click.Path())
@click.option('--delimiter', type=str, default='\t',
help='delimiter string')
@click.option('--save_name', type=str, default='bilstm-cnn-crf_adam', help='save_name')
@click.option('--predicted_output', type=str, default='',
help='predicted_output')
@click.option('--model_filename', type=click.Path())
@click.option('--input_idx', type=str, default='0', help='input_idx for features.')
@click.option('--output_idx', type=str, default='-1', help='output_idx for predicting.')
@click.option('--use_cudnn', type=int, default=1, help='use_cudnn = 0 or 1')
@click.option('--efficient_gpu', type=int, default=1,
help='efficient_gpu (if efficient_gpu == 1, it needs small GPU memory)')
@click.option('--use_list_files', type=int, default=0,
help='1: input file contains file path for each line.')
def predict(input_file, **args):
main.run(input_file, is_train=False, **args)
@cli.command()
@click.argument('gold_file', type=click.Path(exists=True))
@click.argument('predicted_file', type=click.Path(exists=True))
@click.option('--tag_type', type=str, default='BIOES', help='select from [BIO, BIOES]')
def eval(gold_file, predicted_file, **args):
evaluate.run(gold_file, predicted_file, **args)
| 51.4375 | 100 | 0.673603 | 951 | 6,584 | 4.528917 | 0.153523 | 0.168563 | 0.097516 | 0.055259 | 0.88716 | 0.863014 | 0.863014 | 0.855584 | 0.846761 | 0.846761 | 0 | 0.016585 | 0.130012 | 6,584 | 127 | 101 | 51.84252 | 0.735335 | 0.008809 | 0 | 0.815789 | 0 | 0.008772 | 0.369462 | 0.012264 | 0 | 0 | 0 | 0 | 0 | 1 | 0.035088 | false | 0 | 0.035088 | 0 | 0.070175 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
eda15f56c1fa924a07a20fb1cb36f990826dc5dd | 8,845 | py | Python | vsbrnn/data/region_model.py | jonomon/VSMood | 7c5797b09d89b37b06adcabf9808e90f5b937a29 | [
"Apache-2.0"
] | null | null | null | vsbrnn/data/region_model.py | jonomon/VSMood | 7c5797b09d89b37b06adcabf9808e90f5b937a29 | [
"Apache-2.0"
] | null | null | null | vsbrnn/data/region_model.py | jonomon/VSMood | 7c5797b09d89b37b06adcabf9808e90f5b937a29 | [
"Apache-2.0"
] | null | null | null | from vsbrnn.utils import Rect
import numpy as np
class RegionModel:
def __init__(self):
self.model_TGH = {}
self.model_TWH = {}
self.ignore_fixations_outside = False
self.segmentation = None
self.region_type = None
@classmethod
def from_segmentation(cls, segmentation):
cls = RegionModel()
cls.region_type = "segmentation"
cls.segmentation = segmentation
return cls
def add_region_TGH(self, name, x, y, w, h):
self.model_TGH[name] = Rect(x, y, w, h)
def add_region_TWH(self, name, x, y, w, h):
self.model_TWH[name] = Rect(x, y, w, h)
def fix_in_segmentation(self, fix):
x = np.floor(fix.x * self.segmentation.shape[0])
y = np.floor(fix.y * self.segmentation.shape[1])
if x < 0 or x >= self.segmentation.shape[0] or y < 0 or y >= self.segmentation.shape[1]:
return np.max(self.segmentation) + 2
else:
return self.segmentation[int(x), int(y)] + 1
def fix_in_bounding_box(self, test, fix):
if "TWH" in test:
models = self.model_TWH
else:
models = self.model_TGH
for key in models:
model = models[key]
if model.isPointInRect(fix.x, fix.y):
return key
return "N"
def fix_in_region(self, test, fix):
if self.region_type == "segmentation":
return self.fix_in_segmentation(fix)
elif self.region_type == "bounding_box":
return self.fix_in_bounding_box(test, fix)
class FaceRegionModel_semantic5(RegionModel):
def __init__(self):
RegionModel.__init__(self)
self.region_type = "bounding_box"
self.add_region_TGH("LE", x=0.0, y=0.25, w=0.45, h=0.2)
self.add_region_TGH("RE", x=0.55, y=0.25, w=0.45, h=0.2)
self.add_region_TGH("NO", x=0.275, y=0.45, w=0.45, h=0.2)
self.add_region_TGH("M", x=0.15, y=0.65, w=0.7, h=0.2)
self.add_region_TWH("LE", x=0.0, y=0.25, w=0.45, h=0.20)
self.add_region_TWH("RE", x=0.55, y=0.25, w=0.45, h=0.20)
self.add_region_TWH("NO", x=0.275, y=0.45, w=0.45, h=0.20)
self.add_region_TWH("M", x=0.15, y=0.65, w=0.7, h=0.2)
class FaceRegionModel_semantic8(RegionModel):
def __init__(self):
RegionModel.__init__(self)
self.region_type = "bounding_box"
self.add_region_TGH("FH", x=0.0, y=0.0, w=1, h=0.2)
self.add_region_TGH("LE", x=0.0, y=0.25, w=0.45, h=0.2)
self.add_region_TGH("RE", x=0.55, y=0.25, w=0.45, h=0.2)
self.add_region_TGH("NO", x=0.275, y=0.45, w=0.45, h=0.2)
self.add_region_TGH("LC", x=0.0, y=0.45, w=0.25, h=0.2)
self.add_region_TGH("RC", x=0.75, y=0.45, w=0.25, h=0.2)
self.add_region_TGH("M", x=0.15, y=0.65, w=0.7, h=0.2)
self.add_region_TWH("FH", x=0.0, y=0.0, w=1, h=0.2)
self.add_region_TWH("LE", x=0.0, y=0.25, w=0.45, h=0.20)
self.add_region_TWH("RE", x=0.55, y=0.25, w=0.45, h=0.20)
self.add_region_TWH("NO", x=0.275, y=0.45, w=0.45, h=0.20)
self.add_region_TWH("LC", x=0.0, y=0.45, w=0.25, h=0.2)
self.add_region_TWH("RC", x=0.75, y=0.45, w=0.25, h=0.2)
self.add_region_TWH("M", x=0.15, y=0.65, w=0.7, h=0.2)
class FaceRegionModel4(RegionModel):
def __init__(self):
RegionModel.__init__(self)
self.region_type = "bounding_box"
self.ignore_fixations_outside = True
self.add_region_TGH("FH1", x=0.0, y=0.0, w=0.38, h=0.25)
self.add_region_TGH("FH2", x=0.38, y=0.0, w=0.24, h=0.25)
self.add_region_TGH("FH3", x=0.62, y=0.0, w=0.38, h=0.25)
self.add_region_TGH("LE", x=0.0, y=0.25, w=0.38, h=0.21)
self.add_region_TGH("NT", x=0.38, y=0.25, w=0.24, h=0.21)
self.add_region_TGH("RE", x=0.62, y=0.25, w=0.38, h=0.21)
self.add_region_TGH("LC", x=0.0, y=0.46, w=0.38, h=0.21)
self.add_region_TGH("NO", x=0.38, y=0.46, w=0.24, h=0.21)
self.add_region_TGH("RC", x=0.62, y=0.46, w=0.38, h=0.21)
self.add_region_TGH("LM", x=0.0, y=0.67, w=0.38, h=0.33)
self.add_region_TGH("M", x=0.38, y=0.67, w=0.24, h=0.33)
self.add_region_TGH("RM", x=0.62, y=0.67, w=0.38, h=0.33)
self.add_region_TWH("FH1", x=0.0, y=0.0, w=0.38, h=0.25)
self.add_region_TWH("FH2", x=0.38, y=0.0, w=0.24, h=0.25)
self.add_region_TWH("FH3", x=0.62, y=0.0, w=0.38, h=0.25)
self.add_region_TWH("LE", x=0.0, y=0.25, w=0.38, h=0.21)
self.add_region_TWH("NT", x=0.38, y=0.25, w=0.24, h=0.21)
self.add_region_TWH("RE", x=0.62, y=0.25, w=0.38, h=0.21)
self.add_region_TWH("LC", x=0.0, y=0.46, w=0.38, h=0.21)
self.add_region_TWH("NO", x=0.38, y=0.46, w=0.24, h=0.21)
self.add_region_TWH("RC", x=0.62, y=0.46, w=0.38, h=0.21)
self.add_region_TWH("LM", x=0.0, y=0.67, w=0.38, h=0.33)
self.add_region_TWH("M", x=0.38, y=0.67, w=0.24, h=0.33)
self.add_region_TWH("RM", x=0.62, y=0.67, w=0.38, h=0.33)
class FaceRegionModel_grid9(RegionModel):
def __init__(self):
RegionModel.__init__(self)
self.region_type = "bounding_box"
self.ignore_fixations_outside = True
self.add_region_TGH("FH1", x=0.0, y=0.0, w=0.33, h=0.33)
self.add_region_TGH("FH2", x=0.33, y=0.0, w=0.33, h=0.33)
self.add_region_TGH("FH3", x=0.66, y=0.0, w=0.33, h=0.33)
self.add_region_TGH("LE", x=0.0, y=0.33, w=0.33, h=0.33)
self.add_region_TGH("NT", x=0.33, y=0.33, w=0.33, h=0.33)
self.add_region_TGH("RE", x=0.66, y=0.33, w=0.33, h=0.33)
self.add_region_TGH("LC", x=0.0, y=0.66, w=0.33, h=0.33)
self.add_region_TGH("NO", x=0.33, y=0.66, w=0.33, h=0.33)
self.add_region_TGH("RC", x=0.66, y=0.66, w=0.33, h=0.33)
self.add_region_TWH("FH1", x=0.0, y=0.0, w=0.33, h=0.33)
self.add_region_TWH("FH2", x=0.33, y=0.0, w=0.33, h=0.33)
self.add_region_TWH("FH3", x=0.66, y=0.0, w=0.33, h=0.33)
self.add_region_TWH("LE", x=0.0, y=0.33, w=0.33, h=0.33)
self.add_region_TWH("NT", x=0.33, y=0.33, w=0.33, h=0.33)
self.add_region_TWH("RE", x=0.66, y=0.33, w=0.33, h=0.33)
self.add_region_TWH("LC", x=0.0, y=0.66, w=0.33, h=0.33)
self.add_region_TWH("NO", x=0.33, y=0.66, w=0.33, h=0.33)
self.add_region_TWH("RC", x=0.66, y=0.66, w=0.33, h=0.33)
class FaceRegionModel_grid16(RegionModel):
def __init__(self):
RegionModel.__init__(self)
self.region_type = "bounding_box"
self.ignore_fixations_outside = True
self.add_region_TGH("FH1", x=0.0, y=0.0, w=0.25, h=0.25)
self.add_region_TGH("FH2", x=0.25, y=0.0, w=0.25, h=0.25)
self.add_region_TGH("FH3", x=0.50, y=0.0, w=0.25, h=0.25)
self.add_region_TGH("FH4", x=0.75, y=0.0, w=0.25, h=0.25)
self.add_region_TGH("LE1", x=0.0, y=0.25, w=0.25, h=0.25)
self.add_region_TGH("LE2", x=0.25, y=0.25, w=0.25, h=0.25)
self.add_region_TGH("RE1", x=0.50, y=0.25, w=0.25, h=0.25)
self.add_region_TGH("RE2", x=0.75, y=0.25, w=0.25, h=0.25)
self.add_region_TGH("LC", x=0.0, y=0.50, w=0.25, h=0.25)
self.add_region_TGH("NO1", x=0.25, y=0.50, w=0.25, h=0.25)
self.add_region_TGH("NO2", x=0.50, y=0.50, w=0.25, h=0.25)
self.add_region_TGH("RC", x=0.75, y=0.50, w=0.25, h=0.25)
self.add_region_TGH("M1", x=0.0, y=0.75, w=0.25, h=0.25)
self.add_region_TGH("M2", x=0.25, y=0.75, w=0.25, h=0.25)
self.add_region_TGH("M3", x=0.50, y=0.75, w=0.25, h=0.25)
self.add_region_TGH("M4", x=0.75, y=0.75, w=0.25, h=0.25)
self.add_region_TWH("FH1", x=0.0, y=0.0, w=0.25, h=0.25)
self.add_region_TWH("FH2", x=0.25, y=0.0, w=0.25, h=0.25)
self.add_region_TWH("FH3", x=0.50, y=0.0, w=0.25, h=0.25)
self.add_region_TWH("FH4", x=0.75, y=0.0, w=0.25, h=0.25)
self.add_region_TWH("LE1", x=0.0, y=0.25, w=0.25, h=0.25)
self.add_region_TWH("LE2", x=0.25, y=0.25, w=0.25, h=0.25)
self.add_region_TWH("RE1", x=0.50, y=0.25, w=0.25, h=0.25)
self.add_region_TWH("RE2", x=0.75, y=0.25, w=0.25, h=0.25)
self.add_region_TWH("LC", x=0.0, y=0.50, w=0.25, h=0.25)
self.add_region_TWH("NO1", x=0.25, y=0.50, w=0.25, h=0.25)
self.add_region_TWH("NO2", x=0.50, y=0.50, w=0.25, h=0.25)
self.add_region_TWH("RC", x=0.75, y=0.50, w=0.25, h=0.25)
self.add_region_TWH("M1", x=0.0, y=0.75, w=0.25, h=0.25)
self.add_region_TWH("M2", x=0.25, y=0.75, w=0.25, h=0.25)
self.add_region_TWH("M3", x=0.50, y=0.75, w=0.25, h=0.25)
self.add_region_TWH("M4", x=0.75, y=0.75, w=0.25, h=0.25)
| 45.592784 | 96 | 0.559638 | 1,934 | 8,845 | 2.411582 | 0.053258 | 0.066895 | 0.267581 | 0.164666 | 0.831261 | 0.797599 | 0.797599 | 0.789237 | 0.780232 | 0.780232 | 0 | 0.166227 | 0.228717 | 8,845 | 193 | 97 | 45.829016 | 0.517444 | 0 | 0 | 0.23125 | 0 | 0 | 0.035953 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.075 | false | 0 | 0.0125 | 0 | 0.16875 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
edf79359923e02e80b801f8e6f95055e320c0f6f | 212 | py | Python | src/asm/translation/interfaces.py | ctheune/assembly-cms | 20e000373fc30d9a14cb5dc882499b5eed1d86ee | [
"ZPL-2.1"
] | null | null | null | src/asm/translation/interfaces.py | ctheune/assembly-cms | 20e000373fc30d9a14cb5dc882499b5eed1d86ee | [
"ZPL-2.1"
] | null | null | null | src/asm/translation/interfaces.py | ctheune/assembly-cms | 20e000373fc30d9a14cb5dc882499b5eed1d86ee | [
"ZPL-2.1"
] | null | null | null | import zope.interface.common.sequence
class ILanguageProfile(zope.interface.common.sequence.ISequence):
"""A list of language ISO codes. The first language is the default/fallback
language."""
pass
| 26.5 | 79 | 0.759434 | 27 | 212 | 5.962963 | 0.740741 | 0.161491 | 0.236025 | 0.335404 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.150943 | 212 | 7 | 80 | 30.285714 | 0.894444 | 0.386792 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.333333 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 8 |
b6166655c666646623f467149c89c7aec531f6e8 | 256 | py | Python | InvisibleCharm/lib/data/__init__.py | MPCodeWriter21/InvisibleCharm | a6c906de869adf4b18acf8edee800a8226a62eb5 | [
"Apache-2.0"
] | 1 | 2021-07-31T13:26:14.000Z | 2021-07-31T13:26:14.000Z | InvisibleCharm/lib/data/__init__.py | MPCodeWriter21/InvisibleCharm | a6c906de869adf4b18acf8edee800a8226a62eb5 | [
"Apache-2.0"
] | null | null | null | InvisibleCharm/lib/data/__init__.py | MPCodeWriter21/InvisibleCharm | a6c906de869adf4b18acf8edee800a8226a62eb5 | [
"Apache-2.0"
] | null | null | null | # InvisibleCharm.lib.data.__init__.py
# CodeWriter21
import InvisibleCharm.lib.data.Encryption as Encryption
import InvisibleCharm.lib.data.Prepare as Prepare
from InvisibleCharm.lib.data.Encryption import *
from InvisibleCharm.lib.data.Prepare import *
| 28.444444 | 55 | 0.835938 | 32 | 256 | 6.5625 | 0.34375 | 0.404762 | 0.5 | 0.257143 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.008547 | 0.085938 | 256 | 8 | 56 | 32 | 0.888889 | 0.1875 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
b629322810cb413c51411a8f4e4e98eb0ecf8bb9 | 6,837 | py | Python | tests/internal/test_utils_hpp.py | RUrlus/ModelMetricUncertainty | f401a25dd196d6e4edf4901fcfee4b56ebd7c10b | [
"Apache-2.0"
] | null | null | null | tests/internal/test_utils_hpp.py | RUrlus/ModelMetricUncertainty | f401a25dd196d6e4edf4901fcfee4b56ebd7c10b | [
"Apache-2.0"
] | 11 | 2021-12-08T10:34:17.000Z | 2022-01-20T13:40:05.000Z | tests/internal/test_utils_hpp.py | RUrlus/ModelMetricUncertainty | f401a25dd196d6e4edf4901fcfee4b56ebd7c10b | [
"Apache-2.0"
] | null | null | null | # import numpy as np
# import pytest
#
# from mmu_tests import _mmu_core_tests
#
#
# def test_check_contiguous():
# carr = np.ones((10), order='C', dtype=np.float64)
# farr = np.ones((10), order='F', dtype=np.float64)
# assert _mmu_core_tests.check_contiguous(carr) is None
# assert _mmu_core_tests.check_contiguous(farr) is None
#
# carr = np.ones((10, 4), order='C', dtype=np.float64)
# farr = np.ones((10, 4), order='F', dtype=np.float64)
# assert _mmu_core_tests.check_contiguous(carr) is None
# assert _mmu_core_tests.check_contiguous(farr) is None
#
# assert _mmu_core_tests.check_contiguous(carr[:, [1, 3]]) is None
# assert _mmu_core_tests.check_contiguous(farr[:, [1, 3]]) is None
# assert _mmu_core_tests.check_contiguous(carr[[1, 3], :]) is None
# assert _mmu_core_tests.check_contiguous(farr[[1, 3], :]) is None
#
#
# def test_1d_soft():
# x = np.zeros((100, ))
# assert _mmu_core_tests.check_1d_soft(x) == 0
# x = np.zeros((100, 1))
# assert _mmu_core_tests.check_1d_soft(x) == 0
# assert _mmu_core_tests.check_1d_soft(x.T) == 1
# x = np.zeros((100, 2))
# with pytest.raises(RuntimeError):
# _mmu_core_tests.check_1d_soft(x)
# _mmu_core_tests.check_1d_soft(x.T)
# _mmu_core_tests.check_1d_soft(np.zeros((100, 4, 1)))
#
#
# def test_check_equal_length():
# x = np.zeros((100, 4))
# y = np.zeros((100, 4))
# assert _mmu_core_tests.check_equal_length(x, y) is None
# assert _mmu_core_tests.check_equal_length(x.T, y.T) is None
# assert _mmu_core_tests.check_equal_length(x[:, [0, 1]], y) is None
# assert _mmu_core_tests.check_equal_length(x, y[:, [0, 1]]) is None
# with pytest.raises(RuntimeError):
# _mmu_core_tests.check_equal_length(x[:10], y)
# _mmu_core_tests.check_equal_length(x, y[:10])
# _mmu_core_tests.check_equal_length(x.T, y)
# _mmu_core_tests.check_equal_length(x, y.T)
#
#
# def test_check_shape_length():
# x = np.zeros((100, 2))
# y = np.zeros((100, 2))
# assert _mmu_core_tests.check_equal_shape(x, y) is None
# assert _mmu_core_tests.check_equal_shape(x.T, y.T) is None
# with pytest.raises(RuntimeError):
# _mmu_core_tests.check_equal_shape(x[:10], y)
# _mmu_core_tests.check_equal_shape(x, y[:10])
# _mmu_core_tests.check_equal_shape(x.T, y)
# _mmu_core_tests.check_equal_shape(x, y.T)
# _mmu_core_tests.check_equal_shape(x[:, [0, 1]], y)
# _mmu_core_tests.check_equal_shape(x, y[:, [0, 1]])
#
# x = np.zeros((100, 2))
# y = np.zeros((100, 4, 2))
# with pytest.raises(RuntimeError):
# _mmu_core_tests.check_equal_shape(x, y)
# _mmu_core_tests.check_equal_shape(y, x)
#
#
# def test_check_shape_order():
# """Test check_shape_order.
#
# check_shape_order checks if the array is contiguous along the obs_axis;
# if not, it returns a copy of the array in the correct order: C order for
# obs_axis == 1 and F order for obs_axis == 0
# """
# carr_good = np.zeros((2, 10), order='C', dtype=np.float64)
# carr_bad = np.zeros((10, 2), order='C', dtype=np.float64)
#
# arr = _mmu_core_tests.check_shape_order(carr_good, 'x', 0)
# assert arr.flags['F_CONTIGUOUS']
#
# arr = _mmu_core_tests.check_shape_order(carr_bad, 'x', 0)
# assert arr.flags['F_CONTIGUOUS']
#
# arr = _mmu_core_tests.check_shape_order(carr_bad, 'x', 1)
# assert arr.flags['C_CONTIGUOUS']
#
# arr = _mmu_core_tests.check_shape_order(carr_good, 'x', 1)
# assert arr.flags['C_CONTIGUOUS']
#
# farr_good = np.zeros((10, 2), order='F', dtype=np.float64)
# farr_bad = np.zeros((2, 10), order='F', dtype=np.float64)
#
# arr = _mmu_core_tests.check_shape_order(farr_good, 'x', 0)
# assert arr.flags['F_CONTIGUOUS']
#
# arr = _mmu_core_tests.check_shape_order(farr_bad, 'x', 0)
# assert arr.flags['F_CONTIGUOUS']
#
# arr = _mmu_core_tests.check_shape_order(farr_bad, 'x', 1)
# assert arr.flags['C_CONTIGUOUS']
#
# arr = _mmu_core_tests.check_shape_order(farr_good, 'x', 1)
# assert arr.flags['C_CONTIGUOUS']
#
# # check non-contiguous views
# inp = np.zeros((10, 100), order='C', dtype=np.float64)[:, :90]
# arr = _mmu_core_tests.check_shape_order(inp, 'x', 1)
# assert arr.flags['C_CONTIGUOUS']
# assert arr.shape == (10, 90), arr.shape
# assert np.isclose(arr[2, 10], inp[2, 10])
#
# inp = np.zeros((10, 100), order='C', dtype=np.float64)[:, :90]
# arr = _mmu_core_tests.check_shape_order(inp, 'x', 0)
# assert arr.flags['F_CONTIGUOUS']
# assert arr.shape == (10, 90), arr.shape
# assert np.isclose(arr[2, 10], inp[2, 10])
#
# inp = np.zeros((100, 10), order='F', dtype=np.float64)[:90, :]
# arr = _mmu_core_tests.check_shape_order(inp, 'x', 0)
# assert arr.flags['F_CONTIGUOUS']
# assert arr.shape == (90, 10), arr.shape
# assert np.isclose(arr[10, 2], inp[10, 2])
#
# inp = np.zeros((100, 10), order='F', dtype=np.float64)[:90, :]
# arr = _mmu_core_tests.check_shape_order(inp, 'x', 1)
# assert arr.flags['C_CONTIGUOUS']
# assert arr.shape == (90, 10), arr.shape
# assert np.isclose(arr[10, 2], inp[10, 2])
#
# with pytest.raises(RuntimeError):
# _mmu_core_tests.check_shape_order(np.zeros((10, 4, 2)), 'arr', 0)
#
# with pytest.raises(RuntimeError):
# _mmu_core_tests.check_shape_order(np.zeros((10, 2)), 'arr', 2)
#
# def test_assert_shape_order():
# """Test assrt_shape_order.
#
# assert_shape_order checks if the array is contiguous along the axis of
# length ``expected``; if not, it raises a RuntimeError
# """
# carr_good = np.zeros((10, 4), order='C', dtype=np.float64)
# carr_bad = np.zeros((4, 10), order='C', dtype=np.float64)
#
# _mmu_core_tests.assert_shape_order(carr_good, 'x', 4)
# with pytest.raises(RuntimeError):
# _mmu_core_tests.assert_shape_order(carr_bad, 'x', 4)
#
# farr_good = np.zeros((4, 10), order='F', dtype=np.float64)
# farr_bad = np.zeros((10, 4), order='F', dtype=np.float64)
#
# _mmu_core_tests.assert_shape_order(farr_good, 'x', 4)
# with pytest.raises(RuntimeError):
# _mmu_core_tests.assert_shape_order(farr_bad, 'x', 4)
# with pytest.raises(RuntimeError):
# _mmu_core_tests.assert_shape_order(farr_good[:, [0, 3, 5, 4]], 'x', 4)
#
# arr_bad = np.zeros((10, 5), dtype=np.float64)
# with pytest.raises(RuntimeError):
# _mmu_core_tests.assert_shape_order(arr_bad, 'x', 4)
#
# arr_bad = np.zeros((10, 4, 2), dtype=np.float64)
# with pytest.raises(RuntimeError):
# _mmu_core_tests.assert_shape_order(arr_bad, 'x', 4)
#
# arr_good = np.zeros((4,), dtype=np.float64)
# assert _mmu_core_tests.assert_shape_order(arr_good, 'x', 4) is None
| 39.982456 | 80 | 0.646482 | 1,080 | 6,837 | 3.785185 | 0.068519 | 0.094178 | 0.161448 | 0.191292 | 0.874755 | 0.850783 | 0.836106 | 0.794031 | 0.755137 | 0.585127 | 0 | 0.04657 | 0.189703 | 6,837 | 170 | 81 | 40.217647 | 0.691336 | 0.949978 | 0 | null | 0 | null | 0 | 0 | null | 0 | 0 | 0 | null | 1 | null | true | 0 | 0 | null | null | null | 0 | 0 | 0 | null | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
b64da7fb24ca400dbe2a3e290bbc85eb93cdf140 | 73,706 | py | Python | DEFA/MS_Office/carpe_ppt.py | sk-yaho/carpe | 077ef7ba1582b3de9f5c08d63431e744b77a9e09 | [
"Apache-2.0"
] | 2 | 2020-07-09T02:01:50.000Z | 2020-11-21T15:19:32.000Z | DEFA/MS_Office/carpe_ppt.py | sk-yaho/carpe | 077ef7ba1582b3de9f5c08d63431e744b77a9e09 | [
"Apache-2.0"
] | null | null | null | DEFA/MS_Office/carpe_ppt.py | sk-yaho/carpe | 077ef7ba1582b3de9f5c08d63431e744b77a9e09 | [
"Apache-2.0"
] | null | null | null | # carpe_ppt.py
import struct
import zipfile
import zlib
import os
import shutil
from compoundfiles import *
class PPT :
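# record type (recType) values of the PPT binary (MS-PPT) format, kept as little-endian byte pairs so they can be compared directly against the 2-byte type field of each record header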
RT_CurrentUserAtom = b'\xF6\x0F'
RT_UserEditAtom = b'\xF5\x0F'
RT_PersistPtrIncrementalAtom = b'\x72\x17'
RT_Document = b'\xE8\x03'
RT_MainMaster = b'\xF8\x03'
RT_Slide = b'\xEE\x03'
RT_Notes = b'\xF0\x03'
RT_NotesAtom = b'\xF1\x03'
RT_SlideListWithText = b'\xF0\x0F'
RT_SlidePersistAtom = b'\xF3\x03'
RT_TextHeader = b'\x9F\x0F'
RT_TextBytesAtom = b'\xA8\x0F'
RT_TextCharsAtom = b'\xA0\x0F'
RT_StyleTextPropAtom = b'\xA1\x0F'
RT_TextSpecInfoAtom = b'\xAA\x0F'
RT_SlideAtom = b'\xEF\x03'
RT_PPDrawing = b'\x0C\x04'
RT_EscherClientTextbox = b'\x0D\xF0'
# define RT_Slide 0x03EE // 1006 [C]
RT_ProgTags = b'\x88\x13'
RT_BinaryTagDataBlob = b'\x8B\x13'
RT_Comment10 = b'\xE0\x2E'
RT_CString = b'\xBA\x0F'
# define RT_CString 0x0FBA // 4026 [A] // UNICODE
# define RT_Comment10 0x2EE0 // 12000 [C]
# define RT_CString 0x0FBA // 4026 [A] // UNICODE
RT_EndDocument = b'\xEA\x03'
def __init__(self, compound):
self.compound = compound
self.powerpoint_document = b''
self.current_user = b''
self.current_offset = 0
self.arr_user_edit_block = []
self.arr_persist_ptr_incremental_block = []
self.arr_edit_block_text = []
self.text = b'' # tempText
self.text_bytes = b''
self.text_chars = b''
self.filteredText = bytearray(b'')
def __enter__(self):
raise NotImplementedError
def __exit__(self):
raise NotImplementedError
def parse_ppt(self):
if self.compound.is_damaged == self.compound.CONST_DOCUMENT_NORMAL:
self.__parse_ppt_normal__()
elif self.compound.is_damaged == self.compound.CONST_DOCUMENT_DAMAGED:
self.__parse_ppt_damaged__()
def __get_user_edit_offset__(self):
raise NotImplementedError
def __set_chain__(self):
raise NotImplementedError
def __extract_text__(self):
raise NotImplementedError
def __extract_text_in_slide__(self, block_number, sheet_number, slide_id):
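# resolve the persisted stream offset of sheet_number from the persist pointer blocks (starting at block_number), check that the target is an RT_Slide or RT_Notes container, then collect text from its PPDrawing and ProgTags children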
slide_total_length = 0
length = 0
tmpHeader = {}
tmpHeader['option'] = 0
tmpHeader['type'] = b''
tmpHeader['length'] = 0
# set the current offset
self.current_offset = 0
for i in range(block_number, len(self.arr_persist_ptr_incremental_block)):
for j in range(0, len(self.arr_persist_ptr_incremental_block[i])):
for k in range(0, self.arr_persist_ptr_incremental_block[i][j]['count']):
if self.arr_persist_ptr_incremental_block[i][j]['startnum'] + k == sheet_number:
self.current_offset = struct.unpack('<I', self.arr_persist_ptr_incremental_block[i][j]['object'][4 * k : 4 * (k + 1)])[0]
if self.current_offset != 0:
break
if self.current_offset != 0:
break
# traverse records in a slide record
if self.__get_header_info__(tmpHeader) == False:
return
if tmpHeader['type'] != self.RT_Slide and tmpHeader['type'] != self.RT_Notes:
return
slide_total_length = tmpHeader['length']
while True:
if length >= slide_total_length:
return
if self.__get_header_info__(tmpHeader) == False:
return
if tmpHeader['type'] == self.RT_PPDrawing:
self.__extract_text_in_slide_ppt_drawing__(tmpHeader)
length += tmpHeader['length']
continue
elif tmpHeader['type'] == self.RT_ProgTags:
self.__extract_text_in_slide_ppt_comment__(tmpHeader)
length += tmpHeader['length']
continue
self.current_offset += tmpHeader['length']
length += tmpHeader['length']
def __extract_text_in_slide_ppt_comment__(self, header):
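# walk an RT_ProgTags container, descend into RT_BinaryTagDataBlob payloads and append RT_CString comment text to self.text; two-byte payloads holding just the UTF-16 string "j" are skipped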
j = 0
preSize = 0
totalLength_tmp = header['length']
totalLength = 0
tmpLength = 0
readLength = 0
tmpHeader = {}
tmpHeader['option'] = 0
tmpHeader['type'] = b''
tmpHeader['length'] = 0
while True :
if totalLength_tmp <= totalLength:
break
if ((tmpHeader['option'] & 0x000F) == 0x000F): # Container
pass
else: # Atom
self.current_offset += tmpHeader['length']
totalLength += tmpHeader['length']
if totalLength_tmp <= totalLength:
break
if self.__get_header_info__(tmpHeader) == self.compound.CONST_ERROR:
break
totalLength += 8
if tmpHeader['type'] == self.RT_BinaryTagDataBlob:
tmpLength = tmpHeader['length']
readLength = 0
while True:
if tmpLength <= readLength:
break
if self.__get_header_info__(tmpHeader) == self.compound.CONST_ERROR:
break
readLength += 8
if tmpHeader['type'] == self.RT_CString:
if tmpHeader['length'] == 2 and self.powerpoint_document[self.current_offset] == 0x6A and self.powerpoint_document[self.current_offset + 1] == 0x00:
self.current_offset += tmpHeader['length']
readLength += tmpHeader['length']
self.text_chars = b''
self.text_chars = self.powerpoint_document[self.current_offset : self.current_offset + tmpHeader['length']]
preSize = len(self.text)
self.text += self.text_chars
self.text += b'\x0A'
self.current_offset += tmpHeader['length']
readLength += tmpHeader['length']
else:
if((tmpHeader['option'] & 0x000F) == 0x000F): # Container
pass
else: # Atom
self.current_offset += tmpHeader['length']
readLength += tmpHeader['length']
totalLength += readLength
if totalLength_tmp != totalLength :
if self.__get_header_info__(tmpHeader) == self.compound.CONST_ERROR:
break
totalLength += 8
def __extract_text_in_slide_ppt_drawing__(self, header):
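# walk the PPDrawing (OfficeArt) container; inside each EscherClientTextbox an RT_TextBytesAtom payload is 8-bit text widened to UTF-16LE, an RT_TextCharsAtom payload is already UTF-16LE, and each run is terminated with a line feed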
preSize = 0
tmpPPDrawingLength = header['length']
PPDrawingReadLength = 0
tmpLength = 0
readLength = 0
textOK = False
tmpHeader = {}
tmpHeader['option'] = 0
tmpHeader['type'] = b''
tmpHeader['length'] = 0
while True:
if tmpPPDrawingLength <= PPDrawingReadLength:
break
if tmpHeader['option'] & 0x000F == 0x000F:
pass # Container
else: # Atom
self.current_offset += tmpHeader['length']
PPDrawingReadLength += tmpHeader['length']
if tmpPPDrawingLength <= PPDrawingReadLength:
break
if self.__get_header_info__(tmpHeader) == self.compound.CONST_ERROR:
break
PPDrawingReadLength += 8
if tmpHeader['type'] == self.RT_EscherClientTextbox:
tmpLength = tmpHeader['length']
readLength = 0
textOK = False
while True:
if tmpLength <= readLength:
break
if self.__get_header_info__(tmpHeader) == self.compound.CONST_ERROR:
break
readLength += 8
if tmpHeader['type'] == self.RT_SlidePersistAtom:
pass
elif tmpHeader['type'] == self.RT_TextHeader:
self.current_offset += tmpHeader['length']
readLength += tmpHeader['length']
elif tmpHeader['type'] == self.RT_TextBytesAtom:
self.text_bytes = b''
self.text_chars = b''
self.text_bytes = self.powerpoint_document[self.current_offset : self.current_offset + tmpHeader['length']]
for i in range(0, len(self.text_bytes)):
self.text_chars += bytes([self.text_bytes[i]])
self.text_chars += b'\x00'
preSize = len(self.text)
self.text += self.text_chars
self.text += b'\x0A\x00'
self.current_offset += tmpHeader['length']
readLength += tmpHeader['length']
textOK = True
elif tmpHeader['type'] == self.RT_TextCharsAtom:
self.text_chars = b''
self.text_chars = self.powerpoint_document[self.current_offset : self.current_offset + tmpHeader['length']]
self.text += self.text_chars
self.text += b'\x0A\x00'
self.current_offset += tmpHeader['length']
readLength += tmpHeader['length']
textOK = True
else:
if((tmpHeader['option'] & 0x000F) == 0x000F): # Container
pass
else: # Atom
self.current_offset += tmpHeader['length']
readLength += tmpHeader['length']
PPDrawingReadLength += readLength
if tmpPPDrawingLength != PPDrawingReadLength:
if self.__get_header_info__(tmpHeader) == self.compound.CONST_ERROR:
break
PPDrawingReadLength += 8
def __get_header_info__(self, header):
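# every record starts with an 8-byte header: recVer/recInstance (2 bytes), recType (2 bytes) and recLen (4 bytes), all little-endian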
if self.current_offset + 8 > len(self.powerpoint_document):
return self.compound.CONST_ERROR
header['option'] = struct.unpack('<H', self.powerpoint_document[self.current_offset: self.current_offset + 2])[0]
header['type'] = self.powerpoint_document[self.current_offset + 2: self.current_offset + 4]
header['length'] = struct.unpack('<I', self.powerpoint_document[self.current_offset + 4: self.current_offset + 8])[0]
self.current_offset += 8
return self.compound.CONST_SUCCESS
def __ppt_extra_filter__(self, tempLen):
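# normalize the collected UTF-16LE text: drop leading blanks/newlines, collapse whitespace runs after a newline or blank, map vertical tab and tab to a space and carriage return to a line feed, and return the adjusted character count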
i = 0
j = 0
k = 0
uBlank = b'\x20\x00' # ASCII Blank
uBlank2 = b'\xA0\x00' # Unicode Blank
uBlank3 = b'\x0B\x00'
uNewline = b'\x0A\x00' # Line Feed
uTab = b'\x09\x00' # Horizontal Tab
uCR = b'\x0D\x00' # Carriage Return
uFilteredTextLen = tempLen
while i < len(self.filteredText):
if i == 0:
k = 0
while self.filteredText[0 : 2] == uBlank or self.filteredText[0 : 2] == uBlank2 or self.filteredText[0 : 2] == uNewline :
self.filteredText = self.filteredText[: k] + self.filteredText[k + 2 :]
uFilteredTextLen -= 1
if len(self.filteredText) <= 0:
break
if len(self.filteredText) <= 0:
break
if self.filteredText[i : i + 2] == uNewline:
j = i
while True :
j += 2
if j >= len(self.filteredText):
break
if self.filteredText[j : j + 2] == uNewline or self.filteredText[j : j + 2] == uBlank or self.filteredText[j : j + 2] == uBlank2:
self.filteredText = self.filteredText[: j] + self.filteredText[j + 2:]
uFilteredTextLen -= 1
j -= 2
elif self.filteredText[j : j + 2] == uTab or self.filteredText[j : j + 2] == uBlank3 or self.filteredText[j : j + 2] == uCR :
self.filteredText = self.filteredText[: j] + self.filteredText[j + 2:]
uFilteredTextLen -= 1
j -= 2
else :
break
elif self.filteredText[i : i + 2] == uBlank or self.filteredText[i : i + 2] == uBlank2:
j = i
while True :
j += 2
if j >= len(self.filteredText):
break
if self.filteredText[j: j + 2] == uBlank or self.filteredText[j : j + 2] == uBlank2:
self.filteredText = self.filteredText[: j] + self.filteredText[j + 2:]
uFilteredTextLen -= 1
j -= 2
elif self.filteredText[j: j + 2] == uTab:
self.filteredText = self.filteredText[: j] + self.filteredText[j + 2:]
uFilteredTextLen -= 1
j -= 2
elif self.filteredText[j: j + 2] == uBlank3:
self.filteredText = self.filteredText[: j] + self.filteredText[j + 2:]
uFilteredTextLen -= 1
j -= 2
else:
break
elif self.filteredText[i : i + 2] == uBlank3:
self.filteredText[i] = 0x20
i -= 2
elif self.filteredText[i : i + 2] == uCR:
self.filteredText[i] = 0x0A
i -= 2
elif self.filteredText[i : i + 2] == uTab:
self.filteredText[i] = 0x20
i -= 2
i += 2
return uFilteredTextLen
def __parse_ppt_normal__(self):
# PPT 97 binary format
self.powerpoint_document = bytearray(self.compound.fp.open('PowerPoint Document').read())
self.current_user = bytearray(self.compound.fp.open('Current User').read())
# Get User Edit Offset
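# bytes 16..19 of the Current User stream are the CurrentUserAtom's offsetToCurrentEdit field, i.e. the offset of the most recent UserEditAtom inside the PowerPoint Document stream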
self.current_offset = struct.unpack('<I', self.current_user[16 : 20])[0]
# Set Chain
# Set User Edit Chain
tmpHeader = {}
tmpHeader['option'] = struct.unpack('<H', self.powerpoint_document[self.current_offset: self.current_offset + 2])[0]
tmpHeader['type'] = self.powerpoint_document[self.current_offset + 2 : self.current_offset + 4]
tmpHeader['length'] = struct.unpack('<I', self.powerpoint_document[self.current_offset + 4: self.current_offset + 8])[0]
editblock = {}
editblock['last_user_edit_atom_offset'] = 0
editblock['persist_ptr_incremental_block_offset'] = 0
if tmpHeader['type'] == self.RT_UserEditAtom:
self.current_offset += 8
self.current_offset += 8
editblock['last_user_edit_atom_offset'] = struct.unpack('<I', self.powerpoint_document[self.current_offset : self.current_offset + 4])[0]
self.current_offset += 4
editblock['persist_ptr_incremental_block_offset'] = struct.unpack('<I', self.powerpoint_document[self.current_offset : self.current_offset + 4])[0]
self.current_offset += 4
self.arr_user_edit_block.append(editblock)
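# follow offsetLastEdit backwards through every earlier UserEditAtom until an offset of 0 marks the first edit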
while editblock['last_user_edit_atom_offset'] != 0:
self.current_offset = editblock['last_user_edit_atom_offset']
tmpHeader.fromkeys(tmpHeader.keys(), 0)
editblock.fromkeys(editblock.keys(), 0)
tmpHeader['option'] = struct.unpack('<H', self.powerpoint_document[self.current_offset: self.current_offset + 2])[0]
tmpHeader['type'] = self.powerpoint_document[self.current_offset + 2: self.current_offset + 4]
tmpHeader['length'] = struct.unpack('<I', self.powerpoint_document[self.current_offset + 4: self.current_offset + 8])[0]
if tmpHeader['type'] == self.RT_UserEditAtom:
self.current_offset += 8
self.current_offset += 8
editblock['last_user_edit_atom_offset'] = struct.unpack('<I', self.powerpoint_document[self.current_offset : self.current_offset + 4])[0]
self.current_offset += 4
editblock['persist_ptr_incremental_block_offset'] = struct.unpack('<I', self.powerpoint_document[self.current_offset : self.current_offset + 4])[0]
self.current_offset += 4
self.arr_user_edit_block.append(editblock)
# SetPersistPtrIncrementalBlockChain
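# each PersistPtrIncrementalBlockAtom is a sequence of runs: a 4-byte header packing a 20-bit starting persist id (low bits) and a 12-bit count (high bits), followed by count 4-byte stream offsets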
tmpSheet = 0
tmpLength = 0
ppl_block = []
for i in range(0, len(self.arr_user_edit_block)):
self.current_offset = self.arr_user_edit_block[i]['persist_ptr_incremental_block_offset']
tmpHeader.fromkeys(tmpHeader.keys(), 0)
ppl_block.clear()
tmpLength = 0
tmpHeader['option'] = struct.unpack('<H', self.powerpoint_document[self.current_offset: self.current_offset + 2])[0]
tmpHeader['type'] = self.powerpoint_document[self.current_offset + 2: self.current_offset + 4]
tmpHeader['length'] = struct.unpack('<I', self.powerpoint_document[self.current_offset + 4: self.current_offset + 8])[0]
if tmpHeader['type'] == self.RT_PersistPtrIncrementalAtom:
self.current_offset += 8
while True:
sheet_offset = {}
sheet_offset['count'] = 0
sheet_offset['startnum'] = 0
sheet_offset['object'] = b''
sheet_offset['slidenum'] = []
sheet_offset['slideid'] = []
sheet_offset.fromkeys(sheet_offset.keys(), 0)
tmpSheet = 0
tmpSheet = struct.unpack('<I', self.powerpoint_document[self.current_offset : self.current_offset + 4])[0]
self.current_offset += 4
sheet_offset['count'] = tmpSheet >> 20
sheet_offset['startnum'] = tmpSheet & 0x000FFFFF
sheet_offset['object'] = self.powerpoint_document[self.current_offset : self.current_offset + sheet_offset['count'] * 4]
self.current_offset += sheet_offset['count'] * 4
ppl_block.append(sheet_offset)
tmpLength += (sheet_offset['count'] + 1) * 4
if tmpHeader['length'] == tmpLength:
break
self.arr_persist_ptr_incremental_block.append(ppl_block)
### Extract Body Text
# store each slide's text in arrSlideText
arrSlideText = []
for i in range(0, len(self.arr_persist_ptr_incremental_block)):
self.current_offset = struct.unpack('<I', self.arr_persist_ptr_incremental_block[i][0]['object'][0:4])[0]
tmpHeader.fromkeys(tmpHeader.keys(), 0)
tmpHeader['option'] = struct.unpack('<H', self.powerpoint_document[self.current_offset: self.current_offset + 2])[0]
tmpHeader['type'] = self.powerpoint_document[self.current_offset + 2: self.current_offset + 4]
tmpHeader['length'] = struct.unpack('<I', self.powerpoint_document[self.current_offset + 4: self.current_offset + 8])[0]
self.current_offset += 8
if tmpHeader['type'] != self.RT_Document:
#print("Not RT_Document.")
return
tmpHeader.fromkeys(tmpHeader.keys(), 0)
tmpHeader['option'] = struct.unpack('<H', self.powerpoint_document[self.current_offset: self.current_offset + 2])[0]
tmpHeader['type'] = self.powerpoint_document[self.current_offset + 2: self.current_offset + 4]
tmpHeader['length'] = struct.unpack('<I', self.powerpoint_document[self.current_offset + 4: self.current_offset + 8])[0]
self.current_offset += 8
if tmpHeader['type'] != self.RT_SlideListWithText:
while True:
self.current_offset += tmpHeader['length']
if self.current_offset > len(self.powerpoint_document):
#print("Error!")
return
tmpHeader.fromkeys(tmpHeader.keys(), 0)
tmpHeader['option'] = struct.unpack('<H', self.powerpoint_document[self.current_offset: self.current_offset + 2])[0]
tmpHeader['type'] = self.powerpoint_document[self.current_offset + 2: self.current_offset + 4]
tmpHeader['length'] = struct.unpack('<I', self.powerpoint_document[self.current_offset + 4: self.current_offset + 8])[0]
self.current_offset += 8
if tmpHeader['type'] == self.RT_SlideListWithText:
break
slide_text = ""
in_text = False
sheet_number = 0
slide_id = 0
current_offset_backup = 0
presize = 0
editblock_text = []
while tmpHeader['type'] != self.RT_EndDocument:
if len(self.powerpoint_document) < self.current_offset + 8:
if len(self.text) > 0 :
editblock_text.append(self.text)
self.text = b''
else :
slide_text = ""
break
tmpHeader.fromkeys(tmpHeader.keys(), 0)
tmpHeader['option'] = struct.unpack('<H', self.powerpoint_document[self.current_offset: self.current_offset + 2])[0]
tmpHeader['type'] = self.powerpoint_document[self.current_offset + 2: self.current_offset + 4]
tmpHeader['length'] = struct.unpack('<I', self.powerpoint_document[self.current_offset + 4: self.current_offset + 8])[0]
self.current_offset += 8
if tmpHeader['type'] == self.RT_SlideListWithText:
pass
elif tmpHeader['type'] == self.RT_SlidePersistAtom:
if in_text == True:
editblock_text.append(self.text)
self.text = b''
in_text = False
if len(self.powerpoint_document) >= self.current_offset + 16 :
sheet_number = struct.unpack('<I', self.powerpoint_document[self.current_offset : self.current_offset + 4])[0]
slide_id = struct.unpack('<I', self.powerpoint_document[self.current_offset + 12: self.current_offset + 16])[0]
current_offset_backup = self.current_offset
self.__extract_text_in_slide__(i, sheet_number, slide_id)
self.current_offset = current_offset_backup
if self.text == None :
pass
elif len(self.text) > 0:
in_text = True
else :
# extract the text of each slide
if len(self.powerpoint_document) >= self.current_offset + 16 :
sheet_number = struct.unpack('<I', self.powerpoint_document[self.current_offset : self.current_offset + 4])[0]
slide_id = struct.unpack('<I', self.powerpoint_document[self.current_offset + 12: self.current_offset + 16])[0]
current_offset_backup = self.current_offset
self.__extract_text_in_slide__(i, sheet_number, slide_id)
self.current_offset = current_offset_backup
if self.text == None :
pass
elif len(self.text) > 0:
in_text = True
self.current_offset += tmpHeader['length']
elif tmpHeader['type'] == self.RT_TextHeader:
self.current_offset += tmpHeader['length']
elif tmpHeader['type'] == self.RT_TextBytesAtom:
self.text_bytes = b''
self.text_chars = b''
self.text_bytes = self.powerpoint_document[self.current_offset: self.current_offset + tmpHeader['length']]
for text_byte in self.text_bytes:
self.text_chars += bytes([text_byte])
self.text_chars += b'\x00'
if self.text == None:
self.text = b''
presize = len(self.text)
self.text += self.text_chars
self.text += b'\x0A\x00'
self.current_offset += tmpHeader['length']
in_text = True
elif tmpHeader['type'] == self.RT_TextCharsAtom:
self.text_chars = b''
self.text_chars = self.powerpoint_document[self.current_offset : self.current_offset + tmpHeader['length']]
if self.text == None:
self.text = b''
self.text += self.text_chars
self.text += b'\x0A\x00'
self.current_offset += tmpHeader['length']
in_text = True
elif tmpHeader['type'] == self.RT_EndDocument:
if in_text == True:
editblock_text.append(self.text)
in_text = False
else :
self.current_offset += tmpHeader['length']
if len(editblock_text) > 0:
self.arr_edit_block_text.append(editblock_text)
j = 0
uFilteredTextLen = 0
for i in range(0, 1):
if len(self.arr_edit_block_text) > 0:
for j in range(0, len(self.arr_edit_block_text[i])):
uTempLen = int(len(self.arr_edit_block_text[i][j]) / 2)
self.filteredText += self.arr_edit_block_text[i][j]
uFilteredTextLen += uTempLen
uFilteredTextLen = self.__ppt_extra_filter__(uFilteredTextLen)
"""
for i in range(0, len(self.filteredText), 2):
try:
self.compound.content += self.filteredText[i:i+2].decode('utf-16')
except UnicodeDecodeError:
continue
"""
for i in range(0, len(self.filteredText), 2):
try:
self.compound.content += self.filteredText[i:i+2].decode('utf-16')
except UnicodeDecodeError:
continue
#self.compound.content = self.filteredText.decode('utf-16')
#### Drawing
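# the Pictures stream is a run of OfficeArt BLIP records: an 8-byte header, one 16-byte rgbUid (a second one when the instance value is 0x46B or 0x6E3), then a 34-byte metafile header or a 1-byte tag depending on the image type, followed by the raw image bytes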
try:
drawing_data = bytearray(self.compound.fp.open('Pictures').read())
except Exception:
# not exist pictures
return False
drawing_offset = 0
img_num = 0
while drawing_offset < len(drawing_data):
embedded_blip_rh_ver_instance = struct.unpack('<H', drawing_data[drawing_offset: drawing_offset + 2])[0]
embedded_blip_rh_Type = struct.unpack('<H', drawing_data[drawing_offset + 2: drawing_offset + 4])[0]
embedded_blip_rh_recLen = struct.unpack('<I', drawing_data[drawing_offset + 4: drawing_offset + 8])[0]
drawing_offset += 0x08
embedded_size = embedded_blip_rh_recLen
embedded_blip_rgbUid1 = drawing_data[drawing_offset : drawing_offset + 0x10]
drawing_offset += 0x10
embedded_size -= 0x10
embedded_blip_rgbUid2 = None
if int(embedded_blip_rh_ver_instance / 0x10) == 0x46B or int(embedded_blip_rh_ver_instance / 0x10) == 0x6E3:
embedded_blip_rgbUid2 = drawing_data[drawing_offset: drawing_offset + 0x10]
drawing_offset += 0x10
embedded_size -= 0x10
if embedded_blip_rh_Type != 0xF01A and embedded_blip_rh_Type != 0xF01B and embedded_blip_rh_Type != 0xF01C and \
embedded_blip_rh_Type != 0xF01D and embedded_blip_rh_Type != 0xF01E and embedded_blip_rh_Type != 0xF01F and \
embedded_blip_rh_Type != 0xF029:
break
extension = ""
if embedded_blip_rh_Type == 0xF01A:
extension = ".emf"
embedded_blip_metafileheader = drawing_data[drawing_offset: drawing_offset + 0x22]
drawing_offset += 0x22
embedded_size -= 0x22
elif embedded_blip_rh_Type == 0xF01B:
extension = ".wmf"
embedded_blip_metafileheader = drawing_data[drawing_offset: drawing_offset + 0x22]
drawing_offset += 0x22
embedded_size -= 0x22
elif embedded_blip_rh_Type == 0xF01C:
extension = ".pict"
embedded_blip_metafileheader = drawing_data[drawing_offset: drawing_offset + 0x22]
drawing_offset += 0x22
embedded_size -= 0x22
elif embedded_blip_rh_Type == 0xF01D:
extension = ".jpg"
embedded_blip_tag = drawing_data[drawing_offset: drawing_offset + 0x01]
drawing_offset += 0x01
embedded_size -= 0x01
elif embedded_blip_rh_Type == 0xF01E:
extension = ".png"
embedded_blip_tag = drawing_data[drawing_offset: drawing_offset + 0x01]
drawing_offset += 0x01
embedded_size -= 0x01
elif embedded_blip_rh_Type == 0xF01F:
extension = ".dib"
embedded_blip_tag = drawing_data[drawing_offset: drawing_offset + 0x01]
drawing_offset += 0x01
embedded_size -= 0x01
elif embedded_blip_rh_Type == 0xF029:
extension = ".tiff"
embedded_blip_tag = drawing_data[drawing_offset: drawing_offset + 0x01]
drawing_offset += 0x01
embedded_size -= 0x01
embedded_data = drawing_data[drawing_offset : drawing_offset + embedded_size]
drawing_offset += embedded_size
if not (os.path.isdir(self.compound.tmp_path + self.compound.fileName + "_extracted")):
os.makedirs(os.path.join(self.compound.tmp_path + self.compound.fileName + "_extracted"))
self.compound.ole_path.append(
self.compound.tmp_path + self.compound.fileName + "_extracted\\" + self.compound.fileName + "_" + str(img_num) + extension)
embedded_fp = open(self.compound.tmp_path + self.compound.fileName + "_extracted\\" + self.compound.fileName + "_" + str(img_num) + extension, 'wb')
img_num += 1
embedded_fp.write(embedded_data)
embedded_fp.close()
##### OLE Object
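# scan the PowerPoint Document stream for recType 0x1011 (ExOleObjStg) records, which hold zlib-compressed OLE storages; the first 6 bytes (apparently the 4-byte uncompressed size plus the 2-byte zlib header) are skipped and the rest is inflated as a raw deflate stream into OLE_Object<n>.bin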
counter = 0
self.current_offset = 0
file_list = []
while(self.current_offset < len(self.powerpoint_document)):
rh_ver_instance = struct.unpack('<H', self.powerpoint_document[self.current_offset: self.current_offset + 2])[0]
rh_Type = struct.unpack('<H', self.powerpoint_document[self.current_offset + 2: self.current_offset + 4])[0]
rh_recLen = struct.unpack('<I', self.powerpoint_document[self.current_offset + 4: self.current_offset + 8])[0]
self.current_offset += 8
if rh_Type == 0x1011:
if not (os.path.isdir(self.compound.tmp_path + self.compound.fileName + "_extracted")):
os.makedirs(os.path.join(self.compound.tmp_path + self.compound.fileName + "_extracted"))
path = self.compound.tmp_path + self.compound.fileName + "_extracted\\OLE_Object" + str(counter) + ".bin"
self.compound.ole_path.append(path)
outfile = open(path, "wb")
counter += 1
bindata: bytearray = bytearray(self.powerpoint_document[self.current_offset + 6 : self.current_offset + rh_recLen - 8])
decompress = zlib.decompressobj(-zlib.MAX_WBITS)
stream = bytearray()
try:
stream = decompress.decompress(bindata)
stream += decompress.flush()
except Exception:
pass
file_list.append(path)
outfile.write(stream)
outfile.close()
self.current_offset += (rh_recLen) # - Header Size
self.__getOLEFile__(file_list)
def __getOLEFile__(self, files):
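# identify each extracted OLE storage by its well-known stream names - 'Package' (OOXML), 'WordDocument' (.doc), 'PowerPoint Document' (.ppt), 'Workbook' (.xls), 'CONTENTS' (.pdf), 'Ole10Native' (embedded AVI/WAV/MP4 media) and 'BodyText' (.hwp) - and copy it out under the matching extension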
for file in files:
try:
ole = CompoundFileReader(file)
ole_filename = file[file.rfind('\\') + 1:]
for entry in ole.root:
if entry.name == 'Package': # ooxml
bindata: bytearray = bytearray(ole.open('Package').read())
f = open(ole_filename + '.zip', mode='wb')
f.write(bindata)
f.close()
with zipfile.ZipFile(ole_filename + '.zip') as z:
for filename in z.namelist():
if filename == 'word/document.xml':
savefilename = ole_filename + '.docx'
break
elif filename == 'ppt/presentation.xml':
savefilename = ole_filename + '.pptx'
break
elif filename == 'xl/workbook.xml':
savefilename = ole_filename + '.xlsx'
break
if not (os.path.isdir(self.compound.tmp_path + self.compound.fileName + "_extracted")):
os.makedirs(os.path.join(self.compound.tmp_path + self.compound.fileName + "_extracted"))
self.compound.ole_path.append(
self.compound.tmp_path + self.compound.fileName + "_extracted\\" + savefilename)
f = open(self.compound.tmp_path + self.compound.fileName + "_extracted\\" + savefilename, mode='wb')
f.write(bindata)
f.close()
os.remove(ole_filename + '.zip')
elif entry.name == 'WordDocument':
if not (os.path.isdir(self.compound.tmp_path + self.compound.fileName + "_extracted")):
os.makedirs(os.path.join(self.compound.tmp_path + self.compound.fileName + "_extracted"))
self.compound.ole_path.append(file + '.doc')
shutil.copy(file, file + '.doc')
elif entry.name == 'PowerPoint Document':
if not (os.path.isdir(self.compound.tmp_path + self.compound.fileName + "_extracted")):
os.makedirs(os.path.join(self.compound.tmp_path + self.compound.fileName + "_extracted"))
self.compound.ole_path.append(file + '.ppt')
shutil.copy(file, file + '.ppt')
elif entry.name == 'Workbook':
if not (os.path.isdir(self.compound.tmp_path + self.compound.fileName + "_extracted")):
os.makedirs(os.path.join(self.compound.tmp_path + self.compound.fileName + "_extracted"))
self.compound.ole_path.append(file + '.xls')
shutil.copy(file, file + '.xls')
elif entry.name == 'CONTENTS':
bindata: bytearray = bytearray(ole.open('CONTENTS').read())
if not (os.path.isdir(self.compound.tmp_path + self.compound.fileName + "_extracted")):
os.makedirs(os.path.join(self.compound.tmp_path + self.compound.fileName + "_extracted"))
self.compound.ole_path.append(
self.compound.tmp_path + self.compound.fileName + "_extracted\\" + ole_filename + '.pdf')
f = open(self.compound.tmp_path + self.compound.fileName + "_extracted\\" + ole_filename + '.pdf', mode='wb')
f.write(bindata)
f.close()
elif entry.name == 'Ole10Native':
if not (os.path.isdir(self.compound.tmp_path + self.compound.fileName + "_extracted")):
os.makedirs(os.path.join(self.compound.tmp_path + self.compound.fileName + "_extracted"))
bindata: bytearray = bytearray(ole.open('Ole10Native').read())
name_len = 6
for i in bindata[name_len:]:
if i == 0:
break
name_len += 1
cnt = 0
while cnt < 1000:
if bindata[cnt:cnt + 4] == b'RIFF' and bindata[cnt + 8: cnt + 16] == b'AVI\x20LIST':
self.compound.ole_path.append(self.compound.tmp_path + self.compound.fileName + "_extracted\\" + ole_filename + '.avi')
f = open(self.compound.tmp_path + self.compound.fileName + "_extracted\\" + ole_filename + '.avi', mode='wb')
f.write(bindata[cnt:])
f.close()
break
if bindata[cnt:cnt + 4] == b'RIFF' and bindata[cnt + 8 : cnt + 16] == b'WAVEfmt\x20':
self.compound.ole_path.append(self.compound.tmp_path + self.compound.fileName + "_extracted\\" + ole_filename + '.wav')
f = open(self.compound.tmp_path + self.compound.fileName + "_extracted\\" + ole_filename + '.wav', mode='wb')
f.write(bindata[cnt:])
f.close()
break
if bindata[cnt:cnt + 2] == b'\x00\x00' and bindata[cnt + 4: cnt + 8] == b'ftyp':
self.compound.ole_path.append(self.compound.tmp_path + self.compound.fileName + "_extracted\\" + ole_filename + '.mp4')
f = open(self.compound.tmp_path + self.compound.fileName + "_extracted\\" + ole_filename + '.mp4', mode='wb')
f.write(bindata[cnt:])
f.close()
break
cnt += 1
elif entry.name == 'BodyText':
if not (os.path.isdir(self.compound.tmp_path + self.compound.fileName + "_extracted")):
os.makedirs(os.path.join(self.compound.tmp_path + self.compound.fileName + "_extracted"))
self.compound.ole_path.append(file + '.hwp')
shutil.copy(file, file + '.hwp')
else:
continue
except:
return False
def __parse_ppt_normal_for_ole__(self, powerpoint_document, current_user):
# PPT 97 binary format
self.powerpoint_document = powerpoint_document
self.current_user = current_user
# Get User Edit Offset
self.current_offset = struct.unpack('<I', self.current_user[16 : 20])[0]
# Set Chain
# Set User Edit Chain
tmpHeader = {}
tmpHeader['option'] = struct.unpack('<H', self.powerpoint_document[self.current_offset: self.current_offset + 2])[0]
tmpHeader['type'] = self.powerpoint_document[self.current_offset + 2 : self.current_offset + 4]
tmpHeader['length'] = struct.unpack('<I', self.powerpoint_document[self.current_offset + 4: self.current_offset + 8])[0]
editblock = {}
editblock['last_user_edit_atom_offset'] = 0
editblock['persist_ptr_incremental_block_offset'] = 0
if tmpHeader['type'] == self.RT_UserEditAtom:
self.current_offset += 8
self.current_offset += 8
editblock['last_user_edit_atom_offset'] = struct.unpack('<I', self.powerpoint_document[self.current_offset : self.current_offset + 4])[0]
self.current_offset += 4
editblock['persist_ptr_incremental_block_offset'] = struct.unpack('<I', self.powerpoint_document[self.current_offset : self.current_offset + 4])[0]
self.current_offset += 4
self.arr_user_edit_block.append(editblock)
while editblock['last_user_edit_atom_offset'] != 0:
self.current_offset = editblock['last_user_edit_atom_offset']
tmpHeader.fromkeys(tmpHeader.keys(), 0)
editblock.fromkeys(editblock.keys(), 0)
tmpHeader['option'] = struct.unpack('<H', self.powerpoint_document[self.current_offset: self.current_offset + 2])[0]
tmpHeader['type'] = self.powerpoint_document[self.current_offset + 2: self.current_offset + 4]
tmpHeader['length'] = struct.unpack('<I', self.powerpoint_document[self.current_offset + 4: self.current_offset + 8])[0]
if tmpHeader['type'] == self.RT_UserEditAtom:
self.current_offset += 8
self.current_offset += 8
editblock['last_user_edit_atom_offset'] = struct.unpack('<I', self.powerpoint_document[self.current_offset : self.current_offset + 4])[0]
self.current_offset += 4
editblock['persist_ptr_incremental_block_offset'] = struct.unpack('<I', self.powerpoint_document[self.current_offset : self.current_offset + 4])[0]
self.current_offset += 4
self.arr_user_edit_block.append(editblock)
# SetPersistPtrIncrementalBlockChain
tmpSheet = 0
tmpLength = 0
ppl_block = []
for i in range(0, len(self.arr_user_edit_block)):
self.current_offset = self.arr_user_edit_block[i]['persist_ptr_incremental_block_offset']
tmpHeader.fromkeys(tmpHeader.keys(), 0)
ppl_block.clear()
tmpLength = 0
tmpHeader['option'] = struct.unpack('<H', self.powerpoint_document[self.current_offset: self.current_offset + 2])[0]
tmpHeader['type'] = self.powerpoint_document[self.current_offset + 2: self.current_offset + 4]
tmpHeader['length'] = struct.unpack('<I', self.powerpoint_document[self.current_offset + 4: self.current_offset + 8])[0]
if tmpHeader['type'] == self.RT_PersistPtrIncrementalAtom:
self.current_offset += 8
while True:
sheet_offset = {}
sheet_offset['count'] = 0
sheet_offset['startnum'] = 0
sheet_offset['object'] = b''
sheet_offset['slidenum'] = []
sheet_offset['slideid'] = []
sheet_offset.fromkeys(sheet_offset.keys(), 0)
tmpSheet = 0
tmpSheet = struct.unpack('<I', self.powerpoint_document[self.current_offset : self.current_offset + 4])[0]
self.current_offset += 4
sheet_offset['count'] = tmpSheet >> 20
sheet_offset['startnum'] = tmpSheet & 0x000FFFFF
sheet_offset['object'] = self.powerpoint_document[self.current_offset : self.current_offset + sheet_offset['count'] * 4]
self.current_offset += sheet_offset['count'] * 4
ppl_block.append(sheet_offset)
tmpLength += (sheet_offset['count'] + 1) * 4
if tmpHeader['length'] == tmpLength:
break
self.arr_persist_ptr_incremental_block.append(ppl_block)
### Extract Body Text
# store each slide's text in arrSlideText
arrSlideText = []
for i in range(0, len(self.arr_persist_ptr_incremental_block)):
self.current_offset = struct.unpack('<I', self.arr_persist_ptr_incremental_block[i][0]['object'][0:4])[0]
tmpHeader.fromkeys(tmpHeader.keys(), 0)
tmpHeader['option'] = struct.unpack('<H', self.powerpoint_document[self.current_offset: self.current_offset + 2])[0]
tmpHeader['type'] = self.powerpoint_document[self.current_offset + 2: self.current_offset + 4]
tmpHeader['length'] = struct.unpack('<I', self.powerpoint_document[self.current_offset + 4: self.current_offset + 8])[0]
self.current_offset += 8
if tmpHeader['type'] != self.RT_Document:
#print("Not RT_Document.")
return
tmpHeader.fromkeys(tmpHeader.keys(), 0)
tmpHeader['option'] = struct.unpack('<H', self.powerpoint_document[self.current_offset: self.current_offset + 2])[0]
tmpHeader['type'] = self.powerpoint_document[self.current_offset + 2: self.current_offset + 4]
tmpHeader['length'] = struct.unpack('<I', self.powerpoint_document[self.current_offset + 4: self.current_offset + 8])[0]
self.current_offset += 8
if tmpHeader['type'] != self.RT_SlideListWithText:
while True:
self.current_offset += tmpHeader['length']
if self.current_offset > len(self.powerpoint_document):
#print("Error!")
return
tmpHeader.fromkeys(tmpHeader.keys(), 0)
tmpHeader['option'] = struct.unpack('<H', self.powerpoint_document[self.current_offset: self.current_offset + 2])[0]
tmpHeader['type'] = self.powerpoint_document[self.current_offset + 2: self.current_offset + 4]
tmpHeader['length'] = struct.unpack('<I', self.powerpoint_document[self.current_offset + 4: self.current_offset + 8])[0]
self.current_offset += 8
if tmpHeader['type'] == self.RT_SlideListWithText:
break
slide_text = ""
in_text = False
sheet_number = 0
slide_id = 0
current_offset_backup = 0
presize = 0
editblock_text = []
while tmpHeader['type'] != self.RT_EndDocument:
if len(self.powerpoint_document) < self.current_offset + 8:
if len(self.text) > 0 :
editblock_text.append(self.text)
self.text = b''
else :
slide_text = ""
break
tmpHeader.fromkeys(tmpHeader.keys(), 0)
tmpHeader['option'] = struct.unpack('<H', self.powerpoint_document[self.current_offset: self.current_offset + 2])[0]
tmpHeader['type'] = self.powerpoint_document[self.current_offset + 2: self.current_offset + 4]
tmpHeader['length'] = struct.unpack('<I', self.powerpoint_document[self.current_offset + 4: self.current_offset + 8])[0]
self.current_offset += 8
if tmpHeader['type'] == self.RT_SlideListWithText:
pass
elif tmpHeader['type'] == self.RT_SlidePersistAtom:
if in_text == True:
editblock_text.append(self.text)
self.text = b''
in_text = False
if len(self.powerpoint_document) >= self.current_offset + 16 :
sheet_number = struct.unpack('<I', self.powerpoint_document[self.current_offset : self.current_offset + 4])[0]
slide_id = struct.unpack('<I', self.powerpoint_document[self.current_offset + 12: self.current_offset + 16])[0]
current_offset_backup = self.current_offset
self.__extract_text_in_slide__(i, sheet_number, slide_id)
self.current_offset = current_offset_backup
if self.text == None :
pass
elif len(self.text) > 0:
in_text = True
else :
# extract the text of each slide
if len(self.powerpoint_document) >= self.current_offset + 16 :
sheet_number = struct.unpack('<I', self.powerpoint_document[self.current_offset : self.current_offset + 4])[0]
slide_id = struct.unpack('<I', self.powerpoint_document[self.current_offset + 12: self.current_offset + 16])[0]
current_offset_backup = self.current_offset
self.__extract_text_in_slide__(i, sheet_number, slide_id)
self.current_offset = current_offset_backup
if self.text == None :
pass
elif len(self.text) > 0:
in_text = True
self.current_offset += tmpHeader['length']
elif tmpHeader['type'] == self.RT_TextHeader:
self.current_offset += tmpHeader['length']
elif tmpHeader['type'] == self.RT_TextBytesAtom:
self.text_bytes = b''
self.text_chars = b''
self.text_bytes = self.powerpoint_document[self.current_offset: self.current_offset + tmpHeader['length']]
for text_byte in self.text_bytes:
self.text_chars += bytes([text_byte])
self.text_chars += b'\x00'
if self.text == None:
self.text = b''
presize = len(self.text)
self.text += self.text_chars
self.text += b'\x0A\x00'
self.current_offset += tmpHeader['length']
in_text = True
elif tmpHeader['type'] == self.RT_TextCharsAtom:
self.text_chars = b''
self.text_chars = self.powerpoint_document[self.current_offset : self.current_offset + tmpHeader['length']]
if self.text == None:
self.text = b''
self.text += self.text_chars
self.text += b'\x0A\x00'
self.current_offset += tmpHeader['length']
in_text = True
elif tmpHeader['type'] == self.RT_EndDocument:
if in_text == True:
editblock_text.append(self.text)
in_text = False
else :
self.current_offset += tmpHeader['length']
if len(editblock_text) > 0:
self.arr_edit_block_text.append(editblock_text)
j = 0
uFilteredTextLen = 0
for i in range(0, 1):
if len(self.arr_edit_block_text) > 0:
for j in range(0, len(self.arr_edit_block_text[i])):
uTempLen = int(len(self.arr_edit_block_text[i][j]) / 2)
self.filteredText += self.arr_edit_block_text[i][j]
uFilteredTextLen += uTempLen
uFilteredTextLen = self.__ppt_extra_filter__(uFilteredTextLen)
"""
for i in range(0, len(self.filteredText), 2):
try:
self.compound.content += self.filteredText[i:i+2].decode('utf-16')
except UnicodeDecodeError:
continue
"""
result = ""
for i in range(0, len(self.filteredText), 2):
try:
result += self.filteredText[i:i+2].decode('utf-16')
except UnicodeDecodeError:
continue
#self.compound.content = self.filteredText.decode('utf-16')
return result
def __parse_ppt_damaged__(self):
file = bytearray(self.compound.fp.read())
m_root = b''
m_pictures = b''
m_currentuser = b''
m_powerpointdocumentation = b''
isRootDir = False
isPictures = False
isCurrentUser = False
isPowerPointDocumentation = False
CONST_DIR_ENTRY_NAME_ROOT = b'\x52\x00\x6F\x00\x6F\x00\x74\x00\x20\x00\x45\x00\x6E\x00\x74\x00\x72\x00\x79\x00\x00\x00'
CONST_DIR_ENTRY_NAME_POWER_POINT_DOCUMENT = b'\x50\x00\x6F\x00\x77\x00\x65\x00\x72\x00\x50\x00\x6F\x00\x69\x00\x6E\x00\x74\x00\x20\x00\x44\x00\x6F\x00\x63\x00\x75\x00\x6D\x00\x65\x00\x6E\x00\x74\x00\x00\x00'
CONST_DIR_ENTRY_NAME_PICTURES = b'\x50\x00\x69\x00\x63\x00\x74\x00\x75\x00\x72\x00\x65\x00\x73\x00\x00\x00'
CONST_CURRENT_USER = b'\x00\x00\xF6\x0F'
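# damaged-file path: with the compound file's FAT unusable, carve the raw bytes for 0x80-byte directory entries whose UTF-16LE names match Root Entry / PowerPoint Document / Pictures, and scan 0x40-byte boundaries for a CurrentUserAtom record header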
self.current_offset = 0
while (self.current_offset < len(file)):
if (file[self.current_offset : self.current_offset + len(CONST_DIR_ENTRY_NAME_ROOT)] == CONST_DIR_ENTRY_NAME_ROOT):
m_root = file[self.current_offset: self.current_offset + 0x80]
isRootDir = True
if (file[self.current_offset : self.current_offset + len(CONST_DIR_ENTRY_NAME_POWER_POINT_DOCUMENT)] == CONST_DIR_ENTRY_NAME_POWER_POINT_DOCUMENT):
m_powerpointdocumentation = file[self.current_offset: self.current_offset + 0x80]
isPowerPointDocumentation = True
if (file[self.current_offset : self.current_offset + len(CONST_DIR_ENTRY_NAME_PICTURES)] == CONST_DIR_ENTRY_NAME_PICTURES):
m_pictures = file[self.current_offset: self.current_offset + 0x80]
isPictures = True
self.current_offset += 0x80
self.current_offset = 0
while (self.current_offset < len(file)):
if (file[self.current_offset : self.current_offset + 4] == CONST_CURRENT_USER):
isCurrentUser = True
break
self.current_offset += 0x40
if isCurrentUser == False or isPowerPointDocumentation == False:
return self.compound.CONST_ERROR
self.current_user = file[self.current_offset : self.current_offset + 64]
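# in a directory entry, bytes 0x74..0x77 hold the starting sector and 0x78..0x7B the stream size; the stream begins at (sector + 1) * 0x200 because the 512-byte compound-file header occupies the first sector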
powerpoint_document_start = (struct.unpack('<I', m_powerpointdocumentation[0x74 : 0x78])[0] + 1) * 0x200
powerpoint_document_size = struct.unpack('<I', m_powerpointdocumentation[0x78 : 0x7C])[0]
self.powerpoint_document = file[powerpoint_document_start : powerpoint_document_start + powerpoint_document_size]
pictures_start = (struct.unpack('<I', m_pictures[0x74 : 0x78])[0] + 1) * 0x200
pictures_size = struct.unpack('<I', m_pictures[0x78 : 0x7C])[0]
drawing_data = file[pictures_start: pictures_start + pictures_size]
# Get User Edit Offset
self.current_offset = struct.unpack('<I', self.current_user[16 : 20])[0]
# Set Chain
# Set User Edit Chain
tmpHeader = {}
tmpHeader['option'] = struct.unpack('<H', self.powerpoint_document[self.current_offset: self.current_offset + 2])[0]
tmpHeader['type'] = self.powerpoint_document[self.current_offset + 2 : self.current_offset + 4]
tmpHeader['length'] = struct.unpack('<I', self.powerpoint_document[self.current_offset + 4: self.current_offset + 8])[0]
editblock = {}
editblock['last_user_edit_atom_offset'] = 0
editblock['persist_ptr_incremental_block_offset'] = 0
if tmpHeader['type'] == self.RT_UserEditAtom:
self.current_offset += 8
self.current_offset += 8
editblock['last_user_edit_atom_offset'] = struct.unpack('<I', self.powerpoint_document[self.current_offset : self.current_offset + 4])[0]
self.current_offset += 4
editblock['persist_ptr_incremental_block_offset'] = struct.unpack('<I', self.powerpoint_document[self.current_offset : self.current_offset + 4])[0]
self.current_offset += 4
self.arr_user_edit_block.append(editblock)
while editblock['last_user_edit_atom_offset'] != 0:
self.current_offset = editblock['last_user_edit_atom_offset']
tmpHeader.fromkeys(tmpHeader.keys(), 0)
editblock.fromkeys(editblock.keys(), 0)
tmpHeader['option'] = struct.unpack('<H', self.powerpoint_document[self.current_offset: self.current_offset + 2])[0]
tmpHeader['type'] = self.powerpoint_document[self.current_offset + 2: self.current_offset + 4]
tmpHeader['length'] = struct.unpack('<I', self.powerpoint_document[self.current_offset + 4: self.current_offset + 8])[0]
if tmpHeader['type'] == self.RT_UserEditAtom:
self.current_offset += 8
self.current_offset += 8
editblock['last_user_edit_atom_offset'] = struct.unpack('<I', self.powerpoint_document[self.current_offset : self.current_offset + 4])[0]
self.current_offset += 4
editblock['persist_ptr_incremental_block_offset'] = struct.unpack('<I', self.powerpoint_document[self.current_offset : self.current_offset + 4])[0]
self.current_offset += 4
self.arr_user_edit_block.append(editblock)
# SetPersistPtrIncrementalBlockChain
tmpSheet = 0
tmpLength = 0
ppl_block = []
for i in range(0, len(self.arr_user_edit_block)):
self.current_offset = self.arr_user_edit_block[i]['persist_ptr_incremental_block_offset']
tmpHeader.fromkeys(tmpHeader.keys(), 0)
ppl_block.clear()
tmpLength = 0
tmpHeader['option'] = \
struct.unpack('<H', self.powerpoint_document[self.current_offset: self.current_offset + 2])[0]
tmpHeader['type'] = self.powerpoint_document[self.current_offset + 2: self.current_offset + 4]
tmpHeader['length'] = \
struct.unpack('<I', self.powerpoint_document[self.current_offset + 4: self.current_offset + 8])[0]
if tmpHeader['type'] == self.RT_PersistPtrIncrementalAtom:
self.current_offset += 8
while True:
sheet_offset = {}
sheet_offset['count'] = 0
sheet_offset['startnum'] = 0
sheet_offset['object'] = b''
sheet_offset['slidenum'] = []
sheet_offset['slideid'] = []
sheet_offset.fromkeys(sheet_offset.keys(), 0)
tmpSheet = 0
tmpSheet = \
struct.unpack('<I', self.powerpoint_document[self.current_offset: self.current_offset + 4])[0]
self.current_offset += 4
sheet_offset['count'] = tmpSheet >> 20
sheet_offset['startnum'] = tmpSheet & 0x000FFFFF
sheet_offset['object'] = self.powerpoint_document[
self.current_offset: self.current_offset + sheet_offset['count'] * 4]
self.current_offset += sheet_offset['count'] * 4
ppl_block.append(sheet_offset)
tmpLength += (sheet_offset['count'] + 1) * 4
if tmpHeader['length'] == tmpLength:
break
self.arr_persist_ptr_incremental_block.append(ppl_block)
### Extract Body Text
# store each slide's text in arrSlideText
arrSlideText = []
for i in range(0, len(self.arr_persist_ptr_incremental_block)):
self.current_offset = struct.unpack('<I', self.arr_persist_ptr_incremental_block[i][0]['object'][0:4])[0]
tmpHeader.fromkeys(tmpHeader.keys(), 0)
tmpHeader['option'] = struct.unpack('<H', self.powerpoint_document[self.current_offset: self.current_offset + 2])[0]
tmpHeader['type'] = self.powerpoint_document[self.current_offset + 2: self.current_offset + 4]
tmpHeader['length'] = struct.unpack('<I', self.powerpoint_document[self.current_offset + 4: self.current_offset + 8])[0]
self.current_offset += 8
if tmpHeader['type'] != self.RT_Document:
#print("Not RT_Document.")
return
tmpHeader.fromkeys(tmpHeader.keys(), 0)
tmpHeader['option'] = struct.unpack('<H', self.powerpoint_document[self.current_offset: self.current_offset + 2])[0]
tmpHeader['type'] = self.powerpoint_document[self.current_offset + 2: self.current_offset + 4]
tmpHeader['length'] = struct.unpack('<I', self.powerpoint_document[self.current_offset + 4: self.current_offset + 8])[0]
self.current_offset += 8
if tmpHeader['type'] != self.RT_SlideListWithText:
while True:
self.current_offset += tmpHeader['length']
if self.current_offset > len(self.powerpoint_document):
#print("Error!")
return
tmpHeader.fromkeys(tmpHeader.keys(), 0)
tmpHeader['option'] = struct.unpack('<H', self.powerpoint_document[self.current_offset: self.current_offset + 2])[0]
tmpHeader['type'] = self.powerpoint_document[self.current_offset + 2: self.current_offset + 4]
tmpHeader['length'] = struct.unpack('<I', self.powerpoint_document[self.current_offset + 4: self.current_offset + 8])[0]
self.current_offset += 8
if tmpHeader['type'] == self.RT_SlideListWithText:
break
slide_text = ""
in_text = False
sheet_number = 0
slide_id = 0
current_offset_backup = 0
presize = 0
editblock_text = []
while tmpHeader['type'] != self.RT_EndDocument:
if len(self.powerpoint_document) < self.current_offset + 8:
if len(self.text) > 0 :
editblock_text.append(self.text)
self.text = b''
else :
slide_text = ""
break
tmpHeader.fromkeys(tmpHeader.keys(), 0)
tmpHeader['option'] = struct.unpack('<H', self.powerpoint_document[self.current_offset: self.current_offset + 2])[0]
tmpHeader['type'] = self.powerpoint_document[self.current_offset + 2: self.current_offset + 4]
tmpHeader['length'] = struct.unpack('<I', self.powerpoint_document[self.current_offset + 4: self.current_offset + 8])[0]
self.current_offset += 8
if tmpHeader['type'] == self.RT_SlideListWithText:
pass
elif tmpHeader['type'] == self.RT_SlidePersistAtom:
if in_text == True:
editblock_text.append(self.text)
self.text = b''
in_text = False
if len(self.powerpoint_document) >= self.current_offset + 16 :
sheet_number = struct.unpack('<I', self.powerpoint_document[self.current_offset : self.current_offset + 4])[0]
slide_id = struct.unpack('<I', self.powerpoint_document[self.current_offset + 12: self.current_offset + 16])[0]
current_offset_backup = self.current_offset
self.__extract_text_in_slide__(i, sheet_number, slide_id)
self.current_offset = current_offset_backup
if self.text == None :
pass
elif len(self.text) > 0:
in_text = True
else :
# extract the text of each slide
if len(self.powerpoint_document) >= self.current_offset + 16 :
sheet_number = struct.unpack('<I', self.powerpoint_document[self.current_offset : self.current_offset + 4])[0]
slide_id = struct.unpack('<I', self.powerpoint_document[self.current_offset + 12: self.current_offset + 16])[0]
current_offset_backup = self.current_offset
self.__extract_text_in_slide__(i, sheet_number, slide_id)
self.current_offset = current_offset_backup
if self.text == None :
pass
elif len(self.text) > 0:
in_text = True
self.current_offset += tmpHeader['length']
elif tmpHeader['type'] == self.RT_TextHeader:
self.current_offset += tmpHeader['length']
elif tmpHeader['type'] == self.RT_TextBytesAtom:
self.text_bytes = b''
self.text_chars = b''
self.text_bytes = self.powerpoint_document[self.current_offset: self.current_offset + tmpHeader['length']]
for text_byte in self.text_bytes:
self.text_chars += bytes([text_byte])
self.text_chars += b'\x00'
if self.text == None:
self.text = b''
presize = len(self.text)
self.text += self.text_chars
self.text += b'\x0A\x00'
self.current_offset += tmpHeader['length']
in_text = True
elif tmpHeader['type'] == self.RT_TextCharsAtom:
self.text_chars = b''
self.text_chars = self.powerpoint_document[self.current_offset : self.current_offset + tmpHeader['length']]
if self.text == None:
self.text = b''
self.text += self.text_chars
self.text += b'\x0A\x00'
self.current_offset += tmpHeader['length']
in_text = True
elif tmpHeader['type'] == self.RT_EndDocument:
if in_text == True:
editblock_text.append(self.text)
in_text = False
else :
self.current_offset += tmpHeader['length']
if len(editblock_text) > 0:
self.arr_edit_block_text.append(editblock_text)
j = 0
uFilteredTextLen = 0
for i in range(0, 1):
if len(self.arr_edit_block_text) > 0:
for j in range(0, len(self.arr_edit_block_text[i])):
uTempLen = int(len(self.arr_edit_block_text[i][j]) / 2)
self.filteredText += self.arr_edit_block_text[i][j]
uFilteredTextLen += uTempLen
uFilteredTextLen = self.__ppt_extra_filter__(uFilteredTextLen)
for i in range(0, len(self.filteredText), 2):
try:
self.compound.content += self.filteredText[i:i+2].decode('utf-16')
except UnicodeDecodeError:
continue
#self.compound.content = self.filteredText.decode('utf-16')
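# --- Illustrative sketch (not part of the original parser) -------------------
# The loops above repeatedly decode the same 8-byte PowerPoint record header
# (2-byte version/instance "option", 2-byte record type, 4-byte little-endian
# record length). A small helper like the one below could replace the repeated
# struct.unpack calls; the helper name and the buf/offset parameters are
# assumptions for illustration only.
import struct

def read_record_header(buf, offset):
    """Return (option, rec_type_bytes, length, next_offset) for one record header."""
    option = struct.unpack('<H', buf[offset:offset + 2])[0]
    rec_type = buf[offset + 2:offset + 4]
    length = struct.unpack('<I', buf[offset + 4:offset + 8])[0]
    return option, rec_type, length, offset + 8
# ------------------------------------------------------------------------------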
#### Drawing
drawing_offset = 0
img_num = 0
while drawing_offset < len(drawing_data):
embedded_blip_rh_ver_instance = \
struct.unpack('<H', drawing_data[drawing_offset: drawing_offset + 2])[0]
embedded_blip_rh_Type = struct.unpack('<H', drawing_data[drawing_offset + 2: drawing_offset + 4])[0]
embedded_blip_rh_recLen = struct.unpack('<I', drawing_data[drawing_offset + 4: drawing_offset + 8])[
0]
drawing_offset += 0x08
embedded_size = embedded_blip_rh_recLen
embedded_blip_rgbUid1 = drawing_data[drawing_offset: drawing_offset + 0x10]
drawing_offset += 0x10
embedded_size -= 0x10
embedded_blip_rgbUid2 = None
if int(embedded_blip_rh_ver_instance / 0x10) == 0x46B or int(
embedded_blip_rh_ver_instance / 0x10) == 0x6E3:
embedded_blip_rgbUid2 = drawing_data[drawing_offset: drawing_offset + 0x10]
drawing_offset += 0x10
embedded_size -= 0x10
if embedded_blip_rh_Type != 0xF01A and embedded_blip_rh_Type != 0xF01B and embedded_blip_rh_Type != 0xF01C and \
embedded_blip_rh_Type != 0xF01D and embedded_blip_rh_Type != 0xF01E and embedded_blip_rh_Type != 0xF01F and \
embedded_blip_rh_Type != 0xF029:
break
extension = ""
if embedded_blip_rh_Type == 0xF01A:
extension = ".emf"
embedded_blip_metafileheader = drawing_data[drawing_offset: drawing_offset + 0x22]
drawing_offset += 0x22
embedded_size -= 0x22
elif embedded_blip_rh_Type == 0xF01B:
extension = ".wmf"
embedded_blip_metafileheader = drawing_data[drawing_offset: drawing_offset + 0x22]
drawing_offset += 0x22
embedded_size -= 0x22
elif embedded_blip_rh_Type == 0xF01C:
extension = ".pict"
embedded_blip_metafileheader = drawing_data[drawing_offset: drawing_offset + 0x22]
drawing_offset += 0x22
embedded_size -= 0x22
elif embedded_blip_rh_Type == 0xF01D:
extension = ".jpg"
embedded_blip_tag = drawing_data[drawing_offset: drawing_offset + 0x01]
drawing_offset += 0x01
embedded_size -= 0x01
elif embedded_blip_rh_Type == 0xF01E:
extension = ".png"
embedded_blip_tag = drawing_data[drawing_offset: drawing_offset + 0x01]
drawing_offset += 0x01
embedded_size -= 0x01
elif embedded_blip_rh_Type == 0xF01F:
extension = ".dib"
embedded_blip_tag = drawing_data[drawing_offset: drawing_offset + 0x01]
drawing_offset += 0x01
embedded_size -= 0x01
elif embedded_blip_rh_Type == 0xF029:
extension = ".tiff"
embedded_blip_tag = drawing_data[drawing_offset: drawing_offset + 0x01]
drawing_offset += 0x01
embedded_size -= 0x01
embedded_data = drawing_data[drawing_offset: drawing_offset + embedded_size]
drawing_offset += embedded_size
if not (os.path.isdir(self.compound.tmp_path + self.compound.fileName + "_extracted")):
os.makedirs(os.path.join(self.compound.tmp_path + self.compound.fileName + "_extracted"))
self.compound.ole_path.append(
self.compound.tmp_path + self.compound.fileName + "_extracted\\" + self.compound.fileName + "_" + str(
img_num) + extension)
embedded_fp = open(self.compound.tmp_path + self.compound.fileName + "_extracted\\" + self.compound.fileName + "_" + str(
img_num) + extension, 'wb')
img_num += 1
embedded_fp.write(embedded_data)
embedded_fp.close()
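# --- Illustrative sketch (not part of the original extractor) ----------------
# The if/elif chain above maps OfficeArt BLIP record types to a file extension
# and to the size of the per-type header preceding the image bytes (a
# 0x22-byte metafile header for EMF/WMF/PICT, a 1-byte tag for JPEG/PNG/DIB/
# TIFF). A table-driven equivalent could look like this; the names are
# hypothetical.
BLIP_TYPES = {
    0xF01A: ('.emf', 0x22),
    0xF01B: ('.wmf', 0x22),
    0xF01C: ('.pict', 0x22),
    0xF01D: ('.jpg', 0x01),
    0xF01E: ('.png', 0x01),
    0xF01F: ('.dib', 0x01),
    0xF029: ('.tiff', 0x01),
}

def blip_extension_and_header_size(rec_type):
    """Return (extension, extra_header_size), or None for unknown BLIP types."""
    return BLIP_TYPES.get(rec_type)
# ------------------------------------------------------------------------------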
##### OLE Object
counter = 0
self.current_offset = 0
file_list = []
while (self.current_offset < len(self.powerpoint_document)):
rh_ver_instance = \
struct.unpack('<H', self.powerpoint_document[self.current_offset: self.current_offset + 2])[0]
rh_Type = \
struct.unpack('<H', self.powerpoint_document[self.current_offset + 2: self.current_offset + 4])[0]
rh_recLen = \
struct.unpack('<I', self.powerpoint_document[self.current_offset + 4: self.current_offset + 8])[0]
self.current_offset += 8
if rh_Type == 0x1011:
if not (os.path.isdir(self.compound.tmp_path + self.compound.fileName + "_extracted")):
os.makedirs(os.path.join(self.compound.tmp_path + self.compound.fileName + "_extracted"))
path = self.compound.tmp_path + self.compound.fileName + "_extracted\\OLE_Object" + str(counter) + ".bin"
self.compound.ole_path.append(path)
outfile = open(path, "wb")
counter += 1
bindata: bytearray = bytearray(
self.powerpoint_document[self.current_offset + 6: self.current_offset + rh_recLen - 8])
decompress = zlib.decompressobj(-zlib.MAX_WBITS)
stream = bytearray()
try:
stream = decompress.decompress(bindata)
stream += decompress.flush()
except Exception:
pass
file_list.append(path)
outfile.write(stream)
outfile.close()
self.current_offset += (rh_recLen) # - Header Size
self.__getOLEFile__(file_list)
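# --- Illustrative sketch (not part of the original extractor) ----------------
# The OLE branch above inflates each embedded object as a raw DEFLATE stream:
# passing -zlib.MAX_WBITS tells zlib to expect no zlib header or trailer.
# A minimal standalone helper doing the same (name assumed for illustration):
import zlib

def inflate_raw(data: bytes) -> bytes:
    """Decompress a headerless (raw) DEFLATE stream; returns b'' on failure."""
    d = zlib.decompressobj(-zlib.MAX_WBITS)
    try:
        return d.decompress(data) + d.flush()
    except zlib.error:
        return b''
# ------------------------------------------------------------------------------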
| 46.355975 | 215 | 0.546957 | 7,644 | 73,706 | 5.030351 | 0.048666 | 0.131853 | 0.165349 | 0.094273 | 0.89496 | 0.86422 | 0.850723 | 0.836237 | 0.824092 | 0.821778 | 0 | 0.02832 | 0.348927 | 73,706 | 1,589 | 216 | 46.385148 | 0.772964 | 0.017244 | 0 | 0.799003 | 0 | 0.002492 | 0.052082 | 0.016498 | 0 | 0 | 0.008347 | 0 | 0 | 1 | 0.013289 | false | 0.013289 | 0.004983 | 0 | 0.052326 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
b667f9c253b16fa333ca20bdd53e10a4522b11af | 37,897 | py | Python | wistl/tests/test_tower.py | GeoscienceAustralia/wistl | fa8b6aeaabb902ea72085b3552b5167cd20040a4 | [
"Apache-2.0"
] | null | null | null | wistl/tests/test_tower.py | GeoscienceAustralia/wistl | fa8b6aeaabb902ea72085b3552b5167cd20040a4 | [
"Apache-2.0"
] | 2 | 2021-11-15T17:50:15.000Z | 2021-11-26T05:37:41.000Z | wistl/tests/test_tower.py | GeoscienceAustralia/wistl | fa8b6aeaabb902ea72085b3552b5167cd20040a4 | [
"Apache-2.0"
] | 1 | 2021-01-18T06:35:43.000Z | 2021-01-18T06:35:43.000Z | #!/usr/bin/env python
__author__ = 'Hyeuk Ryu'
import unittest
import logging
import copy
import pandas as pd
import os
import tempfile
import numpy as np
from scipy import stats
from wistl.config import Config
#from wistl.constants import RTOL, ATOL
from wistl.tower import Tower, angle_between_two
from wistl.tests.test_config import assertDeepAlmostEqual
BASE_DIR = os.path.dirname(os.path.realpath(__file__))
RTOL = 0.05
ATOL = 0.001
PM_THRESHOLD = 1.0e-3
def create_wind_given_bearing(bearing, ratio):
if isinstance(bearing, list):
assert len(bearing) == len(ratio)
df = pd.DataFrame(np.array([ratio, bearing]).T, columns=['ratio', 'Bearing'])
nperiods = len(bearing)
else:
df = pd.DataFrame([[ratio, bearing], [ratio, bearing]], columns=['ratio', 'Bearing'])
nperiods = 2
df['time'] = pd.date_range(start='01/01/2011', periods=nperiods, freq='D')
df.set_index('time', inplace=True)
return df
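# Illustrative usage sketch (not part of the original tests): the helper above
# returns a daily-indexed DataFrame with 'ratio' and 'Bearing' columns, which
# the tests assign directly to Tower._wind. The function name below is an
# assumption added for illustration.
def _example_wind_frame():
    wind = create_wind_given_bearing([130.0, 45.0], [1.02, 0.98])
    assert list(wind.columns) == ['ratio', 'Bearing']
    assert len(wind) == 2
    return wind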
class TestTower1(unittest.TestCase):
# suspension tower
@classmethod
def setUpClass(cls):
cls.logger = logging.getLogger(__name__)
frag_dic = {11.5: {'minor': ['lognorm', '1.02', '0.02'],
'collapse': ['lognorm', '1.05', '0.02']},
28.75: {'minor': ['lognorm', '1.0', '0.02'],
'collapse': ['lognorm', '1.02', '0.02']},
41.25: {'minor': ['lognorm', '1.04', '0.02'],
'collapse': ['lognorm', '1.07', '0.02']},
90: {'minor': ['lognorm', '-1.05', '0.02'],
'collapse': ['lognorm', '-1.05', '0.02']},
}
cond_pc = {
(0, 1): 0.075,
(-1, 0): 0.075,
(-1, 0, 1): 0.35,
(-1, 0, 1, 2): 0.025,
(-2, -1, 0, 1): 0.025,
(-2, -1, 0, 1, 2): 0.1}
cond_pc_adj = {
12: 0.575,
14: 0.575,
15: 0.125,
11: 0.125}
cond_pc_adj_sim_idx = [(12, 14, 15), (11, 12, 14), (14,), (12,), (11, 12, 14, 15), (12, 14)]
cond_pc_adj_sim_prob = np.array([0.025, 0.05 , 0.125, 0.2 , 0.3 , 0.65 ])
cls.tower_dic = {
'type': 'Lattice Tower',
'name': 'T14',
'latitude': 0.0,
'longitude': 149.0,
'comment': 'Test',
'function': 'Suspension',
'devangle': 0,
'axisaz': 134,
'constcost': 0.0,
'height': 17.0,
'yrbuilt': 1980,
'locsource': 'Fake',
'lineroute': 'LineA',
#'shapes': <shapefile.Shape object at 0x7ff06908ec50>,
'coord': np.array([149.065, 0. ]),
'coord_lat_lon': np.array([ 0. , 149.065]),
#'point': <shapely.geometry.point.Point object at 0x7ff06908e320>,
'design_span': 400.0,
'design_level': 'low',
'design_speed': 75.0,
'terrain_cat': 2,
'file_wind_base_name': 'ts.T14.csv',
'height_z': 15.4,
'ratio_z_to_10': 1.0524,
'actual_span': 556.5974539658616,
'u_factor': 1.0,
'collapse_capacity': 75.0,
'cond_pc': cond_pc,
'max_no_adj_towers': 2,
'id_adj': [11, 12, 13, 14, 15],
'idl': 13,
'idn': 0,
'cond_pc_adj': cond_pc_adj,
'cond_pc_adj_sim_idx': cond_pc_adj_sim_idx,
'cond_pc_adj_sim_prob': cond_pc_adj_sim_prob,
'no_sims': 1000,
'damage_states': ['minor', 'collapse'],
'non_collapse': ['minor'],
'rnd_state': np.random.RandomState(1),
'event_id': 0,
'scale': 1.0,
'frag_dic': frag_dic,
'rtol': RTOL,
'atol': ATOL,
'dmg_threshold': PM_THRESHOLD,
'path_event': os.path.join(BASE_DIR, 'wind_event/test1'),
}
cls.tower = Tower(**cls.tower_dic)
#cls.tower.wind['ratio'] = 1.082 # 1.05*np.exp(0.03)
# cls.tower = Tower(tower_id=0, logger=logge**cls.cfg.towers.loc[0])
# cls.network = TransmissionNetwork(cfg=cls.cfg, event_id='test2', scale=2.5)
#
# cls.tower = cls.network.lines['Calaca - Amadeo'].towers['AC-100']
# cls.ps_tower = cls.tower.ps_tower
# cls.tower.event_tuple = (cls.tower.file_wind, 3.0)
# set wind file, which also sets wind and time_index
# cls.tower.file_wind = file_wind
# compute prob_damage_isolation and prob_damage_adjacent
# cls.tower.compute_dmg_isolated_isolation()
#cls.tower.compute_pc_adj()
def test_repr(self):
expected = 'Tower(name=T14, function=Suspension, idl=13, idn=0)'
self.assertEqual(repr(self.tower), expected)
def test_logger_file_wind1(self):
with self.assertLogs('wistl.tower', level='INFO') as cm:
self.tower._file_wind = None
self.tower.path_event = 'dummy_path'
#self.tower.file_wind_base_name = 'dummy'
self.tower.file_wind
msg = f'Invalid path_event dummy_path'
self.assertIn(f'ERROR:wistl.tower:{msg}', cm.output)
def test_logger_file_wind2(self):
with self.assertLogs('wistl.tower', level='INFO') as cm:
self.tower._file_wind = None
self.tower.path_event = BASE_DIR
self.tower.file_wind_base_name = 'dummy'
self.tower.file_wind
msg = f'Invalid file_wind {BASE_DIR}/dummy'
self.assertIn(f'ERROR:wistl.tower:{msg}', cm.output)
def test_logger_wind(self):
with self.assertLogs('wistl.tower', level='INFO') as cm:
self.tower._wind = None
self.tower._file_wind = 'dummy'
self.tower.wind
msg = f'Invalid file_wind dummy'
self.assertIn(f'CRITICAL:wistl.tower:{msg}', cm.output)
def test_logger_init(self):
with self.assertLogs('wistl.tower', level='DEBUG') as cm:
self.tower.init()
msg = f'{self.tower.name} is initialized'
self.assertIn(f'DEBUG:wistl.tower:{msg}', cm.output)
def test_logger_dmg_sim(self):
with self.assertLogs('wistl.tower', level='INFO') as cm:
tower_dic = self.tower_dic.copy()
tower = Tower(**tower_dic)
tower.init()
tower.no_sims = 10
# 1. determine damage state of tower due to wind
tower._wind = create_wind_given_bearing(130.0, 1.0712) # 1.05*np.exp(0.02)
tower.dmg_sim
msg = 'WARNING:wistl.tower'
self.assertEqual(msg, ':'.join(cm.output[0].split(':')[:2]))
def test_logger_collapse_adj_sim(self):
with self.assertLogs('wistl.tower', level='INFO') as cm:
# tower14 (idl: 13,
tower_dic = self.tower_dic.copy()
tower = Tower(**tower_dic)
tower.init()
tower.no_sims = 10
tower._wind = create_wind_given_bearing(130.0, 1.0712) # 1.05*np.exp(0.02)
tower.collapse_adj_sim
msg = 'WARNING:wistl.tower'
self.assertEqual(msg, ':'.join(cm.output[0].split(':')[:2]))
def test_logger_collapse_interaction(self):
with self.assertLogs('wistl.tower', level='INFO') as cm:
tower_dic = self.tower_dic.copy()
tower_dic.update({'axisaz': 91, 'no_sims': 50, 'cond_pc_interaction_no': [1, 3, 5, 7],
'cond_pc_interaction_cprob': [0.2, 0.3, 0.31, 0.311],
'cond_pc_interaction_prob': {1:0.2, 3:0.1, 5:0.01, 7:0.001}})
tower = Tower(**tower_dic)
tower.init()
prob_dic = {1: 0.2, 3: 0.1, 5: 0.01, 7: 0.001}
tower._wind = create_wind_given_bearing(130.0, 1.072)
tower.collapse_interaction
msg = f'WARNING:wistl.tower:Pc_interaction({tower.name})'
self.assertEqual(msg, ':'.join(cm.output[0].split(':')[:3]))
def test_damage_states(self):
self.assertEqual(self.tower.damage_states, ['minor', 'collapse'])
def test_no_time(self):
file_wind = tempfile.NamedTemporaryFile(mode='w+t', delete=False)
# read file_wind
file_wind.writelines([
'Time,Longitude,Latitude,Speed,UU,VV,Bearing,Pressure\n',
'2014-07-13 09:00,120.79,13.93,68.8,-0.18,-5.6,130.84,100780.97\n',
'2014-07-13 09:05,120.80,13.93,68.8,-0.18,-5.6,130.89,100780.92\n'
])
file_wind.seek(0)
self.tower.init()
self.tower._file_wind = file_wind.name
self.assertEqual(self.tower.no_time, 2)
os.unlink(file_wind.name)
def test_sorted_frag_dic_keys(self):
self.assertEqual(self.tower.sorted_frag_dic_keys, [11.5, 28.75, 41.25, 90.0])
def test_file_wind(self):
assert self.tower.name == 'T14'
expected = os.path.join(BASE_DIR, 'wind_event/test1', 'ts.T14.csv')
self.assertEqual(self.tower.file_wind, expected)
def test_angle_between_two(self):
deg1 = [ 0., 30., 60.,
90., 120., 150.,
180., 210., 240.,
270., 300., 330.]
d2 = 0
expected = [0, 30, 60,
90, 60, 30,
0, 30, 60,
90, 60, 30]
for d1, e in zip(deg1, expected):
result = angle_between_two(d1, d2)
self.assertAlmostEqual(result, e)
d2 = 90
expected = [90, 60, 30,
0, 30, 60,
90, 60, 30,
0, 30, 60]
for d1, e in zip(deg1, expected):
result = angle_between_two(d1, d2)
self.assertAlmostEqual(result, e)
d2 = 180
expected = [0, 30, 60,
90, 60, 30,
0, 30, 60,
90, 60, 30]
for d1, e in zip(deg1, expected):
result = angle_between_two(d1, d2)
self.assertAlmostEqual(result, e)
d2 = 270
expected = [90, 60, 30,
0, 30, 60,
90, 60, 30,
0, 30, 60]
for d1, e in zip(deg1, expected):
result = angle_between_two(d1, d2)
self.assertAlmostEqual(result, e)
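# Illustrative sketch (not part of the original module): the expected values in
# the test above are consistent with treating both bearings as undirected axes
# and returning the acute angle between them. This is only a reading of the
# test expectations, not the library's implementation.
def _angle_between_two_reference(d1, d2):
    diff = abs(d1 - d2) % 180.0
    return min(diff, 180.0 - diff)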
def test_get_directional_vulnerability1(self):
# thresholds: 11.5, 28.75, 41.25, 90.0
tower_dic = self.tower_dic.copy()
tower_dic.update({'axisaz': 90})
bearings = [10.0, 45.0, 60.0, 70.0, 80.0,
170.0, 135.0, 120.0, 110.0, 100.0,
190.0, 225.0, 240.0, 250.0, 260.0,
350.0, 315.0, 300.0, 290.0, 280.0]
expected = [90.0, 90.0, 41.25, 28.75, 11.5] * 4
for bearing, value in zip(bearings, expected):
tower = Tower(**tower_dic)
result = tower.get_directional_vulnerability(bearing)
try:
self.assertAlmostEqual(result, value)
except AssertionError:
print(f'Wrong: bearing:{bearing}, axisaz: {tower_dic["axisaz"]}, result:{result}, expected: {value}')
def test_get_directional_vulnerability2(self):
# thresholds: 11.5, 28.75, 41.25, 90.0
tower_dic = self.tower_dic.copy()
tower_dic.update({'axisaz': 0})
bearings = [ 0., 15., 30., 45., 60., 75.,
90., 105., 120., 135., 150., 165.,
180., 195., 210., 225., 240., 255.,
270., 285., 300., 315., 330., 345.]
expected = [11.5, 28.75, 41.25, 90.0, 90.0, 90.0,
90.0, 90.0, 90.0, 90.0, 41.25, 28.75] * 2
for bearing, value in zip(bearings, expected):
tower = Tower(**tower_dic)
result = tower.get_directional_vulnerability(bearing)
try:
self.assertAlmostEqual(result, value)
except AssertionError:
print(f'Wrong: bearing:{bearing}, axisaz: {tower_dic["axisaz"]}, result:{result}, expected: {value}')
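# Illustrative sketch (not part of the original module): with fragility
# thresholds sorted as [11.5, 28.75, 41.25, 90.0], the expected keys in the two
# tests above follow from picking the first threshold that is >= the acute
# angle between the wind bearing and the tower axis. Helper name assumed.
def _directional_key_reference(bearing, axisaz,
                               sorted_keys=(11.5, 28.75, 41.25, 90.0)):
    diff = abs(bearing - axisaz) % 180.0
    angle = min(diff, 180.0 - diff)
    return next(k for k in sorted_keys if angle <= k)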
def test_wind(self):
file_wind = tempfile.NamedTemporaryFile(mode='w+t', delete=False)
# read file_wind
file_wind.writelines([
'Time,Longitude,Latitude,Speed,UU,VV,Bearing,Pressure\n',
'2014-07-13 09:00,120.79,13.93,3.68,-0.18,-5.6,1.84,100780.97\n',
'2014-07-13 09:05,120.80,13.93,3.68,-0.18,-5.6,1.89,100780.92\n'
])
file_wind.seek(0)
self.tower._file_wind = file_wind.name
self.tower._wind = None
self.assertAlmostEqual(self.tower.wind.loc['2014-07-13 09:00', 'Speed'], 1.0524*3.68)
self.assertAlmostEqual(self.tower.wind.loc['2014-07-13 09:05', 'ratio'], 1.0524*3.68/75.0)
os.unlink(file_wind.name)
def test_dmg_isolated(self):
frag_dic = {11.5: {'minor': stats.lognorm(0.02, scale=1.02),
'collapse': stats.lognorm(0.02, scale=1.05)},
28.75: {'minor': stats.lognorm(0.02, scale=1.0),
'collapse': stats.lognorm(0.02, scale=1.02)},
41.25: {'minor': stats.lognorm(0.02, scale=1.04),
'collapse': stats.lognorm(0.02, scale=1.07)},
90: {'minor': stats.lognorm(0.1, scale=-1.2),
'collapse': stats.lognorm(0.1, scale=-1.4)}
}
self.assertAlmostEqual(self.tower.collapse_capacity, 75.0)
self.assertAlmostEqual(self.tower.axisaz, 134.0)
bearing, ratio = 130.0, 1.02
self.tower._wind = create_wind_given_bearing(bearing, ratio)
key = self.tower.get_directional_vulnerability(bearing)
self.assertEqual(key, 11.5)
self.tower._dmg = None
self.assertAlmostEqual(self.tower.dmg.loc['01/01/2011', 'minor'],
stats.lognorm.cdf(1.02, 0.02, scale=1.02))
self.assertAlmostEqual(self.tower.dmg.loc['01/01/2011', 'collapse'],
stats.lognorm.cdf(1.02, 0.02, scale=1.05))
bearing, ratio = 45.0, 1.02
self.tower._wind = create_wind_given_bearing(bearing, ratio)
key = self.tower.get_directional_vulnerability(bearing)
self.assertEqual(key, 90.0)
self.tower._dmg = None
#self.assertAlmostEqual(self.tower.dmg.loc['01/01/2011', 'minor'], 0.0)
#self.assertAlmostEqual(self.tower.dmg.loc['01/01/2011', 'collapse'], 0.0)
bearing, ratio = 110.0, 1.04
self.tower._wind = create_wind_given_bearing(bearing, ratio)
key = self.tower.get_directional_vulnerability(bearing)
self.assertEqual(key, 28.75)
self.tower._dmg = None
self.assertAlmostEqual(self.tower.dmg.loc['01/01/2011', 'minor'],
stats.lognorm.cdf(1.04, 0.02, scale=1.0))
self.assertAlmostEqual(self.tower.dmg.loc['01/01/2011', 'collapse'],
stats.lognorm.cdf(1.04, 0.02, scale=1.02))
bearing, ratio = 100.0, 1.0
self.tower._wind = create_wind_given_bearing(bearing, ratio)
key = self.tower.get_directional_vulnerability(bearing)
self.assertEqual(key, 41.25)
self.tower._dmg = None
self.assertAlmostEqual(self.tower.dmg.loc['01/01/2011', 'minor'],
stats.lognorm.cdf(ratio, 0.02, scale=1.04))
self.assertAlmostEqual(self.tower.dmg.loc['01/01/2011', 'collapse'],
stats.lognorm.cdf(ratio, 0.02, scale=1.07))
def test_collapse_adj(self):
cond_pc_adj = {11: 0.125, 12: 0.575, 14: 0.575, 15: 0.125}
assertDeepAlmostEqual(self, dict(self.tower.cond_pc_adj), cond_pc_adj)
self.tower._wind = create_wind_given_bearing(130, 1.0712) # 1.05*np.exp(0.02)
self.assertEqual(self.tower.id_adj, [11, 12, 13, 14, 15])
self.tower._dmg = None
for id_abs, value in cond_pc_adj.items():
self.assertAlmostEqual(self.tower.collapse_adj[id_abs][0],
0.842 * value, places=2)
def test_dmg_sim(self):
# 1. determine damage state of tower due to wind
self.tower.init()
self.tower._wind = create_wind_given_bearing(130.0, 1.0712) # 1.05*np.exp(0.02)
self.assertAlmostEqual(self.tower.dmg['collapse'].values[0],
stats.lognorm.cdf(1.0712, 0.02, scale=1.05), places=2)
self.assertAlmostEqual(self.tower.dmg['minor'].values[0],
stats.lognorm.cdf(1.0712, 0.02, scale=1.02), places=2)
self.tower.dmg_sim
def test_dmg_state_sim_old(self):
self.tower.init()
self.tower._wind = create_wind_given_bearing(130.0, 1.0712) # 1.05*np.exp(0.02)
rv = stats.uniform.rvs(size=(self.tower.no_sims, self.tower.no_time))
a = np.array([rv < self.tower.dmg[ds].values
for ds in self.tower.damage_states]).sum(axis=0)
b = (rv[:, :, np.newaxis] < self.tower.dmg.values).sum(axis=2)
np.testing.assert_array_equal(a, b)
def test_dmg_threshold(self):
df = pd.DataFrame([[0.1, 130.0], [0.9605, 130.0], [0.1, 130.0], [0.97, 130.0]], columns=['ratio', 'Bearing'])
df['time'] = pd.date_range(start='01/01/2011', end='01/04/2011', freq='D')
df.set_index('time', inplace=True)
self.tower._wind = df
self.tower._dmg = None
# checking index
pd.testing.assert_index_equal(self.tower.dmg.index, df.index[1:3+1])
def test_dmg_time_idx(self):
df = pd.DataFrame([[0.1, 130.0], [0.9605, 130.0], [0.1, 130.0], [0.97, 130.0]], columns=['ratio', 'Bearing'])
df['time'] = pd.date_range(start='01/01/2011', end='01/04/2011', freq='D')
df.set_index('time', inplace=True)
self.tower.init()
self.tower._wind = df
# checking index
self.assertEqual((1,4), self.tower.dmg_time_idx)
def test_dmg_idxmax(self):
df = pd.DataFrame([[0.1, 130.0], [0.9605, 130.0], [0.1, 130.0], [0.97, 130.0]], columns=['ratio', 'Bearing'])
df['time'] = pd.date_range(start='01/01/2011', end='01/04/2011', freq='D')
df.set_index('time', inplace=True)
self.tower.init()
self.tower._wind = df
# checking index
self.assertEqual([3], self.tower.dmg_idxmax)
def test_compare_dmg_with_dmg_sim(self):
# dmg_isolated vs. dmg_sim
prob_sim = 0
for ds in self.tower.damage_states[::-1]:
no = len(self.tower.dmg_state_sim[ds]['id_sim'][
self.tower.dmg_state_sim[ds]['id_time'] == 1])
prob_sim += no / self.tower.no_sims
isclose = np.isclose(prob_sim,
self.tower.dmg[ds].values[1],
rtol=RTOL, atol=ATOL)
if not isclose:
self.logger.warning(f'PE of {ds}: '
f'simulation {prob_sim:.3f} vs. '
f'analytical {self.tower.dmg[ds].values[1]:.3f}')
#@unittest.skip("skipping ATM")
def test_dmg_state_sim(self):
rv = np.array([[0, 0], [1, 1], [0.5, 0.9]]) # no_sims, no_time
dmg = pd.DataFrame(np.array([[0.9928, 0.8412],
[0.9928, 0.8412]]), columns=['minor', 'collapse'])
_array = (rv[:, :, np.newaxis] < self.tower.dmg.values).sum(axis=2)
np.testing.assert_equal(_array, np.array([[2, 2], [0, 0], [2, 1]]))
dmg_state_sim = {}
for ids, ds in enumerate(self.tower.damage_states, 1):
id_sim, id_time = np.where(_array == ids)
dmg_state_sim[ds] = pd.DataFrame(np.vstack((id_sim, id_time)).T, columns=['id_sim', 'id_time'])
np.testing.assert_equal(dmg_state_sim['minor']['id_sim'].values, np.array([2]))
np.testing.assert_equal(dmg_state_sim['minor']['id_time'].values, np.array([1]))
np.testing.assert_equal(dmg_state_sim['collapse']['id_sim'].values, np.array([0, 0, 2]))
np.testing.assert_equal(dmg_state_sim['collapse']['id_time'].values, np.array([0, 1, 0]))
#@unittest.skip("skip ATM")
def test_dmg_state_sim_threshold(self):
rv = np.array([[0.9, 0.5, 0],
[0.1, 0.5, 0.9],
[0.5, 0.9, 0.7]]) # no_sims, no_time
dmg = pd.DataFrame(np.array([[0.9928, 0.8412],
[0.0, 0.0],
[0.9928, 0.8412]]), columns=['minor', 'collapse'])
_array = (rv[:, :, np.newaxis] < dmg.values).sum(axis=2)
np.testing.assert_equal(_array, np.array([[1, 0, 2], [2, 0, 1], [2, 0, 2]])) # no_sims, no_time
dmg_state_sim = {}
for ids, ds in enumerate(self.tower.damage_states, 1):
id_sim, id_time = np.where(_array == ids)
dmg_state_sim[ds] = pd.DataFrame(np.vstack((id_sim, id_time)).T, columns=['id_sim', 'id_time'])
np.testing.assert_equal(dmg_state_sim['minor']['id_sim'].values, np.array([0, 1]))
np.testing.assert_equal(dmg_state_sim['minor']['id_time'].values, np.array([0, 2]))
np.testing.assert_equal(dmg_state_sim['collapse']['id_sim'].values, np.array([0, 1, 2, 2]))
np.testing.assert_equal(dmg_state_sim['collapse']['id_time'].values, np.array([2, 0, 0, 2]))
def test_collapse_adj_sim(self):
# tower14 (idl: 13,
tower_dic = self.tower_dic.copy()
tower_dic.update({'axisaz': 90,
'no_sims': 6000})
tower = Tower(**tower_dic)
tower.init()
tower._wind = create_wind_given_bearing([130, 130, 120, 130],[0.0712, 1.0712, 1.0712, 0.0712]) # 1.05*np.exp(0.02)
df = tower.collapse_adj_sim.groupby(['id_time','id_adj']).apply(len).reset_index()
for idl in tower.cond_pc_adj.keys():
x = df.loc[df['id_adj'].apply(lambda x: idl in x)].groupby('id_time').sum()/tower.no_sims
np.testing.assert_allclose(x[0].values, tower.collapse_adj[idl], atol=ATOL, rtol=RTOL)
def test_collapse_interaction(self):
tower_dic = self.tower_dic.copy()
tower_dic.update({'axisaz': 91, 'no_sims': 5000, 'cond_pc_interaction_no': [1, 3, 5, 7],
'cond_pc_interaction_cprob': [0.2, 0.3, 0.31, 0.311],
'cond_pc_interaction_prob': {1:0.2, 3:0.1, 5:0.01, 7:0.001}})
tower = Tower(**tower_dic)
tower.init()
tower._wind = create_wind_given_bearing(130.0, 1.072)
#tower.collapse_interaction
x = tower.collapse_interaction.groupby(['id_time', 'no_collapse']).apply(len).reset_index()
for _, row in x.iterrows():
expected = tower.dmg['collapse'].iloc[row['id_time']] * tower.cond_pc_interaction_prob[row['no_collapse']]
result = row[0] / tower.no_sims
np.testing.assert_allclose(expected, result, atol=ATOL, rtol=RTOL)
class TestTower2(unittest.TestCase):
# strainer
@classmethod
def setUpClass(cls):
# 23, T9
cls.logger = logging.getLogger(__name__)
frag_dic = {180: {'minor': ['lognorm','1.143','0.032'],
'collapse': ['lognorm','1.18','0.04']}
}
cond_pc = {
(-1,0,1): 0.05,
(-2,-1,0,1,2): 0.08,
(-3,-2,-1,0,1,2,3): 0.10,
(-4,-3,-2,-1,0,1,2,3,4): 0.08,
(-5,-4,-3,-2,-1,0,1,2,3,4,5): 0.05,
(-6,-5,-4,-3,-2,-1,0,1,2,3,4,5,6): 0.62,
}
cond_pc_adj = {
2: 0.62,
3: 0.67,
4: 0.75,
5: 0.85,
6: 0.93,
7: 0.98,
9: 0.98,
10: 0.93,
11: 0.85,
12: 0.75,
13: 0.67,
14: 0.62}
cond_pc_adj_sim_idx = [
(7, 9),
(3, 4, 5, 6, 7, 9, 10, 11, 12, 13),
(6, 7, 9, 10),
(4, 5, 6, 7, 9, 10, 11, 12),
(5, 6, 7, 9, 10, 11),
(2, 3, 4, 5, 6, 7, 9, 10, 11, 12, 13, 14)]
cond_pc_adj_sim_prob = np.array([0.05, 0.10 , 0.18, 0.26, 0.36, 0.98])
cls.tower_dic = {
'type': 'Lattice Tower',
'name': 'T9',
'latitude': 0.0,
'longitude': 149.0,
'comment': 'Test',
'function': 'Suspension',
'devangle': 0,
'axisaz': 134,
'constcost': 0.0,
'height': 17.0,
'yrbuilt': 1980,
'locsource': 'Fake',
'lineroute': 'LineA',
#'shapes': <shapefile.Shape object at 0x7ff06908ec50>,
'coord': np.array([149.065, 0. ]),
'coord_lat_lon': np.array([ 0. , 149.065]),
#'point': <shapely.geometry.point.Point object at 0x7ff06908e320>,
'design_span': 400.0,
'design_level': 'low',
'design_speed': 75.0,
'terrain_cat': 2,
'file_wind_base_name': 'ts.T9.csv',
'height_z': 15.4,
'ratio_z_to_10': 1.0524,
'actual_span': 556.5974539658616,
'u_factor': 1.0,
'collapse_capacity': 75.0,
'cond_pc': cond_pc,
'max_no_adj_towers': 6,
'id_adj': [2, 3, 4, 5, 6, 7, -1, 9, 10, 11, 12, 13, 14],
'idl': 8,
'idn': 0,
'cond_pc_adj': cond_pc_adj,
'cond_pc_adj_sim_idx': cond_pc_adj_sim_idx,
'cond_pc_adj_sim_prob': cond_pc_adj_sim_prob,
'no_sims': 10000,
'damage_states': ['minor', 'collapse'],
'non_collapse': ['minor'],
'rnd_state': np.random.RandomState(1),
'event_id': 0,
'rtol': RTOL,
'atol': ATOL,
'dmg_threshold': PM_THRESHOLD,
'scale': 1.0,
'frag_dic': frag_dic,
'path_event': os.path.join(BASE_DIR, 'wind_event/test1'),
}
cls.tower = Tower(**cls.tower_dic)
def test_sorted_frag_dic_keys(self):
self.assertEqual(self.tower.sorted_frag_dic_keys, [180.0])
def test_file_wind(self):
assert self.tower.name == 'T9'
expected = os.path.join(BASE_DIR, 'wind_event/test1', 'ts.T9.csv')
self.assertEqual(self.tower.file_wind, expected)
def test_get_directional_vulnerability1(self):
# thresholds: 11.5, 28.75, 41.25, 90.0
tower_dic = self.tower_dic.copy()
tower_dic.update({'axisaz': 90})
bearings = [10.0, 45.0, 60.0, 70.0, 80.0,
170.0, 135.0, 120.0, 110.0, 100.0,
190.0, 225.0, 240.0, 250.0, 260.0,
350.0, 315.0, 300.0, 290.0, 280.0]
expected = [180] * 20
for bearing, value in zip(bearings, expected):
tower = Tower(**tower_dic)
result = tower.get_directional_vulnerability(bearing)
try:
self.assertAlmostEqual(result, value)
except AssertionError:
print(f'Wrong: bearing:{bearing}, axisaz: {tower_dic["axisaz"]}, result:{result}, expected: {value}')
def test_wind(self):
file_wind = tempfile.NamedTemporaryFile(mode='w+t', delete=False)
# read file_wind
file_wind.writelines([
'Time,Longitude,Latitude,Speed,UU,VV,Bearing,Pressure\n',
'2014-07-13 09:00,120.79,13.93,3.68,-0.18,-5.6,1.84,100780.97\n',
'2014-07-13 09:05,120.80,13.93,3.68,-0.18,-5.6,1.89,100780.92\n'
])
file_wind.seek(0)
self.tower._file_wind = file_wind.name
self.tower._wind = None
self.assertAlmostEqual(self.tower.wind.loc['2014-07-13 09:00', 'Speed'], 1.0524*3.68)
self.assertAlmostEqual(self.tower.wind.loc['2014-07-13 09:05', 'ratio'], 1.0524*3.68/75.0)
os.unlink(file_wind.name)
def test_dmg_isolated(self):
frag_dic = {180: {'minor': ['lognorm','1.143','0.032'],
'collapse': ['lognorm','1.18','0.04']}}
self.assertAlmostEqual(self.tower.collapse_capacity, 75.0)
self.assertAlmostEqual(self.tower.axisaz, 134.0)
bearing, ratio = 130.0, 1.02
self.tower._wind = create_wind_given_bearing(bearing, ratio)
key = self.tower.get_directional_vulnerability(bearing)
self.assertEqual(key, 180)
self.tower._dmg = None
with self.assertLogs('wistl.tower', level='INFO') as cm:
self.assertTrue(self.tower.dmg.empty)
#self.assertAlmostEqual(self.tower.dmg.loc['01/01/2011', 'minor'],
# stats.lognorm.cdf(1.02, 0.032, scale=1.143))
#self.assertAlmostEqual(self.tower.dmg.loc['01/01/2011', 'collapse'],
# stats.lognorm.cdf(1.02, 0.04, scale=1.18))
bearing, ratio = 110.0, 1.04
self.tower._wind = create_wind_given_bearing(bearing, ratio)
key = self.tower.get_directional_vulnerability(bearing)
self.assertEqual(key, 180)
self.tower._dmg = None
self.assertAlmostEqual(self.tower.dmg.loc['01/01/2011', 'minor'],
stats.lognorm.cdf(1.04, 0.032, scale=1.143))
self.assertAlmostEqual(self.tower.dmg.loc['01/01/2011', 'collapse'],
stats.lognorm.cdf(1.04, 0.04, scale=1.18))
def test_collapse_adj(self):
cond_pc_adj = {
2: 0.62,
3: 0.67,
4: 0.75,
5: 0.85,
6: 0.93,
7: 0.98,
9: 0.98,
10: 0.93,
11: 0.85,
12: 0.75,
13: 0.67,
14: 0.62}
assertDeepAlmostEqual(self, dict(self.tower.cond_pc_adj), cond_pc_adj)
self.tower._wind = create_wind_given_bearing(130, 1.22816) # 1.18*np.exp(0.04)
self.assertEqual(self.tower.id_adj, [2, 3, 4, 5, 6, 7, -1, 9, 10, 11, 12, 13, 14])
self.tower._dmg = None
for id_abs, value in cond_pc_adj.items():
self.assertAlmostEqual(self.tower.collapse_adj[id_abs][0],
0.842 * value, places=2)
def test_dmg_sim(self):
# 1. determine damage state of tower due to wind
self.tower._wind = create_wind_given_bearing(130.0, 1.22816) # 1.18*np.exp(0.04)
self.tower._dmg = None
self.tower._dmg_state_sim = None
self.tower._dmg_sim = None
self.assertAlmostEqual(self.tower.dmg['collapse'].values[0],
stats.lognorm.cdf(1.22816, 0.04, scale=1.18), places=2)
self.assertAlmostEqual(self.tower.dmg['minor'].values[0],
stats.lognorm.cdf(1.22816, 0.032, scale=1.143), places=2)
self.assertAlmostEqual(self.tower.dmg_sim['collapse'][0], self.tower.dmg['collapse'].values[0], places=2)
def test_dmg_state_sim_old(self):
self.tower.init()
self.tower._wind = create_wind_given_bearing(130.0, 1.22816) # 1.18*np.exp(0.04)
rv = stats.uniform.rvs(size=(self.tower.no_sims, self.tower.no_time))
a = np.array([rv < self.tower.dmg[ds].values
for ds in self.tower.damage_states]).sum(axis=0)
b = (rv[:, :, np.newaxis] < self.tower.dmg.values).sum(axis=2)
np.testing.assert_array_equal(a, b)
def test_compare_dmg_with_dmg_sim(self):
# dmg_isolated vs. dmg_sim
prob_sim = 0
for ds in self.tower.damage_states[::-1]:
no = len(self.tower.dmg_state_sim[ds]['id_sim'][
self.tower.dmg_state_sim[ds]['id_time'] == 1])
prob_sim += no / self.tower.no_sims
isclose = np.isclose(prob_sim,
self.tower.dmg[ds].values[1],
rtol=RTOL, atol=ATOL)
if not isclose:
self.logger.warning(f'PE of {ds}: '
f'simulation {prob_sim:.3f} vs. '
f'analytical {self.tower.dmg[ds].values[1]:.3f}')
def test_dmg_state_sim(self):
self.tower.init()
self.tower._wind = create_wind_given_bearing(130.0, 1.22816) # 1.18*np.exp(0.04)
np.testing.assert_allclose(self.tower.dmg.values,
np.array([[0.987637, 0.841361],
[0.987637, 0.841361]]),
rtol=1.e-4) # minor, collapse
#self.tower._dmg_state_sim = (rv[:, :, np.newaxis] < self.tower.dmg.values).sum(axis=2)
#np.testing.assert_equal(self.tower._dmg_state_sim, np.array([[2, 2], [0, 0], [2, 1]]))
self.tower.dmg_state_sim
def test_dmg_state_sim2(self):
rv = np.array([[0, 0], [1, 1], [0.5, 0.9]]) # no_sims, no_time
dmg = pd.DataFrame(np.array([[0.987637, 0.841361],
[0.987637, 0.841361]]), columns=['minor', 'collapse'])
_array = (rv[:, :, np.newaxis] < self.tower.dmg.values).sum(axis=2)
np.testing.assert_equal(_array, np.array([[2, 2], [0, 0], [2, 1]]))
dmg_state_sim = {}
for ids, ds in enumerate(self.tower.damage_states, 1):
id_sim, id_time = np.where(_array == ids)
dmg_state_sim[ds] = pd.DataFrame(np.vstack((id_sim, id_time)).T, columns=['id_sim', 'id_time'])
np.testing.assert_equal(dmg_state_sim['minor']['id_sim'].values, np.array([2]))
np.testing.assert_equal(dmg_state_sim['minor']['id_time'].values, np.array([1]))
np.testing.assert_equal(dmg_state_sim['collapse']['id_sim'].values, np.array([0, 0, 2]))
np.testing.assert_equal(dmg_state_sim['collapse']['id_time'].values, np.array([0, 1, 0]))
def test_collapse_adj_sim(self):
# tower14 (idl: 13,
tower_dic = self.tower_dic.copy()
tower_dic.update({'axisaz': 90,
'no_sims': 5000,
'function': 'strainer'})
tower = Tower(**tower_dic)
tower.init()
tower._wind = create_wind_given_bearing(130.0, 1.2282) # 1.18*np.exp(0.04)
tower.collapse_adj_sim
class TestTower3(unittest.TestCase):
# suspension tower neighboring strainer
@classmethod
def setUpClass(cls):
cls.logger = logging.getLogger(__name__)
frag_dic = {11.5: {'minor': ['lognorm', '1.02', '0.02'],
'collapse': ['lognorm', '1.05', '0.02']},
28.75: {'minor': ['lognorm', '1.0', '0.02'],
'collapse': ['lognorm', '1.02', '0.02']},
41.25: {'minor': ['lognorm', '1.04', '0.02'],
'collapse': ['lognorm', '1.07', '0.02']},
90: {'minor': ['lognorm', '-1.05', '0.02'],
'collapse': ['lognorm', '-1.05', '0.02']},
}
cond_pc = {
(0, 1): 0.075,
(-1, 0): 0.075,
(-1, 0, 1): 0.35,
(-1, 0, 1, 2): 0.025,
(-2, -1, 0, 1): 0.025,
(-2, -1, 0, 1, 2): 0.1}
cond_pc_adj = {
11: 0.575,
12: 0.125}
cond_pc_adj_sim_idx = [(11, 12,), (11, )]
cond_pc_adj_sim_prob = np.array([0.125, 0.575 ])
cls.tower_dic = {
'type': 'Lattice Tower',
'name': 'T33',
'latitude': 0.0,
'longitude': 149.0,
'comment': 'Test',
'function': 'Suspension',
'devangle': 0,
'axisaz': 134,
'constcost': 0.0,
'height': 17.0,
'yrbuilt': 1980,
'locsource': 'Fake',
'lineroute': 'LineB',
#'shapes': <shapefile.Shape object at 0x7ff06908ec50>,
'coord': np.array([149.065, 0. ]),
'coord_lat_lon': np.array([ 0. , 149.065]),
#'point': <shapely.geometry.point.Point object at 0x7ff06908e320>,
'design_span': 400.0,
'design_level': 'low',
'design_speed': 75.0,
'terrain_cat': 2,
'file_wind_base_name': 'ts.T33.csv',
'height_z': 15.4,
'ratio_z_to_10': 1.0524,
'actual_span': 556.5974539658616,
'u_factor': 1.0,
'collapse_capacity': 75.0,
'cond_pc': cond_pc,
'max_no_adj_towers': 2,
'id_adj': [8, -1, 10, 11, 12],
'idl': 10,
'idn': 33,
'cond_pc_adj': cond_pc_adj,
'cond_pc_adj_sim_idx': cond_pc_adj_sim_idx,
'cond_pc_adj_sim_prob': cond_pc_adj_sim_prob,
'no_sims': 1000,
'damage_states': ['minor', 'collapse'],
'non_collapse': ['minor'],
'rnd_state': np.random.RandomState(1),
'event_id': 0,
'rtol': RTOL,
'atol': ATOL,
'dmg_threshold': PM_THRESHOLD,
'scale': 1.0,
'frag_dic': frag_dic,
'path_event': os.path.join(BASE_DIR, 'wind_event/test1'),
}
cls.tower = Tower(**cls.tower_dic)
def test_collapse_adj_sim(self):
# tower14 (idl: 13,
tower_dic = self.tower_dic.copy()
tower_dic.update({'axisaz': 90,
'no_sims': 5000})
tower = Tower(**tower_dic)
tower.init()
tower._wind = create_wind_given_bearing([130, 130, 130, 130],[0.0712, 1.0712, 1.0712, 0.0712]) # 1.05*np.exp(0.02)
df = tower.collapse_adj_sim.groupby(['id_time','id_adj']).apply(len).reset_index()
for idl in tower.cond_pc_adj.keys():
x = df.loc[df['id_adj'].apply(lambda x: idl in x)].groupby('id_time').sum()/tower.no_sims
np.testing.assert_allclose(x[0].values, tower.collapse_adj[idl], atol=ATOL, rtol=RTOL)
if __name__ == '__main__':
unittest.main(verbosity=2)
| 38.279798 | 123 | 0.534897 | 5,199 | 37,897 | 3.73341 | 0.081554 | 0.068624 | 0.033385 | 0.041731 | 0.864039 | 0.842607 | 0.829006 | 0.802473 | 0.782174 | 0.768727 | 0 | 0.113864 | 0.304536 | 37,897 | 989 | 124 | 38.318504 | 0.622591 | 0.065784 | 0 | 0.705075 | 0 | 0.012346 | 0.129656 | 0.025218 | 0 | 0 | 0 | 0 | 0.128944 | 1 | 0.061728 | false | 0 | 0.015089 | 0 | 0.082305 | 0.004115 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
b66da7bad6def52a97c8e2a4ab1965a02873c8f0 | 10,421 | py | Python | CarCounting/getDOTstream.py | PeterJWei/CitywideFootprinting | 98064e6119ceab26079c0b11629f3d428f4e745f | [
"MIT"
] | 1 | 2019-01-25T16:01:57.000Z | 2019-01-25T16:01:57.000Z | CarCounting/getDOTstream.py | PeterJWei/CitywideFootprinting | 98064e6119ceab26079c0b11629f3d428f4e745f | [
"MIT"
] | 1 | 2018-10-31T18:33:07.000Z | 2018-10-31T18:38:57.000Z | CarCounting/getDOTstream.py | PeterJWei/CitywideFootprinting | 98064e6119ceab26079c0b11629f3d428f4e745f | [
"MIT"
] | 1 | 2018-12-24T23:35:15.000Z | 2018-12-24T23:35:15.000Z | import urllib2
#import urllib.request
import cv2
import numpy as np
import web
import base64
from TF_SSD import CarDetector
from utility.correlation import correlationClass
import json
urls = ("/", "stream",
"/vehicle", "vehicleCount",
"/test", "testCamera")
C = CarDetector('CarCounting/InferenceGraph/citycam_graph.pb')
#C = CarDetector('CarCounting/InferenceGraph/ssd_lite_graph.pb')
class tempData:
def __init__(self):
self.boundingBoxes = []
self.total = 0
self.prevImage = None
T = tempData()
class vehicleCount:
def __init__(self):
URL='http://207.251.86.238/797'
print("Getting stream from " + URL + "...")
self.G = getStreamCount(URL)
self.count = 0
def vehicleCountFromImage(self):
self.count += self.G.getImage()
return self.count
class stream:
def __init__(self):
return
def GET(self):
data = web.input()
print(data)
#self.G = getStream('http://207.251.86.238/cctv797.jpg?math=0.8641532073791593')
if "URL" in data:
URL = data["URL"]
print("found in data")
else:
#URL='http://207.251.86.238/cctv31.jpg'
URL='http://207.251.86.238/797'
# URL='http://207.251.86.238/cctv797.jpg?math=0.658582090996567'
print("Getting stream from " + URL + "...")
self.G = getStream(URL)
return self.G.getImage()
class testCamera:
def GET(self):
im = cv2.imread("CarCounting/pic0.jpg")
#im = im[:, :, ::-1]
sensitivity = 0.5
C = CarDetector('CarCounting/InferenceGraph/frozen_inference_graph.pb')
boxes, scores, classes, num = C.getClassification(im)
for i in range(scores[0].shape[0]):
if scores[0][i] < sensitivity:
limit = i
break
nBoxes = boxes[0][0:limit]
nScores = scores[0][0:limit]
nClasses = classes[0][0:limit]
#Box colors
R = 0
G = 255
B = 0
for box1 in nBoxes:
x1 = min(351,int(round(box1[1]*352)))
y1 = min(239,int(round(box1[0]*240)))
x2 = min(351,int(round(box1[3]*352)))
y2 = min(239,int(round(box1[2]*240)))
im[y1:y2, x1, 0] = R
im[y1:y2, x1, 1] = G
im[y1:y2, x1, 2] = B
im[y1:y2, x2, 0] = R
im[y1:y2, x2, 1] = G
im[y1:y2, x2, 2] = B
im[y1, x1:x2, 0] = R
im[y1, x1:x2, 1] = G
im[y1, x1:x2, 2] = B
im[y2, x1:x2, 0] = R
im[y2, x1:x2, 1] = G
im[y2, x1:x2, 2] = B
retval, b = cv2.imencode('.jpg', im)
encoded_string = base64.b64encode(b)
return encoded_string
class getStreamCount:
def __init__(self, url):
self.stream = urllib2.urlopen(url)
def getImage(self):
total = T.total #total number of bounding boxes
boundingBoxes = T.boundingBoxes #stored bounding box coordinates from last frame
prevImage = T.prevImage #stored image from previous frame
file = self.stream.read()
encoded_string = base64.b64encode(file)
arr = np.asarray(bytearray(file), dtype=np.uint8)
try:
img = cv2.imdecode(arr, -1)
except Exception:
return 0
#filter out background
img2 = img.copy()
img2 = self.filter2(79, 104, 43, 240, 141, 108, 351, 195, img2) #hacked solution to black out the non-essential parts of the image
sensitivity = 0.4 #threshold to filter out detections
boxes, scores, classes, num = C.getClassification(img2) #runs the image through ssd-mobilenet
limit = 0
for i in range(scores[0].shape[0]):
limit = i
if scores[0][i] < sensitivity:
break
nBoxes = boxes[0][0:limit]
nScores = scores[0][0:limit]
nClasses = classes[0][0:limit]
currentBoxes = []
for box1 in nBoxes:
x1 = min(351,int(round(box1[1]*352)))
y1 = min(239,int(round(box1[0]*240)))
x2 = min(351,int(round(box1[3]*352)))
y2 = min(239,int(round(box1[2]*240)))
print((x1, x2, y1, y2))
currentBoxes.append((x1, x2, y1, y2))
#currentBoxes now holds the coordinates of the bounding boxes for this frame
#TODO: Run the current bounding boxes through VGG
#EXAMPLE
for coord in currentBoxes:
(x1, x2, y1, y2) = coord
boundingBox = img[y1:y2, x1:x2, :]
#Determine whether this bounding box is a car or not by passing through VGG
#remove bounding box if below score threshold
#instantiates a correlation object with the boxes from the previous frame and this frame
self.corr = correlationClass(boundingBoxes, currentBoxes)
#correlates the bounding boxes. method to be implemented in correlation.py.
#tracked and new each contain a list of indices for the bounding boxes in this frame,
#whether the car in the box is matched with a bounding box in the previous frame, or not.
tracked, new = self.corr.correlateBoxes(prevImage, img)
for i in range(len(currentBoxes)):
(x1, x2, y1, y2) = currentBoxes[i]
if i in new:
img = self.drawBox(img, x1, x2, y1, y2, [0, 255, 0])
else:
img = self.drawBox(img, x1, x2, y1, y2, [0, 0, 255])
#img = self.filter(img)
print("Image 1 bounding boxes: " + str(len(boundingBoxes)))
print("Image 2 bounding boxes: " + str(len(currentBoxes)))
print("Number of correlations: " + str(self.corr.numCorrelations))
T.boundingBoxes = currentBoxes
T.total += len(new)
T.prevImage = img
return len(new)
def filter(self, img, regions=None):
if regions is None:
img[:, 0:135, 0] = 0
img[:, 0:135, 1] = 0
img[:, 0:135, 2] = 0
img[170:,:,0] = 0
img[170:,:,1] = 0
img[170:,:,2] = 0
img[:67, 135:220, 0] = 0
img[:67, 135:220, 1] = 0
img[:67, 135:220, 2] = 0
for i in range(141):
for j in range(84,352):
if (i*148.0/78 + 84 < j):
img[i, j, 0] = 0
img[i, j, 1] = 0
img[i, j, 2] = 0
return img
def filter2(self, x1, y1, x2, y2, x3, y3, x4, y4, img):
m = (y1-y2)*1.0/(x1-x2)
b = y1 - m*x1
m1 = (y3-y4)*1.0/(x3-x4)
b1 = y3-m1*x3
for i in range(240):
for j in range(352):
if (m*j+b > i) or (m1*j+b1 > i):
img[i, j, 0] = 0
img[i, j, 1] = 0
img[i, j, 2] = 0
return img
def drawBox(self, img, x1, x2, y1, y2, colors):
R = colors[0]
G = colors[1]
B = colors[2]
img[y1:y2, x1, 0] = R
img[y1:y2, x1, 1] = G
img[y1:y2, x1, 2] = B
img[y1:y2, x2, 0] = R
img[y1:y2, x2, 1] = G
img[y1:y2, x2, 2] = B
img[y1, x1:x2, 0] = R
img[y1, x1:x2, 1] = G
img[y1, x1:x2, 2] = B
img[y2, x1:x2, 0] = R
img[y2, x1:x2, 1] = G
img[y2, x1:x2, 2] = B
return img
class getStream:
def __init__(self, url):
self.stream = urllib2.urlopen(url)
def getImage(self):
total = T.total #total number of bounding boxes
boundingBoxes = T.boundingBoxes #stored bounding box coordinates from last frame
prevImage = T.prevImage #stored image from previous frame
file = self.stream.read()
encoded_string = base64.b64encode(file)
arr = np.asarray(bytearray(file), dtype=np.uint8)
img = cv2.imdecode(arr, -1)
#filter out background
img2 = img.copy()
img2 = self.filter2(79, 104, 43, 240, 141, 108, 351, 195, img2) #hacked solution to black out the non-essential parts of the image
sensitivity = 0.4 #threshold to filter out detections
boxes, scores, classes, num = C.getClassification(img2) #runs the image through ssd-mobilenet
limit = 0
for i in range(scores[0].shape[0]):
limit = i
if scores[0][i] < sensitivity:
break
nBoxes = boxes[0][0:limit]
nScores = scores[0][0:limit]
nClasses = classes[0][0:limit]
currentBoxes = []
for box1 in nBoxes:
x1 = min(351,int(round(box1[1]*352)))
y1 = min(239,int(round(box1[0]*240)))
x2 = min(351,int(round(box1[3]*352)))
y2 = min(239,int(round(box1[2]*240)))
print((x1, x2, y1, y2))
currentBoxes.append((x1, x2, y1, y2))
#currentBoxes now holds the coordinates of the bounding boxes for this frame
#TODO: Run the current bounding boxes through VGG
#EXAMPLE
for coord in currentBoxes:
(x1, x2, y1, y2) = coord
boundingBox = img[y1:y2, x1:x2, :]
#Determine whether this bounding box is a car or not by passing through VGG
#remove bounding box if below score threshold
#instantiates a correlation object with the boxes from the previous frame and this frame
self.corr = correlationClass(boundingBoxes, currentBoxes)
#correlates the bounding boxes. method to be implemented in correlation.py.
#tracked and new each contain a list of indices for the bounding boxes in this frame,
#whether the car in the box is matched with a bounding box in the previous frame, or not.
tracked, new = self.corr.correlateBoxes(prevImage, img)
for i in range(len(currentBoxes)):
(x1, x2, y1, y2) = currentBoxes[i]
if i in new:
img = self.drawBox(img, x1, x2, y1, y2, [0, 255, 0])
else:
img = self.drawBox(img, x1, x2, y1, y2, [0, 0, 255])
#img = self.filter(img)
print("Image 1 bounding boxes: " + str(len(boundingBoxes)))
print("Image 2 bounding boxes: " + str(len(currentBoxes)))
print("Number of correlations: " + str(self.corr.numCorrelations))
T.boundingBoxes = currentBoxes
T.total += len(new)
T.prevImage = img
font = cv2.FONT_HERSHEY_SIMPLEX
cv2.putText(img,'Car Count: ' + str(T.total),(10,230), font, 0.5,(255,255,255),2,cv2.LINE_AA)
retval, b = cv2.imencode('.jpg', img)
retval2, b2 = cv2.imencode('.jpg', img2)
encoded_string = base64.b64encode(b)
encoded_string2 = base64.b64encode(b2)
D = {
"im1":encoded_string,
"im2":encoded_string2
}
json_data = json.dumps(D)
#encoded_string = base64.b64encode(arr)
return json_data
def filter(self, img, regions=None):
if regions is None:
img[:, 0:135, 0] = 0
img[:, 0:135, 1] = 0
img[:, 0:135, 2] = 0
img[170:,:,0] = 0
img[170:,:,1] = 0
img[170:,:,2] = 0
img[:67, 135:220, 0] = 0
img[:67, 135:220, 1] = 0
img[:67, 135:220, 2] = 0
for i in range(141):
for j in range(84,352):
if (i*148.0/78 + 84 < j):
img[i, j, 0] = 0
img[i, j, 1] = 0
img[i, j, 2] = 0
return img
def filter2(self, x1, y1, x2, y2, x3, y3, x4, y4, img):
m = (y1-y2)*1.0/(x1-x2)
b = y1 - m*x1
m1 = (y3-y4)*1.0/(x3-x4)
b1 = y3-m1*x3
for i in range(240):
for j in range(352):
if (m*j+b > i) or (m1*j+b1 > i):
img[i, j, 0] = 0
img[i, j, 1] = 0
img[i, j, 2] = 0
return img
def drawBox(self, img, x1, x2, y1, y2, colors):
R = colors[0]
G = colors[1]
B = colors[2]
img[y1:y2, x1, 0] = R
img[y1:y2, x1, 1] = G
img[y1:y2, x1, 2] = B
img[y1:y2, x2, 0] = R
img[y1:y2, x2, 1] = G
img[y1:y2, x2, 2] = B
img[y1, x1:x2, 0] = R
img[y1, x1:x2, 1] = G
img[y1, x1:x2, 2] = B
img[y2, x1:x2, 0] = R
img[y2, x1:x2, 1] = G
img[y2, x1:x2, 2] = B
return img
DOTstream = web.application(urls, locals())
| 27.938338 | 132 | 0.631609 | 1,753 | 10,421 | 3.731888 | 0.140331 | 0.022012 | 0.01284 | 0.01712 | 0.814277 | 0.773158 | 0.763987 | 0.748395 | 0.744726 | 0.735555 | 0 | 0.106367 | 0.213319 | 10,421 | 372 | 133 | 28.013441 | 0.691632 | 0.198349 | 0 | 0.746479 | 0 | 0 | 0.053537 | 0.011429 | 0 | 0 | 0 | 0.002688 | 0 | 1 | 0.056338 | false | 0 | 0.028169 | 0.003521 | 0.151408 | 0.042254 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
b66f28e2957ede637eabb7744cb16e3eebd4a3fd | 148 | py | Python | platform/radio/efr32_multiphy_configurator/pyradioconfig/parts/viper/calculators/calc_utilities.py | SiliconLabs/Gecko_SDK | 991121c706578c9a2135b6f75cc88856e8c64bdc | [
"Zlib"
] | 82 | 2016-06-29T17:24:43.000Z | 2021-04-16T06:49:17.000Z | platform/radio/efr32_multiphy_configurator/pyradioconfig/parts/viper/calculators/calc_utilities.py | SiliconLabs/Gecko_SDK | 991121c706578c9a2135b6f75cc88856e8c64bdc | [
"Zlib"
] | 2 | 2017-02-13T10:07:17.000Z | 2017-03-22T21:28:26.000Z | platform/radio/efr32_multiphy_configurator/pyradioconfig/parts/viper/calculators/calc_utilities.py | SiliconLabs/Gecko_SDK | 991121c706578c9a2135b6f75cc88856e8c64bdc | [
"Zlib"
] | 56 | 2016-08-02T10:50:50.000Z | 2021-07-19T08:57:34.000Z | from pyradioconfig.parts.bobcat.calculators.calc_utilities import Calc_Utilities_Bobcat
class calc_utilities_viper(Calc_Utilities_Bobcat):
pass | 37 | 87 | 0.878378 | 19 | 148 | 6.473684 | 0.578947 | 0.422764 | 0.308943 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.074324 | 148 | 4 | 88 | 37 | 0.89781 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.333333 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 8 |
b67348f48164e14083de856b7f67d997c17a4d3e | 123 | py | Python | grapevine/emails/backends/__init__.py | craiglabenz/django-grapevine | d71d510814ba965fe836b3e6a522945e74c01120 | [
"MIT"
] | 7 | 2015-04-02T20:47:55.000Z | 2022-01-20T13:49:31.000Z | grapevine/emails/backends/__init__.py | craiglabenz/django-grapevine | d71d510814ba965fe836b3e6a522945e74c01120 | [
"MIT"
] | 3 | 2020-02-12T00:31:44.000Z | 2021-06-10T20:07:23.000Z | grapevine/emails/backends/__init__.py | craiglabenz/django-grapevine | d71d510814ba965fe836b3e6a522945e74c01120 | [
"MIT"
] | 2 | 2015-05-21T16:23:52.000Z | 2020-09-04T21:31:39.000Z | from .mailgun import EmailBackend as MailGunEmailBackend
from .sendgrid_driver import EmailBackend as SendGridEmailBackend
| 41 | 65 | 0.886179 | 13 | 123 | 8.307692 | 0.692308 | 0.333333 | 0.37037 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.097561 | 123 | 2 | 66 | 61.5 | 0.972973 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
1e334b7879b28bdc07fba71272a7e1229ad7d9d7 | 145 | py | Python | idmatch/idcardocr/templates/kg/processing.py | javierherrera1996/idmatch | 8bb27dafaa12b7b0bdb745071e81e6b940b7553a | [
"MIT"
] | 55 | 2017-05-27T11:13:33.000Z | 2022-01-27T21:22:28.000Z | idmatch/idcardocr/templates/kg/processing.py | javierherrera1996/idmatch | 8bb27dafaa12b7b0bdb745071e81e6b940b7553a | [
"MIT"
] | 14 | 2017-05-27T11:10:08.000Z | 2022-01-13T00:39:22.000Z | idmatch/idcardocr/templates/kg/processing.py | javierherrera1996/idmatch | 8bb27dafaa12b7b0bdb745071e81e6b940b7553a | [
"MIT"
] | 18 | 2017-05-30T19:08:17.000Z | 2022-01-29T00:19:25.000Z | # coding: utf-8
from idmatch.idcardocr.core.processing.idcardocr import recognize_card
def processing(image):
    return recognize_card(image)
| 20.714286 | 70 | 0.8 | 19 | 145 | 6 | 0.736842 | 0.22807 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.007813 | 0.117241 | 145 | 6 | 71 | 24.166667 | 0.882813 | 0.089655 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0.333333 | 0.333333 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 7 |
1ea35ce554cea461d4b131edd8dd55f46058d9ee | 16,841 | py | Python | spark/pyspark/arctern_pyspark/render_func.py | jeffoverflow/arctern | 69e8a7549fffb88db06f188ac93e3aea4d432d8f | [
"Apache-2.0"
] | 68 | 2020-03-02T03:09:10.000Z | 2020-05-27T06:26:55.000Z | spark/pyspark/arctern_pyspark/render_func.py | jeffoverflow/arctern | 69e8a7549fffb88db06f188ac93e3aea4d432d8f | [
"Apache-2.0"
] | 382 | 2020-02-29T07:48:52.000Z | 2020-06-01T02:43:17.000Z | spark/pyspark/arctern_pyspark/render_func.py | jeffoverflow/arctern | 69e8a7549fffb88db06f188ac93e3aea4d432d8f | [
"Apache-2.0"
] | 47 | 2020-03-02T09:01:37.000Z | 2020-06-01T03:07:27.000Z | # Copyright (C) 2019-2020 Zilliz. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
__all__ = [
"pointmap",
"weighted_pointmap",
"heatmap",
"choroplethmap",
"icon_viz",
"fishnetmap",
]
def pointmap(vega, df):
if df.rdd.isEmpty():
return None
if len(df.schema.names) != 1:
return None
col_point = df.schema.names[0]
from pyspark.sql.functions import pandas_udf, PandasUDFType, col, lit
from ._wrapper_func import TransformAndProjection, Projection
bounding_box = vega.bounding_box()
top_left = 'POINT (' + str(bounding_box[0]) + ' ' + str(bounding_box[3]) + ')'
bottom_right = 'POINT (' + str(bounding_box[2]) + ' ' + str(bounding_box[1]) + ')'
height = vega.height()
width = vega.width()
coor = vega.coor()
if coor != 'EPSG:3857':
df = df.select(TransformAndProjection(col(col_point), lit(str(coor)), lit('EPSG:3857'), lit(bottom_right), lit(top_left), lit(int(height)), lit(int(width))).alias(col_point))
else:
df = df.select(Projection(col(col_point), lit(bottom_right), lit(top_left), lit(int(height)), lit(int(width))).alias(col_point))
@pandas_udf("string", PandasUDFType.GROUPED_AGG)
def pointmap_wkb(point, conf=vega):
from arctern import point_map_layer
return point_map_layer(conf, point, False)
df = df.rdd.coalesce(1, shuffle=True).toDF()
hex_data = df.agg(pointmap_wkb(df[col_point])).collect()[0][0]
return hex_data
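# --- Illustrative sketch (not part of this module) ---------------------------
# pointmap() above follows a recurring pattern: project the geometry column,
# coalesce to a single partition, then reduce the whole column to one rendered
# image with a GROUPED_AGG pandas UDF. The toy aggregation below shows the
# same mechanics with plain strings; `spark` and the column name are
# assumptions for illustration.
def _grouped_agg_pattern_demo(spark):
    from pyspark.sql.functions import pandas_udf, PandasUDFType

    @pandas_udf("string", PandasUDFType.GROUPED_AGG)
    def join_all(values):
        return ",".join(values)

    df = spark.createDataFrame([("a",), ("b",), ("c",)], ["val"])
    df = df.rdd.coalesce(1, shuffle=True).toDF()
    # one string for the whole column, e.g. 'a,b,c' (row order not guaranteed)
    return df.agg(join_all(df["val"])).collect()[0][0]
# ------------------------------------------------------------------------------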
# pylint: disable=too-many-statements
def weighted_pointmap(vega, df):
if df.rdd.isEmpty():
return None
if len(df.schema.names) == 1:
col_point = df.schema.names[0]
render_mode = 0
elif len(df.schema.names) == 2:
col_point = df.schema.names[0]
col_count = df.schema.names[1]
render_mode = 1
elif len(df.schema.names) == 3:
col_point = df.schema.names[0]
col_color = df.schema.names[1]
col_stroke = df.schema.names[2]
render_mode = 2
else:
return None
from pyspark.sql.functions import pandas_udf, PandasUDFType, col, lit
from pyspark.sql.types import (StructType, StructField, BinaryType, IntegerType)
from ._wrapper_func import TransformAndProjection, Projection
bounding_box = vega.bounding_box()
top_left = 'POINT (' + str(bounding_box[0]) + ' ' + str(bounding_box[3]) + ')'
bottom_right = 'POINT (' + str(bounding_box[2]) + ' ' + str(bounding_box[1]) + ')'
height = vega.height()
width = vega.width()
coor = vega.coor()
aggregation_type = vega.aggregation_type()
if coor == 'EPSG:3857':
if render_mode == 2:
df = df.select(Projection(col(col_point), lit(bottom_right), lit(top_left), lit(int(height)), lit(int(width))).alias(col_point), col(col_color), col(col_stroke))
agg_schema = StructType([StructField(col_point, BinaryType(), True),
StructField(col_color, IntegerType(), True),
StructField(col_stroke, IntegerType(), True)])
@pandas_udf(agg_schema, PandasUDFType.MAP_ITER)
def render_agg_UDF_3857_2(batch_iter):
for pdf in batch_iter:
dd = pdf.groupby([col_point])
ll = [col_color, col_stroke]
dd = dd[ll].agg([aggregation_type]).reset_index()
dd.columns = [col_point, col_color, col_stroke]
yield dd
@pandas_udf("string", PandasUDFType.GROUPED_AGG)
def weighted_pointmap_wkb_3857_2(point, c, s, conf=vega):
from arctern import weighted_point_map_layer
return weighted_point_map_layer(conf, point, False, color_weights=c, size_weights=s)
agg_df = df.mapInPandas(render_agg_UDF_3857_2)
agg_df = agg_df.rdd.coalesce(1, shuffle=True).toDF()
hex_data = agg_df.agg(weighted_pointmap_wkb_3857_2(agg_df[col_point], agg_df[col_color], agg_df[col_stroke])).collect()[0][0]
elif render_mode == 1:
df = df.select(Projection(col(col_point), lit(bottom_right), lit(top_left), lit(int(height)), lit(int(width))).alias(col_point), col(col_count))
agg_schema = StructType([StructField(col_point, BinaryType(), True),
StructField(col_count, IntegerType(), True)])
@pandas_udf(agg_schema, PandasUDFType.MAP_ITER)
def render_agg_UDF_3857_1(batch_iter):
for pdf in batch_iter:
dd = pdf.groupby([col_point])
dd = dd[col_count].agg([aggregation_type]).reset_index()
dd.columns = [col_point, col_count]
yield dd
@pandas_udf("string", PandasUDFType.GROUPED_AGG)
def weighted_pointmap_wkb_3857_1(point, c, conf=vega):
from arctern import weighted_point_map_layer
return weighted_point_map_layer(conf, point, False, color_weights=c)
agg_df = df.mapInPandas(render_agg_UDF_3857_1)
agg_df = agg_df.rdd.coalesce(1, shuffle=True).toDF()
hex_data = agg_df.agg(weighted_pointmap_wkb_3857_1(agg_df[col_point], agg_df[col_count])).collect()[0][0]
else:
df = df.select(Projection(col(col_point), lit(bottom_right), lit(top_left), lit(int(height)), lit(int(width))).alias(col_point))
@pandas_udf("string", PandasUDFType.GROUPED_AGG)
def weighted_pointmap_wkb(point, conf=vega):
from arctern import weighted_point_map_layer
return weighted_point_map_layer(conf, point, False)
df = df.rdd.coalesce(1, shuffle=True).toDF()
hex_data = df.agg(weighted_pointmap_wkb(df[col_point])).collect()[0][0]
return hex_data
if render_mode == 2:
df = df.select(TransformAndProjection(col(col_point), lit(str(coor)), lit('EPSG:3857'), lit(bottom_right), lit(top_left), lit(int(height)), lit(int(width))).alias(col_point), col(col_color), col(col_stroke))
agg_schema = StructType([StructField(col_point, BinaryType(), True),
StructField(col_color, IntegerType(), True),
StructField(col_stroke, IntegerType(), True)])
@pandas_udf(agg_schema, PandasUDFType.MAP_ITER)
def render_agg_UDF_2(batch_iter):
for pdf in batch_iter:
dd = pdf.groupby([col_point])
ll = [col_color, col_stroke]
dd = dd[ll].agg([aggregation_type]).reset_index()
dd.columns = [col_point, col_color, col_stroke]
yield dd
@pandas_udf("string", PandasUDFType.GROUPED_AGG)
def weighted_pointmap_wkb_2(point, c, s, conf=vega):
from arctern import weighted_point_map_layer
return weighted_point_map_layer(conf, point, False, color_weights=c, size_weights=s)
agg_df = df.mapInPandas(render_agg_UDF_2)
agg_df = agg_df.rdd.coalesce(1, shuffle=True).toDF()
hex_data = agg_df.agg(weighted_pointmap_wkb_2(agg_df[col_point], agg_df[col_color], agg_df[col_stroke])).collect()[0][0]
elif render_mode == 1:
df = df.select(TransformAndProjection(col(col_point), lit(str(coor)), lit('EPSG:3857'), lit(bottom_right), lit(top_left), lit(int(height)), lit(int(width))).alias(col_point), col(col_count))
agg_schema = StructType([StructField(col_point, BinaryType(), True),
StructField(col_count, IntegerType(), True)])
@pandas_udf(agg_schema, PandasUDFType.MAP_ITER)
def render_agg_UDF_1(batch_iter):
for pdf in batch_iter:
dd = pdf.groupby([col_point])
dd = dd[col_count].agg([aggregation_type]).reset_index()
dd.columns = [col_point, col_count]
yield dd
@pandas_udf("string", PandasUDFType.GROUPED_AGG)
def weighted_pointmap_wkb_1(point, c, conf=vega):
from arctern import weighted_point_map_layer
return weighted_point_map_layer(conf, point, False, color_weights=c)
agg_df = df.mapInPandas(render_agg_UDF_1)
agg_df = agg_df.rdd.coalesce(1, shuffle=True).toDF()
hex_data = agg_df.agg(weighted_pointmap_wkb_1(agg_df[col_point], agg_df[col_count])).collect()[0][0]
else:
df = df.select(TransformAndProjection(col(col_point), lit(str(coor)), lit('EPSG:3857'), lit(bottom_right), lit(top_left), lit(int(height)), lit(int(width))).alias(col_point))
@pandas_udf("string", PandasUDFType.GROUPED_AGG)
def weighted_pointmap_wkb_0(point, conf=vega):
from arctern import weighted_point_map_layer
return weighted_point_map_layer(conf, point, False)
df = df.rdd.coalesce(1, shuffle=True).toDF()
hex_data = df.agg(weighted_pointmap_wkb_0(df[col_point])).collect()[0][0]
return hex_data
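# Note on the branches above (explanatory comment added for readability; behaviour
# unchanged): render_mode == 2 renders points with both a color weight and a
# stroke/size weight, render_mode == 1 uses a single weight column for color only,
# and any other value renders unweighted points. Data already in 'EPSG:3857' only
# needs the pixel-space Projection; other coordinate systems go through
# TransformAndProjection to 'EPSG:3857' first.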
def heatmap(vega, df):
if df.rdd.isEmpty():
return None
if len(df.schema.names) != 2:
return None
col_point = df.schema.names[0]
col_count = df.schema.names[1]
from pyspark.sql.functions import pandas_udf, PandasUDFType, lit, col
from pyspark.sql.types import (StructType, StructField, BinaryType, IntegerType)
from ._wrapper_func import TransformAndProjection, Projection
bounding_box = vega.bounding_box()
top_left = 'POINT (' + str(bounding_box[0]) + ' ' + str(bounding_box[3]) + ')'
bottom_right = 'POINT (' + str(bounding_box[2]) + ' ' + str(bounding_box[1]) + ')'
height = vega.height()
width = vega.width()
coor = vega.coor()
aggregation_type = vega.aggregation_type()
if coor != 'EPSG:3857':
df = df.select(TransformAndProjection(col(col_point), lit(str(coor)), lit('EPSG:3857'), lit(bottom_right), lit(top_left), lit(int(height)), lit(int(width))).alias(col_point), col(col_count))
else:
df = df.select(Projection(col(col_point), lit(bottom_right), lit(top_left), lit(int(height)), lit(int(width))).alias(col_point), col(col_count))
agg_schema = StructType([StructField(col_point, BinaryType(), True),
StructField(col_count, IntegerType(), True)])
@pandas_udf(agg_schema, PandasUDFType.MAP_ITER)
def render_agg_UDF(batch_iter):
for pdf in batch_iter:
dd = pdf.groupby([col_point])
dd = dd[col_count].agg([aggregation_type]).reset_index()
dd.columns = [col_point, col_count]
yield dd
@pandas_udf("string", PandasUDFType.GROUPED_AGG)
def heatmap_wkb(point, w, conf=vega):
from arctern import heat_map_layer
return heat_map_layer(conf, point, w, False)
agg_df = df.mapInPandas(render_agg_UDF)
agg_df = agg_df.rdd.coalesce(1, shuffle=True).toDF()
hex_data = agg_df.agg(heatmap_wkb(agg_df[col_point], agg_df[col_count])).collect()[0][0]
return hex_data
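# Usage sketch (illustrative; the vega object and the decoding step are assumptions,
# not part of this file): `vega` must expose bounding_box(), height(), width(),
# coor() and aggregation_type(), and `df` is a two-column Spark DataFrame of
# (point geometry, weight).
#
#   hex_data = heatmap(vega, df)               # None if df is empty or malformed
#   if hex_data is not None:
#       image_bytes = bytes.fromhex(hex_data)  # assuming hex-encoded image data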
def choroplethmap(vega, df):
if df.rdd.isEmpty():
return None
if len(df.schema.names) != 2:
return None
col_polygon = df.schema.names[0]
col_count = df.schema.names[1]
from pyspark.sql.functions import pandas_udf, PandasUDFType, col, lit
from pyspark.sql.types import (StructType, StructField, BinaryType, IntegerType)
from ._wrapper_func import TransformAndProjection, Projection
bounding_box = vega.bounding_box()
top_left = 'POINT (' + str(bounding_box[0]) + ' ' + str(bounding_box[3]) + ')'
bottom_right = 'POINT (' + str(bounding_box[2]) + ' ' + str(bounding_box[1]) + ')'
height = vega.height()
width = vega.width()
coor = vega.coor()
aggregation_type = vega.aggregation_type()
if coor != 'EPSG:3857':
df = df.select(TransformAndProjection(col(col_polygon), lit(str(coor)), lit('EPSG:3857'), lit(bottom_right), lit(top_left), lit(int(height)), lit(int(width))).alias(col_polygon), col(col_count))
else:
df = df.select(Projection(col(col_polygon), lit(bottom_right), lit(top_left), lit(int(height)), lit(int(width))).alias(col_polygon), col(col_count))
agg_schema = StructType([StructField(col_polygon, BinaryType(), True),
StructField(col_count, IntegerType(), True)])
@pandas_udf(agg_schema, PandasUDFType.MAP_ITER)
def render_agg_UDF(batch_iter):
for pdf in batch_iter:
dd = pdf.groupby([col_polygon])
dd = dd[col_count].agg([aggregation_type]).reset_index()
dd.columns = [col_polygon, col_count]
yield dd
@pandas_udf("string", PandasUDFType.GROUPED_AGG)
def choroplethmap_wkb(wkb, w, conf=vega):
from arctern import choropleth_map_layer
return choropleth_map_layer(conf, wkb, w, False)
agg_df = df.mapInPandas(render_agg_UDF)
agg_df = agg_df.rdd.coalesce(1, shuffle=True).toDF()
hex_data = agg_df.agg(choroplethmap_wkb(agg_df[col_polygon], agg_df[col_count])).collect()[0][0]
return hex_data
def icon_viz(vega, df):
if df.rdd.isEmpty():
return None
if len(df.schema.names) != 1:
return None
col_point = df.schema.names[0]
from pyspark.sql.functions import pandas_udf, PandasUDFType, col, lit
from ._wrapper_func import TransformAndProjection, Projection
bounding_box = vega.bounding_box()
top_left = 'POINT (' + str(bounding_box[0]) + ' ' + str(bounding_box[3]) + ')'
bottom_right = 'POINT (' + str(bounding_box[2]) + ' ' + str(bounding_box[1]) + ')'
height = vega.height()
width = vega.width()
coor = vega.coor()
if coor != 'EPSG:3857':
df = df.select(TransformAndProjection(col(col_point), lit(str(coor)), lit('EPSG:3857'), lit(bottom_right), lit(top_left), lit(int(height)), lit(int(width))).alias(col_point))
else:
df = df.select(Projection(col(col_point), lit(bottom_right), lit(top_left), lit(int(height)), lit(int(width))).alias(col_point))
@pandas_udf("string", PandasUDFType.GROUPED_AGG)
def iconviz(point, conf=vega):
from arctern import icon_viz_layer
return icon_viz_layer(conf, point, False)
df = df.rdd.coalesce(1, shuffle=True).toDF()
hex_data = df.agg(iconviz(df[col_point])).collect()[0][0]
return hex_data
def fishnetmap(vega, df):
if df.rdd.isEmpty():
return None
if len(df.schema.names) != 2:
return None
col_point = df.schema.names[0]
col_count = df.schema.names[1]
from pyspark.sql.functions import pandas_udf, PandasUDFType, lit, col
from pyspark.sql.types import (StructType, StructField, BinaryType, IntegerType)
from ._wrapper_func import TransformAndProjection, Projection
bounding_box = vega.bounding_box()
top_left = 'POINT (' + str(bounding_box[0]) + ' ' + str(bounding_box[3]) + ')'
bottom_right = 'POINT (' + str(bounding_box[2]) + ' ' + str(bounding_box[1]) + ')'
height = vega.height()
width = vega.width()
coor = vega.coor()
aggregation_type = vega.aggregation_type()
if coor != 'EPSG:3857':
df = df.select(TransformAndProjection(col(col_point), lit(str(coor)), lit('EPSG:3857'), lit(bottom_right), lit(top_left), lit(int(height)), lit(int(width))).alias(col_point), col(col_count))
else:
df = df.select(Projection(col(col_point), lit(bottom_right), lit(top_left), lit(int(height)), lit(int(width))).alias(col_point), col(col_count))
agg_schema = StructType([StructField(col_point, BinaryType(), True),
StructField(col_count, IntegerType(), True)])
@pandas_udf(agg_schema, PandasUDFType.MAP_ITER)
def render_agg_UDF(batch_iter):
for pdf in batch_iter:
dd = pdf.groupby([col_point])
dd = dd[col_count].agg([aggregation_type]).reset_index()
dd.columns = [col_point, col_count]
yield dd
@pandas_udf("string", PandasUDFType.GROUPED_AGG)
def fishnetmap_wkb(point, w, conf=vega):
from arctern import fishnet_map_layer
return fishnet_map_layer(conf, point, w, False)
agg_df = df.mapInPandas(render_agg_UDF)
agg_df = agg_df.rdd.coalesce(1, shuffle=True).toDF()
hex_data = agg_df.agg(fishnetmap_wkb(agg_df[col_point], agg_df[col_count])).collect()[0][0]
return hex_data
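# Summary comment (added for readability; behaviour unchanged): the drawing helpers
# above share the same pipeline: (1) project, and if needed transform, geometries
# into pixel space for the requested bounding box, (2) pre-aggregate weights per
# geometry with a MAP_ITER pandas UDF, (3) coalesce to a single partition so one
# worker sees the whole layer, and (4) render via the corresponding arctern *_layer
# function inside a GROUPED_AGG pandas UDF. icon_viz and the unweighted point
# branch skip the aggregation step.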
| 43.629534 | 215 | 0.648121 | 2,275 | 16,841 | 4.562637 | 0.075604 | 0.048555 | 0.03237 | 0.026204 | 0.907514 | 0.901927 | 0.894412 | 0.889499 | 0.875048 | 0.875048 | 0 | 0.015562 | 0.221602 | 16,841 | 385 | 216 | 43.742857 | 0.776261 | 0.036221 | 0 | 0.758503 | 0 | 0 | 0.022385 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.081633 | false | 0 | 0.091837 | 0 | 0.27551 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
1ece7eb34877d611a25014f9b3fad68f3ddd1faa | 1,203 | py | Python | stk/exceptions.py | jolsten/STK | d52b68a0f4df91f1ecae10d7d05b99ab6444afe3 | [
"MIT"
] | 1 | 2021-03-23T17:28:49.000Z | 2021-03-23T17:28:49.000Z | stk/exceptions.py | jolsten/STK | d52b68a0f4df91f1ecae10d7d05b99ab6444afe3 | [
"MIT"
] | null | null | null | stk/exceptions.py | jolsten/STK | d52b68a0f4df91f1ecae10d7d05b99ab6444afe3 | [
"MIT"
] | 1 | 2021-03-23T17:28:53.000Z | 2021-03-23T17:28:53.000Z | # -*- coding: utf-8 -*-
"""
Created on Tue Aug 4 20:13:16 2020
@author: jolsten
"""
import logging
class STKLicenseError(RuntimeError):
def __init__(self, *args):
if args:
self.message = args[0]
else:
self.message = None
def __str__(self):
if self.message:
return f'{type(self).__name__}: {self.message}'
else:
return f'{type(self).__name__} has been raised'
class STKConnectError(RuntimeError):
def __init__(self, *args):
if args:
self.message = args[0]
else:
self.message = None
def __str__(self):
if self.message:
return f'{type(self).__name__}: {self.message}'
else:
return f'{type(self).__name__} has been raised'
class STKNackError(IOError):
def __init__(self, *args):
if args:
self.message = args[0]
else:
self.message = None
def __str__(self):
if self.message:
return f'{type(self).__name__}: {self.message}'
else:
return f'{type(self).__name__} has been raised'
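# Usage sketch (illustrative; not part of the original module): all three exception
# types accept an optional message as their first positional argument.
#
#   try:
#       raise STKNackError("NACK received while loading scenario")
#   except STKNackError as exc:
#       print(exc)   # -> "STKNackError: NACK received while loading scenario"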
| 24.06 | 60 | 0.526185 | 133 | 1,203 | 4.398496 | 0.293233 | 0.225641 | 0.112821 | 0.153846 | 0.801709 | 0.801709 | 0.801709 | 0.801709 | 0.801709 | 0.801709 | 0 | 0.01943 | 0.358271 | 1,203 | 49 | 61 | 24.55102 | 0.738342 | 0.063175 | 0 | 0.882353 | 0 | 0 | 0.207477 | 0.120561 | 0 | 0 | 0 | 0 | 0 | 1 | 0.176471 | false | 0 | 0.029412 | 0 | 0.470588 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
94c1f42f1cab8a245660ff688ae3dba51b00730f | 11,197 | py | Python | tests/test_property.py | viniciuschiele/configd | f9b405cd2254f79152a13e57e2be907550ef83ed | [
"MIT"
] | 3 | 2017-03-17T15:44:26.000Z | 2021-08-31T02:45:52.000Z | tests/test_property.py | viniciuschiele/configd | f9b405cd2254f79152a13e57e2be907550ef83ed | [
"MIT"
] | 1 | 2017-07-28T23:20:15.000Z | 2017-07-29T11:13:49.000Z | tests/test_property.py | viniciuschiele/configd | f9b405cd2254f79152a13e57e2be907550ef83ed | [
"MIT"
] | 2 | 2021-06-26T20:57:24.000Z | 2021-11-21T19:29:39.000Z | from __future__ import absolute_import
from collections import MutableMapping
from central.config import MemoryConfig
from central.property import PropertyManager, PropertyContainer, Property
from central.compat import string_types
from central.utils import EventHandler, Version
from threading import Event
from unittest import TestCase
class TestPropertyManager(TestCase):
def test_init_config_with_none_value(self):
with self.assertRaises(TypeError):
PropertyManager(config=None)
def test_init_config_with_str_value(self):
with self.assertRaises(TypeError):
PropertyManager(config='str')
def test_get_property_with_name_as_none(self):
properties = PropertyManager(MemoryConfig())
with self.assertRaises(TypeError):
properties.get_property(None)
def test_get_property_with_name_as_int(self):
properties = PropertyManager(MemoryConfig())
with self.assertRaises(TypeError):
properties.get_property(123)
def test_get_property_with_name_as_str(self):
properties = PropertyManager(MemoryConfig())
self.assertEqual(PropertyContainer, type(properties.get_property('key')))
def test_get_property_for_same_key(self):
properties = PropertyManager(MemoryConfig())
self.assertEqual(properties.get_property('key'), properties.get_property('key'))
def test_invalidate_properties(self):
config = MemoryConfig()
properties = PropertyManager(config)
self.assertEqual(0, properties._version.number)
config.set('key', 'value')
self.assertEqual(1, properties._version.number)
class TestPropertyContainer(TestCase):
def test_init_name_with_none_value(self):
with self.assertRaises(TypeError):
PropertyContainer(name=None, config=MemoryConfig(), version=Version())
def test_init_name_with_int_value(self):
with self.assertRaises(TypeError):
PropertyContainer(name=123, config=MemoryConfig(), version=Version())
def test_init_config_with_none_value(self):
with self.assertRaises(TypeError):
PropertyContainer(name='key', config=None, version=Version())
def test_init_config_with_int_value(self):
with self.assertRaises(TypeError):
PropertyContainer(name='key', config=123, version=Version())
def test_init_version_with_none_value(self):
with self.assertRaises(TypeError):
PropertyContainer(name='key', config=MemoryConfig(), version=None)
def test_init_version_with_int_value(self):
with self.assertRaises(TypeError):
PropertyContainer(name='key', config=MemoryConfig(), version=123)
def test_as_bool_with_existent_key(self):
config = MemoryConfig()
config.set('key', '0')
container = PropertyContainer('key', config, version=Version())
self.assertIsInstance(container.as_bool(True).get(), bool)
self.assertEqual(False, container.as_bool(True).get())
def test_as_bool_with_nonexistent_key(self):
config = MemoryConfig()
container = PropertyContainer('key', config, version=Version())
self.assertEqual(True, container.as_bool(True).get())
def test_as_float_with_existent_key(self):
config = MemoryConfig()
config.set('key', '2')
container = PropertyContainer('key', config, version=Version())
self.assertIsInstance(container.as_float(1.0).get(), float)
self.assertEqual(2, container.as_float(1.0).get())
def test_as_float_with_nonexistent_key(self):
config = MemoryConfig()
container = PropertyContainer('key', config, version=Version())
self.assertIsInstance(container.as_float(1.0).get(), float)
self.assertEqual(1.0, container.as_float(1.0).get())
def test_as_int_with_existent_key(self):
config = MemoryConfig()
config.set('key', '2')
container = PropertyContainer('key', config, version=Version())
self.assertIsInstance(container.as_int(1).get(), int)
self.assertEqual(2, container.as_int(1).get())
def test_as_int_with_nonexistent_key(self):
config = MemoryConfig()
container = PropertyContainer('key', config, version=Version())
self.assertIsInstance(container.as_int(1).get(), int)
self.assertEqual(1, container.as_int(1).get())
def test_as_str_with_existent_key(self):
config = MemoryConfig()
config.set('key', 'value')
container = PropertyContainer('key', config, version=Version())
self.assertIsInstance(container.as_str('default value').get(), string_types)
self.assertEqual('value', container.as_str('default value').get())
def test_as_str_with_nonexistent_key(self):
config = MemoryConfig()
container = PropertyContainer('key', config, version=Version())
self.assertIsInstance(container.as_str('default value').get(), string_types)
self.assertEqual('default value', container.as_str('default value').get())
def test_as_dict_with_existent_key(self):
config = MemoryConfig()
config.set('key', {'key': 'value'})
container = PropertyContainer('key', config, version=Version())
self.assertIsInstance(container.as_dict({}).get(), MutableMapping)
self.assertEqual({'key': 'value'}, container.as_dict({}).get())
def test_as_dict_with_nonexistent_key(self):
config = MemoryConfig()
container = PropertyContainer('key', config, version=Version())
self.assertIsInstance(container.as_dict({}).get(), MutableMapping)
self.assertEqual({}, container.as_dict({}).get())
def test_as_list_with_existent_key(self):
config = MemoryConfig()
config.set('key', ['value'])
container = PropertyContainer('key', config, version=Version())
self.assertIsInstance(container.as_list([]).get(), list)
self.assertEqual(['value'], container.as_list([]).get())
def test_as_list_with_nonexistent_key(self):
config = MemoryConfig()
container = PropertyContainer('key', config, version=Version())
self.assertIsInstance(container.as_list([]).get(), list)
self.assertEqual([], container.as_list([]).get())
def test_as_type_with_existent_key(self):
config = MemoryConfig()
config.set('key', '2')
container = PropertyContainer('key', config, version=Version())
self.assertIsInstance(container.as_type(int, 1).get(), int)
self.assertEqual(2, container.as_type(int, 1).get())
def test_as_type_with_nonexistent_key(self):
config = MemoryConfig()
container = PropertyContainer('key', config, version=Version())
self.assertIsInstance(container.as_type(int, 1).get(), int)
self.assertEqual(1, container.as_type(int, 1).get())
def test_as_type_for_same_existent_key(self):
config = MemoryConfig()
config.set('key', '2')
container = PropertyContainer('key', config, version=Version())
self.assertEqual(container.as_type(int, 1), container.as_type(int, 1))
class TestProperty(TestCase):
def test_init_name_with_none_value(self):
with self.assertRaises(TypeError):
Property(name=None, default=1, type=int, config=MemoryConfig(), version=Version())
def test_init_name_with_int_value(self):
with self.assertRaises(TypeError):
Property(name=123, default=1, type=int, config=MemoryConfig(), version=Version())
def test_init_name_with_str_value(self):
prop = Property(name='name', default=1, type=int, config=MemoryConfig(), version=Version())
self.assertEqual('name', prop.name)
def test_init_cast_with_none_value(self):
with self.assertRaises(ValueError):
Property(name='key', default=1, type=None, config=MemoryConfig(), version=Version())
def test_init_cast_with_int_value(self):
prop = Property(name='key', default=1, type=int, config=MemoryConfig(), version=Version())
self.assertEqual(int, prop.type)
def test_init_config_with_none_value(self):
with self.assertRaises(TypeError):
Property(name='key', default=1, type=int, config=None, version=Version())
def test_init_config_with_int_value(self):
with self.assertRaises(TypeError):
Property(name='key', default=1, type=int, config=123, version=Version())
def test_init_version_with_none_value(self):
with self.assertRaises(TypeError):
Property(name='key', default=1, type=int, config=MemoryConfig(), version=None)
def test_init_version_with_int_value(self):
with self.assertRaises(TypeError):
Property(name='key', default=1, type=int, config=MemoryConfig(), version=123)
def test_get_updated_with_default_value(self):
prop = Property(name='key', default=1, type=int, config=MemoryConfig(), version=Version())
self.assertEqual(EventHandler, type(prop.updated))
def test_get_for_existent_key(self):
config = MemoryConfig()
config.set('key', '2')
prop = Property('key', 1, int, config, Version())
self.assertEqual(2, prop.get())
def test_get_for_nonexistent_key(self):
prop = Property('key', 1, int, MemoryConfig(), Version())
self.assertEqual(1, prop.get())
def test_get_for_nonexistent_key_with_callable_default_value(self):
prop = Property('key', lambda: 1, int, MemoryConfig(), Version())
self.assertEqual(1, prop.get())
def test_get_for_invalidated_key(self):
config = MemoryConfig()
config.set('key', '2')
version = Version()
prop = Property('key', 1, int, config, version)
self.assertEqual(2, prop.get())
config.set('key', '3')
version.number += 1
self.assertEqual(3, prop.get())
def test_on_updated_with_func_value(self):
prop = Property(name='key', default=1, type=int, config=MemoryConfig(), version=Version())
def dummy():
pass
prop.on_updated(dummy)
self.assertEqual(1, len(prop.updated))
def test_add_updated_with_func_value(self):
config = MemoryConfig()
version = Version()
ev = Event()
prop = Property(name='key', default=1, type=int, config=config, version=version)
def dummy(v):
ev.set()
prop.updated.add(dummy)
version.number += 1
self.assertTrue(ev.is_set())
def test_remove_updated_with_func_value(self):
config = MemoryConfig()
version = Version()
ev = Event()
prop = Property(name='key', default=1, type=int, config=config, version=version)
def dummy(v):
ev.set()
prop.updated.add(dummy)
prop.updated.remove(dummy)
version.number += 1
self.assertFalse(ev.is_set())
def test_str(self):
config = MemoryConfig()
config.set('key', '2')
prop = Property(name='key', default=1, type=int, config=config, version=Version())
self.assertEqual('2', str(prop))
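# Illustrative sketch of the API exercised above (inferred from these tests and
# hedged accordingly; not taken from the library's documentation):
#
#   config = MemoryConfig()
#   properties = PropertyManager(config)
#   prop = properties.get_property('timeout').as_int(30)   # default 30
#   prop.get()                   # -> 30 (key not set yet)
#   config.set('timeout', '60')
#   prop.get()                   # -> 60 (the manager bumps its version on change)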
| 36.953795 | 99 | 0.671519 | 1,304 | 11,197 | 5.555982 | 0.064417 | 0.044444 | 0.063768 | 0.058661 | 0.859627 | 0.823326 | 0.784265 | 0.74824 | 0.703244 | 0.666253 | 0 | 0.009504 | 0.201215 | 11,197 | 302 | 100 | 37.076159 | 0.800537 | 0 | 0 | 0.518349 | 0 | 0 | 0.024739 | 0 | 0 | 0 | 0 | 0 | 0.279817 | 1 | 0.224771 | false | 0.004587 | 0.036697 | 0 | 0.275229 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
94cc71107b3a652db2d2c84fcbf4fd523f806014 | 24,360 | py | Python | tests/match.py | abw333/tennis | eaa8cef28beb61d182f17f377c6bbf0a45607c06 | [
"MIT"
] | null | null | null | tests/match.py | abw333/tennis | eaa8cef28beb61d182f17f377c6bbf0a45607c06 | [
"MIT"
] | null | null | null | tests/match.py | abw333/tennis | eaa8cef28beb61d182f17f377c6bbf0a45607c06 | [
"MIT"
] | null | null | null | import re
import unittest
import tennis
class Match(unittest.TestCase):
def test_init_no_args(self):
match = tennis.Match()
self.assertEqual(
match.sets,
[tennis.Set(
games=[tennis.Game(server_points=0, returner_points=0, deciding_point=False)],
target_games=6,
deciding_point=False,
tiebreak_games=6,
tiebreak_points=7
)]
)
self.assertEqual(match.target_sets, 2)
self.assertEqual(match.target_games, 6)
self.assertFalse(match.deciding_point)
self.assertEqual(match.tiebreak_games, 6)
self.assertEqual(match.tiebreak_points, 7)
self.assertEqual(match.final_set_target_games, 6)
self.assertFalse(match.final_set_deciding_point)
self.assertEqual(match.final_set_tiebreak_games, 6)
self.assertEqual(match.final_set_tiebreak_points, 7)
def test_init_args(self):
with self.assertRaisesRegex(
TypeError,
'^{}$'.format(re.escape('__init__() takes 1 positional argument but 2 were given'))
):
tennis.Match([])
def test_init_kwargs(self):
match = tennis.Match(
final_set_tiebreak_points=1,
final_set_tiebreak_games=2,
final_set_deciding_point=True,
final_set_target_games=3,
tiebreak_points=4,
tiebreak_games=5,
deciding_point=True,
target_games=6,
target_sets=7,
sets=[tennis.Set()]
)
self.assertEqual(match.sets, [tennis.Set()])
self.assertEqual(match.target_sets, 7)
self.assertEqual(match.target_games, 6)
self.assertTrue(match.deciding_point)
self.assertEqual(match.tiebreak_games, 5)
self.assertEqual(match.tiebreak_points, 4)
self.assertEqual(match.final_set_target_games, 3)
self.assertTrue(match.final_set_deciding_point)
self.assertEqual(match.final_set_tiebreak_games, 2)
self.assertEqual(match.final_set_tiebreak_points, 1)
def test_init_inconsistent_tiebreak_args(self):
with self.assertRaisesRegex(
RuntimeError,
'^{}$'.format(re.escape('tiebreak_games and tiebreak_points must both be None or non-None.'))
):
tennis.Match(tiebreak_games=1, tiebreak_points=None)
with self.assertRaisesRegex(
RuntimeError,
'^{}$'.format(re.escape('tiebreak_games and tiebreak_points must both be None or non-None.'))
):
tennis.Match(tiebreak_games=None, tiebreak_points=1)
def test_init_inconsistent_final_set_tiebreak_args(self):
with self.assertRaisesRegex(
RuntimeError,
'^{}$'.format(re.escape(
'final_set_tiebreak_games and final_set_tiebreak_points must both be None or non-None.'
))
):
tennis.Match(final_set_tiebreak_games=1, final_set_tiebreak_points=None)
with self.assertRaisesRegex(
RuntimeError,
'^{}$'.format(re.escape(
'final_set_tiebreak_games and final_set_tiebreak_points must both be None or non-None.'
))
):
tennis.Match(final_set_tiebreak_games=None, final_set_tiebreak_points=1)
def test_init_negative_points(self):
with self.assertRaisesRegex(
RuntimeError,
'^{}$'.format(re.escape('Point scores must be non-negative.'))
):
tennis.Match(target_games=-1)
with self.assertRaisesRegex(
RuntimeError,
'^{}$'.format(re.escape('Point scores must be non-negative.'))
):
tennis.Match(tiebreak_games=-1, tiebreak_points=1)
with self.assertRaisesRegex(
RuntimeError,
'^{}$'.format(re.escape('Point scores must be non-negative.'))
):
tennis.Match(tiebreak_games=1, tiebreak_points=-1)
def test_init_final_set_negative_points(self):
with self.assertRaisesRegex(
RuntimeError,
'^{}$'.format(re.escape('Point scores must be non-negative.'))
):
tennis.Match(final_set_target_games=-1)
with self.assertRaisesRegex(
RuntimeError,
'^{}$'.format(re.escape('Point scores must be non-negative.'))
):
tennis.Match(final_set_tiebreak_games=-1, final_set_tiebreak_points=1)
with self.assertRaisesRegex(
RuntimeError,
'^{}$'.format(re.escape('Point scores must be non-negative.'))
):
tennis.Match(final_set_tiebreak_games=1, final_set_tiebreak_points=-1)
def test_init_zero_target_sets(self):
with self.assertRaisesRegex(
RuntimeError,
'^{}$'.format(re.escape('target_sets must be at least 1.'))
):
tennis.Match(target_sets=0)
def test_init_first_set(self):
match = tennis.Match(
sets=None,
target_sets=7,
target_games=6,
deciding_point=True,
tiebreak_games=5,
tiebreak_points=4,
final_set_target_games=3,
final_set_deciding_point=False,
final_set_tiebreak_games=2,
final_set_tiebreak_points=1
)
self.assertEqual(
match.sets,
[tennis.Set(
games=[tennis.Game(server_points=0, returner_points=0, deciding_point=True)],
target_games=6,
deciding_point=True,
tiebreak_games=5,
tiebreak_points=4
)]
)
self.assertEqual(match.target_sets, 7)
self.assertEqual(match.target_games, 6)
self.assertTrue(match.deciding_point)
self.assertEqual(match.tiebreak_games, 5)
self.assertEqual(match.tiebreak_points, 4)
self.assertEqual(match.final_set_target_games, 3)
self.assertFalse(match.final_set_deciding_point)
self.assertEqual(match.final_set_tiebreak_games, 2)
self.assertEqual(match.final_set_tiebreak_points, 1)
def test_init_final_set(self):
match = tennis.Match(
sets=None,
target_sets=1,
target_games=2,
deciding_point=False,
tiebreak_games=3,
tiebreak_points=4,
final_set_target_games=5,
final_set_deciding_point=True,
final_set_tiebreak_games=6,
final_set_tiebreak_points=7
)
self.assertEqual(
match.sets,
[tennis.Set(
games=[tennis.Game(server_points=0, returner_points=0, deciding_point=True)],
target_games=5,
deciding_point=True,
tiebreak_games=6,
tiebreak_points=7
)]
)
self.assertEqual(match.target_sets, 1)
self.assertEqual(match.target_games, 2)
self.assertFalse(match.deciding_point)
self.assertEqual(match.tiebreak_games, 3)
self.assertEqual(match.tiebreak_points, 4)
self.assertEqual(match.final_set_target_games, 5)
self.assertTrue(match.final_set_deciding_point)
self.assertEqual(match.final_set_tiebreak_games, 6)
self.assertEqual(match.final_set_tiebreak_points, 7)
def test_first_server_served_first(self):
match = tennis.Match(sets=[
tennis.Set(games=[tennis.Game()]),
tennis.Set(games=[tennis.Game()] * 2),
tennis.Set(games=[tennis.Game()] * 3),
tennis.Set(games=[tennis.Game()] * 4)
])
self.assertEqual(list(match.first_server_served_first), [True, False, False, True])
def test_sets(self):
sets = []
match = tennis.Match(sets=sets)
self.assertEqual(match.first_server_sets(), 0)
self.assertEqual(match.first_returner_sets(), 0)
sets.append(tennis.Set(
games=[tennis.Game(server_points=4), tennis.Game(returner_points=4)],
target_games=2
))
match = tennis.Match(sets=sets)
self.assertEqual(match.first_server_sets(), 1)
self.assertEqual(match.first_returner_sets(), 0)
sets.append(tennis.Set(
games=[tennis.Tiebreak(first_returner_points=7)],
tiebreak_games=0
))
match = tennis.Match(sets=sets)
self.assertEqual(match.first_server_sets(), 1)
self.assertEqual(match.first_returner_sets(), 1)
sets.append(tennis.Set(
games=[tennis.Game(server_points=4), tennis.Game(returner_points=4)],
target_games=2
))
match = tennis.Match(sets=sets)
self.assertEqual(match.first_server_sets(), 1)
self.assertEqual(match.first_returner_sets(), 2)
sets.append(tennis.Set(
games=[tennis.Tiebreak(first_returner_points=7)],
tiebreak_games=0
))
match = tennis.Match(sets=sets)
self.assertEqual(match.first_server_sets(), 2)
self.assertEqual(match.first_returner_sets(), 2)
sets.append(tennis.Set())
match = tennis.Match(sets=sets)
self.assertEqual(match.first_server_sets(), 2)
self.assertEqual(match.first_returner_sets(), 2)
sets.append(tennis.Set())
match = tennis.Match(sets=sets)
self.assertEqual(match.first_server_sets(), 2)
self.assertEqual(match.first_returner_sets(), 2)
def test_winner(self):
self.assertIsNone(tennis.Match(sets=[]).winner)
self.assertIsNone(tennis.Match(sets=[
tennis.Set(games=[tennis.Tiebreak(first_server_points=7)], tiebreak_games=0)
]).winner)
self.assertIsNone(tennis.Match(sets=[
tennis.Set(games=[tennis.Tiebreak(first_server_points=7)], tiebreak_games=0),
tennis.Set(games=[tennis.Tiebreak(first_server_points=7)], tiebreak_games=0)
]).winner)
self.assertTrue(tennis.Match(sets=[
tennis.Set(games=[tennis.Tiebreak(first_server_points=7)], tiebreak_games=0),
tennis.Set(games=[tennis.Tiebreak(first_server_points=7)], tiebreak_games=0),
tennis.Set(games=[tennis.Tiebreak(first_server_points=7)], tiebreak_games=0)
]).winner)
self.assertFalse(tennis.Match(sets=[
tennis.Set(games=[tennis.Tiebreak(first_server_points=7)], tiebreak_games=0),
tennis.Set(games=[tennis.Tiebreak(first_server_points=7)], tiebreak_games=0),
tennis.Set(games=[tennis.Tiebreak(first_returner_points=7)], tiebreak_games=0)
]).winner)
self.assertTrue(tennis.Match(
sets=[tennis.Set(games=[tennis.Tiebreak(first_server_points=7)], tiebreak_games=0)],
target_sets=1
).winner)
self.assertFalse(tennis.Match(
sets=[tennis.Set(games=[tennis.Tiebreak(first_returner_points=7)], tiebreak_games=0)],
target_sets=1
).winner)
def test_first_server_to_serve(self):
with self.assertRaisesRegex(
RuntimeError,
'^{}$'.format(re.escape('No server is to serve the next point because the match is over.'))
):
tennis.Match(
sets=[tennis.Set(games=[tennis.Tiebreak(first_server_points=7)], tiebreak_games=0)],
target_sets=1,
tiebreak_games=0
).first_server_to_serve()
self.assertTrue(
tennis.Match(
sets=[tennis.Set(games=[tennis.Tiebreak(first_server_points=0)], tiebreak_games=0)],
tiebreak_games=0
).first_server_to_serve()
)
self.assertFalse(
tennis.Match(
sets=[tennis.Set(games=[tennis.Tiebreak(first_server_points=1)], tiebreak_games=0)],
tiebreak_games=0
).first_server_to_serve()
)
self.assertFalse(
tennis.Match(
sets=[
tennis.Set(games=[tennis.Tiebreak(first_server_points=7)], tiebreak_games=0),
tennis.Set(games=[tennis.Tiebreak(first_server_points=0)], tiebreak_games=0)
],
tiebreak_games=0
).first_server_to_serve()
)
self.assertTrue(
tennis.Match(
sets=[
tennis.Set(games=[tennis.Tiebreak(first_server_points=7)], tiebreak_games=0),
tennis.Set(games=[tennis.Tiebreak(first_server_points=1)], tiebreak_games=0)
],
tiebreak_games=0
).first_server_to_serve()
)
def test_point(self):
with self.assertRaisesRegex(
RuntimeError,
'^{}$'.format(re.escape('Cannot advance this match\'s score because the match is over.'))
):
tennis.Match(
sets=[tennis.Set(games=[tennis.Tiebreak(first_server_points=7)], tiebreak_games=0)],
target_sets=1,
tiebreak_games=0
).point(first_server=True)
match = tennis.Match(tiebreak_games=0, tiebreak_points=2)
self.assertIsNone(match.point(first_server=True))
self.assertEqual(
match,
tennis.Match(
sets=[tennis.Set(
games=[tennis.Tiebreak(first_server_points=1, target_points=2)],
tiebreak_games=0,
tiebreak_points=2
)],
tiebreak_games=0,
tiebreak_points=2
)
)
self.assertIsNone(match.point(first_server=True))
self.assertEqual(
match,
tennis.Match(
sets=[
tennis.Set(
games=[tennis.Tiebreak(first_server_points=2, target_points=2)],
tiebreak_games=0,
tiebreak_points=2
),
tennis.Set(
games=[tennis.Tiebreak(target_points=2)],
tiebreak_games=0,
tiebreak_points=2
)
],
tiebreak_games=0,
tiebreak_points=2
)
)
self.assertIsNone(match.point(first_server=True))
self.assertEqual(
match,
tennis.Match(
sets=[
tennis.Set(
games=[tennis.Tiebreak(first_server_points=2, target_points=2)],
tiebreak_games=0,
tiebreak_points=2
),
tennis.Set(
games=[tennis.Tiebreak(first_returner_points=1, target_points=2)],
tiebreak_games=0,
tiebreak_points=2
)
],
tiebreak_games=0,
tiebreak_points=2
)
)
self.assertTrue(match.point(first_server=True))
self.assertEqual(
match,
tennis.Match(
sets=[
tennis.Set(
games=[tennis.Tiebreak(first_server_points=2, target_points=2)],
tiebreak_games=0,
tiebreak_points=2
),
tennis.Set(
games=[tennis.Tiebreak(first_returner_points=2, target_points=2)],
tiebreak_games=0,
tiebreak_points=2
)
],
tiebreak_games=0,
tiebreak_points=2
)
)
match = tennis.Match(
sets=[tennis.Set(
games=[tennis.Tiebreak(first_server_points=1, target_points=2)],
tiebreak_games=0,
tiebreak_points=2
)],
target_sets=1,
tiebreak_games=0,
tiebreak_points=2
)
self.assertTrue(match.point(first_server=True))
self.assertEqual(
match,
tennis.Match(
sets=[tennis.Set(
games=[tennis.Tiebreak(first_server_points=2, target_points=2)],
tiebreak_games=0,
tiebreak_points=2
)],
target_sets=1,
tiebreak_games=0,
tiebreak_points=2
)
)
match = tennis.Match(
sets=[
tennis.Set(
games=[tennis.Tiebreak(first_returner_points=2, target_points=2)],
tiebreak_games=0,
tiebreak_points=2
),
tennis.Set(
games=[tennis.Tiebreak(first_returner_points=1, target_points=2)],
tiebreak_games=0,
tiebreak_points=2
)
],
tiebreak_games=0,
tiebreak_points=2,
final_set_target_games=3,
final_set_deciding_point=True,
final_set_tiebreak_games=4,
final_set_tiebreak_points=5
)
self.assertIsNone(match.point(first_server=True))
self.assertEqual(
match,
tennis.Match(
sets=[
tennis.Set(
games=[tennis.Tiebreak(first_returner_points=2, target_points=2)],
tiebreak_games=0,
tiebreak_points=2
),
tennis.Set(
games=[tennis.Tiebreak(first_returner_points=2, target_points=2)],
tiebreak_games=0,
tiebreak_points=2
),
tennis.Set(
games=[tennis.Game(deciding_point=True)],
target_games=3,
deciding_point=True,
tiebreak_games=4,
tiebreak_points=5
)
],
tiebreak_games=0,
tiebreak_points=2,
final_set_target_games=3,
final_set_deciding_point=True,
final_set_tiebreak_games=4,
final_set_tiebreak_points=5
)
)
match = tennis.Match(tiebreak_games=0, tiebreak_points=2)
self.assertIsNone(match.point(first_server=False))
self.assertEqual(
match,
tennis.Match(
sets=[tennis.Set(
games=[tennis.Tiebreak(first_returner_points=1, target_points=2)],
tiebreak_games=0,
tiebreak_points=2
)],
tiebreak_games=0,
tiebreak_points=2
)
)
self.assertIsNone(match.point(first_server=False))
self.assertEqual(
match,
tennis.Match(
sets=[
tennis.Set(
games=[tennis.Tiebreak(first_returner_points=2, target_points=2)],
tiebreak_games=0,
tiebreak_points=2
),
tennis.Set(
games=[tennis.Tiebreak(target_points=2)],
tiebreak_games=0,
tiebreak_points=2
)
],
tiebreak_games=0,
tiebreak_points=2
)
)
self.assertIsNone(match.point(first_server=False))
self.assertEqual(
match,
tennis.Match(
sets=[
tennis.Set(
games=[tennis.Tiebreak(first_returner_points=2, target_points=2)],
tiebreak_games=0,
tiebreak_points=2
),
tennis.Set(
games=[tennis.Tiebreak(first_server_points=1, target_points=2)],
tiebreak_games=0,
tiebreak_points=2
)
],
tiebreak_games=0,
tiebreak_points=2
)
)
self.assertFalse(match.point(first_server=False))
self.assertEqual(
match,
tennis.Match(
sets=[
tennis.Set(
games=[tennis.Tiebreak(first_returner_points=2, target_points=2)],
tiebreak_games=0,
tiebreak_points=2
),
tennis.Set(
games=[tennis.Tiebreak(first_server_points=2, target_points=2)],
tiebreak_games=0,
tiebreak_points=2
)
],
tiebreak_games=0,
tiebreak_points=2
)
)
match = tennis.Match(
sets=[tennis.Set(
games=[tennis.Tiebreak(first_returner_points=1, target_points=2)],
tiebreak_games=0,
tiebreak_points=2
)],
target_sets=1,
tiebreak_games=0,
tiebreak_points=2
)
self.assertFalse(match.point(first_server=False))
self.assertEqual(
match,
tennis.Match(
sets=[tennis.Set(
games=[tennis.Tiebreak(first_returner_points=2, target_points=2)],
tiebreak_games=0,
tiebreak_points=2
)],
target_sets=1,
tiebreak_games=0,
tiebreak_points=2
)
)
match = tennis.Match(
sets=[
tennis.Set(
games=[tennis.Tiebreak(first_server_points=2, target_points=2)],
tiebreak_games=0,
tiebreak_points=2
),
tennis.Set(
games=[tennis.Tiebreak(first_server_points=1, target_points=2)],
tiebreak_games=0,
tiebreak_points=2
)
],
tiebreak_games=0,
tiebreak_points=2,
final_set_target_games=3,
final_set_deciding_point=True,
final_set_tiebreak_games=4,
final_set_tiebreak_points=5
)
self.assertIsNone(match.point(first_server=False))
self.assertEqual(
match,
tennis.Match(
sets=[
tennis.Set(
games=[tennis.Tiebreak(first_server_points=2, target_points=2)],
tiebreak_games=0,
tiebreak_points=2
),
tennis.Set(
games=[tennis.Tiebreak(first_server_points=2, target_points=2)],
tiebreak_games=0,
tiebreak_points=2
),
tennis.Set(
games=[tennis.Game(deciding_point=True)],
target_games=3,
deciding_point=True,
tiebreak_games=4,
tiebreak_points=5
)
],
tiebreak_games=0,
tiebreak_points=2,
final_set_target_games=3,
final_set_deciding_point=True,
final_set_tiebreak_games=4,
final_set_tiebreak_points=5
)
)
def test_str(self):
self.assertEqual(
str(tennis.Match(
sets=[tennis.Set(
games=[tennis.Game(server_points=1, returner_points=2, deciding_point=True)],
target_games=3,
deciding_point=True,
tiebreak_games=4,
tiebreak_points=5
)],
target_sets=6,
target_games=3,
deciding_point=True,
tiebreak_games=4,
tiebreak_points=5,
final_set_target_games=7,
final_set_deciding_point=False,
final_set_tiebreak_games=8,
final_set_tiebreak_points=9
)),
'Match('
'sets=[Set('
'games=[Game(server_points=1, returner_points=2, deciding_point=True)], '
'target_games=3, '
'deciding_point=True, '
'tiebreak_games=4, '
'tiebreak_points=5'
')], '
'target_sets=6, '
'target_games=3, '
'deciding_point=True, '
'tiebreak_games=4, '
'tiebreak_points=5, '
'final_set_target_games=7, '
'final_set_deciding_point=False, '
'final_set_tiebreak_games=8, '
'final_set_tiebreak_points=9'
')'
)
def test_repr(self):
self.assertEqual(
repr(tennis.Match(
sets=[tennis.Set(
games=[tennis.Game(server_points=1, returner_points=2, deciding_point=True)],
target_games=3,
deciding_point=True,
tiebreak_games=4,
tiebreak_points=5
)],
target_sets=6,
target_games=3,
deciding_point=True,
tiebreak_games=4,
tiebreak_points=5,
final_set_target_games=7,
final_set_deciding_point=False,
final_set_tiebreak_games=8,
final_set_tiebreak_points=9
)),
'Match('
'sets=[Set('
'games=[Game(server_points=1, returner_points=2, deciding_point=True)], '
'target_games=3, '
'deciding_point=True, '
'tiebreak_games=4, '
'tiebreak_points=5'
')], '
'target_sets=6, '
'target_games=3, '
'deciding_point=True, '
'tiebreak_games=4, '
'tiebreak_points=5, '
'final_set_target_games=7, '
'final_set_deciding_point=False, '
'final_set_tiebreak_games=8, '
'final_set_tiebreak_points=9'
')'
)
def test_eq(self):
self.assertEqual(
tennis.Match(
sets=[tennis.Set(
games=[tennis.Game(server_points=1, returner_points=2, deciding_point=True)],
target_games=3,
deciding_point=True,
tiebreak_games=4,
tiebreak_points=5
)],
target_sets=6,
target_games=3,
deciding_point=True,
tiebreak_games=4,
tiebreak_points=5,
final_set_target_games=6,
final_set_deciding_point=False,
final_set_tiebreak_games=7,
final_set_tiebreak_points=8
),
tennis.Match(
sets=[tennis.Set(
games=[tennis.Game(server_points=1, returner_points=2, deciding_point=True)],
target_games=3,
deciding_point=True,
tiebreak_games=4,
tiebreak_points=5
)],
target_sets=6,
target_games=3,
deciding_point=True,
tiebreak_games=4,
tiebreak_points=5,
final_set_target_games=6,
final_set_deciding_point=False,
final_set_tiebreak_games=7,
final_set_tiebreak_points=8
)
)
self.assertNotEqual(tennis.Match(sets=None), tennis.Match(sets=[]))
self.assertNotEqual(tennis.Match(target_sets=1), tennis.Match(target_sets=2))
self.assertNotEqual(tennis.Match(target_games=3), tennis.Match(target_games=4))
self.assertNotEqual(tennis.Match(deciding_point=True), tennis.Match(deciding_point=False))
self.assertNotEqual(tennis.Match(tiebreak_games=5), tennis.Match(tiebreak_games=6))
self.assertNotEqual(tennis.Match(tiebreak_points=7), tennis.Match(tiebreak_points=8))
self.assertNotEqual(
tennis.Match(final_set_target_games=9),
tennis.Match(final_set_target_games=10)
)
self.assertNotEqual(
tennis.Match(final_set_deciding_point=True),
tennis.Match(final_set_deciding_point=False)
)
self.assertNotEqual(
tennis.Match(final_set_tiebreak_games=11),
tennis.Match(final_set_tiebreak_games=12)
)
self.assertNotEqual(
tennis.Match(final_set_tiebreak_points=13),
tennis.Match(final_set_tiebreak_points=14)
)
if __name__ == '__main__':
unittest.main()
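# Illustrative sketch (inferred from the tests above; not part of the assertions):
# a Match is advanced one point at a time, point() returns None while the match is
# in progress and the match winner once it ends.
#
#   match = tennis.Match(target_sets=1)
#   result = None
#   while result is None:
#       result = match.point(first_server=True)
#   assert match.winner   # the first server took every point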
| 30.680101 | 99 | 0.634729 | 2,899 | 24,360 | 5.059331 | 0.033115 | 0.113452 | 0.067771 | 0.084544 | 0.948047 | 0.912797 | 0.886889 | 0.856344 | 0.83739 | 0.822936 | 0 | 0.022418 | 0.252874 | 24,360 | 793 | 100 | 30.718789 | 0.783462 | 0 | 0 | 0.791165 | 0 | 0 | 0.06055 | 0.017159 | 0 | 0 | 0 | 0 | 0.156627 | 1 | 0.024096 | false | 0 | 0.004016 | 0 | 0.029451 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
bfb8a243fe4e682cbfe38f0f6099a4fda1f922e4 | 67,365 | py | Python | src/configs/sandybridge/abc_micro_kernel_gen.py | flame/tblis-strassen | 6e929ab34c366c4ec6804ad2bf7cae4b84ee81ab | [
"BSD-3-Clause"
] | 9 | 2017-08-25T08:25:01.000Z | 2021-12-02T20:41:28.000Z | src/configs/sandybridge/abc_micro_kernel_gen.py | flame/tblis-strassen | 6e929ab34c366c4ec6804ad2bf7cae4b84ee81ab | [
"BSD-3-Clause"
] | null | null | null | src/configs/sandybridge/abc_micro_kernel_gen.py | flame/tblis-strassen | 6e929ab34c366c4ec6804ad2bf7cae4b84ee81ab | [
"BSD-3-Clause"
] | 3 | 2018-07-31T05:58:20.000Z | 2022-01-11T03:36:46.000Z | #import abc_gen
import sys
from common import is_one, is_negone, is_nonzero, write_line, write_break, transpose, printmat, contain_nontrivial
def write_header_start( myfile ):
myfile.write( \
'''\
#ifndef BLISLAB_DFMM_KERNEL_H
#define BLISLAB_DFMM_KERNEL_H
#include "bl_config.h"
#include <stdio.h>
#include <immintrin.h> // AVX
// Allow C++ users to include this header file in their source code. However,
// we make the extern "C" conditional on whether we're using a C++ compiler,
// since regular C compilers don't understand the extern "C" construct.
#ifdef __cplusplus
extern "C" {
#endif
typedef unsigned long long dim_t;
typedef union {
__m256d v;
__m256i u;
double d[ 4 ];
} v4df_t;
typedef union {
__m128i v;
int d[ 4 ];
} v4li_t;
struct aux_s {
double *b_next;
float *b_next_s;
char *flag;
int pc;
int m;
int n;
};
typedef struct aux_s aux_t;
void bl_dgemm_asm_8x4( int k,
double *a,
double *b,
double *c,
unsigned long long ldc,
aux_t* data );
void bl_dgemm_asm_8x4_mulstrassen( int k,
double *a,
double *b,
unsigned long long len_c, unsigned long long ldc,
double **c_list, double *alpha_list,
aux_t* data );
static void (*bl_micro_kernel) (
int k,
double *a,
double *b,
double *c,
unsigned long long ldc,
aux_t* data
) = {
BL_MICRO_KERNEL
};
''')
def write_header_end( myfile ):
myfile.write( \
'''\
// End extern "C" construct block.
#ifdef __cplusplus
}
#endif
#endif
''')
def generate_kernel_header( myfile, nonzero_coeffs, index ):
nnz = len( nonzero_coeffs )
#write_line( myfile, 1, 'a' )
add = 'inline void bl_dgemm_micro_kernel_stra_abc%d( int k, double *a, double *b, ' % index
add += 'unsigned long long ldc, '
add += ', '.join( ['double *c%d' % ( i ) for i in range( nnz )] )
if ( contain_nontrivial( nonzero_coeffs ) ):
add += ', double *alpha_list'
add += ', aux_t *aux );'
write_line(myfile, 0, add)
#write_break( myfile )
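# For example, generate_kernel_header(f, [1.0, -1.0], 3) emits the single-line
# prototype below (shown wrapped here, and assuming contain_nontrivial() reports
# False when every coefficient is +1 or -1):
#   inline void bl_dgemm_micro_kernel_stra_abc3( int k, double *a, double *b,
#       unsigned long long ldc, double *c0, double *c1, aux_t *aux );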
def write_prefetch_assembly( myfile, nonzero_coeffs ):
for j, coeff in enumerate(nonzero_coeffs):
myfile.write( \
'''\
"movq %{0}, %%{2} \\n\\t" // load address of c{1}
"leaq (%%{2},%%rdi,2), %%{3} \\n\\t" // load address of c{1} + 2 * ldc;
"prefetcht0 3 * 8(%%{2}) \\n\\t" // prefetch c{1} + 0 * ldc
"prefetcht0 3 * 8(%%{2},%%rdi) \\n\\t" // prefetch c{1} + 1 * ldc
"prefetcht0 3 * 8(%%{3}) \\n\\t" // prefetch c{1} + 2 * ldc
"prefetcht0 3 * 8(%%{3},%%rdi) \\n\\t" // prefetch c{1} + 3 * ldc
'''.format( str(j+6), str(j), get_reg(), get_reg() ) )
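# For each nonzero coefficient j this emits a movq/leaq pair that materializes the
# addresses of c{j} and c{j} + 2*ldc, followed by four prefetcht0 instructions, one
# per column of the 8x4 micro-tile of c{j} (columns are ldc doubles apart).
# (Comment added for readability; the emitted assembly is unchanged.)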
# Round-robin allocation of a general-purpose register (optionally skipping one via avoid_reg)
def get_reg( avoid_reg = '' ):
get_reg.counter += 1
res_reg = get_reg.reg_pool[ get_reg.counter % len(get_reg.reg_pool) ]
if ( res_reg == avoid_reg ):
get_reg.counter += 1
res_reg = get_reg.reg_pool[ get_reg.counter % len(get_reg.reg_pool) ]
return res_reg
get_reg.counter = -1
get_reg.reg_pool = [ 'rcx', 'rdx', 'r8', 'r9', 'r10', 'r11', 'r12', 'r13', 'r14' ]
#get_reg.reg_pool = [ 'rcx', 'rdx', 'rsi', 'r8', 'r9', 'r10', 'r11', 'r12', 'r13', 'r14' ]
# rdi, rax, rbx, r15 are already occupied elsewhere in the kernel (e.g. ldc, the A/B panels and b_next),
# so the allocator cycles through the remaining general-purpose registers (rcx, rdx, r8-r14).
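# Quick illustration of the allocator (assuming a fresh counter; this is not
# emitted into the generated kernel):
#   get_reg()        # -> 'rcx'
#   get_reg()        # -> 'rdx'
#   get_reg('r8')    # -> 'r9'   (the avoid_reg slot is skipped)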
# Round-robin allocation of a 256-bit AVX (ymm) register (optionally skipping one via avoid_reg)
def get_avx_reg( avoid_reg = '' ):
get_avx_reg.counter += 1
res_reg = get_avx_reg.avx_reg_pool[ get_avx_reg.counter % len(get_avx_reg.avx_reg_pool) ]
if( res_reg == avoid_reg ):
get_avx_reg.counter += 1
res_reg = get_avx_reg.avx_reg_pool[ get_avx_reg.counter % len(get_avx_reg.avx_reg_pool) ]
return res_reg
get_avx_reg.counter = -1
get_avx_reg.avx_reg_pool = [ 'ymm0', 'ymm1', 'ymm2', 'ymm3', 'ymm4', 'ymm5', 'ymm6', 'ymm7' ]
def write_updatec_assembly( myfile, nonzero_coeffs ):
nnz = len( nonzero_coeffs )
if contain_nontrivial( nonzero_coeffs ):
write_line( myfile, 1, '"movq %{0}, %%rax \\n\\t" // load address of alpha_list'.format(nnz+6) )
for j, coeff in enumerate(nonzero_coeffs):
if is_one(coeff) or is_negone(coeff):
if is_one(coeff):
update_avx = 'vaddpd'
update_op = '+'
elif is_negone(coeff):
update_avx = 'vsubpd'
update_op = '-'
myfile.write( \
'''\
"movq %{2}, %%{4} \\n\\t" // load address of c
" \\n\\t"
"vmovapd 0 * 32(%%{4}), %%{5} \\n\\t" // {5} = c{3}( 0:3, 0 )
"{0} %%ymm9, %%{5}, %%{5} \\n\\t" // {5} {1}= ymm9
"vmovapd %%{5}, 0(%%{4}) \\n\\t" // c{3}( 0:3, 0 ) = {5}
"vmovapd 1 * 32(%%{4}), %%{6} \\n\\t" // {6} = c{3}( 4:7, 0 )
"{0} %%ymm8, %%{6}, %%{6} \\n\\t" // {6} {1}= ymm8
"vmovapd %%{6}, 32(%%{4}) \\n\\t" // c{3}( 4:7, 0 ) = {6}
"addq %%rdi, %%{4} \\n\\t"
"vmovapd 0 * 32(%%{4}), %%{7} \\n\\t" // {7} = c{3}( 0:3, 1 )
"{0} %%ymm11, %%{7}, %%{7} \\n\\t" // {7} {1}= ymm11
"vmovapd %%{7}, 0(%%{4}) \\n\\t" // c{3}( 0:3, 1 ) = {7}
"vmovapd 1 * 32(%%{4}), %%{8} \\n\\t" // {8} = c{3}( 4:7, 1 )
"{0} %%ymm10, %%{8}, %%{8} \\n\\t" // {8} {1}= ymm10
"vmovapd %%{8}, 32(%%{4}) \\n\\t" // c{3}( 4:7, 1 ) = {8}
"addq %%rdi, %%{4} \\n\\t"
"vmovapd 0 * 32(%%{4}), %%{9} \\n\\t" // {9} = c{3}( 0:3, 2 )
"{0} %%ymm13, %%{9}, %%{9} \\n\\t" // {9} {1}= ymm13
"vmovapd %%{9}, 0(%%{4}) \\n\\t" // c{3}( 0:3, 2 ) = {9}
"vmovapd 1 * 32(%%{4}), %%{10} \\n\\t" // {10} = c{3}( 4:7, 2 )
"{0} %%ymm12, %%{10}, %%{10} \\n\\t" // {10} {1}= ymm12
"vmovapd %%{10}, 32(%%{4}) \\n\\t" // c{3}( 4:7, 2 ) = {10}
"addq %%rdi, %%{4} \\n\\t"
"vmovapd 0 * 32(%%{4}), %%{11} \\n\\t" // {11} = c{3}( 0:3, 3 )
"{0} %%ymm15, %%{11}, %%{11} \\n\\t" // {11} {1}= ymm15
"vmovapd %%{11}, 0(%%{4}) \\n\\t" // c{3}( 0:3, 3 ) = {11}
"vmovapd 1 * 32(%%{4}), %%{12} \\n\\t" // {12} = c{3}( 4:7, 3 )
"{0} %%ymm14, %%{12}, %%{12} \\n\\t" // {12} {1}= ymm14
"vmovapd %%{12}, 32(%%{4}) \\n\\t" // c{3}( 4:7, 3 ) = {12}
'''.format( update_avx, update_op, str(j+6), str(j), get_reg(), get_avx_reg(), get_avx_reg(), get_avx_reg(), get_avx_reg(), get_avx_reg(), get_avx_reg(), get_avx_reg(), get_avx_reg() ) )
if contain_nontrivial( nonzero_coeffs ):
write_line( myfile, 1, '"addq $1 * 8, %%rax \\n\\t" // alpha_list += 8' )
else:
#print "coeff not 1 / -1!"
alpha_avx_reg = get_avx_reg()
myfile.write( \
'''\
" \\n\\t"
"vbroadcastsd (%%rax), %%{3} \\n\\t" // load alpha_list[ i ] and duplicate
"movq %{0}, %%{2} \\n\\t" // load address of c
" \\n\\t"
"vmovapd 0 * 32(%%{2}), %%{4} \\n\\t" // {4} = c{1}( 0:3, 0 )
"vmulpd %%{3}, %%ymm9, %%{5} \\n\\t" // scale by alpha, {5} = {3}( alpha ) * ymm9( c{1}( 0:3, 0 ) )
"vaddpd %%{4}, %%{5}, %%{4} \\n\\t" // {4} += {5}
"vmovapd %%{4}, 0(%%{2}) \\n\\t" // c{1}( 0:3, 0 ) = {4}
"vmovapd 1 * 32(%%{2}), %%{6} \\n\\t" // {6} = c{1}( 4:7, 0 )
"vmulpd %%{3}, %%ymm8, %%{7} \\n\\t" // scale by alpha, {7} = {3}( alpha ) * ymm8( c{1}( 4:7, 0 ) )
"vaddpd %%{6}, %%{7}, %%{6} \\n\\t" // {6} += {7}
"vmovapd %%{6}, 32(%%{2}) \\n\\t" // c{1}( 4:7, 0 ) = {6}
"addq %%rdi, %%{2} \\n\\t"
"vmovapd 0 * 32(%%{2}), %%{8} \\n\\t" // {8} = c{1}( 0:3, 1 )
"vmulpd %%{3}, %%ymm11, %%{9} \\n\\t" // scale by alpha, {5} = {3}( alpha ) * ymm11( c{1}( 0:3, 1 ) )
"vaddpd %%{8}, %%{9}, %%{8} \\n\\t" // {8} += {7}
"vmovapd %%{8}, 0(%%{2}) \\n\\t" // c{1}( 0:3, 1 ) = {8}
"vmovapd 1 * 32(%%{2}), %%{10} \\n\\t" // {10} = c{1}( 4:7, 1 )
"vmulpd %%{3}, %%ymm10, %%{11} \\n\\t" // scale by alpha, {5} = {3}( alpha ) * ymm10( c{1}( 4:7, 1 ) )
"vaddpd %%{10}, %%{11}, %%{10} \\n\\t" // {10} += {9}
"vmovapd %%{10}, 32(%%{2}) \\n\\t" // c{1}( 4:7, 1 ) = {10}
"addq %%rdi, %%{2} \\n\\t"
"vmovapd 0 * 32(%%{2}), %%{12} \\n\\t" // {12} = c{1}( 0:3, 2 )
"vmulpd %%{3}, %%ymm13, %%{13} \\n\\t" // scale by alpha, {5} = {3}( alpha ) * ymm13( c{1}( 0:3, 2 ) )
"vaddpd %%{12}, %%{13}, %%{12} \\n\\t" // {12} += {11}
"vmovapd %%{12}, 0(%%{2}) \\n\\t" // c{1}( 0:3, 2 ) = {12}
"vmovapd 1 * 32(%%{2}), %%{14} \\n\\t" // {14} = c{1}( 4:7, 2 )
"vmulpd %%{3}, %%ymm12, %%{15} \\n\\t" // scale by alpha, {5} = {3}( alpha ) * ymm12( c{1}( 4:7, 2 ) )
"vaddpd %%{14}, %%{15}, %%{14} \\n\\t" // {14} += {13}
"vmovapd %%{14}, 32(%%{2}) \\n\\t" // c{1}( 4:7, 2 ) = {14}
"addq %%rdi, %%{2} \\n\\t"
"vmovapd 0 * 32(%%{2}), %%{16} \\n\\t" // {16} = c{1}( 0:3, 3 )
"vmulpd %%{3}, %%ymm15, %%{17} \\n\\t" // scale by alpha, {5} = {3}( alpha ) * ymm15( c{1}( 0:3, 3 ) )
"vaddpd %%{16}, %%{17}, %%{16} \\n\\t" // {16} += {15}
"vmovapd %%{16}, 0(%%{2}) \\n\\t" // c{1}( 0:3, 3 ) = {16}
"vmovapd 1 * 32(%%{2}), %%{18} \\n\\t" // {18} = c{1}( 4:7, 3 )
"vmulpd %%{3}, %%ymm14, %%{19} \\n\\t" // scale by alpha, {5} = {3}( alpha ) * ymm14( c{1}( 4:7, 3 ) )
"vaddpd %%{18}, %%{19}, %%{18} \\n\\t" // {18} +={17}
"vmovapd %%{18}, 32(%%{2}) \\n\\t" // c{1}( 4:7, 3 ) = {18}
"addq $1 * 8, %%rax \\n\\t" // alpha_list += 8
" \\n\\t"
'''.format( str(j+6), str(j), get_reg(), alpha_avx_reg, get_avx_reg( alpha_avx_reg ), get_avx_reg( alpha_avx_reg ), get_avx_reg( alpha_avx_reg ), get_avx_reg( alpha_avx_reg ), get_avx_reg( alpha_avx_reg ), get_avx_reg( alpha_avx_reg ), get_avx_reg( alpha_avx_reg ), get_avx_reg( alpha_avx_reg ), get_avx_reg( alpha_avx_reg ), get_avx_reg( alpha_avx_reg ), get_avx_reg( alpha_avx_reg ), get_avx_reg( alpha_avx_reg ), get_avx_reg( alpha_avx_reg ), get_avx_reg( alpha_avx_reg ), get_avx_reg( alpha_avx_reg ), get_avx_reg( alpha_avx_reg ) ) )
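# Two code paths above (comment added for readability; the emitted assembly is
# unchanged): coefficients of exactly +1/-1 update c{j} with plain vaddpd/vsubpd,
# while any other coefficient broadcasts alpha_list[j] with vbroadcastsd and scales
# the accumulators (ymm8-ymm15) with vmulpd before the vaddpd update.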
def write_common_rankk_macro_assembly( myfile ):
myfile.write( \
'''\
#define STRINGIFY(...) #__VA_ARGS__
#define RANKK_UPDATE( NUM ) \\
"vxorpd %%ymm8, %%ymm8, %%ymm8 \\n\\t" /* set ymm8 to 0 ( v ) */ \\
"vxorpd %%ymm9, %%ymm9, %%ymm9 \\n\\t" \\
"vxorpd %%ymm10, %%ymm10, %%ymm10 \\n\\t" \\
"vxorpd %%ymm11, %%ymm11, %%ymm11 \\n\\t" \\
"vxorpd %%ymm12, %%ymm12, %%ymm12 \\n\\t" \\
"vxorpd %%ymm13, %%ymm13, %%ymm13 \\n\\t" \\
"vxorpd %%ymm14, %%ymm14, %%ymm14 \\n\\t" \\
"vxorpd %%ymm15, %%ymm15, %%ymm15 \\n\\t" \\
" \\n\\t" \\
"movq %0, %%rsi \\n\\t" /* i = k_iter; ( v ) */ \\
"testq %%rsi, %%rsi \\n\\t" /* check i via logical AND. ( v ) */ \\
"je .DCONSIDKLEFT"STRINGIFY(NUM)" \\n\\t" /* if i == 0, jump to code that ( v ) */ \\
" \\n\\t" /* contains the k_left loop. */ \\
" \\n\\t" \\
".DLOOPKITER"STRINGIFY(NUM)": \\n\\t" /* MAIN LOOP */ \\
" \\n\\t" \\
"addq $4 * 4 * 8, %%r15 \\n\\t" /* b_next += 4*4 (unroll x nr) ( v ) */ \\
" \\n\\t" \\
" \\n\\t" /* iteration 0 */ \\
"vmovapd 1 * 32(%%rax), %%ymm1 \\n\\t" /* preload a47 for iter 0 */ \\
"vmulpd %%ymm0, %%ymm2, %%ymm6 \\n\\t" /* ymm6 ( c_tmp0 ) = ymm0 ( a03 ) * ymm2( b0 ) */ \\
"vperm2f128 $0x3, %%ymm2, %%ymm2, %%ymm4 \\n\\t" /* ymm4 ( b0x3_0 ) */ \\
"vmulpd %%ymm0, %%ymm3, %%ymm7 \\n\\t" /* ymm7 ( c_tmp1 ) = ymm0 ( a03 ) * ymm3( b0x5 ) */ \\
"vperm2f128 $0x3, %%ymm3, %%ymm3, %%ymm5 \\n\\t" /* ymm5 ( b0x3_1 ) */ \\
"vaddpd %%ymm15, %%ymm6, %%ymm15 \\n\\t" /* ymm15 ( c_03_0 ) += ymm6( c_tmp0 ) */ \\
"vaddpd %%ymm13, %%ymm7, %%ymm13 \\n\\t" /* ymm13 ( c_03_1 ) += ymm7( c_tmp1 ) */ \\
" \\n\\t" \\
"prefetcht0 16 * 32(%%rax) \\n\\t" /* prefetch a03 for iter 1 */ \\
"vmulpd %%ymm1, %%ymm2, %%ymm6 \\n\\t" \\
"vmovapd 1 * 32(%%rbx), %%ymm2 \\n\\t" /* preload b for iter 1 */ \\
"vmulpd %%ymm1, %%ymm3, %%ymm7 \\n\\t" \\
"vpermilpd $0x5, %%ymm2, %%ymm3 \\n\\t" \\
"vaddpd %%ymm14, %%ymm6, %%ymm14 \\n\\t" \\
"vaddpd %%ymm12, %%ymm7, %%ymm12 \\n\\t" \\
" \\n\\t" \\
"vmulpd %%ymm0, %%ymm4, %%ymm6 \\n\\t" \\
"vmulpd %%ymm0, %%ymm5, %%ymm7 \\n\\t" \\
"vmovapd 2 * 32(%%rax), %%ymm0 \\n\\t" /* preload a03 for iter 1 */ \\
"vaddpd %%ymm11, %%ymm6, %%ymm11 \\n\\t" \\
"vaddpd %%ymm9, %%ymm7, %%ymm9 \\n\\t" \\
"prefetcht0 0 * 32(%%r15) \\n\\t" /* prefetch b_next[0*4] */ \\
" \\n\\t" \\
"vmulpd %%ymm1, %%ymm4, %%ymm6 \\n\\t" \\
"vmulpd %%ymm1, %%ymm5, %%ymm7 \\n\\t" \\
"vaddpd %%ymm10, %%ymm6, %%ymm10 \\n\\t" \\
"vaddpd %%ymm8, %%ymm7, %%ymm8 \\n\\t" \\
" \\n\\t" \\
" \\n\\t" /* iteration 1 */ \\
"vmovapd 3 * 32(%%rax), %%ymm1 \\n\\t" /* preload a47 for iter 1 */ \\
"vmulpd %%ymm0, %%ymm2, %%ymm6 \\n\\t" \\
"vperm2f128 $0x3, %%ymm2, %%ymm2, %%ymm4 \\n\\t" \\
"vmulpd %%ymm0, %%ymm3, %%ymm7 \\n\\t" \\
"vperm2f128 $0x3, %%ymm3, %%ymm3, %%ymm5 \\n\\t" \\
"vaddpd %%ymm15, %%ymm6, %%ymm15 \\n\\t" \\
"vaddpd %%ymm13, %%ymm7, %%ymm13 \\n\\t" \\
" \\n\\t" \\
"prefetcht0 18 * 32(%%rax) \\n\\t" /* prefetch a for iter 9 ( ? ) */ \\
"vmulpd %%ymm1, %%ymm2, %%ymm6 \\n\\t" \\
"vmovapd 2 * 32(%%rbx), %%ymm2 \\n\\t" /* preload b for iter 2 */ \\
"vmulpd %%ymm1, %%ymm3, %%ymm7 \\n\\t" \\
"vpermilpd $0x5, %%ymm2, %%ymm3 \\n\\t" \\
"vaddpd %%ymm14, %%ymm6, %%ymm14 \\n\\t" \\
"vaddpd %%ymm12, %%ymm7, %%ymm12 \\n\\t" \\
" \\n\\t" \\
"vmulpd %%ymm0, %%ymm4, %%ymm6 \\n\\t" \\
"vmulpd %%ymm0, %%ymm5, %%ymm7 \\n\\t" \\
"vmovapd 4 * 32(%%rax), %%ymm0 \\n\\t" /* preload a03 for iter 2 */ \\
"vaddpd %%ymm11, %%ymm6, %%ymm11 \\n\\t" \\
"vaddpd %%ymm9, %%ymm7, %%ymm9 \\n\\t" \\
" \\n\\t" \\
"vmulpd %%ymm1, %%ymm4, %%ymm6 \\n\\t" \\
"vmulpd %%ymm1, %%ymm5, %%ymm7 \\n\\t" \\
"vaddpd %%ymm10, %%ymm6, %%ymm10 \\n\\t" \\
"vaddpd %%ymm8, %%ymm7, %%ymm8 \\n\\t" \\
" \\n\\t" \\
" \\n\\t" /* iteration 2 */ \\
"vmovapd 5 * 32(%%rax), %%ymm1 \\n\\t" /* preload a47 for iter 2 */ \\
"vmulpd %%ymm0, %%ymm2, %%ymm6 \\n\\t" \\
"vperm2f128 $0x3, %%ymm2, %%ymm2, %%ymm4 \\n\\t" \\
"vmulpd %%ymm0, %%ymm3, %%ymm7 \\n\\t" \\
"vperm2f128 $0x3, %%ymm3, %%ymm3, %%ymm5 \\n\\t" \\
"vaddpd %%ymm15, %%ymm6, %%ymm15 \\n\\t" \\
"vaddpd %%ymm13, %%ymm7, %%ymm13 \\n\\t" \\
" \\n\\t" \\
"prefetcht0 20 * 32(%%rax) \\n\\t" /* prefetch a for iter 10 ( ? ) */ \\
"vmulpd %%ymm1, %%ymm2, %%ymm6 \\n\\t" \\
"vmovapd 3 * 32(%%rbx), %%ymm2 \\n\\t" /* preload b for iter 3 */ \\
"addq $4 * 4 * 8, %%rbx \\n\\t" /* b += 4*4 (unroll x nr) */ \\
"vmulpd %%ymm1, %%ymm3, %%ymm7 \\n\\t" \\
"vpermilpd $0x5, %%ymm2, %%ymm3 \\n\\t" \\
"vaddpd %%ymm14, %%ymm6, %%ymm14 \\n\\t" \\
"vaddpd %%ymm12, %%ymm7, %%ymm12 \\n\\t" \\
" \\n\\t" \\
"vmulpd %%ymm0, %%ymm4, %%ymm6 \\n\\t" \\
"vmulpd %%ymm0, %%ymm5, %%ymm7 \\n\\t" \\
"vmovapd 6 * 32(%%rax), %%ymm0 \\n\\t" /* preload a03 for iter 3 */ \\
"vaddpd %%ymm11, %%ymm6, %%ymm11 \\n\\t" \\
"vaddpd %%ymm9, %%ymm7, %%ymm9 \\n\\t" \\
"prefetcht0 2 * 32(%%r15) \\n\\t" /* prefetch b_next[2*4] */ \\
" \\n\\t" \\
"vmulpd %%ymm1, %%ymm4, %%ymm6 \\n\\t" \\
"vmulpd %%ymm1, %%ymm5, %%ymm7 \\n\\t" \\
"vaddpd %%ymm10, %%ymm6, %%ymm10 \\n\\t" \\
"vaddpd %%ymm8, %%ymm7, %%ymm8 \\n\\t" \\
" \\n\\t" \\
" \\n\\t" /* iteration 3 */ \\
"vmovapd 7 * 32(%%rax), %%ymm1 \\n\\t" /* preload a47 for iter 3 */ \\
"addq $4 * 8 * 8, %%rax \\n\\t" /* a += 4*8 (unroll x mr) */ \\
"vmulpd %%ymm0, %%ymm2, %%ymm6 \\n\\t" \\
"vperm2f128 $0x3, %%ymm2, %%ymm2, %%ymm4 \\n\\t" \\
"vmulpd %%ymm0, %%ymm3, %%ymm7 \\n\\t" \\
"vperm2f128 $0x3, %%ymm3, %%ymm3, %%ymm5 \\n\\t" \\
"vaddpd %%ymm15, %%ymm6, %%ymm15 \\n\\t" \\
"vaddpd %%ymm13, %%ymm7, %%ymm13 \\n\\t" \\
" \\n\\t" \\
"prefetcht0 14 * 32(%%rax) \\n\\t" /* prefetch a for iter 11 ( ? ) */ \\
"vmulpd %%ymm1, %%ymm2, %%ymm6 \\n\\t" \\
"vmovapd 0 * 32(%%rbx), %%ymm2 \\n\\t" /* preload b for iter 4 */ \\
"vmulpd %%ymm1, %%ymm3, %%ymm7 \\n\\t" \\
"vpermilpd $0x5, %%ymm2, %%ymm3 \\n\\t" \\
"vaddpd %%ymm14, %%ymm6, %%ymm14 \\n\\t" \\
"vaddpd %%ymm12, %%ymm7, %%ymm12 \\n\\t" \\
" \\n\\t" \\
"vmulpd %%ymm0, %%ymm4, %%ymm6 \\n\\t" \\
"vmulpd %%ymm0, %%ymm5, %%ymm7 \\n\\t" \\
"vmovapd 0 * 32(%%rax), %%ymm0 \\n\\t" /* preload a03 for iter 4 */ \\
"vaddpd %%ymm11, %%ymm6, %%ymm11 \\n\\t" \\
"vaddpd %%ymm9, %%ymm7, %%ymm9 \\n\\t" \\
" \\n\\t" \\
"vmulpd %%ymm1, %%ymm4, %%ymm6 \\n\\t" \\
"vmulpd %%ymm1, %%ymm5, %%ymm7 \\n\\t" \\
"vaddpd %%ymm10, %%ymm6, %%ymm10 \\n\\t" \\
"vaddpd %%ymm8, %%ymm7, %%ymm8 \\n\\t" \\
" \\n\\t" \\
"decq %%rsi \\n\\t" /* i -= 1; */ \\
"jne .DLOOPKITER"STRINGIFY(NUM)" \\n\\t" /* iterate again if i != 0. */ \\
" \\n\\t" \\
".DCONSIDKLEFT"STRINGIFY(NUM)": \\n\\t" \\
" \\n\\t" \\
"movq %1, %%rsi \\n\\t" /* i = k_left; */ \\
"testq %%rsi, %%rsi \\n\\t" /* check i via logical AND. */ \\
"je .DPOSTACCUM"STRINGIFY(NUM)" \\n\\t" /* if i == 0, we're done; jump to end. */ \\
" \\n\\t" /* else, we prepare to enter k_left loop. */ \\
" \\n\\t" \\
" \\n\\t" \\
".DLOOPKLEFT"STRINGIFY(NUM)": \\n\\t" /* EDGE LOOP */ \\
" \\n\\t" \\
"vmovapd 1 * 32(%%rax), %%ymm1 \\n\\t" /* preload a47 */ \\
"addq $8 * 1 * 8, %%rax \\n\\t" /* a += 8 (1 x mr) */ \\
"vmulpd %%ymm0, %%ymm2, %%ymm6 \\n\\t" \\
"vperm2f128 $0x3, %%ymm2, %%ymm2, %%ymm4 \\n\\t" \\
"vmulpd %%ymm0, %%ymm3, %%ymm7 \\n\\t" \\
"vperm2f128 $0x3, %%ymm3, %%ymm3, %%ymm5 \\n\\t" \\
"vaddpd %%ymm15, %%ymm6, %%ymm15 \\n\\t" \\
"vaddpd %%ymm13, %%ymm7, %%ymm13 \\n\\t" \\
" \\n\\t" \\
"prefetcht0 14 * 32(%%rax) \\n\\t" /* prefetch a03 for iter 7 later ( ? ) */ \\
"vmulpd %%ymm1, %%ymm2, %%ymm6 \\n\\t" \\
"vmovapd 1 * 32(%%rbx), %%ymm2 \\n\\t" \\
"addq $4 * 1 * 8, %%rbx \\n\\t" /* b += 4 (1 x nr) */ \\
"vmulpd %%ymm1, %%ymm3, %%ymm7 \\n\\t" \\
"vpermilpd $0x5, %%ymm2, %%ymm3 \\n\\t" \\
"vaddpd %%ymm14, %%ymm6, %%ymm14 \\n\\t" \\
"vaddpd %%ymm12, %%ymm7, %%ymm12 \\n\\t" \\
" \\n\\t" \\
"vmulpd %%ymm0, %%ymm4, %%ymm6 \\n\\t" \\
"vmulpd %%ymm0, %%ymm5, %%ymm7 \\n\\t" \\
"vmovapd 0 * 32(%%rax), %%ymm0 \\n\\t" \\
"vaddpd %%ymm11, %%ymm6, %%ymm11 \\n\\t" \\
"vaddpd %%ymm9, %%ymm7, %%ymm9 \\n\\t" \\
" \\n\\t" \\
"vmulpd %%ymm1, %%ymm4, %%ymm6 \\n\\t" \\
"vmulpd %%ymm1, %%ymm5, %%ymm7 \\n\\t" \\
"vaddpd %%ymm10, %%ymm6, %%ymm10 \\n\\t" \\
"vaddpd %%ymm8, %%ymm7, %%ymm8 \\n\\t" \\
" \\n\\t" \\
" \\n\\t" \\
"decq %%rsi \\n\\t" /* i -= 1; */ \\
"jne .DLOOPKLEFT"STRINGIFY(NUM)" \\n\\t" /* iterate again if i != 0. */ \\
" \\n\\t" \\
" \\n\\t" \\
" \\n\\t" \\
".DPOSTACCUM"STRINGIFY(NUM)": \\n\\t" \\
" \\n\\t" \\
" \\n\\t" /* ymm15: ymm13: ymm11: ymm9: */ \\
" \\n\\t" /* ( ab00 ( ab01 ( ab02 ( ab03 */ \\
" \\n\\t" /* ab11 ab10 ab13 ab12 */ \\
" \\n\\t" /* ab22 ab23 ab20 ab21 */ \\
" \\n\\t" /* ab33 ) ab32 ) ab31 ) ab30 ) */ \\
" \\n\\t" \\
" \\n\\t" /* ymm14: ymm12: ymm10: ymm8: */ \\
" \\n\\t" /* ( ab40 ( ab41 ( ab42 ( ab43 */ \\
" \\n\\t" /* ab51 ab50 ab53 ab52 */ \\
" \\n\\t" /* ab62 ab63 ab60 ab61 */ \\
" \\n\\t" /* ab73 ) ab72 ) ab71 ) ab70 ) */ \\
" \\n\\t" \\
"vmovapd %%ymm15, %%ymm7 \\n\\t" \\
"vshufpd $0xa, %%ymm15, %%ymm13, %%ymm15 \\n\\t" \\
"vshufpd $0xa, %%ymm13, %%ymm7, %%ymm13 \\n\\t" \\
" \\n\\t" \\
"vmovapd %%ymm11, %%ymm7 \\n\\t" \\
"vshufpd $0xa, %%ymm11, %%ymm9, %%ymm11 \\n\\t" \\
"vshufpd $0xa, %%ymm9, %%ymm7, %%ymm9 \\n\\t" \\
" \\n\\t" \\
"vmovapd %%ymm14, %%ymm7 \\n\\t" \\
"vshufpd $0xa, %%ymm14, %%ymm12, %%ymm14 \\n\\t" \\
"vshufpd $0xa, %%ymm12, %%ymm7, %%ymm12 \\n\\t" \\
" \\n\\t" \\
"vmovapd %%ymm10, %%ymm7 \\n\\t" \\
"vshufpd $0xa, %%ymm10, %%ymm8, %%ymm10 \\n\\t" \\
"vshufpd $0xa, %%ymm8, %%ymm7, %%ymm8 \\n\\t" \\
" \\n\\t" \\
" \\n\\t" /* ymm15: ymm13: ymm11: ymm9: */ \\
" \\n\\t" /* ( ab01 ( ab00 ( ab03 ( ab02 */ \\
" \\n\\t" /* ab11 ab10 ab13 ab12 */ \\
" \\n\\t" /* ab23 ab22 ab21 ab20 */ \\
" \\n\\t" /* ab33 ) ab32 ) ab31 ) ab30 ) */ \\
" \\n\\t" \\
" \\n\\t" /* ymm14: ymm12: ymm10: ymm8: */ \\
" \\n\\t" /* ( ab41 ( ab40 ( ab43 ( ab42 */ \\
" \\n\\t" /* ab51 ab50 ab53 ab52 */ \\
" \\n\\t" /* ab63 ab62 ab61 ab60 */ \\
" \\n\\t" /* ab73 ) ab72 ) ab71 ) ab70 ) */ \\
" \\n\\t" \\
"vmovapd %%ymm15, %%ymm7 \\n\\t" \\
"vperm2f128 $0x30, %%ymm15, %%ymm11, %%ymm15 \\n\\t" \\
"vperm2f128 $0x12, %%ymm7, %%ymm11, %%ymm11 \\n\\t" \\
" \\n\\t" \\
"vmovapd %%ymm13, %%ymm7 \\n\\t" \\
"vperm2f128 $0x30, %%ymm13, %%ymm9, %%ymm13 \\n\\t" \\
"vperm2f128 $0x12, %%ymm7, %%ymm9, %%ymm9 \\n\\t" \\
" \\n\\t" \\
"vmovapd %%ymm14, %%ymm7 \\n\\t" \\
"vperm2f128 $0x30, %%ymm14, %%ymm10, %%ymm14 \\n\\t" \\
"vperm2f128 $0x12, %%ymm7, %%ymm10, %%ymm10 \\n\\t" \\
" \\n\\t" \\
"vmovapd %%ymm12, %%ymm7 \\n\\t" \\
"vperm2f128 $0x30, %%ymm12, %%ymm8, %%ymm12 \\n\\t" \\
"vperm2f128 $0x12, %%ymm7, %%ymm8, %%ymm8 \\n\\t" \\
" \\n\\t" \\
" \\n\\t" /* ymm9: ymm11: ymm13: ymm15: */ \\
" \\n\\t" /* ( ab00 ( ab01 ( ab02 ( ab03 */ \\
" \\n\\t" /* ab10 ab11 ab12 ab13 */ \\
" \\n\\t" /* ab20 ab21 ab22 ab23 */ \\
" \\n\\t" /* ab30 ) ab31 ) ab32 ) ab33 ) */ \\
" \\n\\t" \\
" \\n\\t" /* ymm8: ymm10: ymm12: ymm14: */ \\
" \\n\\t" /* ( ab40 ( ab41 ( ab42 ( ab43 */ \\
" \\n\\t" /* ab50 ab51 ab52 ab53 */ \\
" \\n\\t" /* ab60 ab61 ab62 ab63 */ \\
" \\n\\t" /* ab70 ) ab71 ) ab72 ) ab73 ) */
''' )
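# Emit the RANKK_XOR0 macro: zero the eight ymm accumulators (ymm8-ymm15), then
# load the unrolled-loop trip count (k_iter, asm operand %0) into rsi and test it.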
def macro_rankk_xor0_assembly( myfile ):
myfile.write( \
'''\
#define RANKK_XOR0 \\
"vxorpd %%ymm8, %%ymm8, %%ymm8 \\n\\t" /* set ymm8 to 0 ( v ) */ \\
"vxorpd %%ymm9, %%ymm9, %%ymm9 \\n\\t" \\
"vxorpd %%ymm10, %%ymm10, %%ymm10 \\n\\t" \\
"vxorpd %%ymm11, %%ymm11, %%ymm11 \\n\\t" \\
"vxorpd %%ymm12, %%ymm12, %%ymm12 \\n\\t" \\
"vxorpd %%ymm13, %%ymm13, %%ymm13 \\n\\t" \\
"vxorpd %%ymm14, %%ymm14, %%ymm14 \\n\\t" \\
"vxorpd %%ymm15, %%ymm15, %%ymm15 \\n\\t" \\
" \\n\\t" \\
" \\n\\t" \\
" \\n\\t" \\
"movq %0, %%rsi \\n\\t" /* i = k_iter; ( v ) */ \\
"testq %%rsi, %%rsi \\n\\t" /* check i via logical AND. ( v ) */ \\
''' )
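# Emit the RANKK_LOOPKITER macro: the 4x-unrolled body of the main k loop. Each
# sub-iteration multiplies eight packed A elements (ymm0/ymm1) with a permuted
# 4-wide row of packed B (ymm2-ymm5) and accumulates into ymm8-ymm15, while
# prefetching upcoming A and b_next lines and advancing rax, rbx and r15. The
# body ends at the decq of the counter; the loop label and jne come from the caller.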
def macro_rankk_loopkiter_assembly( myfile ):
myfile.write( \
'''\
#define RANKK_LOOPKITER \\
"addq $4 * 4 * 8, %%r15 \\n\\t" /* b_next += 4*4 (unroll x nr) ( v ) */ \\
" \\n\\t" \\
" \\n\\t" /* iteration 0 */ \\
"vmovapd 1 * 32(%%rax), %%ymm1 \\n\\t" /* preload a47 for iter 0 */ \\
"vmulpd %%ymm0, %%ymm2, %%ymm6 \\n\\t" /* ymm6 ( c_tmp0 ) = ymm0 ( a03 ) * ymm2( b0 ) */ \\
"vperm2f128 $0x3, %%ymm2, %%ymm2, %%ymm4 \\n\\t" /* ymm4 ( b0x3_0 ) */ \\
"vmulpd %%ymm0, %%ymm3, %%ymm7 \\n\\t" /* ymm7 ( c_tmp1 ) = ymm0 ( a03 ) * ymm3( b0x5 ) */ \\
"vperm2f128 $0x3, %%ymm3, %%ymm3, %%ymm5 \\n\\t" /* ymm5 ( b0x3_1 ) */ \\
"vaddpd %%ymm15, %%ymm6, %%ymm15 \\n\\t" /* ymm15 ( c_03_0 ) += ymm6( c_tmp0 ) */ \\
"vaddpd %%ymm13, %%ymm7, %%ymm13 \\n\\t" /* ymm13 ( c_03_1 ) += ymm7( c_tmp1 ) */ \\
" \\n\\t" \\
"prefetcht0 16 * 32(%%rax) \\n\\t" /* prefetch a03 for iter 1 */ \\
"vmulpd %%ymm1, %%ymm2, %%ymm6 \\n\\t" \\
"vmovapd 1 * 32(%%rbx), %%ymm2 \\n\\t" /* preload b for iter 1 */ \\
"vmulpd %%ymm1, %%ymm3, %%ymm7 \\n\\t" \\
"vpermilpd $0x5, %%ymm2, %%ymm3 \\n\\t" \\
"vaddpd %%ymm14, %%ymm6, %%ymm14 \\n\\t" \\
"vaddpd %%ymm12, %%ymm7, %%ymm12 \\n\\t" \\
" \\n\\t" \\
"vmulpd %%ymm0, %%ymm4, %%ymm6 \\n\\t" \\
"vmulpd %%ymm0, %%ymm5, %%ymm7 \\n\\t" \\
"vmovapd 2 * 32(%%rax), %%ymm0 \\n\\t" /* preload a03 for iter 1 */ \\
"vaddpd %%ymm11, %%ymm6, %%ymm11 \\n\\t" \\
"vaddpd %%ymm9, %%ymm7, %%ymm9 \\n\\t" \\
"prefetcht0 0 * 32(%%r15) \\n\\t" /* prefetch b_next[0*4] */ \\
" \\n\\t" \\
"vmulpd %%ymm1, %%ymm4, %%ymm6 \\n\\t" \\
"vmulpd %%ymm1, %%ymm5, %%ymm7 \\n\\t" \\
"vaddpd %%ymm10, %%ymm6, %%ymm10 \\n\\t" \\
"vaddpd %%ymm8, %%ymm7, %%ymm8 \\n\\t" \\
" \\n\\t" \\
" \\n\\t" \\
" \\n\\t" /* iteration 1 */ \\
"vmovapd 3 * 32(%%rax), %%ymm1 \\n\\t" /* preload a47 for iter 1 */ \\
"vmulpd %%ymm0, %%ymm2, %%ymm6 \\n\\t" \\
"vperm2f128 $0x3, %%ymm2, %%ymm2, %%ymm4 \\n\\t" \\
"vmulpd %%ymm0, %%ymm3, %%ymm7 \\n\\t" \\
"vperm2f128 $0x3, %%ymm3, %%ymm3, %%ymm5 \\n\\t" \\
"vaddpd %%ymm15, %%ymm6, %%ymm15 \\n\\t" \\
"vaddpd %%ymm13, %%ymm7, %%ymm13 \\n\\t" \\
" \\n\\t" \\
"prefetcht0 18 * 32(%%rax) \\n\\t" /* prefetch a for iter 9 ( ? ) */ \\
"vmulpd %%ymm1, %%ymm2, %%ymm6 \\n\\t" \\
"vmovapd 2 * 32(%%rbx), %%ymm2 \\n\\t" /* preload b for iter 2 */ \\
"vmulpd %%ymm1, %%ymm3, %%ymm7 \\n\\t" \\
"vpermilpd $0x5, %%ymm2, %%ymm3 \\n\\t" \\
"vaddpd %%ymm14, %%ymm6, %%ymm14 \\n\\t" \\
"vaddpd %%ymm12, %%ymm7, %%ymm12 \\n\\t" \\
" \\n\\t" \\
"vmulpd %%ymm0, %%ymm4, %%ymm6 \\n\\t" \\
"vmulpd %%ymm0, %%ymm5, %%ymm7 \\n\\t" \\
"vmovapd 4 * 32(%%rax), %%ymm0 \\n\\t" /* preload a03 for iter 2 */ \\
"vaddpd %%ymm11, %%ymm6, %%ymm11 \\n\\t" \\
"vaddpd %%ymm9, %%ymm7, %%ymm9 \\n\\t" \\
" \\n\\t" \\
"vmulpd %%ymm1, %%ymm4, %%ymm6 \\n\\t" \\
"vmulpd %%ymm1, %%ymm5, %%ymm7 \\n\\t" \\
"vaddpd %%ymm10, %%ymm6, %%ymm10 \\n\\t" \\
"vaddpd %%ymm8, %%ymm7, %%ymm8 \\n\\t" \\
" \\n\\t" \\
" \\n\\t" \\
" \\n\\t" /* iteration 2 */ \\
"vmovapd 5 * 32(%%rax), %%ymm1 \\n\\t" /* preload a47 for iter 2 */ \\
"vmulpd %%ymm0, %%ymm2, %%ymm6 \\n\\t" \\
"vperm2f128 $0x3, %%ymm2, %%ymm2, %%ymm4 \\n\\t" \\
"vmulpd %%ymm0, %%ymm3, %%ymm7 \\n\\t" \\
"vperm2f128 $0x3, %%ymm3, %%ymm3, %%ymm5 \\n\\t" \\
"vaddpd %%ymm15, %%ymm6, %%ymm15 \\n\\t" \\
"vaddpd %%ymm13, %%ymm7, %%ymm13 \\n\\t" \\
" \\n\\t" \\
"prefetcht0 20 * 32(%%rax) \\n\\t" /* prefetch a for iter 10 ( ? ) */ \\
"vmulpd %%ymm1, %%ymm2, %%ymm6 \\n\\t" \\
"vmovapd 3 * 32(%%rbx), %%ymm2 \\n\\t" /* preload b for iter 3 */ \\
"addq $4 * 4 * 8, %%rbx \\n\\t" /* b += 4*4 (unroll x nr) */ \\
"vmulpd %%ymm1, %%ymm3, %%ymm7 \\n\\t" \\
"vpermilpd $0x5, %%ymm2, %%ymm3 \\n\\t" \\
"vaddpd %%ymm14, %%ymm6, %%ymm14 \\n\\t" \\
"vaddpd %%ymm12, %%ymm7, %%ymm12 \\n\\t" \\
" \\n\\t" \\
"vmulpd %%ymm0, %%ymm4, %%ymm6 \\n\\t" \\
"vmulpd %%ymm0, %%ymm5, %%ymm7 \\n\\t" \\
"vmovapd 6 * 32(%%rax), %%ymm0 \\n\\t" /* preload a03 for iter 3 */ \\
"vaddpd %%ymm11, %%ymm6, %%ymm11 \\n\\t" \\
"vaddpd %%ymm9, %%ymm7, %%ymm9 \\n\\t" \\
"prefetcht0 2 * 32(%%r15) \\n\\t" /* prefetch b_next[2*4] */ \\
" \\n\\t" \\
"vmulpd %%ymm1, %%ymm4, %%ymm6 \\n\\t" \\
"vmulpd %%ymm1, %%ymm5, %%ymm7 \\n\\t" \\
"vaddpd %%ymm10, %%ymm6, %%ymm10 \\n\\t" \\
"vaddpd %%ymm8, %%ymm7, %%ymm8 \\n\\t" \\
" \\n\\t" \\
" \\n\\t" \\
" \\n\\t" /* iteration 3 */ \\
"vmovapd 7 * 32(%%rax), %%ymm1 \\n\\t" /* preload a47 for iter 3 */ \\
"addq $4 * 8 * 8, %%rax \\n\\t" /* a += 4*8 (unroll x mr) */ \\
"vmulpd %%ymm0, %%ymm2, %%ymm6 \\n\\t" \\
"vperm2f128 $0x3, %%ymm2, %%ymm2, %%ymm4 \\n\\t" \\
"vmulpd %%ymm0, %%ymm3, %%ymm7 \\n\\t" \\
"vperm2f128 $0x3, %%ymm3, %%ymm3, %%ymm5 \\n\\t" \\
"vaddpd %%ymm15, %%ymm6, %%ymm15 \\n\\t" \\
"vaddpd %%ymm13, %%ymm7, %%ymm13 \\n\\t" \\
" \\n\\t" \\
"prefetcht0 14 * 32(%%rax) \\n\\t" /* prefetch a for iter 11 ( ? ) */ \\
"vmulpd %%ymm1, %%ymm2, %%ymm6 \\n\\t" \\
"vmovapd 0 * 32(%%rbx), %%ymm2 \\n\\t" /* preload b for iter 4 */ \\
"vmulpd %%ymm1, %%ymm3, %%ymm7 \\n\\t" \\
"vpermilpd $0x5, %%ymm2, %%ymm3 \\n\\t" \\
"vaddpd %%ymm14, %%ymm6, %%ymm14 \\n\\t" \\
"vaddpd %%ymm12, %%ymm7, %%ymm12 \\n\\t" \\
" \\n\\t" \\
"vmulpd %%ymm0, %%ymm4, %%ymm6 \\n\\t" \\
"vmulpd %%ymm0, %%ymm5, %%ymm7 \\n\\t" \\
"vmovapd 0 * 32(%%rax), %%ymm0 \\n\\t" /* preload a03 for iter 4 */ \\
"vaddpd %%ymm11, %%ymm6, %%ymm11 \\n\\t" \\
"vaddpd %%ymm9, %%ymm7, %%ymm9 \\n\\t" \\
" \\n\\t" \\
"vmulpd %%ymm1, %%ymm4, %%ymm6 \\n\\t" \\
"vmulpd %%ymm1, %%ymm5, %%ymm7 \\n\\t" \\
"vaddpd %%ymm10, %%ymm6, %%ymm10 \\n\\t" \\
"vaddpd %%ymm8, %%ymm7, %%ymm8 \\n\\t" \\
" \\n\\t" \\
" \\n\\t" \\
" \\n\\t" \\
" \\n\\t" \\
"decq %%rsi \\n\\t" /* i -= 1; */ \\
''' )
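# Emit the RANKK_LOOPKLEFT macro: one non-unrolled rank-1 update used by the
# edge loop that consumes the k_left = k % 4 remainder iterations.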
def macro_rankk_loopkleft_assembly( myfile ):
myfile.write( \
'''\
#define RANKK_LOOPKLEFT \\
"vmovapd 1 * 32(%%rax), %%ymm1 \\n\\t" /* preload a47 */ \\
"addq $8 * 1 * 8, %%rax \\n\\t" /* a += 8 (1 x mr) */ \\
"vmulpd %%ymm0, %%ymm2, %%ymm6 \\n\\t" \\
"vperm2f128 $0x3, %%ymm2, %%ymm2, %%ymm4 \\n\\t" \\
"vmulpd %%ymm0, %%ymm3, %%ymm7 \\n\\t" \\
"vperm2f128 $0x3, %%ymm3, %%ymm3, %%ymm5 \\n\\t" \\
"vaddpd %%ymm15, %%ymm6, %%ymm15 \\n\\t" \\
"vaddpd %%ymm13, %%ymm7, %%ymm13 \\n\\t" \\
" \\n\\t" \\
"prefetcht0 14 * 32(%%rax) \\n\\t" /* prefetch a03 for iter 7 later ( ? ) */ \\
"vmulpd %%ymm1, %%ymm2, %%ymm6 \\n\\t" \\
"vmovapd 1 * 32(%%rbx), %%ymm2 \\n\\t" \\
"addq $4 * 1 * 8, %%rbx \\n\\t" /* b += 4 (1 x nr) */ \\
"vmulpd %%ymm1, %%ymm3, %%ymm7 \\n\\t" \\
"vpermilpd $0x5, %%ymm2, %%ymm3 \\n\\t" \\
"vaddpd %%ymm14, %%ymm6, %%ymm14 \\n\\t" \\
"vaddpd %%ymm12, %%ymm7, %%ymm12 \\n\\t" \\
" \\n\\t" \\
"vmulpd %%ymm0, %%ymm4, %%ymm6 \\n\\t" \\
"vmulpd %%ymm0, %%ymm5, %%ymm7 \\n\\t" \\
"vmovapd 0 * 32(%%rax), %%ymm0 \\n\\t" \\
"vaddpd %%ymm11, %%ymm6, %%ymm11 \\n\\t" \\
"vaddpd %%ymm9, %%ymm7, %%ymm9 \\n\\t" \\
" \\n\\t" \\
"vmulpd %%ymm1, %%ymm4, %%ymm6 \\n\\t" \\
"vmulpd %%ymm1, %%ymm5, %%ymm7 \\n\\t" \\
"vaddpd %%ymm10, %%ymm6, %%ymm10 \\n\\t" \\
"vaddpd %%ymm8, %%ymm7, %%ymm8 \\n\\t" \\
" \\n\\t" \\
" \\n\\t" \\
"decq %%rsi \\n\\t" /* i -= 1; */ \\
''' )
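# Emit the RANKK_POSTACCUM macro: the vshufpd/vperm2f128 network that unshuffles
# the rotated accumulator layout so each of ymm8-ymm15 ends up holding one
# contiguous 4x1 column of the 8x4 result, as drawn in the ab.. diagrams below.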
def macro_rankk_postaccum_assembly( myfile ):
myfile.write( \
'''\
#define RANKK_POSTACCUM \\
" \\n\\t" \\
" \\n\\t" /* ymm15: ymm13: ymm11: ymm9: */ \\
" \\n\\t" /* ( ab00 ( ab01 ( ab02 ( ab03 */ \\
" \\n\\t" /* ab11 ab10 ab13 ab12 */ \\
" \\n\\t" /* ab22 ab23 ab20 ab21 */ \\
" \\n\\t" /* ab33 ) ab32 ) ab31 ) ab30 ) */ \\
" \\n\\t" \\
" \\n\\t" /* ymm14: ymm12: ymm10: ymm8: */ \\
" \\n\\t" /* ( ab40 ( ab41 ( ab42 ( ab43 */ \\
" \\n\\t" /* ab51 ab50 ab53 ab52 */ \\
" \\n\\t" /* ab62 ab63 ab60 ab61 */ \\
" \\n\\t" /* ab73 ) ab72 ) ab71 ) ab70 ) */ \\
" \\n\\t" \\
"vmovapd %%ymm15, %%ymm7 \\n\\t" \\
"vshufpd $0xa, %%ymm15, %%ymm13, %%ymm15 \\n\\t" \\
"vshufpd $0xa, %%ymm13, %%ymm7, %%ymm13 \\n\\t" \\
" \\n\\t" \\
"vmovapd %%ymm11, %%ymm7 \\n\\t" \\
"vshufpd $0xa, %%ymm11, %%ymm9, %%ymm11 \\n\\t" \\
"vshufpd $0xa, %%ymm9, %%ymm7, %%ymm9 \\n\\t" \\
" \\n\\t" \\
"vmovapd %%ymm14, %%ymm7 \\n\\t" \\
"vshufpd $0xa, %%ymm14, %%ymm12, %%ymm14 \\n\\t" \\
"vshufpd $0xa, %%ymm12, %%ymm7, %%ymm12 \\n\\t" \\
" \\n\\t" \\
"vmovapd %%ymm10, %%ymm7 \\n\\t" \\
"vshufpd $0xa, %%ymm10, %%ymm8, %%ymm10 \\n\\t" \\
"vshufpd $0xa, %%ymm8, %%ymm7, %%ymm8 \\n\\t" \\
" \\n\\t" \\
" \\n\\t" /* ymm15: ymm13: ymm11: ymm9: */ \\
" \\n\\t" /* ( ab01 ( ab00 ( ab03 ( ab02 */ \\
" \\n\\t" /* ab11 ab10 ab13 ab12 */ \\
" \\n\\t" /* ab23 ab22 ab21 ab20 */ \\
" \\n\\t" /* ab33 ) ab32 ) ab31 ) ab30 ) */ \\
" \\n\\t" \\
" \\n\\t" /* ymm14: ymm12: ymm10: ymm8: */ \\
" \\n\\t" /* ( ab41 ( ab40 ( ab43 ( ab42 */ \\
" \\n\\t" /* ab51 ab50 ab53 ab52 */ \\
" \\n\\t" /* ab63 ab62 ab61 ab60 */ \\
" \\n\\t" /* ab73 ) ab72 ) ab71 ) ab70 ) */ \\
" \\n\\t" \\
"vmovapd %%ymm15, %%ymm7 \\n\\t" \\
"vperm2f128 $0x30, %%ymm15, %%ymm11, %%ymm15 \\n\\t" \\
"vperm2f128 $0x12, %%ymm7, %%ymm11, %%ymm11 \\n\\t" \\
" \\n\\t" \\
"vmovapd %%ymm13, %%ymm7 \\n\\t" \\
"vperm2f128 $0x30, %%ymm13, %%ymm9, %%ymm13 \\n\\t" \\
"vperm2f128 $0x12, %%ymm7, %%ymm9, %%ymm9 \\n\\t" \\
" \\n\\t" \\
"vmovapd %%ymm14, %%ymm7 \\n\\t" \\
"vperm2f128 $0x30, %%ymm14, %%ymm10, %%ymm14 \\n\\t" \\
"vperm2f128 $0x12, %%ymm7, %%ymm10, %%ymm10 \\n\\t" \\
" \\n\\t" \\
"vmovapd %%ymm12, %%ymm7 \\n\\t" \\
"vperm2f128 $0x30, %%ymm12, %%ymm8, %%ymm12 \\n\\t" \\
"vperm2f128 $0x12, %%ymm7, %%ymm8, %%ymm8 \\n\\t" \\
" \\n\\t" \\
" \\n\\t" /* ymm9: ymm11: ymm13: ymm15: */ \\
" \\n\\t" /* ( ab00 ( ab01 ( ab02 ( ab03 */ \\
" \\n\\t" /* ab10 ab11 ab12 ab13 */ \\
" \\n\\t" /* ab20 ab21 ab22 ab23 */ \\
" \\n\\t" /* ab30 ) ab31 ) ab32 ) ab33 ) */ \\
" \\n\\t" \\
" \\n\\t" /* ymm8: ymm10: ymm12: ymm14: */ \\
" \\n\\t" /* ( ab40 ( ab41 ( ab42 ( ab43 */ \\
" \\n\\t" /* ab50 ab51 ab52 ab53 */ \\
" \\n\\t" /* ab60 ab61 ab62 ab63 */ \\
" \\n\\t" /* ab70 ) ab71 ) ab72 ) ab73 ) */
''' )
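# Write the rank-k update by stitching the RANKK_* macros together with the loop
# labels and branches; labels carry the kernel index suffix so several generated
# micro-kernels can coexist in one translation unit.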
def write_common_simple_rankk_assembly( myfile, index ):
myfile.write( \
'''\
RANKK_XOR0
"je .DCONSIDKLEFT{0} \\n\\t" // if i == 0, jump to code that ( v )
" \\n\\t" // contains the k_left loop.
".DLOOPKITER{0}: \\n\\t" // MAIN LOOP
" \\n\\t"
RANKK_LOOPKITER
"jne .DLOOPKITER{0} \\n\\t" // iterate again if i != 0.
".DCONSIDKLEFT{0}: \\n\\t"
"movq %1, %%rsi \\n\\t" // i = k_left;
"testq %%rsi, %%rsi \\n\\t" // check i via logical AND.
"je .DPOSTACCUM{0} \\n\\t" // if i == 0, we're done; jump to end.
" \\n\\t" // else, we prepare to enter k_left loop.
".DLOOPKLEFT{0}: \\n\\t" // EDGE LOOP
" \\n\\t"
RANKK_LOOPKLEFT
"jne .DLOOPKLEFT{0} \\n\\t" // iterate again if i != 0.
" \\n\\t"
" \\n\\t"
" \\n\\t"
".DPOSTACCUM{0}: \\n\\t"
" \\n\\t"
RANKK_POSTACCUM
'''.format( index ) )
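# Write the same rank-k update fully inlined instead of via the RANKK_* macros;
# generate_micro_kernel below switches between RANKK_UPDATE and these writers by
# commenting the corresponding calls in or out.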
def write_common_rankk_assembly( myfile, index ):
#write_line( myfile, 1, )
myfile.write( \
'''\
" \\n\\t"
"vxorpd %%ymm8, %%ymm8, %%ymm8 \\n\\t" // set ymm8 to 0 ( v )
"vxorpd %%ymm9, %%ymm9, %%ymm9 \\n\\t"
"vxorpd %%ymm10, %%ymm10, %%ymm10 \\n\\t"
"vxorpd %%ymm11, %%ymm11, %%ymm11 \\n\\t"
"vxorpd %%ymm12, %%ymm12, %%ymm12 \\n\\t"
"vxorpd %%ymm13, %%ymm13, %%ymm13 \\n\\t"
"vxorpd %%ymm14, %%ymm14, %%ymm14 \\n\\t"
"vxorpd %%ymm15, %%ymm15, %%ymm15 \\n\\t"
" \\n\\t"
" \\n\\t"
" \\n\\t"
"movq %0, %%rsi \\n\\t" // i = k_iter; ( v )
"testq %%rsi, %%rsi \\n\\t" // check i via logical AND. ( v )
"je .DCONSIDKLEFT{0} \\n\\t" // if i == 0, jump to code that ( v )
" \\n\\t" // contains the k_left loop.
" \\n\\t"
" \\n\\t"
".DLOOPKITER{0}: \\n\\t" // MAIN LOOP
" \\n\\t"
"addq $4 * 4 * 8, %%r15 \\n\\t" // b_next += 4*4 (unroll x nr) ( v )
" \\n\\t"
" \\n\\t" // iteration 0
"vmovapd 1 * 32(%%rax), %%ymm1 \\n\\t" // preload a47 for iter 0
"vmulpd %%ymm0, %%ymm2, %%ymm6 \\n\\t" // ymm6 ( c_tmp0 ) = ymm0 ( a03 ) * ymm2( b0 )
"vperm2f128 $0x3, %%ymm2, %%ymm2, %%ymm4 \\n\\t" // ymm4 ( b0x3_0 )
"vmulpd %%ymm0, %%ymm3, %%ymm7 \\n\\t" // ymm7 ( c_tmp1 ) = ymm0 ( a03 ) * ymm3( b0x5 )
"vperm2f128 $0x3, %%ymm3, %%ymm3, %%ymm5 \\n\\t" // ymm5 ( b0x3_1 )
"vaddpd %%ymm15, %%ymm6, %%ymm15 \\n\\t" // ymm15 ( c_03_0 ) += ymm6( c_tmp0 )
"vaddpd %%ymm13, %%ymm7, %%ymm13 \\n\\t" // ymm13 ( c_03_1 ) += ymm7( c_tmp1 )
" \\n\\t"
"prefetcht0 16 * 32(%%rax) \\n\\t" // prefetch a03 for iter 1
"vmulpd %%ymm1, %%ymm2, %%ymm6 \\n\\t"
"vmovapd 1 * 32(%%rbx), %%ymm2 \\n\\t" // preload b for iter 1
"vmulpd %%ymm1, %%ymm3, %%ymm7 \\n\\t"
"vpermilpd $0x5, %%ymm2, %%ymm3 \\n\\t"
"vaddpd %%ymm14, %%ymm6, %%ymm14 \\n\\t"
"vaddpd %%ymm12, %%ymm7, %%ymm12 \\n\\t"
" \\n\\t"
"vmulpd %%ymm0, %%ymm4, %%ymm6 \\n\\t"
"vmulpd %%ymm0, %%ymm5, %%ymm7 \\n\\t"
"vmovapd 2 * 32(%%rax), %%ymm0 \\n\\t" // preload a03 for iter 1
"vaddpd %%ymm11, %%ymm6, %%ymm11 \\n\\t"
"vaddpd %%ymm9, %%ymm7, %%ymm9 \\n\\t"
"prefetcht0 0 * 32(%%r15) \\n\\t" // prefetch b_next[0*4]
" \\n\\t"
"vmulpd %%ymm1, %%ymm4, %%ymm6 \\n\\t"
"vmulpd %%ymm1, %%ymm5, %%ymm7 \\n\\t"
"vaddpd %%ymm10, %%ymm6, %%ymm10 \\n\\t"
"vaddpd %%ymm8, %%ymm7, %%ymm8 \\n\\t"
" \\n\\t"
" \\n\\t"
" \\n\\t" // iteration 1
"vmovapd 3 * 32(%%rax), %%ymm1 \\n\\t" // preload a47 for iter 1
"vmulpd %%ymm0, %%ymm2, %%ymm6 \\n\\t"
"vperm2f128 $0x3, %%ymm2, %%ymm2, %%ymm4 \\n\\t"
"vmulpd %%ymm0, %%ymm3, %%ymm7 \\n\\t"
"vperm2f128 $0x3, %%ymm3, %%ymm3, %%ymm5 \\n\\t"
"vaddpd %%ymm15, %%ymm6, %%ymm15 \\n\\t"
"vaddpd %%ymm13, %%ymm7, %%ymm13 \\n\\t"
" \\n\\t"
"prefetcht0 18 * 32(%%rax) \\n\\t" // prefetch a for iter 9 ( ? )
"vmulpd %%ymm1, %%ymm2, %%ymm6 \\n\\t"
"vmovapd 2 * 32(%%rbx), %%ymm2 \\n\\t" // preload b for iter 2
"vmulpd %%ymm1, %%ymm3, %%ymm7 \\n\\t"
"vpermilpd $0x5, %%ymm2, %%ymm3 \\n\\t"
"vaddpd %%ymm14, %%ymm6, %%ymm14 \\n\\t"
"vaddpd %%ymm12, %%ymm7, %%ymm12 \\n\\t"
" \\n\\t"
"vmulpd %%ymm0, %%ymm4, %%ymm6 \\n\\t"
"vmulpd %%ymm0, %%ymm5, %%ymm7 \\n\\t"
"vmovapd 4 * 32(%%rax), %%ymm0 \\n\\t" // preload a03 for iter 2
"vaddpd %%ymm11, %%ymm6, %%ymm11 \\n\\t"
"vaddpd %%ymm9, %%ymm7, %%ymm9 \\n\\t"
" \\n\\t"
"vmulpd %%ymm1, %%ymm4, %%ymm6 \\n\\t"
"vmulpd %%ymm1, %%ymm5, %%ymm7 \\n\\t"
"vaddpd %%ymm10, %%ymm6, %%ymm10 \\n\\t"
"vaddpd %%ymm8, %%ymm7, %%ymm8 \\n\\t"
" \\n\\t"
" \\n\\t"
" \\n\\t" // iteration 2
"vmovapd 5 * 32(%%rax), %%ymm1 \\n\\t" // preload a47 for iter 2
"vmulpd %%ymm0, %%ymm2, %%ymm6 \\n\\t"
"vperm2f128 $0x3, %%ymm2, %%ymm2, %%ymm4 \\n\\t"
"vmulpd %%ymm0, %%ymm3, %%ymm7 \\n\\t"
"vperm2f128 $0x3, %%ymm3, %%ymm3, %%ymm5 \\n\\t"
"vaddpd %%ymm15, %%ymm6, %%ymm15 \\n\\t"
"vaddpd %%ymm13, %%ymm7, %%ymm13 \\n\\t"
" \\n\\t"
"prefetcht0 20 * 32(%%rax) \\n\\t" // prefetch a for iter 10 ( ? )
"vmulpd %%ymm1, %%ymm2, %%ymm6 \\n\\t"
"vmovapd 3 * 32(%%rbx), %%ymm2 \\n\\t" // preload b for iter 3
"addq $4 * 4 * 8, %%rbx \\n\\t" // b += 4*4 (unroll x nr)
"vmulpd %%ymm1, %%ymm3, %%ymm7 \\n\\t"
"vpermilpd $0x5, %%ymm2, %%ymm3 \\n\\t"
"vaddpd %%ymm14, %%ymm6, %%ymm14 \\n\\t"
"vaddpd %%ymm12, %%ymm7, %%ymm12 \\n\\t"
" \\n\\t"
"vmulpd %%ymm0, %%ymm4, %%ymm6 \\n\\t"
"vmulpd %%ymm0, %%ymm5, %%ymm7 \\n\\t"
"vmovapd 6 * 32(%%rax), %%ymm0 \\n\\t" // preload a03 for iter 3
"vaddpd %%ymm11, %%ymm6, %%ymm11 \\n\\t"
"vaddpd %%ymm9, %%ymm7, %%ymm9 \\n\\t"
"prefetcht0 2 * 32(%%r15) \\n\\t" // prefetch b_next[2*4]
" \\n\\t"
"vmulpd %%ymm1, %%ymm4, %%ymm6 \\n\\t"
"vmulpd %%ymm1, %%ymm5, %%ymm7 \\n\\t"
"vaddpd %%ymm10, %%ymm6, %%ymm10 \\n\\t"
"vaddpd %%ymm8, %%ymm7, %%ymm8 \\n\\t"
" \\n\\t"
" \\n\\t"
" \\n\\t" // iteration 3
"vmovapd 7 * 32(%%rax), %%ymm1 \\n\\t" // preload a47 for iter 3
"addq $4 * 8 * 8, %%rax \\n\\t" // a += 4*8 (unroll x mr)
"vmulpd %%ymm0, %%ymm2, %%ymm6 \\n\\t"
"vperm2f128 $0x3, %%ymm2, %%ymm2, %%ymm4 \\n\\t"
"vmulpd %%ymm0, %%ymm3, %%ymm7 \\n\\t"
"vperm2f128 $0x3, %%ymm3, %%ymm3, %%ymm5 \\n\\t"
"vaddpd %%ymm15, %%ymm6, %%ymm15 \\n\\t"
"vaddpd %%ymm13, %%ymm7, %%ymm13 \\n\\t"
" \\n\\t"
"prefetcht0 14 * 32(%%rax) \\n\\t" // prefetch a for iter 11 ( ? )
"vmulpd %%ymm1, %%ymm2, %%ymm6 \\n\\t"
"vmovapd 0 * 32(%%rbx), %%ymm2 \\n\\t" // preload b for iter 4
"vmulpd %%ymm1, %%ymm3, %%ymm7 \\n\\t"
"vpermilpd $0x5, %%ymm2, %%ymm3 \\n\\t"
"vaddpd %%ymm14, %%ymm6, %%ymm14 \\n\\t"
"vaddpd %%ymm12, %%ymm7, %%ymm12 \\n\\t"
" \\n\\t"
"vmulpd %%ymm0, %%ymm4, %%ymm6 \\n\\t"
"vmulpd %%ymm0, %%ymm5, %%ymm7 \\n\\t"
"vmovapd 0 * 32(%%rax), %%ymm0 \\n\\t" // preload a03 for iter 4
"vaddpd %%ymm11, %%ymm6, %%ymm11 \\n\\t"
"vaddpd %%ymm9, %%ymm7, %%ymm9 \\n\\t"
" \\n\\t"
"vmulpd %%ymm1, %%ymm4, %%ymm6 \\n\\t"
"vmulpd %%ymm1, %%ymm5, %%ymm7 \\n\\t"
"vaddpd %%ymm10, %%ymm6, %%ymm10 \\n\\t"
"vaddpd %%ymm8, %%ymm7, %%ymm8 \\n\\t"
" \\n\\t"
" \\n\\t"
" \\n\\t"
" \\n\\t"
"decq %%rsi \\n\\t" // i -= 1;
"jne .DLOOPKITER{0} \\n\\t" // iterate again if i != 0.
" \\n\\t"
" \\n\\t"
" \\n\\t"
" \\n\\t"
".DCONSIDKLEFT{0}: \\n\\t"
" \\n\\t"
"movq %1, %%rsi \\n\\t" // i = k_left;
"testq %%rsi, %%rsi \\n\\t" // check i via logical AND.
"je .DPOSTACCUM{0} \\n\\t" // if i == 0, we're done; jump to end.
" \\n\\t" // else, we prepare to enter k_left loop.
" \\n\\t"
" \\n\\t"
".DLOOPKLEFT{0}: \\n\\t" // EDGE LOOP
" \\n\\t"
"vmovapd 1 * 32(%%rax), %%ymm1 \\n\\t" // preload a47
"addq $8 * 1 * 8, %%rax \\n\\t" // a += 8 (1 x mr)
"vmulpd %%ymm0, %%ymm2, %%ymm6 \\n\\t"
"vperm2f128 $0x3, %%ymm2, %%ymm2, %%ymm4 \\n\\t"
"vmulpd %%ymm0, %%ymm3, %%ymm7 \\n\\t"
"vperm2f128 $0x3, %%ymm3, %%ymm3, %%ymm5 \\n\\t"
"vaddpd %%ymm15, %%ymm6, %%ymm15 \\n\\t"
"vaddpd %%ymm13, %%ymm7, %%ymm13 \\n\\t"
" \\n\\t"
"prefetcht0 14 * 32(%%rax) \\n\\t" // prefetch a03 for iter 7 later ( ? )
"vmulpd %%ymm1, %%ymm2, %%ymm6 \\n\\t"
"vmovapd 1 * 32(%%rbx), %%ymm2 \\n\\t"
"addq $4 * 1 * 8, %%rbx \\n\\t" // b += 4 (1 x nr)
"vmulpd %%ymm1, %%ymm3, %%ymm7 \\n\\t"
"vpermilpd $0x5, %%ymm2, %%ymm3 \\n\\t"
"vaddpd %%ymm14, %%ymm6, %%ymm14 \\n\\t"
"vaddpd %%ymm12, %%ymm7, %%ymm12 \\n\\t"
" \\n\\t"
"vmulpd %%ymm0, %%ymm4, %%ymm6 \\n\\t"
"vmulpd %%ymm0, %%ymm5, %%ymm7 \\n\\t"
"vmovapd 0 * 32(%%rax), %%ymm0 \\n\\t"
"vaddpd %%ymm11, %%ymm6, %%ymm11 \\n\\t"
"vaddpd %%ymm9, %%ymm7, %%ymm9 \\n\\t"
" \\n\\t"
"vmulpd %%ymm1, %%ymm4, %%ymm6 \\n\\t"
"vmulpd %%ymm1, %%ymm5, %%ymm7 \\n\\t"
"vaddpd %%ymm10, %%ymm6, %%ymm10 \\n\\t"
"vaddpd %%ymm8, %%ymm7, %%ymm8 \\n\\t"
" \\n\\t"
" \\n\\t"
"decq %%rsi \\n\\t" // i -= 1;
"jne .DLOOPKLEFT{0} \\n\\t" // iterate again if i != 0.
" \\n\\t"
" \\n\\t"
" \\n\\t"
".DPOSTACCUM{0}: \\n\\t"
" \\n\\t"
" \\n\\t"
" \\n\\t" // ymm15: ymm13: ymm11: ymm9:
" \\n\\t" // ( ab00 ( ab01 ( ab02 ( ab03
" \\n\\t" // ab11 ab10 ab13 ab12
" \\n\\t" // ab22 ab23 ab20 ab21
" \\n\\t" // ab33 ) ab32 ) ab31 ) ab30 )
" \\n\\t"
" \\n\\t" // ymm14: ymm12: ymm10: ymm8:
" \\n\\t" // ( ab40 ( ab41 ( ab42 ( ab43
" \\n\\t" // ab51 ab50 ab53 ab52
" \\n\\t" // ab62 ab63 ab60 ab61
" \\n\\t" // ab73 ) ab72 ) ab71 ) ab70 )
" \\n\\t"
"vmovapd %%ymm15, %%ymm7 \\n\\t"
"vshufpd $0xa, %%ymm15, %%ymm13, %%ymm15 \\n\\t"
"vshufpd $0xa, %%ymm13, %%ymm7, %%ymm13 \\n\\t"
" \\n\\t"
"vmovapd %%ymm11, %%ymm7 \\n\\t"
"vshufpd $0xa, %%ymm11, %%ymm9, %%ymm11 \\n\\t"
"vshufpd $0xa, %%ymm9, %%ymm7, %%ymm9 \\n\\t"
" \\n\\t"
"vmovapd %%ymm14, %%ymm7 \\n\\t"
"vshufpd $0xa, %%ymm14, %%ymm12, %%ymm14 \\n\\t"
"vshufpd $0xa, %%ymm12, %%ymm7, %%ymm12 \\n\\t"
" \\n\\t"
"vmovapd %%ymm10, %%ymm7 \\n\\t"
"vshufpd $0xa, %%ymm10, %%ymm8, %%ymm10 \\n\\t"
"vshufpd $0xa, %%ymm8, %%ymm7, %%ymm8 \\n\\t"
" \\n\\t"
" \\n\\t" // ymm15: ymm13: ymm11: ymm9:
" \\n\\t" // ( ab01 ( ab00 ( ab03 ( ab02
" \\n\\t" // ab11 ab10 ab13 ab12
" \\n\\t" // ab23 ab22 ab21 ab20
" \\n\\t" // ab33 ) ab32 ) ab31 ) ab30 )
" \\n\\t"
" \\n\\t" // ymm14: ymm12: ymm10: ymm8:
" \\n\\t" // ( ab41 ( ab40 ( ab43 ( ab42
" \\n\\t" // ab51 ab50 ab53 ab52
" \\n\\t" // ab63 ab62 ab61 ab60
" \\n\\t" // ab73 ) ab72 ) ab71 ) ab70 )
" \\n\\t"
"vmovapd %%ymm15, %%ymm7 \\n\\t"
"vperm2f128 $0x30, %%ymm15, %%ymm11, %%ymm15 \\n\\t"
"vperm2f128 $0x12, %%ymm7, %%ymm11, %%ymm11 \\n\\t"
" \\n\\t"
"vmovapd %%ymm13, %%ymm7 \\n\\t"
"vperm2f128 $0x30, %%ymm13, %%ymm9, %%ymm13 \\n\\t"
"vperm2f128 $0x12, %%ymm7, %%ymm9, %%ymm9 \\n\\t"
" \\n\\t"
"vmovapd %%ymm14, %%ymm7 \\n\\t"
"vperm2f128 $0x30, %%ymm14, %%ymm10, %%ymm14 \\n\\t"
"vperm2f128 $0x12, %%ymm7, %%ymm10, %%ymm10 \\n\\t"
" \\n\\t"
"vmovapd %%ymm12, %%ymm7 \\n\\t"
"vperm2f128 $0x30, %%ymm12, %%ymm8, %%ymm12 \\n\\t"
"vperm2f128 $0x12, %%ymm7, %%ymm8, %%ymm8 \\n\\t"
" \\n\\t"
" \\n\\t" // ymm9: ymm11: ymm13: ymm15:
" \\n\\t" // ( ab00 ( ab01 ( ab02 ( ab03
" \\n\\t" // ab10 ab11 ab12 ab13
" \\n\\t" // ab20 ab21 ab22 ab23
" \\n\\t" // ab30 ) ab31 ) ab32 ) ab33 )
" \\n\\t"
" \\n\\t" // ymm8: ymm10: ymm12: ymm14:
" \\n\\t" // ( ab40 ( ab41 ( ab42 ( ab43
" \\n\\t" // ab50 ab51 ab52 ab53
" \\n\\t" // ab60 ab61 ab62 ab63
" \\n\\t" // ab70 ) ab71 ) ab72 ) ab73 )
" \\n\\t"
'''.format( index ) )
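# Write the C prologue of a micro-kernel: split k into k_iter (k / 4 unrolled
# passes) and k_left (k % 4 remainder), then open the inline-asm block with the
# INITIALIZE macro.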
def write_common_start_assembly( myfile ):
write_line( myfile, 1, 'unsigned long long k_iter = (unsigned long long)k / 4;' )
write_line( myfile, 1, 'unsigned long long k_left = (unsigned long long)k % 4;' )
write_line( myfile, 1, '__asm__ volatile' )
write_line( myfile, 1, '(' )
write_line( myfile, 1, 'INITIALIZE' )
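# Emit the INITIALIZE macro: load the packed-A, packed-B and b_next pointers
# (asm operands %2, %3, %4) into rax, rbx and r15 (r15 pre-adjusted by -4*64
# bytes), preload the first A/B vectors into ymm0/ymm2/ymm3, and scale ldc
# (operand %5) by sizeof(double) into rdi.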
def macro_initialize_assembly( myfile ):
myfile.write( \
'''\
#define INITIALIZE \\
"movq %2, %%rax \\n\\t" /* load address of a. */ \\
"movq %3, %%rbx \\n\\t" /* load address of b. */ \\
"movq %4, %%r15 \\n\\t" /* load address of b_next. */ \\
"addq $-4 * 64, %%r15 \\n\\t" \\
" \\n\\t" \\
"vmovapd 0 * 32(%%rax), %%ymm0 \\n\\t" /* initialize loop by pre-loading */ \\
"vmovapd 0 * 32(%%rbx), %%ymm2 \\n\\t" /* elements of a and b. */ \\
"vpermilpd $0x5, %%ymm2, %%ymm3 \\n\\t" \\
" \\n\\t" \\
"movq %5, %%rdi \\n\\t" /* load ldc */ \\
"leaq (,%%rdi,8), %%rdi \\n\\t" /* ldc * sizeof(double) */ \\
" \\n\\t" \\
''' )
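# Write the tail of the inline-asm statement: the .DDONE label, the empty
# output-operand list, the input operands (k_iter, k_left, a, b, aux->b_next,
# ldc, one c pointer per nonzero coefficient and, optionally, alpha_list), and
# the register/memory clobber list.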
def write_common_end_assembly( myfile, nnz, index, has_nontrivial=False ):
write_line( myfile, 1, '" \\n\\t"' )
write_line( myfile, 1, '".DDONE{0}: \\n\\t"'.format( index ) )
write_line( myfile, 1, '" \\n\\t"' )
write_line( myfile, 1, ': // output operands (none)' )
write_line( myfile, 1, ': // input operands' )
write_line( myfile, 1, ' "m" (k_iter), // 0' )
write_line( myfile, 1, ' "m" (k_left), // 1' )
write_line( myfile, 1, ' "m" (a), // 2' )
write_line( myfile, 1, ' "m" (b), // 3' )
write_line( myfile, 1, ' "m" (aux->b_next), // 4' )
write_line( myfile, 1, ' "m" (ldc), // 5' )
add = ''
# gcc extended asm requires input operands to be comma-separated, so keep a
# trailing comma on every c operand except the very last operand in the list.
num_operands = nnz + 1 if has_nontrivial else nnz
add += '\n '.join( [ ' "m" (c%d)%s // %d' % ( i, ',' if i + 1 < num_operands else '', i+6 ) for i in range( nnz ) ] )
if has_nontrivial:
add += '\n "m" (alpha_list) // %d' % (nnz + 6)
#write_line( myfile, 1, ' "m" (c) // 6' )
write_line( myfile, 1, add )
write_line( myfile, 1, ': // register clobber list' )
write_line( myfile, 1, ' "rax", "rbx", "rcx", "rdx", "rsi", "rdi",' )
write_line( myfile, 1, ' "r8", "r9", "r10", "r11", "r12", "r13", "r14", "r15",' )
write_line( myfile, 1, ' "xmm0", "xmm1", "xmm2", "xmm3",' )
write_line( myfile, 1, ' "xmm4", "xmm5", "xmm6", "xmm7",' )
write_line( myfile, 1, ' "xmm8", "xmm9", "xmm10", "xmm11",' )
write_line( myfile, 1, ' "xmm12", "xmm13", "xmm14", "xmm15",' )
write_line( myfile, 1, ' "memory"' )
write_line( myfile, 1, ');' )
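# Generate one complete micro-kernel: an inline C function named
# bl_dgemm_micro_kernel_stra_abc<index> taking k, packed a and b, ldc, one
# output pointer c<i> per nonzero coefficient (plus alpha_list when any
# coefficient is non-trivial) and an aux_t*, whose body is the inline-asm
# rank-k update followed by the C update code.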
def generate_micro_kernel( myfile, nonzero_coeffs, index ):
nnz = len( nonzero_coeffs )
#write_line( myfile, 1, 'a' )
add = 'inline void bl_dgemm_micro_kernel_stra_abc%d( int k, double *a, double *b, unsigned long long ldc, ' % index
add += ', '.join( ['double *c%d' % ( i ) for i in range( nnz )] )
if ( contain_nontrivial( nonzero_coeffs ) ):
add += ', double *alpha_list'
add += ', aux_t *aux ) {'
write_line(myfile, 0, add)
write_common_start_assembly( myfile )
write_prefetch_assembly( myfile, nonzero_coeffs )
write_line( myfile, 1, 'RANKK_UPDATE( %d )' % index )
#write_common_rankk_assembly( myfile, index )
#write_common_simple_rankk_assembly( myfile, index )
write_updatec_assembly( myfile, nonzero_coeffs )
write_common_end_assembly( myfile, nnz, index, contain_nontrivial( nonzero_coeffs ) )
write_line( myfile, 0, '}' )
#write_break( myfile )
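# Small driver: emit a single example kernel (coefficients [1, -1], index 3)
# into a.c for inspection.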
def main():
nonzero_coeffs = [ 1, -1 ]
myfile = open( "a.c", 'w' )
i = 3
generate_micro_kernel( myfile, nonzero_coeffs, i )
myfile.close() # flush the generated kernel to disk
if __name__ == '__main__':
main()
| 58.073276 | 538 | 0.329028 | 6,888 | 67,365 | 3.154036 | 0.050087 | 0.079632 | 0.020299 | 0.027066 | 0.865639 | 0.838389 | 0.802071 | 0.780207 | 0.759264 | 0.739333 | 0 | 0.110666 | 0.458888 | 67,365 | 1,159 | 539 | 58.123382 | 0.485323 | 0.008862 | 0 | 0.323529 | 0 | 0.014706 | 0.201692 | 0.009625 | 0 | 0 | 0 | 0 | 0 | 1 | 0.139706 | false | 0 | 0.014706 | 0 | 0.169118 | 0.007353 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
44e0d8a490a2a881ff8c6c7e7635b1db1cc5036f | 56,146 | py | Python | tests/test_clustering.py | mbhall88/tbpore | a9bd722d2bba1a09adf18401b87b7937adb25c1d | ["MIT"] | null | null | null | tests/test_clustering.py | mbhall88/tbpore | a9bd722d2bba1a09adf18401b87b7937adb25c1d | ["MIT"] | null | null | null | tests/test_clustering.py | mbhall88/tbpore | a9bd722d2bba1a09adf18401b87b7937adb25c1d | ["MIT"] | null | null | null | from pathlib import Path
import pytest
from tbpore.clustering import AsymmetrixMatrixError, get_clusters, produce_clusters
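# Pairwise SNP distance matrix fixture shared by the tests below.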
psdm_matrix = Path("tests/data/psdm_matrix/psdm.matrix.csv")
class TestClusteringIntegrationTests:
def test___several_thresholds(self):
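# Keys are clustering thresholds (maximum pairwise distance); values are the
# clusters expected from get_clusters on the fixture matrix at that threshold,
# with singletons listed as one-element clusters.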
threshold_to_expected_cluster = {
0: [
["R27657", "R28012", "R30078"],
["R18040", "R22601"],
["R20260", "R20574"],
["R30234", "R31095"],
["mada_1-25", "mada_2-25"],
["mada_1-46", "mada_2-46"],
["mada_1-53", "mada_2-53"],
["mada_132", "mada_152"],
["17_616026"],
["17_616156"],
["18_0621851"],
["18_0622057"],
["18_0622158"],
["18_0622224"],
["18_0622267"],
["18_0622291"],
["18_0622300"],
["18_0622312"],
["18_0622363"],
["18_0622365"],
["18_0622431"],
["18_0622434"],
["18_0622446"],
["18_0622455"],
["18_0622456"],
["18_0622465"],
["18_620606"],
["R15311"],
["R18043"],
["R20896"],
["R20983"],
["R21363"],
["R21408"],
["R21770"],
["R21839"],
["R21893"],
["R23146"],
["R23571"],
["R23887"],
["R24100"],
["R24120"],
["R25048"],
["R26778"],
["R26791"],
["R27252"],
["R27725"],
["R27937"],
["R28182"],
["R28581"],
["R28703"],
["R28980"],
["R29598"],
["R29816"],
["R30215"],
["R30396"],
["R30420"],
["R32929"],
["R36431"],
["R37765"],
["mada_1-1"],
["mada_1-10"],
["mada_1-11"],
["mada_1-12"],
["mada_1-13"],
["mada_1-14"],
["mada_1-15"],
["mada_1-16"],
["mada_1-17"],
["mada_1-18"],
["mada_1-19"],
["mada_1-2"],
["mada_1-20"],
["mada_1-21"],
["mada_1-22"],
["mada_1-28"],
["mada_1-3"],
["mada_1-30"],
["mada_1-32"],
["mada_1-33"],
["mada_1-36"],
["mada_1-38"],
["mada_1-39"],
["mada_1-40"],
["mada_1-41"],
["mada_1-43"],
["mada_1-44"],
["mada_1-47"],
["mada_1-48"],
["mada_1-5"],
["mada_1-50"],
["mada_1-51"],
["mada_1-54"],
["mada_1-6"],
["mada_1-7"],
["mada_1-8"],
["mada_102"],
["mada_103"],
["mada_104"],
["mada_105"],
["mada_106"],
["mada_107"],
["mada_109"],
["mada_110"],
["mada_111"],
["mada_112"],
["mada_113"],
["mada_115"],
["mada_116"],
["mada_117"],
["mada_118"],
["mada_120"],
["mada_121"],
["mada_122"],
["mada_123"],
["mada_124"],
["mada_125"],
["mada_126"],
["mada_127"],
["mada_128"],
["mada_129"],
["mada_130"],
["mada_131"],
["mada_133"],
["mada_134"],
["mada_135"],
["mada_136"],
["mada_137"],
["mada_139"],
["mada_140"],
["mada_141"],
["mada_142"],
["mada_143"],
["mada_144"],
["mada_148"],
["mada_150"],
["mada_151"],
["mada_154"],
["mada_2-1"],
["mada_2-31"],
["mada_2-34"],
["mada_2-42"],
["mada_2-50"],
],
1: [
["R18040", "R18043", "R22601"],
["R27657", "R28012", "R30078"],
["17_616026", "17_616156"],
["R20260", "R20574"],
["R20983", "R21770"],
["R30234", "R31095"],
["mada_1-25", "mada_2-25"],
["mada_1-46", "mada_2-46"],
["mada_1-53", "mada_2-53"],
["mada_132", "mada_152"],
["18_0621851"],
["18_0622057"],
["18_0622158"],
["18_0622224"],
["18_0622267"],
["18_0622291"],
["18_0622300"],
["18_0622312"],
["18_0622363"],
["18_0622365"],
["18_0622431"],
["18_0622434"],
["18_0622446"],
["18_0622455"],
["18_0622456"],
["18_0622465"],
["18_620606"],
["R15311"],
["R20896"],
["R21363"],
["R21408"],
["R21839"],
["R21893"],
["R23146"],
["R23571"],
["R23887"],
["R24100"],
["R24120"],
["R25048"],
["R26778"],
["R26791"],
["R27252"],
["R27725"],
["R27937"],
["R28182"],
["R28581"],
["R28703"],
["R28980"],
["R29598"],
["R29816"],
["R30215"],
["R30396"],
["R30420"],
["R32929"],
["R36431"],
["R37765"],
["mada_1-1"],
["mada_1-10"],
["mada_1-11"],
["mada_1-12"],
["mada_1-13"],
["mada_1-14"],
["mada_1-15"],
["mada_1-16"],
["mada_1-17"],
["mada_1-18"],
["mada_1-19"],
["mada_1-2"],
["mada_1-20"],
["mada_1-21"],
["mada_1-22"],
["mada_1-28"],
["mada_1-3"],
["mada_1-30"],
["mada_1-32"],
["mada_1-33"],
["mada_1-36"],
["mada_1-38"],
["mada_1-39"],
["mada_1-40"],
["mada_1-41"],
["mada_1-43"],
["mada_1-44"],
["mada_1-47"],
["mada_1-48"],
["mada_1-5"],
["mada_1-50"],
["mada_1-51"],
["mada_1-54"],
["mada_1-6"],
["mada_1-7"],
["mada_1-8"],
["mada_102"],
["mada_103"],
["mada_104"],
["mada_105"],
["mada_106"],
["mada_107"],
["mada_109"],
["mada_110"],
["mada_111"],
["mada_112"],
["mada_113"],
["mada_115"],
["mada_116"],
["mada_117"],
["mada_118"],
["mada_120"],
["mada_121"],
["mada_122"],
["mada_123"],
["mada_124"],
["mada_125"],
["mada_126"],
["mada_127"],
["mada_128"],
["mada_129"],
["mada_130"],
["mada_131"],
["mada_133"],
["mada_134"],
["mada_135"],
["mada_136"],
["mada_137"],
["mada_139"],
["mada_140"],
["mada_141"],
["mada_142"],
["mada_143"],
["mada_144"],
["mada_148"],
["mada_150"],
["mada_151"],
["mada_154"],
["mada_2-1"],
["mada_2-31"],
["mada_2-34"],
["mada_2-42"],
["mada_2-50"],
],
2: [
["R18040", "R18043", "R22601"],
["R27657", "R28012", "R30078"],
["17_616026", "17_616156"],
["R20260", "R20574"],
["R20983", "R21770"],
["R21408", "R26791"],
["R30234", "R31095"],
["mada_1-25", "mada_2-25"],
["mada_1-46", "mada_2-46"],
["mada_1-53", "mada_2-53"],
["mada_132", "mada_152"],
["18_0621851"],
["18_0622057"],
["18_0622158"],
["18_0622224"],
["18_0622267"],
["18_0622291"],
["18_0622300"],
["18_0622312"],
["18_0622363"],
["18_0622365"],
["18_0622431"],
["18_0622434"],
["18_0622446"],
["18_0622455"],
["18_0622456"],
["18_0622465"],
["18_620606"],
["R15311"],
["R20896"],
["R21363"],
["R21839"],
["R21893"],
["R23146"],
["R23571"],
["R23887"],
["R24100"],
["R24120"],
["R25048"],
["R26778"],
["R27252"],
["R27725"],
["R27937"],
["R28182"],
["R28581"],
["R28703"],
["R28980"],
["R29598"],
["R29816"],
["R30215"],
["R30396"],
["R30420"],
["R32929"],
["R36431"],
["R37765"],
["mada_1-1"],
["mada_1-10"],
["mada_1-11"],
["mada_1-12"],
["mada_1-13"],
["mada_1-14"],
["mada_1-15"],
["mada_1-16"],
["mada_1-17"],
["mada_1-18"],
["mada_1-19"],
["mada_1-2"],
["mada_1-20"],
["mada_1-21"],
["mada_1-22"],
["mada_1-28"],
["mada_1-3"],
["mada_1-30"],
["mada_1-32"],
["mada_1-33"],
["mada_1-36"],
["mada_1-38"],
["mada_1-39"],
["mada_1-40"],
["mada_1-41"],
["mada_1-43"],
["mada_1-44"],
["mada_1-47"],
["mada_1-48"],
["mada_1-5"],
["mada_1-50"],
["mada_1-51"],
["mada_1-54"],
["mada_1-6"],
["mada_1-7"],
["mada_1-8"],
["mada_102"],
["mada_103"],
["mada_104"],
["mada_105"],
["mada_106"],
["mada_107"],
["mada_109"],
["mada_110"],
["mada_111"],
["mada_112"],
["mada_113"],
["mada_115"],
["mada_116"],
["mada_117"],
["mada_118"],
["mada_120"],
["mada_121"],
["mada_122"],
["mada_123"],
["mada_124"],
["mada_125"],
["mada_126"],
["mada_127"],
["mada_128"],
["mada_129"],
["mada_130"],
["mada_131"],
["mada_133"],
["mada_134"],
["mada_135"],
["mada_136"],
["mada_137"],
["mada_139"],
["mada_140"],
["mada_141"],
["mada_142"],
["mada_143"],
["mada_144"],
["mada_148"],
["mada_150"],
["mada_151"],
["mada_154"],
["mada_2-1"],
["mada_2-31"],
["mada_2-34"],
["mada_2-42"],
["mada_2-50"],
],
3: [
["R18040", "R18043", "R22601", "R27937"],
["R21408", "R26791", "R27725"],
["R27657", "R28012", "R30078"],
["17_616026", "17_616156"],
["R20260", "R20574"],
["R20983", "R21770"],
["R30234", "R31095"],
["mada_1-11", "mada_115"],
["mada_1-25", "mada_2-25"],
["mada_1-41", "mada_2-31"],
["mada_1-46", "mada_2-46"],
["mada_1-53", "mada_2-53"],
["mada_132", "mada_152"],
["18_0621851"],
["18_0622057"],
["18_0622158"],
["18_0622224"],
["18_0622267"],
["18_0622291"],
["18_0622300"],
["18_0622312"],
["18_0622363"],
["18_0622365"],
["18_0622431"],
["18_0622434"],
["18_0622446"],
["18_0622455"],
["18_0622456"],
["18_0622465"],
["18_620606"],
["R15311"],
["R20896"],
["R21363"],
["R21839"],
["R21893"],
["R23146"],
["R23571"],
["R23887"],
["R24100"],
["R24120"],
["R25048"],
["R26778"],
["R27252"],
["R28182"],
["R28581"],
["R28703"],
["R28980"],
["R29598"],
["R29816"],
["R30215"],
["R30396"],
["R30420"],
["R32929"],
["R36431"],
["R37765"],
["mada_1-1"],
["mada_1-10"],
["mada_1-12"],
["mada_1-13"],
["mada_1-14"],
["mada_1-15"],
["mada_1-16"],
["mada_1-17"],
["mada_1-18"],
["mada_1-19"],
["mada_1-2"],
["mada_1-20"],
["mada_1-21"],
["mada_1-22"],
["mada_1-28"],
["mada_1-3"],
["mada_1-30"],
["mada_1-32"],
["mada_1-33"],
["mada_1-36"],
["mada_1-38"],
["mada_1-39"],
["mada_1-40"],
["mada_1-43"],
["mada_1-44"],
["mada_1-47"],
["mada_1-48"],
["mada_1-5"],
["mada_1-50"],
["mada_1-51"],
["mada_1-54"],
["mada_1-6"],
["mada_1-7"],
["mada_1-8"],
["mada_102"],
["mada_103"],
["mada_104"],
["mada_105"],
["mada_106"],
["mada_107"],
["mada_109"],
["mada_110"],
["mada_111"],
["mada_112"],
["mada_113"],
["mada_116"],
["mada_117"],
["mada_118"],
["mada_120"],
["mada_121"],
["mada_122"],
["mada_123"],
["mada_124"],
["mada_125"],
["mada_126"],
["mada_127"],
["mada_128"],
["mada_129"],
["mada_130"],
["mada_131"],
["mada_133"],
["mada_134"],
["mada_135"],
["mada_136"],
["mada_137"],
["mada_139"],
["mada_140"],
["mada_141"],
["mada_142"],
["mada_143"],
["mada_144"],
["mada_148"],
["mada_150"],
["mada_151"],
["mada_154"],
["mada_2-1"],
["mada_2-34"],
["mada_2-42"],
["mada_2-50"],
],
4: [
["R18040", "R18043", "R21408", "R22601", "R26791", "R27725", "R27937"],
["R27657", "R28012", "R30078"],
["mada_1-11", "mada_115", "mada_125"],
["17_616026", "17_616156"],
["R20260", "R20574"],
["R20983", "R21770"],
["R30234", "R31095"],
["mada_1-25", "mada_2-25"],
["mada_1-41", "mada_2-31"],
["mada_1-46", "mada_2-46"],
["mada_1-53", "mada_2-53"],
["mada_132", "mada_152"],
["18_0621851"],
["18_0622057"],
["18_0622158"],
["18_0622224"],
["18_0622267"],
["18_0622291"],
["18_0622300"],
["18_0622312"],
["18_0622363"],
["18_0622365"],
["18_0622431"],
["18_0622434"],
["18_0622446"],
["18_0622455"],
["18_0622456"],
["18_0622465"],
["18_620606"],
["R15311"],
["R20896"],
["R21363"],
["R21839"],
["R21893"],
["R23146"],
["R23571"],
["R23887"],
["R24100"],
["R24120"],
["R25048"],
["R26778"],
["R27252"],
["R28182"],
["R28581"],
["R28703"],
["R28980"],
["R29598"],
["R29816"],
["R30215"],
["R30396"],
["R30420"],
["R32929"],
["R36431"],
["R37765"],
["mada_1-1"],
["mada_1-10"],
["mada_1-12"],
["mada_1-13"],
["mada_1-14"],
["mada_1-15"],
["mada_1-16"],
["mada_1-17"],
["mada_1-18"],
["mada_1-19"],
["mada_1-2"],
["mada_1-20"],
["mada_1-21"],
["mada_1-22"],
["mada_1-28"],
["mada_1-3"],
["mada_1-30"],
["mada_1-32"],
["mada_1-33"],
["mada_1-36"],
["mada_1-38"],
["mada_1-39"],
["mada_1-40"],
["mada_1-43"],
["mada_1-44"],
["mada_1-47"],
["mada_1-48"],
["mada_1-5"],
["mada_1-50"],
["mada_1-51"],
["mada_1-54"],
["mada_1-6"],
["mada_1-7"],
["mada_1-8"],
["mada_102"],
["mada_103"],
["mada_104"],
["mada_105"],
["mada_106"],
["mada_107"],
["mada_109"],
["mada_110"],
["mada_111"],
["mada_112"],
["mada_113"],
["mada_116"],
["mada_117"],
["mada_118"],
["mada_120"],
["mada_121"],
["mada_122"],
["mada_123"],
["mada_124"],
["mada_126"],
["mada_127"],
["mada_128"],
["mada_129"],
["mada_130"],
["mada_131"],
["mada_133"],
["mada_134"],
["mada_135"],
["mada_136"],
["mada_137"],
["mada_139"],
["mada_140"],
["mada_141"],
["mada_142"],
["mada_143"],
["mada_144"],
["mada_148"],
["mada_150"],
["mada_151"],
["mada_154"],
["mada_2-1"],
["mada_2-34"],
["mada_2-42"],
["mada_2-50"],
],
5: [
["R18040", "R18043", "R21408", "R22601", "R26791", "R27725", "R27937"],
["mada_1-11", "mada_1-8", "mada_115", "mada_125"],
["R27657", "R28012", "R30078"],
["17_616026", "17_616156"],
["R20260", "R20574"],
["R20983", "R21770"],
["R29816", "R36431"],
["R30234", "R31095"],
["mada_1-25", "mada_2-25"],
["mada_1-3", "mada_1-7"],
["mada_1-41", "mada_2-31"],
["mada_1-46", "mada_2-46"],
["mada_1-53", "mada_2-53"],
["mada_132", "mada_152"],
["18_0621851"],
["18_0622057"],
["18_0622158"],
["18_0622224"],
["18_0622267"],
["18_0622291"],
["18_0622300"],
["18_0622312"],
["18_0622363"],
["18_0622365"],
["18_0622431"],
["18_0622434"],
["18_0622446"],
["18_0622455"],
["18_0622456"],
["18_0622465"],
["18_620606"],
["R15311"],
["R20896"],
["R21363"],
["R21839"],
["R21893"],
["R23146"],
["R23571"],
["R23887"],
["R24100"],
["R24120"],
["R25048"],
["R26778"],
["R27252"],
["R28182"],
["R28581"],
["R28703"],
["R28980"],
["R29598"],
["R30215"],
["R30396"],
["R30420"],
["R32929"],
["R37765"],
["mada_1-1"],
["mada_1-10"],
["mada_1-12"],
["mada_1-13"],
["mada_1-14"],
["mada_1-15"],
["mada_1-16"],
["mada_1-17"],
["mada_1-18"],
["mada_1-19"],
["mada_1-2"],
["mada_1-20"],
["mada_1-21"],
["mada_1-22"],
["mada_1-28"],
["mada_1-30"],
["mada_1-32"],
["mada_1-33"],
["mada_1-36"],
["mada_1-38"],
["mada_1-39"],
["mada_1-40"],
["mada_1-43"],
["mada_1-44"],
["mada_1-47"],
["mada_1-48"],
["mada_1-5"],
["mada_1-50"],
["mada_1-51"],
["mada_1-54"],
["mada_1-6"],
["mada_102"],
["mada_103"],
["mada_104"],
["mada_105"],
["mada_106"],
["mada_107"],
["mada_109"],
["mada_110"],
["mada_111"],
["mada_112"],
["mada_113"],
["mada_116"],
["mada_117"],
["mada_118"],
["mada_120"],
["mada_121"],
["mada_122"],
["mada_123"],
["mada_124"],
["mada_126"],
["mada_127"],
["mada_128"],
["mada_129"],
["mada_130"],
["mada_131"],
["mada_133"],
["mada_134"],
["mada_135"],
["mada_136"],
["mada_137"],
["mada_139"],
["mada_140"],
["mada_141"],
["mada_142"],
["mada_143"],
["mada_144"],
["mada_148"],
["mada_150"],
["mada_151"],
["mada_154"],
["mada_2-1"],
["mada_2-34"],
["mada_2-42"],
["mada_2-50"],
],
7: [
[
"R18040",
"R18043",
"R20260",
"R20574",
"R21408",
"R22601",
"R26791",
"R27725",
"R27937",
"R28182",
],
[
"mada_1-11",
"mada_1-28",
"mada_1-8",
"mada_115",
"mada_125",
"mada_142",
],
["R24120", "R29816", "R36431"],
["R27657", "R28012", "R30078"],
["mada_109", "mada_120", "mada_148"],
["17_616026", "17_616156"],
["R20983", "R21770"],
["R23146", "R28980"],
["R24100", "R29598"],
["R30234", "R31095"],
["mada_1-25", "mada_2-25"],
["mada_1-3", "mada_1-7"],
["mada_1-41", "mada_2-31"],
["mada_1-46", "mada_2-46"],
["mada_1-53", "mada_2-53"],
["mada_110", "mada_112"],
["mada_132", "mada_152"],
["18_0621851"],
["18_0622057"],
["18_0622158"],
["18_0622224"],
["18_0622267"],
["18_0622291"],
["18_0622300"],
["18_0622312"],
["18_0622363"],
["18_0622365"],
["18_0622431"],
["18_0622434"],
["18_0622446"],
["18_0622455"],
["18_0622456"],
["18_0622465"],
["18_620606"],
["R15311"],
["R20896"],
["R21363"],
["R21839"],
["R21893"],
["R23571"],
["R23887"],
["R25048"],
["R26778"],
["R27252"],
["R28581"],
["R28703"],
["R30215"],
["R30396"],
["R30420"],
["R32929"],
["R37765"],
["mada_1-1"],
["mada_1-10"],
["mada_1-12"],
["mada_1-13"],
["mada_1-14"],
["mada_1-15"],
["mada_1-16"],
["mada_1-17"],
["mada_1-18"],
["mada_1-19"],
["mada_1-2"],
["mada_1-20"],
["mada_1-21"],
["mada_1-22"],
["mada_1-30"],
["mada_1-32"],
["mada_1-33"],
["mada_1-36"],
["mada_1-38"],
["mada_1-39"],
["mada_1-40"],
["mada_1-43"],
["mada_1-44"],
["mada_1-47"],
["mada_1-48"],
["mada_1-5"],
["mada_1-50"],
["mada_1-51"],
["mada_1-54"],
["mada_1-6"],
["mada_102"],
["mada_103"],
["mada_104"],
["mada_105"],
["mada_106"],
["mada_107"],
["mada_111"],
["mada_113"],
["mada_116"],
["mada_117"],
["mada_118"],
["mada_121"],
["mada_122"],
["mada_123"],
["mada_124"],
["mada_126"],
["mada_127"],
["mada_128"],
["mada_129"],
["mada_130"],
["mada_131"],
["mada_133"],
["mada_134"],
["mada_135"],
["mada_136"],
["mada_137"],
["mada_139"],
["mada_140"],
["mada_141"],
["mada_143"],
["mada_144"],
["mada_150"],
["mada_151"],
["mada_154"],
["mada_2-1"],
["mada_2-34"],
["mada_2-42"],
["mada_2-50"],
],
10: [
[
"R18040",
"R18043",
"R20260",
"R20574",
"R21408",
"R22601",
"R26791",
"R27725",
"R27937",
"R28182",
],
[
"R24120",
"R27657",
"R28012",
"R29816",
"R30078",
"R30234",
"R31095",
"R36431",
],
[
"mada_1-11",
"mada_1-28",
"mada_1-8",
"mada_115",
"mada_125",
"mada_142",
"mada_2-42",
],
["mada_109", "mada_120", "mada_126", "mada_148"],
["mada_103", "mada_110", "mada_112"],
["17_616026", "17_616156"],
["R20896", "R30396"],
["R20983", "R21770"],
["R23146", "R28980"],
["R24100", "R29598"],
["mada_1-25", "mada_2-25"],
["mada_1-3", "mada_1-7"],
["mada_1-40", "mada_1-44"],
["mada_1-41", "mada_2-31"],
["mada_1-46", "mada_2-46"],
["mada_1-53", "mada_2-53"],
["mada_132", "mada_152"],
["18_0621851"],
["18_0622057"],
["18_0622158"],
["18_0622224"],
["18_0622267"],
["18_0622291"],
["18_0622300"],
["18_0622312"],
["18_0622363"],
["18_0622365"],
["18_0622431"],
["18_0622434"],
["18_0622446"],
["18_0622455"],
["18_0622456"],
["18_0622465"],
["18_620606"],
["R15311"],
["R21363"],
["R21839"],
["R21893"],
["R23571"],
["R23887"],
["R25048"],
["R26778"],
["R27252"],
["R28581"],
["R28703"],
["R30215"],
["R30420"],
["R32929"],
["R37765"],
["mada_1-1"],
["mada_1-10"],
["mada_1-12"],
["mada_1-13"],
["mada_1-14"],
["mada_1-15"],
["mada_1-16"],
["mada_1-17"],
["mada_1-18"],
["mada_1-19"],
["mada_1-2"],
["mada_1-20"],
["mada_1-21"],
["mada_1-22"],
["mada_1-30"],
["mada_1-32"],
["mada_1-33"],
["mada_1-36"],
["mada_1-38"],
["mada_1-39"],
["mada_1-43"],
["mada_1-47"],
["mada_1-48"],
["mada_1-5"],
["mada_1-50"],
["mada_1-51"],
["mada_1-54"],
["mada_1-6"],
["mada_102"],
["mada_104"],
["mada_105"],
["mada_106"],
["mada_107"],
["mada_111"],
["mada_113"],
["mada_116"],
["mada_117"],
["mada_118"],
["mada_121"],
["mada_122"],
["mada_123"],
["mada_124"],
["mada_127"],
["mada_128"],
["mada_129"],
["mada_130"],
["mada_131"],
["mada_133"],
["mada_134"],
["mada_135"],
["mada_136"],
["mada_137"],
["mada_139"],
["mada_140"],
["mada_141"],
["mada_143"],
["mada_144"],
["mada_150"],
["mada_151"],
["mada_154"],
["mada_2-1"],
["mada_2-34"],
["mada_2-50"],
],
12: [
[
"R18040",
"R18043",
"R20260",
"R20574",
"R21408",
"R22601",
"R26791",
"R27725",
"R27937",
"R28182",
],
[
"R24120",
"R27657",
"R28012",
"R29816",
"R30078",
"R30234",
"R31095",
"R36431",
],
[
"mada_1-11",
"mada_1-28",
"mada_1-8",
"mada_115",
"mada_125",
"mada_142",
"mada_2-42",
],
["R23146", "R26778", "R28980", "R30420", "R32929"],
["mada_103", "mada_110", "mada_112", "mada_124"],
["mada_109", "mada_120", "mada_126", "mada_148"],
["17_616026", "17_616156"],
["R20896", "R30396"],
["R20983", "R21770"],
["R24100", "R29598"],
["mada_1-25", "mada_2-25"],
["mada_1-3", "mada_1-7"],
["mada_1-40", "mada_1-44"],
["mada_1-41", "mada_2-31"],
["mada_1-46", "mada_2-46"],
["mada_1-53", "mada_2-53"],
["mada_132", "mada_152"],
["18_0621851"],
["18_0622057"],
["18_0622158"],
["18_0622224"],
["18_0622267"],
["18_0622291"],
["18_0622300"],
["18_0622312"],
["18_0622363"],
["18_0622365"],
["18_0622431"],
["18_0622434"],
["18_0622446"],
["18_0622455"],
["18_0622456"],
["18_0622465"],
["18_620606"],
["R15311"],
["R21363"],
["R21839"],
["R21893"],
["R23571"],
["R23887"],
["R25048"],
["R27252"],
["R28581"],
["R28703"],
["R30215"],
["R37765"],
["mada_1-1"],
["mada_1-10"],
["mada_1-12"],
["mada_1-13"],
["mada_1-14"],
["mada_1-15"],
["mada_1-16"],
["mada_1-17"],
["mada_1-18"],
["mada_1-19"],
["mada_1-2"],
["mada_1-20"],
["mada_1-21"],
["mada_1-22"],
["mada_1-30"],
["mada_1-32"],
["mada_1-33"],
["mada_1-36"],
["mada_1-38"],
["mada_1-39"],
["mada_1-43"],
["mada_1-47"],
["mada_1-48"],
["mada_1-5"],
["mada_1-50"],
["mada_1-51"],
["mada_1-54"],
["mada_1-6"],
["mada_102"],
["mada_104"],
["mada_105"],
["mada_106"],
["mada_107"],
["mada_111"],
["mada_113"],
["mada_116"],
["mada_117"],
["mada_118"],
["mada_121"],
["mada_122"],
["mada_123"],
["mada_127"],
["mada_128"],
["mada_129"],
["mada_130"],
["mada_131"],
["mada_133"],
["mada_134"],
["mada_135"],
["mada_136"],
["mada_137"],
["mada_139"],
["mada_140"],
["mada_141"],
["mada_143"],
["mada_144"],
["mada_150"],
["mada_151"],
["mada_154"],
["mada_2-1"],
["mada_2-34"],
["mada_2-50"],
],
15: [
[
"R18040",
"R18043",
"R20260",
"R20574",
"R21408",
"R22601",
"R26791",
"R27725",
"R27937",
"R28182",
],
[
"R24120",
"R27657",
"R28012",
"R29816",
"R30078",
"R30234",
"R31095",
"R36431",
],
[
"mada_1-11",
"mada_1-28",
"mada_1-8",
"mada_115",
"mada_125",
"mada_142",
"mada_2-42",
],
["R23146", "R26778", "R28980", "R30420", "R32929"],
["mada_103", "mada_110", "mada_112", "mada_124"],
["mada_109", "mada_120", "mada_126", "mada_148"],
["17_616026", "17_616156"],
["R20896", "R30396"],
["R20983", "R21770"],
["R24100", "R29598"],
["mada_1-25", "mada_2-25"],
["mada_1-3", "mada_1-7"],
["mada_1-40", "mada_1-44"],
["mada_1-41", "mada_2-31"],
["mada_1-46", "mada_2-46"],
["mada_1-53", "mada_2-53"],
["mada_107", "mada_143"],
["mada_132", "mada_152"],
["18_0621851"],
["18_0622057"],
["18_0622158"],
["18_0622224"],
["18_0622267"],
["18_0622291"],
["18_0622300"],
["18_0622312"],
["18_0622363"],
["18_0622365"],
["18_0622431"],
["18_0622434"],
["18_0622446"],
["18_0622455"],
["18_0622456"],
["18_0622465"],
["18_620606"],
["R15311"],
["R21363"],
["R21839"],
["R21893"],
["R23571"],
["R23887"],
["R25048"],
["R27252"],
["R28581"],
["R28703"],
["R30215"],
["R37765"],
["mada_1-1"],
["mada_1-10"],
["mada_1-12"],
["mada_1-13"],
["mada_1-14"],
["mada_1-15"],
["mada_1-16"],
["mada_1-17"],
["mada_1-18"],
["mada_1-19"],
["mada_1-2"],
["mada_1-20"],
["mada_1-21"],
["mada_1-22"],
["mada_1-30"],
["mada_1-32"],
["mada_1-33"],
["mada_1-36"],
["mada_1-38"],
["mada_1-39"],
["mada_1-43"],
["mada_1-47"],
["mada_1-48"],
["mada_1-5"],
["mada_1-50"],
["mada_1-51"],
["mada_1-54"],
["mada_1-6"],
["mada_102"],
["mada_104"],
["mada_105"],
["mada_106"],
["mada_111"],
["mada_113"],
["mada_116"],
["mada_117"],
["mada_118"],
["mada_121"],
["mada_122"],
["mada_123"],
["mada_127"],
["mada_128"],
["mada_129"],
["mada_130"],
["mada_131"],
["mada_133"],
["mada_134"],
["mada_135"],
["mada_136"],
["mada_137"],
["mada_139"],
["mada_140"],
["mada_141"],
["mada_144"],
["mada_150"],
["mada_151"],
["mada_154"],
["mada_2-1"],
["mada_2-34"],
["mada_2-50"],
],
20: [
[
"R18040",
"R18043",
"R20260",
"R20574",
"R21408",
"R22601",
"R23146",
"R26778",
"R26791",
"R27725",
"R27937",
"R28182",
"R28980",
"R30420",
"R32929",
],
[
"R24120",
"R27657",
"R28012",
"R28703",
"R29816",
"R30078",
"R30234",
"R31095",
"R36431",
],
[
"mada_1-11",
"mada_1-28",
"mada_1-8",
"mada_115",
"mada_125",
"mada_142",
"mada_2-42",
],
["mada_103", "mada_110", "mada_112", "mada_124"],
["mada_109", "mada_120", "mada_126", "mada_148"],
["17_616026", "17_616156"],
["R20896", "R30396"],
["R20983", "R21770"],
["R24100", "R29598"],
["R25048", "R30215"],
["mada_1-25", "mada_2-25"],
["mada_1-3", "mada_1-7"],
["mada_1-40", "mada_1-44"],
["mada_1-41", "mada_2-31"],
["mada_1-46", "mada_2-46"],
["mada_1-53", "mada_2-53"],
["mada_107", "mada_143"],
["mada_132", "mada_152"],
["18_0621851"],
["18_0622057"],
["18_0622158"],
["18_0622224"],
["18_0622267"],
["18_0622291"],
["18_0622300"],
["18_0622312"],
["18_0622363"],
["18_0622365"],
["18_0622431"],
["18_0622434"],
["18_0622446"],
["18_0622455"],
["18_0622456"],
["18_0622465"],
["18_620606"],
["R15311"],
["R21363"],
["R21839"],
["R21893"],
["R23571"],
["R23887"],
["R27252"],
["R28581"],
["R37765"],
["mada_1-1"],
["mada_1-10"],
["mada_1-12"],
["mada_1-13"],
["mada_1-14"],
["mada_1-15"],
["mada_1-16"],
["mada_1-17"],
["mada_1-18"],
["mada_1-19"],
["mada_1-2"],
["mada_1-20"],
["mada_1-21"],
["mada_1-22"],
["mada_1-30"],
["mada_1-32"],
["mada_1-33"],
["mada_1-36"],
["mada_1-38"],
["mada_1-39"],
["mada_1-43"],
["mada_1-47"],
["mada_1-48"],
["mada_1-5"],
["mada_1-50"],
["mada_1-51"],
["mada_1-54"],
["mada_1-6"],
["mada_102"],
["mada_104"],
["mada_105"],
["mada_106"],
["mada_111"],
["mada_113"],
["mada_116"],
["mada_117"],
["mada_118"],
["mada_121"],
["mada_122"],
["mada_123"],
["mada_127"],
["mada_128"],
["mada_129"],
["mada_130"],
["mada_131"],
["mada_133"],
["mada_134"],
["mada_135"],
["mada_136"],
["mada_137"],
["mada_139"],
["mada_140"],
["mada_141"],
["mada_144"],
["mada_150"],
["mada_151"],
["mada_154"],
["mada_2-1"],
["mada_2-34"],
["mada_2-50"],
],
50: [
[
"R18040",
"R18043",
"R20260",
"R20574",
"R21408",
"R21893",
"R22601",
"R23146",
"R26778",
"R26791",
"R27725",
"R27937",
"R28182",
"R28980",
"R30420",
"R32929",
],
[
"R24100",
"R28581",
"R29598",
"mada_1-11",
"mada_1-28",
"mada_1-8",
"mada_115",
"mada_125",
"mada_142",
"mada_2-42",
],
[
"R24120",
"R27657",
"R28012",
"R28703",
"R29816",
"R30078",
"R30234",
"R31095",
"R36431",
],
[
"mada_1-50",
"mada_1-53",
"mada_113",
"mada_132",
"mada_152",
"mada_2-1",
"mada_2-53",
],
["R25048", "R30215", "mada_1-6", "mada_131"],
["mada_103", "mada_110", "mada_112", "mada_124"],
["mada_109", "mada_120", "mada_126", "mada_148"],
["mada_1-48", "mada_107", "mada_143"],
["mada_128", "mada_154", "mada_2-50"],
["17_616026", "17_616156"],
["18_0622365", "R15311"],
["R20896", "R30396"],
["R20983", "R21770"],
["mada_1-25", "mada_2-25"],
["mada_1-3", "mada_1-7"],
["mada_1-40", "mada_1-44"],
["mada_1-41", "mada_2-31"],
["mada_1-46", "mada_2-46"],
["mada_137", "mada_140"],
["18_0621851"],
["18_0622057"],
["18_0622158"],
["18_0622224"],
["18_0622267"],
["18_0622291"],
["18_0622300"],
["18_0622312"],
["18_0622363"],
["18_0622431"],
["18_0622434"],
["18_0622446"],
["18_0622455"],
["18_0622456"],
["18_0622465"],
["18_620606"],
["R21363"],
["R21839"],
["R23571"],
["R23887"],
["R27252"],
["R37765"],
["mada_1-1"],
["mada_1-10"],
["mada_1-12"],
["mada_1-13"],
["mada_1-14"],
["mada_1-15"],
["mada_1-16"],
["mada_1-17"],
["mada_1-18"],
["mada_1-19"],
["mada_1-2"],
["mada_1-20"],
["mada_1-21"],
["mada_1-22"],
["mada_1-30"],
["mada_1-32"],
["mada_1-33"],
["mada_1-36"],
["mada_1-38"],
["mada_1-39"],
["mada_1-43"],
["mada_1-47"],
["mada_1-5"],
["mada_1-51"],
["mada_1-54"],
["mada_102"],
["mada_104"],
["mada_105"],
["mada_106"],
["mada_111"],
["mada_116"],
["mada_117"],
["mada_118"],
["mada_121"],
["mada_122"],
["mada_123"],
["mada_127"],
["mada_129"],
["mada_130"],
["mada_133"],
["mada_134"],
["mada_135"],
["mada_136"],
["mada_139"],
["mada_141"],
["mada_144"],
["mada_150"],
["mada_151"],
["mada_2-34"],
],
}
for threshold, expected_cluster in threshold_to_expected_cluster.items():
actual_cluster = get_clusters(psdm_matrix, threshold)
assert actual_cluster == expected_cluster
def test___produce_clusters(self):
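# Expected contents of clusters.txt written by produce_clusters for the
# fixture matrix at threshold 5.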
expected_cluster = """Cluster #1: R18040 R18043 R21408 R22601 R26791 R27725 R27937
Cluster #2: mada_1-11 mada_1-8 mada_115 mada_125
Cluster #3: R27657 R28012 R30078
Cluster #4: 17_616026 17_616156
Cluster #5: R20260 R20574
Cluster #6: R20983 R21770
Cluster #7: R29816 R36431
Cluster #8: R30234 R31095
Cluster #9: mada_1-25 mada_2-25
Cluster #10: mada_1-3 mada_1-7
Cluster #11: mada_1-41 mada_2-31
Cluster #12: mada_1-46 mada_2-46
Cluster #13: mada_1-53 mada_2-53
Cluster #14: mada_132 mada_152
Cluster #15: 18_0621851
Cluster #16: 18_0622057
Cluster #17: 18_0622158
Cluster #18: 18_0622224
Cluster #19: 18_0622267
Cluster #20: 18_0622291
Cluster #21: 18_0622300
Cluster #22: 18_0622312
Cluster #23: 18_0622363
Cluster #24: 18_0622365
Cluster #25: 18_0622431
Cluster #26: 18_0622434
Cluster #27: 18_0622446
Cluster #28: 18_0622455
Cluster #29: 18_0622456
Cluster #30: 18_0622465
Cluster #31: 18_620606
Cluster #32: R15311
Cluster #33: R20896
Cluster #34: R21363
Cluster #35: R21839
Cluster #36: R21893
Cluster #37: R23146
Cluster #38: R23571
Cluster #39: R23887
Cluster #40: R24100
Cluster #41: R24120
Cluster #42: R25048
Cluster #43: R26778
Cluster #44: R27252
Cluster #45: R28182
Cluster #46: R28581
Cluster #47: R28703
Cluster #48: R28980
Cluster #49: R29598
Cluster #50: R30215
Cluster #51: R30396
Cluster #52: R30420
Cluster #53: R32929
Cluster #54: R37765
Cluster #55: mada_1-1
Cluster #56: mada_1-10
Cluster #57: mada_1-12
Cluster #58: mada_1-13
Cluster #59: mada_1-14
Cluster #60: mada_1-15
Cluster #61: mada_1-16
Cluster #62: mada_1-17
Cluster #63: mada_1-18
Cluster #64: mada_1-19
Cluster #65: mada_1-2
Cluster #66: mada_1-20
Cluster #67: mada_1-21
Cluster #68: mada_1-22
Cluster #69: mada_1-28
Cluster #70: mada_1-30
Cluster #71: mada_1-32
Cluster #72: mada_1-33
Cluster #73: mada_1-36
Cluster #74: mada_1-38
Cluster #75: mada_1-39
Cluster #76: mada_1-40
Cluster #77: mada_1-43
Cluster #78: mada_1-44
Cluster #79: mada_1-47
Cluster #80: mada_1-48
Cluster #81: mada_1-5
Cluster #82: mada_1-50
Cluster #83: mada_1-51
Cluster #84: mada_1-54
Cluster #85: mada_1-6
Cluster #86: mada_102
Cluster #87: mada_103
Cluster #88: mada_104
Cluster #89: mada_105
Cluster #90: mada_106
Cluster #91: mada_107
Cluster #92: mada_109
Cluster #93: mada_110
Cluster #94: mada_111
Cluster #95: mada_112
Cluster #96: mada_113
Cluster #97: mada_116
Cluster #98: mada_117
Cluster #99: mada_118
Cluster #100: mada_120
Cluster #101: mada_121
Cluster #102: mada_122
Cluster #103: mada_123
Cluster #104: mada_124
Cluster #105: mada_126
Cluster #106: mada_127
Cluster #107: mada_128
Cluster #108: mada_129
Cluster #109: mada_130
Cluster #110: mada_131
Cluster #111: mada_133
Cluster #112: mada_134
Cluster #113: mada_135
Cluster #114: mada_136
Cluster #115: mada_137
Cluster #116: mada_139
Cluster #117: mada_140
Cluster #118: mada_141
Cluster #119: mada_142
Cluster #120: mada_143
Cluster #121: mada_144
Cluster #122: mada_148
Cluster #123: mada_150
Cluster #124: mada_151
Cluster #125: mada_154
Cluster #126: mada_2-1
Cluster #127: mada_2-34
Cluster #128: mada_2-42
Cluster #129: mada_2-50
"""
produce_clusters(psdm_matrix, 5, psdm_matrix.parent)
with open(psdm_matrix.parent / "clusters.txt") as fin:
actual_cluster = fin.read()
assert expected_cluster == actual_cluster
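    # Malformed input matrices (non-zero diagonal, asymmetric) must be rejected
    # with AsymmetrixMatrixError rather than clustered.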
def test___diagonal_is_not_zero___raises_AsymmetrixMatrixError(self):
with pytest.raises(AsymmetrixMatrixError):
get_clusters(
Path("tests/data/psdm_matrix/sample_diagonal_is_not_zero.csv"), 5
)
def test___asymmetrix_matrix___raises_AsymmetrixMatrixError(self):
with pytest.raises(AsymmetrixMatrixError):
get_clusters(Path("tests/data/psdm_matrix/sample_asymmetrix.csv"), 5)
| 30.748083 | 90 | 0.304563 | 4,414 | 56,146 | 3.536701 | 0.057997 | 0.162386 | 0.005829 | 0.00916 | 0.799693 | 0.789571 | 0.775927 | 0.766959 | 0.764397 | 0.758247 | 0 | 0.32457 | 0.528462 | 56,146 | 1,825 | 91 | 30.764932 | 0.26508 | 0 | 0 | 0.880926 | 0 | 0 | 0.313415 | 0.002422 | 0 | 0 | 0 | 0 | 0.001103 | 1 | 0.002205 | false | 0 | 0.001654 | 0 | 0.00441 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 9 |
78273363d19679819237caa91d8e337ad2d3c936 | 522 | py | Python | venv/Lib/site-packages/altair/vega/v5/data.py | ajayiagbebaku/NFL-Model | afcc67a85ca7138c58c3334d45988ada2da158ed | [
"MIT"
] | 6,831 | 2016-09-23T19:35:19.000Z | 2022-03-31T13:29:39.000Z | venv/Lib/site-packages/altair/vega/v5/data.py | ajayiagbebaku/NFL-Model | afcc67a85ca7138c58c3334d45988ada2da158ed | [
"MIT"
] | 2,068 | 2016-09-23T14:53:23.000Z | 2022-03-31T01:43:15.000Z | venv/Lib/site-packages/altair/vega/v5/data.py | ajayiagbebaku/NFL-Model | afcc67a85ca7138c58c3334d45988ada2da158ed | [
"MIT"
] | 711 | 2016-09-26T16:59:18.000Z | 2022-03-24T11:32:40.000Z | from ..data import (
MaxRowsError,
curry,
default_data_transformer,
limit_rows,
pipe,
sample,
to_csv,
to_json,
to_values,
)
# ==============================================================================
# Vega 5 data transformers
# ==============================================================================
__all__ = (
"MaxRowsError",
"curry",
"default_data_transformer",
"limit_rows",
"pipe",
"sample",
"to_csv",
"to_json",
"to_values",
)
| 17.4 | 80 | 0.398467 | 38 | 522 | 5.052632 | 0.5 | 0.177083 | 0.25 | 0.291667 | 0.802083 | 0.802083 | 0.802083 | 0.802083 | 0.802083 | 0.802083 | 0 | 0.002439 | 0.214559 | 522 | 29 | 81 | 18 | 0.465854 | 0.348659 | 0 | 0 | 0 | 0 | 0.247024 | 0.071429 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.045455 | 0 | 0.045455 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 9 |
78302e2df1062d3612d67b5b1143ef0a98407700 | 15,234 | py | Python | tests/unit/cloud/clouds/vmware_test.py | vamshi98/salt-formulas | 30edeadafd5d173efe4e1f767a8d562547ad128a | [
"Apache-2.0"
] | 3 | 2015-04-16T18:42:35.000Z | 2017-10-30T16:57:49.000Z | tests/unit/cloud/clouds/vmware_test.py | vamshi98/salt-formulas | 30edeadafd5d173efe4e1f767a8d562547ad128a | [
"Apache-2.0"
] | 16 | 2015-11-18T00:44:03.000Z | 2018-10-29T20:48:27.000Z | tests/unit/cloud/clouds/vmware_test.py | vamshi98/salt-formulas | 30edeadafd5d173efe4e1f767a8d562547ad128a | [
"Apache-2.0"
] | 1 | 2017-01-27T21:33:36.000Z | 2017-01-27T21:33:36.000Z | # -*- coding: utf-8 -*-
'''
:codeauthor: `Nitin Madhok <nmadhok@clemson.edu>`
tests.unit.cloud.clouds.vmware_test
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
'''
# Import Python libs
from __future__ import absolute_import
# Import Salt Testing Libs
from salttesting import TestCase, skipIf
from salttesting.mock import MagicMock, NO_MOCK, NO_MOCK_REASON, patch
from salttesting.helpers import ensure_in_syspath
ensure_in_syspath('../../../')
# Import Salt Libs
from salt.cloud.clouds import vmware
from salt.exceptions import SaltCloudSystemExit
# Global Variables
vmware.__active_provider_name__ = ''
vmware.__opts__ = {}
VM_NAME = 'test-vm'
@skipIf(NO_MOCK, NO_MOCK_REASON)
@patch('salt.cloud.clouds.vmware.__virtual__', MagicMock(return_value='vmware'))
class VMwareTestCase(TestCase):
'''
Unit TestCase for salt.cloud.clouds.vmware module.
'''
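    # Every test below checks the dispatch guard: invoking a provider function
    # with the wrong call type ('action' vs. 'function') must raise SaltCloudSystemExit.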
def test_test_vcenter_connection_call(self):
'''
Tests that a SaltCloudSystemExit is raised when trying to call test_vcenter_connection
with anything other than --function or -f.
'''
self.assertRaises(SaltCloudSystemExit, vmware.test_vcenter_connection, call='action')
def test_get_vcenter_version_call(self):
'''
Tests that a SaltCloudSystemExit is raised when trying to call get_vcenter_version
with anything other than --function or -f.
'''
self.assertRaises(SaltCloudSystemExit, vmware.get_vcenter_version, call='action')
def test_avail_images_call(self):
'''
Tests that a SaltCloudSystemExit is raised when trying to call avail_images
with --action or -a.
'''
self.assertRaises(SaltCloudSystemExit, vmware.avail_images, call='action')
def test_avail_locations_call(self):
'''
Tests that a SaltCloudSystemExit is raised when trying to call avail_locations
with --action or -a.
'''
self.assertRaises(SaltCloudSystemExit, vmware.avail_locations, call='action')
def test_avail_sizes_call(self):
'''
Tests that a SaltCloudSystemExit is raised when trying to call avail_sizes
with --action or -a.
'''
self.assertRaises(SaltCloudSystemExit, vmware.avail_sizes, call='action')
def test_list_datacenters_call(self):
'''
Tests that a SaltCloudSystemExit is raised when trying to call list_datacenters
with anything other than --function or -f.
'''
self.assertRaises(SaltCloudSystemExit, vmware.list_datacenters, call='action')
def test_list_clusters_call(self):
'''
Tests that a SaltCloudSystemExit is raised when trying to call list_clusters
with anything other than --function or -f.
'''
self.assertRaises(SaltCloudSystemExit, vmware.list_clusters, call='action')
def test_list_datastore_clusters_call(self):
'''
Tests that a SaltCloudSystemExit is raised when trying to call list_datastore_clusters
with anything other than --function or -f.
'''
self.assertRaises(SaltCloudSystemExit, vmware.list_datastore_clusters, call='action')
def test_list_datastores_call(self):
'''
Tests that a SaltCloudSystemExit is raised when trying to call list_datastores
with anything other than --function or -f.
'''
self.assertRaises(SaltCloudSystemExit, vmware.list_datastores, call='action')
def test_list_hosts_call(self):
'''
Tests that a SaltCloudSystemExit is raised when trying to call list_hosts
with anything other than --function or -f.
'''
self.assertRaises(SaltCloudSystemExit, vmware.list_hosts, call='action')
def test_list_resourcepools_call(self):
'''
Tests that a SaltCloudSystemExit is raised when trying to call list_resourcepools
with anything other than --function or -f.
'''
self.assertRaises(SaltCloudSystemExit, vmware.list_resourcepools, call='action')
def test_list_networks_call(self):
'''
Tests that a SaltCloudSystemExit is raised when trying to call list_networks
with anything other than --function or -f.
'''
self.assertRaises(SaltCloudSystemExit, vmware.list_networks, call='action')
def test_list_nodes_call(self):
'''
Tests that a SaltCloudSystemExit is raised when trying to call list_nodes
with --action or -a.
'''
self.assertRaises(SaltCloudSystemExit, vmware.list_nodes, call='action')
def test_list_nodes_min_call(self):
'''
Tests that a SaltCloudSystemExit is raised when trying to call list_nodes_min
with --action or -a.
'''
self.assertRaises(SaltCloudSystemExit, vmware.list_nodes_min, call='action')
def test_list_nodes_full_call(self):
'''
Tests that a SaltCloudSystemExit is raised when trying to call list_nodes_full
with --action or -a.
'''
self.assertRaises(SaltCloudSystemExit, vmware.list_nodes_full, call='action')
def test_list_nodes_select_call(self):
'''
        Tests that a SaltCloudSystemExit is raised when trying to call list_nodes_select
with --action or -a.
'''
self.assertRaises(SaltCloudSystemExit, vmware.list_nodes_select, call='action')
def test_list_folders_call(self):
'''
Tests that a SaltCloudSystemExit is raised when trying to call list_folders
with anything other than --function or -f.
'''
self.assertRaises(SaltCloudSystemExit, vmware.list_folders, call='action')
def test_list_snapshots_call(self):
'''
Tests that a SaltCloudSystemExit is raised when trying to call list_snapshots
with anything other than --function or -f.
'''
self.assertRaises(SaltCloudSystemExit, vmware.list_snapshots, call='action')
def test_list_hosts_by_cluster_call(self):
'''
Tests that a SaltCloudSystemExit is raised when trying to call list_hosts_by_cluster
with anything other than --function or -f.
'''
self.assertRaises(SaltCloudSystemExit, vmware.list_hosts_by_cluster, call='action')
def test_list_clusters_by_datacenter_call(self):
'''
Tests that a SaltCloudSystemExit is raised when trying to call list_clusters_by_datacenter
with anything other than --function or -f.
'''
self.assertRaises(SaltCloudSystemExit, vmware.list_clusters_by_datacenter, call='action')
def test_list_hosts_by_datacenter_call(self):
'''
Tests that a SaltCloudSystemExit is raised when trying to call list_hosts_by_datacenter
with anything other than --function or -f.
'''
self.assertRaises(SaltCloudSystemExit, vmware.list_hosts_by_datacenter, call='action')
def test_list_hbas_call(self):
'''
Tests that a SaltCloudSystemExit is raised when trying to call list_hbas
with anything other than --function or -f.
'''
self.assertRaises(SaltCloudSystemExit, vmware.list_hbas, call='action')
def test_list_dvs_call(self):
'''
Tests that a SaltCloudSystemExit is raised when trying to call list_dvs
with anything other than --function or -f.
'''
self.assertRaises(SaltCloudSystemExit, vmware.list_dvs, call='action')
def test_list_vapps_call(self):
'''
Tests that a SaltCloudSystemExit is raised when trying to call list_vapps
with anything other than --function or -f.
'''
self.assertRaises(SaltCloudSystemExit, vmware.list_vapps, call='action')
def test_list_templates_call(self):
'''
Tests that a SaltCloudSystemExit is raised when trying to call list_templates
with anything other than --function or -f.
'''
self.assertRaises(SaltCloudSystemExit, vmware.list_templates, call='action')
def test_create_datacenter_call(self):
'''
Tests that a SaltCloudSystemExit is raised when trying to call create_datacenter
with anything other than --function or -f.
'''
self.assertRaises(SaltCloudSystemExit, vmware.create_datacenter, call='action')
def test_create_cluster_call(self):
'''
Tests that a SaltCloudSystemExit is raised when trying to call create_cluster
with anything other than --function or -f.
'''
self.assertRaises(SaltCloudSystemExit, vmware.create_cluster, call='action')
def test_rescan_hba_call(self):
'''
Tests that a SaltCloudSystemExit is raised when trying to call rescan_hba
with anything other than --function or -f.
'''
self.assertRaises(SaltCloudSystemExit, vmware.rescan_hba, call='action')
def test_upgrade_tools_all_call(self):
'''
Tests that a SaltCloudSystemExit is raised when trying to call upgrade_tools_all
with anything other than --function or -f.
'''
self.assertRaises(SaltCloudSystemExit, vmware.upgrade_tools_all, call='action')
def test_enter_maintenance_mode_call(self):
'''
Tests that a SaltCloudSystemExit is raised when trying to call enter_maintenance_mode
with anything other than --function or -f.
'''
self.assertRaises(SaltCloudSystemExit, vmware.enter_maintenance_mode, call='action')
def test_exit_maintenance_mode_call(self):
'''
Tests that a SaltCloudSystemExit is raised when trying to call exit_maintenance_mode
with anything other than --function or -f.
'''
self.assertRaises(SaltCloudSystemExit, vmware.exit_maintenance_mode, call='action')
def test_create_folder_call(self):
'''
Tests that a SaltCloudSystemExit is raised when trying to call create_folder
with anything other than --function or -f.
'''
self.assertRaises(SaltCloudSystemExit, vmware.create_folder, call='action')
def test_add_host_call(self):
'''
Tests that a SaltCloudSystemExit is raised when trying to call add_host
with anything other than --function or -f.
'''
self.assertRaises(SaltCloudSystemExit, vmware.add_host, call='action')
def test_remove_host_call(self):
'''
Tests that a SaltCloudSystemExit is raised when trying to call remove_host
with anything other than --function or -f.
'''
self.assertRaises(SaltCloudSystemExit, vmware.remove_host, call='action')
def test_connect_host_call(self):
'''
Tests that a SaltCloudSystemExit is raised when trying to call connect_host
with anything other than --function or -f.
'''
self.assertRaises(SaltCloudSystemExit, vmware.connect_host, call='action')
def test_disconnect_host_call(self):
'''
Tests that a SaltCloudSystemExit is raised when trying to call disconnect_host
with anything other than --function or -f.
'''
self.assertRaises(SaltCloudSystemExit, vmware.disconnect_host, call='action')
def test_reboot_host_call(self):
'''
Tests that a SaltCloudSystemExit is raised when trying to call reboot_host
with anything other than --function or -f.
'''
self.assertRaises(SaltCloudSystemExit, vmware.reboot_host, call='action')
def test_create_datastore_cluster_call(self):
'''
Tests that a SaltCloudSystemExit is raised when trying to call create_datastore_cluster
with anything other than --function or -f.
'''
self.assertRaises(SaltCloudSystemExit, vmware.create_datastore_cluster, call='action')
def test_show_instance_call(self):
'''
Tests that a SaltCloudSystemExit is raised when trying to call show_instance
with anything other than --action or -a.
'''
self.assertRaises(SaltCloudSystemExit, vmware.show_instance, name=VM_NAME, call='function')
def test_start_call(self):
'''
Tests that a SaltCloudSystemExit is raised when trying to call start
with anything other than --action or -a.
'''
self.assertRaises(SaltCloudSystemExit, vmware.start, name=VM_NAME, call='function')
def test_stop_call(self):
'''
Tests that a SaltCloudSystemExit is raised when trying to call stop
with anything other than --action or -a.
'''
self.assertRaises(SaltCloudSystemExit, vmware.stop, name=VM_NAME, call='function')
def test_suspend_call(self):
'''
Tests that a SaltCloudSystemExit is raised when trying to call suspend
with anything other than --action or -a.
'''
self.assertRaises(SaltCloudSystemExit, vmware.suspend, name=VM_NAME, call='function')
def test_reset_call(self):
'''
Tests that a SaltCloudSystemExit is raised when trying to call reset
with anything other than --action or -a.
'''
self.assertRaises(SaltCloudSystemExit, vmware.reset, name=VM_NAME, call='function')
def test_terminate_call(self):
'''
Tests that a SaltCloudSystemExit is raised when trying to call terminate
with anything other than --action or -a.
'''
self.assertRaises(SaltCloudSystemExit, vmware.terminate, name=VM_NAME, call='function')
def test_destroy_call(self):
'''
Tests that a SaltCloudSystemExit is raised when trying to call destroy
with --function or -f.
'''
self.assertRaises(SaltCloudSystemExit, vmware.destroy, name=VM_NAME, call='function')
def test_upgrade_tools_call(self):
'''
Tests that a SaltCloudSystemExit is raised when trying to call upgrade_tools
with anything other than --action or -a.
'''
self.assertRaises(SaltCloudSystemExit, vmware.upgrade_tools, name=VM_NAME, call='function')
def test_create_snapshot_call(self):
'''
Tests that a SaltCloudSystemExit is raised when trying to call create_snapshot
with anything other than --action or -a.
'''
self.assertRaises(SaltCloudSystemExit, vmware.create_snapshot, name=VM_NAME, call='function')
def test_revert_to_snapshot_call(self):
'''
Tests that a SaltCloudSystemExit is raised when trying to call revert_to_snapshot
with anything other than --action or -a.
'''
self.assertRaises(SaltCloudSystemExit, vmware.revert_to_snapshot, name=VM_NAME, call='function')
def test_remove_all_snapshots_call(self):
'''
Tests that a SaltCloudSystemExit is raised when trying to call remove_all_snapshots
with anything other than --action or -a.
'''
self.assertRaises(SaltCloudSystemExit, vmware.remove_all_snapshots, name=VM_NAME, call='function')
def test_avail_sizes(self):
'''
Tests that avail_sizes returns an empty dictionary.
'''
self.assertEqual(vmware.avail_sizes(call='foo'), {})
if __name__ == '__main__':
from integration import run_tests
run_tests(VMwareTestCase, needs_daemon=False)
| 39.161954 | 106 | 0.680845 | 1,802 | 15,234 | 5.567703 | 0.077137 | 0.034885 | 0.064786 | 0.083026 | 0.833848 | 0.784212 | 0.747932 | 0.707067 | 0.699691 | 0.682049 | 0 | 0.000086 | 0.236248 | 15,234 | 388 | 107 | 39.262887 | 0.862226 | 0.398254 | 0 | 0 | 0 | 0 | 0.050751 | 0.004746 | 0 | 0 | 0 | 0 | 0.431034 | 1 | 0.431034 | false | 0 | 0.060345 | 0 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
78764c4e475b9dac50e7e73eca054b1ff9ff8a58 | 312 | py | Python | tutorials/install/turtlesim/lib/python3.8/site-packages/turtlesim/srv/__init__.py | cclehui/study_ros | ecf7c258b561c4e6d6359471519e318c2ef888ae | [
"MIT"
] | 3 | 2021-08-20T03:25:37.000Z | 2022-03-31T02:47:28.000Z | tutorials/install/turtlesim/lib/python3.8/site-packages/turtlesim/srv/__init__.py | cclehui/study_ros | ecf7c258b561c4e6d6359471519e318c2ef888ae | [
"MIT"
] | null | null | null | tutorials/install/turtlesim/lib/python3.8/site-packages/turtlesim/srv/__init__.py | cclehui/study_ros | ecf7c258b561c4e6d6359471519e318c2ef888ae | [
"MIT"
] | null | null | null | from turtlesim.srv._kill import Kill # noqa: F401
from turtlesim.srv._set_pen import SetPen # noqa: F401
from turtlesim.srv._spawn import Spawn # noqa: F401
from turtlesim.srv._teleport_absolute import TeleportAbsolute # noqa: F401
from turtlesim.srv._teleport_relative import TeleportRelative # noqa: F401
| 52 | 75 | 0.807692 | 43 | 312 | 5.674419 | 0.372093 | 0.266393 | 0.327869 | 0.344262 | 0.459016 | 0.262295 | 0 | 0 | 0 | 0 | 0 | 0.055147 | 0.128205 | 312 | 5 | 76 | 62.4 | 0.841912 | 0.173077 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
78a4f63923cc2b9f9a44a116ef83dceec671251c | 68,204 | py | Python | sdk/python/pulumi_alicloud/cdn/domain.py | pulumi/pulumi-alicloud | 9c34d84b4588a7c885c6bec1f03b5016e5a41683 | [
"ECL-2.0",
"Apache-2.0"
] | 42 | 2019-03-18T06:34:37.000Z | 2022-03-24T07:08:57.000Z | sdk/python/pulumi_alicloud/cdn/domain.py | pulumi/pulumi-alicloud | 9c34d84b4588a7c885c6bec1f03b5016e5a41683 | [
"ECL-2.0",
"Apache-2.0"
] | 152 | 2019-04-15T21:03:44.000Z | 2022-03-29T18:00:57.000Z | sdk/python/pulumi_alicloud/cdn/domain.py | pulumi/pulumi-alicloud | 9c34d84b4588a7c885c6bec1f03b5016e5a41683 | [
"ECL-2.0",
"Apache-2.0"
] | 3 | 2020-08-26T17:30:07.000Z | 2021-07-05T01:37:45.000Z | # coding=utf-8
# *** WARNING: this file was generated by the Pulumi Terraform Bridge (tfgen) Tool. ***
# *** Do not edit by hand unless you're certain you know what you are doing! ***
import warnings
import pulumi
import pulumi.runtime
from typing import Any, Mapping, Optional, Sequence, Union, overload
from .. import _utilities
from . import outputs
from ._inputs import *
__all__ = ['DomainArgs', 'Domain']
@pulumi.input_type
class DomainArgs:
def __init__(__self__, *,
cdn_type: pulumi.Input[str],
domain_name: pulumi.Input[str],
auth_config: Optional[pulumi.Input['DomainAuthConfigArgs']] = None,
block_ips: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
cache_configs: Optional[pulumi.Input[Sequence[pulumi.Input['DomainCacheConfigArgs']]]] = None,
certificate_config: Optional[pulumi.Input['DomainCertificateConfigArgs']] = None,
http_header_configs: Optional[pulumi.Input[Sequence[pulumi.Input['DomainHttpHeaderConfigArgs']]]] = None,
optimize_enable: Optional[pulumi.Input[str]] = None,
page404_config: Optional[pulumi.Input['DomainPage404ConfigArgs']] = None,
page_compress_enable: Optional[pulumi.Input[str]] = None,
parameter_filter_config: Optional[pulumi.Input['DomainParameterFilterConfigArgs']] = None,
range_enable: Optional[pulumi.Input[str]] = None,
refer_config: Optional[pulumi.Input['DomainReferConfigArgs']] = None,
scope: Optional[pulumi.Input[str]] = None,
source_port: Optional[pulumi.Input[int]] = None,
source_type: Optional[pulumi.Input[str]] = None,
sources: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
video_seek_enable: Optional[pulumi.Input[str]] = None):
"""
The set of arguments for constructing a Domain resource.
:param pulumi.Input[str] cdn_type: Cdn type of the accelerated domain. Valid values are `web`, `download`, `video`, `liveStream`.
        :param pulumi.Input[str] domain_name: Name of the accelerated domain. This name without suffix can have a string of 1 to 63 characters, must contain only alphanumeric characters or "-", and must not begin or end with "-", and "-" must not be in the 3rd and 4th character positions at the same time. Suffix `.sh` and `.tel` are not supported.
:param pulumi.Input['DomainAuthConfigArgs'] auth_config: The auth config of the accelerated domain.
:param pulumi.Input[Sequence[pulumi.Input['DomainCacheConfigArgs']]] cache_configs: The cache configs of the accelerated domain.
:param pulumi.Input[Sequence[pulumi.Input['DomainHttpHeaderConfigArgs']]] http_header_configs: The http header configs of the accelerated domain.
        :param pulumi.Input[str] optimize_enable: Page Optimize config of the accelerated domain. Valid values are `on` and `off`. Default value is `off`. It can effectively remove redundant page content, reduce the file size and improve the speed of distribution when this parameter value is `on`.
:param pulumi.Input['DomainPage404ConfigArgs'] page404_config: The error page config of the accelerated domain.
:param pulumi.Input[str] page_compress_enable: Page Compress config of the accelerated domain. Valid values are `on` and `off`. Default value is `off`.
:param pulumi.Input['DomainParameterFilterConfigArgs'] parameter_filter_config: The parameter filter config of the accelerated domain.
:param pulumi.Input[str] range_enable: Range Source config of the accelerated domain. Valid values are `on` and `off`. Default value is `off`.
:param pulumi.Input['DomainReferConfigArgs'] refer_config: The refer config of the accelerated domain.
        :param pulumi.Input[str] scope: Scope of the accelerated domain. Valid values are `domestic`, `overseas`, `global`. Default value is `domestic`. This parameter's setting is valid only for international users and domestic L3 and above users.
:param pulumi.Input[int] source_port: Source port of the accelerated domain. Valid values are `80` and `443`. Default value is `80`. You must use `80` when the `source_type` is `oss`.
:param pulumi.Input[str] source_type: Source type of the accelerated domain. Valid values are `ipaddr`, `domain`, `oss`. You must set this parameter when `cdn_type` value is not `liveStream`.
:param pulumi.Input[Sequence[pulumi.Input[str]]] sources: Sources of the accelerated domain. It's a list of domain names or IP address and consists of at most 20 items. You must set this parameter when `cdn_type` value is not `liveStream`.
:param pulumi.Input[str] video_seek_enable: Video Seek config of the accelerated domain. Valid values are `on` and `off`. Default value is `off`.
"""
pulumi.set(__self__, "cdn_type", cdn_type)
pulumi.set(__self__, "domain_name", domain_name)
if auth_config is not None:
warnings.warn("""Use `alicloud_cdn_domain_config` configuration `function_name` and `function_args` arguments instead.""", DeprecationWarning)
pulumi.log.warn("""auth_config is deprecated: Use `alicloud_cdn_domain_config` configuration `function_name` and `function_args` arguments instead.""")
if auth_config is not None:
pulumi.set(__self__, "auth_config", auth_config)
if block_ips is not None:
warnings.warn("""Use `alicloud_cdn_domain_config` configuration `function_name` and `function_args` arguments instead.""", DeprecationWarning)
pulumi.log.warn("""block_ips is deprecated: Use `alicloud_cdn_domain_config` configuration `function_name` and `function_args` arguments instead.""")
if block_ips is not None:
pulumi.set(__self__, "block_ips", block_ips)
if cache_configs is not None:
warnings.warn("""Use `alicloud_cdn_domain_config` configuration `function_name` and `function_args` arguments instead.""", DeprecationWarning)
pulumi.log.warn("""cache_configs is deprecated: Use `alicloud_cdn_domain_config` configuration `function_name` and `function_args` arguments instead.""")
if cache_configs is not None:
pulumi.set(__self__, "cache_configs", cache_configs)
if certificate_config is not None:
warnings.warn("""Use `alicloud_cdn_domain_config` configuration `function_name` and `function_args` arguments instead.""", DeprecationWarning)
pulumi.log.warn("""certificate_config is deprecated: Use `alicloud_cdn_domain_config` configuration `function_name` and `function_args` arguments instead.""")
if certificate_config is not None:
pulumi.set(__self__, "certificate_config", certificate_config)
if http_header_configs is not None:
warnings.warn("""Use `alicloud_cdn_domain_config` configuration `function_name` and `function_args` arguments instead.""", DeprecationWarning)
pulumi.log.warn("""http_header_configs is deprecated: Use `alicloud_cdn_domain_config` configuration `function_name` and `function_args` arguments instead.""")
if http_header_configs is not None:
pulumi.set(__self__, "http_header_configs", http_header_configs)
if optimize_enable is not None:
warnings.warn("""Use `alicloud_cdn_domain_config` configuration `function_name` and `function_args` arguments instead.""", DeprecationWarning)
pulumi.log.warn("""optimize_enable is deprecated: Use `alicloud_cdn_domain_config` configuration `function_name` and `function_args` arguments instead.""")
if optimize_enable is not None:
pulumi.set(__self__, "optimize_enable", optimize_enable)
if page404_config is not None:
warnings.warn("""Use `alicloud_cdn_domain_config` configuration `function_name` and `function_args` arguments instead.""", DeprecationWarning)
pulumi.log.warn("""page404_config is deprecated: Use `alicloud_cdn_domain_config` configuration `function_name` and `function_args` arguments instead.""")
if page404_config is not None:
pulumi.set(__self__, "page404_config", page404_config)
if page_compress_enable is not None:
warnings.warn("""Use `alicloud_cdn_domain_config` configuration `function_name` and `function_args` arguments instead.""", DeprecationWarning)
pulumi.log.warn("""page_compress_enable is deprecated: Use `alicloud_cdn_domain_config` configuration `function_name` and `function_args` arguments instead.""")
if page_compress_enable is not None:
pulumi.set(__self__, "page_compress_enable", page_compress_enable)
if parameter_filter_config is not None:
warnings.warn("""Use `alicloud_cdn_domain_config` configuration `function_name` and `function_args` arguments instead.""", DeprecationWarning)
pulumi.log.warn("""parameter_filter_config is deprecated: Use `alicloud_cdn_domain_config` configuration `function_name` and `function_args` arguments instead.""")
if parameter_filter_config is not None:
pulumi.set(__self__, "parameter_filter_config", parameter_filter_config)
if range_enable is not None:
warnings.warn("""Use `alicloud_cdn_domain_config` configuration `function_name` and `function_args` arguments instead.""", DeprecationWarning)
pulumi.log.warn("""range_enable is deprecated: Use `alicloud_cdn_domain_config` configuration `function_name` and `function_args` arguments instead.""")
if range_enable is not None:
pulumi.set(__self__, "range_enable", range_enable)
if refer_config is not None:
warnings.warn("""Use `alicloud_cdn_domain_config` configuration `function_name` and `function_args` arguments instead.""", DeprecationWarning)
pulumi.log.warn("""refer_config is deprecated: Use `alicloud_cdn_domain_config` configuration `function_name` and `function_args` arguments instead.""")
if refer_config is not None:
pulumi.set(__self__, "refer_config", refer_config)
if scope is not None:
pulumi.set(__self__, "scope", scope)
if source_port is not None:
warnings.warn("""Use `alicloud_cdn_domain_new` configuration `sources` block `port` argument instead.""", DeprecationWarning)
pulumi.log.warn("""source_port is deprecated: Use `alicloud_cdn_domain_new` configuration `sources` block `port` argument instead.""")
if source_port is not None:
pulumi.set(__self__, "source_port", source_port)
if source_type is not None:
warnings.warn("""Use `alicloud_cdn_domain_new` configuration `sources` block `type` argument instead.""", DeprecationWarning)
pulumi.log.warn("""source_type is deprecated: Use `alicloud_cdn_domain_new` configuration `sources` block `type` argument instead.""")
if source_type is not None:
pulumi.set(__self__, "source_type", source_type)
if sources is not None:
warnings.warn("""Use `alicloud_cdn_domain_new` configuration `sources` argument instead.""", DeprecationWarning)
pulumi.log.warn("""sources is deprecated: Use `alicloud_cdn_domain_new` configuration `sources` argument instead.""")
if sources is not None:
pulumi.set(__self__, "sources", sources)
if video_seek_enable is not None:
warnings.warn("""Use `alicloud_cdn_domain_config` configuration `function_name` and `function_args` arguments instead.""", DeprecationWarning)
pulumi.log.warn("""video_seek_enable is deprecated: Use `alicloud_cdn_domain_config` configuration `function_name` and `function_args` arguments instead.""")
if video_seek_enable is not None:
pulumi.set(__self__, "video_seek_enable", video_seek_enable)
@property
@pulumi.getter(name="cdnType")
def cdn_type(self) -> pulumi.Input[str]:
"""
Cdn type of the accelerated domain. Valid values are `web`, `download`, `video`, `liveStream`.
"""
return pulumi.get(self, "cdn_type")
@cdn_type.setter
def cdn_type(self, value: pulumi.Input[str]):
pulumi.set(self, "cdn_type", value)
@property
@pulumi.getter(name="domainName")
def domain_name(self) -> pulumi.Input[str]:
"""
        Name of the accelerated domain. This name without suffix can have a string of 1 to 63 characters, must contain only alphanumeric characters or "-", and must not begin or end with "-", and "-" must not be in the 3rd and 4th character positions at the same time. Suffix `.sh` and `.tel` are not supported.
"""
return pulumi.get(self, "domain_name")
@domain_name.setter
def domain_name(self, value: pulumi.Input[str]):
pulumi.set(self, "domain_name", value)
@property
@pulumi.getter(name="authConfig")
def auth_config(self) -> Optional[pulumi.Input['DomainAuthConfigArgs']]:
"""
The auth config of the accelerated domain.
"""
return pulumi.get(self, "auth_config")
@auth_config.setter
def auth_config(self, value: Optional[pulumi.Input['DomainAuthConfigArgs']]):
pulumi.set(self, "auth_config", value)
@property
@pulumi.getter(name="blockIps")
def block_ips(self) -> Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]:
return pulumi.get(self, "block_ips")
@block_ips.setter
def block_ips(self, value: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]):
pulumi.set(self, "block_ips", value)
@property
@pulumi.getter(name="cacheConfigs")
def cache_configs(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['DomainCacheConfigArgs']]]]:
"""
The cache configs of the accelerated domain.
"""
return pulumi.get(self, "cache_configs")
@cache_configs.setter
def cache_configs(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['DomainCacheConfigArgs']]]]):
pulumi.set(self, "cache_configs", value)
@property
@pulumi.getter(name="certificateConfig")
def certificate_config(self) -> Optional[pulumi.Input['DomainCertificateConfigArgs']]:
return pulumi.get(self, "certificate_config")
@certificate_config.setter
def certificate_config(self, value: Optional[pulumi.Input['DomainCertificateConfigArgs']]):
pulumi.set(self, "certificate_config", value)
@property
@pulumi.getter(name="httpHeaderConfigs")
def http_header_configs(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['DomainHttpHeaderConfigArgs']]]]:
"""
The http header configs of the accelerated domain.
"""
return pulumi.get(self, "http_header_configs")
@http_header_configs.setter
def http_header_configs(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['DomainHttpHeaderConfigArgs']]]]):
pulumi.set(self, "http_header_configs", value)
@property
@pulumi.getter(name="optimizeEnable")
def optimize_enable(self) -> Optional[pulumi.Input[str]]:
"""
        Page Optimize config of the accelerated domain. Valid values are `on` and `off`. Default value is `off`. It can effectively remove redundant page content, reduce the file size and improve the speed of distribution when this parameter value is `on`.
"""
return pulumi.get(self, "optimize_enable")
@optimize_enable.setter
def optimize_enable(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "optimize_enable", value)
@property
@pulumi.getter(name="page404Config")
def page404_config(self) -> Optional[pulumi.Input['DomainPage404ConfigArgs']]:
"""
The error page config of the accelerated domain.
"""
return pulumi.get(self, "page404_config")
@page404_config.setter
def page404_config(self, value: Optional[pulumi.Input['DomainPage404ConfigArgs']]):
pulumi.set(self, "page404_config", value)
@property
@pulumi.getter(name="pageCompressEnable")
def page_compress_enable(self) -> Optional[pulumi.Input[str]]:
"""
Page Compress config of the accelerated domain. Valid values are `on` and `off`. Default value is `off`.
"""
return pulumi.get(self, "page_compress_enable")
@page_compress_enable.setter
def page_compress_enable(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "page_compress_enable", value)
@property
@pulumi.getter(name="parameterFilterConfig")
def parameter_filter_config(self) -> Optional[pulumi.Input['DomainParameterFilterConfigArgs']]:
"""
The parameter filter config of the accelerated domain.
"""
return pulumi.get(self, "parameter_filter_config")
@parameter_filter_config.setter
def parameter_filter_config(self, value: Optional[pulumi.Input['DomainParameterFilterConfigArgs']]):
pulumi.set(self, "parameter_filter_config", value)
@property
@pulumi.getter(name="rangeEnable")
def range_enable(self) -> Optional[pulumi.Input[str]]:
"""
Range Source config of the accelerated domain. Valid values are `on` and `off`. Default value is `off`.
"""
return pulumi.get(self, "range_enable")
@range_enable.setter
def range_enable(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "range_enable", value)
@property
@pulumi.getter(name="referConfig")
def refer_config(self) -> Optional[pulumi.Input['DomainReferConfigArgs']]:
"""
The refer config of the accelerated domain.
"""
return pulumi.get(self, "refer_config")
@refer_config.setter
def refer_config(self, value: Optional[pulumi.Input['DomainReferConfigArgs']]):
pulumi.set(self, "refer_config", value)
@property
@pulumi.getter
def scope(self) -> Optional[pulumi.Input[str]]:
"""
        Scope of the accelerated domain. Valid values are `domestic`, `overseas`, `global`. Default value is `domestic`. This parameter's setting is valid only for international users and domestic L3 and above users.
"""
return pulumi.get(self, "scope")
@scope.setter
def scope(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "scope", value)
@property
@pulumi.getter(name="sourcePort")
def source_port(self) -> Optional[pulumi.Input[int]]:
"""
Source port of the accelerated domain. Valid values are `80` and `443`. Default value is `80`. You must use `80` when the `source_type` is `oss`.
"""
return pulumi.get(self, "source_port")
@source_port.setter
def source_port(self, value: Optional[pulumi.Input[int]]):
pulumi.set(self, "source_port", value)
@property
@pulumi.getter(name="sourceType")
def source_type(self) -> Optional[pulumi.Input[str]]:
"""
Source type of the accelerated domain. Valid values are `ipaddr`, `domain`, `oss`. You must set this parameter when `cdn_type` value is not `liveStream`.
"""
return pulumi.get(self, "source_type")
@source_type.setter
def source_type(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "source_type", value)
@property
@pulumi.getter
def sources(self) -> Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]:
"""
Sources of the accelerated domain. It's a list of domain names or IP address and consists of at most 20 items. You must set this parameter when `cdn_type` value is not `liveStream`.
"""
return pulumi.get(self, "sources")
@sources.setter
def sources(self, value: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]):
pulumi.set(self, "sources", value)
@property
@pulumi.getter(name="videoSeekEnable")
def video_seek_enable(self) -> Optional[pulumi.Input[str]]:
"""
Video Seek config of the accelerated domain. Valid values are `on` and `off`. Default value is `off`.
"""
return pulumi.get(self, "video_seek_enable")
@video_seek_enable.setter
def video_seek_enable(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "video_seek_enable", value)
@pulumi.input_type
class _DomainState:
def __init__(__self__, *,
auth_config: Optional[pulumi.Input['DomainAuthConfigArgs']] = None,
block_ips: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
cache_configs: Optional[pulumi.Input[Sequence[pulumi.Input['DomainCacheConfigArgs']]]] = None,
cdn_type: Optional[pulumi.Input[str]] = None,
certificate_config: Optional[pulumi.Input['DomainCertificateConfigArgs']] = None,
domain_name: Optional[pulumi.Input[str]] = None,
http_header_configs: Optional[pulumi.Input[Sequence[pulumi.Input['DomainHttpHeaderConfigArgs']]]] = None,
optimize_enable: Optional[pulumi.Input[str]] = None,
page404_config: Optional[pulumi.Input['DomainPage404ConfigArgs']] = None,
page_compress_enable: Optional[pulumi.Input[str]] = None,
parameter_filter_config: Optional[pulumi.Input['DomainParameterFilterConfigArgs']] = None,
range_enable: Optional[pulumi.Input[str]] = None,
refer_config: Optional[pulumi.Input['DomainReferConfigArgs']] = None,
scope: Optional[pulumi.Input[str]] = None,
source_port: Optional[pulumi.Input[int]] = None,
source_type: Optional[pulumi.Input[str]] = None,
sources: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
video_seek_enable: Optional[pulumi.Input[str]] = None):
"""
Input properties used for looking up and filtering Domain resources.
:param pulumi.Input['DomainAuthConfigArgs'] auth_config: The auth config of the accelerated domain.
:param pulumi.Input[Sequence[pulumi.Input['DomainCacheConfigArgs']]] cache_configs: The cache configs of the accelerated domain.
:param pulumi.Input[str] cdn_type: Cdn type of the accelerated domain. Valid values are `web`, `download`, `video`, `liveStream`.
        :param pulumi.Input[str] domain_name: Name of the accelerated domain. This name without suffix can have a string of 1 to 63 characters, must contain only alphanumeric characters or "-", and must not begin or end with "-", and "-" must not be in the 3rd and 4th character positions at the same time. Suffix `.sh` and `.tel` are not supported.
:param pulumi.Input[Sequence[pulumi.Input['DomainHttpHeaderConfigArgs']]] http_header_configs: The http header configs of the accelerated domain.
        :param pulumi.Input[str] optimize_enable: Page Optimize config of the accelerated domain. Valid values are `on` and `off`. Default value is `off`. It can effectively remove redundant page content, reduce the file size and improve the speed of distribution when this parameter value is `on`.
:param pulumi.Input['DomainPage404ConfigArgs'] page404_config: The error page config of the accelerated domain.
:param pulumi.Input[str] page_compress_enable: Page Compress config of the accelerated domain. Valid values are `on` and `off`. Default value is `off`.
:param pulumi.Input['DomainParameterFilterConfigArgs'] parameter_filter_config: The parameter filter config of the accelerated domain.
:param pulumi.Input[str] range_enable: Range Source config of the accelerated domain. Valid values are `on` and `off`. Default value is `off`.
:param pulumi.Input['DomainReferConfigArgs'] refer_config: The refer config of the accelerated domain.
        :param pulumi.Input[str] scope: Scope of the accelerated domain. Valid values are `domestic`, `overseas`, `global`. Default value is `domestic`. This parameter's setting is valid only for international users and domestic L3 and above users.
:param pulumi.Input[int] source_port: Source port of the accelerated domain. Valid values are `80` and `443`. Default value is `80`. You must use `80` when the `source_type` is `oss`.
:param pulumi.Input[str] source_type: Source type of the accelerated domain. Valid values are `ipaddr`, `domain`, `oss`. You must set this parameter when `cdn_type` value is not `liveStream`.
:param pulumi.Input[Sequence[pulumi.Input[str]]] sources: Sources of the accelerated domain. It's a list of domain names or IP address and consists of at most 20 items. You must set this parameter when `cdn_type` value is not `liveStream`.
:param pulumi.Input[str] video_seek_enable: Video Seek config of the accelerated domain. Valid values are `on` and `off`. Default value is `off`.
"""
if auth_config is not None:
warnings.warn("""Use `alicloud_cdn_domain_config` configuration `function_name` and `function_args` arguments instead.""", DeprecationWarning)
pulumi.log.warn("""auth_config is deprecated: Use `alicloud_cdn_domain_config` configuration `function_name` and `function_args` arguments instead.""")
if auth_config is not None:
pulumi.set(__self__, "auth_config", auth_config)
if block_ips is not None:
warnings.warn("""Use `alicloud_cdn_domain_config` configuration `function_name` and `function_args` arguments instead.""", DeprecationWarning)
pulumi.log.warn("""block_ips is deprecated: Use `alicloud_cdn_domain_config` configuration `function_name` and `function_args` arguments instead.""")
if block_ips is not None:
pulumi.set(__self__, "block_ips", block_ips)
if cache_configs is not None:
warnings.warn("""Use `alicloud_cdn_domain_config` configuration `function_name` and `function_args` arguments instead.""", DeprecationWarning)
pulumi.log.warn("""cache_configs is deprecated: Use `alicloud_cdn_domain_config` configuration `function_name` and `function_args` arguments instead.""")
if cache_configs is not None:
pulumi.set(__self__, "cache_configs", cache_configs)
if cdn_type is not None:
pulumi.set(__self__, "cdn_type", cdn_type)
if certificate_config is not None:
warnings.warn("""Use `alicloud_cdn_domain_config` configuration `function_name` and `function_args` arguments instead.""", DeprecationWarning)
pulumi.log.warn("""certificate_config is deprecated: Use `alicloud_cdn_domain_config` configuration `function_name` and `function_args` arguments instead.""")
if certificate_config is not None:
pulumi.set(__self__, "certificate_config", certificate_config)
if domain_name is not None:
pulumi.set(__self__, "domain_name", domain_name)
if http_header_configs is not None:
warnings.warn("""Use `alicloud_cdn_domain_config` configuration `function_name` and `function_args` arguments instead.""", DeprecationWarning)
pulumi.log.warn("""http_header_configs is deprecated: Use `alicloud_cdn_domain_config` configuration `function_name` and `function_args` arguments instead.""")
if http_header_configs is not None:
pulumi.set(__self__, "http_header_configs", http_header_configs)
if optimize_enable is not None:
warnings.warn("""Use `alicloud_cdn_domain_config` configuration `function_name` and `function_args` arguments instead.""", DeprecationWarning)
pulumi.log.warn("""optimize_enable is deprecated: Use `alicloud_cdn_domain_config` configuration `function_name` and `function_args` arguments instead.""")
if optimize_enable is not None:
pulumi.set(__self__, "optimize_enable", optimize_enable)
if page404_config is not None:
warnings.warn("""Use `alicloud_cdn_domain_config` configuration `function_name` and `function_args` arguments instead.""", DeprecationWarning)
pulumi.log.warn("""page404_config is deprecated: Use `alicloud_cdn_domain_config` configuration `function_name` and `function_args` arguments instead.""")
if page404_config is not None:
pulumi.set(__self__, "page404_config", page404_config)
if page_compress_enable is not None:
warnings.warn("""Use `alicloud_cdn_domain_config` configuration `function_name` and `function_args` arguments instead.""", DeprecationWarning)
pulumi.log.warn("""page_compress_enable is deprecated: Use `alicloud_cdn_domain_config` configuration `function_name` and `function_args` arguments instead.""")
if page_compress_enable is not None:
pulumi.set(__self__, "page_compress_enable", page_compress_enable)
if parameter_filter_config is not None:
warnings.warn("""Use `alicloud_cdn_domain_config` configuration `function_name` and `function_args` arguments instead.""", DeprecationWarning)
pulumi.log.warn("""parameter_filter_config is deprecated: Use `alicloud_cdn_domain_config` configuration `function_name` and `function_args` arguments instead.""")
if parameter_filter_config is not None:
pulumi.set(__self__, "parameter_filter_config", parameter_filter_config)
if range_enable is not None:
warnings.warn("""Use `alicloud_cdn_domain_config` configuration `function_name` and `function_args` arguments instead.""", DeprecationWarning)
pulumi.log.warn("""range_enable is deprecated: Use `alicloud_cdn_domain_config` configuration `function_name` and `function_args` arguments instead.""")
if range_enable is not None:
pulumi.set(__self__, "range_enable", range_enable)
if refer_config is not None:
warnings.warn("""Use `alicloud_cdn_domain_config` configuration `function_name` and `function_args` arguments instead.""", DeprecationWarning)
pulumi.log.warn("""refer_config is deprecated: Use `alicloud_cdn_domain_config` configuration `function_name` and `function_args` arguments instead.""")
if refer_config is not None:
pulumi.set(__self__, "refer_config", refer_config)
if scope is not None:
pulumi.set(__self__, "scope", scope)
if source_port is not None:
warnings.warn("""Use `alicloud_cdn_domain_new` configuration `sources` block `port` argument instead.""", DeprecationWarning)
pulumi.log.warn("""source_port is deprecated: Use `alicloud_cdn_domain_new` configuration `sources` block `port` argument instead.""")
if source_port is not None:
pulumi.set(__self__, "source_port", source_port)
if source_type is not None:
warnings.warn("""Use `alicloud_cdn_domain_new` configuration `sources` block `type` argument instead.""", DeprecationWarning)
pulumi.log.warn("""source_type is deprecated: Use `alicloud_cdn_domain_new` configuration `sources` block `type` argument instead.""")
if source_type is not None:
pulumi.set(__self__, "source_type", source_type)
if sources is not None:
warnings.warn("""Use `alicloud_cdn_domain_new` configuration `sources` argument instead.""", DeprecationWarning)
pulumi.log.warn("""sources is deprecated: Use `alicloud_cdn_domain_new` configuration `sources` argument instead.""")
if sources is not None:
pulumi.set(__self__, "sources", sources)
if video_seek_enable is not None:
warnings.warn("""Use `alicloud_cdn_domain_config` configuration `function_name` and `function_args` arguments instead.""", DeprecationWarning)
pulumi.log.warn("""video_seek_enable is deprecated: Use `alicloud_cdn_domain_config` configuration `function_name` and `function_args` arguments instead.""")
if video_seek_enable is not None:
pulumi.set(__self__, "video_seek_enable", video_seek_enable)
@property
@pulumi.getter(name="authConfig")
def auth_config(self) -> Optional[pulumi.Input['DomainAuthConfigArgs']]:
"""
The auth config of the accelerated domain.
"""
return pulumi.get(self, "auth_config")
@auth_config.setter
def auth_config(self, value: Optional[pulumi.Input['DomainAuthConfigArgs']]):
pulumi.set(self, "auth_config", value)
@property
@pulumi.getter(name="blockIps")
def block_ips(self) -> Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]:
return pulumi.get(self, "block_ips")
@block_ips.setter
def block_ips(self, value: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]):
pulumi.set(self, "block_ips", value)
@property
@pulumi.getter(name="cacheConfigs")
def cache_configs(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['DomainCacheConfigArgs']]]]:
"""
The cache configs of the accelerated domain.
"""
return pulumi.get(self, "cache_configs")
@cache_configs.setter
def cache_configs(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['DomainCacheConfigArgs']]]]):
pulumi.set(self, "cache_configs", value)
@property
@pulumi.getter(name="cdnType")
def cdn_type(self) -> Optional[pulumi.Input[str]]:
"""
Cdn type of the accelerated domain. Valid values are `web`, `download`, `video`, `liveStream`.
"""
return pulumi.get(self, "cdn_type")
@cdn_type.setter
def cdn_type(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "cdn_type", value)
@property
@pulumi.getter(name="certificateConfig")
def certificate_config(self) -> Optional[pulumi.Input['DomainCertificateConfigArgs']]:
return pulumi.get(self, "certificate_config")
@certificate_config.setter
def certificate_config(self, value: Optional[pulumi.Input['DomainCertificateConfigArgs']]):
pulumi.set(self, "certificate_config", value)
@property
@pulumi.getter(name="domainName")
def domain_name(self) -> Optional[pulumi.Input[str]]:
"""
        Name of the accelerated domain. This name without suffix can have a string of 1 to 63 characters, must contain only alphanumeric characters or "-", and must not begin or end with "-", and "-" must not be in the 3rd and 4th character positions at the same time. Suffix `.sh` and `.tel` are not supported.
"""
return pulumi.get(self, "domain_name")
@domain_name.setter
def domain_name(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "domain_name", value)
@property
@pulumi.getter(name="httpHeaderConfigs")
def http_header_configs(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['DomainHttpHeaderConfigArgs']]]]:
"""
The http header configs of the accelerated domain.
"""
return pulumi.get(self, "http_header_configs")
@http_header_configs.setter
def http_header_configs(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['DomainHttpHeaderConfigArgs']]]]):
pulumi.set(self, "http_header_configs", value)
@property
@pulumi.getter(name="optimizeEnable")
def optimize_enable(self) -> Optional[pulumi.Input[str]]:
"""
        Page Optimize config of the accelerated domain. Valid values are `on` and `off`. Default value is `off`. It can effectively remove redundant page content, reduce the file size and improve the speed of distribution when this parameter value is `on`.
"""
return pulumi.get(self, "optimize_enable")
@optimize_enable.setter
def optimize_enable(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "optimize_enable", value)
@property
@pulumi.getter(name="page404Config")
def page404_config(self) -> Optional[pulumi.Input['DomainPage404ConfigArgs']]:
"""
The error page config of the accelerated domain.
"""
return pulumi.get(self, "page404_config")
@page404_config.setter
def page404_config(self, value: Optional[pulumi.Input['DomainPage404ConfigArgs']]):
pulumi.set(self, "page404_config", value)
@property
@pulumi.getter(name="pageCompressEnable")
def page_compress_enable(self) -> Optional[pulumi.Input[str]]:
"""
Page Compress config of the accelerated domain. Valid values are `on` and `off`. Default value is `off`.
"""
return pulumi.get(self, "page_compress_enable")
@page_compress_enable.setter
def page_compress_enable(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "page_compress_enable", value)
@property
@pulumi.getter(name="parameterFilterConfig")
def parameter_filter_config(self) -> Optional[pulumi.Input['DomainParameterFilterConfigArgs']]:
"""
The parameter filter config of the accelerated domain.
"""
return pulumi.get(self, "parameter_filter_config")
@parameter_filter_config.setter
def parameter_filter_config(self, value: Optional[pulumi.Input['DomainParameterFilterConfigArgs']]):
pulumi.set(self, "parameter_filter_config", value)
@property
@pulumi.getter(name="rangeEnable")
def range_enable(self) -> Optional[pulumi.Input[str]]:
"""
Range Source config of the accelerated domain. Valid values are `on` and `off`. Default value is `off`.
"""
return pulumi.get(self, "range_enable")
@range_enable.setter
def range_enable(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "range_enable", value)
@property
@pulumi.getter(name="referConfig")
def refer_config(self) -> Optional[pulumi.Input['DomainReferConfigArgs']]:
"""
The refer config of the accelerated domain.
"""
return pulumi.get(self, "refer_config")
@refer_config.setter
def refer_config(self, value: Optional[pulumi.Input['DomainReferConfigArgs']]):
pulumi.set(self, "refer_config", value)
@property
@pulumi.getter
def scope(self) -> Optional[pulumi.Input[str]]:
"""
        Scope of the accelerated domain. Valid values are `domestic`, `overseas`, `global`. Default value is `domestic`. This parameter's setting is valid only for international users and domestic L3 and above users.
"""
return pulumi.get(self, "scope")
@scope.setter
def scope(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "scope", value)
@property
@pulumi.getter(name="sourcePort")
def source_port(self) -> Optional[pulumi.Input[int]]:
"""
Source port of the accelerated domain. Valid values are `80` and `443`. Default value is `80`. You must use `80` when the `source_type` is `oss`.
"""
return pulumi.get(self, "source_port")
@source_port.setter
def source_port(self, value: Optional[pulumi.Input[int]]):
pulumi.set(self, "source_port", value)
@property
@pulumi.getter(name="sourceType")
def source_type(self) -> Optional[pulumi.Input[str]]:
"""
Source type of the accelerated domain. Valid values are `ipaddr`, `domain`, `oss`. You must set this parameter when `cdn_type` value is not `liveStream`.
"""
return pulumi.get(self, "source_type")
@source_type.setter
def source_type(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "source_type", value)
@property
@pulumi.getter
def sources(self) -> Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]:
"""
Sources of the accelerated domain. It's a list of domain names or IP address and consists of at most 20 items. You must set this parameter when `cdn_type` value is not `liveStream`.
"""
return pulumi.get(self, "sources")
@sources.setter
def sources(self, value: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]):
pulumi.set(self, "sources", value)
@property
@pulumi.getter(name="videoSeekEnable")
def video_seek_enable(self) -> Optional[pulumi.Input[str]]:
"""
Video Seek config of the accelerated domain. Valid values are `on` and `off`. Default value is `off`.
"""
return pulumi.get(self, "video_seek_enable")
@video_seek_enable.setter
def video_seek_enable(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "video_seek_enable", value)
class Domain(pulumi.CustomResource):
@overload
def __init__(__self__,
resource_name: str,
opts: Optional[pulumi.ResourceOptions] = None,
auth_config: Optional[pulumi.Input[pulumi.InputType['DomainAuthConfigArgs']]] = None,
block_ips: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
cache_configs: Optional[pulumi.Input[Sequence[pulumi.Input[pulumi.InputType['DomainCacheConfigArgs']]]]] = None,
cdn_type: Optional[pulumi.Input[str]] = None,
certificate_config: Optional[pulumi.Input[pulumi.InputType['DomainCertificateConfigArgs']]] = None,
domain_name: Optional[pulumi.Input[str]] = None,
http_header_configs: Optional[pulumi.Input[Sequence[pulumi.Input[pulumi.InputType['DomainHttpHeaderConfigArgs']]]]] = None,
optimize_enable: Optional[pulumi.Input[str]] = None,
page404_config: Optional[pulumi.Input[pulumi.InputType['DomainPage404ConfigArgs']]] = None,
page_compress_enable: Optional[pulumi.Input[str]] = None,
parameter_filter_config: Optional[pulumi.Input[pulumi.InputType['DomainParameterFilterConfigArgs']]] = None,
range_enable: Optional[pulumi.Input[str]] = None,
refer_config: Optional[pulumi.Input[pulumi.InputType['DomainReferConfigArgs']]] = None,
scope: Optional[pulumi.Input[str]] = None,
source_port: Optional[pulumi.Input[int]] = None,
source_type: Optional[pulumi.Input[str]] = None,
sources: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
video_seek_enable: Optional[pulumi.Input[str]] = None,
__props__=None):
"""
Create a Domain resource with the given unique name, props, and options.
:param str resource_name: The name of the resource.
:param pulumi.ResourceOptions opts: Options for the resource.
:param pulumi.Input[pulumi.InputType['DomainAuthConfigArgs']] auth_config: The auth config of the accelerated domain.
:param pulumi.Input[Sequence[pulumi.Input[pulumi.InputType['DomainCacheConfigArgs']]]] cache_configs: The cache configs of the accelerated domain.
:param pulumi.Input[str] cdn_type: Cdn type of the accelerated domain. Valid values are `web`, `download`, `video`, `liveStream`.
:param pulumi.Input[str] domain_name: Name of the accelerated domain. The name, excluding its suffix, can be a string of 1 to 63 characters, must contain only alphanumeric characters or "-", must not begin or end with "-", and must not have "-" in both the 3rd and 4th character positions at the same time. The suffixes `.sh` and `.tel` are not supported.
:param pulumi.Input[Sequence[pulumi.Input[pulumi.InputType['DomainHttpHeaderConfigArgs']]]] http_header_configs: The http header configs of the accelerated domain.
:param pulumi.Input[str] optimize_enable: Page Optimize config of the accelerated domain. Valid values are `on` and `off`. Default value is `off`. When set to `on`, it can effectively remove redundant page content, reduce the file size, and improve the speed of distribution.
:param pulumi.Input[pulumi.InputType['DomainPage404ConfigArgs']] page404_config: The error page config of the accelerated domain.
:param pulumi.Input[str] page_compress_enable: Page Compress config of the accelerated domain. Valid values are `on` and `off`. Default value is `off`.
:param pulumi.Input[pulumi.InputType['DomainParameterFilterConfigArgs']] parameter_filter_config: The parameter filter config of the accelerated domain.
:param pulumi.Input[str] range_enable: Range Source config of the accelerated domain. Valid values are `on` and `off`. Default value is `off`.
:param pulumi.Input[pulumi.InputType['DomainReferConfigArgs']] refer_config: The refer config of the accelerated domain.
:param pulumi.Input[str] scope: Scope of the accelerated domain. Valid values are `domestic`, `overseas`, `global`. Default value is `domestic`. This parameter is only valid for international users and for domestic L3 and above users.
:param pulumi.Input[int] source_port: Source port of the accelerated domain. Valid values are `80` and `443`. Default value is `80`. You must use `80` when the `source_type` is `oss`.
:param pulumi.Input[str] source_type: Source type of the accelerated domain. Valid values are `ipaddr`, `domain`, `oss`. You must set this parameter when `cdn_type` value is not `liveStream`.
:param pulumi.Input[Sequence[pulumi.Input[str]]] sources: Sources of the accelerated domain. It is a list of domain names or IP addresses and contains at most 20 items. You must set this parameter when the `cdn_type` value is not `liveStream`.
:param pulumi.Input[str] video_seek_enable: Video Seek config of the accelerated domain. Valid values are `on` and `off`. Default value is `off`.
"""
...
@overload
def __init__(__self__,
resource_name: str,
args: DomainArgs,
opts: Optional[pulumi.ResourceOptions] = None):
"""
Create a Domain resource with the given unique name, props, and options.
:param str resource_name: The name of the resource.
:param DomainArgs args: The arguments to use to populate this resource's properties.
:param pulumi.ResourceOptions opts: Options for the resource.
"""
...
def __init__(__self__, resource_name: str, *args, **kwargs):
resource_args, opts = _utilities.get_resource_args_opts(DomainArgs, pulumi.ResourceOptions, *args, **kwargs)
if resource_args is not None:
__self__._internal_init(resource_name, opts, **resource_args.__dict__)
else:
__self__._internal_init(resource_name, *args, **kwargs)
def _internal_init(__self__,
resource_name: str,
opts: Optional[pulumi.ResourceOptions] = None,
auth_config: Optional[pulumi.Input[pulumi.InputType['DomainAuthConfigArgs']]] = None,
block_ips: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
cache_configs: Optional[pulumi.Input[Sequence[pulumi.Input[pulumi.InputType['DomainCacheConfigArgs']]]]] = None,
cdn_type: Optional[pulumi.Input[str]] = None,
certificate_config: Optional[pulumi.Input[pulumi.InputType['DomainCertificateConfigArgs']]] = None,
domain_name: Optional[pulumi.Input[str]] = None,
http_header_configs: Optional[pulumi.Input[Sequence[pulumi.Input[pulumi.InputType['DomainHttpHeaderConfigArgs']]]]] = None,
optimize_enable: Optional[pulumi.Input[str]] = None,
page404_config: Optional[pulumi.Input[pulumi.InputType['DomainPage404ConfigArgs']]] = None,
page_compress_enable: Optional[pulumi.Input[str]] = None,
parameter_filter_config: Optional[pulumi.Input[pulumi.InputType['DomainParameterFilterConfigArgs']]] = None,
range_enable: Optional[pulumi.Input[str]] = None,
refer_config: Optional[pulumi.Input[pulumi.InputType['DomainReferConfigArgs']]] = None,
scope: Optional[pulumi.Input[str]] = None,
source_port: Optional[pulumi.Input[int]] = None,
source_type: Optional[pulumi.Input[str]] = None,
sources: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
video_seek_enable: Optional[pulumi.Input[str]] = None,
__props__=None):
if opts is None:
opts = pulumi.ResourceOptions()
if not isinstance(opts, pulumi.ResourceOptions):
raise TypeError('Expected resource options to be a ResourceOptions instance')
if opts.version is None:
opts.version = _utilities.get_version()
if opts.id is None:
if __props__ is not None:
raise TypeError('__props__ is only valid when passed in combination with a valid opts.id to get an existing resource')
__props__ = DomainArgs.__new__(DomainArgs)
if auth_config is not None and not opts.urn:
warnings.warn("""Use `alicloud_cdn_domain_config` configuration `function_name` and `function_args` arguments instead.""", DeprecationWarning)
pulumi.log.warn("""auth_config is deprecated: Use `alicloud_cdn_domain_config` configuration `function_name` and `function_args` arguments instead.""")
__props__.__dict__["auth_config"] = auth_config
if block_ips is not None and not opts.urn:
warnings.warn("""Use `alicloud_cdn_domain_config` configuration `function_name` and `function_args` arguments instead.""", DeprecationWarning)
pulumi.log.warn("""block_ips is deprecated: Use `alicloud_cdn_domain_config` configuration `function_name` and `function_args` arguments instead.""")
__props__.__dict__["block_ips"] = block_ips
if cache_configs is not None and not opts.urn:
warnings.warn("""Use `alicloud_cdn_domain_config` configuration `function_name` and `function_args` arguments instead.""", DeprecationWarning)
pulumi.log.warn("""cache_configs is deprecated: Use `alicloud_cdn_domain_config` configuration `function_name` and `function_args` arguments instead.""")
__props__.__dict__["cache_configs"] = cache_configs
if cdn_type is None and not opts.urn:
raise TypeError("Missing required property 'cdn_type'")
__props__.__dict__["cdn_type"] = cdn_type
if certificate_config is not None and not opts.urn:
warnings.warn("""Use `alicloud_cdn_domain_config` configuration `function_name` and `function_args` arguments instead.""", DeprecationWarning)
pulumi.log.warn("""certificate_config is deprecated: Use `alicloud_cdn_domain_config` configuration `function_name` and `function_args` arguments instead.""")
__props__.__dict__["certificate_config"] = certificate_config
if domain_name is None and not opts.urn:
raise TypeError("Missing required property 'domain_name'")
__props__.__dict__["domain_name"] = domain_name
if http_header_configs is not None and not opts.urn:
warnings.warn("""Use `alicloud_cdn_domain_config` configuration `function_name` and `function_args` arguments instead.""", DeprecationWarning)
pulumi.log.warn("""http_header_configs is deprecated: Use `alicloud_cdn_domain_config` configuration `function_name` and `function_args` arguments instead.""")
__props__.__dict__["http_header_configs"] = http_header_configs
if optimize_enable is not None and not opts.urn:
warnings.warn("""Use `alicloud_cdn_domain_config` configuration `function_name` and `function_args` arguments instead.""", DeprecationWarning)
pulumi.log.warn("""optimize_enable is deprecated: Use `alicloud_cdn_domain_config` configuration `function_name` and `function_args` arguments instead.""")
__props__.__dict__["optimize_enable"] = optimize_enable
if page404_config is not None and not opts.urn:
warnings.warn("""Use `alicloud_cdn_domain_config` configuration `function_name` and `function_args` arguments instead.""", DeprecationWarning)
pulumi.log.warn("""page404_config is deprecated: Use `alicloud_cdn_domain_config` configuration `function_name` and `function_args` arguments instead.""")
__props__.__dict__["page404_config"] = page404_config
if page_compress_enable is not None and not opts.urn:
warnings.warn("""Use `alicloud_cdn_domain_config` configuration `function_name` and `function_args` arguments instead.""", DeprecationWarning)
pulumi.log.warn("""page_compress_enable is deprecated: Use `alicloud_cdn_domain_config` configuration `function_name` and `function_args` arguments instead.""")
__props__.__dict__["page_compress_enable"] = page_compress_enable
if parameter_filter_config is not None and not opts.urn:
warnings.warn("""Use `alicloud_cdn_domain_config` configuration `function_name` and `function_args` arguments instead.""", DeprecationWarning)
pulumi.log.warn("""parameter_filter_config is deprecated: Use `alicloud_cdn_domain_config` configuration `function_name` and `function_args` arguments instead.""")
__props__.__dict__["parameter_filter_config"] = parameter_filter_config
if range_enable is not None and not opts.urn:
warnings.warn("""Use `alicloud_cdn_domain_config` configuration `function_name` and `function_args` arguments instead.""", DeprecationWarning)
pulumi.log.warn("""range_enable is deprecated: Use `alicloud_cdn_domain_config` configuration `function_name` and `function_args` arguments instead.""")
__props__.__dict__["range_enable"] = range_enable
if refer_config is not None and not opts.urn:
warnings.warn("""Use `alicloud_cdn_domain_config` configuration `function_name` and `function_args` arguments instead.""", DeprecationWarning)
pulumi.log.warn("""refer_config is deprecated: Use `alicloud_cdn_domain_config` configuration `function_name` and `function_args` arguments instead.""")
__props__.__dict__["refer_config"] = refer_config
__props__.__dict__["scope"] = scope
if source_port is not None and not opts.urn:
warnings.warn("""Use `alicloud_cdn_domain_new` configuration `sources` block `port` argument instead.""", DeprecationWarning)
pulumi.log.warn("""source_port is deprecated: Use `alicloud_cdn_domain_new` configuration `sources` block `port` argument instead.""")
__props__.__dict__["source_port"] = source_port
if source_type is not None and not opts.urn:
warnings.warn("""Use `alicloud_cdn_domain_new` configuration `sources` block `type` argument instead.""", DeprecationWarning)
pulumi.log.warn("""source_type is deprecated: Use `alicloud_cdn_domain_new` configuration `sources` block `type` argument instead.""")
__props__.__dict__["source_type"] = source_type
if sources is not None and not opts.urn:
warnings.warn("""Use `alicloud_cdn_domain_new` configuration `sources` argument instead.""", DeprecationWarning)
pulumi.log.warn("""sources is deprecated: Use `alicloud_cdn_domain_new` configuration `sources` argument instead.""")
__props__.__dict__["sources"] = sources
if video_seek_enable is not None and not opts.urn:
warnings.warn("""Use `alicloud_cdn_domain_config` configuration `function_name` and `function_args` arguments instead.""", DeprecationWarning)
pulumi.log.warn("""video_seek_enable is deprecated: Use `alicloud_cdn_domain_config` configuration `function_name` and `function_args` arguments instead.""")
__props__.__dict__["video_seek_enable"] = video_seek_enable
super(Domain, __self__).__init__(
'alicloud:cdn/domain:Domain',
resource_name,
__props__,
opts)
@staticmethod
def get(resource_name: str,
id: pulumi.Input[str],
opts: Optional[pulumi.ResourceOptions] = None,
auth_config: Optional[pulumi.Input[pulumi.InputType['DomainAuthConfigArgs']]] = None,
block_ips: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
cache_configs: Optional[pulumi.Input[Sequence[pulumi.Input[pulumi.InputType['DomainCacheConfigArgs']]]]] = None,
cdn_type: Optional[pulumi.Input[str]] = None,
certificate_config: Optional[pulumi.Input[pulumi.InputType['DomainCertificateConfigArgs']]] = None,
domain_name: Optional[pulumi.Input[str]] = None,
http_header_configs: Optional[pulumi.Input[Sequence[pulumi.Input[pulumi.InputType['DomainHttpHeaderConfigArgs']]]]] = None,
optimize_enable: Optional[pulumi.Input[str]] = None,
page404_config: Optional[pulumi.Input[pulumi.InputType['DomainPage404ConfigArgs']]] = None,
page_compress_enable: Optional[pulumi.Input[str]] = None,
parameter_filter_config: Optional[pulumi.Input[pulumi.InputType['DomainParameterFilterConfigArgs']]] = None,
range_enable: Optional[pulumi.Input[str]] = None,
refer_config: Optional[pulumi.Input[pulumi.InputType['DomainReferConfigArgs']]] = None,
scope: Optional[pulumi.Input[str]] = None,
source_port: Optional[pulumi.Input[int]] = None,
source_type: Optional[pulumi.Input[str]] = None,
sources: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
video_seek_enable: Optional[pulumi.Input[str]] = None) -> 'Domain':
"""
Get an existing Domain resource's state with the given name, id, and optional extra
properties used to qualify the lookup.
:param str resource_name: The unique name of the resulting resource.
:param pulumi.Input[str] id: The unique provider ID of the resource to lookup.
:param pulumi.ResourceOptions opts: Options for the resource.
:param pulumi.Input[pulumi.InputType['DomainAuthConfigArgs']] auth_config: The auth config of the accelerated domain.
:param pulumi.Input[Sequence[pulumi.Input[pulumi.InputType['DomainCacheConfigArgs']]]] cache_configs: The cache configs of the accelerated domain.
:param pulumi.Input[str] cdn_type: Cdn type of the accelerated domain. Valid values are `web`, `download`, `video`, `liveStream`.
:param pulumi.Input[str] domain_name: Name of the accelerated domain. The name, excluding its suffix, can be a string of 1 to 63 characters, must contain only alphanumeric characters or "-", must not begin or end with "-", and must not have "-" in both the 3rd and 4th character positions at the same time. The suffixes `.sh` and `.tel` are not supported.
:param pulumi.Input[Sequence[pulumi.Input[pulumi.InputType['DomainHttpHeaderConfigArgs']]]] http_header_configs: The http header configs of the accelerated domain.
:param pulumi.Input[str] optimize_enable: Page Optimize config of the accelerated domain. Valid values are `on` and `off`. Default value is `off`. When set to `on`, it can effectively remove redundant page content, reduce the file size, and improve the speed of distribution.
:param pulumi.Input[pulumi.InputType['DomainPage404ConfigArgs']] page404_config: The error page config of the accelerated domain.
:param pulumi.Input[str] page_compress_enable: Page Compress config of the accelerated domain. Valid values are `on` and `off`. Default value is `off`.
:param pulumi.Input[pulumi.InputType['DomainParameterFilterConfigArgs']] parameter_filter_config: The parameter filter config of the accelerated domain.
:param pulumi.Input[str] range_enable: Range Source config of the accelerated domain. Valid values are `on` and `off`. Default value is `off`.
:param pulumi.Input[pulumi.InputType['DomainReferConfigArgs']] refer_config: The refer config of the accelerated domain.
:param pulumi.Input[str] scope: Scope of the accelerated domain. Valid values are `domestic`, `overseas`, `global`. Default value is `domestic`. This parameter is only valid for international users and for domestic L3 and above users.
:param pulumi.Input[int] source_port: Source port of the accelerated domain. Valid values are `80` and `443`. Default value is `80`. You must use `80` when the `source_type` is `oss`.
:param pulumi.Input[str] source_type: Source type of the accelerated domain. Valid values are `ipaddr`, `domain`, `oss`. You must set this parameter when `cdn_type` value is not `liveStream`.
:param pulumi.Input[Sequence[pulumi.Input[str]]] sources: Sources of the accelerated domain. It is a list of domain names or IP addresses and contains at most 20 items. You must set this parameter when the `cdn_type` value is not `liveStream`.
:param pulumi.Input[str] video_seek_enable: Video Seek config of the accelerated domain. Valid values are `on` and `off`. Default value is `off`.
"""
opts = pulumi.ResourceOptions.merge(opts, pulumi.ResourceOptions(id=id))
__props__ = _DomainState.__new__(_DomainState)
__props__.__dict__["auth_config"] = auth_config
__props__.__dict__["block_ips"] = block_ips
__props__.__dict__["cache_configs"] = cache_configs
__props__.__dict__["cdn_type"] = cdn_type
__props__.__dict__["certificate_config"] = certificate_config
__props__.__dict__["domain_name"] = domain_name
__props__.__dict__["http_header_configs"] = http_header_configs
__props__.__dict__["optimize_enable"] = optimize_enable
__props__.__dict__["page404_config"] = page404_config
__props__.__dict__["page_compress_enable"] = page_compress_enable
__props__.__dict__["parameter_filter_config"] = parameter_filter_config
__props__.__dict__["range_enable"] = range_enable
__props__.__dict__["refer_config"] = refer_config
__props__.__dict__["scope"] = scope
__props__.__dict__["source_port"] = source_port
__props__.__dict__["source_type"] = source_type
__props__.__dict__["sources"] = sources
__props__.__dict__["video_seek_enable"] = video_seek_enable
return Domain(resource_name, opts=opts, __props__=__props__)
@property
@pulumi.getter(name="authConfig")
def auth_config(self) -> pulumi.Output[Optional['outputs.DomainAuthConfig']]:
"""
The auth config of the accelerated domain.
"""
return pulumi.get(self, "auth_config")
@property
@pulumi.getter(name="blockIps")
def block_ips(self) -> pulumi.Output[Optional[Sequence[str]]]:
return pulumi.get(self, "block_ips")
@property
@pulumi.getter(name="cacheConfigs")
def cache_configs(self) -> pulumi.Output[Optional[Sequence['outputs.DomainCacheConfig']]]:
"""
The cache configs of the accelerated domain.
"""
return pulumi.get(self, "cache_configs")
@property
@pulumi.getter(name="cdnType")
def cdn_type(self) -> pulumi.Output[str]:
"""
Cdn type of the accelerated domain. Valid values are `web`, `download`, `video`, `liveStream`.
"""
return pulumi.get(self, "cdn_type")
@property
@pulumi.getter(name="certificateConfig")
def certificate_config(self) -> pulumi.Output[Optional['outputs.DomainCertificateConfig']]:
return pulumi.get(self, "certificate_config")
@property
@pulumi.getter(name="domainName")
def domain_name(self) -> pulumi.Output[str]:
"""
Name of the accelerated domain. The name, excluding its suffix, can be a string of 1 to 63 characters, must contain only alphanumeric characters or "-", must not begin or end with "-", and must not have "-" in both the 3rd and 4th character positions at the same time. The suffixes `.sh` and `.tel` are not supported.
"""
return pulumi.get(self, "domain_name")
@property
@pulumi.getter(name="httpHeaderConfigs")
def http_header_configs(self) -> pulumi.Output[Optional[Sequence['outputs.DomainHttpHeaderConfig']]]:
"""
The http header configs of the accelerated domain.
"""
return pulumi.get(self, "http_header_configs")
@property
@pulumi.getter(name="optimizeEnable")
def optimize_enable(self) -> pulumi.Output[Optional[str]]:
"""
Page Optimize config of the accelerated domain. Valid values are `on` and `off`. Default value is `off`. When set to `on`, it can effectively remove redundant page content, reduce the file size, and improve the speed of distribution.
"""
return pulumi.get(self, "optimize_enable")
@property
@pulumi.getter(name="page404Config")
def page404_config(self) -> pulumi.Output[Optional['outputs.DomainPage404Config']]:
"""
The error page config of the accelerated domain.
"""
return pulumi.get(self, "page404_config")
@property
@pulumi.getter(name="pageCompressEnable")
def page_compress_enable(self) -> pulumi.Output[Optional[str]]:
"""
Page Compress config of the accelerated domain. Valid values are `on` and `off`. Default value is `off`.
"""
return pulumi.get(self, "page_compress_enable")
@property
@pulumi.getter(name="parameterFilterConfig")
def parameter_filter_config(self) -> pulumi.Output[Optional['outputs.DomainParameterFilterConfig']]:
"""
The parameter filter config of the accelerated domain.
"""
return pulumi.get(self, "parameter_filter_config")
@property
@pulumi.getter(name="rangeEnable")
def range_enable(self) -> pulumi.Output[Optional[str]]:
"""
Range Source config of the accelerated domain. Valid values are `on` and `off`. Default value is `off`.
"""
return pulumi.get(self, "range_enable")
@property
@pulumi.getter(name="referConfig")
def refer_config(self) -> pulumi.Output[Optional['outputs.DomainReferConfig']]:
"""
The refer config of the accelerated domain.
"""
return pulumi.get(self, "refer_config")
@property
@pulumi.getter
def scope(self) -> pulumi.Output[str]:
"""
Scope of the accelerated domain. Valid values are `domestic`, `overseas`, `global`. Default value is `domestic`. This parameter is only valid for international users and for domestic L3 and above users.
"""
return pulumi.get(self, "scope")
@property
@pulumi.getter(name="sourcePort")
def source_port(self) -> pulumi.Output[Optional[int]]:
"""
Source port of the accelerated domain. Valid values are `80` and `443`. Default value is `80`. You must use `80` when the `source_type` is `oss`.
"""
return pulumi.get(self, "source_port")
@property
@pulumi.getter(name="sourceType")
def source_type(self) -> pulumi.Output[Optional[str]]:
"""
Source type of the accelerated domain. Valid values are `ipaddr`, `domain`, `oss`. You must set this parameter when `cdn_type` value is not `liveStream`.
"""
return pulumi.get(self, "source_type")
@property
@pulumi.getter
def sources(self) -> pulumi.Output[Optional[Sequence[str]]]:
"""
Sources of the accelerated domain. It is a list of domain names or IP addresses and contains at most 20 items. You must set this parameter when the `cdn_type` value is not `liveStream`.
"""
return pulumi.get(self, "sources")
@property
@pulumi.getter(name="videoSeekEnable")
def video_seek_enable(self) -> pulumi.Output[Optional[str]]:
"""
Video Seek config of the accelerated domain. Valid values are `on` and `off`. Default value is `off`.
"""
return pulumi.get(self, "video_seek_enable")
| 64.465028 | 346 | 0.691866 | 8,222 | 68,204 | 5.528217 | 0.030893 | 0.067278 | 0.06521 | 0.05421 | 0.961521 | 0.957143 | 0.941016 | 0.935164 | 0.924603 | 0.910853 | 0 | 0.005186 | 0.202745 | 68,204 | 1,057 | 347 | 64.526017 | 0.830716 | 0.268459 | 0 | 0.873259 | 1 | 0 | 0.307692 | 0.090856 | 0 | 0 | 0 | 0 | 0 | 1 | 0.135097 | false | 0.001393 | 0.009749 | 0.008357 | 0.225627 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
78ce06e71e09d3e02f301c0375cfcd1efb79e56c | 132 | py | Python | viyaRestPy/Folders/__init__.py | xavierBizoux/viyaRestPy | 2db2c982a369252f0b97861a74ec35ade969af9a | [
"Apache-2.0"
] | null | null | null | viyaRestPy/Folders/__init__.py | xavierBizoux/viyaRestPy | 2db2c982a369252f0b97861a74ec35ade969af9a | [
"Apache-2.0"
] | null | null | null | viyaRestPy/Folders/__init__.py | xavierBizoux/viyaRestPy | 2db2c982a369252f0b97861a74ec35ade969af9a | [
"Apache-2.0"
] | null | null | null | from .find_object_in_folder import find_object_in_folder
from .get_folder import get_folder
from .create_folder import create_folder | 44 | 56 | 0.893939 | 22 | 132 | 4.909091 | 0.363636 | 0.333333 | 0.222222 | 0.333333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.083333 | 132 | 3 | 57 | 44 | 0.892562 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 8 |
15a898ca553d32206045daac49750c9a56166518 | 29 | py | Python | Projetos/len21milhao.py | anderson-br-ti/python | d65d851f0934267dff9256dfdac09b100efb3b45 | [
"MIT"
] | null | null | null | Projetos/len21milhao.py | anderson-br-ti/python | d65d851f0934267dff9256dfdac09b100efb3b45 | [
"MIT"
] | null | null | null | Projetos/len21milhao.py | anderson-br-ti/python | d65d851f0934267dff9256dfdac09b100efb3b45 | [
"MIT"
] | 2 | 2020-09-05T04:20:52.000Z | 2020-10-15T13:46:48.000Z | print (len(str(2**1000000)))
| 14.5 | 28 | 0.655172 | 5 | 29 | 3.8 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.296296 | 0.068966 | 29 | 1 | 29 | 29 | 0.407407 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 7 |
ec6b3d147d198b4f3496ad37730d65dd1ae2471f | 71,428 | py | Python | venv/Lib/site-packages/statsmodels/tsa/statespace/tests/test_simulate.py | EkremBayar/bayar | aad1a32044da671d0b4f11908416044753360b39 | [
"MIT"
] | 6,931 | 2015-01-01T11:41:55.000Z | 2022-03-31T17:03:24.000Z | venv/Lib/site-packages/statsmodels/tsa/statespace/tests/test_simulate.py | EkremBayar/bayar | aad1a32044da671d0b4f11908416044753360b39 | [
"MIT"
] | 6,137 | 2015-01-01T00:33:45.000Z | 2022-03-31T22:53:17.000Z | venv/Lib/site-packages/statsmodels/tsa/statespace/tests/test_simulate.py | EkremBayar/bayar | aad1a32044da671d0b4f11908416044753360b39 | [
"MIT"
] | 2,608 | 2015-01-02T21:32:31.000Z | 2022-03-31T07:38:30.000Z | """
Tests for simulation of time series
Author: Chad Fulton
License: Simplified-BSD
"""
import numpy as np
import pandas as pd
from numpy.testing import assert_, assert_allclose, assert_equal
import pytest
from scipy.signal import lfilter
from .test_impulse_responses import TVSS
from statsmodels.tools.sm_exceptions import SpecificationWarning, \
EstimationWarning
from statsmodels.tsa.statespace import (sarimax, structural, varmax,
dynamic_factor)
def test_arma_lfilter():
# Tests of an ARMA model simulation against scipy.signal.lfilter
# Note: the first elements of the generated SARIMAX datasets are based on
# the initial state, so we do not include them in the comparisons
np.random.seed(10239)
nobs = 100
eps = np.random.normal(size=nobs)
# AR(1)
mod = sarimax.SARIMAX([0], order=(1, 0, 0))
actual = mod.simulate([0.5, 1.], nobs + 1, state_shocks=np.r_[eps, 0],
initial_state=np.zeros(mod.k_states))
desired = lfilter([1], [1, -0.5], eps)
assert_allclose(actual[1:], desired)
# MA(1)
mod = sarimax.SARIMAX([0], order=(0, 0, 1))
actual = mod.simulate([0.5, 1.], nobs + 1, state_shocks=np.r_[eps, 0],
initial_state=np.zeros(mod.k_states))
desired = lfilter([1, 0.5], [1], eps)
assert_allclose(actual[1:], desired)
# ARMA(1, 1)
mod = sarimax.SARIMAX([0], order=(1, 0, 1))
actual = mod.simulate([0.5, 0.2, 1.], nobs + 1, state_shocks=np.r_[eps, 0],
initial_state=np.zeros(mod.k_states))
desired = lfilter([1, 0.2], [1, -0.5], eps)
assert_allclose(actual[1:], desired)
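# Illustrative sketch (not part of the original suite): the comparisons above
# drop the first simulated value because a state shock supplied at period t
# only reaches the observation at period t + 1. Restating the AR(1) case as an
# explicit recursion makes that timing convention visible.
def _illustrate_ar1_shock_timing():
    np.random.seed(10239)
    eps = np.random.normal(size=10)
    mod = sarimax.SARIMAX([0], order=(1, 0, 0))
    sim = mod.simulate([0.5, 1.], 11, state_shocks=np.r_[eps, 0],
                       initial_state=np.zeros(mod.k_states))
    # sim[0] is generated from the (zero) initial state; eps[0] first shows up
    # in sim[1], eps[1] in sim[2], and so on.
    desired = np.zeros(10)
    for i in range(10):
        desired[i] = (0.5 * desired[i - 1] if i > 0 else 0) + eps[i]
    assert_allclose(sim[1:], desired)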
def test_arma_direct():
# Tests of an ARMA model simulation against direct construction
# This is useful for e.g. trend components
# Note: the first elements of the generated SARIMAX datasets are based on
# the initial state, so we do not include them in the comparisons
np.random.seed(10239)
nobs = 100
eps = np.random.normal(size=nobs)
exog = np.random.normal(size=nobs)
# AR(1)
mod = sarimax.SARIMAX([0], order=(1, 0, 0))
actual = mod.simulate([0.5, 1.], nobs + 1, state_shocks=np.r_[eps, 0],
initial_state=np.zeros(mod.k_states))
desired = np.zeros(nobs)
for i in range(nobs):
if i == 0:
desired[i] = eps[i]
else:
desired[i] = 0.5 * desired[i - 1] + eps[i]
assert_allclose(actual[1:], desired)
# MA(1)
mod = sarimax.SARIMAX([0], order=(0, 0, 1))
actual = mod.simulate([0.5, 1.], nobs + 1, state_shocks=np.r_[eps, 0],
initial_state=np.zeros(mod.k_states))
desired = np.zeros(nobs)
for i in range(nobs):
if i == 0:
desired[i] = eps[i]
else:
desired[i] = 0.5 * eps[i - 1] + eps[i]
assert_allclose(actual[1:], desired)
# ARMA(1, 1)
mod = sarimax.SARIMAX([0], order=(1, 0, 1))
actual = mod.simulate([0.5, 0.2, 1.], nobs + 1, state_shocks=np.r_[eps, 0],
initial_state=np.zeros(mod.k_states))
desired = np.zeros(nobs)
for i in range(nobs):
if i == 0:
desired[i] = eps[i]
else:
desired[i] = 0.5 * desired[i - 1] + 0.2 * eps[i - 1] + eps[i]
assert_allclose(actual[1:], desired)
# ARMA(1, 1) + intercept
mod = sarimax.SARIMAX([0], order=(1, 0, 1), trend='c')
actual = mod.simulate([1.3, 0.5, 0.2, 1.], nobs + 1,
state_shocks=np.r_[eps, 0],
initial_state=np.zeros(mod.k_states))
desired = np.zeros(nobs)
for i in range(nobs):
trend = 1.3
if i == 0:
desired[i] = trend + eps[i]
else:
desired[i] = (trend + 0.5 * desired[i - 1] +
0.2 * eps[i - 1] + eps[i])
assert_allclose(actual[1:], desired)
# ARMA(1, 1) + intercept + time trend
# Note: to allow the time-varying SARIMAX model to simulate 101 observations,
# we need to give it 101 observations up front
mod = sarimax.SARIMAX(np.zeros(nobs + 1), order=(1, 0, 1), trend='ct')
actual = mod.simulate([1.3, 0.2, 0.5, 0.2, 1.], nobs + 1,
state_shocks=np.r_[eps, 0],
initial_state=np.zeros(mod.k_states))
desired = np.zeros(nobs)
for i in range(nobs):
trend = 1.3 + 0.2 * (i + 1)
if i == 0:
desired[i] = trend + eps[i]
else:
desired[i] = (trend + 0.5 * desired[i - 1] +
0.2 * eps[i - 1] + eps[i])
assert_allclose(actual[1:], desired)
# ARMA(1, 1) + intercept + time trend + exog
# Note: to allow the time-varying SARIMAX model to simulate 101 observations,
# we need to give it 101 observations up front
# Note: the model is regression with SARIMAX errors, so the exog is
# introduced into the observation equation rather than the ARMA part
mod = sarimax.SARIMAX(np.zeros(nobs + 1), exog=np.r_[0, exog],
order=(1, 0, 1), trend='ct')
actual = mod.simulate([1.3, 0.2, -0.5, 0.5, 0.2, 1.], nobs + 1,
state_shocks=np.r_[eps, 0],
initial_state=np.zeros(mod.k_states))
desired = np.zeros(nobs)
for i in range(nobs):
trend = 1.3 + 0.2 * (i + 1)
if i == 0:
desired[i] = trend + eps[i]
else:
desired[i] = (trend + 0.5 * desired[i - 1] +
0.2 * eps[i - 1] + eps[i])
desired = desired - 0.5 * exog
assert_allclose(actual[1:], desired)
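# Illustrative sketch (not part of the original suite): the exog comparison
# above relies on SARIMAX with exog being a regression with SARIMA errors, so
# a simulation with exog is just the pure ARMA simulation plus exog times its
# coefficient. The coefficient -0.5 and AR parameter 0.7 below are arbitrary.
def _illustrate_regression_with_sarimax_errors():
    np.random.seed(10239)
    nobs = 50
    eps = np.random.normal(size=nobs)
    exog = np.random.normal(size=nobs)
    mod_exog = sarimax.SARIMAX(np.zeros(nobs), exog=exog, order=(1, 0, 0))
    mod_arma = sarimax.SARIMAX(np.zeros(nobs), order=(1, 0, 0))
    with_exog = mod_exog.simulate([-0.5, 0.7, 1.], nobs, state_shocks=eps,
                                  initial_state=np.zeros(mod_exog.k_states))
    arma_only = mod_arma.simulate([0.7, 1.], nobs, state_shocks=eps,
                                  initial_state=np.zeros(mod_arma.k_states))
    assert_allclose(with_exog, arma_only - 0.5 * exog)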
def test_structural():
np.random.seed(38947)
nobs = 100
eps = np.random.normal(size=nobs)
exog = np.random.normal(size=nobs)
eps1 = np.zeros(nobs)
eps2 = np.zeros(nobs)
eps2[49] = 1
eps3 = np.zeros(nobs)
eps3[50:] = 1
# AR(1)
mod1 = structural.UnobservedComponents([0], autoregressive=1)
mod2 = sarimax.SARIMAX([0], order=(1, 0, 0))
actual = mod1.simulate([1, 0.5], nobs, state_shocks=eps,
initial_state=np.zeros(mod1.k_states))
desired = mod2.simulate([0.5, 1], nobs, state_shocks=eps,
initial_state=np.zeros(mod2.k_states))
assert_allclose(actual, desired)
# ARX(1)
mod1 = structural.UnobservedComponents(np.zeros(nobs), exog=exog,
autoregressive=1)
mod2 = sarimax.SARIMAX(np.zeros(nobs), exog=exog, order=(1, 0, 0))
actual = mod1.simulate([1, 0.5, 0.2], nobs, state_shocks=eps,
initial_state=np.zeros(mod2.k_states))
desired = mod2.simulate([0.2, 0.5, 1], nobs, state_shocks=eps,
initial_state=np.zeros(mod2.k_states))
assert_allclose(actual, desired)
# Irregular
mod = structural.UnobservedComponents([0], 'irregular')
actual = mod.simulate([1.], nobs, measurement_shocks=eps,
initial_state=np.zeros(mod.k_states))
assert_allclose(actual, eps)
# Fixed intercept
# (in practice this is a deterministic constant, because an irregular
# component must be added)
warning = SpecificationWarning
match = 'irregular component added'
with pytest.warns(warning, match=match):
mod = structural.UnobservedComponents([0], 'fixed intercept')
actual = mod.simulate([1.], nobs, measurement_shocks=eps,
initial_state=[10])
assert_allclose(actual, 10 + eps)
# Deterministic constant
mod = structural.UnobservedComponents([0], 'deterministic constant')
actual = mod.simulate([1.], nobs, measurement_shocks=eps,
initial_state=[10])
assert_allclose(actual, 10 + eps)
# Local level
mod = structural.UnobservedComponents([0], 'local level')
actual = mod.simulate([1., 1.], nobs, measurement_shocks=eps,
state_shocks=eps2,
initial_state=np.zeros(mod.k_states))
assert_allclose(actual, eps + eps3)
# Random walk
mod = structural.UnobservedComponents([0], 'random walk')
actual = mod.simulate([1.], nobs, measurement_shocks=eps,
state_shocks=eps2,
initial_state=np.zeros(mod.k_states))
assert_allclose(actual, eps + eps3)
# Fixed slope
# (in practice this is a deterministic trend, because an irregular
# component must be added)
warning = SpecificationWarning
match = 'irregular component added'
with pytest.warns(warning, match=match):
mod = structural.UnobservedComponents([0], 'fixed slope')
actual = mod.simulate([1., 1.], nobs, measurement_shocks=eps,
state_shocks=eps2, initial_state=[0, 1])
assert_allclose(actual, eps + np.arange(100))
# Deterministic trend
mod = structural.UnobservedComponents([0], 'deterministic trend')
actual = mod.simulate([1.], nobs, measurement_shocks=eps,
state_shocks=eps2, initial_state=[0, 1])
assert_allclose(actual, eps + np.arange(100))
# Local linear deterministic trend
mod = structural.UnobservedComponents(
[0], 'local linear deterministic trend')
actual = mod.simulate([1., 1.], nobs, measurement_shocks=eps,
state_shocks=eps2, initial_state=[0, 1])
desired = eps + np.r_[np.arange(50), 1 + np.arange(50, 100)]
assert_allclose(actual, desired)
# Random walk with drift
mod = structural.UnobservedComponents([0], 'random walk with drift')
actual = mod.simulate([1.], nobs, state_shocks=eps2,
initial_state=[0, 1])
desired = np.r_[np.arange(50), 1 + np.arange(50, 100)]
assert_allclose(actual, desired)
# Local linear trend
mod = structural.UnobservedComponents([0], 'local linear trend')
actual = mod.simulate([1., 1., 1.], nobs, measurement_shocks=eps,
state_shocks=np.c_[eps2, eps1], initial_state=[0, 1])
desired = eps + np.r_[np.arange(50), 1 + np.arange(50, 100)]
assert_allclose(actual, desired)
actual = mod.simulate([1., 1., 1.], nobs, measurement_shocks=eps,
state_shocks=np.c_[eps1, eps2], initial_state=[0, 1])
desired = eps + np.r_[np.arange(50), np.arange(50, 150, 2)]
assert_allclose(actual, desired)
# Smooth trend
mod = structural.UnobservedComponents([0], 'smooth trend')
actual = mod.simulate([1., 1.], nobs, measurement_shocks=eps,
state_shocks=eps1, initial_state=[0, 1])
desired = eps + np.r_[np.arange(100)]
assert_allclose(actual, desired)
actual = mod.simulate([1., 1.], nobs, measurement_shocks=eps,
state_shocks=eps2, initial_state=[0, 1])
desired = eps + np.r_[np.arange(50), np.arange(50, 150, 2)]
assert_allclose(actual, desired)
# Random trend
mod = structural.UnobservedComponents([0], 'random trend')
actual = mod.simulate([1., 1.], nobs,
state_shocks=eps1, initial_state=[0, 1])
desired = np.r_[np.arange(100)]
assert_allclose(actual, desired)
actual = mod.simulate([1., 1.], nobs,
state_shocks=eps2, initial_state=[0, 1])
desired = np.r_[np.arange(50), np.arange(50, 150, 2)]
assert_allclose(actual, desired)
# Seasonal (deterministic)
mod = structural.UnobservedComponents([0], 'irregular', seasonal=2,
stochastic_seasonal=False)
actual = mod.simulate([1.], nobs, measurement_shocks=eps,
initial_state=[10])
desired = eps + np.tile([10, -10], 50)
assert_allclose(actual, desired)
# Seasonal (stochastic)
mod = structural.UnobservedComponents([0], 'irregular', seasonal=2)
actual = mod.simulate([1., 1.], nobs, measurement_shocks=eps,
state_shocks=eps2, initial_state=[10])
desired = eps + np.r_[np.tile([10, -10], 25), np.tile([11, -11], 25)]
assert_allclose(actual, desired)
# Cycle (deterministic)
mod = structural.UnobservedComponents([0], 'irregular', cycle=True)
actual = mod.simulate([1., 1.2], nobs, measurement_shocks=eps,
initial_state=[1, 0])
x1 = [np.cos(1.2), np.sin(1.2)]
x2 = [-np.sin(1.2), np.cos(1.2)]
T = np.array([x1, x2])
desired = eps
states = [1, 0]
for i in range(nobs):
desired[i] += states[0]
states = np.dot(T, states)
assert_allclose(actual, desired)
# Cycle (stochastic)
mod = structural.UnobservedComponents([0], 'irregular', cycle=True,
stochastic_cycle=True)
actual = mod.simulate([1., 1., 1.2], nobs, measurement_shocks=eps,
state_shocks=np.c_[eps2, eps2], initial_state=[1, 0])
x1 = [np.cos(1.2), np.sin(1.2)]
x2 = [-np.sin(1.2), np.cos(1.2)]
T = np.array([x1, x2])
desired = eps
states = [1, 0]
for i in range(nobs):
desired[i] += states[0]
states = np.dot(T, states) + eps2[i]
assert_allclose(actual, desired)
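# Illustrative sketch (not part of the original suite): the local level checks
# above use an impulse shock; the same timing holds for arbitrary shocks, where
# the simulated series is the measurement noise plus the cumulative sum of the
# level shocks lagged by one period. The seed below is an arbitrary choice.
def _illustrate_local_level_cumsum():
    gen = np.random.RandomState(1234)
    e_obs = gen.normal(size=50)
    e_level = gen.normal(size=50)
    mod = structural.UnobservedComponents([0], 'local level')
    sim = mod.simulate([1., 1.], 50, measurement_shocks=e_obs,
                       state_shocks=e_level, initial_state=[0])
    desired = e_obs + np.r_[0, np.cumsum(e_level)[:-1]]
    assert_allclose(sim, desired)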
def test_varmax():
np.random.seed(371934)
nobs = 100
eps = np.random.normal(size=nobs)
exog = np.random.normal(size=(nobs, 1))
eps1 = np.zeros(nobs)
eps2 = np.zeros(nobs)
eps2[49] = 1
eps3 = np.zeros(nobs)
eps3[50:] = 1
# VAR(2) - single series
mod1 = varmax.VARMAX([[0]], order=(2, 0), trend='n')
mod2 = sarimax.SARIMAX([0], order=(2, 0, 0))
actual = mod1.simulate([0.5, 0.2, 1], nobs, state_shocks=eps,
initial_state=np.zeros(mod1.k_states))
desired = mod2.simulate([0.5, 0.2, 1], nobs, state_shocks=eps,
initial_state=np.zeros(mod2.k_states))
assert_allclose(actual, desired)
# VMA(2) - single series
mod1 = varmax.VARMAX([[0]], order=(0, 2), trend='n')
mod2 = sarimax.SARIMAX([0], order=(0, 0, 2))
actual = mod1.simulate([0.5, 0.2, 1], nobs, state_shocks=eps,
initial_state=np.zeros(mod1.k_states))
desired = mod2.simulate([0.5, 0.2, 1], nobs, state_shocks=eps,
initial_state=np.zeros(mod2.k_states))
assert_allclose(actual, desired)
# VARMA(2, 2) - single series
warning = EstimationWarning
match = r'VARMA\(p,q\) models is not'
with pytest.warns(warning, match=match):
mod1 = varmax.VARMAX([[0]], order=(2, 2), trend='n')
mod2 = sarimax.SARIMAX([0], order=(2, 0, 2))
actual = mod1.simulate([0.5, 0.2, 0.1, -0.2, 1], nobs, state_shocks=eps,
initial_state=np.zeros(mod1.k_states))
desired = mod2.simulate([0.5, 0.2, 0.1, -0.2, 1], nobs, state_shocks=eps,
initial_state=np.zeros(mod2.k_states))
assert_allclose(actual, desired)
# VARMA(2, 2) + trend - single series
warning = EstimationWarning
match = r'VARMA\(p,q\) models is not'
with pytest.warns(warning, match=match):
mod1 = varmax.VARMAX([[0]], order=(2, 2), trend='c')
mod2 = sarimax.SARIMAX([0], order=(2, 0, 2), trend='c')
actual = mod1.simulate([10, 0.5, 0.2, 0.1, -0.2, 1], nobs,
state_shocks=eps,
initial_state=np.zeros(mod1.k_states))
desired = mod2.simulate([10, 0.5, 0.2, 0.1, -0.2, 1], nobs,
state_shocks=eps,
initial_state=np.zeros(mod2.k_states))
assert_allclose(actual, desired)
# VAR(1)
transition = np.array([[0.5, 0.1],
[-0.1, 0.2]])
mod = varmax.VARMAX([[0, 0]], order=(1, 0), trend='n')
actual = mod.simulate(np.r_[transition.ravel(), 1., 0, 1.], nobs,
state_shocks=np.c_[eps1, eps1],
initial_state=np.zeros(mod.k_states))
assert_allclose(actual, 0)
actual = mod.simulate(np.r_[transition.ravel(), 1., 0, 1.], nobs,
state_shocks=np.c_[eps1, eps1], initial_state=[1, 1])
desired = np.zeros((nobs, 2))
state = np.r_[1, 1]
for i in range(nobs):
desired[i] = state
state = np.dot(transition, state)
assert_allclose(actual, desired)
# VAR(1) + measurement error
mod = varmax.VARMAX([[0, 0]], order=(1, 0), trend='n',
measurement_error=True)
actual = mod.simulate(np.r_[transition.ravel(), 1., 0, 1., 1., 1.], nobs,
measurement_shocks=np.c_[eps, eps],
state_shocks=np.c_[eps1, eps1],
initial_state=np.zeros(mod.k_states))
assert_allclose(actual, np.c_[eps, eps])
# VARX(1)
mod = varmax.VARMAX(np.zeros((nobs, 2)), order=(1, 0), trend='n',
exog=exog)
actual = mod.simulate(np.r_[transition.ravel(), 5, -2, 1., 0, 1.], nobs,
state_shocks=np.c_[eps1, eps1], initial_state=[1, 1])
desired = np.zeros((nobs, 2))
state = np.r_[1, 1]
for i in range(nobs):
desired[i] = state
if i < nobs - 1:
state = exog[i + 1] * [5, -2] + np.dot(transition, state)
assert_allclose(actual, desired)
# VMA(1)
# TODO: This is just a smoke test
mod = varmax.VARMAX(
np.random.normal(size=(nobs, 2)), order=(0, 1), trend='n')
mod.simulate(mod.start_params, nobs)
# VARMA(2, 2) + trend + exog
# TODO: This is just a smoke test
warning = EstimationWarning
match = r"VARMA\(p,q\) models is not"
with pytest.warns(warning, match=match):
mod = varmax.VARMAX(
np.random.normal(size=(nobs, 2)), order=(2, 2), trend='c',
exog=exog)
mod.simulate(mod.start_params, nobs)
def test_dynamic_factor():
np.random.seed(93739)
nobs = 100
eps = np.random.normal(size=nobs)
exog = np.random.normal(size=(nobs, 1))
eps1 = np.zeros(nobs)
eps2 = np.zeros(nobs)
eps2[49] = 1
eps3 = np.zeros(nobs)
eps3[50:] = 1
# DFM: 2 series, AR(2) factor
mod1 = dynamic_factor.DynamicFactor([[0, 0]], k_factors=1, factor_order=2)
mod2 = sarimax.SARIMAX([0], order=(2, 0, 0))
actual = mod1.simulate([-0.9, 0.8, 1., 1., 0.5, 0.2], nobs,
measurement_shocks=np.c_[eps1, eps1],
state_shocks=eps,
initial_state=np.zeros(mod1.k_states))
desired = mod2.simulate([0.5, 0.2, 1], nobs, state_shocks=eps,
initial_state=np.zeros(mod2.k_states))
assert_allclose(actual[:, 0], -0.9 * desired)
assert_allclose(actual[:, 1], 0.8 * desired)
# DFM: 2 series, AR(2) factor, exog
mod1 = dynamic_factor.DynamicFactor(np.zeros((nobs, 2)), k_factors=1,
factor_order=2, exog=exog)
mod2 = sarimax.SARIMAX([0], order=(2, 0, 0))
actual = mod1.simulate([-0.9, 0.8, 5, -2, 1., 1., 0.5, 0.2], nobs,
measurement_shocks=np.c_[eps1, eps1],
state_shocks=eps,
initial_state=np.zeros(mod1.k_states))
desired = mod2.simulate([0.5, 0.2, 1], nobs, state_shocks=eps,
initial_state=np.zeros(mod2.k_states))
assert_allclose(actual[:, 0], -0.9 * desired + 5 * exog[:, 0])
assert_allclose(actual[:, 1], 0.8 * desired - 2 * exog[:, 0])
# DFM, 3 series, VAR(2) factor, exog, error VAR
# TODO: This is just a smoke test
mod = dynamic_factor.DynamicFactor(np.random.normal(size=(nobs, 3)),
k_factors=2, factor_order=2, exog=exog,
error_order=2, error_var=True)
mod.simulate(mod.start_params, nobs)
def test_known_initialization():
# Need to test that "known" initialization is taken into account in
# time series simulation
np.random.seed(38947)
nobs = 100
eps = np.random.normal(size=nobs)
eps1 = np.zeros(nobs)
eps2 = np.zeros(nobs)
eps2[49] = 1
eps3 = np.zeros(nobs)
eps3[50:] = 1
# SARIMAX
# (test that when state shocks are shut down, the initial state
# geometrically declines according to the AR parameter)
mod = sarimax.SARIMAX([0], order=(1, 0, 0))
mod.ssm.initialize_known([100], [[0]])
actual = mod.simulate([0.5, 1.], nobs, state_shocks=eps1)
assert_allclose(actual, 100 * 0.5**np.arange(nobs))
# Unobserved components
# (test that the initial level shifts the entire path)
mod = structural.UnobservedComponents([0], 'local level')
mod.ssm.initialize_known([100], [[0]])
actual = mod.simulate([1., 1.], nobs, measurement_shocks=eps,
state_shocks=eps2)
assert_allclose(actual, 100 + eps + eps3)
# VARMAX
# (here just test that with an independent VAR we have each initial state
# geometrically declining at the appropriate rate)
transition = np.diag([0.5, 0.2])
mod = varmax.VARMAX([[0, 0]], order=(1, 0), trend='n')
mod.initialize_known([100, 50], np.diag([0, 0]))
actual = mod.simulate(np.r_[transition.ravel(), 1., 0, 1.], nobs,
measurement_shocks=np.c_[eps1, eps1],
state_shocks=np.c_[eps1, eps1])
assert_allclose(actual, np.c_[100 * 0.5**np.arange(nobs),
50 * 0.2**np.arange(nobs)])
# Dynamic factor
# (test that the initial state declines geometrically and then loads
# correctly onto the series)
mod = dynamic_factor.DynamicFactor([[0, 0]], k_factors=1, factor_order=1)
mod.initialize_known([100], [[0]])
actual = mod.simulate([0.8, 0.2, 1.0, 1.0, 0.5], nobs,
measurement_shocks=np.c_[eps1, eps1],
state_shocks=eps1)
tmp = 100 * 0.5**np.arange(nobs)
assert_allclose(actual, np.c_[0.8 * tmp, 0.2 * tmp])
def test_sequential_simulate():
# Test that we can perform simulation, change the system matrices, and then
# perform simulation again (i.e. check that everything updates correctly
# in the simulation smoother).
n_simulations = 100
mod = sarimax.SARIMAX([1], order=(0, 0, 0), trend='c')
actual = mod.simulate([1, 0], n_simulations)
assert_allclose(actual, np.ones(n_simulations))
actual = mod.simulate([10, 0], n_simulations)
assert_allclose(actual, np.ones(n_simulations) * 10)
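# Illustrative sketch (not part of the original suite): the same sequential
# pattern with an AR(1) model, where changing the autoregressive coefficient
# between calls changes the zero-shock decay path accordingly.
def _illustrate_sequential_simulate_ar1():
    mod = sarimax.SARIMAX([1], order=(1, 0, 0))
    shocks = np.zeros(5)
    first = mod.simulate([0.5, 1.], 5, state_shocks=shocks,
                         initial_state=[1.])
    second = mod.simulate([0.9, 1.], 5, state_shocks=shocks,
                          initial_state=[1.])
    assert_allclose(first, 0.5**np.arange(5))
    assert_allclose(second, 0.9**np.arange(5))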
def test_sarimax_end_time_invariant_noshocks():
# Test simulating values from the end of a time-invariant SARIMAX model
# In this test, we suppress randomness by setting the shocks to zeros
endog = np.arange(1, 11)
mod = sarimax.SARIMAX(endog)
res = mod.filter([0.5, 1.])
nsimulations = 10
measurement_shocks = np.zeros((nsimulations, mod.k_endog))
state_shocks = np.zeros((nsimulations, mod.k_states))
initial_state = res.predicted_state[..., -1]
assert_allclose(initial_state, 5)
actual = res.simulate(nsimulations, anchor='end',
measurement_shocks=measurement_shocks,
state_shocks=state_shocks,
initial_state=initial_state)
# Compute the desired simulated values directly
desired = 10 * 0.5**np.arange(1, nsimulations + 1)
assert_allclose(actual, desired)
# Test using the model versus the results class
mod_actual = mod.simulate(
res.params, nsimulations, anchor='end',
measurement_shocks=measurement_shocks,
state_shocks=state_shocks,
initial_state=initial_state)
assert_allclose(mod_actual, desired)
# Alternatively, since we've shut down the shocks, we can compare against
# the forecast values
assert_allclose(actual, res.forecast(nsimulations))
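# Illustrative sketch (not part of the original suite): the zero-shock identity
# used above is general. Anchoring the simulation at the end and feeding in the
# final predicted state reproduces the point forecasts; here the same identity
# is checked for an arbitrary AR(2) parameterization.
def _illustrate_anchor_end_matches_forecast():
    endog = np.arange(1, 21)
    mod = sarimax.SARIMAX(endog, order=(2, 0, 0))
    res = mod.filter([0.5, 0.2, 1.])
    h = 5
    sim = res.simulate(h, anchor='end',
                       measurement_shocks=np.zeros((h, mod.k_endog)),
                       state_shocks=np.zeros((h, mod.k_states)),
                       initial_state=res.predicted_state[..., -1])
    assert_allclose(sim, res.forecast(h))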
def test_sarimax_simple_differencing_end_time_invariant_noshocks():
# Test simulating values from the end of a time-invariant SARIMAX model
# in which simple differencing is used.
# In this test, we suppress randomness by setting the shocks to zeros
endog = np.cumsum(np.arange(0, 11))
mod = sarimax.SARIMAX(endog, order=(1, 1, 0), simple_differencing=True)
res = mod.filter([0.5, 1.])
nsimulations = 10
measurement_shocks = np.zeros((nsimulations, mod.k_endog))
state_shocks = np.zeros((nsimulations, mod.k_states))
initial_state = res.predicted_state[..., -1]
assert_allclose(initial_state, 5)
actual = res.simulate(nsimulations, anchor='end',
measurement_shocks=measurement_shocks,
state_shocks=state_shocks,
initial_state=initial_state)
# Compute the desired simulated values directly
desired = 10 * 0.5**np.arange(1, nsimulations + 1)
assert_allclose(actual, desired)
# Test using the model versus the results class
mod_actual = mod.simulate(
res.params, nsimulations, anchor='end',
measurement_shocks=measurement_shocks,
state_shocks=state_shocks,
initial_state=initial_state)
assert_allclose(mod_actual, desired)
# Alternatively, since we've shut down the shocks, we can compare against
# the forecast values
assert_allclose(actual, res.forecast(nsimulations))
def test_sarimax_time_invariant_shocks(reset_randomstate):
# Test simulating values from the end of a time-invariant SARIMAX model,
# with nonzero shocks
endog = np.arange(1, 11)
mod = sarimax.SARIMAX(endog)
res = mod.filter([0.5, 1.])
nsimulations = 10
measurement_shocks = np.random.normal(size=nsimulations)
state_shocks = np.random.normal(size=nsimulations)
initial_state = res.predicted_state[:1, -1]
actual = res.simulate(nsimulations, anchor='end',
measurement_shocks=measurement_shocks,
state_shocks=state_shocks,
initial_state=initial_state)
desired = (
lfilter([1], [1, -0.5], np.r_[initial_state, state_shocks])[:-1] +
measurement_shocks)
assert_allclose(actual, desired)
# Test using the model versus the results class
mod_actual = mod.simulate(
res.params, nsimulations, anchor='end',
measurement_shocks=measurement_shocks,
state_shocks=state_shocks,
initial_state=initial_state)
assert_allclose(mod_actual, desired)
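# Illustrative sketch (not part of the original suite): the lfilter
# construction above works because, with anchor='end', the supplied initial
# state is the first simulated state and each state shock only enters the
# state one period later; the last supplied shock therefore never reaches the
# simulated window, which is why the filtered sequence is truncated with [:-1]
# before the measurement shocks are added. The seed below is arbitrary.
def _illustrate_end_anchor_lfilter():
    gen = np.random.RandomState(4321)
    endog = np.arange(1, 11)
    mod = sarimax.SARIMAX(endog)
    res = mod.filter([0.5, 1.])
    e_state = gen.normal(size=5)
    init = res.predicted_state[:1, -1]
    sim = res.simulate(5, anchor='end', state_shocks=e_state,
                       measurement_shocks=np.zeros(5), initial_state=init)
    desired = lfilter([1], [1, -0.5], np.r_[init, e_state])[:-1]
    assert_allclose(sim, desired)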
def test_sarimax_simple_differencing_end_time_invariant_shocks():
# Test simulating values from the end of a time-invariant SARIMAX model
# in which simple differencing is used.
# This test uses nonzero (randomly drawn) measurement and state shocks
endog = np.cumsum(np.arange(0, 11))
mod = sarimax.SARIMAX(endog, order=(1, 1, 0), simple_differencing=True)
res = mod.filter([0.5, 1.])
nsimulations = 10
measurement_shocks = np.random.normal(size=nsimulations)
state_shocks = np.random.normal(size=nsimulations)
initial_state = res.predicted_state[:1, -1]
actual = res.simulate(nsimulations, anchor='end',
measurement_shocks=measurement_shocks,
state_shocks=state_shocks,
initial_state=initial_state)
desired = (
lfilter([1], [1, -0.5], np.r_[initial_state, state_shocks])[:-1] +
measurement_shocks)
assert_allclose(actual, desired)
# Test using the model versus the results class
mod_actual = mod.simulate(
res.params, nsimulations, anchor='end',
measurement_shocks=measurement_shocks,
state_shocks=state_shocks,
initial_state=initial_state)
assert_allclose(mod_actual, desired)
def test_sarimax_time_varying_trend_noshocks():
# Test simulating values from the end of a time-varying SARIMAX model
# In this test, we suppress randomness by setting the shocks to zeros
endog = np.arange(1, 11)
mod = sarimax.SARIMAX(endog, trend='t')
res = mod.filter([1., 0.2, 1.])
nsimulations = 10
measurement_shocks = np.zeros((nsimulations, mod.k_endog))
state_shocks = np.zeros((nsimulations, mod.k_states))
initial_state = res.predicted_state[..., -1]
assert_allclose(initial_state, 12)
actual = res.simulate(nsimulations, anchor='end',
measurement_shocks=measurement_shocks,
state_shocks=state_shocks,
initial_state=initial_state)
# Compute the desired simulated values directly
desired = lfilter([1], [1, -0.2], np.r_[12, np.arange(11, 20)])
assert_allclose(actual, desired)
# Test using the model versus the results class
mod_actual = mod.simulate(
res.params, nsimulations, anchor='end',
measurement_shocks=measurement_shocks,
state_shocks=state_shocks,
initial_state=initial_state)
assert_allclose(mod_actual, desired)
# Alternatively, since we've shut down the shocks, we can compare against
# the forecast values
assert_allclose(actual, res.forecast(nsimulations))
def test_sarimax_simple_differencing_time_varying_trend_noshocks():
# Test simulating values from the end of a time-varying SARIMAX model
# in which simple differencing is used.
# In this test, we suppress randomness by setting the shocks to zeros
endog = np.cumsum(np.arange(0, 11))
mod = sarimax.SARIMAX(endog, order=(1, 1, 0), trend='t',
simple_differencing=True)
res = mod.filter([1., 0.2, 1.])
nsimulations = 10
measurement_shocks = np.zeros((nsimulations, mod.k_endog))
state_shocks = np.zeros((nsimulations, mod.k_states))
initial_state = res.predicted_state[..., -1]
assert_allclose(initial_state, 12)
actual = res.simulate(nsimulations, anchor='end',
measurement_shocks=measurement_shocks,
state_shocks=state_shocks,
initial_state=initial_state)
# Compute the desired simulated values directly
desired = lfilter([1], [1, -0.2], np.r_[12, np.arange(11, 20)])
assert_allclose(actual, desired)
# Test using the model versus the results class
mod_actual = mod.simulate(
res.params, nsimulations, anchor='end',
measurement_shocks=measurement_shocks,
state_shocks=state_shocks,
initial_state=initial_state)
assert_allclose(mod_actual, desired)
# Alternatively, since we've shut down the shocks, we can compare against
# the forecast values
assert_allclose(actual, res.forecast(nsimulations))
def test_sarimax_time_varying_trend_shocks(reset_randomstate):
# Test simulating values from the end of a time-varying SARIMAX model,
# with nonzero shocks
endog = np.arange(1, 11)
mod = sarimax.SARIMAX(endog, trend='t')
res = mod.filter([1., 0.2, 1.])
nsimulations = 10
measurement_shocks = np.random.normal(size=nsimulations)
state_shocks = np.random.normal(size=nsimulations)
initial_state = res.predicted_state[:1, -1]
actual = res.simulate(nsimulations, anchor='end',
measurement_shocks=measurement_shocks,
state_shocks=state_shocks,
initial_state=initial_state)
x = np.r_[initial_state, state_shocks + np.arange(11, 21)]
desired = lfilter([1], [1, -0.2], x)[:-1] + measurement_shocks
assert_allclose(actual, desired)
# Test using the model versus the results class
mod_actual = mod.simulate(
res.params, nsimulations, anchor='end',
measurement_shocks=measurement_shocks,
state_shocks=state_shocks,
initial_state=initial_state)
assert_allclose(mod_actual, desired)
def test_sarimax_simple_differencing_time_varying_trend_shocks(
reset_randomstate):
# Test simulating values from the end of a time-varying SARIMAX model
# in which simple differencing is used, with nonzero shocks
endog = np.cumsum(np.arange(0, 11))
mod = sarimax.SARIMAX(endog, order=(1, 1, 0), trend='t',
simple_differencing=True)
res = mod.filter([1., 0.2, 1.])
nsimulations = 10
measurement_shocks = np.random.normal(size=nsimulations)
state_shocks = np.random.normal(size=nsimulations)
initial_state = res.predicted_state[:1, -1]
assert_allclose(initial_state, 12)
actual = res.simulate(nsimulations, anchor='end',
measurement_shocks=measurement_shocks,
state_shocks=state_shocks,
initial_state=initial_state)
x = np.r_[initial_state, state_shocks + np.arange(11, 21)]
desired = lfilter([1], [1, -0.2], x)[:-1] + measurement_shocks
assert_allclose(actual, desired)
# Test using the model versus the results class
mod_actual = mod.simulate(
res.params, nsimulations, anchor='end',
measurement_shocks=measurement_shocks,
state_shocks=state_shocks,
initial_state=initial_state)
assert_allclose(mod_actual, desired)
def test_sarimax_time_varying_exog_noshocks():
# Test simulating values from the end of a time-varying SARIMAX model
# In this test, we suppress randomness by setting the shocks to zeros
# Note that `exog` here has basically the same effect as measurement shocks
endog = np.arange(1, 11)
exog = np.arange(1, 21)**2
mod = sarimax.SARIMAX(endog, exog=exog[:10])
res = mod.filter([1., 0.2, 1.])
nsimulations = 10
measurement_shocks = np.zeros((nsimulations, mod.k_endog))
state_shocks = np.zeros((nsimulations, mod.k_states))
initial_state = res.predicted_state[..., -1]
actual = res.simulate(nsimulations, exog=exog[10:], anchor='end',
measurement_shocks=measurement_shocks,
state_shocks=state_shocks,
initial_state=initial_state)
# Compute the desired simulated values directly
desired = (lfilter([1], [1, -0.2], np.r_[initial_state, [0] * 9]) +
exog[10:])
assert_allclose(actual, desired)
# Test using the model versus the results class
mod_actual = mod.simulate(
res.params, nsimulations, exog=exog[10:], anchor='end',
measurement_shocks=measurement_shocks,
state_shocks=state_shocks,
initial_state=initial_state)
assert_allclose(mod_actual, desired)
# Alternatively, since we've shut down the shocks, we can compare against
# the forecast values
assert_allclose(actual, res.forecast(nsimulations, exog=exog[10:]))
def test_sarimax_simple_differencing_time_varying_exog_noshocks():
# Test simulating values from the end of a time-varying SARIMAX model
# with simple differencing
# In this test, we suppress randomness by setting the shocks to zeros
endog = np.cumsum(np.arange(0, 11))
exog = np.cumsum(np.arange(0, 21)**2)
mod = sarimax.SARIMAX(endog, order=(1, 1, 0), exog=exog[:11],
simple_differencing=True)
res = mod.filter([1., 0.2, 1.])
nsimulations = 10
measurement_shocks = np.zeros((nsimulations, mod.k_endog))
state_shocks = np.zeros((nsimulations, mod.k_states))
initial_state = res.predicted_state[..., -1]
actual = res.simulate(nsimulations, exog=exog[11:], anchor='end',
measurement_shocks=measurement_shocks,
state_shocks=state_shocks,
initial_state=initial_state)
# Compute the desired simulated values directly
desired = (lfilter([1], [1, -0.2], np.r_[initial_state, [0] * 9]) +
np.diff(exog)[10:])
assert_allclose(actual, desired)
# Test using the model versus the results class
mod_actual = mod.simulate(
res.params, nsimulations, exog=exog[11:], anchor='end',
measurement_shocks=measurement_shocks,
state_shocks=state_shocks,
initial_state=initial_state)
assert_allclose(mod_actual, desired)
# Alternatively, since we've shut down the shocks, we can compare against
# the forecast values
assert_allclose(actual, res.forecast(nsimulations, exog=exog[11:]))
def test_sarimax_time_varying_exog_shocks(reset_randomstate):
# Test simulating values from the end of a time-varying SARIMAX model,
# with nonzero shocks
endog = np.arange(1, 11)
exog = np.arange(1, 21)**2
mod = sarimax.SARIMAX(endog, exog=exog[:10])
res = mod.filter([1., 0.2, 1.])
nsimulations = 10
measurement_shocks = np.random.normal(size=nsimulations)
state_shocks = np.random.normal(size=nsimulations)
initial_state = res.predicted_state[:1, -1]
actual = res.simulate(nsimulations, exog=exog[10:], anchor='end',
measurement_shocks=measurement_shocks,
state_shocks=state_shocks,
initial_state=initial_state)
x = np.r_[initial_state, state_shocks[:-1]]
desired = lfilter([1], [1, -0.2], x) + exog[10:] + measurement_shocks
assert_allclose(actual, desired)
# Test using the model versus the results class
mod_actual = mod.simulate(
res.params, nsimulations, exog=exog[10:], anchor='end',
measurement_shocks=measurement_shocks,
state_shocks=state_shocks,
initial_state=initial_state)
assert_allclose(mod_actual, desired)
def test_sarimax_simple_differencing_time_varying_exog_shocks(
reset_randomstate):
# Test simulating values from the end of a time-varying SARIMAX model
# with simple differencing, with nonzero shocks
# Note that `exog` here has basically the same effect as measurement shocks
endog = np.cumsum(np.arange(0, 11))
exog = np.cumsum(np.arange(0, 21)**2)
mod = sarimax.SARIMAX(endog, order=(1, 1, 0), exog=exog[:11],
simple_differencing=True)
res = mod.filter([1., 0.2, 1.])
nsimulations = 10
measurement_shocks = np.random.normal(size=nsimulations)
state_shocks = np.random.normal(size=nsimulations)
initial_state = res.predicted_state[:1, -1]
actual = res.simulate(nsimulations, exog=exog[11:], anchor='end',
measurement_shocks=measurement_shocks,
state_shocks=state_shocks,
initial_state=initial_state)
# Compute the desired simulated values directly
x = np.r_[initial_state, state_shocks[:-1]]
desired = (lfilter([1], [1, -0.2], x) + np.diff(exog)[10:] +
measurement_shocks)
assert_allclose(actual, desired)
# Test using the model versus the results class
mod_actual = mod.simulate(
res.params, nsimulations, exog=exog[11:], anchor='end',
measurement_shocks=measurement_shocks,
state_shocks=state_shocks,
initial_state=initial_state)
assert_allclose(mod_actual, desired)
def test_unobserved_components_end_time_invariant_noshocks():
# Test simulating values from the end of a time-invariant
# UnobservedComponents model
# In this test, we suppress randomness by setting the shocks to zeros
endog = np.arange(1, 11)
mod = structural.UnobservedComponents(endog, 'llevel')
res = mod.filter([1., 1.])
nsimulations = 10
measurement_shocks = np.zeros((nsimulations, mod.k_endog))
state_shocks = np.zeros((nsimulations, mod.k_states))
initial_state = res.predicted_state[..., -1]
actual = res.simulate(nsimulations, anchor='end',
measurement_shocks=measurement_shocks,
state_shocks=state_shocks,
initial_state=initial_state)
# The mean of the simulated local level values is just the last value
desired = initial_state[0]
assert_allclose(actual, desired)
# Test using the model versus the results class
mod_actual = mod.simulate(
res.params, nsimulations, anchor='end',
measurement_shocks=measurement_shocks,
state_shocks=state_shocks,
initial_state=initial_state)
assert_allclose(mod_actual, desired)
# Alternatively, since we've shut down the shocks, we can compare against
# the forecast values
assert_allclose(actual, res.forecast(nsimulations))
def test_unobserved_components_end_time_invariant_shocks(reset_randomstate):
# Test simulating values from the end of a time-invariant
# UnobservedComponents model, with nonzero shocks
endog = np.arange(1, 11)
mod = structural.UnobservedComponents(endog, 'llevel')
res = mod.filter([1., 1.])
nsimulations = 10
measurement_shocks = np.random.normal(size=nsimulations)
state_shocks = np.random.normal(size=nsimulations)
initial_state = res.predicted_state[:1, -1]
actual = res.simulate(nsimulations, anchor='end',
measurement_shocks=measurement_shocks,
state_shocks=state_shocks,
initial_state=initial_state)
desired = (initial_state + np.cumsum(np.r_[0, state_shocks[:-1]]) +
measurement_shocks)
assert_allclose(actual, desired)
# Test using the model versus the results class
mod_actual = mod.simulate(
res.params, nsimulations, anchor='end',
measurement_shocks=measurement_shocks,
state_shocks=state_shocks,
initial_state=initial_state)
assert_allclose(mod_actual, desired)
def test_unobserved_components_end_time_varying_exog_noshocks():
# Test simulating values from the end of a time-varying
# UnobservedComponents model with exog
# In this test, we suppress randomness by setting the shocks to zeros
endog = np.arange(1, 11)
exog = np.arange(1, 21)**2
mod = structural.UnobservedComponents(endog, 'llevel', exog=exog[:10])
res = mod.filter([1., 1., 1.])
nsimulations = 10
measurement_shocks = np.zeros((nsimulations, mod.k_endog))
state_shocks = np.zeros((nsimulations, mod.k_states))
initial_state = res.predicted_state[..., -1]
actual = res.simulate(nsimulations, exog=exog[10:], anchor='end',
measurement_shocks=measurement_shocks,
state_shocks=state_shocks,
initial_state=initial_state)
# The mean of the simulated local level values is just the last value
desired = initial_state[0] + exog[10:]
assert_allclose(actual, desired)
# Test using the model versus the results class
mod_actual = mod.simulate(
res.params, nsimulations, exog=exog[10:], anchor='end',
measurement_shocks=measurement_shocks,
state_shocks=state_shocks,
initial_state=initial_state)
assert_allclose(mod_actual, desired)
# Alternatively, since we've shut down the shocks, we can compare against
# the forecast values
assert_allclose(actual, res.forecast(nsimulations, exog=exog[10:]))
def test_unobserved_components_end_time_varying_exog_shocks(reset_randomstate):
# Test simulating values from the end of a time-varying
# UnobservedComponents model with exog
endog = np.arange(1, 11)
exog = np.arange(1, 21)**2
mod = structural.UnobservedComponents(endog, 'llevel', exog=exog[:10])
res = mod.filter([1., 1., 1.])
nsimulations = 10
measurement_shocks = np.random.normal(size=nsimulations)
state_shocks = np.random.normal(size=nsimulations)
initial_state = res.predicted_state[:1, -1]
actual = res.simulate(nsimulations, exog=exog[10:], anchor='end',
measurement_shocks=measurement_shocks,
state_shocks=state_shocks,
initial_state=initial_state)
desired = (initial_state + np.cumsum(np.r_[0, state_shocks[:-1]]) +
measurement_shocks + exog[10:])
assert_allclose(actual, desired)
# Test using the model versus the results class
mod_actual = mod.simulate(
res.params, nsimulations, exog=exog[10:], anchor='end',
measurement_shocks=measurement_shocks,
state_shocks=state_shocks,
initial_state=initial_state)
assert_allclose(mod_actual, desired)
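# Illustrative sketch, not part of the original test module: in the 'llevel'
# (local level) specification the state is a random walk, so a simulated path
# is the initial level plus the running sum of the state shocks, with the
# measurement shock (and any exog effect) added on top -- the identity used to
# build the "desired" arrays above. Values below are assumptions for the
# example only.
def _example_local_level_simulation_identity():
    import numpy as np
    from numpy.testing import assert_allclose

    initial_level = 5.0
    state_shocks = np.array([0.5, -1.0, 2.0])
    # The first simulated observation uses the initial level itself; each
    # later level adds the preceding state shock.
    levels = initial_level + np.cumsum(np.r_[0., state_shocks[:-1]])
    assert_allclose(levels, [5.0, 5.5, 4.5])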
def test_varmax_end_time_invariant_noshocks():
# Test simulating values from the end of a time-invariant VARMAX model
# In this test, we suppress randomness by setting the shocks to zeros
endog = np.arange(1, 21).reshape(10, 2)
mod = varmax.VARMAX(endog, trend='n')
res = mod.filter([1., 1., 1., 1., 1., 0.5, 1.])
nsimulations = 10
measurement_shocks = np.zeros((nsimulations, mod.k_endog))
state_shocks = np.zeros((nsimulations, mod.k_states))
initial_state = res.predicted_state[:, -1]
actual = res.simulate(nsimulations, anchor='end',
measurement_shocks=measurement_shocks,
state_shocks=state_shocks,
initial_state=initial_state)
desired = (initial_state[:, None] * 2 ** np.arange(10)).T
assert_allclose(actual, desired)
# Test using the model versus the results class
mod_actual = mod.simulate(
res.params, nsimulations, anchor='end',
measurement_shocks=measurement_shocks,
state_shocks=state_shocks,
initial_state=initial_state)
assert_allclose(mod_actual, desired)
# Alternatively, since we've shut down the shocks, we can compare against
# the forecast values
assert_allclose(actual, res.forecast(nsimulations))
def test_varmax_end_time_invariant_shocks(reset_randomstate):
# Test simulating values from the end of a time-invariant VARMAX model,
# with nonzero shocks
endog = np.arange(1, 21).reshape(10, 2)
mod = varmax.VARMAX(endog, trend='n')
res = mod.filter([1., 1., 1., 1., 1., 0.5, 1.])
nsimulations = 10
measurement_shocks = np.random.normal(size=(nsimulations, mod.k_endog))
state_shocks = np.random.normal(size=(nsimulations, mod.k_states))
initial_state = res.predicted_state[:, -1]
actual = res.simulate(nsimulations, anchor='end',
measurement_shocks=measurement_shocks,
state_shocks=state_shocks,
initial_state=initial_state)
desired = np.zeros((nsimulations, mod.k_endog))
desired[0] = initial_state
for i in range(1, nsimulations):
desired[i] = desired[i - 1].sum() + state_shocks[i - 1]
desired = desired + measurement_shocks
assert_allclose(actual, desired)
# Test using the model versus the results class
mod_actual = mod.simulate(
res.params, nsimulations, anchor='end',
measurement_shocks=measurement_shocks,
state_shocks=state_shocks,
initial_state=initial_state)
assert_allclose(mod_actual, desired)
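# Illustrative sketch, not part of the original test module: the VARMAX
# parameters used above imply a VAR(1) coefficient matrix of all ones, so each
# new state vector is just the (scalar) sum of the previous vector broadcast
# to both series -- which is what the loops building "desired" encode with
# desired[i - 1].sum(). A minimal demonstration with assumed values:
def _example_all_ones_var_coefficient_matrix():
    import numpy as np
    from numpy.testing import assert_allclose

    coeff = np.ones((2, 2))
    prev = np.array([3.0, 7.0])
    assert_allclose(coeff @ prev, np.full(2, prev.sum()))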
def test_varmax_end_time_varying_trend_noshocks():
# Test simulating values from the end of a time-varying VARMAX model
# with a trend
# In this test, we suppress randomness by setting the shocks to zeros
endog = np.arange(1, 21).reshape(10, 2)
mod = varmax.VARMAX(endog, trend='ct')
res = mod.filter([1., 1., 1., 1., 1, 1, 1., 1., 1., 0.5, 1.])
nsimulations = 10
measurement_shocks = np.zeros((nsimulations, mod.k_endog))
state_shocks = np.zeros((nsimulations, mod.k_states))
# Need to set the final predicted state given the new trend
with res._set_final_predicted_state(exog=None, out_of_sample=10):
initial_state = res.predicted_state[:, -1].copy()
# Simulation
actual = res.simulate(nsimulations, anchor='end',
measurement_shocks=measurement_shocks,
state_shocks=state_shocks,
initial_state=initial_state)
desired = np.zeros((nsimulations, mod.k_endog))
desired[0] = initial_state
tmp_trend = 1 + np.arange(11, 21)
for i in range(1, nsimulations):
desired[i] = desired[i - 1].sum() + tmp_trend[i] + state_shocks[i - 1]
desired = desired + measurement_shocks
assert_allclose(actual, desired)
# Test using the model versus the results class
mod_actual = mod.simulate(
res.params, nsimulations, anchor='end',
measurement_shocks=measurement_shocks,
state_shocks=state_shocks,
initial_state=initial_state)
assert_allclose(mod_actual, desired)
# Alternatively, since we've shut down the shocks, we can compare against
# the forecast values
assert_allclose(actual, res.forecast(nsimulations))
def test_varmax_end_time_varying_trend_shocks(reset_randomstate):
# Test simulating values from the end of a time-varying VARMAX model
# with a trend
endog = np.arange(1, 21).reshape(10, 2)
mod = varmax.VARMAX(endog, trend='ct')
res = mod.filter([1., 1., 1., 1., 1, 1, 1., 1., 1., 0.5, 1.])
nsimulations = 10
measurement_shocks = np.random.normal(size=(nsimulations, mod.k_endog))
state_shocks = np.random.normal(size=(nsimulations, mod.k_states))
# Need to set the final predicted state given the new trend
with res._set_final_predicted_state(exog=None, out_of_sample=10):
initial_state = res.predicted_state[:, -1].copy()
# Simulation
actual = res.simulate(nsimulations, anchor='end',
measurement_shocks=measurement_shocks,
state_shocks=state_shocks,
initial_state=initial_state)
desired = np.zeros((nsimulations, mod.k_endog))
desired[0] = initial_state
tmp_trend = 1 + np.arange(11, 21)
for i in range(1, nsimulations):
desired[i] = desired[i - 1].sum() + tmp_trend[i] + state_shocks[i - 1]
desired = desired + measurement_shocks
assert_allclose(actual, desired)
# Test using the model versus the results class
mod_actual = mod.simulate(
res.params, nsimulations, anchor='end',
measurement_shocks=measurement_shocks,
state_shocks=state_shocks,
initial_state=initial_state)
assert_allclose(mod_actual, desired)
def test_varmax_end_time_varying_exog_noshocks():
# Test simulating values from the end of a time-varying VARMAX model
# with exog
# In this test, we suppress randomness by setting the shocks to zeros
endog = np.arange(1, 21).reshape(10, 2)
exog = np.arange(1, 21)**2
mod = varmax.VARMAX(endog, trend='n', exog=exog[:10])
res = mod.filter([1., 1., 1., 1., 1., 1., 1., 0.5, 1.])
nsimulations = 10
measurement_shocks = np.zeros((nsimulations, mod.k_endog))
state_shocks = np.zeros((nsimulations, mod.k_states))
# Need to set the final predicted state given the new exog
tmp_exog = mod._validate_out_of_sample_exog(exog[10:], out_of_sample=10)
with res._set_final_predicted_state(exog=tmp_exog, out_of_sample=10):
initial_state = res.predicted_state[:, -1].copy()
# Simulation
actual = res.simulate(nsimulations, exog=exog[10:], anchor='end',
measurement_shocks=measurement_shocks,
state_shocks=state_shocks,
initial_state=initial_state)
desired = np.zeros((nsimulations, mod.k_endog))
desired[0] = initial_state
for i in range(1, nsimulations):
desired[i] = desired[i - 1].sum() + exog[10 + i] + state_shocks[i - 1]
desired = desired + measurement_shocks
assert_allclose(actual, desired)
# Test using the model versus the results class
mod_actual = mod.simulate(
res.params, nsimulations, exog=exog[10:], anchor='end',
measurement_shocks=measurement_shocks,
state_shocks=state_shocks,
initial_state=initial_state)
assert_allclose(mod_actual, desired)
# Alternatively, since we've shut down the shocks, we can compare against
# the forecast values
assert_allclose(actual, res.forecast(nsimulations, exog=exog[10:]))
def test_varmax_end_time_varying_exog_shocks(reset_randomstate):
# Test simulating values from the end of a time-varying VARMAX model
# with exog
endog = np.arange(1, 23).reshape(11, 2)
exog = np.arange(1, 21)**2
mod = varmax.VARMAX(endog[:10], trend='n', exog=exog[:10])
res = mod.filter([1., 1., 1., 1., 1., 1., 1., 0.5, 1.])
mod2 = varmax.VARMAX(endog, trend='n', exog=exog[:11])
res2 = mod2.filter([1., 1., 1., 1., 1., 1., 1., 0.5, 1.])
nsimulations = 10
measurement_shocks = np.random.normal(size=(nsimulations, mod.k_endog))
state_shocks = np.random.normal(size=(nsimulations, mod.k_states))
# Need to set the final predicted state given the new exog
tmp_exog = mod._validate_out_of_sample_exog(exog[10:], out_of_sample=10)
with res._set_final_predicted_state(exog=tmp_exog, out_of_sample=10):
initial_state = res.predicted_state[:, -1].copy()
# Simulation
actual = res.simulate(nsimulations, exog=exog[10:], anchor='end',
measurement_shocks=measurement_shocks,
state_shocks=state_shocks,
initial_state=initial_state)
actual2 = res2.simulate(nsimulations, exog=exog[11:], anchor=-1,
measurement_shocks=measurement_shocks,
state_shocks=state_shocks,
initial_state=res2.predicted_state[:, -2])
desired = np.zeros((nsimulations, mod.k_endog))
desired[0] = initial_state
for i in range(1, nsimulations):
desired[i] = desired[i - 1].sum() + exog[10 + i] + state_shocks[i - 1]
desired = desired + measurement_shocks
assert_allclose(actual, desired)
assert_allclose(actual2, desired)
# Test using the model versus the results class
mod_actual = mod.simulate(
res.params, nsimulations, exog=exog[10:], anchor='end',
measurement_shocks=measurement_shocks,
state_shocks=state_shocks,
initial_state=initial_state)
assert_allclose(mod_actual, desired)
def test_dynamic_factor_end_time_invariant_noshocks():
# Test simulating values from the end of a time-invariant dynamic factor
# In this test, we suppress randomness by setting the shocks to zeros
endog = np.arange(1, 21).reshape(10, 2)
mod = dynamic_factor.DynamicFactor(endog, k_factors=1, factor_order=1)
mod.ssm.filter_univariate = True
res = mod.filter([1., 1., 1., 1., 1.])
nsimulations = 10
measurement_shocks = np.zeros((nsimulations, mod.k_endog))
state_shocks = np.zeros((nsimulations, mod.k_states))
initial_state = res.predicted_state[..., -1]
# Simulation
actual = res.simulate(nsimulations, anchor='end',
measurement_shocks=measurement_shocks,
state_shocks=state_shocks,
initial_state=initial_state)
# Construct the simulation directly
desired = np.zeros((nsimulations, mod.k_endog))
desired[0] = initial_state
for i in range(1, nsimulations):
desired[i] = desired[i - 1] + state_shocks[i - 1]
desired = desired + measurement_shocks
assert_allclose(actual, desired)
# Test using the model versus the results class
mod_actual = mod.simulate(
res.params, nsimulations, anchor='end',
measurement_shocks=measurement_shocks,
state_shocks=state_shocks,
initial_state=initial_state)
assert_allclose(mod_actual, desired)
# Alternatively, since we've shut down the shocks, we can compare against
# the forecast values
assert_allclose(actual, res.forecast(nsimulations))
def test_dynamic_factor_end_time_invariant_shocks(reset_randomstate):
# Test simulating values from the end of a time-invariant dynamic factor
endog = np.arange(1, 21).reshape(10, 2)
mod = dynamic_factor.DynamicFactor(endog, k_factors=1, factor_order=1)
mod.ssm.filter_univariate = True
res = mod.filter([1., 1., 1., 1., 1., 1., 1.])
nsimulations = 10
measurement_shocks = np.random.normal(size=(nsimulations, mod.k_endog))
state_shocks = np.random.normal(size=(nsimulations, mod.k_states))
initial_state = res.predicted_state[..., -1]
# Simulation
actual = res.simulate(nsimulations, anchor='end',
measurement_shocks=measurement_shocks,
state_shocks=state_shocks,
initial_state=initial_state)
# Construct the simulation directly
desired = np.zeros((nsimulations, mod.k_endog))
desired[0] = initial_state
for i in range(1, nsimulations):
desired[i] = desired[i - 1] + state_shocks[i - 1]
desired = desired + measurement_shocks
assert_allclose(actual, desired)
# Test using the model versus the results class
mod_actual = mod.simulate(
res.params, nsimulations, anchor='end',
measurement_shocks=measurement_shocks,
state_shocks=state_shocks,
initial_state=initial_state)
assert_allclose(mod_actual, desired)
def test_dynamic_factor_end_time_varying_exog_noshocks():
# Test simulating values from the end of a time-varying dynamic factor
# model with exogenous inputs
# In this test, we suppress randomness by setting the shocks to zeros
endog = np.arange(1, 21).reshape(10, 2)
exog = np.arange(1, 21)**2
mod = dynamic_factor.DynamicFactor(endog, k_factors=1, factor_order=1,
exog=exog[:10])
mod.ssm.filter_univariate = True
res = mod.filter([1., 1., 1., 1., 1., 1., 1.])
nsimulations = 10
measurement_shocks = np.zeros((nsimulations, mod.k_endog))
state_shocks = np.zeros((nsimulations, mod.k_states))
initial_state = res.predicted_state[..., -1]
# Simulation
actual = res.simulate(nsimulations, exog=exog[10:], anchor='end',
measurement_shocks=measurement_shocks,
state_shocks=state_shocks,
initial_state=initial_state)
# Construct the simulation directly
desired = np.zeros((nsimulations, mod.k_endog))
desired[0] = initial_state
for i in range(1, nsimulations):
desired[i] = desired[i - 1] + state_shocks[i - 1]
desired = desired + measurement_shocks + exog[10:, None]
assert_allclose(actual, desired)
# Test using the model versus the results class
mod_actual = mod.simulate(
res.params, nsimulations, exog=exog[10:], anchor='end',
measurement_shocks=measurement_shocks,
state_shocks=state_shocks,
initial_state=initial_state)
assert_allclose(mod_actual, desired)
# Alternatively, since we've shut down the shocks, we can compare against
# the forecast values
assert_allclose(actual, res.forecast(nsimulations, exog=exog[10:]))
def test_dynamic_factor_end_time_varying_exog_shocks(reset_randomstate):
# Test simulating values from the end of a time-varying dynamic factor
# model with exogenous inputs
endog = np.arange(1, 23).reshape(11, 2)
exog = np.arange(1, 21)**2
mod = dynamic_factor.DynamicFactor(
endog[:10], k_factors=1, factor_order=1, exog=exog[:10])
mod.ssm.filter_univariate = True
res = mod.filter([1., 1., 1., 1., 1., 1., 1.])
mod2 = dynamic_factor.DynamicFactor(
endog, k_factors=1, factor_order=1, exog=exog[:11])
mod2.ssm.filter_univariate = True
res2 = mod2.filter([1., 1., 1., 1., 1., 1., 1.])
nsimulations = 10
measurement_shocks = np.random.normal(size=(nsimulations, mod.k_endog))
state_shocks = np.random.normal(size=(nsimulations, mod.k_states))
initial_state = res.predicted_state[..., -1]
# Simulations
actual = res.simulate(nsimulations, exog=exog[10:], anchor='end',
measurement_shocks=measurement_shocks,
state_shocks=state_shocks,
initial_state=initial_state)
actual2 = res2.simulate(nsimulations, exog=exog[11:], anchor=-1,
measurement_shocks=measurement_shocks,
state_shocks=state_shocks,
initial_state=initial_state)
# Construct the simulation directly
desired = np.zeros((nsimulations, mod.k_endog))
desired[0] = initial_state
for i in range(1, nsimulations):
desired[i] = desired[i - 1] + state_shocks[i - 1]
desired = desired + measurement_shocks + exog[10:, None]
assert_allclose(actual, desired)
assert_allclose(actual2, desired)
# Test using the model versus the results class
mod_actual = mod.simulate(
res.params, nsimulations, exog=exog[10:], anchor='end',
measurement_shocks=measurement_shocks,
state_shocks=state_shocks,
initial_state=initial_state)
assert_allclose(mod_actual, desired)
def test_pandas_univariate_rangeindex():
# Simulated values will also have a RangeIndex
endog = pd.Series(np.zeros(2))
mod = sarimax.SARIMAX(endog)
res = mod.filter([0.5, 1.])
# Default simulate anchors to the start of the sample
actual = res.simulate(2, state_shocks=np.zeros(2),
initial_state=np.zeros(1))
desired = pd.Series([0, 0])
assert_allclose(actual, desired)
# Alternative anchor changes the index
actual = res.simulate(2, anchor=2, state_shocks=np.zeros(2),
initial_state=np.zeros(1))
ix = pd.RangeIndex(2, 4)
desired = pd.Series([0, 0], index=ix)
assert_allclose(actual, desired)
assert_(actual.index.equals(desired.index))
def test_pandas_univariate_rangeindex_repetitions():
# Simulated values will also have a RangeIndex
endog = pd.Series(np.zeros(2))
mod = sarimax.SARIMAX(endog)
res = mod.filter([0.5, 1.])
# Default simulate anchors to the start of the sample
actual = res.simulate(2, state_shocks=np.zeros(2),
initial_state=np.zeros(1), repetitions=2)
columns = pd.MultiIndex.from_product([['y'], [0, 1]])
desired = pd.DataFrame(np.zeros((2, 2)), columns=columns)
assert_allclose(actual, desired)
assert_(actual.columns.equals(desired.columns))
# Alternative anchor changes the index
actual = res.simulate(2, anchor=2, state_shocks=np.zeros(2),
initial_state=np.zeros(1), repetitions=2)
ix = pd.RangeIndex(2, 4)
columns = pd.MultiIndex.from_product([['y'], [0, 1]])
desired = pd.DataFrame(np.zeros((2, 2)), index=ix, columns=columns)
assert_allclose(actual, desired)
assert_(actual.index.equals(desired.index))
assert_(actual.columns.equals(desired.columns))
def test_pandas_univariate_dateindex():
# Simulation will maintain the date index
ix = pd.date_range(start='2000', periods=2, freq='M')
endog = pd.Series(np.zeros(2), index=ix)
mod = sarimax.SARIMAX(endog)
res = mod.filter([0.5, 1.])
# Default simulate anchors to the start of the sample
actual = res.simulate(2, state_shocks=np.zeros(2),
initial_state=np.zeros(1))
ix = pd.date_range(start='2000-01', periods=2, freq='M')
desired = pd.Series([0, 0], index=ix)
assert_allclose(actual, desired)
assert_(actual.index.equals(desired.index))
# Alternative anchor changes the index
actual = res.simulate(2, anchor=2, state_shocks=np.zeros(2),
initial_state=np.zeros(1))
ix = pd.date_range(start='2000-03', periods=2, freq='M')
desired = pd.Series([0, 0], index=ix)
assert_allclose(actual, desired)
def test_pandas_univariate_dateindex_repetitions():
# Simulation will maintain the date index
ix = pd.date_range(start='2000', periods=2, freq='M')
endog = pd.Series(np.zeros(2), index=ix)
mod = sarimax.SARIMAX(endog)
res = mod.filter([0.5, 1.])
# Default simulate anchors to the start of the sample
actual = res.simulate(2, state_shocks=np.zeros(2),
initial_state=np.zeros(1), repetitions=2)
ix = pd.date_range(start='2000-01', periods=2, freq='M')
columns = pd.MultiIndex.from_product([['y'], [0, 1]])
desired = pd.DataFrame(np.zeros((2, 2)), index=ix, columns=columns)
assert_allclose(actual, desired)
assert_(actual.columns.equals(desired.columns))
# Alternative anchor changes the index
actual = res.simulate(2, anchor=2, state_shocks=np.zeros(2),
initial_state=np.zeros(1), repetitions=2)
ix = pd.date_range(start='2000-03', periods=2, freq='M')
columns = pd.MultiIndex.from_product([['y'], [0, 1]])
desired = pd.DataFrame(np.zeros((2, 2)), index=ix, columns=columns)
assert_allclose(actual, desired)
assert_(actual.index.equals(desired.index))
assert_(actual.columns.equals(desired.columns))
def test_pandas_multivariate_rangeindex():
# Simulated values will also have a RangeIndex
endog = pd.DataFrame(np.zeros((2, 2)))
mod = varmax.VARMAX(endog, trend='n')
res = mod.filter([0.5, 0., 0., 0.2, 1., 0., 1.])
# Default simulate anchors to the start of the sample
actual = res.simulate(2, state_shocks=np.zeros((2, 2)),
initial_state=np.zeros(2))
desired = pd.DataFrame(np.zeros((2, 2)))
assert_allclose(actual, desired)
# Alternative anchor changes the index
actual = res.simulate(2, anchor=2, state_shocks=np.zeros((2, 2)),
initial_state=np.zeros(2))
ix = pd.RangeIndex(2, 4)
desired = pd.DataFrame(np.zeros((2, 2)), index=ix)
assert_allclose(actual, desired)
assert_(actual.index.equals(desired.index))
def test_pandas_multivariate_rangeindex_repetitions():
# Simulated values will also have a RangeIndex
endog = pd.DataFrame(np.zeros((2, 2)), columns=['y1', 'y2'])
mod = varmax.VARMAX(endog, trend='n')
res = mod.filter([0.5, 0., 0., 0.2, 1., 0., 1.])
# Default simulate anchors to the start of the sample
actual = res.simulate(2, state_shocks=np.zeros((2, 2)),
initial_state=np.zeros(2), repetitions=2)
columns = pd.MultiIndex.from_product([['y1', 'y2'], [0, 1]])
desired = pd.DataFrame(np.zeros((2, 4)), columns=columns)
assert_allclose(actual, desired)
assert_(actual.columns.equals(desired.columns))
# Alternative anchor changes the index
actual = res.simulate(2, anchor=2, state_shocks=np.zeros((2, 2)),
initial_state=np.zeros(2), repetitions=2)
ix = pd.RangeIndex(2, 4)
columns = pd.MultiIndex.from_product([['y1', 'y2'], [0, 1]])
desired = pd.DataFrame(np.zeros((2, 4)), index=ix, columns=columns)
assert_allclose(actual, desired)
assert_(actual.index.equals(desired.index))
assert_(actual.columns.equals(desired.columns))
def test_pandas_multivariate_dateindex():
# Simulated values will maintain the date index
ix = pd.date_range(start='2000', periods=2, freq='M')
endog = pd.DataFrame(np.zeros((2, 2)), index=ix)
mod = varmax.VARMAX(endog, trend='n')
res = mod.filter([0.5, 0., 0., 0.2, 1., 0., 1.])
# Default simulate anchors to the start of the sample
actual = res.simulate(2, state_shocks=np.zeros((2, 2)),
initial_state=np.zeros(2))
desired = pd.DataFrame(np.zeros((2, 2)), index=ix)
assert_allclose(actual, desired)
# Alternative anchor changes the index
actual = res.simulate(2, anchor=2, state_shocks=np.zeros((2, 2)),
initial_state=np.zeros(2))
ix = pd.date_range(start='2000-03', periods=2, freq='M')
desired = pd.DataFrame(np.zeros((2, 2)), index=ix)
assert_allclose(actual, desired)
assert_(actual.index.equals(desired.index))
def test_pandas_multivariate_dateindex_repetitions():
# Simulated values will maintain the date index
ix = pd.date_range(start='2000', periods=2, freq='M')
endog = pd.DataFrame(np.zeros((2, 2)), columns=['y1', 'y2'], index=ix)
mod = varmax.VARMAX(endog, trend='n')
res = mod.filter([0.5, 0., 0., 0.2, 1., 0., 1.])
# Default simulate anchors to the start of the sample
actual = res.simulate(2, state_shocks=np.zeros((2, 2)),
initial_state=np.zeros(2), repetitions=2)
columns = pd.MultiIndex.from_product([['y1', 'y2'], [0, 1]])
desired = pd.DataFrame(np.zeros((2, 4)), columns=columns, index=ix)
assert_allclose(actual, desired)
assert_(actual.columns.equals(desired.columns))
# Alternative anchor changes the index
actual = res.simulate(2, anchor=2, state_shocks=np.zeros((2, 2)),
initial_state=np.zeros(2), repetitions=2)
ix = pd.date_range(start='2000-03', periods=2, freq='M')
columns = pd.MultiIndex.from_product([['y1', 'y2'], [0, 1]])
desired = pd.DataFrame(np.zeros((2, 4)), index=ix, columns=columns)
assert_allclose(actual, desired)
assert_(actual.index.equals(desired.index))
assert_(actual.columns.equals(desired.columns))
def test_pandas_anchor():
# Test that anchor with dates works
ix = pd.date_range(start='2000', periods=2, freq='M')
endog = pd.Series(np.zeros(2), index=ix)
mod = sarimax.SARIMAX(endog)
res = mod.filter([0.5, 1.])
desired = res.simulate(2, anchor=1, state_shocks=np.zeros(2),
initial_state=np.zeros(1))
# Anchor to date
actual = res.simulate(2, anchor=ix[1], state_shocks=np.zeros(2),
initial_state=np.zeros(1))
assert_allclose(actual, desired)
assert_(actual.index.equals(desired.index))
# Anchor to negative index
actual = res.simulate(2, anchor=-1, state_shocks=np.zeros(2),
initial_state=np.zeros(1))
assert_allclose(actual, desired)
assert_(actual.index.equals(desired.index))
@pytest.mark.smoke
def test_time_varying(reset_randomstate):
mod = TVSS(np.zeros((10, 2)))
mod.simulate([], 10)
def test_time_varying_obs_cov(reset_randomstate):
mod = TVSS(np.zeros((10, 2)))
mod['obs_cov'] = np.zeros((mod.k_endog, mod.k_endog, mod.nobs))
mod['obs_cov', ..., 9] = np.eye(mod.k_endog)
mod['state_intercept', :] = 0
mod['state_cov'] = mod['state_cov', :, :, 0] * 0
mod['selection'] = mod['selection', :, :, 0]
assert_equal(mod['state_cov'].shape, (mod.ssm.k_posdef, mod.ssm.k_posdef))
assert_equal(mod['selection'].shape, (mod.k_states, mod.ssm.k_posdef))
sim = mod.simulate([], 10, initial_state=np.zeros(mod.k_states))
assert_allclose(sim[:9], mod['obs_intercept', :, :9].T)
def test_time_varying_state_cov(reset_randomstate):
mod = TVSS(np.zeros((10, 2)))
mod['obs_cov'] = mod['obs_cov', :, :, 0] * 0
mod['selection'] = mod['selection', :, :, 0]
mod['state_intercept', :] = 0
mod['state_cov'] = np.zeros((mod.ssm.k_posdef, mod.ssm.k_posdef, mod.nobs))
mod['state_cov', ..., -1] = np.eye(mod.ssm.k_posdef)
assert_equal(mod['obs_cov'].shape, (mod.k_endog, mod.k_endog))
assert_equal(mod['selection'].shape, (mod.k_states, mod.ssm.k_posdef))
sim = mod.simulate([], 10)
assert_allclose(sim, mod['obs_intercept'].T)
@pytest.mark.smoke
def test_time_varying_selection(reset_randomstate):
mod = TVSS(np.zeros((10, 2)))
mod['obs_cov'] = mod['obs_cov', :, :, 0]
mod['state_cov'] = mod['state_cov', :, :, 0]
assert_equal(mod['obs_cov'].shape, (mod.k_endog, mod.k_endog))
assert_equal(mod['state_cov'].shape, (mod.ssm.k_posdef, mod.ssm.k_posdef))
mod.simulate([], 10)
| 39.970901 | 79 | 0.635857 | 9,419 | 71,428 | 4.675019 | 0.037902 | 0.063496 | 0.041695 | 0.039242 | 0.941591 | 0.926738 | 0.90848 | 0.887814 | 0.871531 | 0.861107 | 0 | 0.036054 | 0.241628 | 71,428 | 1,786 | 80 | 39.993281 | 0.776847 | 0.144789 | 0 | 0.814754 | 0 | 0 | 0.015155 | 0 | 0 | 0 | 0 | 0.00056 | 0.135246 | 1 | 0.037705 | false | 0 | 0.006557 | 0 | 0.044262 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
ec7025b8b5bc845e53b3b6814d8f824a9ef83050 | 3,675 | py | Python | benchmarks/imports.py | WilliamJamieson/astropy-benchmarks | dcff9e1d8a584f7933743a91682806647a4c1e86 | [
"BSD-3-Clause"
] | 5 | 2018-01-30T01:26:07.000Z | 2020-12-03T08:11:19.000Z | benchmarks/imports.py | WilliamJamieson/astropy-benchmarks | dcff9e1d8a584f7933743a91682806647a4c1e86 | [
"BSD-3-Clause"
] | 72 | 2015-10-03T13:46:06.000Z | 2021-11-19T16:42:03.000Z | benchmarks/imports.py | WilliamJamieson/astropy-benchmarks | dcff9e1d8a584f7933743a91682806647a4c1e86 | [
"BSD-3-Clause"
] | 18 | 2015-05-25T17:01:26.000Z | 2021-03-18T16:49:32.000Z | """Benchmarks for import timing for astropy and its subpackages."""
# https://github.com/airspeed-velocity/asv/pull/832
def timeraw_import_astropy():
return """
import astropy
"""
def timeraw_import_astropy_config():
return """
from astropy import config
"""
def timeraw_import_astropy_constants():
return """
from astropy import constants
"""
def timeraw_import_astropy_convolution():
return """
from astropy import convolution
"""
def timeraw_import_astropy_coordinates():
return """
from astropy import coordinates
"""
def timeraw_import_astropy_cosmology():
return """
from astropy import cosmology
"""
def timeraw_import_astropy_io():
return """
from astropy import io
"""
def timeraw_import_astropy_io_ascii():
return """
from astropy.io import ascii
"""
def timeraw_import_astropy_io_fits():
return """
from astropy.io import fits
"""
def timeraw_import_astropy_io_votable():
return """
from astropy.io import votable
"""
def timeraw_import_astropy_io_misc():
return """
from astropy.io import misc
"""
def timeraw_import_astropy_io_misc_hdf5():
return """
from astropy.io.misc import hdf5
"""
def timeraw_import_astropy_io_misc_yaml():
return """
from astropy.io.misc import yaml
"""
def timeraw_import_astropy_io_misc_asdf():
return """
from astropy.io.misc import asdf
"""
def timeraw_import_astropy_io_misc_pandas():
return """
from astropy.io.misc import pandas
"""
def timeraw_import_astropy_logger():
return """
from astropy import logger
"""
def timeraw_import_astropy_modeling():
return """
from astropy import modeling
"""
def timeraw_import_astropy_nddata():
return """
from astropy import nddata
"""
def timeraw_import_astropy_samp():
return """
from astropy import samp
"""
def timeraw_import_astropy_stats():
return """
from astropy import stats
"""
def timeraw_import_astropy_table():
return """
from astropy import table
"""
def timeraw_import_astropy_tests():
return """
from astropy import tests
"""
def timeraw_import_astropy_tests_runner():
return """
from astropy.tests import runner
"""
def timeraw_import_astropy_time():
return """
from astropy import time
"""
def timeraw_import_astropy_timeseries():
return """
from astropy import timeseries
"""
def timeraw_import_astropy_timeseries_io():
return """
from astropy.timeseries import io
"""
def timeraw_import_astropy_timeseries_periodograms():
return """
from astropy.timeseries import periodograms
"""
def timeraw_import_astropy_uncertainty():
return """
from astropy import uncertainty
"""
def timeraw_import_astropy_units():
return """
from astropy import units
"""
def timeraw_import_astropy_units_quantity():
return """
from astropy.units import quantity
"""
def timeraw_import_astropy_utils():
return """
from astropy import utils
"""
def timeraw_import_astropy_utils_iers():
return """
from astropy.utils import iers
"""
def timeraw_import_astropy_visualization():
return """
from astropy import visualization
"""
def timeraw_import_astropy_visualization_wcsaxes():
return """
from astropy.visualization import wcsaxes
"""
def timeraw_import_astropy_wcs():
return """
from astropy import wcs
"""
def timeraw_import_astropy_wcs_wcsapi():
return """
from astropy.wcs import wcsapi
"""
| 16.780822 | 67 | 0.674286 | 412 | 3,675 | 5.708738 | 0.131068 | 0.204507 | 0.244898 | 0.352041 | 0.394558 | 0.13733 | 0 | 0 | 0 | 0 | 0 | 0.001762 | 0.227755 | 3,675 | 218 | 68 | 16.857798 | 0.826991 | 0.030476 | 0 | 0.5 | 0 | 0 | 0.392747 | 0.005904 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | true | 0 | 0.5 | 0.25 | 1 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 1 | 1 | 0 | 0 | 8 |
ecb13195d020d83690fdf167b793815895335be3 | 12,985 | py | Python | app.py | HarrisonHemstreet/open-secrets | 61e21ffc7ce9bc57ef894f1ba58e7100f5f6a027 | [
"CC-BY-3.0"
] | null | null | null | app.py | HarrisonHemstreet/open-secrets | 61e21ffc7ce9bc57ef894f1ba58e7100f5f6a027 | [
"CC-BY-3.0"
] | null | null | null | app.py | HarrisonHemstreet/open-secrets | 61e21ffc7ce9bc57ef894f1ba58e7100f5f6a027 | [
"CC-BY-3.0"
] | null | null | null | import re
firstlastCID = "Name, CID,Bradley Byrne,N00035380,Martha Roby,N00030768,Mike D Rogers,N00024759,Robert B Aderholt,N00003028,Mo Brooks,N00030910,Gary Palmer,N00035691,Terri A Sewell,N00030622,Doug Jones,N00024817,Richard C Shelby,N00009920,Don Young,N00007999,Dan Sullivan,N00035774,Lisa Murkowski,N00026050,Tom O'Halleran,N00037515,Ann Kirkpatrick,N00029260,Raul M Grijalva,N00025284,Paul Gosar,N00030771,Andy Biggs,N00039293,David Schweikert,N00006460,Ruben Gallego,N00036097,Debbie Lesko,N00042056,Greg Stanton,N00041750,Mark Kelly,N00044223,Martha McSally,N00033982,Kyrsten Sinema,N00033983,Rick Crawford,N00030770,French Hill,N00035792,Steve Womack,N00031857,Bruce Westerman,N00035527,Tom Cotton,N00033363,John Boozman,N00013873,Doug LaMalfa,N00033987,Jared Huffman,N00033030,John Garamendi,N00030856,Tom McClintock,N00006863,Mike Thompson,N00007419,Doris Matsui,N00027459,Ami Bera,N00030717,Paul Cook,N00034224,Jerry McNerney,N00026926,Josh Harder,N00040853,Mark Desaulnier,N00030709,Nancy Pelosi,N00007360,Barbara Lee,N00008046,Jackie Speier,N00029649,Eric Swalwell,N00033508,Jim Costa,N00026341,Ro Khanna,N00026427,Anna Eshoo,N00007335,Zoe Lofgren,N00007479,Jimmy Panetta,N00038601,TJ Cox,N00028122,Devin Nunes,N00007248,Kevin McCarthy,N00028152,Salud Carbajal,N00037015,Katie Hill,N00040644,Mike Garcia,N00044298,Julia Brownley,N00034254,Judy Chu,N00030600,Adam Schiff,N00009585,Tony Cardenas,N00033373,Brad Sherman,N00006897,Pete Aguilar,N00033997,Grace Napolitano,N00006789,Ted Lieu,N00035825,Jimmy Gomez,N00040597,Norma Torres,N00036107,Raul Ruiz,N00033510,Karen Bass,N00031877,Linda Sanchez,N00024870,Gil Cisneros,N00041464,Lucille Roybal-Allard,N00006671,Mark Takano,N00006701,Ken Calvert,N00007099,Maxine Waters,N00006690,Nanette Barragan,N00037019,Katie Porter,N00040865,Lou Correa,N00037260,Alan Lowenthal,N00033274,Harley Rouda,N00040666,Mike Levin,N00040667,Duncan D. 
Hunter,N00029258,Juan Vargas,N00007021,Scott Peters,N00033591,Susan Davis,N00009604,Kamala Harris,N00036915,Dianne Feinstein,N00007364,Diana DeGette,N00006134,Joseph Neguse,N00041080,Scott Tipton,N00027509,Ken Buck,N00030829,Douglas L Lamborn,N00028133,Jason Crow,N00040876,Ed Perlmutter,N00027510,Michael Bennet,N00030608,Cory Gardner,N00030780,John B Larson,N00000575,Joe Courtney,N00024842,Rosa L DeLauro,N00000615,Jim Himes,N00029070,Jahana Hayes,N00043421,Christopher S Murphy,N00027566,Richard Blumenthal,N00031685,Lisa Blunt Rochester,N00038414,Tom Carper,N00012508,Chris Coons,N00031820,Matt Gaetz,N00039503,Neal Dunn,N00037442,Ted Yoho,N00033220,John Rutherford,N00039777,Al Lawson,N00030642,Michael Waltz,N00042403,Stephanie Murphy,N00040133,Bill Posey,N00029662,Darren Soto,N00037422,Val Demings,N00033449,Daniel Webster,N00026335,Gus Bilirakis,N00027462,Charlie Crist,N00002942,Kathy Castor,N00027514,Ross Spano,N00043319,Vernon Buchanan,N00027626,Greg Steube,N00042808,Brian Mast,N00037269,Francis Rooney,N00040007,Alcee L Hastings,N00002884,Lois J Frankel,N00002893,Ted Deutch,N00031317,Debbie Wasserman Schultz,N00026106,Frederica Wilson,N00030650,Mario Diaz-Balart,N00025337,Debbie Mucarsel-Powell,N00041561,Donna Shalala,N00042811,Rick Scott,N00043290,Marco Rubio,N00030612,Buddy Carter,N00035346,Sanford Bishop,N00002674,Drew Ferguson,N00039090,Hank Johnson,N00027848,John Lewis,N00002577,Lucy McBath,N00042813,Rob Woodall,N00032416,Austin Scott,N00032457,Doug Collins,N00033518,Jody Hice,N00032243,Barry Loudermilk,N00035347,Richard W Allen,N00033720,David Scott,N00024871,Tom Graves,N00030788,David Perdue,N00035516,Johnny Isakson,N00002593,Kelly Loeffler,N00046125,Ed Case,N00025882,Tulsi Gabbard,N00033281,Brian Schatz,N00028138,Mazie K Hirono,N00028139,Russ Fulcher,N00041335,Mike Simpson,N00006263,James E Risch,N00029441,Mike Crapo,N00006267,Bobby L Rush,N00004887,Robin Kelly,N00035215,Daniel Lipinski,N00027239,Jesus Garcia,N00042114,Mike Quigley,N00030581,Sean Casten,N00041338,Danny K Davis,N00004884,Raja Krishnamoorthi,N00033240,Jan Schakowsky,N00004724,Brad Schneider,N00033101,Bill Foster,N00029139,Mike Bost,N00035420,Rodney Davis,N00034784,Lauren A Underwood,N00041569,John Shimkus,N00004961,Adam Kinzinger,N00030667,Cheri Bustos,N00033390,Darin LaHood,N00037031,Dick Durbin,N00004981,Tammy Duckworth,N00027860,Pete Visclosky,N00003813,Jackie Walorski,N00031226,Jim Banks,N00037185,Jim Baird,N00041954,Susan Brooks,N00033495,Greg Pence,N00041956,Andre Carson,N00029513,Larry Bucshon,N00031227,Trey Hollingsworth,N00038429,Mike Braun,N00041731,Todd Young,N00030670,Abby Finkenauer,N00040888,David Loebsack,N00027741,Cindy Axne,N00041104,Steven A. 
King,N00025237,Chuck Grassley,N00001758,Joni Ernst,N00035483,Roger Marshall,N00037034,Steve Watkins,N00042126,Sharice Davids,N00042626,Ron Estes,N00040712,Pat Roberts,N00005285,Jerry Moran,N00005282,James Comer,N00038260,Brett Guthrie,N00029675,John A Yarmuth,N00028073,Thomas Massie,N00034041,Hal Rogers,N00003473,Andy Barr,N00031233,Mitch McConnell,N00003389,Rand Paul,N00030836,Steve Scalise,N00009660,Cedric Richmond,N00030184,Clay Higgins,N00039953,Mike Johnson,N00039106,Ralph Abraham,N00036633,Garret Graves,N00036135,Bill Cassidy,N00030245,John Kennedy,N00026823,Chellie Pingree,N00013817,Jared Golden,N00041668,Angus King,N00034580,Susan Collins,N00000491,Andy Harris,N00029147,Dutch Ruppersberger,N00025482,John Sarbanes,N00027751,Anthony Brown,N00036999,Steny H Hoyer,N00001821,David Trone,N00039122,Elijah E Cummings,N00001971,Kweisi Mfume,N00001799,Jamie Raskin,N00037036,Ben Cardin,N00001955,Chris Van Hollen,N00013820,Richard E Neal,N00000153,James P McGovern,N00000179,Lori Trahan,N00041808,Joe Kennedy III,N00034044,Katherine Clark,N00035278,Seth Moulton,N00035431,Ayanna Pressley,N00042581,Stephen F Lynch,N00013855,Bill Keating,N00031933,Elizabeth Warren,N00033492,Ed Markey,N00000270,John Bergman,N00039533,Bill Huizenga,N00030673,Justin Amash,N00031938,John Moolenaar,N00036275,Dan Kildee,N00033395,Fred Upton,N00004133,Tim Walberg,N00026368,Elissa Slotkin,N00041357,Andy Levin,N00042149,Paul Mitchell,N00036274,Haley Stevens,N00040915,Debbie Dingell,N00036149,Rashida Tlaib,N00042649,Brenda Lawrence,N00034068,Gary Peters,N00029277,Debbie Stabenow,N00004118,Jim Hagedorn,N00031390,Angie Craig,N00037039,Dean Phillips,N00041134,Betty McCollum,N00012942,Ilhan Omar,N00043581,Tom Emmer,N00035440,Collin Peterson,N00004558,Pete Stauber,N00041511,Tina Smith,N00042353,Amy Klobuchar,N00027500,Trent Kelly,N00037003,Bennie G Thompson,N00003288,Michael Patrick Guest,N00042458,Steven Palazzo,N00031958,Cindy Hyde-Smith,N00043298,Roger Wicker,N00003280,William L Clay Jr.,N00012460,Ann L Wagner,N00033106,Blaine Luetkemeyer,N00030026,Vicky Hartzler,N00031005,Emanuel Cleaver,N00026790,Sam Graves,N00013323,Billy Long,N00030676,Jason Smith,N00035282,Roy Blunt,N00005195,Josh Hawley,N00041620,Greg Gianforte,N00040733,Jon Tester,N00027605,Steven Daines,N00033054,Jeff Fortenberry,N00026631,Don Bacon,N00037049,Adrian Smith,N00027623,Deb Fischer,N00033443,Ben Sasse,N00035544,Dina Titus,N00030191,Mark Amodei,N00031177,Susie Lee,N00037247,Steven Horsford,N00033638,Jacky Rosen,N00038734,Catherine Cortez Masto,N00037161,Chris Pappas,N00042161,Ann Kuster,N00030875,Maggie Hassan,N00038397,Jeanne Shaheen,N00024790,Don Norcross,N00036154,Jeff Van Drew,N00042164,Andy Kim,N00041370,Chris Smith,N00009816,Josh Gottheimer,N00036944,Frank Pallone Jr.,N00000781,Tom Malinowski,N00041843,Albio Sires,N00027523,Bill Pascrell Jr.,N00000751,Donald M Payne Jr.,N00034639,Mikie Sherrill,N00041154,Bonnie Watson Coleman,N00036158,Robert Menendez,N00000699,Cory Booker,N00035267,Debra Haaland,N00040933,Xochitl Torres Small,N00042467,Ben Ray Lujan,N00029562,Martin Heinrich,N00029835,Tom Udall,N00006561,Lee Zeldin,N00029404,Pete King,N00001193,Tom Suozzi,N00038742,Kathleen Rice,N00035927,Gregory W Meeks,N00001171,Grace Meng,N00034547,Nydia M Velazquez,N00001102,Hakeem Jeffries,N00033640,Yvette D Clarke,N00026961,Jerrold Nadler,N00000939,Max Rose,N00041588,Carolyn B Maloney,N00000078,Adriano Espaillat,N00034549,Alexandria Ocasio-Cortez,N00041162,Jose E Serrano,N00001813,Eliot Engel,N00001003,Nita M Lowey,N00001024,Sean Patrick 
Maloney,N00034277,Antonio Delgado,N00040741,Paul Tonko,N00030196,Elise Stefanik,N00035523,Anthony Brindisi,N00041385,Tom Reed,N00030949,John Katko,N00035934,Joseph D Morelle,N00043207,Brian M Higgins,N00027060,Chris Jacobs,N00044575,Chris Collins,N00001285,Kirsten Gillibrand,N00027658,Charles E Schumer,N00001093,G K Butterfield,N00027035,George Holding,N00033399,Walter B Jones Jr.,N00002299,Greg Murphy,N00044027,David Price,N00002260,Virginia Foxx,N00026166,Mark Walker,N00035311,David Rouzer,N00033527,Richard Hudson,N00033630,Dan Bishop,N00044335,Patrick McHenry,N00026627,Mark Meadows,N00033631,Alma Adams,N00035451,Ted Budd,N00039551,Thom Tillis,N00035492,Richard Burr,N00002221,Kelly Armstrong,N00042868,John Hoeven,N00031688,Kevin Cramer,N00004614,Steve Chabot,N00003689,Brad Wenstrup,N00033310,Joyce Beatty,N00033904,Jim Jordan,N00027894,Robert E Latta,N00012233,Bill Johnson,N00032088,Bob Gibbs,N00031128,Warren Davidson,N00038767,Marcy Kaptur,N00003522,Michael R Turner,N00025175,Marcia L Fudge,N00030490,Troy Balderson,N00042194,Tim Ryan,N00025280,David P Joyce,N00035007,Steve Stivers,N00029574,Anthony Gonzalez,N00041690,Sherrod Brown,N00003535,Rob Portman,N00003682,Kevin Hern,N00040829,Markwayne Mullin,N00033410,Frank D Lucas,N00005559,Tom Cole,N00025726,Kendra Horn,N00041394,James Lankford,N00031129,James M Inhofe,N00005582,Suzanne Bonamici,N00033474,Greg Walden,N00007690,Earl Blumenauer,N00007727,Peter DeFazio,N00007781,Kurt Schrader,N00030071,Jeff Merkley,N00029303,Ron Wyden,N00007724,Brian Fitzpatrick,N00038779,Brendan Boyle,N00035307,Dwight Evans,N00038450,Madeleine Dean,N00042894,Mary Gay Scanlon,N00042706,Chrissy Houlahan,N00040949,Susan Wild,N00041997,Matt Cartwright,N00034128,Dan Meuser,N00029416,Scott Perry,N00034120,Lloyd Smucker,N00038781,Tom Marino,N00031777,Fred Keller,N00044065,John Joyce,N00043242,Guy Reschenthaler,N00041871,Glenn Thompson,N00029736,Mike Kelly,N00031647,Conor Lamb,N00041870,Mike Doyle,N00001373,Pat Toomey,N00001489,Bob Casey,N00027503,David Cicilline,N00032019,Jim Langevin,N00009724,Sheldon Whitehouse,N00027533,Jack Reed,N00000362,Joe Cunningham,N00041400,Joe Wilson,N00024809,Jeff Duncan,N00030752,William Timmons,N00042715,Ralph Norman,N00027783,James E Clyburn,N00002408,Tom Rice,N00033832,Tim Scott,N00031782,Lindsey Graham,N00009975,Dusty Johnson,N00040559,John Thune,N00004572,Mike Rounds,N00035187,Phil Roe,N00028463,Tim Burchett,N00041594,Chuck Fleischmann,N00030815,Scott Desjarlais,N00030957,Jim Cooper,N00003132,John Rose,N00041599,Mark Green,N00041873,David Kustoff,N00025445,Steve Cohen,N00003225,Marsha Blackburn,N00003105,Lamar Alexander,N00009888,Louis B Gohmert Jr.,N00026148,Dan Crenshaw,N00042224,Van Taylor,N00027709,John Ratcliffe,N00035972,Lance Gooden,N00042237,Ron Wright,N00042240,Lizzie Fletcher,N00041194,Kevin Brady,N00005883,Al Green,N00026686,Michael McCaul,N00026460,Mike Conaway,N00026041,Kay Granger,N00008799,Mac Thornberry,N00006052,Randy Weber,N00033539,Vicente Gonzalez,N00038809,Veronica Escobar,N00041702,Bill Flores,N00031545,Sheila Jackson Lee,N00005818,Jodey Arrington,N00038285,Joaquin Castro,N00033316,Chip Roy,N00042268,Pete Olson,N00029285,Will Hurd,N00031417,Kenny Marchant,N00026710,Roger Williams,N00030602,Michael Burgess,N00025219,Michael Cloud,N00041882,Henry Cuellar,N00024978,Sylvia Garcia,N00042282,Eddie Bernice Johnson,N00008122,John Carter,N00025095,Colin Allred,N00040989,Marc Veasey,N00033839,Filemon Vela,N00034349,Lloyd Doggett,N00006023,Brian Babin,N00005736,John Cornyn,N00024852,Ted Cruz,N00033085,Rob 
Bishop,N00025292,Chris Stewart,N00033932,John Curtis,N00041221,Ben McAdams,N00042013,Mitt Romney,N00000286,Mike Lee,N00031696,Peter Welch,N00000515,Bernie Sanders,N00000528,Patrick Leahy,N00009918,Rob Wittman,N00029459,Elaine Luria,N00042293,Bobby Scott,N00002147,Donald McEachin,N00039327,Denver Riggleman,N00043541,Ben Cline,N00042296,Abigail Spanberger,N00041418,Don Beyer,N00036018,Morgan Griffith,N00032029,Jennifer Wexton,N00041002,Gerry Connolly,N00029891,Tim Kaine,N00033177,Mark Warner,N00002097,Suzan DelBene,N00030693,Rick Larsen,N00009759,Jaime Herrera Beutler,N00031559,Dan Newhouse,N00036403,Cathy McMorris Rodgers,N00026314,Derek Kilmer,N00034453,Pramila Jayapal,N00038858,Kim Schrier,N00041606,Adam Smith,N00007833,Dennis Heck,N00031557,Maria Cantwell,N00007836,Patty Murray,N00007876,David McKinley,N00031681,Alex Mooney,N00033814,Carol Miller,N00041542,Joe Manchin,N00032838,Shelley Moore Capito,N00009771,Bryan Steil,N00043379,Mark Pocan,N00033549,Ron Kind,N00004403,Gwen Moore,N00026914,Jim Sensenbrenner,N00004291,Glenn S Grothman,N00036409,Sean P Duffy,N00030967,Tom Tiffany,N00045307,Mike Gallagher,N00039330,Tammy Baldwin,N00004367,Ron Johnson,N00032546,Liz Cheney,N00035504,John A Barrasso,N00006236,Mike Enzi,N00006249"
#re.sub('(,[^,]*),', r'\1 ', firstlastCID)
firstlastCID = re.sub(r'(.+?,.+?),\s*', r'\1\n', firstlastCID)
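# Illustrative check, not part of the original script: on a small assumed
# sample the substitution above splits the flat "name,CID,name,CID,..."
# string into one name/CID pair per line (the final pair, which has no
# trailing comma, is left untouched).
_sample = "Name, CID,Bradley Byrne,N00035380,Martha Roby,N00030768"
assert re.sub(r'(.+?,.+?),\s*', r'\1\n', _sample) == (
    "Name, CID\nBradley Byrne,N00035380\nMartha Roby,N00030768")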
print(firstlastCID) | 1,855 | 12,877 | 0.862996 | 1,727 | 12,985 | 6.488709 | 0.790967 | 0.001071 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.35176 | 0.048518 | 12,985 | 7 | 12,878 | 1,855 | 0.555241 | 0.003157 | 0 | 0 | 0 | 0.25 | 0.99382 | 0.731072 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.25 | null | null | 0.25 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
ecc66c43efc4a0f7bb838e898b9cd75cd71bd0c7 | 193 | py | Python | src/genui/compounds/extensions/generated/admin.py | Tontolda/genui | c5b7da7c5a99fc16d34878e2170145ac7c8e31c4 | [
"0BSD"
] | 15 | 2021-05-31T13:39:17.000Z | 2022-03-30T12:04:14.000Z | src/genui/compounds/extensions/generated/admin.py | martin-sicho/genui | ea7f1272030a13e8e253a7a9b6479ac6a78552d3 | [
"MIT"
] | 3 | 2021-04-08T22:02:22.000Z | 2022-03-16T09:10:20.000Z | src/genui/compounds/extensions/generated/admin.py | Tontolda/genui | c5b7da7c5a99fc16d34878e2170145ac7c8e31c4 | [
"0BSD"
] | 5 | 2021-03-04T11:00:54.000Z | 2021-12-18T22:59:22.000Z | from django.contrib import admin
from . import models
from genui.compounds.admin import MolSetAdmin
@admin.register(models.GeneratedMolSet)
class ChEMBLCompoundsAdmin(MolSetAdmin):
    pass
| 19.3 | 45 | 0.818653 | 22 | 193 | 7.181818 | 0.636364 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.119171 | 193 | 9 | 46 | 21.444444 | 0.929412 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.166667 | 0.5 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 7 |
01e7fbc6358789cc400ba0283c111afc19a413ce | 3,498 | py | Python | pytorch-toolbelt/losses/jaccard.py | dnbaker/pytorch-toolbelt | fbfd998abfe6cfdcd22e74ad6b647dce142e0895 | [
"MIT"
] | null | null | null | pytorch-toolbelt/losses/jaccard.py | dnbaker/pytorch-toolbelt | fbfd998abfe6cfdcd22e74ad6b647dce142e0895 | [
"MIT"
] | null | null | null | pytorch-toolbelt/losses/jaccard.py | dnbaker/pytorch-toolbelt | fbfd998abfe6cfdcd22e74ad6b647dce142e0895 | [
"MIT"
] | null | null | null | from typing import List
import torch
from torch import Tensor
from torch.nn.modules.loss import _Loss
from .functional import soft_jaccard_score
class BinaryJaccardLoss(_Loss):
    """Implementation of Jaccard loss for binary image segmentation task
    """

    def __init__(self, classes: List[int] = None, from_logits=True, weight=None, reduction='elementwise_mean'):
        super(BinaryJaccardLoss, self).__init__(reduction=reduction)
        self.classes = classes
        self.from_logits = from_logits
        self.weight = weight

    def forward(self, y_pred: Tensor, y_true: Tensor):
        """
        :param y_pred: NxCxHxW
        :param y_true: NxHxW
        :return: scalar
        """
        if self.from_logits:
            y_pred = y_pred.softmax(dim=1)

        n_classes = y_pred.size(1)
        smooth = 1e-3
        loss = torch.zeros(n_classes, dtype=torch.float, device=y_pred.device)

        if self.classes is None:
            classes = range(n_classes)
        else:
            classes = self.classes

        if self.weight is None:
            weights = [1] * n_classes
        else:
            weights = self.weight

        for class_index, weight in zip(classes, weights):
            jaccard_target = (y_true == class_index).float()
            jaccard_output = y_pred[:, class_index, ...]
            num_preds = jaccard_target.long().sum()
            if num_preds == 0:
                loss[class_index] = 0
            else:
                iou = soft_jaccard_score(jaccard_output, jaccard_target, from_logits=False, smooth=smooth)
                loss[class_index] = (1.0 - iou) * weight

        if self.reduction == 'elementwise_mean':
            return loss.mean()
        if self.reduction == 'sum':
            return loss.sum()
        return loss
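
# Illustrative usage sketch, not part of the original module: the loss expects
# per-class logits of shape NxCxHxW and integer class labels of shape NxHxW.
# The shapes and values below are assumptions made for demonstration only.
def _binary_jaccard_usage_example():
    criterion = BinaryJaccardLoss(from_logits=True)
    y_pred = torch.randn(4, 3, 32, 32)           # N x C x H x W logits
    y_true = torch.randint(0, 3, (4, 32, 32))    # N x H x W class indices
    return criterion(y_pred, y_true)
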
class MulticlassJaccardLoss(_Loss):
    """Implementation of Jaccard loss for multiclass (semantic) image segmentation task
    """

    def __init__(self, classes: List[int] = None, from_logits=True, weight=None, reduction='elementwise_mean'):
        super(MulticlassJaccardLoss, self).__init__(reduction=reduction)
        self.classes = classes
        self.from_logits = from_logits
        self.weight = weight

    def forward(self, y_pred: Tensor, y_true: Tensor):
        """
        :param y_pred: NxCxHxW
        :param y_true: NxHxW
        :return: scalar
        """
        if self.from_logits:
            y_pred = y_pred.softmax(dim=1)

        n_classes = y_pred.size(1)
        smooth = 1e-3
        loss = torch.zeros(n_classes, dtype=torch.float, device=y_pred.device)

        if self.classes is None:
            classes = range(n_classes)
        else:
            classes = self.classes

        if self.weight is None:
            weights = [1] * n_classes
        else:
            weights = self.weight

        for class_index, weight in zip(classes, weights):
            jaccard_target = (y_true == class_index).float()
            jaccard_output = y_pred[:, class_index, ...]
            num_preds = jaccard_target.long().sum()
            if num_preds == 0:
                loss[class_index] = 0
            else:
                iou = soft_jaccard_score(jaccard_output, jaccard_target, from_logits=False, smooth=smooth)
                loss[class_index] = (1.0 - iou) * weight

        if self.reduction == 'elementwise_mean':
            return loss.mean()
        if self.reduction == 'sum':
            return loss.sum()
        return loss
| 28.672131 | 111 | 0.591481 | 412 | 3,498 | 4.800971 | 0.182039 | 0.035389 | 0.048534 | 0.0273 | 0.883721 | 0.883721 | 0.849343 | 0.849343 | 0.849343 | 0.849343 | 0 | 0.007472 | 0.311321 | 3,498 | 121 | 112 | 28.909091 | 0.813616 | 0.078902 | 0 | 0.876712 | 0 | 0 | 0.022357 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.054795 | false | 0 | 0.068493 | 0 | 0.232877 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
bdb1b4a84ac09c991b1323c8af8c837b583cdee2 | 47,332 | py | Python | codegen/swagger_client/api/books_api.py | jordiyeh/safrs | eecfaf6d63ed44b9dc44b7b86c600db02989b512 | [
"MIT"
] | null | null | null | codegen/swagger_client/api/books_api.py | jordiyeh/safrs | eecfaf6d63ed44b9dc44b7b86c600db02989b512 | [
"MIT"
] | null | null | null | codegen/swagger_client/api/books_api.py | jordiyeh/safrs | eecfaf6d63ed44b9dc44b7b86c600db02989b512 | [
"MIT"
] | null | null | null | # coding: utf-8
"""
SAFRS Demo App
<a href=http://jsonapi.org>Json-API</a> compliant API built with https://github.com/thomaxxl/safrs <br/>- <a href=\"https://github.com/thomaxxl/safrs/blob/master/examples/demo_relationship.py\">Source code of this page</a> <br/> - Auto-generated swagger spec: <a href=swagger.json>swagger.json</a> <br/> - Petstore <a href=http://petstore.swagger.io/?url=http://thomaxxl.pythonanywhere.com/api/swagger.json>Swagger2 UI</a> # noqa: E501
OpenAPI spec version: 0.0
Generated by: https://github.com/swagger-api/swagger-codegen.git
"""
from __future__ import absolute_import
import re # noqa: F401
# python 2 and python 3 compatibility library
import six
from swagger_client.api_client import ApiClient
class BooksApi(object):
"""NOTE: This class is auto generated by the swagger code generator program.
Do not edit the class manually.
Ref: https://github.com/swagger-api/swagger-codegen
"""
def __init__(self, api_client=None):
if api_client is None:
api_client = ApiClient()
self.api_client = api_client
def createa_bookobject0(self, post_body, **kwargs): # noqa: E501
"""Create a Book object # noqa: E501
Returns a Book # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.createa_bookobject0(post_body, async=True)
>>> result = thread.get()
:param async bool
:param BookPOSTSample post_body: Book attributes (required)
:return: None
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async'):
return self.createa_bookobject0_with_http_info(post_body, **kwargs) # noqa: E501
else:
(data) = self.createa_bookobject0_with_http_info(post_body, **kwargs) # noqa: E501
return data
def createa_bookobject0_with_http_info(self, post_body, **kwargs): # noqa: E501
"""Create a Book object # noqa: E501
Returns a Book # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.createa_bookobject0_with_http_info(post_body, async=True)
>>> result = thread.get()
:param async bool
:param BookPOSTSample post_body: Book attributes (required)
:return: None
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['post_body'] # noqa: E501
all_params.append('async')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method createa_bookobject0" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'post_body' is set
if ('post_body' not in params or
params['post_body'] is None):
raise ValueError("Missing the required parameter `post_body` when calling `createa_bookobject0`") # noqa: E501
collection_formats = {}
path_params = {}
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
if 'post_body' in params:
body_params = params['post_body']
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# Authentication setting
auth_settings = [] # noqa: E501
return self.api_client.call_api(
'/Books/', 'POST',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type=None, # noqa: E501
auth_settings=auth_settings,
async=params.get('async'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
def createa_bookobject1(self, book_id, post_body, **kwargs): # noqa: E501
"""Create a Book object # noqa: E501
Returns a Book # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.createa_bookobject1(book_id, post_body, async=True)
>>> result = thread.get()
:param async bool
:param str book_id: (required)
:param BookPOSTSample post_body: Book attributes (required)
:return: None
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async'):
return self.createa_bookobject1_with_http_info(book_id, post_body, **kwargs) # noqa: E501
else:
(data) = self.createa_bookobject1_with_http_info(book_id, post_body, **kwargs) # noqa: E501
return data
def createa_bookobject1_with_http_info(self, book_id, post_body, **kwargs): # noqa: E501
"""Create a Book object # noqa: E501
Returns a Book # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.createa_bookobject1_with_http_info(book_id, post_body, async=True)
>>> result = thread.get()
:param async bool
:param str book_id: (required)
:param BookPOSTSample post_body: Book attributes (required)
:return: None
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['book_id', 'post_body'] # noqa: E501
all_params.append('async')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method createa_bookobject1" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'book_id' is set
if ('book_id' not in params or
params['book_id'] is None):
raise ValueError("Missing the required parameter `book_id` when calling `createa_bookobject1`") # noqa: E501
# verify the required parameter 'post_body' is set
if ('post_body' not in params or
params['post_body'] is None):
raise ValueError("Missing the required parameter `post_body` when calling `createa_bookobject1`") # noqa: E501
collection_formats = {}
path_params = {}
if 'book_id' in params:
path_params['BookId'] = params['book_id'] # noqa: E501
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
if 'post_body' in params:
body_params = params['post_body']
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# Authentication setting
auth_settings = [] # noqa: E501
return self.api_client.call_api(
'/Books/{BookId}/', 'POST',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type=None, # noqa: E501
auth_settings=auth_settings,
async=params.get('async'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
def deletea_bookobject0(self, book_id, **kwargs): # noqa: E501
"""Delete a Book object # noqa: E501
Delete a Book object # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.deletea_bookobject0(book_id, async=True)
>>> result = thread.get()
:param async bool
:param str book_id: (required)
:return: None
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async'):
return self.deletea_bookobject0_with_http_info(book_id, **kwargs) # noqa: E501
else:
(data) = self.deletea_bookobject0_with_http_info(book_id, **kwargs) # noqa: E501
return data
def deletea_bookobject0_with_http_info(self, book_id, **kwargs): # noqa: E501
"""Delete a Book object # noqa: E501
Delete a Book object # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.deletea_bookobject0_with_http_info(book_id, async=True)
>>> result = thread.get()
:param async bool
:param str book_id: (required)
:return: None
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['book_id'] # noqa: E501
all_params.append('async')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method deletea_bookobject0" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'book_id' is set
if ('book_id' not in params or
params['book_id'] is None):
raise ValueError("Missing the required parameter `book_id` when calling `deletea_bookobject0`") # noqa: E501
collection_formats = {}
path_params = {}
if 'book_id' in params:
path_params['BookId'] = params['book_id'] # noqa: E501
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# Authentication setting
auth_settings = [] # noqa: E501
return self.api_client.call_api(
'/Books/{BookId}/', 'DELETE',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type=None, # noqa: E501
auth_settings=auth_settings,
async=params.get('async'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
def deletefrom_bookuser0(self, book_id, user_id, **kwargs): # noqa: E501
"""Delete from Book user # noqa: E501
Delete a User object from the user relation on Book # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.deletefrom_bookuser0(book_id, user_id, async=True)
>>> result = thread.get()
:param async bool
:param str book_id: Book item (required)
:param str user_id: user item (required)
:return: None
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async'):
return self.deletefrom_bookuser0_with_http_info(book_id, user_id, **kwargs) # noqa: E501
else:
(data) = self.deletefrom_bookuser0_with_http_info(book_id, user_id, **kwargs) # noqa: E501
return data
def deletefrom_bookuser0_with_http_info(self, book_id, user_id, **kwargs): # noqa: E501
"""Delete from Book user # noqa: E501
Delete a User object from the user relation on Book # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.deletefrom_bookuser0_with_http_info(book_id, user_id, async=True)
>>> result = thread.get()
:param async bool
:param str book_id: Book item (required)
:param str user_id: user item (required)
:return: None
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['book_id', 'user_id'] # noqa: E501
all_params.append('async')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method deletefrom_bookuser0" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'book_id' is set
if ('book_id' not in params or
params['book_id'] is None):
raise ValueError("Missing the required parameter `book_id` when calling `deletefrom_bookuser0`") # noqa: E501
# verify the required parameter 'user_id' is set
if ('user_id' not in params or
params['user_id'] is None):
raise ValueError("Missing the required parameter `user_id` when calling `deletefrom_bookuser0`") # noqa: E501
collection_formats = {}
path_params = {}
if 'book_id' in params:
path_params['BookId'] = params['book_id'] # noqa: E501
if 'user_id' in params:
path_params['UserId'] = params['user_id'] # noqa: E501
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# Authentication setting
auth_settings = [] # noqa: E501
return self.api_client.call_api(
'/Books/{BookId}/user/{UserId}', 'DELETE',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type=None, # noqa: E501
auth_settings=auth_settings,
async=params.get('async'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
def invoke_bookgetlist0(self, post_book_get_list, **kwargs): # noqa: E501
"""Invoke Book.get_list # noqa: E501
Invoke Book.get_list # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.invoke_bookgetlist0(post_book_get_list, async=True)
>>> result = thread.get()
:param async bool
:param PostBookGetList post_book_get_list: Retrieve a list of objects with the ids in id_list. (required)
:return: None
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async'):
return self.invoke_bookgetlist0_with_http_info(post_book_get_list, **kwargs) # noqa: E501
else:
(data) = self.invoke_bookgetlist0_with_http_info(post_book_get_list, **kwargs) # noqa: E501
return data
def invoke_bookgetlist0_with_http_info(self, post_book_get_list, **kwargs): # noqa: E501
"""Invoke Book.get_list # noqa: E501
Invoke Book.get_list # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.invoke_bookgetlist0_with_http_info(post_book_get_list, async=True)
>>> result = thread.get()
:param async bool
:param PostBookGetList post_book_get_list: Retrieve a list of objects with the ids in id_list. (required)
:return: None
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['post_book_get_list'] # noqa: E501
all_params.append('async')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method invoke_bookgetlist0" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'post_book_get_list' is set
if ('post_book_get_list' not in params or
params['post_book_get_list'] is None):
raise ValueError("Missing the required parameter `post_book_get_list` when calling `invoke_bookgetlist0`") # noqa: E501
collection_formats = {}
path_params = {}
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
if 'post_book_get_list' in params:
body_params = params['post_book_get_list']
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# Authentication setting
auth_settings = [] # noqa: E501
return self.api_client.call_api(
'/Books/get_list', 'POST',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type=None, # noqa: E501
auth_settings=auth_settings,
async=params.get('async'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
def retrievea_bookobject0(self, **kwargs): # noqa: E501
"""Retrieve a Book object # noqa: E501
Returns a Book # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.retrievea_bookobject0(async=True)
>>> result = thread.get()
:param async bool
:param int page_offset: Page offset
:param int page_limit: max number of items
:param str include: related objects to include
:param str fields_books: Fields to be selected (csv)
:param str sort: Sort order
:param str filter_name: name attribute filter (csv)
:param str filter_user_id: user_id attribute filter (csv)
:return: None
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async'):
return self.retrievea_bookobject0_with_http_info(**kwargs) # noqa: E501
else:
(data) = self.retrievea_bookobject0_with_http_info(**kwargs) # noqa: E501
return data
def retrievea_bookobject0_with_http_info(self, **kwargs): # noqa: E501
"""Retrieve a Book object # noqa: E501
Returns a Book # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.retrievea_bookobject0_with_http_info(async=True)
>>> result = thread.get()
:param async bool
:param int page_offset: Page offset
:param int page_limit: max number of items
:param str include: related objects to include
:param str fields_books: Fields to be selected (csv)
:param str sort: Sort order
:param str filter_name: name attribute filter (csv)
:param str filter_user_id: user_id attribute filter (csv)
:return: None
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['page_offset', 'page_limit', 'include', 'fields_books', 'sort', 'filter_name', 'filter_user_id'] # noqa: E501
all_params.append('async')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method retrievea_bookobject0" % key
)
params[key] = val
del params['kwargs']
collection_formats = {}
path_params = {}
query_params = []
if 'page_offset' in params:
query_params.append(('page[offset]', params['page_offset'])) # noqa: E501
if 'page_limit' in params:
query_params.append(('page[limit]', params['page_limit'])) # noqa: E501
if 'include' in params:
query_params.append(('include', params['include'])) # noqa: E501
if 'fields_books' in params:
query_params.append(('fields[Books]', params['fields_books'])) # noqa: E501
if 'sort' in params:
query_params.append(('sort', params['sort'])) # noqa: E501
if 'filter_name' in params:
query_params.append(('filter[name]', params['filter_name'])) # noqa: E501
if 'filter_user_id' in params:
query_params.append(('filter[user_id]', params['filter_user_id'])) # noqa: E501
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# Authentication setting
auth_settings = [] # noqa: E501
return self.api_client.call_api(
'/Books/', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type=None, # noqa: E501
auth_settings=auth_settings,
async=params.get('async'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
def retrievea_bookobject1(self, book_id, **kwargs): # noqa: E501
"""Retrieve a Book object # noqa: E501
Returns a Book # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.retrievea_bookobject1(book_id, async=True)
>>> result = thread.get()
:param async bool
:param str book_id: (required)
:param int page_offset: Page offset
:param int page_limit: max number of items
:param str include: related objects to include
:param str fields_books: Fields to be selected (csv)
:param str sort: Sort order
:param str filter_name: name attribute filter (csv)
:param str filter_user_id: user_id attribute filter (csv)
:return: None
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async'):
return self.retrievea_bookobject1_with_http_info(book_id, **kwargs) # noqa: E501
else:
(data) = self.retrievea_bookobject1_with_http_info(book_id, **kwargs) # noqa: E501
return data
def retrievea_bookobject1_with_http_info(self, book_id, **kwargs): # noqa: E501
"""Retrieve a Book object # noqa: E501
Returns a Book # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.retrievea_bookobject1_with_http_info(book_id, async=True)
>>> result = thread.get()
:param async bool
:param str book_id: (required)
:param int page_offset: Page offset
:param int page_limit: max number of items
:param str include: related objects to include
:param str fields_books: Fields to be selected (csv)
:param str sort: Sort order
:param str filter_name: name attribute filter (csv)
:param str filter_user_id: user_id attribute filter (csv)
:return: None
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['book_id', 'page_offset', 'page_limit', 'include', 'fields_books', 'sort', 'filter_name', 'filter_user_id'] # noqa: E501
all_params.append('async')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method retrievea_bookobject1" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'book_id' is set
if ('book_id' not in params or
params['book_id'] is None):
raise ValueError("Missing the required parameter `book_id` when calling `retrievea_bookobject1`") # noqa: E501
collection_formats = {}
path_params = {}
if 'book_id' in params:
path_params['BookId'] = params['book_id'] # noqa: E501
query_params = []
if 'page_offset' in params:
query_params.append(('page[offset]', params['page_offset'])) # noqa: E501
if 'page_limit' in params:
query_params.append(('page[limit]', params['page_limit'])) # noqa: E501
if 'include' in params:
query_params.append(('include', params['include'])) # noqa: E501
if 'fields_books' in params:
query_params.append(('fields[Books]', params['fields_books'])) # noqa: E501
if 'sort' in params:
query_params.append(('sort', params['sort'])) # noqa: E501
if 'filter_name' in params:
query_params.append(('filter[name]', params['filter_name'])) # noqa: E501
if 'filter_user_id' in params:
query_params.append(('filter[user_id]', params['filter_user_id'])) # noqa: E501
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# Authentication setting
auth_settings = [] # noqa: E501
return self.api_client.call_api(
'/Books/{BookId}/', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type=None, # noqa: E501
auth_settings=auth_settings,
async=params.get('async'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
def retrieveauserobject0(self, book_id, **kwargs): # noqa: E501
"""Retrieve a user object # noqa: E501
Returns Book user ids # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.retrieveauserobject0(book_id, async=True)
>>> result = thread.get()
:param async bool
:param str book_id: Book item (required)
:param int page_offset: Page offset
:param int page_limit: max number of items
:param str include: related objects to include
:param str fields_books: Fields to be selected (csv)
:param str sort: Sort order
:param str filter_name: name attribute filter (csv)
:param str filter_user_id: user_id attribute filter (csv)
:return: None
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async'):
return self.retrieveauserobject0_with_http_info(book_id, **kwargs) # noqa: E501
else:
(data) = self.retrieveauserobject0_with_http_info(book_id, **kwargs) # noqa: E501
return data
def retrieveauserobject0_with_http_info(self, book_id, **kwargs): # noqa: E501
"""Retrieve a user object # noqa: E501
Returns Book user ids # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.retrieveauserobject0_with_http_info(book_id, async=True)
>>> result = thread.get()
:param async bool
:param str book_id: Book item (required)
:param int page_offset: Page offset
:param int page_limit: max number of items
:param str include: related objects to include
:param str fields_books: Fields to be selected (csv)
:param str sort: Sort order
:param str filter_name: name attribute filter (csv)
:param str filter_user_id: user_id attribute filter (csv)
:return: None
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['book_id', 'page_offset', 'page_limit', 'include', 'fields_books', 'sort', 'filter_name', 'filter_user_id'] # noqa: E501
all_params.append('async')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method retrieveauserobject0" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'book_id' is set
if ('book_id' not in params or
params['book_id'] is None):
raise ValueError("Missing the required parameter `book_id` when calling `retrieveauserobject0`") # noqa: E501
collection_formats = {}
path_params = {}
if 'book_id' in params:
path_params['BookId'] = params['book_id'] # noqa: E501
query_params = []
if 'page_offset' in params:
query_params.append(('page[offset]', params['page_offset'])) # noqa: E501
if 'page_limit' in params:
query_params.append(('page[limit]', params['page_limit'])) # noqa: E501
if 'include' in params:
query_params.append(('include', params['include'])) # noqa: E501
if 'fields_books' in params:
query_params.append(('fields[Books]', params['fields_books'])) # noqa: E501
if 'sort' in params:
query_params.append(('sort', params['sort'])) # noqa: E501
if 'filter_name' in params:
query_params.append(('filter[name]', params['filter_name'])) # noqa: E501
if 'filter_user_id' in params:
query_params.append(('filter[user_id]', params['filter_user_id'])) # noqa: E501
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# Authentication setting
auth_settings = [] # noqa: E501
return self.api_client.call_api(
'/Books/{BookId}/user', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type=None, # noqa: E501
auth_settings=auth_settings,
async=params.get('async'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
def retrieveauserobject1(self, book_id, user_id, **kwargs): # noqa: E501
"""Retrieve a user object # noqa: E501
Returns Book user ids # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.retrieveauserobject1(book_id, user_id, async=True)
>>> result = thread.get()
:param async bool
:param str book_id: Book item (required)
:param str user_id: user item (required)
:return: None
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async'):
return self.retrieveauserobject1_with_http_info(book_id, user_id, **kwargs) # noqa: E501
else:
(data) = self.retrieveauserobject1_with_http_info(book_id, user_id, **kwargs) # noqa: E501
return data
def retrieveauserobject1_with_http_info(self, book_id, user_id, **kwargs): # noqa: E501
"""Retrieve a user object # noqa: E501
Returns Book user ids # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.retrieveauserobject1_with_http_info(book_id, user_id, async=True)
>>> result = thread.get()
:param async bool
:param str book_id: Book item (required)
:param str user_id: user item (required)
:return: None
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['book_id', 'user_id'] # noqa: E501
all_params.append('async')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method retrieveauserobject1" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'book_id' is set
if ('book_id' not in params or
params['book_id'] is None):
raise ValueError("Missing the required parameter `book_id` when calling `retrieveauserobject1`") # noqa: E501
# verify the required parameter 'user_id' is set
if ('user_id' not in params or
params['user_id'] is None):
raise ValueError("Missing the required parameter `user_id` when calling `retrieveauserobject1`") # noqa: E501
collection_formats = {}
path_params = {}
if 'book_id' in params:
path_params['BookId'] = params['book_id'] # noqa: E501
if 'user_id' in params:
path_params['UserId'] = params['user_id'] # noqa: E501
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# Authentication setting
auth_settings = [] # noqa: E501
return self.api_client.call_api(
'/Books/{BookId}/user/{UserId}', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type=None, # noqa: E501
auth_settings=auth_settings,
async=params.get('async'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
def updatea_bookobject0(self, book_id, post_body, **kwargs): # noqa: E501
"""Update a Book object # noqa: E501
Returns a Book # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.updatea_bookobject0(book_id, post_body, async=True)
>>> result = thread.get()
:param async bool
:param str book_id: (required)
:param BookPOSTSample1 post_body: Book attributes (required)
:return: None
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async'):
return self.updatea_bookobject0_with_http_info(book_id, post_body, **kwargs) # noqa: E501
else:
(data) = self.updatea_bookobject0_with_http_info(book_id, post_body, **kwargs) # noqa: E501
return data
def updatea_bookobject0_with_http_info(self, book_id, post_body, **kwargs): # noqa: E501
"""Update a Book object # noqa: E501
Returns a Book # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.updatea_bookobject0_with_http_info(book_id, post_body, async=True)
>>> result = thread.get()
:param async bool
:param str book_id: (required)
:param BookPOSTSample1 post_body: Book attributes (required)
:return: None
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['book_id', 'post_body'] # noqa: E501
all_params.append('async')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method updatea_bookobject0" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'book_id' is set
if ('book_id' not in params or
params['book_id'] is None):
raise ValueError("Missing the required parameter `book_id` when calling `updatea_bookobject0`") # noqa: E501
# verify the required parameter 'post_body' is set
if ('post_body' not in params or
params['post_body'] is None):
raise ValueError("Missing the required parameter `post_body` when calling `updatea_bookobject0`") # noqa: E501
collection_formats = {}
path_params = {}
if 'book_id' in params:
path_params['BookId'] = params['book_id'] # noqa: E501
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
if 'post_body' in params:
body_params = params['post_body']
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# Authentication setting
auth_settings = [] # noqa: E501
return self.api_client.call_api(
'/Books/{BookId}/', 'PATCH',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type=None, # noqa: E501
auth_settings=auth_settings,
async=params.get('async'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
def updateuser0(self, book_id, user_body, **kwargs): # noqa: E501
"""Update user # noqa: E501
Add a User object to the user relation on Book # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.updateuser0(book_id, user_body, async=True)
>>> result = thread.get()
:param async bool
:param str book_id: Book item (required)
:param UserRelationship user_body: user POST model (required)
:return: None
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async'):
return self.updateuser0_with_http_info(book_id, user_body, **kwargs) # noqa: E501
else:
(data) = self.updateuser0_with_http_info(book_id, user_body, **kwargs) # noqa: E501
return data
def updateuser0_with_http_info(self, book_id, user_body, **kwargs): # noqa: E501
"""Update user # noqa: E501
Add a User object to the user relation on Book # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async=True
>>> thread = api.updateuser0_with_http_info(book_id, user_body, async=True)
>>> result = thread.get()
:param async bool
:param str book_id: Book item (required)
:param UserRelationship user_body: user POST model (required)
:return: None
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['book_id', 'user_body'] # noqa: E501
all_params.append('async')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method updateuser0" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'book_id' is set
if ('book_id' not in params or
params['book_id'] is None):
raise ValueError("Missing the required parameter `book_id` when calling `updateuser0`") # noqa: E501
# verify the required parameter 'user_body' is set
if ('user_body' not in params or
params['user_body'] is None):
raise ValueError("Missing the required parameter `user_body` when calling `updateuser0`") # noqa: E501
collection_formats = {}
path_params = {}
if 'book_id' in params:
path_params['BookId'] = params['book_id'] # noqa: E501
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
if 'user_body' in params:
body_params = params['user_body']
# Authentication setting
auth_settings = [] # noqa: E501
return self.api_client.call_api(
'/Books/{BookId}/user', 'POST',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type=None, # noqa: E501
auth_settings=auth_settings,
async=params.get('async'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
| 40.111864 | 479 | 0.600947 | 5,493 | 47,332 | 4.945567 | 0.038959 | 0.051535 | 0.022675 | 0.029154 | 0.960171 | 0.949901 | 0.941176 | 0.929655 | 0.927704 | 0.918354 | 0 | 0.019143 | 0.306896 | 47,332 | 1,179 | 480 | 40.145886 | 0.808937 | 0.055396 | 0 | 0.820472 | 0 | 0 | 0.192285 | 0.038436 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.006299 | null | null | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
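The generated BooksApi client above is normally driven through the package's ApiClient. Below is a minimal, hypothetical usage sketch (not part of the repository): it assumes the codegen/swagger_client package is importable and the SAFRS demo server is reachable, and the JSON:API request body is illustrative rather than copied from the swagger spec.

# Hypothetical driver for the generated BooksApi client above.
from swagger_client.api_client import ApiClient
from swagger_client.api.books_api import BooksApi

api = BooksApi(ApiClient())

# POST /Books/ -- the body shape is illustrative; the spec's BookPOSTSample model defines the real one.
api.createa_bookobject0({"data": {"type": "Books", "attributes": {"name": "demo"}}})

# GET /Books/ with JSON:API paging and filter query parameters. The generated methods
# return None because the spec declares no response schema (response_type=None);
# pass _preload_content=False to get the raw HTTP response instead.
api.retrievea_bookobject0(page_offset=0, page_limit=10, filter_name="demo")

# Passing async=True would return a worker thread whose .get() blocks for the result,
# but that spelling only parses on Python <= 3.6, before `async` became a reserved keyword.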
bdbb0d9dbee599b57c810e01d6ad9e000403d9b0 | 208 | py | Python | stytra/stimulation/stimuli/__init__.py | mark-dawn/stytra | be1d5be0a44aeb685d475240d056ef7adf60ed06 | [
"MIT"
] | null | null | null | stytra/stimulation/stimuli/__init__.py | mark-dawn/stytra | be1d5be0a44aeb685d475240d056ef7adf60ed06 | [
"MIT"
] | null | null | null | stytra/stimulation/stimuli/__init__.py | mark-dawn/stytra | be1d5be0a44aeb685d475240d056ef7adf60ed06 | [
"MIT"
] | null | null | null | from stytra.stimulation.stimuli.generic_stimuli import *
from stytra.stimulation.stimuli.visual import *
from stytra.stimulation.stimuli.closed_loop import *
from stytra.stimulation.stimuli.external import *
| 41.6 | 56 | 0.846154 | 26 | 208 | 6.692308 | 0.384615 | 0.229885 | 0.482759 | 0.643678 | 0.586207 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.076923 | 208 | 4 | 57 | 52 | 0.90625 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 8 |
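The stytra package __init__ above only re-exports the stimulus submodules, so downstream scripts can import stimulus classes from the package root instead of from the individual modules. A one-line, hypothetical illustration (the class name is a placeholder for whatever the submodules actually define):

# Hypothetical: Pause stands in for any class pulled in by the wildcard imports above.
from stytra.stimulation.stimuli import Pause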
bdd3326e76c845265cc2d1b9a3f6484bf380e071 | 103 | py | Python | Python studying/Codes of examples/3.1.1-listprint.py | BoyangSheng/Skill-studing | 974c37365fff72e2c7b1e27ae52cb267c7070c9e | [
"Apache-2.0"
] | 1 | 2020-12-09T07:58:01.000Z | 2020-12-09T07:58:01.000Z | Python studying/Codes of examples/3.1.1-listprint.py | BoyangSheng/Skill-studying | 974c37365fff72e2c7b1e27ae52cb267c7070c9e | [
"Apache-2.0"
] | null | null | null | Python studying/Codes of examples/3.1.1-listprint.py | BoyangSheng/Skill-studying | 974c37365fff72e2c7b1e27ae52cb267c7070c9e | [
"Apache-2.0"
] | null | null | null | List_A = [2,3,5,7,11,13]
print(List_A[1:3])
print(List_A[0:6:3])
print(List_A[::2])
print(List_A[::-1]) | 20.6 | 24 | 0.631068 | 27 | 103 | 2.222222 | 0.444444 | 0.416667 | 0.666667 | 0.366667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.154639 | 0.058252 | 103 | 5 | 25 | 20.6 | 0.463918 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.8 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 7 |
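As a quick, runnable check of what the slice expressions in the snippet above evaluate to (a minimal sketch; the list is the one shown in the file):

# Slicing semantics of the example above, verified with asserts.
List_A = [2, 3, 5, 7, 11, 13]
assert List_A[1:3] == [3, 5]                 # elements at indices 1 and 2
assert List_A[0:6:3] == [2, 7]               # every third element from index 0 up to (not including) 6
assert List_A[::2] == [2, 5, 11]             # every second element
assert List_A[::-1] == [13, 11, 7, 5, 3, 2]  # a reversed copy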
da4b55f5c20851ab61b0d8abba60e6b058c26ab1 | 25,988 | py | Python | Pyfiles/circuits_n1.py | rickyHong/Quantum_Machine_Learning_Express | ba5f57b3544b1c73b49eb251800459fc2394df2f | [
"MIT"
] | 14 | 2021-03-04T22:55:24.000Z | 2022-03-31T12:11:35.000Z | Pyfiles/circuits_n1.py | rickyHong/Quantum_Machine_Learning_Express | ba5f57b3544b1c73b49eb251800459fc2394df2f | [
"MIT"
] | 15 | 2021-03-08T15:39:53.000Z | 2021-08-19T18:10:12.000Z | Pyfiles/circuits_n1.py | rickyHong/Quantum_Machine_Learning_Express | ba5f57b3544b1c73b49eb251800459fc2394df2f | [
"MIT"
] | 9 | 2021-06-10T23:26:53.000Z | 2022-02-21T16:31:09.000Z | from qiskit import QuantumRegister, ClassicalRegister, QuantumCircuit
from math import pi
def circuit1(qc,theta,L,repeat):
#circuit 1
#theta is the list of parameters
#theta length is 8L (doubled when repeat != 0)
#L is the number of repetitions (layers)
# repeat != 0 appends the conjugate of the first block to the circuit, for expressibility
# 0: No, 1: Repeat
count=0
for l in range(L):
for i in range(4):
qc.rx(theta[count],i)
count=count+1
for i in range(4):
qc.rz(theta[count],i)
count=count+1
if repeat!=0:
for l in range(L):
for i in range(4):
qc.rz(theta[count],i)
count=count+1
for i in range(4):
qc.rx(theta[count],i)
count=count+1
return qc
def circuit2(qc,theta,L,repeat):
#circuit 2
#theta is the list of parameters
#theta length is 8L (doubled when repeat != 0)
#L is the number of repetitions (layers)
# repeat != 0 appends the conjugate of the first block to the circuit, for expressibility
# 0: No, 1: Repeat
count=0
for l in range(L):
for i in range(4):
qc.rx(theta[count],i)
count=count+1
for i in range(4):
qc.rz(theta[count],i)
count=count+1
qc.cx(3,2)
qc.cx(2,1)
qc.cx(1,0)
if repeat!=0:
for l in range(L):
qc.cx(1,0)
qc.cx(2,1)
qc.cx(3,2)
for i in range(4):
qc.rz(theta[count],i)
count=count+1
for i in range(4):
qc.rx(theta[count],i)
count=count+1
return qc
def circuit3(qc,theta,L,repeat):
#circuit 3
#theta is the list of parameters
#theta length is (11)L (doubled when repeat != 0)
#L is the number of repetitions (layers)
# repeat != 0 appends the conjugate of the first block to the circuit, for expressibility
# 0: No, 1: Repeat
count=0
for l in range(L):
for i in range(4):
qc.rx(theta[count],i)
count=count+1
for i in range(4):
qc.rz(theta[count],i)
count=count+1
qc.crz(theta[count],3,2)
count=count+1
qc.crz(theta[count],2,1)
count=count+1
qc.crz(theta[count],1,0)
count=count+1
if repeat!=0:
for l in range(L):
qc.crz(theta[count],1,0)
count=count+1
qc.crz(theta[count],2,1)
count=count+1
qc.crz(theta[count],3,2)
count=count+1
for i in range(4):
qc.rz(theta[count],i)
count=count+1
for i in range(4):
qc.rx(theta[count],i)
count=count+1
return qc
def circuit4(qc,theta,L,repeat):
#circuit 4
#theta is the list of parameters
#theta length is (11)L (doubled when repeat != 0)
#L is the number of repetitions (layers)
# repeat != 0 appends the conjugate of the first block to the circuit, for expressibility
# 0: No, 1: Repeat
count=0
for l in range(L):
for i in range(4):
qc.rx(theta[count],i)
count=count+1
for i in range(4):
qc.rz(theta[count],i)
count=count+1
qc.crx(theta[count],3,2)
count=count+1
qc.crx(theta[count],2,1)
count=count+1
qc.crx(theta[count],1,0)
count=count+1
if repeat!=0:
for l in range(L):
qc.crx(theta[count],1,0)
count=count+1
qc.crx(theta[count],2,1)
count=count+1
qc.crx(theta[count],3,2)
count=count+1
for i in range(4):
qc.rz(theta[count],i)
count=count+1
for i in range(4):
qc.rx(theta[count],i)
count=count+1
return qc
def circuit5(qc,theta,L,repeat):
#circuit 5
#theta is the list of parameters
#theta length is (28)L (doubled when repeat != 0)
#L is the number of repetitions (layers)
# repeat != 0 appends the conjugate of the first block to the circuit, for expressibility
# 0: No, 1: Repeat
count=0
for l in range(L):
for i in range(4):
qc.rx(theta[count],i)
count=count+1
for i in range(4):
qc.rz(theta[count],i)
count=count+1
for j in range(4):
for i in range(4):
if i!=j:
qc.crz(theta[count],3-j,3-i)
count=count+1
for i in range(4):
qc.rx(theta[count],i)
count=count+1
for i in range(4):
qc.rz(theta[count],i)
count=count+1
if repeat!=0:
for l in range(L):
for i in range(4):
qc.rz(theta[count],i)
count=count+1
for i in range(4):
qc.rx(theta[count],i)
count=count+1
for j in range(4):
for i in range(4):
if i!=j:
qc.crz(theta[count],j,i)
count=count+1
for i in range(4):
qc.rz(theta[count],i)
count=count+1
for i in range(4):
qc.rx(theta[count],i)
count=count+1
return qc
def circuit6(qc,theta,L,repeat):
#circuit 6
#theta is the list of parameters
#theta length is (28)L (doubled when repeat != 0)
#L is the number of repetitions (layers)
# repeat != 0 appends the conjugate of the first block to the circuit, for expressibility
# 0: No, 1: Repeat
count=0
for l in range(L):
for i in range(4):
qc.rx(theta[count],i)
count=count+1
for i in range(4):
qc.rz(theta[count],i)
count=count+1
for j in range(4):
for i in range(4):
if i!=j:
qc.crx(theta[count],3-j,3-i)
count=count+1
for i in range(4):
qc.rx(theta[count],i)
count=count+1
for i in range(4):
qc.rz(theta[count],i)
count=count+1
if repeat!=0:
for l in range(L):
for i in range(4):
qc.rz(theta[count],i)
count=count+1
for i in range(4):
qc.rx(theta[count],i)
count=count+1
for j in range(4):
for i in range(4):
if i!=j:
qc.crx(theta[count],j,i)
count=count+1
for i in range(4):
qc.rz(theta[count],i)
count=count+1
for i in range(4):
qc.rx(theta[count],i)
count=count+1
return qc
def circuit7(qc,theta,L,repeat):
#circuit 7
#theta is the list of parameters
#theta length is (19)L (doubled when repeat != 0)
#L is the number of repetitions (layers)
# repeat != 0 appends the conjugate of the first block to the circuit, for expressibility
# 0: No, 1: Repeat
count=0
for l in range(L):
for i in range(4):
qc.rx(theta[count],i)
count=count+1
for i in range(4):
qc.rz(theta[count],i)
count=count+1
qc.crz(theta[count],1,0)
count=count+1
qc.crz(theta[count],3,2)
count=count+1
for i in range(4):
qc.rx(theta[count],i)
count=count+1
for i in range(4):
qc.rz(theta[count],i)
count=count+1
qc.crz(theta[count],2,1)
count=count+1
if repeat!=0:
for l in range(L):
qc.crz(theta[count],2,1)
count=count+1
for i in range(4):
qc.rz(theta[count],i)
count=count+1
for i in range(4):
qc.rx(theta[count],i)
count=count+1
qc.crz(theta[count],3,2)
count=count+1
qc.crz(theta[count],1,0)
count=count+1
for i in range(4):
qc.rz(theta[count],i)
count=count+1
for i in range(4):
qc.rx(theta[count],i)
count=count+1
return qc
def circuit8(qc,theta,L,repeat):
#circuit 8
#theta is the list of parameters
#theta length is (19)L (doubled when repeat != 0)
#L is the number of repetitions (layers)
# repeat != 0 appends the conjugate of the first block to the circuit, for expressibility
# 0: No, 1: Repeat
count=0
for l in range(L):
for i in range(4):
qc.rx(theta[count],i)
count=count+1
for i in range(4):
qc.rz(theta[count],i)
count=count+1
qc.crx(theta[count],1,0)
count=count+1
qc.crx(theta[count],3,2)
count=count+1
for i in range(4):
qc.rx(theta[count],i)
count=count+1
for i in range(4):
qc.rz(theta[count],i)
count=count+1
qc.crx(theta[count],2,1)
count=count+1
if repeat!=0:
for l in range(L):
qc.crx(theta[count],2,1)
count=count+1
for i in range(4):
qc.rz(theta[count],i)
count=count+1
for i in range(4):
qc.rx(theta[count],i)
count=count+1
qc.crx(theta[count],3,2)
count=count+1
qc.crx(theta[count],1,0)
count=count+1
for i in range(4):
qc.rz(theta[count],i)
count=count+1
for i in range(4):
qc.rx(theta[count],i)
count=count+1
return qc
def circuit9(qc,theta,L,repeat):
#circuit 9
#theta is the list of parameters
#theta length is (4)L (doubled when repeat != 0)
#L is the number of repetitions (layers)
# repeat != 0 appends the conjugate of the first block to the circuit, for expressibility
# 0: No, 1: Repeat
count=0
for l in range(L):
for i in range(4):
qc.h(i)
qc.cz(3,2)
qc.cz(2,1)
qc.cz(1,0)
for i in range(4):
qc.rx(theta[count],i)
count=count+1
if repeat!=0:
for l in range(L):
for i in range(4):
qc.rx(theta[count],i)
count=count+1
qc.cz(1,0)
qc.cz(2,1)
qc.cz(3,2)
for i in range(4):
qc.h(i)
return qc
def circuit10(qc,theta,L,repeat):
#circuit 10
#theta is the list of parameters
#theta length is (4)L+4 (doubled when repeat != 0)
#L is the number of repetitions (layers)
# repeat != 0 appends the conjugate of the first block to the circuit, for expressibility
# 0: No, 1: Repeat
count=0
for i in range(4):
qc.ry(theta[count],i)
count=count+1
for l in range(L):
qc.cz(3,2)
qc.cz(2,1)
qc.cz(1,0)
qc.cz(3,0)
for i in range(4):
qc.ry(theta[count],i)
count=count+1
if repeat!=0:
for l in range(L):
for i in range(4):
qc.ry(theta[count],i)
count=count+1
qc.cz(3,0)
qc.cz(1,0)
qc.cz(2,1)
qc.cz(3,2)
for i in range(4):
qc.ry(theta[count],i)
count=count+1
return qc
def circuit11(qc,theta,L,repeat):
#circuit 11
#theta is the list of parameters
#theta length is (12)L (doubled when repeat != 0)
#L is the number of repetitions (layers)
# repeat != 0 appends the conjugate of the first block to the circuit, for expressibility
# 0: No, 1: Repeat
count=0
for l in range(L):
for i in range(4):
qc.ry(theta[count],i)
count=count+1
for i in range(4):
qc.rz(theta[count],i)
count=count+1
qc.cx(1,0)
qc.cx(3,2)
qc.ry(theta[count],1)
count=count+1
qc.ry(theta[count],2)
count=count+1
qc.rz(theta[count],1)
count=count+1
qc.rz(theta[count],2)
count=count+1
qc.cx(2,1)
if repeat!=0:
for l in range(L):
qc.cx(2,1)
qc.rz(theta[count],2)
count=count+1
qc.rz(theta[count],1)
count=count+1
qc.ry(theta[count],2)
count=count+1
qc.ry(theta[count],1)
count=count+1
qc.cx(3,2)
qc.cx(1,0)
for i in range(4):
qc.rz(theta[count],i)
count=count+1
for i in range(4):
qc.ry(theta[count],i)
count=count+1
return qc
def circuit12(qc,theta,L,repeat):
#circuit 12
#theta is the list of parameters
#theta length is (12)L (doubled when repeat != 0)
#L is the number of repetitions (layers)
# repeat != 0 appends the conjugate of the first block to the circuit, for expressibility
# 0: No, 1: Repeat
count=0
for l in range(L):
for i in range(4):
qc.ry(theta[count],i)
count=count+1
for i in range(4):
qc.rz(theta[count],i)
count=count+1
qc.cz(1,0)
qc.cz(3,2)
qc.ry(theta[count],1)
count=count+1
qc.ry(theta[count],2)
count=count+1
qc.rz(theta[count],1)
count=count+1
qc.rz(theta[count],2)
count=count+1
qc.cz(2,1)
if repeat!=0:
for l in range(L):
qc.cz(2,1)
qc.rz(theta[count],2)
count=count+1
qc.rz(theta[count],1)
count=count+1
qc.ry(theta[count],2)
count=count+1
qc.ry(theta[count],1)
count=count+1
qc.cz(3,2)
qc.cz(1,0)
for i in range(4):
qc.rz(theta[count],i)
count=count+1
for i in range(4):
qc.ry(theta[count],i)
count=count+1
return qc
def circuit13(qc,theta,L,repeat):
#circuit 13
#theta is the list of parameters
#theta length is (16)L (doubled when repeat != 0)
#L is the number of repetitions (layers)
# repeat != 0 appends the conjugate of the first block to the circuit, for expressibility
# 0: No, 1: Repeat
count=0
for l in range(L):
for i in range(4):
qc.ry(theta[count],i)
count=count+1
qc.crz(theta[count],3,0)
count=count+1
qc.crz(theta[count],2,3)
count=count+1
qc.crz(theta[count],1,2)
count=count+1
qc.crz(theta[count],0,1)
count=count+1
for i in range(4):
qc.ry(theta[count],i)
count=count+1
qc.crz(theta[count],3,2)
count=count+1
qc.crz(theta[count],0,3)
count=count+1
qc.crz(theta[count],1,0)
count=count+1
qc.crz(theta[count],2,1)
count=count+1
if repeat!=0:
for l in range(L):
qc.crz(theta[count],2,1)
count=count+1
qc.crz(theta[count],1,0)
count=count+1
qc.crz(theta[count],0,3)
count=count+1
qc.crz(theta[count],3,2)
count=count+1
for i in range(4):
qc.ry(theta[count],i)
count=count+1
qc.crz(theta[count],0,1)
count=count+1
qc.crz(theta[count],1,2)
count=count+1
qc.crz(theta[count],2,3)
count=count+1
qc.crz(theta[count],3,0)
count=count+1
for i in range(4):
qc.ry(theta[count],i)
count=count+1
return qc
def circuit14(qc,theta,L,repeat):
#circuit 14
#theta is the list of parameters
#theta length is (16)L (doubled when repeat != 0)
#L is the number of repetitions (layers)
# repeat != 0 appends the conjugate of the first block to the circuit, for expressibility
# 0: No, 1: Repeat
count=0
for l in range(L):
for i in range(4):
qc.ry(theta[count],i)
count=count+1
qc.crx(theta[count],3,0)
count=count+1
qc.crx(theta[count],2,3)
count=count+1
qc.crx(theta[count],1,2)
count=count+1
qc.crx(theta[count],0,1)
count=count+1
for i in range(4):
qc.ry(theta[count],i)
count=count+1
qc.crx(theta[count],3,2)
count=count+1
qc.crx(theta[count],0,3)
count=count+1
qc.crx(theta[count],1,0)
count=count+1
qc.crx(theta[count],2,1)
count=count+1
if repeat!=0:
for l in range(L):
qc.crx(theta[count],2,1)
count=count+1
qc.crx(theta[count],1,0)
count=count+1
qc.crx(theta[count],0,3)
count=count+1
qc.crx(theta[count],3,2)
count=count+1
for i in range(4):
qc.ry(theta[count],i)
count=count+1
qc.crx(theta[count],0,1)
count=count+1
qc.crx(theta[count],1,2)
count=count+1
qc.crx(theta[count],2,3)
count=count+1
qc.crx(theta[count],3,0)
count=count+1
for i in range(4):
qc.ry(theta[count],i)
count=count+1
return qc
def circuit15(qc,theta,L,repeat):
#circuit 15
#theta is the list of parameters
#theta length is (8)L (doubled when repeat != 0)
#L is the number of repetitions (layers)
# repeat != 0 appends the conjugate of the first block to the circuit, for expressibility
# 0: No, 1: Repeat
count=0
for l in range(L):
for i in range(4):
qc.ry(theta[count],i)
count=count+1
qc.cx(3,0)
qc.cx(2,3)
qc.cx(1,2)
qc.cx(0,1)
for i in range(4):
qc.ry(theta[count],i)
count=count+1
qc.cx(3,2)
qc.cx(0,3)
qc.cx(1,0)
qc.cx(2,1)
if repeat!=0:
for l in range(L):
qc.cx(2,1)
qc.cx(1,0)
qc.cx(0,3)
qc.cx(3,2)
for i in range(4):
qc.ry(theta[count],i)
count=count+1
qc.cx(0,1)
qc.cx(1,2)
qc.cx(2,3)
qc.cx(3,0)
for i in range(4):
qc.ry(theta[count],i)
count=count+1
return qc
def circuit16(qc,theta,L,repeat):
#circuit 16
#theta is the list of parameters
#theta length is (11)L (doubled when repeat != 0)
#L is the number of repetitions (layers)
# repeat != 0 appends the conjugate of the first block to the circuit, for expressibility
# 0: No, 1: Repeat
count=0
for l in range(L):
for i in range(4):
qc.rx(theta[count],i)
count=count+1
for i in range(4):
qc.rz(theta[count],i)
count=count+1
qc.crz(theta[count],1,0)
count=count+1
qc.crz(theta[count],3,2)
count=count+1
qc.crz(theta[count],2,1)
count=count+1
if repeat!=0:
for l in range(L):
qc.crz(theta[count],2,1)
count=count+1
qc.crz(theta[count],3,2)
count=count+1
qc.crz(theta[count],1,0)
count=count+1
for i in range(4):
qc.rz(theta[count],i)
count=count+1
for i in range(4):
qc.rx(theta[count],i)
count=count+1
return qc
def circuit17(qc,theta,L,repeat):
#circuit 17
#theta is the list of parameters
#theta length is (11)L (doubled when repeat != 0)
#L is the number of repetitions (layers)
# repeat != 0 appends the conjugate of the first block to the circuit, for expressibility
# 0: No, 1: Repeat
count=0
for l in range(L):
for i in range(4):
qc.rx(theta[count],i)
count=count+1
for i in range(4):
qc.rz(theta[count],i)
count=count+1
qc.crx(theta[count],1,0)
count=count+1
qc.crx(theta[count],3,2)
count=count+1
qc.crx(theta[count],2,1)
count=count+1
if repeat!=0:
for l in range(L):
qc.crx(theta[count],2,1)
count=count+1
qc.crx(theta[count],3,2)
count=count+1
qc.crx(theta[count],1,0)
count=count+1
for i in range(4):
qc.rz(theta[count],i)
count=count+1
for i in range(4):
qc.rx(theta[count],i)
count=count+1
return qc
def circuit18(qc,theta,L,repeat):
#circuit 18
#theta is the list of parameters
#theta length is (12)L (doubled when repeat != 0)
#L is the number of repetitions (layers)
# repeat != 0 appends the conjugate of the first block to the circuit, for expressibility
# 0: No, 1: Repeat
count=0
for l in range(L):
for i in range(4):
qc.rx(theta[count],i)
count=count+1
for i in range(4):
qc.rz(theta[count],i)
count=count+1
qc.crz(theta[count],3,0)
count=count+1
qc.crz(theta[count],2,3)
count=count+1
qc.crz(theta[count],1,2)
count=count+1
qc.crz(theta[count],0,1)
count=count+1
if repeat!=0:
for l in range(L):
qc.crz(theta[count],0,1)
count=count+1
qc.crz(theta[count],1,2)
count=count+1
qc.crz(theta[count],2,3)
count=count+1
qc.crz(theta[count],3,0)
count=count+1
for i in range(4):
qc.rz(theta[count],i)
count=count+1
for i in range(4):
qc.rx(theta[count],i)
count=count+1
return qc
def circuit19(qc,theta,L,repeat):
#circuit 19
#theta is the list of parameters
#theta length is (12)L (doubled when repeat != 0)
#L is the number of repetitions (layers)
# repeat != 0 appends the conjugate of the first block to the circuit, for expressibility
# 0: No, 1: Repeat
count=0
for l in range(L):
for i in range(4):
qc.rx(theta[count],i)
count=count+1
for i in range(4):
qc.rz(theta[count],i)
count=count+1
qc.crx(theta[count],3,0)
count=count+1
qc.crx(theta[count],2,3)
count=count+1
qc.crx(theta[count],1,2)
count=count+1
qc.crx(theta[count],0,1)
count=count+1
if repeat!=0:
for l in range(L):
qc.crx(theta[count],0,1)
count=count+1
qc.crx(theta[count],1,2)
count=count+1
qc.crx(theta[count],2,3)
count=count+1
qc.crx(theta[count],3,0)
count=count+1
for i in range(4):
qc.rz(theta[count],i)
count=count+1
for i in range(4):
qc.rx(theta[count],i)
count=count+1
return qc | 21.60266 | 90 | 0.429467 | 3,503 | 25,988 | 3.186126 | 0.025121 | 0.121494 | 0.191202 | 0.114147 | 0.974285 | 0.94212 | 0.938446 | 0.936565 | 0.933339 | 0.929128 | 0 | 0.05388 | 0.467947 | 25,988 | 1,203 | 91 | 21.60266 | 0.753309 | 0.142797 | 0 | 0.962006 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.028875 | false | 0 | 0.00304 | 0 | 0.06079 | 0 | 0 | 0 | 0 | null | 0 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
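A minimal sketch of how the ansatz builders above can be driven (assuming qiskit and numpy are installed and the file is importable as circuits_n1; the parameter count follows the per-circuit comments and doubles when repeat != 0):

# Hypothetical driver for circuit1; the other builders differ only in parameter count.
import numpy as np
from qiskit import QuantumCircuit
from circuits_n1 import circuit1

L = 2                                      # number of layers
repeat = 1                                 # 1: also append the conjugated block
n_params = 8 * L * (2 if repeat else 1)    # circuit1 uses 8 parameters per layer

theta = 2 * np.pi * np.random.rand(n_params)
qc = circuit1(QuantumCircuit(4), theta, L, repeat)   # every builder assumes 4 qubits
print(qc.draw())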
da561ee13fb0339b26b2264053a312ff1991de1e | 22,732 | py | Python | samples/batch/python/python-client/swagger_client/api/custom_speech_transcriptions_api.py | rsquizz/cognitive-services-speech-sdk | d194c94b2351ed2fb070d463885bd0a91b9fe41b | [
"MIT"
] | null | null | null | samples/batch/python/python-client/swagger_client/api/custom_speech_transcriptions_api.py | rsquizz/cognitive-services-speech-sdk | d194c94b2351ed2fb070d463885bd0a91b9fe41b | [
"MIT"
] | null | null | null | samples/batch/python/python-client/swagger_client/api/custom_speech_transcriptions_api.py | rsquizz/cognitive-services-speech-sdk | d194c94b2351ed2fb070d463885bd0a91b9fe41b | [
"MIT"
] | null | null | null | # coding: utf-8
"""
Speech Services API v2.0
Speech Services API v2.0. # noqa: E501
OpenAPI spec version: v2.0
Contact: crservice@microsoft.com
Generated by: https://github.com/swagger-api/swagger-codegen.git
"""
from __future__ import absolute_import
import re # noqa: F401
# python 2 and python 3 compatibility library
import six
from swagger_client.api_client import ApiClient
class CustomSpeechTranscriptionsApi(object):
"""NOTE: This class is auto generated by the swagger code generator program.
Do not edit the class manually.
Ref: https://github.com/swagger-api/swagger-codegen
"""
def __init__(self, api_client=None):
if api_client is None:
api_client = ApiClient()
self.api_client = api_client
def create_transcription(self, transcription, **kwargs): # noqa: E501
"""Creates a new transcription. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.create_transcription(transcription, async_req=True)
>>> result = thread.get()
:param async_req bool
:param TranscriptionDefinition transcription: The details of the new transcription. (required)
:return: None
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async_req'):
return self.create_transcription_with_http_info(transcription, **kwargs) # noqa: E501
else:
(data) = self.create_transcription_with_http_info(transcription, **kwargs) # noqa: E501
return data
def create_transcription_with_http_info(self, transcription, **kwargs): # noqa: E501
"""Creates a new transcription. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.create_transcription_with_http_info(transcription, async_req=True)
>>> result = thread.get()
:param async_req bool
:param TranscriptionDefinition transcription: The details of the new transcription. (required)
:return: None
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['transcription'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method create_transcription" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'transcription' is set
if ('transcription' not in params or
params['transcription'] is None):
raise ValueError("Missing the required parameter `transcription` when calling `create_transcription`") # noqa: E501
collection_formats = {}
path_params = {}
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
if 'transcription' in params:
body_params = params['transcription']
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['application/json']) # noqa: E501
# Authentication setting
auth_settings = ['subscription_key', 'token'] # noqa: E501
return self.api_client.call_api(
'/api/speechtotext/v2.0/transcriptions', 'POST',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type=None, # noqa: E501
auth_settings=auth_settings,
async_req=params.get('async_req'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
def delete_transcription(self, id, **kwargs): # noqa: E501
"""Deletes the specified transcription task. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.delete_transcription(id, async_req=True)
>>> result = thread.get()
:param async_req bool
:param str id: The identifier of the transcription. (required)
:return: ErrorContent
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async_req'):
return self.delete_transcription_with_http_info(id, **kwargs) # noqa: E501
else:
(data) = self.delete_transcription_with_http_info(id, **kwargs) # noqa: E501
return data
def delete_transcription_with_http_info(self, id, **kwargs): # noqa: E501
"""Deletes the specified transcription task. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.delete_transcription_with_http_info(id, async_req=True)
>>> result = thread.get()
:param async_req bool
:param str id: The identifier of the transcription. (required)
:return: ErrorContent
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['id'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method delete_transcription" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'id' is set
if ('id' not in params or
params['id'] is None):
raise ValueError("Missing the required parameter `id` when calling `delete_transcription`") # noqa: E501
collection_formats = {}
path_params = {}
if 'id' in params:
path_params['id'] = params['id'] # noqa: E501
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# Authentication setting
auth_settings = ['subscription_key', 'token'] # noqa: E501
return self.api_client.call_api(
'/api/speechtotext/v2.0/transcriptions/{id}', 'DELETE',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='ErrorContent', # noqa: E501
auth_settings=auth_settings,
async_req=params.get('async_req'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
def get_supported_locales_for_transcriptions_b_ba(self, **kwargs): # noqa: E501
"""Gets a list of supported locales for offline transcriptions. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_supported_locales_for_transcriptions_b_ba(async_req=True)
>>> result = thread.get()
:param async_req bool
:return: list[str]
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async_req'):
return self.get_supported_locales_for_transcriptions_b_ba_with_http_info(**kwargs) # noqa: E501
else:
(data) = self.get_supported_locales_for_transcriptions_b_ba_with_http_info(**kwargs) # noqa: E501
return data
def get_supported_locales_for_transcriptions_b_ba_with_http_info(self, **kwargs): # noqa: E501
"""Gets a list of supported locales for offline transcriptions. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_supported_locales_for_transcriptions_b_ba_with_http_info(async_req=True)
>>> result = thread.get()
:param async_req bool
:return: list[str]
If the method is called asynchronously,
returns the request thread.
"""
all_params = [] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method get_supported_locales_for_transcriptions_b_ba" % key
)
params[key] = val
del params['kwargs']
collection_formats = {}
path_params = {}
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# Authentication setting
auth_settings = ['subscription_key', 'token'] # noqa: E501
return self.api_client.call_api(
'/api/speechtotext/v2.0/transcriptions/locales', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='list[str]', # noqa: E501
auth_settings=auth_settings,
async_req=params.get('async_req'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
def get_transcription(self, id, **kwargs): # noqa: E501
"""Gets the transcription identified by the given ID. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_transcription(id, async_req=True)
>>> result = thread.get()
:param async_req bool
:param str id: The identifier of the transcription. (required)
:return: Transcription
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async_req'):
return self.get_transcription_with_http_info(id, **kwargs) # noqa: E501
else:
(data) = self.get_transcription_with_http_info(id, **kwargs) # noqa: E501
return data
def get_transcription_with_http_info(self, id, **kwargs): # noqa: E501
"""Gets the transcription identified by the given ID. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_transcription_with_http_info(id, async_req=True)
>>> result = thread.get()
:param async_req bool
:param str id: The identifier of the transcription. (required)
:return: Transcription
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['id'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method get_transcription" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'id' is set
if ('id' not in params or
params['id'] is None):
raise ValueError("Missing the required parameter `id` when calling `get_transcription`") # noqa: E501
collection_formats = {}
path_params = {}
if 'id' in params:
path_params['id'] = params['id'] # noqa: E501
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# Authentication setting
auth_settings = ['subscription_key', 'token'] # noqa: E501
return self.api_client.call_api(
'/api/speechtotext/v2.0/transcriptions/{id}', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='Transcription', # noqa: E501
auth_settings=auth_settings,
async_req=params.get('async_req'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
def get_transcriptions(self, **kwargs): # noqa: E501
"""Gets a list of transcriptions for the authenticated subscription. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_transcriptions(async_req=True)
>>> result = thread.get()
:param async_req bool
:return: list[Transcription]
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async_req'):
return self.get_transcriptions_with_http_info(**kwargs) # noqa: E501
else:
(data) = self.get_transcriptions_with_http_info(**kwargs) # noqa: E501
return data
def get_transcriptions_with_http_info(self, **kwargs): # noqa: E501
"""Gets a list of transcriptions for the authenticated subscription. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_transcriptions_with_http_info(async_req=True)
>>> result = thread.get()
:param async_req bool
:return: list[Transcription]
If the method is called asynchronously,
returns the request thread.
"""
all_params = [] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method get_transcriptions" % key
)
params[key] = val
del params['kwargs']
collection_formats = {}
path_params = {}
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# Authentication setting
auth_settings = ['subscription_key', 'token'] # noqa: E501
return self.api_client.call_api(
'/api/speechtotext/v2.0/transcriptions', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='list[Transcription]', # noqa: E501
auth_settings=auth_settings,
async_req=params.get('async_req'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
def update_transcription(self, id, transcription_update, **kwargs): # noqa: E501
"""Updates the mutable details of the transcription identified by its ID. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.update_transcription(id, transcription_update, async_req=True)
>>> result = thread.get()
:param async_req bool
:param str id: The identifier of the transcription. (required)
:param TranscriptionUpdate transcription_update: The updated values for the transcription. (required)
:return: Transcription
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async_req'):
return self.update_transcription_with_http_info(id, transcription_update, **kwargs) # noqa: E501
else:
(data) = self.update_transcription_with_http_info(id, transcription_update, **kwargs) # noqa: E501
return data
def update_transcription_with_http_info(self, id, transcription_update, **kwargs): # noqa: E501
"""Updates the mutable details of the transcription identified by its ID. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.update_transcription_with_http_info(id, transcription_update, async_req=True)
>>> result = thread.get()
:param async_req bool
:param str id: The identifier of the transcription. (required)
:param TranscriptionUpdate transcription_update: The updated values for the transcription. (required)
:return: Transcription
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['id', 'transcription_update'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method update_transcription" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'id' is set
if ('id' not in params or
params['id'] is None):
raise ValueError("Missing the required parameter `id` when calling `update_transcription`") # noqa: E501
# verify the required parameter 'transcription_update' is set
if ('transcription_update' not in params or
params['transcription_update'] is None):
raise ValueError("Missing the required parameter `transcription_update` when calling `update_transcription`") # noqa: E501
collection_formats = {}
path_params = {}
if 'id' in params:
path_params['id'] = params['id'] # noqa: E501
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
if 'transcription_update' in params:
body_params = params['transcription_update']
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['application/json']) # noqa: E501
# Authentication setting
auth_settings = ['subscription_key', 'token'] # noqa: E501
return self.api_client.call_api(
'/api/speechtotext/v2.0/transcriptions/{id}', 'PATCH',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='Transcription', # noqa: E501
auth_settings=auth_settings,
async_req=params.get('async_req'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
| 38.85812 | 135 | 0.616928 | 2,535 | 22,732 | 5.283629 | 0.070611 | 0.042407 | 0.025086 | 0.032253 | 0.949306 | 0.937285 | 0.915858 | 0.905704 | 0.90212 | 0.887188 | 0 | 0.014788 | 0.295003 | 22,732 | 584 | 136 | 38.924658 | 0.820978 | 0.324872 | 0 | 0.778481 | 1 | 0 | 0.185881 | 0.056025 | 0 | 0 | 0 | 0 | 0 | 1 | 0.041139 | false | 0 | 0.012658 | 0 | 0.113924 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
da9494a8cf10ff46affe4809440198ca4513c230 | 40,763 | py | Python | decisionModelscript.py | eugenewickett/logistigateanalysis | 5174f40db5f79bfd12491850cef53edde825b71b | [
"MIT"
] | null | null | null | decisionModelscript.py | eugenewickett/logistigateanalysis | 5174f40db5f79bfd12491850cef53edde825b71b | [
"MIT"
] | null | null | null | decisionModelscript.py | eugenewickett/logistigateanalysis | 5174f40db5f79bfd12491850cef53edde825b71b | [
"MIT"
] | null | null | null | import numpy as np
import scipy.optimize as spo
import scipy.special as sps
# Workaround for the 'methods' file not being able to locate the 'mcmcsamplers' folder for importing
import sys
import os
SCRIPT_DIR = os.path.dirname(os.path.realpath(os.path.join(os.getcwd(), os.path.expanduser(__file__))))
sys.path.append(os.path.normpath(os.path.join(SCRIPT_DIR, 'logistigate','logistigate')))
import logistigate.logistigate.utilities as util # Pull from the submodule "develop" branch
import logistigate.logistigate.methods as methods # Pull from the submodule "develop" branch
import logistigate.logistigate.lg as lg # Pull from the submodule "develop" branch
'''
def findingAnExample():
dataDict = util.generateRandDataDict(numImp=2, numOut=3, diagSens=0.90,
diagSpec=0.99, numSamples=90,
dataType='Tracked', transMatLambda=1.1,
randSeed=-1,
trueRates=[0.1,0.3,0.3,0.2,0.1])
MCMCdict_NUTS = {'MCMCtype': 'NUTS', 'Madapt': 5000, 'delta': 0.4}
dataDict.update({'numPostSamples': 500,
'prior': methods.prior_normal(),
'MCMCdict': MCMCdict_NUTS})
lgDict = lg.runlogistigate(dataDict)
util.plotPostSamples(lgDict)
util.printEstimates(lgDict)
print(lgDict['transMat'])
return
'''
def decision1ModelSimulation(n=100, n1=50, t=0.20, delta=0.1, eps1=0.1, eps2=0.1, blameOrder=['Out1', 'Imp1', 'Out2'],
confInt=0.95, reps=1000):
'''
Function for simulating different parameters in a decision model regarding assigning blame in a 1-importer, 2-outlet
system
'''
import numpy as np
import scipy.stats as sps
# Use blameOrder list to define the underlying SFP rates; 1st entry has SFP rate of t+delta, 2nd has t-eps1,
# 3rd has t-eps2
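# For example, with the defaults t=0.20, delta=0.1, eps1=eps2=0.1 and blameOrder=['Out1', 'Imp1', 'Out2'],
# Out1 receives an SFP rate of 0.30 while Imp1 and Out2 each receive 0.10.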
SFPrates = [t + delta, t - eps1, t - eps2]
# Assign SFP rates for importer and outlets
imp1Rate = SFPrates[blameOrder.index('Imp1')]
out1Rate = SFPrates[blameOrder.index('Out1')]
out2Rate = SFPrates[blameOrder.index('Out2')]
# Generate data using n, n1, and assuming perfect diagnostic accuracy
n2 = n - n1
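# A sample drawn at outlet i tests positive with probability out_iRate + (1 - out_iRate) * imp1Rate,
# i.e. the item is SFP either at the outlet itself or, failing that, at the shared importer.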
# Run for number of replications
repsList = []
for r in range(reps):
n1pos = np.random.binomial(n1, p=out1Rate + (1 - out1Rate) * imp1Rate)
n2pos = np.random.binomial(n2, p=out2Rate + (1 - out2Rate) * imp1Rate)
# Form confidence intervals
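# Normal-approximation (Wald) interval: sampMean +/- z * sqrt(sampMean * (1 - sampMean) / n), clipped to [0, 1].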
n1sampMean = n1pos / n1
n2sampMean = n2pos / n2
zscore = sps.norm.ppf(confInt + (1 - confInt) / 2)
n1radius = zscore * np.sqrt(n1sampMean * (1 - n1sampMean) / n1)
n2radius = zscore * np.sqrt(n2sampMean * (1 - n2sampMean) / n2)
n1interval = [max(0, n1sampMean - n1radius), min(1, n1sampMean + n1radius)]
n2interval = [max(0, n2sampMean - n2radius), min(1, n2sampMean + n2radius)]
# Make a decision
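# d1: blame Outlet 2 if its interval lies entirely above Outlet 1's, blame Outlet 1 if Outlet 2's interval
# lies entirely below it, and otherwise blame the shared importer.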
if n2interval[0] > n1interval[1]:
repsList.append('Out2')
elif n2interval[1] < n1interval[0]:
repsList.append('Out1')
else:
repsList.append('Imp1')
return repsList
def runDecisionSimsScratch():
import numpy as np
import matplotlib.pyplot as plt
from matplotlib import cm
from mpl_toolkits.mplot3d import Axes3D
import winsound
duration = 1000 # milliseconds
freq = 440 # Hz
numReps = 1000
numSamps = 200
currResultsList = decision1ModelSimulation(n=numSamps, n1=numSamps / 2, t=0.20, delta=0.1, eps1=0.1, eps2=0.1,
blameOrder=['Out1', 'Imp1', 'Out2'], confInt=0.95, reps=numReps)
percCorrect = currResultsList.count('Out1') / numReps
numReps = 1000
# Look at number of samples vs. the threshold, importer 1 as culprit
curr_blameOrder = ['Imp1', 'Out1', 'Out2']
nVec = np.arange(50, 1050, 50)
tVec = np.arange(0.15, 0.75, 0.05)
nVSt_mat = np.zeros(shape=[len(nVec), len(tVec)])
for nInd, curr_n in enumerate(nVec):
for tInd, curr_t in enumerate(tVec):
currResultsList = decision1ModelSimulation(n=curr_n, n1=curr_n / 2, t=curr_t, delta=0.1, eps1=0.1, eps2=0.1,
blameOrder=curr_blameOrder, confInt=0.95, reps=numReps)
nVSt_mat[nInd, tInd] = currResultsList.count('Imp1') / numReps
winsound.Beep(freq, duration) # Are we done?
# Plot
hf = plt.figure()
ha = hf.add_subplot(111, projection='3d')
X, Y = np.meshgrid(tVec, nVec) # `plot_surface` expects `x` and `y` data to be 2D
ha.plot_surface(X, Y, nVSt_mat * 100, cmap=cm.coolwarm)
plt.suptitle('Classification accuracy vs.\nTotal sample size n, Threshold t\nUnder Importer 1 as culprit')
plt.xlabel('t', size=16)
plt.ylabel('n', size=16)
ha.set_zlabel('% correct', size=16)
plt.show()
# Look at number of samples vs. the threshold, outlet 1 as culprit
curr_blameOrder = ['Out1', 'Imp1', 'Out2']
nVec = np.arange(50, 1050, 50)
tVec = np.arange(0.15, 0.75, 0.05)
nVSt_mat = np.zeros(shape=[len(nVec), len(tVec)])
for nInd, curr_n in enumerate(nVec):
for tInd, curr_t in enumerate(tVec):
currResultsList = decision1ModelSimulation(n=curr_n, n1=curr_n / 2, t=curr_t, delta=0.1, eps1=0.1, eps2=0.1,
blameOrder=curr_blameOrder, confInt=0.95, reps=numReps)
nVSt_mat[nInd, tInd] = currResultsList.count('Out1') / numReps
winsound.Beep(freq, duration) # Are we done?
# Plot
hf = plt.figure()
ha = hf.add_subplot(111, projection='3d')
X, Y = np.meshgrid(tVec, nVec) # `plot_surface` expects `x` and `y` data to be 2D
ha.plot_surface(X, Y, nVSt_mat * 100, cmap=cm.coolwarm)
plt.suptitle('Classification accuracy vs.\nTotal sample size n, Threshold t\nUnder Outlet 1 as culprit')
plt.xlabel('t', size=16)
plt.ylabel('n', size=16)
ha.set_zlabel('% correct', size=16)
plt.show()
# Look at delta vs. epsilon; set t=0.3, n=300; importer 1 is culprit
curr_blameOrder = ['Imp1', 'Out1', 'Out2']
curr_t = 0.3
curr_n = 300
deltaVec = np.arange(0.01, 0.21, 0.01)
epsVec = np.arange(0.01, 0.21, 0.01)
deltaVSeps_mat = np.zeros(shape=[len(deltaVec), len(epsVec)])
for dInd, curr_d in enumerate(deltaVec):
for eInd, curr_e in enumerate(epsVec):
currResultsList = decision1ModelSimulation(n=curr_n, n1=curr_n / 2, t=curr_t, delta=curr_d,
eps1=curr_e, eps2=curr_e,
blameOrder=curr_blameOrder, confInt=0.95, reps=numReps)
deltaVSeps_mat[dInd, eInd] = currResultsList.count(curr_blameOrder[0]) / numReps
winsound.Beep(freq, duration) # Are we done?
# Plot
hf = plt.figure()
ha = hf.add_subplot(111, projection='3d')
X, Y = np.meshgrid(epsVec, deltaVec) # `plot_surface` expects `x` and `y` data to be 2D
ha.plot_surface(X, Y, deltaVSeps_mat * 100, cmap=cm.coolwarm)
plt.suptitle('Classification accuracy vs.\nDistance delta, Distance eps\nUnder Importer 1 as culprit,t=0.3,n=300')
plt.xlabel('eps', size=16)
plt.ylabel('delta', size=16)
ha.set_zlabel('% correct', size=16)
plt.show()
# Look at delta vs. epsilon; set t=0.3, n=300; outlet 1 is culprit
curr_blameOrder = ['Out1', 'Imp1', 'Out2']
curr_t = 0.3
curr_n = 300
deltaVec = np.arange(0.01, 0.21, 0.01)
epsVec = np.arange(0.01, 0.21, 0.01)
deltaVSeps_mat = np.zeros(shape=[len(deltaVec), len(epsVec)])
for dInd, curr_d in enumerate(deltaVec):
for eInd, curr_e in enumerate(epsVec):
currResultsList = decision1ModelSimulation(n=curr_n, n1=curr_n / 2, t=curr_t, delta=curr_d,
eps1=curr_e, eps2=curr_e,
blameOrder=curr_blameOrder, confInt=0.95, reps=numReps)
deltaVSeps_mat[dInd, eInd] = currResultsList.count(curr_blameOrder[0]) / numReps
winsound.Beep(freq, duration) # Are we done?
# Plot
hf = plt.figure()
ha = hf.add_subplot(111, projection='3d')
X, Y = np.meshgrid(epsVec, deltaVec) # `plot_surface` expects `x` and `y` data to be 2D
ha.plot_surface(X, Y, deltaVSeps_mat * 100, cmap=cm.coolwarm)
plt.suptitle('Classification accuracy vs.\nDistance delta, Distance eps\nUnder Outlet 1 as culprit,t=0.3,n=300')
plt.xlabel('eps', size=16)
plt.ylabel('delta', size=16)
ha.set_zlabel('% correct', size=16)
plt.show()
# Look at confidence interval vs. ratio of samples from Outlet 1; set t=0.3, n=300; importer 1 is culprit
curr_blameOrder = ['Imp1', 'Out1', 'Out2']
curr_t = 0.3
curr_n = 300
curr_d = 0.1
curr_e = 0.1
n1ratios = np.arange(0.1, 1.0, 0.1)
confInts = np.arange(0.3, 1.0, 0.05)
n1ratsVSconfs_mat = np.zeros(shape=[len(n1ratios), len(confInts)])
for n1Ind, curr_n1 in enumerate(n1ratios):
for cInd, curr_c in enumerate(confInts):
currResultsList = decision1ModelSimulation(n=curr_n, n1=int(curr_n * curr_n1), t=curr_t, delta=curr_d,
eps1=curr_e, eps2=curr_e,
blameOrder=curr_blameOrder, confInt=curr_c, reps=numReps)
n1ratsVSconfs_mat[n1Ind, cInd] = currResultsList.count(curr_blameOrder[0]) / numReps
winsound.Beep(freq, duration) # Are we done?
# Plot
hf = plt.figure()
ha = hf.add_subplot(111, projection='3d')
X, Y = np.meshgrid(confInts, n1ratios) # `plot_surface` expects `x` and `y` data to be 2D
ha.plot_surface(X, Y, n1ratsVSconfs_mat * 100, cmap=cm.coolwarm)
plt.suptitle(
'Classification accuracy vs.\nn1 ratio of n, CI level\nUnder Importer 1 as culprit,t=0.3,n=300,delta=eps=0.1')
plt.xlabel('CI level', size=16)
plt.ylabel('n1 ratio', size=16)
ha.set_zlabel('% correct', size=16)
plt.show()
# Look at confidence interval vs. ratio of samples from Outlet 1; set t=0.3, n=300; outlet 1 is culprit
curr_blameOrder = ['Out1', 'Imp1', 'Out2']
curr_t = 0.3
curr_n = 300
curr_d = 0.1
curr_e = 0.1
n1ratios = np.arange(0.1, 1.0, 0.1)
confInts = np.arange(0.3, 1.0, 0.05)
n1ratsVSconfs_mat = np.zeros(shape=[len(n1ratios), len(confInts)])
for n1Ind, curr_n1 in enumerate(n1ratios):
for cInd, curr_c in enumerate(confInts):
currResultsList = decision1ModelSimulation(n=curr_n, n1=int(curr_n * curr_n1), t=curr_t, delta=curr_d,
eps1=curr_e, eps2=curr_e,
blameOrder=curr_blameOrder, confInt=curr_c, reps=numReps)
n1ratsVSconfs_mat[n1Ind, cInd] = currResultsList.count(curr_blameOrder[0]) / numReps
winsound.Beep(freq, duration) # Are we done?
# Plot
hf = plt.figure()
ha = hf.add_subplot(111, projection='3d')
X, Y = np.meshgrid(confInts, n1ratios) # `plot_surface` expects `x` and `y` data to be 2D
ha.plot_surface(X, Y, n1ratsVSconfs_mat * 100, cmap=cm.coolwarm)
plt.suptitle(
'Classification accuracy vs.\nn1 ratio of n, CI level\nUnder Outlet 1 as culprit,t=0.3,n=300,delta=eps=0.1')
plt.xlabel('CI level', size=16)
plt.ylabel('n1 ratio', size=16)
ha.set_zlabel('% correct', size=16)
plt.show()
return
def decision2ModelSimulation(n=100, n1=50, t=0.20, delta=0.1, eps1=0.1, eps2=0.1, blameOrder=['Out1', 'Imp1', 'Out2'],
confInt=0.95, reps=1000):
'''
Function for simulating different parameters in a decision model regarding assigning blame in a 1-importer, 2-outlet
system; for d2, an outlet is blamed if its confidence interval lies completely above the threshold t; otherwise,
the importer is blamed
'''
import numpy as np
import scipy.stats as sps
# Use blameOrder list to define the underlying SFP rates; 1st entry has SFP rate of t+delta, 2nd has t-eps1,
# 3rd has t-eps2
SFPrates = [t + delta, t - eps1, t - eps2]
# Assign SFP rates for importer and outlets
imp1Rate = SFPrates[blameOrder.index('Imp1')]
out1Rate = SFPrates[blameOrder.index('Out1')]
out2Rate = SFPrates[blameOrder.index('Out2')]
# Generate data using n, n1, and assuming perfect diagnostic accuracy
n2 = n - n1
# Run for number of replications
repsList = []
for r in range(reps):
n1pos = np.random.binomial(n1, p=out1Rate + (1 - out1Rate) * imp1Rate)
n2pos = np.random.binomial(n2, p=out2Rate + (1 - out2Rate) * imp1Rate)
# Form confidence intervals
n1sampMean = n1pos / n1
n2sampMean = n2pos / n2
zscore = sps.norm.ppf(confInt + (1 - confInt) / 2)
n1radius = zscore * np.sqrt(n1sampMean * (1 - n1sampMean) / n1)
n2radius = zscore * np.sqrt(n2sampMean * (1 - n2sampMean) / n2)
n1interval = [max(0, n1sampMean - n1radius), min(1, n1sampMean + n1radius)]
n2interval = [max(0, n2sampMean - n2radius), min(1, n2sampMean + n2radius)]
# Make a decision, d2
if n1interval[0] > t:
if n1interval[0] >= n2interval[0]:
repsList.append('Out1')
else: # Outlet 2's interval lower bound exceeds Outlet 1's
repsList.append('Out2')
elif n2interval[0] > t: # Only Outlet 2's interval lies entirely above the threshold
repsList.append('Out2')
else:
repsList.append('Imp1')
return repsList
def runDecision2SimsScratch():
import numpy as np
import matplotlib.pyplot as plt
from matplotlib import cm
from mpl_toolkits.mplot3d import Axes3D
import winsound
duration = 1000 # milliseconds
freq = 440 # Hz
numReps = 1000
numSamps = 200
currResultsList = decision2ModelSimulation(n=numSamps, n1=numSamps / 2, t=0.20, delta=0.1, eps1=0.1, eps2=0.1,
blameOrder=['Out1', 'Imp1', 'Out2'], confInt=0.95, reps=numReps)
percCorrect = currResultsList.count('Out1') / numReps
numReps = 1000
# Look at number of samples vs. the threshold, importer 1 as culprit
curr_blameOrder = ['Imp1', 'Out1', 'Out2']
nVec = np.arange(50, 1050, 50)
tVec = np.arange(0.15, 0.75, 0.05)
nVSt_mat = np.zeros(shape=[len(nVec), len(tVec)])
for nInd, curr_n in enumerate(nVec):
for tInd, curr_t in enumerate(tVec):
currResultsList = decision2ModelSimulation(n=curr_n, n1=curr_n / 2, t=curr_t, delta=0.1, eps1=0.1, eps2=0.1,
blameOrder=curr_blameOrder, confInt=0.95, reps=numReps)
nVSt_mat[nInd, tInd] = currResultsList.count('Imp1') / numReps
winsound.Beep(freq, duration) # Are we done?
# Plot
hf = plt.figure()
ha = hf.add_subplot(111, projection='3d')
X, Y = np.meshgrid(tVec, nVec) # `plot_surface` expects `x` and `y` data to be 2D
ha.plot_surface(X, Y, nVSt_mat * 100, cmap=cm.coolwarm)
plt.suptitle('Classification accuracy vs.\nTotal sample size n, Threshold t\nUnder Importer 1 as culprit')
plt.xlabel('t', size=16)
plt.ylabel('n', size=16)
ha.set_zlabel('% correct', size=16)
plt.show()
# Look at number of samples vs. the threshold, outlet 1 as culprit
curr_blameOrder = ['Out1', 'Imp1', 'Out2']
nVec = np.arange(50, 1050, 50)
tVec = np.arange(0.15, 0.75, 0.05)
nVSt_mat = np.zeros(shape=[len(nVec), len(tVec)])
for nInd, curr_n in enumerate(nVec):
for tInd, curr_t in enumerate(tVec):
currResultsList = decision2ModelSimulation(n=curr_n, n1=curr_n / 2, t=curr_t, delta=0.1, eps1=0.1, eps2=0.1,
blameOrder=curr_blameOrder, confInt=0.95, reps=numReps)
nVSt_mat[nInd, tInd] = currResultsList.count('Out1') / numReps
winsound.Beep(freq, duration) # Are we done?
# Plot
hf = plt.figure()
ha = hf.add_subplot(111, projection='3d')
X, Y = np.meshgrid(tVec, nVec) # `plot_surface` expects `x` and `y` data to be 2D
ha.plot_surface(X, Y, nVSt_mat * 100, cmap=cm.coolwarm)
plt.suptitle('Classification accuracy vs.\nTotal sample size n, Threshold t\nUnder Outlet 1 as culprit')
plt.xlabel('t', size=16)
plt.ylabel('n', size=16)
ha.set_zlabel('% correct', size=16)
plt.show()
# Look at delta vs. epsilon; set t=0.3, n=300; importer 1 is culprit
curr_blameOrder = ['Imp1', 'Out1', 'Out2']
curr_t = 0.3
curr_n = 300
deltaVec = np.arange(0.01, 0.21, 0.01)
epsVec = np.arange(0.01, 0.21, 0.01)
deltaVSeps_mat = np.zeros(shape=[len(deltaVec), len(epsVec)])
for dInd, curr_d in enumerate(deltaVec):
for eInd, curr_e in enumerate(epsVec):
currResultsList = decision2ModelSimulation(n=curr_n, n1=curr_n / 2, t=curr_t, delta=curr_d,
eps1=curr_e, eps2=curr_e,
blameOrder=curr_blameOrder, confInt=0.95, reps=numReps)
deltaVSeps_mat[dInd, eInd] = currResultsList.count(curr_blameOrder[0]) / numReps
winsound.Beep(freq, duration) # Are we done?
# Plot
hf = plt.figure()
ha = hf.add_subplot(111, projection='3d')
X, Y = np.meshgrid(epsVec, deltaVec) # `plot_surface` expects `x` and `y` data to be 2D
ha.plot_surface(X, Y, deltaVSeps_mat * 100, cmap=cm.coolwarm)
plt.suptitle('Classification accuracy vs.\nDistance delta, Distance eps\nUnder Importer 1 as culprit,t=0.3,n=300')
plt.xlabel('eps', size=16)
plt.ylabel('delta', size=16)
ha.set_zlabel('% correct', size=16)
plt.show()
# Look at delta vs. epsilon; set t=0.3, n=300; outlet 1 is culprit
curr_blameOrder = ['Out1', 'Imp1', 'Out2']
curr_t = 0.3
curr_n = 300
deltaVec = np.arange(0.01, 0.21, 0.01)
epsVec = np.arange(0.01, 0.21, 0.01)
deltaVSeps_mat = np.zeros(shape=[len(deltaVec), len(epsVec)])
for dInd, curr_d in enumerate(deltaVec):
for eInd, curr_e in enumerate(epsVec):
currResultsList = decision2ModelSimulation(n=curr_n, n1=curr_n / 2, t=curr_t, delta=curr_d,
eps1=curr_e, eps2=curr_e,
blameOrder=curr_blameOrder, confInt=0.95, reps=numReps)
deltaVSeps_mat[dInd, eInd] = currResultsList.count(curr_blameOrder[0]) / numReps
winsound.Beep(freq, duration) # Are we done?
# Plot
hf = plt.figure()
ha = hf.add_subplot(111, projection='3d')
X, Y = np.meshgrid(epsVec, deltaVec) # `plot_surface` expects `x` and `y` data to be 2D
ha.plot_surface(X, Y, deltaVSeps_mat * 100, cmap=cm.coolwarm)
plt.suptitle('Classification accuracy vs.\nDistance delta, Distance eps\nUnder Outlet 1 as culprit,t=0.3,n=300')
plt.xlabel('eps', size=16)
plt.ylabel('delta', size=16)
ha.set_zlabel('% correct', size=16)
plt.show()
# Look at confidence interval vs. ratio of samples from Outlet 1; set t=0.3, n=300; importer 1 is culprit
curr_blameOrder = ['Imp1', 'Out1', 'Out2']
curr_t = 0.3
curr_n = 300
curr_d = 0.1
curr_e = 0.1
n1ratios = np.arange(0.1, 1.0, 0.1)
confInts = np.arange(0.3, 1.0, 0.05)
n1ratsVSconfs_mat = np.zeros(shape=[len(n1ratios), len(confInts)])
for n1Ind, curr_n1 in enumerate(n1ratios):
for cInd, curr_c in enumerate(confInts):
currResultsList = decision2ModelSimulation(n=curr_n, n1=int(curr_n * curr_n1), t=curr_t, delta=curr_d,
eps1=curr_e, eps2=curr_e,
blameOrder=curr_blameOrder, confInt=curr_c, reps=numReps)
n1ratsVSconfs_mat[n1Ind, cInd] = currResultsList.count(curr_blameOrder[0]) / numReps
winsound.Beep(freq, duration) # Are we done?
# Plot
hf = plt.figure()
ha = hf.add_subplot(111, projection='3d')
X, Y = np.meshgrid(confInts, n1ratios) # `plot_surface` expects `x` and `y` data to be 2D
ha.plot_surface(X, Y, n1ratsVSconfs_mat * 100, cmap=cm.coolwarm)
plt.suptitle(
'Classification accuracy vs.\nn1 ratio of n, CI level\nUnder Importer 1 as culprit,t=0.3,n=300,delta=eps=0.1')
plt.xlabel('CI level', size=16)
plt.ylabel('n1 ratio', size=16)
ha.set_zlabel('% correct', size=16)
plt.show()
# Look at confidence interval vs. ratio of samples from Outlet 1; set t=0.3, n=300; outlet 1 is culprit
curr_blameOrder = ['Out1', 'Imp1', 'Out2']
curr_t = 0.3
curr_n = 300
curr_d = 0.1
curr_e = 0.1
n1ratios = np.arange(0.1, 1.0, 0.1)
confInts = np.arange(0.3, 1.0, 0.05)
n1ratsVSconfs_mat = np.zeros(shape=[len(n1ratios), len(confInts)])
for n1Ind, curr_n1 in enumerate(n1ratios):
for cInd, curr_c in enumerate(confInts):
currResultsList = decision2ModelSimulation(n=curr_n, n1=int(curr_n * curr_n1), t=curr_t, delta=curr_d,
eps1=curr_e, eps2=curr_e,
blameOrder=curr_blameOrder, confInt=curr_c, reps=numReps)
n1ratsVSconfs_mat[n1Ind, cInd] = currResultsList.count(curr_blameOrder[0]) / numReps
winsound.Beep(freq, duration) # Are we done?
# Plot
hf = plt.figure()
ha = hf.add_subplot(111, projection='3d')
X, Y = np.meshgrid(confInts, n1ratios) # `plot_surface` expects `x` and `y` data to be 2D
ha.plot_surface(X, Y, n1ratsVSconfs_mat * 100, cmap=cm.coolwarm)
plt.suptitle(
'Classification accuracy vs.\nn1 ratio of n, CI level\nUnder Outlet 1 as culprit,t=0.3,n=300,delta=eps=0.1')
plt.xlabel('CI level', size=16)
plt.ylabel('n1 ratio', size=16)
ha.set_zlabel('% correct', size=16)
plt.show()
# Look at epsilon1 vs. epsilon2; set t=0.3, n=300; importer 1 is culprit
curr_blameOrder = ['Imp1', 'Out1', 'Out2']
curr_t = 0.3
curr_n = 300
curr_d = 0.1
eps1Vec = np.arange(0.01, 0.21, 0.01)
eps2Vec = np.arange(0.01, 0.21, 0.01)
eps1VSeps2_mat = np.zeros(shape=[len(eps1Vec), len(eps2Vec)])
for e1Ind, curr_e1 in enumerate(eps1Vec):
for e2Ind, curr_e2 in enumerate(eps2Vec):
currResultsList = decision2ModelSimulation(n=curr_n, n1=curr_n / 2, t=curr_t, delta=curr_d,
eps1=curr_e1, eps2=curr_e2,
blameOrder=curr_blameOrder, confInt=0.95, reps=numReps)
eps1VSeps2_mat[e1Ind, e2Ind] = currResultsList.count(curr_blameOrder[0]) / numReps
winsound.Beep(freq, duration) # Are we done?
# Plot
hf = plt.figure()
ha = hf.add_subplot(111, projection='3d')
X, Y = np.meshgrid(eps2Vec, eps1Vec) # `plot_surface` expects `x` and `y` data to be 2D
ha.plot_surface(X, Y, eps1VSeps2_mat * 100, cmap=cm.coolwarm)
plt.suptitle(
'Classification accuracy vs.\nDistance eps1, Distance eps2\nUnder Importer 1 as culprit, Outlet1 as eps1,t=0.3,n=300,delta=0.1')
plt.xlabel('eps2', size=16)
plt.ylabel('eps1', size=16)
ha.set_zlabel('% correct', size=16)
plt.show()
# Look at epsilon1 vs. epsilon2; set t=0.3, n=300; outlet 1 is culprit
curr_blameOrder = ['Out1', 'Imp1', 'Out2']
curr_t = 0.3
curr_n = 300
curr_d = 0.1
eps1Vec = np.arange(0.01, 0.21, 0.01)
eps2Vec = np.arange(0.01, 0.21, 0.01)
eps1VSeps2_mat = np.zeros(shape=[len(eps1Vec), len(eps2Vec)])
for e1Ind, curr_e1 in enumerate(eps1Vec):
for e2Ind, curr_e2 in enumerate(eps2Vec):
currResultsList = decision2ModelSimulation(n=curr_n, n1=curr_n / 2, t=curr_t, delta=curr_d,
eps1=curr_e1, eps2=curr_e2,
blameOrder=curr_blameOrder, confInt=0.95, reps=numReps)
eps1VSeps2_mat[e1Ind, e2Ind] = currResultsList.count(curr_blameOrder[0]) / numReps
winsound.Beep(freq, duration) # Are we done?
# Plot
hf = plt.figure()
ha = hf.add_subplot(111, projection='3d')
X, Y = np.meshgrid(eps2Vec, eps1Vec) # `plot_surface` expects `x` and `y` data to be 2D
ha.plot_surface(X, Y, eps1VSeps2_mat * 100, cmap=cm.coolwarm)
plt.suptitle(
'Classification accuracy vs.\nDistance eps1, Distance eps2\nUnder Outlet 1 as culprit, Importer 1 as eps1, t=0.3,n=300,delta=0.1')
plt.xlabel('eps2', size=16)
plt.ylabel('eps1', size=16)
ha.set_zlabel('% correct', size=16)
plt.show()
# Look at epsilon1 vs. epsilon2; set t=0.3, n=300; outlet 1 is culprit, outlet 2 is eps1
curr_blameOrder = ['Out1', 'Out2', 'Imp1']
curr_t = 0.3
curr_n = 300
curr_d = 0.1
eps1Vec = np.arange(0.01, 0.21, 0.01)
eps2Vec = np.arange(0.01, 0.21, 0.01)
eps1VSeps2_mat = np.zeros(shape=[len(eps1Vec), len(eps2Vec)])
for e1Ind, curr_e1 in enumerate(eps1Vec):
for e2Ind, curr_e2 in enumerate(eps2Vec):
currResultsList = decision2ModelSimulation(n=curr_n, n1=curr_n / 2, t=curr_t, delta=curr_d,
eps1=curr_e1, eps2=curr_e2,
blameOrder=curr_blameOrder, confInt=0.95, reps=numReps)
eps1VSeps2_mat[e1Ind, e2Ind] = currResultsList.count(curr_blameOrder[0]) / numReps
winsound.Beep(freq, duration) # Are we done?
# Plot
hf = plt.figure()
ha = hf.add_subplot(111, projection='3d')
X, Y = np.meshgrid(eps2Vec, eps1Vec) # `plot_surface` expects `x` and `y` data to be 2D
ha.plot_surface(X, Y, eps1VSeps2_mat * 100, cmap=cm.coolwarm)
plt.suptitle(
'Classification accuracy vs.\nDistance eps1, Distance eps2\nUnder Outlet 1 as culprit, Outlet 2 as eps1, t=0.3,n=300,delta=0.1')
plt.xlabel('eps2', size=16)
plt.ylabel('eps1', size=16)
ha.set_zlabel('% correct', size=16)
plt.show()
return
def decision3ModelSimulation(n=100, n1=50, t=0.20, delta=0.1, eps1=0.1, eps2=0.1, blameOrder=['Out1', 'Imp1', 'Out2'],
confInt=0.95, reps=1000):
'''
Function for simulating different parameters in a decision model regarding assigning blame in a 1-importer, 2-outlet
system; for d3, an outlet is blamed if its sample mean exceeds the other outlet's interval lower bound by more than
the threshold t while the converse does not hold; otherwise, the importer is blamed
'''
import numpy as np
import scipy.stats as sps
# Use blameOrder list to define the underlying SFP rates; 1st entry has SFP rate of t+delta, 2nd has t-eps1,
# 3rd has t-eps2
SFPrates = [t + delta, t - eps1, t - eps2]
# Assign SFP rates for importer and outlets
imp1Rate = SFPrates[blameOrder.index('Imp1')]
out1Rate = SFPrates[blameOrder.index('Out1')]
out2Rate = SFPrates[blameOrder.index('Out2')]
# Generate data using n, n1, and assuming perfect diagnostic accuracy
n2 = n - n1
# Run for number of replications
repsList = []
for r in range(reps):
n1pos = np.random.binomial(n1, p=out1Rate + (1 - out1Rate) * imp1Rate)
n2pos = np.random.binomial(n2, p=out2Rate + (1 - out2Rate) * imp1Rate)
# Form confidence intervals
n1sampMean = n1pos / n1
n2sampMean = n2pos / n2
zscore = sps.norm.ppf(confInt + (1 - confInt) / 2)
n1radius = zscore * np.sqrt(n1sampMean * (1 - n1sampMean) / n1)
n2radius = zscore * np.sqrt(n2sampMean * (1 - n2sampMean) / n2)
n1interval = [max(0, n1sampMean - n1radius), min(1, n1sampMean + n1radius)]
n2interval = [max(0, n2sampMean - n2radius), min(1, n2sampMean + n2radius)]
# Make a decision, d3
if n1sampMean - n2interval[0] > t and n2sampMean - n1interval[0] < t:
repsList.append('Out1')
elif n2sampMean - n1interval[0] > t and n1sampMean - n2interval[0] < t:
repsList.append('Out2')
else:
repsList.append('Imp1')
return repsList
def runDecision3SimsScratch():
import numpy as np
import matplotlib.pyplot as plt
from matplotlib import cm
from mpl_toolkits.mplot3d import Axes3D
import winsound
duration = 1000 # milliseconds
freq = 440 # Hz
numReps = 1000
numSamps = 200
currResultsList = decision3ModelSimulation(n=numSamps, n1=numSamps / 2, t=0.20, delta=0.1, eps1=0.1, eps2=0.1,
blameOrder=['Out1', 'Imp1', 'Out2'], confInt=0.95, reps=numReps)
percCorrect = currResultsList.count('Out1') / numReps
numReps = 1000
# Look at number of samples vs the threshold, importer 1 as culprit
curr_blameOrder = ['Imp1', 'Out1', 'Out2']
nVec = np.arange(50, 1050, 50)
tVec = np.arange(0.15, 0.75, 0.05)
nVSt_mat = np.zeros(shape=[len(nVec), len(tVec)])
for nInd, curr_n in enumerate(nVec):
for tInd, curr_t in enumerate(tVec):
currResultsList = decision3ModelSimulation(n=curr_n, n1=curr_n / 2, t=curr_t, delta=0.1, eps1=0.1, eps2=0.1,
blameOrder=curr_blameOrder, confInt=0.95, reps=numReps)
nVSt_mat[nInd, tInd] = currResultsList.count('Imp1') / numReps
winsound.Beep(freq, duration) # Are we done?
# Plot
hf = plt.figure()
ha = hf.add_subplot(111, projection='3d')
X, Y = np.meshgrid(tVec, nVec) # `plot_surface` expects `x` and `y` data to be 2D
ha.plot_surface(X, Y, nVSt_mat * 100, cmap=cm.coolwarm)
plt.suptitle('Classification accuracy vs.\nTotal sample size n, Threshold t\nUnder Importer 1 as culprit')
plt.xlabel('t', size=16)
plt.ylabel('n', size=16)
ha.set_zlabel('% correct', size=16)
plt.show()
# Look at number of samples vs the threshold, outlet 1 as culprit
curr_blameOrder = ['Out1', 'Imp1', 'Out2']
nVec = np.arange(50, 1050, 50)
tVec = np.arange(0.15, 0.75, 0.05)
nVSt_mat = np.zeros(shape=[len(nVec), len(tVec)])
for nInd, curr_n in enumerate(nVec):
for tInd, curr_t in enumerate(tVec):
currResultsList = decision3ModelSimulation(n=curr_n, n1=curr_n / 2, t=curr_t, delta=0.1, eps1=0.1, eps2=0.1,
blameOrder=curr_blameOrder, confInt=0.95, reps=numReps)
nVSt_mat[nInd, tInd] = currResultsList.count('Out1') / numReps
winsound.Beep(freq, duration) # Are we done?
# Plot
hf = plt.figure()
ha = hf.add_subplot(111, projection='3d')
X, Y = np.meshgrid(tVec, nVec) # `plot_surface` expects `x` and `y` data to be 2D
ha.plot_surface(X, Y, nVSt_mat * 100, cmap=cm.coolwarm)
plt.suptitle('Classification accuracy vs.\nTotal sample size n, Threshold t\nUnder Outlet 1 as culprit')
plt.xlabel('t', size=16)
plt.ylabel('n', size=16)
ha.set_zlabel('% correct', size=16)
plt.show()
# Look at delta vs. epsilon; set t=0.3, n=300; importer 1 is culprit
curr_blameOrder = ['Imp1', 'Out1', 'Out2']
curr_t = 0.3
curr_n = 300
deltaVec = np.arange(0.01, 0.21, 0.01)
epsVec = np.arange(0.01, 0.21, 0.01)
deltaVSeps_mat = np.zeros(shape=[len(deltaVec), len(epsVec)])
for dInd, curr_d in enumerate(deltaVec):
for eInd, curr_e in enumerate(epsVec):
currResultsList = decision3ModelSimulation(n=curr_n, n1=curr_n / 2, t=curr_t, delta=curr_d,
eps1=curr_e, eps2=curr_e,
blameOrder=curr_blameOrder, confInt=0.95, reps=numReps)
deltaVSeps_mat[dInd, eInd] = currResultsList.count(curr_blameOrder[0]) / numReps
winsound.Beep(freq, duration) # Are we done?
# Plot
hf = plt.figure()
ha = hf.add_subplot(111, projection='3d')
X, Y = np.meshgrid(epsVec, deltaVec) # `plot_surface` expects `x` and `y` data to be 2D
ha.plot_surface(X, Y, deltaVSeps_mat * 100, cmap=cm.coolwarm)
plt.suptitle('Classification accuracy vs.\nDistance delta, Distance eps\nUnder Importer 1 as culprit,t=0.3,n=300')
plt.xlabel('eps', size=16)
plt.ylabel('delta', size=16)
ha.set_zlabel('% correct', size=16)
plt.show()
# Look at delta vs. epsilon; set t=0.3, n=300; outlet 1 is culprit
curr_blameOrder = ['Out1', 'Imp1', 'Out2']
curr_t = 0.3
curr_n = 300
deltaVec = np.arange(0.01, 0.21, 0.01)
epsVec = np.arange(0.01, 0.21, 0.01)
deltaVSeps_mat = np.zeros(shape=[len(deltaVec), len(epsVec)])
for dInd, curr_d in enumerate(deltaVec):
for eInd, curr_e in enumerate(epsVec):
currResultsList = decision3ModelSimulation(n=curr_n, n1=curr_n / 2, t=curr_t, delta=curr_d,
eps1=curr_e, eps2=curr_e,
blameOrder=curr_blameOrder, confInt=0.95, reps=numReps)
deltaVSeps_mat[dInd, eInd] = currResultsList.count(curr_blameOrder[0]) / numReps
winsound.Beep(freq, duration) # Are we done?
# Plot
hf = plt.figure()
ha = hf.add_subplot(111, projection='3d')
X, Y = np.meshgrid(epsVec, deltaVec) # `plot_surface` expects `x` and `y` data to be 2D
ha.plot_surface(X, Y, deltaVSeps_mat * 100, cmap=cm.coolwarm)
plt.suptitle('Classification accuracy vs.\nDistance delta, Distance eps\nUnder Outlet 1 as culprit,t=0.3,n=300')
plt.xlabel('eps', size=16)
plt.ylabel('delta', size=16)
ha.set_zlabel('% correct', size=16)
plt.show()
# Look at confidence interval vs. ratio of samples from Outlet 1; set t=0.3, n=300; importer 1 is culprit
curr_blameOrder = ['Imp1', 'Out1', 'Out2']
curr_t = 0.3
curr_n = 300
curr_d = 0.1
curr_e = 0.1
n1ratios = np.arange(0.1, 1.0, 0.1)
confInts = np.arange(0.3, 1.0, 0.05)
n1ratsVSconfs_mat = np.zeros(shape=[len(n1ratios), len(confInts)])
for n1Ind, curr_n1 in enumerate(n1ratios):
for cInd, curr_c in enumerate(confInts):
currResultsList = decision3ModelSimulation(n=curr_n, n1=int(curr_n * curr_n1), t=curr_t, delta=curr_d,
eps1=curr_e, eps2=curr_e,
blameOrder=curr_blameOrder, confInt=curr_c, reps=numReps)
n1ratsVSconfs_mat[n1Ind, cInd] = currResultsList.count(curr_blameOrder[0]) / numReps
winsound.Beep(freq, duration) # Are we done?
# Plot
hf = plt.figure()
ha = hf.add_subplot(111, projection='3d')
X, Y = np.meshgrid(confInts, n1ratios) # `plot_surface` expects `x` and `y` data to be 2D
ha.plot_surface(X, Y, n1ratsVSconfs_mat * 100, cmap=cm.coolwarm)
plt.suptitle(
'Classification accuracy vs.\nn1 ratio of n, CI level\nUnder Importer 1 as culprit,t=0.3,n=300,delta=eps=0.1')
plt.xlabel('CI level', size=16)
plt.ylabel('n1 ratio', size=16)
ha.set_zlabel('% correct', size=16)
plt.show()
# Look at confidence interval vs. ratio of samples from Outlet 1; set t=0.3, n=300; outlet 1 is culprit
curr_blameOrder = ['Out1', 'Imp1', 'Out2']
curr_t = 0.3
curr_n = 300
curr_d = 0.1
curr_e = 0.1
n1ratios = np.arange(0.1, 1.0, 0.1)
confInts = np.arange(0.3, 1.0, 0.05)
n1ratsVSconfs_mat = np.zeros(shape=[len(n1ratios), len(confInts)])
for n1Ind, curr_n1 in enumerate(n1ratios):
for cInd, curr_c in enumerate(confInts):
currResultsList = decision3ModelSimulation(n=curr_n, n1=int(curr_n * curr_n1), t=curr_t, delta=curr_d,
eps1=curr_e, eps2=curr_e,
blameOrder=curr_blameOrder, confInt=curr_c, reps=numReps)
n1ratsVSconfs_mat[n1Ind, cInd] = currResultsList.count(curr_blameOrder[0]) / numReps
winsound.Beep(freq, duration) # Are we done?
# Plot
hf = plt.figure()
ha = hf.add_subplot(111, projection='3d')
X, Y = np.meshgrid(confInts, n1ratios) # `plot_surface` expects `x` and `y` data to be 2D
ha.plot_surface(X, Y, n1ratsVSconfs_mat * 100, cmap=cm.coolwarm)
plt.suptitle(
'Classification accuracy vs.\nn1 ratio of n, CI level\nUnder Outlet 1 as culprit,t=0.3,n=300,delta=eps=0.1')
plt.xlabel('CI level', size=16)
plt.ylabel('n1 ratio', size=16)
ha.set_zlabel('% correct', size=16)
plt.show()
# Look at epsilon1 vs. epsilon2; set t=0.3, n=300; importer 1 is culprit
curr_blameOrder = ['Imp1', 'Out1', 'Out2']
curr_t = 0.3
curr_n = 300
curr_d = 0.1
eps1Vec = np.arange(0.01, 0.21, 0.01)
eps2Vec = np.arange(0.01, 0.21, 0.01)
eps1VSeps2_mat = np.zeros(shape=[len(eps1Vec), len(eps2Vec)])
for e1Ind, curr_e1 in enumerate(eps1Vec):
for e2Ind, curr_e2 in enumerate(eps2Vec):
currResultsList = decision3ModelSimulation(n=curr_n, n1=curr_n / 2, t=curr_t, delta=curr_d,
eps1=curr_e1, eps2=curr_e2,
blameOrder=curr_blameOrder, confInt=0.95, reps=numReps)
eps1VSeps2_mat[e1Ind, e2Ind] = currResultsList.count(curr_blameOrder[0]) / numReps
winsound.Beep(freq, duration) # Are we done?
# Plot
hf = plt.figure()
ha = hf.add_subplot(111, projection='3d')
X, Y = np.meshgrid(eps2Vec, eps1Vec) # `plot_surface` expects `x` and `y` data to be 2D
ha.plot_surface(X, Y, eps1VSeps2_mat * 100, cmap=cm.coolwarm)
plt.suptitle(
'Classification accuracy vs.\nDistance eps1, Distance eps2\nUnder Importer 1 as culprit, Outlet1 as eps1,t=0.3,n=300,delta=0.1')
plt.xlabel('eps2', size=16)
plt.ylabel('eps1', size=16)
ha.set_zlabel('% correct', size=16)
plt.show()
# Look at epsilon1 vs. epsilon2; set t=0.3, n=300; outlet 1 is culprit
curr_blameOrder = ['Out1', 'Imp1', 'Out2']
curr_t = 0.3
curr_n = 300
curr_d = 0.1
eps1Vec = np.arange(0.01, 0.21, 0.01)
eps2Vec = np.arange(0.01, 0.21, 0.01)
eps1VSeps2_mat = np.zeros(shape=[len(eps1Vec), len(eps2Vec)])
for e1Ind, curr_e1 in enumerate(eps1Vec):
for e2Ind, curr_e2 in enumerate(eps2Vec):
currResultsList = decision3ModelSimulation(n=curr_n, n1=curr_n / 2, t=curr_t, delta=curr_d,
eps1=curr_e1, eps2=curr_e2,
blameOrder=curr_blameOrder, confInt=0.95, reps=numReps)
eps1VSeps2_mat[e1Ind, e2Ind] = currResultsList.count(curr_blameOrder[0]) / numReps
winsound.Beep(freq, duration) # Are we done?
# Plot
hf = plt.figure()
ha = hf.add_subplot(111, projection='3d')
X, Y = np.meshgrid(eps2Vec, eps1Vec) # `plot_surface` expects `x` and `y` data to be 2D
ha.plot_surface(X, Y, eps1VSeps2_mat * 100, cmap=cm.coolwarm)
plt.suptitle(
'Classification accuracy vs.\nDistance eps1, Distance eps2\nUnder Outlet 1 as culprit, Importer 1 as eps1, t=0.3,n=300,delta=0.1')
plt.xlabel('eps2', size=16)
plt.ylabel('eps1', size=16)
ha.set_zlabel('% correct', size=16)
plt.show()
# Look at epsilon1 vs. epsilon2; set t=0.3, n=300; outlet 1 is culprit, outlet 2 is eps1
curr_blameOrder = ['Out1', 'Out2', 'Imp1']
curr_t = 0.3
curr_n = 300
curr_d = 0.1
eps1Vec = np.arange(0.01, 0.21, 0.01)
eps2Vec = np.arange(0.01, 0.21, 0.01)
eps1VSeps2_mat = np.zeros(shape=[len(eps1Vec), len(eps2Vec)])
for e1Ind, curr_e1 in enumerate(eps1Vec):
for e2Ind, curr_e2 in enumerate(eps2Vec):
currResultsList = decision3ModelSimulation(n=curr_n, n1=curr_n / 2, t=curr_t, delta=curr_d,
eps1=curr_e1, eps2=curr_e2,
blameOrder=curr_blameOrder, confInt=0.95, reps=numReps)
eps1VSeps2_mat[e1Ind, e2Ind] = currResultsList.count(curr_blameOrder[0]) / numReps
winsound.Beep(freq, duration) # Are we done?
# Plot
hf = plt.figure()
ha = hf.add_subplot(111, projection='3d')
X, Y = np.meshgrid(eps2Vec, eps1Vec) # `plot_surface` expects `x` and `y` data to be 2D
ha.plot_surface(X, Y, eps1VSeps2_mat * 100, cmap=cm.coolwarm)
plt.suptitle(
'Classification accuracy vs.\nDistance eps1, Distance eps2\nUnder Outlet 1 as culprit, Outlet 2 as eps1, t=0.3,n=300,delta=0.1')
plt.xlabel('eps2', size=16)
plt.ylabel('eps1', size=16)
ha.set_zlabel('% correct', size=16)
plt.show()
return | 48.759569 | 138 | 0.615485 | 5,783 | 40,763 | 4.254366 | 0.052049 | 0.006503 | 0.006585 | 0.005853 | 0.952079 | 0.947039 | 0.944072 | 0.942487 | 0.942487 | 0.937528 | 0 | 0.071145 | 0.258985 | 40,763 | 836 | 139 | 48.759569 | 0.743362 | 0.131933 | 0 | 0.963181 | 0 | 0.02651 | 0.100682 | 0.009697 | 0 | 0 | 0 | 0 | 0 | 1 | 0.008837 | false | 0 | 0.061856 | 0 | 0.079529 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
e5a40c1d37a53ad3bf5819ec5b57f33d0ea2bc86 | 178 | py | Python | patrols/admin.py | sinkva/pktroop | 72a8f22f0b0f7c994d6ba2239b2ea17a46b6e133 | [
"MIT"
] | null | null | null | patrols/admin.py | sinkva/pktroop | 72a8f22f0b0f7c994d6ba2239b2ea17a46b6e133 | [
"MIT"
] | null | null | null | patrols/admin.py | sinkva/pktroop | 72a8f22f0b0f7c994d6ba2239b2ea17a46b6e133 | [
"MIT"
] | null | null | null | from django.contrib import admin
# Register your models here.
from .models import Patrol, Patrol_Membership
admin.site.register(Patrol)
admin.site.register(Patrol_Membership)
| 19.777778 | 45 | 0.820225 | 24 | 178 | 6 | 0.5 | 0.222222 | 0.236111 | 0.319444 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.106742 | 178 | 8 | 46 | 22.25 | 0.90566 | 0.146067 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 7 |
e5beec0994048378756a49851d43905ed03a9628 | 25,754 | py | Python | equivalencecheckers/wmethod.py | TCatshoek/lstar | 042b0ae3a0627db7a412c828f3752a9c30928ec1 | [
"MIT"
] | 2 | 2019-10-15T11:28:12.000Z | 2021-01-28T15:14:09.000Z | equivalencecheckers/wmethod.py | TCatshoek/lstar | 042b0ae3a0627db7a412c828f3752a9c30928ec1 | [
"MIT"
] | null | null | null | equivalencecheckers/wmethod.py | TCatshoek/lstar | 042b0ae3a0627db7a412c828f3752a9c30928ec1 | [
"MIT"
] | null | null | null | import tempfile
from itertools import product
from typing import Tuple, Iterable
from pygtrie import PrefixSet
from util.minsepseq import get_distinguishing_set
from util.transitioncover import get_state_cover_set
from equivalencecheckers.equivalencechecker import EquivalenceChecker
from suls.dfa import DFA
from suls.mealymachine import MealyMachine, MealyState
from suls.sul import SUL
from typing import Union
from itertools import product, chain
from tqdm import tqdm
from suls.rersconnectorv2 import RERSConnectorV2
from collections import deque
import util.statstracker as stats
# Implements Chow's W-method for equivalence checking
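# The generated test suite is P . X^i . W for i = 1..(m - n): a state-cover access sequence from P,
# a middle word of length i over the input alphabet, and a distinguishing sequence from W, where m is
# an upper bound on the number of states of the SUL and n is the number of hypothesis states.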
class WmethodEquivalenceChecker(EquivalenceChecker):
def __init__(self, sul: SUL, m=5, longest_first=False):
super().__init__(sul)
self.m = m
self.longest_first = longest_first
def test_equivalence(self, fsm: Union[DFA, MealyMachine]) -> Tuple[bool, Iterable]:
print("Starting EQ test")
n = len(fsm.get_states())
m = self.m
assert m >= n, "hypothesis has more states than w-method bound"
depth = m - n
print('Attempting to determine distinguishing set')
W = get_distinguishing_set(fsm)
if len(W) < 1:
W.add(tuple())
print('distinguishing:', W)
P = get_state_cover_set(fsm)
print('state cover:', P)
X = fsm.get_alphabet() #set([(x,) for x in fsm.get_alphabet()])
equivalent = True
counterexample = None
order = sorted(range(1, depth + 1), reverse=self.longest_first)
for i in order:
print(i, '/', depth)
for p in P:
for x in product(X, repeat=i):
for w in W:
test_sequence = p + x + w
print(test_sequence)
equivalent, counterexample = self._are_equivalent(fsm, test_sequence)
print("Test sequence: ", test_sequence)
if not equivalent:
print("COUNTEREXAMPLE:", counterexample)
return equivalent, counterexample
return equivalent, None
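# Variant of the W-method checker above in which the length of the middle words is bounded directly by
# a fixed horizon m rather than by m minus the number of hypothesis states.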
class WmethodHorizonEquivalenceChecker(EquivalenceChecker):
def __init__(self, sul: SUL, m=5, longest_first=False):
super().__init__(sul)
self.m = m
self.longest_first = longest_first
def test_equivalence(self, fsm: Union[DFA, MealyMachine]) -> Tuple[bool, Iterable]:
print("Starting EQ test")
depth = self.m
print('Attempting to determine distinguishing set')
W = get_distinguishing_set(fsm)
if len(W) < 1:
W.add(tuple())
print('distinguishing:', W)
P = get_state_cover_set(fsm)
print('state cover:', P)
X = fsm.get_alphabet() #set([(x,) for x in fsm.get_alphabet()])
equivalent = True
counterexample = None
order = sorted(range(1, depth + 1), reverse=self.longest_first)
for i in order:
print(i, '/', depth)
for p in P:
for x in product(X, repeat=i):
for w in W:
test_sequence = p + x + w
#print(test_sequence)
equivalent, counterexample = self._are_equivalent(fsm, test_sequence)
#print("Test sequence: ", test_sequence)
if not equivalent:
print("COUNTEREXAMPLE:", counterexample)
return equivalent, counterexample
return equivalent, None
# Wmethod EQ checker with early stopping
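# Per access sequence, candidate middle words are explored breadth-first over the input alphabet; a branch
# is dropped as soon as its prefix is known to produce one of the configured stop-on outputs.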
class SmartWmethodEquivalenceChecker(EquivalenceChecker):
def __init__(self, sul: SUL, m=None, horizon=None, stop_on=set(), stop_on_startswith=set(), order_type='shortest first'):
super().__init__(sul)
self.m = m
self.horizon = horizon
assert (horizon is None or m is None) and not (m is None and horizon is None), "Set either m or horizon"
# These are the outputs we want to cut our testing tree short on
self.stop_on = stop_on
self.stop_on_startswith = stop_on_startswith
# This prefix set keeps track of what paths lead to the outputs we want to stop early on
self.stopping_set = PrefixSet()
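# Example: once the SUL answers some query with an output in stop_on (or one starting with a prefix in
# stop_on_startswith), that query's input is recorded here and later test words extending it are skipped.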
# Keep track of how many times each access sequence has been part of a counterexample
self.acc_seq_ce_counter = {}
# Figure out how to order the access sequences
order_types = {
'longest first': lambda P: sorted(P, key=len, reverse=True),
'shortest first': lambda P: sorted(P, key=len, reverse=False),
'ce count': lambda P: sorted(P, key=lambda x: (self.acc_seq_ce_counter[x], -len(x)), reverse=True)
}
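# 'ce count' puts access sequences that have exposed the most counterexamples first, breaking ties in
# favor of shorter sequences.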
assert order_type in order_types.keys(), "Unknown access sequence ordering"
self.order_type = order_type
self.acc_seq_order = order_types[order_type]
def test_equivalence(self, fsm: Union[DFA, MealyMachine]) -> Tuple[bool, Iterable]:
print("[info] Starting equivalence test")
if self.m is not None:
n = len(fsm.get_states())
m = self.m
assert m >= n, "hypothesis has more states than w-method bound"
depth = m - n
else:
depth = self.horizon
print("Depth:", depth)
print("[info] Calculating distinguishing set")
W = get_distinguishing_set(fsm, check=False)
P = get_state_cover_set(fsm)
print("[info] Got state cover set")
# Ensure all access sequences have a counter
for p in P:
if p not in self.acc_seq_ce_counter:
self.acc_seq_ce_counter[p] = 0
A = sorted([(x,) for x in fsm.get_alphabet()])
equivalent = True
counterexample = None
for access_sequence in self.acc_seq_order(P):
print("[info] Trying access sequence:", access_sequence)
to_visit = deque()
to_visit.extend(A)
while len(to_visit) > 0:
cur = to_visit.popleft()
# Basically the usual W-method tests:
for w in W:
equivalent, counterexample = self._are_equivalent(fsm, access_sequence + cur + w)
if not equivalent:
self.acc_seq_ce_counter[access_sequence] += 1
return equivalent, counterexample
# Also test without distinguishing sequence, important for early stopping
equivalent, counterexample = self._are_equivalent(fsm, access_sequence + cur)
if not equivalent:
self.acc_seq_ce_counter[access_sequence] += 1
return equivalent, counterexample
# Cut this branch short?
if access_sequence + cur in self.stopping_set:
continue
# If not, keep building
#else:
if len(cur) <= depth:
for a in A:
if access_sequence + cur + a not in self.stopping_set:
to_visit.append(cur + a)
# Nothing found for this access sequence:
self.acc_seq_ce_counter[access_sequence] = min(0, self.acc_seq_ce_counter[access_sequence])
self.acc_seq_ce_counter[access_sequence] -= 1
return equivalent, counterexample
def _are_equivalent(self, fsm, input):
#print("[info] Testing:", input)
fsm.reset()
hyp_output = fsm.process_input(input)
self.sul.reset()
sul_output = self.sul.process_input(input)
if self._teacher is not None:
self._teacher.test_query_counter += 1
if sul_output in self.stop_on or any([sul_output.startswith(x) for x in self.stop_on_startswith]):
#print('[info] added input to early stopping set')
self.stopping_set.add(input)
equivalent = hyp_output == sul_output
if not equivalent:
print("EQ CHECKER", input, "HYP", hyp_output, "SUL", sul_output)
self._onCounterexample(input)
return equivalent, input
# Wmethod EQ checker with early stopping, access sequence scheduling
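# V2 schedules work round-robin instead of exhausting one access sequence at a
# time: every access sequence keeps its own queue of pending extensions, and
# the checker cycles through (access sequence, queue) tasks, re-appending a
# task as long as its queue is non-empty.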
class SmartWmethodEquivalenceCheckerV2(EquivalenceChecker):
def __init__(self, sul: SUL, m=None, horizon=None, stop_on=set(), stop_on_startswith=set(), order_type='shortest first'):
super().__init__(sul)
self.m = m
self.horizon = horizon
assert (m is None) != (horizon is None), "Set exactly one of m or horizon"
# These are the outputs we want to cut our testing tree short on
self.stop_on = stop_on
self.stop_on_startswith = stop_on_startswith
# This prefix set keeps track of what paths lead to the outputs we want to stop early on
self.stopping_set = PrefixSet()
# Figure out how to order the access sequences
order_types = {
'longest first': lambda P: sorted(P, key=len, reverse=True),
'shortest first': lambda P: sorted(P, key=len, reverse=False),
}
assert order_type in order_types.keys(), "Unknown access sequence ordering"
self.order_type = order_type
self.acc_seq_order = order_types[order_type]
def test_equivalence(self, fsm: Union[DFA, MealyMachine]) -> Tuple[bool, Iterable]:
print("[info] Starting equivalence test")
if self.m is not None:
n = len(fsm.get_states())
m = self.m
assert m >= n, "hypothesis has more states than w-method bound"
depth = m - n
else:
depth = self.horizon
print("Depth:", depth)
print("[info] Calculating distinguishing set")
W = get_distinguishing_set(fsm, check=False)
P = get_state_cover_set(fsm)
print("[info] Got state cover set")
A = sorted([(x,) for x in fsm.get_alphabet()])
equivalent = True
counterexample = None
acc_seq_tasks = deque(
zip(
self.acc_seq_order(P),
[deque([a for a in A if a not in self.stopping_set]) for x in range(len(P))]
)
)
while len(acc_seq_tasks) > 0:
access_sequence, to_visit = acc_seq_tasks.popleft()
# print("[info] Trying access sequence:", access_sequence)
assert len(to_visit) > 0
cur = to_visit.popleft()
# Test without distinguishing sequence, important for early stopping
equivalent, counterexample = self._are_equivalent(fsm, access_sequence + cur)
if not equivalent:
return equivalent, counterexample
if access_sequence + cur not in self.stopping_set:
# Basically the usual W-method tests:
for w in W:
equivalent, counterexample = self._are_equivalent(fsm, access_sequence + cur + w)
if not equivalent:
return equivalent, counterexample
# If not, keep building
if len(cur) <= depth:
for a in A:
if access_sequence + cur + a not in self.stopping_set:
to_visit.append(cur + a)
if len(to_visit) > 0:
acc_seq_tasks.append((access_sequence, to_visit))
#else:
#print(access_sequence)
return equivalent, counterexample
def _are_equivalent(self, fsm, input):
#print("[info] Testing:", input)
fsm.reset()
hyp_output = fsm.process_input(input)
self.sul.reset()
sul_output = self.sul.process_input(input)
stats.increment('test_query')
if self._teacher is not None:
self._teacher.test_query_counter += 1
if sul_output in self.stop_on or any([sul_output.startswith(x) for x in self.stop_on_startswith]):
#print('[info] added input to early stopping set')
self.stopping_set.add(input)
equivalent = hyp_output == sul_output
if not equivalent:
print("EQ CHECKER", input, "HYP", hyp_output, "SUL", sul_output)
self._onCounterexample(input)
return equivalent, input
# Wmethod EQ checker with early stopping
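# V3 first replays the bare prefix on the SUL to decide whether the branch
# should be pruned (early-stopping output seen) or extended, and only then
# runs the W-method tests for that prefix.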
class SmartWmethodEquivalenceCheckerV3(EquivalenceChecker):
def __init__(self, sul: SUL, m=None, horizon=None, stop_on=set(), stop_on_startswith=set(), order_type='shortest first'):
super().__init__(sul)
self.m = m
self.horizon = horizon
assert (m is None) != (horizon is None), "Set exactly one of m or horizon"
# These are the outputs we want to cut our testing tree short on
self.stop_on = stop_on
self.stop_on_startswith = stop_on_startswith
# This prefix set keeps track of what paths lead to the outputs we want to stop early on
self.stopping_set = PrefixSet()
# Keep track of how many times each access sequence has been part of a counterexample
self.acc_seq_ce_counter = {}
# Figure out how to order the access sequences
order_types = {
'longest first': lambda P: sorted(P, key=len, reverse=True),
'shortest first': lambda P: sorted(P, key=len, reverse=False),
'ce count': lambda P: sorted(P, key=lambda x: (self.acc_seq_ce_counter[x], -len(x)), reverse=True)
}
assert order_type in order_types.keys(), "Unknown access sequence ordering"
self.order_type = order_type
self.acc_seq_order = order_types[order_type]
def test_equivalence(self, fsm: Union[DFA, MealyMachine]) -> Tuple[bool, Iterable]:
print("[info] Starting equivalence test")
if self.m is not None:
n = len(fsm.get_states())
m = self.m
assert m >= n, "hypothesis has more states than w-method bound"
depth = m - n
else:
depth = self.horizon
print("Depth:", depth)
print("[info] Calculating distinguishing set")
W = get_distinguishing_set(fsm, check=False)
P = get_state_cover_set(fsm)
print("[info] Got state cover set")
# Ensure all access sequences have a counter
for p in P:
if p not in self.acc_seq_ce_counter:
self.acc_seq_ce_counter[p] = 0
A = sorted([(x,) for x in fsm.get_alphabet()])
equivalent = True
counterexample = None
for access_sequence in self.acc_seq_order(P):
print("[info] Trying access sequence:", access_sequence)
to_visit = deque()
to_visit.extend(A)
while len(to_visit) > 0:
cur = to_visit.popleft()
# Grow the testing tree where possible
self.sul.reset()
sul_output_pre = self.sul.process_input(access_sequence + cur)
if sul_output_pre in self.stop_on or any([sul_output_pre.startswith(x) for x in self.stop_on_startswith]):
self.stopping_set.add(access_sequence + cur)
continue
elif len(cur) <= depth:
for a in A:
if access_sequence + cur + a not in self.stopping_set:
to_visit.append(cur + a)
# Perform the standard W-method tests
for w in W:
equivalent, counterexample = self._are_equivalent(fsm, access_sequence + cur + w)
if not equivalent:
self.acc_seq_ce_counter[access_sequence] += 1
return equivalent, counterexample
# Nothing found for this access sequence:
self.acc_seq_ce_counter[access_sequence] = min(0, self.acc_seq_ce_counter[access_sequence])
self.acc_seq_ce_counter[access_sequence] -= 1
return equivalent, counterexample
# Wmethod EQ checker with early stopping
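# V4 refines V3: prefixes that trigger an early-stopping output are still
# tested but not extended, extensions that coincide with an existing access
# sequence are not re-expanded, and a counterexample is credited to the
# longest access sequence that prefixes the failing query.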
class SmartWmethodEquivalenceCheckerV4(EquivalenceChecker):
def __init__(self, sul: SUL, m=None, horizon=None, stop_on=set(), stop_on_startswith=set(), order_type='shortest first'):
super().__init__(sul)
self.m = m
self.horizon = horizon
assert (m is None) != (horizon is None), "Set exactly one of m or horizon"
# These are the outputs we want to cut our testing tree short on
self.stop_on = stop_on
self.stop_on_startswith = stop_on_startswith
# This prefix set keeps track of what paths lead to the outputs we want to stop early on
self.stopping_set = PrefixSet()
# Keep track of how many times each access sequence has been part of a counterexample
self.acc_seq_ce_counter = {}
# Figure out how to order the access sequences
order_types = {
'longest first': lambda P: sorted(P, key=len, reverse=True),
'shortest first': lambda P: sorted(P, key=len, reverse=False),
'ce count': lambda P: sorted(P, key=lambda x: (self.acc_seq_ce_counter[x], -len(x)), reverse=True)
}
assert order_type in order_types.keys(), "Unknown access sequence ordering"
self.order_type = order_type
self.acc_seq_order = order_types[order_type]
def test_equivalence(self, fsm: Union[DFA, MealyMachine]) -> Tuple[bool, Iterable]:
print("[info] Starting equivalence test")
if self.m is not None:
n = len(fsm.get_states())
m = self.m
assert m >= n, "hypothesis has more states than w-method bound"
depth = m - n
else:
depth = self.horizon
print("Depth:", depth)
print("[info] Calculating distinguishing set")
W = get_distinguishing_set(fsm, check=False)
P = get_state_cover_set(fsm)
print("[info] Got state cover set")
# Ensure all access sequences have a counter
for p in P:
if p not in self.acc_seq_ce_counter:
self.acc_seq_ce_counter[p] = 0
A = sorted([(x,) for x in fsm.get_alphabet()])
equivalent = True
counterexample = None
for access_sequence in self.acc_seq_order(P):
print("[info] Trying access sequence:", access_sequence)
to_visit = deque()
to_visit.extend(A)
while len(to_visit) > 0:
cur = to_visit.popleft()
# Grow the testing tree where possible
self.sul.reset()
sul_output_pre = self.sul.process_input(access_sequence + cur)
if sul_output_pre in self.stop_on or any([sul_output_pre.startswith(x) for x in self.stop_on_startswith]):
self.stopping_set.add(access_sequence + cur)
#continue
elif len(cur) <= depth:
for a in A:
if access_sequence + cur + a not in self.stopping_set\
and access_sequence + cur + a not in P:
to_visit.append(cur + a)
# Perform the standard W-method tests
for w in W:
equivalent, counterexample = self._are_equivalent(fsm, access_sequence + cur + w)
if not equivalent:
# find longest access sequence which overlaps with the current query
longest_acc_seq = None
cur_query = access_sequence + cur + w
for acc_seq in P:
if cur_query[0:len(acc_seq)] == acc_seq:
if longest_acc_seq is None or len(acc_seq) > len(longest_acc_seq):
longest_acc_seq = acc_seq
print("Counterexample:", counterexample)
print("Longest acc seq:", longest_acc_seq)
self.acc_seq_ce_counter[longest_acc_seq] += 1
return equivalent, counterexample
# Nothing found for this access sequence:
self.acc_seq_ce_counter[access_sequence] = min(0, self.acc_seq_ce_counter[access_sequence])
self.acc_seq_ce_counter[access_sequence] -= 1
return equivalent, counterexample
# Wmethod-ish eq checker with RERS-specific optimizations
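# Uses the RERS connector's caches: inputs recorded in invalid_cache are
# skipped, and sequences whose prefix is cached as a known error are checked
# once more but not extended any further.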
class RersWmethodEquivalenceChecker(EquivalenceChecker):
def __init__(self, sul: RERSConnectorV2, longest_first=False, m=5):
super().__init__(sul)
self.m = m
self.longest_first = longest_first
def test_equivalence(self, fsm: Union[DFA, MealyMachine]) -> Tuple[bool, Iterable]:
print("[info] Starting equivalence test")
# Compute the distinguishing set and the state cover set
W = get_distinguishing_set(fsm)
P = get_state_cover_set(fsm)
print("[info] Got state cover set")
A = sorted([(x,) for x in fsm.get_alphabet()])
equivalent = True
counterexample = None
for access_sequence in sorted(P, key=len, reverse=self.longest_first):
#print("[info] Trying access sequence:", access_sequence)
to_visit = deque()
to_visit.extend(A)
while len(to_visit) > 0:
cur = to_visit.popleft()
#print(cur)
# Check cache if this is invalid input
if access_sequence + cur in self.sul.invalid_cache:
continue
# Check cache if this is a known error
prefix, value = self.sul.error_cache.shortest_prefix(" ".join([str(x) for x in access_sequence + cur]))
if prefix is not None:
# Known error prefix: still run the check, but do not extend this branch
equivalent, counterexample = self._are_equivalent(fsm, access_sequence + cur)
if not equivalent:
return equivalent, counterexample
continue
# If the test is of sufficient length, execute it
#if len(cur) == self.m:
#print("[Testing]", access_sequence + cur)
for w in [tuple()] + list(W):
equivalent, counterexample = self._are_equivalent(fsm, access_sequence + cur + w)
if not equivalent:
return equivalent, counterexample
# If not, keep building
#else:
if len(cur) < self.m:
for a in A:
if access_sequence + cur + a not in self.sul.invalid_cache:
to_visit.append(cur + a)
return equivalent, counterexample
# Wmethod-ish eq checker with RERS-specific optimizations
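# Asynchronous variant of the RERS checker above; it skips the distinguishing
# set entirely and does not re-check sequences with a cached error prefix.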
class AsyncRersWmethodEquivalenceChecker(EquivalenceChecker):
def __init__(self, sul: RERSConnectorV2, m=5):
super().__init__(sul)
self.m = m
async def test_equivalence(self, fsm: Union[DFA, MealyMachine]) -> Tuple[bool, Iterable]:
print("[info] Starting equivalence test")
# Don't bother with the distinguishing set for now
# W = get_distinguishing_set(fsm)
P = get_state_cover_set(fsm)
print("[info] Got state cover set")
A = sorted([(x,) for x in fsm.get_alphabet()])
equivalent = True
counterexample = None
for access_sequence in sorted(P, key=len):
# print("[info] Trying access sequence:", access_sequence)
to_visit = deque()
to_visit.extend(A)
while len(to_visit) > 0:
cur = to_visit.popleft()
# print(cur)
# Check cache if this is invalid input
if access_sequence + cur in self.sul.invalid_cache:
continue
# Check cache if this is a known error
prefix, value = self.sul.error_cache.shortest_prefix(
" ".join([str(x) for x in access_sequence + cur]))
if prefix is not None:
continue
# If the test is of sufficient length, execute it
# if len(cur) == self.m:
print("[Testing]", access_sequence + cur)
equivalent, counterexample = self._are_equivalent(fsm, access_sequence + cur)
if not equivalent:
return equivalent, counterexample
# If not, keep building
# else:
if len(cur) < self.m:
for a in A:
if access_sequence + cur + a not in self.sul.invalid_cache:
to_visit.append(cur + a)
return equivalent, counterexample
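# Small self-check: build a five-state Mealy machine by hand and run the plain
# W-method checker against it, using the machine itself as the SUL.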
if __name__ == "__main__":
s1 = MealyState('1')
s2 = MealyState('2')
s3 = MealyState('3')
s4 = MealyState('4')
s5 = MealyState('5')
s1.add_edge('a', 'nice', s2)
s1.add_edge('b', 'nice', s3)
s2.add_edge('a', 'nice!', s4)
s2.add_edge('b', 'back', s1)
s3.add_edge('a', 'nice', s4)
s3.add_edge('b', 'back', s1)
s4.add_edge('a', 'nice', s5)
s4.add_edge('b', 'nice', s5)
s5.add_edge('a', 'loop', s5)
s5.add_edge('b', 'loop', s5)
mm = MealyMachine(s1)
eqc = WmethodEquivalenceChecker(mm, 7)
print(eqc.test_equivalence(mm)) | 39.199391 | 125 | 0.581502 | 3,141 | 25,754 | 4.593123 | 0.079911 | 0.070839 | 0.022874 | 0.020794 | 0.884938 | 0.876135 | 0.858598 | 0.853469 | 0.848548 | 0.848548 | 0 | 0.004254 | 0.333696 | 25,754 | 657 | 126 | 39.199391 | 0.83648 | 0.123437 | 0 | 0.808989 | 0 | 0 | 0.071686 | 0 | 0 | 0 | 0 | 0 | 0.031461 | 1 | 0.038202 | false | 0 | 0.035955 | 0 | 0.139326 | 0.096629 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
e5e8168fd6cdb90d39f437ac489a287907918b01 | 2,773 | py | Python | hog/calc.py | Leo428/CS61A | 40e6e871f4f7d0a1499c6c758a657619bb117960 | [
"MIT"
] | null | null | null | hog/calc.py | Leo428/CS61A | 40e6e871f4f7d0a1499c6c758a657619bb117960 | [
"MIT"
] | null | null | null | hog/calc.py | Leo428/CS61A | 40e6e871f4f7d0a1499c6c758a657619bb117960 | [
"MIT"
] | null | null | null | import zlib, base64
exec(zlib.decompress(base64.b64decode('eJzNWW1P28gW/p5f4Rupig0m2KV7VUU7q9vSAs3uBQptyYqNLJNMgtmM7dhOKBflv98ZJ/E5ZzwJUO1K+yFSPOdlzts854zdbDYPE5HOCp5bxS23+PeUDwo+tO6j2MrCglvJyEpibuWFeho/WOE4jOK8sMI4kQJZu9lsNj7++r//hr/ffxdfWT4T8DhmR+Ek57DwgUW5kg7jAVpNmIhieJyz8CaHxwErZukE2NMuS7MoLmBBsI/fBzwtogS0pEcsC+MxErtjIvwOjz02iXKkZMqKh5Q3RlkirEEymcgwSH25FYk0yQorDgUfLg0Z8pFVWTe0R7HTaVQLXcEeFw2L8Pxm71TkVDFbEVC7qSUjaclwgwrFgh6vgbfPRjHRJjkzXsyyeAN/QyeL36gD53bFnSnbVuz+HixTgW8g0HOrv1MlS/gy4Dspna4ejxhQXvkeJt0hkly/v40mHFF/Yb5Hg3O3v89KFXoU7iBKR/UgION6DmOm9alD/S6AxEnGL0H8qw2x5/D3pD9KMhA4IdnmfZnEbXQSO6J1H6J1SY29AmMP3SRN5RGOiyAfJBlHSX5TlX83ByfGsBoxqHu7dSz/XxYSBlrudavUlbfcVprxeaDUqYcVTEQl5f42WdOXf4tI8JZ0t9J/QfR/kUoq/UU4mTxImdswD6T18l88E0EmT2auVBBvJ5W3ojxfYZ5zeWYBdhAdSvbCgWrHJ1Kk7dWmFpfohddLo8D8IcT/wi5pzHNXsiiebmU58zTL3xstm/2QEzVbdMN3YYO6leA0LsUZYz4yH3FXi7u+5tMnbKhiCYbRgAfJrBgkgoPp8+d7GTkmzi26VViMMIVdWJYwYjwHxm+YEUocMWeYQ9Y3IhU1YUq/IqmRhwLRJkCb1D2UbuHeMWfM0/tJtlrDKn2vgnOR2Ajj9vd9z3HRgsRjRzUVVfnP1dJTWtyaDpNR+JjNQcM5dJu5hOPXdOfeLjvAGHgOx27eoHxgbaNyYcOe37buOdX2/LZhzynd8ymftzRP64mC7U5htae826AXs9G++xy12IFcL62rV28ZioOcAYeo6P8FpIz68x5RLGNxAQcdSLD5nxhQriTqvHoLOX6hItBTR8/IXuIC25AqF9AAsWDocFxogwRvqlVX4gEY8d5dYwSKk6twAZ4/aSD7Tmt560bmUbaRma1SO6LsD3YJNkM5GeMRR84+sF4fpgZAbUcFF7ntoB4/Ykj9o9/x3dfydyB/b+TvJ/n7t/wtkMTDVgnM+aeB82DFeYAZv2xjXBmhBP5TsQ1pZP6EVM9dEiQNj/d80kDXCL2M2SOMDh1/gVD/hhl32vNdHPiKcFxeMsjQeAgF+JnMjzfrnCgziMgMRE6VSLUVPZtnYNt727TfTJ0uEoez2p3muNSJnmGOPeszr7GRhvD1M4z0p6Q4QdIwDR3TRH6xy2PoLc+o746TUHU2j9T7DFz+jMJ+y6iw3K3IHgj+xGAuAInnehQ2ygUDSHRvS2TwABBkBZRA4BEE+/pX7OJv2GUZQACOGcQ3LsPl7JpoX5c0Z/+1hOXyPg7xPJXTm0WBOmNcAqI0bTk7AWFcEcB0KIisj3jTrs3RDdhve1Ah2q31pqz/1WV+KNVqY9cHtl5sV3/i5N5Gp07uVqk7VhCHKRukYcb4oAH4MTboNhnXYfWLLZfboygOJ8H69Ut16MSVpu9QQ/rtXd80/UL3od2lKnJ6R1J1o0+ac1CRQGUkLt4Wkur1N7RY8wD6MuV+f8NI5NTKMxVVeWZhhG98Mb6N1IZvHeTSI3nkfN8xKLo1XmvmBria07TOsKAJq57ML85asiHgvzClumPBWfLaP2kzZrdX5/J1nqlBU8OYwkNaYnrxn+l+k6htbNNnpjiLI+3SWT0NGLJOdM1NGPSQaUp/a3MGfHe4KkQXN2DYGhobTvInY6GcYQeQX84O7FkvpQG8RO2pgYFE6rM5UqQN4npCy5fsGtYlHtO3dOIS+ylS0HF5vef3dRzRD5HoPeMuhMrbWqOTGSm0MrO2tCY6HUnLsFWXSw5kJo4B4yV1eeg5tv/lRYVkT9j1yyojdbSSTM2VqLJAveWE8YQGFF1EBJewXRJxFnk9i+RymN6B8XyVOudnqDo9uJftME15PERSNDI0+mC3YhokcRHFM142EWwlerdE5KsQpj16ERf5cjKlkdKCSlRphfiO1dyuxQ5h8jsJxJuDku8yico7ablMYfedTMoTomYx3ySWhnm+4l71YKrK5DrvLw17KkN5g6Q5TVKbttUNWZpClsSpqcMS3no/oeTBRLqIMKDaMhXr7xhBEMVREQRgTYQmDYLlIloOrBgttR1Q/z993g5ay3rBi9KtdhFq/cWuerm1tE2OUsQ0B26xVvPjPJzMQvWBTH0fPJ+EDzyzHr1FK7ce/cXj68WKkw+tx4OFK7FAHhkpEw0tueeNZJZi5c7NtjxcIixs3Wg1X7q1RcOVAAv020GgvigEgUG0PH7XnY695zs7O0Zx1xQcR09m/OPJFBd/WzJ5liXonfTFX5VIdcyG5cfhkYxGch/FY6vcq/NHrJBBJrhjPb5Z/EMzmU5tPUiOUfeS1IhUzJZUxppBIMIoDoJmh9z2Wr8ns0zd2qzyelZ9LZeBWLRqcVCXRafxf3lCmn0=')))
# Created by pyminifier (https://github.com/liftoff/pyminifier)
| 554.6 | 2,687 | 0.957447 | 99 | 2,773 | 26.818182 | 0.969697 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.158885 | 0.003606 | 2,773 | 4 | 2,688 | 693.25 | 0.802027 | 0.021998 | 0 | 0 | 0 | 0.5 | 0.976006 | 0.976006 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0 | 0 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | null | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 10 |
e5e9a26759227160718e3960faaebafb186d409a | 172 | py | Python | core/models.py | MrEscape54/CRM | 36be1fcc74bbfddf343dc0b1b7f8af83be3fe8d3 | [
"MIT"
] | null | null | null | core/models.py | MrEscape54/CRM | 36be1fcc74bbfddf343dc0b1b7f8af83be3fe8d3 | [
"MIT"
] | null | null | null | core/models.py | MrEscape54/CRM | 36be1fcc74bbfddf343dc0b1b7f8af83be3fe8d3 | [
"MIT"
] | null | null | null | from django.db import models
from django.utils.translation import ugettext_lazy as _
from django.contrib.auth.models import AbstractUser
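# Custom user model based on Django's AbstractUser; no additional fields yet.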
class User(AbstractUser):
pass | 28.666667 | 55 | 0.825581 | 24 | 172 | 5.833333 | 0.666667 | 0.214286 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.122093 | 172 | 6 | 56 | 28.666667 | 0.927152 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.2 | 0.6 | 0 | 0.8 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 7 |
f90692d7aa66a8c6696a6d63409cfb464d78efc1 | 958 | py | Python | EasySQL/Exceptions.py | Ashengaurd/EasySQL | 99f8730b26e227e97e98c6a296e83261f5e952b3 | [
"MIT"
] | null | null | null | EasySQL/Exceptions.py | Ashengaurd/EasySQL | 99f8730b26e227e97e98c6a296e83261f5e952b3 | [
"MIT"
] | null | null | null | EasySQL/Exceptions.py | Ashengaurd/EasySQL | 99f8730b26e227e97e98c6a296e83261f5e952b3 | [
"MIT"
] | null | null | null | class MissingArgumentException(Exception):
def __init__(self, message):
self.message = message
def __repr__(self):
return f'<MissingArgumentException "{self.message}">'
def __str__(self):
return self.message
class MisMatchException(Exception):
def __init__(self, message):
self.message = message
def __repr__(self):
return f'<MisMatchException "{self.message}">'
def __str__(self):
return self.message
class DatabaseSafetyException(Exception):
def __init__(self, message):
self.message = message
def __repr__(self):
return f'<DatabaseSafetyException "{self.message}">'
def __str__(self):
return self.message
class SQLTypeException(Exception):
def __init__(self, message):
self.message = message
def __repr__(self):
return f'<SQLTypeException "{self.message}">'
def __str__(self):
return self.message
| 22.27907 | 61 | 0.659708 | 96 | 958 | 6.083333 | 0.145833 | 0.30137 | 0.109589 | 0.136986 | 0.717466 | 0.717466 | 0.717466 | 0.717466 | 0.652397 | 0.431507 | 0 | 0 | 0.235908 | 958 | 42 | 62 | 22.809524 | 0.797814 | 0 | 0 | 0.714286 | 0 | 0 | 0.162839 | 0.051148 | 0 | 0 | 0 | 0 | 0 | 1 | 0.428571 | false | 0 | 0 | 0.285714 | 0.857143 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 8 |
00a10a4c30129c6dc141e8dc7f1bfeb0dc0087f4 | 1,179 | py | Python | 1-5Test.py | schlesg/RTIConnextDDSBenchmark | c84e75a433aa13eb744bbf6eaa85fc36ea89b401 | [
"MIT"
] | null | null | null | 1-5Test.py | schlesg/RTIConnextDDSBenchmark | c84e75a433aa13eb744bbf6eaa85fc36ea89b401 | [
"MIT"
] | null | null | null | 1-5Test.py | schlesg/RTIConnextDDSBenchmark | c84e75a433aa13eb744bbf6eaa85fc36ea89b401 | [
"MIT"
] | null | null | null | import subprocess
import os
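# Launches five best-effort RTI perftest subscribers and one latency-test
# publisher exchanging 50-byte samples, then waits for Enter before killing
# the perftest processes.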
msgLen = '50'
commandList = []
commandList.append("bin/x64Linux3gcc5.4.0/release/perftest_cpp03 -sub -bestEffort -numPublishers 1 -noPrintIntervals -sidMultiSubTest 0 -dataLen " + msgLen)
commandList.append("bin/x64Linux3gcc5.4.0/release/perftest_cpp03 -sub -bestEffort -numPublishers 1 -noPrintIntervals -sidMultiSubTest 1 -dataLen " + msgLen)
commandList.append("bin/x64Linux3gcc5.4.0/release/perftest_cpp03 -sub -bestEffort -numPublishers 1 -noPrintIntervals -sidMultiSubTest 2 -dataLen " + msgLen)
commandList.append("bin/x64Linux3gcc5.4.0/release/perftest_cpp03 -sub -bestEffort -numPublishers 1 -noPrintIntervals -sidMultiSubTest 3 -dataLen " + msgLen)
commandList.append("bin/x64Linux3gcc5.4.0/release/perftest_cpp03 -sub -bestEffort -numPublishers 1 -noPrintIntervals -sidMultiSubTest 4 -dataLen " + msgLen)
commandList.append("bin/x64Linux3gcc5.4.0/release/perftest_cpp03 -pub -bestEffort -latencyTest -numIter 1000 -noPrintIntervals -numSubscribers 5 -dataLen " + msgLen)
for command in commandList:
subprocess.Popen(command.split(), cwd="../rtiperftest-2.4")
input("Press Enter to shutdown...\n")
os.system("pkill -f perftest")
| 53.590909 | 165 | 0.7905 | 139 | 1,179 | 6.661871 | 0.323741 | 0.110151 | 0.12959 | 0.213823 | 0.739741 | 0.739741 | 0.739741 | 0.739741 | 0.739741 | 0.739741 | 0 | 0.062558 | 0.091603 | 1,179 | 21 | 166 | 56.142857 | 0.802054 | 0 | 0 | 0 | 0 | 0.428571 | 0.698897 | 0.223919 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.142857 | 0 | 0.142857 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
daa1b11a1f4d713c240e05995732572ea2f6e5f5 | 135 | py | Python | src/PyMajsoul/majsoul_client.py | chaserhkj/PyMajSoul | 55ce9352977dd09648e7a7e69f1ab9a2fd6c2e1e | [
"0BSD"
] | 15 | 2019-05-23T03:55:14.000Z | 2022-02-13T06:44:54.000Z | src/PyMajsoul/majsoul_client.py | chaserhkj/majsoul-record-dump | 55ce9352977dd09648e7a7e69f1ab9a2fd6c2e1e | [
"0BSD"
] | null | null | null | src/PyMajsoul/majsoul_client.py | chaserhkj/majsoul-record-dump | 55ce9352977dd09648e7a7e69f1ab9a2fd6c2e1e | [
"0BSD"
] | 5 | 2019-06-05T18:55:03.000Z | 2021-12-31T05:59:47.000Z | from . import majsoul_pb2 as pb
from .majsoul_msjrpc import Lobby
from .msjrpc import MSJRpcChannel
class MJSClient(object):
pass
| 19.285714 | 33 | 0.792593 | 19 | 135 | 5.526316 | 0.684211 | 0.228571 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.00885 | 0.162963 | 135 | 6 | 34 | 22.5 | 0.920354 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.2 | 0.6 | 0 | 0.8 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 7 |
daafa24d174908fc9d308dae0db4f0172bfab624 | 5,753 | py | Python | solid_backend/media_object/tests/test_base_models.py | zentrumnawi/solid-backend | 0a6ac51608d4c713903856bb9b0cbf0068aa472c | [
"MIT"
] | 1 | 2021-01-24T11:54:01.000Z | 2021-01-24T11:54:01.000Z | solid_backend/media_object/tests/test_base_models.py | zentrumnawi/solid-backend | 0a6ac51608d4c713903856bb9b0cbf0068aa472c | [
"MIT"
] | 112 | 2020-04-22T10:07:03.000Z | 2022-03-29T15:25:26.000Z | solid_backend/media_object/tests/test_base_models.py | zentrumnawi/solid-backend | 0a6ac51608d4c713903856bb9b0cbf0068aa472c | [
"MIT"
] | null | null | null | from django.db import models
from stdimage import JPEGField
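# These tests only assert model structure: the fixtures deepzoom_model_class
# and media_object_model_class supply the concrete model classes, and each
# test checks field presence or the field's class.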
class TestDeepZoomModelExists:
"""
Test whether the DeepZoom model can be imported and is a Django model.
"""
def test_model_exists(self):
pass
def test_model_is_django_model(self):
from solid_backend.photograph.models import DeepZoom
assert issubclass(DeepZoom, models.Model)
class TestDeepZoomModelFields:
"""
Test suite with basic field tests checking whether all fields of the DeepZoom
model exist and are instances of the correct field class.
"""
def test_model_has_field_dzi_option(self, deepzoom_model_class):
assert hasattr(deepzoom_model_class, "dzi_option")
def test_model_has_field_dzi_file(self, deepzoom_model_class):
assert hasattr(deepzoom_model_class, "dzi_file")
def test_field_type_dzi_option(self, deepzoom_model_class):
assert isinstance(
deepzoom_model_class._meta.get_field("dzi_option"), models.BooleanField
)
def test_field_type_dzi_file(self, deepzoom_model_class):
assert isinstance(
deepzoom_model_class._meta.get_field("dzi_file"), models.FileField
)
class TestPhotographModelExists:
"""
Test whether the Photograph model can be imported and is a Django model.
"""
def test_model_exists(self):
pass
def test_model_is_django_model(self):
from solid_backend.media_object.models import MediaObject
assert issubclass(MediaObject, models.Model)
class TestPhotographModelFields:
"""
Test suite with basic field tests checking whether all fields of the Photograph
model exist and are instances of the correct field class.
"""
def test_model_has_field_img(self, media_object_model_class):
assert hasattr(media_object_model_class, "img")
def test_model_has_field_img_original_width(self, media_object_model_class):
assert hasattr(media_object_model_class, "img_original_width")
def test_model_has_field_img_original_height(self, media_object_model_class):
assert hasattr(media_object_model_class, "img_original_height")
def test_model_has_field_img_original_scale(self, media_object_model_class):
assert hasattr(media_object_model_class, "img_original_scale")
def test_model_has_field_img_alt(self, media_object_model_class):
assert hasattr(media_object_model_class, "img_alt")
def test_model_has_field_description(self, media_object_model_class):
assert hasattr(media_object_model_class, "description")
def test_model_has_field_audio(self, media_object_model_class):
assert hasattr(media_object_model_class, "audio")
def test_model_has_field_audio_duration(self, media_object_model_class):
assert hasattr(media_object_model_class, "audio_duration")
def test_model_has_field_profile_position(self, media_object_model_class):
assert hasattr(media_object_model_class, "profile_position")
def test_model_has_field_date(self, media_object_model_class):
assert hasattr(media_object_model_class, "date")
def test_model_has_field_author(self, media_object_model_class):
assert hasattr(media_object_model_class, "author")
def test_model_has_field_license(self, media_object_model_class):
assert hasattr(media_object_model_class, "license")
def test_field_type_img(self, media_object_model_class):
assert isinstance(media_object_model_class._meta.get_field("img"), JPEGField)
def test_field_type_img_original_width(self, media_object_model_class):
assert isinstance(
media_object_model_class._meta.get_field("img_original_width"),
models.PositiveSmallIntegerField,
)
def test_field_type_img_original_height(self, media_object_model_class):
assert isinstance(
media_object_model_class._meta.get_field("img_original_height"),
models.PositiveSmallIntegerField,
)
def test_field_type_img_original_scale(self, media_object_model_class):
assert isinstance(
media_object_model_class._meta.get_field("img_original_scale"),
models.FloatField,
)
def test_field_type_img_alt(self, media_object_model_class):
assert isinstance(
media_object_model_class._meta.get_field("img_alt"), models.CharField
)
def test_field_type_description(self, media_object_model_class):
assert isinstance(
media_object_model_class._meta.get_field("description"), models.TextField
)
def test_field_type_audio(self, media_object_model_class):
assert isinstance(
media_object_model_class._meta.get_field("audio"), models.FileField
)
def test_field_type_audio_duration(self, media_object_model_class):
assert isinstance(
media_object_model_class._meta.get_field("audio_duration"),
models.FloatField,
)
def test_field_type_profile_position(self, media_object_model_class):
assert isinstance(
media_object_model_class._meta.get_field("profile_position"),
models.PositiveSmallIntegerField,
)
def test_field_type_date(self, media_object_model_class):
assert isinstance(
media_object_model_class._meta.get_field("date"), models.DateField
)
def test_field_type_author(self, media_object_model_class):
assert isinstance(
media_object_model_class._meta.get_field("author"), models.CharField
)
def test_field_type_license(self, media_object_model_class):
assert isinstance(
media_object_model_class._meta.get_field("license"), models.CharField
)
| 35.95625 | 85 | 0.737702 | 729 | 5,753 | 5.348423 | 0.106996 | 0.148756 | 0.196974 | 0.258528 | 0.831752 | 0.785586 | 0.716594 | 0.674532 | 0.632983 | 0.607848 | 0 | 0 | 0.195724 | 5,753 | 159 | 86 | 36.18239 | 0.842663 | 0.066574 | 0 | 0.23301 | 0 | 0 | 0.055084 | 0 | 0 | 0 | 0 | 0 | 0.291262 | 1 | 0.31068 | false | 0.019417 | 0.038835 | 0 | 0.38835 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
dae1a0093e73866ef70b4efd4b6a79df05501b30 | 1,950 | py | Python | pyluna-common/tests/luna/common/test_constants.py | msk-mind/data-processing | c016d218da2eca003d06b96f2c03f16b3ce97873 | [
"Apache-2.0"
] | 1 | 2022-03-29T03:48:00.000Z | 2022-03-29T03:48:00.000Z | pyluna-common/tests/luna/common/test_constants.py | msk-mind/data-processing | c016d218da2eca003d06b96f2c03f16b3ce97873 | [
"Apache-2.0"
] | 96 | 2020-11-15T01:39:12.000Z | 2021-08-24T14:37:49.000Z | pyluna-common/tests/luna/common/test_constants.py | msk-mind/luna | 282b5bd594cb5bf1ef2a7fdf56fca9bea5ad7102 | [
"Apache-2.0"
] | 1 | 2021-01-04T15:14:23.000Z | 2021-01-04T15:14:23.000Z | # -*- coding: utf-8 -*-
'''
Created on October 17, 2019
@author: aukermaa@mskcc.org
'''
import pytest
from luna.common.config import ConfigSet
import luna.common.constants as const
def test_source_table_name():
c1 = ConfigSet(name=const.DATA_CFG,
config_file='pyluna-pathology/tests/luna/pathology/refined_table/regional_annotation/geojson_data.yaml')
assert const.TABLE_NAME(c1, is_source=True) == "REGIONAL_BITMASK_dsn"
def test_table_name():
c1 = ConfigSet(name=const.DATA_CFG,
config_file='pyluna-common/tests/luna/common/testdata/data_ingestion_template_valid.yml')
assert const.TABLE_NAME(c1) == "CT_OV_16-158_CT_20201028"
def test_table_location():
c1 = ConfigSet(name=const.DATA_CFG,
config_file='pyluna-common/tests/luna/common/testdata/data_ingestion_template_valid.yml')
assert const.TABLE_LOCATION(c1) == "pyluna-radiology/tests/luna/radiology/proxy_table/test_data/OV_16-158/tables/CT_OV_16-158_CT_20201028"
def test_table_location_emptystring():
c1 = ConfigSet(name=const.DATA_CFG,
config_file='pyluna-common/tests/luna/common/testdata/data_ingestion_template_valid_empty_dataset.yml')
assert const.TABLE_LOCATION(c1) == "pyluna-common/tests/luna/radiology/proxy_table/test_data/OV_16-158/tables/CT"
def test_table_location_none():
c1 = ConfigSet(name=const.DATA_CFG,
config_file='pyluna-common/tests/luna/common/testdata/data_ingestion_template_valid_empty_dataset_2.yml')
assert const.TABLE_LOCATION(c1) == "pyluna-common/tests/luna/radiology/proxy_table/test_data/OV_16-158/tables/CT"
def test_project_location():
c1 = ConfigSet(name=const.DATA_CFG,
config_file='pyluna-common/tests/luna/common/testdata/data_ingestion_template_valid.yml')
assert const.PROJECT_LOCATION(c1) == "pyluna-radiology/tests/luna/radiology/proxy_table/test_data/OV_16-158"
| 38.235294 | 142 | 0.747692 | 274 | 1,950 | 5.032847 | 0.233577 | 0.065265 | 0.086294 | 0.106599 | 0.783901 | 0.763597 | 0.763597 | 0.763597 | 0.763597 | 0.763597 | 0 | 0.039262 | 0.137949 | 1,950 | 50 | 143 | 39 | 0.781083 | 0.040513 | 0 | 0.407407 | 0 | 0.111111 | 0.459184 | 0.448443 | 0 | 0 | 0 | 0 | 0.222222 | 1 | 0.222222 | false | 0 | 0.111111 | 0 | 0.333333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
972fcab4de5d15b8f237a44416dd4cedd9d95c49 | 6,410 | py | Python | tests/test_secrets.py | nickgaya/tufa | 7d7458b73de08d225929dba19b4c7a37b4ddb423 | [
"MIT"
] | null | null | null | tests/test_secrets.py | nickgaya/tufa | 7d7458b73de08d225929dba19b4c7a37b4ddb423 | [
"MIT"
] | null | null | null | tests/test_secrets.py | nickgaya/tufa | 7d7458b73de08d225929dba19b4c7a37b4ddb423 | [
"MIT"
] | null | null | null | """Tests of tufa.secrets"""
from subprocess import CompletedProcess
import pytest
from tufa.exceptions import KeychainError
from tufa.secrets import SecretStore
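# Credential name and dummy OTP secret used throughout the tests below.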
NAME = 'test'
SECRET = 'GMTBYHBJGDL5BBO2'
@pytest.fixture(autouse=True)
def mock_subprocess(mocker):
return mocker.patch('tufa.secrets.subprocess', autospec=True)
@pytest.fixture
def secret_store():
return SecretStore()
def test_store_secret(secret_store, mock_subprocess):
mock_subprocess.run.return_value = CompletedProcess(args=..., returncode=0)
secret_store.store_secret('test', SECRET)
mock_subprocess.run.assert_called_once_with(
['/usr/bin/security', 'add-generic-password',
'-s', 'tufa', '-a', 'test',
'-l', 'tufa: test',
'-D', 'hotp/totp secret',
'-w', SECRET],
stdin=mock_subprocess.DEVNULL,
capture_output=True, text=True, start_new_session=True)
def test_store_secret_update(secret_store, mock_subprocess):
mock_subprocess.run.return_value = CompletedProcess(args=..., returncode=0)
secret_store.store_secret('test', SECRET, update=True)
mock_subprocess.run.assert_called_once_with(
['/usr/bin/security', 'add-generic-password',
'-s', 'tufa', '-a', 'test',
'-l', 'tufa: test',
'-D', 'hotp/totp secret',
'-w', SECRET,
'-U'],
stdin=mock_subprocess.DEVNULL,
capture_output=True, text=True, start_new_session=True)
def test_store_secret_keychain(secret_store, mock_subprocess):
mock_subprocess.run.return_value = CompletedProcess(args=..., returncode=0)
secret_store.store_secret('test', SECRET, keychain='example.keychain')
mock_subprocess.run.assert_called_once_with(
['/usr/bin/security', 'add-generic-password',
'-s', 'tufa', '-a', 'test',
'-l', 'tufa: test',
'-D', 'hotp/totp secret',
'-w', SECRET,
'example.keychain'],
stdin=mock_subprocess.DEVNULL,
capture_output=True, text=True, start_new_session=True)
def test_store_secret_error(secret_store, mock_subprocess):
mock_subprocess.run.return_value = CompletedProcess(
args=..., returncode=45)
with pytest.raises(KeychainError):
secret_store.store_secret('test', SECRET)
mock_subprocess.run.assert_called_once_with(
['/usr/bin/security', 'add-generic-password',
'-s', 'tufa', '-a', 'test',
'-l', 'tufa: test',
'-D', 'hotp/totp secret',
'-w', SECRET],
stdin=mock_subprocess.DEVNULL,
capture_output=True, text=True, start_new_session=True)
def test_retrieve_secret(secret_store, mock_subprocess):
mock_subprocess.run.return_value = CompletedProcess(
args=..., returncode=0, stdout=SECRET)
assert secret_store.retrieve_secret('test') == SECRET
mock_subprocess.run.assert_called_once_with(
['/usr/bin/security', 'find-generic-password',
'-s', 'tufa', '-a', 'test', '-w'],
stdin=mock_subprocess.DEVNULL,
capture_output=True, text=True, start_new_session=True)
def test_retrieve_secret_keychain(secret_store, mock_subprocess):
mock_subprocess.run.return_value = CompletedProcess(
args=..., returncode=0, stdout=SECRET)
assert secret_store.retrieve_secret('test', keychain='example.keychain') \
== SECRET
mock_subprocess.run.assert_called_once_with(
['/usr/bin/security', 'find-generic-password',
'-s', 'tufa', '-a', 'test', '-w', 'example.keychain'],
stdin=mock_subprocess.DEVNULL,
capture_output=True, text=True, start_new_session=True)
def test_retrieve_secret_error(secret_store, mock_subprocess):
mock_subprocess.run.return_value = CompletedProcess(
args=..., returncode=44)
with pytest.raises(KeychainError):
secret_store.retrieve_secret('test')
mock_subprocess.run.assert_called_once_with(
['/usr/bin/security', 'find-generic-password',
'-s', 'tufa', '-a', 'test', '-w'],
stdin=mock_subprocess.DEVNULL,
capture_output=True, text=True, start_new_session=True)
def test_delete_secret(secret_store, mock_subprocess):
mock_subprocess.run.return_value = CompletedProcess(args=..., returncode=0)
secret_store.delete_secret('test')
mock_subprocess.run.assert_called_once_with(
['/usr/bin/security', 'delete-generic-password',
'-s', 'tufa', '-a', 'test'],
stdin=mock_subprocess.DEVNULL,
capture_output=True, text=True, start_new_session=True)
def test_delete_secret_keychain(secret_store, mock_subprocess):
mock_subprocess.run.return_value = CompletedProcess(args=..., returncode=0)
secret_store.delete_secret('test', keychain='example.keychain')
mock_subprocess.run.assert_called_once_with(
['/usr/bin/security', 'delete-generic-password',
'-s', 'tufa', '-a', 'test', 'example.keychain'],
stdin=mock_subprocess.DEVNULL,
capture_output=True, text=True, start_new_session=True)
def test_delete_secret_error(secret_store, mock_subprocess):
mock_subprocess.run.return_value = CompletedProcess(
args=..., returncode=44)
with pytest.raises(KeychainError):
secret_store.delete_secret('test')
mock_subprocess.run.assert_called_once_with(
['/usr/bin/security', 'delete-generic-password',
'-s', 'tufa', '-a', 'test'],
stdin=mock_subprocess.DEVNULL,
capture_output=True, text=True, start_new_session=True)
def test_verify_keychain(secret_store, mock_subprocess):
mock_subprocess.run.return_value = CompletedProcess(args=..., returncode=0)
secret_store.verify_keychain('example.keychain')
mock_subprocess.run.assert_called_once_with(
['/usr/bin/security', 'show-keychain-info', 'example.keychain'],
stdin=mock_subprocess.DEVNULL,
capture_output=True, text=True, start_new_session=True)
def test_verify_keychain_error(secret_store, mock_subprocess):
mock_subprocess.run.return_value = CompletedProcess(
args=..., returncode=50)
with pytest.raises(KeychainError):
secret_store.verify_keychain('example.keychain')
mock_subprocess.run.assert_called_once_with(
['/usr/bin/security', 'show-keychain-info', 'example.keychain'],
stdin=mock_subprocess.DEVNULL,
capture_output=True, text=True, start_new_session=True)
| 34.836957 | 79 | 0.686739 | 762 | 6,410 | 5.511811 | 0.094488 | 0.163333 | 0.097143 | 0.071429 | 0.919286 | 0.911905 | 0.898095 | 0.898095 | 0.898095 | 0.898095 | 0 | 0.003403 | 0.174883 | 6,410 | 183 | 80 | 35.027322 | 0.790698 | 0.003276 | 0 | 0.717557 | 0 | 0 | 0.14899 | 0.024283 | 0 | 0 | 0 | 0 | 0.10687 | 1 | 0.10687 | false | 0.076336 | 0.030534 | 0.015267 | 0.152672 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 8 |
973013e9f743ec4b5675efd0964faa9d0abe20b5 | 3,289 | py | Python | rony/writer.py | felipe-cosse/rony | f2bed037ab73c4c74e8f3c585b1b4e0084a1244f | [
"Apache-2.0"
] | null | null | null | rony/writer.py | felipe-cosse/rony | f2bed037ab73c4c74e8f3c585b1b4e0084a1244f | [
"Apache-2.0"
] | null | null | null | rony/writer.py | felipe-cosse/rony | f2bed037ab73c4c74e8f3c585b1b4e0084a1244f | [
"Apache-2.0"
] | null | null | null | import os
import shutil
from datetime import datetime
import rony
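# Scaffolding helpers: copy_files copies Rony's base project templates (etl,
# infrastructure, scripts, tests, CI workflows and example DAGs) into the new
# project directory, and write_readme_file creates an initial README.md there.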
def copy_files(LOCAL_PATH, project_name):
shutil.copy(os.path.join(rony.__path__[0], 'base_files', 'etl', 'Dockerfile'), os.path.join(LOCAL_PATH, project_name, 'etl'))
shutil.copy(os.path.join(rony.__path__[0], 'base_files', 'etl', 'run.py'), os.path.join(LOCAL_PATH, project_name, 'etl'))
shutil.copy(os.path.join(rony.__path__[0], 'base_files', 'etl', 'lambda_function.py'), os.path.join(LOCAL_PATH, project_name, 'etl'))
shutil.copy(os.path.join(rony.__path__[0], 'base_files', 'etl', 'lambda_requirements.txt'), os.path.join(LOCAL_PATH, project_name, 'etl'))
shutil.copy(os.path.join(rony.__path__[0], 'base_files', 'requirements.txt'), os.path.join(LOCAL_PATH, project_name))
shutil.copy(os.path.join(rony.__path__[0], 'base_files', '.gitignore'), os.path.join(LOCAL_PATH, project_name))
shutil.copy(os.path.join(rony.__path__[0], 'base_files', 'infrastructure', 'cloudwatch.tf'), os.path.join(LOCAL_PATH, project_name, 'infrastructure'))
shutil.copy(os.path.join(rony.__path__[0], 'base_files', 'infrastructure', 'glue_crawler.tf'), os.path.join(LOCAL_PATH, project_name, 'infrastructure'))
shutil.copy(os.path.join(rony.__path__[0], 'base_files', 'infrastructure', 'iam.tf'), os.path.join(LOCAL_PATH, project_name, 'infrastructure'))
shutil.copy(os.path.join(rony.__path__[0], 'base_files', 'infrastructure', 'lambda.tf'), os.path.join(LOCAL_PATH, project_name, 'infrastructure'))
shutil.copy(os.path.join(rony.__path__[0], 'base_files', 'infrastructure', 'main.tf'), os.path.join(LOCAL_PATH, project_name, 'infrastructure'))
shutil.copy(os.path.join(rony.__path__[0], 'base_files', 'infrastructure', 's3.tf'), os.path.join(LOCAL_PATH, project_name, 'infrastructure'))
shutil.copy(os.path.join(rony.__path__[0], 'base_files', 'infrastructure', 'variables.tf'), os.path.join(LOCAL_PATH, project_name, 'infrastructure'))
shutil.copy(os.path.join(rony.__path__[0], 'base_files', 'infrastructure', 'ecr.tf'), os.path.join(LOCAL_PATH, project_name, 'infrastructure'))
shutil.copy(os.path.join(rony.__path__[0], 'base_files', 'scripts', 'build_lambda_package.sh'), os.path.join(LOCAL_PATH, project_name, 'scripts'))
shutil.copy(os.path.join(rony.__path__[0], 'base_files', 'tests', 'test_lambda.py'), os.path.join(LOCAL_PATH, project_name, 'tests'))
shutil.copy(os.path.join(rony.__path__[0], 'base_files', 'ci', 'github_ci.yml'), os.path.join(LOCAL_PATH, project_name, '.github', 'workflows'))
shutil.copy(os.path.join(rony.__path__[0], 'base_files', 'ci', 'README.md'), os.path.join(LOCAL_PATH, project_name, '.github', 'workflows'))
shutil.copy(os.path.join(rony.__path__[0], 'base_files', 'dags', 'conditional_example.py'), os.path.join(LOCAL_PATH, project_name, 'dags'))
shutil.copy(os.path.join(rony.__path__[0], 'base_files', 'dags', 'titanic_example.py'), os.path.join(LOCAL_PATH, project_name, 'dags'))
def write_readme_file(LOCAL_PATH, project_name):
with open(os.path.join(LOCAL_PATH, project_name, 'README.md'), 'w+') as outfile:
outfile.write(f"""# {project_name}
Project started in {datetime.today().strftime("%B %d, %Y")}.
**Please complete this section with information on using and testing this project.**
""") | 78.309524 | 156 | 0.720888 | 488 | 3,289 | 4.532787 | 0.153689 | 0.111212 | 0.185353 | 0.207957 | 0.809222 | 0.809222 | 0.809222 | 0.782098 | 0.767631 | 0.754069 | 0 | 0.006993 | 0.086957 | 3,289 | 42 | 157 | 78.309524 | 0.729604 | 0 | 0 | 0 | 0 | 0 | 0.286626 | 0.029787 | 0 | 0 | 0 | 0 | 0 | 1 | 0.064516 | false | 0 | 0.129032 | 0 | 0.193548 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
9738656abfec6c9fe58dd24cc34c8f1b1429b64a | 367,257 | py | Python | examples/06github/githubschema.py | dotness/swagger-marshmallow-codegen | 62938d780672d754431d50bde3eae04abefb64f1 | [
"MIT"
] | null | null | null | examples/06github/githubschema.py | dotness/swagger-marshmallow-codegen | 62938d780672d754431d50bde3eae04abefb64f1 | [
"MIT"
] | null | null | null | examples/06github/githubschema.py | dotness/swagger-marshmallow-codegen | 62938d780672d754431d50bde3eae04abefb64f1 | [
"MIT"
] | null | null | null | # -*- coding:utf-8 -*-
# this is auto-generated by swagger-marshmallow-codegen
from marshmallow import (
Schema,
fields
)
from marshmallow.validate import (
Length,
OneOf
)
from swagger_marshmallow_codegen.schema import (
AdditionalPropertiesSchema,
PrimitiveValueSchema
)
class Asset(Schema):
content_type = fields.String()
created_at = fields.String()
download_count = fields.Number()
id = fields.Number()
label = fields.String()
name = fields.String()
size = fields.Number()
state = fields.String()
updated_at = fields.String()
uploader = fields.Nested('AssetUploader')
url = fields.String()
class AssetUploader(Schema):
avatar_url = fields.String()
events_url = fields.String()
followers_url = fields.String()
following_url = fields.String()
gists_url = fields.String()
gravatar_id = fields.String()
html_url = fields.String()
id = fields.Number()
login = fields.String()
organizations_url = fields.String()
received_events_url = fields.String()
repos_url = fields.String()
site_admin = fields.Boolean()
starred_url = fields.String()
subscriptions_url = fields.String()
type = fields.String()
url = fields.String()
class AssetPatch(Schema):
label = fields.String()
name = fields.String(required=True)
class AssetsItem(Schema):
content_type = fields.String()
created_at = fields.String()
download_count = fields.Number()
id = fields.Number()
label = fields.String()
name = fields.String()
size = fields.Number()
state = fields.String()
updated_at = fields.String()
uploader = fields.Nested('AssetsItemUploader')
url = fields.String()
class AssetsItemUploader(Schema):
avatar_url = fields.String()
events_url = fields.String()
followers_url = fields.String()
following_url = fields.String()
gists_url = fields.String()
gravatar_id = fields.String()
html_url = fields.String()
id = fields.Number()
login = fields.String()
organizations_url = fields.String()
received_events_url = fields.String()
repos_url = fields.String()
site_admin = fields.Boolean()
starred_url = fields.String()
subscriptions_url = fields.String()
type = fields.String()
url = fields.String()
class AssigneesItem(Schema):
avatar_url = fields.Integer()
gravatar_id = fields.String()
id = fields.Integer()
login = fields.String()
url = fields.String()
class Blob(Schema):
content = fields.String()
encoding = fields.String(validate=[OneOf(choices=['utf-8', 'base64'], labels=[])])
sha = fields.String()
size = fields.Integer()
class Blobs(Schema):
sha = fields.String()
class Branch(Schema):
_links = fields.Nested('Branch_links')
commit = fields.Nested('BranchCommit')
name = fields.String()
class BranchCommit(Schema):
author = fields.Nested('BranchCommitAuthor')
commit = fields.Nested('BranchCommitCommit')
committer = fields.Nested('BranchCommitCommitter')
parents = fields.List(fields.Nested('BranchCommitParentsItem'))
sha = fields.String()
url = fields.String()
class BranchCommitParentsItem(Schema):
sha = fields.String()
url = fields.String()
class BranchCommitCommitter(Schema):
avatar_url = fields.String()
gravatar_id = fields.String()
id = fields.Integer()
login = fields.String()
url = fields.String()
class BranchCommitCommit(Schema):
author = fields.Nested('BranchCommitCommitAuthor')
committer = fields.Nested('BranchCommitCommitCommitter')
message = fields.String()
tree = fields.Nested('BranchCommitCommitTree')
url = fields.String()
class BranchCommitCommitTree(Schema):
sha = fields.String()
url = fields.String()
class BranchCommitCommitCommitter(Schema):
date = fields.String(description='ISO 8601 format: YYYY-MM-DDTHH:MM:SSZ')
email = fields.String()
name = fields.String()
class BranchCommitCommitAuthor(Schema):
date = fields.String(description='ISO 8601 format: YYYY-MM-DDTHH:MM:SSZ')
email = fields.String()
name = fields.String()
class BranchCommitAuthor(Schema):
avatar_url = fields.String()
gravatar_id = fields.String()
id = fields.Integer()
login = fields.String()
url = fields.String()
class Branch_links(Schema):
html = fields.String()
self = fields.String()
class BranchesItem(Schema):
commit = fields.Nested('BranchesItemCommit')
name = fields.String()
class BranchesItemCommit(Schema):
sha = fields.String()
url = fields.String()
class CollaboratorsItem(Schema):
avatar_url = fields.String()
gravatar_id = fields.String()
id = fields.Integer()
login = fields.String()
url = fields.String()
class Comment(Schema):
body = fields.String()
class CommentBody(Schema):
body = fields.String(required=True)
class CommentsItem(Schema):
body = fields.String()
created_at = fields.String(description='ISO 8601.')
id = fields.Integer()
url = fields.String()
user = fields.Nested('CommentsItemUser')
class CommentsItemUser(Schema):
avatar_url = fields.String()
gravatar_id = fields.String()
id = fields.Integer()
login = fields.String()
url = fields.String()
class Commit(Schema):
author = fields.Nested('CommitAuthor')
commit = fields.Nested('CommitCommit')
committer = fields.Nested('CommitCommitter')
files = fields.List(fields.Nested('CommitFilesItem'))
parents = fields.List(fields.Nested('CommitParentsItem'))
sha = fields.String()
stats = fields.Nested('CommitStats')
url = fields.String()
class CommitStats(Schema):
additions = fields.Integer()
deletions = fields.Integer()
total = fields.Integer()
class CommitParentsItem(Schema):
sha = fields.String()
url = fields.String()
class CommitFilesItem(Schema):
additions = fields.Integer()
blob_url = fields.String()
changes = fields.Integer()
deletions = fields.Integer()
filename = fields.String()
patch = fields.String()
raw_url = fields.String()
status = fields.String()
class CommitCommitter(Schema):
avatar_url = fields.String()
gravatar_id = fields.String()
id = fields.Integer()
login = fields.String()
url = fields.String()
class CommitCommit(Schema):
author = fields.Nested('CommitCommitAuthor')
committer = fields.Nested('CommitCommitCommitter')
message = fields.String()
tree = fields.Nested('CommitCommitTree')
url = fields.String()
class CommitCommitTree(Schema):
sha = fields.String()
url = fields.String()
class CommitCommitCommitter(Schema):
date = fields.String(description='ISO 8601 format: YYYY-MM-DDTHH:MM:SSZ')
email = fields.String()
name = fields.String()
class CommitCommitAuthor(Schema):
date = fields.String(description='ISO 8601 format: YYYY-MM-DDTHH:MM:SSZ')
email = fields.String()
name = fields.String()
class CommitAuthor(Schema):
avatar_url = fields.String()
gravatar_id = fields.String()
id = fields.Integer()
login = fields.String()
url = fields.String()
class CommitActivityStatsItem(Schema):
days = fields.List(fields.Integer())
total = fields.Integer()
week = fields.Integer()
class CommitBody(Schema):
body = fields.String(required=True)
line = fields.String(description='Deprecated - Use position parameter instead.')
number = fields.String(description='Line number in the file to comment on. Defaults to null.')
path = fields.String(description='Relative path of the file to comment on.')
position = fields.Integer(description='Line index in the diff to comment on.')
sha = fields.String(required=True, description='SHA of the commit to comment on.')
class CommitComments(Schema):
body = fields.String()
commit_id = fields.String()
created_at = fields.String(description='ISO 8601 format: YYYY-MM-DDTHH:MM:SSZ')
html_url = fields.String()
id = fields.Integer()
line = fields.Integer()
path = fields.String()
position = fields.Integer()
updated_at = fields.String(description='ISO 8601 format: YYYY-MM-DDTHH:MM:SSZ')
url = fields.String()
user = fields.Nested('CommitCommentsUser')
class CommitCommentsUser(Schema):
avatar_url = fields.String()
gravatar_id = fields.String()
id = fields.Integer()
login = fields.String()
url = fields.String()
class CommitsItem(Schema):
author = fields.Nested('CommitsItemAuthor')
commit = fields.Nested('CommitsItemCommit')
committer = fields.Nested('CommitsItemCommitter')
parents = fields.List(fields.Nested('CommitsItemParentsItem'))
sha = fields.String()
url = fields.String()
class CommitsItemParentsItem(Schema):
sha = fields.String()
url = fields.String()
class CommitsItemCommitter(Schema):
avatar_url = fields.String()
gravatar_id = fields.String()
id = fields.Integer()
login = fields.String()
url = fields.String()
class CommitsItemCommit(Schema):
author = fields.Nested('CommitsItemCommitAuthor')
committer = fields.Nested('CommitsItemCommitCommitter')
message = fields.String()
tree = fields.Nested('CommitsItemCommitTree')
url = fields.String()
class CommitsItemCommitTree(Schema):
sha = fields.String()
url = fields.String()
class CommitsItemCommitCommitter(Schema):
date = fields.String(description='ISO 8601 format: YYYY-MM-DDTHH:MM:SSZ')
email = fields.String()
name = fields.String()
class CommitsItemCommitAuthor(Schema):
date = fields.String(description='ISO 8601 format: YYYY-MM-DDTHH:MM:SSZ')
email = fields.String()
name = fields.String()
class CommitsItemAuthor(Schema):
avatar_url = fields.String()
gravatar_id = fields.String()
id = fields.Integer()
login = fields.String()
url = fields.String()
class Compare_commits(Schema):
ahead_by = fields.Integer()
base_commit = fields.Nested('Compare_commitsBase_commit')
behind_by = fields.Integer()
commits = fields.List(fields.Nested('Compare_commitsCommitsItem'))
diff_url = fields.String()
files = fields.List(fields.Nested('Compare_commitsFilesItem'))
html_url = fields.String()
patch_url = fields.String()
permalink_url = fields.String()
status = fields.String()
total_commits = fields.Integer()
url = fields.String()
class Compare_commitsFilesItem(Schema):
additions = fields.Integer()
blob_url = fields.String()
changes = fields.Integer()
contents_url = fields.String()
deletions = fields.Integer()
filename = fields.String()
patch = fields.String()
raw_url = fields.String()
sha = fields.String()
status = fields.String()
class Compare_commitsCommitsItem(Schema):
author = fields.Nested('Compare_commitsCommitsItemAuthor')
commit = fields.Nested('Compare_commitsCommitsItemCommit')
committer = fields.Nested('Compare_commitsCommitsItemCommitter')
parents = fields.List(fields.Nested('Compare_commitsCommitsItemParentsItem'))
sha = fields.String()
url = fields.String()
class Compare_commitsCommitsItemParentsItem(Schema):
sha = fields.String()
url = fields.String()
class Compare_commitsCommitsItemCommitter(Schema):
avatar_url = fields.String()
events_url = fields.String()
followers_url = fields.String()
following_url = fields.String()
gists_url = fields.String()
gravatar_id = fields.String()
html_url = fields.String()
id = fields.Integer()
login = fields.String()
organizations_url = fields.String()
received_events_url = fields.String()
repos_url = fields.String()
site_admin = fields.Boolean()
starred_url = fields.String()
subscriptions_url = fields.String()
type = fields.String()
url = fields.String()
class Compare_commitsCommitsItemCommit(Schema):
author = fields.Nested('Compare_commitsCommitsItemCommitAuthor')
committer = fields.Nested('Compare_commitsCommitsItemCommitCommitter')
message = fields.String()
tree = fields.Nested('Compare_commitsCommitsItemCommitTree')
url = fields.String()
class Compare_commitsCommitsItemCommitTree(Schema):
sha = fields.String()
url = fields.String()
class Compare_commitsCommitsItemCommitCommitter(Schema):
date = fields.String()
email = fields.String()
name = fields.String()
class Compare_commitsCommitsItemCommitAuthor(Schema):
date = fields.String()
email = fields.String()
name = fields.String()
class Compare_commitsCommitsItemAuthor(Schema):
avatar_url = fields.String()
events_url = fields.String()
followers_url = fields.String()
following_url = fields.String()
gists_url = fields.String()
gravatar_id = fields.String()
html_url = fields.String()
id = fields.Integer()
login = fields.String()
organizations_url = fields.String()
received_events_url = fields.String()
repos_url = fields.String()
site_admin = fields.Boolean()
starred_url = fields.String()
subscriptions_url = fields.String()
type = fields.String()
url = fields.String()
class Compare_commitsBase_commit(Schema):
author = fields.Nested('Compare_commitsBase_commitAuthor')
commit = fields.Nested('Compare_commitsBase_commitCommit')
committer = fields.Nested('Compare_commitsBase_commitCommitter')
parents = fields.List(fields.Nested('Compare_commitsBase_commitParentsItem'))
sha = fields.String()
url = fields.String()
class Compare_commitsBase_commitParentsItem(Schema):
sha = fields.String()
url = fields.String()
class Compare_commitsBase_commitCommitter(Schema):
avatar_url = fields.String()
events_url = fields.String()
followers_url = fields.String()
following_url = fields.String()
gists_url = fields.String()
gravatar_id = fields.String()
html_url = fields.String()
id = fields.Integer()
login = fields.String()
organizations_url = fields.String()
received_events_url = fields.String()
repos_url = fields.String()
site_admin = fields.Boolean()
starred_url = fields.String()
subscriptions_url = fields.String()
type = fields.String()
url = fields.String()
class Compare_commitsBase_commitCommit(Schema):
author = fields.Nested('Compare_commitsBase_commitCommitAuthor')
committer = fields.Nested('Compare_commitsBase_commitCommitCommitter')
message = fields.String()
tree = fields.Nested('Compare_commitsBase_commitCommitTree')
url = fields.String()
class Compare_commitsBase_commitCommitTree(Schema):
sha = fields.String()
url = fields.String()
class Compare_commitsBase_commitCommitCommitter(Schema):
date = fields.String()
email = fields.String()
name = fields.String()
class Compare_commitsBase_commitCommitAuthor(Schema):
date = fields.String()
email = fields.String()
name = fields.String()
class Compare_commitsBase_commitAuthor(Schema):
avatar_url = fields.String()
events_url = fields.String()
followers_url = fields.String()
following_url = fields.String()
gists_url = fields.String()
gravatar_id = fields.String()
html_url = fields.String()
id = fields.Integer()
login = fields.String()
organizations_url = fields.String()
received_events_url = fields.String()
repos_url = fields.String()
site_admin = fields.Boolean()
starred_url = fields.String()
subscriptions_url = fields.String()
type = fields.String()
url = fields.String()
class Contents_path(Schema):
_links = fields.Nested('Contents_path_links')
content = fields.String()
encoding = fields.String()
git_url = fields.String()
html_url = fields.String()
name = fields.String()
path = fields.String()
sha = fields.String()
size = fields.Integer()
type = fields.String()
url = fields.String()
class Contents_path_links(Schema):
git = fields.String()
html = fields.String()
self = fields.String()
class ContributorsItem(Schema):
avatar_url = fields.String()
contributions = fields.Integer()
gravatar_id = fields.String()
id = fields.Integer()
login = fields.String()
url = fields.String()
class ContributorsStatsItem(Schema):
author = fields.Nested('ContributorsStatsItemAuthor')
total = fields.Integer(description='The total number of commits authored by the contributor.')
weeks = fields.List(fields.Nested('ContributorsStatsItemWeeksItem'))
class ContributorsStatsItemWeeksItem(Schema):
a = fields.Integer(description='Number of additions.')
c = fields.Integer(description='Number of commits.')
d = fields.Integer(description='Number of deletions.')
w = fields.String(description='Start of the week.')
class ContributorsStatsItemAuthor(Schema):
avatar_url = fields.String()
gravatar_id = fields.String()
id = fields.Integer()
login = fields.String()
url = fields.String()
class CreateDownload(Schema):
accesskeyid = fields.String()
acl = fields.String()
bucket = fields.String()
content_type = fields.String()
description = fields.String()
download_count = fields.Integer()
expirationdate = fields.String(description='ISO 8601 format: YYYY-MM-DDTHH:MM:SSZ')
html_url = fields.String()
id = fields.Integer()
mime_type = fields.String()
name = fields.String()
path = fields.String()
policy = fields.String()
prefix = fields.String()
redirect = fields.Boolean()
s3_url = fields.String()
signature = fields.String()
size = fields.Integer()
url = fields.String()
class CreateFile(Schema):
commit = fields.Nested('CreateFileCommit')
content = fields.Nested('CreateFileContent')
class CreateFileContent(Schema):
_links = fields.Nested('CreateFileContent_links')
git_url = fields.String()
html_url = fields.String()
name = fields.String()
path = fields.String()
sha = fields.String()
size = fields.Integer()
type = fields.String()
url = fields.String()
class CreateFileContent_links(Schema):
git = fields.String()
html = fields.String()
self = fields.String()
class CreateFileCommit(Schema):
author = fields.Nested('CreateFileCommitAuthor')
committer = fields.Nested('CreateFileCommitCommitter')
html_url = fields.String()
message = fields.String()
parents = fields.List(fields.Nested('CreateFileCommitParentsItem'))
sha = fields.String()
tree = fields.Nested('CreateFileCommitTree')
url = fields.String()
class CreateFileCommitTree(Schema):
sha = fields.String()
url = fields.String()
class CreateFileCommitParentsItem(Schema):
html_url = fields.String()
sha = fields.String()
url = fields.String()
class CreateFileCommitCommitter(Schema):
date = fields.String()
email = fields.String()
name = fields.String()
class CreateFileCommitAuthor(Schema):
date = fields.String()
email = fields.String()
name = fields.String()
class CreateFileBody(Schema):
committer = fields.Nested('CreateFileBodyCommitter')
content = fields.String()
message = fields.String()
class CreateFileBodyCommitter(Schema):
email = fields.String()
name = fields.String()
class DeleteFile(Schema):
commit = fields.Nested('DeleteFileCommit')
content = fields.String()
class DeleteFileCommit(Schema):
author = fields.Nested('DeleteFileCommitAuthor')
committer = fields.Nested('DeleteFileCommitCommitter')
html_url = fields.String()
message = fields.String()
parents = fields.Nested('DeleteFileCommitParents')
sha = fields.String()
tree = fields.Nested('DeleteFileCommitTree')
url = fields.String()
class DeleteFileCommitTree(Schema):
sha = fields.String()
url = fields.String()
class DeleteFileCommitParents(Schema):
html_url = fields.String()
sha = fields.String()
url = fields.String()
class DeleteFileCommitCommitter(Schema):
date = fields.String()
email = fields.String()
name = fields.String()
class DeleteFileCommitAuthor(Schema):
date = fields.String()
email = fields.String()
name = fields.String()
class DeleteFileBody(Schema):
committer = fields.Nested('DeleteFileBodyCommitter')
message = fields.String()
sha = fields.String()
class DeleteFileBodyCommitter(Schema):
email = fields.String()
name = fields.String()
class Deployment(Schema):
description = fields.String()
payload = fields.Nested('DeploymentPayload')
ref = fields.String()
class DeploymentPayload(Schema):
deploy_user = fields.String()
environment = fields.String()
room_id = fields.Number()
class Deployment_resp(Schema):
created_at = fields.String()
creator = fields.Nested('Deployment_respCreator')
description = fields.String()
id = fields.Integer()
payload = fields.String()
sha = fields.String()
statuses_url = fields.String()
updated_at = fields.String()
url = fields.String()
class Deployment_respCreator(Schema):
avatar_url = fields.String()
events_url = fields.String()
followers_url = fields.String()
following_url = fields.String()
gists_url = fields.String()
gravatar_id = fields.String()
html_url = fields.String()
id = fields.Integer()
login = fields.String()
organizations_url = fields.String()
received_events_url = fields.String()
repos_url = fields.String()
site_admin = fields.Boolean()
starred_url = fields.String()
subscriptions_url = fields.String()
type = fields.String()
url = fields.String()
class Deployment_statusesItem(Schema):
created_at = fields.String()
creator = fields.Nested('Deployment_statusesItemCreator')
description = fields.String()
id = fields.Integer()
payload = fields.String()
state = fields.String()
target_url = fields.String()
updated_at = fields.String()
url = fields.String()
class Deployment_statusesItemCreator(Schema):
avatar_url = fields.String()
events_url = fields.String()
followers_url = fields.String()
following_url = fields.String()
gists_url = fields.String()
gravatar_id = fields.String()
html_url = fields.String()
id = fields.Integer()
login = fields.String()
organizations_url = fields.String()
received_events_url = fields.String()
repos_url = fields.String()
site_admin = fields.Boolean()
starred_url = fields.String()
subscriptions_url = fields.String()
type = fields.String()
url = fields.String()
class Deployment_statuses_create(Schema):
description = fields.String()
state = fields.String()
target_url = fields.String()
class DownloadBody(Schema):
content_type = fields.String()
description = fields.String()
name = fields.String(required=True)
size = fields.Integer(required=True, description='Size of file in bytes.')
class Downloads(Schema):
content_type = fields.String()
description = fields.String()
download_count = fields.Integer()
html_url = fields.String()
id = fields.Integer()
name = fields.String()
size = fields.Integer()
url = fields.String()
class EditTeam(Schema):
name = fields.String(required=True)
permission = fields.String(validate=[OneOf(choices=['pull', 'push', 'admin'], labels=[])])
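# Usage sketch for the OneOf validator above (assumption: marshmallow 3.x, where
# Schema.load() validates input and raises ValidationError on failure; payloads
# are illustrative):
#
#     EditTeam().load({'name': 'core', 'permission': 'push'})   # accepted
#     EditTeam().load({'name': 'core', 'permission': 'owner'})  # raises ValidationError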
class Emojis(Schema):
n100 = fields.String(data_key='100')
n1234 = fields.String(data_key='1234')
x1 = fields.String(data_key='+1')
x_1 = fields.String(data_key='-1')
n8ball = fields.String(data_key='8ball')
a = fields.String()
ab = fields.String()
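# The data_key arguments above map JSON keys that are not valid Python identifiers
# (e.g. '100', '+1', '-1', '8ball') onto safe attribute names. Sketch (assumption:
# marshmallow 3.x; the URL is made up for illustration):
#
#     Emojis().load({'+1': 'https://example.test/plus1.png'})
#     # -> {'x1': 'https://example.test/plus1.png'}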
class Event(Schema):
actor = fields.Nested('EventActor')
commit_id = fields.String()
created_at = fields.String(description='ISO 8601 format: YYYY-MM-DDTHH:MM:SSZ')
event = fields.String()
issue = fields.Nested('EventIssue')
url = fields.String()
class EventIssue(Schema):
assignee = fields.Nested('EventIssueAssignee')
body = fields.String()
closed_at = fields.String(description='ISO 8601 format: YYYY-MM-DDTHH:MM:SSZ')
comments = fields.Integer()
created_at = fields.String(description='ISO 8601 format: YYYY-MM-DDTHH:MM:SSZ')
html_url = fields.String()
labels = fields.List(fields.Nested('EventIssueLabelsItem'))
milestone = fields.Nested('EventIssueMilestone')
number = fields.Integer()
pull_request = fields.Nested('EventIssuePull_request')
state = fields.String(validate=[OneOf(choices=['open', 'closed'], labels=[])])
title = fields.String()
updated_at = fields.String(description='ISO 8601 format: YYYY-MM-DDTHH:MM:SSZ')
url = fields.String()
user = fields.Nested('EventIssueUser')
class EventIssueUser(Schema):
avatar_url = fields.String()
gravatar_id = fields.String()
id = fields.Integer()
login = fields.String()
url = fields.String()
class EventIssuePull_request(Schema):
diff_url = fields.String()
html_url = fields.String()
patch_url = fields.String()
class EventIssueMilestone(Schema):
closed_issues = fields.Integer()
created_at = fields.String(description='ISO 8601 format: YYYY-MM-DDTHH:MM:SSZ')
creator = fields.Nested('EventIssueMilestoneCreator')
description = fields.String()
due_on = fields.String(description='ISO 8601 format: YYYY-MM-DDTHH:MM:SSZ')
number = fields.Integer()
open_issues = fields.Integer()
state = fields.String(validate=[OneOf(choices=['open', 'closed'], labels=[])])
title = fields.String()
url = fields.String()
class EventIssueMilestoneCreator(Schema):
avatar_url = fields.String()
gravatar_id = fields.String()
id = fields.Integer()
login = fields.String()
url = fields.String()
class EventIssueLabelsItem(Schema):
color = fields.String()
name = fields.String()
url = fields.String()
class EventIssueAssignee(Schema):
avatar_url = fields.String()
gravatar_id = fields.String()
id = fields.Integer()
login = fields.String()
url = fields.String()
class EventActor(Schema):
avatar_url = fields.String()
gravatar_id = fields.String()
id = fields.Integer()
login = fields.String()
url = fields.String()
class Events(Schema):
actor = fields.Nested('EventsActor')
created_at = fields.Field()
id = fields.Integer()
org = fields.Nested('EventsOrg')
payload = fields.Nested('EventsPayload')
public = fields.Boolean()
repo = fields.Nested('EventsRepo')
type = fields.String()
class EventsRepo(Schema):
id = fields.Integer()
name = fields.String()
url = fields.String()
class EventsPayload(Schema):
pass
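# EventsPayload is intentionally empty: the payload shape varies by event type, so
# the generator emits a schema with no declared fields. Marshmallow 3 raises on
# unknown keys by default, so callers loading raw payloads may want
# EventsPayload(unknown=INCLUDE) -- noted here as an assumption about intended use,
# not something the generated code enforces.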
class EventsOrg(Schema):
avatar_url = fields.String()
gravatar_id = fields.String()
id = fields.Integer()
login = fields.String()
url = fields.String()
class EventsActor(Schema):
avatar_url = fields.String()
gravatar_id = fields.String()
id = fields.Integer()
login = fields.String()
url = fields.String()
class Feeds(Schema):
_links = fields.Nested('Feeds_links')
current_user_actor_url = fields.String()
current_user_organization_url = fields.String()
current_user_public = fields.String()
current_user_url = fields.String()
timeline_url = fields.String()
user_url = fields.String()
class Feeds_links(Schema):
current_user = fields.Nested('Feeds_linksCurrent_user')
current_user_actor = fields.Nested('Feeds_linksCurrent_user_actor')
current_user_organization = fields.Nested('Feeds_linksCurrent_user_organization')
current_user_public = fields.Nested('Feeds_linksCurrent_user_public')
timeline = fields.Nested('Feeds_linksTimeline')
user = fields.Nested('Feeds_linksUser')
class Feeds_linksUser(Schema):
href = fields.String()
type = fields.String()
class Feeds_linksTimeline(Schema):
href = fields.String()
type = fields.String()
class Feeds_linksCurrent_user_public(Schema):
href = fields.String()
type = fields.String()
class Feeds_linksCurrent_user_organization(Schema):
href = fields.String()
type = fields.String()
class Feeds_linksCurrent_user_actor(Schema):
href = fields.String()
type = fields.String()
class Feeds_linksCurrent_user(Schema):
href = fields.String()
type = fields.String()
class Fork(Schema):
clone_url = fields.String()
created_at = fields.String(description='ISO 8601 format: YYYY-MM-DDTHH:MM:SSZ')
description = fields.String()
fork = fields.Boolean()
forks = fields.Integer()
forks_count = fields.Integer()
full_name = fields.String()
git_url = fields.String()
homepage = fields.String()
html_url = fields.String()
id = fields.Integer()
language = fields.String()
master_branch = fields.String()
mirror_url = fields.String()
name = fields.String()
open_issues = fields.Integer()
open_issues_count = fields.Integer()
owner = fields.Nested('ForkOwner')
private = fields.Boolean()
pushed_at = fields.String(description='ISO 8601 format: YYYY-MM-DDTHH:MM:SSZ')
size = fields.Integer()
ssh_url = fields.String()
svn_url = fields.String()
updated_at = fields.String(description='ISO 8601 format: YYYY-MM-DDTHH:MM:SSZ')
url = fields.String()
watchers = fields.Integer()
watchers_count = fields.Integer()
class ForkOwner(Schema):
avatar_url = fields.String()
gravatar_id = fields.String()
id = fields.Integer()
login = fields.String()
url = fields.String()
class ForkBody(Schema):
organization = fields.String()
class ForksItem(Schema):
clone_url = fields.String()
created_at = fields.String(description='ISO 8601 format: YYYY-MM-DDTHH:MM:SSZ')
description = fields.String()
fork = fields.Boolean()
forks = fields.Integer()
forks_count = fields.Integer()
full_name = fields.String()
git_url = fields.String()
homepage = fields.String()
html_url = fields.String()
id = fields.Integer()
language = fields.String()
master_branch = fields.String()
mirror_url = fields.String()
name = fields.String()
open_issues = fields.Integer()
open_issues_count = fields.Integer()
owner = fields.Nested('ForksItemOwner')
private = fields.Boolean()
pushed_at = fields.String(description='ISO 8601 format: YYYY-MM-DDTHH:MM:SSZ')
size = fields.Integer()
ssh_url = fields.String()
svn_url = fields.String()
updated_at = fields.String(description='ISO 8601 format: YYYY-MM-DDTHH:MM:SSZ')
url = fields.String()
watchers = fields.Integer()
watchers_count = fields.Integer()
class ForksItemOwner(Schema):
avatar_url = fields.String()
gravatar_id = fields.String()
id = fields.Integer()
login = fields.String()
url = fields.String()
class Gist(Schema):
comments = fields.Integer()
comments_url = fields.String()
created_at = fields.String(description='Timestamp in ISO 8601 format: YYYY-MM-DDTHH:MM:SSZ.')
description = fields.String()
files = fields.Nested('GistFiles')
forks = fields.List(fields.Nested('GistForksItem'))
git_pull_url = fields.String()
git_push_url = fields.String()
history = fields.List(fields.Nested('GistHistoryItem'))
html_url = fields.String()
id = fields.String()
public = fields.Boolean()
url = fields.String()
user = fields.Nested('GistUser')
class GistUser(Schema):
avatar_url = fields.String()
gravatar_id = fields.String()
id = fields.Integer()
login = fields.String()
url = fields.String()
class GistHistoryItem(Schema):
change_status = fields.Nested('GistHistoryItemChange_status')
committed_at = fields.String(description='Timestamp in ISO 8601 format: YYYY-MM-DDTHH:MM:SSZ.')
url = fields.String()
user = fields.Nested('GistHistoryItemUser')
version = fields.String()
class GistHistoryItemUser(Schema):
avatar_url = fields.String()
gravatar_id = fields.String()
id = fields.Integer()
login = fields.String()
url = fields.String()
class GistHistoryItemChange_status(Schema):
additions = fields.Integer()
deletions = fields.Integer()
total = fields.Integer()
class GistForksItem(Schema):
created_at = fields.String(description='Timestamp in ISO 8601 format: YYYY-MM-DDTHH:MM:SSZ.')
url = fields.String()
user = fields.Nested('GistForksItemUser')
class GistForksItemUser(Schema):
avatar_url = fields.String()
gravatar_id = fields.String()
id = fields.Integer()
login = fields.String()
url = fields.String()
class GistFiles(Schema):
ringerl = fields.Nested('GistFilesRingerl', data_key='ring.erl')
class GistFilesRingerl(Schema):
filename = fields.String()
raw_url = fields.String()
size = fields.Integer()
class GistsItem(Schema):
comments = fields.Integer()
comments_url = fields.String()
created_at = fields.String()
description = fields.String()
files = fields.Nested('GistsItemFiles')
git_pull_url = fields.String()
git_push_url = fields.String()
html_url = fields.String()
id = fields.String()
public = fields.Boolean()
url = fields.String()
user = fields.Nested('GistsItemUser')
class GistsItemUser(Schema):
avatar_url = fields.String()
gravatar_id = fields.String()
id = fields.Integer()
login = fields.String()
url = fields.String()
class GistsItemFiles(Schema):
ringerl = fields.Nested('GistsItemFilesRingerl', data_key='ring.erl')
class GistsItemFilesRingerl(Schema):
filename = fields.String()
raw_url = fields.String()
size = fields.Integer()
class GitCommit(Schema):
author = fields.Nested('GitCommitAuthor')
message = fields.String()
parents = fields.String()
tree = fields.String()
class GitCommitAuthor(Schema):
date = fields.String()
email = fields.String()
name = fields.String()
class GitRefPatch(Schema):
force = fields.Boolean()
sha = fields.String()
class GitignoreItem(Schema):
pass
class Gitignore_lang(Schema):
name = fields.String()
source = fields.String()
class HeadBranch(Schema):
object = fields.Nested('HeadBranchObject')
ref = fields.String()
url = fields.String()
class HeadBranchObject(Schema):
sha = fields.String()
type = fields.String()
url = fields.String()
class HeadBranchBody(Schema):
force = fields.Boolean(required=True, description='Boolean indicating whether to force the update or to make sure the update is a fast-forward update. The default is false, so leaving this out or setting it to false will make sure you’re not overwriting work.')
sha = fields.String(required=True, description='String of the SHA1 value to set this reference to.')
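# Both fields above are required, so an update payload missing either one fails
# validation. Sketch (assumption: marshmallow 3.x; the SHA below is fabricated):
#
#     HeadBranchBody().load({'sha': 'aa218f56b14c9653891f9e74264a383fa43fefbd', 'force': True})  # accepted
#     HeadBranchBody().load({'force': True})  # raises ValidationError ('sha' is required)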
class HeadsItem(Schema):
commit = fields.Nested('HeadsItemCommit')
name = fields.String()
tarball_url = fields.String()
zipball_url = fields.String()
class HeadsItemCommit(Schema):
sha = fields.String()
url = fields.String()
class HookItem(Schema):
active = fields.Boolean()
config = fields.Nested('HookItemConfig')
created_at = fields.String(description='ISO 8601 format: YYYY-MM-DDTHH:MM:SSZ')
events = fields.List(fields.Nested('HookItemEventsItem', validate=[OneOf(choices=['push', 'issues', 'issue_comment', 'commit_comment', 'pull_request', 'pull_request_review_comment', 'gollum', 'watch', 'download', 'fork', 'fork_apply', 'member', 'public', 'team_add', 'status'], labels=[])]))
id = fields.Integer()
name = fields.String()
updated_at = fields.String(description='ISO 8601 format: YYYY-MM-DDTHH:MM:SSZ')
url = fields.String()
class HookItemEventsItem(Schema):
pass
class HookItemConfig(Schema):
content_type = fields.String()
url = fields.String()
class HookBody(Schema):
active = fields.Boolean()
add_events = fields.List(fields.String())
class Issue(Schema):
assignee = fields.String()
body = fields.String()
labels = fields.List(fields.String())
milestone = fields.Number()
title = fields.String()
class IssueBody(Schema):
assignee = fields.String()
body = fields.String()
labels = fields.List(fields.String())
milestone = fields.Number()
title = fields.String()
class IssuesItem(Schema):
assignee = fields.Nested('IssuesItemAssignee')
body = fields.String()
closed_at = fields.String(description='ISO 8601 format: YYYY-MM-DDTHH:MM:SSZ')
comments = fields.Integer()
created_at = fields.String(description='ISO 8601 format: YYYY-MM-DDTHH:MM:SSZ')
html_url = fields.String()
labels = fields.List(fields.Nested('IssuesItemLabelsItem'))
milestone = fields.Nested('IssuesItemMilestone')
number = fields.Integer()
pull_request = fields.Nested('IssuesItemPull_request')
state = fields.String(validate=[OneOf(choices=['open', 'closed'], labels=[])])
title = fields.String()
updated_at = fields.String(description='ISO 8601 format: YYYY-MM-DDTHH:MM:SSZ')
url = fields.String()
user = fields.Nested('IssuesItemUser')
class IssuesItemUser(Schema):
avatar_url = fields.String()
gravatar_id = fields.String()
id = fields.Integer()
login = fields.String()
url = fields.String()
class IssuesItemPull_request(Schema):
diff_url = fields.String()
html_url = fields.String()
patch_url = fields.String()
class IssuesItemMilestone(Schema):
closed_issues = fields.Integer()
created_at = fields.String(description='ISO 8601 format: YYYY-MM-DDTHH:MM:SSZ')
creator = fields.Nested('IssuesItemMilestoneCreator')
description = fields.String()
due_on = fields.String(description='ISO 8601 format: YYYY-MM-DDTHH:MM:SSZ')
number = fields.Integer()
open_issues = fields.Integer()
state = fields.String(validate=[OneOf(choices=['open', 'closed'], labels=[])])
title = fields.String()
url = fields.String()
class IssuesItemMilestoneCreator(Schema):
avatar_url = fields.String()
gravatar_id = fields.String()
id = fields.Integer()
login = fields.String()
url = fields.String()
class IssuesItemLabelsItem(Schema):
color = fields.String()
name = fields.String()
url = fields.String()
class IssuesItemAssignee(Schema):
avatar_url = fields.String()
gravatar_id = fields.String()
id = fields.Integer()
login = fields.String()
url = fields.String()
class IssuesComment(Schema):
body = fields.String()
created_at = fields.String(description='ISO 8601 format: YYYY-MM-DDTHH:MM:SSZ')
html_url = fields.String()
id = fields.Integer()
updated_at = fields.String(description='ISO 8601 format: YYYY-MM-DDTHH:MM:SSZ')
url = fields.String()
user = fields.Nested('IssuesCommentUser')
class IssuesCommentUser(Schema):
avatar_url = fields.String()
gravatar_id = fields.String()
id = fields.Integer()
login = fields.String()
url = fields.String()
class IssuesCommentsItem(Schema):
_links = fields.Nested('IssuesCommentsItem_links')
body = fields.String()
commit_id = fields.String()
created_at = fields.String(description='ISO 8601 format: YYYY-MM-DDTHH:MM:SSZ')
id = fields.Integer()
path = fields.String()
position = fields.Integer()
updated_at = fields.String(description='ISO 8601 format: YYYY-MM-DDTHH:MM:SSZ')
url = fields.String()
user = fields.Nested('IssuesCommentsItemUser')
class IssuesCommentsItemUser(Schema):
avatar_url = fields.String()
gravatar_id = fields.String()
id = fields.Integer()
login = fields.String()
url = fields.String()
class IssuesCommentsItem_links(Schema):
html = fields.Nested('IssuesCommentsItem_linksHtml')
pull_request = fields.Nested('IssuesCommentsItem_linksPull_request')
self = fields.Nested('IssuesCommentsItem_linksSelf')
class IssuesCommentsItem_linksSelf(Schema):
href = fields.String()
class IssuesCommentsItem_linksPull_request(Schema):
href = fields.String()
class IssuesCommentsItem_linksHtml(Schema):
href = fields.String()
class Key(Schema):
id = fields.Integer()
key = fields.String()
title = fields.String()
url = fields.String()
class KeyBody(Schema):
key = fields.String()
title = fields.String()
class KeysItem(Schema):
id = fields.Integer()
key = fields.String()
title = fields.String()
url = fields.String()
class Label(Schema):
color = fields.String(validate=[Length(min=6, max=6, equal=None)])
name = fields.String()
url = fields.String()
class LabelsItem(Schema):
color = fields.String(validate=[Length(min=6, max=6, equal=None)])
name = fields.String()
url = fields.String()
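# The Length(min=6, max=6) validator on 'color' above enforces a six-character
# value, matching GitHub's hex label colors without the leading '#'. Sketch
# (assumption: marshmallow 3.x; values are illustrative):
#
#     Label().load({'color': 'f29513', 'name': 'bug'})  # accepted
#     Label().load({'color': 'f29', 'name': 'bug'})     # raises ValidationError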
class Languages(AdditionalPropertiesSchema):
class Meta:
additional_field = fields.Integer()
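# Languages subclasses AdditionalPropertiesSchema (a helper provided by the code
# that generated this module, not part of marshmallow itself), so arbitrary keys
# are accepted and each extra value is handled by Meta.additional_field. The
# intended payload is GitHub's languages response, which maps language names to
# byte counts, e.g. {'C': 78769, 'Python': 7769} -- this interpretation is an
# assumption based on the generator's convention.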
class Markdown(Schema):
context = fields.String()
mode = fields.String()
text = fields.String()
class MembersItem(Schema):
avatar_url = fields.String()
gravatar_id = fields.String()
id = fields.Integer()
login = fields.String()
url = fields.String()
class Merge(Schema):
merged = fields.Boolean()
message = fields.String()
sha = fields.String()
class MergePullBody(Schema):
commit_message = fields.String()
class MergesBody(Schema):
base = fields.String()
commit_message = fields.String()
head = fields.String()
class MergesConflict(Schema):
message = fields.String(description='Error message')
class MergesSuccessful(Schema):
author = fields.Nested('MergesSuccessfulAuthor')
comments_url = fields.String()
commit = fields.Nested('MergesSuccessfulCommit')
committer = fields.Nested('MergesSuccessfulCommitter')
merged = fields.Boolean()
message = fields.String()
parents = fields.List(fields.Nested('MergesSuccessfulParentsItem'))
sha = fields.String()
url = fields.String()
class MergesSuccessfulParentsItem(Schema):
sha = fields.String()
url = fields.String()
class MergesSuccessfulCommitter(Schema):
avatar_url = fields.String()
events_url = fields.String()
followers_url = fields.String()
following_url = fields.String()
gists_url = fields.String()
gravatar_id = fields.String()
html_url = fields.String()
id = fields.Integer()
login = fields.String()
organizations_url = fields.String()
received_events_url = fields.String()
repos_url = fields.String()
starred_url = fields.String()
subscriptions_url = fields.String()
type = fields.String()
url = fields.String()
class MergesSuccessfulCommit(Schema):
author = fields.Nested('MergesSuccessfulCommitAuthor')
comment_count = fields.Integer()
committer = fields.Nested('MergesSuccessfulCommitCommitter')
message = fields.String()
tree = fields.Nested('MergesSuccessfulCommitTree')
url = fields.String()
class MergesSuccessfulCommitTree(Schema):
sha = fields.String()
url = fields.String()
class MergesSuccessfulCommitCommitter(Schema):
date = fields.String()
email = fields.String()
name = fields.String()
class MergesSuccessfulCommitAuthor(Schema):
date = fields.String()
email = fields.String()
name = fields.String()
class MergesSuccessfulAuthor(Schema):
avatar_url = fields.String()
events_url = fields.String()
followers_url = fields.String()
following_url = fields.String()
gists_url = fields.String()
gravatar_id = fields.String()
html_url = fields.String()
id = fields.Integer()
login = fields.String()
organizations_url = fields.String()
received_events_url = fields.String()
repos_url = fields.String()
starred_url = fields.String()
subscriptions_url = fields.String()
type = fields.String()
url = fields.String()
class Meta(Schema):
git = fields.List(fields.String(description='An array of IP addresses in CIDR format specifying the Git servers at GitHub.'))
hooks = fields.List(fields.String(description='An array of IP addresses in CIDR format specifying the addresses that incoming service hooks will originate from.'))
class Milestone(Schema):
closed_issues = fields.Integer()
created_at = fields.String(description='ISO 8601 format: YYYY-MM-DDTHH:MM:SSZ')
creator = fields.Nested('MilestoneCreator')
description = fields.String()
due_on = fields.String(description='ISO 8601 format: YYYY-MM-DDTHH:MM:SSZ')
number = fields.Integer()
open_issues = fields.Integer()
state = fields.String(validate=[OneOf(choices=['open', 'closed'], labels=[])])
title = fields.String()
url = fields.String()
class MilestoneCreator(Schema):
avatar_url = fields.String()
gravatar_id = fields.String()
id = fields.Integer()
login = fields.String()
url = fields.String()
class MilestoneBody(Schema):
description = fields.String()
due_on = fields.String()
state = fields.String()
title = fields.String()
class MilestoneUpdate(Schema):
description = fields.String()
due_on = fields.String()
state = fields.String()
title = fields.String()
class NotificationMarkRead(Schema):
last_read_at = fields.String()
class Notifications(Schema):
id = fields.Integer()
last_read_at = fields.String()
reason = fields.String()
repository = fields.Nested('NotificationsRepository')
subject = fields.Nested('NotificationsSubject')
unread = fields.Boolean()
updated_at = fields.String()
url = fields.String()
class NotificationsSubject(Schema):
latest_comment_url = fields.String()
title = fields.String()
type = fields.String()
url = fields.String()
class NotificationsRepository(Schema):
description = fields.String()
fork = fields.Boolean()
full_name = fields.String()
html_url = fields.String()
id = fields.Integer()
name = fields.String()
owner = fields.Nested('NotificationsRepositoryOwner')
private = fields.Boolean()
url = fields.String()
class NotificationsRepositoryOwner(Schema):
avatar_url = fields.String()
gravatar_id = fields.String()
id = fields.Integer()
login = fields.String()
url = fields.String()
class OrgMembersItem(Schema):
avatar_url = fields.String()
gravatar_id = fields.String()
id = fields.Integer()
login = fields.String()
url = fields.String()
class OrgPublicMembersItem(Schema):
avatar_url = fields.String()
gravatar_id = fields.String()
id = fields.Integer()
login = fields.String()
url = fields.String()
class OrgTeamsItem(Schema):
id = fields.Integer()
name = fields.String()
url = fields.String()
class OrgTeamsPost(Schema):
name = fields.String(required=True)
permission = fields.String(validate=[OneOf(choices=['pull', 'push', 'admin'], labels=[])])
repo_names = fields.List(fields.String())
class Organization(Schema):
avatar_url = fields.String()
blog = fields.String()
company = fields.String()
created_at = fields.String(description='ISO 8601 format: YYYY-MM-DDTHH:MM:SSZ')
email = fields.String()
followers = fields.Integer()
following = fields.Integer()
html_url = fields.String()
id = fields.Integer()
location = fields.String()
login = fields.String()
name = fields.String()
public_gists = fields.Integer()
public_repos = fields.Integer()
type = fields.String()
url = fields.String()
class OrganizationAsTeamMember(Schema):
errors = fields.List(fields.Nested('OrganizationAsTeamMemberErrorsItem'))
message = fields.String()
class OrganizationAsTeamMemberErrorsItem(Schema):
code = fields.String()
field = fields.String()
resource = fields.String()
class ParticipationStats(Schema):
all = fields.List(fields.Integer())
owner = fields.List(fields.Integer())
class PatchGist(Schema):
description = fields.String()
files = fields.Nested('PatchGistFiles')
class PatchGistFiles(Schema):
delete_this_filetxt = fields.String(data_key='delete_this_file.txt')
file1txt = fields.Nested('PatchGistFilesFile1txt', data_key='file1.txt')
new_filetxt = fields.Nested('PatchGistFilesNew_filetxt', data_key='new_file.txt')
old_nametxt = fields.Nested('PatchGistFilesOld_nametxt', data_key='old_name.txt')
class PatchGistFilesOld_nametxt(Schema):
content = fields.String()
filename = fields.String()
class PatchGistFilesNew_filetxt(Schema):
content = fields.String()
class PatchGistFilesFile1txt(Schema):
content = fields.String()
class PatchOrg(Schema):
billing_email = fields.String(description='Billing email address. This address is not publicized.')
company = fields.String()
email = fields.String(description='Publicly visible email address.')
location = fields.String()
name = fields.String()
class PostComment(Schema):
body = fields.String(required=True)
class PostGist(Schema):
description = fields.String()
files = fields.Nested('PostGistFiles')
public = fields.Boolean()
class PostGistFiles(Schema):
file1txt = fields.Nested('PostGistFilesFile1txt', data_key='file1.txt')
class PostGistFilesFile1txt(Schema):
content = fields.String()
class PostRepo(Schema):
auto_init = fields.Boolean(description='True to create an initial commit with an empty README. Default is false.')
description = fields.String()
gitignore_template = fields.String(description='Desired language or platform .gitignore template to apply. Use the name of the template without the extension. For example, "Haskell". Ignored if the auto_init parameter is not provided.')
has_downloads = fields.Boolean(description='True to enable downloads for this repository, false to disable them. Default is true.')
has_issues = fields.Boolean(description='True to enable issues for this repository, false to disable them. Default is true.')
has_wiki = fields.Boolean(description='True to enable the wiki for this repository, false to disable it. Default is true.')
homepage = fields.String()
name = fields.String(required=True)
private = fields.Boolean(description='True to create a private repository, false to create a public one. Creating private repositories requires a paid GitHub account.')
team_id = fields.Integer(description='The id of the team that will be granted access to this repository. This is only valid when creating a repo in an organization.')
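# Sketch of validating a repository-creation body with the schema above
# (assumption: marshmallow 3.x; the payload is illustrative):
#
#     PostRepo().load({'name': 'hello-world', 'private': True, 'auto_init': True})  # accepted
#     PostRepo().load({'description': 'missing name'})  # raises ValidationError ('name' is required)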
class PullRequest(Schema):
_links = fields.Nested('PullRequest_links')
additions = fields.Integer()
base = fields.Nested('PullRequestBase')
body = fields.String()
changed_files = fields.Integer()
closed_at = fields.String()
comments = fields.Integer()
commits = fields.Integer()
created_at = fields.String()
deletions = fields.Integer()
diff_url = fields.String()
head = fields.Nested('PullRequestHead')
html_url = fields.String()
issue_url = fields.String()
merge_commit_sha = fields.String()
mergeable = fields.Boolean()
merged = fields.Boolean()
merged_at = fields.String()
merged_by = fields.Nested('PullRequestMerged_by')
number = fields.Integer()
patch_url = fields.String()
state = fields.String()
title = fields.String()
updated_at = fields.String()
url = fields.String()
user = fields.Nested('PullRequestUser')
class PullRequestUser(Schema):
avatar_url = fields.String()
gravatar_id = fields.String()
id = fields.Integer()
login = fields.String()
url = fields.String()
class PullRequestMerged_by(Schema):
avatar_url = fields.String()
gravatar_id = fields.String()
id = fields.Integer()
login = fields.String()
url = fields.String()
class PullRequestHead(Schema):
label = fields.String()
ref = fields.String()
repo = fields.Nested('PullRequestHeadRepo')
sha = fields.String()
user = fields.Nested('PullRequestHeadUser')
class PullRequestHeadUser(Schema):
avatar_url = fields.String()
gravatar_id = fields.String()
id = fields.Integer()
login = fields.String()
url = fields.String()
class PullRequestHeadRepo(Schema):
clone_url = fields.String()
created_at = fields.String()
description = fields.String()
fork = fields.Boolean()
forks = fields.Integer()
forks_count = fields.Integer()
full_name = fields.String()
git_url = fields.String()
homepage = fields.String()
html_url = fields.String()
id = fields.Integer()
language = fields.Field()
master_branch = fields.String()
mirror_url = fields.String()
name = fields.String()
open_issues = fields.Integer()
open_issues_count = fields.Integer()
owner = fields.Nested('PullRequestHeadRepoOwner')
private = fields.Boolean()
pushed_at = fields.String()
size = fields.Integer()
ssh_url = fields.String()
svn_url = fields.String()
updated_at = fields.String()
url = fields.String()
watchers = fields.Integer()
watchers_count = fields.Integer()
class PullRequestHeadRepoOwner(Schema):
avatar_url = fields.String()
gravatar_id = fields.String()
id = fields.Integer()
login = fields.String()
url = fields.String()
class PullRequestBase(Schema):
label = fields.String()
ref = fields.String()
repo = fields.Nested('PullRequestBaseRepo')
sha = fields.String()
user = fields.Nested('PullRequestBaseUser')
class PullRequestBaseUser(Schema):
avatar_url = fields.String()
gravatar_id = fields.String()
id = fields.Integer()
login = fields.String()
url = fields.String()
class PullRequestBaseRepo(Schema):
clone_url = fields.String()
created_at = fields.String()
description = fields.String()
fork = fields.Boolean()
forks = fields.Integer()
forks_count = fields.Integer()
full_name = fields.String()
git_url = fields.String()
homepage = fields.String()
html_url = fields.String()
id = fields.Integer()
language = fields.Field()
master_branch = fields.String()
mirror_url = fields.String()
name = fields.String()
open_issues = fields.Integer()
open_issues_count = fields.Integer()
owner = fields.Nested('PullRequestBaseRepoOwner')
private = fields.Boolean()
pushed_at = fields.String()
size = fields.Integer()
ssh_url = fields.String()
svn_url = fields.String()
updated_at = fields.String()
url = fields.String()
watchers = fields.Integer()
watchers_count = fields.Integer()
class PullRequestBaseRepoOwner(Schema):
avatar_url = fields.String()
gravatar_id = fields.String()
id = fields.Integer()
login = fields.String()
url = fields.String()
class PullRequest_links(Schema):
comments = fields.Nested('PullRequest_linksComments')
html = fields.Nested('PullRequest_linksHtml')
review_comments = fields.Nested('PullRequest_linksReview_comments')
self = fields.Nested('PullRequest_linksSelf')
class PullRequest_linksSelf(Schema):
href = fields.String()
class PullRequest_linksReview_comments(Schema):
href = fields.String()
class PullRequest_linksHtml(Schema):
href = fields.String()
class PullRequest_linksComments(Schema):
href = fields.String()
class PullUpdate(Schema):
body = fields.String()
state = fields.String()
title = fields.String()
class PullsItem(Schema):
_links = fields.Nested('PullsItem_links')
base = fields.Nested('PullsItemBase')
body = fields.String()
closed_at = fields.String(description='ISO 8601 format: YYYY-MM-DDTHH:MM:SSZ')
created_at = fields.String(description='ISO 8601 format: YYYY-MM-DDTHH:MM:SSZ')
diff_url = fields.String()
head = fields.Nested('PullsItemHead')
html_url = fields.String()
issue_url = fields.String()
merged_at = fields.String(description='ISO 8601 format: YYYY-MM-DDTHH:MM:SSZ')
number = fields.Integer()
patch_url = fields.String()
state = fields.String(validate=[OneOf(choices=['open', 'closed'], labels=[])])
title = fields.String()
updated_at = fields.String(description='ISO 8601 format: YYYY-MM-DDTHH:MM:SSZ')
url = fields.String()
user = fields.Nested('PullsItemUser')
class PullsItemUser(Schema):
avatar_url = fields.String()
gravatar_id = fields.String()
id = fields.Integer()
login = fields.String()
url = fields.String()
class PullsItemHead(Schema):
label = fields.String()
ref = fields.String()
repo = fields.Nested('PullsItemHeadRepo')
sha = fields.String()
user = fields.Nested('PullsItemHeadUser')
class PullsItemHeadUser(Schema):
avatar_url = fields.String()
gravatar_id = fields.String()
id = fields.Integer()
login = fields.String()
url = fields.String()
class PullsItemHeadRepo(Schema):
clone_url = fields.String()
created_at = fields.String(description='ISO 8601 format: YYYY-MM-DDTHH:MM:SSZ')
description = fields.String()
fork = fields.Boolean()
forks = fields.Integer()
forks_count = fields.Integer()
full_name = fields.String()
git_url = fields.String()
homepage = fields.String()
html_url = fields.String()
id = fields.Integer()
language = fields.String()
master_branch = fields.String()
mirror_url = fields.String()
name = fields.String()
open_issues = fields.Integer()
open_issues_count = fields.Integer()
owner = fields.Nested('PullsItemHeadRepoOwner')
private = fields.Boolean()
pushed_at = fields.String(description='ISO 8601 format: YYYY-MM-DDTHH:MM:SSZ')
size = fields.Integer()
ssh_url = fields.String()
svn_url = fields.String()
updated_at = fields.String(description='ISO 8601 format: YYYY-MM-DDTHH:MM:SSZ')
url = fields.String()
watchers = fields.Integer()
watchers_count = fields.Integer()
class PullsItemHeadRepoOwner(Schema):
avatar_url = fields.String()
gravatar_id = fields.String()
id = fields.Integer()
login = fields.String()
url = fields.String()
class PullsItemBase(Schema):
label = fields.String()
ref = fields.String()
repo = fields.Nested('PullsItemBaseRepo')
sha = fields.String()
user = fields.Nested('PullsItemBaseUser')
class PullsItemBaseUser(Schema):
avatar_url = fields.String()
gravatar_id = fields.String()
id = fields.Integer()
login = fields.String()
url = fields.String()
class PullsItemBaseRepo(Schema):
clone_url = fields.String()
created_at = fields.String(description='ISO 8601 format: YYYY-MM-DDTHH:MM:SSZ')
description = fields.String()
fork = fields.Boolean()
forks = fields.Integer()
forks_count = fields.Integer()
full_name = fields.String()
git_url = fields.String()
homepage = fields.String()
html_url = fields.String()
id = fields.Integer()
language = fields.String()
master_branch = fields.String()
mirror_url = fields.String()
name = fields.String()
open_issues = fields.Integer()
open_issues_count = fields.Integer()
owner = fields.Nested('PullsItemBaseRepoOwner')
private = fields.Boolean()
pushed_at = fields.String(description='ISO 8601 format: YYYY-MM-DDTHH:MM:SSZ')
size = fields.Integer()
ssh_url = fields.String()
svn_url = fields.String()
updated_at = fields.String(description='ISO 8601 format: YYYY-MM-DDTHH:MM:SSZ')
url = fields.String()
watchers = fields.Integer()
watchers_count = fields.Integer()
class PullsItemBaseRepoOwner(Schema):
avatar_url = fields.String()
gravatar_id = fields.String()
id = fields.Integer()
login = fields.String()
url = fields.String()
class PullsItem_links(Schema):
comments = fields.Nested('PullsItem_linksComments')
html = fields.Nested('PullsItem_linksHtml')
review_comments = fields.Nested('PullsItem_linksReview_comments')
self = fields.Nested('PullsItem_linksSelf')
class PullsItem_linksSelf(Schema):
href = fields.String()
class PullsItem_linksReview_comments(Schema):
href = fields.String()
class PullsItem_linksHtml(Schema):
href = fields.String()
class PullsItem_linksComments(Schema):
href = fields.String()
class PullsComment(Schema):
_links = fields.Nested('PullsComment_links')
body = fields.String()
commit_id = fields.String()
created_at = fields.String(description='ISO 8601 format: YYYY-MM-DDTHH:MM:SSZ')
id = fields.Integer()
path = fields.String()
position = fields.Integer()
updated_at = fields.String(description='ISO 8601 format: YYYY-MM-DDTHH:MM:SSZ')
url = fields.String()
user = fields.Nested('PullsCommentUser')
class PullsCommentUser(Schema):
avatar_url = fields.String()
gravatar_id = fields.String()
id = fields.Integer()
login = fields.String()
url = fields.String()
class PullsComment_links(Schema):
html = fields.Nested('PullsComment_linksHtml')
pull_request = fields.Nested('PullsComment_linksPull_request')
self = fields.Nested('PullsComment_linksSelf')
class PullsComment_linksSelf(Schema):
href = fields.String()
class PullsComment_linksPull_request(Schema):
href = fields.String()
class PullsComment_linksHtml(Schema):
href = fields.String()
class PullsCommentPost(Schema):
body = fields.String()
commit_id = fields.String()
path = fields.String()
position = fields.Number()
class PullsCommentsItem(Schema):
_links = fields.Nested('PullsCommentsItem_links')
body = fields.String()
commit_id = fields.String()
created_at = fields.String(description='ISO 8601 format: YYYY-MM-DDTHH:MM:SSZ')
id = fields.Integer()
path = fields.String()
position = fields.Integer()
updated_at = fields.String(description='ISO 8601 format: YYYY-MM-DDTHH:MM:SSZ')
url = fields.String()
user = fields.Nested('PullsCommentsItemUser')
class PullsCommentsItemUser(Schema):
avatar_url = fields.String()
gravatar_id = fields.String()
id = fields.Integer()
login = fields.String()
url = fields.String()
class PullsCommentsItem_links(Schema):
html = fields.Nested('PullsCommentsItem_linksHtml')
pull_request = fields.Nested('PullsCommentsItem_linksPull_request')
self = fields.Nested('PullsCommentsItem_linksSelf')
class PullsCommentsItem_linksSelf(Schema):
href = fields.String()
class PullsCommentsItem_linksPull_request(Schema):
href = fields.String()
class PullsCommentsItem_linksHtml(Schema):
href = fields.String()
class PullsPost(Schema):
base = fields.String()
body = fields.String()
head = fields.String()
title = fields.String()
class PutSubscription(Schema):
created_at = fields.String()
ignored = fields.Boolean()
reason = fields.Field()
subscribed = fields.Boolean()
thread_url = fields.String()
url = fields.String()
class Rate_limit(Schema):
rate = fields.Nested('Rate_limitRate')
class Rate_limitRate(Schema):
limit = fields.Integer()
remaining = fields.Integer()
reset = fields.Integer()
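# Sketch showing how the nested rate-limit schemas round-trip a payload
# (assumption: marshmallow 3.x; numbers are illustrative):
#
#     Rate_limit().load({'rate': {'limit': 5000, 'remaining': 4999, 'reset': 1372700873}})
#     # -> {'rate': {'limit': 5000, 'remaining': 4999, 'reset': 1372700873}}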
class Readme(Schema):
_links = fields.Nested('Readme_links')
content = fields.String()
encoding = fields.String()
git_url = fields.String()
html_url = fields.String()
name = fields.String()
path = fields.String()
sha = fields.String()
size = fields.Integer()
type = fields.String()
url = fields.String()
class Readme_links(Schema):
git = fields.String()
html = fields.String()
self = fields.String()
class RefItem(Schema):
created_at = fields.String(description='ISO 8601 format: YYYY-MM-DDTHH:MM:SSZ')
creator = fields.Nested('RefItemCreator')
description = fields.String()
id = fields.Integer()
state = fields.String()
target_url = fields.String()
updated_at = fields.String(description='ISO 8601 format: YYYY-MM-DDTHH:MM:SSZ')
url = fields.String()
class RefItemCreator(Schema):
avatar_url = fields.String()
gravatar_id = fields.String()
id = fields.Integer()
login = fields.String()
url = fields.String()
class RefBody(Schema):
object = fields.Nested('RefBodyObject')
ref = fields.String()
url = fields.String()
class RefBodyObject(Schema):
sha = fields.String()
type = fields.String()
url = fields.String()
class RefStatusItem(Schema):
commit_url = fields.String()
name = fields.String()
repository_url = fields.String()
sha = fields.String()
state = fields.String()
statuses = fields.List(fields.Nested('RefStatusItemStatusesItem'))
class RefStatusItemStatusesItem(Schema):
context = fields.String()
created_at = fields.String()
description = fields.String()
id = fields.Number()
state = fields.String()
target_url = fields.String()
updated_at = fields.String()
url = fields.String()
class RefsItem(Schema):
object = fields.Nested('RefsItemObject')
ref = fields.String()
url = fields.String()
class RefsItemObject(Schema):
sha = fields.String()
type = fields.String()
url = fields.String()
class RefsBody(Schema):
ref = fields.String()
sha = fields.String()
class Release(Schema):
assets = fields.List(fields.Nested('ReleaseAssetsItem'))
assets_url = fields.String()
author = fields.Nested('ReleaseAuthor')
body = fields.String()
created_at = fields.String()
draft = fields.Boolean()
html_url = fields.String()
id = fields.Integer()
name = fields.String()
prerelease = fields.Boolean()
published_at = fields.String()
tag_name = fields.String()
tarball_url = fields.String()
target_commitish = fields.String()
upload_url = fields.String()
url = fields.String()
zipball_url = fields.String()
class ReleaseAuthor(Schema):
avatar_url = fields.String()
events_url = fields.String()
followers_url = fields.String()
following_url = fields.String()
gists_url = fields.String()
gravatar_id = fields.String()
html_url = fields.String()
id = fields.Integer()
login = fields.String()
organizations_url = fields.String()
received_events_url = fields.String()
repos_url = fields.String()
site_admin = fields.Boolean()
starred_url = fields.String()
subscriptions_url = fields.String()
type = fields.String()
url = fields.String()
class ReleaseAssetsItem(Schema):
content_type = fields.String()
created_at = fields.String()
download_count = fields.Integer()
id = fields.Integer()
label = fields.String()
name = fields.String()
size = fields.Integer()
state = fields.String()
updated_at = fields.String()
uploader = fields.Nested('ReleaseAssetsItemUploader')
url = fields.String()
class ReleaseAssetsItemUploader(Schema):
avatar_url = fields.String()
events_url = fields.String()
followers_url = fields.String()
following_url = fields.String()
gists_url = fields.String()
gravatar_id = fields.String()
html_url = fields.String()
id = fields.Integer()
login = fields.String()
organizations_url = fields.String()
received_events_url = fields.String()
repos_url = fields.String()
site_admin = fields.Boolean()
starred_url = fields.String()
subscriptions_url = fields.String()
type = fields.String()
url = fields.String()
class Release_create(Schema):
body = fields.String()
draft = fields.Boolean()
name = fields.String()
prerelease = fields.Boolean()
tag_name = fields.String()
target_commitish = fields.String()
class ReleasesItem(Schema):
assets = fields.List(fields.Nested('ReleasesItemAssetsItem'))
assets_url = fields.String()
author = fields.Nested('ReleasesItemAuthor')
body = fields.String()
created_at = fields.String()
draft = fields.Boolean()
html_url = fields.String()
id = fields.Integer()
name = fields.String()
prerelease = fields.Boolean()
published_at = fields.String()
tag_name = fields.String()
tarball_url = fields.String()
target_commitish = fields.String()
upload_url = fields.String()
url = fields.String()
zipball_url = fields.String()
class ReleasesItemAuthor(Schema):
avatar_url = fields.String()
events_url = fields.String()
followers_url = fields.String()
following_url = fields.String()
gists_url = fields.String()
gravatar_id = fields.String()
html_url = fields.String()
id = fields.Integer()
login = fields.String()
organizations_url = fields.String()
received_events_url = fields.String()
repos_url = fields.String()
site_admin = fields.Boolean()
starred_url = fields.String()
subscriptions_url = fields.String()
type = fields.String()
url = fields.String()
class ReleasesItemAssetsItem(Schema):
content_type = fields.String()
created_at = fields.String()
download_count = fields.Integer()
id = fields.Integer()
label = fields.String()
name = fields.String()
size = fields.Integer()
state = fields.String()
updated_at = fields.String()
uploader = fields.Nested('ReleasesItemAssetsItemUploader')
url = fields.String()
class ReleasesItemAssetsItemUploader(Schema):
avatar_url = fields.String()
events_url = fields.String()
followers_url = fields.String()
following_url = fields.String()
gists_url = fields.String()
gravatar_id = fields.String()
html_url = fields.String()
id = fields.Integer()
login = fields.String()
organizations_url = fields.String()
received_events_url = fields.String()
repos_url = fields.String()
site_admin = fields.Boolean()
starred_url = fields.String()
subscriptions_url = fields.String()
type = fields.String()
url = fields.String()
class Repo(Schema):
clone_url = fields.String()
created_at = fields.String(description='ISO 8601 format: YYYY-MM-DDTHH:MM:SSZ')
description = fields.String()
fork = fields.Boolean()
forks = fields.Integer()
forks_count = fields.Integer()
full_name = fields.String()
git_url = fields.String()
has_downloads = fields.Boolean()
has_issues = fields.Boolean()
has_wiki = fields.Boolean()
homepage = fields.String()
html_url = fields.String()
id = fields.Integer()
language = fields.String()
master_branch = fields.String()
mirror_url = fields.String()
name = fields.String()
open_issues = fields.Integer()
open_issues_count = fields.Integer()
organization = fields.Nested('RepoOrganization')
owner = fields.Nested('RepoOwner')
parent = fields.Nested('RepoParent', description='Is present when the repo is a fork. Parent is the repo this repo was forked from.')
private = fields.Boolean()
pushed_at = fields.String(description='ISO 8601 format: YYYY-MM-DDTHH:MM:SSZ')
size = fields.Integer()
source = fields.Nested('RepoSource', description='Is present when the repo is a fork. Source is the ultimate source for the network.')
ssh_url = fields.String()
svn_url = fields.String()
updated_at = fields.String(description='ISO 8601 format: YYYY-MM-DDTHH:MM:SSZ')
url = fields.String()
watchers = fields.Integer()
watchers_count = fields.Integer()
class RepoSource(Schema):
"""Is present when the repo is a fork. Source is the ultimate source for the network."""
clone_url = fields.String()
created_at = fields.String(description='ISO 8601 format: YYYY-MM-DDTHH:MM:SSZ')
description = fields.String()
fork = fields.Boolean()
forks = fields.Integer()
forks_count = fields.Integer()
full_name = fields.String()
git_url = fields.String()
homepage = fields.String()
html_url = fields.String()
id = fields.Integer()
language = fields.String()
master_branch = fields.String()
mirror_url = fields.String()
name = fields.String()
open_issues = fields.Integer()
open_issues_count = fields.Integer()
owner = fields.Nested('RepoSourceOwner')
private = fields.Boolean()
pushed_at = fields.String(description='ISO 8601 format: YYYY-MM-DDTHH:MM:SSZ')
size = fields.Integer()
ssh_url = fields.String()
svn_url = fields.String()
updated_at = fields.String(description='ISO 8601 format: YYYY-MM-DDTHH:MM:SSZ')
url = fields.String()
watchers = fields.Integer()
watchers_count = fields.Integer()
class RepoSourceOwner(Schema):
avatar_url = fields.String()
gravatar_id = fields.String()
id = fields.Integer()
login = fields.String()
url = fields.String()
class RepoParent(Schema):
"""Is present when the repo is a fork. Parent is the repo this repo was forked from."""
clone_url = fields.String()
created_at = fields.String(description='ISO 8601 format: YYYY-MM-DDTHH:MM:SSZ')
description = fields.String()
fork = fields.Boolean()
forks = fields.Integer()
forks_count = fields.Integer()
full_name = fields.String()
git_url = fields.String()
homepage = fields.String()
html_url = fields.String()
id = fields.Integer()
language = fields.String()
master_branch = fields.String()
mirror_url = fields.String()
name = fields.String()
open_issues = fields.Integer()
open_issues_count = fields.Integer()
owner = fields.Nested('RepoParentOwner')
private = fields.Boolean()
pushed_at = fields.String(description='ISO 8601 format: YYYY-MM-DDTHH:MM:SSZ')
size = fields.Integer()
ssh_url = fields.String()
svn_url = fields.String()
updated_at = fields.String(description='ISO 8601 format: YYYY-MM-DDTHH:MM:SSZ')
url = fields.String()
watchers = fields.Integer()
watchers_count = fields.Integer()
class RepoParentOwner(Schema):
avatar_url = fields.String()
gravatar_id = fields.String()
id = fields.Integer()
login = fields.String()
url = fields.String()
class RepoOwner(Schema):
avatar_url = fields.String()
gravatar_id = fields.String()
id = fields.Integer()
login = fields.String()
url = fields.String()
class RepoOrganization(Schema):
avatar_url = fields.String()
gravatar_id = fields.String()
id = fields.Integer()
login = fields.String()
type = fields.String()
url = fields.String()
class Repo_deploymentsItem(Schema):
created_at = fields.String()
creator = fields.Nested('Repo_deploymentsItemCreator')
description = fields.String()
id = fields.Integer()
payload = fields.String()
sha = fields.String()
statuses_url = fields.String()
updated_at = fields.String()
url = fields.String()
class Repo_deploymentsItemCreator(Schema):
avatar_url = fields.String()
events_url = fields.String()
followers_url = fields.String()
following_url = fields.String()
gists_url = fields.String()
gravatar_id = fields.String()
html_url = fields.String()
id = fields.Integer()
login = fields.String()
organizations_url = fields.String()
received_events_url = fields.String()
repos_url = fields.String()
site_admin = fields.Boolean()
starred_url = fields.String()
subscriptions_url = fields.String()
type = fields.String()
url = fields.String()
class RepoCommentsItem(Schema):
body = fields.String()
commit_id = fields.String()
created_at = fields.String(description='ISO 8601 format: YYYY-MM-DDTHH:MM:SSZ')
html_url = fields.String()
id = fields.Integer()
line = fields.Integer()
path = fields.String()
position = fields.Integer()
updated_at = fields.String(description='ISO 8601 format: YYYY-MM-DDTHH:MM:SSZ')
url = fields.String()
user = fields.Nested('RepoCommentsItemUser')
class RepoCommentsItemUser(Schema):
avatar_url = fields.String()
gravatar_id = fields.String()
id = fields.Integer()
login = fields.String()
url = fields.String()
class RepoCommit(Schema):
author = fields.Nested('RepoCommitAuthor')
committer = fields.Nested('RepoCommitCommitter')
message = fields.String()
parents = fields.List(fields.Nested('RepoCommitParentsItem'))
sha = fields.String()
tree = fields.Nested('RepoCommitTree')
url = fields.String()
class RepoCommitTree(Schema):
sha = fields.String()
url = fields.String()
class RepoCommitParentsItem(Schema):
sha = fields.String()
url = fields.String()
class RepoCommitCommitter(Schema):
date = fields.String(description='ISO 8601 format: YYYY-MM-DDTHH:MM:SSZ')
email = fields.String()
name = fields.String()
class RepoCommitAuthor(Schema):
date = fields.String(description='ISO 8601 format: YYYY-MM-DDTHH:MM:SSZ')
email = fields.String()
name = fields.String()
class RepoCommitBody(Schema):
author = fields.Nested('RepoCommitBodyAuthor')
message = fields.String(required=True)
parents = fields.List(fields.String(), required=True)
tree = fields.String(required=True)
class RepoCommitBodyAuthor(Schema):
date = fields.String()
email = fields.String()
name = fields.String()
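# Sketch of how the commit-creation body above might be validated before a
# request is sent; the payload values are illustrative, not taken from a real
# repository, and marshmallow 3 load() semantics are assumed.
def _example_repo_commit_body():
    """Validate a "create a commit" payload; message, parents and tree are
    required, so load() raises ValidationError if any of them is missing."""
    payload = {
        'message': 'my commit message',
        'parents': ['7d1b31e74ee336d15cbd21741bc88a537ed063a0'],
        'tree': '827efc6d56897b048c772eb4087f854f46256132',
        'author': {
            'name': 'Monalisa Octocat',
            'email': 'octocat@github.com',
            'date': '2011-04-14T16:00:49Z',
        },
    }
    return RepoCommitBody().load(payload)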
class RepoEdit(Schema):
description = fields.String()
has_downloads = fields.Boolean()
has_issues = fields.Boolean()
has_wiki = fields.Boolean()
homepage = fields.String()
name = fields.String()
private = fields.Boolean()
class ReposItem(Schema):
clone_url = fields.String()
created_at = fields.String(description='ISO 8601 format: YYYY-MM-DDTHH:MM:SSZ')
description = fields.String()
fork = fields.Boolean()
forks = fields.Integer()
forks_count = fields.Integer()
full_name = fields.String()
git_url = fields.String()
homepage = fields.String()
html_url = fields.String()
id = fields.Integer()
language = fields.String()
master_branch = fields.String()
mirror_url = fields.String()
name = fields.String()
open_issues = fields.Integer()
open_issues_count = fields.Integer()
owner = fields.Nested('ReposItemOwner')
private = fields.Boolean()
pushed_at = fields.String(description='ISO 8601 format: YYYY-MM-DDTHH:MM:SSZ')
size = fields.Integer()
ssh_url = fields.String()
svn_url = fields.String()
updated_at = fields.String(description='ISO 8601 format: YYYY-MM-DDTHH:MM:SSZ')
url = fields.String()
watchers = fields.Integer()
watchers_count = fields.Integer()
class ReposItemOwner(Schema):
avatar_url = fields.String()
gravatar_id = fields.String()
id = fields.Integer()
login = fields.String()
url = fields.String()
class RepositoriesItem(Schema):
description = fields.String()
fork = fields.Boolean()
full_name = fields.String()
html_url = fields.String()
id = fields.Integer()
name = fields.String()
owner = fields.Nested('RepositoriesItemOwner')
private = fields.Boolean()
url = fields.String()
class RepositoriesItemOwner(Schema):
avatar_url = fields.String()
gravatar_id = fields.String()
id = fields.Integer()
login = fields.String()
url = fields.String()
class Search_code(Schema):
items = fields.List(fields.Nested('Search_codeItemsItem'))
total_count = fields.Integer()
class Search_codeItemsItem(Schema):
git_url = fields.String()
html_url = fields.String()
name = fields.String()
path = fields.String()
repository = fields.Nested('Search_codeItemsItemRepository')
score = fields.Number()
sha = fields.String()
url = fields.String()
class Search_codeItemsItemRepository(Schema):
archive_url = fields.String()
assignees_url = fields.String()
blobs_url = fields.String()
branches_url = fields.String()
collaborators_url = fields.String()
comments_url = fields.String()
commits_url = fields.String()
compare_url = fields.String()
contents_url = fields.String()
contributors_url = fields.String()
description = fields.String()
downloads_url = fields.String()
events_url = fields.String()
fork = fields.Boolean()
forks_url = fields.String()
full_name = fields.String()
git_commits_url = fields.String()
git_refs_url = fields.String()
git_tags_url = fields.String()
hooks_url = fields.String()
html_url = fields.String()
id = fields.Integer()
issue_comment_url = fields.String()
issue_events_url = fields.String()
issues_url = fields.String()
keys_url = fields.String()
labels_url = fields.String()
languages_url = fields.String()
merges_url = fields.String()
milestones_url = fields.String()
name = fields.String()
notifications_url = fields.String()
owner = fields.Nested('Search_codeItemsItemRepositoryOwner')
private = fields.Boolean()
pulls_url = fields.String()
stargazers_url = fields.String()
statuses_url = fields.String()
subscribers_url = fields.String()
subscription_url = fields.String()
tags_url = fields.String()
teams_url = fields.String()
trees_url = fields.String()
url = fields.String()
class Search_codeItemsItemRepositoryOwner(Schema):
avatar_url = fields.String()
events_url = fields.String()
followers_url = fields.String()
following_url = fields.String()
gists_url = fields.String()
gravatar_id = fields.String()
html_url = fields.String()
id = fields.Integer()
login = fields.String()
organizations_url = fields.String()
received_events_url = fields.String()
repos_url = fields.String()
starred_url = fields.String()
subscriptions_url = fields.String()
type = fields.String()
url = fields.String()
class Search_issues(Schema):
items = fields.List(fields.Nested('Search_issuesItemsItem'))
total_count = fields.Integer()
class Search_issuesItemsItem(Schema):
assignee = fields.Field()
body = fields.String()
closed_at = fields.Field()
comments = fields.Integer()
comments_url = fields.String()
created_at = fields.String()
events_url = fields.String()
html_url = fields.String()
id = fields.Integer()
labels = fields.List(fields.Nested('Search_issuesItemsItemLabelsItem'))
labels_url = fields.String()
milestone = fields.Field()
number = fields.Integer()
pull_request = fields.Nested('Search_issuesItemsItemPull_request')
score = fields.Number()
state = fields.String()
title = fields.String()
updated_at = fields.String()
url = fields.String()
user = fields.Nested('Search_issuesItemsItemUser')
class Search_issuesItemsItemUser(Schema):
avatar_url = fields.String()
events_url = fields.String()
followers_url = fields.String()
following_url = fields.String()
gists_url = fields.String()
gravatar_id = fields.String()
html_url = fields.String()
id = fields.Integer()
login = fields.String()
organizations_url = fields.String()
received_events_url = fields.String()
repos_url = fields.String()
starred_url = fields.String()
subscriptions_url = fields.String()
type = fields.String()
url = fields.String()
class Search_issuesItemsItemPull_request(Schema):
diff_url = fields.Field()
html_url = fields.Field()
patch_url = fields.Field()
class Search_issuesItemsItemLabelsItem(Schema):
color = fields.String()
name = fields.String()
url = fields.String()
class Search_issues_by_keyword(Schema):
issues = fields.List(fields.Nested('Search_issues_by_keywordIssuesItem'))
class Search_issues_by_keywordIssuesItem(Schema):
body = fields.String()
comments = fields.Integer()
created_at = fields.String()
gravatar_id = fields.String()
html_url = fields.String()
labels = fields.List(fields.String())
number = fields.Integer()
position = fields.Integer()
state = fields.String()
title = fields.String()
updated_at = fields.String()
user = fields.String()
votes = fields.Integer()
class Search_repositories(Schema):
items = fields.List(fields.Nested('Search_repositoriesItemsItem'))
total_count = fields.Integer()
class Search_repositoriesItemsItem(Schema):
created_at = fields.String(description='ISO 8601 format: YYYY-MM-DDTHH:MM:SSZ')
default_branch = fields.String()
description = fields.String()
fork = fields.Boolean()
forks = fields.Integer()
forks_count = fields.Integer()
full_name = fields.String()
homepage = fields.String()
html_url = fields.String()
id = fields.Integer()
language = fields.String()
master_branch = fields.String()
name = fields.String()
open_issues = fields.Integer()
open_issues_count = fields.Integer()
owner = fields.Nested('Search_repositoriesItemsItemOwner')
private = fields.Boolean()
pushed_at = fields.String(description='ISO 8601 format: YYYY-MM-DDTHH:MM:SSZ')
score = fields.Number()
size = fields.Integer()
updated_at = fields.String(description='ISO 8601 format: YYYY-MM-DDTHH:MM:SSZ')
url = fields.String()
watchers = fields.Integer()
watchers_count = fields.Integer()
class Search_repositoriesItemsItemOwner(Schema):
avatar_url = fields.String()
gravatar_id = fields.String()
id = fields.Integer()
login = fields.String()
received_events_url = fields.String()
type = fields.String()
url = fields.String()
class Search_repositories_by_keyword(Schema):
repositories = fields.List(fields.Nested('Search_repositories_by_keywordRepositoriesItem'))
class Search_repositories_by_keywordRepositoriesItem(Schema):
created = fields.String()
created_at = fields.String()
description = fields.String()
followers = fields.Integer()
fork = fields.Boolean()
forks = fields.Integer()
has_downloads = fields.Boolean()
has_issues = fields.Boolean()
has_wiki = fields.Boolean()
homepage = fields.String()
language = fields.String()
name = fields.String()
open_issues = fields.Integer()
owner = fields.String()
private = fields.Boolean()
pushed = fields.String()
pushed_at = fields.String()
score = fields.Number()
size = fields.Integer()
type = fields.String()
url = fields.String()
username = fields.String()
watchers = fields.Integer()
class Search_user_by_email(Schema):
user = fields.Nested('Search_user_by_emailUser')
class Search_user_by_emailUser(Schema):
blog = fields.String()
company = fields.String()
created = fields.String()
created_at = fields.String()
email = fields.String()
followers_count = fields.Integer()
following_count = fields.Integer()
gravatar_id = fields.String()
id = fields.Integer()
location = fields.String()
login = fields.String()
name = fields.String()
public_gist_count = fields.Integer()
public_repo_count = fields.Integer()
type = fields.String()
class Search_users(Schema):
items = fields.List(fields.Nested('Search_usersItemsItem'))
total_count = fields.Integer()
class Search_usersItemsItem(Schema):
avatar_url = fields.String()
followers_url = fields.String()
gravatar_id = fields.String()
html_url = fields.String()
id = fields.Integer()
login = fields.String()
organizations_url = fields.String()
received_events_url = fields.String()
repos_url = fields.String()
score = fields.Number()
subscriptions_url = fields.String()
type = fields.String()
url = fields.String()
class Search_users_by_keyword(Schema):
users = fields.List(fields.Nested('Search_users_by_keywordUsersItem'))
class Search_users_by_keywordUsersItem(Schema):
created = fields.String()
created_at = fields.String()
followers = fields.Integer()
followers_count = fields.Integer()
fullname = fields.String()
gravatar_id = fields.String()
id = fields.String()
language = fields.String()
location = fields.String()
login = fields.String()
name = fields.String()
public_repo_count = fields.Integer()
repos = fields.Integer()
score = fields.Number()
type = fields.String()
username = fields.String()
class StargazersItem(Schema):
avatar_url = fields.String()
gravatar_id = fields.String()
id = fields.Integer()
login = fields.String()
url = fields.String()
class Subscribition(Schema):
created_at = fields.String(description='ISO 8601 format: YYYY-MM-DDTHH:MM:SSZ')
ignored = fields.Boolean()
reason = fields.String()
repository_url = fields.String()
subscribed = fields.Boolean()
url = fields.String()
class SubscribitionBody(Schema):
ignored = fields.Boolean()
subscribed = fields.Boolean()
class Subscription(Schema):
created_at = fields.String()
ignored = fields.Boolean()
reason = fields.String()
subscribed = fields.Boolean()
thread_url = fields.String()
url = fields.String()
class Tag(Schema):
message = fields.String()
object = fields.Nested('TagObject')
sha = fields.String()
tag = fields.String()
tagger = fields.Nested('TagTagger')
url = fields.String()
class TagTagger(Schema):
date = fields.String(description='ISO 8601 format: YYYY-MM-DDTHH:MM:SSZ')
email = fields.String()
name = fields.String()
class TagObject(Schema):
sha = fields.String()
type = fields.String()
url = fields.String()
class Tags(Schema):
message = fields.String(required=True, description='String of the tag message.')
object = fields.String(required=True, description='String of the SHA of the git object this is tagging.')
tag = fields.String(required=True)
tagger = fields.Nested('TagsTagger', required=True)
type = fields.String(required=True, description='String of the type of the object we’re tagging. Normally this is a commit but it can also be a tree or a blob.')
class TagsTagger(Schema):
date = fields.String(description='Timestamp of when this object was tagged.')
email = fields.String(description='String of the email of the author of the tag.')
name = fields.String(description='String of the name of the author of the tag.')
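# Sketch: Tags describes the body for creating an annotated tag object. A
# payload that satisfies the required fields could look like this (values are
# illustrative only, assuming marshmallow 3 load() semantics):
#
#     Tags().load({
#         'tag': 'v0.0.1',
#         'message': 'initial version',
#         'object': 'c3d0be41ecbe669545ee3e94d31ed9a4bc91ee3c',
#         'type': 'commit',
#         'tagger': {'name': 'Monalisa Octocat', 'email': 'octocat@github.com',
#                    'date': '2011-06-17T14:53:35-07:00'},
#     })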
class Team(Schema):
id = fields.Integer()
members_count = fields.Integer()
name = fields.String()
permission = fields.String()
repos_count = fields.Integer()
url = fields.String()
class TeamMembership(Schema):
state = fields.String()
url = fields.String()
class TeamReposItem(Schema):
clone_url = fields.String()
created_at = fields.String()
description = fields.String()
fork = fields.Boolean()
forks = fields.Integer()
forks_count = fields.Integer()
full_name = fields.String()
git_url = fields.String()
homepage = fields.String()
html_url = fields.String()
id = fields.Integer()
language = fields.Field()
master_branch = fields.String()
mirror_url = fields.String()
name = fields.String()
open_issues = fields.Integer()
open_issues_count = fields.Integer()
owner = fields.Nested('TeamReposItemOwner')
private = fields.Boolean()
pushed_at = fields.String()
size = fields.Integer()
ssh_url = fields.String()
svn_url = fields.String()
updated_at = fields.String()
url = fields.String()
watchers = fields.Integer()
watchers_count = fields.Integer()
class TeamReposItemOwner(Schema):
avatar_url = fields.String()
gravatar_id = fields.String()
id = fields.Integer()
login = fields.String()
url = fields.String()
class TeamsItem(Schema):
id = fields.Integer()
name = fields.String()
url = fields.String()
class Teams_listItem(Schema):
id = fields.Integer()
members_count = fields.Integer()
name = fields.String()
organization = fields.Nested('Teams_listItemOrganization')
permission = fields.String()
repos_count = fields.Integer()
url = fields.String()
class Teams_listItemOrganization(Schema):
avatar_url = fields.String()
id = fields.Integer()
login = fields.String()
url = fields.String()
class Tree(Schema):
sha = fields.String()
tree = fields.List(fields.Nested('TreeTreeItem'))
url = fields.String()
class TreeTreeItem(Schema):
mode = fields.String()
path = fields.String()
sha = fields.String()
size = fields.Integer()
type = fields.String()
url = fields.String()
class Trees(Schema):
base_tree = fields.String()
sha = fields.String(description='SHA1 checksum ID of the object in the tree.')
tree = fields.List(fields.Nested('TreesTreeItem'))
url = fields.String()
class TreesTreeItem(Schema):
mode = fields.String(description='One of 100644 for file (blob), 100755 for executable (blob), 040000 for subdirectory (tree), 160000 for submodule (commit) or 120000 for a blob that specifies the path of a symlink.', validate=[OneOf(choices=['100644', '100755', '040000', '160000', '120000'], labels=[])])
path = fields.String()
sha = fields.String(description='SHA1 checksum ID of the object in the tree.')
type = fields.String(validate=[OneOf(choices=['blob', 'tree', 'commit'], labels=[])])
url = fields.String()
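# Sketch: TreesTreeItem restricts mode and type with OneOf validators, so a
# malformed entry (e.g. mode='070000') is rejected locally; the helper below
# shows a value that passes, assuming marshmallow 3 load() semantics.
def _example_tree_item():
    """Validate a single git tree entry against TreesTreeItem."""
    entry = {
        'path': 'file.rb',
        'mode': '100644',
        'type': 'blob',
        'sha': '44b4fc6d56897b048c772eb4087f854f46256132',
    }
    return TreesTreeItem().load(entry)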
class User(Schema):
avatar_url = fields.String()
bio = fields.String()
blog = fields.String()
collaborators = fields.Integer()
company = fields.String()
created_at = fields.String(description='ISO 8601 format: YYYY-MM-DDTHH:MM:SSZ')
disk_usage = fields.Integer()
email = fields.String()
followers = fields.Integer()
following = fields.Integer()
gravatar_id = fields.String()
hireable = fields.Boolean()
html_url = fields.String()
id = fields.Integer()
location = fields.String()
login = fields.String()
name = fields.String()
owned_private_repos = fields.Integer()
plan = fields.Nested('UserPlan')
private_gists = fields.Integer()
public_gists = fields.Integer()
public_repos = fields.Integer()
total_private_repos = fields.Integer()
type = fields.String()
url = fields.String()
class UserPlan(Schema):
collaborators = fields.Integer()
name = fields.String()
private_repos = fields.Integer()
space = fields.Integer()
class User_emails_finalItem(Schema):
pass
class User_keysItem(Schema):
pass
class User_keys_keyId(Schema):
id = fields.Integer()
key = fields.String()
title = fields.String()
url = fields.String()
class User_keys_post(Schema):
key = fields.String()
title = fields.String()
class User_update(Schema):
bio = fields.String()
blog = fields.String()
company = fields.String()
email = fields.String()
hireable = fields.Boolean()
location = fields.String()
name = fields.String()
class User_userId(Schema):
avatar_url = fields.String()
bio = fields.String()
blog = fields.String()
company = fields.String()
created_at = fields.String(description='ISO 8601 format: YYYY-MM-DDTHH:MM:SSZ')
email = fields.String(description='Note: The returned email is the user’s publicly visible email address (or null if the user has not specified a public email address in their profile).')
followers = fields.Integer()
following = fields.Integer()
gravatar_id = fields.String()
hireable = fields.Boolean()
html_url = fields.String()
id = fields.Integer()
location = fields.String()
login = fields.String()
name = fields.String()
public_gists = fields.Integer()
public_repos = fields.Integer()
type = fields.String()
url = fields.String()
class User_userId_starredItem(Schema):
pass
class User_userId_subscribitionsItem(Schema):
clone_url = fields.String()
created_at = fields.String(description='ISO 8601 format: YYYY-MM-DDTHH:MM:SSZ')
description = fields.String()
fork = fields.Boolean()
forks = fields.Integer()
forks_count = fields.Integer()
full_name = fields.String()
git_url = fields.String()
homepage = fields.String()
html_url = fields.String()
id = fields.Integer()
language = fields.String()
master_branch = fields.String()
mirror_url = fields.String()
name = fields.String()
open_issues = fields.Integer()
open_issues_count = fields.Integer()
owner = fields.Nested('User_userId_subscribitionsItemOwner')
private = fields.Boolean()
pushed_at = fields.String(description='ISO 8601 format: YYYY-MM-DDTHH:MM:SSZ')
size = fields.Integer()
ssh_url = fields.String()
svn_url = fields.String()
updated_at = fields.String(description='ISO 8601 format: YYYY-MM-DDTHH:MM:SSZ')
url = fields.String()
watchers = fields.Integer()
watchers_count = fields.Integer()
class User_userId_subscribitionsItemOwner(Schema):
avatar_url = fields.String()
gravatar_id = fields.String()
id = fields.Integer()
login = fields.String()
url = fields.String()
class UsersItem(Schema):
avatar_url = fields.String()
gravatar_id = fields.String()
id = fields.Integer()
login = fields.String()
url = fields.String()
class Users_userId_keysItem(Schema):
pass
class Users_userId_orgsItem(Schema):
pass
class EmojisInput:
class Get:
"""
Lists all the emojis available to use on GitHub.
"""
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
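# Sketch: every *Input Header schema maps Pythonic attribute names onto HTTP
# header names via data_key (a marshmallow 3 feature), so a response-header
# dict can be loaded directly:
#
#     hdrs = EmojisInput.Get.Header().load({
#         'X-RateLimit-Limit': 60,
#         'X-RateLimit-Remaining': 59,
#         'X-RateLimit-Reset': 1372700873,
#     })
#     # hdrs == {'X_RateLimit_Limit': 60, 'X_RateLimit_Remaining': 59,
#     #          'X_RateLimit_Reset': 1372700873}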
class EventsInput:
class Get:
"""
List public events.
"""
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class FeedsInput:
class Get:
"""
List Feeds.
GitHub provides several timeline resources in Atom format. The Feeds API
lists all the feeds available to the authenticating user.
"""
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class GistsInput:
class Get:
"""
List the authenticated user's gists, or all public gists if called anonymously.
"""
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Query(Schema):
since = fields.String(description='Timestamp in ISO 8601 format YYYY-MM-DDTHH:MM:SSZ.\nOnly gists updated at or after this time are returned.\n')
class Post:
"""
Create a gist.
"""
class Body(PostGist):
pass
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class GistsPublicInput:
class Get:
"""
List all public gists.
"""
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Query(Schema):
since = fields.String(description='Timestamp in ISO 8601 format YYYY-MM-DDTHH:MM:SSZ.\nOnly gists updated at or after this time are returned.\n')
class GistsStarredInput:
class Get:
"""
List the authenticated user's starred gists.
"""
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Query(Schema):
since = fields.String(description='Timestamp in ISO 8601 format YYYY-MM-DDTHH:MM:SSZ.\nOnly gists updated at or after this time are returned.\n')
class GistsIdInput:
class Delete:
"""
Delete a gist.
"""
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Path(Schema):
id = fields.Integer(required=True, description='Id of gist.')
class Get:
"""
Get a single gist.
"""
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Path(Schema):
id = fields.Integer(required=True, description='Id of gist.')
class Patch:
"""
Edit a gist.
"""
class Body(PatchGist):
pass
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Path(Schema):
id = fields.Integer(required=True, description='Id of gist.')
class GistsIdCommentsInput:
class Get:
"""
List comments on a gist.
"""
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Path(Schema):
id = fields.Integer(required=True, description='Id of gist.')
class Post:
"""
Create a comment.
"""
class Body(CommentBody):
pass
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Path(Schema):
id = fields.Integer(required=True, description='Id of gist.')
class GistsIdCommentsCommentIdInput:
class Delete:
"""
Delete a comment.
"""
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Path(Schema):
id = fields.Integer(required=True, description='Id of gist.')
commentId = fields.Integer(required=True, description='Id of comment.')
class Get:
"""
Get a single comment.
"""
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Path(Schema):
id = fields.Integer(required=True, description='Id of gist.')
commentId = fields.Integer(required=True, description='Id of comment.')
class Patch:
"""
Edit a comment.
"""
class Body(Comment):
pass
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Path(Schema):
id = fields.Integer(required=True, description='Id of gist.')
commentId = fields.Integer(required=True, description='Id of comment.')
class GistsIdForksInput:
class Post:
"""
Fork a gist.
"""
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Path(Schema):
id = fields.Integer(required=True, description='Id of gist.')
class GistsIdStarInput:
class Delete:
"""
Unstar a gist.
"""
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Path(Schema):
id = fields.Integer(required=True, description='Id of gist.')
class Get:
"""
Check if a gist is starred.
"""
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Path(Schema):
id = fields.Integer(required=True, description='Id of gist.')
class Put:
"""
Star a gist.
"""
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Path(Schema):
id = fields.Integer(required=True, description='Id of gist.')
class GitignoreTemplatesInput:
class Get:
"""
Listing available templates.
List all templates available to pass as an option when creating a repository.
"""
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class GitignoreTemplatesLanguageInput:
class Get:
"""
Get a single template.
"""
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Path(Schema):
language = fields.String(required=True)
class IssuesInput:
class Get:
"""
List issues.
List all issues across all the authenticated user's visible repositories.
"""
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Query(Schema):
filter = fields.String(required=True, description="Issues assigned to you / created by you / mentioning you / you're\nsubscribed to updates for / All issues the authenticated user can see\n", validate=[OneOf(choices=['assigned', 'created', 'mentioned', 'subscribed', 'all'], labels=[])])
state = fields.String(required=True, validate=[OneOf(choices=['open', 'closed'], labels=[])])
labels = fields.String(required=True, description='String list of comma separated Label names. Example - bug,ui,@high.')
sort = fields.String(required=True, validate=[OneOf(choices=['created', 'updated', 'comments'], labels=[])])
direction = fields.String(required=True, validate=[OneOf(choices=['asc', 'desc'], labels=[])])
since = fields.String(description='Optional string of a timestamp in ISO 8601 format: YYYY-MM-DDTHH:MM:SSZ.\nOnly issues updated at or after this time are returned.\n')
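# Sketch: validating request parameters with the Query schema catches typos
# before the call is made; all five enumerated parameters are required here:
#
#     IssuesInput.Get.Query().load({
#         'filter': 'assigned', 'state': 'open', 'labels': 'bug,ui,@high',
#         'sort': 'created', 'direction': 'desc',
#     })
#
# A value outside the OneOf choices (for example state='merged') raises
# ValidationError.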
class LegacyIssuesSearchOwnerRepositoryStateKeywordInput:
class Get:
"""
Find issues by state and keyword.
"""
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Path(Schema):
keyword = fields.String(required=True, description='The search term.')
state = fields.String(required=True, description='Indicates the state of the issues to return. Can be either open or closed.', validate=[OneOf(choices=['open', 'closed'], labels=[])])
owner = fields.String(required=True)
repository = fields.String(required=True)
class LegacyReposSearchKeywordInput:
class Get:
"""
Find repositories by keyword. Note, this legacy method does not follow the v3 pagination pattern. This method returns up to 100 results per page and pages can be fetched using the start_page parameter.
"""
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Path(Schema):
keyword = fields.String(required=True, description='The search term')
class Query(Schema):
order = fields.String(description='The sort order if the sort param is provided. Can be either asc or desc.', missing=lambda: 'desc', validate=[OneOf(choices=['desc', 'asc'], labels=[])])
language = fields.String(description='Filter results by language')
start_page = fields.String(description='The page number to fetch')
sort = fields.String(description='The sort field. One of stars, forks, or updated. Default: results are sorted by best match.', validate=[OneOf(choices=['updated', 'stars', 'forks'], labels=[])])
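# Sketch: 'order' declares missing=lambda: 'desc', so when the parameter is
# absent from the parsed query string the loaded dict still carries the
# default (marshmallow 3 semantics assumed):
#
#     q = LegacyReposSearchKeywordInput.Get.Query().load({'language': 'ruby'})
#     # q['order'] == 'desc'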
class LegacyUserEmailEmailInput:
class Get:
"""
This API call is added for compatibility reasons only.
"""
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Path(Schema):
email = fields.String(required=True, description='The email address')
class LegacyUserSearchKeywordInput:
class Get:
"""
Find users by keyword.
"""
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Path(Schema):
keyword = fields.String(required=True, description='The search term')
class Query(Schema):
order = fields.String(description='The sort order if the sort param is provided. Can be either asc or desc.', missing=lambda: 'desc', validate=[OneOf(choices=['desc', 'asc'], labels=[])])
start_page = fields.String(description='The page number to fetch')
sort = fields.String(description='The sort field. One of stars, forks, or updated. Default: results are sorted by best match.', validate=[OneOf(choices=['updated', 'stars', 'forks'], labels=[])])
class MarkdownInput:
class Post:
"""
Render an arbitrary Markdown document
"""
class Body(Markdown):
pass
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class MarkdownRawInput:
class Post:
"""
Render a Markdown document in raw mode
"""
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class MetaInput:
class Get:
"""
This gives some information about GitHub.com, the service.
"""
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class NetworksOwnerRepoEventsInput:
class Get:
"""
List public events for a network of repositories.
"""
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Path(Schema):
owner = fields.String(required=True, description='Name of the owner.')
repo = fields.String(required=True, description='Name of repository.')
class NotificationsInput:
class Get:
"""
List your notifications.
List all notifications for the current user, grouped by repository.
"""
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Query(Schema):
all = fields.Boolean(description='True to show notifications marked as read.')
participating = fields.Boolean(description='True to show only notifications in which the user is directly participating\nor mentioned.\n')
since = fields.String(description='The time should be passed in as UTC in the ISO 8601 format: YYYY-MM-DDTHH:MM:SSZ.\nExample: "2012-10-09T23:39:01Z".\n')
class Put:
"""
Mark as read.
Marking a notification as "read" removes it from the default view on GitHub.com.
"""
class Body(NotificationMarkRead):
pass
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class NotificationsThreadsIdInput:
class Get:
"""
View a single thread.
"""
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Path(Schema):
id = fields.Integer(required=True, description='Id of thread.')
class Patch:
"""
Mark a thread as read
"""
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Path(Schema):
id = fields.Integer(required=True, description='Id of thread.')
class NotificationsThreadsIdSubscriptionInput:
class Delete:
"""
Delete a Thread Subscription.
"""
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Path(Schema):
id = fields.Integer(required=True, description='Id of thread.')
class Get:
"""
Get a Thread Subscription.
"""
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Path(Schema):
id = fields.Integer(required=True, description='Id of thread.')
class Put:
"""
Set a Thread Subscription.
This lets you subscribe to a thread, or ignore it. Subscribing to a thread
is unnecessary if the user is already subscribed to the repository. Ignoring
a thread will mute all future notifications (until you comment or get @mentioned).
"""
class Body(PutSubscription):
pass
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Path(Schema):
id = fields.Integer(required=True, description='Id of thread.')
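# Sketch: the Put body above reuses PutSubscription (defined earlier in this
# module). Assuming it mirrors the GitHub API body, subscribing to or muting a
# thread comes down to validating a dict such as
# {'subscribed': True, 'ignored': False} with PutSubscription().load(...)
# before issuing the request.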
class OrgsOrgInput:
class Get:
"""
Get an Organization.
"""
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Path(Schema):
org = fields.String(required=True, description='Name of organisation.')
class Patch:
"""
Edit an Organization.
"""
class Body(PatchOrg):
pass
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Path(Schema):
org = fields.String(required=True, description='Name of organisation.')
class OrgsOrgEventsInput:
class Get:
"""
List public events for an organization.
"""
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Path(Schema):
org = fields.String(required=True, description='Name of organisation.')
class OrgsOrgIssuesInput:
class Get:
"""
List issues.
List all issues for a given organization for the authenticated user.
"""
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Path(Schema):
org = fields.String(required=True, description='Name of organisation.')
class Query(Schema):
filter = fields.String(required=True, description="Issues assigned to you / created by you / mentioning you / you're\nsubscribed to updates for / All issues the authenticated user can see\n", validate=[OneOf(choices=['assigned', 'created', 'mentioned', 'subscribed', 'all'], labels=[])])
state = fields.String(required=True, validate=[OneOf(choices=['open', 'closed'], labels=[])])
labels = fields.String(required=True, description='String list of comma separated Label names. Example - bug,ui,@high.')
sort = fields.String(required=True, validate=[OneOf(choices=['created', 'updated', 'comments'], labels=[])])
direction = fields.String(required=True, validate=[OneOf(choices=['asc', 'desc'], labels=[])])
since = fields.String(description='Optional string of a timestamp in ISO 8601 format: YYYY-MM-DDTHH:MM:SSZ.\nOnly issues updated at or after this time are returned.\n')
class OrgsOrgMembersInput:
class Get:
"""
Members list.
List all users who are members of an organization. A member is a user that
belongs to at least 1 team in the organization. If the authenticated user
is also an owner of this organization then both concealed and public members
will be returned. If the requester is not an owner of the organization the
query will be redirected to the public members list.
"""
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Path(Schema):
org = fields.String(required=True, description='Name of organisation.')
class OrgsOrgMembersUsernameInput:
class Delete:
"""
Remove a member.
Removing a user from this list will remove them from all teams and they
will no longer have any access to the organization's repositories.
"""
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Path(Schema):
org = fields.String(required=True, description='Name of organisation.')
username = fields.String(required=True, description='Name of the user.')
class Get:
"""
Check if a user is, publicly or privately, a member of the organization.
"""
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Path(Schema):
org = fields.String(required=True, description='Name of organisation.')
username = fields.String(required=True, description='Name of the user.')
class OrgsOrgPublicMembersInput:
class Get:
"""
Public members list.
Members of an organization can choose to have their membership publicized
or not.
"""
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Path(Schema):
org = fields.String(required=True, description='Name of organisation.')
class OrgsOrgPublicMembersUsernameInput:
class Delete:
"""
Conceal a user's membership.
"""
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Path(Schema):
org = fields.String(required=True, description='Name of organisation.')
username = fields.String(required=True, description='Name of the user.')
class Get:
"""
Check public membership.
"""
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Path(Schema):
org = fields.String(required=True, description='Name of organisation.')
username = fields.String(required=True, description='Name of the user.')
class Put:
"""
Publicize a user's membership.
"""
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Path(Schema):
org = fields.String(required=True, description='Name of organisation.')
username = fields.String(required=True, description='Name of the user.')
class OrgsOrgReposInput:
class Get:
"""
List repositories for the specified org.
"""
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Path(Schema):
org = fields.String(required=True, description='Name of organisation.')
class Query(Schema):
type = fields.String(missing=lambda: 'all', validate=[OneOf(choices=['all', 'public', 'private', 'forks', 'sources', 'member'], labels=[])])
class Post:
"""
Create a new repository for the authenticated user. OAuth users must supply
repo scope.
"""
class Body(PostRepo):
pass
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Path(Schema):
org = fields.String(required=True, description='Name of organisation.')
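# --- Usage sketch (hypothetical; not part of the generated schemas) ---
# A minimal example of how the Query schema above might validate the "type" filter
# when listing an organisation's repositories. Assumes marshmallow 3.x, where load()
# applies the `missing` default and runs the OneOf validator.
def _example_org_repos_query():
    """Sketch: an empty query falls back to 'all'; a listed choice passes validation."""
    query_schema = OrgsOrgReposInput.Get.Query()
    defaults = query_schema.load({})                  # -> {'type': 'all'}
    filtered = query_schema.load({'type': 'forks'})   # accepted by the OneOf validator
    return defaults, filtered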
class OrgsOrgTeamsInput:
class Get:
"""
List teams.
"""
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Path(Schema):
org = fields.String(required=True, description='Name of organisation.')
class Post:
"""
Create team.
In order to create a team, the authenticated user must be an owner of the organization.
"""
class Body(OrgTeamsPost):
pass
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Path(Schema):
org = fields.String(required=True, description='Name of organisation.')
class RateLimitInput:
class Get:
"""
Get your current rate limit status
Note: Accessing this endpoint does not count against your rate limit.
"""
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
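# --- Usage sketch (hypothetical; not part of the generated schemas) ---
# Shows how the data_key mappings above translate HTTP header names such as
# 'X-RateLimit-Limit' into the Python-safe attribute names on load(). Assumes
# marshmallow 3.x; header values arrive as strings and the Integer fields coerce them.
def _example_rate_limit_headers():
    """Sketch: deserialize rate-limit response headers into plain ints."""
    raw = {
        'X-RateLimit-Limit': '5000',
        'X-RateLimit-Remaining': '4999',
        'X-RateLimit-Reset': '1372700873',
    }
    # -> {'X_RateLimit_Limit': 5000, 'X_RateLimit_Remaining': 4999, 'X_RateLimit_Reset': 1372700873}
    return RateLimitInput.Get.Header().load(raw)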
class ReposOwnerRepoInput:
class Delete:
"""
Delete a Repository.
Deleting a repository requires admin access. If OAuth is used, the delete_repo
scope is required.
"""
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Path(Schema):
owner = fields.String(required=True, description='Name of repository owner.')
repo = fields.String(required=True, description='Name of repository.')
class Get:
"""
Get repository.
"""
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Path(Schema):
owner = fields.String(required=True, description='Name of repository owner.')
repo = fields.String(required=True, description='Name of repository.')
class Patch:
"""
Edit repository.
"""
class Body(RepoEdit):
pass
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Path(Schema):
owner = fields.String(required=True, description='Name of repository owner.')
repo = fields.String(required=True, description='Name of repository.')
class ReposOwnerRepoAssigneesInput:
class Get:
"""
List assignees.
This call lists all the available assignees (owner + collaborators) to which
issues may be assigned.
"""
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Path(Schema):
owner = fields.String(required=True, description='Name of repository owner.')
repo = fields.String(required=True, description='Name of repository.')
class ReposOwnerRepoAssigneesAssigneeInput:
class Get:
"""
Check assignee.
You may also check to see if a particular user is an assignee for a repository.
"""
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Path(Schema):
owner = fields.String(required=True, description='Name of repository owner.')
repo = fields.String(required=True, description='Name of repository.')
assignee = fields.String(required=True, description='Login of the assignee.')
class ReposOwnerRepoBranchesInput:
class Get:
"""
Get list of branches
"""
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Path(Schema):
owner = fields.String(required=True, description='Name of repository owner.')
repo = fields.String(required=True, description='Name of repository.')
class ReposOwnerRepoBranchesBranchInput:
class Get:
"""
Get Branch
"""
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Path(Schema):
owner = fields.String(required=True, description='Name of repository owner.')
repo = fields.String(required=True, description='Name of repository.')
branch = fields.String(required=True, description='Name of the branch.')
class ReposOwnerRepoCollaboratorsInput:
class Get:
"""
List.
When authenticating as an organization owner of an organization-owned
repository, all organization owners are included in the list of
collaborators. Otherwise, only users with access to the repository are
returned in the collaborators list.
"""
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Path(Schema):
owner = fields.String(required=True, description='Name of repository owner.')
repo = fields.String(required=True, description='Name of repository.')
class ReposOwnerRepoCollaboratorsUserInput:
class Delete:
"""
Remove collaborator.
"""
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Path(Schema):
owner = fields.String(required=True, description='Name of repository owner.')
repo = fields.String(required=True, description='Name of repository.')
user = fields.String(required=True, description='Login of the user.')
class Get:
"""
Check if user is a collaborator
"""
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Path(Schema):
owner = fields.String(required=True, description='Name of repository owner.')
repo = fields.String(required=True, description='Name of repository.')
user = fields.String(required=True, description='Login of the user.')
class Put:
"""
Add collaborator.
"""
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Path(Schema):
owner = fields.String(required=True, description='Name of repository owner.')
repo = fields.String(required=True, description='Name of repository.')
user = fields.String(required=True, description='Login of the user.')
class ReposOwnerRepoCommentsInput:
class Get:
"""
List commit comments for a repository.
Comments are ordered by ascending ID.
"""
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Path(Schema):
owner = fields.String(required=True, description='Name of repository owner.')
repo = fields.String(required=True, description='Name of repository.')
class ReposOwnerRepoCommentsCommentIdInput:
class Delete:
"""
Delete a commit comment
"""
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Path(Schema):
owner = fields.String(required=True, description='Name of repository owner.')
repo = fields.String(required=True, description='Name of repository.')
commentId = fields.Integer(required=True, description='Id of comment.')
class Get:
"""
Get a single commit comment.
"""
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Path(Schema):
owner = fields.String(required=True, description='Name of repository owner.')
repo = fields.String(required=True, description='Name of repository.')
commentId = fields.Integer(required=True, description='Id of comment.')
class Patch:
"""
Update a commit comment.
"""
class Body(CommentBody):
pass
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Path(Schema):
owner = fields.String(required=True, description='Name of repository owner.')
repo = fields.String(required=True, description='Name of repository.')
commentId = fields.Integer(required=True, description='Id of comment.')
class ReposOwnerRepoCommitsInput:
class Get:
"""
List commits on a repository.
"""
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Path(Schema):
owner = fields.String(required=True, description='Name of repository owner.')
repo = fields.String(required=True, description='Name of repository.')
class Query(Schema):
since = fields.String(description='The time should be passed in as UTC in the ISO 8601 format: YYYY-MM-DDTHH:MM:SSZ.\nExample: "2012-10-09T23:39:01Z".\n')
sha = fields.String(description='Sha or branch to start listing commits from.')
path = fields.String(description='Only commits containing this file path will be returned.')
author = fields.String(description='GitHub login, name, or email of the commit author to filter by.')
until = fields.String(description='ISO 8601 Date - Only commits before this date will be returned.')
class ReposOwnerRepoCommitsRefStatusInput:
class Get:
"""
Get the combined Status for a specific Ref
The Combined status endpoint is currently available for developers to preview. During the preview period, the API may change without advance notice. Please see the blog post for full details.
To access this endpoint during the preview period, you must provide a custom media type in the Accept header:
application/vnd.github.she-hulk-preview+json
"""
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Path(Schema):
owner = fields.String(required=True, description='Name of repository owner.')
repo = fields.String(required=True, description='Name of repository.')
ref = fields.String(required=True)
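# --- Usage sketch (hypothetical; not part of the generated schemas) ---
# The combined-status docstring above requires a preview media type in the Accept
# header. This sketch validates the path parameters and builds a header dict carrying
# that media type; it assumes marshmallow 3.x and the nested classes defined above.
def _example_combined_status_request():
    """Sketch: path params plus the preview Accept header for the combined status call."""
    path = ReposOwnerRepoCommitsRefStatusInput.Get.Path().load(
        {'owner': 'octocat', 'repo': 'Hello-World', 'ref': 'heads/master'}
    )
    headers = ReposOwnerRepoCommitsRefStatusInput.Get.Header().dump(
        {'Accept': 'application/vnd.github.she-hulk-preview+json'}
    )
    return path, headers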
class ReposOwnerRepoCommitsShaCodeInput:
class Get:
"""
Get a single commit.
"""
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Path(Schema):
owner = fields.String(required=True, description='Name of repository owner.')
repo = fields.String(required=True, description='Name of repository.')
shaCode = fields.String(required=True, description='SHA-1 code of the commit.')
class ReposOwnerRepoCommitsShaCodeCommentsInput:
class Get:
"""
List comments for a single commit.
"""
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Path(Schema):
owner = fields.String(required=True, description='Name of repository owner.')
repo = fields.String(required=True, description='Name of repository.')
shaCode = fields.String(required=True, description='SHA-1 code of the commit.')
class Post:
"""
Create a commit comment.
"""
class Body(CommitBody):
pass
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Path(Schema):
owner = fields.String(required=True, description='Name of repository owner.')
repo = fields.String(required=True, description='Name of repository.')
shaCode = fields.String(required=True, description='SHA-1 code of the commit.')
class ReposOwnerRepoCompareBaseIdheadIdInput:
class Get:
"""
Compare two commits
"""
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Path(Schema):
owner = fields.String(required=True, description='Name of repository owner.')
repo = fields.String(required=True, description='Name of repository.')
baseId = fields.String(required=True)
headId = fields.String(required=True)
class ReposOwnerRepoContentsPathInput:
class Delete:
"""
Delete a file.
This method deletes a file in a repository.
"""
class Body(DeleteFileBody):
pass
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Path(Schema):
owner = fields.String(required=True, description='Name of repository owner.')
repo = fields.String(required=True, description='Name of repository.')
path = fields.String(required=True)
class Get:
"""
Get contents.
This method returns the contents of a file or directory in a repository.
Files and symlinks support a custom media type for getting the raw content.
Directories and submodules do not support custom media types.
Note: This API supports files up to 1 megabyte in size.
There can be many outcomes. For details see "http://developer.github.com/v3/repos/contents/"
"""
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Path(Schema):
owner = fields.String(required=True, description='Name of repository owner.')
repo = fields.String(required=True, description='Name of repository.')
path = fields.String(required=True)
class Query(Schema):
path = fields.String(description='The content path.')
ref = fields.String(description="The String name of the Commit/Branch/Tag. Defaults to 'master'.")
class Put:
"""
Create a file.
"""
class Body(CreateFileBody):
pass
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Path(Schema):
owner = fields.String(required=True, description='Name of repository owner.')
repo = fields.String(required=True, description='Name of repository.')
path = fields.String(required=True)
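# --- Usage sketch (hypothetical; not part of the generated schemas) ---
# Illustrates combining the Path and Query schemas above to request a file at a given
# ref, per the "Get contents" docstring. Assumes marshmallow 3.x; 'README.md' and the
# ref value are placeholders.
def _example_get_contents_params():
    """Sketch: validate parameters for GET /repos/{owner}/{repo}/contents/{path}."""
    get = ReposOwnerRepoContentsPathInput.Get
    path_params = get.Path().load({'owner': 'octocat', 'repo': 'Hello-World', 'path': 'README.md'})
    query_params = get.Query().load({'ref': 'master'})  # both query fields are optional
    return path_params, query_params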
class ReposOwnerRepoContributorsInput:
class Get:
"""
Get list of contributors.
"""
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Path(Schema):
owner = fields.String(required=True, description='Name of repository owner.')
repo = fields.String(required=True, description='Name of repository.')
class Query(Schema):
anon = fields.String(required=True, description='Set to 1 or true to include anonymous contributors in results.')
class ReposOwnerRepoDeploymentsInput:
class Get:
"""
Users with pull access can view deployments for a repository
"""
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Path(Schema):
owner = fields.String(required=True, description='Name of repository owner.')
repo = fields.String(required=True, description='Name of repository.')
class Post:
"""
Users with push access can create a deployment for a given ref
"""
class Body(Deployment):
pass
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Path(Schema):
owner = fields.String(required=True, description='Name of repository owner.')
repo = fields.String(required=True, description='Name of repository.')
class ReposOwnerRepoDeploymentsIdStatusesInput:
class Get:
"""
Users with pull access can view deployment statuses for a deployment
"""
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Path(Schema):
owner = fields.String(required=True, description='Name of repository owner.')
repo = fields.String(required=True, description='Name of repository.')
id = fields.Integer(required=True, description='The Deployment ID to list the statuses from.')
class Post:
"""
Create a Deployment Status
Users with push access can create deployment statuses for a given deployment:
"""
class Body(Deployment_statuses_create):
pass
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Path(Schema):
owner = fields.String(required=True, description='Name of repository owner.')
repo = fields.String(required=True, description='Name of repository.')
id = fields.Integer(required=True, description='The Deployment ID to list the statuses from.')
class ReposOwnerRepoDownloadsInput:
class Get:
"""
Deprecated. List downloads for a repository.
"""
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Path(Schema):
owner = fields.String(required=True, description='Name of repository owner.')
repo = fields.String(required=True, description='Name of repository.')
class ReposOwnerRepoDownloadsDownloadIdInput:
class Delete:
"""
Deprecated. Delete a download.
"""
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Path(Schema):
owner = fields.String(required=True, description='Name of repository owner.')
repo = fields.String(required=True, description='Name of repository.')
downloadId = fields.Integer(required=True, description='Id of download.')
class Get:
"""
Deprecated. Get a single download.
"""
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Path(Schema):
owner = fields.String(required=True, description='Name of repository owner.')
repo = fields.String(required=True, description='Name of repository.')
downloadId = fields.Integer(required=True, description='Id of download.')
class ReposOwnerRepoEventsInput:
class Get:
"""
Get list of repository events.
"""
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Path(Schema):
owner = fields.String(required=True, description='Name of repository owner.')
repo = fields.String(required=True, description='Name of repository.')
class ReposOwnerRepoForksInput:
class Get:
"""
List forks.
"""
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Path(Schema):
owner = fields.String(required=True, description='Name of repository owner.')
repo = fields.String(required=True, description='Name of repository.')
class Query(Schema):
sort = fields.String(missing=lambda: 'newest', validate=[OneOf(choices=['newest', 'oldest', 'watchers'], labels=[])])
class Post:
"""
Create a fork.
Forking a Repository happens asynchronously. Therefore, you may have to wait
a short period before accessing the git objects. If this takes longer than 5
minutes, be sure to contact Support.
"""
class Body(ForkBody):
pass
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Path(Schema):
owner = fields.String(required=True, description='Name of repository owner.')
repo = fields.String(required=True, description='Name of repository.')
class ReposOwnerRepoGitBlobsInput:
class Post:
"""
Create a Blob.
"""
class Body(Blob):
pass
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Path(Schema):
owner = fields.String(required=True, description='Name of repository owner.')
repo = fields.String(required=True, description='Name of repository.')
class ReposOwnerRepoGitBlobsShaCodeInput:
class Get:
"""
Get a Blob.
Since blobs can be arbitrary binary data, the input and responses for
the blob API take an encoding parameter that can be either utf-8 or
base64. If your data cannot be losslessly sent as a UTF-8 string, you can
base64 encode it.
"""
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Path(Schema):
owner = fields.String(required=True, description='Name of repository owner.')
repo = fields.String(required=True, description='Name of repository.')
shaCode = fields.String(required=True, description='SHA-1 code.')
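# --- Usage sketch (hypothetical; not part of the generated schemas) ---
# The blob docstring above notes that data which cannot be sent losslessly as UTF-8
# should be base64 encoded. A minimal sketch of preparing such a payload; it assumes
# the Blob body schema used by ReposOwnerRepoGitBlobsInput.Post exposes 'content' and
# 'encoding' fields, as in the GitHub API.
def _example_base64_blob_payload(raw_bytes):
    """Sketch: base64-encode arbitrary bytes for the create-blob endpoint."""
    import base64
    return {
        'content': base64.b64encode(raw_bytes).decode('ascii'),
        'encoding': 'base64',
    }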
class ReposOwnerRepoGitCommitsInput:
class Post:
"""
Create a Commit.
"""
class Body(RepoCommitBody):
pass
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Path(Schema):
owner = fields.String(required=True, description='Name of repository owner.')
repo = fields.String(required=True, description='Name of repository.')
class ReposOwnerRepoGitCommitsShaCodeInput:
class Get:
"""
Get a Commit.
"""
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Path(Schema):
owner = fields.String(required=True, description='Name of repository owner.')
repo = fields.String(required=True, description='Name of repository.')
shaCode = fields.String(required=True, description='SHA-1 code.')
class ReposOwnerRepoGitRefsInput:
class Get:
"""
Get all References
"""
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Path(Schema):
owner = fields.String(required=True, description='Name of repository owner.')
repo = fields.String(required=True, description='Name of repository.')
class Post:
"""
Create a Reference
"""
class Body(RefsBody):
pass
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Path(Schema):
owner = fields.String(required=True, description='Name of repository owner.')
repo = fields.String(required=True, description='Name of repository.')
class ReposOwnerRepoGitRefsRefInput:
class Delete:
"""
Delete a Reference
Example: Deleting a branch: DELETE /repos/octocat/Hello-World/git/refs/heads/feature-a
Example: Deleting a tag: DELETE /repos/octocat/Hello-World/git/refs/tags/v1.0
"""
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Path(Schema):
owner = fields.String(required=True, description='Name of repository owner.')
repo = fields.String(required=True, description='Name of repository.')
ref = fields.String(required=True)
class Get:
"""
Get a Reference
"""
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Path(Schema):
owner = fields.String(required=True, description='Name of repository owner.')
repo = fields.String(required=True, description='Name of repository.')
ref = fields.String(required=True)
class Patch:
"""
Update a Reference
"""
class Body(GitRefPatch):
pass
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Path(Schema):
owner = fields.String(required=True, description='Name of repository owner.')
repo = fields.String(required=True, description='Name of repository.')
ref = fields.String(required=True)
class ReposOwnerRepoGitTagsInput:
class Post:
"""
Create a Tag Object.
Note that creating a tag object does not create the reference that makes a
tag in Git. If you want to create an annotated tag in Git, you have to do
this call to create the tag object, and then create the refs/tags/[tag]
reference. If you want to create a lightweight tag, you only have to create
the tag reference - this call would be unnecessary.
"""
class Body(Tag):
pass
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Path(Schema):
owner = fields.String(required=True, description='Name of repository owner.')
repo = fields.String(required=True, description='Name of repository.')
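# --- Usage sketch (hypothetical; not part of the generated schemas) ---
# The tag docstring above describes the two-step flow for an annotated tag: first create
# the tag object, then create the refs/tags/<tag> reference. A sketch of the two request
# bodies; the field names follow the GitHub API and are assumptions about the Tag and
# RefsBody schemas, which are defined elsewhere in this module.
def _example_annotated_tag_payloads(tag_name, object_sha):
    """Sketch: payloads for POST .../git/tags followed by POST .../git/refs."""
    tag_object = {
        'tag': tag_name,
        'message': 'initial version',
        'object': object_sha,
        'type': 'commit',
    }
    # The reference for an annotated tag should point at the SHA returned for the
    # created tag object; this sketch keeps it simple by reusing object_sha.
    tag_ref = {'ref': 'refs/tags/' + tag_name, 'sha': object_sha}
    return tag_object, tag_ref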
class ReposOwnerRepoGitTagsShaCodeInput:
class Get:
"""
Get a Tag.
"""
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Path(Schema):
owner = fields.String(required=True, description='Name of repository owner.')
repo = fields.String(required=True, description='Name of repository.')
shaCode = fields.String(required=True)
class ReposOwnerRepoGitTreesInput:
class Post:
"""
Create a Tree.
The tree creation API will take nested entries as well. If both a tree and
a nested path modifying that tree are specified, it will overwrite the
contents of that tree with the new path contents and write a new tree out.
"""
class Body(Tree):
pass
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Path(Schema):
owner = fields.String(required=True, description='Name of repository owner.')
repo = fields.String(required=True, description='Name of repository.')
class ReposOwnerRepoGitTreesShaCodeInput:
class Get:
"""
Get a Tree.
"""
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Path(Schema):
owner = fields.String(required=True, description='Name of repository owner.')
repo = fields.String(required=True, description='Name of repository.')
shaCode = fields.String(required=True, description='Tree SHA.')
class Query(Schema):
recursive = fields.Integer(description='Get a Tree Recursively. (0 or 1)')
class ReposOwnerRepoHooksInput:
class Get:
"""
Get list of hooks.
"""
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Path(Schema):
owner = fields.String(required=True, description='Name of repository owner.')
repo = fields.String(required=True, description='Name of repository.')
class Post:
"""
Create a hook.
"""
class Body(HookBody):
pass
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Path(Schema):
owner = fields.String(required=True, description='Name of repository owner.')
repo = fields.String(required=True, description='Name of repository.')
class ReposOwnerRepoHooksHookIdInput:
class Delete:
"""
Delete a hook.
"""
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Path(Schema):
owner = fields.String(required=True, description='Name of repository owner.')
repo = fields.String(required=True, description='Name of repository.')
hookId = fields.Integer(required=True, description='Id of hook.')
class Get:
"""
Get single hook.
"""
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Path(Schema):
owner = fields.String(required=True, description='Name of repository owner.')
repo = fields.String(required=True, description='Name of repository.')
hookId = fields.Integer(required=True, description='Id of hook.')
class Patch:
"""
Edit a hook.
"""
class Body(HookBody):
pass
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Path(Schema):
owner = fields.String(required=True, description='Name of repository owner.')
repo = fields.String(required=True, description='Name of repository.')
hookId = fields.Integer(required=True, description='Id of hook.')
class ReposOwnerRepoHooksHookIdTestsInput:
class Post:
"""
Test a push hook.
This will trigger the hook with the latest push to the current repository
if the hook is subscribed to push events. If the hook is not subscribed
to push events, the server will respond with 204 but no test POST will
be generated.
Note: Previously /repos/:owner/:repo/hooks/:id/tes
"""
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Path(Schema):
owner = fields.String(required=True, description='Name of repository owner.')
repo = fields.String(required=True, description='Name of repository.')
hookId = fields.Integer(required=True, description='Id of hook.')
class ReposOwnerRepoIssuesInput:
class Get:
"""
List issues for a repository.
"""
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Path(Schema):
owner = fields.String(required=True, description='Name of repository owner.')
repo = fields.String(required=True, description='Name of repository.')
class Query(Schema):
filter = fields.String(required=True, description="Issues assigned to you / created by you / mentioning you / you're\nsubscribed to updates for / All issues the authenticated user can see\n", validate=[OneOf(choices=['assigned', 'created', 'mentioned', 'subscribed', 'all'], labels=[])])
state = fields.String(required=True, validate=[OneOf(choices=['open', 'closed'], labels=[])])
labels = fields.String(required=True, description='String list of comma separated Label names. Example - bug,ui,@high.')
sort = fields.String(required=True, validate=[OneOf(choices=['created', 'updated', 'comments'], labels=[])])
direction = fields.String(required=True, validate=[OneOf(choices=['asc', 'desc'], labels=[])])
since = fields.String(description='Optional string of a timestamp in ISO 8601 format: YYYY-MM-DDTHH:MM:SSZ.\nOnly issues updated at or after this time are returned.\n')
class Post:
"""
Create an issue.
Any user with pull access to a repository can create an issue.
"""
class Body(Issue):
pass
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Path(Schema):
owner = fields.String(required=True, description='Name of repository owner.')
repo = fields.String(required=True, description='Name of repository.')
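# --- Usage sketch (hypothetical; not part of the generated schemas) ---
# The Query schema above marks filter/state/labels/sort/direction as required, most with
# a OneOf validator. A sketch of a valid query for listing a repository's open issues;
# assumes marshmallow 3.x.
def _example_repo_issues_query():
    """Sketch: validated query parameters for GET /repos/{owner}/{repo}/issues."""
    return ReposOwnerRepoIssuesInput.Get.Query().load({
        'filter': 'assigned',
        'state': 'open',
        'labels': 'bug,ui',
        'sort': 'updated',
        'direction': 'desc',
    })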
class ReposOwnerRepoIssuesCommentsInput:
class Get:
"""
List comments in a repository.
"""
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Path(Schema):
owner = fields.String(required=True, description='Name of repository owner.')
repo = fields.String(required=True, description='Name of repository.')
class Query(Schema):
direction = fields.String(description="Ignored without 'sort' parameter.")
sort = fields.String(description='', validate=[OneOf(choices=['created', 'updated'], labels=[])])
since = fields.String(description='The time should be passed in as UTC in the ISO 8601 format: YYYY-MM-DDTHH:MM:SSZ.\nExample: "2012-10-09T23:39:01Z".\n')
class ReposOwnerRepoIssuesCommentsCommentIdInput:
class Delete:
"""
Delete a comment.
"""
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Path(Schema):
owner = fields.String(required=True, description='Name of repository owner.')
repo = fields.String(required=True, description='Name of repository.')
commentId = fields.Integer(required=True, description='ID of comment.')
class Get:
"""
Get a single comment.
"""
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Path(Schema):
owner = fields.String(required=True, description='Name of repository owner.')
repo = fields.String(required=True, description='Name of repository.')
commentId = fields.Integer(required=True, description='ID of comment.')
class Patch:
"""
Edit a comment.
"""
class Body(CommentBody):
pass
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Path(Schema):
owner = fields.String(required=True, description='Name of repository owner.')
repo = fields.String(required=True, description='Name of repository.')
commentId = fields.Integer(required=True, description='ID of comment.')
class ReposOwnerRepoIssuesEventsInput:
class Get:
"""
List issue events for a repository.
"""
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Path(Schema):
owner = fields.String(required=True, description='Name of repository owner.')
repo = fields.String(required=True, description='Name of repository.')
class ReposOwnerRepoIssuesEventsEventIdInput:
class Get:
"""
Get a single event.
"""
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Path(Schema):
owner = fields.String(required=True, description='Name of repository owner.')
repo = fields.String(required=True, description='Name of repository.')
eventId = fields.Integer(required=True, description='Id of the event.')
class ReposOwnerRepoIssuesNumberInput:
class Get:
"""
Get a single issue
"""
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Path(Schema):
owner = fields.String(required=True, description='Name of repository owner.')
repo = fields.String(required=True, description='Name of repository.')
number = fields.Integer(required=True, description='Number of issue.')
class Patch:
"""
Edit an issue.
Issue owners and users with push access can edit an issue.
"""
class Body(Issue):
pass
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Path(Schema):
owner = fields.String(required=True, description='Name of repository owner.')
repo = fields.String(required=True, description='Name of repository.')
number = fields.Integer(required=True, description='Number of issue.')
class ReposOwnerRepoIssuesNumberCommentsInput:
class Get:
"""
List comments on an issue.
"""
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Path(Schema):
owner = fields.String(required=True, description='Name of repository owner.')
repo = fields.String(required=True, description='Name of repository.')
number = fields.Integer(required=True, description='Number of issue.')
class Post:
"""
Create a comment.
"""
class Body(CommentBody):
pass
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Path(Schema):
owner = fields.String(required=True, description='Name of repository owner.')
repo = fields.String(required=True, description='Name of repository.')
number = fields.Integer(required=True, description='Number of issue.')
class ReposOwnerRepoIssuesNumberEventsInput:
class Get:
"""
List events for an issue.
"""
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Path(Schema):
owner = fields.String(required=True, description='Name of repository owner.')
repo = fields.String(required=True, description='Name of repository.')
number = fields.Integer(required=True, description='Number of issue.')
class ReposOwnerRepoIssuesNumberLabelsInput:
class Delete:
"""
Remove all labels from an issue.
"""
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Path(Schema):
owner = fields.String(required=True, description='Name of repository owner.')
repo = fields.String(required=True, description='Name of repository.')
number = fields.Integer(required=True, description='Number of issue.')
class Get:
"""
List labels on an issue.
"""
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Path(Schema):
owner = fields.String(required=True, description='Name of repository owner.')
repo = fields.String(required=True, description='Name of repository.')
number = fields.Integer(required=True, description='Number of issue.')
class Post:
"""
Add labels to an issue.
"""
class Body(PrimitiveValueSchema):
class schema_class(Schema):
value = fields.List(fields.String())
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Path(Schema):
owner = fields.String(required=True, description='Name of repository owner.')
repo = fields.String(required=True, description='Name of repository.')
number = fields.Integer(required=True, description='Number of issue.')
class Put:
"""
Replace all labels for an issue.
Sending an empty array ([]) will remove all Labels from the Issue.
"""
class Body(PrimitiveValueSchema):
class schema_class(Schema):
value = fields.List(fields.String())
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Path(Schema):
owner = fields.String(required=True, description='Name of repository owner.')
repo = fields.String(required=True, description='Name of repository.')
number = fields.Integer(required=True, description='Number of issue.')
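# Usage sketch (not part of the generated schemas): the label Post/Put bodies
# wrap a bare JSON array of label names. Assuming the inner ``schema_class`` is
# a plain marshmallow Schema, the array can be validated like this; the payload
# values are illustrative only.
def _example_validate_label_names():
    body_schema = ReposOwnerRepoIssuesNumberLabelsInput.Put.Body.schema_class()
    return body_schema.load({'value': ['bug', 'help wanted']})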
class ReposOwnerRepoIssuesNumberLabelsNameInput:
class Delete:
"""
Remove a label from an issue.
"""
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Path(Schema):
owner = fields.String(required=True, description='Name of repository owner.')
repo = fields.String(required=True, description='Name of repository.')
number = fields.Integer(required=True, description='Number of issue.')
name = fields.String(required=True, description='Name of the label.')
class ReposOwnerRepoKeysInput:
class Get:
"""
Get list of keys.
"""
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Path(Schema):
owner = fields.String(required=True, description='Name of repository owner.')
repo = fields.String(required=True, description='Name of repository.')
class Post:
"""
Create a key.
"""
class Body(User_keys_post):
pass
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Path(Schema):
owner = fields.String(required=True, description='Name of repository owner.')
repo = fields.String(required=True, description='Name of repository.')
class ReposOwnerRepoKeysKeyIdInput:
class Delete:
"""
Delete a key.
"""
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Path(Schema):
owner = fields.String(required=True, description='Name of repository owner.')
repo = fields.String(required=True, description='Name of repository.')
keyId = fields.Integer(required=True, description='Id of key.')
class Get:
"""
Get a key
"""
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Path(Schema):
owner = fields.String(required=True, description='Name of repository owner.')
repo = fields.String(required=True, description='Name of repository.')
keyId = fields.Integer(required=True, description='Id of key.')
class ReposOwnerRepoLabelsInput:
class Get:
"""
List all labels for this repository.
"""
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Path(Schema):
owner = fields.String(required=True, description='Name of repository owner.')
repo = fields.String(required=True, description='Name of repository.')
class Post:
"""
Create a label.
"""
class Body(PrimitiveValueSchema):
class schema_class(Schema):
value = fields.List(fields.String())
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Path(Schema):
owner = fields.String(required=True, description='Name of repository owner.')
repo = fields.String(required=True, description='Name of repository.')
class ReposOwnerRepoLabelsNameInput:
class Delete:
"""
Delete a label.
"""
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Path(Schema):
owner = fields.String(required=True, description='Name of repository owner.')
repo = fields.String(required=True, description='Name of repository.')
name = fields.String(required=True, description='Name of the label.')
class Get:
"""
Get a single label.
"""
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Path(Schema):
owner = fields.String(required=True, description='Name of repository owner.')
repo = fields.String(required=True, description='Name of repository.')
name = fields.String(required=True, description='Name of the label.')
class Patch:
"""
Update a label.
"""
class Body(PrimitiveValueSchema):
class schema_class(Schema):
value = fields.List(fields.String())
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Path(Schema):
owner = fields.String(required=True, description='Name of repository owner.')
repo = fields.String(required=True, description='Name of repository.')
name = fields.String(required=True, description='Name of the label.')
class ReposOwnerRepoLanguagesInput:
class Get:
"""
List languages.
List languages for the specified repository. The value on the right of a
language is the number of bytes of code written in that language.
"""
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Path(Schema):
owner = fields.String(required=True, description='Name of repository owner.')
repo = fields.String(required=True, description='Name of repository.')
class ReposOwnerRepoMergesInput:
class Post:
"""
Perform a merge.
"""
class Body(MergesBody):
pass
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Path(Schema):
owner = fields.String(required=True, description='Name of repository owner.')
repo = fields.String(required=True, description='Name of repository.')
class ReposOwnerRepoMilestonesInput:
class Get:
"""
List milestones for a repository.
"""
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Path(Schema):
owner = fields.String(required=True, description='Name of repository owner.')
repo = fields.String(required=True, description='Name of repository.')
class Query(Schema):
state = fields.String(description='String to filter by state.', missing=lambda: 'open', validate=[OneOf(choices=['open', 'closed'], labels=[])])
direction = fields.String(description="Ignored without 'sort' parameter.")
sort = fields.String(description='', missing=lambda: 'due_date', validate=[OneOf(choices=['due_date', 'completeness'], labels=[])])
class Post:
"""
Create a milestone.
"""
class Body(MilestoneUpdate):
pass
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Path(Schema):
owner = fields.String(required=True, description='Name of repository owner.')
repo = fields.String(required=True, description='Name of repository.')
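# Usage sketch (not part of the generated schemas): the milestone Query schema
# declares ``missing`` callables, so loading an empty query dict picks up the
# documented defaults ('open' state, 'due_date' sort). Assumes a marshmallow
# version where ``missing`` supplies load-time defaults; names are illustrative.
def _example_milestone_query_defaults():
    query = ReposOwnerRepoMilestonesInput.Get.Query().load({})
    # query == {'state': 'open', 'sort': 'due_date'}
    return query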
class ReposOwnerRepoMilestonesNumberInput:
class Delete:
"""
Delete a milestone.
"""
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Path(Schema):
owner = fields.String(required=True, description='Name of repository owner.')
repo = fields.String(required=True, description='Name of repository.')
number = fields.Integer(required=True, description='Number of milestone.')
class Get:
"""
Get a single milestone.
"""
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Path(Schema):
owner = fields.String(required=True, description='Name of repository owner.')
repo = fields.String(required=True, description='Name of repository.')
number = fields.Integer(required=True, description='Number of milestone.')
class Patch:
"""
Update a milestone.
"""
class Body(MilestoneUpdate):
pass
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Path(Schema):
owner = fields.String(required=True, description='Name of repository owner.')
repo = fields.String(required=True, description='Name of repository.')
number = fields.Integer(required=True, description='Number of milestone.')
class ReposOwnerRepoMilestonesNumberLabelsInput:
class Get:
"""
Get labels for every issue in a milestone.
"""
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Path(Schema):
owner = fields.String(required=True, description='Name of repository owner.')
repo = fields.String(required=True, description='Name of repository.')
number = fields.Integer(required=True, description='Number of milestone.')
class ReposOwnerRepoNotificationsInput:
class Get:
"""
List your notifications in a repository
List all notifications for the current user.
"""
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Path(Schema):
owner = fields.String(required=True, description='Name of repository owner.')
repo = fields.String(required=True, description='Name of repository.')
class Query(Schema):
all = fields.Boolean(description='True to show notifications marked as read.')
participating = fields.Boolean(description='True to show only notifications in which the user is directly participating\nor mentioned.\n')
since = fields.String(description='The time should be passed in as UTC in the ISO 8601 format: YYYY-MM-DDTHH:MM:SSZ.\nExample: "2012-10-09T23:39:01Z".\n')
class Put:
"""
Mark notifications as read in a repository.
Marking all notifications in a repository as "read" removes them from the
default view on GitHub.com.
"""
class Body(NotificationMarkRead):
pass
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Path(Schema):
owner = fields.String(required=True, description='Name of repository owner.')
repo = fields.String(required=True, description='Name of repository.')
class ReposOwnerRepoPullsInput:
class Get:
"""
List pull requests.
"""
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Path(Schema):
owner = fields.String(required=True, description='Name of repository owner.')
repo = fields.String(required=True, description='Name of repository.')
class Query(Schema):
state = fields.String(description='String to filter by state.', missing=lambda: 'open', validate=[OneOf(choices=['open', 'closed'], labels=[])])
head = fields.String(description="Filter pulls by head user and branch name in the format of 'user:ref-name'.\nExample: github:new-script-format.\n")
base = fields.String(description='Filter pulls by base branch name. Example - gh-pages.')
class Post:
"""
Create a pull request.
"""
class Body(PullsPost):
pass
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Path(Schema):
owner = fields.String(required=True, description='Name of repository owner.')
repo = fields.String(required=True, description='Name of repository.')
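# Usage sketch (not part of the generated schemas): the pulls Query documents
# the 'user:ref-name' form of the ``head`` filter; loading a value of that
# shape through the schema also validates the 'state' choice. Values below are
# illustrative only.
def _example_load_pulls_listing_filters():
    return ReposOwnerRepoPullsInput.Get.Query().load(
        {'state': 'closed', 'head': 'github:new-script-format', 'base': 'gh-pages'})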
class ReposOwnerRepoPullsCommentsInput:
class Get:
"""
List comments in a repository.
By default, Review Comments are ordered by ascending ID.
"""
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Path(Schema):
owner = fields.String(required=True, description='Name of repository owner.')
repo = fields.String(required=True, description='Name of repository.')
class Query(Schema):
direction = fields.String(description="Ignored without 'sort' parameter.")
sort = fields.String(description='', validate=[OneOf(choices=['created', 'updated'], labels=[])])
since = fields.String(description='The time should be passed in as UTC in the ISO 8601 format: YYYY-MM-DDTHH:MM:SSZ.\nExample: "2012-10-09T23:39:01Z".\n')
class ReposOwnerRepoPullsCommentsCommentIdInput:
class Delete:
"""
Delete a comment.
"""
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Path(Schema):
owner = fields.String(required=True, description='Name of repository owner.')
repo = fields.String(required=True, description='Name of repository.')
commentId = fields.Integer(required=True, description='Id of comment.')
class Get:
"""
Get a single comment.
"""
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Path(Schema):
owner = fields.String(required=True, description='Name of repository owner.')
repo = fields.String(required=True, description='Name of repository.')
commentId = fields.Integer(required=True, description='Id of comment.')
class Patch:
"""
Edit a comment.
"""
class Body(CommentBody):
pass
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Path(Schema):
owner = fields.String(required=True, description='Name of repository owner.')
repo = fields.String(required=True, description='Name of repository.')
commentId = fields.Integer(required=True, description='Id of comment.')
class ReposOwnerRepoPullsNumberInput:
class Get:
"""
Get a single pull request.
"""
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Path(Schema):
owner = fields.String(required=True, description='Name of repository owner.')
repo = fields.String(required=True, description='Name of repository.')
number = fields.Integer(required=True, description='Id of pull.')
class Patch:
"""
Update a pull request.
"""
class Body(PullUpdate):
pass
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Path(Schema):
owner = fields.String(required=True, description='Name of repository owner.')
repo = fields.String(required=True, description='Name of repository.')
number = fields.Integer(required=True, description='Id of pull.')
class ReposOwnerRepoPullsNumberCommentsInput:
class Get:
"""
List comments on a pull request.
"""
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Path(Schema):
owner = fields.String(required=True, description='Name of repository owner.')
repo = fields.String(required=True, description='Name of repository.')
number = fields.Integer(required=True, description='Id of pull.')
class Post:
"""
Create a comment.
#TODO Alternative input ( http://developer.github.com/v3/pulls/comments/ )
Alternative input: instead of passing commit_id, path, and position, you can
reply to an existing pull request comment by sending:
    body          Required string.
    in_reply_to   Required number - id of the comment to reply to.
"""
class Body(PullsCommentPost):
pass
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Path(Schema):
owner = fields.String(required=True, description='Name of repository owner.')
repo = fields.String(required=True, description='Name of repository.')
number = fields.Integer(required=True, description='Id of pull.')
class ReposOwnerRepoPullsNumberCommitsInput:
class Get:
"""
List commits on a pull request.
"""
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Path(Schema):
owner = fields.String(required=True, description='Name of repository owner.')
repo = fields.String(required=True, description='Name of repository.')
number = fields.Integer(required=True, description='Id of pull.')
class ReposOwnerRepoPullsNumberFilesInput:
class Get:
"""
List pull request files.
"""
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Path(Schema):
owner = fields.String(required=True, description='Name of repository owner.')
repo = fields.String(required=True, description='Name of repository.')
number = fields.Integer(required=True, description='Id of pull.')
class ReposOwnerRepoPullsNumberMergeInput:
class Get:
"""
Check if a pull request has been merged.
"""
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Path(Schema):
owner = fields.String(required=True, description='Name of repository owner.')
repo = fields.String(required=True, description='Name of repository.')
number = fields.Integer(required=True, description='Id of pull.')
class Put:
"""
Merge a pull request (Merge Button).
"""
class Body(MergePullBody):
pass
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Path(Schema):
owner = fields.String(required=True, description='Name of repository owner.')
repo = fields.String(required=True, description='Name of repository.')
number = fields.Integer(required=True, description='Id of pull.')
class ReposOwnerRepoReadmeInput:
class Get:
"""
Get the README.
This method returns the preferred README for a repository.
"""
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Path(Schema):
owner = fields.String(required=True, description='Name of repository owner.')
repo = fields.String(required=True, description='Name of repository.')
class Query(Schema):
ref = fields.String(description='The String name of the Commit/Branch/Tag. Defaults to master.')
class ReposOwnerRepoReleasesInput:
class Get:
"""
Users with push access to the repository will receive all releases (i.e., published releases and draft releases). Users with pull access will receive published releases only.
"""
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Path(Schema):
owner = fields.String(required=True, description='Name of repository owner.')
repo = fields.String(required=True, description='Name of repository.')
class Post:
"""
Create a release
Users with push access to the repository can create a release.
"""
class Body(Release_create):
pass
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Path(Schema):
owner = fields.String(required=True, description='Name of repository owner.')
repo = fields.String(required=True, description='Name of repository.')
class ReposOwnerRepoReleasesAssetsIdInput:
class Delete:
"""
Delete a release asset
"""
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Path(Schema):
owner = fields.String(required=True, description='Name of repository owner.')
repo = fields.String(required=True, description='Name of repository.')
id = fields.String(required=True)
class Get:
"""
Get a single release asset
"""
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Path(Schema):
owner = fields.String(required=True, description='Name of repository owner.')
repo = fields.String(required=True, description='Name of repository.')
id = fields.String(required=True)
class Patch:
"""
Edit a release asset
Users with push access to the repository can edit a release asset.
"""
class Body(AssetPatch):
pass
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Path(Schema):
owner = fields.String(required=True, description='Name of repository owner.')
repo = fields.String(required=True, description='Name of repository.')
id = fields.String(required=True)
class ReposOwnerRepoReleasesIdInput:
class Delete:
"""
Users with push access to the repository can delete a release.
"""
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Path(Schema):
owner = fields.String(required=True, description='Name of repository owner.')
repo = fields.String(required=True, description='Name of repository.')
id = fields.String(required=True)
class Get:
"""
Get a single release
"""
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Path(Schema):
owner = fields.String(required=True, description='Name of repository owner.')
repo = fields.String(required=True, description='Name of repository.')
id = fields.String(required=True)
class Patch:
"""
Users with push access to the repository can edit a release
"""
class Body(Release_create):
pass
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Path(Schema):
owner = fields.String(required=True, description='Name of repository owner.')
repo = fields.String(required=True, description='Name of repository.')
id = fields.String(required=True)
class ReposOwnerRepoReleasesIdAssetsInput:
class Get:
"""
List assets for a release
"""
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Path(Schema):
owner = fields.String(required=True, description='Name of repository owner.')
repo = fields.String(required=True, description='Name of repository.')
id = fields.String(required=True)
class ReposOwnerRepoStargazersInput:
class Get:
"""
List Stargazers.
"""
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Path(Schema):
owner = fields.String(required=True, description='Name of repository owner.')
repo = fields.String(required=True, description='Name of repository.')
class ReposOwnerRepoStatsCodeFrequencyInput:
class Get:
"""
Get the number of additions and deletions per week.
Returns a weekly aggregate of the number of additions and deletions pushed
to a repository.
"""
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Path(Schema):
owner = fields.String(required=True, description='Name of repository owner.')
repo = fields.String(required=True, description='Name of repository.')
class ReposOwnerRepoStatsCommitActivityInput:
class Get:
"""
Get the last year of commit activity data.
Returns the last year of commit activity grouped by week. The days array
is a group of commits per day, starting on Sunday.
"""
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Path(Schema):
owner = fields.String(required=True, description='Name of repository owner.')
repo = fields.String(required=True, description='Name of repository.')
class ReposOwnerRepoStatsContributorsInput:
class Get:
"""
Get contributors list with additions, deletions, and commit counts.
"""
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Path(Schema):
owner = fields.String(required=True, description='Name of repository owner.')
repo = fields.String(required=True, description='Name of repository.')
class ReposOwnerRepoStatsParticipationInput:
class Get:
"""
Get the weekly commit count for the repo owner and everyone else.
"""
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Path(Schema):
owner = fields.String(required=True, description='Name of repository owner.')
repo = fields.String(required=True, description='Name of repository.')
class ReposOwnerRepoStatsPunchCardInput:
class Get:
"""
Get the number of commits per hour in each day.
Each array contains the day number, hour number, and number of commits:
    0-6: day of the week (Sunday - Saturday)
    0-23: hour of the day
    number of commits
For example, [2, 14, 25] indicates that there were 25 total commits during
the 2:00pm hour on Tuesdays. All times are based on the time zone of
individual commits.
"""
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Path(Schema):
owner = fields.String(required=True, description='Name of repository owner.')
repo = fields.String(required=True, description='Name of repository.')
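# Usage sketch (not part of the generated schemas): a small helper showing how
# the punch-card triples documented above can be read back as
# (weekday, hour, commits). Purely illustrative; the GitHub response itself is
# a plain list of [day, hour, commits] arrays.
_WEEKDAYS = ('Sunday', 'Monday', 'Tuesday', 'Wednesday',
             'Thursday', 'Friday', 'Saturday')

def _example_describe_punch_card_entry(entry):
    day, hour, commits = entry
    return '%d commits during the %02d:00 hour on %ss' % (commits, hour, _WEEKDAYS[day])
# e.g. _example_describe_punch_card_entry([2, 14, 25])
# -> '25 commits during the 14:00 hour on Tuesdays'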
class ReposOwnerRepoStatusesRefInput:
class Get:
"""
List Statuses for a specific Ref.
"""
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Path(Schema):
owner = fields.String(required=True, description='Name of repository owner.')
repo = fields.String(required=True, description='Name of repository.')
ref = fields.String(required=True, description='Ref to list the statuses from. It can be a SHA, a branch name, or a tag name.\n')
class Post:
"""
Create a Status.
"""
class Body(HeadBranch):
pass
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Path(Schema):
owner = fields.String(required=True, description='Name of repository owner.')
repo = fields.String(required=True, description='Name of repository.')
ref = fields.String(required=True, description='Ref to list the statuses from. It can be a SHA, a branch name, or a tag name.\n')
class ReposOwnerRepoSubscribersInput:
class Get:
"""
List watchers.
"""
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Path(Schema):
owner = fields.String(required=True, description='Name of repository owner.')
repo = fields.String(required=True, description='Name of repository.')
class ReposOwnerRepoSubscriptionInput:
class Delete:
"""
Delete a Repository Subscription.
"""
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Path(Schema):
owner = fields.String(required=True, description='Name of repository owner.')
repo = fields.String(required=True, description='Name of repository.')
class Get:
"""
Get a Repository Subscription.
"""
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Path(Schema):
owner = fields.String(required=True, description='Name of repository owner.')
repo = fields.String(required=True, description='Name of repository.')
class Put:
"""
Set a Repository Subscription
"""
class Body(SubscribitionBody):
pass
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Path(Schema):
owner = fields.String(required=True, description='Name of repository owner.')
repo = fields.String(required=True, description='Name of repository.')
class ReposOwnerRepoTagsInput:
class Get:
"""
Get list of tags.
"""
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Path(Schema):
owner = fields.String(required=True, description='Name of repository owner.')
repo = fields.String(required=True, description='Name of repository.')
class ReposOwnerRepoTeamsInput:
class Get:
"""
Get list of teams
"""
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Path(Schema):
owner = fields.String(required=True, description='Name of repository owner.')
repo = fields.String(required=True, description='Name of repository.')
class ReposOwnerRepoWatchersInput:
class Get:
"""
List Stargazers. New implementation.
"""
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Path(Schema):
owner = fields.String(required=True, description='Name of repository owner.')
repo = fields.String(required=True, description='Name of repository.')
class ReposOwnerRepoArchiveFormatPathInput:
class Get:
"""
Get archive link.
This method will return a 302 to a URL to download a tarball or zipball
archive for a repository. Please make sure your HTTP framework is
configured to follow redirects or you will need to use the Location header
to make a second GET request.
Note: For private repositories, these links are temporary and expire quickly.
"""
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Path(Schema):
owner = fields.String(required=True, description='Name of repository owner.')
repo = fields.String(required=True, description='Name of repository.')
archive_format = fields.String(required=True, validate=[OneOf(choices=['tarball', 'zipball'], labels=[])])
path = fields.String(required=True, description="Valid Git reference, defaults to 'master'.")
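# Illustrative sketch (not generated from the spec): the OneOf validator turns
# the archive_format choices into load-time validation. Assuming marshmallow 3,
# an unsupported value raises ValidationError:
#
#     from marshmallow import ValidationError
#     path_schema = ReposOwnerRepoArchiveFormatPathInput.Get.Path()
#     path_schema.load({'owner': 'octocat', 'repo': 'Hello-World',
#                       'archive_format': 'tarball', 'path': 'master'})  # valid
#     try:
#         path_schema.load({'owner': 'octocat', 'repo': 'Hello-World',
#                           'archive_format': 'rar', 'path': 'master'})
#     except ValidationError as err:
#         pass  # err.messages['archive_format'] reports the invalid choice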
class RepositoriesInput:
class Get:
"""
List all public repositories.
This provides a dump of every public repository, in the order that they
were created.
Note: Pagination is powered exclusively by the since parameter. Use the
Link header to get the URL for the next page of repositories.
"""
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Query(Schema):
since = fields.String(description='The time should be passed in as UTC in the ISO 8601 format: YYYY-MM-DDTHH:MM:SSZ.\nExample: "2012-10-09T23:39:01Z".\n')
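# Illustrative sketch (not generated from the spec): query strings for this
# endpoint can be validated before building the request. Assuming marshmallow 3:
#
#     query = RepositoriesInput.Get.Query().load({'since': '2012-10-09T23:39:01Z'})
#     # query == {'since': '2012-10-09T23:39:01Z'}
#     RepositoriesInput.Get.Query().load({})  # `since` is optional, so {} is valid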
class SearchCodeInput:
class Get:
"""
Search code.
"""
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Query(Schema):
order = fields.String(description='The sort order if the sort param is provided. Can be either asc or desc.', missing=lambda: 'desc', validate=[OneOf(choices=['desc', 'asc'], labels=[])])
q = fields.String(required=True, description="The search terms. This can be any combination of the supported code\nsearch parameters:\n'Search In' Qualifies which fields are searched. With this qualifier\nyou can restrict the search to just the file contents, the file path,\nor both.\n'Languages' Searches code based on the language it's written in.\n'Forks' Filters repositories based on the number of forks, and/or\nwhether code from forked repositories should be included in the results\nat all.\n'Size' Finds files that match a certain size (in bytes).\n'Path' Specifies the path that the resulting file must be at.\n'Extension' Matches files with a certain extension.\n'Users' or 'Repositories' Limits searches to a specific user or repository.\n")
sort = fields.String(description="Can only be 'indexed', which indicates how recently a file has been indexed\nby the GitHub search infrastructure. If not provided, results are sorted\nby best match.\n", validate=[OneOf(choices=['indexed'], labels=[])])
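# Illustrative sketch (not generated from the spec): `q` is required while
# `order` falls back to its `missing` default on load. Assuming marshmallow 3:
#
#     query = SearchCodeInput.Get.Query().load({'q': 'addClass in:file language:js repo:jquery/jquery'})
#     # query['order'] == 'desc' (supplied by the missing= callable)
#     # query['q'] is the search expression, passed through unchanged
#     SearchCodeInput.Get.Query().load({'sort': 'indexed'})  # raises ValidationError: 'q' is required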
class SearchIssuesInput:
class Get:
"""
Find issues by state and keyword. (This method returns up to 100 results per page.)
"""
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Query(Schema):
order = fields.String(description='The sort order if the sort param is provided. Can be either asc or desc.', missing=lambda: 'desc', validate=[OneOf(choices=['desc', 'asc'], labels=[])])
q = fields.String(required=True, description='The q search term can also contain any combination of the supported issue search qualifiers:')
sort = fields.String(description='The sort field. Can be comments, created, or updated. Default: results are sorted by best match.', validate=[OneOf(choices=['updated', 'created', 'comments'], labels=[])])
class SearchRepositoriesInput:
class Get:
"""
Search repositories.
"""
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Query(Schema):
order = fields.String(description='The sort order if the sort param is provided. Can be either asc or desc.', missing=lambda: 'desc', validate=[OneOf(choices=['desc', 'asc'], labels=[])])
q = fields.String(required=True, description="The search terms. This can be any combination of the supported repository\nsearch parameters:\n'Search In' Qualifies which fields are searched. With this qualifier you\ncan restrict the search to just the repository name, description, readme,\nor any combination of these.\n'Size' Finds repositories that match a certain size (in kilobytes).\n'Forks' Filters repositories based on the number of forks, and/or whether\nforked repositories should be included in the results at all.\n'Created' and 'Last Updated' Filters repositories based on times of\ncreation, or when they were last updated.\n'Users or Repositories' Limits searches to a specific user or repository.\n'Languages' Searches repositories based on the language they are written in.\n'Stars' Searches repositories based on the number of stars.\n")
sort = fields.String(description='If not provided, results are sorted by best match.', validate=[OneOf(choices=['stars', 'forks', 'updated'], labels=[])])
class SearchUsersInput:
class Get:
"""
Search users.
"""
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Query(Schema):
order = fields.String(description='The sort order if the sort param is provided. Can be either asc or desc.', missing=lambda: 'desc', validate=[OneOf(choices=['desc', 'asc'], labels=[])])
q = fields.String(required=True, description="The search terms. This can be any combination of the supported user\nsearch parameters:\n'Search In' Qualifies which fields are searched. With this qualifier you\ncan restrict the search to just the username, public email, full name,\nlocation, or any combination of these.\n'Repository count' Filters users based on the number of repositories they\nhave.\n'Location' Filter users by the location indicated in their profile.\n'Language' Search for users that have repositories that match a certain\nlanguage.\n'Created' Filter users based on when they joined.\n'Followers' Filter users based on the number of followers they have.\n")
sort = fields.String(description='If not provided, results are sorted by best match.', validate=[OneOf(choices=['followers', 'repositories', 'joined'], labels=[])])
class TeamsTeamIdInput:
class Delete:
"""
Delete team.
In order to delete a team, the authenticated user must be an owner of the
org that the team is associated with.
"""
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Path(Schema):
teamId = fields.Integer(required=True, description='Id of team.')
class Get:
"""
Get team.
"""
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Path(Schema):
teamId = fields.Integer(required=True, description='Id of team.')
class Patch:
"""
Edit team.
In order to edit a team, the authenticated user must be an owner of the org
that the team is associated with.
"""
class Body(EditTeam):
pass
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Path(Schema):
teamId = fields.Integer(required=True, description='Id of team.')
class TeamsTeamIdMembersInput:
class Get:
"""
List team members.
In order to list members in a team, the authenticated user must be a member
of the team.
"""
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Path(Schema):
teamId = fields.Integer(required=True, description='Id of team.')
class TeamsTeamIdMembersUsernameInput:
class Delete:
"""
The "Remove team member" API is deprecated and is scheduled for removal in the next major version of the API. We recommend using the Remove team membership API instead. It allows you to remove both active and pending memberships.
Remove team member.
In order to remove a user from a team, the authenticated user must have 'admin'
permissions to the team or be an owner of the org that the team is associated
with.
NOTE: This does not delete the user, it just removes them from the team.
"""
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Path(Schema):
teamId = fields.Integer(required=True, description='Id of team.')
username = fields.String(required=True, description='Name of a member.')
class Get:
"""
The "Get team member" API is deprecated and is scheduled for removal in the next major version of the API. We recommend using the Get team membership API instead. It allows you to get both active and pending memberships.
Get team member.
In order to check if a user is a member of a team, the authenticated user must
be a member of the team.
"""
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Path(Schema):
teamId = fields.Integer(required=True, description='Id of team.')
username = fields.String(required=True, description='Name of a member.')
class Put:
"""
The API (described below) is deprecated and is scheduled for removal in the next major version of the API. We recommend using the Add team membership API instead. It allows you to invite new organization members to your teams.
Add team member.
In order to add a user to a team, the authenticated user must have 'admin'
permissions to the team or be an owner of the org that the team is associated
with.
"""
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Path(Schema):
teamId = fields.Integer(required=True, description='Id of team.')
username = fields.String(required=True, description='Name of a member.')
class TeamsTeamIdMembershipsUsernameInput:
class Delete:
"""
Remove team membership.
In order to remove a membership between a user and a team, the authenticated user must have 'admin' permissions to the team or be an owner of the organization that the team is associated with. NOTE: This does not delete the user, it just removes their membership from the team.
"""
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Path(Schema):
teamId = fields.Integer(required=True, description='Id of team.')
username = fields.String(required=True, description='Name of a member.')
class Get:
"""
Get team membership.
In order to get a user's membership with a team, the authenticated user must be a member of the team or an owner of the team's organization.
"""
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Path(Schema):
teamId = fields.Integer(required=True, description='Id of team.')
username = fields.String(required=True, description='Name of a member.')
class Put:
"""
Add team membership.
In order to add a membership between a user and a team, the authenticated user must have 'admin' permissions to the team or be an owner of the organization that the team is associated with.
If the user is already a part of the team's organization (meaning they're on at least one other team in the organization), this endpoint will add the user to the team.
If the user is completely unaffiliated with the team's organization (meaning they're on none of the organization's teams), this endpoint will send an invitation to the user via email. This newly-created membership will be in the 'pending' state until the user accepts the invitation, at which point the membership will transition to the 'active' state and the user will be added as a member of the team.
"""
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Path(Schema):
teamId = fields.Integer(required=True, description='Id of team.')
username = fields.String(required=True, description='Name of a member.')
class TeamsTeamIdReposInput:
class Get:
"""
List team repos
"""
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Path(Schema):
teamId = fields.Integer(required=True, description='Id of team.')
class TeamsTeamIdReposOrgRepoInput:
class Put:
"""
In order to add a repository to a team, the authenticated user must be an owner of the org that the team is associated with. Also, the repository must be owned by the organization, or a direct fork of a repository owned by the organization.
"""
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Path(Schema):
teamId = fields.Integer(required=True, description='Id of team.')
org = fields.String(required=True, description='Name of an organization.')
repo = fields.String(required=True, description='Name of a repository.')
class TeamsTeamIdReposOwnerRepoInput:
class Delete:
"""
In order to remove a repository from a team, the authenticated user must be an owner of the org that the team is associated with. NOTE: This does not delete the repository, it just removes it from the team.
"""
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Path(Schema):
teamId = fields.Integer(required=True, description='Id of team.')
owner = fields.String(required=True, description='Name of a repository owner.')
repo = fields.String(required=True, description='Name of a repository.')
class Get:
"""
Check if a team manages a repository
"""
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Path(Schema):
teamId = fields.Integer(required=True, description='Id of team.')
owner = fields.String(required=True, description='Name of a repository owner.')
repo = fields.String(required=True, description='Name of a repository.')
class UserInput:
class Get:
"""
Get the authenticated user.
"""
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Patch:
"""
Update the authenticated user.
"""
class Body(User_update):
pass
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class UserEmailsInput:
class Delete:
"""
Delete email address(es).
You can include a single email address or an array of addresses.
"""
class Body(PrimitiveValueSchema):
class schema_class(Schema):
value = fields.List(fields.String())
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Get:
"""
List email addresses for a user.
In the final version of the API, this method will return an array of hashes
with extended information for each email address indicating if the address
has been verified and if it is the primary email address for GitHub.
Until API v3 is finalized, use the application/vnd.github.v3 media type to
get the other response format.
"""
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Post:
"""
Add email address(es).
You can post a single email address or an array of addresses.
"""
class Body(PrimitiveValueSchema):
class schema_class(Schema):
value = fields.List(fields.String())
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
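# Illustrative sketch (not generated from the spec): the request body for
# adding or deleting email addresses is a bare JSON array, so Body wraps a
# list of strings. The exact unwrapping behaviour of PrimitiveValueSchema
# (defined earlier in this module) is an assumption; the inner schema_class
# itself can be exercised directly. Assuming marshmallow 3:
#
#     emails = ['octocat@github.com', 'support@github.com']
#     UserEmailsInput.Post.Body.schema_class().load({'value': emails})
#     # validates each entry of `value` as a string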
class UserFollowersInput:
class Get:
"""
List the authenticated user's followers
"""
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class UserFollowingInput:
class Get:
"""
List who the authenticated user is following.
"""
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class UserFollowingUsernameInput:
class Delete:
"""
Unfollow a user.
Unfollowing a user requires the user to be logged in and authenticated with
basic auth or OAuth with the user:follow scope.
"""
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Path(Schema):
username = fields.String(required=True, description='Name of user.')
class Get:
"""
Check if you are following a user.
"""
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Path(Schema):
username = fields.String(required=True, description='Name of user.')
class Put:
"""
Follow a user.
Following a user requires the user to be logged in and authenticated with
basic auth or OAuth with the user:follow scope.
"""
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Path(Schema):
username = fields.String(required=True, description='Name of user.')
class UserIssuesInput:
class Get:
"""
List issues.
List all issues across owned and member repositories for the authenticated
user.
"""
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Query(Schema):
filter = fields.String(required=True, description="Issues assigned to you / created by you / mentioning you / you're\nsubscribed to updates for / All issues the authenticated user can see\n", validate=[OneOf(choices=['assigned', 'created', 'mentioned', 'subscribed', 'all'], labels=[])])
state = fields.String(required=True, validate=[OneOf(choices=['open', 'closed'], labels=[])])
labels = fields.String(required=True, description='Comma-separated list of label names. Example: bug,ui,@high.')
sort = fields.String(required=True, validate=[OneOf(choices=['created', 'updated', 'comments'], labels=[])])
direction = fields.String(required=True, validate=[OneOf(choices=['asc', 'desc'], labels=[])])
since = fields.String(description='Optional string of a timestamp in ISO 8601 format: YYYY-MM-DDTHH:MM:SSZ.\nOnly issues updated at or after this time are returned.\n')
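# Illustrative sketch (not generated from the spec): every filter parameter of
# this query schema except `since` is required, so a complete dict must be
# supplied up front. Assuming marshmallow 3:
#
#     query = UserIssuesInput.Get.Query().load({
#         'filter': 'assigned',
#         'state': 'open',
#         'labels': 'bug,ui,@high',
#         'sort': 'updated',
#         'direction': 'desc',
#     })
#     # omitting any of the five required keys, or passing a value outside the
#     # OneOf choices (e.g. state='merged'), raises ValidationError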
class UserKeysInput:
class Get:
"""
List your public keys.
Lists the current user's keys. Management of public keys via the API requires
that you are authenticated through basic auth, or OAuth with the 'user', 'write:public_key' scopes.
"""
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Post:
"""
Create a public key.
"""
class Body(User_keys_post):
pass
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class UserKeysKeyIdInput:
class Delete:
"""
Delete a public key. Removes a public key. Requires that you are authenticated via Basic Auth or via OAuth with at least admin:public_key scope.
"""
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Path(Schema):
keyId = fields.Integer(required=True, description='ID of key.')
class Get:
"""
Get a single public key.
"""
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Path(Schema):
keyId = fields.Integer(required=True, description='ID of key.')
class UserOrgsInput:
class Get:
"""
List public and private organizations for the authenticated user.
"""
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class UserReposInput:
class Get:
"""
List repositories for the authenticated user. Note that this does not include
repositories owned by organizations which the user can access. You can list
user organizations and list organization repositories separately.
"""
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Query(Schema):
type = fields.String(missing=lambda: 'all', validate=[OneOf(choices=['all', 'public', 'private', 'forks', 'sources', 'member'], labels=[])])
class Post:
"""
Create a new repository for the authenticated user. OAuth users must supply
repo scope.
"""
class Body(PostRepo):
pass
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class UserStarredInput:
class Get:
"""
List repositories being starred by the authenticated user.
"""
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Query(Schema):
direction = fields.String(description="Ignored without 'sort' parameter.")
sort = fields.String(description='', missing=lambda: 'created', validate=[OneOf(choices=['created', 'updated'], labels=[])])
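# Illustrative sketch (not generated from the spec): `sort` defaults to
# 'created' via its missing= callable, and `direction` is free-form but, per
# the description above, ignored unless sort is supplied. Assuming marshmallow 3:
#
#     UserStarredInput.Get.Query().load({})  # -> {'sort': 'created'}
#     UserStarredInput.Get.Query().load({'sort': 'updated', 'direction': 'asc'})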
class UserStarredOwnerRepoInput:
class Delete:
"""
Unstar a repository
"""
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Path(Schema):
owner = fields.String(required=True, description='Name of a repository owner.')
repo = fields.String(required=True, description='Name of a repository.')
class Get:
"""
Check if you are starring a repository.
"""
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Path(Schema):
owner = fields.String(required=True, description='Name of a repository owner.')
repo = fields.String(required=True, description='Name of a repository.')
class Put:
"""
Star a repository.
"""
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Path(Schema):
owner = fields.String(required=True, description='Name of a repository owner.')
repo = fields.String(required=True, description='Name of a repository.')
class UserSubscriptionsInput:
class Get:
"""
List repositories being watched by the authenticated user.
"""
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class UserSubscriptionsOwnerRepoInput:
class Delete:
"""
Stop watching a repository
"""
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Path(Schema):
owner = fields.String(required=True, description='Name of the owner.')
repo = fields.String(required=True, description='Name of repository.')
class Get:
"""
Check if you are watching a repository.
"""
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Path(Schema):
owner = fields.String(required=True, description='Name of the owner.')
repo = fields.String(required=True, description='Name of repository.')
class Put:
"""
Watch a repository.
"""
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Path(Schema):
owner = fields.String(required=True, description='Name of the owner.')
repo = fields.String(required=True, description='Name of repository.')
class UserTeamsInput:
class Get:
"""
List all of the teams across all of the organizations to which the authenticated user belongs. This method requires user or repo scope when authenticating via OAuth.
"""
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class UsersInput:
class Get:
"""
Get all users.
This provides a dump of every user, in the order that they signed up for GitHub.
Note: Pagination is powered exclusively by the since parameter. Use the Link
header to get the URL for the next page of users.
"""
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Query(Schema):
since = fields.Integer(description="The integer ID of the last User that you've seen.")
class UsersUsernameInput:
class Get:
"""
Get a single user.
"""
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Path(Schema):
username = fields.String(required=True, description='Name of user.')
class UsersUsernameEventsInput:
class Get:
"""
If you are authenticated as the given user, you will see your private events. Otherwise, you'll only see public events.
"""
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Path(Schema):
username = fields.String(required=True, description='Name of user.')
class UsersUsernameEventsOrgsOrgInput:
class Get:
"""
This is the user's organization dashboard. You must be authenticated as the user to view this.
"""
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Path(Schema):
username = fields.String(required=True, description='Name of user.')
org = fields.String(required=True)
class UsersUsernameFollowersInput:
class Get:
"""
List a user's followers
"""
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Path(Schema):
username = fields.String(required=True, description='Name of user.')
class UsersUsernameFollowingTargetUserInput:
class Get:
"""
Check if one user follows another.
"""
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Path(Schema):
username = fields.String(required=True, description='Name of user.')
targetUser = fields.String(required=True, description='Name of user.')
class UsersUsernameGistsInput:
class Get:
"""
List a user's gists.
"""
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Path(Schema):
username = fields.String(required=True, description='Name of user.')
class Query(Schema):
since = fields.String(description='The time should be passed in as UTC in the ISO 8601 format: YYYY-MM-DDTHH:MM:SSZ.\nExample: "2012-10-09T23:39:01Z".\n')
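# Illustrative sketch (not generated from the spec): path and query parameters
# are modelled as separate schemas and validated independently. Assuming
# marshmallow 3; the URL template below is a hypothetical example:
#
#     path = UsersUsernameGistsInput.Get.Path().load({'username': 'octocat'})
#     query = UsersUsernameGistsInput.Get.Query().load({'since': '2012-10-09T23:39:01Z'})
#     url = '/users/{username}/gists'.format(**path)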
class UsersUsernameKeysInput:
class Get:
"""
List public keys for a user.
Lists the verified public keys for a user. This is accessible by anyone.
"""
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Path(Schema):
username = fields.String(required=True, description='Name of user.')
class UsersUsernameOrgsInput:
class Get:
"""
List all public organizations for a user.
"""
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Path(Schema):
username = fields.String(required=True, description='Name of user.')
class UsersUsernameReceivedEventsInput:
class Get:
"""
These are events that a user has received. If you are authenticated as the given user, you will see private events. Otherwise, you'll only see public events.
"""
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Path(Schema):
username = fields.String(required=True, description='Name of user.')
class UsersUsernameReceivedEventsPublicInput:
class Get:
"""
List public events that a user has received
"""
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Path(Schema):
username = fields.String(required=True, description='Name of user.')
class UsersUsernameReposInput:
class Get:
"""
List public repositories for the specified user.
"""
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Path(Schema):
username = fields.String(required=True, description='Name of user.')
class Query(Schema):
type = fields.String(missing=lambda: 'all', validate=[OneOf(choices=['all', 'public', 'private', 'forks', 'sources', 'member'], labels=[])])
class UsersUsernameStarredInput:
class Get:
"""
List repositories being starred by a user.
"""
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Path(Schema):
username = fields.String(required=True, description='Name of user.')
class UsersUsernameSubscriptionsInput:
class Get:
"""
List repositories being watched by a user.
"""
class Header(Schema):
X_GitHub_Media_Type = fields.String(description='You can check the current version of media type in responses.\n', data_key='X-GitHub-Media-Type')
Accept = fields.String(description='Is used to set specified media type.')
X_RateLimit_Limit = fields.Integer(data_key='X-RateLimit-Limit')
X_RateLimit_Remaining = fields.Integer(data_key='X-RateLimit-Remaining')
X_RateLimit_Reset = fields.Integer(data_key='X-RateLimit-Reset')
X_GitHub_Request_Id = fields.Integer(data_key='X-GitHub-Request-Id')
class Path(Schema):
username = fields.String(required=True, description='Name of user.')
class EmojisOutput:
class Get200(Emojis):
"""OK"""
pass
class EventsOutput:
class Get200(Events):
"""OK"""
pass
class FeedsOutput:
class Get200(Feeds):
"""OK"""
pass
class GistsOutput:
class Get200(GistsItem):
"""OK"""
def __init__(self, *args, **kwargs):
kwargs['many'] = True
super().__init__(*args, **kwargs)
class Post201(Gist):
"""Created"""
pass
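# Illustrative sketch (not generated from the spec): list responses force
# many=True in __init__, so Get200 serialises a sequence of gists while
# Post201 handles a single object. Assuming marshmallow 3 dump semantics;
# gist_a, gist_b and created_gist are hypothetical objects/dicts matching the
# GistsItem / Gist schemas defined earlier in this module:
#
#     GistsOutput.Get200().dump([gist_a, gist_b])  # list in, list of dicts out
#     GistsOutput.Post201().dump(created_gist)     # single dict out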
class GistsPublicOutput:
class Get200(GistsItem):
"""OK"""
def __init__(self, *args, **kwargs):
kwargs['many'] = True
super().__init__(*args, **kwargs)
class GistsStarredOutput:
class Get200(GistsItem):
"""OK"""
def __init__(self, *args, **kwargs):
kwargs['many'] = True
super().__init__(*args, **kwargs)
class GistsIdOutput:
class Get200(Gist):
"""OK"""
pass
class Patch200(Gist):
"""OK"""
pass
class GistsIdCommentsOutput:
class Get200(CommentsItem):
"""OK"""
def __init__(self, *args, **kwargs):
kwargs['many'] = True
super().__init__(*args, **kwargs)
class Post201(Comment):
"""Created"""
pass
class GistsIdCommentsCommentIdOutput:
class Get200(Comment):
"""OK"""
pass
class Patch200(Comment):
"""OK"""
pass
class GitignoreTemplatesOutput:
class Get200(GitignoreItem):
"""OK"""
def __init__(self, *args, **kwargs):
kwargs['many'] = True
super().__init__(*args, **kwargs)
class GitignoreTemplatesLanguageOutput:
class Get200(Gitignore_lang):
"""OK"""
pass
class IssuesOutput:
class Get200(IssuesItem):
"""OK"""
def __init__(self, *args, **kwargs):
kwargs['many'] = True
super().__init__(*args, **kwargs)
class LegacyIssuesSearchOwnerRepositoryStateKeywordOutput:
class Get200(Search_issues_by_keyword):
"""OK"""
pass
class LegacyReposSearchKeywordOutput:
class Get200(Search_repositories_by_keyword):
"""OK"""
pass
class LegacyUserEmailEmailOutput:
class Get200(Search_user_by_email):
"""OK"""
pass
class LegacyUserSearchKeywordOutput:
class Get200(Search_users_by_keyword):
"""OK"""
pass
class MetaOutput:
class Get200(Meta):
"""OK"""
pass
class NetworksOwnerRepoEventsOutput:
class Get200(Events):
"""OK"""
pass
class NotificationsOutput:
class Get200(Notifications):
"""OK"""
pass
class NotificationsThreadsIdOutput:
class Get200(Notifications):
"""OK"""
pass
class NotificationsThreadsIdSubscriptionOutput:
class Get200(Subscription):
"""OK"""
pass
class Put200(Subscription):
"""OK"""
pass
class OrgsOrgOutput:
class Get200(Organization):
"""OK"""
pass
class Patch200(Organization):
"""OK"""
pass
class OrgsOrgEventsOutput:
class Get200(Events):
"""OK"""
pass
class OrgsOrgIssuesOutput:
class Get200(IssuesItem):
"""OK"""
def __init__(self, *args, **kwargs):
kwargs['many'] = True
super().__init__(*args, **kwargs)
class OrgsOrgMembersOutput:
class Get200(UsersItem):
"""OK"""
def __init__(self, *args, **kwargs):
kwargs['many'] = True
super().__init__(*args, **kwargs)
class OrgsOrgPublicMembersOutput:
class Get200(UsersItem):
"""OK"""
def __init__(self, *args, **kwargs):
kwargs['many'] = True
super().__init__(*args, **kwargs)
class OrgsOrgReposOutput:
class Get200(ReposItem):
"""OK"""
def __init__(self, *args, **kwargs):
kwargs['many'] = True
super().__init__(*args, **kwargs)
class Post201(ReposItem):
"""Created"""
def __init__(self, *args, **kwargs):
kwargs['many'] = True
super().__init__(*args, **kwargs)
class OrgsOrgTeamsOutput:
class Get200(TeamsItem):
"""OK"""
def __init__(self, *args, **kwargs):
kwargs['many'] = True
super().__init__(*args, **kwargs)
class Post201(Team):
"""Created"""
pass
class RateLimitOutput:
class Get200(Rate_limit):
"""OK"""
pass
class ReposOwnerRepoOutput:
class Get200(Repo):
"""OK"""
pass
class Patch200(Repo):
"""OK"""
pass
class ReposOwnerRepoAssigneesOutput:
class Get200(AssigneesItem):
"""OK"""
def __init__(self, *args, **kwargs):
kwargs['many'] = True
super().__init__(*args, **kwargs)
class ReposOwnerRepoBranchesOutput:
class Get200(BranchesItem):
"""OK"""
def __init__(self, *args, **kwargs):
kwargs['many'] = True
super().__init__(*args, **kwargs)
class ReposOwnerRepoBranchesBranchOutput:
class Get200(Branch):
"""OK"""
pass
class ReposOwnerRepoCollaboratorsOutput:
class Get200(UsersItem):
"""OK"""
def __init__(self, *args, **kwargs):
kwargs['many'] = True
super().__init__(*args, **kwargs)
class ReposOwnerRepoCommentsOutput:
class Get200(RepoCommentsItem):
"""OK"""
def __init__(self, *args, **kwargs):
kwargs['many'] = True
super().__init__(*args, **kwargs)
class ReposOwnerRepoCommentsCommentIdOutput:
class Get200(CommitComments):
"""OK"""
pass
class Patch200(CommitComments):
"""OK"""
pass
class ReposOwnerRepoCommitsOutput:
class Get200(CommitsItem):
"""OK"""
def __init__(self, *args, **kwargs):
kwargs['many'] = True
super().__init__(*args, **kwargs)
class ReposOwnerRepoCommitsRefStatusOutput:
class Get200(RefStatusItem):
"""OK"""
def __init__(self, *args, **kwargs):
kwargs['many'] = True
super().__init__(*args, **kwargs)
class ReposOwnerRepoCommitsShaCodeOutput:
class Get200(Commit):
"""OK"""
pass
class ReposOwnerRepoCommitsShaCodeCommentsOutput:
class Get200(RepoCommentsItem):
"""OK"""
def __init__(self, *args, **kwargs):
kwargs['many'] = True
super().__init__(*args, **kwargs)
class Post201(CommitComments):
"""Created"""
pass
class ReposOwnerRepoCompareBaseIdheadIdOutput:
class Get200(Compare_commits):
"""OK"""
pass
class ReposOwnerRepoContentsPathOutput:
class Delete200(DeleteFile):
"""OK"""
pass
class Get200(Contents_path):
"""OK"""
pass
class Put200(CreateFile):
"""OK"""
pass
class ReposOwnerRepoContributorsOutput:
class Get200(ContributorsItem):
"""OK"""
def __init__(self, *args, **kwargs):
kwargs['many'] = True
super().__init__(*args, **kwargs)
class ReposOwnerRepoDeploymentsOutput:
class Get200(Repo_deploymentsItem):
"""OK"""
def __init__(self, *args, **kwargs):
kwargs['many'] = True
super().__init__(*args, **kwargs)
class Post201(Deployment_resp):
"""Created"""
pass
class ReposOwnerRepoDeploymentsIdStatusesOutput:
class Get200(Deployment_statusesItem):
"""OK"""
def __init__(self, *args, **kwargs):
kwargs['many'] = True
super().__init__(*args, **kwargs)
class ReposOwnerRepoDownloadsOutput:
class Get200(Downloads):
"""OK"""
pass
class ReposOwnerRepoDownloadsDownloadIdOutput:
class Get200(Downloads):
"""OK"""
pass
class ReposOwnerRepoEventsOutput:
class Get200(Events):
"""OK"""
pass
class ReposOwnerRepoForksOutput:
class Get200(ForksItem):
"""OK"""
def __init__(self, *args, **kwargs):
kwargs['many'] = True
super().__init__(*args, **kwargs)
class Post201(Fork):
"""Created"""
pass
class ReposOwnerRepoGitBlobsOutput:
class Post201(Blobs):
"""Created"""
pass
class ReposOwnerRepoGitBlobsShaCodeOutput:
class Get200(Blob):
"""OK"""
pass
class ReposOwnerRepoGitCommitsOutput:
class Post201(GitCommit):
"""Created"""
pass
class ReposOwnerRepoGitCommitsShaCodeOutput:
class Get200(RepoCommit):
"""OK"""
pass
class ReposOwnerRepoGitRefsOutput:
class Get200(RefsItem):
"""OK"""
def __init__(self, *args, **kwargs):
kwargs['many'] = True
super().__init__(*args, **kwargs)
class Post201(HeadBranch):
"""Created"""
pass
class ReposOwnerRepoGitRefsRefOutput:
class Get200(HeadBranch):
"""OK"""
pass
class Patch200(HeadBranch):
"""OK"""
pass
class ReposOwnerRepoGitTagsOutput:
class Post201(Tags):
"""Created"""
pass
class ReposOwnerRepoGitTagsShaCodeOutput:
class Get200(Tag):
"""OK"""
pass
class ReposOwnerRepoGitTreesOutput:
class Post201(Trees):
"""Created"""
pass
class ReposOwnerRepoGitTreesShaCodeOutput:
class Get200(Tree):
"""OK"""
pass
class ReposOwnerRepoHooksOutput:
class Get200(HookItem):
"""OK"""
def __init__(self, *args, **kwargs):
kwargs['many'] = True
super().__init__(*args, **kwargs)
class Post201(HookItem):
"""Created"""
def __init__(self, *args, **kwargs):
kwargs['many'] = True
super().__init__(*args, **kwargs)
class ReposOwnerRepoHooksHookIdOutput:
class Get200(HookItem):
"""OK"""
def __init__(self, *args, **kwargs):
kwargs['many'] = True
super().__init__(*args, **kwargs)
class Patch200(HookItem):
"""OK"""
def __init__(self, *args, **kwargs):
kwargs['many'] = True
super().__init__(*args, **kwargs)
class ReposOwnerRepoIssuesOutput:
class Get200(IssuesItem):
"""OK"""
def __init__(self, *args, **kwargs):
kwargs['many'] = True
super().__init__(*args, **kwargs)
class Post201(Issue):
"""Created"""
pass
class ReposOwnerRepoIssuesCommentsOutput:
class Get200(IssuesCommentsItem):
"""OK"""
def __init__(self, *args, **kwargs):
kwargs['many'] = True
super().__init__(*args, **kwargs)
class ReposOwnerRepoIssuesCommentsCommentIdOutput:
class Get200(IssuesComment):
"""OK"""
pass
class Patch200(IssuesComment):
"""OK"""
pass
class ReposOwnerRepoIssuesEventsOutput:
class Get200(Events):
"""OK"""
pass
class ReposOwnerRepoIssuesEventsEventIdOutput:
class Get200(Event):
"""OK"""
pass
class ReposOwnerRepoIssuesNumberOutput:
class Get200(Issue):
"""OK"""
pass
class Patch200(Issue):
"""OK"""
pass
class ReposOwnerRepoIssuesNumberCommentsOutput:
class Get200(IssuesCommentsItem):
"""OK"""
def __init__(self, *args, **kwargs):
kwargs['many'] = True
super().__init__(*args, **kwargs)
class Post201(IssuesComment):
"""Created"""
pass
class ReposOwnerRepoIssuesNumberEventsOutput:
class Get200(Events):
"""OK"""
pass
class ReposOwnerRepoIssuesNumberLabelsOutput:
class Get200(LabelsItem):
"""OK"""
def __init__(self, *args, **kwargs):
kwargs['many'] = True
super().__init__(*args, **kwargs)
class Post201(Label):
"""Created"""
pass
class Put201(Label):
"""Created"""
pass
class ReposOwnerRepoKeysOutput:
class Get200(KeysItem):
"""OK"""
def __init__(self, *args, **kwargs):
kwargs['many'] = True
super().__init__(*args, **kwargs)
class Post201(User_keys_keyId):
"""Created"""
pass
class ReposOwnerRepoKeysKeyIdOutput:
class Get200(User_keys_keyId):
"""OK"""
pass
class ReposOwnerRepoLabelsOutput:
class Get200(LabelsItem):
"""OK"""
def __init__(self, *args, **kwargs):
kwargs['many'] = True
super().__init__(*args, **kwargs)
class Post201(Label):
"""Created"""
pass
class ReposOwnerRepoLabelsNameOutput:
class Get200(Label):
"""OK"""
pass
class Patch200(Label):
"""OK"""
pass
class ReposOwnerRepoLanguagesOutput:
class Get200(Languages):
"""OK"""
pass
class ReposOwnerRepoMergesOutput:
class Post201(MergesSuccessful):
"""Successful Response (The resulting merge commit)"""
pass
class Post404(MergesConflict):
"""Missing base response or missing head response"""
pass
class Post409(MergesConflict):
"""Merge conflict response."""
pass
class ReposOwnerRepoMilestonesOutput:
class Get200(Milestone):
"""OK"""
pass
class Post201(Milestone):
"""Created"""
pass
class ReposOwnerRepoMilestonesNumberOutput:
class Get200(Milestone):
"""OK"""
pass
class Patch200(Milestone):
"""OK"""
pass
class ReposOwnerRepoMilestonesNumberLabelsOutput:
class Get200(LabelsItem):
"""OK"""
def __init__(self, *args, **kwargs):
kwargs['many'] = True
super().__init__(*args, **kwargs)
class ReposOwnerRepoNotificationsOutput:
class Get200(Notifications):
"""OK"""
pass
class ReposOwnerRepoPullsOutput:
class Get200(PullsItem):
"""OK"""
def __init__(self, *args, **kwargs):
kwargs['many'] = True
super().__init__(*args, **kwargs)
class Post201(PullsItem):
"""Created"""
def __init__(self, *args, **kwargs):
kwargs['many'] = True
super().__init__(*args, **kwargs)
class ReposOwnerRepoPullsCommentsOutput:
class Get200(IssuesCommentsItem):
"""OK"""
def __init__(self, *args, **kwargs):
kwargs['many'] = True
super().__init__(*args, **kwargs)
class ReposOwnerRepoPullsCommentsCommentIdOutput:
class Get200(PullsComment):
"""OK"""
pass
class Patch200(PullsComment):
"""OK"""
pass
class ReposOwnerRepoPullsNumberOutput:
class Get200(PullRequest):
"""OK"""
pass
class Patch200(Repo):
"""OK"""
pass
class ReposOwnerRepoPullsNumberCommentsOutput:
class Get200(PullsComment):
"""OK"""
pass
class Post201(PullsComment):
"""Created"""
pass
class ReposOwnerRepoPullsNumberCommitsOutput:
class Get200(CommitsItem):
"""OK"""
def __init__(self, *args, **kwargs):
kwargs['many'] = True
super().__init__(*args, **kwargs)
class ReposOwnerRepoPullsNumberFilesOutput:
class Get200(PullsItem):
"""OK"""
def __init__(self, *args, **kwargs):
kwargs['many'] = True
super().__init__(*args, **kwargs)
class ReposOwnerRepoPullsNumberMergeOutput:
class Put200(Merge):
"""Response if merge was successful."""
pass
class Put405(Merge):
"""Response if merge cannot be performed."""
pass
class ReposOwnerRepoReadmeOutput:
class Get200(Contents_path):
"""OK"""
pass
class ReposOwnerRepoReleasesOutput:
class Get200(ReleasesItem):
"""OK"""
def __init__(self, *args, **kwargs):
kwargs['many'] = True
super().__init__(*args, **kwargs)
class Post201(Release):
"""Created"""
pass
class ReposOwnerRepoReleasesAssetsIdOutput:
class Get200(Asset):
"""OK"""
pass
class Patch200(Asset):
"""OK"""
pass
class ReposOwnerRepoReleasesIdOutput:
class Get200(Release):
"""OK"""
pass
class Patch200(Release):
"""OK"""
pass
class ReposOwnerRepoReleasesIdAssetsOutput:
class Get200(AssetsItem):
"""OK"""
def __init__(self, *args, **kwargs):
kwargs['many'] = True
super().__init__(*args, **kwargs)
class ReposOwnerRepoStargazersOutput:
class Get200(UsersItem):
"""OK"""
def __init__(self, *args, **kwargs):
kwargs['many'] = True
super().__init__(*args, **kwargs)
class ReposOwnerRepoStatsCodeFrequencyOutput:
class Get200(PrimitiveValueSchema):
class schema_class(Schema):
value = fields.List(fields.Integer())
class ReposOwnerRepoStatsCommitActivityOutput:
class Get200(CommitActivityStatsItem):
"""OK"""
def __init__(self, *args, **kwargs):
kwargs['many'] = True
super().__init__(*args, **kwargs)
class ReposOwnerRepoStatsContributorsOutput:
class Get200(ContributorsStatsItem):
"""OK"""
def __init__(self, *args, **kwargs):
kwargs['many'] = True
super().__init__(*args, **kwargs)
class ReposOwnerRepoStatsParticipationOutput:
class Get200(ParticipationStats):
"""OK"""
pass
class ReposOwnerRepoStatsPunchCardOutput:
class Get200(PrimitiveValueSchema):
class schema_class(Schema):
value = fields.List(fields.Integer())
class ReposOwnerRepoStatusesRefOutput:
class Get200(RefItem):
"""OK"""
def __init__(self, *args, **kwargs):
kwargs['many'] = True
super().__init__(*args, **kwargs)
class Post201(RefItem):
"""Created"""
def __init__(self, *args, **kwargs):
kwargs['many'] = True
super().__init__(*args, **kwargs)
class ReposOwnerRepoSubscribersOutput:
class Get200(UsersItem):
"""OK"""
def __init__(self, *args, **kwargs):
kwargs['many'] = True
super().__init__(*args, **kwargs)
class ReposOwnerRepoSubscriptionOutput:
class Get200(Subscribition):
"""OK"""
pass
class Put200(Subscribition):
"""OK"""
pass
class ReposOwnerRepoTagsOutput:
class Get200(Tags):
"""OK"""
pass
class ReposOwnerRepoTeamsOutput:
class Get200(TeamsItem):
"""OK"""
def __init__(self, *args, **kwargs):
kwargs['many'] = True
super().__init__(*args, **kwargs)
class ReposOwnerRepoWatchersOutput:
class Get200(UsersItem):
"""OK"""
def __init__(self, *args, **kwargs):
kwargs['many'] = True
super().__init__(*args, **kwargs)
class RepositoriesOutput:
class Get200(RepositoriesItem):
"""OK"""
def __init__(self, *args, **kwargs):
kwargs['many'] = True
super().__init__(*args, **kwargs)
class SearchCodeOutput:
class Get200(Search_code):
"""OK"""
pass
class SearchIssuesOutput:
class Get200(Search_issues):
"""OK"""
pass
class SearchRepositoriesOutput:
class Get200(Search_repositories):
"""OK"""
pass
class SearchUsersOutput:
class Get200(Search_users):
"""OK"""
pass
class TeamsTeamIdOutput:
class Get200(Team):
"""OK"""
pass
class Patch200(Team):
"""OK"""
pass
class TeamsTeamIdMembersOutput:
class Get200(UsersItem):
"""OK"""
def __init__(self, *args, **kwargs):
kwargs['many'] = True
super().__init__(*args, **kwargs)
class TeamsTeamIdMembersUsernameOutput:
class Put422(OrganizationAsTeamMember):
"""If you attempt to add an organization to a team, you will get this."""
pass
class TeamsTeamIdMembershipsUsernameOutput:
class Get200(TeamMembership):
"""User is a member."""
pass
class Put200(TeamMembership):
"""Team member added."""
pass
class Put422(OrganizationAsTeamMember):
"""If you attempt to add an organization to a team, you will get this."""
pass
class TeamsTeamIdReposOutput:
class Get200(TeamReposItem):
"""OK"""
def __init__(self, *args, **kwargs):
kwargs['many'] = True
super().__init__(*args, **kwargs)
class UserOutput:
class Get200(User):
"""OK"""
pass
class Patch200(User):
"""OK"""
pass
class UserEmailsOutput:
class Get200(PrimitiveValueSchema):
class schema_class(Schema):
value = fields.List(fields.String())
class UserFollowersOutput:
class Get200(UsersItem):
"""OK"""
def __init__(self, *args, **kwargs):
kwargs['many'] = True
super().__init__(*args, **kwargs)
class UserFollowingOutput:
class Get200(UsersItem):
"""OK"""
def __init__(self, *args, **kwargs):
kwargs['many'] = True
super().__init__(*args, **kwargs)
class UserIssuesOutput:
class Get200(IssuesItem):
"""OK"""
def __init__(self, *args, **kwargs):
kwargs['many'] = True
super().__init__(*args, **kwargs)
class UserKeysOutput:
class Get200(GitignoreItem):
"""OK"""
def __init__(self, *args, **kwargs):
kwargs['many'] = True
super().__init__(*args, **kwargs)
class Post201(User_keys_keyId):
"""Created"""
pass
class UserKeysKeyIdOutput:
class Get200(User_keys_keyId):
"""OK"""
pass
class UserOrgsOutput:
class Get200(GitignoreItem):
"""OK"""
def __init__(self, *args, **kwargs):
kwargs['many'] = True
super().__init__(*args, **kwargs)
class UserReposOutput:
class Get200(ReposItem):
"""OK"""
def __init__(self, *args, **kwargs):
kwargs['many'] = True
super().__init__(*args, **kwargs)
class Post201(ReposItem):
"""Created"""
def __init__(self, *args, **kwargs):
kwargs['many'] = True
super().__init__(*args, **kwargs)
class UserStarredOutput:
class Get200(GitignoreItem):
"""OK"""
def __init__(self, *args, **kwargs):
kwargs['many'] = True
super().__init__(*args, **kwargs)
class UserSubscriptionsOutput:
class Get200(User_userId_subscribitionsItem):
"""OK"""
def __init__(self, *args, **kwargs):
kwargs['many'] = True
super().__init__(*args, **kwargs)
class UserTeamsOutput:
class Get200(Teams_listItem):
"""OK"""
def __init__(self, *args, **kwargs):
kwargs['many'] = True
super().__init__(*args, **kwargs)
class UsersOutput:
class Get200(UsersItem):
"""OK"""
def __init__(self, *args, **kwargs):
kwargs['many'] = True
super().__init__(*args, **kwargs)
class UsersUsernameOutput:
class Get200(UsersItem):
"""OK"""
def __init__(self, *args, **kwargs):
kwargs['many'] = True
super().__init__(*args, **kwargs)
class UsersUsernameFollowersOutput:
class Get200(UsersItem):
"""OK"""
def __init__(self, *args, **kwargs):
kwargs['many'] = True
super().__init__(*args, **kwargs)
class UsersUsernameGistsOutput:
class Get200(GistsItem):
"""OK"""
def __init__(self, *args, **kwargs):
kwargs['many'] = True
super().__init__(*args, **kwargs)
class UsersUsernameKeysOutput:
class Get200(GitignoreItem):
"""OK"""
def __init__(self, *args, **kwargs):
kwargs['many'] = True
super().__init__(*args, **kwargs)
class UsersUsernameOrgsOutput:
class Get200(GitignoreItem):
"""OK"""
def __init__(self, *args, **kwargs):
kwargs['many'] = True
super().__init__(*args, **kwargs)
class UsersUsernameReposOutput:
class Get200(ReposItem):
"""OK"""
def __init__(self, *args, **kwargs):
kwargs['many'] = True
super().__init__(*args, **kwargs)
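# --- Illustrative example (not part of the generated schema module) ---
# A minimal sketch of serializing response data with one of the Output schemas
# above. Get200 classes that set kwargs['many'] = True in __init__ dump a list
# of records; the helper name and the shape of `users` are hypothetical.
def _example_dump_users_response(users):
    schema = UsersOutput.Get200()  # many=True is forced by __init__ above
    return schema.dump(users)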
| 36.762462 | 867 | 0.664314 | 43,835 | 367,257 | 5.428835 | 0.033284 | 0.12783 | 0.041013 | 0.082026 | 0.827161 | 0.805499 | 0.788316 | 0.776664 | 0.766726 | 0.760469 | 0 | 0.004142 | 0.224913 | 367,257 | 9,989 | 868 | 36.766143 | 0.83186 | 0.053377 | 0 | 0.804089 | 1 | 0.005548 | 0.217685 | 0.026164 | 0 | 0 | 0 | 0.0001 | 0 | 1 | 0.011095 | false | 0.027421 | 0.000476 | 0 | 0.632271 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 9 |
97458e9f5772383f7eb21897c80ca92e22e1db52 | 148 | py | Python | knx_stack/encode/layer/network/n_data_group/__init__.py | majamassarini/knx-stack | 11a9baac6b7600649b5fbca43c93b200b23676b4 | [
"MIT"
] | 2 | 2021-07-28T07:42:28.000Z | 2022-01-25T18:56:05.000Z | knx_stack/encode/layer/network/n_data_group/__init__.py | majamassarini/knx-stack | 11a9baac6b7600649b5fbca43c93b200b23676b4 | [
"MIT"
] | 6 | 2021-07-25T21:36:01.000Z | 2022-02-20T21:11:31.000Z | knx_stack/encode/layer/network/n_data_group/__init__.py | majamassarini/knx-stack | 11a9baac6b7600649b5fbca43c93b200b23676b4 | [
"MIT"
] | null | null | null | from knx_stack.encode.layer.network.n_data_group.encode import nl_encode as encode
from knx_stack.encode.layer.network.n_data_group import ind, req
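# Illustrative note (not part of the original module): the re-exports above
# make the network-layer group-data encoder reachable under the package path,
# for example:
#
#   from knx_stack.encode.layer.network import n_data_group
#   n_data_group.encode   # alias for nl_encode
#   n_data_group.ind      # submodule re-exported above
#   n_data_group.req      # submodule re-exported above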
| 49.333333 | 82 | 0.858108 | 27 | 148 | 4.444444 | 0.518519 | 0.116667 | 0.2 | 0.3 | 0.666667 | 0.666667 | 0.666667 | 0.666667 | 0.666667 | 0 | 0 | 0 | 0.074324 | 148 | 2 | 83 | 74 | 0.875912 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
97c63b10b9ce192a9d4fd0aac54603e74c0ed5ed | 7,963 | py | Python | samples/test_bf_core.py | yoshi389111/brainfuck-compiler-c | 42650fa35c913b50dc9aabb482c4720943e718cf | [
"MIT"
] | null | null | null | samples/test_bf_core.py | yoshi389111/brainfuck-compiler-c | 42650fa35c913b50dc9aabb482c4720943e718cf | [
"MIT"
] | null | null | null | samples/test_bf_core.py | yoshi389111/brainfuck-compiler-c | 42650fa35c913b50dc9aabb482c4720943e718cf | [
"MIT"
] | null | null | null | # bfコマンドマクロの基本命令のテスト
import unittest
import bf_core as c
from bf_sim import BfSim
class TestBfCore(unittest.TestCase):
    def test_move_data_1(self):
        source = c.move_data(1, 2)
        sim = BfSim(source)
        sim.memory[1] = 3
        sim.run(10000)
        self.assertEqual(sim.memory[1], 0)
        self.assertEqual(sim.memory[2], 3)

    def test_move_data_2(self):
        source = c.move_data(1, 2)
        sim = BfSim(source)
        sim.memory[1] = 3
        sim.memory[2] = 4
        sim.run(10000)
        self.assertEqual(sim.memory[1], 0)
        self.assertEqual(sim.memory[2], 7)

    def test_override_data_1(self):
        source = c.override_data(1, 2)
        sim = BfSim(source)
        sim.memory[1] = 3
        sim.memory[2] = 4
        sim.run(10000)
        self.assertEqual(sim.memory[1], 0)
        self.assertEqual(sim.memory[2], 3)

    def test_copy_data_1(self):
        source = c.copy_data(1, 2, 3)
        sim = BfSim(source)
        sim.memory[1] = 3
        sim.memory[2] = 4
        sim.run(10000)
        self.assertEqual(sim.memory[1], 3)
        self.assertEqual(sim.memory[2], 3)

    def test_swap_data_1(self):
        source = c.swap_data(1, 2, 3)
        sim = BfSim(source)
        sim.memory[1] = 3
        sim.memory[2] = 4
        sim.run(10000)
        self.assertEqual(sim.memory[1], 4)
        self.assertEqual(sim.memory[2], 3)

    def test_init_value_1(self):
        source = c.init_value(1, 5)
        sim = BfSim(source)
        sim.run(10000)
        self.assertEqual(sim.memory[1], 5)

    def test_init_value_2(self):
        source = c.init_value(1, 120)
        sim = BfSim(source)
        sim.run(10000)
        self.assertEqual(sim.memory[1], 120)

    def test_init_value_3(self):
        source = c.init_value(1, 255)
        sim = BfSim(source)
        sim.run(10000)
        self.assertEqual(sim.memory[1], 255)

    def test_if_nz_then_1(self):
        source = c.if_nz_then(1, c.inc_pos(3))
        sim = BfSim(source)
        sim.memory[1] = 3
        sim.run(10000)
        self.assertEqual(sim.memory[1], 0)
        self.assertEqual(sim.memory[3], 1)

    def test_if_nz_then_2(self):
        source = c.if_nz_then(1, c.inc_pos(3))
        sim = BfSim(source)
        sim.memory[1] = 0
        sim.run(10000)
        self.assertEqual(sim.memory[1], 0)
        self.assertEqual(sim.memory[3], 0)

    def test_if_one_then_1(self):
        source = c.if_one_then(1, c.inc_pos(3))
        sim = BfSim(source)
        sim.memory[1] = 1
        sim.run(10000)
        self.assertEqual(sim.memory[1], 0)
        self.assertEqual(sim.memory[3], 1)

    def test_if_one_then_2(self):
        source = c.if_one_then(1, c.inc_pos(3))
        sim = BfSim(source)
        sim.memory[1] = 0
        sim.run(10000)
        self.assertEqual(sim.memory[1], 0)
        self.assertEqual(sim.memory[3], 0)

    def test_if_nz_tricky_1(self):
        source = c.if_nz_tricky(1, 1, 1, c.inc_pos(5), c.inc_pos(6))
        sim = BfSim(source)
        sim.memory[1] = 3
        sim.run(10000)
        self.assertEqual(sim.memory[1], 3)
        self.assertEqual(sim.memory[5], 1)
        self.assertEqual(sim.memory[6], 0)

    def test_if_nz_tricky_2(self):
        source = c.if_nz_tricky(1, 1, 1, c.inc_pos(5), c.inc_pos(6))
        sim = BfSim(source)
        sim.memory[1] = 0
        sim.run(10000)
        self.assertEqual(sim.memory[1], 0)
        self.assertEqual(sim.memory[5], 0)
        self.assertEqual(sim.memory[6], 1)

    def test_if_z_tricky_1(self):
        source = c.if_z_tricky(1, 1, 1, c.inc_pos(5), c.inc_pos(6))
        sim = BfSim(source)
        sim.memory[1] = 3
        sim.run(10000)
        self.assertEqual(sim.memory[1], 3)
        self.assertEqual(sim.memory[5], 0)
        self.assertEqual(sim.memory[6], 1)

    def test_if_z_tricky_2(self):
        source = c.if_z_tricky(1, 1, 1, c.inc_pos(5), c.inc_pos(6))
        sim = BfSim(source)
        sim.memory[1] = 0
        sim.run(10000)
        self.assertEqual(sim.memory[1], 0)
        self.assertEqual(sim.memory[5], 1)
        self.assertEqual(sim.memory[6], 0)

    def test_inc_data_tricky_1(self):
        source = c.inc_data_tricky(3, 1)
        sim = BfSim(source)
        sim.memory[3] = 0
        sim.run(10000)
        self.assertEqual(sim.memory[1], 0)
        self.assertEqual(sim.memory[2], 0)
        self.assertEqual(sim.memory[3], 1)

    def test_inc_data_tricky_2(self):
        source = c.inc_data_tricky(3, 1)
        sim = BfSim(source)
        sim.memory[3] = 10
        sim.run(10000)
        self.assertEqual(sim.memory[1], 0)
        self.assertEqual(sim.memory[2], 0)
        self.assertEqual(sim.memory[3], 11)

    def test_inc_data_tricky_3(self):
        source = c.inc_data_tricky(3, 1)
        sim = BfSim(source)
        sim.memory[3] = 255
        sim.run(10000)
        self.assertEqual(sim.memory[1], 0)
        self.assertEqual(sim.memory[2], 0)
        self.assertEqual(sim.memory[3], 0)

    def test_inc_data_tricky_4(self):
        source = c.inc_data_tricky(3, 2)
        sim = BfSim(source)
        sim.memory[3] = 255
        sim.run(10000)
        self.assertEqual(sim.memory[1], 0)
        self.assertEqual(sim.memory[2], 1)
        self.assertEqual(sim.memory[3], 0)

    def test_inc_data_tricky_5(self):
        source = c.inc_data_tricky(3, 3)
        sim = BfSim(source)
        sim.memory[2] = 255
        sim.memory[3] = 255
        sim.run(10000)
        self.assertEqual(sim.memory[1], 1)
        self.assertEqual(sim.memory[2], 0)
        self.assertEqual(sim.memory[3], 0)

    def test_dec_data_tricky_1(self):
        source = c.dec_data_tricky(3, 1)
        sim = BfSim(source)
        sim.memory[3] = 0
        sim.run(10000)
        self.assertEqual(sim.memory[1], 0)
        self.assertEqual(sim.memory[2], 0)
        self.assertEqual(sim.memory[3], 255)

    def test_dec_data_tricky_2(self):
        source = c.dec_data_tricky(3, 1)
        sim = BfSim(source)
        sim.memory[3] = 10
        sim.run(10000)
        self.assertEqual(sim.memory[1], 0)
        self.assertEqual(sim.memory[2], 0)
        self.assertEqual(sim.memory[3], 9)

    def test_dec_data_tricky_3(self):
        source = c.dec_data_tricky(3, 1)
        sim = BfSim(source)
        sim.memory[2] = 1
        sim.memory[3] = 0
        sim.run(10000)
        self.assertEqual(sim.memory[1], 0)
        self.assertEqual(sim.memory[2], 1)
        self.assertEqual(sim.memory[3], 255)

    def test_dec_data_tricky_4(self):
        source = c.dec_data_tricky(3, 2)
        sim = BfSim(source)
        sim.memory[2] = 1
        sim.memory[3] = 0
        sim.run(10000)
        self.assertEqual(sim.memory[1], 0)
        self.assertEqual(sim.memory[2], 0)
        self.assertEqual(sim.memory[3], 255)

    def test_dec_data_tricky_5(self):
        source = c.dec_data_tricky(3, 3)
        sim = BfSim(source)
        sim.memory[1] = 1
        sim.memory[2] = 0
        sim.memory[3] = 0
        sim.run(10000)
        self.assertEqual(sim.memory[1], 0)
        self.assertEqual(sim.memory[2], 255)
        self.assertEqual(sim.memory[3], 255)

    def test_add_data_tricky_1(self):
        source = c.add_data_tricky(source=2, pos=4, work=7, digit=2)
        sim = BfSim(source)
        sim.memory[2] = 129
        sim.memory[3] = 5
        sim.memory[4] = 129
        sim.run(10000)
        self.assertEqual(sim.memory[2], 129)
        self.assertEqual(sim.memory[3], 6)
        self.assertEqual(sim.memory[4], 2)

    def test_multi_data_tricky_1(self):
        source = c.multi_data_tricky(
            source1=1, source2=3, pos=5, digit=2)
        sim = BfSim(source)
        sim.memory[1] = 100
        sim.memory[3] = 100
        sim.memory[4] = 0
        sim.memory[5] = 0
        while not sim.is_stopped():
            sim.run(10000)
        self.assertEqual(sim.memory[4], 39)
        self.assertEqual(sim.memory[5], 16)
if __name__ == '__main__':
    unittest.main()
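# Illustrative snippet (not part of the test module): a minimal sketch of
# driving the simulator by hand, using only the BfSim calls exercised in the
# tests above; the chosen macro and cell values are arbitrary.
#
#   source = c.move_data(1, 2)      # brainfuck code: move cell 1 into cell 2
#   sim = BfSim(source)
#   sim.memory[1] = 42
#   while not sim.is_stopped():
#       sim.run(10000)              # execute up to 10000 steps per call
#   assert sim.memory[1] == 0 and sim.memory[2] == 42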
| 30.39313 | 68 | 0.575788 | 1,197 | 7,963 | 3.684211 | 0.055138 | 0.218367 | 0.277551 | 0.370068 | 0.902721 | 0.862812 | 0.814512 | 0.772336 | 0.754875 | 0.730385 | 0 | 0.087729 | 0.285696 | 7,963 | 261 | 69 | 30.509579 | 0.687588 | 0.00226 | 0 | 0.687225 | 0 | 0 | 0.001007 | 0 | 0 | 0 | 0 | 0 | 0.299559 | 1 | 0.123348 | false | 0 | 0.013216 | 0 | 0.140969 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 9 |
c139c6f59c559ced22dc5f066cfc7503a5778a7a | 282 | py | Python | microbenchmarks/dict_globals_ubench.py | aisk/pyston | ac69cfef0621dbc8901175e84fa2b5cb5781a646 | [
"BSD-2-Clause",
"Apache-2.0"
] | 1 | 2020-02-06T14:28:45.000Z | 2020-02-06T14:28:45.000Z | microbenchmarks/dict_globals_ubench.py | aisk/pyston | ac69cfef0621dbc8901175e84fa2b5cb5781a646 | [
"BSD-2-Clause",
"Apache-2.0"
] | null | null | null | microbenchmarks/dict_globals_ubench.py | aisk/pyston | ac69cfef0621dbc8901175e84fa2b5cb5781a646 | [
"BSD-2-Clause",
"Apache-2.0"
] | 1 | 2020-02-06T14:29:00.000Z | 2020-02-06T14:29:00.000Z | d = dict(x=1, y=0)
exec """
def g():
    global y
    y += x
""" in d

def f():
    g = d['g']
    for i in xrange(1000000):
        g()
        g()
        g()
        g()
        g()
        g()
        g()
        g()
        g()
        g()
        d['y'] += i
f()
print d['y']
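# Illustrative note (not part of the benchmark): the hot loop above stresses
# reads and writes of globals owned by code exec'd into a plain dict used as
# its globals. A minimal, hypothetical way to time a run by hand:
#
#   import time
#   start = time.time()
#   f()
#   print "elapsed:", time.time() - start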
| 11.75 | 29 | 0.269504 | 42 | 282 | 1.809524 | 0.404762 | 0.236842 | 0.315789 | 0.368421 | 0.131579 | 0.131579 | 0.131579 | 0.131579 | 0.131579 | 0 | 0 | 0.066176 | 0.517731 | 282 | 23 | 30 | 12.26087 | 0.492647 | 0 | 0 | 0.454545 | 0 | 0 | 0.131206 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0.045455 | 0 | 0 | 1 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
c17a1a54f41ec62e613e07e583cca1132c4dd496 | 185,416 | py | Python | pirates/leveleditor/worldData/ArtPrototypeLite.py | itsyaboyrocket/pirates | 6ca1e7d571c670b0d976f65e608235707b5737e3 | [
"BSD-3-Clause"
] | 3 | 2021-02-25T06:38:13.000Z | 2022-03-22T07:00:15.000Z | pirates/leveleditor/worldData/ArtPrototypeLite.py | itsyaboyrocket/pirates | 6ca1e7d571c670b0d976f65e608235707b5737e3 | [
"BSD-3-Clause"
] | null | null | null | pirates/leveleditor/worldData/ArtPrototypeLite.py | itsyaboyrocket/pirates | 6ca1e7d571c670b0d976f65e608235707b5737e3 | [
"BSD-3-Clause"
] | 1 | 2021-02-25T06:38:17.000Z | 2021-02-25T06:38:17.000Z | # uncompyle6 version 3.2.0
# Python bytecode 2.4 (62061)
# Decompiled from: Python 2.7.14 (v2.7.14:84471935ed, Sep 16 2017, 20:19:30) [MSC v.1500 32 bit (Intel)]
# Embedded file name: pirates.leveleditor.worldData.ArtPrototypeLite
from pandac.PandaModules import Point3, VBase3
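# Illustrative note (not part of the decompiled file): objectStruct below is
# level-editor data, a nested dict keyed by object ids of the form
# "<timestamp><author>". Each entry records a 'Type' (Tree, Rock, Building
# Exterior, ...), placement via 'Hpr'/'Pos'/'Scale' (Point3/VBase3 values), and
# a 'Visual' model path; 'Locator Links' pairs locator-node ids with a link
# direction such as 'Bi-directional'.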
objectStruct = {'Locator Links': [['1142290985.8sdnaik', '1142291141.66sdnaik', 'Bi-directional'], ['1142291141.64sdnaik', '1142037113.09dxschafe', 'Bi-directional'], ['1142290985.73sdnaik', '1142291275.17sdnaik', 'Bi-directional'], ['1142291275.16sdnaik', '1142037113.06dxschafe', 'Bi-directional']], 'Objects': {'1135280776.06dzlu': {'Type': 'Island', 'Name': 'ArtPrototype', 'File': '', 'Objects': {'1136404579.56dzlu': {'Type': 'Tree', 'Hpr': VBase3(2.663, -0.096, 0.553), 'Pos': Point3(403.816, 350.182, 54.6), 'Scale': VBase3(1.089, 1.089, 1.089), 'Visual': {'Model': 'models/vegetation/gen_tree_b'}}, '1136404859.58dzlu': {'Type': 'Building Exterior', 'Name': '', 'File': '', 'Hpr': VBase3(34.746, 0.0, 0.0), 'Objects': {'1136404859.58dzlu0': {'Type': 'Locator Node', 'Name': 'portal_exterior_1', 'Hpr': VBase3(90.0, 0.0, 0.0), 'Pos': Point3(-0.365, -5.213, 0.955), 'Scale': VBase3(1.0, 1.0, 1.0)}}, 'Pos': Point3(119.413, 279.676, 68.879), 'Scale': VBase3(1.0, 1.0, 1.0), 'Visual': {'Model': 'models/buildings/shanty_tavern_exterior'}}, '1136406067.58dzlu': {'Type': 'Tree', 'Hpr': VBase3(-12.942, 8.128, 0.0), 'Pos': Point3(200.423, 159.059, 62.4), 'Scale': VBase3(1.0, 1.0, 1.0), 'Visual': {'Model': 'models/vegetation/gen_tree_a'}}, '1136406102.08dzlu': {'Type': 'Tree', 'Hpr': VBase3(-12.942, 8.128, 0.0), 'Pos': Point3(215.391, 154.899, 59.549), 'Scale': VBase3(1.0, 1.0, 1.0), 'Visual': {'Model': 'models/vegetation/gen_tree_d'}}, '1136406300.36dzlu': {'Type': 'Tree', 'Hpr': VBase3(0.0, 0.0, 1.721), 'Pos': Point3(59.236, 208.391, 82.052), 'Scale': VBase3(0.714, 0.714, 0.714), 'Visual': {'Model': 'models/vegetation/fern_tree_c'}}, '1136406305.84dzlu': {'Type': 'Tree', 'Hpr': VBase3(-23.829, 0.688, 1.364), 'Pos': Point3(150.04, 317.674, 79.666), 'Scale': VBase3(1.0, 1.0, 1.0), 'Visual': {'Model': 'models/vegetation/gen_tree_a'}}, '1136406445.48dzlu': {'Type': 'Tree', 'Hpr': VBase3(166.825, -5.383, 0.0), 'Pos': Point3(93.676, 264.385, 68.748), 'Scale': VBase3(0.683, 0.683, 0.683), 'Visual': {'Model': 'models/vegetation/fern_tree_e'}}, '1136406479.8dzlu': {'Type': 'Tree', 'Hpr': VBase3(104.938, 0.0, 0.0), 'Pos': Point3(247.154, 246.016, 65.624), 'Scale': VBase3(0.534, 0.534, 0.534), 'Visual': {'Model': 'models/vegetation/fern_tree_b'}}, '1136406533.92dzlu': {'Type': 'Tree', 'Hpr': VBase3(-26.442, 0.0, 0.0), 'Pos': Point3(200.046, 297.437, 71.405), 'Scale': VBase3(0.658, 0.658, 0.658), 'Visual': {'Model': 'models/vegetation/gen_tree_b'}}, '1136406575.75dzlu': {'Type': 'Bush', 'Hpr': VBase3(20.412, 0.0, 0.0), 'Pos': Point3(131.267, 295.128, 68.431), 'Scale': VBase3(0.574, 0.574, 0.574), 'Visual': {'Model': 'models/vegetation/bush_a'}}, '1136414754.64dzlu': {'Type': 'Barrel', 'Color': (0.75, 1.0, 0.8500000238418579, 1.0), 'Hpr': Point3(0.0, 0.0, 0.0), 'Pos': Point3(147.939, 295.403, 69.313), 'Scale': VBase3(0.735, 0.735, 0.735), 'Visual': {'Model': 'models/props/barrel'}}, '1136415300.0dzlu': {'Type': 'Building Exterior', 'Name': '', 'File': '', 'Hpr': VBase3(0.0, 0.0, 2.81), 'Objects': {'1136415300.02dzlu': {'Type': 'Locator Node', 'Name': 'portal_exterior_1', 'Hpr': VBase3(90.0, 0.0, 0.0), 'Pos': Point3(3.987, -20.033, 0.0), 'Scale': VBase3(1.0, 1.0, 1.0)}, '1137810131.5dxschafe': {'Type': 'Rock', 'Hpr': VBase3(-129.852, 1.887, 4.019), 'Pos': Point3(-0.221, -10.515, -4.844), 'Scale': VBase3(1.864, 1.864, 1.864), 'Visual': {'Model': 'models/props/zz_dont_use_rocks_Dk_group_3F'}}}, 'Pos': Point3(160.199, 307.685, 69.747), 'Scale': VBase3(1.0, 1.0, 1.0), 'Visual': {'Model': 
'models/buildings/shanty_npc_house_combo_H'}}, '1136416540.86dzlu': {'Type': 'Rock', 'Hpr': VBase3(32.191, -6.354, 2.169), 'Pos': Point3(91.467, 257.4, 69.194), 'Scale': VBase3(1.875, 1.875, 1.875), 'Visual': {'Model': 'models/props/zz_dont_use_rock_Dk_1F'}}, '1136420641.61dzlu': {'Type': 'Tree', 'Hpr': Point3(0.0, 0.0, 0.0), 'Pos': Point3(400.973, 288.917, 40.433), 'Scale': VBase3(1.207, 1.207, 1.207), 'Visual': {'Model': 'models/vegetation/gen_tree_a'}}, '1136420648.19dzlu': {'Type': 'Tree', 'Hpr': Point3(0.0, 0.0, 0.0), 'Pos': Point3(388.322, 238.284, 40.315), 'Scale': VBase3(1.207, 1.207, 1.207), 'Visual': {'Model': 'models/vegetation/gen_tree_a'}}, '1136420854.45dzlu': {'Type': 'Tree', 'Hpr': Point3(0.0, 0.0, 0.0), 'Pos': Point3(67.719, 250.038, 92.994), 'Scale': VBase3(1.0, 1.0, 1.0), 'Visual': {'Model': 'models/vegetation/gen_tree_b'}}, '1136420904.83dzlu': {'Type': 'Tree', 'Hpr': Point3(0.0, 0.0, 0.0), 'Pos': Point3(85.396, 296.367, 91.405), 'Scale': VBase3(0.963, 0.963, 0.963), 'Visual': {'Model': 'models/vegetation/gen_tree_b'}}, '1136420928.28dzlu': {'Type': 'Tree', 'Hpr': Point3(0.0, 0.0, 0.0), 'Pos': Point3(107.657, 339.434, 93.724), 'Scale': VBase3(0.876, 0.876, 0.876), 'Visual': {'Model': 'models/vegetation/gen_tree_b'}}, '1136421203.13dzlu': {'Type': 'Bush', 'Hpr': VBase3(56.01, 0.0, 0.0), 'Pos': Point3(75.221, 241.675, 74.58), 'Scale': VBase3(1.0, 1.0, 1.0), 'Visual': {'Model': 'models/vegetation/bush_a'}}, '1136421219.05dzlu': {'Type': 'Bush', 'Hpr': VBase3(56.01, 12.592, 0.0), 'Pos': Point3(78.854, 259.047, 72.827), 'Scale': VBase3(1.0, 1.0, 1.0), 'Visual': {'Model': 'models/vegetation/bush_a'}}, '1136421438.67dzlu': {'Type': 'Tree', 'Hpr': VBase3(-25.704, -2.14, 0.0), 'Pos': Point3(179.517, 307.771, 64.407), 'Scale': VBase3(0.802, 0.802, 0.802), 'Visual': {'Model': 'models/vegetation/fern_tree_b'}}, '1136421473.47dzlu': {'Type': 'Tree', 'Hpr': VBase3(-26.442, 0.0, 0.0), 'Pos': Point3(218.475, 280.826, 72.573), 'Scale': VBase3(0.658, 0.658, 0.658), 'Visual': {'Model': 'models/vegetation/gen_tree_b'}}, '1136422899.75dzlu': {'Type': 'Tree', 'Hpr': Point3(0.0, 0.0, 0.0), 'Pos': Point3(-318.086, -17.05, 58.843), 'Scale': VBase3(1.0, 1.0, 1.0), 'Visual': {'Model': 'models/vegetation/gen_tree_c'}}, '1136422912.8dzlu': {'Type': 'Tree', 'Hpr': Point3(0.0, 0.0, 0.0), 'Pos': Point3(-333.065, 13.689, 61.986), 'Scale': VBase3(0.931, 0.931, 0.931), 'Visual': {'Model': 'models/vegetation/gen_tree_b'}}, '1136422957.22dzlu': {'Type': 'Tree', 'Hpr': Point3(0.0, 0.0, 0.0), 'Pos': Point3(-329.244, 84.973, 73.936), 'Scale': VBase3(1.0, 1.0, 1.0), 'Visual': {'Model': 'models/vegetation/gen_tree_c'}}, '1136422976.59dzlu': {'Type': 'Tree', 'Hpr': Point3(0.0, 0.0, 0.0), 'Pos': Point3(-293.534, 114.608, 70.311), 'Scale': VBase3(0.716, 0.716, 0.716), 'Visual': {'Model': 'models/vegetation/gen_tree_c'}}, '1136422998.09dzlu': {'Type': 'Tree', 'Hpr': Point3(0.0, 0.0, 0.0), 'Pos': Point3(-325.465, 53.237, 71.877), 'Scale': VBase3(0.871, 0.871, 0.871), 'Visual': {'Model': 'models/vegetation/gen_tree_a'}}, '1136423079.63dzlu': {'Type': 'Tree', 'Hpr': Point3(0.0, 0.0, 0.0), 'Pos': Point3(-246.916, 129.769, 68.805), 'Scale': VBase3(1.021, 1.021, 1.021), 'Visual': {'Model': 'models/vegetation/gen_tree_a'}}, '1136423080.58dzlu': {'Type': 'Tree', 'Hpr': Point3(0.0, 0.0, 0.0), 'Pos': Point3(-230.297, 108.779, 67.016), 'Scale': VBase3(1.021, 1.021, 1.021), 'Visual': {'Model': 'models/vegetation/gen_tree_a'}}, '1136423081.72dzlu': {'Type': 'Tree', 'Hpr': Point3(0.0, 0.0, 0.0), 'Pos': Point3(-221.659, 121.591, 
65.577), 'Scale': VBase3(1.021, 1.021, 1.021), 'Visual': {'Model': 'models/vegetation/gen_tree_a'}}, '1136423082.23dzlu': {'Type': 'Tree', 'Hpr': VBase3(0.0, 0.0, 5.526), 'Pos': Point3(-201.426, 125.418, 58.743), 'Scale': VBase3(1.353, 1.353, 1.353), 'Visual': {'Model': 'models/vegetation/gen_tree_b'}}, '1136423092.95dzlu': {'Type': 'Tree', 'Hpr': Point3(0.0, 0.0, 0.0), 'Pos': Point3(-140.798, 238.442, 67.9), 'Scale': VBase3(1.021, 1.021, 1.021), 'Visual': {'Model': 'models/vegetation/gen_tree_a'}}, '1136423193.16dzlu': {'Type': 'Tree', 'Hpr': Point3(0.0, 0.0, 0.0), 'Pos': Point3(-142.348, 220.086, 59.771), 'Scale': VBase3(0.802, 0.802, 0.802), 'Visual': {'Model': 'models/vegetation/gen_tree_d'}}, '1136423237.13dzlu': {'Type': 'Rock', 'Hpr': VBase3(61.841, 0.0, 0.0), 'Objects': {}, 'Pos': Point3(-90.148, 171.2, 65.438), 'Scale': VBase3(3.74, 3.74, 3.74), 'Visual': {'Model': 'models/props/zz_dont_use_rocks_Dk_group_1F'}}, '1136423297.98dzlu': {'Type': 'Rock', 'Hpr': VBase3(123.683, -8.334, 0.0), 'Pos': Point3(-165.915, 110.31, 57.844), 'Scale': VBase3(3.596, 3.596, 3.596), 'Visual': {'Model': 'models/props/zz_dont_use_rocks_Dk_group_2F'}}, '1136423371.34dzlu': {'Type': 'Rock', 'Hpr': VBase3(-46.859, 0.0, 0.0), 'Pos': Point3(-98.134, 151.375, 65.723), 'Scale': VBase3(2.538, 2.538, 2.538), 'Visual': {'Model': 'models/props/zz_dont_use_rock_Dk_1F'}}, '1136423530.52dzlu': {'Type': 'Bush', 'Hpr': Point3(0.0, 0.0, 0.0), 'Pos': Point3(-90.227, 156.301, 65.94), 'Scale': VBase3(1.077, 1.077, 1.077), 'Visual': {'Model': 'models/vegetation/bush_a'}}, '1136423563.91dzlu': {'Type': 'Bush', 'Hpr': Point3(0.0, 0.0, 0.0), 'Pos': Point3(-81.552, 183.878, 64.95), 'Scale': VBase3(1.077, 1.077, 1.077), 'Visual': {'Model': 'models/vegetation/bush_a'}}, '1136423575.09dzlu': {'Type': 'Bush', 'Hpr': Point3(0.0, 0.0, 0.0), 'Pos': Point3(-110.862, 194.519, 63.947), 'Scale': VBase3(1.235, 1.235, 1.235), 'Visual': {'Model': 'models/vegetation/bush_a'}}, '1136423697.94dzlu': {'Type': 'Building Exterior', 'Name': '', 'File': '', 'Hpr': VBase3(35.683, 0.0, 0.0), 'Objects': {'1136423697.94dzlu0': {'Type': 'Locator Node', 'Name': 'portal_exterior_1', 'Hpr': VBase3(90.0, 0.0, 0.0), 'Pos': Point3(3.987, -20.033, 0.0), 'Scale': VBase3(1.0, 1.0, 1.0)}}, 'Pos': Point3(-67.761, 178.948, 65.316), 'Scale': VBase3(1.0, 1.0, 1.0), 'Visual': {'Model': 'models/buildings/shanty_npc_house_a_exterior'}}, '1136423796.89dzlu': {'Type': 'Building Exterior', 'Name': '', 'File': '', 'Hpr': VBase3(-50.773, 0.0, 0.0), 'Objects': {'1136423697.94dzlu0': {'Type': 'Locator Node', 'Name': 'portal_exterior_1', 'Hpr': VBase3(90.0, 0.0, 0.0), 'Pos': Point3(3.987, -20.033, 0.0), 'Scale': VBase3(1.0, 1.0, 1.0)}}, 'Pos': Point3(-35.105, 186.473, 65.452), 'Scale': VBase3(1.0, 1.0, 1.0), 'Visual': {'Model': 'models/buildings/shanty_npc_house_combo_H'}}, '1136423964.42dzlu': {'Type': 'Tree', 'Hpr': Point3(0.0, 0.0, 0.0), 'Pos': Point3(-65.731, 207.031, 63.756), 'Scale': VBase3(0.814, 0.814, 0.814), 'Visual': {'Model': 'models/vegetation/fern_tree_d'}}, '1136424011.34dzlu': {'Type': 'Tree', 'Hpr': Point3(0.0, 0.0, 0.0), 'Pos': Point3(-104.128, 227.307, 66.245), 'Scale': VBase3(1.0, 1.0, 1.0), 'Visual': {'Model': 'models/vegetation/gen_tree_b'}}, '1136424157.73dzlu': {'Type': 'Rock', 'Hpr': VBase3(-36.215, 3.77, 12.806), 'Objects': {}, 'Pos': Point3(-223.368, 67.591, 56.016), 'Scale': VBase3(5.733, 5.733, 5.733), 'Visual': {'Model': 'models/props/zz_dont_use_rocks_Dk_group_1F'}}, '1136424231.09dzlu': {'Type': 'Rock', 'Hpr': VBase3(-124.586, 4.107, -1.112), 
'Pos': Point3(-128.432, 208.129, 61.014), 'Scale': VBase3(3.736, 3.736, 3.736), 'Visual': {'Model': 'models/props/zz_dont_use_rocks_Dk_group_1F'}}, '1136424617.58dzlu': {'Type': 'Bush', 'Hpr': VBase3(162.321, -6.017, 0.0), 'Pos': Point3(81.796, -32.658, 5.647), 'Scale': VBase3(0.795, 0.795, 0.795), 'Visual': {'Model': 'models/vegetation/bush_d'}}, '1136424781.72dzlu': {'Type': 'Rock', 'Hpr': VBase3(101.64, -5.044, 0.0), 'Pos': Point3(-125.18, 142.052, 62.164), 'Scale': VBase3(6.054, 6.054, 6.054), 'Visual': {'Model': 'models/props/zz_dont_use_rocks_Dk_group_1F'}}, '1136425088.27dzlu': {'Type': 'Rock', 'Hpr': VBase3(101.64, 0.0, -1.409), 'Pos': Point3(-285.726, -23.079, 42.889), 'Scale': VBase3(6.054, 6.054, 6.054), 'Visual': {'Model': 'models/props/zz_dont_use_rocks_Dk_group_1F'}}, '1136425095.08dzlu': {'Type': 'Rock', 'Hpr': VBase3(137.302, 0.0, -4.231), 'Pos': Point3(-257.672, 20.99, 49.597), 'Scale': VBase3(5.33, 5.33, 5.33), 'Visual': {'Model': 'models/props/zz_dont_use_rocks_Dk_group_3F'}}, '1136425234.44dzlu': {'Type': 'Rock', 'Hpr': VBase3(-46.226, -2.464, 14.288), 'Pos': Point3(-298.578, -8.822, 51.166), 'Scale': VBase3(5.796, 5.796, 5.796), 'Visual': {'Model': 'models/props/zz_dont_use_rocks_Dk_group_3F'}}, '1136426351.31dzlu': {'Type': 'Bush', 'Hpr': VBase3(102.186, 0.0, -4.988), 'Pos': Point3(-202.537, 90.08, 58.885), 'Scale': VBase3(1.0, 1.0, 1.0), 'Visual': {'Model': 'models/vegetation/bush_d'}}, '1136426380.89dzlu': {'Type': 'Bush', 'Hpr': VBase3(-39.706, 0.0, 0.0), 'Pos': Point3(-187.495, 109.671, 59.547), 'Scale': VBase3(1.0, 1.0, 1.0), 'Visual': {'Model': 'models/vegetation/bush_d'}}, '1136426422.47dzlu': {'Type': 'Bush', 'Hpr': VBase3(-39.706, 0.0, 0.0), 'Pos': Point3(-182.428, 102.934, 59.167), 'Scale': VBase3(1.447, 1.447, 1.447), 'Visual': {'Model': 'models/vegetation/bush_b'}}, '1136426725.14dzlu': {'Type': 'Bush', 'Hpr': Point3(0.0, 0.0, 0.0), 'Pos': Point3(-295.177, -50.724, 43.351), 'Scale': VBase3(0.643, 0.643, 0.643), 'Visual': {'Model': 'models/vegetation/bush_a'}}, '1136426742.25dzlu': {'Type': 'Bush', 'Hpr': VBase3(-14.234, 5.769, 5.441), 'Pos': Point3(-262.793, -60.813, 42.423), 'Scale': VBase3(0.721, 0.721, 0.721), 'Visual': {'Model': 'models/vegetation/bush_d'}}, '1137608568.63dxschafe': {'Type': 'Building Exterior', 'Name': '', 'File': '', 'Hpr': VBase3(-138.392, 2.568, 0.243), 'Objects': {'1137608568.63dxschafe0': {'Type': 'Locator Node', 'Name': 'portal_exterior_1', 'Hpr': VBase3(90.0, 0.0, 0.0), 'Pos': Point3(3.987, -20.033, 0.0), 'Scale': VBase3(1.0, 1.0, 1.0)}}, 'Pos': Point3(621.715, -126.079, 4.178), 'Scale': VBase3(1.0, 1.0, 1.0), 'Visual': {'Model': 'models/buildings/shanty_repairshop_exterior'}}, '1137608982.7dxschafe': {'Type': 'Crate', 'Color': (0.6000000238418579, 0.6000000238418579, 0.6000000238418579, 1.0), 'Hpr': VBase3(-21.492, 0.0, 0.0), 'Objects': {}, 'Pos': Point3(403.256, -76.642, 5.398), 'Scale': VBase3(0.658, 0.658, 0.658), 'Visual': {'Model': 'models/props/crates_group_1'}}, '1137609219.8dxschafe': {'Type': 'Barrel', 'Hpr': VBase3(55.719, 0.096, -0.14), 'Pos': Point3(581.564, -149.437, 4.362), 'Scale': VBase3(0.617, 0.617, 0.617), 'Visual': {'Model': 'models/props/barrel_worn'}}, '1137609321.42dxschafe': {'Type': 'Barrel', 'Color': (0.6000000238418579, 0.6000000238418579, 0.6000000238418579, 1.0), 'Hpr': Point3(0.0, 0.0, 0.0), 'Pos': Point3(399.365, -125.397, 5.239), 'Scale': VBase3(0.464, 0.464, 0.464), 'Visual': {'Model': 'models/props/barrel_grey'}}, '1137609327.59dxschafe': {'Type': 'Barrel', 'Color': (0.800000011920929, 
0.800000011920929, 0.800000011920929, 1.0), 'Hpr': Point3(0.0, 0.0, 0.0), 'Pos': Point3(406.372, -144.289, 5.244), 'Scale': VBase3(0.464, 0.464, 0.464), 'Visual': {'Model': 'models/props/barrel_worn'}}, '1137609343.8dxschafe': {'Type': 'Barrel', 'Color': (0.8999999761581421, 0.8999999761581421, 0.699999988079071, 1.0), 'Hpr': Point3(0.0, 0.0, 0.0), 'Pos': Point3(404.221, -146.105, 5.231), 'Scale': VBase3(0.464, 0.464, 0.464), 'Visual': {'Model': 'models/props/barrel'}}, '1137609393.08dxschafe': {'Type': 'Barrel', 'Color': (0.800000011920929, 0.8999999761581421, 0.699999988079071, 1.0), 'Hpr': Point3(0.0, 0.0, 0.0), 'Pos': Point3(405.148, -84.434, 5.167), 'Scale': VBase3(0.464, 0.464, 0.464), 'Visual': {'Model': 'models/props/barrel_worn'}}, '1137610057.31dxschafe': {'Type': 'Rock', 'Hpr': VBase3(23.741, 1.104, 1.461), 'Objects': {}, 'Pos': Point3(526.022, -77.513, -11.908), 'Scale': VBase3(10.056, 10.056, 10.056), 'Visual': {'Model': 'models/props/zz_dont_use_rock_Dk_3F'}}, '1137610228.36dxschafe': {'Type': 'Rock', 'Hpr': VBase3(0.0, 0.0, 0.0), 'Objects': {}, 'Pos': Point3(525.445, 43.801, 24.286), 'Scale': VBase3(3.035, 3.035, 3.035), 'Visual': {'Model': 'models/props/zz_dont_use_rock_Dk_1F'}}, '1137610308.83dxschafe': {'Type': 'Rock', 'Hpr': Point3(0.0, 0.0, 0.0), 'Objects': {'1137610374.89dxschafe': {'Type': 'Rock', 'Hpr': VBase3(77.469, 0.0, -7.6), 'Pos': Point3(3.193, -2.581, 0.927), 'Scale': VBase3(0.661, 0.661, 0.661), 'Visual': {'Model': 'models/props/zz_dont_use_rock_Dk_1F'}}, '1138062673.06dxschafe': {'Type': 'Bush', 'Hpr': VBase3(0.0, 15.413, 0.0), 'Pos': Point3(-0.364, -3.105, 0.879), 'Scale': VBase3(0.01, 0.01, 0.01), 'Visual': {'Model': 'models/vegetation/bush_a'}}, '1138062673.09dxschafe': {'Type': 'Bush', 'Hpr': VBase3(0.0, 15.413, 0.0), 'Pos': Point3(-0.364, -3.105, 0.879), 'Scale': VBase3(0.01, 0.01, 0.01), 'Visual': {'Model': 'models/vegetation/bush_a'}}}, 'Pos': Point3(491.332, 57.371, 29.914), 'Scale': VBase3(4.751, 4.751, 4.751), 'Visual': {'Model': 'models/props/zz_dont_use_rock_Dk_1F'}}, '1137610539.64dxschafe': {'Type': 'Rock', 'Hpr': VBase3(0.0, 41.769, 0.0), 'Objects': {}, 'Pos': Point3(643.628, 53.824, 31.761), 'Scale': VBase3(5.934, 5.934, 5.934), 'Visual': {'Model': 'models/props/zz_dont_use_rock_Dk_4F'}}, '1137611262.78dxschafe': {'Type': 'Tree - Animated', 'Hpr': VBase3(-50.23, 2.081, 20.443), 'Pos': Point3(645.527, 57.887, 35.767), 'Scale': VBase3(1.0, 1.0, 1.0), 'SubObjs': {'Top Model': {'Visual': {'Animate': 'models/vegetation/palm_leaf_a_idle', 'Attach': ['trunk', 'def_trunk_attach'], 'Color': (0.817, 0.932, 0.7942, 1.0), 'Model': 'models/vegetation/palm_leaf_a_hi', 'PartName': 'leaf', 'Scale': VBase3(1.12, 1.12, 1.12)}}}, 'Visual': {'Animate': 'models/vegetation/palm_trunk_a_idle', 'Color': (0.79, 1.0, 0.97, 1.0), 'Model': 'models/vegetation/palm_trunk_a_hi', 'PartName': 'trunk'}}, '1137611361.14dxschafe': {'Type': 'Tree - Animated', 'Color': (0.699999988079071, 0.699999988079071, 0.699999988079071, 1.0), 'Hpr': VBase3(82.113, 0.0, 0.0), 'Pos': Point3(632.238, -48.605, 2.937), 'Scale': VBase3(1.644, 1.644, 1.644), 'SubObjs': {'Top Model': {'Visual': {'Animate': 'models/vegetation/palm_leaf_a_idle', 'Attach': ['trunk', 'def_trunk_attach'], 'Color': (1.0, 1.0, 0.972, 1.0), 'Model': 'models/vegetation/palm_leaf_c_hi', 'PartName': 'leaf', 'Scale': VBase3(1.63, 1.63, 1.63)}}}, 'Visual': {'Animate': 'models/vegetation/palm_trunk_a_idle', 'Color': (0.972, 0.7942, 0.932, 1.0), 'Model': 'models/vegetation/palm_trunk_a_hi', 'PartName': 'trunk'}}, 
'1137611475.77dxschafe': {'Type': 'Tree - Animated', 'Animate': 'models/vegetation/palm_trunk_a_idle', 'Hpr': VBase3(-102.01, -0.356, 11.354), 'Pos': Point3(535.48, -89.642, -7.614), 'Scale': VBase3(1.829, 1.829, 1.829), 'SubObjs': {'Top Model': {'Visual': {'Animate': 'models/vegetation/palm_leaf_a_idle', 'Attach': ['trunk', 'def_trunk_attach'], 'Color': (0.817, 0.724, 0.724, 1.0), 'Model': 'models/vegetation/palm_leaf_c_hi', 'PartName': 'leaf', 'Scale': VBase3(0.842, 0.842, 0.842)}}}, 'Visual': {'Animate': 'models/vegetation/palm_trunk_a_idle', 'Color': (0.882, 0.932, 0.932, 1.0), 'Model': 'models/vegetation/palm_trunk_a_hi', 'PartName': 'trunk'}}, '1137611477.19dxschafe': {'Type': 'Tree - Animated', 'Hpr': Point3(0.0, 0.0, 0.0), 'Pos': Point3(544.567, -87.541, -9.29), 'Scale': VBase3(2.222, 2.222, 2.222), 'SubObjs': {'Top Model': {'Visual': {'Animate': 'models/vegetation/palm_leaf_a_idle', 'Attach': ['trunk', 'def_trunk_attach'], 'Color': (0.817, 0.932, 0.932, 1.0), 'Model': 'models/vegetation/palm_leaf_a_hi', 'PartName': 'leaf', 'Scale': VBase3(1.385, 1.385, 1.385)}}}, 'Visual': {'Animate': 'models/vegetation/palm_trunk_a_idle', 'Color': (0.932, 0.817, 0.882, 1.0), 'Model': 'models/vegetation/palm_trunk_a_hi', 'PartName': 'trunk'}}, '1137611840.97dxschafe': {'Type': 'Tree - Animated', 'Animate': 'models/vegetation/palm_trunk_a_idle', 'Hpr': VBase3(1.615, 0.0, 0.0), 'Pos': Point3(649.143, -139.364, 4.873), 'Scale': VBase3(1.093, 1.093, 1.093), 'SubObjs': {'Top Model': {'Visual': {'Animate': 'models/vegetation/palm_leaf_a_idle', 'Attach': ['trunk', 'def_trunk_attach'], 'Model': 'models/vegetation/palm_leaf_c_hi', 'PartName': 'leaf', 'Scale': VBase3(2.042, 2.042, 2.042)}}}, 'Visual': {'Animate': 'models/vegetation/palm_trunk_a_idle', 'Model': 'models/vegetation/palm_trunk_b_hi', 'PartName': 'trunk'}}, '1137612056.77dxschafe': {'Type': 'Rock', 'Hpr': VBase3(-29.542, 16.094, 0.0), 'Pos': Point3(432.851, 65.573, 33.884), 'Scale': VBase3(3.045, 3.045, 3.045), 'Visual': {'Model': 'models/props/zz_dont_use_rock_Dk_1F'}}, '1137612099.81dxschafe': {'Type': 'Rock', 'Hpr': VBase3(-5.272, 24.168, 4.871), 'Objects': {'1137612114.77dxschafe': {'Type': 'Rock', 'Hpr': VBase3(0.0, -4.834, 0.0), 'Pos': Point3(-2.909, -2.714, -1.867), 'Scale': VBase3(2.707, 2.707, 2.707), 'Visual': {'Model': 'models/props/zz_dont_use_rock_Dk_3F'}}}, 'Pos': Point3(254.264, 12.909, 6.101), 'Scale': VBase3(3.533, 3.533, 3.533), 'Visual': {'Model': 'models/props/zz_dont_use_rock_Dk_1F'}}, '1137612356.78dxschafe': {'Type': 'Tree - Animated', 'Animate': 'models/vegetation/palm_trunk_a_idle', 'Color': (0.8999999761581421, 0.8999999761581421, 0.8999999761581421, 1.0), 'Hpr': VBase3(-74.425, 8.936, 1.368), 'Pos': Point3(245.537, 17.033, 9.217), 'Scale': VBase3(1.263, 1.263, 1.263), 'SubObjs': {'Top Model': {'Visual': {'Animate': 'models/vegetation/palm_leaf_a_idle', 'Attach': ['trunk', 'def_trunk_attach'], 'Model': 'models/vegetation/palm_leaf_a_hi', 'PartName': 'leaf'}}}, 'Visual': {'Animate': 'models/vegetation/palm_trunk_a_idle', 'Model': 'models/vegetation/palm_trunk_a_hi', 'PartName': 'trunk'}}, '1137612363.58dxschafe': {'Type': 'Tree - Animated', 'Hpr': Point3(0.0, 0.0, 0.0), 'Pos': Point3(264.778, 22.222, 11.369), 'Scale': VBase3(1.0, 1.0, 1.0), 'SubObjs': {'Top Model': {'Visual': {'Animate': 'models/vegetation/palm_leaf_a_idle', 'Attach': ['trunk', 'def_trunk_attach'], 'Model': 'models/vegetation/palm_leaf_a_hi', 'PartName': 'leaf'}}}, 'Visual': {'Animate': 'models/vegetation/palm_trunk_a_idle', 'Model': 
'models/vegetation/palm_trunk_a_hi', 'PartName': 'trunk'}}, '1137612448.58dxschafe': {'Type': 'Tree', 'Hpr': Point3(0.0, 0.0, 0.0), 'Pos': Point3(604.551, -182.933, 5.329), 'Scale': VBase3(1.0, 1.0, 1.0), 'Visual': {'Model': 'models/vegetation/fern_tree_d'}}, '1137612699.33dxschafe': {'Type': 'Bush', 'Hpr': VBase3(-129.2, 0.0, 0.0), 'Pos': Point3(524.211, 49.18, 35.182), 'Scale': VBase3(1.0, 1.0, 1.0), 'Visual': {'Model': 'models/vegetation/bush_a'}}, '1137612891.16dxschafe': {'Type': 'Tree', 'Color': (0.699999988079071, 0.699999988079071, 0.699999988079071, 1.0), 'Hpr': VBase3(0.0, 3.468, 0.0), 'Pos': Point3(371.522, 91.049, 38.056), 'Scale': VBase3(1.0, 1.0, 1.0), 'Visual': {'Model': 'models/vegetation/fern_tree_a'}}, '1137612944.11dxschafe': {'Type': 'Tree', 'Hpr': Point3(0.0, 0.0, 0.0), 'Pos': Point3(516.995, 44.043, 32.282), 'Scale': VBase3(0.822, 0.822, 0.822), 'Visual': {'Model': 'models/vegetation/fern_tree_e'}}, '1137612995.44dxschafe': {'Type': 'Tree', 'Color': (0.800000011920929, 0.8999999761581421, 0.699999988079071, 1.0), 'Hpr': VBase3(165.452, 0.0, 0.0), 'Pos': Point3(459.505, 61.885, 35.936), 'Scale': VBase3(1.54, 1.54, 1.54), 'Visual': {'Model': 'models/vegetation/fern_tree_b'}}, '1137613150.16dxschafe': {'Type': 'Tree', 'Hpr': Point3(0.0, 0.0, 0.0), 'Pos': Point3(571.532, 61.203, 37.585), 'Scale': VBase3(1.0, 1.0, 1.0), 'Visual': {'Model': 'models/vegetation/fern_tree_a'}}, '1137613458.7dxschafe': {'Type': 'Bush', 'Hpr': VBase3(19.832, 12.649, -0.051), 'Pos': Point3(454.592, 11.246, 21.082), 'Scale': VBase3(1.384, 1.384, 1.384), 'Visual': {'Model': 'models/vegetation/bush_d'}}, '1137614544.45dxschafe': {'Type': 'Barrel', 'Color': (0.6000000238418579, 0.6000000238418579, 0.6000000238418579, 1.0), 'Hpr': Point3(0.0, 0.0, 0.0), 'Pos': Point3(203.91, -107.489, 4.759), 'Scale': VBase3(0.66, 0.66, 0.66), 'Visual': {'Model': 'models/props/barrel'}}, '1137696037.8dxschafe': {'Type': 'Rock', 'Hpr': VBase3(-7.797, 18.741, 23.082), 'Pos': Point3(653.408, 57.229, 27.642), 'Scale': VBase3(7.832, 7.832, 7.832), 'Visual': {'Model': 'models/props/zz_dont_use_rock_Dk_4F'}}, '1137786448.86dxschafe': {'Type': 'Crane', 'Hpr': VBase3(-2.93, 0.0, 0.0), 'Pos': Point3(453.079, -165.832, 5.283), 'Scale': VBase3(0.606, 0.606, 0.606), 'Visual': {'Model': 'models/props/Crane'}}, '1137786628.98dxschafe': {'Type': 'Cart', 'Hpr': VBase3(153.468, 0.0, 0.0), 'Pos': Point3(402.452, -135.299, 5.211), 'Scale': VBase3(1.0, 1.0, 1.0), 'Visual': {'Model': 'models/props/cart_reg'}}, '1137806131.59dxschafe': {'Type': 'Sack', 'Hpr': VBase3(0.0, -5.225, 8.539), 'Pos': Point3(108.311, 172.951, 78.377), 'Scale': VBase3(1.0, 1.0, 1.0), 'Visual': {'Model': 'models/props/Sack'}}, '1137806271.52dxschafe': {'Type': 'Crate', 'Hpr': VBase3(0.789, -5.314, 8.461), 'Objects': {}, 'Pos': Point3(114.819, 189.41, 75.414), 'Scale': VBase3(0.751, 0.751, 0.751), 'Visual': {'Model': 'models/props/crate'}}, '1137806572.22dxschafe': {'Type': 'LaundryRope', 'Hpr': VBase3(24.103, 0.0, 0.0), 'Pos': Point3(176.165, 151.12, 67.604), 'Scale': VBase3(1.0, 1.0, 1.0), 'Visual': {'Model': 'models/props/LaundryRope'}}, '1137807096.63dxschafe': {'Type': 'Tree', 'Hpr': Point3(0.0, 0.0, 0.0), 'Pos': Point3(271.29, 236.554, 60.473), 'Scale': VBase3(1.0, 1.0, 1.0), 'Visual': {'Model': 'models/vegetation/gen_tree_a'}}, '1137807403.13dxschafe': {'Type': 'Rock', 'Hpr': VBase3(-1.762, 37.072, 11.793), 'Pos': Point3(155.527, 137.374, 67.343), 'Scale': VBase3(1.396, 1.396, 1.396), 'Visual': {'Model': 'models/props/zz_dont_use_rock_Dk_1F'}}, 
'1137807467.02dxschafe': {'Type': 'Rock', 'Hpr': VBase3(0.0, 28.004, 0.0), 'Pos': Point3(152.086, 138.379, 68.776), 'Scale': VBase3(1.352, 1.352, 1.352), 'Visual': {'Model': 'models/props/zz_dont_use_rock_Dk_1F'}}, '1137807509.48dxschafe': {'Type': 'Rock', 'Hpr': VBase3(37.001, -2.419, -6.014), 'Pos': Point3(153.309, 139.437, 69.81), 'Scale': VBase3(1.668, 1.668, 1.668), 'Visual': {'Model': 'models/props/zz_dont_use_rock_Dk_1F'}}, '1137808334.67dxschafe': {'Type': 'Rock', 'Hpr': VBase3(156.524, 6.735, -3.327), 'Pos': Point3(81.083, 225.92, 74.175), 'Scale': VBase3(1.815, 1.815, 1.815), 'Visual': {'Model': 'models/props/zz_dont_use_rock_Dk_1F'}}, '1137809588.72dxschafe': {'Type': 'Rock', 'Hpr': VBase3(18.27, 0.0, 8.037), 'Pos': Point3(82.89, 221.067, 74.637), 'Scale': VBase3(1.381, 1.381, 1.381), 'Visual': {'Model': 'models/props/zz_dont_use_rock_Dk_2F'}}, '1137809702.98dxschafe': {'Type': 'Rock', 'Hpr': VBase3(-15.281, 9.35, 7.783), 'Pos': Point3(83.71, 216.379, 75.663), 'Scale': VBase3(1.166, 1.166, 1.166), 'Visual': {'Model': 'models/props/zz_dont_use_rock_Dk_1F'}}, '1137810004.47dxschafe': {'Type': 'Rock', 'Hpr': VBase3(2.008, -8.135, 17.32), 'Objects': {}, 'Pos': Point3(170.996, 297.092, 64.506), 'Scale': VBase3(1.464, 1.464, 1.464), 'Visual': {'Model': 'models/props/zz_dont_use_rocks_Dk_group_2F'}}, '1137810301.38dxschafe': {'Type': 'Rock', 'Hpr': VBase3(101.767, 7.392, -9.082), 'Pos': Point3(160.95, 292.5, 67.989), 'Scale': VBase3(1.0, 1.0, 1.0), 'Visual': {'Model': 'models/props/zz_dont_use_rock_Lt_1F'}}, '1137810377.2dxschafe': {'Type': 'Rock', 'Hpr': VBase3(80.208, 1.871, 20.747), 'Pos': Point3(156.843, 292.649, 63.854), 'Scale': VBase3(2.416, 2.416, 2.416), 'Visual': {'Model': 'models/props/zz_dont_use_rock_Dk_3F'}}, '1137810791.5dxschafe': {'Type': 'Bush', 'Hpr': VBase3(36.141, 4.981, 1.142), 'Pos': Point3(179.255, 300.884, 65.077), 'Scale': VBase3(0.794, 0.794, 0.794), 'Visual': {'Model': 'models/vegetation/bush_g'}}, '1137810869.36dxschafe': {'Type': 'Bush', 'Hpr': VBase3(92.017, -0.311, -3.75), 'Pos': Point3(188.585, 300.745, 65.061), 'Scale': VBase3(0.794, 0.794, 0.794), 'Visual': {'Model': 'models/vegetation/bush_a'}}, '1137810917.73dxschafe': {'Type': 'Bush', 'Hpr': VBase3(-100.05, 0.0, -1.001), 'Pos': Point3(209.087, 259.148, 65.756), 'Scale': VBase3(0.617, 0.617, 0.617), 'Visual': {'Model': 'models/vegetation/bush_f'}}, '1137811234.52dxschafe': {'Type': 'Bush', 'Hpr': VBase3(-60.422, -3.215, -1.431), 'Pos': Point3(116.413, 267.588, 69.0), 'Scale': VBase3(0.616, 0.616, 0.616), 'Visual': {'Model': 'models/vegetation/bush_d'}}, '1137811756.92dxschafe': {'Type': 'Tree', 'Hpr': Point3(0.0, 0.0, 0.0), 'Pos': Point3(183.506, 103.712, 40.804), 'Scale': VBase3(0.61, 0.61, 0.61), 'Visual': {'Model': 'models/vegetation/gen_tree_e'}}, '1137811785.95dxschafe': {'Type': 'Tree', 'Color': (0.4000000059604645, 0.4000000059604645, 0.4000000059604645, 1.0), 'Hpr': VBase3(18.803, 0.0, 0.0), 'Pos': Point3(118.642, 8.866, 9.653), 'Scale': VBase3(1.0, 1.0, 1.0), 'Visual': {'Model': 'models/vegetation/fern_tree_e'}}, '1137811838.45dxschafe': {'Type': 'Tree', 'Hpr': Point3(0.0, 0.0, 0.0), 'Pos': Point3(164.425, 42.439, 23.135), 'Scale': VBase3(1.0, 1.0, 1.0), 'Visual': {'Model': 'models/vegetation/fern_tree_e'}}, '1137811971.63dxschafe': {'Type': 'Tree', 'Color': (0.699999988079071, 0.699999988079071, 0.699999988079071, 1.0), 'Hpr': Point3(0.0, 0.0, 0.0), 'Pos': Point3(115.197, -15.405, 10.068), 'Scale': VBase3(1.0, 1.0, 1.0), 'Visual': {'Model': 'models/vegetation/fern_tree_c'}}, 
'1137811978.38dxschafe': {'Type': 'Tree', 'Color': (0.6000000238418579, 0.6000000238418579, 0.6000000238418579, 1.0), 'Hpr': Point3(0.0, 0.0, 0.0), 'Pos': Point3(100.568, -11.018, 9.352), 'Scale': VBase3(1.275, 1.275, 1.275), 'Visual': {'Model': 'models/vegetation/fern_tree_c'}}, '1137811986.05dxschafe': {'Type': 'Tree', 'Hpr': VBase3(-38.185, -4.832, 0.0), 'Pos': Point3(135.026, 48.423, 20.353), 'Scale': VBase3(1.0, 1.0, 1.0), 'Visual': {'Model': 'models/vegetation/fern_tree_c'}}, '1137811999.59dxschafe': {'Type': 'Tree', 'Hpr': Point3(0.0, 0.0, 0.0), 'Pos': Point3(155.213, 105.394, 49.23), 'Scale': VBase3(1.0, 1.0, 1.0), 'Visual': {'Model': 'models/vegetation/fern_tree_c'}}, '1137812004.25dxschafe': {'Type': 'Tree', 'Hpr': Point3(0.0, 0.0, 0.0), 'Pos': Point3(236.558, 162.018, 60.22), 'Scale': VBase3(0.625, 0.625, 0.625), 'Visual': {'Model': 'models/vegetation/fern_tree_c'}}, '1137812037.88dxschafe': {'Type': 'Tree', 'Color': (0.6000000238418579, 0.6000000238418579, 0.6000000238418579, 1.0), 'Hpr': VBase3(-16.604, 0.0, 0.0), 'Pos': Point3(88.699, -5.145, 9.706), 'Scale': VBase3(0.625, 0.625, 0.625), 'Visual': {'Model': 'models/vegetation/fern_tree_a'}}, '1137812042.97dxschafe': {'Type': 'Tree', 'Color': (0.800000011920929, 0.800000011920929, 0.800000011920929, 1.0), 'Hpr': VBase3(53.217, 0.0, 0.0), 'Pos': Point3(67.536, -35.952, 4.057), 'Scale': VBase3(0.625, 0.625, 0.625), 'Visual': {'Model': 'models/vegetation/fern_tree_c'}}, '1137812044.56dxschafe': {'Type': 'Tree', 'Color': (0.5, 0.5, 0.5, 1.0), 'Hpr': Point3(0.0, 0.0, 0.0), 'Pos': Point3(98.684, -11.534, 9.149), 'Scale': VBase3(0.625, 0.625, 0.625), 'Visual': {'Model': 'models/vegetation/fern_tree_c'}}, '1137812061.77dxschafe': {'Type': 'Tree', 'Color': (0.699999988079071, 0.699999988079071, 0.699999988079071, 1.0), 'Hpr': Point3(0.0, 0.0, 0.0), 'Pos': Point3(114.363, -6.835, 10.877), 'Scale': VBase3(0.625, 0.625, 0.625), 'Visual': {'Model': 'models/vegetation/fern_tree_c'}}, '1137812068.19dxschafe': {'Type': 'Tree', 'Hpr': Point3(0.0, 0.0, 0.0), 'Pos': Point3(167.467, 95.301, 43.265), 'Scale': VBase3(0.625, 0.625, 0.625), 'Visual': {'Model': 'models/vegetation/fern_tree_c'}}, '1137812071.73dxschafe': {'Type': 'Tree', 'Hpr': VBase3(0.0, 10.493, 0.0), 'Pos': Point3(181.48, 119.384, 53.306), 'Scale': VBase3(0.625, 0.625, 0.625), 'Visual': {'Model': 'models/vegetation/fern_tree_c'}}, '1137812079.69dxschafe': {'Type': 'Tree', 'Hpr': Point3(0.0, 0.0, 0.0), 'Pos': Point3(155.979, 63.358, 29.118), 'Scale': VBase3(0.625, 0.625, 0.625), 'Visual': {'Model': 'models/vegetation/fern_tree_c'}}, '1137812086.39dxschafe': {'Type': 'Tree', 'Color': (0.800000011920929, 0.800000011920929, 0.800000011920929, 1.0), 'Hpr': Point3(0.0, 0.0, 0.0), 'Pos': Point3(214.484, -2.972, 7.894), 'Scale': VBase3(0.625, 0.625, 0.625), 'Visual': {'Model': 'models/vegetation/fern_tree_d'}}, '1137812163.47dxschafe': {'Type': 'Bush', 'Color': (0.6000000238418579, 0.6000000238418579, 0.6000000238418579, 1.0), 'Hpr': VBase3(90.922, 0.0, 0.0), 'Pos': Point3(68.255, -42.658, 3.113), 'Scale': VBase3(1.0, 1.0, 1.0), 'Visual': {'Model': 'models/vegetation/bush_a'}}, '1137812165.38dxschafe': {'Type': 'Bush', 'Hpr': Point3(0.0, 0.0, 0.0), 'Pos': Point3(99.23, 11.426, 12.078), 'Scale': VBase3(1.0, 1.0, 1.0), 'Visual': {'Model': 'models/vegetation/bush_a'}}, '1137812167.22dxschafe': {'Type': 'Bush', 'Color': (0.5, 0.5, 0.5, 1.0), 'Hpr': Point3(0.0, 0.0, 0.0), 'Pos': Point3(138.243, 53.312, 22.685), 'Scale': VBase3(1.0, 1.0, 1.0), 'Visual': {'Model': 'models/vegetation/bush_a'}}, 
'1137812169.88dxschafe': {'Type': 'Bush', 'Color': (0.5, 0.5, 0.5, 1.0), 'Hpr': Point3(0.0, 0.0, 0.0), 'Pos': Point3(128.064, 51.263, 19.578), 'Scale': VBase3(1.0, 1.0, 1.0), 'Visual': {'Model': 'models/vegetation/bush_a'}}, '1137812172.67dxschafe': {'Type': 'Bush', 'Color': (0.4000000059604645, 0.4000000059604645, 0.4000000059604645, 1.0), 'Hpr': Point3(0.0, 0.0, 0.0), 'Pos': Point3(107.747, 3.449, 11.51), 'Scale': VBase3(1.0, 1.0, 1.0), 'Visual': {'Model': 'models/vegetation/bush_a'}}, '1137812179.3dxschafe': {'Type': 'Bush', 'Hpr': Point3(0.0, 0.0, 0.0), 'Pos': Point3(150.857, 77.852, 33.589), 'Scale': VBase3(1.0, 1.0, 1.0), 'Visual': {'Model': 'models/vegetation/bush_a'}}, '1137812192.33dxschafe': {'Type': 'Bush', 'Color': (0.800000011920929, 1.0, 0.6000000238418579, 1.0), 'Hpr': VBase3(-122.149, -0.165, -0.104), 'Pos': Point3(78.359, -41.748, 2.458), 'Scale': VBase3(1.0, 1.0, 1.0), 'Visual': {'Model': 'models/vegetation/bush_a'}}, '1137812196.17dxschafe': {'Type': 'Bush', 'Color': (0.4000000059604645, 0.4000000059604645, 0.4000000059604645, 1.0), 'Hpr': Point3(0.0, 0.0, 0.0), 'Pos': Point3(81.548, -11.728, 7.93), 'Scale': VBase3(1.0, 1.0, 1.0), 'Visual': {'Model': 'models/vegetation/bush_a'}}, '1137812199.53dxschafe': {'Type': 'Bush', 'Color': (0.6000000238418579, 0.6000000238418579, 0.6000000238418579, 1.0), 'Hpr': Point3(0.0, 0.0, 0.0), 'Pos': Point3(87.826, -5.531, 9.201), 'Scale': VBase3(1.0, 1.0, 1.0), 'Visual': {'Model': 'models/vegetation/bush_a'}}, '1137812203.89dxschafe': {'Type': 'Bush', 'Color': (0.699999988079071, 0.699999988079071, 0.699999988079071, 1.0), 'Hpr': Point3(0.0, 0.0, 0.0), 'Pos': Point3(70.211, -19.024, 6.023), 'Scale': VBase3(1.0, 1.0, 1.0), 'Visual': {'Model': 'models/vegetation/bush_a'}}, '1137812205.42dxschafe': {'Type': 'Bush', 'Color': (0.6000000238418579, 0.6000000238418579, 0.6000000238418579, 1.0), 'Hpr': Point3(0.0, 0.0, 0.0), 'Pos': Point3(56.411, -56.053, 1.608), 'Scale': VBase3(1.0, 1.0, 1.0), 'Visual': {'Model': 'models/vegetation/bush_a'}}, '1137812218.56dxschafe': {'Type': 'Bush', 'Hpr': Point3(0.0, 0.0, 0.0), 'Pos': Point3(141.104, 100.638, 46.824), 'Scale': VBase3(1.0, 1.0, 1.0), 'Visual': {'Model': 'models/vegetation/bush_a'}}, '1137812221.2dxschafe': {'Type': 'Bush', 'Color': (0.699999988079071, 0.699999988079071, 0.699999988079071, 1.0), 'Hpr': Point3(0.0, 0.0, 0.0), 'Pos': Point3(158.113, 69.423, 30.794), 'Scale': VBase3(1.0, 1.0, 1.0), 'Visual': {'Model': 'models/vegetation/bush_a'}}, '1137812229.81dxschafe': {'Type': 'Bush', 'Color': (0.699999988079071, 0.699999988079071, 0.699999988079071, 1.0), 'Hpr': Point3(0.0, 0.0, 0.0), 'Pos': Point3(102.812, -18.982, 8.717), 'Scale': VBase3(1.0, 1.0, 1.0), 'Visual': {'Model': 'models/vegetation/bush_a'}}, '1137812869.47dxschafe': {'Type': 'Bush', 'Hpr': VBase3(0.0, 12.29, 0.0), 'Pos': Point3(443.641, 45.585, 31.421), 'Scale': VBase3(1.0, 1.0, 1.0), 'Visual': {'Model': 'models/vegetation/bush_a'}}, '1137812970.61dxschafe': {'Type': 'Bush', 'Hpr': VBase3(0.0, 15.843, 0.0), 'Pos': Point3(459.286, 24.141, 26.913), 'Scale': VBase3(1.0, 1.0, 1.0), 'Visual': {'Model': 'models/vegetation/bush_d'}}, '1137812972.64dxschafe': {'Type': 'Bush', 'Hpr': VBase3(0.0, 12.481, 0.0), 'Pos': Point3(485.358, 40.065, 30.267), 'Scale': VBase3(1.0, 1.0, 1.0), 'Visual': {'Model': 'models/vegetation/bush_d'}}, '1137812983.08dxschafe': {'Type': 'Bush', 'Hpr': VBase3(33.837, 11.43, -8.641), 'Pos': Point3(452.84, -29.124, 11.746), 'Scale': VBase3(1.0, 1.0, 1.0), 'Visual': {'Model': 'models/vegetation/bush_d'}}, 
'1138062672.0dxschafe': {'Type': 'Bush', 'Hpr': VBase3(0.0, 13.851, 0.0), 'Pos': Point3(228.386, 3.241, 6.181), 'Scale': VBase3(0.672, 0.672, 0.672), 'Visual': {'Model': 'models/vegetation/bush_a'}}, '1138062672.56dxschafe': {'Type': 'Bush', 'Hpr': VBase3(-29.304, 13.518, 7.474), 'Pos': Point3(268.568, 16.062, 8.792), 'Scale': VBase3(0.606, 0.606, 0.606), 'Visual': {'Model': 'models/vegetation/bush_a'}}, '1138062673.13dxschafe': {'Type': 'Bush', 'Hpr': VBase3(0.0, 15.413, 0.0), 'Pos': Point3(489.602, 42.618, 34.09), 'Scale': VBase3(0.01, 0.01, 0.01), 'Visual': {'Model': 'models/vegetation/bush_a'}}, '1138062673.16dxschafe': {'Type': 'Bush', 'Hpr': VBase3(0.0, 15.413, 0.0), 'Pos': Point3(489.602, 42.618, 34.09), 'Scale': VBase3(0.01, 0.01, 0.01), 'Visual': {'Model': 'models/vegetation/bush_a'}}, '1138062673.22dxschafe': {'Type': 'Bush', 'Hpr': VBase3(0.0, 15.413, 0.0), 'Pos': Point3(485.219, 40.515, 30.348), 'Scale': VBase3(0.01, 0.01, 0.01), 'Visual': {'Model': 'models/vegetation/bush_a'}}, '1138062673.25dxschafe': {'Type': 'Bush', 'Hpr': VBase3(0.0, 15.413, 0.0), 'Pos': Point3(485.219, 40.515, 30.348), 'Scale': VBase3(0.01, 0.01, 0.01), 'Visual': {'Model': 'models/vegetation/bush_a'}}, '1138062673.28dxschafe': {'Type': 'Bush', 'Hpr': VBase3(0.0, 15.413, 0.0), 'Pos': Point3(485.219, 40.515, 30.348), 'Scale': VBase3(0.01, 0.01, 0.01), 'Visual': {'Model': 'models/vegetation/bush_a'}}, '1138062673.31dxschafe': {'Type': 'Bush', 'Hpr': VBase3(0.0, 15.413, 0.0), 'Pos': Point3(485.219, 40.515, 30.348), 'Scale': VBase3(0.01, 0.01, 0.01), 'Visual': {'Model': 'models/vegetation/bush_a'}}, '1138062673.38dxschafe': {'Type': 'Bush', 'Hpr': VBase3(0.0, 15.413, 0.0), 'Pos': Point3(482.278, 38.938, 29.893), 'Scale': VBase3(0.01, 0.01, 0.01), 'Visual': {'Model': 'models/vegetation/bush_a'}}, '1138062673.41dxschafe': {'Type': 'Bush', 'Hpr': VBase3(0.0, 15.413, 0.0), 'Pos': Point3(482.278, 38.938, 29.893), 'Scale': VBase3(0.01, 0.01, 0.01), 'Visual': {'Model': 'models/vegetation/bush_a'}}, '1138062673.44dxschafe': {'Type': 'Bush', 'Hpr': VBase3(0.0, 15.413, 0.0), 'Pos': Point3(482.278, 38.938, 29.893), 'Scale': VBase3(0.01, 0.01, 0.01), 'Visual': {'Model': 'models/vegetation/bush_a'}}, '1138062673.45dxschafe': {'Type': 'Bush', 'Hpr': VBase3(0.0, 15.413, 0.0), 'Pos': Point3(482.278, 38.938, 29.893), 'Scale': VBase3(0.01, 0.01, 0.01), 'Visual': {'Model': 'models/vegetation/bush_a'}}, '1138062673.53dxschafe': {'Type': 'Bush', 'Hpr': VBase3(0.0, 15.413, 0.0), 'Pos': Point3(478.659, 38.85, 29.697), 'Scale': VBase3(0.01, 0.01, 0.01), 'Visual': {'Model': 'models/vegetation/bush_a'}}, '1138062673.55dxschafe': {'Type': 'Bush', 'Hpr': VBase3(0.0, 15.413, 0.0), 'Pos': Point3(478.659, 38.85, 29.697), 'Scale': VBase3(0.01, 0.01, 0.01), 'Visual': {'Model': 'models/vegetation/bush_a'}}, '1138062673.58dxschafe': {'Type': 'Bush', 'Hpr': VBase3(0.0, 15.413, 0.0), 'Pos': Point3(478.659, 38.85, 29.697), 'Scale': VBase3(0.01, 0.01, 0.01), 'Visual': {'Model': 'models/vegetation/bush_a'}}, '1138062673.66dxschafe': {'Type': 'Bush', 'Hpr': VBase3(0.0, 15.413, 0.0), 'Pos': Point3(477.106, 38.893, 29.629), 'Scale': VBase3(0.01, 0.01, 0.01), 'Visual': {'Model': 'models/vegetation/bush_a'}}, '1138062673.67dxschafe': {'Type': 'Bush', 'Hpr': VBase3(0.0, 15.413, 0.0), 'Pos': Point3(477.106, 38.893, 29.629), 'Scale': VBase3(0.01, 0.01, 0.01), 'Visual': {'Model': 'models/vegetation/bush_a'}}, '1138062673.78dxschafe': {'Type': 'Bush', 'Hpr': VBase3(0.0, 15.413, 0.0), 'Pos': Point3(473.399, 39.06, 29.479), 'Scale': VBase3(0.01, 0.01, 0.01), 
'Visual': {'Model': 'models/vegetation/bush_a'}}, '1138062673.7dxschafe': {'Type': 'Bush', 'Hpr': VBase3(0.0, 15.413, 0.0), 'Pos': Point3(477.106, 38.893, 29.629), 'Scale': VBase3(0.01, 0.01, 0.01), 'Visual': {'Model': 'models/vegetation/bush_a'}}, '1138062673.83dxschafe': {'Type': 'Bush', 'Hpr': VBase3(0.0, 15.413, 0.0), 'Pos': Point3(473.399, 39.06, 29.479), 'Scale': VBase3(0.01, 0.01, 0.01), 'Visual': {'Model': 'models/vegetation/bush_a'}}, '1138062673.89dxschafe': {'Type': 'Bush', 'Hpr': VBase3(0.0, 15.413, 0.0), 'Pos': Point3(472.939, 39.139, 29.472), 'Scale': VBase3(0.01, 0.01, 0.01), 'Visual': {'Model': 'models/vegetation/bush_a'}}, '1138062673.8dxschafe': {'Type': 'Bush', 'Hpr': VBase3(0.0, 15.413, 0.0), 'Pos': Point3(473.399, 39.06, 29.479), 'Scale': VBase3(0.01, 0.01, 0.01), 'Visual': {'Model': 'models/vegetation/bush_a'}}, '1138062673.92dxschafe': {'Type': 'Bush', 'Hpr': VBase3(0.0, 15.413, 0.0), 'Pos': Point3(472.939, 39.139, 29.472), 'Scale': VBase3(0.01, 0.01, 0.01), 'Visual': {'Model': 'models/vegetation/bush_a'}}, '1138062674.02dxschafe': {'Type': 'Bush', 'Hpr': VBase3(0.0, 15.413, 0.0), 'Pos': Point3(475.071, 36.327, 36.247), 'Scale': VBase3(0.01, 0.01, 0.01), 'Visual': {'Model': 'models/vegetation/bush_a'}}, '1138062674.05dxschafe': {'Type': 'Bush', 'Hpr': VBase3(0.0, 15.413, 0.0), 'Pos': Point3(475.071, 36.327, 36.247), 'Scale': VBase3(0.01, 0.01, 0.01), 'Visual': {'Model': 'models/vegetation/bush_a'}}, '1138062674.08dxschafe': {'Type': 'Bush', 'Hpr': VBase3(0.0, 15.413, 0.0), 'Pos': Point3(475.071, 36.327, 36.247), 'Scale': VBase3(0.01, 0.01, 0.01), 'Visual': {'Model': 'models/vegetation/bush_a'}}, '1138062674.19dxschafe': {'Type': 'Bush', 'Hpr': VBase3(0.0, 15.413, 0.0), 'Pos': Point3(475.071, 36.327, 36.247), 'Scale': VBase3(0.01, 0.01, 0.01), 'Visual': {'Model': 'models/vegetation/bush_a'}}, '1138062674.22dxschafe': {'Type': 'Bush', 'Hpr': VBase3(0.0, 15.413, 0.0), 'Pos': Point3(475.071, 36.327, 36.247), 'Scale': VBase3(0.01, 0.01, 0.01), 'Visual': {'Model': 'models/vegetation/bush_a'}}, '1138062674.25dxschafe': {'Type': 'Bush', 'Hpr': VBase3(0.0, 15.413, 0.0), 'Pos': Point3(475.071, 36.327, 36.247), 'Scale': VBase3(0.01, 0.01, 0.01), 'Visual': {'Model': 'models/vegetation/bush_a'}}, '1138062674.28dxschafe': {'Type': 'Bush', 'Hpr': VBase3(0.0, 15.413, 0.0), 'Pos': Point3(475.071, 36.327, 36.247), 'Scale': VBase3(0.01, 0.01, 0.01), 'Visual': {'Model': 'models/vegetation/bush_a'}}, '1138062835.84dxschafe': {'Type': 'Tree', 'Color': (0.6000000238418579, 0.6000000238418579, 0.6000000238418579, 1.0), 'Hpr': VBase3(53.184, 5.076, -6.741), 'Pos': Point3(448.973, 14.864, 23.543), 'Scale': VBase3(0.747, 0.747, 0.747), 'Visual': {'Model': 'models/vegetation/fern_tree_d'}}, '1138062881.41dxschafe': {'Type': 'Tree', 'Color': (0.699999988079071, 0.699999988079071, 0.699999988079071, 1.0), 'Hpr': VBase3(0.0, 11.344, 0.0), 'Pos': Point3(419.966, 64.807, 36.651), 'Scale': VBase3(0.887, 0.887, 0.887), 'Visual': {'Model': 'models/vegetation/fern_tree_d'}}, '1138062921.28dxschafe': {'Type': 'Tree', 'Color': (0.699999988079071, 0.699999988079071, 0.699999988079071, 1.0), 'Hpr': Point3(0.0, 0.0, 0.0), 'Pos': Point3(440.265, 37.857, 27.508), 'Scale': VBase3(0.686, 0.686, 0.686), 'Visual': {'Model': 'models/vegetation/fern_tree_a'}}, '1138063140.55dxschafe': {'Type': 'Tree', 'Hpr': VBase3(-48.982, 8.379, 0.0), 'Pos': Point3(368.466, 88.495, 38.52), 'Scale': VBase3(1.0, 1.0, 1.0), 'Visual': {'Model': 'models/vegetation/fern_tree_d'}}, '1138302833.38dxschafe': {'Type': 'Crate', 'Color': 
(0.699999988079071, 0.699999988079071, 0.699999988079071, 1.0), 'Hpr': Point3(0.0, 0.0, 0.0), 'Objects': {}, 'Pos': Point3(454.463, -73.439, 5.227), 'Scale': VBase3(0.586, 0.586, 0.586), 'Visual': {'Model': 'models/props/crates_group_2'}}, '1138302835.94dxschafe': {'Type': 'Crate', 'Color': (0.75, 0.9300000071525574, 1.0, 1.0), 'Hpr': VBase3(90.273, 0.0, 0.723), 'Objects': {}, 'Pos': Point3(453.007, -78.36, 5.287), 'Scale': VBase3(0.398, 0.398, 0.398), 'Visual': {'Model': 'models/props/crates_group_1'}}, '1138302857.72dxschafe': {'Type': 'Crate', 'Color': (0.8999999761581421, 0.8999999761581421, 0.8999999761581421, 1.0), 'Hpr': Point3(0.0, 0.0, 0.0), 'Objects': {}, 'Pos': Point3(510.979, -96.416, 5.213), 'Scale': VBase3(0.673, 0.673, 0.673), 'Visual': {'Model': 'models/props/crates_group_2'}}, '1138302885.47dxschafe': {'Type': 'Crate', 'Color': (0.8999999761581421, 0.8999999761581421, 0.699999988079071, 1.0), 'Hpr': Point3(0.0, 0.0, 0.0), 'Objects': {}, 'Pos': Point3(450.972, -140.916, 5.213), 'Scale': VBase3(0.706, 0.706, 0.706), 'Visual': {'Model': 'models/props/crate_group_net'}}, '1138302910.5dxschafe': {'Type': 'Crate', 'Color': (0.75, 0.9300000071525574, 1.0, 1.0), 'Hpr': VBase3(-52.634, 0.0, 0.0), 'Objects': {'1138320661.88dxschafe': {'Type': 'Crate', 'Color': (0.699999988079071, 0.699999988079071, 0.699999988079071, 1.0), 'Hpr': VBase3(52.634, 0.0, 0.0), 'Pos': Point3(-0.463, 0.444, 11.647), 'Scale': VBase3(0.755, 0.755, 0.755), 'Visual': {'Model': 'models/props/crate'}}}, 'Pos': Point3(500.013, -156.485, 5.213), 'Scale': VBase3(0.72, 0.72, 0.72), 'Visual': {'Model': 'models/props/crates_group_2'}}, '1138302936.0dxschafe': {'Type': 'Crate', 'Color': (0.6000000238418579, 0.6000000238418579, 0.6000000238418579, 1.0), 'Hpr': VBase3(52.778, 0.0, 0.0), 'Objects': {}, 'Pos': Point3(274.299, -105.963, 4.851), 'Scale': VBase3(0.622, 0.622, 0.622), 'Visual': {'Model': 'models/props/crate'}}, '1138302984.48dxschafe': {'Type': 'Crate', 'Color': (0.800000011920929, 0.800000011920929, 0.800000011920929, 1.0), 'Hpr': VBase3(-80.367, 0.0, 0.0), 'Objects': {}, 'Pos': Point3(503.669, -150.116, 5.213), 'Scale': VBase3(0.506, 0.506, 0.506), 'Visual': {'Model': 'models/props/crates_group_1'}}, '1138303002.52dxschafe': {'Type': 'Crate', 'Color': (0.75, 0.9300000071525574, 1.0, 1.0), 'Hpr': VBase3(4.362, 0.0, 0.0), 'Objects': {}, 'Pos': Point3(511.618, -104.554, 4.85), 'Scale': VBase3(0.637, 0.637, 0.637), 'Visual': {'Model': 'models/props/crate_group_net'}}, '1138303005.69dxschafe': {'Type': 'Crate', 'Hpr': VBase3(4.362, 0.0, 0.0), 'Objects': {}, 'Pos': Point3(423.506, -161.616, 19.939), 'Scale': VBase3(0.394, 0.394, 0.394), 'Visual': {'Model': 'models/props/crates_group_1'}}, '1138313303.97sdnaik': {'Type': 'Island Game Area', 'File': 'BilgewaterCave1', 'Hpr': Point3(0.0, 0.0, 0.0), 'Objects': {'1142290985.73sdnaik': {'Type': 'Locator Node', 'Name': 'portal_interior_1', 'Hpr': VBase3(90.049, 0.0, 0.0), 'Pos': Point3(-152.635, -304.943, 170.576), 'Scale': VBase3(1.0, 1.0, 1.0)}, '1142290985.8sdnaik': {'Type': 'Locator Node', 'Name': 'portal_interior_2', 'Hpr': VBase3(-90.0, -2.136, 1.821), 'Pos': Point3(185.799, 421.599, 215.117), 'Scale': VBase3(1.0, 1.0, 1.0)}}, 'Pos': Point3(-846.053, -75.921, 56.631), 'Scale': VBase3(1.0, 1.0, 1.0), 'Visual': {'Model': 'models/caves/cave_a_zero'}}, '1138320260.22dxschafe': {'Type': 'Crate', 'Color': (0.6000000238418579, 0.6000000238418579, 0.6000000238418579, 1.0), 'Hpr': VBase3(91.34, 0.0, 0.0), 'Objects': {}, 'Pos': Point3(365.859, -105.024, 4.266), 'Scale': 
VBase3(0.798, 0.798, 0.798), 'Visual': {'Model': 'models/props/crate_group_net'}}, '1138320304.84dxschafe': {'Type': 'Crate', 'Color': (0.6000000238418579, 0.6000000238418579, 0.6000000238418579, 1.0), 'Hpr': Point3(0.0, 0.0, 0.0), 'Objects': {}, 'Pos': Point3(410.727, -74.613, 5.311), 'Scale': VBase3(1.218, 1.218, 1.218), 'Visual': {'Model': 'models/props/crate'}}, '1138320363.55dxschafe': {'Type': 'Crate', 'Color': (0.8999999761581421, 0.8999999761581421, 0.699999988079071, 1.0), 'Hpr': Point3(0.0, 0.0, 0.0), 'Objects': {}, 'Pos': Point3(459.496, -79.334, 5.213), 'Scale': VBase3(0.538, 0.538, 0.538), 'Visual': {'Model': 'models/props/crate'}}, '1138320493.22dxschafe': {'Type': 'Crate', 'Color': (0.8999999761581421, 0.8999999761581421, 0.699999988079071, 1.0), 'Hpr': VBase3(-30.13, 0.814, 0.473), 'Objects': {}, 'Pos': Point3(457.53, -140.353, 5.231), 'Scale': VBase3(1.333, 1.333, 1.333), 'Visual': {'Model': 'models/props/crate'}}, '1138320513.19dxschafe': {'Type': 'Crate', 'Color': (0.8999999761581421, 0.8999999761581421, 0.699999988079071, 1.0), 'Hpr': Point3(0.0, 0.0, 0.0), 'Objects': {'1138320517.11dxschafe': {'Type': 'Crate', 'Color': (0.6000000238418579, 0.6000000238418579, 0.6000000238418579, 1.0), 'Hpr': VBase3(72.454, 0.0, 0.0), 'Pos': Point3(0.678, -0.596, 4.542), 'Scale': VBase3(1.25, 1.25, 1.25), 'Visual': {'Model': 'models/props/crate'}}}, 'Pos': Point3(496.526, -151.372, 5.213), 'Scale': VBase3(0.614, 0.614, 0.614), 'Visual': {'Model': 'models/props/crate'}}, '1138320636.58dxschafe': {'Type': 'Crate', 'Color': (0.8999999761581421, 0.8999999761581421, 0.699999988079071, 1.0), 'Hpr': VBase3(22.185, 0.0, 0.0), 'Objects': {}, 'Pos': Point3(372.261, -104.288, 4.851), 'Scale': VBase3(0.831, 0.831, 0.831), 'Visual': {'Model': 'models/props/crate'}}, '1138320640.36dxschafe': {'Type': 'Crate', 'Color': (0.8999999761581421, 0.8999999761581421, 0.699999988079071, 1.0), 'Hpr': Point3(0.0, 0.0, 0.0), 'Objects': {}, 'Pos': Point3(263.856, -106.875, 4.851), 'Scale': VBase3(0.859, 0.859, 0.859), 'Visual': {'Model': 'models/props/crate'}}, '1138320649.98dxschafe': {'Type': 'Crate', 'Color': (0.8999999761581421, 0.8999999761581421, 0.699999988079071, 1.0), 'Hpr': Point3(0.0, 0.0, 0.0), 'Objects': {}, 'Pos': Point3(435.669, -71.589, 5.251), 'Scale': VBase3(0.737, 0.737, 0.737), 'Visual': {'Model': 'models/props/crate'}}, '1138320691.84dxschafe': {'Type': 'Crate', 'Color': (1.0, 0.800000011920929, 0.6000000238418579, 1.0), 'Hpr': VBase3(52.634, 0.0, 0.0), 'Objects': {}, 'Pos': Point3(419.389, -163.257, 19.676), 'Scale': VBase3(0.711, 0.711, 0.711), 'Visual': {'Model': 'models/props/crate'}}, '1138320711.06dxschafe': {'Type': 'Crate', 'Color': (0.8999999761581421, 0.8999999761581421, 0.699999988079071, 1.0), 'Hpr': VBase3(105.268, 0.0, 0.0), 'Pos': Point3(449.902, -133.338, 5.213), 'Scale': VBase3(0.587, 0.587, 0.587), 'Visual': {'Model': 'models/props/crate'}}, '1138320832.58dxschafe': {'Type': 'Barrel', 'Color': (1.0, 0.800000011920929, 0.6000000238418579, 1.0), 'Hpr': Point3(0.0, 0.0, 0.0), 'Pos': Point3(462.239, -82.625, 5.213), 'Scale': VBase3(0.583, 0.583, 0.583), 'Visual': {'Model': 'models/props/barrel_worn'}}, '1138320841.61dxschafe': {'Type': 'Barrel', 'Hpr': Point3(0.0, 0.0, 0.0), 'Pos': Point3(454.383, -84.452, 5.213), 'Scale': VBase3(0.583, 0.583, 0.583), 'Visual': {'Model': 'models/props/barrel'}}, '1138320921.48dxschafe': {'Type': 'Barrel', 'Color': (0.5, 0.5, 0.5, 1.0), 'Hpr': Point3(0.0, 0.0, 0.0), 'Pos': Point3(453.165, -151.743, 5.213), 'Scale': VBase3(0.583, 0.583, 0.583), 
'Visual': {'Color': (0.79, 0.65, 0.45, 1.0), 'Model': 'models/props/barrel_grey'}}, '1138320923.02dxschafe': {'Type': 'Barrel', 'Color': (1.0, 1.0, 0.6000000238418579, 1.0), 'Hpr': Point3(0.0, 0.0, 0.0), 'Pos': Point3(454.812, -150.028, 5.213), 'Scale': VBase3(0.583, 0.583, 0.583), 'Visual': {'Model': 'models/props/barrel'}}, '1138320941.98dxschafe': {'Type': 'Barrel', 'Hpr': Point3(0.0, 0.0, 0.0), 'Pos': Point3(485.508, -168.166, 5.213), 'Scale': VBase3(0.583, 0.583, 0.583), 'Visual': {'Model': 'models/props/barrel_group_2'}}, '1138320944.53dxschafe': {'Type': 'Barrel', 'Hpr': Point3(0.0, 0.0, 0.0), 'Pos': Point3(493.72, -153.358, 5.213), 'Scale': VBase3(0.583, 0.583, 0.583), 'Visual': {'Model': 'models/props/barrel'}}, '1138320945.41dxschafe': {'Type': 'Barrel', 'Color': (0.699999988079071, 0.699999988079071, 0.699999988079071, 1.0), 'Hpr': Point3(0.0, 0.0, 0.0), 'Pos': Point3(487.432, -160.417, 5.213), 'Scale': VBase3(0.583, 0.583, 0.583), 'Visual': {'Model': 'models/props/barrel_worn'}}, '1138321117.05dxschafe': {'Type': 'Crate', 'Color': (0.800000011920929, 0.800000011920929, 1.0, 1.0), 'Hpr': Point3(0.0, 0.0, 0.0), 'Objects': {}, 'Pos': Point3(259.842, -106.457, 4.851), 'Scale': VBase3(0.457, 0.457, 0.457), 'Visual': {'Model': 'models/props/crates_group_2'}}, '1138321267.27dxschafe': {'Type': 'Barrel', 'Hpr': Point3(0.0, 0.0, 0.0), 'Pos': Point3(436.06, -74.608, 5.248), 'Scale': VBase3(0.443, 0.443, 0.443), 'Visual': {'Model': 'models/props/barrel'}}, '1138321278.73dxschafe': {'Type': 'Barrel', 'Color': (0.800000011920929, 0.800000011920929, 0.800000011920929, 1.0), 'Hpr': Point3(0.0, 0.0, 0.0), 'Pos': Point3(433.972, -74.829, 5.253), 'Scale': VBase3(0.443, 0.443, 0.443), 'Visual': {'Model': 'models/props/barrel_worn'}}, '1138321281.98dxschafe': {'Type': 'Barrel', 'Hpr': Point3(0.0, 0.0, 0.0), 'Pos': Point3(401.867, -84.423, 5.253), 'Scale': VBase3(0.443, 0.443, 0.443), 'Visual': {'Model': 'models/props/barrel_group_2'}}, '1138321296.0dxschafe': {'Type': 'Barrel', 'Hpr': Point3(0.0, 0.0, 0.0), 'Pos': Point3(483.9, -68.645, 5.213), 'Scale': VBase3(0.443, 0.443, 0.443), 'Visual': {'Model': 'models/props/barrel'}}, '1138321296.64dxschafe': {'Type': 'Barrel', 'Color': (0.8999999761581421, 0.8999999761581421, 0.699999988079071, 1.0), 'Hpr': Point3(0.0, 0.0, 0.0), 'Pos': Point3(483.441, -70.514, 5.213), 'Scale': VBase3(0.443, 0.443, 0.443), 'Visual': {'Model': 'models/props/barrel'}}, '1138321297.41dxschafe': {'Type': 'Barrel', 'Color': (0.6000000238418579, 0.800000011920929, 1.0, 1.0), 'Hpr': Point3(0.0, 0.0, 0.0), 'Pos': Point3(487.344, -65.237, 5.213), 'Scale': VBase3(0.443, 0.443, 0.443), 'Visual': {'Model': 'models/props/barrel_worn'}}, '1138321298.98dxschafe': {'Type': 'Barrel', 'Color': (0.800000011920929, 0.800000011920929, 1.0, 1.0), 'Hpr': Point3(0.0, 0.0, 0.0), 'Pos': Point3(481.767, -74.765, 5.215), 'Scale': VBase3(0.443, 0.443, 0.443), 'Visual': {'Model': 'models/props/barrel_grey'}}, '1138321323.97dxschafe': {'Type': 'Barrel', 'Hpr': Point3(0.0, 0.0, 0.0), 'Pos': Point3(347.02, -105.281, 4.851), 'Scale': VBase3(0.443, 0.443, 0.443), 'Visual': {'Model': 'models/props/barrel_grey'}}, '1138321334.66dxschafe': {'Type': 'Barrel', 'Hpr': Point3(0.0, 0.0, 0.0), 'Pos': Point3(314.246, -123.538, 4.851), 'Scale': VBase3(0.443, 0.443, 0.443), 'Visual': {'Model': 'models/props/barrel'}}, '1138321347.94dxschafe': {'Type': 'Barrel', 'Color': (0.699999988079071, 0.699999988079071, 0.699999988079071, 1.0), 'Hpr': Point3(0.0, 0.0, 0.0), 'Pos': Point3(259.078, -109.761, 4.851), 'Scale': 
VBase3(0.443, 0.443, 0.443), 'Visual': {'Model': 'models/props/barrel_grey'}}, '1138321366.27dxschafe': {'Type': 'Barrel', 'Hpr': VBase3(64.1, 0.0, 0.0), 'Pos': Point3(504.711, -103.68, 5.213), 'Scale': VBase3(0.443, 0.443, 0.443), 'Visual': {'Model': 'models/props/barrel_group_1'}}, '1138321521.69dxschafe': {'Type': 'Crate', 'Color': (0.699999988079071, 0.699999988079071, 0.699999988079071, 1.0), 'Hpr': VBase3(59.241, 0.0, 0.0), 'Objects': {}, 'Pos': Point3(457.346, -79.553, 7.923), 'Scale': VBase3(0.881, 0.881, 0.881), 'Visual': {'Model': 'models/props/crate'}}, '1138321535.25dxschafe': {'Type': 'Barrel', 'Hpr': VBase3(6.659, 0.0, 0.0), 'Pos': Point3(487.483, -79.886, 5.082), 'Scale': VBase3(0.446, 0.446, 0.446), 'Visual': {'Model': 'models/props/barrel_group_1'}}, '1138322082.3dxschafe': {'Type': 'Crane', 'Hpr': VBase3(93.687, 0.0, 0.0), 'Pos': Point3(239.242, -125.623, 4.066), 'Scale': VBase3(0.495, 0.495, 0.495), 'Visual': {'Model': 'models/props/Crane'}}, '1138322153.2dxschafe': {'Type': 'Rock', 'Color': (0.6000000238418579, 0.6000000238418579, 0.6000000238418579, 1.0), 'Hpr': VBase3(53.019, 3.91, -5.32), 'Objects': {}, 'Pos': Point3(326.35, -95.397, 1.03), 'Scale': VBase3(1.154, 1.154, 1.154), 'Visual': {'Model': 'models/props/zz_dont_use_rocks_LT_group_3F'}}, '1138322162.81dxschafe': {'Type': 'Rock', 'Color': (0.6000000238418579, 0.6000000238418579, 0.6000000238418579, 1.0), 'Hpr': VBase3(-154.919, -16.946, -0.681), 'Objects': {}, 'Pos': Point3(361.653, -93.806, -0.671), 'Scale': VBase3(1.319, 1.319, 1.319), 'Visual': {'Model': 'models/props/zz_dont_use_rocks_LT_group_2F'}}, '1138322170.16dxschafe': {'Type': 'Rock', 'Color': (0.6000000238418579, 0.6000000238418579, 0.6000000238418579, 1.0), 'Hpr': VBase3(124.346, -13.44, -16.241), 'Objects': {}, 'Pos': Point3(301.564, -118.631, -8.817), 'Scale': VBase3(2.274, 2.274, 2.274), 'Visual': {'Model': 'models/props/zz_dont_use_rocks_LT_group_2F'}}, '1138322171.02dxschafe': {'Type': 'Rock', 'Color': (0.6000000238418579, 0.6000000238418579, 0.6000000238418579, 1.0), 'Hpr': Point3(0.0, 0.0, 0.0), 'Objects': {}, 'Pos': Point3(391.886, -63.012, 4.449), 'Scale': VBase3(1.349, 1.349, 1.349), 'Visual': {'Color': (1.0, 1.0, 1.0, 1.0), 'Model': 'models/props/zz_dont_use_rocks_LT_group_2F'}}, '1138322178.02dxschafe': {'Type': 'Rock', 'Color': (0.5, 0.5, 0.5, 1.0), 'Hpr': VBase3(-4.698, 5.902, 0.0), 'Objects': {}, 'Pos': Point3(73.865, -113.019, 2.431), 'Scale': VBase3(2.523, 2.523, 2.523), 'Visual': {'Model': 'models/props/zz_dont_use_rocks_Dk_group_1F'}}, '1138322184.19dxschafe': {'Type': 'Rock', 'Hpr': VBase3(-91.565, -6.453, 0.0), 'Objects': {}, 'Pos': Point3(217.76, -4.635, 5.764), 'Scale': VBase3(2.987, 2.987, 2.987), 'Visual': {'Model': 'models/props/zz_dont_use_rocks_Dk_group_1F'}}, '1138322184.67dxschafe': {'Type': 'Rock', 'Color': (0.5, 0.5, 0.5, 1.0), 'Hpr': VBase3(1.608, 4.341, -0.437), 'Objects': {}, 'Pos': Point3(80.693, -135.397, 1.067), 'Scale': VBase3(1.417, 1.417, 1.417), 'Visual': {'Model': 'models/props/zz_dont_use_rocks_LT_group_3F'}}, '1138322186.06dxschafe': {'Type': 'Rock', 'Hpr': Point3(0.0, 0.0, 0.0), 'Objects': {}, 'Pos': Point3(60.52, -72.091, 1.076), 'Scale': VBase3(2.391, 2.391, 2.391), 'Visual': {'Model': 'models/props/zz_dont_use_rocks_Dk_group_3F'}}, '1138322192.33dxschafe': {'Type': 'Rock', 'Hpr': VBase3(77.291, 0.0, 0.0), 'Objects': {}, 'Pos': Point3(286.825, 155.747, 51.755), 'Scale': VBase3(1.0, 1.0, 1.0), 'Visual': {'Model': 'models/props/zz_dont_use_rocks_Dk_group_3F'}}, '1138322206.23dxschafe': {'Type': 'Rock', 
'Hpr': VBase3(0.0, -4.554, -1.779), 'Pos': Point3(642.321, -148.98, 5.697), 'Scale': VBase3(1.986, 1.986, 1.986), 'Visual': {'Model': 'models/props/zz_dont_use_rocks_Dk_group_1F'}}, '1138322206.7dxschafe': {'Type': 'Rock', 'Hpr': VBase3(125.532, 0.322, -3.438), 'Objects': {}, 'Pos': Point3(657.204, -134.403, 4.287), 'Scale': VBase3(1.957, 1.957, 1.957), 'Visual': {'Model': 'models/props/zz_dont_use_rocks_Dk_group_2F'}}, '1138322208.19dxschafe': {'Type': 'Rock', 'Color': (0.6000000238418579, 0.6000000238418579, 0.6000000238418579, 1.0), 'Hpr': Point3(0.0, 0.0, 0.0), 'Pos': Point3(607.992, -185.909, 5.245), 'Scale': VBase3(1.319, 1.319, 1.319), 'Visual': {'Model': 'models/props/zz_dont_use_rocks_LT_group_2F'}}, '1138322220.91dxschafe': {'Type': 'Rock', 'Hpr': VBase3(-0.035, -2.308, -1.405), 'Pos': Point3(545.857, -94.351, 0.726), 'Scale': VBase3(1.923, 1.923, 1.923), 'Visual': {'Model': 'models/props/zz_dont_use_rocks_Dk_group_2F'}}, '1138322221.28dxschafe': {'Type': 'Rock', 'Hpr': VBase3(87.03, 0.0, 0.0), 'Objects': {}, 'Pos': Point3(621.453, -49.773, 2.907), 'Scale': VBase3(2.159, 2.159, 2.159), 'Visual': {'Model': 'models/props/zz_dont_use_rocks_Dk_group_3F'}}, '1138322227.23dxschafe': {'Type': 'Rock', 'Hpr': VBase3(0.0, 19.381, 0.0), 'Objects': {}, 'Pos': Point3(481.41, -5.701, 19.854), 'Scale': VBase3(3.158, 3.158, 3.158), 'Visual': {'Model': 'models/props/zz_dont_use_rocks_Dk_group_1F'}}, '1138322230.16dxschafe': {'Type': 'Rock', 'Hpr': Point3(0.0, 0.0, 0.0), 'Objects': {}, 'Pos': Point3(455.667, -37.218, 9.953), 'Scale': VBase3(1.0, 1.0, 1.0), 'Visual': {'Model': 'models/props/zz_dont_use_rock_Dk_1F'}}, '1138322230.55dxschafe': {'Type': 'Rock', 'Hpr': VBase3(0.0, 19.361, 0.0), 'Objects': {}, 'Pos': Point3(453.701, -42.734, 8.346), 'Scale': VBase3(1.877, 1.877, 1.877), 'Visual': {'Model': 'models/props/zz_dont_use_rock_Dk_1F'}}, '1138322230.91dxschafe': {'Type': 'Rock', 'Hpr': VBase3(0.0, 9.295, 0.0), 'Objects': {}, 'Pos': Point3(456.929, -47.63, 6.21), 'Scale': VBase3(2.524, 2.524, 2.524), 'Visual': {'Model': 'models/props/zz_dont_use_rock_Dk_3F'}}, '1138322231.42dxschafe': {'Type': 'Rock', 'Hpr': VBase3(0.0, 14.029, 0.0), 'Objects': {}, 'Pos': Point3(452.046, -35.949, 9.875), 'Scale': VBase3(3.706, 3.706, 3.706), 'Visual': {'Model': 'models/props/zz_dont_use_rock_Dk_4F'}}, '1138322238.98dxschafe': {'Type': 'Rock', 'Hpr': Point3(0.0, 0.0, 0.0), 'Objects': {}, 'Pos': Point3(263.962, 13.806, 7.993), 'Scale': VBase3(1.0, 1.0, 1.0), 'Visual': {'Model': 'models/props/zz_dont_use_rock_Dk_1F'}}, '1138324126.88dxschafe': {'Type': 'Barrel', 'Hpr': VBase3(108.315, 0.0, 0.0), 'Pos': Point3(484.605, -162.793, 5.213), 'Scale': VBase3(0.606, 0.606, 0.606), 'Visual': {'Model': 'models/props/barrel_sideways'}}, '1138324144.91dxschafe': {'Type': 'Barrel', 'Hpr': VBase3(159.646, 0.0, 0.0), 'Pos': Point3(495.513, -156.728, 5.213), 'Scale': VBase3(0.606, 0.606, 0.606), 'Visual': {'Model': 'models/props/barrel_sideways'}}, '1138324161.06dxschafe': {'Type': 'Barrel', 'Color': (0.6000000238418579, 0.6000000238418579, 0.6000000238418579, 1.0), 'Hpr': VBase3(90.544, 89.34, 162.223), 'Pos': Point3(504.428, -146.789, 6.211), 'Scale': VBase3(0.606, 0.606, 0.606), 'Visual': {'Model': 'models/props/barrel_grey'}}, '1138324180.14dxschafe': {'Type': 'Barrel', 'Color': (0.699999988079071, 0.699999988079071, 0.699999988079071, 1.0), 'Hpr': VBase3(-121.73, 0.0, 0.0), 'Pos': Point3(508.593, -99.248, 5.213), 'Scale': VBase3(0.422, 0.422, 0.422), 'Visual': {'Model': 'models/props/barrel_grey'}}, '1138324211.02dxschafe': 
{'Type': 'Barrel', 'Hpr': VBase3(9.236, 17.911, 0.0), 'Pos': Point3(489.671, -79.014, 7.904), 'Scale': VBase3(0.422, 0.422, 0.422), 'Visual': {'Model': 'models/props/barrel_sideways'}}, '1138324232.17dxschafe': {'Type': 'Barrel', 'Color': (1.0, 1.0, 1.0, 1.0), 'Hpr': VBase3(14.701, 0.0, 0.0), 'Pos': Point3(501.663, -105.41, 5.213), 'Scale': VBase3(0.422, 0.422, 0.422), 'Visual': {'Model': 'models/props/barrel_sideways'}}, '1138331394.53dxschafe': {'Type': 'Cart', 'Color': (0.6000000238418579, 0.6000000238418579, 0.6000000238418579, 1.0), 'Hpr': VBase3(35.962, 0.0, 0.0), 'Pos': Point3(510.222, -114.638, 5.213), 'Scale': VBase3(1.0, 1.0, 1.0), 'Visual': {'Model': 'models/props/cart_reg'}}, '1138332026.81dxschafe': {'Type': 'Sack', 'Color': (0.8999999761581421, 0.8999999761581421, 0.8999999761581421, 1.0), 'Hpr': VBase3(-173.18, -0.665, 0.011), 'Pos': Point3(435.783, -139.17, 5.15), 'Scale': VBase3(1.0, 1.0, 1.0), 'Visual': {'Model': 'models/props/sack_6stack'}}, '1138332353.31dxschafe': {'Type': 'Sack', 'Color': (0.5, 0.5, 0.5, 1.0), 'Hpr': VBase3(85.252, -0.794, -0.575), 'Pos': Point3(400.366, -91.939, 5.337), 'Scale': VBase3(0.719, 0.719, 0.719), 'Visual': {'Model': 'models/props/sack_6stack'}}, '1138389081.77dxschafe': {'Type': 'Bucket', 'Hpr': VBase3(0.001, -0.122, 0.548), 'Pos': Point3(402.276, -97.232, 5.754), 'Scale': VBase3(1.0, 1.0, 1.0), 'Visual': {'Model': 'models/props/bucket'}}, '1138389083.92dxschafe': {'Type': 'Bucket', 'Hpr': VBase3(0.001, -0.122, 0.548), 'Pos': Point3(348.513, -127.149, 4.365), 'Scale': VBase3(1.0, 1.0, 1.0), 'Visual': {'Model': 'models/props/bucket'}}, '1138389216.22dxschafe': {'Type': 'Rope', 'Color': (0.699999988079071, 0.699999988079071, 0.699999988079071, 1.0), 'Hpr': Point3(0.0, 0.0, 0.0), 'Pos': Point3(290.799, -126.977, 4.546), 'Scale': VBase3(1.0, 1.0, 1.0), 'Visual': {'Model': 'models/props/rope_pile'}}, '1138389261.84dxschafe': {'Type': 'Rope', 'Color': (0.699999988079071, 0.699999988079071, 0.699999988079071, 1.0), 'Hpr': Point3(0.0, 0.0, 0.0), 'Pos': Point3(409.238, -143.318, 5.306), 'Scale': VBase3(1.0, 1.0, 1.0), 'Visual': {'Model': 'models/props/rope_pile'}}, '1138389277.3dxschafe': {'Type': 'Rope', 'Color': (0.699999988079071, 0.699999988079071, 0.699999988079071, 1.0), 'Hpr': Point3(0.0, 0.0, 0.0), 'Pos': Point3(457.283, -162.514, 5.213), 'Scale': VBase3(1.0, 1.0, 1.0), 'Visual': {'Model': 'models/props/rope_pile'}}, '1138389335.77dxschafe': {'Type': 'Rope', 'Color': (0.800000011920929, 0.800000011920929, 0.800000011920929, 1.0), 'Hpr': VBase3(72.229, 0.0, 0.0), 'Pos': Point3(491.819, -157.3, 5.213), 'Scale': VBase3(1.0, 1.0, 1.0), 'Visual': {'Model': 'models/props/rope_pile'}}, '1138389396.72dxschafe': {'Type': 'Rope', 'Hpr': VBase3(72.054, 1.909, -3.887), 'Pos': Point3(429.96, 133.149, 46.522), 'Scale': VBase3(1.0, 1.0, 1.0), 'Visual': {'Model': 'models/props/rope_pile'}}, '1138400345.72dxschafe': {'Type': 'Creature', 'Hpr': VBase3(-102.243, 0.0, 0.0), 'Pos': Point3(-112.27, -313.908, 0.809), 'Respawns': True, 'Scale': VBase3(1.0, 1.0, 1.0), 'Species': 'Crab', 'Start State': 'Walk'}, '1138400763.67dxschafe': {'Type': 'Creature', 'Hpr': Point3(0.0, 0.0, 0.0), 'Pos': Point3(-27.049, -297.966, 1.728), 'Respawns': True, 'Scale': VBase3(1.0, 1.0, 1.0), 'Species': 'Crab', 'Start State': 'Walk'}, '1138400773.91dxschafe': {'Type': 'Creature', 'Hpr': VBase3(60.203, 0.0, 0.0), 'Pos': Point3(-154.66, -382.735, 0.31), 'Respawns': True, 'Scale': VBase3(1.0, 1.0, 1.0), 'Species': 'Crab', 'Start State': 'Walk'}, '1138400798.94dxschafe': {'Type': 
'Creature', 'Hpr': VBase3(-146.112, 0.0, 0.0), 'Pos': Point3(-171.326, -282.684, 1.224), 'Respawns': True, 'Scale': VBase3(1.0, 1.0, 1.0), 'Species': 'Crab', 'Start State': 'Walk'}, '1138404941.83dxschafe': {'Type': 'Bush', 'Hpr': VBase3(15.586, 0.0, 0.0), 'Pos': Point3(-220.551, -337.371, 0.489), 'Scale': VBase3(1.0, 1.0, 1.0), 'Visual': {'Model': 'models/vegetation/bush_d'}}, '1138404947.05dxschafe': {'Type': 'Bush', 'Hpr': VBase3(65.384, 0.0, 0.0), 'Pos': Point3(-190.659, -238.688, 1.513), 'Scale': VBase3(1.0, 1.0, 1.0), 'Visual': {'Model': 'models/vegetation/bush_a'}}, '1138404952.7dxschafe': {'Type': 'Bush', 'Hpr': VBase3(-69.085, 0.0, 0.0), 'Pos': Point3(-38.478, -188.329, -1.936), 'Scale': VBase3(1.0, 1.0, 1.0), 'Visual': {'Model': 'models/vegetation/bush_a'}}, '1138404955.5dxschafe': {'Type': 'Bush', 'Hpr': Point3(0.0, 0.0, 0.0), 'Pos': Point3(-7.687, -195.324, 1.944), 'Scale': VBase3(1.0, 1.0, 1.0), 'Visual': {'Model': 'models/vegetation/bush_a'}}, '1138404962.67dxschafe': {'Type': 'Bush', 'Hpr': VBase3(-27.452, 0.0, 0.0), 'Pos': Point3(-205.697, -290.742, 0.932), 'Scale': VBase3(1.0, 1.0, 1.0), 'Visual': {'Model': 'models/vegetation/bush_a'}}, '1138404965.13dxschafe': {'Type': 'Bush', 'Hpr': VBase3(-31.764, 10.781, 9.844), 'Pos': Point3(54.835, -213.632, -2.98), 'Scale': VBase3(1.306, 1.306, 1.306), 'Visual': {'Model': 'models/vegetation/bush_a'}}, '1138404990.83dxschafe': {'Type': 'Rock', 'Hpr': Point3(0.0, 0.0, 0.0), 'Objects': {}, 'Pos': Point3(-47.084, -197.603, 1.796), 'Scale': VBase3(1.969, 1.969, 1.969), 'Visual': {'Model': 'models/props/zz_dont_use_rocks_Dk_group_2F'}}, '1138404994.17dxschafe': {'Type': 'Rock', 'Hpr': VBase3(-12.507, 1.8, 0.0), 'Objects': {'1138406441.97dxschafe': {'Type': 'Tree', 'Hpr': VBase3(12.486, 2.246, 0.39), 'Pos': Point3(-0.926, 1.458, 0.952), 'Scale': VBase3(0.223, 0.223, 0.223), 'Visual': {'Model': 'models/vegetation/fern_tree_c'}}, '1138406484.72dxschafe': {'Type': 'Tree', 'Hpr': VBase3(24.771, 10.597, -4.865), 'Pos': Point3(-1.279, 0.092, 0.926), 'Scale': VBase3(0.146, 0.146, 0.146), 'Visual': {'Model': 'models/vegetation/fern_tree_c'}}, '1138406506.36dxschafe': {'Type': 'Tree', 'Hpr': VBase3(37.088, 9.163, 11.016), 'Pos': Point3(-0.876, 0.692, 1.181), 'Scale': VBase3(0.09, 0.09, 0.09), 'Visual': {'Model': 'models/vegetation/fern_tree_c'}}}, 'Pos': Point3(-70.138, -238.119, 1.684), 'Scale': VBase3(3.253, 3.253, 3.253), 'Visual': {'Model': 'models/props/zz_dont_use_rocks_Dk_group_1F'}}, '1138404996.56dxschafe': {'Type': 'Rock', 'Hpr': VBase3(34.668, 1.658, 1.066), 'Objects': {}, 'Pos': Point3(44.756, -219.042, 0.752), 'Scale': VBase3(1.762, 1.762, 1.762), 'Visual': {'Model': 'models/props/zz_dont_use_rocks_Dk_group_2F'}}, '1138404997.58dxschafe': {'Type': 'Rock', 'Color': (0.4000000059604645, 0.4000000059604645, 0.4000000059604645, 1.0), 'Hpr': VBase3(-33.902, -3.398, 17.03), 'Pos': Point3(84.245, -225.375, -4.251), 'Scale': VBase3(1.551, 1.551, 1.551), 'Visual': {'Model': 'models/props/zz_dont_use_rocks_LT_group_2F'}}, '1138405008.59dxschafe': {'Type': 'Rock', 'Hpr': Point3(0.0, 0.0, 0.0), 'Objects': {}, 'Pos': Point3(-216.403, -419.293, 1.086), 'Scale': VBase3(1.0, 1.0, 1.0), 'Visual': {'Model': 'models/props/zz_dont_use_rocks_Dk_group_1F'}}, '1138405009.48dxschafe': {'Type': 'Rock', 'Hpr': VBase3(128.202, 20.66, -15.518), 'Objects': {}, 'Pos': Point3(-211.436, -433.317, 0.125), 'Scale': VBase3(1.519, 1.519, 1.519), 'Visual': {'Model': 'models/props/zz_dont_use_rocks_Dk_group_2F'}}, '1138405014.53dxschafe': {'Type': 'Rock', 'Hpr': 
VBase3(-95.028, 0.0, 5.316), 'Objects': {}, 'Pos': Point3(-169.933, -234.205, 1.363), 'Scale': VBase3(2.298, 2.298, 2.298), 'Visual': {'Model': 'models/props/zz_dont_use_rocks_Dk_group_3F'}}, '1138405547.0dxschafe': {'Type': 'Bush', 'Hpr': VBase3(38.179, 0.0, 0.0), 'Pos': Point3(-76.377, -244.608, 2.26), 'Scale': VBase3(1.462, 1.462, 1.462), 'Visual': {'Model': 'models/vegetation/bush_b'}}, '1138405584.83dxschafe': {'Type': 'Bush', 'Hpr': VBase3(7.343, 0.0, 0.0), 'Pos': Point3(-79.53, -225.681, 2.428), 'Scale': VBase3(1.462, 1.462, 1.462), 'Visual': {'Model': 'models/vegetation/bush_c'}}, '1138405639.5dxschafe': {'Type': 'Bush', 'Hpr': VBase3(-51.937, 0.0, 0.0), 'Pos': Point3(-147.535, -229.85, 1.961), 'Scale': VBase3(1.228, 1.228, 1.228), 'Visual': {'Model': 'models/vegetation/bush_c'}}, '1138405669.55dxschafe': {'Type': 'Bush', 'Hpr': VBase3(102.799, 0.0, 0.0), 'Pos': Point3(-133.393, -233.609, 1.008), 'Scale': VBase3(1.228, 1.228, 1.228), 'Visual': {'Model': 'models/vegetation/bush_c'}}, '1138406264.66dxschafe': {'Type': 'Tree', 'Hpr': VBase3(0.0, 0.0, 5.151), 'Pos': Point3(-213.556, -387.863, 2.83), 'Scale': VBase3(0.774, 0.774, 0.774), 'Visual': {'Model': 'models/vegetation/fern_tree_e'}}, '1138406308.7dxschafe': {'Type': 'Tree', 'Color': (0.699999988079071, 0.699999988079071, 0.699999988079071, 1.0), 'Hpr': VBase3(-67.178, -12.457, 5.187), 'Pos': Point3(-214.423, -354.664, 4.023), 'Scale': VBase3(1.0, 1.0, 1.0), 'Visual': {'Model': 'models/vegetation/fern_tree_e'}}, '1138406395.17dxschafe': {'Type': 'Rock', 'Hpr': Point3(0.0, 0.0, 0.0), 'Pos': Point3(-224.082, -366.002, 4.032), 'Scale': VBase3(2.953, 2.953, 2.953), 'Visual': {'Model': 'models/props/zz_dont_use_rocks_Dk_group_3F'}}, '1138406690.55dxschafe': {'Type': 'Tree - Animated', 'Animate': 'models/vegetation/palm_trunk_a_idle', 'Hpr': VBase3(-62.829, -3.202, -1.475), 'Pos': Point3(-132.223, -356.378, 0.51), 'Scale': VBase3(1.185, 1.185, 1.185), 'SubObjs': {'Top Model': {'Visual': {'Animate': 'models/vegetation/palm_leaf_a_idle', 'Attach': ['trunk', 'def_trunk_attach'], 'Model': 'models/vegetation/palm_leaf_a_hi', 'PartName': 'leaf'}}}, 'Visual': {'Animate': 'models/vegetation/palm_trunk_a_idle', 'Model': 'models/vegetation/palm_trunk_a_hi', 'PartName': 'trunk'}}, '1138406692.61dxschafe': {'Type': 'Tree - Animated', 'Animate': 'models/vegetation/palm_trunk_a_idle', 'Hpr': VBase3(81.816, 4.927, -7.169), 'Pos': Point3(-138.884, -355.391, 0.965), 'Scale': VBase3(0.823, 0.823, 0.823), 'SubObjs': {'Top Model': {'Visual': {'Animate': 'models/vegetation/palm_leaf_a_idle', 'Attach': ['trunk', 'def_trunk_attach'], 'Model': 'models/vegetation/palm_leaf_a_hi', 'PartName': 'leaf'}}}, 'Visual': {'Animate': 'models/vegetation/palm_trunk_a_idle', 'Model': 'models/vegetation/palm_trunk_a_hi', 'PartName': 'trunk'}}, '1138406774.14dxschafe': {'Type': 'Bush', 'Color': (0.5, 0.5, 0.5, 1.0), 'Hpr': VBase3(0.0, 13.124, 0.0), 'Pos': Point3(-133.097, -353.011, -1.554), 'Scale': VBase3(1.0, 1.0, 1.0), 'Visual': {'Model': 'models/vegetation/bush_f'}}, '1138406881.92dxschafe': {'Type': 'Rock', 'Hpr': VBase3(44.795, -2.619, -2.635), 'Objects': {}, 'Pos': Point3(-135.435, -350.571, 0.294), 'Scale': VBase3(1.751, 1.751, 1.751), 'Visual': {'Model': 'models/props/zz_dont_use_rocks_LT_group_2F'}}, '1138407288.64dxschafe': {'Type': 'Rock', 'Hpr': VBase3(47.773, 0.0, 0.0), 'Pos': Point3(-0.083, -203.712, 2.472), 'Scale': VBase3(1.679, 1.679, 1.679), 'Visual': {'Model': 'models/props/zz_dont_use_rocks_Dk_group_2F'}}, '1138411180.17dxschafe': {'Type': 'Light - 
Dynamic', 'Attenuation': '0.15', 'Flickering': True, 'Hpr': Point3(0.0, 0.0, 0.0), 'Pos': Point3(410.187, 127.473, 52.13), 'Scale': VBase3(3.0, 3.0, 3.0), 'Visual': {'Color': (1, 1, 1, 1), 'Model': 'icons/icon_lightbulb'}}, '1138411274.68dxschafe': {'Type': 'Light - Dynamic', 'Attenuation': '0.15', 'Flickering': True, 'Hpr': Point3(0.0, 0.0, 0.0), 'Pos': Point3(618.33, 99.599, 55.781), 'Scale': VBase3(3.0, 3.0, 3.0), 'Visual': {'Color': (1, 1, 1, 1), 'Model': 'icons/icon_lightbulb'}}, '1138411280.35dxschafe': {'Type': 'Light - Dynamic', 'Attenuation': '0.15', 'Flickering': True, 'Hpr': Point3(0.0, 0.0, 0.0), 'Pos': Point3(585.969, 217.42, 58.998), 'Scale': VBase3(3.0, 3.0, 3.0), 'Visual': {'Color': (1, 1, 1, 1), 'Model': 'icons/icon_lightbulb'}}, '1138411455.04dxschafe': {'Type': 'Light - Dynamic', 'Attenuation': '0.15', 'Flickering': True, 'Hpr': Point3(0.0, 0.0, 0.0), 'Pos': Point3(438.89, 251.886, 58.379), 'Scale': VBase3(3.0, 3.0, 3.0), 'Visual': {'Color': (1, 1, 1, 1), 'Model': 'icons/icon_lightbulb'}}, '1138411521.86dxschafe': {'Type': 'Light - Dynamic', 'Attenuation': '0.15', 'Flickering': True, 'Hpr': Point3(0.0, 0.0, 0.0), 'Pos': Point3(463.064, 316.652, 61.517), 'Scale': VBase3(3.0, 3.0, 3.0), 'Visual': {'Color': (1, 1, 1, 1), 'Model': 'icons/icon_lightbulb'}}, '1138648022.36dxschafe': {'Type': 'Wall', 'Color': (0.699999988079071, 0.699999988079071, 0.699999988079071, 1.0), 'Hpr': VBase3(-32.727, -6.056, 12.983), 'Pos': Point3(249.263, 245.427, 65.713), 'Scale': VBase3(1.0, 1.0, 1.0), 'Visual': {'Model': 'models/props/pir_m_prp_fnc_wood20'}}, '1138657546.06dxschafe': {'Type': 'Tree', 'Hpr': VBase3(129.539, 0.0, 0.0), 'Pos': Point3(376.579, 214.601, 41.267), 'Scale': VBase3(1.737, 1.737, 1.737), 'Visual': {'Model': 'models/vegetation/gen_tree_b'}}, '1138695512.45jubutler': {'Type': 'Port Collision Sphere', 'Name': 'ArtPrototypePort', 'Hpr': Point3(0.0, 0.0, 0.0), 'Pos': Point3(353.718, -171.804, 0.0), 'Scale': VBase3(159.917, 159.917, 159.917), 'Visual': {'Color': (0.5, 0.5, 1.0, 1.0), 'Model': 'models/misc/smiley'}}, '1138721865.99dxschafe': {'Type': 'Light - Dynamic', 'Attenuation': '0.15', 'Flickering': True, 'Hpr': VBase3(-36.203, 0.0, 0.0), 'Pos': Point3(566.163, 282.06, 60.102), 'Scale': VBase3(3.0, 3.0, 3.0), 'Visual': {'Color': (1, 1, 1, 1), 'Model': 'icons/icon_lightbulb'}}, '1138722751.38dxschafe': {'Type': 'Building Exterior', 'Name': '', 'File': '', 'Hpr': VBase3(6.635, 0.0, 0.0), 'Objects': {'1138722751.38dxschafe0': {'Type': 'Locator Node', 'Name': 'portal_exterior_1', 'Hpr': VBase3(90.0, 0.0, 0.0), 'Pos': Point3(3.987, -20.033, 0.0), 'Scale': VBase3(1.0, 1.0, 1.0)}}, 'Pos': Point3(501.843, -69.266, 5.213), 'Scale': VBase3(1.0, 1.0, 1.0), 'Visual': {'Model': 'models/buildings/shanty_npc_house_combo_J'}}, '1138732475.17dxschafe': {'Type': 'Tree', 'Hpr': VBase3(-42.846, 6.187, 5.708), 'Pos': Point3(491.715, 37.317, 29.497), 'Scale': VBase3(1.0, 1.0, 1.0), 'Visual': {'Model': 'models/vegetation/fern_tree_a'}}, '1138732687.6dxschafe': {'Type': 'Tree', 'Color': (0.6000000238418579, 0.6000000238418579, 0.6000000238418579, 1.0), 'Hpr': VBase3(0.0, 3.802, 0.0), 'Pos': Point3(130.679, 327.734, 58.742), 'Scale': VBase3(1.0, 1.0, 1.0), 'Visual': {'Model': 'models/vegetation/gen_tree_b'}}, '1138732820.2dxschafe': {'Type': 'Tree', 'Color': (0.6000000238418579, 0.6000000238418579, 0.6000000238418579, 1.0), 'Hpr': VBase3(0.0, 3.802, 0.0), 'Pos': Point3(36.778, 222.26, 66.97), 'Scale': VBase3(1.0, 1.0, 1.0), 'Visual': {'Model': 'models/vegetation/gen_tree_b'}}, 
'1138732869.65dxschafe': {'Type': 'Tree', 'Color': (0.6000000238418579, 0.6000000238418579, 0.6000000238418579, 1.0), 'Hpr': VBase3(0.0, 3.802, 0.0), 'Pos': Point3(-14.902, 201.497, 53.892), 'Scale': VBase3(1.0, 1.0, 1.0), 'Visual': {'Model': 'models/vegetation/gen_tree_b'}}, '1138733108.16dxschafe': {'Type': 'Tree', 'Color': (0.6000000238418579, 0.6000000238418579, 0.6000000238418579, 1.0), 'Hpr': VBase3(0.0, 3.802, 0.0), 'Pos': Point3(268.083, 258.86, 60.562), 'Scale': VBase3(1.0, 1.0, 1.0), 'Visual': {'Model': 'models/vegetation/gen_tree_b'}}, '1138739839.69dxschafe': {'Type': 'Tree', 'Hpr': Point3(0.0, 0.0, 0.0), 'Pos': Point3(183.138, 317.303, 56.097), 'Scale': VBase3(0.911, 0.911, 0.911), 'Visual': {'Model': 'models/vegetation/gen_tree_d'}}, '1138740107.6dxschafe': {'Type': 'Tree', 'Hpr': VBase3(0.0, -1.363, 0.0), 'Pos': Point3(7.262, 211.095, 70.354), 'Scale': VBase3(0.795, 0.795, 0.795), 'Visual': {'Model': 'models/vegetation/gen_tree_d'}}, '1141416648.8sdnaik': {'Type': 'Townsperson', 'Category': 'Merchant', 'DNA': '1141416648.8sdnaik', 'Hpr': VBase3(-30.935, 0.0, 0.0), 'Pos': Point3(127.877, 182.944, 73.9), 'Respawns': True, 'Scale': VBase3(1.0, 1.0, 1.0), 'Start State': 'Idle'}, '1142037113.06dxschafe': {'Type': 'Locator Node', 'Name': 'portal_exterior_1', 'Hpr': VBase3(-18.331, 0.0, 0.0), 'Pos': Point3(-219.917, -319.235, 0.595), 'Scale': VBase3(1.0, 1.0, 1.0)}, '1142037113.09dxschafe': {'Type': 'Locator Node', 'Name': 'portal_exterior_2', 'Hpr': VBase3(68.97, 0.0, 0.0), 'Pos': Point3(-285.103, -58.817, 44.049), 'Scale': VBase3(1.0, 1.0, 1.0)}, '1142291141.63sdnaik': {'Type': 'Connector Tunnel', 'File': '', 'Hpr': Point3(0.0, 0.0, 0.0), 'Objects': {'1142291141.64sdnaik': {'Type': 'Locator Node', 'Name': 'portal_connector_1', 'Hpr': VBase3(-90.0, 0.0, 0.0), 'Pos': Point3(0.0, 0.0, 0.0), 'Scale': VBase3(1.0, 1.0, 1.0)}, '1142291141.66sdnaik': {'Type': 'Locator Node', 'Name': 'portal_connector_2', 'Hpr': VBase3(90.0, 0.0, 0.0), 'Pos': Point3(95.105, 150.0, 0.0), 'Scale': VBase3(1.0, 1.0, 1.0)}}, 'Pos': Point3(-591.709, -87.039, 43.78), 'Scale': VBase3(1.0, 1.0, 1.0), 'Visual': {'Model': 'models/tunnels/tunnel_cave_left'}}, '1142291275.14sdnaik': {'Type': 'Connector Tunnel', 'File': '', 'Hpr': Point3(0.0, 0.0, 0.0), 'Objects': {'1142291275.16sdnaik': {'Type': 'Locator Node', 'Name': 'portal_connector_1', 'Hpr': VBase3(-90.0, 0.0, 0.0), 'Pos': Point3(0.0, 0.0, 0.0), 'Scale': VBase3(1.0, 1.0, 1.0)}, '1142291275.17sdnaik': {'Type': 'Locator Node', 'Name': 'portal_connector_2', 'Hpr': VBase3(90.0, 0.0, 0.0), 'Pos': Point3(95.105, 150.0, 0.0), 'Scale': VBase3(1.0, 1.0, 1.0)}}, 'Pos': Point3(-935.593, -597.071, -10.001), 'Scale': VBase3(1.0, 1.0, 1.0), 'Visual': {'Model': 'models/tunnels/tunnel_cave_left'}}, '1142307096.22sdnaik': {'Type': 'Player Spawn Node', 'Hpr': Point3(0.0, 0.0, 0.0), 'Pos': Point3(263.509, -13.818, 5.515), 'Scale': VBase3(1.0, 1.0, 1.0), 'Spawnables': 'All'}, '1144362499.15jubutler': {'Type': 'Cell Portal Area', 'Name': 'cell_spanish_town', 'Hpr': Point3(0.0, 0.0, 0.0), 'Objects': {'1135281802.29dzlu': {'Type': 'bilgewater_town', 'Hpr': VBase3(0.001, -0.122, 0.548), 'Objects': {'1135282109.68dzlu': {'Type': 'Building Exterior', 'Name': '', 'File': '', 'Hpr': VBase3(-4.033, 0.147, -0.647), 'Objects': {'1135282109.68dzlu0': {'Type': 'Locator Node', 'Name': 'portal_exterior_1', 'Hpr': Point3(0.0, 0.0, 0.0), 'Pos': Point3(54.084, -22.35, 10.9), 'Scale': VBase3(1.0, 1.0, 1.0)}}, 'Pos': Point3(-10.487, 102.52, 54.754), 'Scale': VBase3(1.0, 1.0, 1.0), 'Visual': {'Model': 
'models/buildings/spanish_npc_house_j_exterior'}}, '1135282286.59dzlu': {'Type': 'Building Exterior', 'Name': '', 'File': '', 'Color': (0.800000011920929, 0.800000011920929, 0.800000011920929, 1.0), 'Hpr': VBase3(96.478, -3.524, -0.826), 'Objects': {'1135282286.59dzlu0': {'Type': 'Locator Node', 'Name': 'portal_exterior_1', 'Hpr': Point3(0.0, 0.0, 0.0), 'Pos': Point3(54.084, -22.35, 10.9), 'Scale': VBase3(1.0, 1.0, 1.0)}}, 'Pos': Point3(68.198, 50.878, 51.497), 'Scale': VBase3(1.0, 1.0, 1.0), 'Visual': {'Model': 'models/buildings/spanish_npc_house_i_exterior'}}, '1135285775.21dzlu': {'Type': 'Building Exterior', 'Name': '', 'File': '', 'Hpr': VBase3(-174.901, -1.856, -0.453), 'Objects': {'1135285775.21dzlu0': {'Type': 'Locator Node', 'Name': 'portal_exterior_1', 'Hpr': Point3(0.0, 0.0, 0.0), 'Pos': Point3(54.084, -22.35, 10.9), 'Scale': VBase3(1.0, 1.0, 1.0)}}, 'Pos': Point3(-25.436, -20.253, 45.951), 'Scale': VBase3(1.0, 1.0, 1.0), 'Visual': {'Model': 'models/buildings/spanish_npc_house_d_exterior'}}, '1135285791.23dzlu': {'Type': 'Building Exterior', 'Name': '', 'File': '', 'Hpr': VBase3(-86.804, 0.257, -0.477), 'Objects': {'1135285791.23dzlu0': {'Type': 'Locator Node', 'Name': 'portal_exterior_1', 'Hpr': Point3(0.0, 0.0, 0.0), 'Pos': Point3(54.084, -22.35, 10.9), 'Scale': VBase3(1.0, 1.0, 1.0)}}, 'Pos': Point3(128.052, -12.593, 50.806), 'Scale': VBase3(1.0, 1.0, 1.0), 'Visual': {'Model': 'models/buildings/spanish_npc_house_d_exterior'}}, '1135285802.19dzlu': {'Type': 'Building Exterior', 'Name': '', 'File': '', 'Hpr': VBase3(91.882, -2.064, 0.627), 'Objects': {'1135285802.19dzlu0': {'Type': 'Locator Node', 'Name': 'portal_exterior_1', 'Hpr': Point3(0.0, 0.0, 0.0), 'Pos': Point3(54.084, -22.35, 10.9), 'Scale': VBase3(1.0, 1.0, 1.0)}}, 'Pos': Point3(2.987, 2.381, 48.847), 'Scale': VBase3(1.0, 1.0, 1.0), 'Visual': {'Model': 'models/buildings/spanish_npc_house_n_exterior'}}, '1135287336.43dzlu': {'Type': 'Building Exterior', 'Name': '', 'File': '', 'Color': (0.8999999761581421, 0.8999999761581421, 0.8999999761581421, 1.0), 'Hpr': VBase3(-2.991, 0.0, 0.0), 'Objects': {'1135287336.43dzlu0': {'Type': 'Locator Node', 'Name': 'portal_exterior_1', 'Hpr': Point3(0.0, 0.0, 0.0), 'Pos': Point3(54.084, -22.35, 10.9), 'Scale': VBase3(1.0, 1.0, 1.0)}}, 'Pos': Point3(-106.985, -65.632, 44.261), 'Scale': VBase3(1.0, 1.0, 1.0), 'Visual': {'Model': 'models/buildings/spanish_npc_house_e_exterior'}}, '1135287679.84dzlu': {'Type': 'Building Exterior', 'Name': '', 'File': '', 'Hpr': VBase3(-5.258, 0.446, 0.01), 'Objects': {'1135287679.84dzlu0': {'Type': 'Locator Node', 'Name': 'portal_exterior_1', 'Hpr': Point3(0.0, 0.0, 0.0), 'Pos': Point3(54.084, -22.35, 10.9), 'Scale': VBase3(1.0, 1.0, 1.0)}}, 'Pos': Point3(105.929, 29.739, 52.525), 'Scale': VBase3(1.0, 1.0, 1.0), 'Visual': {'Model': 'models/buildings/spanish_npc_house_g_exterior'}}, '1135288077.32dzlu': {'Type': 'Building Exterior', 'Name': '', 'File': '', 'Hpr': VBase3(-162.14, -1.043, 0.95), 'Objects': {'1135288077.32dzlu0': {'Type': 'Locator Node', 'Name': 'portal_exterior_1', 'Hpr': Point3(0.0, 0.0, 0.0), 'Pos': Point3(54.084, -22.35, 10.9), 'Scale': VBase3(1.0, 1.0, 1.0)}}, 'Pos': Point3(68.853, -59.616, 44.33), 'Scale': VBase3(1.0, 1.0, 1.0), 'Visual': {'Model': 'models/buildings/spanish_npc_house_n_exterior'}}, '1135288180.4dzlu': {'Type': 'Building Exterior', 'Name': '', 'File': '', 'Hpr': VBase3(-2.43, -0.247, -0.174), 'Objects': {'1135288180.41dzlu': {'Type': 'Locator Node', 'Name': 'portal_exterior_1', 'Hpr': Point3(0.0, 0.0, 0.0), 'Pos': 
Point3(54.084, -22.35, 10.9), 'Scale': VBase3(1.0, 1.0, 1.0)}}, 'Pos': Point3(68.672, 28.213, 52.986), 'Scale': VBase3(1.0, 1.0, 1.0), 'Visual': {'Model': 'models/buildings/spanish_npc_house_i_exterior'}}, '1135289191.98dzlu': {'Type': 'Building Exterior', 'Name': '', 'File': '', 'Hpr': VBase3(111.244, -1.415, 0.064), 'Objects': {'1135289191.98dzlu0': {'Type': 'Locator Node', 'Name': 'portal_exterior_1', 'Hpr': Point3(0.0, 0.0, 0.0), 'Pos': Point3(54.084, -22.35, 10.9), 'Scale': VBase3(1.0, 1.0, 1.0)}}, 'Pos': Point3(2.89, -27.0, 46.214), 'Scale': VBase3(1.0, 1.0, 1.0), 'Visual': {'Model': 'models/buildings/spanish_npc_house_i_exterior'}}, '1135290323.54dzlu': {'Type': 'Building Exterior', 'Name': '', 'File': '', 'Hpr': VBase3(172.41, -1.842, 0.527), 'Objects': {'1135290323.54dzlu0': {'Type': 'Locator Node', 'Name': 'portal_exterior_1', 'Hpr': Point3(0.0, 0.0, 0.0), 'Pos': Point3(54.084, -22.35, 10.9), 'Scale': VBase3(1.0, 1.0, 1.0)}}, 'Pos': Point3(38.629, -103.057, 42.916), 'Scale': VBase3(1.0, 1.0, 1.0), 'Visual': {'Model': 'models/buildings/spanish_npc_house_e_exterior'}}, '1135290764.99dzlu': {'Type': 'Building Exterior', 'File': '', 'Hpr': VBase3(67.39, 0.789, -1.31), 'Pos': Point3(85.786, -77.147, 44.31), 'Scale': VBase3(1.0, 1.0, 1.0), 'Visual': {'Model': 'models/props/pir_m_prp_tnt_group6_market'}}, '1135971052.31dzlu': {'Type': 'Building Exterior', 'Name': '', 'File': '', 'Hpr': VBase3(174.045, -0.174, 0.146), 'Objects': {'1135971052.31dzlu0': {'Type': 'Locator Node', 'Name': 'portal_exterior_1', 'Hpr': Point3(0.0, 0.0, 0.0), 'Pos': Point3(54.084, -22.35, 10.9), 'Scale': VBase3(1.0, 1.0, 1.0)}}, 'Pos': Point3(13.984, 28.764, 50.388), 'Scale': VBase3(1.0, 1.0, 1.0), 'Visual': {'Name': '', 'Model': 'models/buildings/spanish_npc_house_l_exterior'}}, '1135971384.22dzlu': {'Type': 'Building Exterior', 'Name': '', 'File': '', 'Color': (0.8999999761581421, 0.8999999761581421, 0.8999999761581421, 1.0), 'Hpr': VBase3(72.321, 0.0, 0.0), 'Objects': {'1135971384.22dzlu0': {'Type': 'Locator Node', 'Name': 'portal_exterior_1', 'Hpr': Point3(0.0, 0.0, 0.0), 'Pos': Point3(54.084, -22.35, 10.9), 'Scale': VBase3(1.0, 1.0, 1.0)}}, 'Pos': Point3(-116.928, -22.13, 44.597), 'Scale': VBase3(1.0, 1.0, 1.0), 'Visual': {'Color': (1.0, 1.0, 0.75, 1.0), 'Model': 'models/buildings/spanish_npc_house_p_exterior'}}, '1136336439.97dzlu': {'Type': 'Building Exterior', 'Name': '', 'File': '', 'Hpr': VBase3(-2.617, -0.553, 1.759), 'Objects': {'1136336439.97dzlu0': {'Type': 'Locator Node', 'Name': 'portal_exterior_1', 'Hpr': VBase3(90.0, 0.0, 0.0), 'Pos': Point3(3.987, -20.033, 0.0), 'Scale': VBase3(1.0, 1.0, 1.0)}}, 'Pos': Point3(-93.619, 20.475, 46.158), 'Scale': VBase3(1.0, 1.0, 1.0), 'Visual': {'Model': 'models/buildings/spanish_npc_house_i_exterior'}}, '1136337021.82dzlu': {'Type': 'Building Exterior', 'Name': '', 'File': '', 'Hpr': VBase3(-32.272, 0.396, -0.398), 'Objects': {'1136337021.82dzlu0': {'Type': 'Locator Node', 'Name': 'portal_exterior_1', 'Hpr': VBase3(90.0, 0.0, 0.0), 'Pos': Point3(3.987, -20.033, 0.0), 'Scale': VBase3(1.0, 1.0, 1.0)}}, 'Pos': Point3(47.801, 86.975, 54.394), 'Scale': VBase3(1.0, 1.0, 1.0), 'Visual': {'Model': 'models/buildings/spanish_npc_house_g_exterior'}}, '1136338230.04dzlu': {'Type': 'Barrel', 'Hpr': VBase3(28.818, -0.934, -88.39), 'Pos': Point3(-15.3, -90.787, 42.276), 'Scale': VBase3(0.794, 0.794, 0.794), 'Visual': {'Model': 'models/props/barrel'}}, '1136338558.54dzlu': {'Type': 'Barrel', 'Hpr': VBase3(0.0, 0.122, -0.548), 'Pos': Point3(-13.592, -92.661, 40.687), 'Scale': 
VBase3(0.773, 0.773, 0.773), 'Visual': {'Model': 'models/props/barrel'}}, '1136338586.88dzlu': {'Type': 'Barrel', 'Hpr': VBase3(-0.007, -0.627, -0.548), 'Pos': Point3(-10.31, -95.339, 40.714), 'Scale': VBase3(0.588, 0.588, 0.588), 'Visual': {'Model': 'models/props/barrel'}}, '1136338641.41dzlu': {'Type': 'Barrel', 'Hpr': VBase3(0.0, 0.122, -0.527), 'Pos': Point3(15.66, -36.665, 46.357), 'Scale': VBase3(0.528, 0.528, 0.528), 'Visual': {'Model': 'models/props/barrel'}}, '1136338940.62dzlu': {'Type': 'Building Exterior', 'Name': '', 'File': '', 'Hpr': VBase3(-179.196, 0.0, 0.0), 'Objects': {}, 'Pos': Point3(71.276, -136.421, 43.798), 'Scale': VBase3(1.0, 1.0, 1.0), 'Visual': {'Model': 'models/buildings/spanish_npc_house_e_exterior'}}, '1136339759.96dzlu': {'Type': 'Tree', 'Hpr': VBase3(0.0, 0.122, -0.548), 'Pos': Point3(102.244, 55.115, 51.6), 'Scale': VBase3(1.0, 1.0, 1.0), 'Visual': {'Model': 'models/vegetation/gen_tree_a'}}, '1136339824.68dzlu': {'Type': 'Tree - Animated', 'Hpr': VBase3(108.822, -0.558, 0.061), 'Pos': Point3(32.914, -91.904, 41.113), 'Scale': VBase3(1.0, 1.0, 1.0), 'SubObjs': {'Top Model': {'Visual': {'Animate': 'models/vegetation/palm_leaf_a_idle', 'Attach': ['trunk', 'def_trunk_attach'], 'Model': 'models/vegetation/palm_leaf_a_hi', 'PartName': 'leaf'}}}, 'Visual': {'Animate': 'models/vegetation/palm_trunk_a_idle', 'Model': 'models/vegetation/palm_trunk_a_hi', 'PartName': 'trunk'}}, '1136339875.1dzlu': {'Type': 'Tree - Animated', 'Hpr': VBase3(0.027, 2.901, -0.549), 'Pos': Point3(50.814, -95.244, 41.006), 'Scale': VBase3(0.826, 0.826, 0.826), 'SubObjs': {'Top Model': {'Visual': {'Animate': 'models/vegetation/palm_leaf_a_idle', 'Attach': ['trunk', 'def_trunk_attach'], 'Model': 'models/vegetation/palm_leaf_a_hi', 'PartName': 'leaf'}}}, 'Visual': {'Animate': 'models/vegetation/palm_trunk_a_idle', 'Model': 'models/vegetation/palm_trunk_a_hi', 'PartName': 'trunk'}}, '1136340207.4dzlu': {'Type': 'Building Exterior', 'File': '', 'Hpr': VBase3(-117.778, -0.627, 1.792), 'Pos': Point3(62.673, -100.483, 43.136), 'Scale': VBase3(1.0, 1.0, 1.0), 'Visual': {'Name': '', 'Model': 'models/buildings/burned_gate'}}, '1136340387.18dzlu': {'Type': 'Crate', 'Color': (0.8999999761581421, 0.8999999761581421, 0.699999988079071, 1.0), 'Hpr': VBase3(20.659, 2.457, -1.511), 'Objects': {}, 'Pos': Point3(-82.833, -76.825, 43.9), 'Scale': VBase3(0.572, 0.572, 0.572), 'Visual': {'Model': 'models/props/crate'}}, '1136340454.07dzlu': {'Type': 'Crate', 'Color': (0.75, 0.9300000071525574, 1.0, 1.0), 'Hpr': VBase3(9.204, 0.795, -0.684), 'Pos': Point3(-87.117, -76.997, 44.543), 'Scale': VBase3(0.555, 0.555, 0.555), 'Visual': {'Color': (0.6, 0.6, 0.6, 1.0), 'Model': 'models/props/crate'}}, '1136340700.6dzlu': {'Type': 'Fountain', 'Hpr': Point3(0.0, 0.0, 0.0), 'Pos': Point3(-42.992, 37.831, 50.448), 'Scale': VBase3(0.646, 0.646, 0.646), 'Visual': {'Model': 'models/props/spanishtown_fountain'}}, '1136340768.34dzlu': {'Type': 'Well', 'Hpr': VBase3(-33.998, 0.0, 0.0), 'Pos': Point3(84.608, -5.034, 49.711), 'Scale': VBase3(1.0, 1.0, 1.0), 'Visual': {'Model': 'models/props/wellA'}}, '1136404083.2dzlu': {'Type': 'Tree', 'Hpr': VBase3(35.876, 0.0, 0.0), 'Pos': Point3(-156.382, 15.922, 52.722), 'Scale': VBase3(1.0, 1.0, 1.0), 'Visual': {'Model': 'models/vegetation/gen_tree_a'}}, '1136404682.97dzlu': {'Type': 'Tree', 'Hpr': Point3(0.0, 0.0, 0.0), 'Pos': Point3(-124.701, -39.984, 43.146), 'Scale': VBase3(0.322, 0.322, 0.322), 'Visual': {'Model': 'models/vegetation/gen_tree_e'}}, '1136419185.55dzlu': {'Type': 'Tree - 
Animated', 'Hpr': VBase3(-33.403, 0.807, -0.781), 'Pos': Point3(140.71, -149.706, 36.893), 'Scale': VBase3(1.162, 1.162, 1.162), 'SubObjs': {'Top Model': {'Visual': {'Animate': 'models/vegetation/palm_leaf_a_idle', 'Attach': ['trunk', 'def_trunk_attach'], 'Color': (0.7942, 0.882, 0.724, 1.0), 'Model': 'models/vegetation/palm_leaf_c_hi', 'PartName': 'leaf', 'Scale': VBase3(0.682, 0.682, 0.682)}}}, 'Visual': {'Animate': 'models/vegetation/palm_trunk_a_idle', 'Color': (0.97, 0.88, 0.97, 1.0), 'Model': 'models/vegetation/palm_trunk_a_hi', 'PartName': 'trunk'}}, '1136419382.98dzlu': {'Type': 'Tree', 'Hpr': VBase3(-8.086, 0.14, -0.178), 'Pos': Point3(140.884, -46.087, 46.564), 'Scale': VBase3(0.828, 0.828, 0.828), 'Visual': {'Model': 'models/vegetation/gen_tree_d'}}, '1136419587.27dzlu': {'Type': 'Bush', 'Hpr': VBase3(-90.934, 1.655, 1.773), 'Pos': Point3(50.091, 61.996, 51.177), 'Scale': VBase3(0.668, 0.668, 0.668), 'Visual': {'Model': 'models/vegetation/bush_c'}}}, 'Pos': Point3(513.025, 209.753, -0.951), 'Scale': VBase3(1.0, 1.0, 1.0), 'Visual': {'Model': 'models/town/bilgewater_town'}}, '1135285783.04dzlu': {'Type': 'Building Exterior', 'Name': '', 'File': '', 'Hpr': VBase3(-2.263, 0.0, 0.0), 'Objects': {'1135285783.04dzlu0': {'Type': 'Locator Node', 'Name': 'portal_exterior_1', 'Hpr': Point3(0.0, 0.0, 0.0), 'Pos': Point3(54.084, -22.35, 10.9), 'Scale': VBase3(1.0, 1.0, 1.0)}}, 'Pos': Point3(459.46, 334.557, 53.979), 'Scale': VBase3(1.0, 1.0, 1.0), 'Visual': {'Name': '', 'Model': 'models/buildings/spanish_npc_house_i_exterior'}}, '1135286034.37dzlu': {'Type': 'Building Exterior', 'Name': '', 'File': '', 'Hpr': VBase3(77.954, 0.0, 0.0), 'Objects': {'1135286034.37dzlu0': {'Type': 'Locator Node', 'Name': 'portal_exterior_1', 'Hpr': Point3(0.0, 0.0, 0.0), 'Pos': Point3(54.084, -22.35, 10.9), 'Scale': VBase3(1.0, 1.0, 1.0)}}, 'Pos': Point3(426.603, 315.829, 54.025), 'Scale': VBase3(1.0, 1.0, 1.0), 'Visual': {'Model': 'models/buildings/spanish_npc_house_k_exterior'}}, '1136336848.57dzlu': {'Type': 'Building Exterior', 'Name': '', 'File': '', 'Hpr': VBase3(75.549, 0.5, 0.255), 'Objects': {'1136336848.57dzlu0': {'Type': 'Locator Node', 'Name': 'portal_exterior_1', 'Hpr': VBase3(90.0, 0.0, 0.0), 'Pos': Point3(3.987, -20.033, 0.0), 'Scale': VBase3(1.0, 1.0, 1.0)}}, 'Pos': Point3(428.219, 263.119, 50.163), 'Scale': VBase3(1.0, 1.0, 1.0), 'Visual': {'Model': 'models/buildings/spanish_npc_house_a_exterior'}}, '1136339970.12dzlu': {'Type': 'Tree - Animated', 'Hpr': VBase3(-123.565, 0.0, 0.0), 'Pos': Point3(374.587, 141.25, 41.661), 'Scale': VBase3(1.0, 1.0, 1.0), 'SubObjs': {'Top Model': {'Visual': {'Animate': 'models/vegetation/palm_leaf_a_idle', 'Attach': ['trunk', 'def_trunk_attach'], 'Model': 'models/vegetation/palm_leaf_b_hi', 'PartName': 'leaf'}}}, 'Visual': {'Animate': 'models/vegetation/palm_trunk_a_idle', 'Model': 'models/vegetation/palm_trunk_a_hi', 'PartName': 'trunk'}}, '1136340427.54dzlu': {'Type': 'Crate', 'Color': (0.75, 0.9300000071525574, 1.0, 1.0), 'Hpr': VBase3(-9.441, 1.319, 0.219), 'Pos': Point3(427.222, 129.383, 43.836), 'Scale': VBase3(0.529, 0.529, 0.529), 'Visual': {'Color': (0.75, 0.93, 1.0, 1.0), 'Model': 'models/props/crate'}}, '1136419266.19dzlu': {'Type': 'Tree', 'Hpr': VBase3(0.0, 0.0, 0.0), 'Pos': Point3(646.795, 94.052, 42.561), 'Scale': VBase3(0.604, 0.604, 0.604), 'Visual': {'Model': 'models/vegetation/gen_tree_a'}}, '1136419312.06dzlu': {'Type': 'Tree', 'Hpr': VBase3(-8.086, -0.058, 0.348), 'Pos': Point3(650.056, 129.578, 44.327), 'Scale': VBase3(0.604, 0.604, 0.604), 
'Visual': {'Model': 'models/vegetation/gen_tree_d'}}, '1137613686.39dxschafe': {'Type': 'Bush', 'Hpr': VBase3(168.669, 0.42, 6.905), 'Pos': Point3(575.801, 78.805, 41.537), 'Scale': VBase3(0.743, 0.743, 0.743), 'Visual': {'Model': 'models/vegetation/bush_f'}}, '1137613748.45dxschafe': {'Type': 'Bush', 'Hpr': VBase3(-152.41, -4.661, 1.499), 'Pos': Point3(527.16, 70.568, 40.087), 'Scale': VBase3(1.01, 1.01, 1.01), 'Visual': {'Model': 'models/vegetation/bush_b'}}, '1137613781.44dxschafe': {'Type': 'Bush', 'Color': (1.0, 0.6000000238418579, 0.800000011920929, 1.0), 'Hpr': VBase3(139.966, 0.0, 0.282), 'Pos': Point3(525.399, 73.541, 40.263), 'Scale': VBase3(1.01, 1.01, 1.01), 'Visual': {'Model': 'models/vegetation/bush_h'}}, '1137613814.03dxschafe': {'Type': 'Bush', 'Hpr': VBase3(44.172, 0.0, 0.0), 'Pos': Point3(538.27, 101.971, 41.442), 'Scale': VBase3(0.518, 0.518, 0.518), 'Visual': {'Model': 'models/vegetation/bush_a'}}, '1137804872.88dxschafe': {'Type': 'LaundryRope', 'Hpr': VBase3(77.156, 0.0, 0.0), 'Pos': Point3(404.656, 165.025, 42.997), 'Scale': VBase3(1.0, 1.0, 1.0), 'Visual': {'Color': (1.0, 1.0, 1.0, 1.0), 'Model': 'models/props/LaundryRope'}}, '1137804943.44dxschafe': {'Type': 'LaundryRope', 'Hpr': VBase3(75.678, 0.0, 0.0), 'Pos': Point3(640.114, 96.835, 42.703), 'Scale': VBase3(0.67, 0.67, 0.67), 'Visual': {'Model': 'models/props/LaundryRope'}}, '1137805186.67dxschafe': {'Type': 'LaundryRope', 'Hpr': VBase3(91.633, 0.0, 0.0), 'Pos': Point3(505.019, 154.374, 39.095), 'Scale': VBase3(1.0, 1.0, 1.0), 'Visual': {'Model': 'models/props/LaundryRope'}}, '1137815176.44dxschafe': {'Type': 'Bush', 'Color': (0.699999988079071, 0.699999988079071, 0.699999988079071, 1.0), 'Hpr': Point3(0.0, 0.0, 0.0), 'Pos': Point3(524.834, 193.068, 48.362), 'Scale': VBase3(1.0, 1.0, 1.0), 'Visual': {'Model': 'models/vegetation/bush_h'}}, '1138235338.59dxschafe': {'Type': 'Cart', 'Hpr': VBase3(151.911, 0.0, 0.0), 'Pos': Point3(387.425, 137.694, 44.575), 'Scale': VBase3(1.0, 1.0, 1.0), 'Visual': {'Model': 'models/props/cart_reg'}}, '1138236384.59dxschafe': {'Type': 'Building Exterior', 'Name': '', 'File': '', 'Hpr': VBase3(178.804, 0.0, 0.0), 'Objects': {}, 'Pos': Point3(538.137, 72.645, 41.113), 'Scale': VBase3(1.0, 1.0, 1.0), 'Visual': {'Model': 'models/buildings/spanish_npc_house_n_exterior'}}, '1138237076.48dxschafe': {'Type': 'Well', 'Hpr': VBase3(70.739, 0.0, 0.0), 'Pos': Point3(437.947, 208.556, 46.21), 'Scale': VBase3(1.0, 1.0, 1.0), 'Visual': {'Model': 'models/props/wellA'}}, '1138331085.67dxschafe': {'Type': 'Building Exterior', 'Name': '', 'File': '', 'Hpr': Point3(0.0, 0.0, 0.0), 'Objects': {'1138331085.67dxschafe0': {'Type': 'Locator Node', 'Name': 'portal_exterior_1', 'Hpr': VBase3(-90.0, 0.0, 0.0), 'Pos': Point3(26.442, -16.638, 0.0), 'Scale': VBase3(1.0, 1.0, 1.0)}}, 'Pos': Point3(485.041, 136.057, 45.27), 'Scale': VBase3(1.0, 1.0, 1.0), 'Visual': {'Model': 'models/buildings/spanish_tavern_exterior'}}, '1138333953.2dxschafe': {'Type': 'TreeBase', 'Color': (0.8999999761581421, 0.8999999761581421, 0.699999988079071, 1.0), 'Hpr': VBase3(23.584, 0.0, 0.0), 'Pos': Point3(572.858, 207.144, 46.709), 'Scale': VBase3(1.238, 1.238, 1.238), 'Visual': {'Model': 'models/props/TreeBase'}}, '1138333987.34dxschafe': {'Type': 'TreeBase', 'Color': (0.8999999761581421, 0.8999999761581421, 0.699999988079071, 1.0), 'Hpr': Point3(0.0, 0.0, 0.0), 'Pos': Point3(594.045, 182.287, 45.857), 'Scale': VBase3(1.112, 1.112, 1.112), 'Visual': {'Model': 'models/props/TreeBase'}}, '1138334081.11dxschafe': {'Type': 'Tree - 
Animated', 'Animate': 'models/vegetation/tree_b_trunk_idle', 'Hpr': Point3(0.0, 0.0, 0.0), 'Pos': Point3(572.745, 207.118, 42.342), 'Scale': VBase3(0.981, 0.981, 0.981), 'SubObjs': {'Top Model': {'Visual': {'Animate': 'models/vegetation/palm_leaf_a_idle', 'Attach': ['trunk', 'def_trunk_attach'], 'Model': 'models/vegetation/tree_b_leaf_hi', 'PartName': 'leaf'}}}, 'Visual': {'Animate': 'models/vegetation/palm_trunk_a_idle', 'Model': 'models/vegetation/tree_b_trunk_hi', 'PartName': 'trunk'}}, '1138334199.27dxschafe': {'Type': 'Tree - Animated', 'Animate': 'models/vegetation/tree_b_trunk_idle', 'Hpr': Point3(0.0, 0.0, 0.0), 'Pos': Point3(594.059, 182.306, 41.11), 'Scale': VBase3(1.0, 1.0, 1.0), 'SubObjs': {'Top Model': {'Visual': {'Animate': 'models/vegetation/palm_leaf_a_idle', 'Attach': ['trunk', 'def_trunk_attach'], 'Model': 'models/vegetation/tree_b_leaf_hi', 'PartName': 'leaf'}}}, 'Visual': {'Animate': 'models/vegetation/palm_trunk_a_idle', 'Model': 'models/vegetation/tree_b_trunk_hi', 'PartName': 'trunk'}}, '1138388039.38dxschafe': {'Type': 'Barrel', 'Hpr': Point3(0.0, 0.0, 0.0), 'Pos': Point3(512.194, 138.211, 45.305), 'Scale': VBase3(0.752, 0.752, 0.752), 'Visual': {'Model': 'models/props/barrel_group_1'}}, '1138388168.08dxschafe': {'Type': 'Barrel', 'Hpr': Point3(0.0, 0.0, 0.0), 'Pos': Point3(490.436, 123.104, 45.373), 'Scale': VBase3(0.685, 0.685, 0.685), 'Visual': {'Model': 'models/props/barrel_group_1'}}, '1138388531.19dxschafe': {'Type': 'Bucket', 'Hpr': VBase3(0.0, 2.32, 0.0), 'Pos': Point3(504.001, 118.66, 45.292), 'Scale': VBase3(1.0, 1.0, 1.0), 'Visual': {'Model': 'models/props/bucket'}}, '1138388588.09dxschafe': {'Type': 'Bucket', 'Color': (0.8999999761581421, 0.8999999761581421, 0.699999988079071, 1.0), 'Hpr': VBase3(0.0, 0.146, 0.0), 'Pos': Point3(592.426, 204.037, 48.075), 'Scale': VBase3(1.0, 1.0, 1.0), 'Visual': {'Model': 'models/props/bucket'}}, '1138388642.84dxschafe': {'Type': 'Cart', 'Hpr': VBase3(80.098, 0.0, 8.622), 'Pos': Point3(564.07, 206.389, 48.116), 'Scale': VBase3(1.0, 1.0, 1.0), 'Visual': {'Model': 'models/props/cart_reg'}}, '1138389026.84dxschafe': {'Type': 'Bucket', 'Hpr': VBase3(79.256, -4.032, -5.49), 'Pos': Point3(440.247, 209.653, 49.085), 'Scale': VBase3(0.852, 0.852, 0.852), 'Visual': {'Model': 'models/props/bucket'}}, '1138389053.84dxschafe': {'Type': 'Bucket', 'Hpr': Point3(0.0, 0.0, 0.0), 'Pos': Point3(575.592, 114.787, 45.049), 'Scale': VBase3(1.0, 1.0, 1.0), 'Visual': {'Model': 'models/props/bucket'}}, '1138389054.97dxschafe': {'Type': 'Bucket', 'Hpr': Point3(0.0, 0.0, 0.0), 'Pos': Point3(548.444, 117.223, 42.821), 'Scale': VBase3(1.0, 1.0, 1.0), 'Visual': {'Model': 'models/props/bucket'}}, '1138389058.48dxschafe': {'Type': 'Bucket', 'Hpr': Point3(0.0, 0.0, 0.0), 'Pos': Point3(577.119, 136.791, 43.012), 'Scale': VBase3(1.0, 1.0, 1.0), 'Visual': {'Model': 'models/props/bucket'}}, '1138389506.56dxschafe': {'Type': 'Crate', 'Color': (0.8999999761581421, 0.8999999761581421, 0.699999988079071, 1.0), 'Hpr': VBase3(22.633, 3.523, -0.965), 'Objects': {'1138727460.57dxschafe': {'Type': 'ChickenCage', 'Hpr': VBase3(19.053, -3.017, 2.06), 'Pos': Point3(-0.013, 0.194, 5.431), 'Scale': VBase3(1.384, 1.384, 1.384), 'Visual': {'Model': 'models/props/ChickenCage'}}}, 'Pos': Point3(450.134, 87.338, 40.222), 'Scale': VBase3(0.723, 0.723, 0.723), 'Visual': {'Model': 'models/props/crate'}}, '1138389517.22dxschafe': {'Type': 'Crate', 'Color': (0.6000000238418579, 0.6000000238418579, 0.6000000238418579, 1.0), 'Hpr': VBase3(41.741, 3.475, 0.0), 'Objects': {}, 'Pos': 
Point3(454.794, 88.86, 40.303), 'Scale': VBase3(1.147, 1.147, 1.147), 'Visual': {'Model': 'models/props/crate'}}, '1138389571.83dxschafe': {'Type': 'Cart', 'Hpr': VBase3(73.631, 3.467, 0.0), 'Pos': Point3(503.413, 241.128, 48.861), 'Scale': VBase3(1.0, 1.0, 1.0), 'Visual': {'Model': 'models/props/cart_reg'}}, '1138389731.03dxschafe': {'Type': 'ChickenCage', 'Hpr': VBase3(4.15, 0.0, 0.0), 'Pos': Point3(599.079, 186.519, 47.456), 'Scale': VBase3(1.0, 1.0, 1.0), 'Visual': {'Model': 'models/props/ChickenCage'}}, '1138389735.67dxschafe': {'Type': 'ChickenCage', 'Color': (0.800000011920929, 0.800000011920929, 0.800000011920929, 1.0), 'Hpr': Point3(0.0, 0.0, 0.0), 'Pos': Point3(595.842, 187.738, 48.05), 'Scale': VBase3(1.0, 1.0, 1.0), 'Visual': {'Model': 'models/props/ChickenCage'}}, '1138389791.17dxschafe': {'Type': 'Bush', 'Hpr': Point3(0.0, 0.0, 0.0), 'Pos': Point3(593.748, 182.392, 48.47), 'Scale': VBase3(0.375, 0.375, 0.375), 'Visual': {'Model': 'models/vegetation/bush_a'}}, '1138389895.67dxschafe': {'Type': 'Bush', 'Color': (0.6000000238418579, 0.6000000238418579, 0.6000000238418579, 1.0), 'Hpr': VBase3(-55.429, 0.0, 0.0), 'Pos': Point3(585.666, 227.636, 51.086), 'Scale': VBase3(0.48, 0.48, 0.48), 'Visual': {'Model': 'models/vegetation/bush_h'}}, '1138389900.19dxschafe': {'Type': 'Bush', 'Hpr': VBase3(0.0, 1.049, 0.0), 'Pos': Point3(572.482, 207.237, 48.801), 'Scale': VBase3(0.376, 0.376, 0.376), 'Visual': {'Model': 'models/vegetation/bush_a'}}, '1138390780.11dxschafe': {'Type': 'Trellis', 'Hpr': VBase3(2.925, 0.0, 0.0), 'Pos': Point3(568.578, 66.501, 41.125), 'Scale': VBase3(0.538, 0.538, 0.538), 'Visual': {'Model': 'models/props/trellisB'}}, '1138391578.09dxschafe': {'Type': 'Bucket', 'Hpr': Point3(0.0, 0.0, 0.0), 'Pos': Point3(431.776, 263.947, 50.251), 'Scale': VBase3(1.0, 1.0, 1.0), 'Visual': {'Model': 'models/props/bucket'}}, '1138391763.7dxschafe': {'Type': 'ChickenCage', 'Hpr': VBase3(6.674, 0.0, 0.0), 'Pos': Point3(461.236, 238.953, 49.824), 'Scale': VBase3(1.0, 1.0, 1.0), 'Visual': {'Model': 'models/props/ChickenCage'}}, '1138391801.88dxschafe': {'Type': 'TreeBase', 'Color': (1.0, 0.9599999785423279, 0.75, 1.0), 'Hpr': Point3(0.0, 0.0, 0.0), 'Pos': Point3(508.499, 281.587, 54.108), 'Scale': VBase3(1.0, 1.0, 1.0), 'Visual': {'Model': 'models/props/TreeBase'}}, '1138391837.27dxschafe': {'Type': 'TreeBase', 'Color': (1.0, 0.9599999785423279, 0.75, 1.0), 'Hpr': VBase3(-2.634, 0.0, 0.0), 'Pos': Point3(461.183, 283.354, 54.056), 'Scale': VBase3(1.0, 1.0, 1.0), 'Visual': {'Model': 'models/props/TreeBase'}}, '1138391934.66dxschafe': {'Type': 'Crate', 'Color': (0.75, 0.9300000071525574, 1.0, 1.0), 'Hpr': VBase3(-24.988, 0.0, 0.0), 'Pos': Point3(482.051, 122.569, 45.292), 'Scale': VBase3(0.682, 0.682, 0.682), 'Visual': {'Color': (0.79, 0.73, 0.66, 1.0), 'Model': 'models/props/crate'}}, '1138392072.44dxschafe': {'Type': 'Building Exterior', 'Name': '', 'File': '', 'Hpr': VBase3(-162.926, -1.617, -0.543), 'Objects': {'1138392072.44dxschafe0': {'Type': 'Locator Node', 'Name': 'portal_exterior_1', 'Hpr': VBase3(90.0, 0.0, 0.0), 'Pos': Point3(3.987, -20.033, 0.0), 'Scale': VBase3(1.0, 1.0, 1.0)}}, 'Pos': Point3(617.634, 79.446, 42.585), 'Scale': VBase3(1.0, 1.0, 1.0), 'Visual': {'Model': 'models/buildings/spanish_npc_house_o_exterior'}}, '1138393950.38dxschafe': {'Type': 'Wall', 'Hpr': VBase3(86.893, 0.0, 0.0), 'Pos': Point3(655.648, 170.407, 47.136), 'Scale': VBase3(1.0, 1.0, 1.0), 'Visual': {'Model': 'models/buildings/pir_m_bld_wal_stuccoTallColumn'}}, '1138394586.5dxschafe': {'Type': 
'LaundryRope', 'Hpr': VBase3(-83.031, 0.0, 0.0), 'Pos': Point3(645.338, 177.843, 47.23), 'Scale': VBase3(1.0, 1.0, 1.0), 'Visual': {'Model': 'models/props/LaundryRope'}}, '1138394804.84dxschafe': {'Type': 'Barrel', 'Color': (0.699999988079071, 0.699999988079071, 0.699999988079071, 1.0), 'Hpr': Point3(0.0, 0.0, 0.0), 'Pos': Point3(540.757, 109.26, 41.791), 'Scale': VBase3(0.692, 0.692, 0.692), 'Visual': {'Model': 'models/props/barrel_worn'}}, '1138395108.81dxschafe': {'Type': 'Tree - Animated', 'Animate': 'models/vegetation/palm_trunk_a_idle', 'Hpr': VBase3(70.721, 0.0, 0.0), 'Pos': Point3(508.472, 282.089, 54.019), 'Scale': VBase3(0.24, 0.24, 0.24), 'SubObjs': {'Top Model': {'Visual': {'Animate': 'models/vegetation/palm_leaf_a_idle', 'Attach': ['trunk', 'def_trunk_attach'], 'Model': 'models/vegetation/palm_leaf_c_hi', 'PartName': 'leaf', 'Scale': VBase3(3.084, 3.084, 3.084)}}}, 'Visual': {'Animate': 'models/vegetation/palm_trunk_a_idle', 'Model': 'models/vegetation/palm_trunk_b_hi', 'PartName': 'trunk'}}, '1138395123.31dxschafe': {'Type': 'Tree - Animated', 'Animate': 'models/vegetation/palm_trunk_a_idle', 'Hpr': VBase3(107.868, 0.0, 0.0), 'Pos': Point3(461.037, 283.646, 54.049), 'Scale': VBase3(0.281, 0.281, 0.281), 'SubObjs': {'Top Model': {'Visual': {'Animate': 'models/vegetation/palm_leaf_a_idle', 'Attach': ['trunk', 'def_trunk_attach'], 'Model': 'models/vegetation/palm_leaf_c_hi', 'PartName': 'leaf', 'Scale': VBase3(3.084, 3.084, 3.084)}}}, 'Visual': {'Animate': 'models/vegetation/palm_trunk_a_idle', 'Model': 'models/vegetation/palm_trunk_b_hi', 'PartName': 'trunk'}}, '1138401436.53dxschafe': {'Type': 'Townsperson', 'Category': 'Commoner', 'DNA': '1138401436.53dxschafe', 'Hpr': VBase3(-126.964, 0.0, 0.0), 'Pos': Point3(606.292, 128.53, 42.33), 'Respawns': True, 'Scale': VBase3(1.0, 1.0, 1.0), 'Start State': 'Idle'}, '1138402892.06dxschafe': {'Type': 'Bush', 'Hpr': VBase3(13.43, 0.0, 0.0), 'Pos': Point3(417.422, 165.266, 45.026), 'Scale': VBase3(0.776, 0.776, 0.776), 'Visual': {'Model': 'models/vegetation/bush_c'}}, '1138403067.3dxschafe': {'Type': 'Barrel', 'Color': (0.800000011920929, 0.800000011920929, 0.800000011920929, 1.0), 'Hpr': VBase3(0.0, -0.346, 0.0), 'Pos': Point3(422.235, 128.217, 43.247), 'Scale': VBase3(0.532, 0.532, 0.532), 'Visual': {'Color': (0.67, 0.58, 0.31, 1.0), 'Model': 'models/props/barrel_grey'}}, '1138403165.69dxschafe': {'Type': 'Bush', 'Color': (0.5, 0.5, 0.5, 1.0), 'Hpr': Point3(0.0, 0.0, 0.0), 'Pos': Point3(418.51, 136.938, 44.56), 'Scale': VBase3(1.0, 1.0, 1.0), 'Visual': {'Model': 'models/vegetation/bush_h'}}, '1138409038.58dxschafe': {'Type': 'Jack Sparrow Standin', 'Hpr': VBase3(11.367, 0.0, 0.0), 'Pos': Point3(458.548, 238.964, 49.761), 'Scale': VBase3(1.0, 1.0, 1.0), 'Visual': {'Model': 'models/char/js_2000'}}, '1138410112.63dxschafe': {'Type': 'Barrel', 'Hpr': VBase3(-6.216, 0.0, 0.0), 'Pos': Point3(438.061, 278.471, 50.169), 'Scale': VBase3(0.541, 0.541, 0.541), 'Visual': {'Model': 'models/props/barrel_group_1'}}, '1138410875.87dxschafe': {'Type': 'Cart', 'Hpr': VBase3(-126.971, 0.159, 0.0), 'Pos': Point3(410.484, 200.64, 46.1), 'Scale': VBase3(1.0, 1.0, 1.0), 'Visual': {'Model': 'models/props/cart_reg'}}, '1138420222.41dxschafe': {'Type': 'Building Exterior', 'File': '', 'Hpr': VBase3(2.096, 0.0, 0.0), 'Pos': Point3(471.497, 83.062, 39.754), 'Scale': VBase3(1.0, 1.0, 1.0), 'Visual': {'Model': 'models/buildings/pir_m_bld_wal_spanish_archA'}}, '1138420265.62dxschafe': {'Type': 'Building Exterior', 'File': '', 'Hpr': VBase3(84.245, 0.0, 0.0), 'Pos': 
Point3(380.148, 121.958, 41.664), 'Scale': VBase3(1.0, 1.0, 1.0), 'Visual': {'Model': 'models/buildings/pir_m_bld_wal_spanish_archA'}}, '1138420321.09dxschafe': {'Type': 'Building Exterior', 'File': '', 'Color': (0.800000011920929, 0.800000011920929, 0.800000011920929, 1.0), 'Hpr': VBase3(0.901, 0.0, 0.0), 'Pos': Point3(557.895, 238.356, 47.471), 'Scale': VBase3(1.0, 1.0, 1.0), 'Visual': {'Color': (0.88, 0.8899999999999999, 0.8299999999999998, 1.0), 'Model': 'models/buildings/spanish_archB'}}, '1138647059.31dxschafe': {'Type': 'Rope', 'Color': (0.6000000238418579, 0.6000000238418579, 0.6000000238418579, 1.0), 'Hpr': Point3(0.0, 0.0, 0.0), 'Pos': Point3(564.321, 202.968, 51.469), 'Scale': VBase3(1.0, 1.0, 1.0), 'Visual': {'Model': 'models/props/rope_pile'}}, '1138647125.22dxschafe': {'Type': 'Crate', 'Color': (0.75, 0.9300000071525574, 1.0, 1.0), 'Hpr': VBase3(13.836, 0.0, 0.0), 'Pos': Point3(562.005, 196.806, 48.044), 'Scale': VBase3(0.515, 0.515, 0.515), 'Visual': {'Color': (0.67, 0.73, 0.66, 1.0), 'Model': 'models/props/crate'}}, '1138648770.65dxschafe': {'Type': 'Wall', 'Hpr': VBase3(-67.948, 0.0, 0.0), 'Pos': Point3(656.776, 162.74, 44.948), 'Scale': VBase3(1.0, 1.0, 1.0), 'Visual': {'Model': 'models/buildings/LowWallStone_10'}}, '1138649836.96dxschafe': {'Type': 'Bush', 'Hpr': VBase3(-1.499, 0.0, 0.0), 'Pos': Point3(651.46, 184.535, 47.708), 'Scale': VBase3(0.902, 0.902, 0.902), 'Visual': {'Model': 'models/vegetation/bush_d'}}, '1138650056.86dxschafe': {'Type': 'Bush', 'Hpr': VBase3(75.775, 0.0, 0.0), 'Pos': Point3(653.734, 165.339, 44.993), 'Scale': VBase3(0.839, 0.839, 0.839), 'Visual': {'Model': 'models/vegetation/bush_b'}}, '1138650910.16dxschafe': {'Type': 'Bush', 'Hpr': VBase3(-63.739, 0.0, 0.0), 'Pos': Point3(545.986, 229.67, 48.487), 'Scale': VBase3(0.85, 0.85, 0.85), 'Visual': {'Model': 'models/vegetation/bush_f'}}, '1138657845.1dxschafe': {'Type': 'Bush', 'Hpr': VBase3(-109.364, -0.735, 0.0), 'Pos': Point3(433.36, 90.252, 40.515), 'Scale': VBase3(1.0, 1.0, 1.0), 'Visual': {'Model': 'models/vegetation/bush_d'}}, '1138658204.28dxschafe': {'Type': 'Bush', 'Hpr': VBase3(-110.454, 0.0, 0.0), 'Pos': Point3(414.769, 189.853, 43.951), 'Scale': VBase3(0.629, 0.629, 0.629), 'Visual': {'Model': 'models/vegetation/bush_d'}}, '1138660565.68dxschafe': {'Type': 'Bush', 'Hpr': VBase3(-122.345, 0.0, 0.0), 'Pos': Point3(486.529, 84.942, 40.511), 'Scale': VBase3(0.489, 0.489, 0.489), 'Visual': {'Model': 'models/vegetation/bush_a'}}, '1138660671.69dxschafe': {'Type': 'Cart', 'Hpr': VBase3(-25.018, 0.0, 0.0), 'Pos': Point3(517.4, 72.714, 40.194), 'Scale': VBase3(1.0, 1.0, 1.0), 'Visual': {'Model': 'models/props/cart_reg'}}, '1138661112.78dxschafe': {'Type': 'Wall', 'Hpr': VBase3(-46.798, 0.0, 0.0), 'Pos': Point3(621.193, 266.956, 51.357), 'Scale': VBase3(1.0, 1.0, 1.0), 'Visual': {'Model': 'models/buildings/pir_m_bld_wal_stuccoTall10'}}, '1138667460.66dxschafe': {'Type': 'Bush', 'Hpr': VBase3(85.068, -0.352, 4.072), 'Pos': Point3(421.01, 240.989, 49.509), 'Scale': VBase3(0.499, 0.499, 0.499), 'Visual': {'Model': 'models/vegetation/bush_a'}}, '1138725193.96dxschafe': {'Type': 'FountainSmall', 'Hpr': VBase3(0.051, 1.794, -1.632), 'Pos': Point3(556.539, 134.683, 43.097), 'Scale': VBase3(1.003, 1.003, 1.003), 'Visual': {'Model': 'models/props/FountainSmall'}}, '1138725240.0dxschafe': {'Type': 'Bush', 'Hpr': VBase3(19.379, 0.0, 0.0), 'Pos': Point3(569.465, 151.042, 43.616), 'Scale': VBase3(0.425, 0.425, 0.425), 'Visual': {'Model': 'models/vegetation/bush_a'}}, '1138727343.41dxschafe': {'Type': 'Tree 
- Animated', 'Animate': 'models/vegetation/palm_trunk_a_idle', 'Hpr': Point3(0.0, 0.0, 0.0), 'Pos': Point3(487.619, 82.513, 39.809), 'Scale': VBase3(0.64, 0.64, 0.64), 'SubObjs': {'Top Model': {'Visual': {'Animate': 'models/vegetation/palm_leaf_a_idle', 'Attach': ['trunk', 'def_trunk_attach'], 'Model': 'models/vegetation/palm_leaf_a_hi', 'PartName': 'leaf'}}}, 'Visual': {'Animate': 'models/vegetation/palm_trunk_a_idle', 'Model': 'models/vegetation/palm_trunk_a_hi', 'PartName': 'trunk'}}, '1138728593.36dxschafe': {'Type': 'Barrel', 'Color': (0.800000011920929, 1.0, 0.6000000238418579, 1.0), 'Hpr': VBase3(115.504, -0.605, -63.052), 'Pos': Point3(625.564, 171.452, 44.87), 'Scale': VBase3(0.776, 0.776, 0.776), 'Visual': {'Model': 'models/props/barrel_sideways'}}, '1138730803.13dxschafe': {'Type': 'Bush', 'Hpr': VBase3(106.843, 0.0, 0.0), 'Pos': Point3(609.912, 253.935, 51.356), 'Scale': VBase3(1.0, 1.0, 1.0), 'Visual': {'Model': 'models/vegetation/bush_c'}}, '1138742820.76dxschafe': {'Type': 'Bush', 'Hpr': VBase3(-102.655, 0.0, 0.0), 'Pos': Point3(639.296, 217.768, 48.521), 'Scale': VBase3(0.6, 0.6, 0.6), 'Visual': {'Model': 'models/vegetation/bush_a'}}, '1138743352.03dxschafe': {'Type': 'Bush', 'Hpr': VBase3(0.0, 0.0, 6.367), 'Pos': Point3(554.867, 134.472, 41.778), 'Scale': VBase3(0.384, 0.384, 0.384), 'Visual': {'Model': 'models/vegetation/bush_f'}}, '1142040049.39dxschafe': {'Type': 'Shanty Tents', 'Hpr': VBase3(-135.95, -4.989, 1.523), 'Pos': Point3(580.097, 110.786, 41.98), 'Scale': VBase3(1.0, 1.0, 1.0), 'Visual': {'Model': 'models/props/pir_m_prp_tnt_group3_market'}}, '1142040233.83dxschafe': {'Type': 'Building Exterior', 'File': '', 'ExtUid': '1142040233.83dxschafe0', 'Hpr': VBase3(-60.747, 0.0, 0.0), 'Objects': {'1142040233.85dxschafe': {'Type': 'Locator Node', 'Name': 'portal_exterior_1', 'Hpr': Point3(0.0, 0.0, 0.0), 'Pos': Point3(0.0, 0.0, 0.0), 'Scale': VBase3(1.0, 1.0, 1.0)}}, 'Pos': Point3(640.889, 147.513, 44.853), 'Scale': VBase3(1.0, 1.0, 1.0), 'Visual': {'Name': '', 'Model': 'models/buildings/spanish_npc_house_e_exterior'}}}, 'Pos': Point3(0.0, 0.0, 0.0), 'Scale': VBase3(1.0, 1.0, 1.0)}, '1144362499.33jubutler': {'Type': 'Cell Portal Area', 'Name': 'cell_shanty_town', 'Hpr': Point3(0.0, 0.0, 0.0), 'Objects': {'1136405019.87dzlu': {'Type': 'Building Exterior', 'Name': '', 'File': '', 'Hpr': VBase3(-3.94, 0.0, 0.0), 'Objects': {}, 'Pos': Point3(196.075, 264.07, 65.369), 'Scale': VBase3(1.0, 1.0, 1.0), 'Visual': {'Model': 'models/buildings/shanty_npc_house_combo_B'}}, '1136405045.67dzlu': {'Type': 'Building Exterior', 'Name': '', 'File': '', 'Hpr': VBase3(-36.407, 0.0, 0.0), 'Objects': {'1136405019.87dzlu0': {'Type': 'Locator Node', 'Name': 'portal_exterior_1', 'Hpr': VBase3(90.0, 0.0, 0.0), 'Pos': Point3(3.987, -20.033, 0.0), 'Scale': VBase3(1.0, 1.0, 1.0)}, '1137811384.33dxschafe': {'Type': 'Bush', 'Hpr': VBase3(107.596, 3.611, -3.72), 'Pos': Point3(9.386, -10.777, 0.092), 'Scale': VBase3(0.552, 0.552, 0.552), 'Visual': {'Model': 'models/vegetation/bush_d'}}, '1137811457.28dxschafe': {'Type': 'Tree', 'Hpr': VBase3(34.613, -1.886, 0.0), 'Pos': Point3(-8.018, -12.322, -0.492), 'Scale': VBase3(1.043, 1.043, 1.043), 'Visual': {'Model': 'models/vegetation/gen_tree_d'}}}, 'Pos': Point3(227.589, 251.799, 65.824), 'Scale': VBase3(1.0, 1.0, 1.0), 'Visual': {'Model': 'models/buildings/shanty_npc_house_combo_G'}}, '1136405115.39dzlu': {'Type': 'Building Exterior', 'Name': '', 'File': '', 'Hpr': VBase3(-152.173, 0.0, 0.0), 'Objects': {'1136405019.87dzlu0': {'Type': 'Locator Node', 
'Name': 'portal_exterior_1', 'Hpr': VBase3(90.0, 0.0, 0.0), 'Pos': Point3(3.987, -20.033, 0.0), 'Scale': VBase3(1.0, 1.0, 1.0)}, '1138403904.88dxschafe': {'Type': 'Bucket', 'Hpr': VBase3(152.173, 0.0, 0.0), 'Pos': Point3(-3.736, -12.128, 2.621), 'Scale': VBase3(1.0, 1.0, 1.0), 'Visual': {'Model': 'models/props/bucket'}}}, 'Pos': Point3(154.518, 152.03, 70.69), 'Scale': VBase3(1.0, 1.0, 1.0), 'Visual': {'Model': 'models/buildings/shanty_npc_house_combo_H'}}, '1136405216.39dzlu': {'Type': 'Building Exterior', 'Name': '', 'File': '', 'Hpr': VBase3(159.035, 0.0, -3.318), 'Objects': {'1136405019.87dzlu0': {'Type': 'Locator Node', 'Name': 'portal_exterior_1', 'Hpr': VBase3(90.0, 0.0, 0.0), 'Pos': Point3(3.987, -20.033, 0.0), 'Scale': VBase3(1.0, 1.0, 1.0)}, '1138402377.67dxschafe': {'Type': 'Barrel', 'Hpr': VBase3(-159.067, -1.187, -3.099), 'Pos': Point3(-10.254, -10.193, 12.792), 'Scale': VBase3(0.71, 0.71, 0.71), 'Visual': {'Model': 'models/props/barrel'}}}, 'Pos': Point3(186.064, 184.276, 66.339), 'Scale': VBase3(1.0, 1.0, 1.0), 'Visual': {'Model': 'models/buildings/shanty_npc_house_combo_C'}}, '1136405358.8dzlu': {'Type': 'Building Exterior', 'Name': '', 'File': '', 'Door': 'models/buildings/shanty_guildhall_door', 'ExtUid': '1136405358.8dzlu0', 'Hpr': VBase3(141.858, 2.124, 0.788), 'Interior': 'models/buildings/interior_shanty_guildhall', 'Objects': {'1136405019.87dzlu0': {'Type': 'Locator Node', 'Name': 'portal_exterior_1', 'Hpr': VBase3(-180.0, 0.0, 0.0), 'Pos': Point3(-0.277, -13.756, 1.023), 'Scale': VBase3(1.0, 1.0, 1.0)}, '1137806371.73dxschafe': {'Type': 'Bush', 'Hpr': VBase3(54.196, 0.0, 0.0), 'Pos': Point3(-21.814, -6.083, 0.66), 'Scale': VBase3(0.502, 0.502, 0.502), 'Visual': {'Model': 'models/vegetation/bush_a'}}, '1137806828.39dxschafe': {'Type': 'ChickenCage', 'Hpr': VBase3(-161.918, -4.648, 2.808), 'Pos': Point3(12.114, -17.853, 4.663), 'Scale': VBase3(1.0, 1.0, 1.0), 'Visual': {'Model': 'models/props/ChickenCage'}}, '1137806913.77dxschafe': {'Type': 'Tree', 'Hpr': VBase3(-138.891, 0.0, 8.608), 'Pos': Point3(-26.873, 4.368, 1.418), 'Scale': VBase3(0.616, 0.616, 0.616), 'Visual': {'Model': 'models/vegetation/gen_tree_d'}}}, 'Pos': Point3(113.877, 168.16, 74.618), 'Scale': VBase3(1.0, 1.0, 1.0), 'Visual': {'Name': '', 'Model': 'models/buildings/shanty_guildhall_exterior'}}, '1136405421.64dzlu': {'Type': 'Building Exterior', 'Name': '', 'File': '', 'Hpr': VBase3(72.446, 0.0, 2.517), 'Objects': {'1136405019.87dzlu0': {'Type': 'Locator Node', 'Name': 'portal_exterior_1', 'Hpr': VBase3(0.0, 0.0, 0.0), 'Pos': Point3(-0.277, -13.756, 1.023), 'Scale': VBase3(1.0, 1.0, 1.0)}, '1137809822.97dxschafe': {'Type': 'Bush', 'Hpr': VBase3(107.52, -2.4, 0.757), 'Pos': Point3(-6.112, -12.308, -1.592), 'Scale': VBase3(1.0, 1.0, 1.0), 'Visual': {'Model': 'models/vegetation/bush_b'}}}, 'Pos': Point3(70.407, 222.168, 78.097), 'Scale': VBase3(1.0, 1.0, 1.0), 'Visual': {'Model': 'models/buildings/shanty_npc_house_combo_I'}}, '1136405510.23dzlu': {'Type': 'Building Exterior', 'Name': '', 'File': '', 'Hpr': VBase3(176.13, 0.0, 0.0), 'Objects': {'1136405019.87dzlu0': {'Type': 'Locator Node', 'Name': 'portal_exterior_1', 'Hpr': VBase3(0.0, 0.0, 0.0), 'Pos': Point3(-0.277, -13.756, 1.023), 'Scale': VBase3(1.0, 1.0, 1.0)}, '1137614361.77dxschafe': {'Type': 'Bush', 'Hpr': VBase3(153.326, 9.923, 10.767), 'Pos': Point3(-12.631, 18.497, -5.778), 'Scale': VBase3(1.0, 1.0, 1.0), 'Visual': {'Model': 'models/vegetation/bush_a'}}}, 'Pos': Point3(215.034, 179.69, 62.862), 'Scale': VBase3(1.0, 1.0, 1.0), 'Visual': 
{'Model': 'models/buildings/shanty_npc_house_combo_D'}}, '1136406250.43dzlu': {'Type': 'Tree - Animated', 'Animate': 'models/vegetation/palm_trunk_a_idle', 'Color': (1.0, 1.0, 0.6000000238418579, 1.0), 'Hpr': VBase3(179.878, -7.928, 1.798), 'Pos': Point3(166.408, 181.16, 70.837), 'Scale': VBase3(1.376, 1.376, 1.376), 'SubObjs': {'Top Model': {'Visual': {'Animate': 'models/vegetation/palm_leaf_a_idle', 'Attach': ['trunk', 'def_trunk_attach'], 'Color': (0.724, 0.972, 0.724, 1.0), 'Model': 'models/vegetation/palm_leaf_b_hi', 'PartName': 'leaf', 'Scale': VBase3(1.252, 1.252, 1.252)}}}, 'Visual': {'Animate': 'models/vegetation/palm_trunk_a_idle', 'Color': (0.699999988079071, 0.699999988079071, 0.699999988079071, 1.0), 'Model': 'models/vegetation/palm_trunk_a_hi', 'PartName': 'trunk'}}, '1136416014.44dzlu': {'Type': 'Tree', 'Hpr': Point3(0.0, 0.0, 0.0), 'Pos': Point3(84.211, 154.518, 74.538), 'Scale': VBase3(1.0, 1.0, 1.0), 'Visual': {'Model': 'models/vegetation/gen_tree_a'}}, '1136416026.95dzlu': {'Type': 'Tree', 'Hpr': Point3(0.0, 0.0, 0.0), 'Pos': Point3(59.872, 156.373, 73.917), 'Scale': VBase3(1.043, 1.043, 1.043), 'Visual': {'Model': 'models/vegetation/gen_tree_b'}}, '1136416077.39dzlu': {'Type': 'Rock', 'Hpr': VBase3(60.977, -0.797, -9.551), 'Objects': {'1137806724.17dxschafe': {'Type': 'Bush', 'Hpr': VBase3(-103.298, -11.46, -0.26), 'Pos': Point3(5.128, -2.258, 0.606), 'Scale': VBase3(0.254, 0.254, 0.254), 'Visual': {'Model': 'models/vegetation/bush_c'}}}, 'Pos': Point3(84.247, 173.151, 77.413), 'Scale': VBase3(2.305, 2.305, 2.305), 'Visual': {'Model': 'models/props/zz_dont_use_rock_Dk_1F'}}, '1136421709.5dzlu': {'Type': 'Tree', 'Hpr': VBase3(-26.442, 0.0, 0.0), 'Pos': Point3(244.021, 242.469, 65.671), 'Scale': VBase3(0.478, 0.478, 0.478), 'Visual': {'Model': 'models/vegetation/fern_tree_d'}}, '1136426057.16dzlu': {'Type': 'Rock', 'Hpr': VBase3(76.724, 12.383, -3.664), 'Pos': Point3(245.979, 238.396, 65.326), 'Scale': VBase3(1.509, 1.509, 1.509), 'Visual': {'Model': 'models/props/zz_dont_use_rocks_Dk_group_1F'}}, '1137613374.63dxschafe': {'Type': 'Bush', 'Hpr': VBase3(19.954, -8.974, 0.0), 'Pos': Point3(370.225, 145.45, 40.851), 'Scale': VBase3(1.0, 1.0, 1.0), 'Visual': {'Model': 'models/vegetation/bush_a'}}, '1137805871.28dxschafe': {'Type': 'Well', 'Hpr': VBase3(53.569, 0.0, 0.0), 'Pos': Point3(89.439, 242.903, 71.636), 'Scale': VBase3(1.0, 1.0, 1.0), 'Visual': {'Model': 'models/props/wellA'}}, '1137805912.08dxschafe': {'Type': 'Cart', 'Hpr': VBase3(91.38, 0.0, 0.0), 'Pos': Point3(101.478, 258.244, 70.319), 'Scale': VBase3(1.0, 1.0, 1.0), 'Visual': {'Model': 'models/props/cart_reg'}}, '1137806064.89dxschafe': {'Type': 'ChickenCage', 'Hpr': Point3(0.0, 0.0, 0.0), 'Pos': Point3(144.397, 161.658, 74.183), 'Scale': VBase3(1.0, 1.0, 1.0), 'Visual': {'Model': 'models/props/ChickenCage'}}, '1137806634.36dxschafe': {'Type': 'Bush', 'Hpr': VBase3(-42.304, -2.237, -1.905), 'Pos': Point3(154.954, 166.021, 72.771), 'Scale': VBase3(0.328, 0.328, 0.328), 'Visual': {'Model': 'models/vegetation/bush_c'}}, '1137806680.44dxschafe': {'Type': 'Bush', 'Hpr': VBase3(-42.304, -2.237, -1.905), 'Pos': Point3(169.295, 192.582, 69.486), 'Scale': VBase3(0.585, 0.585, 0.585), 'Visual': {'Model': 'models/vegetation/bush_c'}}, '1137808645.7dxschafe': {'Type': 'Tree', 'Hpr': Point3(0.0, 0.0, 0.0), 'Pos': Point3(45.436, 201.763, 82.502), 'Scale': VBase3(1.0, 1.0, 1.0), 'Visual': {'Model': 'models/vegetation/gen_tree_a'}}, '1137809784.5dxschafe': {'Type': 'Rock', 'Hpr': VBase3(55.352, 26.27, 4.664), 'Pos': 
Point3(86.117, 219.566, 73.832), 'Scale': VBase3(1.166, 1.166, 1.166), 'Visual': {'Model': 'models/props/zz_dont_use_rock_Dk_2F'}}, '1137809887.95dxschafe': {'Type': 'Bush', 'Hpr': VBase3(0.0, 0.0, 16.083), 'Pos': Point3(84.826, 226.536, 74.615), 'Scale': VBase3(1.0, 1.0, 1.0), 'Visual': {'Model': 'models/vegetation/bush_f'}}, '1137810715.7dxschafe': {'Type': 'Bush', 'Hpr': VBase3(38.132, 1.015, 1.293), 'Pos': Point3(171.283, 291.984, 65.212), 'Scale': VBase3(0.606, 0.606, 0.606), 'Visual': {'Model': 'models/vegetation/bush_g'}}, '1137811001.89dxschafe': {'Type': 'Barrel', 'Color': (0.75, 1.0, 0.8500000238418579, 1.0), 'Hpr': Point3(0.0, 0.0, 0.0), 'Pos': Point3(234.945, 235.792, 66.291), 'Scale': VBase3(0.537, 0.537, 0.537), 'Visual': {'Model': 'models/props/barrel'}}, '1137811050.8dxschafe': {'Type': 'Tree', 'Hpr': Point3(0.0, 0.0, 0.0), 'Pos': Point3(108.747, 261.986, 69.603), 'Scale': VBase3(1.0, 1.0, 1.0), 'Visual': {'Model': 'models/vegetation/fern_tree_c'}}, '1138322252.33dxschafe': {'Type': 'Rock', 'Hpr': VBase3(-37.197, 0.0, 0.0), 'Objects': {}, 'Pos': Point3(57.771, 206.513, 82.94), 'Scale': VBase3(1.34, 1.34, 1.34), 'Visual': {'Model': 'models/props/zz_dont_use_rocks_Dk_group_1F'}}, '1138403541.34dxschafe': {'Type': 'Bucket', 'Hpr': Point3(0.0, 0.0, 0.0), 'Pos': Point3(91.779, 243.671, 74.527), 'Scale': VBase3(1.0, 1.0, 1.0), 'Visual': {'Model': 'models/props/bucket'}}, '1138404283.64dxschafe': {'Type': 'LaundryRope', 'Hpr': VBase3(27.934, 0.0, 0.0), 'Pos': Point3(67.118, 203.949, 77.576), 'Scale': VBase3(1.0, 1.0, 1.0), 'Visual': {'Model': 'models/props/LaundryRope'}}, '1138404327.72dxschafe': {'Type': 'Bucket', 'Hpr': VBase3(0.407, -5.599, 4.167), 'Pos': Point3(72.279, 206.064, 80.221), 'Scale': VBase3(1.0, 1.0, 1.0), 'Visual': {'Model': 'models/props/bucket'}}, '1138404604.47dxschafe': {'Type': 'Townsperson', 'Category': 'Commoner', 'DNA': '1138404604.47dxschafe', 'Hpr': VBase3(-104.36, 0.0, 0.0), 'Pos': Point3(121.083, 261.771, 69.011), 'Respawns': True, 'Scale': VBase3(1.0, 1.0, 1.0), 'Start State': 'Idle'}, '1138740198.35dxschafe': {'Type': 'Tree', 'Hpr': VBase3(0.696, 6.058, -5.372), 'Pos': Point3(91.294, 147.6, 60.402), 'Scale': VBase3(0.795, 0.795, 0.795), 'Visual': {'Model': 'models/vegetation/gen_tree_d'}}, '1138740362.12dxschafe': {'Type': 'Bush', 'Hpr': VBase3(-23.597, 0.0, 0.0), 'Pos': Point3(59.764, 160.633, 72.303), 'Scale': VBase3(1.0, 1.0, 1.0), 'Visual': {'Model': 'models/vegetation/bush_a'}}, '1138741910.42dxschafe': {'Type': 'Bush', 'Hpr': VBase3(-11.978, 4.151, -0.638), 'Pos': Point3(221.617, 196.1, 65.553), 'Scale': VBase3(0.312, 0.312, 0.312), 'Visual': {'Model': 'models/vegetation/bush_a'}}}, 'Pos': Point3(0.0, 0.0, 0.0), 'Scale': VBase3(1.0, 1.0, 1.0)}, '1144362499.63jubutler': {'Type': 'Cell Portal Area', 'Name': 'cell_trail4', 'Hpr': Point3(0.0, 0.0, 0.0), 'Objects': {'1136404761.9dzlu': {'Type': 'Shanty Gypsywagon', 'Hpr': Point3(0.0, 0.0, 0.0), 'Pos': Point3(-92.102, 102.145, 65.617), 'Scale': VBase3(1.0, 1.0, 1.0), 'Visual': {'Model': 'models/buildings/shanty_gypsywagon_exterior'}}, '1136415132.05dzlu': {'Type': 'Crate', 'Hpr': VBase3(26.871, 0.0, 0.0), 'Objects': {}, 'Pos': Point3(-81.874, 93.275, 65.617), 'Scale': VBase3(0.74, 0.74, 0.74), 'Visual': {'Model': 'models/props/crate'}}, '1136415143.75dzlu': {'Type': 'Crate', 'Hpr': Point3(0.0, 0.0, 0.0), 'Objects': {}, 'Pos': Point3(-77.499, 87.932, 65.617), 'Scale': VBase3(0.548, 0.548, 0.548), 'Visual': {'Model': 'models/props/crate'}}, '1136415145.64dzlu': {'Type': 'Crate', 'Hpr': Point3(0.0, 0.0, 
0.0), 'Pos': Point3(-81.122, 89.785, 65.617), 'Scale': VBase3(0.3, 0.3, 0.3), 'Visual': {'Model': 'models/props/crate'}}, '1136423361.78dzlu': {'Type': 'Rock', 'Hpr': VBase3(-18.445, 0.0, 0.0), 'Objects': {}, 'Pos': Point3(-169.169, 87.825, 59.012), 'Scale': VBase3(1.593, 1.593, 1.593), 'Visual': {'Model': 'models/props/zz_dont_use_rock_Dk_1F'}}, '1136424346.05dzlu': {'Type': 'Rock', 'Hpr': Point3(0.0, 0.0, 0.0), 'Pos': Point3(-116.127, 79.735, 60.285), 'Scale': VBase3(1.0, 1.0, 1.0), 'Visual': {'Model': 'models/props/zz_dont_use_rock_Dk_1F'}}, '1136424355.72dzlu': {'Type': 'Rock', 'Hpr': VBase3(55.974, 0.0, 0.0), 'Objects': {}, 'Pos': Point3(-65.312, 88.265, 65.616), 'Scale': VBase3(2.834, 2.834, 2.834), 'Visual': {'Model': 'models/props/zz_dont_use_rock_Dk_4F'}}, '1136424363.86dzlu': {'Type': 'Rock', 'Hpr': VBase3(111.948, 0.0, 0.0), 'Pos': Point3(-53.101, 83.125, 65.617), 'Scale': VBase3(0.736, 0.736, 0.736), 'Visual': {'Model': 'models/props/zz_dont_use_rock_Dk_2F'}}, '1136424372.47dzlu': {'Type': 'Rock', 'Hpr': VBase3(111.948, 0.0, 0.0), 'Pos': Point3(-58.935, 86.176, 64.859), 'Scale': VBase3(1.782, 1.782, 1.782), 'Visual': {'Model': 'models/props/zz_dont_use_rock_Dk_2F'}}, '1136424437.36dzlu': {'Type': 'Bush', 'Hpr': Point3(0.0, 0.0, 0.0), 'Pos': Point3(-51.458, 81.996, 65.617), 'Scale': VBase3(1.0, 1.0, 1.0), 'Visual': {'Model': 'models/vegetation/bush_c'}}, '1136424475.03dzlu': {'Type': 'Bush', 'Hpr': VBase3(95.647, 0.0, 0.0), 'Pos': Point3(-34.91, 82.316, 65.655), 'Scale': VBase3(1.0, 1.0, 1.0), 'Visual': {'Model': 'models/vegetation/bush_c'}}, '1136424504.92dzlu': {'Type': 'Bush', 'Hpr': VBase3(-60.354, 0.0, 0.0), 'Pos': Point3(-19.004, 92.61, 66.252), 'Scale': VBase3(0.748, 0.748, 0.748), 'Visual': {'Model': 'models/vegetation/bush_d'}}, '1136424561.63dzlu': {'Type': 'Bush', 'Hpr': VBase3(136.167, 0.0, 0.0), 'Pos': Point3(-12.13, 96.672, 65.94), 'Scale': VBase3(0.795, 0.795, 0.795), 'Visual': {'Model': 'models/vegetation/bush_d'}}, '1136424619.11dzlu': {'Type': 'Bush', 'Hpr': VBase3(136.167, 0.0, 0.0), 'Pos': Point3(-43.346, 86.941, 65.617), 'Scale': VBase3(0.795, 0.795, 0.795), 'Visual': {'Model': 'models/vegetation/bush_b'}}, '1138404983.63dxschafe': {'Type': 'Rock', 'Color': (0.6000000238418579, 0.6000000238418579, 0.6000000238418579, 1.0), 'Hpr': Point3(0.0, 0.0, 0.0), 'Objects': {}, 'Pos': Point3(-204.474, -267.29, 1.807), 'Scale': VBase3(2.775, 2.775, 2.775), 'Visual': {'Model': 'models/props/zz_dont_use_rocks_LT_group_1F'}}, '1138405612.78dxschafe': {'Type': 'Bush', 'Hpr': VBase3(-145.925, 0.0, 0.0), 'Pos': Point3(-209.182, -251.936, 1.818), 'Scale': VBase3(1.348, 1.348, 1.348), 'Visual': {'Model': 'models/vegetation/bush_c'}}}, 'Pos': Point3(0.0, 0.0, 0.0), 'Scale': VBase3(1.0, 1.0, 1.0)}}, 'Visual': {'Model': 'models/islands/bilgewater_zero'}}}, 'Node Links': [], 'Layers': {}, 'ObjectIds': {'1135280776.06dzlu': '["Objects"]["1135280776.06dzlu"]', '1135281802.29dzlu': '["Objects"]["1135280776.06dzlu"]["Objects"]["1144362499.15jubutler"]["Objects"]["1135281802.29dzlu"]', '1135282109.68dzlu': '["Objects"]["1135280776.06dzlu"]["Objects"]["1135281802.29dzlu"]["Objects"]["1135282109.68dzlu"]', '1135282109.68dzlu0': '["Objects"]["1135280776.06dzlu"]["Objects"]["1135281802.29dzlu"]["Objects"]["1135282109.68dzlu"]["Objects"]["1135282109.68dzlu0"]', '1135282286.59dzlu': '["Objects"]["1135280776.06dzlu"]["Objects"]["1135281802.29dzlu"]["Objects"]["1135282286.59dzlu"]', '1135282286.59dzlu0': 
'["Objects"]["1135280776.06dzlu"]["Objects"]["1135281802.29dzlu"]["Objects"]["1135282286.59dzlu"]["Objects"]["1135282286.59dzlu0"]', '1135285775.21dzlu': '["Objects"]["1135280776.06dzlu"]["Objects"]["1135281802.29dzlu"]["Objects"]["1135285775.21dzlu"]', '1135285775.21dzlu0': '["Objects"]["1135280776.06dzlu"]["Objects"]["1135281802.29dzlu"]["Objects"]["1135285775.21dzlu"]["Objects"]["1135285775.21dzlu0"]', '1135285783.04dzlu': '["Objects"]["1135280776.06dzlu"]["Objects"]["1144362499.15jubutler"]["Objects"]["1135285783.04dzlu"]', '1135285783.04dzlu0': '["Objects"]["1135280776.06dzlu"]["Objects"]["1135285783.04dzlu"]["Objects"]["1135285783.04dzlu0"]', '1135285791.23dzlu': '["Objects"]["1135280776.06dzlu"]["Objects"]["1135281802.29dzlu"]["Objects"]["1135285791.23dzlu"]', '1135285791.23dzlu0': '["Objects"]["1135280776.06dzlu"]["Objects"]["1135281802.29dzlu"]["Objects"]["1135285791.23dzlu"]["Objects"]["1135285791.23dzlu0"]', '1135285802.19dzlu': '["Objects"]["1135280776.06dzlu"]["Objects"]["1135281802.29dzlu"]["Objects"]["1135285802.19dzlu"]', '1135285802.19dzlu0': '["Objects"]["1135280776.06dzlu"]["Objects"]["1135281802.29dzlu"]["Objects"]["1135285802.19dzlu"]["Objects"]["1135285802.19dzlu0"]', '1135286034.37dzlu': '["Objects"]["1135280776.06dzlu"]["Objects"]["1144362499.15jubutler"]["Objects"]["1135286034.37dzlu"]', '1135286034.37dzlu0': '["Objects"]["1135280776.06dzlu"]["Objects"]["1135286034.37dzlu"]["Objects"]["1135286034.37dzlu0"]', '1135287336.43dzlu': '["Objects"]["1135280776.06dzlu"]["Objects"]["1135281802.29dzlu"]["Objects"]["1135287336.43dzlu"]', '1135287336.43dzlu0': '["Objects"]["1135280776.06dzlu"]["Objects"]["1135281802.29dzlu"]["Objects"]["1135287336.43dzlu"]["Objects"]["1135287336.43dzlu0"]', '1135287679.84dzlu': '["Objects"]["1135280776.06dzlu"]["Objects"]["1135281802.29dzlu"]["Objects"]["1135287679.84dzlu"]', '1135287679.84dzlu0': '["Objects"]["1135280776.06dzlu"]["Objects"]["1135281802.29dzlu"]["Objects"]["1135287679.84dzlu"]["Objects"]["1135287679.84dzlu0"]', '1135288077.32dzlu': '["Objects"]["1135280776.06dzlu"]["Objects"]["1135281802.29dzlu"]["Objects"]["1135288077.32dzlu"]', '1135288077.32dzlu0': '["Objects"]["1135280776.06dzlu"]["Objects"]["1135281802.29dzlu"]["Objects"]["1135288077.32dzlu"]["Objects"]["1135288077.32dzlu0"]', '1135288180.41dzlu': '["Objects"]["1135280776.06dzlu"]["Objects"]["1135281802.29dzlu"]["Objects"]["1135288180.4dzlu"]["Objects"]["1135288180.41dzlu"]', '1135288180.4dzlu': '["Objects"]["1135280776.06dzlu"]["Objects"]["1135281802.29dzlu"]["Objects"]["1135288180.4dzlu"]', '1135289191.98dzlu': '["Objects"]["1135280776.06dzlu"]["Objects"]["1135281802.29dzlu"]["Objects"]["1135289191.98dzlu"]', '1135289191.98dzlu0': '["Objects"]["1135280776.06dzlu"]["Objects"]["1135281802.29dzlu"]["Objects"]["1135289191.98dzlu"]["Objects"]["1135289191.98dzlu0"]', '1135290323.54dzlu': '["Objects"]["1135280776.06dzlu"]["Objects"]["1135281802.29dzlu"]["Objects"]["1135290323.54dzlu"]', '1135290323.54dzlu0': '["Objects"]["1135280776.06dzlu"]["Objects"]["1135281802.29dzlu"]["Objects"]["1135290323.54dzlu"]["Objects"]["1135290323.54dzlu0"]', '1135290764.99dzlu': '["Objects"]["1135280776.06dzlu"]["Objects"]["1135281802.29dzlu"]["Objects"]["1135290764.99dzlu"]', '1135971052.31dzlu': '["Objects"]["1135280776.06dzlu"]["Objects"]["1135281802.29dzlu"]["Objects"]["1135971052.31dzlu"]', '1135971052.31dzlu0': '["Objects"]["1135280776.06dzlu"]["Objects"]["1135281802.29dzlu"]["Objects"]["1135971052.31dzlu"]["Objects"]["1135971052.31dzlu0"]', '1135971384.22dzlu': 
'["Objects"]["1135280776.06dzlu"]["Objects"]["1135281802.29dzlu"]["Objects"]["1135971384.22dzlu"]', '1135971384.22dzlu0': '["Objects"]["1135280776.06dzlu"]["Objects"]["1135281802.29dzlu"]["Objects"]["1135971384.22dzlu"]["Objects"]["1135971384.22dzlu0"]', '1136336439.97dzlu': '["Objects"]["1135280776.06dzlu"]["Objects"]["1135281802.29dzlu"]["Objects"]["1136336439.97dzlu"]', '1136336439.97dzlu0': '["Objects"]["1135280776.06dzlu"]["Objects"]["1135281802.29dzlu"]["Objects"]["1136336439.97dzlu"]["Objects"]["1136336439.97dzlu0"]', '1136336848.57dzlu': '["Objects"]["1135280776.06dzlu"]["Objects"]["1144362499.15jubutler"]["Objects"]["1136336848.57dzlu"]', '1136336848.57dzlu0': '["Objects"]["1135280776.06dzlu"]["Objects"]["1136336848.57dzlu"]["Objects"]["1136336848.57dzlu0"]', '1136337021.82dzlu': '["Objects"]["1135280776.06dzlu"]["Objects"]["1135281802.29dzlu"]["Objects"]["1136337021.82dzlu"]', '1136337021.82dzlu0': '["Objects"]["1135280776.06dzlu"]["Objects"]["1135281802.29dzlu"]["Objects"]["1136337021.82dzlu"]["Objects"]["1136337021.82dzlu0"]', '1136338230.04dzlu': '["Objects"]["1135280776.06dzlu"]["Objects"]["1135281802.29dzlu"]["Objects"]["1136338230.04dzlu"]', '1136338558.54dzlu': '["Objects"]["1135280776.06dzlu"]["Objects"]["1135281802.29dzlu"]["Objects"]["1136338558.54dzlu"]', '1136338586.88dzlu': '["Objects"]["1135280776.06dzlu"]["Objects"]["1135281802.29dzlu"]["Objects"]["1136338586.88dzlu"]', '1136338641.41dzlu': '["Objects"]["1135280776.06dzlu"]["Objects"]["1135281802.29dzlu"]["Objects"]["1136338641.41dzlu"]', '1136338940.62dzlu': '["Objects"]["1135280776.06dzlu"]["Objects"]["1135281802.29dzlu"]["Objects"]["1136338940.62dzlu"]', '1136339759.96dzlu': '["Objects"]["1135280776.06dzlu"]["Objects"]["1135281802.29dzlu"]["Objects"]["1136339759.96dzlu"]', '1136339824.68dzlu': '["Objects"]["1135280776.06dzlu"]["Objects"]["1135281802.29dzlu"]["Objects"]["1136339824.68dzlu"]', '1136339875.1dzlu': '["Objects"]["1135280776.06dzlu"]["Objects"]["1135281802.29dzlu"]["Objects"]["1136339875.1dzlu"]', '1136339970.12dzlu': '["Objects"]["1135280776.06dzlu"]["Objects"]["1144362499.15jubutler"]["Objects"]["1136339970.12dzlu"]', '1136340207.4dzlu': '["Objects"]["1135280776.06dzlu"]["Objects"]["1135281802.29dzlu"]["Objects"]["1136340207.4dzlu"]', '1136340387.18dzlu': '["Objects"]["1135280776.06dzlu"]["Objects"]["1135281802.29dzlu"]["Objects"]["1136340387.18dzlu"]', '1136340427.54dzlu': '["Objects"]["1135280776.06dzlu"]["Objects"]["1144362499.15jubutler"]["Objects"]["1136340427.54dzlu"]', '1136340454.07dzlu': '["Objects"]["1135280776.06dzlu"]["Objects"]["1135281802.29dzlu"]["Objects"]["1136340454.07dzlu"]', '1136340700.6dzlu': '["Objects"]["1135280776.06dzlu"]["Objects"]["1135281802.29dzlu"]["Objects"]["1136340700.6dzlu"]', '1136340768.34dzlu': '["Objects"]["1135280776.06dzlu"]["Objects"]["1135281802.29dzlu"]["Objects"]["1136340768.34dzlu"]', '1136404083.2dzlu': '["Objects"]["1135280776.06dzlu"]["Objects"]["1135281802.29dzlu"]["Objects"]["1136404083.2dzlu"]', '1136404579.56dzlu': '["Objects"]["1135280776.06dzlu"]["Objects"]["1136404579.56dzlu"]', '1136404682.97dzlu': '["Objects"]["1135280776.06dzlu"]["Objects"]["1135281802.29dzlu"]["Objects"]["1136404682.97dzlu"]', '1136404761.9dzlu': '["Objects"]["1135280776.06dzlu"]["Objects"]["1144362499.63jubutler"]["Objects"]["1136404761.9dzlu"]', '1136404859.58dzlu': '["Objects"]["1135280776.06dzlu"]["Objects"]["1136404859.58dzlu"]', '1136404859.58dzlu0': '["Objects"]["1135280776.06dzlu"]["Objects"]["1136404859.58dzlu"]["Objects"]["1136404859.58dzlu0"]', 
'1136405019.87dzlu': '["Objects"]["1135280776.06dzlu"]["Objects"]["1144362499.33jubutler"]["Objects"]["1136405019.87dzlu"]', '1136405019.87dzlu0': '["Objects"]["1135280776.06dzlu"]["Objects"]["1136405115.39dzlu"]["Objects"]["1136405019.87dzlu0"]', '1136405045.67dzlu': '["Objects"]["1135280776.06dzlu"]["Objects"]["1144362499.33jubutler"]["Objects"]["1136405045.67dzlu"]', '1136405115.39dzlu': '["Objects"]["1135280776.06dzlu"]["Objects"]["1144362499.33jubutler"]["Objects"]["1136405115.39dzlu"]', '1136405216.39dzlu': '["Objects"]["1135280776.06dzlu"]["Objects"]["1144362499.33jubutler"]["Objects"]["1136405216.39dzlu"]', '1136405358.8dzlu': '["Objects"]["1135280776.06dzlu"]["Objects"]["1144362499.33jubutler"]["Objects"]["1136405358.8dzlu"]', '1136405358.8dzlu0': '["Objects"]["1135280776.06dzlu"]["Objects"]["1144362499.33jubutler"]["Objects"]["1136405358.8dzlu"]', '1136405421.64dzlu': '["Objects"]["1135280776.06dzlu"]["Objects"]["1144362499.33jubutler"]["Objects"]["1136405421.64dzlu"]', '1136405510.23dzlu': '["Objects"]["1135280776.06dzlu"]["Objects"]["1144362499.33jubutler"]["Objects"]["1136405510.23dzlu"]', '1136406067.58dzlu': '["Objects"]["1135280776.06dzlu"]["Objects"]["1136406067.58dzlu"]', '1136406102.08dzlu': '["Objects"]["1135280776.06dzlu"]["Objects"]["1136406102.08dzlu"]', '1136406250.43dzlu': '["Objects"]["1135280776.06dzlu"]["Objects"]["1144362499.33jubutler"]["Objects"]["1136406250.43dzlu"]', '1136406300.36dzlu': '["Objects"]["1135280776.06dzlu"]["Objects"]["1136406300.36dzlu"]', '1136406305.84dzlu': '["Objects"]["1135280776.06dzlu"]["Objects"]["1136406305.84dzlu"]', '1136406445.48dzlu': '["Objects"]["1135280776.06dzlu"]["Objects"]["1136406445.48dzlu"]', '1136406479.8dzlu': '["Objects"]["1135280776.06dzlu"]["Objects"]["1136406479.8dzlu"]', '1136406533.92dzlu': '["Objects"]["1135280776.06dzlu"]["Objects"]["1136406533.92dzlu"]', '1136406575.75dzlu': '["Objects"]["1135280776.06dzlu"]["Objects"]["1136406575.75dzlu"]', '1136414754.64dzlu': '["Objects"]["1135280776.06dzlu"]["Objects"]["1136414754.64dzlu"]', '1136415132.05dzlu': '["Objects"]["1135280776.06dzlu"]["Objects"]["1144362499.63jubutler"]["Objects"]["1136415132.05dzlu"]', '1136415143.75dzlu': '["Objects"]["1135280776.06dzlu"]["Objects"]["1144362499.63jubutler"]["Objects"]["1136415143.75dzlu"]', '1136415145.64dzlu': '["Objects"]["1135280776.06dzlu"]["Objects"]["1144362499.63jubutler"]["Objects"]["1136415145.64dzlu"]', '1136415300.02dzlu': '["Objects"]["1135280776.06dzlu"]["Objects"]["1136415300.0dzlu"]["Objects"]["1136415300.02dzlu"]', '1136415300.0dzlu': '["Objects"]["1135280776.06dzlu"]["Objects"]["1136415300.0dzlu"]', '1136416014.44dzlu': '["Objects"]["1135280776.06dzlu"]["Objects"]["1144362499.33jubutler"]["Objects"]["1136416014.44dzlu"]', '1136416026.95dzlu': '["Objects"]["1135280776.06dzlu"]["Objects"]["1144362499.33jubutler"]["Objects"]["1136416026.95dzlu"]', '1136416077.39dzlu': '["Objects"]["1135280776.06dzlu"]["Objects"]["1144362499.33jubutler"]["Objects"]["1136416077.39dzlu"]', '1136416540.86dzlu': '["Objects"]["1135280776.06dzlu"]["Objects"]["1136416540.86dzlu"]', '1136419185.55dzlu': '["Objects"]["1135280776.06dzlu"]["Objects"]["1135281802.29dzlu"]["Objects"]["1136419185.55dzlu"]', '1136419266.19dzlu': '["Objects"]["1135280776.06dzlu"]["Objects"]["1144362499.15jubutler"]["Objects"]["1136419266.19dzlu"]', '1136419312.06dzlu': '["Objects"]["1135280776.06dzlu"]["Objects"]["1144362499.15jubutler"]["Objects"]["1136419312.06dzlu"]', '1136419382.98dzlu': 
'["Objects"]["1135280776.06dzlu"]["Objects"]["1135281802.29dzlu"]["Objects"]["1136419382.98dzlu"]', '1136419587.27dzlu': '["Objects"]["1135280776.06dzlu"]["Objects"]["1135281802.29dzlu"]["Objects"]["1136419587.27dzlu"]', '1136420641.61dzlu': '["Objects"]["1135280776.06dzlu"]["Objects"]["1136420641.61dzlu"]', '1136420648.19dzlu': '["Objects"]["1135280776.06dzlu"]["Objects"]["1136420648.19dzlu"]', '1136420854.45dzlu': '["Objects"]["1135280776.06dzlu"]["Objects"]["1136420854.45dzlu"]', '1136420904.83dzlu': '["Objects"]["1135280776.06dzlu"]["Objects"]["1136420904.83dzlu"]', '1136420928.28dzlu': '["Objects"]["1135280776.06dzlu"]["Objects"]["1136420928.28dzlu"]', '1136421203.13dzlu': '["Objects"]["1135280776.06dzlu"]["Objects"]["1136421203.13dzlu"]', '1136421219.05dzlu': '["Objects"]["1135280776.06dzlu"]["Objects"]["1136421219.05dzlu"]', '1136421438.67dzlu': '["Objects"]["1135280776.06dzlu"]["Objects"]["1136421438.67dzlu"]', '1136421473.47dzlu': '["Objects"]["1135280776.06dzlu"]["Objects"]["1136421473.47dzlu"]', '1136421709.5dzlu': '["Objects"]["1135280776.06dzlu"]["Objects"]["1144362499.33jubutler"]["Objects"]["1136421709.5dzlu"]', '1136422899.75dzlu': '["Objects"]["1135280776.06dzlu"]["Objects"]["1136422899.75dzlu"]', '1136422912.8dzlu': '["Objects"]["1135280776.06dzlu"]["Objects"]["1136422912.8dzlu"]', '1136422957.22dzlu': '["Objects"]["1135280776.06dzlu"]["Objects"]["1136422957.22dzlu"]', '1136422976.59dzlu': '["Objects"]["1135280776.06dzlu"]["Objects"]["1136422976.59dzlu"]', '1136422998.09dzlu': '["Objects"]["1135280776.06dzlu"]["Objects"]["1136422998.09dzlu"]', '1136423079.63dzlu': '["Objects"]["1135280776.06dzlu"]["Objects"]["1136423079.63dzlu"]', '1136423080.58dzlu': '["Objects"]["1135280776.06dzlu"]["Objects"]["1136423080.58dzlu"]', '1136423081.72dzlu': '["Objects"]["1135280776.06dzlu"]["Objects"]["1136423081.72dzlu"]', '1136423082.23dzlu': '["Objects"]["1135280776.06dzlu"]["Objects"]["1136423082.23dzlu"]', '1136423092.95dzlu': '["Objects"]["1135280776.06dzlu"]["Objects"]["1136423092.95dzlu"]', '1136423193.16dzlu': '["Objects"]["1135280776.06dzlu"]["Objects"]["1136423193.16dzlu"]', '1136423237.13dzlu': '["Objects"]["1135280776.06dzlu"]["Objects"]["1136423237.13dzlu"]', '1136423297.98dzlu': '["Objects"]["1135280776.06dzlu"]["Objects"]["1136423297.98dzlu"]', '1136423361.78dzlu': '["Objects"]["1135280776.06dzlu"]["Objects"]["1144362499.63jubutler"]["Objects"]["1136423361.78dzlu"]', '1136423371.34dzlu': '["Objects"]["1135280776.06dzlu"]["Objects"]["1136423371.34dzlu"]', '1136423530.52dzlu': '["Objects"]["1135280776.06dzlu"]["Objects"]["1136423530.52dzlu"]', '1136423563.91dzlu': '["Objects"]["1135280776.06dzlu"]["Objects"]["1136423563.91dzlu"]', '1136423575.09dzlu': '["Objects"]["1135280776.06dzlu"]["Objects"]["1136423575.09dzlu"]', '1136423697.94dzlu': '["Objects"]["1135280776.06dzlu"]["Objects"]["1136423697.94dzlu"]', '1136423697.94dzlu0': '["Objects"]["1135280776.06dzlu"]["Objects"]["1136423697.94dzlu"]["Objects"]["1136423697.94dzlu0"]', '1136423796.89dzlu': '["Objects"]["1135280776.06dzlu"]["Objects"]["1136423796.89dzlu"]', '1136423964.42dzlu': '["Objects"]["1135280776.06dzlu"]["Objects"]["1136423964.42dzlu"]', '1136424011.34dzlu': '["Objects"]["1135280776.06dzlu"]["Objects"]["1136424011.34dzlu"]', '1136424157.73dzlu': '["Objects"]["1135280776.06dzlu"]["Objects"]["1136424157.73dzlu"]', '1136424231.09dzlu': '["Objects"]["1135280776.06dzlu"]["Objects"]["1136424231.09dzlu"]', '1136424346.05dzlu': 
'["Objects"]["1135280776.06dzlu"]["Objects"]["1144362499.63jubutler"]["Objects"]["1136424346.05dzlu"]', '1136424355.72dzlu': '["Objects"]["1135280776.06dzlu"]["Objects"]["1144362499.63jubutler"]["Objects"]["1136424355.72dzlu"]', '1136424363.86dzlu': '["Objects"]["1135280776.06dzlu"]["Objects"]["1144362499.63jubutler"]["Objects"]["1136424363.86dzlu"]', '1136424372.47dzlu': '["Objects"]["1135280776.06dzlu"]["Objects"]["1144362499.63jubutler"]["Objects"]["1136424372.47dzlu"]', '1136424437.36dzlu': '["Objects"]["1135280776.06dzlu"]["Objects"]["1144362499.63jubutler"]["Objects"]["1136424437.36dzlu"]', '1136424475.03dzlu': '["Objects"]["1135280776.06dzlu"]["Objects"]["1144362499.63jubutler"]["Objects"]["1136424475.03dzlu"]', '1136424504.92dzlu': '["Objects"]["1135280776.06dzlu"]["Objects"]["1144362499.63jubutler"]["Objects"]["1136424504.92dzlu"]', '1136424561.63dzlu': '["Objects"]["1135280776.06dzlu"]["Objects"]["1144362499.63jubutler"]["Objects"]["1136424561.63dzlu"]', '1136424617.58dzlu': '["Objects"]["1135280776.06dzlu"]["Objects"]["1136424617.58dzlu"]', '1136424619.11dzlu': '["Objects"]["1135280776.06dzlu"]["Objects"]["1144362499.63jubutler"]["Objects"]["1136424619.11dzlu"]', '1136424781.72dzlu': '["Objects"]["1135280776.06dzlu"]["Objects"]["1136424781.72dzlu"]', '1136425088.27dzlu': '["Objects"]["1135280776.06dzlu"]["Objects"]["1136425088.27dzlu"]', '1136425095.08dzlu': '["Objects"]["1135280776.06dzlu"]["Objects"]["1136425095.08dzlu"]', '1136425234.44dzlu': '["Objects"]["1135280776.06dzlu"]["Objects"]["1136425234.44dzlu"]', '1136426057.16dzlu': '["Objects"]["1135280776.06dzlu"]["Objects"]["1144362499.33jubutler"]["Objects"]["1136426057.16dzlu"]', '1136426351.31dzlu': '["Objects"]["1135280776.06dzlu"]["Objects"]["1136426351.31dzlu"]', '1136426380.89dzlu': '["Objects"]["1135280776.06dzlu"]["Objects"]["1136426380.89dzlu"]', '1136426422.47dzlu': '["Objects"]["1135280776.06dzlu"]["Objects"]["1136426422.47dzlu"]', '1136426725.14dzlu': '["Objects"]["1135280776.06dzlu"]["Objects"]["1136426725.14dzlu"]', '1136426742.25dzlu': '["Objects"]["1135280776.06dzlu"]["Objects"]["1136426742.25dzlu"]', '1137608568.63dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1137608568.63dxschafe"]', '1137608568.63dxschafe0': '["Objects"]["1135280776.06dzlu"]["Objects"]["1137608568.63dxschafe"]["Objects"]["1137608568.63dxschafe0"]', '1137608982.7dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1137608982.7dxschafe"]', '1137609219.8dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1137609219.8dxschafe"]', '1137609321.42dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1137609321.42dxschafe"]', '1137609327.59dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1137609327.59dxschafe"]', '1137609343.8dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1137609343.8dxschafe"]', '1137609393.08dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1137609393.08dxschafe"]', '1137610057.31dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1137610057.31dxschafe"]', '1137610228.36dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1137610228.36dxschafe"]', '1137610308.83dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1137610308.83dxschafe"]', '1137610374.89dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1137610308.83dxschafe"]["Objects"]["1137610374.89dxschafe"]', '1137610539.64dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1137610539.64dxschafe"]', '1137611262.78dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1137611262.78dxschafe"]', 
'1137611361.14dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1137611361.14dxschafe"]', '1137611475.77dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1137611475.77dxschafe"]', '1137611477.19dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1137611477.19dxschafe"]', '1137611840.97dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1137611840.97dxschafe"]', '1137612056.77dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1137612056.77dxschafe"]', '1137612099.81dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1137612099.81dxschafe"]', '1137612114.77dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1137612099.81dxschafe"]["Objects"]["1137612114.77dxschafe"]', '1137612356.78dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1137612356.78dxschafe"]', '1137612363.58dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1137612363.58dxschafe"]', '1137612448.58dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1137612448.58dxschafe"]', '1137612699.33dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1137612699.33dxschafe"]', '1137612891.16dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1137612891.16dxschafe"]', '1137612944.11dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1137612944.11dxschafe"]', '1137612995.44dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1137612995.44dxschafe"]', '1137613150.16dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1137613150.16dxschafe"]', '1137613374.63dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1144362499.33jubutler"]["Objects"]["1137613374.63dxschafe"]', '1137613458.7dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1137613458.7dxschafe"]', '1137613686.39dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1144362499.15jubutler"]["Objects"]["1137613686.39dxschafe"]', '1137613748.45dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1144362499.15jubutler"]["Objects"]["1137613748.45dxschafe"]', '1137613781.44dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1144362499.15jubutler"]["Objects"]["1137613781.44dxschafe"]', '1137613814.03dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1144362499.15jubutler"]["Objects"]["1137613814.03dxschafe"]', '1137614361.77dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1136405510.23dzlu"]["Objects"]["1137614361.77dxschafe"]', '1137614544.45dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1137614544.45dxschafe"]', '1137696037.8dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1137696037.8dxschafe"]', '1137786448.86dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1137786448.86dxschafe"]', '1137786628.98dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1137786628.98dxschafe"]', '1137804872.88dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1144362499.15jubutler"]["Objects"]["1137804872.88dxschafe"]', '1137804943.44dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1144362499.15jubutler"]["Objects"]["1137804943.44dxschafe"]', '1137805186.67dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1144362499.15jubutler"]["Objects"]["1137805186.67dxschafe"]', '1137805871.28dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1144362499.33jubutler"]["Objects"]["1137805871.28dxschafe"]', '1137805912.08dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1144362499.33jubutler"]["Objects"]["1137805912.08dxschafe"]', '1137806064.89dxschafe': 
'["Objects"]["1135280776.06dzlu"]["Objects"]["1144362499.33jubutler"]["Objects"]["1137806064.89dxschafe"]', '1137806131.59dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1137806131.59dxschafe"]', '1137806271.52dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1137806271.52dxschafe"]', '1137806371.73dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1136405358.8dzlu"]["Objects"]["1137806371.73dxschafe"]', '1137806572.22dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1137806572.22dxschafe"]', '1137806634.36dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1144362499.33jubutler"]["Objects"]["1137806634.36dxschafe"]', '1137806680.44dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1144362499.33jubutler"]["Objects"]["1137806680.44dxschafe"]', '1137806724.17dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1136416077.39dzlu"]["Objects"]["1137806724.17dxschafe"]', '1137806828.39dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1136405358.8dzlu"]["Objects"]["1137806828.39dxschafe"]', '1137806913.77dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1136405358.8dzlu"]["Objects"]["1137806913.77dxschafe"]', '1137807096.63dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1137807096.63dxschafe"]', '1137807403.13dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1137807403.13dxschafe"]', '1137807467.02dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1137807467.02dxschafe"]', '1137807509.48dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1137807509.48dxschafe"]', '1137808334.67dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1137808334.67dxschafe"]', '1137808645.7dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1144362499.33jubutler"]["Objects"]["1137808645.7dxschafe"]', '1137809588.72dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1137809588.72dxschafe"]', '1137809702.98dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1137809702.98dxschafe"]', '1137809784.5dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1144362499.33jubutler"]["Objects"]["1137809784.5dxschafe"]', '1137809822.97dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1136405421.64dzlu"]["Objects"]["1137809822.97dxschafe"]', '1137809887.95dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1144362499.33jubutler"]["Objects"]["1137809887.95dxschafe"]', '1137810004.47dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1137810004.47dxschafe"]', '1137810131.5dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1136415300.0dzlu"]["Objects"]["1137810131.5dxschafe"]', '1137810301.38dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1137810301.38dxschafe"]', '1137810377.2dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1137810377.2dxschafe"]', '1137810715.7dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1144362499.33jubutler"]["Objects"]["1137810715.7dxschafe"]', '1137810791.5dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1137810791.5dxschafe"]', '1137810869.36dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1137810869.36dxschafe"]', '1137810917.73dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1137810917.73dxschafe"]', '1137811001.89dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1144362499.33jubutler"]["Objects"]["1137811001.89dxschafe"]', '1137811050.8dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1144362499.33jubutler"]["Objects"]["1137811050.8dxschafe"]', '1137811234.52dxschafe': 
'["Objects"]["1135280776.06dzlu"]["Objects"]["1137811234.52dxschafe"]', '1137811384.33dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1136405045.67dzlu"]["Objects"]["1137811384.33dxschafe"]', '1137811457.28dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1136405045.67dzlu"]["Objects"]["1137811457.28dxschafe"]', '1137811756.92dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1137811756.92dxschafe"]', '1137811785.95dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1137811785.95dxschafe"]', '1137811838.45dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1137811838.45dxschafe"]', '1137811971.63dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1137811971.63dxschafe"]', '1137811978.38dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1137811978.38dxschafe"]', '1137811986.05dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1137811986.05dxschafe"]', '1137811999.59dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1137811999.59dxschafe"]', '1137812004.25dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1137812004.25dxschafe"]', '1137812037.88dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1137812037.88dxschafe"]', '1137812042.97dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1137812042.97dxschafe"]', '1137812044.56dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1137812044.56dxschafe"]', '1137812061.77dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1137812061.77dxschafe"]', '1137812068.19dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1137812068.19dxschafe"]', '1137812071.73dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1137812071.73dxschafe"]', '1137812079.69dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1137812079.69dxschafe"]', '1137812086.39dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1137812086.39dxschafe"]', '1137812163.47dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1137812163.47dxschafe"]', '1137812165.38dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1137812165.38dxschafe"]', '1137812167.22dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1137812167.22dxschafe"]', '1137812169.88dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1137812169.88dxschafe"]', '1137812172.67dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1137812172.67dxschafe"]', '1137812179.3dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1137812179.3dxschafe"]', '1137812192.33dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1137812192.33dxschafe"]', '1137812196.17dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1137812196.17dxschafe"]', '1137812199.53dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1137812199.53dxschafe"]', '1137812203.89dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1137812203.89dxschafe"]', '1137812205.42dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1137812205.42dxschafe"]', '1137812218.56dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1137812218.56dxschafe"]', '1137812221.2dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1137812221.2dxschafe"]', '1137812229.81dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1137812229.81dxschafe"]', '1137812869.47dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1137812869.47dxschafe"]', '1137812970.61dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1137812970.61dxschafe"]', '1137812972.64dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1137812972.64dxschafe"]', '1137812983.08dxschafe': 
'["Objects"]["1135280776.06dzlu"]["Objects"]["1137812983.08dxschafe"]', '1137815176.44dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1144362499.15jubutler"]["Objects"]["1137815176.44dxschafe"]', '1138062672.0dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1138062672.0dxschafe"]', '1138062672.56dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1138062672.56dxschafe"]', '1138062673.06dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1137610308.83dxschafe"]["Objects"]["1138062673.06dxschafe"]', '1138062673.09dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1137610308.83dxschafe"]["Objects"]["1138062673.09dxschafe"]', '1138062673.13dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1138062673.13dxschafe"]', '1138062673.16dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1138062673.16dxschafe"]', '1138062673.22dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1138062673.22dxschafe"]', '1138062673.25dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1138062673.25dxschafe"]', '1138062673.28dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1138062673.28dxschafe"]', '1138062673.31dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1138062673.31dxschafe"]', '1138062673.38dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1138062673.38dxschafe"]', '1138062673.41dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1138062673.41dxschafe"]', '1138062673.44dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1138062673.44dxschafe"]', '1138062673.45dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1138062673.45dxschafe"]', '1138062673.53dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1138062673.53dxschafe"]', '1138062673.55dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1138062673.55dxschafe"]', '1138062673.58dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1138062673.58dxschafe"]', '1138062673.66dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1138062673.66dxschafe"]', '1138062673.67dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1138062673.67dxschafe"]', '1138062673.78dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1138062673.78dxschafe"]', '1138062673.7dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1138062673.7dxschafe"]', '1138062673.83dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1138062673.83dxschafe"]', '1138062673.89dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1138062673.89dxschafe"]', '1138062673.8dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1138062673.8dxschafe"]', '1138062673.92dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1138062673.92dxschafe"]', '1138062674.02dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1138062674.02dxschafe"]', '1138062674.05dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1138062674.05dxschafe"]', '1138062674.08dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1138062674.08dxschafe"]', '1138062674.19dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1138062674.19dxschafe"]', '1138062674.22dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1138062674.22dxschafe"]', '1138062674.25dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1138062674.25dxschafe"]', '1138062674.28dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1138062674.28dxschafe"]', '1138062835.84dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1138062835.84dxschafe"]', '1138062881.41dxschafe': 
'["Objects"]["1135280776.06dzlu"]["Objects"]["1138062881.41dxschafe"]', '1138062921.28dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1138062921.28dxschafe"]', '1138063140.55dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1138063140.55dxschafe"]', '1138235338.59dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1144362499.15jubutler"]["Objects"]["1138235338.59dxschafe"]', '1138236384.59dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1144362499.15jubutler"]["Objects"]["1138236384.59dxschafe"]', '1138237076.48dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1144362499.15jubutler"]["Objects"]["1138237076.48dxschafe"]', '1138302833.38dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1138302833.38dxschafe"]', '1138302835.94dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1138302835.94dxschafe"]', '1138302857.72dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1138302857.72dxschafe"]', '1138302885.47dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1138302885.47dxschafe"]', '1138302910.5dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1138302910.5dxschafe"]', '1138302936.0dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1138302936.0dxschafe"]', '1138302984.48dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1138302984.48dxschafe"]', '1138303002.52dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1138303002.52dxschafe"]', '1138303005.69dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1138303005.69dxschafe"]', '1138313303.97sdnaik': '["Objects"]["1135280776.06dzlu"]["Objects"]["1138313303.97sdnaik"]', '1138320260.22dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1138320260.22dxschafe"]', '1138320304.84dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1138320304.84dxschafe"]', '1138320363.55dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1138320363.55dxschafe"]', '1138320493.22dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1138320493.22dxschafe"]', '1138320513.19dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1138320513.19dxschafe"]', '1138320517.11dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1138320513.19dxschafe"]["Objects"]["1138320517.11dxschafe"]', '1138320636.58dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1138320636.58dxschafe"]', '1138320640.36dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1138320640.36dxschafe"]', '1138320649.98dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1138320649.98dxschafe"]', '1138320661.88dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1138302910.5dxschafe"]["Objects"]["1138320661.88dxschafe"]', '1138320691.84dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1138320691.84dxschafe"]', '1138320711.06dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1138320711.06dxschafe"]', '1138320832.58dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1138320832.58dxschafe"]', '1138320841.61dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1138320841.61dxschafe"]', '1138320921.48dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1138320921.48dxschafe"]', '1138320923.02dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1138320923.02dxschafe"]', '1138320941.98dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1138320941.98dxschafe"]', '1138320944.53dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1138320944.53dxschafe"]', '1138320945.41dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1138320945.41dxschafe"]', 
'1138321117.05dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1138321117.05dxschafe"]', '1138321267.27dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1138321267.27dxschafe"]', '1138321278.73dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1138321278.73dxschafe"]', '1138321281.98dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1138321281.98dxschafe"]', '1138321296.0dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1138321296.0dxschafe"]', '1138321296.64dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1138321296.64dxschafe"]', '1138321297.41dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1138321297.41dxschafe"]', '1138321298.98dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1138321298.98dxschafe"]', '1138321323.97dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1138321323.97dxschafe"]', '1138321334.66dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1138321334.66dxschafe"]', '1138321347.94dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1138321347.94dxschafe"]', '1138321366.27dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1138321366.27dxschafe"]', '1138321521.69dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1138321521.69dxschafe"]', '1138321535.25dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1138321535.25dxschafe"]', '1138322082.3dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1138322082.3dxschafe"]', '1138322153.2dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1138322153.2dxschafe"]', '1138322162.81dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1138322162.81dxschafe"]', '1138322170.16dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1138322170.16dxschafe"]', '1138322171.02dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1138322171.02dxschafe"]', '1138322178.02dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1138322178.02dxschafe"]', '1138322184.19dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1138322184.19dxschafe"]', '1138322184.67dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1138322184.67dxschafe"]', '1138322186.06dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1138322186.06dxschafe"]', '1138322192.33dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1138322192.33dxschafe"]', '1138322206.23dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1138322206.23dxschafe"]', '1138322206.7dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1138322206.7dxschafe"]', '1138322208.19dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1138322208.19dxschafe"]', '1138322220.91dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1138322220.91dxschafe"]', '1138322221.28dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1138322221.28dxschafe"]', '1138322227.23dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1138322227.23dxschafe"]', '1138322230.16dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1138322230.16dxschafe"]', '1138322230.55dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1138322230.55dxschafe"]', '1138322230.91dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1138322230.91dxschafe"]', '1138322231.42dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1138322231.42dxschafe"]', '1138322238.98dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1138322238.98dxschafe"]', '1138322252.33dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1144362499.33jubutler"]["Objects"]["1138322252.33dxschafe"]', '1138324126.88dxschafe': 
'["Objects"]["1135280776.06dzlu"]["Objects"]["1138324126.88dxschafe"]', '1138324144.91dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1138324144.91dxschafe"]', '1138324161.06dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1138324161.06dxschafe"]', '1138324180.14dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1138324180.14dxschafe"]', '1138324211.02dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1138324211.02dxschafe"]', '1138324232.17dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1138324232.17dxschafe"]', '1138331085.67dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1144362499.15jubutler"]["Objects"]["1138331085.67dxschafe"]', '1138331085.67dxschafe0': '["Objects"]["1135280776.06dzlu"]["Objects"]["1138331085.67dxschafe"]["Objects"]["1138331085.67dxschafe0"]', '1138331394.53dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1138331394.53dxschafe"]', '1138332026.81dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1138332026.81dxschafe"]', '1138332353.31dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1138332353.31dxschafe"]', '1138333953.2dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1144362499.15jubutler"]["Objects"]["1138333953.2dxschafe"]', '1138333987.34dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1144362499.15jubutler"]["Objects"]["1138333987.34dxschafe"]', '1138334081.11dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1144362499.15jubutler"]["Objects"]["1138334081.11dxschafe"]', '1138334199.27dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1144362499.15jubutler"]["Objects"]["1138334199.27dxschafe"]', '1138388039.38dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1144362499.15jubutler"]["Objects"]["1138388039.38dxschafe"]', '1138388168.08dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1144362499.15jubutler"]["Objects"]["1138388168.08dxschafe"]', '1138388531.19dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1144362499.15jubutler"]["Objects"]["1138388531.19dxschafe"]', '1138388588.09dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1144362499.15jubutler"]["Objects"]["1138388588.09dxschafe"]', '1138388642.84dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1144362499.15jubutler"]["Objects"]["1138388642.84dxschafe"]', '1138389026.84dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1144362499.15jubutler"]["Objects"]["1138389026.84dxschafe"]', '1138389053.84dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1144362499.15jubutler"]["Objects"]["1138389053.84dxschafe"]', '1138389054.97dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1144362499.15jubutler"]["Objects"]["1138389054.97dxschafe"]', '1138389058.48dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1144362499.15jubutler"]["Objects"]["1138389058.48dxschafe"]', '1138389081.77dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1138389081.77dxschafe"]', '1138389083.92dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1138389083.92dxschafe"]', '1138389216.22dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1138389216.22dxschafe"]', '1138389261.84dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1138389261.84dxschafe"]', '1138389277.3dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1138389277.3dxschafe"]', '1138389335.77dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1138389335.77dxschafe"]', '1138389396.72dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1138389396.72dxschafe"]', '1138389506.56dxschafe': 
'["Objects"]["1135280776.06dzlu"]["Objects"]["1144362499.15jubutler"]["Objects"]["1138389506.56dxschafe"]', '1138389517.22dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1144362499.15jubutler"]["Objects"]["1138389517.22dxschafe"]', '1138389571.83dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1144362499.15jubutler"]["Objects"]["1138389571.83dxschafe"]', '1138389731.03dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1144362499.15jubutler"]["Objects"]["1138389731.03dxschafe"]', '1138389735.67dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1144362499.15jubutler"]["Objects"]["1138389735.67dxschafe"]', '1138389791.17dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1144362499.15jubutler"]["Objects"]["1138389791.17dxschafe"]', '1138389895.67dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1144362499.15jubutler"]["Objects"]["1138389895.67dxschafe"]', '1138389900.19dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1144362499.15jubutler"]["Objects"]["1138389900.19dxschafe"]', '1138390780.11dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1144362499.15jubutler"]["Objects"]["1138390780.11dxschafe"]', '1138391578.09dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1144362499.15jubutler"]["Objects"]["1138391578.09dxschafe"]', '1138391763.7dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1144362499.15jubutler"]["Objects"]["1138391763.7dxschafe"]', '1138391801.88dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1144362499.15jubutler"]["Objects"]["1138391801.88dxschafe"]', '1138391837.27dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1144362499.15jubutler"]["Objects"]["1138391837.27dxschafe"]', '1138391934.66dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1144362499.15jubutler"]["Objects"]["1138391934.66dxschafe"]', '1138392072.44dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1144362499.15jubutler"]["Objects"]["1138392072.44dxschafe"]', '1138392072.44dxschafe0': '["Objects"]["1135280776.06dzlu"]["Objects"]["1138392072.44dxschafe"]["Objects"]["1138392072.44dxschafe0"]', '1138393950.38dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1144362499.15jubutler"]["Objects"]["1138393950.38dxschafe"]', '1138394586.5dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1144362499.15jubutler"]["Objects"]["1138394586.5dxschafe"]', '1138394804.84dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1144362499.15jubutler"]["Objects"]["1138394804.84dxschafe"]', '1138395108.81dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1144362499.15jubutler"]["Objects"]["1138395108.81dxschafe"]', '1138395123.31dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1144362499.15jubutler"]["Objects"]["1138395123.31dxschafe"]', '1138400345.72dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1138400345.72dxschafe"]', '1138400763.67dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1138400763.67dxschafe"]', '1138400773.91dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1138400773.91dxschafe"]', '1138400798.94dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1138400798.94dxschafe"]', '1138401436.53dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1144362499.15jubutler"]["Objects"]["1138401436.53dxschafe"]', '1138402377.67dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1136405216.39dzlu"]["Objects"]["1138402377.67dxschafe"]', '1138402892.06dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1144362499.15jubutler"]["Objects"]["1138402892.06dxschafe"]', 
'1138403067.3dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1144362499.15jubutler"]["Objects"]["1138403067.3dxschafe"]', '1138403165.69dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1144362499.15jubutler"]["Objects"]["1138403165.69dxschafe"]', '1138403541.34dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1144362499.33jubutler"]["Objects"]["1138403541.34dxschafe"]', '1138403904.88dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1136405115.39dzlu"]["Objects"]["1138403904.88dxschafe"]', '1138404283.64dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1144362499.33jubutler"]["Objects"]["1138404283.64dxschafe"]', '1138404327.72dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1144362499.33jubutler"]["Objects"]["1138404327.72dxschafe"]', '1138404604.47dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1144362499.33jubutler"]["Objects"]["1138404604.47dxschafe"]', '1138404941.83dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1138404941.83dxschafe"]', '1138404947.05dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1138404947.05dxschafe"]', '1138404952.7dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1138404952.7dxschafe"]', '1138404955.5dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1138404955.5dxschafe"]', '1138404962.67dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1138404962.67dxschafe"]', '1138404965.13dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1138404965.13dxschafe"]', '1138404983.63dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1144362499.63jubutler"]["Objects"]["1138404983.63dxschafe"]', '1138404990.83dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1138404990.83dxschafe"]', '1138404994.17dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1138404994.17dxschafe"]', '1138404996.56dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1138404996.56dxschafe"]', '1138404997.58dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1138404997.58dxschafe"]', '1138405008.59dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1138405008.59dxschafe"]', '1138405009.48dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1138405009.48dxschafe"]', '1138405014.53dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1138405014.53dxschafe"]', '1138405547.0dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1138405547.0dxschafe"]', '1138405584.83dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1138405584.83dxschafe"]', '1138405612.78dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1144362499.63jubutler"]["Objects"]["1138405612.78dxschafe"]', '1138405639.5dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1138405639.5dxschafe"]', '1138405669.55dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1138405669.55dxschafe"]', '1138406264.66dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1138406264.66dxschafe"]', '1138406308.7dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1138406308.7dxschafe"]', '1138406395.17dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1138406395.17dxschafe"]', '1138406441.97dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1138404994.17dxschafe"]["Objects"]["1138406441.97dxschafe"]', '1138406484.72dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1138404994.17dxschafe"]["Objects"]["1138406484.72dxschafe"]', '1138406506.36dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1138404994.17dxschafe"]["Objects"]["1138406506.36dxschafe"]', '1138406690.55dxschafe': 
'["Objects"]["1135280776.06dzlu"]["Objects"]["1138406690.55dxschafe"]', '1138406692.61dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1138406692.61dxschafe"]', '1138406774.14dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1138406774.14dxschafe"]', '1138406881.92dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1138406881.92dxschafe"]', '1138407288.64dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1138407288.64dxschafe"]', '1138409038.58dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1144362499.15jubutler"]["Objects"]["1138409038.58dxschafe"]', '1138410112.63dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1144362499.15jubutler"]["Objects"]["1138410112.63dxschafe"]', '1138410875.87dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1144362499.15jubutler"]["Objects"]["1138410875.87dxschafe"]', '1138411180.17dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1138411180.17dxschafe"]', '1138411274.68dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1138411274.68dxschafe"]', '1138411280.35dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1138411280.35dxschafe"]', '1138411455.04dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1138411455.04dxschafe"]', '1138411521.86dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1138411521.86dxschafe"]', '1138420222.41dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1144362499.15jubutler"]["Objects"]["1138420222.41dxschafe"]', '1138420265.62dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1144362499.15jubutler"]["Objects"]["1138420265.62dxschafe"]', '1138420321.09dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1144362499.15jubutler"]["Objects"]["1138420321.09dxschafe"]', '1138647059.31dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1144362499.15jubutler"]["Objects"]["1138647059.31dxschafe"]', '1138647125.22dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1144362499.15jubutler"]["Objects"]["1138647125.22dxschafe"]', '1138648022.36dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1138648022.36dxschafe"]', '1138648770.65dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1144362499.15jubutler"]["Objects"]["1138648770.65dxschafe"]', '1138649836.96dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1144362499.15jubutler"]["Objects"]["1138649836.96dxschafe"]', '1138650056.86dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1144362499.15jubutler"]["Objects"]["1138650056.86dxschafe"]', '1138650910.16dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1144362499.15jubutler"]["Objects"]["1138650910.16dxschafe"]', '1138657546.06dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1138657546.06dxschafe"]', '1138657845.1dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1144362499.15jubutler"]["Objects"]["1138657845.1dxschafe"]', '1138658204.28dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1144362499.15jubutler"]["Objects"]["1138658204.28dxschafe"]', '1138660565.68dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1144362499.15jubutler"]["Objects"]["1138660565.68dxschafe"]', '1138660671.69dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1144362499.15jubutler"]["Objects"]["1138660671.69dxschafe"]', '1138661112.78dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1144362499.15jubutler"]["Objects"]["1138661112.78dxschafe"]', '1138667460.66dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1144362499.15jubutler"]["Objects"]["1138667460.66dxschafe"]', 
'1138695512.45jubutler': '["Objects"]["1135280776.06dzlu"]["Objects"]["1138695512.45jubutler"]', '1138721865.99dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1138721865.99dxschafe"]', '1138722751.38dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1138722751.38dxschafe"]', '1138722751.38dxschafe0': '["Objects"]["1135280776.06dzlu"]["Objects"]["1138722751.38dxschafe"]["Objects"]["1138722751.38dxschafe0"]', '1138725193.96dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1144362499.15jubutler"]["Objects"]["1138725193.96dxschafe"]', '1138725240.0dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1144362499.15jubutler"]["Objects"]["1138725240.0dxschafe"]', '1138727343.41dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1144362499.15jubutler"]["Objects"]["1138727343.41dxschafe"]', '1138727460.57dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1138389506.56dxschafe"]["Objects"]["1138727460.57dxschafe"]', '1138728593.36dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1144362499.15jubutler"]["Objects"]["1138728593.36dxschafe"]', '1138730803.13dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1144362499.15jubutler"]["Objects"]["1138730803.13dxschafe"]', '1138732475.17dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1138732475.17dxschafe"]', '1138732687.6dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1138732687.6dxschafe"]', '1138732820.2dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1138732820.2dxschafe"]', '1138732869.65dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1138732869.65dxschafe"]', '1138733108.16dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1138733108.16dxschafe"]', '1138739839.69dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1138739839.69dxschafe"]', '1138740107.6dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1138740107.6dxschafe"]', '1138740198.35dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1144362499.33jubutler"]["Objects"]["1138740198.35dxschafe"]', '1138740362.12dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1144362499.33jubutler"]["Objects"]["1138740362.12dxschafe"]', '1138741910.42dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1144362499.33jubutler"]["Objects"]["1138741910.42dxschafe"]', '1138742820.76dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1144362499.15jubutler"]["Objects"]["1138742820.76dxschafe"]', '1138743352.03dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1144362499.15jubutler"]["Objects"]["1138743352.03dxschafe"]', '1141416648.8sdnaik': '["Objects"]["1135280776.06dzlu"]["Objects"]["1141416648.8sdnaik"]', '1142037113.06dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1142037113.06dxschafe"]', '1142037113.09dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1142037113.09dxschafe"]', '1142040049.39dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1144362499.15jubutler"]["Objects"]["1142040049.39dxschafe"]', '1142040233.83dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1144362499.15jubutler"]["Objects"]["1142040233.83dxschafe"]', '1142040233.83dxschafe0': '["Objects"]["1135280776.06dzlu"]["Objects"]["1144362499.15jubutler"]["Objects"]["1142040233.83dxschafe"]', '1142040233.85dxschafe': '["Objects"]["1135280776.06dzlu"]["Objects"]["1142040233.83dxschafe"]["Objects"]["1142040233.85dxschafe"]', '1142290985.73sdnaik': '["Objects"]["1135280776.06dzlu"]["Objects"]["1138313303.97sdnaik"]["Objects"]["1142290985.73sdnaik"]', '1142290985.8sdnaik': 
'["Objects"]["1135280776.06dzlu"]["Objects"]["1138313303.97sdnaik"]["Objects"]["1142290985.8sdnaik"]', '1142291141.63sdnaik': '["Objects"]["1135280776.06dzlu"]["Objects"]["1142291141.63sdnaik"]', '1142291141.64sdnaik': '["Objects"]["1135280776.06dzlu"]["Objects"]["1142291141.63sdnaik"]["Objects"]["1142291141.64sdnaik"]', '1142291141.66sdnaik': '["Objects"]["1135280776.06dzlu"]["Objects"]["1142291141.63sdnaik"]["Objects"]["1142291141.66sdnaik"]', '1142291275.14sdnaik': '["Objects"]["1135280776.06dzlu"]["Objects"]["1142291275.14sdnaik"]', '1142291275.16sdnaik': '["Objects"]["1135280776.06dzlu"]["Objects"]["1142291275.14sdnaik"]["Objects"]["1142291275.16sdnaik"]', '1142291275.17sdnaik': '["Objects"]["1135280776.06dzlu"]["Objects"]["1142291275.14sdnaik"]["Objects"]["1142291275.17sdnaik"]', '1142307096.22sdnaik': '["Objects"]["1135280776.06dzlu"]["Objects"]["1142307096.22sdnaik"]', '1144362499.15jubutler': '["Objects"]["1135280776.06dzlu"]["Objects"]["1144362499.15jubutler"]', '1144362499.33jubutler': '["Objects"]["1135280776.06dzlu"]["Objects"]["1144362499.33jubutler"]', '1144362499.63jubutler': '["Objects"]["1135280776.06dzlu"]["Objects"]["1144362499.63jubutler"]'}} | 30,902.666667 | 185,138 | 0.65819 | 25,953 | 185,416 | 4.651562 | 0.071129 | 0.02543 | 0.025546 | 0.022829 | 0.649111 | 0.521081 | 0.487939 | 0.41169 | 0.366478 | 0.343939 | 0 | 0.308365 | 0.066337 | 185,416 | 6 | 185,138 | 30,902.666667 | 0.388982 | 0.001197 | 0 | 0 | 0 | 7.5 | 0.555656 | 0.390124 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.5 | 0 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 7 |
a9d0be9a1c0fac1619475415fb4b924136299f93 | 197 | py | Python | tests/tests/utils.py | dldevinc/paper-forms | 6430382a2c369ef346e702d3644f23eba7bd8354 | [
"BSD-3-Clause"
] | 1 | 2021-05-12T06:50:44.000Z | 2021-05-12T06:50:44.000Z | tests/tests/utils.py | dldevinc/paper-forms | 6430382a2c369ef346e702d3644f23eba7bd8354 | [
"BSD-3-Clause"
] | null | null | null | tests/tests/utils.py | dldevinc/paper-forms | 6430382a2c369ef346e702d3644f23eba7bd8354 | [
"BSD-3-Clause"
] | null | null | null | from paper_forms.boundfield import BoundField
from paper_forms.utils import get_composer
def get_bound_field(form, name):
return BoundField(form, form.fields[name], name, get_composer(form))
| 28.142857 | 72 | 0.807107 | 29 | 197 | 5.275862 | 0.517241 | 0.117647 | 0.183007 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.111675 | 197 | 6 | 73 | 32.833333 | 0.874286 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0 | 0.5 | 0.25 | 1 | 0 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 7 |
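For context on the tests/tests/utils.py record above: get_bound_field wraps a named form field in paper-forms' BoundField, passing along the composer that get_composer resolves for the form. A minimal usage sketch follows; SampleForm and its single CharField are hypothetical stand-ins and not part of the record.

# Illustrative sketch only -- SampleForm is a made-up Django form; get_bound_field
# is the helper defined in the record above.
from django import forms

class SampleForm(forms.Form):
    name = forms.CharField()

form = SampleForm()
bound = get_bound_field(form, "name")
# bound is a paper_forms BoundField constructed with the composer returned by get_composer(form).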
a9d5636068a17c4c194e5e21be411f1148398ea3 | 124 | py | Python | sweet/preprocess/gaussian.py | charlienewey/penumbra-python | a848adf5628a37339354f5ed5a747b03cc4df9bd | [
"BSD-3-Clause"
] | 1 | 2017-10-16T03:47:51.000Z | 2017-10-16T03:47:51.000Z | sweet/preprocess/gaussian.py | charlienewey/penumbra-python | a848adf5628a37339354f5ed5a747b03cc4df9bd | [
"BSD-3-Clause"
] | null | null | null | sweet/preprocess/gaussian.py | charlienewey/penumbra-python | a848adf5628a37339354f5ed5a747b03cc4df9bd | [
"BSD-3-Clause"
] | null | null | null | from skimage.filter import gaussian_filter as _gaussian
def gaussian(img, sigma=8):
return _gaussian(img, sigma=sigma)
| 24.8 | 55 | 0.782258 | 18 | 124 | 5.222222 | 0.611111 | 0.234043 | 0.340426 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.009346 | 0.137097 | 124 | 4 | 56 | 31 | 0.869159 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0.333333 | 0.333333 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 8 |
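For context on the sweet/preprocess/gaussian.py record above: it is a one-line wrapper around scikit-image's Gaussian filter, imported from the legacy skimage.filter module path (later releases renamed the module to skimage.filters). A rough usage sketch, assuming a scikit-image version old enough to still ship skimage.filter; the input image is synthetic and invented purely for illustration.

# Illustrative sketch only -- the image below is random data, not a real input.
import numpy as np

img = np.random.rand(128, 128)            # hypothetical grayscale image in [0, 1]
heavily_blurred = gaussian(img)           # wrapper default: sigma=8
lightly_blurred = gaussian(img, sigma=2)  # smaller sigma preserves more detail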
e7309ce61809ba27776d64bc2a278b77cb070f9d | 23,771 | py | Python | tests/test_matchers.py | waiteperspectives/Morelia | e6e6180d688c6bdf538f6b43a97755c76329646a | [
"MIT"
] | 17 | 2015-11-18T02:49:47.000Z | 2019-12-22T08:46:26.000Z | tests/test_matchers.py | waiteperspectives/Morelia | e6e6180d688c6bdf538f6b43a97755c76329646a | [
"MIT"
] | 230 | 2015-04-28T16:48:20.000Z | 2022-03-25T13:01:07.000Z | tests/test_matchers.py | waiteperspectives/Morelia | e6e6180d688c6bdf538f6b43a97755c76329646a | [
"MIT"
] | 8 | 2015-04-16T07:45:35.000Z | 2019-06-20T17:09:49.000Z | # -*- coding: utf-8 -*-
import unittest
from unittest.mock import MagicMock, Mock, patch, sentinel
from morelia.decorators import tags
from morelia.matchers import (
IStepMatcher,
MethodNameStepMatcher,
ParseStepMatcher,
RegexpStepMatcher,
)
class TestingStepMatcher(IStepMatcher):
def match(self, predicate, augmented_predicate, step_methods):
pass # pragma: nocover
def suggest(self, predicate):
pass # pragma: nocover
@tags(["unit"])
class IStepMatcherAddMatcherTestCase(unittest.TestCase):
""" Test :py:meth:`IStepMatcher.add_matcher`. """
def test_should_add_matcher(self):
""" Scenario: add matcher """
# Arrange
suite = Mock()
matcher1 = TestingStepMatcher(suite)
matcher2 = TestingStepMatcher(suite)
# Act
matcher1.add_matcher(matcher2)
# Assert
assert matcher1._next == matcher2
def test_should_delegate_adding_matcher(self):
""" Scenario: delegating add """
# Arrange
suite = Mock()
matcher1 = TestingStepMatcher(suite)
matcher2 = TestingStepMatcher(suite)
matcher3 = TestingStepMatcher(suite)
matcher1._next = matcher2
# Act
matcher1.add_matcher(matcher3)
# Assert
assert matcher1._next == matcher2
assert matcher2._next == matcher3
def test_should_chain_adding_matchers(self):
""" Scenario: chaining """
# Arrange
suite = Mock()
matcher1 = TestingStepMatcher(suite)
matcher2 = TestingStepMatcher(suite)
matcher3 = TestingStepMatcher(suite)
# Act
matcher1.add_matcher(matcher2).add_matcher(matcher3)
# Assert
assert matcher1._next == matcher2
assert matcher2._next == matcher3
@tags(["unit"])
class IStepMatcherFindTestCase(unittest.TestCase):
""" Test :py:meth:`IStepMatcher.find`. """
def test_should_find_method_when_step_methods_given(self):
""" Scenario: find method when step methods given"""
# Arrange
suite = Mock()
obj = TestingStepMatcher(suite)
# Act
step_methods = sentinel.step_methods
predicate = sentinel.predicate
augmented_predicate = sentinel.augmented_predicate
with patch.object(obj, "match") as match:
match.return_value = (sentinel.method, (), {})
method, args, kwargs = obj.find(
predicate, augmented_predicate, step_methods
)
match.assert_called_once_with(predicate, augmented_predicate, step_methods)
# Assert
assert method == sentinel.method
def test_should_find_method_when_no_step_methods_given(self):
""" Scenario: no step methods """
# Arrange
suite = Mock()
obj = TestingStepMatcher(suite)
# Act
predicate = sentinel.predicate
augmented_predicate = sentinel.augmented_predicate
with patch.object(obj, "match") as match:
with patch.object(obj, "_get_all_step_methods") as _get_all_step_methods:
_get_all_step_methods.return_value = sentinel.steps
match.return_value = (sentinel.method, (), {})
method, args, kwargs = obj.find(predicate, augmented_predicate)
match.assert_called_once_with(
predicate, augmented_predicate, sentinel.steps
)
# Assert
assert method == sentinel.method
def test_should_not_find_method(self):
""" Scenario: not found """
# Arrange
suite = Mock()
obj = TestingStepMatcher(suite)
# Act
step_methods = sentinel.step_methods
predicate = sentinel.predicate
augmented_predicate = sentinel.augmented_predicate
with patch.object(obj, "match") as match:
match.return_value = (None, (), {})
method, args, kwargs = obj.find(
predicate, augmented_predicate, step_methods
)
match.assert_called_once_with(predicate, augmented_predicate, step_methods)
# Assert
assert method is None
def test_should_delegate_search(self):
""" Scenario: delegate """
# Arrange
suite = Mock()
matcher1 = TestingStepMatcher(suite)
matcher2 = TestingStepMatcher(suite)
matcher1.add_matcher(matcher2)
# Act
step_methods = sentinel.step_methods
predicate = sentinel.predicate
augmented_predicate = sentinel.augmented_predicate
with patch.object(matcher1, "match") as match:
match.return_value = (None, (), {})
with patch.object(matcher2, "find") as find:
find.return_value = (sentinel.method, (), {})
method, args, kwargs = matcher1.find(
predicate, augmented_predicate, step_methods
)
match.assert_called_once_with(
predicate, augmented_predicate, step_methods
)
# Assert
find.assert_called_once_with(
predicate, augmented_predicate, step_methods
)
assert method == sentinel.method
@tags(["unit"])
class IStepMatcherGetAllStepMethodsTestCase(unittest.TestCase):
""" Test :py:meth:`IStepMatcher._get_all_step_methods`. """
def test_should_return_steps_list(self):
""" Scenario: filtered step lists """
# Arrange
step_methods = ["step_method1", "step_method2"]
all_methods = step_methods + ["method3"]
suite = MagicMock()
suite.__dir__ = Mock(return_value=all_methods)
obj = TestingStepMatcher(suite)
# Act
result = obj._get_all_step_methods()
# Assert
assert result == step_methods
@tags(["unit"])
class MethodNameStepMatcherMatchTestCase(unittest.TestCase):
""" Test :py:meth:`MethodNameStepMatcher.match`. """
def test_should_return_method(self):
""" Scenario: match by name """
# Arrange
predicate = "my_milkshake"
augmented_predicate = "my_milkshake"
method_name = "step_%s" % predicate
methods = {method_name: sentinel.method}
suite = Mock(**methods)
obj = MethodNameStepMatcher(suite)
# Act
result_method, result_args, result_kwargs = obj.match(
predicate, augmented_predicate, methods.keys()
)
# Assert
assert result_method == sentinel.method
def test_should_return_none_if_method_not_found(self):
""" Scenario: no method """
# Arrange
predicate = "not there"
augmented_predicate = "my_milkshake"
method_name = "step_%s" % predicate
methods = {method_name: sentinel.method}
suite = Mock(**methods)
obj = MethodNameStepMatcher(suite)
# Act
result_method, result_args, result_kwargs = obj.match(
predicate, augmented_predicate, methods.keys()
)
# Assert
assert result_method is None
def test_should_return_none_if_method_name_too_short(self):
""" Scenario: method too short """
# Arrange
predicate = "my milk"
augmented_predicate = "my_milkshake"
method_name = "step_%s" % predicate
methods = {method_name: sentinel.method}
suite = Mock(**methods)
obj = MethodNameStepMatcher(suite)
# Act
result_method, result_args, result_kwargs = obj.match(
predicate, augmented_predicate, methods.keys()
)
# Assert
assert result_method is None
@tags(["unit"])
class MethodNameStepMatcherSuggestTestCase(unittest.TestCase):
""" Test :py:meth:`MethodNameStepMatcher.suggest`. """
def test_should_return_suggested_method(self):
""" Scenariusz: suggest """
# Arrange
obj = MethodNameStepMatcher(sentinel.suite)
pattern = " def step_%(method_name)s(self%(args)s):\n\n raise NotImplementedError('%(predicate)s')\n\n"
test_data = [
("tastes great", "tastes_great", ""),
("less filling", "less_filling", ""),
("line\nfeed", "line_feed", ""),
("tick'ed'", "tick_ed", ""),
("tastes great", "tastes_great", ""),
("argu<ment>al", "argu_ment_al", ""),
("arg<u>ment<al>", "arg_u_ment_al", ""),
('str"ing"', "str_ing", ""),
('"str"i"ngs"', "str_i_ngs", ""),
('enter "10" into', "enter_10_into", ""),
('enter "10" and "20" into', "enter_10_and_20_into", ""),
]
for predicate, method, args in test_data:
# Act
suggest, suggest_method, suggest_docstring = obj.suggest(predicate)
# Assert
expected = pattern % {
"method_name": method,
"args": args,
"predicate": predicate.replace("'", r"\'"),
}
assert suggest == expected
assert suggest_method == method
assert suggest_docstring == ""
@tags(["unit"])
class DocStringStepMatcherMatchTestCase(unittest.TestCase):
""" Test :py:meth:`DocStringStepMatcher.match`. """
def test_should_return_method_and_args(self):
""" Scenario: match by docstring """
# Arrange
predicate = "my milkshake brings all the boys to the yard"
augmented_predicate = "my milkshake brings all the boys to the yard"
method_name = "step_%s" % predicate
docstring = r"my milkshake brings all the (boys|girls) to (.*) yard"
method = Mock(__doc__=docstring)
methods = {method_name: method}
suite = Mock(**methods)
obj = RegexpStepMatcher(suite)
# Act
result_method, result_args, result_kwargs = obj.match(
predicate, augmented_predicate, methods.keys()
)
# Assert
assert result_method == method
assert result_args == ("boys", "the")
def test_should_return_method_and_kwargs(self):
""" Scenario: match by docstring with named groups """
# Arrange
predicate = "my milkshake brings all the boys to the yard"
augmented_predicate = "my milkshake brings all the boys to the yard"
method_name = "step_%s" % predicate
docstring = (
r"my milkshake brings all the (?P<who>boys|girls) to (?P<other>.*) yard"
)
method = Mock(__doc__=docstring)
methods = {method_name: method}
suite = Mock(**methods)
obj = RegexpStepMatcher(suite)
# Act
result_method, result_args, result_kwargs = obj.match(
predicate, augmented_predicate, methods.keys()
)
# Assert
assert result_method == method
assert result_kwargs == {"who": "boys", "other": "the"}
def test_should_return_method_and_kwargs_with_mixed_groups(self):
""" Scenario: match by docstring with named and not named groups """
# Arrange
predicate = "my milkshake brings all the boys to the yard"
augmented_predicate = "my milkshake brings all the boys to the yard"
method_name = "step_%s" % predicate
docstring = r"my milkshake brings all the (?P<who>boys|girls) to (.*) yard"
method = Mock(__doc__=docstring)
methods = {method_name: method}
suite = Mock(**methods)
obj = RegexpStepMatcher(suite)
# Act
result_method, result_args, result_kwargs = obj.match(
predicate, augmented_predicate, methods.keys()
)
# Assert
assert result_method == method
assert result_args == ()
assert result_kwargs == {"who": "boys"}
def test_should_return_none_if_docstring_not_matched(self):
""" Scenario: no match by docstring """
# Arrange
predicate = "not there"
augmented_predicate = "not there"
method_name = "step_%s" % predicate
docstring = r"my milkshake brings all the (boys|girls) to (.*) yard"
method = Mock(__doc__=docstring)
methods = {method_name: method}
suite = Mock(**methods)
obj = RegexpStepMatcher(suite)
# Act
result_method, result_args, result_kwargs = obj.match(
predicate, augmented_predicate, methods.keys()
)
# Assert
assert result_method is None
assert result_args == ()
def test_should_return_none_if_no_docstring(self):
""" Scenario: no match by docstring """
# Arrange
predicate = "my milkshake brings all the boys to the yard"
augmented_predicate = "my milkshake brings all the boys to the yard"
method_name = "step_%s" % predicate
method = Mock(__doc__="")
methods = {method_name: method}
suite = Mock(**methods)
obj = RegexpStepMatcher(suite)
# Act
result_method, result_args, result_kwargs = obj.match(
predicate, augmented_predicate, methods.keys()
)
# Assert
assert result_method is None
assert result_args == ()
def test_should_return_second_method_and_matches(self):
""" Scenario: many methods """
# Arrange
predicate = "my milkshake brings all the boys to the yard"
augmented_predicate = "my milkshake brings all the boys to the yard"
method_name = "step_%s" % predicate
docstring = r"my milkshake brings all the (boys|girls) to (.*) yard"
method = Mock(__doc__=docstring)
methods = {method_name: method, "step_other": sentinel.method}
suite = Mock(**methods)
obj = RegexpStepMatcher(suite)
# Act
result_method, result_args, result_kwargs = obj.match(
predicate, augmented_predicate, [method_name, "step_other"]
)
# Assert
assert result_method == method
assert result_args == ("boys", "the")
def test_should_match_with_utf8_string(self):
""" Scenario: match with utf8 string """
# Arrange
predicate = "zażółć gęślą jaźń"
augmented_predicate = "zażółć gęślą jaźń"
method_name = "step_utf8_match"
docstring = r"zażółć gęślą jaźń"
method = Mock(__doc__=docstring)
methods = {method_name: method}
suite = Mock(**methods)
obj = RegexpStepMatcher(suite)
# Act
result_method, result_args, result_kwargs = obj.match(
predicate, augmented_predicate, methods.keys()
)
# Assert
assert result_method == method
@tags(["unit"])
class DocStringStepMatcherSuggestTestCase(unittest.TestCase):
""" Test :py:meth:`DocStringStepMatcher.suggest`. """
def test_should_return_suggested_method(self):
""" Scenariusz: suggest """
# Arrange
obj = RegexpStepMatcher(sentinel.suite)
# Act
pattern = " def step_%(method_name)s(self%(args)s):\n %(docstring)s\n\n raise NotImplementedError('%(predicate)s')\n\n"
test_data = [
("tastes great", "tastes_great", r"r'tastes great'", ""),
("less filling", "less_filling", r"r'less filling'", ""),
("line\nfeed", "line_feed", r"r'line\nfeed'", ""),
("tick'ed'", "tick_ed", r"r'tick\'ed\''", ""),
("tastes great", "tastes_great", r"r'tastes\s+great'", ""),
("argu<ment>al", "argu_ment_al", r"r'argu(.+)al'", ", ment"),
("arg<u>ment<al>", "arg_u_ment_al", r"r'arg(.+)ment(.+)'", ", u, al"),
('str"ing"', "str_ing", 'r\'str"([^"]+)"\'', ", ing"),
('"str"i"ngs"', "str_i_ngs", 'r\'"([^"]+)"i"([^"]+)"\'', ", str, ngs"),
(
'enter "10" into',
"enter_number_into",
'r\'enter "([^"]+)" into\'',
", number",
),
(
'enter "10" and "20" into',
"enter_number_and_number_into",
'r\'enter "([^"]+)" and "([^"]+)" into\'',
", number1, number2",
),
(
'''some'quote' and "double quote"''',
"some_quote_and_double_quote",
"r'some\\'quote\\' and \"([^\"]+)\"'",
", double_quote",
),
]
for predicate, method, docstring, args in test_data:
suggest, suggest_method, suggest_docstring = obj.suggest(predicate)
# Assert
expected = pattern % {
"method_name": method,
"docstring": docstring,
"args": args,
"predicate": predicate.replace("'", r"\'"),
}
assert suggest == expected
assert suggest_method == method
assert suggest_docstring == docstring
@tags(["unit"])
class ParseStepMatcherMatchTestCase(unittest.TestCase):
""" Test :py:meth:`ParseStepMatcher.match`. """
def test_should_return_method_and_args(self):
""" Scenario: match by docstring """
# Arrange
predicate = "my milkshake brings all the boys to the yard"
augmented_predicate = "my milkshake brings all the boys to the yard"
method_name = "step_%s" % predicate
docstring = r"my milkshake brings all the {} to {} yard"
method = Mock(__doc__=docstring)
methods = {method_name: method}
suite = Mock(**methods)
obj = ParseStepMatcher(suite)
# Act
result_method, result_args, result_kwargs = obj.match(
predicate, augmented_predicate, methods.keys()
)
# Assert
assert result_method == method
assert result_args == ("boys", "the")
def test_should_return_method_and_kwargs(self):
""" Scenario: match by docstring with named groups """
# Arrange
predicate = "my milkshake brings all the boys to the yard"
augmented_predicate = "my milkshake brings all the boys to the yard"
method_name = "step_%s" % predicate
docstring = r"my milkshake brings all the {who} to {other} yard"
method = Mock(__doc__=docstring)
methods = {method_name: method}
suite = Mock(**methods)
obj = ParseStepMatcher(suite)
# Act
result_method, result_args, result_kwargs = obj.match(
predicate, augmented_predicate, methods.keys()
)
# Assert
assert result_method == method
assert result_kwargs == {"who": "boys", "other": "the"}
def test_should_return_method_args_and_kwargs_with_mixed_groups(self):
""" Scenario: match by docstring with named and not named groups """
# Arrange
predicate = "my milkshake brings all the boys to the yard"
augmented_predicate = "my milkshake brings all the boys to the yard"
method_name = "step_%s" % predicate
docstring = r"my milkshake brings all the {who} to {} yard"
method = Mock(__doc__=docstring)
methods = {method_name: method}
suite = Mock(**methods)
obj = ParseStepMatcher(suite)
# Act
result_method, result_args, result_kwargs = obj.match(
predicate, augmented_predicate, methods.keys()
)
# Assert
assert result_method == method
assert result_args == ("the",)
assert result_kwargs == {"who": "boys"}
def test_should_return_none_if_docstring_not_matched(self):
""" Scenario: no match by docstring """
# Arrange
predicate = "not there"
augmented_predicate = "not there"
method_name = "step_%s" % predicate
docstring = r"my milkshake brings all the {who} to {} yard"
method = Mock(__doc__=docstring)
methods = {method_name: method}
suite = Mock(**methods)
obj = ParseStepMatcher(suite)
# Act
result_method, result_args, result_kwargs = obj.match(
predicate, augmented_predicate, methods.keys()
)
# Assert
assert result_method is None
assert result_args == ()
def test_should_return_none_if_no_docstring(self):
""" Scenario: no match by docstring """
# Arrange
predicate = "my milkshake brings all the boys to the yard"
augmented_predicate = "my milkshake brings all the boys to the yard"
method_name = "step_%s" % predicate
method = Mock(__doc__="")
methods = {method_name: method}
suite = Mock(**methods)
obj = ParseStepMatcher(suite)
# Act
result_method, result_args, result_kwargs = obj.match(
predicate, augmented_predicate, methods.keys()
)
# Assert
assert result_method is None
assert result_args == ()
def test_should_return_second_method_and_matches(self):
""" Scenario: many methods """
# Arrange
predicate = "my milkshake brings all the boys to the yard"
augmented_predicate = "my milkshake brings all the boys to the yard"
method_name = "step_%s" % predicate
docstring = r"my milkshake brings all the {who} to {} yard"
method = Mock(__doc__=docstring)
methods = {method_name: method, "step_other": sentinel.method}
suite = Mock(**methods)
obj = ParseStepMatcher(suite)
# Act
result_method, result_args, result_kwargs = obj.match(
predicate, augmented_predicate, [method_name, "step_other"]
)
# Assert
assert result_method == method
assert result_args == ("the",)
assert result_kwargs == {"who": "boys"}
@tags(["unit"])
class ParseStepMatcherSuggestTestCase(unittest.TestCase):
""" Test :py:meth:`ParseStepMatcher.suggest`. """
def test_should_return_suggested_method(self):
""" Scenario: suggest """
# Arrange
obj = ParseStepMatcher(sentinel.suite)
# Act
pattern = " def step_%(method_name)s(self%(args)s):\n %(docstring)s\n\n raise NotImplementedError('%(predicate)s')\n\n"
test_data = [
("tastes great", "tastes_great", r"r'tastes great'", ""),
("less filling", "less_filling", r"r'less filling'", ""),
("line\nfeed", "line_feed", r"r'line\nfeed'", ""),
("tick'ed'", "tick_ed", r"r'tick\'ed\''", ""),
("tastes great", "tastes_great", r"r'tastes\s+great'", ""),
("argu<ment>al", "argu_ment_al", r"r'argu{ment}al'", ", ment"),
("arg<u>ment<al>", "arg_u_ment_al", r"r'arg{u}ment{al}'", ", u, al"),
('str"ing"', "str_ing", "r'str\"{ing}\"'", ", ing"),
('"str"i"ngs"', "str_i_ngs", 'r\'"{str}"i"{ngs}"\'', ", str, ngs"),
(
'enter "10" into',
"enter_number_into",
"r'enter \"{number}\" into'",
", number",
),
(
'enter "10" and "20" into',
"enter_number_and_number_into",
'r\'enter "{number1}" and "{number2}" into\'',
", number1, number2",
),
]
for predicate, method, docstring, args in test_data:
suggest, suggest_method, suggest_docstring = obj.suggest(predicate)
# Assert
expected = pattern % {
"method_name": method,
"docstring": docstring,
"args": args,
"predicate": predicate.replace("'", r"\'"),
}
assert suggest == expected
assert suggest_method == method
assert suggest_docstring == docstring
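# Note on the convention exercised above (presumably the `parse` library's
# pattern syntax): a step method's docstring such as
# r"my milkshake brings all the {who} to {} yard" is matched against the
# predicate "my milkshake brings all the boys to the yard"; the anonymous {}
# group becomes the positional args ("the",) and the named group becomes the
# kwargs {"who": "boys"}, which is exactly what match() is asserted to return.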
| 38.526742 | 144 | 0.585756 | 2,450 | 23,771 | 5.456735 | 0.06898 | 0.06732 | 0.060588 | 0.04488 | 0.853766 | 0.83559 | 0.786521 | 0.766549 | 0.748448 | 0.733488 | 0 | 0.004263 | 0.299356 | 23,771 | 616 | 145 | 38.589286 | 0.798439 | 0.076227 | 0 | 0.707589 | 0 | 0.008929 | 0.167969 | 0.015585 | 0 | 0 | 0 | 0 | 0.122768 | 1 | 0.064732 | false | 0.004464 | 0.008929 | 0 | 0.095982 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
e73425453416d3c5dda8568ea9c7b781dc39d557 | 6,726 | py | Python | MLLytics/interpretation.py | scottclay/MLLytics | 584b22533fcdf1cdc5f9749a4f67fc37d79ebb30 | [
"MIT"
] | null | null | null | MLLytics/interpretation.py | scottclay/MLLytics | 584b22533fcdf1cdc5f9749a4f67fc37d79ebb30 | [
"MIT"
] | null | null | null | MLLytics/interpretation.py | scottclay/MLLytics | 584b22533fcdf1cdc5f9749a4f67fc37d79ebb30 | [
"MIT"
] | null | null | null | import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
def make_pdp(df, feature, model, type='classification', quantiles=[0.05, 0.95]):
"""
Computes partial dependency plot values for a given feature.
:param df: pandas dataframe
:param feature: string
:param model: sci-kit learn model instance
:param type: string. classification or regression
:param quantiles: list. min max quantiles to use to exclude extreme values
"""
min_val = df[[feature]].quantile(q=quantiles[0]).values[0]
max_val = df[[feature]].quantile(q=quantiles[1]).values[0]
values = np.arange(min_val, max_val, (max_val - min_val)* 0.01)
qtls = {}
for i in np.arange(0.1,1.0,0.1):
qtls[np.round(i,1)] = df[[feature]].quantile(q=i).values[0]
li = []
va = []
if type=='classification':
for i in values:
_df = df.copy()
_df[feature].values[:] = i
output = model.predict_proba(_df)[:, 1]
vote_1 = len(output[output >= 0.5])
vote_2 = len(output[output < 0.5])
output = np.log(vote_1) - 0.5*(np.log(vote_1) + np.log(vote_2))
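# i.e. 0.5 * log(vote_1 / vote_2): half the log-odds of rows voted into the
# positive class versus the negative class at this grid value (this assumes
# both vote counts are non-zero, otherwise np.log(0) yields -inf).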
avg_output = output.mean()
li.append(avg_output)
va.append(i)
elif type=='regression':
for i in values:
_df = df.copy()
_df[feature].values[:] = i
output = model.predict(_df)
avg_output = output.mean()
li.append(avg_output)
va.append(i)
return va, li, qtls
def plot_pdp(feature, va, li, type='classification', quantiles = None, norm=False, axs = None, **kwargs):
"""
Plot a partial dependency plot
:param feature: string
:param va: array
:param li: array
:param type: string
:param quantiles: list of quantile values to plot
:param norm: boolean. to normalise data so mean value = 0
:param axs: matplotlib axis if using one from a preexisting plot
:param **kwargs: other keywords
"""
sns.set_style("whitegrid")
if axs is None:
fig, axs = plt.subplots(1, 1, figsize=(7,7))
if norm == True:
li=np.array(li)
li-=li.mean()
axs.plot(va,li,c='k', zorder=1, linestyle='-' )
axs.set_xlim(kwargs.get("xmin", None), kwargs.get("xmax", None))
axs.set_ylim(kwargs.get("ymin", None), kwargs.get("ymax", None))
axs.set_xlabel(kwargs.get("xlabel",feature), fontsize=kwargs.get("label_fontsize",16))
axs.set_ylabel(kwargs.get("ylabel","Partial Dependence"), fontsize=kwargs.get("label_fontsize",16))
axs.set_title(kwargs.get("title","Partial Dependency Plot"), fontsize=kwargs.get("title_fontsize",18))
axs.tick_params(axis='both', which='major', labelsize=kwargs.get("major_tick_fontsize",15))
axs.tick_params(axis='both', which='minor', labelsize=kwargs.get("minor_tick_fontsize",15))
if quantiles is not None:
for q in quantiles.keys():
axs.axvline(quantiles[q], 0, 0.05)
try:
return fig, axs
except:
return axs
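# Illustrative sketch, not called anywhere in this module: how make_pdp and
# plot_pdp are meant to be combined. `fitted_model` stands for any estimator
# exposing predict()/predict_proba(); the function and variable names below
# are assumptions made for this example only.
def _example_pdp_workflow(df, fitted_model, feature):
    """Compute and draw a PDP for one feature of an already-fitted model."""
    va, li, qtls = make_pdp(df, feature, fitted_model, type='regression')
    fig, axs = plot_pdp(feature, va, li, type='regression', quantiles=qtls, norm=True)
    return fig, axs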
def make_ice(df, feature, model, type='classification', quantiles=[0.05, 0.95]):
"""
Computes individual conditional expectation (ICE) values for a given feature.
:param df: pandas dataframe
:param feature: string
:param model: sci-kit learn model instance
:param type: string. classification or regression
:param quantiles: list. min max quantiles to use to exclude extreme values
"""
df_sample = df.sample(50)
min_val = df[[feature]].quantile(q=quantiles[0]).values[0]
max_val = df[[feature]].quantile(q=quantiles[1]).values[0]
values = np.arange(min_val, max_val, (max_val - min_val)* 0.01)
qtls = {}
for i in np.arange(0.1,1.0,0.1):
qtls[np.round(i,1)] = df[[feature]].quantile(q=i).values[0]
li = []
va = []
if type=='classification':
for i in values:
_df = df.copy()
_df[feature].values[:] = i
output = model.predict_proba(_df)[:, 1]
vote_1 = len(output[output >= 0.5])
vote_2 = len(output[output < 0.5])
output = np.log(vote_1) - 0.5*(np.log(vote_1) + np.log(vote_2))
avg_output = output.mean()
li.append(avg_output)
va.append(i)
elif type=='regression':
for i in values:
_df = df_sample.copy()
_df[feature].values[:] = i
output = model.predict(_df)
#avg_output = output.mean()
li.append(output)
va.append(i)
return va, li, qtls
def plot_ice(feature, va, li, many_li, type='classification', quantiles = None, norm=False, axs = None, **kwargs):
"""
Plot individual conditional expectation (ICE) curves
:param feature: string
:param va: array
:param li: array
:param many_li: pandas df
:param type: string
:param quantiles: list of quantile values to plot
:param norm: boolean. to normalise data so mean value = 0
:param axs: matplotlib axis if using one from a preexisting plot
:param **kwargs: other keywords
"""
sns.set_style("whitegrid")
if axs is None:
fig, axs = plt.subplots(1, 1, figsize=(7,7))
if norm == True:
many_li = many_li.apply(lambda x: x-x[0], axis=0)
li=np.array(li)
li-=li[0]
#if many_li is not None:
# many_li = many_li - li.mean()
axs.plot(va,li,c='k', zorder=1, linestyle='-' )
#axs.set_xlim(kwargs.get("xmin", None), kwargs.get("xmax", None))
#axs.set_ylim(kwargs.get("ymin", None), kwargs.get("ymax", None))
axs.set_xlabel(kwargs.get("xlabel",feature), fontsize=kwargs.get("label_fontsize",16))
axs.set_ylabel(kwargs.get("ylabel","Partial Dependence"), fontsize=kwargs.get("label_fontsize",16))
axs.set_title(kwargs.get("title","Partial Dependency Plot"), fontsize=kwargs.get("title_fontsize",18))
axs.tick_params(axis='both', which='major', labelsize=kwargs.get("major_tick_fontsize",15))
axs.tick_params(axis='both', which='minor', labelsize=kwargs.get("minor_tick_fontsize",15))
for i in range(0,len(many_li.columns)):
axs.plot(va, many_li[i] - many_li[i][0], color='blue', alpha=0.2)
if quantiles is not None:
for q in quantiles.keys():
axs.axvline(quantiles[q], 0, 0.05)
try:
return fig, axs
except:
return axs
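# Illustrative sketch for the ICE helpers above, not called in this module.
# plot_ice expects `many_li` as a DataFrame with one column per sampled row,
# so the list of per-row prediction arrays returned by the regression branch
# of make_ice is wrapped in a DataFrame first; names below are example-only
# assumptions.
def _example_ice_workflow(df, fitted_model, feature):
    """Compute and draw ICE curves for one feature of a fitted regressor."""
    va, li, qtls = make_ice(df, feature, fitted_model, type='regression')
    many_li = pd.DataFrame(li)  # rows: grid values, columns: the 50 sampled observations
    mean_curve = [curve.mean() for curve in li]
    fig, axs = plot_ice(feature, va, mean_curve, many_li,
                        type='regression', quantiles=qtls, norm=True)
    return fig, axs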
| 30.572727 | 115 | 0.592477 | 938 | 6,726 | 4.153518 | 0.15565 | 0.055441 | 0.01078 | 0.027721 | 0.944559 | 0.944559 | 0.937885 | 0.937885 | 0.937885 | 0.937885 | 0 | 0.023388 | 0.268956 | 6,726 | 219 | 116 | 30.712329 | 0.768965 | 0.211418 | 0 | 0.873874 | 0 | 0 | 0.093241 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.036036 | false | 0 | 0.072072 | 0 | 0.162162 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
e774e150427f63f3f6f92550ded1de2085a83b24 | 40,378 | py | Python | SteveChessboardSmith23/ChessBoard2000.py | STEVE-al/ChessBoardGame | ab6f5df0b0ca9cea10ccdf03a1f63990e0daf668 | [
"MIT"
] | null | null | null | SteveChessboardSmith23/ChessBoard2000.py | STEVE-al/ChessBoardGame | ab6f5df0b0ca9cea10ccdf03a1f63990e0daf668 | [
"MIT"
] | null | null | null | SteveChessboardSmith23/ChessBoard2000.py | STEVE-al/ChessBoardGame | ab6f5df0b0ca9cea10ccdf03a1f63990e0daf668 | [
"MIT"
] | null | null | null | class AllPawns:
def __init__(self,Start,End,Board):
self.Start=Start
self.End=End
self.Board=Board
def PawnsWhiteMovement(self):
#Attack
if (self.End[0]==self.Start[0]-1 and self.End[1]==self.Start[1]+1 and (self.Board[(self.End)] in Pawns['Black'])) or (self.End[0]==self.Start[0]-1 and self.End[1]==self.Start[1]-1 and (self.Board[(self.End)] in Pawns['Black'])):
if self.End[0]==0:
Change=input('What piece do you want to change this pawn into?\n Type q to change the pawn into a Queen \n b to change the pawn into a Bishop \n e to change the pawn into an Elephant \n h to change the pawn into a Horse\n')
if Change in 'qbeh':
return Change.upper()
else:
return True
#Starting Movement
elif self.Start[0]==6:
if self.End[0]>3 and self.End[0]<6:
if self.End[1]==self.Start[1]:
return True
#Movement
elif self.End[0]==self.Start[0]-1 and self.End[1]==self.Start[1]:
if self.End[0]==0:
Change=input('What piece do you want to change this pawn into?\n Type q to change the pawn into a Queen \n b to change the pawn into a Bishop \n e to change the pawn into an Elephant \n h to change the pawn into a Horse\n')
if Change in 'qbeh':
return Change.upper()
elif self.Board[(self.End)]==' ':
return True
else:
return False
def PawnsBlackMovement(self):
#Attack
if (self.End[0]==self.Start[0]+1 and self.End[1]==self.Start[1]-1 and (self.Board[(self.End)] in Pawns['White'])) or (self.End[0]==self.Start[0]+1 and self.End[1]==self.Start[1]+1 and (self.Board[(self.End)] in Pawns['White'])):
if self.End[0]==7:
Change=input('What piece do you want to change this pawn into?\n Type q to change the pawn into a Queen \n b to change the pawn into a Bishop \n e to change the pawn into an Elephant \n h to change the pawn into a Horse\n')
if Change=='q' or Change=='b' or Change=='e' or Change=='h':
return Change.upper()
else:
return True
#Starting Movement
elif self.Start[0]==1:
if self.End[0]>1 and self.End[0]<4:
if self.End[1]==self.Start[1]:
return True
#Movement
elif self.End[0]==self.Start[0]+1 and self.End[1]==self.Start[1]:
if self.End[0]==7:
Change=input('What piece do you want to change this pawn into?\n Type q to change the pawn into a Queen \n b to change the pawn into a Bishop \n e to change the pawn into an Elephant \n h to change the pawn into a Horse\n')
if Change=='q' or Change=='b' or Change=='e' or Change=='h':
return Change.upper()
elif self.Board[(self.End)]==' ':
return True
else:
return False
def EleBlackMovement(self):
#Right
if self.Start[0]==self.End[0] and self.Start[1]-self.End[1]<0:
for x in range(self.Start[1]+1,self.End[1]):
if (self.Board[(self.Start[0],x)]) in Pawns['White'] or self.Board[(self.Start[0],x)]==' ':
return True
else:
return False
if (self.Board[(self.Start[0],self.Start[1]+1)]) in Pawns['White'] or self.Board[(self.Start[0],self.Start[1]+1)]==' ':
return True
else:
return False
#Left
elif self.Start[0]==self.End[0] and self.Start[1]-self.End[1]>0:
for x in range(self.Start[1]-1,self.End[1],-1):
if (self.Board[(self.Start[0],x)]) in Pawns['White'] or self.Board[(self.Start[0],x)]==' ':
return True
else:
return False
if (self.Board[(self.Start[0],self.Start[1]-1)]) in Pawns['White'] or self.Board[(self.Start[0],self.Start[1]-1)]==' ':
return True
else:
return False
#Down
elif self.Start[1]==self.End[1] and self.Start[0]-self.End[0]<0:
for x in range(self.Start[0]+1,self.End[0]):
if (self.Board[(x,self.Start[1])]) in Pawns['White'] or self.Board[(x,self.Start[1])]==' ':
return True
else:
return False
if (self.Board[(self.Start[0]+1,self.Start[1])]) in Pawns['White'] or self.Board[(self.Start[0]+1,self.Start[1])]==' ':
return True
else:
return False
#Up
elif self.Start[1]==self.End[1] and self.Start[0]-self.End[0]>0:
for x in range(self.Start[0]-1,self.End[0],-1):
if (self.Board[(x,self.Start[1])]) in Pawns['White'] or self.Board[(x,self.Start[1])]==' ':
return True
else:
return False
if (self.Board[(self.Start[0]-1,self.Start[1])]) in Pawns['White'] or self.Board[(self.Start[0]-1,self.Start[1])]==' ':
return True
else:
return False
else:
return False
def EleWhiteMovement(self):
#Right
if self.Start[0]==self.End[0] and self.Start[1]-self.End[1]<0:
for x in range(self.Start[1]+1,self.End[1]):
if (self.Board[(self.Start[0],x)]) in Pawns['Black'] or self.Board[(self.Start[0],x)]==' ':
return True
else:
return False
if (self.Board[(self.Start[0],self.Start[1]+1)]) in Pawns['Black'] or self.Board[(self.Start[0],self.Start[1]+1)]==' ':
return True
else:
return False
#Left
elif self.Start[0]==self.End[0] and self.Start[1]-self.End[1]>0:
for x in range(self.Start[1]-1,self.End[1],-1):
if (self.Board[(self.Start[0],x)]) in Pawns['Black'] or self.Board[(self.Start[0],x)]==' ':
return True
else:
return False
if (self.Board[(self.Start[0],self.Start[1]-1)]) in Pawns['Black'] or self.Board[(self.Start[0],self.Start[1]-1)]==' ':
return True
else:
return False
#Down
elif self.Start[1]==self.End[1] and self.Start[0]-self.End[0]<0:
for x in range(self.Start[0]+1,self.End[0]):
if (self.Board[(x,self.Start[1])]) in Pawns['Black'] or self.Board[(x,self.Start[1])]==' ':
return True
else:
return False
if (self.Board[(self.Start[0]+1,self.Start[1])]) in Pawns['Black'] or self.Board[(self.Start[0]+1,self.Start[1])]==' ':
return True
else:
return False
#Up
elif self.Start[1]==self.End[1] and self.Start[0]-self.End[0]>0:
for x in range(self.Start[0]-1,self.End[0],-1):
if (self.Board[(x,self.Start[1])]) in Pawns['Black'] or self.Board[(x,self.Start[1])]==' ':
return True
else:
return False
if (self.Board[(self.Start[0]-1,self.Start[1])]) in Pawns['Black'] or self.Board[(self.Start[0]-1,self.Start[1])]==' ':
return True
else:
return False
else:
return False
def BishWhiteMovement(self):
#NorthEast Movement
if self.Start[0]-self.End[0]==self.End[1]-self.Start[1] and self.Start[0]-self.End[0]>0:
for x in range(1,self.Start[0]-self.End[0]):
if (self.Board[(self.Start[0]-x,self.Start[1]+x)]) in Pawns['Black'] or self.Board[(self.Start[0]-x,self.Start[1]+x)]==' ':
return True
else:
return False
if (self.Board[(self.Start[0]-1,self.Start[1]+1)]) in Pawns['Black'] or self.Board[(self.Start[0]-1,self.Start[1]+1)]==' ':
return True
else:
return False
#NorthWest Movement
elif self.Start[0]-self.End[0]==self.Start[1]-self.End[1] and self.Start[0]-self.End[0]>0:
for x in range(1,self.Start[0]-self.End[0]):
if (self.Board[(self.Start[0]-x,self.Start[1]-x)]) in Pawns['Black'] or self.Board[(self.Start[0]-x,self.Start[1]-x)]==' ':
return True
else:
return False
if (self.Board[(self.Start[0]-1,self.Start[1]-1)]) in Pawns['Black'] or self.Board[(self.Start[0]-1,self.Start[1]-1)]==' ':
return True
else:
return False
#SouthWest Movement
elif self.End[0]-self.Start[0]==self.Start[1]-self.End[1] and self.End[0]-self.Start[0]>0:
for x in range(1,self.End[0]-self.Start[0]):
if (self.Board[(self.Start[0]+x,self.Start[1]-x)]) in Pawns['Black'] or self.Board[(self.Start[0]+x,self.Start[1]-x)]==' ':
return True
else:
return False
if (self.Board[(self.Start[0]+1,self.Start[1]-1)]) in Pawns['Black'] or self.Board[(self.Start[0]+1,self.Start[1]-1)]==' ':
return True
else:
return False
#SouthEast Movement
elif self.End[0]-self.Start[0]==self.End[1]-self.Start[1] and self.End[0]-self.Start[0]>0:
for x in range(1,self.End[0]-self.Start[0]):
if (self.Board[(self.Start[0]+x,self.Start[1]+x)]) in Pawns['Black'] or self.Board[(self.Start[0]+x,self.Start[1]+x)]==' ':
return True
else:
return False
if (self.Board[(self.Start[0]+1,self.Start[1]+1)]) in Pawns['Black'] or self.Board[(self.Start[0]+1,self.Start[1]+1)]==' ':
return True
else:
return False
else:
return False
def BishBlackMovement(self):
#NorthEast Movement
if self.Start[0]-self.End[0]==self.End[1]-self.Start[1] and self.Start[0]-self.End[0]>0:
for x in range(1,self.Start[0]-self.End[0]):
if (self.Board[(self.Start[0]-x,self.Start[1]+x)]) in Pawns['White'] or self.Board[(self.Start[0]-x,self.Start[1]+x)]==' ':
return True
else:
return False
if self.Board[(self.Start[0]-1,self.Start[1]+1)] in Pawns['White'] or self.Board[(self.Start[0]-1,self.Start[1]+1)]==' ':
return True
else:
return False
#NorthWest Movement
elif self.Start[0]-self.End[0]==self.Start[1]-self.End[1] and self.Start[0]-self.End[0]>0:
for x in range(1,self.Start[0]-self.End[0]):
if (self.Board[(self.Start[0]-x,self.Start[1]-x)]) in Pawns['White'] or self.Board[(self.Start[0]-x,self.Start[1]-x)]==' ':
return True
else:
return False
if (self.Board[(self.Start[0]-1,self.Start[1]-1)]) in Pawns['White'] or self.Board[(self.Start[0]-1,self.Start[1]-1)]==' ':
return True
else:
return False
#SouthWest Movement
elif self.End[0]-self.Start[0]==self.Start[1]-self.End[1] and self.End[0]-self.Start[0]>0:
for x in range(1,self.End[0]-self.Start[0]):
if (self.Board[(self.Start[0]+x,self.Start[1]-x)]) in Pawns['White'] or self.Board[(self.Start[0]+x,self.Start[1]-x)]==' ':
return True
else:
return False
if (self.Board[(self.Start[0]+1,self.Start[1]-1)]) in Pawns['White'] or self.Board[(self.Start[0]+1,self.Start[1]-1)]==' ':
return True
else:
return False
#SouthEast Movement
elif self.End[0]-self.Start[0]==self.End[1]-self.Start[1] and self.End[0]-self.Start[0]>0:
for x in range(1,self.End[0]-self.Start[0]):
if (self.Board[(self.Start[0]+x,self.Start[1]+x)]) in Pawns['White'] or self.Board[(self.Start[0]+x,self.Start[1]+x)]==' ':
return True
else:
return False
if (self.Board[(self.Start[0]+1,self.Start[1]+1)]) in Pawns['White'] or self.Board[(self.Start[0]+1,self.Start[1]+1)]==' ':
return True
else:
return False
else:
return False
def QuBlackMovement(self):
#NorthEast Movement
if self.Start[0]-self.End[0]==self.End[1]-self.Start[1] and self.Start[0]-self.End[0]>0:
for x in range(1,self.Start[0]-self.End[0]):
if (self.Board[(self.Start[0]-x,self.Start[1]+x)]) in Pawns['White'] or self.Board[(self.Start[0]-x,self.Start[1]+x)]==' ':
return True
else:
return False
if self.Board[(self.Start[0]-1,self.Start[1]+1)] in Pawns['White'] or self.Board[(self.Start[0]-1,self.Start[1]+1)]==' ':
return True
else:
return False
#NorthWest Movement
elif self.Start[0]-self.End[0]==self.Start[1]-self.End[1] and self.Start[0]-self.End[0]>0:
for x in range(1,self.Start[0]-self.End[0]):
if (self.Board[(self.Start[0]-x,self.Start[1]-x)]) in Pawns['White'] or self.Board[(self.Start[0]-x,self.Start[1]-x)]==' ':
return True
else:
return False
if (self.Board[(self.Start[0]-1,self.Start[1]-1)]) in Pawns['White'] or self.Board[(self.Start[0]-1,self.Start[1]-1)]==' ':
return True
else:
return False
#SouthWest Movement
elif self.End[0]-self.Start[0]==self.Start[1]-self.End[1] and self.End[0]-self.Start[0]>0:
for x in range(1,self.End[0]-self.Start[0]):
if (self.Board[(self.Start[0]+x,self.Start[1]-x)]) in Pawns['White'] or self.Board[(self.Start[0]+x,self.Start[1]-x)]==' ':
return True
else:
return False
if (self.Board[(self.Start[0]+1,self.Start[1]-1)]) in Pawns['White'] or self.Board[(self.Start[0]+1,self.Start[1]-1)]==' ':
return True
else:
return False
#SouthEast Movement
elif self.End[0]-self.Start[0]==self.End[1]-self.Start[1] and self.End[0]-self.Start[0]>0:
for x in range(1,self.End[0]-self.Start[0]):
if (self.Board[(self.Start[0]+x,self.Start[1]+x)]) in Pawns['White'] or self.Board[(self.Start[0]+x,self.Start[1]+x)]==' ':
return True
else:
return False
if (self.Board[(self.Start[0]+1,self.Start[1]+1)]) in Pawns['White'] or self.Board[(self.Start[0]+1,self.Start[1]+1)]==' ':
return True
else:
return False
#Right
if self.Start[0]==self.End[0] and self.Start[1]-self.End[1]<0:
for x in range(self.Start[1]+1,self.End[1]):
if (self.Board[(self.Start[0],x)]) in Pawns['White'] or self.Board[(self.Start[0],x)]==' ':
return True
else:
return False
if (self.Board[(self.Start[0],self.Start[1]+1)]) in Pawns['White'] or self.Board[(self.Start[0],self.Start[1]+1)]==' ':
return True
else:
return False
#Left
elif self.Start[0]==self.End[0] and self.Start[1]-self.End[1]>0:
for x in range(self.Start[1]-1,self.End[1],-1):
if (self.Board[(self.Start[0],x)]) in Pawns['White'] or self.Board[(self.Start[0],x)]==' ':
return True
else:
return False
if (self.Board[(self.Start[0],self.Start[1]-1)]) in Pawns['White'] or self.Board[(self.Start[0],self.Start[1]-1)]==' ':
return True
else:
return False
#Down
elif self.Start[1]==self.End[1] and self.Start[0]-self.End[0]<0:
for x in range(self.Start[0]+1,self.End[0]):
if (self.Board[(x,self.Start[1])]) in Pawns['White'] or self.Board[(x,self.Start[1])]==' ':
return True
else:
return False
if (self.Board[(self.Start[0]+1,self.Start[1])]) in Pawns['White'] or self.Board[(self.Start[0]+1,self.Start[1])]==' ':
return True
else:
return False
#Up
elif self.Start[1]==self.End[1] and self.Start[0]-self.End[0]>0:
for x in range(self.Start[0]-1,self.End[0],-1):
if (self.Board[(x,self.Start[1])]) in Pawns['White'] or self.Board[(x,self.Start[1])]==' ':
return True
else:
return False
if (self.Board[(self.Start[0]-1,self.Start[1])]) in Pawns['White'] or self.Board[(self.Start[0]-1,self.Start[1])]==' ':
return True
else:
return False
else:
return False
def QuWhiteMovement(self):
#NorthEast Movement
if self.Start[0]-self.End[0]==self.End[1]-self.Start[1] and self.Start[0]-self.End[0]>0:
for x in range(1,self.Start[0]-self.End[0]):
if (self.Board[(self.Start[0]-x,self.Start[1]+x)]) in Pawns['Black'] or self.Board[(self.Start[0]-x,self.Start[1]+x)]==' ':
return True
else:
return False
if (self.Board[(self.Start[0]-1,self.Start[1]+1)]) in Pawns['Black'] or self.Board[(self.Start[0]-1,self.Start[1]+1)]==' ':
return True
else:
return False
#NorthWest Movement
elif self.Start[0]-self.End[0]==self.Start[1]-self.End[1] and self.Start[0]-self.End[0]>0:
for x in range(1,self.Start[0]-self.End[0]):
if (self.Board[(self.Start[0]-x,self.Start[1]-x)]) in Pawns['Black'] or self.Board[(self.Start[0]-x,self.Start[1]-x)]==' ':
return True
else:
return False
if (self.Board[(self.Start[0]-1,self.Start[1]-1)]) in Pawns['Black'] or self.Board[(self.Start[0]-1,self.Start[1]-1)]==' ':
return True
else:
return False
#SouthWest Movement
elif self.End[0]-self.Start[0]==self.Start[1]-self.End[1] and self.End[0]-self.Start[0]>0:
for x in range(1,self.End[0]-self.Start[0]):
if (self.Board[(self.Start[0]+x,self.Start[1]-x)]) in Pawns['Black'] or self.Board[(self.Start[0]+x,self.Start[1]-x)]==' ':
return True
else:
return False
if (self.Board[(self.Start[0]+1,self.Start[1]-1)]) in Pawns['Black'] or self.Board[(self.Start[0]+1,self.Start[1]-1)]==' ':
return True
else:
return False
#SouthEast Movement
elif self.End[0]-self.Start[0]==self.End[1]-self.Start[1] and self.End[0]-self.Start[0]>0:
for x in range(1,self.End[0]-self.Start[0]):
if (self.Board[(self.Start[0]+x,self.Start[1]+x)]) in Pawns['Black'] or self.Board[(self.Start[0]+x,self.Start[1]+x)]==' ':
return True
else:
return False
if (self.Board[(self.Start[0]+1,self.Start[1]+1)]) in Pawns['Black'] or self.Board[(self.Start[0]+1,self.Start[1]+1)]==' ':
return True
else:
return False
#Right
if self.Start[0]==self.End[0] and self.Start[1]-self.End[1]<0:
for x in range(self.Start[1]+1,self.End[1]):
if (self.Board[(self.Start[0],x)]) in Pawns['Black'] or self.Board[(self.Start[0],x)]==' ':
return True
else:
return False
if (self.Board[(self.Start[0],self.Start[1]+1)]) in Pawns['Black'] or self.Board[(self.Start[0],self.Start[1]+1)]==' ':
return True
else:
return False
#Left
elif self.Start[0]==self.End[0] and self.Start[1]-self.End[1]>0:
for x in range(self.Start[1]-1,self.End[1],-1):
if (self.Board[(self.Start[0],x)]) in Pawns['Black'] or self.Board[(self.Start[0],x)]==' ':
return True
else:
return False
if (self.Board[(self.Start[0],self.Start[1]-1)]) in Pawns['Black'] or self.Board[(self.Start[0],self.Start[1]-1)]==' ':
return True
else:
return False
#Down
elif self.Start[1]==self.End[1] and self.Start[0]-self.End[0]<0:
for x in range(self.Start[0]+1,self.End[0]):
if (self.Board[(x,self.Start[1])]) in Pawns['Black'] or self.Board[(x,self.Start[1])]==' ':
return True
else:
return False
if (self.Board[(self.Start[0]+1,self.Start[1])]) in Pawns['Black'] or self.Board[(self.Start[0]+1,self.Start[1])]==' ':
return True
else:
return False
#Up
elif self.Start[1]==self.End[1] and self.Start[0]-self.End[0]>0:
for x in range(self.Start[0]-1,self.End[0],-1):
if (self.Board[(x,self.Start[1])]) in Pawns['Black'] or self.Board[(x,self.Start[1])]==' ':
return True
else:
return False
if (self.Board[(self.Start[0]-1,self.Start[1])]) in Pawns['Black'] or self.Board[(self.Start[0]-1,self.Start[1])]==' ':
return True
else:
return False
else:
return False
def KingWhiteMovement(self):
#NorthEast
if self.End[0]==self.Start[0]-1 and self.End[1]==self.Start[1]+1:
if self.Board[self.End] in Pawns['Black'] or self.Board[self.End]==' ':
return True
else:
return False
#NorthWest
elif self.End[0]==self.Start[0]-1 and self.End[1]==self.Start[1]-1:
if self.Board[self.End] in Pawns['Black'] or self.Board[self.End]==' ':
return True
else:
return False
#SouthEast
elif self.End[0]==self.Start[0]+1 and self.End[1]==self.Start[1]+1:
if self.Board[self.End] in Pawns['Black'] or self.Board[self.End]==' ':
return True
else:
return False
#SouthWest
elif self.End[0]==self.Start[0]+1 and self.End[1]==self.Start[1]-1:
if self.Board[self.End] in Pawns['Black'] or self.Board[self.End]==' ':
return True
else:
return False
#Up
elif self.End[0]==self.Start[0]-1 and self.End[1]==self.Start[1]:
if self.Board[self.End] in Pawns['Black'] or self.Board[self.End]==' ':
return True
else:
return False
#Down
elif self.End[0]==self.Start[0]+1 and self.End[1]==self.Start[1]:
if self.Board[self.End] in Pawns['Black'] or self.Board[self.End]==' ':
return True
else:
return False
#Right
elif self.End[0]==self.Start[0] and self.End[1]==self.Start[1]+1:
if self.Board[self.End] in Pawns['Black'] or self.Board[self.End]==' ':
return True
else:
return False
#Left
elif self.End[0]==self.Start[0] and self.End[1]==self.Start[1]-1:
if self.Board[self.End] in Pawns['Black'] or self.Board[self.End]==' ':
return True
else:
return False
else:
return False
def KingBlackMovement(self):
#NorthEast
if self.End[0]==self.Start[0]-1 and self.End[1]==self.Start[1]+1:
if self.Board[self.End] in Pawns['White'] or self.Board[self.End]==' ':
return True
else:
return False
#NorthWest
elif self.End[0]==self.Start[0]-1 and self.End[1]==self.Start[1]-1:
if self.Board[self.End] in Pawns['White'] or self.Board[self.End]==' ':
return True
else:
return False
#SouthEast
elif self.End[0]==self.Start[0]+1 and self.End[1]==self.Start[1]+1:
if self.Board[self.End] in Pawns['White'] or self.Board[self.End]==' ':
return True
else:
return False
#SouthWest
elif self.End[0]==self.Start[0]+1 and self.End[1]==self.Start[1]-1:
if self.Board[self.End] in Pawns['White'] or self.Board[self.End]==' ':
return True
else:
return False
#Up
elif self.End[0]==self.Start[0]-1 and self.End[1]==self.Start[1]:
if self.Board[self.End] in Pawns['White'] or self.Board[self.End]==' ':
return True
else:
return False
#Down
elif self.End[0]==self.Start[0]+1 and self.End[1]==self.Start[1]:
if self.Board[self.End] in Pawns['White'] or self.Board[self.End]==' ':
return True
else:
return False
#Right
elif self.End[0]==self.Start[0] and self.End[1]==self.Start[1]+1:
if self.Board[self.End] in Pawns['White'] or self.Board[self.End]==' ':
return True
else:
return False
#Left
elif self.End[0]==self.Start[0] and self.End[1]==self.Start[1]-1:
if self.Board[self.End] in Pawns['White'] or self.Board[self.End]==' ':
return True
else:
return False
else:
return False
def HorseWhiteMovement(self):
#
if self.End[0]==self.Start[0]+2 and self.End[1]==self.Start[1]+1:
if self.Board[self.End] in Pawns['Black'] or self.Board[self.End]==' ':
return True
elif self.Board[self.End] in Pawns['White']:
return False
else:
return 'the logic for this code is wrong'
#
elif self.End[0]==self.Start[0]+2 and self.End[1]==self.Start[1]-1:
if self.Board[self.End] in Pawns['Black'] or self.Board[self.End]==' ':
return True
elif self.Board[self.End] in Pawns['White']:
return False
else:
return 'the logic for this code is wrong'
#
elif self.End[0]==self.Start[0]-2 and self.End[1]==self.Start[1]+1:
if self.Board[self.End] in Pawns['Black'] or self.Board[self.End]==' ':
return True
elif self.Board[self.End] in Pawns['White']:
return False
else:
return 'the logic for this code is wrong'
#
elif self.End[0]==self.Start[0]-2 and self.End[1]==self.Start[1]-1:
if self.Board[self.End] in Pawns['Black'] or self.Board[self.End]==' ':
return True
elif self.Board[self.End] in Pawns['White']:
return False
else:
return 'the logic for this code is wrong'
#
elif self.End[0]==self.Start[0]-1 and self.End[1]==self.Start[1]+2:
if self.Board[self.End] in Pawns['Black'] or self.Board[self.End]==' ':
return True
elif self.Board[self.End] in Pawns['White']:
return False
else:
return 'the logic for this code is wrong'
#
elif self.End[0]==self.Start[0]+1 and self.End[1]==self.Start[1]+2:
if self.Board[self.End] in Pawns['Black'] or self.Board[self.End]==' ':
return True
elif self.Board[self.End] in Pawns['White']:
return False
else:
return 'the logic for this code is wrong'
#
elif self.End[0]==self.Start[0]-1 and self.End[1]==self.Start[1]-2:
if self.Board[self.End] in Pawns['Black'] or self.Board[self.End]==' ':
return True
elif self.Board[self.End] in Pawns['White']:
return False
else:
return 'the logic for this code is wrong'
#
elif self.End[0]==self.Start[0]+1 and self.End[1]==self.Start[1]-2:
if self.Board[self.End] in Pawns['Black'] or self.Board[self.End]==' ':
return True
elif self.Board[self.End] in Pawns['White']:
return False
else:
return 'the logic for this code is wrong'
else:
return False
def HorseBlackMovement(self):
#
if self.End[0]==self.Start[0]+2 and self.End[1]==self.Start[1]+1:
if self.Board[self.End] in Pawns['White'] or self.Board[self.End]==' ':
return True
elif self.Board[self.End] in Pawns['Black']:
return False
else:
return 'the logic for this code is wrong'
#
elif self.End[0]==self.Start[0]+2 and self.End[1]==self.Start[1]-1:
if self.Board[self.End] in Pawns['White'] or self.Board[self.End]==' ':
return True
elif self.Board[self.End] in Pawns['Black']:
return False
else:
return 'the logic for this code is wrong'
#
elif self.End[0]==self.Start[0]-2 and self.End[1]==self.Start[1]+1:
if self.Board[self.End] in Pawns['White'] or self.Board[self.End]==' ':
return True
elif self.Board[self.End] in Pawns['Black']:
return False
else:
return 'the logic for this code is wrong'
#
elif self.End[0]==self.Start[0]-2 and self.End[1]==self.Start[1]-1:
if self.Board[self.End] in Pawns['White'] or self.Board[self.End]==' ':
return True
elif self.Board[self.End] in Pawns['Black']:
return False
else:
return 'the logic for this code is wrong'
#
elif self.End[0]==self.Start[0]-1 and self.End[1]==self.Start[1]+2:
if self.Board[self.End] in Pawns['White'] or self.Board[self.End]==' ':
return True
elif self.Board[self.End] in Pawns['Black']:
return False
else:
return 'the logic for this code is wrong'
#
elif self.End[0]==self.Start[0]+1 and self.End[1]==self.Start[1]+2:
if self.Board[self.End] in Pawns['White'] or self.Board[self.End]==' ':
return True
elif self.Board[self.End] in Pawns['Black']:
return False
else:
return 'the logic for this code is wrong'
#
elif self.End[0]==self.Start[0]-1 and self.End[1]==self.Start[1]-2:
if self.Board[self.End] in Pawns['White'] or self.Board[self.End]==' ':
return True
elif self.Board[self.End] in Pawns['Black']:
return False
else:
return 'the logic for this code is wrong'
#
elif self.End[0]==self.Start[0]+1 and self.End[1]==self.Start[1]-2:
if self.Board[self.End] in Pawns['White'] or self.Board[self.End]==' ':
return True
elif self.Board[self.End] in Pawns['Black']:
return False
else:
return 'the logic for this code is wrong'
else:
return False
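# How the movement checks above are consumed (see the game loop inside
# ChessBoard() below): one AllPawns instance is built per attempted move, e.g.
# a = AllPawns((6, 0), (4, 0), SubBoard), and the colour/piece-specific
# *Movement method returns True (perform the move), False (reject it) or a
# promotion letter when a pawn reaches the last rank. Worth noting: each
# path-scanning `for` loop above returns inside its first iteration, so only
# the first square beyond Start is actually inspected before the move is
# accepted or rejected.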
def ChessBoard():
#Printing the game board
SubBoard={(0,0):' ',(0,1):' ',(0,2):' ',(0,3):' ',(0,4):' ',(0,5):' ',(0,6):' ',(0,7):' ',
(1,0):' ',(1,1):' ',(1,2):' ',(1,3):' ',(1,4):' ',(1,5):' ',(1,6):' ',(1,7):' ',
(2,0):' ',(2,1):' ',(2,2):' ',(2,3):' ',(2,4):' ',(2,5):' ',(2,6):' ',(2,7):' ',
(3,0):' ',(3,1):' ',(3,2):' ',(3,3):' ',(3,4):' ',(3,5):' ',(3,6):' ',(3,7):' ',
(4,0):' ',(4,1):' ',(4,2):' ',(4,3):' ',(4,4):' ',(4,5):' ',(4,6):' ',(4,7):' ',
(5,0):' ',(5,1):' ',(5,2):' ',(5,3):' ',(5,4):' ',(5,5):' ',(5,6):' ',(5,7):' ',
(6,0):' ',(6,1):' ',(6,2):' ',(6,3):' ',(6,4):' ',(6,5):' ',(6,6):' ',(6,7):' ',
(7,0):' ',(7,1):' ',(7,2):' ',(7,3):' ',(7,4):' ',(7,5):' ',(7,6):' ',(7,7):' '}
def printBoard(board):
print(' '+'|'+' 0 '+'|'+' 1 '+'|'+' 2 '+'|'+' 3 '+'|'+' 4 '+'|'+' 5 '+'|'+' 6 '+'|'+' 7 '+'|')
print(' '+'|'+'---+---+---+---+---+---+---+---'+'|')
print('0'+'|'+board[(0,0)]+'|'+board[(0,1)]+'|'+board[(0,2)]+'|'+board[(0,3)]+'|'+board[(0,4)]+'|'+board[(0,5)]+'|'+board[(0,6)]+'|'+board[(0,7)]+'|')
print(' '+'|'+'---+---+---+---+---+---+---+---'+'|')
print('1'+'|'+board[(1,0)]+'|'+board[(1,1)]+'|'+board[(1,2)]+'|'+board[(1,3)]+'|'+board[(1,4)]+'|'+board[(1,5)]+'|'+board[(1,6)]+'|'+board[(1,7)]+'|')
print(' '+'|'+'---+---+---+---+---+---+---+---'+'|')
print('2'+'|'+board[(2,0)]+'|'+board[(2,1)]+'|'+board[(2,2)]+'|'+board[(2,3)]+'|'+board[(2,4)]+'|'+board[(2,5)]+'|'+board[(2,6)]+'|'+board[(2,7)]+'|')
print(' '+'|'+'---+---+---+---+---+---+---+---'+'|')
print('3'+'|'+board[(3,0)]+'|'+board[(3,1)]+'|'+board[(3,2)]+'|'+board[(3,3)]+'|'+board[(3,4)]+'|'+board[(3,5)]+'|'+board[(3,6)]+'|'+board[(3,7)]+'|')
print(' '+'|'+'---+---+---+---+---+---+---+---'+'|')
print('4'+'|'+board[(4,0)]+'|'+board[(4,1)]+'|'+board[(4,2)]+'|'+board[(4,3)]+'|'+board[(4,4)]+'|'+board[(4,5)]+'|'+board[(4,6)]+'|'+board[(4,7)]+'|')
print(' '+'|'+'---+---+---+---+---+---+---+---'+'|')
print('5'+'|'+board[(5,0)]+'|'+board[(5,1)]+'|'+board[(5,2)]+'|'+board[(5,3)]+'|'+board[(5,4)]+'|'+board[(5,5)]+'|'+board[(5,6)]+'|'+board[(5,7)]+'|')
print(' '+'|'+'---+---+---+---+---+---+---+---'+'|')
print('6'+'|'+board[(6,0)]+'|'+board[(6,1)]+'|'+board[(6,2)]+'|'+board[(6,3)]+'|'+board[(6,4)]+'|'+board[(6,5)]+'|'+board[(6,6)]+'|'+board[(6,7)]+'|')
print(' '+'|'+'---+---+---+---+---+---+---+---'+'|')
print('7'+'|'+board[(7,0)]+'|'+board[(7,1)]+'|'+board[(7,2)]+'|'+board[(7,3)]+'|'+board[(7,4)]+'|'+board[(7,5)]+'|'+board[(7,6)]+'|'+board[(7,7)]+'|')
#Extra helpful functions
Pawns={'Black':{'BE0':'BE0','BH0':'BH0','BB0':'BB0','BQ0':'BQ0','BK0':'BK0','BB1':'BB1','BH1':'BH1','BE1':'BE1','BP2':'BP2','BP3':'BP3','BP4':'BP4','BP5':'BP5',
'BP6':'BP6','BP7':'BP7','BP8':'BP8','BP9':'BP9','BE2':'BE2','BE3':'BE3','BE4':'BE4','BE5':'BE5','BE6':'BE6','BE7':'BE7','BE8':'BE8','BE9':'BE9',
'BH2':'BH2','BH3':'BH3','BH4':'BH4','BH5':'BH5','BH6':'BH6','BH7':'BH7','BH8':'BH8','BH9':'BH9','BB2':'BB2','BB3':'BB3','BB4':'BB4','BB5':'BB5',
'BB6':'BB6','BB7':'BB7','BB8':'BB8','BB9':'BB9'},
'White':{'WE0':'WE0','WH0':'WH0','WB0':'WB0','WQ0':'WQ0','WK0':'WK0','WB1':'WB1','WH1':'WH1','WE1':'WE1','WP2':'WP2','WP3':'WP3','WP4':'WP4','WP5':'WP5',
'WP6':'WP6','WP7':'WP7','WP8':'WP8','WP9':'WP9','WE2':'WE2','WE3':'WE3','WE4':'WE4','WE5':'WE5','WE6':'WE6','WE7':'WE7','WE8':'WE8','WE9':'WE9',
'WH2':'WH2','WH3':'WH3','WH4':'WH4','WH5':'WH5','WH6':'WH6','WH7':'WH7','WH8':'WH8','WH9':'WH9','WB2':'WB2','WB3':'WB3','WB4':'WB4','WB5':'WB5',
'WB6':'WB6','WB7':'WB7','WB8':'WB8','WB9':'WB9'}}
def SetBoard():
SubBoard[(0,0)]='BE0'
SubBoard[(0,1)]='BH0'
SubBoard[(0,2)]='BB0'
SubBoard[(0,3)]='BQ0'
SubBoard[(0,4)]='BK0'
SubBoard[(0,5)]='BB1'
SubBoard[(0,6)]='BH1'
SubBoard[(0,7)]='BE1'
SubBoard[(1,0)]='BP2'
SubBoard[(1,1)]='BP3'
SubBoard[(1,2)]='BP4'
SubBoard[(1,3)]='BP5'
SubBoard[(1,4)]='BP6'
SubBoard[(1,5)]='BP7'
SubBoard[(1,6)]='BP8'
SubBoard[(1,7)]='BP9'
SubBoard[(6,0)]='WP2'
SubBoard[(6,1)]='WP3'
SubBoard[(6,2)]='WP4'
SubBoard[(6,3)]='WP5'
SubBoard[(6,4)]='WP6'
SubBoard[(6,5)]='WP7'
SubBoard[(6,6)]='WP8'
SubBoard[(6,7)]='WP9'
SubBoard[(7,0)]='WE0'
SubBoard[(7,1)]='WH0'
SubBoard[(7,2)]='WB0'
SubBoard[(7,3)]='WQ0'
SubBoard[(7,4)]='WK0'
SubBoard[(7,5)]='WB1'
SubBoard[(7,6)]='WH1'
SubBoard[(7,7)]='WE1'
def ClearBoard():
for r in range(8):
for c in range(8):
SubBoard[(r,c)]=' '
#Starting the game
game=input('Do you want to start the game? Type Y or N: ')
if game.lower() =='y':
ClearBoard()
SetBoard()
printBoard(SubBoard)
#Continuing the game
turn='Black'
while Pawns['White']['WK0'] in SubBoard.values() and Pawns['Black']['BK0'] in SubBoard.values():
if turn=='Black':
turn='White'
else:
turn='Black'
Check=False
while Check==False or Check==None:
Start=input('It\'s your turn '+turn+', what is your starting position?')
Start=tuple([int(x) for x in Start.split(',')]) #Convert the comma-separated input (e.g. '6,0') into a tuple of ints
End=input('What is your end position')
End=tuple([int(x) for x in End.split(',')])
a=AllPawns((Start),(End),SubBoard)
if turn=='White' and SubBoard[Start] in Pawns['White'] and (SubBoard[Start])[1]=='E':
Check=a.EleWhiteMovement()
if Check==True:
SubBoard[End]=SubBoard[Start]
SubBoard[Start]=' '
printBoard(SubBoard)
elif Check==False:
print('There is a piece blocking your path or this piece cannot move in this direction, try again')
pass
else:
print(Check)
elif turn=='Black' and SubBoard[Start] in Pawns['Black'] and (SubBoard[Start])[1]=='E':
Check=a.EleBlackMovement()
if Check==True:
SubBoard[End]=SubBoard[Start]
SubBoard[Start]=' '
printBoard(SubBoard)
elif Check==False:
print('There is a piece blocking your path or this piece cannot move in this direction, try again')
pass
else:
print(Check)
elif turn=='White' and SubBoard[Start] in Pawns['White'] and (SubBoard[Start])[1]=='B':
Check=a.BishWhiteMovement()
if Check==True:
SubBoard[End]=SubBoard[Start]
SubBoard[Start]=' '
printBoard(SubBoard)
elif Check==False:
print('There is a piece blocking your path or this piece cannot move in this direction, try again')
pass
else:
print(Check)
elif turn=='Black' and SubBoard[Start] in Pawns['Black'] and (SubBoard[Start])[1]=='B':
Check=a.BishBlackMovement()
if Check==True:
SubBoard[End]=SubBoard[Start]
SubBoard[Start]=' '
printBoard(SubBoard)
elif Check==False:
print('There is a piece blocking your path or this piece cannot move in this direction, try again')
pass
else:
print(Check)
elif turn=='White' and SubBoard[Start] in Pawns['White'] and (SubBoard[Start])[1]=='Q':
Check=a.QuWhiteMovement()
if Check==True:
SubBoard[End]=SubBoard[Start]
SubBoard[Start]=' '
printBoard(SubBoard)
elif Check==False:
print('There is a piece blocking your path or this piece cannot move in this direction, try again')
pass
else:
print(Check)
elif turn=='Black' and SubBoard[Start] in Pawns['Black'] and (SubBoard[Start])[1]=='Q':
Check=a.QuBlackMovement()
if Check==True:
SubBoard[End]=SubBoard[Start]
SubBoard[Start]=' '
printBoard(SubBoard)
elif Check==False:
print('There is a piece blocking your path or this piece cannot move in this direction, try again')
pass
else:
print(Check)
elif turn=='White' and SubBoard[Start] in Pawns['White'] and (SubBoard[Start])[1]=='K':
Check=a.KingWhiteMovement()
if Check==True:
SubBoard[End]=SubBoard[Start]
SubBoard[Start]=' '
printBoard(SubBoard)
elif Check==False:
print('There is a piece blocking your path or this piece cannot move in this direction, try again')
pass
else:
print(Check)
elif turn=='Black' and SubBoard[Start] in Pawns['Black'] and (SubBoard[Start])[1]=='K':
Check=a.KingBlackMovement()
if Check==True:
SubBoard[End]=SubBoard[Start]
SubBoard[Start]=' '
printBoard(SubBoard)
elif Check==False:
print('There is a piece blocking your path or this piece cannot move in this direction, try again')
pass
else:
print(Check)
elif turn=='White' and SubBoard[Start] in Pawns['White'] and (SubBoard[Start])[1]=='H':
Check=a.HorseWhiteMovement()
if Check==True:
SubBoard[End]=SubBoard[Start]
SubBoard[Start]=' '
printBoard(SubBoard)
elif Check==False:
print('There is a piece blocking your path or this piece cannot move in this direction, try again')
pass
else:
print(Check)
elif turn=='Black' and SubBoard[Start] in Pawns['Black'] and (SubBoard[Start])[1]=='H':
Check=a.HorseBlackMovement()
if Check==True:
SubBoard[End]=SubBoard[Start]
SubBoard[Start]=' '
printBoard(SubBoard)
elif Check==False:
print('There is a piece blocking your path or this piece cannot move in this direction, try again')
pass
else:
print(Check)
elif turn=='White' and SubBoard[Start] in Pawns['White'] and (SubBoard[Start])[1]=='P':
Check=a.PawnsWhiteMovement()
if Check==True:
SubBoard[End]=SubBoard[Start]
SubBoard[Start]=' '
printBoard(SubBoard)
elif Check==False:
print('There is a piece blocking your path or this piece cannot move in this direction, try again')
pass
elif Check=='Q' or Check=='B' or Check=='E' or Check=='H':
Pawns['White'][SubBoard[Start]]=Pawns['White'][SubBoard[Start]][0]+Check+Pawns['White'][SubBoard[Start]][2]
SubBoard[End]=Pawns['White'][SubBoard[Start]]
SubBoard[Start]=' '
printBoard(SubBoard)
else:
print(Check)
elif turn=='Black' and SubBoard[Start] in Pawns['Black'] and (SubBoard[Start])[1]=='P':
Check=a.PawnsBlackMovement()
if Check==True:
SubBoard[End]=SubBoard[Start]
SubBoard[Start]=' '
printBoard(SubBoard)
elif Check==False:
print('There is a piece blocking your path or this piece cannot move in this direction, try again')
pass
elif Check=='Q' or Check=='B' or Check=='E' or Check=='H':
Pawns['Black'][SubBoard[Start]]=Pawns['Black'][SubBoard[Start]][0]+Check+Pawns['Black'][SubBoard[Start]][2]
SubBoard[End]=Pawns['Black'][SubBoard[Start]]
SubBoard[Start]=' '
printBoard(SubBoard)
else:
print(Check)
else:
print('The starting co-ordinates do not contain one of your pieces, please enter another set of co-ordinates')
if Pawns['White']['WK0'] not in SubBoard.values():
print('The Black Pieces Have Won This Game Of Chess \n Yaaaaaaaaaaaaaaaaa Good Job Black Player')
elif Pawns['Black']['BK0'] not in SubBoard.values():
print('The White Pieces Have Won This Game Of Chess \n Yaaaaaaaaaaaaaaaaa Good Job White Player')
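# Example session (coordinates are entered as 'row,column'): after answering
# 'y' to the start prompt the fresh board is printed, then White can enter
# start '6,0' and end '4,0' to advance pawn WP2 two squares, which
# PawnsWhiteMovement accepts because the pawn is still on its home row 6.
# The loop keeps prompting for moves until one of the kings (WK0 or BK0)
# disappears from the board, at which point the winner is announced above.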
| 40.662638 | 232 | 0.547774 | 6,228 | 40,378 | 3.550739 | 0.03693 | 0.170118 | 0.101293 | 0.091164 | 0.868907 | 0.865153 | 0.858642 | 0.854843 | 0.854843 | 0.850231 | 0 | 0.045988 | 0.241741 | 40,378 | 992 | 233 | 40.703629 | 0.676291 | 0.016618 | 0 | 0.836759 | 0 | 0.006046 | 0.129545 | 0.006258 | 0 | 0 | 0 | 0 | 0 | 1 | 0.020556 | false | 0.01451 | 0 | 0 | 0.29867 | 0.072551 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
e799f51efe0eca25f48bccae01aa777f489daaba | 3,945 | py | Python | controllers/config.py | xiaomatech/ops | aeb9355e7ae9aec8404b6f6495c03175d79880e9 | [
"MIT"
] | 9 | 2016-07-21T01:49:18.000Z | 2019-12-10T04:07:33.000Z | controllers/config.py | xiaomatech/ops | aeb9355e7ae9aec8404b6f6495c03175d79880e9 | [
"MIT"
] | null | null | null | controllers/config.py | xiaomatech/ops | aeb9355e7ae9aec8404b6f6495c03175d79880e9 | [
"MIT"
] | 10 | 2016-12-14T02:59:39.000Z | 2020-04-28T07:55:06.000Z | #!/usr/bin/env python
# -*- coding:utf8 -*-
from library.etcd import Etcd
class config:
def help(self, req, resp):
h = '''
Configuration center (backed by etcd)
Supports multiple data centers (-r is the data center name from etcd_config in configs/__init__.py)
ops config set -k key -v value -t 123 -r gz
ops config append -k key -v value -t 123 -r gz
ops config mkdir -k key -t 123 -r gz
ops config rmdir -k key --recursive t -r gz
ops config wait -k key --recursive t -r gz
ops config get_all -k key -r gz
ops config get -k key --recursive t -r gz
ops config delete -k key -r gz
'''
return h
def set(self, req, resp):
key = req.get_param(name='k')
value = req.get_param(name='v')
ttl = req.get_param(name='t')
room = req.get_param(name='r')
if room is None:
return '-r(room) need'
if ttl is None:
return '-t(ttl) need'
if value is None:
return '-v(value) need'
if key is None:
return '-k(key) need'
etcd = Etcd(room=room)
return etcd.set(key, value, ttl)
def append(self, req, resp):
key = req.get_param(name='k')
value = req.get_param(name='v')
ttl = req.get_param(name='t')
room = req.get_param(name='r')
if room is None:
return '-r(room) need'
if ttl is None:
return '-t(ttl) need'
if value is None:
return '-v(value) need'
if key is None:
return '-k(key) need'
etcd = Etcd(room=room)
return etcd.append(key, value, ttl)
def mkdir(self, req, resp):
key = req.get_param(name='k')
ttl = req.get_param(name='t')
room = req.get_param(name='r')
if room is None:
return '-r(room) need'
if ttl is None:
return '-t(ttl) need'
if key is None:
return '-k(key) need'
etcd = Etcd(room=room)
return etcd.mkdir(key, ttl)
def rmdir(self, req, resp):
key = req.get_param(name='k')
recursive = req.get_param(name='recursive')
room = req.get_param(name='r')
if room is None:
return '-r(room) need'
if recursive is None:
return '--recursive(recursive) need'
if key is None:
return '-k(key) need'
etcd = Etcd(room=room)
return etcd.rmdir(key, recursive)
def get(self, req, resp):
key = req.get_param(name='k')
recursive = req.get_param(name='recursive')
room = req.get_param(name='r')
if room is None:
return '-r(room) need'
if recursive is None:
return '--recursive(recursive) need'
if key is None:
return '-k(key) need'
etcd = Etcd(room=room)
return etcd.get(key, recursive)
def wait(self, req, resp):
key = req.get_param(name='k')
recursive = req.get_param(name='recursive')
room = req.get_param(name='r')
if room is None:
return '-r(room) need'
if recursive is None:
return '--recursive(recursive) need'
if key is None:
return '-k(key) need'
etcd = Etcd(room=room)
return etcd.wait(key, recursive)
def get_all(self, req, resp):
key = req.get_param(name='k')
room = req.get_param(name='r')
if room is None:
return '-r(room) need'
if key is None:
return '-k(key) need'
etcd = Etcd(room=room)
return etcd.get_all(key)
def delete(self, req, resp):
key = req.get_param(name='k')
room = req.get_param(name='r')
if room is None:
return '-r(room) need'
if key is None:
return '-k(key) need'
etcd = Etcd(room=room)
return etcd.delete(key)
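# Illustrative sketch of using the Etcd wrapper directly (assumption: a room
# named 'gz' is configured in configs/__init__.py, as in the help text above;
# only methods already exercised by the handlers are shown, and the key/value/
# ttl values are placeholders):
#
#     etcd = Etcd(room='gz')
#     etcd.set('/ops/demo', 'hello', 123)   # write a key with a ttl
#     print(etcd.get_all('/ops/demo'))      # read it back
#     etcd.delete('/ops/demo')              # clean up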
| 30.581395 | 59 | 0.516857 | 550 | 3,945 | 3.649091 | 0.096364 | 0.071749 | 0.143498 | 0.171898 | 0.821126 | 0.812157 | 0.804185 | 0.804185 | 0.765321 | 0.750374 | 0 | 0.003971 | 0.361724 | 3,945 | 128 | 60 | 30.820313 | 0.79309 | 0.010139 | 0 | 0.705357 | 0 | 0 | 0.236229 | 0.026646 | 0 | 0 | 0 | 0 | 0 | 1 | 0.080357 | false | 0 | 0.008929 | 0 | 0.392857 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
e7c014c8a614ee062b17449020c68652cdd79d79 | 2,615 | py | Python | cogs/listeners.py | JDJGInc/JDJGBotSupreme | fd8a5679f05cb90ebec8dbfc297445f9773ebe5f | [
"MIT"
] | 4 | 2020-07-10T04:02:23.000Z | 2021-02-13T16:38:54.000Z | cogs/listeners.py | JDJGInc/JDJGBotSupreme | fd8a5679f05cb90ebec8dbfc297445f9773ebe5f | [
"MIT"
] | 3 | 2021-07-13T15:38:39.000Z | 2022-02-15T15:17:17.000Z | cogs/listeners.py | johndpope/JDJGBotSupreme | 64fde0e169811e1866eb29174ac5dd8e052d830a | [
"MIT"
] | 2 | 2020-08-01T11:15:09.000Z | 2022-02-15T11:46:22.000Z | from discord.ext import commands
import discord, random, os
class Events(commands.Cog):
def __init__(self,bot):
self.bot = bot
@commands.Cog.listener()
async def on_guild_join(self, guild_fetched):
channels = [channel for channel in guild_fetched.channels]
roles = [role for role in guild_fetched.roles]
embed = discord.Embed(title="Bot just joined: "+str(guild_fetched.name), color=random.randint(0,16777215))
embed.set_thumbnail(url = guild_fetched.icon_url)
embed.add_field(name='Server Name:',value=f'{guild_fetched.name}')
embed.add_field(name='Server ID:',value=f'{guild_fetched.id}')
embed.add_field(name='Server region:',value=f'{guild_fetched.region}')
embed.add_field(name='Server Creation Date:',value=f'{guild_fetched.created_at}')
embed.add_field(name='Server Owner:',value=f'{guild_fetched.owner}')
embed.add_field(name='Server Owner ID:',value=f'{guild_fetched.owner.id}')
embed.add_field(name='Member Count:',value=f'{guild_fetched.member_count}')
embed.add_field(name='Amount of Channels:',value=f"{len(channels)}")
embed.add_field(name='Amount of Roles:',value=f"{len(roles)}")
await self.bot.get_channel(738912143679946783).send(embed=embed)
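# Assumption: 738912143679946783 is a hard-coded log channel on the bot
# owner's server; join/leave notifications are sent there rather than to the
# guild that triggered the event.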
@commands.Cog.listener()
async def on_guild_remove(self, guild_fetched):
channels = [channel for channel in guild_fetched.channels]
roles = [role for role in guild_fetched.roles]
embed = discord.Embed(title="Bot just left: "+str(guild_fetched.name), color=random.randint(0,16777215))
embed.set_thumbnail(url = guild_fetched.icon_url)
embed.add_field(name='Server Name:',value=f'{guild_fetched.name}')
embed.add_field(name='Server ID:',value=f'{guild_fetched.id}')
embed.add_field(name='Server region:',value=f'{guild_fetched.region}')
embed.add_field(name='Server Creation Date:',value=f'{guild_fetched.created_at}')
embed.add_field(name='Server Owner:',value=f'{guild_fetched.owner}')
try:
embed.add_field(name='Server Owner ID:',value=f'{guild_fetched.owner.id}')
except:
pass
try:
embed.add_field(name='Member Count:',value=f'{guild_fetched.member_count}')
except:
pass
embed.add_field(name='Amount of Channels:',value=f"{len(channels)}")
embed.add_field(name='Amount of Roles:',value=f"{len(roles)}")
await self.bot.get_channel(738912143679946783).send(embed=embed)
@commands.Cog.listener()
async def on_ready(self):
print("Bot is Ready")
print(f"Logged in as {self.bot.user}")
print(f"Id: {self.bot.user.id}")
def setup(bot):
bot.add_cog(Events(bot)) | 45.877193 | 110 | 0.717017 | 391 | 2,615 | 4.636829 | 0.186701 | 0.158853 | 0.129068 | 0.168781 | 0.869829 | 0.86652 | 0.86652 | 0.845008 | 0.845008 | 0.845008 | 0 | 0.023622 | 0.125813 | 2,615 | 57 | 111 | 45.877193 | 0.769466 | 0 | 0 | 0.7 | 0 | 0 | 0.280581 | 0.092508 | 0 | 0 | 0 | 0 | 0 | 1 | 0.04 | false | 0.04 | 0.04 | 0 | 0.1 | 0.06 | 0 | 0 | 0 | null | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
820213264371dbb96c59a7f800f866c643ab4eca | 12,095 | py | Python | antColonyOptimization/test/test_ant_colony_update_pheromones.py | DHEEPIKRAJ/DMCISCT | ec6d3c5b427c3266005f59bc174adfcd37942bf0 | [
"MIT"
] | 1 | 2022-03-02T12:57:19.000Z | 2022-03-02T12:57:19.000Z | antColonyOptimization/test/test_ant_colony_update_pheromones.py | DHEEPIKRAJ/DMCISCT | ec6d3c5b427c3266005f59bc174adfcd37942bf0 | [
"MIT"
] | null | null | null | antColonyOptimization/test/test_ant_colony_update_pheromones.py | DHEEPIKRAJ/DMCISCT | ec6d3c5b427c3266005f59bc174adfcd37942bf0 | [
"MIT"
] | null | null | null | import unittest
import importlib
#source: http://stackoverflow.com/a/11158224/5343977
import os,sys,inspect
currentdir = os.path.dirname(os.path.abspath(inspect.getfile(inspect.currentframe())))
parentdir = os.path.dirname(currentdir)
sys.path.insert(0,parentdir)
import ant_colony as module
class TestAntColonyUpdatePheromones(unittest.TestCase):
def test_empty_first_run_no_ants(self):
module.debug = False
class test_empty_object(module.ant_colony):
def __init__(self): pass
def _get_distance(self, start, end): pass
def _init_matrix(self, size, value=None): pass
def _init_ants(self, count, start=0): pass
#def _update_pheromone_map(self): pass
def _populate_ant_updated_pheromone_map(self, ant): pass
def mainloop(self): pass
test_object = test_empty_object()
#setup test environment
def _init_matrix(size, value=None):
"""
setup a matrix NxN (where n = size)
used in both self.distance_matrix and self.pheromone_map
as they require identical matrixes besides which value to initialize to
"""
ret = []
for row in range(size):
ret.append([value for x in range(size)])
return ret
test_object.pheromone_map = _init_matrix(1, value=0)
test_object.ant_updated_pheromone_map = _init_matrix(1, value=0)
test_object.pheromone_evaporation_coefficient = .99
test_object.first_pass = True
#_DEBUG_ARRAY(test_object.pheromone_map)
test_object._update_pheromone_map()
#_DEBUG_ARRAY(test_object.pheromone_map)
#verify no changes took place, since this was a first pass
self.assertEqual(test_object.pheromone_map, [[0]])
def test_decay_no_ants(self):
module.debug = False
class test_empty_object(module.ant_colony):
def __init__(self): pass
def _get_distance(self, start, end): pass
def _init_matrix(self, size, value=None): pass
def _init_ants(self, count, start=0): pass
#def _update_pheromone_map(self): pass
def _populate_ant_updated_pheromone_map(self, ant): pass
def mainloop(self): pass
test_object = test_empty_object()
#setup test environment
def _init_matrix(size, value=None):
"""
setup a matrix NxN (where n = size)
used in both self.distance_matrix and self.pheromone_map
as they require identical matrixes besides which value to initialize to
"""
ret = []
for row in range(size):
ret.append([value for x in range(size)])
return ret
test_object.pheromone_map = _init_matrix(2, value=1)
test_object.ant_updated_pheromone_map = _init_matrix(2, value=0)
test_object.pheromone_evaporation_coefficient = .99
test_object.ants = []
#_DEBUG_ARRAY(test_object.pheromone_map)
test_object._update_pheromone_map()
#_DEBUG_ARRAY(test_object.pheromone_map)
self.assertTrue(test_object.pheromone_map[0][1] < 1)
self.assertTrue(test_object.pheromone_map[1][0] < 1)
self.assertTrue(test_object.pheromone_map[1][1] < 1)
self.assertTrue(test_object.pheromone_map[0][0] < 1)
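# Note (assumption about the implementation under test): these assertions only
# require that _update_pheromone_map() applies evaporation even when there are
# no ants, i.e. every cell initialised to 1 must strictly shrink. The exact
# decay formula (presumably value * (1 - pheromone_evaporation_coefficient))
# lives in ant_colony.py and is deliberately not pinned down by this test.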
def test_first_run_single_ant(self):
module.debug = False
class test_empty_object(module.ant_colony):
def __init__(self): pass
def _get_distance(self, start, end): pass
def _init_matrix(self, size, value=None): pass
def _init_ants(self, count, start=0): pass
#def _update_pheromone_map(self): pass
def _populate_ant_updated_pheromone_map(self, ant): pass
def mainloop(self): pass
test_object = test_empty_object()
#setup test environment
def _init_matrix(size, value=None):
"""
setup a matrix NxN (where n = size)
used in both self.distance_matrix and self.pheromone_map
as they require identical matrixes besides which value to initialize to
"""
ret = []
for row in range(size):
ret.append([value for x in range(size)])
return ret
class mock_ant:
def get_route(self):
return [0, 1, 2]
def get_distance_traveled(self):
return float(2)
test_object.pheromone_map = _init_matrix(3, value=0)
test_object.ant_updated_pheromone_map = _init_matrix(3, value=.5)
test_object.pheromone_evaporation_coefficient = .99
test_object.pheromone_constant = 1
test_object.ants = [mock_ant()]
#_DEBUG_ARRAY(test_object.pheromone_map)
test_object._update_pheromone_map()
#_DEBUG_ARRAY(test_object.pheromone_map)
#testing
self.assertEqual(test_object.pheromone_map[0][1], .5)
self.assertEqual(test_object.pheromone_map[1][0], .5)
self.assertEqual(test_object.pheromone_map[1][2], .5)
self.assertEqual(test_object.pheromone_map[2][1], .5)
    def test_single_ant(self):
        module.debug = False
        class test_empty_object(module.ant_colony):
            def __init__(self): pass
            def _get_distance(self, start, end): pass
            def _init_matrix(self, size, value=None): pass
            def _init_ants(self, count, start=0): pass
            #def _update_pheromone_map(self): pass
            def _populate_ant_updated_pheromone_map(self, ant): pass
            def mainloop(self): pass
        test_object = test_empty_object()
        #setup test environment
        def _init_matrix(size, value=None):
            """
            set up an NxN matrix (where N = size)
            used in both self.distance_matrix and self.pheromone_map
            as they require identical matrices apart from the value each is initialized to
            """
            ret = []
            for row in range(size):
                ret.append([value for x in range(size)])
            return ret
        class mock_ant:
            def get_route(self):
                return [0, 1, 2]
            def get_distance_traveled(self):
                return float(2)
        test_object.pheromone_map = _init_matrix(3, value=1)
        test_object.ant_updated_pheromone_map = _init_matrix(3, value=.5)
        test_object.pheromone_evaporation_coefficient = .99
        test_object.pheromone_constant = 1
        test_object.ants = [mock_ant()]
        test_object.first_pass = False
        #_DEBUG_ARRAY(test_object.pheromone_map)
        test_object._update_pheromone_map()
        #_DEBUG_ARRAY(test_object.pheromone_map)
        #testing
        self.assertEqual(test_object.pheromone_map[0][1], .51)
        self.assertEqual(test_object.pheromone_map[1][2], .51)
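        #arithmetic behind the .51 expectation (assuming the decay-then-deposit rule
        #sketched above): (1 - .99) * 1 = .01 from decay, plus
        #pheromone_constant / distance = 1 / 2 = .5 from the single ant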
    def test_first_run_two_ants(self):
        module.debug = False
        class test_empty_object(module.ant_colony):
            def __init__(self): pass
            def _get_distance(self, start, end): pass
            def _init_matrix(self, size, value=None): pass
            def _init_ants(self, count, start=0): pass
            #def _update_pheromone_map(self): pass
            def _populate_ant_updated_pheromone_map(self, ant): pass
            def mainloop(self): pass
        test_object = test_empty_object()
        #setup test environment
        def _init_matrix(size, value=None):
            """
            set up an NxN matrix (where N = size)
            used in both self.distance_matrix and self.pheromone_map
            as they require identical matrices apart from the value each is initialized to
            """
            ret = []
            for row in range(size):
                ret.append([value for x in range(size)])
            return ret
        class mock_ant:
            def __init__(self, route, distance):
                self.route = route
                self.distance = distance
            def get_route(self):
                return self.route
            def get_distance_traveled(self):
                return float(self.distance)
        test_object.pheromone_map = _init_matrix(4, value=0)
        test_object.ant_updated_pheromone_map = _init_matrix(4, value=0)
        #ant1, traverse: 0 -> 1 -> 2 -> 3, distance == 2
        test_object.ant_updated_pheromone_map[0][1] = .5
        test_object.ant_updated_pheromone_map[1][0] = .5
        test_object.ant_updated_pheromone_map[1][2] = .5
        test_object.ant_updated_pheromone_map[2][1] = .5
        test_object.ant_updated_pheromone_map[2][3] = .5
        test_object.ant_updated_pheromone_map[3][2] = .5
        #ant2, traverse: 3 -> 0 -> 2 -> 1, distance == 3
        test_object.ant_updated_pheromone_map[3][0] = 1.0/3.0
        test_object.ant_updated_pheromone_map[0][3] = 1.0/3.0
        test_object.ant_updated_pheromone_map[0][2] = 1.0/3.0
        test_object.ant_updated_pheromone_map[2][0] = 1.0/3.0
        test_object.ant_updated_pheromone_map[2][1] += 1.0/3.0 #as this edge is traversed twice, once by each ant
        test_object.ant_updated_pheromone_map[1][2] += 1.0/3.0
        test_object.pheromone_evaporation_coefficient = .99
        test_object.pheromone_constant = 1
        ant1 = mock_ant([0, 1, 2, 3], 2)
        ant2 = mock_ant([3, 0, 2, 1], 3)
        test_object.ants = [ant1, ant2]
        test_object.first_pass = True
        #_DEBUG_ARRAY(test_object.pheromone_map)
        test_object._update_pheromone_map()
        #_DEBUG_ARRAY(test_object.pheromone_map)
        #testing
        #ant 1
        self.assertEqual(test_object.pheromone_map[0][1], .5)
        #this edge is updated TWICE, once by each ant, so it ends up higher than the others: two pheromone trails are laid down on it (after the decay step)
        #due to floating-point rounding we "squeeze" its value between bounds rather than test for exact equality
        #(the true value should be 5/6 [1/2 + 1/3, from ant1 and ant2 respectively])
        self.assertTrue(test_object.pheromone_map[1][2] < .9 and test_object.pheromone_map[1][2] >= (.83))
        self.assertEqual(test_object.pheromone_map[2][3], .5)
        #ant2
        self.assertEqual(test_object.pheromone_map[3][0], (1.0/3.0))
        self.assertEqual(test_object.pheromone_map[0][2], (1.0/3.0))
        self.assertTrue(test_object.pheromone_map[2][1] < .9 and test_object.pheromone_map[2][1] >= (.83))
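        #a tighter alternative to the bound checks above, assuming only normal
        #double-precision rounding noise (illustrative only, not the original assertion):
        #   self.assertAlmostEqual(test_object.pheromone_map[1][2], 5.0/6.0, places=7)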
    def test_two_ants(self):
        module.debug = False
        class test_empty_object(module.ant_colony):
            def __init__(self): pass
            def _get_distance(self, start, end): pass
            def _init_matrix(self, size, value=None): pass
            def _init_ants(self, count, start=0): pass
            #def _update_pheromone_map(self): pass
            def _populate_ant_updated_pheromone_map(self, ant): pass
            def mainloop(self): pass
        test_object = test_empty_object()
        #setup test environment
        def _init_matrix(size, value=None):
            """
            set up an NxN matrix (where N = size)
            used in both self.distance_matrix and self.pheromone_map
            as they require identical matrices apart from the value each is initialized to
            """
            ret = []
            for row in range(size):
                ret.append([value for x in range(size)])
            return ret
        class mock_ant:
            def __init__(self, route, distance):
                self.route = route
                self.distance = distance
            def get_route(self):
                return self.route
            def get_distance_traveled(self):
                return float(self.distance)
        test_object.pheromone_map = _init_matrix(4, value=1)
        test_object.ant_updated_pheromone_map = _init_matrix(4, value=0)
        #ant1, traverse: 0 -> 1 -> 2 -> 3, distance == 2
        test_object.ant_updated_pheromone_map[0][1] = .5
        test_object.ant_updated_pheromone_map[1][0] = .5
        test_object.ant_updated_pheromone_map[1][2] = .5
        test_object.ant_updated_pheromone_map[2][1] = .5
        test_object.ant_updated_pheromone_map[2][3] = .5
        test_object.ant_updated_pheromone_map[3][2] = .5
        #ant2, traverse: 3 -> 0 -> 2 -> 1, distance == 3
        test_object.ant_updated_pheromone_map[3][0] = 1.0/3.0
        test_object.ant_updated_pheromone_map[0][3] = 1.0/3.0
        test_object.ant_updated_pheromone_map[0][2] = 1.0/3.0
        test_object.ant_updated_pheromone_map[2][0] = 1.0/3.0
        test_object.ant_updated_pheromone_map[2][1] += 1.0/3.0 #as this edge is traversed twice, once by each ant
        test_object.ant_updated_pheromone_map[1][2] += 1.0/3.0
        test_object.pheromone_evaporation_coefficient = .99
        test_object.pheromone_constant = 1
        ant1 = mock_ant([0, 1, 2, 3], 2)
        ant2 = mock_ant([3, 0, 2, 1], 3)
        test_object.ants = [ant1, ant2]
        test_object.first_pass = True
        #_DEBUG_ARRAY(test_object.pheromone_map)
        test_object._update_pheromone_map()
        #_DEBUG_ARRAY(test_object.pheromone_map)
        #testing
        #values here are .01 larger than in the previous test:
        #each entry starts at 1, decays to .01 (with a .99 pheromone_evaporation_coefficient), and then each ant's contribution is added
        #ant 1
        self.assertEqual(test_object.pheromone_map[0][1], .51)
        self.assertTrue(test_object.pheromone_map[1][2] < .9 and test_object.pheromone_map[1][2] >= (.83 + .01))
        self.assertEqual(test_object.pheromone_map[2][3], .51)
        #ant2
        self.assertEqual(test_object.pheromone_map[3][0], (1.0/3.0) + .01)
        self.assertEqual(test_object.pheromone_map[0][2], (1.0/3.0) + .01)
        self.assertTrue(test_object.pheromone_map[2][1] < .9 and test_object.pheromone_map[2][1] >= (.83 + .01))
if __name__ == '__main__':
unittest.main() | 34.65616 | 140 | 0.726002 | 1,882 | 12,095 | 4.373539 | 0.085016 | 0.128781 | 0.126959 | 0.120277 | 0.902321 | 0.899891 | 0.89904 | 0.884583 | 0.847406 | 0.84692 | 0 | 0.034809 | 0.161554 | 12,095 | 349 | 141 | 34.65616 | 0.776846 | 0.223067 | 0 | 0.78972 | 0 | 0 | 0.000866 | 0 | 0 | 0 | 0 | 0 | 0.107477 | 1 | 0.271028 | false | 0.186916 | 0.018692 | 0.037383 | 0.406542 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 9 |
41a5bfd27ec2a5a2b7fa7b491017f8c894f0aea4 | 72,319 | py | Python | v6.0.5/system/test_fortios_system_interface.py | fortinet-solutions-cse/ansible_fgt_modules | c45fba49258d7c9705e7a8fd9c2a09ea4c8a4719 | ["Apache-2.0"] | 14 | 2018-09-25T20:35:25.000Z | 2021-07-14T04:30:54.000Z | v6.0.6/system/test_fortios_system_interface.py | fortinet-solutions-cse/ansible_fgt_modules | c45fba49258d7c9705e7a8fd9c2a09ea4c8a4719 | ["Apache-2.0"] | 32 | 2018-10-09T04:13:42.000Z | 2020-05-11T07:20:28.000Z | v6.0.5/system/test_fortios_system_interface.py | fortinet-solutions-cse/ansible_fgt_modules | c45fba49258d7c9705e7a8fd9c2a09ea4c8a4719 | ["Apache-2.0"] | 11 | 2018-10-09T00:14:53.000Z | 2021-11-03T10:54:09.000Z | # Copyright 2019 Fortinet, Inc.
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <https://www.gnu.org/licenses/>.
# Make coding more python3-ish
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import os
import json
import pytest
from mock import ANY
from ansible.module_utils.network.fortios.fortios import FortiOSHandler
try:
from ansible.modules.network.fortios import fortios_system_interface
except ImportError:
pytest.skip("Could not load required modules for testing", allow_module_level=True)
@pytest.fixture(autouse=True)
def connection_mock(mocker):
connection_class_mock = mocker.patch('ansible.modules.network.fortios.fortios_system_interface.Connection')
return connection_class_mock
fos_instance = FortiOSHandler(connection_mock)
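# Note: at import time `connection_mock` above is still the fixture function
# object (pytest only resolves fixtures inside tests), so fos_instance wraps
# that function rather than a live connection. This appears to be harmless here
# because every FortiOSHandler method the tests exercise (schema, set, delete)
# is patched per-test via mocker.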
def test_system_interface_creation(mocker):
schema_method_mock = mocker.patch('ansible.module_utils.network.fortios.fortios.FortiOSHandler.schema')
set_method_result = {'status': 'success', 'http_method': 'POST', 'http_status': 200}
set_method_mock = mocker.patch('ansible.module_utils.network.fortios.fortios.FortiOSHandler.set', return_value=set_method_result)
input_data = {
'username': 'admin',
'state': 'present',
'system_interface': {
'ac_name': 'test_value_3',
'aggregate': 'test_value_4',
'algorithm': 'L2',
'alias': 'test_value_6',
'ap_discover': 'enable',
'arpforward': 'enable',
'auth_type': 'auto',
'auto_auth_extension_device': 'enable',
'bfd': 'global',
'bfd_desired_min_tx': '12',
'bfd_detect_mult': '13',
'bfd_required_min_rx': '14',
'broadcast_forticlient_discovery': 'enable',
'broadcast_forward': 'enable',
'captive_portal': '17',
'cli_conn_status': '18',
'color': '19',
'dedicated_to': 'none',
'defaultgw': 'enable',
'description': 'test_value_22',
'detected_peer_mtu': '23',
'detectprotocol': 'ping',
'detectserver': 'test_value_25',
'device_access_list': 'test_value_26',
'device_identification': 'enable',
'device_identification_active_scan': 'enable',
'device_netscan': 'disable',
'device_user_identification': 'enable',
'devindex': '31',
'dhcp_client_identifier': 'myId_32',
'dhcp_relay_agent_option': 'enable',
'dhcp_relay_ip': 'test_value_34',
'dhcp_relay_service': 'disable',
'dhcp_relay_type': 'regular',
'dhcp_renew_time': '37',
'disc_retry_timeout': '38',
'disconnect_threshold': '39',
'distance': '40',
'dns_server_override': 'enable',
'drop_fragment': 'enable',
'drop_overlapped_fragment': 'enable',
'egress_shaping_profile': 'test_value_44',
'endpoint_compliance': 'enable',
'estimated_downstream_bandwidth': '46',
'estimated_upstream_bandwidth': '47',
'explicit_ftp_proxy': 'enable',
'explicit_web_proxy': 'enable',
'external': 'enable',
'fail_action_on_extender': 'soft-restart',
'fail_alert_method': 'link-failed-signal',
'fail_detect': 'enable',
'fail_detect_option': 'detectserver',
'fortiheartbeat': 'enable',
'fortilink': 'enable',
'fortilink_backup_link': '57',
'fortilink_split_interface': 'enable',
'fortilink_stacking': 'enable',
'forward_domain': '60',
'gwdetect': 'enable',
'ha_priority': '62',
'icmp_accept_redirect': 'enable',
'icmp_send_redirect': 'enable',
'ident_accept': 'enable',
'idle_timeout': '66',
'inbandwidth': '67',
'ingress_spillover_threshold': '68',
'interface': 'test_value_69',
'internal': '70',
'ip': 'test_value_71',
'ipmac': 'enable',
'ips_sniffer_mode': 'enable',
'ipunnumbered': 'test_value_74',
'l2forward': 'enable',
'lacp_ha_slave': 'enable',
'lacp_mode': 'static',
'lacp_speed': 'slow',
'lcp_echo_interval': '79',
'lcp_max_echo_fails': '80',
'link_up_delay': '81',
'lldp_transmission': 'enable',
'macaddr': 'test_value_83',
'management_ip': 'test_value_84',
'min_links': '85',
'min_links_down': 'operational',
'mode': 'static',
'mtu': '88',
'mtu_override': 'enable',
'name': 'default_name_90',
'ndiscforward': 'enable',
'netbios_forward': 'disable',
'netflow_sampler': 'disable',
'outbandwidth': '94',
'padt_retry_timeout': '95',
'password': 'test_value_96',
'ping_serv_status': '97',
'polling_interval': '98',
'pppoe_unnumbered_negotiate': 'enable',
'pptp_auth_type': 'auto',
'pptp_client': 'enable',
'pptp_password': 'test_value_102',
'pptp_server_ip': 'test_value_103',
'pptp_timeout': '104',
'pptp_user': 'test_value_105',
'preserve_session_route': 'enable',
'priority': '107',
'priority_override': 'enable',
'proxy_captive_portal': 'enable',
'redundant_interface': 'test_value_110',
'remote_ip': 'test_value_111',
'replacemsg_override_group': 'test_value_112',
'role': 'lan',
'sample_direction': 'tx',
'sample_rate': '115',
'scan_botnet_connections': 'disable',
'secondary_IP': 'enable',
'security_exempt_list': 'test_value_118',
'security_external_logout': 'test_value_119',
'security_external_web': 'test_value_120',
'security_mac_auth_bypass': 'enable',
'security_mode': 'none',
'security_redirect_url': 'test_value_123',
'service_name': 'test_value_124',
'sflow_sampler': 'enable',
'snmp_index': '126',
'speed': 'auto',
'spillover_threshold': '128',
'src_check': 'enable',
'status': 'up',
'stpforward': 'enable',
'stpforward_mode': 'rpl-all-ext-id',
'subst': 'enable',
'substitute_dst_mac': 'test_value_134',
'switch': 'test_value_135',
'switch_controller_access_vlan': 'enable',
'switch_controller_arp_inspection': 'enable',
'switch_controller_dhcp_snooping': 'enable',
'switch_controller_dhcp_snooping_option82': 'enable',
'switch_controller_dhcp_snooping_verify_mac': 'enable',
'switch_controller_igmp_snooping': 'enable',
'switch_controller_learning_limit': '142',
'tcp_mss': '143',
'trust_ip_1': 'test_value_144',
'trust_ip_2': 'test_value_145',
'trust_ip_3': 'test_value_146',
'trust_ip6_1': 'test_value_147',
'trust_ip6_2': 'test_value_148',
'trust_ip6_3': 'test_value_149',
'type': 'physical',
'username': 'test_value_151',
'vdom': 'test_value_152',
'vindex': '153',
'vlanforward': 'enable',
'vlanid': '155',
'vrf': '156',
'vrrp_virtual_mac': 'enable',
'wccp': 'enable',
'weight': '159',
'wins_ip': 'test_value_160'
},
'vdom': 'root'}
is_error, changed, response = fortios_system_interface.fortios_system(input_data, fos_instance)
expected_data = {
'ac-name': 'test_value_3',
'aggregate': 'test_value_4',
'algorithm': 'L2',
'alias': 'test_value_6',
'ap-discover': 'enable',
'arpforward': 'enable',
'auth-type': 'auto',
'auto-auth-extension-device': 'enable',
'bfd': 'global',
'bfd-desired-min-tx': '12',
'bfd-detect-mult': '13',
'bfd-required-min-rx': '14',
'broadcast-forticlient-discovery': 'enable',
'broadcast-forward': 'enable',
'captive-portal': '17',
'cli-conn-status': '18',
'color': '19',
'dedicated-to': 'none',
'defaultgw': 'enable',
'description': 'test_value_22',
'detected-peer-mtu': '23',
'detectprotocol': 'ping',
'detectserver': 'test_value_25',
'device-access-list': 'test_value_26',
'device-identification': 'enable',
'device-identification-active-scan': 'enable',
'device-netscan': 'disable',
'device-user-identification': 'enable',
'devindex': '31',
'dhcp-client-identifier': 'myId_32',
'dhcp-relay-agent-option': 'enable',
'dhcp-relay-ip': 'test_value_34',
'dhcp-relay-service': 'disable',
'dhcp-relay-type': 'regular',
'dhcp-renew-time': '37',
'disc-retry-timeout': '38',
'disconnect-threshold': '39',
'distance': '40',
'dns-server-override': 'enable',
'drop-fragment': 'enable',
'drop-overlapped-fragment': 'enable',
'egress-shaping-profile': 'test_value_44',
'endpoint-compliance': 'enable',
'estimated-downstream-bandwidth': '46',
'estimated-upstream-bandwidth': '47',
'explicit-ftp-proxy': 'enable',
'explicit-web-proxy': 'enable',
'external': 'enable',
'fail-action-on-extender': 'soft-restart',
'fail-alert-method': 'link-failed-signal',
'fail-detect': 'enable',
'fail-detect-option': 'detectserver',
'fortiheartbeat': 'enable',
'fortilink': 'enable',
'fortilink-backup-link': '57',
'fortilink-split-interface': 'enable',
'fortilink-stacking': 'enable',
'forward-domain': '60',
'gwdetect': 'enable',
'ha-priority': '62',
'icmp-accept-redirect': 'enable',
'icmp-send-redirect': 'enable',
'ident-accept': 'enable',
'idle-timeout': '66',
'inbandwidth': '67',
'ingress-spillover-threshold': '68',
'interface': 'test_value_69',
'internal': '70',
'ip': 'test_value_71',
'ipmac': 'enable',
'ips-sniffer-mode': 'enable',
'ipunnumbered': 'test_value_74',
'l2forward': 'enable',
'lacp-ha-slave': 'enable',
'lacp-mode': 'static',
'lacp-speed': 'slow',
'lcp-echo-interval': '79',
'lcp-max-echo-fails': '80',
'link-up-delay': '81',
'lldp-transmission': 'enable',
'macaddr': 'test_value_83',
'management-ip': 'test_value_84',
'min-links': '85',
'min-links-down': 'operational',
'mode': 'static',
'mtu': '88',
'mtu-override': 'enable',
'name': 'default_name_90',
'ndiscforward': 'enable',
'netbios-forward': 'disable',
'netflow-sampler': 'disable',
'outbandwidth': '94',
'padt-retry-timeout': '95',
'password': 'test_value_96',
'ping-serv-status': '97',
'polling-interval': '98',
'pppoe-unnumbered-negotiate': 'enable',
'pptp-auth-type': 'auto',
'pptp-client': 'enable',
'pptp-password': 'test_value_102',
'pptp-server-ip': 'test_value_103',
'pptp-timeout': '104',
'pptp-user': 'test_value_105',
'preserve-session-route': 'enable',
'priority': '107',
'priority-override': 'enable',
'proxy-captive-portal': 'enable',
'redundant-interface': 'test_value_110',
'remote-ip': 'test_value_111',
'replacemsg-override-group': 'test_value_112',
'role': 'lan',
'sample-direction': 'tx',
'sample-rate': '115',
'scan-botnet-connections': 'disable',
'secondary-IP': 'enable',
'security-exempt-list': 'test_value_118',
'security-external-logout': 'test_value_119',
'security-external-web': 'test_value_120',
'security-mac-auth-bypass': 'enable',
'security-mode': 'none',
'security-redirect-url': 'test_value_123',
'service-name': 'test_value_124',
'sflow-sampler': 'enable',
'snmp-index': '126',
'speed': 'auto',
'spillover-threshold': '128',
'src-check': 'enable',
'status': 'up',
'stpforward': 'enable',
'stpforward-mode': 'rpl-all-ext-id',
'subst': 'enable',
'substitute-dst-mac': 'test_value_134',
'switch': 'test_value_135',
'switch-controller-access-vlan': 'enable',
'switch-controller-arp-inspection': 'enable',
'switch-controller-dhcp-snooping': 'enable',
'switch-controller-dhcp-snooping-option82': 'enable',
'switch-controller-dhcp-snooping-verify-mac': 'enable',
'switch-controller-igmp-snooping': 'enable',
'switch-controller-learning-limit': '142',
'tcp-mss': '143',
'trust-ip-1': 'test_value_144',
'trust-ip-2': 'test_value_145',
'trust-ip-3': 'test_value_146',
'trust-ip6-1': 'test_value_147',
'trust-ip6-2': 'test_value_148',
'trust-ip6-3': 'test_value_149',
'type': 'physical',
'username': 'test_value_151',
'vdom': 'test_value_152',
'vindex': '153',
'vlanforward': 'enable',
'vlanid': '155',
'vrf': '156',
'vrrp-virtual-mac': 'enable',
'wccp': 'enable',
'weight': '159',
'wins-ip': 'test_value_160'
}
set_method_mock.assert_called_with('system', 'interface', data=expected_data, vdom='root')
schema_method_mock.assert_not_called()
assert not is_error
assert changed
assert response['status'] == 'success'
assert response['http_status'] == 200
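# The creation test above also documents the key translation performed by the
# module: the Ansible argument names in input_data use underscores ('ac_name',
# 'dhcp_relay_ip', ...) while the payload sent to FortiOS uses hyphens
# ('ac-name', 'dhcp-relay-ip', ...). A minimal sketch of that conversion is
# shown below for reference; the helper actually used inside
# fortios_system_interface may differ in name and detail.
def _example_underscore_to_hyphen(data):
    """Recursively replace '_' with '-' in dictionary keys (illustration only)."""
    if isinstance(data, list):
        return [_example_underscore_to_hyphen(elem) for elem in data]
    if isinstance(data, dict):
        return dict((key.replace('_', '-'), _example_underscore_to_hyphen(value))
                    for key, value in data.items())
    return data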
def test_system_interface_creation_fails(mocker):
schema_method_mock = mocker.patch('ansible.module_utils.network.fortios.fortios.FortiOSHandler.schema')
set_method_result = {'status': 'error', 'http_method': 'POST', 'http_status': 500}
set_method_mock = mocker.patch('ansible.module_utils.network.fortios.fortios.FortiOSHandler.set', return_value=set_method_result)
input_data = {
'username': 'admin',
'state': 'present',
'system_interface': {
'ac_name': 'test_value_3',
'aggregate': 'test_value_4',
'algorithm': 'L2',
'alias': 'test_value_6',
'ap_discover': 'enable',
'arpforward': 'enable',
'auth_type': 'auto',
'auto_auth_extension_device': 'enable',
'bfd': 'global',
'bfd_desired_min_tx': '12',
'bfd_detect_mult': '13',
'bfd_required_min_rx': '14',
'broadcast_forticlient_discovery': 'enable',
'broadcast_forward': 'enable',
'captive_portal': '17',
'cli_conn_status': '18',
'color': '19',
'dedicated_to': 'none',
'defaultgw': 'enable',
'description': 'test_value_22',
'detected_peer_mtu': '23',
'detectprotocol': 'ping',
'detectserver': 'test_value_25',
'device_access_list': 'test_value_26',
'device_identification': 'enable',
'device_identification_active_scan': 'enable',
'device_netscan': 'disable',
'device_user_identification': 'enable',
'devindex': '31',
'dhcp_client_identifier': 'myId_32',
'dhcp_relay_agent_option': 'enable',
'dhcp_relay_ip': 'test_value_34',
'dhcp_relay_service': 'disable',
'dhcp_relay_type': 'regular',
'dhcp_renew_time': '37',
'disc_retry_timeout': '38',
'disconnect_threshold': '39',
'distance': '40',
'dns_server_override': 'enable',
'drop_fragment': 'enable',
'drop_overlapped_fragment': 'enable',
'egress_shaping_profile': 'test_value_44',
'endpoint_compliance': 'enable',
'estimated_downstream_bandwidth': '46',
'estimated_upstream_bandwidth': '47',
'explicit_ftp_proxy': 'enable',
'explicit_web_proxy': 'enable',
'external': 'enable',
'fail_action_on_extender': 'soft-restart',
'fail_alert_method': 'link-failed-signal',
'fail_detect': 'enable',
'fail_detect_option': 'detectserver',
'fortiheartbeat': 'enable',
'fortilink': 'enable',
'fortilink_backup_link': '57',
'fortilink_split_interface': 'enable',
'fortilink_stacking': 'enable',
'forward_domain': '60',
'gwdetect': 'enable',
'ha_priority': '62',
'icmp_accept_redirect': 'enable',
'icmp_send_redirect': 'enable',
'ident_accept': 'enable',
'idle_timeout': '66',
'inbandwidth': '67',
'ingress_spillover_threshold': '68',
'interface': 'test_value_69',
'internal': '70',
'ip': 'test_value_71',
'ipmac': 'enable',
'ips_sniffer_mode': 'enable',
'ipunnumbered': 'test_value_74',
'l2forward': 'enable',
'lacp_ha_slave': 'enable',
'lacp_mode': 'static',
'lacp_speed': 'slow',
'lcp_echo_interval': '79',
'lcp_max_echo_fails': '80',
'link_up_delay': '81',
'lldp_transmission': 'enable',
'macaddr': 'test_value_83',
'management_ip': 'test_value_84',
'min_links': '85',
'min_links_down': 'operational',
'mode': 'static',
'mtu': '88',
'mtu_override': 'enable',
'name': 'default_name_90',
'ndiscforward': 'enable',
'netbios_forward': 'disable',
'netflow_sampler': 'disable',
'outbandwidth': '94',
'padt_retry_timeout': '95',
'password': 'test_value_96',
'ping_serv_status': '97',
'polling_interval': '98',
'pppoe_unnumbered_negotiate': 'enable',
'pptp_auth_type': 'auto',
'pptp_client': 'enable',
'pptp_password': 'test_value_102',
'pptp_server_ip': 'test_value_103',
'pptp_timeout': '104',
'pptp_user': 'test_value_105',
'preserve_session_route': 'enable',
'priority': '107',
'priority_override': 'enable',
'proxy_captive_portal': 'enable',
'redundant_interface': 'test_value_110',
'remote_ip': 'test_value_111',
'replacemsg_override_group': 'test_value_112',
'role': 'lan',
'sample_direction': 'tx',
'sample_rate': '115',
'scan_botnet_connections': 'disable',
'secondary_IP': 'enable',
'security_exempt_list': 'test_value_118',
'security_external_logout': 'test_value_119',
'security_external_web': 'test_value_120',
'security_mac_auth_bypass': 'enable',
'security_mode': 'none',
'security_redirect_url': 'test_value_123',
'service_name': 'test_value_124',
'sflow_sampler': 'enable',
'snmp_index': '126',
'speed': 'auto',
'spillover_threshold': '128',
'src_check': 'enable',
'status': 'up',
'stpforward': 'enable',
'stpforward_mode': 'rpl-all-ext-id',
'subst': 'enable',
'substitute_dst_mac': 'test_value_134',
'switch': 'test_value_135',
'switch_controller_access_vlan': 'enable',
'switch_controller_arp_inspection': 'enable',
'switch_controller_dhcp_snooping': 'enable',
'switch_controller_dhcp_snooping_option82': 'enable',
'switch_controller_dhcp_snooping_verify_mac': 'enable',
'switch_controller_igmp_snooping': 'enable',
'switch_controller_learning_limit': '142',
'tcp_mss': '143',
'trust_ip_1': 'test_value_144',
'trust_ip_2': 'test_value_145',
'trust_ip_3': 'test_value_146',
'trust_ip6_1': 'test_value_147',
'trust_ip6_2': 'test_value_148',
'trust_ip6_3': 'test_value_149',
'type': 'physical',
'username': 'test_value_151',
'vdom': 'test_value_152',
'vindex': '153',
'vlanforward': 'enable',
'vlanid': '155',
'vrf': '156',
'vrrp_virtual_mac': 'enable',
'wccp': 'enable',
'weight': '159',
'wins_ip': 'test_value_160'
},
'vdom': 'root'}
is_error, changed, response = fortios_system_interface.fortios_system(input_data, fos_instance)
expected_data = {
'ac-name': 'test_value_3',
'aggregate': 'test_value_4',
'algorithm': 'L2',
'alias': 'test_value_6',
'ap-discover': 'enable',
'arpforward': 'enable',
'auth-type': 'auto',
'auto-auth-extension-device': 'enable',
'bfd': 'global',
'bfd-desired-min-tx': '12',
'bfd-detect-mult': '13',
'bfd-required-min-rx': '14',
'broadcast-forticlient-discovery': 'enable',
'broadcast-forward': 'enable',
'captive-portal': '17',
'cli-conn-status': '18',
'color': '19',
'dedicated-to': 'none',
'defaultgw': 'enable',
'description': 'test_value_22',
'detected-peer-mtu': '23',
'detectprotocol': 'ping',
'detectserver': 'test_value_25',
'device-access-list': 'test_value_26',
'device-identification': 'enable',
'device-identification-active-scan': 'enable',
'device-netscan': 'disable',
'device-user-identification': 'enable',
'devindex': '31',
'dhcp-client-identifier': 'myId_32',
'dhcp-relay-agent-option': 'enable',
'dhcp-relay-ip': 'test_value_34',
'dhcp-relay-service': 'disable',
'dhcp-relay-type': 'regular',
'dhcp-renew-time': '37',
'disc-retry-timeout': '38',
'disconnect-threshold': '39',
'distance': '40',
'dns-server-override': 'enable',
'drop-fragment': 'enable',
'drop-overlapped-fragment': 'enable',
'egress-shaping-profile': 'test_value_44',
'endpoint-compliance': 'enable',
'estimated-downstream-bandwidth': '46',
'estimated-upstream-bandwidth': '47',
'explicit-ftp-proxy': 'enable',
'explicit-web-proxy': 'enable',
'external': 'enable',
'fail-action-on-extender': 'soft-restart',
'fail-alert-method': 'link-failed-signal',
'fail-detect': 'enable',
'fail-detect-option': 'detectserver',
'fortiheartbeat': 'enable',
'fortilink': 'enable',
'fortilink-backup-link': '57',
'fortilink-split-interface': 'enable',
'fortilink-stacking': 'enable',
'forward-domain': '60',
'gwdetect': 'enable',
'ha-priority': '62',
'icmp-accept-redirect': 'enable',
'icmp-send-redirect': 'enable',
'ident-accept': 'enable',
'idle-timeout': '66',
'inbandwidth': '67',
'ingress-spillover-threshold': '68',
'interface': 'test_value_69',
'internal': '70',
'ip': 'test_value_71',
'ipmac': 'enable',
'ips-sniffer-mode': 'enable',
'ipunnumbered': 'test_value_74',
'l2forward': 'enable',
'lacp-ha-slave': 'enable',
'lacp-mode': 'static',
'lacp-speed': 'slow',
'lcp-echo-interval': '79',
'lcp-max-echo-fails': '80',
'link-up-delay': '81',
'lldp-transmission': 'enable',
'macaddr': 'test_value_83',
'management-ip': 'test_value_84',
'min-links': '85',
'min-links-down': 'operational',
'mode': 'static',
'mtu': '88',
'mtu-override': 'enable',
'name': 'default_name_90',
'ndiscforward': 'enable',
'netbios-forward': 'disable',
'netflow-sampler': 'disable',
'outbandwidth': '94',
'padt-retry-timeout': '95',
'password': 'test_value_96',
'ping-serv-status': '97',
'polling-interval': '98',
'pppoe-unnumbered-negotiate': 'enable',
'pptp-auth-type': 'auto',
'pptp-client': 'enable',
'pptp-password': 'test_value_102',
'pptp-server-ip': 'test_value_103',
'pptp-timeout': '104',
'pptp-user': 'test_value_105',
'preserve-session-route': 'enable',
'priority': '107',
'priority-override': 'enable',
'proxy-captive-portal': 'enable',
'redundant-interface': 'test_value_110',
'remote-ip': 'test_value_111',
'replacemsg-override-group': 'test_value_112',
'role': 'lan',
'sample-direction': 'tx',
'sample-rate': '115',
'scan-botnet-connections': 'disable',
'secondary-IP': 'enable',
'security-exempt-list': 'test_value_118',
'security-external-logout': 'test_value_119',
'security-external-web': 'test_value_120',
'security-mac-auth-bypass': 'enable',
'security-mode': 'none',
'security-redirect-url': 'test_value_123',
'service-name': 'test_value_124',
'sflow-sampler': 'enable',
'snmp-index': '126',
'speed': 'auto',
'spillover-threshold': '128',
'src-check': 'enable',
'status': 'up',
'stpforward': 'enable',
'stpforward-mode': 'rpl-all-ext-id',
'subst': 'enable',
'substitute-dst-mac': 'test_value_134',
'switch': 'test_value_135',
'switch-controller-access-vlan': 'enable',
'switch-controller-arp-inspection': 'enable',
'switch-controller-dhcp-snooping': 'enable',
'switch-controller-dhcp-snooping-option82': 'enable',
'switch-controller-dhcp-snooping-verify-mac': 'enable',
'switch-controller-igmp-snooping': 'enable',
'switch-controller-learning-limit': '142',
'tcp-mss': '143',
'trust-ip-1': 'test_value_144',
'trust-ip-2': 'test_value_145',
'trust-ip-3': 'test_value_146',
'trust-ip6-1': 'test_value_147',
'trust-ip6-2': 'test_value_148',
'trust-ip6-3': 'test_value_149',
'type': 'physical',
'username': 'test_value_151',
'vdom': 'test_value_152',
'vindex': '153',
'vlanforward': 'enable',
'vlanid': '155',
'vrf': '156',
'vrrp-virtual-mac': 'enable',
'wccp': 'enable',
'weight': '159',
'wins-ip': 'test_value_160'
}
set_method_mock.assert_called_with('system', 'interface', data=expected_data, vdom='root')
schema_method_mock.assert_not_called()
assert is_error
assert not changed
assert response['status'] == 'error'
assert response['http_status'] == 500
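# The failure path mirrors the success path: the dictionary returned by the
# mocked set() call is what fortios_system() reports back, so a 'status' of
# 'error' with http_status 500 is expected to surface as is_error=True and
# changed=False (interpretation based on the assertions above).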
def test_system_interface_removal(mocker):
schema_method_mock = mocker.patch('ansible.module_utils.network.fortios.fortios.FortiOSHandler.schema')
delete_method_result = {'status': 'success', 'http_method': 'POST', 'http_status': 200}
delete_method_mock = mocker.patch('ansible.module_utils.network.fortios.fortios.FortiOSHandler.delete', return_value=delete_method_result)
input_data = {
'username': 'admin',
'state': 'absent',
'system_interface': {
'ac_name': 'test_value_3',
'aggregate': 'test_value_4',
'algorithm': 'L2',
'alias': 'test_value_6',
'ap_discover': 'enable',
'arpforward': 'enable',
'auth_type': 'auto',
'auto_auth_extension_device': 'enable',
'bfd': 'global',
'bfd_desired_min_tx': '12',
'bfd_detect_mult': '13',
'bfd_required_min_rx': '14',
'broadcast_forticlient_discovery': 'enable',
'broadcast_forward': 'enable',
'captive_portal': '17',
'cli_conn_status': '18',
'color': '19',
'dedicated_to': 'none',
'defaultgw': 'enable',
'description': 'test_value_22',
'detected_peer_mtu': '23',
'detectprotocol': 'ping',
'detectserver': 'test_value_25',
'device_access_list': 'test_value_26',
'device_identification': 'enable',
'device_identification_active_scan': 'enable',
'device_netscan': 'disable',
'device_user_identification': 'enable',
'devindex': '31',
'dhcp_client_identifier': 'myId_32',
'dhcp_relay_agent_option': 'enable',
'dhcp_relay_ip': 'test_value_34',
'dhcp_relay_service': 'disable',
'dhcp_relay_type': 'regular',
'dhcp_renew_time': '37',
'disc_retry_timeout': '38',
'disconnect_threshold': '39',
'distance': '40',
'dns_server_override': 'enable',
'drop_fragment': 'enable',
'drop_overlapped_fragment': 'enable',
'egress_shaping_profile': 'test_value_44',
'endpoint_compliance': 'enable',
'estimated_downstream_bandwidth': '46',
'estimated_upstream_bandwidth': '47',
'explicit_ftp_proxy': 'enable',
'explicit_web_proxy': 'enable',
'external': 'enable',
'fail_action_on_extender': 'soft-restart',
'fail_alert_method': 'link-failed-signal',
'fail_detect': 'enable',
'fail_detect_option': 'detectserver',
'fortiheartbeat': 'enable',
'fortilink': 'enable',
'fortilink_backup_link': '57',
'fortilink_split_interface': 'enable',
'fortilink_stacking': 'enable',
'forward_domain': '60',
'gwdetect': 'enable',
'ha_priority': '62',
'icmp_accept_redirect': 'enable',
'icmp_send_redirect': 'enable',
'ident_accept': 'enable',
'idle_timeout': '66',
'inbandwidth': '67',
'ingress_spillover_threshold': '68',
'interface': 'test_value_69',
'internal': '70',
'ip': 'test_value_71',
'ipmac': 'enable',
'ips_sniffer_mode': 'enable',
'ipunnumbered': 'test_value_74',
'l2forward': 'enable',
'lacp_ha_slave': 'enable',
'lacp_mode': 'static',
'lacp_speed': 'slow',
'lcp_echo_interval': '79',
'lcp_max_echo_fails': '80',
'link_up_delay': '81',
'lldp_transmission': 'enable',
'macaddr': 'test_value_83',
'management_ip': 'test_value_84',
'min_links': '85',
'min_links_down': 'operational',
'mode': 'static',
'mtu': '88',
'mtu_override': 'enable',
'name': 'default_name_90',
'ndiscforward': 'enable',
'netbios_forward': 'disable',
'netflow_sampler': 'disable',
'outbandwidth': '94',
'padt_retry_timeout': '95',
'password': 'test_value_96',
'ping_serv_status': '97',
'polling_interval': '98',
'pppoe_unnumbered_negotiate': 'enable',
'pptp_auth_type': 'auto',
'pptp_client': 'enable',
'pptp_password': 'test_value_102',
'pptp_server_ip': 'test_value_103',
'pptp_timeout': '104',
'pptp_user': 'test_value_105',
'preserve_session_route': 'enable',
'priority': '107',
'priority_override': 'enable',
'proxy_captive_portal': 'enable',
'redundant_interface': 'test_value_110',
'remote_ip': 'test_value_111',
'replacemsg_override_group': 'test_value_112',
'role': 'lan',
'sample_direction': 'tx',
'sample_rate': '115',
'scan_botnet_connections': 'disable',
'secondary_IP': 'enable',
'security_exempt_list': 'test_value_118',
'security_external_logout': 'test_value_119',
'security_external_web': 'test_value_120',
'security_mac_auth_bypass': 'enable',
'security_mode': 'none',
'security_redirect_url': 'test_value_123',
'service_name': 'test_value_124',
'sflow_sampler': 'enable',
'snmp_index': '126',
'speed': 'auto',
'spillover_threshold': '128',
'src_check': 'enable',
'status': 'up',
'stpforward': 'enable',
'stpforward_mode': 'rpl-all-ext-id',
'subst': 'enable',
'substitute_dst_mac': 'test_value_134',
'switch': 'test_value_135',
'switch_controller_access_vlan': 'enable',
'switch_controller_arp_inspection': 'enable',
'switch_controller_dhcp_snooping': 'enable',
'switch_controller_dhcp_snooping_option82': 'enable',
'switch_controller_dhcp_snooping_verify_mac': 'enable',
'switch_controller_igmp_snooping': 'enable',
'switch_controller_learning_limit': '142',
'tcp_mss': '143',
'trust_ip_1': 'test_value_144',
'trust_ip_2': 'test_value_145',
'trust_ip_3': 'test_value_146',
'trust_ip6_1': 'test_value_147',
'trust_ip6_2': 'test_value_148',
'trust_ip6_3': 'test_value_149',
'type': 'physical',
'username': 'test_value_151',
'vdom': 'test_value_152',
'vindex': '153',
'vlanforward': 'enable',
'vlanid': '155',
'vrf': '156',
'vrrp_virtual_mac': 'enable',
'wccp': 'enable',
'weight': '159',
'wins_ip': 'test_value_160'
},
'vdom': 'root'}
is_error, changed, response = fortios_system_interface.fortios_system(input_data, fos_instance)
delete_method_mock.assert_called_with('system', 'interface', mkey=ANY, vdom='root')
schema_method_mock.assert_not_called()
assert not is_error
assert changed
assert response['status'] == 'success'
assert response['http_status'] == 200
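# mkey=ANY is used because the object key for the delete call is derived inside
# the module (for system/interface this is presumably the interface 'name');
# the test only pins down the endpoint ('system', 'interface') and the vdom.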
def test_system_interface_deletion_fails(mocker):
schema_method_mock = mocker.patch('ansible.module_utils.network.fortios.fortios.FortiOSHandler.schema')
delete_method_result = {'status': 'error', 'http_method': 'POST', 'http_status': 500}
delete_method_mock = mocker.patch('ansible.module_utils.network.fortios.fortios.FortiOSHandler.delete', return_value=delete_method_result)
input_data = {
'username': 'admin',
'state': 'absent',
'system_interface': {
'ac_name': 'test_value_3',
'aggregate': 'test_value_4',
'algorithm': 'L2',
'alias': 'test_value_6',
'ap_discover': 'enable',
'arpforward': 'enable',
'auth_type': 'auto',
'auto_auth_extension_device': 'enable',
'bfd': 'global',
'bfd_desired_min_tx': '12',
'bfd_detect_mult': '13',
'bfd_required_min_rx': '14',
'broadcast_forticlient_discovery': 'enable',
'broadcast_forward': 'enable',
'captive_portal': '17',
'cli_conn_status': '18',
'color': '19',
'dedicated_to': 'none',
'defaultgw': 'enable',
'description': 'test_value_22',
'detected_peer_mtu': '23',
'detectprotocol': 'ping',
'detectserver': 'test_value_25',
'device_access_list': 'test_value_26',
'device_identification': 'enable',
'device_identification_active_scan': 'enable',
'device_netscan': 'disable',
'device_user_identification': 'enable',
'devindex': '31',
'dhcp_client_identifier': 'myId_32',
'dhcp_relay_agent_option': 'enable',
'dhcp_relay_ip': 'test_value_34',
'dhcp_relay_service': 'disable',
'dhcp_relay_type': 'regular',
'dhcp_renew_time': '37',
'disc_retry_timeout': '38',
'disconnect_threshold': '39',
'distance': '40',
'dns_server_override': 'enable',
'drop_fragment': 'enable',
'drop_overlapped_fragment': 'enable',
'egress_shaping_profile': 'test_value_44',
'endpoint_compliance': 'enable',
'estimated_downstream_bandwidth': '46',
'estimated_upstream_bandwidth': '47',
'explicit_ftp_proxy': 'enable',
'explicit_web_proxy': 'enable',
'external': 'enable',
'fail_action_on_extender': 'soft-restart',
'fail_alert_method': 'link-failed-signal',
'fail_detect': 'enable',
'fail_detect_option': 'detectserver',
'fortiheartbeat': 'enable',
'fortilink': 'enable',
'fortilink_backup_link': '57',
'fortilink_split_interface': 'enable',
'fortilink_stacking': 'enable',
'forward_domain': '60',
'gwdetect': 'enable',
'ha_priority': '62',
'icmp_accept_redirect': 'enable',
'icmp_send_redirect': 'enable',
'ident_accept': 'enable',
'idle_timeout': '66',
'inbandwidth': '67',
'ingress_spillover_threshold': '68',
'interface': 'test_value_69',
'internal': '70',
'ip': 'test_value_71',
'ipmac': 'enable',
'ips_sniffer_mode': 'enable',
'ipunnumbered': 'test_value_74',
'l2forward': 'enable',
'lacp_ha_slave': 'enable',
'lacp_mode': 'static',
'lacp_speed': 'slow',
'lcp_echo_interval': '79',
'lcp_max_echo_fails': '80',
'link_up_delay': '81',
'lldp_transmission': 'enable',
'macaddr': 'test_value_83',
'management_ip': 'test_value_84',
'min_links': '85',
'min_links_down': 'operational',
'mode': 'static',
'mtu': '88',
'mtu_override': 'enable',
'name': 'default_name_90',
'ndiscforward': 'enable',
'netbios_forward': 'disable',
'netflow_sampler': 'disable',
'outbandwidth': '94',
'padt_retry_timeout': '95',
'password': 'test_value_96',
'ping_serv_status': '97',
'polling_interval': '98',
'pppoe_unnumbered_negotiate': 'enable',
'pptp_auth_type': 'auto',
'pptp_client': 'enable',
'pptp_password': 'test_value_102',
'pptp_server_ip': 'test_value_103',
'pptp_timeout': '104',
'pptp_user': 'test_value_105',
'preserve_session_route': 'enable',
'priority': '107',
'priority_override': 'enable',
'proxy_captive_portal': 'enable',
'redundant_interface': 'test_value_110',
'remote_ip': 'test_value_111',
'replacemsg_override_group': 'test_value_112',
'role': 'lan',
'sample_direction': 'tx',
'sample_rate': '115',
'scan_botnet_connections': 'disable',
'secondary_IP': 'enable',
'security_exempt_list': 'test_value_118',
'security_external_logout': 'test_value_119',
'security_external_web': 'test_value_120',
'security_mac_auth_bypass': 'enable',
'security_mode': 'none',
'security_redirect_url': 'test_value_123',
'service_name': 'test_value_124',
'sflow_sampler': 'enable',
'snmp_index': '126',
'speed': 'auto',
'spillover_threshold': '128',
'src_check': 'enable',
'status': 'up',
'stpforward': 'enable',
'stpforward_mode': 'rpl-all-ext-id',
'subst': 'enable',
'substitute_dst_mac': 'test_value_134',
'switch': 'test_value_135',
'switch_controller_access_vlan': 'enable',
'switch_controller_arp_inspection': 'enable',
'switch_controller_dhcp_snooping': 'enable',
'switch_controller_dhcp_snooping_option82': 'enable',
'switch_controller_dhcp_snooping_verify_mac': 'enable',
'switch_controller_igmp_snooping': 'enable',
'switch_controller_learning_limit': '142',
'tcp_mss': '143',
'trust_ip_1': 'test_value_144',
'trust_ip_2': 'test_value_145',
'trust_ip_3': 'test_value_146',
'trust_ip6_1': 'test_value_147',
'trust_ip6_2': 'test_value_148',
'trust_ip6_3': 'test_value_149',
'type': 'physical',
'username': 'test_value_151',
'vdom': 'test_value_152',
'vindex': '153',
'vlanforward': 'enable',
'vlanid': '155',
'vrf': '156',
'vrrp_virtual_mac': 'enable',
'wccp': 'enable',
'weight': '159',
'wins_ip': 'test_value_160'
},
'vdom': 'root'}
is_error, changed, response = fortios_system_interface.fortios_system(input_data, fos_instance)
delete_method_mock.assert_called_with('system', 'interface', mkey=ANY, vdom='root')
schema_method_mock.assert_not_called()
assert is_error
assert not changed
assert response['status'] == 'error'
assert response['http_status'] == 500
def test_system_interface_idempotent(mocker):
schema_method_mock = mocker.patch('ansible.module_utils.network.fortios.fortios.FortiOSHandler.schema')
set_method_result = {'status': 'error', 'http_method': 'DELETE', 'http_status': 404}
set_method_mock = mocker.patch('ansible.module_utils.network.fortios.fortios.FortiOSHandler.set', return_value=set_method_result)
input_data = {
'username': 'admin',
'state': 'present',
'system_interface': {
'ac_name': 'test_value_3',
'aggregate': 'test_value_4',
'algorithm': 'L2',
'alias': 'test_value_6',
'ap_discover': 'enable',
'arpforward': 'enable',
'auth_type': 'auto',
'auto_auth_extension_device': 'enable',
'bfd': 'global',
'bfd_desired_min_tx': '12',
'bfd_detect_mult': '13',
'bfd_required_min_rx': '14',
'broadcast_forticlient_discovery': 'enable',
'broadcast_forward': 'enable',
'captive_portal': '17',
'cli_conn_status': '18',
'color': '19',
'dedicated_to': 'none',
'defaultgw': 'enable',
'description': 'test_value_22',
'detected_peer_mtu': '23',
'detectprotocol': 'ping',
'detectserver': 'test_value_25',
'device_access_list': 'test_value_26',
'device_identification': 'enable',
'device_identification_active_scan': 'enable',
'device_netscan': 'disable',
'device_user_identification': 'enable',
'devindex': '31',
'dhcp_client_identifier': 'myId_32',
'dhcp_relay_agent_option': 'enable',
'dhcp_relay_ip': 'test_value_34',
'dhcp_relay_service': 'disable',
'dhcp_relay_type': 'regular',
'dhcp_renew_time': '37',
'disc_retry_timeout': '38',
'disconnect_threshold': '39',
'distance': '40',
'dns_server_override': 'enable',
'drop_fragment': 'enable',
'drop_overlapped_fragment': 'enable',
'egress_shaping_profile': 'test_value_44',
'endpoint_compliance': 'enable',
'estimated_downstream_bandwidth': '46',
'estimated_upstream_bandwidth': '47',
'explicit_ftp_proxy': 'enable',
'explicit_web_proxy': 'enable',
'external': 'enable',
'fail_action_on_extender': 'soft-restart',
'fail_alert_method': 'link-failed-signal',
'fail_detect': 'enable',
'fail_detect_option': 'detectserver',
'fortiheartbeat': 'enable',
'fortilink': 'enable',
'fortilink_backup_link': '57',
'fortilink_split_interface': 'enable',
'fortilink_stacking': 'enable',
'forward_domain': '60',
'gwdetect': 'enable',
'ha_priority': '62',
'icmp_accept_redirect': 'enable',
'icmp_send_redirect': 'enable',
'ident_accept': 'enable',
'idle_timeout': '66',
'inbandwidth': '67',
'ingress_spillover_threshold': '68',
'interface': 'test_value_69',
'internal': '70',
'ip': 'test_value_71',
'ipmac': 'enable',
'ips_sniffer_mode': 'enable',
'ipunnumbered': 'test_value_74',
'l2forward': 'enable',
'lacp_ha_slave': 'enable',
'lacp_mode': 'static',
'lacp_speed': 'slow',
'lcp_echo_interval': '79',
'lcp_max_echo_fails': '80',
'link_up_delay': '81',
'lldp_transmission': 'enable',
'macaddr': 'test_value_83',
'management_ip': 'test_value_84',
'min_links': '85',
'min_links_down': 'operational',
'mode': 'static',
'mtu': '88',
'mtu_override': 'enable',
'name': 'default_name_90',
'ndiscforward': 'enable',
'netbios_forward': 'disable',
'netflow_sampler': 'disable',
'outbandwidth': '94',
'padt_retry_timeout': '95',
'password': 'test_value_96',
'ping_serv_status': '97',
'polling_interval': '98',
'pppoe_unnumbered_negotiate': 'enable',
'pptp_auth_type': 'auto',
'pptp_client': 'enable',
'pptp_password': 'test_value_102',
'pptp_server_ip': 'test_value_103',
'pptp_timeout': '104',
'pptp_user': 'test_value_105',
'preserve_session_route': 'enable',
'priority': '107',
'priority_override': 'enable',
'proxy_captive_portal': 'enable',
'redundant_interface': 'test_value_110',
'remote_ip': 'test_value_111',
'replacemsg_override_group': 'test_value_112',
'role': 'lan',
'sample_direction': 'tx',
'sample_rate': '115',
'scan_botnet_connections': 'disable',
'secondary_IP': 'enable',
'security_exempt_list': 'test_value_118',
'security_external_logout': 'test_value_119',
'security_external_web': 'test_value_120',
'security_mac_auth_bypass': 'enable',
'security_mode': 'none',
'security_redirect_url': 'test_value_123',
'service_name': 'test_value_124',
'sflow_sampler': 'enable',
'snmp_index': '126',
'speed': 'auto',
'spillover_threshold': '128',
'src_check': 'enable',
'status': 'up',
'stpforward': 'enable',
'stpforward_mode': 'rpl-all-ext-id',
'subst': 'enable',
'substitute_dst_mac': 'test_value_134',
'switch': 'test_value_135',
'switch_controller_access_vlan': 'enable',
'switch_controller_arp_inspection': 'enable',
'switch_controller_dhcp_snooping': 'enable',
'switch_controller_dhcp_snooping_option82': 'enable',
'switch_controller_dhcp_snooping_verify_mac': 'enable',
'switch_controller_igmp_snooping': 'enable',
'switch_controller_learning_limit': '142',
'tcp_mss': '143',
'trust_ip_1': 'test_value_144',
'trust_ip_2': 'test_value_145',
'trust_ip_3': 'test_value_146',
'trust_ip6_1': 'test_value_147',
'trust_ip6_2': 'test_value_148',
'trust_ip6_3': 'test_value_149',
'type': 'physical',
'username': 'test_value_151',
'vdom': 'test_value_152',
'vindex': '153',
'vlanforward': 'enable',
'vlanid': '155',
'vrf': '156',
'vrrp_virtual_mac': 'enable',
'wccp': 'enable',
'weight': '159',
'wins_ip': 'test_value_160'
},
'vdom': 'root'}
is_error, changed, response = fortios_system_interface.fortios_system(input_data, fos_instance)
expected_data = {
'ac-name': 'test_value_3',
'aggregate': 'test_value_4',
'algorithm': 'L2',
'alias': 'test_value_6',
'ap-discover': 'enable',
'arpforward': 'enable',
'auth-type': 'auto',
'auto-auth-extension-device': 'enable',
'bfd': 'global',
'bfd-desired-min-tx': '12',
'bfd-detect-mult': '13',
'bfd-required-min-rx': '14',
'broadcast-forticlient-discovery': 'enable',
'broadcast-forward': 'enable',
'captive-portal': '17',
'cli-conn-status': '18',
'color': '19',
'dedicated-to': 'none',
'defaultgw': 'enable',
'description': 'test_value_22',
'detected-peer-mtu': '23',
'detectprotocol': 'ping',
'detectserver': 'test_value_25',
'device-access-list': 'test_value_26',
'device-identification': 'enable',
'device-identification-active-scan': 'enable',
'device-netscan': 'disable',
'device-user-identification': 'enable',
'devindex': '31',
'dhcp-client-identifier': 'myId_32',
'dhcp-relay-agent-option': 'enable',
'dhcp-relay-ip': 'test_value_34',
'dhcp-relay-service': 'disable',
'dhcp-relay-type': 'regular',
'dhcp-renew-time': '37',
'disc-retry-timeout': '38',
'disconnect-threshold': '39',
'distance': '40',
'dns-server-override': 'enable',
'drop-fragment': 'enable',
'drop-overlapped-fragment': 'enable',
'egress-shaping-profile': 'test_value_44',
'endpoint-compliance': 'enable',
'estimated-downstream-bandwidth': '46',
'estimated-upstream-bandwidth': '47',
'explicit-ftp-proxy': 'enable',
'explicit-web-proxy': 'enable',
'external': 'enable',
'fail-action-on-extender': 'soft-restart',
'fail-alert-method': 'link-failed-signal',
'fail-detect': 'enable',
'fail-detect-option': 'detectserver',
'fortiheartbeat': 'enable',
'fortilink': 'enable',
'fortilink-backup-link': '57',
'fortilink-split-interface': 'enable',
'fortilink-stacking': 'enable',
'forward-domain': '60',
'gwdetect': 'enable',
'ha-priority': '62',
'icmp-accept-redirect': 'enable',
'icmp-send-redirect': 'enable',
'ident-accept': 'enable',
'idle-timeout': '66',
'inbandwidth': '67',
'ingress-spillover-threshold': '68',
'interface': 'test_value_69',
'internal': '70',
'ip': 'test_value_71',
'ipmac': 'enable',
'ips-sniffer-mode': 'enable',
'ipunnumbered': 'test_value_74',
'l2forward': 'enable',
'lacp-ha-slave': 'enable',
'lacp-mode': 'static',
'lacp-speed': 'slow',
'lcp-echo-interval': '79',
'lcp-max-echo-fails': '80',
'link-up-delay': '81',
'lldp-transmission': 'enable',
'macaddr': 'test_value_83',
'management-ip': 'test_value_84',
'min-links': '85',
'min-links-down': 'operational',
'mode': 'static',
'mtu': '88',
'mtu-override': 'enable',
'name': 'default_name_90',
'ndiscforward': 'enable',
'netbios-forward': 'disable',
'netflow-sampler': 'disable',
'outbandwidth': '94',
'padt-retry-timeout': '95',
'password': 'test_value_96',
'ping-serv-status': '97',
'polling-interval': '98',
'pppoe-unnumbered-negotiate': 'enable',
'pptp-auth-type': 'auto',
'pptp-client': 'enable',
'pptp-password': 'test_value_102',
'pptp-server-ip': 'test_value_103',
'pptp-timeout': '104',
'pptp-user': 'test_value_105',
'preserve-session-route': 'enable',
'priority': '107',
'priority-override': 'enable',
'proxy-captive-portal': 'enable',
'redundant-interface': 'test_value_110',
'remote-ip': 'test_value_111',
'replacemsg-override-group': 'test_value_112',
'role': 'lan',
'sample-direction': 'tx',
'sample-rate': '115',
'scan-botnet-connections': 'disable',
'secondary-IP': 'enable',
'security-exempt-list': 'test_value_118',
'security-external-logout': 'test_value_119',
'security-external-web': 'test_value_120',
'security-mac-auth-bypass': 'enable',
'security-mode': 'none',
'security-redirect-url': 'test_value_123',
'service-name': 'test_value_124',
'sflow-sampler': 'enable',
'snmp-index': '126',
'speed': 'auto',
'spillover-threshold': '128',
'src-check': 'enable',
'status': 'up',
'stpforward': 'enable',
'stpforward-mode': 'rpl-all-ext-id',
'subst': 'enable',
'substitute-dst-mac': 'test_value_134',
'switch': 'test_value_135',
'switch-controller-access-vlan': 'enable',
'switch-controller-arp-inspection': 'enable',
'switch-controller-dhcp-snooping': 'enable',
'switch-controller-dhcp-snooping-option82': 'enable',
'switch-controller-dhcp-snooping-verify-mac': 'enable',
'switch-controller-igmp-snooping': 'enable',
'switch-controller-learning-limit': '142',
'tcp-mss': '143',
'trust-ip-1': 'test_value_144',
'trust-ip-2': 'test_value_145',
'trust-ip-3': 'test_value_146',
'trust-ip6-1': 'test_value_147',
'trust-ip6-2': 'test_value_148',
'trust-ip6-3': 'test_value_149',
'type': 'physical',
'username': 'test_value_151',
'vdom': 'test_value_152',
'vindex': '153',
'vlanforward': 'enable',
'vlanid': '155',
'vrf': '156',
'vrrp-virtual-mac': 'enable',
'wccp': 'enable',
'weight': '159',
'wins-ip': 'test_value_160'
}
set_method_mock.assert_called_with('system', 'interface', data=expected_data, vdom='root')
schema_method_mock.assert_not_called()
assert not is_error
assert not changed
assert response['status'] == 'error'
assert response['http_status'] == 404
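# Idempotency check: when the mocked set() reports an error with http_status
# 404, the module is expected to treat the request as already being in the
# desired state, so no failure and no change are reported. The exact mapping
# lives inside fortios_system_interface; this reading is inferred from the
# assertions above.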
def test_system_interface_filter_foreign_attributes(mocker):
schema_method_mock = mocker.patch('ansible.module_utils.network.fortios.fortios.FortiOSHandler.schema')
set_method_result = {'status': 'success', 'http_method': 'POST', 'http_status': 200}
set_method_mock = mocker.patch('ansible.module_utils.network.fortios.fortios.FortiOSHandler.set', return_value=set_method_result)
input_data = {
'username': 'admin',
'state': 'present',
'system_interface': {
'random_attribute_not_valid': 'tag',
'ac_name': 'test_value_3',
'aggregate': 'test_value_4',
'algorithm': 'L2',
'alias': 'test_value_6',
'ap_discover': 'enable',
'arpforward': 'enable',
'auth_type': 'auto',
'auto_auth_extension_device': 'enable',
'bfd': 'global',
'bfd_desired_min_tx': '12',
'bfd_detect_mult': '13',
'bfd_required_min_rx': '14',
'broadcast_forticlient_discovery': 'enable',
'broadcast_forward': 'enable',
'captive_portal': '17',
'cli_conn_status': '18',
'color': '19',
'dedicated_to': 'none',
'defaultgw': 'enable',
'description': 'test_value_22',
'detected_peer_mtu': '23',
'detectprotocol': 'ping',
'detectserver': 'test_value_25',
'device_access_list': 'test_value_26',
'device_identification': 'enable',
'device_identification_active_scan': 'enable',
'device_netscan': 'disable',
'device_user_identification': 'enable',
'devindex': '31',
'dhcp_client_identifier': 'myId_32',
'dhcp_relay_agent_option': 'enable',
'dhcp_relay_ip': 'test_value_34',
'dhcp_relay_service': 'disable',
'dhcp_relay_type': 'regular',
'dhcp_renew_time': '37',
'disc_retry_timeout': '38',
'disconnect_threshold': '39',
'distance': '40',
'dns_server_override': 'enable',
'drop_fragment': 'enable',
'drop_overlapped_fragment': 'enable',
'egress_shaping_profile': 'test_value_44',
'endpoint_compliance': 'enable',
'estimated_downstream_bandwidth': '46',
'estimated_upstream_bandwidth': '47',
'explicit_ftp_proxy': 'enable',
'explicit_web_proxy': 'enable',
'external': 'enable',
'fail_action_on_extender': 'soft-restart',
'fail_alert_method': 'link-failed-signal',
'fail_detect': 'enable',
'fail_detect_option': 'detectserver',
'fortiheartbeat': 'enable',
'fortilink': 'enable',
'fortilink_backup_link': '57',
'fortilink_split_interface': 'enable',
'fortilink_stacking': 'enable',
'forward_domain': '60',
'gwdetect': 'enable',
'ha_priority': '62',
'icmp_accept_redirect': 'enable',
'icmp_send_redirect': 'enable',
'ident_accept': 'enable',
'idle_timeout': '66',
'inbandwidth': '67',
'ingress_spillover_threshold': '68',
'interface': 'test_value_69',
'internal': '70',
'ip': 'test_value_71',
'ipmac': 'enable',
'ips_sniffer_mode': 'enable',
'ipunnumbered': 'test_value_74',
'l2forward': 'enable',
'lacp_ha_slave': 'enable',
'lacp_mode': 'static',
'lacp_speed': 'slow',
'lcp_echo_interval': '79',
'lcp_max_echo_fails': '80',
'link_up_delay': '81',
'lldp_transmission': 'enable',
'macaddr': 'test_value_83',
'management_ip': 'test_value_84',
'min_links': '85',
'min_links_down': 'operational',
'mode': 'static',
'mtu': '88',
'mtu_override': 'enable',
'name': 'default_name_90',
'ndiscforward': 'enable',
'netbios_forward': 'disable',
'netflow_sampler': 'disable',
'outbandwidth': '94',
'padt_retry_timeout': '95',
'password': 'test_value_96',
'ping_serv_status': '97',
'polling_interval': '98',
'pppoe_unnumbered_negotiate': 'enable',
'pptp_auth_type': 'auto',
'pptp_client': 'enable',
'pptp_password': 'test_value_102',
'pptp_server_ip': 'test_value_103',
'pptp_timeout': '104',
'pptp_user': 'test_value_105',
'preserve_session_route': 'enable',
'priority': '107',
'priority_override': 'enable',
'proxy_captive_portal': 'enable',
'redundant_interface': 'test_value_110',
'remote_ip': 'test_value_111',
'replacemsg_override_group': 'test_value_112',
'role': 'lan',
'sample_direction': 'tx',
'sample_rate': '115',
'scan_botnet_connections': 'disable',
'secondary_IP': 'enable',
'security_exempt_list': 'test_value_118',
'security_external_logout': 'test_value_119',
'security_external_web': 'test_value_120',
'security_mac_auth_bypass': 'enable',
'security_mode': 'none',
'security_redirect_url': 'test_value_123',
'service_name': 'test_value_124',
'sflow_sampler': 'enable',
'snmp_index': '126',
'speed': 'auto',
'spillover_threshold': '128',
'src_check': 'enable',
'status': 'up',
'stpforward': 'enable',
'stpforward_mode': 'rpl-all-ext-id',
'subst': 'enable',
'substitute_dst_mac': 'test_value_134',
'switch': 'test_value_135',
'switch_controller_access_vlan': 'enable',
'switch_controller_arp_inspection': 'enable',
'switch_controller_dhcp_snooping': 'enable',
'switch_controller_dhcp_snooping_option82': 'enable',
'switch_controller_dhcp_snooping_verify_mac': 'enable',
'switch_controller_igmp_snooping': 'enable',
'switch_controller_learning_limit': '142',
'tcp_mss': '143',
'trust_ip_1': 'test_value_144',
'trust_ip_2': 'test_value_145',
'trust_ip_3': 'test_value_146',
'trust_ip6_1': 'test_value_147',
'trust_ip6_2': 'test_value_148',
'trust_ip6_3': 'test_value_149',
'type': 'physical',
'username': 'test_value_151',
'vdom': 'test_value_152',
'vindex': '153',
'vlanforward': 'enable',
'vlanid': '155',
'vrf': '156',
'vrrp_virtual_mac': 'enable',
'wccp': 'enable',
'weight': '159',
'wins_ip': 'test_value_160'
},
'vdom': 'root'}
is_error, changed, response = fortios_system_interface.fortios_system(input_data, fos_instance)
expected_data = {
'ac-name': 'test_value_3',
'aggregate': 'test_value_4',
'algorithm': 'L2',
'alias': 'test_value_6',
'ap-discover': 'enable',
'arpforward': 'enable',
'auth-type': 'auto',
'auto-auth-extension-device': 'enable',
'bfd': 'global',
'bfd-desired-min-tx': '12',
'bfd-detect-mult': '13',
'bfd-required-min-rx': '14',
'broadcast-forticlient-discovery': 'enable',
'broadcast-forward': 'enable',
'captive-portal': '17',
'cli-conn-status': '18',
'color': '19',
'dedicated-to': 'none',
'defaultgw': 'enable',
'description': 'test_value_22',
'detected-peer-mtu': '23',
'detectprotocol': 'ping',
'detectserver': 'test_value_25',
'device-access-list': 'test_value_26',
'device-identification': 'enable',
'device-identification-active-scan': 'enable',
'device-netscan': 'disable',
'device-user-identification': 'enable',
'devindex': '31',
'dhcp-client-identifier': 'myId_32',
'dhcp-relay-agent-option': 'enable',
'dhcp-relay-ip': 'test_value_34',
'dhcp-relay-service': 'disable',
'dhcp-relay-type': 'regular',
'dhcp-renew-time': '37',
'disc-retry-timeout': '38',
'disconnect-threshold': '39',
'distance': '40',
'dns-server-override': 'enable',
'drop-fragment': 'enable',
'drop-overlapped-fragment': 'enable',
'egress-shaping-profile': 'test_value_44',
'endpoint-compliance': 'enable',
'estimated-downstream-bandwidth': '46',
'estimated-upstream-bandwidth': '47',
'explicit-ftp-proxy': 'enable',
'explicit-web-proxy': 'enable',
'external': 'enable',
'fail-action-on-extender': 'soft-restart',
'fail-alert-method': 'link-failed-signal',
'fail-detect': 'enable',
'fail-detect-option': 'detectserver',
'fortiheartbeat': 'enable',
'fortilink': 'enable',
'fortilink-backup-link': '57',
'fortilink-split-interface': 'enable',
'fortilink-stacking': 'enable',
'forward-domain': '60',
'gwdetect': 'enable',
'ha-priority': '62',
'icmp-accept-redirect': 'enable',
'icmp-send-redirect': 'enable',
'ident-accept': 'enable',
'idle-timeout': '66',
'inbandwidth': '67',
'ingress-spillover-threshold': '68',
'interface': 'test_value_69',
'internal': '70',
'ip': 'test_value_71',
'ipmac': 'enable',
'ips-sniffer-mode': 'enable',
'ipunnumbered': 'test_value_74',
'l2forward': 'enable',
'lacp-ha-slave': 'enable',
'lacp-mode': 'static',
'lacp-speed': 'slow',
'lcp-echo-interval': '79',
'lcp-max-echo-fails': '80',
'link-up-delay': '81',
'lldp-transmission': 'enable',
'macaddr': 'test_value_83',
'management-ip': 'test_value_84',
'min-links': '85',
'min-links-down': 'operational',
'mode': 'static',
'mtu': '88',
'mtu-override': 'enable',
'name': 'default_name_90',
'ndiscforward': 'enable',
'netbios-forward': 'disable',
'netflow-sampler': 'disable',
'outbandwidth': '94',
'padt-retry-timeout': '95',
'password': 'test_value_96',
'ping-serv-status': '97',
'polling-interval': '98',
'pppoe-unnumbered-negotiate': 'enable',
'pptp-auth-type': 'auto',
'pptp-client': 'enable',
'pptp-password': 'test_value_102',
'pptp-server-ip': 'test_value_103',
'pptp-timeout': '104',
'pptp-user': 'test_value_105',
'preserve-session-route': 'enable',
'priority': '107',
'priority-override': 'enable',
'proxy-captive-portal': 'enable',
'redundant-interface': 'test_value_110',
'remote-ip': 'test_value_111',
'replacemsg-override-group': 'test_value_112',
'role': 'lan',
'sample-direction': 'tx',
'sample-rate': '115',
'scan-botnet-connections': 'disable',
'secondary-IP': 'enable',
'security-exempt-list': 'test_value_118',
'security-external-logout': 'test_value_119',
'security-external-web': 'test_value_120',
'security-mac-auth-bypass': 'enable',
'security-mode': 'none',
'security-redirect-url': 'test_value_123',
'service-name': 'test_value_124',
'sflow-sampler': 'enable',
'snmp-index': '126',
'speed': 'auto',
'spillover-threshold': '128',
'src-check': 'enable',
'status': 'up',
'stpforward': 'enable',
'stpforward-mode': 'rpl-all-ext-id',
'subst': 'enable',
'substitute-dst-mac': 'test_value_134',
'switch': 'test_value_135',
'switch-controller-access-vlan': 'enable',
'switch-controller-arp-inspection': 'enable',
'switch-controller-dhcp-snooping': 'enable',
'switch-controller-dhcp-snooping-option82': 'enable',
'switch-controller-dhcp-snooping-verify-mac': 'enable',
'switch-controller-igmp-snooping': 'enable',
'switch-controller-learning-limit': '142',
'tcp-mss': '143',
'trust-ip-1': 'test_value_144',
'trust-ip-2': 'test_value_145',
'trust-ip-3': 'test_value_146',
'trust-ip6-1': 'test_value_147',
'trust-ip6-2': 'test_value_148',
'trust-ip6-3': 'test_value_149',
'type': 'physical',
'username': 'test_value_151',
'vdom': 'test_value_152',
'vindex': '153',
'vlanforward': 'enable',
'vlanid': '155',
'vrf': '156',
'vrrp-virtual-mac': 'enable',
'wccp': 'enable',
'weight': '159',
'wins-ip': 'test_value_160'
}
set_method_mock.assert_called_with('system', 'interface', data=expected_data, vdom='root')
schema_method_mock.assert_not_called()
assert not is_error
assert changed
assert response['status'] == 'success'
assert response['http_status'] == 200
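    # Editor's note (illustrative, not part of the original test): `expected_data` is simply
    # the input parameters above with every underscore in the key names turned into a hyphen,
    # which is the key style the FortiOS API expects.  A helper doing that conversion could
    # look like this (hypothetical name, not shown in this excerpt):
    #
    #     def underscore_to_hyphen(data):
    #         return {k.replace('_', '-'): v for k, v in data.items()}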
| 40.858192 | 142 | 0.528008 | 6,782 | 72,319 | 5.309643 | 0.066647 | 0.089975 | 0.018328 | 0.021661 | 0.970841 | 0.969064 | 0.966426 | 0.966426 | 0.966426 | 0.966426 | 0 | 0.041647 | 0.318699 | 72,319 | 1,769 | 143 | 40.881289 | 0.689209 | 0.009182 | 0 | 0.977739 | 0 | 0 | 0.498793 | 0.124555 | 0 | 0 | 0 | 0 | 0.02109 | 1 | 0.004101 | false | 0.017575 | 0.004687 | 0 | 0.009373 | 0.000586 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
68c82bfc75345f11991ba363609ac2ca1e40ae1d | 10,390 | py | Python | angr/procedures/definitions/win32_prntvpt.py | r4b3rt/angr | c133cfd4f83ffea2a1d9e064241e9459eaabc55f | [
"BSD-2-Clause"
] | null | null | null | angr/procedures/definitions/win32_prntvpt.py | r4b3rt/angr | c133cfd4f83ffea2a1d9e064241e9459eaabc55f | [
"BSD-2-Clause"
] | null | null | null | angr/procedures/definitions/win32_prntvpt.py | r4b3rt/angr | c133cfd4f83ffea2a1d9e064241e9459eaabc55f | [
"BSD-2-Clause"
] | null | null | null | # pylint:disable=line-too-long
import logging
from ...sim_type import SimTypeFunction, SimTypeShort, SimTypeInt, SimTypeLong, SimTypeLongLong, SimTypeDouble, SimTypeFloat, SimTypePointer, SimTypeChar, SimStruct, SimTypeFixedSizeArray, SimTypeBottom, SimUnion, SimTypeBool
from ...calling_conventions import SimCCStdcall, SimCCMicrosoftAMD64
from .. import SIM_PROCEDURES as P
from . import SimLibrary
_l = logging.getLogger(name=__name__)
lib = SimLibrary()
lib.set_default_cc('X86', SimCCStdcall)
lib.set_default_cc('AMD64', SimCCMicrosoftAMD64)
lib.set_library_names("prntvpt.dll")
prototypes = \
{
#
'PTQuerySchemaVersionSupport': SimTypeFunction([SimTypePointer(SimTypeChar(label="Char"), offset=0), SimTypePointer(SimTypeInt(signed=False, label="UInt32"), offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["pszPrinterName", "pMaxVersion"]),
#
'PTOpenProvider': SimTypeFunction([SimTypePointer(SimTypeChar(label="Char"), offset=0), SimTypeInt(signed=False, label="UInt32"), SimTypePointer(SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["pszPrinterName", "dwVersion", "phProvider"]),
#
'PTOpenProviderEx': SimTypeFunction([SimTypePointer(SimTypeChar(label="Char"), offset=0), SimTypeInt(signed=False, label="UInt32"), SimTypeInt(signed=False, label="UInt32"), SimTypePointer(SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), offset=0), SimTypePointer(SimTypeInt(signed=False, label="UInt32"), offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["pszPrinterName", "dwMaxVersion", "dwPrefVersion", "phProvider", "pUsedVersion"]),
#
'PTCloseProvider': SimTypeFunction([SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["hProvider"]),
#
'PTReleaseMemory': SimTypeFunction([SimTypePointer(SimTypeBottom(label="Void"), offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["pBuffer"]),
#
'PTGetPrintCapabilities': SimTypeFunction([SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), SimTypeBottom(label="IStream"), SimTypeBottom(label="IStream"), SimTypePointer(SimTypePointer(SimTypeChar(label="Char"), offset=0), offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["hProvider", "pPrintTicket", "pCapabilities", "pbstrErrorMessage"]),
#
'PTGetPrintDeviceCapabilities': SimTypeFunction([SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), SimTypeBottom(label="IStream"), SimTypeBottom(label="IStream"), SimTypePointer(SimTypePointer(SimTypeChar(label="Char"), offset=0), offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["hProvider", "pPrintTicket", "pDeviceCapabilities", "pbstrErrorMessage"]),
#
'PTGetPrintDeviceResources': SimTypeFunction([SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), SimTypePointer(SimTypeChar(label="Char"), offset=0), SimTypeBottom(label="IStream"), SimTypeBottom(label="IStream"), SimTypePointer(SimTypePointer(SimTypeChar(label="Char"), offset=0), offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["hProvider", "pszLocaleName", "pPrintTicket", "pDeviceResources", "pbstrErrorMessage"]),
#
'PTMergeAndValidatePrintTicket': SimTypeFunction([SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), SimTypeBottom(label="IStream"), SimTypeBottom(label="IStream"), SimTypeInt(signed=False, label="EPrintTicketScope"), SimTypeBottom(label="IStream"), SimTypePointer(SimTypePointer(SimTypeChar(label="Char"), offset=0), offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["hProvider", "pBaseTicket", "pDeltaTicket", "scope", "pResultTicket", "pbstrErrorMessage"]),
#
'PTConvertPrintTicketToDevMode': SimTypeFunction([SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), SimTypeBottom(label="IStream"), SimTypeInt(signed=False, label="EDefaultDevmodeType"), SimTypeInt(signed=False, label="EPrintTicketScope"), SimTypePointer(SimTypeInt(signed=False, label="UInt32"), offset=0), SimTypePointer(SimTypePointer(SimStruct({"dmDeviceName": SimTypeFixedSizeArray(SimTypeChar(label="Byte"), 32), "dmSpecVersion": SimTypeShort(signed=False, label="UInt16"), "dmDriverVersion": SimTypeShort(signed=False, label="UInt16"), "dmSize": SimTypeShort(signed=False, label="UInt16"), "dmDriverExtra": SimTypeShort(signed=False, label="UInt16"), "dmFields": SimTypeInt(signed=False, label="UInt32"), "Anonymous1": SimUnion({"Anonymous1": SimStruct({"dmOrientation": SimTypeShort(signed=True, label="Int16"), "dmPaperSize": SimTypeShort(signed=True, label="Int16"), "dmPaperLength": SimTypeShort(signed=True, label="Int16"), "dmPaperWidth": SimTypeShort(signed=True, label="Int16"), "dmScale": SimTypeShort(signed=True, label="Int16"), "dmCopies": SimTypeShort(signed=True, label="Int16"), "dmDefaultSource": SimTypeShort(signed=True, label="Int16"), "dmPrintQuality": SimTypeShort(signed=True, label="Int16")}, name="_Anonymous1_e__Struct", pack=False, align=None), "Anonymous2": SimStruct({"dmPosition": SimStruct({"x": SimTypeInt(signed=True, label="Int32"), "y": SimTypeInt(signed=True, label="Int32")}, name="POINTL", pack=False, align=None), "dmDisplayOrientation": SimTypeInt(signed=False, label="UInt32"), "dmDisplayFixedOutput": SimTypeInt(signed=False, label="UInt32")}, name="_Anonymous2_e__Struct", pack=False, align=None)}, name="<anon>", label="None"), "dmColor": SimTypeShort(signed=True, label="Int16"), "dmDuplex": SimTypeShort(signed=True, label="Int16"), "dmYResolution": SimTypeShort(signed=True, label="Int16"), "dmTTOption": SimTypeShort(signed=True, label="Int16"), "dmCollate": SimTypeShort(signed=True, label="Int16"), "dmFormName": SimTypeFixedSizeArray(SimTypeChar(label="Byte"), 32), "dmLogPixels": SimTypeShort(signed=False, label="UInt16"), "dmBitsPerPel": SimTypeInt(signed=False, label="UInt32"), "dmPelsWidth": SimTypeInt(signed=False, label="UInt32"), "dmPelsHeight": SimTypeInt(signed=False, label="UInt32"), "Anonymous2": SimUnion({"dmDisplayFlags": SimTypeInt(signed=False, label="UInt32"), "dmNup": SimTypeInt(signed=False, label="UInt32")}, name="<anon>", label="None"), "dmDisplayFrequency": SimTypeInt(signed=False, label="UInt32"), "dmICMMethod": SimTypeInt(signed=False, label="UInt32"), "dmICMIntent": SimTypeInt(signed=False, label="UInt32"), "dmMediaType": SimTypeInt(signed=False, label="UInt32"), "dmDitherType": SimTypeInt(signed=False, label="UInt32"), "dmReserved1": SimTypeInt(signed=False, label="UInt32"), "dmReserved2": SimTypeInt(signed=False, label="UInt32"), "dmPanningWidth": SimTypeInt(signed=False, label="UInt32"), "dmPanningHeight": SimTypeInt(signed=False, label="UInt32")}, name="DEVMODEA", pack=False, align=None), offset=0), offset=0), SimTypePointer(SimTypePointer(SimTypeChar(label="Char"), offset=0), offset=0)], SimTypeInt(signed=True, label="Int32"), arg_names=["hProvider", "pPrintTicket", "baseDevmodeType", "scope", "pcbDevmode", "ppDevmode", "pbstrErrorMessage"]),
#
'PTConvertDevModeToPrintTicket': SimTypeFunction([SimTypePointer(SimTypeInt(signed=True, label="Int"), label="IntPtr", offset=0), SimTypeInt(signed=False, label="UInt32"), SimTypePointer(SimStruct({"dmDeviceName": SimTypeFixedSizeArray(SimTypeChar(label="Byte"), 32), "dmSpecVersion": SimTypeShort(signed=False, label="UInt16"), "dmDriverVersion": SimTypeShort(signed=False, label="UInt16"), "dmSize": SimTypeShort(signed=False, label="UInt16"), "dmDriverExtra": SimTypeShort(signed=False, label="UInt16"), "dmFields": SimTypeInt(signed=False, label="UInt32"), "Anonymous1": SimUnion({"Anonymous1": SimStruct({"dmOrientation": SimTypeShort(signed=True, label="Int16"), "dmPaperSize": SimTypeShort(signed=True, label="Int16"), "dmPaperLength": SimTypeShort(signed=True, label="Int16"), "dmPaperWidth": SimTypeShort(signed=True, label="Int16"), "dmScale": SimTypeShort(signed=True, label="Int16"), "dmCopies": SimTypeShort(signed=True, label="Int16"), "dmDefaultSource": SimTypeShort(signed=True, label="Int16"), "dmPrintQuality": SimTypeShort(signed=True, label="Int16")}, name="_Anonymous1_e__Struct", pack=False, align=None), "Anonymous2": SimStruct({"dmPosition": SimStruct({"x": SimTypeInt(signed=True, label="Int32"), "y": SimTypeInt(signed=True, label="Int32")}, name="POINTL", pack=False, align=None), "dmDisplayOrientation": SimTypeInt(signed=False, label="UInt32"), "dmDisplayFixedOutput": SimTypeInt(signed=False, label="UInt32")}, name="_Anonymous2_e__Struct", pack=False, align=None)}, name="<anon>", label="None"), "dmColor": SimTypeShort(signed=True, label="Int16"), "dmDuplex": SimTypeShort(signed=True, label="Int16"), "dmYResolution": SimTypeShort(signed=True, label="Int16"), "dmTTOption": SimTypeShort(signed=True, label="Int16"), "dmCollate": SimTypeShort(signed=True, label="Int16"), "dmFormName": SimTypeFixedSizeArray(SimTypeChar(label="Byte"), 32), "dmLogPixels": SimTypeShort(signed=False, label="UInt16"), "dmBitsPerPel": SimTypeInt(signed=False, label="UInt32"), "dmPelsWidth": SimTypeInt(signed=False, label="UInt32"), "dmPelsHeight": SimTypeInt(signed=False, label="UInt32"), "Anonymous2": SimUnion({"dmDisplayFlags": SimTypeInt(signed=False, label="UInt32"), "dmNup": SimTypeInt(signed=False, label="UInt32")}, name="<anon>", label="None"), "dmDisplayFrequency": SimTypeInt(signed=False, label="UInt32"), "dmICMMethod": SimTypeInt(signed=False, label="UInt32"), "dmICMIntent": SimTypeInt(signed=False, label="UInt32"), "dmMediaType": SimTypeInt(signed=False, label="UInt32"), "dmDitherType": SimTypeInt(signed=False, label="UInt32"), "dmReserved1": SimTypeInt(signed=False, label="UInt32"), "dmReserved2": SimTypeInt(signed=False, label="UInt32"), "dmPanningWidth": SimTypeInt(signed=False, label="UInt32"), "dmPanningHeight": SimTypeInt(signed=False, label="UInt32")}, name="DEVMODEA", pack=False, align=None), offset=0), SimTypeInt(signed=False, label="EPrintTicketScope"), SimTypeBottom(label="IStream")], SimTypeInt(signed=True, label="Int32"), arg_names=["hProvider", "cbDevmode", "pDevmode", "scope", "pPrintTicket"]),
}
lib.set_prototypes(prototypes)
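# Editor's sketch (not part of the upstream angr file): `prototypes` maps each prntvpt.dll
# export to a SimTypeFunction describing its signature.  Assuming SimTypeFunction keeps the
# `arg_names` it was constructed with (an assumption about the angr API), the table could be
# summarised with something like:
#
#     for name, proto in sorted(prototypes.items()):
#         print("%s(%s)" % (name, ", ".join(proto.arg_names or [])))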
| 236.136364 | 3,289 | 0.745525 | 1,051 | 10,390 | 7.330162 | 0.143673 | 0.143302 | 0.114226 | 0.151869 | 0.845016 | 0.838525 | 0.834112 | 0.824377 | 0.794912 | 0.789071 | 0 | 0.025729 | 0.072281 | 10,390 | 43 | 3,290 | 241.627907 | 0.773524 | 0.002695 | 0 | 0 | 0 | 0 | 0.260012 | 0.026407 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.2 | 0 | 0.2 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
68d11da08838c622c99e1e8e2e4e785e810d937f | 80 | py | Python | cloud/src/baseline_cloud/ingest/clients/__init__.py | MartinMReed/aws-iot-baseline | 61bdc51708e6f4480d0117a43f0adde5f6a63506 | [
"MIT"
] | 1 | 2021-12-31T05:05:30.000Z | 2021-12-31T05:05:30.000Z | cloud/src/baseline_cloud/ingest/clients/__init__.py | nelsestu/thing-expert | 2e105d718c386258d8efdb329ea60da1072ffbe8 | [
"MIT"
] | null | null | null | cloud/src/baseline_cloud/ingest/clients/__init__.py | nelsestu/thing-expert | 2e105d718c386258d8efdb329ea60da1072ffbe8 | [
"MIT"
] | 1 | 2021-04-05T23:44:12.000Z | 2021-04-05T23:44:12.000Z | RE_UUID = '[0-9a-f]{8}-[0-9a-f]{4}-4[0-9a-f]{3}-[ab89][0-9a-f]{3}-[0-9a-f]{12}'
| 40 | 79 | 0.475 | 24 | 80 | 1.541667 | 0.416667 | 0.405405 | 0.540541 | 0.27027 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.246753 | 0.0375 | 80 | 1 | 80 | 80 | 0.233766 | 0 | 0 | 0 | 0 | 1 | 0.8375 | 0.8375 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 11 |
68d1a2ce2a29b8c3064dfa446e14e2b47d330663 | 123 | py | Python | venv/lib/python3.7/site-packages/foundation/forms/__init__.py | Ljuka/iwen | 6aee69bf46c14e301002d0465a8a2b7e74e02953 | [
"MIT"
] | 1 | 2019-03-14T17:02:46.000Z | 2019-03-14T17:02:46.000Z | venv/lib/python3.7/site-packages/foundation/forms/__init__.py | Ljuka/iwen | 6aee69bf46c14e301002d0465a8a2b7e74e02953 | [
"MIT"
] | null | null | null | venv/lib/python3.7/site-packages/foundation/forms/__init__.py | Ljuka/iwen | 6aee69bf46c14e301002d0465a8a2b7e74e02953 | [
"MIT"
] | null | null | null | from django.forms import *
from .fields import *
from .fieldsets import Fieldset
from .forms import *
from .models import *
| 20.5 | 31 | 0.772358 | 17 | 123 | 5.588235 | 0.470588 | 0.315789 | 0.315789 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.154472 | 123 | 5 | 32 | 24.6 | 0.913462 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
ec2033d52759eef18baca48d983c766de8a1c962 | 15,289 | py | Python | FET extractor-Final Version Modified SCtest.py | Pitt-Star-Group/IV-Analysis | 0a5c11ed2d9a5496ca21c2b63f0a48cd76ce0104 | [
"MIT"
] | 1 | 2016-11-13T22:20:52.000Z | 2016-11-13T22:20:52.000Z | FET extractor-Final Version Modified SCtest.py | Pitt-Star-Group/IV-Analysis | 0a5c11ed2d9a5496ca21c2b63f0a48cd76ce0104 | [
"MIT"
] | null | null | null | FET extractor-Final Version Modified SCtest.py | Pitt-Star-Group/IV-Analysis | 0a5c11ed2d9a5496ca21c2b63f0a48cd76ce0104 | [
"MIT"
] | null | null | null | def calculatebkg(filepath):
import sys
from numpy import NaN, Inf, arange, isscalar, asarray, array
import numpy as np
import matplotlib.pyplot as plt
#load txt file into 3 groups
x, none, y = np.loadtxt(filepath, skiprows=2, unpack=True)
def peakdet(v, delta, x = None):
"""
Converted from MATLAB script at http://billauer.co.il/peakdet.html
Returns two arrays
function [maxtab, mintab]=peakdet(v, delta, x)
%PEAKDET Detect peaks in a vector
% [MAXTAB, MINTAB] = PEAKDET(V, DELTA) finds the local
% maxima and minima ("peaks") in the vector V.
% MAXTAB and MINTAB consists of two columns. Column 1
% contains indices in V, and column 2 the found values.
%
% With [MAXTAB, MINTAB] = PEAKDET(V, DELTA, X) the indices
% in MAXTAB and MINTAB are replaced with the corresponding
% X-values.
%
% A point is considered a maximum peak if it has the maximal
% value, and was preceded (to the left) by a value lower by
% DELTA.
% Eli Billauer, 3.4.05 (Explicitly not copyrighted).
% This function is released to the public domain; Any use is allowed.
"""
maxtab = []
mintab = []
maxp = []
minp = []
if x is None:
x = arange(len(v))
v = asarray(v)
if len(v) != len(x):
sys.exit('Input vectors v and x must have same length')
if not isscalar(delta):
sys.exit('Input argument delta must be a scalar')
if delta <= 0:
sys.exit('Input argument delta must be positive')
mn, mx = Inf, -Inf
mnpos, mxpos = NaN, NaN
lookformax = True
for i in arange(len(v)):
this = v[i]
if this > mx:
mx = this
mxpos = x[i]
if this < mn:
mn = this
mnpos = x[i]
if lookformax:
if this < mx-delta:
maxtab.append(mx)
maxp.append(mxpos)
mn = this
mnpos = x[i]
lookformax = False
else:
if this > mn+delta:
mintab.append(mn)
minp.append(mnpos)
mx = this
mxpos = x[i]
lookformax = True
return maxtab, maxp, mintab, minp
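    # Editor's worked example (not in the original script): with v = [0, 1, 0, 2, 0] and
    # delta = 0.5, peakdet reports maxima of 1 and 2 (at indices 1 and 3) and a minimum of 0
    # (at index 2).  An extremum is only recorded once the signal has moved by at least
    # `delta` away from it, which is what filters out fluctuations smaller than delta.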
#define smoothing function
def savitzky_golay(y, window_size, order, deriv=0, rate=1):
import numpy as np
from math import factorial
try:
window_size = np.abs(np.int(window_size))
order = np.abs(np.int(order))
        except ValueError as msg:
raise ValueError("window_size and order have to be of type int")
if window_size % 2 != 1 or window_size < 1:
raise TypeError("window_size size must be a positive odd number")
if window_size < order + 2:
raise TypeError("window_size is too small for the polynomials order")
order_range = range(order+1)
half_window = (window_size -1) // 2
# precompute coefficients
b = np.mat([[k**i for i in order_range] for k in range(-half_window, half_window+1)])
m = np.linalg.pinv(b).A[deriv] * rate**deriv * factorial(deriv)
# pad the signal at the extremes with
# values taken from the signal itself
firstvals = y[0] - np.abs( y[1:half_window+1][::-1] - y[0] )
lastvals = y[-1] + np.abs(y[-half_window-1:-1][::-1] - y[-1])
y = np.concatenate((firstvals, y, lastvals))
return np.convolve( m[::-1], y, mode='valid')
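    # Editor's note (not in the original script): savitzky_golay fits a polynomial of the
    # given `order` inside a sliding window of `window_size` points and returns the smoothed
    # signal (or, with deriv > 0, a smoothed derivative).  The call below uses
    # savitzky_golay(y, 35, 2), i.e. a quadratic fit over 35 samples, which keeps the
    # numerical derivative of the transfer curve from being dominated by noise.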
#transfer x,y into list
yl=[]
xl=[]
for i in range (0,len(y)):
yl.append(y[i])
for j in range (0,len(x)):
xl.append(x[j])
interval=(xl[0]-xl[199])/200
#get 1st derivative with smooth once
#y1st is the smoothed y
y1st= savitzky_golay(y, 35, 2)
der=-np.gradient(y1st,interval)
#plt.plot(x, der,'r--')
#plt.plot(x,y)
#plt.show()
lder2=[]
lder=[]
rx=[]
    #restrict der1 & xl to the range not affected by smoothing edge effects (x likewise unaffected)
for i in range (25, len(xl)):
rx.append(xl[i])
for i in range (0,len(der)):
lder.append(der[i])
    #detect the minimum conductance, whether the minimum shows up as a detected extremum or the curve has no peaks at all.
[maxtab, maxp, mintab, minp] = peakdet(y, 0.00000001, x)
if len(mintab)==0:
gmin=min(yl)
gminx=xl[yl.index(gmin)]
else:
gmin=min(mintab) #here gmin refers to current, not conductance
gminx=minp[mintab.index(gmin)]
gminindex=xl.index(gminx)
#print gmin, gminx, gminindex #if you count from 1 instead of 0, then the real index in x column should be gminindex+1
#define the range of 1st der to be left most to the x(gmin), limitlder represent the restricted 1st order
limitlder=[]
for i in range (gminindex, 183):
limitlder.append(lder[i])
#Get the sharpest point of slope
slope=min(limitlder)
indexslope=lder.index(slope)
#0.3
i3 = y[np.argmin(abs(x-0.3))]
#-0.3
in3= y[np.argmin(abs(x+0.3))]
#0.4
i4= y[np.argmin(abs(x-0.4))]
#-0.4
in4= y[np.argmin(abs(x+0.4))]
#0.6
i6= y[np.argmin(abs(x-0.6))]
#-0.6
in6= y[np.argmin(abs(x+0.6))]
#looking down to find the rightmost point of linear region.
from scipy import stats
import numpy as np
for i in range (3,indexslope-gminindex):
slope, intercept, r_value, p_value, std_err = stats.linregress(x[indexslope-i:indexslope], y1st[indexslope-i:indexslope])
if r_value**2 < 0.9999:
break
linearightmost=indexslope-i
for i in range (linearightmost+3,183):
slope, intercept, r_value, p_value, std_err = stats.linregress(x[linearightmost:i], y1st[linearightmost:i]) #make a change here 08/09,not slope but sslope, since slope will replace the slope value we used before
if r_value**2 < 0.9995:
break
xintercept= -intercept/slope
yintercept=intercept
xvth= (-yintercept)/slope
vth=xl[np.argmin(abs(x-xvth))]
#get y value of vth
yvth=yl[np.argmin(abs(x-xvth))]
return intercept, slope, vth, yvth, i6, in6, i4, in4, i3, in3, gmin/0.05 #all the 11 parameters
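# Editor's note: calculatebkg() returns (intercept, slope, vth, yvth, i6, in6, i4, in4, i3,
# in3, gmin/0.05): the fitted y-intercept and transconductance of the linear region, the
# extracted threshold voltage and the current there, the currents nearest +/-0.6 V, +/-0.4 V
# and +/-0.3 V, and the minimum current divided by 0.05 (presumably the drain bias in volts,
# an assumption), i.e. a minimum conductance.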
#output files names without path, and put them into a list.
#import glob, os
#os.chdir("c:/")
#name=[]
#for file in glob.glob("*.txt"):
# name.append(file)
#print name
def calculatesample(filepath):
import sys
from numpy import NaN, Inf, arange, isscalar, asarray, array
import numpy as np
import matplotlib.pyplot as plt
#load txt file into 3 groups
x, none, y = np.loadtxt(filepath, skiprows=2, unpack=True)
def peakdet(v, delta, x = None):
"""
Converted from MATLAB script at http://billauer.co.il/peakdet.html
Returns two arrays
function [maxtab, mintab]=peakdet(v, delta, x)
%PEAKDET Detect peaks in a vector
% [MAXTAB, MINTAB] = PEAKDET(V, DELTA) finds the local
% maxima and minima ("peaks") in the vector V.
% MAXTAB and MINTAB consists of two columns. Column 1
% contains indices in V, and column 2 the found values.
%
% With [MAXTAB, MINTAB] = PEAKDET(V, DELTA, X) the indices
% in MAXTAB and MINTAB are replaced with the corresponding
% X-values.
%
% A point is considered a maximum peak if it has the maximal
% value, and was preceded (to the left) by a value lower by
% DELTA.
% Eli Billauer, 3.4.05 (Explicitly not copyrighted).
% This function is released to the public domain; Any use is allowed.
"""
maxtab = []
mintab = []
maxp = []
minp = []
if x is None:
x = arange(len(v))
v = asarray(v)
if len(v) != len(x):
sys.exit('Input vectors v and x must have same length')
if not isscalar(delta):
sys.exit('Input argument delta must be a scalar')
if delta <= 0:
sys.exit('Input argument delta must be positive')
mn, mx = Inf, -Inf
mnpos, mxpos = NaN, NaN
lookformax = True
for i in arange(len(v)):
this = v[i]
if this > mx:
mx = this
mxpos = x[i]
if this < mn:
mn = this
mnpos = x[i]
if lookformax:
if this < mx-delta:
maxtab.append(mx)
maxp.append(mxpos)
mn = this
mnpos = x[i]
lookformax = False
else:
if this > mn+delta:
mintab.append(mn)
minp.append(mnpos)
mx = this
mxpos = x[i]
lookformax = True
return maxtab, maxp, mintab, minp
#define smoothing function
def savitzky_golay(y, window_size, order, deriv=0, rate=1):
import numpy as np
from math import factorial
try:
window_size = np.abs(np.int(window_size))
order = np.abs(np.int(order))
        except ValueError as msg:
raise ValueError("window_size and order have to be of type int")
if window_size % 2 != 1 or window_size < 1:
raise TypeError("window_size size must be a positive odd number")
if window_size < order + 2:
raise TypeError("window_size is too small for the polynomials order")
order_range = range(order+1)
half_window = (window_size -1) // 2
# precompute coefficients
b = np.mat([[k**i for i in order_range] for k in range(-half_window, half_window+1)])
m = np.linalg.pinv(b).A[deriv] * rate**deriv * factorial(deriv)
# pad the signal at the extremes with
# values taken from the signal itself
firstvals = y[0] - np.abs( y[1:half_window+1][::-1] - y[0] )
lastvals = y[-1] + np.abs(y[-half_window-1:-1][::-1] - y[-1])
y = np.concatenate((firstvals, y, lastvals))
return np.convolve( m[::-1], y, mode='valid')
#transfer x,y into list
yl=[]
xl=[]
for i in range (0,len(y)):
yl.append(y[i])
for j in range (0,len(x)):
xl.append(x[j])
interval=(xl[0]-xl[199])/200
#get 1st derivative with smooth once
y1st= savitzky_golay(y, 35, 2)
der=-np.gradient(y1st,interval)
lder=[]
rx=[]
    #restrict der & xl to the range not affected by smoothing edge effects (x likewise unaffected)
for i in range (25, len(xl)):
rx.append(xl[i])
for i in range (0,len(der)):
lder.append(der[i])
[maxtab, maxp, mintab, minp] = peakdet(y, 0.00000001, x)
if len(mintab)==0:
gmin=min(yl)
gminx=xl[yl.index(gmin)]
else:
gmin=min(mintab) #here gmin refers to current, not conductance
gminx=minp[mintab.index(gmin)]
gminindex=xl.index(gminx)
slimitlder=[]
for i in range (gminindex, 183):
slimitlder.append(lder[i])
#Get the sharpest point of slope
slope=min(slimitlder)
indexslope=lder.index(slope)
#0.3
i3 = y[np.argmin(abs(x-0.3))]
#-0.3
in3= y[np.argmin(abs(x+0.3))]
#0.4
i4= y[np.argmin(abs(x-0.4))]
#-0.4
in4= y[np.argmin(abs(x+0.4))]
#0.6
i6= y[np.argmin(abs(x-0.6))]
#-0.6
in6= y[np.argmin(abs(x+0.6))]
#Get the right most of the linear region
from scipy import stats
import numpy as np
for i in range (3,indexslope-gminindex):
slope, intercept, r_value, p_value, std_err = stats.linregress(x[indexslope-i:indexslope], y1st[indexslope-i:indexslope])
if r_value**2 < 0.9999:
break
linearightmost=indexslope-i
for i in range (linearightmost+3,183):
slope, intercept, r_value, p_value, std_err = stats.linregress(x[linearightmost:i], y1st[linearightmost:i]) #make a change here 08/09,not slope but sslope, since slope will replace the slope value we used before
if r_value**2 < 0.9995:
break
#get the x intercept
xintercept= -intercept/slope
yintercept=intercept
xvth= (-yintercept)/slope
vth=xl[np.argmin(abs(x-xvth))]
return yintercept, slope, vth, i6, in6, i4, in4, i3, in3, gmin/0.05 #all the 10 parameters, this is the function for processing sample
import glob, os
bkg = []
sample = []
sbkg=[]
ssample=[]
for file in glob.glob("D:/*.txt"):
if 'h2o' in file:
bkg.append(file)
else:
sample.append(file)
sbkg=sorted(bkg)
ssample=sorted(sample)
for i in range (0,len(sbkg)):
[yintercept, slope, vth, yvth, i6, in6, i4, in4, i3, in3, gmin] = calculatebkg(sbkg[i])
[syintercept, sslope, svth, si6, sin6, si4, sin4, si3, sin3, sgmin] = calculatesample(ssample[i])
#print yintercept, slope, vth, yvth, i6, in6, i4, in4, i3, in3, gmin
#print syintercept, sslope, svth, si6, sin6, si4, sin4, si3, sin3, sgmin
p1=(sslope-slope)/slope #dt/t - relative change in transconductance
p2=(svth-vth)/abs(vth) #dvth - relative change in threshold voltage
#p3=(sgmin-gmin)/gmin #dgm/gm
#p4=(si6-i6)/yvth #dg/gvth@0.6v g is yvth
p5=(sin6-in6)/in6 #dg/gvth@-0.6 g is yvth - relative change in conductance @ -0.6V
#p6=(si4-i4)/yvth #dg/gvth@0.4 g is yvth
p7=(sin4-in4)/in4 #dg/gvth@-0.4 g is yvth - relative change in conductance @ -0.4V
#p8=(si3-i3)/yvth #dg/gvth@0.3 g is yvth
p9=(sin3-in3)/in3 #dg/gvth@-0.3 g is yvth - relative change in conductance @ -0.3V
#p10=(si6-i6)/i6 #dg/g@0.6v
#p11=(sin6-in6)/in6 #dg/g@-0.6
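    # Editor's worked example (illustrative numbers only): if the background transconductance
    # `slope` is 2.0e-6 A/V and the sample run gives `sslope` = 3.0e-6 A/V, then
    # p1 = (3.0e-6 - 2.0e-6) / 2.0e-6 = 0.5, a 50% relative increase.  p2, p5, p7 and p9 are
    # the analogous relative changes of the threshold voltage and of the currents at -0.6,
    # -0.4 and -0.3 V.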
    print(p1, p2, p5, p7, p9, sbkg[i], ssample[i], sep="\t")
# Usage notes: place your data files in the folder the glob above points to (D:/*.txt in this
# version); background files are identified by having 'h2o' in the filename.  A background
# file and its sample file should share the same name except for the last word (h2o vs. the
# sample name).
# The Star research group @ PITT reserves all rights.
| 35.147126 | 224 | 0.536202 | 2,085 | 15,289 | 3.905995 | 0.168345 | 0.024558 | 0.012525 | 0.022102 | 0.8389 | 0.834725 | 0.823797 | 0.818148 | 0.805255 | 0.805255 | 0 | 0.035132 | 0.35398 | 15,289 | 434 | 225 | 35.228111 | 0.78941 | 0.15848 | 0 | 0.866412 | 0 | 0.003817 | 0.075659 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.068702 | null | null | 0.003817 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
d485ad85c2a5b83ba9c4ed17281d19472433a856 | 261 | py | Python | launchpad_rtmidi_py/__init__.py | dhilowitz/launchpad-rtmidi.py | 8d90b651948da4dc4f716dc1b9e79b2eea57c9ef | [
"CC-BY-4.0"
] | 8 | 2017-11-28T21:36:53.000Z | 2021-11-23T18:11:50.000Z | launchpad_rtmidi_py/__init__.py | dhilowitz/launchpad-rtmidi.py | 8d90b651948da4dc4f716dc1b9e79b2eea57c9ef | [
"CC-BY-4.0"
] | null | null | null | launchpad_rtmidi_py/__init__.py | dhilowitz/launchpad-rtmidi.py | 8d90b651948da4dc4f716dc1b9e79b2eea57c9ef | [
"CC-BY-4.0"
] | 3 | 2019-04-29T03:00:07.000Z | 2020-08-26T06:20:46.000Z | from launchpad_rtmidi import Launchpad
from launchpad_rtmidi import LaunchpadMk2
from launchpad_rtmidi import LaunchpadPro
from launchpad_rtmidi import LaunchControlXL
from launchpad_rtmidi import LaunchKeyMini
from launchpad_rtmidi import Dicer
import charset
| 32.625 | 44 | 0.900383 | 32 | 261 | 7.15625 | 0.3125 | 0.340611 | 0.497817 | 0.655022 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.004255 | 0.099617 | 261 | 7 | 45 | 37.285714 | 0.970213 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
2e2d329f33903dcbb52b04144755f7e46c7df9b1 | 3,455 | py | Python | app/lib/models/hashcat.py | swisskyrepo/crackerjack | 0f3a3b42f3a69b3a9c7891e71cffcb79c7e28da0 | [
"MIT"
] | 1 | 2022-03-18T11:33:29.000Z | 2022-03-18T11:33:29.000Z | app/lib/models/hashcat.py | swisskyrepo/crackerjack | 0f3a3b42f3a69b3a9c7891e71cffcb79c7e28da0 | [
"MIT"
] | null | null | null | app/lib/models/hashcat.py | swisskyrepo/crackerjack | 0f3a3b42f3a69b3a9c7891e71cffcb79c7e28da0 | [
"MIT"
] | null | null | null | from app import db
import datetime
class HashcatModel(db.Model):
__tablename__ = 'hashcat'
id = db.Column(db.Integer, primary_key=True)
session_id = db.Column(db.Integer, default=0, index=True, nullable=True)
mode = db.Column(db.Integer, default=0, index=True, nullable=True)
workload = db.Column(db.Integer, default=2, index=True, nullable=True)
hashtype = db.Column(db.String, default='', index=True, nullable=True)
wordlist_type = db.Column(db.Integer, default=0, index=True, nullable=True)
wordlist = db.Column(db.String, default='', index=True, nullable=True)
rule_type = db.Column(db.Integer, default=0, index=True, nullable=True)
rule = db.Column(db.String, default='', index=True, nullable=True)
mask_type = db.Column(db.Integer, default=0, index=True, nullable=True)
masklist = db.Column(db.String, default='', index=True, nullable=True)
mask = db.Column(db.String, default='', index=True, nullable=True)
increment_min = db.Column(db.Integer, default=0, index=True, nullable=True)
increment_max = db.Column(db.Integer, default=0, index=True, nullable=True)
optimised_kernel = db.Column(db.Boolean, default=False, index=True, nullable=True)
contains_usernames = db.Column(db.Boolean, default=False, index=True, nullable=True)
device_profile_id = db.Column(db.Integer, default=0, index=True, nullable=True)
    created_at = db.Column(db.DateTime, nullable=True, default=datetime.datetime.now)  # pass the callable so each row gets its own timestamp
class HashcatHistoryModel(db.Model):
"""
This table should be identical to the "hashcat" table (see top of this file).
"""
__tablename__ = 'hashcat_history'
id = db.Column(db.Integer, primary_key=True)
session_id = db.Column(db.Integer, default=0, index=True, nullable=True)
mode = db.Column(db.Integer, default=0, index=True, nullable=True)
workload = db.Column(db.Integer, default=2, index=True, nullable=True)
hashtype = db.Column(db.String, default='', index=True, nullable=True)
wordlist_type = db.Column(db.Integer, default=0, index=True, nullable=True)
wordlist = db.Column(db.String, default='', index=True, nullable=True)
rule_type = db.Column(db.Integer, default=0, index=True, nullable=True)
rule = db.Column(db.String, default='', index=True, nullable=True)
mask_type = db.Column(db.Integer, default=0, index=True, nullable=True)
masklist = db.Column(db.String, default='', index=True, nullable=True)
mask = db.Column(db.String, default='', index=True, nullable=True)
increment_min = db.Column(db.Integer, default=0, index=True, nullable=True)
increment_max = db.Column(db.Integer, default=0, index=True, nullable=True)
optimised_kernel = db.Column(db.Boolean, default=False, index=True, nullable=True)
contains_usernames = db.Column(db.Boolean, default=False, index=True, nullable=True)
device_profile_id = db.Column(db.Integer, default=0, index=True, nullable=True)
    created_at = db.Column(db.DateTime, nullable=True, default=datetime.datetime.now)
class DeviceProfileModel(db.Model):
__tablename__ = 'device_profiles'
id = db.Column(db.Integer, primary_key=True)
name = db.Column(db.String, default='', unique=True, nullable=True)
devices = db.Column(db.String, default='', nullable=True)
enabled = db.Column(db.Boolean, default=False, index=True, nullable=True)
    created_at = db.Column(db.DateTime, nullable=True, default=datetime.datetime.now)
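# Editor's sketch (assumption: `db` is a Flask-SQLAlchemy instance, so models expose the
# usual `query` helper).  Fetching the most recent hashcat settings for a session might look
# like:
#
#     latest = HashcatModel.query.filter_by(session_id=1).order_by(HashcatModel.id.desc()).first()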
| 58.559322 | 88 | 0.720405 | 491 | 3,455 | 4.98778 | 0.130346 | 0.133932 | 0.167415 | 0.282973 | 0.886893 | 0.868109 | 0.868109 | 0.868109 | 0.854635 | 0.854635 | 0 | 0.006024 | 0.135166 | 3,455 | 58 | 89 | 59.568966 | 0.813588 | 0.022287 | 0 | 0.77551 | 0 | 0 | 0.011005 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.040816 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 9 |
2e6fd56025f80ea25ca9e4aec1e12e1bf340e22e | 3,362 | py | Python | src/tests/test_memory_operator.py | gmum/mow-pytorch | 7532a372570b46a5ef7e95dc157079080291db07 | [
"MIT"
] | null | null | null | src/tests/test_memory_operator.py | gmum/mow-pytorch | 7532a372570b46a5ef7e95dc157079080291db07 | [
"MIT"
] | null | null | null | src/tests/test_memory_operator.py | gmum/mow-pytorch | 7532a372570b46a5ef7e95dc157079080291db07 | [
"MIT"
] | null | null | null | import unittest
import torch
from memory.memory_operator import MemoryOperator
class TestMemoryOperator(unittest.TestCase):
def test_memory_operator_correctly_appends_size(self):
# Arrange
memory_length = 480
latent_dim = 8
batch_size = 32
uut = MemoryOperator(memory_length)
# Act
latent = uut(torch.randn((batch_size, latent_dim)))
# Assert
self.assertEqual(latent.size(0), 32)
self.assertEqual(latent.size(1), 8)
def test_memory_operator_correctly_appends_size_second_time(self):
# Arrange
memory_length = 480
latent_dim = 8
batch_size = 32
uut = MemoryOperator(memory_length)
# Act
latent = uut(torch.randn((batch_size, latent_dim)))
latent = uut(torch.randn((batch_size, latent_dim)))
# Assert
self.assertEqual(latent.size(0), 64)
self.assertEqual(latent.size(1), 8)
def test_memory_operator_correctly_appends_elements(self):
# Arrange
memory_length = 480
latent_dim = 8
batch_size = 32
uut = MemoryOperator(memory_length)
# Act
latent = uut(torch.zeros((memory_length, latent_dim)))
latent = uut(torch.ones((batch_size, latent_dim)))
# Assert
self.assertEqual(latent.size(0), 512)
self.assertEqual(latent.count_nonzero(), 32 * 8)
def test_memory_operator_correctly_appends_elements_second_time(self):
# Arrange
memory_length = 480
latent_dim = 8
batch_size = 32
uut = MemoryOperator(memory_length)
# Act
latent = uut(torch.zeros((memory_length, latent_dim)))
latent = uut(torch.ones((batch_size, latent_dim)))
latent = uut(2 * torch.ones((batch_size, latent_dim)))
# Assert
self.assertEqual(latent.count_nonzero(), 2 * batch_size * latent_dim)
def test_memory_operator_correctly_appends_elements_multiple_times_for_simple_scenario(self):
# Arrange
memory_length = 4
latent_dim = 1
batch_size = 1
uut = MemoryOperator(memory_length)
latent = torch.zeros(memory_length, latent_dim)
# Act
uut(latent)
for i in range(5):
latent = uut((i+1) * torch.ones((batch_size, latent_dim)))
# Assert
self.assertEqual(latent[0], 5)
self.assertEqual(latent[1], 4)
self.assertEqual(latent[2], 3)
self.assertEqual(latent[3], 2)
self.assertEqual(latent[4], 1)
def test_memory_operator_correctly_appends_elements_multiple_times_for_simple_scenario_with_forgetting(self):
# Arrange
memory_length = 4
latent_dim = 1
batch_size = 1
uut = MemoryOperator(memory_length)
latent = torch.zeros(memory_length, latent_dim)
# Act
uut(latent)
for i in range(10):
latent = uut((i+1) * torch.ones((batch_size, latent_dim)))
# Assert
self.assertEqual(latent[0], 10)
self.assertEqual(latent[1], 9)
self.assertEqual(latent[2], 8)
self.assertEqual(latent[3], 7)
self.assertEqual(latent[4], 6)
self.assertEqual(latent.size(0), 5)
| 31.420561 | 114 | 0.608269 | 390 | 3,362 | 4.989744 | 0.148718 | 0.087873 | 0.194245 | 0.083248 | 0.824255 | 0.78777 | 0.783145 | 0.760021 | 0.732271 | 0.732271 | 0 | 0.032627 | 0.298037 | 3,362 | 106 | 115 | 31.716981 | 0.791949 | 0.033611 | 0 | 0.573529 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.264706 | 1 | 0.088235 | false | 0 | 0.044118 | 0 | 0.147059 | 0 | 0 | 0 | 0 | null | 0 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
2e84d1f142d48faca6d07c2b7309c067b8100d9b | 7,535 | py | Python | tests/test_if.py | cloudbutton/lithops-array | 5e74b881c7db95eccdccf986f1e3b0dc44603889 | [
"Apache-2.0"
] | null | null | null | tests/test_if.py | cloudbutton/lithops-array | 5e74b881c7db95eccdccf986f1e3b0dc44603889 | [
"Apache-2.0"
] | null | null | null | tests/test_if.py | cloudbutton/lithops-array | 5e74b881c7db95eccdccf986f1e3b0dc44603889 | [
"Apache-2.0"
] | null | null | null | from numpywren.matrix import BigMatrix
from numpywren import matrix_utils, uops
from numpywren import lambdapack as lp
from numpywren import job_runner, frontend
from numpywren import compiler
from numpywren.matrix_utils import constant_zeros
from numpywren.matrix_init import shard_matrix
import dill
import numpywren as npw
import pytest
import numpy as np
from numpy.linalg import cholesky
import pywren
import unittest
import concurrent.futures as fs
import time
import os
import boto3
def f1_if(I: BigMatrix, O: BigMatrix, N: int):
for i in range(N):
if ((i % 2) == 0):
O[i] = mul(1, I[i])
else:
O[i] = mul(2, I[i])
def f1_if_nested(I: BigMatrix, O: BigMatrix, N: int):
for i in range(N):
if ((i % 2) == 0):
if ((i % 3) == 0):
O[i] = mul(3, I[i])
else:
O[i] = mul(1, I[i])
else:
O[i] = mul(2, I[i])
def f1_if_or(I: BigMatrix, O: BigMatrix, N: int):
for i in range(N):
if ((i % 2) == 0 or (i % 3) == 0):
O[i] = mul(1, I[i])
else:
O[i] = mul(2, I[i])
class IfTest(unittest.TestCase):
def test_if_static(self):
X = np.random.randn(64, 64)
shard_sizes = (int(X.shape[0]/8), X.shape[1])
X_sharded = BigMatrix("if_test", shape=X.shape,
shard_sizes=shard_sizes, write_header=True)
O_sharded = BigMatrix("if_test_output", shape=X.shape,
shard_sizes=shard_sizes, write_header=True)
X_sharded.free()
shard_matrix(X_sharded, X)
f = frontend.lpcompile(f1_if)
p = f(X_sharded, O_sharded, X_sharded.num_blocks(0))
assert(p.starters == p.find_terminators())
for s, var_values in p.starters:
if(var_values['i'] % 2 == 0):
assert s == 0
else:
assert s == 1
def test_if_static_nested(self):
X = np.random.randn(64, 64)
shard_sizes = (int(X.shape[0]/8), X.shape[1])
X_sharded = BigMatrix("if_test", shape=X.shape,
shard_sizes=shard_sizes, write_header=True)
O_sharded = BigMatrix("if_test_output", shape=X.shape,
shard_sizes=shard_sizes, write_header=True)
X_sharded.free()
shard_matrix(X_sharded, X)
f = frontend.lpcompile(f1_if_nested)
p = f(X_sharded, O_sharded, X_sharded.num_blocks(0))
assert(p.starters == p.find_terminators())
for s, var_values in p.starters:
i = var_values['i']
if(i % 2 == 0 and (not i % 3 == 0)):
assert s == 1
elif(i % 2 == 0 and (i % 3 == 0)):
assert s == 0
else:
assert s == 2
def test_if_static_or(self):
X = np.random.randn(64, 64)
shard_sizes = (int(X.shape[0]/8), X.shape[1])
X_sharded = BigMatrix("if_test", shape=X.shape,
shard_sizes=shard_sizes, write_header=True)
O_sharded = BigMatrix("if_test_output", shape=X.shape,
shard_sizes=shard_sizes, write_header=True)
X_sharded.free()
shard_matrix(X_sharded, X)
f = frontend.lpcompile(f1_if_or)
p = f(X_sharded, O_sharded, X_sharded.num_blocks(0))
print(p.starters)
assert(p.starters == p.find_terminators())
for s, var_values in p.starters:
i = var_values['i']
if(i % 2 == 0 or (i % 3 == 0)):
assert s == 0
else:
assert s == 1
def test_nested_if_run(self):
X = np.random.randn(64)
shard_sizes = (int(X.shape[0]/8),)
X_sharded = BigMatrix("if_test", shape=X.shape,
shard_sizes=shard_sizes, write_header=True)
O_sharded = BigMatrix("if_test_output", shape=X.shape,
shard_sizes=shard_sizes, write_header=True)
X_sharded.free()
shard_matrix(X_sharded, X)
f = frontend.lpcompile(f1_if_nested)
p = f(X_sharded, O_sharded, X_sharded.num_blocks(0))
num_cores = 1
executor = fs.ProcessPoolExecutor(num_cores)
config = npw.config.default()
p_ex = lp.LambdaPackProgram(p, config=config)
p_ex.start()
all_futures = []
for i in range(num_cores):
all_futures.append(executor.submit(
job_runner.lambdapack_run, p_ex, pipeline_width=1, idle_timeout=5, timeout=60))
p_ex.wait()
time.sleep(5)
p_ex.free()
for i in range(X_sharded.num_blocks(0)):
Ob = O_sharded.get_block(i)
Xb = X_sharded.get_block(i)
if ((i % 2) == 0 and ((i % 3) == 0)):
assert(np.allclose(Ob, 3*Xb))
elif ((i % 2) == 0):
assert(np.allclose(Ob, Xb))
else:
assert(np.allclose(Ob, 2*Xb))
def test_if_or_run(self):
X = np.random.randn(64)
shard_sizes = (int(X.shape[0]/8),)
X_sharded = BigMatrix("if_test", shape=X.shape,
shard_sizes=shard_sizes, write_header=True)
O_sharded = BigMatrix("if_test_output", shape=X.shape,
shard_sizes=shard_sizes, write_header=True)
X_sharded.free()
shard_matrix(X_sharded, X)
f = frontend.lpcompile(f1_if_or)
p = f(X_sharded, O_sharded, X_sharded.num_blocks(0))
num_cores = 1
executor = fs.ProcessPoolExecutor(num_cores)
config = npw.config.default()
p_ex = lp.LambdaPackProgram(p, config=config)
p_ex.start()
all_futures = []
for i in range(num_cores):
all_futures.append(executor.submit(
job_runner.lambdapack_run, p_ex, pipeline_width=1, idle_timeout=5, timeout=60))
p_ex.wait()
time.sleep(5)
p_ex.free()
for i in range(X_sharded.num_blocks(0)):
Ob = O_sharded.get_block(i)
Xb = X_sharded.get_block(i)
if ((i % 2) == 0 or (i % 3) == 0):
assert(np.allclose(Ob, 1*Xb))
else:
assert(np.allclose(Ob, 2*Xb))
def test_if_run(self):
X = np.random.randn(64)
shard_sizes = (int(X.shape[0]/8),)
X_sharded = BigMatrix("if_test", shape=X.shape,
shard_sizes=shard_sizes, write_header=True)
O_sharded = BigMatrix("if_test_output", shape=X.shape,
shard_sizes=shard_sizes, write_header=True)
X_sharded.free()
shard_matrix(X_sharded, X)
f = frontend.lpcompile(f1_if)
p = f(X_sharded, O_sharded, X_sharded.num_blocks(0))
num_cores = 1
executor = fs.ProcessPoolExecutor(num_cores)
config = npw.config.default()
p_ex = lp.LambdaPackProgram(p, config=config)
p_ex.start()
all_futures = []
for i in range(num_cores):
all_futures.append(executor.submit(
job_runner.lambdapack_run, p_ex, pipeline_width=1, idle_timeout=5, timeout=60))
p_ex.wait()
time.sleep(5)
p_ex.free()
for i in range(X_sharded.num_blocks(0)):
Ob = O_sharded.get_block(i)
Xb = X_sharded.get_block(i)
if ((i % 2) == 0):
assert(np.allclose(Ob, 1*Xb))
else:
assert(np.allclose(Ob, 2*Xb))
| 37.118227 | 95 | 0.55209 | 1,049 | 7,535 | 3.755958 | 0.105815 | 0.073096 | 0.054822 | 0.067005 | 0.858629 | 0.856091 | 0.851777 | 0.843401 | 0.837817 | 0.832234 | 0 | 0.024901 | 0.328467 | 7,535 | 202 | 96 | 37.30198 | 0.753755 | 0 | 0 | 0.787234 | 0 | 0 | 0.01712 | 0 | 0 | 0 | 0 | 0 | 0.090426 | 1 | 0.047872 | false | 0 | 0.095745 | 0 | 0.148936 | 0.005319 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
cf16b507f1a5495c20a492865cf43b1ca47cf406 | 2,824 | py | Python | specs/mysql_spec.py | jaimegildesagredo/server-expects | 972bc816be8fbf00f4689fd3a3e51675bb0b2c52 | [
"Apache-2.0"
] | 4 | 2015-01-05T17:18:02.000Z | 2017-02-26T13:59:17.000Z | specs/mysql_spec.py | jaimegildesagredo/server-expects | 972bc816be8fbf00f4689fd3a3e51675bb0b2c52 | [
"Apache-2.0"
] | null | null | null | specs/mysql_spec.py | jaimegildesagredo/server-expects | 972bc816be8fbf00f4689fd3a3e51675bb0b2c52 | [
"Apache-2.0"
] | null | null | null | # -*- coding: utf-8 -*-
from expects import expect
from expects.testing import failure
from server_expects import *
from .constants import c
with describe('mysql'):
with describe('be_accessible'):
with it('passes if instance is accessible by root without password'):
expect(mysql(c.A_MYSQL_LISTENING_HOST)).to(be_accessible)
with it('passes if instance is accessible by root without password on port'):
expect(mysql(c.A_MYSQL_LISTENING_HOST,
port=int(c.A_MYSQL_LISTENING_PORT))).to(be_accessible)
with it('passes if instance is accessible by user and password'):
expect(mysql(c.A_MYSQL_LISTENING_HOST,
user=c.A_MYSQL_EXISTENT_USER,
password=c.A_MYSQL_VALID_PASSWORD)).to(be_accessible)
with it('passes if database is accessible by user and password'):
expect(mysql(c.A_MYSQL_LISTENING_HOST,
user=c.A_MYSQL_EXISTENT_USER,
password=c.A_MYSQL_VALID_PASSWORD,
database=c.A_MYSQL_EXISTENT_DATABASE)).to(be_accessible)
with it('fails if instance is not listening on specified host'):
with failure:
expect(mysql(c.A_MYSQL_NOT_LISTENING_HOST)).to(be_accessible)
with it('fails if instance is not listening on specified port'):
with failure:
expect(mysql(c.A_MYSQL_LISTENING_HOST,
port=int(c.A_MYSQL_NOT_LISTENING_PORT))).to(be_accessible)
with it('fails if instance is not accessible by user'):
with failure:
expect(mysql(c.A_MYSQL_LISTENING_HOST,
user=c.A_MYSQL_NONEXISTENT_USER)).to(be_accessible)
with it('fails if instance is not accessible by user and password'):
with failure:
expect(mysql(c.A_MYSQL_LISTENING_HOST,
user=c.A_MYSQL_EXISTENT_USER,
password=c.A_MYSQL_INVALID_PASSWORD)).to(be_accessible)
with it('fails if database does not exist'):
with failure:
expect(mysql(c.A_MYSQL_LISTENING_HOST,
user=c.A_MYSQL_EXISTENT_USER,
password=c.A_MYSQL_VALID_PASSWORD,
database=c.A_MYSQL_NONEXISTENT_DATABASE)).to(be_accessible)
with it('fails if database is not accessible by user and password'):
with failure:
expect(mysql(c.A_MYSQL_LISTENING_HOST,
user=c.A_MYSQL_NONEXISTENT_USER,
password=c.A_MYSQL_VALID_PASSWORD,
database=c.A_MYSQL_EXISTENT_DATABASE)).to(be_accessible)
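# Editor's note (assumptions, not stated in the file itself): the `describe`/`it` context
# managers suggest these are behaviour specs for the mamba runner, with matchers coming from
# the expects/server-expects libraries.  The c.A_MYSQL_* values are environment-specific
# constants imported from .constants, so the examples only pass against a MySQL host that is
# configured to match them.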
| 44.125 | 88 | 0.600921 | 349 | 2,824 | 4.598854 | 0.12894 | 0.032399 | 0.113396 | 0.11215 | 0.882866 | 0.879128 | 0.874143 | 0.81433 | 0.746417 | 0.739564 | 0 | 0.000524 | 0.324717 | 2,824 | 63 | 89 | 44.825397 | 0.841112 | 0.007436 | 0 | 0.479167 | 0 | 0 | 0.191717 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.229167 | 0.083333 | 0 | 0.083333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 8 |
cf5d46982a65efd01fb44f01d4244ab5f7a2eda6 | 1,337 | py | Python | vandal/plugins/__init__.py | vandal-dev/vandal | 1981c86f4de6632776a4132ecbc206fac5188f32 | [
"Apache-2.0"
] | 1 | 2022-02-22T18:39:57.000Z | 2022-02-22T18:39:57.000Z | vandal/plugins/__init__.py | vandal-dev/vandal | 1981c86f4de6632776a4132ecbc206fac5188f32 | [
"Apache-2.0"
] | null | null | null | vandal/plugins/__init__.py | vandal-dev/vandal | 1981c86f4de6632776a4132ecbc206fac5188f32 | [
"Apache-2.0"
] | null | null | null | # imports relevant contents.
from vandal.plugins.metaclass import Meta
from vandal.plugins.types import (
VandalType,
IntegerType,
FloatType,
NumberType,
ReturnType,
PrintType,
GraphType,
StringType,
ListType,
TupleType,
DictionaryType,
BooleanType,
NumberVector,
StringVector,
StringDictionary,
DictionaryVector,
NumberVectorAlike,
NumberArrayAlike,
AnyArrayAlike,
AnyVectorAlike,
AnyType,
)
# all relevant contents.
__all__ = [
'VandalType',
'IntegerType',
'FloatType',
'NumberType',
'ReturnType',
'PrintType',
'GraphType',
'StringType',
'ListType',
'TupleType',
'DictionaryType',
'BooleanType',
'NumberVector',
'StringVector',
'StringDictionary',
'DictionaryVector',
'NumberVectorAlike',
'NumberArrayAlike',
'AnyArrayAlike',
'AnyVectorAlike',
'AnyType',
]
# all available types.
__types__ = [
VandalType,
IntegerType,
FloatType,
NumberType,
ReturnType,
PrintType,
GraphType,
StringType,
ListType,
TupleType,
DictionaryType,
BooleanType,
NumberVector,
StringVector,
StringDictionary,
DictionaryVector,
NumberVectorAlike,
NumberArrayAlike,
AnyArrayAlike,
AnyVectorAlike,
AnyType,
]
| 17.592105 | 41 | 0.649215 | 85 | 1,337 | 10.117647 | 0.388235 | 0.073256 | 0.104651 | 0.139535 | 0.854651 | 0.854651 | 0.854651 | 0.854651 | 0.854651 | 0.854651 | 0 | 0 | 0.258788 | 1,337 | 75 | 42 | 17.826667 | 0.86781 | 0.052356 | 0 | 0.6 | 0 | 0 | 0.192399 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.028571 | 0 | 0.028571 | 0 | 0 | 0 | 1 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
cf6df9080a0de0fa503e968d6558fd47ca332061 | 131 | py | Python | gym/envs/box2d/__init__.py | jgsonx/gym | 69d1115861d8c98726cbc11ca39df3e1083517e9 | [
"MIT"
] | 1 | 2016-08-23T05:47:18.000Z | 2016-08-23T05:47:18.000Z | gym/envs/box2d/__init__.py | Hiroki-Yoshida7/Reinforcement-Learning-Algorithm | 9c066571ac3f9d9922503af24b828aa4e7f26559 | [
"MIT"
] | null | null | null | gym/envs/box2d/__init__.py | Hiroki-Yoshida7/Reinforcement-Learning-Algorithm | 9c066571ac3f9d9922503af24b828aa4e7f26559 | [
"MIT"
] | 1 | 2017-04-04T20:02:33.000Z | 2017-04-04T20:02:33.000Z | from gym.envs.box2d.lunar_lander import LunarLander
from gym.envs.box2d.bipedal_walker import BipedalWalker, BipedalWalkerHardcore
| 43.666667 | 78 | 0.877863 | 17 | 131 | 6.647059 | 0.705882 | 0.123894 | 0.19469 | 0.283186 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.016393 | 0.068702 | 131 | 2 | 79 | 65.5 | 0.909836 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
2b1172cb3baab570b8d290daf8538d124cdb4ec5 | 9,080 | py | Python | conf/url_redirects.py | froddd/great-international-ui | 414bcb09d701cd7e0c5748d1ac8c587d704f92da | [
"MIT"
] | null | null | null | conf/url_redirects.py | froddd/great-international-ui | 414bcb09d701cd7e0c5748d1ac8c587d704f92da | [
"MIT"
] | null | null | null | conf/url_redirects.py | froddd/great-international-ui | 414bcb09d701cd7e0c5748d1ac8c587d704f92da | [
"MIT"
] | null | null | null | from django.conf.urls import url
from core.views import QuerystringRedirectView
redirects_for_retired_pages_that_must_come_before_tree_based_routing = [
# All of these can be used to discover which view code we need to delete
url(
# Redirect the old invest homepage to atlas
r'^international/invest[/]*$',
QuerystringRedirectView.as_view(pattern_name='atlas-home'),
),
url(
# Redirect the old capital invest homepage to atlas
r'^international/content/capital-invest[/]*$',
QuerystringRedirectView.as_view(pattern_name='atlas-home'),
),
url(
r'^international/content/capital-invest/how-we-help-you-invest-capital[/]*$',
QuerystringRedirectView.as_view(
url='/international/content/investment/how-we-can-help/'
),
),
url(
# Old CIOs
r'^international/content/opportunities[/]*',
QuerystringRedirectView.as_view(pattern_name='atlas-opportunities'),
),
url(
# Old HPOs
r'^international/content/invest/high-potential-opportunities[/]*',
QuerystringRedirectView.as_view(pattern_name='atlas-opportunities'),
),
url(
r'^international/content/about-us[/]*$',
QuerystringRedirectView.as_view(
url='/international/content/investment/how-we-can-help/'
),
),
# How to expand UK setup
url(
r'^international/content/invest/how-to-setup-in-the-uk/establish-a-base-for-business-in-the-uk[/]*',
QuerystringRedirectView.as_view(
url='/international/content/investment/how-we-can-help/establish-a-base-for-business-in-the-uk/'
),
),
url(
r'^international/content/invest/how-to-setup-in-the-uk/research-and-development-rd-support-in-the-uk[/]*',
QuerystringRedirectView.as_view(
url='/international/content/investment/how-we-can-help/research-and-development-rd-support-in-the-uk/'
),
),
url(
r'^international/content/invest/how-to-setup-in-the-uk/global-entrepreneur-program[/]*',
QuerystringRedirectView.as_view(
url='/international/content/investment/how-we-can-help/global-entrepreneur-program/'
),
),
url(
r'^international/content/invest/how-to-setup-in-the-uk/uk-visas-and-migration[/]*',
QuerystringRedirectView.as_view(
url='/international/content/investment/how-we-can-help/uk-visas-and-migration/'
),
),
url(
r'^international/content/invest/how-to-setup-in-the-uk/register-a-company-in-the-uk[/]*',
QuerystringRedirectView.as_view(
url='/international/content/investment/how-we-can-help/register-a-company-in-the-uk/'
),
),
url(
r'^international/content/invest/how-to-setup-in-the-uk/hire-skilled-workers-for-your-uk-operations[/]*',
QuerystringRedirectView.as_view(
url='/international/content/investment/how-we-can-help/hire-skilled-workers-for-your-uk-operations/'
),
),
url(
r'^international/content/invest/how-to-setup-in-the-uk/open-a-uk-business-bank-account[/]*',
QuerystringRedirectView.as_view(
url='/international/content/investment/how-we-can-help/open-a-uk-business-bank-account/'
),
),
url(
r'^international/content/invest/how-to-setup-in-the-uk/uk-tax-and-incentives[/]*',
QuerystringRedirectView.as_view(
url='/international/content/investment/how-we-can-help/uk-tax-and-incentives/'
),
),
url(
r'^international/content/invest/how-to-setup-in-the-uk/access-finance-in-the-uk[/]*',
QuerystringRedirectView.as_view(
url='/international/content/investment/how-we-can-help/access-finance-in-the-uk/'
),
),
url(
# Redirect the rest of the 'invest' CMS page and all its tree-based children
r'^international/content/invest/',
QuerystringRedirectView.as_view(pattern_name='atlas-home'),
),
# About the UK was moved over to Why Invest in the UK
url(
r'^international/content/about-uk[/]*$',
QuerystringRedirectView.as_view(
url='/international/content/investment/why-invest-in-the-uk/'
),
),
url(
r'^international/content/about-uk/why-choose-uk[/]*$',
QuerystringRedirectView.as_view(
url='/international/content/investment/why-invest-in-the-uk/'
),
),
url(
r'^international/content/about-uk/why-choose-uk/tax-incentives[/]*$',
QuerystringRedirectView.as_view(
url='/international/content/investment/why-invest-in-the-uk/tax-incentives/'
),
),
url(
r'^international/content/about-uk/why-choose-uk/uk-talent-and-labour[/]*$',
QuerystringRedirectView.as_view(
url='/international/content/investment/why-invest-in-the-uk/uk-talent-and-labour/'
),
),
url(
r'^international/content/about-uk/why-choose-uk/uk-innovation[/]*$',
QuerystringRedirectView.as_view(
url='/international/content/investment/why-invest-in-the-uk/uk-innovation/'
),
),
url(
r'^international/content/about-uk/why-choose-uk/uk-infrastructure[/]*$',
QuerystringRedirectView.as_view(
url='/international/content/investment/why-invest-in-the-uk/uk-infrastructure/'
),
),
url(
r'^international/content/about-uk/industries[/]*', # NB: wildcard
QuerystringRedirectView.as_view(
url='/international/content/investment/sectors/'
),
),
url(
r'^international/content/about-uk/regions[/]*$',
QuerystringRedirectView.as_view(
url='/international/content/investment/regions/'
),
),
url(
r'^international/content/about-uk/regions/scotland[/]*$',
QuerystringRedirectView.as_view(
url='/international/content/investment/regions/scotland/'
),
),
url(
r'^international/content/about-uk/regions/northern-ireland[/]*$',
QuerystringRedirectView.as_view(
url='/international/content/investment/regions/northern-ireland/'
),
),
url(
r'^international/content/about-uk/regions/north-england[/]*$',
QuerystringRedirectView.as_view(
url='/international/content/investment/regions/north-england/'
),
),
url(
r'^international/content/about-uk/regions/wales[/]*$',
QuerystringRedirectView.as_view(
url='/international/content/investment/regions/wales/'
),
),
url(
r'^international/content/about-uk/regions/midlands[/]*$',
QuerystringRedirectView.as_view(
url='/international/content/investment/regions/midlands/'
),
),
url(
r'^international/content/about-uk/regions/south-england[/]*$',
QuerystringRedirectView.as_view(
url='/international/content/investment/regions/south-england/'
),
),
]
redirects_before_tree_based_routing_lookup = [
# These ones are inserted into the urlconf before the 'cms-page-from-path' route is tried
# so we can redirect pages that otherwise came from tree-based routing
url(
r'^international/content/opportunities[/]*$',
QuerystringRedirectView.as_view(pattern_name='atlas-opportunities'),
),
url(
r'^international/content/invest/high-potential-opportunities/contact[/]*$',
QuerystringRedirectView.as_view(
pattern_name='fdi-opportunity-request-form'
)
),
url(
r'^international/content/invest/high-potential-opportunities/contact/success[/]*$',
QuerystringRedirectView.as_view(
pattern_name='fdi-opportunity-request-form-success'
)
),
url(
r'^international/content/expand/high-potential-opportunities/contact[/]*$',
QuerystringRedirectView.as_view(
pattern_name='fdi-opportunity-request-form'
)
),
url(
r'^international/content/expand/high-potential-opportunities/contact/success[/]*$',
QuerystringRedirectView.as_view(
pattern_name='fdi-opportunity-request-form-success'
)
),
] + redirects_for_retired_pages_that_must_come_before_tree_based_routing
redirects = [
url(
r'^international/eu-exit-news/contact[/]*$',
QuerystringRedirectView.as_view(pattern_name='brexit-international-contact-form'),
),
url(
r'^international/eu-exit-news/contact/success[/]*$',
QuerystringRedirectView.as_view(pattern_name='brexit-international-contact-form-success'),
),
url(
r'^international/brexit/contact[/]*$',
QuerystringRedirectView.as_view(pattern_name='brexit-international-contact-form'),
),
url(
r'^international/brexit/contact/success[/]*$',
QuerystringRedirectView.as_view(pattern_name='brexit-international-contact-form-success'),
),
]
| 36.761134 | 114 | 0.642841 | 971 | 9,080 | 5.930999 | 0.145211 | 0.204897 | 0.196388 | 0.125022 | 0.86265 | 0.833999 | 0.801875 | 0.718701 | 0.626324 | 0.571453 | 0 | 0 | 0.212335 | 9,080 | 246 | 115 | 36.910569 | 0.805229 | 0.055066 | 0 | 0.675676 | 0 | 0.108108 | 0.512255 | 0.502101 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.009009 | 0 | 0.009009 | 0 | 0 | 0 | 0 | null | 1 | 1 | 0 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
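The list names in the row above encode an ordering constraint that the inline comment spells out: everything in redirects_before_tree_based_routing_lookup must be registered ahead of the CMS catch-all route ('cms-page-from-path'), otherwise tree-based routing would serve the retired pages instead of redirecting them. A minimal wiring sketch, assuming a hypothetical CMSPageFromPathView and reusing the list names shown above (this is not the project's actual urls.py):

from django.conf.urls import url

urlpatterns = [
    # Redirects that must win over tree-based routing come first.
    *redirects_before_tree_based_routing_lookup,
    # The CMS catch-all that resolves pages by path; the view class name is an assumption.
    url(
        r'^international/content/(?P<path>[\w\-/]+)/$',
        CMSPageFromPathView.as_view(),
        name='cms-page-from-path',
    ),
    # Redirects for URLs that never reach tree-based routing can follow.
    *redirects,
]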
2b185278c3b3668fd6c3f2b79fc0fa26b0481e6b | 97 | py | Python | tests/unit/test_app.py | Adam-Grinbergs/Mining-Deal-League | 1dfabf2842def5c225d1b1435509848746f30b83 | [
"Apache-2.0"
] | null | null | null | tests/unit/test_app.py | Adam-Grinbergs/Mining-Deal-League | 1dfabf2842def5c225d1b1435509848746f30b83 | [
"Apache-2.0"
] | 1 | 2021-06-29T23:00:34.000Z | 2021-06-29T23:00:34.000Z | tests/unit/test_app.py | Adam-Grinbergs/Mining-Deal-League | 1dfabf2842def5c225d1b1435509848746f30b83 | [
"Apache-2.0"
] | null | null | null | import pytest
import app
def test_page_loads():
pass
def test_illegal_commands():
pass | 10.777778 | 28 | 0.731959 | 14 | 97 | 4.785714 | 0.714286 | 0.208955 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.206186 | 97 | 9 | 29 | 10.777778 | 0.87013 | 0 | 0 | 0.333333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | true | 0.333333 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 7 |
99454ce12467e396885b5b4c4ae21a52ea1535ef | 5,487 | py | Python | thenewboston_node/business_logic/tests/test_memory_blockchain/test_get_blocks.py | MLonTNB/thenewboston-node | 3fbd0fc36c4f0eabaa8267f2a0be2fd717f133d1 | [
"MIT"
] | null | null | null | thenewboston_node/business_logic/tests/test_memory_blockchain/test_get_blocks.py | MLonTNB/thenewboston-node | 3fbd0fc36c4f0eabaa8267f2a0be2fd717f133d1 | [
"MIT"
] | null | null | null | thenewboston_node/business_logic/tests/test_memory_blockchain/test_get_blocks.py | MLonTNB/thenewboston-node | 3fbd0fc36c4f0eabaa8267f2a0be2fd717f133d1 | [
"MIT"
] | null | null | null | import copy
from thenewboston_node.business_logic.blockchain.memory_blockchain import MemoryBlockchain
from thenewboston_node.business_logic.tests.factories import CoinTransferBlockFactory, CoinTransferBlockMessageFactory
def test_yield_blocks_till_snapshot(forced_memory_blockchain: MemoryBlockchain, blockchain_genesis_state):
forced_memory_blockchain.blocks = [
CoinTransferBlockFactory(message=CoinTransferBlockMessageFactory(block_number=x,
block_identifier=str(x))) # type: ignore
for x in range(9)
]
account_root_files = forced_memory_blockchain.blockchain_states
assert len(account_root_files) == 1
assert forced_memory_blockchain.get_closest_blockchain_state_snapshot() == blockchain_genesis_state
assert forced_memory_blockchain.get_closest_blockchain_state_snapshot(-1) == blockchain_genesis_state
assert [block.message.block_number for block in forced_memory_blockchain.yield_blocks_till_snapshot()
] == [8, 7, 6, 5, 4, 3, 2, 1, 0]
assert [block.message.block_number for block in forced_memory_blockchain.yield_blocks_till_snapshot(5)
] == [5, 4, 3, 2, 1, 0]
assert [block.message.block_number for block in forced_memory_blockchain.yield_blocks_till_snapshot(1)] == [1, 0]
assert [block.message.block_number for block in forced_memory_blockchain.yield_blocks_till_snapshot(0)] == [0]
account_root_file1 = copy.deepcopy(blockchain_genesis_state)
account_root_file1.last_block_number = 3
forced_memory_blockchain.blockchain_states.append(account_root_file1)
assert len(account_root_files) == 2
assert forced_memory_blockchain.get_closest_blockchain_state_snapshot() == account_root_file1
assert forced_memory_blockchain.get_closest_blockchain_state_snapshot(-1) == blockchain_genesis_state
assert forced_memory_blockchain.get_closest_blockchain_state_snapshot(0) == blockchain_genesis_state
assert forced_memory_blockchain.get_closest_blockchain_state_snapshot(1) == blockchain_genesis_state
assert forced_memory_blockchain.get_closest_blockchain_state_snapshot(2) == blockchain_genesis_state
assert forced_memory_blockchain.get_closest_blockchain_state_snapshot(3) == blockchain_genesis_state
assert forced_memory_blockchain.get_closest_blockchain_state_snapshot(4) == account_root_file1
assert [block.message.block_number for block in forced_memory_blockchain.yield_blocks_till_snapshot()
] == [8, 7, 6, 5, 4]
assert [block.message.block_number for block in forced_memory_blockchain.yield_blocks_till_snapshot(5)] == [5, 4]
assert [block.message.block_number for block in forced_memory_blockchain.yield_blocks_till_snapshot(3)] == []
assert [block.message.block_number for block in forced_memory_blockchain.yield_blocks_till_snapshot(2)
] == [2, 1, 0]
assert [block.message.block_number for block in forced_memory_blockchain.yield_blocks_till_snapshot(1)] == [1, 0]
assert [block.message.block_number for block in forced_memory_blockchain.yield_blocks_till_snapshot(0)] == [0]
account_root_file2 = copy.deepcopy(blockchain_genesis_state)
account_root_file2.last_block_number = 5
forced_memory_blockchain.blockchain_states.append(account_root_file2)
assert len(account_root_files) == 3
assert forced_memory_blockchain.get_closest_blockchain_state_snapshot() == account_root_file2
assert forced_memory_blockchain.get_closest_blockchain_state_snapshot(-1) == blockchain_genesis_state
assert forced_memory_blockchain.get_closest_blockchain_state_snapshot(0) == blockchain_genesis_state
assert forced_memory_blockchain.get_closest_blockchain_state_snapshot(1) == blockchain_genesis_state
assert forced_memory_blockchain.get_closest_blockchain_state_snapshot(2) == blockchain_genesis_state
assert forced_memory_blockchain.get_closest_blockchain_state_snapshot(3) == blockchain_genesis_state
assert forced_memory_blockchain.get_closest_blockchain_state_snapshot(4) == account_root_file1
assert forced_memory_blockchain.get_closest_blockchain_state_snapshot(5) == account_root_file1
assert forced_memory_blockchain.get_closest_blockchain_state_snapshot(6) == account_root_file2
assert [block.message.block_number for block in forced_memory_blockchain.yield_blocks_till_snapshot()] == [8, 7, 6]
assert [block.message.block_number for block in forced_memory_blockchain.yield_blocks_till_snapshot(8)
] == [8, 7, 6]
assert [block.message.block_number for block in forced_memory_blockchain.yield_blocks_till_snapshot(7)] == [7, 6]
assert [block.message.block_number for block in forced_memory_blockchain.yield_blocks_till_snapshot(6)] == [6]
assert [block.message.block_number for block in forced_memory_blockchain.yield_blocks_till_snapshot(5)] == []
assert [block.message.block_number for block in forced_memory_blockchain.yield_blocks_till_snapshot(4)] == [4]
assert [block.message.block_number for block in forced_memory_blockchain.yield_blocks_till_snapshot(3)] == []
assert [block.message.block_number for block in forced_memory_blockchain.yield_blocks_till_snapshot(2)
] == [2, 1, 0]
assert [block.message.block_number for block in forced_memory_blockchain.yield_blocks_till_snapshot(1)] == [1, 0]
assert [block.message.block_number for block in forced_memory_blockchain.yield_blocks_till_snapshot(0)] == [0]
| 74.148649 | 119 | 0.798797 | 725 | 5,487 | 5.606897 | 0.078621 | 0.173186 | 0.232718 | 0.118819 | 0.883641 | 0.839606 | 0.839606 | 0.817466 | 0.790406 | 0.7754 | 0 | 0.020251 | 0.127028 | 5,487 | 73 | 120 | 75.164384 | 0.828392 | 0.002187 | 0 | 0.428571 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.650794 | 1 | 0.015873 | false | 0 | 0.047619 | 0 | 0.063492 | 0 | 0 | 0 | 0 | null | 0 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
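The assertions above pin down two behaviours: get_closest_blockchain_state_snapshot(n) returns the most recent snapshot whose last_block_number is strictly below n (falling back to the genesis state), and yield_blocks_till_snapshot(n) walks blocks backwards from n until it reaches the last block already covered by that snapshot. A simplified behavioural model of those rules, not thenewboston_node's real implementation or signatures:

def closest_snapshot(snapshots, before_block_number=None):
    # snapshots are ordered oldest-first; snapshots[0] is the genesis state.
    if before_block_number is None:
        return snapshots[-1]
    for snapshot in reversed(snapshots[1:]):
        if snapshot.last_block_number < before_block_number:
            return snapshot
    return snapshots[0]

def blocks_till_snapshot(blocks, snapshots, from_block_number=None):
    # Yield blocks from from_block_number down to the closest snapshot, newest first.
    if from_block_number is None:
        from_block_number = blocks[-1].message.block_number
    snapshot = closest_snapshot(snapshots, from_block_number + 1)
    stop = -1 if snapshot is snapshots[0] else snapshot.last_block_number
    for block in reversed(blocks):
        number = block.message.block_number
        if number > from_block_number:
            continue
        if number <= stop:
            break
        yield block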
99490250b6c0d263a32ac1ecaf4792f706bb876d | 221 | py | Python | motionshader/__init__.py | hokieg3n1us/motionshader | ad2106323d01af41dadc7cca98639e4e5b7e320a | [
"MIT"
] | 3 | 2021-08-09T13:18:31.000Z | 2021-11-08T10:45:02.000Z | motionshader/__init__.py | hokieg3n1us/motionshader | ad2106323d01af41dadc7cca98639e4e5b7e320a | [
"MIT"
] | null | null | null | motionshader/__init__.py | hokieg3n1us/motionshader | ad2106323d01af41dadc7cca98639e4e5b7e320a | [
"MIT"
] | null | null | null | from .core import GeospatialViewport
from .core import TemporalPlayback
from .core import Basemap
from .core import Dataset
from .core import MotionVideo
from .core import FrameAnnotation
from .core import FrameWatermark
| 27.625 | 36 | 0.841629 | 28 | 221 | 6.642857 | 0.357143 | 0.301075 | 0.526882 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.126697 | 221 | 7 | 37 | 31.571429 | 0.963731 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
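This __init__ simply flattens the package namespace: every class lives in motionshader.core, and re-exporting it here lets callers import from the package root. A hypothetical consumer import (constructor arguments are defined in motionshader.core, not in this file, so they are not shown):

from motionshader import Basemap, Dataset, GeospatialViewport, MotionVideo, TemporalPlayback

An explicit __all__ list is a common companion to this pattern when the re-exported names should also define the package's wildcard-import surface.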
9958b187c6d631fd3957221f2ffa72c85b1f5db8 | 4,282 | py | Python | bind/pyevt/pyevt/ecc.py | harrywong/evt | 95985384619e0f5ff4021e8838d421ac4b4b946d | [
"BSD-3-Clause"
] | 1,411 | 2018-04-23T03:57:30.000Z | 2022-02-13T10:34:22.000Z | bind/pyevt/pyevt/ecc.py | Zhang-Zexi/evt | e90fe4dbab4b9512d120c79f33ecc62791e088bd | [
"Apache-2.0"
] | 27 | 2018-06-11T10:34:42.000Z | 2019-07-27T08:50:02.000Z | bind/pyevt/pyevt/ecc.py | Zhang-Zexi/evt | e90fe4dbab4b9512d120c79f33ecc62791e088bd | [
"Apache-2.0"
] | 364 | 2018-06-09T12:11:53.000Z | 2020-12-15T03:26:48.000Z | from . import evt_exception, libevt
from .evt_data import EvtData
class PublicKey(EvtData):
def __init__(self, data):
super().__init__(data)
def __str__(self):
return self.to_string()
def to_string(self):
str_c = self.evt.ffi.new('char**')
ret = self.evt.lib.evt_public_key_string(self.data, str_c)
evt_exception.evt_exception_raiser(ret)
str = self.evt.ffi.string(str_c[0]).decode('utf-8')
ret = self.evt.lib.evt_free(str_c[0])
evt_exception.evt_exception_raiser(ret)
return str
@staticmethod
def from_string(str):
evt = libevt.check_lib_init()
public_key_c = evt.ffi.new('evt_public_key_t**')
str_c = bytes(str, encoding='utf-8')
ret = evt.lib.evt_public_key_from_string(str_c, public_key_c)
evt_exception.evt_exception_raiser(ret)
return PublicKey(public_key_c[0])
@staticmethod
def recover(sign, hash):
evt = libevt.check_lib_init()
public_key_c = evt.ffi.new('evt_public_key_t**')
ret = evt.lib.evt_recover(sign.data, hash.data, public_key_c)
evt_exception.evt_exception_raiser(ret)
return PublicKey(public_key_c[0])
class PrivateKey(EvtData):
def __init__(self, data):
super().__init__(data)
def __str__(self):
return self.to_string()
def to_string(self):
str_c = self.evt.ffi.new('char**')
ret = self.evt.lib.evt_private_key_string(self.data, str_c)
evt_exception.evt_exception_raiser(ret)
str = self.evt.ffi.string(str_c[0]).decode('utf-8')
ret = self.evt.lib.evt_free(str_c[0])
evt_exception.evt_exception_raiser(ret)
return str
def get_public_key(self):
public_key_c = self.evt.ffi.new('evt_public_key_t**')
ret = self.evt.lib.evt_get_public_key(self.data, public_key_c)
evt_exception.evt_exception_raiser(ret)
return PublicKey(public_key_c[0])
def sign_hash(self, hash):
signature_c = self.evt.ffi.new('evt_signature_t**')
ret = self.evt.lib.evt_sign_hash(
self.data, hash.data, signature_c)
evt_exception.evt_exception_raiser(ret)
return Signature(signature_c[0])
@staticmethod
def from_string(str):
evt = libevt.check_lib_init()
private_key_c = evt.ffi.new('evt_private_key_t**')
str_c = bytes(str, encoding='utf-8')
ret = evt.lib.evt_private_key_from_string(
str_c, private_key_c)
evt_exception.evt_exception_raiser(ret)
return PrivateKey(private_key_c[0])
class Signature(EvtData):
def __init__(self, data):
super().__init__(data)
def __str__(self):
return self.to_string()
def to_string(self):
str_c = self.evt.ffi.new('char**')
ret = self.evt.lib.evt_signature_string(self.data, str_c)
evt_exception.evt_exception_raiser(ret)
str = self.evt.ffi.string(str_c[0]).decode('utf-8')
ret = self.evt.lib.evt_free(str_c[0])
evt_exception.evt_exception_raiser(ret)
return str
class Checksum(EvtData):
def __init__(self, data):
super().__init__(data)
def __str__(self):
return self.to_string()
def to_string(self):
str_c = self.evt.ffi.new('char**')
ret = self.evt.lib.evt_checksum_string(self.data, str_c)
evt_exception.evt_exception_raiser(ret)
str = self.evt.ffi.string(str_c[0]).decode('utf-8')
ret = self.evt.lib.evt_free(str_c[0])
evt_exception.evt_exception_raiser(ret)
return str
@staticmethod
def from_string(str):
evt = libevt.check_lib_init()
str_c = bytes(str, encoding='utf-8')
evt_hash = evt.ffi.new('evt_checksum_t**')
ret = evt.lib.evt_hash(str_c, len(str_c), evt_hash)
evt_exception.evt_exception_raiser(ret)
return Checksum(evt_hash[0])
def generate_new_pair():
evt = libevt.check_lib_init()
public_key_c = evt.ffi.new('evt_public_key_t**')
private_key_c = evt.ffi.new('evt_private_key_t**')
ret = evt.lib.evt_generate_new_pair(public_key_c, private_key_c)
evt_exception.evt_exception_raiser(ret)
return PublicKey(public_key_c[0]), PrivateKey(private_key_c[0])
| 33.193798 | 70 | 0.656469 | 638 | 4,282 | 4.029781 | 0.073668 | 0.144691 | 0.052509 | 0.140023 | 0.838584 | 0.793466 | 0.765072 | 0.740568 | 0.714119 | 0.714119 | 0 | 0.006897 | 0.221158 | 4,282 | 128 | 71 | 33.453125 | 0.764018 | 0 | 0 | 0.711538 | 0 | 0 | 0.047174 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.182692 | false | 0 | 0.019231 | 0.038462 | 0.384615 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
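The wrappers above cover the full signing round trip exposed by libevt: hash a payload, sign the hash with a private key, then recover the signing public key from the signature. A hypothetical usage sketch, assuming the native libevt library is available (the module resolves it through libevt.check_lib_init()):

public_key, private_key = generate_new_pair()
digest = Checksum.from_string('payload to sign')
signature = private_key.sign_hash(digest)
recovered = PublicKey.recover(signature, digest)
assert recovered.to_string() == public_key.to_string()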
511176f2e6ee7ffddc5cd6989b025177c96fe2c1 | 222 | py | Python | code/1008.py | minssoj/Learning_Algorithm_Up | 45ec4e2eb4c07c9ec907a74dbd31370e1645c50b | [
"MIT"
] | null | null | null | code/1008.py | minssoj/Learning_Algorithm_Up | 45ec4e2eb4c07c9ec907a74dbd31370e1645c50b | [
"MIT"
] | null | null | null | code/1008.py | minssoj/Learning_Algorithm_Up | 45ec4e2eb4c07c9ec907a74dbd31370e1645c50b | [
"MIT"
] | null | null | null | # [Basics - Output] Print 08 (explanation)
# minso.jeong@daum.net
'''
Problem link: https://www.codeup.kr/problem.php?id=1008
* Reference blog post on Unicode
https://green-late7.tistory.com/67
'''
print('\u250C\u252C\u2510\n\u251C\u253C\u2524\n\u2514\u2534\u2518')
| 22.2 | 67 | 0.698198 | 39 | 222 | 3.974359 | 0.948718 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.20098 | 0.081081 | 222 | 9 | 68 | 24.666667 | 0.558824 | 0.635135 | 0 | 0 | 0 | 1 | 0.805556 | 0.805556 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 7 |
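The escape sequences in the print call are the Unicode box-drawing characters U+250C through U+2518, so the program draws a small 2x2 grid outline. The same output written with literal characters:

print('┌┬┐')
print('├┼┤')
print('└┴┘')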
5128400f4b614bb86938210a47a1c3d01950befa | 430 | py | Python | machine-learning/template/utils.py | olaals/masteroppgave2 | 9fc181325b6e3ef74d81cdb323d3e47a79bb889e | [
"MIT"
] | null | null | null | machine-learning/template/utils.py | olaals/masteroppgave2 | 9fc181325b6e3ef74d81cdb323d3e47a79bb889e | [
"MIT"
] | null | null | null | machine-learning/template/utils.py | olaals/masteroppgave2 | 9fc181325b6e3ef74d81cdb323d3e47a79bb889e | [
"MIT"
] | 2 | 2021-09-17T12:26:04.000Z | 2021-09-27T12:59:55.000Z |
import numpy as np


# Convert a batched CHW tensor (for example a torch tensor) to an HWC numpy image.
def get_tensor_as_image(tensor):
tensor = tensor[0]
np_img = tensor.cpu().data.numpy()
np_img = np.reshape(np_img, (3,150,-1))
np_img = np.moveaxis(np_img,0,2)
return np_img
def get_tensor_as_image_grayscale(tensor):
tensor = tensor[0]
np_img = tensor.cpu().data.numpy()
np_img = np.reshape(np_img, (150,-1))
#np_img = np.moveaxis(np_img,0,2)
return np.dstack((np_img, np_img, np_img))
| 25.294118 | 46 | 0.65814 | 76 | 430 | 3.447368 | 0.263158 | 0.267176 | 0.160305 | 0.10687 | 0.870229 | 0.725191 | 0.725191 | 0.725191 | 0.725191 | 0.725191 | 0 | 0.042857 | 0.186047 | 430 | 16 | 47 | 26.875 | 0.705714 | 0.074419 | 0 | 0.363636 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.181818 | false | 0 | 0 | 0 | 0.363636 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
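A usage sketch for the helpers above, assuming the input is a batched tensor with three channels and a height of 150 (torch itself is an inference from tensor.cpu().data.numpy(), not an import in the original file):

import torch

batch = torch.rand(1, 3, 150, 200)    # (N, C, H, W)
img = get_tensor_as_image(batch)      # numpy array of shape (150, 200, 3)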