"""Python wrappers around TensorFlow ops.

This file is MACHINE GENERATED! Do not edit.
Original C++ source file: io_ops.cc
"""
import collections as _collections
import six as _six
from tensorflow.python import pywrap_tensorflow as _pywrap_tensorflow
from tensorflow.python.eager import context as _context
from tensorflow.python.eager import core as _core
from tensorflow.python.eager import execute as _execute
from tensorflow.python.framework import dtypes as _dtypes
from tensorflow.python.framework import errors as _errors
from tensorflow.python.framework import tensor_shape as _tensor_shape
from tensorflow.core.framework import op_def_pb2 as _op_def_pb2
# Needed to trigger the call to _set_call_cpp_shape_fn.
from tensorflow.python.framework import common_shapes as _common_shapes
from tensorflow.python.framework import op_def_registry as _op_def_registry
from tensorflow.python.framework import ops as _ops
from tensorflow.python.framework import op_def_library as _op_def_library
from tensorflow.python.util.deprecation import deprecated_endpoints
from tensorflow.python.util import dispatch as _dispatch
from tensorflow.python.util.tf_export import tf_export
def fixed_length_record_reader(record_bytes, header_bytes=0, footer_bytes=0, hop_bytes=0, container="", shared_name="", name=None):
r"""A Reader that outputs fixed-length records from a file.
Args:
record_bytes: An `int`. Number of bytes in the record.
header_bytes: An optional `int`. Defaults to `0`.
Number of bytes in the header, defaults to 0.
footer_bytes: An optional `int`. Defaults to `0`.
Number of bytes in the footer, defaults to 0.
hop_bytes: An optional `int`. Defaults to `0`.
Number of bytes to hop before each read. Default of 0 means using
record_bytes.
container: An optional `string`. Defaults to `""`.
If non-empty, this reader is placed in the given container.
Otherwise, a default container is used.
shared_name: An optional `string`. Defaults to `""`.
If non-empty, this reader is named in the given bucket
with this shared_name. Otherwise, the node name is used instead.
name: A name for the operation (optional).
Returns:
A `Tensor` of type mutable `string`.
"""
_ctx = _context._context
if _ctx is not None and _ctx._eager_context.is_eager:
raise RuntimeError("fixed_length_record_reader op does not support eager execution. Arg 'reader_handle' is a ref.")
# Add nodes to the TensorFlow graph.
record_bytes = _execute.make_int(record_bytes, "record_bytes")
if header_bytes is None:
header_bytes = 0
header_bytes = _execute.make_int(header_bytes, "header_bytes")
if footer_bytes is None:
footer_bytes = 0
footer_bytes = _execute.make_int(footer_bytes, "footer_bytes")
if hop_bytes is None:
hop_bytes = 0
hop_bytes = _execute.make_int(hop_bytes, "hop_bytes")
if container is None:
container = ""
container = _execute.make_str(container, "container")
if shared_name is None:
shared_name = ""
shared_name = _execute.make_str(shared_name, "shared_name")
_, _, _op = _op_def_lib._apply_op_helper(
"FixedLengthRecordReader", record_bytes=record_bytes,
header_bytes=header_bytes,
footer_bytes=footer_bytes,
hop_bytes=hop_bytes, container=container,
shared_name=shared_name, name=name)
_result = _op.outputs[:]
_inputs_flat = _op.inputs
_attrs = ("header_bytes", _op.get_attr("header_bytes"), "record_bytes",
_op.get_attr("record_bytes"), "footer_bytes",
_op.get_attr("footer_bytes"), "hop_bytes",
_op.get_attr("hop_bytes"), "container", _op.get_attr("container"),
"shared_name", _op.get_attr("shared_name"))
_execute.record_gradient(
"FixedLengthRecordReader", _inputs_flat, _attrs, _result, name)
_result, = _result
return _result
def fixed_length_record_reader_eager_fallback(record_bytes, header_bytes=0, footer_bytes=0, hop_bytes=0, container="", shared_name="", name=None, ctx=None):
raise RuntimeError("fixed_length_record_reader op does not support eager execution. Arg 'reader_handle' is a ref.")
def fixed_length_record_reader_v2(record_bytes, header_bytes=0, footer_bytes=0, hop_bytes=0, container="", shared_name="", encoding="", name=None):
r"""A Reader that outputs fixed-length records from a file.
Args:
record_bytes: An `int`. Number of bytes in the record.
header_bytes: An optional `int`. Defaults to `0`.
Number of bytes in the header, defaults to 0.
footer_bytes: An optional `int`. Defaults to `0`.
Number of bytes in the footer, defaults to 0.
hop_bytes: An optional `int`. Defaults to `0`.
Number of bytes to hop before each read. Default of 0 means using
record_bytes.
container: An optional `string`. Defaults to `""`.
If non-empty, this reader is placed in the given container.
Otherwise, a default container is used.
shared_name: An optional `string`. Defaults to `""`.
If non-empty, this reader is named in the given bucket
with this shared_name. Otherwise, the node name is used instead.
encoding: An optional `string`. Defaults to `""`.
The type of encoding for the file. Currently ZLIB and GZIP
are supported. Defaults to none.
name: A name for the operation (optional).
Returns:
A `Tensor` of type `resource`.
"""
_ctx = _context._context
if _ctx is not None and _ctx._eager_context.is_eager:
try:
_result = _pywrap_tensorflow.TFE_Py_FastPathExecute(
_ctx._context_handle, _ctx._eager_context.device_name,
"FixedLengthRecordReaderV2", name, _ctx._post_execution_callbacks,
"header_bytes", header_bytes, "record_bytes", record_bytes,
"footer_bytes", footer_bytes, "hop_bytes", hop_bytes, "container",
container, "shared_name", shared_name, "encoding", encoding)
return _result
except _core._FallbackException:
try:
return fixed_length_record_reader_v2_eager_fallback(
header_bytes=header_bytes, record_bytes=record_bytes,
footer_bytes=footer_bytes, hop_bytes=hop_bytes,
container=container, shared_name=shared_name, encoding=encoding,
name=name, ctx=_ctx)
except _core._SymbolicException:
pass # Add nodes to the TensorFlow graph.
except _core._NotOkStatusException as e:
if name is not None:
message = e.message + " name: " + name
else:
message = e.message
_six.raise_from(_core._status_to_exception(e.code, message), None)
# Add nodes to the TensorFlow graph.
record_bytes = _execute.make_int(record_bytes, "record_bytes")
if header_bytes is None:
header_bytes = 0
header_bytes = _execute.make_int(header_bytes, "header_bytes")
if footer_bytes is None:
footer_bytes = 0
footer_bytes = _execute.make_int(footer_bytes, "footer_bytes")
if hop_bytes is None:
hop_bytes = 0
hop_bytes = _execute.make_int(hop_bytes, "hop_bytes")
if container is None:
container = ""
container = _execute.make_str(container, "container")
if shared_name is None:
shared_name = ""
shared_name = _execute.make_str(shared_name, "shared_name")
if encoding is None:
encoding = ""
encoding = _execute.make_str(encoding, "encoding")
_, _, _op = _op_def_lib._apply_op_helper(
"FixedLengthRecordReaderV2", record_bytes=record_bytes,
header_bytes=header_bytes,
footer_bytes=footer_bytes,
hop_bytes=hop_bytes, container=container,
shared_name=shared_name,
encoding=encoding, name=name)
_result = _op.outputs[:]
_inputs_flat = _op.inputs
_attrs = ("header_bytes", _op.get_attr("header_bytes"), "record_bytes",
_op.get_attr("record_bytes"), "footer_bytes",
_op.get_attr("footer_bytes"), "hop_bytes",
_op.get_attr("hop_bytes"), "container", _op.get_attr("container"),
"shared_name", _op.get_attr("shared_name"), "encoding",
_op.get_attr("encoding"))
_execute.record_gradient(
"FixedLengthRecordReaderV2", _inputs_flat, _attrs, _result, name)
_result, = _result
return _result
def fixed_length_record_reader_v2_eager_fallback(record_bytes, header_bytes=0, footer_bytes=0, hop_bytes=0, container="", shared_name="", encoding="", name=None, ctx=None):
r"""This is the slowpath function for Eager mode.
This is for function fixed_length_record_reader_v2
"""
_ctx = ctx if ctx else _context.context()
record_bytes = _execute.make_int(record_bytes, "record_bytes")
if header_bytes is None:
header_bytes = 0
header_bytes = _execute.make_int(header_bytes, "header_bytes")
if footer_bytes is None:
footer_bytes = 0
footer_bytes = _execute.make_int(footer_bytes, "footer_bytes")
if hop_bytes is None:
hop_bytes = 0
hop_bytes = _execute.make_int(hop_bytes, "hop_bytes")
if container is None:
container = ""
container = _execute.make_str(container, "container")
if shared_name is None:
shared_name = ""
shared_name = _execute.make_str(shared_name, "shared_name")
if encoding is None:
encoding = ""
encoding = _execute.make_str(encoding, "encoding")
_inputs_flat = []
_attrs = ("header_bytes", header_bytes, "record_bytes", record_bytes,
"footer_bytes", footer_bytes, "hop_bytes", hop_bytes, "container",
container, "shared_name", shared_name, "encoding", encoding)
_result = _execute.execute(b"FixedLengthRecordReaderV2", 1,
inputs=_inputs_flat, attrs=_attrs, ctx=_ctx,
name=name)
_execute.record_gradient(
"FixedLengthRecordReaderV2", _inputs_flat, _attrs, _result, name)
_result, = _result
return _result
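# Example usage (sketch). The V2 variant returns a `resource` handle instead of
# a ref tensor, so it can also be created under eager execution; for new input
# pipelines, `tf.data.FixedLengthRecordDataset` covers the same use case (file
# name and byte counts below are illustrative assumptions):
#
#   dataset = tf.data.FixedLengthRecordDataset(
#       ["data.bin"], record_bytes=16, header_bytes=4)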
def identity_reader(container="", shared_name="", name=None):
r"""A Reader that outputs the queued work as both the key and value.
To use, enqueue strings in a Queue. ReaderRead will take the front
work string and output (work, work).
Args:
container: An optional `string`. Defaults to `""`.
If non-empty, this reader is placed in the given container.
Otherwise, a default container is used.
shared_name: An optional `string`. Defaults to `""`.
If non-empty, this reader is named in the given bucket
with this shared_name. Otherwise, the node name is used instead.
name: A name for the operation (optional).
Returns:
A `Tensor` of type mutable `string`.
"""
_ctx = _context._context
if _ctx is not None and _ctx._eager_context.is_eager:
raise RuntimeError("identity_reader op does not support eager execution. Arg 'reader_handle' is a ref.")
# Add nodes to the TensorFlow graph.
if container is None:
container = ""
container = _execute.make_str(container, "container")
if shared_name is None:
shared_name = ""
shared_name = _execute.make_str(shared_name, "shared_name")
_, _, _op = _op_def_lib._apply_op_helper(
"IdentityReader", container=container, shared_name=shared_name,
name=name)
_result = _op.outputs[:]
_inputs_flat = _op.inputs
_attrs = ("container", _op.get_attr("container"), "shared_name",
_op.get_attr("shared_name"))
_execute.record_gradient(
"IdentityReader", _inputs_flat, _attrs, _result, name)
_result, = _result
return _result
def identity_reader_eager_fallback(container="", shared_name="", name=None, ctx=None):
raise RuntimeError("identity_reader op does not support eager execution. Arg 'reader_handle' is a ref.")
def identity_reader_v2(container="", shared_name="", name=None):
r"""A Reader that outputs the queued work as both the key and value.
To use, enqueue strings in a Queue. ReaderRead will take the front
work string and output (work, work).
Args:
container: An optional `string`. Defaults to `""`.
If non-empty, this reader is placed in the given container.
Otherwise, a default container is used.
shared_name: An optional `string`. Defaults to `""`.
If non-empty, this reader is named in the given bucket
with this shared_name. Otherwise, the node name is used instead.
name: A name for the operation (optional).
Returns:
A `Tensor` of type `resource`.
"""
_ctx = _context._context
if _ctx is not None and _ctx._eager_context.is_eager:
try:
_result = _pywrap_tensorflow.TFE_Py_FastPathExecute(
_ctx._context_handle, _ctx._eager_context.device_name,
"IdentityReaderV2", name, _ctx._post_execution_callbacks, "container",
container, "shared_name", shared_name)
return _result
except _core._FallbackException:
try:
return identity_reader_v2_eager_fallback(
container=container, shared_name=shared_name, name=name, ctx=_ctx)
except _core._SymbolicException:
pass # Add nodes to the TensorFlow graph.
except _core._NotOkStatusException as e:
if name is not None:
message = e.message + " name: " + name
else:
message = e.message
_six.raise_from(_core._status_to_exception(e.code, message), None)
# Add nodes to the TensorFlow graph.
if container is None:
container = ""
container = _execute.make_str(container, "container")
if shared_name is None:
shared_name = ""
shared_name = _execute.make_str(shared_name, "shared_name")
_, _, _op = _op_def_lib._apply_op_helper(
"IdentityReaderV2", container=container, shared_name=shared_name,
name=name)
_result = _op.outputs[:]
_inputs_flat = _op.inputs
_attrs = ("container", _op.get_attr("container"), "shared_name",
_op.get_attr("shared_name"))
_execute.record_gradient(
"IdentityReaderV2", _inputs_flat, _attrs, _result, name)
_result, = _result
return _result
def identity_reader_v2_eager_fallback(container="", shared_name="", name=None, ctx=None):
r"""This is the slowpath function for Eager mode.
This is for function identity_reader_v2
"""
_ctx = ctx if ctx else _context.context()
if container is None:
container = ""
container = _execute.make_str(container, "container")
if shared_name is None:
shared_name = ""
shared_name = _execute.make_str(shared_name, "shared_name")
_inputs_flat = []
_attrs = ("container", container, "shared_name", shared_name)
_result = _execute.execute(b"IdentityReaderV2", 1, inputs=_inputs_flat,
attrs=_attrs, ctx=_ctx, name=name)
_execute.record_gradient(
"IdentityReaderV2", _inputs_flat, _attrs, _result, name)
_result, = _result
return _result
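# Example usage (graph-mode sketch; the queued strings are illustrative):
#
#   reader = tf.IdentityReader()
#   queue = tf.train.string_input_producer(["hello", "world"])
#   key, value = reader.read(queue)  # key == value == the dequeued string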
def lmdb_reader(container="", shared_name="", name=None):
r"""A Reader that outputs the records from a LMDB file.
Args:
container: An optional `string`. Defaults to `""`.
If non-empty, this reader is placed in the given container.
Otherwise, a default container is used.
shared_name: An optional `string`. Defaults to `""`.
If non-empty, this reader is named in the given bucket
with this shared_name. Otherwise, the node name is used instead.
name: A name for the operation (optional).
Returns:
A `Tensor` of type mutable `string`.
"""
_ctx = _context._context
if _ctx is not None and _ctx._eager_context.is_eager:
raise RuntimeError("lmdb_reader op does not support eager execution. Arg 'reader_handle' is a ref.")
# Add nodes to the TensorFlow graph.
if container is None:
container = ""
container = _execute.make_str(container, "container")
if shared_name is None:
shared_name = ""
shared_name = _execute.make_str(shared_name, "shared_name")
_, _, _op = _op_def_lib._apply_op_helper(
"LMDBReader", container=container, shared_name=shared_name, name=name)
_result = _op.outputs[:]
_inputs_flat = _op.inputs
_attrs = ("container", _op.get_attr("container"), "shared_name",
_op.get_attr("shared_name"))
_execute.record_gradient(
"LMDBReader", _inputs_flat, _attrs, _result, name)
_result, = _result
return _result
def lmdb_reader_eager_fallback(container="", shared_name="", name=None, ctx=None):
raise RuntimeError("lmdb_reader op does not support eager execution. Arg 'reader_handle' is a ref.")
@_dispatch.add_dispatch_list
@tf_export('io.matching_files', v1=['io.matching_files', 'matching_files'])
@deprecated_endpoints('matching_files')
def matching_files(pattern, name=None):
r"""Returns the set of files matching one or more glob patterns.
Note that this routine only supports wildcard characters in the
basename portion of the pattern, not in the directory portion.
Note also that the order of filenames returned can be non-deterministic.
Args:
pattern: A `Tensor` of type `string`.
Shell wildcard pattern(s). Scalar or vector of type string.
name: A name for the operation (optional).
Returns:
A `Tensor` of type `string`.
"""
_ctx = _context._context
if _ctx is not None and _ctx._eager_context.is_eager:
try:
_result = _pywrap_tensorflow.TFE_Py_FastPathExecute(
_ctx._context_handle, _ctx._eager_context.device_name,
"MatchingFiles", name, _ctx._post_execution_callbacks, pattern)
return _result
except _core._FallbackException:
try:
return matching_files_eager_fallback(
pattern, name=name, ctx=_ctx)
except _core._SymbolicException:
pass # Add nodes to the TensorFlow graph.
except (TypeError, ValueError):
result = _dispatch.dispatch(
matching_files, pattern=pattern, name=name)
if result is not _dispatch.OpDispatcher.NOT_SUPPORTED:
return result
raise
except _core._NotOkStatusException as e:
if name is not None:
message = e.message + " name: " + name
else:
message = e.message
_six.raise_from(_core._status_to_exception(e.code, message), None)
# Add nodes to the TensorFlow graph.
try:
_, _, _op = _op_def_lib._apply_op_helper(
"MatchingFiles", pattern=pattern, name=name)
except (TypeError, ValueError):
result = _dispatch.dispatch(
matching_files, pattern=pattern, name=name)
if result is not _dispatch.OpDispatcher.NOT_SUPPORTED:
return result
raise
_result = _op.outputs[:]
_inputs_flat = _op.inputs
_attrs = None
_execute.record_gradient(
"MatchingFiles", _inputs_flat, _attrs, _result, name)
_result, = _result
return _result
def matching_files_eager_fallback(pattern, name=None, ctx=None):
r"""This is the slowpath function for Eager mode.
This is for function matching_files
"""
_ctx = ctx if ctx else _context.context()
pattern = _ops.convert_to_tensor(pattern, _dtypes.string)
_inputs_flat = [pattern]
_attrs = None
_result = _execute.execute(b"MatchingFiles", 1, inputs=_inputs_flat,
attrs=_attrs, ctx=_ctx, name=name)
_execute.record_gradient(
"MatchingFiles", _inputs_flat, _attrs, _result, name)
_result, = _result
return _result
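# Example usage (sketch; the glob pattern is an illustrative assumption). Note
# that the returned order is non-deterministic, per the docstring above:
#
#   files = tf.io.matching_files("/tmp/data/part-*")  # 1-D string Tensor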
def merge_v2_checkpoints(checkpoint_prefixes, destination_prefix, delete_old_dirs=True, name=None):
r"""V2 format specific: merges the metadata files of sharded checkpoints. The
result is one logical checkpoint, with one physical metadata file and renamed
data files.
Intended for "grouping" multiple checkpoints in a sharded checkpoint setup.
If delete_old_dirs is true, attempts to recursively delete the dirname of each
path in the input checkpoint_prefixes. This is useful when those paths are
non-user-facing temporary locations.
Args:
checkpoint_prefixes: A `Tensor` of type `string`.
prefixes of V2 checkpoints to merge.
destination_prefix: A `Tensor` of type `string`.
scalar. The desired final prefix. Allowed to be the same
as one of the checkpoint_prefixes.
delete_old_dirs: An optional `bool`. Defaults to `True`. see above.
name: A name for the operation (optional).
Returns:
The created Operation.
"""
_ctx = _context._context
if _ctx is not None and _ctx._eager_context.is_eager:
try:
_result = _pywrap_tensorflow.TFE_Py_FastPathExecute(
_ctx._context_handle, _ctx._eager_context.device_name,
"MergeV2Checkpoints", name, _ctx._post_execution_callbacks,
checkpoint_prefixes, destination_prefix, "delete_old_dirs",
delete_old_dirs)
return _result
except _core._FallbackException:
try:
return merge_v2_checkpoints_eager_fallback(
checkpoint_prefixes, destination_prefix,
delete_old_dirs=delete_old_dirs, name=name, ctx=_ctx)
except _core._SymbolicException:
pass # Add nodes to the TensorFlow graph.
except _core._NotOkStatusException as e:
if name is not None:
message = e.message + " name: " + name
else:
message = e.message
_six.raise_from(_core._status_to_exception(e.code, message), None)
# Add nodes to the TensorFlow graph.
if delete_old_dirs is None:
delete_old_dirs = True
delete_old_dirs = _execute.make_bool(delete_old_dirs, "delete_old_dirs")
_, _, _op = _op_def_lib._apply_op_helper(
"MergeV2Checkpoints", checkpoint_prefixes=checkpoint_prefixes,
destination_prefix=destination_prefix,
delete_old_dirs=delete_old_dirs, name=name)
return _op
_result = None
return _result
def merge_v2_checkpoints_eager_fallback(checkpoint_prefixes, destination_prefix, delete_old_dirs=True, name=None, ctx=None):
r"""This is the slowpath function for Eager mode.
This is for function merge_v2_checkpoints
"""
_ctx = ctx if ctx else _context.context()
if delete_old_dirs is None:
delete_old_dirs = True
delete_old_dirs = _execute.make_bool(delete_old_dirs, "delete_old_dirs")
checkpoint_prefixes = _ops.convert_to_tensor(checkpoint_prefixes, _dtypes.string)
destination_prefix = _ops.convert_to_tensor(destination_prefix, _dtypes.string)
_inputs_flat = [checkpoint_prefixes, destination_prefix]
_attrs = ("delete_old_dirs", delete_old_dirs)
_result = _execute.execute(b"MergeV2Checkpoints", 0, inputs=_inputs_flat,
attrs=_attrs, ctx=_ctx, name=name)
_result = None
return _result
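# Context sketch: `tf.train.Saver` invokes this op when writing sharded V2
# checkpoints, merging per-worker temporary prefixes into one user-visible
# checkpoint. A direct call would look like (prefixes are illustrative
# assumptions):
#
#   merge_v2_checkpoints(
#       checkpoint_prefixes=["/tmp/ckpt_tmp/part-0", "/tmp/ckpt_tmp/part-1"],
#       destination_prefix="/tmp/model.ckpt")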
@_dispatch.add_dispatch_list
@tf_export('io.read_file', v1=['io.read_file', 'read_file'])
@deprecated_endpoints('read_file')
def read_file(filename, name=None):
r"""Reads and outputs the entire contents of the input filename.
Args:
filename: A `Tensor` of type `string`.
name: A name for the operation (optional).
Returns:
A `Tensor` of type `string`.
"""
_ctx = _context._context
if _ctx is not None and _ctx._eager_context.is_eager:
try:
_result = _pywrap_tensorflow.TFE_Py_FastPathExecute(
_ctx._context_handle, _ctx._eager_context.device_name, "ReadFile",
name, _ctx._post_execution_callbacks, filename)
return _result
except _core._FallbackException:
try:
return read_file_eager_fallback(
filename, name=name, ctx=_ctx)
except _core._SymbolicException:
pass # Add nodes to the TensorFlow graph.
except (TypeError, ValueError):
result = _dispatch.dispatch(
read_file, filename=filename, name=name)
if result is not _dispatch.OpDispatcher.NOT_SUPPORTED:
return result
raise
except _core._NotOkStatusException as e:
if name is not None:
message = e.message + " name: " + name
else:
message = e.message
_six.raise_from(_core._status_to_exception(e.code, message), None)
# Add nodes to the TensorFlow graph.
try:
_, _, _op = _op_def_lib._apply_op_helper(
"ReadFile", filename=filename, name=name)
except (TypeError, ValueError):
result = _dispatch.dispatch(
read_file, filename=filename, name=name)
if result is not _dispatch.OpDispatcher.NOT_SUPPORTED:
return result
raise
_result = _op.outputs[:]
_inputs_flat = _op.inputs
_attrs = None
_execute.record_gradient(
"ReadFile", _inputs_flat, _attrs, _result, name)
_result, = _result
return _result
def read_file_eager_fallback(filename, name=None, ctx=None):
r"""This is the slowpath function for Eager mode.
This is for function read_file
"""
_ctx = ctx if ctx else _context.context()
filename = _ops.convert_to_tensor(filename, _dtypes.string)
_inputs_flat = [filename]
_attrs = None
_result = _execute.execute(b"ReadFile", 1, inputs=_inputs_flat,
attrs=_attrs, ctx=_ctx, name=name)
_execute.record_gradient(
"ReadFile", _inputs_flat, _attrs, _result, name)
_result, = _result
return _result
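# Example usage (sketch; the path is an illustrative assumption):
#
#   contents = tf.io.read_file("/tmp/image.png")  # scalar string Tensor
#   image = tf.image.decode_png(contents)         # typical follow-up decode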
def reader_num_records_produced(reader_handle, name=None):
r"""Returns the number of records this Reader has produced.
This is the same as the number of ReaderRead executions that have
succeeded.
Args:
reader_handle: A `Tensor` of type mutable `string`. Handle to a Reader.
name: A name for the operation (optional).
Returns:
A `Tensor` of type `int64`.
"""
_ctx = _context._context
if _ctx is not None and _ctx._eager_context.is_eager:
raise RuntimeError("reader_num_records_produced op does not support eager execution. Arg 'reader_handle' is a ref.")
# Add nodes to the TensorFlow graph.
_, _, _op = _op_def_lib._apply_op_helper(
"ReaderNumRecordsProduced", reader_handle=reader_handle, name=name)
_result = _op.outputs[:]
_inputs_flat = _op.inputs
_attrs = None
_execute.record_gradient(
"ReaderNumRecordsProduced", _inputs_flat, _attrs, _result, name)
_result, = _result
return _result
def reader_num_records_produced_eager_fallback(reader_handle, name=None, ctx=None):
raise RuntimeError("reader_num_records_produced op does not support eager execution. Arg 'reader_handle' is a ref.")
def reader_num_records_produced_v2(reader_handle, name=None):
r"""Returns the number of records this Reader has produced.
This is the same as the number of ReaderRead executions that have
succeeded.
Args:
reader_handle: A `Tensor` of type `resource`. Handle to a Reader.
name: A name for the operation (optional).
Returns:
A `Tensor` of type `int64`.
"""
_ctx = _context._context
if _ctx is not None and _ctx._eager_context.is_eager:
try:
_result = _pywrap_tensorflow.TFE_Py_FastPathExecute(
_ctx._context_handle, _ctx._eager_context.device_name,
"ReaderNumRecordsProducedV2", name, _ctx._post_execution_callbacks,
reader_handle)
return _result
except _core._FallbackException:
try:
return reader_num_records_produced_v2_eager_fallback(
reader_handle, name=name, ctx=_ctx)
except _core._SymbolicException:
pass # Add nodes to the TensorFlow graph.
except _core._NotOkStatusException as e:
if name is not None:
message = e.message + " name: " + name
else:
message = e.message
_six.raise_from(_core._status_to_exception(e.code, message), None)
# Add nodes to the TensorFlow graph.
_, _, _op = _op_def_lib._apply_op_helper(
"ReaderNumRecordsProducedV2", reader_handle=reader_handle, name=name)
_result = _op.outputs[:]
_inputs_flat = _op.inputs
_attrs = None
_execute.record_gradient(
"ReaderNumRecordsProducedV2", _inputs_flat, _attrs, _result, name)
_result, = _result
return _result
def reader_num_records_produced_v2_eager_fallback(reader_handle, name=None, ctx=None):
r"""This is the slowpath function for Eager mode.
This is for function reader_num_records_produced_v2
"""
_ctx = ctx if ctx else _context.context()
reader_handle = _ops.convert_to_tensor(reader_handle, _dtypes.resource)
_inputs_flat = [reader_handle]
_attrs = None
_result = _execute.execute(b"ReaderNumRecordsProducedV2", 1,
inputs=_inputs_flat, attrs=_attrs, ctx=_ctx,
name=name)
_execute.record_gradient(
"ReaderNumRecordsProducedV2", _inputs_flat, _attrs, _result, name)
_result, = _result
return _result
def reader_num_work_units_completed(reader_handle, name=None):
r"""Returns the number of work units this Reader has finished processing.
Args:
reader_handle: A `Tensor` of type mutable `string`. Handle to a Reader.
name: A name for the operation (optional).
Returns:
A `Tensor` of type `int64`.
"""
_ctx = _context._context
if _ctx is not None and _ctx._eager_context.is_eager:
raise RuntimeError("reader_num_work_units_completed op does not support eager execution. Arg 'reader_handle' is a ref.")
# Add nodes to the TensorFlow graph.
_, _, _op = _op_def_lib._apply_op_helper(
"ReaderNumWorkUnitsCompleted", reader_handle=reader_handle, name=name)
_result = _op.outputs[:]
_inputs_flat = _op.inputs
_attrs = None
_execute.record_gradient(
"ReaderNumWorkUnitsCompleted", _inputs_flat, _attrs, _result, name)
_result, = _result
return _result
def reader_num_work_units_completed_eager_fallback(reader_handle, name=None, ctx=None):
raise RuntimeError("reader_num_work_units_completed op does not support eager execution. Arg 'reader_handle' is a ref.")
def reader_num_work_units_completed_v2(reader_handle, name=None):
r"""Returns the number of work units this Reader has finished processing.
Args:
reader_handle: A `Tensor` of type `resource`. Handle to a Reader.
name: A name for the operation (optional).
Returns:
A `Tensor` of type `int64`.
"""
_ctx = _context._context
if _ctx is not None and _ctx._eager_context.is_eager:
try:
_result = _pywrap_tensorflow.TFE_Py_FastPathExecute(
_ctx._context_handle, _ctx._eager_context.device_name,
"ReaderNumWorkUnitsCompletedV2", name, _ctx._post_execution_callbacks,
reader_handle)
return _result
except _core._FallbackException:
try:
return reader_num_work_units_completed_v2_eager_fallback(
reader_handle, name=name, ctx=_ctx)
except _core._SymbolicException:
pass # Add nodes to the TensorFlow graph.
except _core._NotOkStatusException as e:
if name is not None:
message = e.message + " name: " + name
else:
message = e.message
_six.raise_from(_core._status_to_exception(e.code, message), None)
# Add nodes to the TensorFlow graph.
_, _, _op = _op_def_lib._apply_op_helper(
"ReaderNumWorkUnitsCompletedV2", reader_handle=reader_handle,
name=name)
_result = _op.outputs[:]
_inputs_flat = _op.inputs
_attrs = None
_execute.record_gradient(
"ReaderNumWorkUnitsCompletedV2", _inputs_flat, _attrs, _result, name)
_result, = _result
return _result
def reader_num_work_units_completed_v2_eager_fallback(reader_handle, name=None, ctx=None):
r"""This is the slowpath function for Eager mode.
This is for function reader_num_work_units_completed_v2
"""
_ctx = ctx if ctx else _context.context()
reader_handle = _ops.convert_to_tensor(reader_handle, _dtypes.resource)
_inputs_flat = [reader_handle]
_attrs = None
_result = _execute.execute(b"ReaderNumWorkUnitsCompletedV2", 1,
inputs=_inputs_flat, attrs=_attrs, ctx=_ctx,
name=name)
_execute.record_gradient(
"ReaderNumWorkUnitsCompletedV2", _inputs_flat, _attrs, _result, name)
_result, = _result
return _result
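# Example usage (graph-mode sketch for the two counter ops above, with `reader`
# built as in the earlier reader examples):
#
#   produced = reader.num_records_produced()       # int64 Tensor
#   completed = reader.num_work_units_completed()  # int64 Tensor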
_reader_read_outputs = ["key", "value"]
_ReaderReadOutput = _collections.namedtuple(
"ReaderRead", _reader_read_outputs)
def reader_read(reader_handle, queue_handle, name=None):
r"""Returns the next record (key, value pair) produced by a Reader.
Will dequeue from the input queue if necessary (e.g. when the
Reader needs to start reading from a new file since it has finished
with the previous file).
Args:
reader_handle: A `Tensor` of type mutable `string`. Handle to a Reader.
queue_handle: A `Tensor` of type mutable `string`.
Handle to a Queue, with string work items.
name: A name for the operation (optional).
Returns:
A tuple of `Tensor` objects (key, value).
key: A `Tensor` of type `string`.
value: A `Tensor` of type `string`.
"""
_ctx = _context._context
if _ctx is not None and _ctx._eager_context.is_eager:
raise RuntimeError("reader_read op does not support eager execution. Arg 'queue_handle' is a ref.")
# Add nodes to the TensorFlow graph.
_, _, _op = _op_def_lib._apply_op_helper(
"ReaderRead", reader_handle=reader_handle, queue_handle=queue_handle,
name=name)
_result = _op.outputs[:]
_inputs_flat = _op.inputs
_attrs = None
_execute.record_gradient(
"ReaderRead", _inputs_flat, _attrs, _result, name)
_result = _ReaderReadOutput._make(_result)
return _result
def reader_read_eager_fallback(reader_handle, queue_handle, name=None, ctx=None):
raise RuntimeError("reader_read op does not support eager execution. Arg 'queue_handle' is a ref.")
_reader_read_up_to_outputs = ["keys", "values"]
_ReaderReadUpToOutput = _collections.namedtuple(
"ReaderReadUpTo", _reader_read_up_to_outputs)
def reader_read_up_to(reader_handle, queue_handle, num_records, name=None):
r"""Returns up to `num_records` (key, value) pairs produced by a Reader.
Will dequeue from the input queue if necessary (e.g. when the
Reader needs to start reading from a new file since it has finished
with the previous file).
It may return fewer than `num_records` even before the last batch.
Args:
reader_handle: A `Tensor` of type mutable `string`. Handle to a `Reader`.
queue_handle: A `Tensor` of type mutable `string`.
Handle to a `Queue`, with string work items.
num_records: A `Tensor` of type `int64`.
number of records to read from `Reader`.
name: A name for the operation (optional).
Returns:
A tuple of `Tensor` objects (keys, values).
keys: A `Tensor` of type `string`.
values: A `Tensor` of type `string`.
"""
_ctx = _context._context
if _ctx is not None and _ctx._eager_context.is_eager:
raise RuntimeError("reader_read_up_to op does not support eager execution. Arg 'queue_handle' is a ref.")
# Add nodes to the TensorFlow graph.
_, _, _op = _op_def_lib._apply_op_helper(
"ReaderReadUpTo", reader_handle=reader_handle,
queue_handle=queue_handle, num_records=num_records,
name=name)
_result = _op.outputs[:]
_inputs_flat = _op.inputs
_attrs = None
_execute.record_gradient(
"ReaderReadUpTo", _inputs_flat, _attrs, _result, name)
_result = _ReaderReadUpToOutput._make(_result)
return _result
def reader_read_up_to_eager_fallback(reader_handle, queue_handle, num_records, name=None, ctx=None):
raise RuntimeError("reader_read_up_to op does not support eager execution. Arg 'queue_handle' is a ref.")
_reader_read_up_to_v2_outputs = ["keys", "values"]
_ReaderReadUpToV2Output = _collections.namedtuple(
"ReaderReadUpToV2", _reader_read_up_to_v2_outputs)
def reader_read_up_to_v2(reader_handle, queue_handle, num_records, name=None):
r"""Returns up to `num_records` (key, value) pairs produced by a Reader.
Will dequeue from the input queue if necessary (e.g. when the
Reader needs to start reading from a new file since it has finished
with the previous file).
It may return fewer than `num_records` even before the last batch.
Args:
reader_handle: A `Tensor` of type `resource`. Handle to a `Reader`.
queue_handle: A `Tensor` of type `resource`.
Handle to a `Queue`, with string work items.
num_records: A `Tensor` of type `int64`.
number of records to read from `Reader`.
name: A name for the operation (optional).
Returns:
A tuple of `Tensor` objects (keys, values).
keys: A `Tensor` of type `string`.
values: A `Tensor` of type `string`.
"""
_ctx = _context._context
if _ctx is not None and _ctx._eager_context.is_eager:
try:
_result = _pywrap_tensorflow.TFE_Py_FastPathExecute(
_ctx._context_handle, _ctx._eager_context.device_name,
"ReaderReadUpToV2", name, _ctx._post_execution_callbacks,
reader_handle, queue_handle, num_records)
_result = _ReaderReadUpToV2Output._make(_result)
return _result
except _core._FallbackException:
try:
return reader_read_up_to_v2_eager_fallback(
reader_handle, queue_handle, num_records, name=name, ctx=_ctx)
except _core._SymbolicException:
pass # Add nodes to the TensorFlow graph.
except _core._NotOkStatusException as e:
if name is not None:
message = e.message + " name: " + name
else:
message = e.message
_six.raise_from(_core._status_to_exception(e.code, message), None)
# Add nodes to the TensorFlow graph.
_, _, _op = _op_def_lib._apply_op_helper(
"ReaderReadUpToV2", reader_handle=reader_handle,
queue_handle=queue_handle,
num_records=num_records, name=name)
_result = _op.outputs[:]
_inputs_flat = _op.inputs
_attrs = None
_execute.record_gradient(
"ReaderReadUpToV2", _inputs_flat, _attrs, _result, name)
_result = _ReaderReadUpToV2Output._make(_result)
return _result
def reader_read_up_to_v2_eager_fallback(reader_handle, queue_handle, num_records, name=None, ctx=None):
r"""This is the slowpath function for Eager mode.
This is for function reader_read_up_to_v2
"""
_ctx = ctx if ctx else _context.context()
reader_handle = _ops.convert_to_tensor(reader_handle, _dtypes.resource)
queue_handle = _ops.convert_to_tensor(queue_handle, _dtypes.resource)
num_records = _ops.convert_to_tensor(num_records, _dtypes.int64)
_inputs_flat = [reader_handle, queue_handle, num_records]
_attrs = None
_result = _execute.execute(b"ReaderReadUpToV2", 2, inputs=_inputs_flat,
attrs=_attrs, ctx=_ctx, name=name)
_execute.record_gradient(
"ReaderReadUpToV2", _inputs_flat, _attrs, _result, name)
_result = _ReaderReadUpToV2Output._make(_result)
return _result
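# Example usage (sketch; the batch size is an illustrative assumption).
# `reader.read_up_to(queue, n)` dispatches here for resource-based readers:
#
#   keys, values = reader.read_up_to(filename_queue, num_records=64)
#   # Both outputs are 1-D; fewer than 64 records may be returned.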
_reader_read_v2_outputs = ["key", "value"]
_ReaderReadV2Output = _collections.namedtuple(
"ReaderReadV2", _reader_read_v2_outputs)
def reader_read_v2(reader_handle, queue_handle, name=None):
r"""Returns the next record (key, value pair) produced by a Reader.
Will dequeue from the input queue if necessary (e.g. when the
Reader needs to start reading from a new file since it has finished
with the previous file).
Args:
reader_handle: A `Tensor` of type `resource`. Handle to a Reader.
queue_handle: A `Tensor` of type `resource`.
Handle to a Queue, with string work items.
name: A name for the operation (optional).
Returns:
A tuple of `Tensor` objects (key, value).
key: A `Tensor` of type `string`.
value: A `Tensor` of type `string`.
"""
_ctx = _context._context
if _ctx is not None and _ctx._eager_context.is_eager:
try:
_result = _pywrap_tensorflow.TFE_Py_FastPathExecute(
_ctx._context_handle, _ctx._eager_context.device_name, "ReaderReadV2",
name, _ctx._post_execution_callbacks, reader_handle, queue_handle)
_result = _ReaderReadV2Output._make(_result)
return _result
except _core._FallbackException:
try:
return reader_read_v2_eager_fallback(
reader_handle, queue_handle, name=name, ctx=_ctx)
except _core._SymbolicException:
pass # Add nodes to the TensorFlow graph.
except _core._NotOkStatusException as e:
if name is not None:
message = e.message + " name: " + name
else:
message = e.message
_six.raise_from(_core._status_to_exception(e.code, message), None)
# Add nodes to the TensorFlow graph.
_, _, _op = _op_def_lib._apply_op_helper(
"ReaderReadV2", reader_handle=reader_handle,
queue_handle=queue_handle, name=name)
_result = _op.outputs[:]
_inputs_flat = _op.inputs
_attrs = None
_execute.record_gradient(
"ReaderReadV2", _inputs_flat, _attrs, _result, name)
_result = _ReaderReadV2Output._make(_result)
return _result
def reader_read_v2_eager_fallback(reader_handle, queue_handle, name=None, ctx=None):
r"""This is the slowpath function for Eager mode.
This is for function reader_read_v2
"""
_ctx = ctx if ctx else _context.context()
reader_handle = _ops.convert_to_tensor(reader_handle, _dtypes.resource)
queue_handle = _ops.convert_to_tensor(queue_handle, _dtypes.resource)
_inputs_flat = [reader_handle, queue_handle]
_attrs = None
_result = _execute.execute(b"ReaderReadV2", 2, inputs=_inputs_flat,
attrs=_attrs, ctx=_ctx, name=name)
_execute.record_gradient(
"ReaderReadV2", _inputs_flat, _attrs, _result, name)
_result = _ReaderReadV2Output._make(_result)
return _result
def reader_reset(reader_handle, name=None):
r"""Restore a Reader to its initial clean state.
Args:
reader_handle: A `Tensor` of type mutable `string`. Handle to a Reader.
name: A name for the operation (optional).
Returns:
The created Operation.
"""
_ctx = _context._context
if _ctx is not None and _ctx._eager_context.is_eager:
raise RuntimeError("reader_reset op does not support eager execution. Arg 'reader_handle' is a ref.")
# Add nodes to the TensorFlow graph.
_, _, _op = _op_def_lib._apply_op_helper(
"ReaderReset", reader_handle=reader_handle, name=name)
return _op
_result = None
return _result
def reader_reset_eager_fallback(reader_handle, name=None, ctx=None):
raise RuntimeError("reader_reset op does not support eager execution. Arg 'reader_handle' is a ref.")
def reader_reset_v2(reader_handle, name=None):
r"""Restore a Reader to its initial clean state.
Args:
reader_handle: A `Tensor` of type `resource`. Handle to a Reader.
name: A name for the operation (optional).
Returns:
The created Operation.
"""
_ctx = _context._context
if _ctx is not None and _ctx._eager_context.is_eager:
try:
_result = _pywrap_tensorflow.TFE_Py_FastPathExecute(
_ctx._context_handle, _ctx._eager_context.device_name,
"ReaderResetV2", name, _ctx._post_execution_callbacks, reader_handle)
return _result
except _core._FallbackException:
try:
return reader_reset_v2_eager_fallback(
reader_handle, name=name, ctx=_ctx)
except _core._SymbolicException:
pass # Add nodes to the TensorFlow graph.
except _core._NotOkStatusException as e:
if name is not None:
message = e.message + " name: " + name
else:
message = e.message
_six.raise_from(_core._status_to_exception(e.code, message), None)
# Add nodes to the TensorFlow graph.
_, _, _op = _op_def_lib._apply_op_helper(
"ReaderResetV2", reader_handle=reader_handle, name=name)
return _op
_result = None
return _result
def reader_reset_v2_eager_fallback(reader_handle, name=None, ctx=None):
r"""This is the slowpath function for Eager mode.
This is for function reader_reset_v2
"""
_ctx = ctx if ctx else _context.context()
reader_handle = _ops.convert_to_tensor(reader_handle, _dtypes.resource)
_inputs_flat = [reader_handle]
_attrs = None
_result = _execute.execute(b"ReaderResetV2", 0, inputs=_inputs_flat,
attrs=_attrs, ctx=_ctx, name=name)
_result = None
return _result
def reader_restore_state(reader_handle, state, name=None):
r"""Restore a reader to a previously saved state.
Not all Readers support being restored, so this can produce an
Unimplemented error.
Args:
reader_handle: A `Tensor` of type mutable `string`. Handle to a Reader.
state: A `Tensor` of type `string`.
Result of a ReaderSerializeState of a Reader with type
matching reader_handle.
name: A name for the operation (optional).
Returns:
The created Operation.
"""
_ctx = _context._context
if _ctx is not None and _ctx._eager_context.is_eager:
raise RuntimeError("reader_restore_state op does not support eager execution. Arg 'reader_handle' is a ref.")
# Add nodes to the TensorFlow graph.
_, _, _op = _op_def_lib._apply_op_helper(
"ReaderRestoreState", reader_handle=reader_handle, state=state,
name=name)
return _op
_result = None
return _result
def reader_restore_state_eager_fallback(reader_handle, state, name=None, ctx=None):
raise RuntimeError("reader_restore_state op does not support eager execution. Arg 'reader_handle' is a ref.")
def reader_restore_state_v2(reader_handle, state, name=None):
r"""Restore a reader to a previously saved state.
Not all Readers support being restored, so this can produce an
Unimplemented error.
Args:
reader_handle: A `Tensor` of type `resource`. Handle to a Reader.
state: A `Tensor` of type `string`.
Result of a ReaderSerializeState of a Reader with type
matching reader_handle.
name: A name for the operation (optional).
Returns:
The created Operation.
"""
_ctx = _context._context
if _ctx is not None and _ctx._eager_context.is_eager:
try:
_result = _pywrap_tensorflow.TFE_Py_FastPathExecute(
_ctx._context_handle, _ctx._eager_context.device_name,
"ReaderRestoreStateV2", name, _ctx._post_execution_callbacks,
reader_handle, state)
return _result
except _core._FallbackException:
try:
return reader_restore_state_v2_eager_fallback(
reader_handle, state, name=name, ctx=_ctx)
except _core._SymbolicException:
pass # Add nodes to the TensorFlow graph.
except _core._NotOkStatusException as e:
if name is not None:
message = e.message + " name: " + name
else:
message = e.message
_six.raise_from(_core._status_to_exception(e.code, message), None)
# Add nodes to the TensorFlow graph.
_, _, _op = _op_def_lib._apply_op_helper(
"ReaderRestoreStateV2", reader_handle=reader_handle, state=state,
name=name)
return _op
_result = None
return _result
def reader_restore_state_v2_eager_fallback(reader_handle, state, name=None, ctx=None):
r"""This is the slowpath function for Eager mode.
This is for function reader_restore_state_v2
"""
_ctx = ctx if ctx else _context.context()
reader_handle = _ops.convert_to_tensor(reader_handle, _dtypes.resource)
state = _ops.convert_to_tensor(state, _dtypes.string)
_inputs_flat = [reader_handle, state]
_attrs = None
_result = _execute.execute(b"ReaderRestoreStateV2", 0, inputs=_inputs_flat,
attrs=_attrs, ctx=_ctx, name=name)
_result = None
return _result
def reader_serialize_state(reader_handle, name=None):
r"""Produce a string tensor that encodes the state of a Reader.
Not all Readers support being serialized, so this can produce an
Unimplemented error.
Args:
reader_handle: A `Tensor` of type mutable `string`. Handle to a Reader.
name: A name for the operation (optional).
Returns:
A `Tensor` of type `string`.
"""
_ctx = _context._context
if _ctx is not None and _ctx._eager_context.is_eager:
raise RuntimeError("reader_serialize_state op does not support eager execution. Arg 'reader_handle' is a ref.")
# Add nodes to the TensorFlow graph.
_, _, _op = _op_def_lib._apply_op_helper(
"ReaderSerializeState", reader_handle=reader_handle, name=name)
_result = _op.outputs[:]
_inputs_flat = _op.inputs
_attrs = None
_execute.record_gradient(
"ReaderSerializeState", _inputs_flat, _attrs, _result, name)
_result, = _result
return _result
def reader_serialize_state_eager_fallback(reader_handle, name=None, ctx=None):
raise RuntimeError("reader_serialize_state op does not support eager execution. Arg 'reader_handle' is a ref.")
def reader_serialize_state_v2(reader_handle, name=None):
r"""Produce a string tensor that encodes the state of a Reader.
Not all Readers support being serialized, so this can produce an
Unimplemented error.
Args:
reader_handle: A `Tensor` of type `resource`. Handle to a Reader.
name: A name for the operation (optional).
Returns:
A `Tensor` of type `string`.
"""
_ctx = _context._context
if _ctx is not None and _ctx._eager_context.is_eager:
try:
_result = _pywrap_tensorflow.TFE_Py_FastPathExecute(
_ctx._context_handle, _ctx._eager_context.device_name,
"ReaderSerializeStateV2", name, _ctx._post_execution_callbacks,
reader_handle)
return _result
except _core._FallbackException:
try:
return reader_serialize_state_v2_eager_fallback(
reader_handle, name=name, ctx=_ctx)
except _core._SymbolicException:
pass # Add nodes to the TensorFlow graph.
except _core._NotOkStatusException as e:
if name is not None:
message = e.message + " name: " + name
else:
message = e.message
_six.raise_from(_core._status_to_exception(e.code, message), None)
# Add nodes to the TensorFlow graph.
_, _, _op = _op_def_lib._apply_op_helper(
"ReaderSerializeStateV2", reader_handle=reader_handle, name=name)
_result = _op.outputs[:]
_inputs_flat = _op.inputs
_attrs = None
_execute.record_gradient(
"ReaderSerializeStateV2", _inputs_flat, _attrs, _result, name)
_result, = _result
return _result
def reader_serialize_state_v2_eager_fallback(reader_handle, name=None, ctx=None):
r"""This is the slowpath function for Eager mode.
This is for function reader_serialize_state_v2
"""
_ctx = ctx if ctx else _context.context()
reader_handle = _ops.convert_to_tensor(reader_handle, _dtypes.resource)
_inputs_flat = [reader_handle]
_attrs = None
_result = _execute.execute(b"ReaderSerializeStateV2", 1,
inputs=_inputs_flat, attrs=_attrs, ctx=_ctx,
name=name)
_execute.record_gradient(
"ReaderSerializeStateV2", _inputs_flat, _attrs, _result, name)
_result, = _result
return _result
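# Example usage (sketch for the serialize/restore pair; as the docstrings note,
# readers that do not implement it raise Unimplemented):
#
#   state = reader.serialize_state()  # scalar string Tensor
#   restore_op = reader.restore_state(state)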
def restore(file_pattern, tensor_name, dt, preferred_shard=-1, name=None):
r"""Restores a tensor from checkpoint files.
Reads a tensor stored in one or several files. If there are several files (for
instance because a tensor was saved as slices), `file_pattern` may contain
wildcard symbols (`*` and `?`) in the filename portion only, not in the
directory portion.
If a `file_pattern` matches several files, `preferred_shard` can be used to hint
in which file the requested tensor is likely to be found. This op will first
open the file at index `preferred_shard` in the list of matching files and try
to restore tensors from that file. Only if some tensors or tensor slices are
not found in that first file does the Op open all the files. Setting
`preferred_shard` to match the value passed as the `shard` input
of a matching `Save` Op may speed up Restore. This attribute only affects
performance, not correctness. The default value -1 means files are processed in
order.
See also `RestoreSlice`.
Args:
file_pattern: A `Tensor` of type `string`.
Must have a single element. The pattern of the files from
which we read the tensor.
tensor_name: A `Tensor` of type `string`.
Must have a single element. The name of the tensor to be
restored.
dt: A `tf.DType`. The type of the tensor to be restored.
preferred_shard: An optional `int`. Defaults to `-1`.
Index of file to open first if multiple files match
`file_pattern`.
name: A name for the operation (optional).
Returns:
A `Tensor` of type `dt`.
"""
_ctx = _context._context
if _ctx is not None and _ctx._eager_context.is_eager:
try:
_result = _pywrap_tensorflow.TFE_Py_FastPathExecute(
_ctx._context_handle, _ctx._eager_context.device_name, "Restore",
name, _ctx._post_execution_callbacks, file_pattern, tensor_name, "dt",
dt, "preferred_shard", preferred_shard)
return _result
except _core._FallbackException:
try:
return restore_eager_fallback(
file_pattern, tensor_name, dt=dt, preferred_shard=preferred_shard,
name=name, ctx=_ctx)
except _core._SymbolicException:
pass # Add nodes to the TensorFlow graph.
except _core._NotOkStatusException as e:
if name is not None:
message = e.message + " name: " + name
else:
message = e.message
_six.raise_from(_core._status_to_exception(e.code, message), None)
# Add nodes to the TensorFlow graph.
dt = _execute.make_type(dt, "dt")
if preferred_shard is None:
preferred_shard = -1
preferred_shard = _execute.make_int(preferred_shard, "preferred_shard")
_, _, _op = _op_def_lib._apply_op_helper(
"Restore", file_pattern=file_pattern, tensor_name=tensor_name, dt=dt,
preferred_shard=preferred_shard, name=name)
_result = _op.outputs[:]
_inputs_flat = _op.inputs
_attrs = ("dt", _op.get_attr("dt"), "preferred_shard",
_op.get_attr("preferred_shard"))
_execute.record_gradient(
"Restore", _inputs_flat, _attrs, _result, name)
_result, = _result
return _result
def restore_eager_fallback(file_pattern, tensor_name, dt, preferred_shard=-1, name=None, ctx=None):
r"""This is the slowpath function for Eager mode.
This is for function restore
"""
_ctx = ctx if ctx else _context.context()
dt = _execute.make_type(dt, "dt")
if preferred_shard is None:
preferred_shard = -1
preferred_shard = _execute.make_int(preferred_shard, "preferred_shard")
file_pattern = _ops.convert_to_tensor(file_pattern, _dtypes.string)
tensor_name = _ops.convert_to_tensor(tensor_name, _dtypes.string)
_inputs_flat = [file_pattern, tensor_name]
_attrs = ("dt", dt, "preferred_shard", preferred_shard)
_result = _execute.execute(b"Restore", 1, inputs=_inputs_flat, attrs=_attrs,
ctx=_ctx, name=name)
_execute.record_gradient(
"Restore", _inputs_flat, _attrs, _result, name)
_result, = _result
return _result
def restore_slice(file_pattern, tensor_name, shape_and_slice, dt, preferred_shard=-1, name=None):
r"""Restores a tensor from checkpoint files.
This is like `Restore` except that the restored tensor can be listed as filling
only a slice of a larger tensor. `shape_and_slice` specifies the shape of the
larger tensor and the slice that the restored tensor covers.
The `shape_and_slice` input has the same format as the
elements of the `shapes_and_slices` input of the `SaveSlices` op.
Args:
file_pattern: A `Tensor` of type `string`.
Must have a single element. The pattern of the files from
which we read the tensor.
tensor_name: A `Tensor` of type `string`.
Must have a single element. The name of the tensor to be
restored.
shape_and_slice: A `Tensor` of type `string`.
Scalar. The shapes and slice specifications to use when
restoring a tensor.
dt: A `tf.DType`. The type of the tensor to be restored.
preferred_shard: An optional `int`. Defaults to `-1`.
Index of file to open first if multiple files match
`file_pattern`. See the documentation for `Restore`.
name: A name for the operation (optional).
Returns:
A `Tensor` of type `dt`.
"""
_ctx = _context._context
if _ctx is not None and _ctx._eager_context.is_eager:
try:
_result = _pywrap_tensorflow.TFE_Py_FastPathExecute(
_ctx._context_handle, _ctx._eager_context.device_name, "RestoreSlice",
name, _ctx._post_execution_callbacks, file_pattern, tensor_name,
shape_and_slice, "dt", dt, "preferred_shard", preferred_shard)
return _result
except _core._FallbackException:
try:
return restore_slice_eager_fallback(
file_pattern, tensor_name, shape_and_slice, dt=dt,
preferred_shard=preferred_shard, name=name, ctx=_ctx)
except _core._SymbolicException:
pass # Add nodes to the TensorFlow graph.
except _core._NotOkStatusException as e:
if name is not None:
message = e.message + " name: " + name
else:
message = e.message
_six.raise_from(_core._status_to_exception(e.code, message), None)
# Add nodes to the TensorFlow graph.
dt = _execute.make_type(dt, "dt")
if preferred_shard is None:
preferred_shard = -1
preferred_shard = _execute.make_int(preferred_shard, "preferred_shard")
_, _, _op = _op_def_lib._apply_op_helper(
"RestoreSlice", file_pattern=file_pattern, tensor_name=tensor_name,
shape_and_slice=shape_and_slice, dt=dt,
preferred_shard=preferred_shard, name=name)
_result = _op.outputs[:]
_inputs_flat = _op.inputs
_attrs = ("dt", _op.get_attr("dt"), "preferred_shard",
_op.get_attr("preferred_shard"))
_execute.record_gradient(
"RestoreSlice", _inputs_flat, _attrs, _result, name)
_result, = _result
return _result
def restore_slice_eager_fallback(file_pattern, tensor_name, shape_and_slice, dt, preferred_shard=-1, name=None, ctx=None):
r"""This is the slowpath function for Eager mode.
This is for function restore_slice
"""
_ctx = ctx if ctx else _context.context()
dt = _execute.make_type(dt, "dt")
if preferred_shard is None:
preferred_shard = -1
preferred_shard = _execute.make_int(preferred_shard, "preferred_shard")
file_pattern = _ops.convert_to_tensor(file_pattern, _dtypes.string)
tensor_name = _ops.convert_to_tensor(tensor_name, _dtypes.string)
shape_and_slice = _ops.convert_to_tensor(shape_and_slice, _dtypes.string)
_inputs_flat = [file_pattern, tensor_name, shape_and_slice]
_attrs = ("dt", dt, "preferred_shard", preferred_shard)
_result = _execute.execute(b"RestoreSlice", 1, inputs=_inputs_flat,
attrs=_attrs, ctx=_ctx, name=name)
_execute.record_gradient(
"RestoreSlice", _inputs_flat, _attrs, _result, name)
_result, = _result
return _result
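# Illustrative sketch (hypothetical names): restore a 2x2 block of a larger
# 4x4 tensor saved as "m". The spec "4 4 0,2:0,2" reads: full shape `4 4`,
# then per-dimension start,length slices (rows 0..1, columns 0..1), matching
# the `SaveSlices` spec grammar referenced above.
def _example_restore_slice():
    return restore_slice(file_pattern="/tmp/ckpt-*",
                         tensor_name="m",
                         shape_and_slice="4 4 0,2:0,2",
                         dt=_dtypes.float32)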
def restore_v2(prefix, tensor_names, shape_and_slices, dtypes, name=None):
r"""Restores tensors from a V2 checkpoint.
For backward compatibility with the V1 format, this Op currently allows
restoring from a V1 checkpoint as well:
- This Op first attempts to find the V2 index file pointed to by "prefix", and
if found, proceeds to read it as a V2 checkpoint;
- Otherwise the V1 read path is invoked.
Relying on this behavior is not recommended, as the ability to fall back to read
V1 might be deprecated and eventually removed.
By default, restores the named tensors in full. If the caller wishes to restore
specific slices of stored tensors, "shape_and_slices" should be non-empty
strings and correspondingly well-formed.
Callers must ensure all the named tensors are indeed stored in the checkpoint.
Args:
prefix: A `Tensor` of type `string`.
Must have a single element. The prefix of a V2 checkpoint.
tensor_names: A `Tensor` of type `string`.
shape {N}. The names of the tensors to be restored.
shape_and_slices: A `Tensor` of type `string`.
shape {N}. The slice specs of the tensors to be restored.
Empty strings indicate that they are non-partitioned tensors.
dtypes: A list of `tf.DTypes` that has length `>= 1`.
shape {N}. The list of expected dtypes for the tensors. Must match
those stored in the checkpoint.
name: A name for the operation (optional).
Returns:
A list of `Tensor` objects of type `dtypes`.
"""
_ctx = _context._context
if _ctx is not None and _ctx._eager_context.is_eager:
try:
_result = _pywrap_tensorflow.TFE_Py_FastPathExecute(
_ctx._context_handle, _ctx._eager_context.device_name, "RestoreV2",
name, _ctx._post_execution_callbacks, prefix, tensor_names,
shape_and_slices, "dtypes", dtypes)
return _result
except _core._FallbackException:
try:
return restore_v2_eager_fallback(
prefix, tensor_names, shape_and_slices, dtypes=dtypes, name=name,
ctx=_ctx)
except _core._SymbolicException:
pass # Add nodes to the TensorFlow graph.
except _core._NotOkStatusException as e:
if name is not None:
message = e.message + " name: " + name
else:
message = e.message
_six.raise_from(_core._status_to_exception(e.code, message), None)
# Add nodes to the TensorFlow graph.
if not isinstance(dtypes, (list, tuple)):
raise TypeError(
"Expected list for 'dtypes' argument to "
"'restore_v2' Op, not %r." % dtypes)
dtypes = [_execute.make_type(_t, "dtypes") for _t in dtypes]
_, _, _op = _op_def_lib._apply_op_helper(
"RestoreV2", prefix=prefix, tensor_names=tensor_names,
shape_and_slices=shape_and_slices, dtypes=dtypes,
name=name)
_result = _op.outputs[:]
if not _result:
return _op
_inputs_flat = _op.inputs
_attrs = ("dtypes", _op.get_attr("dtypes"))
_execute.record_gradient(
"RestoreV2", _inputs_flat, _attrs, _result, name)
return _result
def restore_v2_eager_fallback(prefix, tensor_names, shape_and_slices, dtypes, name=None, ctx=None):
r"""This is the slowpath function for Eager mode.
This is for function restore_v2
"""
_ctx = ctx if ctx else _context.context()
if not isinstance(dtypes, (list, tuple)):
raise TypeError(
"Expected list for 'dtypes' argument to "
"'restore_v2' Op, not %r." % dtypes)
dtypes = [_execute.make_type(_t, "dtypes") for _t in dtypes]
prefix = _ops.convert_to_tensor(prefix, _dtypes.string)
tensor_names = _ops.convert_to_tensor(tensor_names, _dtypes.string)
shape_and_slices = _ops.convert_to_tensor(shape_and_slices, _dtypes.string)
_inputs_flat = [prefix, tensor_names, shape_and_slices]
_attrs = ("dtypes", dtypes)
_result = _execute.execute(b"RestoreV2", len(dtypes), inputs=_inputs_flat,
attrs=_attrs, ctx=_ctx, name=name)
_execute.record_gradient(
"RestoreV2", _inputs_flat, _attrs, _result, name)
return _result
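# Illustrative sketch (hypothetical prefix and names): restore two tensors in
# a single RestoreV2 call. Empty slice specs mean "restore in full"; `dtypes`
# must match what the checkpoint stores.
def _example_restore_v2():
    w, b = restore_v2(prefix="/tmp/model.ckpt",
                      tensor_names=["w", "b"],
                      shape_and_slices=["", ""],
                      dtypes=[_dtypes.float32, _dtypes.float32])
    return w, b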
def save(filename, tensor_names, data, name=None):
r"""Saves the input tensors to disk.
The size of `tensor_names` must match the number of tensors in `data`. `data[i]`
is written to `filename` with name `tensor_names[i]`.
See also `SaveSlices`.
Args:
filename: A `Tensor` of type `string`.
Must have a single element. The name of the file to which we write
the tensor.
tensor_names: A `Tensor` of type `string`.
Shape `[N]`. The names of the tensors to be saved.
data: A list of `Tensor` objects. `N` tensors to save.
name: A name for the operation (optional).
Returns:
The created Operation.
"""
_ctx = _context._context
if _ctx is not None and _ctx._eager_context.is_eager:
try:
_result = _pywrap_tensorflow.TFE_Py_FastPathExecute(
_ctx._context_handle, _ctx._eager_context.device_name, "Save", name,
_ctx._post_execution_callbacks, filename, tensor_names, data)
return _result
except _core._FallbackException:
try:
return save_eager_fallback(
filename, tensor_names, data, name=name, ctx=_ctx)
except _core._SymbolicException:
pass # Add nodes to the TensorFlow graph.
except _core._NotOkStatusException as e:
if name is not None:
message = e.message + " name: " + name
else:
message = e.message
_six.raise_from(_core._status_to_exception(e.code, message), None)
# Add nodes to the TensorFlow graph.
_, _, _op = _op_def_lib._apply_op_helper(
"Save", filename=filename, tensor_names=tensor_names, data=data,
name=name)
return _op
_result = None
return _result
def save_eager_fallback(filename, tensor_names, data, name=None, ctx=None):
r"""This is the slowpath function for Eager mode.
This is for function save
"""
_ctx = ctx if ctx else _context.context()
_attr_T, data = _execute.convert_to_mixed_eager_tensors(data, _ctx)
filename = _ops.convert_to_tensor(filename, _dtypes.string)
tensor_names = _ops.convert_to_tensor(tensor_names, _dtypes.string)
_inputs_flat = [filename, tensor_names] + list(data)
_attrs = ("T", _attr_T)
_result = _execute.execute(b"Save", 0, inputs=_inputs_flat, attrs=_attrs,
ctx=_ctx, name=name)
_result = None
return _result
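# Illustrative sketch (hypothetical filename): `tensor_names` needs exactly one
# entry per tensor in `data`, and data[i] is stored under tensor_names[i]. The
# constants use `_ops.convert_to_tensor`, which this module already imports.
def _example_save():
    return save(filename="/tmp/legacy.ckpt",
                tensor_names=["w", "b"],
                data=[_ops.convert_to_tensor([1.0, 2.0]),
                      _ops.convert_to_tensor([0.0])])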
def save_slices(filename, tensor_names, shapes_and_slices, data, name=None):
r"""Saves input tensors slices to disk.
This is like `Save` except that tensors can be listed in the saved file as being
a slice of a larger tensor. `shapes_and_slices` specifies the shape of the
larger tensor and the slice that this tensor covers. `shapes_and_slices` must
have as many elements as `tensor_names`.
Elements of the `shapes_and_slices` input must either be:
* The empty string, in which case the corresponding tensor is
saved normally.
* A string of the form `dim0 dim1 ... dimN-1 slice-spec` where the
`dimI` are the dimensions of the larger tensor and `slice-spec`
specifies what part is covered by the tensor to save.
`slice-spec` itself is a `:`-separated list: `slice0:slice1:...:sliceN-1`
where each `sliceI` is either:
* The string `-` meaning that the slice covers all indices of this dimension
* `start,length` where `start` and `length` are integers. In that
case the slice covers `length` indices starting at `start`.
See also `Save`; an illustrative slice-spec sketch follows `save_slices_eager_fallback` below.
Args:
filename: A `Tensor` of type `string`.
Must have a single element. The name of the file to which we write the
tensor.
tensor_names: A `Tensor` of type `string`.
Shape `[N]`. The names of the tensors to be saved.
shapes_and_slices: A `Tensor` of type `string`.
Shape `[N]`. The shapes and slice specifications to use when
saving the tensors.
data: A list of `Tensor` objects. `N` tensors to save.
name: A name for the operation (optional).
Returns:
The created Operation.
"""
_ctx = _context._context
if _ctx is not None and _ctx._eager_context.is_eager:
try:
_result = _pywrap_tensorflow.TFE_Py_FastPathExecute(
_ctx._context_handle, _ctx._eager_context.device_name, "SaveSlices",
name, _ctx._post_execution_callbacks, filename, tensor_names,
shapes_and_slices, data)
return _result
except _core._FallbackException:
try:
return save_slices_eager_fallback(
filename, tensor_names, shapes_and_slices, data, name=name,
ctx=_ctx)
except _core._SymbolicException:
pass # Add nodes to the TensorFlow graph.
except _core._NotOkStatusException as e:
if name is not None:
message = e.message + " name: " + name
else:
message = e.message
_six.raise_from(_core._status_to_exception(e.code, message), None)
# Add nodes to the TensorFlow graph.
_, _, _op = _op_def_lib._apply_op_helper(
"SaveSlices", filename=filename, tensor_names=tensor_names,
shapes_and_slices=shapes_and_slices, data=data,
name=name)
return _op
_result = None
return _result
def save_slices_eager_fallback(filename, tensor_names, shapes_and_slices, data, name=None, ctx=None):
r"""This is the slowpath function for Eager mode.
This is for function save_slices
"""
_ctx = ctx if ctx else _context.context()
_attr_T, data = _execute.convert_to_mixed_eager_tensors(data, _ctx)
filename = _ops.convert_to_tensor(filename, _dtypes.string)
tensor_names = _ops.convert_to_tensor(tensor_names, _dtypes.string)
shapes_and_slices = _ops.convert_to_tensor(shapes_and_slices, _dtypes.string)
_inputs_flat = [filename, tensor_names, shapes_and_slices] + list(data)
_attrs = ("T", _attr_T)
_result = _execute.execute(b"SaveSlices", 0, inputs=_inputs_flat,
attrs=_attrs, ctx=_ctx, name=name)
_result = None
return _result
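# Illustrative sketch of the slice-spec grammar documented in `save_slices`
# (hypothetical filename): "big" covers rows 0..1 of a larger 4x3 tensor, with
# "-" keeping every column, while the empty spec saves "small" whole.
def _example_save_slices():
    return save_slices(filename="/tmp/legacy_slices.ckpt",
                       tensor_names=["big", "small"],
                       shapes_and_slices=["4 3 0,2:-", ""],
                       data=[_ops.convert_to_tensor([[1.0, 2.0, 3.0],
                                                     [4.0, 5.0, 6.0]]),
                             _ops.convert_to_tensor([7.0])])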
def save_v2(prefix, tensor_names, shape_and_slices, tensors, name=None):
r"""Saves tensors in V2 checkpoint format.
By default, saves the named tensors in full. If the caller wishes to save
specific slices of full tensors, "shape_and_slices" should be non-empty strings
and correspondingly well-formed.
Args:
prefix: A `Tensor` of type `string`.
Must have a single element. The prefix of the V2 checkpoint to which we
write the tensors.
tensor_names: A `Tensor` of type `string`.
shape {N}. The names of the tensors to be saved.
shape_and_slices: A `Tensor` of type `string`.
shape {N}. The slice specs of the tensors to be saved.
Empty strings indicate that they are non-partitioned tensors.
tensors: A list of `Tensor` objects. `N` tensors to save.
name: A name for the operation (optional).
Returns:
The created Operation.
"""
_ctx = _context._context
if _ctx is not None and _ctx._eager_context.is_eager:
try:
_result = _pywrap_tensorflow.TFE_Py_FastPathExecute(
_ctx._context_handle, _ctx._eager_context.device_name, "SaveV2", name,
_ctx._post_execution_callbacks, prefix, tensor_names,
shape_and_slices, tensors)
return _result
except _core._FallbackException:
try:
return save_v2_eager_fallback(
prefix, tensor_names, shape_and_slices, tensors, name=name,
ctx=_ctx)
except _core._SymbolicException:
pass # Add nodes to the TensorFlow graph.
except _core._NotOkStatusException as e:
if name is not None:
message = e.message + " name: " + name
else:
message = e.message
_six.raise_from(_core._status_to_exception(e.code, message), None)
# Add nodes to the TensorFlow graph.
_, _, _op = _op_def_lib._apply_op_helper(
"SaveV2", prefix=prefix, tensor_names=tensor_names,
shape_and_slices=shape_and_slices, tensors=tensors,
name=name)
return _op
_result = None
return _result
def save_v2_eager_fallback(prefix, tensor_names, shape_and_slices, tensors, name=None, ctx=None):
r"""This is the slowpath function for Eager mode.
This is for function save_v2
"""
_ctx = ctx if ctx else _context.context()
_attr_dtypes, tensors = _execute.convert_to_mixed_eager_tensors(tensors, _ctx)
prefix = _ops.convert_to_tensor(prefix, _dtypes.string)
tensor_names = _ops.convert_to_tensor(tensor_names, _dtypes.string)
shape_and_slices = _ops.convert_to_tensor(shape_and_slices, _dtypes.string)
_inputs_flat = [prefix, tensor_names, shape_and_slices] + list(tensors)
_attrs = ("dtypes", _attr_dtypes)
_result = _execute.execute(b"SaveV2", 0, inputs=_inputs_flat, attrs=_attrs,
ctx=_ctx, name=name)
_result = None
return _result
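# Illustrative sketch of a V2 round trip (hypothetical prefix): run the save
# op, then read the value back with `restore_v2` using the same prefix, names,
# and matching dtypes.
def _example_save_v2():
    return save_v2(prefix="/tmp/model.ckpt",
                   tensor_names=["w"],
                   shape_and_slices=[""],
                   tensors=[_ops.convert_to_tensor([1.0, 2.0])])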
def sharded_filename(basename, shard, num_shards, name=None):
r"""Generate a sharded filename. The filename is printf formatted as
%s-%05d-of-%05d, basename, shard, num_shards.
Args:
basename: A `Tensor` of type `string`.
shard: A `Tensor` of type `int32`.
num_shards: A `Tensor` of type `int32`.
name: A name for the operation (optional).
Returns:
A `Tensor` of type `string`.
"""
_ctx = _context._context
if _ctx is not None and _ctx._eager_context.is_eager:
try:
_result = _pywrap_tensorflow.TFE_Py_FastPathExecute(
_ctx._context_handle, _ctx._eager_context.device_name,
"ShardedFilename", name, _ctx._post_execution_callbacks, basename,
shard, num_shards)
return _result
except _core._FallbackException:
try:
return sharded_filename_eager_fallback(
basename, shard, num_shards, name=name, ctx=_ctx)
except _core._SymbolicException:
pass # Add nodes to the TensorFlow graph.
except _core._NotOkStatusException as e:
if name is not None:
message = e.message + " name: " + name
else:
message = e.message
_six.raise_from(_core._status_to_exception(e.code, message), None)
# Add nodes to the TensorFlow graph.
_, _, _op = _op_def_lib._apply_op_helper(
"ShardedFilename", basename=basename, shard=shard,
num_shards=num_shards, name=name)
_result = _op.outputs[:]
_inputs_flat = _op.inputs
_attrs = None
_execute.record_gradient(
"ShardedFilename", _inputs_flat, _attrs, _result, name)
_result, = _result
return _result
def sharded_filename_eager_fallback(basename, shard, num_shards, name=None, ctx=None):
r"""This is the slowpath function for Eager mode.
This is for function sharded_filename
"""
_ctx = ctx if ctx else _context.context()
basename = _ops.convert_to_tensor(basename, _dtypes.string)
shard = _ops.convert_to_tensor(shard, _dtypes.int32)
num_shards = _ops.convert_to_tensor(num_shards, _dtypes.int32)
_inputs_flat = [basename, shard, num_shards]
_attrs = None
_result = _execute.execute(b"ShardedFilename", 1, inputs=_inputs_flat,
attrs=_attrs, ctx=_ctx, name=name)
_execute.record_gradient(
"ShardedFilename", _inputs_flat, _attrs, _result, name)
_result, = _result
return _result
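# Illustrative sketch: for basename "model", shard 2 of 10 the op yields the
# scalar string b"model-00002-of-00010" per the printf format above.
def _example_sharded_filename():
    return sharded_filename(basename="model", shard=2, num_shards=10)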
def sharded_filespec(basename, num_shards, name=None):
r"""Generate a glob pattern matching all sharded file names.
Args:
basename: A `Tensor` of type `string`.
num_shards: A `Tensor` of type `int32`.
name: A name for the operation (optional).
Returns:
A `Tensor` of type `string`.
"""
_ctx = _context._context
if _ctx is not None and _ctx._eager_context.is_eager:
try:
_result = _pywrap_tensorflow.TFE_Py_FastPathExecute(
_ctx._context_handle, _ctx._eager_context.device_name,
"ShardedFilespec", name, _ctx._post_execution_callbacks, basename,
num_shards)
return _result
except _core._FallbackException:
try:
return sharded_filespec_eager_fallback(
basename, num_shards, name=name, ctx=_ctx)
except _core._SymbolicException:
pass # Add nodes to the TensorFlow graph.
except _core._NotOkStatusException as e:
if name is not None:
message = e.message + " name: " + name
else:
message = e.message
_six.raise_from(_core._status_to_exception(e.code, message), None)
# Add nodes to the TensorFlow graph.
_, _, _op = _op_def_lib._apply_op_helper(
"ShardedFilespec", basename=basename, num_shards=num_shards,
name=name)
_result = _op.outputs[:]
_inputs_flat = _op.inputs
_attrs = None
_execute.record_gradient(
"ShardedFilespec", _inputs_flat, _attrs, _result, name)
_result, = _result
return _result
def sharded_filespec_eager_fallback(basename, num_shards, name=None, ctx=None):
r"""This is the slowpath function for Eager mode.
This is for function sharded_filespec
"""
_ctx = ctx if ctx else _context.context()
basename = _ops.convert_to_tensor(basename, _dtypes.string)
num_shards = _ops.convert_to_tensor(num_shards, _dtypes.int32)
_inputs_flat = [basename, num_shards]
_attrs = None
_result = _execute.execute(b"ShardedFilespec", 1, inputs=_inputs_flat,
attrs=_attrs, ctx=_ctx, name=name)
_execute.record_gradient(
"ShardedFilespec", _inputs_flat, _attrs, _result, name)
_result, = _result
return _result
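# Illustrative sketch: yields a glob pattern (e.g. b"model-?????-of-00010")
# intended to match every shard written via `sharded_filename` with the same
# basename and shard count; the exact pattern shown here is an assumption.
def _example_sharded_filespec():
    return sharded_filespec(basename="model", num_shards=10)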
def tf_record_reader(container="", shared_name="", compression_type="", name=None):
r"""A Reader that outputs the records from a TensorFlow Records file.
Args:
container: An optional `string`. Defaults to `""`.
If non-empty, this reader is placed in the given container.
Otherwise, a default container is used.
shared_name: An optional `string`. Defaults to `""`.
If non-empty, this reader is named in the given bucket
with this shared_name. Otherwise, the node name is used instead.
compression_type: An optional `string`. Defaults to `""`.
name: A name for the operation (optional).
Returns:
A `Tensor` of type mutable `string`.
"""
_ctx = _context._context
if _ctx is not None and _ctx._eager_context.is_eager:
raise RuntimeError("tf_record_reader op does not support eager execution. Arg 'reader_handle' is a ref.")
# Add nodes to the TensorFlow graph.
if container is None:
container = ""
container = _execute.make_str(container, "container")
if shared_name is None:
shared_name = ""
shared_name = _execute.make_str(shared_name, "shared_name")
if compression_type is None:
compression_type = ""
compression_type = _execute.make_str(compression_type, "compression_type")
_, _, _op = _op_def_lib._apply_op_helper(
"TFRecordReader", container=container, shared_name=shared_name,
compression_type=compression_type, name=name)
_result = _op.outputs[:]
_inputs_flat = _op.inputs
_attrs = ("container", _op.get_attr("container"), "shared_name",
_op.get_attr("shared_name"), "compression_type",
_op.get_attr("compression_type"))
_execute.record_gradient(
"TFRecordReader", _inputs_flat, _attrs, _result, name)
_result, = _result
return _result
def tf_record_reader_eager_fallback(container="", shared_name="", compression_type="", name=None, ctx=None):
raise RuntimeError("tf_record_reader op does not support eager execution. Arg 'reader_handle' is a ref.")
def tf_record_reader_v2(container="", shared_name="", compression_type="", name=None):
r"""A Reader that outputs the records from a TensorFlow Records file.
Args:
container: An optional `string`. Defaults to `""`.
If non-empty, this reader is placed in the given container.
Otherwise, a default container is used.
shared_name: An optional `string`. Defaults to `""`.
If non-empty, this reader is named in the given bucket
with this shared_name. Otherwise, the node name is used instead.
compression_type: An optional `string`. Defaults to `""`.
name: A name for the operation (optional).
Returns:
A `Tensor` of type `resource`.
"""
_ctx = _context._context
if _ctx is not None and _ctx._eager_context.is_eager:
try:
_result = _pywrap_tensorflow.TFE_Py_FastPathExecute(
_ctx._context_handle, _ctx._eager_context.device_name,
"TFRecordReaderV2", name, _ctx._post_execution_callbacks, "container",
container, "shared_name", shared_name, "compression_type",
compression_type)
return _result
except _core._FallbackException:
try:
return tf_record_reader_v2_eager_fallback(
container=container, shared_name=shared_name,
compression_type=compression_type, name=name, ctx=_ctx)
except _core._SymbolicException:
pass # Add nodes to the TensorFlow graph.
except _core._NotOkStatusException as e:
if name is not None:
message = e.message + " name: " + name
else:
message = e.message
_six.raise_from(_core._status_to_exception(e.code, message), None)
# Add nodes to the TensorFlow graph.
if container is None:
container = ""
container = _execute.make_str(container, "container")
if shared_name is None:
shared_name = ""
shared_name = _execute.make_str(shared_name, "shared_name")
if compression_type is None:
compression_type = ""
compression_type = _execute.make_str(compression_type, "compression_type")
_, _, _op = _op_def_lib._apply_op_helper(
"TFRecordReaderV2", container=container, shared_name=shared_name,
compression_type=compression_type, name=name)
_result = _op.outputs[:]
_inputs_flat = _op.inputs
_attrs = ("container", _op.get_attr("container"), "shared_name",
_op.get_attr("shared_name"), "compression_type",
_op.get_attr("compression_type"))
_execute.record_gradient(
"TFRecordReaderV2", _inputs_flat, _attrs, _result, name)
_result, = _result
return _result
def tf_record_reader_v2_eager_fallback(container="", shared_name="", compression_type="", name=None, ctx=None):
r"""This is the slowpath function for Eager mode.
This is for function tf_record_reader_v2
"""
_ctx = ctx if ctx else _context.context()
if container is None:
container = ""
container = _execute.make_str(container, "container")
if shared_name is None:
shared_name = ""
shared_name = _execute.make_str(shared_name, "shared_name")
if compression_type is None:
compression_type = ""
compression_type = _execute.make_str(compression_type, "compression_type")
_inputs_flat = []
_attrs = ("container", container, "shared_name", shared_name,
"compression_type", compression_type)
_result = _execute.execute(b"TFRecordReaderV2", 1, inputs=_inputs_flat,
attrs=_attrs, ctx=_ctx, name=name)
_execute.record_gradient(
"TFRecordReaderV2", _inputs_flat, _attrs, _result, name)
_result, = _result
return _result
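# Illustrative sketch: the V2 reader handle is a resource tensor that is
# consumed by the companion ReaderReadV2 op together with a queue of input
# filenames (the pairing is assumed from the op protos listed at the bottom
# of this file).
def _example_tf_record_reader_v2():
    return tf_record_reader_v2(compression_type="GZIP")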
def text_line_reader(skip_header_lines=0, container="", shared_name="", name=None):
r"""A Reader that outputs the lines of a file delimited by '\n'.
Args:
skip_header_lines: An optional `int`. Defaults to `0`.
Number of lines to skip from the beginning of every file.
container: An optional `string`. Defaults to `""`.
If non-empty, this reader is placed in the given container.
Otherwise, a default container is used.
shared_name: An optional `string`. Defaults to `""`.
If non-empty, this reader is named in the given bucket
with this shared_name. Otherwise, the node name is used instead.
name: A name for the operation (optional).
Returns:
A `Tensor` of type mutable `string`.
"""
_ctx = _context._context
if _ctx is not None and _ctx._eager_context.is_eager:
raise RuntimeError("text_line_reader op does not support eager execution. Arg 'reader_handle' is a ref.")
# Add nodes to the TensorFlow graph.
if skip_header_lines is None:
skip_header_lines = 0
skip_header_lines = _execute.make_int(skip_header_lines, "skip_header_lines")
if container is None:
container = ""
container = _execute.make_str(container, "container")
if shared_name is None:
shared_name = ""
shared_name = _execute.make_str(shared_name, "shared_name")
_, _, _op = _op_def_lib._apply_op_helper(
"TextLineReader", skip_header_lines=skip_header_lines,
container=container, shared_name=shared_name,
name=name)
_result = _op.outputs[:]
_inputs_flat = _op.inputs
_attrs = ("skip_header_lines", _op.get_attr("skip_header_lines"),
"container", _op.get_attr("container"), "shared_name",
_op.get_attr("shared_name"))
_execute.record_gradient(
"TextLineReader", _inputs_flat, _attrs, _result, name)
_result, = _result
return _result
def text_line_reader_eager_fallback(skip_header_lines=0, container="", shared_name="", name=None, ctx=None):
raise RuntimeError("text_line_reader op does not support eager execution. Arg 'reader_handle' is a ref.")
def text_line_reader_v2(skip_header_lines=0, container="", shared_name="", name=None):
r"""A Reader that outputs the lines of a file delimited by '\n'.
Args:
skip_header_lines: An optional `int`. Defaults to `0`.
Number of lines to skip from the beginning of every file.
container: An optional `string`. Defaults to `""`.
If non-empty, this reader is placed in the given container.
Otherwise, a default container is used.
shared_name: An optional `string`. Defaults to `""`.
If non-empty, this reader is named in the given bucket
with this shared_name. Otherwise, the node name is used instead.
name: A name for the operation (optional).
Returns:
A `Tensor` of type `resource`.
"""
_ctx = _context._context
if _ctx is not None and _ctx._eager_context.is_eager:
try:
_result = _pywrap_tensorflow.TFE_Py_FastPathExecute(
_ctx._context_handle, _ctx._eager_context.device_name,
"TextLineReaderV2", name, _ctx._post_execution_callbacks,
"skip_header_lines", skip_header_lines, "container", container,
"shared_name", shared_name)
return _result
except _core._FallbackException:
try:
return text_line_reader_v2_eager_fallback(
skip_header_lines=skip_header_lines, container=container,
shared_name=shared_name, name=name, ctx=_ctx)
except _core._SymbolicException:
pass # Add nodes to the TensorFlow graph.
except _core._NotOkStatusException as e:
if name is not None:
message = e.message + " name: " + name
else:
message = e.message
_six.raise_from(_core._status_to_exception(e.code, message), None)
# Add nodes to the TensorFlow graph.
if skip_header_lines is None:
skip_header_lines = 0
skip_header_lines = _execute.make_int(skip_header_lines, "skip_header_lines")
if container is None:
container = ""
container = _execute.make_str(container, "container")
if shared_name is None:
shared_name = ""
shared_name = _execute.make_str(shared_name, "shared_name")
_, _, _op = _op_def_lib._apply_op_helper(
"TextLineReaderV2", skip_header_lines=skip_header_lines,
container=container, shared_name=shared_name,
name=name)
_result = _op.outputs[:]
_inputs_flat = _op.inputs
_attrs = ("skip_header_lines", _op.get_attr("skip_header_lines"),
"container", _op.get_attr("container"), "shared_name",
_op.get_attr("shared_name"))
_execute.record_gradient(
"TextLineReaderV2", _inputs_flat, _attrs, _result, name)
_result, = _result
return _result
def text_line_reader_v2_eager_fallback(skip_header_lines=0, container="", shared_name="", name=None, ctx=None):
r"""This is the slowpath function for Eager mode.
This is for function text_line_reader_v2
"""
_ctx = ctx if ctx else _context.context()
if skip_header_lines is None:
skip_header_lines = 0
skip_header_lines = _execute.make_int(skip_header_lines, "skip_header_lines")
if container is None:
container = ""
container = _execute.make_str(container, "container")
if shared_name is None:
shared_name = ""
shared_name = _execute.make_str(shared_name, "shared_name")
_inputs_flat = []
_attrs = ("skip_header_lines", skip_header_lines, "container", container,
"shared_name", shared_name)
_result = _execute.execute(b"TextLineReaderV2", 1, inputs=_inputs_flat,
attrs=_attrs, ctx=_ctx, name=name)
_execute.record_gradient(
"TextLineReaderV2", _inputs_flat, _attrs, _result, name)
_result, = _result
return _result
def whole_file_reader(container="", shared_name="", name=None):
r"""A Reader that outputs the entire contents of a file as a value.
To use, enqueue filenames in a Queue. The output of ReaderRead will
be a filename (key) and the contents of that file (value).
Args:
container: An optional `string`. Defaults to `""`.
If non-empty, this reader is placed in the given container.
Otherwise, a default container is used.
shared_name: An optional `string`. Defaults to `""`.
If non-empty, this reader is named in the given bucket
with this shared_name. Otherwise, the node name is used instead.
name: A name for the operation (optional).
Returns:
A `Tensor` of type mutable `string`.
"""
_ctx = _context._context
if _ctx is not None and _ctx._eager_context.is_eager:
raise RuntimeError("whole_file_reader op does not support eager execution. Arg 'reader_handle' is a ref.")
# Add nodes to the TensorFlow graph.
if container is None:
container = ""
container = _execute.make_str(container, "container")
if shared_name is None:
shared_name = ""
shared_name = _execute.make_str(shared_name, "shared_name")
_, _, _op = _op_def_lib._apply_op_helper(
"WholeFileReader", container=container, shared_name=shared_name,
name=name)
_result = _op.outputs[:]
_inputs_flat = _op.inputs
_attrs = ("container", _op.get_attr("container"), "shared_name",
_op.get_attr("shared_name"))
_execute.record_gradient(
"WholeFileReader", _inputs_flat, _attrs, _result, name)
_result, = _result
return _result
def whole_file_reader_eager_fallback(container="", shared_name="", name=None, ctx=None):
raise RuntimeError("whole_file_reader op does not support eager execution. Arg 'reader_handle' is a ref.")
def whole_file_reader_v2(container="", shared_name="", name=None):
r"""A Reader that outputs the entire contents of a file as a value.
To use, enqueue filenames in a Queue. The output of ReaderRead will
be a filename (key) and the contents of that file (value).
Args:
container: An optional `string`. Defaults to `""`.
If non-empty, this reader is placed in the given container.
Otherwise, a default container is used.
shared_name: An optional `string`. Defaults to `""`.
If non-empty, this reader is named in the given bucket
with this shared_name. Otherwise, the node name is used instead.
name: A name for the operation (optional).
Returns:
A `Tensor` of type `resource`.
"""
_ctx = _context._context
if _ctx is not None and _ctx._eager_context.is_eager:
try:
_result = _pywrap_tensorflow.TFE_Py_FastPathExecute(
_ctx._context_handle, _ctx._eager_context.device_name,
"WholeFileReaderV2", name, _ctx._post_execution_callbacks,
"container", container, "shared_name", shared_name)
return _result
except _core._FallbackException:
try:
return whole_file_reader_v2_eager_fallback(
container=container, shared_name=shared_name, name=name, ctx=_ctx)
except _core._SymbolicException:
pass # Add nodes to the TensorFlow graph.
except _core._NotOkStatusException as e:
if name is not None:
message = e.message + " name: " + name
else:
message = e.message
_six.raise_from(_core._status_to_exception(e.code, message), None)
# Add nodes to the TensorFlow graph.
if container is None:
container = ""
container = _execute.make_str(container, "container")
if shared_name is None:
shared_name = ""
shared_name = _execute.make_str(shared_name, "shared_name")
_, _, _op = _op_def_lib._apply_op_helper(
"WholeFileReaderV2", container=container, shared_name=shared_name,
name=name)
_result = _op.outputs[:]
_inputs_flat = _op.inputs
_attrs = ("container", _op.get_attr("container"), "shared_name",
_op.get_attr("shared_name"))
_execute.record_gradient(
"WholeFileReaderV2", _inputs_flat, _attrs, _result, name)
_result, = _result
return _result
def whole_file_reader_v2_eager_fallback(container="", shared_name="", name=None, ctx=None):
r"""This is the slowpath function for Eager mode.
This is for function whole_file_reader_v2
"""
_ctx = ctx if ctx else _context.context()
if container is None:
container = ""
container = _execute.make_str(container, "container")
if shared_name is None:
shared_name = ""
shared_name = _execute.make_str(shared_name, "shared_name")
_inputs_flat = []
_attrs = ("container", container, "shared_name", shared_name)
_result = _execute.execute(b"WholeFileReaderV2", 1, inputs=_inputs_flat,
attrs=_attrs, ctx=_ctx, name=name)
_execute.record_gradient(
"WholeFileReaderV2", _inputs_flat, _attrs, _result, name)
_result, = _result
return _result
@_dispatch.add_dispatch_list
@tf_export('io.write_file', v1=['io.write_file', 'write_file'])
@deprecated_endpoints('write_file')
def write_file(filename, contents, name=None):
r"""Writes contents to the file at input filename. Creates file and recursively
creates directory if not existing.
Args:
filename: A `Tensor` of type `string`.
scalar. The name of the file to which we write the contents.
contents: A `Tensor` of type `string`.
scalar. The content to be written to the output file.
name: A name for the operation (optional).
Returns:
The created Operation.
"""
_ctx = _context._context
if _ctx is not None and _ctx._eager_context.is_eager:
try:
_result = _pywrap_tensorflow.TFE_Py_FastPathExecute(
_ctx._context_handle, _ctx._eager_context.device_name, "WriteFile",
name, _ctx._post_execution_callbacks, filename, contents)
return _result
except _core._FallbackException:
try:
return write_file_eager_fallback(
filename, contents, name=name, ctx=_ctx)
except _core._SymbolicException:
pass # Add nodes to the TensorFlow graph.
except (TypeError, ValueError):
result = _dispatch.dispatch(
write_file, filename=filename, contents=contents, name=name)
if result is not _dispatch.OpDispatcher.NOT_SUPPORTED:
return result
raise
except _core._NotOkStatusException as e:
if name is not None:
message = e.message + " name: " + name
else:
message = e.message
_six.raise_from(_core._status_to_exception(e.code, message), None)
# Add nodes to the TensorFlow graph.
try:
_, _, _op = _op_def_lib._apply_op_helper(
"WriteFile", filename=filename, contents=contents, name=name)
except (TypeError, ValueError):
result = _dispatch.dispatch(
write_file, filename=filename, contents=contents, name=name)
if result is not _dispatch.OpDispatcher.NOT_SUPPORTED:
return result
raise
return _op
_result = None
return _result
def write_file_eager_fallback(filename, contents, name=None, ctx=None):
r"""This is the slowpath function for Eager mode.
This is for function write_file
"""
_ctx = ctx if ctx else _context.context()
filename = _ops.convert_to_tensor(filename, _dtypes.string)
contents = _ops.convert_to_tensor(contents, _dtypes.string)
_inputs_flat = [filename, contents]
_attrs = None
_result = _execute.execute(b"WriteFile", 0, inputs=_inputs_flat,
attrs=_attrs, ctx=_ctx, name=name)
_result = None
return _result
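# Illustrative sketch (hypothetical path): running the returned op writes
# "hello" to /tmp/hello.txt, creating the file and any missing directories;
# the companion ReadFile op (see the proto comments below) reads it back.
def _example_write_file():
    return write_file(filename="/tmp/hello.txt", contents="hello")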
def _InitOpDefLibrary(op_list_proto_bytes):
op_list = _op_def_pb2.OpList()
op_list.ParseFromString(op_list_proto_bytes)
_op_def_registry.register_op_list(op_list)
op_def_lib = _op_def_library.OpDefLibrary()
op_def_lib.add_op_list(op_list)
return op_def_lib
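# Illustrative sketch: the serialized bytes passed to _InitOpDefLibrary below
# decode back into the human-readable `op { ... }` protos reproduced in the
# comments that follow.
def _example_decode_op_list(op_list_proto_bytes):
    op_list = _op_def_pb2.OpList()
    op_list.ParseFromString(op_list_proto_bytes)
    return [op.name for op in op_list.op]  # ["FixedLengthRecordReader", ...]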
# op {
# name: "FixedLengthRecordReader"
# output_arg {
# name: "reader_handle"
# type: DT_STRING
# is_ref: true
# }
# attr {
# name: "header_bytes"
# type: "int"
# default_value {
# i: 0
# }
# }
# attr {
# name: "record_bytes"
# type: "int"
# }
# attr {
# name: "footer_bytes"
# type: "int"
# default_value {
# i: 0
# }
# }
# attr {
# name: "hop_bytes"
# type: "int"
# default_value {
# i: 0
# }
# }
# attr {
# name: "container"
# type: "string"
# default_value {
# s: ""
# }
# }
# attr {
# name: "shared_name"
# type: "string"
# default_value {
# s: ""
# }
# }
# deprecation {
# version: 26
# explanation: "Use FixedLengthRecordReaderV2"
# }
# is_stateful: true
# }
# op {
# name: "FixedLengthRecordReaderV2"
# output_arg {
# name: "reader_handle"
# type: DT_RESOURCE
# }
# attr {
# name: "header_bytes"
# type: "int"
# default_value {
# i: 0
# }
# }
# attr {
# name: "record_bytes"
# type: "int"
# }
# attr {
# name: "footer_bytes"
# type: "int"
# default_value {
# i: 0
# }
# }
# attr {
# name: "hop_bytes"
# type: "int"
# default_value {
# i: 0
# }
# }
# attr {
# name: "container"
# type: "string"
# default_value {
# s: ""
# }
# }
# attr {
# name: "shared_name"
# type: "string"
# default_value {
# s: ""
# }
# }
# attr {
# name: "encoding"
# type: "string"
# default_value {
# s: ""
# }
# }
# is_stateful: true
# }
# op {
# name: "IdentityReader"
# output_arg {
# name: "reader_handle"
# type: DT_STRING
# is_ref: true
# }
# attr {
# name: "container"
# type: "string"
# default_value {
# s: ""
# }
# }
# attr {
# name: "shared_name"
# type: "string"
# default_value {
# s: ""
# }
# }
# deprecation {
# version: 26
# explanation: "Use IdentityReaderV2"
# }
# is_stateful: true
# }
# op {
# name: "IdentityReaderV2"
# output_arg {
# name: "reader_handle"
# type: DT_RESOURCE
# }
# attr {
# name: "container"
# type: "string"
# default_value {
# s: ""
# }
# }
# attr {
# name: "shared_name"
# type: "string"
# default_value {
# s: ""
# }
# }
# is_stateful: true
# }
# op {
# name: "LMDBReader"
# output_arg {
# name: "reader_handle"
# type: DT_STRING
# is_ref: true
# }
# attr {
# name: "container"
# type: "string"
# default_value {
# s: ""
# }
# }
# attr {
# name: "shared_name"
# type: "string"
# default_value {
# s: ""
# }
# }
# is_stateful: true
# }
# op {
# name: "MatchingFiles"
# input_arg {
# name: "pattern"
# type: DT_STRING
# }
# output_arg {
# name: "filenames"
# type: DT_STRING
# }
# }
# op {
# name: "MergeV2Checkpoints"
# input_arg {
# name: "checkpoint_prefixes"
# type: DT_STRING
# }
# input_arg {
# name: "destination_prefix"
# type: DT_STRING
# }
# attr {
# name: "delete_old_dirs"
# type: "bool"
# default_value {
# b: true
# }
# }
# is_stateful: true
# }
# op {
# name: "ReadFile"
# input_arg {
# name: "filename"
# type: DT_STRING
# }
# output_arg {
# name: "contents"
# type: DT_STRING
# }
# }
# op {
# name: "ReaderNumRecordsProduced"
# input_arg {
# name: "reader_handle"
# type: DT_STRING
# is_ref: true
# }
# output_arg {
# name: "records_produced"
# type: DT_INT64
# }
# }
# op {
# name: "ReaderNumRecordsProducedV2"
# input_arg {
# name: "reader_handle"
# type: DT_RESOURCE
# }
# output_arg {
# name: "records_produced"
# type: DT_INT64
# }
# is_stateful: true
# }
# op {
# name: "ReaderNumWorkUnitsCompleted"
# input_arg {
# name: "reader_handle"
# type: DT_STRING
# is_ref: true
# }
# output_arg {
# name: "units_completed"
# type: DT_INT64
# }
# }
# op {
# name: "ReaderNumWorkUnitsCompletedV2"
# input_arg {
# name: "reader_handle"
# type: DT_RESOURCE
# }
# output_arg {
# name: "units_completed"
# type: DT_INT64
# }
# is_stateful: true
# }
# op {
# name: "ReaderRead"
# input_arg {
# name: "reader_handle"
# type: DT_STRING
# is_ref: true
# }
# input_arg {
# name: "queue_handle"
# type: DT_STRING
# is_ref: true
# }
# output_arg {
# name: "key"
# type: DT_STRING
# }
# output_arg {
# name: "value"
# type: DT_STRING
# }
# }
# op {
# name: "ReaderReadUpTo"
# input_arg {
# name: "reader_handle"
# type: DT_STRING
# is_ref: true
# }
# input_arg {
# name: "queue_handle"
# type: DT_STRING
# is_ref: true
# }
# input_arg {
# name: "num_records"
# type: DT_INT64
# }
# output_arg {
# name: "keys"
# type: DT_STRING
# }
# output_arg {
# name: "values"
# type: DT_STRING
# }
# }
# op {
# name: "ReaderReadUpToV2"
# input_arg {
# name: "reader_handle"
# type: DT_RESOURCE
# }
# input_arg {
# name: "queue_handle"
# type: DT_RESOURCE
# }
# input_arg {
# name: "num_records"
# type: DT_INT64
# }
# output_arg {
# name: "keys"
# type: DT_STRING
# }
# output_arg {
# name: "values"
# type: DT_STRING
# }
# is_stateful: true
# }
# op {
# name: "ReaderReadV2"
# input_arg {
# name: "reader_handle"
# type: DT_RESOURCE
# }
# input_arg {
# name: "queue_handle"
# type: DT_RESOURCE
# }
# output_arg {
# name: "key"
# type: DT_STRING
# }
# output_arg {
# name: "value"
# type: DT_STRING
# }
# is_stateful: true
# }
# op {
# name: "ReaderReset"
# input_arg {
# name: "reader_handle"
# type: DT_STRING
# is_ref: true
# }
# }
# op {
# name: "ReaderResetV2"
# input_arg {
# name: "reader_handle"
# type: DT_RESOURCE
# }
# is_stateful: true
# }
# op {
# name: "ReaderRestoreState"
# input_arg {
# name: "reader_handle"
# type: DT_STRING
# is_ref: true
# }
# input_arg {
# name: "state"
# type: DT_STRING
# }
# }
# op {
# name: "ReaderRestoreStateV2"
# input_arg {
# name: "reader_handle"
# type: DT_RESOURCE
# }
# input_arg {
# name: "state"
# type: DT_STRING
# }
# is_stateful: true
# }
# op {
# name: "ReaderSerializeState"
# input_arg {
# name: "reader_handle"
# type: DT_STRING
# is_ref: true
# }
# output_arg {
# name: "state"
# type: DT_STRING
# }
# }
# op {
# name: "ReaderSerializeStateV2"
# input_arg {
# name: "reader_handle"
# type: DT_RESOURCE
# }
# output_arg {
# name: "state"
# type: DT_STRING
# }
# is_stateful: true
# }
# op {
# name: "Restore"
# input_arg {
# name: "file_pattern"
# type: DT_STRING
# }
# input_arg {
# name: "tensor_name"
# type: DT_STRING
# }
# output_arg {
# name: "tensor"
# type_attr: "dt"
# }
# attr {
# name: "dt"
# type: "type"
# }
# attr {
# name: "preferred_shard"
# type: "int"
# default_value {
# i: -1
# }
# }
# is_stateful: true
# }
# op {
# name: "RestoreSlice"
# input_arg {
# name: "file_pattern"
# type: DT_STRING
# }
# input_arg {
# name: "tensor_name"
# type: DT_STRING
# }
# input_arg {
# name: "shape_and_slice"
# type: DT_STRING
# }
# output_arg {
# name: "tensor"
# type_attr: "dt"
# }
# attr {
# name: "dt"
# type: "type"
# }
# attr {
# name: "preferred_shard"
# type: "int"
# default_value {
# i: -1
# }
# }
# is_stateful: true
# }
# op {
# name: "RestoreV2"
# input_arg {
# name: "prefix"
# type: DT_STRING
# }
# input_arg {
# name: "tensor_names"
# type: DT_STRING
# }
# input_arg {
# name: "shape_and_slices"
# type: DT_STRING
# }
# output_arg {
# name: "tensors"
# type_list_attr: "dtypes"
# }
# attr {
# name: "dtypes"
# type: "list(type)"
# has_minimum: true
# minimum: 1
# }
# is_stateful: true
# }
# op {
# name: "Save"
# input_arg {
# name: "filename"
# type: DT_STRING
# }
# input_arg {
# name: "tensor_names"
# type: DT_STRING
# }
# input_arg {
# name: "data"
# type_list_attr: "T"
# }
# attr {
# name: "T"
# type: "list(type)"
# has_minimum: true
# minimum: 1
# }
# is_stateful: true
# }
# op {
# name: "SaveSlices"
# input_arg {
# name: "filename"
# type: DT_STRING
# }
# input_arg {
# name: "tensor_names"
# type: DT_STRING
# }
# input_arg {
# name: "shapes_and_slices"
# type: DT_STRING
# }
# input_arg {
# name: "data"
# type_list_attr: "T"
# }
# attr {
# name: "T"
# type: "list(type)"
# has_minimum: true
# minimum: 1
# }
# is_stateful: true
# }
# op {
# name: "SaveV2"
# input_arg {
# name: "prefix"
# type: DT_STRING
# }
# input_arg {
# name: "tensor_names"
# type: DT_STRING
# }
# input_arg {
# name: "shape_and_slices"
# type: DT_STRING
# }
# input_arg {
# name: "tensors"
# type_list_attr: "dtypes"
# }
# attr {
# name: "dtypes"
# type: "list(type)"
# has_minimum: true
# minimum: 1
# }
# is_stateful: true
# }
# op {
# name: "ShardedFilename"
# input_arg {
# name: "basename"
# type: DT_STRING
# }
# input_arg {
# name: "shard"
# type: DT_INT32
# }
# input_arg {
# name: "num_shards"
# type: DT_INT32
# }
# output_arg {
# name: "filename"
# type: DT_STRING
# }
# }
# op {
# name: "ShardedFilespec"
# input_arg {
# name: "basename"
# type: DT_STRING
# }
# input_arg {
# name: "num_shards"
# type: DT_INT32
# }
# output_arg {
# name: "filename"
# type: DT_STRING
# }
# }
# op {
# name: "TFRecordReader"
# output_arg {
# name: "reader_handle"
# type: DT_STRING
# is_ref: true
# }
# attr {
# name: "container"
# type: "string"
# default_value {
# s: ""
# }
# }
# attr {
# name: "shared_name"
# type: "string"
# default_value {
# s: ""
# }
# }
# attr {
# name: "compression_type"
# type: "string"
# default_value {
# s: ""
# }
# }
# deprecation {
# version: 26
# explanation: "Use TFRecordReaderV2"
# }
# is_stateful: true
# }
# op {
# name: "TFRecordReaderV2"
# output_arg {
# name: "reader_handle"
# type: DT_RESOURCE
# }
# attr {
# name: "container"
# type: "string"
# default_value {
# s: ""
# }
# }
# attr {
# name: "shared_name"
# type: "string"
# default_value {
# s: ""
# }
# }
# attr {
# name: "compression_type"
# type: "string"
# default_value {
# s: ""
# }
# }
# is_stateful: true
# }
# op {
# name: "TextLineReader"
# output_arg {
# name: "reader_handle"
# type: DT_STRING
# is_ref: true
# }
# attr {
# name: "skip_header_lines"
# type: "int"
# default_value {
# i: 0
# }
# }
# attr {
# name: "container"
# type: "string"
# default_value {
# s: ""
# }
# }
# attr {
# name: "shared_name"
# type: "string"
# default_value {
# s: ""
# }
# }
# deprecation {
# version: 26
# explanation: "Use TextLineReaderV2"
# }
# is_stateful: true
# }
# op {
# name: "TextLineReaderV2"
# output_arg {
# name: "reader_handle"
# type: DT_RESOURCE
# }
# attr {
# name: "skip_header_lines"
# type: "int"
# default_value {
# i: 0
# }
# }
# attr {
# name: "container"
# type: "string"
# default_value {
# s: ""
# }
# }
# attr {
# name: "shared_name"
# type: "string"
# default_value {
# s: ""
# }
# }
# is_stateful: true
# }
# op {
# name: "WholeFileReader"
# output_arg {
# name: "reader_handle"
# type: DT_STRING
# is_ref: true
# }
# attr {
# name: "container"
# type: "string"
# default_value {
# s: ""
# }
# }
# attr {
# name: "shared_name"
# type: "string"
# default_value {
# s: ""
# }
# }
# is_stateful: true
# }
# op {
# name: "WholeFileReaderV2"
# output_arg {
# name: "reader_handle"
# type: DT_RESOURCE
# }
# attr {
# name: "container"
# type: "string"
# default_value {
# s: ""
# }
# }
# attr {
# name: "shared_name"
# type: "string"
# default_value {
# s: ""
# }
# }
# is_stateful: true
# }
# op {
# name: "WriteFile"
# input_arg {
# name: "filename"
# type: DT_STRING
# }
# input_arg {
# name: "contents"
# type: DT_STRING
# }
# }
_op_def_lib = _InitOpDefLibrary(b"\n\346\001\n\027FixedLengthRecordReader\032\024\n\rreader_handle\030\007\200\001\001\"\027\n\014header_bytes\022\003int\032\002\030\000\"\023\n\014record_bytes\022\003int\"\027\n\014footer_bytes\022\003int\032\002\030\000\"\024\n\thop_bytes\022\003int\032\002\030\000\"\027\n\tcontainer\022\006string\032\002\022\000\"\031\n\013shared_name\022\006string\032\002\022\000B!\010\032\022\035Use FixedLengthRecordReaderV2\210\001\001\n\332\001\n\031FixedLengthRecordReaderV2\032\021\n\rreader_handle\030\024\"\027\n\014header_bytes\022\003int\032\002\030\000\"\023\n\014record_bytes\022\003int\"\027\n\014footer_bytes\022\003int\032\002\030\000\"\024\n\thop_bytes\022\003int\032\002\030\000\"\027\n\tcontainer\022\006string\032\002\022\000\"\031\n\013shared_name\022\006string\032\002\022\000\"\026\n\010encoding\022\006string\032\002\022\000\210\001\001\nw\n\016IdentityReader\032\024\n\rreader_handle\030\007\200\001\001\"\027\n\tcontainer\022\006string\032\002\022\000\"\031\n\013shared_name\022\006string\032\002\022\000B\030\010\032\022\024Use IdentityReaderV2\210\001\001\n\\\n\020IdentityReaderV2\032\021\n\rreader_handle\030\024\"\027\n\tcontainer\022\006string\032\002\022\000\"\031\n\013shared_name\022\006string\032\002\022\000\210\001\001\nY\n\nLMDBReader\032\024\n\rreader_handle\030\007\200\001\001\"\027\n\tcontainer\022\006string\032\002\022\000\"\031\n\013shared_name\022\006string\032\002\022\000\210\001\001\n+\n\rMatchingFiles\022\013\n\007pattern\030\007\032\r\n\tfilenames\030\007\ne\n\022MergeV2Checkpoints\022\027\n\023checkpoint_prefixes\030\007\022\026\n\022destination_prefix\030\007\"\033\n\017delete_old_dirs\022\004bool\032\002(\001\210\001\001\n&\n\010ReadFile\022\014\n\010filename\030\007\032\014\n\010contents\030\007\nF\n\030ReaderNumRecordsProduced\022\024\n\rreader_handle\030\007\200\001\001\032\024\n\020records_produced\030\t\nH\n\032ReaderNumRecordsProducedV2\022\021\n\rreader_handle\030\024\032\024\n\020records_produced\030\t\210\001\001\nH\n\033ReaderNumWorkUnitsCompleted\022\024\n\rreader_handle\030\007\200\001\001\032\023\n\017units_completed\030\t\nJ\n\035ReaderNumWorkUnitsCompletedV2\022\021\n\rreader_handle\030\024\032\023\n\017units_completed\030\t\210\001\001\nK\n\nReaderRead\022\024\n\rreader_handle\030\007\200\001\001\022\023\n\014queue_handle\030\007\200\001\001\032\007\n\003key\030\007\032\t\n\005value\030\007\nb\n\016ReaderReadUpTo\022\024\n\rreader_handle\030\007\200\001\001\022\023\n\014queue_handle\030\007\200\001\001\022\017\n\013num_records\030\t\032\010\n\004keys\030\007\032\n\n\006values\030\007\na\n\020ReaderReadUpToV2\022\021\n\rreader_handle\030\024\022\020\n\014queue_handle\030\024\022\017\n\013num_records\030\t\032\010\n\004keys\030\007\032\n\n\006values\030\007\210\001\001\nJ\n\014ReaderReadV2\022\021\n\rreader_handle\030\024\022\020\n\014queue_handle\030\024\032\007\n\003key\030\007\032\t\n\005value\030\007\210\001\001\n#\n\013ReaderReset\022\024\n\rreader_handle\030\007\200\001\001\n%\n\rReaderResetV2\022\021\n\rreader_handle\030\024\210\001\001\n5\n\022ReaderRestoreState\022\024\n\rreader_handle\030\007\200\001\001\022\t\n\005state\030\007\n7\n\024ReaderRestoreStateV2\022\021\n\rreader_handle\030\024\022\t\n\005state\030\007\210\001\001\n7\n\024ReaderSerializeState\022\024\n\rreader_handle\030\007\200\001\001\032\t\n\005state\030\007\n9\n\026ReaderSerializeStateV2\022\021\n\rreader_handle\030\024\032\t\n\005state\030\007\210\001\001\nn\n\007Restore\022\020\n\014file_pattern\030\007\022\017\n\013tensor_name\030\007\032\014\n
\006tensor\"\002dt\"\n\n\002dt\022\004type\"#\n\017preferred_shard\022\003int\032\013\030\377\377\377\377\377\377\377\377\377\001\210\001\001\n\210\001\n\014RestoreSlice\022\020\n\014file_pattern\030\007\022\017\n\013tensor_name\030\007\022\023\n\017shape_and_slice\030\007\032\014\n\006tensor\"\002dt\"\n\n\002dt\022\004type\"#\n\017preferred_shard\022\003int\032\013\030\377\377\377\377\377\377\377\377\377\001\210\001\001\no\n\tRestoreV2\022\n\n\006prefix\030\007\022\020\n\014tensor_names\030\007\022\024\n\020shape_and_slices\030\007\032\021\n\007tensors2\006dtypes\"\030\n\006dtypes\022\nlist(type)(\0010\001\210\001\001\nI\n\004Save\022\014\n\010filename\030\007\022\020\n\014tensor_names\030\007\022\t\n\004data2\001T\"\023\n\001T\022\nlist(type)(\0010\001\210\001\001\nf\n\nSaveSlices\022\014\n\010filename\030\007\022\020\n\014tensor_names\030\007\022\025\n\021shapes_and_slices\030\007\022\t\n\004data2\001T\"\023\n\001T\022\nlist(type)(\0010\001\210\001\001\nl\n\006SaveV2\022\n\n\006prefix\030\007\022\020\n\014tensor_names\030\007\022\024\n\020shape_and_slices\030\007\022\021\n\007tensors2\006dtypes\"\030\n\006dtypes\022\nlist(type)(\0010\001\210\001\001\nH\n\017ShardedFilename\022\014\n\010basename\030\007\022\t\n\005shard\030\003\022\016\n\nnum_shards\030\003\032\014\n\010filename\030\007\n=\n\017ShardedFilespec\022\014\n\010basename\030\007\022\016\n\nnum_shards\030\003\032\014\n\010filename\030\007\n\227\001\n\016TFRecordReader\032\024\n\rreader_handle\030\007\200\001\001\"\027\n\tcontainer\022\006string\032\002\022\000\"\031\n\013shared_name\022\006string\032\002\022\000\"\036\n\020compression_type\022\006string\032\002\022\000B\030\010\032\022\024Use TFRecordReaderV2\210\001\001\n|\n\020TFRecordReaderV2\032\021\n\rreader_handle\030\024\"\027\n\tcontainer\022\006string\032\002\022\000\"\031\n\013shared_name\022\006string\032\002\022\000\"\036\n\020compression_type\022\006string\032\002\022\000\210\001\001\n\225\001\n\016TextLineReader\032\024\n\rreader_handle\030\007\200\001\001\"\034\n\021skip_header_lines\022\003int\032\002\030\000\"\027\n\tcontainer\022\006string\032\002\022\000\"\031\n\013shared_name\022\006string\032\002\022\000B\030\010\032\022\024Use TextLineReaderV2\210\001\001\nz\n\020TextLineReaderV2\032\021\n\rreader_handle\030\024\"\034\n\021skip_header_lines\022\003int\032\002\030\000\"\027\n\tcontainer\022\006string\032\002\022\000\"\031\n\013shared_name\022\006string\032\002\022\000\210\001\001\n^\n\017WholeFileReader\032\024\n\rreader_handle\030\007\200\001\001\"\027\n\tcontainer\022\006string\032\002\022\000\"\031\n\013shared_name\022\006string\032\002\022\000\210\001\001\n]\n\021WholeFileReaderV2\032\021\n\rreader_handle\030\024\"\027\n\tcontainer\022\006string\032\002\022\000\"\031\n\013shared_name\022\006string\032\002\022\000\210\001\001\n\'\n\tWriteFile\022\014\n\010filename\030\007\022\014\n\010contents\030\007")
# === stylizer/datasets/__init__.py (suyash/stylizer, BSD-3-Clause) ===
from .danbooru2017 import Danbooru2017
# === pyEOM/datasets/predefined/MODIS/MOD11A1.py (jonas-eberle/pyEOM, MIT) ===
__author__ = 'we32zac'
from pyEOM.datasets import Dataset as DatasetAbs
class Dataset(DatasetAbs):
shortname = 'MOD11A1'
platform = 'Terra'
collection = '005'
rastertype = 'Tile'
timeInterval = 'P1D'
host = 'http://e4ftl01.cr.usgs.gov'
dir = '/MODIS_Dailies_E/MOLT/MOD11A1.005'
sources = ['LPDAAC']
def getDownloadInfo(self):
return dict(shortname=self.shortname, platform=self.platform, collection=self.collection, rastertype=self.rastertype, host=self.host, directory=self.dir, sources=self.sources)
def getBands(self):
return self.bands
def getThematicBands(self):
return [self.bands['Daytime'], self.bands['Nighttime']]
def getQualityBands(self):
return [self.bands['QCDay'], self.bands['QCNight']]
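# Hypothetical usage sketch (import path assumed from the repository layout):
#   from pyEOM.datasets.predefined.MODIS.MOD11A1 import Dataset
#   ds = Dataset()
#   ds.getDownloadInfo()                        # host, directory, sources, ...
#   [b['name'] for b in ds.getThematicBands()]  # Daytime and Nighttime LST datasets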
bands = dict(QCDay={
'name': 'MODIS_Grid_Daily_1km_LST:QC_Day',
'nodata': 0,
'scale': None,
'offset': None,
'imagetype': 'qualityInformation',
'identifier': 'MODIS_MOD11_A1_LST_Day_Series_QC',
'title': 'Daily Daytime Land Surface Temperature from MODIS Terra Quality Dataset',
'abstract': 'Time-series of daily Terra MODIS daytime land surface temperature Quality in Bit (Bit-Field) at 1 km spatial resolution. No scale factor. The unscaled nodata value is encoded as 0. Original MODIS data retrieved from the Land Processes Distributed Active Archive Center (ftp://e4ftl01.cr.usgs.gov/MOLT/).',
'keywords': 'MODIS,Terra,Siberia,Temperature,Global,Daily,Series,Daytime',
'lineage': 'Original MODIS data retrieved from the Land Processes Distributed Active Archive Center (ftp://e4ftl01.cr.usgs.gov/MOLT/) and processed with GDAL 1.9.0.',
'datasetname': 'Land Surface Temperature',
'datatype': 'RASTER',
'resolution': 1000.0,
'layername': 'mod11a1_lst_day_qc',
'templates': 'template_header_evi.html',
'wcs_description': 'MODIS Terra LST Day Daily quality',
'wms_description': 'MODIS Terra LST Day Daily quality',
'colormap': 'lst_colorbar2.map',
'resolution_unit': 'm',
'unit': 'Bit'
},EmissNight={
'name': 'MODIS_Grid_Daily_1km_LST:Emis_31',
'nodata': 0,
'scale': 0.002,
'offset': None,
'imagetype': 'physicalMeasurement',
'identifier': 'MODIS_MOD11_A1_LST_Night_Series_B31_Emissivity',
'title': 'Daily Nighttime Land Surface Temperature from MODIS Terra Band 31 Emissivity',
'abstract': 'Time-series of daily Terra MODIS nighttime land surface temperature B31 emissivity without unit at 1 km spatial resolution. Scale factor is 0.002. The unscaled nodata value is encoded as 0. Original MODIS data retrieved from the Land Processes Distributed Active Archive Center (ftp://e4ftl01.cr.usgs.gov/MOLT/).',
'keywords': 'MODIS,Terra,Siberia,Temperature,Global,Daily,Series,Nighttime',
'lineage': 'Original MODIS data retrieved from the Land Processes Distributed Active Archive Center (ftp://e4ftl01.cr.usgs.gov/MOLT/) and processed with GDAL 1.9.0.',
'datasetname': 'Land Surface Temperature',
'datatype': 'RASTER',
'resolution': 1000.0,
'layername': 'mod11a1_lst_night_b31_emiss',
'templates': 'template_header_evi.html',
'wcs_description': 'MODIS Terra LST Night Daily b31 emiss',
'wms_description': 'MODIS Terra LST Night Daily b31 emiss',
'colormap': 'lst_colorbar2.map',
'resolution_unit': 'm',
'unit': 'None'
},QCNight={
'name': 'MODIS_Grid_Daily_1km_LST:QC_Night',
'nodata': 0,
'scale': None,
'offset': None,
'imagetype': 'qualityInformation',
'identifier': 'MODIS_MOD11_A1_LST_Night_Series_QC',
'title': 'Daily Nighttime Land Surface Temperature from MODIS Terra Quality Dataset',
'abstract': 'Time-series of daily Terra MODIS nighttime land surface temperature Quality Data in Bit (Bit-Field) at 1 km spatial resolution. No scale factor. The unscaled nodata value is encoded as 0. Original MODIS data retrieved from the Land Processes Distributed Active Archive Center (ftp://e4ftl01.cr.usgs.gov/MOLT/).',
'keywords': 'MODIS,Terra,Siberia,Temperature,Global,Daily,Series,Nighttime',
'lineage': 'Original MODIS data retrieved from the Land Processes Distributed Active Archive Center (ftp://e4ftl01.cr.usgs.gov/MOLT/) and processed with GDAL 1.9.0.',
'datasetname': 'Land Surface Temperature',
'datatype': 'RASTER',
'resolution': 1000.0,
'layername': 'mod11a1_lst_night_qc',
'templates': 'template_header_evi.html',
'wcs_description': 'MODIS Terra LST Night Daily quality',
'wms_description': 'MODIS Terra LST Night Daily quality',
'colormap': 'lst_colorbar2.map',
'resolution_unit': 'm',
'unit': 'Bit'
},Nighttime={
'name': 'MODIS_Grid_Daily_1km_LST:LST_Night_1km',
'nodata': 0,
'scale': 0.02,
'offset': None,
'imagetype': 'physicalMeasurement',
'identifier': 'MODIS_MOD11_A1_LST_Night_Series',
'title': 'Daily Nighttime Land Surface Temperature from MODIS Terra',
'abstract': 'Time-series of daily Terra MODIS nighttime land surface temperature in Kelvin at 1 km spatial resolution. To retrieve actual values in Kelvin, a scale factor of 0.02 has to be applied. The unscaled nodata value is encoded as 0. Original MODIS data retrieved from the Land Processes Distributed Active Archive Center (ftp://e4ftl01.cr.usgs.gov/MOLT/).',
'keywords': 'MODIS,Terra,Siberia,Temperature,Global,Daily,Series,Nighttime',
'lineage': 'Original MODIS data retrieved from the Land Processes Distributed Active Archive Center (ftp://e4ftl01.cr.usgs.gov/MOLT/) and processed with GDAL 1.9.0.',
'datasetname': 'Land Surface Temperature',
'datatype': 'RASTER',
'resolution': 1000.0,
'layername': 'mod11a1_lst_night',
'templates': 'template_header_evi.html',
'wcs_description': 'MODIS Terra LST Night Daily',
'wms_description': 'MODIS Terra LST Night Daily',
'colormap': 'lst_colorbar2.map',
'resolution_unit': 'm',
'unit': 'Kelvin'
},EmissDay={
'name': 'MODIS_Grid_Daily_1km_LST:Emis_32',
'nodata': 0,
'scale': 0.002,
'offset': None,
'imagetype': 'physicalMeasurement',
'identifier': 'MODIS_MOD11_A1_LST_Day_Series_B32_Emissivity',
'title': 'Daily Daytime Land Surface Temperature from MODIS Terra Band 32 Emissivity',
'abstract': 'Time-series of daily Terra MODIS daytime land surface temperature B32 emissivity without unit at 1 km spatial resolution. Scale factor is 0.002. The unscaled nodata value is encoded as 0. Original MODIS data retrieved from the Land Processes Distributed Active Archive Center (ftp://e4ftl01.cr.usgs.gov/MOLT/).',
'keywords': 'MODIS,Terra,Siberia,Temperature,Global,Daily,Series,Daytime',
'lineage': 'Original MODIS data retrieved from the Land Processes Distributed Active Archive Center (ftp://e4ftl01.cr.usgs.gov/MOLT/) and processed with GDAL 1.9.0.',
'datasetname': 'Land Surface Temperature',
'datatype': 'RASTER',
'resolution': 1000.0,
'layername': 'mod11a1_lst_day_b32_emiss',
'templates': 'template_header_evi.html',
'wcs_description': 'MODIS Terra LST Day Daily b32 emiss',
'wms_description': 'MODIS Terra LST Day Daily b32 emiss',
'colormap': 'lst_colorbar2.map',
'resolution_unit': 'm',
'unit': 'None'
},Daytime={
'name': 'MODIS_Grid_Daily_1km_LST:LST_Day_1km',
'nodata': 0,
'scale': 0.02,
'offset': None,
'imagetype': 'physicalMeasurement',
'identifier': 'MODIS_MOD11_A1_LST_Day_Series',
'title': 'Daily Daytime Land Surface Temperature from MODIS Terra',
'abstract': 'Time-series of daily Terra MODIS daytime land surface temperature in Kelvin at 1 km spatial resolution. To retrieve actual values in Kelvin, a scale factor of 0.02 has to be applied. The unscaled nodata value is encoded as 0. Original MODIS data retrieved from the Land Processes Distributed Active Archive Center (ftp://e4ftl01.cr.usgs.gov/MOLT/).',
'keywords': 'MODIS,Terra,Siberia,Temperature,Global,Daily,Series,Daytime',
'lineage': 'Original MODIS data retrieved from the Land Processes Distributed Active Archive Center (ftp://e4ftl01.cr.usgs.gov/MOLT/) and processed with GDAL 1.9.0.',
'datasetname': 'Land Surface Temperature',
'datatype': 'RASTER',
'resolution': 1000.0,
'layername': 'mod11a1_lst_day',
'templates': 'template_header_evi.html',
'wcs_description': 'MODIS Terra LST Day Daily',
'wms_description': 'MODIS Terra LST Day Daily',
'colormap': 'lst_colorbar2.map',
'resolution_unit': 'm',
'unit': 'Kelvin'
}
) | 57.678571 | 377 | 0.627348 | 1,108 | 9,690 | 5.370939 | 0.133574 | 0.040329 | 0.066543 | 0.034952 | 0.88691 | 0.88691 | 0.88691 | 0.830785 | 0.812637 | 0.78239 | 0 | 0.031728 | 0.264912 | 9,690 | 168 | 378 | 57.678571 | 0.803734 | 0 | 0 | 0.530612 | 0 | 0.081633 | 0.656972 | 0.147942 | 0 | 0 | 0 | 0 | 0 | 1 | 0.027211 | false | 0 | 0.006803 | 0.027211 | 0.129252 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
4641d39e2cf51caae05483a64c80a28474562bb7 | 4,711 | py | Python | tests/parser/07-Nomystery.asp.test.py | veltri/DLV2 | 944aaef803aa75e7ec51d7e0c2b0d964687fdd0e | [
"Apache-2.0"
] | null | null | null | tests/parser/07-Nomystery.asp.test.py | veltri/DLV2 | 944aaef803aa75e7ec51d7e0c2b0d964687fdd0e | [
"Apache-2.0"
] | null | null | null | tests/parser/07-Nomystery.asp.test.py | veltri/DLV2 | 944aaef803aa75e7ec51d7e0c2b0d964687fdd0e | [
"Apache-2.0"
] | null | null | null | input = """
%
% Nomystery for ASP 2013.
%
% Domain specification freely adapted from the plasp PDDL-to-ASP output
% (http://potassco.sourceforge.net/labs.html)
%
% Author (2013) GB Ianni
%
%
%
truck(T) :- fuel(T,_).
package(P) :- at(P,L), not truck(P).
location(L) :- fuelcost(_,L,_).
location(L) :- fuelcost(_,_,L).
locatable(O) :- at(O,L).
%
at(O,L,0) :- at(O,L).
fuel(T,F,0) :- fuel(T,F).
%
%
% GENERATE >>>>>
1 <= { unload( P,T,L,S ) :
package( P ) ,
truck( T ) ,
location( L );
load( P,T,L,S ) :
package( P ) ,
truck( T ) ,
location( L );
drive( T,L1,L2,S ) :
fuelcost( Fueldelta,L1,L2 ) ,
truck( T );
noop(S)
} <= 1 :- step(S), S > 0.
% <<<<< GENERATE
% unload/4, effects
at( P,L,S ) :- unload( P,T,L,S ).
del( in( P,T ),S ) :- unload( P,T,L,S ).
% load/4, effects
del( at( P,L ),S ) :- load( P,T,L,S ).
in( P,T,S ) :- load( P,T,L,S ).
% drive/4, effects
del( at( T,L1 ), S ) :- drive( T,L1,L2,S ).
at( T,L2,S ) :- drive( T,L1,L2,S).
del( fuel( T,Fuelpre ),S ) :- drive( T,L1,L2,S ), fuel(T, Fuelpre,S-1).
fuel( T,Fuelpost,S ) :- drive( T,L1,L2,S ), fuelcost(Fueldelta,L1,L2), fuel(T,Fuelpre,S-1), Fuelpost = Fuelpre - Fueldelta.
% <<<<< EFFECTS APPLY
%
% INERTIA >>>>>
at( O,L,S ) :- at( O,L,S-1 ), not del( at( O,L ),S ), step(S).
in( P,T,S ) :- in( P,T,S-1 ), not del( in( P,T ),S ), step(S).
fuel( T,Level,S ) :- fuel( T,Level,S-1 ), not del( fuel( T,Level) ,S ), truck( T ), step(S).
% <<<<< INERTIA
%
%
%
% PRECONDITIONS CHECK >>>>>
% unload/4, preconditions
:- unload( P,T,L,S ), not preconditions_u( P,T,L,S ).
preconditions_u( P,T,L,S ) :- step(S), at( T,L,S-1 ), in( P,T,S-1 ), package( P ), truck( T ).
% load/4, preconditions
:- load( P,T,L,S ), not preconditions_l( P,T,L,S ).
preconditions_l( P,T,L,S ) :- step(S), at( T,L,S-1 ), at( P,L,S-1 ).
% drive/4, preconditions
:- drive( T,L1,L2,S ), not preconditions_d( T,L1,L2,S ).
preconditions_d( T,L1,L2,S ) :- step(S), at( T,L1,S-1 ), fuel( T, Fuelpre, S-1), fuelcost(Fueldelta,L1,L2), Fuelpre >= Fueldelta.
% <<<<< PRECONDITIONS HOLD
%
% GOAL CHECK
goalreached :- step(S), N = #count{ P,L : at(P,L,S) , goal(P,L) }, N = #count{ P1,L1 : goal(P1,L1) }.
:- not goalreached.
% Gringo directives to show / hide particular literals
%#hide.
%#show unload/4.
%#show load/4.
%#show drive/4.
%#show at/2.
%#show at/3.
"""
output = """
%
% Nomystery for ASP 2013.
%
% Domain specification freely adapted from the plasp PDDL-to-ASP output
% (http://potassco.sourceforge.net/labs.html)
%
% Author (2013) GB Ianni
%
%
%
truck(T) :- fuel(T,_).
package(P) :- at(P,L), not truck(P).
location(L) :- fuelcost(_,L,_).
location(L) :- fuelcost(_,_,L).
locatable(O) :- at(O,L).
%
at(O,L,0) :- at(O,L).
fuel(T,F,0) :- fuel(T,F).
%
%
% GENERATE >>>>>
1 <= { unload( P,T,L,S ) :
package( P ) ,
truck( T ) ,
location( L );
load( P,T,L,S ) :
package( P ) ,
truck( T ) ,
location( L );
drive( T,L1,L2,S ) :
fuelcost( Fueldelta,L1,L2 ) ,
truck( T );
noop(S)
} <= 1 :- step(S), S > 0.
% <<<<< GENERATE
% unload/4, effects
at( P,L,S ) :- unload( P,T,L,S ).
del( in( P,T ),S ) :- unload( P,T,L,S ).
% load/4, effects
del( at( P,L ),S ) :- load( P,T,L,S ).
in( P,T,S ) :- load( P,T,L,S ).
% drive/4, effects
del( at( T,L1 ), S ) :- drive( T,L1,L2,S ).
at( T,L2,S ) :- drive( T,L1,L2,S).
del( fuel( T,Fuelpre ),S ) :- drive( T,L1,L2,S ), fuel(T, Fuelpre,S-1).
fuel( T,Fuelpost,S ) :- drive( T,L1,L2,S ), fuelcost(Fueldelta,L1,L2), fuel(T,Fuelpre,S-1), Fuelpost = Fuelpre - Fueldelta.
% <<<<< EFFECTS APPLY
%
% INERTIA >>>>>
at( O,L,S ) :- at( O,L,S-1 ), not del( at( O,L ),S ), step(S).
in( P,T,S ) :- in( P,T,S-1 ), not del( in( P,T ),S ), step(S).
fuel( T,Level,S ) :- fuel( T,Level,S-1 ), not del( fuel( T,Level) ,S ), truck( T ), step(S).
% <<<<< INERTIA
%
%
%
% PRECONDITIONS CHECK >>>>>
% unload/4, preconditions
:- unload( P,T,L,S ), not preconditions_u( P,T,L,S ).
preconditions_u( P,T,L,S ) :- step(S), at( T,L,S-1 ), in( P,T,S-1 ), package( P ), truck( T ).
% load/4, preconditions
:- load( P,T,L,S ), not preconditions_l( P,T,L,S ).
preconditions_l( P,T,L,S ) :- step(S), at( T,L,S-1 ), at( P,L,S-1 ).
% drive/4, preconditions
:- drive( T,L1,L2,S ), not preconditions_d( T,L1,L2,S ).
preconditions_d( T,L1,L2,S ) :- step(S), at( T,L1,S-1 ), fuel( T, Fuelpre, S-1), fuelcost(Fueldelta,L1,L2), Fuelpre >= Fueldelta.
% <<<<< PRECONDITIONS HOLD
%
% GOAL CHECK
goalreached :- step(S), N = #count{ P,L : at(P,L,S) , goal(P,L) }, N = #count{ P1,L1 : goal(P1,L1) }.
:- not goalreached.
% Gringo directives to show / hide particular literals
%#hide.
%#show unload/4.
%#show load/4.
%#show drive/4.
%#show at/2.
%#show at/3.
"""
| 25.743169 | 129 | 0.550414 | 856 | 4,711 | 3.003505 | 0.089953 | 0.032672 | 0.032672 | 0.03734 | 0.995722 | 0.995722 | 0.995722 | 0.995722 | 0.995722 | 0.995722 | 0 | 0.0338 | 0.196137 | 4,711 | 182 | 130 | 25.884615 | 0.645102 | 0 | 0 | 0.8125 | 0 | 0.1875 | 0.99342 | 0.022076 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
464ffc78ea1dc6a80ee472b4aa59dc1af36fead3 | 3,694 | py | Python | sdk/python/pulumi_aws_native/kms/outputs.py | AaronFriel/pulumi-aws-native | 5621690373ac44accdbd20b11bae3be1baf022d1 | [
"Apache-2.0"
] | 29 | 2021-09-30T19:32:07.000Z | 2022-03-22T21:06:08.000Z | sdk/python/pulumi_aws_native/kms/outputs.py | AaronFriel/pulumi-aws-native | 5621690373ac44accdbd20b11bae3be1baf022d1 | [
"Apache-2.0"
] | 232 | 2021-09-30T19:26:26.000Z | 2022-03-31T23:22:06.000Z | sdk/python/pulumi_aws_native/kms/outputs.py | AaronFriel/pulumi-aws-native | 5621690373ac44accdbd20b11bae3be1baf022d1 | [
"Apache-2.0"
] | 4 | 2021-11-10T19:42:01.000Z | 2022-02-05T10:15:49.000Z | # coding=utf-8
# *** WARNING: this file was generated by the Pulumi SDK Generator. ***
# *** Do not edit by hand unless you're certain you know what you are doing! ***
import warnings
import pulumi
import pulumi.runtime
from typing import Any, Mapping, Optional, Sequence, Union, overload
from .. import _utilities
from ._enums import *
__all__ = [
'KeyTag',
'ReplicaKeyTag',
]
@pulumi.output_type
class KeyTag(dict):
"""
A key-value pair to associate with a resource.
"""
def __init__(__self__, *,
key: str,
value: str):
"""
A key-value pair to associate with a resource.
:param str key: The key name of the tag. You can specify a value that is 1 to 128 Unicode characters in length and cannot be prefixed with aws:. You can use any of the following characters: the set of Unicode letters, digits, whitespace, _, ., /, =, +, and -.
:param str value: The value for the tag. You can specify a value that is 0 to 256 Unicode characters in length and cannot be prefixed with aws:. You can use any of the following characters: the set of Unicode letters, digits, whitespace, _, ., /, =, +, and -.
"""
pulumi.set(__self__, "key", key)
pulumi.set(__self__, "value", value)
@property
@pulumi.getter
def key(self) -> str:
"""
The key name of the tag. You can specify a value that is 1 to 128 Unicode characters in length and cannot be prefixed with aws:. You can use any of the following characters: the set of Unicode letters, digits, whitespace, _, ., /, =, +, and -.
"""
return pulumi.get(self, "key")
@property
@pulumi.getter
def value(self) -> str:
"""
The value for the tag. You can specify a value that is 0 to 256 Unicode characters in length and cannot be prefixed with aws:. You can use any of the following characters: the set of Unicode letters, digits, whitespace, _, ., /, =, +, and -.
"""
return pulumi.get(self, "value")
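# Minimal usage sketch (illustrative, not part of the generated SDK): output types
# are normally instantiated by the Pulumi engine, but they can also be constructed
# directly and then behave like typed dicts with property accessors.
def _example_key_tag() -> str:
    tag = KeyTag(key="Environment", value="production")
    return tag.key  # "Environment"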
@pulumi.output_type
class ReplicaKeyTag(dict):
"""
A key-value pair to associate with a resource.
"""
def __init__(__self__, *,
key: str,
value: str):
"""
A key-value pair to associate with a resource.
:param str key: The key name of the tag. You can specify a value that is 1 to 128 Unicode characters in length and cannot be prefixed with aws:. You can use any of the following characters: the set of Unicode letters, digits, whitespace, _, ., /, =, +, and -.
:param str value: The value for the tag. You can specify a value that is 0 to 256 Unicode characters in length and cannot be prefixed with aws:. You can use any of the following characters: the set of Unicode letters, digits, whitespace, _, ., /, =, +, and -.
"""
pulumi.set(__self__, "key", key)
pulumi.set(__self__, "value", value)
@property
@pulumi.getter
def key(self) -> str:
"""
The key name of the tag. You can specify a value that is 1 to 128 Unicode characters in length and cannot be prefixed with aws:. You can use any of the following characters: the set of Unicode letters, digits, whitespace, _, ., /, =, +, and -.
"""
return pulumi.get(self, "key")
@property
@pulumi.getter
def value(self) -> str:
"""
The value for the tag. You can specify a value that is 0 to 256 Unicode characters in length and cannot be prefixed with aws:. You can use any of the following characters: the set of Unicode letters, digits, whitespace, _, ., /, =, +, and -.
"""
return pulumi.get(self, "value")
| 44.506024 | 267 | 0.638332 | 527 | 3,694 | 4.383302 | 0.16888 | 0.041558 | 0.031169 | 0.041558 | 0.856277 | 0.856277 | 0.856277 | 0.856277 | 0.856277 | 0.856277 | 0 | 0.012141 | 0.264212 | 3,694 | 82 | 268 | 45.04878 | 0.837748 | 0.639415 | 0 | 0.7 | 1 | 0 | 0.045253 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.15 | false | 0 | 0.15 | 0 | 0.45 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
465a20b5e6105bcfd013653dde874dfdcb85c4b3 | 115 | py | Python | codeball/patterns/__init__.py | metrica-sports/codeball | 60bfe54b7898bed87cbbbae9dfc0f3bc49d31025 | [
"MIT"
] | 54 | 2020-09-16T13:09:03.000Z | 2022-03-28T12:32:19.000Z | codeball/patterns/__init__.py | metrica-sports/codeball | 60bfe54b7898bed87cbbbae9dfc0f3bc49d31025 | [
"MIT"
] | null | null | null | codeball/patterns/__init__.py | metrica-sports/codeball | 60bfe54b7898bed87cbbbae9dfc0f3bc49d31025 | [
"MIT"
] | 9 | 2021-03-28T13:02:57.000Z | 2022-03-24T11:19:06.000Z | from .patterns import *
from .team_stretched import *
from .set_pieces import *
from .passes_into_the_box import *
| 23 | 34 | 0.791304 | 17 | 115 | 5.058824 | 0.647059 | 0.348837 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.13913 | 115 | 4 | 35 | 28.75 | 0.868687 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.25 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 7 |
466457dec160c9e0a0b9a75f45c8f2ecd7fcf193 | 34,421 | py | Python | tests/test_matching_cost_sad.py | njimenezd/Pandora | 9e3c2054415301edac6da7510056af0136790277 | [
"Apache-2.0"
] | 14 | 2020-09-18T14:11:59.000Z | 2020-11-18T14:10:07.000Z | tests/test_matching_cost_sad.py | njimenezd/Pandora | 9e3c2054415301edac6da7510056af0136790277 | [
"Apache-2.0"
] | 1 | 2020-09-29T10:35:45.000Z | 2020-09-29T10:35:45.000Z | tests/test_matching_cost_sad.py | njimenezd/Pandora | 9e3c2054415301edac6da7510056af0136790277 | [
"Apache-2.0"
] | 1 | 2020-09-29T09:29:41.000Z | 2020-09-29T09:29:41.000Z | # type:ignore
#!/usr/bin/env python
# coding: utf8
#
# Copyright (c) 2020 Centre National d'Etudes Spatiales (CNES).
#
# This file is part of PANDORA
#
# https://github.com/CNES/Pandora_pandora
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
"""
This module contains functions to test the cost volume measure step.
"""
import unittest
import numpy as np
import xarray as xr
from rasterio import Affine
from pandora import matching_cost
import tests.common as common
class TestMatchingCost(unittest.TestCase):
"""
TestMatchingCost allows testing of all the methods in the MatchingCost class,
and of the pixel_wise and zncc plugins
"""
def setUp(self):
"""
Method called to prepare the test fixture
"""
self.left, self.right = common.matching_cost_tests_setup()
def test_sad_cost(self):
"""
Test the absolute difference method
"""
# Absolute difference pixel-wise ground truth for the images self.left, self.right
ad_ground_truth = np.array(
(
[0, 0, 0, 1, 1, 1],
[0, 0, 0, abs(1 - 4), 0, abs(1 - 4)],
[0, 0, 0, 0, abs(3 - 4), 0],
[0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0],
)
)
# Computes the ad cost for the whole image pair
matching_cost_matcher = matching_cost.AbstractMatchingCost(
**{"matching_cost_method": "sad", "window_size": 1, "subpix": 1}
)
sad = matching_cost_matcher.compute_cost_volume(
img_left=self.left, img_right=self.right, disp_min=-1, disp_max=1
)
# Check if the calculated ad cost is equal to the ground truth (same shape and all elements equal)
np.testing.assert_array_equal(sad["cost_volume"].sel(disp=0), ad_ground_truth)
# Sum of absolute differences ground truth for the images self.left, self.right with window size 5
sad_ground_truth = np.array(
(
[
[np.nan, np.nan, np.nan, np.nan, np.nan, np.nan],
[np.nan, np.nan, np.nan, np.nan, np.nan, np.nan],
[np.nan, np.nan, 6.0, 10.0, np.nan, np.nan],
[np.nan, np.nan, np.nan, np.nan, np.nan, np.nan],
[np.nan, np.nan, np.nan, np.nan, np.nan, np.nan],
]
)
)
# Computes the sad cost for the whole image pair
matching_cost_matcher = matching_cost.AbstractMatchingCost(
**{"matching_cost_method": "sad", "window_size": 5, "subpix": 1}
)
sad = matching_cost_matcher.compute_cost_volume(
img_left=self.left, img_right=self.right, disp_min=-1, disp_max=1
)
matching_cost_matcher.cv_masked(self.left, self.right, sad, -1, 1)
# Check if the calculated sad cost is equal to the ground truth (same shape and all elements equal)
np.testing.assert_array_equal(sad["cost_volume"].sel(disp=0), sad_ground_truth)
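# For reference, a minimal NumPy sketch of the pixel-wise SAD measure exercised
# above (hypothetical helper, not Pandora's implementation): with window_size 1,
# the cost at disparity d is |left(row, col) - right(row, col + d)|; columns whose
# match falls outside the right image are left as NaN.
@staticmethod
def _reference_sad(left_im, right_im, disp):
    cols = left_im.shape[1]
    cost = np.full(left_im.shape, np.nan)
    lo, hi = max(0, -disp), min(cols, cols - disp)
    cost[:, lo:hi] = np.abs(left_im[:, lo:hi] - right_im[:, lo + disp:hi + disp])
    return cost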
@staticmethod
def test_cost_volume():
"""
Test the cost volume method
"""
# Create simple images
data = np.array(([1, 2, 1, 4], [6, 2, 7, 4], [1, 1, 3, 6]), dtype=np.float64)
left = xr.Dataset(
{"im": (["row", "col"], data)}, coords={"row": np.arange(data.shape[0]), "col": np.arange(data.shape[1])}
)
left.attrs["crs"] = None
left.attrs["transform"] = Affine(1.0, 0.0, 0.0, 0.0, 1.0, 0.0)
data = np.array(([6, 7, 8, 10], [2, 4, 1, 6], [9, 10, 1, 2]), dtype=np.float64)
right = xr.Dataset(
{"im": (["row", "col"], data)}, coords={"row": np.arange(data.shape[0]), "col": np.arange(data.shape[1])}
)
right.attrs["crs"] = None
right.attrs["transform"] = Affine(1.0, 0.0, 0.0, 0.0, 1.0, 0.0)
# Cost Volume ground truth for the stereo image simple_stereo_imgs,
# with disp_min = -2, disp_max = 1, sad measure and subpixel_offset = 0
ground_truth = np.array(
[
[
[np.nan, np.nan, np.nan, np.nan],
[np.nan, np.nan, np.nan, np.nan],
[np.nan, np.nan, np.nan, np.nan],
[np.nan, np.nan, np.nan, np.nan],
],
[
[np.nan, np.nan, np.nan, np.nan],
[np.nan, np.nan, 48, 35],
[np.nan, 40, 43, np.nan],
[np.nan, np.nan, np.nan, np.nan],
],
[
[np.nan, np.nan, np.nan, np.nan],
[np.nan, np.nan, np.nan, np.nan],
[np.nan, np.nan, np.nan, np.nan],
[np.nan, np.nan, np.nan, np.nan],
],
]
)
# Computes the Cost Volume for the stereo image simple_stereo_imgs,
# with disp_min = -2, disp_max = 1, sad measure, window_size = 3 and subpix = 1
matching_cost_matcher = matching_cost.AbstractMatchingCost(
**{"matching_cost_method": "sad", "window_size": 3, "subpix": 1}
)
cv = matching_cost_matcher.compute_cost_volume(left, right, disp_min=-2, disp_max=1)
matching_cost_matcher.cv_masked(left, right, cv, -2, 1)
# Check if the calculated cost volume is equal to the ground truth (same shape and all elements equal)
np.testing.assert_array_equal(cv["cost_volume"].data, ground_truth)
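# Hypothetical companion check, not part of the original suite: the disparity
# axis of the cost volume has disp_max - disp_min + 1 planes, which is why the
# ground truth above holds 4 planes for disp_min=-2, disp_max=1.
@staticmethod
def test_cost_volume_shape():
    """
    Sanity sketch for the (row, col, disp) layout of the cost volume
    """
    data = np.zeros((3, 4), dtype=np.float64)
    img = xr.Dataset(
        {"im": (["row", "col"], data)},
        coords={"row": np.arange(3), "col": np.arange(4)},
    )
    img.attrs["crs"] = None
    img.attrs["transform"] = Affine(1.0, 0.0, 0.0, 0.0, 1.0, 0.0)
    matcher = matching_cost.AbstractMatchingCost(
        **{"matching_cost_method": "sad", "window_size": 3, "subpix": 1}
    )
    cv = matcher.compute_cost_volume(img, img, disp_min=-2, disp_max=1)
    assert cv["cost_volume"].shape == (3, 4, 4)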
@staticmethod
def test_masks_invalid_pixels():
"""
Test the method masks_invalid_pixels
"""
# ------------ Test the method with a left mask ( right mask contains valid pixels ) ------------
# Mask convention
# cfg['image']['valid_pixels'] = 0
# cfg['image']['no_data'] = 1
# invalid_pixels all other values
data = np.array(([1, 1, 1, 3, 4], [1, 2, 1, 0, 2], [2, 1, 0, 1, 2], [1, 1, 1, 1, 4]), dtype=np.float64)
mask = np.array(([0, 0, 2, 0, 1], [0, 2, 0, 0, 0], [0, 0, 0, 0, 0], [1, 0, 0, 0, 2]), dtype=np.int16)
left = xr.Dataset(
{"im": (["row", "col"], data), "msk": (["row", "col"], mask)},
coords={"row": np.arange(data.shape[0]), "col": np.arange(data.shape[1])},
)
left.attrs = common.img_attrs
data = np.array(([5, 1, 2, 3, 4], [1, 2, 1, 0, 2], [2, 2, 0, 1, 4], [1, 1, 1, 1, 2]), dtype=np.float64)
# right mask contains valid pixels
mask = np.zeros((4, 5), dtype=np.int16)
right = xr.Dataset(
{"im": (["row", "col"], data), "msk": (["row", "col"], mask)},
coords={"row": np.arange(data.shape[0]), "col": np.arange(data.shape[1])},
)
right.attrs = common.img_attrs
matching_cost_ = matching_cost.AbstractMatchingCost(
**{"matching_cost_method": "sad", "window_size": 3, "subpix": 1}
)
# Compute the cost volume and invalidate pixels if need
cv = matching_cost_.compute_cost_volume(img_left=left, img_right=right, disp_min=-1, disp_max=1)
matching_cost_.cv_masked(img_left=left, img_right=right, cost_volume=cv, disp_min=-1, disp_max=1)
# Cost volume before invalidation
# disp -1 0 1
# col 1 [[[nan, 6., 8.],
# col 2 [12., 2., 13.],
# col 3 [10., 3., nan]],
# col 1 [[nan, 1., 5.],
# col 2 [7., 1., 10.],
# col 3 [11., 4., nan]]], dtype=float32)
# Cost volume ground truth after invalidation
cv_ground_truth = np.array(
[
[
[np.nan, np.nan, np.nan],
[np.nan, np.nan, np.nan],
[np.nan, np.nan, np.nan],
[np.nan, np.nan, np.nan],
[np.nan, np.nan, np.nan],
],
[
[np.nan, np.nan, np.nan],
[np.nan, np.nan, np.nan],
[12.0, 2.0, 13.0],
[np.nan, np.nan, np.nan],
[np.nan, np.nan, np.nan],
],
[
[np.nan, np.nan, np.nan],
[np.nan, np.nan, np.nan],
[7.0, 1.0, 10.0],
[11.0, 4.0, np.nan],
[np.nan, np.nan, np.nan],
],
[
[np.nan, np.nan, np.nan],
[np.nan, np.nan, np.nan],
[np.nan, np.nan, np.nan],
[np.nan, np.nan, np.nan],
[np.nan, np.nan, np.nan],
],
],
dtype=np.float32,
)
# Check if the calculated cost volume is equal to the ground truth (same shape and all elements equal)
np.testing.assert_array_equal(cv["cost_volume"], cv_ground_truth)
# ------------ Test the method with a right mask ( left mask contains valid pixels ) ------------
# Mask convention
# cfg['image']['valid_pixels'] = 0
# cfg['image']['no_data'] = 1
# invalid_pixels all other values
data = np.array(([1, 1, 1, 3, 4], [1, 2, 1, 0, 2], [2, 1, 0, 1, 2], [1, 1, 1, 1, 4]), dtype=np.float64)
# left mask contains valid pixels
mask = np.zeros((4, 5), dtype=np.int16)
left = xr.Dataset(
{"im": (["row", "col"], data), "msk": (["row", "col"], mask)},
coords={"row": np.arange(data.shape[0]), "col": np.arange(data.shape[1])},
)
left.attrs = common.img_attrs
data = np.array(([5, 1, 2, 3, 4], [1, 2, 1, 0, 2], [2, 2, 0, 1, 4], [1, 1, 1, 1, 2]), dtype=np.float64)
mask = np.array(([0, 0, 0, 0, 2], [0, 1, 0, 0, 0], [0, 2, 0, 2, 0], [1, 0, 0, 0, 0]), dtype=np.int16)
right = xr.Dataset(
{"im": (["row", "col"], data), "msk": (["row", "col"], mask)},
coords={"row": np.arange(data.shape[0]), "col": np.arange(data.shape[1])},
)
right.attrs = common.img_attrs
matching_cost_ = matching_cost.AbstractMatchingCost(
**{"matching_cost_method": "sad", "window_size": 3, "subpix": 1}
)
# Compute the cost volume and invalidate pixels if need
cv = matching_cost_.compute_cost_volume(img_left=left, img_right=right, disp_min=-1, disp_max=1)
matching_cost_.cv_masked(img_left=left, img_right=right, cost_volume=cv, disp_min=-1, disp_max=1)
# Cost volume before invalidation
# disp -1 0 1
# col 1 [[[nan, 6., 8.],
# col 2 [12., 2., 13.],
# col 3 [10., 3., nan]],
# col 1 [[nan, 1., 5.],
# col 2 [7., 1., 10.],
# col 3 [11., 4., nan]]], dtype=float32)
# Cost volume ground truth after invalidation
cv_ground_truth = np.array(
[
[
[np.nan, np.nan, np.nan],
[np.nan, np.nan, np.nan],
[np.nan, np.nan, np.nan],
[np.nan, np.nan, np.nan],
[np.nan, np.nan, np.nan],
],
[
[np.nan, np.nan, np.nan],
[np.nan, np.nan, np.nan],
[np.nan, np.nan, 13.0],
[np.nan, 3.0, np.nan],
[np.nan, np.nan, np.nan],
],
[
[np.nan, np.nan, np.nan],
[np.nan, np.nan, np.nan],
[np.nan, np.nan, np.nan],
[np.nan, np.nan, np.nan],
[np.nan, np.nan, np.nan],
],
[
[np.nan, np.nan, np.nan],
[np.nan, np.nan, np.nan],
[np.nan, np.nan, np.nan],
[np.nan, np.nan, np.nan],
[np.nan, np.nan, np.nan],
],
],
dtype=np.float32,
)
# Check if the calculated cost volume is equal to the ground truth (same shape and all elements equal)
np.testing.assert_array_equal(cv["cost_volume"], cv_ground_truth)
# ------------ Test the method with a left and right mask ------------
# Mask convention
# cfg['image']['valid_pixels'] = 0
# cfg['image']['no_data'] = 1
# invalid_pixels all other values
data = np.array(([1, 1, 1, 3, 4], [1, 2, 1, 0, 2], [2, 1, 0, 1, 2], [1, 1, 1, 1, 4]), dtype=np.float64)
# left mask contains no-data (1) and invalid (2) pixels
mask = np.array(([1, 0, 0, 2, 0], [0, 0, 0, 0, 0], [0, 0, 2, 0, 0], [2, 0, 0, 0, 1]), dtype=np.int16)
left = xr.Dataset(
{"im": (["row", "col"], data), "msk": (["row", "col"], mask)},
coords={"row": np.arange(data.shape[0]), "col": np.arange(data.shape[1])},
)
left.attrs = common.img_attrs
data = np.array(([5, 1, 2, 3, 4], [1, 2, 1, 0, 2], [2, 2, 0, 1, 4], [1, 1, 1, 1, 2]), dtype=np.float64)
mask = np.array(([0, 2, 0, 0, 1], [0, 0, 0, 0, 0], [0, 0, 0, 2, 0], [1, 0, 2, 0, 0]), dtype=np.int16)
right = xr.Dataset(
{"im": (["row", "col"], data), "msk": (["row", "col"], mask)},
coords={"row": np.arange(data.shape[0]), "col": np.arange(data.shape[1])},
)
right.attrs = common.img_attrs
matching_cost_ = matching_cost.AbstractMatchingCost(
**{"matching_cost_method": "sad", "window_size": 3, "subpix": 1}
)
# Compute the cost volume and invalidate pixels if need
cv = matching_cost_.compute_cost_volume(img_left=left, img_right=right, disp_min=-1, disp_max=1)
matching_cost_.cv_masked(img_left=left, img_right=right, cost_volume=cv, disp_min=-1, disp_max=1)
# Cost volume before invalidation
# disp -1 0 1
# col 1 [[[nan, 6., 8.],
# col 2 [12., 2., 13.],
# col 3 [10., 3., nan]],
# col 1 [[nan, 1., 5.],
# col 2 [7., 1., 10.],
# col 3 [11., 4., nan]]], dtype=float32)
# Cost volume ground truth after invalidation
cv_ground_truth = np.array(
[
[
[np.nan, np.nan, np.nan],
[np.nan, np.nan, np.nan],
[np.nan, np.nan, np.nan],
[np.nan, np.nan, np.nan],
[np.nan, np.nan, np.nan],
],
[
[np.nan, np.nan, np.nan],
[np.nan, np.nan, np.nan],
[12, 2, np.nan],
[10, np.nan, np.nan],
[np.nan, np.nan, np.nan],
],
[
[np.nan, np.nan, np.nan],
[np.nan, np.nan, 5],
[np.nan, np.nan, np.nan],
[np.nan, np.nan, np.nan],
[np.nan, np.nan, np.nan],
],
[
[np.nan, np.nan, np.nan],
[np.nan, np.nan, np.nan],
[np.nan, np.nan, np.nan],
[np.nan, np.nan, np.nan],
[np.nan, np.nan, np.nan],
],
],
dtype=np.float32,
)
# Check if the calculated cost volume is equal to the ground truth (same shape and all elements equal)
np.testing.assert_array_equal(cv["cost_volume"], cv_ground_truth)
# ------------ Test the method with a left and right mask and window size 5 ------------
# Mask convention
# cfg['image']['valid_pixels'] = 0
# cfg['image']['no_data'] = 1
# invalid_pixels all other values
data = np.array(
(
[0, 0, 0, 0, 0, 0, 0],
[0, 1, 1, 1, 3, 4, 0],
[0, 1, 2, 1, 0, 2, 0],
[0, 2, 1, 0, 1, 2, 0],
[0, 1, 1, 1, 1, 4, 0],
[0, 0, 0, 0, 0, 0, 0],
),
dtype=np.float64,
)
mask = np.array(
(
[2, 0, 0, 0, 0, 0, 1],
[0, 0, 0, 0, 0, 0, 0],
[0, 2, 0, 0, 0, 0, 0],
[0, 0, 0, 2, 0, 0, 0],
[0, 0, 0, 0, 0, 2, 0],
[1, 0, 0, 0, 0, 0, 2],
),
dtype=np.int16,
)
left = xr.Dataset(
{"im": (["row", "col"], data), "msk": (["row", "col"], mask)},
coords={"row": np.arange(data.shape[0]), "col": np.arange(data.shape[1])},
)
left.attrs = common.img_attrs
data = np.array(
(
[0, 0, 0, 0, 0, 0, 0],
[0, 5, 1, 2, 3, 4, 0],
[0, 1, 2, 1, 0, 2, 0],
[0, 2, 2, 0, 1, 4, 0],
[0, 1, 1, 1, 1, 2, 0],
[0, 0, 0, 0, 0, 0, 0],
),
dtype=np.float64,
)
mask = np.array(
(
[1, 0, 0, 0, 0, 0, 2],
[0, 0, 0, 0, 0, 0, 0],
[2, 0, 2, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 2],
[0, 0, 0, 0, 0, 0, 0],
[2, 0, 0, 0, 0, 0, 1],
),
dtype=np.int16,
)
right = xr.Dataset(
{"im": (["row", "col"], data), "msk": (["row", "col"], mask)},
coords={"row": np.arange(data.shape[0]), "col": np.arange(data.shape[1])},
)
right.attrs = common.img_attrs
matching_cost_ = matching_cost.AbstractMatchingCost(
**{"matching_cost_method": "sad", "window_size": 5, "subpix": 1}
)
# Compute the cost volume and invalidate pixels if need
cv = matching_cost_.compute_cost_volume(img_left=left, img_right=right, disp_min=-1, disp_max=1)
matching_cost_.cv_masked(img_left=left, img_right=right, cost_volume=cv, disp_min=-1, disp_max=1)
# Cost volume ground truth after invalidation
cv_ground_truth = np.array(
[
[
[np.nan, np.nan, np.nan],
[np.nan, np.nan, np.nan],
[np.nan, np.nan, np.nan],
[np.nan, np.nan, np.nan],
[np.nan, np.nan, np.nan],
[np.nan, np.nan, np.nan],
[np.nan, np.nan, np.nan],
],
[
[np.nan, np.nan, np.nan],
[np.nan, np.nan, np.nan],
[np.nan, np.nan, np.nan],
[np.nan, np.nan, np.nan],
[np.nan, np.nan, np.nan],
[np.nan, np.nan, np.nan],
[np.nan, np.nan, np.nan],
],
[
[np.nan, np.nan, np.nan],
[np.nan, np.nan, np.nan],
[np.nan, np.nan, 24.0],
[np.nan, 10.0, 27.0],
[np.nan, np.nan, np.nan],
[np.nan, np.nan, np.nan],
[np.nan, np.nan, np.nan],
],
[
[np.nan, np.nan, np.nan],
[np.nan, np.nan, np.nan],
[np.nan, np.nan, np.nan],
[np.nan, np.nan, np.nan],
[31.0, np.nan, np.nan],
[np.nan, np.nan, np.nan],
[np.nan, np.nan, np.nan],
],
[
[np.nan, np.nan, np.nan],
[np.nan, np.nan, np.nan],
[np.nan, np.nan, np.nan],
[np.nan, np.nan, np.nan],
[np.nan, np.nan, np.nan],
[np.nan, np.nan, np.nan],
[np.nan, np.nan, np.nan],
],
[
[np.nan, np.nan, np.nan],
[np.nan, np.nan, np.nan],
[np.nan, np.nan, np.nan],
[np.nan, np.nan, np.nan],
[np.nan, np.nan, np.nan],
[np.nan, np.nan, np.nan],
[np.nan, np.nan, np.nan],
],
],
dtype=np.float32,
)
# Check if the calculated cost volume is equal to the ground truth (same shape and all elements equal)
np.testing.assert_array_equal(cv["cost_volume"], cv_ground_truth)
# ------------ Test the method with a left and right mask with window size 1------------
# Mask convention
# cfg['image']['valid_pixels'] = 0
# cfg['image']['no_data'] = 1
# invalid_pixels all other values
data = np.array(([1, 1, 1, 3, 4], [1, 1, 1, 1, 4]), dtype=np.float64)
# left mask contains no-data (1) and invalid (2) pixels
mask = np.array(([1, 0, 0, 2, 0], [2, 0, 0, 0, 1]), dtype=np.int16)
left = xr.Dataset(
{"im": (["row", "col"], data), "msk": (["row", "col"], mask)},
coords={"row": np.arange(data.shape[0]), "col": np.arange(data.shape[1])},
)
left.attrs = common.img_attrs
data = np.array(([5, 1, 2, 3, 4], [1, 1, 1, 1, 2]), dtype=np.float64)
mask = np.array(([0, 2, 0, 0, 1], [1, 0, 2, 0, 0]), dtype=np.int16)
right = xr.Dataset(
{"im": (["row", "col"], data), "msk": (["row", "col"], mask)},
coords={"row": np.arange(data.shape[0]), "col": np.arange(data.shape[1])},
)
right.attrs = common.img_attrs
matching_cost_ = matching_cost.AbstractMatchingCost(
**{"matching_cost_method": "sad", "window_size": 1, "subpix": 1}
)
# Compute the cost volume and invalidate pixels if need
cv = matching_cost_.compute_cost_volume(img_left=left, img_right=right, disp_min=-1, disp_max=1)
matching_cost_.cv_masked(img_left=left, img_right=right, cost_volume=cv, disp_min=-1, disp_max=1)
# Cost volume ground truth after invalidation
cv_ground_truth = np.array(
[
[
[np.nan, np.nan, np.nan],
[4, np.nan, 1],
[np.nan, 1, 2],
[np.nan, np.nan, np.nan],
[1, np.nan, np.nan],
],
[
[np.nan, np.nan, np.nan],
[np.nan, 0, np.nan],
[0, np.nan, 0],
[np.nan, 0, 1],
[np.nan, np.nan, np.nan],
],
],
dtype=np.float32,
)
# Check if the calculated cost volume is equal to the ground truth (same shape and all elements equal)
np.testing.assert_array_equal(cv["cost_volume"], cv_ground_truth)
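# For reference before the subpixel tests below, a minimal sketch of how
# half-pixel disparities can be sampled (hypothetical, not Pandora's
# implementation): linear interpolation of the right image between adjacent
# integer columns yields the values at column positions 0.5, 1.5, ...
@staticmethod
def _reference_halfpixel_right(right_im):
    return 0.5 * (right_im[:, :-1] + right_im[:, 1:])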
@staticmethod
def test_masks_invalid_pixels_subpixel():
"""
Test the method masks_invalid_pixels with subpixel precision
"""
# ------------ Test the method with a right mask with window size 1 subpixel 2 ------------
# Mask convention
# cfg['image']['valid_pixels'] = 0
# cfg['image']['no_data'] = 1
# invalid_pixels all other values
data = np.array(([1, 1, 1, 3, 4], [1, 1, 1, 1, 4]), dtype=np.float64)
# left mask contains valid pixels
mask = np.array(([0, 0, 0, 0, 0], [0, 0, 0, 0, 0]), dtype=np.int16)
left = xr.Dataset(
{"im": (["row", "col"], data), "msk": (["row", "col"], mask)},
coords={"row": np.arange(data.shape[0]), "col": np.arange(data.shape[1])},
)
left.attrs = common.img_attrs
data = np.array(([5, 1, 2, 3, 4], [1, 1, 1, 1, 2]), dtype=np.float64)
mask = np.array(([0, 0, 0, 0, 1], [1, 0, 2, 0, 0]), dtype=np.int16)
right = xr.Dataset(
{"im": (["row", "col"], data), "msk": (["row", "col"], mask)},
coords={"row": np.arange(data.shape[0]), "col": np.arange(data.shape[1])},
)
right.attrs = common.img_attrs
dmin = -1
dmax = 1
matching_cost_ = matching_cost.AbstractMatchingCost(
**{"matching_cost_method": "sad", "window_size": 1, "subpix": 2}
)
# Compute the cost volume and invalidate pixels if need
cv = matching_cost_.compute_cost_volume(img_left=left, img_right=right, disp_min=dmin, disp_max=dmax)
matching_cost_.cv_masked(img_left=left, img_right=right, cost_volume=cv, disp_min=dmin, disp_max=dmax)
# The cost volume before invalidation
# <xarray.DataArray 'cost_volume' (row: 2, col: 5, disp: 5)>
# array([[[nan, nan, 4. , 2. , 0. ],
# [4. , 2. , 0. , 0.5, 1. ],
# [0. , 0.5, 1. , 1.5, 2. ],
# [1. , 0.5, 0. , 0.5, 1. ],
# [1. , 0.5, 0. , nan, nan]],
#
# [[nan, nan, 0. , 0. , 0. ],
# [0. , 0. , 0. , 0. , 0. ],
# [0. , 0. , 0. , 0. , 0. ],
# [0. , 0. , 0. , 0.5, 1. ],
# [3. , 2.5, 2. , nan, nan]]], dtype=float32)
# Coordinates:
# * row (row) int64 0 1
# * col (col) int64 0 1 2 3 4
# * disp (disp) float64 -1.0 -0.5 0.0 0.5 1.0
cv_ground_truth = np.array(
[
[
[np.nan, np.nan, 4, 2, 0],
[4, 2, 0, 0.5, 1],
[0, 0.5, 1, 1.5, 2],
[1, 0.5, 0, np.nan, np.nan],
[1, np.nan, np.nan, np.nan, np.nan],
],
[
[np.nan, np.nan, np.nan, np.nan, 0],
[np.nan, np.nan, 0, np.nan, np.nan],
[0, np.nan, np.nan, np.nan, 0],
[np.nan, np.nan, 0, 0.5, 1],
[3, 2.5, 2, np.nan, np.nan],
],
],
dtype=np.float32,
)
# Check if the calculated cost volume is equal to the ground truth (same shape and all elements equal)
np.testing.assert_array_equal(cv["cost_volume"], cv_ground_truth)
# ------------ Test the method with a right mask with window size 1 subpixel 4 ------------
# Mask convention
# cfg['image']['valid_pixels'] = 5
# cfg['image']['no_data'] = 7
# invalid_pixels all other values
data = np.array(([1, 1, 1], [1, 1, 1]), dtype=np.float64)
# left mask contains valid pixels
mask = np.array(([5, 5, 5], [5, 5, 5]), dtype=np.int16)
left = xr.Dataset(
{"im": (["row", "col"], data), "msk": (["row", "col"], mask)},
coords={"row": np.arange(data.shape[0]), "col": np.arange(data.shape[1])},
)
left.attrs = {
"valid_pixels": 5,
"no_data_mask": 7,
"crs": None,
"transform": Affine(1.0, 0.0, 0.0, 0.0, 1.0, 0.0),
}
data = np.array(([5, 1, 2], [1, 1, 1]), dtype=np.float64)
mask = np.array(([5, 4, 7], [6, 7, 5]), dtype=np.int16)
right = xr.Dataset(
{"im": (["row", "col"], data), "msk": (["row", "col"], mask)},
coords={"row": np.arange(data.shape[0]), "col": np.arange(data.shape[1])},
)
right.attrs = {
"valid_pixels": 5,
"no_data_mask": 7,
"crs": None,
"transform": Affine(1.0, 0.0, 0.0, 0.0, 1.0, 0.0),
}
dmin = -1
dmax = 1
matching_cost_ = matching_cost.AbstractMatchingCost(
**{"matching_cost_method": "sad", "window_size": 1, "subpix": 4}
)
# Compute the cost volume and invalidate pixels if need
cv = matching_cost_.compute_cost_volume(img_left=left, img_right=right, disp_min=dmin, disp_max=dmax)
matching_cost_.cv_masked(img_left=left, img_right=right, cost_volume=cv, disp_min=dmin, disp_max=dmax)
# The cost volume before invalidation
# <xarray.DataArray 'cost_volume' (row: 2, col: 3, disp: 9)>
# array([[[ nan, nan, nan, nan, 4. , 3. , 2. , 1. , 0. ],
# [4. , 3. , 2. , 1. , 0. , 0.25, 0.5 , 0.75, 1. ],
# [0. , 0.25, 0.5 , 0.75, 1. , nan, nan, nan, nan]],
#
# [[ nan, nan, nan, nan, 0. , 0. , 0. , 0. , 0. ],
# [0. , 0. , 0. , 0. , 0. , 0. , 0. , 0. , 0. ],
# [0. , 0. , 0. , 0. , 0. , nan, nan, nan, nan]]],
# dtype=float32)
# Coordinates:
# * row (row) int64 0 1
# * col (col) int64 0 1 2
# * disp (disp) float64 -1.0 -0.75 -0.5 -0.25 0.0 0.25 0.5 0.75 1.0
cv_ground_truth = np.array(
[
[
[np.nan, np.nan, np.nan, np.nan, 4.0, np.nan, np.nan, np.nan, np.nan],
[4.0, np.nan, np.nan, np.nan, np.nan, np.nan, np.nan, np.nan, np.nan],
[np.nan, np.nan, np.nan, np.nan, np.nan, np.nan, np.nan, np.nan, np.nan],
],
[
[np.nan, np.nan, np.nan, np.nan, np.nan, np.nan, np.nan, np.nan, np.nan],
[np.nan, np.nan, np.nan, np.nan, np.nan, np.nan, np.nan, np.nan, 0.0],
[np.nan, np.nan, np.nan, np.nan, 0.0, np.nan, np.nan, np.nan, np.nan],
],
],
dtype=np.float32,
)
# Check if the calculated cost volume is equal to the ground truth (same shape and all elements equal)
np.testing.assert_array_equal(cv["cost_volume"], cv_ground_truth)
# ------------ Test the method with a left and right mask, window size 3, subpixel 2 ------------
# Mask convention
# cfg['image']['valid_pixels'] = 5
# cfg['image']['no_data'] = 7
# invalid_pixels all other values
data = np.array(([1, 1, 1, 3, 4], [1, 2, 1, 0, 2], [2, 1, 0, 1, 2], [1, 1, 1, 1, 4]), dtype=np.float64)
mask = np.array(([5, 56, 5, 12, 5], [5, 5, 5, 5, 5], [5, 5, 5, 5, 5], [3, 5, 4, 5, 7]), dtype=np.int16)
left = xr.Dataset(
{"im": (["row", "col"], data), "msk": (["row", "col"], mask)},
coords={"row": np.arange(data.shape[0]), "col": np.arange(data.shape[1])},
)
left.attrs = {
"valid_pixels": 5,
"no_data_mask": 7,
"crs": None,
"transform": Affine(1.0, 0.0, 0.0, 0.0, 1.0, 0.0),
}
data = np.array(([5, 1, 2, 3, 4], [1, 2, 1, 0, 2], [2, 2, 0, 1, 4], [1, 1, 1, 1, 2]), dtype=np.float64)
mask = np.array(([7, 5, 5, 5, 5], [5, 5, 5, 65, 5], [5, 5, 5, 5, 5], [5, 23, 5, 5, 2]), dtype=np.int16)
right = xr.Dataset(
{"im": (["row", "col"], data), "msk": (["row", "col"], mask)},
coords={"row": np.arange(data.shape[0]), "col": np.arange(data.shape[1])},
)
right.attrs = {
"valid_pixels": 5,
"no_data_mask": 7,
"crs": None,
"transform": Affine(1.0, 0.0, 0.0, 0.0, 1.0, 0.0),
}
dmin = -1
dmax = 1
matching_cost_ = matching_cost.AbstractMatchingCost(
**{"matching_cost_method": "sad", "window_size": 3, "subpix": 2}
)
# Compute the cost volume and invalidate pixels if need
cv = matching_cost_.compute_cost_volume(img_left=left, img_right=right, disp_min=dmin, disp_max=dmax)
matching_cost_.cv_masked(img_left=left, img_right=right, cost_volume=cv, disp_min=dmin, disp_max=dmax)
# Cost volume before invalidation
# array([[[ nan, nan, 6. , 6. , 8. ],
# [12. , 7. , 2. , 6.5, 13. ],
# [10. , 5.5, 3. , nan, nan]],
#
# [[ nan, nan, 1. , 2. , 5. ],
# [ 7. , 4. , 1. , 4.5, 10. ],
# [11. , 6.5, 4. , nan, nan]]], dtype=float32)
# Coordinates:
# * row (row) int64 1 2
# * col (col) int64 1 2 3
# * disp (disp) float64 -1.0 -0.5 0.0 0.5 1.0
# Cost volume ground truth after invalidation
cv_ground_truth = np.array(
[
[
[np.nan, np.nan, np.nan, np.nan, np.nan],
[np.nan, np.nan, np.nan, np.nan, np.nan],
[np.nan, np.nan, np.nan, np.nan, np.nan],
[np.nan, np.nan, np.nan, np.nan, np.nan],
[np.nan, np.nan, np.nan, np.nan, np.nan],
],
[
[np.nan, np.nan, np.nan, np.nan, np.nan],
[np.nan, np.nan, np.nan, np.nan, 8.0],
[np.nan, np.nan, 2.0, np.nan, np.nan],
[10.0, np.nan, np.nan, np.nan, np.nan],
[np.nan, np.nan, np.nan, np.nan, np.nan],
],
[
[np.nan, np.nan, np.nan, np.nan, np.nan],
[np.nan, np.nan, 1.0, 2.0, 5.0],
[7.0, 4.0, 1.0, 4.5, 10.0],
[np.nan, np.nan, np.nan, np.nan, np.nan],
[np.nan, np.nan, np.nan, np.nan, np.nan],
],
[
[np.nan, np.nan, np.nan, np.nan, np.nan],
[np.nan, np.nan, np.nan, np.nan, np.nan],
[np.nan, np.nan, np.nan, np.nan, np.nan],
[np.nan, np.nan, np.nan, np.nan, np.nan],
[np.nan, np.nan, np.nan, np.nan, np.nan],
],
],
dtype=np.float32,
)
# Check if the calculated cost volume is equal to the ground truth (same shape and all elements equal)
np.testing.assert_array_equal(cv["cost_volume"], cv_ground_truth)
if __name__ == "__main__":
common.setup_logging()
unittest.main()
| 40.638725 | 117 | 0.451294 | 4,708 | 34,421 | 3.21729 | 0.049065 | 0.178913 | 0.22922 | 0.327458 | 0.894038 | 0.884069 | 0.869017 | 0.864198 | 0.850796 | 0.846372 | 0 | 0.063425 | 0.377502 | 34,421 | 846 | 118 | 40.686761 | 0.643487 | 0.220273 | 0 | 0.641844 | 0 | 0 | 0.042716 | 0 | 0 | 0 | 0 | 0 | 0.019504 | 1 | 0.008865 | false | 0 | 0.010638 | 0 | 0.021277 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
d3b8581466fea06d2d6586c02b243aaa41500bc5 | 363 | py | Python | tests/pyflakes_bears/pep8_naming_test_files/E10/invalid_nested_function.py | MacBox7/coala-pyflakes | 637f8a2e77973384be79d30b0dae1f43072e60c8 | [
"MIT"
] | null | null | null | tests/pyflakes_bears/pep8_naming_test_files/E10/invalid_nested_function.py | MacBox7/coala-pyflakes | 637f8a2e77973384be79d30b0dae1f43072e60c8 | [
"MIT"
] | 12 | 2018-05-21T06:12:59.000Z | 2018-07-30T10:37:16.000Z | tests/pyflakes_bears/pep8_naming_test_files/E10/invalid_nested_function.py | MacBox7/coala-pyflakes | 637f8a2e77973384be79d30b0dae1f43072e60c8 | [
"MIT"
] | 1 | 2018-06-10T16:16:47.000Z | 2018-06-10T16:16:47.000Z | class Foo:
def good(self):
class Bar:
@classmethod
def foo_bar():
pass
def foo():
'''
>>> class Good():
... def __str__(self):
... class Bar:
... @classmethod
... def foo_bar(me):
... pass
'''
pass
| 20.166667 | 40 | 0.330579 | 28 | 363 | 4.071429 | 0.357143 | 0.157895 | 0.210526 | 0.403509 | 0.561404 | 0.561404 | 0.561404 | 0 | 0 | 0 | 0 | 0 | 0.539945 | 363 | 17 | 41 | 21.352941 | 0.682635 | 0.424242 | 0 | 0.25 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.375 | false | 0.25 | 0 | 0 | 0.625 | 0 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 7 |
d3c24722231cbd2b4577560a42650dde287ba69a | 1,635 | py | Python | test/test_db_session_string_definition.py | xyloon/k-exchange-rate | 8b145927e57d81652e1de987b77e56b87b9c0b09 | [
"MIT"
] | null | null | null | test/test_db_session_string_definition.py | xyloon/k-exchange-rate | 8b145927e57d81652e1de987b77e56b87b9c0b09 | [
"MIT"
] | null | null | null | test/test_db_session_string_definition.py | xyloon/k-exchange-rate | 8b145927e57d81652e1de987b77e56b87b9c0b09 | [
"MIT"
] | null | null | null | import pytest
from kexr.utils import db_session_string_definition, DBType, NotEnoughParameter
def test_db_session_string_definition_memory():
assert "sqlite://" == db_session_string_definition(DBType.memory)
def test_db_session_string_definition_sqlite():
assert "sqlite:///a.db" == db_session_string_definition(DBType.sqlite3, file_path="a.db")
@pytest.mark.xfail(raises=NotEnoughParameter)
def test_db_session_string_definition_sqlite_error():
db_session_string_definition(DBType.sqlite3)
@pytest.mark.xfail(raises=NotEnoughParameter)
def test_db_session_string_definition_psql_error1():
db_session_string_definition(DBType.postgres)
@pytest.mark.xfail(raises=NotEnoughParameter)
def test_db_session_string_definition_psql_error2():
db_session_string_definition(DBType.postgres, username="u")
@pytest.mark.xfail(raises=NotEnoughParameter)
def test_db_session_string_definition_psql_error3():
db_session_string_definition(DBType.postgres, username="u", password="p")
@pytest.mark.xfail(raises=NotEnoughParameter)
def test_db_session_string_definition_psql_error4():
db_session_string_definition(DBType.postgres, username="u", password="p", ipaddr="127.0.0.1")
@pytest.mark.xfail(raises=NotEnoughParameter)
def test_db_session_string_definition_psql_error5():
db_session_string_definition(DBType.postgres, username="u", password="p", ipaddr="127.0.0.1", port=5000)
def test_db_session_string_definition_psql():
assert "postgres://u:p@127.0.0.1:5000/dbn" == db_session_string_definition(DBType.postgres, username="u", password="p", ipaddr="127.0.0.1", port=5000, dbname="dbn")
| 32.7 | 168 | 0.805505 | 224 | 1,635 | 5.513393 | 0.191964 | 0.138462 | 0.230769 | 0.384615 | 0.875304 | 0.825101 | 0.731984 | 0.626721 | 0.587854 | 0.587854 | 0 | 0.028477 | 0.076453 | 1,635 | 49 | 169 | 33.367347 | 0.789404 | 0 | 0 | 0.230769 | 0 | 0 | 0.060699 | 0.020233 | 0 | 0 | 0 | 0 | 0.115385 | 1 | 0.346154 | true | 0.153846 | 0.076923 | 0 | 0.423077 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 8 |
d3c27f079b83bdccf59bd930a0c1fe94b25426e2 | 1,504 | py | Python | advent/model/decoder.py | shiyutang/Coarse-to-fine-UDA | 6025b99dacc6c03b5980fd1bb952657a389886c3 | [
"Apache-2.0"
] | null | null | null | advent/model/decoder.py | shiyutang/Coarse-to-fine-UDA | 6025b99dacc6c03b5980fd1bb952657a389886c3 | [
"Apache-2.0"
] | null | null | null | advent/model/decoder.py | shiyutang/Coarse-to-fine-UDA | 6025b99dacc6c03b5980fd1bb952657a389886c3 | [
"Apache-2.0"
] | null | null | null | from torch import nn
decoder = nn.Sequential(nn.ReflectionPad2d((1, 1, 1, 1)),
nn.Conv2d(512, 256, (3, 3)),
nn.ReLU(),
nn.Upsample(scale_factor=2, mode='nearest'),
nn.ReflectionPad2d((1, 1, 1, 1)),
nn.Conv2d(256, 256, (3, 3)),
nn.ReLU(),
nn.ReflectionPad2d((1, 1, 1, 1)),
nn.Conv2d(256, 256, (3, 3)),
nn.ReLU(),
nn.ReflectionPad2d((1, 1, 1, 1)),
nn.Conv2d(256, 256, (3, 3)),
nn.ReLU(),
nn.ReflectionPad2d((1, 1, 1, 1)),
nn.Conv2d(256, 128, (3, 3)),
nn.ReLU(),
nn.Upsample(scale_factor=2, mode='nearest'),
nn.ReflectionPad2d((1, 1, 1, 1)),
nn.Conv2d(128, 128, (3, 3)),
nn.ReLU(),
nn.ReflectionPad2d((1, 1, 1, 1)),
nn.Conv2d(128, 64, (3, 3)),
nn.ReLU(),
nn.Upsample(scale_factor=2, mode='nearest'),
nn.ReflectionPad2d((1, 1, 1, 1)),
nn.Conv2d(64, 64, (3, 3)),
nn.ReLU(),
nn.ReflectionPad2d((1, 1, 1, 1)),
nn.Conv2d(64, 3, (3, 3)), )
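# Minimal shape check (illustrative): the decoder maps 512-channel features back
# to a 3-channel image, upsampling 8x overall through its three nearest-neighbour
# stages, e.g. a 16x16 feature map becomes a 128x128 image.
if __name__ == '__main__':
    import torch
    feats = torch.randn(1, 512, 16, 16)
    assert decoder(feats).shape == (1, 3, 128, 128)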
| 47 | 68 | 0.34109 | 152 | 1,504 | 3.355263 | 0.138158 | 0.105882 | 0.105882 | 0.335294 | 0.917647 | 0.917647 | 0.892157 | 0.892157 | 0.835294 | 0.815686 | 0 | 0.166667 | 0.509309 | 1,504 | 31 | 69 | 48.516129 | 0.52439 | 0 | 0 | 0.733333 | 0 | 0 | 0.013963 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.033333 | 0 | 0.033333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 9 |
d3d4cc7b096c91715b08a4c6379655c129d42748 | 95,048 | py | Python | eerie/bsplines/b_1d.py | RomeroGuDw/wavelet_networks | 0fd6871ff9f03a3cb26f1c414728aed89a33b99c | [
"MIT"
] | 59 | 2020-06-12T09:16:52.000Z | 2022-03-10T09:30:58.000Z | eerie/bsplines/b_1d.py | RomeroGuDw/wavelet_networks | 0fd6871ff9f03a3cb26f1c414728aed89a33b99c | [
"MIT"
] | 1 | 2020-09-13T01:43:44.000Z | 2022-02-16T14:33:18.000Z | eerie/bsplines/b_1d.py | RomeroGuDw/wavelet_networks | 0fd6871ff9f03a3cb26f1c414728aed89a33b99c | [
"MIT"
] | 1 | 2020-07-31T14:23:43.000Z | 2020-07-31T14:23:43.000Z | """
Implementation for B-splines of degree up to 50. For speed considerations the
splines of degrees up to 50 are hard-coded. This file was generated using a
Wolfram Mathematica script in which the expressions are generated via the inverse Fourier transform
of the Fourier B-spline expression
BF[n_][w_]:=(Sin[w/2]/(w/2))^(n+1)
with handling of the case w = 0 via
Do[BF[n][0]=1;BF[n][0.]=1;,{n,0,nMax}]
and the spatial/time domain B-spline expression is then obtained via
InverseFourierTransform[BF[n][w], w, x, FourierParameters -> {1, -1}]
File created Wed 18 Dec 2019 13:04:31
@author: EJ Bekkers, Informatics Institute, University of Amsterdam, The Netherlands
Edit: added functions that return the support of the spline
"""
import math

import torch
## The 1-dimensional B-spline
def B(n):
""" Returns a 1D B-spline basis function of degree "n" (centered around
zero).
INPUT:
- degree n, an integer
OUTPUT:
- func, a python function which takes as input a position x, or a
torch tensor array of positions, and returns the function value(s)
of the B-Spline basis function.
"""
if (n >= 0) and (n <= 50):
    # Dispatch to the matching hard-coded implementation _B_n defined below
    func = eval('_B_' + str(n) + '()')
else:
    if n < 0:
        raise ValueError('Error, spline degree should be a non-negative integer!')
    else:
        raise ValueError('Error, spline degree too high! Currently only B-splines up to degree 50 are implemented.')
return func
## Returns the support of the 1D cardinal B-spline in terms of a min-max range
def B_supp(n, s=1, dx=0, intsupp=False):
""" Returns a min and max value of the domain on which the 1D cardinal B-spline of order n is non-zero.
INPUT:
- degree n, an integer
INPUT (optional):
- scale s, a real scalar number. Specifies the support of scaled B-splines via supp( B( . / s) )
- offset dx, a real scalar number. Specifies the support of scaled+shifted B-splines via supp( B( . / s - dx) )
- intsupp, a boolean. Specifies whether or not the support should be clipped to an integer grid. E.g. if xMax
were 2.3 and we only sample integer positions x, then 2 would still be non-zero, but 3 would evaluate to
zero. In this case the non-zero interval would be [-2,2], whereas in the intsupp=False case it would be
[-2.3,2.3]
OUTPUT:
- (xMin, xMax), the min-max range of the support
"""
xMinMax = s * (n + 1) / 2
xMin = -xMinMax + dx
xMax = xMinMax + dx
if intsupp:
    # Clip to the integer grid strictly inside the support; int() truncates toward
    # zero, so floor/ceil are used to handle shifted supports that lie off-center.
    xMax = int(xMax) - 1 if int(xMax) == xMax else math.floor(xMax)
    xMin = int(xMin) + 1 if int(xMin) == xMin else math.ceil(xMin)
return (xMin, xMax)
## Returns the grid (1D torch tensor) with unit gridpoint spacing
def B_supp_grid(n, s=1, dx=0, intsupp=False):
""" Returns a grid (1D torch tensor) with unit spacing between the grid points (e.g. [xMin,...,-1,0,1,...,xMax]).
The min-max range is computed via B_supp.
INPUT:
- degree n, an integer
INPUT (optional):
- scale s, a real scalar number. Specifies the support of scaled B-splines via supp( B( . / s) )
- offset dx, a real scalar number. Specifies the support of scaled+shifted B-splines via supp( B( . / s - dx) )
- intsupp, a boolean. Specifies whether or not the support should be clipped to an integer grid. E.g. if xMax
were 2.3 and we only sample integer positions x, then 2 would still be non-zero, but 3 would evaluate to
zero. In this case the non-zero interval would be [-2,2], whereas in the intsupp=False case it would be
[-2.3,2.3]
OUTPUT:
- xx, a 1D torch.tensor of x-values for which B(x) is non-zero
"""
xMin, xMax = B_supp(n, s, dx, intsupp)
return torch.arange(xMin, xMax + 1, dtype=torch.int16)
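# Minimal usage sketch (illustrative, not part of the module API): evaluate the
# quadratic B-spline on its non-zero integer grid. B_supp(2) gives (-1.5, 1.5);
# with intsupp=True the grid returned by B_supp_grid is [-1, 0, 1].
def _example_quadratic_spline():
    b2 = B(2)
    xx = B_supp_grid(2, intsupp=True).to(torch.float32)
    return b2(xx)  # tensor([0.1250, 0.7500, 0.1250])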
## The base definitions of the 1D B-spline
def _B_0():
def B(x):
return (torch.sign(1 / 2 - x) + torch.sign(1 / 2 + x)) / 2
return B
def _B_1():
def B(x):
return (-((-1 + x) * torch.sign(1 - x)) - 2 * x * torch.sign(x) + (1 + x) * torch.sign(1 + x)) / 2
return B
def _B_2():
def B(x):
return (-3 * (-1 / 2 + x) ** 2 * torch.sign(1 / 2 - x) + (-3 / 2 + x) ** 2 * torch.sign(3 / 2 - x) - (
3 * (1 + 2 * x) ** 2 * torch.sign(1 / 2 + x)) / 4 + (
(3 + 2 * x) ** 2 * torch.sign(3 / 2 + x)) / 4) / 4
return B
def _B_3():
def B(x):
return (4 * (-1 + x) ** 3 * torch.sign(1 - x) - (-2 + x) ** 3 * torch.sign(2 - x) + 6 * x ** 3 * torch.sign(
x) - 4 * (1 + x) ** 3 * torch.sign(1 + x) + (2 + x) ** 3 * torch.sign(2 + x)) / 12
return B
def _B_4():
def B(x):
return (10 * (-1 / 2 + x) ** 4 * torch.sign(1 / 2 - x) - 5 * (-3 / 2 + x) ** 4 * torch.sign(3 / 2 - x) + (
-5 / 2 + x) ** 4 * torch.sign(5 / 2 - x) + (
5 * (1 + 2 * x) ** 4 * torch.sign(1 / 2 + x)) / 8 - 5 * (3 / 2 + x) ** 4 * torch.sign(
3 / 2 + x) + ((5 + 2 * x) ** 4 * torch.sign(5 / 2 + x)) / 16) / 48
return B
def _B_5():
def B(x):
return (-15 * (-1 + x) ** 5 * torch.sign(1 - x) + 6 * (-2 + x) ** 5 * torch.sign(2 - x) - (
-3 + x) ** 5 * torch.sign(3 - x) - 20 * x ** 5 * torch.sign(x) + 15 * (1 + x) ** 5 * torch.sign(
1 + x) - 6 * (2 + x) ** 5 * torch.sign(2 + x) + (3 + x) ** 5 * torch.sign(3 + x)) / 240
return B
def _B_6():
def B(x):
return (-35 * (-1 / 2 + x) ** 6 * torch.sign(1 / 2 - x) + 21 * (-3 / 2 + x) ** 6 * torch.sign(3 / 2 - x) - 7 * (
-5 / 2 + x) ** 6 * torch.sign(5 / 2 - x) + (-7 / 2 + x) ** 6 * torch.sign(7 / 2 - x) - 35 * (
1 / 2 + x) ** 6 * torch.sign(1 / 2 + x) + (
21 * (3 + 2 * x) ** 6 * torch.sign(3 / 2 + x)) / 64 - 7 * (5 / 2 + x) ** 6 * torch.sign(
5 / 2 + x) + ((7 + 2 * x) ** 6 * torch.sign(7 / 2 + x)) / 64) / 1440
return B
def _B_7():
def B(x):
return (56 * (-1 + x) ** 7 * torch.sign(1 - x) - 28 * (-2 + x) ** 7 * torch.sign(2 - x) + 8 * (
-3 + x) ** 7 * torch.sign(3 - x) - (-4 + x) ** 7 * torch.sign(4 - x) + 70 * x ** 7 * torch.sign(
x) - 56 * (1 + x) ** 7 * torch.sign(1 + x) + 28 * (2 + x) ** 7 * torch.sign(2 + x) - 8 * (
3 + x) ** 7 * torch.sign(3 + x) + (4 + x) ** 7 * torch.sign(4 + x)) / 10080
return B
def _B_8():
def B(x):
return (126 * (-1 / 2 + x) ** 8 * torch.sign(1 / 2 - x) - 84 * (-3 / 2 + x) ** 8 * torch.sign(
3 / 2 - x) + 36 * (-5 / 2 + x) ** 8 * torch.sign(5 / 2 - x) - 9 * (-7 / 2 + x) ** 8 * torch.sign(
7 / 2 - x) + (-9 / 2 + x) ** 8 * torch.sign(9 / 2 - x) + (
63 * (1 + 2 * x) ** 8 * torch.sign(1 / 2 + x)) / 128 - 84 * (3 / 2 + x) ** 8 * torch.sign(
3 / 2 + x) + (9 * (5 + 2 * x) ** 8 * torch.sign(5 / 2 + x)) / 64 - 9 * (7 / 2 + x) ** 8 * torch.sign(
7 / 2 + x) + ((9 + 2 * x) ** 8 * torch.sign(9 / 2 + x)) / 256) / 80640
return B
def _B_9():
def B(x):
return (-210 * (-1 + x) ** 9 * torch.sign(1 - x) + 120 * (-2 + x) ** 9 * torch.sign(2 - x) - 45 * (
-3 + x) ** 9 * torch.sign(3 - x) + 10 * (-4 + x) ** 9 * torch.sign(4 - x) - (
-5 + x) ** 9 * torch.sign(5 - x) - 252 * x ** 9 * torch.sign(x) + 210 * (
1 + x) ** 9 * torch.sign(1 + x) - 120 * (2 + x) ** 9 * torch.sign(2 + x) + 45 * (
3 + x) ** 9 * torch.sign(3 + x) - 10 * (4 + x) ** 9 * torch.sign(4 + x) + (
5 + x) ** 9 * torch.sign(5 + x)) / 725760
return B
def _B_10():
def B(x):
return (-462 * (-1 / 2 + x) ** 10 * torch.sign(1 / 2 - x) + 330 * (-3 / 2 + x) ** 10 * torch.sign(
3 / 2 - x) - 165 * (-5 / 2 + x) ** 10 * torch.sign(5 / 2 - x) + 55 * (-7 / 2 + x) ** 10 * torch.sign(
7 / 2 - x) - 11 * (-9 / 2 + x) ** 10 * torch.sign(9 / 2 - x) + (-11 / 2 + x) ** 10 * torch.sign(
11 / 2 - x) - 462 * (1 / 2 + x) ** 10 * torch.sign(1 / 2 + x) + (
165 * (3 + 2 * x) ** 10 * torch.sign(3 / 2 + x)) / 512 - 165 * (
5 / 2 + x) ** 10 * torch.sign(5 / 2 + x) + 55 * (7 / 2 + x) ** 10 * torch.sign(
7 / 2 + x) - 11 * (9 / 2 + x) ** 10 * torch.sign(9 / 2 + x) + (11 / 2 + x) ** 10 * torch.sign(
11 / 2 + x)) / 7257600
return B
def _B_11():
def B(x):
return (792 * (-1 + x) ** 11 * torch.sign(1 - x) - 495 * (-2 + x) ** 11 * torch.sign(2 - x) + 220 * (
-3 + x) ** 11 * torch.sign(3 - x) - 66 * (-4 + x) ** 11 * torch.sign(4 - x) + 12 * (
-5 + x) ** 11 * torch.sign(5 - x) - (-6 + x) ** 11 * torch.sign(
6 - x) + 924 * x ** 11 * torch.sign(x) - 792 * (1 + x) ** 11 * torch.sign(1 + x) + 495 * (
2 + x) ** 11 * torch.sign(2 + x) - 220 * (3 + x) ** 11 * torch.sign(3 + x) + 66 * (
4 + x) ** 11 * torch.sign(4 + x) - 12 * (5 + x) ** 11 * torch.sign(5 + x) + (
6 + x) ** 11 * torch.sign(6 + x)) / 79833600
return B
def _B_12():
def B(x):
return (1716 * (-1 / 2 + x) ** 12 * torch.sign(1 / 2 - x) - 1287 * (-3 / 2 + x) ** 12 * torch.sign(
3 / 2 - x) + 715 * (-5 / 2 + x) ** 12 * torch.sign(5 / 2 - x) - 286 * (-7 / 2 + x) ** 12 * torch.sign(
7 / 2 - x) + 78 * (-9 / 2 + x) ** 12 * torch.sign(9 / 2 - x) - 13 * (-11 / 2 + x) ** 12 * torch.sign(
11 / 2 - x) + (-13 / 2 + x) ** 12 * torch.sign(13 / 2 - x) + (
429 * (1 + 2 * x) ** 12 * torch.sign(1 / 2 + x)) / 1024 - 1287 * (
3 / 2 + x) ** 12 * torch.sign(3 / 2 + x) + 715 * (5 / 2 + x) ** 12 * torch.sign(
5 / 2 + x) - 286 * (7 / 2 + x) ** 12 * torch.sign(7 / 2 + x) + 78 * (9 / 2 + x) ** 12 * torch.sign(
9 / 2 + x) - 13 * (11 / 2 + x) ** 12 * torch.sign(11 / 2 + x) + (13 / 2 + x) ** 12 * torch.sign(
13 / 2 + x)) / 958003200
return B
def _B_13():
def B(x):
return (-3003 * (-1 + x) ** 13 * torch.sign(1 - x) + 2002 * (-2 + x) ** 13 * torch.sign(2 - x) - 1001 * (
-3 + x) ** 13 * torch.sign(3 - x) + 364 * (-4 + x) ** 13 * torch.sign(4 - x) - 91 * (
-5 + x) ** 13 * torch.sign(5 - x) + 14 * (-6 + x) ** 13 * torch.sign(6 - x) - (
-7 + x) ** 13 * torch.sign(7 - x) - 3432 * x ** 13 * torch.sign(x) + 3003 * (
1 + x) ** 13 * torch.sign(1 + x) - 2002 * (2 + x) ** 13 * torch.sign(2 + x) + 1001 * (
3 + x) ** 13 * torch.sign(3 + x) - 364 * (4 + x) ** 13 * torch.sign(4 + x) + 91 * (
5 + x) ** 13 * torch.sign(5 + x) - 14 * (6 + x) ** 13 * torch.sign(6 + x) + (
7 + x) ** 13 * torch.sign(7 + x)) / 12454041600
return B
def _B_14():
def B(x):
return (-6435 * (-1 / 2 + x) ** 14 * torch.sign(1 / 2 - x) + 5005 * (-3 / 2 + x) ** 14 * torch.sign(
3 / 2 - x) - 3003 * (-5 / 2 + x) ** 14 * torch.sign(5 / 2 - x) + 1365 * (-7 / 2 + x) ** 14 * torch.sign(
7 / 2 - x) - 455 * (-9 / 2 + x) ** 14 * torch.sign(9 / 2 - x) + 105 * (-11 / 2 + x) ** 14 * torch.sign(
11 / 2 - x) - 15 * (-13 / 2 + x) ** 14 * torch.sign(13 / 2 - x) + (-15 / 2 + x) ** 14 * torch.sign(
15 / 2 - x) - 6435 * (1 / 2 + x) ** 14 * torch.sign(1 / 2 + x) + 5005 * (3 / 2 + x) ** 14 * torch.sign(
3 / 2 + x) - 3003 * (5 / 2 + x) ** 14 * torch.sign(5 / 2 + x) + 1365 * (7 / 2 + x) ** 14 * torch.sign(
7 / 2 + x) - 455 * (9 / 2 + x) ** 14 * torch.sign(9 / 2 + x) + 105 * (11 / 2 + x) ** 14 * torch.sign(
11 / 2 + x) - 15 * (13 / 2 + x) ** 14 * torch.sign(13 / 2 + x) + (15 / 2 + x) ** 14 * torch.sign(
15 / 2 + x)) / 174356582400
return B
def _B_15():
def B(x):
return (11440 * (-1 + x) ** 15 * torch.sign(1 - x) - 8008 * (-2 + x) ** 15 * torch.sign(2 - x) + 4368 * (
-3 + x) ** 15 * torch.sign(3 - x) - 1820 * (-4 + x) ** 15 * torch.sign(4 - x) + 560 * (
-5 + x) ** 15 * torch.sign(5 - x) - 120 * (-6 + x) ** 15 * torch.sign(6 - x) + 16 * (
-7 + x) ** 15 * torch.sign(7 - x) - (-8 + x) ** 15 * torch.sign(
8 - x) + 12870 * x ** 15 * torch.sign(x) - 11440 * (1 + x) ** 15 * torch.sign(1 + x) + 8008 * (
2 + x) ** 15 * torch.sign(2 + x) - 4368 * (3 + x) ** 15 * torch.sign(3 + x) + 1820 * (
4 + x) ** 15 * torch.sign(4 + x) - 560 * (5 + x) ** 15 * torch.sign(5 + x) + 120 * (
6 + x) ** 15 * torch.sign(6 + x) - 16 * (7 + x) ** 15 * torch.sign(7 + x) + (
8 + x) ** 15 * torch.sign(8 + x)) / 2615348736000
return B
def _B_16():
def B(x):
return (24310 * (-1 / 2 + x) ** 16 * torch.sign(1 / 2 - x) - 19448 * (-3 / 2 + x) ** 16 * torch.sign(
3 / 2 - x) + 12376 * (-5 / 2 + x) ** 16 * torch.sign(5 / 2 - x) - 6188 * (-7 / 2 + x) ** 16 * torch.sign(
7 / 2 - x) + 2380 * (-9 / 2 + x) ** 16 * torch.sign(9 / 2 - x) - 680 * (-11 / 2 + x) ** 16 * torch.sign(
11 / 2 - x) + 136 * (-13 / 2 + x) ** 16 * torch.sign(13 / 2 - x) - 17 * (-15 / 2 + x) ** 16 * torch.sign(
15 / 2 - x) + (-17 / 2 + x) ** 16 * torch.sign(17 / 2 - x) + 24310 * (1 / 2 + x) ** 16 * torch.sign(
1 / 2 + x) - 19448 * (3 / 2 + x) ** 16 * torch.sign(3 / 2 + x) + 12376 * (
5 / 2 + x) ** 16 * torch.sign(5 / 2 + x) - 6188 * (
7 / 2 + x) ** 16 * torch.sign(7 / 2 + x) + 2380 * (9 / 2 + x) ** 16 * torch.sign(
9 / 2 + x) - 680 * (11 / 2 + x) ** 16 * torch.sign(11 / 2 + x) + 136 * (
13 / 2 + x) ** 16 * torch.sign(13 / 2 + x) - 17 * (
15 / 2 + x) ** 16 * torch.sign(15 / 2 + x) + (17 / 2 + x) ** 16 * torch.sign(
17 / 2 + x)) / 41845579776000
return B
def _B_17():
def B(x):
return (-43758 * (-1 + x) ** 17 * torch.sign(1 - x) + 31824 * (-2 + x) ** 17 * torch.sign(2 - x) - 18564 * (
-3 + x) ** 17 * torch.sign(3 - x) + 8568 * (-4 + x) ** 17 * torch.sign(4 - x) - 3060 * (
-5 + x) ** 17 * torch.sign(5 - x) + 816 * (-6 + x) ** 17 * torch.sign(6 - x) - 153 * (
-7 + x) ** 17 * torch.sign(7 - x) + 18 * (-8 + x) ** 17 * torch.sign(8 - x) - (
-9 + x) ** 17 * torch.sign(9 - x) - 48620 * x ** 17 * torch.sign(x) + 43758 * (
1 + x) ** 17 * torch.sign(1 + x) - 31824 * (2 + x) ** 17 * torch.sign(2 + x) + 18564 * (
3 + x) ** 17 * torch.sign(3 + x) - 8568 * (4 + x) ** 17 * torch.sign(4 + x) + 3060 * (
5 + x) ** 17 * torch.sign(5 + x) - 816 * (6 + x) ** 17 * torch.sign(6 + x) + 153 * (
7 + x) ** 17 * torch.sign(7 + x) - 18 * (8 + x) ** 17 * torch.sign(8 + x) + (
9 + x) ** 17 * torch.sign(9 + x)) / 711374856192000
return B
def _B_18():
def B(x):
return (-92378 * (-1 / 2 + x) ** 18 * torch.sign(1 / 2 - x) + 75582 * (-3 / 2 + x) ** 18 * torch.sign(
3 / 2 - x) - 50388 * (-5 / 2 + x) ** 18 * torch.sign(5 / 2 - x) + 27132 * (-7 / 2 + x) ** 18 * torch.sign(
7 / 2 - x) - 11628 * (-9 / 2 + x) ** 18 * torch.sign(9 / 2 - x) + 3876 * (-11 / 2 + x) ** 18 * torch.sign(
11 / 2 - x) - 969 * (-13 / 2 + x) ** 18 * torch.sign(13 / 2 - x) + 171 * (-15 / 2 + x) ** 18 * torch.sign(
15 / 2 - x) - 19 * (-17 / 2 + x) ** 18 * torch.sign(17 / 2 - x) + (-19 / 2 + x) ** 18 * torch.sign(
19 / 2 - x) - 92378 * (1 / 2 + x) ** 18 * torch.sign(1 / 2 + x) + 75582 * (3 / 2 + x) ** 18 * torch.sign(
3 / 2 + x) - 50388 * (5 / 2 + x) ** 18 * torch.sign(5 / 2 + x) + 27132 * (7 / 2 + x) ** 18 * torch.sign(
7 / 2 + x) - 11628 * (9 / 2 + x) ** 18 * torch.sign(9 / 2 + x) + 3876 * (11 / 2 + x) ** 18 * torch.sign(
11 / 2 + x) - 969 * (13 / 2 + x) ** 18 * torch.sign(13 / 2 + x) + 171 * (15 / 2 + x) ** 18 * torch.sign(
15 / 2 + x) - 19 * (17 / 2 + x) ** 18 * torch.sign(17 / 2 + x) + (19 / 2 + x) ** 18 * torch.sign(
19 / 2 + x)) / 12804747411456000
return B
def _B_19():
def B(x):
return (167960 * (-1 + x) ** 19 * torch.sign(1 - x) - 125970 * (-2 + x) ** 19 * torch.sign(2 - x) + 77520 * (
-3 + x) ** 19 * torch.sign(3 - x) - 38760 * (-4 + x) ** 19 * torch.sign(4 - x) + 15504 * (
-5 + x) ** 19 * torch.sign(5 - x) - 4845 * (-6 + x) ** 19 * torch.sign(6 - x) + 1140 * (
-7 + x) ** 19 * torch.sign(7 - x) - 190 * (-8 + x) ** 19 * torch.sign(8 - x) + 20 * (
-9 + x) ** 19 * torch.sign(9 - x) - (-10 + x) ** 19 * torch.sign(
10 - x) + 184756 * x ** 19 * torch.sign(x) - 167960 * (1 + x) ** 19 * torch.sign(1 + x) + 125970 * (
2 + x) ** 19 * torch.sign(2 + x) - 77520 * (3 + x) ** 19 * torch.sign(3 + x) + 38760 * (
4 + x) ** 19 * torch.sign(4 + x) - 15504 * (5 + x) ** 19 * torch.sign(5 + x) + 4845 * (
6 + x) ** 19 * torch.sign(6 + x) - 1140 * (7 + x) ** 19 * torch.sign(7 + x) + 190 * (
8 + x) ** 19 * torch.sign(8 + x) - 20 * (9 + x) ** 19 * torch.sign(9 + x) + (
10 + x) ** 19 * torch.sign(10 + x)) / 243290200817664000
return B
def _B_20():
def B(x):
return (352716 * (-1 / 2 + x) ** 20 * torch.sign(1 / 2 - x) - 293930 * (-3 / 2 + x) ** 20 * torch.sign(
3 / 2 - x) + 203490 * (-5 / 2 + x) ** 20 * torch.sign(5 / 2 - x) - 116280 * (-7 / 2 + x) ** 20 * torch.sign(
7 / 2 - x) + 54264 * (-9 / 2 + x) ** 20 * torch.sign(9 / 2 - x) - 20349 * (-11 / 2 + x) ** 20 * torch.sign(
11 / 2 - x) + 5985 * (-13 / 2 + x) ** 20 * torch.sign(13 / 2 - x) - 1330 * (-15 / 2 + x) ** 20 * torch.sign(
15 / 2 - x) + 210 * (-17 / 2 + x) ** 20 * torch.sign(17 / 2 - x) - 21 * (-19 / 2 + x) ** 20 * torch.sign(
19 / 2 - x) + (-21 / 2 + x) ** 20 * torch.sign(21 / 2 - x) + 352716 * (1 / 2 + x) ** 20 * torch.sign(
1 / 2 + x) - 293930 * (3 / 2 + x) ** 20 * torch.sign(3 / 2 + x) + 203490 * (5 / 2 + x) ** 20 * torch.sign(
5 / 2 + x) - 116280 * (7 / 2 + x) ** 20 * torch.sign(7 / 2 + x) + 54264 * (9 / 2 + x) ** 20 * torch.sign(
9 / 2 + x) - 20349 * (11 / 2 + x) ** 20 * torch.sign(11 / 2 + x) + 5985 * (13 / 2 + x) ** 20 * torch.sign(
13 / 2 + x) - 1330 * (15 / 2 + x) ** 20 * torch.sign(15 / 2 + x) + 210 * (17 / 2 + x) ** 20 * torch.sign(
17 / 2 + x) - 21 * (19 / 2 + x) ** 20 * torch.sign(19 / 2 + x) + (21 / 2 + x) ** 20 * torch.sign(
21 / 2 + x)) / 4865804016353280000
return B
def _B_21():
def B(x):
return (-646646 * (-1 + x) ** 21 * torch.sign(1 - x) + 497420 * (-2 + x) ** 21 * torch.sign(2 - x) - 319770 * (
-3 + x) ** 21 * torch.sign(3 - x) + 170544 * (-4 + x) ** 21 * torch.sign(4 - x) - 74613 * (
-5 + x) ** 21 * torch.sign(5 - x) + 26334 * (-6 + x) ** 21 * torch.sign(6 - x) - 7315 * (
-7 + x) ** 21 * torch.sign(7 - x) + 1540 * (-8 + x) ** 21 * torch.sign(8 - x) - 231 * (
-9 + x) ** 21 * torch.sign(9 - x) + 22 * (-10 + x) ** 21 * torch.sign(10 - x) - (
-11 + x) ** 21 * torch.sign(11 - x) - 705432 * x ** 21 * torch.sign(x) + 646646 * (
1 + x) ** 21 * torch.sign(1 + x) - 497420 * (2 + x) ** 21 * torch.sign(2 + x) + 319770 * (
3 + x) ** 21 * torch.sign(3 + x) - 170544 * (4 + x) ** 21 * torch.sign(4 + x) + 74613 * (
5 + x) ** 21 * torch.sign(5 + x) - 26334 * (6 + x) ** 21 * torch.sign(6 + x) + 7315 * (
7 + x) ** 21 * torch.sign(7 + x) - 1540 * (8 + x) ** 21 * torch.sign(8 + x) + 231 * (
9 + x) ** 21 * torch.sign(9 + x) - 22 * (10 + x) ** 21 * torch.sign(10 + x) + (
11 + x) ** 21 * torch.sign(11 + x)) / 102181884343418880000
return B
def _B_22():
def B(x):
return (-1352078 * (-1 / 2 + x) ** 22 * torch.sign(1 / 2 - x) + 1144066 * (-3 / 2 + x) ** 22 * torch.sign(
3 / 2 - x) - 817190 * (-5 / 2 + x) ** 22 * torch.sign(5 / 2 - x) + 490314 * (-7 / 2 + x) ** 22 * torch.sign(
7 / 2 - x) - 245157 * (-9 / 2 + x) ** 22 * torch.sign(9 / 2 - x) + 100947 * (
-11 / 2 + x) ** 22 * torch.sign(11 / 2 - x) - 33649 * (-13 / 2 + x) ** 22 * torch.sign(
13 / 2 - x) + 8855 * (-15 / 2 + x) ** 22 * torch.sign(15 / 2 - x) - 1771 * (-17 / 2 + x) ** 22 * torch.sign(
17 / 2 - x) + 253 * (-19 / 2 + x) ** 22 * torch.sign(19 / 2 - x) - 23 * (-21 / 2 + x) ** 22 * torch.sign(
21 / 2 - x) + (-23 / 2 + x) ** 22 * torch.sign(23 / 2 - x) - 1352078 * (1 / 2 + x) ** 22 * torch.sign(
1 / 2 + x) + 1144066 * (3 / 2 + x) ** 22 * torch.sign(3 / 2 + x) - 817190 * (5 / 2 + x) ** 22 * torch.sign(
5 / 2 + x) + 490314 * (7 / 2 + x) ** 22 * torch.sign(7 / 2 + x) - 245157 * (9 / 2 + x) ** 22 * torch.sign(
9 / 2 + x) + 100947 * (11 / 2 + x) ** 22 * torch.sign(11 / 2 + x) - 33649 * (13 / 2 + x) ** 22 * torch.sign(
13 / 2 + x) + 8855 * (15 / 2 + x) ** 22 * torch.sign(15 / 2 + x) - 1771 * (17 / 2 + x) ** 22 * torch.sign(
17 / 2 + x) + 253 * (19 / 2 + x) ** 22 * torch.sign(19 / 2 + x) - 23 * (21 / 2 + x) ** 22 * torch.sign(
21 / 2 + x) + (23 / 2 + x) ** 22 * torch.sign(23 / 2 + x)) / 2248001455555215360000
return B
def _B_23():
def B(x):
return (2496144 * (-1 + x) ** 23 * torch.sign(1 - x) - 1961256 * (-2 + x) ** 23 * torch.sign(
2 - x) + 1307504 * (-3 + x) ** 23 * torch.sign(3 - x) - 735471 * (-4 + x) ** 23 * torch.sign(
4 - x) + 346104 * (-5 + x) ** 23 * torch.sign(5 - x) - 134596 * (-6 + x) ** 23 * torch.sign(
6 - x) + 42504 * (-7 + x) ** 23 * torch.sign(7 - x) - 10626 * (-8 + x) ** 23 * torch.sign(8 - x) + 2024 * (
-9 + x) ** 23 * torch.sign(9 - x) - 276 * (-10 + x) ** 23 * torch.sign(10 - x) + 24 * (
-11 + x) ** 23 * torch.sign(11 - x) - (-12 + x) ** 23 * torch.sign(
12 - x) + 2704156 * x ** 23 * torch.sign(x) - 2496144 * (1 + x) ** 23 * torch.sign(1 + x) + 1961256 * (
2 + x) ** 23 * torch.sign(2 + x) - 1307504 * (3 + x) ** 23 * torch.sign(3 + x) + 735471 * (
4 + x) ** 23 * torch.sign(4 + x) - 346104 * (5 + x) ** 23 * torch.sign(5 + x) + 134596 * (
6 + x) ** 23 * torch.sign(6 + x) - 42504 * (7 + x) ** 23 * torch.sign(7 + x) + 10626 * (
8 + x) ** 23 * torch.sign(8 + x) - 2024 * (9 + x) ** 23 * torch.sign(9 + x) + 276 * (
10 + x) ** 23 * torch.sign(10 + x) - 24 * (11 + x) ** 23 * torch.sign(11 + x) + (
12 + x) ** 23 * torch.sign(12 + x)) / 51704033477769953280000
return B
def _B_24():
def B(x):
return (5200300 * (-1 / 2 + x) ** 24 * torch.sign(1 / 2 - x) - 4457400 * (-3 / 2 + x) ** 24 * torch.sign(
3 / 2 - x) + 3268760 * (-5 / 2 + x) ** 24 * torch.sign(5 / 2 - x) - 2042975 * (
-7 / 2 + x) ** 24 * torch.sign(7 / 2 - x) + 1081575 * (-9 / 2 + x) ** 24 * torch.sign(
9 / 2 - x) - 480700 * (-11 / 2 + x) ** 24 * torch.sign(11 / 2 - x) + 177100 * (
-13 / 2 + x) ** 24 * torch.sign(13 / 2 - x) - 53130 * (-15 / 2 + x) ** 24 * torch.sign(
15 / 2 - x) + 12650 * (-17 / 2 + x) ** 24 * torch.sign(17 / 2 - x) - 2300 * (
-19 / 2 + x) ** 24 * torch.sign(19 / 2 - x) + 300 * (-21 / 2 + x) ** 24 * torch.sign(
21 / 2 - x) - 25 * (-23 / 2 + x) ** 24 * torch.sign(23 / 2 - x) + (-25 / 2 + x) ** 24 * torch.sign(
25 / 2 - x) + 5200300 * (1 / 2 + x) ** 24 * torch.sign(1 / 2 + x) - 4457400 * (
3 / 2 + x) ** 24 * torch.sign(3 / 2 + x) + 3268760 * (5 / 2 + x) ** 24 * torch.sign(
5 / 2 + x) - 2042975 * (7 / 2 + x) ** 24 * torch.sign(7 / 2 + x) + 1081575 * (9 / 2 + x) ** 24 * torch.sign(
9 / 2 + x) - 480700 * (11 / 2 + x) ** 24 * torch.sign(11 / 2 + x) + 177100 * (
13 / 2 + x) ** 24 * torch.sign(13 / 2 + x) - 53130 * (15 / 2 + x) ** 24 * torch.sign(
15 / 2 + x) + 12650 * (17 / 2 + x) ** 24 * torch.sign(17 / 2 + x) - 2300 * (19 / 2 + x) ** 24 * torch.sign(
19 / 2 + x) + 300 * (21 / 2 + x) ** 24 * torch.sign(21 / 2 + x) - 25 * (23 / 2 + x) ** 24 * torch.sign(
23 / 2 + x) + (25 / 2 + x) ** 24 * torch.sign(25 / 2 + x)) / 1240896803466478878720000
return B
def _B_25():
def B(x):
return (-9657700 * (-1 + x) ** 25 * torch.sign(1 - x) + 7726160 * (-2 + x) ** 25 * torch.sign(
2 - x) - 5311735 * (-3 + x) ** 25 * torch.sign(3 - x) + 3124550 * (-4 + x) ** 25 * torch.sign(
4 - x) - 1562275 * (-5 + x) ** 25 * torch.sign(5 - x) + 657800 * (-6 + x) ** 25 * torch.sign(
6 - x) - 230230 * (-7 + x) ** 25 * torch.sign(7 - x) + 65780 * (-8 + x) ** 25 * torch.sign(
8 - x) - 14950 * (-9 + x) ** 25 * torch.sign(9 - x) + 2600 * (-10 + x) ** 25 * torch.sign(10 - x) - 325 * (
-11 + x) ** 25 * torch.sign(11 - x) + 26 * (-12 + x) ** 25 * torch.sign(12 - x) - (
-13 + x) ** 25 * torch.sign(13 - x) - 10400600 * x ** 25 * torch.sign(x) + 9657700 * (
1 + x) ** 25 * torch.sign(1 + x) - 7726160 * (2 + x) ** 25 * torch.sign(2 + x) + 5311735 * (
3 + x) ** 25 * torch.sign(3 + x) - 3124550 * (4 + x) ** 25 * torch.sign(4 + x) + 1562275 * (
5 + x) ** 25 * torch.sign(5 + x) - 657800 * (6 + x) ** 25 * torch.sign(6 + x) + 230230 * (
7 + x) ** 25 * torch.sign(7 + x) - 65780 * (8 + x) ** 25 * torch.sign(8 + x) + 14950 * (
9 + x) ** 25 * torch.sign(9 + x) - 2600 * (10 + x) ** 25 * torch.sign(10 + x) + 325 * (
11 + x) ** 25 * torch.sign(11 + x) - 26 * (12 + x) ** 25 * torch.sign(12 + x) + (
13 + x) ** 25 * torch.sign(13 + x)) / 31022420086661971968000000
return B
def _B_26():
def B(x):
return (-20058300 * (-1 / 2 + x) ** 26 * torch.sign(1 / 2 - x) + 17383860 * (-3 / 2 + x) ** 26 * torch.sign(
3 / 2 - x) - 13037895 * (-5 / 2 + x) ** 26 * torch.sign(5 / 2 - x) + 8436285 * (
-7 / 2 + x) ** 26 * torch.sign(7 / 2 - x) - 4686825 * (-9 / 2 + x) ** 26 * torch.sign(
9 / 2 - x) + 2220075 * (-11 / 2 + x) ** 26 * torch.sign(11 / 2 - x) - 888030 * (
-13 / 2 + x) ** 26 * torch.sign(13 / 2 - x) + 296010 * (-15 / 2 + x) ** 26 * torch.sign(
15 / 2 - x) - 80730 * (-17 / 2 + x) ** 26 * torch.sign(17 / 2 - x) + 17550 * (
-19 / 2 + x) ** 26 * torch.sign(19 / 2 - x) - 2925 * (-21 / 2 + x) ** 26 * torch.sign(
21 / 2 - x) + 351 * (-23 / 2 + x) ** 26 * torch.sign(23 / 2 - x) - 27 * (-25 / 2 + x) ** 26 * torch.sign(
25 / 2 - x) + (-27 / 2 + x) ** 26 * torch.sign(27 / 2 - x) - 20058300 * (1 / 2 + x) ** 26 * torch.sign(
1 / 2 + x) + 17383860 * (3 / 2 + x) ** 26 * torch.sign(3 / 2 + x) - 13037895 * (
5 / 2 + x) ** 26 * torch.sign(5 / 2 + x) + 8436285 * (7 / 2 + x) ** 26 * torch.sign(
7 / 2 + x) - 4686825 * (9 / 2 + x) ** 26 * torch.sign(9 / 2 + x) + 2220075 * (
11 / 2 + x) ** 26 * torch.sign(11 / 2 + x) - 888030 * (13 / 2 + x) ** 26 * torch.sign(
13 / 2 + x) + 296010 * (15 / 2 + x) ** 26 * torch.sign(15 / 2 + x) - 80730 * (
17 / 2 + x) ** 26 * torch.sign(17 / 2 + x) + 17550 * (19 / 2 + x) ** 26 * torch.sign(
19 / 2 + x) - 2925 * (21 / 2 + x) ** 26 * torch.sign(21 / 2 + x) + 351 * (23 / 2 + x) ** 26 * torch.sign(
23 / 2 + x) - 27 * (25 / 2 + x) ** 26 * torch.sign(25 / 2 + x) + (27 / 2 + x) ** 26 * torch.sign(
27 / 2 + x)) / 806582922253211271168000000
return B
def _B_27():
def B(x):
return (37442160 * (-1 + x) ** 27 * torch.sign(1 - x) - 30421755 * (-2 + x) ** 27 * torch.sign(
2 - x) + 21474180 * (-3 + x) ** 27 * torch.sign(3 - x) - 13123110 * (-4 + x) ** 27 * torch.sign(
4 - x) + 6906900 * (-5 + x) ** 27 * torch.sign(5 - x) - 3108105 * (-6 + x) ** 27 * torch.sign(
6 - x) + 1184040 * (-7 + x) ** 27 * torch.sign(7 - x) - 376740 * (-8 + x) ** 27 * torch.sign(
8 - x) + 98280 * (-9 + x) ** 27 * torch.sign(9 - x) - 20475 * (-10 + x) ** 27 * torch.sign(
10 - x) + 3276 * (-11 + x) ** 27 * torch.sign(11 - x) - 378 * (-12 + x) ** 27 * torch.sign(12 - x) + 28 * (
-13 + x) ** 27 * torch.sign(13 - x) - (-14 + x) ** 27 * torch.sign(
14 - x) + 40116600 * x ** 27 * torch.sign(x) - 37442160 * (1 + x) ** 27 * torch.sign(1 + x) + 30421755 * (
2 + x) ** 27 * torch.sign(2 + x) - 21474180 * (3 + x) ** 27 * torch.sign(
3 + x) + 13123110 * (4 + x) ** 27 * torch.sign(4 + x) - 6906900 * (5 + x) ** 27 * torch.sign(
5 + x) + 3108105 * (6 + x) ** 27 * torch.sign(6 + x) - 1184040 * (7 + x) ** 27 * torch.sign(
7 + x) + 376740 * (8 + x) ** 27 * torch.sign(8 + x) - 98280 * (9 + x) ** 27 * torch.sign(9 + x) + 20475 * (
10 + x) ** 27 * torch.sign(10 + x) - 3276 * (11 + x) ** 27 * torch.sign(11 + x) + 378 * (
12 + x) ** 27 * torch.sign(12 + x) - 28 * (13 + x) ** 27 * torch.sign(13 + x) + (
14 + x) ** 27 * torch.sign(14 + x)) / 21777738900836704321536000000
return B
def _B_28():
def B(x):
return (77558760 * (-1 / 2 + x) ** 28 * torch.sign(1 / 2 - x) - 67863915 * (-3 / 2 + x) ** 28 * torch.sign(
3 / 2 - x) + 51895935 * (-5 / 2 + x) ** 28 * torch.sign(5 / 2 - x) - 34597290 * (
-7 / 2 + x) ** 28 * torch.sign(7 / 2 - x) + 20030010 * (-9 / 2 + x) ** 28 * torch.sign(
9 / 2 - x) - 10015005 * (-11 / 2 + x) ** 28 * torch.sign(11 / 2 - x) + 4292145 * (
-13 / 2 + x) ** 28 * torch.sign(13 / 2 - x) - 1560780 * (-15 / 2 + x) ** 28 * torch.sign(
15 / 2 - x) + 475020 * (-17 / 2 + x) ** 28 * torch.sign(17 / 2 - x) - 118755 * (
-19 / 2 + x) ** 28 * torch.sign(19 / 2 - x) + 23751 * (-21 / 2 + x) ** 28 * torch.sign(
21 / 2 - x) - 3654 * (-23 / 2 + x) ** 28 * torch.sign(23 / 2 - x) + 406 * (-25 / 2 + x) ** 28 * torch.sign(
25 / 2 - x) - 29 * (-27 / 2 + x) ** 28 * torch.sign(27 / 2 - x) + (-29 / 2 + x) ** 28 * torch.sign(
29 / 2 - x) + 77558760 * (1 / 2 + x) ** 28 * torch.sign(1 / 2 + x) - 67863915 * (
3 / 2 + x) ** 28 * torch.sign(3 / 2 + x) + 51895935 * (5 / 2 + x) ** 28 * torch.sign(
5 / 2 + x) - 34597290 * (7 / 2 + x) ** 28 * torch.sign(7 / 2 + x) + 20030010 * (
9 / 2 + x) ** 28 * torch.sign(9 / 2 + x) - 10015005 * (11 / 2 + x) ** 28 * torch.sign(
11 / 2 + x) + 4292145 * (13 / 2 + x) ** 28 * torch.sign(13 / 2 + x) - 1560780 * (
15 / 2 + x) ** 28 * torch.sign(15 / 2 + x) + 475020 * (17 / 2 + x) ** 28 * torch.sign(
17 / 2 + x) - 118755 * (19 / 2 + x) ** 28 * torch.sign(19 / 2 + x) + 23751 * (
21 / 2 + x) ** 28 * torch.sign(21 / 2 + x) - 3654 * (23 / 2 + x) ** 28 * torch.sign(
23 / 2 + x) + 406 * (25 / 2 + x) ** 28 * torch.sign(25 / 2 + x) - 29 * (27 / 2 + x) ** 28 * torch.sign(
27 / 2 + x) + (29 / 2 + x) ** 28 * torch.sign(29 / 2 + x)) / 609776689223427721003008000000
return B
def _B_29():
def B(x):
return (-145422675 * (-1 + x) ** 29 * torch.sign(1 - x) + 119759850 * (-2 + x) ** 29 * torch.sign(
2 - x) - 86493225 * (-3 + x) ** 29 * torch.sign(3 - x) + 54627300 * (-4 + x) ** 29 * torch.sign(
4 - x) - 30045015 * (-5 + x) ** 29 * torch.sign(5 - x) + 14307150 * (-6 + x) ** 29 * torch.sign(
6 - x) - 5852925 * (-7 + x) ** 29 * torch.sign(7 - x) + 2035800 * (-8 + x) ** 29 * torch.sign(
8 - x) - 593775 * (-9 + x) ** 29 * torch.sign(9 - x) + 142506 * (-10 + x) ** 29 * torch.sign(
10 - x) - 27405 * (-11 + x) ** 29 * torch.sign(11 - x) + 4060 * (-12 + x) ** 29 * torch.sign(
12 - x) - 435 * (-13 + x) ** 29 * torch.sign(13 - x) + 30 * (-14 + x) ** 29 * torch.sign(14 - x) - (
-15 + x) ** 29 * torch.sign(15 - x) - 155117520 * x ** 29 * torch.sign(x) + 145422675 * (
1 + x) ** 29 * torch.sign(1 + x) - 119759850 * (2 + x) ** 29 * torch.sign(
2 + x) + 86493225 * (3 + x) ** 29 * torch.sign(3 + x) - 54627300 * (4 + x) ** 29 * torch.sign(
4 + x) + 30045015 * (5 + x) ** 29 * torch.sign(5 + x) - 14307150 * (6 + x) ** 29 * torch.sign(
6 + x) + 5852925 * (7 + x) ** 29 * torch.sign(7 + x) - 2035800 * (8 + x) ** 29 * torch.sign(
8 + x) + 593775 * (9 + x) ** 29 * torch.sign(9 + x) - 142506 * (10 + x) ** 29 * torch.sign(
10 + x) + 27405 * (11 + x) ** 29 * torch.sign(11 + x) - 4060 * (12 + x) ** 29 * torch.sign(12 + x) + 435 * (
13 + x) ** 29 * torch.sign(13 + x) - 30 * (14 + x) ** 29 * torch.sign(14 + x) + (
15 + x) ** 29 * torch.sign(15 + x)) / 17683523987479403909087232000000
return B
def _B_30():
def B(x):
return (-300540195 * (-1 / 2 + x) ** 30 * torch.sign(1 / 2 - x) + 265182525 * (-3 / 2 + x) ** 30 * torch.sign(
3 / 2 - x) - 206253075 * (-5 / 2 + x) ** 30 * torch.sign(5 / 2 - x) + 141120525 * (
-7 / 2 + x) ** 30 * torch.sign(7 / 2 - x) - 84672315 * (-9 / 2 + x) ** 30 * torch.sign(
9 / 2 - x) + 44352165 * (-11 / 2 + x) ** 30 * torch.sign(11 / 2 - x) - 20160075 * (
-13 / 2 + x) ** 30 * torch.sign(13 / 2 - x) + 7888725 * (-15 / 2 + x) ** 30 * torch.sign(
15 / 2 - x) - 2629575 * (-17 / 2 + x) ** 30 * torch.sign(17 / 2 - x) + 736281 * (
-19 / 2 + x) ** 30 * torch.sign(19 / 2 - x) - 169911 * (-21 / 2 + x) ** 30 * torch.sign(
21 / 2 - x) + 31465 * (-23 / 2 + x) ** 30 * torch.sign(23 / 2 - x) - 4495 * (
-25 / 2 + x) ** 30 * torch.sign(25 / 2 - x) + 465 * (-27 / 2 + x) ** 30 * torch.sign(
27 / 2 - x) - 31 * (-29 / 2 + x) ** 30 * torch.sign(29 / 2 - x) + (-31 / 2 + x) ** 30 * torch.sign(
31 / 2 - x) - 300540195 * (1 / 2 + x) ** 30 * torch.sign(1 / 2 + x) + 265182525 * (
3 / 2 + x) ** 30 * torch.sign(3 / 2 + x) - 206253075 * (5 / 2 + x) ** 30 * torch.sign(
5 / 2 + x) + 141120525 * (7 / 2 + x) ** 30 * torch.sign(7 / 2 + x) - 84672315 * (
9 / 2 + x) ** 30 * torch.sign(9 / 2 + x) + 44352165 * (11 / 2 + x) ** 30 * torch.sign(
11 / 2 + x) - 20160075 * (13 / 2 + x) ** 30 * torch.sign(13 / 2 + x) + 7888725 * (
15 / 2 + x) ** 30 * torch.sign(15 / 2 + x) - 2629575 * (17 / 2 + x) ** 30 * torch.sign(
17 / 2 + x) + 736281 * (19 / 2 + x) ** 30 * torch.sign(19 / 2 + x) - 169911 * (
21 / 2 + x) ** 30 * torch.sign(21 / 2 + x) + 31465 * (23 / 2 + x) ** 30 * torch.sign(
23 / 2 + x) - 4495 * (25 / 2 + x) ** 30 * torch.sign(25 / 2 + x) + 465 * (27 / 2 + x) ** 30 * torch.sign(
27 / 2 + x) - 31 * (29 / 2 + x) ** 30 * torch.sign(29 / 2 + x) + (31 / 2 + x) ** 30 * torch.sign(
31 / 2 + x)) / 530505719624382117272616960000000
return B
def _B_31():
def B(x):
return (565722720 * (-1 + x) ** 31 * torch.sign(1 - x) - 471435600 * (-2 + x) ** 31 * torch.sign(
2 - x) + 347373600 * (-3 + x) ** 31 * torch.sign(3 - x) - 225792840 * (-4 + x) ** 31 * torch.sign(
4 - x) + 129024480 * (-5 + x) ** 31 * torch.sign(5 - x) - 64512240 * (-6 + x) ** 31 * torch.sign(
6 - x) + 28048800 * (-7 + x) ** 31 * torch.sign(7 - x) - 10518300 * (-8 + x) ** 31 * torch.sign(
8 - x) + 3365856 * (-9 + x) ** 31 * torch.sign(9 - x) - 906192 * (-10 + x) ** 31 * torch.sign(
10 - x) + 201376 * (-11 + x) ** 31 * torch.sign(11 - x) - 35960 * (-12 + x) ** 31 * torch.sign(
12 - x) + 4960 * (-13 + x) ** 31 * torch.sign(13 - x) - 496 * (-14 + x) ** 31 * torch.sign(14 - x) + 32 * (
-15 + x) ** 31 * torch.sign(15 - x) - (-16 + x) ** 31 * torch.sign(
16 - x) + 601080390 * x ** 31 * torch.sign(x) - 565722720 * (1 + x) ** 31 * torch.sign(
1 + x) + 471435600 * (2 + x) ** 31 * torch.sign(2 + x) - 347373600 * (3 + x) ** 31 * torch.sign(
3 + x) + 225792840 * (4 + x) ** 31 * torch.sign(4 + x) - 129024480 * (5 + x) ** 31 * torch.sign(
5 + x) + 64512240 * (6 + x) ** 31 * torch.sign(6 + x) - 28048800 * (7 + x) ** 31 * torch.sign(
7 + x) + 10518300 * (8 + x) ** 31 * torch.sign(8 + x) - 3365856 * (9 + x) ** 31 * torch.sign(
9 + x) + 906192 * (10 + x) ** 31 * torch.sign(10 + x) - 201376 * (11 + x) ** 31 * torch.sign(
11 + x) + 35960 * (12 + x) ** 31 * torch.sign(12 + x) - 4960 * (13 + x) ** 31 * torch.sign(13 + x) + 496 * (
14 + x) ** 31 * torch.sign(14 + x) - 32 * (15 + x) ** 31 * torch.sign(15 + x) + (
16 + x) ** 31 * torch.sign(16 + x)) / 16445677308355845635451125760000000
return B
def _B_32():
def B(x):
return (1166803110 * (-1 / 2 + x) ** 32 * torch.sign(1 / 2 - x) - 1037158320 * (-3 / 2 + x) ** 32 * torch.sign(
3 / 2 - x) + 818809200 * (-5 / 2 + x) ** 32 * torch.sign(5 / 2 - x) - 573166440 * (
-7 / 2 + x) ** 32 * torch.sign(7 / 2 - x) + 354817320 * (-9 / 2 + x) ** 32 * torch.sign(
9 / 2 - x) - 193536720 * (-11 / 2 + x) ** 32 * torch.sign(11 / 2 - x) + 92561040 * (
-13 / 2 + x) ** 32 * torch.sign(13 / 2 - x) - 38567100 * (-15 / 2 + x) ** 32 * torch.sign(
15 / 2 - x) + 13884156 * (-17 / 2 + x) ** 32 * torch.sign(17 / 2 - x) - 4272048 * (
-19 / 2 + x) ** 32 * torch.sign(19 / 2 - x) + 1107568 * (-21 / 2 + x) ** 32 * torch.sign(
21 / 2 - x) - 237336 * (-23 / 2 + x) ** 32 * torch.sign(23 / 2 - x) + 40920 * (
-25 / 2 + x) ** 32 * torch.sign(25 / 2 - x) - 5456 * (-27 / 2 + x) ** 32 * torch.sign(
27 / 2 - x) + 528 * (-29 / 2 + x) ** 32 * torch.sign(29 / 2 - x) - 33 * (-31 / 2 + x) ** 32 * torch.sign(
31 / 2 - x) + (-33 / 2 + x) ** 32 * torch.sign(33 / 2 - x) + 1166803110 * (1 / 2 + x) ** 32 * torch.sign(
1 / 2 + x) - 1037158320 * (3 / 2 + x) ** 32 * torch.sign(3 / 2 + x) + 818809200 * (
5 / 2 + x) ** 32 * torch.sign(5 / 2 + x) - 573166440 * (7 / 2 + x) ** 32 * torch.sign(
7 / 2 + x) + 354817320 * (9 / 2 + x) ** 32 * torch.sign(9 / 2 + x) - 193536720 * (
11 / 2 + x) ** 32 * torch.sign(11 / 2 + x) + 92561040 * (13 / 2 + x) ** 32 * torch.sign(
13 / 2 + x) - 38567100 * (15 / 2 + x) ** 32 * torch.sign(15 / 2 + x) + 13884156 * (
17 / 2 + x) ** 32 * torch.sign(17 / 2 + x) - 4272048 * (19 / 2 + x) ** 32 * torch.sign(
19 / 2 + x) + 1107568 * (21 / 2 + x) ** 32 * torch.sign(21 / 2 + x) - 237336 * (
23 / 2 + x) ** 32 * torch.sign(23 / 2 + x) + 40920 * (25 / 2 + x) ** 32 * torch.sign(
25 / 2 + x) - 5456 * (27 / 2 + x) ** 32 * torch.sign(27 / 2 + x) + 528 * (29 / 2 + x) ** 32 * torch.sign(
29 / 2 + x) - 33 * (31 / 2 + x) ** 32 * torch.sign(31 / 2 + x) + (33 / 2 + x) ** 32 * torch.sign(
33 / 2 + x)) / 526261673867387060334436024320000000
return B
def _B_33():
def B(x):
return (-2203961430 * (-1 + x) ** 33 * torch.sign(1 - x) + 1855967520 * (-2 + x) ** 33 * torch.sign(
2 - x) - 1391975640 * (-3 + x) ** 33 * torch.sign(3 - x) + 927983760 * (-4 + x) ** 33 * torch.sign(
4 - x) - 548354040 * (-5 + x) ** 33 * torch.sign(5 - x) + 286097760 * (-6 + x) ** 33 * torch.sign(
6 - x) - 131128140 * (-7 + x) ** 33 * torch.sign(7 - x) + 52451256 * (-8 + x) ** 33 * torch.sign(
8 - x) - 18156204 * (-9 + x) ** 33 * torch.sign(9 - x) + 5379616 * (-10 + x) ** 33 * torch.sign(
10 - x) - 1344904 * (-11 + x) ** 33 * torch.sign(11 - x) + 278256 * (-12 + x) ** 33 * torch.sign(
12 - x) - 46376 * (-13 + x) ** 33 * torch.sign(13 - x) + 5984 * (-14 + x) ** 33 * torch.sign(
14 - x) - 561 * (-15 + x) ** 33 * torch.sign(15 - x) + 34 * (-16 + x) ** 33 * torch.sign(16 - x) - (
-17 + x) ** 33 * torch.sign(17 - x) - 2333606220 * x ** 33 * torch.sign(x) + 2203961430 * (
1 + x) ** 33 * torch.sign(1 + x) - 1855967520 * (2 + x) ** 33 * torch.sign(
2 + x) + 1391975640 * (3 + x) ** 33 * torch.sign(3 + x) - 927983760 * (4 + x) ** 33 * torch.sign(
4 + x) + 548354040 * (5 + x) ** 33 * torch.sign(5 + x) - 286097760 * (6 + x) ** 33 * torch.sign(
6 + x) + 131128140 * (7 + x) ** 33 * torch.sign(7 + x) - 52451256 * (8 + x) ** 33 * torch.sign(
8 + x) + 18156204 * (9 + x) ** 33 * torch.sign(9 + x) - 5379616 * (10 + x) ** 33 * torch.sign(
10 + x) + 1344904 * (11 + x) ** 33 * torch.sign(11 + x) - 278256 * (12 + x) ** 33 * torch.sign(
12 + x) + 46376 * (13 + x) ** 33 * torch.sign(13 + x) - 5984 * (14 + x) ** 33 * torch.sign(14 + x) + 561 * (
15 + x) ** 33 * torch.sign(15 + x) - 34 * (16 + x) ** 33 * torch.sign(16 + x) + (
17 + x) ** 33 * torch.sign(17 + x)) / 17366635237623772991036388802560000000
return B
def _B_34():
def B(x):
return (-4537567650 * (-1 / 2 + x) ** 34 * torch.sign(1 / 2 - x) + 4059928950 * (-3 / 2 + x) ** 34 * torch.sign(
3 / 2 - x) - 3247943160 * (-5 / 2 + x) ** 34 * torch.sign(5 / 2 - x) + 2319959400 * (
-7 / 2 + x) ** 34 * torch.sign(7 / 2 - x) - 1476337800 * (-9 / 2 + x) ** 34 * torch.sign(
9 / 2 - x) + 834451800 * (-11 / 2 + x) ** 34 * torch.sign(11 / 2 - x) - 417225900 * (
-13 / 2 + x) ** 34 * torch.sign(13 / 2 - x) + 183579396 * (-15 / 2 + x) ** 34 * torch.sign(
15 / 2 - x) - 70607460 * (-17 / 2 + x) ** 34 * torch.sign(17 / 2 - x) + 23535820 * (
-19 / 2 + x) ** 34 * torch.sign(19 / 2 - x) - 6724520 * (-21 / 2 + x) ** 34 * torch.sign(
21 / 2 - x) + 1623160 * (-23 / 2 + x) ** 34 * torch.sign(23 / 2 - x) - 324632 * (
-25 / 2 + x) ** 34 * torch.sign(25 / 2 - x) + 52360 * (-27 / 2 + x) ** 34 * torch.sign(
27 / 2 - x) - 6545 * (-29 / 2 + x) ** 34 * torch.sign(29 / 2 - x) + 595 * (-31 / 2 + x) ** 34 * torch.sign(
31 / 2 - x) - 35 * (-33 / 2 + x) ** 34 * torch.sign(33 / 2 - x) + (-35 / 2 + x) ** 34 * torch.sign(
35 / 2 - x) - 4537567650 * (1 / 2 + x) ** 34 * torch.sign(1 / 2 + x) + 4059928950 * (
3 / 2 + x) ** 34 * torch.sign(3 / 2 + x) - 3247943160 * (5 / 2 + x) ** 34 * torch.sign(
5 / 2 + x) + 2319959400 * (7 / 2 + x) ** 34 * torch.sign(7 / 2 + x) - 1476337800 * (
9 / 2 + x) ** 34 * torch.sign(9 / 2 + x) + 834451800 * (11 / 2 + x) ** 34 * torch.sign(
11 / 2 + x) - 417225900 * (13 / 2 + x) ** 34 * torch.sign(13 / 2 + x) + 183579396 * (
15 / 2 + x) ** 34 * torch.sign(15 / 2 + x) - 70607460 * (17 / 2 + x) ** 34 * torch.sign(
17 / 2 + x) + 23535820 * (19 / 2 + x) ** 34 * torch.sign(19 / 2 + x) - 6724520 * (
21 / 2 + x) ** 34 * torch.sign(21 / 2 + x) + 1623160 * (23 / 2 + x) ** 34 * torch.sign(
23 / 2 + x) - 324632 * (25 / 2 + x) ** 34 * torch.sign(25 / 2 + x) + 52360 * (
27 / 2 + x) ** 34 * torch.sign(27 / 2 + x) - 6545 * (29 / 2 + x) ** 34 * torch.sign(
29 / 2 + x) + 595 * (31 / 2 + x) ** 34 * torch.sign(31 / 2 + x) - 35 * (33 / 2 + x) ** 34 * torch.sign(
33 / 2 + x) + (35 / 2 + x) ** 34 * torch.sign(35 / 2 + x)) / 590465598079208281695237219287040000000
return B
def _B_35():
def B(x):
return (8597496600 * (-1 + x) ** 35 * torch.sign(1 - x) - 7307872110 * (-2 + x) ** 35 * torch.sign(
2 - x) + 5567902560 * (-3 + x) ** 35 * torch.sign(3 - x) - 3796297200 * (-4 + x) ** 35 * torch.sign(
4 - x) + 2310789600 * (-5 + x) ** 35 * torch.sign(5 - x) - 1251677700 * (-6 + x) ** 35 * torch.sign(
6 - x) + 600805296 * (-7 + x) ** 35 * torch.sign(7 - x) - 254186856 * (-8 + x) ** 35 * torch.sign(
8 - x) + 94143280 * (-9 + x) ** 35 * torch.sign(9 - x) - 30260340 * (-10 + x) ** 35 * torch.sign(
10 - x) + 8347680 * (-11 + x) ** 35 * torch.sign(11 - x) - 1947792 * (-12 + x) ** 35 * torch.sign(
12 - x) + 376992 * (-13 + x) ** 35 * torch.sign(13 - x) - 58905 * (-14 + x) ** 35 * torch.sign(
14 - x) + 7140 * (-15 + x) ** 35 * torch.sign(15 - x) - 630 * (-16 + x) ** 35 * torch.sign(16 - x) + 36 * (
-17 + x) ** 35 * torch.sign(17 - x) - (-18 + x) ** 35 * torch.sign(
18 - x) + 9075135300 * x ** 35 * torch.sign(x) - 8597496600 * (1 + x) ** 35 * torch.sign(
1 + x) + 7307872110 * (2 + x) ** 35 * torch.sign(2 + x) - 5567902560 * (3 + x) ** 35 * torch.sign(
3 + x) + 3796297200 * (4 + x) ** 35 * torch.sign(4 + x) - 2310789600 * (5 + x) ** 35 * torch.sign(
5 + x) + 1251677700 * (6 + x) ** 35 * torch.sign(6 + x) - 600805296 * (7 + x) ** 35 * torch.sign(
7 + x) + 254186856 * (8 + x) ** 35 * torch.sign(8 + x) - 94143280 * (9 + x) ** 35 * torch.sign(
9 + x) + 30260340 * (10 + x) ** 35 * torch.sign(10 + x) - 8347680 * (11 + x) ** 35 * torch.sign(
11 + x) + 1947792 * (12 + x) ** 35 * torch.sign(12 + x) - 376992 * (13 + x) ** 35 * torch.sign(
13 + x) + 58905 * (14 + x) ** 35 * torch.sign(14 + x) - 7140 * (15 + x) ** 35 * torch.sign(15 + x) + 630 * (
16 + x) ** 35 * torch.sign(16 + x) - 36 * (17 + x) ** 35 * torch.sign(17 + x) + (
18 + x) ** 35 * torch.sign(18 + x)) / 20666295932772289859333302675046400000000
return B
def _B_36():
def B(x):
return (17672631900 * (-1 / 2 + x) ** 36 * torch.sign(1 / 2 - x) - 15905368710 * (
-3 / 2 + x) ** 36 * torch.sign(3 / 2 - x) + 12875774670 * (-5 / 2 + x) ** 36 * torch.sign(
5 / 2 - x) - 9364199760 * (-7 / 2 + x) ** 36 * torch.sign(7 / 2 - x) + 6107086800 * (
-9 / 2 + x) ** 36 * torch.sign(9 / 2 - x) - 3562467300 * (-11 / 2 + x) ** 36 * torch.sign(
11 / 2 - x) + 1852482996 * (-13 / 2 + x) ** 36 * torch.sign(13 / 2 - x) - 854992152 * (
-15 / 2 + x) ** 36 * torch.sign(15 / 2 - x) + 348330136 * (-17 / 2 + x) ** 36 * torch.sign(
17 / 2 - x) - 124403620 * (-19 / 2 + x) ** 36 * torch.sign(19 / 2 - x) + 38608020 * (
-21 / 2 + x) ** 36 * torch.sign(21 / 2 - x) - 10295472 * (-23 / 2 + x) ** 36 * torch.sign(
23 / 2 - x) + 2324784 * (-25 / 2 + x) ** 36 * torch.sign(25 / 2 - x) - 435897 * (
-27 / 2 + x) ** 36 * torch.sign(27 / 2 - x) + 66045 * (-29 / 2 + x) ** 36 * torch.sign(
29 / 2 - x) - 7770 * (-31 / 2 + x) ** 36 * torch.sign(31 / 2 - x) + 666 * (-33 / 2 + x) ** 36 * torch.sign(
33 / 2 - x) - 37 * (-35 / 2 + x) ** 36 * torch.sign(35 / 2 - x) + (-37 / 2 + x) ** 36 * torch.sign(
37 / 2 - x) + 17672631900 * (1 / 2 + x) ** 36 * torch.sign(1 / 2 + x) - 15905368710 * (
3 / 2 + x) ** 36 * torch.sign(3 / 2 + x) + 12875774670 * (5 / 2 + x) ** 36 * torch.sign(
5 / 2 + x) - 9364199760 * (7 / 2 + x) ** 36 * torch.sign(7 / 2 + x) + 6107086800 * (
9 / 2 + x) ** 36 * torch.sign(9 / 2 + x) - 3562467300 * (11 / 2 + x) ** 36 * torch.sign(
11 / 2 + x) + 1852482996 * (13 / 2 + x) ** 36 * torch.sign(13 / 2 + x) - 854992152 * (
15 / 2 + x) ** 36 * torch.sign(15 / 2 + x) + 348330136 * (17 / 2 + x) ** 36 * torch.sign(
17 / 2 + x) - 124403620 * (19 / 2 + x) ** 36 * torch.sign(19 / 2 + x) + 38608020 * (
21 / 2 + x) ** 36 * torch.sign(21 / 2 + x) - 10295472 * (23 / 2 + x) ** 36 * torch.sign(
23 / 2 + x) + 2324784 * (25 / 2 + x) ** 36 * torch.sign(25 / 2 + x) - 435897 * (
27 / 2 + x) ** 36 * torch.sign(27 / 2 + x) + 66045 * (29 / 2 + x) ** 36 * torch.sign(
29 / 2 + x) - 7770 * (31 / 2 + x) ** 36 * torch.sign(31 / 2 + x) + 666 * (33 / 2 + x) ** 36 * torch.sign(
33 / 2 + x) - 37 * (35 / 2 + x) ** 36 * torch.sign(35 / 2 + x) + (37 / 2 + x) ** 36 * torch.sign(
37 / 2 + x)) / 743986653579802434935998896301670400000000
return B
def _B_37():
def B(x):
return (-33578000610 * (-1 + x) ** 37 * torch.sign(1 - x) + 28781143380 * (-2 + x) ** 37 * torch.sign(
2 - x) - 22239974430 * (-3 + x) ** 37 * torch.sign(3 - x) + 15471286560 * (-4 + x) ** 37 * torch.sign(
4 - x) - 9669554100 * (-5 + x) ** 37 * torch.sign(5 - x) + 5414950296 * (-6 + x) ** 37 * torch.sign(
6 - x) - 2707475148 * (-7 + x) ** 37 * torch.sign(7 - x) + 1203322288 * (-8 + x) ** 37 * torch.sign(
8 - x) - 472733756 * (-9 + x) ** 37 * torch.sign(9 - x) + 163011640 * (-10 + x) ** 37 * torch.sign(
10 - x) - 48903492 * (-11 + x) ** 37 * torch.sign(11 - x) + 12620256 * (-12 + x) ** 37 * torch.sign(
12 - x) - 2760681 * (-13 + x) ** 37 * torch.sign(13 - x) + 501942 * (-14 + x) ** 37 * torch.sign(
14 - x) - 73815 * (-15 + x) ** 37 * torch.sign(15 - x) + 8436 * (-16 + x) ** 37 * torch.sign(
16 - x) - 703 * (-17 + x) ** 37 * torch.sign(17 - x) + 38 * (-18 + x) ** 37 * torch.sign(18 - x) - (
-19 + x) ** 37 * torch.sign(19 - x) - 35345263800 * x ** 37 * torch.sign(
x) + 33578000610 * (1 + x) ** 37 * torch.sign(1 + x) - 28781143380 * (2 + x) ** 37 * torch.sign(
2 + x) + 22239974430 * (3 + x) ** 37 * torch.sign(3 + x) - 15471286560 * (4 + x) ** 37 * torch.sign(
4 + x) + 9669554100 * (5 + x) ** 37 * torch.sign(5 + x) - 5414950296 * (6 + x) ** 37 * torch.sign(
6 + x) + 2707475148 * (7 + x) ** 37 * torch.sign(7 + x) - 1203322288 * (8 + x) ** 37 * torch.sign(
8 + x) + 472733756 * (9 + x) ** 37 * torch.sign(9 + x) - 163011640 * (10 + x) ** 37 * torch.sign(
10 + x) + 48903492 * (11 + x) ** 37 * torch.sign(11 + x) - 12620256 * (12 + x) ** 37 * torch.sign(
12 + x) + 2760681 * (13 + x) ** 37 * torch.sign(13 + x) - 501942 * (14 + x) ** 37 * torch.sign(
14 + x) + 73815 * (15 + x) ** 37 * torch.sign(15 + x) - 8436 * (16 + x) ** 37 * torch.sign(16 + x) + 703 * (
17 + x) ** 37 * torch.sign(17 + x) - 38 * (18 + x) ** 37 * torch.sign(18 + x) + (
19 + x) ** 37 * torch.sign(19 + x)) / 27527506182452690092631959163161804800000000
return B
def _B_38():
def B(x):
return (-68923264410 * (-1 / 2 + x) ** 38 * torch.sign(1 / 2 - x) + 62359143990 * (
-3 / 2 + x) ** 38 * torch.sign(3 / 2 - x) - 51021117810 * (-5 / 2 + x) ** 38 * torch.sign(
5 / 2 - x) + 37711260990 * (-7 / 2 + x) ** 38 * torch.sign(7 / 2 - x) - 25140840660 * (
-9 / 2 + x) ** 38 * torch.sign(9 / 2 - x) + 15084504396 * (-11 / 2 + x) ** 38 * torch.sign(
11 / 2 - x) - 8122425444 * (-13 / 2 + x) ** 38 * torch.sign(13 / 2 - x) + 3910797436 * (
-15 / 2 + x) ** 38 * torch.sign(15 / 2 - x) - 1676056044 * (-17 / 2 + x) ** 38 * torch.sign(
17 / 2 - x) + 635745396 * (-19 / 2 + x) ** 38 * torch.sign(19 / 2 - x) - 211915132 * (
-21 / 2 + x) ** 38 * torch.sign(21 / 2 - x) + 61523748 * (-23 / 2 + x) ** 38 * torch.sign(
23 / 2 - x) - 15380937 * (-25 / 2 + x) ** 38 * torch.sign(25 / 2 - x) + 3262623 * (
-27 / 2 + x) ** 38 * torch.sign(27 / 2 - x) - 575757 * (-29 / 2 + x) ** 38 * torch.sign(
29 / 2 - x) + 82251 * (-31 / 2 + x) ** 38 * torch.sign(31 / 2 - x) - 9139 * (
-33 / 2 + x) ** 38 * torch.sign(33 / 2 - x) + 741 * (-35 / 2 + x) ** 38 * torch.sign(
35 / 2 - x) - 39 * (-37 / 2 + x) ** 38 * torch.sign(37 / 2 - x) + (-39 / 2 + x) ** 38 * torch.sign(
39 / 2 - x) - 68923264410 * (1 / 2 + x) ** 38 * torch.sign(1 / 2 + x) + 62359143990 * (
3 / 2 + x) ** 38 * torch.sign(3 / 2 + x) - 51021117810 * (5 / 2 + x) ** 38 * torch.sign(
5 / 2 + x) + 37711260990 * (7 / 2 + x) ** 38 * torch.sign(7 / 2 + x) - 25140840660 * (
9 / 2 + x) ** 38 * torch.sign(9 / 2 + x) + 15084504396 * (11 / 2 + x) ** 38 * torch.sign(
11 / 2 + x) - 8122425444 * (13 / 2 + x) ** 38 * torch.sign(13 / 2 + x) + 3910797436 * (
15 / 2 + x) ** 38 * torch.sign(15 / 2 + x) - 1676056044 * (17 / 2 + x) ** 38 * torch.sign(
17 / 2 + x) + 635745396 * (19 / 2 + x) ** 38 * torch.sign(19 / 2 + x) - 211915132 * (
21 / 2 + x) ** 38 * torch.sign(21 / 2 + x) + 61523748 * (23 / 2 + x) ** 38 * torch.sign(
23 / 2 + x) - 15380937 * (25 / 2 + x) ** 38 * torch.sign(25 / 2 + x) + 3262623 * (
27 / 2 + x) ** 38 * torch.sign(27 / 2 + x) - 575757 * (29 / 2 + x) ** 38 * torch.sign(
29 / 2 + x) + 82251 * (31 / 2 + x) ** 38 * torch.sign(31 / 2 + x) - 9139 * (33 / 2 + x) ** 38 * torch.sign(
33 / 2 + x) + 741 * (35 / 2 + x) ** 38 * torch.sign(35 / 2 + x) - 39 * (37 / 2 + x) ** 38 * torch.sign(
37 / 2 + x) + (39 / 2 + x) ** 38 * torch.sign(39 / 2 + x)) / 1046045234933202223520014448200148582400000000
return B
def _B_39():
def B(x):
return (131282408400 * (-1 + x) ** 39 * torch.sign(1 - x) - 113380261800 * (-2 + x) ** 39 * torch.sign(
2 - x) + 88732378800 * (-3 + x) ** 39 * torch.sign(3 - x) - 62852101650 * (-4 + x) ** 39 * torch.sign(
4 - x) + 40225345056 * (-5 + x) ** 39 * torch.sign(5 - x) - 23206929840 * (-6 + x) ** 39 * torch.sign(
6 - x) + 12033222880 * (-7 + x) ** 39 * torch.sign(7 - x) - 5586853480 * (-8 + x) ** 39 * torch.sign(
8 - x) + 2311801440 * (-9 + x) ** 39 * torch.sign(9 - x) - 847660528 * (-10 + x) ** 39 * torch.sign(
10 - x) + 273438880 * (-11 + x) ** 39 * torch.sign(11 - x) - 76904685 * (-12 + x) ** 39 * torch.sign(
12 - x) + 18643560 * (-13 + x) ** 39 * torch.sign(13 - x) - 3838380 * (-14 + x) ** 39 * torch.sign(
14 - x) + 658008 * (-15 + x) ** 39 * torch.sign(15 - x) - 91390 * (-16 + x) ** 39 * torch.sign(
16 - x) + 9880 * (-17 + x) ** 39 * torch.sign(17 - x) - 780 * (-18 + x) ** 39 * torch.sign(18 - x) + 40 * (
-19 + x) ** 39 * torch.sign(19 - x) - (-20 + x) ** 39 * torch.sign(
20 - x) + 137846528820 * x ** 39 * torch.sign(x) - 131282408400 * (1 + x) ** 39 * torch.sign(
1 + x) + 113380261800 * (2 + x) ** 39 * torch.sign(2 + x) - 88732378800 * (3 + x) ** 39 * torch.sign(
3 + x) + 62852101650 * (4 + x) ** 39 * torch.sign(4 + x) - 40225345056 * (5 + x) ** 39 * torch.sign(
5 + x) + 23206929840 * (6 + x) ** 39 * torch.sign(6 + x) - 12033222880 * (7 + x) ** 39 * torch.sign(
7 + x) + 5586853480 * (8 + x) ** 39 * torch.sign(8 + x) - 2311801440 * (9 + x) ** 39 * torch.sign(
9 + x) + 847660528 * (10 + x) ** 39 * torch.sign(10 + x) - 273438880 * (11 + x) ** 39 * torch.sign(
11 + x) + 76904685 * (12 + x) ** 39 * torch.sign(12 + x) - 18643560 * (13 + x) ** 39 * torch.sign(
13 + x) + 3838380 * (14 + x) ** 39 * torch.sign(14 + x) - 658008 * (15 + x) ** 39 * torch.sign(
15 + x) + 91390 * (16 + x) ** 39 * torch.sign(16 + x) - 9880 * (17 + x) ** 39 * torch.sign(17 + x) + 780 * (
18 + x) ** 39 * torch.sign(18 + x) - 40 * (19 + x) ** 39 * torch.sign(19 + x) + (
20 + x) ** 39 * torch.sign(20 + x)) / 40795764162394886717280563479805794713600000000
return B
def _B_40():
def B(x):
return (269128937220 * (-1 / 2 + x) ** 40 * torch.sign(1 / 2 - x) - 244662670200 * (
-3 / 2 + x) ** 40 * torch.sign(3 / 2 - x) + 202112640600 * (-5 / 2 + x) ** 40 * torch.sign(
5 / 2 - x) - 151584480450 * (-7 / 2 + x) ** 40 * torch.sign(7 / 2 - x) + 103077446706 * (
-9 / 2 + x) ** 40 * torch.sign(9 / 2 - x) - 63432274896 * (-11 / 2 + x) ** 40 * torch.sign(
11 / 2 - x) + 35240152720 * (-13 / 2 + x) ** 40 * torch.sign(13 / 2 - x) - 17620076360 * (
-15 / 2 + x) ** 40 * torch.sign(15 / 2 - x) + 7898654920 * (-17 / 2 + x) ** 40 * torch.sign(
17 / 2 - x) - 3159461968 * (-19 / 2 + x) ** 40 * torch.sign(19 / 2 - x) + 1121099408 * (
-21 / 2 + x) ** 40 * torch.sign(21 / 2 - x) - 350343565 * (-23 / 2 + x) ** 40 * torch.sign(
23 / 2 - x) + 95548245 * (-25 / 2 + x) ** 40 * torch.sign(25 / 2 - x) - 22481940 * (
-27 / 2 + x) ** 40 * torch.sign(27 / 2 - x) + 4496388 * (-29 / 2 + x) ** 40 * torch.sign(
29 / 2 - x) - 749398 * (-31 / 2 + x) ** 40 * torch.sign(31 / 2 - x) + 101270 * (
-33 / 2 + x) ** 40 * torch.sign(33 / 2 - x) - 10660 * (-35 / 2 + x) ** 40 * torch.sign(
35 / 2 - x) + 820 * (-37 / 2 + x) ** 40 * torch.sign(37 / 2 - x) - 41 * (-39 / 2 + x) ** 40 * torch.sign(
39 / 2 - x) + (-41 / 2 + x) ** 40 * torch.sign(41 / 2 - x) + 269128937220 * (1 / 2 + x) ** 40 * torch.sign(
1 / 2 + x) - 244662670200 * (3 / 2 + x) ** 40 * torch.sign(3 / 2 + x) + 202112640600 * (
5 / 2 + x) ** 40 * torch.sign(5 / 2 + x) - 151584480450 * (7 / 2 + x) ** 40 * torch.sign(
7 / 2 + x) + 103077446706 * (9 / 2 + x) ** 40 * torch.sign(9 / 2 + x) - 63432274896 * (
11 / 2 + x) ** 40 * torch.sign(11 / 2 + x) + 35240152720 * (13 / 2 + x) ** 40 * torch.sign(
13 / 2 + x) - 17620076360 * (15 / 2 + x) ** 40 * torch.sign(15 / 2 + x) + 7898654920 * (
17 / 2 + x) ** 40 * torch.sign(17 / 2 + x) - 3159461968 * (19 / 2 + x) ** 40 * torch.sign(
19 / 2 + x) + 1121099408 * (21 / 2 + x) ** 40 * torch.sign(21 / 2 + x) - 350343565 * (
23 / 2 + x) ** 40 * torch.sign(23 / 2 + x) + 95548245 * (25 / 2 + x) ** 40 * torch.sign(
25 / 2 + x) - 22481940 * (27 / 2 + x) ** 40 * torch.sign(27 / 2 + x) + 4496388 * (
29 / 2 + x) ** 40 * torch.sign(29 / 2 + x) - 749398 * (31 / 2 + x) ** 40 * torch.sign(
31 / 2 + x) + 101270 * (33 / 2 + x) ** 40 * torch.sign(33 / 2 + x) - 10660 * (
35 / 2 + x) ** 40 * torch.sign(35 / 2 + x) + 820 * (37 / 2 + x) ** 40 * torch.sign(
37 / 2 + x) - 41 * (39 / 2 + x) ** 40 * torch.sign(39 / 2 + x) + (41 / 2 + x) ** 40 * torch.sign(
41 / 2 + x)) / 1631830566495795468691222539192231788544000000000
return B
def _B_41():
def B(x):
return (-513791607420 * (-1 + x) ** 41 * torch.sign(1 - x) + 446775310800 * (-2 + x) ** 41 * torch.sign(
2 - x) - 353697121050 * (-3 + x) ** 41 * torch.sign(3 - x) + 254661927156 * (-4 + x) ** 41 * torch.sign(
4 - x) - 166509721602 * (-5 + x) ** 41 * torch.sign(5 - x) + 98672427616 * (-6 + x) ** 41 * torch.sign(
6 - x) - 52860229080 * (-7 + x) ** 41 * torch.sign(7 - x) + 25518731280 * (-8 + x) ** 41 * torch.sign(
8 - x) - 11058116888 * (-9 + x) ** 41 * torch.sign(9 - x) + 4280561376 * (-10 + x) ** 41 * torch.sign(
10 - x) - 1471442973 * (-11 + x) ** 41 * torch.sign(11 - x) + 445891810 * (-12 + x) ** 41 * torch.sign(
12 - x) - 118030185 * (-13 + x) ** 41 * torch.sign(13 - x) + 26978328 * (-14 + x) ** 41 * torch.sign(
14 - x) - 5245786 * (-15 + x) ** 41 * torch.sign(15 - x) + 850668 * (-16 + x) ** 41 * torch.sign(
16 - x) - 111930 * (-17 + x) ** 41 * torch.sign(17 - x) + 11480 * (-18 + x) ** 41 * torch.sign(
18 - x) - 861 * (-19 + x) ** 41 * torch.sign(19 - x) + 42 * (-20 + x) ** 41 * torch.sign(20 - x) - (
-21 + x) ** 41 * torch.sign(21 - x) - 538257874440 * x ** 41 * torch.sign(
x) + 513791607420 * (1 + x) ** 41 * torch.sign(1 + x) - 446775310800 * (2 + x) ** 41 * torch.sign(
2 + x) + 353697121050 * (3 + x) ** 41 * torch.sign(3 + x) - 254661927156 * (4 + x) ** 41 * torch.sign(
4 + x) + 166509721602 * (5 + x) ** 41 * torch.sign(5 + x) - 98672427616 * (6 + x) ** 41 * torch.sign(
6 + x) + 52860229080 * (7 + x) ** 41 * torch.sign(7 + x) - 25518731280 * (8 + x) ** 41 * torch.sign(
8 + x) + 11058116888 * (9 + x) ** 41 * torch.sign(9 + x) - 4280561376 * (10 + x) ** 41 * torch.sign(
10 + x) + 1471442973 * (11 + x) ** 41 * torch.sign(11 + x) - 445891810 * (12 + x) ** 41 * torch.sign(
12 + x) + 118030185 * (13 + x) ** 41 * torch.sign(13 + x) - 26978328 * (14 + x) ** 41 * torch.sign(
14 + x) + 5245786 * (15 + x) ** 41 * torch.sign(15 + x) - 850668 * (16 + x) ** 41 * torch.sign(
16 + x) + 111930 * (17 + x) ** 41 * torch.sign(17 + x) - 11480 * (18 + x) ** 41 * torch.sign(
18 + x) + 861 * (19 + x) ** 41 * torch.sign(19 + x) - 42 * (20 + x) ** 41 * torch.sign(20 + x) + (
21 + x) ** 41 * torch.sign(21 + x)) / 66905053226327614216340124106881503330304000000000
return B
def _B_42():
def B(x):
return (-1052049481860 * (-1 / 2 + x) ** 42 * torch.sign(1 / 2 - x) + 960566918220 * (
-3 / 2 + x) ** 42 * torch.sign(3 / 2 - x) - 800472431850 * (-5 / 2 + x) ** 42 * torch.sign(
5 / 2 - x) + 608359048206 * (-7 / 2 + x) ** 42 * torch.sign(7 / 2 - x) - 421171648758 * (
-9 / 2 + x) ** 42 * torch.sign(9 / 2 - x) + 265182149218 * (-11 / 2 + x) ** 42 * torch.sign(
11 / 2 - x) - 151532656696 * (-13 / 2 + x) ** 42 * torch.sign(13 / 2 - x) + 78378960360 * (
-15 / 2 + x) ** 42 * torch.sign(15 / 2 - x) - 36576848168 * (
-17 / 2 + x) ** 42 * torch.sign(17 / 2 - x) + 15338678264 * (
-19 / 2 + x) ** 42 * torch.sign(19 / 2 - x) - 5752004349 * (-21 / 2 + x) ** 42 * torch.sign(
21 / 2 - x) + 1917334783 * (-23 / 2 + x) ** 42 * torch.sign(23 / 2 - x) - 563921995 * (
-25 / 2 + x) ** 42 * torch.sign(25 / 2 - x) + 145008513 * (-27 / 2 + x) ** 42 * torch.sign(
27 / 2 - x) - 32224114 * (-29 / 2 + x) ** 42 * torch.sign(29 / 2 - x) + 6096454 * (
-31 / 2 + x) ** 42 * torch.sign(31 / 2 - x) - 962598 * (-33 / 2 + x) ** 42 * torch.sign(
33 / 2 - x) + 123410 * (-35 / 2 + x) ** 42 * torch.sign(35 / 2 - x) - 12341 * (
-37 / 2 + x) ** 42 * torch.sign(37 / 2 - x) + 903 * (-39 / 2 + x) ** 42 * torch.sign(
39 / 2 - x) - 43 * (-41 / 2 + x) ** 42 * torch.sign(41 / 2 - x) + (-43 / 2 + x) ** 42 * torch.sign(
43 / 2 - x) - 1052049481860 * (1 / 2 + x) ** 42 * torch.sign(1 / 2 + x) + 960566918220 * (
3 / 2 + x) ** 42 * torch.sign(3 / 2 + x) - 800472431850 * (5 / 2 + x) ** 42 * torch.sign(
5 / 2 + x) + 608359048206 * (7 / 2 + x) ** 42 * torch.sign(7 / 2 + x) - 421171648758 * (
9 / 2 + x) ** 42 * torch.sign(9 / 2 + x) + 265182149218 * (11 / 2 + x) ** 42 * torch.sign(
11 / 2 + x) - 151532656696 * (13 / 2 + x) ** 42 * torch.sign(13 / 2 + x) + 78378960360 * (
15 / 2 + x) ** 42 * torch.sign(15 / 2 + x) - 36576848168 * (17 / 2 + x) ** 42 * torch.sign(
17 / 2 + x) + 15338678264 * (19 / 2 + x) ** 42 * torch.sign(19 / 2 + x) - 5752004349 * (
21 / 2 + x) ** 42 * torch.sign(21 / 2 + x) + 1917334783 * (23 / 2 + x) ** 42 * torch.sign(
23 / 2 + x) - 563921995 * (25 / 2 + x) ** 42 * torch.sign(25 / 2 + x) + 145008513 * (
27 / 2 + x) ** 42 * torch.sign(27 / 2 + x) - 32224114 * (29 / 2 + x) ** 42 * torch.sign(
29 / 2 + x) + 6096454 * (31 / 2 + x) ** 42 * torch.sign(31 / 2 + x) - 962598 * (
33 / 2 + x) ** 42 * torch.sign(33 / 2 + x) + 123410 * (35 / 2 + x) ** 42 * torch.sign(
35 / 2 + x) - 12341 * (37 / 2 + x) ** 42 * torch.sign(37 / 2 + x) + 903 * (39 / 2 + x) ** 42 * torch.sign(
39 / 2 + x) - 43 * (41 / 2 + x) ** 42 * torch.sign(41 / 2 + x) + (43 / 2 + x) ** 42 * torch.sign(
43 / 2 + x)) / 2810012235505759797086285212489023139872768000000000
return B
def _B_43():
def B(x):
return (2012616400080 * (-1 + x) ** 43 * torch.sign(1 - x) - 1761039350070 * (-2 + x) ** 43 * torch.sign(
2 - x) + 1408831480056 * (-3 + x) ** 43 * torch.sign(3 - x) - 1029530696964 * (-4 + x) ** 43 * torch.sign(
4 - x) + 686353797976 * (-5 + x) ** 43 * torch.sign(5 - x) - 416714805914 * (-6 + x) ** 43 * torch.sign(
6 - x) + 229911617056 * (-7 + x) ** 43 * torch.sign(7 - x) - 114955808528 * (-8 + x) ** 43 * torch.sign(
8 - x) + 51915526432 * (-9 + x) ** 43 * torch.sign(9 - x) - 21090682613 * (-10 + x) ** 43 * torch.sign(
10 - x) + 7669339132 * (-11 + x) ** 43 * torch.sign(11 - x) - 2481256778 * (-12 + x) ** 43 * torch.sign(
12 - x) + 708930508 * (-13 + x) ** 43 * torch.sign(13 - x) - 177232627 * (-14 + x) ** 43 * torch.sign(
14 - x) + 38320568 * (-15 + x) ** 43 * torch.sign(15 - x) - 7059052 * (-16 + x) ** 43 * torch.sign(
16 - x) + 1086008 * (-17 + x) ** 43 * torch.sign(17 - x) - 135751 * (-18 + x) ** 43 * torch.sign(
18 - x) + 13244 * (-19 + x) ** 43 * torch.sign(19 - x) - 946 * (-20 + x) ** 43 * torch.sign(20 - x) + 44 * (
-21 + x) ** 43 * torch.sign(21 - x) - (-22 + x) ** 43 * torch.sign(
22 - x) + 2104098963720 * x ** 43 * torch.sign(x) - 2012616400080 * (1 + x) ** 43 * torch.sign(
1 + x) + 1761039350070 * (2 + x) ** 43 * torch.sign(2 + x) - 1408831480056 * (3 + x) ** 43 * torch.sign(
3 + x) + 1029530696964 * (4 + x) ** 43 * torch.sign(4 + x) - 686353797976 * (5 + x) ** 43 * torch.sign(
5 + x) + 416714805914 * (6 + x) ** 43 * torch.sign(6 + x) - 229911617056 * (7 + x) ** 43 * torch.sign(
7 + x) + 114955808528 * (8 + x) ** 43 * torch.sign(8 + x) - 51915526432 * (9 + x) ** 43 * torch.sign(
9 + x) + 21090682613 * (10 + x) ** 43 * torch.sign(10 + x) - 7669339132 * (11 + x) ** 43 * torch.sign(
11 + x) + 2481256778 * (12 + x) ** 43 * torch.sign(12 + x) - 708930508 * (13 + x) ** 43 * torch.sign(
13 + x) + 177232627 * (14 + x) ** 43 * torch.sign(14 + x) - 38320568 * (15 + x) ** 43 * torch.sign(
15 + x) + 7059052 * (16 + x) ** 43 * torch.sign(16 + x) - 1086008 * (17 + x) ** 43 * torch.sign(
17 + x) + 135751 * (18 + x) ** 43 * torch.sign(18 + x) - 13244 * (19 + x) ** 43 * torch.sign(
19 + x) + 946 * (20 + x) ** 43 * torch.sign(20 + x) - 44 * (21 + x) ** 43 * torch.sign(21 + x) + (
22 + x) ** 43 * torch.sign(22 + x)) / 120830526126747671274710264137027995014529024000000000
return B
def _B_44():
def B(x):
return (4116715363800 * (-1 / 2 + x) ** 44 * torch.sign(1 / 2 - x) - 3773655750150 * (
-3 / 2 + x) ** 44 * torch.sign(3 / 2 - x) + 3169870830126 * (-5 / 2 + x) ** 44 * torch.sign(
5 / 2 - x) - 2438362177020 * (-7 / 2 + x) ** 44 * torch.sign(7 / 2 - x) + 1715884494940 * (
-9 / 2 + x) ** 44 * torch.sign(9 / 2 - x) - 1103068603890 * (
-11 / 2 + x) ** 44 * torch.sign(11 / 2 - x) + 646626422970 * (
-13 / 2 + x) ** 44 * torch.sign(13 / 2 - x) - 344867425584 * (
-15 / 2 + x) ** 44 * torch.sign(15 / 2 - x) + 166871334960 * (
-17 / 2 + x) ** 44 * torch.sign(17 / 2 - x) - 73006209045 * (
-19 / 2 + x) ** 44 * torch.sign(19 / 2 - x) + 28760021745 * (
-21 / 2 + x) ** 44 * torch.sign(21 / 2 - x) - 10150595910 * (
-23 / 2 + x) ** 44 * torch.sign(23 / 2 - x) + 3190187286 * (-25 / 2 + x) ** 44 * torch.sign(
25 / 2 - x) - 886163135 * (-27 / 2 + x) ** 44 * torch.sign(27 / 2 - x) + 215553195 * (
-29 / 2 + x) ** 44 * torch.sign(29 / 2 - x) - 45379620 * (-31 / 2 + x) ** 44 * torch.sign(
31 / 2 - x) + 8145060 * (-33 / 2 + x) ** 44 * torch.sign(33 / 2 - x) - 1221759 * (
-35 / 2 + x) ** 44 * torch.sign(35 / 2 - x) + 148995 * (-37 / 2 + x) ** 44 * torch.sign(
37 / 2 - x) - 14190 * (-39 / 2 + x) ** 44 * torch.sign(39 / 2 - x) + 990 * (-41 / 2 + x) ** 44 * torch.sign(
41 / 2 - x) - 45 * (-43 / 2 + x) ** 44 * torch.sign(43 / 2 - x) + (-45 / 2 + x) ** 44 * torch.sign(
45 / 2 - x) + 4116715363800 * (1 / 2 + x) ** 44 * torch.sign(1 / 2 + x) - 3773655750150 * (
3 / 2 + x) ** 44 * torch.sign(3 / 2 + x) + 3169870830126 * (5 / 2 + x) ** 44 * torch.sign(
5 / 2 + x) - 2438362177020 * (7 / 2 + x) ** 44 * torch.sign(7 / 2 + x) + 1715884494940 * (
9 / 2 + x) ** 44 * torch.sign(9 / 2 + x) - 1103068603890 * (11 / 2 + x) ** 44 * torch.sign(
11 / 2 + x) + 646626422970 * (13 / 2 + x) ** 44 * torch.sign(13 / 2 + x) - 344867425584 * (
15 / 2 + x) ** 44 * torch.sign(15 / 2 + x) + 166871334960 * (17 / 2 + x) ** 44 * torch.sign(
17 / 2 + x) - 73006209045 * (19 / 2 + x) ** 44 * torch.sign(19 / 2 + x) + 28760021745 * (
21 / 2 + x) ** 44 * torch.sign(21 / 2 + x) - 10150595910 * (23 / 2 + x) ** 44 * torch.sign(
23 / 2 + x) + 3190187286 * (25 / 2 + x) ** 44 * torch.sign(25 / 2 + x) - 886163135 * (
27 / 2 + x) ** 44 * torch.sign(27 / 2 + x) + 215553195 * (29 / 2 + x) ** 44 * torch.sign(
29 / 2 + x) - 45379620 * (31 / 2 + x) ** 44 * torch.sign(31 / 2 + x) + 8145060 * (
33 / 2 + x) ** 44 * torch.sign(33 / 2 + x) - 1221759 * (35 / 2 + x) ** 44 * torch.sign(
35 / 2 + x) + 148995 * (37 / 2 + x) ** 44 * torch.sign(37 / 2 + x) - 14190 * (
39 / 2 + x) ** 44 * torch.sign(39 / 2 + x) + 990 * (41 / 2 + x) ** 44 * torch.sign(
41 / 2 + x) - 45 * (43 / 2 + x) ** 44 * torch.sign(43 / 2 + x) + (45 / 2 + x) ** 44 * torch.sign(
45 / 2 + x)) / 5316543149576897536087251622029231780639277056000000000
return B
def _B_45():
def B(x):
return (-7890371113950 * (-1 + x) ** 45 * torch.sign(1 - x) + 6943526580276 * (-2 + x) ** 45 * torch.sign(
2 - x) - 5608233007146 * (-3 + x) ** 45 * torch.sign(3 - x) + 4154246671960 * (-4 + x) ** 45 * torch.sign(
4 - x) - 2818953098830 * (-5 + x) ** 45 * torch.sign(5 - x) + 1749695026860 * (-6 + x) ** 45 * torch.sign(
6 - x) - 991493848554 * (-7 + x) ** 45 * torch.sign(7 - x) + 511738760544 * (-8 + x) ** 45 * torch.sign(
8 - x) - 239877544005 * (-9 + x) ** 45 * torch.sign(9 - x) + 101766230790 * (-10 + x) ** 45 * torch.sign(
10 - x) - 38910617655 * (-11 + x) ** 45 * torch.sign(11 - x) + 13340783196 * (-12 + x) ** 45 * torch.sign(
12 - x) - 4076350421 * (-13 + x) ** 45 * torch.sign(13 - x) + 1101716330 * (-14 + x) ** 45 * torch.sign(
14 - x) - 260932815 * (-15 + x) ** 45 * torch.sign(15 - x) + 53524680 * (-16 + x) ** 45 * torch.sign(
16 - x) - 9366819 * (-17 + x) ** 45 * torch.sign(17 - x) + 1370754 * (-18 + x) ** 45 * torch.sign(
18 - x) - 163185 * (-19 + x) ** 45 * torch.sign(19 - x) + 15180 * (-20 + x) ** 45 * torch.sign(
20 - x) - 1035 * (-21 + x) ** 45 * torch.sign(21 - x) + 46 * (-22 + x) ** 45 * torch.sign(22 - x) - (
-23 + x) ** 45 * torch.sign(23 - x) - 8233430727600 * x ** 45 * torch.sign(
x) + 7890371113950 * (1 + x) ** 45 * torch.sign(1 + x) - 6943526580276 * (2 + x) ** 45 * torch.sign(
2 + x) + 5608233007146 * (3 + x) ** 45 * torch.sign(3 + x) - 4154246671960 * (4 + x) ** 45 * torch.sign(
4 + x) + 2818953098830 * (5 + x) ** 45 * torch.sign(5 + x) - 1749695026860 * (6 + x) ** 45 * torch.sign(
6 + x) + 991493848554 * (7 + x) ** 45 * torch.sign(7 + x) - 511738760544 * (8 + x) ** 45 * torch.sign(
8 + x) + 239877544005 * (9 + x) ** 45 * torch.sign(9 + x) - 101766230790 * (10 + x) ** 45 * torch.sign(
10 + x) + 38910617655 * (11 + x) ** 45 * torch.sign(11 + x) - 13340783196 * (12 + x) ** 45 * torch.sign(
12 + x) + 4076350421 * (13 + x) ** 45 * torch.sign(13 + x) - 1101716330 * (14 + x) ** 45 * torch.sign(
14 + x) + 260932815 * (15 + x) ** 45 * torch.sign(15 + x) - 53524680 * (16 + x) ** 45 * torch.sign(
16 + x) + 9366819 * (17 + x) ** 45 * torch.sign(17 + x) - 1370754 * (18 + x) ** 45 * torch.sign(
18 + x) + 163185 * (19 + x) ** 45 * torch.sign(19 + x) - 15180 * (20 + x) ** 45 * torch.sign(
20 + x) + 1035 * (21 + x) ** 45 * torch.sign(21 + x) - 46 * (22 + x) ** 45 * torch.sign(22 + x) + (
23 + x) ** 45 * torch.sign(
23 + x)) / 239244441730960389123926322991315430128767467520000000000
return B
def _B_46():
def B(x):
return (-16123801841550 * (-1 / 2 + x) ** 46 * torch.sign(1 / 2 - x) + 14833897694226 * (
-3 / 2 + x) ** 46 * torch.sign(3 / 2 - x) - 12551759587422 * (-5 / 2 + x) ** 46 * torch.sign(
5 / 2 - x) + 9762479679106 * (-7 / 2 + x) ** 46 * torch.sign(7 / 2 - x) - 6973199770790 * (
-9 / 2 + x) ** 46 * torch.sign(9 / 2 - x) + 4568648125690 * (
-11 / 2 + x) ** 46 * torch.sign(11 / 2 - x) - 2741188875414 * (
-13 / 2 + x) ** 46 * torch.sign(13 / 2 - x) + 1503232609098 * (
-15 / 2 + x) ** 46 * torch.sign(15 / 2 - x) - 751616304549 * (
-17 / 2 + x) ** 46 * torch.sign(17 / 2 - x) + 341643774795 * (
-19 / 2 + x) ** 46 * torch.sign(19 / 2 - x) - 140676848445 * (
-21 / 2 + x) ** 46 * torch.sign(21 / 2 - x) + 52251400851 * (
-23 / 2 + x) ** 46 * torch.sign(23 / 2 - x) - 17417133617 * (
-25 / 2 + x) ** 46 * torch.sign(25 / 2 - x) + 5178066751 * (-27 / 2 + x) ** 46 * torch.sign(
27 / 2 - x) - 1362649145 * (-29 / 2 + x) ** 46 * torch.sign(29 / 2 - x) + 314457495 * (
-31 / 2 + x) ** 46 * torch.sign(31 / 2 - x) - 62891499 * (-33 / 2 + x) ** 46 * torch.sign(
33 / 2 - x) + 10737573 * (-35 / 2 + x) ** 46 * torch.sign(35 / 2 - x) - 1533939 * (
-37 / 2 + x) ** 46 * torch.sign(37 / 2 - x) + 178365 * (-39 / 2 + x) ** 46 * torch.sign(
39 / 2 - x) - 16215 * (-41 / 2 + x) ** 46 * torch.sign(41 / 2 - x) + 1081 * (
-43 / 2 + x) ** 46 * torch.sign(43 / 2 - x) - 47 * (-45 / 2 + x) ** 46 * torch.sign(
45 / 2 - x) + (-47 / 2 + x) ** 46 * torch.sign(47 / 2 - x) - 16123801841550 * (
1 / 2 + x) ** 46 * torch.sign(1 / 2 + x) + 14833897694226 * (3 / 2 + x) ** 46 * torch.sign(
3 / 2 + x) - 12551759587422 * (5 / 2 + x) ** 46 * torch.sign(5 / 2 + x) + 9762479679106 * (
7 / 2 + x) ** 46 * torch.sign(7 / 2 + x) - 6973199770790 * (9 / 2 + x) ** 46 * torch.sign(
9 / 2 + x) + 4568648125690 * (11 / 2 + x) ** 46 * torch.sign(11 / 2 + x) - 2741188875414 * (
13 / 2 + x) ** 46 * torch.sign(13 / 2 + x) + 1503232609098 * (
15 / 2 + x) ** 46 * torch.sign(15 / 2 + x) - 751616304549 * (17 / 2 + x) ** 46 * torch.sign(
17 / 2 + x) + 341643774795 * (19 / 2 + x) ** 46 * torch.sign(19 / 2 + x) - 140676848445 * (
21 / 2 + x) ** 46 * torch.sign(21 / 2 + x) + 52251400851 * (23 / 2 + x) ** 46 * torch.sign(
23 / 2 + x) - 17417133617 * (25 / 2 + x) ** 46 * torch.sign(25 / 2 + x) + 5178066751 * (
27 / 2 + x) ** 46 * torch.sign(27 / 2 + x) - 1362649145 * (29 / 2 + x) ** 46 * torch.sign(
29 / 2 + x) + 314457495 * (31 / 2 + x) ** 46 * torch.sign(31 / 2 + x) - 62891499 * (
33 / 2 + x) ** 46 * torch.sign(33 / 2 + x) + 10737573 * (35 / 2 + x) ** 46 * torch.sign(
35 / 2 + x) - 1533939 * (37 / 2 + x) ** 46 * torch.sign(37 / 2 + x) + 178365 * (
39 / 2 + x) ** 46 * torch.sign(39 / 2 + x) - 16215 * (41 / 2 + x) ** 46 * torch.sign(
41 / 2 + x) + 1081 * (43 / 2 + x) ** 46 * torch.sign(43 / 2 + x) - 47 * (45 / 2 + x) ** 46 * torch.sign(
45 / 2 + x) + (47 / 2 + x) ** 46 * torch.sign(
47 / 2 + x)) / 11005244319624177899700610857600509785923303505920000000000
return B
def _B_47():
def B(x):
return (30957699535776 * (-1 + x) ** 47 * torch.sign(1 - x) - 27385657281648 * (-2 + x) ** 47 * torch.sign(
2 - x) + 22314239266528 * (-3 + x) ** 47 * torch.sign(3 - x) - 16735679449896 * (-4 + x) ** 47 * torch.sign(
4 - x) + 11541847896480 * (-5 + x) ** 47 * torch.sign(5 - x) - 7309837001104 * (-6 + x) ** 47 * torch.sign(
6 - x) + 4244421484512 * (-7 + x) ** 47 * torch.sign(7 - x) - 2254848913647 * (-8 + x) ** 47 * torch.sign(
8 - x) + 1093260079344 * (-9 + x) ** 47 * torch.sign(9 - x) - 482320623240 * (-10 + x) ** 47 * torch.sign(
10 - x) + 192928249296 * (-11 + x) ** 47 * torch.sign(11 - x) - 69668534468 * (-12 + x) ** 47 * torch.sign(
12 - x) + 22595200368 * (-13 + x) ** 47 * torch.sign(13 - x) - 6540715896 * (-14 + x) ** 47 * torch.sign(
14 - x) + 1677106640 * (-15 + x) ** 47 * torch.sign(15 - x) - 377348994 * (-16 + x) ** 47 * torch.sign(
16 - x) + 73629072 * (-17 + x) ** 47 * torch.sign(17 - x) - 12271512 * (-18 + x) ** 47 * torch.sign(
18 - x) + 1712304 * (-19 + x) ** 47 * torch.sign(19 - x) - 194580 * (-20 + x) ** 47 * torch.sign(
20 - x) + 17296 * (-21 + x) ** 47 * torch.sign(21 - x) - 1128 * (-22 + x) ** 47 * torch.sign(
22 - x) + 48 * (-23 + x) ** 47 * torch.sign(23 - x) - (-24 + x) ** 47 * torch.sign(
24 - x) + 32247603683100 * x ** 47 * torch.sign(x) - 30957699535776 * (1 + x) ** 47 * torch.sign(
1 + x) + 27385657281648 * (2 + x) ** 47 * torch.sign(2 + x) - 22314239266528 * (3 + x) ** 47 * torch.sign(
3 + x) + 16735679449896 * (4 + x) ** 47 * torch.sign(4 + x) - 11541847896480 * (5 + x) ** 47 * torch.sign(
5 + x) + 7309837001104 * (6 + x) ** 47 * torch.sign(6 + x) - 4244421484512 * (7 + x) ** 47 * torch.sign(
7 + x) + 2254848913647 * (8 + x) ** 47 * torch.sign(8 + x) - 1093260079344 * (9 + x) ** 47 * torch.sign(
9 + x) + 482320623240 * (10 + x) ** 47 * torch.sign(10 + x) - 192928249296 * (11 + x) ** 47 * torch.sign(
11 + x) + 69668534468 * (12 + x) ** 47 * torch.sign(12 + x) - 22595200368 * (13 + x) ** 47 * torch.sign(
13 + x) + 6540715896 * (14 + x) ** 47 * torch.sign(14 + x) - 1677106640 * (15 + x) ** 47 * torch.sign(
15 + x) + 377348994 * (16 + x) ** 47 * torch.sign(16 + x) - 73629072 * (17 + x) ** 47 * torch.sign(
17 + x) + 12271512 * (18 + x) ** 47 * torch.sign(18 + x) - 1712304 * (19 + x) ** 47 * torch.sign(
19 + x) + 194580 * (20 + x) ** 47 * torch.sign(20 + x) - 17296 * (21 + x) ** 47 * torch.sign(
21 + x) + 1128 * (22 + x) ** 47 * torch.sign(22 + x) - 48 * (23 + x) ** 47 * torch.sign(23 + x) + (
24 + x) ** 47 * torch.sign(
24 + x)) / 517246483022336361285928710307223959938395264778240000000000
return B
def _B_48():
def B(x):
return (63205303218876 * (-1 / 2 + x) ** 48 * torch.sign(1 / 2 - x) - 58343356817424 * (
-3 / 2 + x) ** 48 * torch.sign(3 / 2 - x) + 49699896548176 * (-5 / 2 + x) ** 48 * torch.sign(
5 / 2 - x) - 39049918716424 * (-7 / 2 + x) ** 48 * torch.sign(7 / 2 - x) + 28277527346376 * (
-9 / 2 + x) ** 48 * torch.sign(9 / 2 - x) - 18851684897584 * (
-11 / 2 + x) ** 48 * torch.sign(11 / 2 - x) + 11554258485616 * (
-13 / 2 + x) ** 48 * torch.sign(13 / 2 - x) - 6499270398159 * (
-15 / 2 + x) ** 48 * torch.sign(15 / 2 - x) + 3348108992991 * (
-17 / 2 + x) ** 48 * torch.sign(17 / 2 - x) - 1575580702584 * (
-19 / 2 + x) ** 48 * torch.sign(19 / 2 - x) + 675248872536 * (
-21 / 2 + x) ** 48 * torch.sign(21 / 2 - x) - 262596783764 * (
-23 / 2 + x) ** 48 * torch.sign(23 / 2 - x) + 92263734836 * (
-25 / 2 + x) ** 48 * torch.sign(25 / 2 - x) - 29135916264 * (
-27 / 2 + x) ** 48 * torch.sign(27 / 2 - x) + 8217822536 * (-29 / 2 + x) ** 48 * torch.sign(
29 / 2 - x) - 2054455634 * (-31 / 2 + x) ** 48 * torch.sign(31 / 2 - x) + 450978066 * (
-33 / 2 + x) ** 48 * torch.sign(33 / 2 - x) - 85900584 * (-35 / 2 + x) ** 48 * torch.sign(
35 / 2 - x) + 13983816 * (-37 / 2 + x) ** 48 * torch.sign(37 / 2 - x) - 1906884 * (
-39 / 2 + x) ** 48 * torch.sign(39 / 2 - x) + 211876 * (-41 / 2 + x) ** 48 * torch.sign(
41 / 2 - x) - 18424 * (-43 / 2 + x) ** 48 * torch.sign(43 / 2 - x) + 1176 * (
-45 / 2 + x) ** 48 * torch.sign(45 / 2 - x) - 49 * (-47 / 2 + x) ** 48 * torch.sign(
47 / 2 - x) + (-49 / 2 + x) ** 48 * torch.sign(49 / 2 - x) + 63205303218876 * (
1 / 2 + x) ** 48 * torch.sign(1 / 2 + x) - 58343356817424 * (3 / 2 + x) ** 48 * torch.sign(
3 / 2 + x) + 49699896548176 * (5 / 2 + x) ** 48 * torch.sign(5 / 2 + x) - 39049918716424 * (
7 / 2 + x) ** 48 * torch.sign(7 / 2 + x) + 28277527346376 * (9 / 2 + x) ** 48 * torch.sign(
9 / 2 + x) - 18851684897584 * (11 / 2 + x) ** 48 * torch.sign(11 / 2 + x) + 11554258485616 * (
13 / 2 + x) ** 48 * torch.sign(13 / 2 + x) - 6499270398159 * (
15 / 2 + x) ** 48 * torch.sign(15 / 2 + x) + 3348108992991 * (
17 / 2 + x) ** 48 * torch.sign(17 / 2 + x) - 1575580702584 * (
19 / 2 + x) ** 48 * torch.sign(19 / 2 + x) + 675248872536 * (21 / 2 + x) ** 48 * torch.sign(
21 / 2 + x) - 262596783764 * (23 / 2 + x) ** 48 * torch.sign(23 / 2 + x) + 92263734836 * (
25 / 2 + x) ** 48 * torch.sign(25 / 2 + x) - 29135916264 * (27 / 2 + x) ** 48 * torch.sign(
27 / 2 + x) + 8217822536 * (29 / 2 + x) ** 48 * torch.sign(29 / 2 + x) - 2054455634 * (
31 / 2 + x) ** 48 * torch.sign(31 / 2 + x) + 450978066 * (33 / 2 + x) ** 48 * torch.sign(
33 / 2 + x) - 85900584 * (35 / 2 + x) ** 48 * torch.sign(35 / 2 + x) + 13983816 * (
37 / 2 + x) ** 48 * torch.sign(37 / 2 + x) - 1906884 * (39 / 2 + x) ** 48 * torch.sign(
39 / 2 + x) + 211876 * (41 / 2 + x) ** 48 * torch.sign(41 / 2 + x) - 18424 * (
43 / 2 + x) ** 48 * torch.sign(43 / 2 + x) + 1176 * (45 / 2 + x) ** 48 * torch.sign(
45 / 2 + x) - 49 * (47 / 2 + x) ** 48 * torch.sign(47 / 2 + x) + (49 / 2 + x) ** 48 * torch.sign(
49 / 2 + x)) / 24827831185072145341724578094746750077042972709355520000000000
return B
def _B_49():
def B(x):
return (-121548660036300 * (-1 + x) ** 49 * torch.sign(1 - x) + 108043253365600 * (-2 + x) ** 49 * torch.sign(
2 - x) - 88749815264600 * (-3 + x) ** 49 * torch.sign(3 - x) + 67327446062800 * (-4 + x) ** 49 * torch.sign(
4 - x) - 47129212243960 * (-5 + x) ** 49 * torch.sign(5 - x) + 30405943383200 * (-6 + x) ** 49 * torch.sign(
6 - x) - 18053528883775 * (-7 + x) ** 49 * torch.sign(7 - x) + 9847379391150 * (-8 + x) ** 49 * torch.sign(
8 - x) - 4923689695575 * (-9 + x) ** 49 * torch.sign(9 - x) + 2250829575120 * (-10 + x) ** 49 * torch.sign(
10 - x) - 937845656300 * (-11 + x) ** 49 * torch.sign(11 - x) + 354860518600 * (-12 + x) ** 49 * torch.sign(
12 - x) - 121399651100 * (-13 + x) ** 49 * torch.sign(13 - x) + 37353738800 * (-14 + x) ** 49 * torch.sign(
14 - x) - 10272278170 * (-15 + x) ** 49 * torch.sign(15 - x) + 2505433700 * (-16 + x) ** 49 * torch.sign(
16 - x) - 536878650 * (-17 + x) ** 49 * torch.sign(17 - x) + 99884400 * (-18 + x) ** 49 * torch.sign(
18 - x) - 15890700 * (-19 + x) ** 49 * torch.sign(19 - x) + 2118760 * (-20 + x) ** 49 * torch.sign(
20 - x) - 230300 * (-21 + x) ** 49 * torch.sign(21 - x) + 19600 * (-22 + x) ** 49 * torch.sign(
22 - x) - 1225 * (-23 + x) ** 49 * torch.sign(23 - x) + 50 * (-24 + x) ** 49 * torch.sign(24 - x) - (
-25 + x) ** 49 * torch.sign(25 - x) - 126410606437752 * x ** 49 * torch.sign(
x) + 121548660036300 * (1 + x) ** 49 * torch.sign(1 + x) - 108043253365600 * (2 + x) ** 49 * torch.sign(
2 + x) + 88749815264600 * (3 + x) ** 49 * torch.sign(3 + x) - 67327446062800 * (4 + x) ** 49 * torch.sign(
4 + x) + 47129212243960 * (5 + x) ** 49 * torch.sign(5 + x) - 30405943383200 * (6 + x) ** 49 * torch.sign(
6 + x) + 18053528883775 * (7 + x) ** 49 * torch.sign(7 + x) - 9847379391150 * (8 + x) ** 49 * torch.sign(
8 + x) + 4923689695575 * (9 + x) ** 49 * torch.sign(9 + x) - 2250829575120 * (10 + x) ** 49 * torch.sign(
10 + x) + 937845656300 * (11 + x) ** 49 * torch.sign(11 + x) - 354860518600 * (12 + x) ** 49 * torch.sign(
12 + x) + 121399651100 * (13 + x) ** 49 * torch.sign(13 + x) - 37353738800 * (14 + x) ** 49 * torch.sign(
14 + x) + 10272278170 * (15 + x) ** 49 * torch.sign(15 + x) - 2505433700 * (16 + x) ** 49 * torch.sign(
16 + x) + 536878650 * (17 + x) ** 49 * torch.sign(17 + x) - 99884400 * (18 + x) ** 49 * torch.sign(
18 + x) + 15890700 * (19 + x) ** 49 * torch.sign(19 + x) - 2118760 * (20 + x) ** 49 * torch.sign(
20 + x) + 230300 * (21 + x) ** 49 * torch.sign(21 + x) - 19600 * (22 + x) ** 49 * torch.sign(
22 + x) + 1225 * (23 + x) ** 49 * torch.sign(23 + x) - 50 * (24 + x) ** 49 * torch.sign(24 + x) + (
25 + x) ** 49 * torch.sign(
25 + x)) / 1216563728068535121744504326642590753775105662758420480000000000
return B
def _B_50():
def B(x):
return (-247959266474052 * (-1 / 2 + x) ** 50 * torch.sign(1 / 2 - x) + 229591913401900 * (
-3 / 2 + x) ** 50 * torch.sign(3 / 2 - x) - 196793068630200 * (-5 / 2 + x) ** 50 * torch.sign(
5 / 2 - x) + 156077261327400 * (-7 / 2 + x) ** 50 * torch.sign(7 / 2 - x) - 114456658306760 * (
-9 / 2 + x) ** 50 * torch.sign(9 / 2 - x) + 77535155627160 * (
-11 / 2 + x) ** 50 * torch.sign(11 / 2 - x) - 48459472266975 * (
-13 / 2 + x) ** 50 * torch.sign(13 / 2 - x) + 27900908274925 * (
-15 / 2 + x) ** 50 * torch.sign(15 / 2 - x) - 14771069086725 * (
-17 / 2 + x) ** 50 * torch.sign(17 / 2 - x) + 7174519270695 * (
-19 / 2 + x) ** 50 * torch.sign(19 / 2 - x) - 3188675231420 * (
-21 / 2 + x) ** 50 * torch.sign(21 / 2 - x) + 1292706174900 * (
-23 / 2 + x) ** 50 * torch.sign(23 / 2 - x) - 476260169700 * (
-25 / 2 + x) ** 50 * torch.sign(25 / 2 - x) + 158753389900 * (
-27 / 2 + x) ** 50 * torch.sign(27 / 2 - x) - 47626016970 * (
-29 / 2 + x) ** 50 * torch.sign(29 / 2 - x) + 12777711870 * (
-31 / 2 + x) ** 50 * torch.sign(31 / 2 - x) - 3042312350 * (-33 / 2 + x) ** 50 * torch.sign(
33 / 2 - x) + 636763050 * (-35 / 2 + x) ** 50 * torch.sign(35 / 2 - x) - 115775100 * (
-37 / 2 + x) ** 50 * torch.sign(37 / 2 - x) + 18009460 * (-39 / 2 + x) ** 50 * torch.sign(
39 / 2 - x) - 2349060 * (-41 / 2 + x) ** 50 * torch.sign(41 / 2 - x) + 249900 * (
-43 / 2 + x) ** 50 * torch.sign(43 / 2 - x) - 20825 * (-45 / 2 + x) ** 50 * torch.sign(
45 / 2 - x) + 1275 * (-47 / 2 + x) ** 50 * torch.sign(47 / 2 - x) - 51 * (-49 / 2 + x) ** 50 * torch.sign(
49 / 2 - x) + (-51 / 2 + x) ** 50 * torch.sign(51 / 2 - x) - 247959266474052 * (
1 / 2 + x) ** 50 * torch.sign(1 / 2 + x) + 229591913401900 * (3 / 2 + x) ** 50 * torch.sign(
3 / 2 + x) - 196793068630200 * (5 / 2 + x) ** 50 * torch.sign(5 / 2 + x) + 156077261327400 * (
7 / 2 + x) ** 50 * torch.sign(7 / 2 + x) - 114456658306760 * (9 / 2 + x) ** 50 * torch.sign(
9 / 2 + x) + 77535155627160 * (11 / 2 + x) ** 50 * torch.sign(11 / 2 + x) - 48459472266975 * (
13 / 2 + x) ** 50 * torch.sign(13 / 2 + x) + 27900908274925 * (
15 / 2 + x) ** 50 * torch.sign(15 / 2 + x) - 14771069086725 * (
17 / 2 + x) ** 50 * torch.sign(17 / 2 + x) + 7174519270695 * (
19 / 2 + x) ** 50 * torch.sign(19 / 2 + x) - 3188675231420 * (
21 / 2 + x) ** 50 * torch.sign(21 / 2 + x) + 1292706174900 * (
23 / 2 + x) ** 50 * torch.sign(23 / 2 + x) - 476260169700 * (25 / 2 + x) ** 50 * torch.sign(
25 / 2 + x) + 158753389900 * (27 / 2 + x) ** 50 * torch.sign(27 / 2 + x) - 47626016970 * (
29 / 2 + x) ** 50 * torch.sign(29 / 2 + x) + 12777711870 * (31 / 2 + x) ** 50 * torch.sign(
31 / 2 + x) - 3042312350 * (33 / 2 + x) ** 50 * torch.sign(33 / 2 + x) + 636763050 * (
35 / 2 + x) ** 50 * torch.sign(35 / 2 + x) - 115775100 * (37 / 2 + x) ** 50 * torch.sign(
37 / 2 + x) + 18009460 * (39 / 2 + x) ** 50 * torch.sign(39 / 2 + x) - 2349060 * (
41 / 2 + x) ** 50 * torch.sign(41 / 2 + x) + 249900 * (43 / 2 + x) ** 50 * torch.sign(
43 / 2 + x) - 20825 * (45 / 2 + x) ** 50 * torch.sign(45 / 2 + x) + 1275 * (47 / 2 + x) ** 50 * torch.sign(
47 / 2 + x) - 51 * (49 / 2 + x) ** 50 * torch.sign(49 / 2 + x) + (51 / 2 + x) ** 50 * torch.sign(
51 / 2 + x)) / 60828186403426756087225216332129537688755283137921024000000000000
return B
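# All of the _B_n factories above are expansions of a single closed form for
# the centered cardinal B-spline.  Writing the truncated power as
# t_+^n = (t^n + t^n * sign(t)) / 2, and noting that the alternating binomial
# sum of order n+1 annihilates the plain polynomial part, gives
#
#     B_n(x) = (1 / (2 * n!)) * sum_{k=0}^{n+1} (-1)^k * C(n+1, k)
#                             * (x + (n+1)/2 - k)^n * sign(x + (n+1)/2 - k)
#
# which matches the coefficients (binomials C(n+1, k)) and denominators
# (2 * n!) of the generated code above.  The sketch below is illustrative
# only: _B_generic is not part of the generated family, and it assumes torch
# is imported at the top of this module as the functions above do.
def _B_generic(n):
    from math import comb, factorial

    def B(x):
        total = 0
        for k in range(n + 2):
            t = x + (n + 1) / 2 - k
            # float() avoids int64 overflow for the huge exact coefficients
            total = total + (-1) ** k * float(comb(n + 1, k)) * t ** n * torch.sign(t)
        return total / float(2 * factorial(n))
    return B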
if __name__ == '__main__':
    # Quick demo: B(n) (the dispatcher presumably defined earlier in this
    # file) returns the degree-n spline, which is evaluated on its support
    # grid with scale s and shift dx.
    n = 3
    s = 1
    dx = 0.2
    Bfunc = B(n)
    xlist = B_supp_grid(n, s, dx, True)
    print(B_supp(n, s, dx))
    print(xlist)
    print(Bfunc((xlist - dx) / s))
| 77.463733 | 121 | 0.403375 | 13,456 | 95,048 | 2.840294 | 0.064507 | 0.078443 | 0.026688 | 0.014966 | 0.878961 | 0.873702 | 0.87098 | 0.870248 | 0.870248 | 0.861875 | 0 | 0.337481 | 0.393801 | 95,048 | 1,226 | 122 | 77.526917 | 0.325818 | 0.03008 | 0 | 0.102 | 1 | 0 | 0.001665 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.105 | false | 0 | 0.001 | 0.051 | 0.211 | 0.003 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 9 |
311f0ab65809c6252a66a56d7c44e9a9233fd5c1 | 92 | py | Python | doajtest/fixtures/v2/__init__.py | gaybro8777/doaj | 27d9d98ce4f496ae52acbaba6ee8e42c84cf1a58 | [
"Apache-2.0"
] | 47 | 2015-04-24T13:13:39.000Z | 2022-03-06T03:22:42.000Z | doajtest/fixtures/v2/__init__.py | gaybro8777/doaj | 27d9d98ce4f496ae52acbaba6ee8e42c84cf1a58 | [
"Apache-2.0"
] | 1,215 | 2015-01-02T14:29:38.000Z | 2022-03-28T14:19:13.000Z | doajtest/fixtures/v2/__init__.py | gaybro8777/doaj | 27d9d98ce4f496ae52acbaba6ee8e42c84cf1a58 | [
"Apache-2.0"
] | 14 | 2015-11-27T13:01:23.000Z | 2021-05-21T07:57:23.000Z | from doajtest.fixtures.v2.applications import *
from doajtest.fixtures.v2.journals import *
| 30.666667 | 47 | 0.826087 | 12 | 92 | 6.333333 | 0.583333 | 0.315789 | 0.526316 | 0.578947 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.02381 | 0.086957 | 92 | 2 | 48 | 46 | 0.880952 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 8 |
314f6e7f0dce384a60dee80155b391d3627d3eb9 | 34,138 | py | Python | content/test/gpu/gpu_tests/webgl2_conformance_expectations.py | Wzzzx/chromium-crosswalk | 768dde8efa71169f1c1113ca6ef322f1e8c9e7de | [
"BSD-3-Clause-No-Nuclear-License-2014",
"BSD-3-Clause"
] | 2 | 2019-01-28T08:09:58.000Z | 2021-11-15T15:32:10.000Z | content/test/gpu/gpu_tests/webgl2_conformance_expectations.py | Wzzzx/chromium-crosswalk | 768dde8efa71169f1c1113ca6ef322f1e8c9e7de | [
"BSD-3-Clause-No-Nuclear-License-2014",
"BSD-3-Clause"
] | null | null | null | content/test/gpu/gpu_tests/webgl2_conformance_expectations.py | Wzzzx/chromium-crosswalk | 768dde8efa71169f1c1113ca6ef322f1e8c9e7de | [
"BSD-3-Clause-No-Nuclear-License-2014",
"BSD-3-Clause"
] | 6 | 2020-09-23T08:56:12.000Z | 2021-11-18T03:40:49.000Z | # Copyright (c) 2015 The Chromium Authors. All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.
from gpu_tests.webgl_conformance_expectations import WebGLConformanceExpectations
# See the GpuTestExpectations class for documentation.
class WebGL2ConformanceExpectations(WebGLConformanceExpectations):
def __init__(self, conformance_path):
super(WebGL2ConformanceExpectations, self).__init__(conformance_path)
def SetExpectations(self):
# ===================================
# Extension availability expectations
# ===================================
# It's expected that not all extensions will be available on all platforms.
# Having a test listed here is not necessarily a problem.
self.Fail('WebglExtension.WEBGL_compressed_texture_astc',
['win', 'mac', 'linux'])
self.Fail('WebglExtension.WEBGL_compressed_texture_atc',
['win', 'mac', 'linux'])
self.Fail('WebglExtension.WEBGL_compressed_texture_etc1',
['mac', 'linux'])
self.Fail('WebglExtension.WEBGL_compressed_texture_pvrtc',
['win', 'mac', 'linux'])
# ========================
# Conformance expectations
# ========================
# All platforms.
# Too slow (take about one hour to run)
self.Skip('deqp/functional/gles3/builtinprecision/*.html', bug=619403)
self.Fail('deqp/functional/gles3/framebufferblit/*.html', bug=483282)
self.Fail('deqp/data/gles3/shaders/linkage.html', bug=601821)
self.Fail('deqp/functional/gles3/shaderoperator/*.html', bug=483282)
self.Flaky('deqp/functional/gles3/negativefragmentapi.html', bug=604794)
self.Fail('deqp/data/gles3/shaders/preprocessor.html', bug=483282)
self.Fail('conformance2/glsl3/forbidden-operators.html', bug=483282)
self.Flaky('conformance2/query/occlusion-query.html', bug=603168)
    # Avoid a conflict with a Mac expectation by restricting this failure
    # to the d3d9, d3d11 and opengl backends.
self.Fail('conformance2/textures/misc/tex-input-validation.html',
['d3d9', 'd3d11', 'opengl'], bug=483282)
# All platforms with AMD GPU.
self.Fail('deqp/functional/gles3/multisample.html',
['amd'], bug=617290)
# Windows only.
self.Fail('deqp/functional/gles3/texturespecification/' +
'basic_copyteximage2d.html',
['win'], bug=483282)
self.Fail('deqp/functional/gles3/transformfeedback/*.html',
['win'], bug=483282)
self.Fail('deqp/functional/gles3/negativetextureapi.html',
['win'], bug=483282)
self.Fail('deqp/functional/gles3/shaderloop_for.html',
['win'], bug=617817)
self.Fail('deqp/functional/gles3/shaderloop_while.html',
['win'], bug=617817)
self.Fail('deqp/functional/gles3/shaderloop_do_while.html',
['win'], bug=617817)
self.Fail('deqp/functional/gles3/shadertexturefunction/texturelod.html',
['win'], bug=483282)
self.Fail('deqp/functional/gles3/shadertexturefunction/' +
'texturelodoffset.html',
['win'], bug=483282)
self.Fail('deqp/functional/gles3/shadertexturefunction/' +
'textureprojlod.html',
['win'], bug=483282)
self.Fail('deqp/functional/gles3/shadertexturefunction/' +
'textureprojlodoffset.html',
['win'], bug=483282)
self.Fail('deqp/functional/gles3/shadertexturefunction/texturegrad.html',
['win'], bug=483282)
self.Fail('deqp/functional/gles3/shadertexturefunction/' +
'texturegradoffset.html',
['win'], bug=483282)
self.Fail('deqp/functional/gles3/shadertexturefunction/' +
'textureprojgrad.html',
['win'], bug=483282)
self.Fail('deqp/functional/gles3/shadertexturefunction/' +
'textureprojgradoffset.html',
['win'], bug=483282)
self.Fail('deqp/functional/gles3/textureshadow/2d*',
['win'], bug=483282)
self.Fail('deqp/functional/gles3/textureshadow/cube*',
['win'], bug=483282)
self.Fail('conformance2/glsl3/array-in-complex-expression.html',
['win'], bug=483282)
self.Skip('conformance2/reading/read-pixels-pack-parameters.html',
['win'], bug=483282)
self.Skip('conformance2/reading/read-pixels-into-pixel-pack-buffer.html',
['win'], bug=1266) # angle bug ID
self.Fail('conformance2/state/gl-object-get-calls.html',
['win'], bug=483282)
self.Fail('deqp/functional/gles3/fbomultisample*',
['win'], bug=483282)
self.Fail('deqp/functional/gles3/fboinvalidate/sub.html',
['win'], bug=483282)
self.Fail('deqp/functional/gles3/fboinvalidate/whole.html',
['win'], bug=624506)
# Windows 8 only.
self.Fail('conformance2/reading/read-pixels-from-fbo-test.html',
['win8'], bug=483282)
self.Flaky('deqp/functional/gles3/buffercopy.html', ['win8'], bug=587601)
    # Windows Debug. These tests cause assertions in the GPU process which
    # raise a dialog box, so they have to be skipped rather than marked as
    # failing.
self.Skip('conformance2/textures/canvas/' +
'tex-2d-rgba8-rgba-unsigned_byte.html',
['win', 'debug'], bug=542901)
# Win / NVidia
self.Fail('deqp/functional/gles3/textureformat/compressed_cube.html',
['win', 'nvidia'], bug=614573)
# Win / AMD
# It's unfortunate that this suppression needs to be so broad, but
# basically any test that uses readPixels is potentially flaky, and
# it's infeasible to suppress individual failures one by one.
self.Flaky('conformance2/*', ['win', ('amd', 0x6779)], bug=491419)
self.Flaky('deqp/*', ['win', ('amd', 0x6779)], bug=491419)
self.Fail('deqp/functional/gles3/texturespecification/' +
'texstorage2d_format_depth_stencil.html',
['win', ('amd', 0x6779)], bug=614178)
self.Fail('deqp/functional/gles3/texturespecification/' +
'texstorage3d_format_depth_stencil.html',
['win', ('amd', 0x6779)], bug=614178)
self.Fail('deqp/functional/gles3/textureformat/compressed_cube.html',
['win', ('amd', 0x6779)], bug=614573)
self.Fail('deqp/functional/gles3/shadertexturefunction/texture.html',
['win', ('amd', 0x6779)], bug=483282)
self.Fail('deqp/functional/gles3/shadertexturefunction/' +
'texelfetchoffset.html',
['win', ('amd', 0x6779)], bug=483282)
self.Fail('deqp/functional/gles3/shadertexturefunction/texturesize.html',
['win', ('amd', 0x6779)], bug=483282)
self.Fail('deqp/functional/gles3/shadercommonfunction.html',
['win', ('amd', 0x6779)], bug=621201)
self.Fail('deqp/functional/gles3/fragmentoutput/array.int.html',
['win', ('amd', 0x6779)], bug=483282)
self.Fail('deqp/functional/gles3/fragmentoutput/array.uint.html',
['win', ('amd', 0x6779)], bug=483282)
self.Fail('deqp/functional/gles3/fragmentoutput/random_00.html',
['win', ('amd', 0x6779)], bug=483282)
self.Fail('deqp/functional/gles3/fragmentoutput/random_01.html',
['win', ('amd', 0x6779)], bug=483282)
self.Fail('deqp/functional/gles3/fragmentoutput/random_02.html',
['win', ('amd', 0x6779)], bug=483282)
# Win / Intel
self.Fail('conformance2/buffers/uniform-buffers.html',
['win', 'intel'], bug=483282)
self.Skip('conformance2/textures/misc/copy-texture-image.html',
['win', 'intel'], bug=617449)
self.Fail('deqp/functional/gles3/shaderderivate_*',
['win', 'intel'], bug=483282)
self.Fail('deqp/functional/gles3/shaderstruct.html',
['win', 'intel'], bug=483282)
self.Fail('deqp/functional/gles3/texturespecification/' +
'teximage3d_depth.html',
['win', 'intel'], bug=614418)
self.Skip('deqp/functional/gles3/texturespecification/' +
'teximage3d_depth_pbo.html',
['win', 'intel'], bug=617449)
self.Flaky('deqp/functional/gles3/lifetime.html',
['win', 'intel'], bug=620379)
self.Fail('deqp/functional/gles3/texturespecification/' +
'texsubimage3d_depth.html',
['win', 'intel'], bug=614418)
self.Fail('deqp/functional/gles3/texturespecification/' +
'texstorage3d_format_depth_stencil.html',
['win', 'intel'], bug=614418)
self.Fail('deqp/functional/gles3/textureformat/sized_color_3d_pot_00.html',
['win', 'intel'], bug=614418)
self.Fail('deqp/functional/gles3/textureformat/sized_color_3d_pot_02.html',
['win', 'intel'], bug=614418)
self.Fail('deqp/functional/gles3/textureformat/sized_color_3d_pot_03.html',
['win', 'intel'], bug=614418)
self.Fail('deqp/functional/gles3/textureformat/sized_depth_stencil.html',
['win', 'intel'], bug=614418)
self.Fail('deqp/functional/gles3/textureformat/compressed_cube.html',
['win', 'intel'], bug=614418)
self.Fail('deqp/functional/gles3/shadertexturefunction/texture.html',
['win', 'intel'], bug=483282)
self.Fail('deqp/functional/gles3/shadertexturefunction/' +
'texelfetchoffset.html',
['win', 'intel'], bug=483282)
self.Fail('deqp/functional/gles3/shadertexturefunction/texturesize.html',
['win', 'intel'], bug=483282)
self.Fail('conformance2/textures/misc/tex-unpack-params.html',
['win', 'intel'], bug=483282)
self.Fail('deqp/functional/gles3/uniformbuffers/*.html',
['win', 'intel'], bug=483282)
self.Fail('deqp/functional/gles3/fragmentoutput/array.int.html',
['win', 'intel'], bug=483282)
self.Fail('deqp/functional/gles3/fragmentoutput/array.uint.html',
['win', 'intel'], bug=483282)
self.Fail('deqp/functional/gles3/fragmentoutput/basic.int.html',
['win', 'intel'], bug=483282)
self.Fail('deqp/functional/gles3/fragmentoutput/basic.uint.html',
['win', 'intel'], bug=483282)
self.Fail('deqp/functional/gles3/fragmentoutput/random_00.html',
['win', 'intel'], bug=483282)
self.Fail('deqp/functional/gles3/fragmentoutput/random_01.html',
['win', 'intel'], bug=483282)
self.Fail('deqp/functional/gles3/fragmentoutput/random_02.html',
['win', 'intel'], bug=483282)
# Mac only.
self.Flaky('deqp/functional/gles3/shaderindexing/varying.html',
['mac'], bug=619264)
self.Fail('deqp/functional/gles3/shaderloop_do_while.html',
['mac'], bug=617820)
self.Fail('deqp/functional/gles3/texturespecification/' +
'basic_copyteximage2d.html',
['mac'], bug=620067)
self.Fail('deqp/functional/gles3/fragmentoutput/*.html',
['mac'], bug=483282)
# This one's flaky on AMD, NVIDIA and Intel GPUs, but the
# GPU-specific expectations aren't working properly.
self.Fail('deqp/functional/gles3/shaderpackingfunction.html',
['mac'], bug=619264)
self.Fail('deqp/functional/gles3/uniformbuffers/random.html',
['mac'], bug=618464)
self.Fail('deqp/functional/gles3/textureformat/compressed_2d.html',
['mac'], bug=612205)
self.Fail('deqp/functional/gles3/textureformat/compressed_cube.html',
['mac'], bug=612205)
self.Fail('deqp/functional/gles3/texturewrap/e*',
['mac'], bug=612205)
self.Fail('deqp/data/gles3/shaders/qualification_order.html',
['mac'], bug=483282)
self.Fail('deqp/data/gles3/shaders/scoping.html',
['mac'], bug=483282)
self.Fail('deqp/functional/gles3/pixelbufferobject.html',
['mac'], bug=483282)
self.Fail('deqp/functional/gles3/negativeshaderapi.html',
['mac'], bug=483282)
self.Fail('conformance2/textures/misc/compressed-tex-image.html',
['mac'], bug=565438)
self.Fail('conformance2/textures/misc/tex-new-formats.html',
['mac'], bug=483282)
self.Fail('conformance2/textures/misc/tex-storage-compressed-formats.html',
['mac'], bug=295792)
self.Fail('conformance2/renderbuffers/framebuffer-test.html',
['mac'], bug=483282)
self.Fail('conformance2/rendering/framebuffer-completeness-unaffected.html',
['mac'], bug=604053)
self.Fail('deqp/functional/gles3/instancedrendering.html',
['mac'], bug=483282)
self.Fail('deqp/functional/gles3/negativetextureapi.html',
['mac'], bug=483282)
self.Fail('deqp/functional/gles3/fbomultisample*',
['mac'], bug=483282)
self.Fail('deqp/functional/gles3/fbocolorbuffer/clear.html',
['mac'], bug=483282)
self.Fail('deqp/functional/gles3/fborender/recreate_color_02.html',
['mac'], bug=483282)
self.Fail('deqp/functional/gles3/fborender/resize_01.html',
['mac'], bug=483282)
# Mac Retina NVIDIA
self.Fail('conformance2/textures/misc/tex-input-validation.html',
['mac', ('nvidia', 0xfe9), 'no_angle'], bug=483282)
self.Fail('conformance2/textures/misc/tex-mipmap-levels.html',
['mac', ('nvidia', 0xfe9)], bug=483282)
self.Fail('deqp/functional/gles3/shaderstruct.html',
['mac', ('nvidia', 0xfe9)], bug=483282)
self.Fail('deqp/functional/gles3/shaderswitch.html',
['mac', ('nvidia', 0xfe9)], bug=483282)
self.Fail('deqp/functional/gles3/negativevertexarrayapi.html',
['mac', ('nvidia', 0xfe9)], bug=483282)
self.Fail('deqp/functional/gles3/fbocompleteness.html',
['mac', ('nvidia', 0xfe9)], bug=616562)
self.Fail('deqp/functional/gles3/negativebufferapi.html',
['mac', ('nvidia', 0xfe9)], bug=483282)
self.Fail('deqp/functional/gles3/texturespecification/' +
'teximage2d_pbo_2d_00.html',
['mac', ('nvidia', 0xfe9)], bug=614174)
self.Fail('deqp/functional/gles3/texturespecification/' +
'teximage2d_pbo_2d_01.html',
['mac', ('nvidia', 0xfe9)], bug=614174)
self.Fail('deqp/functional/gles3/texturespecification/' +
'texsubimage2d_pbo_2d_00.html',
['mac', ('nvidia', 0xfe9)], bug=614174)
self.Fail('deqp/functional/gles3/texturespecification/' +
'texsubimage2d_pbo_2d_01.html',
['mac', ('nvidia', 0xfe9)], bug=614174)
self.Fail('deqp/functional/gles3/texturespecification/' +
'texsubimage2d_pbo_cube_00.html',
['mac', ('nvidia', 0xfe9)], bug=614174)
self.Fail('deqp/functional/gles3/texturespecification/' +
'texsubimage2d_pbo_cube_01.html',
['mac', ('nvidia', 0xfe9)], bug=614174)
self.Fail('deqp/functional/gles3/texturespecification/' +
'texsubimage2d_pbo_cube_02.html',
['mac', ('nvidia', 0xfe9)], bug=614174)
self.Fail('deqp/functional/gles3/texturespecification/' +
'texsubimage2d_pbo_cube_03.html',
['mac', ('nvidia', 0xfe9)], bug=614174)
self.Fail('deqp/functional/gles3/texturespecification/' +
'texsubimage2d_pbo_cube_04.html',
['mac', ('nvidia', 0xfe9)], bug=614174)
self.Fail('deqp/functional/gles3/texturespecification/' +
'teximage3d_pbo_2d_array_00.html',
['mac', ('nvidia', 0xfe9)], bug=614174)
self.Fail('deqp/functional/gles3/texturespecification/' +
'teximage3d_pbo_2d_array_01.html',
['mac', ('nvidia', 0xfe9)], bug=614174)
self.Fail('deqp/functional/gles3/texturespecification/' +
'teximage3d_pbo_3d_00.html',
['mac', ('nvidia', 0xfe9)], bug=614174)
self.Fail('deqp/functional/gles3/texturespecification/' +
'teximage3d_pbo_3d_01.html',
['mac', ('nvidia', 0xfe9)], bug=614174)
self.Fail('deqp/functional/gles3/texturespecification/' +
'texsubimage3d_pbo_3d_00.html',
['mac', ('nvidia', 0xfe9)], bug=614174)
self.Fail('deqp/functional/gles3/texturespecification/' +
'texsubimage3d_pbo_3d_01.html',
['mac', ('nvidia', 0xfe9)], bug=614174)
self.Fail('deqp/functional/gles3/shadertexturefunction/' +
'texturelod.html',
['mac', ('nvidia', 0xfe9)], bug=483282)
self.Fail('deqp/functional/gles3/fbocolorbuffer/' +
'tex2d_05.html',
['mac', ('nvidia', 0xfe9)], bug=483282)
self.Fail('deqp/functional/gles3/fbocolorbuffer/' +
'tex2darray_05.html',
['mac', ('nvidia', 0xfe9)], bug=483282)
self.Fail('deqp/functional/gles3/fbocolorbuffer/' +
'tex3d_05.html',
['mac', ('nvidia', 0xfe9)], bug=483282)
self.Fail('deqp/functional/gles3/fbocolorbuffer/' +
'texcube_05.html',
['mac', ('nvidia', 0xfe9)], bug=483282)
self.Fail('deqp/functional/gles3/fbocolorbuffer/' +
'blend.html',
['mac', ('nvidia', 0xfe9)], bug=483282)
self.Fail('deqp/functional/gles3/draw/' +
'draw_arrays.html',
['mac', ('nvidia', 0xfe9)], bug=483282)
self.Fail('deqp/functional/gles3/draw/' +
'draw_arrays_instanced.html',
['mac', ('nvidia', 0xfe9)], bug=483282)
self.Fail('deqp/functional/gles3/draw/' +
'draw_elements.html',
['mac', ('nvidia', 0xfe9)], bug=483282)
self.Fail('deqp/functional/gles3/draw/' +
'draw_elements_instanced.html',
['mac', ('nvidia', 0xfe9)], bug=483282)
self.Fail('deqp/functional/gles3/draw/' +
'draw_range_elements.html',
['mac', ('nvidia', 0xfe9)], bug=483282)
self.Fail('conformance2/rendering/draw-buffers.html',
['mac', ('nvidia', 0xfe9)], bug=617410)
self.Fail('deqp/functional/gles3/fboinvalidate/format_02.html',
['mac', ('nvidia', 0xfe9)], bug=483282)
# Mac AMD
self.Fail('deqp/functional/gles3/clipping.html',
['mac', 'amd'], bug=483282)
self.Fail('deqp/functional/gles3/primitiverestart/00.html',
['mac', 'amd'], bug=598930)
self.Fail('deqp/functional/gles3/primitiverestart/01.html',
['mac', 'amd'], bug=598930)
self.Fail('deqp/functional/gles3/shadercommonfunction.html',
['mac', 'amd'], bug=483282)
self.Fail('deqp/functional/gles3/transformfeedback/*.html',
['mac', 'amd'], bug=483282)
self.Fail('deqp/functional/gles3/shadertexturefunction/' +
'texturesize.html',
['mac', 'amd'], bug=483282)
self.Fail('deqp/functional/gles3/shadertexturefunction/' +
'textureprojlodoffset.html',
['mac', 'amd'], bug=483282)
self.Fail('deqp/functional/gles3/shadertexturefunction/' +
'texturelod.html',
['mac', 'amd'], bug=483282)
self.Fail('deqp/functional/gles3/shadertexturefunction/' +
'textureprojlod.html',
['mac', 'amd'], bug=483282)
self.Fail('deqp/functional/gles3/vertexarrays/' +
'single_attribute.normalize.html',
['mac', 'amd'], bug=483282)
# Mac Intel
self.Fail('conformance2/textures/misc/tex-unpack-params.html',
['mac', 'intel', 'no_angle'], bug=483282)
self.Fail('deqp/functional/gles3/shadercommonfunction.html',
['mac', 'intel'], bug=483282)
self.Fail('deqp/functional/gles3/shaderderivate_*',
['mac', 'intel'], bug=483282)
self.Fail('deqp/functional/gles3/transformfeedback/*.html',
['mac', 'intel'], bug=483282)
self.Fail('deqp/functional/gles3/texturefiltering/2d_combinations_01.html',
['mac', 'intel'], bug=606074)
self.Fail('deqp/functional/gles3/texturefiltering/' +
'cube_combinations_01.html',
['mac', 'intel'], bug=606074)
self.Fail('deqp/functional/gles3/texturefiltering/' +
'2d_array_combinations_01.html',
['mac', 'intel'], bug=606074)
self.Fail('deqp/functional/gles3/texturefiltering/3d_combinations_06.html',
['mac', 'intel'], bug=606074)
self.Fail('deqp/functional/gles3/texturefiltering/3d_combinations_07.html',
['mac', 'intel'], bug=606074)
self.Fail('deqp/functional/gles3/texturefiltering/3d_combinations_08.html',
['mac', 'intel'], bug=606074)
self.Fail('deqp/functional/gles3/texturespecification/' +
'random_teximage2d_2d.html',
['mac', 'intel'], bug=483282)
self.Fail('deqp/functional/gles3/texturespecification/' +
'teximage3d_pbo_params.html',
['mac', 'intel'], bug=483282)
self.Fail('deqp/functional/gles3/texturespecification/' +
'texsubimage3d_pbo_params.html',
['mac', 'intel'], bug=483282)
self.Fail('deqp/functional/gles3/shadertexturefunction/' +
'texture.html',
['mac', 'intel'], bug=483282)
self.Fail('deqp/functional/gles3/shadertexturefunction/' +
'texturelod.html',
['mac', 'intel'], bug=483282)
self.Fail('deqp/functional/gles3/shadertexturefunction/' +
'texturegrad.html',
['mac', 'intel'], bug=483282)
self.Fail('deqp/functional/gles3/shadertexturefunction/' +
'textureprojgrad.html',
['mac', 'intel'], bug=483282)
self.Fail('deqp/functional/gles3/shadertexturefunction/' +
'texelfetchoffset.html',
['mac', 'intel'], bug=483282)
self.Fail('deqp/functional/gles3/shadertexturefunction/' +
'texturesize.html',
['mac', 'intel'], bug=483282)
self.Fail('deqp/functional/gles3/textureformat/sized_color_cube_*.html',
['mac', 'intel'], bug=612205)
self.Fail('conformance2/reading/read-pixels-from-fbo-test.html',
['mac', 'intel'], bug=483282)
# Linux only.
self.Fail('deqp/data/gles3/shaders/functions.html',
['linux'], bug=483282)
self.Fail('conformance2/glsl3/vector-dynamic-indexing.html',
['linux'], bug=483282)
self.Fail('deqp/functional/gles3/fbodepthbuffer.html',
['linux'], bug=483282)
# Behavior difference between GL compatibility profile and ES3.
self.Fail('conformance2/rendering/draw-buffers.html',
['linux'], bug=617410)
self.Skip('deqp/data/gles3/shaders/qualification_order.html',
['linux', 'amd', 'intel'], bug=483282)
self.Fail('deqp/functional/gles3/clipping.html',
['linux', 'amd', 'intel'], bug=483282)
self.Flaky('deqp/functional/gles3/texturespecification/' +
'random_teximage2d_2d.html',
['linux'], bug=618447)
self.Fail('deqp/functional/gles3/texturespecification/' +
'random_teximage2d_cube.html',
['linux'], bug=483282)
self.Fail('deqp/functional/gles3/fboinvalidate/whole.html',
['linux'], bug=624506)
# Linux NVIDIA only.
self.Fail('conformance2/glsl3/array-complex-indexing.html',
['linux', 'nvidia', 'no_angle'], bug=606498)
self.Fail('deqp/functional/gles3/uniformapi/random.html',
['linux', 'nvidia'], bug=621178)
# Linux NVIDIA with ANGLE only
self.Fail('deqp/functional/gles3/buffercopy.html',
['linux', 'nvidia', 'opengl'], bug=483282)
self.Fail('deqp/functional/gles3/bufferobjectquery.html',
['linux', 'nvidia', 'opengl'], bug=483282)
self.Fail('conformance2/reading/read-pixels-pack-parameters.html',
['linux', 'nvidia', 'opengl'], bug=483282)
self.Fail('conformance2/transform_feedback/transform_feedback.html',
['linux', 'nvidia', 'opengl'], bug=483282)
self.Fail('deqp/functional/gles3/transformfeedback/*.html',
['linux', 'nvidia', 'opengl'], bug=618408)
self.Fail('deqp/functional/gles3/shadercommonfunction.html',
['linux', 'nvidia', 'opengl'], bug=618408)
self.Fail('deqp/functional/gles3/shaderbuiltinvar.html',
['linux', 'nvidia', 'opengl'], bug=483282)
self.Fail('deqp/functional/gles3/shaderpackingfunction.html',
['linux', 'nvidia', 'opengl'], bug=483282)
self.Fail('conformance2/buffers/bound-buffer-size-change-test.html',
['linux', 'nvidia', 'opengl'], bug=483282)
self.Fail('conformance2/textures/misc/tex-unpack-params.html',
['linux', 'nvidia', 'opengl'], bug=483282)
# Linux Intel with ANGLE only
self.Fail('deqp/functional/gles3/pixelbufferobject.html',
['linux', 'intel', 'opengl'], bug=483282)
self.Fail('deqp/functional/gles3/shaderderivate_*',
['linux', 'intel', 'opengl'], bug=618408)
self.Fail('deqp/functional/gles3/fragmentoutput/*.html',
['linux', 'intel', 'opengl'], bug=483282)
# The Mesa Intel driver has a scoping bug, see
# https://bugs.freedesktop.org/show_bug.cgi?id=95184
self.Fail('deqp/data/gles3/shaders/scoping.html',
['linux', 'intel'], bug=610800)
# Linux AMD only.
    # It looks like the AMD shader compiler rejects many valid ES3 semantics.
self.Fail('deqp/data/gles3/shaders/conversions.html',
['linux', 'amd'], bug=483282)
self.Skip('deqp/data/gles3/shaders/arrays.html',
['linux', 'amd'], bug=483282)
self.Fail('deqp/functional/gles3/internalformatquery.html',
['linux', 'amd'], bug=483282)
self.Fail('deqp/functional/gles3/texturestatequery.html',
['linux', 'amd'], bug=483282)
self.Fail('deqp/functional/gles3/buffercopy.html',
['linux', 'amd'], bug=483282)
self.Fail('deqp/functional/gles3/samplerobject.html',
['linux', 'amd'], bug=483282)
self.Fail('deqp/functional/gles3/shaderprecision*.html',
['linux', 'amd'], bug=483282)
self.Fail('deqp/functional/gles3/texturefiltering/3d*',
['linux', 'amd'], bug=606114)
self.Fail('deqp/functional/gles3/fbocompleteness.html',
['linux', 'amd'], bug=483282)
self.Fail('deqp/functional/gles3/lifetime.html',
['linux', 'amd'], bug=483282)
self.Fail('deqp/functional/gles3/shadertexturefunction/texture.html',
['linux', 'amd'], bug=483282)
self.Fail('deqp/functional/gles3/shadertexturefunction/' +
'textureprojoffset.html',
['linux', 'amd'], bug=483282)
self.Fail('deqp/functional/gles3/shadertexturefunction/' +
'textureprojlodoffset.html',
['linux', 'amd'], bug=483282)
self.Fail('deqp/functional/gles3/shadertexturefunction/texturegrad.html',
['linux', 'amd'], bug=483282)
self.Fail('deqp/functional/gles3/shadertexturefunction/' +
'texelfetchoffset.html',
['linux', 'amd'], bug=483282)
self.Fail('deqp/functional/gles3/instancedrendering.html',
['linux', 'amd'], bug=483282)
self.Fail('deqp/functional/gles3/negativetextureapi.html',
['linux', 'amd'], bug=483282)
self.Fail('deqp/functional/gles3/transformfeedback/*.html',
['linux', 'amd'], bug=483282)
self.Fail('deqp/functional/gles3/uniformbuffers/random.html',
['linux', 'amd'], bug=483282)
self.Fail('conformance2/misc/uninitialized-test-2.html',
['linux', 'amd'], bug=483282)
self.Fail('conformance2/reading/read-pixels-pack-parameters.html',
['linux', 'amd'], bug=483282)
self.Fail('conformance2/reading/read-pixels-into-pixel-pack-buffer.html',
['linux', 'amd'], bug=483282)
self.Fail('conformance2/renderbuffers/framebuffer-texture-layer.html',
['linux', 'amd'], bug=295792)
self.Fail('conformance2/textures/misc/tex-mipmap-levels.html',
['linux', 'amd'], bug=483282)
self.Fail('conformance2/textures/misc/tex-unpack-params.html',
['linux', 'amd'], bug=483282)
self.Fail('deqp/functional/gles3/texturespecification/' +
'teximage2d_pbo_cube_00.html',
['linux', 'amd'], bug=483282)
self.Fail('deqp/functional/gles3/texturespecification/' +
'teximage2d_pbo_cube_01.html',
['linux', 'amd'], bug=483282)
self.Fail('deqp/functional/gles3/texturespecification/' +
'teximage2d_pbo_cube_02.html',
['linux', 'amd'], bug=483282)
self.Fail('deqp/functional/gles3/texturespecification/' +
'teximage2d_pbo_cube_03.html',
['linux', 'amd'], bug=483282)
self.Fail('deqp/functional/gles3/texturespecification/' +
'teximage2d_pbo_cube_04.html',
['linux', 'amd'], bug=483282)
self.Fail('deqp/functional/gles3/texturespecification/' +
'teximage2d_pbo_params.html',
['linux', 'amd'], bug=483282)
self.Fail('deqp/functional/gles3/texturespecification/' +
'teximage2d_depth_pbo.html',
['linux', 'amd'], bug=483282)
self.Fail('deqp/functional/gles3/texturespecification/' +
'basic_copyteximage2d.html',
['linux', 'amd'], bug=483282)
self.Fail('deqp/functional/gles3/texturespecification/' +
'basic_teximage3d_3d_00.html',
['linux', 'amd'], bug=483282)
self.Fail('deqp/functional/gles3/texturespecification/' +
'basic_teximage3d_3d_01.html',
['linux', 'amd'], bug=483282)
self.Fail('deqp/functional/gles3/texturespecification/' +
'basic_teximage3d_3d_02.html',
['linux', 'amd'], bug=483282)
self.Fail('deqp/functional/gles3/texturespecification/' +
'basic_teximage3d_3d_03.html',
['linux', 'amd'], bug=483282)
self.Fail('deqp/functional/gles3/texturespecification/' +
'basic_teximage3d_3d_04.html',
['linux', 'amd'], bug=483282)
self.Fail('deqp/functional/gles3/texturespecification/' +
'texstorage2d_format_depth_stencil.html',
['linux', 'amd'], bug=483282)
self.Fail('deqp/functional/gles3/texturespecification/' +
'texstorage3d_format_2d_array_00.html',
['linux', 'amd'], bug=483282)
self.Fail('deqp/functional/gles3/texturespecification/' +
'texstorage3d_format_2d_array_01.html',
['linux', 'amd'], bug=483282)
self.Fail('deqp/functional/gles3/texturespecification/' +
'texstorage3d_format_2d_array_02.html',
['linux', 'amd'], bug=483282)
self.Fail('deqp/functional/gles3/texturespecification/' +
'texstorage3d_format_3d_00.html',
['linux', 'amd'], bug=483282)
self.Fail('deqp/functional/gles3/texturespecification/' +
'texstorage3d_format_3d_01.html',
['linux', 'amd'], bug=483282)
self.Fail('deqp/functional/gles3/texturespecification/' +
'texstorage3d_format_3d_02.html',
['linux', 'amd'], bug=483282)
self.Fail('deqp/functional/gles3/texturespecification/' +
'texstorage3d_format_3d_03.html',
['linux', 'amd'], bug=483282)
self.Fail('deqp/functional/gles3/texturespecification/' +
'texstorage3d_format_depth_stencil.html',
['linux', 'amd'], bug=483282)
self.Fail('deqp/functional/gles3/texturespecification/' +
'texstorage3d_format_size.html',
['linux', 'amd'], bug=483282)
self.Fail('deqp/functional/gles3/uniformbuffers/single_struct_array.html',
['linux', 'amd'], bug=483282)
self.Fail('deqp/functional/gles3/uniformbuffers/single_nested_struct.html',
['linux', 'amd'], bug=483282)
self.Fail('deqp/functional/gles3/uniformbuffers/' +
'single_nested_struct_array.html',
['linux', 'amd'], bug=483282)
self.Fail('deqp/functional/gles3/uniformbuffers/multi_basic_types.html',
['linux', 'amd'], bug=483282)
self.Fail('deqp/functional/gles3/uniformbuffers/multi_nested_struct.html',
['linux', 'amd'], bug=483282)
self.Fail('conformance2/reading/read-pixels-from-fbo-test.html',
['linux', 'amd'], bug=483282)
self.Fail('deqp/functional/gles3/vertexarrays/' +
'single_attribute.output_type.unsigned_int.html',
['linux', 'amd'], bug=483282)
self.Fail('deqp/functional/gles3/draw/*.html',
['linux', 'amd'], bug=483282)
self.Fail('deqp/functional/gles3/fbomultisample*',
['linux', 'amd'], bug=483282)
self.Fail('deqp/functional/gles3/fragmentoutput/*.html',
['linux', 'amd'], bug=483282)
self.Fail('deqp/functional/gles3/textureshadow/*.html',
['linux', 'amd'], bug=483282)
self.Fail('deqp/functional/gles3/shadermatrix/mul_dynamic_highp.html',
['linux', 'amd'], bug=483282)
self.Fail('deqp/functional/gles3/shadermatrix/mul_dynamic_lowp.html',
['linux', 'amd'], bug=483282)
self.Fail('deqp/functional/gles3/shadermatrix/mul_dynamic_mediump.html',
['linux', 'amd'], bug=483282)
self.Fail('deqp/functional/gles3/shadermatrix/pre_decrement.html',
['linux', 'amd'], bug=483282)
# Conflicting expectations to test that the
# "Expectations Have No collisions" unittest works.
# page_name = 'conformance/glsl/constructors/glsl-construct-ivec4.html'
# Conflict when all conditions match
# self.Fail(page_name,
# ['linux', ('nvidia', 0x1), 'debug', 'opengl'])
# self.Fail(page_name,
# ['linux', ('nvidia', 0x1), 'debug', 'opengl'])
# Conflict when all conditions match (and different sets)
# self.Fail(page_name,
# ['linux', 'win', ('nvidia', 0x1), 'debug', 'opengl'])
# self.Fail(page_name,
# ['linux', 'mac', ('nvidia', 0x1), 'amd', 'debug', 'opengl'])
# Conflict with one aspect not specified
# self.Fail(page_name,
# ['linux', ('nvidia', 0x1), 'debug'])
# self.Fail(page_name,
# ['linux', ('nvidia', 0x1), 'debug', 'opengl'])
# Conflict with one aspect not specified (in both conditions)
# self.Fail(page_name,
# ['linux', ('nvidia', 0x1), 'debug'])
# self.Fail(page_name,
# ['linux', ('nvidia', 0x1), 'debug'])
# Conflict even if the GPU is specified in a device ID
# self.Fail(page_name,
# ['linux', ('nvidia', 0x1), 'debug'])
# self.Fail(page_name,
# ['linux', 'nvidia', 'debug'])
# Test there are no conflicts between two different devices
# self.Fail(page_name,
# ['linux', ('nvidia', 0x1), 'debug'])
# self.Fail(page_name,
# ['linux', ('nvidia', 0x2), 'debug'])
# Test there are no conflicts between two devices with different vendors
# self.Fail(page_name,
# ['linux', ('nvidia', 0x1), 'debug'])
# self.Fail(page_name,
# ['linux', ('amd', 0x1), 'debug'])
    # Conflicts if one condition names a specific device and the other
    # leaves the GPU vendor unspecified
# self.Fail(page_name,
# ['linux', ('nvidia', 0x1), 'debug'])
# self.Fail(page_name,
# ['linux', 'debug'])
# Test no conflicts happen when only one aspect differs
# self.Fail(page_name,
# ['linux', ('nvidia', 0x1), 'debug', 'opengl'])
# self.Fail(page_name,
# ['win', ('nvidia', 0x1), 'debug', 'opengl'])
    # Conflicts between a generic os condition and a specific version
# self.Fail(page_name,
# ['xp', ('nvidia', 0x1), 'debug', 'opengl'])
# self.Fail(page_name,
# ['win', ('nvidia', 0x1), 'debug', 'opengl'])
| 43.992268 | 81 | 0.645615 | 3,829 | 34,138 | 5.682424 | 0.11439 | 0.0967 | 0.181634 | 0.203236 | 0.840151 | 0.810828 | 0.775347 | 0.737246 | 0.68476 | 0.589622 | 0 | 0.077887 | 0.176724 | 34,138 | 775 | 82 | 44.049032 | 0.696282 | 0.105337 | 0 | 0.677205 | 0 | 0 | 0.522763 | 0.450335 | 0 | 0 | 0.008507 | 0 | 0 | 1 | 0.003328 | false | 0 | 0.001664 | 0 | 0.006656 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
731bae45ddff863de2afc90061d95af4cf81cdf7 | 53,867 | py | Python | pytests/gsi/collections_indexes_rebalance.py | sumedhpb/testrunner | 9ff887231c75571624abc31a3fb5248110e01203 | [
"Apache-2.0"
] | 14 | 2015-02-06T02:47:57.000Z | 2020-03-14T15:06:05.000Z | pytests/gsi/collections_indexes_rebalance.py | sumedhpb/testrunner | 9ff887231c75571624abc31a3fb5248110e01203 | [
"Apache-2.0"
] | 3 | 2019-02-27T19:29:11.000Z | 2021-06-02T02:14:27.000Z | pytests/gsi/collections_indexes_rebalance.py | sumedhpb/testrunner | 9ff887231c75571624abc31a3fb5248110e01203 | [
"Apache-2.0"
] | 108 | 2015-03-26T08:58:49.000Z | 2022-03-21T05:21:39.000Z | """collections_indexes_rebalance.py: Test cases for GSI with rebalance
__author__ = "Hemant Rajput"
__maintainer__ = "Hemant Rajput"
__email__ = "Hemant.Rajput@couchbase.com"
__git_user__ = "hrajput89"
__created_on__ = "14/10/20 1:10 pm"
"""
import re
from concurrent.futures import ThreadPoolExecutor
from itertools import combinations, chain
from couchbase_helper.documentgenerator import SDKDataLoader
from couchbase_helper.query_definitions import QueryDefinition
from membase.api.rest_client import RestConnection, RestHelper
from remote.remote_util import RemoteMachineShellConnection
from .base_gsi import BaseSecondaryIndexingTests, ConCurIndexOps
from collection.collections_rest_client import CollectionsRest
from collection.collections_stats import CollectionsStats
from tasks.taskmanager import TaskManager
class CollectionIndexesRebalance(BaseSecondaryIndexingTests):
def setUp(self):
super(CollectionIndexesRebalance, self).setUp()
self.log.info("============== ConcurrentIndexes setup has started ==============")
self.rest.delete_all_buckets()
self.num_concurrent_indexes = self.input.param("num_concurrent_indexes", 10)
self.num_scopes = self.input.param("num_scopes", 1)
self.num_collections = self.input.param("num_collections", 1)
self.test_bucket = self.input.param('test_bucket', 'test_bucket')
self.num_of_indexes = self.input.param('num_of_indexes', 1)
self.services_in = self.input.param('services_in', None)
self.server_out = self.input.param('server_out', None)
self.bucket_params = self._create_bucket_params(server=self.master, size=100,
replicas=self.num_replicas, bucket_type=self.bucket_type,
enable_replica_index=self.enable_replica_index,
eviction_policy=self.eviction_policy, lww=self.lww)
self.cluster.create_standard_bucket(name=self.test_bucket, port=11222,
bucket_params=self.bucket_params)
self.buckets = self.rest.get_buckets()
self.cli_rest = CollectionsRest(self.master)
self.stat = CollectionsStats(self.master)
self.scope_prefix = 'test_scope'
self.collection_prefix = 'test_collection'
self.run_cbq_query = self.n1ql_helper.run_cbq_query
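        # err_msg1..err_msg4 are substrings of the errors returned for a
        # CREATE INDEX that is deferred or retried; the tests below match on
        # them to detect indexes scheduled for background creation.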
self.err_msg1 = 'The index is scheduled for background creation'
self.err_msg2 = 'Index creation will be retried in background'
self.err_msg3 = 'will retry building in the background for reason: Build Already In Progress.'
self.err_msg4 = 'Create index or Alter replica cannot proceed due to another concurrent create index request'
self.system_query = "select * from system:indexes"
self.log.info("============== ConcurrentIndexes setup has completed ==============")
def tearDown(self):
self.log.info("============== ConcurrentIndexes tearDown has started ==============")
super(CollectionIndexesRebalance, self).tearDown()
self.log.info("============== ConcurrentIndexes tearDown has completed ==============")
def suite_tearDown(self):
pass
def suite_setUp(self):
pass
def test_multiple_type_indexes_with_rebalance(self):
unique_index_type_per_collection = 8
num_of_docs = self.num_of_docs_per_collection
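        # redistribute_indexes allows the rebalancer to move existing indexes
        # onto newly added index nodes instead of leaving placement as is.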
redistribute = {"indexer.settings.rebalance.redistribute_indexes": True}
self.index_rest.set_index_settings(redistribute)
self.run_tasks = True
self.index_ops_obj = ConCurIndexOps()
self.index_create_task_manager = TaskManager("index_create_task_manager")
self.index_create_task_manager.start()
self.n1ql_nodes = self.get_nodes_from_services_map(service_type="n1ql", get_all_nodes=True)
self.prepare_collection_for_indexing(num_of_docs_per_collection=num_of_docs, num_scopes=self.num_scopes,
num_collections=self.num_collections, json_template='Employee')
self.update_keyspace_list(bucket=self.test_bucket)
        index_create_tasks = self.create_indexes(num=3 * 3 * unique_index_type_per_collection * 2,
query_def_group="unique")
for task in index_create_tasks:
task.result()
result = self.wait_until_indexes_online()
if not result:
self.log.error("Indexes status got timed out. Check logs or increase timeout")
before_rebalance_index_meta_info = self.rest.get_indexer_metadata()['status']
for index in before_rebalance_index_meta_info:
            self.assertEqual(index['status'], 'Ready')
for index_to_scan in self.index_ops_obj.get_create_index_list():
self.log.info(f'Processing index: {index_to_scan["name"]}')
query_def = index_to_scan["query_def"]
query = query_def.generate_query(bucket=query_def.keyspace)
try:
result = self.run_cbq_query(query=query)['results'][0]
self.assertTrue(result)
except Exception as err:
self.fail(f'{query} failed with {err}')
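        # Swap rebalance: servers[2] joins with the index service while
        # servers[1] leaves, so the indexes it hosts must be moved.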
add_nodes = [self.servers[2]]
remove_nodes = [self.servers[1]]
rebalance_task = self.cluster.async_rebalance(servers=self.servers[:self.nodes_init],
to_add=add_nodes,
to_remove=remove_nodes,
services=['index', 'index'])
self.sleep(5)
rebalance_task.result()
rebalance_status = RestHelper(self.rest).rebalance_reached()
self.assertTrue(rebalance_status, "rebalance failed, stuck or did not complete")
self.sleep(5)
after_rebalance_index_meta_info = self.rest.get_indexer_metadata()['status']
self.assertEqual(len(before_rebalance_index_meta_info), len(after_rebalance_index_meta_info))
for index in after_rebalance_index_meta_info:
            self.assertEqual(index['status'], 'Ready')
for index_to_scan in self.index_ops_obj.get_create_index_list():
self.log.info(f'Processing index: {index_to_scan["name"]}')
query_def = index_to_scan["query_def"]
query = query_def.generate_query(bucket=query_def.keyspace)
try:
result = self.run_cbq_query(query=query)['results'][0]
self.assertTrue(result)
except Exception as err:
self.fail(f'{query} failed with {err}')
def test_schedule_index_drop_during_rebalance(self):
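        # With background index creation disabled, creations beyond the one
        # in progress stay 'scheduled', so the concurrent drops below also
        # exercise dropping indexes that were never built.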
schedule_index_disable = {"indexer.debug.enableBackgroundIndexCreation": False}
self.rest.set_index_settings(schedule_index_disable)
redistribute = {"indexer.settings.rebalance.redistribute_indexes": True}
self.rest.set_index_settings(redistribute)
num_of_docs = 10 ** 4
self.prepare_collection_for_indexing(num_of_docs_per_collection=num_of_docs, num_scopes=self.num_scopes,
num_collections=self.num_collections)
idx_prefix = 'idx'
index_gen_list = []
index_gen_query_list = []
drop_index_queries = []
regex_pattern = re.compile('.*?Index creation for index (.*?),.*')
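        # Illustrative only: for an error message along the lines of
        #   "... Index creation for index idx_3, bucket test_bucket ..."
        # the capture group in regex_pattern yields the scheduled index
        # name 'idx_3' (the message text here is a hypothetical example).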
index_field_list = ['age', 'city', 'country', 'title', 'firstName', 'lastName', 'streetAddress',
'suffix', 'filler1', 'phone']
for index_fields, idx_num in zip(index_field_list, range(10)):
for collection_namespace in self.namespaces:
index_gen = QueryDefinition(index_name=f'{idx_prefix}_{idx_num}', index_fields=[index_fields])
index_gen_list.append(index_gen)
query = index_gen.generate_index_create_query(namespace=collection_namespace,
defer_build=self.defer_build)
drop_query = index_gen.generate_index_drop_query(namespace=collection_namespace)
index_gen_query_list.append(query)
drop_index_queries.append(drop_query)
tasks = []
rebalance_flag = False
cqueries_before_rebalance = index_gen_query_list[0:15]
cqueries_during_rebalance = index_gen_query_list[15:]
dqueries_during_rebalance = drop_index_queries[0:15]
dqueries_after_rebalance = drop_index_queries[15:]
with ThreadPoolExecutor() as executor:
for query in cqueries_before_rebalance:
task = executor.submit(self.run_cbq_query, query=query)
tasks.append(task)
for task in tasks:
try:
result = task.result()
self.log.info(result)
except Exception as err:
if self.err_msg1 in str(err):
out = re.search(regex_pattern, str(err))
index_name = out.groups()[0]
self.log.info(f"{index_name} is scheduled for background")
add_nodes = [self.servers[2]]
remove_nodes = [self.servers[1]]
if not rebalance_flag:
self.sleep(30)
rebalance_task = self.cluster.async_rebalance(servers=self.servers[:self.nodes_init],
to_add=add_nodes,
to_remove=remove_nodes,
services=['index', 'index'])
self.sleep(5)
# creating indexes during rebalance operation
for query, drop_query in zip(cqueries_during_rebalance, dqueries_during_rebalance):
task = executor.submit(self.run_cbq_query, query=query)
tasks.append(task)
task = executor.submit(self.run_cbq_query, query=drop_query)
tasks.append(task)
result = rebalance_task.result()
rebalance_status = RestHelper(self.rest).rebalance_reached()
self.assertTrue(rebalance_status, "rebalance failed, stuck or did not complete")
rebalance_flag = True
elif self.err_msg2 in str(err):
continue
else:
self.log.info(err)
for query in dqueries_after_rebalance:
task = executor.submit(self.run_cbq_query, query=query)
tasks.append(task)
for task in tasks:
try:
task.result()
                except Exception:
                    # drop/create races are expected here; errors are ignored
                    pass
self.sleep(10)
result = self.wait_until_indexes_online(timeout=60)
if not result:
self.log.error("Timed out while checking for index status. Check index logs")
index_metadata = self.rest.get_indexer_metadata()
self.log.info(f"Index Metadata: {index_metadata}")
system_indexes = self.run_cbq_query(query=self.system_query)['results']
self.assertFalse(system_indexes)
def test_schedule_index_create_during_rebalance(self):
schedule_index_disable = {"indexer.debug.enableBackgroundIndexCreation": False}
self.rest.set_index_settings(schedule_index_disable)
redistribute = {"indexer.settings.rebalance.redistribute_indexes": True}
self.rest.set_index_settings(redistribute)
num_of_docs = 10 ** 4
self.prepare_collection_for_indexing(num_of_docs_per_collection=num_of_docs, num_scopes=3, num_collections=1)
idx_prefix = 'idx'
index_gen_list = []
index_gen_query_list = []
drop_index_queries = []
regex_pattern = re.compile('.*?Index creation for index (.*?),.*')
index_field_list = ['age', 'city', 'country', 'title', 'firstName', 'lastName', 'streetAddress',
'suffix', 'filler1', 'phone']
for index_fields, idx_num in zip(index_field_list, range(10)):
for collection_namespace in self.namespaces:
index_gen = QueryDefinition(index_name=f'{idx_prefix}_{idx_num}', index_fields=[index_fields])
index_gen_list.append(index_gen)
query = index_gen.generate_index_create_query(namespace=collection_namespace,
defer_build=self.defer_build)
drop_query = index_gen.generate_index_drop_query(namespace=collection_namespace)
index_gen_query_list.append(query)
drop_index_queries.append(drop_query)
tasks = []
rebalance_flag = False
with ThreadPoolExecutor() as executor:
queries_before_rebalance = index_gen_query_list[0:10]
queries_during_rebalance = index_gen_query_list[10:20]
queries_after_rebalance = index_gen_query_list[20:]
for query in queries_before_rebalance:
task = executor.submit(self.run_cbq_query, query=query)
tasks.append(task)
for task in tasks:
try:
result = task.result()
self.log.info(result)
except Exception as err:
if self.err_msg1 in str(err):
out = re.search(regex_pattern, str(err))
index_name = out.groups()[0]
self.log.info(f"{index_name} is scheduled for background")
add_nodes = [self.servers[2]]
remove_nodes = [self.servers[1]]
if not rebalance_flag:
self.sleep(10)
rebalance_task = self.cluster.async_rebalance(servers=self.servers[:self.nodes_init], to_add=add_nodes,
to_remove=remove_nodes, services=['index', 'index'])
self.sleep(5)
# creating indexes during rebalance operation
for query in queries_during_rebalance:
task = executor.submit(self.run_cbq_query, query=query)
tasks.append(task)
result = rebalance_task.result()
rebalance_status = RestHelper(self.rest).rebalance_reached()
self.assertTrue(rebalance_status, "rebalance failed, stuck or did not complete")
rebalance_flag = True
elif self.err_msg2 in str(err):
continue
else:
self.log.info(err)
schedule_index_enable = {"indexer.debug.enableBackgroundIndexCreation": True}
self.rest.set_index_settings(schedule_index_enable)
for query in queries_after_rebalance:
task = executor.submit(self.run_cbq_query, query=query)
tasks.append(task)
for task in tasks:
try:
task.result()
except Exception as err:
self.log.info(err)
self.sleep(10)
result = self.wait_until_indexes_online()
if not result:
self.log.error("Timed out while checking for index status. Check index logs")
index_meta_info = self.rest.get_indexer_metadata()['status']
for index in index_meta_info:
self.assertTrue(self.servers[1].ip not in index['hosts'][0])
self.assertEqual(index['status'], 'Ready',
f"Index {index['name']} for scope:{index['scope']} and "
f"collection:{index['collection']} status is not matching with expected value.")
def test_concurrent_indexes_with_failedover_nodes(self):
"""
https://issues.couchbase.com/browse/MB-43442
:return:
"""
num_retries_for_failed_index = self.input.param("num_retries_for_failed_index", 1)
doc = {"indexer.scheduleCreateRetries": num_retries_for_failed_index}
self.rest.set_index_settings(doc)
num_of_docs = 10 ** 4
self.prepare_collection_for_indexing(num_of_docs_per_collection=num_of_docs)
collection_namespace = self.namespaces[0]
_, keyspace = collection_namespace.split(':')
bucket, scope, collection = keyspace.split('.')
idx_prefix = 'idx'
index_gen_list = []
index_gen_query_list = []
regex_pattern = re.compile('.*?Index creation for index (.*?),.*')
index_field_list = ['age', 'city', 'country', 'title', 'firstName', 'lastName', 'streetAddress',
'suffix', 'filler1', 'phone']
for index_fields, idx_num in zip(index_field_list, range(10)):
index_gen = QueryDefinition(index_name=f'{idx_prefix}_{idx_num}', index_fields=[index_fields])
index_gen_list.append(index_gen)
query = index_gen.generate_index_create_query(namespace=collection_namespace, defer_build=self.defer_build,
num_replica=1)
index_gen_query_list.append(query)
tasks = []
failover_flag = False
with ThreadPoolExecutor() as executor:
for count, query in enumerate(index_gen_query_list):
task = executor.submit(self.run_cbq_query, query=query)
tasks.append(task)
for task in tasks:
try:
result = task.result()
self.log.info(result)
except Exception as err:
if self.err_msg1 in str(err):
out = re.search(regex_pattern, str(err))
index_name = out.groups()[0]
self.log.info(f"{index_name} is scheduled for background")
self.sleep(5)
node_out = self.servers[2]
if not failover_flag:
failover_task = self.cluster.async_failover(
self.servers[:self.nodes_init],
[node_out],
self.graceful, wait_for_pending=180)
failover_task.result()
failover_flag = True
elif self.err_msg2 in str(err) or self.err_msg3 in str(err) or self.err_msg4 in str(err):
continue
else:
self.fail(err)
self.wait_until_indexes_online()
index_meta_info = self.rest.get_indexer_metadata()['status']
for index in index_meta_info:
self.log.info(index)
def test_rebalance_redistribution_with_rebalance_in(self):
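        # With index redistribution enabled, build replicated indexes, then
        # rebalance in a new indexer node and verify that some indexes get
        # redistributed onto it.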
redistribute = {"indexer.settings.rebalance.redistribute_indexes": True}
self.rest.set_index_settings(redistribute)
num_of_docs = 10 ** 4
self.prepare_collection_for_indexing(num_of_docs_per_collection=num_of_docs)
collection_namespace = self.namespaces[0]
idx_prefix = 'idx'
index_gen_list = []
index_gen_query_list = []
regex_pattern = re.compile('.*?Index creation for index (.*?),.*')
index_field_list = ['age', 'city', 'country', 'title', 'firstName', 'lastName', 'streetAddress',
'suffix', 'filler1', 'phone']
for index_fields, idx_num in zip(index_field_list, range(10)):
index_gen = QueryDefinition(index_name=f'{idx_prefix}_{idx_num}', index_fields=[index_fields])
index_gen_list.append(index_gen)
query = index_gen.generate_index_create_query(namespace=collection_namespace, defer_build=self.defer_build,
num_replica=1)
index_gen_query_list.append(query)
tasks = []
with ThreadPoolExecutor() as executor:
for count, query in enumerate(index_gen_query_list):
task = executor.submit(self.run_cbq_query, query=query)
tasks.append(task)
for task in tasks:
try:
result = task.result()
self.log.info(result)
except Exception as err:
if self.err_msg1 in str(err):
out = re.search(regex_pattern, str(err))
index_name = out.groups()[0]
self.log.info(f"{index_name} is scheduled for background")
elif self.err_msg2 in str(err) or self.err_msg3 in str(err) or self.err_msg4 in str(err):
continue
else:
self.fail(err)
self.sleep(30, "Waiting before checking for index status")
self.wait_until_indexes_online()
index_meta_info = self.rest.get_indexer_metadata()['status']
self.log.info(f"Index Metadata: {index_meta_info}")
self.assertEqual(len(index_meta_info), 10 * (self.num_replicas + 1))
index_hosts = set()
for index in index_meta_info:
host = index['hosts'][0]
index_hosts.add(host.split(':')[0])
self.log.info("Swaping in Indexer node C")
add_nodes = [self.servers[2]]
task = self.cluster.async_rebalance(servers=self.servers[:self.nodes_init], to_add=add_nodes,
to_remove=[], services=['index', 'index'])
task.result()
rebalance_status = RestHelper(self.rest).rebalance_reached()
self.assertTrue(rebalance_status, "rebalance failed, stuck or did not complete")
self.sleep(30, "Waiting before checking for index status")
self.wait_until_indexes_online()
index_meta_info = self.rest.get_indexer_metadata()['status']
index_hosts = set()
for index in index_meta_info:
host = index['hosts'][0]
index_hosts.add(host.split(':')[0])
self.assertTrue(self.servers[2].ip in index_hosts)
def test_rebalance_in_of_nodes_with_failed_rebalance(self):
"""
https://issues.couchbase.com/browse/MB-43664
:return:
"""
redistribute = {"indexer.settings.rebalance.redistribute_indexes": True}
self.rest.set_index_settings(redistribute)
num_of_docs = 10 ** 5
self.prepare_collection_for_indexing(num_of_docs_per_collection=num_of_docs)
collection_namespace = self.namespaces[0]
idx_prefix = 'idx'
index_gen_list = []
index_gen_query_list = []
regex_pattern = re.compile('.*?Index creation for index (.*?),.*')
index_field_list = ['age', 'city', 'country', 'title', 'firstName', 'lastName', 'streetAddress',
'suffix', 'filler1']
index_lists = []
for index_fields, idx_num in zip(index_field_list, range(10)):
index_name = f'{idx_prefix}_{idx_num}'
index_gen = QueryDefinition(index_name=index_name, index_fields=[index_fields])
index_gen_list.append(index_gen)
query = index_gen.generate_index_create_query(namespace=collection_namespace, defer_build=self.defer_build,
num_replica=1)
index_gen_query_list.append(query)
index_lists.append(index_name)
tasks = []
with ThreadPoolExecutor() as executor:
for count, query in enumerate(index_gen_query_list):
task = executor.submit(self.run_cbq_query, query=query)
tasks.append(task)
for task in tasks:
try:
result = task.result()
self.log.info(result)
except Exception as err:
if self.err_msg1 in str(err):
out = re.search(regex_pattern, str(err))
index_name = out.groups()[0]
self.log.info(f"{index_name} is scheduled for background")
elif self.err_msg2 in str(err) or self.err_msg4 in str(err):
continue
else:
self.fail(err)
self.wait_until_indexes_online()
index_meta_info = self.rest.get_indexer_metadata()['status']
self.assertEqual(len(index_meta_info), len(index_field_list) * (self.num_replicas + 1))
self.log.info('Starting Rebalance In process')
add_nodes = [self.servers[2]]
task = self.cluster.async_rebalance(servers=self.servers[:self.nodes_init], to_add=add_nodes,
to_remove=[], services=['index', 'index'])
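        # Wait until the rebalance is actually running, then restart the
        # incoming node to force the rebalance to fail mid-flight.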
while self.rest._rebalance_progress_status() != "running":
self.sleep(5, "Allowing some time for rebalance to make progress")
self.stop_server(self.servers[2])
self.sleep(5)
self.start_server(self.servers[2])
try:
task.result()
except Exception as err:
self.log.info(err)
self.wait_until_indexes_online()
self.sleep(10)
index_meta_info = self.rest.get_indexer_metadata()['status']
        self.assertEqual(len(index_lists) * 2, len(index_meta_info),
                         "One or more indexes are missing after the failed rebalance")
for index_field in index_field_list:
query = f"select count(*) from {collection_namespace} where {index_field} is not null"
count = self.run_cbq_query(query=query)['results'][0]['$1']
            self.assertEqual(count, num_of_docs, "Number of indexed docs does not match after rebalance")
task = self.cluster.async_rebalance(servers=self.servers[:self.nodes_init], to_add=[], to_remove=[])
task.result()
rebalance_status = RestHelper(self.rest).rebalance_reached()
self.assertTrue(rebalance_status, "rebalance failed, stuck or did not complete")
self.wait_until_indexes_online()
index_meta_info = self.rest.get_indexer_metadata()['status']
for index_field in index_field_list:
query = f"select count(*) from {collection_namespace} where {index_field} is not null"
count = self.run_cbq_query(query=query)['results'][0]['$1']
            self.assertEqual(count, num_of_docs, "Number of indexed docs does not match after rebalance")
index_hosts = set()
for index in index_meta_info:
host = index['hosts'][0]
index_hosts.add(host.split(':')[0])
self.assertTrue(self.servers[2].ip in index_hosts)
def test_rebalance_out_of_nodes_with_failed_rebalance(self):
num_of_docs = 10 ** 5
self.prepare_collection_for_indexing(num_of_docs_per_collection=num_of_docs)
collection_namespace = self.namespaces[0]
idx_prefix = 'idx'
index_gen_list = []
index_gen_query_list = []
regex_pattern = re.compile('.*?Index creation for index (.*?),.*')
index_field_list = ['age', 'city', 'country', 'title', 'firstName', 'lastName', 'streetAddress',
'suffix', 'filler1']
index_lists = []
for index_fields, idx_num in zip(index_field_list, range(10)):
index_name = f'{idx_prefix}_{idx_num}'
index_gen = QueryDefinition(index_name=index_name, index_fields=[index_fields])
index_gen_list.append(index_gen)
query = index_gen.generate_index_create_query(namespace=collection_namespace, defer_build=self.defer_build,
num_replica=1)
index_gen_query_list.append(query)
index_lists.append(index_name)
tasks = []
with ThreadPoolExecutor() as executor:
for count, query in enumerate(index_gen_query_list):
task = executor.submit(self.run_cbq_query, query=query)
tasks.append(task)
for task in tasks:
try:
result = task.result()
self.log.info(result)
except Exception as err:
if self.err_msg1 in str(err):
out = re.search(regex_pattern, str(err))
index_name = out.groups()[0]
self.log.info(f"{index_name} is scheduled for background")
elif self.err_msg2 in str(err) or self.err_msg4 in str(err):
continue
else:
self.fail(err)
self.wait_until_indexes_online()
index_meta_info = self.rest.get_indexer_metadata()['status']
self.assertEqual(len(index_meta_info), len(index_field_list) * (self.num_replicas + 1))
self.log.info('Starting Rebalance out process')
remove_nodes = [self.servers[2]]
task = self.cluster.async_rebalance(servers=self.servers[:self.nodes_init], to_add=[],
to_remove=remove_nodes)
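        # Once the rebalance-out is running, restart the outgoing node so
        # the rebalance fails part-way through.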
while self.rest._rebalance_progress_status() != "running":
self.sleep(5, "Allowing some time for rebalance to make progress")
self.stop_server(self.servers[2])
self.sleep(5)
self.start_server(self.servers[2])
try:
task.result()
except Exception as err:
self.log.info(err)
self.wait_until_indexes_online()
index_meta_info = self.rest.get_indexer_metadata()['status']
        self.assertEqual(len(index_lists) * 2, len(index_meta_info),
                         "One or more indexes are missing after the failed rebalance")
for index_field in index_field_list:
query = f"select count(*) from {collection_namespace} where {index_field} is not null"
count = self.run_cbq_query(query=query)['results'][0]['$1']
            self.assertEqual(count, num_of_docs, "Number of indexed docs does not match after rebalance")
task = self.cluster.async_rebalance(servers=self.servers[:self.nodes_init], to_add=[], to_remove=remove_nodes)
task.result()
rebalance_status = RestHelper(self.rest).rebalance_reached()
self.assertTrue(rebalance_status, "rebalance failed, stuck or did not complete")
self.wait_until_indexes_online()
index_meta_info = self.rest.get_indexer_metadata()['status']
        self.assertEqual(len(index_meta_info), len(index_field_list) * 2, "Number of indexes does not match the expected value")
for index_field in index_field_list:
query = f"select count(*) from {collection_namespace} where {index_field} is not null"
count = self.run_cbq_query(query=query)['results'][0]['$1']
            self.assertEqual(count, num_of_docs, "Number of indexed docs does not match after rebalance")
index_hosts = set()
for index in index_meta_info:
host = index['hosts'][0]
index_hosts.add(host.split(':')[0])
self.assertTrue(self.servers[2].ip not in index_hosts)
def test_rebalance_swap_of_nodes_with_failed_rebalance(self):
num_of_docs = 10 ** 5
self.prepare_collection_for_indexing(num_of_docs_per_collection=num_of_docs)
collection_namespace = self.namespaces[0]
idx_prefix = 'idx'
index_gen_list = []
index_gen_query_list = []
regex_pattern = re.compile('.*?Index creation for index (.*?),.*')
index_field_list = ['age', 'city', 'country', 'title', 'firstName', 'lastName', 'streetAddress',
'suffix', 'filler1']
index_lists = []
for index_fields, idx_num in zip(index_field_list, range(10)):
index_name = f'{idx_prefix}_{idx_num}'
index_gen = QueryDefinition(index_name=index_name, index_fields=[index_fields])
index_gen_list.append(index_gen)
query = index_gen.generate_index_create_query(namespace=collection_namespace, defer_build=self.defer_build,
num_replica=1)
index_gen_query_list.append(query)
index_lists.append(index_name)
tasks = []
with ThreadPoolExecutor() as executor:
for count, query in enumerate(index_gen_query_list):
task = executor.submit(self.run_cbq_query, query=query)
tasks.append(task)
for task in tasks:
try:
result = task.result()
self.log.info(result)
except Exception as err:
if self.err_msg1 in str(err):
out = re.search(regex_pattern, str(err))
index_name = out.groups()[0]
self.log.info(f"{index_name} is scheduled for background")
elif self.err_msg2 in str(err):
continue
else:
self.log.error(err)
self.wait_until_indexes_online()
index_meta_info = self.rest.get_indexer_metadata()['status']
self.assertEqual(len(index_meta_info), len(index_field_list) * (self.num_replicas + 1))
self.log.info('Starting Rebalance Swap process')
add_nodes = [self.servers[2]]
remove_nodes = [self.servers[1]]
task = self.cluster.async_rebalance(servers=self.servers[:self.nodes_init], to_add=add_nodes,
to_remove=remove_nodes, services=['index', 'index'])
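        # Once the swap rebalance is running, restart the incoming node so
        # the rebalance fails part-way through.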
while self.rest._rebalance_progress_status() != "running":
self.sleep(5, "Allowing some time for rebalance to make progress")
self.stop_server(self.servers[2])
self.sleep(5)
self.start_server(self.servers[2])
try:
task.result()
except Exception as err:
self.log.info(err)
self.wait_until_indexes_online()
self.sleep(5)
index_meta_info = self.rest.get_indexer_metadata()['status']
        self.assertEqual(len(index_lists) * 2, len(index_meta_info),
                         "One or more indexes are missing after the failed rebalance")
for index_field in index_field_list:
query = f"select count(*) from {collection_namespace} where {index_field} is not null"
count = self.run_cbq_query(query=query)['results'][0]['$1']
            self.assertEqual(count, num_of_docs, "Number of indexed docs does not match after rebalance")
task = self.cluster.async_rebalance(servers=self.servers[:self.nodes_init], to_add=[], to_remove=remove_nodes)
task.result()
rebalance_status = RestHelper(self.rest).rebalance_reached()
self.assertTrue(rebalance_status, "rebalance failed, stuck or did not complete")
self.wait_until_indexes_online()
index_meta_info = self.rest.get_indexer_metadata()['status']
        self.assertEqual(len(index_meta_info), len(index_field_list) * 2, "Number of indexes does not match the expected value")
for index_field in index_field_list:
query = f"select count(*) from {collection_namespace} where {index_field} is not null"
count = self.run_cbq_query(query=query)['results'][0]['$1']
            self.assertEqual(count, num_of_docs, "Number of indexed docs does not match after rebalance")
index_hosts = set()
for index in index_meta_info:
host = index['hosts'][0]
index_hosts.add(host.split(':')[0])
self.assertTrue(self.servers[1].ip not in index_hosts)
def test_rebalance_in_with_incomplete_rebalance(self):
redistribute = {"indexer.settings.rebalance.redistribute_indexes": True}
self.rest.set_index_settings(redistribute)
num_of_docs = 10 ** 5
self.prepare_collection_for_indexing(num_of_docs_per_collection=num_of_docs)
collection_namespace = self.namespaces[0]
_, keyspace = collection_namespace.split(':')
bucket, scope, collection = keyspace.split('.')
idx_prefix = 'idx'
index_gen_list = []
index_gen_query_list = []
regex_pattern = re.compile('.*?Index creation for index (.*?),.*')
index_field_list = ['age', 'city', 'country', 'title', 'firstName', 'lastName', 'streetAddress',
'suffix', 'filler1']
index_lists = []
for index_fields, idx_num in zip(index_field_list, range(10)):
index_name = f'{idx_prefix}_{idx_num}'
index_gen = QueryDefinition(index_name=index_name, index_fields=[index_fields])
index_gen_list.append(index_gen)
query = index_gen.generate_index_create_query(namespace=collection_namespace, defer_build=self.defer_build,
num_replica=1)
index_gen_query_list.append(query)
index_lists.append(index_name)
tasks = []
with ThreadPoolExecutor() as executor:
for count, query in enumerate(index_gen_query_list):
task = executor.submit(self.run_cbq_query, query=query)
tasks.append(task)
for task in tasks:
try:
result = task.result()
self.log.info(result)
except Exception as err:
if self.err_msg1 in str(err):
out = re.search(regex_pattern, str(err))
index_name = out.groups()[0]
self.log.info(f"{index_name} is scheduled for background")
elif self.err_msg2 in str(err) or self.err_msg4 in str(err):
continue
else:
self.fail(err)
self.wait_until_indexes_online()
index_meta_info = self.rest.get_indexer_metadata()['status']
self.assertEqual(len(index_meta_info), len(index_field_list) * (self.num_replicas + 1))
index_hosts = set()
for index in index_meta_info:
host = index['hosts'][0]
index_hosts.add(host.split(':')[0])
self.log.info("Swaping in Indexer node C")
add_nodes = [self.servers[2]]
task = self.cluster.async_rebalance(servers=self.servers[:self.nodes_init], to_add=add_nodes,
to_remove=[], services=['index', 'index'])
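        # Let the rebalance make some progress, then stop it deliberately to
        # leave it in an incomplete state.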
self.sleep(15, "Allowing sometime for rebalance to make progress")
if self.rest._rebalance_progress_status() == "running":
self.assertTrue(self.rest.stop_rebalance(), "Failed while stopping rebalance.")
result = task.result()
self.wait_until_indexes_online()
index_meta_info = self.rest.get_indexer_metadata()['status']
        self.assertEqual(len(index_lists) * 2, len(index_meta_info),
                         "One or more indexes are missing after the in-process rebalance was cancelled")
for index in index_meta_info:
host = index['hosts'][0]
index_hosts.add(host.split(':')[0])
self.assertTrue(self.servers[2].ip in index_hosts)
for index_field in index_field_list:
query = f"select count(*) from {collection_namespace} where {index_field} is not null"
count = self.run_cbq_query(query=query)['results'][0]['$1']
            self.assertEqual(count, num_of_docs, "Number of indexed docs does not match after rebalance")
def test_rebalance_out_node_with_schedule_indexes(self):
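        # With background index creation disabled, rebalance out an indexer
        # node while index creations are pending, re-enable the setting, and
        # verify the scheduled indexes are eventually built; then rebalance a
        # node back in and check replica placement and status.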
redistribute = {"indexer.settings.rebalance.redistribute_indexes": True}
self.rest.set_index_settings(redistribute)
schedule_index_disable = {"indexer.debug.enableBackgroundIndexCreation": False}
self.rest.set_index_settings(schedule_index_disable)
num_of_docs = 10 ** 5
self.prepare_collection_for_indexing(num_of_docs_per_collection=num_of_docs)
collection_namespace = self.namespaces[0]
idx_prefix = 'idx'
index_gen_list = []
index_gen_query_list = []
regex_pattern = re.compile('.*?Index creation for index (.*?),.*')
index_field_list = ['age', 'city', 'country', 'title', 'firstName', 'lastName', 'streetAddress',
'suffix', 'filler1']
index_lists = []
for index_fields, idx_num in zip(index_field_list, range(10)):
index_name = f'{idx_prefix}_{idx_num}'
index_gen = QueryDefinition(index_name=index_name, index_fields=[index_fields])
index_gen_list.append(index_gen)
query = index_gen.generate_index_create_query(namespace=collection_namespace, defer_build=self.defer_build,
num_replica=self.num_replicas)
index_gen_query_list.append(query)
index_lists.append(index_name)
tasks = []
rebalance_flag = False
with ThreadPoolExecutor() as executor:
for count, query in enumerate(index_gen_query_list):
task = executor.submit(self.run_cbq_query, query=query)
tasks.append(task)
for task in tasks:
try:
result = task.result()
self.log.info(result)
except Exception as err:
if self.err_msg1 in str(err):
out = re.search(regex_pattern, str(err))
index_name = out.groups()[0]
self.log.info(f"{index_name} is scheduled for background")
if not rebalance_flag:
self.sleep(15)
remove_nodes = [self.servers[2]]
rebalance_task = self.cluster.async_rebalance(servers=self.servers[:self.nodes_init],
to_add=[], to_remove=remove_nodes)
result = rebalance_task.result()
self.assertTrue(result)
rebalance_status = RestHelper(self.rest).rebalance_reached()
self.assertTrue(rebalance_status, "rebalance failed, stuck or did not complete")
schedule_index_enable = {"indexer.debug.enableBackgroundIndexCreation": True}
self.rest.set_index_settings(schedule_index_enable)
rebalance_flag = True
elif self.err_msg2 in str(err):
continue
else:
self.fail(err)
self.wait_until_indexes_online(timeout=120)
index_meta_info = self.rest.get_indexer_metadata()['status']
indexes_after_rebalance_out = set()
for index in index_meta_info:
indexes_after_rebalance_out.add(index['indexName'])
self.assertEqual(len(index_field_list), len(indexes_after_rebalance_out))
add_nodes = [self.servers[3]]
rebalance_task = self.cluster.async_rebalance(servers=self.servers[:self.nodes_init],
to_add=add_nodes, to_remove=[], services=['index'])
result = rebalance_task.result()
self.assertTrue(result)
rebalance_status = RestHelper(self.rest).rebalance_reached()
self.assertTrue(rebalance_status, "rebalance failed, stuck or did not complete")
result = self.wait_until_indexes_online()
if not result:
self.log.error("Timed out while checking for index status. Check index logs")
index_meta_info = self.rest.get_indexer_metadata()['status']
self.assertEqual(len(index_meta_info), len(index_field_list) * (self.num_replicas + 1))
for index in index_meta_info:
self.assertEqual(index['status'], 'Ready')
def test_rebalance_swap_with_indexer(self):
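        # Swap indexer nodes while a concurrent SDK data load is running and
        # verify the index count is unchanged once the rebalance completes.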
num_of_docs = 10 ** 4
self.prepare_collection_for_indexing(num_of_docs_per_collection=num_of_docs)
collection_namespace = self.namespaces[0]
_, keyspace = collection_namespace.split(':')
bucket, scope, collection = keyspace.split('.')
idx_prefix = 'idx'
index_gen_list = []
index_gen_query_list = []
regex_pattern = re.compile('.*?Index creation for index (.*?),.*')
index_field_list = ['age', 'city', 'country', 'title', 'firstName', 'lastName', 'streetAddress',
'suffix', 'filler1', 'phone']
for index_fields, idx_num in zip(index_field_list, range(10)):
index_gen = QueryDefinition(index_name=f'{idx_prefix}_{idx_num}', index_fields=[index_fields])
index_gen_list.append(index_gen)
query = index_gen.generate_index_create_query(namespace=collection_namespace, defer_build=self.defer_build,
num_replica=1)
index_gen_query_list.append(query)
tasks = []
with ThreadPoolExecutor() as executor:
for count, query in enumerate(index_gen_query_list):
task = executor.submit(self.run_cbq_query, query=query)
tasks.append(task)
for task in tasks:
try:
result = task.result()
self.log.info(result)
except Exception as err:
if self.err_msg1 in str(err):
out = re.search(regex_pattern, str(err))
index_name = out.groups()[0]
self.log.info(f"{index_name} is scheduled for background")
elif self.err_msg2 in str(err) or self.err_msg3 in str(err) or self.err_msg4 in str(err):
continue
else:
self.fail(err)
self.wait_until_indexes_online()
index_meta_info = self.rest.get_indexer_metadata()['status']
self.assertEqual(len(index_meta_info), 10 * (self.num_replicas + 1))
self.log.info("Swaping out Indexer node B with C and D")
gen_create = SDKDataLoader(num_ops=10**4, percent_create=100, percent_update=0, percent_delete=0, scope=scope,
collection=collection, json_template='Person', key_prefix="new_doc_")
add_nodes = [self.servers[2]]
remove_node = [self.servers[1]]
tasks = []
tasks.append(self.cluster.async_rebalance(servers=self.servers[:self.nodes_init], to_add=add_nodes,
to_remove=remove_node, services=['index', 'index']))
tasks.extend(self.data_ops_javasdk_loader_in_batches(sdk_data_loader=gen_create, batch_size=10 ** 4))
for task in tasks:
task.result()
rebalance_status = RestHelper(self.rest).rebalance_reached()
self.assertTrue(rebalance_status, "rebalance failed, stuck or did not complete")
self.wait_until_indexes_online()
index_meta_info = self.rest.get_indexer_metadata()['status']
self.assertEqual(len(index_meta_info), 10 * (self.num_replicas + 1))
def test_rebalance_indexer_nodes_with_multiple_BSC(self):
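        # BSC here refers to bucket/scope/collection. Build indexes across
        # collections in two buckets, swap-rebalance indexer nodes under a
        # concurrent data load, then verify every index is Ready, fully
        # built and not stale.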
num_of_docs = 10 ** 4
self.rest.delete_all_buckets()
bucket_1 = 'test_bucket_1'
bucket_2 = 'test_bucket_2'
self.cluster.create_standard_bucket(name=bucket_1, port=11222, bucket_params=self.bucket_params)
self.cluster.create_standard_bucket(name=bucket_2, port=11222, bucket_params=self.bucket_params)
collection_namespaces = []
scope_prefix = 'test_scope'
collection_prefix = 'test_collection'
data_load_tasks = []
for bucket in (bucket_1, bucket_2):
for s_item in range(self.num_scopes):
scope = f'{scope_prefix}_{s_item}'
self.cli_rest.create_scope(bucket=bucket, scope=scope)
for c_item in range(self.num_collections):
collection = f'{collection_prefix}_{c_item}'
self.cli_rest.create_collection(bucket=bucket, scope=scope, collection=collection)
self.sleep(10)
gen_create = SDKDataLoader(num_ops=num_of_docs, percent_create=100,
percent_update=0, percent_delete=0, scope=scope,
collection=collection, json_template='Person')
task = self.cluster.async_load_gen_docs(self.master, bucket, gen_create, timeout_secs=300)
data_load_tasks.append(task)
collection_namespaces.append(f'default:{bucket}.{scope}.{collection}')
for task in data_load_tasks:
task.result()
idx_prefix = 'idx'
index_gen_list = []
index_gen_query_list = []
index_build_query_list = []
regex_pattern = re.compile('.*?Index creation for index (.*?),.*')
index_field_list = ['age', 'city', 'country', 'title', 'firstName', 'lastName', 'streetAddress',
'suffix', 'filler1', 'phone']
for collection_namespace in collection_namespaces:
for index_fields, idx_num in zip(index_field_list, range(self.num_of_indexes)):
index_gen = QueryDefinition(index_name=f'{idx_prefix}_{idx_num}', index_fields=[index_fields])
index_gen_list.append(index_gen)
query = index_gen.generate_index_create_query(namespace=collection_namespace,
defer_build=self.defer_build,
num_replica=1)
build_query = index_gen.generate_build_query(namespace=collection_namespace)
index_gen_query_list.append(query)
index_build_query_list.append(build_query)
tasks = []
with ThreadPoolExecutor() as executor:
for count, query in enumerate(index_gen_query_list):
task = executor.submit(self.run_cbq_query, query=query)
tasks.append(task)
for task in tasks:
try:
result = task.result()
self.log.info(result)
except Exception as err:
if self.err_msg1 in str(err):
out = re.search(regex_pattern, str(err))
index_name = out.groups()[0]
self.log.info(f"{index_name} is scheduled for background")
elif self.err_msg2 in str(err):
continue
elif self.err_msg3 in str(err):
continue
else:
self.log.info(err)
self.sleep(10, "Giving some time before checking index status")
self.wait_until_indexes_online(defer_build=self.defer_build)
tasks = []
self.log.info("Swapping out Indexer node B with C and D")
add_nodes = self.servers[2:4]
remove_node = [self.servers[1]]
tasks.append(self.cluster.async_rebalance(servers=self.servers[:self.nodes_init], to_add=add_nodes,
to_remove=remove_node, services=['index', 'index']))
for collection_namespace in collection_namespaces:
_, keyspace = collection_namespace.split(':')
bucket, scope, collection = keyspace.split('.')
gen_create = SDKDataLoader(num_ops=10 ** 3, percent_create=100, percent_update=0, percent_delete=0,
scope=scope, collection=collection, json_template='Person',
key_prefix="new_doc_")
tasks.extend(self.data_ops_javasdk_loader_in_batches(sdk_data_loader=gen_create, batch_size=10 ** 4))
for task in tasks:
try:
task.result()
except Exception as err:
self.log.error(err)
rebalance_status = RestHelper(self.rest).rebalance_reached()
self.assertTrue(rebalance_status, "rebalance failed, stuck or did not complete")
self.sleep(30, "Giving some time before checking index status")
self.wait_until_indexes_online(defer_build=self.defer_build)
if self.defer_build:
for build_query in index_build_query_list:
try:
self.run_cbq_query(query=build_query)
except Exception as err:
self.log.info(err)
self.sleep(120, "Giving some time before checking index status")
self.wait_until_indexes_online()
index_meta_info = self.rest.get_indexer_metadata()['status']
self.log.info(f"Index Metadata: {index_meta_info}")
self.assertEqual(len(index_meta_info), self.num_of_indexes * (self.num_replicas + 1) * self.num_scopes * self.num_collections * 2)
for index in index_meta_info:
self.assertEqual(index['status'], 'Ready', index['status'])
self.assertEqual(index['completion'], 100, index['completion'])
self.assertFalse(index['stale'], index['stale'])
# === function/logical_ops.py (facebookresearch/task_bench, CC0-1.0) ===
# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
from function import Function, FUNCTION_REGISTRY, WordFunction
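
# Composable logical word-functions: each class wraps one or two inner
# functions and combines their boolean outputs (and / or / not).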
class LogicalAnd(WordFunction):
"""
    Maps (bool, bool) -> bool.
"""
def __init__(self, fn_tree, inner_fns, **kwargs):
super().__init__(fn_tree=fn_tree, inner_fns=inner_fns)
assert len(self.inner_fns) <= 2
@classmethod
def get_func_name(cls):
return ['land']
def to_nl(self):
inner_nls = [inner_fn.to_nl() for inner_fn in self.inner_fns]
assert len(inner_nls) == 2
return f"{inner_nls[0]} and {inner_nls[1]}"
def __call__(self, inputs: list=None):
inputs = self.compute_inner_fns(inputs)
assert len(inputs) == 2
return {'out': inputs[0] and inputs[1], 'inner': inputs}
@classmethod
def build(cls, fn_tree, inner_fns, **kwargs):
return cls(fn_tree=fn_tree, inner_fns=inner_fns, **kwargs)
class LogicalOr(WordFunction):
"""
    Maps (bool, bool) -> bool.
"""
def __init__(self, fn_tree, inner_fns, **kwargs):
super().__init__(fn_tree=fn_tree, inner_fns=inner_fns)
assert len(self.inner_fns) <= 2
@classmethod
def get_func_name(cls):
return ['lor']
def to_nl(self):
inner_nls = [inner_fn.to_nl() for inner_fn in self.inner_fns]
assert len(inner_nls) == 2
return f"{inner_nls[0]} or {inner_nls[1]}"
def __call__(self, inputs: list=None):
inputs = self.compute_inner_fns(inputs)
assert len(inputs) == 2
return {'out': inputs[0] or inputs[1], 'inner': inputs}
@classmethod
def build(cls, fn_tree, inner_fns, **kwargs):
return cls(fn_tree=fn_tree, inner_fns=inner_fns, **kwargs)
class LogicalNot(WordFunction):
"""
    Maps bool -> bool.
"""
def __init__(self, fn_tree, inner_fns, **kwargs):
super().__init__(fn_tree=fn_tree, inner_fns=inner_fns)
self.inner_fns = inner_fns
@classmethod
def get_func_name(cls):
return ['lnot']
def __call__(self, inputs: list=None):
inputs = self.compute_inner_fns(inputs)
assert len(inputs) == 1
return {'out': not inputs[0], 'inner': inputs}
@classmethod
def build(cls, fn_tree, inner_fns, **kwargs):
return cls(fn_tree=fn_tree, inner_fns=inner_fns, **kwargs)
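
# A hypothetical usage sketch (the real `fn_tree` and inner functions come
# from the surrounding framework, not shown here):
#
#   land = LogicalAnd.build(fn_tree=tree, inner_fns=[f1, f2])
#   result = land(inputs)  # -> {'out': out1 and out2, 'inner': [out1, out2]}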
# === pybrowscap/test/loader/csv/test_browser.py (Shananra/pybrowscap, BSD-3-Clause) ===
import unittest
import os
from pybrowscap.loader.csv import load_file
BROWSCAP = load_file(os.path.join(os.path.dirname(__file__), '..', '..', 'data', 'browscap_14_05_2012.csv'))
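
# The browscap CSV is loaded once at module import; each test below searches
# it with a fixed user-agent string and inspects the returned browser object.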
class TestBrowserFirefox(unittest.TestCase):
user_agent = 'Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.2.18) Gecko/20110628 Ubuntu/10.10 (maverick) Firefox/3.6.18'
def setUp(self):
self.browser = BROWSCAP.search(self.user_agent)
def tearDown(self):
self.browser = None
def test_items(self):
self.assertDictEqual(self.browser.items(),
{'cookies': True, 'activexcontrols': False, 'aolversion': 0.0, 'frames': True,
'cssversion': 0.0, 'majorver': 3, 'tables': True, 'iframes': True, 'vbscript': False,
'comments': 'Firefox 3.6', 'platform_version': 0.0, 'platform': 'Linux', 'version': 3.6,
'masterparent': False, 'renderingengine_version': 0.0, 'javaapplets': True,
'parent': 'Firefox 3.6', 'backgroundsounds': False, 'win64': False,
'propertyname': 'Mozilla/5.0 (X11; *; *Linux*; *; rv:1.9.2*) Gecko/* Firefox/3.6*',
'javascript': True, 'beta': False, 'alpha': False,
'renderingengine_description': 'For Firefox, Camino, K-Meleon, SeaMonkey, Netscape, and other Gecko-based browsers.',
'crawler': False, 'renderingengine_name': 'Gecko', 'device_maker': '',
'platform_description': '', 'minorver': 6, 'issyndicationreader': False,
'device_name': '', 'win32': False, 'ismobiledevice': False, 'litemode': True,
'agentid': '11277', 'win16': False, 'browser': 'Firefox'})
def test_get(self):
self.assertEqual(self.browser.get('platform'), 'Linux')
self.assertEqual(self.browser.get('parent'), 'Firefox 3.6')
self.assertIsNone(self.browser.get('codescale'))
self.assertEqual(self.browser.get('codescale', ''), '')
def test_name(self):
self.assertEqual(self.browser.name(), 'Firefox')
def test_category(self):
self.assertEqual(self.browser.category(), 'Firefox 3.6')
def test_platform(self):
self.assertEqual(self.browser.platform(), 'Linux')
def test_aol_version(self):
self.assertIsInstance(self.browser.aol_version(), float)
self.assertEqual(self.browser.aol_version(), 0.0)
def test_version(self):
self.assertIsInstance(self.browser.version(), float)
self.assertEqual(self.browser.version(), 3.6)
def test_version_major(self):
self.assertIsInstance(self.browser.version_major(), int)
self.assertEqual(self.browser.version_major(), 3)
def test_version_minor(self):
self.assertIsInstance(self.browser.version_minor(), int)
self.assertEqual(self.browser.version_minor(), 6)
def test_css_version(self):
self.assertIsInstance(self.browser.css_version(), float)
self.assertEqual(self.browser.css_version(), 0.0)
def test_rendering_engine_name(self):
self.assertEqual(self.browser.rendering_engine_name(), 'Gecko')
def test_rendering_engine_version(self):
self.assertIsInstance(self.browser.rendering_engine_version(), float)
self.assertEqual(self.browser.rendering_engine_version(), 0.0)
def test_device_maker(self):
self.assertEqual(self.browser.device_maker(), '')
def test_device_name(self):
self.assertEqual(self.browser.device_name(), '')
def test_platform_description(self):
self.assertEqual(self.browser.platform_description(), '')
def test_platform_version(self):
self.assertIsInstance(self.browser.platform_version(), float)
self.assertEqual(self.browser.platform_version(), 0.0)
def test_litemode(self):
self.assertTrue(self.browser.litemode())
def test_supports(self):
self.assertTrue(self.browser.supports('tables'))
def test_supports_tables(self):
self.assertTrue(self.browser.supports_tables())
def test_supports_frames(self):
self.assertTrue(self.browser.supports_frames())
def test_supports_iframes(self):
self.assertTrue(self.browser.supports_iframes())
def test_supports_java(self):
self.assertTrue(self.browser.supports_java())
def test_supports_javascript(self):
self.assertTrue(self.browser.supports_javascript())
def test_supports_vbscript(self):
self.assertFalse(self.browser.supports_vbscript())
def test_supports_activex(self):
self.assertFalse(self.browser.supports_activex())
def test_supports_cookies(self):
self.assertTrue(self.browser.supports_cookies())
def test_supports_css(self):
self.assertFalse(self.browser.supports_css())
def test_is_crawler(self):
self.assertFalse(self.browser.is_crawler())
def test_is_mobile(self):
self.assertFalse(self.browser.is_mobile())
def test_is_syndication_reader(self):
self.assertFalse(self.browser.is_syndication_reader())
def test_is_banned(self):
self.assertIsNone(self.browser.is_banned())
def test_is_alpha(self):
self.assertFalse(self.browser.is_alpha())
def test_is_beta(self):
self.assertFalse(self.browser.is_beta())
def test_features(self):
self.assertListEqual(self.browser.features(), ['tables', 'frames', 'iframes', 'javascript', 'cookies', 'java'])
class BrowserGooglebotTest(unittest.TestCase):
user_agent = 'Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)'
def setUp(self):
self.browser = BROWSCAP.search(self.user_agent)
def tearDown(self):
self.browser = None
def test_items(self):
self.assertDictEqual(self.browser.items(),
{'cookies': False, 'activexcontrols': False, 'aolversion': 0.0, 'frames': True,
'cssversion': 0.0, 'majorver': 2, 'tables': True, 'iframes': True, 'vbscript': False,
'comments': 'Google', 'platform_version': 0.0, 'platform': '', 'version': 2.1,
'masterparent': False, 'renderingengine_version': 0.0, 'javaapplets': False,
'parent': 'Google', 'backgroundsounds': False, 'win64': False,
'propertyname': '*Googlebot/2.1*', 'javascript': False, 'beta': False,
'alpha': False, 'renderingengine_description': '', 'crawler': True,
'renderingengine_name': '', 'device_maker': '', 'platform_description': '',
'minorver': 1, 'issyndicationreader': False, 'device_name': '', 'win32': False,
'ismobiledevice': False, 'litemode': True, 'agentid': '4128', 'win16': False,
'browser': 'Googlebot'})
def test_get(self):
self.assertEqual(self.browser.get('platform'), '')
self.assertEqual(self.browser.get('parent'), 'Google')
self.assertIsNone(self.browser.get('codescale'))
self.assertEqual(self.browser.get('codescale', ''), '')
def test_name(self):
self.assertEqual(self.browser.name(), 'Googlebot')
def test_category(self):
self.assertEqual(self.browser.category(), 'Google')
def test_platform(self):
self.assertEqual(self.browser.platform(), '')
def test_aol_version(self):
self.assertIsInstance(self.browser.aol_version(), float)
self.assertEqual(self.browser.aol_version(), 0)
def test_version(self):
self.assertIsInstance(self.browser.version(), float)
self.assertEqual(self.browser.version(), 2.1)
def test_version_major(self):
self.assertIsInstance(self.browser.version_major(), int)
self.assertEqual(self.browser.version_major(), 2)
def test_version_minor(self):
self.assertIsInstance(self.browser.version_minor(), int)
self.assertEqual(self.browser.version_minor(), 1)
def test_css_version(self):
self.assertIsInstance(self.browser.css_version(), float)
self.assertEqual(self.browser.css_version(), 0.0)
def test_rendering_engine_name(self):
self.assertEqual(self.browser.rendering_engine_name(), '')
def test_rendering_engine_version(self):
self.assertIsInstance(self.browser.rendering_engine_version(), float)
self.assertEqual(self.browser.rendering_engine_version(), 0.0)
def test_device_maker(self):
self.assertEqual(self.browser.device_maker(), '')
def test_device_name(self):
self.assertEqual(self.browser.device_name(), '')
def test_platform_description(self):
self.assertEqual(self.browser.platform_description(), '')
def test_platform_version(self):
self.assertIsInstance(self.browser.platform_version(), float)
self.assertEqual(self.browser.platform_version(), 0.0)
def test_litemode(self):
self.assertTrue(self.browser.litemode())
def test_supports(self):
self.assertTrue(self.browser.supports('tables'))
def test_supports_tables(self):
self.assertTrue(self.browser.supports_tables())
def test_supports_frames(self):
self.assertTrue(self.browser.supports_frames())
def test_supports_iframes(self):
self.assertTrue(self.browser.supports_iframes())
def test_supports_java(self):
self.assertFalse(self.browser.supports_java())
def test_supports_javascript(self):
self.assertFalse(self.browser.supports_javascript())
def test_supports_vbscript(self):
self.assertFalse(self.browser.supports_vbscript())
def test_supports_activex(self):
self.assertFalse(self.browser.supports_activex())
def test_supports_cookies(self):
self.assertFalse(self.browser.supports_cookies())
def test_supports_css(self):
self.assertFalse(self.browser.supports_css())
def test_is_crawler(self):
self.assertTrue(self.browser.is_crawler())
def test_is_mobile(self):
self.assertFalse(self.browser.is_mobile())
def test_is_syndication_reader(self):
self.assertFalse(self.browser.is_syndication_reader())
def test_is_banned(self):
self.assertIsNone(self.browser.is_banned())
def test_is_alpha(self):
self.assertFalse(self.browser.is_alpha())
def test_is_beta(self):
self.assertFalse(self.browser.is_beta())
def test_features(self):
self.assertEqual(self.browser.features(), ['tables', 'frames', 'iframes'])
if __name__ == '__main__':
    unittest.main()

# === tests/test_costs.py (edwardoughton/pytal, MIT) ===
import pytest
import math
from pytal.costs import (greenfield_4g, upgrade_to_4g, greenfield_5g_nsa,
upgrade_to_5g_nsa, greenfield_5g_sa, upgrade_to_5g_sa,
get_fronthaul_costs, get_backhaul_costs, local_net_costs,
regional_net_costs, core_costs, discount_opex,
discount_capex_and_opex, calc_costs, find_single_network_cost)
# Test approach:
# - integration test the meta cost function
# - unit test each function which returns the cost structure
# - unit test the function which calculates quantities
# - unit test the infrastructure sharing strategies
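# The setup_* arguments below are pytest fixtures, presumably defined in a
# conftest.py that is not shown here. A minimal sketch (hypothetical values,
# keys taken from their usage below) of the region dicts they supply:
#
#   setup_region = [{
#       'geotype': 'urban',
#       'new_mno_sites': 1,
#       'upgraded_mno_sites': 1,
#       'network_site_density': 0.5,
#       'backhaul_new': 0,
#   }]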
def test_find_single_network_cost(setup_region, setup_costs,
setup_global_parameters, setup_country_parameters,
setup_core_lut):
"""
Integration test for main function.
"""
setup_region[0]['sites_4G'] = 0
setup_region[0]['new_mno_sites'] = 1
setup_region[0]['upgraded_mno_sites'] = 1
setup_region[0]['network_site_density'] = 0.5
setup_region[0]['backhaul_new'] = 0
answer = find_single_network_cost(
setup_region[0],
{'strategy': '4G_epc_microwave_baseline_baseline_baseline_baseline'},
setup_costs,
setup_global_parameters,
setup_country_parameters,
setup_core_lut
)
assert answer['network_cost'] == 267480.4
setup_region[0]['sites_4G'] = 0
setup_region[0]['new_mno_sites'] = 1
setup_region[0]['upgraded_mno_sites'] = 1
setup_region[0]['sites_estimated_total'] = 1
setup_region[0]['network_site_density'] = 0.5
setup_region[0]['backhaul_new'] = 10
answer = find_single_network_cost(
setup_region[0],
{'strategy': '4G_epc_microwave_baseline_baseline_baseline_baseline'},
setup_costs,
setup_global_parameters,
setup_country_parameters,
setup_core_lut
)
assert answer['network_cost'] == 320071.4
setup_region[0]['sites_4G'] = 0
setup_region[0]['new_mno_sites'] = 1
setup_region[0]['upgraded_mno_sites'] = 1
setup_region[0]['network_site_density'] = 0.5
setup_region[0]['backhaul_new'] = 2
answer = find_single_network_cost(
setup_region[0],
{'strategy': '4G_epc_microwave_baseline_baseline_baseline_baseline'},
setup_costs,
setup_global_parameters,
setup_country_parameters,
setup_core_lut
)
setup_region[0]['sites_4G'] = 0
setup_region[0]['new_mno_sites'] = 1
setup_region[0]['upgraded_mno_sites'] = 1
setup_region[0]['network_site_density'] = 0.5
setup_region[0]['backhaul_new'] = 2
answer = find_single_network_cost(
setup_region[0],
{'strategy': '5G_nsa_microwave_baseline_baseline_baseline_baseline'},
setup_costs,
setup_global_parameters,
setup_country_parameters,
setup_core_lut
)
assert answer['network_cost'] == 601671.4
setup_region[0]['new_mno_sites'] = 0
setup_region[0]['upgraded_mno_sites'] = 1
setup_region[0]['network_site_density'] = 0.5
setup_region[0]['backhaul_new'] = 0
answer = find_single_network_cost(
setup_region[0],
{'strategy': '5G_nsa_microwave_baseline_baseline_baseline_baseline'},
setup_costs,
setup_global_parameters,
setup_country_parameters,
setup_core_lut
)
assert round(answer['network_cost']) == round(473902)
setup_region[0]['sites_4G'] = 0
setup_region[0]['new_mno_sites'] = 1
setup_region[0]['upgraded_mno_sites'] = 1
setup_region[0]['network_site_density'] = 0.5
setup_region[0]['backhaul_new'] = 2
answer = find_single_network_cost(
setup_region[0],
{'strategy': '5G_sa_microwave_baseline_baseline_baseline_baseline'},
setup_costs,
setup_global_parameters,
setup_country_parameters,
setup_core_lut
)
    assert answer['network_cost'] == 721499.9000000001  # (110322 + 11952 + 11952 + 1027906)
setup_region[0]['new_mno_sites'] = 0
setup_region[0]['upgraded_mno_sites'] = 1
setup_region[0]['network_site_density'] = 0.5
setup_region[0]['backhaul_new'] = 0
answer = find_single_network_cost(
setup_region[0],
{'strategy': '5G_sa_fiber_baseline_baseline_baseline_baseline'},
setup_costs,
setup_global_parameters,
setup_country_parameters,
setup_core_lut
)
    assert answer['network_cost'] == 1155040.7  # 63357.0 + 1027906
setup_region[0]['new_mno_sites'] = 0
setup_region[0]['upgraded_mno_sites'] = 1
setup_region[0]['network_site_density'] = 0.5
setup_region[0]['backhaul_new'] = 1
answer = find_single_network_cost(
setup_region[0],
{'strategy': '5G_sa_fiber_baseline_baseline_baseline_baseline'},
setup_costs,
setup_global_parameters,
setup_country_parameters,
setup_core_lut
)
    assert answer['network_cost'] == 1155040.7  # 63357 + 1027906
setup_region[0]['new_mno_sites'] = 1
setup_region[0]['upgraded_mno_sites'] = 1
setup_region[0]['network_site_density'] = 0.5
setup_region[0]['backhaul_new'] = 2
answer = find_single_network_cost(
setup_region[0],
{'strategy': '5G_sa_fiber_baseline_baseline_baseline_baseline'},
setup_costs,
setup_global_parameters,
setup_country_parameters,
setup_core_lut
)
    assert answer['network_cost'] == 1237544.0000000002  # 152690 + 1027906
setup_region[0]['new_mno_sites'] = 1
setup_region[0]['upgraded_mno_sites'] = 0
setup_region[0]['network_site_density'] = 0.001
setup_region[0]['backhaul_new'] = 1
answer = find_single_network_cost(
setup_region[0],
{'strategy': '5G_sa_fiber_baseline_baseline_baseline_baseline'},
setup_costs,
setup_global_parameters,
setup_country_parameters,
setup_core_lut
)
    assert answer['network_cost'] == 1375624.7999999998  # 450398.0 + 1027906
setup_region[0]['new_mno_sites'] = 10
setup_region[0]['upgraded_mno_sites'] = 10
setup_region[0]['network_site_density'] = 1
setup_region[0]['backhaul_new'] = 20
answer = find_single_network_cost(
setup_region[0],
{'strategy': '5G_sa_fiber_baseline_baseline_baseline_baseline'},
setup_costs,
setup_global_parameters,
setup_country_parameters,
setup_core_lut
)
    assert answer['network_cost'] == 2674016.4000000004  # 1451800.0 + 1027906
def test_greenfield_4g(setup_region, setup_option, setup_costs,
setup_global_parameters, setup_core_lut, setup_country_parameters):
"""
Unit test.
"""
setup_region[0]['upgraded_mno_sites'] = 0
setup_region[0]['new_mno_sites'] = 1
setup_region[0]['network_site_density'] = 1
#test baseline infra sharing
cost_structure = greenfield_4g(setup_region[0],
'4G_epc_microwave_baseline_baseline_baseline_baseline',
setup_costs, setup_global_parameters,
setup_core_lut, setup_country_parameters)
assert cost_structure['single_sector_antenna'] == 1500
assert cost_structure['single_remote_radio_unit'] == 4000
    assert cost_structure['io_fronthaul'] == 1500
assert cost_structure['tower'] == 10000
assert cost_structure['civil_materials'] == 5000
assert cost_structure['transportation'] == 5000
assert cost_structure['installation'] == 5000
assert cost_structure['site_rental'] == 9600
assert cost_structure['power_generator_battery_system'] == 5000
assert cost_structure['io_s1_x2'] == 1500
assert cost_structure['router'] == 2000
#test passive infra sharing
cost_structure = greenfield_4g(setup_region[0],
'4G_epc_microwave_passive_baseline_baseline_baseline',
setup_costs, setup_global_parameters,
setup_core_lut, setup_country_parameters)
assert cost_structure['tower'] == 10000 / setup_country_parameters['networks']['baseline_urban']
assert cost_structure['civil_materials'] == 5000 / setup_country_parameters['networks']['baseline_urban']
#test active infra sharing
cost_structure = greenfield_4g(setup_region[0],
'4G_epc_microwave_active_baseline_baseline_baseline',
setup_costs, setup_global_parameters,
setup_core_lut, setup_country_parameters)
assert cost_structure['single_sector_antenna'] == 1500 / setup_country_parameters['networks']['baseline_urban']
assert cost_structure['single_remote_radio_unit'] == 4000 / setup_country_parameters['networks']['baseline_urban']
assert cost_structure['bbu_cabinet'] == 500 / setup_country_parameters['networks']['baseline_urban']
assert cost_structure['civil_materials'] == 5000 / setup_country_parameters['networks']['baseline_urban']
setup_region[0]['sites_estimated_total'] = 6
setup_region[0]['upgraded_mno_sites'] = 3
setup_region[0]['sites_3G'] = 3
setup_region[0]['network_site_density'] = 2
    #test srn (shared rural network) wholesale core network
cost_structure = greenfield_4g(setup_region[0],
'4G_epc_microwave_srn_baseline_baseline_baseline',
setup_costs, setup_global_parameters,
setup_core_lut, setup_country_parameters)
assert cost_structure['core_node'] == (
(setup_costs['core_node_epc'] * 2) /
(setup_region[0]['new_mno_sites'] + setup_region[0]['upgraded_mno_sites']))
assert cost_structure['regional_node'] == (
(setup_costs['regional_node_epc'] * 2) /
(setup_region[0]['new_mno_sites'] + setup_region[0]['upgraded_mno_sites']))
    #test srn wholesale core network with a rural geotype
setup_region[0]['geotype'] = 'rural'
cost_structure = greenfield_4g(setup_region[0],
'4G_epc_microwave_srn_baseline_baseline_baseline',
setup_costs, setup_global_parameters,
setup_core_lut, setup_country_parameters)
assert cost_structure['core_node'] == (
(setup_costs['core_node_epc'] * 2) /
(setup_region[0]['new_mno_sites'] + setup_region[0]['upgraded_mno_sites']) /
(setup_country_parameters['networks']['baseline_rural']))
assert cost_structure['regional_node'] == (
(setup_costs['regional_node_epc'] * 2) /
(setup_region[0]['new_mno_sites'] + setup_region[0]['upgraded_mno_sites']) /
(setup_country_parameters['networks']['baseline_rural']))
def test_upgrade_to_4g(setup_region, setup_option, setup_costs,
setup_global_parameters, setup_core_lut, setup_country_parameters):
"""
Unit test.
"""
setup_region[0]['new_mno_sites'] = 0
setup_region[0]['upgraded_mno_sites'] = 1
setup_region[0]['sites_3G'] = 1
setup_region[0]['network_site_density'] = 0.5
cost_structure = upgrade_to_4g(setup_region[0],
'4G_epc_microwave_baseline_baseline_baseline_baseline',
setup_costs, setup_global_parameters,
setup_core_lut, setup_country_parameters)
assert cost_structure['single_sector_antenna'] == 1500
assert cost_structure['single_remote_radio_unit'] == 4000
assert cost_structure['installation'] == 5000
assert cost_structure['site_rental'] == 9600
assert cost_structure['router'] == 2000
#test passive infra sharing
cost_structure = upgrade_to_4g(setup_region[0],
'4G_epc_microwave_passive_baseline_baseline_baseline',
setup_costs, setup_global_parameters,
setup_core_lut, setup_country_parameters)
assert cost_structure['site_rental'] == 9600 / setup_country_parameters['networks']['baseline_urban']
#test active infra sharing
cost_structure = upgrade_to_4g(setup_region[0],
'4G_epc_microwave_active_baseline_baseline_baseline',
setup_costs, setup_global_parameters,
setup_core_lut, setup_country_parameters)
assert cost_structure['single_sector_antenna'] == 1500 / setup_country_parameters['networks']['baseline_urban']
assert cost_structure['single_remote_radio_unit'] == 4000 / setup_country_parameters['networks']['baseline_urban']
setup_region[0]['sites_estimated_total'] = 6
setup_region[0]['upgraded_mno_sites'] = 3
setup_region[0]['sites_3G'] = 3
setup_region[0]['network_site_density'] = 2
#test srn wholesale core network
cost_structure = upgrade_to_4g(setup_region[0],
'4G_epc_microwave_srn_baseline_baseline_baseline',
setup_costs, setup_global_parameters,
setup_core_lut, setup_country_parameters)
assert cost_structure['regional_node'] == int(
(setup_costs['regional_node_epc'] * 2) /
(setup_region[0]['new_mno_sites'] + setup_region[0]['upgraded_mno_sites']))
    #test srn wholesale core network with a rural geotype
setup_region[0]['geotype'] = 'rural'
cost_structure = upgrade_to_4g(setup_region[0],
'4G_epc_microwave_srn_baseline_baseline_baseline',
setup_costs, setup_global_parameters,
setup_core_lut, setup_country_parameters)
assert cost_structure['regional_node'] == int(
(setup_costs['regional_node_epc'] * 2) /
(setup_region[0]['new_mno_sites'] + setup_region[0]['upgraded_mno_sites']) /
(setup_country_parameters['networks']['baseline_urban']))
def test_greenfield_5g_nsa(setup_region, setup_option, setup_costs,
setup_global_parameters, setup_core_lut, setup_country_parameters):
"""
Unit test.
"""
setup_region[0]['upgraded_mno_sites'] = 0
setup_region[0]['new_mno_sites'] = 1
setup_region[0]['network_site_density'] = 1
#test baseline infra sharing
cost_structure = greenfield_5g_nsa(setup_region[0],
'5G_nsa_microwave_baseline_baseline_baseline_baseline',
setup_costs, setup_global_parameters,
setup_core_lut, setup_country_parameters)
assert cost_structure['single_sector_antenna'] == 1500
assert cost_structure['single_remote_radio_unit'] == 4000
assert cost_structure['tower'] == 10000
assert cost_structure['civil_materials'] == 5000
assert cost_structure['transportation'] == 5000
assert cost_structure['installation'] == 5000
assert cost_structure['site_rental'] == 9600
assert cost_structure['power_generator_battery_system'] == 5000
assert cost_structure['router'] == 2000
#test passive infra sharing
cost_structure = greenfield_5g_nsa(setup_region[0],
'5G_nsa_microwave_passive_baseline_baseline_baseline',
setup_costs, setup_global_parameters,
setup_core_lut, setup_country_parameters)
assert cost_structure['tower'] == 10000 / setup_country_parameters['networks']['baseline_urban']
assert cost_structure['civil_materials'] == 5000 / setup_country_parameters['networks']['baseline_urban']
#test active infra sharing
cost_structure = greenfield_5g_nsa(setup_region[0],
'5G_nsa_microwave_active_baseline_baseline_baseline',
setup_costs, setup_global_parameters,
setup_core_lut, setup_country_parameters)
assert cost_structure['single_sector_antenna'] == 1500 / setup_country_parameters['networks']['baseline_urban']
assert cost_structure['single_remote_radio_unit'] == 4000 / setup_country_parameters['networks']['baseline_urban']
assert cost_structure['civil_materials'] == 5000 / setup_country_parameters['networks']['baseline_urban']
setup_region[0]['sites_estimated_total'] = 6
setup_region[0]['upgraded_mno_sites'] = 3
setup_region[0]['sites_3G'] = 3
setup_region[0]['network_site_density'] = 2
#test srn wholesale core network
cost_structure = greenfield_5g_nsa(setup_region[0],
'5G_nsa_microwave_srn_baseline_baseline_baseline',
setup_costs, setup_global_parameters,
setup_core_lut, setup_country_parameters)
assert cost_structure['core_node'] == (
(setup_costs['core_node_nsa'] * 2) /
(setup_region[0]['new_mno_sites'] + setup_region[0]['upgraded_mno_sites']))
assert cost_structure['regional_node'] == (
(setup_costs['regional_node_nsa'] * 2) /
(setup_region[0]['new_mno_sites'] + setup_region[0]['upgraded_mno_sites']))
#test srn wholesale core network (rural geotype)
setup_region[0]['geotype'] = 'rural'
cost_structure = greenfield_5g_nsa(setup_region[0],
'5G_nsa_microwave_srn_baseline_baseline_baseline',
setup_costs, setup_global_parameters,
setup_core_lut, setup_country_parameters)
assert cost_structure['core_node'] == ((setup_costs['core_node_nsa'] * 2) /
(setup_region[0]['new_mno_sites'] + setup_region[0]['upgraded_mno_sites']) /
(setup_country_parameters['networks']['baseline_urban']))
assert cost_structure['regional_node'] == (
(setup_costs['regional_node_nsa'] * 2) /
(setup_region[0]['new_mno_sites'] + setup_region[0]['upgraded_mno_sites']) /
(setup_country_parameters['networks']['baseline_urban']))
def test_upgrade_to_5g_nsa(setup_region, setup_option, setup_costs,
setup_global_parameters, setup_core_lut, setup_country_parameters):
"""
Unit test.
"""
setup_region[0]['new_mno_sites'] = 0
setup_region[0]['upgraded_mno_sites'] = 1
setup_region[0]['sites_3G'] = 1
setup_region[0]['network_site_density'] = 0.5
cost_structure = upgrade_to_5g_nsa(setup_region[0],
'5G_nsa_microwave_baseline_baseline_baseline_baseline',
setup_costs, setup_global_parameters,
setup_core_lut, setup_country_parameters)
assert cost_structure['single_sector_antenna'] == 1500
assert cost_structure['single_remote_radio_unit'] == 4000
assert cost_structure['installation'] == 5000
assert cost_structure['site_rental'] == 9600
assert cost_structure['router'] == 2000
#test passive infra sharing
cost_structure = upgrade_to_5g_nsa(setup_region[0],
'5G_nsa_microwave_passive_baseline_baseline_baseline',
setup_costs, setup_global_parameters,
setup_core_lut, setup_country_parameters)
assert cost_structure['site_rental'] == 9600 / setup_country_parameters['networks']['baseline_urban']
#test active infra sharing
cost_structure = upgrade_to_5g_nsa(setup_region[0],
'5G_nsa_microwave_active_baseline_baseline_baseline',
setup_costs, setup_global_parameters,
setup_core_lut, setup_country_parameters)
assert cost_structure['single_sector_antenna'] == 1500 / setup_country_parameters['networks']['baseline_urban']
assert cost_structure['single_remote_radio_unit'] == 4000 / setup_country_parameters['networks']['baseline_urban']
setup_region[0]['new_mno_sites'] = 3
setup_region[0]['upgraded_mno_sites'] = 3
setup_region[0]['sites_3G'] = 3
setup_region[0]['network_site_density'] = 2
#test srn wholesale core network
cost_structure = upgrade_to_5g_nsa(setup_region[0],
'5G_nsa_microwave_srn_baseline_baseline_baseline',
setup_costs, setup_global_parameters,
setup_core_lut, setup_country_parameters)
assert cost_structure['core_node'] == ((setup_costs['core_node_nsa'] * 2) /
(setup_region[0]['new_mno_sites'] + setup_region[0]['upgraded_mno_sites']))
assert cost_structure['regional_node'] == (
(setup_costs['regional_node_nsa'] * 2) /
(setup_region[0]['new_mno_sites'] + setup_region[0]['upgraded_mno_sites']))
#test srn wholesale core network (rural geotype)
setup_region[0]['geotype'] = 'rural'
cost_structure = upgrade_to_5g_nsa(setup_region[0],
'5G_nsa_microwave_srn_baseline_baseline_baseline',
setup_costs, setup_global_parameters,
setup_core_lut, setup_country_parameters)
assert cost_structure['core_node'] == ((setup_costs['core_node_nsa'] * 2) /
(setup_region[0]['new_mno_sites'] + setup_region[0]['upgraded_mno_sites']) /
(setup_country_parameters['networks']['baseline_urban']))
assert cost_structure['regional_node'] == (
(setup_costs['regional_node_nsa'] * 2) /
(setup_region[0]['new_mno_sites'] + setup_region[0]['upgraded_mno_sites']) /
(setup_country_parameters['networks']['baseline_urban']))
def test_greenfield_5g_sa(setup_region, setup_option, setup_costs,
setup_global_parameters, setup_core_lut, setup_country_parameters):
"""
Unit test.
"""
setup_region[0]['upgraded_mno_sites'] = 1
setup_region[0]['new_mno_sites'] = 1
setup_region[0]['network_site_density'] = 1
cost_structure = greenfield_5g_sa(setup_region[0],
'5G_sa_microwave_baseline_baseline_baseline_baseline',
setup_costs, setup_global_parameters,
setup_core_lut, setup_country_parameters)
assert cost_structure['single_sector_antenna'] == 1500
assert cost_structure['single_remote_radio_unit'] == 4000
assert cost_structure['cots_processing'] == 500
assert cost_structure['tower'] == 10000
assert cost_structure['civil_materials'] == 5000
assert cost_structure['transportation'] == 5000
assert cost_structure['installation'] == 5000
assert cost_structure['site_rental'] == 9600
assert cost_structure['power_generator_battery_system'] == 5000
assert cost_structure['router'] == 2000
#test passive infra sharing
cost_structure = greenfield_5g_sa(setup_region[0],
'5g_sa_microwave_passive_baseline_baseline_baseline',
setup_costs, setup_global_parameters,
setup_core_lut, setup_country_parameters)
assert cost_structure['tower'] == 10000 / setup_country_parameters['networks']['baseline_urban']
assert cost_structure['civil_materials'] == 5000 / setup_country_parameters['networks']['baseline_urban']
#test active infra sharing
cost_structure = greenfield_5g_sa(setup_region[0],
'5g_sa_microwave_active_baseline_baseline_baseline',
setup_costs, setup_global_parameters,
setup_core_lut, setup_country_parameters)
assert cost_structure['single_sector_antenna'] == 1500 / setup_country_parameters['networks']['baseline_urban']
assert cost_structure['single_remote_radio_unit'] == 4000 / setup_country_parameters['networks']['baseline_urban']
assert cost_structure['cloud_power_supply_converter'] == 1000 / setup_country_parameters['networks']['baseline_urban']
assert cost_structure['civil_materials'] == 5000 / setup_country_parameters['networks']['baseline_urban']
setup_region[0]['sites_estimated_total'] = 6
setup_region[0]['upgraded_mno_sites'] = 3
setup_region[0]['sites_3G'] = 3
setup_region[0]['network_site_density'] = 2
#test srn wholesale core network
cost_structure = greenfield_5g_sa(setup_region[0],
'5G_sa_microwave_srn_baseline_baseline_baseline',
setup_costs, setup_global_parameters,
setup_core_lut, setup_country_parameters)
assert cost_structure['core_node'] == (
(setup_costs['core_node_sa'] * 2) /
(setup_region[0]['new_mno_sites'] + setup_region[0]['upgraded_mno_sites']))
assert cost_structure['regional_node'] == (
(setup_costs['regional_node_sa'] * 2) /
(setup_region[0]['new_mno_sites'] + setup_region[0]['upgraded_mno_sites']))
setup_region[0]['geotype'] = 'rural'
cost_structure = greenfield_5g_sa(setup_region[0],
'5G_sa_microwave_srn_baseline_baseline_baseline',
setup_costs, setup_global_parameters,
setup_core_lut, setup_country_parameters)
assert cost_structure['core_node'] == (
(setup_costs['core_node_sa'] * 2) /
(setup_region[0]['new_mno_sites'] + setup_region[0]['upgraded_mno_sites']) /
(setup_country_parameters['networks']['baseline_rural']))
assert cost_structure['regional_node'] == (
(setup_costs['regional_node_sa'] * 2) /
(setup_region[0]['new_mno_sites'] + setup_region[0]['upgraded_mno_sites']) /
(setup_country_parameters['networks']['baseline_rural']))
def test_upgrade_to_5g_sa(setup_region, setup_option, setup_costs,
setup_global_parameters, setup_core_lut, setup_country_parameters):
"""
Unit test.
"""
setup_region[0]['new_mno_sites'] = 1
setup_region[0]['upgraded_mno_sites'] = 1
setup_region[0]['sites_3G'] = 1
setup_region[0]['network_site_density'] = 0.5
cost_structure = upgrade_to_5g_sa(setup_region[0],
'5G_sa_microwave_baseline_baseline_baseline_baseline',
setup_costs, setup_global_parameters,
setup_core_lut, setup_country_parameters)
assert cost_structure['single_sector_antenna'] == 1500
assert cost_structure['single_remote_radio_unit'] == 4000
assert cost_structure['cots_processing'] == 500
assert cost_structure['installation'] == 5000
assert cost_structure['site_rental'] == 9600
assert cost_structure['low_latency_switch'] == 500
assert cost_structure['router'] == 2000
#test passive infra sharing
cost_structure = upgrade_to_5g_sa(setup_region[0],
'5g_sa_microwave_passive_baseline_baseline_baseline',
setup_costs, setup_global_parameters,
setup_core_lut, setup_country_parameters)
assert cost_structure['site_rental'] == 9600 / setup_country_parameters['networks']['baseline_urban']
#test active infra sharing
cost_structure = upgrade_to_5g_sa(setup_region[0],
'5g_sa_microwave_active_baseline_baseline_baseline',
setup_costs, setup_global_parameters,
setup_core_lut, setup_country_parameters)
assert cost_structure['single_sector_antenna'] == int(1500 / setup_country_parameters['networks']['baseline_urban'])
assert cost_structure['single_remote_radio_unit'] == int(4000 / setup_country_parameters['networks']['baseline_urban'])
assert cost_structure['cloud_power_supply_converter'] == int(1000 / setup_country_parameters['networks']['baseline_urban'])
setup_region[0]['new_mno_sites'] = 6
setup_region[0]['upgraded_mno_sites'] = 3
setup_region[0]['sites_3G'] = 3
setup_region[0]['network_site_density'] = 2
#test srn wholesale core network
cost_structure = upgrade_to_5g_sa(setup_region[0],
'5G_sa_microwave_srn_baseline_baseline_baseline',
setup_costs, setup_global_parameters,
setup_core_lut, setup_country_parameters)
assert cost_structure['core_node'] == int(
(setup_costs['core_node_sa'] * 2) /
(setup_region[0]['new_mno_sites'] + setup_region[0]['upgraded_mno_sites']) /
(setup_country_parameters['networks']['baseline_urban'])
)
assert cost_structure['regional_node'] == int(
(setup_costs['regional_node_sa'] * 2) /
(setup_region[0]['new_mno_sites'] + setup_region[0]['upgraded_mno_sites']) /
(setup_country_parameters['networks']['baseline_urban']))
def test_get_fronthaul_costs(setup_region, setup_costs):
"""
Unit test.
"""
setup_region[0]['network_site_density'] = 1
assert get_fronthaul_costs(setup_region[0], setup_costs) == int(
setup_costs['fiber_urban_m'] *
(math.sqrt(1/setup_region[0]['network_site_density']) / 2) * 1000)
setup_region[0]['network_site_density'] = 4
assert get_fronthaul_costs(setup_region[0], setup_costs) == int(
setup_costs['fiber_urban_m'] *
(math.sqrt(1/setup_region[0]['network_site_density']) / 2) * 1000)
setup_region[0]['network_site_density'] = 0.5
assert get_fronthaul_costs(setup_region[0], setup_costs) == int(
setup_costs['fiber_urban_m'] *
(math.sqrt(1/setup_region[0]['network_site_density']) / 2) * 1000)
setup_region[0]['network_site_density'] = 0.00001
assert get_fronthaul_costs(setup_region[0], setup_costs) == int(
setup_costs['fiber_urban_m'] *
(math.sqrt(1/setup_region[0]['network_site_density']) / 2) * 1000)
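#note (illustrative summary, not part of the original test): the expected
#fronthaul cost follows from site density -- sqrt(1 / density) is roughly the
#side in km of the square area served by one site, halving it approximates the
#hop length to the site, and * 1000 converts km to m before applying the
#per-metre fibre cost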
def test_get_backhaul_costs(setup_region, setup_costs, setup_core_lut):
"""
Unit test.
"""
assert get_backhaul_costs(setup_region[0], 'microwave',
setup_costs, setup_core_lut) == (setup_costs['microwave_small'])
setup_region[0]['area_km2'] = 5000
assert get_backhaul_costs(setup_region[0], 'microwave',
setup_costs, setup_core_lut) == (setup_costs['microwave_small'])
setup_region[0]['area_km2'] = 100000
assert get_backhaul_costs(setup_region[0], 'microwave',
setup_costs, setup_core_lut) == (setup_costs['microwave_large'])
setup_region[0]['area_km2'] = 2
assert get_backhaul_costs(setup_region[0], 'fiber',
setup_costs, setup_core_lut) == (setup_costs['fiber_urban_m'] * 250)
setup_region[0]['area_km2'] = 8
assert get_backhaul_costs(setup_region[0], 'fiber',
setup_costs, setup_core_lut) == (setup_costs['fiber_urban_m'] * 500)
assert get_backhaul_costs(setup_region[0], 'incorrect_backhaul_tech_name',
setup_costs, setup_core_lut) == 0
def test_local_net_costs(setup_region, setup_option, setup_costs,
setup_country_parameters, setup_global_parameters):
"""
Unit test.
"""
setup_region[0]['new_mno_sites'] = 2
setup_region[0]['upgraded_mno_sites'] = 0
setup_region[0]['area_km2'] = 40
assert local_net_costs(setup_region[0], setup_costs, setup_option['strategy'],
setup_country_parameters, setup_global_parameters) == (
setup_costs['regional_node_lower_epc'] *
(setup_region[0]['area_km2'] /
setup_global_parameters['local_node_spacing_km2']) /
(setup_region[0]['new_mno_sites'] + setup_region[0]['upgraded_mno_sites']))
setup_region[0]['new_mno_sites'] = 0
assert local_net_costs(setup_region[0], setup_costs, setup_option['strategy'],
setup_country_parameters, setup_global_parameters) == 0
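#note (illustrative summary, not part of the original test): the expected
#value spreads the cost of the lower-tier local nodes (one per
#local_node_spacing_km2 of area) across all new and upgraded sites, and with
#zero sites to carry that cost the function returns 0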
def test_regional_net_costs(setup_region, setup_option, setup_costs, setup_core_lut,
setup_country_parameters):
"""
Unit test.
"""
setup_region[0]['new_mno_sites'] = 6
setup_region[0]['upgraded_mno_sites'] = 0
assert regional_net_costs(setup_region[0], 'regional_edge', setup_costs,
setup_core_lut, setup_option['strategy'], setup_country_parameters) == int(
(setup_costs['regional_edge'] * setup_core_lut['regional_edge']['MWI.1.1.1_1_new']) /
(setup_region[0]['new_mno_sites'] + setup_region[0]['upgraded_mno_sites']))
assert regional_net_costs(setup_region[0], 'regional_node', setup_costs,
setup_core_lut, setup_option['strategy'], setup_country_parameters) == int(
(setup_costs['regional_node_epc'] * setup_core_lut['regional_node']['MWI.1.1.1_1_new']) /
(setup_region[0]['new_mno_sites'] + setup_region[0]['upgraded_mno_sites']))
setup_region[0]['new_mno_sites'] = 10
assert regional_net_costs(setup_region[0], 'regional_node', setup_costs,
setup_core_lut, setup_option['strategy'], setup_country_parameters) == int(
(setup_costs['regional_node_epc'] * setup_core_lut['regional_node']['MWI.1.1.1_1_new']) /
(setup_region[0]['new_mno_sites'] + setup_region[0]['upgraded_mno_sites']))
setup_core_lut['regional_node']['MWI.1.1.1_1'] = 10
setup_region[0]['area_km2'] = 100
assert regional_net_costs(setup_region[0], 'regional_node', setup_costs,
setup_core_lut, setup_option['strategy'], setup_country_parameters) == int(
(setup_costs['regional_node_epc'] * setup_core_lut['regional_node']['MWI.1.1.1_1_new']) /
(setup_region[0]['new_mno_sites'] + setup_region[0]['upgraded_mno_sites']))
assert regional_net_costs(setup_region[0], 'incorrrect_asset_name', setup_costs,
setup_core_lut, setup_option['strategy'], setup_country_parameters) == 'Asset name not in lut'
setup_region[0]['new_mno_sites'] = 0
assert regional_net_costs(setup_region[0], 'regional_node', setup_costs,
setup_core_lut, setup_option['strategy'], setup_country_parameters) == 0
setup_region[0]['GID_id'] = 'unknown GID ID'
assert regional_net_costs(setup_region[0], 'regional_node', setup_costs,
setup_core_lut, setup_option['strategy'], setup_country_parameters) == 0
def test_core_costs(setup_region, setup_option, setup_costs, setup_core_lut, setup_country_parameters):
"""
Unit test.
"""
setup_region[0]['new_mno_sites'] = 2
setup_region[0]['upgraded_mno_sites'] = 0
setup_country_parameters['networks']['baseline_urban'] = 2
assert core_costs(setup_region[0], 'core_edge', setup_costs,
setup_core_lut, setup_option['strategy'], setup_country_parameters) == (
(setup_costs['core_edge'] * 1000) /
(setup_region[0]['new_mno_sites'] + setup_region[0]['upgraded_mno_sites']))
assert core_costs(setup_region[0], 'core_node', setup_costs,
setup_core_lut, setup_option['strategy'], setup_country_parameters) == (
(setup_costs['core_node_{}'.format('epc')] * 2) /
(setup_region[0]['new_mno_sites'] + setup_region[0]['upgraded_mno_sites']))
assert core_costs(setup_region[0], 'incorrrect_asset_name', setup_costs,
setup_core_lut, setup_option['strategy'], setup_country_parameters) == 0
setup_region[0]['GID_id'] = 'unknown'
assert core_costs(setup_region[0], 'core_edge', setup_costs,
setup_core_lut, setup_option['strategy'], setup_country_parameters) == (
(setup_costs['core_edge'] * setup_core_lut['core_edge']['MWI.1.1.1_1_new']) /
(setup_region[0]['new_mno_sites'] + setup_region[0]['upgraded_mno_sites']))
setup_core_lut['regional_node']['MWI.1.1.1_1'] = 3
def test_discount_capex_and_opex(setup_global_parameters, setup_country_parameters):
"""
Unit test.
"""
assert discount_capex_and_opex(1000, setup_global_parameters, setup_country_parameters) == (
1195 * (1 + (setup_country_parameters['financials']['wacc'] / 100)))
def test_discount_opex(setup_global_parameters, setup_country_parameters):
"""
Unit test.
"""
assert discount_opex(1000, setup_global_parameters, setup_country_parameters) == (
1952 * (1 + (setup_country_parameters['financials']['wacc'] / 100)))
def test_calc_costs(setup_region, setup_global_parameters, setup_country_parameters):
"""
Unit test.
"""
setup_region[0]['sites_4G'] = 0
setup_region[0]['upgraded_mno_sites'] = 1
setup_region[0]['new_mno_sites'] = 1
answer, structure = calc_costs(
setup_region[0],
{'single_sector_antenna': 1500},
'fiber',
1,
setup_global_parameters,
setup_country_parameters)
assert answer == 5917
answer, structure = calc_costs(
setup_region[0],
{'single_baseband_unit': 4000},
'fiber',
1,
setup_global_parameters,
setup_country_parameters)
assert answer == 5259
answer, structure = calc_costs(
setup_region[0],
{'tower': 10000},
'fiber',
1,
setup_global_parameters,
setup_country_parameters)
assert answer == 11000
answer, structure = calc_costs(
setup_region[0],
{'site_rental': 9600},
'fiber',
1,
setup_global_parameters,
setup_country_parameters)
assert answer == 20617 #two years' rent
answer, structure = calc_costs(setup_region[0],
{'single_sector_antenna': 1500,
'single_baseband_unit': 4000,
'tower': 10000,
'site_rental': 9600},
'fiber',
6,
setup_global_parameters,
setup_country_parameters)
#answer = sum of antenna, bbu, tower, site_rental (5917 + 5259 + 11000 + 20617)
assert answer == 42793
answer, structure = calc_costs(
setup_region[0],
{'incorrect_name': 9600},
'fiber',
1,
setup_global_parameters,
setup_country_parameters)
assert answer == 0 #unrecognised cost names contribute nothing
answer, structure = calc_costs(setup_region[0],
{'cots_processing': 6,
'io_n2_n3': 6,
'low_latency_switch': 6,
'rack': 6,
'cloud_power_supply_converter': 6,},
'fiber',
1,
setup_global_parameters,
setup_country_parameters)
assert answer == round(sum([
8.8, #cots_processing = capex + opex
8.8, #io_n2_n3 = capex + opex
8.8, #low_latency_switch = capex + opex
6.6, #rack = capex
8.8, #cloud_power_supply_converter = capex + opex
]))
answer, structure = calc_costs(setup_region[0],
{'backhaul': 100,},
'fiber',
1,
setup_global_parameters,
setup_country_parameters)
assert answer == 132
answer, structure = calc_costs(setup_region[0],
{'backhaul': 100,},
'fiber',
0,
setup_global_parameters,
setup_country_parameters)
assert answer == 0
| 38.981876 | 127 | 0.703596 | 4,621 | 36,565 | 5.105172 | 0.043281 | 0.125895 | 0.130728 | 0.068331 | 0.940316 | 0.930482 | 0.921114 | 0.910983 | 0.90064 | 0.890891 | 0 | 0.041675 | 0.180364 | 36,565 | 937 | 128 | 39.023479 | 0.745479 | 0.039327 | 0 | 0.813154 | 0 | 0 | 0.247761 | 0.084606 | 0 | 0 | 0 | 0 | 0.206278 | 1 | 0.022422 | false | 0.008969 | 0.004484 | 0 | 0.026906 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
b42c6e0663dbf8b4887cb18215da23f7187c9591 | 28,368 | py | Python | script/MongoDB/insertion.py | BioMAs/convertToEntrezGeneID | a15f027c5af787b366afe6695e79051733a8560e | [
"MIT"
] | null | null | null | script/MongoDB/insertion.py | BioMAs/convertToEntrezGeneID | a15f027c5af787b366afe6695e79051733a8560e | [
"MIT"
] | null | null | null | script/MongoDB/insertion.py | BioMAs/convertToEntrezGeneID | a15f027c5af787b366afe6695e79051733a8560e | [
"MIT"
] | 1 | 2018-10-18T11:42:14.000Z | 2018-10-18T11:42:14.000Z | # -*- coding: utf-8 -*-
"""
Created on Mon July 28 13:44:19 2017
@author: clancien
"""
try:
import ConfigParser
except ImportError:
import configparser as ConfigParser
import os
import subprocess
from pymongo import MongoClient, ASCENDING
import logging
from logging.handlers import RotatingFileHandler
import sys
class Insertion():
def __init__(self):
config = ConfigParser.ConfigParser()
config.readfp(open('../../configuration.ini','r'))
self.logFile = config.get('Error', 'logFile')
self.Ensembl_gene=config.get('Convert','Ensembl_gene')
self.Ensembl_transcript=config.get('Convert','Ensembl_transcript')
self.Ensembl_protein=config.get('Convert','Ensembl_protein')
self.UniGene=config.get('Convert','UniGene')
self.GenBank_transcript=config.get('Convert','GenBank_transcript')
self.RefSeq_transcript=config.get('Convert','RefSeq_transcript')
self.GenBank_protein=config.get('Convert','GenBank_protein')
self.RefSeq_protein=config.get('Convert','RefSeq_protein')
self.GI_transcript=config.get('Convert','GI_transcript')
self.GI_protein=config.get('Convert','GI_protein')
self.Info=config.get('Convert', 'InfoWithHomologene')
self.GPL=config.get('Convert','GPL')
self.Homologene=config.get('Convert','Homologene')
self.Vega_gene=config.get('Convert','Vega_gene')
self.Vega_transcript=config.get('Convert','Vega_transcript')
self.Vega_protein=config.get('Convert','Vega_protein')
self.History=config.get('Convert','History')
self.Swissprot=config.get('Convert', 'Swissprot')
self.trEMBL=config.get('Convert', 'trEMBL')
self.client = MongoClient()
self.db = self.client["geneulike"]
self.logger=None
self.formatter=None
self.file_handler=None
self.init_log()
def file_exist(self, filepath):
return os.path.isfile(filepath)
def init_log(self):
# create the logger object used to write to the logs
self.logger = logging.getLogger()
# set the logger level to DEBUG so that it writes everything
self.logger.setLevel(logging.DEBUG)
# create a formatter that prepends the time and the level
# to every message written to the log
self.formatter = logging.Formatter('%(asctime)s :: %(levelname)s :: %(message)s')
# create a handler that redirects log writes to a file opened
# in 'append' mode, with 1 backup and a maximum size of 1 MB
self.file_handler = RotatingFileHandler(self.logFile, 'a', 1000000, 1)
# set its level to DEBUG, tell it to use the formatter created
# above and add this handler to the logger
self.file_handler.setLevel(logging.DEBUG)
self.file_handler.setFormatter(self.formatter)
self.logger.addHandler(self.file_handler)
def push_Ensembl_gene(self):
if self.file_exist(self.Ensembl_gene):
try:
subprocess.check_output(['bash','-c',"mongoimport -d geneulike -c Ensembl_gene --type tsv --fields EGID.string\(\),BDID.string\(\) --columnsHaveTypes --numInsertionWorkers 8 --file " + self.Ensembl_gene ])
except subprocess.CalledProcessError as error:
self.logger.warning("Error - insert.py - Ensembl_gene - insertion")
self.logger.warning("Exception at the line : {}".format(sys.exc_info()[-1].tb_lineno))
self.logger.warning(sys.exc_info())
self.logger.warning(error)
else:
self.logger.warning("Error - insertion.py - Ensembl_gene - File not found")
self.logger.warning("Ensembl_gene file has not been found")
try:
self.db['Ensembl_gene'].create_index([('BDID', ASCENDING)])
except:
self.logger.warning("Error - insert.py - Ensembl_gene - createIndex")
self.logger.warning("Exception at the line : {}".format(sys.exc_info()[-1].tb_lineno))
self.logger.warning(sys.exc_info())
def push_Ensembl_transcript(self):
if self.file_exist(self.Ensembl_transcript):
try:
subprocess.check_output(['bash','-c',"mongoimport -d geneulike -c Ensembl_transcript --type tsv --fields EGID.string\(\),BDID.string\(\) --columnsHaveTypes --numInsertionWorkers 8 --file " + self.Ensembl_transcript ])
except subprocess.CalledProcessError as error:
self.logger.warning("Error - insert.py - Ensembl_transcript - insertion")
self.logger.warning("Exception at the line : {}".format(sys.exc_info()[-1].tb_lineno))
self.logger.warning(sys.exc_info())
self.logger.warning(error)
else:
self.logger.warning("Error - insertion.py - Ensembl_transcript - File not found")
self.logger.warning("Ensembl_transcript file has not been found")
try:
self.db['Ensembl_transcript'].create_index([('BDID', ASCENDING)])
except:
self.logger.warning("Error - insert.py - Ensembl_transcript - createIndex")
self.logger.warning("Exception at the line : {}".format(sys.exc_info()[-1].tb_lineno))
self.logger.warning(sys.exc_info())
def push_Ensembl_protein(self):
if self.file_exist(self.Ensembl_transcript):
try:
subprocess.check_output(['bash','-c',"mongoimport -d geneulike -c Ensembl_protein --type tsv --fields EGID.string\(\),BDID.string\(\) --columnsHaveTypes --numInsertionWorkers 8 --file " + self.Ensembl_protein ])
except subprocess.CalledProcessError as error:
self.logger.warning("Error - insert.py - Ensembl_protein - insertion")
self.logger.warning("Exception at the line : {}".format(sys.exc_info()[-1].tb_lineno))
self.logger.warning(sys.exc_info())
self.logger.warning(error)
else:
self.logger.warning("Error - insertion.py - Ensembl_protein - File not found")
self.logger.warning("Ensembl_protein file has not been found")
try:
self.db['Ensembl_protein'].create_index([('BDID', ASCENDING)])
except:
self.logger.warning("Error - insert.py - Ensembl_protein - createIndex")
self.logger.warning("Exception at the line : {}".format(sys.exc_info()[-1].tb_lineno))
self.logger.warning(sys.exc_info())
def push_UniGene(self):
if self.file_exist(self.UniGene):
try:
subprocess.check_output(['bash','-c',"mongoimport -d geneulike -c UniGene --type tsv --fields EGID.string\(\),BDID.string\(\) --columnsHaveTypes --numInsertionWorkers 8 --file " + self.UniGene ])
except subprocess.CalledProcessError as error:
self.logger.warning("Error - insert.py - UniGene - insertion")
self.logger.warning("Exception at the line : {}".format(sys.exc_info()[-1].tb_lineno))
self.logger.warning(sys.exc_info())
self.logger.warning(error)
else:
self.logger.warning("Error - insertion.py - UniGene - File not found")
self.logger.warning("UniGene file has not been found")
try:
self.db['UniGene'].create_index([('BDID', ASCENDING)])
except:
self.logger.warning("Error - insert.py - UniGene - createIndex")
self.logger.warning("Exception at the line : {}".format(sys.exc_info()[-1].tb_lineno))
self.logger.warning(sys.exc_info())
def push_GenBank_transcript(self):
if self.file_exist(self.GenBank_transcript):
try:
subprocess.check_output(['bash','-c',"mongoimport -d geneulike -c GenBank_transcript --type tsv --fields EGID.string\(\),BDID.string\(\) --columnsHaveTypes --numInsertionWorkers 8 --file " + self.GenBank_transcript ])
except subprocess.CalledProcessError as error:
self.logger.warning("Error - insert.py - GenBank_transcript - insertion")
self.logger.warning("Exception at the line : {}".format(sys.exc_info()[-1].tb_lineno))
self.logger.warning(sys.exc_info())
self.logger.warning(error)
else:
self.logger.warning("Error - insertion.py - GenBank_transcript - File not found")
self.logger.warning("GenBank_transcript file has not been found")
try:
self.db['GenBank_transcript'].create_index([('BDID', ASCENDING)])
except:
self.logger.warning("Error - insert.py - GenBank_transcript - createIndex")
self.logger.warning("Exception at the line : {}".format(sys.exc_info()[-1].tb_lineno))
self.logger.warning(sys.exc_info())
def push_RefSeq_transcript(self):
if self.file_exist(self.RefSeq_transcript):
try:
subprocess.check_output(['bash','-c',"mongoimport -d geneulike -c RefSeq_transcript --type tsv --fields EGID.string\(\),BDID.string\(\) --columnsHaveTypes --numInsertionWorkers 8 --file " + self.RefSeq_transcript ])
except subprocess.CalledProcessError as error:
self.logger.warning("Error - insert.py - RefSeq_transcript - insertion")
self.logger.warning("Exception at the line : {}".format(sys.exc_info()[-1].tb_lineno))
self.logger.warning(sys.exc_info())
self.logger.warning(error)
else:
self.logger.warning("Error - insertion.py - RefSeq_transcript - File not found")
self.logger.warning("RefSeq_transcript file has not been found")
try:
self.db['RefSeq_transcript'].create_index([('BDID', ASCENDING)])
except:
self.logger.warning("Error - insert.py - RefSeq_transcript - createIndex")
self.logger.warning("Exception at the line : {}".format(sys.exc_info()[-1].tb_lineno))
self.logger.warning(sys.exc_info())
def push_GenBank_protein(self):
if self.file_exist(self.GenBank_protein):
try:
subprocess.check_output(['bash','-c',"mongoimport -d geneulike -c GenBank_protein --type tsv --fields EGID.string\(\),BDID.string\(\) --columnsHaveTypes --numInsertionWorkers 8 --file " + self.GenBank_protein ])
except subprocess.CalledProcessError as error:
self.logger.warning("Error - insert.py - GenBank_protein - insertion")
self.logger.warning("Exception at the line : {}".format(sys.exc_info()[-1].tb_lineno))
self.logger.warning(sys.exc_info())
self.logger.warning(error)
else:
self.logger.warning("Error - insertion.py - GenBank_transcript - File not found")
self.logger.warning("GenBank_protein file has not been found")
try:
self.db['GenBank_protein'].create_index([('BDID', ASCENDING)])
except:
self.logger.warning("Error - insert.py - GenBank_protein - createIndex")
self.logger.warning("Exception at the line : {}".format(sys.exc_info()[-1].tb_lineno))
self.logger.warning(sys.exc_info())
def push_RefSeq_protein(self):
if self.file_exist(self.RefSeq_protein):
try:
subprocess.check_output(['bash','-c',"mongoimport -d geneulike -c RefSeq_protein --type tsv --fields EGID.string\(\),BDID.string\(\) --columnsHaveTypes --numInsertionWorkers 8 --file " + self.RefSeq_protein ])
except subprocess.CalledProcessError as error:
self.logger.warning("Error - insert.py - RefSeq_protein - insertion")
self.logger.warning("Exception at the line : {}".format(sys.exc_info()[-1].tb_lineno))
self.logger.warning(sys.exc_info())
self.logger.warning(error)
else:
self.logger.warning("Error - insertion.py - RefSeq_protein - File not found")
self.logger.warning("RefSeq_protein file has not been found")
try:
self.db['RefSeq_protein'].create_index([('BDID', ASCENDING)])
except:
self.logger.warning("Error - insert.py - RefSeq_protein - createIndex")
self.logger.warning("Exception at the line : {}".format(sys.exc_info()[-1].tb_lineno))
self.logger.warning(sys.exc_info())
def push_GI_transcript(self):
if self.file_exist(self.GI_transcript):
try:
subprocess.check_output(['bash','-c',"mongoimport -d geneulike -c GI_transcript --type tsv --fields EGID.string\(\),BDID.string\(\) --columnsHaveTypes --numInsertionWorkers 8 --file " + self.GI_transcript ])
except subprocess.CalledProcessError as error:
self.logger.warning("Error - insert.py - GI_transcript - insertion")
self.logger.warning("Exception at the line : {}".format(sys.exc_info()[-1].tb_lineno))
self.logger.warning(sys.exc_info())
self.logger.warning(error)
else:
self.logger.warning("Error - insertion.py - RefSeq_protein - File not found")
self.logger.warning("GI_transcript file has not been found")
try:
self.db['GI_transcript'].create_index([('BDID', ASCENDING)])
except:
self.logger.warning("Error - insert.py - GI_transcript - createIndex")
self.logger.warning("Exception at the line : {}".format(sys.exc_info()[-1].tb_lineno))
self.logger.warning(sys.exc_info())
def push_GI_protein(self):
if self.file_exist(self.GI_protein):
try:
subprocess.check_output(['bash','-c',"mongoimport -d geneulike -c GI_protein --type tsv --fields EGID.string\(\),BDID.string\(\) --columnsHaveTypes --numInsertionWorkers 8 --file " + self.GI_protein ])
except subprocess.CalledProcessError as error:
self.logger.warning("Error - insert.py - GI_protein - insertion")
self.logger.warning("Exception at the line : {}".format(sys.exc_info()[-1].tb_lineno))
self.logger.warning(sys.exc_info())
self.logger.warning(error)
else:
self.logger.warning("Error - insertion.py - GI_protein - File not found")
self.logger.warning("GI_protein file has not been found")
try:
self.db['GI_protein'].create_index([('BDID', ASCENDING)])
except:
self.logger.warning("Error - insert.py - GI_protein - createIndex")
self.logger.warning("Exception at the line : {}".format(sys.exc_info()[-1].tb_lineno))
self.logger.warning(sys.exc_info())
def push_Info(self):
if self.file_exist(self.Info):
try:
subprocess.check_output(['bash','-c',"mongoimport -d geneulike -c GeneInfo --type tsv --fields EGID.string\(\),TAXID.string\(\),SYMBOL.string\(\),DESCRIPTION.string\(\),HOMOLOGENE.string\(\) --columnsHaveTypes --numInsertionWorkers 8 --file " + self.Info ])
except subprocess.CalledProcessError as error:
self.logger.warning("Error - insert.py - GeneInfo - insertion")
self.logger.warning("Exception at the line : {}".format(sys.exc_info()[-1].tb_lineno))
self.logger.warning(sys.exc_info())
self.logger.warning(error)
else:
self.logger.warning("Error - insertion.py - GeneInfo - File not found")
self.logger.warning("GeneInfo file has not been found")
try:
self.db['GeneInfo'].create_index([('EGID', ASCENDING)])
except:
self.logger.warning("Error - insert.py - GeneInfo - createIndex")
self.logger.warning("Exception at the line : {}".format(sys.exc_info()[-1].tb_lineno))
self.logger.warning(sys.exc_info())
def push_GPL(self):
if self.file_exist(self.GPL):
try:
subprocess.check_output(['bash','-c',"mongoimport -d geneulike -c GPL --type tsv --fields EGID.string\(\),BDID.string\(\),TAXID.string\(\),PLATFORM.string\(\),TITLE.string\(\),ORGANISM.string\(\) --columnsHaveTypes --numInsertionWorkers 8 --file " + self.GPL ])
except subprocess.CalledProcessError as error:
self.logger.warning("Error - insert.py - GPL - insertion")
self.logger.warning("Exception at the line : {}".format(sys.exc_info()[-1].tb_lineno))
self.logger.warning(sys.exc_info())
self.logger.warning(error)
else:
self.logger.warning("Error - insertion.py - GPL - File not found")
self.logger.warning("GPL file has not been found")
try:
self.db['GPL'].create_index([('BDID', ASCENDING), ('PLATFORM', ASCENDING)])
except:
self.logger.warning("Error - insert.py - GPL - createIndex")
self.logger.warning("Exception at the line : {}".format(sys.exc_info()[-1].tb_lineno))
self.logger.warning(sys.exc_info())
def push_Homologene(self):
if self.file_exist(self.Homologene):
try:
subprocess.check_output(['bash','-c',"mongoimport -d geneulike -c HomoloGene --type tsv --fields EGID.string\(\),BDID.string\(\) --columnsHaveTypes --numInsertionWorkers 8 --file " + self.Homologene ])
except subprocess.CalledProcessError as error:
self.logger.warning("Error - insert.py - HomoloGene - insertion")
self.logger.warning("Exception at the line : {}".format(sys.exc_info()[-1].tb_lineno))
self.logger.warning(sys.exc_info())
self.logger.warning(error)
else:
self.logger.warning("Error - insertion.py - HomoloGene - File not found")
self.logger.warning("HomoloGene file has not been found")
try:
self.db['HomoloGene'].create_index([('BDID', ASCENDING)])
except:
self.logger.warning("Error - insert.py - HomoloGene - createIndex")
self.logger.warning("Exception at the line : {}".format(sys.exc_info()[-1].tb_lineno))
self.logger.warning(sys.exc_info())
def push_Vega_gene(self):
if self.file_exist(self.Vega_gene):
try:
subprocess.check_output(['bash','-c',"mongoimport -d geneulike -c Vega_gene --type tsv --fields EGID.string\(\),BDID.string\(\) --columnsHaveTypes --numInsertionWorkers 8 --file " + self.Vega_gene ])
except subprocess.CalledProcessError as error:
self.logger.warning("Error - insert.py - Vega_gene - insertion")
self.logger.warning("Exception at the line : {}".format(sys.exc_info()[-1].tb_lineno))
self.logger.warning(sys.exc_info())
self.logger.warning(error)
else:
self.logger.warning("Error - insertion.py - Vega_gene - File not found")
self.logger.warning("Vega_gene file has not been found")
try:
self.db['Vega_gene'].create_index([('BDID', ASCENDING)])
except:
self.logger.warning("Error - insert.py - Vega_gene - createIndex")
self.logger.warning("Exception at the line : {}".format(sys.exc_info()[-1].tb_lineno))
self.logger.warning(sys.exc_info())
def push_Vega_transcript(self):
if self.file_exist(self.Vega_transcript):
try:
subprocess.check_output(['bash','-c',"mongoimport -d geneulike -c Vega_transcript --type tsv --fields EGID.string\(\),BDID.string\(\) --columnsHaveTypes --numInsertionWorkers 8 --file " + self.Vega_transcript ])
except subprocess.CalledProcessError as error:
self.logger.warning("Error - insert.py - Vega_transcript - insertion")
self.logger.warning("Exception at the line : {}".format(sys.exc_info()[-1].tb_lineno))
self.logger.warning(sys.exc_info())
self.logger.warning(error)
else:
self.logger.warning("Error - insertion.py - Vega_transcript - File not found")
self.logger.warning("Vega_transcript file has not been found")
try:
self.db['Vega_transcript'].create_index([('BDID', ASCENDING)])
except:
self.logger.warning("Error - insert.py - Vega_transcript - createIndex")
self.logger.warning("Exception at the line : {}".format(sys.exc_info()[-1].tb_lineno))
self.logger.warning(sys.exc_info())
def push_Vega_protein(self):
if self.file_exist(self.Vega_protein):
try:
subprocess.check_output(['bash','-c',"mongoimport -d geneulike -c Vega_protein --type tsv --fields EGID.string\(\),BDID.string\(\) --columnsHaveTypes --numInsertionWorkers 8 --file " + self.Vega_protein ])
except subprocess.CalledProcessError as error:
self.logger.warning("Error - insert.py - Vega_protein - insertion")
self.logger.warning("Exception at the line : {}".format(sys.exc_info()[-1].tb_lineno))
self.logger.warning(sys.exc_info())
self.logger.warning(error)
else:
self.logger.warning("Error - insertion.py - Vega_protein File not found")
self.logger.warning("Vega_protein file has not been found")
try:
self.db['Vega_protein'].create_index([('BDID', ASCENDING)])
except:
self.logger.warning("Error - insert.py - Vega_protein - createIndex")
self.logger.warning("Exception at the line : {}".format(sys.exc_info()[-1].tb_lineno))
self.logger.warning(sys.exc_info())
def push_History(self):
if self.file_exist(self.History):
try:
subprocess.check_output(['bash','-c',"mongoimport -d geneulike -c History --type tsv --fields EGID.string\(\),BDID.string\(\) --columnsHaveTypes --numInsertionWorkers 8 --file " + self.History ])
except subprocess.CalledProcessError as error:
self.logger.warning("Error - insert.py - History - insertion")
self.logger.warning("Exception at the line : {}".format(sys.exc_info()[-1].tb_lineno))
self.logger.warning(sys.exc_info())
self.logger.warning(error)
else:
self.logger.warning("Error - insertion.py - History File not found")
self.logger.warning("History file has not been found")
try:
self.db['History'].create_index([('BDID', ASCENDING)])
except:
self.logger.warning("Error - insert.py - History - createIndex")
self.logger.warning("Exception at the line : {}".format(sys.exc_info()[-1].tb_lineno))
self.logger.warning(sys.exc_info())
def push_Swissprot(self):
if self.file_exist(self.Swissprot):
try:
subprocess.check_output(['bash','-c',"mongoimport -d geneulike -c UniProt --type tsv --fields EGID.string\(\),BDID.string\(\) --columnsHaveTypes --numInsertionWorkers 8 --file " + self.Swissprot ])
except subprocess.CalledProcessError as error:
self.logger.warning("Error - insert.py - Swissprot - insertion")
self.logger.warning("Exception at the line : {}".format(sys.exc_info()[-1].tb_lineno))
self.logger.warning(sys.exc_info())
self.logger.warning(error)
else:
self.logger.warning("Error - insertion.py - Swissprot - File not found")
self.logger.warning("Swissprot file has not been found")
try:
self.db['UniProt'].create_index([('BDID', ASCENDING)])
except:
self.logger.warning("Error - insert.py - Swissprot - createIndex")
self.logger.warning("Exception at the line : {}".format(sys.exc_info()[-1].tb_lineno))
self.logger.warning(sys.exc_info())
def push_trEMBL(self):
if self.file_exist(self.trEMBL):
try:
subprocess.check_output(['bash','-c',"mongoimport -d geneulike -c UniProt --type tsv --fields EGID.string\(\),BDID.string\(\) --columnsHaveTypes --numInsertionWorkers 8 --file " + self.trEMBL ])
except subprocess.CalledProcessError as error:
self.logger.warning("Error - insert.py - trEMBL - insertion")
self.logger.warning("Exception at the line : {}".format(sys.exc_info()[-1].tb_lineno))
self.logger.warning(sys.exc_info())
self.logger.warning(error)
else:
self.logger.warning("Error - insertion.py - trEMBL - File not found")
self.logger.warning("Swissprot file has not been found")
try:
self.db['UniProt'].create_index([('BDID', ASCENDING)])
except:
self.logger.warning("Error - insert.py - trEMBL - createIndex")
self.logger.warning("Exception at the line : {}".format(sys.exc_info()[-1].tb_lineno))
self.logger.warning(sys.exc_info())
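# A minimal sketch of the pattern shared by the push_* methods above
# (hypothetical helper, not part of the original class): import one TSV
# file into a MongoDB collection with mongoimport and index its BDID
# field. The 'fields' argument is a comma-separated mongoimport field
# specification such as 'EGID.string\(\),BDID.string\(\)'.
def push_collection(insertion, filepath, collection, fields):
    if insertion.file_exist(filepath):
        try:
            # shell out to mongoimport, as the methods above do
            subprocess.check_output(['bash', '-c',
                "mongoimport -d geneulike -c " + collection +
                " --type tsv --fields " + fields +
                " --columnsHaveTypes --numInsertionWorkers 8 --file " + filepath])
        except subprocess.CalledProcessError as error:
            insertion.logger.warning(error)
    else:
        insertion.logger.warning("%s file has not been found" % collection)
    try:
        # index the lookup key so conversions stay fast
        insertion.db[collection].create_index([('BDID', ASCENDING)])
    except Exception:
        insertion.logger.warning("%s - createIndex failed" % collection)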
insert = Insertion()
#insert.push_Ensembl_gene()
#insert.push_Ensembl_transcript()
#insert.push_Ensembl_protein()
#insert.push_UniGene()
#insert.push_GenBank_transcript()
#insert.push_RefSeq_transcript()
#insert.push_GenBank_protein()
#insert.push_RefSeq_protein()
#insert.push_GI_transcript()
#insert.push_GI_protein()
insert.push_Info()
#insert.push_GPL()
#insert.push_Homologene()
#insert.push_Vega_gene()
#insert.push_Vega_transcript()
#insert.push_Vega_protein()
#insert.push_History()
#insert.push_Swissprot()
#insert.push_trEMBL()
| 41.35277 | 277 | 0.568387 | 3,007 | 28,368 | 5.240772 | 0.061523 | 0.111048 | 0.184466 | 0.106098 | 0.819088 | 0.817628 | 0.794784 | 0.743702 | 0.726569 | 0.695349 | 0 | 0.004073 | 0.307671 | 28,368 | 685 | 278 | 41.413139 | 0.79832 | 0.034687 | 0 | 0.515228 | 0 | 0.048223 | 0.294654 | 0.042099 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.071066 | null | null | 0 | 0 | 0 | 0 | null | 0 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
b42ebdbbf89bd38d58b4ae43414325c8aa39f126 | 546 | py | Python | app/__init__.py | NCBI-Hackathons/McDiff | 43037967e65e8dbdda18c891175c93537b98a238 | [
"MIT"
] | 3 | 2018-06-21T15:16:25.000Z | 2018-06-21T22:42:17.000Z | app/__init__.py | NCBI-Hackathons/McDiff | 43037967e65e8dbdda18c891175c93537b98a238 | [
"MIT"
] | null | null | null | app/__init__.py | NCBI-Hackathons/McDiff | 43037967e65e8dbdda18c891175c93537b98a238 | [
"MIT"
] | 1 | 2018-06-25T16:17:04.000Z | 2018-06-25T16:17:04.000Z | from flask import Flask
from config import Config
import os
app = Flask(__name__)
app.config.from_object(Config)
from app import routes
os.path.abspath(os.path.dirname(__file__))
if not os.path.exists("{0}/static/img".format(os.path.abspath(os.path.dirname(__file__)))):
os.makedirs("{0}/static/img".format(os.path.abspath(os.path.dirname(__file__))))
if not os.path.exists("{0}/static/uploads".format(os.path.abspath(os.path.dirname(__file__)))):
os.makedirs("{0}/static/uploads".format(os.path.abspath(os.path.dirname(__file__))))
| 32.117647 | 95 | 0.745421 | 87 | 546 | 4.390805 | 0.252874 | 0.188482 | 0.170157 | 0.196335 | 0.722513 | 0.722513 | 0.722513 | 0.722513 | 0.722513 | 0.722513 | 0 | 0.007905 | 0.07326 | 546 | 16 | 96 | 34.125 | 0.747036 | 0 | 0 | 0 | 0 | 0 | 0.117647 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.363636 | 0 | 0.363636 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 7 |
b44842a102a62351e3417b3b26d4dfec5e968d29 | 30,561 | py | Python | jes/jes-v5.020-linux/jes/python/zipf.py | utv-teaching/foundations-computer-science | 568e19fd83a3355dab2814229f335abf31bfd7e9 | [
"MIT"
] | null | null | null | jes/jes-v5.020-linux/jes/python/zipf.py | utv-teaching/foundations-computer-science | 568e19fd83a3355dab2814229f335abf31bfd7e9 | [
"MIT"
] | null | null | null | jes/jes-v5.020-linux/jes/python/zipf.py | utv-teaching/foundations-computer-science | 568e19fd83a3355dab2814229f335abf31bfd7e9 | [
"MIT"
] | null | null | null | ###############################################################################
# zipf.py Version 1.5 24-Dec-2008 Bill Manaris, Dana Hughes, J.R. Armstrong,
# Thomas Zalonis, Luca Pellicoro,
# Chris Wagner, Chuck McCormick
###########################################################################
#
# Copyright (C) 2003-2014 Bill Manaris, Dana Hughes, J.R. Armstrong,
# Thomas Zalonis, Luca Pellicoro,
# Chris Wagner, Chuck McCormick
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 2 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
#
###########################################################################
#
# This module encapsulates functions that may be used to calculate
# the slope and r2 (fit) of a trendline
# of a Zipf distribution (byRank or bySize).
#
# The byRank distribution plots the values (y-axis)
# against the ranks of the values from largest to smallest
# (x-axis) in log-log scale. The ranks are generated automatically.
#
# The bySize distribution plots the values (y-axis)
# against the supplied keys (x-axis) in log-log scale.
#
# Usage: Call bySize(sizes, counts) and/or byRank(counts) functions
# Output: slope, R2 (fit), and the trendline y-intercept
#
# WARNING: On invalid input (empty lists, mismatched lengths, or
#          non-positive values) checkRanksAndCounts() raises a ValueError,
#          so callers should be prepared to handle the exception,
#          especially when this code is run unattended (e.g., in batch mode).
#
# Authors: Chris Wagner and Bill Manaris (based on VB code by Chuck McCormick and Bill Manaris)
#
# version 1.5 (December 24, 2008) J.R. Armstrong and Bill Manaris
# - Now we are differentiating between monotonous and random phenomena (vertical vs. horizontal trendlines).
# In the first case, we return slope = 0 and r2 = 0.
# In the second case, we return slope = 0 and r2 = 1.
# Also, some variable names have been updated.
#
# version 1.4 (October 1, 2008) Bill Manaris
# - Added more unit-testing code (i.e., if __name__=='__main__') for Shed Skin Python-to-C++ conversion to work.
# - Updated some variable names for usability/readability
#
# version 1.3 (March 23, 2007) Thomas Zalonis
# - Added code to the getSlopeR2() function that calculates the y-intercept for the trendline.
# - getSlopeR2() now returns 3 values, slope, r2 and the trendline y-intercept
#
# version 1.2 (Feb 03, 2007) Luca Pellicoro
# -Translation from Java to Python
# -Raise exceptions with erroneous user inputs (such as zero keys or values)
#
# version 1.1 (July 30, 2005)
#
# version 1.0 (May 10, 2003)
#
# for logarithmic calculations
from math import log, sqrt
def byRank(counts):
'''
Calculate the slope, R^2 and y-intercept of the counts,
ranking them from largest (rank 1) to smallest.
'''
newCounts = [] # to hold the deep copy
newRanks = [] # the newly created ranks
numberOfCounts = len(counts)
for index in range(numberOfCounts):
newCounts.append(counts[index]) # deep copy the counts
newRanks.append(numberOfCounts - index) # create the ranks: highest frequency has smallest rank
newCounts.sort()
checkRanksAndCounts(newRanks, newCounts)
return getSlopeR2(newRanks, newCounts)
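# illustrative example (values worked out by hand, not from the original code):
# byRank([6, 3, 2]) pairs rank 1 with count 6, rank 2 with count 3 and rank 3
# with count 2; because those counts fall exactly on a 1/rank curve, the call
# returns slope = -1.0, r2 = 1.0 and y-intercept = log10(6), about 0.778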
def bySize(sizes, counts):
'''
Calculate the slope, R^2 and y-intercept of the counts without
reordering them; sizes supplies the x-axis values directly.
'''
checkRanksAndCounts(sizes,counts)
return getSlopeR2(sizes, counts)
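# illustrative example: bySize([1, 2, 3], [6, 3, 2]) uses the supplied sizes
# directly as the x-axis values and yields the same perfect fit as above
# (slope = -1.0, r2 = 1.0)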
######################################
######### SUPPORTING METHODS #########
######################################
def checkRanksAndCounts(ranks, counts):
'''
Verify that:
- ranks and counts contain at least one element
- ranks and counts have the same length
- both ranks and counts do not contain any negative or zero element
'''
if len(counts) == 0: raise ValueError, 'Counts should contain at least one element'
if min(counts) <= 0.0: raise ValueError, 'Counts should be strictly positive: %f' % (min(counts))
if len(ranks) == 0: raise ValueError, 'Ranks should contain at least one element'
if min(ranks) <= 0.0 : raise ValueError, 'Ranks should be strictly positive: %f' % (min(ranks))
if len(ranks) != len(counts):
raise ValueError,'Ranks (length: %d) and counts (length: %d) should have the same size.' % (len(ranks), len(counts))
def getSlopeR2(ranks, counts):
'''
Calculates the Zipf slope, R^2 (fit) and trendline y-intercept
of a set of ranks and counts.
If slope and/or R^2 cannot be calculated, a zero is returned.
'''
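# reference: the code below accumulates the standard least-squares fit
# in log-log space and then evaluates
#   slope = (n*sumXY - sumX*sumY) / (n*sumX2 - sumX^2)
#   r     = (n*sumXY - sumX*sumY) / sqrt((n*sumX2 - sumX^2) * (n*sumY2 - sumY^2))
# where sumX, sumY, sumXY, sumX2, sumY2 are sums over log10 of the
# ranks and counts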
assert len(ranks) == len(counts) , 'Ranks and counts must have the same length.'
sumX = sumY = sumXY = sumX2 = sumY2 = 0.0
numberOfRanks = len(ranks)
# one extreme case:
# if the phenomenon is monotonous (only one type of event, e.g., ['a', 'a', 'a']),
# then the slope is negative infinity (cannot draw a line with only one data point),
# so indicate this with slope = 0 AND r2 = 0
if numberOfRanks == 1:
slope = 0.0
r2 = 0.0
else:
# the other extreme case:
# if the phenomenon is uniformly distributed (several types of events,
# but all having the same number of instances, e.g., ['a', 'b', 'a', 'b', 'a', 'b']),
# then the slope = 0 and r2 = 1 (a horizontal line).
# check if all counts are equal
i = 0
allCountsEqual = True # assume they are all equal
while allCountsEqual and i < numberOfRanks-1:
allCountsEqual = (counts[i] == counts[i + 1]) # update hypothesis
i = i + 1
if allCountsEqual: # is phenomenon uniformly distributed?
slope = 0.0
r2 = 1.0
# general case, so calculate actual slope and r2 values
else:
# Sum up the values for the calculations
for index in range(numberOfRanks):
sumX += log(ranks[index],10)
sumY += log(counts[index],10)
sumXY += log(ranks[index],10) * log(counts[index],10)
sumX2 += log(ranks[index],10)**2
sumY2 += log(counts[index],10)**2
# calculate the slope
if ((numberOfRanks * sumX2 - sumX * sumX) == 0.0):
slope = 0.0
else:
slope = ((numberOfRanks * sumXY - sumX * sumY) / (numberOfRanks * sumX2 - sumX * sumX))
# calculate the r2
if(sqrt((numberOfRanks * sumX2 - sumX * sumX) * (numberOfRanks * sumY2 - sumY * sumY)) == 0.0):
r2 = 0.0
else:
r = (numberOfRanks * sumXY - sumX * sumY) / sqrt((numberOfRanks * sumX2 - sumX * sumX) * (numberOfRanks * sumY2 - sumY * sumY))
r2 = r * r
# calculate the y-intercept of the trendline
yint = (sumY - slope * sumX) / len(ranks)
return slope, r2, yint
if __name__ == '__main__':
#print "Enter sequence of numbers to calculate its Zipfian distribution."
#print "The rank-frequency distribution is calculated based on how many times each number appears."
#print "The size-frequency distribution is calculated based on how many times each number appears; also the actual number is treated as if it represents 'size'."
#phenomenon = input("Enter sequence of numbers (e.g., [50, 100, 50]): ")
#phenomenon = [1, 1, 1] # check monotonous
#phenomenon = [2, 2, 2, 3, 3, 3] # check uniformly distributed (white noise)
#phenomenon = [1, 1, 2] # check truly zipfian (pink noise)
#phenomenon = [1, 1, 1, 1, 2] # check brown noise
phenomenon = [1, 2, 2, 3, 3, 3, 3] # check general case
# even more general case (from a textbook)
#phenomenon = [5364, 2794, 2312, 2127, 2092, 1659, 1380, 999, 975, 919, 716, 712, 698, 678, 630, 591, 566, 563, 553, 543, 540, 480, 478, 475, 468, 463, 460, 452, 442, 428, 424, 416, 382, 382, 380, 374, 335, 334, 327, 325, 303, 290, 284, 283, 274, 272, 266, 265, 265, 258, 252, 247, 245, 243, 242, 241, 237, 236, 233, 231, 223, 223, 222, 220, 218, 213, 212, 209, 208, 203, 199, 198, 198, 192, 189, 183, 183, 182, 181, 175, 175, 174, 173, 172, 171, 171, 169, 169, 169, 167, 166, 165, 164, 161, 161, 160, 160, 159, 157, 157, 157, 155, 150, 149, 148, 147, 146, 142, 142, 141, 140, 137, 136, 134, 132, 130, 128, 127, 127, 124, 123, 122, 122, 121, 121, 121, 118, 117, 117, 116, 116, 114, 114, 114, 113, 111, 110, 110, 109, 107, 106, 106, 105, 103, 103, 103, 102, 101, 101, 101, 100, 100, 98, 98, 97, 97, 95, 95, 94, 94, 94, 93, 92, 92, 90, 89, 89, 88, 88, 87, 86, 86, 86, 85, 85, 84, 84, 84, 84, 84, 84, 83, 83, 83, 83, 82, 82, 81, 81, 81, 80, 80, 80, 79, 78, 77, 76, 75, 75, 75, 74, 74, 74, 74, 73, 73, 73, 72, 72, 71, 71, 70, 70, 70, 70, 70, 69, 69, 69, 68, 68, 68, 67, 67, 67, 66, 66, 66, 66, 65, 65, 64, 64, 64, 62, 62, 62, 62, 62, 62, 62, 62, 62, 62, 61, 61, 61, 60, 60, 60, 60, 60, 60, 60, 60, 59, 59, 59, 59, 59, 58, 58, 58, 57, 57, 57, 57, 57, 56, 56, 56, 56, 56, 55, 55, 55, 55, 55, 55, 55, 55, 54, 54, 54, 53, 53, 53, 53, 53, 53, 53, 53, 52, 52, 52, 52, 52, 52, 52, 52, 52, 52, 51, 51, 51, 51, 51, 51, 50, 50, 50, 50, 50, 49, 49, 49, 49, 49, 49, 49, 49, 49, 49, 49, 49, 48, 48, 48, 48, 48, 48, 48, 48, 48, 47, 47, 47, 47, 47, 47, 47, 46, 46, 46, 46, 46, 46, 46, 45, 45, 45, 45, 45, 45, 45, 45, 45, 45, 45, 44, 44, 44, 44, 44, 44, 44, 44, 43, 43, 43, 43, 43, 43, 43, 43, 43, 43, 42, 42, 42, 42, 42, 42, 42, 42, 42, 41, 41, 41, 41, 41, 41, 41, 40, 40, 40, 40, 40, 40, 39, 39, 39, 39, 39, 39, 39, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 37, 37, 37, 37, 37, 37, 37, 37, 37, 36, 36, 36, 36, 36, 35, 35, 35, 35, 35, 35, 35, 35, 35, 35, 35, 35, 35, 35, 35, 35, 35, 34, 34, 34, 34, 34, 34, 34, 34, 34, 34, 34, 34, 34, 34, 34, 33, 33, 33, 33, 33, 33, 33, 33, 33, 33, 33, 32, 32, 32, 32, 32, 32, 32, 32, 32, 32, 32, 32, 32, 32, 32, 32, 32, 31, 31, 31, 31, 31, 31, 31, 31, 31, 31, 31, 31, 31, 31, 31, 31, 31, 31, 31, 31, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 29, 29, 29, 29, 29, 29, 29, 29, 29, 29, 29, 29, 29, 29, 29, 29, 29, 29, 29, 29, 29, 29, 29, 29, 28, 28, 28, 28, 28, 28, 28, 28, 28, 28, 28, 28, 28, 28, 28, 28, 28, 28, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 26, 26, 26, 26, 26, 26, 26, 26, 26, 26, 26, 26, 26, 26, 26, 26, 26, 26, 26, 26, 26, 26, 26, 26, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 18, 
18, 18, 18, 18, 18, 17, 17, 17, 17, 17, 17, 17, 17, 17, 17, 17, 17, 17, 17, 17, 17, 17, 17, 17, 17, 17, 17, 17, 17, 17, 17, 17, 17, 17, 17, 17, 17, 17, 17, 17, 17, 17, 17, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 
6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 
3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]
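# NOTE: byRank and bySize are not defined anywhere in this snippet (and the
# `phenomenon` assignment above is commented out in this record). Minimal
# sketches are given below, assuming both fit a least-squares line in log-log
# space and return (slope, R^2, y-intercept); the helper _loglog_fit and both
# function bodies are illustrative guesses, not the original implementations.
import math

def _loglog_fit(xs, ys):
    # ordinary least squares on the (log x, log y) pairs
    lx = [math.log(v) for v in xs]
    ly = [math.log(v) for v in ys]
    n = len(lx)
    mx, my = sum(lx) / n, sum(ly) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(lx, ly))
    sxx = sum((a - mx) ** 2 for a in lx)
    slope = sxy / sxx
    yint = my - slope * mx
    ss_res = sum((b - slope * a - yint) ** 2 for a, b in zip(lx, ly))
    ss_tot = sum((b - my) ** 2 for b in ly)
    r2 = 1.0 - ss_res / ss_tot if ss_tot else 1.0
    return slope, r2, yint

def byRank(counts):
    # rank-frequency: rank 1 is the most frequent symbol
    counts = sorted(counts, reverse=True)
    return _loglog_fit(range(1, len(counts) + 1), counts)

def bySize(sizes, counts):
    # size-frequency: frequency of each distinct symbol value
    return _loglog_fit(sizes, counts)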
print "Given the sequence", phenomenon
# calculate frequency of occurrence of each symbol
histogram = {}
for event in phenomenon:
histogram[event] = histogram.get(event, 0) + 1
# now, the histogram contains the frequencies
# next, extract the counts and calculate their rank-frequency (Zipfian) distribution
counts = histogram.values()
slope, r2, yint = byRank(counts)
print "The byRank slope is", slope, "and the R^2 is", r2
# now, extract the sizes calculate their side-frequency (Zipfian) distribution
sizes = histogram.keys()
slope, r2, yint = bySize(sizes, counts)
print "The bySize slope is", slope, "and the R^2 is", r2
| 131.16309 | 21,150 | 0.445208 | 7,749 | 30,561 | 1.753775 | 0.071364 | 0.34496 | 0.516336 | 0.687564 | 0.622001 | 0.598013 | 0.581531 | 0.573436 | 0.558278 | 0.540618 | 0 | 0.385146 | 0.303001 | 30,561 | 232 | 21,151 | 131.728448 | 0.252852 | 0.851837 | 0 | 0.134328 | 0 | 0.014925 | 0.103134 | 0 | 0 | 0 | 0 | 0 | 0.014925 | 0 | null | null | 0 | 0.014925 | null | null | 0.044776 | 0 | 0 | 1 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 10 |
b45eef3d7ad05c062fd552ca89006dbb8c5b14d3 | 8,973 | py | Python | tests/functional/dht/test_store.py | walidmujahid/lbry | e4c3e038b613f8e84fbe6e9227913c9c42146eaa | [
"MIT"
] | null | null | null | tests/functional/dht/test_store.py | walidmujahid/lbry | e4c3e038b613f8e84fbe6e9227913c9c42146eaa | [
"MIT"
] | null | null | null | tests/functional/dht/test_store.py | walidmujahid/lbry | e4c3e038b613f8e84fbe6e9227913c9c42146eaa | [
"MIT"
] | null | null | null | import struct
from binascii import hexlify
from twisted.internet import defer
from lbrynet.dht import constants
from lbrynet.utils import generate_id
from .dht_test_environment import TestKademliaBase
import logging
log = logging.getLogger()
class TestStoreExpiration(TestKademliaBase):
network_size = 40
@defer.inlineCallbacks
def test_nullify_token(self):
blob_hash = generate_id(1)
announcing_node = self.nodes[20]
# announce the blob
announce_d = announcing_node.announceHaveBlob(blob_hash)
self.pump_clock(5+1)
storing_node_ids = yield announce_d
self.assertEqual(len(storing_node_ids), 8)
for node in set(self.nodes).union(set(self._seeds)):
# now, everyone has the wrong token
node.change_token()
node.change_token()
announce_d = announcing_node.announceHaveBlob(blob_hash)
self.pump_clock(5+1)
storing_node_ids = yield announce_d
self.assertEqual(len(storing_node_ids), 0) # can't store, wrong tokens, but they get nullified
announce_d = announcing_node.announceHaveBlob(blob_hash)
self.pump_clock(5+1)
storing_node_ids = yield announce_d
self.assertEqual(len(storing_node_ids), 8) # next attempt succeeds as it refreshes tokens
@defer.inlineCallbacks
def test_store_and_expire(self):
blob_hash = generate_id(1)
announcing_node = self.nodes[20]
# announce the blob
announce_d = announcing_node.announceHaveBlob(blob_hash)
self.pump_clock(5+1)
storing_node_ids = yield announce_d
all_nodes = set(self.nodes).union(set(self._seeds))
# verify the nodes we think stored it did actually store it
storing_nodes = [node for node in all_nodes if hexlify(node.node_id) in storing_node_ids]
self.assertEqual(len(storing_nodes), len(storing_node_ids))
self.assertEqual(len(storing_nodes), constants.k)
for node in storing_nodes:
self.assertTrue(node._dataStore.hasPeersForBlob(blob_hash))
datastore_result = node._dataStore.getPeersForBlob(blob_hash)
self.assertEqual(list(map(lambda contact: (contact.id, contact.address, contact.port),
node._dataStore.getStoringContacts())), [(announcing_node.node_id,
announcing_node.externalIP,
announcing_node.port)])
self.assertEqual(len(datastore_result), 1)
expanded_peers = []
for peer in datastore_result:
host = ".".join([str(d) for d in peer[:4]])
port, = struct.unpack('>H', peer[4:6])
peer_node_id = peer[6:]
if (host, port, peer_node_id) not in expanded_peers:
expanded_peers.append((peer_node_id, host, port))
self.assertEqual(expanded_peers[0],
(announcing_node.node_id, announcing_node.externalIP, announcing_node.peerPort))
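# (illustrative, not from the original test) each `peer` blob is packed as
# 4 IP octets + a 2-byte big-endian port + the raw node id, e.g.:
# struct.pack('>4BH', 127, 0, 0, 1, 4444) + node_id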
# verify the announced blob expires in the storing nodes datastores
self.clock.advance(constants.dataExpireTimeout) # skip the clock directly ahead
for node in storing_nodes:
self.assertFalse(node._dataStore.hasPeersForBlob(blob_hash))
datastore_result = node._dataStore.getPeersForBlob(blob_hash)
self.assertEqual(len(datastore_result), 0)
self.assertIn(blob_hash, node._dataStore) # the looping call shouldn't have removed it yet
self.assertEqual(len(node._dataStore.getStoringContacts()), 1)
self.pump_clock(constants.checkRefreshInterval + 1) # tick the clock forward (so the nodes refresh)
for node in storing_nodes:
self.assertFalse(node._dataStore.hasPeersForBlob(blob_hash))
datastore_result = node._dataStore.getPeersForBlob(blob_hash)
self.assertEqual(len(datastore_result), 0)
self.assertEqual(len(node._dataStore.getStoringContacts()), 0)
self.assertNotIn(blob_hash, node._dataStore.keys()) # the looping call should have fired
@defer.inlineCallbacks
def test_storing_node_went_stale_then_came_back(self):
blob_hash = generate_id(1)
announcing_node = self.nodes[20]
# announce the blob
announce_d = announcing_node.announceHaveBlob(blob_hash)
self.pump_clock(5+1)
storing_node_ids = yield announce_d
all_nodes = set(self.nodes).union(set(self._seeds))
# verify the nodes we think stored it did actually store it
storing_nodes = [node for node in all_nodes if hexlify(node.node_id) in storing_node_ids]
self.assertEqual(len(storing_nodes), len(storing_node_ids))
self.assertEqual(len(storing_nodes), constants.k)
for node in storing_nodes:
self.assertTrue(node._dataStore.hasPeersForBlob(blob_hash))
datastore_result = node._dataStore.getPeersForBlob(blob_hash)
self.assertEqual(list(map(lambda contact: (contact.id, contact.address, contact.port),
node._dataStore.getStoringContacts())), [(announcing_node.node_id,
announcing_node.externalIP,
announcing_node.port)])
self.assertEqual(len(datastore_result), 1)
expanded_peers = []
for peer in datastore_result:
host = ".".join([str(d) for d in peer[:4]])
port, = struct.unpack('>H', peer[4:6])
peer_node_id = peer[6:]
if (host, port, peer_node_id) not in expanded_peers:
expanded_peers.append((peer_node_id, host, port))
self.assertEqual(expanded_peers[0],
(announcing_node.node_id, announcing_node.externalIP, announcing_node.peerPort))
self.pump_clock(constants.checkRefreshInterval*2)
# stop the node
self.nodes.remove(announcing_node)
yield self.run_reactor(31, [announcing_node.stop()])
# run the network for an hour, which should expire the removed node and turn the announced value stale
self.pump_clock(constants.checkRefreshInterval * 5, constants.checkRefreshInterval/2)
self.verify_all_nodes_are_routable()
# make sure the contact isn't returned as a peer for the blob, but that we still have the entry in the
# datastore in case the node comes back
for node in storing_nodes:
self.assertFalse(node._dataStore.hasPeersForBlob(blob_hash))
datastore_result = node._dataStore.getPeersForBlob(blob_hash)
self.assertEqual(len(datastore_result), 0)
self.assertEqual(len(node._dataStore.getStoringContacts()), 1)
self.assertIn(blob_hash, node._dataStore)
# bring the announcing node back online
self.nodes.append(announcing_node)
yield self.run_reactor(
31, [announcing_node.start([(seed_name, 4444) for seed_name in sorted(self.seed_dns.keys())])]
)
self.pump_clock(constants.checkRefreshInterval * 2)
self.verify_all_nodes_are_routable()
# now the announcing node should once again be returned as a peer for the blob
for node in storing_nodes:
self.assertTrue(node._dataStore.hasPeersForBlob(blob_hash))
datastore_result = node._dataStore.getPeersForBlob(blob_hash)
self.assertEqual(len(datastore_result), 1)
self.assertEqual(len(node._dataStore.getStoringContacts()), 1)
self.assertIn(blob_hash, node._dataStore)
# verify the announced blob expires in the storing nodes datastores
self.clock.advance(constants.dataExpireTimeout) # skip the clock directly ahead
for node in storing_nodes:
self.assertFalse(node._dataStore.hasPeersForBlob(blob_hash))
datastore_result = node._dataStore.getPeersForBlob(blob_hash)
self.assertEqual(len(datastore_result), 0)
self.assertIn(blob_hash, node._dataStore) # the looping call shouldn't have removed it yet
self.assertEqual(len(node._dataStore.getStoringContacts()), 1)
self.pump_clock(constants.checkRefreshInterval + 1) # tick the clock forward (so the nodes refresh)
for node in storing_nodes:
self.assertFalse(node._dataStore.hasPeersForBlob(blob_hash))
datastore_result = node._dataStore.getPeersForBlob(blob_hash)
self.assertEqual(len(datastore_result), 0)
self.assertEqual(len(node._dataStore.getStoringContacts()), 0)
self.assertNotIn(blob_hash, node._dataStore) # the looping call should have fired
| 51.867052 | 110 | 0.653739 | 1,051 | 8,973 | 5.365366 | 0.16746 | 0.042561 | 0.067033 | 0.022699 | 0.844653 | 0.837205 | 0.819294 | 0.802802 | 0.802802 | 0.764497 | 0 | 0.009398 | 0.264794 | 8,973 | 172 | 111 | 52.168605 | 0.845384 | 0.124373 | 0 | 0.823529 | 1 | 0 | 0.000766 | 0 | 0 | 0 | 0 | 0 | 0.286765 | 1 | 0.022059 | false | 0 | 0.051471 | 0 | 0.088235 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
c359cabe1732dbe5310ccc0052f016509cbaeef4 | 5,600 | py | Python | tests/auth/scope.py | Allerter/tekore | 20cf68280fb5b691126600a5b474ee841f7be199 | [
"MIT"
] | 135 | 2020-01-14T17:47:26.000Z | 2022-03-25T18:30:04.000Z | tests/auth/scope.py | Allerter/tekore | 20cf68280fb5b691126600a5b474ee841f7be199 | [
"MIT"
] | 135 | 2020-01-13T22:56:35.000Z | 2022-03-11T19:41:36.000Z | tests/auth/scope.py | Allerter/tekore | 20cf68280fb5b691126600a5b474ee841f7be199 | [
"MIT"
] | 21 | 2020-01-16T16:01:23.000Z | 2022-02-17T12:46:32.000Z | import pytest
from tekore import scope, Scope
class TestScopesEnum:
def test_str_is_enum_value(self):
s = scope.user_read_private
assert str(s) == 'user-read-private'
def test_subtracting_same_scope_returns_empty(self):
s = scope.user_library_read - scope.user_library_read
assert s == set()
class TestScope:
def test_repr_like_instantiation(self):
s = Scope('a', 'b')
assert repr(s) == "Scope('a', 'b')"
def test_empty_scope_equal_to_empty_set(self):
s = Scope()
assert s == set()
def test_scope_initialisable_with_strings(self):
s = Scope('b', 'a')
assert str(s) == 'a b'
def test_scope_initialisable_with_enum(self):
s = Scope(scope.user_read_private)
assert str(s) == 'user-read-private'
def test_scope_initialisable_with_combination(self):
s = Scope('a', 'b', scope.user_read_private)
assert str(s) == 'a b user-read-private'
def test_different_object_same_str_results_in_no_duplicates(self):
s = Scope(scope.user_read_private, 'user-read-private')
assert s == {'user-read-private'}
def test_scope_unpackable(self):
s1 = Scope('b', 'a')
s2 = Scope(*s1)
assert s1 == s2
def test_adding_scopes_preserves_originals(self):
s1 = Scope('b', 'a')
s2 = Scope('c', 'b')
assert isinstance(s1 + s2, Scope)
assert s1 + s2 == {'a', 'b', 'c'}
assert str(s1) == 'a b'
assert str(s2) == 'b c'
def test_subtracting_scopes_preserves_originals(self):
s1 = Scope('b', 'a')
s2 = Scope('c', 'b')
assert isinstance(s1 - s2, Scope)
assert s1 - s2 == {'a'}
assert str(s1) == 'a b'
assert str(s2) == 'b c'
class TestScopeOperations:
def test_add_invalid_scope(self):
with pytest.raises(NotImplementedError):
1 + scope.user_top_read
def test_add_invalid_Scope(self):
with pytest.raises(NotImplementedError):
1 + Scope('a')
def test_add_str_scope(self):
s = 'a' + scope.user_top_read
assert str(s) == 'a user-top-read'
def test_add_str_Scope(self):
s = 'a' + Scope('b')
assert str(s) == 'a b'
def test_add_scope_str(self):
s = scope.user_top_read + 'a'
assert str(s) == 'a user-top-read'
def test_add_scope_scope(self):
s = scope.user_follow_read + scope.user_top_read
assert str(s) == 'user-follow-read user-top-read'
def test_add_scope_Scope(self):
s = scope.user_top_read + Scope('a')
assert str(s) == 'a user-top-read'
def test_add_scope_invalid_raises(self):
with pytest.raises(NotImplementedError):
scope.user_top_read + 1
def test_add_Scope_str(self):
s = Scope('a') + 'b'
assert str(s) == 'a b'
def test_add_Scope_scope(self):
s = Scope('a') + scope.user_top_read
assert str(s) == 'a user-top-read'
def test_add_Scope_Scope(self):
s = Scope('a') + Scope('b')
assert str(s) == 'a b'
def test_add_Scope_invalid_raises(self):
with pytest.raises(NotImplementedError):
Scope('a') + 1
def test_sub_invalid_scope(self):
with pytest.raises(NotImplementedError):
1 - scope.user_top_read
def test_sub_invalid_Scope(self):
with pytest.raises(NotImplementedError):
1 - Scope('a')
def test_sub_str_scope_different(self):
s = 'a' - scope.user_top_read
assert str(s) == 'a'
def test_sub_str_scope_same(self):
s = 'user-top-read' - scope.user_top_read
assert str(s) == ''
def test_sub_str_Scope_different(self):
s = 'a' - Scope('b')
assert str(s) == 'a'
def test_sub_str_Scope_same(self):
s = 'a' - Scope('a')
assert str(s) == ''
def test_sub_scope_str_different(self):
s = scope.user_top_read - 'a'
assert str(s) == 'user-top-read'
def test_sub_scope_str_same(self):
s = scope.user_top_read - 'user-top-read'
assert str(s) == ''
def test_sub_scope_scope_different(self):
s = scope.user_top_read - scope.user_follow_read
assert str(s) == 'user-top-read'
def test_sub_scope_scope_same(self):
s = scope.user_top_read - scope.user_top_read
assert str(s) == ''
def test_sub_scope_Scope_different(self):
s = scope.user_top_read - Scope('a')
assert str(s) == 'user-top-read'
def test_sub_scope_Scope_same(self):
s = scope.user_top_read - Scope('user-top-read')
assert str(s) == ''
def test_sub_scope_invalid_raises(self):
with pytest.raises(NotImplementedError):
scope.user_top_read - 1
def test_sub_Scope_str_different(self):
s = Scope('a') - 'b'
assert str(s) == 'a'
def test_sub_Scope_str_same(self):
s = Scope('a') - 'a'
assert str(s) == ''
def test_sub_Scope_scope_different(self):
s = Scope('a') - scope.user_top_read
assert str(s) == 'a'
def test_sub_Scope_scope_same(self):
s = Scope('user-top-read') - scope.user_top_read
assert str(s) == ''
def test_sub_Scope_Scope_different(self):
s = Scope('a') - Scope('b')
assert str(s) == 'a'
def test_sub_Scope_Scope_same(self):
s = Scope('a') - Scope('a')
assert str(s) == ''
def test_sub_Scope_invalid_raises(self):
with pytest.raises(NotImplementedError):
Scope('a') - 1
| 29.166667 | 70 | 0.599464 | 791 | 5,600 | 3.965866 | 0.078382 | 0.095952 | 0.112209 | 0.112209 | 0.839018 | 0.802678 | 0.79694 | 0.748167 | 0.732547 | 0.688237 | 0 | 0.007096 | 0.270179 | 5,600 | 191 | 71 | 29.319372 | 0.76046 | 0 | 0 | 0.314685 | 0 | 0 | 0.065 | 0 | 0 | 0 | 0 | 0 | 0.286713 | 1 | 0.300699 | false | 0 | 0.013986 | 0 | 0.335664 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
c36fe8289fd235c9a930ee8959114d71bf7addce | 58,640 | py | Python | layers/triplet_loss.py | huangzongheng/NAMA | e9bc5b9ca0c1dd5fff2f0613fdaac9fc5b038152 | [
"MIT"
] | null | null | null | layers/triplet_loss.py | huangzongheng/NAMA | e9bc5b9ca0c1dd5fff2f0613fdaac9fc5b038152 | [
"MIT"
] | null | null | null | layers/triplet_loss.py | huangzongheng/NAMA | e9bc5b9ca0c1dd5fff2f0613fdaac9fc5b038152 | [
"MIT"
] | null | null | null | # encoding: utf-8
"""
@author: liaoxingyu
@contact: sherlockliao01@gmail.com
"""
import torch
from torch import nn
import torch.nn.functional as F
import math
import logging
from einops import rearrange, reduce, repeat
def normalize(x, axis=-1):
"""Normalizing to unit length along the specified dimension.
Args:
x: pytorch Variable
Returns:
x: pytorch Variable, same shape as input
"""
x = 1. * x / (torch.norm(x, 2, axis, keepdim=True).expand_as(x) + 1e-12)
return x
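# Illustrative example (not part of the original file):
# normalize(torch.tensor([[3., 4.]])) -> tensor([[0.6000, 0.8000]])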
def euclidean_dist(x, y):
"""
Args:
x: pytorch Variable, with shape [m, d]
y: pytorch Variable, with shape [n, d]
Returns:
dist: pytorch Variable, with shape [m, n]
"""
m, n = x.size(0), y.size(0)
xx = torch.pow(x, 2).sum(1, keepdim=True).expand(m, n)
yy = torch.pow(y, 2).sum(1, keepdim=True).expand(n, m).t()
dist = xx + yy
dist.addmm_(x, y.t(), beta=1, alpha=-2)  # dist = xx + yy - 2 * x @ y.t(); keyword form required by newer PyTorch
dist = dist.clamp(min=1e-12).sqrt() # for numerical stability
return dist
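# Illustrative sanity check (not part of the original file): for x == y the
# result is (numerically) symmetric with a near-zero diagonal, e.g.
# x = torch.randn(4, 8)
# d = euclidean_dist(x, x)
# d.shape == (4, 4); d.diag().max() < 1e-3; torch.allclose(d, d.t())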
def hard_example_mining(dist_mat, labels, return_inds=False):
"""For each anchor, find the hardest positive and negative sample.
Args:
dist_mat: pytorch Variable, pair wise distance between samples, shape [N, N]
labels: pytorch LongTensor, with shape [N]
return_inds: whether to return the indices. Save time if `False`(?)
Returns:
dist_ap: pytorch Variable, distance(anchor, positive); shape [N]
dist_an: pytorch Variable, distance(anchor, negative); shape [N]
p_inds: pytorch LongTensor, with shape [N];
indices of selected hard positive samples; 0 <= p_inds[i] <= N - 1
n_inds: pytorch LongTensor, with shape [N];
indices of selected hard negative samples; 0 <= n_inds[i] <= N - 1
NOTE: Only consider the case in which all labels have same num of samples,
thus we can cope with all anchors in parallel.
"""
assert len(dist_mat.size()) == 2
assert dist_mat.size(0) == dist_mat.size(1)
N = dist_mat.size(0)
# shape [N, N]
is_pos = labels.expand(N, N).eq(labels.expand(N, N).t())
is_neg = labels.expand(N, N).ne(labels.expand(N, N).t())
# `dist_ap` means distance(anchor, positive)
# both `dist_ap` and `relative_p_inds` with shape [N, 1]
dist_ap, relative_p_inds = torch.max(dist_mat - is_neg*1000, 1, keepdim=True)
# dist_ap, relative_p_inds = torch.max(
# dist_mat[is_pos].contiguous().view(N, -1), 1, keepdim=True)
# `dist_an` means distance(anchor, negative)
# both `dist_an` and `relative_n_inds` with shape [N, 1]
dist_an, relative_n_inds = torch.min(dist_mat + is_pos * 1000, 1, keepdim=True)
# dist_an, relative_n_inds = torch.min(
# dist_mat[is_neg].contiguous().view(N, -1), 1, keepdim=True)
# shape [N]
dist_ap = dist_ap.squeeze(1)
dist_an = dist_an.squeeze(1)
if return_inds:
# shape [N, N]
ind = (labels.new().resize_as_(labels)
.copy_(torch.arange(0, N).long())
.unsqueeze(0).expand(N, N))
# shape [N, 1]
p_inds = torch.gather(
ind[is_pos].contiguous().view(N, -1), 1, relative_p_inds.data)
n_inds = torch.gather(
ind[is_neg].contiguous().view(N, -1), 1, relative_n_inds.data)
# shape [N]
p_inds = p_inds.squeeze(1)
n_inds = n_inds.squeeze(1)
return dist_ap, dist_an, p_inds, n_inds
return dist_ap, dist_an
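# Illustrative usage (not part of the original file), assuming a PK-sampled
# batch of P=4 identities x K=4 images each:
# feats = torch.randn(16, 128)
# labels = torch.arange(4).repeat_interleave(4)  # [0,0,0,0,1,1,1,1,...]
# d_ap, d_an = hard_example_mining(euclidean_dist(feats, feats), labels)
# d_ap and d_an each have shape [16]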
class TripletLoss(object):
"""Modified from Tong Xiao's open-reid (https://github.com/Cysu/open-reid).
Related Triplet Loss theory can be found in paper 'In Defense of the Triplet
Loss for Person Re-Identification'."""
def __init__(self, margin=None, normalize_feature=False):
self.margin = margin
self.normalize_feature = normalize_feature
if margin is not None:
self.ranking_loss = nn.MarginRankingLoss(margin=margin)
else:
self.ranking_loss = nn.SoftMarginLoss()
def __call__(self, global_feat, labels, weight=None, normalize_feature=False):
# if normalize_feature:
if self.normalize_feature:
global_feat = normalize(global_feat, axis=-1)
# if global_feat.shape[-1] > 2048: # the first 2048 dims are the global feature and are excluded from the triplet loss
# local_feat = global_feat[..., 2048:]
# global_feat = global_feat[..., :2048]
# else:
# local_feat=None
if global_feat.dim() > 2:
dist_mat = euclidean_dist(global_feat[0], global_feat[1])
else:
dist_mat = euclidean_dist(global_feat, global_feat)
# if local_feat is not None:
# dist_mat = euclidean_dist(local_feat, local_feat) + dist_mat.detach()
if weight is not None: # intended to compress intra-class (positive) distances; currently unused
mask = labels
dist_ap, dist_an = hard_example_mining(
dist_mat, labels)
y = torch.ones_like(dist_an)
if self.margin is not None:
# loss = self.ranking_loss(dist_an/(dist_ap.detach() + dist_an), dist_ap/(dist_ap + dist_an.detach()), y)
loss = self.ranking_loss(dist_an, dist_ap, y)
else:
loss = self.ranking_loss(dist_an - dist_ap, y)
return loss, dist_ap, dist_an
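# Illustrative usage (not part of the original file):
# criterion = TripletLoss(margin=0.3)  # margin=None falls back to SoftMarginLoss
# loss, d_ap, d_an = criterion(feats, labels)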
class RelativeTripletLoss(nn.Module):
"""Modified from Tong Xiao's open-reid (https://github.com/Cysu/open-reid).
Related Triplet Loss theory can be found in paper 'In Defense of the Triplet
Loss for Person Re-Identification'."""
def __init__(self, margin=0.1, normalize_feature=True, num_classes=0, num_instances=4,
alpha=0.0, beta=0.9, p=1.0, sigma=100.0, gamma=0):
super().__init__()
self.margin = margin
self.normalize_feature = normalize_feature
self.num_classes = num_classes
self.nimg = num_instances
self.centers = None
self.stds = None
self.p = p
self.alpha = alpha
self.beta = 0.9  # note: the beta argument is ignored; the momentum is hard-coded
self.gamma = gamma
self.count = 0
self.rdist = 0
self.sigma = sigma
self.logger = logging.getLogger("reid_baseline.train")
if margin is not None:
self.ranking_loss = nn.MarginRankingLoss(margin=margin)
else:
self.ranking_loss = nn.SoftMarginLoss()
def forward(self, global_feat, labels, normalize_feature=False):
if self.centers is None:
self.centers = torch.zeros((self.num_classes, *global_feat.shape[1:]), device=global_feat.device)
self.stds = torch.ones(self.num_classes, device=global_feat.device)
# update centers
pids = labels[::self.nimg]
mean_feat = reduce(global_feat, '(p k) c -> p c', 'mean', k=self.nimg).detach()
self.centers[pids] += 0.2 * (F.normalize(mean_feat, dim=-1) - self.centers[pids])
self.centers[pids] = F.normalize(self.centers[pids], dim=1)
if self.normalize_feature:
global_feat = F.normalize(global_feat, dim=-1)
# update stds
r = (rearrange(global_feat, '(p k) c -> p k c', k=self.nimg).detach() - self.centers[pids][:, None]).norm(dim=-1)
self.stds[pids] += 0.2 * (r.mean(-1) - self.stds[pids])
ref_ap = repeat(self.stds[pids], 'p -> (p k)', k=self.nimg)
# calculate dist
dist_mat = euclidean_dist(global_feat, global_feat)
dist_ap, dist_an = hard_example_mining(
dist_mat, labels)
# rel_dist = (dist_ap - dist_an) / (torch.max(ref_ap, 0.5*dist_ap.detach()))
# loss = F.softplus(rel_dist + self.margin, beta=20)
# grad = torch.sigmoid((rel_dist + self.margin) * 20)
loss = F.softplus((dist_ap - dist_an) + self.margin * ref_ap, beta=20)
grad = torch.sigmoid(((dist_ap - dist_an) + self.margin * ref_ap) * 20)
self.count += 1
if self.count % 200 == 0:
self.count = 0
self.logger.info('ref:{:.3f}, grad:{:.3f}'.format(
ref_ap.mean().item(), grad.mean().item()
))
# if self.margin is not None:
# loss = self.ranking_loss(dist_an/(dg_ap + dist_an) + self.alpha, dist_ap/(dist_ap + dg_an), y)
# else:
# loss = self.ranking_loss(dist_an - dist_ap, y)
return loss.mean(), dist_ap, dist_an
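# Illustrative usage (not part of the original file); num_classes=751 is just
# an example value, and labels must come from a PK sampler so that every
# num_instances consecutive samples share one identity:
# criterion = RelativeTripletLoss(num_classes=751, num_instances=4)
# loss, d_ap, d_an = criterion(feats, labels)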
class MagRelativeTripletLoss(RelativeTripletLoss):
beta=20
def __init__(self, normalize_feature=True, num_classes=0, num_instances=4,
lm=0.05, um=0.25, lambda_g=35.0, la=10, ua=110): # 5, 50
super().__init__(normalize_feature=normalize_feature, num_classes=num_classes, num_instances=num_instances)
# self.margin = None
# self.normalize_feature = normalize_feature
# assert mode in ['all', 'same', 'cross']
# self.soft_mine = soft_mine
self.lm = lm
self.um = um
self.la = la
self.ua = ua
self.lambda_g = lambda_g # max(lambda_g, ((um-lm )/ (ua-la)) / (1 / la**2 -1 / ua**2))
self.avg_m = 0
self.min_m = 0
self.max_m = 0
self.avg_l = 0
@staticmethod
def get_dist(feat1, feat2):
return euclidean_dist(feat1, feat2)
def forward(self, global_feat, labels, weight=None, normalize_feature=False):
# if self.normalize_feature:
norms = global_feat.norm(dim=-1)
self.margin = self.m(norms)
reg = self.g(norms)
loss, dist_ap, dist_an = super().forward(global_feat, labels)
loss = loss + reg - reg.detach()  # straight-through: adds the regularizer's gradient without changing the reported loss value
self.avg_l += 0.1 * (norms.mean().detach().item() - self.avg_l)
self.avg_m += 0.1 * (self.margin.mean().detach().item() - self.avg_m)
self.min_m += 0.1 * (self.margin.min().detach().item() - self.min_m)
self.max_m += 0.1 * (self.margin.max().detach().item() - self.max_m)
return loss, dist_ap, dist_an
def m(self, norms):
# grad: um-lm / ua-la
norms = norms.clamp(self.la, self.ua)
x = (norms - self.la) / (self.ua - self.la)
margin = (self.um - self.lm) * x + self.lm
return margin
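# Worked example (illustrative): with the defaults la=10, ua=110, lm=0.05,
# um=0.25, a feature norm of 60 gives x = (60-10)/(110-10) = 0.5 and
# margin = 0.05 + (0.25-0.05)*0.5 = 0.15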
def g(self, norms):
# min: norm = ua
# grad: 1 / ua^2 -1 / norm^2
# lambda_g > ((um-lm)/(ua-la)) / (1/la**2 - 1/ua**2)
norms = norms.clamp(self.la, self.ua)
normed_x = ((norms - self.ua) / (self.la - self.ua)) # la:1, ua:0
# reg = 1 / norms + norms / self.ua ** 2 # magface
# reg = normed_x ** 2 # square
reg = torch.exp(normed_x) - normed_x # exp
reg = reg.mean()
return (reg) * self.lambda_g
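# Worked example (illustrative): exp(normed_x) - normed_x is minimized at
# normed_x = 0 (norm = ua, reg = 1) and largest at normed_x = 1 (norm = la,
# reg = e - 1 ~= 1.718), so the regularizer pushes feature norms toward ua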
# class TripletPosLoss(object):
# """Modified from Tong Xiao's open-reid (https://github.com/Cysu/open-reid).
# Related Triplet Loss theory can be found in paper 'In Defense of the Triplet
# Loss for Person Re-Identification'."""
#
# def __init__(self, margin=None, normalize_feature=False):
# self.margin = margin
# self.normalize_feature = normalize_feature
# if margin is not None:
# self.ranking_loss = nn.MarginRankingLoss(margin=margin)
# else:
# self.ranking_loss = nn.SoftMarginLoss()
#
# def __call__(self, global_feat, labels, normalize_feature=False):
# # if normalize_feature:
# if self.normalize_feature:
# global_feat = normalize(global_feat, axis=-1)
# if global_feat.dim() > 2:
# dist_mat = euclidean_dist(global_feat[0], global_feat[1])
# else:
# dist_mat = euclidean_dist(global_feat, global_feat)
# dist_ap, dist_an = hard_example_mining(
# dist_mat, labels)
# y = dist_an.new().resize_as_(dist_an).fill_(1)
# dist_an = dist_an.detach()
#
# dist_ap, dist_an = (dist_ap/(dist_ap + dist_an.detach())), (dist_an/(dist_ap.detach() + dist_an))
# # dist_an = (dist_an/(dist_ap + dist_an))
# if self.margin is not None:
# # loss = self.ranking_loss(dist_an/(dist_ap.detach() + dist_an), dist_ap/(dist_ap + dist_an.detach()), y)
# loss = self.ranking_loss(dist_an, dist_ap, y)
# else:
# loss = self.ranking_loss(dist_an - dist_ap, y)
# return loss, dist_ap, dist_an
def soft_hard_example_mining(dist_mat, labels, gamma=32.0):
"""For each anchor, find the hardest positive and negative sample.
Args:
dist_mat: pytorch Variable, pair wise distance between samples, shape [N, N]
labels: pytorch LongTensor, with shape [N]
return_inds: whether to return the indices. Save time if `False`(?)
Returns:
dist_ap: pytorch Variable, distance(anchor, positive); shape [N]
dist_an: pytorch Variable, distance(anchor, negative); shape [N]
p_inds: pytorch LongTensor, with shape [N];
indices of selected hard positive samples; 0 <= p_inds[i] <= N - 1
n_inds: pytorch LongTensor, with shape [N];
indices of selected hard negative samples; 0 <= n_inds[i] <= N - 1
NOTE: Only consider the case in which all labels have same num of samples,
thus we can cope with all anchors in parallel.
"""
assert len(dist_mat.size()) == 2
assert dist_mat.size(0) == dist_mat.size(1)
N = dist_mat.size(0)
# shape [N, N]
is_pos = labels.expand(N, N).eq(labels.expand(N, N).t())
is_neg = labels.expand(N, N).ne(labels.expand(N, N).t())
# `dist_ap` means distance(anchor, positive)
# both `dist_ap` and `relative_p_inds` with shape [N, 1]
dist_ap = torch.logsumexp(
gamma * dist_mat[is_pos].contiguous().view(N, -1), 1, keepdim=True)/gamma
# `dist_an` means distance(anchor, negative)
# both `dist_an` and `relative_n_inds` with shape [N, 1]
dist_an = -torch.logsumexp(
-gamma * dist_mat[is_neg].contiguous().view(N, -1), 1, keepdim=True)/gamma
# shape [N]
dist_ap = dist_ap.squeeze(1)
dist_an = dist_an.squeeze(1)
return dist_ap, dist_an
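# Note (illustrative): logsumexp(gamma*d)/gamma is a smooth upper bound on
# max(d) that tightens as gamma grows; e.g. for d = [1.0, 2.0] and gamma = 32,
# it evaluates to 2.0 + log(1 + exp(-32))/32 ~= 2.0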
# buggy version that happened to score higher; detaches dist_an
# class RelativeTripletLoss(object):
# """Modified from Tong Xiao's open-reid (https://github.com/Cysu/open-reid).
# Related Triplet Loss theory can be found in paper 'In Defense of the Triplet
# Loss for Person Re-Identification'."""
#
# def __init__(self, margin=None, num_classes=0, num_instances=4, alpha=0.0, beta=0.9, p=1.0, sigma=100.0, gamma=0):
# self.margin = margin
# self.num_classes = num_classes
# self.alpha = alpha
# self.beta = beta
# self.gamma = gamma
# self.num_instances = num_instances
# self.dist_bank = None # records each class's average intra-class and inter-class distances
# self.p = p
# self.count = 0
# self.sigma = sigma
# self.logger = logging.getLogger("reid_baseline.train")
# if margin is not None:
# self.ranking_loss = nn.MarginRankingLoss(margin=margin)
# else:
# self.ranking_loss = nn.SoftMarginLoss()
#
# def __call__(self, global_feat, labels, normalize_feature=False):
# if self.dist_bank is None:
# self.dist_bank = torch.ones(self.num_classes + 1, 4, device=global_feat.device)
# if normalize_feature:
# global_feat = normalize(global_feat, axis=-1)
# dist_mat = euclidean_dist(global_feat, global_feat)
# dg_ap = None # referance distance
# dg_an = None
# # calculate average inner and inter class distances
# if self.num_classes > 0:
# mask = labels.expand(*labels.shape, *labels.shape) == labels.expand(*labels.shape, *labels.shape).t() # mask = mask.float()
# # d_inner = mask*dist_mat.detach()
# # d_inter = (mask.logical_not())*dist_mat.detach()
# # d_inner = dist_mat[mask].detach().reshape(mask.shape[0], -1)
# org_dist = dist_mat.detach()
# d_inner = dist_mat[mask & ~torch.eye(mask.shape[0], dtype=torch.bool, device=labels.device)]\
# .detach().reshape(mask.shape[0], -1)
# d_inter = dist_mat[torch.logical_not(mask)].detach().reshape(mask.shape[0], -1)
# d_inner = d_inner.sort(-1, True)[0] # [:, :math.ceil(self.p * d_inner.shape[-1])]
# d_inter = d_inter.sort(-1, False)[0][:, :math.ceil(self.p * d_inter.shape[-1])]
# # d_inter = (1 - mask)*dist_mat.detach()
# std_inner = d_inner.std() # -1
# std_inter = d_inter.std()
# # print(d_inner.max(), d_inner.mean(), d_inner.min())
# # if d_inner.max() > 3*d_inner.mean():
# # print(d_inner)
# d_inner = d_inner.mean(-1)
# d_inter = d_inter.mean(-1)
# # dg_ap = self.alpha * d_inner + (1 - self.alpha) * self.dist_bank[labels, 0]
# # dg_an = self.alpha * d_inter + (1 - self.alpha) * self.dist_bank[labels, 1]
# # compute the mean and std of the global reference distances
# mean_ap = self.alpha * d_inner.mean() + (1 - self.alpha) * self.dist_bank[-1, 0]
# mean_an = self.alpha * d_inter.mean() + (1 - self.alpha) * self.dist_bank[-1, 1]
# stdg_ap = self.alpha * std_inner + (1 - self.alpha) * self.dist_bank[-1, 2] # labels
# stdg_an = self.alpha * std_inter + (1 - self.alpha) * self.dist_bank[-1, 3]
# # dg_ap = self.alpha * dist_ap.detach() + (1 - self.alpha) * self.dist_bank[labels, 0]
# # dg_an = self.alpha * dist_an.detach() + (1 - self.alpha) * self.dist_bank[labels, 1]
# clabel = labels[::self.num_instances] # label of each class
#
# d_inner = d_inner.reshape(-1,self.num_instances)
# d_inter = d_inter.reshape(-1,self.num_instances)
# # std_inner = std_inner.reshape(-1,self.num_instances)
# # std_inter = std_inter.reshape(-1,self.num_instances)
# # new_dist = torch.stack([d_inner.detach(), d_inter.detach(),
# # std_inner.detach(), std_inter.detach()], dim=-1).mean(1) # new average dist
# new_dist = torch.stack([d_inner.detach(), d_inter.detach()], dim=-1).mean(1) # new average dist
# # new_dist = torch.stack([dist_ap.detach(), dist_an.detach()], dim=-1)\
# # .reshape(-1, self.num_instances, 2).mean(1) # new average dist
#
# # update the per-class reference distances
# self.dist_bank[clabel, :2] = self.beta * self.dist_bank[clabel, :2] + (1 - self.beta) * new_dist
# # update the global mean and std
# if self.dist_bank[-1].std() < 1e-6:
# self.dist_bank[-1, :] = torch.stack([d_inner.mean().detach(), d_inter.mean().detach(),
# std_inner.detach(), std_inter.detach()])
# # mean_ap, mean_an = d_inner.mean().detach(), d_inter.mean().detach()
# # self.dist_bank[-1, :] = self.beta * self.dist_bank[-1, :] + (1 - self.beta) * torch.stack(
# # [mean_ap.detach(), mean_an.detach(), std_inner.detach(), std_inter.detach()])
# # self.dist_bank[-1, :] = torch.tensor([15,15,1,1.], device=self.dist_bank.device)
# self.logger.info('initializing dist bank {}'.format(self.dist_bank[-1, :]))
# else:
# self.dist_bank[-1, :] = self.beta * self.dist_bank[-1, :] + (1 - self.beta) * torch.stack(
# [mean_ap.detach(), mean_an.detach(), std_inner.detach(), std_inter.detach()])
# # [d_inner.mean().detach(), d_inter.mean().detach(), std_inner.detach(), std_inter.detach()])
#
# # very hard triplets filter
# if self.sigma < 99:
# vhard_ap = (dist_mat > (mean_ap + self.sigma * stdg_ap)) & mask # d_inner.flatten()
# dist_mat = dist_mat * (1 - vhard_ap.float())
# vhard_an = (dist_mat < (mean_an - (self.sigma) * stdg_an)) & (~mask)
# dist_mat = dist_mat + 100 * vhard_an.float() # * (1 + 10 * vhard_an.float())
# # if self.sigma > 0:
# # vhard_ap = (dist_mat > (self.sigma * d_inner.mean().detach())) & mask # d_inner.flatten()
# # # vhard_ap = (dist_mat > (mean_ap + self.sigma * stdg_ap)) & mask # d_inner.flatten()
# # dist_mat = dist_mat * (1 - vhard_ap.float())
# # else:
# # vhard_an = (dist_mat < (mean_an - (-self.sigma) * stdg_an)) & (~mask)
# # dist_mat = dist_mat + 100 * vhard_an.float() # * (1 + 10 * vhard_an.float())
# # if vhard_ap.sum() + vhard_an.sum() > 0:
# # self.logger.info('Hard example:p{} {} \nn{} {}\ndist bank {}'.format(
# # vhard_ap.sum(), org_dist[vhard_ap], vhard_an.sum(), org_dist[vhard_an],
# # self.dist_bank[-1, :]))
# # if vhard_ap.sum() > 0:
# # # vhard_ap = vhard_ap
# # self.logger.info('hp:{:.3f} {:.3f}/{:.3f}'.format(
# # vhard_ap.sum(), org_dist[vhard_ap].min(), org_dist[vhard_ap].max()))
# # # print('hp:', vhard_ap.sum(), org_dist[vhard_ap])
# # pass
# # if vhard_an.sum() > 0:
# # # vhard_an = vhard_an
# # self.logger.info('hn:{:.3f} {:.3f}/{:.3f}'.format(
# # vhard_an.sum(), org_dist[vhard_an].min(), org_dist[vhard_an].max()))
# # # print('hn:', vhard_an.sum(), org_dist[vhard_an])
# # pass
# else:
# # vhard_an = (dist_mat < (mean_an - (self.sigma) * stdg_an)) & (~mask)
# dist_mat = dist_mat + 0 * (~mask).float()
# dg_ap = self.dist_bank[labels, 0].detach()
# dg_an = self.dist_bank[labels, 1].detach()
#
#
# else:
# # dg_ap = dist_ap.detach() # referance distance
# # dg_an = dist_an.detach()
# pass
#
#
# # batch hard triplet sample
# if self.gamma > 1e-3:
# dist_ap, dist_an = soft_hard_example_mining(
# dist_mat, labels)
# else:
# dist_ap, dist_an = hard_example_mining(
# dist_mat, labels)
# y = dist_an.new().resize_as_(dist_an).fill_(1)
#
# if dg_an is None:
# dg_ap = dist_ap.detach() # referance distance
# dg_an = dist_an.detach()
# # mn = (dist_an/(dg_ap + dist_an)).detach()
# mp = (dist_ap/(dist_ap + dg_an)).detach()
# mn = (dg_an/(dg_ap + dg_an)).detach()
# # mp = (dg_ap/(dg_ap + dg_an)).detach()
# self.count += 1
# if self.count >= 20:
# self.count = 0
# r_distp = d_inner.mean() / global_feat.norm(dim=-1).mean()
# r_distn = d_inter.mean() / global_feat.norm(dim=-1).mean()
# self.logger.info('mp:{:.3f}({:.3f}/{:.3f}) mn:{:.3f}({:.3f}/{:.3f}) rd:{:.3f}/{:.3f} eft:{}'.format(
# mp.mean(), mp.min(), mp.max(), mn.mean(), mn.min(), mn.max(), r_distp, r_distn,
# (1 - dist_ap/(dist_ap + dg_an) < self.margin).sum().item()))
# dist_an = dist_an.detach()
# # dist_ap = dist_ap.detach()
# if self.margin is not None:
# loss = self.ranking_loss(dist_an/(dg_ap.zero_() + dist_an), dist_ap/(dist_ap + dg_an), y)
# else:
# loss = self.ranking_loss(dist_an - dist_ap, y)
# return loss, dist_ap, dist_an
# class RelativeTripletLoss(object):
# """Modified from Tong Xiao's open-reid (https://github.com/Cysu/open-reid).
# Related Triplet Loss theory can be found in paper 'In Defense of the Triplet
# Loss for Person Re-Identification'."""
#
# def __init__(self, margin=None, num_classes=0, num_instances=4, alpha=0.0, beta=0.9, p=1.0, sigma=100.0, gamma=0):
# self.margin = margin
# self.num_classes = num_classes
# self.alpha = alpha
# self.beta = 0.9 # beta
# self.gamma = gamma
# self.num_instances = num_instances
# self.dist_bank = None # records each class's average intra-class and inter-class distances
# self.p = p
# self.count = 0
# self.rdist = 0
# self.sigma = sigma
# self.logger = logging.getLogger("reid_baseline.train")
# if margin is not None:
# self.ranking_loss = nn.MarginRankingLoss(margin=margin)
# else:
# self.ranking_loss = nn.SoftMarginLoss()
#
# def __call__(self, global_feat, labels, normalize_feature=False):
# if self.dist_bank is None:
# self.dist_bank = torch.ones(self.num_classes + 1, 4, device=global_feat.device)
# if normalize_feature:
# global_feat = normalize(global_feat, axis=-1)
# dist_mat = euclidean_dist(global_feat, global_feat)
# dg_ap = None # referance distance
# dg_an = None
# # calculate average inner and inter class distances
# if self.num_classes > 0:
# mask = labels.expand(*labels.shape, *labels.shape) == labels.expand(*labels.shape, *labels.shape).t() # mask = mask.float()
# org_dist = dist_mat.detach()
# d_inner = dist_mat[mask & ~torch.eye(mask.shape[0], dtype=torch.bool, device=labels.device)] \
# .detach().reshape(mask.shape[0], -1)
# d_inter = dist_mat[torch.logical_not(mask)].detach().reshape(mask.shape[0], -1)
# d_inner = d_inner.sort(-1, True)[0] # [:, :math.ceil(self.p * d_inner.shape[-1])]
# d_inter = d_inter.sort(-1, False)[0][:, :math.ceil(self.p * d_inter.shape[-1])]
# # d_inter = (1 - mask)*dist_mat.detach()
# std_inner = d_inner.std() # -1
# std_inter = d_inter.std()
# # print(d_inner.max(), d_inner.mean(), d_inner.min())
# # if d_inner.max() > 3*d_inner.mean():
# # print(d_inner)
# d_inner = d_inner.mean(-1)
# d_inter = d_inter.mean(-1)
# # dg_ap = self.alpha * d_inner + (1 - self.alpha) * self.dist_bank[labels, 0]
# # dg_an = self.alpha * d_inter + (1 - self.alpha) * self.dist_bank[labels, 1]
# # compute the mean and std of the global reference distances
# mean_ap = self.alpha * d_inner.mean() + (1 - self.alpha) * self.dist_bank[-1, 0]
# mean_an = self.alpha * d_inter.mean() + (1 - self.alpha) * self.dist_bank[-1, 1]
# stdg_ap = self.alpha * std_inner + (1 - self.alpha) * self.dist_bank[-1, 2] # labels
# stdg_an = self.alpha * std_inter + (1 - self.alpha) * self.dist_bank[-1, 3]
# # dg_ap = self.alpha * dist_ap.detach() + (1 - self.alpha) * self.dist_bank[labels, 0]
# # dg_an = self.alpha * dist_an.detach() + (1 - self.alpha) * self.dist_bank[labels, 1]
# clabel = labels[::self.num_instances] # label of each class
#
# d_inner = d_inner.reshape(-1,self.num_instances)
# d_inter = d_inter.reshape(-1,self.num_instances)
# new_dist = torch.stack([d_inner.detach(), d_inter.detach()], dim=-1).mean(1) # new average dist
# # new_dist = torch.stack([dist_ap.detach(), dist_an.detach()], dim=-1)\
# # .reshape(-1, self.num_instances, 2).mean(1) # new average dist
#
# # update the per-class reference distances
# self.dist_bank[clabel, :2] = self.beta * self.dist_bank[clabel, :2] + (1 - self.beta) * new_dist
# # update the global mean and std
# if self.dist_bank[-1].std() < 1e-6:
# self.dist_bank[-1, :] = torch.stack([d_inner.mean().detach(), d_inter.mean().detach(),
# std_inner.detach(), std_inter.detach()])
# self.logger.info('initializing dist bank {}'.format(self.dist_bank[-1, :]))
# else:
# self.dist_bank[-1, :] = self.beta * self.dist_bank[-1, :] + (1 - self.beta) * torch.stack(
# [mean_ap.detach(), mean_an.detach(), std_inner.detach(), std_inter.detach()])
# # [d_inner.mean().detach(), d_inter.mean().detach(), std_inner.detach(), std_inter.detach()])
#
# # very hard triplets filter
# dg_ap = self.dist_bank[labels, 0].detach()
# dg_an = self.dist_bank[labels, 1].detach()
#
#
# else:
# # dg_ap = dist_ap.detach() # referance distance
# # dg_an = dist_an.detach()
# pass
#
#
# # batch hard triplet sample
# dist_ap, dist_an = hard_example_mining(
# dist_mat, labels)
# y = dist_an.new().resize_as_(dist_an).fill_(1)
#
# if dg_an is None:
# dg_ap = dist_ap.detach() # referance distance
# dg_an = dist_an.detach()
# # mn = (dist_an/(dg_ap + dist_an)).detach()
# mp = (dist_ap/(dist_ap + dg_an)).detach()
# mn = (dg_an/(dg_ap + dg_an)).detach()
# # mp = (dg_ap/(dg_ap + dg_an)).detach()
# self.count += 1
# if self.count >= 20:
# self.count = 0
# r_distp = d_inner.mean() / global_feat.norm(dim=-1).mean()
# r_distn = d_inter.mean() / global_feat.norm(dim=-1).mean()
# self.logger.info('rd:{:.3f}/{:.3f} eft:{}'.format(
# r_distp, r_distn,
# (1 - dist_ap/(dist_ap + dg_an) + self.alpha < self.margin).sum().item()))
# # dist_an = dist_an.detach()
# # dist_ap = dist_ap.detach()
# if self.margin is not None:
# loss = self.ranking_loss(dist_an/(dg_ap + dist_an) + self.alpha, dist_ap/(dist_ap + dg_an), y)
# else:
# loss = self.ranking_loss(dist_an - dist_ap, y)
# return loss, dist_ap, dist_an
#
#
# class BaseRelativeTripletLoss(object):
# """Modified from Tong Xiao's open-reid (https://github.com/Cysu/open-reid).
# Related Triplet Loss theory can be found in paper 'In Defense of the Triplet
# Loss for Person Re-Identification'."""
#
# def __init__(self, margin=None, num_classes=0, num_instances=4, alpha=0.0, beta=0.9, p=1.0, sigma=100.0, gamma=0):
# self.margin = margin
# self.num_classes = num_classes
# self.alpha = alpha
# self.beta = beta
# self.gamma = gamma
# self.ad_margin = [0, 0]
# self.num_instances = num_instances
# self.dist_bank = None # records each class's average intra-class and inter-class distances
# self.avg_rdist = None
# self.p = p
# self.sigma = sigma
# self.softplus = torch.nn.Softplus(50)
# self.logger = logging.getLogger("reid_baseline.train")
# self.count = 0
# self.rdist = 0
# if margin is not None:
# self.ranking_loss = nn.MarginRankingLoss(margin=min(margin, 1))
# else:
# self.ranking_loss = nn.SoftMarginLoss()
#
# def __call__(self, global_feat, labels, normalize_feature=False):
# if self.avg_rdist is None:
# self.avg_rdist = torch.ones(3, 2, device=global_feat.device)
# if normalize_feature:
# global_feat = normalize(global_feat, axis=-1)
# dist_mat = euclidean_dist(global_feat, global_feat)
# dg_ap = None # referance distance
# dg_an = None
# # calculate average inner and inter class distances
# # if self.num_classes > 0:
# mask = labels.expand(*labels.shape, *labels.shape) == labels.expand(*labels.shape, *labels.shape).t() # mask = mask.float()
# # d_inner = mask*dist_mat.detach()
# # d_inter = (mask.logical_not())*dist_mat.detach()
# # d_inner = dist_mat[mask].detach().reshape(mask.shape[0], -1)
# org_dist = dist_mat.detach()
# d_inner = dist_mat[mask & ~torch.eye(mask.shape[0], dtype=torch.bool, device=labels.device)] \
# .detach().reshape(mask.shape[0], -1)
# d_inter = dist_mat[torch.logical_not(mask)].detach().reshape(mask.shape[0], -1)
# # d_inner = d_inner * ((d_inner < 3 * d_inner.mean()).float())
# # d_inter = d_inter * ((d_inter * 3 > d_inner.mean()).float())
#
# d_inner = d_inner.sort(-1, True)[0] # [:, :math.ceil(self.p * d_inner.shape[-1])]
# d_inter = d_inter.sort(-1, False)[0] # [:, :math.ceil(self.p * d_inter.shape[-1])]
# # d_inter = (1 - mask)*dist_mat.detach()
# std_inner = d_inner.std() # -1
# std_inter = d_inter.std()
# # print(d_inner.max(), d_inner.mean(), d_inner.min())
# # if d_inner.max() > 3*d_inner.mean():
# # print(d_inner)
# dg_ap = d_inner.mean(-1).detach() # mean as the reference distance
# dg_an = d_inter.mean(-1).detach()
# # dg_ap = d_inner[..., 0].detach() # max as the reference distance
# # dg_an = d_inter[..., 0].detach()
# r_distp = d_inner.mean() / global_feat.norm(dim=-1).mean()
# r_distn = d_inter.mean() / global_feat.norm(dim=-1).mean()
# # if self.avg_rdist.std() < 1e-6:
# # self.avg_rdist[:] = torch.stack([r_distp, r_distn]).detach()
# # else:
# # self.avg_rdist[0] = torch.stack([r_distp, r_distn]).detach()
# # self.avg_rdist[1] = self.beta * self.avg_rdist[1] + (1 - self.beta) * self.avg_rdist[0]
# # self.avg_rdist[2] = self.beta * self.avg_rdist[2] + (1 - self.beta) * self.avg_rdist[1]
#
# # batch hard triplet sample
# dist_ap, dist_an = hard_example_mining(
# dist_mat, labels)
# y = dist_an.new().resize_as_(dist_an).fill_(1)
# # dist_an = dist_an.detach()
#
# if dg_an is None:
# dg_ap = dist_ap.detach() # reference distance
# dg_an = dist_an.detach()
# mn = (dist_an/(dg_ap + dist_an)).detach()
# mp = (dist_ap/(dist_ap + dg_an)).detach()
# self.count += 1
# if self.count >= 20:
# self.count = 0
# # self.logger.info('mp:{:.3f}({:.3f}/{:.3f}) mn:{:.3f}({:.3f}/{:.3f}) rdist:{:.3f}/{:.3f}'.format(
# # mp.mean(), mp.min(), mp.max(), mn.mean(), mn.min(), mn.max(), r_distp, r_distn))
# self.logger.info('rd:{:.3f}/{:.3f} eft:{}'.format(
# r_distp, r_distn, (dist_an/(dg_ap + dist_an) + self.alpha - dist_ap/(dist_ap + dg_an) < self.margin).sum().item()))
# # self.avg_rdist.flatten(), self.avg_rdist[2] - self.avg_rdist[1]))
# self.rdist = (dist_ap/(dist_ap + dg_an)).detach()
# if self.margin == 0:
# # if self.margin >= 0.9999:
# # mp = (dist_an/(dg_ap + dist_an) - 0.5).detach()
# # mn = (0.5 - dist_ap/(dist_ap + dg_an)).detach()
#
# # loss = (self.ranking_loss(dist_an/(dg_ap + dist_an), 0.5 - mn.mean().expand(*dist_an.shape), y)
# # + self.ranking_loss(mp.mean().expand(*dist_an.shape) + 0.5, dist_ap/(dist_ap + dg_an), y))
# loss = self.softplus(mn.mean().clamp(0.5 + self.alpha, 1) - dist_an/(dg_ap + dist_an)) \
# + self.softplus(dist_ap/(dist_ap + dg_an) - mp.mean().clamp(0, .5 - self.alpha))
# loss = 2 * loss.mean()
# # loss = (self.ranking_loss(dist_an/(dg_ap + dist_an), mn.mean().expand(*dist_an.shape), y)
# # + self.ranking_loss(mp.mean().expand(*dist_an.shape), dist_ap/(dist_ap + dg_an), y))
# # loss = (self.ranking_loss(2 * dist_an/(dg_ap + dist_an), 2*mn.mean() + torch.ones_like(dist_an, device=dist_an.device), y)
# # + self.ranking_loss(2*mp.mean() + torch.ones_like(dist_an, device=dist_an.device), 2 * dist_ap/(dist_ap + dg_an), y)) / 2
# elif self.margin is not None:
# loss = (self.ranking_loss(dist_an/(dg_ap + dist_an) + self.alpha, dist_ap/(dist_ap + dg_an), y))
# # loss = (self.ranking_loss(2 * dist_an/(dg_ap + dist_an), torch.ones_like(dist_an, device=dist_an.device), y)
# # + self.ranking_loss(torch.ones_like(dist_an, device=dist_an.device), 2 * dist_ap/(dist_ap + dg_an), y)) / 2
# else:
# loss = self.ranking_loss(dist_an - dist_ap, y)
# return loss, dist_ap, dist_an
#
#
# class TightRelativeTripletLoss(object):
# """Modified from Tong Xiao's open-reid (https://github.com/Cysu/open-reid).
# Related Triplet Loss theory can be found in paper 'In Defense of the Triplet
# Loss for Person Re-Identification'."""
#
# # Mainly makes each class more compact by constraining intra-class distances; the inter-class constraint only targets the average inter-class distance, not individual samples
# def __init__(self, margin=None, num_classes=0, num_instances=4, alpha=0.0, beta=0.9, p=1.0, sigma=100.0, gamma=0):
# self.margin = margin
# self.num_classes = num_classes
# self.alpha = alpha
# self.beta = beta
# self.gamma = gamma
# self.ad_margin = [0, 0]
# self.num_instances = num_instances
# self.dist_bank = None # record each class's average intra-class and inter-class distances
# self.avg_rdist = None
# self.p = p
# self.sigma = sigma
# self.softplus = torch.nn.Softplus(50)
# self.logger = logging.getLogger("reid_baseline.train")
# self.count = 0
# if margin is not None:
# self.ranking_loss = nn.MarginRankingLoss(margin=min(margin, 1))
# else:
# self.ranking_loss = nn.SoftMarginLoss()
#
# def __call__(self, global_feat, labels, normalize_feature=False):
# if self.avg_rdist is None:
# self.avg_rdist = torch.ones(3, 2, device=global_feat.device)
# if normalize_feature:
# global_feat = normalize(global_feat, axis=-1)
# dist_mat = euclidean_dist(global_feat, global_feat)
# class_feat = global_feat.reshape(-1, self.num_instances, global_feat.shape[-1]).mean(1)
# inter_dist_mat = euclidean_dist(class_feat , class_feat)
# dg_ap = None # reference distance
# dg_an = None
# # calculate average inner and inter class distances
# # if self.num_classes > 0:
# mask = labels.expand(*labels.shape, *labels.shape) == labels.expand(*labels.shape, *labels.shape).t() # mask = mask.float()
# # d_inner = mask*dist_mat.detach()
# # d_inter = (mask.logical_not())*dist_mat.detach()
# # d_inner = dist_mat[mask].detach().reshape(mask.shape[0], -1)
# org_dist = dist_mat.detach()
# d_inner = dist_mat[mask & ~torch.eye(mask.shape[0], dtype=torch.bool, device=labels.device)] \
# .detach().reshape(mask.shape[0], -1)
# d_inter = inter_dist_mat.detach()
# # d_inter = d_inter.reshape(d_inter.shape[0]//self.num_instances, self.num_instances, -1, self.num_instances).mean((1,3))
# # d_inter = dist_mat[torch.logical_not(mask)].detach().reshape(mask.shape[0], -1)
# # d_inner = d_inner * ((d_inner < 3 * d_inner.mean()).float())
# # d_inter = d_inter * ((d_inter * 3 > d_inner.mean()).float())
#
# d_inner = d_inner.sort(-1, True)[0] # [:, :math.ceil(self.p * d_inner.shape[-1])]
# d_inter = d_inter.sort(-1, False)[0][:, 1:] # [:, :math.ceil(self.p * d_inter.shape[-1])]
#
# std_inner = d_inner.std() # -1
# std_inter = d_inter.std()
# # print(d_inner.max(), d_inner.mean(), d_inner.min())
# # if d_inner.max() > 3*d_inner.mean():
# # print(d_inner)
# # dg_ap = d_inner.mean(-1).detach() # average reference distance
# # dg_an = d_inter.mean(-1).detach()
# dg_ap = d_inner[..., 0].detach() # use the maximum as the reference distance
# dg_an = d_inter[..., 0].detach()
# dg_ap = dg_ap.reshape(self.num_instances, *dg_an.shape).mean(0)
# dg_an = dg_an.expand(self.num_instances, *dg_an.shape).flatten()
# r_distp = d_inner.mean() / global_feat.norm(dim=-1).mean()
# r_distn = d_inter.mean() / global_feat.norm(dim=-1).mean()
# if self.avg_rdist.std() < 1e-6:
# self.avg_rdist[:] = torch.stack([r_distp, r_distn]).detach()
# else:
# self.avg_rdist[0] = torch.stack([r_distp, r_distn]).detach()
# self.avg_rdist[1] = self.beta * self.avg_rdist[1] + (1 - self.beta) * self.avg_rdist[0]
# self.avg_rdist[2] = self.beta * self.avg_rdist[2] + (1 - self.beta) * self.avg_rdist[1]
#
# # batch hard triplet sample
# dist_ap, _ = hard_example_mining(
# dist_mat, labels)
# _, dist_an = hard_example_mining(
# inter_dist_mat, labels[::self.num_instances])
# yp = dist_ap.new().resize_as_(dist_ap).fill_(1)
# yn = dist_an.new().resize_as_(dist_an).fill_(1)
#
# if dg_an is None:
# dg_ap = dist_ap.detach() # reference distance
# dg_an = dist_an.detach()
# mn = (dist_an/(dg_ap + dist_an)).detach()
# mp = (dist_ap/(dist_ap + dg_an)).detach()
# self.count += 1
# if self.count >= 20:
# self.count = 0
# self.logger.info('mp:{:.3f}({:.3f}/{:.3f}) mn:{:.3f}({:.3f}/{:.3f}) rd:{:.3f}/{:.3f}'.format(
# mp.mean(), mp.min(), mp.max(), mn.mean(), mn.min(), mn.max(), r_distp, r_distn))
# loss = (self.ranking_loss(torch.ones_like(dist_ap, device=dist_ap.device), 2*dist_ap/(dist_ap + dg_an), yp)
# + self.ranking_loss(2*dist_an/(dg_ap + dist_an), torch.ones_like(dist_an, device=dist_an.device), yn)
# # + self.ranking_loss(dist_an, torch.ones_like(dist_an, device=dist_an.device), yn)
# )/2
# return loss, dist_ap, dist_an
#
# # Optimize the triplet formed by the maximum intra-class distance and the minimum inter-class distance
# class ClassRelativeTripletLoss(object):
# """Modified from Tong Xiao's open-reid (https://github.com/Cysu/open-reid).
# Related Triplet Loss theory can be found in paper 'In Defense of the Triplet
# Loss for Person Re-Identification'."""
#
# def __init__(self, margin=None, num_classes=0, num_instances=4, alpha=0.0, beta=0.9, p=1.0, sigma=100.0, gamma=0):
# self.margin = margin
# self.num_classes = num_classes
# self.alpha = alpha
# self.beta = beta
# self.gamma = gamma
# self.ad_margin = [0, 0]
# self.num_instances = num_instances
# self.dist_bank = None # record each class's average intra-class and inter-class distances
# self.avg_rdist = None
# self.p = p
# self.sigma = sigma
# self.softplus = torch.nn.Softplus(50)
# self.logger = logging.getLogger("reid_baseline.train")
# self.count = 0
# if margin is not None:
# self.ranking_loss = nn.MarginRankingLoss(margin=min(margin, 1))
# else:
# self.ranking_loss = nn.SoftMarginLoss()
#
# def __call__(self, global_feat, labels, normalize_feature=False):
# if self.avg_rdist is None:
# self.avg_rdist = torch.ones(3, 2, device=global_feat.device)
# if normalize_feature:
# global_feat = normalize(global_feat, axis=-1)
# dist_mat = euclidean_dist(global_feat, global_feat)
# class_feat = global_feat.reshape(-1, self.num_instances, global_feat.shape[-1]).mean(1)
# inter_dist_mat = euclidean_dist(class_feat , class_feat)
# dg_ap = None # reference distance
# dg_an = None
# # calculate average inner and inter class distances
# # if self.num_classes > 0:
# mask = labels.expand(*labels.shape, *labels.shape) == labels.expand(*labels.shape, *labels.shape).t() # mask = mask.float()
# org_dist = dist_mat.detach()
# d_inner = dist_mat[mask & ~torch.eye(mask.shape[0], dtype=torch.bool, device=labels.device)] \
# .detach().reshape(mask.shape[0], -1)
# d_inter = inter_dist_mat.detach()
#
# d_inner = d_inner.sort(-1, True)[0] # [:, :math.ceil(self.p * d_inner.shape[-1])]
# d_inter = d_inter.sort(-1, False)[0][:, 1:] # [:, :math.ceil(self.p * d_inter.shape[-1])]
#
# std_inner = d_inner.std() # -1
# std_inter = d_inter.std()
# # dg_ap = d_inner[..., 0].detach() # use the maximum as the reference distance
# # dg_an = d_inter[..., 0].detach()
# # dg_ap = dg_ap.reshape(self.num_instances, *dg_an.shape).mean(0)
# # dg_an = dg_an.expand(self.num_instances, *dg_an.shape).flatten()
# r_distp = d_inner.mean() / global_feat.norm(dim=-1).mean()
# r_distn = d_inter.mean() / global_feat.norm(dim=-1).mean()
#
# # batch hard triplet sample
# dist_ap, dist_an = hard_example_mining(
# dist_mat, labels)
# dist_ap = dist_ap.reshape(-1, self.num_instances).max(dim=-1)[0]
# dist_an = dist_an.reshape(-1, self.num_instances).min(dim=-1)[0]
# # dist_ap = self.softplus(dist_mat[mask].reshape(-1, self.num_instances * self.num_instances)) # [C]
#
# # dist_ap, _ = hard_example_mining(
# # dist_mat, labels)
# # _, dist_an = hard_example_mining(
# # inter_dist_mat, labels[::self.num_instances])
# y = dist_ap.new().resize_as_(dist_ap).fill_(1)
# # yn = dist_an.new().resize_as_(dist_an).fill_(1)
#
# if dg_an is None:
# dg_ap = dist_ap.detach() # reference distance
# dg_an = dist_an.detach()
# mn = (dist_an/(dg_ap + dist_an)).detach()
# mp = (dist_ap/(dist_ap + dg_an)).detach()
# self.count += 1
# if self.count >= 20:
# self.count = 0
# self.logger.info('mp:{:.3f}({:.3f}/{:.3f}) mn:{:.3f}({:.3f}/{:.3f}) rd:{:.3f}/{:.3f}'.format(
# mp.mean(), mp.min(), mp.max(), mn.mean(), mn.min(), mn.max(), r_distp, r_distn))
# loss = (self.ranking_loss(2 * dist_an/(dg_ap + dist_an), torch.ones_like(dist_an, device=dist_an.device), y)
# + self.ranking_loss(torch.ones_like(dist_an, device=dist_an.device), 2 * dist_ap/(dist_ap + dg_an), y)
# ) / 2
#
# return loss, dist_ap, dist_an
#
# # Implement an adaptive margin using the focal-loss idea
# class FocalRelativeTripletLoss(object):
# """Modified from Tong Xiao's open-reid (https://github.com/Cysu/open-reid).
# Related Triplet Loss theory can be found in paper 'In Defense of the Triplet
# Loss for Person Re-Identification'."""
#
# def __init__(self, margin=None, num_classes=0, num_instances=4, alpha=0.0, beta=0.9, p=1.0, sigma=100.0, gamma=0):
# self.margin = margin
# self.num_classes = num_classes
# self.alpha = alpha
# self.beta = beta
# self.gamma = gamma
# self.ad_margin = [0, 0]
# self.num_instances = num_instances
# self.dist_bank = None # record each class's average intra-class and inter-class distances
# self.avg_rdist = None
# self.p = p
# self.sigma = sigma
# self.softplus = torch.nn.Softplus(50)
# self.logger = logging.getLogger("reid_baseline.train")
# self.count = 0
# self.rdist = 0
# if margin is not None:
# self.ranking_loss = nn.MarginRankingLoss(margin=min(margin, 1))
# else:
# self.ranking_loss = nn.SoftMarginLoss()
#
# def __call__(self, global_feat, labels, normalize_feature=False):
# if self.dist_bank is None:
# self.dist_bank = torch.ones(self.num_classes + 1, 4, device=global_feat.device)
# if self.avg_rdist is None:
# self.avg_rdist = torch.ones(3, 2, device=global_feat.device)
# if normalize_feature:
# global_feat = normalize(global_feat, axis=-1)
# dist_mat = euclidean_dist(global_feat, global_feat)
# dg_ap = None # reference distance
# dg_an = None
#
# mask = labels.expand(*labels.shape, *labels.shape) == labels.expand(*labels.shape, *labels.shape).t() # mask = mask.float()
# # d_inner = mask*dist_mat.detach()
# # d_inter = (mask.logical_not())*dist_mat.detach()
# # d_inner = dist_mat[mask].detach().reshape(mask.shape[0], -1)
# org_dist = dist_mat.detach()
# d_inner = dist_mat[mask & ~torch.eye(mask.shape[0], dtype=torch.bool, device=labels.device)] \
# .detach().reshape(mask.shape[0], -1)
# d_inter = dist_mat[torch.logical_not(mask)].detach().reshape(mask.shape[0], -1)
#
# d_inner = d_inner.sort(-1, True)[0] # [:, :math.ceil(self.p * d_inner.shape[-1])]
# d_inter = d_inter.sort(-1, False)[0] # [:, :math.ceil(self.p * d_inter.shape[-1])]
# # calculate average inner and inter class distances
# if self.num_classes > 0:
# clabel = labels[::self.num_instances] # label of each class
# dd_inner = d_inner.mean(-1).reshape(-1,self.num_instances)
# dd_inter = d_inter.mean(-1).reshape(-1,self.num_instances)
# new_dist = torch.stack([dd_inner.detach(), dd_inter.detach()], dim=-1).mean(1) # new average dist
# # update the reference distances
# self.dist_bank[clabel, :2] = 0.9 * self.dist_bank[clabel, :2] + (1 - 0.9) * new_dist
# dg_ap = self.dist_bank[labels, 0].detach() # dist_bank + average reference distance
# dg_an = self.dist_bank[labels, 1].detach()
# # dg_ap = d_inner.mean(-1).detach() # average reference distance
# # dg_an = d_inter.mean(-1).detach()
# # dg_ap = d_inner[..., 0].detach() # use the maximum as the reference distance
# # dg_an = d_inter[..., 0].detach()
# r_distp = d_inner.mean() / global_feat.norm(dim=-1).mean()
# r_distn = d_inter.mean() / global_feat.norm(dim=-1).mean()
# # batch hard triplet sample
# dist_ap, dist_an = hard_example_mining(
# dist_mat, labels)
# y = dist_an.new().resize_as_(dist_an).fill_(1)
# # dist_an = dist_an.detach()
#
# if dg_an is None:
# dg_ap = dist_ap.detach() # reference distance
# dg_an = dist_an.detach()
# mn = (dist_an/(dg_ap + dist_an)).detach()
# mp = (dist_ap/(dist_ap + dg_an)).detach()
# self.count += 1
# if self.count >= 20:
# self.count = 0
# # self.logger.info('mp:{:.3f}({:.3f}/{:.3f}) mn:{:.3f}({:.3f}/{:.3f}) rdist:{:.3f}/{:.3f}'.format(
# # mp.mean(), mp.min(), mp.max(), mn.mean(), mn.min(), mn.max(), r_distp, r_distn))
# self.logger.info('rd:{:.3f}/{:.3f} eft:{}'.format(
# r_distp, r_distn, (dist_an/(dg_ap + dist_an) + self.alpha - dist_ap/(dist_ap + dg_an) < self.margin).sum().item()))
# # self.avg_rdist.flatten(), self.avg_rdist[2] - self.avg_rdist[1]))
# self.rdist = (dist_ap/(dist_ap + dg_an)).detach()
# # if self.margin == 0:
# # loss = self.softplus(mn.mean().clamp(0.5 + self.alpha, 1) - dist_an/(dg_ap + dist_an)) \
# # + self.softplus(dist_ap/(dist_ap + dg_an) - mp.mean().clamp(0, .5 - self.alpha))
# # loss = 2 * loss.mean()
# # elif self.margin is not None:
# # loss = (self.ranking_loss(dist_an/(dg_ap + dist_an) + self.alpha, dist_ap/(dist_ap + dg_an), y))
# loss_p = dist_ap/(dist_ap + dg_an)
# loss_n = 1 - dist_an/(dg_ap + dist_an)
# if self.sigma < 1e-6:
# loss_n = loss_n.detach()
# loss = loss_p * ((self.p*loss_p).pow(self.beta).clamp(0, 1).detach()) \
# + loss_n * ((self.p*loss_n).pow(self.beta).clamp(0, 1).detach())
# loss = loss.mean()
# # else:
# # loss = self.ranking_loss(dist_an - dist_ap, y)
# return loss, dist_ap, dist_an
#
#
# class ADMarginRelativeTripletLoss(object):
# """Modified from Tong Xiao's open-reid (https://github.com/Cysu/open-reid).
# Related Triplet Loss theory can be found in paper 'In Defense of the Triplet
# Loss for Person Re-Identification'."""
#
# def __init__(self, margin=None, num_classes=0, num_instances=4, alpha=0.0, beta=0.9, p=1.0, sigma=100.0, gamma=0
# , normalize_feature=False):
# self.margin = margin
# self.num_classes = num_classes
# self.alpha = alpha
# self.beta = beta
# self.gamma = gamma
# self.normalize_feature = normalize_feature
# self.ad_margin = [0, 0]
# self.num_instances = num_instances
# self.dist_bank = None # record each class's average intra-class and inter-class distances
# self.avg_rdist = None
# self.p = p
# self.sigma = sigma
# self.softplus = torch.nn.Softplus(50)
# self.logger = logging.getLogger("reid_baseline.train")
# self.count = 0
# self.rdist = 0
# if margin is not None:
# self.ranking_loss = nn.MarginRankingLoss(margin=min(margin, 1))
# else:
# self.ranking_loss = nn.SoftMarginLoss()
#
# def __call__(self, global_feat, labels, normalize_feature=False):
# if self.avg_rdist is None:
# self.avg_rdist = torch.ones(3, 2, device=global_feat.device)
# if self.normalize_feature:
# global_feat = normalize(global_feat, axis=-1)
# dist_mat = euclidean_dist(global_feat, global_feat)
# dg_ap = None # reference distance
# dg_an = None
# # calculate average inner and inter class distances
# # if self.num_classes > 0:
# mask = labels.expand(*labels.shape, *labels.shape) == labels.expand(*labels.shape, *labels.shape).t() # mask = mask.float()
# org_dist = dist_mat.detach()
# d_inner = dist_mat[mask & ~torch.eye(mask.shape[0], dtype=torch.bool, device=labels.device)] \
# .detach().reshape(mask.shape[0], -1)
# d_inter = dist_mat[torch.logical_not(mask)].detach().reshape(mask.shape[0], -1)
#
# d_inner = d_inner.sort(-1, True)[0] # [:, :math.ceil(self.p * d_inner.shape[-1])]
# d_inter = d_inter.sort(-1, False)[0] # [:, :math.ceil(self.p * d_inter.shape[-1])]
# dg_ap = d_inner.mean(-1).detach() # average reference distance
# dg_an = d_inter.mean(-1).detach()
# # dg_ap = d_inner[..., 0].detach() # use the maximum as the reference distance
# # dg_an = d_inter[..., 0].detach()
# r_distp = d_inner.mean() / global_feat.norm(dim=-1).mean()
# r_distn = d_inter.mean() / global_feat.norm(dim=-1).mean()
# # batch hard triplet sample
# dist_ap, dist_an = hard_example_mining(
# dist_mat, labels)
# y = dist_an.new().resize_as_(dist_an).fill_(1)
# # dist_an = dist_an.detach()
#
# if dg_an is None:
# dg_ap = dist_ap.detach() # reference distance
# dg_an = dist_an.detach()
# mn = (dist_an/(dg_ap + dist_an)).detach()
# mp = (dist_ap/(dist_ap + dg_an)).detach()
# self.rdist = (dist_ap/(dist_ap + dg_an)).detach()
#
# if self.normalize_feature: # normalized triplet
# dist = dist_an - dist_ap
# # loss = self.ranking_loss(dist_an, dist_ap, y)
# elif self.margin is not None: # relative triplet
# dist = dist_an/(dg_ap + dist_an) - dist_ap/(dist_ap + dg_an)
# # loss = (self.ranking_loss(dist_an/(dg_ap + dist_an) + self.alpha, dist_ap/(dist_ap + dg_an), y))
# else:
# loss = self.ranking_loss(dist_an - dist_ap, y)
# if self.margin > 1:
# dist = dist_an - dist_ap
# # loss = torch.clamp_min((self.margin - 1) * dist_ap.detach() - dist, 0)
# loss = torch.clamp_min(torch.clamp_min(
# (dist_an / dist_ap - 1).mean().detach(), (self.margin - 1)) * dist_ap.detach() - dist, 0)
# else:
# loss = torch.clamp_min(torch.clamp_min(dist.mean().detach(), self.margin) - dist, 0)
# self.count += 1
# if self.count >= 20:
# self.count = 0
# # self.logger.info('mp:{:.3f}({:.3f}/{:.3f}) mn:{:.3f}({:.3f}/{:.3f}) rdist:{:.3f}/{:.3f}'.format(
# # mp.mean(), mp.min(), mp.max(), mn.mean(), mn.min(), mn.max(), r_distp, r_distn))
# self.logger.info('rd:{:.3f}/{:.3f} eft:{}'.format(
# r_distp, r_distn, (loss != 0).sum().item()))
# # self.avg_rdist.flatten(), self.avg_rdist[2] - self.avg_rdist[1]))
#
# return loss.mean(), dist_ap, dist_an
#
class CrossEntropyLabelSmooth(nn.Module):
"""Cross entropy loss with label smoothing regularizer.
Reference:
Szegedy et al. Rethinking the Inception Architecture for Computer Vision. CVPR 2016.
Equation: y = (1 - epsilon) * y + epsilon / K.
Args:
num_classes (int): number of classes.
epsilon (float): weight.
"""
def __init__(self, num_classes, epsilon=0.1, use_gpu=True):
super(CrossEntropyLabelSmooth, self).__init__()
self.num_classes = num_classes
self.epsilon = epsilon
self.use_gpu = use_gpu
self.logsoftmax = nn.LogSoftmax(dim=1)
def forward(self, inputs, targets):
"""
Args:
inputs: prediction matrix (before softmax) with shape (batch_size, num_classes)
            targets: ground truth labels with shape (batch_size)
"""
log_probs = self.logsoftmax(inputs)
targets = torch.zeros(log_probs.size()).scatter_(1, targets.unsqueeze(1).data.cpu(), 1)
# if self.use_gpu: targets = targets.cuda()
targets = targets.to(inputs.device)
targets = (1 - self.epsilon) * targets + self.epsilon / self.num_classes
loss = (- targets * log_probs).mean(0).sum()
return loss
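# --- Hypothetical usage sketch (added for illustration; `_label_smooth_example` is
# not part of the original module). It assumes torch/nn are imported as in the rest
# of this file and simply exercises the smoothing equation documented above. ---
def _label_smooth_example():
    criterion = CrossEntropyLabelSmooth(num_classes=10)
    logits = torch.randn(4, 10)          # (batch_size, num_classes) raw scores
    labels = torch.randint(0, 10, (4,))  # (batch_size,) integer class ids
    # each one-hot row becomes (1 - 0.1) at the true class and 0.1 / 10 elsewhere
    return criterion(logits, labels)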
# Placeholder (no-op) losses
class NoneCLS(nn.Module):
def forward(self, x, *args):
return 0 * x.sum()
class NoneTri(nn.Module):
def forward(self, x, *args):
return 0 * x.sum(), 0, 0
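# Hypothetical usage sketch (illustration only): these no-op losses keep a training
# loop uniform when a loss term is disabled, e.g.
#     cls_criterion = CrossEntropyLabelSmooth(num_classes) if use_cls else NoneCLS()
#     loss = cls_criterion(scores, labels)  # contributes 0 but keeps a valid graph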
class CrossEntropyLabelSmoothwithMargin(nn.Module):
"""Cross entropy loss with label smoothing regularizer.
Reference:
Szegedy et al. Rethinking the Inception Architecture for Computer Vision. CVPR 2016.
Equation: y = (1 - epsilon) * y + epsilon / K.
Args:
num_classes (int): number of classes.
epsilon (float): weight.
"""
def __init__(self, num_classes, epsilon=0.1, use_gpu=True):
super(CrossEntropyLabelSmoothwithMargin, self).__init__()
self.num_classes = num_classes
self.epsilon = epsilon
self.use_gpu = use_gpu
self.logsoftmax = nn.LogSoftmax(dim=1)
def forward(self, inputs, targets):
"""
Args:
inputs: prediction matrix (before softmax) with shape (batch_size, num_classes)
            targets: ground truth labels with shape (batch_size)
"""
log_probs = self.logsoftmax(inputs)
targets = torch.zeros(log_probs.size()).scatter_(1, targets.unsqueeze(1).data.cpu(), 1)
if self.use_gpu: targets = targets.cuda()
targets = (1 - self.epsilon) * targets + self.epsilon / self.num_classes
loss = (- targets * log_probs).mean(0).sum()
return loss | 49.235936 | 145 | 0.571436 | 8,062 | 58,640 | 3.942322 | 0.046639 | 0.035491 | 0.022654 | 0.014725 | 0.888116 | 0.869962 | 0.854356 | 0.835195 | 0.815467 | 0.806343 | 0 | 0.020388 | 0.273977 | 58,640 | 1,191 | 146 | 49.235936 | 0.726147 | 0.800392 | 0 | 0.350254 | 0 | 0 | 0.008417 | 0 | 0 | 0 | 0 | 0 | 0.020305 | 1 | 0.096447 | false | 0 | 0.030457 | 0.015228 | 0.243655 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
c37ecbaa1e5d0a1541eab7203099d227ab8ce8ce | 14,649 | py | Python | tests/test_autogen_composition.py | acnebs/alembic | d6b16eb3a7b8c6398236d8d227c336726c8a46e5 | [
"MIT"
] | null | null | null | tests/test_autogen_composition.py | acnebs/alembic | d6b16eb3a7b8c6398236d8d227c336726c8a46e5 | [
"MIT"
] | null | null | null | tests/test_autogen_composition.py | acnebs/alembic | d6b16eb3a7b8c6398236d8d227c336726c8a46e5 | [
"MIT"
] | null | null | null | import re
from alembic import autogenerate
from alembic.migration import MigrationContext
from alembic.testing import eq_
from alembic.testing import TestBase
from ._autogen_fixtures import _default_include_object
from ._autogen_fixtures import AutogenTest
from ._autogen_fixtures import ModelOne
class AutogenerateDiffTest(ModelOne, AutogenTest, TestBase):
__only_on__ = "sqlite"
def test_render_nothing(self):
context = MigrationContext.configure(
connection=self.bind.connect(),
opts={
"compare_type": True,
"compare_server_default": True,
"target_metadata": self.m1,
"upgrade_token": "upgrades",
"downgrade_token": "downgrades",
},
)
template_args = {}
autogenerate._render_migration_diffs(context, template_args)
eq_(
re.sub(r"u'", "'", template_args["upgrades"]),
"""# ### commands auto generated by Alembic - please adjust! ###
pass
# ### end Alembic commands ###""",
)
eq_(
re.sub(r"u'", "'", template_args["downgrades"]),
"""# ### commands auto generated by Alembic - please adjust! ###
pass
# ### end Alembic commands ###""",
)
def test_render_nothing_batch(self):
context = MigrationContext.configure(
connection=self.bind.connect(),
opts={
"compare_type": True,
"compare_server_default": True,
"target_metadata": self.m1,
"upgrade_token": "upgrades",
"downgrade_token": "downgrades",
"alembic_module_prefix": "op.",
"sqlalchemy_module_prefix": "sa.",
"render_as_batch": True,
"include_symbol": lambda name, schema: False,
},
)
template_args = {}
autogenerate._render_migration_diffs(context, template_args)
eq_(
re.sub(r"u'", "'", template_args["upgrades"]),
"""# ### commands auto generated by Alembic - please adjust! ###
pass
# ### end Alembic commands ###""",
)
eq_(
re.sub(r"u'", "'", template_args["downgrades"]),
"""# ### commands auto generated by Alembic - please adjust! ###
pass
# ### end Alembic commands ###""",
)
def test_render_diffs_standard(self):
"""test a full render including indentation"""
template_args = {}
autogenerate._render_migration_diffs(self.context, template_args)
eq_(
re.sub(r"u'", "'", template_args["upgrades"]),
"""# ### commands auto generated by Alembic - please adjust! ###
op.create_table('item',
sa.Column('id', sa.Integer(), nullable=False),
sa.Column('description', sa.String(length=100), nullable=True),
sa.Column('order_id', sa.Integer(), nullable=True),
sa.CheckConstraint('len(description) > 5'),
sa.ForeignKeyConstraint(['order_id'], ['order.order_id'], ),
sa.PrimaryKeyConstraint('id')
)
op.drop_table('extra')
op.add_column('address', sa.Column('street', sa.String(length=50), \
nullable=True))
op.create_unique_constraint('uq_email', 'address', ['email_address'])
op.add_column('order', sa.Column('user_id', sa.Integer(), nullable=True))
op.alter_column('order', 'amount',
existing_type=sa.NUMERIC(precision=8, scale=2),
type_=sa.Numeric(precision=10, scale=2),
nullable=True,
existing_server_default=sa.text('0'))
op.create_foreign_key(None, 'order', 'user', ['user_id'], ['id'])
op.alter_column('user', 'a1',
existing_type=sa.TEXT(),
server_default='x',
existing_nullable=True)
op.alter_column('user', 'name',
existing_type=sa.VARCHAR(length=50),
nullable=False)
op.drop_index('pw_idx', table_name='user')
op.drop_column('user', 'pw')
# ### end Alembic commands ###""",
)
eq_(
re.sub(r"u'", "'", template_args["downgrades"]),
"""# ### commands auto generated by Alembic - please adjust! ###
op.add_column('user', sa.Column('pw', sa.VARCHAR(length=50), \
nullable=True))
op.create_index('pw_idx', 'user', ['pw'], unique=False)
op.alter_column('user', 'name',
existing_type=sa.VARCHAR(length=50),
nullable=True)
op.alter_column('user', 'a1',
existing_type=sa.TEXT(),
server_default=None,
existing_nullable=True)
op.drop_constraint(None, 'order', type_='foreignkey')
op.alter_column('order', 'amount',
existing_type=sa.Numeric(precision=10, scale=2),
type_=sa.NUMERIC(precision=8, scale=2),
nullable=False,
existing_server_default=sa.text('0'))
op.drop_column('order', 'user_id')
op.drop_constraint('uq_email', 'address', type_='unique')
op.drop_column('address', 'street')
op.create_table('extra',
sa.Column('x', sa.CHAR(), nullable=True),
sa.Column('uid', sa.INTEGER(), nullable=True),
sa.ForeignKeyConstraint(['uid'], ['user.id'], )
)
op.drop_table('item')
# ### end Alembic commands ###""",
)
def test_render_diffs_batch(self):
"""test a full render in batch mode including indentation"""
template_args = {}
self.context.opts["render_as_batch"] = True
autogenerate._render_migration_diffs(self.context, template_args)
eq_(
re.sub(r"u'", "'", template_args["upgrades"]),
"""# ### commands auto generated by Alembic - please adjust! ###
op.create_table('item',
sa.Column('id', sa.Integer(), nullable=False),
sa.Column('description', sa.String(length=100), nullable=True),
sa.Column('order_id', sa.Integer(), nullable=True),
sa.CheckConstraint('len(description) > 5'),
sa.ForeignKeyConstraint(['order_id'], ['order.order_id'], ),
sa.PrimaryKeyConstraint('id')
)
op.drop_table('extra')
with op.batch_alter_table('address', schema=None) as batch_op:
batch_op.add_column(sa.Column('street', sa.String(length=50), nullable=True))
batch_op.create_unique_constraint('uq_email', ['email_address'])
with op.batch_alter_table('order', schema=None) as batch_op:
batch_op.add_column(sa.Column('user_id', sa.Integer(), nullable=True))
batch_op.alter_column('amount',
existing_type=sa.NUMERIC(precision=8, scale=2),
type_=sa.Numeric(precision=10, scale=2),
nullable=True,
existing_server_default=sa.text('0'))
batch_op.create_foreign_key(None, 'user', ['user_id'], ['id'])
with op.batch_alter_table('user', schema=None) as batch_op:
batch_op.alter_column('a1',
existing_type=sa.TEXT(),
server_default='x',
existing_nullable=True)
batch_op.alter_column('name',
existing_type=sa.VARCHAR(length=50),
nullable=False)
batch_op.drop_index('pw_idx')
batch_op.drop_column('pw')
# ### end Alembic commands ###""", # noqa,
)
eq_(
re.sub(r"u'", "'", template_args["downgrades"]),
"""# ### commands auto generated by Alembic - please adjust! ###
with op.batch_alter_table('user', schema=None) as batch_op:
batch_op.add_column(sa.Column('pw', sa.VARCHAR(length=50), nullable=True))
batch_op.create_index('pw_idx', ['pw'], unique=False)
batch_op.alter_column('name',
existing_type=sa.VARCHAR(length=50),
nullable=True)
batch_op.alter_column('a1',
existing_type=sa.TEXT(),
server_default=None,
existing_nullable=True)
with op.batch_alter_table('order', schema=None) as batch_op:
batch_op.drop_constraint(None, type_='foreignkey')
batch_op.alter_column('amount',
existing_type=sa.Numeric(precision=10, scale=2),
type_=sa.NUMERIC(precision=8, scale=2),
nullable=False,
existing_server_default=sa.text('0'))
batch_op.drop_column('user_id')
with op.batch_alter_table('address', schema=None) as batch_op:
batch_op.drop_constraint('uq_email', type_='unique')
batch_op.drop_column('street')
op.create_table('extra',
sa.Column('x', sa.CHAR(), nullable=True),
sa.Column('uid', sa.INTEGER(), nullable=True),
sa.ForeignKeyConstraint(['uid'], ['user.id'], )
)
op.drop_table('item')
# ### end Alembic commands ###""", # noqa,
)
    def test_imports_maintained(self):
template_args = {}
self.context.opts["render_as_batch"] = True
def render_item(type_, col, autogen_context):
autogen_context.imports.add(
"from mypackage import my_special_import"
)
autogen_context.imports.add("from foobar import bat")
self.context.opts["render_item"] = render_item
autogenerate._render_migration_diffs(self.context, template_args)
eq_(
set(template_args["imports"].split("\n")),
set(
[
"from foobar import bat",
"from mypackage import my_special_import",
]
),
)
class AutogenerateDiffTestWSchema(ModelOne, AutogenTest, TestBase):
__only_on__ = "postgresql"
schema = "test_schema"
def test_render_nothing(self):
context = MigrationContext.configure(
connection=self.bind.connect(),
opts={
"compare_type": True,
"compare_server_default": True,
"target_metadata": self.m1,
"upgrade_token": "upgrades",
"downgrade_token": "downgrades",
"alembic_module_prefix": "op.",
"sqlalchemy_module_prefix": "sa.",
"include_object": lambda name, *args: False,
},
)
template_args = {}
autogenerate._render_migration_diffs(context, template_args)
eq_(
re.sub(r"u'", "'", template_args["upgrades"]),
"""# ### commands auto generated by Alembic - please adjust! ###
pass
# ### end Alembic commands ###""",
)
eq_(
re.sub(r"u'", "'", template_args["downgrades"]),
"""# ### commands auto generated by Alembic - please adjust! ###
pass
# ### end Alembic commands ###""",
)
def test_render_diffs_extras(self):
"""test a full render including indentation (include and schema)"""
template_args = {}
self.context.opts.update(
{
"include_object": _default_include_object,
"include_schemas": True,
}
)
autogenerate._render_migration_diffs(self.context, template_args)
eq_(
re.sub(r"u'", "'", template_args["upgrades"]),
"""# ### commands auto generated by Alembic - please adjust! ###
op.create_table('item',
sa.Column('id', sa.Integer(), nullable=False),
sa.Column('description', sa.String(length=100), nullable=True),
sa.Column('order_id', sa.Integer(), nullable=True),
sa.CheckConstraint('len(description) > 5'),
sa.ForeignKeyConstraint(['order_id'], ['%(schema)s.order.order_id'], ),
sa.PrimaryKeyConstraint('id'),
schema='%(schema)s'
)
op.drop_table('extra', schema='%(schema)s')
op.add_column('address', sa.Column('street', sa.String(length=50), \
nullable=True), schema='%(schema)s')
op.create_unique_constraint('uq_email', 'address', ['email_address'], \
schema='test_schema')
op.add_column('order', sa.Column('user_id', sa.Integer(), nullable=True), \
schema='%(schema)s')
op.alter_column('order', 'amount',
existing_type=sa.NUMERIC(precision=8, scale=2),
type_=sa.Numeric(precision=10, scale=2),
nullable=True,
existing_server_default=sa.text('0'),
schema='%(schema)s')
op.create_foreign_key(None, 'order', 'user', ['user_id'], ['id'], \
source_schema='%(schema)s', referent_schema='%(schema)s')
op.alter_column('user', 'a1',
existing_type=sa.TEXT(),
server_default='x',
existing_nullable=True,
schema='%(schema)s')
op.alter_column('user', 'name',
existing_type=sa.VARCHAR(length=50),
nullable=False,
schema='%(schema)s')
op.drop_index('pw_idx', table_name='user', schema='test_schema')
op.drop_column('user', 'pw', schema='%(schema)s')
# ### end Alembic commands ###"""
% {"schema": self.schema},
)
eq_(
re.sub(r"u'", "'", template_args["downgrades"]),
"""# ### commands auto generated by Alembic - please adjust! ###
op.add_column('user', sa.Column('pw', sa.VARCHAR(length=50), \
autoincrement=False, nullable=True), schema='%(schema)s')
op.create_index('pw_idx', 'user', ['pw'], unique=False, schema='%(schema)s')
op.alter_column('user', 'name',
existing_type=sa.VARCHAR(length=50),
nullable=True,
schema='%(schema)s')
op.alter_column('user', 'a1',
existing_type=sa.TEXT(),
server_default=None,
existing_nullable=True,
schema='%(schema)s')
op.drop_constraint(None, 'order', schema='%(schema)s', type_='foreignkey')
op.alter_column('order', 'amount',
existing_type=sa.Numeric(precision=10, scale=2),
type_=sa.NUMERIC(precision=8, scale=2),
nullable=False,
existing_server_default=sa.text('0'),
schema='%(schema)s')
op.drop_column('order', 'user_id', schema='%(schema)s')
op.drop_constraint('uq_email', 'address', schema='test_schema', type_='unique')
op.drop_column('address', 'street', schema='%(schema)s')
op.create_table('extra',
sa.Column('x', sa.CHAR(length=1), autoincrement=False, nullable=True),
sa.Column('uid', sa.INTEGER(), autoincrement=False, nullable=True),
sa.ForeignKeyConstraint(['uid'], ['%(schema)s.user.id'], \
name='extra_uid_fkey'),
schema='%(schema)s'
)
op.drop_table('item', schema='%(schema)s')
# ### end Alembic commands ###""" # noqa
% {"schema": self.schema},
)
| 39.379032 | 85 | 0.5812 | 1,625 | 14,649 | 5.021538 | 0.094769 | 0.048529 | 0.031863 | 0.029412 | 0.87451 | 0.823284 | 0.776471 | 0.745466 | 0.729289 | 0.673162 | 0 | 0.007592 | 0.26268 | 14,649 | 371 | 86 | 39.485175 | 0.747894 | 0.011946 | 0 | 0.514085 | 1 | 0 | 0.161958 | 0.031039 | 0 | 0 | 0 | 0 | 0 | 1 | 0.056338 | false | 0 | 0.105634 | 0 | 0.197183 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
488f262ce24c375dbb84294d7b19ca15e2db75f9 | 12,971 | py | Python | Stock_Analysis_NYSE_RealTime/Predictive_Analysis.py | vaibhavsharma8/StockMarket | aad0cad7bfc2dc2e52a5e212040097e53f7a8fe2 | [
"MIT"
] | null | null | null | Stock_Analysis_NYSE_RealTime/Predictive_Analysis.py | vaibhavsharma8/StockMarket | aad0cad7bfc2dc2e52a5e212040097e53f7a8fe2 | [
"MIT"
] | null | null | null | Stock_Analysis_NYSE_RealTime/Predictive_Analysis.py | vaibhavsharma8/StockMarket | aad0cad7bfc2dc2e52a5e212040097e53f7a8fe2 | [
"MIT"
] | null | null | null | '''
This is the module which contains all functions required for predictive analysis of selected inputs by the user.
These functions are called as per users requirement in the CLI as well as GUI file.
'''
#---------------------------------------------------------------------------------------
'''
All the libraries needed to build the functions below and to perform the required
operations, calculations, and plotting of results.
'''
#---------------------------------------------------------------------------------------
import pandas as pd
import numpy as np
from sklearn import preprocessing
import matplotlib.pyplot as plt
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
from sklearn import metrics
from tkinter import messagebox
import tkinter as tk
#Defiing the property of matplot charts to set the black background theme.
plt.style.use('dark_background')
# This function performs linear regression for the GUI by taking user inputs and predicting the future price.
def linear_Regression(df1,Price,price,stock_name,Prediction_Days,trainingData):
df1['prediction'] = df1[Price].shift(-1)
df1['Date'] = df1['Date'].values.astype(float)
df1.dropna(inplace=True)
forecast_period = int(Prediction_Days)
X = np.array(df1.drop(['prediction'], 1))
Y = np.array(df1['prediction'])
X = preprocessing.scale(X)
X_prediction = X[-forecast_period:]
x_train, x_test, y_train, y_test = train_test_split(X, Y,train_size=trainingData, test_size=Prediction_Days)
    # This scikit-learn estimator performs the regression on the training data
reg = LinearRegression()
reg.fit(x_train, y_train)
array_Prediction = (reg.predict(X_prediction))
    # Calculating the error values to obtain the accuracy of the prediction
MAE = metrics.mean_absolute_error(y_test,array_Prediction )
MSE = metrics.mean_squared_error(y_test,array_Prediction )
rmse = np.sqrt(metrics.mean_squared_error(y_test,array_Prediction ))
Rsquarevalue = metrics.r2_score(y_test,array_Prediction)
predictedUserPrice = array_Prediction[Prediction_Days-1]
tk.messagebox.showinfo("Prediction (press Ok to see graph)","Predicted price after "+ str(Prediction_Days)+" days after end date is: " + str(round(predictedUserPrice,2)) +
"\n MAE Value is : " + str(round(MAE,3)) + "\n MSE Value is : " + str(round(MSE,3)) + "\n RMSE Value is : " + str(round(rmse,2)) + "\n R square Value is : " + str(round(Rsquarevalue,2)))
df1['Date'] = pd.to_datetime(df1['Date'], format='%Y-%m-%d %H:%M:%S.%f')
df1 = df1.set_index(['Date'])
row_end = df1.tail(1)
date1 = row_end[Price].index.date.item(0) + pd.Timedelta(str(Prediction_Days)+' day')
series = pd.Series(pd.date_range(date1, periods=Prediction_Days, freq='D'))
array_Prediction = pd.DataFrame(data=array_Prediction,
columns=['prediction'])
series = pd.DataFrame(series)
format = '%Y-%m-%d %H:%M:%S'
array_Prediction['Date'] = pd.to_datetime(series[0], format=format)
array_Prediction = array_Prediction.set_index(pd.DatetimeIndex(array_Prediction['Date']))
array_Prediction = array_Prediction.drop('Date', axis=1)
predictAll = df1['prediction']
predictAll = pd.DataFrame(predictAll)
predictAll = pd.concat([predictAll, array_Prediction])
    plt.figure(num='Linear Regression', figsize=(16, 8))
    plt.title(stock_name + ' Prediction Chart for ' + str(Prediction_Days) + ' days', fontsize=9)
    plt.xticks(rotation=90, fontsize=6)
    plt.yticks(fontsize=6)
    plt.xlabel('Date', fontsize=8)
    plt.ylabel('Predicted Price/Close', fontsize=8)
    plt.plot(df1[Price], label=price)
    plt.plot(predictAll, label='Predicted Price')
    plt.legend(loc='best')  # legend is created after the labeled plots so it picks them up
    plt.show()
# This function performs linear regression for the CLI by taking user inputs and predicting the future price.
def linear_Regression_Terminal(df1,Price,price,stock_name,Prediction_Days2,trainingData):
df1['prediction'] = df1[Price].shift(-1)
df1['Date'] = df1['Date'].values.astype(float)
df1.dropna(inplace=True)
forecast_period = int(Prediction_Days2)
X = np.array(df1.drop(['prediction'], 1))
Y = np.array(df1['prediction'])
X = preprocessing.scale(X)
X_prediction = X[-forecast_period:]
x_train, x_test, y_train, y_test = train_test_split(X, Y,train_size=trainingData, test_size=Prediction_Days2)
    # This scikit-learn estimator performs the regression on the training data
reg = LinearRegression()
reg.fit(x_train, y_train)
array_Prediction = (reg.predict(X_prediction))
# Calculating the error values to obtain the accuracy of the prediction
MAE = metrics.mean_absolute_error(y_test,array_Prediction )
MSE = metrics.mean_squared_error(y_test,array_Prediction )
rmse = np.sqrt(metrics.mean_squared_error(y_test,array_Prediction ))
Rsquarevalue = metrics.r2_score(y_test,array_Prediction)
predictedUserPrice = array_Prediction[Prediction_Days2-1]
print("Predicted price after " + str(Prediction_Days2) + " days after end date is: " + str(
round(predictedUserPrice, 2)) +
"\n MAE Value is : " + str(round(MAE, 3)) + "\n MSE Value is : " + str(
round(MSE, 3)) + "\n RMSE Value is : " + str(round(rmse, 2)) + "\n R square Value is : " + str(
round(Rsquarevalue, 2)))
df1['Date'] = pd.to_datetime(df1['Date'], format='%Y-%m-%d %H:%M:%S.%f')
df1 = df1.set_index(['Date'])
row_end = df1.tail(1)
date1 = row_end[Price].index.date.item(0) + pd.Timedelta(str(Prediction_Days2)+' day')
series = pd.Series(pd.date_range(date1, periods=Prediction_Days2, freq='D'))
array_Prediction = pd.DataFrame(data=array_Prediction,
columns=['prediction'])
series = pd.DataFrame(series)
format = '%Y-%m-%d %H:%M:%S'
array_Prediction['Date'] = pd.to_datetime(series[0], format=format)
array_Prediction = array_Prediction.set_index(pd.DatetimeIndex(array_Prediction['Date']))
array_Prediction = array_Prediction.drop('Date', axis=1)
predictAll = df1['prediction']
predictAll = pd.DataFrame(predictAll)
predictAll = pd.concat([predictAll, array_Prediction])
    plt.figure(num='Linear Regression', figsize=(16, 8))
    plt.title(stock_name + ' Prediction Chart for ' + str(Prediction_Days2) + ' days', fontsize=9)
    plt.xticks(rotation=90, fontsize=6)
    plt.yticks(fontsize=6)
    plt.xlabel('Date', fontsize=8)
    plt.ylabel('Predicted Price/Close', fontsize=8)
    plt.plot(df1[Price], label=price)
    plt.plot(predictAll, label='Predicted Price')
    plt.legend(loc='best')  # legend is created after the labeled plots so it picks them up
    plt.show()
# This function performs decision tree regression for the GUI by taking user inputs and predicting the future price.
def decisionTree_Regression(df1,Price,price,stock_name,Prediction_Days,trainingData):
df1['prediction'] = df1[Price].shift(-1)
df1['Date'] = df1['Date'].values.astype(float)
df1.dropna(inplace=True)
forecast_period = int(Prediction_Days)
X = np.array(df1.drop(['prediction'], 1))
Y = np.array(df1['prediction'])
X = preprocessing.scale(X)
X_prediction = X[-forecast_period:]
x_train, x_test, y_train, y_test = train_test_split(X, Y, train_size=trainingData, test_size=Prediction_Days)
    # This scikit-learn estimator performs the decision tree regression on the training data
reg = DecisionTreeRegressor()
reg.fit(x_train, y_train)
array_Prediction = (reg.predict(X_prediction))
# Calculating the error values to obtain the accuracy of the prediction
MAE = metrics.mean_absolute_error(y_test, array_Prediction)
MSE = metrics.mean_squared_error(y_test, array_Prediction)
rmse = np.sqrt(metrics.mean_squared_error(y_test, array_Prediction))
Rsquarevalue = metrics.r2_score(y_test, array_Prediction)
predictedUserPrice = array_Prediction[Prediction_Days - 1]
tk.messagebox.showinfo("Prediction (press Ok to see graph)",
"Predicted price after " + str(Prediction_Days) + " days after end date is: " + str(
round(predictedUserPrice, 2)) +
"\n MAE Value is : " + str(round(MAE, 3)) + "\n MSE Value is : " + str(
round(MSE, 3)) + "\n RMSE Value is : " + str(
round(rmse, 2)) + "\n R square Value is : " + str(round(Rsquarevalue, 2)))
df1['Date'] = pd.to_datetime(df1['Date'], format='%Y-%m-%d %H:%M:%S.%f')
df1 = df1.set_index(['Date'])
row_end = df1.tail(1)
date1 = row_end[Price].index.date.item(0) + pd.Timedelta(str(Prediction_Days) + ' day')
series = pd.Series(pd.date_range(date1, periods=Prediction_Days, freq='D'))
array_Prediction = pd.DataFrame(data=array_Prediction,
columns=['prediction'])
series = pd.DataFrame(series)
format = '%Y-%m-%d %H:%M:%S'
array_Prediction['Date'] = pd.to_datetime(series[0], format=format)
array_Prediction = array_Prediction.set_index(pd.DatetimeIndex(array_Prediction['Date']))
array_Prediction = array_Prediction.drop('Date', axis=1)
predictAll = df1['prediction']
predictAll = pd.DataFrame(predictAll)
predictAll = pd.concat([predictAll, array_Prediction])
    plt.figure(num='Decision Tree Regression', figsize=(16, 8))
    plt.title(stock_name + ' Prediction Chart for ' + str(Prediction_Days) + ' days', fontsize=9)
    plt.xticks(rotation=90, fontsize=6)
    plt.yticks(fontsize=6)
    plt.xlabel('Date', fontsize=8)
    plt.ylabel('Predicted Price/Close', fontsize=8)
    plt.plot(df1[Price], label=price)
    plt.plot(predictAll, label='Predicted Price')
    plt.legend(loc='best')  # legend is created after the labeled plots so it picks them up
    plt.show()
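# This function performs decision tree regression for the CLI by taking user inputs and predicting the future price.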
def decisionTree_Regression_Terminal(df1,Price,price,stock_name,Prediction_Days2,trainingData):
df1['prediction'] = df1[Price].shift(-1)
df1['Date'] = df1['Date'].values.astype(float)
df1.dropna(inplace=True)
forecast_period = int(Prediction_Days2)
X = np.array(df1.drop(['prediction'], 1))
Y = np.array(df1['prediction'])
X_prediction = X[-forecast_period:]
x_train, x_test, y_train, y_test = train_test_split(X, Y, train_size=trainingData, test_size=Prediction_Days2)
    # This scikit-learn estimator performs the decision tree regression on the training data
reg = DecisionTreeRegressor()
reg.fit(x_train, y_train)
array_Prediction = (reg.predict(X_prediction))
# Calculating the error values to obtain the accuracy of the prediction
MAE1 = metrics.mean_absolute_error(y_test, array_Prediction)
MSE1 = metrics.mean_squared_error(y_test, array_Prediction)
rmse1 = np.sqrt(metrics.mean_squared_error(y_test, array_Prediction))
Rsquarevalue1 = metrics.r2_score(y_test, array_Prediction)
predictedUserPrice1 = array_Prediction[Prediction_Days2 - 1]
print("Predicted price after " + str(Prediction_Days2) + " days after end date is: " + str(
round(predictedUserPrice1, 2)) +
"\n MAE Value is : " + str(round(MAE1, 3)) + "\n MSE Value is : " + str(
round(MSE1, 3)) + "\n RMSE Value is : " + str(
round(rmse1, 2)) + "\n R square Value is : " + str(round(Rsquarevalue1, 2)))
df1['Date'] = pd.to_datetime(df1['Date'], format='%Y-%m-%d %H:%M:%S.%f')
df1 = df1.set_index(['Date'])
row_end = df1.tail(1)
date1 = row_end[Price].index.date.item(0) + pd.Timedelta(str(Prediction_Days2) + ' day')
series = pd.Series(pd.date_range(date1, periods=Prediction_Days2, freq='D'))
array_Prediction = pd.DataFrame(data=array_Prediction,
columns=['prediction'])
series = pd.DataFrame(series)
format = '%Y-%m-%d %H:%M:%S'
array_Prediction['Date'] = pd.to_datetime(series[0], format=format)
array_Prediction = array_Prediction.set_index(pd.DatetimeIndex(array_Prediction['Date']))
array_Prediction = array_Prediction.drop('Date', axis=1)
predictAll = df1['prediction']
predictAll = pd.DataFrame(predictAll)
predictAll = pd.concat([predictAll, array_Prediction])
    plt.figure(num='Decision Tree Regression', figsize=(16, 8))
    plt.title(stock_name + ' Prediction Chart for ' + str(Prediction_Days2) + ' days', fontsize=9)
    plt.xticks(rotation=90, fontsize=6)
    plt.yticks(fontsize=6)
    plt.xlabel('Date', fontsize=8)
    plt.ylabel('Predicted Price/Close', fontsize=8)
    plt.plot(df1[Price], label=price)
    plt.plot(predictAll, label='Predicted Price')
    plt.legend(loc='best')  # legend is created after the labeled plots so it picks them up
    plt.show()
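# --- Hypothetical smoke test (added for illustration; not part of the original
# module). It builds a synthetic price series with the 'Date'/'Close' schema that
# the functions above expect, assuming a pandas/scikit-learn version compatible
# with this module. ---
if __name__ == '__main__':
    rng = np.random.default_rng(0)
    demo_df = pd.DataFrame({
        'Date': pd.date_range('2020-01-01', periods=120, freq='D'),
        'Close': 100 + np.cumsum(rng.normal(0, 1, 120)),  # random-walk prices
    })
    # predict 5 days ahead using 80% of the rows for training
    linear_Regression_Terminal(demo_df, 'Close', 'Close', 'DEMO', 5, 0.8)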
#-------------------------------------
'''
END OF MODULE
'''
#------------------------------------- | 55.431624 | 214 | 0.66533 | 1,709 | 12,971 | 4.909889 | 0.121709 | 0.107258 | 0.023835 | 0.038136 | 0.905852 | 0.904421 | 0.899178 | 0.887856 | 0.874628 | 0.874628 | 0 | 0.017655 | 0.187804 | 12,971 | 234 | 215 | 55.431624 | 0.778832 | 0.115257 | 0 | 0.835 | 0 | 0 | 0.129444 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.02 | false | 0 | 0.05 | 0 | 0.07 | 0.01 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
d2b69a3f7b44a52044cf9c02b3cf46e85099bf6a | 16,519 | py | Python | tests/inventory/test_metrics_views.py | janheise/zentral | cd809483573301e7d1aa5d3fc2da2c74a62405ab | [
"Apache-2.0"
] | null | null | null | tests/inventory/test_metrics_views.py | janheise/zentral | cd809483573301e7d1aa5d3fc2da2c74a62405ab | [
"Apache-2.0"
] | null | null | null | tests/inventory/test_metrics_views.py | janheise/zentral | cd809483573301e7d1aa5d3fc2da2c74a62405ab | [
"Apache-2.0"
] | null | null | null | from datetime import datetime, timedelta
from django.urls import reverse
from django.test import TestCase
from prometheus_client.parser import text_string_to_metric_families
from zentral.conf import ConfigDict, settings
from zentral.contrib.inventory.conf import MACOS
from zentral.contrib.inventory.models import MachineSnapshotCommit
class PrometheusViewsTestCase(TestCase):
@classmethod
def setUpTestData(cls):
tree = {
"source": {"module": "tests.zentral.io", "name": "Zentral Tests"},
"serial_number": "0123456789",
"os_version": {'name': 'OS X', 'major': 10, 'minor': 11, 'patch': 1},
"android_apps": [
{"display_name": "AndroidApp1",
"version_name": "1.1"},
{"display_name": "AndroidApp2",
"version_name": "1.2"}
],
"ios_apps": [
{"name": "2Password",
"version": "1.1"},
{"name": "3Password",
"version": "1.2"}
],
"osx_app_instances": [
{'app': {'bundle_id': 'io.zentral.baller',
'bundle_name': 'Baller',
'bundle_version': '123',
'bundle_version_str': '1.2.3'},
'bundle_path': "/Applications/Baller.app"},
{'app': {'bundle_id': 'io.zentral.no',
'bundle_name': 'No',
'bundle_version': '123',
'bundle_version_str': '1.2.3'},
'bundle_path': "/Applications/No.app"}
],
"deb_packages": [
{"name": "deb_package_1", "version": "1.1"},
{"name": "deb_package_2", "version": "1.2"},
],
"program_instances": [
{"program": {"name": "program_1", "version": "1.1"},
"install_source": "tests"},
{"program": {"name": "program_2", "version": "1.2"},
"install_source": "tests"},
],
"last_seen": datetime.utcnow() - timedelta(days=2),
}
_, cls.ms, _ = MachineSnapshotCommit.objects.commit_machine_snapshot_tree(tree)
tree = {
"source": {"module": "tests2.zentral.io", "name": "Zentral Tests2"},
"serial_number": "0123456789",
"os_version": {'name': 'OS X', 'major': 12, 'minor': 2},
"android_apps": [
{"display_name": "AndroidApp1",
"version_name": "2.1"},
{"display_name": "AndroidApp2",
"version_name": "2.2"}
],
"ios_apps": [
{"name": "2Password",
"version": "2.1"},
{"name": "3Password",
"version": "2.2"}
],
"osx_app_instances": [
{'app': {'bundle_id': 'io.zentral.baller',
'bundle_name': 'Baller',
'bundle_version': '123',
'bundle_version_str': '2.3.4'},
'bundle_path': "/Applications/Baller.app"},
{'app': {'bundle_id': 'io.zentral.no',
'bundle_name': 'No',
'bundle_version': '123',
'bundle_version_str': '2.3.4'},
'bundle_path': "/Applications/No.app"}
],
"deb_packages": [
{"name": "deb_package_1", "version": "2.1"},
{"name": "deb_package_2", "version": "2.2"},
],
"program_instances": [
{"program": {"name": "program_1", "version": "2.1"},
"install_source": "tests"},
{"program": {"name": "program_2", "version": "2.2"},
"install_source": "tests"},
],
"last_seen": datetime.utcnow() - timedelta(days=13),
}
_, cls.ms2, _ = MachineSnapshotCommit.objects.commit_machine_snapshot_tree(tree)
def test_prometheus_metrics_403(self):
response = self.client.get(reverse("inventory_metrics:all"))
self.assertEqual(response.status_code, 403)
def test_prometheus_metrics_osx_apps(self):
old_config = settings._collection["apps"]["zentral.contrib.inventory"].pop("metrics_options", None)
settings._collection["apps"]["zentral.contrib.inventory"]["metrics_options"] = ConfigDict({
"osx_apps": {"sources": ["zentral tests"], "bundle_ids": ["io.zentral.baller"]},
})
response = self.client.get(reverse("inventory_metrics:all"),
HTTP_AUTHORIZATION="Bearer CHANGE ME!!!")
self.assertEqual(response.status_code, 200)
seen = False
for family in text_string_to_metric_families(response.content.decode('utf-8')):
if family.name == "zentral_inventory_active_machines_bucket":
continue
self.assertEqual(len(family.samples), 7)
for sample in family.samples:
self.assertEqual(sample.name, "zentral_inventory_osx_apps_bucket")
le = sample.labels["le"]
self.assertEqual(sample.labels,
{'name': 'Baller',
'source_name': self.ms.source.name,
'source_id': str(self.ms.source.pk),
'version': '1.2.3',
'le': le})
if le == "1": # source 1 is 2 days old
self.assertEqual(sample.value, 0)
else:
self.assertEqual(sample.value, 1)
self.assertFalse(seen) # only osx apps
seen = True
self.assertTrue(seen)
if old_config:
settings._collection["apps"]["zentral.contrib.inventory"]["metrics_options"] = old_config
def test_prometheus_metrics_android_apps(self):
old_config = settings._collection["apps"]["zentral.contrib.inventory"].pop("metrics_options", None)
settings._collection["apps"]["zentral.contrib.inventory"]["metrics_options"] = ConfigDict({
"android_apps": {"sources": ["zentral tests2"], "names": ["AndroidApp1"]},
})
response = self.client.get(reverse("inventory_metrics:all"),
HTTP_AUTHORIZATION="Bearer CHANGE ME!!!")
self.assertEqual(response.status_code, 200)
seen = False
for family in text_string_to_metric_families(response.content.decode('utf-8')):
if family.name == "zentral_inventory_active_machines_bucket":
continue
self.assertEqual(len(family.samples), 7)
for sample in family.samples:
self.assertEqual(sample.name, "zentral_inventory_android_apps_bucket")
le = sample.labels["le"]
self.assertEqual(sample.labels,
{'name': 'AndroidApp1',
'source_name': self.ms2.source.name,
'source_id': str(self.ms2.source.pk),
'version': '2.1',
'le': le})
if le in ("1", "7"): # source 2 is 13 days old
self.assertEqual(sample.value, 0)
else:
self.assertEqual(sample.value, 1)
self.assertFalse(seen) # only Android apps
seen = True
self.assertTrue(seen)
if old_config:
settings._collection["apps"]["zentral.contrib.inventory"]["metrics_options"] = old_config
def test_prometheus_metrics_ios_apps(self):
old_config = settings._collection["apps"]["zentral.contrib.inventory"].pop("metrics_options", None)
settings._collection["apps"]["zentral.contrib.inventory"]["metrics_options"] = ConfigDict({
"ios_apps": {"sources": ["zentral tests"], "names": ["3Password"]},
})
response = self.client.get(reverse("inventory_metrics:all"),
HTTP_AUTHORIZATION="Bearer CHANGE ME!!!")
self.assertEqual(response.status_code, 200)
seen = False
for family in text_string_to_metric_families(response.content.decode('utf-8')):
if family.name == "zentral_inventory_active_machines_bucket":
continue
self.assertEqual(len(family.samples), 7)
for sample in family.samples:
self.assertEqual(sample.name, "zentral_inventory_ios_apps_bucket")
le = sample.labels["le"]
self.assertEqual(sample.labels,
{'name': '3Password',
'source_name': self.ms.source.name,
'source_id': str(self.ms.source.pk),
'version': '1.2',
'le': le})
if le == "1": # source 1 is 2 days old
self.assertEqual(sample.value, 0)
else:
self.assertEqual(sample.value, 1)
self.assertFalse(seen) # only iOS apps
seen = True
self.assertTrue(seen)
if old_config:
settings._collection["apps"]["zentral.contrib.inventory"]["metrics_options"] = old_config
def test_prometheus_metrics_deb_packages(self):
old_config = settings._collection["apps"]["zentral.contrib.inventory"].pop("metrics_options", None)
settings._collection["apps"]["zentral.contrib.inventory"]["metrics_options"] = ConfigDict({
"deb_packages": {"sources": ["zentral tests2"], "names": ["deb_package_2"]},
})
response = self.client.get(reverse("inventory_metrics:all"),
HTTP_AUTHORIZATION="Bearer CHANGE ME!!!")
self.assertEqual(response.status_code, 200)
seen = False
for family in text_string_to_metric_families(response.content.decode('utf-8')):
if family.name == "zentral_inventory_active_machines_bucket":
continue
self.assertEqual(len(family.samples), 7)
for sample in family.samples:
self.assertEqual(sample.name, "zentral_inventory_deb_packages_bucket")
le = sample.labels["le"]
self.assertEqual(sample.labels,
{'name': 'deb_package_2',
'source_name': self.ms2.source.name,
'source_id': str(self.ms2.source.pk),
'version': '2.2',
'le': le})
if le in ("1", "7"): # source 2 is 13 days old
self.assertEqual(sample.value, 0)
else:
self.assertEqual(sample.value, 1)
self.assertFalse(seen) # only deb packages
seen = True
self.assertTrue(seen)
if old_config:
settings._collection["apps"]["zentral.contrib.inventory"]["metrics_options"] = old_config
def test_prometheus_metrics_programs(self):
old_config = settings._collection["apps"]["zentral.contrib.inventory"].pop("metrics_options", None)
settings._collection["apps"]["zentral.contrib.inventory"]["metrics_options"] = ConfigDict({
"programs": {"sources": ["zentral tests"], "names": ["program_1"]},
})
response = self.client.get(reverse("inventory_metrics:all"),
HTTP_AUTHORIZATION="Bearer CHANGE ME!!!")
self.assertEqual(response.status_code, 200)
seen = False
for family in text_string_to_metric_families(response.content.decode('utf-8')):
if family.name == "zentral_inventory_active_machines_bucket":
continue
self.assertEqual(len(family.samples), 7)
for sample in family.samples:
self.assertEqual(sample.name, "zentral_inventory_programs_bucket")
le = sample.labels["le"]
self.assertEqual(sample.labels,
{'name': 'program_1',
'source_name': self.ms.source.name,
'source_id': str(self.ms.source.pk),
'version': '1.1',
'le': le})
if le == "1": # source 1 is 2 days old
self.assertEqual(sample.value, 0)
else:
self.assertEqual(sample.value, 1)
self.assertFalse(seen) # only programs
seen = True
self.assertTrue(seen)
if old_config:
settings._collection["apps"]["zentral.contrib.inventory"]["metrics_options"] = old_config
def test_prometheus_metrics_os_versions(self):
old_config = settings._collection["apps"]["zentral.contrib.inventory"].pop("metrics_options", None)
settings._collection["apps"]["zentral.contrib.inventory"]["metrics_options"] = ConfigDict({
"os_versions": {"sources": ["zentral tests2"]}
})
response = self.client.get(reverse("inventory_metrics:all"),
HTTP_AUTHORIZATION="Bearer CHANGE ME!!!")
self.assertEqual(response.status_code, 200)
seen = False
for family in text_string_to_metric_families(response.content.decode('utf-8')):
if family.name == "zentral_inventory_active_machines_bucket":
continue
self.assertEqual(len(family.samples), 7)
for sample in family.samples:
self.assertEqual(sample.name, "zentral_inventory_os_versions_bucket")
le = sample.labels["le"]
self.assertEqual(sample.labels,
{'build': '_',
'major': '12',
'minor': '2',
'name': 'OS X',
'patch': '_',
'source_name': self.ms2.source.name,
'source_id': str(self.ms2.source.pk),
'le': le})
if le in ("1", "7"): # source 2 is 13 days old
self.assertEqual(sample.value, 0)
else:
self.assertEqual(sample.value, 1)
self.assertFalse(seen) # only os versions
seen = True
self.assertTrue(seen)
if old_config:
settings._collection["apps"]["zentral.contrib.inventory"]["metrics_options"] = old_config
def test_prometheus_metrics_active_machines(self):
old_config = settings._collection["apps"]["zentral.contrib.inventory"].pop("metrics_options", None)
settings._collection["apps"]["zentral.contrib.inventory"]["metrics_options"] = ConfigDict({
"os_versions": {"sources": ["zentral tests2"]}
})
response = self.client.get(reverse("inventory_metrics:all"),
HTTP_AUTHORIZATION="Bearer CHANGE ME!!!")
self.assertEqual(response.status_code, 200)
seen = False
for family in text_string_to_metric_families(response.content.decode('utf-8')):
if family.name != "zentral_inventory_active_machines_bucket":
continue
self.assertEqual(len(family.samples), 7)
for sample in family.samples:
self.assertEqual(sample.name, "zentral_inventory_active_machines_bucket")
le = sample.labels["le"]
self.assertEqual(sample.labels,
{'platform': MACOS,
'source_name': self.ms2.source.name,
'source_id': str(self.ms2.source.pk),
'le': le})
if le in ("1", "7"): # source 2 is 13 days old
self.assertEqual(sample.value, 0)
else:
self.assertEqual(sample.value, 1)
self.assertFalse(seen) # only active machines
seen = True
self.assertTrue(seen)
if old_config:
settings._collection["apps"]["zentral.contrib.inventory"]["metrics_options"] = old_config
| 50.827692 | 107 | 0.522489 | 1,579 | 16,519 | 5.270424 | 0.092464 | 0.077505 | 0.070656 | 0.07318 | 0.877073 | 0.869623 | 0.847513 | 0.82324 | 0.807258 | 0.774333 | 0 | 0.020835 | 0.349174 | 16,519 | 324 | 108 | 50.984568 | 0.753232 | 0.016708 | 0 | 0.761147 | 0 | 0 | 0.233485 | 0.07826 | 0 | 0 | 0 | 0 | 0.181529 | 1 | 0.028662 | false | 0.019108 | 0.022293 | 0 | 0.05414 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
d2ccb6d2cb14ebb6829dec8d3a71cec287cc28cc | 11,480 | py | Python | nltools/tests/test_stats.py | elvandy/nltools | 5cba63132e0a6d51302d39ce020d1bac7acc61dc | [
"MIT"
] | null | null | null | nltools/tests/test_stats.py | elvandy/nltools | 5cba63132e0a6d51302d39ce020d1bac7acc61dc | [
"MIT"
] | null | null | null | nltools/tests/test_stats.py | elvandy/nltools | 5cba63132e0a6d51302d39ce020d1bac7acc61dc | [
"MIT"
] | null | null | null | import numpy as np
import pandas as pd
from nltools.stats import (one_sample_permutation,
two_sample_permutation,
correlation_permutation,
matrix_permutation,
downsample,
upsample,
winsorize,
align,
transform_pairwise, _calc_pvalue)
from nltools.simulator import Simulator
from nltools.mask import create_sphere
# import pytest
def test_permutation():
dat = np.random.multivariate_normal([2, 6], [[.5, 2], [.5, 3]], 1000)
x = dat[:, 0]
y = dat[:, 1]
stats = two_sample_permutation(x, y,tail=1,n_permute=1000)
assert (stats['mean'] < -2) & (stats['mean'] > -6) & (stats['p'] < .001)
stats = one_sample_permutation(x-y,tail=1,n_permute=1000)
assert (stats['mean'] < -2) & (stats['mean'] > -6) & (stats['p'] < .001)
stats = correlation_permutation(x, y, metric='pearson',tail=1)
assert (stats['correlation'] > .4) & (stats['correlation']<.85) & (stats['p'] < .001)
stats = correlation_permutation(x, y, metric='spearman',tail=1)
assert (stats['correlation'] > .4) & (stats['correlation']<.85) & (stats['p'] < .001)
stats = correlation_permutation(x, y, metric='kendall',tail=2)
assert (stats['correlation'] > .4) & (stats['correlation']<.85) & (stats['p'] < .001)
# with pytest.raises(ValueError):
# correlation_permutation(x, y, metric='kendall',tail=3)
# with pytest.raises(ValueError):
# correlation_permutation(x, y, metric='doesntwork',tail=3)
s = np.random.normal(0,1,10000)
two_sided = _calc_pvalue(all_p=s, stat=1.96, tail=2)
upper_p = _calc_pvalue(all_p=s, stat=1.96, tail=1)
lower_p = _calc_pvalue(all_p=s, stat=-1.96, tail=1)
sum_p = upper_p + lower_p
np.testing.assert_almost_equal(two_sided, sum_p)
# Test matrix_permutation
dat = np.random.multivariate_normal([2, 6], [[.5, 2], [.5, 3]], 190)
x = dat[:, 0]
y = dat[:, 1]
stats = matrix_permutation(x,y,n_permute=1000)
assert (stats['correlation'] > .4) & (stats['correlation']<.85) & (stats['p'] <.001)
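# --- Hedged aside, not part of the original module ---
# The _calc_pvalue check above encodes a disjoint-tails identity: for a
# threshold t > 0, P(|s| >= t) = P(s >= t) + P(s <= -t), so the two-sided
# p-value equals the upper-tail p at t plus the lower-tail p at -t.
# Standalone numpy version of the same check:
import numpy as np

_s = np.random.normal(0, 1, 10000)
_two_sided = np.mean(np.abs(_s) >= 1.96)
_upper = np.mean(_s >= 1.96)
_lower = np.mean(_s <= -1.96)
np.testing.assert_almost_equal(_two_sided, _upper + _lower)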
def test_downsample():
dat = pd.DataFrame()
dat['x'] = range(0,100)
dat['y'] = np.repeat(range(1,11),10)
assert((dat.groupby('y').mean().values.ravel() == downsample(data=dat['x'],sampling_freq=10,target=1,target_type='hz',method='mean').values).all())
assert((dat.groupby('y').median().values.ravel() == downsample(data=dat['x'],sampling_freq=10,target=1,target_type='hz',method='median').values).all())
# with pytest.raises(ValueError):
# downsample(data=list(dat['x']),sampling_freq=10,target=1,target_type='hz',method='median')
# with pytest.raises(ValueError):
# downsample(data=dat['x'],sampling_freq=10,target=1,target_type='hz',method='doesnotwork')
# with pytest.raises(ValueError):
# downsample(data=dat['x'],sampling_freq=10,target=1,target_type='doesnotwork',method='median')
def test_upsample():
dat = pd.DataFrame()
dat['x'] = range(0,100)
dat['y'] = np.repeat(range(1,11),10)
fs = 2
us = upsample(dat,sampling_freq=1,target=fs,target_type='hz')
assert(dat.shape[0]*fs-fs == us.shape[0])
fs = 3
us = upsample(dat,sampling_freq=1,target=fs,target_type='hz')
assert(dat.shape[0]*fs-fs == us.shape[0])
# with pytest.raises(ValueError):
# upsample(dat,sampling_freq=1,target=fs,target_type='hz',method='doesnotwork')
# with pytest.raises(ValueError):
# upsample(dat,sampling_freq=1,target=fs,target_type='doesnotwork',method='linear')
def test_winsorize():
outlier_test = pd.DataFrame([92, 19, 101, 58, 1053, 91, 26, 78, 10, 13,
-40, 101, 86, 85, 15, 89, 89, 28, -5, 41])
out = winsorize(outlier_test,cutoff={'quantile':[0.05, .95]},
replace_with_cutoff=False).values.squeeze()
correct_result = np.array([92, 19, 101, 58, 101, 91, 26, 78, 10,
13, -5, 101, 86, 85, 15, 89, 89, 28,
-5, 41])
assert(np.sum(out == correct_result) == 20)
out = winsorize(outlier_test,cutoff={'std':[2, 2]},
replace_with_cutoff=False).values.squeeze()
correct_result = np.array([92, 19, 101, 58, 101, 91, 26, 78, 10, 13,
-40, 101, 86, 85, 15, 89, 89, 28, -5, 41])
assert(np.sum(out==correct_result)==20)
out = winsorize(outlier_test,cutoff={'std':[2, 2]},
replace_with_cutoff=True).values.squeeze()
correct_result = np.array([92., 19., 101., 58., 556.97961997, 91., 26.,
78., 10., 13., -40., 101., 86., 85., 15., 89.,
89., 28., -5., 41.])
assert(np.round(np.mean(out)) == np.round(np.mean(correct_result)))
def test_align():
# Test hyperalignment matrix
sim = Simulator()
y = [0, 1]
n_reps = 10
s1 = create_sphere([0, 0, 0], radius=3)
d1 = sim.create_data(y, 1, reps=n_reps, output_dir=None).apply_mask(s1)
d2 = sim.create_data(y, 2, reps=n_reps, output_dir=None).apply_mask(s1)
d3 = sim.create_data(y, 3, reps=n_reps, output_dir=None).apply_mask(s1)
data = [d1.data.T,d2.data.T,d3.data.T]
out = align(data, method='deterministic_srm')
assert len(data) == len(out['transformed'])
assert len(data) == len(out['transformation_matrix'])
assert data[0].shape == out['common_model'].shape
transformed = np.dot(data[0].T,out['transformation_matrix'][0])
np.testing.assert_almost_equal(0,np.sum(out['transformed'][0]-transformed.T))
out = align(data, method='probabilistic_srm')
assert len(data) == len(out['transformed'])
assert len(data) == len(out['transformation_matrix'])
assert data[0].shape == out['common_model'].shape
transformed = np.dot(data[0].T,out['transformation_matrix'][0])
np.testing.assert_almost_equal(0,np.sum(out['transformed'][0]-transformed.T))
out2 = align(data, method='procrustes')
assert len(data) == len(out2['transformed'])
assert data[0].shape == out2['common_model'].shape
assert len(data) == len(out2['transformation_matrix'])
assert len(data) == len(out2['disparity'])
centered = data[0].T-np.mean(data[0].T,0)
transformed = (np.dot(centered/np.linalg.norm(centered), out2['transformation_matrix'][0])*out2['scale'][0])
np.testing.assert_almost_equal(0,np.sum(out2['transformed'][0]-transformed.T))
assert out['transformed'][0].shape == out2['transformed'][0].shape
assert out['transformation_matrix'][0].shape == out2['transformation_matrix'][0].shape
# Test hyperalignment on Brain_Data
data = [d1,d2,d3]
out = align(data, method='deterministic_srm')
assert len(data) == len(out['transformed'])
assert len(data) == len(out['transformation_matrix'])
assert data[0].shape() == out['common_model'].shape()
transformed = np.dot(d1.data,out['transformation_matrix'][0])
np.testing.assert_almost_equal(0,np.sum(out['transformed'][0].data-transformed))
out = align(data, method='probabilistic_srm')
assert len(data) == len(out['transformed'])
assert len(data) == len(out['transformation_matrix'])
assert data[0].shape() == out['common_model'].shape()
transformed = np.dot(d1.data,out['transformation_matrix'][0])
np.testing.assert_almost_equal(0,np.sum(out['transformed'][0].data-transformed))
out2 = align(data, method='procrustes')
assert len(data) == len(out2['transformed'])
assert data[0].shape() == out2['common_model'].shape()
assert len(data) == len(out2['transformation_matrix'])
assert len(data) == len(out2['disparity'])
centered = data[0].data-np.mean(data[0].data,0)
transformed = (np.dot(centered/np.linalg.norm(centered), out2['transformation_matrix'][0])*out2['scale'][0])
np.testing.assert_almost_equal(0,np.sum(out2['transformed'][0].data-transformed))
assert out['transformed'][0].shape() == out2['transformed'][0].shape()
assert out['transformation_matrix'][0].shape == out2['transformation_matrix'][0].shape
# Test hyperalignment on matrix over time (axis=1)
sim = Simulator()
y = [0, 1]
n_reps = 10
s1 = create_sphere([0, 0, 0], radius=5)
d1 = sim.create_data(y, 1, reps=n_reps, output_dir=None).apply_mask(s1)
d2 = sim.create_data(y, 2, reps=n_reps, output_dir=None).apply_mask(s1)
d3 = sim.create_data(y, 3, reps=n_reps, output_dir=None).apply_mask(s1)
data = [d1.data.T,d2.data.T,d3.data.T]
out = align(data, method='deterministic_srm', axis=1)
assert len(data) == len(out['transformed'])
assert len(data) == len(out['transformation_matrix'])
assert data[0].shape == out['common_model'].shape
transformed = np.dot(data[0],out['transformation_matrix'][0])
np.testing.assert_almost_equal(0,np.sum(out['transformed'][0]-transformed))
out = align(data, method='probabilistic_srm', axis=1)
assert len(data) == len(out['transformed'])
assert len(data) == len(out['transformation_matrix'])
assert data[0].shape == out['common_model'].shape
transformed = np.dot(data[0],out['transformation_matrix'][0])
np.testing.assert_almost_equal(0,np.sum(out['transformed'][0]-transformed))
out2 = align(data, method='procrustes', axis=1)
assert len(data) == len(out2['transformed'])
assert data[0].shape == out2['common_model'].shape
assert len(data) == len(out2['transformation_matrix'])
assert len(data) == len(out2['disparity'])
centered = data[0]-np.mean(data[0],0)
transformed = (np.dot(centered/np.linalg.norm(centered), out2['transformation_matrix'][0])*out2['scale'][0])
np.testing.assert_almost_equal(0,np.sum(out2['transformed'][0]-transformed))
assert out['transformed'][0].shape == out2['transformed'][0].shape
assert out['transformation_matrix'][0].shape == out2['transformation_matrix'][0].shape
# Test hyperalignment on Brain_Data over time (axis=1)
data = [d1, d2, d3]
out = align(data, method='deterministic_srm', axis=1)
assert len(data) == len(out['transformed'])
assert len(data) == len(out['transformation_matrix'])
assert data[0].shape() == out['common_model'].shape()
transformed = np.dot(d1.data.T,out['transformation_matrix'][0])
np.testing.assert_almost_equal(0,np.sum(out['transformed'][0].data-transformed.T))
out = align(data, method='probabilistic_srm', axis=1)
assert len(data) == len(out['transformed'])
assert len(data) == len(out['transformation_matrix'])
assert data[0].shape() == out['common_model'].shape()
transformed = np.dot(d1.data.T,out['transformation_matrix'][0])
np.testing.assert_almost_equal(0,np.sum(out['transformed'][0].data-transformed.T))
out2 = align(data, method='procrustes', axis=1)
assert len(data) == len(out2['transformed'])
assert data[0].shape() == out2['common_model'].shape()
assert len(data) == len(out2['transformation_matrix'])
assert len(data) == len(out2['disparity'])
centered = data[0].data.T-np.mean(data[0].data.T,0)
transformed = (np.dot(centered/np.linalg.norm(centered), out2['transformation_matrix'][0])*out2['scale'][0])
np.testing.assert_almost_equal(0,np.sum(out2['transformed'][0].data-transformed.T))
assert out['transformed'][0].shape() == out2['transformed'][0].shape()
assert out['transformation_matrix'][0].shape == out2['transformation_matrix'][0].shape
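# --- Hedged aside, not part of the original module ---
# The procrustes assertions above all apply the same recipe: center the
# source data, scale it to unit Frobenius norm, rotate by the returned
# transformation matrix, then multiply by the returned scale. The same
# recipe in standalone numpy (Q and scale stand in for align()'s
# 'transformation_matrix' and 'scale' outputs):
import numpy as np

X = np.random.rand(20, 5)
Q = np.linalg.qr(np.random.rand(5, 5))[0]  # an arbitrary orthogonal matrix
scale = 2.0
centered = X - X.mean(axis=0)
transformed = np.dot(centered / np.linalg.norm(centered), Q) * scale
assert transformed.shape == X.shape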
def test_transform_pairwise():
n_features = 50
n_samples = 100
# Test without groups
new_n_samples = int(n_samples * (n_samples-1) / 2)
X = np.random.rand(n_samples,n_features)
y = np.random.rand(n_samples,)
x_new, y_new = transform_pairwise(X,y)
assert x_new.shape == (new_n_samples,n_features)
assert y_new.shape == (new_n_samples,)
assert y_new.ndim == 1
# Test with groups
n_subs = 4
new_n_samples = int(n_subs * ((n_samples/n_subs)*(n_samples/n_subs-1))/2)
groups = np.repeat(np.arange(1,1+n_subs), n_samples // n_subs)
y = np.vstack((y,groups)).T
x_new, y_new = transform_pairwise(X,y)
assert x_new.shape == (new_n_samples,n_features)
assert y_new.shape == (new_n_samples,2)
assert y_new.ndim == 2
a = y_new[:,1] == np.repeat(np.arange(1,1+n_subs), (n_samples // n_subs) * (n_samples // n_subs - 1) // 2)
assert a.all()
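# --- Hedged aside, not part of the original module ---
# The pair counts used above come from the handshake formula: n samples
# yield n*(n-1)/2 unordered pairs, and with g equal-sized groups only
# within-group pairs survive, giving g * (n/g) * (n/g - 1) / 2. A quick
# standalone check of the ungrouped count:
from itertools import combinations

assert len(list(combinations(range(100), 2))) == 100 * 99 // 2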
| 46.857143 | 150 | 0.694774 | 1,776 | 11,480 | 4.355856 | 0.099099 | 0.08273 | 0.047053 | 0.057911 | 0.876293 | 0.853154 | 0.847983 | 0.841262 | 0.841262 | 0.815926 | 0 | 0.05185 | 0.107927 | 11,480 | 244 | 151 | 47.04918 | 0.703545 | 0.088502 | 0 | 0.636816 | 0 | 0 | 0.15056 | 0.064362 | 0 | 0 | 0 | 0 | 0.402985 | 1 | 0.029851 | false | 0 | 0.024876 | 0 | 0.054726 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
824d390efee2f9d4f038a92b76b301beea525165 | 21,370 | py | Python | exavault/api/ssh_keys_api.py | ExaVault/evapi-python | 769bfa9fbb683f2b4653ca2564029ffb72445c8c | [
"MIT"
] | null | null | null | exavault/api/ssh_keys_api.py | ExaVault/evapi-python | 769bfa9fbb683f2b4653ca2564029ffb72445c8c | [
"MIT"
] | 3 | 2017-07-13T20:58:05.000Z | 2019-08-02T19:08:37.000Z | exavault/api/ssh_keys_api.py | ExaVault/evapi-python | 769bfa9fbb683f2b4653ca2564029ffb72445c8c | [
"MIT"
] | 4 | 2016-11-16T00:14:23.000Z | 2020-09-24T14:50:46.000Z | # coding: utf-8
"""
ExaVault API
See our API reference documentation at https://www.exavault.com/developer/api-docs/ # noqa: E501
OpenAPI spec version: 2.0
Contact: support@exavault.com
Generated by: https://github.com/swagger-api/swagger-codegen.git
"""
from __future__ import absolute_import
import re # noqa: F401
# python 2 and python 3 compatibility library
import six
from exavault.api_client import ApiClient
class SSHKeysApi(object):
"""NOTE: This class is auto generated by the swagger code generator program.
Do not edit the class manually.
Ref: https://github.com/swagger-api/swagger-codegen
"""
def __init__(self, api_client=None):
if api_client is None:
api_client = ApiClient()
self.api_client = api_client
def add_ssh_key(self, ev_api_key, ev_access_token, **kwargs): # noqa: E501
"""Create a new SSH Key # noqa: E501
Create a new SSH Key for a user. Provide the Public Key as formatted from the ssh-keygen command (openssh format or RFC-4716 format). If you'd prefer to let us generate your key automatically, you can log in to your account via the web portal and set up new keys via the SSH Keys page. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.add_ssh_key(ev_api_key, ev_access_token, async_req=True)
>>> result = thread.get()
:param async_req bool
:param str ev_api_key: API key required to make the API call. (required)
:param str ev_access_token: Access token required to make the API call. (required)
:param AddSSHKeyRequestBody body:
:return: SSHKeyResponse
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async_req'):
return self.add_ssh_key_with_http_info(ev_api_key, ev_access_token, **kwargs) # noqa: E501
else:
(data) = self.add_ssh_key_with_http_info(ev_api_key, ev_access_token, **kwargs) # noqa: E501
return data
def add_ssh_key_with_http_info(self, ev_api_key, ev_access_token, **kwargs): # noqa: E501
"""Create a new SSH Key # noqa: E501
Create a new SSH Key for a user. Provide the Public Key as formatted from the ssh-keygen command (openssh format or RFC-4716 format). If you'd prefer to let us generate your key automatically, you can log in to your account via the web portal and set up new keys via the SSH Keys page. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.add_ssh_key_with_http_info(ev_api_key, ev_access_token, async_req=True)
>>> result = thread.get()
:param async_req bool
:param str ev_api_key: API key required to make the API call. (required)
:param str ev_access_token: Access token required to make the API call. (required)
:param AddSSHKeyRequestBody body:
:return: SSHKeyResponse
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['ev_api_key', 'ev_access_token', 'body'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method add_ssh_key" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'ev_api_key' is set
if ('ev_api_key' not in params or
params['ev_api_key'] is None):
raise ValueError("Missing the required parameter `ev_api_key` when calling `add_ssh_key`") # noqa: E501
# verify the required parameter 'ev_access_token' is set
if ('ev_access_token' not in params or
params['ev_access_token'] is None):
raise ValueError("Missing the required parameter `ev_access_token` when calling `add_ssh_key`") # noqa: E501
collection_formats = {}
path_params = {}
query_params = []
header_params = {}
if 'ev_api_key' in params:
header_params['ev-api-key'] = params['ev_api_key'] # noqa: E501
if 'ev_access_token' in params:
header_params['ev-access-token'] = params['ev_access_token'] # noqa: E501
form_params = []
local_var_files = {}
body_params = None
if 'body' in params:
body_params = params['body']
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['application/json']) # noqa: E501
# Authentication setting
auth_settings = [] # noqa: E501
return self.api_client.call_api(
'/ssh-keys', 'POST',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='SSHKeyResponse', # noqa: E501
auth_settings=auth_settings,
async_req=params.get('async_req'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
def delete_ssh_key(self, id, ev_api_key, ev_access_token, **kwargs): # noqa: E501
"""Delete an SSH Key # noqa: E501
Delete the specified SSH key. This will not delete or deactivate the user tied to the key. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.delete_ssh_key(id, ev_api_key, ev_access_token, async_req=True)
>>> result = thread.get()
:param async_req bool
:param str id: (required)
:param str ev_api_key: API key required to make the API call. (required)
:param str ev_access_token: Access token required to make the API call. (required)
:return: None
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async_req'):
return self.delete_ssh_key_with_http_info(id, ev_api_key, ev_access_token, **kwargs) # noqa: E501
else:
(data) = self.delete_ssh_key_with_http_info(id, ev_api_key, ev_access_token, **kwargs) # noqa: E501
return data
def delete_ssh_key_with_http_info(self, id, ev_api_key, ev_access_token, **kwargs): # noqa: E501
"""Delete an SSH Key # noqa: E501
Delete the specified SSH key. This will not delete or deactivate the user tied to the key. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.delete_ssh_key_with_http_info(id, ev_api_key, ev_access_token, async_req=True)
>>> result = thread.get()
:param async_req bool
:param str id: (required)
:param str ev_api_key: API key required to make the API call. (required)
:param str ev_access_token: Access token required to make the API call. (required)
:return: None
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['id', 'ev_api_key', 'ev_access_token'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method delete_ssh_key" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'id' is set
if ('id' not in params or
params['id'] is None):
raise ValueError("Missing the required parameter `id` when calling `delete_ssh_key`") # noqa: E501
# verify the required parameter 'ev_api_key' is set
if ('ev_api_key' not in params or
params['ev_api_key'] is None):
raise ValueError("Missing the required parameter `ev_api_key` when calling `delete_ssh_key`") # noqa: E501
# verify the required parameter 'ev_access_token' is set
if ('ev_access_token' not in params or
params['ev_access_token'] is None):
raise ValueError("Missing the required parameter `ev_access_token` when calling `delete_ssh_key`") # noqa: E501
collection_formats = {}
path_params = {}
if 'id' in params:
path_params['id'] = params['id'] # noqa: E501
query_params = []
header_params = {}
if 'ev_api_key' in params:
header_params['ev-api-key'] = params['ev_api_key'] # noqa: E501
if 'ev_access_token' in params:
header_params['ev-access-token'] = params['ev_access_token'] # noqa: E501
form_params = []
local_var_files = {}
body_params = None
# Authentication setting
auth_settings = [] # noqa: E501
return self.api_client.call_api(
'/ssh-keys/{id}', 'DELETE',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type=None, # noqa: E501
auth_settings=auth_settings,
async_req=params.get('async_req'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
def get_ssh_key(self, id, ev_api_key, ev_access_token, **kwargs): # noqa: E501
"""Get metadata for an SSH Key # noqa: E501
Return the information for a single SSH Key # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_ssh_key(id, ev_api_key, ev_access_token, async_req=True)
>>> result = thread.get()
:param async_req bool
:param str id: (required)
:param str ev_api_key: API key required to make the API call. (required)
:param str ev_access_token: Access token required to make the API call. (required)
:return: SSHKeyResponse
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async_req'):
return self.get_ssh_key_with_http_info(id, ev_api_key, ev_access_token, **kwargs) # noqa: E501
else:
(data) = self.get_ssh_key_with_http_info(id, ev_api_key, ev_access_token, **kwargs) # noqa: E501
return data
def get_ssh_key_with_http_info(self, id, ev_api_key, ev_access_token, **kwargs): # noqa: E501
"""Get metadata for an SSH Key # noqa: E501
Return the information for a single SSH Key # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_ssh_key_with_http_info(id, ev_api_key, ev_access_token, async_req=True)
>>> result = thread.get()
:param async_req bool
:param str id: (required)
:param str ev_api_key: API key required to make the API call. (required)
:param str ev_access_token: Access token required to make the API call. (required)
:return: SSHKeyResponse
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['id', 'ev_api_key', 'ev_access_token'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method get_ssh_key" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'id' is set
if ('id' not in params or
params['id'] is None):
raise ValueError("Missing the required parameter `id` when calling `get_ssh_key`") # noqa: E501
# verify the required parameter 'ev_api_key' is set
if ('ev_api_key' not in params or
params['ev_api_key'] is None):
raise ValueError("Missing the required parameter `ev_api_key` when calling `get_ssh_key`") # noqa: E501
# verify the required parameter 'ev_access_token' is set
if ('ev_access_token' not in params or
params['ev_access_token'] is None):
raise ValueError("Missing the required parameter `ev_access_token` when calling `get_ssh_key`") # noqa: E501
collection_formats = {}
path_params = {}
if 'id' in params:
path_params['id'] = params['id'] # noqa: E501
query_params = []
header_params = {}
if 'ev_api_key' in params:
header_params['ev-api-key'] = params['ev_api_key'] # noqa: E501
if 'ev_access_token' in params:
header_params['ev-access-token'] = params['ev_access_token'] # noqa: E501
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# Authentication setting
auth_settings = [] # noqa: E501
return self.api_client.call_api(
'/ssh-keys/{id}', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='SSHKeyResponse', # noqa: E501
auth_settings=auth_settings,
async_req=params.get('async_req'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
def get_ssh_keys_list(self, ev_api_key, ev_access_token, **kwargs): # noqa: E501
"""Get metadata for a list of SSH Keys # noqa: E501
Returns a list of SSH Keys within the account. Can be filtered for a single user. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_ssh_keys_list(ev_api_key, ev_access_token, async_req=True)
>>> result = thread.get()
:param async_req bool
:param str ev_api_key: API key required to make the API call. (required)
:param str ev_access_token: Access token required to make the API call. (required)
:param str user_id: Only return results for the given user ID. This is not the username, but the numeric ID of the user.
:param int limit: Limits the results by the given number. Cannot be set higher than 100.
:param int offset: Determines which item to start on for pagination. Use zero (0) to start at the beginning of the list.
:return: SSHKeyCollectionResponse
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async_req'):
return self.get_ssh_keys_list_with_http_info(ev_api_key, ev_access_token, **kwargs) # noqa: E501
else:
(data) = self.get_ssh_keys_list_with_http_info(ev_api_key, ev_access_token, **kwargs) # noqa: E501
return data
def get_ssh_keys_list_with_http_info(self, ev_api_key, ev_access_token, **kwargs): # noqa: E501
"""Get metadata for a list of SSH Keys # noqa: E501
Returns a list of SSH Keys within the account. Can be filtered for a single user. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_ssh_keys_list_with_http_info(ev_api_key, ev_access_token, async_req=True)
>>> result = thread.get()
:param async_req bool
:param str ev_api_key: API key required to make the API call. (required)
:param str ev_access_token: Access token required to make the API call. (required)
:param str user_id: Only return results for the given user ID. This is not the username, but the numeric ID of the user.
:param int limit: Limits the results by the given number. Cannot be set higher than 100.
:param int offset: Determines which item to start on for pagination. Use zero (0) to start at the beginning of the list.
:return: SSHKeyCollectionResponse
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['ev_api_key', 'ev_access_token', 'user_id', 'limit', 'offset'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method get_ssh_keys_list" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'ev_api_key' is set
if ('ev_api_key' not in params or
params['ev_api_key'] is None):
raise ValueError("Missing the required parameter `ev_api_key` when calling `get_ssh_keys_list`") # noqa: E501
# verify the required parameter 'ev_access_token' is set
if ('ev_access_token' not in params or
params['ev_access_token'] is None):
raise ValueError("Missing the required parameter `ev_access_token` when calling `get_ssh_keys_list`") # noqa: E501
collection_formats = {}
path_params = {}
query_params = []
if 'user_id' in params:
query_params.append(('userId', params['user_id'])) # noqa: E501
if 'limit' in params:
query_params.append(('limit', params['limit'])) # noqa: E501
if 'offset' in params:
query_params.append(('offset', params['offset'])) # noqa: E501
header_params = {}
if 'ev_api_key' in params:
header_params['ev-api-key'] = params['ev_api_key'] # noqa: E501
if 'ev_access_token' in params:
header_params['ev-access-token'] = params['ev_access_token'] # noqa: E501
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# Authentication setting
auth_settings = [] # noqa: E501
return self.api_client.call_api(
'/ssh-keys', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='SSHKeyCollectionResponse', # noqa: E501
auth_settings=auth_settings,
async_req=params.get('async_req'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
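# --- Hedged usage sketch, not part of the generated module ---
# Minimal calls against SSHKeysApi, mirroring the docstrings above; the
# API key and access token strings are placeholders.
if __name__ == '__main__':
    api = SSHKeysApi(ApiClient())
    # Synchronous call: returns an SSHKeyCollectionResponse directly.
    keys = api.get_ssh_keys_list('MY-API-KEY', 'MY-ACCESS-TOKEN', limit=10)
    # Asynchronous call: returns a thread; .get() blocks for the result.
    thread = api.get_ssh_keys_list('MY-API-KEY', 'MY-ACCESS-TOKEN',
                                   async_req=True)
    keys = thread.get()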
| 44.707113 | 309 | 0.627468 | 2,816 | 21,370 | 4.51456 | 0.079545 | 0.045937 | 0.040274 | 0.022025 | 0.939904 | 0.931409 | 0.930229 | 0.922678 | 0.914576 | 0.914576 | 0 | 0.015931 | 0.286242 | 21,370 | 477 | 310 | 44.800839 | 0.817544 | 0.385307 | 0 | 0.764706 | 0 | 0 | 0.212279 | 0.023798 | 0 | 0 | 0 | 0 | 0 | 1 | 0.035294 | false | 0 | 0.015686 | 0 | 0.101961 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
828c4118aad72966a6d6d6052a01d4955cb9ecab | 6,769 | py | Python | boto3_type_annotations_with_docs/boto3_type_annotations/neptune/waiter.py | cowboygneox/boto3_type_annotations | 450dce1de4e066b939de7eac2ec560ed1a7ddaa2 | [
"MIT"
] | 119 | 2018-12-01T18:20:57.000Z | 2022-02-02T10:31:29.000Z | boto3_type_annotations_with_docs/boto3_type_annotations/neptune/waiter.py | cowboygneox/boto3_type_annotations | 450dce1de4e066b939de7eac2ec560ed1a7ddaa2 | [
"MIT"
] | 15 | 2018-11-16T00:16:44.000Z | 2021-11-13T03:44:18.000Z | boto3_type_annotations_with_docs/boto3_type_annotations/neptune/waiter.py | cowboygneox/boto3_type_annotations | 450dce1de4e066b939de7eac2ec560ed1a7ddaa2 | [
"MIT"
] | 11 | 2019-05-06T05:26:51.000Z | 2021-09-28T15:27:59.000Z | from typing import Dict
from typing import List
from botocore.waiter import Waiter
class DBInstanceAvailable(Waiter):
def wait(self, DBInstanceIdentifier: str = None, Filters: List = None, MaxRecords: int = None, Marker: str = None, WaiterConfig: Dict = None):
"""
Polls :py:meth:`Neptune.Client.describe_db_instances` every 30 seconds until a successful state is reached. An error is returned after 60 failed checks.
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/neptune-2014-10-31/DescribeDBInstances>`_
**Request Syntax**
::
waiter.wait(
DBInstanceIdentifier='string',
Filters=[
{
'Name': 'string',
'Values': [
'string',
]
},
],
MaxRecords=123,
Marker='string',
WaiterConfig={
'Delay': 123,
'MaxAttempts': 123
}
)
:type DBInstanceIdentifier: string
:param DBInstanceIdentifier:
The user-supplied instance identifier. If this parameter is specified, information from only the specific DB instance is returned. This parameter isn\'t case-sensitive.
Constraints:
* If supplied, must match the identifier of an existing DBInstance.
:type Filters: list
:param Filters:
A filter that specifies one or more DB instances to describe.
Supported filters:
* ``db-cluster-id`` - Accepts DB cluster identifiers and DB cluster Amazon Resource Names (ARNs). The results list will only include information about the DB instances associated with the DB clusters identified by these ARNs.
* ``db-instance-id`` - Accepts DB instance identifiers and DB instance Amazon Resource Names (ARNs). The results list will only include information about the DB instances identified by these ARNs.
- *(dict) --*
This type is not currently supported.
- **Name** *(string) --* **[REQUIRED]**
This parameter is not currently supported.
- **Values** *(list) --* **[REQUIRED]**
This parameter is not currently supported.
- *(string) --*
:type MaxRecords: integer
:param MaxRecords:
The maximum number of records to include in the response. If more records exist than the specified ``MaxRecords`` value, a pagination token called a marker is included in the response so that the remaining results can be retrieved.
Default: 100
Constraints: Minimum 20, maximum 100.
:type Marker: string
:param Marker:
An optional pagination token provided by a previous ``DescribeDBInstances`` request. If this parameter is specified, the response includes only records beyond the marker, up to the value specified by ``MaxRecords`` .
:type WaiterConfig: dict
:param WaiterConfig:
A dictionary that provides parameters to control waiting behavior.
- **Delay** *(integer) --*
The amount of time in seconds to wait between attempts. Default: 30
- **MaxAttempts** *(integer) --*
The maximum number of attempts to be made. Default: 60
:returns: None
"""
pass
class DBInstanceDeleted(Waiter):
def wait(self, DBInstanceIdentifier: str = None, Filters: List = None, MaxRecords: int = None, Marker: str = None, WaiterConfig: Dict = None):
"""
Polls :py:meth:`Neptune.Client.describe_db_instances` every 30 seconds until a successful state is reached. An error is returned after 60 failed checks.
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/neptune-2014-10-31/DescribeDBInstances>`_
**Request Syntax**
::
waiter.wait(
DBInstanceIdentifier='string',
Filters=[
{
'Name': 'string',
'Values': [
'string',
]
},
],
MaxRecords=123,
Marker='string',
WaiterConfig={
'Delay': 123,
'MaxAttempts': 123
}
)
:type DBInstanceIdentifier: string
:param DBInstanceIdentifier:
The user-supplied instance identifier. If this parameter is specified, information from only the specific DB instance is returned. This parameter isn\'t case-sensitive.
Constraints:
* If supplied, must match the identifier of an existing DBInstance.
:type Filters: list
:param Filters:
A filter that specifies one or more DB instances to describe.
Supported filters:
* ``db-cluster-id`` - Accepts DB cluster identifiers and DB cluster Amazon Resource Names (ARNs). The results list will only include information about the DB instances associated with the DB clusters identified by these ARNs.
* ``db-instance-id`` - Accepts DB instance identifiers and DB instance Amazon Resource Names (ARNs). The results list will only include information about the DB instances identified by these ARNs.
- *(dict) --*
This type is not currently supported.
- **Name** *(string) --* **[REQUIRED]**
This parameter is not currently supported.
- **Values** *(list) --* **[REQUIRED]**
This parameter is not currently supported.
- *(string) --*
:type MaxRecords: integer
:param MaxRecords:
The maximum number of records to include in the response. If more records exist than the specified ``MaxRecords`` value, a pagination token called a marker is included in the response so that the remaining results can be retrieved.
Default: 100
Constraints: Minimum 20, maximum 100.
:type Marker: string
:param Marker:
An optional pagination token provided by a previous ``DescribeDBInstances`` request. If this parameter is specified, the response includes only records beyond the marker, up to the value specified by ``MaxRecords`` .
:type WaiterConfig: dict
:param WaiterConfig:
A dictionary that provides parameters to control waiting behavior.
- **Delay** *(integer) --*
The amount of time in seconds to wait between attempts. Default: 30
- **MaxAttempts** *(integer) --*
The maximum number of attempts to be made. Default: 60
:returns: None
"""
pass
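# --- Hedged usage sketch, not part of the annotation stubs ---
# These classes only type-annotate the waiter interface; at runtime the
# equivalent boto3 call (the instance identifier is a placeholder) is:
if __name__ == '__main__':
    import boto3

    client = boto3.client('neptune')
    waiter = client.get_waiter('db_instance_available')
    # Polls DescribeDBInstances every 30 seconds, up to 60 attempts by
    # default; WaiterConfig overrides both knobs.
    waiter.wait(
        DBInstanceIdentifier='my-neptune-instance',
        WaiterConfig={'Delay': 30, 'MaxAttempts': 60},
    )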
| 52.069231 | 241 | 0.608066 | 731 | 6,769 | 5.622435 | 0.221614 | 0.03163 | 0.029197 | 0.033577 | 0.971776 | 0.971776 | 0.971776 | 0.971776 | 0.971776 | 0.971776 | 0 | 0.014203 | 0.313488 | 6,769 | 129 | 242 | 52.472868 | 0.870239 | 0.789925 | 0 | 0.444444 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.222222 | false | 0.222222 | 0.333333 | 0 | 0.777778 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 1 | 0 | 1 | 0 | 0 | 10 |
8296f8fd503e31902b7b11b23c2652e414ff74fe | 204 | py | Python | pytrademonster/objects/__init__.py | femtotrader/pytrademonster | 0bce61a3ed90e3bd438de2bc56b90bbb409490c4 | [
"MIT"
] | null | null | null | pytrademonster/objects/__init__.py | femtotrader/pytrademonster | 0bce61a3ed90e3bd438de2bc56b90bbb409490c4 | [
"MIT"
] | null | null | null | pytrademonster/objects/__init__.py | femtotrader/pytrademonster | 0bce61a3ed90e3bd438de2bc56b90bbb409490c4 | [
"MIT"
] | 1 | 2018-02-23T09:33:58.000Z | 2018-02-23T09:33:58.000Z | from pytrademonster.objects.orderObjects import *
from pytrademonster.objects.accountObjects import *
from pytrademonster.objects.quoteObjects import *
from pytrademonster.objects.positionObjects import * | 51 | 52 | 0.867647 | 20 | 204 | 8.85 | 0.4 | 0.40678 | 0.564972 | 0.525424 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.073529 | 204 | 4 | 52 | 51 | 0.936508 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 8 |
82ae266e26e5919cbedc21cf4255009cac242d62 | 25,746 | py | Python | tests/test_users.py | EncoreTechnologies/py-menandmice | 3233d884744a9df0a8b0781dd3c84845955c5200 | [
"Apache-2.0"
] | 1 | 2017-06-21T12:33:43.000Z | 2017-06-21T12:33:43.000Z | tests/test_users.py | EncoreTechnologies/py-menandmice | 3233d884744a9df0a8b0781dd3c84845955c5200 | [
"Apache-2.0"
] | null | null | null | tests/test_users.py | EncoreTechnologies/py-menandmice | 3233d884744a9df0a8b0781dd3c84845955c5200 | [
"Apache-2.0"
] | null | null | null | # Licensed to the Encore Technologies ("Encore") under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
from base_test import BaseObjectTest
from base_test import BaseTest
from mock import call
from mock import Mock
from mock import patch
import menandmice
from menandmice.users import Group
from menandmice.users import Groups
from menandmice.users import Role
from menandmice.users import Roles
from menandmice.users import User
from menandmice.users import Users
class TestRole(BaseObjectTest):
__test__ = True
def setUp(self):
super(TestRole, self).setUp()
self.obj_class = menandmice.client.Role
self.add_key('ref')
self.add_key('name')
self.add_key('description')
self.add_key('users', []) # list of User()
self.add_key('groups', []) # list of Group()
class TestUser(BaseObjectTest):
__test__ = True
def setUp(self):
super(TestUser, self).setUp()
self.obj_class = menandmice.client.User
self.add_key('ref')
self.add_key('name')
self.add_key('password')
self.add_key('fullName')
self.add_key('description')
self.add_key('email')
self.add_key('authenticationType')
self.add_key('roles', []) # list of Role()
self.add_key('groups', []) # list of Group()
class TestGroup(BaseObjectTest):
__test__ = True
def setUp(self):
super(TestGroup, self).setUp()
self.obj_class = menandmice.client.Group
self.add_key('ref')
self.add_key('name')
self.add_key('description')
self.add_key('adIntegrated')
self.add_key('groupMembers', []) # list of User()
self.add_key('roles', []) # list of Role()
class TestGroups(BaseTest):
def test_init(self):
expected_client = "Test Client"
expected_url_base = "Groups"
expected_entity_class = menandmice.client.Group
expected_get_response_entity_key = "group"
expected_get_response_all_key = "groups"
expected_get_is_singular = False
expected_ref_key = "ref"
obj = Groups(client=expected_client)
self.assertIsInstance(obj, dict)
self.assertIsInstance(obj, menandmice.base.BaseObject)
self.assertIsInstance(obj, menandmice.base.BaseService)
self.assertEqual(obj.client, expected_client)
self.assertEqual(obj.url_base, expected_url_base)
self.assertEqual(obj.entity_class, expected_entity_class)
self.assertEqual(obj.get_response_entity_key, expected_get_response_entity_key)
self.assertEqual(obj.get_response_all_key, expected_get_response_all_key)
self.assertEqual(obj.get_is_singular, expected_get_is_singular)
self.assertEqual(obj.ref_key, expected_ref_key)
@patch("menandmice.users.Groups.get")
def test_add(self, mock_get):
expected_group = "test group"
expected_save_comment = ""
expected_payload = {
"saveComment": expected_save_comment,
"group": expected_group
}
expected_get_refs = ["ref1", "ref2", "ref3"]
expected_get_calls = [call(c) for c in expected_get_refs]
expected_results = ["get_" + ref for ref in expected_get_refs]
expected_base_url = self.url_base
expected_url_base = "Groups"
mock_client = Mock()
mock_client.baseurl = expected_base_url
mock_client.post.return_value = {'result': {'objRefs': expected_get_refs}}
mock_get.side_effect = [[result] for result in expected_results]
obj = Groups(client=mock_client)
results = obj.add(expected_group)
mock_client.post.assert_called_with("{0}{1}".format(expected_base_url,
expected_url_base),
expected_payload)
mock_get.assert_has_calls(expected_get_calls)
self.assertEquals(results, expected_results)
@patch("menandmice.users.Groups.make_query_str")
@patch("menandmice.users.Groups.ref_or_raise")
def test_get_group_roles(self, mock_ref_or_raise, mock_make_query_str):
expected_group = "test group"
expected_roles = [{"ref": "ref1", "name": "name1"},
{"ref": "ref2", "name": "name2"},
{"ref": "ref3", "name": "name3"}, ]
expected_results = [Role(role) for role in expected_roles]
expected_base_url = self.url_base
expected_kwargs = {"test": "value", "int": 123}
expected_group_ref = "Groups/123"
expected_query_str = "?query=xyz"
mock_ref_or_raise.return_value = expected_group_ref
mock_make_query_str.return_value = expected_query_str
mock_client = Mock()
mock_client.baseurl = expected_base_url
mock_client.get.return_value = {'result': {'roles': expected_roles}}
obj = Groups(client=mock_client)
results = obj.get_group_roles(expected_group, **expected_kwargs)
mock_client.get.assert_called_with("{0}{1}/Roles{2}".format(expected_base_url,
expected_group_ref,
expected_query_str))
mock_make_query_str.assert_called_with(**expected_kwargs)
self.assertEquals(results, expected_results)
@patch("menandmice.users.Groups.make_query_str")
@patch("menandmice.users.Groups.ref_or_raise")
def test_delete_group_role(self, mock_ref_or_raise, mock_make_query_str):
expected_group = "test group"
expected_role = "test role"
expected_group_ref = "Groups/123"
expected_role_ref = "Roles/123"
expected_results = "test"
expected_base_url = self.url_base
expected_save_comment = "save_comment"
expected_query_str = "?saveComment=test"
mock_ref_or_raise.side_effect = [expected_group_ref, expected_role_ref]
mock_make_query_str.return_value = expected_query_str
mock_client = Mock()
mock_client.baseurl = expected_base_url
mock_client.delete.return_value = expected_results
obj = Groups(client=mock_client)
results = obj.delete_group_role(expected_group,
expected_role,
expected_save_comment)
mock_client.delete.assert_called_with("{0}{1}/{2}{3}".format(expected_base_url,
expected_group_ref,
expected_role_ref,
expected_query_str))
self.assertEquals(results, expected_results)
@patch("menandmice.users.Groups.ref_or_raise")
def test_add_group_role(self, mock_ref_or_raise):
expected_group = "test group"
expected_role = "test role"
expected_group_ref = "Groups/123"
expected_role_ref = "Roles/123"
expected_results = "test"
expected_base_url = self.url_base
expected_save_comment = "save_comment"
expected_payload = {"saveComment": expected_save_comment}
mock_ref_or_raise.side_effect = [expected_group_ref, expected_role_ref]
mock_client = Mock()
mock_client.baseurl = expected_base_url
mock_client.put.return_value = expected_results
obj = Groups(client=mock_client)
results = obj.add_group_role(expected_group,
expected_role,
expected_save_comment)
mock_client.put.assert_called_with("{0}{1}/{2}".format(expected_base_url,
expected_group_ref,
expected_role_ref),
expected_payload,
True)
self.assertEquals(results, expected_results)
@patch("menandmice.users.Groups.make_query_str")
@patch("menandmice.users.Groups.ref_or_raise")
def test_get_group_users(self, mock_ref_or_raise, mock_make_query_str):
expected_group = "test group"
expected_users = [{"ref": "ref1", "name": "name1"},
{"ref": "ref2", "name": "name2"},
{"ref": "ref3", "name": "name3"}, ]
expected_results = [User(user) for user in expected_users]
expected_base_url = self.url_base
expected_kwargs = {"test": "value", "int": 123}
expected_group_ref = "Groups/123"
expected_query_str = "?query=xyz"
mock_ref_or_raise.return_value = expected_group_ref
mock_make_query_str.return_value = expected_query_str
mock_client = Mock()
mock_client.baseurl = expected_base_url
mock_client.get.return_value = {'result': {'users': expected_users}}
obj = Groups(client=mock_client)
results = obj.get_group_users(expected_group, **expected_kwargs)
mock_client.get.assert_called_with("{0}{1}/Users{2}".format(expected_base_url,
expected_group_ref,
expected_query_str))
mock_make_query_str.assert_called_with(**expected_kwargs)
self.assertEquals(results, expected_results)
@patch("menandmice.users.Groups.make_query_str")
@patch("menandmice.users.Groups.ref_or_raise")
def test_delete_group_user(self, mock_ref_or_raise, mock_make_query_str):
expected_group = "test group"
expected_user = "test user"
expected_group_ref = "Groups/123"
expected_user_ref = "Users/123"
expected_results = "test"
expected_base_url = self.url_base
expected_save_comment = "save_comment"
expected_query_str = "?saveComment=test"
mock_ref_or_raise.side_effect = [expected_group_ref, expected_user_ref]
mock_make_query_str.return_value = expected_query_str
mock_client = Mock()
mock_client.baseurl = expected_base_url
mock_client.delete.return_value = expected_results
obj = Groups(client=mock_client)
results = obj.delete_group_user(expected_group,
expected_user,
expected_save_comment)
mock_client.delete.assert_called_with("{0}{1}/{2}{3}".format(expected_base_url,
expected_group_ref,
expected_user_ref,
expected_query_str))
self.assertEquals(results, expected_results)
@patch("menandmice.users.Groups.ref_or_raise")
def test_add_group_user(self, mock_ref_or_raise):
expected_group = "test group"
expected_user = "test user"
expected_group_ref = "Groups/123"
expected_user_ref = "Users/123"
expected_results = "test"
expected_base_url = self.url_base
expected_save_comment = "save_comment"
expected_payload = {"saveComment": expected_save_comment}
mock_ref_or_raise.side_effect = [expected_group_ref, expected_user_ref]
mock_client = Mock()
mock_client.baseurl = expected_base_url
mock_client.put.return_value = expected_results
obj = Groups(client=mock_client)
results = obj.add_group_user(expected_group,
expected_user,
expected_save_comment)
mock_client.put.assert_called_with("{0}{1}/{2}".format(expected_base_url,
expected_group_ref,
expected_user_ref),
expected_payload,
True)
self.assertEquals(results, expected_results)
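# --- Hedged aside, not part of the original suite ---
# The add() tests in this module all pin down one client contract: POST
# {"saveComment": ..., "<entity key>": ...} to baseurl + url_base, then
# resolve each returned objRef through get(). A standalone rendering of
# that contract with a mocked client (the helper name is hypothetical):
from mock import Mock

def add_entity(client, url_base, entity_key, entity, save_comment=""):
    payload = {"saveComment": save_comment, entity_key: entity}
    response = client.post("{0}{1}".format(client.baseurl, url_base), payload)
    return response["result"]["objRefs"]

_client = Mock()
_client.baseurl = "http://mm.example/api/"
_client.post.return_value = {"result": {"objRefs": ["Groups/1"]}}
assert add_entity(_client, "Groups", "group", {"name": "g"}) == ["Groups/1"]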
class TestRoles(BaseTest):
def test_init(self):
expected_client = "Test Client"
expected_url_base = "Roles"
expected_entity_class = menandmice.client.Role
expected_get_response_entity_key = "role"
expected_get_response_all_key = "roles"
expected_get_is_singular = False
expected_ref_key = "ref"
obj = Roles(client=expected_client)
self.assertIsInstance(obj, dict)
self.assertIsInstance(obj, menandmice.base.BaseObject)
self.assertIsInstance(obj, menandmice.base.BaseService)
self.assertEqual(obj.client, expected_client)
self.assertEqual(obj.url_base, expected_url_base)
self.assertEqual(obj.entity_class, expected_entity_class)
self.assertEqual(obj.get_response_entity_key, expected_get_response_entity_key)
self.assertEqual(obj.get_response_all_key, expected_get_response_all_key)
self.assertEqual(obj.get_is_singular, expected_get_is_singular)
self.assertEqual(obj.ref_key, expected_ref_key)
@patch("menandmice.users.Roles.get")
def test_add(self, mock_get):
expected_role = "test role"
expected_save_comment = ""
expected_payload = {
"saveComment": expected_save_comment,
"role": expected_role
}
expected_get_refs = ["ref1", "ref2", "ref3"]
expected_get_calls = [call(c) for c in expected_get_refs]
expected_results = ["get_" + ref for ref in expected_get_refs]
expected_base_url = self.url_base
expected_url_base = "Roles"
mock_client = Mock()
mock_client.baseurl = expected_base_url
mock_client.post.return_value = {'result': {'objRefs': expected_get_refs}}
mock_get.side_effect = [[result] for result in expected_results]
obj = Roles(client=mock_client)
results = obj.add(expected_role)
mock_client.post.assert_called_with("{0}{1}".format(expected_base_url,
expected_url_base),
expected_payload)
mock_get.assert_has_calls(expected_get_calls)
self.assertEquals(results, expected_results)
@patch("menandmice.users.Roles.make_query_str")
@patch("menandmice.users.Roles.ref_or_raise")
def test_get_role_users(self, mock_ref_or_raise, mock_make_query_str):
expected_role = "test role"
expected_users = [{"ref": "ref1", "name": "name1"},
{"ref": "ref2", "name": "name2"},
{"ref": "ref3", "name": "name3"}]
expected_results = [User(user) for user in expected_users]
expected_base_url = self.url_base
expected_kwargs = {"test": "value", "int": 123}
expected_role_ref = "Roles/123"
expected_query_str = "?query=xyz"
mock_ref_or_raise.return_value = expected_role_ref
mock_make_query_str.return_value = expected_query_str
mock_client = Mock()
mock_client.baseurl = expected_base_url
mock_client.get.return_value = {'result': {'users': expected_users}}
obj = Roles(client=mock_client)
results = obj.get_role_users(expected_role, **expected_kwargs)
mock_client.get.assert_called_with("{0}{1}/Users{2}".format(expected_base_url,
expected_role_ref,
expected_query_str))
mock_make_query_str.assert_called_with(**expected_kwargs)
self.assertEquals(results, expected_results)
@patch("menandmice.users.Roles.make_query_str")
@patch("menandmice.users.Roles.ref_or_raise")
def test_get_role_groups(self, mock_ref_or_raise, mock_make_query_str):
expected_role = "test role"
expected_groups = [{"ref": "ref1", "name": "name1"},
{"ref": "ref2", "name": "name2"},
{"ref": "ref3", "name": "name3"}]
expected_results = [Group(group) for group in expected_groups]
expected_base_url = self.url_base
expected_kwargs = {"test": "value", "int": 123}
expected_role_ref = "Roles/123"
expected_query_str = "?query=xyz"
mock_ref_or_raise.return_value = expected_role_ref
mock_make_query_str.return_value = expected_query_str
mock_client = Mock()
mock_client.baseurl = expected_base_url
mock_client.get.return_value = {'result': {'groups': expected_groups}}
obj = Roles(client=mock_client)
results = obj.get_role_groups(expected_role, **expected_kwargs)
mock_client.get.assert_called_with("{0}{1}/Groups{2}".format(expected_base_url,
expected_role_ref,
expected_query_str))
mock_make_query_str.assert_called_with(**expected_kwargs)
self.assertEquals(results, expected_results)
class TestUsers(BaseTest):
def test_init(self):
expected_client = "Test Client"
expected_url_base = "Users"
expected_entity_class = menandmice.client.User
expected_get_response_entity_key = "user"
expected_get_response_all_key = "users"
expected_get_is_singular = False
expected_ref_key = "ref"
obj = Users(client=expected_client)
self.assertIsInstance(obj, dict)
self.assertIsInstance(obj, menandmice.base.BaseObject)
self.assertIsInstance(obj, menandmice.base.BaseService)
self.assertEqual(obj.client, expected_client)
self.assertEqual(obj.url_base, expected_url_base)
self.assertEqual(obj.entity_class, expected_entity_class)
self.assertEqual(obj.get_response_entity_key, expected_get_response_entity_key)
self.assertEqual(obj.get_response_all_key, expected_get_response_all_key)
self.assertEqual(obj.get_is_singular, expected_get_is_singular)
self.assertEqual(obj.ref_key, expected_ref_key)
@patch("menandmice.users.Users.get")
def test_add(self, mock_get):
expected_user = "test user"
expected_save_comment = ""
expected_payload = {
"saveComment": expected_save_comment,
"user": expected_user
}
expected_get_refs = ["ref1", "ref2", "ref3"]
expected_get_calls = [call(c) for c in expected_get_refs]
expected_results = ["get_" + ref for ref in expected_get_refs]
expected_base_url = self.url_base
expected_url_base = "Users"
mock_client = Mock()
mock_client.baseurl = expected_base_url
mock_client.post.return_value = {'result': {'objRefs': expected_get_refs}}
mock_get.side_effect = [[result] for result in expected_results]
obj = Users(client=mock_client)
results = obj.add(expected_user)
mock_client.post.assert_called_with("{0}{1}".format(expected_base_url,
expected_url_base),
expected_payload)
mock_get.assert_has_calls(expected_get_calls)
        self.assertEqual(results, expected_results)
@patch("menandmice.users.Users.make_query_str")
@patch("menandmice.users.Users.ref_or_raise")
def test_get_user_groups(self, mock_ref_or_raise, mock_make_query_str):
expected_user = "test user"
expected_groups = [{"ref": "ref1", "name": "name1"},
{"ref": "ref2", "name": "name2"},
{"ref": "ref3", "name": "name3"}]
expected_results = [Group(group) for group in expected_groups]
expected_base_url = self.url_base
expected_kwargs = {"test": "value", "int": 123}
expected_user_ref = "Users/123"
expected_query_str = "?query=xyz"
mock_ref_or_raise.return_value = expected_user_ref
mock_make_query_str.return_value = expected_query_str
mock_client = Mock()
mock_client.baseurl = expected_base_url
mock_client.get.return_value = {'result': {'groups': expected_groups}}
obj = Users(client=mock_client)
results = obj.get_user_groups(expected_user, **expected_kwargs)
mock_client.get.assert_called_with("{0}{1}/Groups{2}".format(expected_base_url,
expected_user_ref,
expected_query_str))
mock_make_query_str.assert_called_with(**expected_kwargs)
        self.assertEqual(results, expected_results)
@patch("menandmice.users.Users.make_query_str")
@patch("menandmice.users.Users.ref_or_raise")
def test_get_user_roles(self, mock_ref_or_raise, mock_make_query_str):
expected_user = "test user"
expected_roles = [{"ref": "ref1", "name": "name1"},
{"ref": "ref2", "name": "name2"},
{"ref": "ref3", "name": "name3"}]
expected_results = [Role(role) for role in expected_roles]
expected_base_url = self.url_base
expected_kwargs = {"test": "value", "int": 123}
expected_user_ref = "Users/123"
expected_query_str = "?query=xyz"
mock_ref_or_raise.return_value = expected_user_ref
mock_make_query_str.return_value = expected_query_str
mock_client = Mock()
mock_client.baseurl = expected_base_url
mock_client.get.return_value = {'result': {'roles': expected_roles}}
obj = Users(client=mock_client)
results = obj.get_user_roles(expected_user, **expected_kwargs)
mock_client.get.assert_called_with("{0}{1}/Roles{2}".format(expected_base_url,
expected_user_ref,
expected_query_str))
mock_make_query_str.assert_called_with(**expected_kwargs)
        self.assertEqual(results, expected_results)
@patch("menandmice.users.Users.make_query_str")
@patch("menandmice.users.Users.ref_or_raise")
def test_delete_user_role(self, mock_ref_or_raise, mock_make_query_str):
expected_user = "test user"
expected_role = "test role"
expected_user_ref = "Users/123"
expected_role_ref = "Roles/123"
expected_results = "test"
expected_base_url = self.url_base
expected_save_comment = "save_comment"
expected_query_str = "?saveComment=test"
mock_ref_or_raise.side_effect = [expected_user_ref, expected_role_ref]
mock_make_query_str.return_value = expected_query_str
mock_client = Mock()
mock_client.baseurl = expected_base_url
mock_client.delete.return_value = expected_results
obj = Users(client=mock_client)
results = obj.delete_user_role(expected_user,
expected_role,
expected_save_comment)
mock_client.delete.assert_called_with("{0}{1}/{2}{3}".format(expected_base_url,
expected_user_ref,
expected_role_ref,
expected_query_str))
        self.assertEqual(results, expected_results)
@patch("menandmice.users.Users.ref_or_raise")
def test_add_user_role(self, mock_ref_or_raise):
expected_user = "test user"
expected_role = "test role"
expected_user_ref = "Users/123"
expected_role_ref = "Roles/123"
expected_results = "test"
expected_base_url = self.url_base
expected_save_comment = "save_comment"
expected_payload = {"saveComment": expected_save_comment}
mock_ref_or_raise.side_effect = [expected_user_ref, expected_role_ref]
mock_client = Mock()
mock_client.baseurl = expected_base_url
mock_client.put.return_value = expected_results
obj = Users(client=mock_client)
results = obj.add_user_role(expected_user,
expected_role,
expected_save_comment)
mock_client.put.assert_called_with("{0}{1}/{2}".format(expected_base_url,
expected_user_ref,
expected_role_ref),
expected_payload,
True)
        self.assertEqual(results, expected_results)
| 43.053512 | 89 | 0.612134 | 2,893 | 25,746 | 5.066713 | 0.058071 | 0.051167 | 0.04605 | 0.022923 | 0.910493 | 0.889889 | 0.886819 | 0.858917 | 0.848001 | 0.814231 | 0 | 0.009192 | 0.298571 | 25,746 | 597 | 90 | 43.125628 | 0.802481 | 0.032704 | 0 | 0.829787 | 0 | 0 | 0.097785 | 0.033841 | 0 | 0 | 0 | 0 | 0.146809 | 1 | 0.044681 | false | 0.002128 | 0.025532 | 0 | 0.089362 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
7d870cb5606be593fc00da97911381beb6698831 | 4,491 | py | Python | AccNuker/RemoveFriends.py | Smeezy0605/Cryp | 7d975e48616f0b9070a46c0a36f19a9cd4c90189 | [
"MIT"
] | null | null | null | AccNuker/RemoveFriends.py | Smeezy0605/Cryp | 7d975e48616f0b9070a46c0a36f19a9cd4c90189 | [
"MIT"
] | null | null | null | AccNuker/RemoveFriends.py | Smeezy0605/Cryp | 7d975e48616f0b9070a46c0a36f19a9cd4c90189 | [
"MIT"
] | null | null | null | __pyarmor__(__name__, __file__, b'\x50\x59\x41\x52\x4d\x4f\x52\x00\x00\x03\x09\x00\x61\x0d\x0d\x0a\x09\x2f\xa0\x01\x00\x00\x00\x00\x01\x00\x00\x00\x40\x00\x00\x00\x19\x04\x00\x00\x00\x00\x00\x10\x5d\x1a\x0f\x25\x08\xe5\x43\x81\x70\x71\x53\x36\x36\x77\xea\x35\x00\x00\x00\x00\x00\x00\x00\x00\xbd\xed\xe2\x74\x39\xf5\xd6\x27\x6b\xcc\x46\x84\x6b\x1b\xd3\x58\x1f\xac\x74\x6e\xf5\x5c\x90\x88\x42\x33\x9a\x2f\xb2\x97\xe7\x23\x3f\xb7\xe1\x65\x35\xbf\x86\x4e\x1b\xa6\x0a\xd2\x8e\xa8\xd1\x5a\x81\xfe\xb9\x10\x6a\xa3\x40\x81\x7b\x7b\xf3\x6d\x65\x70\xcf\x9c\x17\x2f\x1e\xf3\x73\x03\x45\xaf\x69\xbd\xed\x5d\xa1\xfa\x87\x6d\x1e\x9b\xb7\xc2\x34\x5e\xcc\xf0\x34\x0e\xf9\x97\x9a\x41\x2d\x99\x77\xae\x00\x93\x59\xdb\x71\x94\x24\x44\xf4\xaa\x33\xa0\x0b\x5a\x4b\x08\x4f\x3a\x15\x7e\x31\x91\x68\x6f\xf2\xc5\x51\x21\xa4\xbd\x23\x77\xd3\x64\x77\x0a\xb4\x83\xb5\xa0\xa8\x42\x2a\x4b\x7f\xad\xc7\xc6\x99\xac\xe8\xd9\xd4\xa3\x02\xc8\x9f\x82\x1a\xcc\xbf\x91\x55\xe2\x4f\x96\xbd\x28\x67\x89\xe1\x83\xf9\xaf\xd5\xd9\x1e\xfa\x3d\x60\x6c\x22\xd9\x36\x61\xed\x33\x92\x2d\x02\x29\x72\xcb\xd9\x6e\x91\x62\x00\x50\x7b\x0a\x24\xb6\xab\x39\x62\x44\xf7\xd1\xdf\x9b\x3d\x6b\x8c\x02\xf1\x25\x4d\x11\x4d\x47\xc1\x72\x25\xae\xe9\x48\xf5\x81\xbb\x70\x5a\x8a\x42\x8e\xf0\x8a\xef\xf9\x98\x4a\x5a\xda\x24\xad\x83\x16\xb2\x8f\x7a\x8c\xf7\x1a\xb4\x75\xf1\x12\x04\x44\x7d\xd7\x2f\x57\x18\x8a\x85\xc3\x70\x9d\x30\xab\xdc\xf9\x79\x05\x5b\xdf\x57\xcc\x68\xe2\x53\x5f\x75\xce\x24\x22\x19\xbb\xf4\xa2\xb6\xaf\x53\x0d\xfc\x04\x74\x39\x11\xef\xaf\x6a\x2b\xa4\xcd\xa3\x3c\x5b\xdd\x81\x31\xe0\x1c\xfb\x15\x8f\x51\xfd\xa8\x5c\x13\x8b\xe5\xcb\x6f\x74\xac\x70\x77\x21\x74\x32\x84\x37\x10\x87\x88\x30\x04\xd6\x9d\x42\xfa\x58\x60\x6f\x90\x7a\xdc\xde\xcc\xdb\x4b\xa8\xec\xc1\xdc\x5a\xd0\x99\x21\x37\xc8\x7b\xe0\x42\x95\xe5\x97\x0c\x3c\x2c\xac\x5a\xcb\x89\x7c\x2d\xb5\x7e\xec\x08\x48\x4d\x6e\xfa\xb1\x07\x9f\x8d\x02\x19\x94\x5f\x99\xc9\x2e\x95\xb8\x5e\x5d\x26\xf9\xec\x14\x27\x7c\xe9\x15\x21\xda\xac\x49\x39\x43\x88\x0f\xa8\x87\x3b\xaf\x80\xbd\xf4\x35\xe9\x5c\xf0\xab\x11\xb8\x16\x83\x3c\x7a\x25\xc7\x77\xe5\x80\x92\x36\xf1\x1f\x62\x15\x62\x4d\xea\x14\xa4\xf4\xa4\x63\xcf\x44\xb1\x27\x6f\x27\x87\x96\x9f\x99\x2e\xc9\x6f\xbb\xc7\x8b\xaf\x51\x93\x1c\x3c\xa6\xd8\x89\x69\xe5\x11\x4b\x35\x9a\x9b\x5e\x0f\xc4\x87\xeb\x5d\x23\x2f\xcb\xb8\xd0\x13\x53\xec\x67\x17\xad\xa5\x4c\x6e\x80\x54\xe4\x68\xff\xe7\x89\x7e\x1f\xc5\xa4\x5e\x74\x0a\x78\xcc\x73\xbc\x0c\x81\x08\x09\xeb\xa6\xf5\xf1\x76\x33\x45\xaa\x92\xb5\x6a\x5f\x0b\x55\xf5\xb2\x6e\x68\x2a\x10\x2f\x84\x28\x68\x96\x0f\x88\x0a\x82\x2b\x19\xc5\xad\x53\x5a\x63\x4b\x87\xab\x2e\x89\x3b\x3d\xc2\xf6\x1e\xfc\x9b\x67\x03\x76\x92\xcc\xc8\x82\x0b\x9d\x88\x3b\x4d\xa8\x5d\xbb\x76\x5d\xea\xca\x65\x5a\xf4\xc1\xbd\x30\xac\x6c\xda\xb6\x75\xb6\xa3\xe9\xf8\x9a\x79\x07\x3a\xbe\x98\x87\x22\x01\x6a\x25\x0a\x9d\xb8\x72\x1e\xe1\x2c\x2c\x7d\x44\x0a\x97\xc8\x46\xdf\x27\x34\xf4\xc1\x36\xda\x81\x36\xd0\x4a\x76\x4c\x35\x48\x56\x86\xba\x56\xb9\x18\xd2\x8f\xc0\x6e\x68\x78\x61\x3e\x0f\xf6\x9f\x57\xe2\xb6\xf0\xd2\x8b\x89\xc3\x2a\xa0\x52\xf8\x04\x3d\x89\x63\x26\xe1\x2d\x08\xde\x8c\x02\x46\xc7\xde\xd2\xc3\x3d\x1c\x02\x29\xaa\x85\xb9\x1c\xeb\x3e\x0a\x5e\x38\xe4\x11\x3f\x20\x81\x72\x88\x8c\x4f\x75\x13\x3f\x4a\xc6\xc1\x2f\xa9\xaf\xca\x92\x50\xab\x64\xa1\x1c\x03\xef\xde\xc6\x11\xe6\x3e\x0f\x32\x83\x63\x19\xcc\x1b\x9d\x08\x1a\x64\x66\x31\x7d\x77\xbf\xec\x86\x94\xbd\x93\xa0\xed\xd2\xd4\xf7\x9e\xfd\x04\x7c\xdd\x7d\xeb\x32\x50\x07\xcf\x35\xd9\xf6\x65\x26\xf3\x81\x86\xd1\xd0\x9a\x0a\x69\xd9\x77\xe7\x8b\x9d\x0a\xe1\x52\x61\xe3\x3e\x82\xc1\x63\x09\xdd\x8e\xab\x15\xf0\x66\x3b
\xd5\xb6\xb9\xdd\x47\x9b\xc6\x34\x90\x44\x6c\xb6\x35\xa3\xdc\xf8\x12\x6a\x64\x47\x88\x7b\x6e\x8e\xee\x38\x64\xfd\x02\xea\x80\x7f\x2a\x76\xc2\x45\x3e\x32\x08\x6d\x89\xf9\x62\xe4\x77\x64\x14\x42\x90\x8c\x06\x02\x34\x8f\xbe\x2e\x7e\x69\xe7\xe0\x6a\xf9\x4a\x3f\x94\x5d\xea\xe7\xda\x85\x06\x00\x3c\xb1\x9e\xe2\x33\x4e\xa0\x2d\xdd\x54\xbe\x11\x52\xa1\xd1\xc5\x8d\x4c\xc0\x03\xbe\xff\x46\xdc\x24\xc1\x77\x79\x11\x51\xa5\x1a\x99\x46\xa0\x01\x3e\x4c\xc6\x07\x08\x39\xa5\xe2\x37\x95\x26\xc2\x73\x42\x0c\x8b\x23\x85\xc7\x3e\x1d\x91\x18\x4f\x7a\xfe\xd0\xd0\x1c\xb4\xc1\xf1\x1c\xd0\x43\xb3\x8e\xde\x1d\xa1\xfb\x08\x69\x7d\x37\x19\xda\x66\x3a\xc5\x23\xda\x0b\xd3\xb2\xb2\x67\x69\xd7\x2a\x9a\x52\x43\x52\xc9\xc9\xc7\x56\x46\xef\x28\xb6\x77\x16\xfb\xba\x14\xd8\x4b\xe8\x0e\x67\xfe\xf4\xcd\x34\x2f\x2b\x33\x23\xfb\x33\x29\xe9\x2a\x63\x42\xf7\x2e\xb0\x34\x33\xee\xb7\xb3\x14\xec\x50\xa1\xf7\x15\x94\x25\xa3\x6f\xcb\xe3\xbb\x75\x6e\x4f\xed\xca\xd7\x64\x21\x37\xb7\x56\xf7\xd3', 2) | 4,491 | 4,491 | 0.749944 | 1,118 | 4,491 | 3.001789 | 0.232558 | 0.033969 | 0.034863 | 0.028605 | 0.011621 | 0.007151 | 0.007151 | 0 | 0 | 0 | 0 | 0.32041 | 0.000668 | 4,491 | 1 | 4,491 | 4,491 | 0.427362 | 0 | 0 | 0 | 0 | 1 | 0.991095 | 0.991095 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | null | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 10 |
7db9d35de5ae72c066ba1bd0e67a5d557c7a42ff | 28,635 | py | Python | boto3_type_annotations_with_docs/boto3_type_annotations/rds/waiter.py | cowboygneox/boto3_type_annotations | 450dce1de4e066b939de7eac2ec560ed1a7ddaa2 | [
"MIT"
] | 119 | 2018-12-01T18:20:57.000Z | 2022-02-02T10:31:29.000Z | boto3_type_annotations_with_docs/boto3_type_annotations/rds/waiter.py | cowboygneox/boto3_type_annotations | 450dce1de4e066b939de7eac2ec560ed1a7ddaa2 | [
"MIT"
] | 15 | 2018-11-16T00:16:44.000Z | 2021-11-13T03:44:18.000Z | boto3_type_annotations_with_docs/boto3_type_annotations/rds/waiter.py | cowboygneox/boto3_type_annotations | 450dce1de4e066b939de7eac2ec560ed1a7ddaa2 | [
"MIT"
] | 11 | 2019-05-06T05:26:51.000Z | 2021-09-28T15:27:59.000Z | from typing import Dict
from typing import List
from botocore.waiter import Waiter
class DBInstanceAvailable(Waiter):
def wait(self, DBInstanceIdentifier: str = None, Filters: List = None, MaxRecords: int = None, Marker: str = None, WaiterConfig: Dict = None):
"""
Polls :py:meth:`RDS.Client.describe_db_instances` every 30 seconds until a successful state is reached. An error is returned after 60 failed checks.
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/rds-2014-10-31/DescribeDBInstances>`_
**Request Syntax**
::
waiter.wait(
DBInstanceIdentifier='string',
Filters=[
{
'Name': 'string',
'Values': [
'string',
]
},
],
MaxRecords=123,
Marker='string',
WaiterConfig={
'Delay': 123,
'MaxAttempts': 123
}
)
:type DBInstanceIdentifier: string
:param DBInstanceIdentifier:
The user-supplied instance identifier. If this parameter is specified, information from only the specific DB instance is returned. This parameter isn\'t case-sensitive.
Constraints:
* If supplied, must match the identifier of an existing DBInstance.
:type Filters: list
:param Filters:
A filter that specifies one or more DB instances to describe.
Supported filters:
* ``db-cluster-id`` - Accepts DB cluster identifiers and DB cluster Amazon Resource Names (ARNs). The results list will only include information about the DB instances associated with the DB clusters identified by these ARNs.
* ``db-instance-id`` - Accepts DB instance identifiers and DB instance Amazon Resource Names (ARNs). The results list will only include information about the DB instances identified by these ARNs.
- *(dict) --*
A filter name and value pair that is used to return a more specific list of results from a describe operation. Filters can be used to match a set of resources by specific criteria, such as IDs. The filters supported by a describe operation are documented with the describe operation.
.. note::
Currently, wildcards are not supported in filters.
The following actions can be filtered:
* DescribeDBClusterBacktracks
* DescribeDBClusterEndpoints
* DescribeDBClusters
* DescribeDBInstances
* DescribePendingMaintenanceActions
- **Name** *(string) --* **[REQUIRED]**
The name of the filter. Filter names are case-sensitive.
- **Values** *(list) --* **[REQUIRED]**
One or more filter values. Filter values are case-sensitive.
- *(string) --*
:type MaxRecords: integer
:param MaxRecords:
The maximum number of records to include in the response. If more records exist than the specified ``MaxRecords`` value, a pagination token called a marker is included in the response so that the remaining results can be retrieved.
Default: 100
Constraints: Minimum 20, maximum 100.
:type Marker: string
:param Marker:
An optional pagination token provided by a previous ``DescribeDBInstances`` request. If this parameter is specified, the response includes only records beyond the marker, up to the value specified by ``MaxRecords`` .
:type WaiterConfig: dict
:param WaiterConfig:
A dictionary that provides parameters to control waiting behavior.
- **Delay** *(integer) --*
The amount of time in seconds to wait between attempts. Default: 30
- **MaxAttempts** *(integer) --*
The maximum number of attempts to be made. Default: 60
:returns: None
"""
pass
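# A minimal usage sketch (not part of the generated stubs): in practice this
# waiter is obtained from an RDS client via get_waiter() rather than
# constructed directly; the instance identifier below is hypothetical.
def _example_wait_for_db_instance_available():
    import boto3
    rds = boto3.client('rds')
    waiter = rds.get_waiter('db_instance_available')
    # Polls DescribeDBInstances every 30 seconds, up to 60 attempts,
    # unless overridden through WaiterConfig.
    waiter.wait(DBInstanceIdentifier='my-db-instance',
                WaiterConfig={'Delay': 30, 'MaxAttempts': 60})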
class DBInstanceDeleted(Waiter):
def wait(self, DBInstanceIdentifier: str = None, Filters: List = None, MaxRecords: int = None, Marker: str = None, WaiterConfig: Dict = None):
"""
Polls :py:meth:`RDS.Client.describe_db_instances` every 30 seconds until a successful state is reached. An error is returned after 60 failed checks.
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/rds-2014-10-31/DescribeDBInstances>`_
**Request Syntax**
::
waiter.wait(
DBInstanceIdentifier='string',
Filters=[
{
'Name': 'string',
'Values': [
'string',
]
},
],
MaxRecords=123,
Marker='string',
WaiterConfig={
'Delay': 123,
'MaxAttempts': 123
}
)
:type DBInstanceIdentifier: string
:param DBInstanceIdentifier:
The user-supplied instance identifier. If this parameter is specified, information from only the specific DB instance is returned. This parameter isn\'t case-sensitive.
Constraints:
* If supplied, must match the identifier of an existing DBInstance.
:type Filters: list
:param Filters:
A filter that specifies one or more DB instances to describe.
Supported filters:
* ``db-cluster-id`` - Accepts DB cluster identifiers and DB cluster Amazon Resource Names (ARNs). The results list will only include information about the DB instances associated with the DB clusters identified by these ARNs.
* ``db-instance-id`` - Accepts DB instance identifiers and DB instance Amazon Resource Names (ARNs). The results list will only include information about the DB instances identified by these ARNs.
- *(dict) --*
A filter name and value pair that is used to return a more specific list of results from a describe operation. Filters can be used to match a set of resources by specific criteria, such as IDs. The filters supported by a describe operation are documented with the describe operation.
.. note::
Currently, wildcards are not supported in filters.
The following actions can be filtered:
* DescribeDBClusterBacktracks
* DescribeDBClusterEndpoints
* DescribeDBClusters
* DescribeDBInstances
* DescribePendingMaintenanceActions
- **Name** *(string) --* **[REQUIRED]**
The name of the filter. Filter names are case-sensitive.
- **Values** *(list) --* **[REQUIRED]**
One or more filter values. Filter values are case-sensitive.
- *(string) --*
:type MaxRecords: integer
:param MaxRecords:
The maximum number of records to include in the response. If more records exist than the specified ``MaxRecords`` value, a pagination token called a marker is included in the response so that the remaining results can be retrieved.
Default: 100
Constraints: Minimum 20, maximum 100.
:type Marker: string
:param Marker:
An optional pagination token provided by a previous ``DescribeDBInstances`` request. If this parameter is specified, the response includes only records beyond the marker, up to the value specified by ``MaxRecords`` .
:type WaiterConfig: dict
:param WaiterConfig:
A dictionary that provides parameters to control waiting behavior.
- **Delay** *(integer) --*
The amount of time in seconds to wait between attempts. Default: 30
- **MaxAttempts** *(integer) --*
The maximum number of attempts to be made. Default: 60
:returns: None
"""
pass
class DBSnapshotAvailable(Waiter):
def wait(self, DBInstanceIdentifier: str = None, DBSnapshotIdentifier: str = None, SnapshotType: str = None, Filters: List = None, MaxRecords: int = None, Marker: str = None, IncludeShared: bool = None, IncludePublic: bool = None, DbiResourceId: str = None, WaiterConfig: Dict = None):
"""
.. _https://docs.aws.amazon.com/aws-backup/latest/devguide/whatisbackup.html: https://docs.aws.amazon.com/aws-backup/latest/devguide/whatisbackup.html
Polls :py:meth:`RDS.Client.describe_db_snapshots` every 30 seconds until a successful state is reached. An error is returned after 60 failed checks.
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/rds-2014-10-31/DescribeDBSnapshots>`_
**Request Syntax**
::
waiter.wait(
DBInstanceIdentifier='string',
DBSnapshotIdentifier='string',
SnapshotType='string',
Filters=[
{
'Name': 'string',
'Values': [
'string',
]
},
],
MaxRecords=123,
Marker='string',
IncludeShared=True|False,
IncludePublic=True|False,
DbiResourceId='string',
WaiterConfig={
'Delay': 123,
'MaxAttempts': 123
}
)
:type DBInstanceIdentifier: string
:param DBInstanceIdentifier:
The ID of the DB instance to retrieve the list of DB snapshots for. This parameter can\'t be used in conjunction with ``DBSnapshotIdentifier`` . This parameter is not case-sensitive.
Constraints:
* If supplied, must match the identifier of an existing DBInstance.
:type DBSnapshotIdentifier: string
:param DBSnapshotIdentifier:
A specific DB snapshot identifier to describe. This parameter can\'t be used in conjunction with ``DBInstanceIdentifier`` . This value is stored as a lowercase string.
Constraints:
* If supplied, must match the identifier of an existing DBSnapshot.
* If this identifier is for an automated snapshot, the ``SnapshotType`` parameter must also be specified.
:type SnapshotType: string
:param SnapshotType:
The type of snapshots to be returned. You can specify one of the following values:
* ``automated`` - Return all DB snapshots that have been automatically taken by Amazon RDS for my AWS account.
* ``manual`` - Return all DB snapshots that have been taken by my AWS account.
* ``shared`` - Return all manual DB snapshots that have been shared to my AWS account.
* ``public`` - Return all DB snapshots that have been marked as public.
          * ``awsbackup`` - Return the DB snapshots managed by the AWS Backup service. For information about AWS Backup, see the `AWS Backup Developer Guide <https://docs.aws.amazon.com/aws-backup/latest/devguide/whatisbackup.html>`__ . The ``awsbackup`` type does not apply to Aurora.
If you don\'t specify a ``SnapshotType`` value, then both automated and manual snapshots are returned. Shared and public DB snapshots are not included in the returned results by default. You can include shared snapshots with these results by setting the ``IncludeShared`` parameter to ``true`` . You can include public snapshots with these results by setting the ``IncludePublic`` parameter to ``true`` .
The ``IncludeShared`` and ``IncludePublic`` parameters don\'t apply for ``SnapshotType`` values of ``manual`` or ``automated`` . The ``IncludePublic`` parameter doesn\'t apply when ``SnapshotType`` is set to ``shared`` . The ``IncludeShared`` parameter doesn\'t apply when ``SnapshotType`` is set to ``public`` .
:type Filters: list
:param Filters:
This parameter is not currently supported.
- *(dict) --*
A filter name and value pair that is used to return a more specific list of results from a describe operation. Filters can be used to match a set of resources by specific criteria, such as IDs. The filters supported by a describe operation are documented with the describe operation.
.. note::
Currently, wildcards are not supported in filters.
The following actions can be filtered:
* DescribeDBClusterBacktracks
* DescribeDBClusterEndpoints
* DescribeDBClusters
* DescribeDBInstances
* DescribePendingMaintenanceActions
- **Name** *(string) --* **[REQUIRED]**
The name of the filter. Filter names are case-sensitive.
- **Values** *(list) --* **[REQUIRED]**
One or more filter values. Filter values are case-sensitive.
- *(string) --*
:type MaxRecords: integer
:param MaxRecords:
The maximum number of records to include in the response. If more records exist than the specified ``MaxRecords`` value, a pagination token called a marker is included in the response so that the remaining results can be retrieved.
Default: 100
Constraints: Minimum 20, maximum 100.
:type Marker: string
:param Marker:
An optional pagination token provided by a previous ``DescribeDBSnapshots`` request. If this parameter is specified, the response includes only records beyond the marker, up to the value specified by ``MaxRecords`` .
:type IncludeShared: boolean
:param IncludeShared:
True to include shared manual DB snapshots from other AWS accounts that this AWS account has been given permission to copy or restore, and otherwise false. The default is ``false`` .
You can give an AWS account permission to restore a manual DB snapshot from another AWS account by using the ModifyDBSnapshotAttribute API action.
:type IncludePublic: boolean
:param IncludePublic:
True to include manual DB snapshots that are public and can be copied or restored by any AWS account, and otherwise false. The default is false.
You can share a manual DB snapshot as public by using the ModifyDBSnapshotAttribute API.
:type DbiResourceId: string
:param DbiResourceId:
A specific DB resource ID to describe.
:type WaiterConfig: dict
:param WaiterConfig:
A dictionary that provides parameters to control waiting behavior.
- **Delay** *(integer) --*
The amount of time in seconds to wait between attempts. Default: 30
- **MaxAttempts** *(integer) --*
The maximum number of attempts to be made. Default: 60
:returns: None
"""
pass
class DBSnapshotCompleted(Waiter):
def wait(self, DBInstanceIdentifier: str = None, DBSnapshotIdentifier: str = None, SnapshotType: str = None, Filters: List = None, MaxRecords: int = None, Marker: str = None, IncludeShared: bool = None, IncludePublic: bool = None, DbiResourceId: str = None, WaiterConfig: Dict = None):
"""
.. _https://docs.aws.amazon.com/aws-backup/latest/devguide/whatisbackup.html: https://docs.aws.amazon.com/aws-backup/latest/devguide/whatisbackup.html
Polls :py:meth:`RDS.Client.describe_db_snapshots` every 15 seconds until a successful state is reached. An error is returned after 40 failed checks.
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/rds-2014-10-31/DescribeDBSnapshots>`_
**Request Syntax**
::
waiter.wait(
DBInstanceIdentifier='string',
DBSnapshotIdentifier='string',
SnapshotType='string',
Filters=[
{
'Name': 'string',
'Values': [
'string',
]
},
],
MaxRecords=123,
Marker='string',
IncludeShared=True|False,
IncludePublic=True|False,
DbiResourceId='string',
WaiterConfig={
'Delay': 123,
'MaxAttempts': 123
}
)
:type DBInstanceIdentifier: string
:param DBInstanceIdentifier:
The ID of the DB instance to retrieve the list of DB snapshots for. This parameter can\'t be used in conjunction with ``DBSnapshotIdentifier`` . This parameter is not case-sensitive.
Constraints:
* If supplied, must match the identifier of an existing DBInstance.
:type DBSnapshotIdentifier: string
:param DBSnapshotIdentifier:
A specific DB snapshot identifier to describe. This parameter can\'t be used in conjunction with ``DBInstanceIdentifier`` . This value is stored as a lowercase string.
Constraints:
* If supplied, must match the identifier of an existing DBSnapshot.
* If this identifier is for an automated snapshot, the ``SnapshotType`` parameter must also be specified.
:type SnapshotType: string
:param SnapshotType:
The type of snapshots to be returned. You can specify one of the following values:
* ``automated`` - Return all DB snapshots that have been automatically taken by Amazon RDS for my AWS account.
* ``manual`` - Return all DB snapshots that have been taken by my AWS account.
* ``shared`` - Return all manual DB snapshots that have been shared to my AWS account.
* ``public`` - Return all DB snapshots that have been marked as public.
          * ``awsbackup`` - Return the DB snapshots managed by the AWS Backup service. For information about AWS Backup, see the `AWS Backup Developer Guide <https://docs.aws.amazon.com/aws-backup/latest/devguide/whatisbackup.html>`__ . The ``awsbackup`` type does not apply to Aurora.
If you don\'t specify a ``SnapshotType`` value, then both automated and manual snapshots are returned. Shared and public DB snapshots are not included in the returned results by default. You can include shared snapshots with these results by setting the ``IncludeShared`` parameter to ``true`` . You can include public snapshots with these results by setting the ``IncludePublic`` parameter to ``true`` .
The ``IncludeShared`` and ``IncludePublic`` parameters don\'t apply for ``SnapshotType`` values of ``manual`` or ``automated`` . The ``IncludePublic`` parameter doesn\'t apply when ``SnapshotType`` is set to ``shared`` . The ``IncludeShared`` parameter doesn\'t apply when ``SnapshotType`` is set to ``public`` .
:type Filters: list
:param Filters:
This parameter is not currently supported.
- *(dict) --*
A filter name and value pair that is used to return a more specific list of results from a describe operation. Filters can be used to match a set of resources by specific criteria, such as IDs. The filters supported by a describe operation are documented with the describe operation.
.. note::
Currently, wildcards are not supported in filters.
The following actions can be filtered:
* DescribeDBClusterBacktracks
* DescribeDBClusterEndpoints
* DescribeDBClusters
* DescribeDBInstances
* DescribePendingMaintenanceActions
- **Name** *(string) --* **[REQUIRED]**
The name of the filter. Filter names are case-sensitive.
- **Values** *(list) --* **[REQUIRED]**
One or more filter values. Filter values are case-sensitive.
- *(string) --*
:type MaxRecords: integer
:param MaxRecords:
The maximum number of records to include in the response. If more records exist than the specified ``MaxRecords`` value, a pagination token called a marker is included in the response so that the remaining results can be retrieved.
Default: 100
Constraints: Minimum 20, maximum 100.
:type Marker: string
:param Marker:
An optional pagination token provided by a previous ``DescribeDBSnapshots`` request. If this parameter is specified, the response includes only records beyond the marker, up to the value specified by ``MaxRecords`` .
:type IncludeShared: boolean
:param IncludeShared:
True to include shared manual DB snapshots from other AWS accounts that this AWS account has been given permission to copy or restore, and otherwise false. The default is ``false`` .
You can give an AWS account permission to restore a manual DB snapshot from another AWS account by using the ModifyDBSnapshotAttribute API action.
:type IncludePublic: boolean
:param IncludePublic:
True to include manual DB snapshots that are public and can be copied or restored by any AWS account, and otherwise false. The default is false.
You can share a manual DB snapshot as public by using the ModifyDBSnapshotAttribute API.
:type DbiResourceId: string
:param DbiResourceId:
A specific DB resource ID to describe.
:type WaiterConfig: dict
:param WaiterConfig:
A dictionary that provides parameters to control waiting behavior.
- **Delay** *(integer) --*
The amount of time in seconds to wait between attempts. Default: 15
- **MaxAttempts** *(integer) --*
The maximum number of attempts to be made. Default: 40
:returns: None
"""
pass
class DBSnapshotDeleted(Waiter):
def wait(self, DBInstanceIdentifier: str = None, DBSnapshotIdentifier: str = None, SnapshotType: str = None, Filters: List = None, MaxRecords: int = None, Marker: str = None, IncludeShared: bool = None, IncludePublic: bool = None, DbiResourceId: str = None, WaiterConfig: Dict = None):
"""
.. _https://docs.aws.amazon.com/aws-backup/latest/devguide/whatisbackup.html: https://docs.aws.amazon.com/aws-backup/latest/devguide/whatisbackup.html
Polls :py:meth:`RDS.Client.describe_db_snapshots` every 30 seconds until a successful state is reached. An error is returned after 60 failed checks.
See also: `AWS API Documentation <https://docs.aws.amazon.com/goto/WebAPI/rds-2014-10-31/DescribeDBSnapshots>`_
**Request Syntax**
::
waiter.wait(
DBInstanceIdentifier='string',
DBSnapshotIdentifier='string',
SnapshotType='string',
Filters=[
{
'Name': 'string',
'Values': [
'string',
]
},
],
MaxRecords=123,
Marker='string',
IncludeShared=True|False,
IncludePublic=True|False,
DbiResourceId='string',
WaiterConfig={
'Delay': 123,
'MaxAttempts': 123
}
)
:type DBInstanceIdentifier: string
:param DBInstanceIdentifier:
The ID of the DB instance to retrieve the list of DB snapshots for. This parameter can\'t be used in conjunction with ``DBSnapshotIdentifier`` . This parameter is not case-sensitive.
Constraints:
* If supplied, must match the identifier of an existing DBInstance.
:type DBSnapshotIdentifier: string
:param DBSnapshotIdentifier:
A specific DB snapshot identifier to describe. This parameter can\'t be used in conjunction with ``DBInstanceIdentifier`` . This value is stored as a lowercase string.
Constraints:
* If supplied, must match the identifier of an existing DBSnapshot.
* If this identifier is for an automated snapshot, the ``SnapshotType`` parameter must also be specified.
:type SnapshotType: string
:param SnapshotType:
The type of snapshots to be returned. You can specify one of the following values:
* ``automated`` - Return all DB snapshots that have been automatically taken by Amazon RDS for my AWS account.
* ``manual`` - Return all DB snapshots that have been taken by my AWS account.
* ``shared`` - Return all manual DB snapshots that have been shared to my AWS account.
* ``public`` - Return all DB snapshots that have been marked as public.
          * ``awsbackup`` - Return the DB snapshots managed by the AWS Backup service. For information about AWS Backup, see the `AWS Backup Developer Guide <https://docs.aws.amazon.com/aws-backup/latest/devguide/whatisbackup.html>`__ . The ``awsbackup`` type does not apply to Aurora.
If you don\'t specify a ``SnapshotType`` value, then both automated and manual snapshots are returned. Shared and public DB snapshots are not included in the returned results by default. You can include shared snapshots with these results by setting the ``IncludeShared`` parameter to ``true`` . You can include public snapshots with these results by setting the ``IncludePublic`` parameter to ``true`` .
The ``IncludeShared`` and ``IncludePublic`` parameters don\'t apply for ``SnapshotType`` values of ``manual`` or ``automated`` . The ``IncludePublic`` parameter doesn\'t apply when ``SnapshotType`` is set to ``shared`` . The ``IncludeShared`` parameter doesn\'t apply when ``SnapshotType`` is set to ``public`` .
:type Filters: list
:param Filters:
This parameter is not currently supported.
- *(dict) --*
A filter name and value pair that is used to return a more specific list of results from a describe operation. Filters can be used to match a set of resources by specific criteria, such as IDs. The filters supported by a describe operation are documented with the describe operation.
.. note::
Currently, wildcards are not supported in filters.
The following actions can be filtered:
* DescribeDBClusterBacktracks
* DescribeDBClusterEndpoints
* DescribeDBClusters
* DescribeDBInstances
* DescribePendingMaintenanceActions
- **Name** *(string) --* **[REQUIRED]**
The name of the filter. Filter names are case-sensitive.
- **Values** *(list) --* **[REQUIRED]**
One or more filter values. Filter values are case-sensitive.
- *(string) --*
:type MaxRecords: integer
:param MaxRecords:
The maximum number of records to include in the response. If more records exist than the specified ``MaxRecords`` value, a pagination token called a marker is included in the response so that the remaining results can be retrieved.
Default: 100
Constraints: Minimum 20, maximum 100.
:type Marker: string
:param Marker:
An optional pagination token provided by a previous ``DescribeDBSnapshots`` request. If this parameter is specified, the response includes only records beyond the marker, up to the value specified by ``MaxRecords`` .
:type IncludeShared: boolean
:param IncludeShared:
True to include shared manual DB snapshots from other AWS accounts that this AWS account has been given permission to copy or restore, and otherwise false. The default is ``false`` .
You can give an AWS account permission to restore a manual DB snapshot from another AWS account by using the ModifyDBSnapshotAttribute API action.
:type IncludePublic: boolean
:param IncludePublic:
True to include manual DB snapshots that are public and can be copied or restored by any AWS account, and otherwise false. The default is false.
You can share a manual DB snapshot as public by using the ModifyDBSnapshotAttribute API.
:type DbiResourceId: string
:param DbiResourceId:
A specific DB resource ID to describe.
:type WaiterConfig: dict
:param WaiterConfig:
A dictionary that provides parameters to control waiting behavior.
- **Delay** *(integer) --*
The amount of time in seconds to wait between attempts. Default: 30
- **MaxAttempts** *(integer) --*
The maximum number of attempts to be made. Default: 60
:returns: None
"""
pass
| 63.775056 | 414 | 0.641627 | 3,273 | 28,635 | 5.606172 | 0.078216 | 0.017985 | 0.012262 | 0.013734 | 0.989427 | 0.989427 | 0.989427 | 0.989427 | 0.989427 | 0.989427 | 0 | 0.008055 | 0.284652 | 28,635 | 448 | 415 | 63.917411 | 0.887717 | 0.828392 | 0 | 0.555556 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.277778 | false | 0.277778 | 0.166667 | 0 | 0.722222 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 10 |
81cbd3c1a133e9f02653758635a7508afc29f904 | 83,646 | py | Python | mixer.py | nyu-dl/dl4mt-cdec | e738dc7235cb2819ad2b4e8e5837e97b2fb41de2 | [
"BSD-3-Clause"
] | 198 | 2016-05-10T20:49:58.000Z | 2019-01-29T23:44:39.000Z | mixer.py | trevordonnelly/dl4mt-cdec | e738dc7235cb2819ad2b4e8e5837e97b2fb41de2 | [
"BSD-3-Clause"
] | 23 | 2016-11-20T03:55:36.000Z | 2019-05-13T15:18:32.000Z | mixer.py | trevordonnelly/dl4mt-cdec | e738dc7235cb2819ad2b4e8e5837e97b2fb41de2 | [
"BSD-3-Clause"
] | 63 | 2016-11-01T16:53:28.000Z | 2020-06-13T13:12:40.000Z | '''
Mixer containing essential functions or building blocks
'''
import theano
import theano.tensor as tensor
from theano.sandbox.rng_mrg import MRG_RandomStreams as RandomStreams
import numpy
import copy
import os
import warnings
import sys
import time
from collections import OrderedDict
profile = False
# layers: 'name': ('parameter initializer', 'feedforward')
layers = {'ff': ('param_init_fflayer', 'fflayer'),
'fff': ('param_init_ffflayer', 'ffflayer'),
'gru_decoder': ('param_init_gru_decoder', 'gru_decoder'),
'gru_cond_decoder': ('param_init_gru_cond_decoder',
'gru_cond_decoder'),
'two_layer_gru_decoder': ('param_init_two_layer_gru_decoder',
'two_layer_gru_decoder'),
'two_layer_gru_decoder_both': ('param_init_two_layer_gru_decoder_both',
'two_layer_gru_decoder_both'),
'biscale_decoder': ('param_init_biscale_decoder',
'biscale_decoder'),
'biscale_decoder_both': ('param_init_biscale_decoder_both',
'biscale_decoder_both'),
'biscale_decoder_attc': ('param_init_biscale_decoder_attc',
'biscale_decoder_attc'),
'gru': ('param_init_gru', 'gru_layer')
}
# utility function to slice a tensor
def _slice(_x, n, dim):
if _x.ndim == 3:
return _x[:, :, n*dim:(n+1)*dim]
return _x[:, n*dim:(n+1)*dim]
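# Illustration (shapes assumed, not from the source): gate pre-activations are
# stored as one concatenated matrix, so slice n picks out gate n, e.g.
#     preact = numpy.zeros((5, 2 * 7))   # batch of 5, two gates of width 7
#     r = _slice(preact, 0, 7)           # reset gate  -> shape (5, 7)
#     u = _slice(preact, 1, 7)           # update gate -> shape (5, 7)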
# push parameters to Theano shared variables
def zipp(params, tparams):
for kk, vv in params.iteritems():
tparams[kk].set_value(vv)
# pull parameters from Theano shared variables
def unzip(zipped):
new_params = OrderedDict()
for kk, vv in zipped.iteritems():
new_params[kk] = vv.get_value()
return new_params
# get the list of parameters: Note that tparams must be OrderedDict
def itemlist(tparams):
return [vv for kk, vv in tparams.iteritems()]
# dropout
def dropout_layer(state_before, use_noise, trng):
proj = tensor.switch(
use_noise,
state_before * trng.binomial(state_before.shape, p=0.5, n=1,
dtype=state_before.dtype),
state_before * 0.5)
return proj
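# A minimal sketch of how this is typically wired up (names assumed): the
# shared flag `use_noise` is set to 1. during training and to 0. at test
# time, where activations are halved instead of sampled.
def _example_dropout():
    trng = RandomStreams(1234)
    use_noise = theano.shared(numpy.float32(1.))
    x = tensor.matrix('x')
    return theano.function([x], dropout_layer(x, use_noise, trng))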
# make prefix-appended name
def _p(pp, name):
return '%s_%s' % (pp, name)
# initialize Theano shared variables according to the initial parameters
def init_tparams(params):
tparams = OrderedDict()
for kk, pp in params.iteritems():
tparams[kk] = theano.shared(params[kk], name=kk)
return tparams
# load parameters
def load_params(path, params):
pp = numpy.load(path)
for kk, vv in params.iteritems():
if kk not in pp:
warnings.warn('%s is not in the archive' % kk)
continue
params[kk] = pp[kk]
return params
def get_layer(name):
fns = layers[name]
return (eval(fns[0]), eval(fns[1]))
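# Example (option dict and sizes assumed): get_layer returns the
# (initializer, apply) pair registered in `layers` above, e.g.
#     param_init, layer_fn = get_layer('gru')
#     params = param_init(options, OrderedDict(), prefix='encoder',
#                         nin=emb_dim, dim=rnn_dim)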
# some utilities
def ortho_weight(ndim, scale=0.01):
W = scale * numpy.random.randn(ndim, ndim)
u, s, v = numpy.linalg.svd(W)
return u.astype('float32')
def norm_vector(nin, scale=0.01):
V = scale * numpy.random.randn(nin)
return V.astype('float32')
def norm_weight(nin, nout=None, scale=0.01, ortho=True):
if nout is None:
nout = nin
if nout == nin and ortho:
W = ortho_weight(nin)
else:
W = scale * numpy.random.randn(nin, nout)
return W.astype('float32')
def tanh(x):
return tensor.tanh(x)
def linear(x):
return x
def concatenate(tensor_list, axis=0):
"""
Alternative implementation of `theano.tensor.concatenate`.
This function does exactly the same thing, but contrary to Theano's own
implementation, the gradient is implemented on the GPU.
Backpropagating through `theano.tensor.concatenate` yields slowdowns
because the inverse operation (splitting) needs to be done on the CPU.
This implementation does not have that problem.
:usage:
>>> x, y = theano.tensor.matrices('x', 'y')
>>> c = concatenate([x, y], axis=1)
:parameters:
- tensor_list : list
list of Theano tensor expressions that should be concatenated.
- axis : int
the tensors will be joined along this axis.
:returns:
- out : tensor
the concatenated tensor expression.
"""
concat_size = sum(tt.shape[axis] for tt in tensor_list)
output_shape = ()
for k in range(axis):
output_shape += (tensor_list[0].shape[k],)
output_shape += (concat_size,)
for k in range(axis + 1, tensor_list[0].ndim):
output_shape += (tensor_list[0].shape[k],)
out = tensor.zeros(output_shape, dtype=tensor_list[0].dtype)
offset = 0
for tt in tensor_list:
indices = ()
for k in range(axis):
indices += (slice(None),)
indices += (slice(offset, offset + tt.shape[axis]),)
for k in range(axis + 1, tensor_list[0].ndim):
indices += (slice(None),)
out = tensor.set_subtensor(out[indices], tt)
offset += tt.shape[axis]
return out
# feedforward layer: affine transformation + point-wise nonlinearity
def param_init_fflayer(options, params, prefix='ff', nin=None, nout=None,
ortho=True, scale=0.01):
if nin is None:
nin = options['dim_proj']
if nout is None:
nout = options['dim_proj']
params[_p(prefix, 'W')] = norm_weight(nin, nout, scale=scale, ortho=ortho)
params[_p(prefix, 'b')] = numpy.zeros((nout,)).astype('float32')
return params
def fflayer(tparams, state_below, options, prefix='rconv',
activ='lambda x: tensor.tanh(x)', **kwargs):
return eval(activ)(
tensor.dot(state_below, tparams[_p(prefix, 'W')]) +
tparams[_p(prefix, 'b')])
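# A small end-to-end sketch (sizes assumed): initialize the layer parameters,
# wrap them in shared variables, and compile y = tanh(x.W + b).
def _example_fflayer():
    options = {'dim_proj': 4}
    params = param_init_fflayer(options, OrderedDict(), prefix='ff_demo',
                                nin=4, nout=3)
    tparams = init_tparams(params)
    x = tensor.matrix('x')
    y = fflayer(tparams, x, options, prefix='ff_demo')
    return theano.function([x], y)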
# feedforward layer short-cut: affine transformation + point-wise nonlinearity
def param_init_ffflayer(options, params, prefix='fff', nin1=None, nin2=None, nout=None,
ortho=True, scale1=0.01, scale2=0.01):
if nin1 is None:
nin1 = options['dim_proj']
if nin2 is None:
nin2 = options['dim_proj']
if nout is None:
nout = options['dim_proj']
params[_p(prefix, 'W')] = norm_weight(nin1, nout, scale=scale1, ortho=ortho)
params[_p(prefix, 'U')] = norm_weight(nin2, nout, scale=scale2, ortho=ortho)
params[_p(prefix, 'b')] = numpy.zeros((nout,)).astype('float32')
return params
def ffflayer(tparams, state_below1, state_below2, options, prefix='rconv',
activ='lambda x: tensor.tanh(x)', **kwargs):
return eval(activ)(
tensor.dot(state_below1, tparams[_p(prefix, 'W')]) +
tensor.dot(state_below2, tparams[_p(prefix, 'U')]) +
tparams[_p(prefix, 'b')])
# GRU layer
def param_init_gru(options, params, prefix='gru', nin=None, dim=None):
if nin is None:
nin = options['dim_proj']
if dim is None:
dim = options['rnn_dim']
# embedding to gates transformation weights, biases
W = numpy.concatenate([norm_weight(nin, dim),
norm_weight(nin, dim)], axis=1)
params[_p(prefix, 'W')] = W
params[_p(prefix, 'b')] = numpy.zeros((2 * dim,)).astype('float32')
# recurrent transformation weights for gates
U = numpy.concatenate([ortho_weight(dim),
ortho_weight(dim)], axis=1)
params[_p(prefix, 'U')] = U
# embedding to hidden state proposal weights, biases
Wx = norm_weight(nin, dim)
params[_p(prefix, 'Wx')] = Wx
params[_p(prefix, 'bx')] = numpy.zeros((dim,)).astype('float32')
# recurrent transformation weights for hidden state proposal
Ux = ortho_weight(dim)
params[_p(prefix, 'Ux')] = Ux
return params
def gru_layer(tparams, state_below, options, prefix='gru',
mask=None, one_step=False, init_state=None, **kwargs):
if one_step:
assert init_state, 'previous state must be provided'
n_steps = state_below.shape[0]
if state_below.ndim in [2, 3]:
n_samples = state_below.shape[1]
elif state_below.ndim == 1:
if not one_step:
            raise ValueError('if state_below.ndim is 1, one_step should also be 1')
else:
n_samples = 1
# mask
if mask is None:
mask = tensor.alloc(1., state_below.shape[0], 1)
dim = tparams[_p(prefix, 'Ux')].shape[1]
if state_below.dtype == 'int64':
state_below_ = tparams[_p(prefix, 'W')][state_below.flatten()]
state_belowx = tparams[_p(prefix, 'Wx')][state_below.flatten()]
if state_below.ndim == 2:
state_below_ = state_below_.reshape((n_steps, n_samples, -1))
state_belowx = state_belowx.reshape((n_steps, n_samples, -1))
state_below_ += tparams[_p(prefix, 'b')]
state_belowx += tparams[_p(prefix, 'bx')]
else:
# projected x to hidden state proposal
state_below_ = tensor.dot(state_below, tparams[_p(prefix, 'W')]) + \
tparams[_p(prefix, 'b')]
# projected x to gates
state_belowx = tensor.dot(state_below, tparams[_p(prefix, 'Wx')]) + \
tparams[_p(prefix, 'bx')]
# initial/previous state
if init_state is None:
init_state = tensor.alloc(0., n_samples, dim)
# step function to be used by scan
def _step(m_, x_, xx_, h_, U, Ux):
preact = tensor.dot(h_, U)
preact += x_
preact = tensor.nnet.sigmoid(preact)
# reset and update gates
r = _slice(preact, 0, dim)
u = _slice(preact, 1, dim)
# compute the hidden state proposal
preactx = tensor.dot(h_, Ux)
preactx *= r
preactx += xx_
# hidden state proposal
h = tensor.tanh(preactx)
# leaky integrate and obtain next hidden state
h = u * h_ + (1. - u) * h
h = m_[:, None] * h + (1. - m_)[:, None] * h_
return h
# prepare scan arguments
seqs = [mask, state_below_, state_belowx]
shared_vars = [tparams[_p(prefix, 'U')],
tparams[_p(prefix, 'Ux')]]
if one_step:
rval = _step(*(seqs+[init_state]+shared_vars))
else:
rval, updates = theano.scan(_step,
sequences=seqs,
outputs_info=[init_state],
non_sequences=shared_vars,
name=_p(prefix, '_layers'),
n_steps=n_steps,
profile=profile,
strict=True)
return rval
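# A minimal sketch of running this GRU over an embedded sequence (the option
# keys mirror the defaults read by param_init_gru; the sizes are assumed).
def _example_gru_layer():
    options = {'dim_proj': 4, 'rnn_dim': 5}
    params = param_init_gru(options, OrderedDict(), prefix='enc')
    tparams = init_tparams(params)
    x = tensor.tensor3('x')                  # (n_steps, n_samples, nin)
    h = gru_layer(tparams, x, options, prefix='enc')
    return theano.function([x], h)           # h: (n_steps, n_samples, dim)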
# Conditional GRU layer without Attention
def param_init_gru_decoder(options, params, prefix='gru_decoder', nin=None,
dim=None, dimctx=None):
if nin is None:
nin = options['dim']
if dim is None:
dim = options['dim']
if dimctx is None:
dimctx = options['dim']
params = param_init_gru(options, params, prefix, nin=nin, dim=dim)
# context to GRU gates
Wc = norm_weight(dimctx, dim*2)
params[_p(prefix, 'Wc')] = Wc
# context to hidden proposal
Wcx = norm_weight(dimctx, dim)
params[_p(prefix, 'Wcx')] = Wcx
return params
def gru_decoder(tparams, state_below, options, prefix='gru_decoder',
mask=None, context=None, one_step=False,
init_state=None, **kwargs):
assert context, 'Context must be provided'
if one_step:
assert init_state, 'previous state must be provided'
n_steps = state_below.shape[0]
if state_below.ndim == 3:
n_samples = state_below.shape[1]
else:
n_samples = 1
# mask
if mask is None:
mask = tensor.alloc(1., state_below.shape[0], 1)
dim = tparams[_p(prefix, 'Ux')].shape[1]
# initial/previous state
if init_state is None:
init_state = tensor.alloc(0., n_samples, dim)
assert context.ndim == 2, 'Context must be 2-d: #sample x dim'
# projected context to GRU gates
pctx_ = tensor.dot(context, tparams[_p(prefix, 'Wc')])
# projected context to hidden state proposal
pctxx_ = tensor.dot(context, tparams[_p(prefix, 'Wcx')])
# projected x to hidden state proposal
state_below_ = tensor.dot(state_below, tparams[_p(prefix, 'W')]) + \
tparams[_p(prefix, 'b')]
# projected x to gates
state_belowx = tensor.dot(state_below, tparams[_p(prefix, 'Wx')]) + \
tparams[_p(prefix, 'bx')]
# step function to be used by scan
# arguments | sequences | outputs-info| non-seqs
def _step(m_, x_, xx_, h_, pctx_, pctxx_, U, Ux):
preact = tensor.dot(h_, U)
preact += x_
preact += pctx_
preact = tensor.nnet.sigmoid(preact)
# reset and update gates
r = _slice(preact, 0, dim)
u = _slice(preact, 1, dim)
# compute the hidden state proposal
preactx = tensor.dot(h_, Ux)
preactx *= r
preactx += xx_
preactx += pctxx_
# hidden state proposal
h = tensor.tanh(preactx)
# leaky integrate and obtain next hidden state
h = u * h_ + (1. - u) * h
h = m_[:, None] * h + (1. - m_)[:, None] * h_
return h
# prepare scan arguments
seqs = [mask, state_below_, state_belowx]
shared_vars = [tparams[_p(prefix, 'U')],
tparams[_p(prefix, 'Ux')]]
if one_step:
rval = _step(*(seqs+[init_state, pctx_, pctxx_]+shared_vars))
else:
rval, updates = theano.scan(_step,
sequences=seqs,
outputs_info=[init_state],
non_sequences=[pctx_, pctxx_]+shared_vars,
name=_p(prefix, '_layers'),
n_steps=n_steps,
profile=profile,
strict=True)
return rval
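# A minimal sketch (sizes assumed): every decoder step is conditioned on one
# fixed context vector per sample, matching the 2-d context assert above.
def _example_gru_decoder():
    options = {'dim': 4}
    params = param_init_gru_decoder(options, OrderedDict(), prefix='dec')
    tparams = init_tparams(params)
    y_emb = tensor.tensor3('y_emb')          # (n_steps, n_samples, nin)
    ctx = tensor.matrix('ctx')               # (n_samples, dimctx)
    h = gru_decoder(tparams, y_emb, options, prefix='dec', context=ctx)
    return theano.function([y_emb, ctx], h)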
# Conditional GRU layer with Attention
def param_init_gru_cond_decoder(options, params, prefix='gru_cond_decoder',
nin=None, dim=None, dimctx=None):
if nin is None:
nin = options['dim']
if dim is None:
dim = options['dim']
if dimctx is None:
dimctx = options['dim']
params = param_init_gru(options, params, prefix, nin=nin, dim=dim)
# context to LSTM
Wc = norm_weight(dimctx, dim*2)
params[_p(prefix, 'Wc')] = Wc
Wcx = norm_weight(dimctx, dim)
params[_p(prefix, 'Wcx')] = Wcx
# attention: prev -> hidden
Wi_att = norm_weight(nin, dimctx)
params[_p(prefix, 'Wi_att')] = Wi_att
# attention: context -> hidden
Wc_att = norm_weight(dimctx)
params[_p(prefix, 'Wc_att')] = Wc_att
# attention: LSTM -> hidden
Wd_att = norm_weight(dim, dimctx)
params[_p(prefix, 'Wd_att')] = Wd_att
# attention: hidden bias
b_att = numpy.zeros((dimctx,)).astype('float32')
params[_p(prefix, 'b_att')] = b_att
# attention:
U_att = norm_weight(dimctx, 1)
params[_p(prefix, 'U_att')] = U_att
c_att = numpy.zeros((1,)).astype('float32')
params[_p(prefix, 'c_tt')] = c_att
return params
def gru_cond_decoder(tparams, state_below, options, prefix='gru_cond_decoder',
mask=None, context=None, one_step=False, init_state=None,
context_mask=None, **kwargs):
assert context, 'Context must be provided'
assert context.ndim == 3, \
'Context must be 3-d: #annotation x #sample x dim'
if one_step:
assert init_state, 'previous state must be provided'
nsteps = state_below.shape[0]
if state_below.ndim == 3:
n_samples = state_below.shape[1]
else:
n_samples = 1
# mask
if mask is None: # sampling or beamsearch
mask = tensor.alloc(1., state_below.shape[0], 1)
dim = tparams[_p(prefix, 'Wcx')].shape[1]
# initial/previous state
if init_state is None:
init_state = tensor.alloc(0., n_samples, dim)
# projected context
pctx_ = tensor.dot(context, tparams[_p(prefix, 'Wc_att')]) + \
tparams[_p(prefix, 'b_att')]
def _slice(_x, n, dim):
if _x.ndim == 3:
return _x[:, :, n*dim:(n+1)*dim]
return _x[:, n*dim:(n+1)*dim]
# projected x into hidden state proposal
state_belowx = tensor.dot(state_below, tparams[_p(prefix, 'Wx')]) + \
tparams[_p(prefix, 'bx')]
# projected x into gru gates
state_below_ = tensor.dot(state_below, tparams[_p(prefix, 'W')]) + \
tparams[_p(prefix, 'b')]
# projected x into attention module
state_belowc = tensor.dot(state_below, tparams[_p(prefix, 'Wi_att')])
# step function to be used by scan
# arguments | sequences | outputs-info | non-seqs ...
def _step_slice(m_, x_, xx_, xc_, h_, ctx_, alpha_, pctx_, cc_,
U, Wc, Wd_att, U_att, c_tt, Ux, Wcx):
# attention
# project previous hidden state
pstate_ = tensor.dot(h_, Wd_att)
# add projected context
pctx__ = pctx_ + pstate_[None, :, :]
# add projected previous output
pctx__ += xc_
pctx__ = tensor.tanh(pctx__)
# compute alignment weights
alpha = tensor.dot(pctx__, U_att)+c_tt
alpha = alpha.reshape([alpha.shape[0], alpha.shape[1]])
alpha = tensor.exp(alpha - alpha.max(0))
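        # (subtracting the max before the exp keeps the softmax numerically
        # stable; masking and renormalisation over source positions follow)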
if context_mask:
alpha = alpha * context_mask
alpha = alpha / alpha.sum(0, keepdims=True)
        # compute the weighted averages - current context to GRU
ctx_ = (cc_ * alpha[:, :, None]).sum(0)
# conditional gru layer computations
preact = tensor.dot(h_, U)
preact += x_
preact += tensor.dot(ctx_, Wc)
preact = tensor.nnet.sigmoid(preact)
# reset and update gates
r = _slice(preact, 0, dim)
u = _slice(preact, 1, dim)
preactx = tensor.dot(h_, Ux)
preactx *= r
preactx += xx_
preactx += tensor.dot(ctx_, Wcx)
# hidden state proposal, leaky integrate and obtain next hidden state
h = tensor.tanh(preactx)
h = u * h_ + (1. - u) * h
h = m_[:, None] * h + (1. - m_)[:, None] * h_
return h, ctx_, alpha.T
seqs = [mask, state_below_, state_belowx, state_belowc]
_step = _step_slice
shared_vars = [tparams[_p(prefix, 'U')],
tparams[_p(prefix, 'Wc')],
tparams[_p(prefix, 'Wd_att')],
tparams[_p(prefix, 'U_att')],
tparams[_p(prefix, 'c_tt')],
tparams[_p(prefix, 'Ux')],
tparams[_p(prefix, 'Wcx')]]
if one_step:
rval = _step(*(
seqs+[init_state, None, None, pctx_, context]+shared_vars))
else:
rval, updates = theano.scan(
_step,
sequences=seqs,
outputs_info=[init_state,
tensor.alloc(0., n_samples, context.shape[2]),
tensor.alloc(0., n_samples, context.shape[0])],
non_sequences=[pctx_,
context]+shared_vars,
name=_p(prefix, '_layers'),
n_steps=nsteps,
profile=profile,
strict=True)
return rval
def param_init_two_layer_gru_decoder(options, params,
prefix='two_layer_gru_decoder',
nin=None,
dim_char=None,
dim_word=None,
dimctx=None):
if nin is None:
nin = options['n_words']
if dim_char is None:
dim_char = options['dec_dim']
if dim_word is None:
dim_word = options['dec_dim']
if dimctx is None:
dimctx = options['enc_dim'] * 2
# embedding to gates transformation weights, biases
W_xc = numpy.concatenate([norm_weight(nin, dim_char),
norm_weight(nin, dim_char)], axis=1)
params[_p(prefix, 'W_xc')] = W_xc
params[_p(prefix, 'b_c')] = numpy.zeros((2 * dim_char,)).astype('float32')
# recurrent transformation weights for gates
U_cc = numpy.concatenate([ortho_weight(dim_char),
ortho_weight(dim_char)], axis=1)
params[_p(prefix, 'U_cc')] = U_cc
# embedding to hidden state proposal weights, biases
Wx_xc = norm_weight(nin, dim_char)
params[_p(prefix, 'Wx_xc')] = Wx_xc
params[_p(prefix, 'bx_c')] = numpy.zeros((dim_char,)).astype('float32')
# recurrent transformation weights for hidden state proposal
Ux_cc = ortho_weight(dim_char)
params[_p(prefix, 'Ux_cc')] = Ux_cc
# embedding to gates transformation weights, biases
W_cw = numpy.concatenate([norm_weight(dim_char, dim_word),
norm_weight(dim_char, dim_word)], axis=1)
params[_p(prefix, 'W_cw')] = W_cw
params[_p(prefix, 'b_w')] = numpy.zeros((2 * dim_word,)).astype('float32')
# recurrent transformation weights for gates
U_ww = numpy.concatenate([ortho_weight(dim_word),
ortho_weight(dim_word)], axis=1)
params[_p(prefix, 'U_ww')] = U_ww
# embedding to hidden state proposal weights, biases
Wx_cw = norm_weight(dim_char, dim_word)
params[_p(prefix, 'Wx_cw')] = Wx_cw
params[_p(prefix, 'bx_w')] = numpy.zeros((dim_word,)).astype('float32')
# recurrent transformation weights for hidden state proposal
Ux_ww = ortho_weight(dim_word)
params[_p(prefix, 'Ux_ww')] = Ux_ww
# context to GRU gates: char-level
W_ctxc = numpy.concatenate([norm_weight(dimctx, dim_char),
norm_weight(dimctx, dim_char)], axis=1)
params[_p(prefix, 'W_ctxc')] = W_ctxc
# context to hidden proposal: char-level
Wx_ctxc = norm_weight(dimctx, dim_char)
params[_p(prefix, 'Wx_ctxc')] = Wx_ctxc
# context to GRU gates: word-level
W_ctxw = numpy.concatenate([norm_weight(dimctx, dim_word),
norm_weight(dimctx, dim_word)], axis=1)
params[_p(prefix, 'W_ctxw')] = W_ctxw
# context to hidden proposal: word-level
Wx_ctxw = norm_weight(dimctx, dim_word)
params[_p(prefix, 'Wx_ctxw')] = Wx_ctxw
# attention: prev -> hidden
Winp_att = norm_weight(nin, dimctx)
params[_p(prefix, 'Winp_att')] = Winp_att
# attention: context -> hidden
Wctx_att = norm_weight(dimctx)
params[_p(prefix, 'Wctx_att')] = Wctx_att
# attention: decoder -> hidden
Wdec_att = norm_weight(dim_word, dimctx)
params[_p(prefix, 'Wdec_att')] = Wdec_att
# attention: hidden bias
params[_p(prefix, 'b_att')] = numpy.zeros((dimctx,)).astype('float32')
# attention
U_att = norm_weight(dimctx, 1)
params[_p(prefix, 'U_att')] = U_att
c_att = numpy.zeros((1,)).astype('float32')
params[_p(prefix, 'c_att')] = c_att
return params
def two_layer_gru_decoder(tparams, state_below, options,
prefix='two_layer_gru_decoder',
mask=None, one_step=False,
context=None, context_mask=None,
init_state_char=None, init_state_word=None,
**kwargs):
assert context, 'Context must be provided'
assert context.ndim == 3, \
'Context must be 3-D: #annotation x #sample x #dim'
if one_step:
assert init_state_char, 'previous state must be provided'
assert init_state_word, 'previous state must be provided'
n_steps = state_below.shape[0]
if state_below.ndim in [2, 3]:
n_samples = state_below.shape[1]
elif state_below.ndim == 1:
if not one_step:
            raise ValueError('if state_below.ndim is 1, one_step should also be 1')
else:
n_samples = 1
# mask
if mask is None:
mask = tensor.alloc(1., state_below.shape[0], 1)
dim_char = tparams[_p(prefix, 'Ux_cc')].shape[1]
dim_word = tparams[_p(prefix, 'Ux_ww')].shape[1]
if state_below.dtype == 'int64':
state_below_emb = tparams[_p(prefix, 'W_xc')][state_below.flatten()] + tparams[_p(prefix, 'b_c')]
state_belowx_emb = tparams[_p(prefix, 'Wx_xc')][state_below.flatten()] + tparams[_p(prefix, 'bx_c')]
state_belowctx_emb = tparams[_p(prefix, 'Winp_att')][state_below.flatten()]
if state_below.ndim == 2:
state_below_emb = state_below_emb.reshape((n_steps, n_samples, -1))
state_belowx_emb = state_belowx_emb.reshape((n_steps, n_samples, -1))
state_belowctx_emb = state_belowctx_emb.reshape((n_steps, n_samples, -1))
else:
state_below_emb = tensor.dot(state_below, tparams[_p(prefix, 'W_xc')]) + tparams[_p(prefix, 'b_c')]
state_belowx_emb = tensor.dot(state_below, tparams[_p(prefix, 'Wx_xc')]) + tparams[_p(prefix, 'bx_c')]
state_belowctx_emb = tensor.dot(state_below, tparams[_p(prefix, 'Winp_att')])
# initial/previous state
if init_state_char is None:
init_state_char = tensor.alloc(0., n_samples, dim_char)
if init_state_word is None:
init_state_word = tensor.alloc(0., n_samples, dim_word)
# projected context
proj_ctx = tensor.dot(context, tparams[_p(prefix, 'Wctx_att')]) + tparams[_p(prefix, 'b_att')]
# step function to be used by scan
def _step(m_t,
state_below_emb_t,
state_belowx_emb_t,
state_belowctx_emb_t,
h_c_tm1, h_w_tm1,
ctx_t,
alpha_t,
proj_ctx_all,
context,
U_cc, Ux_cc,
W_cw, Wx_cw, U_ww, Ux_ww, b_w, bx_w,
W_ctxc, Wx_ctxc, W_ctxw, Wx_ctxw,
Wdec_att,
U_att, c_att):
# ~~ attention ~~ #
# project previous hidden states
proj_state = tensor.dot(h_w_tm1, Wdec_att)
# add projected context
proj_ctx = proj_ctx_all + proj_state[None, :, :] + state_belowctx_emb_t
proj_h = tensor.tanh(proj_ctx)
# compute alignment weights
alpha = tensor.dot(proj_h, U_att) + c_att
alpha = alpha.reshape([alpha.shape[0], alpha.shape[1]])
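        # subtract the per-column max before exponentiating for numerical stability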
alpha = tensor.exp(alpha - alpha.max(0))
#alpha = tensor.exp(alpha)
        if context_mask is not None:
alpha = alpha * context_mask
alpha = alpha / alpha.sum(0, keepdims=True)
# compute the weighted averages - current context to GRU
ctx_t = (context * alpha[:, :, None]).sum(0)
# compute char-level
        preact_c = tensor.dot(h_c_tm1, U_cc) + state_below_emb_t + tensor.dot(ctx_t, W_ctxc)
        preact_c = tensor.nnet.sigmoid(preact_c)
        # reset and update gates
        r_c = _slice(preact_c, 0, dim_char)
        u_c = _slice(preact_c, 1, dim_char)
# compute the hidden state proposal: char-level
preactx_c = tensor.dot(h_c_tm1, Ux_cc) * r_c + state_belowx_emb_t + tensor.dot(ctx_t, Wx_ctxc)
# hidden state proposal
h_c = tensor.tanh(preactx_c)
# leaky integrate and obtain next hidden state
h_c_t = u_c * h_c_tm1 + (1. - u_c) * h_c
h_c_t = m_t[:, None] * h_c_t + (1. - m_t)[:, None] * h_c_tm1
        # compute word-level
        preact_w = tensor.dot(h_w_tm1, U_ww) + tensor.dot(h_c_t, W_cw) + tensor.dot(ctx_t, W_ctxw) + b_w
        preact_w = tensor.nnet.sigmoid(preact_w)
        # reset and update gates
        r_w = _slice(preact_w, 0, dim_word)
        u_w = _slice(preact_w, 1, dim_word)
        # compute the hidden state proposal: word-level
        preactx_w = tensor.dot(h_w_tm1, Ux_ww) * r_w + tensor.dot(h_c_t, Wx_cw) + tensor.dot(ctx_t, Wx_ctxw) + bx_w
# hidden state proposal
h_w = tensor.tanh(preactx_w)
# leaky integrate and obtain next hidden state
h_w_t = u_w * h_w_tm1 + (1. - u_w) * h_w
h_w_t = m_t[:, None] * h_w_t + (1. - m_t)[:, None] * h_w_tm1
return h_c_t, h_w_t, ctx_t, alpha.T
# prepare scan arguments
seqs = [mask, state_below_emb, state_belowx_emb, state_belowctx_emb]
shared_vars = [
tparams[_p(prefix, 'U_cc')],
tparams[_p(prefix, 'Ux_cc')],
tparams[_p(prefix, 'W_cw')],
tparams[_p(prefix, 'Wx_cw')],
tparams[_p(prefix, 'U_ww')],
tparams[_p(prefix, 'Ux_ww')],
tparams[_p(prefix, 'b_w')],
tparams[_p(prefix, 'bx_w')],
tparams[_p(prefix, 'W_ctxc')],
tparams[_p(prefix, 'Wx_ctxc')],
tparams[_p(prefix, 'W_ctxw')],
tparams[_p(prefix, 'Wx_ctxw')],
tparams[_p(prefix, 'Wdec_att')],
tparams[_p(prefix, 'U_att')],
tparams[_p(prefix, 'c_att')],
]
if one_step:
rval = _step(*(seqs+[init_state_char, init_state_word,
None, None,
proj_ctx, context]+shared_vars))
else:
rval, updates = theano.scan(_step,
sequences=seqs,
outputs_info=[
init_state_char,
init_state_word,
tensor.alloc(0., n_samples, context.shape[2]),
tensor.alloc(0., n_samples, context.shape[0])
],
non_sequences=[proj_ctx, context]+shared_vars,
name=_p(prefix, '_layers'),
n_steps=n_steps,
profile=profile,
strict=True)
return rval
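# Illustrative usage sketch (editorial, not part of the original graph):
# `init_tparams`, the option dict and the symbolic inputs are assumed to be
# defined elsewhere in this file, following its usual conventions.
# params = param_init_two_layer_gru_decoder(options, OrderedDict(), prefix='decoder')
# tparams = init_tparams(params)
# # y: int64 matrix (n_steps x n_samples) of target indices; the layer embeds it itself
# h_char, h_word, ctxs, alphas = two_layer_gru_decoder(
#     tparams, y, options, prefix='decoder',
#     mask=y_mask, context=annotations, context_mask=x_mask)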
def param_init_two_layer_gru_decoder_both(options, params,
prefix='two_layer_gru_decoder_both',
nin=None,
dim_char=None,
dim_word=None,
dimctx=None):
if nin is None:
nin = options['n_words']
if dim_char is None:
dim_char = options['dec_dim']
if dim_word is None:
dim_word = options['dec_dim']
if dimctx is None:
dimctx = options['enc_dim'] * 2
# embedding to gates transformation weights, biases
W_xc = numpy.concatenate([norm_weight(nin, dim_char),
norm_weight(nin, dim_char)], axis=1)
params[_p(prefix, 'W_xc')] = W_xc
params[_p(prefix, 'b_c')] = numpy.zeros((2 * dim_char,)).astype('float32')
# recurrent transformation weights for gates
U_cc = numpy.concatenate([ortho_weight(dim_char),
ortho_weight(dim_char)], axis=1)
params[_p(prefix, 'U_cc')] = U_cc
# embedding to hidden state proposal weights, biases
Wx_xc = norm_weight(nin, dim_char)
params[_p(prefix, 'Wx_xc')] = Wx_xc
params[_p(prefix, 'bx_c')] = numpy.zeros((dim_char,)).astype('float32')
# recurrent transformation weights for hidden state proposal
Ux_cc = ortho_weight(dim_char)
params[_p(prefix, 'Ux_cc')] = Ux_cc
# embedding to gates transformation weights, biases
W_cw = numpy.concatenate([norm_weight(dim_char, dim_word),
norm_weight(dim_char, dim_word)], axis=1)
params[_p(prefix, 'W_cw')] = W_cw
params[_p(prefix, 'b_w')] = numpy.zeros((2 * dim_word,)).astype('float32')
# recurrent transformation weights for gates
U_ww = numpy.concatenate([ortho_weight(dim_word),
ortho_weight(dim_word)], axis=1)
params[_p(prefix, 'U_ww')] = U_ww
# embedding to hidden state proposal weights, biases
Wx_cw = norm_weight(dim_char, dim_word)
params[_p(prefix, 'Wx_cw')] = Wx_cw
params[_p(prefix, 'bx_w')] = numpy.zeros((dim_word,)).astype('float32')
# recurrent transformation weights for hidden state proposal
Ux_ww = ortho_weight(dim_word)
params[_p(prefix, 'Ux_ww')] = Ux_ww
# context to GRU gates: char-level
W_ctxc = numpy.concatenate([norm_weight(dimctx, dim_char),
norm_weight(dimctx, dim_char)], axis=1)
params[_p(prefix, 'W_ctxc')] = W_ctxc
# context to hidden proposal: char-level
Wx_ctxc = norm_weight(dimctx, dim_char)
params[_p(prefix, 'Wx_ctxc')] = Wx_ctxc
# context to GRU gates: word-level
W_ctxw = numpy.concatenate([norm_weight(dimctx, dim_word),
norm_weight(dimctx, dim_word)], axis=1)
params[_p(prefix, 'W_ctxw')] = W_ctxw
# context to hidden proposal: word-level
Wx_ctxw = norm_weight(dimctx, dim_word)
params[_p(prefix, 'Wx_ctxw')] = Wx_ctxw
# attention: prev -> hidden
Winp_att = norm_weight(nin, dimctx)
params[_p(prefix, 'Winp_att')] = Winp_att
# attention: context -> hidden
Wctx_att = norm_weight(dimctx)
params[_p(prefix, 'Wctx_att')] = Wctx_att
# attention: decoder -> hidden
Wdecc_att = norm_weight(dim_char, dimctx)
params[_p(prefix, 'Wdecc_att')] = Wdecc_att
Wdecw_att = norm_weight(dim_word, dimctx)
params[_p(prefix, 'Wdecw_att')] = Wdecw_att
# attention: hidden bias
params[_p(prefix, 'b_att')] = numpy.zeros((dimctx,)).astype('float32')
# attention
U_att = norm_weight(dimctx, 1)
params[_p(prefix, 'U_att')] = U_att
c_att = numpy.zeros((1,)).astype('float32')
params[_p(prefix, 'c_att')] = c_att
return params
def two_layer_gru_decoder_both(tparams, state_below, options,
prefix='two_layer_gru_decoder_both',
mask=None, one_step=False,
context=None, context_mask=None,
init_state_char=None, init_state_word=None,
**kwargs):
    assert context is not None, 'Context must be provided'
    assert context.ndim == 3, \
        'Context must be 3-D: #annotation x #sample x #dim'
    if one_step:
        assert init_state_char is not None, 'previous state must be provided'
        assert init_state_word is not None, 'previous state must be provided'
n_steps = state_below.shape[0]
if state_below.ndim in [2, 3]:
n_samples = state_below.shape[1]
elif state_below.ndim == 1:
if not one_step:
            raise ValueError('if state_below.ndim is 1, one_step should also be 1')
else:
n_samples = 1
# mask
if mask is None:
mask = tensor.alloc(1., state_below.shape[0], 1)
dim_char = tparams[_p(prefix, 'Ux_cc')].shape[1]
dim_word = tparams[_p(prefix, 'Ux_ww')].shape[1]
if state_below.dtype == 'int64':
state_below_emb = tparams[_p(prefix, 'W_xc')][state_below.flatten()] + tparams[_p(prefix, 'b_c')]
state_belowx_emb = tparams[_p(prefix, 'Wx_xc')][state_below.flatten()] + tparams[_p(prefix, 'bx_c')]
state_belowctx_emb = tparams[_p(prefix, 'Winp_att')][state_below.flatten()]
if state_below.ndim == 2:
state_below_emb = state_below_emb.reshape((n_steps, n_samples, -1))
state_belowx_emb = state_belowx_emb.reshape((n_steps, n_samples, -1))
state_belowctx_emb = state_belowctx_emb.reshape((n_steps, n_samples, -1))
else:
state_below_emb = tensor.dot(state_below, tparams[_p(prefix, 'W_xc')]) + tparams[_p(prefix, 'b_c')]
state_belowx_emb = tensor.dot(state_below, tparams[_p(prefix, 'Wx_xc')]) + tparams[_p(prefix, 'bx_c')]
state_belowctx_emb = tensor.dot(state_below, tparams[_p(prefix, 'Winp_att')])
# initial/previous state
if init_state_char is None:
init_state_char = tensor.alloc(0., n_samples, dim_char)
if init_state_word is None:
init_state_word = tensor.alloc(0., n_samples, dim_word)
# projected context
proj_ctx = tensor.dot(context, tparams[_p(prefix, 'Wctx_att')]) + tparams[_p(prefix, 'b_att')]
# step function to be used by scan
def _step(m_t,
state_below_emb_t,
state_belowx_emb_t,
state_belowctx_emb_t,
h_c_tm1, h_w_tm1,
ctx_t,
alpha_t,
proj_ctx_all,
context,
U_cc, Ux_cc,
W_cw, Wx_cw, U_ww, Ux_ww, b_w, bx_w,
W_ctxc, Wx_ctxc, W_ctxw, Wx_ctxw,
Wdecc_att, Wdecw_att,
U_att, c_att):
# ~~ attention ~~ #
# project previous hidden states
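        # ('both' variant: the alignment model conditions on both decoder levels)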
proj_state = tensor.dot(h_w_tm1, Wdecw_att) + tensor.dot(h_c_tm1, Wdecc_att)
# add projected context
proj_ctx = proj_ctx_all + proj_state[None, :, :] + state_belowctx_emb_t
proj_h = tensor.tanh(proj_ctx)
# compute alignment weights
alpha = tensor.dot(proj_h, U_att) + c_att
alpha = alpha.reshape([alpha.shape[0], alpha.shape[1]])
alpha = tensor.exp(alpha - alpha.max(0))
#alpha = tensor.exp(alpha)
        if context_mask is not None:
alpha = alpha * context_mask
alpha = alpha / alpha.sum(0, keepdims=True)
# compute the weighted averages - current context to GRU
ctx_t = (context * alpha[:, :, None]).sum(0)
# compute char-level
preact_c = tensor.dot(h_c_tm1, U_cc) + state_below_emb_t + tensor.dot(ctx_t, W_ctxc)
preact_c = tensor.nnet.sigmoid(preact_c)
        # reset and update gates
r_c = _slice(preact_c, 0, dim_char)
u_c = _slice(preact_c, 1, dim_char)
# compute the hidden state proposal: char-level
preactx_c = tensor.dot(h_c_tm1, Ux_cc) * r_c + state_belowx_emb_t + tensor.dot(ctx_t, Wx_ctxc)
# hidden state proposal
h_c = tensor.tanh(preactx_c)
# leaky integrate and obtain next hidden state
h_c_t = u_c * h_c_tm1 + (1. - u_c) * h_c
h_c_t = m_t[:, None] * h_c_t + (1. - m_t)[:, None] * h_c_tm1
        # compute word-level
        preact_w = tensor.dot(h_w_tm1, U_ww) + tensor.dot(h_c_t, W_cw) + tensor.dot(ctx_t, W_ctxw) + b_w
        preact_w = tensor.nnet.sigmoid(preact_w)
        # reset and update gates
        r_w = _slice(preact_w, 0, dim_word)
        u_w = _slice(preact_w, 1, dim_word)
        # compute the hidden state proposal: word-level
        preactx_w = tensor.dot(h_w_tm1, Ux_ww) * r_w + tensor.dot(h_c_t, Wx_cw) + tensor.dot(ctx_t, Wx_ctxw) + bx_w
# hidden state proposal
h_w = tensor.tanh(preactx_w)
# leaky integrate and obtain next hidden state
h_w_t = u_w * h_w_tm1 + (1. - u_w) * h_w
h_w_t = m_t[:, None] * h_w_t + (1. - m_t)[:, None] * h_w_tm1
return h_c_t, h_w_t, ctx_t, alpha.T
# prepare scan arguments
seqs = [mask, state_below_emb, state_belowx_emb, state_belowctx_emb]
shared_vars = [
tparams[_p(prefix, 'U_cc')],
tparams[_p(prefix, 'Ux_cc')],
tparams[_p(prefix, 'W_cw')],
tparams[_p(prefix, 'Wx_cw')],
tparams[_p(prefix, 'U_ww')],
tparams[_p(prefix, 'Ux_ww')],
tparams[_p(prefix, 'b_w')],
tparams[_p(prefix, 'bx_w')],
tparams[_p(prefix, 'W_ctxc')],
tparams[_p(prefix, 'Wx_ctxc')],
tparams[_p(prefix, 'W_ctxw')],
tparams[_p(prefix, 'Wx_ctxw')],
tparams[_p(prefix, 'Wdecc_att')],
tparams[_p(prefix, 'Wdecw_att')],
tparams[_p(prefix, 'U_att')],
tparams[_p(prefix, 'c_att')],
]
if one_step:
rval = _step(*(seqs+[init_state_char, init_state_word,
None, None,
proj_ctx, context]+shared_vars))
else:
rval, updates = theano.scan(_step,
sequences=seqs,
outputs_info=[
init_state_char,
init_state_word,
tensor.alloc(0., n_samples, context.shape[2]),
tensor.alloc(0., n_samples, context.shape[0])
],
non_sequences=[proj_ctx, context]+shared_vars,
name=_p(prefix, '_layers'),
n_steps=n_steps,
profile=profile,
strict=True)
return rval
def param_init_biscale_decoder(options, params,
prefix='biscale_decoder',
nin=None,
dim_char=None,
dim_word=None,
dimctx=None,
scalar_bound=False):
if nin is None:
nin = options['n_words']
if dim_char is None:
dim_char = options['dec_dim']
if dim_word is None:
dim_word = options['dec_dim']
if dimctx is None:
dimctx = options['enc_dim'] * 2
# embedding to gates transformation weights, biases
if scalar_bound:
W_xc = norm_vector(nin)
params[_p(prefix, 'b_c')] = numpy.zeros((1,)).astype('float32')
else:
W_xc = norm_weight(nin, dim_char)
params[_p(prefix, 'b_c')] = numpy.zeros((dim_char,)).astype('float32')
params[_p(prefix, 'W_xc')] = W_xc
# recurrent transformation weights for gates
if scalar_bound:
U_cc = norm_vector(dim_char)
U_wc = norm_vector(dim_char)
else:
U_cc = ortho_weight(dim_char)
U_wc = ortho_weight(dim_char)
params[_p(prefix, 'U_cc')] = U_cc
params[_p(prefix, 'U_wc')] = U_wc
# embedding to hidden state proposal weights, biases
Wx_xc = norm_weight(nin, dim_char)
params[_p(prefix, 'Wx_xc')] = Wx_xc
params[_p(prefix, 'bx_c')] = numpy.zeros((dim_char,)).astype('float32')
# recurrent transformation weights for hidden state proposal
Ux_cc = ortho_weight(dim_char)
params[_p(prefix, 'Ux_cc')] = Ux_cc
Ux_wc = ortho_weight(dim_char)
params[_p(prefix, 'Ux_wc')] = Ux_wc
# embedding to gates transformation weights, biases
if scalar_bound:
W_cw = norm_vector(dim_char)
params[_p(prefix, 'b_w')] = numpy.zeros((1,)).astype('float32')
else:
W_cw = norm_weight(dim_char, dim_word)
params[_p(prefix, 'b_w')] = numpy.zeros((dim_word,)).astype('float32')
params[_p(prefix, 'W_cw')] = W_cw
# recurrent transformation weights for gates
if scalar_bound:
U_ww = norm_vector(dim_word)
else:
U_ww = ortho_weight(dim_word)
params[_p(prefix, 'U_ww')] = U_ww
# embedding to hidden state proposal weights, biases
Wx_cw = norm_weight(dim_char, dim_word)
params[_p(prefix, 'Wx_cw')] = Wx_cw
params[_p(prefix, 'bx_w')] = numpy.zeros((dim_word,)).astype('float32')
# recurrent transformation weights for hidden state proposal
Ux_ww = ortho_weight(dim_word)
params[_p(prefix, 'Ux_ww')] = Ux_ww
# context to GRU gates: char-level
if scalar_bound:
W_ctxc = norm_vector(dimctx)
else:
W_ctxc = norm_weight(dimctx, dim_char)
params[_p(prefix, 'W_ctxc')] = W_ctxc
# context to hidden proposal: char-level
Wx_ctxc = norm_weight(dimctx, dim_char)
params[_p(prefix, 'Wx_ctxc')] = Wx_ctxc
# context to GRU gates: word-level
if scalar_bound:
W_ctxw = norm_vector(dimctx)
else:
W_ctxw = norm_weight(dimctx, dim_word)
params[_p(prefix, 'W_ctxw')] = W_ctxw
# context to hidden proposal: word-level
Wx_ctxw = norm_weight(dimctx, dim_word)
params[_p(prefix, 'Wx_ctxw')] = Wx_ctxw
# attention: prev -> hidden
Winp_att = norm_weight(nin, dimctx)
params[_p(prefix, 'Winp_att')] = Winp_att
# attention: context -> hidden
Wctx_att = norm_weight(dimctx)
params[_p(prefix, 'Wctx_att')] = Wctx_att
# attention: decoder -> hidden
Wdec_att = norm_weight(dim_word, dimctx)
params[_p(prefix, 'Wdec_att')] = Wdec_att
# attention: hidden bias
params[_p(prefix, 'b_att')] = numpy.zeros((dimctx,)).astype('float32')
# attention
U_att = norm_weight(dimctx, 1)
params[_p(prefix, 'U_att')] = U_att
c_att = numpy.zeros((1,)).astype('float32')
params[_p(prefix, 'c_att')] = c_att
return params
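# Note on the biscale decoders below: bd_c and bd_w are learned soft
# "boundary" gates that control how strongly the char-level and word-level
# states are mixed at each step; with scalar_bound=True each gate collapses
# to a single scalar per sample instead of a full vector.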
def biscale_decoder(tparams, state_below, options,
prefix='biscale_decoder',
mask=None, one_step=False,
context=None, context_mask=None,
init_state_char=None, init_state_word=None,
init_bound_char=None, init_bound_word=None,
scalar_bound=False,
**kwargs):
    assert context is not None, 'Context must be provided'
    assert context.ndim == 3, \
        'Context must be 3-D: #annotation x #sample x #dim'
    if one_step:
        assert init_state_char is not None, 'previous state must be provided'
        assert init_state_word is not None, 'previous state must be provided'
        assert init_bound_char is not None, 'previous bound must be provided'
        assert init_bound_word is not None, 'previous bound must be provided'
n_steps = state_below.shape[0]
if state_below.ndim in [2, 3]:
n_samples = state_below.shape[1]
elif state_below.ndim == 1:
if not one_step:
            raise ValueError('if state_below.ndim is 1, one_step should also be 1')
else:
n_samples = 1
# mask
if mask is None:
mask = tensor.alloc(1., state_below.shape[0], 1)
dim_char = tparams[_p(prefix, 'Ux_cc')].shape[1]
dim_word = tparams[_p(prefix, 'Ux_ww')].shape[1]
if state_below.dtype == 'int64':
state_below_emb = tparams[_p(prefix, 'W_xc')][state_below.flatten()]
if scalar_bound:
state_below_emb += tensor.addbroadcast(tparams[_p(prefix, 'b_c')], 0)
else:
state_below_emb += tparams[_p(prefix, 'b_c')]
state_belowx_emb = tparams[_p(prefix, 'Wx_xc')][state_below.flatten()] + tparams[_p(prefix, 'bx_c')]
state_belowctx_emb = tparams[_p(prefix, 'Winp_att')][state_below.flatten()]
if state_below.ndim == 2:
state_below_emb = state_below_emb.reshape((n_steps, n_samples, -1))
state_belowx_emb = state_belowx_emb.reshape((n_steps, n_samples, -1))
state_belowctx_emb = state_belowctx_emb.reshape((n_steps, n_samples, -1))
else:
state_below_emb = tensor.dot(state_below, tparams[_p(prefix, 'W_xc')])
if scalar_bound:
state_below_emb += tensor.addbroadcast(tparams[_p(prefix, 'b_c')], 0)
else:
state_below_emb += tparams[_p(prefix, 'b_c')]
state_belowx_emb = tensor.dot(state_below, tparams[_p(prefix, 'Wx_xc')]) + tparams[_p(prefix, 'bx_c')]
state_belowctx_emb = tensor.dot(state_below, tparams[_p(prefix, 'Winp_att')])
# initial/previous state
if init_state_char is None:
init_state_char = tensor.alloc(0., n_samples, dim_char).astype('float32')
if init_state_word is None:
init_state_word = tensor.alloc(0., n_samples, dim_word).astype('float32')
if scalar_bound:
if init_bound_char is None:
init_bound_char = tensor.alloc(0, n_samples).astype('float32')
if init_bound_word is None:
            init_bound_word = tensor.alloc(0, n_samples).astype('float32')
else:
if init_bound_char is None:
init_bound_char = tensor.zeros_like(init_state_char)
if init_bound_word is None:
init_bound_word = tensor.zeros_like(init_state_word)
# projected context
proj_ctx = tensor.dot(context, tparams[_p(prefix, 'Wctx_att')]) + tparams[_p(prefix, 'b_att')]
# step function to be used by scan
def _step(m_t,
state_below_emb_t,
state_belowx_emb_t,
state_belowctx_emb_t,
h_c_tm1, h_w_tm1,
bd_c_tm1, bd_w_tm1,
ctx_t,
alpha_t,
proj_ctx_all,
context,
U_cc, Ux_cc, U_wc, Ux_wc,
W_cw, Wx_cw, U_ww, Ux_ww, b_w, bx_w,
W_ctxc, Wx_ctxc, W_ctxw, Wx_ctxw,
Wdec_att,
U_att, c_att):
# ~~ attention ~~ #
# project previous hidden states
proj_state = tensor.dot(h_w_tm1, Wdec_att)
# add projected context
proj_ctx = proj_ctx_all + proj_state[None, :, :] + state_belowctx_emb_t
proj_h = tensor.tanh(proj_ctx)
# compute alignment weights
alpha = tensor.dot(proj_h, U_att) + c_att
alpha = alpha.reshape([alpha.shape[0], alpha.shape[1]])
alpha = tensor.exp(alpha - alpha.max(0))
#alpha = tensor.exp(alpha)
        if context_mask is not None:
alpha = alpha * context_mask
alpha = alpha / alpha.sum(0, keepdims=True)
# compute the weighted averages - current context to GRU
ctx_t = (context * alpha[:, :, None]).sum(0)
if scalar_bound:
bd_c_tm1 = bd_c_tm1[:, None]
bd_w_tm1 = bd_w_tm1[:, None]
# compute char-level
        preact_c = tensor.dot((1 - bd_c_tm1) * h_c_tm1, U_cc) + tensor.dot(bd_c_tm1 * h_w_tm1, U_wc) + tensor.dot(ctx_t, W_ctxc)
if scalar_bound:
preact_c += state_below_emb_t
preact_c = preact_c[:, None]
else:
preact_c += state_below_emb_t
        # boundary gate: char-level
bd_c_t = tensor.nnet.sigmoid(preact_c)
# compute the hidden state proposal: char-level
preactx_c = tensor.dot((1 - bd_c_tm1) * h_c_tm1, Ux_cc) + tensor.dot(bd_c_tm1 * h_w_tm1, Ux_wc) + tensor.dot(ctx_t, Wx_ctxc) + state_belowx_emb_t
h_c_t = tensor.tanh(preactx_c)
h_c_t = m_t[:, None] * h_c_t + (1. - m_t)[:, None] * h_c_tm1
# compute word-level
preact_w = tensor.dot((1 - bd_w_tm1) * h_w_tm1, U_ww) + tensor.dot(bd_c_t * h_c_t, W_cw) + tensor.dot(ctx_t, W_ctxw)
if scalar_bound:
preact_w += b_w[:, None]
preact_w = preact_w.T
else:
preact_w += b_w
        # boundary gate: word-level
bd_w_t = tensor.nnet.sigmoid(preact_w)
# compute the hidden state proposal: word-level
preactx_w = tensor.dot((1 - bd_w_tm1) * h_w_tm1, Ux_ww) + tensor.dot(bd_c_t * h_c_t, Wx_cw) + tensor.dot(ctx_t, Wx_ctxw) + bx_w
h_w_t = tensor.tanh(preactx_w)
h_w_t = bd_c_t * h_w_t + (1. - bd_c_t) * h_w_tm1
h_w_t = m_t[:, None] * h_w_t + (1. - m_t)[:, None] * h_w_tm1
if scalar_bound:
bd_c_t = bd_c_t.flatten()
bd_w_t = bd_w_t.flatten()
return h_c_t, h_w_t, bd_c_t, bd_w_t, ctx_t, alpha.T
# prepare scan arguments
seqs = [mask, state_below_emb, state_belowx_emb, state_belowctx_emb]
shared_vars = [
tparams[_p(prefix, 'U_cc')],
tparams[_p(prefix, 'Ux_cc')],
tparams[_p(prefix, 'U_wc')],
tparams[_p(prefix, 'Ux_wc')],
tparams[_p(prefix, 'W_cw')],
tparams[_p(prefix, 'Wx_cw')],
tparams[_p(prefix, 'U_ww')],
tparams[_p(prefix, 'Ux_ww')],
tparams[_p(prefix, 'b_w')],
tparams[_p(prefix, 'bx_w')],
tparams[_p(prefix, 'W_ctxc')],
tparams[_p(prefix, 'Wx_ctxc')],
tparams[_p(prefix, 'W_ctxw')],
tparams[_p(prefix, 'Wx_ctxw')],
tparams[_p(prefix, 'Wdec_att')],
tparams[_p(prefix, 'U_att')],
tparams[_p(prefix, 'c_att')],
]
if one_step:
rval = _step(*(seqs+[init_state_char, init_state_word,
init_bound_char, init_bound_word,
None, None,
proj_ctx, context]+shared_vars))
else:
rval, updates = theano.scan(_step,
sequences=seqs,
outputs_info=[
init_state_char,
init_state_word,
init_bound_char,
init_bound_word,
tensor.alloc(0., n_samples, context.shape[2]),
tensor.alloc(0., n_samples, context.shape[0])
],
non_sequences=[proj_ctx, context]+shared_vars,
name=_p(prefix, '_layers'),
n_steps=n_steps,
profile=profile,
strict=True)
return rval
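# The *_attc variant below is identical to biscale_decoder except that the
# alignment model attends from the char-level state h_c instead of the
# word-level state h_w (and Wdec_att is sized dim_char x dimctx accordingly).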
def param_init_biscale_decoder_attc(options, params,
prefix='biscale_decoder_attc',
nin=None,
dim_char=None,
dim_word=None,
dimctx=None,
scalar_bound=False):
if nin is None:
nin = options['n_words']
if dim_char is None:
dim_char = options['dec_dim']
if dim_word is None:
dim_word = options['dec_dim']
if dimctx is None:
dimctx = options['enc_dim'] * 2
# embedding to gates transformation weights, biases
if scalar_bound:
W_xc = norm_vector(nin)
params[_p(prefix, 'b_c')] = numpy.zeros((1,)).astype('float32')
else:
W_xc = norm_weight(nin, dim_char)
params[_p(prefix, 'b_c')] = numpy.zeros((dim_char,)).astype('float32')
params[_p(prefix, 'W_xc')] = W_xc
# recurrent transformation weights for gates
if scalar_bound:
U_cc = norm_vector(dim_char)
U_wc = norm_vector(dim_char)
else:
U_cc = ortho_weight(dim_char)
U_wc = ortho_weight(dim_char)
params[_p(prefix, 'U_cc')] = U_cc
params[_p(prefix, 'U_wc')] = U_wc
# embedding to hidden state proposal weights, biases
Wx_xc = norm_weight(nin, dim_char)
params[_p(prefix, 'Wx_xc')] = Wx_xc
params[_p(prefix, 'bx_c')] = numpy.zeros((dim_char,)).astype('float32')
# recurrent transformation weights for hidden state proposal
Ux_cc = ortho_weight(dim_char)
params[_p(prefix, 'Ux_cc')] = Ux_cc
Ux_wc = ortho_weight(dim_char)
params[_p(prefix, 'Ux_wc')] = Ux_wc
# embedding to gates transformation weights, biases
if scalar_bound:
W_cw = norm_vector(dim_char)
params[_p(prefix, 'b_w')] = numpy.zeros((1,)).astype('float32')
else:
W_cw = norm_weight(dim_char, dim_word)
params[_p(prefix, 'b_w')] = numpy.zeros((dim_word,)).astype('float32')
params[_p(prefix, 'W_cw')] = W_cw
# recurrent transformation weights for gates
if scalar_bound:
U_ww = norm_vector(dim_word)
else:
U_ww = ortho_weight(dim_word)
params[_p(prefix, 'U_ww')] = U_ww
# embedding to hidden state proposal weights, biases
Wx_cw = norm_weight(dim_char, dim_word)
params[_p(prefix, 'Wx_cw')] = Wx_cw
params[_p(prefix, 'bx_w')] = numpy.zeros((dim_word,)).astype('float32')
# recurrent transformation weights for hidden state proposal
Ux_ww = ortho_weight(dim_word)
params[_p(prefix, 'Ux_ww')] = Ux_ww
# context to GRU gates: char-level
if scalar_bound:
W_ctxc = norm_vector(dimctx)
else:
W_ctxc = norm_weight(dimctx, dim_char)
params[_p(prefix, 'W_ctxc')] = W_ctxc
# context to hidden proposal: char-level
Wx_ctxc = norm_weight(dimctx, dim_char)
params[_p(prefix, 'Wx_ctxc')] = Wx_ctxc
# context to GRU gates: word-level
if scalar_bound:
W_ctxw = norm_vector(dimctx)
else:
W_ctxw = norm_weight(dimctx, dim_word)
params[_p(prefix, 'W_ctxw')] = W_ctxw
# context to hidden proposal: word-level
Wx_ctxw = norm_weight(dimctx, dim_word)
params[_p(prefix, 'Wx_ctxw')] = Wx_ctxw
# attention: prev -> hidden
Winp_att = norm_weight(nin, dimctx)
params[_p(prefix, 'Winp_att')] = Winp_att
# attention: context -> hidden
Wctx_att = norm_weight(dimctx)
params[_p(prefix, 'Wctx_att')] = Wctx_att
# attention: decoder -> hidden
Wdec_att = norm_weight(dim_char, dimctx)
params[_p(prefix, 'Wdec_att')] = Wdec_att
# attention: hidden bias
params[_p(prefix, 'b_att')] = numpy.zeros((dimctx,)).astype('float32')
# attention
U_att = norm_weight(dimctx, 1)
params[_p(prefix, 'U_att')] = U_att
c_att = numpy.zeros((1,)).astype('float32')
params[_p(prefix, 'c_att')] = c_att
return params
def biscale_decoder_attc(tparams, state_below, options,
prefix='biscale_decoder_attc',
mask=None, one_step=False,
context=None, context_mask=None,
init_state_char=None, init_state_word=None,
init_bound_char=None, init_bound_word=None,
scalar_bound=False,
**kwargs):
    assert context is not None, 'Context must be provided'
    assert context.ndim == 3, \
        'Context must be 3-D: #annotation x #sample x #dim'
    if one_step:
        assert init_state_char is not None, 'previous state must be provided'
        assert init_state_word is not None, 'previous state must be provided'
        assert init_bound_char is not None, 'previous bound must be provided'
        assert init_bound_word is not None, 'previous bound must be provided'
n_steps = state_below.shape[0]
if state_below.ndim in [2, 3]:
n_samples = state_below.shape[1]
elif state_below.ndim == 1:
if not one_step:
            raise ValueError('if state_below.ndim is 1, one_step should also be 1')
else:
n_samples = 1
# mask
if mask is None:
mask = tensor.alloc(1., state_below.shape[0], 1)
dim_char = tparams[_p(prefix, 'Ux_cc')].shape[1]
dim_word = tparams[_p(prefix, 'Ux_ww')].shape[1]
if state_below.dtype == 'int64':
state_below_emb = tparams[_p(prefix, 'W_xc')][state_below.flatten()]
if scalar_bound:
state_below_emb += tensor.addbroadcast(tparams[_p(prefix, 'b_c')], 0)
else:
state_below_emb += tparams[_p(prefix, 'b_c')]
state_belowx_emb = tparams[_p(prefix, 'Wx_xc')][state_below.flatten()] + tparams[_p(prefix, 'bx_c')]
state_belowctx_emb = tparams[_p(prefix, 'Winp_att')][state_below.flatten()]
if state_below.ndim == 2:
state_below_emb = state_below_emb.reshape((n_steps, n_samples, -1))
state_belowx_emb = state_belowx_emb.reshape((n_steps, n_samples, -1))
state_belowctx_emb = state_belowctx_emb.reshape((n_steps, n_samples, -1))
else:
state_below_emb = tensor.dot(state_below, tparams[_p(prefix, 'W_xc')])
if scalar_bound:
state_below_emb += tensor.addbroadcast(tparams[_p(prefix, 'b_c')], 0)
else:
state_below_emb += tparams[_p(prefix, 'b_c')]
state_belowx_emb = tensor.dot(state_below, tparams[_p(prefix, 'Wx_xc')]) + tparams[_p(prefix, 'bx_c')]
state_belowctx_emb = tensor.dot(state_below, tparams[_p(prefix, 'Winp_att')])
# initial/previous state
if init_state_char is None:
init_state_char = tensor.alloc(0., n_samples, dim_char).astype('float32')
if init_state_word is None:
init_state_word = tensor.alloc(0., n_samples, dim_word).astype('float32')
if scalar_bound:
if init_bound_char is None:
init_bound_char = tensor.alloc(0, n_samples).astype('float32')
if init_bound_word is None:
            init_bound_word = tensor.alloc(0, n_samples).astype('float32')
else:
if init_bound_char is None:
init_bound_char = tensor.zeros_like(init_state_char)
if init_bound_word is None:
init_bound_word = tensor.zeros_like(init_state_word)
# projected context
proj_ctx = tensor.dot(context, tparams[_p(prefix, 'Wctx_att')]) + tparams[_p(prefix, 'b_att')]
# step function to be used by scan
def _step(m_t,
state_below_emb_t,
state_belowx_emb_t,
state_belowctx_emb_t,
h_c_tm1, h_w_tm1,
bd_c_tm1, bd_w_tm1,
ctx_t,
alpha_t,
proj_ctx_all,
context,
U_cc, Ux_cc, U_wc, Ux_wc,
W_cw, Wx_cw, U_ww, Ux_ww, b_w, bx_w,
W_ctxc, Wx_ctxc, W_ctxw, Wx_ctxw,
Wdec_att,
U_att, c_att):
# ~~ attention ~~ #
# project previous hidden states
proj_state = tensor.dot(h_c_tm1, Wdec_att)
# add projected context
proj_ctx = proj_ctx_all + proj_state[None, :, :] + state_belowctx_emb_t
proj_h = tensor.tanh(proj_ctx)
# compute alignment weights
alpha = tensor.dot(proj_h, U_att) + c_att
alpha = alpha.reshape([alpha.shape[0], alpha.shape[1]])
alpha = tensor.exp(alpha - alpha.max(0))
#alpha = tensor.exp(alpha)
        if context_mask is not None:
alpha = alpha * context_mask
alpha = alpha / alpha.sum(0, keepdims=True)
# compute the weighted averages - current context to GRU
ctx_t = (context * alpha[:, :, None]).sum(0)
if scalar_bound:
bd_c_tm1 = bd_c_tm1[:, None]
bd_w_tm1 = bd_w_tm1[:, None]
# compute char-level
        preact_c = tensor.dot((1 - bd_c_tm1) * h_c_tm1, U_cc) + tensor.dot(bd_c_tm1 * h_w_tm1, U_wc) + tensor.dot(ctx_t, W_ctxc)
if scalar_bound:
preact_c += state_below_emb_t
preact_c = preact_c[:, None]
else:
preact_c += state_below_emb_t
        # boundary gate: char-level
bd_c_t = tensor.nnet.sigmoid(preact_c)
# compute the hidden state proposal: char-level
preactx_c = tensor.dot((1 - bd_c_tm1) * h_c_tm1, Ux_cc) + tensor.dot(bd_c_tm1 * h_w_tm1, Ux_wc) + tensor.dot(ctx_t, Wx_ctxc) + state_belowx_emb_t
h_c_t = tensor.tanh(preactx_c)
h_c_t = m_t[:, None] * h_c_t + (1. - m_t)[:, None] * h_c_tm1
# compute word-level
preact_w = tensor.dot((1 - bd_w_tm1) * h_w_tm1, U_ww) + tensor.dot(bd_c_t * h_c_t, W_cw) + tensor.dot(ctx_t, W_ctxw)
if scalar_bound:
preact_w += b_w[:, None]
preact_w = preact_w.T
else:
preact_w += b_w
        # boundary gate: word-level
bd_w_t = tensor.nnet.sigmoid(preact_w)
# compute the hidden state proposal: word-level
preactx_w = tensor.dot((1 - bd_w_tm1) * h_w_tm1, Ux_ww) + tensor.dot(bd_c_t * h_c_t, Wx_cw) + tensor.dot(ctx_t, Wx_ctxw) + bx_w
h_w_t = tensor.tanh(preactx_w)
h_w_t = bd_c_t * h_w_t + (1. - bd_c_t) * h_w_tm1
h_w_t = m_t[:, None] * h_w_t + (1. - m_t)[:, None] * h_w_tm1
if scalar_bound:
bd_c_t = bd_c_t.flatten()
bd_w_t = bd_w_t.flatten()
return h_c_t, h_w_t, bd_c_t, bd_w_t, ctx_t, alpha.T
# prepare scan arguments
seqs = [mask, state_below_emb, state_belowx_emb, state_belowctx_emb]
shared_vars = [
tparams[_p(prefix, 'U_cc')],
tparams[_p(prefix, 'Ux_cc')],
tparams[_p(prefix, 'U_wc')],
tparams[_p(prefix, 'Ux_wc')],
tparams[_p(prefix, 'W_cw')],
tparams[_p(prefix, 'Wx_cw')],
tparams[_p(prefix, 'U_ww')],
tparams[_p(prefix, 'Ux_ww')],
tparams[_p(prefix, 'b_w')],
tparams[_p(prefix, 'bx_w')],
tparams[_p(prefix, 'W_ctxc')],
tparams[_p(prefix, 'Wx_ctxc')],
tparams[_p(prefix, 'W_ctxw')],
tparams[_p(prefix, 'Wx_ctxw')],
tparams[_p(prefix, 'Wdec_att')],
tparams[_p(prefix, 'U_att')],
tparams[_p(prefix, 'c_att')],
]
if one_step:
rval = _step(*(seqs+[init_state_char, init_state_word,
init_bound_char, init_bound_word,
None, None,
proj_ctx, context]+shared_vars))
else:
rval, updates = theano.scan(_step,
sequences=seqs,
outputs_info=[
init_state_char,
init_state_word,
init_bound_char,
init_bound_word,
tensor.alloc(0., n_samples, context.shape[2]),
tensor.alloc(0., n_samples, context.shape[0])
],
non_sequences=[proj_ctx, context]+shared_vars,
name=_p(prefix, '_layers'),
n_steps=n_steps,
profile=profile,
strict=True)
return rval
def param_init_biscale_decoder_both(options, params,
prefix='biscale_decoder_both',
nin=None,
dim_char=None,
dim_word=None,
dimctx=None,
scalar_bound=False):
if nin is None:
nin = options['n_words']
if dim_char is None:
dim_char = options['dec_dim']
if dim_word is None:
dim_word = options['dec_dim']
if dimctx is None:
dimctx = options['enc_dim'] * 2
# embedding to gates transformation weights, biases
if scalar_bound:
W_xc = norm_vector(nin)
params[_p(prefix, 'b_c')] = numpy.zeros((1,)).astype('float32')
else:
W_xc = norm_weight(nin, dim_char)
params[_p(prefix, 'b_c')] = numpy.zeros((dim_char,)).astype('float32')
params[_p(prefix, 'W_xc')] = W_xc
# recurrent transformation weights for gates
if scalar_bound:
U_cc = norm_vector(dim_char)
U_wc = norm_vector(dim_char)
else:
U_cc = ortho_weight(dim_char)
U_wc = ortho_weight(dim_char)
params[_p(prefix, 'U_cc')] = U_cc
params[_p(prefix, 'U_wc')] = U_wc
# embedding to hidden state proposal weights, biases
Wx_xc = norm_weight(nin, dim_char)
params[_p(prefix, 'Wx_xc')] = Wx_xc
params[_p(prefix, 'bx_c')] = numpy.zeros((dim_char,)).astype('float32')
# recurrent transformation weights for hidden state proposal
Ux_cc = ortho_weight(dim_char)
params[_p(prefix, 'Ux_cc')] = Ux_cc
Ux_wc = ortho_weight(dim_char)
params[_p(prefix, 'Ux_wc')] = Ux_wc
# embedding to gates transformation weights, biases
if scalar_bound:
W_cw = norm_vector(dim_char)
params[_p(prefix, 'b_w')] = numpy.zeros((1,)).astype('float32')
else:
W_cw = norm_weight(dim_char, dim_word)
params[_p(prefix, 'b_w')] = numpy.zeros((dim_word,)).astype('float32')
params[_p(prefix, 'W_cw')] = W_cw
# recurrent transformation weights for gates
if scalar_bound:
U_ww = norm_vector(dim_word)
else:
U_ww = ortho_weight(dim_word)
params[_p(prefix, 'U_ww')] = U_ww
# embedding to hidden state proposal weights, biases
Wx_cw = norm_weight(dim_char, dim_word)
params[_p(prefix, 'Wx_cw')] = Wx_cw
params[_p(prefix, 'bx_w')] = numpy.zeros((dim_word,)).astype('float32')
# recurrent transformation weights for hidden state proposal
Ux_ww = ortho_weight(dim_word)
params[_p(prefix, 'Ux_ww')] = Ux_ww
# context to GRU gates: char-level
if scalar_bound:
W_ctxc = norm_vector(dimctx)
else:
W_ctxc = norm_weight(dimctx, dim_char)
params[_p(prefix, 'W_ctxc')] = W_ctxc
# context to hidden proposal: char-level
Wx_ctxc = norm_weight(dimctx, dim_char)
params[_p(prefix, 'Wx_ctxc')] = Wx_ctxc
# context to GRU gates: word-level
if scalar_bound:
W_ctxw = norm_vector(dimctx)
else:
W_ctxw = norm_weight(dimctx, dim_word)
params[_p(prefix, 'W_ctxw')] = W_ctxw
# context to hidden proposal: word-level
Wx_ctxw = norm_weight(dimctx, dim_word)
params[_p(prefix, 'Wx_ctxw')] = Wx_ctxw
# attention: prev -> hidden
Winp_att = norm_weight(nin, dimctx)
params[_p(prefix, 'Winp_att')] = Winp_att
# attention: context -> hidden
Wctx_att = norm_weight(dimctx)
params[_p(prefix, 'Wctx_att')] = Wctx_att
# attention: decoder -> hidden
Wdecc_att = norm_weight(dim_char, dimctx)
params[_p(prefix, 'Wdecc_att')] = Wdecc_att
Wdecw_att = norm_weight(dim_word, dimctx)
params[_p(prefix, 'Wdecw_att')] = Wdecw_att
# attention: hidden bias
params[_p(prefix, 'b_att')] = numpy.zeros((dimctx,)).astype('float32')
# attention
U_att = norm_weight(dimctx, 1)
params[_p(prefix, 'U_att')] = U_att
c_att = numpy.zeros((1,)).astype('float32')
params[_p(prefix, 'c_att')] = c_att
return params
def biscale_decoder_both(tparams, state_below, options,
prefix='biscale_decoder_both',
mask=None, one_step=False,
context=None, context_mask=None,
init_state_char=None, init_state_word=None,
init_bound_char=None, init_bound_word=None,
scalar_bound=False,
**kwargs):
    assert context is not None, 'Context must be provided'
    assert context.ndim == 3, \
        'Context must be 3-D: #annotation x #sample x #dim'
    if one_step:
        assert init_state_char is not None, 'previous state must be provided'
        assert init_state_word is not None, 'previous state must be provided'
        assert init_bound_char is not None, 'previous bound must be provided'
        assert init_bound_word is not None, 'previous bound must be provided'
n_steps = state_below.shape[0]
if state_below.ndim in [2, 3]:
n_samples = state_below.shape[1]
elif state_below.ndim == 1:
if not one_step:
            raise ValueError('if state_below.ndim is 1, one_step should also be 1')
else:
n_samples = 1
# mask
if mask is None:
mask = tensor.alloc(1., state_below.shape[0], 1)
dim_char = tparams[_p(prefix, 'Ux_cc')].shape[1]
dim_word = tparams[_p(prefix, 'Ux_ww')].shape[1]
if state_below.dtype == 'int64':
state_below_emb = tparams[_p(prefix, 'W_xc')][state_below.flatten()]
if scalar_bound:
state_below_emb += tensor.addbroadcast(tparams[_p(prefix, 'b_c')], 0)
else:
state_below_emb += tparams[_p(prefix, 'b_c')]
state_belowx_emb = tparams[_p(prefix, 'Wx_xc')][state_below.flatten()] + tparams[_p(prefix, 'bx_c')]
state_belowctx_emb = tparams[_p(prefix, 'Winp_att')][state_below.flatten()]
if state_below.ndim == 2:
state_below_emb = state_below_emb.reshape((n_steps, n_samples, -1))
state_belowx_emb = state_belowx_emb.reshape((n_steps, n_samples, -1))
state_belowctx_emb = state_belowctx_emb.reshape((n_steps, n_samples, -1))
else:
state_below_emb = tensor.dot(state_below, tparams[_p(prefix, 'W_xc')])
if scalar_bound:
state_below_emb += tensor.addbroadcast(tparams[_p(prefix, 'b_c')], 0)
else:
state_below_emb += tparams[_p(prefix, 'b_c')]
state_belowx_emb = tensor.dot(state_below, tparams[_p(prefix, 'Wx_xc')]) + tparams[_p(prefix, 'bx_c')]
state_belowctx_emb = tensor.dot(state_below, tparams[_p(prefix, 'Winp_att')])
# initial/previous state
if init_state_char is None:
init_state_char = tensor.alloc(0., n_samples, dim_char).astype('float32')
if init_state_word is None:
init_state_word = tensor.alloc(0., n_samples, dim_word).astype('float32')
if scalar_bound:
if init_bound_char is None:
init_bound_char = tensor.alloc(0, n_samples).astype('float32')
if init_bound_word is None:
            init_bound_word = tensor.alloc(0, n_samples).astype('float32')
else:
if init_bound_char is None:
init_bound_char = tensor.zeros_like(init_state_char)
if init_bound_word is None:
init_bound_word = tensor.zeros_like(init_state_word)
# projected context
proj_ctx = tensor.dot(context, tparams[_p(prefix, 'Wctx_att')]) + tparams[_p(prefix, 'b_att')]
# step function to be used by scan
def _step(m_t,
state_below_emb_t,
state_belowx_emb_t,
state_belowctx_emb_t,
h_c_tm1, h_w_tm1,
bd_c_tm1, bd_w_tm1,
ctx_t,
alpha_t,
proj_ctx_all,
context,
U_cc, Ux_cc, U_wc, Ux_wc,
W_cw, Wx_cw, U_ww, Ux_ww, b_w, bx_w,
W_ctxc, Wx_ctxc, W_ctxw, Wx_ctxw,
Wdecc_att, Wdecw_att,
U_att, c_att):
# ~~ attention ~~ #
# project previous hidden states
proj_state = tensor.dot(h_w_tm1, Wdecw_att) + tensor.dot(h_c_tm1, Wdecc_att)
# add projected context
proj_ctx = proj_ctx_all + proj_state[None, :, :] + state_belowctx_emb_t
proj_h = tensor.tanh(proj_ctx)
# compute alignment weights
alpha = tensor.dot(proj_h, U_att) + c_att
alpha = alpha.reshape([alpha.shape[0], alpha.shape[1]])
alpha = tensor.exp(alpha - alpha.max(0))
#alpha = tensor.exp(alpha)
        if context_mask is not None:
alpha = alpha * context_mask
alpha = alpha / alpha.sum(0, keepdims=True)
# compute the weighted averages - current context to GRU
ctx_t = (context * alpha[:, :, None]).sum(0)
if scalar_bound:
bd_c_tm1 = bd_c_tm1[:, None]
bd_w_tm1 = bd_w_tm1[:, None]
# compute char-level
        preact_c = tensor.dot((1 - bd_c_tm1) * h_c_tm1, U_cc) + tensor.dot(bd_c_tm1 * h_w_tm1, U_wc) + tensor.dot(ctx_t, W_ctxc)
if scalar_bound:
preact_c += state_below_emb_t
preact_c = preact_c[:, None]
else:
preact_c += state_below_emb_t
        # boundary gate: char-level
bd_c_t = tensor.nnet.sigmoid(preact_c)
# compute the hidden state proposal: char-level
preactx_c = tensor.dot((1 - bd_c_tm1) * h_c_tm1, Ux_cc) + tensor.dot(bd_c_tm1 * h_w_tm1, Ux_wc) + tensor.dot(ctx_t, Wx_ctxc) + state_belowx_emb_t
h_c_t = tensor.tanh(preactx_c)
h_c_t = m_t[:, None] * h_c_t + (1. - m_t)[:, None] * h_c_tm1
# compute word-level
preact_w = tensor.dot((1 - bd_w_tm1) * h_w_tm1, U_ww) + tensor.dot(bd_c_t * h_c_t, W_cw) + tensor.dot(ctx_t, W_ctxw)
if scalar_bound:
preact_w += b_w[:, None]
preact_w = preact_w.T
else:
preact_w += b_w
        # boundary gate: word-level
bd_w_t = tensor.nnet.sigmoid(preact_w)
# compute the hidden state proposal: word-level
preactx_w = tensor.dot((1 - bd_w_tm1) * h_w_tm1, Ux_ww) + tensor.dot(bd_c_t * h_c_t, Wx_cw) + tensor.dot(ctx_t, Wx_ctxw) + bx_w
h_w_t = tensor.tanh(preactx_w)
h_w_t = bd_c_t * h_w_t + (1. - bd_c_t) * h_w_tm1
h_w_t = m_t[:, None] * h_w_t + (1. - m_t)[:, None] * h_w_tm1
if scalar_bound:
bd_c_t = bd_c_t.flatten()
bd_w_t = bd_w_t.flatten()
return h_c_t, h_w_t, bd_c_t, bd_w_t, ctx_t, alpha.T
# prepare scan arguments
seqs = [mask, state_below_emb, state_belowx_emb, state_belowctx_emb]
shared_vars = [
tparams[_p(prefix, 'U_cc')],
tparams[_p(prefix, 'Ux_cc')],
tparams[_p(prefix, 'U_wc')],
tparams[_p(prefix, 'Ux_wc')],
tparams[_p(prefix, 'W_cw')],
tparams[_p(prefix, 'Wx_cw')],
tparams[_p(prefix, 'U_ww')],
tparams[_p(prefix, 'Ux_ww')],
tparams[_p(prefix, 'b_w')],
tparams[_p(prefix, 'bx_w')],
tparams[_p(prefix, 'W_ctxc')],
tparams[_p(prefix, 'Wx_ctxc')],
tparams[_p(prefix, 'W_ctxw')],
tparams[_p(prefix, 'Wx_ctxw')],
tparams[_p(prefix, 'Wdecc_att')],
tparams[_p(prefix, 'Wdecw_att')],
tparams[_p(prefix, 'U_att')],
tparams[_p(prefix, 'c_att')],
]
if one_step:
rval = _step(*(seqs+[init_state_char, init_state_word,
init_bound_char, init_bound_word,
None, None,
proj_ctx, context]+shared_vars))
else:
rval, updates = theano.scan(_step,
sequences=seqs,
outputs_info=[
init_state_char,
init_state_word,
init_bound_char,
init_bound_word,
tensor.alloc(0., n_samples, context.shape[2]),
tensor.alloc(0., n_samples, context.shape[0])
],
non_sequences=[proj_ctx, context]+shared_vars,
name=_p(prefix, '_layers'),
n_steps=n_steps,
profile=profile,
strict=True)
return rval
# optimizers
def gradient_clipping(grads, tparams, clip_c=10):
g2 = 0.
for g in grads:
g2 += (g**2).sum()
g2 = tensor.sqrt(g2)
not_finite = tensor.or_(tensor.isnan(g2), tensor.isinf(g2))
new_grads = []
    for g in grads:
new_grads.append(tensor.switch(g2 > clip_c,
g * (clip_c / g2),
g))
return new_grads, not_finite, tensor.lt(clip_c, g2)
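# Illustrative usage sketch (editorial): wiring the clipper into an optimizer.
# `cost`, `inp`, `lr` and `itemlist` are assumed to come from the surrounding
# training code.
# grads = tensor.grad(cost, wrt=itemlist(tparams))
# grads, not_finite, clipped = gradient_clipping(grads, tparams, clip_c=1.)
# f_grad_shared, f_update, topt = adam(lr, tparams, grads, inp, cost,
#                                      not_finite=not_finite, clipped=clipped)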
def adam(lr, tparams, grads, inp, cost, not_finite=None, clipped=None,
b1=0.9, b2=0.999, eps=1e-8, file_name=None):
gshared = [theano.shared(p.get_value() * 0., name='%s_grad' % k)
for k, p in tparams.iteritems()]
gsup = [(gs, g) for gs, g in zip(gshared, grads)]
if not_finite is not None and clipped is not None:
f_grad_shared = theano.function(inp, [cost, not_finite, clipped], updates=gsup, profile=profile)
else:
f_grad_shared = theano.function(inp, cost, updates=gsup, profile=profile)
updates = OrderedDict()
optparams = OrderedDict()
optparams['i'] = numpy.float32(0.)
for k, p in tparams.items():
optparams[_p(k, 'm')] = p.get_value() * 0.
optparams[_p(k, 'v')] = p.get_value() * 0.
if file_name is not None:
optparams = load_params(file_name, optparams)
toptparams = init_tparams(optparams)
i_t = toptparams['i'] + 1.
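    # Adam bias correction (Kingma & Ba, 2015): step size lr * sqrt(1 - b2^t) / (1 - b1^t)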
fix1 = b1**i_t
fix2 = b2**i_t
lr_t = lr * tensor.sqrt(1. - fix2) / (1. - fix1)
for (k, p), g in zip(tparams.items(), gshared):
m_t = b1 * toptparams[_p(k, 'm')] + (1. - b1) * g
v_t = b2 * toptparams[_p(k, 'v')] + (1. - b2) * g**2
g_t = lr_t * m_t / (tensor.sqrt(v_t) + eps)
p_t = p - g_t
updates[toptparams[_p(k, 'm')]] = m_t
updates[toptparams[_p(k, 'v')]] = v_t
updates[p] = p_t
updates[toptparams['i']] = i_t
f_update = theano.function([lr], [], updates=updates,
on_unused_input='ignore', profile=profile)
return f_grad_shared, f_update, toptparams
def adadelta(lr, tparams, grads, inp, cost):
zipped_grads = [theano.shared(p.get_value() * numpy.float32(0.),
name='%s_grad' % k)
for k, p in tparams.iteritems()]
running_up2 = [theano.shared(p.get_value() * numpy.float32(0.),
name='%s_rup2' % k)
for k, p in tparams.iteritems()]
running_grads2 = [theano.shared(p.get_value() * numpy.float32(0.),
name='%s_rgrad2' % k)
for k, p in tparams.iteritems()]
zgup = [(zg, g) for zg, g in zip(zipped_grads, grads)]
rg2up = [(rg2, 0.95 * rg2 + 0.05 * (g ** 2))
for rg2, g in zip(running_grads2, grads)]
f_grad_shared = theano.function(inp, cost, updates=zgup+rg2up,
profile=profile)
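    # update direction: -RMS(previous updates) / RMS(gradients) * gradient (Zeiler, 2012)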
updir = [-tensor.sqrt(ru2 + 1e-6) / tensor.sqrt(rg2 + 1e-6) * zg
for zg, ru2, rg2 in
zip(zipped_grads, running_up2, running_grads2)]
ru2up = [(ru2, 0.95 * ru2 + 0.05 * (ud ** 2))
for ru2, ud in zip(running_up2, updir)]
param_up = [(p, p + ud) for p, ud in zip(itemlist(tparams), updir)]
f_update = theano.function([lr], [], updates=ru2up+param_up,
on_unused_input='ignore', profile=profile)
return f_grad_shared, f_update
def rmsprop(lr, tparams, grads, inp, cost, not_finite=None, clipped=None, mom=0.9, sec_mom=0.95, eps=1e-4):
zipped_grads = [theano.shared(p.get_value() * numpy.float32(0.),
name='%s_grad' % k)
for k, p in tparams.iteritems()]
running_grads = [theano.shared(p.get_value() * numpy.float32(0.),
name='%s_rgrad' % k)
for k, p in tparams.iteritems()]
running_grads2 = [theano.shared(p.get_value() * numpy.float32(0.),
name='%s_rgrad2' % k)
for k, p in tparams.iteritems()]
zgup = [(zg, g) for zg, g in zip(zipped_grads, grads)]
rgup = [(rg, sec_mom * rg + (1. - sec_mom) * g) for rg, g in zip(running_grads, grads)]
rg2up = [(rg2, sec_mom * rg2 + (1. - sec_mom) * g**2)
for rg2, g in zip(running_grads2, grads)]
    if not_finite is not None and clipped is not None:
f_grad_shared = theano.function(inp, [cost, not_finite, clipped], updates=zgup+rgup+rg2up, profile=profile)
else:
f_grad_shared = theano.function(inp, cost, updates=zgup+rgup+rg2up, profile=profile)
updir = [theano.shared(p.get_value() * numpy.float32(0.),
name='%s_updir' % k)
for k, p in tparams.iteritems()]
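    # "centered" second moment rg2 - rg**2 estimates the gradient variance (cf. Graves, 2013)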
updir_new = [(ud, mom * ud - lr * zg / tensor.sqrt(rg2 - rg**2 + eps))
for ud, zg, rg, rg2 in zip(updir, zipped_grads, running_grads,
running_grads2)]
param_up = [(p, p + udn[1])
for p, udn in zip(itemlist(tparams), updir_new)]
f_update = theano.function([lr], [], updates=updir_new+param_up,
on_unused_input='ignore', profile=profile)
return f_grad_shared, f_update
def sgd(lr, tparams, grads, x, mask, y, cost):
gshared = [theano.shared(p.get_value() * 0., name='%s_grad' % k)
for k, p in tparams.iteritems()]
gsup = [(gs, g) for gs, g in zip(gshared, grads)]
f_grad_shared = theano.function([x, mask, y], cost, updates=gsup,
profile=profile)
pup = [(p, p - lr * g) for p, g in zip(itemlist(tparams), gshared)]
f_update = theano.function([lr], [], updates=pup, profile=profile)
return f_grad_shared, f_update
| 37.044287 | 153 | 0.57668 | 11,287 | 83,646 | 3.975547 | 0.036768 | 0.054912 | 0.062088 | 0.011031 | 0.885318 | 0.863389 | 0.850865 | 0.836869 | 0.822896 | 0.815676 | 0 | 0.012257 | 0.305562 | 83,646 | 2,257 | 154 | 37.0607 | 0.760239 | 0.105468 | 0 | 0.822222 | 0 | 0 | 0.06275 | 0.00529 | 0 | 0 | 0 | 0 | 0.020952 | 1 | 0.031111 | false | 0 | 0.006349 | 0.00381 | 0.069206 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
81db9cfe7044650adb1ffb98658c8fb3f88dd4ad | 13,546 | py | Python | tests/filters_test.py | a-n-rose/Python-Sound-Tool | 4cb9ab7b55da9808da8dec3bc33759a7615ad4ed | [
"RSA-MD"
] | 52 | 2019-10-13T07:43:51.000Z | 2022-01-13T19:58:01.000Z | tests/filters_test.py | a-n-rose/Python-Sound-Tool | 4cb9ab7b55da9808da8dec3bc33759a7615ad4ed | [
"RSA-MD"
] | 7 | 2019-10-13T08:40:58.000Z | 2021-04-09T13:18:13.000Z | tests/filters_test.py | a-n-rose/Python-Sound-Tool | 4cb9ab7b55da9808da8dec3bc33759a7615ad4ed | [
"RSA-MD"
] | 4 | 2019-10-13T07:43:44.000Z | 2021-04-13T12:16:17.000Z |
import os, sys
import inspect
currentdir = os.path.dirname(os.path.abspath(
inspect.getfile(inspect.currentframe())))
parentdir = os.path.dirname(currentdir)
sys.path.insert(0, parentdir)
import librosa
import numpy as np
import pytest
import soundpy as sp
audiodir = 'test_audio/'
test_audiofile = '{}audio2channels.wav'.format(audiodir)
test_noisyfile = '{}python_traffic.wav'.format(audiodir)
test_filtered_wiener = '{}python_traffic_wiener.wav'.format(audiodir)
test_filtered_wiener_postfilter = '{}python_traffic_pf.wav'.format(audiodir)
test_filtered_bandsub = '{}python_traffic_bs.wav'.format(audiodir)
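# Note: the hard-coded band edges in the setup tests below follow from the
# library defaults (assumed 48 kHz sample rate): the analysed frequency range
# is split into num_bands equal-width bands.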
def test_setup_bands_default():
fil = sp.BandSubtraction()
fil.setup_bands()
band_start_freq = fil.band_start_freq
band_end_freq = fil.band_end_freq
expected1 = np.array([ 0., 80., 160., 240., 320., 400.])
expected2 = np.array([ 80., 160., 240., 320., 400., 480.])
assert np.array_equal(expected1, band_start_freq)
assert np.array_equal(expected2, band_end_freq)
def test_setup_bands_8():
fil = sp.BandSubtraction(num_bands = 8)
fil.setup_bands()
band_start_freq = fil.band_start_freq
band_end_freq = fil.band_end_freq
expected1 = np.array([ 0., 60., 120., 180., 240., 300., 360., 420.])
expected2 = np.array([ 60., 120., 180., 240., 300., 360., 420., 480.])
assert np.array_equal(expected1, band_start_freq)
assert np.array_equal(expected2, band_end_freq)
def test_setup_bands_winsize16ms():
fil = sp.BandSubtraction(win_size_ms = 16)
fil.setup_bands()
band_start_freq = fil.band_start_freq
band_end_freq = fil.band_end_freq
expected1 = np.array([ 0., 64., 128., 192., 256., 320.])
expected2 = np.array([ 64., 128., 192., 256., 320., 384.])
assert np.array_equal(expected1, band_start_freq)
assert np.array_equal(expected2, band_end_freq)
def test_setup_bands_winsize500ms():
fil = sp.BandSubtraction(win_size_ms = 500)
fil.setup_bands()
band_start_freq = fil.band_start_freq
band_end_freq = fil.band_end_freq
expected1 = np.array([ 0., 2000., 4000., 6000., 8000., 10000.])
expected2 = np.array([ 2000., 4000., 6000., 8000., 10000., 12000.])
assert np.array_equal(expected1, band_start_freq)
assert np.array_equal(expected2, band_end_freq)
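# The SNR and oversubtraction tests below seed numpy's RNG so that the
# hard-coded expected values stay reproducible across runs.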
def test_update_posteri_bands_noisy():
noise_max = 0.3
fil = sp.BandSubtraction(num_bands = 3)
fil.setup_bands()
time = np.arange(0, 10, 0.01)
signal = np.sin(time)[:fil.frame_length]
np.random.seed(seed=0)
noise = np.random.normal(np.mean(signal),
np.mean(signal)+noise_max,
fil.frame_length)
powspec = np.abs(np.fft.fft(signal))**2
powspec_noisy = np.abs(np.fft.fft(signal + noise))**2
fil.update_posteri_bands(powspec, powspec_noisy)
snr_bands = fil.snr_bands
expected = np.array([ -2.02865226, -41.70672353, -45.45654087])
assert np.allclose(expected, snr_bands)
def test_update_posteri_bands_verynoisy():
noise_max = 0.7
fil = sp.BandSubtraction(num_bands = 3)
fil.setup_bands()
time = np.arange(0, 10, 0.01)
signal = np.sin(time)[:fil.frame_length]
np.random.seed(seed=0)
noise = np.random.normal(np.mean(signal),
np.mean(signal)+noise_max,
fil.frame_length)
powspec = np.abs(np.fft.fft(signal))**2
powspec_noisy = np.abs(np.fft.fft(signal + noise))**2
fil.update_posteri_bands(powspec, powspec_noisy)
snr_bands = fil.snr_bands
expected = np.array([ -2.82864994, -46.76075799, -50.50670912])
assert np.allclose(expected, snr_bands)
def test_update_posteri_bands_nonoise():
fil = sp.BandSubtraction(num_bands = 3)
fil.setup_bands()
time = np.arange(0, 10, 0.01)
signal = np.sin(time)[:fil.frame_length]
powspec = np.abs(np.fft.fft(signal))**2
powspec_noisy = powspec
fil.update_posteri_bands(powspec, powspec_noisy)
snr_bands = fil.snr_bands
expected = np.array([0., 0., 0.])
assert np.allclose(expected, snr_bands)
def test_calc_oversub_factor_noisy():
noise_max = 0.3
fil = sp.BandSubtraction(num_bands = 4)
fil.setup_bands()
time = np.arange(0, 10, 0.01)
signal = np.sin(time)[:fil.frame_length]
np.random.seed(seed=0)
noise = np.random.normal(np.mean(signal),
np.mean(signal)+noise_max,
fil.frame_length)
powspec = np.abs(np.fft.fft(signal))**2
powspec_noisy = np.abs(np.fft.fft(signal + noise))**2
fil.update_posteri_bands(powspec, powspec_noisy)
a = fil.calc_oversub_factor()
expected = np.array([4.28678354, 4.75, 4.75, 4.75 ])
assert np.allclose(expected, a)
def test_calc_oversub_factor_nonoise():
noise_max = 0.3
fil = sp.BandSubtraction(num_bands = 4)
fil.setup_bands()
time = np.arange(0, 10, 0.01)
signal = np.sin(time)[:fil.frame_length]
powspec = np.abs(np.fft.fft(signal))**2
fil.update_posteri_bands(powspec, powspec)
a = fil.calc_oversub_factor()
expected = np.array([4., 4., 4., 4.])
assert np.allclose(expected, a)
def test_calc_relevant_band1():
fil = sp.BandSubtraction(num_bands = 6)
fil.setup_bands()
band_index = 0
freq = fil.band_start_freq[band_index]
time = np.arange(0, 10, 0.01)
full_circle = 2 * np.pi
signal = np.sin((freq*full_circle)*time)[:fil.frame_length]
powspec = np.abs(np.fft.fft(signal))**2
rel_band, pow_levels = fil.calc_relevant_band(powspec)
print('IF ERROR, PERHAPS DUE TO HARMONICS??? OR BAND SPACING???')
print('Testing frequency: ', freq)
print('Expected most relevant band: ', band_index)
print('Which covers frequencies between {} and {}.'.format(
fil.band_start_freq[band_index], fil.band_end_freq[band_index]))
print('Calculated energy levels of bands: ', pow_levels)
print('Most energetic frequency band: ', rel_band)
print('Which covers frequencies between {} and {}.'.format(
fil.band_start_freq[rel_band], fil.band_end_freq[rel_band]))
expected = band_index
assert expected == rel_band
def test_calc_relevant_band2():
fil = sp.BandSubtraction(num_bands = 6)
fil.setup_bands()
band_index = 1
freq = fil.band_start_freq[band_index]
time = np.arange(0, 10, 0.01)
full_circle = 2 * np.pi
signal = np.sin((freq*full_circle)*time)[:fil.frame_length]
powspec = np.abs(np.fft.fft(signal))**2
rel_band, pow_levels = fil.calc_relevant_band(powspec)
print('IF ERROR, PERHAPS DUE TO HARMONICS??? OR BAND SPACING???')
print('Testing frequency: ', freq)
print('Expected most relevant band: ', band_index)
print('Which covers frequencies between {} and {}.'.format(
fil.band_start_freq[band_index], fil.band_end_freq[band_index]))
print('Calculated energy levels of bands: ', pow_levels)
print('Most energetic frequency band: ', rel_band)
print('Which covers frequencies between {} and {}.'.format(
fil.band_start_freq[rel_band], fil.band_end_freq[rel_band]))
expected = band_index
assert expected == rel_band
def test_calc_relevant_band3():
fil = sp.BandSubtraction(num_bands = 6)
fil.setup_bands()
band_index = 2
freq = fil.band_start_freq[band_index]
time = np.arange(0, 10, 0.01)
full_circle = 2 * np.pi
signal = np.sin((freq*full_circle)*time)[:fil.frame_length]
powspec = np.abs(np.fft.fft(signal))**2
rel_band, pow_levels = fil.calc_relevant_band(powspec)
print('IF ERROR, PERHAPS DUE TO HARMONICS??? OR BAND SPACING???')
print('Testing frequency: ', freq)
print('Expected most relevant band: ', band_index)
print('Which covers frequencies between {} and {}.'.format(
fil.band_start_freq[band_index], fil.band_end_freq[band_index]))
print('Calculated energy levels of bands: ', pow_levels)
print('Most energetic frequency band: ', rel_band)
print('Which covers frequencies between {} and {}.'.format(
fil.band_start_freq[rel_band], fil.band_end_freq[rel_band]))
expected = band_index
assert expected == rel_band
def test_calc_relevant_band4():
fil = sp.BandSubtraction(num_bands = 6)
fil.setup_bands()
band_index = 3
freq = fil.band_start_freq[band_index]
time = np.arange(0, 10, 0.01)
full_circle = 2 * np.pi
signal = np.sin((freq*full_circle)*time)[:fil.frame_length]
powspec = np.abs(np.fft.fft(signal))**2
rel_band, pow_levels = fil.calc_relevant_band(powspec)
print('IF ERROR, PERHAPS DUE TO HARMONICS??? OR BAND SPACING???')
print('Testing frequency: ', freq)
print('Expected most relevant band: ', band_index)
print('Which covers frequencies between {} and {}.'.format(
fil.band_start_freq[band_index], fil.band_end_freq[band_index]))
print('Calculated energy levels of bands: ', pow_levels)
print('Most energetic frequency band: ', rel_band)
print('Which covers frequencies between {} and {}.'.format(
fil.band_start_freq[rel_band], fil.band_end_freq[rel_band]))
expected = band_index
assert expected == rel_band
def test_calc_relevant_band():
fil = sp.BandSubtraction(num_bands = 4)
fil.setup_bands()
time = np.arange(0, 10, 0.01)
signal = np.cos(time)[:fil.frame_length]
powspec = np.abs(np.fft.fft(signal))**2
rel_band, pow_levels = fil.calc_relevant_band(powspec)
expected = 0
assert expected == rel_band
def test_bandsub_reset_samplerate_22050():
sr = 22050
fil = sp.BandSubtraction(num_bands=4, sr=sr)
updated_sr = fil.sr
expected = 48000
assert expected == updated_sr
# TODO: just seems a bit complicated.. remove?
#def test_sub_noise():
#fil = sp.BandSubtraction(num_bands = 4)
#fil.setup_bands()
#time = np.arange(0, 10, 0.01)
#signal = np.sin(time)[:fil.frame_length]
#powspec = np.abs(np.fft.fft(signal))**2
## add noise
#np.random.seed(seed=0)
#noise = 0.1 * np.random.randn(len(signal))
#noisy_signal = signal + noise
#powspec_noisy = np.abs(np.fft.fft(noisy_signal))**2
## calculate other necessary variables
#fil.update_posteri_bands(powspec, powspec_noisy)
#a = fil.calc_oversub_factor()
#sub_signal = fil.sub_noise(powspec, powspec_noisy,
#oversub_factor = a,
#speech = True)
def test_filtersettings_getsamples_default_wiener():
wf = sp.WienerFilter()
samps_wf = wf.get_samples(test_audiofile,
dur_sec = 1)
assert wf.sr == 48000
assert len(samps_wf) == wf.sr
def test_filtersettings_getsamples_default_bandsubtraction():
bs = sp.BandSubtraction()
samps_bs = bs.get_samples(test_audiofile,
dur_sec = 1)
assert bs.sr == 48000
assert len(samps_bs) == bs.sr
def test_filtersettings_getsamples_sr22050_wiener():
sr = 22050
wf = sp.WienerFilter(sr=sr)
samps_wf = wf.get_samples(test_audiofile,
dur_sec = 1)
assert wf.sr == sr
assert len(samps_wf) == wf.sr
def test_filtersettings_getsamples_sr22050_bandsubtraction():
sr = 22050
sr_permanent = 48000
bs = sp.BandSubtraction(sr=sr)
samps_bs = bs.get_samples(test_audiofile,
dur_sec = 1)
print('IF ERROR: Check whether or not BandSubtraction works with '+\
'sample rates other than 48000. If not, the sr must stay at 48000.')
assert bs.sr == sr_permanent
assert len(samps_bs) == bs.sr
def test_filtersettings_getsamples_sr8000_wiener():
sr = 8000
wf = sp.WienerFilter(sr=sr)
samps_wf = wf.get_samples(test_audiofile,
dur_sec = 1)
assert wf.sr == sr
assert len(samps_wf) == wf.sr
def test_filtersettings_getsamples_sr8000_bandsubtraction():
sr = 8000
sr_permanent = 48000
bs = sp.BandSubtraction(sr=sr)
samps_bs = bs.get_samples(test_audiofile,
dur_sec = 1)
print('IF ERROR: Check whether or not BandSubtraction works with '+\
'sample rates other than 48000. If not, the sr must stay at 48000.')
assert bs.sr == sr_permanent
assert len(samps_bs) == bs.sr
def test_filtersignal_wiener_simple_doesitrun_uselibrosa_False():
signal, sr = sp.filtersignal(test_noisyfile, filter_type = 'wiener',
use_scipy=True, remove_dc=False, control_vol = True)
sig_expected, sr_expected = librosa.load(test_filtered_wiener, sr=sr)
assert np.allclose(signal, sig_expected)
assert sr == sr_expected
def test_filtersignal_wiener_posfilter_simple_doesitrun_uselibrosa_False():
signal, sr = sp.filtersignal(test_noisyfile, filter_type = 'wiener_pf',
use_scipy=True, remove_dc=False, control_vol = True)
sig_expected, sr_expected = librosa.load(test_filtered_wiener_postfilter, sr=sr)
assert np.allclose(signal, sig_expected)
assert sr == sr_expected
def test_filtersignal_bandsubtraction_simple_doesitrun_uselibrosa_False():
signal, sr = sp.filtersignal(test_noisyfile, filter_type = 'bandsubtraction',
use_scipy=True, remove_dc=False, control_vol = True)
sig_expected, sr_expected = librosa.load(test_filtered_bandsub,sr=sr)
assert np.allclose(signal, sig_expected)
assert sr == sr_expected
| 40.076923 | 87 | 0.668463 | 1,897 | 13,546 | 4.536637 | 0.118081 | 0.022775 | 0.036254 | 0.029747 | 0.862422 | 0.83221 | 0.805136 | 0.793516 | 0.779921 | 0.773762 | 0 | 0.045429 | 0.210247 | 13,546 | 337 | 88 | 40.195846 | 0.75902 | 0.042817 | 0 | 0.726316 | 0 | 0 | 0.110012 | 0.00564 | 0 | 0 | 0 | 0.002967 | 0.129825 | 1 | 0.084211 | false | 0 | 0.021053 | 0 | 0.105263 | 0.105263 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
c4919701181736b01f4aedfe61805732809e978a | 19,375 | py | Python | total_tolles_ferleihsystem/api/catalog/item_type.py | spethso/Verleihsystem-TTF | 39179f9ac5b07f5106e555f82f3c9011d33805bd | [
"MIT"
] | 1 | 2019-03-17T08:11:14.000Z | 2019-03-17T08:11:14.000Z | total_tolles_ferleihsystem/api/catalog/item_type.py | spethso/Verleihsystem-TTF | 39179f9ac5b07f5106e555f82f3c9011d33805bd | [
"MIT"
] | 60 | 2018-06-12T14:46:50.000Z | 2020-11-16T00:50:37.000Z | total_tolles_ferleihsystem/api/catalog/item_type.py | FIUS/ttf-backend | 39179f9ac5b07f5106e555f82f3c9011d33805bd | [
"MIT"
] | 1 | 2019-12-02T19:25:59.000Z | 2019-12-02T19:25:59.000Z | """
This module contains all API endpoints for the namespace 'item_type'
"""
from flask import request
from flask_restplus import Resource, abort, marshal
from flask_jwt_extended import jwt_required, get_jwt_claims
from sqlalchemy.orm import joinedload
from sqlalchemy.exc import IntegrityError
from .. import API, satisfies_role
from ..models import ITEM_TYPE_GET, ITEM_TYPE_POST, ATTRIBUTE_DEFINITION_GET, ID, ITEM_TYPE_PUT
from ... import DB, APP
from ...login import UserRole
from ...db_models.attributeDefinition import AttributeDefinition
from ...db_models.itemType import ItemType, ItemTypeToAttributeDefinition, ItemTypeToItemType
from ...db_models.item import Item
PATH: str = '/catalog/item_types'
ANS = API.namespace('item_type', description='ItemTypes', path=PATH)
@ANS.route('/')
class ItemTypeList(Resource):
"""
Item types root element
"""
@jwt_required
@API.param('deleted', 'get all deleted objects (and only these)', type=bool, required=False, default=False)
@API.marshal_list_with(ITEM_TYPE_GET)
# pylint: disable=R0201
def get(self):
"""
Get a list of all item types currently in the system
"""
base_query = ItemType.query
test_for = request.args.get('deleted', 'false') == 'true'
if test_for:
base_query = base_query.filter(ItemType.deleted_time != None)
else:
base_query = base_query.filter(ItemType.deleted_time == None)
# auth check
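# Admins see everything; moderators see 'all' and 'moderator' visibility;
# all other users only see item types marked visible for 'all'.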
if UserRole(get_jwt_claims()) != UserRole.ADMIN:
if UserRole(get_jwt_claims()) == UserRole.MODERATOR:
base_query = base_query.filter((ItemType.visible_for == 'all') | (ItemType.visible_for == 'moderator'))
else:
base_query = base_query.filter(ItemType.visible_for == 'all')
return base_query.order_by(ItemType.name).all()
@jwt_required
@satisfies_role(UserRole.ADMIN)
@ANS.doc(model=ITEM_TYPE_GET, body=ITEM_TYPE_POST)
@ANS.response(409, 'Name is not Unique.')
@ANS.response(201, 'Created.')
# pylint: disable=R0201
def post(self):
"""
Add a new item type to the system
"""
new = ItemType(**request.get_json())
try:
DB.session.add(new)
DB.session.commit()
return marshal(new, ITEM_TYPE_GET), 201
except IntegrityError as err:
message = str(err)
if APP.config['DB_UNIQUE_CONSTRAIN_FAIL'] in message:
APP.logger.info('Name is not unique. %s', err)
abort(409, 'Name is not unique!')
APP.logger.error('SQL Error, %s', err)
abort(500)
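# Hypothetical client usage (the path prefix depends on how the API is mounted):
#   POST /catalog/item_types/ with a JSON body matching ITEM_TYPE_POST
#   -> 201 with the marshalled item type, or 409 if the name already exists.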
@ANS.route('/<int:type_id>/')
class ItemTypeDetail(Resource):
"""
Single item type object
"""
@jwt_required
@ANS.response(404, 'Requested item type not found!')
@API.marshal_with(ITEM_TYPE_GET)
# pylint: disable=R0201
def get(self, type_id):
"""
Get a single item type object
"""
base_query = ItemType.query.filter(ItemType.id == type_id)
# auth check
if UserRole(get_jwt_claims()) != UserRole.ADMIN:
if UserRole(get_jwt_claims()) == UserRole.MODERATOR:
base_query = base_query.filter((ItemType.visible_for == 'all') | (ItemType.visible_for == 'moderator'))
else:
base_query = base_query.filter(ItemType.visible_for == 'all')
item_type = base_query.first()
if item_type is None:
APP.logger.debug('Requested item type (id: %s) not found!', type_id)
abort(404, 'Requested item type not found!')
return item_type
@jwt_required
@satisfies_role(UserRole.ADMIN)
@ANS.response(404, 'Requested item type not found!')
@ANS.response(204, 'Success.')
# pylint: disable=R0201
def delete(self, type_id):
"""
Delete an item type object
"""
item_type = ItemType.query.filter(ItemType.id == type_id).first()
if item_type is None:
APP.logger.debug('Requested item type (id: %s) not found!', type_id)
abort(404, 'Requested item type not found!')
item_type.deleted = True
items = Item.query.filter(Item.type_id == type_id).all()
for item in items:
code, msg, commit = item.delete()
if not commit:
abort(code, msg)
DB.session.commit()
return "", 204
@jwt_required
@satisfies_role(UserRole.ADMIN)
@ANS.response(404, 'Requested item type not found!')
@ANS.response(204, 'Success.')
# pylint: disable=R0201
def post(self, type_id):
"""
Undelete an item type object
"""
item_type = ItemType.query.filter(ItemType.id == type_id).first()
if item_type is None:
APP.logger.debug('Requested item type (id: %s) not found!', type_id)
abort(404, 'Requested item type not found!')
item_type.deleted = False
DB.session.commit()
return "", 204
@jwt_required
@satisfies_role(UserRole.ADMIN)
@ANS.doc(model=ITEM_TYPE_GET, body=ITEM_TYPE_PUT)
@ANS.response(409, 'Name is not Unique.')
@ANS.response(404, 'Requested item type not found!')
# pylint: disable=R0201
def put(self, type_id):
"""
Replace an item type object
"""
item_type = ItemType.query.filter(ItemType.id == type_id).first()
if item_type is None:
APP.logger.debug('Requested item type (id: %s) not found!', type_id)
abort(404, 'Requested item type not found!')
item_type.update(**request.get_json())
try:
DB.session.commit()
return marshal(item_type, ITEM_TYPE_GET), 200
except IntegrityError as err:
message = str(err)
if APP.config['DB_UNIQUE_CONSTRAIN_FAIL'] in message:
APP.logger.info('Name is not unique. %s', err)
abort(409, 'Name is not unique!')
APP.logger.error('SQL Error %s', err)
abort(500)
@ANS.route('/<int:type_id>/attributes/')
class ItemTypeAttributes(Resource):
"""
The attributes of a single item type object
"""
@jwt_required
@ANS.response(404, 'Requested item type not found!')
@API.marshal_with(ATTRIBUTE_DEFINITION_GET)
# pylint: disable=R0201
def get(self, type_id):
"""
Get all attribute definitions for this item type.
"""
base_query = ItemType.query.options(joinedload('_item_type_to_attribute_definitions')).filter(ItemType.id == type_id).filter(ItemType.deleted_time == None)
# auth check
if UserRole(get_jwt_claims()) != UserRole.ADMIN:
if UserRole(get_jwt_claims()) == UserRole.MODERATOR:
base_query = base_query.filter((ItemType.visible_for == 'all') | (ItemType.visible_for == 'moderator'))
else:
base_query = base_query.filter(ItemType.visible_for == 'all')
item_type = base_query.first()
if item_type is None:
APP.logger.debug('Requested item type (id: %s) not found!', type_id)
abort(404, 'Requested item type not found!')
return [ittad.attribute_definition for ittad in item_type._item_type_to_attribute_definitions]
@jwt_required
@satisfies_role(UserRole.ADMIN)
@ANS.doc(body=ID)
@ANS.response(404, 'Requested item type not found!')
@ANS.response(400, 'Requested attribute definition not found!')
@ANS.response(409, 'Attribute definition is already associated with this item type!')
@API.marshal_with(ATTRIBUTE_DEFINITION_GET)
# pylint: disable=R0201
def post(self, type_id):
"""
Associate a new attribute definition with the item type.
"""
attribute_definition_id = request.get_json()["id"]
# pylint: disable=C0121
attribute_definition = AttributeDefinition.query.filter(AttributeDefinition.id == attribute_definition_id).filter(AttributeDefinition.deleted_time == None).first()
if ItemType.query.filter(ItemType.id == type_id).filter(ItemType.deleted_time == None).first() is None:
APP.logger.debug('Requested item type (id: %s) not found!', type_id)
abort(404, 'Requested item type not found!')
if attribute_definition is None:
APP.logger.debug('Requested attribute definition (id: %s) not found!', attribute_definition_id)
abort(400, 'Requested attribute definition not found!')
items = Item.query.filter(Item.type_id == type_id).all()
new = ItemTypeToAttributeDefinition(type_id, attribute_definition_id)
try:
DB.session.add(new)
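# Propagate the new attribute definition to every existing item of this
# type: add missing item attributes and undelete any soft-deleted ones.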
for item in items:
attributes_to_add, _, attributes_to_undelete = item.get_attribute_changes([attribute_definition_id])
DB.session.add_all(attributes_to_add)
for attr in attributes_to_undelete:
attr.deleted = False
DB.session.commit()
associations = (ItemTypeToAttributeDefinition
.query
.filter(ItemTypeToAttributeDefinition.item_type_id == type_id)
.all())
return [e.attribute_definition for e in associations]
except IntegrityError as err:
message = str(err)
if APP.config['DB_UNIQUE_CONSTRAIN_FAIL'] in message:
APP.logger.info('Attribute definition is already associated with item type! %s', err)
abort(409, 'Attribute definition is already associated with item type!')
APP.logger.error('SQL Error %s', err)
abort(500)
@jwt_required
@satisfies_role(UserRole.ADMIN)
@ANS.doc(body=ID)
@ANS.response(404, 'Requested item type not found!')
@ANS.response(400, 'Requested attribute definition not found!')
@ANS.response(204, 'Success.')
# pylint: disable=R0201
def delete(self, type_id):
"""
Remove the association of an attribute definition from the item type.
"""
attribute_definition_id = request.get_json()["id"]
item_type = ItemType.query.filter(ItemType.id == type_id).filter(ItemType.deleted_time == None).first()
if item_type is None:
APP.logger.debug('Requested item type (id: %s) not found!', type_id)
abort(404, 'Requested item type not found!')
code, msg, commit = item_type.unassociate_attr_def(attribute_definition_id)
if commit:
DB.session.commit()
if code == 204:
return '', 204
APP.logger.error("Error. %s, %s", code, msg)
abort(code, msg)
@ANS.route('/<int:type_id>/contained_types/')
class ItemTypeContainedTypes(Resource):
"""
The item types that a item of this type can contain.
"""
@jwt_required
@ANS.response(404, 'Requested item type not found!')
@API.marshal_with(ITEM_TYPE_GET)
# pylint: disable=R0201
def get(self, type_id):
"""
Get all item types this item_type may contain.
"""
base_query = ItemType.query.options(joinedload('_contained_item_types').joinedload('item_type')).filter(ItemType.id == type_id).filter(ItemType.deleted_time == None)
# auth check
if UserRole(get_jwt_claims()) != UserRole.ADMIN:
if UserRole(get_jwt_claims()) == UserRole.MODERATOR:
base_query = base_query.filter((ItemType.visible_for == 'all') | (ItemType.visible_for == 'moderator'))
else:
base_query = base_query.filter(ItemType.visible_for == 'all')
item_type = base_query.first()
if item_type is None:
APP.logger.debug('Requested item type (id: %s) not found!', type_id)
abort(404, 'Requested item type not found!')
return [cit.item_type for cit in item_type._contained_item_types]
@jwt_required
@satisfies_role(UserRole.ADMIN)
@ANS.doc(body=ID)
@ANS.response(404, 'Requested item type not found!')
@ANS.response(400, 'Requested child item type not found!')
@ANS.response(409, 'Item type can already be contained in this item type.')
@API.marshal_with(ITEM_TYPE_GET)
# pylint: disable=R0201
def post(self, type_id):
"""
Add an item type that can be contained in this item type.
"""
child_id = request.get_json()["id"]
if ItemType.query.filter(ItemType.id == type_id).filter(ItemType.deleted_time == None).first() is None:
APP.logger.debug('Requested item type (id: %s) not found!', type_id)
abort(404, 'Requested item type not found!')
if ItemType.query.filter(ItemType.id == child_id).filter(ItemType.deleted_time == None).first() is None:
APP.logger.debug('Requested contained type (id: %s) not found!', child_id)
abort(400, 'Requested contained type not found!')
new = ItemTypeToItemType(type_id, child_id)
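# ItemTypeToItemType takes (parent_id, child_id): here the current type is
# the parent and the requested type becomes containable within it.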
try:
DB.session.add(new)
DB.session.commit()
associations = ItemTypeToItemType.query.filter(ItemTypeToItemType.parent_id == type_id).options(joinedload('item_type')).all()
return [e.item_type for e in associations]
except IntegrityError as err:
message = str(err)
if APP.config['DB_UNIQUE_CONSTRAIN_FAIL'] in message:
APP.logger.info('Item type can already be contained in this item type. %s', err)
abort(409, 'Item type can already be contained in this item type.')
APP.logger.error('SQL Error %s', err)
abort(500)
@jwt_required
@satisfies_role(UserRole.ADMIN)
@ANS.doc(body=ID)
@ANS.response(404, 'Requested item type not found!')
@ANS.response(400, 'Requested child item type not found!')
@ANS.response(204, 'Success.')
# pylint: disable=R0201
def delete(self, type_id):
"""
Remove an item type from those that can be contained in this item type.
"""
child_id = request.get_json()["id"]
if ItemType.query.filter(ItemType.id == type_id).filter(ItemType.deleted_time == None).first() is None:
APP.logger.debug('Requested item type (id: %s) not found!', type_id)
abort(404, 'Requested item type not found!')
if ItemType.query.filter(ItemType.id == child_id).filter(ItemType.deleted_time == None).first() is None:
APP.logger.debug('Requested contained type (id: %s) not found!', child_id)
abort(400, 'Requested contained type not found!')
association = (ItemTypeToItemType
.query
.filter(ItemTypeToItemType.parent_id == type_id)
.filter(ItemTypeToItemType.item_type_id == child_id)
.first())
if association is None:
return '', 204
DB.session.delete(association)
DB.session.commit()
return '', 204
@ANS.route('/<int:type_id>/parent_types/')
class ItemTypeParentTypes(Resource):
"""
The item types that a item of this type can be contained by.
"""
@jwt_required
@ANS.response(404, 'Requested item type not found!')
@API.marshal_with(ITEM_TYPE_GET)
# pylint: disable=R0201
def get(self, type_id):
"""
Get all item types this item_type may be contained in.
"""
base_query = ItemType.query.options(joinedload('_possible_parent_item_types').joinedload('parent')).filter(ItemType.id == type_id).filter(ItemType.deleted_time == None)
# auth check
if UserRole(get_jwt_claims()) != UserRole.ADMIN:
if UserRole(get_jwt_claims()) == UserRole.MODERATOR:
base_query = base_query.filter((ItemType.visible_for == 'all') | (ItemType.visible_for == 'moderator'))
else:
base_query = base_query.filter(ItemType.visible_for == 'all')
item_type = base_query.first()
if item_type is None:
APP.logger.debug('Requested item type (id: %s) not found!', type_id)
abort(404, 'Requested item type not found!')
return [ppit.parent for ppit in item_type._possible_parent_item_types]
@jwt_required
@satisfies_role(UserRole.ADMIN)
@ANS.doc(body=ID)
@ANS.response(404, 'Requested item type not found!')
@ANS.response(400, 'Requested parent item type not found!')
@ANS.response(409, 'Item type can already be contained in this item type.')
@API.marshal_with(ITEM_TYPE_GET)
# pylint: disable=R0201
def post(self, type_id):
"""
Add an item type which can contain this item type.
"""
parent_id = request.get_json()["id"]
if ItemType.query.filter(ItemType.id == type_id).filter(ItemType.deleted_time == None).first() is None:
APP.logger.debug('Requested item type (id: %s) not found!', type_id)
abort(404, 'Requested item type not found!')
if ItemType.query.filter(ItemType.id == parent_id).filter(ItemType.deleted_time == None).first() is None:
APP.logger.debug('Requested parent type (id: %s) not found!', parent_id)
abort(400, 'Requested parent type not found!')
new = ItemTypeToItemType(parent_id, type_id)
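# Argument order is (parent_id, child_id): the requested type becomes a
# possible parent of the current type.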
try:
DB.session.add(new)
DB.session.commit()
associations = ItemTypeToItemType.query.filter(ItemTypeToItemType.item_type_id == type_id).options(joinedload('parent')).all()
return [e.parent for e in associations]
except IntegrityError as err:
message = str(err)
if APP.config['DB_UNIQUE_CONSTRAIN_FAIL'] in message:
APP.logger.info('This item type can already contain the given item type. %s', err)
abort(409, 'This item type can already contain the given item type.')
APP.logger.error('SQL Error %s', err)
abort(500)
@jwt_required
@satisfies_role(UserRole.ADMIN)
@ANS.doc(body=ID)
@ANS.response(404, 'Requested item type not found!')
@ANS.response(400, 'Requested parent item type not found!')
@ANS.response(204, 'Success.')
# pylint: disable=R0201
def delete(self, type_id):
"""
Remove an item type from those that can contain this item type.
"""
parent_id = request.get_json()["id"]
if ItemType.query.filter(ItemType.id == type_id).filter(ItemType.deleted_time == None).first() is None:
APP.logger.debug('Requested item type (id: %s) not found!', type_id)
abort(404, 'Requested item type not found!')
if ItemType.query.filter(ItemType.id == parent_id).filter(ItemType.deleted_time == None).first() is None:
APP.logger.debug('Requested parent type (id: %s) not found!', parent_id)
abort(400, 'Requested parent type not found!')
association = (ItemTypeToItemType
.query
.filter(ItemTypeToItemType.parent_id == parent_id)
.filter(ItemTypeToItemType.item_type_id == type_id)
.first())
if association is None:
return '', 204
DB.session.delete(association)
DB.session.commit()
return '', 204
| 39.621677 | 176 | 0.624568 | 2,410 | 19,375 | 4.86971 | 0.075104 | 0.091343 | 0.057941 | 0.0409 | 0.805811 | 0.781527 | 0.755198 | 0.751875 | 0.737815 | 0.723586 | 0 | 0.018724 | 0.261265 | 19,375 | 488 | 177 | 39.702869 | 0.80123 | 0.07169 | 0 | 0.744681 | 0 | 0 | 0.180864 | 0.016421 | 0 | 0 | 0 | 0 | 0 | 1 | 0.045593 | false | 0 | 0.036474 | 0 | 0.148936 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
c49a615fb740e1a780f955ba75e6e723df107657 | 4,900 | py | Python | resources/dot_PyCharm/system/python_stubs/-762174762/PySide/QtCore/QFileInfo.py | basepipe/developer_onboarding | 05b6a776f8974c89517868131b201f11c6c2a5ad | [
"MIT"
] | 1 | 2020-04-20T02:27:20.000Z | 2020-04-20T02:27:20.000Z | resources/dot_PyCharm/system/python_stubs/cache/16012662ddca113c1f50140f9e0d3bd290a511015767475cf362e5267760f062/PySide/QtCore/QFileInfo.py | basepipe/developer_onboarding | 05b6a776f8974c89517868131b201f11c6c2a5ad | [
"MIT"
] | null | null | null | resources/dot_PyCharm/system/python_stubs/cache/16012662ddca113c1f50140f9e0d3bd290a511015767475cf362e5267760f062/PySide/QtCore/QFileInfo.py | basepipe/developer_onboarding | 05b6a776f8974c89517868131b201f11c6c2a5ad | [
"MIT"
] | null | null | null | # encoding: utf-8
# module PySide.QtCore
# from C:\Python27\lib\site-packages\PySide\QtCore.pyd
# by generator 1.147
# no doc
# imports
import Shiboken as __Shiboken
class QFileInfo(__Shiboken.Object):
# no doc
def absoluteDir(self, *args, **kwargs): # real signature unknown
pass
def absoluteFilePath(self, *args, **kwargs): # real signature unknown
pass
def absolutePath(self, *args, **kwargs): # real signature unknown
pass
def baseName(self, *args, **kwargs): # real signature unknown
pass
def bundleName(self, *args, **kwargs): # real signature unknown
pass
def caching(self, *args, **kwargs): # real signature unknown
pass
def canonicalFilePath(self, *args, **kwargs): # real signature unknown
pass
def canonicalPath(self, *args, **kwargs): # real signature unknown
pass
def completeBaseName(self, *args, **kwargs): # real signature unknown
pass
def completeSuffix(self, *args, **kwargs): # real signature unknown
pass
def created(self, *args, **kwargs): # real signature unknown
pass
def dir(self, *args, **kwargs): # real signature unknown
pass
def exists(self, *args, **kwargs): # real signature unknown
pass
def fileName(self, *args, **kwargs): # real signature unknown
pass
def filePath(self, *args, **kwargs): # real signature unknown
pass
def group(self, *args, **kwargs): # real signature unknown
pass
def groupId(self, *args, **kwargs): # real signature unknown
pass
def isAbsolute(self, *args, **kwargs): # real signature unknown
pass
def isBundle(self, *args, **kwargs): # real signature unknown
pass
def isDir(self, *args, **kwargs): # real signature unknown
pass
def isExecutable(self, *args, **kwargs): # real signature unknown
pass
def isFile(self, *args, **kwargs): # real signature unknown
pass
def isHidden(self, *args, **kwargs): # real signature unknown
pass
def isReadable(self, *args, **kwargs): # real signature unknown
pass
def isRelative(self, *args, **kwargs): # real signature unknown
pass
def isRoot(self, *args, **kwargs): # real signature unknown
pass
def isSymLink(self, *args, **kwargs): # real signature unknown
pass
def isWritable(self, *args, **kwargs): # real signature unknown
pass
def lastModified(self, *args, **kwargs): # real signature unknown
pass
def lastRead(self, *args, **kwargs): # real signature unknown
pass
def makeAbsolute(self, *args, **kwargs): # real signature unknown
pass
def owner(self, *args, **kwargs): # real signature unknown
pass
def ownerId(self, *args, **kwargs): # real signature unknown
pass
def path(self, *args, **kwargs): # real signature unknown
pass
def permission(self, *args, **kwargs): # real signature unknown
pass
def permissions(self, *args, **kwargs): # real signature unknown
pass
def readLink(self, *args, **kwargs): # real signature unknown
pass
def refresh(self, *args, **kwargs): # real signature unknown
pass
def setCaching(self, *args, **kwargs): # real signature unknown
pass
def setFile(self, *args, **kwargs): # real signature unknown
pass
def size(self, *args, **kwargs): # real signature unknown
pass
def suffix(self, *args, **kwargs): # real signature unknown
pass
def symLinkTarget(self, *args, **kwargs): # real signature unknown
pass
def __copy__(self, *args, **kwargs): # real signature unknown
pass
def __eq__(self, y): # real signature unknown; restored from __doc__
""" x.__eq__(y) <==> x==y """
pass
def __ge__(self, y): # real signature unknown; restored from __doc__
""" x.__ge__(y) <==> x>=y """
pass
def __gt__(self, y): # real signature unknown; restored from __doc__
""" x.__gt__(y) <==> x>y """
pass
def __init__(self, *args, **kwargs): # real signature unknown
pass
def __le__(self, y): # real signature unknown; restored from __doc__
""" x.__le__(y) <==> x<=y """
pass
def __lt__(self, y): # real signature unknown; restored from __doc__
""" x.__lt__(y) <==> x<y """
pass
@staticmethod # known case of __new__
def __new__(S, *more): # real signature unknown; restored from __doc__
""" T.__new__(S, ...) -> a new object with type S, a subtype of T """
pass
def __ne__(self, y): # real signature unknown; restored from __doc__
""" x.__ne__(y) <==> x!=y """
pass
def __reduce__(self, *args, **kwargs): # real signature unknown
pass
| 27.071823 | 77 | 0.609796 | 571 | 4,900 | 5.050788 | 0.176883 | 0.238904 | 0.367545 | 0.287101 | 0.767684 | 0.750347 | 0.738211 | 0.725035 | 0.085298 | 0 | 0 | 0.001959 | 0.270612 | 4,900 | 180 | 78 | 27.222222 | 0.80498 | 0.353469 | 0 | 0.486239 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.486239 | false | 0.486239 | 0.009174 | 0 | 0.504587 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 9 |
c4c0091833ba15b23bb9d2ac74a362b3bce8bde5 | 1,184 | py | Python | src/highdicom/sr/__init__.py | pieper/highdicom | 4e299f99c9a94eb72006dd21909f7e8c22eb766e | [
"MIT"
] | null | null | null | src/highdicom/sr/__init__.py | pieper/highdicom | 4e299f99c9a94eb72006dd21909f7e8c22eb766e | [
"MIT"
] | null | null | null | src/highdicom/sr/__init__.py | pieper/highdicom | 4e299f99c9a94eb72006dd21909f7e8c22eb766e | [
"MIT"
] | null | null | null | """Package for creationg of Structured Report (SR) instances."""
SOP_CLASS_UIDS = {
'1.2.840.10008.5.1.4.1.1.88.1', # Text SR (Trial, retired)
'1.2.840.10008.5.1.4.1.1.88.2', # Audio SR (Trial, retired)
'1.2.840.10008.5.1.4.1.1.88.3', # Detail SR (Trial, retired)
'1.2.840.10008.5.1.4.1.1.88.4', # Comprehensive SR (Trial, retired)
'1.2.840.10008.5.1.4.1.1.88.11', # Basic Text SR
'1.2.840.10008.5.1.4.1.1.88.22', # Enhanced SR
'1.2.840.10008.5.1.4.1.1.88.33', # Comprehensive SR
'1.2.840.10008.5.1.4.1.1.88.34', # Comprehensive 3D SR
'1.2.840.10008.5.1.4.1.1.88.35', # Extensible SR
'1.2.840.10008.5.1.4.1.1.88.40', # Procedure Log
'1.2.840.10008.5.1.4.1.1.88.50', # Mammography CAD SR
'1.2.840.10008.5.1.4.1.1.88.65', # Chest CAD SR
'1.2.840.10008.5.1.4.1.1.88.67', # X-Ray Radiation Dose SR
'1.2.840.10008.5.1.4.1.1.88.68', # Radiopharmaceutical Radiation Dose SR
'1.2.840.10008.5.1.4.1.1.88.69', # Colon CAD SR
'1.2.840.10008.5.1.4.1.1.88.70', # Implantation Plan SR
'1.2.840.10008.5.1.4.1.1.88.71', # Acquisition Context SR
'1.2.840.10008.5.1.4.1.1.88.72', # Simplified Adult Echo SR
'1.2.840.10008.5.1.4.1.1.88.73', # Patient Radiation Dose SR
}
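# Minimal usage sketch (hypothetical; 'ds' stands in for a pydicom Dataset):
#   is_sr = str(ds.SOPClassUID) in SOP_CLASS_UIDS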
| 49.333333 | 77 | 0.592905 | 274 | 1,184 | 2.554745 | 0.222628 | 0.054286 | 0.135714 | 0.271429 | 0.608571 | 0.608571 | 0.608571 | 0.608571 | 0.608571 | 0.608571 | 0 | 0.364934 | 0.171453 | 1,184 | 23 | 78 | 51.478261 | 0.348624 | 0.334459 | 0 | 0 | 0 | 0.904762 | 0.715969 | 0.715969 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
c4ef143557823722814ba5e4200bb61bee1f4c3f | 251 | py | Python | src/example_2.py | ToJestKrzysio/ProcessVisualization | 9a359a31816bf1be65e3684a571509e3a2c2c0ac | [
"MIT"
] | null | null | null | src/example_2.py | ToJestKrzysio/ProcessVisualization | 9a359a31816bf1be65e3684a571509e3a2c2c0ac | [
"MIT"
] | null | null | null | src/example_2.py | ToJestKrzysio/ProcessVisualization | 9a359a31816bf1be65e3684a571509e3a2c2c0ac | [
"MIT"
] | null | null | null | from src.report_generator import generate_html_report, generate_pdf_report
generate_html_report("../examples/02_Realizuj_zlecenie.bpmn")
generate_pdf_report("../examples/02_Realizuj_zlecenie.bpmn", "C:/Program Files/wkhtmltopdf/bin/wkhtmltopdf.exe")
| 50.2 | 112 | 0.844622 | 34 | 251 | 5.852941 | 0.558824 | 0.120603 | 0.180905 | 0.241206 | 0.361809 | 0.361809 | 0 | 0 | 0 | 0 | 0 | 0.016598 | 0.039841 | 251 | 4 | 113 | 62.75 | 0.809129 | 0 | 0 | 0 | 1 | 0 | 0.486056 | 0.442231 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.333333 | 0 | 0.333333 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 7 |
4827811a33963017044007a68e15748c89d7e33c | 2,600 | py | Python | standardefficiency.py | SayadPervez/ef-cnc | 524d91292938c9c6a74378e1b70da9e4e3493910 | [
"MIT"
] | null | null | null | standardefficiency.py | SayadPervez/ef-cnc | 524d91292938c9c6a74378e1b70da9e4e3493910 | [
"MIT"
] | null | null | null | standardefficiency.py | SayadPervez/ef-cnc | 524d91292938c9c6a74378e1b70da9e4e3493910 | [
"MIT"
] | null | null | null | from functions import *
from shapes import *
import algorithm1,algorithm2,algorithm3,algorithm4
import constants as cont
from visualization import *
print("\na1-S starting:")
canvas = Canvas(200,100)
shapes = [
Square(20) ,
Rectangle(35,25) ,
Circle(7) ,
Cone(17,20) ,
Cone(12,4)
]
for shape in shapes:
shape.shapeMatrix = outline_with_shape(shape,3)
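# outline_with_shape presumably pads each shape with a 3-cell margin so
# placements keep clearance between parts (assumption based on the name).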
c = canvas
li = shapes
print("Starting algorithm1")
out = algorithm1.run(c,li,log_=True,constCompute=50)
arr2png(out).show()
input("Press ENTER to continue ...")
out=binaryFilter(out)
out = free_surface_all(out,70)
arr2png(out).show()
input("Press ENTER to continue ...")
pieChart(free_surface_area(out))
input("Start next algorithm ?")
print("\na2-S starting:")
canvas = Canvas(108,72)
shapes = [
Square(20) ,
Rectangle(10,25) ,
Circle(7) ,
Cone(17,20) ,
Cone(12,4)
]
for shape in shapes:
shape.shapeMatrix = outline_with_shape(shape,3)
c = canvas
li = shapes
print("Starting algorithm2")
out = algorithm2.run(c,li,log_=True,constCompute=50)
arr2png(out).show()
input("Press ENTER to continue ...")
out=binaryFilter(out)
out = free_surface_all(out,70)
arr2png(out).show()
input("Press ENTER to continue ...")
pieChart(free_surface_area(out))
input("Start next algorithm ?")
print("\na3-S starting:")
canvas = Canvas(108,108)
shapes = [
Square(20) ,
Rectangle(10,25) ,
Circle(7) ,
Cone(17,20),
Cone(12,4),
Cone(12,4),
Cone(12,4),
Cone(12,4)
]
for shape in shapes:
shape.shapeMatrix = outline_with_shape(shape,3)
c = canvas
li = shapes
print("Starting algorithm3")
out = algorithm3.run(c,li,log_=True,constCompute=75)
arr2png(out).show()
input("Press ENTER to continue ...")
out=binaryFilter(out)
out = free_surface_all(out,70)
arr2png(out).show()
input("Press ENTER to continue ...")
pieChart(free_surface_area(out))
print("\na4-S starting:")
canvas = Canvas(108,72)
shapes = [
Square(20) ,
Rectangle(10,25) ,
Circle(7) ,
Cone(17,20) ,
Cone(12,4)
]
for shape in shapes:
shape.shapeMatrix = outline_with_shape(shape,3)
c = canvas
li = shapes
print("Starting algorithm4")
out = algorithm4.run(c,li,log_=True,constCompute=75)
arr2png(out).show()
input("Press ENTER to continue ...")
out=binaryFilter(out)
out = free_surface_all(out,60)
arr2png(out).show()
input("Press ENTER to continue ...")
pieChart(free_surface_area(out))
| 23.853211 | 52 | 0.631154 | 347 | 2,600 | 4.648415 | 0.198847 | 0.049597 | 0.069436 | 0.094234 | 0.805952 | 0.805952 | 0.805952 | 0.805952 | 0.805952 | 0.797272 | 0 | 0.064677 | 0.226923 | 2,600 | 108 | 53 | 24.074074 | 0.737811 | 0 | 0 | 0.752577 | 0 | 0 | 0.147749 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.051546 | 0 | 0.051546 | 0.072165 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
484b92e35d0f4cd4fe93988d9e4caf53bc92bcfa | 17,401 | py | Python | deepsim/test/test_deepsim/domain_randomizations/randomizers/test_model_visual_randomizer.py | aws-deepracer/deepsim | cad2639f525c2f94ec5c03d8b855cc65b0b8ee55 | [
"Apache-2.0"
] | 1 | 2022-03-25T07:20:49.000Z | 2022-03-25T07:20:49.000Z | deepsim/test/test_deepsim/domain_randomizations/randomizers/test_model_visual_randomizer.py | aws-deepracer/deepsim | cad2639f525c2f94ec5c03d8b855cc65b0b8ee55 | [
"Apache-2.0"
] | null | null | null | deepsim/test/test_deepsim/domain_randomizations/randomizers/test_model_visual_randomizer.py | aws-deepracer/deepsim | cad2639f525c2f94ec5c03d8b855cc65b0b8ee55 | [
"Apache-2.0"
] | null | null | null | #################################################################################
# Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. #
# #
# Licensed under the Apache License, Version 2.0 (the "License"). #
# You may not use this file except in compliance with the License. #
# You may obtain a copy of the License at #
# #
# http://www.apache.org/licenses/LICENSE-2.0 #
# #
# Unless required by applicable law or agreed to in writing, software #
# distributed under the License is distributed on an "AS IS" BASIS, #
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. #
# See the License for the specific language governing permissions and #
# limitations under the License. #
#################################################################################
from unittest import TestCase
from unittest.mock import patch, MagicMock
from deepsim.gazebo.constants import GazeboServiceName
from deepsim.domain_randomizations.randomizers.model_visual_randomizer import ModelVisualRandomizer, ModelRandomizerType
@patch("deepsim.domain_randomizations.randomizers.model_visual_randomizer.ServiceProxyWrapper")
class ModelVisualRandomizerTest(TestCase):
def setUp(self) -> None:
pass
def test_initialize(self, service_proxy_wrapper_mock):
model_name = "test_model"
model_randomizer_type = ModelRandomizerType.MODEL
get_model_prop_mock = MagicMock()
get_model_prop_mock.return_value.body_names = ["body_name1"]
get_visual_names_mock = MagicMock()
get_visual_names_mock.return_value.visual_names = ["visual_name1"]
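# Route ServiceProxyWrapper lookups by service name so the randomizer's
# constructor receives the mocked model properties and visual names.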
def service_proxy_creator(service_name, service_class):
if service_name == GazeboServiceName.GET_MODEL_PROPERTIES:
return get_model_prop_mock
elif service_name == GazeboServiceName.GET_VISUAL_NAMES:
return get_visual_names_mock
service_proxy_wrapper_mock.side_effect = service_proxy_creator
model_visual_randomizer = ModelVisualRandomizer(model_name=model_name,
model_randomizer_type=model_randomizer_type)
assert model_visual_randomizer.model_name == model_name
assert model_visual_randomizer.model_randomizer_type == model_randomizer_type
def test_initialize_custom_range(self, service_proxy_wrapper_mock):
model_name = "test_model"
model_randomizer_type = ModelRandomizerType.MODEL
color_range = {'r': {'min': 0.1, 'max': 0.4},
'g': {'min': 0.2, 'max': 0.5},
'b': {'min': 0.3, 'max': 0.6}}
get_model_prop_mock = MagicMock()
get_model_prop_mock.return_value.body_names = ["body_name1"]
get_visual_names_mock = MagicMock()
get_visual_names_mock.return_value.visual_names = ["visual_name1"]
def service_proxy_creator(service_name, service_class):
if service_name == GazeboServiceName.GET_MODEL_PROPERTIES:
return get_model_prop_mock
elif service_name == GazeboServiceName.GET_VISUAL_NAMES:
return get_visual_names_mock
service_proxy_wrapper_mock.side_effect = service_proxy_creator
model_visual_randomizer = ModelVisualRandomizer(model_name=model_name,
model_randomizer_type=model_randomizer_type,
color_range=color_range)
assert model_visual_randomizer.model_name == model_name
assert model_visual_randomizer.model_randomizer_type == model_randomizer_type
assert model_visual_randomizer.color_range == color_range
def test_link_filter(self, service_proxy_wrapper_mock):
model_name = "test_model"
model_randomizer_type = ModelRandomizerType.MODEL
get_model_prop_mock = MagicMock()
get_model_prop_mock.return_value.body_names = ["body_name1", "body_name2"]
def get_visual_names(req):
response_mock = MagicMock()
response_mock.visual_names = []
response_mock.link_names = []
visual_names = ["visual_name1", "visual_name2"]
for link_name in req.link_names:
for visual_name in visual_names:
response_mock.link_names.append(link_name)
response_mock.visual_names.append(visual_name)
return response_mock
get_visual_names_mock = MagicMock()
get_visual_names_mock.side_effect = get_visual_names
def service_proxy_creator(service_name, service_class):
if service_name == GazeboServiceName.GET_MODEL_PROPERTIES:
return get_model_prop_mock
elif service_name == GazeboServiceName.GET_VISUAL_NAMES:
return get_visual_names_mock
service_proxy_wrapper_mock.side_effect = service_proxy_creator
model_visual_randomizer = ModelVisualRandomizer(model_name=model_name,
model_randomizer_type=model_randomizer_type,
link_name_filter=["test_model::body_name1"])
assert len(model_visual_randomizer._link_visuals_map) == 1
assert "test_model::body_name1" in model_visual_randomizer._link_visuals_map
assert "test_model::body_name2" not in model_visual_randomizer._link_visuals_map
def test_visual_filter(self, service_proxy_wrapper_mock):
model_name = "test_model"
model_randomizer_type = ModelRandomizerType.MODEL
get_model_prop_mock = MagicMock()
get_model_prop_mock.return_value.body_names = ["body_name1", "body_name2"]
def get_visual_names(req):
response_mock = MagicMock()
response_mock.visual_names = []
response_mock.link_names = []
visual_names = ["visual_name1", "visual_name2"]
for link_name in req.link_names:
for visual_name in visual_names:
response_mock.link_names.append(link_name)
response_mock.visual_names.append(visual_name)
return response_mock
get_visual_names_mock = MagicMock()
get_visual_names_mock.side_effect = get_visual_names
def service_proxy_creator(service_name, service_class):
if service_name == GazeboServiceName.GET_MODEL_PROPERTIES:
return get_model_prop_mock
elif service_name == GazeboServiceName.GET_VISUAL_NAMES:
return get_visual_names_mock
service_proxy_wrapper_mock.side_effect = service_proxy_creator
model_visual_randomizer = ModelVisualRandomizer(model_name=model_name,
model_randomizer_type=model_randomizer_type,
visual_name_filter=["visual_name1"])
assert len(model_visual_randomizer._link_visuals_map) == 2
for link_visual_names in model_visual_randomizer._link_visuals_map.values():
assert "visual_name1" in link_visual_names
assert "visual_name2" not in link_visual_names
def test_model_randomizer_type_model(self, service_proxy_wrapper_mock):
model_name = "test_model"
model_randomizer_type = ModelRandomizerType.MODEL
get_model_prop_mock = MagicMock()
get_model_prop_mock.return_value.body_names = ["body_name1", "body_name2"]
def get_visual_names(req):
response_mock = MagicMock()
response_mock.visual_names = []
response_mock.link_names = []
visual_names = ["visual_name1", "visual_name2"]
for link_name in req.link_names:
for visual_name in visual_names:
response_mock.link_names.append(link_name)
response_mock.visual_names.append(visual_name)
return response_mock
get_visual_names_mock = MagicMock()
get_visual_names_mock.side_effect = get_visual_names
def service_proxy_creator(service_name, service_class):
if service_name == GazeboServiceName.GET_MODEL_PROPERTIES:
return get_model_prop_mock
elif service_name == GazeboServiceName.GET_VISUAL_NAMES:
return get_visual_names_mock
service_proxy_wrapper_mock.side_effect = service_proxy_creator
model_visual_randomizer = ModelVisualRandomizer(model_name=model_name,
model_randomizer_type=model_randomizer_type)
with patch("deepsim.domain_randomizations.randomizers.model_visual_randomizer.SetVisualMaterialTracker") as tracker_mock:
get_random_color_mock = MagicMock()
model_visual_randomizer._get_random_color = get_random_color_mock
model_visual_randomizer.randomize()
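# MODEL mode draws a single random color (one _get_random_color call) and
# applies it across all 2 links x 2 visuals (four set_visual_material calls).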
assert get_random_color_mock.call_count == 1
assert tracker_mock.get_instance.return_value.set_visual_material.call_count == 4
def test_model_randomizer_type_link(self, service_proxy_wrapper_mock):
model_name = "test_model"
model_randomizer_type = ModelRandomizerType.LINK
get_model_prop_mock = MagicMock()
get_model_prop_mock.return_value.body_names = ["body_name1", "body_name2"]
def get_visual_names(req):
response_mock = MagicMock()
response_mock.visual_names = []
response_mock.link_names = []
visual_names = ["visual_name1", "visual_name2"]
for link_name in req.link_names:
for visual_name in visual_names:
response_mock.link_names.append(link_name)
response_mock.visual_names.append(visual_name)
return response_mock
get_visual_names_mock = MagicMock()
get_visual_names_mock.side_effect = get_visual_names
def service_proxy_creator(service_name, service_class):
if service_name == GazeboServiceName.GET_MODEL_PROPERTIES:
return get_model_prop_mock
elif service_name == GazeboServiceName.GET_VISUAL_NAMES:
return get_visual_names_mock
service_proxy_wrapper_mock.side_effect = service_proxy_creator
model_visual_randomizer = ModelVisualRandomizer(model_name=model_name,
model_randomizer_type=model_randomizer_type)
with patch("deepsim.domain_randomizations.randomizers.model_visual_randomizer.SetVisualMaterialTracker") as tracker_mock:
get_random_color_mock = MagicMock()
model_visual_randomizer._get_random_color = get_random_color_mock
model_visual_randomizer.randomize()
assert get_random_color_mock.call_count == 3 # Last one is not used
assert tracker_mock.get_instance.return_value.set_visual_material.call_count == 4
def test_model_randomizer_type_visual(self, service_proxy_wrapper_mock):
model_name = "test_model"
model_randomizer_type = ModelRandomizerType.VISUAL
get_model_prop_mock = MagicMock()
get_model_prop_mock.return_value.body_names = ["body_name1", "body_name2"]
def get_visual_names(req):
response_mock = MagicMock()
response_mock.visual_names = []
response_mock.link_names = []
visual_names = ["visual_name1", "visual_name2"]
for link_name in req.link_names:
for visual_name in visual_names:
response_mock.link_names.append(link_name)
response_mock.visual_names.append(visual_name)
return response_mock
get_visual_names_mock = MagicMock()
get_visual_names_mock.side_effect = get_visual_names
def service_proxy_creator(service_name, service_class):
if service_name == GazeboServiceName.GET_MODEL_PROPERTIES:
return get_model_prop_mock
elif service_name == GazeboServiceName.GET_VISUAL_NAMES:
return get_visual_names_mock
service_proxy_wrapper_mock.side_effect = service_proxy_creator
model_visual_randomizer = ModelVisualRandomizer(model_name=model_name,
model_randomizer_type=model_randomizer_type)
with patch("deepsim.domain_randomizations.randomizers.model_visual_randomizer.SetVisualMaterialTracker") as tracker_mock:
get_random_color_mock = MagicMock()
model_visual_randomizer._get_random_color = get_random_color_mock
model_visual_randomizer.randomize()
assert get_random_color_mock.call_count == 5 # Last one is not used
assert tracker_mock.get_instance.return_value.set_visual_material.call_count == 4
def test_model_randomizer_type_link_selection(self, service_proxy_wrapper_mock):
model_name = "test_model"
model_randomizer_type = ModelRandomizerType.LINK
get_model_prop_mock = MagicMock()
get_model_prop_mock.return_value.body_names = ["body_name1", "body_name2"]
def get_visual_names(req):
response_mock = MagicMock()
response_mock.visual_names = []
response_mock.link_names = []
visual_names = ["visual_name1", "visual_name2"]
for link_name in req.link_names:
for visual_name in visual_names:
response_mock.link_names.append(link_name)
response_mock.visual_names.append(visual_name)
return response_mock
get_visual_names_mock = MagicMock()
get_visual_names_mock.side_effect = get_visual_names
def service_proxy_creator(service_name, service_class):
if service_name == GazeboServiceName.GET_MODEL_PROPERTIES:
return get_model_prop_mock
elif service_name == GazeboServiceName.GET_VISUAL_NAMES:
return get_visual_names_mock
service_proxy_wrapper_mock.side_effect = service_proxy_creator
model_visual_randomizer = ModelVisualRandomizer(model_name=model_name,
model_randomizer_type=model_randomizer_type,
num_selection=1)
with patch("deepsim.domain_randomizations.randomizers.model_visual_randomizer.SetVisualMaterialTracker") as tracker_mock:
get_random_color_mock = MagicMock()
model_visual_randomizer._get_random_color = get_random_color_mock
model_visual_randomizer.randomize()
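# With num_selection=1 only one link is picked: its 2 visuals share one
# color (2 material calls); per the note below, one extra color is drawn
# but never used.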
assert get_random_color_mock.call_count == 2 # Last one is not used
assert tracker_mock.get_instance.return_value.set_visual_material.call_count == 2
def test_model_randomizer_type_visual_selection(self, service_proxy_wrapper_mock):
model_name = "test_model"
model_randomizer_type = ModelRandomizerType.VISUAL
get_model_prop_mock = MagicMock()
get_model_prop_mock.return_value.body_names = ["body_name1", "body_name2"]
def get_visual_names(req):
response_mock = MagicMock()
response_mock.visual_names = []
response_mock.link_names = []
visual_names = ["visual_name1", "visual_name2"]
for link_name in req.link_names:
for visual_name in visual_names:
response_mock.link_names.append(link_name)
response_mock.visual_names.append(link_name + '_' + visual_name)
return response_mock
get_visual_names_mock = MagicMock()
get_visual_names_mock.side_effect = get_visual_names
def service_proxy_creator(service_name, service_class):
if service_name == GazeboServiceName.GET_MODEL_PROPERTIES:
return get_model_prop_mock
elif service_name == GazeboServiceName.GET_VISUAL_NAMES:
return get_visual_names_mock
service_proxy_wrapper_mock.side_effect = service_proxy_creator
model_visual_randomizer = ModelVisualRandomizer(model_name=model_name,
model_randomizer_type=model_randomizer_type,
num_selection=3)
with patch("deepsim.domain_randomizations.randomizers.model_visual_randomizer.SetVisualMaterialTracker") as tracker_mock:
get_random_color_mock = MagicMock()
model_visual_randomizer._get_random_color = get_random_color_mock
model_visual_randomizer.randomize()
assert get_random_color_mock.call_count == 4 # Last one is not used
assert tracker_mock.get_instance.return_value.set_visual_material.call_count == 3
| 51.330383 | 129 | 0.65502 | 1,904 | 17,401 | 5.516282 | 0.081933 | 0.086928 | 0.066648 | 0.041131 | 0.893173 | 0.890698 | 0.88708 | 0.870894 | 0.864801 | 0.85547 | 0 | 0.005419 | 0.278835 | 17,401 | 338 | 130 | 51.482249 | 0.83154 | 0.063675 | 0 | 0.837736 | 0 | 0 | 0.068394 | 0.037334 | 0 | 0 | 0 | 0 | 0.079245 | 1 | 0.098113 | false | 0.003774 | 0.022642 | 0 | 0.218868 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
6ff65962e8b5500d9e7869ba87170333bca2580a | 45,657 | py | Python | migrations/versions/7dd71f1af063_.py | Anioko/TestApp | 95fa8d27ca8e7a074e62f92609427a378844e621 | [
"MIT"
] | null | null | null | migrations/versions/7dd71f1af063_.py | Anioko/TestApp | 95fa8d27ca8e7a074e62f92609427a378844e621 | [
"MIT"
] | 1 | 2021-06-02T01:53:47.000Z | 2021-06-02T01:53:47.000Z | migrations/versions/7dd71f1af063_.py | Anioko/TestApp | 95fa8d27ca8e7a074e62f92609427a378844e621 | [
"MIT"
] | null | null | null | """empty message
Revision ID: 7dd71f1af063
Revises:
Create Date: 2020-05-23 14:48:01.769844
"""
from alembic import op
import sqlalchemy as sa
# revision identifiers, used by Alembic.
revision = '7dd71f1af063'
down_revision = None
branch_labels = None
depends_on = None
def upgrade():
# ### commands auto generated by Alembic - please adjust! ###
op.create_table('crawledjobs',
sa.Column('id', sa.Integer(), nullable=False),
sa.Column('image_filename', sa.String(), nullable=True),
sa.Column('pub_date', sa.String(length=255), nullable=True),
sa.Column('end_date', sa.String(length=255), nullable=True),
sa.Column('job_title', sa.String(length=255), nullable=True),
sa.Column('job_city', sa.String(length=255), nullable=True),
sa.Column('job_state', sa.String(length=255), nullable=True),
sa.Column('job_country', sa.String(length=255), nullable=True),
sa.Column('description', sa.Text(), nullable=True),
sa.Column('company_name', sa.String(length=255), nullable=True),
sa.Column('job_url', sa.String(length=255), nullable=True),
sa.Column('created_at', sa.DateTime(), nullable=True),
sa.Column('updated_at', sa.DateTime(), nullable=True),
sa.PrimaryKeyConstraint('id')
)
op.create_table('jobpikrs',
sa.Column('id', sa.Integer(), nullable=False),
sa.Column('job_type', sa.String(length=255), nullable=True),
sa.Column('has_expired', sa.String(length=255), nullable=True),
sa.Column('inferred_country', sa.String(length=255), nullable=True),
sa.Column('country', sa.String(length=255), nullable=True),
sa.Column('crawl_timestamp', sa.DateTime(), nullable=True),
sa.Column('city', sa.String(length=255), nullable=True),
sa.Column('inferred_city', sa.String(length=255), nullable=True),
sa.Column('salary_offered', sa.String(length=255), nullable=True),
sa.Column('url', sa.String(length=500), nullable=True),
sa.Column('contact_email', sa.String(length=255), nullable=True),
sa.Column('uniq_id', sa.String(length=255), nullable=True),
sa.Column('job_description', sa.Text(), nullable=True),
sa.Column('inferred_state', sa.String(length=255), nullable=True),
sa.Column('post_date', sa.DateTime(), nullable=True),
sa.Column('company_name', sa.String(length=255), nullable=True),
sa.Column('category', sa.String(length=255), nullable=True),
sa.Column('job_title', sa.String(length=255), nullable=True),
sa.Column('cursor', sa.BigInteger(), nullable=True),
sa.PrimaryKeyConstraint('id')
)
op.create_table('marketplace_categories',
sa.Column('id', sa.Integer(), autoincrement=True, nullable=False),
sa.Column('parent_id', sa.Integer(), nullable=True),
sa.Column('name', sa.String(), nullable=False),
sa.Column('image', sa.String(), nullable=False),
sa.Column('order', sa.Integer(), nullable=True),
sa.Column('is_featured', sa.Boolean(), nullable=True),
sa.Column('created_at', sa.DateTime(), nullable=True),
sa.Column('updated_at', sa.DateTime(), nullable=True),
sa.ForeignKeyConstraint(['parent_id'], ['marketplace_categories.id'], ondelete='CASCADE'),
sa.PrimaryKeyConstraint('id')
)
op.create_table('marketplace_currency',
sa.Column('id', sa.Integer(), autoincrement=True, nullable=False),
sa.Column('name', sa.String(), nullable=False),
sa.Column('symbol', sa.String(), nullable=True),
sa.Column('default', sa.Boolean(), nullable=True),
sa.Column('created_at', sa.DateTime(), nullable=True),
sa.Column('updated_at', sa.DateTime(), nullable=True),
sa.PrimaryKeyConstraint('id')
)
op.create_table('marketplace_settings',
sa.Column('id', sa.Integer(), autoincrement=True, nullable=False),
sa.Column('name', sa.String(), nullable=True),
sa.Column('display_name', sa.String(), nullable=True),
sa.Column('value', sa.String(), nullable=True),
sa.Column('created_at', sa.DateTime(), nullable=True),
sa.Column('updated_at', sa.DateTime(), nullable=True),
sa.PrimaryKeyConstraint('id')
)
op.create_table('tags',
sa.Column('id', sa.Integer(), nullable=False),
sa.Column('tag', sa.String(length=25), nullable=True),
sa.PrimaryKeyConstraint('id')
)
op.create_table('contact_messages',
sa.Column('id', sa.Integer(), autoincrement=True, nullable=False),
sa.Column('user_id', sa.Integer(), nullable=True),
sa.Column('name', sa.String(), nullable=True),
sa.Column('email', sa.String(length=64), nullable=True),
sa.Column('text', sa.Text(), nullable=True),
sa.Column('created_at', sa.DateTime(), nullable=True),
sa.Column('updated_at', sa.DateTime(), nullable=True),
sa.ForeignKeyConstraint(['user_id'], ['users.id'], ondelete='CASCADE'),
sa.PrimaryKeyConstraint('id')
)
op.create_table('extras',
sa.Column('id', sa.Integer(), nullable=False),
sa.Column('image_filename', sa.String(), nullable=True),
sa.Column('image_url', sa.String(), nullable=True),
sa.Column('required_skill_one', sa.String(length=255), nullable=True),
sa.Column('required_skill_two', sa.String(length=255), nullable=True),
sa.Column('required_skill_three', sa.String(length=255), nullable=True),
sa.Column('required_skill_four', sa.String(length=255), nullable=True),
sa.Column('required_skill_five', sa.String(length=255), nullable=True),
sa.Column('required_skill_six', sa.String(length=255), nullable=True),
sa.Column('required_skill_seven', sa.String(length=255), nullable=True),
sa.Column('required_skill_eight', sa.String(length=255), nullable=True),
sa.Column('required_skill_nine', sa.String(length=255), nullable=True),
sa.Column('required_skill_ten', sa.String(length=255), nullable=True),
sa.Column('user_id', sa.Integer(), nullable=True),
sa.Column('created_at', sa.DateTime(), nullable=True),
sa.Column('updated_at', sa.DateTime(), nullable=True),
sa.ForeignKeyConstraint(['user_id'], ['users.id'], ondelete='cascade'),
sa.PrimaryKeyConstraint('id')
)
op.create_table('followers',
sa.Column('id', sa.Integer(), nullable=False),
sa.Column('follower_id', sa.Integer(), nullable=True),
sa.Column('followed_id', sa.Integer(), nullable=True),
sa.Column('created_at', sa.DateTime(), nullable=True),
sa.Column('updated_at', sa.DateTime(), nullable=True),
sa.ForeignKeyConstraint(['followed_id'], ['users.id'], ondelete='CASCADE'),
sa.ForeignKeyConstraint(['follower_id'], ['users.id'], ondelete='CASCADE'),
sa.PrimaryKeyConstraint('id')
)
op.create_table('interests',
sa.Column('id', sa.Integer(), nullable=False),
sa.Column('name', sa.String(), nullable=True),
sa.Column('desc', sa.String(), nullable=True),
sa.Column('creator_id', sa.Integer(), nullable=True),
sa.Column('status', sa.SmallInteger(), nullable=True),
sa.Column('created_at', sa.DateTime(), nullable=True),
sa.Column('updated_at', sa.DateTime(), nullable=True),
sa.ForeignKeyConstraint(['creator_id'], ['users.id'], ondelete='CASCADE'),
sa.PrimaryKeyConstraint('id'),
sa.UniqueConstraint('name')
)
op.create_table('marketplace_carts',
sa.Column('id', sa.Integer(), autoincrement=True, nullable=False),
sa.Column('session_id', sa.String(), nullable=True),
sa.Column('user_id', sa.Integer(), nullable=True),
sa.Column('step', sa.Integer(), nullable=True),
sa.Column('created_at', sa.DateTime(), nullable=True),
sa.Column('updated_at', sa.DateTime(), nullable=True),
sa.ForeignKeyConstraint(['user_id'], ['users.id'], ondelete='CASCADE'),
sa.PrimaryKeyConstraint('id')
)
op.create_table('marketplace_orders',
sa.Column('id', sa.Integer(), autoincrement=True, nullable=False),
sa.Column('order_number', sa.String(), nullable=True),
sa.Column('charge_id', sa.String(), nullable=True),
sa.Column('order_status', sa.Integer(), nullable=True),
sa.Column('products_total', sa.Float(), nullable=True),
sa.Column('shipping_cost', sa.Float(), nullable=True),
sa.Column('order_total', sa.Float(), nullable=True),
sa.Column('order_discount', sa.Float(), nullable=True),
sa.Column('order_pay_amount', sa.Float(), nullable=True),
sa.Column('buyer_id', sa.Integer(), nullable=True),
sa.Column('price_currency_id', sa.Integer(), nullable=True),
sa.Column('first_name', sa.String(length=64), nullable=True),
sa.Column('last_name', sa.String(length=64), nullable=True),
sa.Column('email', sa.String(length=64), nullable=True),
sa.Column('mobile_phone', sa.BigInteger(), nullable=True),
sa.Column('zip', sa.String(length=10), nullable=True),
sa.Column('city', sa.String(length=64), nullable=True),
sa.Column('state', sa.String(length=64), nullable=True),
sa.Column('country', sa.String(length=64), nullable=True),
sa.Column('created_at', sa.DateTime(), nullable=True),
sa.Column('updated_at', sa.DateTime(), nullable=True),
sa.ForeignKeyConstraint(['buyer_id'], ['users.id'], ondelete='CASCADE'),
sa.ForeignKeyConstraint(['price_currency_id'], ['marketplace_currency.id'], ondelete='CASCADE'),
sa.PrimaryKeyConstraint('id')
)
op.create_index(op.f('ix_marketplace_orders_city'), 'marketplace_orders', ['city'], unique=False)
op.create_index(op.f('ix_marketplace_orders_country'), 'marketplace_orders', ['country'], unique=False)
op.create_index(op.f('ix_marketplace_orders_email'), 'marketplace_orders', ['email'], unique=False)
op.create_index(op.f('ix_marketplace_orders_first_name'), 'marketplace_orders', ['first_name'], unique=False)
op.create_index(op.f('ix_marketplace_orders_last_name'), 'marketplace_orders', ['last_name'], unique=False)
op.create_index(op.f('ix_marketplace_orders_mobile_phone'), 'marketplace_orders', ['mobile_phone'], unique=False)
op.create_index(op.f('ix_marketplace_orders_state'), 'marketplace_orders', ['state'], unique=False)
op.create_index(op.f('ix_marketplace_orders_zip'), 'marketplace_orders', ['zip'], unique=False)
op.create_table('marketplace_products',
sa.Column('id', sa.Integer(), autoincrement=True, nullable=False),
sa.Column('name', sa.String(), nullable=True),
sa.Column('images', sa.Text(), nullable=True),
sa.Column('description', sa.String(), nullable=True),
sa.Column('availability', sa.Boolean(), nullable=True),
sa.Column('min_order_quantity', sa.Integer(), nullable=True),
sa.Column('length', sa.Float(), nullable=True),
sa.Column('weight', sa.Float(), nullable=True),
sa.Column('height', sa.Float(), nullable=True),
sa.Column('price', sa.Float(), nullable=True),
sa.Column('price_currency_id', sa.Integer(), nullable=True),
sa.Column('seller_id', sa.Integer(), nullable=True),
sa.Column('is_featured', sa.Boolean(), nullable=True),
sa.Column('lead_time', sa.String(), nullable=True),
sa.Column('created_at', sa.DateTime(), nullable=True),
sa.Column('updated_at', sa.DateTime(), nullable=True),
sa.ForeignKeyConstraint(['price_currency_id'], ['marketplace_currency.id'], ondelete='CASCADE'),
sa.ForeignKeyConstraint(['seller_id'], ['users.id'], ondelete='CASCADE'),
sa.PrimaryKeyConstraint('id')
)
op.create_table('marketplace_shipping_methods',
sa.Column('id', sa.Integer(), autoincrement=True, nullable=False),
sa.Column('name', sa.String(), nullable=False),
sa.Column('seller_id', sa.Integer(), nullable=True),
sa.Column('created_at', sa.DateTime(), nullable=True),
sa.Column('updated_at', sa.DateTime(), nullable=True),
sa.ForeignKeyConstraint(['seller_id'], ['users.id'], ondelete='CASCADE'),
sa.PrimaryKeyConstraint('id')
)
op.create_table('messages',
sa.Column('id', sa.Integer(), nullable=False),
sa.Column('user_id', sa.Integer(), nullable=True),
sa.Column('recipient_id', sa.Integer(), nullable=True),
sa.Column('body', sa.Text(), nullable=True),
sa.Column('timestamp', sa.DateTime(), nullable=True),
sa.Column('created_at', sa.DateTime(), nullable=True),
sa.Column('updated_at', sa.DateTime(), nullable=True),
sa.Column('read_at', sa.DateTime(), nullable=True),
sa.ForeignKeyConstraint(['recipient_id'], ['users.id'], ondelete='CASCADE'),
sa.ForeignKeyConstraint(['user_id'], ['users.id'], ondelete='CASCADE'),
sa.PrimaryKeyConstraint('id')
)
op.create_index(op.f('ix_messages_timestamp'), 'messages', ['timestamp'], unique=False)
op.create_table('notifications',
sa.Column('id', sa.Integer(), nullable=False),
sa.Column('name', sa.String(length=128), nullable=True),
sa.Column('user_id', sa.Integer(), nullable=True),
sa.Column('related_id', sa.Integer(), nullable=True),
sa.Column('timestamp', sa.Float(), nullable=True),
sa.Column('payload_json', sa.Text(), nullable=True),
sa.Column('read', sa.Boolean(), nullable=True),
sa.Column('created_at', sa.DateTime(), nullable=True),
sa.Column('updated_at', sa.DateTime(), nullable=True),
sa.ForeignKeyConstraint(['user_id'], ['users.id'], ondelete='CASCADE'),
sa.PrimaryKeyConstraint('id')
)
op.create_index(op.f('ix_notifications_name'), 'notifications', ['name'], unique=False)
op.create_index(op.f('ix_notifications_timestamp'), 'notifications', ['timestamp'], unique=False)
op.create_table('organisations',
sa.Column('id', sa.Integer(), nullable=False),
sa.Column('user_id', sa.Integer(), nullable=False),
sa.Column('image_filename', sa.String(), nullable=True),
sa.Column('image_url', sa.String(), nullable=True),
sa.Column('org_name', sa.String(length=255), nullable=True),
sa.Column('org_city', sa.String(length=255), nullable=True),
sa.Column('org_state', sa.String(length=255), nullable=True),
sa.Column('org_country', sa.String(length=255), nullable=True),
sa.Column('org_website', sa.String(length=255), nullable=True),
sa.Column('org_industry', sa.String(length=255), nullable=True),
sa.Column('org_description', sa.Text(), nullable=True),
sa.Column('created_at', sa.DateTime(), nullable=True),
sa.Column('updated_at', sa.DateTime(), nullable=True),
sa.ForeignKeyConstraint(['user_id'], ['users.id'], ondelete='CASCADE'),
sa.PrimaryKeyConstraint('id')
)
op.create_table('questions',
sa.Column('id', sa.Integer(), nullable=False),
sa.Column('title', sa.String(), nullable=True),
sa.Column('description', sa.String(), nullable=True),
sa.Column('timestamp', sa.DateTime(), nullable=True),
sa.Column('user_id', sa.Integer(), nullable=True),
sa.Column('author', sa.String(length=128), nullable=True),
sa.Column('level', sa.Integer(), nullable=True),
sa.Column('created_at', sa.DateTime(), nullable=True),
sa.Column('updated_at', sa.DateTime(), nullable=True),
sa.ForeignKeyConstraint(['user_id'], ['users.id'], ondelete='CASCADE'),
sa.PrimaryKeyConstraint('id')
)
op.create_index(op.f('ix_questions_timestamp'), 'questions', ['timestamp'], unique=False)
op.create_table('answers',
sa.Column('id', sa.Integer(), nullable=False),
sa.Column('body', sa.String(), nullable=True),
sa.Column('timestamp', sa.DateTime(), nullable=True),
sa.Column('user_id', sa.Integer(), nullable=True),
sa.Column('author', sa.String(length=128), nullable=True),
sa.Column('question_id', sa.Integer(), nullable=True),
sa.Column('image_url', sa.String(), nullable=True),
sa.Column('created_at', sa.DateTime(), nullable=True),
sa.Column('updated_at', sa.DateTime(), nullable=True),
sa.Column('lft', sa.Integer(), nullable=False),
sa.Column('rgt', sa.Integer(), nullable=False),
sa.Column('level', sa.Integer(), nullable=False),
sa.Column('tree_id', sa.Integer(), nullable=True),
sa.Column('parent_id', sa.Integer(), nullable=True),
sa.ForeignKeyConstraint(['parent_id'], ['answers.id'], ondelete='CASCADE'),
sa.ForeignKeyConstraint(['question_id'], ['questions.id'], ondelete='CASCADE'),
sa.ForeignKeyConstraint(['user_id'], ['users.id'], ondelete='CASCADE'),
sa.PrimaryKeyConstraint('id')
)
op.create_index('answers_level_idx', 'answers', ['level'], unique=False)
op.create_index('answers_lft_idx', 'answers', ['lft'], unique=False)
op.create_index('answers_rgt_idx', 'answers', ['rgt'], unique=False)
op.create_index(op.f('ix_answers_body'), 'answers', ['body'], unique=False)
op.create_index(op.f('ix_answers_timestamp'), 'answers', ['timestamp'], unique=False)
op.create_table('entry_tags',
sa.Column('tag_id', sa.Integer(), nullable=False),
sa.Column('question_id', sa.Integer(), nullable=False),
sa.Column('created_at', sa.DateTime(), nullable=True),
sa.Column('updated_at', sa.DateTime(), nullable=True),
sa.ForeignKeyConstraint(['question_id'], ['questions.id'], ondelete='CASCADE'),
sa.ForeignKeyConstraint(['tag_id'], ['tags.id'], ondelete='CASCADE'),
sa.PrimaryKeyConstraint('question_id', 'tag_id')
)
op.create_table('jobs',
sa.Column('id', sa.Integer(), nullable=False),
sa.Column('organisation_id', sa.Integer(), nullable=True),
sa.Column('image_filename', sa.String(), nullable=True),
sa.Column('pub_date', sa.DateTime(), nullable=False),
sa.Column('end_date', sa.DateTime(), nullable=False),
sa.Column('position_title', sa.String(length=255), nullable=True),
sa.Column('position_city', sa.String(length=255), nullable=True),
sa.Column('position_state', sa.String(length=255), nullable=True),
sa.Column('position_country', sa.String(length=255), nullable=True),
sa.Column('required_skill_one', sa.String(length=255), nullable=True),
sa.Column('required_skill_two', sa.String(length=255), nullable=True),
sa.Column('required_skill_three', sa.String(length=255), nullable=True),
sa.Column('required_skill_four', sa.String(length=255), nullable=True),
sa.Column('required_skill_five', sa.String(length=255), nullable=True),
sa.Column('required_skill_six', sa.String(length=255), nullable=True),
sa.Column('required_skill_seven', sa.String(length=255), nullable=True),
sa.Column('required_skill_eight', sa.String(length=255), nullable=True),
sa.Column('required_skill_nine', sa.String(length=255), nullable=True),
sa.Column('required_skill_ten', sa.String(length=255), nullable=True),
sa.Column('description', sa.Text(), nullable=True),
sa.Column('creator_id', sa.Integer(), nullable=False),
sa.Column('created_at', sa.DateTime(), nullable=True),
sa.Column('updated_at', sa.DateTime(), nullable=True),
sa.ForeignKeyConstraint(['creator_id'], ['users.id'], ondelete='CASCADE'),
sa.ForeignKeyConstraint(['organisation_id'], ['organisations.id'], ondelete='CASCADE'),
sa.PrimaryKeyConstraint('id')
)
op.create_index(op.f('ix_jobs_position_state'), 'jobs', ['position_state'], unique=False)
op.create_table('logos',
sa.Column('id', sa.Integer(), nullable=False),
sa.Column('image_filename', sa.String(), nullable=True),
sa.Column('image_url', sa.String(), nullable=True),
sa.Column('organisation_id', sa.Integer(), nullable=False),
sa.Column('owner_organisation', sa.String(length=128), nullable=True),
sa.Column('created_at', sa.DateTime(), nullable=True),
sa.Column('updated_at', sa.DateTime(), nullable=True),
sa.ForeignKeyConstraint(['organisation_id'], ['organisations.id'], ondelete='CASCADE'),
sa.PrimaryKeyConstraint('id')
)
op.create_table('marketplace_cart_details',
sa.Column('id', sa.Integer(), autoincrement=True, nullable=False),
sa.Column('cart_id', sa.Integer(), nullable=True),
sa.Column('first_name', sa.String(length=64), nullable=True),
sa.Column('last_name', sa.String(length=64), nullable=True),
sa.Column('email', sa.String(length=64), nullable=True),
sa.Column('mobile_phone', sa.BigInteger(), nullable=True),
sa.Column('zip', sa.String(length=10), nullable=True),
sa.Column('city', sa.String(length=64), nullable=True),
sa.Column('state', sa.String(length=64), nullable=True),
sa.Column('country', sa.String(length=64), nullable=True),
sa.Column('created_at', sa.DateTime(), nullable=True),
sa.Column('updated_at', sa.DateTime(), nullable=True),
sa.ForeignKeyConstraint(['cart_id'], ['marketplace_carts.id'], ondelete='CASCADE'),
sa.PrimaryKeyConstraint('id')
)
op.create_index(op.f('ix_marketplace_cart_details_city'), 'marketplace_cart_details', ['city'], unique=False)
op.create_index(op.f('ix_marketplace_cart_details_country'), 'marketplace_cart_details', ['country'], unique=False)
op.create_index(op.f('ix_marketplace_cart_details_email'), 'marketplace_cart_details', ['email'], unique=True)
op.create_index(op.f('ix_marketplace_cart_details_first_name'), 'marketplace_cart_details', ['first_name'], unique=False)
op.create_index(op.f('ix_marketplace_cart_details_last_name'), 'marketplace_cart_details', ['last_name'], unique=False)
op.create_index(op.f('ix_marketplace_cart_details_mobile_phone'), 'marketplace_cart_details', ['mobile_phone'], unique=True)
op.create_index(op.f('ix_marketplace_cart_details_state'), 'marketplace_cart_details', ['state'], unique=False)
op.create_index(op.f('ix_marketplace_cart_details_zip'), 'marketplace_cart_details', ['zip'], unique=False)
op.create_table('marketplace_product_categories',
sa.Column('id', sa.Integer(), autoincrement=True, nullable=False),
sa.Column('category_id', sa.Integer(), nullable=True),
sa.Column('product_id', sa.Integer(), nullable=True),
sa.Column('created_at', sa.DateTime(), nullable=True),
sa.Column('updated_at', sa.DateTime(), nullable=True),
sa.ForeignKeyConstraint(['category_id'], ['marketplace_categories.id'], ondelete='CASCADE'),
sa.ForeignKeyConstraint(['product_id'], ['marketplace_products.id'], ondelete='CASCADE'),
sa.PrimaryKeyConstraint('id')
)
op.create_table('marketplace_seller_carts',
sa.Column('id', sa.Integer(), autoincrement=True, nullable=False),
sa.Column('cart_id', sa.Integer(), nullable=True),
sa.Column('seller_id', sa.Integer(), nullable=True),
sa.Column('shipping_method_id', sa.Integer(), nullable=True),
sa.Column('buyer_id', sa.Integer(), nullable=True),
sa.Column('current_currency_id', sa.Integer(), nullable=True),
sa.Column('created_at', sa.DateTime(), nullable=True),
sa.Column('updated_at', sa.DateTime(), nullable=True),
sa.ForeignKeyConstraint(['buyer_id'], ['users.id'], ondelete='CASCADE'),
sa.ForeignKeyConstraint(['cart_id'], ['marketplace_carts.id'], ondelete='CASCADE'),
sa.ForeignKeyConstraint(['current_currency_id'], ['marketplace_currency.id'], ondelete='CASCADE'),
sa.ForeignKeyConstraint(['seller_id'], ['users.id'], ondelete='CASCADE'),
sa.ForeignKeyConstraint(['shipping_method_id'], ['marketplace_shipping_methods.id'], ondelete='CASCADE'),
sa.PrimaryKeyConstraint('id')
)
op.create_table('marketplace_seller_orders',
sa.Column('id', sa.Integer(), autoincrement=True, nullable=False),
sa.Column('order_id', sa.Integer(), nullable=True),
sa.Column('seller_id', sa.Integer(), nullable=True),
sa.Column('order_status', sa.Integer(), nullable=True),
sa.Column('shipping_method_id', sa.Integer(), nullable=True),
sa.Column('buyer_id', sa.Integer(), nullable=True),
sa.Column('current_currency_id', sa.Integer(), nullable=True),
sa.Column('shipping_cost', sa.Float(), nullable=True),
sa.Column('created_at', sa.DateTime(), nullable=True),
sa.Column('updated_at', sa.DateTime(), nullable=True),
sa.ForeignKeyConstraint(['buyer_id'], ['users.id'], ondelete='CASCADE'),
sa.ForeignKeyConstraint(['current_currency_id'], ['marketplace_currency.id'], ondelete='CASCADE'),
sa.ForeignKeyConstraint(['order_id'], ['marketplace_orders.id'], ondelete='CASCADE'),
sa.ForeignKeyConstraint(['seller_id'], ['users.id'], ondelete='CASCADE'),
sa.ForeignKeyConstraint(['shipping_method_id'], ['marketplace_shipping_methods.id'], ondelete='CASCADE'),
sa.PrimaryKeyConstraint('id')
)
op.create_table('marketplace_shipping_method_prices',
sa.Column('id', sa.Integer(), autoincrement=True, nullable=False),
sa.Column('shipping_method_id', sa.Integer(), nullable=True),
sa.Column('seller_id', sa.Integer(), nullable=True),
sa.Column('price_currency_id', sa.Integer(), nullable=True),
sa.Column('price', sa.Float(), nullable=True),
sa.Column('created_at', sa.DateTime(), nullable=True),
sa.Column('updated_at', sa.DateTime(), nullable=True),
sa.ForeignKeyConstraint(['price_currency_id'], ['marketplace_currency.id'], ondelete='CASCADE'),
sa.ForeignKeyConstraint(['seller_id'], ['users.id'], ondelete='CASCADE'),
sa.ForeignKeyConstraint(['shipping_method_id'], ['marketplace_shipping_methods.id'], ondelete='CASCADE'),
sa.PrimaryKeyConstraint('id')
)
op.create_table('org_staff',
sa.Column('id', sa.Integer(), nullable=False),
sa.Column('user_id', sa.Integer(), nullable=True),
sa.Column('invited_by', sa.Integer(), nullable=True),
sa.Column('org_id', sa.Integer(), nullable=True),
sa.Column('created_at', sa.DateTime(), nullable=True),
sa.Column('updated_at', sa.DateTime(), nullable=True),
sa.ForeignKeyConstraint(['invited_by'], ['users.id'], ondelete='CASCADE'),
sa.ForeignKeyConstraint(['org_id'], ['organisations.id'], ondelete='CASCADE'),
sa.ForeignKeyConstraint(['user_id'], ['users.id'], ondelete='CASCADE'),
sa.PrimaryKeyConstraint('id')
)
op.create_table('posts',
sa.Column('id', sa.Integer(), nullable=False),
sa.Column('title', sa.String(), nullable=True),
sa.Column('text', sa.String(), nullable=True),
sa.Column('thumbnail', sa.String(), nullable=True),
sa.Column('post_privacy', sa.Integer(), nullable=True),
sa.Column('author', sa.String(length=128), nullable=True),
sa.Column('image_filename', sa.Text(), nullable=True),
sa.Column('image_url', sa.Text(), nullable=True),
sa.Column('user_id', sa.Integer(), nullable=True),
sa.Column('interest_id', sa.Integer(), nullable=True),
sa.Column('votes', sa.Integer(), nullable=True),
sa.Column('hotness', sa.Float(precision=15, asdecimal=True, decimal_return_scale=6), nullable=True),  # asdecimal must be a bool; the scale belongs in decimal_return_scale
sa.Column('created_at', sa.DateTime(), nullable=True),
sa.Column('updated_at', sa.DateTime(), nullable=True),
sa.ForeignKeyConstraint(['interest_id'], ['interests.id'], ondelete='CASCADE'),
sa.ForeignKeyConstraint(['user_id'], ['users.id'], ondelete='CASCADE'),
sa.PrimaryKeyConstraint('id')
)
op.create_table('promos',
sa.Column('id', sa.Integer(), nullable=False),
sa.Column('organisation_id', sa.Integer(), nullable=True),
sa.Column('image_filename', sa.String(), nullable=True),
sa.Column('pub_date', sa.DateTime(), nullable=False),
sa.Column('end_date', sa.DateTime(), nullable=False),
sa.Column('promo_title', sa.String(length=255), nullable=True),
sa.Column('promo_city', sa.String(length=255), nullable=True),
sa.Column('promo_state', sa.String(length=255), nullable=True),
sa.Column('promo_country', sa.String(length=255), nullable=True),
sa.Column('requirement_one', sa.String(length=255), nullable=True),
sa.Column('requirement_two', sa.String(length=255), nullable=True),
sa.Column('requirement_three', sa.String(length=255), nullable=True),
sa.Column('requirement_four', sa.String(length=255), nullable=True),
sa.Column('requirement_five', sa.String(length=255), nullable=True),
sa.Column('requirement_six', sa.String(length=255), nullable=True),
sa.Column('requirement_seven', sa.String(length=255), nullable=True),
sa.Column('requirement_eight', sa.String(length=255), nullable=True),
sa.Column('requirement_nine', sa.String(length=255), nullable=True),
sa.Column('requirement_ten', sa.String(length=255), nullable=True),
sa.Column('description', sa.Text(), nullable=True),
sa.Column('creator_id', sa.Integer(), nullable=False),
sa.Column('created_at', sa.DateTime(), nullable=True),
sa.Column('updated_at', sa.DateTime(), nullable=True),
sa.ForeignKeyConstraint(['creator_id'], ['users.id'], ondelete='CASCADE'),
sa.ForeignKeyConstraint(['organisation_id'], ['organisations.id'], ondelete='CASCADE'),
sa.PrimaryKeyConstraint('id')
)
op.create_table('applications',
sa.Column('id', sa.Integer(), nullable=False),
sa.Column('position_id', sa.Integer(), nullable=False),
sa.Column('user_id', sa.Integer(), nullable=False),
sa.Column('created_at', sa.DateTime(), nullable=True),
sa.Column('updated_at', sa.DateTime(), nullable=True),
sa.ForeignKeyConstraint(['position_id'], ['jobs.id'], ondelete='CASCADE'),
sa.ForeignKeyConstraint(['user_id'], ['users.id'], ondelete='CASCADE'),
sa.PrimaryKeyConstraint('id')
)
op.create_table('marketplace_cart_items',
sa.Column('id', sa.Integer(), autoincrement=True, nullable=False),
sa.Column('cart_id', sa.Integer(), nullable=True),
sa.Column('seller_cart_id', sa.Integer(), nullable=True),
sa.Column('product_id', sa.Integer(), nullable=True),
sa.Column('seller_id', sa.Integer(), nullable=True),
sa.Column('buyer_id', sa.Integer(), nullable=True),
sa.Column('count', sa.Integer(), nullable=True),
sa.Column('created_at', sa.DateTime(), nullable=True),
sa.Column('updated_at', sa.DateTime(), nullable=True),
sa.ForeignKeyConstraint(['buyer_id'], ['users.id'], ondelete='CASCADE'),
sa.ForeignKeyConstraint(['cart_id'], ['marketplace_carts.id'], ondelete='CASCADE'),
sa.ForeignKeyConstraint(['product_id'], ['marketplace_products.id'], ondelete='CASCADE'),
sa.ForeignKeyConstraint(['seller_cart_id'], ['marketplace_seller_carts.id'], ondelete='CASCADE'),
sa.ForeignKeyConstraint(['seller_id'], ['users.id'], ondelete='CASCADE'),
sa.PrimaryKeyConstraint('id')
)
op.create_table('marketplace_order_items',
sa.Column('id', sa.Integer(), autoincrement=True, nullable=False),
sa.Column('order_id', sa.Integer(), nullable=True),
sa.Column('seller_order_id', sa.Integer(), nullable=True),
sa.Column('seller_id', sa.Integer(), nullable=True),
sa.Column('buyer_id', sa.Integer(), nullable=True),
sa.Column('product_id', sa.Integer(), nullable=True),
sa.Column('count', sa.Integer(), nullable=True),
sa.Column('current_price', sa.Float(), nullable=True),
sa.Column('current_total_price', sa.Integer(), nullable=True),
sa.Column('created_at', sa.DateTime(), nullable=True),
sa.Column('updated_at', sa.DateTime(), nullable=True),
sa.ForeignKeyConstraint(['buyer_id'], ['users.id'], ondelete='CASCADE'),
sa.ForeignKeyConstraint(['order_id'], ['marketplace_orders.id'], ondelete='CASCADE'),
sa.ForeignKeyConstraint(['product_id'], ['marketplace_products.id'], ondelete='CASCADE'),
sa.ForeignKeyConstraint(['seller_id'], ['users.id'], ondelete='CASCADE'),
sa.ForeignKeyConstraint(['seller_order_id'], ['marketplace_seller_orders.id'], ondelete='CASCADE'),
sa.PrimaryKeyConstraint('id')
)
op.create_table('photos',
sa.Column('id', sa.Integer(), nullable=False),
sa.Column('image_filename', sa.String(), nullable=True),
sa.Column('image_url', sa.String(), nullable=True),
sa.Column('user_id', sa.Integer(), nullable=True),
sa.Column('question_id', sa.Integer(), nullable=True),
sa.Column('answer_id', sa.Integer(), nullable=True),
sa.Column('created_at', sa.DateTime(), nullable=True),
sa.Column('updated_at', sa.DateTime(), nullable=True),
sa.ForeignKeyConstraint(['answer_id'], ['answers.id'], ondelete='CASCADE'),
sa.ForeignKeyConstraint(['question_id'], ['questions.id'], ondelete='CASCADE'),
sa.ForeignKeyConstraint(['user_id'], ['users.id'], ondelete='CASCADE'),
sa.PrimaryKeyConstraint('id')
)
op.create_table('post_comments',
sa.Column('id', sa.Integer(), nullable=False),
sa.Column('text', sa.String(), nullable=True),
sa.Column('user_id', sa.Integer(), nullable=True),
sa.Column('post_id', sa.Integer(), nullable=True),
sa.Column('depth', sa.Integer(), nullable=True),
sa.Column('question_id', sa.Integer(), nullable=True),
sa.Column('votes', sa.Integer(), nullable=True),
sa.Column('created_at', sa.DateTime(), nullable=True),
sa.Column('updated_at', sa.DateTime(), nullable=True),
sa.Column('lft', sa.Integer(), nullable=False),
sa.Column('rgt', sa.Integer(), nullable=False),
sa.Column('level', sa.Integer(), nullable=False),
sa.Column('tree_id', sa.Integer(), nullable=True),
sa.Column('parent_id', sa.Integer(), nullable=True),
sa.ForeignKeyConstraint(['parent_id'], ['post_comments.id'], ondelete='CASCADE'),
sa.ForeignKeyConstraint(['post_id'], ['posts.id'], ondelete='CASCADE'),
sa.ForeignKeyConstraint(['question_id'], ['questions.id'], ondelete='CASCADE'),
sa.ForeignKeyConstraint(['user_id'], ['users.id'], ondelete='CASCADE'),
sa.PrimaryKeyConstraint('id')
)
op.create_index('post_comments_level_idx', 'post_comments', ['level'], unique=False)
op.create_index('post_comments_lft_idx', 'post_comments', ['lft'], unique=False)
op.create_index('post_comments_rgt_idx', 'post_comments', ['rgt'], unique=False)
op.create_table('post_likes',
sa.Column('id', sa.Integer(), nullable=False),
sa.Column('user_id', sa.Integer(), nullable=True),
sa.Column('post_id', sa.Integer(), nullable=True),
sa.Column('created_at', sa.DateTime(), nullable=True),
sa.Column('updated_at', sa.DateTime(), nullable=True),
sa.ForeignKeyConstraint(['post_id'], ['posts.id'], ondelete='CASCADE'),
sa.PrimaryKeyConstraint('id')
)
op.create_table('post_upvotes',
sa.Column('id', sa.Integer(), nullable=False),
sa.Column('user_id', sa.Integer(), nullable=True),
sa.Column('post_id', sa.Integer(), nullable=True),
sa.Column('created_at', sa.DateTime(), nullable=True),
sa.Column('updated_at', sa.DateTime(), nullable=True),
sa.ForeignKeyConstraint(['post_id'], ['posts.id'], ondelete='CASCADE'),
sa.ForeignKeyConstraint(['user_id'], ['users.id'], ondelete='CASCADE'),
sa.PrimaryKeyConstraint('id')
)
op.create_table('submissions',
sa.Column('id', sa.Integer(), nullable=False),
sa.Column('promo_id', sa.Integer(), nullable=False),
sa.Column('user_id', sa.Integer(), nullable=False),
sa.Column('created_at', sa.DateTime(), nullable=True),
sa.Column('updated_at', sa.DateTime(), nullable=True),
sa.ForeignKeyConstraint(['promo_id'], ['promos.id'], ondelete='CASCADE'),
sa.ForeignKeyConstraint(['user_id'], ['users.id'], ondelete='CASCADE'),
sa.PrimaryKeyConstraint('id')
)
op.create_table('application_extras',
sa.Column('id', sa.Integer(), nullable=False),
sa.Column('application_id', sa.Integer(), nullable=True),
sa.Column('extra_id', sa.Integer(), nullable=True),
sa.Column('created_at', sa.DateTime(), nullable=True),
sa.Column('updated_at', sa.DateTime(), nullable=True),
sa.ForeignKeyConstraint(['application_id'], ['applications.id'], ondelete='CASCADE'),
sa.ForeignKeyConstraint(['extra_id'], ['extras.id'], ondelete='CASCADE'),
sa.PrimaryKeyConstraint('id')
)
op.create_table('comment_upvotes',
sa.Column('id', sa.Integer(), nullable=False),
sa.Column('user_id', sa.Integer(), nullable=True),
sa.Column('comment_id', sa.Integer(), nullable=True),
sa.Column('created_at', sa.DateTime(), nullable=True),
sa.Column('updated_at', sa.DateTime(), nullable=True),
sa.ForeignKeyConstraint(['comment_id'], ['post_comments.id'], ondelete='CASCADE'),
sa.ForeignKeyConstraint(['user_id'], ['users.id'], ondelete='CASCADE'),
sa.PrimaryKeyConstraint('id')
)
op.create_table('job_applications',
sa.Column('id', sa.Integer(), nullable=False),
sa.Column('application_id', sa.Integer(), nullable=True),
sa.Column('position_id', sa.Integer(), nullable=True),
sa.Column('created_at', sa.DateTime(), nullable=True),
sa.Column('updated_at', sa.DateTime(), nullable=True),
sa.ForeignKeyConstraint(['application_id'], ['applications.id'], ondelete='CASCADE'),
sa.ForeignKeyConstraint(['position_id'], ['jobs.id'], ondelete='CASCADE'),
sa.PrimaryKeyConstraint('id')
)
op.create_table('marketplace_order_status_changes',
sa.Column('id', sa.Integer(), autoincrement=True, nullable=False),
sa.Column('order_id', sa.Integer(), nullable=True),
sa.Column('order_item_id', sa.Integer(), nullable=True),
sa.Column('changed_from', sa.Integer(), nullable=True),
sa.Column('changed_to', sa.Integer(), nullable=True),
sa.Column('created_at', sa.DateTime(), nullable=True),
sa.Column('updated_at', sa.DateTime(), nullable=True),
sa.ForeignKeyConstraint(['order_id'], ['marketplace_orders.id'], ondelete='CASCADE'),
sa.ForeignKeyConstraint(['order_item_id'], ['marketplace_order_items.id'], ondelete='CASCADE'),
sa.PrimaryKeyConstraint('id')
)
op.create_table('promo_submissions',
sa.Column('id', sa.Integer(), nullable=False),
sa.Column('submission_id', sa.Integer(), nullable=True),
sa.Column('promo_id', sa.Integer(), nullable=True),
sa.Column('created_at', sa.DateTime(), nullable=True),
sa.Column('updated_at', sa.DateTime(), nullable=True),
sa.ForeignKeyConstraint(['promo_id'], ['promos.id'], ondelete='CASCADE'),
sa.ForeignKeyConstraint(['submission_id'], ['submissions.id'], ondelete='CASCADE'),
sa.PrimaryKeyConstraint('id')
)
op.add_column('users', sa.Column('area_code', sa.String(length=6), nullable=True))
op.add_column('users', sa.Column('city', sa.String(length=64), nullable=True))
op.add_column('users', sa.Column('country', sa.String(length=64), nullable=True))
op.add_column('users', sa.Column('created_at', sa.DateTime(), nullable=True))
op.add_column('users', sa.Column('gender', sa.String(length=64), nullable=True))
op.add_column('users', sa.Column('invited_by', sa.String(length=128), nullable=True))
op.add_column('users', sa.Column('is_seller', sa.Boolean(), nullable=True))
op.add_column('users', sa.Column('last_message_read_time', sa.DateTime(), nullable=True))
op.add_column('users', sa.Column('mobile_phone', sa.BigInteger(), nullable=True))
op.add_column('users', sa.Column('online', sa.String(length=1), nullable=True))
op.add_column('users', sa.Column('profession', sa.String(length=64), nullable=True))
op.add_column('users', sa.Column('recruiter_id', sa.Integer(), nullable=True))
op.add_column('users', sa.Column('socket_id', sa.Text(), nullable=True))
op.add_column('users', sa.Column('state', sa.String(length=64), nullable=True))
op.add_column('users', sa.Column('summary_text', sa.Text(), nullable=True))
op.add_column('users', sa.Column('updated_at', sa.DateTime(), nullable=True))
op.add_column('users', sa.Column('verified', sa.Boolean(), nullable=True))
op.add_column('users', sa.Column('zip', sa.String(length=10), nullable=True))
op.create_index(op.f('ix_users_area_code'), 'users', ['area_code'], unique=False)
op.create_index(op.f('ix_users_city'), 'users', ['city'], unique=False)
op.create_index(op.f('ix_users_country'), 'users', ['country'], unique=False)
op.create_index(op.f('ix_users_gender'), 'users', ['gender'], unique=False)
op.create_index(op.f('ix_users_mobile_phone'), 'users', ['mobile_phone'], unique=True)
op.create_index(op.f('ix_users_profession'), 'users', ['profession'], unique=False)
op.create_index(op.f('ix_users_state'), 'users', ['state'], unique=False)
op.create_index(op.f('ix_users_zip'), 'users', ['zip'], unique=False)
# NOTE: autogenerate could not resolve the name of the existing constraint;
# replace None with the actual FK name (or use a MetaData naming_convention)
# before running this migration.
op.drop_constraint(None, 'users', type_='foreignkey')
op.create_foreign_key(None, 'users', 'roles', ['role_id'], ['id'], ondelete='CASCADE')
op.create_foreign_key(None, 'users', 'users', ['recruiter_id'], ['id'])
# ### end Alembic commands ###
def downgrade():
# ### commands auto generated by Alembic - please adjust! ###
# NOTE: as in upgrade(), the None constraint names below must be filled in manually.
op.drop_constraint(None, 'users', type_='foreignkey')
op.drop_constraint(None, 'users', type_='foreignkey')
op.create_foreign_key(None, 'users', 'roles', ['role_id'], ['id'])
op.drop_index(op.f('ix_users_zip'), table_name='users')
op.drop_index(op.f('ix_users_state'), table_name='users')
op.drop_index(op.f('ix_users_profession'), table_name='users')
op.drop_index(op.f('ix_users_mobile_phone'), table_name='users')
op.drop_index(op.f('ix_users_gender'), table_name='users')
op.drop_index(op.f('ix_users_country'), table_name='users')
op.drop_index(op.f('ix_users_city'), table_name='users')
op.drop_index(op.f('ix_users_area_code'), table_name='users')
op.drop_column('users', 'zip')
op.drop_column('users', 'verified')
op.drop_column('users', 'updated_at')
op.drop_column('users', 'summary_text')
op.drop_column('users', 'state')
op.drop_column('users', 'socket_id')
op.drop_column('users', 'recruiter_id')
op.drop_column('users', 'profession')
op.drop_column('users', 'online')
op.drop_column('users', 'mobile_phone')
op.drop_column('users', 'last_message_read_time')
op.drop_column('users', 'is_seller')
op.drop_column('users', 'invited_by')
op.drop_column('users', 'gender')
op.drop_column('users', 'created_at')
op.drop_column('users', 'country')
op.drop_column('users', 'city')
op.drop_column('users', 'area_code')
op.drop_table('promo_submissions')
op.drop_table('marketplace_order_status_changes')
op.drop_table('job_applications')
op.drop_table('comment_upvotes')
op.drop_table('application_extras')
op.drop_table('submissions')
op.drop_table('post_upvotes')
op.drop_table('post_likes')
op.drop_index('post_comments_rgt_idx', table_name='post_comments')
op.drop_index('post_comments_lft_idx', table_name='post_comments')
op.drop_index('post_comments_level_idx', table_name='post_comments')
op.drop_table('post_comments')
op.drop_table('photos')
op.drop_table('marketplace_order_items')
op.drop_table('marketplace_cart_items')
op.drop_table('applications')
op.drop_table('promos')
op.drop_table('posts')
op.drop_table('org_staff')
op.drop_table('marketplace_shipping_method_prices')
op.drop_table('marketplace_seller_orders')
op.drop_table('marketplace_seller_carts')
op.drop_table('marketplace_product_categories')
op.drop_index(op.f('ix_marketplace_cart_details_zip'), table_name='marketplace_cart_details')
op.drop_index(op.f('ix_marketplace_cart_details_state'), table_name='marketplace_cart_details')
op.drop_index(op.f('ix_marketplace_cart_details_mobile_phone'), table_name='marketplace_cart_details')
op.drop_index(op.f('ix_marketplace_cart_details_last_name'), table_name='marketplace_cart_details')
op.drop_index(op.f('ix_marketplace_cart_details_first_name'), table_name='marketplace_cart_details')
op.drop_index(op.f('ix_marketplace_cart_details_email'), table_name='marketplace_cart_details')
op.drop_index(op.f('ix_marketplace_cart_details_country'), table_name='marketplace_cart_details')
op.drop_index(op.f('ix_marketplace_cart_details_city'), table_name='marketplace_cart_details')
op.drop_table('marketplace_cart_details')
op.drop_table('logos')
op.drop_index(op.f('ix_jobs_position_state'), table_name='jobs')
op.drop_table('jobs')
op.drop_table('entry_tags')
op.drop_index(op.f('ix_answers_timestamp'), table_name='answers')
op.drop_index(op.f('ix_answers_body'), table_name='answers')
op.drop_index('answers_rgt_idx', table_name='answers')
op.drop_index('answers_lft_idx', table_name='answers')
op.drop_index('answers_level_idx', table_name='answers')
op.drop_table('answers')
op.drop_index(op.f('ix_questions_timestamp'), table_name='questions')
op.drop_table('questions')
op.drop_table('organisations')
op.drop_index(op.f('ix_notifications_timestamp'), table_name='notifications')
op.drop_index(op.f('ix_notifications_name'), table_name='notifications')
op.drop_table('notifications')
op.drop_index(op.f('ix_messages_timestamp'), table_name='messages')
op.drop_table('messages')
op.drop_table('marketplace_shipping_methods')
op.drop_table('marketplace_products')
op.drop_index(op.f('ix_marketplace_orders_zip'), table_name='marketplace_orders')
op.drop_index(op.f('ix_marketplace_orders_state'), table_name='marketplace_orders')
op.drop_index(op.f('ix_marketplace_orders_mobile_phone'), table_name='marketplace_orders')
op.drop_index(op.f('ix_marketplace_orders_last_name'), table_name='marketplace_orders')
op.drop_index(op.f('ix_marketplace_orders_first_name'), table_name='marketplace_orders')
op.drop_index(op.f('ix_marketplace_orders_email'), table_name='marketplace_orders')
op.drop_index(op.f('ix_marketplace_orders_country'), table_name='marketplace_orders')
op.drop_index(op.f('ix_marketplace_orders_city'), table_name='marketplace_orders')
op.drop_table('marketplace_orders')
op.drop_table('marketplace_carts')
op.drop_table('interests')
op.drop_table('followers')
op.drop_table('extras')
op.drop_table('contact_messages')
op.drop_table('tags')
op.drop_table('marketplace_settings')
op.drop_table('marketplace_currency')
op.drop_table('marketplace_categories')
op.drop_table('jobpikrs')
op.drop_table('crawledjobs')
# ### end Alembic commands ###
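# A minimal sketch (illustrative, not from this project) of a SQLAlchemy
# naming_convention that would make the drop_constraint(None, ...) calls above
# resolvable by deterministic names, assuming env.py passes this MetaData as
# target_metadata:
#
#     from sqlalchemy import MetaData
#     naming_convention = {
#         "fk": "fk_%(table_name)s_%(column_0_name)s_%(referred_table_name)s",
#         "ix": "ix_%(column_0_label)s",
#     }
#     metadata = MetaData(naming_convention=naming_convention)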
| 57.35804 | 128 | 0.695162 | 5,968 | 45,657 | 5.13874 | 0.042058 | 0.108517 | 0.151559 | 0.18847 | 0.901102 | 0.876614 | 0.840909 | 0.795324 | 0.765651 | 0.682046 | 0 | 0.007384 | 0.11306 | 45,657 | 795 | 129 | 57.430189 | 0.749944 | 0.006198 | 0 | 0.503218 | 0 | 0 | 0.247811 | 0.062708 | 0 | 0 | 0 | 0 | 0 | 1 | 0.002574 | false | 0 | 0.002574 | 0 | 0.005148 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
6ffd9dc793aac9dcff3e0fa2366c7e7c9824e170 | 15,511 | py | Python | morse-stf/stensorflow/ml/nn/networks/CNN_with_SL.py | alipay/Antchain-MPC | f6916465e1da5722ca7efadc4eeaca13ec229707 | [
"Apache-2.0"
] | 33 | 2021-11-23T09:04:03.000Z | 2022-03-14T07:56:31.000Z | morse-stf/stensorflow/ml/nn/networks/CNN_with_SL.py | qizhi-zhang/Antchain-MPC | f551170f68b0baff328e6594484e9832230fe719 | [
"Apache-2.0"
] | null | null | null | morse-stf/stensorflow/ml/nn/networks/CNN_with_SL.py | qizhi-zhang/Antchain-MPC | f551170f68b0baff328e6594484e9832230fe719 | [
"Apache-2.0"
] | 6 | 2021-11-25T12:38:41.000Z | 2022-02-23T03:29:51.000Z | #!/usr/bin/env python
# coding=utf-8
"""
Ant Group Copyright (c) 2021
All Rights Reserved.
"""
from stensorflow.ml.nn.networks.NN import NN
from stensorflow.ml.nn.layers.input import Input
from stensorflow.ml.nn.layers.relu import *
from stensorflow.ml.nn.layers.conv2d import Conv2dLocal
from stensorflow.ml.nn.layers.pooling import avg_pool2d, sum_pool2d_grad
from stensorflow.ml.nn.layers.flatten import *
from stensorflow.ml.nn.layers.dense import *
from stensorflow.ml.nn.layers.loss import *
from stensorflow.random import random
class LocalCNN(NN):
"""
A CNN with only one convolutional layer and one pooling layer.
"""
def __init__(self, feature: PrivateTensor, label: Union[PrivateTensor, SharedPair], loss=None):
super(LocalCNN, self).__init__()
# input layer, init data;
# dim is set to the input width here; it is not used afterwards and only exists to fit the existing template
layer = Input(dim=28, x=feature)
local_layer_owner = layer.owner
self.addLayer(ly=layer)
# convolutional layer with 1 input channel, 16 output channels and a 5×5 filter
layer = Conv2dLocal(output_dim=None, fathers=[layer], filters=16,
kernel_size=5, input_shape=[28, 28, 1], owner=local_layer_owner)
self.addLayer(layer)
# Relu Layer
layer = ReLU_Local(output_dim=layer.output_dim, fathers=[layer], owner=local_layer_owner)
self.addLayer(layer)
# Average pool
layer = AveragePooling2DLocal(output_dim=None, fathers=[layer], pool_size=(2, 2), owner=local_layer_owner)
self.addLayer(layer)
# flatten data, only consider data_format = "NWHC"
# a correct output_dim must be provided here for the fully-connected layers that follow
layer = FlattenLocal(output_dim=None, fathers=[layer], owner=local_layer_owner)
self.addLayer(layer)
# fully-connected layers
# optionally insert a 2304 x 100 linear layer here:
# Dlayer = Dense_Local(output_dim=100, fathers=[layer], owner=local_layer_owner)
# self.addLayer(Dlayer)
# add a ReLU layer
# Relu Layer
# layer = ReLU_Local(output_dim=100, fathers=[Dlayer], owner=local_layer_owner)
# self.addLayer(layer)
# a 2304 x 10 linear layer (or 100 x 10 if the optional dense layer above is enabled)
layer = Dense_Local(output_dim=10, fathers=[layer], owner=local_layer_owner)
self.addLayer(layer)
# output layer
layer_label = Input(dim=10, x=label)
self.addLayer(ly=layer_label)
# loss computation
layer_loss = CrossEntropyLossWithSoftmaxLocal(layer_score=layer, layer_label=layer_label, owner=local_layer_owner)
self.addLayer(ly=layer_loss)
def predict(self, x, out_prob=True):
self.cut_off()
# input layer
l_input = self.layers[0]
assert isinstance(l_input, Input)
l_input.replace(x)
self.layers[0] = l_input
# output layer
ly = self.layers[-1]
if not isinstance(ly, Layer):
raise Exception("l must be a Layer")
else:
ly.forward()
if out_prob:
return ly.y
else:
return ly.score
def predict_to_file(self, sess, x, predict_file_name,
pred_batch_num, model_file_machine,
record_num_ceil_mod_batch_size,
with_sigmoid):
y_pred = self.predict(x=x, out_prob=with_sigmoid)
id_y_pred = y_pred.to_tf_str(owner=model_file_machine)
random.random_init(sess)
# write predictions to file batch by batch
with open(predict_file_name, "w") as f:
for batch in range(pred_batch_num):
records = sess.run(id_y_pred)
records = "\n".join(records.astype('str'))
# records.to_file()
f.write(records + "\n")
def replace_weight(self, keras_weight):
i = 0
for ly in self.layers:
if isinstance(ly, Conv2dLocal):
# to predict directly with the passed-in weights:
# kernel = PrivateTensor(owner=ly.owner)
# kernel.load_from_numpy(keras_weight[i])
# ly.w[0] = kernel
# use the passed-in weights as the initial values for training
kernel = ly.w[0]
kernel.load_from_numpy(keras_weight[i])
i += 1
if isinstance(ly, Dense_Local):
# to predict directly with the passed-in weights:
# kernel1 = PrivateTensor(owner=ly.owner)
# kernel1.load_from_numpy(keras_weight[i])
# ly.w[0] = kernel1
# kernel2 = PrivateTensor(owner=ly.owner)
# kernel2.load_from_numpy(keras_weight[i+1])
# ly.w[1] = kernel2
# use the passed-in weights as the initial values for training
kernel1 = ly.w[0]
kernel2 = ly.w[1]
kernel1.load_from_numpy(keras_weight[i])
kernel2.load_from_numpy(keras_weight[i+1])
i += 2
def save_model(self, sess, save_file_path, model_file_machine):
res = []
for ly in self.layers:
if isinstance(ly, Dense_Local) or isinstance(ly, Conv2dLocal):
for weight in ly.w:
weight = weight.to_tf_tensor(owner=model_file_machine)
weight = sess.run(weight)
res.append(weight)
res = np.array(res)
np.savez(save_file_path, weight=res)
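# Note on replace_weight (here and in the classes below): the index arithmetic
# assumes keras_weight is ordered the way keras Model.get_weights() returns the
# weights of the equivalent network -- one kernel per Conv2dLocal (no bias),
# then kernel followed by bias for each Dense_Local. This ordering is inferred
# from the code, not a documented contract.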
class LocalNetworkB(NN):
"""
A larger network with two convolutional layers and two pooling layers.
"""
def __init__(self, feature: PrivateTensor, label: Union[PrivateTensor, SharedPair], loss=None):
super(LocalNetworkB, self).__init__()
# dim is set to the input width here; it is not used afterwards and only exists to fit the existing template
layer = Input(dim=28, x=feature)
local_layer_owner = layer.owner
self.addLayer(layer)
# convolutional layer with 1 input channel, 16 output channels and a 5×5 filter
layer = Conv2dLocal(output_dim=None, fathers=[layer], filters=16,
kernel_size=5, input_shape=[28, 28, 1], owner=local_layer_owner)
self.addLayer(layer)
# Relu Layer
layer = ReLU_Local(output_dim=layer.output_dim, fathers=[layer], owner=local_layer_owner)
self.addLayer(layer)
# Average pool
layer = AveragePooling2DLocal(output_dim=None, fathers=[layer],
pool_size=(2, 2), owner=local_layer_owner)
self.addLayer(layer)
# 16 input channels, 16 output channels and another 5×5 filter
layer = Conv2dLocal(output_dim=None, fathers=[layer], filters=16,
kernel_size=5, input_shape=layer.output_dim, owner=local_layer_owner)
self.addLayer(layer)
# Relu Layer
layer = ReLU_Local(output_dim=layer.output_dim, fathers=[layer], owner=local_layer_owner)
self.addLayer(layer)
# Average pool
layer = AveragePooling2DLocal(output_dim=None, fathers=[layer], pool_size=(2, 2), owner=local_layer_owner)
self.addLayer(layer)
# flatten data, only consider data_format = "NWHC"
# a correct output_dim must be provided here for the fully-connected layers that follow
layer = FlattenLocal(output_dim=None, fathers=[layer], owner=local_layer_owner)
self.addLayer(layer)
# fully-connected layers
# 256×100 fully-connected layer
layer = Dense_Local(output_dim=100, fathers=[layer], owner=local_layer_owner)
self.addLayer(layer)
# ReLU layer; a correct output_dim must be provided
layer = ReLU_Local(output_dim=100, fathers=[layer], owner=local_layer_owner)
self.addLayer(layer)
# a 100 × 10 linear layer
layer = Dense_Local(output_dim=10, fathers=[layer], owner=local_layer_owner)
self.addLayer(layer)
# output layer
layer_label = Input(dim=10, x=label)
self.addLayer(ly=layer_label)
# loss computation
layer_loss = CrossEntropyLossWithSoftmaxLocal(layer_score=layer, layer_label=layer_label,
owner=local_layer_owner)
self.addLayer(ly=layer_loss)
def predict(self, x, out_prob=True):
self.cut_off()
# input layer
l_input = self.layers[0]
assert isinstance(l_input, Input)
l_input.replace(x)
self.layers[0] = l_input
# output layer
ly = self.layers[-1]
if not isinstance(ly, Layer):
raise Exception("l must be a Layer")
else:
ly.forward()
if out_prob:
return ly.y
else:
return ly.score
def predict_to_file(self, sess, x, predict_file_name,
pred_batch_num, model_file_machine,
record_num_ceil_mod_batch_size,
with_sigmoid):
y_pred = self.predict(x=x, out_prob=with_sigmoid)
id_y_pred = y_pred.to_tf_str(owner=model_file_machine)
random.random_init(sess)
with open(predict_file_name, "w") as f:
for batch in range(pred_batch_num):
records = sess.run(id_y_pred)
records = "\n".join(records.astype('str'))
# records.to_file()
f.write(records + "\n")
def save_model(self, sess, save_file_path, model_file_machine):
res = []
for ly in self.layers:
if isinstance(ly, Dense_Local) or isinstance(ly, Conv2dLocal):
for weight in ly.w:
weight = weight.to_tf_tensor(owner=model_file_machine)
weight = sess.run(weight)
res.append(weight)
res = np.array(res)
np.savez(save_file_path, weight=res)
def replace_weight(self, keras_weight):
i = 0
for ly in self.layers:
if isinstance(ly, Conv2dLocal):
# to predict directly with the passed-in weights:
# kernel = PrivateTensor(owner=ly.owner)
# kernel.load_from_numpy(keras_weight[i])
# ly.w[0] = kernel
# use the passed-in weights as the initial values for training
kernel = ly.w[0]
kernel.load_from_numpy(keras_weight[i])
i += 1
if isinstance(ly, Dense_Local):
# to predict directly with the passed-in weights:
# kernel1 = PrivateTensor(owner=ly.owner)
# kernel1.load_from_numpy(keras_weight[i])
# ly.w[0] = kernel1
# kernel2 = PrivateTensor(owner=ly.owner)
# kernel2.load_from_numpy(keras_weight[i+1])
# ly.w[1] = kernel2
# use the passed-in weights as the initial values for training
kernel1 = ly.w[0]
kernel2 = ly.w[1]
kernel1.load_from_numpy(keras_weight[i])
kernel2.load_from_numpy(keras_weight[i+1])
i += 2
class LocalNetworkC(NN):
"""
A larger network with two convolutional layers and two pooling layers.
"""
def __init__(self, feature: PrivateTensor, label: Union[PrivateTensor, SharedPair], loss=None):
super(LocalNetworkC, self).__init__()
# dim is set to the input width here; it is not used afterwards and only exists to fit the existing template
layer = Input(dim=28, x=feature)
local_layer_owner = layer.owner
self.addLayer(layer)
# convolutional layer with 1 input channel, 20 output channels and a 5×5 filter
layer = Conv2dLocal(output_dim=None, fathers=[layer], filters=20,
kernel_size=5, input_shape=[28, 28, 1], owner=local_layer_owner)
self.addLayer(layer)
# Relu Layer
layer = ReLU_Local(output_dim=layer.output_dim, fathers=[layer], owner=local_layer_owner)
self.addLayer(layer)
# Average pool
layer = AveragePooling2DLocal(output_dim=None, fathers=[layer],
pool_size=(2, 2), owner=local_layer_owner)
self.addLayer(layer)
# 20 input channels, 50 output channels and another 5×5 filter
layer = Conv2dLocal(output_dim=None, fathers=[layer], filters=50,
kernel_size=5, input_shape=layer.output_dim, owner=local_layer_owner)
self.addLayer(layer)
# Relu Layer
layer = ReLU_Local(output_dim=layer.output_dim, fathers=[layer], owner=local_layer_owner)
self.addLayer(layer)
# Average pool
layer = AveragePooling2DLocal(output_dim=None, fathers=[layer], pool_size=(2, 2), owner=local_layer_owner)
self.addLayer(layer)
# flatten data, only consider data_format = "NWHC"
# a correct output_dim must be provided here for the fully-connected layers that follow
layer = FlattenLocal(output_dim=None, fathers=[layer], owner=local_layer_owner)
self.addLayer(layer)
# fully-connected layers
# 800x500 fully-connected layer
layer = Dense_Local(output_dim=500, fathers=[layer], owner=local_layer_owner)
self.addLayer(layer)
# a 500 × 10 linear layer
layer = Dense_Local(output_dim=10, fathers=[layer], owner=local_layer_owner)
self.addLayer(layer)
# output layer
layer_label = Input(dim=10, x=label)
self.addLayer(ly=layer_label)
# loss computation
layer_loss = CrossEntropyLossWithSoftmaxLocal(layer_score=layer, layer_label=layer_label,
owner=local_layer_owner)
self.addLayer(ly=layer_loss)
def predict(self, x, out_prob=True):
self.cut_off()
# input layer
l_input = self.layers[0]
assert isinstance(l_input, Input)
l_input.replace(x)
self.layers[0] = l_input
# output layer
ly = self.layers[-1]
if not isinstance(ly, Layer):
raise Exception("l must be a Layer")
else:
ly.forward()
if out_prob:
return ly.y
else:
return ly.score
def predict_to_file(self, sess, x, predict_file_name,
pred_batch_num, model_file_machine,
with_sigmoid):
y_pred = self.predict(x=x, out_prob=with_sigmoid)
id_y_pred = y_pred.to_tf_str(owner=model_file_machine)
random.random_init(sess)
with open(predict_file_name, "w") as f:
for batch in range(pred_batch_num):
records = sess.run(id_y_pred)
records = "\n".join(records.astype('str'))
# records.to_file()
f.write(records + "\n")
def save_model(self, sess, save_file_path, model_file_machine):
res = []
for ly in self.layers:
if isinstance(ly, Dense_Local) or isinstance(ly, Conv2dLocal):
for weight in ly.w:
weight = weight.to_tf_tensor(owner=model_file_machine)
weight = sess.run(weight)
res.append(weight)
res = np.array(res)
np.savez(save_file_path, weight=res)
def replace_weight(self, keras_weight):
i = 0
for ly in self.layers:
if isinstance(ly, Conv2dLocal):
# to predict directly with the passed-in weights:
# kernel = PrivateTensor(owner=ly.owner)
# kernel.load_from_numpy(keras_weight[i])
# ly.w[0] = kernel
# use the passed-in weights as the initial values for training
kernel = ly.w[0]
kernel.load_from_numpy(keras_weight[i])
i += 1
if isinstance(ly, Dense_Local):
# to predict directly with the passed-in weights:
# kernel1 = PrivateTensor(owner=ly.owner)
# kernel1.load_from_numpy(keras_weight[i])
# ly.w[0] = kernel1
# kernel2 = PrivateTensor(owner=ly.owner)
# kernel2.load_from_numpy(keras_weight[i+1])
# ly.w[1] = kernel2
# use the passed-in weights as the initial values for training
kernel1 = ly.w[0]
kernel2 = ly.w[1]
kernel1.load_from_numpy(keras_weight[i])
kernel2.load_from_numpy(keras_weight[i+1])
i += 2
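if __name__ == "__main__":
    # Minimal smoke-test sketch, not part of the library. The owner name "L1"
    # and the random 28x28x1 batch are assumptions for illustration; a real
    # run also needs the usual stensorflow party/session configuration.
    x = PrivateTensor(owner="L1")
    x.load_from_numpy(np.random.rand(2, 28, 28, 1))
    y = PrivateTensor(owner="L1")
    y.load_from_numpy(np.eye(10)[np.random.randint(0, 10, size=2)])
    model = LocalCNN(feature=x, label=y)
    print("built LocalCNN with", len(model.layers), "layers")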
| 40.711286 | 122 | 0.584682 | 1,862 | 15,511 | 4.665951 | 0.098281 | 0.057551 | 0.055249 | 0.081031 | 0.939342 | 0.927831 | 0.916897 | 0.911027 | 0.898941 | 0.897905 | 0 | 0.022441 | 0.319128 | 15,511 | 380 | 123 | 40.818421 | 0.799261 | 0.166205 | 0 | 0.910204 | 0 | 0 | 0.00587 | 0 | 0 | 0 | 0 | 0 | 0.012245 | 1 | 0.061224 | false | 0 | 0.036735 | 0 | 0.134694 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
82eea1707890d549d15803aa8d9404b03e64fcf3 | 3,282 | py | Python | test/inp/T03.py | hryknkgw/pymolwin | 4a1335e90497dbcbfa789f1285a7c1ad84a051f8 | [
"CNRI-Python"
] | 2 | 2019-05-23T22:17:29.000Z | 2020-07-03T14:36:22.000Z | test/inp/T03.py | hryknkgw/pymolwin | 4a1335e90497dbcbfa789f1285a7c1ad84a051f8 | [
"CNRI-Python"
] | null | null | null | test/inp/T03.py | hryknkgw/pymolwin | 4a1335e90497dbcbfa789f1285a7c1ad84a051f8 | [
"CNRI-Python"
] | null | null | null | #
# full blown threading stability test, higher enent rate...
#
from pymol import util
import threading
import time
import random
from pymol import cmd
#cmd.feedback("ena","thread","debug")
cmd.rock()
cmd.load("dat/il2.pdb","obj1")
cmd.hide()
cmd.show("ribbon")
cmd.show("car")
util.ss()
def turns():
while 1:
time.sleep(random.random()*0.05)
cmd.turn('x',random.random()*10-5)
time.sleep(random.random()*0.05)
cmd.turn('y',random.random()*10-5)
time.sleep(random.random()*0.05)
cmd.turn('z',random.random()*10-5)
t = threading.Thread(target=turns)
t.daemon = True
t.start()
def sets():
while 1:
time.sleep(random.random()*0.15)
if random.random()>0.5:
value=1
else:
value=0
cmd.set('cartoon_fancy_helices',str(value))
if random.random()>0.5:
value=1
else:
value=0
cmd.set('cartoon_smooth_loops',str(value))  # was 'cartoon_smooth_loop', which is not a valid setting name
if random.random()>0.5:
value=1
else:
value=0
cmd.set('cartoon_round_helices',str(value))
if random.random()>0.5:
value=1
else:
value=0
cmd.set('cartoon_smooth_loops',str(value))
if random.random()>0.5:
value=1
else:
value=0
cmd.set('cartoon_flat_sheets',str(value))
t = threading.Thread(target=sets)
t.daemon = True
t.start()
def carts():
while 1:
resi = int(random.random()*150)
cmd.cartoon('loop',"(resi %d)"%resi)
time.sleep(random.random()*0.05)
resi = int(random.random()*150)
cmd.cartoon('oval',"(resi %d)"%resi)
cmd.cartoon('oval',"(resi %d)"%(resi+1))
time.sleep(random.random()*0.05)
resi = int(random.random()*150)
cmd.cartoon('auto',"(resi %d)"%resi)
cmd.cartoon('auto',"(resi %d)"%(resi+1))
time.sleep(random.random()*0.05)
resi = int(random.random()*150)
cmd.cartoon('tube',"(resi %d)"%resi)
cmd.cartoon('tube',"(resi %d)"%(resi+1))
time.sleep(random.random()*0.05)
resi = int(random.random()*150)
cmd.cartoon('rect',"(resi %d)"%resi)
cmd.cartoon('rect',"(resi %d)"%(resi+1))
resi = int(random.random()*150)
cmd.cartoon('oval',"(resi %d)"%resi)
time.sleep(random.random()*0.05)
resi = int(random.random()*150)
cmd.cartoon('auto',"(resi %d)"%resi)
time.sleep(random.random()*0.05)
resi = int(random.random()*150)
cmd.cartoon('tube',"(resi %d)"%resi)
time.sleep(random.random()*0.05)
resi = int(random.random()*150)
cmd.cartoon('rect',"(resi %d)"%resi)
time.sleep(random.random()*0.05)
resi = int(random.random()*150)
cmd.hide('car',"(resi %d)"%resi)
time.sleep(random.random()*0.05)
resi = int(random.random()*150)
cmd.show('car',"(resi %d)"%resi)
time.sleep(random.random()*0.05)
resi = int(random.random()*150)
cmd.show('car',"(resi %d)"%resi)
time.sleep(random.random()*0.05)
resi = int(random.random()*150)
cmd.show('car',"(resi %d)"%resi)
time.sleep(random.random()*0.05)
resi = int(random.random()*150)
cmd.show('car',"(resi %d)"%resi)
time.sleep(random.random()*0.05)
t = threading.Thread(target=carts)
t.daemon = True
t.start()
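# Note: all three threads are daemons, so they run only while the PyMOL
# session stays alive. If this script is ever run headless (e.g. with
# "pymol -cq", an assumed invocation), the main thread needs a keep-alive:
#     while True:
#         time.sleep(1)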
| 28.051282 | 59 | 0.576782 | 477 | 3,282 | 3.947589 | 0.150943 | 0.24854 | 0.151885 | 0.189591 | 0.80085 | 0.791822 | 0.736059 | 0.718003 | 0.683484 | 0.683484 | 0 | 0.052221 | 0.21816 | 3,282 | 116 | 60 | 28.293103 | 0.681606 | 0.028336 | 0 | 0.68932 | 0 | 0 | 0.111879 | 0.013199 | 0 | 0 | 0 | 0 | 0 | 1 | 0.029126 | false | 0 | 0.048544 | 0 | 0.07767 | 0 | 0 | 0 | 0 | null | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
82fa0079e67e0c3b7392e7424c5fbdf4c131c15c | 8,050 | py | Python | monk/gluon/losses/losses.py | abhi-kumar/monk_kaggle_bengali_ai | 12a6c654446e887706c1a8bed82fccf8a98ce356 | [
"Apache-2.0"
] | null | null | null | monk/gluon/losses/losses.py | abhi-kumar/monk_kaggle_bengali_ai | 12a6c654446e887706c1a8bed82fccf8a98ce356 | [
"Apache-2.0"
] | 9 | 2020-01-28T21:40:39.000Z | 2022-02-10T01:24:06.000Z | monk/gluon/losses/losses.py | abhi-kumar/monk_kaggle_bengali_ai | 12a6c654446e887706c1a8bed82fccf8a98ce356 | [
"Apache-2.0"
] | null | null | null | from gluon.losses.imports import *
from system.imports import *
@accepts(dict, weight=[list, type(np.array([1, 2, 3])), float, type(None)], batch_axis=int, post_trace=True)
@TraceFunction(trace_args=False, trace_rv=False)
def l1(system_dict, weight=None, batch_axis=0):
system_dict["local"]["criterion"] = "l1";
system_dict["hyper-parameters"]["loss"]["name"] = "l1";
system_dict["hyper-parameters"]["loss"]["params"]["weight"] = weight;
system_dict["hyper-parameters"]["loss"]["params"]["batch_axis"] = batch_axis;
system_dict["hyper-parameters"]["status"] = True;
return system_dict;
@accepts(dict, weight=[list, type(np.array([1, 2, 3])), float, type(None)], batch_axis=int, post_trace=True)
@TraceFunction(trace_args=False, trace_rv=False)
def l2(system_dict, weight=1.0, batch_axis=0):
system_dict["local"]["criterion"] = "l2";
system_dict["hyper-parameters"]["loss"]["name"] = "l2";
system_dict["hyper-parameters"]["loss"]["params"]["weight"] = weight;
system_dict["hyper-parameters"]["loss"]["params"]["batch_axis"] = batch_axis;
system_dict["hyper-parameters"]["status"] = True;
return system_dict;
@accepts(dict, weight=[list, type(np.array([1, 2, 3])), float, type(None)], batch_axis=int,
axis_to_sum_over=int, label_as_categories=bool, label_smoothing=bool, post_trace=True)
@TraceFunction(trace_args=False, trace_rv=False)
def softmax_crossentropy(system_dict, weight=None, batch_axis=0, axis_to_sum_over=-1,
label_as_categories=True, label_smoothing=False):
system_dict["local"]["criterion"] = "softmaxcrossentropy";
system_dict["hyper-parameters"]["loss"]["name"] = "softmaxcrossentropy";
system_dict["hyper-parameters"]["loss"]["params"]["weight"] = weight;
system_dict["hyper-parameters"]["loss"]["params"]["batch_axis"] = batch_axis;
system_dict["hyper-parameters"]["loss"]["params"]["axis_to_sum_over"] = axis_to_sum_over;
system_dict["hyper-parameters"]["loss"]["params"]["label_as_categories"] = label_as_categories;
system_dict["hyper-parameters"]["loss"]["params"]["label_smoothing"] = label_smoothing;
system_dict["hyper-parameters"]["status"] = True;
return system_dict;
@accepts(dict, weight=[list, type(np.array([1, 2, 3])), float, type(None)], batch_axis=int,
axis_to_sum_over=int, label_as_categories=bool, label_smoothing=bool, post_trace=True)
@TraceFunction(trace_args=False, trace_rv=False)
def crossentropy(system_dict, weight=None, batch_axis=0, axis_to_sum_over=-1,
label_as_categories=True, label_smoothing=False):
system_dict["local"]["criterion"] = "crossentropy";
system_dict["hyper-parameters"]["loss"]["name"] = "crossentropy";
system_dict["hyper-parameters"]["loss"]["params"]["weight"] = weight;
system_dict["hyper-parameters"]["loss"]["params"]["batch_axis"] = batch_axis;
system_dict["hyper-parameters"]["loss"]["params"]["axis_to_sum_over"] = axis_to_sum_over;
system_dict["hyper-parameters"]["loss"]["params"]["label_as_categories"] = label_as_categories;
system_dict["hyper-parameters"]["loss"]["params"]["label_smoothing"] = label_smoothing;
system_dict["hyper-parameters"]["status"] = True;
return system_dict;
@accepts(dict, weight=[list, type(np.array([1, 2, 3])), float, type(None)], batch_axis=int, post_trace=True)
@TraceFunction(trace_args=False, trace_rv=False)
def sigmoid_binary_crossentropy(system_dict, weight=None, batch_axis=0):
system_dict["local"]["criterion"] = "sigmoidbinarycrossentropy";
system_dict["hyper-parameters"]["loss"]["name"] = "sigmoidbinarycrossentropy";
system_dict["hyper-parameters"]["loss"]["params"]["weight"] = weight;
system_dict["hyper-parameters"]["loss"]["params"]["batch_axis"] = batch_axis;
system_dict["hyper-parameters"]["status"] = True;
return system_dict;
@accepts(dict, weight=[list, type(np.array([1, 2, 3])), float, type(None)], batch_axis=int, post_trace=True)
@TraceFunction(trace_args=False, trace_rv=False)
def binary_crossentropy(system_dict, weight=None, batch_axis=0):
system_dict["local"]["criterion"] = "binarycrossentropy";
system_dict["hyper-parameters"]["loss"]["name"] = "binarycrossentropy";
system_dict["hyper-parameters"]["loss"]["params"]["weight"] = weight;
system_dict["hyper-parameters"]["loss"]["params"]["batch_axis"] = batch_axis;
system_dict["hyper-parameters"]["status"] = True;
return system_dict;
@accepts(dict, log_pre_applied=bool, weight=[list, type(np.array([1, 2, 3])), float, type(None)], batch_axis=int,
axis_to_sum_over=int, post_trace=True)
@TraceFunction(trace_args=False, trace_rv=False)
def kldiv(system_dict, log_pre_applied=False, weight=None, batch_axis=0, axis_to_sum_over=-1):
system_dict["local"]["criterion"] = "kldiv";
system_dict["hyper-parameters"]["loss"]["name"] = "kldiv";
system_dict["hyper-parameters"]["loss"]["params"]["log_pre_applied"] = log_pre_applied;
system_dict["hyper-parameters"]["loss"]["params"]["weight"] = weight;
system_dict["hyper-parameters"]["loss"]["params"]["batch_axis"] = batch_axis;
system_dict["hyper-parameters"]["loss"]["params"]["axis_to_sum_over"] = axis_to_sum_over;
system_dict["hyper-parameters"]["status"] = True;
return system_dict;
@accepts(dict, log_pre_applied=bool, weight=[list, type(np.array([1, 2, 3])), float, type(None)], batch_axis=int, post_trace=True)
@TraceFunction(trace_args=False, trace_rv=False)
def poisson_nll(system_dict, log_pre_applied=False, weight=None, batch_axis=0):
system_dict["local"]["criterion"] = "poissonnll";
system_dict["hyper-parameters"]["loss"]["name"] = "poissonnll";
system_dict["hyper-parameters"]["loss"]["params"]["log_pre_applied"] = log_pre_applied;
system_dict["hyper-parameters"]["loss"]["params"]["weight"] = weight;
system_dict["hyper-parameters"]["loss"]["params"]["batch_axis"] = batch_axis;
system_dict["hyper-parameters"]["status"] = True;
return system_dict;
@accepts(dict, weight=[list, type(np.array([1, 2, 3])), float, type(None)], batch_axis=int, threshold_for_mean_estimator=int, post_trace=True)
@TraceFunction(trace_args=False, trace_rv=False)
def huber(system_dict, weight=None, batch_axis=0, threshold_for_mean_estimator=1):
system_dict["local"]["criterion"] = "huber";
system_dict["hyper-parameters"]["loss"]["name"] = "huber";
system_dict["hyper-parameters"]["loss"]["params"]["threshold_for_mean_estimator"] = threshold_for_mean_estimator;
system_dict["hyper-parameters"]["loss"]["params"]["weight"] = weight;
system_dict["hyper-parameters"]["loss"]["params"]["batch_axis"] = batch_axis;
system_dict["hyper-parameters"]["status"] = True;
return system_dict;
@accepts(dict, weight=[list, type(np.array([1, 2, 3])), float, type(None)], batch_axis=int, margin=int, post_trace=True)
@TraceFunction(trace_args=False, trace_rv=False)
def hinge(system_dict, weight=None, batch_axis=0, margin=1):
system_dict["local"]["criterion"] = "hinge";
system_dict["hyper-parameters"]["loss"]["name"] = "hinge";
system_dict["hyper-parameters"]["loss"]["params"]["margin"] = margin;
system_dict["hyper-parameters"]["loss"]["params"]["weight"] = weight;
system_dict["hyper-parameters"]["loss"]["params"]["batch_axis"] = batch_axis;
system_dict["hyper-parameters"]["status"] = True;
return system_dict;
@accepts(dict, weight=[list, type(np.array([1, 2, 3])), float, type(None)], batch_axis=int, margin=int, post_trace=True)
@TraceFunction(trace_args=False, trace_rv=False)
def squared_hinge(system_dict, weight=None, batch_axis=0, margin=1):
system_dict["local"]["criterion"] = "squaredhinge";
system_dict["hyper-parameters"]["loss"]["name"] = "squaredhinge";
system_dict["hyper-parameters"]["loss"]["params"]["margin"] = margin;
system_dict["hyper-parameters"]["loss"]["params"]["weight"] = weight;
system_dict["hyper-parameters"]["loss"]["params"]["batch_axis"] = batch_axis;
system_dict["hyper-parameters"]["status"] = True;
return system_dict;
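if __name__ == "__main__":
    # Minimal usage sketch. The skeleton dict below is an assumption for
    # illustration; in monk it is created and threaded through by the
    # prototype/state machinery before a loss setter is called.
    system_dict = {
        "local": {},
        "hyper-parameters": {"loss": {"params": {}}},
    }
    system_dict = softmax_crossentropy(system_dict, label_smoothing=True)
    print(system_dict["local"]["criterion"])
    print(system_dict["hyper-parameters"]["loss"]["params"])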
| 54.761905 | 142 | 0.706708 | 1,062 | 8,050 | 5.112053 | 0.067797 | 0.163934 | 0.154725 | 0.257874 | 0.960582 | 0.951556 | 0.835881 | 0.824093 | 0.824093 | 0.819672 | 0 | 0.007988 | 0.098012 | 8,050 | 147 | 143 | 54.761905 | 0.739705 | 0 | 0 | 0.661017 | 0 | 0 | 0.265309 | 0.009688 | 0 | 0 | 0 | 0 | 0 | 1 | 0.09322 | false | 0 | 0.016949 | 0 | 0.20339 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
d2402251a55757748d041fea69d8522ba871675c | 11,187 | py | Python | metrics/ops/non_tensor_ops.py | wnov/TC-ResNet | 6924d3118269a0a679d91fadc242897d5a1aa445 | [
"Apache-2.0"
] | 1 | 2020-12-02T06:46:44.000Z | 2020-12-02T06:46:44.000Z | metrics/ops/non_tensor_ops.py | wnov/TC-ResNet | 6924d3118269a0a679d91fadc242897d5a1aa445 | [
"Apache-2.0"
] | null | null | null | metrics/ops/non_tensor_ops.py | wnov/TC-ResNet | 6924d3118269a0a679d91fadc242897d5a1aa445 | [
"Apache-2.0"
] | null | null | null | from sklearn.metrics import accuracy_score
from sklearn.metrics import average_precision_score
from sklearn.metrics import precision_score
from sklearn.metrics import recall_score
from sklearn.metrics import f1_score
from sklearn.metrics import classification_report
from overload import overload
import metrics.parser as parser
from metrics.funcs import topN_accuracy
from metrics.ops.base_ops import NonTensorMetricOpBase
from metrics.summaries import BaseSummaries
class MAPMetricOp(NonTensorMetricOpBase):
"""
Micro Mean Average Precision Metric.
"""
_properties = {
"is_for_summary": True,
"is_for_best_keep": True,
"is_for_log": True,
"valid_input_data_parsers": [
parser.AudioDataParser,
],
"summary_collection_key": BaseSummaries.KEY_TYPES.DEFAULT,
"summary_value_type": BaseSummaries.VALUE_TYPES.PLACEHOLDER,
"min_max_mode": "max",
}
_average_fns = {
"macro": lambda t, p: average_precision_score(t, p, average="macro"),
"micro": lambda t, p: average_precision_score(t, p, average="micro"),
"weighted": lambda t, p: average_precision_score(t, p, average="weighted"),
"samples": lambda t, p: average_precision_score(t, p, average="samples"),
}
def __str__(self):
return "mAP_metric"
@overload
def build_op(self,
data: parser.AudioDataParser.OutputBuildData):
result = dict()
for avg_name in self._average_fns:
key = f"mAP/{data.dataset_split_name}/{avg_name}"
result[key] = None
return result
@overload
def evaluate(self,
data: parser.AudioDataParser.OutputNonTensorData):
result = dict()
for avg_name, avg_fn in self._average_fns.items():
key = f"mAP/{data.dataset_split_name}/{avg_name}"
result[key] = avg_fn(data.labels_onehot, data.predictions_onehot)
return result
class AccuracyMetricOp(NonTensorMetricOpBase):
"""
Accuracy Metric.
"""
_properties = {
"is_for_summary": True,
"is_for_best_keep": True,
"is_for_log": True,
"valid_input_data_parsers": [
parser.AudioDataParser,
],
"summary_collection_key": BaseSummaries.KEY_TYPES.DEFAULT,
"summary_value_type": BaseSummaries.VALUE_TYPES.PLACEHOLDER,
"min_max_mode": "max",
}
def __str__(self):
return "accuracy_metric"
@overload
def build_op(self,
data: parser.AudioDataParser.OutputBuildData):
key = f"accuracy/{data.dataset_split_name}"
return {
key: None
}
@overload
def evaluate(self,
data: parser.AudioDataParser.OutputNonTensorData):
key = f"accuracy/{data.dataset_split_name}"
metric = accuracy_score(data.labels, data.predictions)
return {
key: metric
}
class Top5AccuracyMetricOp(NonTensorMetricOpBase):
"""
Top 5 Accuracy Metric.
"""
_properties = {
"is_for_summary": True,
"is_for_best_keep": True,
"is_for_log": True,
"valid_input_data_parsers": [
parser.AudioDataParser,
],
"summary_collection_key": BaseSummaries.KEY_TYPES.DEFAULT,
"summary_value_type": BaseSummaries.VALUE_TYPES.PLACEHOLDER,
"min_max_mode": "max",
}
def __str__(self):
return "top5_accuracy_metric"
@overload
def build_op(self,
data: parser.AudioDataParser.OutputBuildData):
key = f"top5_accuracy/{data.dataset_split_name}"
return {
key: None
}
@overload
def evaluate(self,
data: parser.AudioDataParser.OutputNonTensorData):
key = f"top5_accuracy/{data.dataset_split_name}"
        metric = topN_accuracy(y_true=data.labels,
                               y_pred_onehot=data.predictions_onehot,
                               N=5)  # top-5 accuracy needs N=5 (was N=1, i.e. top-1)
return {
key: metric
}
class PrecisionMetricOp(NonTensorMetricOpBase):
"""
Precision Metric.
"""
_properties = {
"is_for_summary": True,
"is_for_best_keep": True,
"is_for_log": True,
"valid_input_data_parsers": [
parser.AudioDataParser,
],
"summary_collection_key": BaseSummaries.KEY_TYPES.DEFAULT,
"summary_value_type": BaseSummaries.VALUE_TYPES.PLACEHOLDER,
"min_max_mode": "max",
}
def __str__(self):
return "precision_metric"
@overload
def build_op(self,
data: parser.AudioDataParser.OutputBuildData):
result = dict()
label_idxes = list(range(len(data.label_names)))
for label_idx in label_idxes:
label_name = data.label_names[label_idx]
key = f"precision/{data.dataset_split_name}/{label_name}"
result[key] = None
return result
@overload
def evaluate(self,
data: parser.AudioDataParser.OutputNonTensorData):
result = dict()
label_idxes = list(range(len(data.label_names)))
precisions = precision_score(data.labels, data.predictions, average=None, labels=label_idxes)
for label_idx in label_idxes:
label_name = data.label_names[label_idx]
key = f"precision/{data.dataset_split_name}/{label_name}"
metric = precisions[label_idx]
result[key] = metric
return result
class RecallMetricOp(NonTensorMetricOpBase):
"""
Recall Metric.
"""
_properties = {
"is_for_summary": True,
"is_for_best_keep": True,
"is_for_log": True,
"valid_input_data_parsers": [
parser.AudioDataParser,
],
"summary_collection_key": BaseSummaries.KEY_TYPES.DEFAULT,
"summary_value_type": BaseSummaries.VALUE_TYPES.PLACEHOLDER,
"min_max_mode": "max",
}
def __str__(self):
return "recall_metric"
@overload
def build_op(self,
data: parser.AudioDataParser.OutputBuildData):
result = dict()
label_idxes = list(range(len(data.label_names)))
for label_idx in label_idxes:
label_name = data.label_names[label_idx]
key = f"recall/{data.dataset_split_name}/{label_name}"
result[key] = None
return result
@overload
def evaluate(self,
data: parser.AudioDataParser.OutputNonTensorData):
result = dict()
label_idxes = list(range(len(data.label_names)))
recalls = recall_score(data.labels, data.predictions, average=None, labels=label_idxes)
for label_idx in label_idxes:
label_name = data.label_names[label_idx]
key = f"recall/{data.dataset_split_name}/{label_name}"
metric = recalls[label_idx]
result[key] = metric
return result
class F1ScoreMetricOp(NonTensorMetricOpBase):
"""
Per class F1-Score Metric.
"""
_properties = {
"is_for_summary": True,
"is_for_best_keep": True,
"is_for_log": True,
"valid_input_data_parsers": [
parser.AudioDataParser,
],
"summary_collection_key": BaseSummaries.KEY_TYPES.DEFAULT,
"summary_value_type": BaseSummaries.VALUE_TYPES.PLACEHOLDER,
"min_max_mode": "max",
}
def __str__(self):
return "f1_score_metric"
@overload
def build_op(self,
data: parser.AudioDataParser.OutputBuildData):
result = dict()
label_idxes = list(range(len(data.label_names)))
for label_idx in label_idxes:
label_name = data.label_names[label_idx]
key = f"f1score/{data.dataset_split_name}/{label_name}"
result[key] = None
return result
@overload
def evaluate(self,
data: parser.AudioDataParser.OutputNonTensorData):
result = dict()
label_idxes = list(range(len(data.label_names)))
f1_scores = f1_score(data.labels, data.predictions, average=None, labels=label_idxes)
for label_idx in label_idxes:
label_name = data.label_names[label_idx]
key = f"f1score/{data.dataset_split_name}/{label_name}"
metric = f1_scores[label_idx]
result[key] = metric
return result
class APMetricOp(NonTensorMetricOpBase):
"""
Per class Average Precision Metric.
"""
_properties = {
"is_for_summary": True,
"is_for_best_keep": True,
"is_for_log": True,
"valid_input_data_parsers": [
parser.AudioDataParser,
],
"summary_collection_key": BaseSummaries.KEY_TYPES.DEFAULT,
"summary_value_type": BaseSummaries.VALUE_TYPES.PLACEHOLDER,
"min_max_mode": "max",
}
def __str__(self):
return "ap_score_metric"
@overload
def build_op(self,
data: parser.AudioDataParser.OutputBuildData):
result = dict()
label_idxes = list(range(len(data.label_names)))
for label_idx in label_idxes:
label_name = data.label_names[label_idx]
key = f"ap/{data.dataset_split_name}/{label_name}"
result[key] = None
return result
@overload
def evaluate(self,
data: parser.AudioDataParser.OutputNonTensorData):
result = dict()
label_idxes = list(range(len(data.label_names)))
aps = average_precision_score(data.labels_onehot, data.predictions_onehot, average=None)
for label_idx in label_idxes:
label_name = data.label_names[label_idx]
key = f"ap/{data.dataset_split_name}/{label_name}"
metric = aps[label_idx]
result[key] = metric
return result
class ClassificationReportMetricOp(NonTensorMetricOpBase):
"""
    Classification Report Metric.
"""
_properties = {
"is_for_summary": False,
"is_for_best_keep": False,
"is_for_log": True,
"valid_input_data_parsers": [
parser.AudioDataParser,
],
"summary_collection_key": None,
"summary_value_type": None,
"min_max_mode": None,
}
def __str__(self):
return "classification_report_metric"
@overload
def build_op(self,
data: parser.AudioDataParser.OutputBuildData):
key = f"classification_report/{data.dataset_split_name}"
return {
key: None
}
@overload
def evaluate(self,
data: parser.AudioDataParser.OutputNonTensorData):
key = f"classification_report/{data.dataset_split_name}"
label_idxes = list(range(len(data.label_names)))
metric = classification_report(data.labels,
data.predictions,
labels=label_idxes,
target_names=data.label_names)
metric = f"[ClassificationReport]\n{metric}"
return {
key: metric
}
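# A minimal sketch (assumed setup, not from the original repo) of the shared
# build_op()/evaluate() lifecycle of these ops: build_op() registers summary
# keys with None placeholders, and evaluate() later fills the same keys with
# sklearn scores. FakeEvalData is hypothetical; in practice @overload
# dispatches on parser.AudioDataParser's data types, so a plain namedtuple
# would not match and this is illustration only.
#
# from collections import namedtuple
# FakeEvalData = namedtuple("FakeEvalData", "dataset_split_name labels predictions")
# op = AccuracyMetricOp()
# data = FakeEvalData("valid", labels=[0, 1, 1], predictions=[0, 1, 0])
# # op.evaluate(data) would yield {"accuracy/valid": 0.666...}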
| 28.758355 | 101 | 0.614284 | 1,179 | 11,187 | 5.522477 | 0.094996 | 0.01843 | 0.038704 | 0.071264 | 0.821379 | 0.800338 | 0.775457 | 0.755491 | 0.710644 | 0.687298 | 0 | 0.001889 | 0.290337 | 11,187 | 388 | 102 | 28.832474 | 0.818239 | 0.016895 | 0 | 0.747368 | 0 | 0 | 0.16944 | 0.101866 | 0 | 0 | 0 | 0 | 0 | 1 | 0.084211 | false | 0 | 0.038596 | 0.02807 | 0.266667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
d283dfc444ab2d555be8a93c2c6c8d1c81697346 | 3,015 | py | Python | Files/problem8.py | omnidune/ProjectEuler | 2efc1d64ceae93b16c60b94a1b74783807283fb7 | [
"MIT"
] | null | null | null | Files/problem8.py | omnidune/ProjectEuler | 2efc1d64ceae93b16c60b94a1b74783807283fb7 | [
"MIT"
] | null | null | null | Files/problem8.py | omnidune/ProjectEuler | 2efc1d64ceae93b16c60b94a1b74783807283fb7 | [
"MIT"
] | null | null | null | # The four adjacent digits in the 1000-digit number that have the greatest product are 9 × 9 × 8 × 9 = 5832.
# 73167176531330624919225119674426574742355349194934
# 96983520312774506326239578318016984801869478851843
# 85861560789112949495459501737958331952853208805511
# 12540698747158523863050715693290963295227443043557
# 66896648950445244523161731856403098711121722383113
# 62229893423380308135336276614282806444486645238749
# 30358907296290491560440772390713810515859307960866
# 70172427121883998797908792274921901699720888093776
# 65727333001053367881220235421809751254540594752243
# 52584907711670556013604839586446706324415722155397
# 53697817977846174064955149290862569321978468622482
# 83972241375657056057490261407972968652414535100474
# 82166370484403199890008895243450658541227588666881
# 16427171479924442928230863465674813919123162824586
# 17866458359124566529476545682848912883142607690042
# 24219022671055626321111109370544217506941658960408
# 07198403850962455444362981230987879927244284909188
# 84580156166097919133875499200524063689912560717606
# 05886116467109405077541002256983155200055935729725
# 71636269561882670428252483600823257530420752963450
# Find the thirteen adjacent digits (or, more generally, n adjacent digits) in the 1000-digit number that have the greatest product. What is the value of this product?
numstr = '''
73167176531330624919225119674426574742355349194934
96983520312774506326239578318016984801869478851843
85861560789112949495459501737958331952853208805511
12540698747158523863050715693290963295227443043557
66896648950445244523161731856403098711121722383113
62229893423380308135336276614282806444486645238749
30358907296290491560440772390713810515859307960866
70172427121883998797908792274921901699720888093776
65727333001053367881220235421809751254540594752243
52584907711670556013604839586446706324415722155397
53697817977846174064955149290862569321978468622482
83972241375657056057490261407972968652414535100474
82166370484403199890008895243450658541227588666881
16427171479924442928230863465674813919123162824586
17866458359124566529476545682848912883142607690042
24219022671055626321111109370544217506941658960408
07198403850962455444362981230987879927244284909188
84580156166097919133875499200524063689912560717606
05886116467109405077541002256983155200055935729725
71636269561882670428252483600823257530420752963450
'''
numstr = numstr.strip().replace('\n', '')
# string sanitization: strip surrounding whitespace and join the lines
def adjpro(n):
# returns a tuple of the largest multiplied number
# from n adjacent characters and the set itself
    setnum = len(numstr) - n + 1
    # total possible sets (kept for reference; not used below)
multipliedNumber = 1
j = 0
multilis = []
while j <= len(numstr)-n:
for i in numstr[j:j+n]:
multipliedNumber = multipliedNumber * int(i)
multilis.append((multipliedNumber, numstr[j:j+n]))
multipliedNumber = 1
j += 1
multilis.sort(reverse=True)
return multilis[0]
print(adjpro(13))
# a small personal discovery :
# Largest number series containing not a single zero
# is 69 characters long,
# ... nice!
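# A quick self-contained check of the claim above (assuming "number series
# containing not a single zero" means the longest run of consecutive non-zero
# digits in numstr):
longest_zero_free = max(len(run) for run in numstr.split('0'))
print('longest zero-free run:', longest_zero_free)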
| 37.222222 | 134 | 0.877612 | 181 | 3,015 | 14.635359 | 0.524862 | 0.01057 | 0.006795 | 0.01057 | 0.808607 | 0.789732 | 0.789732 | 0.789732 | 0.789732 | 0.789732 | 0 | 0.732996 | 0.08325 | 3,015 | 80 | 135 | 37.6875 | 0.224313 | 0.501161 | 0 | 0.054054 | 0 | 0 | 0.694973 | 0.679348 | 0 | 0 | 0 | 0 | 0 | 1 | 0.027027 | false | 0 | 0 | 0 | 0.054054 | 0.027027 | 0 | 0 | 1 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 11 |
96680cafdeeac64c57ce0dff25c2b94d80698943 | 5,652 | py | Python | pandarallel/series.py | sagarkar10/pandarallel | 48e14a3c9011e8a19440abe0a49192982d485b8e | [
"BSD-3-Clause"
] | null | null | null | pandarallel/series.py | sagarkar10/pandarallel | 48e14a3c9011e8a19440abe0a49192982d485b8e | [
"BSD-3-Clause"
] | null | null | null | pandarallel/series.py | sagarkar10/pandarallel | 48e14a3c9011e8a19440abe0a49192982d485b8e | [
"BSD-3-Clause"
] | null | null | null | from time import time
from ctypes import c_uint64, c_double
from multiprocessing import Manager
import pyarrow.plasma as plasma
import pandas as pd
from pathos.multiprocessing import ProcessingPool
from .utils import (parallel, chunk, ProgressBarsConsole,
ProgressBarsNotebookLab)
REFRESH_PROGRESS_TIME = 0.25  # seconds between two progress updates pushed by a worker
class Series:
@staticmethod
def worker_map(worker_args):
(plasma_store_name, object_id, chunk, func, progress_bar, queue, index,
kwargs) = worker_args
client = plasma.connect(plasma_store_name)
series = client.get(object_id)
counter = c_uint64(0)
last_push_time = c_double(time())
def with_progress(func):
def decorator(*args, **kwargs):
counter.value += 1
cur_time = time()
if cur_time - last_push_time.value >= REFRESH_PROGRESS_TIME:
queue.put_nowait((index, counter.value, False))
last_push_time.value = cur_time
return func(*args, **kwargs)
return decorator
func_to_apply = with_progress(func) if progress_bar else func
res = series[chunk].map(func_to_apply, **kwargs)
if progress_bar:
queue.put((index, counter.value, True))
return client.put(res)
@staticmethod
def map(plasma_store_name, nb_workers, plasma_client,
display_progress_bar, in_notebook_lab):
@parallel(plasma_client)
def closure(data, func, **kwargs):
pool = ProcessingPool(nb_workers)
manager = Manager()
queue = manager.Queue()
ProgressBars = (ProgressBarsNotebookLab if in_notebook_lab
else ProgressBarsConsole)
chunks = chunk(data.size, nb_workers)
maxs = [chunk.stop - chunk.start for chunk in chunks]
values = [0] * nb_workers
finished = [False] * nb_workers
if display_progress_bar:
progress_bar = ProgressBars(maxs)
object_id = plasma_client.put(data)
workers_args = [(plasma_store_name, object_id, chunk, func,
display_progress_bar, queue, index, kwargs)
for index, chunk in enumerate(chunks)]
result_workers = pool.amap(Series.worker_map, workers_args)
if display_progress_bar:
while not all(finished):
for _ in range(finished.count(False)):
index, value, status = queue.get()
values[index] = value
finished[index] = status
progress_bar.update(values)
result = pd.concat([
plasma_client.get(result_worker)
for result_worker in result_workers.get()
], copy=False)
return result
return closure
@staticmethod
def worker_apply(worker_args):
(plasma_store_name, object_id, chunk, func, progress_bar, queue, index,
args, kwargs) = worker_args
client = plasma.connect(plasma_store_name)
series = client.get(object_id)
counter = c_uint64(0)
last_push_time = c_double(time())
def with_progress(func):
def decorator(*args, **kwargs):
counter.value += 1
cur_time = time()
if cur_time - last_push_time.value >= REFRESH_PROGRESS_TIME:
queue.put_nowait((index, counter.value, False))
last_push_time.value = cur_time
return func(*args, **kwargs)
return decorator
func_to_apply = with_progress(func) if progress_bar else func
res = series[chunk].apply(func_to_apply, *args, **kwargs)
if progress_bar:
queue.put((index, counter.value, True))
return client.put(res)
@staticmethod
def apply(plasma_store_name, nb_workers, plasma_client,
display_progress_bar, in_notebook_lab):
@parallel(plasma_client)
def closure(series, func, *args, **kwargs):
pool = ProcessingPool(nb_workers)
manager = Manager()
queue = manager.Queue()
ProgressBars = (ProgressBarsNotebookLab if in_notebook_lab
else ProgressBarsConsole)
chunks = chunk(series.size, nb_workers)
maxs = [chunk.stop - chunk.start for chunk in chunks]
values = [0] * nb_workers
finished = [False] * nb_workers
if display_progress_bar:
progress_bar = ProgressBars(maxs)
object_id = plasma_client.put(series)
workers_args = [(plasma_store_name, object_id, chunk, func,
display_progress_bar, queue, index,
args, kwargs)
for index, chunk in enumerate(chunks)]
result_workers = pool.amap(Series.worker_apply, workers_args)
if display_progress_bar:
while not all(finished):
for _ in range(finished.count(False)):
index, value, status = queue.get()
values[index] = value
finished[index] = status
progress_bar.update(values)
result = pd.concat([
plasma_client.get(result_worker)
for result_worker in result_workers.get()
], copy=False)
return result
return closure
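# A self-contained sketch (not part of pandarallel's API) of the throttled
# progress pattern used inside worker_map/worker_apply above: a ctypes counter
# plus a timestamp gate, so a worker pushes to the queue at most once every
# REFRESH_PROGRESS_TIME seconds instead of once per processed row.
def _throttled_progress_demo(func, queue, index, refresh=REFRESH_PROGRESS_TIME):
    counter = c_uint64(0)              # rows processed so far
    last_push_time = c_double(time())  # timestamp of the last queue push
    def decorator(*args, **kwargs):
        counter.value += 1
        cur_time = time()
        if cur_time - last_push_time.value >= refresh:
            queue.put_nowait((index, counter.value, False))  # False: not finished yet
            last_push_time.value = cur_time
        return func(*args, **kwargs)
    return decorator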
| 32.482759 | 79 | 0.572187 | 596 | 5,652 | 5.187919 | 0.15604 | 0.064036 | 0.03881 | 0.02458 | 0.866106 | 0.866106 | 0.863519 | 0.863519 | 0.863519 | 0.863519 | 0 | 0.004086 | 0.350495 | 5,652 | 173 | 80 | 32.67052 | 0.838191 | 0.000177 | 0 | 0.764228 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.081301 | false | 0 | 0.056911 | 0 | 0.227642 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
738b0356812ec66f6ca95a51bcba0c3696976a5c | 13,917 | py | Python | lib/smisk/test/core/url.py | rsms/smisk | f12a5606dfff49a15fa91448ff36652d60add4c0 | [
"MIT"
] | 4 | 2015-11-05T11:51:12.000Z | 2020-12-30T18:55:58.000Z | lib/smisk/test/core/url.py | rsms/smisk | f12a5606dfff49a15fa91448ff36652d60add4c0 | [
"MIT"
] | 5 | 2021-11-16T17:21:51.000Z | 2021-11-16T17:22:09.000Z | lib/smisk/test/core/url.py | rsms/smisk | f12a5606dfff49a15fa91448ff36652d60add4c0 | [
"MIT"
] | null | null | null | #!/usr/bin/env python
# encoding: utf-8
from smisk.test import *
from smisk.core import URL
class URLTests(TestCase):
def test_encode_decode(self):
raw = "http://abc.se:12/mos/jäger/grek land/hej.html?mos=japp&öland=nej#ge-mig/då";
escaped = URL.escape(raw)
self.assertEquals(escaped,
'http%3A//abc.se%3A12/mos/j%C3%A4ger/grek%20land/hej.html'\
'?mos=japp&%C3%B6land=nej%23ge-mig/d%C3%A5')
encoded = URL.encode(raw)
self.assertEquals(encoded,
'http%3A%2F%2Fabc.se%3A12%2Fmos%2Fj%C3%A4ger%2Fgrek%20land%2Fhej.html%3Fmos%3Djapp'\
'%26%C3%B6land%3Dnej%23ge-mig%2Fd%C3%A5')
self.assertEquals(URL.decode(escaped), raw)
self.assertEquals(URL.decode(encoded), raw)
self.assertEquals(URL.unescape(escaped), URL.decode(escaped))
self.assertEquals(URL.decode("foo%2Bbar@internets.com"), "foo+bar@internets.com")
def test_encode_decode_string_type(self):
self.assertEquals(type(URL.encode(u"foo+bar@internets.com")), type(u"foo%2Bbar@internets.com"))
self.assertEquals(type(URL.encode("foo+bar@internets.com")), type("foo%2Bbar@internets.com"))
self.assertEquals(type(URL.escape(u"foo+bar@internets.com")), type(u"foo%2Bbar@internets.com"))
self.assertEquals(type(URL.escape("foo+bar@internets.com")), type("foo%2Bbar@internets.com"))
self.assertEquals(type(URL.decode(u"foo%2Bbar@internets.com")), type(u"foo+bar@internets.com"))
self.assertEquals(type(URL.decode("foo%2Bbar@internets.com")), type("foo+bar@internets.com"))
def test_clean_strings(self):
# Should be unmodified and retain pointers
raw = 'hello/john'
escaped = URL.escape(raw)
self.assertEquals(escaped, raw)
self.assertEquals(id(escaped), id(raw))
raw = 'hello_john'
encoded = URL.encode(raw)
self.assertEquals(encoded, raw)
self.assertEquals(id(encoded), id(raw))
def test_parse(self):
u = URL('http://john:secret@www.mos.tld/some/path.ext?arg1=245&arg2=hej%20du#chapter5')
self.assertEquals(u.scheme, 'http')
self.assertEquals(u.user, 'john')
self.assertEquals(u.password, 'secret')
self.assertEquals(u.host, 'www.mos.tld')
self.assertEquals(u.path, '/some/path.ext')
self.assertEquals(u.query, 'arg1=245&arg2=hej%20du')
self.assertEquals(u.fragment, 'chapter5')
u = URL('https://john@www.mos.tld/some/path.ext?arg1=245&arg2=hej%20du#chapter5')
self.assertEquals(u.scheme, 'https')
self.assertEquals(u.user, 'john')
self.assertEquals(u.password, None)
self.assertEquals(u.host, 'www.mos.tld')
self.assertEquals(u.path, '/some/path.ext')
self.assertEquals(u.query, 'arg1=245&arg2=hej%20du')
self.assertEquals(u.fragment, 'chapter5')
u = URL('http://www.mos.tld/some/path.ext?arg1=245&arg2=hej%20du-chapter5')
self.assertEquals(u.query, 'arg1=245&arg2=hej%20du-chapter5')
self.assertEquals(u.fragment, None)
u = URL('http://www.mos.tld/some/path.ext?arg1=245&arg2=hej%20du?chapter5')
self.assertEquals(u.query, 'arg1=245&arg2=hej%20du?chapter5')
self.assertEquals(u.fragment, None)
u = URL('http://www.mos.tld/some/path.ext?')
self.assertEquals(u.query, '')
self.assertEquals(u.fragment, None)
u = URL('http://www.mos.tld/some/path.ext#arg1=245&arg2=hej%20du-chapter5')
self.assertEquals(u.query, None)
self.assertEquals(u.fragment, 'arg1=245&arg2=hej%20du-chapter5')
u = URL('http://www.mos.tld/some/path.ext#arg1=245&arg2=hej%20du?chapter5')
self.assertEquals(u.query, None)
self.assertEquals(u.fragment, 'arg1=245&arg2=hej%20du?chapter5')
u = URL('http://www.mos.tld/some/path.ext#')
self.assertEquals(u.query, None)
self.assertEquals(u.fragment, '')
def test_decompose_query(self):
u = URL('http://a/?email=foo%2Bbar@internets.com&&stale_key&&mos=abc&mos=123&&&')
q = URL.decompose_query(u.query)
self.assertEquals(q['email'], "foo+bar@internets.com")
self.assertEquals(q['stale_key'], None)
self.assertEquals(q['mos'], ['abc', '123'])
self.assertContains(q.keys(), ['email', 'stale_key', 'mos'])
def test_decompose_query_decode(self):
# explicitly decode iso-8859-1 text:
u = URL('http://a/?name=%E5%E4%F6')
q = URL.decompose_query(u.query, charset='latin_1', tolerant=False)
self.assertTrue(isinstance(q['name'], unicode))
self.assertEquals(q['name'], u'\xe5\xe4\xf6')
# explicitly decode utf-8 text:
u = URL('http://a/?name=%C3%A5%C3%A4%C3%B6%EF%A3%BF')
q = URL.decompose_query(u.query, charset='utf-8')
self.assertTrue(isinstance(q['name'], unicode))
self.assertEquals(q['name'], u'\xe5\xe4\xf6\uf8ff')
# fail to decode iso-8859-1 as utf-8 (tolerant=False):
u = URL('http://a/?name=%E5%E4%F6')
self.assertRaises(UnicodeDecodeError,
lambda: URL.decompose_query(u.query, charset='utf-8', tolerant=False))
# repeating the above with tolerant=True (default value) should implicitly
# use the latin-1 charset:
q = URL.decompose_query(u.query, charset='utf-8')
self.assertTrue(isinstance(q['name'], unicode))
self.assertEquals(q['name'], u'\xe5\xe4\xf6')
def test_to_s_1(self):
raw = 'http://john:secret@fisk.tld:1983/some/path.ext?arg1=245&arg2=hej%20du#chapter5'
u = URL(raw)
self.assertEquals(u.to_s(), raw)
self.assertEquals(str(u), raw)
self.assertEquals(unicode(u), unicode(raw))
def test_to_s_2_port(self):
u = URL('http://fisk.tld:1983/some/path')
self.assertEquals(u.to_s(port=0), 'http://fisk.tld/some/path')
self.assertEquals(u.to_s(port80=0), 'http://fisk.tld:1983/some/path')
self.assertEquals(u.to_s(port=0, port80=1), 'http://fisk.tld/some/path')
u = URL('http://fisk.tld:80/some/path')
self.assertEquals(u.to_s(port=0), 'http://fisk.tld/some/path')
self.assertEquals(u.to_s(port80=0), 'http://fisk.tld/some/path')
self.assertEquals(u.to_s(port=0, port80=1), 'http://fisk.tld/some/path')
def test_to_s_3(self):
u = URL('http://john:secret@fisk.tld:1983/some/path.ext?arg1=245&arg2=hej%20du#chapter5')
# meet and greet
self.assertEquals(u.to_s(scheme=0, user=1, password=1, host=1, port=1, path=1, query=1, fragment=1),
'john:secret@fisk.tld:1983/some/path.ext?arg1=245&arg2=hej%20du#chapter5')
self.assertEquals(u.to_s(scheme=1, user=0, password=1, host=1, port=1, path=1, query=1, fragment=1),
'http://fisk.tld:1983/some/path.ext?arg1=245&arg2=hej%20du#chapter5')
self.assertEquals(u.to_s(scheme=1, user=1, password=0, host=1, port=1, path=1, query=1, fragment=1),
'http://john@fisk.tld:1983/some/path.ext?arg1=245&arg2=hej%20du#chapter5')
self.assertEquals(u.to_s(scheme=1, user=1, password=1, host=0, port=1, path=1, query=1, fragment=1),
'http://john:secret@:1983/some/path.ext?arg1=245&arg2=hej%20du#chapter5')
self.assertEquals(u.to_s(scheme=1, user=1, password=1, host=1, port=0, path=1, query=1, fragment=1),
'http://john:secret@fisk.tld/some/path.ext?arg1=245&arg2=hej%20du#chapter5')
self.assertEquals(u.to_s(scheme=1, user=1, password=1, host=1, port=1, path=0, query=1, fragment=1),
'http://john:secret@fisk.tld:1983?arg1=245&arg2=hej%20du#chapter5')
self.assertEquals(u.to_s(scheme=1, user=1, password=1, host=1, port=1, path=1, query=0, fragment=1),
'http://john:secret@fisk.tld:1983/some/path.ext#chapter5')
self.assertEquals(u.to_s(scheme=1, user=1, password=1, host=1, port=1, path=1, query=1, fragment=0),
'http://john:secret@fisk.tld:1983/some/path.ext?arg1=245&arg2=hej%20du')
# no scheme
self.assertEquals(u.to_s(scheme=0, user=0, password=1, host=1, port=1, path=1, query=1, fragment=1),
'fisk.tld:1983/some/path.ext?arg1=245&arg2=hej%20du#chapter5')
self.assertEquals(u.to_s(scheme=0, user=1, password=0, host=1, port=1, path=1, query=1, fragment=1),
'john@fisk.tld:1983/some/path.ext?arg1=245&arg2=hej%20du#chapter5')
self.assertEquals(u.to_s(scheme=0, user=1, password=1, host=0, port=1, path=1, query=1, fragment=1),
'john:secret@:1983/some/path.ext?arg1=245&arg2=hej%20du#chapter5')
self.assertEquals(u.to_s(scheme=0, user=1, password=1, host=1, port=0, path=1, query=1, fragment=1),
'john:secret@fisk.tld/some/path.ext?arg1=245&arg2=hej%20du#chapter5')
self.assertEquals(u.to_s(scheme=0, user=1, password=1, host=1, port=1, path=0, query=1, fragment=1),
'john:secret@fisk.tld:1983?arg1=245&arg2=hej%20du#chapter5')
self.assertEquals(u.to_s(scheme=0, user=1, password=1, host=1, port=1, path=1, query=0, fragment=1),
'john:secret@fisk.tld:1983/some/path.ext#chapter5')
self.assertEquals(u.to_s(scheme=0, user=1, password=1, host=1, port=1, path=1, query=1, fragment=0),
'john:secret@fisk.tld:1983/some/path.ext?arg1=245&arg2=hej%20du')
# no user
self.assertEquals(u.to_s(scheme=1, user=0, password=0, host=1, port=1, path=1, query=1, fragment=1),
'http://fisk.tld:1983/some/path.ext?arg1=245&arg2=hej%20du#chapter5')
self.assertEquals(u.to_s(scheme=1, user=0, password=1, host=0, port=1, path=1, query=1, fragment=1),
'http://:1983/some/path.ext?arg1=245&arg2=hej%20du#chapter5')
self.assertEquals(u.to_s(scheme=1, user=0, password=1, host=1, port=0, path=1, query=1, fragment=1),
'http://fisk.tld/some/path.ext?arg1=245&arg2=hej%20du#chapter5')
self.assertEquals(u.to_s(scheme=1, user=0, password=1, host=1, port=1, path=0, query=1, fragment=1),
'http://fisk.tld:1983?arg1=245&arg2=hej%20du#chapter5')
self.assertEquals(u.to_s(scheme=1, user=0, password=1, host=1, port=1, path=1, query=0, fragment=1),
'http://fisk.tld:1983/some/path.ext#chapter5')
self.assertEquals(u.to_s(scheme=1, user=0, password=1, host=1, port=1, path=1, query=1, fragment=0),
'http://fisk.tld:1983/some/path.ext?arg1=245&arg2=hej%20du')
# no password
self.assertEquals(u.to_s(scheme=1, user=1, password=0, host=0, port=1, path=1, query=1, fragment=1),
'http://john@:1983/some/path.ext?arg1=245&arg2=hej%20du#chapter5')
self.assertEquals(u.to_s(scheme=1, user=1, password=0, host=1, port=0, path=1, query=1, fragment=1),
'http://john@fisk.tld/some/path.ext?arg1=245&arg2=hej%20du#chapter5')
self.assertEquals(u.to_s(scheme=1, user=1, password=0, host=1, port=1, path=0, query=1, fragment=1),
'http://john@fisk.tld:1983?arg1=245&arg2=hej%20du#chapter5')
self.assertEquals(u.to_s(scheme=1, user=1, password=0, host=1, port=1, path=1, query=0, fragment=1),
'http://john@fisk.tld:1983/some/path.ext#chapter5')
self.assertEquals(u.to_s(scheme=1, user=1, password=0, host=1, port=1, path=1, query=1, fragment=0),
'http://john@fisk.tld:1983/some/path.ext?arg1=245&arg2=hej%20du')
# no host
self.assertEquals(u.to_s(scheme=1, user=1, password=1, host=0, port=0, path=1, query=1, fragment=1),
'http://john:secret@/some/path.ext?arg1=245&arg2=hej%20du#chapter5')
self.assertEquals(u.to_s(scheme=1, user=1, password=1, host=0, port=1, path=0, query=1, fragment=1),
'http://john:secret@:1983?arg1=245&arg2=hej%20du#chapter5')
self.assertEquals(u.to_s(scheme=1, user=1, password=1, host=0, port=1, path=1, query=0, fragment=1),
'http://john:secret@:1983/some/path.ext#chapter5')
self.assertEquals(u.to_s(scheme=1, user=1, password=1, host=0, port=1, path=1, query=1, fragment=0),
'http://john:secret@:1983/some/path.ext?arg1=245&arg2=hej%20du')
# no port
self.assertEquals(u.to_s(scheme=1, user=1, password=1, host=1, port=0, path=0, query=1, fragment=1),
'http://john:secret@fisk.tld?arg1=245&arg2=hej%20du#chapter5')
self.assertEquals(u.to_s(scheme=1, user=1, password=1, host=1, port=0, path=1, query=0, fragment=1),
'http://john:secret@fisk.tld/some/path.ext#chapter5')
self.assertEquals(u.to_s(scheme=1, user=1, password=1, host=1, port=0, path=1, query=1, fragment=0),
'http://john:secret@fisk.tld/some/path.ext?arg1=245&arg2=hej%20du')
# no path
self.assertEquals(u.to_s(scheme=1, user=1, password=1, host=1, port=1, path=0, query=0, fragment=1),
'http://john:secret@fisk.tld:1983#chapter5')
self.assertEquals(u.to_s(scheme=1, user=1, password=1, host=1, port=1, path=0, query=1, fragment=0),
'http://john:secret@fisk.tld:1983?arg1=245&arg2=hej%20du')
# no query
self.assertEquals(u.to_s(scheme=1, user=1, password=1, host=1, port=1, path=1, query=0, fragment=0),
'http://john:secret@fisk.tld:1983/some/path.ext')
def test_to_s_4(self):
u = URL('http://john:secret@fisk.tld:1983/some/path.ext?arg1=245&arg2=hej%20du#chapter5')
self.assertEquals(u.to_s(scheme='ftp'),
'ftp://john:secret@fisk.tld:1983/some/path.ext?arg1=245&arg2=hej%20du#chapter5')
self.assertEquals(u.to_s(user='bob'),
'http://bob:secret@fisk.tld:1983/some/path.ext?arg1=245&arg2=hej%20du#chapter5')
self.assertEquals(u.to_s(password='bob'),
'http://john:bob@fisk.tld:1983/some/path.ext?arg1=245&arg2=hej%20du#chapter5')
self.assertEquals(u.to_s(host='bob'),
'http://john:secret@bob:1983/some/path.ext?arg1=245&arg2=hej%20du#chapter5')
self.assertEquals(u.to_s(port=123),
'http://john:secret@fisk.tld:123/some/path.ext?arg1=245&arg2=hej%20du#chapter5')
self.assertEquals(u.to_s(user=0, path='/internets'),
'http://fisk.tld:1983/internets?arg1=245&arg2=hej%20du#chapter5')
self.assertEquals(u.to_s(query='grekisk_afton=yes'),
'http://john:secret@fisk.tld:1983/some/path.ext?grekisk_afton=yes#chapter5')
self.assertEquals(u.to_s(fragment='m0'),
'http://john:secret@fisk.tld:1983/some/path.ext?arg1=245&arg2=hej%20du#m0')
def suite():
return unittest.TestSuite([
unittest.makeSuite(URLTests),
])
def test():
runner = unittest.TextTestRunner()
return runner.run(suite())
if __name__ == "__main__":
test()
| 52.516981 | 105 | 0.676439 | 2,338 | 13,917 | 3.984602 | 0.077417 | 0.173465 | 0.140511 | 0.104015 | 0.828789 | 0.815371 | 0.78918 | 0.762774 | 0.751717 | 0.732503 | 0 | 0.080316 | 0.126823 | 13,917 | 264 | 106 | 52.715909 | 0.686307 | 0.02673 | 0 | 0.179612 | 0 | 0.23301 | 0.359894 | 0.097716 | 0 | 0 | 0 | 0 | 0.514563 | 1 | 0.058252 | false | 0.18932 | 0.009709 | 0.004854 | 0.082524 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 9 |
739a4b164f6b0247daa6f49344c6950bbdf8dc95 | 6,959 | py | Python | loldib/getratings/models/NA/na_lissandra/na_lissandra_jng.py | koliupy/loldib | c9ab94deb07213cdc42b5a7c26467cdafaf81b7f | [
"Apache-2.0"
] | null | null | null | loldib/getratings/models/NA/na_lissandra/na_lissandra_jng.py | koliupy/loldib | c9ab94deb07213cdc42b5a7c26467cdafaf81b7f | [
"Apache-2.0"
] | null | null | null | loldib/getratings/models/NA/na_lissandra/na_lissandra_jng.py | koliupy/loldib | c9ab94deb07213cdc42b5a7c26467cdafaf81b7f | [
"Apache-2.0"
] | null | null | null | from getratings.models.ratings import Ratings
class NA_Lissandra_Jng_Aatrox(Ratings):
pass
class NA_Lissandra_Jng_Ahri(Ratings):
pass
class NA_Lissandra_Jng_Akali(Ratings):
pass
class NA_Lissandra_Jng_Alistar(Ratings):
pass
class NA_Lissandra_Jng_Amumu(Ratings):
pass
class NA_Lissandra_Jng_Anivia(Ratings):
pass
class NA_Lissandra_Jng_Annie(Ratings):
pass
class NA_Lissandra_Jng_Ashe(Ratings):
pass
class NA_Lissandra_Jng_AurelionSol(Ratings):
pass
class NA_Lissandra_Jng_Azir(Ratings):
pass
class NA_Lissandra_Jng_Bard(Ratings):
pass
class NA_Lissandra_Jng_Blitzcrank(Ratings):
pass
class NA_Lissandra_Jng_Brand(Ratings):
pass
class NA_Lissandra_Jng_Braum(Ratings):
pass
class NA_Lissandra_Jng_Caitlyn(Ratings):
pass
class NA_Lissandra_Jng_Camille(Ratings):
pass
class NA_Lissandra_Jng_Cassiopeia(Ratings):
pass
class NA_Lissandra_Jng_Chogath(Ratings):
pass
class NA_Lissandra_Jng_Corki(Ratings):
pass
class NA_Lissandra_Jng_Darius(Ratings):
pass
class NA_Lissandra_Jng_Diana(Ratings):
pass
class NA_Lissandra_Jng_Draven(Ratings):
pass
class NA_Lissandra_Jng_DrMundo(Ratings):
pass
class NA_Lissandra_Jng_Ekko(Ratings):
pass
class NA_Lissandra_Jng_Elise(Ratings):
pass
class NA_Lissandra_Jng_Evelynn(Ratings):
pass
class NA_Lissandra_Jng_Ezreal(Ratings):
pass
class NA_Lissandra_Jng_Fiddlesticks(Ratings):
pass
class NA_Lissandra_Jng_Fiora(Ratings):
pass
class NA_Lissandra_Jng_Fizz(Ratings):
pass
class NA_Lissandra_Jng_Galio(Ratings):
pass
class NA_Lissandra_Jng_Gangplank(Ratings):
pass
class NA_Lissandra_Jng_Garen(Ratings):
pass
class NA_Lissandra_Jng_Gnar(Ratings):
pass
class NA_Lissandra_Jng_Gragas(Ratings):
pass
class NA_Lissandra_Jng_Graves(Ratings):
pass
class NA_Lissandra_Jng_Hecarim(Ratings):
pass
class NA_Lissandra_Jng_Heimerdinger(Ratings):
pass
class NA_Lissandra_Jng_Illaoi(Ratings):
pass
class NA_Lissandra_Jng_Irelia(Ratings):
pass
class NA_Lissandra_Jng_Ivern(Ratings):
pass
class NA_Lissandra_Jng_Janna(Ratings):
pass
class NA_Lissandra_Jng_JarvanIV(Ratings):
pass
class NA_Lissandra_Jng_Jax(Ratings):
pass
class NA_Lissandra_Jng_Jayce(Ratings):
pass
class NA_Lissandra_Jng_Jhin(Ratings):
pass
class NA_Lissandra_Jng_Jinx(Ratings):
pass
class NA_Lissandra_Jng_Kalista(Ratings):
pass
class NA_Lissandra_Jng_Karma(Ratings):
pass
class NA_Lissandra_Jng_Karthus(Ratings):
pass
class NA_Lissandra_Jng_Kassadin(Ratings):
pass
class NA_Lissandra_Jng_Katarina(Ratings):
pass
class NA_Lissandra_Jng_Kayle(Ratings):
pass
class NA_Lissandra_Jng_Kayn(Ratings):
pass
class NA_Lissandra_Jng_Kennen(Ratings):
pass
class NA_Lissandra_Jng_Khazix(Ratings):
pass
class NA_Lissandra_Jng_Kindred(Ratings):
pass
class NA_Lissandra_Jng_Kled(Ratings):
pass
class NA_Lissandra_Jng_KogMaw(Ratings):
pass
class NA_Lissandra_Jng_Leblanc(Ratings):
pass
class NA_Lissandra_Jng_LeeSin(Ratings):
pass
class NA_Lissandra_Jng_Leona(Ratings):
pass
class NA_Lissandra_Jng_Lissandra(Ratings):
pass
class NA_Lissandra_Jng_Lucian(Ratings):
pass
class NA_Lissandra_Jng_Lulu(Ratings):
pass
class NA_Lissandra_Jng_Lux(Ratings):
pass
class NA_Lissandra_Jng_Malphite(Ratings):
pass
class NA_Lissandra_Jng_Malzahar(Ratings):
pass
class NA_Lissandra_Jng_Maokai(Ratings):
pass
class NA_Lissandra_Jng_MasterYi(Ratings):
pass
class NA_Lissandra_Jng_MissFortune(Ratings):
pass
class NA_Lissandra_Jng_MonkeyKing(Ratings):
pass
class NA_Lissandra_Jng_Mordekaiser(Ratings):
pass
class NA_Lissandra_Jng_Morgana(Ratings):
pass
class NA_Lissandra_Jng_Nami(Ratings):
pass
class NA_Lissandra_Jng_Nasus(Ratings):
pass
class NA_Lissandra_Jng_Nautilus(Ratings):
pass
class NA_Lissandra_Jng_Nidalee(Ratings):
pass
class NA_Lissandra_Jng_Nocturne(Ratings):
pass
class NA_Lissandra_Jng_Nunu(Ratings):
pass
class NA_Lissandra_Jng_Olaf(Ratings):
pass
class NA_Lissandra_Jng_Orianna(Ratings):
pass
class NA_Lissandra_Jng_Ornn(Ratings):
pass
class NA_Lissandra_Jng_Pantheon(Ratings):
pass
class NA_Lissandra_Jng_Poppy(Ratings):
pass
class NA_Lissandra_Jng_Quinn(Ratings):
pass
class NA_Lissandra_Jng_Rakan(Ratings):
pass
class NA_Lissandra_Jng_Rammus(Ratings):
pass
class NA_Lissandra_Jng_RekSai(Ratings):
pass
class NA_Lissandra_Jng_Renekton(Ratings):
pass
class NA_Lissandra_Jng_Rengar(Ratings):
pass
class NA_Lissandra_Jng_Riven(Ratings):
pass
class NA_Lissandra_Jng_Rumble(Ratings):
pass
class NA_Lissandra_Jng_Ryze(Ratings):
pass
class NA_Lissandra_Jng_Sejuani(Ratings):
pass
class NA_Lissandra_Jng_Shaco(Ratings):
pass
class NA_Lissandra_Jng_Shen(Ratings):
pass
class NA_Lissandra_Jng_Shyvana(Ratings):
pass
class NA_Lissandra_Jng_Singed(Ratings):
pass
class NA_Lissandra_Jng_Sion(Ratings):
pass
class NA_Lissandra_Jng_Sivir(Ratings):
pass
class NA_Lissandra_Jng_Skarner(Ratings):
pass
class NA_Lissandra_Jng_Sona(Ratings):
pass
class NA_Lissandra_Jng_Soraka(Ratings):
pass
class NA_Lissandra_Jng_Swain(Ratings):
pass
class NA_Lissandra_Jng_Syndra(Ratings):
pass
class NA_Lissandra_Jng_TahmKench(Ratings):
pass
class NA_Lissandra_Jng_Taliyah(Ratings):
pass
class NA_Lissandra_Jng_Talon(Ratings):
pass
class NA_Lissandra_Jng_Taric(Ratings):
pass
class NA_Lissandra_Jng_Teemo(Ratings):
pass
class NA_Lissandra_Jng_Thresh(Ratings):
pass
class NA_Lissandra_Jng_Tristana(Ratings):
pass
class NA_Lissandra_Jng_Trundle(Ratings):
pass
class NA_Lissandra_Jng_Tryndamere(Ratings):
pass
class NA_Lissandra_Jng_TwistedFate(Ratings):
pass
class NA_Lissandra_Jng_Twitch(Ratings):
pass
class NA_Lissandra_Jng_Udyr(Ratings):
pass
class NA_Lissandra_Jng_Urgot(Ratings):
pass
class NA_Lissandra_Jng_Varus(Ratings):
pass
class NA_Lissandra_Jng_Vayne(Ratings):
pass
class NA_Lissandra_Jng_Veigar(Ratings):
pass
class NA_Lissandra_Jng_Velkoz(Ratings):
pass
class NA_Lissandra_Jng_Vi(Ratings):
pass
class NA_Lissandra_Jng_Viktor(Ratings):
pass
class NA_Lissandra_Jng_Vladimir(Ratings):
pass
class NA_Lissandra_Jng_Volibear(Ratings):
pass
class NA_Lissandra_Jng_Warwick(Ratings):
pass
class NA_Lissandra_Jng_Xayah(Ratings):
pass
class NA_Lissandra_Jng_Xerath(Ratings):
pass
class NA_Lissandra_Jng_XinZhao(Ratings):
pass
class NA_Lissandra_Jng_Yasuo(Ratings):
pass
class NA_Lissandra_Jng_Yorick(Ratings):
pass
class NA_Lissandra_Jng_Zac(Ratings):
pass
class NA_Lissandra_Jng_Zed(Ratings):
pass
class NA_Lissandra_Jng_Ziggs(Ratings):
pass
class NA_Lissandra_Jng_Zilean(Ratings):
pass
class NA_Lissandra_Jng_Zyra(Ratings):
pass
| 16.688249 | 46 | 0.780572 | 972 | 6,959 | 5.162551 | 0.151235 | 0.192507 | 0.440016 | 0.522519 | 0.819051 | 0.819051 | 0 | 0 | 0 | 0 | 0 | 0 | 0.159649 | 6,959 | 416 | 47 | 16.728365 | 0.858071 | 0 | 0 | 0.498195 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.498195 | 0.00361 | 0 | 0.501805 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 7 |
73bc23ae1475909169d664aa88f29c5089b99e7c | 114 | py | Python | examples/docStrings/experiment_str.py | mathieulagrange/doce | cde1e50565c29e360d8400dac689e2601b5e6fb3 | [
"Apache-2.0"
] | 1 | 2021-03-14T10:06:46.000Z | 2021-03-14T10:06:46.000Z | examples/docStrings/experiment_str.py | mathieulagrange/doce | cde1e50565c29e360d8400dac689e2601b5e6fb3 | [
"Apache-2.0"
] | 70 | 2021-03-12T08:35:58.000Z | 2022-03-31T16:27:25.000Z | examples/docStrings/experiment_str.py | mathieulagrange/doce | cde1e50565c29e360d8400dac689e2601b5e6fb3 | [
"Apache-2.0"
] | 1 | 2022-03-09T16:06:31.000Z | 2022-03-09T16:06:31.000Z | import explanes as el
print(el.Experiment())
import explanes as el
print(el.Experiment().__str__(format='html'))
| 19 | 45 | 0.763158 | 17 | 114 | 4.882353 | 0.529412 | 0.337349 | 0.385542 | 0.433735 | 0.843373 | 0.843373 | 0.843373 | 0 | 0 | 0 | 0 | 0 | 0.096491 | 114 | 5 | 46 | 22.8 | 0.805825 | 0 | 0 | 0.5 | 0 | 0 | 0.035088 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0.5 | 1 | 0 | 0 | null | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 11 |
fba137dddd076f379ba0c6ecbcb09393d00f4ca0 | 1,380 | py | Python | ecver/curves.py | s-v-grebnev/ECver | 798c900fee2090de3d91a6a3db23fa54a9cae1eb | [
"WTFPL"
] | null | null | null | ecver/curves.py | s-v-grebnev/ECver | 798c900fee2090de3d91a6a3db23fa54a9cae1eb | [
"WTFPL"
] | null | null | null | ecver/curves.py | s-v-grebnev/ECver | 798c900fee2090de3d91a6a3db23fa54a9cae1eb | [
"WTFPL"
] | null | null | null | Curves = {
"GOSTR34102012-Test": {
"P": "FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFDC7",
"Q": "FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF27E69532F48D89116FF22B8D4E0560609B4B38ABFAD2B85DCACDB1411F10B275",
"A": "FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFDC4",
"B": "E8C2505DEDFC86DDC1BD0B2B6667F1DA34B82574761CB0E879BD081CFD0B6265EE3CB090F30D27614CB4574010DA90DD862EF9D4EBEE4761503190785A71C760",
"X": "00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000003",
"Y": "7503CFE87A836AE3A61B8816E25450E6CE5E1C93ACF1ABC1778064FDCBEFA921DF1626BE4FD036E93D75E6A50E3A41E98028FE5FC235F5B889A589CB5215F2A4"
},
"GOSTR34102001-Test": {
"P": "8000000000000000000000000000000000000000000000000000000000000431",
"Q": "8000000000000000000000000000000150FE8A1892976154C59CFC193ACCF5B3",
"A": "0000000000000000000000000000000000000000000000000000000000000007",
"B": "5FBFF498AA938CE739B8E022FBAFEF40563F6E6A3472FC2A514C0CE9DAE23B7E",
"X": "0000000000000000000000000000000000000000000000000000000000000002",
"Y": "08E2A8A0E65147D4BD6316030E16D19C85C97F0A9CA267122B96ABBCEA7E8FC8"
}
}
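# A minimal sketch (not part of the original file) that checks each base point
# (X, Y) against the short Weierstrass equation y^2 = x^3 + a*x + b (mod P);
# the check should hold if the hex constants above are consistent.
if __name__ == "__main__":
    for name, c in Curves.items():
        p, a, b = int(c["P"], 16), int(c["A"], 16), int(c["B"], 16)
        x, y = int(c["X"], 16), int(c["Y"], 16)
        print(name, "base point on curve:", (y * y - (x ** 3 + a * x + b)) % p == 0)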
| 72.631579 | 140 | 0.872464 | 29 | 1,380 | 41.517241 | 0.758621 | 0.008306 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.511628 | 0.065217 | 1,380 | 18 | 141 | 76.666667 | 0.421705 | 0 | 0 | 0 | 0 | 0 | 0.869565 | 0.834783 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
fba17954cbe41bf92cd02abd5bfdcae638aefcd1 | 3,324 | py | Python | src/dynamic_programming/python/all_construct/tests/test_all_construct.py | djeada/GraphAlgorithms | 0961303ec20430f90053a4efb9074185f96dfddc | [
"MIT"
] | 2 | 2021-05-31T13:01:33.000Z | 2021-12-20T19:48:18.000Z | src/dynamic_programming/python/all_construct/tests/test_all_construct.py | djeada/GraphAlgorithms | 0961303ec20430f90053a4efb9074185f96dfddc | [
"MIT"
] | null | null | null | src/dynamic_programming/python/all_construct/tests/test_all_construct.py | djeada/GraphAlgorithms | 0961303ec20430f90053a4efb9074185f96dfddc | [
"MIT"
] | null | null | null | import unittest
import os
import sys
file_dir = os.path.dirname(os.path.dirname(__file__))
sys.path.append(file_dir + "/src")
from all_construct import all_construct_basic, all_construct_memo, all_construct_table
def compare_2d_lists(a, b):
    """Order-insensitive comparison of two lists of lists (sorts a and b in place).

    e.g. compare_2d_lists([["b", "a"], ["c"]], [["c"], ["a", "b"]]) -> True
    """
    for i in range(len(a)):
        a[i] = sorted(a[i])
    for i in range(len(b)):
        b[i] = sorted(b[i])
    return sorted(a, key=lambda x: x[0]) == sorted(b, key=lambda x: x[0])
class TestAllConstructBasic(unittest.TestCase):
def test_negative_1(self):
word_bank = ["bo", "rd", "ate", "t", "ska", "sk", "boar"]
target = "skateboard"
result = list()
self.assertEqual(all_construct_basic(target, word_bank), result)
def test_negative_2(self):
word_bank = ["mo", "ha", "cz"]
target = "mocha"
result = list()
self.assertEqual(all_construct_basic(target, word_bank), result)
def test_positive_1(self):
word_bank = ["a", "b", "c"]
target = "abc"
result = [["a", "b", "c"]]
self.assertTrue(
compare_2d_lists(all_construct_basic(target, word_bank), result)
)
def test_positive_2(self):
word_bank = ["ab", "abc", "cd", "def", "abcd"]
target = "abcdef"
result = [["abc", "def"]]
self.assertTrue(
compare_2d_lists(all_construct_basic(target, word_bank), result)
)
class TestAllConstructMemo(unittest.TestCase):
def test_negative_1(self):
word_bank = ["bo", "rd", "ate", "t", "ska", "sk", "boar"]
target = "skateboard"
result = list()
self.assertEqual(all_construct_memo(target, word_bank), result)
def test_negative_2(self):
word_bank = ["mo", "ha", "cz"]
target = "mocha"
result = list()
self.assertEqual(all_construct_memo(target, word_bank), result)
def test_positive_1(self):
word_bank = ["a", "b", "c"]
target = "abc"
result = [["a", "b", "c"]]
self.assertTrue(compare_2d_lists(all_construct_memo(target, word_bank), result))
def test_positive_2(self):
word_bank = ["ab", "abc", "cd", "def", "abcd"]
target = "abcdef"
result = [["abc", "def"]]
self.assertTrue(compare_2d_lists(all_construct_memo(target, word_bank), result))
class TestAllConstructTab(unittest.TestCase):
def test_negative_1(self):
word_bank = ["bo", "rd", "ate", "t", "ska", "sk", "boar"]
target = "skateboard"
result = list()
self.assertEqual(all_construct_table(target, word_bank), result)
def test_negative_2(self):
word_bank = ["mo", "ha", "cz"]
target = "mocha"
result = list()
self.assertEqual(all_construct_table(target, word_bank), result)
def test_positive_1(self):
word_bank = ["a", "b", "c"]
target = "abc"
result = [["a", "b", "c"]]
self.assertTrue(
compare_2d_lists(all_construct_table(target, word_bank), result)
)
def test_positive_2(self):
word_bank = ["ab", "abc", "cd", "def", "abcd"]
target = "abcdef"
result = [["abc", "def"]]
self.assertTrue(
compare_2d_lists(all_construct_table(target, word_bank), result)
)
if __name__ == "__main__":
unittest.main()
| 30.218182 | 88 | 0.586342 | 415 | 3,324 | 4.438554 | 0.173494 | 0.104235 | 0.078176 | 0.130293 | 0.831162 | 0.797503 | 0.797503 | 0.797503 | 0.797503 | 0.797503 | 0 | 0.008492 | 0.256017 | 3,324 | 109 | 89 | 30.495413 | 0.736353 | 0 | 0 | 0.752941 | 0 | 0 | 0.069495 | 0 | 0 | 0 | 0 | 0 | 0.141176 | 1 | 0.152941 | false | 0 | 0.047059 | 0 | 0.247059 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
fbb40cf7c47c50640ad320c9e6d30c23feddb1e8 | 57,413 | py | Python | DB/MySQL_Aena.py | SergioCMDev/Busines-Inteligence-applied-to-tourism | 61834a46fce22453e94b7bbdf8d4ecdcf128285a | [
"Apache-2.0"
] | null | null | null | DB/MySQL_Aena.py | SergioCMDev/Busines-Inteligence-applied-to-tourism | 61834a46fce22453e94b7bbdf8d4ecdcf128285a | [
"Apache-2.0"
] | null | null | null | DB/MySQL_Aena.py | SergioCMDev/Busines-Inteligence-applied-to-tourism | 61834a46fce22453e94b7bbdf8d4ecdcf128285a | [
"Apache-2.0"
] | null | null | null | # -*- coding: utf-8 -*-
"""
Created on Sat Jul 1 12:28:06 2017
@author: Sergio Cristauro Manzano
"""
#import pymysql
import mysql.connector
from ..Utilidades.Constantes import Constantes
class MySQLAccessAena:
connection = mysql.connector.connect(user=Constantes.UsuarioBD, host=Constantes.IP_BD, database=Constantes.DB_Name)
def __init__(self):
# super(MySQLAccess, self).__init__()
print("Clase MYSQL Aena Cargada Correctamente ")
    def ObtenerNumeroMesDadoNombre(self, Mes):
        # Map a Spanish month name to its month number (as a string); returns None for unknown names.
        meses = {'Enero': '1', 'Febrero': '2', 'Marzo': '3', 'Abril': '4',
                 'Mayo': '5', 'Junio': '6', 'Julio': '7', 'Agosto': '8',
                 'Septiembre': '9', 'Octubre': '10', 'Noviembre': '11', 'Diciembre': '12'}
        return meses.get(Mes)
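    # A hypothetical usage sketch (commented out): it assumes a reachable MySQL
    # instance configured via Constantes and that the aena_* tables are loaded;
    # the country-name value is only an example of the expected data.
    #
    #   db = MySQLAccessAena()
    #   cursor = db.ObtenerDatosVuelosEntrantesAenaDadoPaisDestinoAnioMinMax("España", 2014, 2016)
    #   for anio, numero_vuelos in cursor:
    #       print(anio, numero_vuelos)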
#############################################################################################################################################################################################################################################
#####################################INBOUND FLIGHTS###############################################################################
#############################################################################################################################################################################################################################################
    # Shows the origin countries and the number of inbound flights into PaisDestino during Anio
    def ObtenerPaisOrigenYVuelosEntrantesAenaDadoPaisDestinoAnio(self, PaisDestino, Anio): # TODO: test
self.cursor = self.connection.cursor()
self.query = "SELECT country_origen.name as Pais_Origen, SUM(av.flights) as Numero_Vuelos from aena_vuelos av JOIN airport ap_destino on av.destination_id = ap_destino.id JOIN country country_destino on ap_destino.country_id = country_destino.id JOIN airport ap_origen on av.origin_id = ap_origen.id JOIN country country_origen on ap_origen.country_id = country_origen.id Where country_destino.name = %s AND YEAR(av.date) = %s GROUP BY country_origen.name"
self.cursor.execute(self.query,(PaisDestino, Anio))
return self.cursor
    # Shows the origin countries and the number of inbound flights into the city ciudadDestino of PaisDestino during Anio
    def ObtenerPaisOrigenYVuelosEntrantesAenaDadoPaisDestinoCiudadDestinoAnio(self, PaisDestino, ciudadDestino, Anio): # TODO: test
self.cursor = self.connection.cursor()
self.query = "SELECT country_origen.name as Pais_Origen, SUM(av.flights) as Numero_Vuelos from aena_vuelos av JOIN airport ap_destino on av.destination_id = ap_destino.id JOIN country country_destino on ap_destino.country_id = country_destino.id JOIN city city_destino on city_destino.country_id = country_destino.id JOIN airport ap_origen on av.origin_id = ap_origen.id JOIN country country_origen on ap_origen.country_id = country_origen.id Where country_destino.name = %s AND city_destino.name = %s AND YEAR(av.date) = %s GROUP BY country_origen.name"
self.cursor.execute(self.query,(PaisDestino, ciudadDestino, Anio))
return self.cursor
    # Shows the origin countries and the number of inbound flights into PaisDestino during Anio, broken down by month
    def ObtenerPaisesOrigenYVuelosEntrantesMensualmenteDuranteAniosAenaDadoPaisDestinoAnio(self, PaisDestino, Anio): # TODO: test
self.cursor = self.connection.cursor()
self.query = "SELECT MONTH(av.date), country_origen.name as Pais_Origen, SUM(av.flights) as Numero_Vuelos from aena_vuelos av JOIN airport ap_destino on av.destination_id = ap_destino.id JOIN country country_destino on ap_destino.country_id = country_destino.id JOIN airport ap_origen on av.origin_id = ap_origen.id JOIN country country_origen on ap_origen.country_id = country_origen.id Where country_destino.name = %s AND YEAR(av.date) = %s GROUP BY MONTH(av.date), country_origen.name"
self.cursor.execute(self.query,(PaisDestino, Anio))
return self.cursor
    # Shows the origin countries and the number of inbound flights into PaisDestino between MinYear and MaxYear, broken down by year
    def ObtenerPaisesOrigenYVuelosEntrantesAnualmenteAenaDadoPaisDestinoAnioMinMax(self, PaisDestino, MinYear, MaxYear): # TODO: test
self.cursor = self.connection.cursor()
self.query = "SELECT YEAR(av.date) as Anio, country_origen.name as Pais_Origen, SUM(av.flights) as Numero_Vuelos from aena_vuelos av JOIN airport ap_destino on av.destination_id = ap_destino.id JOIN country country_destino on ap_destino.country_id = country_destino.id JOIN airport ap_origen on av.origin_id = ap_origen.id JOIN country country_origen on ap_origen.country_id = country_origen.id Where country_destino.name = %s AND YEAR(av.date) >= %s AND YEAR(av.date) <= %s GROUP BY YEAR(av.date), country_origen.name"
self.cursor.execute(self.query,(PaisDestino, MinYear, MaxYear ))
return self.cursor
    ## Given a destination country and a year, returns the origin countries with their cities and the number of inbound flights
    def ObtenerPaisesOrigenCiudadesOrigenYVuelosEntrantesDuranteAnioAenaDadoPaisDestinoAnio(self, PaisDestino, Anio): # TODO: test
self.cursor = self.connection.cursor()
self.query = "SELECT country_origen.name as Pais_Origen, city_origen.name AS Ciudad_Origen, SUM(av.flights) as Numero_Vuelos from aena_vuelos av JOIN airport ap_destino on av.destination_id = ap_destino.id JOIN country country_destino on ap_destino.country_id = country_destino.id JOIN airport ap_origen on av.origin_id = ap_origen.id JOIN country country_origen on ap_origen.country_id = country_origen.id JOIN city city_origen ON country_origen.id = city_origen.country_id Where country_destino.name = %s AND YEAR(av.date) = %s GROUP BY country_origen.name, city_origen.name"
self.cursor.execute(self.query,(PaisDestino, Anio))
return self.cursor
    ## Given a destination country and a year range, returns the origin countries with their cities and the number of inbound flights
    def ObtenerPaisesOrigenCiudadesOrigenYVuelosEntrantesAnualmenteAenaDadoPaisDestinoAnioMinMax(self, PaisDestino, MinYear, MaxYear): # TODO: test
self.cursor = self.connection.cursor()
self.query = "SELECT YEAR(av.date) AS Anio, country_origen.name as Pais_Origen, city_origen.name AS Ciudad_Origen, SUM(av.flights) as Numero_Vuelos from aena_vuelos av JOIN airport ap_destino on av.destination_id = ap_destino.id JOIN country country_destino on ap_destino.country_id = country_destino.id JOIN airport ap_origen on av.origin_id = ap_origen.id JOIN country country_origen on ap_origen.country_id = country_origen.id JOIN city city_origen ON country_origen.id = city_origen.country_id Where country_destino.name = %s AND YEAR(av.date) >= %s AND YEAR(av.date) <= %s GROUP BY country_origen.name, city_origen.name"
self.cursor.execute(self.query,(PaisDestino, MinYear, MaxYear ))
return self.cursor
    # Shows the origin countries and the number of inbound flights into PaisDestino during month Mes, between MinYear and MaxYear, broken down by year
    def ObtenerPaisesOrigenCiudadesOrigenYVuelosEntrantesAnualmenteAenaDadoPaisDestinoMesAnioMinMax(self, PaisDestino, Mes, MinYear, MaxYear): # TODO: test
self.cursor = self.connection.cursor()
Mes = self.ObtenerNumeroMesDadoNombre(Mes)
self.query = "SELECT YEAR(av.date) as Anio, country_origen.name as Pais_Origen, SUM(av.flights) as Numero_Vuelos from aena_vuelos av JOIN airport ap_destino on av.destination_id = ap_destino.id JOIN country country_destino on ap_destino.country_id = country_destino.id JOIN airport ap_origen on av.origin_id = ap_origen.id JOIN country country_origen on ap_origen.country_id = country_origen.id Where country_destino.name = %s AND MONTH(av.date) = %s AND YEAR(av.date) >= %s AND YEAR(av.date) <= %s GROUP BY YEAR(av.date), country_origen.name"
self.cursor.execute(self.query,(PaisDestino, Mes, MinYear, MaxYear ))
return self.cursor
    # Shows the number of inbound flights into PaisDestino between MinYear and MaxYear
def ObtenerDatosVuelosEntrantesAenaDadoPaisDestinoAnioMinMax(self, PaisDestino, MinYear, MaxYear): #OK
self.cursor = self.connection.cursor()
self.query = "SELECT YEAR(ava.date) AS Anio, SUM(ava.flights) AS Numero_Vuelos FROM aena_vuelos_airline ava JOIN airport ap_destino ON ava.destination_id = ap_destino.id JOIN country countryDestino ON ap_destino.country_id = countryDestino.id WHERE countryDestino.name = %s AND year(ava.date) >= %s AND year(ava.date) <= %s GROUP BY YEAR(ava.date), countryDestino.name"
self.cursor.execute(self.query,(PaisDestino, MinYear , MaxYear))
return self.cursor
    # Shows all inbound flights into PaisDestino, organized monthly, from MinYear to MaxYear
    def ObtenerDatosVuelosEntrantesAenaMensualmenteDadoPaisDestinoAnioMinMax(self, PaisDestino, MinYear, MaxYear): # OK
self.cursor = self.connection.cursor()
self.query = str("SELECT YEAR(`date`) AS Anio, MONTH(date) AS Mes, SUM(ava.flights) AS Numero_Vuelos FROM aena_vuelos_airline ava JOIN airport ap_destino ON ava.destination_id = ap_destino.id JOIN country countryDestino ON ap_destino.country_id = countryDestino.id WHERE countryDestino.name = %s AND year(`date`) >= %s AND year(`date`) <= %s GROUP BY YEAR(`date`),MONTH(date), countryDestino.name")
self.cursor.execute(self.query,(PaisDestino, MinYear, MaxYear))
return self.cursor
# Shows all inbound flights to PaisDestino during month Mes, from MinYear to MaxYear
def ObtenerDatosVuelosEntrantesAenaEnUnMesDadoPaisDestinoMesAnioMinMax(self, PaisDestino, Mes, MinYear, MaxYear): #OK
self.cursor = self.connection.cursor()
Mes = self.ObtenerNumeroMesDadoNombre(Mes)
self.query = str("SELECT YEAR(`date`) AS Anio, SUM(ava.flights) AS Numero_Vuelos FROM aena_vuelos_airline ava JOIN airport ap_destino ON ava.destination_id = ap_destino.id JOIN country countryDestino ON ap_destino.country_id = countryDestino.id WHERE countryDestino.name = %s AND year(`date`) >= %s AND year(`date`) <= %s AND MONTH(date) = %s GROUP BY YEAR(`date`),MONTH(date), countryDestino.name")
self.cursor.execute(self.query,(PaisDestino, MinYear, MaxYear, Mes))
return self.cursor
# Shows all inbound flights to PaisDestino during Year, organized monthly
def ObtenerDatosVuelosEntrantesAenaMensualmenteDadoPaisDestinoAnio(self, PaisDestino, Year): #OK
self.cursor = self.connection.cursor()
self.query = str("SELECT MONTH(`date`) AS Mes, SUM(ava.flights) AS Numero_Vuelos FROM aena_vuelos_airline ava JOIN airport ap_destino ON ava.destination_id = ap_destino.id JOIN country countryDestino ON ap_destino.country_id = countryDestino.id WHERE countryDestino.name = %s AND year(`date`) = %s GROUP BY MONTH(`date`), countryDestino.name")
self.cursor.execute(self.query,(PaisDestino, Year))
return self.cursor
# Shows the destination cities and the number of flights arriving in PaisDestino, organized annually between MinYear and MaxYear
def ObtenerDatosVuelosEntrantesAenaDivididosPorCiudadesDadoPaisDestinoAnioMinMax(self, PaisDestino, MinYear, MaxYear): #General data
self.cursor = self.connection.cursor()
self.query = str("SELECT YEAR(`date`) AS Anio, city.name AS Ciudad , SUM(ava.flights) AS Numero_Vuelos FROM aena_vuelos_airline ava JOIN airport ap_destino ON ava.destination_id = ap_destino.id JOIN country countryDestino ON ap_destino.country_id = countryDestino.id JOIN city ON city.id = ap_destino.city_id WHERE countryDestino.name = %s AND year(`date`) >= %s AND year(`date`) <= %s GROUP BY YEAR(`date`), city.name")
self.cursor.execute(self.query,(PaisDestino, MinYear, MaxYear))
return self.cursor
# Shows the destination cities and the number of flights arriving in PaisDestino during the same month Mes, between MinYear and MaxYear
def ObtenerDatosVuelosEntrantesEnUnMesAenaDivididosPorCiudadesDadoPaisDestinoMesAnioMinMax(self, PaisDestino, Mes, MinYear, MaxYear): #General data
self.cursor = self.connection.cursor()
Mes = self.ObtenerNumeroMesDadoNombre(Mes)
self.query = str("SELECT YEAR(`date`) AS Anio, city.name AS Ciudad, SUM(ava.flights) AS Numero_Vuelos FROM aena_vuelos ava JOIN airport ap_destino ON ava.destination_id = ap_destino.id JOIN country countryDestino ON ap_destino.country_id = countryDestino.id JOIN city ON city.id = ap_destino.city_id WHERE countryDestino.name = %s AND year(`date`) >= %s AND year(`date`) <= %s AND MONTH(`date`) = %s GROUP BY YEAR(`date`), MONTH(ava.date), city.name")
self.cursor.execute(self.query,(PaisDestino, MinYear, MaxYear, Mes))
return self.cursor
# Shows the destination cities and the number of flights arriving in PaisDestino during year Year
def ObtenerDatosVuelosEntrantesAenaEnUnAnioDivididosPorCiudadDadoPaisDestinoAnio(self, PaisDestino, Year): #OK
self.cursor = self.connection.cursor()
self.query = str("SELECT city.name, SUM(ava.flights) FROM aena_vuelos ava JOIN airport ap_destino ON ava.destination_id = ap_destino.id JOIN country countryDestino ON ap_destino.country_id = countryDestino.id JOIN city ON city.id = ap_destino.city_id WHERE countryDestino.name = %s AND YEAR(ava.date) = %s GROUP BY city.name")
self.cursor.execute(self.query,(PaisDestino, Year))
return self.cursor
# Shows the destination cities and the number of flights arriving in PaisDestino in year Year, for the given month
def ObtenerDatosVuelosEntrantesAenaMensualmenteDivididosPorCiudadDadoPaisDestinoMesAnio(self, PaisDestino, Mes, Year): #OK
self.cursor = self.connection.cursor()
Mes = self.ObtenerNumeroMesDadoNombre(Mes)
self.query = str("SELECT city.name AS Ciudad, SUM(ava.flights) AS Numero_Vuelos FROM aena_vuelos ava JOIN airport ap_destino ON ava.destination_id = ap_destino.id JOIN country countryDestino ON ap_destino.country_id = countryDestino.id JOIN city ON city.id = ap_destino.city_id WHERE countryDestino.name = %s AND year(`date`) = %s AND MONTH(`date`) = %s GROUP BY YEAR(`date`),Month(`date`), city.name")
self.cursor.execute(self.query,(PaisDestino, Year, Mes))
return self.cursor
# Shows the number of flights arriving in a destination city between MinYear and MaxYear, organized by year
def ObtenerDatosVuelosEntrantesAenaDadoPaisDestinoCiudadDestinoAnioMinMax(self, PaisDestino, cityDestino, MinYear, MaxYear): #OK
self.cursor = self.connection.cursor()
self.query = str("SELECT YEAR(`date`) AS Anio, SUM(ava.flights) AS Numero_Vuelos FROM aena_vuelos ava JOIN airport ap_destino ON ava.destination_id = ap_destino.id JOIN country country_Destino ON ap_destino.country_id = country_Destino.id JOIN city cityDestino ON cityDestino.id = ap_destino.city_id WHERE country_Destino.name = %s AND cityDestino.name=%s AND year(`date`) >= %s AND year(`date`) <= %s GROUP BY YEAR(`date`), cityDestino.name")
self.cursor.execute(self.query,(PaisDestino, cityDestino, MinYear, MaxYear))
return self.cursor
# Shows the number of flights arriving in cityDestino during the same month Mes, between MinYear and MaxYear, organized by year
def ObtenerDatosVuelosEntrantesAenaEnUnMesDadoPaisDestinoCiudadDestinoMesAnioMinMax(self, PaisDestino, cityDestino, Mes, MinYear, MaxYear): #OK
self.cursor = self.connection.cursor()
Mes = self.ObtenerNumeroMesDadoNombre(Mes)
self.query = str("SELECT YEAR(ava.date) AS Anio, SUM(ava.flights) AS Numero_Vuelos FROM aena_vuelos ava JOIN airport ap_destino ON ava.destination_id = ap_destino.id JOIN country country_Destino ON ap_destino.country_id = country_Destino.id JOIN city cityDestino ON cityDestino.id = ap_destino.city_id WHERE country_Destino.name = %s AND cityDestino.name=%s AND year(`date`) >= %s AND year(`date`) <= %s AND MONTH(ava.date) = %s GROUP BY YEAR(ava.date), MONTH(ava.date), cityDestino.name")
self.cursor.execute(self.query,(PaisDestino, cityDestino, MinYear, MaxYear, Mes))
return self.cursor
# Shows the number of flights arriving in cityDestino in year Year
def ObtenerDatosVuelosEntrantesAenaDadoPaisDestinoCiudadDestinoAnio(self, PaisDestino, cityDestino, Year): #OK
self.cursor = self.connection.cursor()
self.query = str("SELECT YEAR(`date`) AS Anio, SUM(ava.flights) AS Numero_Vuelos FROM aena_vuelos_airline ava JOIN airport ap_destino ON ava.destination_id = ap_destino.id JOIN country country_Destino ON ap_destino.country_id = country_Destino.id JOIN city cityDestino ON cityDestino.id = ap_destino.city_id WERE country_Destino.name = %s AND cityDestino.name=%s AND year(`date`) = %s GROUP BY YEAR(`date`), cityDestino.name")
self.cursor.execute(self.query,(PaisDestino, cityDestino, Year))
return self.cursor
# Shows the number of flights arriving in cityDestino monthly during year Year
def ObtenerDatosVuelosEntrantesAenaEnUnAnioEnUnaCiudadMensualmenteDadoPaisDestinoCiudadAnio(self, PaisDestino, cityDestino, Year): #OK
self.cursor = self.connection.cursor()
self.query = str("SELECT MONTH(`date`) AS Mes, SUM(ava.flights) AS Numero_Vuelos FROM aena_vuelos ava JOIN airport ap_destino ON ava.destination_id = ap_destino.id JOIN country country_Destino ON ap_destino.country_id = country_Destino.id JOIN city cityDestino ON cityDestino.id = ap_destino.city_id WHERE country_Destino.name = %s AND cityDestino.name= %s AND year(`date`) = %s GROUP BY YEAR(`date`), Month(`date`), cityDestino.name")
self.cursor.execute(self.query,(PaisDestino, cityDestino, Year))
return self.cursor
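# Illustrative note (an addition for clarity, not part of the original class):
# every query method above returns the live pymysql cursor, so a hypothetical
# caller ("repo" below is an assumed instance of this class) is expected to
# iterate the rows and close the cursor itself, e.g.:
#
#     cursor = repo.ObtenerDatosVuelosEntrantesAenaDadoPaisDestinoAnioMinMax("Spain", 2015, 2018)
#     for anio, numero_vuelos in cursor:
#         print(anio, numero_vuelos)
#     cursor.close()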
##############################################################################################################################################################
############################INBOUND TOURISTS BY DESTINATION COUNTRY###############################################################################
##############################################################################################################################################################
# Shows all tourists entering PaisDestino between MinYear and MaxYear, split by origin city
def ObtenerPaisOrigenYNumeroTuristasAenaAnualmenteDadoPaisDestinoAnioMinMaxSeparadoPorCiudades(self, PaisDestino, MinYear, MaxYear):
self.cursor = self.connection.cursor()
self.query = str("SELECT YEAR(AV.date) AS Anio, country_origin.name AS Pais_Origen, city_origin.name AS Ciudad_Origen, SUM(AV.travelers) AS Numero_Turistas FROM `aena_vuelos` AV JOIN airport Airport_destino on Airport_destino.id = AV.destination_id JOIN city city_destino on Airport_destino.city_id = city_destino.id Join country country_Destino ON city_destino.country_id = country_Destino.id JOIN airport Airport_origen on Airport_origen.id = AV.origin_id JOIN city city_origin on Airport_origen.city_id = city_origin.id JOIN country country_origin on city_origin.country_id = country_origin.id where country_Destino.name = %s AND YEAR(AV.date) >= %s AND YEAR(AV.date) <= %s GROUP BY YEAR(AV.date), city_origin.name")
self.cursor.execute(self.query,(PaisDestino, MinYear, MaxYear))
return self.cursor
# Shows all tourists entering PaisDestino between MinYear and MaxYear, split by origin city and month
def ObtenerPaisOrigenYNumeroTuristasAenaAnualmenteDadoPaisDestinoAnioMinMaxSeparadoPorCiudadesYMeses(self, PaisDestino, MinYear, MaxYear):
self.cursor = self.connection.cursor()
self.query = str("SELECT YEAR(AV.date) AS Anio,MONTH(AV.date) AS Mes, country_origin.name AS Pais_origen, city_origin.name AS Ciudad_Origen, SUM(AV.travelers) AS Numero_Turistas FROM `aena_vuelos` AV JOIN airport Airport_destino on Airport_destino.id = AV.destination_id JOIN city city_destino on Airport_destino.city_id = city_destino.id Join country country_Destino ON city_destino.country_id = country_Destino.id JOIN airport Airport_origen on Airport_origen.id = AV.origin_id JOIN city city_origin on Airport_origen.city_id = city_origin.id JOIN country country_origin on city_origin.country_id = country_origin.id where country_Destino.name = %s AND YEAR(AV.date) >= %s AND YEAR(AV.date) <= %s GROUP BY YEAR(AV.date), city_origin.name, MONTH(AV.date)")
self.cursor.execute(self.query,(PaisDestino, MinYear, MaxYear))
return self.cursor
# Shows all tourists entering PaisDestino between MinYear and MaxYear
def ObtenerNumeroTuristasAenaAnualmenteDadoPaisDestinoAnioMinMax(self, PaisDestino, MinYear, MaxYear): #OK
self.cursor = self.connection.cursor()
self.query = str("SELECT YEAR(AV.date) AS Anio, SUM(AV.travelers) AS Numero_Turistas FROM `aena_vuelos` AV JOIN airport Airport_destino on Airport_destino.id = AV.destination_id JOIN city city_destino on Airport_destino.city_id = city_destino.id Join country country_Destino ON city_destino.country_id = country_Destino.id JOIN airport Airport_origen on Airport_origen.id = AV.origin_id JOIN city city_origin on Airport_origen.city_id = city_origin.id JOIN country country_origin on city_origin.country_id = country_origin.id where country_Destino.name = %s AND YEAR(AV.date) >= %s AND YEAR(AV.date) <= %s GROUP BY YEAR(AV.date)")
self.cursor.execute(self.query,(PaisDestino, MinYear, MaxYear))
return self.cursor
# Shows all tourists entering PaisDestino between MinYear and MaxYear, split by month
def ObtenerNumeroTuristasAenaAnualmenteDadoPaisDestinoAnioMinMaxSeparadoPorMeses(self, PaisDestino, MinYear, MaxYear): #OK
self.cursor = self.connection.cursor()
self.query = str("SELECT YEAR(AV.date) AS Anio, MONTH(AV.date) AS Mes, SUM(AV.travelers) AS Numero_Turistas FROM `aena_vuelos` AV JOIN airport Airport_destino on Airport_destino.id = AV.destination_id JOIN city city_destino on Airport_destino.city_id = city_destino.id Join country country_Destino ON city_destino.country_id = country_Destino.id JOIN airport Airport_origen on Airport_origen.id = AV.origin_id JOIN city city_origin on Airport_origen.city_id = city_origin.id JOIN country country_origin on city_origin.country_id = country_origin.id where country_Destino.name = %s AND YEAR(AV.date) >= %s AND YEAR(AV.date) <= %s GROUP BY YEAR(AV.date), MONTH(AV.date)")
self.cursor.execute(self.query,(PaisDestino, MinYear, MaxYear))
return self.cursor
# Shows the origin countries and the number of tourists entering PaisDestino and CiudadDestino annually, between MinYear and MaxYear
def ObtenerPaisOrigenYNumeroTuristasAenaAnualmenteDadoPaisDestinoCiudadDestinoAnioMinMax(self, PaisDestino, CiudadDestino, MinYear, MaxYear): #OK
self.cursor = self.connection.cursor()
self.query = str("SELECT YEAR(AV.date) AS Anio, country_origin.name AS Pais_Origen, city_origin.name AS Ciudad_Origen, SUM(AV.travelers) AS Numero_Turistas FROM `aena_vuelos` AV JOIN airport Airport_destino on Airport_destino.id = AV.destination_id JOIN city city_destino on Airport_destino.city_id = city_destino.id Join country country_Destino ON city_destino.country_id = country_Destino.id JOIN airport Airport_origen on Airport_origen.id = AV.origin_id JOIN city city_origin on Airport_origen.city_id = city_origin.id JOIN country country_origin on city_origin.country_id = country_origin.id where country_Destino.name = %s AND city_destino.name = %s AND YEAR(AV.date) >= %s AND YEAR(AV.date) <= %s GROUP BY YEAR(AV.date), city_origin.name")
self.cursor.execute(self.query,(PaisDestino, CiudadDestino, MinYear, MaxYear))
return self.cursor
# Shows the origin countries and the number of tourists entering PaisDestino and CiudadDestino during month Mes, between MinYear and MaxYear
def ObtenerPaisOrigenYNumeroTuristasAenaAnualmenteDadoPaisDestinoCiudadDestinoMesAnioMinMax(self, PaisDestino, CiudadDestino, Mes, MinYear, MaxYear): #OK
self.cursor = self.connection.cursor()
Mes = self.ObtenerNumeroMesDadoNombre(Mes)
self.query = str("SELECT YEAR(AV.date) AS Anio, country_origin.name AS Pais_Origen, city_origin.name AS Ciudad_Origen, SUM(AV.travelers) AS Numero_Turistas FROM `aena_vuelos` AV JOIN airport Airport_destino on Airport_destino.id = AV.destination_id JOIN city city_destino on Airport_destino.city_id = city_destino.id Join country country_Destino ON city_destino.country_id = country_Destino.id JOIN airport Airport_origen on Airport_origen.id = AV.origin_id JOIN city city_origin on Airport_origen.city_id = city_origin.id JOIN country country_origin on city_origin.country_id = country_origin.id where country_Destino.name = %s AND city_destino.name = %s AND MONTH(AV.date) = %s AND YEAR(AV.date) >= %s AND YEAR(AV.date) <= %s GROUP BY YEAR(AV.date), city_origin.name")
self.cursor.execute(self.query,(PaisDestino, CiudadDestino, Mes, MinYear, MaxYear))
return self.cursor
# Shows the total number of tourists entering PaisDestino and CiudadDestino in month Mes of Year
def ObtenerPaisOrigenYNumeroTuristasAenaTotalesEnUnAnioDadoPaisDestinoCiudadDestinoMesAnioMinMax(self, PaisDestino, CiudadDestino, Mes, Year): #OK
self.cursor = self.connection.cursor()
Mes = self.ObtenerNumeroMesDadoNombre(Mes)
self.query = str("SELECT country_origin.name AS Pais_Origen, SUM(AV.travelers) AS Numero_Turistas FROM `aena_vuelos` AV JOIN airport Airport_destino on Airport_destino.id = AV.destination_id JOIN city city_destino on Airport_destino.city_id = city_destino.id Join country country_Destino ON city_destino.country_id = country_Destino.id JOIN airport Airport_origen on Airport_origen.id = AV.origin_id JOIN city city_origin on Airport_origen.city_id = city_origin.id JOIN country country_origin on city_origin.country_id = country_origin.id where country_Destino.name = %s AND city_destino.name = %s AND MONTH(AV.date) = %s AND YEAR(AV.date) = %s GROUP BY YEAR(AV.date)")
self.cursor.execute(self.query,(PaisDestino, CiudadDestino, Mes, Year))
return self.cursor
# Shows the origin countries and the total number of tourists entering PaisDestino and CiudadDestino in month Mes of Year
def ObtenerNumeroTuristasYPaisOrigenAenaTotalesEnUnAnioDadoPaisDestinoCiudadDestinoMesAnioMinMax(self, PaisDestino, CiudadDestino, Mes, Year): #OK
self.cursor = self.connection.cursor()
Mes = self.ObtenerNumeroMesDadoNombre(Mes)
self.query = str("SELECT country_origin.name AS Pais_Origen, SUM(AV.travelers) AS Numero_Turistas FROM `aena_vuelos` AV JOIN airport Airport_destino on Airport_destino.id = AV.destination_id JOIN city city_destino on Airport_destino.city_id = city_destino.id Join country country_Destino ON city_destino.country_id = country_Destino.id JOIN airport Airport_origen on Airport_origen.id = AV.origin_id JOIN city city_origin on Airport_origen.city_id = city_origin.id JOIN country country_origin on city_origin.country_id = country_origin.id where country_Destino.name = %s AND city_destino.name = %s AND MONTH(AV.date) = %s AND YEAR(AV.date) = %s GROUP BY country_origin.name, MONTH(AV.date)")
self.cursor.execute(self.query,(PaisDestino, CiudadDestino, Mes, Year))
return self.cursor
# Shows all tourists entering PaisDestino and CiudadDestino in Year, split by month
def ObtenerOrigenYNumeroTuristasAenaMensualmenteEnUnAnioDadoPaisDestinoCiudadDestinoAnioMinMax(self, PaisDestino, CiudadDestino, Year): #OK
self.cursor = self.connection.cursor()
self.query = str("SELECT MONTH(AV.date) AS Mes, SUM(AV.travelers) AS Numero_Turistas FROM `aena_vuelos` AV JOIN airport Airport_destino on Airport_destino.id = AV.destination_id JOIN city city_destino on Airport_destino.city_id = city_destino.id Join country country_Destino ON city_destino.country_id = country_Destino.id JOIN airport Airport_origen on Airport_origen.id = AV.origin_id JOIN city city_origin on Airport_origen.city_id = city_origin.id JOIN country country_origin on city_origin.country_id = country_origin.id where country_Destino.name = %s AND city_destino.name = %s AND YEAR(AV.date) = %s GROUP BY MONTH(AV.date)")
self.cursor.execute(self.query,(PaisDestino, CiudadDestino, Year))
return self.cursor
# Shows the origin countries and the total number of tourists entering PaisDestino in Year
def ObtenerPaisOrigenYNumeroTuristasAenaMensualmenteEnUnAnioTotalesDadoPaisDestinoAnio(self, PaisDestino, Year): #OK
self.cursor = self.connection.cursor()
self.query = str("SELECT country_origin.name AS Pais_Origen, SUM(AV.travelers) AS Numero_Turistas FROM `aena_vuelos` AV JOIN airport Airport_destino on Airport_destino.id = AV.destination_id JOIN city city_destino on Airport_destino.city_id = city_destino.id Join country country_Destino ON city_destino.country_id = country_Destino.id JOIN airport Airport_origen on Airport_origen.id = AV.origin_id JOIN city city_origin on Airport_origen.city_id = city_origin.id JOIN country country_origin on city_origin.country_id = country_origin.id where country_Destino.name = %s AND YEAR(AV.date) = %s GROUP BY country_origin.name")
self.cursor.execute(self.query,(PaisDestino, Year))
return self.cursor
# Shows the origin countries and the number of tourists entering PaisDestino in Year, split by month
def ObtenerPaisOrigenYNumeroTuristasAenaMensualmenteEnUnAnioDadoPaisDestinoAnio(self, PaisDestino, Year): #OK
self.cursor = self.connection.cursor()
self.query = str("SELECT country_origin.name AS Pais_Origen, MONTH(AV.date) AS Mes, SUM(AV.travelers) AS Numero_Turistas FROM `aena_vuelos` AV JOIN airport Airport_destino on Airport_destino.id = AV.destination_id JOIN city city_destino on Airport_destino.city_id = city_destino.id Join country country_Destino ON city_destino.country_id = country_Destino.id JOIN airport Airport_origen on Airport_origen.id = AV.origin_id JOIN city city_origin on Airport_origen.city_id = city_origin.id JOIN country country_origin on city_origin.country_id = country_origin.id where country_Destino.name = %s AND YEAR(AV.date) = %s GROUP BY country_origin.name, MONTH(AV.date)")
self.cursor.execute(self.query,(PaisDestino, Year))
return self.cursor
# Shows the monthly number of tourists entering PaisDestino from PaisOrigen in Year
def ObtenerNumeroTuristasAenaMensualmenteEnUnAnioDadoPaisDestinoAnioYPaisOrigen(self, PaisDestino, PaisOrigen, Year): #OK
self.cursor = self.connection.cursor()
self.query = str("SELECT MONTH(AV.date) AS Me, SUM(AV.travelers) AS Numero_Turistas FROM `aena_vuelos` AV JOIN airport Airport_destino on Airport_destino.id = AV.destination_id JOIN city city_destino on Airport_destino.city_id = city_destino.id Join country country_Destino ON city_destino.country_id = country_Destino.id JOIN airport Airport_origen on Airport_origen.id = AV.origin_id JOIN city city_origin on Airport_origen.city_id = city_origin.id JOIN country country_origin on city_origin.country_id = country_origin.id where country_Destino.name = %s AND country_origin.name = %s AND YEAR(AV.date) = %s GROUP BY country_origin.name, MONTH(AV.date)")
self.cursor.execute(self.query,(PaisDestino, PaisOrigen, Year))
return self.cursor
def ObtenerNumeroTuristasAenaMensualmenteDadoPaisDestinoAnioPaisOrigenAnioMinMax(self, PaisDestino, PaisOrigen, MinYear, MaxYear): #OK
self.cursor = self.connection.cursor()
self.query = str("SELECT YEAR('AV.date') AS Anio, MONTH(AV.date) AS Me, SUM(AV.travelers) AS Numero_Turistas FROM `aena_vuelos` AV JOIN airport Airport_destino on Airport_destino.id = AV.destination_id JOIN city city_destino on Airport_destino.city_id = city_destino.id Join country country_Destino ON city_destino.country_id = country_Destino.id JOIN airport Airport_origen on Airport_origen.id = AV.origin_id JOIN city city_origin on Airport_origen.city_id = city_origin.id JOIN country country_origin on city_origin.country_id = country_origin.id where country_Destino.name = %s AND country_origin.name = %s AND YEAR(AV.date) >= %s AND YEAR(AV.date) <= %s GROUP BY country_origin.name,YEAR('AV.date'), MONTH(AV.date)")
self.cursor.execute(self.query,(PaisDestino, PaisOrigen, MinYear, MaxYear))
return self.cursor
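# Note (an addition for clarity, not part of the original class):
# ObtenerNumeroMesDadoNombre is defined elsewhere in this class; the methods
# above assume it maps a Spanish month name to its 1-12 number, roughly like
# this hypothetical sketch:
#
#     MESES = {"enero": 1, "febrero": 2, "marzo": 3, "abril": 4, "mayo": 5,
#              "junio": 6, "julio": 7, "agosto": 8, "septiembre": 9,
#              "octubre": 10, "noviembre": 11, "diciembre": 12}
#
#     def ObtenerNumeroMesDadoNombre(self, nombre):
#         return MESES[nombre.lower()]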
#####################################################################################################################################################################
##################################OUTBOUND TOURISTS####################################################
#####################################################################################################################################################################
# Shows the number of tourists traveling to PaisDestino between MinYear and MaxYear
def ObtenerNumeroTuristasAenaDadoPaisDestinoAnioMinMax(self, PaisDestino, MinYear, MaxYear): #OK
self.cursor = self.connection.cursor()
self.query = str("SELECT YEAR(`date`) AS Anio, SUM(travelers) AS Numero_Turistas FROM aena_vuelos_airline ava JOIN airport ap_destino ON ava.destination_id = ap_destino.id JOIN country country_Destino ON ap_destino.country_id = country_Destino.id WHERE country_Destino.name = %s AND year(`date`) >= %s AND year(`date`) <= %s GROUP BY YEAR(`date`), country_Destino.name")
self.cursor.execute(self.query,(PaisDestino, MinYear, MaxYear))
return self.cursor
# Shows the number of tourists traveling to PaisDestino in Year
def ObtenerDatosTuristasAenaEnUnAnioDadoPaisDestinoAnio(self, PaisDestino, Year): #OK
return self.ObtenerNumeroTuristasAenaDadoPaisDestinoAnioMinMax(PaisDestino, Year, Year)
# Shows the number of tourists traveling to cityDestino in PaisDestino between MinYear and MaxYear
def ObtenerDatosTuristasAenaDadoPaisDestinoCiudadDestinoAnioMinMax(self, PaisDestino, cityDestino, MinYear, MaxYear): #OK
self.cursor = self.connection.cursor()
self.query = str("SELECT YEAR(`date`) AS Anio, SUM(travelers) AS Numero_Turistas FROM aena_vuelos_airline ava JOIN airport ap_destino ON ava.destination_id = ap_destino.id JOIN country country_Destino ON ap_destino.country_id = country_Destino.id JOIN city city_destino ON city_destino.id = ap_destino.city_id WHERE country_Destino.name = %sAND year(`date`) >= %s AND year(`date`) <= %s AND city_destino.name=%s GROUP BY YEAR(`date`), city_destino.name")
self.cursor.execute(self.query,(PaisDestino, MinYear, MaxYear, cityDestino))
return self.cursor
# Shows the number of tourists traveling to cityDestino in PaisDestino in Year, split by month
def ObtenerDatosTuristasMensualmenteAenaDadoPaisDestinoCiudadAnio(self, PaisDestino, cityDestino, Year): #REVIEW
self.cursor = self.connection.cursor()
self.query = str("SELECT YEAR(`date`) AS Anio, Month(`date`) AS Mes, SUM(travelers) AS Numero_Turistas FROM aena_vuelos_airline ava JOIN airport ap_destino ON ava.destination_id = ap_destino.id JOIN country country_Destino ON ap_destino.country_id = country_Destino.id JOIN city city_destino ON city_destino.id = ap_destino.city_id WHERE country_Destino.name = %s AND year(`date`) = %s AND city_destino.name= %s GROUP BY YEAR(`date`), city_destino.name, Month(`date`)")
self.cursor.execute(self.query,(PaisDestino, Year, cityDestino))
return self.cursor
# Shows the number of tourists traveling to PaisDestino and cityDestino during the same month Mes, between MinYear and MaxYear
def ObtenerDatosTuristasAenaDadoPaisCiudadMesAnioMinMax(self, PaisDestino, cityDestino, Mes, MinYear, MaxYear): #REVIEW
self.cursor = self.connection.cursor()
Mes = self.ObtenerNumeroMesDadoNombre(Mes)
self.query = str("SELECT YEAR(`date`) AS Anio, SUM(travelers) AS Numero_Turistas FROM aena_vuelos_airline ava JOIN airport ap_destino ON ava.destination_id = ap_destino.id JOIN country country_Destino ON ap_destino.country_id = country_Destino.id JOIN city city_destino ON city_destino.id = ap_destino.city_id WHERE country_Destino.name = %s AND year(`date`) >= %s AND YEAR(`date`) <= %s AND city_destino.name= %s AND MONTH(`date`) =%s GROUP BY YEAR(`date`), city_destino.name, Month(`date`)")
self.cursor.execute(self.query,(PaisDestino, MinYear, MaxYear, cityDestino, Mes))
return self.cursor
# Shows the number of tourists departing from CiudadOrigen in paisOrigin during month Mes of Year
def ObtenerNumeroTuristasAenaDadoPaisOrigenCiudadOrigenMesAnio(self, paisOrigin, CiudadOrigen, Mes, Year):
self.cursor = self.connection.cursor()
Mes = self.ObtenerNumeroMesDadoNombre(Mes)
self.query = str("SELECT SUM(travelers) AS Numero_Turistas FROM aena_vuelos ava JOIN airport ap_origin ON ava.origin_id = ap_origin.id JOIN country country_Origin ON ap_origin.country_id = country_Origin.id JOIN city ciudadOrigen on ciudadOrigen.country_id = country_Origin.id WHERE country_Origin.name = %s AND ciudadOrigen.name=%s AND MONTH(`date`)= %s AND year(`date`) = %s GROUP BY ciudadOrigen.name, Month(`date`)")
self.cursor.execute(self.query,(paisOrigin, CiudadOrigen, Mes, Year))
return self.cursor
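# Note (an addition for clarity, not part of the original class): all queries
# in this class bind user-supplied values through pymysql's %s placeholders
# rather than string interpolation, so the driver escapes them and SQL
# injection via the name parameters is avoided:
#
#     cursor.execute("... WHERE country.name = %s", (pais,))    # safe: driver escapes
#     cursor.execute("... WHERE country.name = '%s'" % pais)    # unsafe: never do this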
#########################################################################################################################
#################################OUTBOUND FLIGHTS##########################################################
#########################################################################################################################
# Shows the number of outbound flights from PaisOrigen between MinYear and MaxYear
def ObtenerDatosVuelosSalientesAenaDadoPaisOrigenAnioMinMax(self, PaisOrigen, MinYear, MaxYear): #REVIEW
self.cursor = self.connection.cursor()
self.query = str("SELECT Year(AV.date) AS ANIO, country_Destino.name AS Pais_Destino, city_destino.name AS Ciudad_Destino, SUM(AV.flights) AS Numero_Vuelos FROM aena_vuelos AV JOIN airport AP_origen on AV.origin_id = AP_origen.id JOIN country country_origen ON AP_origen.country_id = country_origen.id JOIN airport AP_Destino ON AP_Destino.id = AV.destination_id JOIN country country_Destino ON country_Destino.id = AP_Destino.country_id JOIN city city_destino ON AP_Destino.city_id = city_destino.id WHERE country_origen.name = %s and country_origen.name != country_Destino.name AND Year(AV.date) >= %s AND Year(AV.date) <= %s GROUP BY country_origen.name, country_Destino.name, city_destino.name, Year(AV.date)")
self.cursor.execute(self.query,(PaisOrigen, MinYear, MaxYear))
return self.cursor
# Shows the number of outbound flights from PaisOrigen to cityDestino, monthly, between MinYear and MaxYear
def ObtenerDatosVuelosSalientesMensualmenteAenaEnUnaCiudadDadoPaisOrigenCiudadDestinoAnioMinMax(self, PaisOrigen, cityDestino, MinYear, MaxYear): #REVIEW
self.cursor = self.connection.cursor()
self.query = str("SELECT YEAR(AV.date) AS Anio ,MONTH(AV.date) AS Mes, SUM(AV.flights) AS Numero_Vuelos FROM aena_vuelos AV JOIN airport AP_origen on AV.origin_id = AP_origen.id JOIN country country_origen ON AP_origen.country_id = country_origen.id JOIN airport AP_Destino ON AP_Destino.id = AV.destination_id JOIN country country_Destino ON country_Destino.id = AP_Destino.country_id JOIN city city_destino ON AP_Destino.city_id = city_destino.id WHERE country_origen.name = %s and country_origen.name != country_Destino.name AND Year(AV.date) >= %s AND Year(AV.date) <= %s AND city_destino.name = %s GROUP BY country_origen.name, country_Destino.name, city_destino.name, Year(AV.date), MONTH(AV.date)")
self.cursor.execute(self.query,(PaisOrigen, MinYear, MaxYear, cityDestino))
return self.cursor
# Shows the number of outbound flights from PaisOrigen to cityDestino in Year (single value)
def ObtenerCantidadVuelosAenaSalientesDadoPaisOrigenCiudadDestinoAnio(self, PaisOrigen, cityDestino, Year):
self.cursor = self.connection.cursor()
self.query = str("SELECT YEAR(AV.date) AS Anio , SUM(AV.flights) AS Numero_Vuelos FROM aena_vuelos AV JOIN airport AP_origen on AV.origin_id = AP_origen.id JOIN country country_origen ON AP_origen.country_id = country_origen.id JOIN airport AP_Destino ON AP_Destino.id = AV.destination_id JOIN country country_Destino ON country_Destino.id = AP_Destino.country_id JOIN city city_destino ON AP_Destino.city_id = city_destino.id WHERE country_origen.name = %s and country_origen.name != country_Destino.name AND Year(AV.date) = %s AND city_destino.name = %s GROUP BY country_origen.name, country_Destino.name, city_destino.name, Year(AV.date)")
self.cursor.execute(self.query,(PaisOrigen, Year, cityDestino))
return self.cursor
# Shows the number of outbound flights from PaisOrigen to cityDestino in Year, grouped by month
def ObtenerCantidadVuelosAenaSalientesMensualmenteDadoPaisOrigenCiudadOrigenAnio(self, PaisOrigen, cityDestino, Year):
self.cursor = self.connection.cursor()
self.query = str("SELECT MONTH(AV.date) AS Mes , SUM(AV.flights) AS Numero_Vuelos FROM aena_vuelos AV JOIN airport AP_origen on AV.origin_id = AP_origen.id JOIN country country_origen ON AP_origen.country_id = country_origen.id JOIN airport AP_Destino ON AP_Destino.id = AV.destination_id JOIN country country_Destino ON country_Destino.id = AP_Destino.country_id JOIN city city_destino ON AP_Destino.city_id = city_destino.id WHERE country_origen.name = %s and country_origen.name != country_Destino.name AND Year(AV.date) = %s AND city_destino.name = %s GROUP BY country_origen.name, country_Destino.name, city_destino.name, Month(AV.date), Year(AV.date)")
self.cursor.execute(self.query,(PaisOrigen, Year, cityDestino))
return self.cursor
# Gets the number of outbound flights from PaisOrigen, showing destination country and city, for a given month between MinYear and MaxYear
def ObtenerDatosVuelosSalientesAenaPaisesAlosQueSeViajaEnUnMesSeparadosPorAniosYCiudadesDadoPaisOrigenMesAniosMinMax(self, PaisOrigen, Mes, MinYear, MaxYear): #OK
self.cursor = self.connection.cursor()
Mes = self.ObtenerNumeroMesDadoNombre(Mes)
self.query = str("SELECT Year(AV.date) AS Anio, country_Destino.name AS Pais_Destino, city_destino.name AS Ciudad_Destino, SUM(AV.flights) AS Numero_Vuelos FROM aena_vuelos AV JOIN airport AP_origen on AV.origin_id = AP_origen.id JOIN country country_origen ON AP_origen.country_id = country_origen.id JOIN airport AP_Destino ON AP_Destino.id = AV.destination_id JOIN country country_Destino ON country_Destino.id = AP_Destino.country_id JOIN city city_destino ON AP_Destino.city_id = city_destino.id WHERE country_origen.name =%s AND YEAR(AV.date) >= %s AND YEAR(AV.date) <= %s AND MONTH(AV.date) = %s GROUP BY Year(AV.date), country_origen.name, country_Destino.name, city_destino.name ")
self.cursor.execute(self.query,(PaisOrigen, MinYear, MaxYear, Mes))
return self.cursor
# Gets the number of flights and the destinations from an origin country in a given year
def ObtenerCantidadVuelosAenaSalientesDadoPaisOrigenAnio(self, PaisOrigen, Year): #OK
self.cursor = self.connection.cursor()
self.query = str("SELECT city_destino.name AS Ciudad_Destino, SUM(AV.flights) AS Numero_Vuelos FROM aena_vuelos AV JOIN airport AP_origen on AV.origin_id = AP_origen.id JOIN country country_origen ON AP_origen.country_id = country_origen.id JOIN airport AP_Destino ON AP_Destino.id = AV.destination_id JOIN country country_Destino ON country_Destino.id = AP_Destino.country_id JOIN city city_destino ON AP_Destino.city_id = city_destino.id WHERE country_origen.name =%s and country_origen.name != country_Destino.name AND YEAR(AV.date) = %s GROUP BY country_origen.name, country_Destino.name, city_destino.name, Year(AV.date)")
self.cursor.execute(self.query,(PaisOrigen, Year))
return self.cursor
# Shows all flights and destinations from an origin country
def ObtenerCantidadVuelosSalientesHaciaCiudadesPorAniosMesesDadoPaisOrigen(self, PaisOrigen): #OK
self.cursor = self.connection.cursor()
self.query = str("SELECT country_Destino.name AS Pais_Destino, city_destino.name AS Ciudad_Destino, Year(AV.date) AS Anio, MONTH(AV.date) AS Mes, SUM(AV.flights) AS Numero_Vuelos FROM aena_vuelos AV JOIN airport AP_origen on AV.origin_id = AP_origen.id JOIN country country_origen ON AP_origen.country_id = country_origen.id JOIN airport AP_Destino ON AP_Destino.id = AV.destination_id JOIN country country_Destino ON country_Destino.id = AP_Destino.country_id JOIN city city_destino ON AP_Destino.city_id = city_destino.id WHERE country_origen.name = %s and country_origen.name != country_Destino.name GROUP BY country_origen.name, country_Destino.name, city_destino.name, Year(AV.date), MONTH(AV.date) ")
self.cursor.execute(self.query,(PaisOrigen,))
return self.cursor
# Gets the number of outbound flights split by year and month, given PaisOrigen and CiudadDestino
def ObtenerCantidadVuelosAenaSalientesHaciaCiudadesPorAniosMesDadoPaisOrigenCiudadDestino(self, PaisOrigen, CiudadDestino): #OK
self.cursor = self.connection.cursor()
self.query = str("SELECT YEAR(AV.date) AS Anio, MONTH(AV.date) AS Mes, SUM(AV.flights) AS Numero_Vuelos FROM aena_vuelos AV JOIN airport AP_origen on AV.origin_id = AP_origen.id JOIN country country_origen ON AP_origen.country_id = country_origen.id JOIN airport AP_Destino ON AP_Destino.id = AV.destination_id JOIN country country_Destino ON country_Destino.id = AP_Destino.country_id JOIN city city_destino ON AP_Destino.city_id = city_destino.id WHERE country_origen.name = %s and country_origen.name != country_Destino.name AND city_destino.name = %s GROUP BY country_origen.name, country_Destino.name, city_destino.name, Year(AV.date), MONTH(AV.date) ")
self.cursor.execute(self.query,(PaisOrigen, CiudadDestino))
return self.cursor
# Gets the number of outbound flights from PaisOrigen between MinYear and MaxYear, split by year and month
def ObtenerCantidadVuelosSalientesHaciaCiudadesPorDadoPaisOrigenAnioMinMaxMensualmente(self, PaisOrigen, MinYear, MaxYear): #OK
self.cursor = self.connection.cursor()
self.query = str("SELECT country_Destino.name AS Pais_Destino, city_destino.name AS Ciudad_Destino, Year(AV.date) AS Anio, MONTH(AV.date) AS Mes, SUM(AV.flights) AS Numero_Vuelos FROM aena_vuelos AV JOIN airport AP_origen on AV.origin_id = AP_origen.id JOIN country country_origen ON AP_origen.country_id = country_origen.id JOIN airport AP_Destino ON AP_Destino.id = AV.destination_id JOIN country country_Destino ON country_Destino.id = AP_Destino.country_id JOIN city city_destino ON AP_Destino.city_id = city_destino.id WHERE country_origen.name = %s and country_origen.name != country_Destino.name AND YEAR(AV.date) >= %s AND YEAR(AV.date) <= %s GROUP BY country_origen.name, country_Destino.name, city_destino.name, Year(AV.date), MONTH(AV.date)")
self.cursor.execute(self.query,(PaisOrigen, MinYear, MaxYear))
return self.cursor
# Gets the number of outbound flights split by month and destination city, given PaisOrigen and Year
def ObtenerCantidadVuelosSalientesDivididosPorMesPorCiudadDadoPaisOrigenAnio(self, PaisOrigen, Year): #OK
self.cursor = self.connection.cursor()
self.query = str("SELECT city_destino.name AS Ciudad_Destino, MONTH(AV.date) AS Mes, (AV.flights) AS Numero_Vuelos FROM aena_vuelos AV JOIN airport AP_origen on AV.origin_id = AP_origen.id JOIN country country_origen ON AP_origen.country_id = country_origen.id JOIN airport AP_Destino ON AP_Destino.id = AV.destination_id JOIN country country_Destino ON country_Destino.id = AP_Destino.country_id JOIN city city_destino ON AP_Destino.city_id = city_destino.id WHERE country_origen.name = %s AND country_origen.name != country_Destino.name AND YEAR(AV.date) = %s GROUP BY country_origen.name, country_Destino.name, city_destino.name, MONTH(AV.date)")
self.cursor.execute(self.query,(PaisOrigen, Year))
return self.cursor
# Gets the number of outbound flights by destination city and year, for month Mes between MinYear and MaxYear
def ObtenerCantidadVuelosPorCiudadYAniosDadoPaisOrigenMesAniosMinMax(self, PaisOrigen, Mes, MinYear, MaxYear): #OK
self.cursor = self.connection.cursor()
Mes = self.ObtenerNumeroMesDadoNombre(Mes)
self.query = str("SELECT YEAR(AV.date) AS Anio, country_Destino.name AS Pais_Destino,city_destino.name AS Ciudad_Destino, SUM(AV.flights) AS Numero_Vuelos FROM aena_vuelos AV JOIN airport AP_origen on AV.origin_id = AP_origen.id JOIN country country_origen ON AP_origen.country_id = country_origen.id JOIN airport AP_Destino ON AP_Destino.id = AV.destination_id JOIN country country_Destino ON country_Destino.id = AP_Destino.country_id JOIN city city_destino ON AP_Destino.city_id = city_destino.id WHERE country_origen.name = %s and country_origen.name != country_Destino.name AND YEAR(AV.date) >= %s AND YEAR(AV.date) < %s AND MONTH(AV.date) = %s GROUP BY country_origen.name, country_Destino.name, city_destino.name, Year(AV.date), MONTH(AV.date) ")
self.cursor.execute(self.query,(PaisOrigen, MinYear, MaxYear, Mes))
return self.cursor
# Gets the number of flights from PaisOrigen to cityDestino in PaisDestino, between MinYear and MaxYear
def ObtenerDatosVuelosAenaEntreDosPaisesDadoPaisOrigenPaisDestinoCiudadDestinoAniosMinMax(self, PaisOrigen, PaisDestino, cityDestino, MinYear, MaxYear): #OK
self.cursor = self.connection.cursor()
self.query = str("SELECT YEAR(AV.date) AS Anio , SUM(AV.flights) AS Numero_Vuelos FROM aena_vuelos AV JOIN airport AP_origen on AV.origin_id = AP_origen.id JOIN country country_origen ON AP_origen.country_id = country_origen.id JOIN airport AP_Destino ON AP_Destino.id = AV.destination_id JOIN country country_Destino ON country_Destino.id = AP_Destino.country_id JOIN city city_destino ON AP_Destino.city_id = city_destino.id WHERE country_origen.name = %s and country_Destino.name = %s AND YEAR(AV.date) >= %s AND YEAR(AV.date) <= %s AND city_destino.name = %s GROUP BY country_origen.name, country_Destino.name, city_destino.name, Year(AV.date)")
self.cursor.execute(self.query,(PaisOrigen, PaisDestino, MinYear, MaxYear, cityDestino))
return self.cursor
# Shows the number of flights from PaisOrigen to cityDestino in PaisDestino during month Mes, between MinYear and MaxYear
def ObtenerDatosVuelosAenaEntreDosPaisesEnUnMesDadoPaisOrigenPaisDestinoCiudadDestinoAniosMinMax(self, PaisOrigen, PaisDestino, cityDestino, Mes, MinYear, MaxYear):
self.cursor = self.connection.cursor()
Mes = self.ObtenerNumeroMesDadoNombre(Mes)
self.query = str("SELECT YEAR(AV.date) AS Anio , SUM(AV.flights) AS Numero_Vuelos FROM aena_vuelos AV JOIN airport AP_origen on AV.origin_id = AP_origen.id JOIN country country_origen ON AP_origen.country_id = country_origen.id JOIN airport AP_Destino ON AP_Destino.id = AV.destination_id JOIN country country_Destino ON country_Destino.id = AP_Destino.country_id JOIN city city_destino ON AP_Destino.city_id = city_destino.id WHERE country_origen.name = %s and country_Destino.name =%s AND YEAR(AV.date) >= %s AND YEAR(AV.date) <= %s AND city_destino.name = %s AND MONTH(AV.date) = %s GROUP BY country_origen.name, country_Destino.name, city_destino.name, Year(AV.date)")
self.cursor.execute(self.query,(PaisOrigen, PaisDestino, MinYear, MaxYear, cityDestino, Mes))
return self.cursor
| 114.141153 | 790 | 0.729452 | 7,630 | 57,413 | 5.346003 | 0.03211 | 0.025006 | 0.022064 | 0.036774 | 0.858887 | 0.848051 | 0.841236 | 0.833513 | 0.821108 | 0.813091 | 0 | 0.004011 | 0.153223 | 57,413 | 502 | 791 | 114.368526 | 0.835013 | 0.166652 | 0 | 0.483108 | 0 | 0.168919 | 0.615842 | 0.053571 | 0 | 0 | 0 | 0.001992 | 0 | 1 | 0.179054 | false | 0 | 0.006757 | 0.003378 | 0.405405 | 0.003378 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
fbce7c5bc56ca26060cd1efe4e6c108ea0d3993d | 1,806 | py | Python | autogoal/contrib/wikipedia/_base.py | lsuarez98/autogoal | 5c0210677de108238d30ed892beaf0801fb94bce | [
"MIT"
] | 157 | 2020-06-20T10:28:04.000Z | 2022-03-26T18:20:58.000Z | autogoal/contrib/wikipedia/_base.py | lsuarez98/autogoal | 5c0210677de108238d30ed892beaf0801fb94bce | [
"MIT"
] | 110 | 2020-08-10T21:50:52.000Z | 2022-02-25T16:13:53.000Z | autogoal/contrib/wikipedia/_base.py | lsuarez98/autogoal | 5c0210677de108238d30ed892beaf0801fb94bce | [
"MIT"
] | 62 | 2020-08-09T07:41:50.000Z | 2022-03-16T01:07:47.000Z | import wikipedia
from autogoal.kb import Word, Document, FeatureSet
from autogoal.utils import nice_repr
from autogoal.kb import AlgorithmBase
@nice_repr
class WikipediaSummary(AlgorithmBase):
"""This class find a word in Wikipedia and return a summary in english.
"""
def __init__(self):
pass
def run(self, input: Word) -> Document:
"""This method use Word2Vect of gensim for tranform a word in embedding vector.
"""
try:
return wikipedia.summary(input)
except Exception:
return ""
@nice_repr
class WikipediaContainsWord(AlgorithmBase):
"""This class find a word in Wikipedia and return a summary in english.
"""
def __init__(self):
pass
def run(self, input: Word) -> FeatureSet:
"""This method use Word2Vect of gensim for tranform a word in embedding vector.
"""
return dict(in_wikipedia=bool(wikipedia.search(input)))
@nice_repr
class WikipediaSummarySpanish(AlgorithmBase):
"""This class find a word in Wikipedia and return a summary in Spanish.
"""
def __init__(self):
wikipedia.set_lang("es")
def run(self, input: Word) -> Document:
"""This method use Word2Vect of gensim for tranform a word in embedding vector.
"""
try:
return wikipedia.summary(input)
except Exception:
return ""
@nice_repr
class WikipediaContainsWordSpanish(AlgorithmBase):
"""This class find a word in Wikipedia and return a summary in Spanish.
"""
def __init__(self):
wikipedia.set_lang("es")
def run(self, input: Word) -> FeatureSet:
"""This method use Word2Vect of gensim for tranform a word in embedding vector.
"""
return dict(in_wikipedia=bool(wikipedia.search(input)))
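# Minimal usage sketch (an illustrative addition, not part of the original
# module): instantiate the algorithms defined above and run them on a plain
# string; the lookup terms below are arbitrary examples.
if __name__ == "__main__":
    summarizer = WikipediaSummary()
    print(summarizer.run("Python (programming language)"))
    checker = WikipediaContainsWord()
    print(checker.run("Python"))  # e.g. {'in_wikipedia': True}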
| 26.558824 | 87 | 0.655592 | 220 | 1,806 | 5.268182 | 0.222727 | 0.034513 | 0.048318 | 0.089733 | 0.797239 | 0.797239 | 0.797239 | 0.797239 | 0.797239 | 0.797239 | 0 | 0.002983 | 0.257475 | 1,806 | 67 | 88 | 26.955224 | 0.861298 | 0.348837 | 0 | 0.764706 | 0 | 0 | 0.003552 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.235294 | false | 0.058824 | 0.117647 | 0 | 0.647059 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 9 |
fbd984d84c3edd0c6d4b536f1b00c9509331b9f2 | 43,004 | py | Python | sdk/python/pulumi_azure/streamanalytics/job.py | aangelisc/pulumi-azure | 71dd9c75403146e16f7480e5a60b08bc0329660e | [
"ECL-2.0",
"Apache-2.0"
] | null | null | null | sdk/python/pulumi_azure/streamanalytics/job.py | aangelisc/pulumi-azure | 71dd9c75403146e16f7480e5a60b08bc0329660e | [
"ECL-2.0",
"Apache-2.0"
] | null | null | null | sdk/python/pulumi_azure/streamanalytics/job.py | aangelisc/pulumi-azure | 71dd9c75403146e16f7480e5a60b08bc0329660e | [
"ECL-2.0",
"Apache-2.0"
] | null | null | null | # coding=utf-8
# *** WARNING: this file was generated by the Pulumi Terraform Bridge (tfgen) Tool. ***
# *** Do not edit by hand unless you're certain you know what you are doing! ***
import warnings
import pulumi
import pulumi.runtime
from typing import Any, Mapping, Optional, Sequence, Union, overload
from .. import _utilities
__all__ = ['JobArgs', 'Job']
@pulumi.input_type
class JobArgs:
def __init__(__self__, *,
resource_group_name: pulumi.Input[str],
streaming_units: pulumi.Input[int],
transformation_query: pulumi.Input[str],
compatibility_level: Optional[pulumi.Input[str]] = None,
data_locale: Optional[pulumi.Input[str]] = None,
events_late_arrival_max_delay_in_seconds: Optional[pulumi.Input[int]] = None,
events_out_of_order_max_delay_in_seconds: Optional[pulumi.Input[int]] = None,
events_out_of_order_policy: Optional[pulumi.Input[str]] = None,
location: Optional[pulumi.Input[str]] = None,
name: Optional[pulumi.Input[str]] = None,
output_error_policy: Optional[pulumi.Input[str]] = None,
tags: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]] = None):
"""
The set of arguments for constructing a Job resource.
:param pulumi.Input[str] resource_group_name: The name of the Resource Group where the Stream Analytics Job should exist. Changing this forces a new resource to be created.
:param pulumi.Input[int] streaming_units: Specifies the number of streaming units that the streaming job uses. Supported values are `1`, `3`, `6` and multiples of `6` up to `120`.
:param pulumi.Input[str] transformation_query: Specifies the query that will be run in the streaming job, [written in Stream Analytics Query Language (SAQL)](https://msdn.microsoft.com/library/azure/dn834998).
:param pulumi.Input[str] compatibility_level: Specifies the compatibility level for this job - which controls certain runtime behaviours of the streaming job. Possible values are `1.0` and `1.1`.
:param pulumi.Input[str] data_locale: Specifies the Data Locale of the Job, which [should be a supported .NET Culture](https://msdn.microsoft.com/en-us/library/system.globalization.culturetypes(v=vs.110).aspx).
:param pulumi.Input[int] events_late_arrival_max_delay_in_seconds: Specifies the maximum tolerable delay in seconds where events arriving late could be included. Supported range is `-1` (indefinite) to `1814399` (20d 23h 59m 59s). Default is `0`.
:param pulumi.Input[int] events_out_of_order_max_delay_in_seconds: Specifies the maximum tolerable delay in seconds where out-of-order events can be adjusted to be back in order. Supported range is `0` to `599` (9m 59s). Default is `5`.
:param pulumi.Input[str] events_out_of_order_policy: Specifies the policy which should be applied to events which arrive out of order in the input event stream. Possible values are `Adjust` and `Drop`. Default is `Adjust`.
:param pulumi.Input[str] location: The Azure Region in which the Resource Group exists. Changing this forces a new resource to be created.
:param pulumi.Input[str] name: The name of the Stream Analytics Job. Changing this forces a new resource to be created.
:param pulumi.Input[str] output_error_policy: Specifies the policy which should be applied to events which arrive at the output and cannot be written to the external storage due to being malformed (such as missing column values, column values of wrong type or size). Possible values are `Drop` and `Stop`. Default is `Drop`.
:param pulumi.Input[Mapping[str, pulumi.Input[str]]] tags: A mapping of tags assigned to the resource.
"""
pulumi.set(__self__, "resource_group_name", resource_group_name)
pulumi.set(__self__, "streaming_units", streaming_units)
pulumi.set(__self__, "transformation_query", transformation_query)
if compatibility_level is not None:
pulumi.set(__self__, "compatibility_level", compatibility_level)
if data_locale is not None:
pulumi.set(__self__, "data_locale", data_locale)
if events_late_arrival_max_delay_in_seconds is not None:
pulumi.set(__self__, "events_late_arrival_max_delay_in_seconds", events_late_arrival_max_delay_in_seconds)
if events_out_of_order_max_delay_in_seconds is not None:
pulumi.set(__self__, "events_out_of_order_max_delay_in_seconds", events_out_of_order_max_delay_in_seconds)
if events_out_of_order_policy is not None:
pulumi.set(__self__, "events_out_of_order_policy", events_out_of_order_policy)
if location is not None:
pulumi.set(__self__, "location", location)
if name is not None:
pulumi.set(__self__, "name", name)
if output_error_policy is not None:
pulumi.set(__self__, "output_error_policy", output_error_policy)
if tags is not None:
pulumi.set(__self__, "tags", tags)
@property
@pulumi.getter(name="resourceGroupName")
def resource_group_name(self) -> pulumi.Input[str]:
"""
The name of the Resource Group where the Stream Analytics Job should exist. Changing this forces a new resource to be created.
"""
return pulumi.get(self, "resource_group_name")
@resource_group_name.setter
def resource_group_name(self, value: pulumi.Input[str]):
pulumi.set(self, "resource_group_name", value)
@property
@pulumi.getter(name="streamingUnits")
def streaming_units(self) -> pulumi.Input[int]:
"""
Specifies the number of streaming units that the streaming job uses. Supported values are `1`, `3`, `6` and multiples of `6` up to `120`.
"""
return pulumi.get(self, "streaming_units")
@streaming_units.setter
def streaming_units(self, value: pulumi.Input[int]):
pulumi.set(self, "streaming_units", value)
@property
@pulumi.getter(name="transformationQuery")
def transformation_query(self) -> pulumi.Input[str]:
"""
Specifies the query that will be run in the streaming job, [written in Stream Analytics Query Language (SAQL)](https://msdn.microsoft.com/library/azure/dn834998).
"""
return pulumi.get(self, "transformation_query")
@transformation_query.setter
def transformation_query(self, value: pulumi.Input[str]):
pulumi.set(self, "transformation_query", value)
@property
@pulumi.getter(name="compatibilityLevel")
def compatibility_level(self) -> Optional[pulumi.Input[str]]:
"""
Specifies the compatibility level for this job - which controls certain runtime behaviours of the streaming job. Possible values are `1.0` and `1.1`.
"""
return pulumi.get(self, "compatibility_level")
@compatibility_level.setter
def compatibility_level(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "compatibility_level", value)
@property
@pulumi.getter(name="dataLocale")
def data_locale(self) -> Optional[pulumi.Input[str]]:
"""
Specifies the Data Locale of the Job, which [should be a supported .NET Culture](https://msdn.microsoft.com/en-us/library/system.globalization.culturetypes(v=vs.110).aspx).
"""
return pulumi.get(self, "data_locale")
@data_locale.setter
def data_locale(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "data_locale", value)
@property
@pulumi.getter(name="eventsLateArrivalMaxDelayInSeconds")
def events_late_arrival_max_delay_in_seconds(self) -> Optional[pulumi.Input[int]]:
"""
Specifies the maximum tolerable delay in seconds where events arriving late could be included. Supported range is `-1` (indefinite) to `1814399` (20d 23h 59m 59s). Default is `0`.
"""
return pulumi.get(self, "events_late_arrival_max_delay_in_seconds")
@events_late_arrival_max_delay_in_seconds.setter
def events_late_arrival_max_delay_in_seconds(self, value: Optional[pulumi.Input[int]]):
pulumi.set(self, "events_late_arrival_max_delay_in_seconds", value)
@property
@pulumi.getter(name="eventsOutOfOrderMaxDelayInSeconds")
def events_out_of_order_max_delay_in_seconds(self) -> Optional[pulumi.Input[int]]:
"""
Specifies the maximum tolerable delay in seconds where out-of-order events can be adjusted to be back in order. Supported range is `0` to `599` (9m 59s). Default is `5`.
"""
return pulumi.get(self, "events_out_of_order_max_delay_in_seconds")
@events_out_of_order_max_delay_in_seconds.setter
def events_out_of_order_max_delay_in_seconds(self, value: Optional[pulumi.Input[int]]):
pulumi.set(self, "events_out_of_order_max_delay_in_seconds", value)
@property
@pulumi.getter(name="eventsOutOfOrderPolicy")
def events_out_of_order_policy(self) -> Optional[pulumi.Input[str]]:
"""
Specifies the policy which should be applied to events which arrive out of order in the input event stream. Possible values are `Adjust` and `Drop`. Default is `Adjust`.
"""
return pulumi.get(self, "events_out_of_order_policy")
@events_out_of_order_policy.setter
def events_out_of_order_policy(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "events_out_of_order_policy", value)
@property
@pulumi.getter
def location(self) -> Optional[pulumi.Input[str]]:
"""
The Azure Region in which the Resource Group exists. Changing this forces a new resource to be created.
"""
return pulumi.get(self, "location")
@location.setter
def location(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "location", value)
@property
@pulumi.getter
def name(self) -> Optional[pulumi.Input[str]]:
"""
The name of the Stream Analytics Job. Changing this forces a new resource to be created.
"""
return pulumi.get(self, "name")
@name.setter
def name(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "name", value)
@property
@pulumi.getter(name="outputErrorPolicy")
def output_error_policy(self) -> Optional[pulumi.Input[str]]:
"""
Specifies the policy which should be applied to events which arrive at the output and cannot be written to the external storage due to being malformed (such as missing column values, column values of wrong type or size). Possible values are `Drop` and `Stop`. Default is `Drop`.
"""
return pulumi.get(self, "output_error_policy")
@output_error_policy.setter
def output_error_policy(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "output_error_policy", value)
@property
@pulumi.getter
def tags(self) -> Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]]:
"""
A mapping of tags assigned to the resource.
"""
return pulumi.get(self, "tags")
@tags.setter
def tags(self, value: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]]):
pulumi.set(self, "tags", value)
@pulumi.input_type
class _JobState:
def __init__(__self__, *,
compatibility_level: Optional[pulumi.Input[str]] = None,
data_locale: Optional[pulumi.Input[str]] = None,
events_late_arrival_max_delay_in_seconds: Optional[pulumi.Input[int]] = None,
events_out_of_order_max_delay_in_seconds: Optional[pulumi.Input[int]] = None,
events_out_of_order_policy: Optional[pulumi.Input[str]] = None,
job_id: Optional[pulumi.Input[str]] = None,
location: Optional[pulumi.Input[str]] = None,
name: Optional[pulumi.Input[str]] = None,
output_error_policy: Optional[pulumi.Input[str]] = None,
resource_group_name: Optional[pulumi.Input[str]] = None,
streaming_units: Optional[pulumi.Input[int]] = None,
tags: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]] = None,
transformation_query: Optional[pulumi.Input[str]] = None):
"""
Input properties used for looking up and filtering Job resources.
:param pulumi.Input[str] compatibility_level: Specifies the compatibility level for this job - which controls certain runtime behaviours of the streaming job. Possible values are `1.0` and `1.1`.
:param pulumi.Input[str] data_locale: Specifies the Data Locale of the Job, which [should be a supported .NET Culture](https://msdn.microsoft.com/en-us/library/system.globalization.culturetypes(v=vs.110).aspx).
:param pulumi.Input[int] events_late_arrival_max_delay_in_seconds: Specifies the maximum tolerable delay in seconds where events arriving late could be included. Supported range is `-1` (indefinite) to `1814399` (20d 23h 59m 59s). Default is `0`.
:param pulumi.Input[int] events_out_of_order_max_delay_in_seconds: Specifies the maximum tolerable delay in seconds where out-of-order events can be adjusted to be back in order. Supported range is `0` to `599` (9m 59s). Default is `5`.
:param pulumi.Input[str] events_out_of_order_policy: Specifies the policy which should be applied to events which arrive out of order in the input event stream. Possible values are `Adjust` and `Drop`. Default is `Adjust`.
:param pulumi.Input[str] job_id: The Job ID assigned by the Stream Analytics Job.
:param pulumi.Input[str] location: The Azure Region in which the Resource Group exists. Changing this forces a new resource to be created.
:param pulumi.Input[str] name: The name of the Stream Analytics Job. Changing this forces a new resource to be created.
:param pulumi.Input[str] output_error_policy: Specifies the policy which should be applied to events which arrive at the output and cannot be written to the external storage due to being malformed (such as missing column values, column values of wrong type or size). Possible values are `Drop` and `Stop`. Default is `Drop`.
:param pulumi.Input[str] resource_group_name: The name of the Resource Group where the Stream Analytics Job should exist. Changing this forces a new resource to be created.
:param pulumi.Input[int] streaming_units: Specifies the number of streaming units that the streaming job uses. Supported values are `1`, `3`, `6` and multiples of `6` up to `120`.
:param pulumi.Input[Mapping[str, pulumi.Input[str]]] tags: A mapping of tags assigned to the resource.
:param pulumi.Input[str] transformation_query: Specifies the query that will be run in the streaming job, [written in Stream Analytics Query Language (SAQL)](https://msdn.microsoft.com/library/azure/dn834998).
"""
if compatibility_level is not None:
pulumi.set(__self__, "compatibility_level", compatibility_level)
if data_locale is not None:
pulumi.set(__self__, "data_locale", data_locale)
if events_late_arrival_max_delay_in_seconds is not None:
pulumi.set(__self__, "events_late_arrival_max_delay_in_seconds", events_late_arrival_max_delay_in_seconds)
if events_out_of_order_max_delay_in_seconds is not None:
pulumi.set(__self__, "events_out_of_order_max_delay_in_seconds", events_out_of_order_max_delay_in_seconds)
if events_out_of_order_policy is not None:
pulumi.set(__self__, "events_out_of_order_policy", events_out_of_order_policy)
if job_id is not None:
pulumi.set(__self__, "job_id", job_id)
if location is not None:
pulumi.set(__self__, "location", location)
if name is not None:
pulumi.set(__self__, "name", name)
if output_error_policy is not None:
pulumi.set(__self__, "output_error_policy", output_error_policy)
if resource_group_name is not None:
pulumi.set(__self__, "resource_group_name", resource_group_name)
if streaming_units is not None:
pulumi.set(__self__, "streaming_units", streaming_units)
if tags is not None:
pulumi.set(__self__, "tags", tags)
if transformation_query is not None:
pulumi.set(__self__, "transformation_query", transformation_query)
@property
@pulumi.getter(name="compatibilityLevel")
def compatibility_level(self) -> Optional[pulumi.Input[str]]:
"""
Specifies the compatibility level for this job - which controls certain runtime behaviours of the streaming job. Possible values are `1.0` and `1.1`.
"""
return pulumi.get(self, "compatibility_level")
@compatibility_level.setter
def compatibility_level(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "compatibility_level", value)
@property
@pulumi.getter(name="dataLocale")
def data_locale(self) -> Optional[pulumi.Input[str]]:
"""
Specifies the Data Locale of the Job, which [should be a supported .NET Culture](https://msdn.microsoft.com/en-us/library/system.globalization.culturetypes(v=vs.110).aspx).
"""
return pulumi.get(self, "data_locale")
@data_locale.setter
def data_locale(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "data_locale", value)
@property
@pulumi.getter(name="eventsLateArrivalMaxDelayInSeconds")
def events_late_arrival_max_delay_in_seconds(self) -> Optional[pulumi.Input[int]]:
"""
Specifies the maximum tolerable delay in seconds where events arriving late could be included. Supported range is `-1` (indefinite) to `1814399` (20d 23h 59m 59s). Default is `0`.
"""
return pulumi.get(self, "events_late_arrival_max_delay_in_seconds")
@events_late_arrival_max_delay_in_seconds.setter
def events_late_arrival_max_delay_in_seconds(self, value: Optional[pulumi.Input[int]]):
pulumi.set(self, "events_late_arrival_max_delay_in_seconds", value)
@property
@pulumi.getter(name="eventsOutOfOrderMaxDelayInSeconds")
def events_out_of_order_max_delay_in_seconds(self) -> Optional[pulumi.Input[int]]:
"""
Specifies the maximum tolerable delay in seconds where out-of-order events can be adjusted to be back in order. Supported range is `0` to `599` (9m 59s). Default is `5`.
"""
return pulumi.get(self, "events_out_of_order_max_delay_in_seconds")
@events_out_of_order_max_delay_in_seconds.setter
def events_out_of_order_max_delay_in_seconds(self, value: Optional[pulumi.Input[int]]):
pulumi.set(self, "events_out_of_order_max_delay_in_seconds", value)
@property
@pulumi.getter(name="eventsOutOfOrderPolicy")
def events_out_of_order_policy(self) -> Optional[pulumi.Input[str]]:
"""
Specifies the policy which should be applied to events which arrive out of order in the input event stream. Possible values are `Adjust` and `Drop`. Default is `Adjust`.
"""
return pulumi.get(self, "events_out_of_order_policy")
@events_out_of_order_policy.setter
def events_out_of_order_policy(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "events_out_of_order_policy", value)
@property
@pulumi.getter(name="jobId")
def job_id(self) -> Optional[pulumi.Input[str]]:
"""
The Job ID assigned by the Stream Analytics Job.
"""
return pulumi.get(self, "job_id")
@job_id.setter
def job_id(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "job_id", value)
@property
@pulumi.getter
def location(self) -> Optional[pulumi.Input[str]]:
"""
The Azure Region in which the Resource Group exists. Changing this forces a new resource to be created.
"""
return pulumi.get(self, "location")
@location.setter
def location(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "location", value)
@property
@pulumi.getter
def name(self) -> Optional[pulumi.Input[str]]:
"""
The name of the Stream Analytics Job. Changing this forces a new resource to be created.
"""
return pulumi.get(self, "name")
@name.setter
def name(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "name", value)
@property
@pulumi.getter(name="outputErrorPolicy")
def output_error_policy(self) -> Optional[pulumi.Input[str]]:
"""
Specifies the policy which should be applied to events which arrive at the output and cannot be written to the external storage due to being malformed (such as missing column values, column values of wrong type or size). Possible values are `Drop` and `Stop`. Default is `Drop`.
"""
return pulumi.get(self, "output_error_policy")
@output_error_policy.setter
def output_error_policy(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "output_error_policy", value)
@property
@pulumi.getter(name="resourceGroupName")
def resource_group_name(self) -> Optional[pulumi.Input[str]]:
"""
The name of the Resource Group where the Stream Analytics Job should exist. Changing this forces a new resource to be created.
"""
return pulumi.get(self, "resource_group_name")
@resource_group_name.setter
def resource_group_name(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "resource_group_name", value)
@property
@pulumi.getter(name="streamingUnits")
def streaming_units(self) -> Optional[pulumi.Input[int]]:
"""
Specifies the number of streaming units that the streaming job uses. Supported values are `1`, `3`, `6` and multiples of `6` up to `120`.
"""
return pulumi.get(self, "streaming_units")
@streaming_units.setter
def streaming_units(self, value: Optional[pulumi.Input[int]]):
pulumi.set(self, "streaming_units", value)
@property
@pulumi.getter
def tags(self) -> Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]]:
"""
A mapping of tags assigned to the resource.
"""
return pulumi.get(self, "tags")
@tags.setter
def tags(self, value: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]]):
pulumi.set(self, "tags", value)
@property
@pulumi.getter(name="transformationQuery")
def transformation_query(self) -> Optional[pulumi.Input[str]]:
"""
Specifies the query that will be run in the streaming job, [written in Stream Analytics Query Language (SAQL)](https://msdn.microsoft.com/library/azure/dn834998).
"""
return pulumi.get(self, "transformation_query")
@transformation_query.setter
def transformation_query(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "transformation_query", value)
class Job(pulumi.CustomResource):
@overload
def __init__(__self__,
resource_name: str,
opts: Optional[pulumi.ResourceOptions] = None,
compatibility_level: Optional[pulumi.Input[str]] = None,
data_locale: Optional[pulumi.Input[str]] = None,
events_late_arrival_max_delay_in_seconds: Optional[pulumi.Input[int]] = None,
events_out_of_order_max_delay_in_seconds: Optional[pulumi.Input[int]] = None,
events_out_of_order_policy: Optional[pulumi.Input[str]] = None,
location: Optional[pulumi.Input[str]] = None,
name: Optional[pulumi.Input[str]] = None,
output_error_policy: Optional[pulumi.Input[str]] = None,
resource_group_name: Optional[pulumi.Input[str]] = None,
streaming_units: Optional[pulumi.Input[int]] = None,
tags: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]] = None,
transformation_query: Optional[pulumi.Input[str]] = None,
__props__=None):
"""
Manages a Stream Analytics Job.
## Example Usage
```python
import pulumi
import pulumi_azure as azure
example_resource_group = azure.core.ResourceGroup("exampleResourceGroup", location="West Europe")
example_job = azure.streamanalytics.Job("exampleJob",
resource_group_name=example_resource_group.name,
location=example_resource_group.location,
compatibility_level="1.1",
data_locale="en-GB",
events_late_arrival_max_delay_in_seconds=60,
events_out_of_order_max_delay_in_seconds=50,
events_out_of_order_policy="Adjust",
output_error_policy="Drop",
streaming_units=3,
tags={
"environment": "Example",
},
transformation_query=\"\"\" SELECT *
INTO [YourOutputAlias]
FROM [YourInputAlias]
\"\"\")
```
## Import
Stream Analytics Jobs can be imported using the `resource id`, e.g.
```sh
$ pulumi import azure:streamanalytics/job:Job example /subscriptions/00000000-0000-0000-0000-000000000000/resourcegroups/group1/providers/Microsoft.StreamAnalytics/streamingjobs/job1
```
:param str resource_name: The name of the resource.
:param pulumi.ResourceOptions opts: Options for the resource.
:param pulumi.Input[str] compatibility_level: Specifies the compatibility level for this job - which controls certain runtime behaviours of the streaming job. Possible values are `1.0` and `1.1`.
:param pulumi.Input[str] data_locale: Specifies the Data Locale of the Job, which [should be a supported .NET Culture](https://msdn.microsoft.com/en-us/library/system.globalization.culturetypes(v=vs.110).aspx).
:param pulumi.Input[int] events_late_arrival_max_delay_in_seconds: Specifies the maximum tolerable delay in seconds where events arriving late could be included. Supported range is `-1` (indefinite) to `1814399` (20d 23h 59m 59s). Default is `0`.
:param pulumi.Input[int] events_out_of_order_max_delay_in_seconds: Specifies the maximum tolerable delay in seconds where out-of-order events can be adjusted to be back in order. Supported range is `0` to `599` (9m 59s). Default is `5`.
:param pulumi.Input[str] events_out_of_order_policy: Specifies the policy which should be applied to events which arrive out of order in the input event stream. Possible values are `Adjust` and `Drop`. Default is `Adjust`.
:param pulumi.Input[str] location: The Azure Region in which the Resource Group exists. Changing this forces a new resource to be created.
:param pulumi.Input[str] name: The name of the Stream Analytics Job. Changing this forces a new resource to be created.
:param pulumi.Input[str] output_error_policy: Specifies the policy which should be applied to events which arrive at the output and cannot be written to the external storage due to being malformed (such as missing column values, column values of wrong type or size). Possible values are `Drop` and `Stop`. Default is `Drop`.
:param pulumi.Input[str] resource_group_name: The name of the Resource Group where the Stream Analytics Job should exist. Changing this forces a new resource to be created.
:param pulumi.Input[int] streaming_units: Specifies the number of streaming units that the streaming job uses. Supported values are `1`, `3`, `6` and multiples of `6` up to `120`.
:param pulumi.Input[Mapping[str, pulumi.Input[str]]] tags: A mapping of tags assigned to the resource.
:param pulumi.Input[str] transformation_query: Specifies the query that will be run in the streaming job, [written in Stream Analytics Query Language (SAQL)](https://msdn.microsoft.com/library/azure/dn834998).
"""
...
@overload
def __init__(__self__,
resource_name: str,
args: JobArgs,
opts: Optional[pulumi.ResourceOptions] = None):
"""
Manages a Stream Analytics Job.
## Example Usage
```python
import pulumi
import pulumi_azure as azure
example_resource_group = azure.core.ResourceGroup("exampleResourceGroup", location="West Europe")
example_job = azure.streamanalytics.Job("exampleJob",
resource_group_name=example_resource_group.name,
location=example_resource_group.location,
compatibility_level="1.1",
data_locale="en-GB",
events_late_arrival_max_delay_in_seconds=60,
events_out_of_order_max_delay_in_seconds=50,
events_out_of_order_policy="Adjust",
output_error_policy="Drop",
streaming_units=3,
tags={
"environment": "Example",
},
transformation_query=\"\"\" SELECT *
INTO [YourOutputAlias]
FROM [YourInputAlias]
\"\"\")
```
## Import
Stream Analytics Jobs can be imported using the `resource id`, e.g.
```sh
$ pulumi import azure:streamanalytics/job:Job example /subscriptions/00000000-0000-0000-0000-000000000000/resourcegroups/group1/providers/Microsoft.StreamAnalytics/streamingjobs/job1
```
:param str resource_name: The name of the resource.
:param JobArgs args: The arguments to use to populate this resource's properties.
:param pulumi.ResourceOptions opts: Options for the resource.
"""
...
def __init__(__self__, resource_name: str, *args, **kwargs):
resource_args, opts = _utilities.get_resource_args_opts(JobArgs, pulumi.ResourceOptions, *args, **kwargs)
if resource_args is not None:
__self__._internal_init(resource_name, opts, **resource_args.__dict__)
else:
__self__._internal_init(resource_name, *args, **kwargs)
def _internal_init(__self__,
resource_name: str,
opts: Optional[pulumi.ResourceOptions] = None,
compatibility_level: Optional[pulumi.Input[str]] = None,
data_locale: Optional[pulumi.Input[str]] = None,
events_late_arrival_max_delay_in_seconds: Optional[pulumi.Input[int]] = None,
events_out_of_order_max_delay_in_seconds: Optional[pulumi.Input[int]] = None,
events_out_of_order_policy: Optional[pulumi.Input[str]] = None,
location: Optional[pulumi.Input[str]] = None,
name: Optional[pulumi.Input[str]] = None,
output_error_policy: Optional[pulumi.Input[str]] = None,
resource_group_name: Optional[pulumi.Input[str]] = None,
streaming_units: Optional[pulumi.Input[int]] = None,
tags: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]] = None,
transformation_query: Optional[pulumi.Input[str]] = None,
__props__=None):
if opts is None:
opts = pulumi.ResourceOptions()
if not isinstance(opts, pulumi.ResourceOptions):
raise TypeError('Expected resource options to be a ResourceOptions instance')
if opts.version is None:
opts.version = _utilities.get_version()
if opts.id is None:
if __props__ is not None:
raise TypeError('__props__ is only valid when passed in combination with a valid opts.id to get an existing resource')
__props__ = JobArgs.__new__(JobArgs)
__props__.__dict__["compatibility_level"] = compatibility_level
__props__.__dict__["data_locale"] = data_locale
__props__.__dict__["events_late_arrival_max_delay_in_seconds"] = events_late_arrival_max_delay_in_seconds
__props__.__dict__["events_out_of_order_max_delay_in_seconds"] = events_out_of_order_max_delay_in_seconds
__props__.__dict__["events_out_of_order_policy"] = events_out_of_order_policy
__props__.__dict__["location"] = location
__props__.__dict__["name"] = name
__props__.__dict__["output_error_policy"] = output_error_policy
if resource_group_name is None and not opts.urn:
raise TypeError("Missing required property 'resource_group_name'")
__props__.__dict__["resource_group_name"] = resource_group_name
if streaming_units is None and not opts.urn:
raise TypeError("Missing required property 'streaming_units'")
__props__.__dict__["streaming_units"] = streaming_units
__props__.__dict__["tags"] = tags
if transformation_query is None and not opts.urn:
raise TypeError("Missing required property 'transformation_query'")
__props__.__dict__["transformation_query"] = transformation_query
__props__.__dict__["job_id"] = None
super(Job, __self__).__init__(
'azure:streamanalytics/job:Job',
resource_name,
__props__,
opts)
@staticmethod
def get(resource_name: str,
id: pulumi.Input[str],
opts: Optional[pulumi.ResourceOptions] = None,
compatibility_level: Optional[pulumi.Input[str]] = None,
data_locale: Optional[pulumi.Input[str]] = None,
events_late_arrival_max_delay_in_seconds: Optional[pulumi.Input[int]] = None,
events_out_of_order_max_delay_in_seconds: Optional[pulumi.Input[int]] = None,
events_out_of_order_policy: Optional[pulumi.Input[str]] = None,
job_id: Optional[pulumi.Input[str]] = None,
location: Optional[pulumi.Input[str]] = None,
name: Optional[pulumi.Input[str]] = None,
output_error_policy: Optional[pulumi.Input[str]] = None,
resource_group_name: Optional[pulumi.Input[str]] = None,
streaming_units: Optional[pulumi.Input[int]] = None,
tags: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]] = None,
transformation_query: Optional[pulumi.Input[str]] = None) -> 'Job':
"""
Get an existing Job resource's state with the given name, id, and optional extra
properties used to qualify the lookup.
:param str resource_name: The unique name of the resulting resource.
:param pulumi.Input[str] id: The unique provider ID of the resource to lookup.
:param pulumi.ResourceOptions opts: Options for the resource.
:param pulumi.Input[str] compatibility_level: Specifies the compatibility level for this job - which controls certain runtime behaviours of the streaming job. Possible values are `1.0` and `1.1`.
:param pulumi.Input[str] data_locale: Specifies the Data Locale of the Job, which [should be a supported .NET Culture](https://msdn.microsoft.com/en-us/library/system.globalization.culturetypes(v=vs.110).aspx).
:param pulumi.Input[int] events_late_arrival_max_delay_in_seconds: Specifies the maximum tolerable delay in seconds where events arriving late could be included. Supported range is `-1` (indefinite) to `1814399` (20d 23h 59m 59s). Default is `0`.
:param pulumi.Input[int] events_out_of_order_max_delay_in_seconds: Specifies the maximum tolerable delay in seconds where out-of-order events can be adjusted to be back in order. Supported range is `0` to `599` (9m 59s). Default is `5`.
:param pulumi.Input[str] events_out_of_order_policy: Specifies the policy which should be applied to events which arrive out of order in the input event stream. Possible values are `Adjust` and `Drop`. Default is `Adjust`.
:param pulumi.Input[str] job_id: The Job ID assigned by the Stream Analytics Job.
:param pulumi.Input[str] location: The Azure Region in which the Resource Group exists. Changing this forces a new resource to be created.
:param pulumi.Input[str] name: The name of the Stream Analytics Job. Changing this forces a new resource to be created.
:param pulumi.Input[str] output_error_policy: Specifies the policy which should be applied to events which arrive at the output and cannot be written to the external storage due to being malformed (such as missing column values, column values of wrong type or size). Possible values are `Drop` and `Stop`. Default is `Drop`.
:param pulumi.Input[str] resource_group_name: The name of the Resource Group where the Stream Analytics Job should exist. Changing this forces a new resource to be created.
:param pulumi.Input[int] streaming_units: Specifies the number of streaming units that the streaming job uses. Supported values are `1`, `3`, `6` and multiples of `6` up to `120`.
:param pulumi.Input[Mapping[str, pulumi.Input[str]]] tags: A mapping of tags assigned to the resource.
:param pulumi.Input[str] transformation_query: Specifies the query that will be run in the streaming job, [written in Stream Analytics Query Language (SAQL)](https://msdn.microsoft.com/library/azure/dn834998).
"""
opts = pulumi.ResourceOptions.merge(opts, pulumi.ResourceOptions(id=id))
__props__ = _JobState.__new__(_JobState)
__props__.__dict__["compatibility_level"] = compatibility_level
__props__.__dict__["data_locale"] = data_locale
__props__.__dict__["events_late_arrival_max_delay_in_seconds"] = events_late_arrival_max_delay_in_seconds
__props__.__dict__["events_out_of_order_max_delay_in_seconds"] = events_out_of_order_max_delay_in_seconds
__props__.__dict__["events_out_of_order_policy"] = events_out_of_order_policy
__props__.__dict__["job_id"] = job_id
__props__.__dict__["location"] = location
__props__.__dict__["name"] = name
__props__.__dict__["output_error_policy"] = output_error_policy
__props__.__dict__["resource_group_name"] = resource_group_name
__props__.__dict__["streaming_units"] = streaming_units
__props__.__dict__["tags"] = tags
__props__.__dict__["transformation_query"] = transformation_query
return Job(resource_name, opts=opts, __props__=__props__)
@property
@pulumi.getter(name="compatibilityLevel")
def compatibility_level(self) -> pulumi.Output[str]:
"""
Specifies the compatibility level for this job - which controls certain runtime behaviours of the streaming job. Possible values are `1.0` and `1.1`.
"""
return pulumi.get(self, "compatibility_level")
@property
@pulumi.getter(name="dataLocale")
def data_locale(self) -> pulumi.Output[str]:
"""
Specifies the Data Locale of the Job, which [should be a supported .NET Culture](https://msdn.microsoft.com/en-us/library/system.globalization.culturetypes(v=vs.110).aspx).
"""
return pulumi.get(self, "data_locale")
@property
@pulumi.getter(name="eventsLateArrivalMaxDelayInSeconds")
def events_late_arrival_max_delay_in_seconds(self) -> pulumi.Output[Optional[int]]:
"""
Specifies the maximum tolerable delay in seconds where events arriving late could be included. Supported range is `-1` (indefinite) to `1814399` (20d 23h 59m 59s). Default is `0`.
"""
return pulumi.get(self, "events_late_arrival_max_delay_in_seconds")
@property
@pulumi.getter(name="eventsOutOfOrderMaxDelayInSeconds")
def events_out_of_order_max_delay_in_seconds(self) -> pulumi.Output[Optional[int]]:
"""
Specifies the maximum tolerable delay in seconds where out-of-order events can be adjusted to be back in order. Supported range is `0` to `599` (9m 59s). Default is `5`.
"""
return pulumi.get(self, "events_out_of_order_max_delay_in_seconds")
@property
@pulumi.getter(name="eventsOutOfOrderPolicy")
def events_out_of_order_policy(self) -> pulumi.Output[Optional[str]]:
"""
Specifies the policy which should be applied to events which arrive out of order in the input event stream. Possible values are `Adjust` and `Drop`. Default is `Adjust`.
"""
return pulumi.get(self, "events_out_of_order_policy")
@property
@pulumi.getter(name="jobId")
def job_id(self) -> pulumi.Output[str]:
"""
The Job ID assigned by the Stream Analytics Job.
"""
return pulumi.get(self, "job_id")
@property
@pulumi.getter
def location(self) -> pulumi.Output[str]:
"""
The Azure Region in which the Resource Group exists. Changing this forces a new resource to be created.
"""
return pulumi.get(self, "location")
@property
@pulumi.getter
def name(self) -> pulumi.Output[str]:
"""
The name of the Stream Analytics Job. Changing this forces a new resource to be created.
"""
return pulumi.get(self, "name")
@property
@pulumi.getter(name="outputErrorPolicy")
def output_error_policy(self) -> pulumi.Output[Optional[str]]:
"""
Specifies the policy which should be applied to events which arrive at the output and cannot be written to the external storage due to being malformed (such as missing column values, column values of wrong type or size). Possible values are `Drop` and `Stop`. Default is `Drop`.
"""
return pulumi.get(self, "output_error_policy")
@property
@pulumi.getter(name="resourceGroupName")
def resource_group_name(self) -> pulumi.Output[str]:
"""
The name of the Resource Group where the Stream Analytics Job should exist. Changing this forces a new resource to be created.
"""
return pulumi.get(self, "resource_group_name")
@property
@pulumi.getter(name="streamingUnits")
def streaming_units(self) -> pulumi.Output[int]:
"""
Specifies the number of streaming units that the streaming job uses. Supported values are `1`, `3`, `6` and multiples of `6` up to `120`.
"""
return pulumi.get(self, "streaming_units")
@property
@pulumi.getter
def tags(self) -> pulumi.Output[Optional[Mapping[str, str]]]:
"""
A mapping of tags assigned to the resource.
"""
return pulumi.get(self, "tags")
@property
@pulumi.getter(name="transformationQuery")
def transformation_query(self) -> pulumi.Output[str]:
"""
Specifies the query that will be run in the streaming job, [written in Stream Analytics Query Language (SAQL)](https://msdn.microsoft.com/library/azure/dn834998).
"""
return pulumi.get(self, "transformation_query")
| 56.287958 | 333 | 0.686773 | 5,611 | 43,004 | 5.029228 | 0.049011 | 0.069776 | 0.062015 | 0.054573 | 0.943265 | 0.935398 | 0.924235 | 0.9141 | 0.91045 | 0.895283 | 0 | 0.011835 | 0.218026 | 43,004 | 763 | 334 | 56.36173 | 0.827317 | 0.42124 | 0 | 0.795294 | 1 | 0 | 0.128897 | 0.056306 | 0 | 0 | 0 | 0 | 0 | 1 | 0.164706 | false | 0.002353 | 0.011765 | 0 | 0.275294 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
83cf73350f2bc562cf5e2b0de6811b855ebca719 | 95 | py | Python | task_server.py | gucheen/corylus-prototype | b1db14314ef5c07ec8b179a7843f54f62f58c8bb | [
"MIT"
] | null | null | null | task_server.py | gucheen/corylus-prototype | b1db14314ef5c07ec8b179a7843f54f62f58c8bb | [
"MIT"
] | null | null | null | task_server.py | gucheen/corylus-prototype | b1db14314ef5c07ec8b179a7843f54f62f58c8bb | [
"MIT"
] | null | null | null | from corylus.huey_tasks.config import huey
from corylus.huey_tasks.tasks import render_to_png
| 23.75 | 50 | 0.863158 | 16 | 95 | 4.875 | 0.5625 | 0.282051 | 0.384615 | 0.512821 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.094737 | 95 | 3 | 51 | 31.666667 | 0.906977 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 8 |
83eb8c57e5ba0851d17685eac121bacd17f0d4af | 5,683 | py | Python | AIDSAnalysisProcedures2.py | InsightlyYours/Insight_Project | 0c97c7a4c90d197c4e9f07febcd765ec93ee92c6 | [
"Apache-2.0"
] | null | null | null | AIDSAnalysisProcedures2.py | InsightlyYours/Insight_Project | 0c97c7a4c90d197c4e9f07febcd765ec93ee92c6 | [
"Apache-2.0"
] | null | null | null | AIDSAnalysisProcedures2.py | InsightlyYours/Insight_Project | 0c97c7a4c90d197c4e9f07febcd765ec93ee92c6 | [
"Apache-2.0"
] | null | null | null | # -*- coding: utf-8 -*-
"""
Spyder Editor
This is a temporary script file.
"""
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import LogNorm
from matplotlib import cm
def contourplotAIDSByAgeGroup2(x,y,z, labels,location,city):
plt.close()
fig = plt.figure()
# filled contour of the gridded data (line-contour overlay left commented out below)
#CS = plt.contour(x,y,z.T,15,linewidths=0.5,colors='k')
CS = plt.contourf(x,y,z.T,15,cmap=plt.cm.jet)
cbar = plt.colorbar() # draw colorbar
plt.xlim(1981,2003)
plt.xticks([1981,1985,1990,1995,2000, 2003],['1981','1985','1990','1995','2000','2003'])
plt.xlabel('Year of Diagnosis')
plt.ylim(0,12)
plt.yticks(location,labels, rotation='horizontal')
plt.title('AIDS Diagnoses By Age Group: All Years in ' + str(city))
cbar.set_label('Cases Diagnosed')
plt.tight_layout()
plt.savefig('/home/InsightfullyYours/webapp/assets/images/C2F4a.png')
def contourplotAIDSByAgeGroupLogNorm2(x,y,z,labels,location,city):
plt.close()
fig = plt.figure()
# filled contour of the gridded data (line-contour overlay left commented out below)
#CS = plt.contour(x,y,z.T,15,linewidths=0.5,colors='k')
CS = plt.contourf(x,y,z.T,15,cmap=plt.cm.jet, norm=LogNorm())
cbar = plt.colorbar() # draw colorbar
plt.xlim(1981,2003)
plt.xticks([1981,1985,1990,1995,2000, 2003],['1981','1985','1990','1995','2000','2003'])
plt.xlabel('Year of Diagnosis')
plt.ylim(0,12)
plt.yticks(location,labels, rotation='horizontal')
plt.title('AIDS Diagnoses By Age Group: All Years in ' + str(city))
cbar.set_label('Cases Diagnosed')
plt.tight_layout()
plt.savefig('/home/InsightfullyYours/webapp/assets/images/C2F4b.png')
def contourplotHIVExpByYear2(x,y,z, labels,location, city):
plt.close()
fig = plt.figure()
# filled contour of the gridded data (line-contour overlay left commented out below)
#CS = plt.contour(x,y,z.T,15,linewidths=0.5,colors='k')
CS = plt.contourf(x,y,z.T,15,cmap=plt.cm.jet)
cbar = plt.colorbar() # draw colorbar
plt.xlim(1981,2003)
plt.xticks([1981,1985,1990,1995,2000, 2003],['1981','1985','1990','1995','2000','2003'])
plt.xlabel('Year of Diagnosis')
#plt.ylim(-1,13)
plt.yticks(location,labels, rotation='horizontal',fontsize=8)
plt.title('AIDS Diagnoses By HIV Exposure: All Years in ' + str(city))
cbar.set_label('Cases Diagnosed')
plt.tight_layout()
plt.savefig('/home/InsightfullyYours/webapp/assets/images/C2F6a.png')
def contourplotHIVExpByYearLogNorm2(x,y,z,labels,location,city):
plt.close()
fig = plt.figure()
# filled contour of the gridded data (line-contour overlay left commented out below)
#CS = plt.contour(x,y,z.T,15,linewidths=0.5,colors='k')
CS = plt.contourf(x,y,z.T,15,cmap=plt.cm.jet, norm=LogNorm())
cbar = plt.colorbar() # draw colorbar
plt.xlim(1981,2003)
plt.xticks([1981,1985,1990,1995,2000, 2003],['1981','1985','1990','1995','2000','2003'])
plt.xlabel('Year of Diagnosis')
#plt.ylim(-1,13)
plt.yticks(location,labels, rotation='horizontal', fontsize=8)
plt.title('AIDS Diagnoses By HIV Exposure: All Years in ' + str(city))
cbar.set_label('Cases Diagnosed')
plt.tight_layout()
plt.savefig('/home/InsightfullyYours/webapp/assets/images/C2F6b.png')
def contourplotHIVExpByAge2(x,y,z, labels,location,labelsy,location2,city):
plt.close()
fig = plt.figure()
# filled contour of the gridded data (line-contour overlay left commented out below)
# CS = plt.contour(x,y,z.T,15,linewidths=0.5,colors='k')
CS = plt.contourf(x,y,z.T,15,cmap=plt.cm.jet)
cbar = plt.colorbar() # draw colorbar
plt.xlabel('Age At Diagnosis')
plt.xticks(location2,labelsy, rotation='vertical', fontsize=6)
plt.yticks(location,labels, rotation='horizontal',fontsize=6)
plt.title('AIDS Diagnoses By HIV Exposure Type and Age at Diagnosis in ' + str(city))
cbar.set_label('Cases Diagnosed')
plt.tight_layout()
plt.savefig('/home/InsightfullyYours/webapp/assets/images/C2F7.png')
def contourplotVital2(x,y,z, labels,location,city):
plt.close()
fig = plt.figure()
# filled contour of the gridded data (line-contour overlay left commented out below)
#CS = plt.contour(x,y,z.T,15,linewidths=0.5,colors='k')
CS = plt.contourf(x,y,z.T,15,cmap=plt.cm.jet)
cbar = plt.colorbar() # draw colorbar
plt.xlim(1982,2000)
plt.xlabel('Year of Diagnosis')
plt.xticks([1982,1985,1990,1995,2000],['1982','1985','1990','1995','2000'])
#plt.ylim(-1,13)
plt.yticks(location,labels, rotation='horizontal',fontsize=8)
plt.title('Case Mortality Percentage By Exposure and Year in ' + str(city))
cbar.set_label('Percent Mortality by 2001, All Causes')
plt.tight_layout()
plt.savefig('/home/InsightfullyYours/webapp/assets/images/C2F9.png')
def contourplotVitalAge2(x,y,z, labels,location,city):
plt.close()
fig = plt.figure()
# filled contour of the gridded data (line-contour overlay left commented out below)
#CS = plt.contour(x,y,z.T,15,linewidths=0.5,colors='k')
CS = plt.contourf(x,y,z.T,15,cmap=plt.cm.jet)
cbar = plt.colorbar() # draw colorbar
plt.xlim(1982,2000)
plt.xlabel('Year of Diagnosis')
plt.xticks([1982,1985,1990,1995,2000],['1982','1985','1990','1995','2000'])
#plt.ylim(-1,13)
plt.yticks(location,labels, rotation='horizontal',fontsize=8)
plt.title('Case Mortality Percentage By Age at Diagnosis and Year in ' + str(city))
cbar.set_label('Percent Mortality by 2001, All Causes')
plt.tight_layout()
plt.savefig('/home/InsightfullyYours/webapp/assets/images/C2F8.png')
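# Usage sketch with made-up data, showing the input shapes these helpers expect
# (each function plots z.T, so z should have shape (len(x), len(y))):
#
# years = np.arange(1981, 2004)
# groups = np.arange(13)  # tick positions, matching ylim(0, 12)
# cases = np.random.rand(len(years), len(groups)) * 50
# group_labels = ['group %d' % g for g in groups]  # placeholder age-group names
# contourplotAIDSByAgeGroup2(years, groups, cases, group_labels, groups, 'San Francisco')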
| 42.096296 | 92 | 0.685377 | 859 | 5,683 | 4.518044 | 0.14901 | 0.010822 | 0.016233 | 0.014429 | 0.865241 | 0.860861 | 0.860861 | 0.839474 | 0.839474 | 0.839474 | 0 | 0.086704 | 0.15168 | 5,683 | 134 | 93 | 42.410448 | 0.718316 | 0.200422 | 0 | 0.75 | 0 | 0 | 0.265632 | 0.083149 | 0 | 0 | 0 | 0 | 0 | 1 | 0.072917 | false | 0 | 0.041667 | 0 | 0.114583 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
f7af74db472bbf599b0c4e9808c3d0b31ae4ddac | 3,434 | py | Python | pynars/NARS/RuleMap/Interface/Interface_DecompositionalRules.py | AIxer/PyNARS | 443b6a5e1c9779a1b861df1ca51ce5a190998d2e | [
"MIT"
] | null | null | null | pynars/NARS/RuleMap/Interface/Interface_DecompositionalRules.py | AIxer/PyNARS | 443b6a5e1c9779a1b861df1ca51ce5a190998d2e | [
"MIT"
] | null | null | null | pynars/NARS/RuleMap/Interface/Interface_DecompositionalRules.py | AIxer/PyNARS | 443b6a5e1c9779a1b861df1ca51ce5a190998d2e | [
"MIT"
] | null | null | null | from pynars.NARS.DataStructures import Link, TaskLink, TermLink, LinkType, Task
from pynars.Narsese import Belief
from pynars.NAL.Inference import *
from pynars.NAL.Theorems import *
from pynars import Global
def _decompositional__decomposition_theorem2__0_0(task: Task, belief: Belief, tasklink: TaskLink=None, termlink: TermLink=None):
return decompositional__decomposition_theorem2(task, belief, (tasklink.budget if tasklink is not None else None), (termlink.budget if termlink is not None else None), inverse_premise=False)
def _decompositional__decomposition_theorem2__0_0_prime(task: Task, belief: Belief, tasklink: TaskLink=None, termlink: TermLink=None):
return decompositional__decomposition_theorem2(task, belief, (tasklink.budget if tasklink is not None else None), (termlink.budget if termlink is not None else None), inverse_premise=True)
def _decompositional__decomposition_theorem3__0_0(task: Task, belief: Belief, tasklink: TaskLink=None, termlink: TermLink=None):
return decompositional__decomposition_theorem3(task, belief, (tasklink.budget if tasklink is not None else None), (termlink.budget if termlink is not None else None), inverse_premise=False)
def _decompositional__decomposition_theorem3__0_0_prime(task: Task, belief: Belief, tasklink: TaskLink=None, termlink: TermLink=None):
return decompositional__decomposition_theorem3(task, belief, (tasklink.budget if tasklink is not None else None), (termlink.budget if termlink is not None else None), inverse_premise=True)
# def _decompositional__decomposition_theorem4__0_0(task: Task, belief: Belief, tasklink: TaskLink=None, termlink: TermLink=None):
# return decomposition_theorem4(task, belief, (tasklink.budget if tasklink is not None else None), (termlink.budget if termlink is not None else None), inverse_premise=False)
# def _decompositional__decomposition_theorem4__0_0_prime(task: Task, belief: Belief, tasklink: TaskLink=None, termlink: TermLink=None):
# return decomposition_theorem4(task, belief, (tasklink.budget if tasklink is not None else None), (termlink.budget if termlink is not None else None), inverse_premise=True)
def _decompositional__decomposition_theorem9(task: Task, belief: Belief, tasklink: TaskLink=None, termlink: TermLink=None):
return decompositional__decomposition_theorem9(task, belief, (tasklink.budget if tasklink is not None else None), (termlink.budget if termlink is not None else None), inverse_premise=False)
def _decompositional__decomposition_theorem9_prime(task: Task, belief: Belief, tasklink: TaskLink=None, termlink: TermLink=None):
return decompositional__decomposition_theorem9(task, belief, (tasklink.budget if tasklink is not None else None), (termlink.budget if termlink is not None else None), inverse_premise=True)
def _decompositional__decomposition_theorem10(task: Task, belief: Belief, tasklink: TaskLink=None, termlink: TermLink=None):
return decompositional__decomposition_theorem10(task, belief, (tasklink.budget if tasklink is not None else None), (termlink.budget if termlink is not None else None), inverse_premise=False)
def _decompositional__decomposition_theorem10_prime(task: Task, belief: Belief, tasklink: TaskLink=None, termlink: TermLink=None):
return decompositional__decomposition_theorem10(task, belief, (tasklink.budget if tasklink is not None else None), (termlink.budget if termlink is not None else None), inverse_premise=True)
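# Note on the pattern above: each wrapper only adapts the RuleMap dispatch
# signature (task, belief, tasklink, termlink) to the corresponding NAL
# inference function, forwarding the link budgets when the links are present.
# The `_prime` variants re-run the same theorem with inverse_premise=True,
# presumably exchanging the roles of the two premises.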
| 90.368421 | 194 | 0.814502 | 454 | 3,434 | 5.942731 | 0.081498 | 0.074129 | 0.066716 | 0.096368 | 0.923647 | 0.923647 | 0.894366 | 0.894366 | 0.894366 | 0.894366 | 0 | 0.011749 | 0.107746 | 3,434 | 37 | 195 | 92.810811 | 0.868799 | 0.179383 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.380952 | false | 0 | 0.238095 | 0.380952 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 9 |
f7d1d5c6696fb93e8fa33110306df108a1d15f3f | 189 | py | Python | authy_admin/__init__.py | jhmaddox/django-authy-admin | 3f9829c025b81db46a888625191d8882e96373e1 | [
"MIT"
] | 7 | 2015-12-20T11:38:49.000Z | 2021-04-11T19:20:28.000Z | authy_admin/__init__.py | jhmaddox/django-authy-admin | 3f9829c025b81db46a888625191d8882e96373e1 | [
"MIT"
] | 2 | 2017-06-02T10:17:38.000Z | 2020-05-19T23:53:03.000Z | authy_admin/__init__.py | jhmaddox/django-authy-admin | 3f9829c025b81db46a888625191d8882e96373e1 | [
"MIT"
] | 2 | 2015-12-20T11:38:50.000Z | 2017-01-21T21:05:36.000Z | from django.contrib import admin as default_admin
from authy_admin.sites import AuthyAdminSite
# replace django's default admin site with our version
default_admin.site = AuthyAdminSite()
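# Usage sketch (standard Django wiring of that era, shown for illustration):
# because this module rebinds django.contrib.admin.site, the usual URL include
# now serves the Authy-protected site, e.g. in urls.py:
#
# from django.conf.urls import include, url
# from django.contrib import admin
# urlpatterns = [url(r'^admin/', include(admin.site.urls))]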
| 31.5 | 54 | 0.835979 | 27 | 189 | 5.740741 | 0.592593 | 0.232258 | 0.206452 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.121693 | 189 | 5 | 55 | 37.8 | 0.933735 | 0.275132 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.666667 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
f7db64fc9ad9e408637afe73d2058573fe8d4e80 | 65,514 | py | Python | src/dataprotection/azext_dataprotection/vendored_sdks/dataprotection/aio/operations/_resource_guards_operations.py | LGDoor/azure-cli-extensions | 570a7c181999c1dd160d48f8454aab6cea057a20 | [
"MIT"
] | null | null | null | src/dataprotection/azext_dataprotection/vendored_sdks/dataprotection/aio/operations/_resource_guards_operations.py | LGDoor/azure-cli-extensions | 570a7c181999c1dd160d48f8454aab6cea057a20 | [
"MIT"
] | null | null | null | src/dataprotection/azext_dataprotection/vendored_sdks/dataprotection/aio/operations/_resource_guards_operations.py | LGDoor/azure-cli-extensions | 570a7c181999c1dd160d48f8454aab6cea057a20 | [
"MIT"
] | 1 | 2022-02-14T21:43:29.000Z | 2022-02-14T21:43:29.000Z | # coding=utf-8
# --------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for license information.
# Code generated by Microsoft (R) AutoRest Code Generator (autorest: 3.0.6370, generator: {generator})
# Changes may cause incorrect behavior and will be lost if the code is regenerated.
# --------------------------------------------------------------------------
from typing import Any, AsyncIterable, Callable, Dict, Generic, Optional, TypeVar
import warnings
from azure.core.async_paging import AsyncItemPaged, AsyncList
from azure.core.exceptions import ClientAuthenticationError, HttpResponseError, ResourceExistsError, ResourceNotFoundError, map_error
from azure.core.pipeline import PipelineResponse
from azure.core.pipeline.transport import AsyncHttpResponse, HttpRequest
from azure.mgmt.core.exceptions import ARMErrorFormat
from ... import models
T = TypeVar('T')
ClsType = Optional[Callable[[PipelineResponse[HttpRequest, AsyncHttpResponse], T, Dict[str, Any]], Any]]
class ResourceGuardsOperations:
"""ResourceGuardsOperations async operations.
You should not instantiate this class directly. Instead, you should create a Client instance that
instantiates it for you and attaches it as an attribute.
:ivar models: Alias to model classes used in this operation group.
:type models: ~azure.mgmt.dataprotection.models
:param client: Client for service requests.
:param config: Configuration of service client.
:param serializer: An object model serializer.
:param deserializer: An object model deserializer.
"""
models = models
def __init__(self, client, config, serializer, deserializer) -> None:
self._client = client
self._serialize = serializer
self._deserialize = deserializer
self._config = config
def get_resources_in_subscription(
self,
**kwargs
) -> AsyncIterable["models.ResourceGuardResourceList"]:
"""Returns ResourceGuards collection belonging to a subscription.
Returns ResourceGuards collection belonging to a subscription.
:keyword callable cls: A custom type or function that will be passed the direct response
:return: An iterator like instance of either ResourceGuardResourceList or the result of cls(response)
:rtype: ~azure.core.async_paging.AsyncItemPaged[~azure.mgmt.dataprotection.models.ResourceGuardResourceList]
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType["models.ResourceGuardResourceList"]
error_map = {
401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
}
error_map.update(kwargs.pop('error_map', {}))
api_version = "2022-04-01"
accept = "application/json"
def prepare_request(next_link=None):
# Construct headers
header_parameters = {} # type: Dict[str, Any]
header_parameters['Accept'] = self._serialize.header("accept", accept, 'str')
if not next_link:
# Construct URL
url = self.get_resources_in_subscription.metadata['url'] # type: ignore
path_format_arguments = {
'subscriptionId': self._serialize.url("self._config.subscription_id", self._config.subscription_id, 'str'),
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {} # type: Dict[str, Any]
query_parameters['api-version'] = self._serialize.query("api_version", api_version, 'str')
request = self._client.get(url, query_parameters, header_parameters)
else:
url = next_link
query_parameters = {} # type: Dict[str, Any]
request = self._client.get(url, query_parameters, header_parameters)
return request
async def extract_data(pipeline_response):
deserialized = self._deserialize('ResourceGuardResourceList', pipeline_response)
list_of_elem = deserialized.value
if cls:
list_of_elem = cls(list_of_elem)
return deserialized.next_link or None, AsyncList(list_of_elem)
async def get_next(next_link=None):
request = prepare_request(next_link)
pipeline_response = await self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [200]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
raise HttpResponseError(response=response, error_format=ARMErrorFormat)
return pipeline_response
return AsyncItemPaged(
get_next, extract_data
)
get_resources_in_subscription.metadata = {'url': '/subscriptions/{subscriptionId}/providers/Microsoft.DataProtection/resourceGuards'} # type: ignore
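# Paging sketch: the method above returns an AsyncItemPaged, so callers iterate
# with `async for` (client wiring assumed as in the class-level sketch):
#
# async for guard in client.resource_guards.get_resources_in_subscription():
#     print(guard.name)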
def get_resources_in_resource_group(
self,
resource_group_name: str,
**kwargs
) -> AsyncIterable["models.ResourceGuardResourceList"]:
"""Returns ResourceGuards collection belonging to a ResourceGroup.
Returns ResourceGuards collection belonging to a ResourceGroup.
:param resource_group_name: The name of the resource group where the backup vault is present.
:type resource_group_name: str
:keyword callable cls: A custom type or function that will be passed the direct response
:return: An iterator like instance of either ResourceGuardResourceList or the result of cls(response)
:rtype: ~azure.core.async_paging.AsyncItemPaged[~azure.mgmt.dataprotection.models.ResourceGuardResourceList]
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType["models.ResourceGuardResourceList"]
error_map = {
401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
}
error_map.update(kwargs.pop('error_map', {}))
api_version = "2022-04-01"
accept = "application/json"
def prepare_request(next_link=None):
# Construct headers
header_parameters = {} # type: Dict[str, Any]
header_parameters['Accept'] = self._serialize.header("accept", accept, 'str')
if not next_link:
# Construct URL
url = self.get_resources_in_resource_group.metadata['url'] # type: ignore
path_format_arguments = {
'subscriptionId': self._serialize.url("self._config.subscription_id", self._config.subscription_id, 'str'),
'resourceGroupName': self._serialize.url("resource_group_name", resource_group_name, 'str'),
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {} # type: Dict[str, Any]
query_parameters['api-version'] = self._serialize.query("api_version", api_version, 'str')
request = self._client.get(url, query_parameters, header_parameters)
else:
url = next_link
query_parameters = {} # type: Dict[str, Any]
request = self._client.get(url, query_parameters, header_parameters)
return request
async def extract_data(pipeline_response):
deserialized = self._deserialize('ResourceGuardResourceList', pipeline_response)
list_of_elem = deserialized.value
if cls:
list_of_elem = cls(list_of_elem)
return deserialized.next_link or None, AsyncList(list_of_elem)
async def get_next(next_link=None):
request = prepare_request(next_link)
pipeline_response = await self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [200]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
raise HttpResponseError(response=response, error_format=ARMErrorFormat)
return pipeline_response
return AsyncItemPaged(
get_next, extract_data
)
get_resources_in_resource_group.metadata = {'url': '/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.DataProtection/resourceGuards'} # type: ignore
async def put(
self,
resource_group_name: str,
resource_guards_name: str,
parameters: "models.ResourceGuardResource",
**kwargs
) -> "models.ResourceGuardResource":
"""Creates or updates a ResourceGuard resource belonging to a resource group.
Creates or updates a ResourceGuard resource belonging to a resource group.
:param resource_group_name: The name of the resource group where the backup vault is present.
:type resource_group_name: str
:param resource_guards_name: The name of the ResourceGuard.
:type resource_guards_name: str
:param parameters: Request body for operation.
:type parameters: ~azure.mgmt.dataprotection.models.ResourceGuardResource
:keyword callable cls: A custom type or function that will be passed the direct response
:return: ResourceGuardResource, or the result of cls(response)
:rtype: ~azure.mgmt.dataprotection.models.ResourceGuardResource
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType["models.ResourceGuardResource"]
error_map = {
401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
}
error_map.update(kwargs.pop('error_map', {}))
api_version = "2022-04-01"
content_type = kwargs.pop("content_type", "application/json")
accept = "application/json"
# Construct URL
url = self.put.metadata['url'] # type: ignore
path_format_arguments = {
'resourceGroupName': self._serialize.url("resource_group_name", resource_group_name, 'str'),
'subscriptionId': self._serialize.url("self._config.subscription_id", self._config.subscription_id, 'str'),
'resourceGuardsName': self._serialize.url("resource_guards_name", resource_guards_name, 'str'),
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {} # type: Dict[str, Any]
query_parameters['api-version'] = self._serialize.query("api_version", api_version, 'str')
# Construct headers
header_parameters = {} # type: Dict[str, Any]
header_parameters['Content-Type'] = self._serialize.header("content_type", content_type, 'str')
header_parameters['Accept'] = self._serialize.header("accept", accept, 'str')
body_content_kwargs = {} # type: Dict[str, Any]
body_content = self._serialize.body(parameters, 'ResourceGuardResource')
body_content_kwargs['content'] = body_content
request = self._client.put(url, query_parameters, header_parameters, **body_content_kwargs)
pipeline_response = await self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [200]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
raise HttpResponseError(response=response, error_format=ARMErrorFormat)
deserialized = self._deserialize('ResourceGuardResource', pipeline_response)
if cls:
return cls(pipeline_response, deserialized, {})
return deserialized
put.metadata = {'url': '/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.DataProtection/resourceGuards/{resourceGuardsName}'} # type: ignore
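# Create/update sketch (field names are assumptions drawn from this package's
# models; ResourceGuardResource is a tracked resource, so `location` is expected):
#
# resource = models.ResourceGuardResource(location="westus")
# created = await client.resource_guards.put("my-rg", "my-resource-guard", resource)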
async def get(
self,
resource_group_name: str,
resource_guards_name: str,
**kwargs
) -> "models.ResourceGuardResource":
"""Returns a ResourceGuard belonging to a resource group.
Returns a ResourceGuard belonging to a resource group.
:param resource_group_name: The name of the resource group where the backup vault is present.
:type resource_group_name: str
:param resource_guards_name: The name of the ResourceGuard.
:type resource_guards_name: str
:keyword callable cls: A custom type or function that will be passed the direct response
:return: ResourceGuardResource, or the result of cls(response)
:rtype: ~azure.mgmt.dataprotection.models.ResourceGuardResource
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType["models.ResourceGuardResource"]
error_map = {
401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
}
error_map.update(kwargs.pop('error_map', {}))
api_version = "2022-04-01"
accept = "application/json"
# Construct URL
url = self.get.metadata['url'] # type: ignore
path_format_arguments = {
'resourceGroupName': self._serialize.url("resource_group_name", resource_group_name, 'str'),
'subscriptionId': self._serialize.url("self._config.subscription_id", self._config.subscription_id, 'str'),
'resourceGuardsName': self._serialize.url("resource_guards_name", resource_guards_name, 'str'),
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {} # type: Dict[str, Any]
query_parameters['api-version'] = self._serialize.query("api_version", api_version, 'str')
# Construct headers
header_parameters = {} # type: Dict[str, Any]
header_parameters['Accept'] = self._serialize.header("accept", accept, 'str')
request = self._client.get(url, query_parameters, header_parameters)
pipeline_response = await self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [200]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
raise HttpResponseError(response=response, error_format=ARMErrorFormat)
deserialized = self._deserialize('ResourceGuardResource', pipeline_response)
if cls:
return cls(pipeline_response, deserialized, {})
return deserialized
get.metadata = {'url': '/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.DataProtection/resourceGuards/{resourceGuardsName}'} # type: ignore
async def delete(
self,
resource_group_name: str,
resource_guards_name: str,
**kwargs
) -> None:
"""Deletes a ResourceGuard resource from the resource group.
Deletes a ResourceGuard resource from the resource group.
:param resource_group_name: The name of the resource group where the backup vault is present.
:type resource_group_name: str
:param resource_guards_name: The name of the ResourceGuard.
:type resource_guards_name: str
:keyword callable cls: A custom type or function that will be passed the direct response
:return: None, or the result of cls(response)
:rtype: None
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType[None]
error_map = {
401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
}
error_map.update(kwargs.pop('error_map', {}))
api_version = "2022-04-01"
accept = "application/json"
# Construct URL
url = self.delete.metadata['url'] # type: ignore
path_format_arguments = {
'resourceGroupName': self._serialize.url("resource_group_name", resource_group_name, 'str'),
'subscriptionId': self._serialize.url("self._config.subscription_id", self._config.subscription_id, 'str'),
'resourceGuardsName': self._serialize.url("resource_guards_name", resource_guards_name, 'str'),
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {} # type: Dict[str, Any]
query_parameters['api-version'] = self._serialize.query("api_version", api_version, 'str')
# Construct headers
header_parameters = {} # type: Dict[str, Any]
header_parameters['Accept'] = self._serialize.header("accept", accept, 'str')
request = self._client.delete(url, query_parameters, header_parameters)
pipeline_response = await self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [200, 204]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
raise HttpResponseError(response=response, error_format=ARMErrorFormat)
if cls:
return cls(pipeline_response, None, {})
delete.metadata = {'url': '/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.DataProtection/resourceGuards/{resourceGuardsName}'} # type: ignore
async def patch(
self,
resource_group_name: str,
resource_guards_name: str,
parameters: "models.PatchResourceRequestInput",
**kwargs
) -> "models.ResourceGuardResource":
"""Updates a ResourceGuard resource belonging to a resource group. For example, updating tags for a resource.
Updates a ResourceGuard resource belonging to a resource group. For example, updating tags for
a resource.
:param resource_group_name: The name of the resource group where the backup vault is present.
:type resource_group_name: str
:param resource_guards_name: The name of ResourceGuard.
:type resource_guards_name: str
:param parameters: Request body for operation.
:type parameters: ~azure.mgmt.dataprotection.models.PatchResourceRequestInput
:keyword callable cls: A custom type or function that will be passed the direct response
:return: ResourceGuardResource, or the result of cls(response)
:rtype: ~azure.mgmt.dataprotection.models.ResourceGuardResource
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType["models.ResourceGuardResource"]
error_map = {
401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
}
error_map.update(kwargs.pop('error_map', {}))
api_version = "2022-04-01"
content_type = kwargs.pop("content_type", "application/json")
accept = "application/json"
# Construct URL
url = self.patch.metadata['url'] # type: ignore
path_format_arguments = {
'resourceGroupName': self._serialize.url("resource_group_name", resource_group_name, 'str'),
'subscriptionId': self._serialize.url("self._config.subscription_id", self._config.subscription_id, 'str'),
'resourceGuardsName': self._serialize.url("resource_guards_name", resource_guards_name, 'str'),
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {} # type: Dict[str, Any]
query_parameters['api-version'] = self._serialize.query("api_version", api_version, 'str')
# Construct headers
header_parameters = {} # type: Dict[str, Any]
header_parameters['Content-Type'] = self._serialize.header("content_type", content_type, 'str')
header_parameters['Accept'] = self._serialize.header("accept", accept, 'str')
body_content_kwargs = {} # type: Dict[str, Any]
body_content = self._serialize.body(parameters, 'PatchResourceRequestInput')
body_content_kwargs['content'] = body_content
request = self._client.patch(url, query_parameters, header_parameters, **body_content_kwargs)
pipeline_response = await self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [200]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
raise HttpResponseError(response=response, error_format=ARMErrorFormat)
deserialized = self._deserialize('ResourceGuardResource', pipeline_response)
if cls:
return cls(pipeline_response, deserialized, {})
return deserialized
patch.metadata = {'url': '/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.DataProtection/resourceGuards/{resourceGuardsName}'} # type: ignore
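    # A usage sketch for patch (assumes the hypothetical `client` from the sketch
    # above, and that PatchResourceRequestInput accepts a `tags` keyword):
    #
    #     from azure.mgmt.dataprotection import models
    #
    #     patched = await client.resource_guards.patch(
    #         resource_group_name="my-rg",
    #         resource_guards_name="my-guard",
    #         parameters=models.PatchResourceRequestInput(tags={"env": "test"}),
    #     )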
def get_disable_soft_delete_requests_objects(
self,
resource_group_name: str,
resource_guards_name: str,
**kwargs
) -> AsyncIterable["models.DppBaseResourceList"]:
"""Returns collection of operation request objects for a critical operation protected by the given ResourceGuard resource.
Returns collection of operation request objects for a critical operation protected by the given
ResourceGuard resource.
:param resource_group_name: The name of the resource group where the backup vault is present.
:type resource_group_name: str
        :param resource_guards_name: The name of ResourceGuard.
        :type resource_guards_name: str
:keyword callable cls: A custom type or function that will be passed the direct response
:return: An iterator like instance of either DppBaseResourceList or the result of cls(response)
:rtype: ~azure.core.async_paging.AsyncItemPaged[~azure.mgmt.dataprotection.models.DppBaseResourceList]
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType["models.DppBaseResourceList"]
error_map = {
401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
}
error_map.update(kwargs.pop('error_map', {}))
api_version = "2022-04-01"
accept = "application/json"
def prepare_request(next_link=None):
# Construct headers
header_parameters = {} # type: Dict[str, Any]
header_parameters['Accept'] = self._serialize.header("accept", accept, 'str')
if not next_link:
# Construct URL
url = self.get_disable_soft_delete_requests_objects.metadata['url'] # type: ignore
path_format_arguments = {
'resourceGroupName': self._serialize.url("resource_group_name", resource_group_name, 'str'),
'subscriptionId': self._serialize.url("self._config.subscription_id", self._config.subscription_id, 'str'),
'resourceGuardsName': self._serialize.url("resource_guards_name", resource_guards_name, 'str'),
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {} # type: Dict[str, Any]
query_parameters['api-version'] = self._serialize.query("api_version", api_version, 'str')
request = self._client.get(url, query_parameters, header_parameters)
else:
url = next_link
query_parameters = {} # type: Dict[str, Any]
request = self._client.get(url, query_parameters, header_parameters)
return request
async def extract_data(pipeline_response):
deserialized = self._deserialize('DppBaseResourceList', pipeline_response)
list_of_elem = deserialized.value
if cls:
list_of_elem = cls(list_of_elem)
return deserialized.next_link or None, AsyncList(list_of_elem)
async def get_next(next_link=None):
request = prepare_request(next_link)
pipeline_response = await self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [200]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
raise HttpResponseError(response=response, error_format=ARMErrorFormat)
return pipeline_response
return AsyncItemPaged(
get_next, extract_data
)
get_disable_soft_delete_requests_objects.metadata = {'url': '/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.DataProtection/resourceGuards/{resourceGuardsName}/disableSoftDeleteRequests'} # type: ignore
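    # Each of the paging methods below returns an AsyncItemPaged; consuming it
    # with `async for` inside a coroutine drives prepare_request/get_next, so the
    # next_link continuation is handled transparently. A sketch, reusing the
    # hypothetical `client` and placeholder names from the sketches above:
    #
    #     async for request_object in client.resource_guards.get_disable_soft_delete_requests_objects(
    #         resource_group_name="my-rg",
    #         resource_guards_name="my-guard",
    #     ):
    #         print(request_object.name)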
def get_delete_resource_guard_proxy_requests_objects(
self,
resource_group_name: str,
resource_guards_name: str,
**kwargs
) -> AsyncIterable["models.DppBaseResourceList"]:
"""Returns collection of operation request objects for a critical operation protected by the given ResourceGuard resource.
Returns collection of operation request objects for a critical operation protected by the given
ResourceGuard resource.
:param resource_group_name: The name of the resource group where the backup vault is present.
:type resource_group_name: str
        :param resource_guards_name: The name of ResourceGuard.
        :type resource_guards_name: str
:keyword callable cls: A custom type or function that will be passed the direct response
:return: An iterator like instance of either DppBaseResourceList or the result of cls(response)
:rtype: ~azure.core.async_paging.AsyncItemPaged[~azure.mgmt.dataprotection.models.DppBaseResourceList]
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType["models.DppBaseResourceList"]
error_map = {
401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
}
error_map.update(kwargs.pop('error_map', {}))
api_version = "2022-04-01"
accept = "application/json"
def prepare_request(next_link=None):
# Construct headers
header_parameters = {} # type: Dict[str, Any]
header_parameters['Accept'] = self._serialize.header("accept", accept, 'str')
if not next_link:
# Construct URL
url = self.get_delete_resource_guard_proxy_requests_objects.metadata['url'] # type: ignore
path_format_arguments = {
'resourceGroupName': self._serialize.url("resource_group_name", resource_group_name, 'str'),
'subscriptionId': self._serialize.url("self._config.subscription_id", self._config.subscription_id, 'str'),
'resourceGuardsName': self._serialize.url("resource_guards_name", resource_guards_name, 'str'),
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {} # type: Dict[str, Any]
query_parameters['api-version'] = self._serialize.query("api_version", api_version, 'str')
request = self._client.get(url, query_parameters, header_parameters)
else:
url = next_link
query_parameters = {} # type: Dict[str, Any]
request = self._client.get(url, query_parameters, header_parameters)
return request
async def extract_data(pipeline_response):
deserialized = self._deserialize('DppBaseResourceList', pipeline_response)
list_of_elem = deserialized.value
if cls:
list_of_elem = cls(list_of_elem)
return deserialized.next_link or None, AsyncList(list_of_elem)
async def get_next(next_link=None):
request = prepare_request(next_link)
pipeline_response = await self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [200]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
raise HttpResponseError(response=response, error_format=ARMErrorFormat)
return pipeline_response
return AsyncItemPaged(
get_next, extract_data
)
get_delete_resource_guard_proxy_requests_objects.metadata = {'url': '/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.DataProtection/resourceGuards/{resourceGuardsName}/deleteResourceGuardProxyRequests'} # type: ignore
def get_backup_security_pin_requests_objects(
self,
resource_group_name: str,
resource_guards_name: str,
**kwargs
) -> AsyncIterable["models.DppBaseResourceList"]:
"""Returns collection of operation request objects for a critical operation protected by the given ResourceGuard resource.
Returns collection of operation request objects for a critical operation protected by the given
ResourceGuard resource.
:param resource_group_name: The name of the resource group where the backup vault is present.
:type resource_group_name: str
        :param resource_guards_name: The name of ResourceGuard.
        :type resource_guards_name: str
:keyword callable cls: A custom type or function that will be passed the direct response
:return: An iterator like instance of either DppBaseResourceList or the result of cls(response)
:rtype: ~azure.core.async_paging.AsyncItemPaged[~azure.mgmt.dataprotection.models.DppBaseResourceList]
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType["models.DppBaseResourceList"]
error_map = {
401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
}
error_map.update(kwargs.pop('error_map', {}))
api_version = "2022-04-01"
accept = "application/json"
def prepare_request(next_link=None):
# Construct headers
header_parameters = {} # type: Dict[str, Any]
header_parameters['Accept'] = self._serialize.header("accept", accept, 'str')
if not next_link:
# Construct URL
url = self.get_backup_security_pin_requests_objects.metadata['url'] # type: ignore
path_format_arguments = {
'resourceGroupName': self._serialize.url("resource_group_name", resource_group_name, 'str'),
'subscriptionId': self._serialize.url("self._config.subscription_id", self._config.subscription_id, 'str'),
'resourceGuardsName': self._serialize.url("resource_guards_name", resource_guards_name, 'str'),
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {} # type: Dict[str, Any]
query_parameters['api-version'] = self._serialize.query("api_version", api_version, 'str')
request = self._client.get(url, query_parameters, header_parameters)
else:
url = next_link
query_parameters = {} # type: Dict[str, Any]
request = self._client.get(url, query_parameters, header_parameters)
return request
async def extract_data(pipeline_response):
deserialized = self._deserialize('DppBaseResourceList', pipeline_response)
list_of_elem = deserialized.value
if cls:
list_of_elem = cls(list_of_elem)
return deserialized.next_link or None, AsyncList(list_of_elem)
async def get_next(next_link=None):
request = prepare_request(next_link)
pipeline_response = await self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [200]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
raise HttpResponseError(response=response, error_format=ARMErrorFormat)
return pipeline_response
return AsyncItemPaged(
get_next, extract_data
)
get_backup_security_pin_requests_objects.metadata = {'url': '/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.DataProtection/resourceGuards/{resourceGuardsName}/getBackupSecurityPINRequests'} # type: ignore
def get_delete_protected_item_requests_objects(
self,
resource_group_name: str,
resource_guards_name: str,
**kwargs
) -> AsyncIterable["models.DppBaseResourceList"]:
"""Returns collection of operation request objects for a critical operation protected by the given ResourceGuard resource.
Returns collection of operation request objects for a critical operation protected by the given
ResourceGuard resource.
:param resource_group_name: The name of the resource group where the backup vault is present.
:type resource_group_name: str
        :param resource_guards_name: The name of ResourceGuard.
        :type resource_guards_name: str
:keyword callable cls: A custom type or function that will be passed the direct response
:return: An iterator like instance of either DppBaseResourceList or the result of cls(response)
:rtype: ~azure.core.async_paging.AsyncItemPaged[~azure.mgmt.dataprotection.models.DppBaseResourceList]
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType["models.DppBaseResourceList"]
error_map = {
401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
}
error_map.update(kwargs.pop('error_map', {}))
api_version = "2022-04-01"
accept = "application/json"
def prepare_request(next_link=None):
# Construct headers
header_parameters = {} # type: Dict[str, Any]
header_parameters['Accept'] = self._serialize.header("accept", accept, 'str')
if not next_link:
# Construct URL
url = self.get_delete_protected_item_requests_objects.metadata['url'] # type: ignore
path_format_arguments = {
'resourceGroupName': self._serialize.url("resource_group_name", resource_group_name, 'str'),
'subscriptionId': self._serialize.url("self._config.subscription_id", self._config.subscription_id, 'str'),
'resourceGuardsName': self._serialize.url("resource_guards_name", resource_guards_name, 'str'),
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {} # type: Dict[str, Any]
query_parameters['api-version'] = self._serialize.query("api_version", api_version, 'str')
request = self._client.get(url, query_parameters, header_parameters)
else:
url = next_link
query_parameters = {} # type: Dict[str, Any]
request = self._client.get(url, query_parameters, header_parameters)
return request
async def extract_data(pipeline_response):
deserialized = self._deserialize('DppBaseResourceList', pipeline_response)
list_of_elem = deserialized.value
if cls:
list_of_elem = cls(list_of_elem)
return deserialized.next_link or None, AsyncList(list_of_elem)
async def get_next(next_link=None):
request = prepare_request(next_link)
pipeline_response = await self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [200]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
raise HttpResponseError(response=response, error_format=ARMErrorFormat)
return pipeline_response
return AsyncItemPaged(
get_next, extract_data
)
get_delete_protected_item_requests_objects.metadata = {'url': '/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.DataProtection/resourceGuards/{resourceGuardsName}/deleteProtectedItemRequests'} # type: ignore
def get_update_protection_policy_requests_objects(
self,
resource_group_name: str,
resource_guards_name: str,
**kwargs
) -> AsyncIterable["models.DppBaseResourceList"]:
"""Returns collection of operation request objects for a critical operation protected by the given ResourceGuard resource.
Returns collection of operation request objects for a critical operation protected by the given
ResourceGuard resource.
:param resource_group_name: The name of the resource group where the backup vault is present.
:type resource_group_name: str
        :param resource_guards_name: The name of ResourceGuard.
        :type resource_guards_name: str
:keyword callable cls: A custom type or function that will be passed the direct response
:return: An iterator like instance of either DppBaseResourceList or the result of cls(response)
:rtype: ~azure.core.async_paging.AsyncItemPaged[~azure.mgmt.dataprotection.models.DppBaseResourceList]
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType["models.DppBaseResourceList"]
error_map = {
401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
}
error_map.update(kwargs.pop('error_map', {}))
api_version = "2022-04-01"
accept = "application/json"
def prepare_request(next_link=None):
# Construct headers
header_parameters = {} # type: Dict[str, Any]
header_parameters['Accept'] = self._serialize.header("accept", accept, 'str')
if not next_link:
# Construct URL
url = self.get_update_protection_policy_requests_objects.metadata['url'] # type: ignore
path_format_arguments = {
'resourceGroupName': self._serialize.url("resource_group_name", resource_group_name, 'str'),
'subscriptionId': self._serialize.url("self._config.subscription_id", self._config.subscription_id, 'str'),
'resourceGuardsName': self._serialize.url("resource_guards_name", resource_guards_name, 'str'),
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {} # type: Dict[str, Any]
query_parameters['api-version'] = self._serialize.query("api_version", api_version, 'str')
request = self._client.get(url, query_parameters, header_parameters)
else:
url = next_link
query_parameters = {} # type: Dict[str, Any]
request = self._client.get(url, query_parameters, header_parameters)
return request
async def extract_data(pipeline_response):
deserialized = self._deserialize('DppBaseResourceList', pipeline_response)
list_of_elem = deserialized.value
if cls:
list_of_elem = cls(list_of_elem)
return deserialized.next_link or None, AsyncList(list_of_elem)
async def get_next(next_link=None):
request = prepare_request(next_link)
pipeline_response = await self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [200]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
raise HttpResponseError(response=response, error_format=ARMErrorFormat)
return pipeline_response
return AsyncItemPaged(
get_next, extract_data
)
get_update_protection_policy_requests_objects.metadata = {'url': '/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.DataProtection/resourceGuards/{resourceGuardsName}/updateProtectionPolicyRequests'} # type: ignore
def get_update_protected_item_requests_objects(
self,
resource_group_name: str,
resource_guards_name: str,
**kwargs
) -> AsyncIterable["models.DppBaseResourceList"]:
"""Returns collection of operation request objects for a critical operation protected by the given ResourceGuard resource.
Returns collection of operation request objects for a critical operation protected by the given
ResourceGuard resource.
:param resource_group_name: The name of the resource group where the backup vault is present.
:type resource_group_name: str
        :param resource_guards_name: The name of ResourceGuard.
        :type resource_guards_name: str
:keyword callable cls: A custom type or function that will be passed the direct response
:return: An iterator like instance of either DppBaseResourceList or the result of cls(response)
:rtype: ~azure.core.async_paging.AsyncItemPaged[~azure.mgmt.dataprotection.models.DppBaseResourceList]
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType["models.DppBaseResourceList"]
error_map = {
401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
}
error_map.update(kwargs.pop('error_map', {}))
api_version = "2022-04-01"
accept = "application/json"
def prepare_request(next_link=None):
# Construct headers
header_parameters = {} # type: Dict[str, Any]
header_parameters['Accept'] = self._serialize.header("accept", accept, 'str')
if not next_link:
# Construct URL
url = self.get_update_protected_item_requests_objects.metadata['url'] # type: ignore
path_format_arguments = {
'resourceGroupName': self._serialize.url("resource_group_name", resource_group_name, 'str'),
'subscriptionId': self._serialize.url("self._config.subscription_id", self._config.subscription_id, 'str'),
'resourceGuardsName': self._serialize.url("resource_guards_name", resource_guards_name, 'str'),
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {} # type: Dict[str, Any]
query_parameters['api-version'] = self._serialize.query("api_version", api_version, 'str')
request = self._client.get(url, query_parameters, header_parameters)
else:
url = next_link
query_parameters = {} # type: Dict[str, Any]
request = self._client.get(url, query_parameters, header_parameters)
return request
async def extract_data(pipeline_response):
deserialized = self._deserialize('DppBaseResourceList', pipeline_response)
list_of_elem = deserialized.value
if cls:
list_of_elem = cls(list_of_elem)
return deserialized.next_link or None, AsyncList(list_of_elem)
async def get_next(next_link=None):
request = prepare_request(next_link)
pipeline_response = await self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [200]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
raise HttpResponseError(response=response, error_format=ARMErrorFormat)
return pipeline_response
return AsyncItemPaged(
get_next, extract_data
)
get_update_protected_item_requests_objects.metadata = {'url': '/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.DataProtection/resourceGuards/{resourceGuardsName}/updateProtectedItemRequests'} # type: ignore
async def get_default_disable_soft_delete_requests_object(
self,
resource_group_name: str,
resource_guards_name: str,
request_name: str,
**kwargs
) -> "models.DppBaseResource":
"""Returns collection of operation request objects for a critical operation protected by the given ResourceGuard resource.
Returns collection of operation request objects for a critical operation protected by the given
ResourceGuard resource.
:param resource_group_name: The name of the resource group where the backup vault is present.
:type resource_group_name: str
:param resource_guards_name:
:type resource_guards_name: str
:param request_name:
:type request_name: str
:keyword callable cls: A custom type or function that will be passed the direct response
:return: DppBaseResource, or the result of cls(response)
:rtype: ~azure.mgmt.dataprotection.models.DppBaseResource
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType["models.DppBaseResource"]
error_map = {
401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
}
error_map.update(kwargs.pop('error_map', {}))
api_version = "2022-04-01"
accept = "application/json"
# Construct URL
url = self.get_default_disable_soft_delete_requests_object.metadata['url'] # type: ignore
path_format_arguments = {
'resourceGroupName': self._serialize.url("resource_group_name", resource_group_name, 'str'),
'subscriptionId': self._serialize.url("self._config.subscription_id", self._config.subscription_id, 'str'),
'resourceGuardsName': self._serialize.url("resource_guards_name", resource_guards_name, 'str'),
'requestName': self._serialize.url("request_name", request_name, 'str'),
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {} # type: Dict[str, Any]
query_parameters['api-version'] = self._serialize.query("api_version", api_version, 'str')
# Construct headers
header_parameters = {} # type: Dict[str, Any]
header_parameters['Accept'] = self._serialize.header("accept", accept, 'str')
request = self._client.get(url, query_parameters, header_parameters)
pipeline_response = await self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [200]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
raise HttpResponseError(response=response, error_format=ARMErrorFormat)
deserialized = self._deserialize('DppBaseResource', pipeline_response)
if cls:
return cls(pipeline_response, deserialized, {})
return deserialized
get_default_disable_soft_delete_requests_object.metadata = {'url': '/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.DataProtection/resourceGuards/{resourceGuardsName}/disableSoftDeleteRequests/{requestName}'} # type: ignore
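    # The get_default_* variants below take the extra {requestName} path segment
    # and return a single DppBaseResource instead of a pager. A sketch, with
    # "default" standing in for an actual request name:
    #
    #     request_object = await client.resource_guards.get_default_disable_soft_delete_requests_object(
    #         resource_group_name="my-rg",
    #         resource_guards_name="my-guard",
    #         request_name="default",
    #     )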
async def get_default_delete_resource_guard_proxy_requests_object(
self,
resource_group_name: str,
resource_guards_name: str,
request_name: str,
**kwargs
) -> "models.DppBaseResource":
"""Returns collection of operation request objects for a critical operation protected by the given ResourceGuard resource.
Returns collection of operation request objects for a critical operation protected by the given
ResourceGuard resource.
:param resource_group_name: The name of the resource group where the backup vault is present.
:type resource_group_name: str
:param resource_guards_name:
:type resource_guards_name: str
:param request_name:
:type request_name: str
:keyword callable cls: A custom type or function that will be passed the direct response
:return: DppBaseResource, or the result of cls(response)
:rtype: ~azure.mgmt.dataprotection.models.DppBaseResource
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType["models.DppBaseResource"]
error_map = {
401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
}
error_map.update(kwargs.pop('error_map', {}))
api_version = "2022-04-01"
accept = "application/json"
# Construct URL
url = self.get_default_delete_resource_guard_proxy_requests_object.metadata['url'] # type: ignore
path_format_arguments = {
'resourceGroupName': self._serialize.url("resource_group_name", resource_group_name, 'str'),
'subscriptionId': self._serialize.url("self._config.subscription_id", self._config.subscription_id, 'str'),
'resourceGuardsName': self._serialize.url("resource_guards_name", resource_guards_name, 'str'),
'requestName': self._serialize.url("request_name", request_name, 'str'),
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {} # type: Dict[str, Any]
query_parameters['api-version'] = self._serialize.query("api_version", api_version, 'str')
# Construct headers
header_parameters = {} # type: Dict[str, Any]
header_parameters['Accept'] = self._serialize.header("accept", accept, 'str')
request = self._client.get(url, query_parameters, header_parameters)
pipeline_response = await self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [200]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
raise HttpResponseError(response=response, error_format=ARMErrorFormat)
deserialized = self._deserialize('DppBaseResource', pipeline_response)
if cls:
return cls(pipeline_response, deserialized, {})
return deserialized
get_default_delete_resource_guard_proxy_requests_object.metadata = {'url': '/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.DataProtection/resourceGuards/{resourceGuardsName}/deleteResourceGuardProxyRequests/{requestName}'} # type: ignore
async def get_default_backup_security_pin_requests_object(
self,
resource_group_name: str,
resource_guards_name: str,
request_name: str,
**kwargs
) -> "models.DppBaseResource":
"""Returns collection of operation request objects for a critical operation protected by the given ResourceGuard resource.
Returns collection of operation request objects for a critical operation protected by the given
ResourceGuard resource.
:param resource_group_name: The name of the resource group where the backup vault is present.
:type resource_group_name: str
:param resource_guards_name:
:type resource_guards_name: str
:param request_name:
:type request_name: str
:keyword callable cls: A custom type or function that will be passed the direct response
:return: DppBaseResource, or the result of cls(response)
:rtype: ~azure.mgmt.dataprotection.models.DppBaseResource
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType["models.DppBaseResource"]
error_map = {
401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
}
error_map.update(kwargs.pop('error_map', {}))
api_version = "2022-04-01"
accept = "application/json"
# Construct URL
url = self.get_default_backup_security_pin_requests_object.metadata['url'] # type: ignore
path_format_arguments = {
'resourceGroupName': self._serialize.url("resource_group_name", resource_group_name, 'str'),
'subscriptionId': self._serialize.url("self._config.subscription_id", self._config.subscription_id, 'str'),
'resourceGuardsName': self._serialize.url("resource_guards_name", resource_guards_name, 'str'),
'requestName': self._serialize.url("request_name", request_name, 'str'),
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {} # type: Dict[str, Any]
query_parameters['api-version'] = self._serialize.query("api_version", api_version, 'str')
# Construct headers
header_parameters = {} # type: Dict[str, Any]
header_parameters['Accept'] = self._serialize.header("accept", accept, 'str')
request = self._client.get(url, query_parameters, header_parameters)
pipeline_response = await self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [200]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
raise HttpResponseError(response=response, error_format=ARMErrorFormat)
deserialized = self._deserialize('DppBaseResource', pipeline_response)
if cls:
return cls(pipeline_response, deserialized, {})
return deserialized
get_default_backup_security_pin_requests_object.metadata = {'url': '/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.DataProtection/resourceGuards/{resourceGuardsName}/getBackupSecurityPINRequests/{requestName}'} # type: ignore
async def get_default_delete_protected_item_requests_object(
self,
resource_group_name: str,
resource_guards_name: str,
request_name: str,
**kwargs
) -> "models.DppBaseResource":
"""Returns collection of operation request objects for a critical operation protected by the given ResourceGuard resource.
Returns collection of operation request objects for a critical operation protected by the given
ResourceGuard resource.
:param resource_group_name: The name of the resource group where the backup vault is present.
:type resource_group_name: str
:param resource_guards_name:
:type resource_guards_name: str
:param request_name:
:type request_name: str
:keyword callable cls: A custom type or function that will be passed the direct response
:return: DppBaseResource, or the result of cls(response)
:rtype: ~azure.mgmt.dataprotection.models.DppBaseResource
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType["models.DppBaseResource"]
error_map = {
401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
}
error_map.update(kwargs.pop('error_map', {}))
api_version = "2022-04-01"
accept = "application/json"
# Construct URL
url = self.get_default_delete_protected_item_requests_object.metadata['url'] # type: ignore
path_format_arguments = {
'resourceGroupName': self._serialize.url("resource_group_name", resource_group_name, 'str'),
'subscriptionId': self._serialize.url("self._config.subscription_id", self._config.subscription_id, 'str'),
'resourceGuardsName': self._serialize.url("resource_guards_name", resource_guards_name, 'str'),
'requestName': self._serialize.url("request_name", request_name, 'str'),
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {} # type: Dict[str, Any]
query_parameters['api-version'] = self._serialize.query("api_version", api_version, 'str')
# Construct headers
header_parameters = {} # type: Dict[str, Any]
header_parameters['Accept'] = self._serialize.header("accept", accept, 'str')
request = self._client.get(url, query_parameters, header_parameters)
pipeline_response = await self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [200]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
raise HttpResponseError(response=response, error_format=ARMErrorFormat)
deserialized = self._deserialize('DppBaseResource', pipeline_response)
if cls:
return cls(pipeline_response, deserialized, {})
return deserialized
get_default_delete_protected_item_requests_object.metadata = {'url': '/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.DataProtection/resourceGuards/{resourceGuardsName}/deleteProtectedItemRequests/{requestName}'} # type: ignore
async def get_default_update_protection_policy_requests_object(
self,
resource_group_name: str,
resource_guards_name: str,
request_name: str,
**kwargs
) -> "models.DppBaseResource":
"""Returns collection of operation request objects for a critical operation protected by the given ResourceGuard resource.
Returns collection of operation request objects for a critical operation protected by the given
ResourceGuard resource.
:param resource_group_name: The name of the resource group where the backup vault is present.
:type resource_group_name: str
:param resource_guards_name:
:type resource_guards_name: str
:param request_name:
:type request_name: str
:keyword callable cls: A custom type or function that will be passed the direct response
:return: DppBaseResource, or the result of cls(response)
:rtype: ~azure.mgmt.dataprotection.models.DppBaseResource
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType["models.DppBaseResource"]
error_map = {
401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
}
error_map.update(kwargs.pop('error_map', {}))
api_version = "2022-04-01"
accept = "application/json"
# Construct URL
url = self.get_default_update_protection_policy_requests_object.metadata['url'] # type: ignore
path_format_arguments = {
'resourceGroupName': self._serialize.url("resource_group_name", resource_group_name, 'str'),
'subscriptionId': self._serialize.url("self._config.subscription_id", self._config.subscription_id, 'str'),
'resourceGuardsName': self._serialize.url("resource_guards_name", resource_guards_name, 'str'),
'requestName': self._serialize.url("request_name", request_name, 'str'),
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {} # type: Dict[str, Any]
query_parameters['api-version'] = self._serialize.query("api_version", api_version, 'str')
# Construct headers
header_parameters = {} # type: Dict[str, Any]
header_parameters['Accept'] = self._serialize.header("accept", accept, 'str')
request = self._client.get(url, query_parameters, header_parameters)
pipeline_response = await self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [200]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
raise HttpResponseError(response=response, error_format=ARMErrorFormat)
deserialized = self._deserialize('DppBaseResource', pipeline_response)
if cls:
return cls(pipeline_response, deserialized, {})
return deserialized
get_default_update_protection_policy_requests_object.metadata = {'url': '/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.DataProtection/resourceGuards/{resourceGuardsName}/updateProtectionPolicyRequests/{requestName}'} # type: ignore
async def get_default_update_protected_item_requests_object(
self,
resource_group_name: str,
resource_guards_name: str,
request_name: str,
**kwargs
) -> "models.DppBaseResource":
"""Returns collection of operation request objects for a critical operation protected by the given ResourceGuard resource.
Returns collection of operation request objects for a critical operation protected by the given
ResourceGuard resource.
:param resource_group_name: The name of the resource group where the backup vault is present.
:type resource_group_name: str
:param resource_guards_name:
:type resource_guards_name: str
:param request_name:
:type request_name: str
:keyword callable cls: A custom type or function that will be passed the direct response
:return: DppBaseResource, or the result of cls(response)
:rtype: ~azure.mgmt.dataprotection.models.DppBaseResource
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType["models.DppBaseResource"]
error_map = {
401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
}
error_map.update(kwargs.pop('error_map', {}))
api_version = "2022-04-01"
accept = "application/json"
# Construct URL
url = self.get_default_update_protected_item_requests_object.metadata['url'] # type: ignore
path_format_arguments = {
'resourceGroupName': self._serialize.url("resource_group_name", resource_group_name, 'str'),
'subscriptionId': self._serialize.url("self._config.subscription_id", self._config.subscription_id, 'str'),
'resourceGuardsName': self._serialize.url("resource_guards_name", resource_guards_name, 'str'),
'requestName': self._serialize.url("request_name", request_name, 'str'),
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {} # type: Dict[str, Any]
query_parameters['api-version'] = self._serialize.query("api_version", api_version, 'str')
# Construct headers
header_parameters = {} # type: Dict[str, Any]
header_parameters['Accept'] = self._serialize.header("accept", accept, 'str')
request = self._client.get(url, query_parameters, header_parameters)
pipeline_response = await self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [200]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
raise HttpResponseError(response=response, error_format=ARMErrorFormat)
deserialized = self._deserialize('DppBaseResource', pipeline_response)
if cls:
return cls(pipeline_response, deserialized, {})
return deserialized
get_default_update_protected_item_requests_object.metadata = {'url': '/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.DataProtection/resourceGuards/{resourceGuardsName}/updateProtectedItemRequests/{requestName}'} # type: ignore
# Source: STL_Py/venv/Version_Extended/ExtendedOutputDemo.py (repo: pb-10/Smart-Traffic-Light, license: MIT)
from turtle import Turtle
import turtle
from turtle import Screen
def HeadText():
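    """Write the "Side 1".."Side 4" column headers and the Left/Straight/Right row labels."""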
turtle.color('black')
style = ('Courier', 14,)
turtle.speed(1000)
turtle.penup()
turtle.setposition(-198, 285)
turtle.write('Side 1', font=style, align='center')
turtle.penup()
turtle.setposition(-48, 285)
turtle.write('Side 2', font=style, align='center')
turtle.penup()
turtle.setposition(102, 285)
turtle.write('Side 3', font=style, align='center')
turtle.penup()
turtle.setposition(252, 285)
turtle.write('Side 4', font=style, align='center')
turtle.setposition(-245, 140)
turtle.write('Left ', font=style, align='center')
turtle.penup()
turtle.setposition(-260, 90)
turtle.write('Straight ', font=style, align='center')
turtle.penup()
turtle.setposition(-250, 40)
turtle.write('Right ', font=style, align='center')
turtle.penup()
turtle.hideturtle()
def Back():
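    """Draw the grey back plate behind each of the four signal columns."""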
for i in range(0,4):
pen9 = Turtle(shape='square')
pen9.color('white')
pen9.shapesize(12.65, 2.5)
pen9.speed(100)
pen9.color('grey')
pen9.penup()
pen9.sety(150)
pen9.setx(-200+(i*150))
def Pole():
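    """Draw the grey pole below each signal."""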
for i in range(0, 4):
pen9 = Turtle(shape='square')
pen9.shapesize(9, 1)
pen9.color('white')
pen9.speed(100)
pen9.penup()
pen9.sety(-65)
pen9.setx(-200+(i*150))
pen9.color('grey')
def Base():
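    """Draw each pole's base and write the "Total Cars", "Passing Cars" and "Time" row labels."""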
for i in range(0, 4):
pen9 = Turtle(shape='square')
pen9.color('white')
pen9.penup()
pen9.speed(100)
pen9.sety(-150)
pen9.setx(-200+(i*150))
pen9.shapesize(1, 2)
pen9.color('grey')
turtle.color('black')
style = ('Courier', 14,)
turtle.speed(1000)
turtle.penup()
turtle.setposition(-320, -207)
turtle.write('Total Cars :', font=style, align='center')
turtle.penup()
turtle.setposition(-329, -227)
turtle.write('Passing Cars :', font=style, align='center')
turtle.penup()
turtle.setposition(-297, -247)
turtle.write('Time :', font=style, align='center')
turtle.penup()
turtle.hideturtle()
def Red(Num):
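    """Light the red (top) lamp of signal Num and blank the four lamps below it."""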
i=Num-1
pen1 = Turtle(shape='circle')
pen1.color('white')
pen1.speed(100)
pen1.shapesize(2)
pen1.color('red')
pen1.penup()
pen1.sety(250)
pen1.setx(-200 + (i * 150))
pen2 = Turtle(shape='circle')
pen2.color('white')
pen2.speed(100)
pen2.shapesize(2)
pen2.color('white')
pen2.penup()
pen2.sety(200)
pen2.setx(-200 + (i * 150))
pen3 = Turtle(shape='circle')
pen3.color('white')
pen3.speed(165)
pen3.shapesize(2)
pen3.color('white')
pen3.penup()
pen3.sety(150)
pen3.setx(-200 + (i * 150))
pen3 = Turtle(shape='circle')
pen3.color('white')
pen3.speed(165)
pen3.shapesize(2)
pen3.color('white')
pen3.penup()
pen3.sety(100)
pen3.setx(-200 + (i * 150))
pen3 = Turtle(shape='circle')
pen3.color('white')
pen3.speed(165)
pen3.shapesize(2)
pen3.color('white')
pen3.penup()
pen3.sety(50)
pen3.setx(-200 + (i * 150))
def Yellow(Num):
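    """Light the yellow (second) lamp of signal Num and blank the other lamps."""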
i=Num-1
pen1 = Turtle(shape='circle')
pen1.color('white')
pen1.speed(100)
pen1.shapesize(2)
pen1.color('white')
pen1.penup()
pen1.sety(250)
pen1.setx(-200 + (i * 150))
pen2 = Turtle(shape='circle')
pen2.color('white')
pen2.speed(100)
pen2.shapesize(2)
pen2.color('yellow')
pen2.penup()
pen2.sety(200)
pen2.setx(-200 + (i * 150))
pen3 = Turtle(shape='circle')
pen3.color('white')
pen3.speed(165)
pen3.shapesize(2)
pen3.color('white')
pen3.penup()
pen3.sety(150)
pen3.setx(-200 + (i * 150))
pen3 = Turtle(shape='circle')
pen3.color('white')
pen3.speed(165)
pen3.shapesize(2)
pen3.color('white')
pen3.penup()
pen3.sety(100)
pen3.setx(-200 + (i * 150))
pen3 = Turtle(shape='circle')
pen3.color('white')
pen3.speed(165)
pen3.shapesize(2)
pen3.color('white')
pen3.penup()
pen3.sety(50)
pen3.setx(-200 + (i * 150))
def GreenL(Num,TCars,PCars,Time):
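    """Light the left-turn (third) green lamp of signal Num and write its total cars, passing cars and time."""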
i=Num-1
pen1 = Turtle(shape='circle')
pen1.color('white')
pen1.speed(100)
pen1.shapesize(2)
pen1.color('white')
pen1.penup()
pen1.sety(250)
pen1.setx(-200 + (i * 150))
pen2 = Turtle(shape='circle')
pen2.color('white')
pen2.speed(100)
pen2.shapesize(2)
pen2.color('white')
pen2.penup()
pen2.sety(200)
pen2.setx(-200 + (i * 150))
pen3 = Turtle(shape='circle')
pen3.color('white')
pen3.speed(165)
pen3.shapesize(2)
pen3.color('green')
pen3.penup()
pen3.sety(150)
pen3.setx(-200 + (i * 150))
turtle.color('black')
style = ('Courier', 14,)
turtle.speed(1000)
turtle.penup()
pen3 = Turtle(shape='square')
pen3.color('white')
pen3.speed(100)
pen3.shapesize(1)
pen3.color('white')
pen3.penup()
pen3.sety(-207)
pen3.setx(-230 + ((i) * 150))
turtle.setposition(-230 + (i * 150), -207)
turtle.write(TCars, font=style, align='center')
turtle.penup()
pen3 = Turtle(shape='square')
pen3.color('white')
pen3.speed(100)
pen3.shapesize(1)
pen3.color('white')
pen3.penup()
pen3.sety(-227)
pen3.setx(-230 + ((i) * 150))
turtle.setposition(-230 + (i * 150), -227)
turtle.write(PCars, font=style, align='center')
turtle.penup()
pen3 = Turtle(shape='square')
pen3.color('white')
pen3.speed(100)
pen3.shapesize(1)
pen3.color('white')
pen3.penup()
pen3.sety(-247)
pen3.setx(-230 + ((i) * 150))
turtle.setposition(-230 + (i * 150), -247)
turtle.write(Time, font=style, align='center')
turtle.hideturtle()
def GreenM(Num,TCars,PCars,Time):
i=Num-1
pen1 = Turtle(shape='circle')
pen1.color('white')
pen1.speed(100)
pen1.shapesize(2)
pen1.color('white')
pen1.penup()
pen1.sety(250)
pen1.setx(-200 + (i * 150))
pen2 = Turtle(shape='circle')
pen2.color('white')
pen2.speed(100)
pen2.shapesize(2)
pen2.color('white')
pen2.penup()
pen2.sety(200)
pen2.setx(-200 + (i * 150))
pen3 = Turtle(shape='circle')
pen3.color('white')
pen3.speed(165)
pen3.shapesize(2)
pen3.color('green')
pen3.penup()
pen3.sety(100)
pen3.setx(-200 + (i * 150))
turtle.color('black')
style = ('Courier', 14,)
turtle.speed(1000)
turtle.penup()
pen3 = Turtle(shape='square')
pen3.color('white')
pen3.speed(100)
pen3.shapesize(1)
pen3.color('white')
pen3.penup()
pen3.sety(-207)
pen3.setx(-200 + ((i) * 150))
turtle.setposition(-200 + (i * 150), -207)
turtle.write(TCars, font=style, align='center')
turtle.penup()
pen3 = Turtle(shape='square')
pen3.color('white')
pen3.speed(100)
pen3.shapesize(1)
pen3.color('white')
pen3.penup()
pen3.sety(-227)
pen3.setx(-200 + ((i) * 150))
turtle.setposition(-200 + (i * 150), -227)
turtle.write(PCars, font=style, align='center')
turtle.penup()
pen3 = Turtle(shape='square')
pen3.color('white')
pen3.speed(100)
pen3.shapesize(1)
pen3.color('white')
pen3.penup()
pen3.sety(-247)
pen3.setx(-200 + ((i) * 150))
turtle.setposition(-200 + (i * 150), -247)
turtle.write(Time, font=style, align='center')
turtle.hideturtle()
def GreenR(Num,TCars,PCars,Time):
i=Num-1
pen1 = Turtle(shape='circle')
pen1.color('white')
pen1.speed(100)
pen1.shapesize(2)
pen1.color('white')
pen1.penup()
pen1.sety(250)
pen1.setx(-200 + (i * 150))
pen2 = Turtle(shape='circle')
pen2.color('white')
pen2.speed(100)
pen2.shapesize(2)
pen2.color('white')
pen2.penup()
pen2.sety(200)
pen2.setx(-200 + (i * 150))
pen3 = Turtle(shape='circle')
pen3.color('white')
pen3.speed(165)
pen3.shapesize(2)
pen3.color('green')
pen3.penup()
pen3.sety(50)
pen3.setx(-200 + (i * 150))
turtle.color('black')
style = ('Courier', 14,)
turtle.speed(1000)
turtle.penup()
pen3 = Turtle(shape='square')
pen3.color('white')
pen3.speed(100)
pen3.shapesize(1)
pen3.color('white')
pen3.penup()
pen3.sety(-207)
pen3.setx(-170 + ((i) * 150))
turtle.setposition(-170 + (i * 150), -207)
turtle.write(TCars, font=style, align='center')
turtle.penup()
pen3 = Turtle(shape='square')
pen3.color('white')
pen3.speed(100)
pen3.shapesize(1)
pen3.color('white')
pen3.penup()
pen3.sety(-227)
pen3.setx(-170 + ((i) * 150))
turtle.setposition(-170 + (i * 150), -227)
turtle.write(PCars, font=style, align='center')
turtle.penup()
pen3 = Turtle(shape='square')
pen3.color('white')
pen3.speed(100)
pen3.shapesize(1)
pen3.color('white')
pen3.penup()
pen3.sety(-247)
pen3.setx(-170 + ((i) * 150))
turtle.setposition(-170 + (i * 150), -247)
turtle.write(Time, font=style, align='center')
turtle.hideturtle()
def RightOff(Num):
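    """Blank the right-turn (bottom) lamp of signal Num."""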
i=Num-1
pen3 = Turtle(shape='circle')
pen3.color('white')
pen3.speed(165)
pen3.shapesize(2)
pen3.color('white')
pen3.penup()
pen3.sety(50)
pen3.setx(-200 + (i * 150))
def Reset():
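    """Return all four signals to yellow."""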
Yellow(1)
Yellow(2)
Yellow(3)
Yellow(4)
'''
screen=Screen()
screen.setup(1000,1000)
Base()
Pole()
Back()
HeadText()
GreenR(1,12,12,123)
RightOff(1)
#Reset()
screen.mainloop()
'''
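# A minimal sketch of one full render, in the spirit of the quoted demo above
# (kept commented so importing this module has no side effects; the car counts
# and the 30-second timing are made-up placeholder values):
#
#     screen = Screen()
#     screen.setup(1000, 1000)
#     Base()
#     Pole()
#     Back()
#     HeadText()
#     GreenL(1, 12, 4, 30)   # side 1: left arrow green; 12 queued, 4 passing, 30 s
#     Red(2)                 # the remaining sides hold on red
#     Red(3)
#     Red(4)
#     screen.mainloop()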
# Source: jgsnippets/strings/__init__.py (repo: jgontrum/snippets, license: MIT)
from jgsnippets.strings.encoding import clean_encoding
from jgsnippets.strings.format import jprint, pprint
f75d8913c3606449fc40b929074611555ae15dcc | 106 | py | Python | section-22-unittesting/02_mock_basic/mock_example.py | mugan86/bootcamp-basic-to-expert-from-scratch | 028aab243386e5a75d84aea319c480ec54913c53 | [
"MIT"
] | 31 | 2022-01-19T18:33:40.000Z | 2022-03-29T16:24:44.000Z | section-22-unittesting/02_mock_basic/mock_example.py | mugan86/bootcamp-basic-to-expert-from-scratch | 028aab243386e5a75d84aea319c480ec54913c53 | [
"MIT"
] | 1 | 2022-02-09T17:47:17.000Z | 2022-02-09T17:47:17.000Z | section-22-unittesting/02_mock_basic/mock_example.py | mugan86/bootcamp-basic-to-expert-from-scratch | 028aab243386e5a75d84aea319c480ec54913c53 | [
"MIT"
] | 4 | 2022-01-20T15:41:09.000Z | 2022-03-29T16:25:08.000Z |
def hello():
return get_greeting()
def get_greeting():
return "Hola Mundo en el curso de Python" | 17.666667 | 45 | 0.688679 | 16 | 106 | 4.4375 | 0.75 | 0.309859 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.216981 | 106 | 6 | 45 | 17.666667 | 0.855422 | 0 | 0 | 0 | 0 | 0 | 0.301887 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | true | 0 | 0 | 0.5 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 1 | 1 | 0 | 0 | 7 |
f762c687420035abcb843e420a0489a6453f0657 | 17,931 | py | Python | tests/analyses/milhdbk217f/models/test_inductor.py | TahaEntezari/ramstk | f82e5b31ef5c4e33cc02252263247b99a9abe129 | [
"BSD-3-Clause"
] | 26 | 2019-05-15T02:03:47.000Z | 2022-02-21T07:28:11.000Z | tests/analyses/milhdbk217f/models/test_inductor.py | TahaEntezari/ramstk | f82e5b31ef5c4e33cc02252263247b99a9abe129 | [
"BSD-3-Clause"
] | 815 | 2019-05-10T12:31:52.000Z | 2022-03-31T12:56:26.000Z | tests/analyses/milhdbk217f/models/test_inductor.py | TahaEntezari/ramstk | f82e5b31ef5c4e33cc02252263247b99a9abe129 | [
"BSD-3-Clause"
] | 9 | 2019-04-20T23:06:29.000Z | 2022-01-24T21:21:04.000Z | # pylint: skip-file
# type: ignore
# -*- coding: utf-8 -*-
#
# tests.analyses.milhdbk217f.models.test_inductor.py is part of The
# RAMSTK Project
#
# All rights reserved.
# Copyright 2007 - 2017 Doyle Rowland doyle.rowland <AT> reliaqual <DOT> com
"""Test class for the inductor module."""
# Third Party Imports
import pytest
# RAMSTK Package Imports
from ramstk.analyses.milhdbk217f import inductor
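# Baseline inductor attributes shared by the part-count tests below.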
ATTRIBUTES = {
"category_id": 5,
"subcategory_id": 1,
"environment_active_id": 3,
"insulation_id": 3,
"family_id": 1,
"construction_id": 1,
"specification_id": 1,
"quality_id": 1,
"page_number": 3,
"area": 12.5,
"weight": 0.612,
"power_operating": 0.875,
"voltage_dc_operating": 3.3,
"current_operating": 0.00108778877888,
"temperature_active": 43.2,
"piE": 5.0,
"lambda_b": 0.0,
}
@pytest.mark.unit
@pytest.mark.calculation
@pytest.mark.parametrize("family_id", [1, 2, 3, 4])
@pytest.mark.parametrize(
"environment_active_id",
[1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14],
)
def test_get_part_count_lambda_b_xfmr(family_id, environment_active_id):
"""get_part_count_lambda_b() should return a float value for the base
hazard rate on success."""
_lambda_b = inductor.get_part_count_lambda_b(
id_keys={
"subcategory_id": 1,
"family_id": family_id,
"environment_active_id": environment_active_id,
}
)
assert isinstance(_lambda_b, float)
assert (
_lambda_b
== {
1: [
0.0035,
0.023,
0.049,
0.019,
0.065,
0.027,
0.037,
0.041,
0.052,
0.11,
0.0018,
0.053,
0.16,
2.3,
],
2: [
0.0071,
0.046,
0.097,
0.038,
0.13,
0.055,
0.073,
0.081,
0.10,
0.22,
0.035,
0.11,
0.31,
4.7,
],
3: [
0.023,
0.16,
0.35,
0.13,
0.45,
0.21,
0.27,
0.35,
0.45,
0.82,
0.011,
0.37,
1.2,
16.0,
],
4: [
0.028,
0.18,
0.39,
0.15,
0.52,
0.22,
0.29,
0.33,
0.42,
0.88,
0.015,
0.42,
1.2,
19.0,
],
}[family_id][environment_active_id - 1]
)
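# A quick interactive check of the same lookup outside pytest (a sketch; the
# id_keys values are arbitrary examples drawn from the parametrization above):
#
#     from ramstk.analyses.milhdbk217f import inductor
#
#     inductor.get_part_count_lambda_b(
#         id_keys={"subcategory_id": 1, "family_id": 1, "environment_active_id": 3}
#     )   # -> 0.049, per the MIL-HDBK-217F part-count table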
@pytest.mark.unit
@pytest.mark.calculation
@pytest.mark.parametrize("family_id", [1, 2])
@pytest.mark.parametrize(
"environment_active_id",
[1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14],
)
def test_get_part_count_lambda_b_inductor(
family_id,
environment_active_id,
):
"""get_part_count_lambda_b() should return a float value for the base
hazard rate on success."""
_lambda_b = inductor.get_part_count_lambda_b(
id_keys={
"subcategory_id": 2,
"family_id": family_id,
"environment_active_id": environment_active_id,
}
)
assert isinstance(_lambda_b, float)
assert (
_lambda_b
== {
1: [
0.0017,
0.0073,
0.023,
0.0091,
0.031,
0.011,
0.015,
0.016,
0.022,
0.052,
0.00083,
0.25,
0.073,
1.1,
],
2: [
0.0033,
0.015,
0.046,
0.018,
0.061,
0.022,
0.03,
0.033,
0.044,
0.10,
0.0017,
0.05,
0.15,
2.2,
],
}[family_id][environment_active_id - 1]
)
@pytest.mark.unit
@pytest.mark.calculation
def test_get_part_count_lambda_b_no_subcategory():
"""get_part_count_lambda_b() should raise a KeyError when passed an unknown
subcategory ID."""
with pytest.raises(KeyError):
_lambda_b = inductor.get_part_count_lambda_b(
id_keys={
"subcategory_id": 20,
"family_id": 1,
"environment_active_id": 3,
}
)
@pytest.mark.unit
@pytest.mark.calculation
def test_get_part_count_lambda_b_no_family():
"""get_part_count_lambda_b() should raise a KeyError when passed an unknown
family ID."""
with pytest.raises(KeyError):
_lambda_b = inductor.get_part_count_lambda_b(
id_keys={
"subcategory_id": 2,
"family_id": 12,
"environment_active_id": 3,
}
)
@pytest.mark.unit
@pytest.mark.calculation
def test_get_part_count_lambda_b_no_environment():
"""get_part_count_lambda_b() should raise an IndexError when passed an
unknown active environment ID."""
with pytest.raises(IndexError):
_lambda_b = inductor.get_part_count_lambda_b(
id_keys={
"subcategory_id": 2,
"family_id": 1,
"environment_active_id": 31,
}
)
@pytest.mark.unit
@pytest.mark.calculation
@pytest.mark.parametrize("family_id", [1, 2])
@pytest.mark.parametrize(
"environment_active_id", [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14]
)
def test_calculate_part_count_inductor(
family_id,
environment_active_id,
):
"""calculate_part_count() should return a float value for the base hazard
rate on success."""
ATTRIBUTES["subcategory_id"] = 2
ATTRIBUTES["family_id"] = family_id
ATTRIBUTES["environment_active_id"] = environment_active_id
_lambda_b = inductor.calculate_part_count(**ATTRIBUTES)
assert isinstance(_lambda_b, float)
assert (
_lambda_b
== {
1: [
0.0017,
0.0073,
0.023,
0.0091,
0.031,
0.011,
0.015,
0.016,
0.022,
0.052,
0.00083,
0.25,
0.073,
1.1,
],
2: [
0.0033,
0.015,
0.046,
0.018,
0.061,
0.022,
0.03,
0.033,
0.044,
0.10,
0.0017,
0.05,
0.15,
2.2,
],
}[family_id][environment_active_id - 1]
)
@pytest.mark.unit
@pytest.mark.calculation
@pytest.mark.parametrize("family_id", [1, 2, 3, 4])
@pytest.mark.parametrize(
"environment_active_id",
[1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14],
)
def test_calculate_part_count_xfmr(
family_id,
environment_active_id,
):
"""calculate_part_count() should return a float value for the base hazard
rate on success."""
ATTRIBUTES["subcategory_id"] = 1
ATTRIBUTES["family_id"] = family_id
ATTRIBUTES["environment_active_id"] = environment_active_id
_lambda_b = inductor.calculate_part_count(**ATTRIBUTES)
assert isinstance(_lambda_b, float)
assert (
_lambda_b
== {
1: [
0.0035,
0.023,
0.049,
0.019,
0.065,
0.027,
0.037,
0.041,
0.052,
0.11,
0.0018,
0.053,
0.16,
2.3,
],
2: [
0.0071,
0.046,
0.097,
0.038,
0.13,
0.055,
0.073,
0.081,
0.10,
0.22,
0.035,
0.11,
0.31,
4.7,
],
3: [
0.023,
0.16,
0.35,
0.13,
0.45,
0.21,
0.27,
0.35,
0.45,
0.82,
0.011,
0.37,
1.2,
16.0,
],
4: [
0.028,
0.18,
0.39,
0.15,
0.52,
0.22,
0.29,
0.33,
0.42,
0.88,
0.015,
0.42,
1.2,
19.0,
],
}[family_id][environment_active_id - 1]
)
@pytest.mark.unit
@pytest.mark.calculation
@pytest.mark.parametrize(
"page_number",
[1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14],
)
def test_get_temperature_rise_spec_sheet(page_number):
"""get_temperature_rise_spec_sheet() should return a float value for the
temperature_rise on success."""
_temperature_rise = inductor.get_temperature_rise_spec_sheet(page_number)
assert isinstance(_temperature_rise, float)
assert _temperature_rise == {
1: 15.0,
2: 15.0,
3: 15.0,
4: 35.0,
5: 15.0,
6: 35.0,
7: 15.0,
8: 35.0,
9: 15.0,
10: 15.0,
11: 35.0,
12: 35.0,
13: 15.0,
14: 15.0,
}[page_number]
@pytest.mark.unit
@pytest.mark.calculation
def test_get_temperature_rise_no_spec_sheet():
"""get_temperature_rise_spec_sheet() should raise a KeyError when passed an
    unknown page number."""
with pytest.raises(KeyError):
_temperature_rise = inductor.get_temperature_rise_spec_sheet(22)
@pytest.mark.unit
@pytest.mark.calculation
def test_calculate_temperature_rise_input_power_weight():
"""calculate_temperature_rise_input_power_weight() should return a float
value on success."""
_temperature_rise = inductor.calculate_temperature_rise_input_power_weight(
0.387, 0.015
)
assert isinstance(_temperature_rise, float)
assert _temperature_rise == pytest.approx(13.93114825)
@pytest.mark.unit
@pytest.mark.calculation
def test_calculate_temperature_rise_input_power_weight_zero_weight():
"""calculate_temperature_rise_input_power_weight() should raise a
ZeroDivisionError when passed a weight=0.0."""
with pytest.raises(ZeroDivisionError):
_temperature_rise = inductor.calculate_temperature_rise_input_power_weight(
0.387, 0.0
)
@pytest.mark.unit
@pytest.mark.calculation
def test_calculate_temperature_rise_power_loss_surface():
"""calculate_temperature_rise_power_loss_surface() should return a float
value on success."""
_temperature_rise = inductor.calculate_temperature_rise_power_loss_surface(
0.387, 12.5
)
assert isinstance(_temperature_rise, float)
assert _temperature_rise == 3.87
@pytest.mark.unit
@pytest.mark.calculation
def test_calculate_temperature_rise_power_loss_surface_zero_area():
"""calculate_temperature_rise_power_loss_surface() should raise a
ZeroDivisionError when passed an area=0.0."""
with pytest.raises(ZeroDivisionError):
_temperature_rise = inductor.calculate_temperature_rise_power_loss_surface(
0.387, 0.0
)
@pytest.mark.unit
@pytest.mark.calculation
def test_calculate_temperature_rise_power_loss_weight():
"""calculate_temperature_rise_power_loss_radiating_surface() should return
a float value on success."""
_temperature_rise = inductor.calculate_temperature_rise_power_loss_weight(
0.387, 2.5
)
assert isinstance(_temperature_rise, float)
assert _temperature_rise == pytest.approx(2.394211958)
@pytest.mark.unit
@pytest.mark.calculation
def test_calculate_temperature_rise_power_loss_weight_zero_weight():
"""calculate_temperature_rise_power_loss_weight() should raise a
ZeroDivisionError when passed a weight=0.0."""
with pytest.raises(ZeroDivisionError):
_temperature_rise = inductor.calculate_temperature_rise_power_loss_weight(
0.387, 0.0
)
@pytest.mark.unit
@pytest.mark.calculation
def test_calculate_hot_spot_temperature():
"""calculate_hot_spot_temperature() should return a float value on
success."""
_temperature_hot_spot = inductor.calculate_hot_spot_temperature(43.2, 38.7)
assert isinstance(_temperature_hot_spot, float)
assert _temperature_hot_spot == pytest.approx(85.77)
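# The asserted values in the temperature-rise tests above are consistent
# with the MIL-HDBK-217F relationships below (a back-of-envelope check,
# not taken from the RAMSTK source):
#   input power / weight:  delta_t = 2.1 * power_input / weight**0.6766
#                          2.1 * 0.387 / 0.015**0.6766 ~= 13.9311
#   power loss / surface:  delta_t = 125.0 * power_loss / area
#                          125.0 * 0.387 / 12.5 == 3.87
#   power loss / weight:   delta_t = 11.5 * power_loss / weight**0.6766
#                          11.5 * 0.387 / 2.5**0.6766 ~= 2.3942
#   hot spot:              t_hot_spot = t_ambient + 1.1 * delta_t
#                          43.2 + 1.1 * 38.7 == 85.77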
@pytest.mark.unit
@pytest.mark.calculation
def test_calculate_part_stress_lambda_b():
"""calculate_part_stress_lambda_b() should return a float value on
success."""
_lambda_b = inductor.calculate_part_stress_lambda_b(1, 4, 85.77)
assert isinstance(_lambda_b, float)
assert _lambda_b == pytest.approx(0.00280133)
@pytest.mark.unit
@pytest.mark.calculation
def test_calculate_part_stress_lambda_b_no_subcategory():
"""calculate_part_stress_lambda_b() should raise an KeyError when passed an
unknown subcategory ID."""
with pytest.raises(KeyError):
_lambda_b = inductor.calculate_part_stress_lambda_b(101, 4, 85.77)
@pytest.mark.unit
@pytest.mark.calculation
def test_calculate_part_stress_lambda_b_no_insulation():
"""calculate_part_stress_lambda_b() should raise an KeyError when passed an
unknown insulation ID."""
with pytest.raises(KeyError):
_lambda_b = inductor.calculate_part_stress_lambda_b(1, 41, 85.77)
@pytest.mark.unit
@pytest.mark.calculation
@pytest.mark.parametrize("subcategory_id", [1, 2])
def test_get_part_stress_quality_factor(subcategory_id):
"""get_part_stress_quality_factor() should return a float value for piQ on
success."""
_pi_q = inductor.get_part_stress_quality_factor(subcategory_id, 1, 1)
assert isinstance(_pi_q, float)
assert _pi_q == {1: 1.5, 2: 0.03}[subcategory_id]
@pytest.mark.unit
@pytest.mark.calculation
def test_calculate_part_stress_inductor():
"""calculate_part_stress() should return a dictionary of updated values on
success."""
ATTRIBUTES["subcategory_id"] = 2
ATTRIBUTES["construction_id"] = 2
_attributes = inductor.calculate_part_stress(**ATTRIBUTES)
assert isinstance(_attributes, dict)
assert _attributes["lambda_b"] == pytest.approx(0.00046712295)
assert _attributes["piC"] == 2.0
assert _attributes["hazard_rate_active"] == pytest.approx(0.00014013688)
@pytest.mark.unit
@pytest.mark.calculation
def test_calculate_part_stress_xfmr_with_surface_area():
"""calculate_part_stress() should return a dictionary of updated values on
success."""
ATTRIBUTES["subcategory_id"] = 1
ATTRIBUTES["construction_id"] = 1
_attributes = inductor.calculate_part_stress(**ATTRIBUTES)
assert isinstance(_attributes, dict)
assert _attributes["lambda_b"] == pytest.approx(0.0026358035)
assert _attributes["piC"] == 1.0
assert _attributes["hazard_rate_active"] == pytest.approx(0.15814821)
@pytest.mark.unit
@pytest.mark.calculation
def test_calculate_part_stress_xfmr_with_weight():
"""calculate_part_stress() should return a dictionary of updated values on
success."""
ATTRIBUTES["subcategory_id"] = 1
ATTRIBUTES["construction_id"] = 1
ATTRIBUTES["power_operating"] = 0.387
ATTRIBUTES["voltage_dc_operating"] = 0.0
ATTRIBUTES["area"] = 0.0
ATTRIBUTES["weight"] = 2.5
_attributes = inductor.calculate_part_stress(**ATTRIBUTES)
assert isinstance(_attributes, dict)
assert _attributes["temperature_rise"] == pytest.approx(2.39421196)
assert _attributes["lambda_b"] == pytest.approx(0.0024684654)
assert _attributes["piC"] == 1.0
assert _attributes["hazard_rate_active"] == pytest.approx(0.14810792)
@pytest.mark.unit
@pytest.mark.calculation
def test_calculate_part_stress_xfmr_with_input_power():
"""calculate_part_stress() should return a dictionary of updated values on
success."""
ATTRIBUTES["subcategory_id"] = 1
ATTRIBUTES["construction_id"] = 1
ATTRIBUTES["power_operating"] = 0.0
ATTRIBUTES["voltage_dc_operating"] = 3.3
ATTRIBUTES["area"] = 0.0
ATTRIBUTES["weight"] = 2.5
_attributes = inductor.calculate_part_stress(**ATTRIBUTES)
assert isinstance(_attributes, dict)
assert _attributes["temperature_rise"] == pytest.approx(0.0040553804)
assert _attributes["lambda_b"] == pytest.approx(0.0024148713)
assert _attributes["piC"] == 1.0
assert _attributes["hazard_rate_active"] == pytest.approx(0.14489228)
@pytest.mark.unit
@pytest.mark.calculation
def test_calculate_part_stress_xfmr_no_temperature_rise():
"""calculate_part_stress() should return a dictionary of updated values on
success."""
ATTRIBUTES["subcategory_id"] = 1
ATTRIBUTES["construction_id"] = 1
ATTRIBUTES["power_operating"] = 0.0
ATTRIBUTES["voltage_dc_operating"] = 0.0
ATTRIBUTES["area"] = 0.0
ATTRIBUTES["weight"] = 0.0
_attributes = inductor.calculate_part_stress(**ATTRIBUTES)
assert isinstance(_attributes, dict)
assert _attributes["temperature_rise"] == 0.0
assert _attributes["lambda_b"] == pytest.approx(0.0024147842)
assert _attributes["piC"] == 1.0
assert _attributes["hazard_rate_active"] == pytest.approx(0.14488705)
| 28.327014 | 83 | 0.572974 | 2,141 | 17,931 | 4.520785 | 0.101822 | 0.06199 | 0.036161 | 0.051658 | 0.861349 | 0.852154 | 0.822399 | 0.7757 | 0.741915 | 0.71619 | 0 | 0.094659 | 0.321287 | 17,931 | 632 | 84 | 28.371835 | 0.700657 | 0.142825 | 0 | 0.746507 | 0 | 0 | 0.075154 | 0.016657 | 0 | 0 | 0 | 0 | 0.08982 | 1 | 0.0499 | false | 0 | 0.003992 | 0 | 0.053892 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
f782a52c23a92c31bf09956dc973ae15977eb22d | 5,681 | py | Python | mayan/apps/file_caching/tests/test_views.py | CMU-313/fall-2021-hw2-451-unavailable-for-legal-reasons | 0e4e919fd2e1ded6711354a0330135283e87f8c7 | [
"Apache-2.0"
] | 2 | 2021-09-12T19:41:19.000Z | 2021-09-12T19:41:20.000Z | mayan/apps/file_caching/tests/test_views.py | CMU-313/fall-2021-hw2-451-unavailable-for-legal-reasons | 0e4e919fd2e1ded6711354a0330135283e87f8c7 | [
"Apache-2.0"
] | 37 | 2021-09-13T01:00:12.000Z | 2021-10-02T03:54:30.000Z | mayan/apps/file_caching/tests/test_views.py | CMU-313/fall-2021-hw2-451-unavailable-for-legal-reasons | 0e4e919fd2e1ded6711354a0330135283e87f8c7 | [
"Apache-2.0"
] | 1 | 2021-09-22T13:17:30.000Z | 2021-09-22T13:17:30.000Z | from mayan.apps.testing.tests.base import GenericViewTestCase
from ..events import event_cache_partition_purged, event_cache_purged
from ..permissions import (
permission_cache_purge, permission_cache_view
)
from .mixins import CacheTestMixin, CacheViewTestMixin
class CacheViewTestCase(
CacheTestMixin, CacheViewTestMixin, GenericViewTestCase
):
def test_cache_detail_view_no_permission(self):
self._create_test_cache()
self._clear_events()
response = self._request_test_cache_detail_view()
self.assertNotContains(
response=response, text=self.test_cache.label, status_code=404
)
events = self._get_test_events()
self.assertEqual(events.count(), 0)
def test_cache_detail_view_with_access(self):
self._create_test_cache()
self.grant_access(
obj=self.test_cache, permission=permission_cache_view
)
self._clear_events()
response = self._request_test_cache_detail_view()
self.assertContains(
response=response, text=self.test_cache.label, status_code=200
)
events = self._get_test_events()
self.assertEqual(events.count(), 0)
def test_cache_list_view_with_no_permission(self):
self._create_test_cache()
self._clear_events()
response = self._request_test_cache_list_view()
self.assertNotContains(
response=response, text=self.test_cache.label, status_code=200
)
events = self._get_test_events()
self.assertEqual(events.count(), 0)
def test_cache_list_view_with_access(self):
self._create_test_cache()
self.grant_access(
obj=self.test_cache, permission=permission_cache_view
)
self._clear_events()
response = self._request_test_cache_list_view()
self.assertContains(
response=response, text=self.test_cache.label, status_code=200
)
events = self._get_test_events()
self.assertEqual(events.count(), 0)
def test_cache_purge_view_no_permission(self):
self._create_test_cache()
self._create_test_cache_partition()
self._create_test_cache_partition_file()
cache_total_size = self.test_cache.get_total_size()
self._clear_events()
response = self._request_test_cache_purge_view()
self.assertEqual(response.status_code, 404)
self.assertEqual(cache_total_size, self.test_cache.get_total_size())
events = self._get_test_events()
self.assertEqual(events.count(), 0)
def test_cache_purge_view_with_access(self):
self._create_test_cache()
self._create_test_cache_partition()
self._create_test_cache_partition_file()
self.grant_access(
obj=self.test_cache, permission=permission_cache_purge
)
cache_total_size = self.test_cache.get_total_size()
self._clear_events()
response = self._request_test_cache_purge_view()
self.assertEqual(response.status_code, 302)
self.assertNotEqual(cache_total_size, self.test_cache.get_total_size())
events = self._get_test_events()
self.assertEqual(events.count(), 2)
self.assertEqual(events[0].action_object, None)
self.assertEqual(events[0].actor, self._test_case_user)
self.assertEqual(events[0].target, self.test_cache_partition)
self.assertEqual(events[0].verb, event_cache_partition_purged.id)
self.assertEqual(events[1].action_object, None)
self.assertEqual(events[1].actor, self._test_case_user)
self.assertEqual(events[1].target, self.test_cache)
self.assertEqual(events[1].verb, event_cache_purged.id)
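    # The eight event assertions above reappear verbatim in the
    # multiple-purge test below. A hedged sketch of a helper method that
    # could fold them into one call (hand-written; not part of Mayan's
    # test mixins):
    #
    #     def _assert_purge_events(self, events):
    #         self.assertEqual(events.count(), 2)
    #         self.assertEqual(events[0].actor, self._test_case_user)
    #         self.assertEqual(events[0].target, self.test_cache_partition)
    #         self.assertEqual(events[0].verb, event_cache_partition_purged.id)
    #         self.assertEqual(events[1].actor, self._test_case_user)
    #         self.assertEqual(events[1].target, self.test_cache)
    #         self.assertEqual(events[1].verb, event_cache_purged.id)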
def test_cache_multiple_purge_view_no_permission(self):
self._create_test_cache()
self._create_test_cache_partition()
self._create_test_cache_partition_file()
cache_total_size = self.test_cache.get_total_size()
self._clear_events()
response = self._request_test_cache_multiple_purge_view()
self.assertEqual(response.status_code, 404)
self.assertEqual(cache_total_size, self.test_cache.get_total_size())
events = self._get_test_events()
self.assertEqual(events.count(), 0)
def test_cache_multiple_purge_view_with_access(self):
self._create_test_cache()
self._create_test_cache_partition()
self._create_test_cache_partition_file()
self.grant_access(
obj=self.test_cache, permission=permission_cache_purge
)
cache_total_size = self.test_cache.get_total_size()
self._clear_events()
response = self._request_test_cache_multiple_purge_view()
self.assertEqual(response.status_code, 302)
self.assertNotEqual(cache_total_size, self.test_cache.get_total_size())
events = self._get_test_events()
self.assertEqual(events.count(), 2)
self.assertEqual(events[0].action_object, None)
self.assertEqual(events[0].actor, self._test_case_user)
self.assertEqual(events[0].target, self.test_cache_partition)
self.assertEqual(events[0].verb, event_cache_partition_purged.id)
self.assertEqual(events[1].action_object, None)
self.assertEqual(events[1].actor, self._test_case_user)
self.assertEqual(events[1].target, self.test_cache)
self.assertEqual(events[1].verb, event_cache_purged.id)
| 33.615385 | 80 | 0.680162 | 674 | 5,681 | 5.296736 | 0.096439 | 0.131092 | 0.141176 | 0.085154 | 0.917367 | 0.910644 | 0.902801 | 0.902801 | 0.902801 | 0.901681 | 0 | 0.011002 | 0.232001 | 5,681 | 168 | 81 | 33.815476 | 0.807243 | 0 | 0 | 0.777778 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.307692 | 1 | 0.068376 | false | 0 | 0.034188 | 0 | 0.111111 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
e38ed99341086facd0245d5014161ff530abb33c | 24,331 | py | Python | eeauditor/auditors/aws/Amazon_Redshift_Auditor.py | kbhagi/ElectricEye | 31960e1e1cfb75c5d354844ea9e07d5295442823 | [
"Apache-2.0"
] | 442 | 2020-03-15T20:56:36.000Z | 2022-03-31T22:13:07.000Z | eeauditor/auditors/aws/Amazon_Redshift_Auditor.py | kbhagi/ElectricEye | 31960e1e1cfb75c5d354844ea9e07d5295442823 | [
"Apache-2.0"
] | 57 | 2020-03-15T22:09:56.000Z | 2022-03-31T13:17:06.000Z | eeauditor/auditors/aws/Amazon_Redshift_Auditor.py | kbhagi/ElectricEye | 31960e1e1cfb75c5d354844ea9e07d5295442823 | [
"Apache-2.0"
] | 59 | 2020-03-15T21:19:10.000Z | 2022-03-31T15:01:31.000Z | # This file is part of ElectricEye.
# SPDX-License-Identifier: Apache-2.0
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
# http://www.apache.org/licenses/LICENSE-2.0
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
import boto3
import datetime
from check_register import CheckRegister
registry = CheckRegister()
# import boto3 clients
redshift = boto3.client("redshift")
# loop through redshift clusters
def describe_clusters(cache):
response = cache.get("describe_clusters")
if response:
return response
cache["describe_clusters"] = redshift.describe_clusters()
return cache["describe_clusters"]
@registry.register_check("redshift")
def cluster_public_access_check(cache: dict, awsAccountId: str, awsRegion: str, awsPartition: str) -> dict:
"""[Redshift.1] Redshift clusters should not be publicly accessible"""
clusters = describe_clusters(cache=cache)
myRedshiftClusters = clusters["Clusters"]
for cluster in myRedshiftClusters:
clusterId = str(cluster["ClusterIdentifier"])
clusterArn = f"arn:{awsPartition}:redshift:{awsRegion}:{awsAccountId}:cluster:{clusterId}"
iso8601Time = datetime.datetime.utcnow().replace(tzinfo=datetime.timezone.utc).isoformat()
if str(cluster["PubliclyAccessible"]) == "True":
finding = {
"SchemaVersion": "2018-10-08",
"Id": clusterArn + "/redshift-public-access-check",
"ProductArn": f"arn:{awsPartition}:securityhub:{awsRegion}:{awsAccountId}:product/{awsAccountId}/default",
"GeneratorId": clusterArn,
"AwsAccountId": awsAccountId,
"Types": [
"Software and Configuration Checks/AWS Security Best Practices",
"Effects/Data Exposure",
],
"FirstObservedAt": iso8601Time,
"CreatedAt": iso8601Time,
"UpdatedAt": iso8601Time,
"Severity": {"Label": "CRITICAL"},
"Confidence": 99,
"Title": "[Redshift.1] Redshift clusters should not be publicly accessible",
"Description": "Redshift cluster "
+ clusterId
+ " is publicly accessible. Refer to the remediation instructions to remediate this behavior",
"Remediation": {
"Recommendation": {
"Text": "For more information on modifying Redshift public access refer to the Modifying a Cluster section of the Amazon Redshift Cluster Management Guide",
"Url": "https://docs.aws.amazon.com/redshift/latest/mgmt/managing-clusters-console.html#modify-cluster",
}
},
"ProductFields": {"Product Name": "ElectricEye"},
"Resources": [
{
"Type": "AwsRedshiftCluster",
"Id": clusterArn,
"Partition": awsPartition,
"Region": awsRegion,
"Details": {"Other": {"ClusterId": clusterId}},
}
],
"Compliance": {
"Status": "FAILED",
"RelatedRequirements": [
"NIST CSF PR.AC-3",
"NIST SP 800-53 AC-1",
"NIST SP 800-53 AC-17",
"NIST SP 800-53 AC-19",
"NIST SP 800-53 AC-20",
"NIST SP 800-53 SC-15",
"AICPA TSC CC6.6",
"ISO 27001:2013 A.6.2.1",
"ISO 27001:2013 A.6.2.2",
"ISO 27001:2013 A.11.2.6",
"ISO 27001:2013 A.13.1.1",
"ISO 27001:2013 A.13.2.1",
],
},
"Workflow": {"Status": "NEW"},
"RecordState": "ACTIVE",
}
yield finding
else:
finding = {
"SchemaVersion": "2018-10-08",
"Id": clusterArn + "/redshift-public-access-check",
"ProductArn": f"arn:{awsPartition}:securityhub:{awsRegion}:{awsAccountId}:product/{awsAccountId}/default",
"GeneratorId": clusterArn,
"AwsAccountId": awsAccountId,
"Types": [
"Software and Configuration Checks/AWS Security Best Practices",
"Effects/Data Exposure",
],
"FirstObservedAt": iso8601Time,
"CreatedAt": iso8601Time,
"UpdatedAt": iso8601Time,
"Severity": {"Label": "INFORMATIONAL"},
"Confidence": 99,
"Title": "[Redshift.1] Redshift clusters should not be publicly accessible",
"Description": "Redshift cluster " + clusterId + " is not publicly accessible.",
"Remediation": {
"Recommendation": {
"Text": "For more information on modifying Redshift public access refer to the Modifying a Cluster section of the Amazon Redshift Cluster Management Guide",
"Url": "https://docs.aws.amazon.com/redshift/latest/mgmt/managing-clusters-console.html#modify-cluster",
}
},
"ProductFields": {"Product Name": "ElectricEye"},
"Resources": [
{
"Type": "AwsRedshiftCluster",
"Id": clusterArn,
"Partition": awsPartition,
"Region": awsRegion,
"Details": {"Other": {"ClusterId": clusterId}},
}
],
"Compliance": {
"Status": "PASSED",
"RelatedRequirements": [
"NIST CSF PR.AC-3",
"NIST SP 800-53 AC-1",
"NIST SP 800-53 AC-17",
"NIST SP 800-53 AC-19",
"NIST SP 800-53 AC-20",
"NIST SP 800-53 SC-15",
"AICPA TSC CC6.6",
"ISO 27001:2013 A.6.2.1",
"ISO 27001:2013 A.6.2.2",
"ISO 27001:2013 A.11.2.6",
"ISO 27001:2013 A.13.1.1",
"ISO 27001:2013 A.13.2.1",
],
},
"Workflow": {"Status": "RESOLVED"},
"RecordState": "ARCHIVED",
}
yield finding
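# Every check in this auditor follows the same shape: test one cluster
# attribute, then yield a fully populated ASFF finding for either outcome.
# A self-contained distillation of that pass/fail core (illustrative only;
# the names below are not ElectricEye APIs):
def _compliance_status(cluster, attribute, failing_value):
    """Return the ASFF Compliance.Status for a single cluster attribute."""
    return "FAILED" if str(cluster.get(attribute)) == failing_value else "PASSED"
# e.g. _compliance_status({"PubliclyAccessible": True}, "PubliclyAccessible", "True") -> "FAILED"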
@registry.register_check("redshift")
def cluster_encryption_check(cache: dict, awsAccountId: str, awsRegion: str, awsPartition: str) -> dict:
"""[Redshift.2] Redshift clusters should be encrypted"""
clusters = describe_clusters(cache=cache)
myRedshiftClusters = clusters["Clusters"]
for cluster in myRedshiftClusters:
clusterId = str(cluster["ClusterIdentifier"])
clusterArn = f"arn:{awsPartition}:redshift:{awsRegion}:{awsAccountId}:cluster:{clusterId}"
iso8601Time = datetime.datetime.utcnow().replace(tzinfo=datetime.timezone.utc).isoformat()
if str(cluster["Encrypted"]) == "False":
finding = {
"SchemaVersion": "2018-10-08",
"Id": clusterArn + "/redshift-cluster-encryption-check",
"ProductArn": f"arn:{awsPartition}:securityhub:{awsRegion}:{awsAccountId}:product/{awsAccountId}/default",
"GeneratorId": clusterArn,
"AwsAccountId": awsAccountId,
"Types": [
"Software and Configuration Checks/AWS Security Best Practices",
"Effects/Data Exposure",
],
"FirstObservedAt": iso8601Time,
"CreatedAt": iso8601Time,
"UpdatedAt": iso8601Time,
"Severity": {"Label": "HIGH"},
"Confidence": 99,
"Title": "[Redshift.2] Redshift clusters should be encrypted",
"Description": "Redshift cluster "
+ clusterId
+ " is not encrypted. Refer to the remediation instructions to remediate this behavior",
"Remediation": {
"Recommendation": {
"Text": "For more information on Redshift cluster encryption and how to configure it refer to the Amazon Redshift Database Encryption section of the Amazon Redshift Cluster Management Guide",
"Url": "https://docs.aws.amazon.com/redshift/latest/mgmt/working-with-db-encryption.html",
}
},
"ProductFields": {"Product Name": "ElectricEye"},
"Resources": [
{
"Type": "AwsRedshiftCluster",
"Id": clusterArn,
"Partition": awsPartition,
"Region": awsRegion,
"Details": {"Other": {"ClusterId": clusterId}},
}
],
"Compliance": {
"Status": "FAILED",
"RelatedRequirements": [
"NIST CSF PR.DS-1",
"NIST SP 800-53 MP-8",
"NIST SP 800-53 SC-12",
"NIST SP 800-53 SC-28",
"AICPA TSC CC6.1",
"ISO 27001:2013 A.8.2.3",
],
},
"Workflow": {"Status": "NEW"},
"RecordState": "ACTIVE",
}
yield finding
else:
finding = {
"SchemaVersion": "2018-10-08",
"Id": clusterArn + "/redshift-cluster-encryption-check",
"ProductArn": f"arn:{awsPartition}:securityhub:{awsRegion}:{awsAccountId}:product/{awsAccountId}/default",
"GeneratorId": clusterArn,
"AwsAccountId": awsAccountId,
"Types": [
"Software and Configuration Checks/AWS Security Best Practices",
"Effects/Data Exposure",
],
"FirstObservedAt": iso8601Time,
"CreatedAt": iso8601Time,
"UpdatedAt": iso8601Time,
"Severity": {"Label": "INFORMATIONAL"},
"Confidence": 99,
"Title": "[Redshift.2] Redshift clusters should be encrypted",
"Description": "Redshift cluster " + clusterId + " is encrypted.",
"Remediation": {
"Recommendation": {
"Text": "For more information on Redshift cluster encryption and how to configure it refer to the Amazon Redshift Database Encryption section of the Amazon Redshift Cluster Management Guide",
"Url": "https://docs.aws.amazon.com/redshift/latest/mgmt/working-with-db-encryption.html",
}
},
"ProductFields": {"Product Name": "ElectricEye"},
"Resources": [
{
"Type": "AwsRedshiftCluster",
"Id": clusterArn,
"Partition": awsPartition,
"Region": awsRegion,
"Details": {"Other": {"ClusterId": clusterId}},
}
],
"Compliance": {
"Status": "PASSED",
"RelatedRequirements": [
"NIST CSF PR.DS-1",
"NIST SP 800-53 MP-8",
"NIST SP 800-53 SC-12",
"NIST SP 800-53 SC-28",
"AICPA TSC CC6.1",
"ISO 27001:2013 A.8.2.3",
],
},
"Workflow": {"Status": "RESOLVED"},
"RecordState": "ARCHIVED",
}
yield finding
@registry.register_check("redshift")
def cluster_enhanced_vpc_routing_check(cache: dict, awsAccountId: str, awsRegion: str, awsPartition: str) -> dict:
"""[Redshift.3] Redshift clusters should utilize enhanced VPC routing"""
clusters = describe_clusters(cache=cache)
myRedshiftClusters = clusters["Clusters"]
for cluster in myRedshiftClusters:
clusterId = str(cluster["ClusterIdentifier"])
clusterArn = f"arn:{awsPartition}:redshift:{awsRegion}:{awsAccountId}:cluster:{clusterId}"
iso8601Time = datetime.datetime.utcnow().replace(tzinfo=datetime.timezone.utc).isoformat()
if str(cluster["EnhancedVpcRouting"]) == "False":
finding = {
"SchemaVersion": "2018-10-08",
"Id": clusterArn + "/redshift-cluster-enhanced-vpc-routing-check",
"ProductArn": f"arn:{awsPartition}:securityhub:{awsRegion}:{awsAccountId}:product/{awsAccountId}/default",
"GeneratorId": clusterArn,
"AwsAccountId": awsAccountId,
"Types": ["Software and Configuration Checks/AWS Security Best Practices"],
"FirstObservedAt": iso8601Time,
"CreatedAt": iso8601Time,
"UpdatedAt": iso8601Time,
"Severity": {"Label": "MEDIUM"},
"Confidence": 99,
"Title": "[Redshift.3] Redshift clusters should utilize enhanced VPC routing",
"Description": "Redshift cluster "
+ clusterId
+ " is not utilizing enhanced VPC routing. Refer to the remediation instructions to remediate this behavior",
"Remediation": {
"Recommendation": {
"Text": "For more information on Redshift Enhanced VPC routing and how to configure it refer to the Amazon Redshift Enhanced VPC Routing section of the Amazon Redshift Cluster Management Guide",
"Url": "https://docs.aws.amazon.com/redshift/latest/mgmt/enhanced-vpc-routing.html",
}
},
"ProductFields": {"Product Name": "ElectricEye"},
"Resources": [
{
"Type": "AwsRedshiftCluster",
"Id": clusterArn,
"Partition": awsPartition,
"Region": awsRegion,
"Details": {"Other": {"ClusterId": clusterId}},
}
],
"Compliance": {
"Status": "FAILED",
"RelatedRequirements": [
"NIST CSF PR.AC-5",
"NIST SP 800-53 AC-4",
"NIST SP 800-53 AC-10",
"NIST SP 800-53 SC-7",
"AICPA TSC CC6.1",
"ISO 27001:2013 A.13.1.1",
"ISO 27001:2013 A.13.1.3",
"ISO 27001:2013 A.13.2.1",
"ISO 27001:2013 A.14.1.2",
"ISO 27001:2013 A.14.1.3",
],
},
"Workflow": {"Status": "NEW"},
"RecordState": "ACTIVE",
}
yield finding
else:
finding = {
"SchemaVersion": "2018-10-08",
"Id": clusterArn + "/redshift-enhanced-vpc-routing-check",
"ProductArn": f"arn:{awsPartition}:securityhub:{awsRegion}:{awsAccountId}:product/{awsAccountId}/default",
"GeneratorId": clusterArn,
"AwsAccountId": awsAccountId,
"Types": ["Software and Configuration Checks/AWS Security Best Practices"],
"FirstObservedAt": iso8601Time,
"CreatedAt": iso8601Time,
"UpdatedAt": iso8601Time,
"Severity": {"Label": "INFORMATIONAL"},
"Confidence": 99,
"Title": "[Redshift.3] Redshift clusters should utilize enhanced VPC routing",
"Description": "Redshift cluster "
+ clusterId
+ " is utilizing enhanced VPC routing.",
"Remediation": {
"Recommendation": {
"Text": "For more information on Redshift Enhanced VPC routing and how to configure it refer to the Amazon Redshift Enhanced VPC Routing section of the Amazon Redshift Cluster Management Guide",
"Url": "https://docs.aws.amazon.com/redshift/latest/mgmt/enhanced-vpc-routing.html",
}
},
"ProductFields": {"Product Name": "ElectricEye"},
"Resources": [
{
"Type": "AwsRedshiftCluster",
"Id": clusterArn,
"Partition": awsPartition,
"Region": awsRegion,
"Details": {"Other": {"ClusterId": clusterId}},
}
],
"Compliance": {
"Status": "PASSED",
"RelatedRequirements": [
"NIST CSF PR.AC-5",
"NIST SP 800-53 AC-4",
"NIST SP 800-53 AC-10",
"NIST SP 800-53 SC-7",
"AICPA TSC CC6.1",
"ISO 27001:2013 A.13.1.1",
"ISO 27001:2013 A.13.1.3",
"ISO 27001:2013 A.13.2.1",
"ISO 27001:2013 A.14.1.2",
"ISO 27001:2013 A.14.1.3",
],
},
"Workflow": {"Status": "RESOLVED"},
"RecordState": "ARCHIVED",
}
yield finding
@registry.register_check("redshift")
def cluster_logging_check(cache: dict, awsAccountId: str, awsRegion: str, awsPartition: str) -> dict:
"""[Redshift.4] Redshift clusters should have logging enabled"""
clusters = describe_clusters(cache=cache)
myRedshiftClusters = clusters["Clusters"]
for cluster in myRedshiftClusters:
clusterId = str(cluster["ClusterIdentifier"])
clusterArn = f"arn:{awsPartition}:redshift:{awsRegion}:{awsAccountId}:cluster:{clusterId}"
response = redshift.describe_logging_status(ClusterIdentifier=clusterId)
iso8601Time = datetime.datetime.utcnow().replace(tzinfo=datetime.timezone.utc).isoformat()
if str(response["LoggingEnabled"]) == "False":
finding = {
"SchemaVersion": "2018-10-08",
"Id": clusterArn + "/redshift-cluster-logging-check",
"ProductArn": f"arn:{awsPartition}:securityhub:{awsRegion}:{awsAccountId}:product/{awsAccountId}/default",
"GeneratorId": clusterArn,
"AwsAccountId": awsAccountId,
"Types": ["Software and Configuration Checks/AWS Security Best Practices"],
"FirstObservedAt": iso8601Time,
"CreatedAt": iso8601Time,
"UpdatedAt": iso8601Time,
"Severity": {"Label": "MEDIUM"},
"Confidence": 99,
"Title": "[Redshift.4] Redshift clusters should have logging enabled",
"Description": "Redshift cluster "
+ clusterId
+ " does not have logging enabled. Refer to the remediation instructions to remediate this behavior",
"Remediation": {
"Recommendation": {
"Text": "For more information on Redshift logging and how to configure it refer to the Database Audit Logging section of the Amazon Redshift Cluster Management Guide",
"Url": "https://docs.aws.amazon.com/redshift/latest/mgmt/db-auditing.html",
}
},
"ProductFields": {"Product Name": "ElectricEye"},
"Resources": [
{
"Type": "AwsRedshiftCluster",
"Id": clusterArn,
"Partition": awsPartition,
"Region": awsRegion,
"Details": {"Other": {"ClusterId": clusterId}},
}
],
"Compliance": {
"Status": "FAILED",
"RelatedRequirements": [
"NIST CSF DE.AE-3",
"NIST SP 800-53 AU-6",
"NIST SP 800-53 CA-7",
"NIST SP 800-53 IR-4",
"NIST SP 800-53 IR-5",
"NIST SP 800-53 IR-8",
"NIST SP 800-53 SI-4",
"AICPA TSC CC7.2",
"ISO 27001:2013 A.12.4.1",
"ISO 27001:2013 A.16.1.7",
],
},
"Workflow": {"Status": "NEW"},
"RecordState": "ACTIVE",
}
yield finding
else:
finding = {
"SchemaVersion": "2018-10-08",
"Id": clusterArn + "/redshift-cluster-logging-check",
"ProductArn": f"arn:{awsPartition}:securityhub:{awsRegion}:{awsAccountId}:product/{awsAccountId}/default",
"GeneratorId": clusterArn,
"AwsAccountId": awsAccountId,
"Types": ["Software and Configuration Checks/AWS Security Best Practices"],
"FirstObservedAt": iso8601Time,
"CreatedAt": iso8601Time,
"UpdatedAt": iso8601Time,
"Severity": {"Label": "INFORMATIONAL"},
"Confidence": 99,
"Title": "[Redshift.4] Redshift clusters should have logging enabled",
"Description": "Redshift cluster " + clusterId + " has logging enabled.",
"Remediation": {
"Recommendation": {
"Text": "For more information on Redshift logging and how to configure it refer to the Database Audit Logging section of the Amazon Redshift Cluster Management Guide",
"Url": "https://docs.aws.amazon.com/redshift/latest/mgmt/db-auditing.html",
}
},
"ProductFields": {"Product Name": "ElectricEye"},
"Resources": [
{
"Type": "AwsRedshiftCluster",
"Id": clusterArn,
"Partition": awsPartition,
"Region": awsRegion,
"Details": {"Other": {"ClusterId": clusterId}},
}
],
"Compliance": {
"Status": "PASSED",
"RelatedRequirements": [
"NIST CSF DE.AE-3",
"NIST SP 800-53 AU-6",
"NIST SP 800-53 CA-7",
"NIST SP 800-53 IR-4",
"NIST SP 800-53 IR-5",
"NIST SP 800-53 IR-8",
"NIST SP 800-53 SI-4",
"AICPA TSC CC7.2",
"ISO 27001:2013 A.12.4.1",
"ISO 27001:2013 A.16.1.7",
],
},
"Workflow": {"Status": "RESOLVED"},
"RecordState": "ARCHIVED",
}
yield finding | 49.352941 | 218 | 0.489663 | 2,020 | 24,331 | 5.884653 | 0.130693 | 0.017162 | 0.025742 | 0.031463 | 0.891899 | 0.890384 | 0.886346 | 0.880205 | 0.876756 | 0.868848 | 0 | 0.053783 | 0.398586 | 24,331 | 493 | 219 | 49.352941 | 0.758559 | 0.045169 | 0 | 0.82684 | 0 | 0.034632 | 0.410649 | 0.054667 | 0 | 0 | 0 | 0 | 0 | 1 | 0.010823 | false | 0.008658 | 0.006494 | 0 | 0.021645 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
582fbf2d50fc6c192774215167a2d35b698a824f | 3,219 | py | Python | App/AccountPasswordPage.py | tartaruswh/SaaSCyberWaterSupplyGWAuto | 07b43c67e059a5b602957d94e9f441e74d12bde1 | [
"Apache-2.0"
] | null | null | null | App/AccountPasswordPage.py | tartaruswh/SaaSCyberWaterSupplyGWAuto | 07b43c67e059a5b602957d94e9f441e74d12bde1 | [
"Apache-2.0"
] | null | null | null | App/AccountPasswordPage.py | tartaruswh/SaaSCyberWaterSupplyGWAuto | 07b43c67e059a5b602957d94e9f441e74d12bde1 | [
"Apache-2.0"
] | null | null | null | import time
import pytest
from appium.webdriver.common.mobileby import MobileBy
from App.BasePage import BasePage
from App.MainPage import MainPage
from App.TenantPage import TenantPage
class AccountPasswordPage(BasePage):
    # Click the tenant input box to jump to the tenant selection page and search for a tenant
# /html/body/div[1]/uni-view/uni-view[1]/uni-view/uni-view[1]/uni-input/div
def goto_tenantPage(self):
self.find(MobileBy.XPATH,"/hierarchy/android.widget.FrameLayout/android.widget.LinearLayout/android.widget.FrameLayout/android.widget.FrameLayout/"
"android.widget.FrameLayout/android.view.ViewGroup/android.widget.FrameLayout/android.widget.LinearLayout/"
"android.webkit.WebView/android.webkit.WebView/android.view.View[4]/android.view.View[1]").click()
#self.cf_webDriverWaitUnitlIsDisplayed(MobileBy.XPATH,"/html/body/div[1]/uni-view/uni-view[2]")
#pytest.assume(self.find(MobileBy.XPATH,"/html/body/div[1]/uni-view/uni-view[2]").text == "取消")
return TenantPage(self.getDriver())
    # Enter the username and password
    def input_username_password(self, username, password):
        # Click the username input box and enter the username
self.find(MobileBy.XPATH,"/hierarchy/android.widget.FrameLayout/android.widget.LinearLayout/android.widget.FrameLayout/"
"android.widget.FrameLayout/android.widget.FrameLayout/android.view.ViewGroup/android.widget.FrameLayout/"
"android.widget.LinearLayout/android.webkit.WebView/android.webkit.WebView/android.view.View[6]/android.view.View[1]/"
"android.view.View/android.view.View/android.widget.EditText").send_keys(username)
        # Click the password input box and enter the password
self.find(MobileBy.XPATH,"/hierarchy/android.widget.FrameLayout/android.widget.LinearLayout/android.widget.FrameLayout/"
"android.widget.FrameLayout/android.widget.FrameLayout/android.view.ViewGroup/android.widget.FrameLayout/"
"android.widget.LinearLayout/android.webkit.WebView/android.webkit.WebView/android.view.View[8]/android.view.View[1]/"
"android.view.View[1]/android.view.View/android.widget.EditText").send_keys(password)
    # Click the login button
def goto_mainPage(self):
self.find(MobileBy.XPATH,"/hierarchy/android.widget.FrameLayout/android.widget.LinearLayout/android.widget.FrameLayout/"
"android.widget.FrameLayout/android.widget.FrameLayout/android.view.ViewGroup/android.widget.FrameLayout/"
"android.widget.LinearLayout/android.webkit.WebView/android.webkit.WebView/android.view.View[11]/android.view.View[2]").click()
return MainPage(self.getDriver())
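    # --- Illustrative usage (hand-written; assumed flow, not part of the
    # page object). Assumes an Appium driver is already attached to the app:
    #
    #     page = AccountPasswordPage(driver)
    #     page.inputTenant("some-tenant")              # defined below
    #     page.input_username_password("user", "password")
    #     main_page = page.goto_mainPage()             # MainPage on success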
    def inputTenant(self, tenant):
self.find(MobileBy.XPATH,"/hierarchy/android.widget.FrameLayout/android.widget.LinearLayout/android.widget.FrameLayout/android.widget.FrameLayout/android.widget.FrameLayout/android.view.ViewGroup/android.widget.FrameLayout/android.widget.LinearLayout/android.webkit.WebView/android.webkit.WebView/android.view.View[4]/android.view.View[1]/android.view.View/android.view.View/android.widget.EditText").send_keys(tenant) | 76.642857 | 426 | 0.71078 | 367 | 3,219 | 6.212534 | 0.171662 | 0.216667 | 0.263158 | 0.339912 | 0.768421 | 0.768421 | 0.768421 | 0.75307 | 0.715351 | 0.715351 | 0 | 0.007032 | 0.160609 | 3,219 | 42 | 426 | 76.642857 | 0.836788 | 0.101274 | 0 | 0.214286 | 0 | 0.464286 | 0.604506 | 0.604506 | 0 | 0 | 0 | 0 | 0 | 1 | 0.142857 | false | 0.107143 | 0.214286 | 0 | 0.464286 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 12 |
5837e88d10e399d5358ed0aecb40bc220d870539 | 3,371 | py | Python | src/genie/libs/parser/iosxe/tests/ShowIpMsdpSaCache/cli/equal/device_output_3_expected.py | balmasea/genieparser | d1e71a96dfb081e0a8591707b9d4872decd5d9d3 | [
"Apache-2.0"
] | 204 | 2018-06-27T00:55:27.000Z | 2022-03-06T21:12:18.000Z | src/genie/libs/parser/iosxe/tests/ShowIpMsdpSaCache/cli/equal/device_output_3_expected.py | balmasea/genieparser | d1e71a96dfb081e0a8591707b9d4872decd5d9d3 | [
"Apache-2.0"
] | 468 | 2018-06-19T00:33:18.000Z | 2022-03-31T23:23:35.000Z | src/genie/libs/parser/iosxe/tests/ShowIpMsdpSaCache/cli/equal/device_output_3_expected.py | balmasea/genieparser | d1e71a96dfb081e0a8591707b9d4872decd5d9d3 | [
"Apache-2.0"
] | 309 | 2019-01-16T20:21:07.000Z | 2022-03-30T12:56:41.000Z | expected_output = {
"vrf": {
"default": {
"num_of_sa_cache": 8,
"sa_cache": {
"239.232.1.0 10.44.44.5": {
"group": "239.232.1.0",
"source_addr": "10.44.44.5",
"up_time": "00:01:20",
"expire": "00:05:32",
"peer_as": 64512,
"peer": "192.168.4.4",
"origin_rp": {"192.168.4.4": {"rp_address": "192.168.4.4"}},
},
"239.232.1.1 10.44.44.5": {
"group": "239.232.1.1",
"source_addr": "10.44.44.5",
"up_time": "00:01:20",
"expire": "00:05:32",
"peer_as": 64512,
"peer": "192.168.4.4",
"origin_rp": {"192.168.4.4": {"rp_address": "192.168.4.4"}},
},
"239.232.1.2 10.44.44.5": {
"group": "239.232.1.2",
"source_addr": "10.44.44.5",
"up_time": "00:01:19",
"expire": "00:05:32",
"peer": "192.168.4.4",
"peer_as": 64512,
"origin_rp": {"192.168.4.4": {"rp_address": "192.168.4.4"}},
},
"239.232.1.3 10.44.44.5": {
"group": "239.232.1.3",
"source_addr": "10.44.44.5",
"up_time": "00:01:19",
"expire": "00:05:32",
"peer": "192.168.4.4",
"peer_as": 64512,
"origin_rp": {"192.168.4.4": {"rp_address": "192.168.4.4"}},
},
"239.232.1.4 10.44.44.5": {
"group": "239.232.1.4",
"source_addr": "10.44.44.5",
"up_time": "00:01:19",
"expire": "00:05:32",
"peer_as": 64512,
"peer": "192.168.4.4",
"origin_rp": {"192.168.4.4": {"rp_address": "192.168.4.4"}},
},
"239.232.1.5 10.44.44.5": {
"group": "239.232.1.5",
"source_addr": "10.44.44.5",
"up_time": "00:01:19",
"expire": "00:05:32",
"peer_as": 64512,
"peer": "192.168.4.4",
"origin_rp": {"192.168.4.4": {"rp_address": "192.168.4.4"}},
},
"239.232.1.6 10.44.44.5": {
"group": "239.232.1.6",
"source_addr": "10.44.44.5",
"up_time": "00:01:19",
"expire": "00:05:32",
"peer_as": 64512,
"peer": "192.168.4.4",
"origin_rp": {"192.168.4.4": {"rp_address": "192.168.4.4"}},
},
"239.232.1.7 10.44.44.5": {
"group": "239.232.1.7",
"source_addr": "10.44.44.5",
"up_time": "00:01:19",
"expire": "00:05:32",
"peer_as": 64512,
"peer": "192.168.4.4",
"origin_rp": {"192.168.4.4": {"rp_address": "192.168.4.4"}},
},
},
}
}
}
| 41.109756 | 80 | 0.32305 | 395 | 3,371 | 2.643038 | 0.103797 | 0.137931 | 0.16092 | 0.183908 | 0.935824 | 0.935824 | 0.935824 | 0.935824 | 0.79023 | 0.79023 | 0 | 0.323295 | 0.4779 | 3,371 | 81 | 81 | 41.617284 | 0.269886 | 0 | 0 | 0.592593 | 0 | 0 | 0.36814 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 9 |
584ff79c57db83264b90689f79d2e2241466a413 | 56,631 | py | Python | skywalker/Skywalker.py | iPlantCollaborativeOpenSource/skywalker-python.twisted | 2482404e5f3da4f544273ff7ff7e9da426b27927 | [
"BSD-3-Clause"
] | null | null | null | skywalker/Skywalker.py | iPlantCollaborativeOpenSource/skywalker-python.twisted | 2482404e5f3da4f544273ff7ff7e9da426b27927 | [
"BSD-3-Clause"
] | null | null | null | skywalker/Skywalker.py | iPlantCollaborativeOpenSource/skywalker-python.twisted | 2482404e5f3da4f544273ff7ff7e9da426b27927 | [
"BSD-3-Clause"
] | null | null | null | #
# Autogenerated by Thrift Compiler (0.9.2)
#
# DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
#
# options string: py:twisted,new_style,utf8strings
#
from thrift.Thrift import TType, TMessageType, TException, TApplicationException
from ttypes import *
from thrift.Thrift import TProcessor
from thrift.transport import TTransport
from thrift.protocol import TBinaryProtocol, TProtocol
try:
from thrift.protocol import fastbinary
except:
fastbinary = None
from zope.interface import Interface, implements
from twisted.internet import defer
from thrift.transport import TTwisted
class Iface(Interface):
def get_provider_hash(provider):
"""
Parameters:
- provider
"""
pass
def get_identity_hash(identity):
"""
Parameters:
- identity
"""
pass
def get_instance(provider_hash, identity_hash, instance_uuid):
"""
Parameters:
- provider_hash
- identity_hash
- instance_uuid
"""
pass
def list_instances(provider_hash, identity_hash):
"""
Parameters:
- provider_hash
- identity_hash
"""
pass
def create_instance(provider_hash, identity_hash, options):
"""
Parameters:
- provider_hash
- identity_hash
- options
"""
pass
def deploy_to_instance(provider_hash, identity_hash, options):
"""
Parameters:
- provider_hash
- identity_hash
- options
"""
pass
def destroy_instance(provider_hash, identity_hash, instance_uuid):
"""
Parameters:
- provider_hash
- identity_hash
- instance_uuid
"""
pass
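  # --- Illustrative handler sketch (hand-written; not emitted by Thrift) --
  # A server-side implementation provides the Iface methods, for example:
  #
  #   class SkywalkerHandler(object):
  #     implements(Iface)
  #     def list_instances(self, provider_hash, identity_hash):
  #       return []  # or a Deferred; may raise OpenStackException, etc.
  #
  # Only the method names and signatures are fixed by the IDL; the bodies
  # are application code.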
class Client(object):
implements(Iface)
def __init__(self, transport, oprot_factory):
self._transport = transport
self._oprot_factory = oprot_factory
self._seqid = 0
self._reqs = {}
def get_provider_hash(self, provider):
"""
Parameters:
- provider
"""
seqid = self._seqid = self._seqid + 1
self._reqs[seqid] = defer.Deferred()
d = defer.maybeDeferred(self.send_get_provider_hash, provider)
d.addCallbacks(
callback=self.cb_send_get_provider_hash,
callbackArgs=(seqid,),
errback=self.eb_send_get_provider_hash,
errbackArgs=(seqid,))
return d
def cb_send_get_provider_hash(self, _, seqid):
return self._reqs[seqid]
def eb_send_get_provider_hash(self, f, seqid):
d = self._reqs.pop(seqid)
d.errback(f)
return d
def send_get_provider_hash(self, provider):
oprot = self._oprot_factory.getProtocol(self._transport)
oprot.writeMessageBegin('get_provider_hash', TMessageType.CALL, self._seqid)
args = get_provider_hash_args()
args.provider = provider
args.write(oprot)
oprot.writeMessageEnd()
oprot.trans.flush()
def recv_get_provider_hash(self, iprot, mtype, rseqid):
d = self._reqs.pop(rseqid)
if mtype == TMessageType.EXCEPTION:
x = TApplicationException()
x.read(iprot)
iprot.readMessageEnd()
return d.errback(x)
result = get_provider_hash_result()
result.read(iprot)
iprot.readMessageEnd()
if result.success is not None:
return d.callback(result.success)
return d.errback(TApplicationException(TApplicationException.MISSING_RESULT, "get_provider_hash failed: unknown result"))
def get_identity_hash(self, identity):
"""
Parameters:
- identity
"""
seqid = self._seqid = self._seqid + 1
self._reqs[seqid] = defer.Deferred()
d = defer.maybeDeferred(self.send_get_identity_hash, identity)
d.addCallbacks(
callback=self.cb_send_get_identity_hash,
callbackArgs=(seqid,),
errback=self.eb_send_get_identity_hash,
errbackArgs=(seqid,))
return d
def cb_send_get_identity_hash(self, _, seqid):
return self._reqs[seqid]
def eb_send_get_identity_hash(self, f, seqid):
d = self._reqs.pop(seqid)
d.errback(f)
return d
def send_get_identity_hash(self, identity):
oprot = self._oprot_factory.getProtocol(self._transport)
oprot.writeMessageBegin('get_identity_hash', TMessageType.CALL, self._seqid)
args = get_identity_hash_args()
args.identity = identity
args.write(oprot)
oprot.writeMessageEnd()
oprot.trans.flush()
def recv_get_identity_hash(self, iprot, mtype, rseqid):
d = self._reqs.pop(rseqid)
if mtype == TMessageType.EXCEPTION:
x = TApplicationException()
x.read(iprot)
iprot.readMessageEnd()
return d.errback(x)
result = get_identity_hash_result()
result.read(iprot)
iprot.readMessageEnd()
if result.success is not None:
return d.callback(result.success)
return d.errback(TApplicationException(TApplicationException.MISSING_RESULT, "get_identity_hash failed: unknown result"))
def get_instance(self, provider_hash, identity_hash, instance_uuid):
"""
Parameters:
- provider_hash
- identity_hash
- instance_uuid
"""
seqid = self._seqid = self._seqid + 1
self._reqs[seqid] = defer.Deferred()
d = defer.maybeDeferred(self.send_get_instance, provider_hash, identity_hash, instance_uuid)
d.addCallbacks(
callback=self.cb_send_get_instance,
callbackArgs=(seqid,),
errback=self.eb_send_get_instance,
errbackArgs=(seqid,))
return d
def cb_send_get_instance(self, _, seqid):
return self._reqs[seqid]
def eb_send_get_instance(self, f, seqid):
d = self._reqs.pop(seqid)
d.errback(f)
return d
def send_get_instance(self, provider_hash, identity_hash, instance_uuid):
oprot = self._oprot_factory.getProtocol(self._transport)
oprot.writeMessageBegin('get_instance', TMessageType.CALL, self._seqid)
args = get_instance_args()
args.provider_hash = provider_hash
args.identity_hash = identity_hash
args.instance_uuid = instance_uuid
args.write(oprot)
oprot.writeMessageEnd()
oprot.trans.flush()
def recv_get_instance(self, iprot, mtype, rseqid):
d = self._reqs.pop(rseqid)
if mtype == TMessageType.EXCEPTION:
x = TApplicationException()
x.read(iprot)
iprot.readMessageEnd()
return d.errback(x)
result = get_instance_result()
result.read(iprot)
iprot.readMessageEnd()
if result.success is not None:
return d.callback(result.success)
return d.errback(TApplicationException(TApplicationException.MISSING_RESULT, "get_instance failed: unknown result"))
def list_instances(self, provider_hash, identity_hash):
"""
Parameters:
- provider_hash
- identity_hash
"""
seqid = self._seqid = self._seqid + 1
self._reqs[seqid] = defer.Deferred()
d = defer.maybeDeferred(self.send_list_instances, provider_hash, identity_hash)
d.addCallbacks(
callback=self.cb_send_list_instances,
callbackArgs=(seqid,),
errback=self.eb_send_list_instances,
errbackArgs=(seqid,))
return d
def cb_send_list_instances(self, _, seqid):
return self._reqs[seqid]
def eb_send_list_instances(self, f, seqid):
d = self._reqs.pop(seqid)
d.errback(f)
return d
def send_list_instances(self, provider_hash, identity_hash):
oprot = self._oprot_factory.getProtocol(self._transport)
oprot.writeMessageBegin('list_instances', TMessageType.CALL, self._seqid)
args = list_instances_args()
args.provider_hash = provider_hash
args.identity_hash = identity_hash
args.write(oprot)
oprot.writeMessageEnd()
oprot.trans.flush()
def recv_list_instances(self, iprot, mtype, rseqid):
d = self._reqs.pop(rseqid)
if mtype == TMessageType.EXCEPTION:
x = TApplicationException()
x.read(iprot)
iprot.readMessageEnd()
return d.errback(x)
result = list_instances_result()
result.read(iprot)
iprot.readMessageEnd()
if result.success is not None:
return d.callback(result.success)
if result.oex is not None:
return d.errback(result.oex)
if result.cex is not None:
return d.errback(result.cex)
return d.errback(TApplicationException(TApplicationException.MISSING_RESULT, "list_instances failed: unknown result"))
def create_instance(self, provider_hash, identity_hash, options):
"""
Parameters:
- provider_hash
- identity_hash
- options
"""
seqid = self._seqid = self._seqid + 1
self._reqs[seqid] = defer.Deferred()
d = defer.maybeDeferred(self.send_create_instance, provider_hash, identity_hash, options)
d.addCallbacks(
callback=self.cb_send_create_instance,
callbackArgs=(seqid,),
errback=self.eb_send_create_instance,
errbackArgs=(seqid,))
return d
def cb_send_create_instance(self, _, seqid):
return self._reqs[seqid]
def eb_send_create_instance(self, f, seqid):
d = self._reqs.pop(seqid)
d.errback(f)
return d
def send_create_instance(self, provider_hash, identity_hash, options):
oprot = self._oprot_factory.getProtocol(self._transport)
oprot.writeMessageBegin('create_instance', TMessageType.CALL, self._seqid)
args = create_instance_args()
args.provider_hash = provider_hash
args.identity_hash = identity_hash
args.options = options
args.write(oprot)
oprot.writeMessageEnd()
oprot.trans.flush()
def recv_create_instance(self, iprot, mtype, rseqid):
d = self._reqs.pop(rseqid)
if mtype == TMessageType.EXCEPTION:
x = TApplicationException()
x.read(iprot)
iprot.readMessageEnd()
return d.errback(x)
result = create_instance_result()
result.read(iprot)
iprot.readMessageEnd()
if result.success is not None:
return d.callback(result.success)
return d.errback(TApplicationException(TApplicationException.MISSING_RESULT, "create_instance failed: unknown result"))
def deploy_to_instance(self, provider_hash, identity_hash, options):
"""
Parameters:
- provider_hash
- identity_hash
- options
"""
seqid = self._seqid = self._seqid + 1
self._reqs[seqid] = defer.Deferred()
d = defer.maybeDeferred(self.send_deploy_to_instance, provider_hash, identity_hash, options)
d.addCallbacks(
callback=self.cb_send_deploy_to_instance,
callbackArgs=(seqid,),
errback=self.eb_send_deploy_to_instance,
errbackArgs=(seqid,))
return d
def cb_send_deploy_to_instance(self, _, seqid):
return self._reqs[seqid]
def eb_send_deploy_to_instance(self, f, seqid):
d = self._reqs.pop(seqid)
d.errback(f)
return d
def send_deploy_to_instance(self, provider_hash, identity_hash, options):
oprot = self._oprot_factory.getProtocol(self._transport)
oprot.writeMessageBegin('deploy_to_instance', TMessageType.CALL, self._seqid)
args = deploy_to_instance_args()
args.provider_hash = provider_hash
args.identity_hash = identity_hash
args.options = options
args.write(oprot)
oprot.writeMessageEnd()
oprot.trans.flush()
def recv_deploy_to_instance(self, iprot, mtype, rseqid):
d = self._reqs.pop(rseqid)
if mtype == TMessageType.EXCEPTION:
x = TApplicationException()
x.read(iprot)
iprot.readMessageEnd()
return d.errback(x)
result = deploy_to_instance_result()
result.read(iprot)
iprot.readMessageEnd()
if result.success is not None:
return d.callback(result.success)
if result.oex is not None:
return d.errback(result.oex)
if result.cex is not None:
return d.errback(result.cex)
if result.dex is not None:
return d.errback(result.dex)
return d.errback(TApplicationException(TApplicationException.MISSING_RESULT, "deploy_to_instance failed: unknown result"))
def destroy_instance(self, provider_hash, identity_hash, instance_uuid):
"""
Parameters:
- provider_hash
- identity_hash
- instance_uuid
"""
seqid = self._seqid = self._seqid + 1
self._reqs[seqid] = defer.Deferred()
d = defer.maybeDeferred(self.send_destroy_instance, provider_hash, identity_hash, instance_uuid)
d.addCallbacks(
callback=self.cb_send_destroy_instance,
callbackArgs=(seqid,),
errback=self.eb_send_destroy_instance,
errbackArgs=(seqid,))
return d
def cb_send_destroy_instance(self, _, seqid):
return self._reqs[seqid]
def eb_send_destroy_instance(self, f, seqid):
d = self._reqs.pop(seqid)
d.errback(f)
return d
def send_destroy_instance(self, provider_hash, identity_hash, instance_uuid):
oprot = self._oprot_factory.getProtocol(self._transport)
oprot.writeMessageBegin('destroy_instance', TMessageType.CALL, self._seqid)
args = destroy_instance_args()
args.provider_hash = provider_hash
args.identity_hash = identity_hash
args.instance_uuid = instance_uuid
args.write(oprot)
oprot.writeMessageEnd()
oprot.trans.flush()
def recv_destroy_instance(self, iprot, mtype, rseqid):
d = self._reqs.pop(rseqid)
if mtype == TMessageType.EXCEPTION:
x = TApplicationException()
x.read(iprot)
iprot.readMessageEnd()
return d.errback(x)
result = destroy_instance_result()
result.read(iprot)
iprot.readMessageEnd()
if result.oex is not None:
return d.errback(result.oex)
if result.cex is not None:
return d.errback(result.cex)
return d.callback(None)
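# --- Illustrative client wiring (hand-written; not emitted by Thrift) ------
# The standard Thrift-on-Twisted idiom; host/port are placeholders and the
# exact TTwisted API should be verified against your Thrift version:
def _example_connect(host='localhost', port=9090):
  from twisted.internet import reactor
  from twisted.internet.protocol import ClientCreator
  creator = ClientCreator(reactor, TTwisted.ThriftClientProtocol,
                          Client, TBinaryProtocol.TBinaryProtocolFactory())
  d = creator.connectTCP(host, port)
  # proto.client is a connected Client instance once the handshake completes
  return d.addCallback(lambda proto: proto.client)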
class Processor(TProcessor):
implements(Iface)
def __init__(self, handler):
self._handler = Iface(handler)
self._processMap = {}
self._processMap["get_provider_hash"] = Processor.process_get_provider_hash
self._processMap["get_identity_hash"] = Processor.process_get_identity_hash
self._processMap["get_instance"] = Processor.process_get_instance
self._processMap["list_instances"] = Processor.process_list_instances
self._processMap["create_instance"] = Processor.process_create_instance
self._processMap["deploy_to_instance"] = Processor.process_deploy_to_instance
self._processMap["destroy_instance"] = Processor.process_destroy_instance
def process(self, iprot, oprot):
(name, type, seqid) = iprot.readMessageBegin()
if name not in self._processMap:
iprot.skip(TType.STRUCT)
iprot.readMessageEnd()
x = TApplicationException(TApplicationException.UNKNOWN_METHOD, 'Unknown function %s' % (name))
oprot.writeMessageBegin(name, TMessageType.EXCEPTION, seqid)
x.write(oprot)
oprot.writeMessageEnd()
oprot.trans.flush()
return defer.succeed(None)
else:
return self._processMap[name](self, seqid, iprot, oprot)
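  # --- Illustrative server wiring (hand-written; not emitted by Thrift) ---
  # The processor can be served with the stock Twisted factory, e.g.:
  #
  #   from twisted.internet import reactor
  #   factory = TTwisted.ThriftServerFactory(
  #       processor=Processor(handler),
  #       iprot_factory=TBinaryProtocol.TBinaryProtocolFactory())
  #   reactor.listenTCP(9090, factory)
  #
  # where `handler` is any object implementing Iface (see the handler
  # sketch above).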
def process_get_provider_hash(self, seqid, iprot, oprot):
args = get_provider_hash_args()
args.read(iprot)
iprot.readMessageEnd()
result = get_provider_hash_result()
d = defer.maybeDeferred(self._handler.get_provider_hash, args.provider)
d.addCallback(self.write_results_success_get_provider_hash, result, seqid, oprot)
return d
def write_results_success_get_provider_hash(self, success, result, seqid, oprot):
result.success = success
oprot.writeMessageBegin("get_provider_hash", TMessageType.REPLY, seqid)
result.write(oprot)
oprot.writeMessageEnd()
oprot.trans.flush()
def process_get_identity_hash(self, seqid, iprot, oprot):
args = get_identity_hash_args()
args.read(iprot)
iprot.readMessageEnd()
result = get_identity_hash_result()
d = defer.maybeDeferred(self._handler.get_identity_hash, args.identity)
d.addCallback(self.write_results_success_get_identity_hash, result, seqid, oprot)
return d
def write_results_success_get_identity_hash(self, success, result, seqid, oprot):
result.success = success
oprot.writeMessageBegin("get_identity_hash", TMessageType.REPLY, seqid)
result.write(oprot)
oprot.writeMessageEnd()
oprot.trans.flush()
def process_get_instance(self, seqid, iprot, oprot):
args = get_instance_args()
args.read(iprot)
iprot.readMessageEnd()
result = get_instance_result()
d = defer.maybeDeferred(self._handler.get_instance, args.provider_hash, args.identity_hash, args.instance_uuid)
d.addCallback(self.write_results_success_get_instance, result, seqid, oprot)
return d
def write_results_success_get_instance(self, success, result, seqid, oprot):
result.success = success
oprot.writeMessageBegin("get_instance", TMessageType.REPLY, seqid)
result.write(oprot)
oprot.writeMessageEnd()
oprot.trans.flush()
def process_list_instances(self, seqid, iprot, oprot):
args = list_instances_args()
args.read(iprot)
iprot.readMessageEnd()
result = list_instances_result()
d = defer.maybeDeferred(self._handler.list_instances, args.provider_hash, args.identity_hash)
d.addCallback(self.write_results_success_list_instances, result, seqid, oprot)
d.addErrback(self.write_results_exception_list_instances, result, seqid, oprot)
return d
def write_results_success_list_instances(self, success, result, seqid, oprot):
result.success = success
oprot.writeMessageBegin("list_instances", TMessageType.REPLY, seqid)
result.write(oprot)
oprot.writeMessageEnd()
oprot.trans.flush()
def write_results_exception_list_instances(self, error, result, seqid, oprot):
try:
error.raiseException()
except OpenStackException, oex:
result.oex = oex
except ConnectionException, cex:
result.cex = cex
oprot.writeMessageBegin("list_instances", TMessageType.REPLY, seqid)
result.write(oprot)
oprot.writeMessageEnd()
oprot.trans.flush()
def process_create_instance(self, seqid, iprot, oprot):
args = create_instance_args()
args.read(iprot)
iprot.readMessageEnd()
result = create_instance_result()
d = defer.maybeDeferred(self._handler.create_instance, args.provider_hash, args.identity_hash, args.options)
d.addCallback(self.write_results_success_create_instance, result, seqid, oprot)
return d
def write_results_success_create_instance(self, success, result, seqid, oprot):
result.success = success
oprot.writeMessageBegin("create_instance", TMessageType.REPLY, seqid)
result.write(oprot)
oprot.writeMessageEnd()
oprot.trans.flush()
def process_deploy_to_instance(self, seqid, iprot, oprot):
args = deploy_to_instance_args()
args.read(iprot)
iprot.readMessageEnd()
result = deploy_to_instance_result()
d = defer.maybeDeferred(self._handler.deploy_to_instance, args.provider_hash, args.identity_hash, args.options)
d.addCallback(self.write_results_success_deploy_to_instance, result, seqid, oprot)
d.addErrback(self.write_results_exception_deploy_to_instance, result, seqid, oprot)
return d
def write_results_success_deploy_to_instance(self, success, result, seqid, oprot):
result.success = success
oprot.writeMessageBegin("deploy_to_instance", TMessageType.REPLY, seqid)
result.write(oprot)
oprot.writeMessageEnd()
oprot.trans.flush()
def write_results_exception_deploy_to_instance(self, error, result, seqid, oprot):
try:
error.raiseException()
except OpenStackException, oex:
result.oex = oex
except ConnectionException, cex:
result.cex = cex
except DeployException, dex:
result.dex = dex
oprot.writeMessageBegin("deploy_to_instance", TMessageType.REPLY, seqid)
result.write(oprot)
oprot.writeMessageEnd()
oprot.trans.flush()
def process_destroy_instance(self, seqid, iprot, oprot):
args = destroy_instance_args()
args.read(iprot)
iprot.readMessageEnd()
result = destroy_instance_result()
d = defer.maybeDeferred(self._handler.destroy_instance, args.provider_hash, args.identity_hash, args.instance_uuid)
d.addCallback(self.write_results_success_destroy_instance, result, seqid, oprot)
d.addErrback(self.write_results_exception_destroy_instance, result, seqid, oprot)
return d
def write_results_success_destroy_instance(self, success, result, seqid, oprot):
result.success = success
oprot.writeMessageBegin("destroy_instance", TMessageType.REPLY, seqid)
result.write(oprot)
oprot.writeMessageEnd()
oprot.trans.flush()
def write_results_exception_destroy_instance(self, error, result, seqid, oprot):
try:
error.raiseException()
except OpenStackException, oex:
result.oex = oex
except ConnectionException, cex:
result.cex = cex
oprot.writeMessageBegin("destroy_instance", TMessageType.REPLY, seqid)
result.write(oprot)
oprot.writeMessageEnd()
oprot.trans.flush()
# HELPER FUNCTIONS AND STRUCTURES
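# Each service method gets a generated <method>_args struct carrying its
# request fields and a <method>_result struct, where field id 0 holds the
# return value and higher field ids hold the declared exceptions.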
class get_provider_hash_args(object):
"""
Attributes:
- provider
"""
thrift_spec = (
None, # 0
(1, TType.STRUCT, 'provider', (Provider, Provider.thrift_spec), None, ), # 1
)
def __init__(self, provider=None,):
self.provider = provider
def read(self, iprot):
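# Fast path: decode the whole struct in one native call when the accelerated
# binary protocol and the fastbinary C extension are available; otherwise fall
# back to the generic field-by-field loop below.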
if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None:
fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec))
return
iprot.readStructBegin()
while True:
(fname, ftype, fid) = iprot.readFieldBegin()
if ftype == TType.STOP:
break
if fid == 1:
if ftype == TType.STRUCT:
self.provider = Provider()
self.provider.read(iprot)
else:
iprot.skip(ftype)
else:
iprot.skip(ftype)
iprot.readFieldEnd()
iprot.readStructEnd()
def write(self, oprot):
if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None:
oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec)))
return
oprot.writeStructBegin('get_provider_hash_args')
if self.provider is not None:
oprot.writeFieldBegin('provider', TType.STRUCT, 1)
self.provider.write(oprot)
oprot.writeFieldEnd()
oprot.writeFieldStop()
oprot.writeStructEnd()
def validate(self):
return
def __hash__(self):
value = 17
value = (value * 31) ^ hash(self.provider)
return value
def __repr__(self):
L = ['%s=%r' % (key, value)
for key, value in self.__dict__.iteritems()]
return '%s(%s)' % (self.__class__.__name__, ', '.join(L))
def __eq__(self, other):
return isinstance(other, self.__class__) and self.__dict__ == other.__dict__
def __ne__(self, other):
return not (self == other)
class get_provider_hash_result(object):
"""
Attributes:
- success
"""
thrift_spec = (
(0, TType.STRING, 'success', None, None, ), # 0
)
def __init__(self, success=None,):
self.success = success
def read(self, iprot):
if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None:
fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec))
return
iprot.readStructBegin()
while True:
(fname, ftype, fid) = iprot.readFieldBegin()
if ftype == TType.STOP:
break
if fid == 0:
if ftype == TType.STRING:
self.success = iprot.readString().decode('utf-8')
else:
iprot.skip(ftype)
else:
iprot.skip(ftype)
iprot.readFieldEnd()
iprot.readStructEnd()
def write(self, oprot):
if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None:
oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec)))
return
oprot.writeStructBegin('get_provider_hash_result')
if self.success is not None:
oprot.writeFieldBegin('success', TType.STRING, 0)
oprot.writeString(self.success.encode('utf-8'))
oprot.writeFieldEnd()
oprot.writeFieldStop()
oprot.writeStructEnd()
def validate(self):
return
def __hash__(self):
value = 17
value = (value * 31) ^ hash(self.success)
return value
def __repr__(self):
L = ['%s=%r' % (key, value)
for key, value in self.__dict__.iteritems()]
return '%s(%s)' % (self.__class__.__name__, ', '.join(L))
def __eq__(self, other):
return isinstance(other, self.__class__) and self.__dict__ == other.__dict__
def __ne__(self, other):
return not (self == other)
class get_identity_hash_args(object):
"""
Attributes:
- identity
"""
thrift_spec = (
None, # 0
(1, TType.STRUCT, 'identity', (Identity, Identity.thrift_spec), None, ), # 1
)
def __init__(self, identity=None,):
self.identity = identity
def read(self, iprot):
if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None:
fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec))
return
iprot.readStructBegin()
while True:
(fname, ftype, fid) = iprot.readFieldBegin()
if ftype == TType.STOP:
break
if fid == 1:
if ftype == TType.STRUCT:
self.identity = Identity()
self.identity.read(iprot)
else:
iprot.skip(ftype)
else:
iprot.skip(ftype)
iprot.readFieldEnd()
iprot.readStructEnd()
def write(self, oprot):
if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None:
oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec)))
return
oprot.writeStructBegin('get_identity_hash_args')
if self.identity is not None:
oprot.writeFieldBegin('identity', TType.STRUCT, 1)
self.identity.write(oprot)
oprot.writeFieldEnd()
oprot.writeFieldStop()
oprot.writeStructEnd()
def validate(self):
return
def __hash__(self):
value = 17
value = (value * 31) ^ hash(self.identity)
return value
def __repr__(self):
L = ['%s=%r' % (key, value)
for key, value in self.__dict__.iteritems()]
return '%s(%s)' % (self.__class__.__name__, ', '.join(L))
def __eq__(self, other):
return isinstance(other, self.__class__) and self.__dict__ == other.__dict__
def __ne__(self, other):
return not (self == other)
class get_identity_hash_result(object):
"""
Attributes:
- success
"""
thrift_spec = (
(0, TType.STRING, 'success', None, None, ), # 0
)
def __init__(self, success=None,):
self.success = success
def read(self, iprot):
if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None:
fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec))
return
iprot.readStructBegin()
while True:
(fname, ftype, fid) = iprot.readFieldBegin()
if ftype == TType.STOP:
break
if fid == 0:
if ftype == TType.STRING:
self.success = iprot.readString().decode('utf-8')
else:
iprot.skip(ftype)
else:
iprot.skip(ftype)
iprot.readFieldEnd()
iprot.readStructEnd()
def write(self, oprot):
if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None:
oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec)))
return
oprot.writeStructBegin('get_identity_hash_result')
if self.success is not None:
oprot.writeFieldBegin('success', TType.STRING, 0)
oprot.writeString(self.success.encode('utf-8'))
oprot.writeFieldEnd()
oprot.writeFieldStop()
oprot.writeStructEnd()
def validate(self):
return
def __hash__(self):
value = 17
value = (value * 31) ^ hash(self.success)
return value
def __repr__(self):
L = ['%s=%r' % (key, value)
for key, value in self.__dict__.iteritems()]
return '%s(%s)' % (self.__class__.__name__, ', '.join(L))
def __eq__(self, other):
return isinstance(other, self.__class__) and self.__dict__ == other.__dict__
def __ne__(self, other):
return not (self == other)
class get_instance_args(object):
"""
Attributes:
- provider_hash
- identity_hash
- instance_uuid
"""
thrift_spec = (
None, # 0
(1, TType.STRING, 'provider_hash', None, None, ), # 1
(2, TType.STRING, 'identity_hash', None, None, ), # 2
(3, TType.STRING, 'instance_uuid', None, None, ), # 3
)
def __init__(self, provider_hash=None, identity_hash=None, instance_uuid=None,):
self.provider_hash = provider_hash
self.identity_hash = identity_hash
self.instance_uuid = instance_uuid
def read(self, iprot):
if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None:
fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec))
return
iprot.readStructBegin()
while True:
(fname, ftype, fid) = iprot.readFieldBegin()
if ftype == TType.STOP:
break
if fid == 1:
if ftype == TType.STRING:
self.provider_hash = iprot.readString().decode('utf-8')
else:
iprot.skip(ftype)
elif fid == 2:
if ftype == TType.STRING:
self.identity_hash = iprot.readString().decode('utf-8')
else:
iprot.skip(ftype)
elif fid == 3:
if ftype == TType.STRING:
self.instance_uuid = iprot.readString().decode('utf-8')
else:
iprot.skip(ftype)
else:
iprot.skip(ftype)
iprot.readFieldEnd()
iprot.readStructEnd()
def write(self, oprot):
if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None:
oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec)))
return
oprot.writeStructBegin('get_instance_args')
if self.provider_hash is not None:
oprot.writeFieldBegin('provider_hash', TType.STRING, 1)
oprot.writeString(self.provider_hash.encode('utf-8'))
oprot.writeFieldEnd()
if self.identity_hash is not None:
oprot.writeFieldBegin('identity_hash', TType.STRING, 2)
oprot.writeString(self.identity_hash.encode('utf-8'))
oprot.writeFieldEnd()
if self.instance_uuid is not None:
oprot.writeFieldBegin('instance_uuid', TType.STRING, 3)
oprot.writeString(self.instance_uuid.encode('utf-8'))
oprot.writeFieldEnd()
oprot.writeFieldStop()
oprot.writeStructEnd()
def validate(self):
return
def __hash__(self):
value = 17
value = (value * 31) ^ hash(self.provider_hash)
value = (value * 31) ^ hash(self.identity_hash)
value = (value * 31) ^ hash(self.instance_uuid)
return value
def __repr__(self):
L = ['%s=%r' % (key, value)
for key, value in self.__dict__.iteritems()]
return '%s(%s)' % (self.__class__.__name__, ', '.join(L))
def __eq__(self, other):
return isinstance(other, self.__class__) and self.__dict__ == other.__dict__
def __ne__(self, other):
return not (self == other)
class get_instance_result(object):
"""
Attributes:
- success
"""
thrift_spec = (
(0, TType.STRUCT, 'success', (Instance, Instance.thrift_spec), None, ), # 0
)
def __init__(self, success=None,):
self.success = success
def read(self, iprot):
if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None:
fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec))
return
iprot.readStructBegin()
while True:
(fname, ftype, fid) = iprot.readFieldBegin()
if ftype == TType.STOP:
break
if fid == 0:
if ftype == TType.STRUCT:
self.success = Instance()
self.success.read(iprot)
else:
iprot.skip(ftype)
else:
iprot.skip(ftype)
iprot.readFieldEnd()
iprot.readStructEnd()
def write(self, oprot):
if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None:
oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec)))
return
oprot.writeStructBegin('get_instance_result')
if self.success is not None:
oprot.writeFieldBegin('success', TType.STRUCT, 0)
self.success.write(oprot)
oprot.writeFieldEnd()
oprot.writeFieldStop()
oprot.writeStructEnd()
def validate(self):
return
def __hash__(self):
value = 17
value = (value * 31) ^ hash(self.success)
return value
def __repr__(self):
L = ['%s=%r' % (key, value)
for key, value in self.__dict__.iteritems()]
return '%s(%s)' % (self.__class__.__name__, ', '.join(L))
def __eq__(self, other):
return isinstance(other, self.__class__) and self.__dict__ == other.__dict__
def __ne__(self, other):
return not (self == other)
class list_instances_args(object):
"""
Attributes:
- provider_hash
- identity_hash
"""
thrift_spec = (
None, # 0
(1, TType.STRING, 'provider_hash', None, None, ), # 1
(2, TType.STRING, 'identity_hash', None, None, ), # 2
)
def __init__(self, provider_hash=None, identity_hash=None,):
self.provider_hash = provider_hash
self.identity_hash = identity_hash
def read(self, iprot):
if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None:
fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec))
return
iprot.readStructBegin()
while True:
(fname, ftype, fid) = iprot.readFieldBegin()
if ftype == TType.STOP:
break
if fid == 1:
if ftype == TType.STRING:
self.provider_hash = iprot.readString().decode('utf-8')
else:
iprot.skip(ftype)
elif fid == 2:
if ftype == TType.STRING:
self.identity_hash = iprot.readString().decode('utf-8')
else:
iprot.skip(ftype)
else:
iprot.skip(ftype)
iprot.readFieldEnd()
iprot.readStructEnd()
def write(self, oprot):
if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None:
oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec)))
return
oprot.writeStructBegin('list_instances_args')
if self.provider_hash is not None:
oprot.writeFieldBegin('provider_hash', TType.STRING, 1)
oprot.writeString(self.provider_hash.encode('utf-8'))
oprot.writeFieldEnd()
if self.identity_hash is not None:
oprot.writeFieldBegin('identity_hash', TType.STRING, 2)
oprot.writeString(self.identity_hash.encode('utf-8'))
oprot.writeFieldEnd()
oprot.writeFieldStop()
oprot.writeStructEnd()
def validate(self):
return
def __hash__(self):
value = 17
value = (value * 31) ^ hash(self.provider_hash)
value = (value * 31) ^ hash(self.identity_hash)
return value
def __repr__(self):
L = ['%s=%r' % (key, value)
for key, value in self.__dict__.iteritems()]
return '%s(%s)' % (self.__class__.__name__, ', '.join(L))
def __eq__(self, other):
return isinstance(other, self.__class__) and self.__dict__ == other.__dict__
def __ne__(self, other):
return not (self == other)
class list_instances_result(object):
"""
Attributes:
- success
- oex
- cex
"""
thrift_spec = (
(0, TType.STRUCT, 'success', (Instances, Instances.thrift_spec), None, ), # 0
(1, TType.STRUCT, 'oex', (OpenStackException, OpenStackException.thrift_spec), None, ), # 1
(2, TType.STRUCT, 'cex', (ConnectionException, ConnectionException.thrift_spec), None, ), # 2
)
def __init__(self, success=None, oex=None, cex=None,):
self.success = success
self.oex = oex
self.cex = cex
def read(self, iprot):
if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None:
fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec))
return
iprot.readStructBegin()
while True:
(fname, ftype, fid) = iprot.readFieldBegin()
if ftype == TType.STOP:
break
if fid == 0:
if ftype == TType.STRUCT:
self.success = Instances()
self.success.read(iprot)
else:
iprot.skip(ftype)
elif fid == 1:
if ftype == TType.STRUCT:
self.oex = OpenStackException()
self.oex.read(iprot)
else:
iprot.skip(ftype)
elif fid == 2:
if ftype == TType.STRUCT:
self.cex = ConnectionException()
self.cex.read(iprot)
else:
iprot.skip(ftype)
else:
iprot.skip(ftype)
iprot.readFieldEnd()
iprot.readStructEnd()
def write(self, oprot):
if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None:
oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec)))
return
oprot.writeStructBegin('list_instances_result')
if self.success is not None:
oprot.writeFieldBegin('success', TType.STRUCT, 0)
self.success.write(oprot)
oprot.writeFieldEnd()
if self.oex is not None:
oprot.writeFieldBegin('oex', TType.STRUCT, 1)
self.oex.write(oprot)
oprot.writeFieldEnd()
if self.cex is not None:
oprot.writeFieldBegin('cex', TType.STRUCT, 2)
self.cex.write(oprot)
oprot.writeFieldEnd()
oprot.writeFieldStop()
oprot.writeStructEnd()
def validate(self):
return
def __hash__(self):
value = 17
value = (value * 31) ^ hash(self.success)
value = (value * 31) ^ hash(self.oex)
value = (value * 31) ^ hash(self.cex)
return value
def __repr__(self):
L = ['%s=%r' % (key, value)
for key, value in self.__dict__.iteritems()]
return '%s(%s)' % (self.__class__.__name__, ', '.join(L))
def __eq__(self, other):
return isinstance(other, self.__class__) and self.__dict__ == other.__dict__
def __ne__(self, other):
return not (self == other)
class create_instance_args(object):
"""
Attributes:
- provider_hash
- identity_hash
- options
"""
thrift_spec = (
None, # 0
(1, TType.STRING, 'provider_hash', None, None, ), # 1
(2, TType.STRING, 'identity_hash', None, None, ), # 2
(3, TType.MAP, 'options', (TType.STRING,None,TType.STRING,None), None, ), # 3
)
def __init__(self, provider_hash=None, identity_hash=None, options=None,):
self.provider_hash = provider_hash
self.identity_hash = identity_hash
self.options = options
def read(self, iprot):
if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None:
fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec))
return
iprot.readStructBegin()
while True:
(fname, ftype, fid) = iprot.readFieldBegin()
if ftype == TType.STOP:
break
if fid == 1:
if ftype == TType.STRING:
self.provider_hash = iprot.readString().decode('utf-8')
else:
iprot.skip(ftype)
elif fid == 2:
if ftype == TType.STRING:
self.identity_hash = iprot.readString().decode('utf-8')
else:
iprot.skip(ftype)
elif fid == 3:
if ftype == TType.MAP:
self.options = {}
(_ktype40, _vtype41, _size39 ) = iprot.readMapBegin()
for _i43 in xrange(_size39):
_key44 = iprot.readString().decode('utf-8')
_val45 = iprot.readString().decode('utf-8')
self.options[_key44] = _val45
iprot.readMapEnd()
else:
iprot.skip(ftype)
else:
iprot.skip(ftype)
iprot.readFieldEnd()
iprot.readStructEnd()
def write(self, oprot):
if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None:
oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec)))
return
oprot.writeStructBegin('create_instance_args')
if self.provider_hash is not None:
oprot.writeFieldBegin('provider_hash', TType.STRING, 1)
oprot.writeString(self.provider_hash.encode('utf-8'))
oprot.writeFieldEnd()
if self.identity_hash is not None:
oprot.writeFieldBegin('identity_hash', TType.STRING, 2)
oprot.writeString(self.identity_hash.encode('utf-8'))
oprot.writeFieldEnd()
if self.options is not None:
oprot.writeFieldBegin('options', TType.MAP, 3)
oprot.writeMapBegin(TType.STRING, TType.STRING, len(self.options))
for kiter46,viter47 in self.options.items():
oprot.writeString(kiter46.encode('utf-8'))
oprot.writeString(viter47.encode('utf-8'))
oprot.writeMapEnd()
oprot.writeFieldEnd()
oprot.writeFieldStop()
oprot.writeStructEnd()
def validate(self):
return
def __hash__(self):
value = 17
value = (value * 31) ^ hash(self.provider_hash)
value = (value * 31) ^ hash(self.identity_hash)
value = (value * 31) ^ hash(self.options)
return value
def __repr__(self):
L = ['%s=%r' % (key, value)
for key, value in self.__dict__.iteritems()]
return '%s(%s)' % (self.__class__.__name__, ', '.join(L))
def __eq__(self, other):
return isinstance(other, self.__class__) and self.__dict__ == other.__dict__
def __ne__(self, other):
return not (self == other)
class create_instance_result(object):
"""
Attributes:
- success
"""
thrift_spec = (
(0, TType.STRUCT, 'success', (Instance, Instance.thrift_spec), None, ), # 0
)
def __init__(self, success=None,):
self.success = success
def read(self, iprot):
if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None:
fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec))
return
iprot.readStructBegin()
while True:
(fname, ftype, fid) = iprot.readFieldBegin()
if ftype == TType.STOP:
break
if fid == 0:
if ftype == TType.STRUCT:
self.success = Instance()
self.success.read(iprot)
else:
iprot.skip(ftype)
else:
iprot.skip(ftype)
iprot.readFieldEnd()
iprot.readStructEnd()
def write(self, oprot):
if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None:
oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec)))
return
oprot.writeStructBegin('create_instance_result')
if self.success is not None:
oprot.writeFieldBegin('success', TType.STRUCT, 0)
self.success.write(oprot)
oprot.writeFieldEnd()
oprot.writeFieldStop()
oprot.writeStructEnd()
def validate(self):
return
def __hash__(self):
value = 17
value = (value * 31) ^ hash(self.success)
return value
def __repr__(self):
L = ['%s=%r' % (key, value)
for key, value in self.__dict__.iteritems()]
return '%s(%s)' % (self.__class__.__name__, ', '.join(L))
def __eq__(self, other):
return isinstance(other, self.__class__) and self.__dict__ == other.__dict__
def __ne__(self, other):
return not (self == other)
class deploy_to_instance_args(object):
"""
Attributes:
- provider_hash
- identity_hash
- options
"""
thrift_spec = (
None, # 0
(1, TType.STRING, 'provider_hash', None, None, ), # 1
(2, TType.STRING, 'identity_hash', None, None, ), # 2
(3, TType.MAP, 'options', (TType.STRING,None,TType.STRING,None), None, ), # 3
)
def __init__(self, provider_hash=None, identity_hash=None, options=None,):
self.provider_hash = provider_hash
self.identity_hash = identity_hash
self.options = options
def read(self, iprot):
if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None:
fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec))
return
iprot.readStructBegin()
while True:
(fname, ftype, fid) = iprot.readFieldBegin()
if ftype == TType.STOP:
break
if fid == 1:
if ftype == TType.STRING:
self.provider_hash = iprot.readString().decode('utf-8')
else:
iprot.skip(ftype)
elif fid == 2:
if ftype == TType.STRING:
self.identity_hash = iprot.readString().decode('utf-8')
else:
iprot.skip(ftype)
elif fid == 3:
if ftype == TType.MAP:
self.options = {}
(_ktype49, _vtype50, _size48 ) = iprot.readMapBegin()
for _i52 in xrange(_size48):
_key53 = iprot.readString().decode('utf-8')
_val54 = iprot.readString().decode('utf-8')
self.options[_key53] = _val54
iprot.readMapEnd()
else:
iprot.skip(ftype)
else:
iprot.skip(ftype)
iprot.readFieldEnd()
iprot.readStructEnd()
def write(self, oprot):
if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None:
oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec)))
return
oprot.writeStructBegin('deploy_to_instance_args')
if self.provider_hash is not None:
oprot.writeFieldBegin('provider_hash', TType.STRING, 1)
oprot.writeString(self.provider_hash.encode('utf-8'))
oprot.writeFieldEnd()
if self.identity_hash is not None:
oprot.writeFieldBegin('identity_hash', TType.STRING, 2)
oprot.writeString(self.identity_hash.encode('utf-8'))
oprot.writeFieldEnd()
if self.options is not None:
oprot.writeFieldBegin('options', TType.MAP, 3)
oprot.writeMapBegin(TType.STRING, TType.STRING, len(self.options))
for kiter55,viter56 in self.options.items():
oprot.writeString(kiter55.encode('utf-8'))
oprot.writeString(viter56.encode('utf-8'))
oprot.writeMapEnd()
oprot.writeFieldEnd()
oprot.writeFieldStop()
oprot.writeStructEnd()
def validate(self):
return
def __hash__(self):
value = 17
value = (value * 31) ^ hash(self.provider_hash)
value = (value * 31) ^ hash(self.identity_hash)
value = (value * 31) ^ hash(self.options)
return value
def __repr__(self):
L = ['%s=%r' % (key, value)
for key, value in self.__dict__.iteritems()]
return '%s(%s)' % (self.__class__.__name__, ', '.join(L))
def __eq__(self, other):
return isinstance(other, self.__class__) and self.__dict__ == other.__dict__
def __ne__(self, other):
return not (self == other)
class deploy_to_instance_result(object):
"""
Attributes:
- success
- oex
- cex
- dex
"""
thrift_spec = (
(0, TType.BOOL, 'success', None, None, ), # 0
(1, TType.STRUCT, 'oex', (OpenStackException, OpenStackException.thrift_spec), None, ), # 1
(2, TType.STRUCT, 'cex', (ConnectionException, ConnectionException.thrift_spec), None, ), # 2
(3, TType.STRUCT, 'dex', (DeployException, DeployException.thrift_spec), None, ), # 3
)
def __init__(self, success=None, oex=None, cex=None, dex=None,):
self.success = success
self.oex = oex
self.cex = cex
self.dex = dex
def read(self, iprot):
if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None:
fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec))
return
iprot.readStructBegin()
while True:
(fname, ftype, fid) = iprot.readFieldBegin()
if ftype == TType.STOP:
break
if fid == 0:
if ftype == TType.BOOL:
self.success = iprot.readBool()
else:
iprot.skip(ftype)
elif fid == 1:
if ftype == TType.STRUCT:
self.oex = OpenStackException()
self.oex.read(iprot)
else:
iprot.skip(ftype)
elif fid == 2:
if ftype == TType.STRUCT:
self.cex = ConnectionException()
self.cex.read(iprot)
else:
iprot.skip(ftype)
elif fid == 3:
if ftype == TType.STRUCT:
self.dex = DeployException()
self.dex.read(iprot)
else:
iprot.skip(ftype)
else:
iprot.skip(ftype)
iprot.readFieldEnd()
iprot.readStructEnd()
def write(self, oprot):
if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None:
oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec)))
return
oprot.writeStructBegin('deploy_to_instance_result')
if self.success is not None:
oprot.writeFieldBegin('success', TType.BOOL, 0)
oprot.writeBool(self.success)
oprot.writeFieldEnd()
if self.oex is not None:
oprot.writeFieldBegin('oex', TType.STRUCT, 1)
self.oex.write(oprot)
oprot.writeFieldEnd()
if self.cex is not None:
oprot.writeFieldBegin('cex', TType.STRUCT, 2)
self.cex.write(oprot)
oprot.writeFieldEnd()
if self.dex is not None:
oprot.writeFieldBegin('dex', TType.STRUCT, 3)
self.dex.write(oprot)
oprot.writeFieldEnd()
oprot.writeFieldStop()
oprot.writeStructEnd()
def validate(self):
return
def __hash__(self):
value = 17
value = (value * 31) ^ hash(self.success)
value = (value * 31) ^ hash(self.oex)
value = (value * 31) ^ hash(self.cex)
value = (value * 31) ^ hash(self.dex)
return value
def __repr__(self):
L = ['%s=%r' % (key, value)
for key, value in self.__dict__.iteritems()]
return '%s(%s)' % (self.__class__.__name__, ', '.join(L))
def __eq__(self, other):
return isinstance(other, self.__class__) and self.__dict__ == other.__dict__
def __ne__(self, other):
return not (self == other)
class destroy_instance_args(object):
"""
Attributes:
- provider_hash
- identity_hash
- instance_uuid
"""
thrift_spec = (
None, # 0
(1, TType.STRING, 'provider_hash', None, None, ), # 1
(2, TType.STRING, 'identity_hash', None, None, ), # 2
(3, TType.STRING, 'instance_uuid', None, None, ), # 3
)
def __init__(self, provider_hash=None, identity_hash=None, instance_uuid=None,):
self.provider_hash = provider_hash
self.identity_hash = identity_hash
self.instance_uuid = instance_uuid
def read(self, iprot):
if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None:
fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec))
return
iprot.readStructBegin()
while True:
(fname, ftype, fid) = iprot.readFieldBegin()
if ftype == TType.STOP:
break
if fid == 1:
if ftype == TType.STRING:
self.provider_hash = iprot.readString().decode('utf-8')
else:
iprot.skip(ftype)
elif fid == 2:
if ftype == TType.STRING:
self.identity_hash = iprot.readString().decode('utf-8')
else:
iprot.skip(ftype)
elif fid == 3:
if ftype == TType.STRING:
self.instance_uuid = iprot.readString().decode('utf-8')
else:
iprot.skip(ftype)
else:
iprot.skip(ftype)
iprot.readFieldEnd()
iprot.readStructEnd()
def write(self, oprot):
if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None:
oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec)))
return
oprot.writeStructBegin('destroy_instance_args')
if self.provider_hash is not None:
oprot.writeFieldBegin('provider_hash', TType.STRING, 1)
oprot.writeString(self.provider_hash.encode('utf-8'))
oprot.writeFieldEnd()
if self.identity_hash is not None:
oprot.writeFieldBegin('identity_hash', TType.STRING, 2)
oprot.writeString(self.identity_hash.encode('utf-8'))
oprot.writeFieldEnd()
if self.instance_uuid is not None:
oprot.writeFieldBegin('instance_uuid', TType.STRING, 3)
oprot.writeString(self.instance_uuid.encode('utf-8'))
oprot.writeFieldEnd()
oprot.writeFieldStop()
oprot.writeStructEnd()
def validate(self):
return
def __hash__(self):
value = 17
value = (value * 31) ^ hash(self.provider_hash)
value = (value * 31) ^ hash(self.identity_hash)
value = (value * 31) ^ hash(self.instance_uuid)
return value
def __repr__(self):
L = ['%s=%r' % (key, value)
for key, value in self.__dict__.iteritems()]
return '%s(%s)' % (self.__class__.__name__, ', '.join(L))
def __eq__(self, other):
return isinstance(other, self.__class__) and self.__dict__ == other.__dict__
def __ne__(self, other):
return not (self == other)
class destroy_instance_result(object):
"""
Attributes:
- oex
- cex
"""
thrift_spec = (
None, # 0
(1, TType.STRUCT, 'oex', (OpenStackException, OpenStackException.thrift_spec), None, ), # 1
(2, TType.STRUCT, 'cex', (ConnectionException, ConnectionException.thrift_spec), None, ), # 2
)
def __init__(self, oex=None, cex=None,):
self.oex = oex
self.cex = cex
def read(self, iprot):
if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None:
fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec))
return
iprot.readStructBegin()
while True:
(fname, ftype, fid) = iprot.readFieldBegin()
if ftype == TType.STOP:
break
if fid == 1:
if ftype == TType.STRUCT:
self.oex = OpenStackException()
self.oex.read(iprot)
else:
iprot.skip(ftype)
elif fid == 2:
if ftype == TType.STRUCT:
self.cex = ConnectionException()
self.cex.read(iprot)
else:
iprot.skip(ftype)
else:
iprot.skip(ftype)
iprot.readFieldEnd()
iprot.readStructEnd()
def write(self, oprot):
if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None:
oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec)))
return
oprot.writeStructBegin('destroy_instance_result')
if self.oex is not None:
oprot.writeFieldBegin('oex', TType.STRUCT, 1)
self.oex.write(oprot)
oprot.writeFieldEnd()
if self.cex is not None:
oprot.writeFieldBegin('cex', TType.STRUCT, 2)
self.cex.write(oprot)
oprot.writeFieldEnd()
oprot.writeFieldStop()
oprot.writeStructEnd()
def validate(self):
return
def __hash__(self):
value = 17
value = (value * 31) ^ hash(self.oex)
value = (value * 31) ^ hash(self.cex)
return value
def __repr__(self):
L = ['%s=%r' % (key, value)
for key, value in self.__dict__.iteritems()]
return '%s(%s)' % (self.__class__.__name__, ', '.join(L))
def __eq__(self, other):
return isinstance(other, self.__class__) and self.__dict__ == other.__dict__
def __ne__(self, other):
return not (self == other)
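# Minimal client-side usage sketch (hypothetical wiring; the transport and
# protocol-factory setup shown here is an assumption, not part of this
# generated module):
#   client = Client(transport, oprot_factory)
#   d = client.list_instances(provider_hash, identity_hash)
#   d.addCallback(on_instances)
#   d.addErrback(on_error)  # fires with OpenStackException or ConnectionException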
| 32.067384 | 188 | 0.676891 | 6,806 | 56,631 | 5.368939 | 0.031443 | 0.039736 | 0.024137 | 0.021182 | 0.922252 | 0.897458 | 0.870584 | 0.837662 | 0.821269 | 0.802934 | 0 | 0.006918 | 0.208737 | 56,631 | 1,765 | 189 | 32.085552 | 0.808547 | 0.004609 | 0 | 0.816075 | 1 | 0 | 0.032745 | 0.00416 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0.005069 | 0.006517 | null | null | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 9 |
5859069ed0a22d434ece2868666a89cccb13c33d | 6,543 | py | Python | tests/utils/test_password_manager.py | zEdS15B3GCwq/poetry | 2afe9840533aacfe561d3fdf65c6fb2e790d89b1 | [
"MIT"
] | 7,258 | 2018-02-28T16:23:08.000Z | 2019-12-11T18:27:58.000Z | tests/utils/test_password_manager.py | zEdS15B3GCwq/poetry | 2afe9840533aacfe561d3fdf65c6fb2e790d89b1 | [
"MIT"
] | 1,608 | 2018-02-28T15:31:35.000Z | 2019-12-11T20:00:05.000Z | tests/utils/test_password_manager.py | zEdS15B3GCwq/poetry | 2afe9840533aacfe561d3fdf65c6fb2e790d89b1 | [
"MIT"
] | 597 | 2018-03-07T15:07:46.000Z | 2019-12-11T16:36:22.000Z | from __future__ import annotations
import os
from typing import TYPE_CHECKING
import pytest
from poetry.utils.password_manager import PasswordManager
from poetry.utils.password_manager import PoetryKeyring
from poetry.utils.password_manager import PoetryKeyringError
if TYPE_CHECKING:
from pytest_mock import MockerFixture
from tests.conftest import Config
from tests.conftest import DummyBackend
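# The keyring fixtures used below (with_simple_keyring, with_fail_keyring, the
# chained/null variants) and dummy_keyring are presumably provided by
# tests/conftest.py alongside Config and DummyBackend.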
def test_set_http_password(
config: Config, with_simple_keyring: None, dummy_keyring: DummyBackend
):
manager = PasswordManager(config)
assert manager.keyring.is_available()
manager.set_http_password("foo", "bar", "baz")
assert dummy_keyring.get_password("poetry-repository-foo", "bar") == "baz"
auth = config.get("http-basic.foo")
assert auth["username"] == "bar"
assert "password" not in auth
def test_get_http_auth(
config: Config, with_simple_keyring: None, dummy_keyring: DummyBackend
):
dummy_keyring.set_password("poetry-repository-foo", "bar", "baz")
config.auth_config_source.add_property("http-basic.foo", {"username": "bar"})
manager = PasswordManager(config)
assert manager.keyring.is_available()
auth = manager.get_http_auth("foo")
assert auth["username"] == "bar"
assert auth["password"] == "baz"
def test_delete_http_password(
config: Config, with_simple_keyring: None, dummy_keyring: DummyBackend
):
dummy_keyring.set_password("poetry-repository-foo", "bar", "baz")
config.auth_config_source.add_property("http-basic.foo", {"username": "bar"})
manager = PasswordManager(config)
assert manager.keyring.is_available()
manager.delete_http_password("foo")
assert dummy_keyring.get_password("poetry-repository-foo", "bar") is None
assert config.get("http-basic.foo") is None
def test_set_pypi_token(
config: Config, with_simple_keyring: None, dummy_keyring: DummyBackend
):
manager = PasswordManager(config)
assert manager.keyring.is_available()
manager.set_pypi_token("foo", "baz")
assert config.get("pypi-token.foo") is None
assert dummy_keyring.get_password("poetry-repository-foo", "__token__") == "baz"
def test_get_pypi_token(
config: Config, with_simple_keyring: None, dummy_keyring: DummyBackend
):
dummy_keyring.set_password("poetry-repository-foo", "__token__", "baz")
manager = PasswordManager(config)
assert manager.keyring.is_available()
assert manager.get_pypi_token("foo") == "baz"
def test_delete_pypi_token(
config: Config, with_simple_keyring: None, dummy_keyring: DummyBackend
):
dummy_keyring.set_password("poetry-repository-foo", "__token__", "baz")
manager = PasswordManager(config)
assert manager.keyring.is_available()
manager.delete_pypi_token("foo")
assert dummy_keyring.get_password("poetry-repository-foo", "__token__") is None
def test_set_http_password_with_unavailable_backend(
config: Config, with_fail_keyring: None
):
manager = PasswordManager(config)
assert not manager.keyring.is_available()
manager.set_http_password("foo", "bar", "baz")
auth = config.get("http-basic.foo")
assert auth["username"] == "bar"
assert auth["password"] == "baz"
def test_get_http_auth_with_unavailable_backend(
config: Config, with_fail_keyring: None
):
config.auth_config_source.add_property(
"http-basic.foo", {"username": "bar", "password": "baz"}
)
manager = PasswordManager(config)
assert not manager.keyring.is_available()
auth = manager.get_http_auth("foo")
assert auth["username"] == "bar"
assert auth["password"] == "baz"
def test_delete_http_password_with_unavailable_backend(
config: Config, with_fail_keyring: None
):
config.auth_config_source.add_property(
"http-basic.foo", {"username": "bar", "password": "baz"}
)
manager = PasswordManager(config)
assert not manager.keyring.is_available()
manager.delete_http_password("foo")
assert config.get("http-basic.foo") is None
def test_set_pypi_token_with_unavailable_backend(
config: Config, with_fail_keyring: None
):
manager = PasswordManager(config)
assert not manager.keyring.is_available()
manager.set_pypi_token("foo", "baz")
assert config.get("pypi-token.foo") == "baz"
def test_get_pypi_token_with_unavailable_backend(
config: Config, with_fail_keyring: None
):
config.auth_config_source.add_property("pypi-token.foo", "baz")
manager = PasswordManager(config)
assert not manager.keyring.is_available()
assert manager.get_pypi_token("foo") == "baz"
def test_delete_pypi_token_with_unavailable_backend(
config: Config, with_fail_keyring: None
):
config.auth_config_source.add_property("pypi-token.foo", "baz")
manager = PasswordManager(config)
assert not manager.keyring.is_available()
manager.delete_pypi_token("foo")
assert config.get("pypi-token.foo") is None
def test_keyring_raises_errors_on_keyring_errors(
mocker: MockerFixture, with_fail_keyring: None
):
mocker.patch("poetry.utils.password_manager.PoetryKeyring._check")
key_ring = PoetryKeyring("poetry")
with pytest.raises(PoetryKeyringError):
key_ring.set_password("foo", "bar", "baz")
with pytest.raises(PoetryKeyringError):
key_ring.get_password("foo", "bar")
with pytest.raises(PoetryKeyringError):
key_ring.delete_password("foo", "bar")
def test_keyring_with_chainer_backend_and_fail_keyring_should_be_unavailable(
with_chained_fail_keyring: None,
):
key_ring = PoetryKeyring("poetry")
assert not key_ring.is_available()
def test_keyring_with_chainer_backend_and_null_keyring_should_be_unavailable(
with_chained_null_keyring: None,
):
key_ring = PoetryKeyring("poetry")
assert not key_ring.is_available()
def test_null_keyring_should_be_unavailable(
with_null_keyring: None,
):
key_ring = PoetryKeyring("poetry")
assert not key_ring.is_available()
def test_fail_keyring_should_be_unavailable(
with_fail_keyring: None,
):
key_ring = PoetryKeyring("poetry")
assert not key_ring.is_available()
def test_get_http_auth_from_environment_variables(
environ: None, config: Config, with_simple_keyring: None
):
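# Credentials can also be supplied via POETRY_HTTP_BASIC_<REPO>_USERNAME and
# POETRY_HTTP_BASIC_<REPO>_PASSWORD environment variables, as exercised here.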
os.environ["POETRY_HTTP_BASIC_FOO_USERNAME"] = "bar"
os.environ["POETRY_HTTP_BASIC_FOO_PASSWORD"] = "baz"
manager = PasswordManager(config)
auth = manager.get_http_auth("foo")
assert auth["username"] == "bar"
assert auth["password"] == "baz"
| 27.961538 | 84 | 0.736971 | 826 | 6,543 | 5.514528 | 0.085956 | 0.027662 | 0.045664 | 0.089572 | 0.890231 | 0.863227 | 0.75697 | 0.740944 | 0.733699 | 0.691987 | 0 | 0 | 0.150543 | 6,543 | 233 | 85 | 28.081545 | 0.819539 | 0 | 0 | 0.685897 | 0 | 0 | 0.126547 | 0.042488 | 0 | 0 | 0 | 0 | 0.237179 | 1 | 0.115385 | false | 0.282051 | 0.064103 | 0 | 0.179487 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 7 |
5891f702c6d6f2c2b62b43ef808f37205f154822 | 3,262 | py | Python | Python/cubesat2017/soft/desktop/app/lib/widgets.py | Misha91908/Portfolio | c10b06462ec45f039778c77aa6c84e871cac34f6 | [
"MIT"
] | null | null | null | Python/cubesat2017/soft/desktop/app/lib/widgets.py | Misha91908/Portfolio | c10b06462ec45f039778c77aa6c84e871cac34f6 | [
"MIT"
] | null | null | null | Python/cubesat2017/soft/desktop/app/lib/widgets.py | Misha91908/Portfolio | c10b06462ec45f039778c77aa6c84e871cac34f6 | [
"MIT"
] | null | null | null | from lib.base import BaseContentWidget
from PyQt5 import QtCore, QtGui, QtWidgets
class TelemetryContentWidget(BaseContentWidget):
disconnection_signal = QtCore.pyqtSignal()
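# Emitted from disconnection_sig when the producer/consumer reports a lost
# device; routing the event through a Qt signal keeps the GUI update safe even
# if the callback fires on a worker thread.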
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self.disconnection_case_notification = QtWidgets.QWidget()
self.init_disconnection_notification()
self.producer.disconnection = self.disconnection_sig
self.consumer.disconnection = self.disconnection_sig
self.producer.bug_tracker_signal = self.bug_tracker.update_bug_tracker_signal
self.disconnection_signal.connect(self.disconnection_case)
def init_disconnection_notification(self):
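# Builds a fixed-size warning window with a centred label and moves the window
# to the centre of the available desktop area.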
self.disconnection_case_notification.setWindowTitle('Lost connection!')
self.disconnection_case_notification.setFixedSize(500, 150)
label = QtWidgets.QLabel('Telemetry port device is not found! \n '
'Please, check a wire connection or plug in your device.',
self.disconnection_case_notification)
label.setAlignment(QtCore.Qt.AlignCenter)
label.move(65, 60)
frame = self.disconnection_case_notification.frameGeometry()
mid = QtWidgets.QDesktopWidget().availableGeometry().center()
frame.moveCenter(mid)
self.disconnection_case_notification.move(frame.topLeft())
def disconnection_sig(self):
self.disconnection_signal.emit()
def disconnection_case(self):
self.disconnection_case_notification.show()
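# HC12TelemetryContentWidget mirrors TelemetryContentWidget; only the text of
# the lost-connection notification differs (HC12 radio port vs. telemetry port).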
class HC12TelemetryContentWidget(BaseContentWidget):
disconnection_signal = QtCore.pyqtSignal()
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self.disconnection_case_notification = QtWidgets.QWidget()
self.init_disconnection_notification()
self.producer.disconnection = self.disconnection_sig
self.consumer.disconnection = self.disconnection_sig
self.producer.bug_tracker_signal = self.bug_tracker.update_bug_tracker_signal
self.disconnection_signal.connect(self.disconnection_case)
def init_disconnection_notification(self):
self.disconnection_case_notification.setWindowTitle('Lost connection!')
self.disconnection_case_notification.setFixedSize(500, 150)
label = QtWidgets.QLabel('HC12 port device is not found! \n'
' Please, check a wire connection or plug in your device.',
self.disconnection_case_notification)
label.setAlignment(QtCore.Qt.AlignCenter)
label.move(65, 60)
frame = self.disconnection_case_notification.frameGeometry()
mid = QtWidgets.QDesktopWidget().availableGeometry().center()
frame.moveCenter(mid)
self.disconnection_case_notification.move(frame.topLeft())
def disconnection_sig(self):
self.disconnection_signal.emit()
def disconnection_case(self):
self.disconnection_case_notification.show()
class APRSContentWidget(BaseContentWidget):
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self.producer.bug_tracker_signal = self.bug_tracker.update_bug_tracker_signal
| 41.291139 | 92 | 0.711527 | 329 | 3,262 | 6.768997 | 0.218845 | 0.183206 | 0.150876 | 0.207454 | 0.924113 | 0.924113 | 0.924113 | 0.924113 | 0.924113 | 0.924113 | 0 | 0.009604 | 0.202023 | 3,262 | 78 | 93 | 41.820513 | 0.845947 | 0 | 0 | 0.87931 | 0 | 0 | 0.06591 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.155172 | false | 0 | 0.034483 | 0 | 0.275862 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
546a1a606889778ee04868d67a2886602d8c137a | 18,214 | py | Python | tests/meli/morse/app/api/test_translate_api.py | JILP/morse | b2a56063b74911430ad82d1c20eb1e4fb026dba5 | [
"CNRI-Python",
"Linux-OpenIB"
] | null | null | null | tests/meli/morse/app/api/test_translate_api.py | JILP/morse | b2a56063b74911430ad82d1c20eb1e4fb026dba5 | [
"CNRI-Python",
"Linux-OpenIB"
] | null | null | null | tests/meli/morse/app/api/test_translate_api.py | JILP/morse | b2a56063b74911430ad82d1c20eb1e4fb026dba5 | [
"CNRI-Python",
"Linux-OpenIB"
] | null | null | null | import json
class TestTranslate2Text:
endpoint = '/translate/v1/2text'
# Happy path
def test_morse_source(self, test_client, morse2text):
content = morse2text[0]
translated_content = morse2text[1]
req = {
'msg': {
'src': 'morse',
'content': content,
'format': {
'inter_word': ' / '
}
}
}
res = test_client.post(self.endpoint,
data=json.dumps(req),
content_type='application/json')
assert res.status_code == 200 # SUCCESS(200)
assert res.content_type == 'application/json'
assert 'msg' in res.get_json()
assert all(key in res.get_json()['msg'] for key in ['src', 'content'])
assert res.get_json()['msg']['src'] == 'text'
assert res.get_json()['msg']['content'] == translated_content
def test_bits_source(self, test_client, bits2text):
content = bits2text[0]
translated_content = bits2text[1]
req = {
'msg': {
'src': 'bits',
'content': content,
}
}
res = test_client.post(self.endpoint,
data=json.dumps(req),
content_type='application/json')
assert res.status_code == 200 # SUCCESS(200)
assert res.content_type == 'application/json'
assert 'msg' in res.get_json()
assert all(key in res.get_json()['msg'] for key in ['src', 'content'])
assert res.get_json()['msg']['src'] == 'text'
assert res.get_json()['msg']['content'] == translated_content
# Invalid data
def test_invalid_morse(self, test_client, invalid_morse):
req = {
'msg': {
'src': 'morse',
'content': invalid_morse,
'format': {
'inter_word': ' / '
}
}
}
res = test_client.post(self.endpoint,
data=json.dumps(req),
content_type='application/json')
assert res.status_code == 400 # BAD REQUEST(400)
assert res.get_json()['error'] == 'bad request'
assert 'Invalid morse code' in res.get_json()['message']
def test_invalid_bits(self, test_client, invalid_bits):
req = {
'msg': {
'src': 'bits',
'content': invalid_bits,
}
}
res = test_client.post(self.endpoint,
data=json.dumps(req),
content_type='application/json')
assert res.status_code == 400 # BAD REQUEST(400)
assert res.get_json()['error'] == 'bad request'
assert 'Invalid bit' in res.get_json()['message']
# Invalid request
def test_invalid_msg_src(self, test_client):
req = {
'msg': {
'src': 'text',
'content': '.-.-.-',
'format': {
'inter_word': ' / '
}
}
}
res = test_client.post(self.endpoint,
data=json.dumps(req),
content_type='application/json')
assert res.status_code == 400 # BAD REQUEST(400)
assert res.get_json()['error'] == 'bad request'
assert res.get_json()['message'] == 'Message source not valid'
def test_missing_msg_src(self, test_client):
req = {
'msg': {
'content': '.-.-.-',
'format': {
'inter_word': ' / '
}
}
}
res = test_client.post(self.endpoint,
data=json.dumps(req),
content_type='application/json')
assert res.status_code == 400 # BAD REQUEST(400)
assert res.get_json()['error'] == 'bad request'
assert res.get_json()['message'] == 'Message not valid'
def test_missing_msg_content(self, test_client):
req = {
'msg': {
'src': 'morse',
'format': {
'inter_word': ' / '
}
}
}
res = test_client.post(self.endpoint,
data=json.dumps(req),
content_type='application/json')
assert res.status_code == 400 # BAD REQUEST(400)
assert res.get_json()['error'] == 'bad request'
assert res.get_json()['message'] == 'Message not valid'
def test_missing_msg(self, test_client):
req = {}
res = test_client.post(self.endpoint,
data=json.dumps(req),
content_type='application/json')
assert res.status_code == 400 # BAD REQUEST(400)
assert res.get_json()['error'] == 'bad request'
assert res.get_json()['message'] == 'Missing msg attribute'
def test_invalid_content_type(self, test_client):
req = {
'msg': {
'src': 'morse',
'content': '.-.-.-',
'format': {
'inter_word': ' / '
}
}
}
res = test_client.post(self.endpoint,
data=json.dumps(req))
assert res.status_code == 400 # BAD REQUEST(400)
assert res.get_json()['error'] == 'bad request'
assert 'Invalid content type' in res.get_json()['message']
def test_big_content(self, test_client):
req = {
'msg': {
'src': 'morse',
'content': '.-' * 1000,
'format': {
'inter_word': ' / '
}
}
}
res = test_client.post(self.endpoint,
data=json.dumps(req),
content_type='application/json')
assert res.status_code == 400 # BAD REQUEST(400)
assert res.get_json()['error'] == 'bad request'
assert 'Character limit exceeded' in res.get_json()['message']
class TestTranslate2Morse:
endpoint = '/translate/v1/2morse'
# Happy path
def test_text_source(self, test_client, text2morse):
content = text2morse[0]
translated_content = text2morse[1]
req = {
'msg': {
'src': 'text',
'content': content,
'format': {
'inter_word': ' / '
}
}
}
res = test_client.post(self.endpoint,
data=json.dumps(req),
content_type='application/json')
assert res.status_code == 200 # SUCCESS(200)
assert res.content_type == 'application/json'
assert 'msg' in res.get_json()
assert all(key in res.get_json()['msg']
for key in ['src', 'content', 'format'])
assert res.get_json()['msg']['src'] == 'morse'
assert res.get_json()['msg']['content'] == translated_content
def test_bits_source(self, test_client, bits2morse):
content = bits2morse[0]
translated_content = bits2morse[1]
req = {
'msg': {
'src': 'bits',
'content': content,
'format': {
'inter_word': ' / '
}
}
}
res = test_client.post(self.endpoint,
data=json.dumps(req),
content_type='application/json')
assert res.status_code == 200 # SUCCESS(200)
assert res.content_type == 'application/json'
assert 'msg' in res.get_json()
assert all(key in res.get_json()['msg']
for key in ['src', 'content', 'format'])
assert res.get_json()['msg']['src'] == 'morse'
assert res.get_json()['msg']['content'] == translated_content
# Invalid data
def test_invalid_text(self, test_client, invalid_text):
req = {
'msg': {
'src': 'text',
'content': invalid_text,
}
}
res = test_client.post(self.endpoint,
data=json.dumps(req),
content_type='application/json')
assert res.status_code == 400 # BAD REQUEST(400)
assert res.get_json()['error'] == 'bad request'
assert 'Invalid character' in res.get_json()['message']
def test_invalid_bits(self, test_client, invalid_bits):
req = {
'msg': {
'src': 'bits',
'content': invalid_bits,
}
}
res = test_client.post(self.endpoint,
data=json.dumps(req),
content_type='application/json')
assert res.status_code == 400 # BAD REQUEST(400)
assert res.get_json()['error'] == 'bad request'
assert 'Invalid bit' in res.get_json()['message']
# Invalid request
def test_invalid_msg_src(self, test_client):
req = {
'msg': {
'src': 'morse',
'content': '.-.-.-',
'format': {
'inter_word': ' / '
}
}
}
res = test_client.post(self.endpoint,
data=json.dumps(req),
content_type='application/json')
assert res.status_code == 400 # BAD REQUEST(400)
assert res.get_json()['error'] == 'bad request'
assert res.get_json()['message'] == 'Message source not valid'
def test_missing_msg_src(self, test_client):
req = {
'msg': {
'content': '.-.-.-',
'format': {
'inter_word': ' / '
}
}
}
res = test_client.post(self.endpoint,
data=json.dumps(req),
content_type='application/json')
assert res.status_code == 400 # BAD REQUEST(400)
assert res.get_json()['error'] == 'bad request'
assert res.get_json()['message'] == 'Message not valid'
def test_missing_msg_content(self, test_client):
req = {
'msg': {
'src': 'text',
}
}
res = test_client.post(self.endpoint,
data=json.dumps(req),
content_type='application/json')
assert res.status_code == 400 # BAD REQUEST(400)
assert res.get_json()['error'] == 'bad request'
assert res.get_json()['message'] == 'Message not valid'
def test_missing_msg(self, test_client):
req = {}
res = test_client.post(self.endpoint,
data=json.dumps(req),
content_type='application/json')
assert res.status_code == 400 # BAD REQUEST(400)
assert res.get_json()['error'] == 'bad request'
assert res.get_json()['message'] == 'Missing msg attribute'
def test_invalid_content_type(self, test_client):
req = {
'msg': {
'src': 'text',
'content': '.-.-.-',
}
}
res = test_client.post(self.endpoint,
data=json.dumps(req))
assert res.status_code == 400 # BAD REQUEST(400)
assert res.get_json()['error'] == 'bad request'
assert 'Invalid content type' in res.get_json()['message']
def test_big_content(self, test_client):
req = {
'msg': {
'src': 'text',
'content': 'A' * 1001,
}
}
res = test_client.post(self.endpoint,
data=json.dumps(req),
content_type='application/json')
assert res.status_code == 400 # BAD REQUEST(400)
assert res.get_json()['error'] == 'bad request'
assert 'Character limit exceeded' in res.get_json()['message']
class TestTranslate2Bits:
endpoint = '/translate/v1/2bits'
# Happy path
def test_text_source(self, test_client, text2bits):
content = text2bits[0]
translated_content = text2bits[1]
req = {
'msg': {
'src': 'text',
'content': content,
}
}
res = test_client.post(self.endpoint,
data=json.dumps(req),
content_type='application/json')
assert res.status_code == 200 # SUCCESS(200)
assert res.content_type == 'application/json'
assert 'msg' in res.get_json()
assert all(key in res.get_json()['msg'] for key in ['src', 'content'])
assert res.get_json()['msg']['src'] == 'bits'
assert res.get_json()['msg']['content'] == translated_content
def test_morse_source(self, test_client, morse2bits):
content = morse2bits[0]
translated_content = morse2bits[1]
req = {
'msg': {
'src': 'morse',
'content': content,
'format': {
'inter_word': ' / '
}
}
}
res = test_client.post(self.endpoint,
data=json.dumps(req),
content_type='application/json')
assert res.status_code == 200 # SUCCESS(200)
assert res.content_type == 'application/json'
assert 'msg' in res.get_json()
assert all(key in res.get_json()['msg'] for key in ['src', 'content'])
assert res.get_json()['msg']['src'] == 'bits'
assert res.get_json()['msg']['content'] == translated_content
# Invalid data
def test_invalid_text(self, test_client, invalid_text):
req = {
'msg': {
'src': 'text',
'content': invalid_text,
}
}
res = test_client.post(self.endpoint,
data=json.dumps(req),
content_type='application/json')
assert res.status_code == 400 # BAD REQUEST(400)
assert res.get_json()['error'] == 'bad request'
assert 'Invalid character' in res.get_json()['message']
def test_invalid_morse(self, test_client, invalid_morse):
req = {
'msg': {
'src': 'morse',
'content': invalid_morse,
}
}
res = test_client.post(self.endpoint,
data=json.dumps(req),
content_type='application/json')
assert res.status_code == 400 # BAD REQUEST(400)
assert res.get_json()['error'] == 'bad request'
assert 'Invalid morse' in res.get_json()['message']
# Invalid request
def test_invalid_msg_src(self, test_client):
req = {
'msg': {
'src': 'bits',
'content': '101010',
}
}
res = test_client.post(self.endpoint,
data=json.dumps(req),
content_type='application/json')
assert res.status_code == 400 # BAD REQUEST(400)
assert res.get_json()['error'] == 'bad request'
assert res.get_json()['message'] == 'Message source not valid'
def test_missing_msg_src(self, test_client):
req = {
'msg': {
'content': '.-.-.-',
'format': {
'inter_word': ' / '
}
}
}
res = test_client.post(self.endpoint,
data=json.dumps(req),
content_type='application/json')
assert res.status_code == 400 # BAD REQUEST(400)
assert res.get_json()['error'] == 'bad request'
assert res.get_json()['message'] == 'Message not valid'
def test_missing_msg_content(self, test_client):
req = {
'msg': {
'src': 'text',
}
}
res = test_client.post(self.endpoint,
data=json.dumps(req),
content_type='application/json')
assert res.status_code == 400 # BAD REQUEST(400)
assert res.get_json()['error'] == 'bad request'
assert res.get_json()['message'] == 'Message not valid'
def test_missing_msg(self, test_client):
req = { }
res = test_client.post(self.endpoint,
data=json.dumps(req),
content_type='application/json')
assert res.status_code == 400 # BAD REQUEST(400)
assert res.get_json()['error'] == 'bad request'
assert res.get_json()['message'] == 'Missing msg attribute'
def test_invalid_content_type(self, test_client):
req = {
'msg': {
'src': 'text',
'content': '.-.-.-',
}
}
res = test_client.post(self.endpoint,
data=json.dumps(req))
assert res.status_code == 400 # BAD REQUEST(400)
assert res.get_json()['error'] == 'bad request'
assert 'Invalid content type' in res.get_json()['message']
def test_big_content(self, test_client):
req = {
'msg': {
'src': 'text',
'content': 'A' * 1001,
}
}
res = test_client.post(self.endpoint,
data=json.dumps(req),
content_type='application/json')
assert res.status_code == 400 # BAD REQUEST(400)
assert res.get_json()['error'] == 'bad request'
assert 'Character limit exceeded' in res.get_json()['message']
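# ---------------------------------------------------------------------------
# A minimal sketch (not part of the original module) of a conftest.py that
# could provide the `test_client` fixture the tests above assume. The Flask
# application-factory name `create_app` is a hypothetical placeholder, not
# confirmed by this file.
# ---------------------------------------------------------------------------
import pytest

@pytest.fixture
def test_client():
    from app import create_app  # hypothetical application factory
    app = create_app()
    app.config['TESTING'] = True
    with app.test_client() as client:
        yield client  # each test gets a fresh Flask test client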
| 33.855019 | 78 | 0.4748 | 1,766 | 18,214 | 4.724802 | 0.041336 | 0.090604 | 0.08629 | 0.092042 | 0.941275 | 0.939957 | 0.939957 | 0.926414 | 0.924497 | 0.914909 | 0 | 0.021573 | 0.396838 | 18,214 | 537 | 79 | 33.918063 | 0.737939 | 0.033216 | 0 | 0.783784 | 0 | 0 | 0.14537 | 0 | 0 | 0 | 0 | 0 | 0.243243 | 1 | 0.067568 | false | 0 | 0.002252 | 0 | 0.083333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
54a0d75dec079d0735fbf756bebf0cde820ea695 | 68,668 | py | Python | examples/tutorial_medical_expenditure.py | andrerubeis/AIF360 | c0ce6f2e3eff9cab0ccce0bc0a05b681a5df7e44 | [
"Apache-2.0"
] | null | null | null | examples/tutorial_medical_expenditure.py | andrerubeis/AIF360 | c0ce6f2e3eff9cab0ccce0bc0a05b681a5df7e44 | [
"Apache-2.0"
] | null | null | null | examples/tutorial_medical_expenditure.py | andrerubeis/AIF360 | c0ce6f2e3eff9cab0ccce0bc0a05b681a5df7e44 | [
"Apache-2.0"
] | null | null | null | # %% md
# Medical Expenditure Tutorial
# %% md
## This tutorial demonstrates classification model learning with bias mitigation as a part of a Care Management use case using Medical Expenditure data.
# %% md
# The notebook demonstrates how the AIF 360 toolkit can be used to detect and reduce bias when learning
# classifiers using a variety of fairness metrics and algorithms. It also demonstrates how explanations
# can be generated for predictions made by models learnt with the toolkit using LIME.
#
# Classifiers are built using Logistic Regression as well as Random Forests.
#
# Bias detection is demonstrated using several metrics, including disparate impact, average odds difference,
# statistical parity difference, equal opportunity difference, and Theil index.
#
# Bias alleviation is explored via a variety of methods, including reweighing (pre-processing algorithm),
# prejudice remover (in-processing algorithm), and disparate impact remover (pre-processing technique).
#
# Data from the [Medical Expenditure Panel Survey](https://meps.ahrq.gov/mepsweb/) is used in this tutorial.
# See [Section 2](#2.-Data-used) below for more details.
#
# %% md
#
# ## Table of Contents
#
# %% md
#
# To return to the table of contents, click on the number at any major section heading.
#
# [1. Use case](#1.-Use-case)
#
# [2. Data used](#2.-Data-used)
#
# [3. Training models without debiasing](#3.-Training-models-on-original-2015-Panel-19-data)
#
# [4. Reweighing (pre-processing bias mitigation)](#4.-Bias-mitigation-using-pre-processing-technique---Reweighing)
#
# [5. Prejudice Remover (in-processing bias mitigation)](#5.-Bias-mitigation-using-in-processing-technique---Prejudice-Remover-(PR))
#
# [6. Summary of results](#6.-Summary-of-Model-Learning-Results)
#
# [7. Deploying model](#7.-Deploying-model)
#
# [8. Generating explanations for model predictions using LIME](#8.-Generating-explanations-for-model-predictions-using-LIME)
#
# [9. Re-deploying Model](#9.-Re-deploying-Model)
#
# [10. Overall Summary](#10.-SUMMARY)
#
# %% md
#
# ## [1.](#Table-of-Contents) Use case
#
# %% md
#
# In order to demonstrate how AIF 360 can be used to detect and mitigate bias in classifier models, we adopt the following use case:
#
# 1. a data scientist develops a 'fair' healthcare utilization scoring model with respect to defined protected classes. Fairness may be dictated by legal or government regulations, such as a requirement that additional care decisions not be predicated on factors such as the race of the patient.
#
# 2. a developer takes the model AND the performance characteristics / specs of the model (e.g. accuracy, fairness tests, etc.; basically the model factsheet) and deploys the model in an enterprise app that prioritizes cases for care management.
#
# 3. the app is put into production and starts scoring people and making recommendations.
#
# 4. explanations are generated for each recommendation.
#
# 5. both recommendations and associated explanations are given to nurses as a part of the care management process. The nurses can evaluate the recommendations for quality and correctness and provide feedback.
#
# 6. nurse feedback, as well as analysis of usage data with respect to the specs of the model w.r.t. accuracy and fairness, is communicated to the AI Ops specialist and LOB user periodically.
#
# 7. when significant drift in model specs relative to the model factsheet is observed, the model is sent back for retraining.
#
# %% md
#
# ## [2.](#Table-of-Contents) Data used
#
# %% md
#
# The specific data used is the [2015 Full Year Consolidated Data File](https://meps.ahrq.gov/mepsweb/data_stats/download_data_files_detail.jsp?cboPufNumber=HC-181)
# as well as the [2016 Full Year Consolidated Data File](https://meps.ahrq.gov/mepsweb/data_stats/download_data_files_detail.jsp?cboPufNumber=HC-192).
#
# %% md
#
# The 2015 file contains data from rounds 3, 4, 5 of panel 19 (2014) and rounds 1, 2, 3 of panel 20 (2015).
# The 2016 file contains data from rounds 3, 4, 5 of panel 20 (2015) and rounds 1, 2, 3 of panel 21 (2016).
#
# For this demonstration, three datasets were constructed: one from panel 19, round 5 (used for learning
# models); one from panel 20, round 3 (used for deployment/testing of model - steps); the other from
# panel 21, round 3 (used for re-training and deployment/testing of updated model).
#
# # %% md
#
# For
# each
# dataset, the
# sensitive
# attribute is 'RACE'
# constructed as follows: 'Whites'(privileged
#
#
# class ) defined by the features RACEV2X = 1 (White) and HISPANX = 2 (non Hispanic); 'Non-Whites' that included everyone else.
#
# Along with race as the sensitive feature, other features used for modeling include demographics (such as age, gender, active duty status), physical / mental health assessments, diagnosis codes (such as history of diagnosis of cancer, or diabetes), and limitations (such as cognitive or hearing or vision limitation).
#
# To measure utilization, a composite feature, 'UTILIZATION', was created to measure the total number of trips requiring some sort of medical care by summing up the following features: OBTOTV15(
# 16), the
#
#
# number
# of
# office
# based
# visits;
# OPTOTV15(16), the
# number
# of
# outpatient
# visits;
# ERTOT15(16), the
# number
# of
# ER
# visits;
# IPNGTD15(16), the
# number
# of
# inpatient
# nights, and + HHTOTD16, the
# number
# of
# home
# health
# visits.
#
# The
# model
# classification
# task is to
# predict
# whether
# a
# person
# would
# have
# 'high'
# utilization(defined as UTILIZATION >= 10, roughly
# the
# average
# utilization
# for the considered population).High utilization respondents constituted around 17 % of each dataset.
#
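# %% md
#
# As a minimal sketch (not part of the original tutorial), the composite feature described above could be
# derived from a raw full-year DataFrame roughly as follows; the helper name and the exact raw column names
# are assumptions for illustration only.
#
# %%
import pandas as pd

def add_utilization(df: pd.DataFrame, year_suffix: str = '15') -> pd.DataFrame:
    """Sum the per-setting trip counts into UTILIZATION and binarize at 10 visits."""
    trip_cols = ['OBTOTV' + year_suffix, 'OPTOTV' + year_suffix,
                 'ERTOT' + year_suffix, 'IPNGTD' + year_suffix,
                 'HHTOTD' + year_suffix]
    out = df.copy()
    out['UTILIZATION'] = out[trip_cols].sum(axis=1)                   # total care trips
    out['HIGH_UTILIZATION'] = (out['UTILIZATION'] >= 10).astype(int)  # 'high' label
    return out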
# %% md
#
# To simulate the scenario, each dataset is split into 3 parts: a train, a validation, and a test/deployment
# part.
#
# We assume that the model is initially built and tuned using the 2015 Panel 19 train/test data (use case
# steps 1-2). It is then put into practice and used to score people to identify potential candidates for
# care management (use case steps 3-5). Initial deployment is simulated on the 2015 Panel 20 deployment
# data. To show change in performance and/or fairness over time (use case steps 6-7), the 2016 Panel 21
# deployment data is used. Finally, if drift is observed, the 2015 train/validation data is used to learn
# a new model and evaluated again on the 2016 deployment data.
#
# %% md
#
# ## [3.](#Table-of-Contents) Training models on original 2015 Panel 19 data
#
# %% md
#
# First, load all necessary packages.
#
# %%
import sys
sys.path.insert(0, '../')
import matplotlib.pyplot as plt
import numpy as np
from IPython.display import Markdown, display
# Datasets
from aif360.datasets import MEPSDataset19
from aif360.datasets import MEPSDataset20
from aif360.datasets import MEPSDataset21
# Fairness metrics
from aif360.metrics import BinaryLabelDatasetMetric
from aif360.metrics import ClassificationMetric
# Explainers
from aif360.explainers import MetricTextExplainer
# Scalers
from sklearn.preprocessing import StandardScaler
# Classifiers
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
# Bias mitigation techniques
from aif360.algorithms.preprocessing import Reweighing
from aif360.algorithms.inprocessing import PrejudiceRemover
# LIME
from aif360.datasets.lime_encoder import LimeEncoder
import lime
from lime.lime_tabular import LimeTabularExplainer
np.random.seed(1)
# %% md
### 3.1. Load data & create splits for learning/validating/testing model
# %% md
# Get the dataset and split into train (50%), validate (30%), and test (20%)
# %%
(dataset_orig_panel19_train,
 dataset_orig_panel19_val,
 dataset_orig_panel19_test) = MEPSDataset19().split([0.5, 0.8], shuffle=True)

sens_ind = 0
sens_attr = dataset_orig_panel19_train.protected_attribute_names[sens_ind]

unprivileged_groups = [{sens_attr: v} for v in
                       dataset_orig_panel19_train.unprivileged_protected_attributes[sens_ind]]
privileged_groups = [{sens_attr: v} for v in
                     dataset_orig_panel19_train.privileged_protected_attributes[sens_ind]]
# %% md
# This function will be used throughout the notebook to print out some labels, names, etc.
# %%
def describe(train=None, val=None, test=None):
    if train is not None:
        display(Markdown("#### Training Dataset shape"))
        print(train.features.shape)
    if val is not None:
        display(Markdown("#### Validation Dataset shape"))
        print(val.features.shape)
    display(Markdown("#### Test Dataset shape"))
    print(test.features.shape)
    display(Markdown("#### Favorable and unfavorable labels"))
    print(test.favorable_label, test.unfavorable_label)
    display(Markdown("#### Protected attribute names"))
    print(test.protected_attribute_names)
    display(Markdown("#### Privileged and unprivileged protected attribute values"))
    print(test.privileged_protected_attributes,
          test.unprivileged_protected_attributes)
    display(Markdown("#### Dataset feature names"))
    print(test.feature_names)
# %% md
# Show 2015 dataset details
# %%
describe(dataset_orig_panel19_train, dataset_orig_panel19_val, dataset_orig_panel19_test)
# %% md
# Metrics for original data
# %%
metric_orig_panel19_train = BinaryLabelDatasetMetric(
    dataset_orig_panel19_train,
    unprivileged_groups=unprivileged_groups,
    privileged_groups=privileged_groups)
explainer_orig_panel19_train = MetricTextExplainer(metric_orig_panel19_train)
print(explainer_orig_panel19_train.disparate_impact())
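# %% md
# Aside (not in the original tutorial): disparate impact is simply the ratio of favorable-outcome rates
# between the unprivileged and privileged groups. The toy labels and group mask below are made up purely
# to illustrate the definition.
# %%
toy_labels = np.array([1, 0, 0, 0, 1, 1, 0, 1])              # 1 = favorable outcome
toy_unpriv = np.array([1, 1, 1, 1, 0, 0, 0, 0], dtype=bool)  # unprivileged-group mask

toy_di = toy_labels[toy_unpriv].mean() / toy_labels[~toy_unpriv].mean()
print(toy_di)  # 0.333...: well below 1, i.e. the unprivileged group receives far fewer favorable outcomes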
# %% md
### 3.2. Learning a Logistic Regression (LR) classifier on original data
# %% md
#### 3.2.1. Training LR model on original data
# %%
dataset = dataset_orig_panel19_train
model = make_pipeline(StandardScaler(),
                      LogisticRegression(solver='liblinear', random_state=1))
fit_params = {'logisticregression__sample_weight': dataset.instance_weights}
lr_orig_panel19 = model.fit(dataset.features, dataset.labels.ravel(), **fit_params)
# %% md
#### 3.2.2. Validating LR model on original data
# %% md
# This function will be used throughout the tutorial to find the best threshold using a validation set
# %%
from collections import defaultdict
def test(dataset, model, thresh_arr):
    try:
        # sklearn classifier
        y_val_pred_prob = model.predict_proba(dataset.features)
        pos_ind = np.where(model.classes_ == dataset.favorable_label)[0][0]
    except AttributeError:
        # aif360 inprocessing algorithm
        y_val_pred_prob = model.predict(dataset).scores
        pos_ind = 0

    metric_arrs = defaultdict(list)
    for thresh in thresh_arr:
        y_val_pred = (y_val_pred_prob[:, pos_ind] > thresh).astype(np.float64)

        dataset_pred = dataset.copy()
        dataset_pred.labels = y_val_pred
        metric = ClassificationMetric(
                dataset, dataset_pred,
                unprivileged_groups=unprivileged_groups,
                privileged_groups=privileged_groups)

        metric_arrs['bal_acc'].append((metric.true_positive_rate()
                                     + metric.true_negative_rate()) / 2)
        metric_arrs['avg_odds_diff'].append(metric.average_odds_difference())
        metric_arrs['disp_imp'].append(metric.disparate_impact())
        metric_arrs['stat_par_diff'].append(metric.statistical_parity_difference())
        metric_arrs['eq_opp_diff'].append(metric.equal_opportunity_difference())
        metric_arrs['theil_ind'].append(metric.theil_index())
    return metric_arrs
thresh_arr = np.linspace(0.01, 0.5, 50)
val_metrics = test(dataset=dataset_orig_panel19_val,
                   model=lr_orig_panel19,
                   thresh_arr=thresh_arr)
lr_orig_best_ind = np.argmax(val_metrics['bal_acc'])
# %% md
# Plot metrics with twin x-axes
# %%
def plot(x, x_name, y_left, y_left_name, y_right, y_right_name):
    fig, ax1 = plt.subplots(figsize=(10, 7))
    ax1.plot(x, y_left)
    ax1.set_xlabel(x_name, fontsize=16, fontweight='bold')
    ax1.set_ylabel(y_left_name, color='b', fontsize=16, fontweight='bold')
    ax1.xaxis.set_tick_params(labelsize=14)
    ax1.yaxis.set_tick_params(labelsize=14)
    ax1.set_ylim(0.5, 0.8)

    ax2 = ax1.twinx()
    ax2.plot(x, y_right, color='r')
    ax2.set_ylabel(y_right_name, color='r', fontsize=16, fontweight='bold')
    if 'DI' in y_right_name:
        ax2.set_ylim(0., 0.7)
    else:
        ax2.set_ylim(-0.25, 0.1)

    best_ind = np.argmax(y_left)
    ax2.axvline(np.array(x)[best_ind], color='k', linestyle=':')
    ax2.yaxis.set_tick_params(labelsize=14)
    ax2.grid(True)
# %% md
# Here we plot $1 - \min(\text{disparate impact}, 1/\text{disparate impact})$ since it's possible to
# overcorrect and end up with a value greater than 1, implying unfairness for the original privileged
# group. For shorthand, we simply call this 1-min(DI, 1/DI) from now on. We want the plotted metric to
# be less than 0.2.
# %%
disp_imp = np.array(val_metrics['disp_imp'])
disp_imp_err = 1 - np.minimum(disp_imp, 1 / disp_imp)
plot(thresh_arr, 'Classification Thresholds',
     val_metrics['bal_acc'], 'Balanced Accuracy',
     disp_imp_err, '1 - min(DI, 1/DI)')
# %%
plot(thresh_arr, 'Classification Thresholds',
     val_metrics['bal_acc'], 'Balanced Accuracy',
     val_metrics['avg_odds_diff'], 'avg. odds diff.')
# %% md
# Make a function to print out accuracy and fairness metrics. This will be used throughout the tutorial.
# %%
def describe_metrics(metrics, thresh_arr):
    best_ind = np.argmax(metrics['bal_acc'])
    print("Threshold corresponding to Best balanced accuracy: {:6.4f}".format(thresh_arr[best_ind]))
    print("Best balanced accuracy: {:6.4f}".format(metrics['bal_acc'][best_ind]))
    # disp_imp_at_best_ind = np.abs(1 - np.array(metrics['disp_imp']))[best_ind]
    disp_imp_at_best_ind = 1 - min(metrics['disp_imp'][best_ind], 1 / metrics['disp_imp'][best_ind])
    print("Corresponding 1-min(DI, 1/DI) value: {:6.4f}".format(disp_imp_at_best_ind))
    print("Corresponding average odds difference value: {:6.4f}".format(metrics['avg_odds_diff'][best_ind]))
    print("Corresponding statistical parity difference value: {:6.4f}".format(metrics['stat_par_diff'][best_ind]))
    print("Corresponding equal opportunity difference value: {:6.4f}".format(metrics['eq_opp_diff'][best_ind]))
    print("Corresponding Theil index value: {:6.4f}".format(metrics['theil_ind'][best_ind]))
# %%
describe_metrics(val_metrics, thresh_arr)
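# %% md
# Aside (not in the original tutorial): per the AIF360 documentation, the Theil index reported here is the
# generalized entropy index with alpha = 1 computed over per-individual benefits b_i = yhat_i - y_i + 1.
# A minimal numpy sketch of that definition, on made-up labels:
# %%
y_true_toy = np.array([1, 0, 1, 0, 1])
y_pred_toy = np.array([1, 1, 0, 0, 1])
b = y_pred_toy - y_true_toy + 1.0    # benefit: 2 = false positive, 0 = false negative, 1 = correct
mu = b.mean()
pos = b > 0                          # 0 * log(0) is taken as 0
theil_toy = np.sum((b[pos] / mu) * np.log(b[pos] / mu)) / b.size
print(theil_toy)  # 0 would mean 'benefit' is spread perfectly evenly across individuals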
# %% md
#### 3.2.3. Testing LR model on original data
# %%
lr_orig_metrics = test(dataset=dataset_orig_panel19_test,
                       model=lr_orig_panel19,
                       thresh_arr=[thresh_arr[lr_orig_best_ind]])
describe_metrics(lr_orig_metrics, [thresh_arr[lr_orig_best_ind]])
# %% md
# For all the fairness metrics displayed above, the value should be close to '0' for fairness.
#
# 1 - min(DI, 1/DI) < 0.2 is typically desired for classifier predictions to be fair.
#
# However, for a logistic regression classifier trained with the original training data, at the best
# classification rate, this is quite high. This implies unfairness.
#
# Similarly, $\text{average odds difference} = \frac{(FPR_{unpriv}-FPR_{priv})+(TPR_{unpriv}-TPR_{priv})}{2}$
# must be close to zero for the classifier to be fair.
#
# Again, the results for this classifier-data combination are still high. This still implies unfairness.
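# %% md
# Aside (not in the original tutorial): average odds difference computed directly from the definition
# above, using made-up group-wise error rates.
# %%
fpr_unpriv, fpr_priv = 0.30, 0.20   # toy false positive rates per group
tpr_unpriv, tpr_priv = 0.55, 0.75   # toy true positive rates per group
avg_odds = ((fpr_unpriv - fpr_priv) + (tpr_unpriv - tpr_priv)) / 2
print(avg_odds)  # -0.05: negative values mean worse odds for the unprivileged group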
# %% md
### 3.3. Learning a Random Forest (RF) classifier on original data
# %% md
#### 3.3.1. Training RF model on original data
# %%
dataset = dataset_orig_panel19_train
model = make_pipeline(StandardScaler(),
                      RandomForestClassifier(n_estimators=500, min_samples_leaf=25))
fit_params = {'randomforestclassifier__sample_weight': dataset.instance_weights}
rf_orig_panel19 = model.fit(dataset.features, dataset.labels.ravel(), **fit_params)
# %% md
#### 3.3.2. Validating RF model on original data
# %%
thresh_arr = np.linspace(0.01, 0.5, 50)
val_metrics = test(dataset=dataset_orig_panel19_val,
                   model=rf_orig_panel19,
                   thresh_arr=thresh_arr)
rf_orig_best_ind = np.argmax(val_metrics['bal_acc'])
# %%
disp_imp = np.array(val_metrics['disp_imp'])
disp_imp_err = 1 - np.minimum(disp_imp, 1 / disp_imp)
plot(thresh_arr, 'Classification Thresholds',
     val_metrics['bal_acc'], 'Balanced Accuracy',
     disp_imp_err, '1 - min(DI, 1/DI)')
# %%
plot(thresh_arr, 'Classification Thresholds',
     val_metrics['bal_acc'], 'Balanced Accuracy',
     val_metrics['avg_odds_diff'], 'avg. odds diff.')
# %%
describe_metrics(val_metrics, thresh_arr)
# %% md
#### 3.3.3. Testing RF model on original data
# %%
rf_orig_metrics = test(dataset=dataset_orig_panel19_test,
                       model=rf_orig_panel19,
                       thresh_arr=[thresh_arr[rf_orig_best_ind]])
describe_metrics(rf_orig_metrics, [thresh_arr[rf_orig_best_ind]])
# %% md
# As in the case of the logistic regression classifier learned on the original data, the fairness
# metrics for the random forest classifier have values that are quite far from 0.
#
# For example, 1 - min(DI, 1/DI) has a value of over 0.5 as opposed to the desired value of < 0.2.
#
# This indicates that the random forest classifier learned on the original data is also unfair.
# %% md
## [4.](#Table-of-Contents) Bias mitigation using pre-processing technique - Reweighing
# %% md
### 4.1. Transform data
# %%
RW = Reweighing(unprivileged_groups=unprivileged_groups,
                privileged_groups=privileged_groups)
dataset_transf_panel19_train = RW.fit_transform(dataset_orig_panel19_train)
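# %% md
# Aside (not in the original tutorial): reweighing (Kamiran & Calders) gives each (group, label) cell the
# weight P(group) * P(label) / P(group, label), so that group and label become statistically independent
# under the reweighed distribution. A sketch with made-up counts:
# %%
counts = np.array([[30., 10.],   # rows: unpriv, priv
                   [20., 40.]])  # cols: unfavorable, favorable
total = counts.sum()
for g in range(2):
    for lab in range(2):
        p_g = counts[g].sum() / total      # P(group)
        p_l = counts[:, lab].sum() / total # P(label)
        p_gl = counts[g, lab] / total      # P(group, label)
        print(g, lab, round(p_g * p_l / p_gl, 3))  # e.g. unpriv+favorable gets weight 2.0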
# %% md
# Metrics for transformed data
# %%
metric_transf_panel19_train = BinaryLabelDatasetMetric(
    dataset_transf_panel19_train,
    unprivileged_groups=unprivileged_groups,
    privileged_groups=privileged_groups)
explainer_transf_panel19_train = MetricTextExplainer(metric_transf_panel19_train)
print(explainer_transf_panel19_train.disparate_impact())
# %% md
### 4.2. Learning a Logistic Regression (LR) classifier on data transformed by reweighing
# %% md
#### 4.2.1. Training LR model after reweighing
# %%
dataset = dataset_transf_panel19_train
model = make_pipeline(StandardScaler(),
                      LogisticRegression(solver='liblinear', random_state=1))
fit_params = {'logisticregression__sample_weight': dataset.instance_weights}
lr_transf_panel19 = model.fit(dataset.features, dataset.labels.ravel(), **fit_params)
# %% md
#### 4.2.2. Validating LR model after reweighing
# %%
thresh_arr = np.linspace(0.01, 0.5, 50)
val_metrics = test(dataset=dataset_orig_panel19_val,
                   model=lr_transf_panel19,
                   thresh_arr=thresh_arr)
lr_transf_best_ind = np.argmax(val_metrics['bal_acc'])
# %%
disp_imp = np.array(val_metrics['disp_imp'])
disp_imp_err = 1 - np.minimum(disp_imp, 1 / disp_imp)
plot(thresh_arr, 'Classification Thresholds',
     val_metrics['bal_acc'], 'Balanced Accuracy',
     disp_imp_err, '1 - min(DI, 1/DI)')
# %%
plot(thresh_arr, 'Classification Thresholds',
     val_metrics['bal_acc'], 'Balanced Accuracy',
     val_metrics['avg_odds_diff'], 'avg. odds diff.')
# %%
describe_metrics(val_metrics, thresh_arr)
# %% md
#### 4.2.3. Testing LR model after reweighing
# %%
lr_transf_metrics = test(dataset=dataset_orig_panel19_test,
                         model=lr_transf_panel19,
                         thresh_arr=[thresh_arr[lr_transf_best_ind]])
# %%
describe_metrics(lr_transf_metrics, [thresh_arr[lr_transf_best_ind]])
# %% md
# The fairness metrics for the logistic regression model learned after reweighing are well improved, and
# thus the model is much more fair relative to the logistic regression model learned from the original data.
# %% md
### 4.3. Learning a Random Forest (RF) classifier on data transformed by reweighing
# %% md
#### 4.3.1. Training RF model after reweighing
# %%
dataset = dataset_transf_panel19_train
model = make_pipeline(StandardScaler(),
                      RandomForestClassifier(n_estimators=500, min_samples_leaf=25))
fit_params = {'randomforestclassifier__sample_weight': dataset.instance_weights}
rf_transf_panel19 = model.fit(dataset.features, dataset.labels.ravel(), **fit_params)
# %% md
#### 4.3.2. Validating RF model after reweighing
# %%
thresh_arr = np.linspace(0.01, 0.5, 50)
val_metrics = test(dataset=dataset_orig_panel19_val,
                   model=rf_transf_panel19,
                   thresh_arr=thresh_arr)
rf_transf_best_ind = np.argmax(val_metrics['bal_acc'])
# %%
disp_imp = np.array(val_metrics['disp_imp'])
disp_imp_err = 1 - np.minimum(disp_imp, 1 / disp_imp)
plot(thresh_arr, 'Classification Thresholds',
     val_metrics['bal_acc'], 'Balanced Accuracy',
     disp_imp_err, '1 - min(DI, 1/DI)')
# %%
plot(thresh_arr, 'Classification Thresholds',
     val_metrics['bal_acc'], 'Balanced Accuracy',
     val_metrics['avg_odds_diff'], 'avg. odds diff.')
# %%
describe_metrics(val_metrics, thresh_arr)
# %% md
#### 4.3.3. Testing RF model after reweighing
# %%
rf_transf_metrics = test(dataset=dataset_orig_panel19_test,
                         model=rf_transf_panel19,
                         thresh_arr=[thresh_arr[rf_transf_best_ind]])
# %%
describe_metrics(rf_transf_metrics, [thresh_arr[rf_transf_best_ind]])
# %% md
# Once again, the model learned from the transformed data is fairer than that learned from the original
# data. However, the random forest model learned from the transformed data is still relatively unfair as
# compared to the logistic regression model learned from the transformed data.
# %% md
## [5.](#Table-of-Contents) Bias mitigation using in-processing technique - Prejudice Remover (PR)
# %% md
### 5.1. Learning a Prejudice Remover (PR) model on original data
# %% md
#### 5.1.1. Training a PR model
# %%
model = PrejudiceRemover(sensitive_attr=sens_attr, eta=25.0)
pr_orig_scaler = StandardScaler()
dataset = dataset_orig_panel19_train.copy()
dataset.features = pr_orig_scaler.fit_transform(dataset.features)
pr_orig_panel19 = model.fit(dataset)
# %% md
#### 5.1.2. Validating PR model
# %%
thresh_arr = np.linspace(0.01, 0.50, 50)

dataset = dataset_orig_panel19_val.copy()
dataset.features = pr_orig_scaler.transform(dataset.features)

val_metrics = test(dataset=dataset,
                   model=pr_orig_panel19,
                   thresh_arr=thresh_arr)
pr_orig_best_ind = np.argmax(val_metrics['bal_acc'])
# %%
disp_imp = np.array(val_metrics['disp_imp'])
disp_imp_err = 1 - np.minimum(disp_imp, 1 / disp_imp)
plot(thresh_arr, 'Classification Thresholds',
     val_metrics['bal_acc'], 'Balanced Accuracy',
     disp_imp_err, '1 - min(DI, 1/DI)')
# %%
plot(thresh_arr, 'Classification Thresholds',
     val_metrics['bal_acc'], 'Balanced Accuracy',
     val_metrics['avg_odds_diff'], 'avg. odds diff.')
# %%
describe_metrics(val_metrics, thresh_arr)
# %% md
#### 5.1.3. Testing PR model
# %%
dataset = dataset_orig_panel19_test.copy()
dataset.features = pr_orig_scaler.transform(dataset.features)

pr_orig_metrics = test(dataset=dataset,
                       model=pr_orig_panel19,
                       thresh_arr=[thresh_arr[pr_orig_best_ind]])
# %%
describe_metrics(pr_orig_metrics, [thresh_arr[pr_orig_best_ind]])
# %% md
# As in the case of reweighing, prejudice remover results in a fair model. However, it has come at the
# expense of relatively lower balanced accuracy.
# %% md
## [6.](#Table-of-Contents) Summary of Model Learning Results
# %%
import pandas as pd
pd.set_option('display.multi_sparse', False)
results = [lr_orig_metrics, rf_orig_metrics, lr_transf_metrics,
           rf_transf_metrics, pr_orig_metrics]
debias = pd.Series([''] * 2 + ['Reweighing'] * 2
                   + ['Prejudice Remover'],
                   name='Bias Mitigator')
clf = pd.Series(['Logistic Regression', 'Random Forest'] * 2 + [''],
                name='Classifier')
pd.concat([pd.DataFrame(metrics) for metrics in results], axis=0).set_index([debias, clf])
# %% md
# Of all the models, the logistic regression model gives the best balance in terms of balanced accuracy
# and fairness. While the model learnt by prejudice remover is slightly fairer, it has much lower accuracy.
# All other models are quite unfair compared to the logistic model. Hence, we take the logistic regression
# model learnt from data transformed by re-weighing and 'deploy' it.
# %% md
## [7.](#Table-of-Contents) Deploying model
# %% md
### 7.1. Testing model learned on 2014 (Panel 19) on 2015 (Panel 20) deployment data
# %%
dataset_orig_panel20_deploy = MEPSDataset20()
# now align it with the 2014 dataset
dataset_orig_panel20_deploy = dataset_orig_panel19_train.align_datasets(dataset_orig_panel20_deploy)
# %%
# describe(dataset_orig_panel20_train, dataset_orig_panel20_val, dataset_orig_panel20_deploy)
describe(test=dataset_orig_panel20_deploy)
# %%
metric_orig_panel20_deploy = BinaryLabelDatasetMetric(
    dataset_orig_panel20_deploy,
    unprivileged_groups=unprivileged_groups,
    privileged_groups=privileged_groups)
explainer_orig_panel20_deploy = MetricTextExplainer(metric_orig_panel20_deploy)
print(explainer_orig_panel20_deploy.disparate_impact())
# %%
lr_transf_metrics_panel20_deploy = test(
    dataset=dataset_orig_panel20_deploy,
    model=lr_transf_panel19,
    thresh_arr=[thresh_arr[lr_transf_best_ind]])
# %%
describe_metrics(lr_transf_metrics_panel20_deploy, [thresh_arr[lr_transf_best_ind]])
# %% md
# The deployed model tested on the 2015 Panel 20 data still exhibits fairness as well as maintains accuracy.
# %% md
## [8.](#Table-of-Contents) Generating explanations for model predictions using LIME
# %% md
### 8.1. Generating explanations on 2015 Panel 20 deployment data
# %% md
# This section shows how LIME can be integrated with AIF360 to get explanations for model predictions.
# %%
train_dataset = dataset_transf_panel19_train  # data the deployed model (LR from transformed data) was trained on
test_dataset = dataset_orig_panel20_deploy    # the data the model is being tested on
model = lr_transf_panel19                     # the LR model learned from Panel 19 with Reweighing
thresh_arr = np.linspace(0.01, 0.5, 50)
best_thresh = thresh_arr[lr_transf_best_ind]
# %% md
# First, we need to fit the encoder to the aif360 dataset
# %%
lime_data = LimeEncoder().fit(train_dataset)
# %% md
# The `transform()` method is then used to convert aif360 features to LIME-compatible features
# %%
s_train = lime_data.transform(train_dataset.features)
s_test = lime_data.transform(test_dataset.features)
# %% md
# The `LimeTabularExplainer` takes as input the LIME-compatible data along with various other arguments to create a LIME explainer
# %%
explainer = LimeTabularExplainer(
    s_train, class_names=lime_data.s_class_names,
    feature_names=lime_data.s_feature_names,
    categorical_features=lime_data.s_categorical_features,
    categorical_names=lime_data.s_categorical_names,
    kernel_width=3, verbose=False, discretize_continuous=True)
# %% md
# The `inverse_transform()` function is used to transform LIME-compatible data back to aif360-compatible
# data, since that is needed by the model to make predictions. The function below is used to produce the
# predictions for any perturbed data that is produced by LIME.
# %%
def s_predict_fn(x):
    return model.predict_proba(lime_data.inverse_transform(x))
# %% md
# The `explain_instance()` method can then be used to produce explanations for any instance in the test dataset
# %%
def show_explanation(ind):
    exp = explainer.explain_instance(s_test[ind], s_predict_fn, num_features=10)
    print("Actual label: " + str(test_dataset.labels[ind]))
    exp.as_pyplot_figure()
    plt.show()
# %%
print("Threshold corresponding to Best balanced accuracy: {:6.4f}".format(best_thresh))
show_explanation(0)
show_explanation(2)
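# %% md
# Aside (not in the original tutorial): the same explanation can also be read programmatically as
# (feature, weight) pairs via LIME's `Explanation.as_list()`; positive weights push the prediction
# toward the explained class.
# %%
exp0 = explainer.explain_instance(s_test[0], s_predict_fn, num_features=10)
for feature, weight in exp0.as_list():
    print("{:40s} {:+.4f}".format(feature, weight))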
# %% md
# See the [LIME documentation](https://github.com/marcotcr/lime) for a detailed description of the results.
# In short, the left hand side shows the label predictions made by the model, the middle shows the features
# that are important to the instance in question and their contributions (weights) to the label prediction,
# while the right hand side shows the actual values of the features in the particular instance.
# %% md
## [9.](#Table-of-Contents) Re-deploying Model
# %% md
### 9.1. Testing model learned on 2014 (Panel 19) data on 2016 (Panel 21) deployment data
# %% md
# Load the Panel 21 data, and split it again into 3 parts: train, validate, and deploy. We test the
# deployed model against the deployment data. If a new model needs to be learnt, it will be learnt from
# the train/validate data and then tested again on the deployment data.
# %%
dataset_orig_panel21_deploy = MEPSDataset21()
# now align it with the panel19 datasets
dataset_orig_panel21_deploy = dataset_orig_panel19_train.align_datasets(dataset_orig_panel21_deploy)
describe(test=dataset_orig_panel21_deploy)
# %%
metric_orig_panel21_deploy = BinaryLabelDatasetMetric(
    dataset_orig_panel21_deploy,
    unprivileged_groups=unprivileged_groups,
    privileged_groups=privileged_groups)
explainer_orig_panel21_deploy = MetricTextExplainer(metric_orig_panel21_deploy)
print(explainer_orig_panel21_deploy.disparate_impact())
# %% md
# Now, the logistic regression classifier trained on the panel 19 data after reweighing is tested against
# the panel 21 deployment data.
# %%
lr_transf_metrics_panel21_deploy = test(
    dataset=dataset_orig_panel21_deploy,
    model=lr_transf_panel19,
    thresh_arr=[thresh_arr[lr_transf_best_ind]])
# %%
describe_metrics(lr_transf_metrics_panel21_deploy, [thresh_arr[lr_transf_best_ind]])
# %% md
# Compared to the 2015 panel 20 deployment data results, the $|1 - \text{disparate impact}|$ fairness
# metric shows a noticeable drift upwards. While still within spec, it may be worthwhile to re-learn the
# model. So even though the model is still relatively fair and accurate, we go ahead and re-learn the
# model from the 2015 Panel 20 data.
# %% md
### 9.2. Re-learning model (from 2015 Panel 20 data)
# %%
(dataset_orig_panel20_train,
 dataset_orig_panel20_val,
 dataset_orig_panel20_test) = MEPSDataset20().split([0.5, 0.8], shuffle=True)

# now align them with the 2014 datasets
dataset_orig_panel20_train = dataset_orig_panel19_train.align_datasets(dataset_orig_panel20_train)
dataset_orig_panel20_val = dataset_orig_panel19_train.align_datasets(dataset_orig_panel20_val)
dataset_orig_panel20_test = dataset_orig_panel19_train.align_datasets(dataset_orig_panel20_test)
# %% md
# **Train and evaluate new model on 'transformed' 2016 training/test data**
# %%
RW = Reweighing(unprivileged_groups=unprivileged_groups,
                privileged_groups=privileged_groups)
RW.fit(dataset_orig_panel20_train)
dataset_transf_panel20_train = RW.transform(dataset_orig_panel20_train)
# %%
metric_transf_panel20_train = BinaryLabelDatasetMetric(
    dataset_transf_panel20_train,
    unprivileged_groups=unprivileged_groups,
    privileged_groups=privileged_groups)
explainer_transf_panel20_train = MetricTextExplainer(metric_transf_panel20_train)
print(explainer_transf_panel20_train.disparate_impact())
# %%
dataset = dataset_transf_panel20_train
model = make_pipeline(StandardScaler(),
                      LogisticRegression(solver='liblinear', random_state=1))
fit_params = {'logisticregression__sample_weight': dataset.instance_weights}
lr_transf_panel20 = model.fit(dataset.features, dataset.labels.ravel(), **fit_params)
# %%
thresh_arr = np.linspace(0.01, 0.5, 50)
val_metrics = test(dataset=dataset_orig_panel20_val,
                   model=lr_transf_panel20,
                   thresh_arr=thresh_arr)
lr_transf_best_ind_panel20 = np.argmax(val_metrics['bal_acc'])
# %%
disp_imp = np.array(val_metrics['disp_imp'])
disp_imp_err = 1 - np.minimum(disp_imp, 1 / disp_imp)
plot(thresh_arr, 'Classification Thresholds',
     val_metrics['bal_acc'], 'Balanced Accuracy',
     disp_imp_err, '1 - min(DI, 1/DI)')
# %%
plot(thresh_arr, 'Classification Thresholds',
     val_metrics['bal_acc'], 'Balanced Accuracy',
     val_metrics['avg_odds_diff'], 'avg. odds diff.')
# %%
describe_metrics(val_metrics, thresh_arr)
# %%
lr_transf_metrics_panel20_test = test(
    dataset=dataset_orig_panel20_test,
    model=lr_transf_panel20,
    thresh_arr=[thresh_arr[lr_transf_best_ind_panel20]])
# %%
describe_metrics(lr_transf_metrics_panel20_test, [thresh_arr[lr_transf_best_ind_panel20]])
# %% md
# The new model is both relatively fair as well as accurate, so we deploy it and test it against the
# 2016 deployment data.
# %% md
### 9.3. Testing model learned on 2015 (Panel 20) data on 2016 (Panel 21) deployment data
# %% md
# **Evaluate the new 2015 transformed-data model again on the 2016 deployment data**
# %%
lr_transf_panel20_metrics_panel21_deploy = test(
    dataset=dataset_orig_panel21_deploy,
    model=lr_transf_panel20,
    thresh_arr=[thresh_arr[lr_transf_best_ind_panel20]])
# %%
describe_metrics(lr_transf_panel20_metrics_panel21_deploy, [thresh_arr[lr_transf_best_ind_panel20]])
# %% md
# The new transformed 2016 data model is again within the original accuracy/fairness specs, so it is deployed.
# %% md
## [10.](#Table-of-Contents) SUMMARY
# %%
results = [lr_orig_metrics, lr_transf_metrics,
lr_#%% md
# Medical Expenditure Tutorial
#%% md
## This tutorial demonstrates classification model learning with bias mitigation as a part of a Care Management use case using Medical Expenditure data.
#%% md
The notebook demonstrates how the AIF 360 toolkit can be used to detect and reduce bias when learning classifiers using a variety of fairness metrics and algorithms . It also demonstrates how explanations can be generated for predictions made by models learnt with the toolkit using LIME.
Classifiers are built using Logistic Regression as well as Random Forests.
Bias detection is demonstrated using several metrics, including disparate impact, average odds difference, statistical parity difference, equal opportunity difference, and Theil index.
Bias alleviation is explored via a variety of methods, including reweighing (pre-processing algorithm), prejudice remover (in-processing algorithm), and disparate impact remover (pre-processing technique).
Data from the [Medical Expenditure Panel Survey](https://meps.ahrq.gov/mepsweb/) is used in this tutorial. See [Section 2](#2.-Data-used) below for more details.
#%% md
## Table of Contents
#%% md
To return to the table of contents, click on the number at any major section heading.
[1. Use case](#1.-Use-case)
[2. Data used](#2.-Data-used)
[3. Training models without debiasing](#3.-Training-models-on-original-2015-Panel-19-data)
[4. Reweighing (pre-processing bias mitigation)](#4.-Bias-mitigation-using-pre-processing-technique---Reweighing)
[5. Prejudice Remover (in-processing bias mitigation)](#5.-Bias-mitigation-using-in-processing-technique---Prejudice-Remover-(PR))
[6. Summary of results](#6.-Summary-of-Model-Learning-Results)
[7. Deploying model](#7.-Deploying-model)
[8. Generating explanations for model predictions using LIME](#8.-Generating-explanations-for-model-predictions-using-LIME)
[9. Re-deploying Model](#9.-Re-deploying-Model)
[10. Overall Summary](#10.-SUMMARY)
#%% md
## [1.](#Table-of-Contents) Use case
#%% md
In order to demonstrate how AIF 360 can be used to detect and mitigate bias in classfier models, we adopt the following use case:
1. a data scientist develops a 'fair' healthcare utilization scoring model with respect to defined protected classes. Fairness may be dictated by legal or government regulations, such as a requirement that additional care decisions be not predicated on factors such as race of the patient.
2. developer takes the model AND performance characteristics / specs of the model (e.g. accuracy, fairness tests, etc. basically the model factsheet) and deploys the model in an enterprise app that prioritizes cases for care management.
3. the app is put into production and starts scoring people and making recommendations.
4. explanations are generated for each recommendation
5. both recommendations and associated explanations are given to nurses as a part of the care management process. The nurses can evaluate the recommendations for quality and correctness and provide feedback.
6. nurse feedback as well as analysis of usage data with respect to specs of the model w.r.t accuracy and fairness is communicated to AI Ops specialist and LOB user periodically.
7. when significant drift in model specs relative to the model factsheet is observed, the model is sent back for retraining.
#%% md
## [2.](#Table-of-Contents) Data used
#%% md
The specific data used is the [2015 Full Year Consolidated Data File](https://meps.ahrq.gov/mepsweb/data_stats/download_data_files_detail.jsp?cboPufNumber=HC-181) as well as the [2016 Full Year Consolidated Data File](https://meps.ahrq.gov/mepsweb/data_stats/download_data_files_detail.jsp?cboPufNumber=HC-192).
#%% md
The 2015 file contains data from rounds 3,4,5 of panel 19 (2014) and rounds 1,2,3 of panel 20 (2015). The 2016 file contains data from rounds 3,4,5 of panel 20 (2015) and rounds 1,2,3 of panel 21 (2016).
For this demonstration, three datasets were constructed: one from panel 19, round 5 (used for learning models), one from panel 20, round 3 (used for deployment/testing of model - steps); the other from panel 21, round 3 (used for re-training and deployment/testing of updated model).
#%% md
For each dataset, the sensitive attribute is 'RACE' constructed as follows: 'Whites' (privileged class) defined by the features RACEV2X = 1 (White) and HISPANX = 2 (non Hispanic); 'Non-Whites' that included everyone else.
Along with race as the sensitive feature, other features used for modeling include demographics (such as age, gender, active duty status), physical/mental health assessments, diagnosis codes (such as history of diagnosis of cancer, or diabetes), and limitations (such as cognitive or hearing or vision limitation).
To measure utilization, a composite feature, 'UTILIZATION', was created to measure the total number of trips requiring some sort of medical care by summing up the following features: OBTOTV15(16), the number of office based visits; OPTOTV15(16), the number of outpatient visits; ERTOT15(16), the number of ER visits; IPNGTD15(16), the number of inpatient nights, and + HHTOTD16, the number of home health visits.
The model classification task is to predict whether a person would have 'high' utilization (defined as UTILIZATION >= 10, roughly the average utilization for the considered population). High utilization respondents constituted around 17% of each dataset.
To simulate the scenario, each dataset is split into 3 parts: a train, a validation, and a test/deployment part.
We assume that the model is initially built and tuned using the 2015 Panel 19 train/test data. (Use case steps 1-2.)
It is then put into practice and used to score people to identify potential candidates for care management (Use case steps 3-5). Initial deployment is simulated to 2015 Panel 20 deployment data. To show change in performance and/or fairness over time, (use case steps 6-7), the 2016 Panel 21 deployment data is used. Finally, if drift is observed, the 2015 train/validation data is used to learn a new model and evaluated again on the 2016 deployment data
#%% md
## [3.](#Table-of-Contents) Training models on original 2015 Panel 19 data
#%% md
First, load all necessary packages
#%%
import sys
sys.path.insert(0, '../')
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
from IPython.display import Markdown, display
# Datasets
from aif360.datasets import MEPSDataset19
from aif360.datasets import MEPSDataset20
from aif360.datasets import MEPSDataset21
# Fairness metrics
from aif360.metrics import BinaryLabelDatasetMetric
from aif360.metrics import ClassificationMetric
# Explainers
from aif360.explainers import MetricTextExplainer
# Scalers
from sklearn.preprocessing import StandardScaler
# Classifiers
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
# Bias mitigation techniques
from aif360.algorithms.preprocessing import Reweighing
from aif360.algorithms.inprocessing import PrejudiceRemover
# LIME
from aif360.datasets.lime_encoder import LimeEncoder
import lime
from lime.lime_tabular import LimeTabularExplainer
np.random.seed(1)
#%% md
### 3.1. Load data & create splits for learning/validating/testing model
#%% md
Get the dataset and split into train (50%), validate (30%), and test (20%)
#%%
(dataset_orig_panel19_train,
dataset_orig_panel19_val,
dataset_orig_panel19_test) = MEPSDataset19().split([0.5, 0.8], shuffle=True)
sens_ind = 0
sens_attr = dataset_orig_panel19_train.protected_attribute_names[sens_ind]
unprivileged_groups = [{sens_attr: v} for v in
dataset_orig_panel19_train.unprivileged_protected_attributes[sens_ind]]
privileged_groups = [{sens_attr: v} for v in
dataset_orig_panel19_train.privileged_protected_attributes[sens_ind]]
#%% md
This function will be used throughout the notebook to print out some labels, names, etc.
#%%
def describe(train=None, val=None, test=None):
if train is not None:
display(Markdown("#### Training Dataset shape"))
print(train.features.shape)
if val is not None:
display(Markdown("#### Validation Dataset shape"))
print(val.features.shape)
display(Markdown("#### Test Dataset shape"))
print(test.features.shape)
display(Markdown("#### Favorable and unfavorable labels"))
print(test.favorable_label, test.unfavorable_label)
display(Markdown("#### Protected attribute names"))
print(test.protected_attribute_names)
display(Markdown("#### Privileged and unprivileged protected attribute values"))
print(test.privileged_protected_attributes,
test.unprivileged_protected_attributes)
display(Markdown("#### Dataset feature names"))
print(test.feature_names)
#%% md
Show 2015 dataset details
#%%
describe(dataset_orig_panel19_train, dataset_orig_panel19_val, dataset_orig_panel19_test)
#%% md
Metrics for original data
#%%
metric_orig_panel19_train = BinaryLabelDatasetMetric(
dataset_orig_panel19_train,
unprivileged_groups=unprivileged_groups,
privileged_groups=privileged_groups)
explainer_orig_panel19_train = MetricTextExplainer(metric_orig_panel19_train)
print(explainer_orig_panel19_train.disparate_impact())
#%% md
### 3.2. Learning a Logistic Regression (LR) classifier on original data
#%% md
#### 3.2.1. Training LR model on original data
#%%
dataset = dataset_orig_panel19_train
model = make_pipeline(StandardScaler(),
LogisticRegression(solver='liblinear', random_state=1))
fit_params = {'logisticregression__sample_weight': dataset.instance_weights}
lr_orig_panel19 = model.fit(dataset.features, dataset.labels.ravel(), **fit_params)
#%% md
#### 3.2.2. Validating LR model on original data
#%% md
This function will be used throughout the tutorial to find best threshold using a validation set
#%%
from collections import defaultdict
def test(dataset, model, thresh_arr):
try:
# sklearn classifier
y_val_pred_prob = model.predict_proba(dataset.features)
pos_ind = np.where(model.classes_ == dataset.favorable_label)[0][0]
except AttributeError:
# aif360 inprocessing algorithm
y_val_pred_prob = model.predict(dataset).scores
pos_ind = 0
metric_arrs = defaultdict(list)
for thresh in thresh_arr:
y_val_pred = (y_val_pred_prob[:, pos_ind] > thresh).astype(np.float64)
dataset_pred = dataset.copy()
dataset_pred.labels = y_val_pred
metric = ClassificationMetric(
dataset, dataset_pred,
unprivileged_groups=unprivileged_groups,
privileged_groups=privileged_groups)
metric_arrs['bal_acc'].append((metric.true_positive_rate()
+ metric.true_negative_rate()) / 2)
metric_arrs['avg_odds_diff'].append(metric.average_odds_difference())
metric_arrs['disp_imp'].append(metric.disparate_impact())
metric_arrs['stat_par_diff'].append(metric.statistical_parity_difference())
metric_arrs['eq_opp_diff'].append(metric.equal_opportunity_difference())
metric_arrs['theil_ind'].append(metric.theil_index())
return metric_arrs
#%%
thresh_arr = np.linspace(0.01, 0.5, 50)
val_metrics = test(dataset=dataset_orig_panel19_val,
model=lr_orig_panel19,
thresh_arr=thresh_arr)
lr_orig_best_ind = np.argmax(val_metrics['bal_acc'])
#%% md
Plot metrics with twin x-axes
#%%
def plot(x, x_name, y_left, y_left_name, y_right, y_right_name):
fig, ax1 = plt.subplots(figsize=(10,7))
ax1.plot(x, y_left)
ax1.set_xlabel(x_name, fontsize=16, fontweight='bold')
ax1.set_ylabel(y_left_name, color='b', fontsize=16, fontweight='bold')
ax1.xaxis.set_tick_params(labelsize=14)
ax1.yaxis.set_tick_params(labelsize=14)
ax1.set_ylim(0.5, 0.8)
ax2 = ax1.twinx()
ax2.plot(x, y_right, color='r')
ax2.set_ylabel(y_right_name, color='r', fontsize=16, fontweight='bold')
if 'DI' in y_right_name:
ax2.set_ylim(0., 0.7)
else:
ax2.set_ylim(-0.25, 0.1)
best_ind = np.argmax(y_left)
ax2.axvline(np.array(x)[best_ind], color='k', linestyle=':')
ax2.yaxis.set_tick_params(labelsize=14)
ax2.grid(True)
#%% md
Here we plot $1 - \min(\text{disparate impact}, 1/\text{disparate impact})$ since it's possible to overcorrect and end up with a value greater than 1, implying unfairness for the original privileged group. For shorthand, we simply call this 1-min(DI, 1/DI) from now on. We want the plotted metric to be less than 0.2.
#%%
disp_imp = np.array(val_metrics['disp_imp'])
disp_imp_err = 1 - np.minimum(disp_imp, 1/disp_imp)
plot(thresh_arr, 'Classification Thresholds',
val_metrics['bal_acc'], 'Balanced Accuracy',
disp_imp_err, '1 - min(DI, 1/DI)')
#%%
plot(thresh_arr, 'Classification Thresholds',
val_metrics['bal_acc'], 'Balanced Accuracy',
val_metrics['avg_odds_diff'], 'avg. odds diff.')
#%% md
Make a function to print out accuracy and fairness metrics. This will be used throughout the tutorial.
#%%
def describe_metrics(metrics, thresh_arr):
best_ind = np.argmax(metrics['bal_acc'])
print("Threshold corresponding to Best balanced accuracy: {:6.4f}".format(thresh_arr[best_ind]))
print("Best balanced accuracy: {:6.4f}".format(metrics['bal_acc'][best_ind]))
# disp_imp_at_best_ind = np.abs(1 - np.array(metrics['disp_imp']))[best_ind]
disp_imp_at_best_ind = 1 - min(metrics['disp_imp'][best_ind], 1/metrics['disp_imp'][best_ind])
print("Corresponding 1-min(DI, 1/DI) value: {:6.4f}".format(disp_imp_at_best_ind))
print("Corresponding average odds difference value: {:6.4f}".format(metrics['avg_odds_diff'][best_ind]))
print("Corresponding statistical parity difference value: {:6.4f}".format(metrics['stat_par_diff'][best_ind]))
print("Corresponding equal opportunity difference value: {:6.4f}".format(metrics['eq_opp_diff'][best_ind]))
print("Corresponding Theil index value: {:6.4f}".format(metrics['theil_ind'][best_ind]))
#%%
describe_metrics(val_metrics, thresh_arr)
#%% md
#### 3.2.3. Testing LR model on original data
#%%
lr_orig_metrics = test(dataset=dataset_orig_panel19_test,
model=lr_orig_panel19,
thresh_arr=[thresh_arr[lr_orig_best_ind]])
#%%
describe_metrics(lr_orig_metrics, [thresh_arr[lr_orig_best_ind]])
#%% md
For all the fairness metrics displayed above, the value should be close to '0' for fairness.
1-min(DI, 1/DI) < 0.2 is typically desired for classifier predictions to be fair.
However, for a logistic regression classifier trained with original training data, at the best classification rate, this is quite high. This implies unfairness.
Similarly, $\text{average odds difference} = \frac{(FPR_{unpriv}-FPR_{priv})+(TPR_{unpriv}-TPR_{priv})}{2}$ must be close to zero for the classifier to be fair.
Again, the results for this classifier-data combination are still high. This still implies unfairness.
#%% md
### 3.3. Learning a Random Forest (RF) classifier on original data
#%% md
#### 3.3.1. Training RF model on original data
#%%
dataset = dataset_orig_panel19_train
model = make_pipeline(StandardScaler(),
RandomForestClassifier(n_estimators=500, min_samples_leaf=25))
fit_params = {'randomforestclassifier__sample_weight': dataset.instance_weights}
rf_orig_panel19 = model.fit(dataset.features, dataset.labels.ravel(), **fit_params)
#%% md
#### 3.3.2. Validating RF model on original data
#%%
thresh_arr = np.linspace(0.01, 0.5, 50)
val_metrics = test(dataset=dataset_orig_panel19_val,
model=rf_orig_panel19,
thresh_arr=thresh_arr)
rf_orig_best_ind = np.argmax(val_metrics['bal_acc'])
#%%
disp_imp = np.array(val_metrics['disp_imp'])
disp_imp_err = 1 - np.minimum(disp_imp, 1/disp_imp)
plot(thresh_arr, 'Classification Thresholds',
val_metrics['bal_acc'], 'Balanced Accuracy',
disp_imp_err, '1 - min(DI, 1/DI)')
#%%
plot(thresh_arr, 'Classification Thresholds',
val_metrics['bal_acc'], 'Balanced Accuracy',
val_metrics['avg_odds_diff'], 'avg. odds diff.')
#%%
describe_metrics(val_metrics, thresh_arr)
#%% md
#### 3.3.3. Testing RF model on original data
#%%
rf_orig_metrics = test(dataset=dataset_orig_panel19_test,
model=rf_orig_panel19,
thresh_arr=[thresh_arr[rf_orig_best_ind]])
#%%
describe_metrics(rf_orig_metrics, [thresh_arr[rf_orig_best_ind]])
#%% md
As in the case of the logistic regression classifier learned on the original data, the fairness metrics for the random forest classifier have values that are quite far from 0.
For example, 1 - min(DI, 1/DI) has a value of over 0.5 as opposed to the desired value of < 0.2.
This indicates that the random forest classifier learned on the original data is also unfair.
#%% md
## [4.](#Table-of-Contents) Bias mitigation using pre-processing technique - Reweighing
#%% md
### 4.1. Transform data
#%%
RW = Reweighing(unprivileged_groups=unprivileged_groups,
privileged_groups=privileged_groups)
dataset_transf_panel19_train = RW.fit_transform(dataset_orig_panel19_train)
#%% md
Metrics for transformed data
#%%
metric_transf_panel19_train = BinaryLabelDatasetMetric(
dataset_transf_panel19_train,
unprivileged_groups=unprivileged_groups,
privileged_groups=privileged_groups)
explainer_transf_panel19_train = MetricTextExplainer(metric_transf_panel19_train)
print(explainer_transf_panel19_train.disparate_impact())
#%% md
### 4.2. Learning a Logistic Regression (LR) classifier on data transformed by reweighing
#%% md
#### 4.2.1. Training LR model after reweighing
#%%
dataset = dataset_transf_panel19_train
model = make_pipeline(StandardScaler(),
LogisticRegression(solver='liblinear', random_state=1))
fit_params = {'logisticregression__sample_weight': dataset.instance_weights}
lr_transf_panel19 = model.fit(dataset.features, dataset.labels.ravel(), **fit_params)
#%% md
#### 4.2.2. Validating LR model after reweighing
#%%
thresh_arr = np.linspace(0.01, 0.5, 50)
val_metrics = test(dataset=dataset_orig_panel19_val,
model=lr_transf_panel19,
thresh_arr=thresh_arr)
lr_transf_best_ind = np.argmax(val_metrics['bal_acc'])
#%%
disp_imp = np.array(val_metrics['disp_imp'])
disp_imp_err = 1 - np.minimum(disp_imp, 1/disp_imp)
plot(thresh_arr, 'Classification Thresholds',
val_metrics['bal_acc'], 'Balanced Accuracy',
disp_imp_err, '1 - min(DI, 1/DI)')
#%%
plot(thresh_arr, 'Classification Thresholds',
val_metrics['bal_acc'], 'Balanced Accuracy',
val_metrics['avg_odds_diff'], 'avg. odds diff.')
#%%
describe_metrics(val_metrics, thresh_arr)
#%% md
#### 4.2.3. Testing LR model after reweighing
#%%
lr_transf_metrics = test(dataset=dataset_orig_panel19_test,
model=lr_transf_panel19,
thresh_arr=[thresh_arr[lr_transf_best_ind]])
#%%
describe_metrics(lr_transf_metrics, [thresh_arr[lr_transf_best_ind]])
#%% md
The fairness metrics for the logistic regression model learned after reweighing are well improved, and thus the model is much more fair relative to the logistic regression model learned from the original data.
#%% md
### 4.3. Learning a Random Forest (RF) classifier on data transformed by reweighing
#%% md
#### 4.3.1. Training RF model after reweighing
#%%
dataset = dataset_transf_panel19_train
model = make_pipeline(StandardScaler(),
RandomForestClassifier(n_estimators=500, min_samples_leaf=25))
fit_params = {'randomforestclassifier__sample_weight': dataset.instance_weights}
rf_transf_panel19 = model.fit(dataset.features, dataset.labels.ravel(), **fit_params)
#%% md
#### 4.3.2. Validating RF model after reweighing
#%%
thresh_arr = np.linspace(0.01, 0.5, 50)
val_metrics = test(dataset=dataset_orig_panel19_val,
model=rf_transf_panel19,
thresh_arr=thresh_arr)
rf_transf_best_ind = np.argmax(val_metrics['bal_acc'])
#%%
disp_imp = np.array(val_metrics['disp_imp'])
disp_imp_err = 1 - np.minimum(disp_imp, 1/disp_imp)
plot(thresh_arr, 'Classification Thresholds',
val_metrics['bal_acc'], 'Balanced Accuracy',
disp_imp_err, '1 - min(DI, 1/DI)')
#%%
plot(thresh_arr, 'Classification Thresholds',
val_metrics['bal_acc'], 'Balanced Accuracy',
val_metrics['avg_odds_diff'], 'avg. odds diff.')
#%%
describe_metrics(val_metrics, thresh_arr)
#%% md
#### 4.3.3. Testing RF model after reweighing
#%%
rf_transf_metrics = test(dataset=dataset_orig_panel19_test,
model=rf_transf_panel19,
thresh_arr=[thresh_arr[rf_transf_best_ind]])
#%%
describe_metrics(rf_transf_metrics, [thresh_arr[rf_transf_best_ind]])
#%% md
Once again, the model learned from the transformed data is fairer than that learned from the original data. However, the random forest model learned from the transformed data is still relatively unfair as compared to the logistic regression model learned from the transformed data.
#%% md
## [5.](#Table-of-Contents) Bias mitigation using in-processing technique - Prejudice Remover (PR)
#%% md
### 5.1. Learning a Prejudice Remover (PR) model on original data
#%% md
#### 5.1.1. Training a PR model
#%%
model = PrejudiceRemover(sensitive_attr=sens_attr, eta=25.0)
pr_orig_scaler = StandardScaler()
dataset = dataset_orig_panel19_train.copy()
dataset.features = pr_orig_scaler.fit_transform(dataset.features)
pr_orig_panel19 = model.fit(dataset)
#%% md
#### 5.1.2. Validating PR model
#%%
thresh_arr = np.linspace(0.01, 0.50, 50)
dataset = dataset_orig_panel19_val.copy()
dataset.features = pr_orig_scaler.transform(dataset.features)
val_metrics = test(dataset=dataset,
model=pr_orig_panel19,
thresh_arr=thresh_arr)
pr_orig_best_ind = np.argmax(val_metrics['bal_acc'])
#%%
disp_imp = np.array(val_metrics['disp_imp'])
disp_imp_err = 1 - np.minimum(disp_imp, 1/disp_imp)
plot(thresh_arr, 'Classification Thresholds',
val_metrics['bal_acc'], 'Balanced Accuracy',
disp_imp_err, '1 - min(DI, 1/DI)')
#%%
plot(thresh_arr, 'Classification Thresholds',
val_metrics['bal_acc'], 'Balanced Accuracy',
val_metrics['avg_odds_diff'], 'avg. odds diff.')
#%%
describe_metrics(val_metrics, thresh_arr)
#%% md
#### 5.1.3. Testing PR model
#%%
dataset = dataset_orig_panel19_test.copy()
dataset.features = pr_orig_scaler.transform(dataset.features)
pr_orig_metrics = test(dataset=dataset,
model=pr_orig_panel19,
thresh_arr=[thresh_arr[pr_orig_best_ind]])
#%%
describe_metrics(pr_orig_metrics, [thresh_arr[pr_orig_best_ind]])
#%% md
As in the case of reweighing, prejudice remover results in a fair model. However, it has come at the expense of relatively lower balanced accuracy.
#%% md
## [6.](#Table-of-Contents) Summary of Model Learning Results
#%%
import pandas as pd
pd.set_option('display.multi_sparse', False)
results = [lr_orig_metrics, rf_orig_metrics, lr_transf_metrics,
rf_transf_metrics, pr_orig_metrics]
debias = pd.Series(['']*2 + ['Reweighing']*2
+ ['Prejudice Remover'],
name='Bias Mitigator')
clf = pd.Series(['Logistic Regression', 'Random Forest']*2 + [''],
name='Classifier')
pd.concat([pd.DataFrame(metrics) for metrics in results], axis=0).set_index([debias, clf])
#%% md
Of all the models, the logistic regression model trained on the reweighed data gives the best balance between balanced accuracy and fairness. While the model learnt by prejudice remover is slightly fairer, it has much lower accuracy, and all the other models are quite unfair compared to the logistic model. Hence, we take the logistic regression model learnt from the data transformed by reweighing and 'deploy' it.
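#%% md
One lightweight way to 'deploy' the chosen model together with its operating threshold is to serialize both (a minimal sketch; the use of joblib and the file name are assumptions, not part of the original workflow):
#%%
import joblib
# persist the reweighed-LR pipeline and the validated decision threshold together
joblib.dump({'model': lr_transf_panel19,
             'threshold': thresh_arr[lr_transf_best_ind]},
            'deployed_model_panel19.joblib')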
#%% md
## [7.](#Table-of-Contents) Deploying model
#%% md
### 7.1. Testing model learned on 2014 (Panel 19) on 2015 (Panel 20) deployment data
#%%
dataset_orig_panel20_deploy = MEPSDataset20()
# now align it with the 2014 dataset
dataset_orig_panel20_deploy = dataset_orig_panel19_train.align_datasets(dataset_orig_panel20_deploy)
#%%
# describe(dataset_orig_panel20_train, dataset_orig_panel20_val, dataset_orig_panel20_deploy)
describe(test=dataset_orig_panel20_deploy)
#%%
metric_orig_panel20_deploy = BinaryLabelDatasetMetric(
dataset_orig_panel20_deploy,
unprivileged_groups=unprivileged_groups,
privileged_groups=privileged_groups)
explainer_orig_panel20_deploy = MetricTextExplainer(metric_orig_panel20_deploy)
print(explainer_orig_panel20_deploy.disparate_impact())
#%%
lr_transf_metrics_panel20_deploy = test(
dataset=dataset_orig_panel20_deploy,
model=lr_transf_panel19,
thresh_arr=[thresh_arr[lr_transf_best_ind]])
#%%
describe_metrics(lr_transf_metrics_panel20_deploy, [thresh_arr[lr_transf_best_ind]])
#%% md
The deployed model, tested on the 2015 Panel 20 data, remains fair while maintaining its accuracy.
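#%% md
A quick side-by-side of balanced accuracy on the original Panel 19 test set versus the Panel 20 deployment data (a minimal sketch; it assumes both metric dictionaries carry a 'bal_acc' list, as they do elsewhere in this notebook):
#%%
print('bal_acc, Panel 19 test:   {:.4f}'.format(lr_transf_metrics['bal_acc'][0]))
print('bal_acc, Panel 20 deploy: {:.4f}'.format(lr_transf_metrics_panel20_deploy['bal_acc'][0]))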
#%% md
## [8.](#Table-of-Contents) Generating explanations for model predictions using LIME
#%% md
### 8.1. Generating explanations on 2015 Panel 20 deployment data
#%% md
This section shows how LIME can be integrated with AIF360 to get explanations for model predictions.
#%%
train_dataset = dataset_transf_panel19_train  # data used to train the deployed model (LR learned from reweighed data)
test_dataset = dataset_orig_panel20_deploy    # data the model is being tested on
model = lr_transf_panel19                     # LR model learned from Panel 19 data with reweighing
thresh_arr = np.linspace(0.01, 0.5, 50)
best_thresh = thresh_arr[lr_transf_best_ind]
#%% md
First, we need to fit the encoder to the aif360 training dataset.
#%%
lime_data = LimeEncoder().fit(train_dataset)
#%% md
The `transform()` method is then used to convert aif360 features into LIME-compatible features.
#%%
s_train = lime_data.transform(train_dataset.features)
s_test = lime_data.transform(test_dataset.features)
#%% md
The `LimeTabularExplainer` takes the LIME-compatible data as input, along with various other arguments, to create a LIME explainer.
#%%
explainer = LimeTabularExplainer(
s_train, class_names=lime_data.s_class_names,
feature_names=lime_data.s_feature_names,
categorical_features=lime_data.s_categorical_features,
categorical_names=lime_data.s_categorical_names,
kernel_width=3, verbose=False, discretize_continuous=True)
#%% md
The `inverse_transform()` function is used to transform LIME-compatible data back into aif360-compatible data, since that is what the model needs to make predictions. The function below produces predictions for any perturbed data generated by LIME.
#%%
def s_predict_fn(x):
    # map LIME's perturbed samples back to the aif360 feature space before predicting
    return model.predict_proba(lime_data.inverse_transform(x))
#%% md
The `explain_instance()` method can then be used to produce explanations for any instance in the test dataset.
#%%
def show_explanation(ind):
    # explain the prediction for test instance `ind` and plot the result
    exp = explainer.explain_instance(s_test[ind], s_predict_fn, num_features=10)
    print("Actual label: " + str(test_dataset.labels[ind]))
    exp.as_pyplot_figure()
    plt.show()
#%%
print("Threshold corresponding to Best balanced accuracy: {:6.4f}".format(best_thresh))
show_explanation(0)
show_explanation(2)
#%% md
See the [LIME documentation](https://github.com/marcotcr/lime) for a detailed description of the results. In short, the left-hand side shows the label predictions made by the model, the middle shows the features that are important to the instance in question and their contributions (weights) to the label prediction, and the right-hand side shows the actual values of the features in the particular instance.
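#%% md
For programmatic access to the same information, a LIME explanation can also be read as (feature, weight) pairs via its `as_list()` method (a small sketch reusing the explainer and prediction function defined above):
#%%
exp = explainer.explain_instance(s_test[0], s_predict_fn, num_features=10)
for feature, weight in exp.as_list():
    # positive weights push toward the predicted class, negative weights away from it
    print('{:<40s} {:+.4f}'.format(feature, weight))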
#%% md
## [9.](#Table-of-Contents) Re-deploying Model
#%% md
### 9.1. Testing model learned on 2014 (Panel 19) data on 2016 (Panel 21) deployment data
#%% md
Load the Panel 21 data, and split it again into 3 parts: train, validate, and deploy. We test the deployed model against the deployment data. If a new model needs to be learnt, it will be learnt from the train/validate data and then tested again on the deployment data.
#%%
dataset_orig_panel21_deploy = MEPSDataset21()
# now align it with the panel19 datasets
dataset_orig_panel21_deploy = dataset_orig_panel19_train.align_datasets(dataset_orig_panel21_deploy)
describe(test=dataset_orig_panel21_deploy)
#%%
metric_orig_panel21_deploy = BinaryLabelDatasetMetric(
dataset_orig_panel21_deploy,
unprivileged_groups=unprivileged_groups,
privileged_groups=privileged_groups)
explainer_orig_panel21_deploy = MetricTextExplainer(metric_orig_panel21_deploy)
print(explainer_orig_panel21_deploy.disparate_impact())
#%% md
Now, the logistic regression classifier trained on the Panel 19 data after reweighing is tested against the Panel 21 deployment data.
#%%
lr_transf_metrics_panel21_deploy = test(
dataset=dataset_orig_panel21_deploy,
model=lr_transf_panel19,
thresh_arr=[thresh_arr[lr_transf_best_ind]])
#%%
describe_metrics(lr_transf_metrics_panel21_deploy, [thresh_arr[lr_transf_best_ind]])
#%% md
Compared to the 2015 Panel 20 deployment data results, the $|1 - \text{disparate impact}|$ fairness metric shows a noticeable drift upwards. The model is still within specs, and relatively fair and accurate, but the drift suggests it may be worthwhile to re-learn the model, so we go ahead and re-learn it from the 2015 Panel 20 data.
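#%% md
The drift can be quantified directly (a minimal sketch; it assumes the metric dictionaries store disparate impact under the 'disp_imp' key, as used throughout this notebook):
#%%
di_p20 = lr_transf_metrics_panel20_deploy['disp_imp'][0]
di_p21 = lr_transf_metrics_panel21_deploy['disp_imp'][0]
print('|1 - DI| on Panel 20 deploy: {:.4f}'.format(abs(1 - di_p20)))
print('|1 - DI| on Panel 21 deploy: {:.4f}'.format(abs(1 - di_p21)))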
#%% md
### 9.2. Re-learning model (from 2015 Panel 20 data)
#%%
(dataset_orig_panel20_train,
dataset_orig_panel20_val,
dataset_orig_panel20_test) = MEPSDataset20().split([0.5, 0.8], shuffle=True)
# now align them with the 2014 datasets
dataset_orig_panel20_train = dataset_orig_panel19_train.align_datasets(dataset_orig_panel20_train)
dataset_orig_panel20_val = dataset_orig_panel19_train.align_datasets(dataset_orig_panel20_val)
dataset_orig_panel20_test = dataset_orig_panel19_train.align_datasets(dataset_orig_panel20_test)
#%% md
**Train and evaluate a new model on 'transformed' 2015 (Panel 20) training/test data**
#%%
RW = Reweighing(unprivileged_groups=unprivileged_groups,
privileged_groups=privileged_groups)
RW.fit(dataset_orig_panel20_train)
dataset_transf_panel20_train = RW.transform(dataset_orig_panel20_train)
#%%
metric_transf_panel20_train = BinaryLabelDatasetMetric(
dataset_transf_panel20_train,
unprivileged_groups=unprivileged_groups,
privileged_groups=privileged_groups)
explainer_transf_panel20_train = MetricTextExplainer(metric_transf_panel20_train)
print(explainer_transf_panel20_train.disparate_impact())
#%%
dataset = dataset_transf_panel20_train
model = make_pipeline(StandardScaler(),
LogisticRegression(solver='liblinear', random_state=1))
fit_params = {'logisticregression__sample_weight': dataset.instance_weights}
lr_transf_panel20 = model.fit(dataset.features, dataset.labels.ravel(), **fit_params)
#%%
thresh_arr = np.linspace(0.01, 0.5, 50)
val_metrics = test(dataset=dataset_orig_panel20_val,
model=lr_transf_panel20,
thresh_arr=thresh_arr)
lr_transf_best_ind_panel20 = np.argmax(val_metrics['bal_acc'])
#%%
disp_imp = np.array(val_metrics['disp_imp'])
disp_imp_err = 1 - np.minimum(disp_imp, 1/disp_imp)
plot(thresh_arr, 'Classification Thresholds',
val_metrics['bal_acc'], 'Balanced Accuracy',
disp_imp_err, '1 - min(DI, 1/DI)')
#%%
plot(thresh_arr, 'Classification Thresholds',
val_metrics['bal_acc'], 'Balanced Accuracy',
val_metrics['avg_odds_diff'], 'avg. odds diff.')
#%%
describe_metrics(val_metrics, thresh_arr)
#%%
lr_transf_metrics_panel20_test = test(
dataset=dataset_orig_panel20_test,
model=lr_transf_panel20,
thresh_arr=[thresh_arr[lr_transf_best_ind_panel20]])
#%%
describe_metrics(lr_transf_metrics_panel20_test, [thresh_arr[lr_transf_best_ind_panel20]])
#%% md
The new model is both relatively fair and accurate, so we deploy it and test it against the 2016 (Panel 21) deployment data.
#%% md
### 9.3. Testing model learned on 2015 (Panel 20) data on 2016 (Panel 21) deployment data
#%% md
**Evaluate the new model, trained on transformed 2015 data, on the 2016 deployment data**
#%%
lr_transf_panel20_metrics_panel21_deploy = test(
dataset=dataset_orig_panel21_deploy,
model=lr_transf_panel20,
thresh_arr=[thresh_arr[lr_transf_best_ind_panel20]])
#%%
describe_metrics(lr_transf_panel20_metrics_panel21_deploy, [thresh_arr[lr_transf_best_ind_panel20]])
#%% md
The new model, trained on the transformed 2015 data, is again within the original accuracy/fairness specs, so it is deployed.
#%% md
## [10.](#Table-of-Contents) Summary
#%%
results = [lr_orig_metrics, lr_transf_metrics,
lr_transf_metrics_panel20_deploy,
lr_transf_metrics_panel21_deploy,
lr_transf_metrics_panel20_test,
lr_transf_panel20_metrics_panel21_deploy]
debias = pd.Series([''] + ['Reweighing']*5, name='Bias Mitigator')
clf = pd.Series(['Logistic Regression']*6, name='Classifier')
tr = pd.Series(['Panel19']*4 + ['Panel20']*2, name='Training set')
te = pd.Series(['Panel19']*2 + ['Panel20', 'Panel21']*2, name='Testing set')
pd.concat([pd.DataFrame(m) for m in results], axis=0).set_index([debias, clf, tr, te])
| 27.4672 | 455 | 0.728491 | 9,544 | 68,668 | 5.035205 | 0.07303 | 0.025845 | 0.021725 | 0.011986 | 0.999292 | 0.999292 | 0.999251 | 0.999251 | 0.999168 | 0.999126 | 0 | 0.031763 | 0.171972 | 68,668 | 2,499 | 456 | 27.478191 | 0.813416 | 0 | 0 | 0.797834 | 0 | 0.018051 | 0.084242 | 0.006622 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.039711 | null | null | 0.039711 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
54c2fadcb4f04d76045bb5e7cbf751af1e67fa29 | 22,810 | py | Python | sdk/python/pulumi_gcp/networkservices/edge_cache_keyset.py | sisisin/pulumi-gcp | af6681d70ea457843409110c1324817fe55f68ad | [
"ECL-2.0",
"Apache-2.0"
] | 121 | 2018-06-18T19:16:42.000Z | 2022-03-31T06:06:48.000Z | sdk/python/pulumi_gcp/networkservices/edge_cache_keyset.py | sisisin/pulumi-gcp | af6681d70ea457843409110c1324817fe55f68ad | [
"ECL-2.0",
"Apache-2.0"
] | 492 | 2018-06-22T19:41:03.000Z | 2022-03-31T15:33:53.000Z | sdk/python/pulumi_gcp/networkservices/edge_cache_keyset.py | sisisin/pulumi-gcp | af6681d70ea457843409110c1324817fe55f68ad | [
"ECL-2.0",
"Apache-2.0"
] | 43 | 2018-06-19T01:43:13.000Z | 2022-03-23T22:43:37.000Z | # coding=utf-8
# *** WARNING: this file was generated by the Pulumi Terraform Bridge (tfgen) Tool. ***
# *** Do not edit by hand unless you're certain you know what you are doing! ***
import warnings
import pulumi
import pulumi.runtime
from typing import Any, Mapping, Optional, Sequence, Union, overload
from .. import _utilities
from . import outputs
from ._inputs import *
__all__ = ['EdgeCacheKeysetArgs', 'EdgeCacheKeyset']
@pulumi.input_type
class EdgeCacheKeysetArgs:
def __init__(__self__, *,
public_keys: pulumi.Input[Sequence[pulumi.Input['EdgeCacheKeysetPublicKeyArgs']]],
description: Optional[pulumi.Input[str]] = None,
labels: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]] = None,
name: Optional[pulumi.Input[str]] = None,
project: Optional[pulumi.Input[str]] = None):
"""
The set of arguments for constructing a EdgeCacheKeyset resource.
:param pulumi.Input[Sequence[pulumi.Input['EdgeCacheKeysetPublicKeyArgs']]] public_keys: An ordered list of Ed25519 public keys to use for validating signed requests.
You must specify at least one (1) key, and may have up to three (3) keys.
Ed25519 public keys are not secret, and only allow Google to validate a request was signed by your corresponding private key.
You should ensure that the private key is kept secret, and that only authorized users can add public keys to a keyset.
Structure is documented below.
:param pulumi.Input[str] description: A human-readable description of the resource.
:param pulumi.Input[Mapping[str, pulumi.Input[str]]] labels: Set of label tags associated with the EdgeCache resource.
:param pulumi.Input[str] name: Name of the resource; provided by the client when the resource is created.
The name must be 1-64 characters long, and match the regular expression [a-zA-Z][a-zA-Z0-9_-]* which means the first character must be a letter,
and all following characters must be a dash, underscore, letter or digit.
:param pulumi.Input[str] project: The ID of the project in which the resource belongs.
If it is not provided, the provider project is used.
"""
pulumi.set(__self__, "public_keys", public_keys)
if description is not None:
pulumi.set(__self__, "description", description)
if labels is not None:
pulumi.set(__self__, "labels", labels)
if name is not None:
pulumi.set(__self__, "name", name)
if project is not None:
pulumi.set(__self__, "project", project)
@property
@pulumi.getter(name="publicKeys")
def public_keys(self) -> pulumi.Input[Sequence[pulumi.Input['EdgeCacheKeysetPublicKeyArgs']]]:
"""
An ordered list of Ed25519 public keys to use for validating signed requests.
You must specify at least one (1) key, and may have up to three (3) keys.
Ed25519 public keys are not secret, and only allow Google to validate a request was signed by your corresponding private key.
You should ensure that the private key is kept secret, and that only authorized users can add public keys to a keyset.
Structure is documented below.
"""
return pulumi.get(self, "public_keys")
@public_keys.setter
def public_keys(self, value: pulumi.Input[Sequence[pulumi.Input['EdgeCacheKeysetPublicKeyArgs']]]):
pulumi.set(self, "public_keys", value)
@property
@pulumi.getter
def description(self) -> Optional[pulumi.Input[str]]:
"""
A human-readable description of the resource.
"""
return pulumi.get(self, "description")
@description.setter
def description(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "description", value)
@property
@pulumi.getter
def labels(self) -> Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]]:
"""
Set of label tags associated with the EdgeCache resource.
"""
return pulumi.get(self, "labels")
@labels.setter
def labels(self, value: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]]):
pulumi.set(self, "labels", value)
@property
@pulumi.getter
def name(self) -> Optional[pulumi.Input[str]]:
"""
Name of the resource; provided by the client when the resource is created.
The name must be 1-64 characters long, and match the regular expression [a-zA-Z][a-zA-Z0-9_-]* which means the first character must be a letter,
and all following characters must be a dash, underscore, letter or digit.
"""
return pulumi.get(self, "name")
@name.setter
def name(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "name", value)
@property
@pulumi.getter
def project(self) -> Optional[pulumi.Input[str]]:
"""
The ID of the project in which the resource belongs.
If it is not provided, the provider project is used.
"""
return pulumi.get(self, "project")
@project.setter
def project(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "project", value)
@pulumi.input_type
class _EdgeCacheKeysetState:
def __init__(__self__, *,
description: Optional[pulumi.Input[str]] = None,
labels: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]] = None,
name: Optional[pulumi.Input[str]] = None,
project: Optional[pulumi.Input[str]] = None,
public_keys: Optional[pulumi.Input[Sequence[pulumi.Input['EdgeCacheKeysetPublicKeyArgs']]]] = None):
"""
Input properties used for looking up and filtering EdgeCacheKeyset resources.
:param pulumi.Input[str] description: A human-readable description of the resource.
:param pulumi.Input[Mapping[str, pulumi.Input[str]]] labels: Set of label tags associated with the EdgeCache resource.
:param pulumi.Input[str] name: Name of the resource; provided by the client when the resource is created.
The name must be 1-64 characters long, and match the regular expression [a-zA-Z][a-zA-Z0-9_-]* which means the first character must be a letter,
and all following characters must be a dash, underscore, letter or digit.
:param pulumi.Input[str] project: The ID of the project in which the resource belongs.
If it is not provided, the provider project is used.
:param pulumi.Input[Sequence[pulumi.Input['EdgeCacheKeysetPublicKeyArgs']]] public_keys: An ordered list of Ed25519 public keys to use for validating signed requests.
You must specify at least one (1) key, and may have up to three (3) keys.
Ed25519 public keys are not secret, and only allow Google to validate a request was signed by your corresponding private key.
You should ensure that the private key is kept secret, and that only authorized users can add public keys to a keyset.
Structure is documented below.
"""
if description is not None:
pulumi.set(__self__, "description", description)
if labels is not None:
pulumi.set(__self__, "labels", labels)
if name is not None:
pulumi.set(__self__, "name", name)
if project is not None:
pulumi.set(__self__, "project", project)
if public_keys is not None:
pulumi.set(__self__, "public_keys", public_keys)
@property
@pulumi.getter
def description(self) -> Optional[pulumi.Input[str]]:
"""
A human-readable description of the resource.
"""
return pulumi.get(self, "description")
@description.setter
def description(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "description", value)
@property
@pulumi.getter
def labels(self) -> Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]]:
"""
Set of label tags associated with the EdgeCache resource.
"""
return pulumi.get(self, "labels")
@labels.setter
def labels(self, value: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]]):
pulumi.set(self, "labels", value)
@property
@pulumi.getter
def name(self) -> Optional[pulumi.Input[str]]:
"""
Name of the resource; provided by the client when the resource is created.
The name must be 1-64 characters long, and match the regular expression [a-zA-Z][a-zA-Z0-9_-]* which means the first character must be a letter,
and all following characters must be a dash, underscore, letter or digit.
"""
return pulumi.get(self, "name")
@name.setter
def name(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "name", value)
@property
@pulumi.getter
def project(self) -> Optional[pulumi.Input[str]]:
"""
The ID of the project in which the resource belongs.
If it is not provided, the provider project is used.
"""
return pulumi.get(self, "project")
@project.setter
def project(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "project", value)
@property
@pulumi.getter(name="publicKeys")
def public_keys(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['EdgeCacheKeysetPublicKeyArgs']]]]:
"""
An ordered list of Ed25519 public keys to use for validating signed requests.
You must specify at least one (1) key, and may have up to three (3) keys.
Ed25519 public keys are not secret, and only allow Google to validate a request was signed by your corresponding private key.
You should ensure that the private key is kept secret, and that only authorized users can add public keys to a keyset.
Structure is documented below.
"""
return pulumi.get(self, "public_keys")
@public_keys.setter
def public_keys(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['EdgeCacheKeysetPublicKeyArgs']]]]):
pulumi.set(self, "public_keys", value)
class EdgeCacheKeyset(pulumi.CustomResource):
@overload
def __init__(__self__,
resource_name: str,
opts: Optional[pulumi.ResourceOptions] = None,
description: Optional[pulumi.Input[str]] = None,
labels: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]] = None,
name: Optional[pulumi.Input[str]] = None,
project: Optional[pulumi.Input[str]] = None,
public_keys: Optional[pulumi.Input[Sequence[pulumi.Input[pulumi.InputType['EdgeCacheKeysetPublicKeyArgs']]]]] = None,
__props__=None):
"""
EdgeCacheKeyset represents a collection of public keys used for validating signed requests.
> **Warning:** All arguments including `public_key.public_key.value` will be stored in the raw
state as plain-text. [Read more about sensitive data in state](https://www.terraform.io/docs/state/sensitive-data.html).
## Example Usage
### Network Services Edge Cache Keyset Basic
```python
import pulumi
import pulumi_gcp as gcp
default = gcp.networkservices.EdgeCacheKeyset("default",
description="The default keyset",
public_keys=[
gcp.networkservices.EdgeCacheKeysetPublicKeyArgs(
id="my-public-key",
value="FHsTyFHNmvNpw4o7-rp-M1yqMyBF8vXSBRkZtkQ0RKY",
),
gcp.networkservices.EdgeCacheKeysetPublicKeyArgs(
id="my-public-key-2",
value="hzd03llxB1u5FOLKFkZ6_wCJqC7jtN0bg7xlBqS6WVM",
),
])
```
## Import
EdgeCacheKeyset can be imported using any of these accepted formats
```sh
$ pulumi import gcp:networkservices/edgeCacheKeyset:EdgeCacheKeyset default projects/{{project}}/locations/global/edgeCacheKeysets/{{name}}
```
```sh
$ pulumi import gcp:networkservices/edgeCacheKeyset:EdgeCacheKeyset default {{project}}/{{name}}
```
```sh
$ pulumi import gcp:networkservices/edgeCacheKeyset:EdgeCacheKeyset default {{name}}
```
:param str resource_name: The name of the resource.
:param pulumi.ResourceOptions opts: Options for the resource.
:param pulumi.Input[str] description: A human-readable description of the resource.
:param pulumi.Input[Mapping[str, pulumi.Input[str]]] labels: Set of label tags associated with the EdgeCache resource.
:param pulumi.Input[str] name: Name of the resource; provided by the client when the resource is created.
The name must be 1-64 characters long, and match the regular expression [a-zA-Z][a-zA-Z0-9_-]* which means the first character must be a letter,
and all following characters must be a dash, underscore, letter or digit.
:param pulumi.Input[str] project: The ID of the project in which the resource belongs.
If it is not provided, the provider project is used.
:param pulumi.Input[Sequence[pulumi.Input[pulumi.InputType['EdgeCacheKeysetPublicKeyArgs']]]] public_keys: An ordered list of Ed25519 public keys to use for validating signed requests.
You must specify at least one (1) key, and may have up to three (3) keys.
Ed25519 public keys are not secret, and only allow Google to validate a request was signed by your corresponding private key.
You should ensure that the private key is kept secret, and that only authorized users can add public keys to a keyset.
Structure is documented below.
"""
...
@overload
def __init__(__self__,
resource_name: str,
args: EdgeCacheKeysetArgs,
opts: Optional[pulumi.ResourceOptions] = None):
"""
EdgeCacheKeyset represents a collection of public keys used for validating signed requests.
> **Warning:** All arguments including `public_key.public_key.value` will be stored in the raw
state as plain-text. [Read more about sensitive data in state](https://www.terraform.io/docs/state/sensitive-data.html).
## Example Usage
### Network Services Edge Cache Keyset Basic
```python
import pulumi
import pulumi_gcp as gcp
default = gcp.networkservices.EdgeCacheKeyset("default",
description="The default keyset",
public_keys=[
gcp.networkservices.EdgeCacheKeysetPublicKeyArgs(
id="my-public-key",
value="FHsTyFHNmvNpw4o7-rp-M1yqMyBF8vXSBRkZtkQ0RKY",
),
gcp.networkservices.EdgeCacheKeysetPublicKeyArgs(
id="my-public-key-2",
value="hzd03llxB1u5FOLKFkZ6_wCJqC7jtN0bg7xlBqS6WVM",
),
])
```
## Import
EdgeCacheKeyset can be imported using any of these accepted formats
```sh
$ pulumi import gcp:networkservices/edgeCacheKeyset:EdgeCacheKeyset default projects/{{project}}/locations/global/edgeCacheKeysets/{{name}}
```
```sh
$ pulumi import gcp:networkservices/edgeCacheKeyset:EdgeCacheKeyset default {{project}}/{{name}}
```
```sh
$ pulumi import gcp:networkservices/edgeCacheKeyset:EdgeCacheKeyset default {{name}}
```
:param str resource_name: The name of the resource.
:param EdgeCacheKeysetArgs args: The arguments to use to populate this resource's properties.
:param pulumi.ResourceOptions opts: Options for the resource.
"""
...
def __init__(__self__, resource_name: str, *args, **kwargs):
resource_args, opts = _utilities.get_resource_args_opts(EdgeCacheKeysetArgs, pulumi.ResourceOptions, *args, **kwargs)
if resource_args is not None:
__self__._internal_init(resource_name, opts, **resource_args.__dict__)
else:
__self__._internal_init(resource_name, *args, **kwargs)
def _internal_init(__self__,
resource_name: str,
opts: Optional[pulumi.ResourceOptions] = None,
description: Optional[pulumi.Input[str]] = None,
labels: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]] = None,
name: Optional[pulumi.Input[str]] = None,
project: Optional[pulumi.Input[str]] = None,
public_keys: Optional[pulumi.Input[Sequence[pulumi.Input[pulumi.InputType['EdgeCacheKeysetPublicKeyArgs']]]]] = None,
__props__=None):
if opts is None:
opts = pulumi.ResourceOptions()
if not isinstance(opts, pulumi.ResourceOptions):
raise TypeError('Expected resource options to be a ResourceOptions instance')
if opts.version is None:
opts.version = _utilities.get_version()
if opts.id is None:
if __props__ is not None:
raise TypeError('__props__ is only valid when passed in combination with a valid opts.id to get an existing resource')
__props__ = EdgeCacheKeysetArgs.__new__(EdgeCacheKeysetArgs)
__props__.__dict__["description"] = description
__props__.__dict__["labels"] = labels
__props__.__dict__["name"] = name
__props__.__dict__["project"] = project
if public_keys is None and not opts.urn:
raise TypeError("Missing required property 'public_keys'")
__props__.__dict__["public_keys"] = public_keys
super(EdgeCacheKeyset, __self__).__init__(
'gcp:networkservices/edgeCacheKeyset:EdgeCacheKeyset',
resource_name,
__props__,
opts)
@staticmethod
def get(resource_name: str,
id: pulumi.Input[str],
opts: Optional[pulumi.ResourceOptions] = None,
description: Optional[pulumi.Input[str]] = None,
labels: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]] = None,
name: Optional[pulumi.Input[str]] = None,
project: Optional[pulumi.Input[str]] = None,
public_keys: Optional[pulumi.Input[Sequence[pulumi.Input[pulumi.InputType['EdgeCacheKeysetPublicKeyArgs']]]]] = None) -> 'EdgeCacheKeyset':
"""
Get an existing EdgeCacheKeyset resource's state with the given name, id, and optional extra
properties used to qualify the lookup.
:param str resource_name: The unique name of the resulting resource.
:param pulumi.Input[str] id: The unique provider ID of the resource to lookup.
:param pulumi.ResourceOptions opts: Options for the resource.
:param pulumi.Input[str] description: A human-readable description of the resource.
:param pulumi.Input[Mapping[str, pulumi.Input[str]]] labels: Set of label tags associated with the EdgeCache resource.
:param pulumi.Input[str] name: Name of the resource; provided by the client when the resource is created.
The name must be 1-64 characters long, and match the regular expression [a-zA-Z][a-zA-Z0-9_-]* which means the first character must be a letter,
and all following characters must be a dash, underscore, letter or digit.
:param pulumi.Input[str] project: The ID of the project in which the resource belongs.
If it is not provided, the provider project is used.
:param pulumi.Input[Sequence[pulumi.Input[pulumi.InputType['EdgeCacheKeysetPublicKeyArgs']]]] public_keys: An ordered list of Ed25519 public keys to use for validating signed requests.
You must specify at least one (1) key, and may have up to three (3) keys.
Ed25519 public keys are not secret, and only allow Google to validate a request was signed by your corresponding private key.
You should ensure that the private key is kept secret, and that only authorized users can add public keys to a keyset.
Structure is documented below.
"""
opts = pulumi.ResourceOptions.merge(opts, pulumi.ResourceOptions(id=id))
__props__ = _EdgeCacheKeysetState.__new__(_EdgeCacheKeysetState)
__props__.__dict__["description"] = description
__props__.__dict__["labels"] = labels
__props__.__dict__["name"] = name
__props__.__dict__["project"] = project
__props__.__dict__["public_keys"] = public_keys
return EdgeCacheKeyset(resource_name, opts=opts, __props__=__props__)
@property
@pulumi.getter
def description(self) -> pulumi.Output[Optional[str]]:
"""
A human-readable description of the resource.
"""
return pulumi.get(self, "description")
@property
@pulumi.getter
def labels(self) -> pulumi.Output[Optional[Mapping[str, str]]]:
"""
Set of label tags associated with the EdgeCache resource.
"""
return pulumi.get(self, "labels")
@property
@pulumi.getter
def name(self) -> pulumi.Output[str]:
"""
Name of the resource; provided by the client when the resource is created.
The name must be 1-64 characters long, and match the regular expression [a-zA-Z][a-zA-Z0-9_-]* which means the first character must be a letter,
and all following characters must be a dash, underscore, letter or digit.
"""
return pulumi.get(self, "name")
@property
@pulumi.getter
def project(self) -> pulumi.Output[str]:
"""
The ID of the project in which the resource belongs.
If it is not provided, the provider project is used.
"""
return pulumi.get(self, "project")
@property
@pulumi.getter(name="publicKeys")
def public_keys(self) -> pulumi.Output[Sequence['outputs.EdgeCacheKeysetPublicKey']]:
"""
An ordered list of Ed25519 public keys to use for validating signed requests.
You must specify at least one (1) key, and may have up to three (3) keys.
Ed25519 public keys are not secret, and only allow Google to validate a request was signed by your corresponding private key.
You should ensure that the private key is kept secret, and that only authorized users can add public keys to a keyset.
Structure is documented below.
"""
return pulumi.get(self, "public_keys")
| 48.739316 | 192 | 0.651995 | 2,752 | 22,810 | 5.289608 | 0.090116 | 0.071787 | 0.051934 | 0.040805 | 0.87044 | 0.860892 | 0.841176 | 0.828193 | 0.824689 | 0.821186 | 0 | 0.00886 | 0.257782 | 22,810 | 467 | 193 | 48.843683 | 0.850975 | 0.499562 | 0 | 0.721951 | 1 | 0 | 0.097752 | 0.034076 | 0 | 0 | 0 | 0 | 0 | 1 | 0.156098 | false | 0.004878 | 0.034146 | 0 | 0.282927 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
49b6fded8408674736387fefa717d1fec9843003 | 9,407 | py | Python | natasha/tests.py | glibin/natasha | 4f5c153f754759c189779f9879decd8d218356af | [
"MIT"
] | 1 | 2020-01-16T14:02:01.000Z | 2020-01-16T14:02:01.000Z | natasha/tests.py | glibin/natasha | 4f5c153f754759c189779f9879decd8d218356af | [
"MIT"
] | null | null | null | natasha/tests.py | glibin/natasha | 4f5c153f754759c189779f9879decd8d218356af | [
"MIT"
] | null | null | null | import unittest
import natasha
class BaseTestCase(unittest.TestCase):
def setUp(self):
self.combinator = natasha.Combinator(natasha.DEFAULT_GRAMMARS)
class PersonGrammarsTestCase(BaseTestCase):
def test_full(self):
grammar, rule, _ = next(self.combinator.extract('Шерер Анна Павловна'))
self.assertEqual(grammar, natasha.Person)
self.assertEqual(rule, 'Full')
def test_full_reversed(self):
grammar, rule, _ = next(self.combinator.extract('Анна Павловна Шерер'))
self.assertEqual(grammar, natasha.Person)
self.assertEqual(rule, 'FullReversed')
def test_firstname_and_lastname(self):
grammar, rule, _ = next(self.combinator.extract('Анна Шерер'))
self.assertEqual(grammar, natasha.Person)
self.assertEqual(rule, 'FisrtnameAndLastname')
def test_lastname_and_firstname(self):
grammar, rule, _ = next(self.combinator.extract('Шерер Анна'))
self.assertEqual(grammar, natasha.Person)
self.assertEqual(rule, 'LastnameAndFirstname')
def test_lastname(self):
grammar, rule, _ = next(self.combinator.extract('Шерер'))
self.assertEqual(grammar, natasha.Person)
self.assertEqual(rule, 'Lastname')
def test_firstname(self):
grammar, rule, _ = next(self.combinator.extract('Анна'))
self.assertEqual(grammar, natasha.Person)
self.assertEqual(rule, 'Firstname')
def test_initials_and_lastname(self):
grammar, rule, _ = next(self.combinator.extract('в имении Л. А. Раневской'))
self.assertEqual(grammar, natasha.Person)
self.assertEqual(rule, 'InitialsAndLastname')
class DateTestCase(BaseTestCase):
def test_full(self):
grammar, rule, _ = next(self.combinator.extract('21 мая 1996 года'))
self.assertEqual(grammar, natasha.Date)
self.assertEqual(rule, 'Full')
def test_full_with_digits(self):
grammar, rule, _ = next(self.combinator.extract('21/05/1996'))
self.assertEqual(grammar, natasha.Date)
self.assertEqual(rule, 'FullWithDigits')
grammar, rule, _ = next(self.combinator.extract('21 05 1996'))
self.assertEqual(grammar, natasha.Date)
self.assertEqual(rule, 'FullWithDigits')
def test_day_and_month(self):
grammar, rule, _ = next(self.combinator.extract('21 мая'))
self.assertEqual(grammar, natasha.Date)
self.assertEqual(rule, 'DayAndMonth')
def test_year(self):
grammar, rule, ((_, match, *_), *_) = next(self.combinator.extract('21 год'))
self.assertEqual(grammar, natasha.Date)
self.assertEqual(type(match), int)
self.assertEqual(rule, 'Year')
def test_year_float(self):
grammar, rule, ((_, match, *_), *_) = next(self.combinator.extract('1.5 года'))
self.assertEqual(grammar, natasha.Date)
self.assertEqual(type(match), float)
self.assertEqual(rule, 'Year')
def test_partial_year(self):
grammar, rule, _ = list(self.combinator.extract('в конце 2015 года'))[-1]
self.assertEqual(grammar, natasha.Date)
self.assertEqual(rule, 'PartialYearObject')
def test_partial_month(self):
grammar, rule, _ = next(self.combinator.extract('в конце мая'))
self.assertEqual(grammar, natasha.Date)
self.assertEqual(rule, 'PartialMonthObject')
def test_month(self):
grammar, rule, _ = next(self.combinator.extract('мая'))
self.assertEqual(grammar, natasha.Date)
self.assertEqual(rule, 'Month')
def test_day_of_week(self):
grammar, rule, _ = next(self.combinator.extract('в пятницу'))
self.assertEqual(grammar, natasha.Date)
self.assertEqual(rule, 'DayOfWeek')
def test_day_range(self):
grammar, rule, _ = next(self.combinator.extract('18-19 ноября'))
self.assertEqual(grammar, natasha.Date)
self.assertEqual(rule, 'DayRange')
def test_year_range(self):
grammar, rule, _ = next(self.combinator.extract('18-20 лет'))
self.assertEqual(grammar, natasha.Date)
self.assertEqual(rule, 'YearRange')
class GeoTestCase(BaseTestCase):
def test_federal_district(self):
grammar, rule, _ = next(self.combinator.extract('северо-западный федеральный округ'))
self.assertEqual(grammar, natasha.Geo)
self.assertEqual(rule, 'FederalDistrict')
def test_federal_district_abbr(self):
grammar, rule, _ = next(self.combinator.extract('северо-западный ФО'))
self.assertEqual(grammar, natasha.Geo)
self.assertEqual(rule, 'FederalDistrictAbbr')
def test_region(self):
grammar, rule, _ = next(self.combinator.extract('северо-западная область'))
self.assertEqual(grammar, natasha.Geo)
self.assertEqual(rule, 'Region')
with self.assertRaises(StopIteration):
next(self.combinator.extract('северо-западный область'))
def test_complex_object(self):
grammar, rule, _ = next(self.combinator.extract('северный кипр'))
self.assertEqual(grammar, natasha.Geo)
self.assertEqual(rule, 'ComplexObject')
with self.assertRaises(StopIteration):
next(self.combinator.extract('северная кипр'))
def test_partial_object(self):
grammar, rule, _ = next(self.combinator.extract('на юго-западе кипра'))
self.assertEqual(grammar, natasha.Geo)
self.assertEqual(rule, 'PartialObject')
def test_object(self):
grammar, rule, _ = next(self.combinator.extract('Москва́'))
self.assertEqual(grammar, natasha.Geo)
self.assertEqual(rule, 'Object')
class MoneyTestCase(BaseTestCase):
def test_int_object_with_prefix(self):
grammar, rule, ((_, match, *_), *_) = next(self.combinator.extract('1 миллион долларов'))
self.assertEqual(grammar, natasha.Money)
self.assertEqual(type(match), int)
self.assertEqual(rule, 'ObjectWithPrefix')
def test_int_object_with_abbr_prefix(self):
grammar, rule, ((_, match, *_), *_) = next(self.combinator.extract('1 млрд. долларов'))
self.assertEqual(grammar, natasha.Money)
self.assertEqual(type(match), int)
self.assertEqual(rule, 'ObjectWithPrefix')
def test_float_object_with_prefix(self):
grammar, rule, ((_, match, *_), *_) = next(self.combinator.extract('1.2 миллиона долларов'))
self.assertEqual(grammar, natasha.Money)
self.assertEqual(type(match), float)
self.assertEqual(rule, 'ObjectWithPrefix')
def test_float_object_with_abbr_prefix(self):
grammar, rule, ((_, match, *_), *_) = next(self.combinator.extract('1.2 млрд. долларов'))
self.assertEqual(grammar, natasha.Money)
self.assertEqual(type(match), float)
self.assertEqual(rule, 'ObjectWithPrefix')
def test_int_object(self):
grammar, rule, ((_, match, *_), *_) = next(self.combinator.extract('10 долларов'))
self.assertEqual(grammar, natasha.Money)
self.assertEqual(type(match), int)
self.assertEqual(rule, 'Object')
def test_float_object(self):
grammar, rule, ((_, match, *_), *_) = next(self.combinator.extract('1.5 рубля'))
self.assertEqual(grammar, natasha.Money)
self.assertEqual(type(match), float)
self.assertEqual(rule, 'Object')
def test_object_without_actual_number(self):
grammar, rule, _ = next(self.combinator.extract('миллион долларов'))
self.assertEqual(grammar, natasha.Money)
self.assertEqual(rule, 'ObjectWithoutActualNumber')
def test_hand_written_numbers(self):
grammar, rule, ((_, match, *_), *_) = next(self.combinator.extract('сто рублей'))
self.assertEqual(match, 'сто')
self.assertEqual(grammar, natasha.Money)
self.assertEqual(rule, 'HandwrittenNumber')
def test_hand_written_numbers_with_prefix(self):
grammar, rule, ((_, match, *_), *_) = next(self.combinator.extract('два миллиона долларов'))
self.assertEqual(match, 'два')
self.assertEqual(grammar, natasha.Money)
self.assertEqual(rule, 'HandwrittenNumberWithPrefix')
grammar, rule, ((_, head, *_), (_, tail, *_), *_) = next(self.combinator.extract('семьдесят пять тысяч рублей'))
self.assertEqual(head, 'семьдесят')
self.assertEqual(tail, 'пять')
self.assertEqual(grammar, natasha.Money)
self.assertEqual(rule, 'HandwrittenNumberWithPrefix')
class OrganisationTestCase(BaseTestCase):
def test_official_abbr_quoted(self):
grammar, rule, _ = next(self.combinator.extract('ПАО «Газпром»'))
self.assertEqual(grammar, natasha.Organisation)
self.assertEqual(rule, 'OfficialAbbrQuoted')
def test_abbr(self):
grammar, rule, _ = next(self.combinator.extract('МВД'))
self.assertEqual(grammar, natasha.Organisation)
self.assertEqual(rule, 'Abbr')
def test_individual_entrepreneur(self):
grammar, rule, _ = list(self.combinator.extract('ИП Иванов Иван Иванович'))[-1]
self.assertEqual(grammar, natasha.Organisation)
self.assertEqual(rule, 'IndividualEntrepreneur')
def test_simple_latin(self):
grammar, rule, _ = list(self.combinator.extract('агентство Bloomberg'))[-1]
self.assertEqual(grammar, natasha.Organisation)
self.assertEqual(rule, 'SimpleLatin')
| 41.808889 | 120 | 0.673435 | 1,019 | 9,407 | 6.069676 | 0.154073 | 0.21827 | 0.139208 | 0.182862 | 0.773969 | 0.762005 | 0.751981 | 0.714794 | 0.448828 | 0.222959 | 0 | 0.007009 | 0.196131 | 9,407 | 224 | 121 | 41.995536 | 0.8105 | 0 | 0 | 0.366667 | 0 | 0 | 0.117891 | 0.010737 | 0 | 0 | 0 | 0 | 0.511111 | 1 | 0.211111 | false | 0 | 0.011111 | 0 | 0.255556 | 0 | 0 | 0 | 0 | null | 1 | 0 | 1 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
b71f9193e9e540e57f57a4c3fc4569e51a446057 | 39,610 | py | Python | accelbyte_py_sdk/api/iam/wrappers/_roles.py | AccelByte/accelbyte-python-sdk | dcd311fad111c59da828278975340fb92e0f26f7 | [
"MIT"
] | null | null | null | accelbyte_py_sdk/api/iam/wrappers/_roles.py | AccelByte/accelbyte-python-sdk | dcd311fad111c59da828278975340fb92e0f26f7 | [
"MIT"
] | 1 | 2021-10-13T03:46:58.000Z | 2021-10-13T03:46:58.000Z | accelbyte_py_sdk/api/iam/wrappers/_roles.py | AccelByte/accelbyte-python-sdk | dcd311fad111c59da828278975340fb92e0f26f7 | [
"MIT"
] | null | null | null | # Copyright (c) 2021 AccelByte Inc. All Rights Reserved.
# This is licensed software from AccelByte Inc, for limitations
# and restrictions contact your company contract manager.
#
# Code generated. DO NOT EDIT!
# template file: justice_py_sdk_codegen/__main__.py
# pylint: disable=duplicate-code
# pylint: disable=line-too-long
# pylint: disable=missing-function-docstring
# pylint: disable=missing-function-docstring
# pylint: disable=missing-module-docstring
# pylint: disable=too-many-arguments
# pylint: disable=too-many-branches
# pylint: disable=too-many-instance-attributes
# pylint: disable=too-many-lines
# pylint: disable=too-many-locals
# pylint: disable=too-many-public-methods
# pylint: disable=too-many-return-statements
# pylint: disable=too-many-statements
# pylint: disable=unused-import
from typing import Any, Dict, List, Optional, Tuple, Union
from ....core import HeaderStr
from ....core import get_namespace as get_services_namespace
from ....core import run_request
from ....core import run_request_async
from ....core import same_doc_as
from ..models import AccountcommonPermissions
from ..models import AccountcommonPermissionsV3
from ..models import AccountcommonRole
from ..models import AccountcommonRoleV3
from ..models import ModelAssignUserV4Request
from ..models import ModelAssignedUserV4Response
from ..models import ModelListAssignedUsersV4Response
from ..models import ModelListRoleV4Response
from ..models import ModelRevokeUserV4Request
from ..models import ModelRoleAdminStatusResponse
from ..models import ModelRoleAdminStatusResponseV3
from ..models import ModelRoleCreateRequest
from ..models import ModelRoleCreateV3Request
from ..models import ModelRoleManagersRequest
from ..models import ModelRoleManagersRequestV3
from ..models import ModelRoleManagersResponse
from ..models import ModelRoleManagersResponsesV3
from ..models import ModelRoleMembersRequest
from ..models import ModelRoleMembersRequestV3
from ..models import ModelRoleMembersResponse
from ..models import ModelRoleMembersResponseV3
from ..models import ModelRoleNamesResponseV3
from ..models import ModelRoleResponse
from ..models import ModelRoleResponseV3
from ..models import ModelRoleResponseWithManagers
from ..models import ModelRoleResponseWithManagersAndPaginationV3
from ..models import ModelRoleUpdateRequest
from ..models import ModelRoleUpdateRequestV3
from ..models import ModelRoleV4Request
from ..models import ModelRoleV4Response
from ..models import ModelUpdatePermissionScheduleRequest
from ..models import RestErrorResponse
from ..models import RestapiErrorResponse
from ..operations.roles import AddRoleManagers
from ..operations.roles import AddRoleMembers
from ..operations.roles import AddRolePermission
from ..operations.roles import AdminAddRoleManagersV3
from ..operations.roles import AdminAddRoleMembersV3
from ..operations.roles import AdminAddRolePermissionsV3
from ..operations.roles import AdminAddRolePermissionsV4
from ..operations.roles import AdminAssignUserToRoleV4
from ..operations.roles import AdminCreateRoleV3
from ..operations.roles import AdminCreateRoleV4
from ..operations.roles import AdminDeleteRolePermissionV3
from ..operations.roles import AdminDeleteRolePermissionsV3
from ..operations.roles import AdminDeleteRolePermissionsV4
from ..operations.roles import AdminDeleteRoleV3
from ..operations.roles import AdminDeleteRoleV4
from ..operations.roles import AdminGetRoleAdminStatusV3
from ..operations.roles import AdminGetRoleManagersV3
from ..operations.roles import AdminGetRoleMembersV3
from ..operations.roles import AdminGetRoleV3
from ..operations.roles import AdminGetRoleV4
from ..operations.roles import AdminGetRolesV3
from ..operations.roles import AdminGetRolesV4
from ..operations.roles import AdminListAssignedUsersV4
from ..operations.roles import AdminRemoveRoleAdminV3
from ..operations.roles import AdminRemoveRoleManagersV3
from ..operations.roles import AdminRemoveRoleMembersV3
from ..operations.roles import AdminRevokeUserFromRoleV4
from ..operations.roles import AdminUpdateAdminRoleStatusV3
from ..operations.roles import AdminUpdateRolePermissionsV3
from ..operations.roles import AdminUpdateRolePermissionsV4
from ..operations.roles import AdminUpdateRoleV3
from ..operations.roles import AdminUpdateRoleV4
from ..operations.roles import CreateRole
from ..operations.roles import DeleteRole
from ..operations.roles import DeleteRolePermission
from ..operations.roles import GetRole
from ..operations.roles import GetRoleAdminStatus
from ..operations.roles import GetRoleManagers
from ..operations.roles import GetRoleMembers
from ..operations.roles import GetRoles
from ..operations.roles import PublicGetRoleV3
from ..operations.roles import PublicGetRolesV3
from ..operations.roles import RemoveRoleAdmin
from ..operations.roles import RemoveRoleManagers
from ..operations.roles import RemoveRoleMembers
from ..operations.roles import SetRoleAsAdmin
from ..operations.roles import UpdateRole
from ..operations.roles import UpdateRolePermissions
@same_doc_as(AddRoleManagers)
def add_role_managers(body: ModelRoleManagersRequest, role_id: str, x_additional_headers: Optional[Dict[str, str]] = None, **kwargs):
request = AddRoleManagers.create(
body=body,
role_id=role_id,
)
return run_request(request, additional_headers=x_additional_headers, **kwargs)
@same_doc_as(AddRoleManagers)
async def add_role_managers_async(body: ModelRoleManagersRequest, role_id: str, x_additional_headers: Optional[Dict[str, str]] = None, **kwargs):
request = AddRoleManagers.create(
body=body,
role_id=role_id,
)
return await run_request_async(request, additional_headers=x_additional_headers, **kwargs)
@same_doc_as(AddRoleMembers)
def add_role_members(body: ModelRoleMembersRequest, role_id: str, x_additional_headers: Optional[Dict[str, str]] = None, **kwargs):
request = AddRoleMembers.create(
body=body,
role_id=role_id,
)
return run_request(request, additional_headers=x_additional_headers, **kwargs)
@same_doc_as(AddRoleMembers)
async def add_role_members_async(body: ModelRoleMembersRequest, role_id: str, x_additional_headers: Optional[Dict[str, str]] = None, **kwargs):
request = AddRoleMembers.create(
body=body,
role_id=role_id,
)
return await run_request_async(request, additional_headers=x_additional_headers, **kwargs)
@same_doc_as(AddRolePermission)
def add_role_permission(action: int, body: ModelUpdatePermissionScheduleRequest, resource: str, role_id: str, x_additional_headers: Optional[Dict[str, str]] = None, **kwargs):
request = AddRolePermission.create(
action=action,
body=body,
resource=resource,
role_id=role_id,
)
return run_request(request, additional_headers=x_additional_headers, **kwargs)
@same_doc_as(AddRolePermission)
async def add_role_permission_async(action: int, body: ModelUpdatePermissionScheduleRequest, resource: str, role_id: str, x_additional_headers: Optional[Dict[str, str]] = None, **kwargs):
request = AddRolePermission.create(
action=action,
body=body,
resource=resource,
role_id=role_id,
)
return await run_request_async(request, additional_headers=x_additional_headers, **kwargs)
@same_doc_as(AdminAddRoleManagersV3)
def admin_add_role_managers_v3(body: ModelRoleManagersRequestV3, role_id: str, x_additional_headers: Optional[Dict[str, str]] = None, **kwargs):
request = AdminAddRoleManagersV3.create(
body=body,
role_id=role_id,
)
return run_request(request, additional_headers=x_additional_headers, **kwargs)
@same_doc_as(AdminAddRoleManagersV3)
async def admin_add_role_managers_v3_async(body: ModelRoleManagersRequestV3, role_id: str, x_additional_headers: Optional[Dict[str, str]] = None, **kwargs):
request = AdminAddRoleManagersV3.create(
body=body,
role_id=role_id,
)
return await run_request_async(request, additional_headers=x_additional_headers, **kwargs)
@same_doc_as(AdminAddRoleMembersV3)
def admin_add_role_members_v3(body: ModelRoleMembersRequestV3, role_id: str, x_additional_headers: Optional[Dict[str, str]] = None, **kwargs):
request = AdminAddRoleMembersV3.create(
body=body,
role_id=role_id,
)
return run_request(request, additional_headers=x_additional_headers, **kwargs)
@same_doc_as(AdminAddRoleMembersV3)
async def admin_add_role_members_v3_async(body: ModelRoleMembersRequestV3, role_id: str, x_additional_headers: Optional[Dict[str, str]] = None, **kwargs):
request = AdminAddRoleMembersV3.create(
body=body,
role_id=role_id,
)
return await run_request_async(request, additional_headers=x_additional_headers, **kwargs)
@same_doc_as(AdminAddRolePermissionsV3)
def admin_add_role_permissions_v3(body: AccountcommonPermissionsV3, role_id: str, x_additional_headers: Optional[Dict[str, str]] = None, **kwargs):
request = AdminAddRolePermissionsV3.create(
body=body,
role_id=role_id,
)
return run_request(request, additional_headers=x_additional_headers, **kwargs)
@same_doc_as(AdminAddRolePermissionsV3)
async def admin_add_role_permissions_v3_async(body: AccountcommonPermissionsV3, role_id: str, x_additional_headers: Optional[Dict[str, str]] = None, **kwargs):
request = AdminAddRolePermissionsV3.create(
body=body,
role_id=role_id,
)
return await run_request_async(request, additional_headers=x_additional_headers, **kwargs)
@same_doc_as(AdminAddRolePermissionsV4)
def admin_add_role_permissions_v4(body: AccountcommonPermissionsV3, role_id: str, x_additional_headers: Optional[Dict[str, str]] = None, **kwargs):
request = AdminAddRolePermissionsV4.create(
body=body,
role_id=role_id,
)
return run_request(request, additional_headers=x_additional_headers, **kwargs)
@same_doc_as(AdminAddRolePermissionsV4)
async def admin_add_role_permissions_v4_async(body: AccountcommonPermissionsV3, role_id: str, x_additional_headers: Optional[Dict[str, str]] = None, **kwargs):
request = AdminAddRolePermissionsV4.create(
body=body,
role_id=role_id,
)
return await run_request_async(request, additional_headers=x_additional_headers, **kwargs)
@same_doc_as(AdminAssignUserToRoleV4)
def admin_assign_user_to_role_v4(body: ModelAssignUserV4Request, role_id: str, x_additional_headers: Optional[Dict[str, str]] = None, **kwargs):
request = AdminAssignUserToRoleV4.create(
body=body,
role_id=role_id,
)
return run_request(request, additional_headers=x_additional_headers, **kwargs)
@same_doc_as(AdminAssignUserToRoleV4)
async def admin_assign_user_to_role_v4_async(body: ModelAssignUserV4Request, role_id: str, x_additional_headers: Optional[Dict[str, str]] = None, **kwargs):
request = AdminAssignUserToRoleV4.create(
body=body,
role_id=role_id,
)
return await run_request_async(request, additional_headers=x_additional_headers, **kwargs)
@same_doc_as(AdminCreateRoleV3)
def admin_create_role_v3(body: ModelRoleCreateV3Request, x_additional_headers: Optional[Dict[str, str]] = None, **kwargs):
request = AdminCreateRoleV3.create(
body=body,
)
return run_request(request, additional_headers=x_additional_headers, **kwargs)
@same_doc_as(AdminCreateRoleV3)
async def admin_create_role_v3_async(body: ModelRoleCreateV3Request, x_additional_headers: Optional[Dict[str, str]] = None, **kwargs):
request = AdminCreateRoleV3.create(
body=body,
)
return await run_request_async(request, additional_headers=x_additional_headers, **kwargs)
@same_doc_as(AdminCreateRoleV4)
def admin_create_role_v4(body: ModelRoleV4Request, x_additional_headers: Optional[Dict[str, str]] = None, **kwargs):
request = AdminCreateRoleV4.create(
body=body,
)
return run_request(request, additional_headers=x_additional_headers, **kwargs)
@same_doc_as(AdminCreateRoleV4)
async def admin_create_role_v4_async(body: ModelRoleV4Request, x_additional_headers: Optional[Dict[str, str]] = None, **kwargs):
request = AdminCreateRoleV4.create(
body=body,
)
return await run_request_async(request, additional_headers=x_additional_headers, **kwargs)
@same_doc_as(AdminDeleteRolePermissionV3)
def admin_delete_role_permission_v3(action: int, resource: str, role_id: str, x_additional_headers: Optional[Dict[str, str]] = None, **kwargs):
request = AdminDeleteRolePermissionV3.create(
action=action,
resource=resource,
role_id=role_id,
)
return run_request(request, additional_headers=x_additional_headers, **kwargs)
@same_doc_as(AdminDeleteRolePermissionV3)
async def admin_delete_role_permission_v3_async(action: int, resource: str, role_id: str, x_additional_headers: Optional[Dict[str, str]] = None, **kwargs):
request = AdminDeleteRolePermissionV3.create(
action=action,
resource=resource,
role_id=role_id,
)
return await run_request_async(request, additional_headers=x_additional_headers, **kwargs)
@same_doc_as(AdminDeleteRolePermissionsV3)
def admin_delete_role_permissions_v3(body: List[str], role_id: str, x_additional_headers: Optional[Dict[str, str]] = None, **kwargs):
request = AdminDeleteRolePermissionsV3.create(
body=body,
role_id=role_id,
)
return run_request(request, additional_headers=x_additional_headers, **kwargs)
@same_doc_as(AdminDeleteRolePermissionsV3)
async def admin_delete_role_permissions_v3_async(body: List[str], role_id: str, x_additional_headers: Optional[Dict[str, str]] = None, **kwargs):
request = AdminDeleteRolePermissionsV3.create(
body=body,
role_id=role_id,
)
return await run_request_async(request, additional_headers=x_additional_headers, **kwargs)
@same_doc_as(AdminDeleteRolePermissionsV4)
def admin_delete_role_permissions_v4(body: List[str], role_id: str, x_additional_headers: Optional[Dict[str, str]] = None, **kwargs):
request = AdminDeleteRolePermissionsV4.create(
body=body,
role_id=role_id,
)
return run_request(request, additional_headers=x_additional_headers, **kwargs)
@same_doc_as(AdminDeleteRolePermissionsV4)
async def admin_delete_role_permissions_v4_async(body: List[str], role_id: str, x_additional_headers: Optional[Dict[str, str]] = None, **kwargs):
request = AdminDeleteRolePermissionsV4.create(
body=body,
role_id=role_id,
)
return await run_request_async(request, additional_headers=x_additional_headers, **kwargs)
@same_doc_as(AdminDeleteRoleV3)
def admin_delete_role_v3(role_id: str, x_additional_headers: Optional[Dict[str, str]] = None, **kwargs):
request = AdminDeleteRoleV3.create(
role_id=role_id,
)
return run_request(request, additional_headers=x_additional_headers, **kwargs)
@same_doc_as(AdminDeleteRoleV3)
async def admin_delete_role_v3_async(role_id: str, x_additional_headers: Optional[Dict[str, str]] = None, **kwargs):
request = AdminDeleteRoleV3.create(
role_id=role_id,
)
return await run_request_async(request, additional_headers=x_additional_headers, **kwargs)
@same_doc_as(AdminDeleteRoleV4)
def admin_delete_role_v4(role_id: str, x_additional_headers: Optional[Dict[str, str]] = None, **kwargs):
request = AdminDeleteRoleV4.create(
role_id=role_id,
)
return run_request(request, additional_headers=x_additional_headers, **kwargs)
@same_doc_as(AdminDeleteRoleV4)
async def admin_delete_role_v4_async(role_id: str, x_additional_headers: Optional[Dict[str, str]] = None, **kwargs):
request = AdminDeleteRoleV4.create(
role_id=role_id,
)
return await run_request_async(request, additional_headers=x_additional_headers, **kwargs)
@same_doc_as(AdminGetRoleAdminStatusV3)
def admin_get_role_admin_status_v3(role_id: str, x_additional_headers: Optional[Dict[str, str]] = None, **kwargs):
request = AdminGetRoleAdminStatusV3.create(
role_id=role_id,
)
return run_request(request, additional_headers=x_additional_headers, **kwargs)
@same_doc_as(AdminGetRoleAdminStatusV3)
async def admin_get_role_admin_status_v3_async(role_id: str, x_additional_headers: Optional[Dict[str, str]] = None, **kwargs):
request = AdminGetRoleAdminStatusV3.create(
role_id=role_id,
)
return await run_request_async(request, additional_headers=x_additional_headers, **kwargs)
@same_doc_as(AdminGetRoleManagersV3)
def admin_get_role_managers_v3(role_id: str, after: Optional[str] = None, before: Optional[str] = None, limit: Optional[int] = None, x_additional_headers: Optional[Dict[str, str]] = None, **kwargs):
request = AdminGetRoleManagersV3.create(
role_id=role_id,
after=after,
before=before,
limit=limit,
)
return run_request(request, additional_headers=x_additional_headers, **kwargs)
@same_doc_as(AdminGetRoleManagersV3)
async def admin_get_role_managers_v3_async(role_id: str, after: Optional[str] = None, before: Optional[str] = None, limit: Optional[int] = None, x_additional_headers: Optional[Dict[str, str]] = None, **kwargs):
request = AdminGetRoleManagersV3.create(
role_id=role_id,
after=after,
before=before,
limit=limit,
)
return await run_request_async(request, additional_headers=x_additional_headers, **kwargs)
@same_doc_as(AdminGetRoleMembersV3)
def admin_get_role_members_v3(role_id: str, after: Optional[str] = None, before: Optional[str] = None, limit: Optional[int] = None, x_additional_headers: Optional[Dict[str, str]] = None, **kwargs):
request = AdminGetRoleMembersV3.create(
role_id=role_id,
after=after,
before=before,
limit=limit,
)
return run_request(request, additional_headers=x_additional_headers, **kwargs)
@same_doc_as(AdminGetRoleMembersV3)
async def admin_get_role_members_v3_async(role_id: str, after: Optional[str] = None, before: Optional[str] = None, limit: Optional[int] = None, x_additional_headers: Optional[Dict[str, str]] = None, **kwargs):
request = AdminGetRoleMembersV3.create(
role_id=role_id,
after=after,
before=before,
limit=limit,
)
return await run_request_async(request, additional_headers=x_additional_headers, **kwargs)
@same_doc_as(AdminGetRoleV3)
def admin_get_role_v3(role_id: str, x_additional_headers: Optional[Dict[str, str]] = None, **kwargs):
request = AdminGetRoleV3.create(
role_id=role_id,
)
return run_request(request, additional_headers=x_additional_headers, **kwargs)
@same_doc_as(AdminGetRoleV3)
async def admin_get_role_v3_async(role_id: str, x_additional_headers: Optional[Dict[str, str]] = None, **kwargs):
request = AdminGetRoleV3.create(
role_id=role_id,
)
return await run_request_async(request, additional_headers=x_additional_headers, **kwargs)
@same_doc_as(AdminGetRoleV4)
def admin_get_role_v4(role_id: str, x_additional_headers: Optional[Dict[str, str]] = None, **kwargs):
request = AdminGetRoleV4.create(
role_id=role_id,
)
return run_request(request, additional_headers=x_additional_headers, **kwargs)
@same_doc_as(AdminGetRoleV4)
async def admin_get_role_v4_async(role_id: str, x_additional_headers: Optional[Dict[str, str]] = None, **kwargs):
request = AdminGetRoleV4.create(
role_id=role_id,
)
return await run_request_async(request, additional_headers=x_additional_headers, **kwargs)
@same_doc_as(AdminGetRolesV3)
def admin_get_roles_v3(after: Optional[str] = None, before: Optional[str] = None, is_wildcard: Optional[bool] = None, limit: Optional[int] = None, x_additional_headers: Optional[Dict[str, str]] = None, **kwargs):
request = AdminGetRolesV3.create(
after=after,
before=before,
is_wildcard=is_wildcard,
limit=limit,
)
return run_request(request, additional_headers=x_additional_headers, **kwargs)
@same_doc_as(AdminGetRolesV3)
async def admin_get_roles_v3_async(after: Optional[str] = None, before: Optional[str] = None, is_wildcard: Optional[bool] = None, limit: Optional[int] = None, x_additional_headers: Optional[Dict[str, str]] = None, **kwargs):
request = AdminGetRolesV3.create(
after=after,
before=before,
is_wildcard=is_wildcard,
limit=limit,
)
return await run_request_async(request, additional_headers=x_additional_headers, **kwargs)
@same_doc_as(AdminGetRolesV4)
def admin_get_roles_v4(admin_role: Optional[bool] = None, is_wildcard: Optional[bool] = None, limit: Optional[int] = None, offset: Optional[int] = None, x_additional_headers: Optional[Dict[str, str]] = None, **kwargs):
request = AdminGetRolesV4.create(
admin_role=admin_role,
is_wildcard=is_wildcard,
limit=limit,
offset=offset,
)
return run_request(request, additional_headers=x_additional_headers, **kwargs)
@same_doc_as(AdminGetRolesV4)
async def admin_get_roles_v4_async(admin_role: Optional[bool] = None, is_wildcard: Optional[bool] = None, limit: Optional[int] = None, offset: Optional[int] = None, x_additional_headers: Optional[Dict[str, str]] = None, **kwargs):
request = AdminGetRolesV4.create(
admin_role=admin_role,
is_wildcard=is_wildcard,
limit=limit,
offset=offset,
)
return await run_request_async(request, additional_headers=x_additional_headers, **kwargs)
@same_doc_as(AdminListAssignedUsersV4)
def admin_list_assigned_users_v4(role_id: str, after: Optional[str] = None, before: Optional[str] = None, limit: Optional[int] = None, x_additional_headers: Optional[Dict[str, str]] = None, **kwargs):
request = AdminListAssignedUsersV4.create(
role_id=role_id,
after=after,
before=before,
limit=limit,
)
return run_request(request, additional_headers=x_additional_headers, **kwargs)
@same_doc_as(AdminListAssignedUsersV4)
async def admin_list_assigned_users_v4_async(role_id: str, after: Optional[str] = None, before: Optional[str] = None, limit: Optional[int] = None, x_additional_headers: Optional[Dict[str, str]] = None, **kwargs):
request = AdminListAssignedUsersV4.create(
role_id=role_id,
after=after,
before=before,
limit=limit,
)
return await run_request_async(request, additional_headers=x_additional_headers, **kwargs)
@same_doc_as(AdminRemoveRoleAdminV3)
def admin_remove_role_admin_v3(role_id: str, x_additional_headers: Optional[Dict[str, str]] = None, **kwargs):
request = AdminRemoveRoleAdminV3.create(
role_id=role_id,
)
return run_request(request, additional_headers=x_additional_headers, **kwargs)
@same_doc_as(AdminRemoveRoleAdminV3)
async def admin_remove_role_admin_v3_async(role_id: str, x_additional_headers: Optional[Dict[str, str]] = None, **kwargs):
request = AdminRemoveRoleAdminV3.create(
role_id=role_id,
)
return await run_request_async(request, additional_headers=x_additional_headers, **kwargs)
@same_doc_as(AdminRemoveRoleManagersV3)
def admin_remove_role_managers_v3(body: ModelRoleManagersRequestV3, role_id: str, x_additional_headers: Optional[Dict[str, str]] = None, **kwargs):
request = AdminRemoveRoleManagersV3.create(
body=body,
role_id=role_id,
)
return run_request(request, additional_headers=x_additional_headers, **kwargs)
@same_doc_as(AdminRemoveRoleManagersV3)
async def admin_remove_role_managers_v3_async(body: ModelRoleManagersRequestV3, role_id: str, x_additional_headers: Optional[Dict[str, str]] = None, **kwargs):
request = AdminRemoveRoleManagersV3.create(
body=body,
role_id=role_id,
)
return await run_request_async(request, additional_headers=x_additional_headers, **kwargs)
@same_doc_as(AdminRemoveRoleMembersV3)
def admin_remove_role_members_v3(body: ModelRoleMembersRequestV3, role_id: str, x_additional_headers: Optional[Dict[str, str]] = None, **kwargs):
request = AdminRemoveRoleMembersV3.create(
body=body,
role_id=role_id,
)
return run_request(request, additional_headers=x_additional_headers, **kwargs)
@same_doc_as(AdminRemoveRoleMembersV3)
async def admin_remove_role_members_v3_async(body: ModelRoleMembersRequestV3, role_id: str, x_additional_headers: Optional[Dict[str, str]] = None, **kwargs):
request = AdminRemoveRoleMembersV3.create(
body=body,
role_id=role_id,
)
return await run_request_async(request, additional_headers=x_additional_headers, **kwargs)
@same_doc_as(AdminRevokeUserFromRoleV4)
def admin_revoke_user_from_role_v4(body: ModelRevokeUserV4Request, role_id: str, x_additional_headers: Optional[Dict[str, str]] = None, **kwargs):
request = AdminRevokeUserFromRoleV4.create(
body=body,
role_id=role_id,
)
return run_request(request, additional_headers=x_additional_headers, **kwargs)
@same_doc_as(AdminRevokeUserFromRoleV4)
async def admin_revoke_user_from_role_v4_async(body: ModelRevokeUserV4Request, role_id: str, x_additional_headers: Optional[Dict[str, str]] = None, **kwargs):
request = AdminRevokeUserFromRoleV4.create(
body=body,
role_id=role_id,
)
return await run_request_async(request, additional_headers=x_additional_headers, **kwargs)
@same_doc_as(AdminUpdateAdminRoleStatusV3)
def admin_update_admin_role_status_v3(role_id: str, x_additional_headers: Optional[Dict[str, str]] = None, **kwargs):
request = AdminUpdateAdminRoleStatusV3.create(
role_id=role_id,
)
return run_request(request, additional_headers=x_additional_headers, **kwargs)
@same_doc_as(AdminUpdateAdminRoleStatusV3)
async def admin_update_admin_role_status_v3_async(role_id: str, x_additional_headers: Optional[Dict[str, str]] = None, **kwargs):
request = AdminUpdateAdminRoleStatusV3.create(
role_id=role_id,
)
return await run_request_async(request, additional_headers=x_additional_headers, **kwargs)
@same_doc_as(AdminUpdateRolePermissionsV3)
def admin_update_role_permissions_v3(body: AccountcommonPermissionsV3, role_id: str, x_additional_headers: Optional[Dict[str, str]] = None, **kwargs):
request = AdminUpdateRolePermissionsV3.create(
body=body,
role_id=role_id,
)
return run_request(request, additional_headers=x_additional_headers, **kwargs)
@same_doc_as(AdminUpdateRolePermissionsV3)
async def admin_update_role_permissions_v3_async(body: AccountcommonPermissionsV3, role_id: str, x_additional_headers: Optional[Dict[str, str]] = None, **kwargs):
request = AdminUpdateRolePermissionsV3.create(
body=body,
role_id=role_id,
)
return await run_request_async(request, additional_headers=x_additional_headers, **kwargs)
@same_doc_as(AdminUpdateRolePermissionsV4)
def admin_update_role_permissions_v4(body: AccountcommonPermissionsV3, role_id: str, x_additional_headers: Optional[Dict[str, str]] = None, **kwargs):
request = AdminUpdateRolePermissionsV4.create(
body=body,
role_id=role_id,
)
return run_request(request, additional_headers=x_additional_headers, **kwargs)
@same_doc_as(AdminUpdateRolePermissionsV4)
async def admin_update_role_permissions_v4_async(body: AccountcommonPermissionsV3, role_id: str, x_additional_headers: Optional[Dict[str, str]] = None, **kwargs):
request = AdminUpdateRolePermissionsV4.create(
body=body,
role_id=role_id,
)
return await run_request_async(request, additional_headers=x_additional_headers, **kwargs)
@same_doc_as(AdminUpdateRoleV3)
def admin_update_role_v3(body: ModelRoleUpdateRequestV3, role_id: str, x_additional_headers: Optional[Dict[str, str]] = None, **kwargs):
request = AdminUpdateRoleV3.create(
body=body,
role_id=role_id,
)
return run_request(request, additional_headers=x_additional_headers, **kwargs)
@same_doc_as(AdminUpdateRoleV3)
async def admin_update_role_v3_async(body: ModelRoleUpdateRequestV3, role_id: str, x_additional_headers: Optional[Dict[str, str]] = None, **kwargs):
request = AdminUpdateRoleV3.create(
body=body,
role_id=role_id,
)
return await run_request_async(request, additional_headers=x_additional_headers, **kwargs)
@same_doc_as(AdminUpdateRoleV4)
def admin_update_role_v4(body: ModelRoleV4Request, role_id: str, x_additional_headers: Optional[Dict[str, str]] = None, **kwargs):
request = AdminUpdateRoleV4.create(
body=body,
role_id=role_id,
)
return run_request(request, additional_headers=x_additional_headers, **kwargs)
@same_doc_as(AdminUpdateRoleV4)
async def admin_update_role_v4_async(body: ModelRoleV4Request, role_id: str, x_additional_headers: Optional[Dict[str, str]] = None, **kwargs):
request = AdminUpdateRoleV4.create(
body=body,
role_id=role_id,
)
return await run_request_async(request, additional_headers=x_additional_headers, **kwargs)
@same_doc_as(CreateRole)
def create_role(body: ModelRoleCreateRequest, x_additional_headers: Optional[Dict[str, str]] = None, **kwargs):
request = CreateRole.create(
body=body,
)
return run_request(request, additional_headers=x_additional_headers, **kwargs)
@same_doc_as(CreateRole)
async def create_role_async(body: ModelRoleCreateRequest, x_additional_headers: Optional[Dict[str, str]] = None, **kwargs):
request = CreateRole.create(
body=body,
)
return await run_request_async(request, additional_headers=x_additional_headers, **kwargs)
@same_doc_as(DeleteRole)
def delete_role(role_id: str, x_additional_headers: Optional[Dict[str, str]] = None, **kwargs):
request = DeleteRole.create(
role_id=role_id,
)
return run_request(request, additional_headers=x_additional_headers, **kwargs)
@same_doc_as(DeleteRole)
async def delete_role_async(role_id: str, x_additional_headers: Optional[Dict[str, str]] = None, **kwargs):
request = DeleteRole.create(
role_id=role_id,
)
return await run_request_async(request, additional_headers=x_additional_headers, **kwargs)
@same_doc_as(DeleteRolePermission)
def delete_role_permission(action: int, resource: str, role_id: str, x_additional_headers: Optional[Dict[str, str]] = None, **kwargs):
request = DeleteRolePermission.create(
action=action,
resource=resource,
role_id=role_id,
)
return run_request(request, additional_headers=x_additional_headers, **kwargs)
@same_doc_as(DeleteRolePermission)
async def delete_role_permission_async(action: int, resource: str, role_id: str, x_additional_headers: Optional[Dict[str, str]] = None, **kwargs):
request = DeleteRolePermission.create(
action=action,
resource=resource,
role_id=role_id,
)
return await run_request_async(request, additional_headers=x_additional_headers, **kwargs)
@same_doc_as(GetRole)
def get_role(role_id: str, x_additional_headers: Optional[Dict[str, str]] = None, **kwargs):
request = GetRole.create(
role_id=role_id,
)
return run_request(request, additional_headers=x_additional_headers, **kwargs)
@same_doc_as(GetRole)
async def get_role_async(role_id: str, x_additional_headers: Optional[Dict[str, str]] = None, **kwargs):
request = GetRole.create(
role_id=role_id,
)
return await run_request_async(request, additional_headers=x_additional_headers, **kwargs)
@same_doc_as(GetRoleAdminStatus)
def get_role_admin_status(role_id: str, x_additional_headers: Optional[Dict[str, str]] = None, **kwargs):
request = GetRoleAdminStatus.create(
role_id=role_id,
)
return run_request(request, additional_headers=x_additional_headers, **kwargs)
@same_doc_as(GetRoleAdminStatus)
async def get_role_admin_status_async(role_id: str, x_additional_headers: Optional[Dict[str, str]] = None, **kwargs):
request = GetRoleAdminStatus.create(
role_id=role_id,
)
return await run_request_async(request, additional_headers=x_additional_headers, **kwargs)
@same_doc_as(GetRoleManagers)
def get_role_managers(role_id: str, x_additional_headers: Optional[Dict[str, str]] = None, **kwargs):
request = GetRoleManagers.create(
role_id=role_id,
)
return run_request(request, additional_headers=x_additional_headers, **kwargs)
@same_doc_as(GetRoleManagers)
async def get_role_managers_async(role_id: str, x_additional_headers: Optional[Dict[str, str]] = None, **kwargs):
request = GetRoleManagers.create(
role_id=role_id,
)
return await run_request_async(request, additional_headers=x_additional_headers, **kwargs)
@same_doc_as(GetRoleMembers)
def get_role_members(role_id: str, x_additional_headers: Optional[Dict[str, str]] = None, **kwargs):
request = GetRoleMembers.create(
role_id=role_id,
)
return run_request(request, additional_headers=x_additional_headers, **kwargs)
@same_doc_as(GetRoleMembers)
async def get_role_members_async(role_id: str, x_additional_headers: Optional[Dict[str, str]] = None, **kwargs):
request = GetRoleMembers.create(
role_id=role_id,
)
return await run_request_async(request, additional_headers=x_additional_headers, **kwargs)
@same_doc_as(GetRoles)
def get_roles(is_wildcard: Optional[str] = None, x_additional_headers: Optional[Dict[str, str]] = None, **kwargs):
request = GetRoles.create(
is_wildcard=is_wildcard,
)
return run_request(request, additional_headers=x_additional_headers, **kwargs)
@same_doc_as(GetRoles)
async def get_roles_async(is_wildcard: Optional[str] = None, x_additional_headers: Optional[Dict[str, str]] = None, **kwargs):
request = GetRoles.create(
is_wildcard=is_wildcard,
)
return await run_request_async(request, additional_headers=x_additional_headers, **kwargs)
@same_doc_as(PublicGetRoleV3)
def public_get_role_v3(role_id: str, x_additional_headers: Optional[Dict[str, str]] = None, **kwargs):
request = PublicGetRoleV3.create(
role_id=role_id,
)
return run_request(request, additional_headers=x_additional_headers, **kwargs)
@same_doc_as(PublicGetRoleV3)
async def public_get_role_v3_async(role_id: str, x_additional_headers: Optional[Dict[str, str]] = None, **kwargs):
request = PublicGetRoleV3.create(
role_id=role_id,
)
return await run_request_async(request, additional_headers=x_additional_headers, **kwargs)
@same_doc_as(PublicGetRolesV3)
def public_get_roles_v3(after: Optional[str] = None, before: Optional[str] = None, is_wildcard: Optional[bool] = None, limit: Optional[int] = None, x_additional_headers: Optional[Dict[str, str]] = None, **kwargs):
request = PublicGetRolesV3.create(
after=after,
before=before,
is_wildcard=is_wildcard,
limit=limit,
)
return run_request(request, additional_headers=x_additional_headers, **kwargs)
@same_doc_as(PublicGetRolesV3)
async def public_get_roles_v3_async(after: Optional[str] = None, before: Optional[str] = None, is_wildcard: Optional[bool] = None, limit: Optional[int] = None, x_additional_headers: Optional[Dict[str, str]] = None, **kwargs):
request = PublicGetRolesV3.create(
after=after,
before=before,
is_wildcard=is_wildcard,
limit=limit,
)
return await run_request_async(request, additional_headers=x_additional_headers, **kwargs)
@same_doc_as(RemoveRoleAdmin)
def remove_role_admin(role_id: str, x_additional_headers: Optional[Dict[str, str]] = None, **kwargs):
request = RemoveRoleAdmin.create(
role_id=role_id,
)
return run_request(request, additional_headers=x_additional_headers, **kwargs)
@same_doc_as(RemoveRoleAdmin)
async def remove_role_admin_async(role_id: str, x_additional_headers: Optional[Dict[str, str]] = None, **kwargs):
request = RemoveRoleAdmin.create(
role_id=role_id,
)
return await run_request_async(request, additional_headers=x_additional_headers, **kwargs)
@same_doc_as(RemoveRoleManagers)
def remove_role_managers(body: ModelRoleManagersRequest, role_id: str, x_additional_headers: Optional[Dict[str, str]] = None, **kwargs):
request = RemoveRoleManagers.create(
body=body,
role_id=role_id,
)
return run_request(request, additional_headers=x_additional_headers, **kwargs)
@same_doc_as(RemoveRoleManagers)
async def remove_role_managers_async(body: ModelRoleManagersRequest, role_id: str, x_additional_headers: Optional[Dict[str, str]] = None, **kwargs):
request = RemoveRoleManagers.create(
body=body,
role_id=role_id,
)
return await run_request_async(request, additional_headers=x_additional_headers, **kwargs)
@same_doc_as(RemoveRoleMembers)
def remove_role_members(body: ModelRoleMembersRequest, role_id: str, x_additional_headers: Optional[Dict[str, str]] = None, **kwargs):
request = RemoveRoleMembers.create(
body=body,
role_id=role_id,
)
return run_request(request, additional_headers=x_additional_headers, **kwargs)
@same_doc_as(RemoveRoleMembers)
async def remove_role_members_async(body: ModelRoleMembersRequest, role_id: str, x_additional_headers: Optional[Dict[str, str]] = None, **kwargs):
request = RemoveRoleMembers.create(
body=body,
role_id=role_id,
)
return await run_request_async(request, additional_headers=x_additional_headers, **kwargs)
@same_doc_as(SetRoleAsAdmin)
def set_role_as_admin(role_id: str, x_additional_headers: Optional[Dict[str, str]] = None, **kwargs):
request = SetRoleAsAdmin.create(
role_id=role_id,
)
return run_request(request, additional_headers=x_additional_headers, **kwargs)
@same_doc_as(SetRoleAsAdmin)
async def set_role_as_admin_async(role_id: str, x_additional_headers: Optional[Dict[str, str]] = None, **kwargs):
request = SetRoleAsAdmin.create(
role_id=role_id,
)
return await run_request_async(request, additional_headers=x_additional_headers, **kwargs)
@same_doc_as(UpdateRole)
def update_role(body: ModelRoleUpdateRequest, role_id: str, x_additional_headers: Optional[Dict[str, str]] = None, **kwargs):
request = UpdateRole.create(
body=body,
role_id=role_id,
)
return run_request(request, additional_headers=x_additional_headers, **kwargs)
@same_doc_as(UpdateRole)
async def update_role_async(body: ModelRoleUpdateRequest, role_id: str, x_additional_headers: Optional[Dict[str, str]] = None, **kwargs):
request = UpdateRole.create(
body=body,
role_id=role_id,
)
return await run_request_async(request, additional_headers=x_additional_headers, **kwargs)
@same_doc_as(UpdateRolePermissions)
def update_role_permissions(body: AccountcommonPermissions, role_id: str, x_additional_headers: Optional[Dict[str, str]] = None, **kwargs):
request = UpdateRolePermissions.create(
body=body,
role_id=role_id,
)
return run_request(request, additional_headers=x_additional_headers, **kwargs)
@same_doc_as(UpdateRolePermissions)
async def update_role_permissions_async(body: AccountcommonPermissions, role_id: str, x_additional_headers: Optional[Dict[str, str]] = None, **kwargs):
request = UpdateRolePermissions.create(
body=body,
role_id=role_id,
)
return await run_request_async(request, additional_headers=x_additional_headers, **kwargs)
| 40.751029 | 230 | 0.763923 | 4,805 | 39,610 | 5.995838 | 0.041415 | 0.169941 | 0.119958 | 0.086637 | 0.837244 | 0.816626 | 0.795661 | 0.788927 | 0.785665 | 0.785665 | 0 | 0.007869 | 0.140167 | 39,610 | 971 | 231 | 40.792997 | 0.838041 | 0.019339 | 0 | 0.629482 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.063745 | false | 0 | 0.115538 | 0 | 0.306773 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
3f82af717d25146a1f4915039033c90813009b54 | 29,525 | py | Python | sdk/python/pulumi_oci/loadbalancer/backend.py | EladGabay/pulumi-oci | 6841e27d4a1a7e15c672306b769912efbfd3ba99 | [
"ECL-2.0",
"Apache-2.0"
] | 5 | 2021-08-17T11:14:46.000Z | 2021-12-31T02:07:03.000Z | sdk/python/pulumi_oci/loadbalancer/backend.py | pulumi-oci/pulumi-oci | 6841e27d4a1a7e15c672306b769912efbfd3ba99 | [
"ECL-2.0",
"Apache-2.0"
] | 1 | 2021-09-06T11:21:29.000Z | 2021-09-06T11:21:29.000Z | sdk/python/pulumi_oci/loadbalancer/backend.py | pulumi-oci/pulumi-oci | 6841e27d4a1a7e15c672306b769912efbfd3ba99 | [
"ECL-2.0",
"Apache-2.0"
] | 2 | 2021-08-24T23:31:30.000Z | 2022-01-02T19:26:54.000Z | # coding=utf-8
# *** WARNING: this file was generated by the Pulumi Terraform Bridge (tfgen) Tool. ***
# *** Do not edit by hand unless you're certain you know what you are doing! ***
import warnings
import pulumi
import pulumi.runtime
from typing import Any, Mapping, Optional, Sequence, Union, overload
from .. import _utilities
__all__ = ['BackendArgs', 'Backend']
@pulumi.input_type
class BackendArgs:
def __init__(__self__, *,
backendset_name: pulumi.Input[str],
ip_address: pulumi.Input[str],
load_balancer_id: pulumi.Input[str],
port: pulumi.Input[int],
backup: Optional[pulumi.Input[bool]] = None,
drain: Optional[pulumi.Input[bool]] = None,
offline: Optional[pulumi.Input[bool]] = None,
weight: Optional[pulumi.Input[int]] = None):
"""
The set of arguments for constructing a Backend resource.
:param pulumi.Input[str] backendset_name: The name of the backend set to add the backend server to. Example: `example_backend_set`
:param pulumi.Input[str] ip_address: The IP address of the backend server. Example: `10.0.0.3`
:param pulumi.Input[str] load_balancer_id: The [OCID](https://docs.cloud.oracle.com/iaas/Content/General/Concepts/identifiers.htm) of the load balancer associated with the backend set and servers.
:param pulumi.Input[int] port: The communication port for the backend server. Example: `8080`
:param pulumi.Input[bool] backup: (Updatable) Whether the load balancer should treat this server as a backup unit. If `true`, the load balancer forwards no ingress traffic to this backend server unless all other backend servers not marked as "backup" fail the health check policy.
:param pulumi.Input[bool] drain: (Updatable) Whether the load balancer should drain this server. Servers marked "drain" receive no new incoming traffic. Example: `false`
:param pulumi.Input[bool] offline: (Updatable) Whether the load balancer should treat this server as offline. Offline servers receive no incoming traffic. Example: `false`
:param pulumi.Input[int] weight: (Updatable) The load balancing policy weight assigned to the server. Backend servers with a higher weight receive a larger proportion of incoming traffic. For example, a server weighted '3' receives 3 times the number of new connections as a server weighted '1'. For more information on load balancing policies, see [How Load Balancing Policies Work](https://docs.cloud.oracle.com/iaas/Content/Balance/Reference/lbpolicies.htm). Example: `3`
"""
pulumi.set(__self__, "backendset_name", backendset_name)
pulumi.set(__self__, "ip_address", ip_address)
pulumi.set(__self__, "load_balancer_id", load_balancer_id)
pulumi.set(__self__, "port", port)
if backup is not None:
pulumi.set(__self__, "backup", backup)
if drain is not None:
pulumi.set(__self__, "drain", drain)
if offline is not None:
pulumi.set(__self__, "offline", offline)
if weight is not None:
pulumi.set(__self__, "weight", weight)
@property
@pulumi.getter(name="backendsetName")
def backendset_name(self) -> pulumi.Input[str]:
"""
The name of the backend set to add the backend server to. Example: `example_backend_set`
"""
return pulumi.get(self, "backendset_name")
@backendset_name.setter
def backendset_name(self, value: pulumi.Input[str]):
pulumi.set(self, "backendset_name", value)
@property
@pulumi.getter(name="ipAddress")
def ip_address(self) -> pulumi.Input[str]:
"""
The IP address of the backend server. Example: `10.0.0.3`
"""
return pulumi.get(self, "ip_address")
@ip_address.setter
def ip_address(self, value: pulumi.Input[str]):
pulumi.set(self, "ip_address", value)
@property
@pulumi.getter(name="loadBalancerId")
def load_balancer_id(self) -> pulumi.Input[str]:
"""
The [OCID](https://docs.cloud.oracle.com/iaas/Content/General/Concepts/identifiers.htm) of the load balancer associated with the backend set and servers.
"""
return pulumi.get(self, "load_balancer_id")
@load_balancer_id.setter
def load_balancer_id(self, value: pulumi.Input[str]):
pulumi.set(self, "load_balancer_id", value)
@property
@pulumi.getter
def port(self) -> pulumi.Input[int]:
"""
The communication port for the backend server. Example: `8080`
"""
return pulumi.get(self, "port")
@port.setter
def port(self, value: pulumi.Input[int]):
pulumi.set(self, "port", value)
@property
@pulumi.getter
def backup(self) -> Optional[pulumi.Input[bool]]:
"""
(Updatable) Whether the load balancer should treat this server as a backup unit. If `true`, the load balancer forwards no ingress traffic to this backend server unless all other backend servers not marked as "backup" fail the health check policy.
"""
return pulumi.get(self, "backup")
@backup.setter
def backup(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "backup", value)
@property
@pulumi.getter
def drain(self) -> Optional[pulumi.Input[bool]]:
"""
(Updatable) Whether the load balancer should drain this server. Servers marked "drain" receive no new incoming traffic. Example: `false`
"""
return pulumi.get(self, "drain")
@drain.setter
def drain(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "drain", value)
@property
@pulumi.getter
def offline(self) -> Optional[pulumi.Input[bool]]:
"""
(Updatable) Whether the load balancer should treat this server as offline. Offline servers receive no incoming traffic. Example: `false`
"""
return pulumi.get(self, "offline")
@offline.setter
def offline(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "offline", value)
@property
@pulumi.getter
def weight(self) -> Optional[pulumi.Input[int]]:
"""
(Updatable) The load balancing policy weight assigned to the server. Backend servers with a higher weight receive a larger proportion of incoming traffic. For example, a server weighted '3' receives 3 times the number of new connections as a server weighted '1'. For more information on load balancing policies, see [How Load Balancing Policies Work](https://docs.cloud.oracle.com/iaas/Content/Balance/Reference/lbpolicies.htm). Example: `3`
"""
return pulumi.get(self, "weight")
@weight.setter
def weight(self, value: Optional[pulumi.Input[int]]):
pulumi.set(self, "weight", value)
@pulumi.input_type
class _BackendState:
def __init__(__self__, *,
backendset_name: Optional[pulumi.Input[str]] = None,
backup: Optional[pulumi.Input[bool]] = None,
drain: Optional[pulumi.Input[bool]] = None,
ip_address: Optional[pulumi.Input[str]] = None,
load_balancer_id: Optional[pulumi.Input[str]] = None,
name: Optional[pulumi.Input[str]] = None,
offline: Optional[pulumi.Input[bool]] = None,
port: Optional[pulumi.Input[int]] = None,
state: Optional[pulumi.Input[str]] = None,
weight: Optional[pulumi.Input[int]] = None):
"""
Input properties used for looking up and filtering Backend resources.
:param pulumi.Input[str] backendset_name: The name of the backend set to add the backend server to. Example: `example_backend_set`
:param pulumi.Input[bool] backup: (Updatable) Whether the load balancer should treat this server as a backup unit. If `true`, the load balancer forwards no ingress traffic to this backend server unless all other backend servers not marked as "backup" fail the health check policy.
:param pulumi.Input[bool] drain: (Updatable) Whether the load balancer should drain this server. Servers marked "drain" receive no new incoming traffic. Example: `false`
:param pulumi.Input[str] ip_address: The IP address of the backend server. Example: `10.0.0.3`
:param pulumi.Input[str] load_balancer_id: The [OCID](https://docs.cloud.oracle.com/iaas/Content/General/Concepts/identifiers.htm) of the load balancer associated with the backend set and servers.
:param pulumi.Input[str] name: A read-only field showing the IP address and port that uniquely identify this backend server in the backend set. Example: `10.0.0.3:8080`
:param pulumi.Input[bool] offline: (Updatable) Whether the load balancer should treat this server as offline. Offline servers receive no incoming traffic. Example: `false`
:param pulumi.Input[int] port: The communication port for the backend server. Example: `8080`
:param pulumi.Input[int] weight: (Updatable) The load balancing policy weight assigned to the server. Backend servers with a higher weight receive a larger proportion of incoming traffic. For example, a server weighted '3' receives 3 times the number of new connections as a server weighted '1'. For more information on load balancing policies, see [How Load Balancing Policies Work](https://docs.cloud.oracle.com/iaas/Content/Balance/Reference/lbpolicies.htm). Example: `3`
"""
if backendset_name is not None:
pulumi.set(__self__, "backendset_name", backendset_name)
if backup is not None:
pulumi.set(__self__, "backup", backup)
if drain is not None:
pulumi.set(__self__, "drain", drain)
if ip_address is not None:
pulumi.set(__self__, "ip_address", ip_address)
if load_balancer_id is not None:
pulumi.set(__self__, "load_balancer_id", load_balancer_id)
if name is not None:
pulumi.set(__self__, "name", name)
if offline is not None:
pulumi.set(__self__, "offline", offline)
if port is not None:
pulumi.set(__self__, "port", port)
if state is not None:
pulumi.set(__self__, "state", state)
if weight is not None:
pulumi.set(__self__, "weight", weight)
@property
@pulumi.getter(name="backendsetName")
def backendset_name(self) -> Optional[pulumi.Input[str]]:
"""
The name of the backend set to add the backend server to. Example: `example_backend_set`
"""
return pulumi.get(self, "backendset_name")
@backendset_name.setter
def backendset_name(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "backendset_name", value)
@property
@pulumi.getter
def backup(self) -> Optional[pulumi.Input[bool]]:
"""
(Updatable) Whether the load balancer should treat this server as a backup unit. If `true`, the load balancer forwards no ingress traffic to this backend server unless all other backend servers not marked as "backup" fail the health check policy.
"""
return pulumi.get(self, "backup")
@backup.setter
def backup(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "backup", value)
@property
@pulumi.getter
def drain(self) -> Optional[pulumi.Input[bool]]:
"""
(Updatable) Whether the load balancer should drain this server. Servers marked "drain" receive no new incoming traffic. Example: `false`
"""
return pulumi.get(self, "drain")
@drain.setter
def drain(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "drain", value)
@property
@pulumi.getter(name="ipAddress")
def ip_address(self) -> Optional[pulumi.Input[str]]:
"""
The IP address of the backend server. Example: `10.0.0.3`
"""
return pulumi.get(self, "ip_address")
@ip_address.setter
def ip_address(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "ip_address", value)
@property
@pulumi.getter(name="loadBalancerId")
def load_balancer_id(self) -> Optional[pulumi.Input[str]]:
"""
The [OCID](https://docs.cloud.oracle.com/iaas/Content/General/Concepts/identifiers.htm) of the load balancer associated with the backend set and servers.
"""
return pulumi.get(self, "load_balancer_id")
@load_balancer_id.setter
def load_balancer_id(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "load_balancer_id", value)
@property
@pulumi.getter
def name(self) -> Optional[pulumi.Input[str]]:
"""
A read-only field showing the IP address and port that uniquely identify this backend server in the backend set. Example: `10.0.0.3:8080`
"""
return pulumi.get(self, "name")
@name.setter
def name(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "name", value)
@property
@pulumi.getter
def offline(self) -> Optional[pulumi.Input[bool]]:
"""
(Updatable) Whether the load balancer should treat this server as offline. Offline servers receive no incoming traffic. Example: `false`
"""
return pulumi.get(self, "offline")
@offline.setter
def offline(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "offline", value)
@property
@pulumi.getter
def port(self) -> Optional[pulumi.Input[int]]:
"""
The communication port for the backend server. Example: `8080`
"""
return pulumi.get(self, "port")
@port.setter
def port(self, value: Optional[pulumi.Input[int]]):
pulumi.set(self, "port", value)
@property
@pulumi.getter
def state(self) -> Optional[pulumi.Input[str]]:
return pulumi.get(self, "state")
@state.setter
def state(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "state", value)
@property
@pulumi.getter
def weight(self) -> Optional[pulumi.Input[int]]:
"""
(Updatable) The load balancing policy weight assigned to the server. Backend servers with a higher weight receive a larger proportion of incoming traffic. For example, a server weighted '3' receives 3 times the number of new connections as a server weighted '1'. For more information on load balancing policies, see [How Load Balancing Policies Work](https://docs.cloud.oracle.com/iaas/Content/Balance/Reference/lbpolicies.htm). Example: `3`
"""
return pulumi.get(self, "weight")
@weight.setter
def weight(self, value: Optional[pulumi.Input[int]]):
pulumi.set(self, "weight", value)
class Backend(pulumi.CustomResource):
@overload
def __init__(__self__,
resource_name: str,
opts: Optional[pulumi.ResourceOptions] = None,
backendset_name: Optional[pulumi.Input[str]] = None,
backup: Optional[pulumi.Input[bool]] = None,
drain: Optional[pulumi.Input[bool]] = None,
ip_address: Optional[pulumi.Input[str]] = None,
load_balancer_id: Optional[pulumi.Input[str]] = None,
offline: Optional[pulumi.Input[bool]] = None,
port: Optional[pulumi.Input[int]] = None,
weight: Optional[pulumi.Input[int]] = None,
__props__=None):
"""
This resource provides the Backend resource in Oracle Cloud Infrastructure Load Balancer service.
Adds a backend server to a backend set.
## Example Usage
```python
import pulumi
import pulumi_oci as oci
test_backend = oci.loadbalancer.Backend("testBackend",
backendset_name=oci_load_balancer_backend_set["test_backend_set"]["name"],
ip_address=var["backend_ip_address"],
load_balancer_id=oci_load_balancer_load_balancer["test_load_balancer"]["id"],
port=var["backend_port"],
backup=var["backend_backup"],
drain=var["backend_drain"],
offline=var["backend_offline"],
weight=var["backend_weight"])
```
## Import
Backends can be imported using the `id`, e.g.
```sh
$ pulumi import oci:loadbalancer/backend:Backend test_backend "loadBalancers/{loadBalancerId}/backendSets/{backendSetName}/backends/{backendName}"
```
:param str resource_name: The name of the resource.
:param pulumi.ResourceOptions opts: Options for the resource.
:param pulumi.Input[str] backendset_name: The name of the backend set to add the backend server to. Example: `example_backend_set`
:param pulumi.Input[bool] backup: (Updatable) Whether the load balancer should treat this server as a backup unit. If `true`, the load balancer forwards no ingress traffic to this backend server unless all other backend servers not marked as "backup" fail the health check policy.
:param pulumi.Input[bool] drain: (Updatable) Whether the load balancer should drain this server. Servers marked "drain" receive no new incoming traffic. Example: `false`
:param pulumi.Input[str] ip_address: The IP address of the backend server. Example: `10.0.0.3`
:param pulumi.Input[str] load_balancer_id: The [OCID](https://docs.cloud.oracle.com/iaas/Content/General/Concepts/identifiers.htm) of the load balancer associated with the backend set and servers.
:param pulumi.Input[bool] offline: (Updatable) Whether the load balancer should treat this server as offline. Offline servers receive no incoming traffic. Example: `false`
:param pulumi.Input[int] port: The communication port for the backend server. Example: `8080`
:param pulumi.Input[int] weight: (Updatable) The load balancing policy weight assigned to the server. Backend servers with a higher weight receive a larger proportion of incoming traffic. For example, a server weighted '3' receives 3 times the number of new connections as a server weighted '1'. For more information on load balancing policies, see [How Load Balancing Policies Work](https://docs.cloud.oracle.com/iaas/Content/Balance/Reference/lbpolicies.htm). Example: `3`
"""
...
@overload
def __init__(__self__,
resource_name: str,
args: BackendArgs,
opts: Optional[pulumi.ResourceOptions] = None):
"""
This resource provides the Backend resource in Oracle Cloud Infrastructure Load Balancer service.
Adds a backend server to a backend set.
## Example Usage
```python
import pulumi
import pulumi_oci as oci
test_backend = oci.loadbalancer.Backend("testBackend",
backendset_name=oci_load_balancer_backend_set["test_backend_set"]["name"],
ip_address=var["backend_ip_address"],
load_balancer_id=oci_load_balancer_load_balancer["test_load_balancer"]["id"],
port=var["backend_port"],
backup=var["backend_backup"],
drain=var["backend_drain"],
offline=var["backend_offline"],
weight=var["backend_weight"])
```
## Import
Backends can be imported using the `id`, e.g.
```sh
$ pulumi import oci:loadbalancer/backend:Backend test_backend "loadBalancers/{loadBalancerId}/backendSets/{backendSetName}/backends/{backendName}"
```
:param str resource_name: The name of the resource.
:param BackendArgs args: The arguments to use to populate this resource's properties.
:param pulumi.ResourceOptions opts: Options for the resource.
"""
...
def __init__(__self__, resource_name: str, *args, **kwargs):
resource_args, opts = _utilities.get_resource_args_opts(BackendArgs, pulumi.ResourceOptions, *args, **kwargs)
if resource_args is not None:
__self__._internal_init(resource_name, opts, **resource_args.__dict__)
else:
__self__._internal_init(resource_name, *args, **kwargs)
def _internal_init(__self__,
resource_name: str,
opts: Optional[pulumi.ResourceOptions] = None,
backendset_name: Optional[pulumi.Input[str]] = None,
backup: Optional[pulumi.Input[bool]] = None,
drain: Optional[pulumi.Input[bool]] = None,
ip_address: Optional[pulumi.Input[str]] = None,
load_balancer_id: Optional[pulumi.Input[str]] = None,
offline: Optional[pulumi.Input[bool]] = None,
port: Optional[pulumi.Input[int]] = None,
weight: Optional[pulumi.Input[int]] = None,
__props__=None):
if opts is None:
opts = pulumi.ResourceOptions()
if not isinstance(opts, pulumi.ResourceOptions):
raise TypeError('Expected resource options to be a ResourceOptions instance')
if opts.version is None:
opts.version = _utilities.get_version()
if opts.id is None:
if __props__ is not None:
raise TypeError('__props__ is only valid when passed in combination with a valid opts.id to get an existing resource')
__props__ = BackendArgs.__new__(BackendArgs)
if backendset_name is None and not opts.urn:
raise TypeError("Missing required property 'backendset_name'")
__props__.__dict__["backendset_name"] = backendset_name
__props__.__dict__["backup"] = backup
__props__.__dict__["drain"] = drain
if ip_address is None and not opts.urn:
raise TypeError("Missing required property 'ip_address'")
__props__.__dict__["ip_address"] = ip_address
if load_balancer_id is None and not opts.urn:
raise TypeError("Missing required property 'load_balancer_id'")
__props__.__dict__["load_balancer_id"] = load_balancer_id
__props__.__dict__["offline"] = offline
if port is None and not opts.urn:
raise TypeError("Missing required property 'port'")
__props__.__dict__["port"] = port
__props__.__dict__["weight"] = weight
__props__.__dict__["name"] = None
__props__.__dict__["state"] = None
super(Backend, __self__).__init__(
'oci:loadbalancer/backend:Backend',
resource_name,
__props__,
opts)
@staticmethod
def get(resource_name: str,
id: pulumi.Input[str],
opts: Optional[pulumi.ResourceOptions] = None,
backendset_name: Optional[pulumi.Input[str]] = None,
backup: Optional[pulumi.Input[bool]] = None,
drain: Optional[pulumi.Input[bool]] = None,
ip_address: Optional[pulumi.Input[str]] = None,
load_balancer_id: Optional[pulumi.Input[str]] = None,
name: Optional[pulumi.Input[str]] = None,
offline: Optional[pulumi.Input[bool]] = None,
port: Optional[pulumi.Input[int]] = None,
state: Optional[pulumi.Input[str]] = None,
weight: Optional[pulumi.Input[int]] = None) -> 'Backend':
"""
Get an existing Backend resource's state with the given name, id, and optional extra
properties used to qualify the lookup.
:param str resource_name: The unique name of the resulting resource.
:param pulumi.Input[str] id: The unique provider ID of the resource to lookup.
:param pulumi.ResourceOptions opts: Options for the resource.
:param pulumi.Input[str] backendset_name: The name of the backend set to add the backend server to. Example: `example_backend_set`
:param pulumi.Input[bool] backup: (Updatable) Whether the load balancer should treat this server as a backup unit. If `true`, the load balancer forwards no ingress traffic to this backend server unless all other backend servers not marked as "backup" fail the health check policy.
:param pulumi.Input[bool] drain: (Updatable) Whether the load balancer should drain this server. Servers marked "drain" receive no new incoming traffic. Example: `false`
:param pulumi.Input[str] ip_address: The IP address of the backend server. Example: `10.0.0.3`
:param pulumi.Input[str] load_balancer_id: The [OCID](https://docs.cloud.oracle.com/iaas/Content/General/Concepts/identifiers.htm) of the load balancer associated with the backend set and servers.
:param pulumi.Input[str] name: A read-only field showing the IP address and port that uniquely identify this backend server in the backend set. Example: `10.0.0.3:8080`
:param pulumi.Input[bool] offline: (Updatable) Whether the load balancer should treat this server as offline. Offline servers receive no incoming traffic. Example: `false`
:param pulumi.Input[int] port: The communication port for the backend server. Example: `8080`
:param pulumi.Input[int] weight: (Updatable) The load balancing policy weight assigned to the server. Backend servers with a higher weight receive a larger proportion of incoming traffic. For example, a server weighted '3' receives 3 times the number of new connections as a server weighted '1'. For more information on load balancing policies, see [How Load Balancing Policies Work](https://docs.cloud.oracle.com/iaas/Content/Balance/Reference/lbpolicies.htm). Example: `3`
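
        Example (an illustrative sketch; the bracketed segments of the import ID format
        documented above are placeholders to substitute, not literal values):

            imported = oci.loadbalancer.Backend.get("test_backend",
                id="loadBalancers/{loadBalancerId}/backendSets/{backendSetName}/backends/{backendName}")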
"""
opts = pulumi.ResourceOptions.merge(opts, pulumi.ResourceOptions(id=id))
__props__ = _BackendState.__new__(_BackendState)
__props__.__dict__["backendset_name"] = backendset_name
__props__.__dict__["backup"] = backup
__props__.__dict__["drain"] = drain
__props__.__dict__["ip_address"] = ip_address
__props__.__dict__["load_balancer_id"] = load_balancer_id
__props__.__dict__["name"] = name
__props__.__dict__["offline"] = offline
__props__.__dict__["port"] = port
__props__.__dict__["state"] = state
__props__.__dict__["weight"] = weight
return Backend(resource_name, opts=opts, __props__=__props__)
@property
@pulumi.getter(name="backendsetName")
def backendset_name(self) -> pulumi.Output[str]:
"""
The name of the backend set to add the backend server to. Example: `example_backend_set`
"""
return pulumi.get(self, "backendset_name")
@property
@pulumi.getter
def backup(self) -> pulumi.Output[Optional[bool]]:
"""
(Updatable) Whether the load balancer should treat this server as a backup unit. If `true`, the load balancer forwards no ingress traffic to this backend server unless all other backend servers not marked as "backup" fail the health check policy.
"""
return pulumi.get(self, "backup")
@property
@pulumi.getter
def drain(self) -> pulumi.Output[bool]:
"""
(Updatable) Whether the load balancer should drain this server. Servers marked "drain" receive no new incoming traffic. Example: `false`
"""
return pulumi.get(self, "drain")
@property
@pulumi.getter(name="ipAddress")
def ip_address(self) -> pulumi.Output[str]:
"""
The IP address of the backend server. Example: `10.0.0.3`
"""
return pulumi.get(self, "ip_address")
@property
@pulumi.getter(name="loadBalancerId")
def load_balancer_id(self) -> pulumi.Output[str]:
"""
The [OCID](https://docs.cloud.oracle.com/iaas/Content/General/Concepts/identifiers.htm) of the load balancer associated with the backend set and servers.
"""
return pulumi.get(self, "load_balancer_id")
@property
@pulumi.getter
def name(self) -> pulumi.Output[str]:
"""
A read-only field showing the IP address and port that uniquely identify this backend server in the backend set. Example: `10.0.0.3:8080`
"""
return pulumi.get(self, "name")
@property
@pulumi.getter
def offline(self) -> pulumi.Output[bool]:
"""
(Updatable) Whether the load balancer should treat this server as offline. Offline servers receive no incoming traffic. Example: `false`
"""
return pulumi.get(self, "offline")
@property
@pulumi.getter
def port(self) -> pulumi.Output[int]:
"""
The communication port for the backend server. Example: `8080`
"""
return pulumi.get(self, "port")
@property
@pulumi.getter
def state(self) -> pulumi.Output[str]:
return pulumi.get(self, "state")
@property
@pulumi.getter
def weight(self) -> pulumi.Output[int]:
"""
(Updatable) The load balancing policy weight assigned to the server. Backend servers with a higher weight receive a larger proportion of incoming traffic. For example, a server weighted '3' receives 3 times the number of new connections as a server weighted '1'. For more information on load balancing policies, see [How Load Balancing Policies Work](https://docs.cloud.oracle.com/iaas/Content/Balance/Reference/lbpolicies.htm). Example: `3`
"""
return pulumi.get(self, "weight")
| 50.643225 | 483 | 0.661778 | 3,710 | 29,525 | 5.112668 | 0.061186 | 0.068431 | 0.068115 | 0.028047 | 0.906632 | 0.891185 | 0.863138 | 0.844 | 0.841312 | 0.829713 | 0 | 0.005679 | 0.236579 | 29,525 | 582 | 484 | 50.730241 | 0.835847 | 0.466181 | 0 | 0.700306 | 1 | 0 | 0.080459 | 0.002225 | 0 | 0 | 0 | 0 | 0 | 1 | 0.16208 | false | 0.003058 | 0.015291 | 0.006116 | 0.275229 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
b20d4527fab557ae50582a2cab917bbae4414e95 | 1,789 | py | Python | Dashboard/shuttleservice/models.py | Gowtham1729/GNius | 0bdfaddd882837b43485e424e44fa5f353f227bd | [
"MIT"
] | 3 | 2017-08-31T15:24:50.000Z | 2020-03-24T13:22:15.000Z | Dashboard/shuttleservice/models.py | coding-iitgn/GNius | 0bdfaddd882837b43485e424e44fa5f353f227bd | [
"MIT"
] | 1 | 2020-11-04T03:22:47.000Z | 2020-11-04T03:22:47.000Z | Dashboard/shuttleservice/models.py | coding-iitgn/GNius | 0bdfaddd882837b43485e424e44fa5f353f227bd | [
"MIT"
] | 1 | 2018-10-03T14:53:55.000Z | 2018-10-03T14:53:55.000Z | from django.db import models
from django.core.urlresolvers import reverse
# Create your models here.
class ToPalajWD(models.Model):
time = models.TimeField()
route = models.CharField(max_length=100)
    routepic = models.ImageField(name='routepic', width_field=None, height_field=None, default="images/Integrated Route Map-pagep001_6.jpg")
def get_absolute_url(self):
return reverse('topalaj:detail', kwargs={'pk': self.pk})
def __str__(self):
return str(self.time) + '-' + self.route
class ToPalajHD(models.Model):
time = models.TimeField()
route = models.CharField(max_length=100)
    routepic = models.ImageField(name='routepic', width_field=None, height_field=None, default="images/Integrated Route Map-pagep001_6.jpg")
def get_absolute_url(self):
return reverse('topalaj:detail', kwargs={'pk': self.pk})
def __str__(self):
return str(self.time) + '-' + self.route
class ToChandhkedaWD(models.Model):
time = models.TimeField()
route = models.CharField(max_length=100)
    routepic = models.ImageField(name='routepic', width_field=None, height_field=None, default="images/Integrated Route Map-pagep001_6.jpg")
def get_absolute_url(self):
return reverse('tochandkeda:detail', kwargs={'pk': self.pk})
def __str__(self):
return str(self.time) + '-' + self.route
class ToChandhkedaHD(models.Model):
time = models.TimeField()
route = models.CharField(max_length=100)
    routepic = models.ImageField(name='routepic', width_field=None, height_field=None, default="images/Integrated Route Map-pagep001_6.jpg")
def get_absolute_url(self):
return reverse('tochandkeda:detail', kwargs={'pk': self.pk})
def __str__(self):
return str(self.time) + '-' + self.route
| 33.12963 | 139 | 0.707099 | 231 | 1,789 | 5.30303 | 0.21645 | 0.058776 | 0.04898 | 0.068571 | 0.890612 | 0.890612 | 0.890612 | 0.890612 | 0.890612 | 0.890612 | 0 | 0.018592 | 0.158189 | 1,789 | 53 | 140 | 33.754717 | 0.794821 | 0.013415 | 0 | 0.823529 | 0 | 0 | 0.156907 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.235294 | false | 0 | 0.058824 | 0.235294 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 10 |
b7602a3d10d43c71f504314baccd7ce7c6b00e58 | 50 | py | Python | src/util.py | BalticBytes/Py-Data-Science-devcontainer | 7cbbf2aabb3a306327581d12888c2665aaa379e3 | [
"MIT"
] | 1 | 2021-04-23T08:00:19.000Z | 2021-04-23T08:00:19.000Z | src/util.py | BalticBytes/Py-Data-Science-devcontainer | 7cbbf2aabb3a306327581d12888c2665aaa379e3 | [
"MIT"
] | null | null | null | src/util.py | BalticBytes/Py-Data-Science-devcontainer | 7cbbf2aabb3a306327581d12888c2665aaa379e3 | [
"MIT"
] | null | null | null | print("relative import works")
def dummy(): pass
| 12.5 | 30 | 0.72 | 7 | 50 | 5.142857 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.14 | 50 | 3 | 31 | 16.666667 | 0.837209 | 0 | 0 | 0 | 0 | 0 | 0.42 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | true | 0 | 0.5 | 0 | 1 | 0.5 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | 7 |
b77a0323883ee6d1fcef2b1da29a521579032355 | 81 | py | Python | demo_project/demo/context_processors.py | monasysinfo/django-jchart | 2e224f061cdb5804814a6031c4d23899408d62e4 | [
"BSD-3-Clause"
] | 125 | 2017-01-27T20:43:02.000Z | 2021-12-31T04:25:09.000Z | demo_project/demo/context_processors.py | monasysinfo/django-jchart | 2e224f061cdb5804814a6031c4d23899408d62e4 | [
"BSD-3-Clause"
] | 26 | 2017-03-06T21:56:20.000Z | 2021-05-28T06:03:32.000Z | demo_project/demo/context_processors.py | monasysinfo/django-jchart | 2e224f061cdb5804814a6031c4d23899408d62e4 | [
"BSD-3-Clause"
] | 30 | 2017-02-06T21:07:46.000Z | 2021-05-28T05:40:34.000Z | def url_name(request):
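    # Expose the name of the matched URL pattern to every template context.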
return dict(url_name=request.resolver_match.url_name)
| 27 | 57 | 0.802469 | 13 | 81 | 4.692308 | 0.615385 | 0.344262 | 0.459016 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.098765 | 81 | 2 | 58 | 40.5 | 0.835616 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | false | 0 | 0 | 0.5 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 7 |
4d52de31b3cbdaaca951afb55928adf27c2e576b | 1,383 | py | Python | UCourse/api/permissions.py | Natsu1270/UCourse | e8c814d91e54f5f51e4a0fa2df177ebb59544dc2 | [
"MIT"
] | 1 | 2020-08-31T22:40:27.000Z | 2020-08-31T22:40:27.000Z | UCourse/api/permissions.py | Natsu1270/UCourse | e8c814d91e54f5f51e4a0fa2df177ebb59544dc2 | [
"MIT"
] | 13 | 2020-08-05T16:17:09.000Z | 2022-03-12T00:18:42.000Z | UCourse/api/permissions.py | Natsu1270/UCourse | e8c814d91e54f5f51e4a0fa2df177ebb59544dc2 | [
"MIT"
] | null | null | null | from rest_framework import permissions
from . import constants
class IsOwnerOrReadOnly(permissions.BasePermission):
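    """Allow reads to anyone; writes only to the object's owner or a superuser."""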
def has_object_permission(self, request, view, obj):
if request.method in permissions.SAFE_METHODS:
return True
if request.user.is_superuser:
return True
return obj.user == request.user
class IsOwner(permissions.BasePermission):
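    """Allow access only to the owner of the object."""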
def has_object_permission(self, request, view, obj):
return obj.user == request.user
class IsTeacherOrTARoleOrReadOnly(permissions.BasePermission):
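    """Allow reads to anyone; writes require an authenticated teacher or TA role, or a superuser."""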
def has_permission(self, request, view):
if request.method in permissions.SAFE_METHODS:
return True
        return bool(
            request.user
            and request.user.is_authenticated
            and (
                request.user.role.code == constants.TEACHER_ROLE_CODE
                or request.user.role.code == constants.TA_ROLE_CODE
                or request.user.is_superuser
            )
        )
def has_object_permission(self, request, view, obj):
if request.method in permissions.SAFE_METHODS:
return True
        return bool(
            request.user
            and request.user.is_authenticated
            and (
                request.user.role.code == constants.TEACHER_ROLE_CODE
                or request.user.role.code == constants.TA_ROLE_CODE
                or request.user.is_superuser
            )
        )
| 30.065217 | 68 | 0.663774 | 158 | 1,383 | 5.658228 | 0.234177 | 0.159955 | 0.072707 | 0.111857 | 0.777405 | 0.777405 | 0.712528 | 0.712528 | 0.712528 | 0.712528 | 0 | 0 | 0.268257 | 1,383 | 45 | 69 | 30.733333 | 0.883399 | 0 | 0 | 0.727273 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.121212 | false | 0 | 0.060606 | 0.030303 | 0.515152 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 7 |
4d5f6cc854cf96200e7c73c685d2312d6bf05638 | 39,460 | py | Python | tests/test_chain.py | gourab337/revaultd | 8d76298a00c23401b0e630fc46c2cb85dd487fbe | [
"BSD-3-Clause"
] | null | null | null | tests/test_chain.py | gourab337/revaultd | 8d76298a00c23401b0e630fc46c2cb85dd487fbe | [
"BSD-3-Clause"
] | null | null | null | tests/test_chain.py | gourab337/revaultd | 8d76298a00c23401b0e630fc46c2cb85dd487fbe | [
"BSD-3-Clause"
] | null | null | null | """Tests related to the tracking of the chain state.
This includes the tracking of the status of the vaults, wallet transactions,
handling of reorgs, etc.
"""
import logging
import pytest
from fixtures import *
from test_framework import serializations
from test_framework.utils import (
POSTGRES_IS_SETUP,
wait_for,
)
def append_or_remove(timestamps, timestamp, append):
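    """Append `timestamp` to `timestamps` if `append` is truthy, else remove it."""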
if append:
timestamps.append(timestamp)
else:
timestamps.remove(timestamp)
def timestamps_from_status(status, present=True):
"""Given a vault status, what timestamps should be present or absent."""
# TODO!
assert status not in [
"emergencied",
"emergencying",
"unvaultemergencied",
"unvaultemergencying",
]
timestamps = (
[] if present else ["funded_at", "secured_at", "delegated_at", "moved_at"]
)
if status == "unconfirmed":
return timestamps
# It's confirmed
append_or_remove(timestamps, "funded_at", present)
if status in [
"secured",
"active",
"unvaulting",
"unvaulted",
"spending",
"spent",
"canceling",
"canceled",
]:
append_or_remove(timestamps, "secured_at", present)
if status in [
"active",
"unvaulting",
"unvaulted",
"spending",
"spent",
"canceling",
"canceled",
]:
append_or_remove(timestamps, "delegated_at", present)
if status in ["spent", "canceled"]:
append_or_remove(timestamps, "moved_at", present)
return timestamps
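
# A minimal illustrative sanity check of the helper above (an added sketch, not
# part of the original suite): a "secured" vault should have its funding and
# securing timestamps set, and its delegation and move timestamps still unset.
def test_timestamps_from_status_sketch():
    assert timestamps_from_status("secured") == ["funded_at", "secured_at"]
    assert timestamps_from_status("secured", present=False) == [
        "delegated_at",
        "moved_at",
    ]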
def reorg(revault_network, bitcoind, stop_wallets, height, shift=0):
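    """Trigger a reorg at `height`, optionally stopping the wallet daemons around it.
    The `shift` parameter is forwarded to bitcoind's `simple_reorg` helper.
    """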
if stop_wallets:
revault_network.stop_wallets()
bitcoind.simple_reorg(height, shift=shift)
if stop_wallets:
revault_network.start_wallets()
def reorg_deposit(revault_network, bitcoind, deposit, stop_wallets, target_status):
"""Reorganize the chain around a deposit according to different scenarii.
The deposit must refer to a vault that is at least confirmed.
The `stop_wallets` parameter controls whether to stop the daemons during a reorg.
The `target_status` parameter indicates the expected status of the vault if its
deposit transaction gets unconfirmed then re-confirmed.
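    Illustrative call (a sketch; the expected status depends on the test at hand):
        reorg_deposit(revault_network, bitcoind, deposit, stop_wallets=False,
                      target_status="secured")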
"""
vault = revault_network.stk(0).rpc.listvaults([], [deposit])["vaults"][0]
initial_confs = bitcoind.rpc.getblockcount() - vault["blockheight"] + 1
logging.info(
f"Initial vault blockheight {vault['blockheight']} ({initial_confs} confs)"
)
# Sanity check the timestamps
for field in timestamps_from_status(vault["status"]):
assert vault[field] is not None, field
for field in timestamps_from_status(vault["status"], present=False):
assert vault[field] is None, field
# Mine a block and reorg it, it should not affect us since the deposit would still
# have more than 6 confs.
bitcoind.generate_block(1)
height = bitcoind.rpc.getblockcount()
for w in revault_network.participants():
wait_for(lambda: w.rpc.getinfo()["blockheight"] == height)
reorg(revault_network, bitcoind, stop_wallets, height)
new_tip = f"{height + 1}.*{bitcoind.rpc.getblockhash(height + 1)}"
for w in revault_network.participants():
w.wait_for_logs(
[
"Detected reorg",
f"Found common ancestor at height {height - 1}",
f"Vault deposit '{deposit}' still has {initial_confs} confirmations at common ancestor",
"Rescan .*done",
f"New tip.* {new_tip}",
]
)
v = w.rpc.listvaults([], [deposit])["vaults"][0]
assert v["status"] == vault["status"]
for field in timestamps_from_status(vault["status"]):
assert v[field] is not None, field
for field in timestamps_from_status(vault["status"], present=False):
assert v[field] is None, field
for w in revault_network.participants():
wait_for(lambda: w.rpc.getinfo()["blockheight"] == height + 1)
height = bitcoind.rpc.getblockcount()
vault = w.rpc.listvaults([], [deposit])["vaults"][0]
confs = height + 1 - vault["blockheight"]
logging.info(
f"After first reorg. Vault blockheight {vault['blockheight']} ({confs} confs)"
)
# Now actually shift it out.
# It won't transition to 'funded'...
reorg(revault_network, bitcoind, stop_wallets, vault["blockheight"], shift=-1)
new_tip = f"{height + 1}.*{bitcoind.rpc.getblockhash(height + 1)}"
for w in revault_network.participants():
w.wait_for_logs(
[
"Detected reorg",
f"Found common ancestor at height {vault['blockheight'] - 1}",
f"Vault deposit '{deposit}' has 0 confirmations at common ancestor",
"Rescan .*done",
f"New tip.* {new_tip}",
]
)
for w in revault_network.participants():
wait_for(lambda: w.rpc.getinfo()["blockheight"] == height + 1)
for w in revault_network.participants():
wait_for(
lambda: w.rpc.listvaults([], [deposit])["vaults"][0]["status"]
== "unconfirmed"
)
vault = w.rpc.listvaults([], [deposit])["vaults"][0]
for field in ["funded_at", "secured_at", "delegated_at", "moved_at"]:
assert vault[field] is None, field
# ... But it will if we re-confirm it!
bitcoind.generate_block(6, wait_for_mempool=vault["txid"])
for w in revault_network.participants():
wait_for(
lambda: w.rpc.listvaults([], [deposit])["vaults"][0]["status"]
== target_status
)
vault = w.rpc.listvaults([], [deposit])["vaults"][0]
for field in timestamps_from_status(target_status):
assert vault[field] is not None, field
for field in timestamps_from_status(target_status, present=False):
assert vault[field] is None, field
height = bitcoind.rpc.getblockcount()
vault = w.rpc.listvaults([], [deposit])["vaults"][0]
confs = height + 1 - vault["blockheight"]
logging.info(
f"After second reorg. Vault blockheight {vault['blockheight']} ({confs} confs)"
)
    # Now reorg one of the 6 blocks that made the vault funded. This brings the deposit
    # back under the minimum number of confirmations threshold.
    # But since the newly connected chain has as many blocks, the vault will get back to
    # 'funded'. And since the deposit didn't change, the signatures on the coordinator are
    # still valid. It will re-download them and transition back to 'secured' / 'active'. Then,
    # if some second-stage transactions were broadcast, they will be re-broadcast.
reorged_block_height = vault["blockheight"] + 5
reorg(revault_network, bitcoind, stop_wallets, reorged_block_height)
new_tip = f"{height + 1}.*{bitcoind.rpc.getblockhash(height + 1)}"
for w in revault_network.participants():
w.wait_for_logs(
[
"Detected reorg",
f"Found common ancestor at height {reorged_block_height - 1}",
f"Vault deposit '{deposit}' has 5 confirmations at common ancestor",
"Rescan .*done",
f"New tip.* {new_tip}",
]
)
for w in revault_network.participants():
wait_for(lambda: w.rpc.getinfo()["blockheight"] == height + 1)
for w in revault_network.participants():
wait_for(
lambda: w.rpc.listvaults([], [deposit])["vaults"][0]["status"]
== target_status
)
vault = w.rpc.listvaults([], [deposit])["vaults"][0]
for field in timestamps_from_status(target_status):
assert vault[field] is not None, field
for field in timestamps_from_status(target_status, present=False):
assert vault[field] is None, field
height = bitcoind.rpc.getblockcount()
vault = w.rpc.listvaults([], [deposit])["vaults"][0]
confs = height + 1 - vault["blockheight"]
logging.info(
f"After third reorg. Vault blockheight {vault['blockheight']} ({confs} confs)"
)
# Now reorg up to the deposit. The same will happen.
reorg(revault_network, bitcoind, stop_wallets, vault["blockheight"])
new_tip = f"{height + 1}.*{bitcoind.rpc.getblockhash(height + 1)}"
for w in revault_network.participants():
w.wait_for_logs(
[
"Detected reorg",
f"Found common ancestor at height {vault['blockheight'] - 1}",
f"Vault deposit '{deposit}' has 0 confirmations at common ancestor",
"Rescan .*done",
f"New tip.* {new_tip}",
]
)
for w in revault_network.participants():
wait_for(lambda: w.rpc.getinfo()["blockheight"] == height + 1)
for w in revault_network.participants():
wait_for(
lambda: w.rpc.listvaults([], [deposit])["vaults"][0]["status"]
== target_status
)
        vault = w.rpc.listvaults([], [deposit])["vaults"][0]
        for field in timestamps_from_status(target_status):
assert vault[field] is not None, field
for field in timestamps_from_status(target_status, present=False):
assert vault[field] is None, field
height = bitcoind.rpc.getblockcount()
vault = w.rpc.listvaults([], [deposit])["vaults"][0]
confs = height + 1 - vault["blockheight"]
logging.info(
f"After fourth reorg. Vault blockheight {vault['blockheight']} ({confs} confs)"
)
# TODO: try with tx malleation
@pytest.mark.skipif(not POSTGRES_IS_SETUP, reason="Needs Postgres for servers db")
def test_reorged_deposit_status_1(revault_network, bitcoind):
    # NOTE: bitcoind would not update the mempool if the reorg is >10 blocks long.
revault_network.deploy(4, 2, csv=12, with_watchtowers=False)
# Play with the chain on a vault which is 'secured'
vault = revault_network.fund(0.14)
deposit = f"{vault['txid']}:{vault['vout']}"
revault_network.secure_vault(vault)
for stop_wallets in [True, False]:
logging.info(f"For secured vault '{deposit}'. Stop wallets: {stop_wallets}")
reorg_deposit(
revault_network, bitcoind, deposit, stop_wallets, target_status="secured"
)
# Now on a vault that is 'active'
vault = revault_network.fund(0.28)
deposit = f"{vault['txid']}:{vault['vout']}"
revault_network.activate_fresh_vaults([vault])
for stop_wallets in [True, False]:
logging.info(f"For active vault '{deposit}'. Stop wallets: {stop_wallets}")
reorg_deposit(
revault_network, bitcoind, deposit, stop_wallets, target_status="active"
)
# Now on a vault that is 'unvaulted'
vault = revault_network.fund(0.56)
deposit = f"{vault['txid']}:{vault['vout']}"
revault_network.activate_fresh_vaults([vault])
revault_network.unvault_vaults_anyhow([vault])
for stop_wallets in [True, False]:
logging.info(f"For unvaulted vault '{deposit}'. Stop wallets: {stop_wallets}")
reorg_deposit(
revault_network, bitcoind, deposit, stop_wallets, target_status="unvaulted"
)
# TODO: same with 'emergency'
@pytest.mark.skipif(not POSTGRES_IS_SETUP, reason="Needs Postgres for servers db")
def test_reorged_deposit_status_2(revault_network, bitcoind):
    # NOTE: bitcoind would not update the mempool if the reorg is >10 blocks long.
revault_network.deploy(4, 2, csv=3, with_watchtowers=False)
# Now on a vault that is 'spent'
vault = revault_network.fund(1.12)
deposit = f"{vault['txid']}:{vault['vout']}"
revault_network.activate_fresh_vaults([vault])
revault_network.spend_vaults_anyhow([vault])
for stop_wallets in [True, False]:
logging.info(f"For spent vault '{deposit}'. Stop wallets: {stop_wallets}")
# Target "unvaulted" as Spend txs get wiped from DB
reorg_deposit(
revault_network, bitcoind, deposit, stop_wallets, target_status="unvaulted"
)
# And finally the same dance with a 'canceled' vault
vault = revault_network.fund(2.24)
deposit = f"{vault['txid']}:{vault['vout']}"
revault_network.activate_fresh_vaults([vault])
revault_network.unvault_vaults_anyhow([vault])
revault_network.cancel_vault(vault)
for stop_wallets in [True, False]:
logging.info(f"For canceled vault '{deposit}'. Stop wallets: {stop_wallets}")
reorg_deposit(
revault_network, bitcoind, deposit, stop_wallets, target_status="canceled"
)
# TODO: same with 'unvault_emergency'
@pytest.mark.skipif(not POSTGRES_IS_SETUP, reason="Needs Postgres for servers db")
def test_reorged_unvault(revault_network, bitcoind):
"""Test various scenarii with reorgs around the Unvault transaction of a vault."""
CSV = 12
revault_network.deploy(4, 2, csv=CSV, with_watchtowers=False)
man = revault_network.man(0)
vaults = revault_network.fundmany([32, 3])
deposits = []
amounts = []
for v in vaults:
revault_network.secure_vault(v)
revault_network.activate_vault(v)
deposits.append(f"{v['txid']}:{v['vout']}")
amounts.append(v["amount"])
addr = bitcoind.rpc.getnewaddress()
amount = sum(amounts)
feerate = 1
fee = revault_network.compute_spendtx_fees(feerate, len(vaults), 1)
destinations = {addr: amount - fee}
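    # The Spend pays the whole unvaulted amount, minus the fee, to a single
    # external address.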
revault_network.unvault_vaults(vaults, destinations, feerate)
bitcoind.generate_block(1)
unvault_tx_a = man.rpc.listonchaintransactions([deposits[0]])[
"onchain_transactions"
][0]["unvault"]
unvault_tx_b = man.rpc.listonchaintransactions([deposits[1]])[
"onchain_transactions"
][0]["unvault"]
# Initial sanity checks..
assert unvault_tx_a["blockheight"] == unvault_tx_b["blockheight"]
for w in revault_network.participants():
wait_for(lambda: w.rpc.getinfo()["blockheight"] == bitcoind.rpc.getblockcount())
assert len(w.rpc.listvaults(["unvaulted"], deposits)["vaults"]) == len(deposits)
for vault in w.rpc.listvaults(["unvaulted"], deposits)["vaults"]:
assert vault["moved_at"] is None
for field in timestamps_from_status("unvaulted"):
assert vault[field] is not None, field
for field in timestamps_from_status("unvaulted", present=False):
assert vault[field] is None, field
# First, if we reorg but not up to the Unvault tx height, nothing will happen.
bitcoind.simple_reorg(unvault_tx_a["blockheight"] + 1)
height = bitcoind.rpc.getblockcount()
new_tip = f"{height}.*{bitcoind.rpc.getblockhash(height)}"
for w in revault_network.participants():
w.wait_for_logs(
[
"Detected reorg",
f"{deposits[0]}.* First Stage transaction is still confirmed .*'{unvault_tx_a['blockheight']}'",
f"{deposits[1]}.* First Stage transaction is still confirmed .*'{unvault_tx_b['blockheight']}'",
"Rescan .*done",
f"New tip.* {new_tip}",
]
)
assert len(w.rpc.listvaults(["unvaulted"], deposits)["vaults"]) == len(deposits)
for vault in w.rpc.listvaults(["unvaulted"], deposits)["vaults"]:
assert vault["moved_at"] is None
for field in timestamps_from_status("unvaulted"):
assert vault[field] is not None, field
for field in timestamps_from_status("unvaulted", present=False):
assert vault[field] is None, field
# Now, if the Unvault tx moves we'll rewind up to the ancestor, rescan the chain
# and get back to the 'unvaulted' state.
bitcoind.simple_reorg(unvault_tx_a["blockheight"], shift=1)
for w in revault_network.participants():
w.wait_for_logs(
[
"Detected reorg",
f"Vault {deposits[0]}'s Unvault transaction .* got unconfirmed",
f"Vault {deposits[1]}'s Unvault transaction .* got unconfirmed",
"Rescan of all vaults in db done.",
]
)
for w in revault_network.participants():
wait_for(lambda: w.rpc.getinfo()["blockheight"] == bitcoind.rpc.getblockcount())
wait_for(
lambda: len(w.rpc.listvaults(["unvaulted"], deposits)["vaults"])
== len(deposits)
)
for vault in w.rpc.listvaults(["unvaulted"], deposits)["vaults"]:
assert vault["moved_at"] is None
for field in timestamps_from_status("unvaulted"):
assert vault[field] is not None, field
for field in timestamps_from_status("unvaulted", present=False):
assert vault[field] is None, field
# If it's not confirmed anymore, we'll detect it and mark the vault as unvaulting
unvault_tx_a = man.rpc.listonchaintransactions([deposits[0]])[
"onchain_transactions"
][0]["unvault"]
bitcoind.simple_reorg(unvault_tx_a["blockheight"], shift=-1)
for w in revault_network.participants():
w.wait_for_logs(
[
"Detected reorg",
f"Vault {deposits[0]}'s Unvault transaction .* got unconfirmed",
f"Vault {deposits[1]}'s Unvault transaction .* got unconfirmed",
"Rescan of all vaults in db done.",
]
)
for w in revault_network.participants():
wait_for(lambda: w.rpc.getinfo()["blockheight"] == bitcoind.rpc.getblockcount())
assert len(w.rpc.listvaults(["unvaulting"], deposits)["vaults"]) == len(
deposits
)
for vault in w.rpc.listvaults(["unvaulting"], deposits)["vaults"]:
assert vault["moved_at"] is None
for field in timestamps_from_status("unvaulting"):
assert vault[field] is not None, field
for field in timestamps_from_status("unvaulting", present=False):
assert vault[field] is None, field
# Now if we are spending
    # unvault_vaults() above actually registered the Spend transaction, so we can activate
    # it by generating enough blocks for it to be mature.
# NOTE: this exercises the logic of "jump from unvaulting to spending state"
assert len(bitcoind.rpc.getrawmempool()) == len(vaults)
bitcoind.generate_block(1, wait_for_mempool=len(vaults))
bitcoind.generate_block(CSV - 1)
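    # The first block confirmed the Unvaults; after CSV - 1 more their
    # relative-timelocked outputs are mature and the Spend can be broadcast.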
for w in revault_network.participants():
wait_for(lambda: w.rpc.getinfo()["blockheight"] == bitcoind.rpc.getblockcount())
wait_for(
lambda: len(w.rpc.listvaults(["spending"], deposits)["vaults"])
== len(deposits)
)
for vault in w.rpc.listvaults(["spending"], deposits)["vaults"]:
assert vault["moved_at"] is None
for field in timestamps_from_status("spending"):
assert vault[field] is not None, field
for field in timestamps_from_status("spending", present=False):
assert vault[field] is None, field
# If we are 'spending' and the Unvault gets unconfirmed, we'll rewind, get back to
# unvaulting, and mark the Spend for re-broadcast
unvault_tx_a = man.rpc.listonchaintransactions([deposits[0]])[
"onchain_transactions"
][0]["unvault"]
bitcoind.simple_reorg(unvault_tx_a["blockheight"], shift=-1)
height = bitcoind.rpc.getblockcount()
new_tip = f"{height}.*{bitcoind.rpc.getblockhash(height)}"
for w in revault_network.participants():
w.wait_for_logs(
[
"Detected reorg",
f"Vault {deposits[0]}'s Unvault transaction .* got unconfirmed",
f"Vault {deposits[1]}'s Unvault transaction .* got unconfirmed",
"Rescan of all vaults in db done.",
f"New tip.* {new_tip}",
]
)
for w in revault_network.participants():
wait_for(
lambda: len(w.rpc.listvaults(["unvaulting"], deposits)["vaults"])
== len(deposits)
)
for vault in w.rpc.listvaults(["unvaulting"], deposits)["vaults"]:
assert vault["moved_at"] is None
for field in timestamps_from_status("unvaulting"):
assert vault[field] is not None, field
for field in timestamps_from_status("unvaulting", present=False):
assert vault[field] is None, field
# Get to re-broadcast the spend
bitcoind.generate_block(1, wait_for_mempool=len(vaults))
bitcoind.generate_block(CSV - 1)
for w in revault_network.participants():
wait_for(
lambda: len(w.rpc.listvaults(["spending"], deposits)["vaults"])
== len(deposits)
)
for vault in w.rpc.listvaults(["spending"], deposits)["vaults"]:
assert vault["moved_at"] is None
for field in timestamps_from_status("spending"):
assert vault[field] is not None, field
for field in timestamps_from_status("spending", present=False):
assert vault[field] is None, field
# And confirm it
bitcoind.generate_block(1, wait_for_mempool=1)
for w in revault_network.participants():
wait_for(
lambda: len(w.rpc.listvaults(["spent"], deposits)["vaults"])
== len(deposits)
)
for vault in w.rpc.listvaults(["spent"], deposits)["vaults"]:
for field in timestamps_from_status("spent"):
assert vault[field] is not None, field
for field in timestamps_from_status("spent", present=False):
assert vault[field] is None, field
@pytest.mark.skipif(not POSTGRES_IS_SETUP, reason="Needs Postgres for servers db")
def test_reorged_spend(revault_network, bitcoind):
CSV = 12
revault_network.deploy(4, 2, csv=CSV, with_watchtowers=False)
vaults = revault_network.fundmany([32, 3])
# Spend the vaults, record the spend time
revault_network.activate_fresh_vaults(vaults)
deposits, _ = revault_network.spend_vaults_anyhow(vaults)
initial_moved_at = revault_network.stk(0).rpc.listvaults(["spent"])["vaults"][0][
"moved_at"
]
# Initial sanity checks..
for w in revault_network.participants():
wait_for(lambda: w.rpc.getinfo()["blockheight"] == bitcoind.rpc.getblockcount())
assert len(w.rpc.listvaults(["spent"], deposits)["vaults"]) == len(deposits)
for vault in w.rpc.listvaults(["spent"], deposits)["vaults"]:
for field in timestamps_from_status("spent"):
assert vault[field] is not None, field
for field in timestamps_from_status("spent", present=False):
assert vault[field] is None, field
# If we are 'spent' and the Spend gets unconfirmed, it'll get marked for
# re-broadcast
blockheight = bitcoind.rpc.getblockcount()
bitcoind.simple_reorg(blockheight, shift=-1)
for w in revault_network.participants():
w.wait_for_logs(
[
"Detected reorg",
f"Vault {deposits[0]}'s Spend transaction got unconfirmed",
f"Vault {deposits[1]}'s Spend transaction got unconfirmed",
"Rescan of all vaults in db done.",
]
)
# All good if we re-confirm it
bitcoind.generate_block(1, wait_for_mempool=1)
for w in revault_network.participants():
wait_for(
lambda: len(w.rpc.listvaults(["spent"], deposits)["vaults"])
== len(deposits)
)
for vault in w.rpc.listvaults(["spent"], deposits)["vaults"]:
for field in timestamps_from_status("spent"):
assert vault[field] is not None, field
for field in timestamps_from_status("spent", present=False):
assert vault[field] is None, field
# It's in a new block, it shouldn't have the same timestamp!
assert vault["moved_at"] != initial_moved_at
@pytest.mark.skipif(not POSTGRES_IS_SETUP, reason="Needs Postgres for servers db")
def test_reorged_cancel(revault_network, bitcoind):
revault_network.deploy(4, 2, csv=12, with_watchtowers=False)
stks = revault_network.stks()
mans = revault_network.mans()
vault = revault_network.fund(32)
revault_network.secure_vault(vault)
revault_network.activate_vault(vault)
deposit = f"{vault['txid']}:{vault['vout']}"
amount = vault["amount"]
addr = bitcoind.rpc.getnewaddress()
feerate = 1
fee = revault_network.compute_spendtx_fees(feerate, 1, 1)
destinations = {addr: amount - fee}
revault_network.unvault_vaults([vault], destinations, feerate)
unvault_tx = mans[0].rpc.listonchaintransactions([deposit])["onchain_transactions"][
0
]["unvault"]
# Now let's cancel the spending
revault_network.cancel_vault(vault)
cancel_tx = mans[0].rpc.listonchaintransactions([deposit])["onchain_transactions"][
0
]["cancel"]
initial_moved_at = revault_network.stk(0).rpc.listvaults()["vaults"][0]["moved_at"]
    # Reorg without unconfirming the Cancel for good: the replacement chain re-confirms it
bitcoind.simple_reorg(cancel_tx["blockheight"])
for w in stks + mans:
w.wait_for_logs(
[
"Detected reorg",
f"Vault {deposit}'s Cancel transaction got unconfirmed",
"Rescan of all vaults in db done.",
]
)
wait_for(lambda: w.rpc.getinfo()["blockheight"] == bitcoind.rpc.getblockcount())
# Let's unconfirm the cancel and check that the vault is now in 'canceling' state
bitcoind.simple_reorg(cancel_tx["blockheight"], shift=-1)
for w in stks + mans:
w.wait_for_logs(
[
"Detected reorg",
f"Vault {deposit}'s Cancel transaction got unconfirmed",
"Rescan of all vaults in db done.",
]
)
wait_for(lambda: w.rpc.getinfo()["blockheight"] == bitcoind.rpc.getblockcount())
for w in stks + mans:
wait_for(
lambda: w.rpc.listvaults([], [deposit])["vaults"][0]["status"]
== "canceling"
)
vault = w.rpc.listvaults([], [deposit])["vaults"][0]
assert vault["moved_at"] is None
for field in timestamps_from_status("canceling"):
assert vault[field] is not None, field
for field in timestamps_from_status("canceling", present=False):
assert vault[field] is None, field
# Confirming the cancel again
bitcoind.generate_block(1, wait_for_mempool=1)
for w in stks + mans:
w.wait_for_log("Cancel tx .* was confirmed at height .*")
wait_for(
lambda: w.rpc.listvaults([], [deposit])["vaults"][0]["status"] == "canceled"
)
for field in timestamps_from_status("canceled"):
vault = w.rpc.listvaults([], [deposit])["vaults"][0]
assert vault[field] is not None, field
for field in timestamps_from_status("canceled", present=False):
assert vault[field] is None, field
# It's in a new block, it shouldn't have the same timestamp!
assert vault["moved_at"] != initial_moved_at
# Let's unconfirm the unvault
bitcoind.simple_reorg(unvault_tx["blockheight"], shift=-1)
for w in stks + mans:
w.wait_for_log(f"Vault {deposit}'s Unvault transaction .* got unconfirmed")
# Here we go canceling everything again
bitcoind.generate_block(1, wait_for_mempool=2)
for w in stks + mans:
wait_for(
lambda: w.rpc.listvaults([], [deposit])["vaults"][0]["status"] == "canceled"
)
for field in timestamps_from_status("canceled"):
assert [field] is not None, field
for field in timestamps_from_status("canceled", present=False):
assert vault[field] is None, field
@pytest.mark.skipif(not POSTGRES_IS_SETUP, reason="Needs Postgres for servers db")
def test_retrieve_vault_status(revault_network, bitcoind):
"""Test we keep track of coins that moved without us actively noticing it."""
CSV = 3
revault_network.deploy(2, 2, csv=CSV)
stks = revault_network.stk_wallets
# We don't use mans() here as we need a reference to the actual list in order to
# modify it.
mans = revault_network.man_wallets
    # Create a new deposit and make everyone aware of it. Then stop one of the
    # wallets so that it doesn't notice anything from now on.
vault = revault_network.fund(0.05)
man = mans.pop(0)
man.stop()
    # Now activate and Spend the vault; the manager does not acknowledge it (yet)
revault_network.secure_vault(vault)
revault_network.activate_vault(vault)
deposits = [f"{vault['txid']}:{vault['vout']}"]
destinations = {bitcoind.rpc.getnewaddress(): vault["amount"] // 2}
spend_tx = mans[0].rpc.getspendtx(deposits, destinations, 1)["spend_tx"]
for m in [man] + mans:
spend_tx = m.man_keychain.sign_spend_psbt(spend_tx, [vault["derivation_index"]])
mans[0].rpc.updatespendtx(spend_tx)
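    # Parse the fully-signed PSBT to compute the Spend txid (calc_sha256 fills
    # in tx.hash), then announce it so the managers broadcast it.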
spend_psbt = serializations.PSBT()
spend_psbt.deserialize(spend_tx)
spend_psbt.tx.calc_sha256()
mans[0].rpc.setspendtx(spend_psbt.tx.hash)
bitcoind.generate_block(1, wait_for_mempool=len(deposits))
bitcoind.generate_block(CSV)
mans[0].wait_for_log(
f"Succesfully broadcasted Spend tx '{spend_psbt.tx.hash}'",
)
wait_for(lambda: len(mans[0].rpc.listvaults(["spending"], deposits)["vaults"]) == 1)
# The manager should restart, and acknowledge the vault as being "spending"
mans.insert(0, man)
mans[0].start()
deposit = f"{vault['txid']}:{vault['vout']}"
wait_for(
lambda: len(mans[0].rpc.listvaults(["spending"], deposits)["vaults"])
== len(deposits)
)
# And if we mine it now everyone will see it as "spent"
bitcoind.generate_block(1, wait_for_mempool=spend_psbt.tx.hash)
for w in mans + revault_network.stks():
wait_for(
lambda: len(w.rpc.listvaults(["spent"], deposits)["vaults"])
== len(deposits)
)
# Now do the same dance with a "spent" vault
vault = revault_network.fund(0.14)
man = mans.pop(0)
man.stop()
revault_network.secure_vault(vault)
revault_network.activate_vault(vault)
deposits = [f"{vault['txid']}:{vault['vout']}"]
destinations = {bitcoind.rpc.getnewaddress(): vault["amount"] // 2}
spend_tx = mans[0].rpc.getspendtx(deposits, destinations, 1)["spend_tx"]
for m in [man] + mans:
spend_tx = m.man_keychain.sign_spend_psbt(spend_tx, [vault["derivation_index"]])
mans[0].rpc.updatespendtx(spend_tx)
spend_psbt = serializations.PSBT()
spend_psbt.deserialize(spend_tx)
spend_psbt.tx.calc_sha256()
mans[0].rpc.setspendtx(spend_psbt.tx.hash)
bitcoind.generate_block(1, wait_for_mempool=len(deposits))
bitcoind.generate_block(CSV)
mans[0].wait_for_log(
f"Succesfully broadcasted Spend tx '{spend_psbt.tx.hash}'",
)
bitcoind.generate_block(1, wait_for_mempool=spend_psbt.tx.hash)
for w in mans + revault_network.stks():
wait_for(
lambda: len(w.rpc.listvaults(["spent"], deposits)["vaults"])
== len(deposits)
)
# The manager should restart, and acknowledge the vault as being "spent"
mans.insert(0, man)
mans[0].start()
deposit = f"{vault['txid']}:{vault['vout']}"
wait_for(
lambda: len(mans[0].rpc.listvaults(["spent"], [deposit])["vaults"])
== len(deposits)
)
# Now do the same dance with a "canceling" vault
vault = revault_network.fund(8)
man = mans.pop(0)
man.stop()
revault_network.secure_vault(vault)
revault_network.activate_vault(vault)
deposits = [f"{vault['txid']}:{vault['vout']}"]
destinations = {bitcoind.rpc.getnewaddress(): vault["amount"] // 2}
spend_tx = mans[0].rpc.getspendtx(deposits, destinations, 1)["spend_tx"]
for m in [man] + mans:
spend_tx = m.man_keychain.sign_spend_psbt(spend_tx, [vault["derivation_index"]])
mans[0].rpc.updatespendtx(spend_tx)
spend_psbt = serializations.PSBT()
spend_psbt.deserialize(spend_tx)
spend_psbt.tx.calc_sha256()
mans[0].rpc.setspendtx(spend_psbt.tx.hash)
bitcoind.generate_block(1, wait_for_mempool=len(deposits))
# Cancel it
for w in mans + revault_network.stks():
wait_for(
lambda: len(w.rpc.listvaults(["unvaulted"], deposits)["vaults"])
== len(deposits)
)
mans[0].rpc.revault(deposits[0])
for w in mans + revault_network.stks():
wait_for(
lambda: len(w.rpc.listvaults(["canceling"], deposits)["vaults"])
== len(deposits)
)
# The manager should restart, and acknowledge the vault as being "canceling"
mans.insert(0, man)
mans[0].start()
deposit = f"{vault['txid']}:{vault['vout']}"
wait_for(
lambda: len(mans[0].rpc.listvaults(["canceling"], [deposit])["vaults"])
== len(deposits)
)
# Now do the same dance with a "canceled" vault
vault = revault_network.fund(19)
man = mans.pop(0)
man.stop()
revault_network.secure_vault(vault)
revault_network.activate_vault(vault)
deposits = [f"{vault['txid']}:{vault['vout']}"]
destinations = {bitcoind.rpc.getnewaddress(): vault["amount"] // 2}
spend_tx = mans[0].rpc.getspendtx(deposits, destinations, 1)["spend_tx"]
for m in [man] + mans:
spend_tx = m.man_keychain.sign_spend_psbt(spend_tx, [vault["derivation_index"]])
mans[0].rpc.updatespendtx(spend_tx)
spend_psbt = serializations.PSBT()
spend_psbt.deserialize(spend_tx)
spend_psbt.tx.calc_sha256()
mans[0].rpc.setspendtx(spend_psbt.tx.hash)
bitcoind.generate_block(1, wait_for_mempool=len(deposits))
# Cancel it
for w in mans + revault_network.stks():
wait_for(
lambda: len(w.rpc.listvaults(["unvaulted"], deposits)["vaults"])
== len(deposits)
)
mans[0].rpc.revault(deposits[0])
bitcoind.generate_block(1, wait_for_mempool=1)
for w in mans + revault_network.stks():
wait_for(
lambda: len(w.rpc.listvaults(["canceled"], deposits)["vaults"])
== len(deposits)
)
# The manager should restart, and acknowledge the vault as being "canceled"
mans.insert(0, man)
mans[0].start()
deposit = f"{vault['txid']}:{vault['vout']}"
wait_for(
lambda: len(mans[0].rpc.listvaults(["canceled"], [deposit])["vaults"])
== len(deposits)
)
# Now do the same dance with a "unvaulting" vault
vault = revault_network.fund(41)
man = mans.pop(0)
man.stop()
revault_network.secure_vault(vault)
revault_network.activate_vault(vault)
deposits = [f"{vault['txid']}:{vault['vout']}"]
destinations = {bitcoind.rpc.getnewaddress(): vault["amount"] // 2}
spend_tx = mans[0].rpc.getspendtx(deposits, destinations, 1)["spend_tx"]
for m in [man] + mans:
spend_tx = m.man_keychain.sign_spend_psbt(spend_tx, [vault["derivation_index"]])
mans[0].rpc.updatespendtx(spend_tx)
spend_psbt = serializations.PSBT()
spend_psbt.deserialize(spend_tx)
spend_psbt.tx.calc_sha256()
mans[0].rpc.setspendtx(spend_psbt.tx.hash)
for w in mans + revault_network.stks():
wait_for(
lambda: len(w.rpc.listvaults(["unvaulting"], deposits)["vaults"])
== len(deposits)
)
# The manager should restart, and acknowledge the vault as being "unvaulting"
mans.insert(0, man)
mans[0].start()
deposit = f"{vault['txid']}:{vault['vout']}"
wait_for(
lambda: len(mans[0].rpc.listvaults(["unvaulting"], [deposit])["vaults"])
== len(deposits)
)
# Now do the same dance with a "unvaulted" vault
vault = revault_network.fund(99)
man = mans.pop(0)
man.stop()
revault_network.secure_vault(vault)
revault_network.activate_vault(vault)
deposits = [f"{vault['txid']}:{vault['vout']}"]
destinations = {bitcoind.rpc.getnewaddress(): vault["amount"] // 2}
spend_tx = mans[0].rpc.getspendtx(deposits, destinations, 1)["spend_tx"]
for m in [man] + mans:
spend_tx = m.man_keychain.sign_spend_psbt(spend_tx, [vault["derivation_index"]])
mans[0].rpc.updatespendtx(spend_tx)
spend_psbt = serializations.PSBT()
spend_psbt.deserialize(spend_tx)
spend_psbt.tx.calc_sha256()
mans[0].rpc.setspendtx(spend_psbt.tx.hash)
bitcoind.generate_block(1, wait_for_mempool=len(deposits))
for w in mans + revault_network.stks():
wait_for(
lambda: len(w.rpc.listvaults(["unvaulted"], deposits)["vaults"])
== len(deposits)
)
# The manager should restart, and acknowledge the vault as being "unvaulted"
mans.insert(0, man)
mans[0].start()
deposit = f"{vault['txid']}:{vault['vout']}"
wait_for(
lambda: len(mans[0].rpc.listvaults(["unvaulted"], [deposit])["vaults"])
== len(deposits)
)
# Now do the same dance with an "active" vault
vault = revault_network.fund(0.0556789)
man = mans.pop(0)
man.stop()
revault_network.secure_vault(vault)
revault_network.activate_vault(vault)
# The manager should restart, and acknowledge the vault as being "active"
mans.insert(0, man)
mans[0].start()
deposit = f"{vault['txid']}:{vault['vout']}"
mans[0].wait_for_active_vaults([deposit])
# Now do the same dance with a "secured" vault
vault = revault_network.fund(0.123456)
man = mans.pop(0)
man.stop()
revault_network.secure_vault(vault)
# The manager should restart, and acknowledge the vault as being "secured"
mans.insert(0, man)
mans[0].start()
deposit = f"{vault['txid']}:{vault['vout']}"
mans[0].wait_for_secured_vaults([deposit])
# Now do the same dance with an "emergencyvaulting" vault
vault = revault_network.fund(0.98634)
deposit = f"{vault['txid']}:{vault['vout']}"
revault_network.secure_vault(vault)
stk = stks.pop(0)
stk.stop()
stks[0].rpc.emergency()
wait_for(
lambda: len(stks[0].rpc.listvaults(["emergencyvaulting"], [deposit])["vaults"])
== 1
)
# The stakeholder should restart, and acknowledge the vault as being "emergencyvaulting"
stks.insert(0, stk)
stks[0].start()
deposit = f"{vault['txid']}:{vault['vout']}"
wait_for(
lambda: len(stks[0].rpc.listvaults(["emergencyvaulting"], [deposit])["vaults"])
== 1
)
# Now do the same dance with an "unvaultemergencyvaulting" vault
vault = revault_network.fund(1.64329)
deposit = f"{vault['txid']}:{vault['vout']}"
revault_network.activate_fresh_vaults([vault])
revault_network.unvault_vaults_anyhow([vault])
stk = stks.pop(0)
stk.stop()
stks[0].rpc.emergency()
wait_for(
lambda: len(
stks[0].rpc.listvaults(["unvaultemergencyvaulting"], [deposit])["vaults"]
)
== 1
)
    # The stakeholder should restart, and acknowledge the vault as being "unvaultemergencyvaulting"
stks.insert(0, stk)
stks[0].start()
deposit = f"{vault['txid']}:{vault['vout']}"
wait_for(
lambda: len(
stks[0].rpc.listvaults(["unvaultemergencyvaulting"], [deposit])["vaults"]
)
== 1
)
| 40.060914 | 112 | 0.635707 | 4,921 | 39,460 | 4.96078 | 0.073359 | 0.071113 | 0.025807 | 0.029494 | 0.813575 | 0.788506 | 0.7636 | 0.751761 | 0.728986 | 0.700434 | 0 | 0.010205 | 0.237608 | 39,460 | 984 | 113 | 40.101626 | 0.801256 | 0.121921 | 0 | 0.700637 | 0 | 0 | 0.175564 | 0.038699 | 0 | 0 | 0 | 0.002033 | 0.070064 | 1 | 0.012739 | false | 0 | 0.006369 | 0 | 0.021656 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
4dbee7f1d8de0c31dc0e69bd076e6aa9dc4007e5 | 119 | py | Python | benchmarks/syft_benchmarks/__init__.py | leosole/PySyft | 01606f08f5ec5510840644e198301cd25c3ccfa5 | [
"Apache-1.1"
] | null | null | null | benchmarks/syft_benchmarks/__init__.py | leosole/PySyft | 01606f08f5ec5510840644e198301cd25c3ccfa5 | [
"Apache-1.1"
] | null | null | null | benchmarks/syft_benchmarks/__init__.py | leosole/PySyft | 01606f08f5ec5510840644e198301cd25c3ccfa5 | [
"Apache-1.1"
] | null | null | null | # relative
from .repts.suite import run_rept_suite # noqa: F401
from .septs.suite import run_sept_suite # noqa: F401
| 29.75 | 53 | 0.773109 | 19 | 119 | 4.631579 | 0.578947 | 0.25 | 0.318182 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.059406 | 0.151261 | 119 | 3 | 54 | 39.666667 | 0.811881 | 0.252101 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
4de8dc492f5c431e253545fc0187f843c69e5fa1 | 104 | py | Python | api/app/author/__init__.py | yunfei07/vue-flask-in-action | 8695f9a252bb3e2136609f421e02a0d3f01c0e58 | [
"MIT"
] | null | null | null | api/app/author/__init__.py | yunfei07/vue-flask-in-action | 8695f9a252bb3e2136609f421e02a0d3f01c0e58 | [
"MIT"
] | null | null | null | api/app/author/__init__.py | yunfei07/vue-flask-in-action | 8695f9a252bb3e2136609f421e02a0d3f01c0e58 | [
"MIT"
] | null | null | null | from flask import Blueprint
author_bp = Blueprint('author_bp', __name__)
from app.author import routes
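# NOTE: the routes are imported after the Blueprint is created, the usual Flask
# pattern to avoid a circular import (routes presumably imports author_bp from here).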
| 20.8 | 44 | 0.807692 | 15 | 104 | 5.2 | 0.6 | 0.384615 | 0.435897 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.125 | 104 | 4 | 45 | 26 | 0.857143 | 0 | 0 | 0 | 0 | 0 | 0.086538 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.666667 | 0 | 0.666667 | 0.666667 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 1 | 0 | 7 |
128ccf908e1a9784d222c10a93ceb92d3741e53a | 1,039,540 | py | Python | MiddlePunks.py | docluffy/NFTlurk | bedf7f65dfc59e1b16314af3800bd7ead9dfc0ab | [
"MIT"
] | 2 | 2021-09-13T16:04:13.000Z | 2021-09-14T10:11:11.000Z | MiddlePunks.py | docluffy/NFTlurk | bedf7f65dfc59e1b16314af3800bd7ead9dfc0ab | [
"MIT"
] | null | null | null | MiddlePunks.py | docluffy/NFTlurk | bedf7f65dfc59e1b16314af3800bd7ead9dfc0ab | [
"MIT"
] | null | null | null | # Built with python 3, dependencies installed with pip
# library to generate images - Pillow
# https://pillow.readthedocs.io/en/stable/installation.html
from PIL import Image
# library to work with arrays and dataframes
# https://numpy.org/
# https://pandas.pydata.org/
import numpy as np
import pandas as pd
import csv
import json
# library to interact with the operating system
import os
# library to generate random integer values
from random import seed
from random import randint
import sys
#print(sys.getrecursionlimit())
sys.setrecursionlimit(10000)
#print(sys.getrecursionlimit())
# gets the directory of this script, used when saving the generated images
dirname = os.path.dirname(os.path.abspath(__file__))
Races = ["Unknown","Halflings", "Men", "Elves", "Dwarves", "Gobelins", "Orcs", "Wizards", "Daemons", "Wraiths", "Dark Riders", "Dark Lord"]
Types = ["Male", "Female","Firebeards","Blacklocks","Broadbeams","Stiffbeards","Stonefoots","Ironfists","Longbeards","White", "Grey", "Wood", "Blue", "Tower", "None"]
Skins = ["Red","Eggplant","Granite","Dark Grey","Charcoal","Albino","Light","Mid","Dark","Purple","Camel","Wattle","Smokey Grey","Moon Grey","Sand","Green","Peach","Dust","Bone","Silk","None"]
Ears = ["Earring", "None"]
Haircolors = ["Black","Bronze","Mango","Dark Grey","Persian Blue","Sapphire","Indigo","Topaz","Burning Orange","Taupe & Cookie Brown","Brown & Cookie Brown","Taupe & Graphite","Brown & Graphite","Seashell & Grey","Seashell & Carbon Grey","Smokey Grey & Charcoal","Grey & Carbon Grey","Dark Grey & Silver","Granite & Seashell","Dark Grey & Black","Black & Granite","Carbon Grey","Seashell","Silver","Granite","Grey Goose","Mango & Brown","Ginger & Fair","Bronze & Chocolate","Fair & Wattle","Orange & Black Rose","Dark Grey & Silver","Butter","Red","Blond","Blonde","Orange","Fair","Grey","Ginger","Black Rose","Brown","None"]
Haircuts = ["Braids","Long Hair","Medium Layers","The Bob","Left Side Hair","Right Side Hair","Curly Hair","Prince Hair","King Hair","Straight Hair","Grunge Hair","Wild Hair","Perm Hair","Bedhead","Hockey Hair","Bald","Wedge Hair","Feathered Hair","Ponytail","None"]
Hairprops = ["Orc Helmet","Gobelins Crown","Dwarf Helmet","Elfic Tiara","Elfic Crown","Circlet","Punk Hat","Beanie","Fedora","Bandana","Knitted Cap","Men Crown","Police","Top Hat","Cap Forward","Cowboy Hat","Cap","Tiara","Flower","Shire Hat","Headband","Pilot Helmet","None"]
Necks = ["Choker","Gold Chain","Silver Chain","Ring Onchain","Brooch","None"]
Facialhairs = ["Big Beard","Muttonchops","Mustache","Handlebars","Front Beard Dark","Front Beard","Normal Beard","Normal Beard Black","Luxurious Beard","Goat","Chinstrap","Shadow Beard","None"]
Mouthprops = ["Cigarette","Medical Mask","Pipe","Vape","None"]
Eyecolors = ["Orange Eye Shadow","Orange","Purple","Blue Eye Shadow","Purple Eye Shadow","Green Eye Shadow","Black","Peach","Blue","White","Yellow","Red","None"]
Eyeprops = ["3D Glasses","VR","Classic Shades","Small Shades","Eye Patch","Nerd Glasses","Big Shades","Eye Mask","Horned Rim Glasses","Regular Shades","Welding Goggles","None"]
Noses = ["Clown Nose","None"]
Blemishes = ["Scare","Rosy Cheeks","Mole","None"]
Toothcolors = ["Brown","White","Gold","Blood","None"]
Mouths = ["Smile","Frown","None","Black Lipstick","Hot Lipstick","Purple Lipstick","Orange Lipstick"]
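# Sketch (assumption; the assignment happens further down the full script): the
# *_ep trait variables read by createCombo() below are presumably drawn from
# the lists above, e.g. `race_ep = Races[randint(0, len(Races) - 1)]`, with
# rarity weights applied in the real generation loop.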
# Metadata prep
def createCombo():
trait = {}
#trait["Name"] = name_ep
trait["Race"] = race_ep
trait["Type"] = type_ep
trait["Skin Tone"] = skin_ep
trait["Ears"] = ears_ep
trait["Hair Color"] = hair_color_ep
trait["Haircut"] = haircut_ep
trait["Hair Prop"] = hair_prop_ep
trait["Neck"] = neck_ep
trait["Facial Hair"] = facial_hair_ep
trait["Mouth Prop"] = mouth_prop_ep
trait["Eyes Color"] = eyes_color_ep
trait["Eyes Prop"] = eyes_prop_ep
trait["Nose"] = nose_ep
trait["Blemishe"] = blemishe_ep
trait["Tooth Color"] = tooth_color_ep
trait["Mouth"] = mouth_ep
if trait in traits:
filterlist1.append(x)
else:
return trait
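# NOTE: createCombo() returns None when it hits a duplicate combination; the
# raised recursion limit above suggests the full script (truncated here)
# retries it recursively until a fresh combination is found. The *_ep
# variables, `x` and `filterlist1` are module-level globals assigned further
# down the script.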
traits = []
# sets final image dimensions as 480x480 pixels
# the original 24x24 pixel image will be expanded to these dimensions
dimensions = 480, 480
s=(24,24)
none = np.zeros(s)
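# (`none` above is presumably the all-transparent layer used for "None" traits.)
# Illustrative sketch (not part of the original script): one way a 24x24 trait
# matrix like the ones below can be rendered and upscaled with Pillow. Entries
# are 0 for transparent pixels or (R, G, B) tuples for colored ones.
def render_layer_sketch(matrix, size=dimensions):
    layer = Image.new("RGBA", (24, 24), (0, 0, 0, 0))
    for y, row in enumerate(matrix):
        for x, pixel in enumerate(row):
            if pixel != 0:
                # Opaque pixel of the given color
                layer.putpixel((x, y), (*pixel, 255))
    # NEAREST keeps the hard pixel-art edges when upscaling to 480x480
    return layer.resize(size, Image.NEAREST)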
# Variables to define the colors with the RGB system
nr = (0,0,0)
bl = (255,255,255)
BG1 = (0,110,110)
FR1 = nr
FR2 = bl
BR1 = nr
BR2 = bl
FR3 = nr
DE1 = bl
SK3 = bl
BE1 = nr
BE2 = (204,154,39)
BE3 = (102,28,51)
BE4 = (128,97,21)
BE7 = (104,70,31)
CG2 = (198,198,198)
CG3 = (241,68,0)
CG4 = (157,178,187)
CG1 = (0,0,0)
PI2 = (139,78,0)
PI3 = (109,57,0)
PI1 = (0,0,0)
PI4 = (139,160,169)
MO1 = (156,141,138)
MO2 = (148,118,83)
MO3 = (121,95,64)
MO4 = (86,48,21)
SM1 = (0,0,0)
FW1 = (0,0,0)
VP3 = (89,89,89)
VP2 = (57,0,255)
VP1 = (0,0,0)
CN1 = (231,0,0)
RC1 = (215,154,104)
HRG3 = (213,200,183)
HRG2 = (184,163,135)
HRG4 = (209,189,164)
HRG5 = (87,101,113)
HRG1 = (0,0,0)
MK1 = (201,201,201)
MK2 = (177,177,177)
ER2 = (255,221,0)
ER1 = (0,0,0)
GC1 = (255,203,0)
RG1 = (255,160,0)
BO1 = (35,165,115)
SV1 = (223,223,223)
KR1 = (0,0,0)
HL1 = (212,0,0)
PL1 = (226,0,203)
NL1 = (122,0,0)
BL1 = (0,0,0)
CH1 = (127,73,0)
CH2 = (84,45,0)
CA1 = (145,0,185)
CA2 = (194,60,221)
BN1 = (2,85,198)
BN3 = (221,244,0)
BN2 = (231,0,0)
BN4 = (0,208,0)
BN5 = (0,0,0)
TH1 = (0,0,0)
TH2 = (238,0,0)
KC2 = (216,56,0)
KC3 = (157,39,0)
KC1 = (0,0,0)
HB1 = (255,255,255)
HB2 = (25,100,216)
FC2 = (81,81,81)
FC3 = (53,53,53)
FC1 = (0,0,0)
BA1 = (48,36,203)
BA2 = (39,31,167)
BA3 = (30,29,126)
FD1 = (63,47,28)
FD2 = (0,0,0)
PC2 = (38,47,75)
PC4 = (255,220,0)
PC1 = (0,0,0)
PC3 = (255,255,255)
TD1 = (240,240,240)
TD3 = (44,131,255)
TD2 = (255,0,0)
VR2 = (180,180,180)
VR3 = (141,141,141)
VR1 = (0,0,0)
CSH2 = (96,55,4)
CSH3 = (209,111,0)
CSH1 = (0,0,0)
SSH1 = (0,0,0)
EP1 = (0,0,0)
ND1 = (97,224,220)
ND2 = (0,0,0)
BSH2 = (115,0,67)
BSH3 = (153,0,89)
BSH4 = (188,0,92)
BSH1 = (0,0,0)
EM2 = (215,215,215)
EM1 = (0,0,0)
RSH1 = (0,0,0)
TI1 = (255,186,0)
TI2 = (255,0,0)
MH2 = (255,255,255)
MH1 = (0,0,0)
PH2 = (97,224,220)
PH1 = (250,128,114)
PH3 = (0,0,0)
WG3 = (97,224,220)
WG2 = (82,78,0)
WG1 = (28,27,0)
OH2 = (50,40,40)
OH1 = (90,65,55)
ETI = SV1 #(0,223,138)
HOB1 = (255,192,0)
HOB2 = (255,255,0)
HOB3 = (255,0,0)
HOB4 = (146,208,80)
HOB5 = (192,0,0)
GCR1 = (191,191,191)
GCR2 = (128,128,128)
GCR3 = (219,219,219)
GCR4 = (219,227,115)
GCR5 = (255,192,0)
KGC = (159,109,9)
FL1 = (219,227,115)
FL2 = (255,192,0)
FL3 = (146,208,80)
FL4 = (255,255,0)
EOY1 = (255,192,0)
EOY2 = (255,255,0)
ELT = (255,192,0)
DHL1 = (190,130,70)
DHL2 = (80,50,30)
DHL3 = (0,0,0)
THR1=(200,140,90)
# The 24x24 pixel matrix for each attribute: every entry is either 0
# (transparent) or one of the RGB tuples defined above.
ORC_HELMET=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,OH2,OH2,OH2,OH2,OH2,OH2,OH2,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,OH2,OH2,OH2,OH2,OH2,OH2,OH2,OH2,OH2,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,OH2,OH2,OH2,OH2,OH2,OH2,OH2,OH2,OH2,OH2,OH2,0,0,0,0,0,0],
[0,0,0,0,0,0,OH2,OH2,OH2,OH1,OH1,OH1,OH1,OH1,OH1,OH1,OH2,OH2,OH2,0,0,0,0,0],
[0,0,0,0,0,0,0,0,OH1,OH1,OH1,OH1,OH1,OH1,OH1,OH1,OH1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,OH1,OH1,OH1,OH1,OH1,OH1,OH1,OH1,OH1,OH1,OH1,0,0,0,0,0,0],
[0,0,0,0,0,0,0,OH1,OH1,OH1,OH1,OH1,OH1,OH1,OH1,OH1,OH1,OH1,0,0,0,0,0,0],
[0,0,0,0,0,0,0,OH1,OH1,OH1,OH1,OH1,OH1,OH1,OH1,OH1,OH1,OH1,0,0,0,0,0,0],
[0,0,0,0,0,0,0,OH1,OH1,OH1,OH1,OH1,OH1,OH1,OH1,OH1,OH1,OH1,0,OH1,0,0,0,0],
[0,0,0,0,0,0,0,OH1,OH1,OH1,OH1,OH1,OH1,OH1,OH1,OH1,OH1,OH1,OH1,0,0,0,0,0],
[0,0,0,0,0,0,0,OH1,0,0,OH1,OH1,OH1,0,0,OH1,OH1,OH1,0,0,0,0,0,0],
[0,0,0,0,0,0,0,OH1,0,0,OH1,OH1,OH1,0,0,OH1,OH1,OH1,0,0,0,0,0,0],
[0,0,0,0,0,0,0,OH1,OH1,OH1,OH1,OH1,OH1,OH1,OH1,OH1,OH1,OH1,0,OH1,0,0,0,0],
[0,0,0,0,0,0,0,OH1,OH1,OH1,OH1,OH1,OH1,OH1,OH1,OH1,OH1,OH1,0,OH1,0,0,0,0],
[0,0,0,0,0,0,0,OH1,OH1,OH1,OH1,OH1,OH1,OH1,OH1,OH1,OH1,OH1,OH1,OH1,0,0,0,0],
[0,0,0,0,0,0,0,OH1,OH1,OH1,OH1,OH1,OH1,OH1,OH1,OH1,OH1,OH1,0,OH1,0,0,0,0],
[0,0,0,0,0,0,0,OH1,OH1,0,0,0,0,0,0,OH1,OH1,OH1,0,0,0,0,0,0],
[0,0,0,0,0,0,0,OH1,OH1,0,0,0,0,0,0,OH1,OH1,OH1,0,0,0,0,0,0],
[0,0,0,0,0,0,0,OH1,OH1,0,0,0,0,0,0,OH1,OH1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,OH1,OH1,0,0,0,0,OH1,OH1,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
CIGARETTE=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,CG4,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,CG4,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,CG4,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,CG4,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,CG4,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,CG4,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,CG1,CG1,CG1,CG1,CG1,CG1,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,CG1,CG3,CG2,CG2,CG2,CG2,CG2,CG1,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,CG1,CG1,CG1,CG1,CG1,CG1,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
PIPE=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,PI4,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,PI4,PI4,PI4,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,PI4,PI4,PI4,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,PI4,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,PI4,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,PI1,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,PI1,PI1,PI1,PI1,PI1,0,0,PI1,PI2,PI1,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,PI1,PI2,PI2,PI2,PI1,0,PI1,PI2,PI1,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,PI1,PI3,PI2,PI3,PI1,PI1,PI2,PI1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,PI1,PI3,PI2,PI2,PI2,PI1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,PI1,PI1,PI1,PI1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
MOLE=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,MO1,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
MOLE_2=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,MO1,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
SMILE=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,SM1,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
FROWN=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,FW1,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
VAPE=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,VP1,VP1,VP1,VP1,VP1,VP1,VP1,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,VP1,VP2,VP3,VP3,VP3,VP3,VP3,VP1,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,VP1,VP1,VP1,VP1,VP1,VP1,VP1,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
NOSE_1=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,CN1,CN1,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,CN1,CN1,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
NOSE_2=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,CN1,CN1,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,CN1,CN1,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
NOSE_3=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,CN1,CN1,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,CN1,CN1,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
ROSY_1=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,RC1,0,0,0,0,RC1,RC1,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,RC1,0,0,0,0,0,RC1,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
ROSY_2=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,RC1,RC1,0,0,0,RC1,RC1,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
MASK_1=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,MK1,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,MK1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,MK1,0,0,0,0,0,0,MK1,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,MK1,MK1,MK2,MK1,MK1,MK1,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,MK1,MK1,MK1,MK1,MK1,MK1,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,MK2,MK1,MK1,MK1,MK1,MK2,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,MK1,MK1,MK1,MK1,MK1,MK1,MK1,MK1,MK1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,MK1,MK1,MK1,MK1,MK1,MK1,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,MK1,MK1,MK1,MK1,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
MASK_2=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,MK1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,MK1,0,0,0,0,0,0,MK1,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,MK1,MK1,MK2,MK1,MK1,MK1,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,MK1,MK1,MK1,MK1,MK1,MK1,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,MK2,MK1,MK1,MK1,MK1,MK2,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,MK1,MK1,MK1,MK1,MK1,MK1,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,MK1,MK1,MK1,MK1,MK1,MK1,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,MK1,MK1,MK1,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
EARS_0=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,ER1,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,ER1,ER2,ER1,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,ER1,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
EARS_1=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,ER1,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,ER1,ER2,ER1,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,ER1,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
EARS_2=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,ER1,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,ER1,ER2,ER1,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,ER1,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
EARS_3=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,ER1,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,ER1,ER2,ER1,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,ER1,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
EARS_4=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,ER1,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,ER1,ER2,ER1,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,ER1,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
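# Accessory layers follow: gold/silver chains, a scar, rings, a brooch and a
# choker. Cell values name color constants (RGB tuples such as SCR1 below).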
GoldChain_1=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,GC1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,GC1,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,GC1,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
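# SCR1: RGB color tuple (dark gray) used by the scar layers SCARE_1/SCARE_2.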
SCR1 = (20,20,20)
SCARE_1=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,SCR1,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,SCR1,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,SCR1,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,SCR1,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,SCR1,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
SCARE_2=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,SCR1,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,SCR1,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,SCR1,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,SCR1,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
GoldChain_4=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,GC1,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,GC1,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,GC1,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
RING_1=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,RG1,0,RG1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,RG1,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
BROCHE_1=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,BO1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,BO1,BO1,BO1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,BO1,0,0,0,0,0,0,0]
]
BROCHE_2=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,BO1,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,BO1,BO1,BO1,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,BO1,0,0,0,0,0,0,0,0]
]
BROCHE_3=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,BO1,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,BO1,BO1,BO1,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,BO1,0,0,0,0,0,0,0,0,0]
]
SilverChain_1=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,SV1,SV1,SV1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
SilverChain_2=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,SV1,SV1,SV1,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
GoldChain_2=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,GC1,GC1,GC1,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
RING_2=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,RG1,0,RG1,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,RG1,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
SilverChain_3=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,SV1,SV1,SV1,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
GoldChain_3=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,GC1,GC1,GC1,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
CHOKER=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,KR1,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,KR1,KR1,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,KR1,KR1,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
RING_3=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,RG1,0,RG1,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,RG1,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
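# Headwear layers follow: caps, beanies, top hats, knitted caps, a headband,
# cowboy hats, forward caps, bandanas, fedoras and police caps.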
CAP_1=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,CA1,CA1,CA1,CA1,CA1,CA1,CA1,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,CA1,CA1,CA2,CA1,CA1,CA1,CA1,CA1,CA1,BG1,0,0,0,0,0,0],
[0,0,0,0,0,0,0,BG1,CA1,CA2,CA1,CA1,CA1,CA1,CA1,CA1,CA1,CA1,BG1,0,0,0,0,0],
[0,0,0,0,0,CA1,CA1,CA1,CA1,CA1,CA1,CA1,CA1,CA1,CA1,CA1,CA1,CA1,BG1,0,0,0,0,0],
[0,0,0,0,CA1,CA1,CA1,CA1,CA1,CA1,CA1,CA1,CA1,CA1,CA1,CA1,CA1,CA1,0,0,0,0,0,0],
[0,0,0,0,0,0,BG1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
CAP_7=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,BG1,BG1,CA1,CA1,CA1,CA1,CA1,CA1,CA1,BG1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,BG1,BG1,CA1,CA1,CA2,CA1,CA1,CA1,CA1,CA1,CA1,BG1,0,0,0,0,0,0],
[0,0,0,0,0,BG1,BG1,BG1,CA1,CA2,CA1,CA1,CA1,CA1,CA1,CA1,CA1,CA1,BG1,0,0,0,0,0],
[0,0,0,0,0,CA1,CA1,CA1,CA1,CA1,CA1,CA1,CA1,CA1,CA1,CA1,CA1,CA1,BG1,0,0,0,0,0],
[0,0,0,0,CA1,CA1,CA1,CA1,CA1,CA1,CA1,CA1,CA1,CA1,CA1,CA1,CA1,CA1,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
BEANI_1=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,BG1,BG1,BG1,BG1,BG1,BG1,BG1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,BG1,BN1,BN1,BN1,BN1,BN1,BG1,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,BG1,BG1,BG1,BG1,BN5,BG1,BG1,BG1,BG1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,BG1,BG1,BN2,BN2,BN3,BN3,BN3,BN1,BN1,BG1,BG1,0,0,0,0,0,0],
[0,0,0,0,0,0,BG1,BG1,BN2,BN2,BN2,BN3,BN3,BN3,BN1,BN1,BN1,BG1,BG1,0,0,0,0,0],
[0,0,0,0,0,0,BG1,BN2,BN2,BN2,BN3,BN3,BN3,BN3,BN3,BN1,BN1,BN1,BG1,0,0,0,0,0],
[0,0,0,0,0,0,BG1,BN2,BN2,BN3,BN3,BN3,BN3,BN3,BN3,BN3,BN1,BN1,BG1,0,0,0,0,0],
[0,0,0,0,0,0,BG1,0,0,BN4,BN4,BN4,BN4,BN4,BN4,BN4,0,0,0,BG1,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
BEANI_7=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,0,0,0,0,0,0],
[0,0,0,0,0,0,0,BG1,BG1,BG1,BN1,BN1,BN1,BN1,BN1,BG1,BG1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,BG1,BG1,BG1,BG1,BG1,BG1,BN5,BG1,BG1,BG1,BG1,BG1,0,0,0,0,0,0],
[0,0,0,0,0,BG1,BG1,BG1,BG1,BN2,BN2,BN3,BN3,BN3,BN1,BN1,BG1,BG1,BG1,0,0,0,0,0],
[0,0,0,0,0,BG1,BG1,BG1,BN2,BN2,BN2,BN3,BN3,BN3,BN1,BN1,BN1,BG1,BG1,0,0,0,0,0],
[0,0,0,0,0,BG1,BG1,BN2,BN2,BN2,BN3,BN3,BN3,BN3,BN3,BN1,BN1,BN1,BG1,0,0,0,0,0],
[0,0,0,0,0,BG1,BG1,BN2,BN2,BN3,BN3,BN3,BN3,BN3,BN3,BN3,BN1,BN1,BG1,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,BN4,BN4,BN4,BN4,BN4,BN4,BN4,0,0,BG1,BG1,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
TOPHAT_1=[
[0,0,0,0,0,0,0,0,TH1,TH1,TH1,TH1,TH1,TH1,TH1,TH1,TH1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,TH1,TH1,TH1,TH1,TH1,TH1,TH1,TH1,TH1,TH1,TH1,0,0,0,0,0,0],
[0,0,0,0,0,0,0,TH1,TH1,TH1,TH1,TH1,TH1,TH1,TH1,TH1,TH1,TH1,0,0,0,0,0,0],
[0,0,0,0,0,0,0,TH1,TH1,TH1,TH1,TH1,TH1,TH1,TH1,TH1,TH1,TH1,0,0,0,0,0,0],
[0,0,0,0,0,0,0,TH1,TH1,TH1,TH1,TH1,TH1,TH1,TH1,TH1,TH1,TH1,0,0,0,0,0,0],
[0,0,0,0,0,0,0,TH2,TH2,TH2,TH2,TH2,TH2,TH2,TH2,TH2,TH2,TH2,0,0,0,0,0,0],
[0,0,0,0,0,0,TH1,TH1,TH1,TH1,TH1,TH1,TH1,TH1,TH1,TH1,TH1,TH1,TH1,0,0,0,0,0],
[0,0,0,0,0,TH1,TH1,TH1,TH1,TH1,TH1,TH1,TH1,TH1,TH1,TH1,TH1,TH1,TH1,TH1,0,0,0,0],
[0,0,0,0,0,BG1,0,0,0,0,0,0,0,0,0,0,0,0,BG1,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
TOPHAT_7=[
[0,0,0,0,0,0,0,0,TH1,TH1,TH1,TH1,TH1,TH1,TH1,TH1,TH1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,TH1,TH1,TH1,TH1,TH1,TH1,TH1,TH1,TH1,TH1,TH1,0,0,0,0,0,0],
[0,0,0,0,0,0,0,TH1,TH1,TH1,TH1,TH1,TH1,TH1,TH1,TH1,TH1,TH1,0,0,0,0,0,0],
[0,0,0,0,0,0,0,TH1,TH1,TH1,TH1,TH1,TH1,TH1,TH1,TH1,TH1,TH1,0,0,0,0,0,0],
[0,0,0,0,0,0,BG1,TH1,TH1,TH1,TH1,TH1,TH1,TH1,TH1,TH1,TH1,TH1,0,0,0,0,0,0],
[0,0,0,0,0,BG1,BG1,TH2,TH2,TH2,TH2,TH2,TH2,TH2,TH2,TH2,TH2,TH2,BG1,0,0,0,0,0],
[0,0,0,0,0,BG1,TH1,TH1,TH1,TH1,TH1,TH1,TH1,TH1,TH1,TH1,TH1,TH1,TH1,0,0,0,0,0],
[0,0,0,0,0,TH1,TH1,TH1,TH1,TH1,TH1,TH1,TH1,TH1,TH1,TH1,TH1,TH1,TH1,TH1,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
KNITTED_1=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,BG1,KC1,KC1,KC1,KC1,KC1,KC1,KC1,BG1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,BG1,KC1,KC2,KC2,KC2,KC2,KC2,KC2,KC2,KC1,BG1,0,0,0,0,0,0],
[0,0,0,0,0,0,BG1,KC1,KC2,KC2,KC2,KC2,KC2,KC2,KC2,KC2,KC2,KC1,BG1,0,0,0,0,0],
[0,0,0,0,0,BG1,KC1,KC3,KC3,KC3,KC3,KC3,KC3,KC3,KC3,KC3,KC3,KC3,KC1,BG1,0,0,0,0],
[0,0,0,0,0,BG1,KC1,KC3,KC2,KC3,KC2,KC3,KC2,KC3,KC2,KC3,KC2,KC3,KC1,BG1,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
KNITTED_7=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,BG1,BG1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,0,0,0,0,0,0],
[0,0,0,0,0,0,BG1,BG1,BG1,KC1,KC1,KC1,KC1,KC1,KC1,KC1,BG1,BG1,0,0,0,0,0,0],
[0,0,0,0,0,BG1,BG1,BG1,KC1,KC2,KC2,KC2,KC2,KC2,KC2,KC2,KC1,BG1,BG1,0,0,0,0,0],
[0,0,0,0,0,BG1,BG1,KC1,KC2,KC2,KC2,KC2,KC2,KC2,KC2,KC2,KC2,KC1,BG1,0,0,0,0,0],
[0,0,0,0,0,BG1,KC1,KC3,KC3,KC3,KC3,KC3,KC3,KC3,KC3,KC3,KC3,KC3,KC1,BG1,0,0,0,0],
[0,0,0,0,0,BG1,KC1,KC3,KC2,KC3,KC2,KC3,KC2,KC3,KC2,KC3,KC2,KC3,KC1,BG1,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
HEADBAND_1=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,HB1,HB1,HB1,HB1,HB1,HB1,HB1,HB1,HB1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,HB2,HB2,HB2,HB2,HB2,HB2,HB2,HB2,HB2,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
COWBOY_1=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,CH1,CH1,0,0,0,CH1,CH1,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,BG1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,BG1,0,0,0,0,0,0],
[0,0,0,CH1,0,0,BG1,CH2,CH2,CH2,CH2,CH2,CH2,CH2,CH2,CH2,CH2,CH2,BG1,0,0,CH1,0,0],
[0,0,0,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,0,0],
[0,0,0,0,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,BG1,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
COWBOY_7=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,CH1,CH1,0,0,0,CH1,CH1,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,BG1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,BG1,BG1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,BG1,0,0,0,0,0,0],
[0,0,0,0,0,BG1,BG1,BG1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,BG1,BG1,0,0,0,0,0],
[0,0,0,CH1,0,BG1,BG1,CH2,CH2,CH2,CH2,CH2,CH2,CH2,CH2,CH2,CH2,CH2,BG1,0,0,CH1,0,0],
[0,0,0,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,0,0],
[0,0,0,0,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,BG1,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
FORCAP_1=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,FC1,FC1,FC1,FC1,FC1,FC1,FC1,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,FC1,FC2,FC2,FC2,FC2,FC2,FC2,FC3,FC1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,FC1,FC2,FC2,FC2,FC2,FC2,FC2,FC2,FC2,FC3,FC1,0,0,0,0,0,0],
[0,0,0,0,0,0,FC1,FC1,FC1,FC1,FC1,FC1,FC1,FC1,FC2,FC2,FC2,FC1,BG1,0,0,0,0,0],
[0,0,0,0,0,FC1,FC3,FC3,FC3,FC3,FC3,FC3,FC3,FC3,FC1,FC2,FC2,FC1,0,0,0,0,0,0],
[0,0,0,0,0,FC1,FC1,FC1,FC1,FC1,FC1,FC1,FC1,FC1,FC1,FC1,FC1,FC1,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
FORCAP_7=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,BG1,BG1,FC1,FC1,FC1,FC1,FC1,FC1,FC1,BG1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,BG1,BG1,FC1,FC2,FC2,FC2,FC2,FC2,FC2,FC3,FC1,BG1,0,0,0,0,0,0],
[0,0,0,0,0,BG1,BG1,FC1,FC2,FC2,FC2,FC2,FC2,FC2,FC2,FC2,FC3,FC1,BG1,0,0,0,0,0],
[0,0,0,0,0,BG1,FC1,FC1,FC1,FC1,FC1,FC1,FC1,FC1,FC2,FC2,FC2,FC1,BG1,0,0,0,0,0],
[0,0,0,0,0,FC1,FC3,FC3,FC3,FC3,FC3,FC3,FC3,FC3,FC1,FC2,FC2,FC1,0,0,0,0,0,0],
[0,0,0,0,0,FC1,FC1,FC1,FC1,FC1,FC1,FC1,FC1,FC1,FC1,FC1,FC1,FC1,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
BANDANA_1=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,BA2,BA2,BA2,BA2,BA2,BA2,BA2,BA2,BG1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,BA2,BA1,BA1,BA1,BA1,BA1,BA1,BA1,BA1,BA2,BG1,0,0,0,0,0,0],
[0,0,0,0,0,BG1,0,BA2,BA1,BA1,BA1,BA1,BA1,BA1,BA1,BA1,BA1,BA2,BG1,0,0,0,0,0],
[0,0,0,0,0,0,0,0,BA2,BA1,BA1,BA1,BA2,BA2,BA2,BA2,BA1,BA3,BA2,BA1,BA2,BA1,0,0],
[0,0,0,0,0,0,0,0,0,BA2,BA2,BA2,0,0,0,0,0,0,BA3,BA2,BA1,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,BA3,BA1,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,BA1,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
BANDANA_7=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,0,0,0,0,0,0],
[0,0,0,0,0,0,BG1,BG1,BA2,BA2,BA2,BA2,BA2,BA2,BA2,BA2,BG1,BG1,0,0,0,0,0,0],
[0,0,0,0,0,BG1,BG1,BA2,BA1,BA1,BA1,BA1,BA1,BA1,BA1,BA1,BA2,BG1,BG1,0,0,0,0,0],
[0,0,0,0,0,BG1,BG1,BA2,BA1,BA1,BA1,BA1,BA1,BA1,BA1,BA1,BA1,BA2,BG1,0,0,0,0,0],
[0,0,0,0,0,BG1,0,0,BA2,BA1,BA1,BA1,BA2,BA2,BA2,BA2,BA1,BA3,BA2,BA1,BA2,BA1,0,0],
[0,0,0,0,0,0,0,0,0,BA2,BA2,BA2,0,0,0,0,0,0,BA3,BA2,BA1,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,BA3,BA1,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,BA1,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
FEDORA_1=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,FD1,FD1,FD1,FD1,FD1,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,FD1,FD1,FD1,FD1,FD1,FD1,FD1,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,BG1,FD1,FD1,FD1,FD1,FD1,FD1,FD1,BG1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,BG1,FD1,FD1,FD1,FD1,FD1,FD1,FD1,FD1,FD1,BG1,0,0,0,0,0,0],
[0,0,0,0,0,0,0,FD2,FD2,FD2,FD2,FD2,FD2,FD2,FD2,FD2,FD2,FD2,BG1,0,0,0,0,0],
[0,0,0,0,0,FD1,FD1,FD1,FD1,FD1,FD1,FD1,FD1,FD1,FD1,FD1,FD1,FD1,FD1,FD1,0,0,0,0],
[0,0,0,0,FD1,FD1,FD1,FD1,FD1,FD1,FD1,FD1,FD1,FD1,FD1,FD1,FD1,FD1,FD1,FD1,FD1,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
FEDORA_7=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,FD1,FD1,FD1,FD1,FD1,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,BG1,BG1,FD1,FD1,FD1,FD1,FD1,FD1,FD1,BG1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,BG1,BG1,BG1,FD1,FD1,FD1,FD1,FD1,FD1,FD1,BG1,BG1,0,0,0,0,0,0],
[0,0,0,0,0,BG1,BG1,BG1,FD1,FD1,FD1,FD1,FD1,FD1,FD1,FD1,FD1,BG1,BG1,0,0,0,0,0],
[0,0,0,0,0,BG1,BG1,FD2,FD2,FD2,FD2,FD2,FD2,FD2,FD2,FD2,FD2,FD2,BG1,0,0,0,0,0],
[0,0,0,0,0,FD1,FD1,FD1,FD1,FD1,FD1,FD1,FD1,FD1,FD1,FD1,FD1,FD1,FD1,FD1,0,0,0,0],
[0,0,0,0,FD1,FD1,FD1,FD1,FD1,FD1,FD1,FD1,FD1,FD1,FD1,FD1,FD1,FD1,FD1,FD1,FD1,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
POLICE_1=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,PC1,PC1,PC1,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,PC1,PC1,PC1,PC1,PC2,PC2,PC2,PC1,PC1,PC1,PC1,0,0,0,0,0,0],
[0,0,0,0,0,0,PC1,PC2,PC2,PC2,PC2,PC2,PC4,PC2,PC2,PC2,PC2,PC2,PC1,0,0,0,0,0],
[0,0,0,0,0,0,PC1,PC2,PC2,PC2,PC2,PC2,PC2,PC2,PC2,PC2,PC2,PC2,PC1,0,0,0,0,0],
[0,0,0,0,0,0,BG1,PC1,PC3,PC1,PC3,PC1,PC3,PC1,PC3,PC1,PC3,PC1,0,0,0,0,0,0],
[0,0,0,0,0,0,PC1,PC2,PC2,PC2,PC2,PC2,PC2,PC2,PC2,PC1,PC1,PC1,0,0,0,0,0,0],
[0,0,0,0,0,0,PC1,PC1,PC1,PC1,PC1,PC1,PC1,PC1,PC1,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
POLICE_7=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,PC1,PC1,PC1,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,PC1,PC1,PC1,PC1,PC2,PC2,PC2,PC1,PC1,PC1,PC1,0,0,0,0,0,0],
[0,0,0,0,0,0,PC1,PC2,PC2,PC2,PC2,PC2,PC4,PC2,PC2,PC2,PC2,PC2,PC1,0,0,0,0,0],
[0,0,0,0,0,BG1,PC1,PC2,PC2,PC2,PC2,PC2,PC2,PC2,PC2,PC2,PC2,PC2,PC1,0,0,0,0,0],
[0,0,0,0,0,0,0,PC1,PC3,PC1,PC3,PC1,PC3,PC1,PC3,PC1,PC3,PC1,0,0,0,0,0,0],
[0,0,0,0,0,0,PC1,PC2,PC2,PC2,PC2,PC2,PC2,PC2,PC2,PC1,PC1,PC1,0,0,0,0,0,0],
[0,0,0,0,0,0,PC1,PC1,PC1,PC1,PC1,PC1,PC1,PC1,PC1,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
CAP_2=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,BG1,BG1,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,BG1,BG1,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,BG1,BG1,BG1,BG1,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,BG1,CA1,CA1,CA1,CA1,CA1,CA1,CA1,0,0,0,0,0,0,0,0],
[0,0,0,0,BG1,BG1,BG1,BG1,CA1,CA1,CA2,CA1,CA1,CA1,CA1,CA1,CA1,BG1,BG1,BG1,BG1,0,0,0],
[0,0,0,0,BG1,BG1,BG1,BG1,CA1,CA2,CA1,CA1,CA1,CA1,CA1,CA1,CA1,CA1,BG1,BG1,BG1,0,0,0],
[0,0,0,0,0,CA1,CA1,CA1,CA1,CA1,CA1,CA1,CA1,CA1,CA1,CA1,CA1,CA1,BG1,BG1,0,0,0,0],
[0,0,0,0,CA1,CA1,CA1,CA1,CA1,CA1,CA1,CA1,CA1,CA1,CA1,CA1,CA1,CA1,0,BG1,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
BEANI_2=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,BG1,BG1,BG1,BG1,BG1,BG1,BG1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,BG1,BN1,BN1,BN1,BN1,BN1,BG1,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,BG1,BG1,BG1,BN5,BG1,BG1,BG1,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,BG1,BN2,BN2,BN3,BN3,BN3,BN1,BN1,BG1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,BG1,BN2,BN2,BN2,BN3,BN3,BN3,BN1,BN1,BN1,BG1,0,0,0,0,0,0],
[0,0,0,0,0,0,BG1,BN2,BN2,BN2,BN3,BN3,BN3,BN3,BN3,BN1,BN1,BN1,BG1,0,0,0,0,0],
[0,0,0,0,0,0,BG1,BN2,BN2,BN3,BN3,BN3,BN3,BN3,BN3,BN3,BN1,BN1,BG1,0,0,0,0,0],
[0,0,0,0,0,0,BG1,0,0,BN4,BN4,BN4,BN4,BN4,BN4,BN4,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
BEANI_3=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,BN1,BN1,BN1,BN1,BN1,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,BN5,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,BN2,BN2,BN3,BN3,BN1,BN1,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,BN2,BN2,BN2,BN3,BN3,BN1,BN1,BN1,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,BN2,BN2,BN2,BN3,BN3,BN3,BN3,BN1,BN1,BN1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,BN2,BN2,BN3,BN3,BN3,BN3,BN3,BN3,BN1,BN1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,BN4,BN4,BN4,BN4,BN4,BN4,BN4,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
TOPHAT_2=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,TH1,TH1,TH1,TH1,TH1,TH1,TH1,TH1,TH1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,TH1,TH1,TH1,TH1,TH1,TH1,TH1,TH1,TH1,TH1,TH1,0,0,0,0,0,0],
[0,0,0,0,0,0,0,TH1,TH1,TH1,TH1,TH1,TH1,TH1,TH1,TH1,TH1,TH1,0,0,0,0,0,0],
[0,0,0,0,0,0,0,TH1,TH1,TH1,TH1,TH1,TH1,TH1,TH1,TH1,TH1,TH1,0,0,0,0,0,0],
[0,0,0,0,BG1,BG1,BG1,TH1,TH1,TH1,TH1,TH1,TH1,TH1,TH1,TH1,TH1,TH1,BG1,BG1,BG1,0,0,0],
[0,0,0,0,BG1,BG1,BG1,TH2,TH2,TH2,TH2,TH2,TH2,TH2,TH2,TH2,TH2,TH2,BG1,BG1,BG1,0,0,0],
[0,0,0,0,0,BG1,TH1,TH1,TH1,TH1,TH1,TH1,TH1,TH1,TH1,TH1,TH1,TH1,TH1,BG1,0,0,0,0],
[0,0,0,0,0,TH1,TH1,TH1,TH1,TH1,TH1,TH1,TH1,TH1,TH1,TH1,TH1,TH1,TH1,TH1,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
TOPHAT_3=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,TH1,TH1,TH1,TH1,TH1,TH1,TH1,TH1,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,TH1,TH1,TH1,TH1,TH1,TH1,TH1,TH1,TH1,TH1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,TH1,TH1,TH1,TH1,TH1,TH1,TH1,TH1,TH1,TH1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,TH1,TH1,TH1,TH1,TH1,TH1,TH1,TH1,TH1,TH1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,TH1,TH1,TH1,TH1,TH1,TH1,TH1,TH1,TH1,TH1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,TH2,TH2,TH2,TH2,TH2,TH2,TH2,TH2,TH2,TH2,0,0,0,0,0,0,0],
[0,0,0,0,0,0,TH1,TH1,TH1,TH1,TH1,TH1,TH1,TH1,TH1,TH1,TH1,TH1,0,0,0,0,0,0],
[0,0,0,0,0,TH1,TH1,TH1,TH1,TH1,TH1,TH1,TH1,TH1,TH1,TH1,TH1,TH1,TH1,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
KNITTED_2=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,BG1,KC1,KC1,KC1,KC1,KC1,KC1,KC1,BG1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,BG1,KC1,KC2,KC2,KC2,KC2,KC2,KC2,KC2,KC1,BG1,0,0,0,0,0,0],
[0,0,0,0,0,0,BG1,KC1,KC2,KC2,KC2,KC2,KC2,KC2,KC2,KC2,KC2,KC1,BG1,0,0,0,0,0],
[0,0,0,0,0,BG1,KC1,KC3,KC3,KC3,KC3,KC3,KC3,KC3,KC3,KC3,KC3,KC3,KC1,BG1,0,0,0,0],
[0,0,0,0,0,BG1,KC1,KC3,KC2,KC3,KC2,KC3,KC2,KC3,KC2,KC3,KC2,KC3,KC1,BG1,0,0,0,0],
[0,0,0,0,0,BG1,BG1,0,0,0,0,0,0,0,0,0,0,0,0,BG1,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
KNITTED_6=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,BG1,KC1,KC1,KC1,KC1,KC1,KC1,KC1,BG1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,BG1,KC1,KC2,KC2,KC2,KC2,KC2,KC2,KC2,KC1,BG1,0,0,0,0,0,0],
[0,0,0,0,0,0,BG1,KC1,KC2,KC2,KC2,KC2,KC2,KC2,KC2,KC2,KC2,KC1,BG1,0,0,0,0,0],
[0,0,0,0,0,BG1,KC1,KC3,KC3,KC3,KC3,KC3,KC3,KC3,KC3,KC3,KC3,KC3,KC1,0,0,0,0,0],
[0,0,0,0,0,BG1,KC1,KC3,KC2,KC3,KC2,KC3,KC2,KC3,KC2,KC3,KC2,KC3,KC1,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
KNITTED_5=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,BG1,KC1,KC1,KC1,KC1,KC1,KC1,KC1,BG1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,BG1,KC1,KC2,KC2,KC2,KC2,KC2,KC2,KC2,KC1,BG1,0,0,0,0,0,0],
[0,0,0,0,0,0,BG1,KC1,KC2,KC2,KC2,KC2,KC2,KC2,KC2,KC2,KC2,KC1,BG1,0,0,0,0,0],
[0,0,0,0,0,BG1,KC1,KC3,KC3,KC3,KC3,KC3,KC3,KC3,KC3,KC3,KC3,KC3,KC1,BG1,0,0,0,0],
[0,0,0,0,0,BG1,KC1,KC3,KC2,KC3,KC2,KC3,KC2,KC3,KC2,KC3,KC2,KC3,KC1,BG1,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
HEADBAND_2=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,HB1,HB1,HB1,HB1,HB1,HB1,HB1,HB1,HB1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,HB2,HB2,HB2,HB2,HB2,HB2,HB2,HB2,HB2,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
COWBOY_2=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,BG1,BG1,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,BG1,BG1,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,CH1,CH1,BG1,BG1,BG1,CH1,CH1,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,0,0,0,0,0,0,0],
[0,0,0,0,BG1,BG1,BG1,BG1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,BG1,BG1,BG1,BG1,0,0,0],
[0,0,0,0,BG1,BG1,BG1,BG1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,BG1,BG1,BG1,BG1,0,0,0],
[0,0,0,CH1,0,BG1,BG1,CH2,CH2,CH2,CH2,CH2,CH2,CH2,CH2,CH2,CH2,CH2,BG1,BG1,0,CH1,0,0],
[0,0,0,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,0,0],
[0,0,0,0,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,0,0,0],
[0,0,0,0,0,BG1,0,0,0,0,0,0,0,0,0,0,0,0,0,BG1,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
COWBOY_4=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,CH1,CH1,0,0,0,CH1,CH1,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,BG1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,BG1,0,0,0,0,0,0],
[0,0,0,CH1,0,0,BG1,CH2,CH2,CH2,CH2,CH2,CH2,CH2,CH2,CH2,CH2,CH2,BG1,0,0,CH1,0,0],
[0,0,0,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,0,0],
[0,0,0,0,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
COWBOY_3=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,CH1,CH1,0,0,CH1,CH1,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,BG1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,BG1,0,0,0,0,0,0,0],
[0,0,0,CH1,0,0,BG1,CH2,CH2,CH2,CH2,CH2,CH2,CH2,CH2,CH2,CH2,BG1,0,0,CH1,0,0,0],
[0,0,0,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,0,0,0],
[0,0,0,0,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
COWBOY_5=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,CH1,CH1,0,0,0,CH1,CH1,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,BG1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,BG1,0,0,0,0,0,0],
[0,0,0,CH1,0,0,BG1,CH2,CH2,CH2,CH2,CH2,CH2,CH2,CH2,CH2,CH2,CH2,BG1,0,0,CH1,0,0],
[0,0,0,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,0,0],
[0,0,0,0,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
FORCAP_2=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,BG1,BG1,BG1,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,BG1,BG1,BG1,BG1,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,BG1,FC1,FC1,FC1,FC1,FC1,FC1,FC1,BG1,0,0,0,0,0,0,0],
[0,0,0,0,BG1,BG1,BG1,BG1,FC1,FC2,FC2,FC2,FC2,FC2,FC2,FC3,FC1,BG1,BG1,BG1,BG1,0,0,0],
[0,0,0,0,BG1,BG1,BG1,FC1,FC2,FC2,FC2,FC2,FC2,FC2,FC2,FC2,FC3,FC1,BG1,BG1,BG1,0,0,0],
[0,0,0,0,0,BG1,FC1,FC1,FC1,FC1,FC1,FC1,FC1,FC1,FC2,FC2,FC2,FC1,BG1,BG1,0,0,0,0],
[0,0,0,0,0,FC1,FC3,FC3,FC3,FC3,FC3,FC3,FC3,FC3,FC1,FC2,FC2,FC1,0,BG1,0,0,0,0],
[0,0,0,0,0,FC1,FC1,FC1,FC1,FC1,FC1,FC1,FC1,FC1,FC1,FC1,FC1,FC1,0,0,0,0,0,0],
[0,0,0,0,0,BG1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
FORCAP_3=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,FC1,FC1,FC1,FC1,FC1,FC1,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,FC1,FC2,FC2,FC2,FC2,FC2,FC3,FC1,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,FC1,FC2,FC2,FC2,FC2,FC2,FC2,FC2,FC3,FC1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,FC1,FC1,FC1,FC1,FC1,FC1,FC1,FC2,FC2,FC2,FC1,0,0,0,0,0,0,0],
[0,0,0,0,0,FC1,FC3,FC3,FC3,FC3,FC3,FC3,FC3,FC1,FC2,FC2,FC1,0,0,0,0,0,0,0],
[0,0,0,0,0,FC1,FC1,FC1,FC1,FC1,FC1,FC1,FC1,FC1,FC1,FC1,FC1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
BANDANA_2=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,BG1,BG1,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,BG1,BG1,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,BG1,BG1,BG1,BG1,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,0,0,0,0,0,0],
[0,0,0,0,BG1,BG1,BG1,BG1,BA2,BA2,BA2,BA2,BA2,BA2,BA2,BA2,BG1,BG1,BG1,BG1,BG1,0,0,0],
[0,0,0,0,BG1,BG1,BG1,BA2,BA1,BA1,BA1,BA1,BA1,BA1,BA1,BA1,BA2,BG1,BG1,BG1,BG1,0,0,0],
[0,0,0,0,0,BG1,BG1,BA2,BA1,BA1,BA1,BA1,BA1,BA1,BA1,BA1,BA1,BA2,BG1,BG1,0,0,0,0],
[0,0,0,0,0,BG1,BG1,0,BA2,BA1,BA1,BA1,BA2,BA2,BA2,BA2,BA1,BA3,BA2,BA1,BA2,BA1,0,0],
[0,0,0,0,0,BG1,0,0,0,BA2,BA2,BA2,0,0,0,0,0,0,BA3,BA2,BA1,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,BA3,BA1,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,BA1,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
FEDORA_2=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,BG1,BG1,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,BG1,BG1,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,BG1,FD1,FD1,FD1,FD1,FD1,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,BG1,FD1,FD1,FD1,FD1,FD1,FD1,FD1,0,0,0,0,0,0,0,0],
[0,0,0,0,BG1,BG1,BG1,BG1,BG1,FD1,FD1,FD1,FD1,FD1,FD1,FD1,BG1,BG1,BG1,BG1,BG1,0,0,0],
[0,0,0,0,BG1,BG1,BG1,BG1,FD1,FD1,FD1,FD1,FD1,FD1,FD1,FD1,FD1,BG1,BG1,BG1,BG1,0,0,0],
[0,0,0,0,0,BG1,BG1,FD2,FD2,FD2,FD2,FD2,FD2,FD2,FD2,FD2,FD2,FD2,BG1,BG1,0,0,0,0],
[0,0,0,0,0,FD1,FD1,FD1,FD1,FD1,FD1,FD1,FD1,FD1,FD1,FD1,FD1,FD1,FD1,FD1,0,0,0,0],
[0,0,0,0,FD1,FD1,FD1,FD1,FD1,FD1,FD1,FD1,FD1,FD1,FD1,FD1,FD1,FD1,FD1,FD1,FD1,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,BG1,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
FEDORA_5=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,FD1,FD1,FD1,FD1,FD1,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,FD1,FD1,FD1,FD1,FD1,FD1,FD1,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,BG1,FD1,FD1,FD1,FD1,FD1,FD1,FD1,BG1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,BG1,FD1,FD1,FD1,FD1,FD1,FD1,FD1,FD1,FD1,BG1,0,0,0,0,0,0],
[0,0,0,0,0,0,BG1,FD2,FD2,FD2,FD2,FD2,FD2,FD2,FD2,FD2,FD2,FD2,BG1,0,0,0,0,0],
[0,0,0,0,0,FD1,FD1,FD1,FD1,FD1,FD1,FD1,FD1,FD1,FD1,FD1,FD1,FD1,FD1,FD1,0,0,0,0],
[0,0,0,0,FD1,FD1,FD1,FD1,FD1,FD1,FD1,FD1,FD1,FD1,FD1,FD1,FD1,FD1,FD1,FD1,FD1,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
POLICE_2=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,BG1,BG1,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,BG1,BG1,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,BG1,PC1,PC1,PC1,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,PC1,PC1,PC1,PC1,PC2,PC2,PC2,PC1,PC1,PC1,PC1,0,0,0,0,0,0],
[0,0,0,0,BG1,BG1,PC1,PC2,PC2,PC2,PC2,PC2,PC4,PC2,PC2,PC2,PC2,PC2,PC1,BG1,BG1,0,0,0],
[0,0,0,0,BG1,BG1,PC1,PC2,PC2,PC2,PC2,PC2,PC2,PC2,PC2,PC2,PC2,PC2,PC1,BG1,BG1,0,0,0],
[0,0,0,0,0,BG1,BG1,PC1,PC3,PC1,PC3,PC1,PC3,PC1,PC3,PC1,PC3,PC1,BG1,BG1,0,0,0,0],
[0,0,0,0,0,BG1,PC1,PC2,PC2,PC2,PC2,PC2,PC2,PC2,PC2,PC1,PC1,PC1,0,BG1,0,0,0,0],
[0,0,0,0,0,0,PC1,PC1,PC1,PC1,PC1,PC1,PC1,PC1,PC1,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
POLICE_3=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,BG1,BG1,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,BG1,BG1,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,BG1,PC1,PC1,PC1,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,PC1,PC1,PC1,PC1,PC2,PC2,PC2,PC1,PC1,PC1,PC1,0,0,0,0,0,0],
[0,0,0,0,BG1,BG1,PC1,PC2,PC2,PC2,PC2,PC2,PC4,PC2,PC2,PC2,PC2,PC2,PC1,BG1,BG1,0,0,0],
[0,0,0,0,BG1,BG1,PC1,PC2,PC2,PC2,PC2,PC2,PC2,PC2,PC2,PC2,PC2,PC2,PC1,BG1,BG1,0,0,0],
[0,0,0,0,0,BG1,BG1,PC1,PC3,PC1,PC3,PC1,PC3,PC1,PC3,PC1,PC3,PC1,BG1,BG1,0,0,0,0],
[0,0,0,0,0,0,PC1,PC2,PC2,PC2,PC2,PC2,PC2,PC2,PC2,PC1,PC1,PC1,0,BG1,0,0,0,0],
[0,0,0,0,0,0,PC1,PC1,PC1,PC1,PC1,PC1,PC1,PC1,PC1,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
CAP_6=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,CA1,CA1,CA1,CA1,CA1,CA1,CA1,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,CA1,CA1,CA2,CA1,CA1,CA1,CA1,CA1,CA1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,CA1,CA2,CA1,CA1,CA1,CA1,CA1,CA1,CA1,CA1,0,0,0,0,0,0],
[0,0,0,0,0,CA1,CA1,CA1,CA1,CA1,CA1,CA1,CA1,CA1,CA1,CA1,CA1,CA1,0,0,0,0,0,0],
[0,0,0,0,CA1,CA1,CA1,CA1,CA1,CA1,CA1,CA1,CA1,CA1,CA1,CA1,CA1,CA1,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
CAP_8=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,BG1,BG1,BG1,0,BG1,0,BG1,BG1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,BG1,CA1,CA1,CA1,CA1,CA1,CA1,CA1,BG1,BG1,0,0,0,0,0,0],
[0,0,0,0,0,0,BG1,BG1,CA1,CA1,CA2,CA1,CA1,CA1,CA1,CA1,CA1,BG1,BG1,0,0,0,0,0],
[0,0,0,0,0,BG1,0,BG1,CA1,CA2,CA1,CA1,CA1,CA1,CA1,CA1,CA1,CA1,BG1,BG1,0,0,0,0],
[0,0,0,0,0,CA1,CA1,CA1,CA1,CA1,CA1,CA1,CA1,CA1,CA1,CA1,CA1,CA1,0,0,BG1,0,0,0],
[0,0,0,0,CA1,CA1,CA1,CA1,CA1,CA1,CA1,CA1,CA1,CA1,CA1,CA1,CA1,CA1,0,0,0,BG1,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
TOPHAT_6=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,0],
[0,BG1,BG1,BG1,BG1,BG1,BG1,BG1,TH1,TH1,TH1,TH1,TH1,TH1,TH1,TH1,TH1,BG1,BG1,BG1,BG1,BG1,BG1,0],
[0,BG1,BG1,BG1,BG1,BG1,BG1,TH1,TH1,TH1,TH1,TH1,TH1,TH1,TH1,TH1,TH1,TH1,BG1,BG1,BG1,BG1,BG1,0],
[0,BG1,BG1,BG1,BG1,BG1,BG1,TH1,TH1,TH1,TH1,TH1,TH1,TH1,TH1,TH1,TH1,TH1,BG1,BG1,BG1,BG1,BG1,0],
[0,BG1,BG1,BG1,BG1,BG1,BG1,TH1,TH1,TH1,TH1,TH1,TH1,TH1,TH1,TH1,TH1,TH1,BG1,BG1,BG1,BG1,BG1,0],
[0,BG1,BG1,BG1,BG1,BG1,BG1,TH1,TH1,TH1,TH1,TH1,TH1,TH1,TH1,TH1,TH1,TH1,BG1,BG1,BG1,BG1,BG1,0],
[0,BG1,BG1,BG1,BG1,BG1,BG1,TH2,TH2,TH2,TH2,TH2,TH2,TH2,TH2,TH2,TH2,TH2,BG1,BG1,BG1,BG1,BG1,0],
[0,BG1,BG1,BG1,BG1,BG1,TH1,TH1,TH1,TH1,TH1,TH1,TH1,TH1,TH1,TH1,TH1,TH1,TH1,BG1,BG1,BG1,BG1,0],
[0,BG1,BG1,BG1,BG1,TH1,TH1,TH1,TH1,TH1,TH1,TH1,TH1,TH1,TH1,TH1,TH1,TH1,TH1,TH1,BG1,BG1,BG1,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
HEADBAND_6=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,HB1,HB1,HB1,HB1,HB1,HB1,HB1,HB1,HB1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,HB2,HB2,HB2,HB2,HB2,HB2,HB2,HB2,HB2,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
COWBOY_6=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,CH1,CH1,0,0,0,CH1,CH1,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,0,0,0,0,0,0,0],
[0,0,0,CH1,0,0,0,CH2,CH2,CH2,CH2,CH2,CH2,CH2,CH2,CH2,CH2,CH2,0,0,0,CH1,0,0],
[0,0,0,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,0,0],
[0,0,0,0,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
COWBOY_8=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,CH1,CH1,BG1,0,BG1,CH1,CH1,BG1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,BG1,0,0,0,0,0,0],
[0,0,0,0,0,0,BG1,BG1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,BG1,BG1,0,0,0,0,0],
[0,0,0,0,0,BG1,BG1,BG1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,BG1,BG1,BG1,0,0,0,0],
[0,0,0,CH1,0,BG1,BG1,CH2,CH2,CH2,CH2,CH2,CH2,CH2,CH2,CH2,CH2,CH2,BG1,BG1,BG1,CH1,0,0],
[0,0,0,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,0,0],
[0,0,0,0,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
FORCAP_6=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,0],
[0,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,0],
[0,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,0],
[0,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,0],
[0,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FC1,FC1,FC1,FC1,FC1,FC1,FC1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,0],
[0,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FC1,FC2,FC2,FC2,FC2,FC2,FC2,FC3,FC1,BG1,BG1,BG1,BG1,BG1,BG1,0],
[0,BG1,BG1,BG1,BG1,BG1,BG1,FC1,FC2,FC2,FC2,FC2,FC2,FC2,FC2,FC2,FC3,FC1,BG1,BG1,BG1,BG1,BG1,0],
[0,BG1,BG1,BG1,BG1,BG1,FC1,FC1,FC1,FC1,FC1,FC1,FC1,FC1,FC2,FC2,FC2,FC1,0,BG1,BG1,BG1,BG1,0],
[0,BG1,BG1,BG1,BG1,FC1,FC3,FC3,FC3,FC3,FC3,FC3,FC3,FC3,FC1,FC2,FC2,FC1,0,BG1,BG1,BG1,BG1,0],
[0,BG1,BG1,BG1,BG1,FC1,FC1,FC1,FC1,FC1,FC1,FC1,FC1,FC1,FC1,FC1,FC1,FC1,0,BG1,BG1,BG1,BG1,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
FORCAP_8=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,0],
[0,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,0],
[0,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,0],
[0,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,0],
[0,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FC1,FC1,FC1,FC1,FC1,FC1,FC1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,0],
[0,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FC1,FC2,FC2,FC2,FC2,FC2,FC2,FC3,FC1,BG1,BG1,BG1,BG1,BG1,BG1,0],
[0,BG1,BG1,BG1,BG1,BG1,BG1,FC1,FC2,FC2,FC2,FC2,FC2,FC2,FC2,FC2,FC3,FC1,BG1,BG1,BG1,BG1,BG1,0],
[0,BG1,BG1,BG1,BG1,BG1,FC1,FC1,FC1,FC1,FC1,FC1,FC1,FC1,FC2,FC2,FC2,FC1,0,BG1,BG1,BG1,BG1,0],
[0,BG1,BG1,BG1,BG1,FC1,FC3,FC3,FC3,FC3,FC3,FC3,FC3,FC3,FC1,FC2,FC2,FC1,0,0,BG1,BG1,BG1,0],
[0,BG1,BG1,BG1,BG1,FC1,FC1,FC1,FC1,FC1,FC1,FC1,FC1,FC1,FC1,FC1,FC1,FC1,0,0,0,BG1,BG1,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
BANDANA_6=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,BA2,BA2,BA2,BA2,BA2,BA2,BA2,BA2,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,BA2,BA1,BA1,BA1,BA1,BA1,BA1,BA1,BA1,BA2,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,BA2,BA1,BA1,BA1,BA1,BA1,BA1,BA1,BA1,BA1,BA2,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,BA2,BA1,BA1,BA1,BA2,BA2,BA2,BA2,BA1,BA3,BA2,BA1,BA2,BA1,0,0],
[0,0,0,0,0,0,0,0,0,BA2,BA2,BA2,0,0,0,0,0,0,BA3,BA2,BA1,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,BA3,BA1,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,BA1,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
FEDORA_6=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,FD1,FD1,FD1,FD1,FD1,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,FD1,FD1,FD1,FD1,FD1,FD1,FD1,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,FD1,FD1,FD1,FD1,FD1,FD1,FD1,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,FD1,FD1,FD1,FD1,FD1,FD1,FD1,FD1,FD1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,FD2,FD2,FD2,FD2,FD2,FD2,FD2,FD2,FD2,FD2,FD2,0,0,0,0,0,0],
[0,0,0,0,0,FD1,FD1,FD1,FD1,FD1,FD1,FD1,FD1,FD1,FD1,FD1,FD1,FD1,FD1,FD1,0,0,0,0],
[0,0,0,0,FD1,FD1,FD1,FD1,FD1,FD1,FD1,FD1,FD1,FD1,FD1,FD1,FD1,FD1,FD1,FD1,FD1,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
POLICE_6=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,0],
[0,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,0],
[0,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,0],
[0,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,PC1,PC1,PC1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,0],
[0,BG1,BG1,BG1,BG1,BG1,BG1,PC1,PC1,PC1,PC1,PC2,PC2,PC2,PC1,PC1,PC1,PC1,BG1,BG1,BG1,BG1,BG1,0],
[0,BG1,BG1,BG1,BG1,BG1,PC1,PC2,PC2,PC2,PC2,PC2,PC4,PC2,PC2,PC2,PC2,PC2,PC1,BG1,BG1,BG1,BG1,0],
[0,BG1,BG1,BG1,BG1,BG1,PC1,PC2,PC2,PC2,PC2,PC2,PC2,PC2,PC2,PC2,PC2,PC2,PC1,BG1,BG1,BG1,BG1,0],
[0,BG1,BG1,BG1,BG1,BG1,BG1,PC1,PC3,PC1,PC3,PC1,PC3,PC1,PC3,PC1,PC3,PC1,0,BG1,BG1,BG1,BG1,0],
[0,BG1,BG1,BG1,BG1,BG1,PC1,PC2,PC2,PC2,PC2,PC2,PC2,PC2,PC2,PC1,PC1,PC1,0,BG1,BG1,BG1,BG1,0],
[0,BG1,BG1,BG1,BG1,BG1,PC1,PC1,PC1,PC1,PC1,PC1,PC1,PC1,PC1,0,0,0,0,0,BG1,BG1,BG1,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
TD_1=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,TD1,TD1,TD1,TD1,TD1,TD1,TD1,TD1,TD1,TD1,TD1,0,0,0,0,0,0],
[0,0,0,0,0,0,0,TD1,TD2,TD2,TD2,TD1,TD3,TD3,TD3,TD1,TD1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,TD1,TD2,TD2,TD2,TD1,TD3,TD3,TD3,TD1,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,TD1,TD1,TD1,TD1,TD1,TD1,TD1,TD1,TD1,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
VR_1=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,VR1,VR1,VR1,VR1,VR1,VR1,VR1,VR1,VR1,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,VR1,VR3,VR2,VR2,VR2,VR2,VR2,VR2,VR2,VR3,VR1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,VR1,VR2,VR1,VR1,VR1,VR1,VR1,VR1,VR1,VR2,VR3,VR1,0,0,0,0,0,0],
[0,0,0,0,0,0,VR1,VR2,VR1,VR1,VR1,VR1,VR1,VR1,VR1,VR2,VR3,VR1,0,0,0,0,0,0],
[0,0,0,0,0,0,VR1,VR3,VR2,VR2,VR2,VR2,VR2,VR2,VR2,VR3,VR1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,VR1,VR1,VR1,VR1,VR1,VR1,VR1,VR1,VR1,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
ClassicShades_1=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,CSH1,CSH1,CSH1,CSH1,CSH1,CSH1,CSH1,CSH1,CSH1,CSH1,CSH1,0,0,0,0,0,0],
[0,0,0,0,0,0,0,CSH1,CSH2,CSH2,CSH1,0,CSH1,CSH2,CSH2,CSH1,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,CSH1,CSH3,CSH3,CSH1,0,CSH1,CSH3,CSH3,CSH1,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,CSH1,CSH1,0,0,0,CSH1,CSH1,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
SmallShades_1=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,SSH1,SSH1,SSH1,SSH1,SSH1,SSH1,SSH1,SSH1,SSH1,SSH1,SSH1,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,SSH1,SSH1,0,0,0,SSH1,SSH1,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,SSH1,SSH1,0,0,0,SSH1,SSH1,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
EyePatch_1=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,EP1,EP1,EP1,EP1,EP1,EP1,EP1,EP1,EP1,EP1,EP1,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,EP1,EP1,EP1,EP1,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,EP1,EP1,EP1,EP1,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,EP1,EP1,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
NerdGlasses_1=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,ND2,ND2,ND2,ND2,0,ND2,ND2,ND2,ND2,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,ND2,ND1,ND1,ND2,ND2,ND2,ND1,ND1,ND2,ND2,ND2,0,0,0,0,0,0],
[0,0,0,0,0,0,0,ND2,ND1,ND1,ND2,0,ND2,ND1,ND1,ND2,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,ND2,ND2,ND2,ND2,0,ND2,ND2,ND2,ND2,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
BigShades_1=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,BSH1,BSH1,BSH1,BSH1,BSH1,0,BSH1,BSH1,BSH1,BSH1,BSH1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,BSH1,BSH2,BSH2,BSH2,BSH1,BSH1,BSH1,BSH2,BSH2,BSH2,BSH1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,BSH1,BSH3,BSH3,BSH3,BSH1,0,BSH1,BSH3,BSH3,BSH3,BSH1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,BSH1,BSH4,BSH4,BSH4,BSH1,0,BSH1,BSH4,BSH4,BSH4,BSH1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,BSH1,BSH1,BSH1,0,0,0,BSH1,BSH1,BSH1,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
EyeMask_1=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,EM1,EM1,EM1,EM1,EM1,EM1,EM1,EM1,EM1,EM1,EM1,0,0,0,0,0,0],
[0,0,0,0,0,0,0,EM1,0,0,EM1,EM1,EM1,0,0,EM1,EM1,EM1,0,0,0,0,0,0],
[0,0,0,0,0,0,0,EM1,0,EM2,EM1,EM1,EM1,0,EM2,EM1,EM1,EM1,0,0,0,0,0,0],
[0,0,0,0,0,0,0,EM1,EM1,EM1,EM1,EM1,EM1,EM1,EM1,EM1,EM1,EM1,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
HornedRimGlasses_1=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,HRG1,HRG1,HRG1,HRG1,HRG1,HRG1,HRG1,HRG1,HRG1,HRG1,HRG1,HRG1,0,0,0,0,0,0],
[0,0,0,0,0,0,HRG1,HRG1,HRG2,HRG2,HRG3,HRG1,HRG1,HRG2,HRG2,HRG3,HRG1,HRG1,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,HRG4,HRG5,HRG3,0,0,HRG4,HRG5,HRG3,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,HRG3,HRG3,HRG3,0,0,HRG3,HRG3,HRG3,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
HornedRimGlasses_7=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,HRG1,HRG1,HRG1,HRG1,HRG1,HRG1,HRG1,HRG1,HRG1,HRG1,HRG1,HRG1,0,0,0,0,0,0],
[0,0,0,0,0,0,HRG1,HRG1,HRG2,HRG2,HRG3,HRG1,HRG1,HRG2,HRG2,HRG3,HRG1,HRG1,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,HRG4,HRG5,HRG3,0,0,HRG4,HRG5,HRG3,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,HRG3,HRG3,HRG3,0,0,HRG3,HRG3,HRG3,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
RegularShades_1=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,RSH1,RSH1,RSH1,RSH1,RSH1,RSH1,RSH1,RSH1,RSH1,RSH1,RSH1,RSH1,0,0,0,0,0,0],
[0,0,0,0,0,0,RSH1,RSH1,RSH1,RSH1,0,0,RSH1,RSH1,RSH1,RSH1,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,RSH1,RSH1,0,0,0,0,RSH1,RSH1,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
TD_2=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,TD1,TD1,TD1,TD1,TD1,TD1,TD1,TD1,TD1,TD1,TD1,0,0,0,0,0,0],
[0,0,0,0,0,0,0,TD1,TD2,TD2,TD2,TD1,TD3,TD3,TD3,TD1,TD1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,TD1,TD2,TD2,TD2,TD1,TD3,TD3,TD3,TD1,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,TD1,TD1,TD1,TD1,TD1,TD1,TD1,TD1,TD1,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
VR_2=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,VR1,VR1,VR1,VR1,VR1,VR1,VR1,VR1,VR1,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,VR1,VR3,VR2,VR2,VR2,VR2,VR2,VR2,VR2,VR3,VR1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,VR1,VR2,VR1,VR1,VR1,VR1,VR1,VR1,VR1,VR2,VR3,VR1,0,0,0,0,0,0],
[0,0,0,0,0,0,VR1,VR2,VR1,VR1,VR1,VR1,VR1,VR1,VR1,VR2,VR3,VR1,0,0,0,0,0,0],
[0,0,0,0,0,0,VR1,VR3,VR2,VR2,VR2,VR2,VR2,VR2,VR2,VR3,VR1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,VR1,VR1,VR1,VR1,VR1,VR1,VR1,VR1,VR1,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
ClassicShades_2=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,CSH1,CSH1,CSH1,CSH1,CSH1,CSH1,CSH1,CSH1,CSH1,CSH1,CSH1,0,0,0,0,0,0],
[0,0,0,0,0,0,0,CSH1,CSH2,CSH2,CSH1,0,CSH1,CSH2,CSH2,CSH1,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,CSH1,CSH3,CSH3,CSH1,0,CSH1,CSH3,CSH3,CSH1,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,CSH1,CSH1,0,0,0,CSH1,CSH1,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
SmallShades_2=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,SSH1,SSH1,SSH1,SSH1,SSH1,SSH1,SSH1,SSH1,SSH1,SSH1,SSH1,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,SSH1,SSH1,0,0,0,SSH1,SSH1,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,SSH1,SSH1,0,0,0,SSH1,SSH1,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
EyePatch_2=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,EP1,EP1,EP1,EP1,EP1,EP1,EP1,EP1,EP1,EP1,EP1,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,EP1,EP1,EP1,EP1,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,EP1,EP1,EP1,EP1,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,EP1,EP1,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
NerdGlasses_2=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,ND2,ND2,ND2,ND2,0,ND2,ND2,ND2,ND2,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,ND2,ND1,ND1,ND2,ND2,ND2,ND1,ND1,ND2,ND2,ND2,0,0,0,0,0,0],
[0,0,0,0,0,0,0,ND2,ND1,ND1,ND2,0,ND2,ND1,ND1,ND2,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,ND2,ND2,ND2,ND2,0,ND2,ND2,ND2,ND2,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
BigShades_2=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,BSH1,BSH1,BSH1,BSH1,BSH1,0,BSH1,BSH1,BSH1,BSH1,BSH1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,BSH1,BSH2,BSH2,BSH2,BSH1,BSH1,BSH1,BSH2,BSH2,BSH2,BSH1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,BSH1,BSH3,BSH3,BSH3,BSH1,0,BSH1,BSH3,BSH3,BSH3,BSH1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,BSH1,BSH4,BSH4,BSH4,BSH1,0,BSH1,BSH4,BSH4,BSH4,BSH1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,BSH1,BSH1,BSH1,0,0,0,BSH1,BSH1,BSH1,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
EyeMask_2=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,EM1,EM1,EM1,EM1,EM1,EM1,EM1,EM1,EM1,EM1,EM1,0,0,0,0,0,0],
[0,0,0,0,0,0,0,EM1,0,0,EM1,EM1,EM1,0,0,EM1,EM1,EM1,0,0,0,0,0,0],
[0,0,0,0,0,0,0,EM1,0,EM2,EM1,EM1,EM1,0,EM2,EM1,EM1,EM1,0,0,0,0,0,0],
[0,0,0,0,0,0,0,EM1,EM1,EM1,EM1,EM1,EM1,EM1,EM1,EM1,EM1,EM1,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
HornedRimGlasses_2=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,HRG1,HRG1,HRG1,HRG1,HRG1,HRG1,HRG1,HRG1,HRG1,HRG1,HRG1,HRG1,0,0,0,0,0,0],
[0,0,0,0,0,0,HRG1,HRG1,HRG2,HRG2,HRG3,HRG1,HRG1,HRG2,HRG2,HRG3,HRG1,HRG1,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,HRG4,HRG1,HRG3,0,0,HRG4,HRG1,HRG3,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,HRG3,HRG3,HRG3,0,0,HRG3,HRG3,HRG3,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
RegularShades_2=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,RSH1,RSH1,RSH1,RSH1,RSH1,RSH1,RSH1,RSH1,RSH1,RSH1,RSH1,RSH1,0,0,0,0,0,0],
[0,0,0,0,0,0,RSH1,RSH1,RSH1,RSH1,0,0,RSH1,RSH1,RSH1,RSH1,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,RSH1,RSH1,0,0,0,0,RSH1,RSH1,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
TD_6=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,TD1,TD1,TD1,TD1,TD1,TD1,TD1,TD1,TD1,TD1,TD1,0,0,0,0,0,0],
[0,0,0,0,0,0,0,TD1,TD2,TD2,TD2,TD1,TD3,TD3,TD3,TD1,TD1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,TD1,TD2,TD2,TD2,TD1,TD3,TD3,TD3,TD1,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,TD1,TD1,TD1,TD1,TD1,TD1,TD1,TD1,TD1,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
VR_6=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,VR1,VR1,VR1,VR1,VR1,VR1,VR1,VR1,VR1,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,VR1,VR3,VR2,VR2,VR2,VR2,VR2,VR2,VR2,VR3,VR1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,VR1,VR2,VR1,VR1,VR1,VR1,VR1,VR1,VR1,VR2,VR3,VR1,0,0,0,0,0,0],
[0,0,0,0,0,0,VR1,VR2,VR1,VR1,VR1,VR1,VR1,VR1,VR1,VR2,VR3,VR1,0,0,0,0,0,0],
[0,0,0,0,0,0,VR1,VR3,VR2,VR2,VR2,VR2,VR2,VR2,VR2,VR3,VR1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,VR1,VR1,VR1,VR1,VR1,VR1,VR1,VR1,VR1,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
ClassicShades_6=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,CSH1,CSH1,CSH1,CSH1,CSH1,CSH1,CSH1,CSH1,CSH1,CSH1,CSH1,0,0,0,0,0,0],
[0,0,0,0,0,0,0,CSH1,CSH2,CSH2,CSH1,0,CSH1,CSH2,CSH2,CSH1,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,CSH1,CSH3,CSH3,CSH1,0,CSH1,CSH3,CSH3,CSH1,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,CSH1,CSH1,0,0,0,CSH1,CSH1,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
SmallShades_6=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,SSH1,SSH1,SSH1,SSH1,SSH1,SSH1,SSH1,SSH1,SSH1,SSH1,SSH1,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,SSH1,SSH1,0,0,0,SSH1,SSH1,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,SSH1,SSH1,0,0,0,SSH1,SSH1,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
EyePatch_6=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,EP1,EP1,EP1,EP1,EP1,EP1,EP1,EP1,EP1,EP1,EP1,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,EP1,EP1,EP1,EP1,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,EP1,EP1,EP1,EP1,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,EP1,EP1,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
NerdGlasses_6=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,ND2,ND2,ND2,ND2,0,ND2,ND2,ND2,ND2,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,ND2,ND1,ND1,ND2,ND2,ND2,ND1,ND1,ND2,ND2,ND2,0,0,0,0,0,0],
[0,0,0,0,0,0,0,ND2,ND1,ND1,ND2,0,ND2,ND1,ND1,ND2,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,ND2,ND2,ND2,ND2,0,ND2,ND2,ND2,ND2,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
BigShades_6=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,BSH1,BSH1,BSH1,BSH1,BSH1,0,BSH1,BSH1,BSH1,BSH1,BSH1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,BSH1,BSH2,BSH2,BSH2,BSH1,BSH1,BSH1,BSH2,BSH2,BSH2,BSH1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,BSH1,BSH3,BSH3,BSH3,BSH1,0,BSH1,BSH3,BSH3,BSH3,BSH1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,BSH1,BSH4,BSH4,BSH4,BSH1,0,BSH1,BSH4,BSH4,BSH4,BSH1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,BSH1,BSH1,BSH1,0,0,0,BSH1,BSH1,BSH1,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
EyeMask_6=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,EM1,EM1,EM1,EM1,EM1,EM1,EM1,EM1,EM1,EM1,EM1,0,0,0,0,0,0],
[0,0,0,0,0,0,0,EM1,0,0,EM1,EM1,EM1,0,0,EM1,EM1,EM1,0,0,0,0,0,0],
[0,0,0,0,0,0,0,EM1,0,EM2,EM1,EM1,EM1,0,EM2,EM1,EM1,EM1,0,0,0,0,0,0],
[0,0,0,0,0,0,0,EM1,EM1,EM1,EM1,EM1,EM1,EM1,EM1,EM1,EM1,EM1,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
HornedRimGlasses_6=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,HRG1,HRG1,HRG1,HRG1,HRG1,HRG1,HRG1,HRG1,HRG1,HRG1,HRG1,HRG1,0,0,0,0,0,0],
[0,0,0,0,0,0,HRG1,HRG1,HRG2,HRG2,HRG3,HRG1,HRG1,HRG2,HRG2,HRG3,HRG1,HRG1,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,HRG4,HRG1,HRG3,0,0,HRG4,HRG1,HRG3,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,HRG3,HRG3,HRG3,0,0,HRG3,HRG3,HRG3,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
RegularShades_6=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,RSH1,RSH1,RSH1,RSH1,RSH1,RSH1,RSH1,RSH1,RSH1,RSH1,RSH1,RSH1,0,0,0,0,0,0],
[0,0,0,0,0,0,RSH1,RSH1,RSH1,RSH1,0,0,RSH1,RSH1,RSH1,RSH1,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,RSH1,RSH1,0,0,0,0,RSH1,RSH1,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
TIARA_1=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,TI1,TI1,0,TI1,TI1,TI1,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,TI1,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,TI1,TI2,TI1,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,TI1,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
KNITTED_3=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,BG1,KC1,KC1,KC1,KC1,KC1,KC1,BG1,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,BG1,KC1,KC2,KC2,KC2,KC2,KC2,KC2,KC1,BG1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,BG1,KC1,KC2,KC2,KC2,KC2,KC2,KC2,KC2,KC2,KC1,BG1,0,0,0,0,0,0],
[0,0,0,0,0,BG1,KC1,KC3,KC3,KC3,KC3,KC3,KC3,KC3,KC3,KC3,KC3,KC1,BG1,0,0,0,0,0],
[0,0,0,0,0,BG1,KC1,KC2,KC3,KC2,KC3,KC2,KC3,KC2,KC3,KC2,KC3,KC1,BG1,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
HEADBAND_3=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,HB1,HB1,HB1,HB1,HB1,HB1,HB1,HB1,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,HB2,HB2,HB2,HB2,HB2,HB2,HB2,HB2,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
HEADBAND_5=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,HB1,HB1,HB1,HB1,HB1,HB1,HB1,HB1,HB1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,HB2,HB2,HB2,HB2,HB2,HB2,HB2,HB2,HB2,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
MILICAP_1=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,BG1,MH1,MH1,MH1,MH1,MH1,MH1,BG1,BG1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,BG1,MH1,MH1,MH1,MH1,MH2,MH1,MH1,MH1,BG1,BG1,0,0,0,0,0,0],
[0,0,0,0,0,0,BG1,MH1,MH1,MH1,MH1,MH2,MH1,MH1,MH1,MH1,MH1,BG1,0,0,0,0,0,0],
[0,0,0,0,0,0,BG1,MH1,MH1,MH1,MH1,MH1,MH1,MH1,MH1,MH1,MH1,BG1,0,0,0,0,0,0],
[0,0,0,0,0,0,BG1,0,0,0,0,0,0,0,0,0,0,BG1,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
BANDANA_3=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,BG1,BG1,BG1,BG1,BG1,BG1,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,BA2,BA2,BA2,BA2,BA2,BA2,BA2,BA2,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,BA2,BA1,BA1,BA1,BA1,BA1,BA1,BA1,BA1,BA2,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,BA2,BA1,BA1,BA1,BA1,BA1,BA1,BA1,BA1,BA1,BA2,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,BA2,BA1,BA1,BA1,BA2,BA2,BA2,BA2,BA1,BA3,BA2,BA1,BA2,BA1,0,0],
[0,0,0,0,0,0,0,0,0,BA2,BA2,BA2,0,0,0,0,0,0,BA3,BA2,BA1,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,BA3,BA1,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,BA1,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
PILOT_1=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,PH1,PH1,PH1,PH1,PH1,PH1,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,PH3,PH3,PH3,PH3,PH3,PH3,PH3,PH3,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,PH3,PH2,PH2,PH2,PH3,PH3,PH2,PH2,PH2,PH3,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,PH3,PH2,PH2,PH3,PH3,PH3,PH3,PH2,PH2,PH3,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,PH3,PH3,PH3,PH3,PH1,PH1,PH3,PH3,PH3,PH3,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,PH1,PH1,PH1,PH1,PH1,PH1,PH1,PH1,PH1,PH1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,PH1,0,0,0,0,0,0,0,0,PH1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,PH1,0,0,0,0,0,0,0,0,PH1,PH1,0,0,0,0,0,0],
[0,0,0,0,0,0,0,PH1,0,0,0,0,0,0,0,0,PH1,PH1,0,0,0,0,0,0],
[0,0,0,0,0,0,0,PH1,0,0,0,0,0,0,0,0,PH1,PH1,0,0,0,0,0,0],
[0,0,0,0,0,0,0,PH1,0,0,0,0,0,0,0,0,PH1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,PH1,0,0,0,0,0,0,0,0,PH1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,PH1,0,0,0,0,0,0,0,0,PH1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,PH1,0,0,0,0,0,0,0,0,PH1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,PH1,0,0,0,0,0,0,0,0,PH1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,PH1,0,0,0,0,0,0,0,0,PH1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
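# --- Hedged example (not part of the original data) ----------------------
# A minimal rendering sketch using Pillow, assuming every non-zero cell is
# an RGBA 4-tuple (the palette constants defined earlier in the file) and
# 0 is transparent; `render_layer` is a hypothetical helper:
from PIL import Image

def render_layer(layer, scale=10):
    """Rasterise one 24x24 layer; upscale with nearest-neighbour."""
    img = Image.new("RGBA", (24, 24), (0, 0, 0, 0))  # transparent canvas
    for y, row in enumerate(layer):
        for x, cell in enumerate(row):
            if cell != 0:
                img.putpixel((x, y), cell)
    return img.resize((24 * scale, 24 * scale), Image.NEAREST)

# e.g. render_layer(PILOT_1).save("pilot_1.png")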
CAP_3=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,CA1,CA1,CA1,CA1,CA1,CA1,CA1,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,CA1,CA1,CA2,CA1,CA1,CA1,CA1,CA1,CA1,BG1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,BG1,CA1,CA2,CA1,CA1,CA1,CA1,CA1,CA1,CA1,CA1,BG1,0,0,0,0,0,0],
[0,0,0,0,CA1,CA1,CA1,CA1,CA1,CA1,CA1,CA1,CA1,CA1,CA1,CA1,CA1,BG1,0,0,0,0,0,0],
[0,0,0,CA1,CA1,CA1,CA1,CA1,CA1,CA1,CA1,CA1,CA1,CA1,CA1,CA1,CA1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
EyePatch_3=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,EP1,EP1,EP1,EP1,EP1,EP1,EP1,EP1,EP1,EP1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,EP1,EP1,EP1,EP1,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,EP1,EP1,EP1,EP1,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,EP1,EP1,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
GOGOLES_1=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,WG1,WG2,WG2,WG1,WG1,WG1,WG2,WG2,WG1,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,WG2,WG3,WG3,WG2,WG1,WG2,WG3,WG3,WG2,WG1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,WG2,WG3,WG3,WG2,WG1,WG2,WG3,WG3,WG2,WG1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,WG1,WG2,WG2,WG1,0,WG1,WG2,WG2,WG1,WG1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,WG1,0,0,0,0,0,0,0,0,WG1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
VR_3=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,VR1,VR1,VR1,VR1,VR1,VR1,VR1,VR1,VR1,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,VR1,VR3,VR2,VR2,VR2,VR2,VR2,VR2,VR2,VR3,VR1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,VR1,VR2,VR1,VR1,VR1,VR1,VR1,VR1,VR1,VR2,VR3,VR1,0,0,0,0,0,0],
[0,0,0,0,0,0,VR1,VR2,VR1,VR1,VR1,VR1,VR1,VR1,VR1,VR2,VR3,VR1,0,0,0,0,0,0],
[0,0,0,0,0,0,VR1,VR3,VR2,VR2,VR2,VR2,VR2,VR2,VR2,VR3,VR1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,VR1,VR1,VR1,VR1,VR1,VR1,VR1,VR1,VR1,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
RegularShades_3=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,RSH1,RSH1,RSH1,RSH1,RSH1,RSH1,RSH1,RSH1,RSH1,RSH1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,RSH1,RSH1,RSH1,RSH1,0,RSH1,RSH1,RSH1,RSH1,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,RSH1,RSH1,0,0,0,RSH1,RSH1,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
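# --- Hedged example (not part of the original data) ----------------------
# The numeric suffix on each layer (RegularShades_1, _2, _3, _6, ...)
# appears to align the same accessory to different base-head variants: the
# eye rows sit one line lower in _2 and two lower in _6, and _3 is one
# pixel narrower. A lookup sketch under that assumption, listing only the
# variants defined above; `shades_for_head` is hypothetical:
REGULAR_SHADES_BY_VARIANT = {
    1: RegularShades_1,
    2: RegularShades_2,
    3: RegularShades_3,
    6: RegularShades_6,
}

def shades_for_head(variant):
    """Return the RegularShades grid aligned to base-head `variant`."""
    return REGULAR_SHADES_BY_VARIANT[variant]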
TD_3=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,TD1,TD1,TD1,TD1,TD1,TD1,TD1,TD1,TD1,TD1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,TD1,TD2,TD2,TD2,TD1,TD3,TD3,TD3,TD1,TD1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,TD1,TD2,TD2,TD2,TD1,TD3,TD3,TD3,TD1,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,TD1,TD1,TD1,TD1,TD1,TD1,TD1,TD1,TD1,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
NerdGlasses_3=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,ND2,ND2,ND2,ND2,0,ND2,ND2,ND2,ND2,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,ND2,ND1,ND1,ND2,ND2,ND2,ND1,ND1,ND2,ND2,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,ND2,ND1,ND1,ND2,0,ND2,ND1,ND1,ND2,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,ND2,ND2,0,0,0,ND2,ND2,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
ClassicShades_3=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,CSH1,CSH1,CSH1,CSH1,CSH1,CSH1,CSH1,CSH1,CSH1,CSH1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,CSH1,CSH2,CSH2,CSH1,0,CSH1,CSH2,CSH2,CSH1,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,CSH1,CSH3,CSH3,CSH1,0,CSH1,CSH3,CSH3,CSH1,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,CSH1,CSH1,0,0,0,CSH1,CSH1,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
HornedRimGlasses_3=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,HRG1,HRG1,HRG1,HRG1,HRG1,HRG1,HRG1,HRG1,HRG1,HRG1,HRG1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,HRG1,HRG1,HRG2,HRG2,HRG3,HRG1,HRG1,HRG2,HRG2,HRG3,HRG1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,HRG4,HRG1,HRG3,0,0,HRG4,HRG1,HRG3,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,HRG3,HRG3,HRG3,0,0,HRG3,HRG3,HRG3,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
BigShades_3=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,BSH1,BSH1,BSH1,BSH1,BSH1,0,BSH1,BSH1,BSH1,BSH1,BSH1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,BSH1,BSH2,BSH2,BSH2,BSH1,BSH1,BSH1,BSH2,BSH2,BSH2,BSH1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,BSH1,BSH3,BSH3,BSH3,BSH1,0,BSH1,BSH3,BSH3,BSH3,BSH1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,BSH1,BSH4,BSH4,BSH4,BSH1,0,BSH1,BSH4,BSH4,BSH4,BSH1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,BSH1,BSH1,BSH1,0,0,0,BSH1,BSH1,BSH1,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
EyeMask_3=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,EM1,EM1,EM1,EM1,EM1,EM1,EM1,EM1,EM1,EM1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,EM1,0,0,EM1,EM1,EM1,0,0,EM1,EM1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,EM1,0,EM2,EM1,EM1,EM1,0,EM2,EM1,EM1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,EM1,EM1,EM1,EM1,EM1,EM1,EM1,EM1,EM1,EM1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
TIARA_2=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,TI1,TI1,0,TI1,TI1,TI1,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,TI1,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,TI1,TI2,TI1,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,TI1,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
TIARA_3=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,TI1,TI1,TI1,0,TI1,TI1,TI1,TI1,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,TI1,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,TI1,TI2,TI1,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,TI1,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
KNITTED_4=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,BG1,KC1,KC1,KC1,KC1,KC1,KC1,BG1,BG1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,BG1,KC1,KC2,KC2,KC2,KC2,KC2,KC2,KC1,BG1,BG1,0,0,0,0,0,0],
[0,0,0,0,0,0,BG1,KC1,KC2,KC2,KC2,KC2,KC2,KC2,KC2,KC2,KC1,BG1,0,0,0,0,0,0],
[0,0,0,0,0,BG1,KC1,KC3,KC3,KC3,KC3,KC3,KC3,KC3,KC3,KC3,KC3,KC1,BG1,0,0,0,0,0],
[0,0,0,0,0,BG1,KC1,KC2,KC3,KC2,KC3,KC2,KC3,KC2,KC3,KC2,KC3,KC1,BG1,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
HEADBAND_4=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,HB1,HB1,HB1,HB1,HB1,HB1,HB1,HB1,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,HB2,HB2,HB2,HB2,HB2,HB2,HB2,HB2,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
HEADBAND_7=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,HB1,HB1,HB1,HB1,HB1,HB1,HB1,HB1,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,HB2,HB2,HB2,HB2,HB2,HB2,HB2,HB2,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
MILICAP_2=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,BG1,MH1,MH1,MH1,MH1,MH1,MH1,BG1,BG1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,BG1,MH1,MH1,MH1,MH1,MH2,MH1,MH1,MH1,BG1,BG1,0,0,0,0,0,0],
[0,0,0,0,0,0,BG1,MH1,MH1,MH1,MH1,MH2,MH1,MH1,MH1,MH1,MH1,BG1,0,0,0,0,0,0],
[0,0,0,0,0,0,BG1,MH1,MH1,MH1,MH1,MH1,MH1,MH1,MH1,MH1,MH1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,BG1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
BANDANA_4=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,BG1,BA2,BA2,BA2,BA2,BA2,BA2,BA2,BA2,BG1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,BG1,BA2,BA1,BA1,BA1,BA1,BA1,BA1,BA1,BA1,BA2,BG1,0,0,0,0,0,0],
[0,0,0,0,0,0,BG1,BA2,BA1,BA1,BA1,BA1,BA1,BA1,BA1,BA1,BA1,BA2,BG1,0,0,0,0,0],
[0,0,0,0,0,0,0,0,BA2,BA1,BA1,BA1,BA2,BA2,BA2,BA2,BA1,BA3,BA2,BA1,BA2,BA1,0,0],
[0,0,0,0,0,0,0,0,0,BA2,BA2,BA2,0,0,0,0,0,0,BA3,BA2,BA1,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,BA3,BA1,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,BA1,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
BANDANA_5=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,BG1,BA2,BA2,BA2,BA2,BA2,BA2,BA2,BA2,BG1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,BG1,BA2,BA1,BA1,BA1,BA1,BA1,BA1,BA1,BA1,BA2,BG1,0,0,0,0,0,0],
[0,0,0,0,0,0,BG1,BA2,BA1,BA1,BA1,BA1,BA1,BA1,BA1,BA1,BA1,BA2,BG1,0,0,0,0,0],
[0,0,0,0,0,0,0,0,BA2,BA1,BA1,BA1,BA2,BA2,BA2,BA2,BA1,BA3,BA2,BA1,BA2,BA1,0,0],
[0,0,0,0,0,0,0,0,0,BA2,BA2,BA2,0,0,0,0,0,0,BA3,BA2,BA1,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,BA3,BA1,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,BA1,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
PILOT_2=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,PH1,PH1,PH1,PH1,PH1,PH1,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,PH3,PH3,PH3,PH3,PH3,PH3,PH3,PH3,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,PH3,PH2,PH2,PH2,PH3,PH3,PH2,PH2,PH2,PH3,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,PH3,PH2,PH2,PH3,PH3,PH3,PH3,PH2,PH2,PH3,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,PH3,PH3,PH3,PH3,PH1,PH1,PH3,PH3,PH3,PH3,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,PH1,PH1,PH1,PH1,PH1,PH1,PH1,PH1,PH1,PH1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,PH1,0,0,0,0,0,0,0,0,PH1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,PH1,0,0,0,0,0,0,0,0,PH1,PH1,0,0,0,0,0,0],
[0,0,0,0,0,0,0,PH1,0,0,0,0,0,0,0,0,PH1,PH1,0,0,0,0,0,0],
[0,0,0,0,0,0,0,PH1,0,0,0,0,0,0,0,0,PH1,PH1,0,0,0,0,0,0],
[0,0,0,0,0,0,0,PH1,0,0,0,0,0,0,0,0,PH1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,PH1,0,0,0,0,0,0,0,0,PH1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,PH1,0,0,0,0,0,0,0,0,PH1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,PH1,0,0,0,0,0,0,0,0,PH1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,PH1,0,0,0,0,0,0,0,0,PH1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,PH1,0,0,0,0,0,0,0,0,PH1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
CAP_4=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,CA1,CA1,CA1,CA1,CA1,CA1,CA1,BG1,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,CA1,CA1,CA2,CA1,CA1,CA1,CA1,CA1,CA1,BG1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,BG1,CA1,CA2,CA1,CA1,CA1,CA1,CA1,CA1,CA1,CA1,BG1,0,0,0,0,0,0],
[0,0,0,0,CA1,CA1,CA1,CA1,CA1,CA1,CA1,CA1,CA1,CA1,CA1,CA1,CA1,BG1,0,0,0,0,0,0],
[0,0,0,CA1,CA1,CA1,CA1,CA1,CA1,CA1,CA1,CA1,CA1,CA1,CA1,CA1,CA1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
CAP_5=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,CA1,CA1,CA1,CA1,CA1,CA1,CA1,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,CA1,CA1,CA2,CA1,CA1,CA1,CA1,CA1,CA1,BG1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,BG1,CA1,CA2,CA1,CA1,CA1,CA1,CA1,CA1,CA1,CA1,BG1,0,0,0,0,0,0],
[0,0,0,0,CA1,CA1,CA1,CA1,CA1,CA1,CA1,CA1,CA1,CA1,CA1,CA1,CA1,BG1,0,0,0,0,0,0],
[0,0,0,CA1,CA1,CA1,CA1,CA1,CA1,CA1,CA1,CA1,CA1,CA1,CA1,CA1,CA1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
EyePatch_4=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,EP1,EP1,EP1,EP1,EP1,EP1,EP1,EP1,EP1,EP1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,EP1,EP1,EP1,EP1,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,EP1,EP1,EP1,EP1,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,EP1,EP1,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
GOGOLES_2=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,WG1,WG2,WG2,WG1,WG1,WG1,WG2,WG2,WG1,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,WG2,WG3,WG3,WG2,WG1,WG2,WG3,WG3,WG2,WG1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,WG2,WG3,WG3,WG2,WG1,WG2,WG3,WG3,WG2,WG1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,WG1,WG2,WG2,WG1,0,WG1,WG2,WG2,WG1,WG1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,WG1,0,0,0,0,0,0,0,0,WG1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
VR_4=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,VR1,VR1,VR1,VR1,VR1,VR1,VR1,VR1,VR1,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,VR1,VR3,VR2,VR2,VR2,VR2,VR2,VR2,VR2,VR3,VR1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,VR1,VR2,VR1,VR1,VR1,VR1,VR1,VR1,VR1,VR2,VR3,VR1,0,0,0,0,0,0],
[0,0,0,0,0,0,VR1,VR2,VR1,VR1,VR1,VR1,VR1,VR1,VR1,VR2,VR3,VR1,0,0,0,0,0,0],
[0,0,0,0,0,0,VR1,VR3,VR2,VR2,VR2,VR2,VR2,VR2,VR2,VR3,VR1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,VR1,VR1,VR1,VR1,VR1,VR1,VR1,VR1,VR1,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
RegularShades_4=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,RSH1,RSH1,RSH1,RSH1,RSH1,RSH1,RSH1,RSH1,RSH1,RSH1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,RSH1,RSH1,RSH1,RSH1,0,RSH1,RSH1,RSH1,RSH1,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,RSH1,RSH1,0,0,0,RSH1,RSH1,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
TD_4=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,TD1,TD1,TD1,TD1,TD1,TD1,TD1,TD1,TD1,TD1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,TD1,TD2,TD2,TD2,TD1,TD3,TD3,TD3,TD1,TD1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,TD1,TD2,TD2,TD2,TD1,TD3,TD3,TD3,TD1,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,TD1,TD1,TD1,TD1,TD1,TD1,TD1,TD1,TD1,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
NerdGlasses_4=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,ND2,ND2,ND2,ND2,0,ND2,ND2,ND2,ND2,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,ND2,ND1,ND1,ND2,ND2,ND2,ND1,ND1,ND2,ND2,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,ND2,ND1,ND1,ND2,0,ND2,ND1,ND1,ND2,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,ND2,ND2,0,0,0,ND2,ND2,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
ClassicShades_4=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,CSH1,CSH1,CSH1,CSH1,CSH1,CSH1,CSH1,CSH1,CSH1,CSH1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,CSH1,CSH2,CSH2,CSH1,0,CSH1,CSH2,CSH2,CSH1,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,CSH1,CSH3,CSH3,CSH1,0,CSH1,CSH3,CSH3,CSH1,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,CSH1,CSH1,0,0,0,CSH1,CSH1,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
HornedRimGlasses_4=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,HRG1,HRG1,HRG1,HRG1,HRG1,HRG1,HRG1,HRG1,HRG1,HRG1,HRG1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,HRG1,HRG1,HRG2,HRG2,HRG3,HRG1,HRG1,HRG2,HRG2,HRG3,HRG1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,HRG4,HRG1,HRG3,0,0,HRG4,HRG1,HRG3,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,HRG3,HRG3,HRG3,0,0,HRG3,HRG3,HRG3,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
BigShades_4=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,BSH1,BSH1,BSH1,BSH1,BSH1,0,BSH1,BSH1,BSH1,BSH1,BSH1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,BSH1,BSH2,BSH2,BSH2,BSH1,BSH1,BSH1,BSH2,BSH2,BSH2,BSH1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,BSH1,BSH3,BSH3,BSH3,BSH1,0,BSH1,BSH3,BSH3,BSH3,BSH1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,BSH1,BSH4,BSH4,BSH4,BSH1,0,BSH1,BSH4,BSH4,BSH4,BSH1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,BSH1,BSH1,BSH1,0,0,0,BSH1,BSH1,BSH1,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
EyeMask_4=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,EM1,EM1,EM1,EM1,EM1,EM1,EM1,EM1,EM1,EM1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,EM1,0,0,EM1,EM1,EM1,0,0,EM1,EM1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,EM1,0,EM2,EM1,EM1,EM1,0,EM2,EM1,EM1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,EM1,EM1,EM1,EM1,EM1,EM1,EM1,EM1,EM1,EM1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
BigBeard=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,BE1,BE2,BE2,BE2,BE2,BE2,BE2,BE2,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,BE1,BE2,BE2,BE2,BE2,BE2,BE2,BE2,BE2,BE2,BE2,BE1,0,0,0,0,0,0],
[0,0,0,0,0,0,BE1,BE2,BE2,BE2,BE1,BE1,BE1,BE2,BE2,BE2,BE2,BE2,BE1,0,0,0,0,0],
[0,0,0,0,0,0,BE1,BE2,BE2,BE2,BE2,BE2,BE2,BE2,BE2,BE2,BE2,BE2,BE1,0,0,0,0,0],
[0,0,0,0,0,0,BE1,BE2,BE2,BE2,BE2,BE2,BE2,BE2,BE2,BE2,BE2,BE2,BE1,0,0,0,0,0],
[0,0,0,0,0,0,BE1,BE2,BE2,BE2,BE2,BE2,BE2,BE2,BE2,BE2,BE1,BE1,0,0,0,0,0,0],
[0,0,0,0,0,0,0,BE1,BE2,BE2,BE2,BE2,BE2,BE2,BE1,BE1,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,BE1,BE1,BE1,BE1,BE1,BE1,0,0,0,0,0,0,0,0,0,0]
]
NormalBeardBlack=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,BE1,0,0,0,0,0,0],
[0,0,0,0,0,0,0,BE1,0,0,0,0,0,0,0,0,BE1,BE1,0,0,0,0,0,0],
[0,0,0,0,0,0,0,BE1,BE1,0,0,0,0,0,0,BE1,BE1,BE1,0,0,0,0,0,0],
[0,0,0,0,0,0,0,BE1,BE1,BE1,BE1,BE1,BE1,BE1,BE1,BE1,BE1,BE1,0,0,0,0,0,0],
[0,0,0,0,0,0,0,BE1,BE1,BE1,BE3,BE3,BE3,BE1,BE1,BE1,BE1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,BE1,BE1,BE1,BE1,BE1,BE1,BE1,BE1,BE1,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,BE1,BE1,BE1,BE1,BE1,BE1,BE1,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,BE1,BE1,BE1,BE1,BE1,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
FrontBeard=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,BE2,BE2,BE2,BE2,BE2,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,BE2,0,0,0,BE2,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,BE2,BE2,BE2,BE2,BE2,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,BE2,BE2,BE2,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,BE1,BE2,BE2,BE2,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,BE1,BE1,BE1,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
Handlebars=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,BE2,BE4,BE4,BE4,BE2,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,BE4,0,0,0,BE4,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,BE4,0,0,0,BE4,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
Muttonchops=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,BE2,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,BE2,0,0,0,0,0,0,BE2,BE2,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,BE2,0,0,0,0,0,BE2,BE2,BE2,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,BE2,0,0,0,0,0,BE2,BE2,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
Mustache=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,BE2,BE2,BE2,BE2,BE2,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
NormalBeard=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,BE2,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,BE2,0,0,0,0,0,0,BE2,BE2,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,BE2,BE2,BE2,BE2,BE2,BE2,BE2,BE2,BE2,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,BE2,BE2,0,0,0,BE2,BE2,BE2,BE2,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,BE2,BE2,BE2,BE2,BE2,BE2,BE2,BE2,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,BE2,BE2,BE2,BE2,BE2,BE2,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
Chinstrap=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,BE2,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,BE2,0,0,0,0,0,0,BE2,BE2,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,BE2,0,0,0,0,0,0,BE2,BE2,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,BE2,0,0,0,0,0,BE2,BE2,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,BE2,0,0,0,0,0,BE2,BE2,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,BE1,BE2,BE2,BE2,BE2,BE2,BE2,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,BE1,BE2,BE2,BE2,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,BE1,BE1,BE1,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
Goat=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,BE1,0,BE2,BE2,BE2,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,BE1,BE2,BE2,BE2,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,BE1,BE2,BE1,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,BE1,0,0,0,0,0,0,0,0,0,0,0,0]
]
FrontBeardDark=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,BE7,BE7,BE7,BE7,BE7,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,BE7,0,0,0,BE7,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,BE7,BE7,BE7,BE7,BE7,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,BE7,BE7,BE7,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,BE1,BE7,BE7,BE7,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,BE1,BE1,BE1,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
LuxuriousBeard=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,BE1,0,0,0,0,0,0,0,0,BE1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,BE1,BE1,0,0,0,0,0,0,BE1,BE1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,BE1,BE1,BE1,BE1,BE1,BE1,BE1,BE1,BE1,BE1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,BE1,BE1,BE1,BE3,BE3,BE3,BE1,BE1,BE1,BE1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,BE1,BE1,BE1,BE1,BE1,BE1,BE1,BE1,BE1,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,BE1,BE1,BE1,BE1,BE1,BE1,BE1,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,BE1,BE1,BE1,BE1,BE1,BE1,BE1,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,BE1,BE1,BE1,BE1,BE1,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
Elfe_Tiara =[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,ETI,0,0,0,0,0,0,0,0,ETI,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,ETI,0,0,0,0,0,0,ETI,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,ETI,ETI,0,ETI,ETI,ETI,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,ETI,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
Hob_Hat =[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,HOB2,HOB3,HOB4,HOB2,HOB3,HOB4,HOB2,HOB3,HOB4,HOB2,0,0,0,0,0,0,0],
[0,0,0,0,0,HOB1,HOB1,HOB1,HOB1,HOB1,HOB1,HOB1,HOB1,HOB1,HOB1,HOB1,HOB1,HOB1,HOB1,0,0,0,0,0],
[0,0,0,0,0,0,HOB1,HOB1,HOB1,HOB1,HOB1,HOB1,HOB1,HOB1,HOB1,HOB1,HOB1,HOB1,0,0,0,0,0,0],
[0,0,0,0,0,BG1,0,HOB1,HOB1,HOB1,HOB1,HOB1,HOB1,HOB1,HOB1,HOB1,HOB1,BG1,0,0,0,0,0,0],
[0,0,0,0,BG1,0,BG1,HOB5,HOB5,HOB5,HOB5,HOB5,HOB5,HOB5,HOB5,HOB5,HOB5,0,BG1,0,0,0,0,0],
[0,0,0,BG1,BG1,BG1,HOB2,HOB2,HOB2,HOB2,HOB2,HOB2,HOB2,HOB2,HOB2,HOB2,HOB2,HOB2,BG1,BG1,0,0,0,0],
[0,0,0,0,HOB1,HOB1,HOB1,HOB1,HOB1,HOB1,HOB1,HOB1,HOB1,HOB1,HOB1,HOB1,HOB1,HOB1,HOB1,HOB1,0,0,0,0],
[0,0,HOB1,HOB1,HOB1,HOB1,HOB1,HOB1,HOB1,HOB1,HOB1,HOB1,HOB1,HOB1,HOB1,HOB1,HOB1,HOB1,HOB1,HOB1,HOB1,HOB1,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
Gondor_Crown =[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,GCR1,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,GCR1,GCR1,GCR1,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,GCR1,GCR2,GCR1,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,GCR1,BG1,BG1,BG1,BG1,GCR1,GCR2,GCR3,GCR2,GCR1,BG1,BG1,BG1,BG1,BG1,GCR1,0,0,0,0],
[0,0,0,0,0,GCR1,GCR1,GCR1,GCR1,GCR2,GCR3,GCR2,GCR3,GCR2,GCR1,GCR1,GCR1,GCR1,GCR1,0,0,0,0,0],
[0,0,0,0,0,0,GCR1,GCR1,GCR2,GCR3,GCR2,GCR4,GCR2,GCR3,GCR2,GCR1,GCR1,GCR1,0,0,0,0,0,0],
[0,0,0,0,0,0,GCR1,GCR2,GCR3,GCR2,GCR4,GCR5,GCR4,GCR2,GCR3,GCR2,GCR1,GCR1,0,0,0,0,0,0],
[0,0,0,0,0,0,GCR1,GCR1,GCR2,GCR4,GCR5,GCR5,GCR5,GCR4,GCR2,GCR1,GCR1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,GCR5,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
Gobelin_Crown =[
[0,0,0,0,0,0,0,0,0,0,0,0,KGC,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,KGC,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,KGC,0,0,0,KGC,0,0,0,KGC,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,KGC,0,0,0,KGC,0,0,0,KGC,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,KGC,0,KGC,0,KGC,0,KGC,0,KGC,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,KGC,0,KGC,0,KGC,0,KGC,0,KGC,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,KGC,KGC,KGC,KGC,KGC,KGC,KGC,KGC,KGC,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,KGC,KGC,KGC,KGC,KGC,KGC,KGC,KGC,KGC,KGC,KGC,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
Flower =[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,FL3,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,FL4,FL2,FL4,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,FL3,FL2,FL1,FL2,FL3,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,FL4,FL2,FL4,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,FL3,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
Wo_Crown =[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,EOY2,EOY1,EOY2,EOY1,EOY2,EOY1,EOY2,EOY1,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,EOY1,0,0,0,0,0,0,0,0,EOY2,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,EOY1,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
Elf_Crown =[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,ELT,ELT,ELT,0,0,0,ELT,ELT,ELT,ELT,ELT,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,ELT,0,ELT,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,ELT,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
Helmet =[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,BG1,BG1,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,BG1,BG1,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,BG1,BG1,BG1,BG1,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,BG1,DHL3,DHL1,DHL3,DHL3,DHL1,DHL3,BG1,0,0,0,0,0,0,0,0],
[0,0,0,0,BG1,BG1,BG1,BG1,DHL3,DHL2,DHL1,DHL2,DHL2,DHL2,DHL1,DHL3,BG1,BG1,BG1,BG1,BG1,0,0,0],
[0,0,0,0,BG1,BG1,BG1,DHL3,DHL2,DHL2,DHL1,DHL2,DHL2,DHL2,DHL1,DHL2,DHL3,BG1,BG1,BG1,BG1,0,0,0],
[0,0,0,0,0,BG1,DHL3,DHL2,DHL2,DHL1,DHL2,DHL2,DHL2,DHL2,DHL2,DHL1,DHL2,DHL3,BG1,BG1,0,0,0,0],
[0,0,0,0,0,BG1,DHL3,DHL2,DHL2,DHL1,DHL2,DHL2,DHL2,DHL2,DHL2,DHL1,DHL2,DHL3,0,BG1,0,0,0,0],
[0,0,0,0,0,BG1,DHL3,DHL1,DHL1,DHL1,DHL1,DHL1,DHL1,DHL1,DHL1,DHL1,DHL1,DHL3,0,0,0,0,0,0],
[0,0,0,0,0,0,DHL3,DHL1,0,DHL1,0,0,0,0,0,DHL1,DHL1,DHL3,0,0,0,0,0,0],
[0,0,0,0,0,0,DHL3,DHL1,0,0,0,0,0,0,0,DHL1,DHL1,DHL3,0,0,0,0,0,0],
[0,0,0,0,0,0,DHL3,DHL1,0,0,0,0,0,0,0,DHL1,DHL1,DHL3,0,0,0,0,0,0],
[0,0,0,0,0,0,DHL3,DHL1,0,0,0,0,0,0,DHL1,DHL1,DHL3,DHL3,0,0,0,0,0,0],
[0,0,0,0,0,0,0,DHL1,DHL1,0,0,0,0,0,DHL1,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
Elfic_Krown=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,THR1,0,0,0,0,0,0,0,0,0,THR1,0,0,0,0,0,0],
[0,0,0,0,0,THR1,0,0,THR1,0,0,0,0,0,0,0,THR1,0,0,THR1,0,0,0,0],
[0,0,0,0,0,0,THR1,0,0,0,0,0,0,0,0,0,0,0,THR1,0,0,0,0,0],
[0,0,0,0,THR1,0,0,THR1,0,0,0,0,0,0,0,0,0,THR1,0,0,THR1,0,0,0],
[0,0,0,0,0,THR1,0,0,0,0,0,0,0,0,0,0,0,0,0,THR1,0,0,0,0],
[0,0,0,0,0,THR1,0,0,0,0,0,0,0,0,0,0,0,0,0,THR1,0,0,0,0],
[0,0,0,THR1,0,0,THR1,0,0,0,0,0,0,0,0,0,0,0,THR1,0,0,THR1,0,0],
[0,0,0,0,THR1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,THR1,0,0,0],
[0,0,0,0,0,THR1,0,0,0,0,0,0,0,0,0,0,0,0,0,THR1,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
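# The 24x24 grids above are sprite layers: 0 marks a transparent pixel and
# every other entry names an RGB tuple (BG1, HR1, KGC, ...) that is chosen
# at draw time. A minimal sketch of how such layers could be composited,
# assuming Pillow is available; render_layers and its parameters are
# illustrative names, not part of the original pipeline.
def render_layers(layers, scale=10):
    from PIL import Image  # assumption: Pillow is installed
    canvas = Image.new('RGB', (24, 24), (255, 255, 255))
    for layer in layers:
        for row_idx, row in enumerate(layer):
            for col_idx, cell in enumerate(row):
                if cell != 0:  # 0 = transparent; anything else is an RGB tuple
                    canvas.putpixel((col_idx, row_idx), cell)
    # upscale with hard edges so the pixel-art look survives
    return canvas.resize((24 * scale, 24 * scale), Image.NEAREST)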
#Initialize the variables
# list1 tells how many times to iterate through the following mechanism,
# which equals the number of MidPunks
# e.g. for x in range(201)
# would generate 201 MidPunks numbered 0-200
list1 = range(11984)
filterlist1 = []
for x in list1:
a = 13080698
seed(x+a)
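# Re-seeding with x+a chains every randint() below to the punk index x,
# so a given index always reproduces the same trait set.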
titi=0
titin=0
titine=0
toto=0
tata=0
tutu=0
tyty=0
tete=0
toutou=0
toctoc=0
tactac=0
tuctuc=0
tonton=0
tantan=0
neyo=0
neye=0
neya=0
neyh=0
neyu=0
neyw=0
b = randint(0,1000000)
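# b is uniform on [0, 1000000]; b > 950000 selects the Halflings branch
# with roughly 5% probability.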
if b > 950000:
race_ep = 'Halflings'
type_ep = 'Male'
hair_color_ep = 'None'
haircut_ep = 'None'
hair_prop_ep = 'None'
eyes_prop_ep = 'None'
blemishe_ep = 'None'
eyes_color_ep = 'None'
facial_hair_ep = 'None'
mouth_prop_ep = 'None'
mouth_ep = 'None'
tooth_color_ep = 'None'
nose_ep = 'None'
neck_ep = 'None'
ears_ep = 'None'
skin_ep = 'None'
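# Reset every optional layer to 'none' (presumably the blank all-zero grid
# defined earlier) before rolling this punk's traits.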
ears = none
hair = none
hair_prop = none
neck = none
blemishes = none
#tooth color
mouth = none
facial_hair = none
rod = none
mouth_prop = none
#eye color
eyes = none
nose = none
seed(b)
c=randint(0,1000000)
if c > 800000:
SK1 = (234,217,217)
SC1 = (165,141,141)
EY1 = (201,178,178)
SK2 = (255,255,255)
HRG3 = (220,222,234)
HRG2 = (183,179,191)
HRG4 = (203,200,212)
HRG5 = (87,101,113)
HRG1 = (0,0,0)
RC1 = (226,187,185)
MO1 = EY1
SCR1 = EY1
skin_ep = 'Albino'
elif c > 500000:
SK1 = (219,177,128)
SC1 = (166,110,44)
EY1 = (210,157,96)
SK2 = (235,203,166)
HRG3 = (213,200,183)
HRG2 = (184,163,135)
HRG4 = (209,189,164)
HRG5 = (87,101,113)
HRG1 = (0,0,0)
RC1 = (215,154,104)
MO1 = EY1
SCR1 = EY1
skin_ep = 'Light'
elif c > 200000:
SK1 = (174,139,97)
SC1 = (134,88,30)
EY1 = (167,124,71)
SK2 = (178,138,93)
HRG3 = (188,179,165)
HRG2 = (166,150,128)
HRG4 = (184,171,151)
HRG5 = (87,101,113)
HRG1 = (0,0,0)
RC1 = (191,105,71)
MO1 = EY1
SCR1 = EY1
skin_ep = 'Mid'
else:
SK1 = (113,63,29)
SC1 = (86,39,10)
EY1 = (114,55,17)
SK2 = (146,79,35)
HRG3 = (155,135,127)
HRG2 = (139,121,111)
HRG4 = (156,131,115)
HRG5 = (87,101,113)
HRG1 = (0,0,0)
RC1 = (142,36,2)
MO1 = EY1
SCR1 = EY1
skin_ep = 'Dark'
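# Skin tone split: Albino ~20%, Light ~30%, Mid ~30%, Dark ~20%.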
seed(c)
d=randint(0,1000000)
if d > 750000:
ears = EARS_2
ears_ep = 'Earring'
else:
ears = none
ears_ep = 'None'
seed(d)
e=randint(0,1000000)
HR0 = (255,193,0)
HR2 = (251,114,7)
HR3 = (210,154,0)
HR4 = (166,165,165)
HR5 = (165,108,0)
HR6 = (111,0,48)
HR7 = (85,57,23)
if e > 875000:
HR1 = HR0
hair_color_ep ='Blond'
elif e > 750000:
HR1 = nr
hair_color_ep='Black'
elif e > 625000:
HR1 = HR2
hair_color_ep ='Orange'
elif e > 500000:
HR1 = HR3
hair_color_ep ='Fair'
elif e > 375000:
HR1 = HR4
hair_color_ep ='Grey'
elif e > 250000:
HR1 = HR5
hair_color_ep ='Ginger'
elif e > 125000:
HR1 = HR6
hair_color_ep ='Black Rose'
else:
HR1 = HR7
hair_color_ep ='Brown'
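# Eight hair colors at equal ~12.5% odds (thresholds step down by 125000).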
HALFIN_HR1=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,HR1,0,HR1,HR1,HR1,0,HR1,HR1,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,0,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,0,HR1,0,0,0,0,0],
[0,0,0,0,0,HR1,0,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,0,0,0,0],
[0,0,0,0,0,HR1,0,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,0,HR1,0,0,0],
[0,0,0,0,0,0,HR1,HR1,HR1,0,0,HR1,0,HR1,0,0,HR1,HR1,HR1,HR1,0,HR1,0,0],
[0,0,0,0,0,HR1,HR1,HR1,HR1,0,HR1,HR1,0,0,HR1,0,HR1,HR1,HR1,HR1,HR1,0,0,0],
[0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,0,0,0,0,HR1,HR1,HR1,0,0],
[0,0,0,0,HR1,0,HR1,0,0,0,0,0,0,0,0,0,0,0,0,HR1,HR1,HR1,0,0],
[0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,0,0,0,0,HR1,HR1,HR1,0,0],
[0,0,0,0,0,0,HR1,0,0,0,0,0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0],
[0,0,0,0,0,0,HR1,0,0,0,0,0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0],
[0,0,0,0,HR1,0,HR1,0,0,0,0,0,0,0,0,0,0,0,HR1,0,HR1,HR1,0,0],
[0,0,0,0,0,HR1,0,0,0,0,0,0,0,0,0,0,0,0,0,HR1,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
HALFIN_HR2=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,HR1,HR1,HR1,0,HR1,0,HR1,HR1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,HR1,0,HR1,HR1,HR1,0,HR1,HR1,0,HR1,0,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,0,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,0,HR1,0,0,0,0,0],
[0,0,0,0,0,HR1,0,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,0,0,0,0],
[0,0,0,0,0,HR1,0,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,0,HR1,0,0,0],
[0,0,0,0,HR1,0,HR1,HR1,HR1,0,0,HR1,0,HR1,0,0,HR1,HR1,HR1,HR1,0,HR1,0,0],
[0,0,0,0,0,HR1,HR1,HR1,HR1,0,HR1,HR1,0,0,HR1,0,HR1,HR1,HR1,HR1,HR1,0,0,0],
[0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,0,0,0,0,HR1,HR1,HR1,0,0],
[0,0,0,0,HR1,0,HR1,0,0,0,0,0,0,0,0,0,0,0,0,HR1,HR1,HR1,0,0],
[0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,0,0,0,0,HR1,HR1,HR1,0,0],
[0,0,0,0,HR1,0,HR1,0,0,0,0,0,0,0,0,0,0,0,HR1,HR1,0,HR1,HR1,0],
[0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,0,0,0,HR1,HR1,HR1,HR1,HR1,0],
[0,0,0,0,HR1,0,HR1,0,0,0,0,0,0,0,0,0,0,0,HR1,0,HR1,HR1,0,0],
[0,0,0,0,0,HR1,0,0,0,0,0,0,0,0,0,0,0,0,0,HR1,0,HR1,HR1,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,HR1,0,HR1,HR1,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,HR1,HR1,0,HR1,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,HR1,HR1,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
HALFIN_HR3=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,0,0,0,0,0,0],
[0,0,0,0,0,0,0,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,0,0,0,0,0],
[0,0,0,0,0,0,0,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,0,HR1,HR1,0,HR1,0,HR1,HR1,HR1,HR1,HR1,HR1,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,0,HR1,HR1,0,0,0,HR1,HR1,HR1,HR1,HR1,HR1,HR1,0,0,0],
[0,0,0,0,0,0,HR1,HR1,0,HR1,0,0,0,0,HR1,HR1,HR1,HR1,HR1,HR1,HR1,0,0,0],
[0,0,0,0,0,0,HR1,0,0,0,0,0,0,0,0,0,HR1,HR1,0,HR1,HR1,0,0,0],
[0,0,0,0,0,0,HR1,0,0,0,0,0,0,0,0,0,0,0,0,HR1,HR1,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,HR1,HR1,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,HR1,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
HALFIN_HR4=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,HR1,HR1,HR1,0,HR1,HR1,HR1,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,0,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,HR1,0,HR1,0,HR1,0,0,HR1,HR1,HR1,HR1,0,0,0,0,0],
[0,0,0,0,0,HR1,HR1,HR1,0,0,HR1,0,0,HR1,0,0,HR1,HR1,HR1,HR1,0,0,0,0],
[0,0,0,0,0,HR1,HR1,0,0,0,HR1,0,0,0,HR1,HR1,0,0,HR1,HR1,0,0,0,0],
[0,0,0,0,0,HR1,HR1,0,0,0,0,HR1,0,0,0,0,0,0,0,HR1,0,0,0,0],
[0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,0,0,0,0,HR1,0,0,0,0],
[0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,0,0,0,0,HR1,0,0,0,0],
[0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,0,0,0,HR1,HR1,0,0,0,0],
[0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,0,0,0,HR1,HR1,0,0,0,0],
[0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,0,0,0,HR1,HR1,0,0,0,0],
[0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,0,0,0,HR1,HR1,0,0,0,0],
[0,0,0,0,0,0,HR1,0,0,0,0,0,0,0,0,0,0,0,HR1,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
HALFIN_HR5=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,HR1,HR1,HR1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,0,HR1,HR1,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,0,HR1,HR1,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,0,HR1,HR1,0,0,0,0,0],
[0,0,0,0,0,0,HR1,0,0,0,0,0,0,0,0,0,0,0,HR1,0,0,0,0,0],
[0,0,0,0,0,0,HR1,0,0,0,0,0,0,0,0,0,0,0,HR1,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
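# Trait rolls are chained: each randint() result reseeds the PRNG for the next
# draw, so one starting seed deterministically fixes the whole trait set.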
seed(e)
f=randint(0,1000000)
if f > 800000:
hair = HALFIN_HR1
haircut_ep ='Wild Hair'
elif f > 600000:
hair = HALFIN_HR2
haircut_ep ='Perm Hair'
elif f > 400000:
hair = HALFIN_HR3
haircut_ep ='Bedhead'
elif f > 200000:
hair = HALFIN_HR4
haircut_ep ='Hockey Hair'
else:
hair = HALFIN_HR5
haircut_ep ='Bald'
seed(f)
g=randint(0,1000000)
if g > 970000:
hair_prop = POLICE_6
hair_prop_ep = 'Police'
elif g > 950000:
hair_prop = TOPHAT_6
hair_prop_ep = 'Top Hat'
elif g > 900000:
hair_prop = HEADBAND_6
hair_prop_ep = 'Headband'
elif g > 850000:
hair_prop = FORCAP_8
hair_prop_ep = 'Cap Forward'
elif g > 830000:
hair_prop = COWBOY_8
hair_prop_ep = 'Cowboy Hat'
elif g > 790000:
hair_prop = CAP_8
hair_prop_ep = 'Cap'
else:
hair_prop = none
hair_prop_ep = 'None'
seed(g)
h=randint(0,1000000)
if h > 900000:
neck = GoldChain_1
neck_ep = 'Gold Chain'
elif h > 820000:
neck = SilverChain_1
neck_ep = 'Silver Chain'
elif h > 800000:
neck = RING_1
neck_ep = 'Ring Onchain'
elif h > 780000:
neck = BROCHE_1
neck_ep = 'Brooch'
else:
neck = none
neck_ep = 'None'
seed(h)
i=randint(0,1000000)
if i > 900000:
mouth_prop = CIGARETTE
mouth_prop_ep = 'Cigarette'
elif i > 880000:
mouth_prop = MASK_1
facial_hair = none
mouth_prop_ep = 'Medical Mask'
elif i > 820000:
mouth_prop = PIPE
mouth_prop_ep = 'Pipe'
elif i > 780000:
mouth_prop = VAPE
mouth_prop_ep = 'Vape'
else:
mouth_prop = none
mouth_prop_ep = 'None'
seed(i)
j=randint(0,1000000)
if j > 970000:
eyes = TD_6
eyes_prop_ep ='3D Glasses'
elif j > 930000:
eyes = VR_6
eyes_prop_ep ='VR'
elif j > 880000:
eyes = ClassicShades_6
eyes_prop_ep ='Classic Shades'
elif j > 830000:
eyes = SmallShades_6
eyes_prop_ep ='Small Shades'
elif j > 780000:
eyes = EyePatch_6
eyes_prop_ep ='Eye Patch'
elif j > 730000:
eyes = NerdGlasses_6
eyes_prop_ep ='Nerd Glasses'
elif j > 680000:
eyes = BigShades_6
eyes_prop_ep ='Big Shades'
elif j > 650000:
eyes = EyeMask_6
eyes_prop_ep ='Eye Mask'
elif j > 600000:
eyes = HornedRimGlasses_6
eyes_prop_ep ='Horned Rim Glasses'
elif j > 550000:
eyes = RegularShades_6
eyes_prop_ep ='Regular Shades'
else:
eyes=none
eyes_prop_ep ='None'
seed(j)
k=randint(0,1000000)
if k > 975000:
mouth = SMILE
mouth_ep = 'Smile'
elif k > 950000:
mouth = FROWN
mouth_ep = 'Frown'
else:
mouth = none
mouth_ep = 'None'
MOLE=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,MO1,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
SCARE_2=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,SCR1,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,SCR1,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,SCR1,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,SCR1,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
ROSY_1=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,RC1,0,0,0,0,RC1,RC1,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,RC1,0,0,0,0,0,RC1,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
ROSY_2=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,RC1,RC1,0,0,0,RC1,RC1,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
seed(k)
l=randint(0,1000000)
if l > 970000:
blemishes = ROSY_1
blemishe_ep = 'Rosy Cheeks'
elif l > 900000:
blemishes = MOLE
blemishe_ep = 'Mole'
elif l > 870000:
blemishes = SCARE_2
blemishe_ep = 'Scare'
else:
blemishes = none
blemishe_ep = 'None'
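# 24x24 base sprite for the male Halfling: FR2 frame, BG1 background, FR1 outline,
# SK1 skin, SC1 brow shading, EY1 eyes.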
HALFIN=[
[FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR1,FR1,FR1,FR1,FR1,FR1,FR1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SC1,SC1,SK1,SK1,SK1,SC1,SC1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,EY1,FR1,SK1,SK1,SK1,EY1,FR1,SK1,SK1,FR1,FR1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,SK1,FR1,SK1,SK1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,FR1,FR1,FR1,SK1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR1,FR1,FR1,FR1,FR1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR1,SK1,SK1,SK1,FR1,FR2,FR2,FR2,FR2,FR2,FR2]
]
pixels = HALFIN
elif b > 900000:
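# Female Halfling branch: reset every trait layer and label before re-rolling.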
race_ep = 'Halflings'
type_ep = 'Female'
hair_color_ep = 'None'
haircut_ep = 'None'
hair_prop_ep = 'None'
eyes_prop_ep = 'None'
blemishe_ep = 'None'
eyes_color_ep = 'None'
facial_hair_ep = 'None'
mouth_prop_ep = 'None'
mouth_ep = 'None'
tooth_color_ep = 'None'
nose_ep = 'None'
neck_ep = 'None'
ears_ep = 'None'
skin_ep = 'None'
ears = none
hair = none
hair_prop = none
neck = none
blemishes = none
#tooth color
mouth = none
facial_hair = none
rod = none
mouth_prop = none
#eye color
eyes = none
nose = none
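# Skin palette per tone: SK1 base, SC1 shading, EY1 eyes, SK2 highlight,
# HRG1-HRG5 grey ramp, RC1 rosy cheeks, LI1 lips; moles and scars reuse the eye tone.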
seed(b)
c=randint(0,1000000)
if c > 800000:
SK1 = (234,217,217)
SC1 = (165,141,141)
EY1 = (201,178,178)
SK2 = (255,255,255)
HRG3 = (220,222,234)
HRG2 = (183,179,191)
HRG4 = (203,200,212)
HRG5 = (87,101,113)
HRG1 = (0,0,0)
RC1 = (226,187,185)
LI1 = (113,28,17)
MO1 = EY1
SCR1 = EY1
skin_ep = 'Albino'
elif c > 500000:
SK1 = (219,177,128)
SC1 = (166,110,44)
EY1 = (210,157,96)
SK2 = (235,203,166)
HRG3 = (213,200,183)
HRG2 = (184,163,135)
HRG4 = (209,189,164)
HRG5 = (87,101,113)
HRG1 = (0,0,0)
RC1 = (215,154,104)
LI1 = (113,28,17)
MO1 = EY1
SCR1 = EY1
skin_ep = 'Light'
elif c > 200000:
SK1 = (174,139,97)
SC1 = (134,88,30)
EY1 = (167,124,71)
SK2 = (178,138,93)
HRG3 = (188,179,165)
HRG2 = (166,150,128)
HRG4 = (184,171,151)
HRG5 = (87,101,113)
HRG1 = (0,0,0)
RC1 = (191,105,71)
LI1 = (95,29,13)
MO1 = EY1
SCR1 = EY1
skin_ep = 'Mid'
else:
SK1 = (113,63,29)
SC1 = (86,39,10)
EY1 = (114,55,17)
SK2 = (146,79,35)
HRG3 = (155,135,127)
HRG2 = (139,121,111)
HRG4 = (156,131,115)
HRG5 = (87,101,113)
HRG1 = (0,0,0)
RC1 = (142,36,2)
LI1 = (74,18,8)
MO1 = EY1
SCR1 = EY1
skin_ep = 'Dark'
seed(c)
d=randint(0,1000000)
if d > 750000:
ears = EARS_3
ears_ep = 'Earring'
else:
ears = none
ears_ep = 'None'
seed(d)
e=randint(0,1000000)
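# Hair swatches: the rolled colour lands in HR1; HR2 is forced to red in every
# branch, and only the ponytail overlay below draws HR2 pixels.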
HR0 = (255,193,0)
HR2 = (251,114,7)
HR3 = (210,154,0)
HR4 = (166,165,165)
HR5 = (165,108,0)
HR6 = (111,0,48)
HR7 = (85,57,23)
red = (255,0,0)
if e > 875000:
HR1 = HR0
HR2 = red
hair_color_ep ='Blonde'
elif e > 750000:
HR1 = nr
HR2 = red
hair_color_ep ='Black'
elif e > 625000:
HR1 = HR2
HR2 = red
hair_color_ep ='Orange'
elif e > 500000:
HR1 = HR3
HR2 = red
hair_color_ep ='Fair'
elif e > 375000:
HR1 = HR4
HR2 = red
hair_color_ep ='Grey'
elif e > 250000:
HR1 = HR5
HR2 = red
hair_color_ep ='Ginger'
elif e > 125000:
HR1 = HR6
HR2 = red
hair_color_ep ='Black Rose'
else:
HR1 = HR7
HR2 = red
hair_color_ep ='Brown'
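# Five 24x24 hair overlays for the female Halfling; HR1 cells take the rolled colour.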
HALFINE_HR1=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,HR1,0,0,HR1,0,0,HR1,HR1,HR1,0,0,0,0,0,0,0,0],
[0,0,0,0,0,HR1,0,0,HR1,HR1,0,HR1,HR1,HR1,0,HR1,HR1,HR1,0,0,0,0,0,0],
[0,0,0,0,HR1,0,HR1,HR1,HR1,HR1,HR1,0,HR1,0,HR1,0,HR1,0,HR1,0,0,0,0,0],
[0,0,0,HR1,HR1,HR1,HR1,0,0,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,0,0,0,0],
[0,0,0,0,HR1,0,HR1,HR1,HR1,HR1,0,HR1,0,HR1,0,HR1,HR1,0,HR1,HR1,0,0,0,0],
[0,0,0,0,HR1,HR1,HR1,HR1,HR1,0,HR1,0,0,0,HR1,HR1,0,HR1,HR1,HR1,HR1,0,0,0],
[0,0,0,HR1,0,HR1,HR1,HR1,HR1,0,0,0,0,0,0,HR1,0,HR1,HR1,HR1,0,0,0,0],
[0,0,HR1,0,HR1,HR1,HR1,0,0,0,0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0],
[0,0,HR1,HR1,HR1,0,HR1,0,0,0,0,0,0,0,0,0,0,0,HR1,0,HR1,0,0,0],
[0,0,0,HR1,0,HR1,HR1,0,0,0,0,0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0],
[0,0,HR1,0,HR1,HR1,HR1,0,0,0,0,0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0],
[0,0,HR1,HR1,HR1,HR1,HR1,0,0,0,0,0,0,0,0,0,0,HR1,HR1,0,HR1,0,0,0],
[0,0,HR1,HR1,HR1,HR1,0,0,0,0,0,0,0,0,0,0,0,HR1,HR1,HR1,HR1,HR1,0,0],
[0,0,HR1,0,0,HR1,HR1,0,0,0,0,0,0,0,0,0,0,HR1,HR1,HR1,0,HR1,0,0],
[0,HR1,0,HR1,HR1,HR1,HR1,0,0,0,0,0,0,0,0,0,0,HR1,HR1,0,HR1,HR1,0,0],
[0,HR1,HR1,0,HR1,0,HR1,0,0,0,0,0,0,0,0,0,HR1,HR1,HR1,HR1,HR1,0,0,0],
[0,0,0,HR1,0,HR1,0,HR1,0,0,0,0,0,0,0,0,0,HR1,0,HR1,0,HR1,0,0],
[0,0,0,0,HR1,0,HR1,HR1,0,0,0,0,0,0,0,0,0,0,HR1,0,HR1,0,0,0],
[0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
HALFINE_HR2=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,HR1,0,HR1,HR1,HR1,HR1,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,HR1,HR1,HR1,HR1,HR1,HR1,0,HR1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,HR1,0,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,0,0,0,0,0,0,0],
[0,0,0,0,0,HR1,0,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,0,0,0,0,0],
[0,0,0,0,0,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,0,HR1,HR1,HR1,HR1,0,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,HR1,0,0,HR1,0,HR1,0,HR1,HR1,HR1,HR1,0,0,0,0,0],
[0,0,0,0,0,HR1,HR1,HR1,0,0,0,HR1,0,0,0,HR1,HR1,HR1,0,HR1,0,0,0,0],
[0,0,0,0,HR1,0,HR1,0,0,0,0,HR1,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0],
[0,0,0,0,0,HR1,HR1,0,0,0,HR1,0,0,0,0,0,0,HR1,0,HR1,0,0,0,0],
[0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,0,0,HR1,HR1,HR1,HR1,0,0,0],
[0,0,0,0,HR1,HR1,HR1,0,0,0,0,0,0,0,0,0,0,HR1,HR1,0,0,0,0,0],
[0,0,0,0,0,0,HR1,0,0,0,0,0,0,0,0,0,0,HR1,HR1,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
HALFINE_HR3=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,HR1,HR1,HR1,HR1,HR1,HR1,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,0,0,HR1,HR1,HR1,0,0,HR1,HR1,HR1,0,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,0,0,0,HR1,0,0,0,0,HR1,HR1,0,0,0,0,0,0],
[0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0,0,0,0,0,HR1,HR1,0,0,0,0,0],
[0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0],
[0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0],
[0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0],
[0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0,0],
[0,0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
HALFINE_HR4=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,HR1,HR1,HR1,HR1,HR1,HR1,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,0,0,0,0,0,0],
[0,0,0,0,0,HR1,HR1,HR1,HR1,0,0,0,0,0,0,HR1,HR1,HR1,HR1,0,0,0,0,0],
[0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,0,0,HR1,HR1,0,0,0,0,0],
[0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,0,0,0,HR1,0,0,0,0,0],
[0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,0,0,0,HR1,0,0,0,0,0],
[0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,0,0,0,HR1,0,0,0,0,0],
[0,0,0,0,HR1,HR1,HR1,0,0,0,0,0,0,0,0,0,0,HR1,HR1,0,0,0,0,0],
[0,0,0,0,HR1,HR1,HR1,0,0,0,0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0],
[0,0,0,0,HR1,HR1,HR1,0,0,0,0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0],
[0,0,0,0,HR1,HR1,HR1,0,0,0,0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0],
[0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,0,HR1,HR1,HR1,HR1,0,0,0,0],
[0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
HALFINE_HR5=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,HR1,HR1,HR1,HR1,HR1,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,HR1,HR1,HR1,HR1,HR1,HR1,HR1,0,0,HR1,HR1,0,0,0,0,0],
[0,0,0,0,0,0,0,0,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR2,HR1,HR1,HR1,0,0,0,0],
[0,0,0,0,0,0,0,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR2,0,HR1,HR1,HR1,0,0,0],
[0,0,0,0,0,0,0,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,0,HR1,HR1,HR1,HR1,0,0],
[0,0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,HR1,HR1,HR1,0,HR1,HR1,HR1,HR1,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,HR1,0,0,HR1,HR1,HR1,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
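# Blemish overlays used by the female roll below: mole, scar and rosy cheeks.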
MOLE_2=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,MO1,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
SCARE_2=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,SCR1,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,SCR1,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,SCR1,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,SCR1,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
ROSY_2=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,RC1,RC1,0,0,0,RC1,RC1,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
# default the interaction flags (99 = set by a branch below); assumed fix to
# avoid an undefined-name error when no branch assigns them on a given roll
toto = titine = toctoc = neya = neyh = tactac = tuctuc = 0
seed(e)
f=randint(0,1000000)
if f > 800000:
hair = HALFINE_HR1
haircut_ep ='Perm Hair'
elif f > 600000:
hair = HALFINE_HR2
haircut_ep ='Wild Hair'
elif f > 400000:
hair = HALFINE_HR3
haircut_ep ='Wedge Hair'
elif f > 200000:
hair = HALFINE_HR4
haircut_ep ='Feathered Hair'
else:
hair = HALFINE_HR5
haircut_ep ='Ponytail'
toto = 99
seed(f)
g=randint(0,1000000)
if g > 990000:
hair_prop = TIARA_3
hair_prop_ep = 'Tiara'
titine = 99
elif g > 940000:
hair_prop = Flower
hair_prop_ep = 'Flower'
elif g > 900000 and toto != 99:
hair_prop = Hob_Hat
hair_prop_ep = 'Shire Hat'
elif g > 860000:
hair_prop = HEADBAND_4
hair_prop_ep = 'Headband'
elif g > 850000:
hair = none
hair_prop = PILOT_2
hair_prop_ep = 'Pilot Helmet'
titine = 99
else:
hair_prop = none
hair_prop_ep = 'None'
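# Eye-shadow roll recolours EY1/SC1 in place; neya marks the no-shadow case for
# the eyewear compatibility check further down.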
seed(g)
h=randint(0,1000000)
if h > 900000:
EY1 = (110,152,77)
SC1 = (92,133,57)
eyes_color_ep = 'Green Eye Shadow'
elif h > 800000:
EY1 = (93,121,117)
SC1 = (80,106,101)
eyes_color_ep = 'Blue Eye Shadow'
elif h > 700000:
EY1 = (176,61,133)
SC1 = (164,55,117)
eyes_color_ep = 'Purple Eye Shadow'
elif h > 600000:
EY1 = (214,92,26)
SC1 = (194,79,17)
eyes_color_ep = 'Orange Eye Shadow'
else:
eyes_color_ep = 'None'
neya = 99
seed(h)
i=randint(0,1000000)
if i > 900000:
mouth_prop = CIGARETTE
mouth_prop_ep = 'Cigarette'
elif i > 880000:
mouth_prop = MASK_2
mouth_prop_ep = 'Medical Mask'
tactac = 99
elif i > 820000:
mouth_prop = PIPE
mouth_prop_ep = 'Pipe'
elif i > 780000:
mouth_prop = VAPE
mouth_prop_ep = 'Vape'
else:
mouth_prop = none
mouth_prop_ep = 'None'
seed(i)
j=randint(0,1000000)
if j > 970000:
eyes = TD_3
eyes_prop_ep ='3D Glasses'
elif j > 930000:
eyes = VR_3
eyes_prop_ep ='VR'
elif j > 880000:
eyes = ClassicShades_4
eyes_prop_ep ='Classic Shades'
elif j > 830000:
eyes = EyePatch_4
eyes_prop_ep ='Eye Patch'
neyh = 99
elif j > 780000:
eyes = NerdGlasses_4
eyes_prop_ep ='Nerd Glasses'
elif j > 730000:
eyes = BigShades_4
eyes_prop_ep ='Big Shades'
elif j > 700000:
eyes = EyeMask_4
eyes_prop_ep ='Eye Mask'
neyh = 99
elif j > 650000:
eyes = HornedRimGlasses_4
eyes_prop_ep ='Horned Rim Glasses'
elif j > 600000:
eyes = RegularShades_4
eyes_prop_ep ='Regular Shades'
elif j > 590000:
eyes = GOGOLES_2
eyes_prop_ep ='Welding Goggles'
hair_prop = none
hair_prop_ep = 'None'
toctoc = 99
else:
eyes=none
eyes_prop_ep ='None'
neyh = 99
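# Compatibility fixes: a tiara or pilot helmet (titine) hides eyewear unless
# welding goggles (toctoc) already cleared the head prop; eye shadow (neya)
# drops any eyewear except the eye patch, eye mask or none (neyh).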
if titine == 99 and toctoc != 99:
eyes = none
eyes_prop_ep ='None'
if neya != 99 and neyh != 99:
eyes = none
eyes_prop_ep ='None'
seed(j)
k=randint(0,1000000)
if k > 975000:
nose = NOSE_2
nose_ep = 'Clown Nose'
tuctuc = 99
else:
nose = none
nose_ep = 'None'
if tactac == 99 and tuctuc == 99:
mouth_prop = none
mouth_prop_ep = 'None'
seed(k)
l=randint(0,1000000)
if l > 970000:
blemishes = ROSY_2
blemishe_ep = 'Rosy Cheeks'
elif l > 900000:
blemishes = MOLE_2
blemishe_ep = 'Mole'
elif l > 870000:
blemishes = SCARE_2
blemishe_ep = 'Scare'
else:
blemishes = none
blemishe_ep = 'None'
seed(l)
m=randint(0,1000000)
if m > 930000:
LI1 = nr
mouth_ep = 'Black Lipstick'
elif m > 860000:
LI1 = (255,0,0)
mouth_ep = 'Hot Lipstick'
elif m > 790000:
LI1 = (208,82,203)
mouth_ep = 'Purple Lipstick'
elif m > 720000:
LI1 = (214,92,26)
mouth_ep = 'Orange Lipstick'
else:
mouth = none
mouth_ep = 'None'
seed(m)
n=randint(0,1000000)
if n > 900000:
neck = GoldChain_3
neck_ep = 'Gold Chain'
elif n > 820000:
neck = SilverChain_3
neck_ep = 'Silver Chain'
elif n > 800000:
neck = RING_3
neck_ep = 'Ring Onchain'
else:
neck = none
neck_ep = 'None'
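# 24x24 base sprite for the female Halfling; the LI1 cells on the mouth row
# take the rolled lipstick colour.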
HALFINE=[
[FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR1,FR1,FR1,FR1,FR1,FR1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,SK1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SC1,SC1,SK1,SK1,SK1,SC1,SC1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,EY1,FR1,SK1,SK1,SK1,EY1,FR1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,FR1,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,SK1,FR1,SK1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,LI1,LI1,LI1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,SK1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR1,FR1,FR1,FR1,FR1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR1,SK1,SK1,SK1,FR1,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2]
]
pixels = HALFINE
elif b > 750000:
race_ep = 'Men'
type_ep = 'Male'
hair_color_ep = 'None'
haircut_ep = 'None'
hair_prop_ep = 'None'
eyes_prop_ep = 'None'
blemishe_ep = 'None'
eyes_color_ep = 'None'
facial_hair_ep = 'None'
mouth_prop_ep = 'None'
mouth_ep = 'None'
tooth_color_ep = 'None'
nose_ep = 'None'
neck_ep = 'None'
ears_ep = 'None'
skin_ep = 'None'
ears = none
hair = none
hair_prop = none
neck = none
blemishes = none
mouth = none
facial_hair = none
rod = none
mouth_prop = none
#eye color
eyes = none
nose = none
BE6 = (40,27,9)
seed(b)
c=randint(0,1000000)
if c > 800000:
SK1 = (234,217,217)
SC1 = (165,141,141)
EY1 = (201,178,178)
SK2 = (255,255,255)
HRG3 = (220,222,234)
HRG2 = (183,179,191)
HRG4 = (203,200,212)
HRG5 = (87,101,113)
HRG1 = (0,0,0)
RC1 = (226,187,185)
BE5 = (163,151,131)
MO1 = EY1
SCR1 = EY1
skin_ep = 'Albino'
elif c > 500000:
SK1 = (219,177,128)
SC1 = (166,110,44)
EY1 = (210,157,96)
SK2 = (235,203,166)
HRG3 = (213,200,183)
HRG2 = (184,163,135)
HRG4 = (209,189,164)
HRG5 = (87,101,113)
HRG1 = (0,0,0)
RC1 = (215,154,104)
BE5 = (153,124,89)
MO1 = EY1
SCR1 = EY1
skin_ep = 'Light'
elif c > 200000:
SK1 = (174,139,97)
SC1 = (134,88,30)
EY1 = (167,124,71)
SK2 = (178,138,93)
HRG3 = (188,179,165)
HRG2 = (166,150,128)
HRG4 = (184,171,151)
HRG5 = (87,101,113)
HRG1 = (0,0,0)
RC1 = (191,105,71)
BE5 = (121,97,68)
MO1 = EY1
SCR1 = EY1
skin_ep = 'Mid'
else:
SK1 = (113,63,29)
SC1 = (86,39,10)
EY1 = (114,55,17)
SK2 = (146,79,35)
HRG3 = (155,135,127)
HRG2 = (139,121,111)
HRG4 = (156,131,115)
HRG5 = (87,101,113)
HRG1 = (0,0,0)
RC1 = (142,36,2)
BE5 = (79,44,20)
MO1 = EY1
SCR1 = EY1
skin_ep = 'Dark'
seed(c)
d=randint(0,1000000)
if d > 750000:
ears = EARS_2
ears_ep = 'Earring'
else:
ears = none
ears_ep = 'None'
seed(d)
e=randint(0,1000000)
HR0 = (255,193,0)
HR2 = (251,114,7)
HR3 = (210,154,0)
HR4 = (166,165,165)
HR5 = (165,108,0)
HR6 = (111,0,48)
HR7 = (85,57,23)
red = (255,0,0)
if e > 875000:
HR1 = HR0
HR2 = red
hair_color_ep ='Blonde'
elif e > 750000:
HR1 = nr
HR2 = red
hair_color_ep ='Black'
elif e > 625000:
HR1 = HR2
HR2 = red
hair_color_ep ='Orange'
elif e > 500000:
HR1 = HR3
HR2 = red
hair_color_ep ='Fair'
elif e > 375000:
HR1 = HR4
HR2 = red
hair_color_ep ='Grey'
elif e > 250000:
HR1 = HR5
HR2 = red
hair_color_ep ='Ginger'
elif e > 125000:
HR1 = HR6
HR2 = red
hair_color_ep ='Black Rose'
else:
HR1 = HR7
HR2 = red
hair_color_ep ='Brown'
MAN_HR1=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,HR1,HR1,0,HR1,HR1,HR1,HR1,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,0,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,HR1,HR1,0,HR1,0,HR1,0,HR1,HR1,HR1,HR1,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,HR1,0,0,HR1,0,0,HR1,0,HR1,HR1,HR1,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,0,0,HR1,0,0,0,HR1,0,HR1,HR1,HR1,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,HR1,0,HR1,HR1,HR1,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,0,0,HR1,HR1,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,0,0,HR1,HR1,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,0,0,0,HR1,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,0,0,0,HR1,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,0,0,HR1,HR1,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,0,0,HR1,HR1,0,0,0,0],
[0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0],
[0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0],
[0,0,0,0,HR1,0,HR1,HR1,0,0,0,0,0,0,0,0,0,0,HR1,HR1,0,HR1,0,0],
[0,0,0,0,0,0,HR1,0,0,0,0,0,0,0,0,0,0,0,0,HR1,0,0,0,0],
[0,0,0,0,0,0,HR1,0,0,0,0,0,0,0,0,0,0,0,0,HR1,0,0,0,0],
[0,0,0,0,0,HR1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,HR1,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
MAN_HR2=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,HR1,HR1,HR1,HR1,HR1,HR1,HR1,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,0,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,HR1,HR1,HR1,HR1,HR1,HR1,HR1,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,HR1,HR1,HR1,HR1,HR1,HR1,HR1,0,0,0,0,0],
[0,0,0,0,0,HR1,HR1,HR1,HR1,0,0,0,0,HR1,HR1,HR1,HR1,HR1,HR1,HR1,0,0,0,0],
[0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0,0,0,HR1,HR1,HR1,HR1,HR1,HR1,0,0,0],
[0,0,0,0,HR1,HR1,HR1,HR1,0,0,0,0,0,0,0,0,HR1,HR1,HR1,HR1,HR1,0,0,0],
[0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,0,HR1,HR1,HR1,HR1,0,0,0],
[0,0,0,0,HR1,HR1,HR1,HR1,0,0,0,0,0,0,0,0,0,HR1,HR1,HR1,HR1,HR1,0,0],
[0,0,0,0,HR1,HR1,HR1,HR1,0,0,0,0,0,0,0,0,0,HR1,HR1,HR1,HR1,HR1,0,0],
[0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,0,HR1,HR1,HR1,HR1,HR1,0,0],
[0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,0,HR1,HR1,HR1,HR1,HR1,0,0],
[0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0,0,0,0,0,HR1,HR1,HR1,HR1,HR1,0,0],
[0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0,0,0,0,0,HR1,HR1,HR1,HR1,HR1,0,0],
[0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0,0,0,0,0,HR1,HR1,HR1,HR1,0,0,0],
[0,0,0,0,HR1,HR1,HR1,0,0,0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0],
[0,0,0,0,0,0,HR1,0,0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
MAN_HR3=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,HR1,HR1,HR1,HR1,HR1,HR1,HR1,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,0,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,0,HR1,HR1,HR1,HR1,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,HR1,HR1,HR1,0,0,0,HR1,0,HR1,HR1,HR1,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0,HR1,0,0,HR1,HR1,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,HR1,0,HR1,HR1,HR1,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0],
[0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0],
[0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0],
[0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0],
[0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0],
[0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0],
[0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0],
[0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0,0,0,0,0,0,HR1,HR1,HR1,HR1,0,0],
[0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0,0,0,0,0,0,HR1,HR1,HR1,HR1,0,0],
[0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,0,0,HR1,HR1,HR1,HR1,0,0],
[0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,0,0,0,HR1,HR1,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
MAN_HR4=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,HR1,HR1,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,0,HR1,HR1,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,0,HR1,HR1,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,0,HR1,HR1,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,0,HR1,0,HR1,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,0,0,0,HR1,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,0,0,0,HR1,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,0,0,HR1,HR1,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,0,0,HR1,HR1,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,0,0,HR1,HR1,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,0,0,HR1,HR1,0,0,0,0],
[0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,0,0,0,0,HR1,HR1,0,0,0],
[0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,0,0,0,0,HR1,HR1,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
MAN_HR5=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,HR1,HR1,HR1,HR1,HR1,HR1,HR1,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,0,0,0,0,0,0],
[0,0,0,0,0,0,0,HR1,HR1,0,HR1,0,0,0,0,HR1,HR1,HR1,HR1,0,0,0,0,0],
[0,0,0,0,0,0,0,HR1,0,0,HR1,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0],
[0,0,0,0,0,0,0,HR1,0,0,0,HR1,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0],
[0,0,0,0,0,0,0,HR1,0,0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0],
[0,0,0,0,0,0,0,HR1,0,0,0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0],
[0,0,0,0,0,0,0,HR1,0,0,0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0],
[0,0,0,0,0,0,0,HR1,0,0,0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0],
[0,0,0,0,0,0,0,HR1,0,0,0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0],
[0,0,0,0,0,0,0,HR1,0,0,0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0],
[0,0,0,0,0,0,0,HR1,0,0,0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0],
[0,0,0,0,0,0,0,HR1,0,0,0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0],
[0,0,0,0,0,0,0,HR1,0,0,0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0],
[0,0,0,0,0,0,0,HR1,0,0,0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0],
[0,0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0],
[0,0,0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
seed(e)
f=randint(0,1000000)
if f > 800000:
hair = MAN_HR1
haircut_ep = 'Grunge Hair'
elif f > 600000:
hair = MAN_HR2
haircut_ep = 'Prince Hair'
elif f > 400000:
hair = MAN_HR3
haircut_ep = 'King Hair'
elif f > 200000:
hair = MAN_HR4
haircut_ep = 'Bald'
else:
hair = MAN_HR5
haircut_ep = 'Straight Hair'
seed(f)
g=randint(0,1000000)
if g > 960000:
hair_prop = CAP_2
hair_prop_ep = 'Cap'
elif g > 940000:
hair_prop = COWBOY_2
hair_prop_ep = 'Cowboy Hat'
elif g > 930000:
hair_prop = TOPHAT_2
hair_prop_ep = 'Top Hat'
elif g > 910000:
hair_prop = Gondor_Crown
hair_prop_ep = 'Men Crown'
elif g > 870000:
hair_prop = KNITTED_2
hair_prop_ep = 'Knitted Cap'
elif g > 820000:
hair_prop = HEADBAND_2
hair_prop_ep = 'Headband'
elif g > 790000:
hair_prop = FORCAP_2
hair_prop_ep = 'Cap Forward'
elif g > 760000:
hair_prop = BANDANA_2
hair_prop_ep = 'Bandana'
elif g > 740000:
hair_prop = FEDORA_2
hair_prop_ep = 'Fedora'
elif g > 710000:
hair_prop = POLICE_2
hair_prop_ep = 'Police'
elif g > 700000:
hair_prop = BEANI_2
hair_prop_ep = 'Beanie'
else:
hair_prop = none
hair_prop_ep = 'None'
seed(g)
h=randint(0,1000000)
if h > 900000:
neck = GoldChain_1
neck_ep = 'Gold Chain'
elif h > 820000:
neck = SilverChain_1
neck_ep = 'Silver Chain'
elif h > 800000:
neck = RING_1
neck_ep = 'Ring Onchain'
elif h > 780000:
neck = BROCHE_1
neck_ep = 'Brooch'
else:
neck = none
neck_ep = 'None'
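# Stubble overlay: BE5 is the tone-matched beard shadow, BE6 the darker chin pixels.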
ShadowBeard=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,BE5,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,BE5,0,0,0,0,0,0,BE5,BE5,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,BE5,BE5,BE5,BE5,BE5,BE5,BE5,BE5,BE5,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,BE5,BE5,BE6,BE6,BE6,BE5,BE5,BE5,BE5,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,BE5,BE5,BE5,BE5,BE5,BE5,BE5,BE5,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,BE5,BE5,BE5,BE5,BE5,BE5,BE5,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,BE5,BE5,BE5,BE5,BE5,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
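# Facial-hair table: twelve styles in 5% steps from 950000 down; at or below
# 400000 the face stays clean-shaven.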
seed(h)
i=randint(0,1000000)
if i > 950000:
facial_hair = BigBeard
facial_hair_ep = 'Big Beard'
elif i > 900000:
facial_hair = Muttonchops
facial_hair_ep = 'Muttonchops'
elif i > 850000:
facial_hair = Mustache
facial_hair_ep = 'Mustache'
elif i > 800000:
facial_hair = Handlebars
facial_hair_ep = 'Handlebars'
elif i > 750000:
facial_hair = FrontBeardDark
facial_hair_ep = 'Front Beard Dark'
elif i > 700000:
facial_hair = FrontBeard
facial_hair_ep = 'Front Beard'
elif i > 650000:
facial_hair = NormalBeard
facial_hair_ep = 'Normal Beard'
elif i > 600000:
facial_hair = NormalBeardBlack
facial_hair_ep = 'Normal Beard Black'
elif i > 550000:
facial_hair = LuxuriousBeard
facial_hair_ep = 'Luxurious Beard'
elif i > 500000:
facial_hair = Goat
facial_hair_ep = 'Goat'
elif i > 450000:
facial_hair = Chinstrap
facial_hair_ep = 'Chinstrap'
elif i > 400000:
facial_hair = ShadowBeard
facial_hair_ep = 'Shadow Beard'
else:
facial_hair = none
facial_hair_ep = 'None'
seed(i)
j=randint(0,1000000)
if j > 900000:
mouth_prop = CIGARETTE
mouth_prop_ep = 'Cigarette'
elif j > 880000:
mouth_prop = MASK_1
mouth_prop_ep = 'Medical Mask'
facial_hair = none
elif j > 820000:
mouth_prop = PIPE
mouth_prop_ep = 'Pipe'
elif j > 780000:
mouth_prop = VAPE
mouth_prop_ep = 'Vape'
else:
mouth_prop = none
mouth_prop_ep = 'None'
# seed(h)
# i=randint(0,1000000)
# if i > 300000:
# EY1 = (255,255,255)
# elif i > 50000:
# EY1 = (0,0,255)
# else:
# EY1 = (0,255,0)
seed(j)
k=randint(0,1000000)
if k > 970000:
eyes = TD_2
eyes_prop_ep ='3D Glasses'
elif k > 930000:
eyes = VR_2
eyes_prop_ep ='VR'
elif k > 880000:
eyes = ClassicShades_2
eyes_prop_ep ='Classic Shades'
elif k > 830000:
eyes = SmallShades_2
eyes_prop_ep ='Small Shades'
hair = MAN_HR3
haircut_ep = 'King Hair'
elif k > 780000:
eyes = EyePatch_2
eyes_prop_ep ='Eye Patch'
elif k > 730000:
eyes = NerdGlasses_2
eyes_prop_ep ='Nerd Glasses'
elif k > 680000:
eyes = BigShades_2
eyes_prop_ep ='Big Shades'
elif k > 650000:
eyes = EyeMask_2
eyes_prop_ep ='Eye Mask'
elif k > 600000:
eyes = HornedRimGlasses_2
eyes_prop_ep ='Horned Rim Glasses'
elif k > 550000:
eyes = RegularShades_2
eyes_prop_ep ='Regular Shades'
else:
eyes=none
eyes_prop_ep ='None'
seed(k)
l=randint(0,1000000)
if l > 975000:
nose = NOSE_1
nose_ep = 'Clown Nose'
else:
nose = none
nose_ep = 'None'
seed(l)
m=randint(0,1000000)
if m > 975000:
mouth = SMILE
mouth_ep = 'Smile'
elif m > 950000:
mouth = FROWN
mouth_ep = 'Frown'
else:
mouth = none
mouth_ep = 'None'
MOLE=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,MO1,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
SCARE_1=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,SCR1,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,SCR1,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,SCR1,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,SCR1,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,SCR1,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
ROSY_1=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,RC1,0,0,0,0,RC1,RC1,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,RC1,0,0,0,0,0,RC1,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
seed(m)
n=randint(0,1000000)
if n > 970000:
blemishes = ROSY_1
blemishe_ep = 'Rosy Cheeks'
elif n > 900000:
blemishes = MOLE
blemishe_ep = 'Mole'
elif n > 870000:
blemishes = SCARE_1
blemishe_ep = 'Scare'
else:
blemishes = none
blemishe_ep = 'None'
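# 24x24 base sprite for the male Man; same palette keys as the Halfling base.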
MAN=[
[FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR1,FR1,FR1,FR1,FR1,FR1,FR1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SC1,SC1,SK1,SK1,SK1,SC1,SC1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,EY1,FR1,SK1,SK1,SK1,EY1,FR1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,FR1,FR1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,FR1,FR1,SK1,SK1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,FR1,FR1,FR1,SK1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR1,FR1,FR1,FR1,FR1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR1,SK1,SK1,SK1,FR1,FR2,FR2,FR2,FR2,FR2,FR2]
]
pixels = MAN
elif b > 600000:
race_ep = 'Men'
type_ep = 'Female'
hair_color_ep = 'None'
haircut_ep = 'None'
hair_prop_ep = 'None'
eyes_prop_ep = 'None'
blemishe_ep = 'None'
eyes_color_ep = 'None'
facial_hair_ep = 'None'
mouth_prop_ep = 'None'
mouth_ep = 'None'
tooth_color_ep = 'None'
nose_ep = 'None'
neck_ep = 'None'
ears_ep = 'None'
skin_ep = 'None'
ears = none
hair = none
hair_prop = none
neck = none
blemishes = none
#tooth color
mouth = none
facial_hair = none
rod = none
mouth_prop = none
#eye color
eyes = none
nose = none
seed(b)
c=randint(0,1000000)
if c > 800000:
SK1 = (234,217,217)
SC1 = (165,141,141)
EY1 = (201,178,178)
SK2 = (255,255,255)
HRG3 = (220,222,234)
HRG2 = (183,179,191)
HRG4 = (203,200,212)
HRG5 = (87,101,113)
HRG1 = (0,0,0)
RC1 = (226,187,185)
LI1 = (113,28,17)
MO1 = EY1
SCR1 = EY1
skin_ep = 'Albino'
elif c > 500000:
SK1 = (219,177,128)
SC1 = (166,110,44)
EY1 = (210,157,96)
SK2 = (235,203,166)
HRG3 = (213,200,183)
HRG2 = (184,163,135)
HRG4 = (209,189,164)
HRG5 = (87,101,113)
HRG1 = (0,0,0)
RC1 = (215,154,104)
LI1 = (113,28,17)
MO1 = EY1
SCR1 = EY1
skin_ep = 'Light'
elif c > 200000:
SK1 = (174,139,97)
SC1 = (134,88,30)
EY1 = (167,124,71)
SK2 = (178,138,93)
HRG3 = (188,179,165)
HRG2 = (166,150,128)
HRG4 = (184,171,151)
HRG5 = (87,101,113)
HRG1 = (0,0,0)
RC1 = (191,105,71)
LI1 = (95,29,13)
MO1 = EY1
SCR1 = EY1
skin_ep = 'Mid'
else:
SK1 = (113,63,29)
SC1 = (86,39,10)
EY1 = (114,55,17)
SK2 = (146,79,35)
HRG3 = (155,135,127)
HRG2 = (139,121,111)
HRG4 = (156,131,115)
HRG5 = (87,101,113)
HRG1 = (0,0,0)
RC1 = (142,36,2)
LI1 = (74,18,8)
MO1 = EY1
SCR1 = EY1
skin_ep = 'Dark'
seed(c)
d=randint(0,1000000)
if d > 750000:
ears = EARS_3
ears_ep = 'Earring'
else:
ears = none
ears_ep = 'None'
seed(d)
e=randint(0,1000000)
HR0 = (255,193,0)
HR2 = (251,114,7)
HR3 = (210,154,0)
HR4 = (166,165,165)
HR5 = (165,108,0)
HR6 = (111,0,48)
HR7 = (85,57,23)
red = (255,0,0)
if e > 875000:
HR1 = HR0
HR2 = red
hair_color_ep ='Blonde'
elif e > 750000:
HR1 = nr
HR2 = red
hair_color_ep ='Black'
elif e > 625000:
HR1 = HR2
HR2 = red
hair_color_ep ='Orange'
elif e > 500000:
HR1 = HR3
HR2 = red
hair_color_ep ='Fair'
elif e > 375000:
HR1 = HR4
HR2 = red
hair_color_ep ='Grey'
elif e > 250000:
HR1 = HR5
HR2 = red
hair_color_ep ='Ginger'
elif e > 125000:
HR1 = HR6
HR2 = red
hair_color_ep ='Black Rose'
else:
HR1 = HR7
HR2 = red
hair_color_ep ='Brown'
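# Hair overlays for the female Man sprite; HR1 recolouring as above.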
WOMAN_HR1=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,HR1,HR1,HR1,HR1,HR1,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,HR1,HR1,HR1,HR1,HR1,HR1,HR1,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,0,0,0,0,0,0],
[0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,HR1,HR1,HR1,HR1,0,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0,0,HR1,HR1,0,HR1,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,HR1,0,HR1,0,0,0,0,0],
[0,0,0,0,0,HR1,0,HR1,0,0,0,0,0,0,0,0,HR1,HR1,0,HR1,0,0,0,0],
[0,0,0,0,0,HR1,0,HR1,0,0,0,0,0,0,0,0,HR1,HR1,0,HR1,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,HR1,0,HR1,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,HR1,0,HR1,0,0,0,0,0],
[0,0,0,0,0,HR1,0,HR1,0,0,0,0,0,0,0,0,HR1,HR1,0,HR1,0,0,0,0],
[0,0,0,0,0,HR1,0,HR1,0,0,0,0,0,0,0,0,HR1,HR1,0,HR1,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0,0,0,HR1,0,HR1,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0,0,0,HR1,0,HR1,0,0,0,0,0],
[0,0,0,0,0,HR1,0,HR1,HR1,0,0,0,0,0,0,0,HR1,HR1,0,HR1,0,0,0,0],
[0,0,0,0,0,HR1,0,HR1,HR1,HR1,0,0,0,0,0,0,HR1,HR1,0,HR1,0,0,0,0],
[0,0,0,0,0,0,HR1,0,HR1,HR1,HR1,0,0,0,0,0,HR1,0,HR1,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
WOMAN_HR2=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,HR1,HR1,HR1,HR1,HR1,HR1,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,0,0,0,0,0,0],
[0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,HR1,HR1,HR1,HR1,0,0,0,0,0,0],
[0,0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0,0],
[0,0,0,0,0,0,0,HR1,0,0,0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,HR1,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,HR1,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,HR1,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,HR1,HR1,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,HR1,HR1,HR1,HR1,HR1,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,HR1,HR1,HR1,HR1,HR1,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,HR1,HR1,HR1,HR1,HR1,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
WOMAN_HR3=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,HR1,HR1,HR1,HR1,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,HR1,0,HR1,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,HR1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,HR1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,HR1,0,0,0,0,0,0,0],
[0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0,0,0,0,HR1,0,0,0,0,0,0,0],
[0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,HR1,HR1,HR1,HR1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,HR1,HR1,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
WOMAN_HR4=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,HR1,HR1,HR1,HR1,HR1,HR1,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,0,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,0,HR1,HR1,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,0,HR1,HR1,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,0,HR1,HR1,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0],
[0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0],
[0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0],
[0,0,0,0,HR1,HR1,HR1,HR1,0,0,0,0,0,0,0,HR1,HR1,HR1,HR1,HR1,0,0,0,0],
[0,0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,HR1,HR1,HR1,HR1,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
WOMAN_HR5=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,HR1,HR1,HR1,HR1,HR1,HR1,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,HR1,HR1,HR1,HR1,0,0,0,0,0,0],
[0,0,0,0,0,0,0,HR1,0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0,0],
[0,0,0,0,0,0,0,HR1,0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0,0],
[0,0,0,0,0,0,0,HR1,0,0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0],
[0,0,0,0,0,0,0,HR1,0,0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0],
[0,0,0,0,0,0,0,HR1,0,0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0],
[0,0,0,0,0,0,0,HR1,0,0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0],
[0,0,0,0,0,0,0,HR1,0,0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0],
[0,0,0,0,0,0,0,HR1,0,0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0],
[0,0,0,0,0,0,0,HR1,0,0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0],
[0,0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0],
[0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0],
[0,0,0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,HR1,HR1,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
MOLE_2=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,MO1,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
SCARE_2=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,SCR1,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,SCR1,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,SCR1,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,SCR1,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
ROSY_2=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,RC1,RC1,0,0,0,RC1,RC1,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
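# Haircut roll: five styles at equal (20%) odds each.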
seed(e)
f=randint(0,1000000)
if f > 800000:
hair = WOMAN_HR1
haircut_ep = 'Curly Hair'
elif f > 600000:
hair = WOMAN_HR2
haircut_ep = 'Right Side Hair'
elif f > 400000:
hair = WOMAN_HR3
haircut_ep = 'Left Side Hair'
elif f > 200000:
hair = WOMAN_HR4
haircut_ep = 'The Bob'
else:
hair = WOMAN_HR5
haircut_ep = 'Straight Hair'
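# Headwear roll: rare hats occupy narrow bands at the top of the range,
# falling through to 'None' for the majority of rolls.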
seed(f)
g=randint(0,1000000)
if g > 960000:
hair_prop = CAP_4
hair_prop_ep = 'Cap'
elif g > 950000:
hair_prop = TIARA_2
hair_prop_ep = 'Tiara'
titi = 99
elif g > 930000:
hair_prop = MILICAP_2
hair_prop_ep = 'Punk Hat'
elif g > 890000:
hair_prop = KNITTED_4
hair_prop_ep = 'Knitted Cap'
elif g > 850000:
hair_prop = HEADBAND_4
hair_prop_ep = 'Headband'
elif g > 840000:
hair = none
hair_prop = PILOT_2
hair_prop_ep = 'Pilot Helmet'
titi = 99
elif g > 810000:
hair_prop = BANDANA_4
hair_prop_ep = 'Bandana'
elif g > 750000:
hair_prop = Wo_Crown
hair_prop_ep = 'Circlet'
else:
hair_prop = none
hair_prop_ep = 'None'
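# Eye shadow recolours the eye (EY1) and socket (SC1) pixels of the base grid.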
seed(g)
h=randint(0,1000000)
if h > 900000:
EY1 = (110,152,77)
SC1 = (92,133,57)
eyes_color_ep = 'Green Eye Shadow'
elif h > 800000:
EY1 = (93,121,117)
SC1 = (80,106,101)
eyes_color_ep = 'Blue Eye Shadow'
elif h > 700000:
EY1 = (176,61,133)
SC1 = (164,55,117)
eyes_color_ep = 'Purple Eye Shadow'
elif h > 600000:
EY1 = (214,92,26)
SC1 = (194,79,17)
eyes_color_ep = 'Orange Eye Shadow'
else:
eyes_color_ep = 'None'
neyu = 99
seed(h)
i=randint(0,1000000)
if i > 900000:
mouth_prop = CIGARETTE
mouth_prop_ep = 'Cigarette'
elif i > 880000:
mouth_prop = MASK_2
mouth_prop_ep = 'Medical Mask'
tactac = 99
elif i > 820000:
mouth_prop = PIPE
mouth_prop_ep = 'Pipe'
elif i > 780000:
mouth_prop = VAPE
mouth_prop_ep = 'Vape'
else:
mouth_prop = none
mouth_prop_ep = 'None'
seed(i)
j=randint(0,1000000)
if j > 970000:
eyes = TD_3
eyes_prop_ep ='3D Glasses'
elif j > 930000:
eyes = VR_3
eyes_prop_ep ='VR'
elif j > 880000:
eyes = ClassicShades_4
eyes_prop_ep ='Classic Shades'
elif j > 830000:
eyes = EyePatch_4
eyes_prop_ep ='Eye Patch'
neyw = 99
elif j > 780000:
eyes = NerdGlasses_4
eyes_prop_ep ='Nerd Glasses'
elif j > 730000:
eyes = BigShades_4
eyes_prop_ep ='Big Shades'
elif j > 700000:
eyes = EyeMask_4
eyes_prop_ep ='Eye Mask'
neyw = 99
elif j > 650000:
eyes = HornedRimGlasses_4
eyes_prop_ep ='Horned Rim Glasses'
elif j > 600000:
eyes = RegularShades_4
eyes_prop_ep ='Regular Shades'
elif j > 590000:
eyes = GOGOLES_2
eyes_prop_ep ='Welding Goggles'
hair_prop = none
hair_prop_ep = 'None'
tata = 99
else:
eyes=none
eyes_prop_ep ='None'
neyw = 99
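# Compatibility checks. The sentinel flags (titi, tata, neyu, neyw) are
# presumably initialised earlier in the script; titi marks a tiara or pilot
# helmet and tata welding goggles, so the first test drops any other eye
# prop under those hats. The second test appears to drop glasses-style
# props once eye shadow has been applied.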
if titi == 99 and tata != 99:
eyes = none
eyes_prop_ep ='None'
if neyu != 99 and neyw !=99:
eyes = none
eyes_prop_ep ='None'
seed(j)
k=randint(0,1000000)
if k > 975000:
nose = NOSE_2
nose_ep = 'Clown Nose'
tuctuc = 99
else:
nose = none
nose_ep = 'None'
if tactac == 99 and tuctuc == 99:
mouth_prop = none
mouth_prop_ep = 'None'
seed(k)
l=randint(0,1000000)
if l > 970000:
blemishes = ROSY_2
blemishe_ep = 'Rosy Cheeks'
elif l > 900000:
blemishes = MOLE_2
blemishe_ep = 'Mole'
elif l > 870000:
blemishes = SCARE_2
blemishe_ep = 'Scare'
else:
blemishes = none
blemishe_ep = 'None'
seed(l)
m=randint(0,1000000)
if m > 930000:
LI1 = nr
mouth_ep = 'Black Lipstick'
elif m > 860000:
LI1 = (255,0,0)
mouth_ep = 'Hot Lipstick'
elif m > 790000:
LI1 = (208,82,203)
mouth_ep = 'Purple Lipstick'
elif m > 720000:
LI1 = (214,92,26)
mouth_ep = 'Orange Lipstick'
else:
mouth = none
mouth_ep = 'None'
seed(m)
n=randint(0,1000000)
if n > 900000:
neck = GoldChain_3
neck_ep = 'Gold Chain'
elif n > 820000:
neck = SilverChain_3
neck_ep = 'Silver Chain'
elif n > 800000:
neck = RING_3
neck_ep = 'Ring Onchain'
elif n > 790000:
neck = CHOKER
neck_ep = 'Choker'
elif n > 770000:
neck = BROCHE_3
neck_ep = 'Brooch'
else:
neck = none
neck_ep = 'None'
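# Base 24x24 portrait: FR2 = frame, BG1 = background, FR1 = outline,
# SK1 = skin, SC1 = eye sockets, EY1 = eyes, LI1 = lips.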
WOMAN=[
[FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR1,FR1,FR1,FR1,FR1,FR1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,SK1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SC1,SC1,SK1,SK1,SK1,SC1,SC1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,EY1,FR1,SK1,SK1,SK1,EY1,FR1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,FR1,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,SK1,FR1,SK1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,LI1,LI1,LI1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,SK1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,SK1,FR1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR1,FR1,FR1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR1,SK1,SK1,SK1,FR1,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2]
]
pixels = WOMAN
elif b > 535000:
race_ep = 'Elves'
type_ep = 'Male'
hair_color_ep = 'None'
haircut_ep = 'None'
hair_prop_ep = 'None'
eyes_prop_ep = 'None'
blemishe_ep = 'None'
eyes_color_ep = 'None'
facial_hair_ep = 'None'
mouth_prop_ep = 'None'
mouth_ep = 'None'
tooth_color_ep = 'None'
nose_ep = 'None'
neck_ep = 'None'
ears_ep = 'None'
skin_ep = 'None'
ears = none
hair = none
hair_prop = none
neck = none
blemishes = none
#tooth color
mouth = none
facial_hair = none
rod = none
mouth_prop = none
#eye color
eyes = none
nose = none
seed(b)
c=randint(0,1000000)
if c > 800000:
SK1 = (234,217,217)
SC1 = (165,141,141)
EY1 = (201,178,178)
SK2 = (255,255,255)
HRG3 = (220,222,234)
HRG2 = (183,179,191)
HRG4 = (203,200,212)
HRG5 = (87,101,113)
HRG1 = (0,0,0)
RC1 = (226,187,185)
MO1 = EY1
SCR1 = EY1
skin_ep = 'Albino'
elif c > 500000:
SK1 = (219,177,128)
SC1 = (166,110,44)
EY1 = (210,157,96)
SK2 = (235,203,166)
HRG3 = (213,200,183)
HRG2 = (184,163,135)
HRG4 = (209,189,164)
HRG5 = (87,101,113)
HRG1 = (0,0,0)
RC1 = (215,154,104)
MO1 = EY1
SCR1 = EY1
skin_ep = 'Light'
elif c > 200000:
SK1 = (174,139,97)
SC1 = (134,88,30)
EY1 = (167,124,71)
SK2 = (178,138,93)
HRG3 = (188,179,165)
HRG2 = (166,150,128)
HRG4 = (184,171,151)
HRG5 = (87,101,113)
HRG1 = (0,0,0)
RC1 = (191,105,71)
MO1 = EY1
SCR1 = EY1
skin_ep = 'Mid'
else:
SK1 = (113,63,29)
SC1 = (86,39,10)
EY1 = (114,55,17)
SK2 = (146,79,35)
HRG3 = (155,135,127)
HRG2 = (139,121,111)
HRG4 = (156,131,115)
HRG5 = (87,101,113)
HRG1 = (0,0,0)
RC1 = (142,36,2)
MO1 = EY1
SCR1 = EY1
skin_ep = 'Dark'
seed(c)
d=randint(0,1000000)
if d > 750000:
ears = EARS_1
ears_ep = 'Earring'
else:
ears = none
ears_ep = 'None'
seed(d)
e=randint(0,1000000)
HR0 = (255,227,72)
HR2 = (255,255,153)
HR3 = (165,108,0)
HR4 = (61,35,32)
HR5 = (111,0,48)
HR6 = (255,0,0)
if e > 850000:
HR1 = HR0
hair_color_ep ='Blond'
elif e > 700000:
HR1 = HR2
hair_color_ep ='Butter'
elif e > 650000:
HR1 = HR3
hair_color_ep ='Ginger'
elif e > 500000:
HR1 = HR4
hair_color_ep ='Brown'
elif e > 350000:
HR1 = HR5
hair_color_ep ='Black Rose'
elif e > 200000:
HR1 = nr
hair_color_ep='Black'
else:
HR1 = HR6
hair_color_ep ='Red'
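# 24x24 hair overlays for male elves; 0 = transparent.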
ELF_HR1=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,HR1,HR1,HR1,HR1,HR1,HR1,HR1,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,0,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,0,HR1,HR1,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,0,HR1,HR1,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,HR1,HR1,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0],
[0,0,0,0,0,0,HR1,0,0,0,0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0],
[0,0,0,0,0,0,HR1,0,0,0,0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0],
[0,0,0,0,0,0,HR1,0,0,0,0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0],
[0,0,0,0,0,0,HR1,0,0,0,0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0],
[0,0,0,0,0,0,HR1,0,0,0,0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0],
[0,0,0,0,0,0,HR1,0,0,0,0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0],
[0,0,0,0,0,0,HR1,0,0,0,0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0],
[0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
ELF_HR2=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,HR1,HR1,HR1,HR1,HR1,HR1,HR1,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,0,0,0,0,0,0],
[0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0,HR1,HR1,HR1,HR1,0,0,0,0,0],
[0,0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0],
[0,0,0,0,0,0,0,HR1,0,0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,HR1,HR1,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,HR1,HR1,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,HR1,HR1,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,HR1,0,HR1,HR1,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,HR1,0,0,HR1,HR1,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,HR1,HR1,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,HR1,HR1,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,HR1,HR1,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,HR1,HR1,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
ELF_HR3=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,BG1,HR1,HR1,HR1,HR1,BG1,BG1,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,BG1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,BG1,0,0,0,0,0,0],
[0,0,0,0,0,0,0,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,BG1,0,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,HR1,0,HR1,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,0,HR1,0,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,0,HR1,0,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
ELF_HR4=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,HR1,HR1,HR1,HR1,HR1,HR1,HR1,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,0,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,HR1,0,HR1,HR1,HR1,HR1,0,0,HR1,HR1,HR1,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,0,0,0,HR1,HR1,0,0,0,0,HR1,HR1,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,0,HR1,HR1,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,HR1,HR1,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,HR1,HR1,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,HR1,HR1,0,0,0,0,0],
[0,0,0,0,0,0,HR1,0,0,0,0,0,0,0,0,0,0,HR1,HR1,0,0,0,0,0],
[0,0,0,0,0,0,HR1,0,0,0,0,0,0,0,0,0,0,HR1,HR1,0,0,0,0,0],
[0,0,0,0,0,0,HR1,0,0,0,0,0,0,0,0,0,0,HR1,HR1,0,0,0,0,0],
[0,0,0,0,0,0,HR1,0,0,0,0,0,0,0,0,0,0,HR1,HR1,0,0,0,0,0],
[0,0,0,0,0,0,HR1,0,0,0,0,0,0,0,0,0,0,HR1,HR1,0,0,0,0,0],
[0,0,0,0,0,0,HR1,0,0,0,0,0,0,0,0,0,0,HR1,HR1,0,0,0,0,0],
[0,0,0,0,0,0,HR1,0,0,0,0,0,0,0,0,0,0,HR1,HR1,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,0,HR1,HR1,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,0,HR1,HR1,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,0,HR1,HR1,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,0,HR1,HR1,0,0,0,0,0]
]
ELF_HR5=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,HR1,HR1,HR1,HR1,HR1,HR1,HR1,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,0,0,0,0,0,0],
[0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,HR1,HR1,HR1,HR1,0,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0,0,HR1,HR1,HR1,HR1,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0],
[0,0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,HR1,0,0,0,0,0,HR1,0,0,0,0,0,0,0,0]
]
seed(e)
f=randint(0,1000000)
if f > 800000:
hair = ELF_HR1
haircut_ep = 'Straight Hair'
elif f > 600000:
hair = ELF_HR2
haircut_ep = 'Braids'
elif f > 400000:
hair = ELF_HR3
haircut_ep = 'Left Side Hair'
elif f > 200000:
hair = ELF_HR4
haircut_ep = 'Long Hair'
else:
hair = ELF_HR5
haircut_ep = 'Medium Layers'
seed(f)
g=randint(0,1000000)
if g > 960000:
hair_prop = CAP_1
hair_prop_ep = 'Cap'
elif g > 940000:
hair_prop = COWBOY_1
hair_prop_ep = 'Cowboy Hat'
elif g > 910000:
hair_prop = TOPHAT_1
hair_prop_ep = 'Top Hat'
elif g > 870000:
hair_prop = KNITTED_1
hair_prop_ep = 'Knitted Cap'
elif g > 865000:
hair_prop = HEADBAND_1
hair_prop_ep = 'Headband'
hair = ELF_HR1
haircut_ep = 'Straight Hair'
elif g > 850000:
hair_prop = HEADBAND_1
hair_prop_ep = 'Headband'
hair = ELF_HR2
haircut_ep = 'Braids'
elif g > 835000:
hair_prop = HEADBAND_1
hair_prop_ep = 'Headband'
hair = ELF_HR4
haircut_ep = 'Long Hair'
elif g > 820000:
hair_prop = HEADBAND_1
hair_prop_ep = 'Headband'
hair = ELF_HR5
haircut_ep = 'Medium Layers'
elif g > 790000:
hair_prop = FORCAP_1
hair_prop_ep = 'Cap Forward'
elif g > 760000:
hair_prop = BANDANA_1
hair_prop_ep = 'Bandana'
elif g > 750000:
hair_prop = Elf_Crown
hair_prop_ep = 'Elfic Crown'
hair = ELF_HR1
haircut_ep = 'Straight Hair'
elif g > 740000:
hair_prop = Elf_Crown
hair_prop_ep = 'Elfic Crown'
hair = ELF_HR2
haircut_ep = 'Braids'
elif g > 730000:
hair_prop = Elf_Crown
hair_prop_ep = 'Elfic Crown'
hair = ELF_HR4
haircut_ep = 'Long Hair'
elif g > 720000:
hair_prop = Elf_Crown
hair_prop_ep = 'Elfic Crown'
hair = ELF_HR5
haircut_ep = 'Medium Layers'
elif g > 700000:
hair_prop = FEDORA_1
hair_prop_ep = 'Fedora'
elif g > 670000:
hair_prop = POLICE_1
hair_prop_ep = 'Police'
elif g > 660000:
hair_prop = BEANI_1
hair_prop_ep = 'Beanie'
else:
hair_prop = none
hair_prop_ep = 'None'
seed(g)
h=randint(0,1000000)
if h > 900000:
neck = GoldChain_1
neck_ep = 'Gold Chain'
elif h > 820000:
neck = SilverChain_1
neck_ep = 'Silver Chain'
elif h > 800000:
neck = RING_1
neck_ep = 'Ring Onchain'
elif h > 780000:
neck = BROCHE_1
neck_ep = 'Brooch'
else:
neck = none
neck_ep = 'None'
seed(h)
i=randint(0,1000000)
if i > 900000:
mouth_prop = CIGARETTE
mouth_prop_ep = 'Cigarette'
elif i > 880000:
mouth_prop = MASK_1
mouth_prop_ep = 'Medical Mask'
elif i > 820000:
mouth_prop = PIPE
mouth_prop_ep = 'Pipe'
elif i > 780000:
mouth_prop = VAPE
mouth_prop_ep = 'Vape'
else:
mouth_prop = none
mouth_prop_ep = 'None'
seed(i)
j=randint(0,1000000)
if j > 970000:
eyes = TD_1
eyes_prop_ep ='3D Glasses'
elif j > 930000:
eyes = VR_1
eyes_prop_ep ='VR'
elif j > 880000:
eyes = ClassicShades_1
eyes_prop_ep ='Classic Shades'
elif j > 830000:
eyes = EyePatch_1
eyes_prop_ep ='Eye Patch'
elif j > 780000:
eyes = NerdGlasses_1
eyes_prop_ep ='Nerd Glasses'
elif j > 730000:
eyes = BigShades_1
eyes_prop_ep ='Big Shades'
elif j > 700000:
eyes = EyeMask_1
eyes_prop_ep ='Eye Mask'
elif j > 650000:
eyes = HornedRimGlasses_1
eyes_prop_ep ='Horned Rim Glasses'
elif j > 600000:
eyes = RegularShades_1
eyes_prop_ep ='Regular Shades'
else:
eyes=none
eyes_prop_ep ='None'
seed(j)
k=randint(0,1000000)
if k > 975000:
nose = NOSE_1
nose_ep = 'Clown Nose'
else:
nose = none
nose_ep = 'None'
seed(k)
l=randint(0,1000000)
if l > 975000:
mouth = SMILE
mouth_ep = 'Smile'
elif l > 950000:
mouth = FROWN
mouth_ep = 'Frown'
else:
mouth = none
mouth_ep = 'None'
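# Blemish overlays; they capture the skin-derived MO1/SCR1/RC1 values
# set in the branch above.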
MOLE=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,MO1,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
SCARE_1=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,SCR1,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,SCR1,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,SCR1,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,SCR1,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,SCR1,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
ROSY_1=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,RC1,0,0,0,0,RC1,RC1,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,RC1,0,0,0,0,0,RC1,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
seed(l)
m=randint(0,1000000)
if m > 970000:
blemishes = ROSY_1
blemishe_ep = 'Rosy Cheeks'
elif m > 900000:
blemishes = MOLE
blemishe_ep = 'Mole'
elif m > 870000:
blemishes = SCARE_1
blemishe_ep = 'Scare'
else:
blemishes = none
blemishe_ep = 'None'
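# Male elf base portrait; the widened outline rows form the pointed ears.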
ELF=[
[FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR1,FR1,FR1,FR1,FR1,FR1,FR1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,FR1,FR1,SC1,SC1,SK1,SK1,SK1,SC1,SC1,SK1,SK1,FR1,FR1,FR1,FR1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,FR1,SK1,FR1,EY1,FR1,SK1,SK1,SK1,EY1,FR1,SK1,SK1,SK1,SK1,SK1,SK1,FR1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,FR1,FR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,FR1,FR1,FR1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,FR1,FR1,SK1,SK1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,FR1,FR1,FR1,SK1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR1,FR1,FR1,FR1,FR1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR1,SK1,SK1,SK1,FR1,FR2,FR2,FR2,FR2,FR2,FR2]
]
pixels = ELF
elif b > 470000:
race_ep = 'Elves'
type_ep = 'Female'
hair_color_ep = 'None'
haircut_ep = 'None'
hair_prop_ep = 'None'
eyes_prop_ep = 'None'
blemishe_ep = 'None'
eyes_color_ep = 'None'
facial_hair_ep = 'None'
mouth_prop_ep = 'None'
mouth_ep = 'None'
tooth_color_ep = 'None'
nose_ep = 'None'
neck_ep = 'None'
ears_ep = 'None'
skin_ep = 'None'
ears = none
hair = none
hair_prop = none
neck = none
blemishes = none
#tooth color
mouth = none
facial_hair = none
rod = none
mouth_prop = none
#eye color
eyes = none
nose = none
seed(b)
c=randint(0,1000000)
if c > 800000:
SK1 = (234,217,217)
SC1 = (165,141,141)
EY1 = (201,178,178)
SK2 = SK1
HRG3 = (220,222,234)
HRG2 = (183,179,191)
HRG4 = (203,200,212)
HRG5 = (87,101,113)
HRG1 = (0,0,0)
RC1 = (226,187,185)
LI1 = (113,28,17)
MO1 = EY1
SCR1 = EY1
skin_ep = 'Albino'
elif c > 500000:
SK1 = (219,177,128)
SC1 = (166,110,44)
EY1 = (210,157,96)
SK2 = SK1
HRG3 = (213,200,183)
HRG2 = (184,163,135)
HRG4 = (209,189,164)
HRG5 = (87,101,113)
HRG1 = (0,0,0)
RC1 = (215,154,104)
LI1 = (113,28,17)
MO1 = EY1
SCR1 = EY1
skin_ep = 'Light'
elif c > 200000:
SK1 = (174,139,97)
SC1 = (134,88,30)
EY1 = (167,124,71)
SK2 = SK1
HRG3 = (188,179,165)
HRG2 = (166,150,128)
HRG4 = (184,171,151)
HRG5 = (87,101,113)
HRG1 = (0,0,0)
RC1 = (191,105,71)
LI1 = (95,29,13)
MO1 = EY1
SCR1 = EY1
skin_ep = 'Mid'
else:
SK1 = (113,63,29)
SC1 = (86,39,10)
EY1 = (114,55,17)
SK2 = SK1
HRG3 = (155,135,127)
HRG2 = (139,121,111)
HRG4 = (156,131,115)
HRG5 = (87,101,113)
HRG1 = (0,0,0)
RC1 = (142,36,2)
LI1 = (74,18,8)
MO1 = EY1
SCR1 = EY1
skin_ep = 'Dark'
seed(c)
d=randint(0,1000000)
if d > 750000:
ears = EARS_3
ears_ep = 'Earring'
else:
ears = none
ears_ep = 'None'
seed(d)
e=randint(0,1000000)
HR0 = (255,227,72)
HR2 = (249,255,0)
HR3 = (165,108,0)
HR4 = (61,35,32)
HR5 = (111,0,48)
HR6 = (255,0,0)
if e > 850000:
HR1 = HR0
hair_color_ep ='Blond'
elif e > 700000:
HR1 = HR2
hair_color_ep ='Butter'
elif e > 650000:
HR1 = HR3
hair_color_ep ='Ginger'
elif e > 500000:
HR1 = HR4
hair_color_ep ='Brown'
elif e > 350000:
HR1 = HR5
hair_color_ep ='Black Rose'
elif e > 200000:
HR1 = nr
hair_color_ep='Black'
else:
HR1 = HR6
hair_color_ep ='Red'
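# Hair overlays for female elves, same 0 = transparent convention.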
ELFE_HR1=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,HR1,HR1,HR1,HR1,HR1,HR1,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,0,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0],
[0,0,0,0,0,0,HR1,0,0,0,0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0],
[0,0,0,0,0,0,HR1,0,0,0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0],
[0,0,0,0,0,0,HR1,0,0,0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0],
[0,0,0,0,0,0,HR1,0,0,0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0],
[0,0,0,0,0,0,HR1,0,0,0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0],
[0,0,0,0,0,0,HR1,0,0,0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,HR1,HR1,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0],
[0,0,0,0,0,0,0,HR1,HR1,HR1,HR1,HR1,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
ELFE_HR2=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,HR1,HR1,HR1,HR1,HR1,HR1,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,HR1,HR1,HR1,HR1,0,0,0,0,0,0],
[0,0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0,0],
[0,0,0,0,0,0,0,HR1,0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,HR1,HR1,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,HR1,HR1,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,HR1,HR1,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,HR1,HR1,HR1,HR1,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,HR1,HR1,0,HR1,HR1,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,HR1,HR1,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,HR1,HR1,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,HR1,HR1,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,HR1,0,0]
]
ELFE_HR3=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,HR1,HR1,HR1,HR1,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,HR1,0,HR1,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,HR1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,HR1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,HR1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,HR1,0,0,0,0,0,0,0],
[0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,HR1,HR1,HR1,HR1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,HR1,HR1,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
ELFE_HR4=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,HR1,HR1,HR1,HR1,HR1,HR1,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,HR1,0,HR1,HR1,HR1,HR1,0,HR1,HR1,HR1,0,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,0,0,0,HR1,HR1,0,0,0,HR1,HR1,0,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0]
]
ELFE_HR5=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,HR1,HR1,HR1,HR1,HR1,HR1,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0],
[0,0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,HR1,HR1,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,HR1,0,0,0,0,HR1,0,0,0,0,0,0,0,0,0]
]
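# MOLE_2/SCARE_2/ROSY_2 are redefined here so the matrices capture this
# branch's MO1, SCR1 and RC1 values (they are baked in at definition time).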
MOLE_2=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,MO1,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
SCARE_2=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,SCR1,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,SCR1,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,SCR1,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,SCR1,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
ROSY_2=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,RC1,RC1,0,0,0,RC1,RC1,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
seed(e)
f=randint(0,1000000)
if f > 800000:
hair = ELFE_HR1
haircut_ep = 'Straight Hair'
elif f > 600000:
hair = ELFE_HR2
haircut_ep = 'Braids'
elif f > 400000:
hair = ELFE_HR3
haircut_ep = 'Left Side Hair'
elif f > 200000:
hair = ELFE_HR4
haircut_ep = 'Long Hair'
else:
hair = ELFE_HR5
haircut_ep = 'Medium Layers'
seed(f)
g=randint(0,1000000)
if g > 900000:
hair_prop = CAP_3
hair_prop_ep = 'Cap'
elif g > 700000:
hair_prop = MILICAP_1
hair_prop_ep = 'Punk Hat'
elif g > 600000:
hair_prop = KNITTED_3
hair_prop_ep = 'Knitted Cap'
elif g > 500000:
hair_prop = HEADBAND_3
hair_prop_ep = 'Headband'
elif g > 400000:
hair = none
hair_prop = PILOT_1
hair_prop_ep = 'Pilot Helmet'
titin = 99
elif g > 300000:
hair_prop = BANDANA_3
hair_prop_ep = 'Bandana'
elif g > 100000:
hair_prop = Elfe_Tiara
hair_prop_ep = 'Elfic Tiara'
else:
hair_prop = none
hair_prop_ep = 'None'
seed(g)
h=randint(0,1000000)
if h > 900000:
EY1 = (110,152,77)
SC1 = (92,133,57)
eyes_color_ep = 'Green Eye Shadow'
elif h > 800000:
EY1 = (93,121,117)
SC1 = (80,106,101)
eyes_color_ep = 'Blue Eye Shadow'
elif h > 700000:
EY1 = (176,61,133)
SC1 = (164,55,117)
eyes_color_ep = 'Purple Eye Shadow'
elif h > 600000:
EY1 = (214,92,26)
SC1 = (194,79,17)
eyes_color_ep = 'Orange Eye Shadow'
else:
eyes_color_ep = 'None'
neyo = 99
seed(h)
i=randint(0,1000000)
if i > 900000:
mouth_prop = CIGARETTE
mouth_prop_ep = 'Cigarette'
elif i > 880000:
mouth_prop = MASK_2
mouth_prop_ep = 'Medical Mask'
tactac = 99
elif i > 820000:
mouth_prop = PIPE
mouth_prop_ep = 'Pipe'
elif i > 780000:
mouth_prop = VAPE
mouth_prop_ep = 'Vape'
else:
mouth_prop = none
mouth_prop_ep = 'None'
seed(i)
j=randint(0,1000000)
if j > 970000:
eyes = TD_3
eyes_prop_ep ='3D Glasses'
elif j > 930000:
eyes = VR_3
eyes_prop_ep ='VR'
elif j > 880000:
eyes = ClassicShades_3
eyes_prop_ep ='Classic Shades'
elif j > 830000:
eyes = EyePatch_3
eyes_prop_ep ='Eye Patch'
neye = 99
elif j > 780000:
eyes = NerdGlasses_3
eyes_prop_ep ='Nerd Glasses'
elif j > 730000:
eyes = BigShades_3
eyes_prop_ep ='Big Shades'
elif j > 700000:
eyes = EyeMask_3
eyes_prop_ep ='Eye Mask'
neye = 99
elif j > 650000:
eyes = HornedRimGlasses_3
eyes_prop_ep ='Horned Rim Glasses'
elif j > 600000:
eyes = RegularShades_3
eyes_prop_ep ='Regular Shades'
elif j > 590000:
eyes = GOGOLES_1
eyes_prop_ep ='Welding Goggles'
hair_prop = none
hair_prop_ep = 'None'
toutou = 99
else:
eyes=none
eyes_prop_ep ='None'
neye = 99
if titin == 99 and toutou != 99:
eyes = none
eyes_prop_ep ='None'
if neyo != 99 and neye !=99:
eyes = none
eyes_prop_ep ='None'
seed(j)
k=randint(0,1000000)
if k > 975000:
nose = NOSE_2
nose_ep = 'Clown Nose'
tuctuc = 99
else:
nose = none
nose_ep = 'None'
if tactac == 99 and tuctuc == 99:
mouth_prop = none
mouth_prop_ep = 'None'
seed(k)
l=randint(0,1000000)
if l > 970000:
blemishes = ROSY_2
blemishe_ep = 'Rosy Cheeks'
elif l > 900000:
blemishes = MOLE_2
blemishe_ep = 'Mole'
elif l > 870000:
blemishes = SCARE_2
blemishe_ep = 'Scare'
else:
blemishes = none
blemishe_ep = 'None'
seed(l)
m=randint(0,1000000)
if m > 930000:
LI1 = nr
mouth_ep = 'Black Lipstick'
elif m > 860000:
LI1 = (255,0,0)
mouth_ep = 'Hot Lipstick'
elif m > 790000:
LI1 = (208,82,203)
mouth_ep = 'Purple Lipstick'
elif m > 720000:
LI1 = (214,92,26)
mouth_ep = 'Orange Lipstick'
else:
mouth = none
mouth_ep = 'None'
seed(m)
n=randint(0,1000000)
if n > 900000:
neck = GoldChain_2
neck_ep = 'Gold Chain'
elif n > 820000:
neck = SilverChain_2
neck_ep = 'Silver Chain'
elif n > 800000:
neck = RING_2
neck_ep = 'Ring Onchain'
elif n > 780000:
neck = BROCHE_2
neck_ep = 'Brooch'
else:
neck = none
neck_ep = 'None'
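# Female elf base portrait: same palette keys as the other grids,
# with LI1 marking the lips.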
ELFE=[
[FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR1,FR1,FR1,FR1,FR1,FR1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,SK1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,SK1,SK1,SK1,SK1,SK2,SK1,FR1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,FR1,FR1,SC1,SC1,SK1,SK1,SK1,SC1,SC1,SK1,FR1,FR1,FR1,FR1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,FR1,SK1,FR1,EY1,FR1,SK1,SK1,SK1,EY1,FR1,SK1,SK1,SK1,SK1,SK1,FR1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,FR1,FR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,FR1,FR1,FR1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,SK1,FR1,SK1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,LI1,LI1,LI1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,SK1,FR1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR1,FR1,FR1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR1,SK1,SK1,SK1,FR1,FR2,FR2,FR2,FR2,FR2,FR2,FR2]
]
pixels = ELFE
elif b > 460000:
race_ep = 'Dwarves'
type_ep = 'Firebeards'
hair_color_ep = 'None'
haircut_ep = 'None'
hair_prop_ep = 'None'
eyes_prop_ep = 'None'
blemishe_ep = 'None'
eyes_color_ep = 'None'
facial_hair_ep = 'None'
mouth_prop_ep = 'None'
mouth_ep = 'None'
tooth_color_ep = 'None'
nose_ep = 'None'
neck_ep = 'None'
ears_ep = 'None'
skin_ep = 'None'
ears = none
hair = none
hair_prop = none
neck = none
blemishes = none
#tooth color
mouth = none
facial_hair = none
rod = none
mouth_prop = none
#eye color
eyes = none
nose = none
seed(b)
c=randint(0,1000000)
if c > 800000:
SK1 = (234,217,217)
SC1 = (165,141,141)
EY1 = (201,178,178)
SK2 = (255,255,255)
HRG3 = (220,222,234)
HRG2 = (183,179,191)
HRG4 = (203,200,212)
HRG5 = (87,101,113)
HRG1 = (0,0,0)
RC1 = (226,187,185)
MO1 = EY1
SCR1 = EY1
skin_ep = 'Albino'
elif c > 500000:
SK1 = (219,177,128)
SC1 = (166,110,44)
EY1 = (210,157,96)
SK2 = (235,203,166)
HRG3 = (213,200,183)
HRG2 = (184,163,135)
HRG4 = (209,189,164)
HRG5 = (87,101,113)
HRG1 = (0,0,0)
RC1 = (215,154,104)
MO1 = EY1
SCR1 = EY1
skin_ep = 'Light'
elif c > 200000:
SK1 = (174,139,97)
SC1 = (134,88,30)
EY1 = (167,124,71)
SK2 = (178,138,93)
HRG3 = (188,179,165)
HRG2 = (166,150,128)
HRG4 = (184,171,151)
HRG5 = (87,101,113)
HRG1 = (0,0,0)
RC1 = (191,105,71)
MO1 = EY1
SCR1 = EY1
skin_ep = 'Mid'
else:
SK1 = (113,63,29)
SC1 = (86,39,10)
EY1 = (114,55,17)
SK2 = (146,79,35)
HRG3 = (155,135,127)
HRG2 = (139,121,111)
HRG4 = (156,131,115)
HRG5 = (87,101,113)
HRG1 = (0,0,0)
RC1 = (142,36,2)
MO1 = EY1
SCR1 = EY1
skin_ep = 'Dark'
seed(c)
d=randint(0,1000000)
if d > 750000:
ears = EARS_2
ears_ep = 'Earring'
else:
ears = none
ears_ep = 'None'
seed(d)
e=randint(0,1000000)
HR0 = (255,193,0)
HR8 = (251,114,7)
HR3 = (210,154,0)
HR4 = (166,165,165)
HR5 = (165,108,0)
HR6 = (111,0,48)
HR7 = (85,57,23)
if e > 860000:
HR1 = (50,50,50) #Ok
HR2 = (200,200,200)
hair_color_ep = 'Dark Grey & Silver'
elif e > 720000:
HR1 = HR8
HR2 = (111,0,48) #ok
hair_color_ep = 'Orange & Black Rose'
elif e > 580000:
HR1 = HR3 #ok
HR2 = (210,210,0)
hair_color_ep = 'Fair & Wattle'
elif e > 440000:
HR1 = (80,50,30) #Ok
HR2 = (44,4,9)
hair_color_ep = 'Bronze & Chocolate'
elif e > 300000:
HR1 = HR5
HR2 = HR3
hair_color_ep = 'Ginger & Fair'
elif e > 150000:
HR1 = (220,130,0) #ok
HR2 = (70,40,10)
hair_color_ep = 'Mango & Brown'
else:
HR1 = (210,210,210) #Ok
HR2 = (210,210,210)
hair_color_ep = 'Grey Goose'
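# Firebeard dwarves use a two-tone beard: HR1 is the main colour, HR2 the accent.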
seed(e)
f=randint(0,1000000)
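# f is never used for a trait here (this race has no haircut roll);
# it only keeps the seed chain intact.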
seed(f)
g=randint(0,1000000)
if g > 960000:
hair_prop = CAP_2
hair_prop_ep = 'Cap'
elif g > 940000:
hair_prop = COWBOY_2
hair_prop_ep = 'Cowboy Hat'
elif g > 920000:
hair_prop = TOPHAT_2
hair_prop_ep = 'Top Hat'
elif g > 890000:
hair_prop = Helmet
hair_prop_ep = 'Dwarf Helmet'
tete = 99
elif g > 870000:
hair_prop = FORCAP_2
hair_prop_ep = 'Cap Forward'
elif g > 850000:
hair_prop = BANDANA_2
hair_prop_ep = 'Bandana'
elif g > 830000:
hair_prop = FEDORA_2
hair_prop_ep = 'Fedora'
elif g > 800000:
hair_prop = POLICE_2
hair_prop_ep = 'Police'
else:
hair_prop = none
hair_prop_ep = 'None'
seed(g)
h=randint(0,1000000)
if h > 900000:
mouth_prop = CIGARETTE
mouth_prop_ep = 'Cigarette'
elif h > 840000:
mouth_prop = PIPE
mouth_prop_ep = 'Pipe'
elif h > 800000:
mouth_prop = VAPE
mouth_prop_ep = 'Vape'
else:
mouth_prop = none
mouth_prop_ep = 'None'
seed(h)
i=randint(0,1000000)
if i > 970000:
eyes = TD_2
eyes_prop_ep ='3D Glasses'
elif i > 930000:
eyes = VR_2
eyes_prop_ep ='VR'
elif i > 880000:
eyes = ClassicShades_2
eyes_prop_ep ='Classic Shades'
elif i > 830000:
eyes = SmallShades_2
eyes_prop_ep ='Small Shades'
elif i > 780000:
eyes = EyePatch_2
eyes_prop_ep ='Eye Patch'
elif i > 730000:
eyes = NerdGlasses_2
eyes_prop_ep ='Nerd Glasses'
elif i > 680000:
eyes = BigShades_2
eyes_prop_ep ='Big Shades'
elif i > 650000:
eyes = EyeMask_2
eyes_prop_ep ='Eye Mask'
elif i > 600000:
eyes = HornedRimGlasses_2
eyes_prop_ep ='Horned Rim Glasses'
elif i > 550000:
eyes = RegularShades_2
eyes_prop_ep ='Regular Shades'
else:
eyes = none
eyes_prop_ep ='None'
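# A Dwarf Helmet covers the eyes, so eyewear is dropped when tete == 99.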
if tete == 99:
eyes = none
eyes_prop_ep ='None'
seed(i)
j=randint(0,1000000)
if j > 975000:
nose = NOSE_1
nose_ep = 'Clown Nose'
else:
nose = none
nose_ep = 'None'
seed(j)
k=randint(0,1000000)
if k > 975000:
mouth = SMILE
mouth_ep = 'Smile'
elif k > 950000:
mouth = FROWN
mouth_ep = 'Frown'
else:
mouth = none
mouth_ep = 'None'
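# MOLE, SCARE_1 and ROSY_1 are 24x24 overlay layers: 0 is transparent and a
# palette tuple (MO1 / SCR1 / RC1) marks a colored pixel.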
MOLE=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,MO1,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
SCARE_1=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,SCR1,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,SCR1,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,SCR1,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,SCR1,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,SCR1,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
ROSY_1=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,RC1,0,0,0,0,RC1,RC1,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,RC1,0,0,0,0,0,RC1,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
seed(k)
l=randint(0,1000000)
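# Blemish: Rosy Cheeks ~3%, Mole ~7%, Scar ~3%, otherwise none.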
if l > 970000:
blemishes = ROSY_1
blemishe_ep = 'Rosy Cheeks'
elif l > 900000:
blemishes = MOLE
blemishe_ep = 'Mole'
elif l > 870000:
blemishes = SCARE_1
blemishe_ep = 'Scare'
else:
blemishes = none
blemishe_ep = 'None'
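# Base sprite: a 24x24 grid of palette names. FR2 is the frame border, BG1 the
# background, HR1/HR2 hair and beard tones, SK1 skin, SC1 brows, EY1 eyes and
# FR1 the dark outline. It is assigned to pixels at the end of the branch.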
DWARF_1=[
[FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,HR1,HR1,HR1,HR1,HR1,HR1,HR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,HR2,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR2,HR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,HR2,HR1,HR1,HR1,SK1,SK1,SK1,HR1,HR1,HR1,HR2,HR1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,HR2,HR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,HR1,HR1,HR2,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,HR2,HR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,FR1,HR1,HR2,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,HR2,HR1,FR1,HR1,SK1,SK1,SK1,SK1,SK1,HR1,SK1,SK1,FR1,HR1,HR1,HR2,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,HR2,HR1,FR1,SC1,SC1,SK1,SK1,SK1,SC1,SC1,SK1,SK1,SK1,FR1,HR1,HR1,HR2,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,HR2,HR1,FR1,EY1,FR1,SK1,SK1,SK1,EY1,FR1,SK1,SK1,SK1,FR1,HR1,HR1,HR2,BG1,FR2],
[FR2,BG1,BG1,BG1,HR2,HR1,HR1,FR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,FR1,FR1,HR1,HR1,HR2,BG1,FR2],
[FR2,BG1,BG1,BG1,HR2,HR1,HR1,FR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,FR1,HR1,HR1,HR1,BG1,HR2,FR2],
[FR2,BG1,BG1,HR2,BG1,HR1,HR1,FR1,SK1,SK1,FR1,FR1,SK1,SK1,SK1,SK1,SK1,FR1,HR1,HR1,HR1,BG1,BG1,FR2],
[FR2,BG1,BG1,HR2,BG1,HR1,HR1,FR1,HR1,HR2,HR2,HR2,HR2,HR1,HR1,SK1,SK1,FR1,HR1,HR1,HR1,BG1,BG1,FR2],
[FR2,BG1,HR2,BG1,BG1,HR1,FR1,HR1,HR2,HR1,HR1,HR1,HR1,HR2,HR1,HR1,HR1,FR1,HR1,HR1,HR1,HR1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,HR1,FR1,HR1,HR2,HR1,FR1,FR1,FR1,HR1,HR2,HR1,HR1,HR1,FR1,HR1,HR1,HR1,HR1,FR2],
[FR2,BG1,BG1,BG1,BG1,HR1,FR1,HR1,HR2,HR1,HR1,HR1,HR1,HR1,HR2,HR1,HR1,HR1,FR1,HR1,HR1,HR1,HR1,FR2],
[FR2,BG1,BG1,BG1,BG1,HR1,HR1,FR1,HR2,HR1,HR1,HR1,HR1,HR1,HR2,HR1,HR1,HR1,FR1,HR1,HR1,HR1,HR1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,HR1,HR2,FR1,HR1,HR1,HR1,HR1,HR1,HR1,HR2,FR1,FR1,BG1,BG1,HR1,HR1,HR1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,HR2,HR1,BG1,FR1,HR1,HR1,FR1,FR1,FR1,FR1,HR2,FR1,BG1,BG1,BG1,HR1,HR1,FR2],
[FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR1,FR1,FR2,FR1,SK1,SK1,SK1,FR1,FR2,FR2,FR2,FR2,FR2,FR2]
]
pixels = DWARF_1
elif b > 450000:
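# Blacklocks: same trait pipeline as the branch above (palettes, odds and
# overlays are redefined verbatim); only the base sprite, DWARF_2, differs.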
race_ep = 'Dwarves'
type_ep = 'Blacklocks'
hair_color_ep = 'None'
haircut_ep = 'None'
hair_prop_ep = 'None'
eyes_prop_ep = 'None'
blemishe_ep = 'None'
eyes_color_ep = 'None'
facial_hair_ep = 'None'
mouth_prop_ep = 'None'
mouth_ep = 'None'
tooth_color_ep = 'None'
nose_ep = 'None'
neck_ep = 'None'
ears_ep = 'None'
skin_ep = 'None'
ears = none
hair = none
hair_prop = none
neck = none
blemishes = none
# tooth color (slot unused in this branch)
mouth = none
facial_hair = none
rod = none
mouth_prop = none
# eye color (randomized block below is commented out)
eyes = none
nose = none
seed(b)
c=randint(0,1000000)
if c > 800000:
SK1 = (234,217,217)
SC1 = (165,141,141)
EY1 = (201,178,178)
SK2 = (255,255,255)
HRG3 = (220,222,234)
HRG2 = (183,179,191)
HRG4 = (203,200,212)
HRG5 = (87,101,113)
HRG1 = (0,0,0)
RC1 = (226,187,185)
MO1 = EY1
SCR1 = EY1
skin_ep = 'Albino'
elif c > 500000:
SK1 = (219,177,128)
SC1 = (166,110,44)
EY1 = (210,157,96)
SK2 = (235,203,166)
HRG3 = (213,200,183)
HRG2 = (184,163,135)
HRG4 = (209,189,164)
HRG5 = (87,101,113)
HRG1 = (0,0,0)
RC1 = (215,154,104)
MO1 = EY1
SCR1 = EY1
skin_ep = 'Light'
elif c > 200000:
SK1 = (174,139,97)
SC1 = (134,88,30)
EY1 = (167,124,71)
SK2 = (178,138,93)
HRG3 = (188,179,165)
HRG2 = (166,150,128)
HRG4 = (184,171,151)
HRG5 = (87,101,113)
HRG1 = (0,0,0)
RC1 = (191,105,71)
MO1 = EY1
SCR1 = EY1
skin_ep = 'Mid'
else:
SK1 = (113,63,29)
SC1 = (86,39,10)
EY1 = (114,55,17)
SK2 = (146,79,35)
HRG3 = (155,135,127)
HRG2 = (139,121,111)
HRG4 = (156,131,115)
HRG5 = (87,101,113)
HRG1 = (0,0,0)
RC1 = (142,36,2)
MO1 = EY1
SCR1 = EY1
skin_ep = 'Dark'
seed(c)
d=randint(0,1000000)
if d > 750000:
ears = EARS_2
ears_ep = 'Earring'
else:
ears = none
ears_ep = 'None'
seed(d)
e=randint(0,1000000)
HR0 = (255,193,0)
HR8 = (251,114,7)
HR3 = (210,154,0)
HR4 = (166,165,165)
HR5 = (165,108,0)
HR6 = (111,0,48)
HR7 = (85,57,23)
if e > 860000:
HR1 = (50,50,50) #Ok
HR2 = (200,200,200)
hair_color_ep = 'Dark Grey & Silver'
elif e > 720000:
HR1 = HR8
HR2 = (111,0,48) #ok
hair_color_ep = 'Orange & Black Rose'
elif e > 580000:
HR1 = HR3 #ok
HR2 = (210,210,0)
hair_color_ep = 'Fair & Wattle'
elif e > 440000:
HR1 = (80,50,30) #Ok
HR2 = (44,4,9)
hair_color_ep = 'Bronze & Chocolate'
elif e > 300000:
HR1 = HR5
HR2 = HR3
hair_color_ep = 'Ginger & Fair'
elif e > 150000:
HR1 = (220,130,0) #ok
HR2 = (70,40,10)
hair_color_ep = 'Mango & Brown'
else:
HR1 = (210,210,210) #Ok
HR2 = (210,210,210)
hair_color_ep = 'Grey Goose'
seed(e)
f=randint(0,1000000)
seed(f)
g=randint(0,1000000)
if g > 960000:
hair_prop = CAP_2
hair_prop_ep = 'Cap'
elif g > 940000:
hair_prop = COWBOY_2
hair_prop_ep = 'Cowboy Hat'
elif g > 920000:
hair_prop = TOPHAT_2
hair_prop_ep = 'Top Hat'
elif g > 890000:
hair_prop = Helmet
hair_prop_ep = 'Dwarf Helmet'
tete = 99
elif g > 870000:
hair_prop = FORCAP_2
hair_prop_ep = 'Cap Forward'
elif g > 850000:
hair_prop = BANDANA_2
hair_prop_ep = 'Bandana'
elif g > 830000:
hair_prop = FEDORA_2
hair_prop_ep = 'Fedora'
elif g > 800000:
hair_prop = POLICE_2
hair_prop_ep = 'Police'
else:
hair_prop = none
hair_prop_ep = 'None'
seed(g)
h=randint(0,1000000)
if h > 900000:
mouth_prop = CIGARETTE
mouth_prop_ep = 'Cigarette'
elif h > 840000:
mouth_prop = PIPE
mouth_prop_ep = 'Pipe'
elif h > 800000:
mouth_prop = VAPE
mouth_prop_ep = 'Vape'
else:
mouth_prop = none
mouth_prop_ep = 'None'
# seed(h)
# i=randint(0,1000000)
# if i > 300000:
# EY1 = (255,255,255)
# elif i > 50000:
# EY1 = (0,0,255)
# else:
# EY1 = (0,255,0)
seed(h)
i=randint(0,1000000)
if i > 970000:
eyes = TD_2
eyes_prop_ep ='3D Glasses'
elif i > 930000:
eyes = VR_2
eyes_prop_ep ='VR'
elif i > 880000:
eyes = ClassicShades_2
eyes_prop_ep ='Classic Shades'
elif i > 830000:
eyes = SmallShades_2
eyes_prop_ep ='Small Shades'
elif i > 780000:
eyes = EyePatch_2
eyes_prop_ep ='Eye Patch'
elif i > 730000:
eyes = NerdGlasses_2
eyes_prop_ep ='Nerd Glasses'
elif i > 680000:
eyes = BigShades_2
eyes_prop_ep ='Big Shades'
elif i > 650000:
eyes = EyeMask_2
eyes_prop_ep ='Eye Mask'
elif i > 600000:
eyes = HornedRimGlasses_2
eyes_prop_ep ='Horned Rim Glasses'
elif i > 550000:
eyes = RegularShades_2
eyes_prop_ep ='Regular Shades'
else:
eyes = none
eyes_prop_ep ='None'
if tete == 99:
eyes = none
eyes_prop_ep ='None'
seed(i)
j=randint(0,1000000)
if j > 975000:
nose = NOSE_1
nose_ep = 'Clown Nose'
else:
nose = none
nose_ep = 'None'
seed(j)
k=randint(0,1000000)
if k > 975000:
mouth = SMILE
mouth_ep = 'Smile'
elif k > 950000:
mouth = FROWN
mouth_ep = 'Frown'
else:
mouth = none
mouth_ep = 'None'
MOLE=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,MO1,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
SCARE_1=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,SCR1,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,SCR1,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,SCR1,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,SCR1,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,SCR1,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
ROSY_1=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,RC1,0,0,0,0,RC1,RC1,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,RC1,0,0,0,0,0,RC1,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
seed(k)
l=randint(0,1000000)
if l > 970000:
blemishes = ROSY_1
blemishe_ep = 'Rosy Cheeks'
elif l > 900000:
blemishes = MOLE
blemishe_ep = 'Mole'
elif l > 870000:
blemishes = SCARE_1
blemishe_ep = 'Scare'
else:
blemishes = none
blemishe_ep = 'None'
DWARF_2=[
[FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,HR1,HR1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,HR1,HR1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,HR1,HR1,HR1,HR1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,HR1,HR1,HR1,HR1,HR1,HR1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,HR1,HR1,HR1,HR1,SK1,SK1,SK1,SK1,SK1,SK1,HR1,HR1,HR1,HR1,HR1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,HR1,HR1,HR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,HR1,HR1,HR1,HR1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,HR1,HR1,HR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,HR1,HR1,HR1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,HR1,HR1,HR1,SK1,SK1,SK1,SK1,SK1,SK1,HR1,HR1,HR1,HR1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,HR1,HR1,SC1,SC1,SK1,SK1,SK1,SC1,SC1,SK1,HR1,HR1,FR1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,HR1,EY1,FR1,SK1,SK1,SK1,EY1,FR1,SK1,HR1,HR1,FR1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,HR1,HR1,SK1,SK1,SK1,SK1,SK1,SK1,HR1,HR1,HR1,FR1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,HR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,HR1,HR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,HR1,SK1,SK1,FR1,FR1,SK1,SK1,SK1,SK1,HR1,HR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,HR1,SK1,HR2,HR2,HR2,HR2,HR2,SK1,SK1,HR1,HR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,HR1,HR2,HR2,HR2,HR2,HR2,HR2,HR2,HR2,HR1,HR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,HR1,HR2,SK1,FR1,FR1,FR1,SK1,HR2,HR2,HR1,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,HR2,HR2,HR2,SK1,HR2,HR2,HR2,SK1,SK1,HR2,HR2,HR2,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,HR2,BG1,HR2,HR2,SK1,SK1,HR2,SK1,SK1,SK1,HR2,HR2,FR1,HR2,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,HR2,BG1,HR2,HR2,FR1,FR1,HR2,FR1,FR1,SK1,HR2,HR2,FR1,HR2,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,HR2,BG1,BG1,HR2,HR2,BG1,BG1,HR2,BG1,FR1,SK1,HR2,HR2,FR1,BG1,HR2,BG1,BG1,BG1,FR2],
[FR2,FR2,FR2,FR2,FR2,FR2,FR2,HR2,HR2,FR2,FR2,HR2,FR2,FR1,SK1,HR2,HR2,FR1,FR2,FR2,FR2,FR2,FR2,FR2]
]
pixels = DWARF_2
elif b > 440000:
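# Broadbeams: identical trait pipeline again; this branch swaps in DWARF_3.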
race_ep = 'Dwarves'
type_ep = 'Broadbeams'
hair_color_ep = 'None'
haircut_ep = 'None'
hair_prop_ep = 'None'
eyes_prop_ep = 'None'
blemishe_ep = 'None'
eyes_color_ep = 'None'
facial_hair_ep = 'None'
mouth_prop_ep = 'None'
mouth_ep = 'None'
tooth_color_ep = 'None'
nose_ep = 'None'
neck_ep = 'None'
ears_ep = 'None'
skin_ep = 'None'
ears = none
hair = none
hair_prop = none
neck = none
blemishes = none
# tooth color (slot unused in this branch)
mouth = none
facial_hair = none
rod = none
mouth_prop = none
# eye color (randomized block below is commented out)
eyes = none
nose = none
seed(b)
c=randint(0,1000000)
if c > 800000:
SK1 = (234,217,217)
SC1 = (165,141,141)
EY1 = (201,178,178)
SK2 = (255,255,255)
HRG3 = (220,222,234)
HRG2 = (183,179,191)
HRG4 = (203,200,212)
HRG5 = (87,101,113)
HRG1 = (0,0,0)
RC1 = (226,187,185)
MO1 = EY1
SCR1 = EY1
skin_ep = 'Albino'
elif c > 500000:
SK1 = (219,177,128)
SC1 = (166,110,44)
EY1 = (210,157,96)
SK2 = (235,203,166)
HRG3 = (213,200,183)
HRG2 = (184,163,135)
HRG4 = (209,189,164)
HRG5 = (87,101,113)
HRG1 = (0,0,0)
RC1 = (215,154,104)
MO1 = EY1
SCR1 = EY1
skin_ep = 'Light'
elif c > 200000:
SK1 = (174,139,97)
SC1 = (134,88,30)
EY1 = (167,124,71)
SK2 = (178,138,93)
HRG3 = (188,179,165)
HRG2 = (166,150,128)
HRG4 = (184,171,151)
HRG5 = (87,101,113)
HRG1 = (0,0,0)
RC1 = (191,105,71)
MO1 = EY1
SCR1 = EY1
skin_ep = 'Mid'
else:
SK1 = (113,63,29)
SC1 = (86,39,10)
EY1 = (114,55,17)
SK2 = (146,79,35)
HRG3 = (155,135,127)
HRG2 = (139,121,111)
HRG4 = (156,131,115)
HRG5 = (87,101,113)
HRG1 = (0,0,0)
RC1 = (142,36,2)
MO1 = EY1
SCR1 = EY1
skin_ep = 'Dark'
seed(c)
d=randint(0,1000000)
if d > 750000:
ears = EARS_2
ears_ep = 'Earring'
else:
ears = none
ears_ep = 'None'
seed(d)
e=randint(0,1000000)
HR0 = (255,193,0)
HR8 = (251,114,7)
HR3 = (210,154,0)
HR4 = (166,165,165)
HR5 = (165,108,0)
HR6 = (111,0,48)
HR7 = (85,57,23)
if e > 860000:
HR1 = (50,50,50) #Ok
HR2 = (200,200,200)
hair_color_ep = 'Dark Grey & Silver'
elif e > 720000:
HR1 = HR8
HR2 = (111,0,48) #ok
hair_color_ep = 'Orange & Black Rose'
elif e > 580000:
HR1 = HR3 #ok
HR2 = (210,210,0)
hair_color_ep = 'Fair & Wattle'
elif e > 440000:
HR1 = (80,50,30) #Ok
HR2 = (44,4,9)
hair_color_ep = 'Bronze & Chocolate'
elif e > 300000:
HR1 = HR5
HR2 = HR3
hair_color_ep = 'Ginger & Fair'
elif e > 150000:
HR1 = (220,130,0) #ok
HR2 = (70,40,10)
hair_color_ep = 'Mango & Brown'
else:
HR1 = (210,210,210) #Ok
HR2 = (210,210,210)
hair_color_ep = 'Grey Goose'
seed(e)
f=randint(0,1000000)
seed(f)
g=randint(0,1000000)
if g > 960000:
hair_prop = CAP_2
hair_prop_ep = 'Cap'
elif g > 940000:
hair_prop = COWBOY_2
hair_prop_ep = 'Cowboy Hat'
elif g > 920000:
hair_prop = TOPHAT_2
hair_prop_ep = 'Top Hat'
elif g > 890000:
hair_prop = Helmet
hair_prop_ep = 'Dwarf Helmet'
tete = 99
elif g > 870000:
hair_prop = FORCAP_2
hair_prop_ep = 'Cap Forward'
elif g > 850000:
hair_prop = BANDANA_2
hair_prop_ep = 'Bandana'
elif g > 830000:
hair_prop = FEDORA_2
hair_prop_ep = 'Fedora'
elif g > 800000:
hair_prop = POLICE_2
hair_prop_ep = 'Police'
else:
hair_prop = none
hair_prop_ep = 'None'
seed(g)
h=randint(0,1000000)
if h > 900000:
mouth_prop = CIGARETTE
mouth_prop_ep = 'Cigarette'
elif h > 840000:
mouth_prop = PIPE
mouth_prop_ep = 'Pipe'
elif h > 800000:
mouth_prop = VAPE
mouth_prop_ep = 'Vape'
else:
mouth_prop = none
mouth_prop_ep = 'None'
# seed(h)
# i=randint(0,1000000)
# if i > 300000:
# EY1 = (255,255,255)
# elif i > 50000:
# EY1 = (0,0,255)
# else:
# EY1 = (0,255,0)
seed(h)
i=randint(0,1000000)
if i > 970000:
eyes = TD_2
eyes_prop_ep ='3D Glasses'
elif i > 930000:
eyes = VR_2
eyes_prop_ep ='VR'
elif i > 880000:
eyes = ClassicShades_2
eyes_prop_ep ='Classic Shades'
elif i > 830000:
eyes = SmallShades_2
eyes_prop_ep ='Small Shades'
elif i > 780000:
eyes = EyePatch_2
eyes_prop_ep ='Eye Patch'
elif i > 730000:
eyes = NerdGlasses_2
eyes_prop_ep ='Nerd Glasses'
elif i > 680000:
eyes = BigShades_2
eyes_prop_ep ='Big Shades'
elif i > 650000:
eyes = EyeMask_2
eyes_prop_ep ='Eye Mask'
elif i > 600000:
eyes = HornedRimGlasses_2
eyes_prop_ep ='Horned Rim Glasses'
elif i > 550000:
eyes = RegularShades_2
eyes_prop_ep ='Regular Shades'
else:
eyes = none
eyes_prop_ep ='None'
if tete == 99:
eyes = none
eyes_prop_ep ='None'
seed(i)
j=randint(0,1000000)
if j > 975000:
nose = NOSE_1
nose_ep = 'Clown Nose'
else:
nose = none
nose_ep = 'None'
seed(j)
k=randint(0,1000000)
if k > 975000:
mouth = SMILE
mouth_ep = 'Smile'
elif k > 950000:
mouth = FROWN
mouth_ep = 'Frown'
else:
mouth = none
mouth_ep = 'None'
MOLE=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,MO1,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
SCARE_1=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,SCR1,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,SCR1,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,SCR1,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,SCR1,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,SCR1,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
ROSY_1=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,RC1,0,0,0,0,RC1,RC1,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,RC1,0,0,0,0,0,RC1,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
seed(k)
l=randint(0,1000000)
if l > 970000:
blemishes = ROSY_1
blemishe_ep = 'Rosy Cheeks'
elif l > 900000:
blemishes = MOLE
blemishe_ep = 'Mole'
elif l > 870000:
blemishes = SCARE_1
blemishe_ep = 'Scare'
else:
blemishes = none
blemishe_ep = 'None'
DWARF_3=[
[FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,HR1,HR1,HR1,HR1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,HR1,HR1,HR1,HR1,SK1,HR1,HR1,SK1,HR1,HR1,HR1,HR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,HR1,HR1,HR1,HR1,SK1,SK1,SK1,SK1,SK1,SK1,HR1,HR1,HR1,HR1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,HR1,HR1,FR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,HR1,HR1,HR1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,HR1,HR1,FR1,SC1,SC1,SK1,SK1,SK1,SC1,SC1,SK1,HR1,HR1,HR1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,HR1,HR1,FR1,EY1,FR1,SK1,SK1,SK1,EY1,FR1,SK1,HR1,HR1,HR1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,HR1,HR1,FR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,HR1,HR1,HR1,HR1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,HR1,HR1,HR1,FR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,HR1,HR1,HR1,HR1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,HR1,HR1,HR1,FR1,HR1,SK1,FR1,FR1,SK1,SK1,SK1,HR1,HR1,HR1,HR1,HR1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,HR1,HR1,BG1,FR1,HR2,HR2,HR2,HR2,HR2,HR2,HR2,FR1,HR1,FR1,HR1,HR1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,HR1,BG1,FR1,HR2,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR2,FR1,FR1,BG1,HR1,HR1,BG1,BG1,FR2],
[FR2,BG1,BG1,HR1,HR1,FR1,HR2,HR1,HR1,HR1,FR1,FR1,FR1,HR1,HR1,HR1,HR2,FR1,BG1,HR1,HR1,BG1,BG1,FR2],
[FR2,BG1,BG1,HR1,HR1,FR1,HR2,HR1,HR1,HR1,HR1,HR2,HR1,HR1,HR1,HR1,HR2,FR1,BG1,HR1,HR1,BG1,BG1,FR2],
[FR2,BG1,BG1,HR1,HR1,FR1,HR2,HR1,HR1,HR1,HR2,HR2,HR2,HR1,HR1,HR1,HR2,FR1,BG1,HR1,HR1,BG1,BG1,FR2],
[FR2,BG1,BG1,HR1,HR1,BG1,FR1,HR2,HR1,HR2,HR2,HR2,HR2,HR2,HR1,HR2,FR1,FR1,BG1,HR1,HR1,BG1,BG1,FR2],
[FR2,BG1,BG1,HR1,HR1,BG1,BG1,FR1,HR2,FR1,FR1,FR1,FR1,FR1,HR2,FR1,SK1,FR1,BG1,HR1,HR1,BG1,BG1,FR2],
[FR2,FR2,FR2,HR1,HR1,FR2,FR2,FR2,FR1,FR2,FR2,FR2,FR2,FR1,FR1,SK1,SK1,FR1,FR2,HR1,HR1,FR2,FR2,FR2]
]
pixels = DWARF_3
elif b > 430000:
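# Stiffbeards: same pipeline with DWARF_4, except the hair color labels are
# single-tone ('Dark Grey', 'Orange', ...) even though an accent HR2 is still set.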
race_ep = 'Dwarves'
type_ep = 'Stiffbeards'
hair_color_ep = 'None'
haircut_ep = 'None'
hair_prop_ep = 'None'
eyes_prop_ep = 'None'
blemishe_ep = 'None'
eyes_color_ep = 'None'
facial_hair_ep = 'None'
mouth_prop_ep = 'None'
mouth_ep = 'None'
tooth_color_ep = 'None'
nose_ep = 'None'
neck_ep = 'None'
ears_ep = 'None'
skin_ep = 'None'
ears = none
hair = none
hair_prop = none
neck = none
blemishes = none
# tooth color (slot unused in this branch)
mouth = none
facial_hair = none
rod = none
mouth_prop = none
# eye color (randomized block below is commented out)
eyes = none
nose = none
seed(b)
c=randint(0,1000000)
if c > 800000:
SK1 = (234,217,217)
SC1 = (165,141,141)
EY1 = (201,178,178)
SK2 = (255,255,255)
HRG3 = (220,222,234)
HRG2 = (183,179,191)
HRG4 = (203,200,212)
HRG5 = (87,101,113)
HRG1 = (0,0,0)
RC1 = (226,187,185)
MO1 = EY1
SCR1 = EY1
skin_ep = 'Albino'
elif c > 500000:
SK1 = (219,177,128)
SC1 = (166,110,44)
EY1 = (210,157,96)
SK2 = (235,203,166)
HRG3 = (213,200,183)
HRG2 = (184,163,135)
HRG4 = (209,189,164)
HRG5 = (87,101,113)
HRG1 = (0,0,0)
RC1 = (215,154,104)
MO1 = EY1
SCR1 = EY1
skin_ep = 'Light'
elif c > 200000:
SK1 = (174,139,97)
SC1 = (134,88,30)
EY1 = (167,124,71)
SK2 = (178,138,93)
HRG3 = (188,179,165)
HRG2 = (166,150,128)
HRG4 = (184,171,151)
HRG5 = (87,101,113)
HRG1 = (0,0,0)
RC1 = (191,105,71)
MO1 = EY1
SCR1 = EY1
skin_ep = 'Mid'
else:
SK1 = (113,63,29)
SC1 = (86,39,10)
EY1 = (114,55,17)
SK2 = (146,79,35)
HRG3 = (155,135,127)
HRG2 = (139,121,111)
HRG4 = (156,131,115)
HRG5 = (87,101,113)
HRG1 = (0,0,0)
RC1 = (142,36,2)
MO1 = EY1
SCR1 = EY1
skin_ep = 'Dark'
seed(c)
d=randint(0,1000000)
if d > 750000:
ears = EARS_2
ears_ep = 'Earring'
else:
ears = none
ears_ep = 'None'
seed(d)
e=randint(0,1000000)
HR0 = (255,193,0)
HR8 = (251,114,7)
HR3 = (210,154,0)
HR4 = (166,165,165)
HR5 = (165,108,0)
HR6 = (111,0,48)
HR7 = (85,57,23)
if e > 860000:
HR1 = (50,50,50) #Ok
HR2 = (200,200,200)
hair_color_ep = 'Dark Grey'
elif e > 720000:
HR1 = HR8
HR2 = (111,0,48) #ok
hair_color_ep = 'Orange'
elif e > 580000:
HR1 = HR3 #ok
HR2 = (210,210,0)
hair_color_ep = 'Fair'
elif e > 440000:
HR1 = (80,50,30) #Ok
HR2 = (44,4,9)
hair_color_ep = 'Bronze'
elif e > 300000:
HR1 = HR5
HR2 = HR3
hair_color_ep = 'Ginger'
elif e > 150000:
HR1 = (220,130,0) #ok
HR2 = (70,40,10)
hair_color_ep = 'Mango'
else:
HR1 = (210,210,210) #Ok
HR2 = (210,210,210)
hair_color_ep = 'Grey Goose'
seed(e)
f=randint(0,1000000)
seed(f)
g=randint(0,1000000)
if g > 960000:
hair_prop = CAP_2
hair_prop_ep = 'Cap'
elif g > 940000:
hair_prop = COWBOY_2
hair_prop_ep = 'Cowboy Hat'
elif g > 920000:
hair_prop = TOPHAT_2
hair_prop_ep = 'Top Hat'
elif g > 890000:
hair_prop = Helmet
hair_prop_ep = 'Dwarf Helmet'
tete = 99
elif g > 870000:
hair_prop = FORCAP_2
hair_prop_ep = 'Cap Forward'
elif g > 850000:
hair_prop = BANDANA_2
hair_prop_ep = 'Bandana'
elif g > 830000:
hair_prop = FEDORA_2
hair_prop_ep = 'Fedora'
elif g > 800000:
hair_prop = POLICE_2
hair_prop_ep = 'Police'
else:
hair_prop = none
hair_prop_ep = 'None'
seed(g)
h=randint(0,1000000)
if h > 900000:
mouth_prop = CIGARETTE
mouth_prop_ep = 'Cigarette'
elif h > 840000:
mouth_prop = PIPE
mouth_prop_ep = 'Pipe'
elif h > 800000:
mouth_prop = VAPE
mouth_prop_ep = 'Vape'
else:
mouth_prop = none
mouth_prop_ep = 'None'
# seed(h)
# i=randint(0,1000000)
# if i > 300000:
# EY1 = (255,255,255)
# elif i > 50000:
# EY1 = (0,0,255)
# else:
# EY1 = (0,255,0)
seed(h)
i=randint(0,1000000)
if i > 970000:
eyes = TD_2
eyes_prop_ep ='3D Glasses'
elif i > 930000:
eyes = VR_2
eyes_prop_ep ='VR'
elif i > 880000:
eyes = ClassicShades_2
eyes_prop_ep ='Classic Shades'
elif i > 830000:
eyes = SmallShades_2
eyes_prop_ep ='Small Shades'
elif i > 780000:
eyes = EyePatch_2
eyes_prop_ep ='Eye Patch'
elif i > 730000:
eyes = NerdGlasses_2
eyes_prop_ep ='Nerd Glasses'
elif i > 680000:
eyes = BigShades_2
eyes_prop_ep ='Big Shades'
elif i > 650000:
eyes = EyeMask_2
eyes_prop_ep ='Eye Mask'
elif i > 600000:
eyes = HornedRimGlasses_2
eyes_prop_ep ='Horned Rim Glasses'
elif i > 550000:
eyes = RegularShades_2
eyes_prop_ep ='Regular Shades'
else:
eyes = none
eyes_prop_ep ='None'
if tete == 99:
eyes = none
eyes_prop_ep ='None'
seed(i)
j=randint(0,1000000)
if j > 975000:
nose = NOSE_1
nose_ep = 'Clown Nose'
else:
nose = none
nose_ep = 'None'
seed(j)
k=randint(0,1000000)
if k > 975000:
mouth = SMILE
mouth_ep = 'Smile'
elif k > 950000:
mouth = FROWN
mouth_ep = 'Frown'
else:
mouth = none
mouth_ep = 'None'
MOLE=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,MO1,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
SCARE_1=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,SCR1,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,SCR1,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,SCR1,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,SCR1,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,SCR1,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
ROSY_1=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,RC1,0,0,0,0,RC1,RC1,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,RC1,0,0,0,0,0,RC1,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
seed(k)
l=randint(0,1000000)
if l > 970000:
blemishes = ROSY_1
blemishe_ep = 'Rosy Cheeks'
elif l > 900000:
blemishes = MOLE
blemishe_ep = 'Mole'
elif l > 870000:
blemishes = SCARE_1
blemishe_ep = 'Scare'
else:
blemishes = none
blemishe_ep = 'None'
DWARF_4=[
[FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,HR1,HR1,HR1,HR1,HR1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,HR1,HR1,HR1,HR1,HR1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,HR1,HR1,HR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,HR1,HR1,HR1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,HR1,HR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,HR1,HR1,HR1,HR1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,HR1,HR1,SK1,SK1,SK1,SK1,SK1,HR1,HR1,SK1,HR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,HR1,HR1,SC1,SC1,SK1,SK1,SK1,SC1,SC1,SK1,SK1,HR1,HR1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,HR1,HR1,EY1,FR1,SK1,SK1,SK1,EY1,FR1,SK1,SK1,HR1,HR1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,HR1,HR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,HR1,HR1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,HR1,HR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,HR1,HR1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,HR1,HR1,SK1,SK1,FR1,FR1,SK1,SK1,SK1,SK1,HR1,HR1,HR1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,HR1,HR1,SK1,SK1,SK1,SK1,SK1,SK1,HR1,HR1,HR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,HR1,HR1,HR1,SK1,SK1,SK1,SK1,SK1,HR1,HR1,HR1,HR1,HR1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,HR1,HR1,HR1,SK1,FR1,FR1,FR1,SK1,HR1,HR1,SK1,HR1,HR1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,HR1,HR1,SK1,SK1,SK1,HR1,SK1,SK1,SK1,SK1,SK1,HR1,HR1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,HR1,HR1,SK1,HR1,HR1,HR1,SK1,SK1,SK1,HR1,HR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,HR1,HR1,HR1,HR1,HR1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,FR2,FR2,FR2,FR2,FR2,FR2,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,SK1,FR1,FR2,FR2,FR2,FR2,FR2,FR2]
]
pixels = DWARF_4
elif b > 420000:
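# Stonefoots: identical trait pipeline; this branch swaps in DWARF_5.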
race_ep = 'Dwarves'
type_ep = 'Stonefoots'
hair_color_ep = 'None'
haircut_ep = 'None'
hair_prop_ep = 'None'
eyes_prop_ep = 'None'
blemishe_ep = 'None'
eyes_color_ep = 'None'
facial_hair_ep = 'None'
mouth_prop_ep = 'None'
mouth_ep = 'None'
tooth_color_ep = 'None'
nose_ep = 'None'
neck_ep = 'None'
ears_ep = 'None'
skin_ep = 'None'
ears = none
hair = none
hair_prop = none
neck = none
blemishes = none
# tooth color (slot unused in this branch)
mouth = none
facial_hair = none
rod = none
mouth_prop = none
# eye color (randomized block below is commented out)
eyes = none
nose = none
seed(b)
c=randint(0,1000000)
if c > 800000:
SK1 = (234,217,217)
SC1 = (165,141,141)
EY1 = (201,178,178)
SK2 = (255,255,255)
HRG3 = (220,222,234)
HRG2 = (183,179,191)
HRG4 = (203,200,212)
HRG5 = (87,101,113)
HRG1 = (0,0,0)
RC1 = (226,187,185)
MO1 = EY1
SCR1 = EY1
skin_ep = 'Albino'
elif c > 500000:
SK1 = (219,177,128)
SC1 = (166,110,44)
EY1 = (210,157,96)
SK2 = (235,203,166)
HRG3 = (213,200,183)
HRG2 = (184,163,135)
HRG4 = (209,189,164)
HRG5 = (87,101,113)
HRG1 = (0,0,0)
RC1 = (215,154,104)
MO1 = EY1
SCR1 = EY1
skin_ep = 'Light'
elif c > 200000:
SK1 = (174,139,97)
SC1 = (134,88,30)
EY1 = (167,124,71)
SK2 = (178,138,93)
HRG3 = (188,179,165)
HRG2 = (166,150,128)
HRG4 = (184,171,151)
HRG5 = (87,101,113)
HRG1 = (0,0,0)
RC1 = (191,105,71)
MO1 = EY1
SCR1 = EY1
skin_ep = 'Mid'
else:
SK1 = (113,63,29)
SC1 = (86,39,10)
EY1 = (114,55,17)
SK2 = (146,79,35)
HRG3 = (155,135,127)
HRG2 = (139,121,111)
HRG4 = (156,131,115)
HRG5 = (87,101,113)
HRG1 = (0,0,0)
RC1 = (142,36,2)
MO1 = EY1
SCR1 = EY1
skin_ep = 'Dark'
seed(c)
d=randint(0,1000000)
if d > 750000:
ears = EARS_2
ears_ep = 'Earring'
else:
ears = none
ears_ep = 'None'
seed(d)
e=randint(0,1000000)
HR0 = (255,193,0)
HR8 = (251,114,7)
HR3 = (210,154,0)
HR4 = (166,165,165)
HR5 = (165,108,0)
HR6 = (111,0,48)
HR7 = (85,57,23)
if e > 860000:
HR1 = (50,50,50) #Ok
HR2 = (200,200,200)
hair_color_ep = 'Dark Grey & Silver'
elif e > 720000:
HR1 = HR8
HR2 = (111,0,48) #ok
hair_color_ep = 'Orange & Black Rose'
elif e > 580000:
HR1 = HR3 #ok
HR2 = (210,210,0)
hair_color_ep = 'Fair & Wattle'
elif e > 440000:
HR1 = (80,50,30) #Ok
HR2 = (44,4,9)
hair_color_ep = 'Bronze & Chocolate'
elif e > 300000:
HR1 = HR5
HR2 = HR3
hair_color_ep = 'Ginger & Fair'
elif e > 150000:
HR1 = (220,130,0) #ok
HR2 = (70,40,10)
hair_color_ep = 'Mango & Brown'
else:
HR1 = (210,210,210) #Ok
HR2 = (210,210,210)
hair_color_ep = 'Grey Goose'
seed(e)
f=randint(0,1000000)
seed(f)
g=randint(0,1000000)
if g > 960000:
hair_prop = CAP_2
hair_prop_ep = 'Cap'
elif g > 940000:
hair_prop = COWBOY_2
hair_prop_ep = 'Cowboy Hat'
elif g > 920000:
hair_prop = TOPHAT_2
hair_prop_ep = 'Top Hat'
elif g > 890000:
hair_prop = Helmet
hair_prop_ep = 'Dwarf Helmet'
tete = 99
elif g > 870000:
hair_prop = FORCAP_2
hair_prop_ep = 'Cap Forward'
elif g > 850000:
hair_prop = BANDANA_2
hair_prop_ep = 'Bandana'
elif g > 830000:
hair_prop = FEDORA_2
hair_prop_ep = 'Fedora'
elif g > 800000:
hair_prop = POLICE_2
hair_prop_ep = 'Police'
else:
hair_prop = none
hair_prop_ep = 'None'
seed(g)
h=randint(0,1000000)
if h > 900000:
mouth_prop = CIGARETTE
mouth_prop_ep = 'Cigarette'
elif h > 840000:
mouth_prop = PIPE
mouth_prop_ep = 'Pipe'
elif h > 800000:
mouth_prop = VAPE
mouth_prop_ep = 'Vape'
else:
mouth_prop = none
mouth_prop_ep = 'None'
# seed(h)
# i=randint(0,1000000)
# if i > 300000:
# EY1 = (255,255,255)
# elif i > 50000:
# EY1 = (0,0,255)
# else:
# EY1 = (0,255,0)
seed(h)
i=randint(0,1000000)
if i > 970000:
eyes = TD_2
eyes_prop_ep ='3D Glasses'
elif i > 930000:
eyes = VR_2
eyes_prop_ep ='VR'
elif i > 880000:
eyes = ClassicShades_2
eyes_prop_ep ='Classic Shades'
elif i > 830000:
eyes = SmallShades_2
eyes_prop_ep ='Small Shades'
elif i > 780000:
eyes = EyePatch_2
eyes_prop_ep ='Eye Patch'
elif i > 730000:
eyes = NerdGlasses_2
eyes_prop_ep ='Nerd Glasses'
elif i > 680000:
eyes = BigShades_2
eyes_prop_ep ='Big Shades'
elif i > 650000:
eyes = EyeMask_2
eyes_prop_ep ='Eye Mask'
elif i > 600000:
eyes = HornedRimGlasses_2
eyes_prop_ep ='Horned Rim Glasses'
elif i > 550000:
eyes = RegularShades_2
eyes_prop_ep ='Regular Shades'
else:
eyes=none
eyes_prop_ep ='None'
if tete == 99:
eyes = none
eyes_prop_ep ='None'
seed(i)
j=randint(0,1000000)
if j > 975000:
nose = NOSE_1
nose_ep = 'Clown Nose'
else:
nose = none
nose_ep = 'None'
seed(j)
k=randint(0,1000000)
if k > 975000:
mouth = SMILE
mouth_ep = 'Smile'
elif k > 950000:
mouth = FROWN
mouth_ep = 'Frown'
else:
mouth = none
mouth_ep = 'None'
MOLE=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,MO1,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
SCARE_1=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,SCR1,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,SCR1,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,SCR1,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,SCR1,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,SCR1,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
ROSY_1=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,RC1,0,0,0,0,RC1,RC1,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,RC1,0,0,0,0,0,RC1,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
seed(k)
l=randint(0,1000000)
if l > 970000:
blemishes = ROSY_1
blemishe_ep = 'Rosy Cheeks'
elif l > 900000:
blemishes = MOLE
blemishe_ep = 'Mole'
elif l > 870000:
blemishes = SCARE_1
blemishe_ep = 'Scare'
else:
blemishes = none
blemishe_ep = 'None'
DWARF_5=[
[FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR1,FR1,FR1,FR1,FR1,FR1,FR1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,HR1,HR1,HR1,SK1,SK1,SK1,HR1,HR1,HR1,SK1,HR1,HR1,HR1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,HR1,SC1,SC1,HR1,SK1,HR1,SC1,SC1,HR1,SK1,HR1,FR1,HR1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,EY1,FR1,SK1,SK1,SK1,EY1,FR1,SK1,HR1,SK1,FR1,HR1,HR1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,HR1,FR1,FR1,HR1,HR1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,HR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,HR1,FR1,BG1,HR1,HR1,HR1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,HR1,SK1,SK1,FR1,FR1,SK1,SK1,SK1,SK1,HR1,FR1,BG1,HR1,HR1,HR1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,HR1,SK1,SK1,HR2,HR2,SK1,SK1,SK1,HR1,HR1,FR1,BG1,HR1,HR1,HR1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,HR1,HR1,HR2,HR2,HR2,HR2,HR2,HR1,HR1,HR1,FR1,BG1,HR1,HR1,HR1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,HR1,HR2,HR2,FR1,FR1,FR1,HR2,HR2,HR1,HR1,FR1,BG1,HR1,HR1,HR1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,HR2,HR2,HR1,HR2,HR2,HR2,HR1,HR2,HR2,HR1,FR1,BG1,HR1,HR1,HR1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,HR2,HR2,HR1,HR2,HR2,HR2,HR2,HR2,HR1,HR2,HR2,FR1,BG1,HR1,HR1,HR1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,HR2,BG1,HR2,HR2,HR2,HR1,HR2,HR2,HR2,HR1,HR2,FR1,BG1,HR1,HR1,HR1,HR1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,HR2,BG1,HR2,HR2,BG1,BG1,BG1,HR2,HR2,HR1,HR2,FR1,BG1,BG1,HR1,HR1,HR1,FR2],
[FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,HR2,FR2,FR2,FR2,FR2,FR1,HR2,SK1,SK1,FR1,FR2,FR2,FR2,FR2,FR2,FR2]
]
pixels = DWARF_5
elif b > 410000:
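# Ironfists: identical trait pipeline; this branch swaps in DWARF_6.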
race_ep = 'Dwarves'
type_ep = 'Ironfists'
hair_color_ep = 'None'
haircut_ep = 'None'
hair_prop_ep = 'None'
eyes_prop_ep = 'None'
blemishe_ep = 'None'
eyes_color_ep = 'None'
facial_hair_ep = 'None'
mouth_prop_ep = 'None'
mouth_ep = 'None'
tooth_color_ep = 'None'
nose_ep = 'None'
neck_ep = 'None'
ears_ep = 'None'
skin_ep = 'None'
ears = none
hair = none
hair_prop = none
neck = none
blemishes = none
# tooth color (slot unused in this branch)
mouth = none
facial_hair = none
rod = none
mouth_prop = none
# eye color (randomized block below is commented out)
eyes = none
nose = none
seed(b)
c=randint(0,1000000)
if c > 800000:
SK1 = (234,217,217)
SC1 = (165,141,141)
EY1 = (201,178,178)
SK2 = (255,255,255)
HRG3 = (220,222,234)
HRG2 = (183,179,191)
HRG4 = (203,200,212)
HRG5 = (87,101,113)
HRG1 = (0,0,0)
RC1 = (226,187,185)
MO1 = EY1
SCR1 = EY1
skin_ep = 'Albino'
elif c > 500000:
SK1 = (219,177,128)
SC1 = (166,110,44)
EY1 = (210,157,96)
SK2 = (235,203,166)
HRG3 = (213,200,183)
HRG2 = (184,163,135)
HRG4 = (209,189,164)
HRG5 = (87,101,113)
HRG1 = (0,0,0)
RC1 = (215,154,104)
MO1 = EY1
SCR1 = EY1
skin_ep = 'Light'
elif c > 200000:
SK1 = (174,139,97)
SC1 = (134,88,30)
EY1 = (167,124,71)
SK2 = (178,138,93)
HRG3 = (188,179,165)
HRG2 = (166,150,128)
HRG4 = (184,171,151)
HRG5 = (87,101,113)
HRG1 = (0,0,0)
RC1 = (191,105,71)
MO1 = EY1
SCR1 = EY1
skin_ep = 'Mid'
else:
SK1 = (113,63,29)
SC1 = (86,39,10)
EY1 = (114,55,17)
SK2 = (146,79,35)
HRG3 = (155,135,127)
HRG2 = (139,121,111)
HRG4 = (156,131,115)
HRG5 = (87,101,113)
HRG1 = (0,0,0)
RC1 = (142,36,2)
MO1 = EY1
SCR1 = EY1
skin_ep = 'Dark'
seed(c)
d=randint(0,1000000)
if d > 750000:
ears = EARS_2
ears_ep = 'Earring'
else:
ears = none
ears_ep = 'None'
seed(d)
e=randint(0,1000000)
HR0 = (255,193,0)
HR8 = (251,114,7)
HR3 = (210,154,0)
HR4 = (166,165,165)
HR5 = (165,108,0)
HR6 = (111,0,48)
HR7 = (85,57,23)
if e > 860000:
HR1 = (50,50,50) #Ok
HR2 = (200,200,200)
hair_color_ep = 'Dark Grey & Silver'
elif e > 720000:
HR1 = HR8
HR2 = (111,0,48) #ok
hair_color_ep = 'Orange & Black Rose'
elif e > 580000:
HR1 = HR3 #ok
HR2 = (210,210,0)
hair_color_ep = 'Fair & Wattle'
elif e > 440000:
HR1 = (80,50,30) #Ok
HR2 = (44,4,9)
hair_color_ep = 'Bronze & Chocolate'
elif e > 300000:
HR1 = HR5
HR2 = HR3
hair_color_ep = 'Ginger & Fair'
elif e > 150000:
HR1 = (220,130,0) #ok
HR2 = (70,40,10)
hair_color_ep = 'Mango & Brown'
else:
HR1 = (210,210,210) #Ok
HR2 = (210,210,210)
hair_color_ep = 'Grey Goose'
seed(e)
f=randint(0,1000000)
seed(f)
g=randint(0,1000000)
if g > 960000:
hair_prop = CAP_2
hair_prop_ep = 'Cap'
elif g > 940000:
hair_prop = COWBOY_2
hair_prop_ep = 'Cowboy Hat'
elif g > 920000:
hair_prop = TOPHAT_2
hair_prop_ep = 'Top Hat'
elif g > 890000:
hair_prop = Helmet
hair_prop_ep = 'Dwarf Helmet'
tete = 99
elif g > 870000:
hair_prop = FORCAP_2
hair_prop_ep = 'Cap Forward'
elif g > 850000:
hair_prop = BANDANA_2
hair_prop_ep = 'Bandana'
elif g > 830000:
hair_prop = FEDORA_2
hair_prop_ep = 'Fedora'
elif g > 800000:
hair_prop = POLICE_2
hair_prop_ep = 'Police'
else:
hair_prop = none
hair_prop_ep = 'None'
seed(g)
h=randint(0,1000000)
if h > 900000:
mouth_prop = CIGARETTE
mouth_prop_ep = 'Cigarette'
elif h > 840000:
mouth_prop = PIPE
mouth_prop_ep = 'Pipe'
elif h > 800000:
mouth_prop = VAPE
mouth_prop_ep = 'Vape'
else:
mouth_prop = none
mouth_prop_ep = 'None'
# seed(h)
# i=randint(0,1000000)
# if i > 300000:
# EY1 = (255,255,255)
# elif i > 50000:
# EY1 = (0,0,255)
# else:
# EY1 = (0,255,0)
seed(h)
i=randint(0,1000000)
if i > 970000:
eyes = TD_2
eyes_prop_ep ='3D Glasses'
elif i > 930000:
eyes = VR_2
eyes_prop_ep ='VR'
elif i > 880000:
eyes = ClassicShades_2
eyes_prop_ep ='Classic Shades'
elif i > 830000:
eyes = SmallShades_2
eyes_prop_ep ='Small Shades'
elif i > 780000:
eyes = EyePatch_2
eyes_prop_ep ='Eye Patch'
elif i > 730000:
eyes = NerdGlasses_2
eyes_prop_ep ='Nerd Glasses'
elif i > 680000:
eyes = BigShades_2
eyes_prop_ep ='Big Shades'
elif i > 650000:
eyes = EyeMask_2
eyes_prop_ep ='Eye Mask'
elif i > 600000:
eyes = HornedRimGlasses_2
eyes_prop_ep ='Horned Rim Glasses'
elif i > 550000:
eyes = RegularShades_2
eyes_prop_ep ='Regular Shades'
else:
eyes = none
eyes_prop_ep ='None'
if tete == 99:
eyes = none
eyes_prop_ep ='None'
seed(i)
j=randint(0,1000000)
if j > 975000:
nose = NOSE_1
nose_ep = 'Clown Nose'
else:
nose = none
nose_ep = 'None'
seed(j)
k=randint(0,1000000)
if k > 975000:
mouth = SMILE
mouth_ep = 'Smile'
elif k > 950000:
mouth = FROWN
mouth_ep = 'Frown'
else:
mouth = none
mouth_ep = 'None'
MOLE=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,MO1,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
SCARE_1=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,SCR1,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,SCR1,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,SCR1,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,SCR1,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,SCR1,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
ROSY_1=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,RC1,0,0,0,0,RC1,RC1,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,RC1,0,0,0,0,0,RC1,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
seed(k)
l=randint(0,1000000)
if l > 970000:
blemishes = ROSY_1
blemishe_ep = 'Rosy Cheeks'
elif l > 900000:
blemishes = MOLE
blemishe_ep = 'Mole'
elif l > 870000:
blemishes = SCARE_1
blemishe_ep = 'Scare'
else:
blemishes = none
blemishe_ep = 'None'
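# DWARF_6: full 24x24 base sprite for this variant, built from the palette
# variables (FR2 border, BG1 background, HR1/HR2 hair, SK1 skin, FR1
# outline, EY1 eyes).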
DWARF_6=[
[FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,HR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,HR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,HR1,FR1,SC1,SC1,SK1,SK1,SK1,SC1,SC1,SK1,SK1,HR2,HR1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,HR1,HR2,FR1,EY1,FR1,SK1,SK1,SK1,EY1,FR1,SK1,SK1,HR1,HR2,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,HR2,HR1,FR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,HR2,HR1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,HR1,HR2,FR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,HR1,HR2,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,HR2,HR1,FR1,SK1,SK1,FR1,FR1,SK1,SK1,SK1,SK1,SK1,HR2,HR1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,HR1,HR2,FR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,FR1,HR2,HR1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,HR1,HR2,BG1,FR1,SK1,SK1,HR1,HR1,HR1,SK1,SK1,SK1,SK1,FR1,HR1,HR2,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,HR2,HR1,BG1,FR1,SK1,HR1,FR1,FR1,FR1,HR1,SK1,SK1,SK1,FR1,BG1,HR1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,HR1,BG1,BG1,HR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,HR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,HR1,HR2,HR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,HR1,HR2,HR1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,HR1,HR2,HR1,HR2,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR2,HR1,HR2,HR1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,HR2,HR1,HR2,HR1,HR1,HR1,HR1,BG1,HR1,HR1,HR1,HR1,HR2,HR1,HR2,BG1,BG1,BG1,FR2],
[FR2,FR2,FR2,FR2,FR2,HR1,HR2,HR1,FR2,HR1,HR1,FR2,FR2,FR1,HR1,HR1,SK1,HR1,HR2,HR1,FR2,FR2,FR2,FR2]
]
pixels = DWARF_6
elif b > 400000:
race_ep = 'Dwarves'
type_ep = 'Longbeards'
hair_color_ep = 'None'
haircut_ep = 'None'
hair_prop_ep = 'None'
eyes_prop_ep = 'None'
blemishe_ep = 'None'
eyes_color_ep = 'None'
facial_hair_ep = 'None'
mouth_prop_ep = 'None'
mouth_ep = 'None'
tooth_color_ep = 'None'
nose_ep = 'None'
neck_ep = 'None'
ears_ep = 'None'
skin_ep = 'None'
ears = none
hair = none
hair_prop = none
neck = none
blemishes = none
#tooth color
mouth = none
facial_hair = none
rod = none
mouth_prop = none
#eye color
eyes = none
nose = none
seed(b)
c=randint(0,1000000)
if c > 800000:
SK1 = (234,217,217)
SC1 = (165,141,141)
EY1 = (201,178,178)
SK2 = (255,255,255)
HRG3 = (220,222,234)
HRG2 = (183,179,191)
HRG4 = (203,200,212)
HRG5 = (87,101,113)
HRG1 = (0,0,0)
RC1 = (226,187,185)
MO1 = EY1
SCR1 = EY1
skin_ep = 'Albino'
elif c > 500000:
SK1 = (219,177,128)
SC1 = (166,110,44)
EY1 = (210,157,96)
SK2 = (235,203,166)
HRG3 = (213,200,183)
HRG2 = (184,163,135)
HRG4 = (209,189,164)
HRG5 = (87,101,113)
HRG1 = (0,0,0)
RC1 = (215,154,104)
MO1 = EY1
SCR1 = EY1
skin_ep = 'Light'
elif c > 200000:
SK1 = (174,139,97)
SC1 = (134,88,30)
EY1 = (167,124,71)
SK2 = (178,138,93)
HRG3 = (188,179,165)
HRG2 = (166,150,128)
HRG4 = (184,171,151)
HRG5 = (87,101,113)
HRG1 = (0,0,0)
RC1 = (191,105,71)
MO1 = EY1
SCR1 = EY1
skin_ep = 'Mid'
else:
SK1 = (113,63,29)
SC1 = (86,39,10)
EY1 = (114,55,17)
SK2 = (146,79,35)
HRG3 = (155,135,127)
HRG2 = (139,121,111)
HRG4 = (156,131,115)
HRG5 = (87,101,113)
HRG1 = (0,0,0)
RC1 = (142,36,2)
MO1 = EY1
SCR1 = EY1
skin_ep = 'Dark'
seed(c)
d=randint(0,1000000)
if d > 750000:
ears = EARS_2
ears_ep = 'Earring'
else:
ears = none
ears_ep = 'None'
seed(d)
e=randint(0,1000000)
HR0 = (255,193,0)
HR8 = (251,114,7)
HR3 = (210,154,0)
HR4 = (166,165,165)
HR5 = (165,108,0)
HR6 = (111,0,48)
HR7 = (85,57,23)
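# Hair colour presets; some (e.g. HR0, HR4, HR7) are not referenced in
# this branch and appear to be kept for reuse.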
if e > 860000:
HR1 = (50,50,50) #Ok
HR2 = (200,200,200)
hair_color_ep = 'Dark Grey & Silver'
elif e > 720000:
HR1 = HR8
HR2 = (111,0,48) #ok
hair_color_ep = 'Orange & Black Rose'
elif e > 580000:
HR1 = HR3 #ok
HR2 = (210,210,0)
hair_color_ep = 'Fair & Wattle'
elif e > 440000:
HR1 = (80,50,30) #Ok
HR2 = (44,4,9)
hair_color_ep = 'Bronze & Chocolate'
elif e > 300000:
HR1 = HR5
HR2 = HR3
hair_color_ep = 'Ginger & Fair'
elif e > 150000:
HR1 = (220,130,0) #ok
HR2 = (70,40,10)
hair_color_ep = 'Mango & Brown'
else:
HR1 = (210,210,210) #Ok
HR2 = (210,210,210)
hair_color_ep = 'Grey Goose'
seed(e)
f=randint(0,1000000)
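# f is not mapped to a trait here; the draw only advances the seed chain
# so later traits stay aligned with the other branches.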
seed(f)
g=randint(0,1000000)
if g > 960000:
hair_prop = CAP_2
hair_prop_ep = 'Cap'
elif g > 940000:
hair_prop = COWBOY_2
hair_prop_ep = 'Cowboy Hat'
elif g > 920000:
hair_prop = TOPHAT_2
hair_prop_ep = 'Top Hat'
elif g > 890000:
hair_prop = Helmet
hair_prop_ep = 'Dwarf Helmet'
tete = 99
elif g > 870000:
hair_prop = FORCAP_2
hair_prop_ep = 'Cap Forward'
elif g > 850000:
hair_prop = BANDANA_2
hair_prop_ep = 'Bandana'
elif g > 830000:
hair_prop = FEDORA_2
hair_prop_ep = 'Fedora'
elif g > 800000:
hair_prop = POLICE_2
hair_prop_ep = 'Police'
else:
hair_prop = none
hair_prop_ep = 'None'
seed(g)
h=randint(0,1000000)
if h > 900000:
mouth_prop = CIGARETTE
mouth_prop_ep = 'Cigarette'
elif h > 840000:
mouth_prop = PIPE
mouth_prop_ep = 'Pipe'
elif h > 800000:
mouth_prop = VAPE
mouth_prop_ep = 'Vape'
else:
mouth_prop = none
mouth_prop_ep = 'None'
seed(h)
i=randint(0,1000000)
if i > 970000:
eyes = TD_2
eyes_prop_ep ='3D Glasses'
elif i > 930000:
eyes = VR_2
eyes_prop_ep ='VR'
elif i > 880000:
eyes = ClassicShades_2
eyes_prop_ep ='Classic Shades'
elif i > 830000:
eyes = SmallShades_2
eyes_prop_ep ='Small Shades'
elif i > 780000:
eyes = EyePatch_2
eyes_prop_ep ='Eye Patch'
elif i > 730000:
eyes = NerdGlasses_2
eyes_prop_ep ='Nerd Glasses'
elif i > 680000:
eyes = BigShades_2
eyes_prop_ep ='Big Shades'
elif i > 650000:
eyes = EyeMask_2
eyes_prop_ep ='Eye Mask'
elif i > 600000:
eyes = HornedRimGlasses_2
eyes_prop_ep ='Horned Rim Glasses'
elif i > 550000:
eyes = RegularShades_2
eyes_prop_ep ='Regular Shades'
else:
eyes=none
eyes_prop_ep ='None'
if tete == 99:
eyes = none
eyes_prop_ep ='None'
seed(i)
j=randint(0,1000000)
if j > 975000:
nose = NOSE_1
nose_ep = 'Clown Nose'
else:
nose = none
nose_ep = 'None'
seed(j)
k=randint(0,1000000)
if k > 975000:
mouth = SMILE
mouth_ep = 'Smile'
elif k > 950000:
mouth = FROWN
mouth_ep = 'Frown'
else:
mouth = none
mouth_ep = 'None'
MOLE=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,MO1,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
SCARE_1=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,SCR1,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,SCR1,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,SCR1,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,SCR1,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,SCR1,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
ROSY_1=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,RC1,0,0,0,0,RC1,RC1,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,RC1,0,0,0,0,0,RC1,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
seed(k)
l=randint(0,1000000)
if l > 970000:
blemishes = ROSY_1
blemishe_ep = 'Rosy Cheeks'
elif l > 900000:
blemishes = MOLE
blemishe_ep = 'Mole'
elif l > 870000:
blemishes = SCARE_1
blemishe_ep = 'Scare'
else:
blemishes = none
blemishe_ep = 'None'
DWARF_7=[
[FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,HR1,HR1,SK1,HR1,HR1,HR1,HR1,SK1,SK1,HR1,HR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,HR1,HR2,SK1,SK1,SK1,HR1,HR1,SK1,SK1,SK1,SK1,HR2,HR1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,HR2,HR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,HR1,HR2,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,HR1,HR2,HR1,HR2,SK1,SK1,SK1,SK1,SK1,HR2,SK1,SK1,HR1,HR2,HR1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,HR1,HR2,HR1,SC1,SC1,SK1,SK1,SK1,SC1,SC1,SK1,SK1,HR1,HR2,HR1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,HR1,HR2,HR1,EY1,FR1,SK1,SK1,SK1,EY1,FR1,SK1,SK1,HR1,HR2,HR1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,HR1,HR2,HR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,HR1,HR2,HR1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,HR1,HR2,HR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,HR1,HR2,HR1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,HR2,HR1,HR1,SK1,SK1,FR1,FR1,SK1,SK1,SK1,SK1,HR2,HR1,HR1,HR2,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,HR1,HR2,HR1,HR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,HR2,HR1,HR1,HR2,HR1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,HR1,HR2,HR1,HR1,SK1,HR2,HR2,HR2,HR2,HR2,SK1,HR2,HR2,HR1,HR1,HR2,HR1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,HR1,HR2,HR1,HR1,SK1,HR2,FR1,FR1,FR1,HR2,SK1,HR2,HR2,HR1,HR1,HR2,HR1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,HR1,HR2,HR1,HR1,HR2,HR2,HR2,HR2,HR2,HR2,HR2,HR2,SK1,HR1,HR1,HR2,HR1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,HR1,HR2,HR1,HR1,FR1,HR2,SK1,HR2,SK1,HR2,SK1,SK1,SK1,HR1,HR1,HR2,HR1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,HR2,HR1,HR1,HR1,HR2,FR1,FR1,FR1,FR1,FR1,HR2,SK1,SK1,HR1,HR1,HR1,HR2,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,HR2,HR1,BG1,HR2,BG1,BG1,BG1,BG1,BG1,FR1,SK1,HR2,SK1,FR1,BG1,HR1,HR2,BG1,BG1,FR2],
[FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR1,SK1,SK1,SK1,FR1,FR2,FR2,FR2,FR2,FR2,FR2]
]
pixels = DWARF_7
elif b > 250000:
race_ep = 'Gobelins'
type_ep = 'None'
hair_color_ep = 'None'
haircut_ep = 'None'
hair_prop_ep = 'None'
eyes_prop_ep = 'None'
blemishe_ep = 'None'
eyes_color_ep = 'None'
facial_hair_ep = 'None'
mouth_prop_ep = 'None'
mouth_ep = 'None'
tooth_color_ep = 'None'
nose_ep = 'None'
neck_ep = 'None'
ears_ep = 'None'
skin_ep = 'None'
ears = none
hair = none
hair_prop = none
neck = none
blemishes = none
#tooth color
mouth = none
facial_hair = none
rod = none
mouth_prop = none
#eye color
eyes = none
nose = none
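# Re-seed from b so this race branch derives its own deterministic sub-chain.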
seed(b)
c=randint(0,1000000)
if c > 800000:
SK1 = (112,168,104) #zombie green
SC1 = (88,117,83)
MO1 = SC1
SCR1 = SC1
skin_ep = 'Green'
elif c > 700000:
SK1 = (145,0,185) #PURPLE
SC1 = (120,0,160)
MO1 = SC1
SCR1 = SC1
skin_ep = 'Purple'
elif c > 400000:
SK1 = (185,160,60) #camel
SC1 = (150,125,25)
MO1 = SC1
SCR1 = SC1
skin_ep = 'Camel'
else:
SK1 = (205,205,57) #yellow
SC1 = (130,119,23)
MO1 = SC1
SCR1 = SC1
skin_ep = 'Wattle'
seed(c)
d=randint(0,1000000)
if d > 750000:
ears = EARS_2
ears_ep = 'Earring'
else:
ears = none
ears_ep = 'None'
seed(d)
e=randint(0,1000000)
if e > 960000:
hair_prop = CAP_2
hair_prop_ep = 'Cap'
elif e > 940000:
hair_prop = COWBOY_5
hair_prop_ep = 'Cowboy Hat'
elif e > 920000:
hair_prop = TOPHAT_2
hair_prop_ep = 'Top Hat'
elif e > 870000:
hair_prop = KNITTED_5
hair_prop_ep = 'Knitted Cap'
elif e > 850000:
hair_prop = Gobelin_Crown
hair_prop_ep = 'Gobelins Crown'
elif e > 830000:
hair_prop = FORCAP_2
hair_prop_ep = 'Cap Forward'
elif e > 800000:
hair_prop = BANDANA_2
hair_prop_ep = 'Bandana'
elif e > 780000:
hair_prop = FEDORA_5
hair_prop_ep = 'Fedora'
elif e > 750000:
hair_prop = POLICE_2
hair_prop_ep = 'Police'
elif e > 740000:
hair_prop = BEANI_2
hair_prop_ep = 'Beanie'
else:
hair_prop = none
hair_prop_ep = 'None'
seed(e)
f=randint(0,1000000)
if f > 900000:
neck = GoldChain_1
neck_ep = 'Gold Chain'
elif f > 820000:
neck = SilverChain_1
neck_ep = 'Silver Chain'
elif f > 800000:
neck = RING_1
neck_ep = 'Ring Onchain'
else:
neck = none
neck_ep = 'None'
seed(f)
g=randint(0,1000000)
if g > 300000:
DE1 = (255,255,255)
tooth_color_ep = 'White'
elif g > 200000:
DE1 = (163,110,16)
tooth_color_ep = 'Brown'
elif g > 80000:
DE1 = (255,203,0)
tooth_color_ep = 'Gold'
else:
DE1 = (200,0,0)
tooth_color_ep = 'Blood'
seed(g)
h=randint(0,1000000)
if h > 900000:
mouth_prop = CIGARETTE
mouth_prop_ep = 'Cigarette'
elif h > 880000:
mouth_prop = MASK_1
mouth_prop_ep = 'Medical Mask'
elif h > 820000:
mouth_prop = PIPE
mouth_prop_ep = 'Pipe'
elif h > 780000:
mouth_prop = VAPE
mouth_prop_ep = 'Vape'
else:
mouth_prop = none
mouth_prop_ep = 'None'
seed(h)
i=randint(0,1000000)
if i > 400000:
EY1 = (255,255,255)
eyes_color_ep = 'White'
elif i > 300000:
EY1 = (214,92,26)
eyes_color_ep = "Orange"
elif i > 200000:
EY1 = (176,61,133)
eyes_color_ep = "Purple"
elif i > 100000:
EY1 = (255,255,0)
eyes_color_ep = 'Yellow'
else:
EY1 = (255,0,0)
eyes_color_ep = 'Red'
seed(i)
j=randint(0,1000000)
if j > 970000:
eyes = TD_2
eyes_prop_ep ='3D Glasses'
elif j > 930000:
eyes = VR_2
eyes_prop_ep ='VR'
elif j > 880000:
eyes = ClassicShades_2
eyes_prop_ep ='Classic Shades'
elif j > 830000:
eyes = SmallShades_2
eyes_prop_ep ='Small Shades'
elif j > 780000:
eyes = EyePatch_2
eyes_prop_ep ='Eye Patch'
elif j > 730000:
eyes = NerdGlasses_2
eyes_prop_ep ='Nerd Glasses'
elif j > 680000:
eyes = BigShades_2
eyes_prop_ep ='Big Shades'
elif j > 650000:
eyes = EyeMask_2
eyes_prop_ep ='Eye Mask'
elif j > 600000:
eyes = HornedRimGlasses_2
eyes_prop_ep ='Horned Rim Glasses'
elif j > 550000:
eyes = RegularShades_2
eyes_prop_ep ='Regular Shades'
else:
eyes=none
eyes_prop_ep ='None'
MOLE=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,MO1,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
SCARE_1=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,SCR1,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,SCR1,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,SCR1,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,SCR1,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,SCR1,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
ROSY_1=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,RC1,0,0,0,0,RC1,RC1,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,RC1,0,0,0,0,0,RC1,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
seed(j)
k=randint(0,1000000)
if k > 970000:
blemishes = MOLE
blemishe_ep = 'Mole'
elif k > 940000:
blemishes = SCARE_1
blemishe_ep = 'Scare'
else:
blemishes = none
blemishe_ep = 'None'
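# GOBELIN: full 24x24 base sprite; DE1 (teeth) and EY1 (eyes) were rolled above.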
GOBELIN=[
[FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR1,FR1,FR1,FR1,FR1,FR1,FR1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,FR1,BG1,BG1,FR1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,FR1,BG1,FR1,SK1,FR1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SC1,SC1,SK1,SK1,SK1,SC1,SC1,SK1,SK1,SK1,FR1,SK1,FR1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,EY1,FR1,SK1,SK1,SK1,EY1,FR1,SK1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,FR1,FR1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,FR1,SK1,FR1,SK1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,DE1,SK1,SK1,DE1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,FR1,FR1,FR1,FR1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,SK1,SK1,SK1,SK1,FR1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,SK1,SK1,FR1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR1,FR1,FR1,FR1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR1,SK1,SK1,SK1,FR1,FR2,FR2,FR2,FR2,FR2,FR2]
]
pixels = GOBELIN
elif b > 150000:
race_ep = 'Orcs'
type_ep = 'None'
hair_color_ep = 'None'
haircut_ep = 'None'
hair_prop_ep = 'None'
eyes_prop_ep = 'None'
blemishe_ep = 'None'
eyes_color_ep = 'None'
facial_hair_ep = 'None'
mouth_prop_ep = 'None'
mouth_ep = 'None'
tooth_color_ep = 'None'
nose_ep = 'None'
neck_ep = 'None'
ears_ep = 'None'
skin_ep = 'None'
ears = none
hair = none
hair_prop = none
neck = none
blemishes = none
#tooth color
mouth = none
facial_hair = none
rod = none
mouth_prop = none
#eye color
eyes = none
nose = none
seed(b)
c=randint(0,1000000)
if c > 850000:
SK1 = (112,112,112) #grey
SC1 = (64,64,64)
MO1 = SC1
SCR1 = SC1
skin_ep = 'Smokey Grey'
elif c > 600000:
SK1 = (220,220,220) #light grey
SC1 = (180,180,180)
MO1 = SC1
SCR1 = SC1
skin_ep = 'Moon Grey'
elif c > 100000:
SK1 = (180,145,115) #Sand
SC1 = (120,100,60)
MO1 = SC1
SCR1 = SC1
skin_ep = 'Sand'
else:
SK1 = (153,0,0) #red
SC1 = (102,0,0)
MO1 = SC1
SCR1 = SC1
skin_ep = 'Red'
seed(c)
d=randint(0,1000000)
if d > 750000:
ears = EARS_1
ears_ep = 'Earring'
else:
ears = none
ears_ep = 'None'
seed(d)
e=randint(0,1000000)
if e > 960000:
hair_prop = CAP_2
hair_prop_ep = 'Cap'
elif e > 940000:
hair_prop = COWBOY_4
hair_prop_ep = 'Cowboy Hat'
elif e > 920000:
hair_prop = TOPHAT_2
hair_prop_ep = 'Top Hat'
elif e > 870000:
hair_prop = KNITTED_6
hair_prop_ep = 'Knitted Cap'
elif e > 860000:
hair_prop = HEADBAND_2
hair_prop_ep = 'Headband'
elif e > 830000:
hair_prop = FORCAP_2
hair_prop_ep = 'Cap Forward'
elif e > 800000:
hair_prop = BANDANA_2
hair_prop_ep = 'Bandana'
elif e > 780000:
hair_prop = FEDORA_2
hair_prop_ep = 'Fedora'
elif e > 750000:
hair_prop = POLICE_2
hair_prop_ep = 'Police'
elif e > 740000:
hair_prop = BEANI_2
hair_prop_ep = 'Beanie'
elif e > 700000:
hair_prop = ORC_HELMET
hair_prop_ep = 'Orc Helmet'
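# tonton == 99 flags the helmet; eyewear is cleared further down.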
tonton = 99
else:
hair_prop = none
hair_prop_ep = 'None'
seed(e)
f=randint(0,1000000)
if f > 900000:
neck = GoldChain_1
neck_ep = 'Gold Chain'
elif f > 820000:
neck = SilverChain_1
neck_ep = 'Silver Chain'
elif f > 800000:
neck = RING_1
neck_ep = 'Ring Onchain'
else:
neck = none
neck_ep = 'None'
seed(f)
g=randint(0,1000000)
if g > 300000:
DE1 = (255,255,255)
tooth_color_ep = 'White'
elif g > 200000:
DE1 = (163,110,16)
tooth_color_ep = 'Brown'
elif g > 80000:
DE1 = (255,203,0)
tooth_color_ep = 'Gold'
else:
DE1 = (200,0,0)
tooth_color_ep = 'Blood'
seed(g)
h=randint(0,1000000)
if h > 900000:
mouth_prop = CIGARETTE
mouth_prop_ep = 'Cigarette'
elif h > 880000:
mouth_prop = MASK_1
mouth_prop_ep = 'Medical Mask'
elif h > 820000:
mouth_prop = PIPE
mouth_prop_ep = 'Pipe'
elif h > 780000:
mouth_prop = VAPE
mouth_prop_ep = 'Vape'
else:
mouth_prop = none
mouth_prop_ep = 'None'
seed(h)
i=randint(0,1000000)
if i > 400000:
EY1 = (255,255,255)
eyes_color_ep = 'White'
elif i > 300000:
EY1 = (214,92,26)
eyes_color_ep = "Orange"
elif i > 200000:
EY1 = (176,61,133)
eyes_color_ep = "Purple"
elif i > 100000:
EY1 = (255,255,0)
eyes_color_ep = 'Yellow'
else:
EY1 = (255,0,0)
eyes_color_ep = 'Red'
seed(i)
j=randint(0,1000000)
if j > 970000:
eyes = TD_2
eyes_prop_ep ='3D Glasses'
elif j > 930000:
eyes = VR_2
eyes_prop_ep ='VR'
elif j > 880000:
eyes = ClassicShades_2
eyes_prop_ep ='Classic Shades'
elif j > 830000:
eyes = SmallShades_2
eyes_prop_ep ='Small Shades'
elif j > 780000:
eyes = EyePatch_2
eyes_prop_ep ='Eye Patch'
elif j > 730000:
eyes = BigShades_2
eyes_prop_ep ='Big Shades'
elif j > 700000:
eyes = EyeMask_2
eyes_prop_ep ='Eye Mask'
elif j > 650000:
eyes = HornedRimGlasses_2
eyes_prop_ep ='Horned Rim Glasses'
elif j > 600000:
eyes = RegularShades_2
eyes_prop_ep ='Regular Shades'
else:
eyes=none
eyes_prop_ep ='None'
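# tantan == 99 flags that no eyewear was rolled; below, an Orc Helmet
# (tonton == 99) clears any eyewear that was rolled. Both flags are
# assumed to be initialised earlier in the script.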
tantan = 99
if tonton == 99 and tantan != 99:
eyes = none
eyes_prop_ep = 'None'
MOLE=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,MO1,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
ROSY_1=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,RC1,0,0,0,0,RC1,RC1,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,RC1,0,0,0,0,0,RC1,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
seed(j)
k=randint(0,1000000)
if k > 970000:
blemishes = MOLE
blemishe_ep = 'Mole'
else:
blemishes = none
blemishe_ep = 'None'
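# ORC: full 24x24 base sprite; DE1 colours the teeth, SC1 the brow shading.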
ORC=[
[FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR1,FR1,FR1,FR1,FR1,FR1,FR1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,FR1,SK1,SK1,SK1,FR1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,FR1,SK1,SK1,SK1,SK1,SK1,FR1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,FR1,SK1,SK1,FR1,SK1,FR1,SK1,SK1,FR1,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,FR1,SK1,SK1,SK1,FR1,SK1,SK1,FR1,BG1,BG1,BG1,FR1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,FR1,BG1,FR1,FR1,SK1,FR1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SC1,SC1,SK1,SK1,SK1,SC1,SC1,SK1,SK1,FR1,FR1,SK1,SK1,FR1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,EY1,FR1,SK1,SK1,SK1,EY1,FR1,SK1,SK1,SK1,SK1,FR1,FR1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,FR1,SK1,FR1,SK1,SK1,SK1,SK1,FR1,FR1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,FR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,FR1,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,FR1,SK1,SK1,SK1,SK1,SK1,FR1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,DE1,SK1,SK1,DE1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,FR1,FR1,FR1,FR1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,SK1,SK1,SK1,FR1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR1,FR1,FR1,FR1,FR1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR1,SK1,SK1,SK1,FR1,FR2,FR2,FR2,FR2,FR2,FR2]
]
pixels = ORC
elif b > 135000:
race_ep = 'Wizards'
type_ep = 'White'
hair_color_ep = 'None'
haircut_ep = 'None'
hair_prop_ep = 'None'
eyes_prop_ep = 'None'
blemishe_ep = 'None'
eyes_color_ep = 'None'
facial_hair_ep = 'None'
mouth_prop_ep = 'None'
mouth_ep = 'None'
tooth_color_ep = 'None'
nose_ep = 'None'
neck_ep = 'None'
ears_ep = 'None'
skin_ep = 'None'
ears = none
hair = none
hair_prop = none
neck = none
blemishes = none
#tooth color
mouth = none
facial_hair = none
rod = none
mouth_prop = none
#eye color
eyes = none
nose = none
seed(b)
c=randint(0,1000000)
if c > 750000:
SK1 = (234,217,217)
SC1 = (165,141,141)
EY1 = (201,178,178)
SK2 = (255,255,255)
HRG3 = (220,222,234)
HRG2 = (183,179,191)
HRG4 = (203,200,212)
HRG5 = (87,101,113)
HRG1 = (0,0,0)
RC1 = (226,187,185)
MO1 = EY1
SCR1 = EY1
skin_ep = 'Albino'
elif c > 500000:
SK1 = (219,177,128)
SC1 = (166,110,44)
EY1 = (210,157,96)
SK2 = (235,203,166)
HRG3 = (213,200,183)
HRG2 = (184,163,135)
HRG4 = (209,189,164)
HRG5 = (87,101,113)
HRG1 = (0,0,0)
RC1 = (215,154,104)
MO1 = EY1
SCR1 = EY1
skin_ep = 'Light'
elif c > 250000:
SK1 = (174,139,97)
SC1 = (134,88,30)
EY1 = (167,124,71)
SK2 = (178,138,93)
HRG3 = (188,179,165)
HRG2 = (166,150,128)
HRG4 = (184,171,151)
HRG5 = (87,101,113)
HRG1 = (0,0,0)
RC1 = (191,105,71)
MO1 = EY1
SCR1 = EY1
skin_ep = 'Mid'
else:
SK1 = (113,63,29)
SC1 = (86,39,10)
EY1 = (114,55,17)
SK2 = (146,79,35)
HRG3 = (155,135,127)
HRG2 = (139,121,111)
HRG4 = (156,131,115)
HRG5 = (87,101,113)
HRG1 = (0,0,0)
RC1 = (142,36,2)
MO1 = EY1
SCR1 = EY1
skin_ep = 'Dark'
seed(c)
d=randint(0,1000000)
if d > 750000:
ears = EARS_1
ears_ep = 'Earring'
else:
ears = none
ears_ep = 'None'
seed(d)
e=randint(0,1000000)
if e > 750000:
HR1 = (140,140,140)
hair_color_ep = 'Granite'
elif e > 500000:
HR1 = (90,90,90)
hair_color_ep = 'Carbon Grey'
elif e > 250000:
HR1 = (240,240,240)
hair_color_ep = 'Seashell'
else:
HR1 = (190,190,190)
hair_color_ep = 'Silver'
seed(e)
f=randint(0,1000000)
if f > 900000:
mouth_prop = CIGARETTE
mouth_prop_ep = 'Cigarette'
elif f > 840000:
mouth_prop = PIPE
mouth_prop_ep = 'Pipe'
elif f > 800000:
mouth_prop = VAPE
mouth_prop_ep = 'Vape'
else:
mouth_prop = none
mouth_prop_ep = 'None'
seed(f)
g=randint(0,1000000)
if g > 950000:
hair_prop = COWBOY_7
hair_prop_ep = 'Cowboy Hat'
elif g > 900000:
hair_prop = TOPHAT_7
hair_prop_ep = 'Top Hat'
elif g > 850000:
hair_prop = KNITTED_7
hair_prop_ep = 'Knitted Cap'
elif g > 800000:
hair_prop = FORCAP_7
hair_prop_ep = 'Cap Forward'
elif g > 750000:
hair_prop = FEDORA_7
hair_prop_ep = 'Fedora'
elif g > 700000:
hair_prop = BANDANA_7
hair_prop_ep = 'Bandana'
elif g > 650000:
hair_prop = POLICE_7
hair_prop_ep = 'Police'
elif g > 600000:
hair_prop = CAP_7
hair_prop_ep = 'Cap'
else:
hair_prop = none
hair_prop_ep = 'None'
MOLE=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,MO1,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
ROSY_1=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,RC1,0,0,0,0,RC1,RC1,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,RC1,0,0,0,0,0,RC1,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
seed(g)
h=randint(0,1000000)
if h > 970000:
blemishes = ROSY_1
blemishe_ep = 'Rosy Cheeks'
elif h > 900000:
blemishes = MOLE
blemishe_ep = 'Mole'
else:
blemishes = none
blemishe_ep = 'None'
seed(h)
i=randint(0,1000000)
if i > 970000:
eyes = TD_1
eyes_prop_ep ='3D Glasses'
elif i > 930000:
eyes = VR_1
eyes_prop_ep ='VR'
elif i > 880000:
eyes = ClassicShades_1
eyes_prop_ep ='Classic Shades'
elif i > 830000:
eyes = EyePatch_1
eyes_prop_ep ='Eye Patch'
elif i > 780000:
eyes = NerdGlasses_1
eyes_prop_ep ='Nerd Glasses'
elif i > 730000:
eyes = BigShades_1
eyes_prop_ep ='Big Shades'
elif i > 700000:
eyes = EyeMask_1
eyes_prop_ep ='Eye Mask'
elif i > 650000:
eyes = HornedRimGlasses_1
eyes_prop_ep ='Horned Rim Glasses'
elif i > 600000:
eyes = RegularShades_1
eyes_prop_ep ='Regular Shades'
else:
eyes=none
eyes_prop_ep ='None'
seed(i)
j=randint(0,1000000)
if j > 975000:
nose = NOSE_1
nose_ep = 'Clown Nose'
else:
nose = none
nose_ep = 'None'
seed(j)
k=randint(0,1000000)
if k > 975000:
mouth = SMILE
mouth_ep = 'Smile'
elif k > 950000:
mouth = FROWN
mouth_ep = 'Frown'
else:
mouth = none
mouth_ep = 'None'
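# WIZ_WHITE: full 24x24 base sprite; HR1 doubles as hair and long beard.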
WIZ_WHITE=[
[FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,HR1,HR1,BG1,BG1,HR1,HR1,HR1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,HR1,HR1,HR1,SK1,SK1,HR1,HR1,SK1,SK1,HR1,HR1,HR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,HR1,HR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,HR1,HR1,HR1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,HR1,HR1,HR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,HR1,HR1,HR1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,HR1,HR1,FR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,HR1,HR1,HR1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,HR1,HR1,FR1,SC1,SC1,SK1,SK1,SK1,SC1,SC1,SK1,SK1,HR1,HR1,HR1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,HR1,HR1,FR1,EY1,FR1,SK1,SK1,SK1,EY1,FR1,SK1,SK1,HR1,HR1,HR1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,HR1,HR1,FR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,HR1,HR1,HR1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,HR1,HR1,FR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,HR1,HR1,HR1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,HR1,HR1,FR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,HR1,HR1,HR1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,HR1,HR1,FR1,SK1,SK1,FR1,FR1,SK1,SK1,SK1,SK1,SK1,HR1,HR1,HR1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,HR1,HR1,FR1,SK1,HR1,HR1,HR1,HR1,HR1,SK1,SK1,SK1,HR1,HR1,HR1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,HR1,FR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,HR1,FR1,HR1,HR1,HR1,FR1,FR1,FR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,HR1,FR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,HR1,HR1,FR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,HR1,HR1,HR1,FR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,FR1,FR1,HR1,HR1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,HR1,HR1,FR1,HR1,HR1,FR1,FR1,FR1,FR1,SK1,FR1,HR1,HR1,BG1,BG1,BG1,FR2],
[FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR1,FR1,FR2,FR1,FR2,FR2,FR2,FR1,FR2,FR2,FR2,FR2,FR2,FR2]
]
pixels = WIZ_WHITE
elif b > 110000:
race_ep = 'Wizards'
type_ep = 'Grey'
hair_color_ep = 'None'
haircut_ep = 'None'
hair_prop_ep = 'None'
eyes_prop_ep = 'None'
blemishe_ep = 'None'
eyes_color_ep = 'None'
facial_hair_ep = 'None'
mouth_prop_ep = 'None'
mouth_ep = 'None'
tooth_color_ep = 'None'
nose_ep = 'None'
neck_ep = 'None'
ears_ep = 'None'
skin_ep = 'None'
ears = none
hair = none
hair_prop = none
neck = none
blemishes = none
#tooth color
mouth = none
facial_hair = none
rod = none
mouth_prop = none
#eye color
eyes = none
nose = none
seed(b)
c=randint(0,1000000)
if c > 750000:
SK1 = (234,217,217)
SC1 = (165,141,141)
EY1 = (201,178,178)
SK2 = (255,255,255)
HRG3 = (220,222,234)
HRG2 = (183,179,191)
HRG4 = (203,200,212)
HRG5 = (87,101,113)
HRG1 = (0,0,0)
RC1 = (226,187,185)
MO1 = EY1
SCR1 = EY1
skin_ep = 'Albino'
elif c > 500000:
SK1 = (219,177,128)
SC1 = (166,110,44)
EY1 = (210,157,96)
SK2 = (235,203,166)
HRG3 = (213,200,183)
HRG2 = (184,163,135)
HRG4 = (209,189,164)
HRG5 = (87,101,113)
HRG1 = (0,0,0)
RC1 = (215,154,104)
MO1 = EY1
SCR1 = EY1
skin_ep = 'Light'
elif c > 250000:
SK1 = (174,139,97)
SC1 = (134,88,30)
EY1 = (167,124,71)
SK2 = (178,138,93)
HRG3 = (188,179,165)
HRG2 = (166,150,128)
HRG4 = (184,171,151)
HRG5 = (87,101,113)
HRG1 = (0,0,0)
RC1 = (191,105,71)
MO1 = EY1
SCR1 = EY1
skin_ep = 'Mid'
else:
SK1 = (113,63,29)
SC1 = (86,39,10)
EY1 = (114,55,17)
SK2 = (146,79,35)
HRG3 = (155,135,127)
HRG2 = (139,121,111)
HRG4 = (156,131,115)
HRG5 = (87,101,113)
HRG1 = (0,0,0)
RC1 = (142,36,2)
MO1 = EY1
SCR1 = EY1
skin_ep = 'Dark'
seed(c)
d=randint(0,1000000)
if d > 750000:
ears = EARS_1
ears_ep = 'Earring'
else:
ears = none
ears_ep = 'None'
seed(d)
e=randint(0,1000000)
if e > 750000:
CH1 = nr #nr is defined earlier in the script (black, per the 'Black & Granite' label)
CH2= (130,130,130)
HR1 = (160,160,160)
BR1 = (190,190,190)
hair_color_ep = 'Black & Granite'
elif e > 500000:
CH2 = (10,10,10)
CH1= (50,50,50)
HR1 = (160,160,160)
BR1 = (190,190,190)
hair_color_ep = 'Dark Grey & Black'
elif e > 250000:
CH1 = (130,130,130)
CH2= (230,230,230)
HR1 = (160,160,160)
BR1 = (190,190,190)
hair_color_ep = 'Granite & Seashell'
else:
CH1 = (50,50,50)
CH2= (200,200,200)
HR1 = (160,160,160)
BR1 = (190,190,190)
hair_color_ep = 'Dark Grey & Silver'
seed(e)
f=randint(0,1000000)
if f > 900000:
mouth_prop = CIGARETTE
mouth_prop_ep = 'Cigarette'
elif f > 840000:
mouth_prop = PIPE
mouth_prop_ep = 'Pipe'
elif f > 800000:
mouth_prop = VAPE
mouth_prop_ep = 'Vape'
else:
mouth_prop = none
mouth_prop_ep = 'None'
seed(f)
g=randint(0,1000000)
MOLE=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,MO1,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
ROSY_1=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,RC1,0,0,0,0,RC1,RC1,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,RC1,0,0,0,0,0,RC1,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
seed(g)
h=randint(0,1000000)
if h > 970000:
blemishes = ROSY_1
blemishe_ep = 'Rosy Cheeks'
elif h > 900000:
blemishes = MOLE
blemishe_ep = 'Mole'
else:
blemishes = none
blemishe_ep = 'None'
seed(h)
i=randint(0,1000000)
if i > 970000:
eyes = TD_1
eyes_prop_ep ='3D Glasses'
elif i > 930000:
eyes = VR_1
eyes_prop_ep ='VR'
elif i > 880000:
eyes = ClassicShades_1
eyes_prop_ep ='Classic Shades'
elif i > 830000:
eyes = EyePatch_1
eyes_prop_ep ='Eye Patch'
elif i > 780000:
eyes = NerdGlasses_1
eyes_prop_ep ='Nerd Glasses'
elif i > 730000:
eyes = BigShades_1
eyes_prop_ep ='Big Shades'
elif i > 700000:
eyes = EyeMask_1
eyes_prop_ep ='Eye Mask'
elif i > 650000:
eyes = HornedRimGlasses_1
eyes_prop_ep ='Horned Rim Glasses'
elif i > 600000:
eyes = RegularShades_1
eyes_prop_ep ='Regular Shades'
else:
eyes=none
eyes_prop_ep ='None'
seed(i)
j=randint(0,1000000)
if j > 975000:
nose = NOSE_1
nose_ep = 'Clown Nose'
else:
nose = none
nose_ep = 'None'
seed(j)
k=randint(0,1000000)
if k > 975000:
mouth = SMILE
mouth_ep = 'Smile'
elif k > 950000:
mouth = FROWN
mouth_ep = 'Frown'
else:
mouth = none
mouth_ep = 'None'
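# WIZ_GREY: full 24x24 base sprite; CH1/CH2 draw the pointed hat, BR1 the
# beard, HR1 the hair.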
WIZ_GREY=[
[FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,CH1,CH1,CH1,CH1,CH1,CH1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,CH1,CH1,CH1,CH1,CH1,BG1,CH1,CH1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,BG1,BG1,CH1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,CH2,CH2,CH2,CH2,CH2,CH2,CH2,CH2,CH2,CH2,CH2,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,BG1,BG1,FR2],
[FR2,BG1,BG1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,FR1,HR1,HR1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SC1,SC1,SK1,SK1,SK1,SC1,SC1,SK1,SK1,FR1,HR1,HR1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,EY1,FR1,SK1,SK1,SK1,EY1,FR1,SK1,SK1,SK1,FR1,HR1,HR1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,FR1,HR1,HR1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,FR1,FR1,HR1,HR1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,FR1,HR1,HR1,HR1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,FR1,FR1,SK1,SK1,SK1,SK1,SK1,FR1,HR1,HR1,HR1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,BR1,BR1,BR1,BR1,BR1,SK1,SK1,SK1,FR1,HR1,HR1,HR1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,FR1,BR1,BR1,BR1,BR1,BR1,BR1,BR1,BR1,BR1,BR1,FR1,HR1,HR1,HR1,HR1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,FR1,BR1,BR1,BR1,FR1,FR1,FR1,BR1,BR1,BR1,BR1,BR1,FR1,HR1,HR1,HR1,HR1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,FR1,BR1,BR1,BR1,BR1,BR1,BR1,BR1,BR1,BR1,BR1,BR1,FR1,HR1,HR1,HR1,HR1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,BR1,BR1,BR1,BR1,BR1,BR1,BR1,BR1,BR1,BR1,FR1,HR1,HR1,HR1,HR1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR1,BR1,BR1,BR1,BR1,BR1,BR1,BR1,FR1,FR1,BG1,BG1,HR1,HR1,HR1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR1,BR1,BR1,FR1,FR1,FR1,FR1,SK1,FR1,BG1,BG1,BG1,HR1,HR1,FR2],
[FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR1,FR1,FR2,FR1,SK1,SK1,SK1,FR1,FR2,FR2,FR2,FR2,FR2,FR2]
]
pixels = WIZ_GREY
elif b > 85000:
race_ep = 'Wizards'
type_ep = 'Tower'
hair_color_ep = 'None'
haircut_ep = 'None'
hair_prop_ep = 'None'
eyes_prop_ep = 'None'
blemishe_ep = 'None'
eyes_color_ep = 'None'
facial_hair_ep = 'None'
mouth_prop_ep = 'None'
mouth_ep = 'None'
tooth_color_ep = 'None'
nose_ep = 'None'
neck_ep = 'None'
ears_ep = 'None'
skin_ep = 'None'
ears = none
hair = none
hair_prop = none
neck = none
blemishes = none
#tooth color
mouth = none
facial_hair = none
rod = none
mouth_prop = none
#eye color
eyes = none
nose = none
seed(b)
c=randint(0,1000000)
if c > 750000:
SK1 = (234,217,217)
EY1 = (201,178,178)
SK2 = (255,255,255)
HRG3 = (220,222,234)
HRG2 = (183,179,191)
HRG4 = (203,200,212)
HRG5 = (87,101,113)
HRG1 = (0,0,0)
RC1 = (226,187,185)
MO1 = EY1
SCR1 = EY1
skin_ep = 'Albino'
elif c > 500000:
SK1 = (219,177,128)
EY1 = (210,157,96)
SK2 = (235,203,166)
HRG3 = (213,200,183)
HRG2 = (184,163,135)
HRG4 = (209,189,164)
HRG5 = (87,101,113)
HRG1 = (0,0,0)
RC1 = (215,154,104)
MO1 = EY1
SCR1 = EY1
skin_ep = 'Light'
elif c > 250000:
SK1 = (174,139,97)
EY1 = (167,124,71)
SK2 = (178,138,93)
HRG3 = (188,179,165)
HRG2 = (166,150,128)
HRG4 = (184,171,151)
HRG5 = (87,101,113)
HRG1 = (0,0,0)
RC1 = (191,105,71)
MO1 = EY1
SCR1 = EY1
skin_ep = 'Mid'
else:
SK1 = (113,63,29)
EY1 = (114,55,17)
SK2 = (146,79,35)
HRG3 = (155,135,127)
HRG2 = (139,121,111)
HRG4 = (156,131,115)
HRG5 = (87,101,113)
HRG1 = (0,0,0)
RC1 = (142,36,2)
MO1 = EY1
SCR1 = EY1
skin_ep = 'Dark'
seed(c)
d=randint(0,1000000)
if d > 750000:
ears = EARS_1
ears_ep = 'Earring'
else:
ears = none
ears_ep = 'None'
seed(d)
e=randint(0,1000000)
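# In this branch SC1 doubles as the brow/beard shade, which is why the
# skin palettes above leave it unset; it is rolled here with the hair.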
if e > 750000:
SC1 = (80,80,80)
BR1 = (80,80,80)
HR1 = (160,160,160)
hair_color_ep = 'Grey & Carbon Grey'
elif e > 500000:
SC1 = (30,30,30)
BR1 = (30,30,30)
HR1 = (110,110,110)
hair_color_ep = 'Smokey Grey & Charcoal'
elif e > 250000:
SC1 = (80,80,80)
BR1 = (80,80,80)
HR1 = (235,235,235)
hair_color_ep = 'Seashell & Carbon Grey'
else:
SC1 = (155,155,155)
BR1 = (155,155,155)
HR1 = (235,235,235)
hair_color_ep = 'Seashell & Grey'
seed(e)
f=randint(0,1000000)
if f > 900000:
mouth_prop = CIGARETTE
mouth_prop_ep = 'Cigarette'
elif f > 840000:
mouth_prop = PIPE
mouth_prop_ep = 'Pipe'
elif f > 800000:
mouth_prop = VAPE
mouth_prop_ep = 'Vape'
else:
mouth_prop = none
mouth_prop_ep = 'None'
seed(f)
g=randint(0,1000000)
if g > 950000:
hair_prop = COWBOY_7
hair_prop_ep = 'Cowboy Hat'
elif g > 900000:
hair_prop = TOPHAT_7
hair_prop_ep = 'Top Hat'
elif g > 850000:
hair_prop = KNITTED_7
hair_prop_ep = 'Knitted Cap'
elif g > 800000:
hair_prop = FORCAP_7
hair_prop_ep = 'Cap Forward'
elif g > 750000:
hair_prop = FEDORA_7
hair_prop_ep = 'Fedora'
elif g > 700000:
hair_prop = BANDANA_7
hair_prop_ep = 'Bandana'
elif g > 650000:
hair_prop = POLICE_7
hair_prop_ep = 'Police'
elif g > 600000:
hair_prop = CAP_7
hair_prop_ep = 'Cap'
else:
hair_prop = none
hair_prop_ep = 'None'
MOLE=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,MO1,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
ROSY_1=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,RC1,0,0,0,0,RC1,RC1,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,RC1,0,0,0,0,0,RC1,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
seed(g)
h=randint(0,1000000)
if h > 970000:
blemishes = ROSY_1
blemishe_ep = 'Rosy Cheeks'
elif h > 900000:
blemishes = MOLE
blemishe_ep = 'Mole'
else:
blemishes = none
blemishe_ep = 'None'
seed(h)
i=randint(0,1000000)
if i > 970000:
eyes = TD_1
eyes_prop_ep ='3D Glasses'
elif i > 930000:
eyes = VR_1
eyes_prop_ep ='VR'
elif i > 880000:
eyes = ClassicShades_1
eyes_prop_ep ='Classic Shades'
elif i > 830000:
eyes = EyePatch_1
eyes_prop_ep ='Eye Patch'
elif i > 780000:
eyes = NerdGlasses_1
eyes_prop_ep ='Nerd Glasses'
elif i > 730000:
eyes = BigShades_1
eyes_prop_ep ='Big Shades'
elif i > 700000:
eyes = EyeMask_1
eyes_prop_ep ='Eye Mask'
elif i > 650000:
eyes = HornedRimGlasses_1
eyes_prop_ep ='Horned Rim Glasses'
elif i > 600000:
eyes = RegularShades_1
eyes_prop_ep ='Regular Shades'
else:
eyes=none
eyes_prop_ep ='None'
seed(i)
j=randint(0,1000000)
if j > 975000:
nose = NOSE_1
nose_ep = 'Clown Nose'
else:
nose = none
nose_ep = 'None'
seed(j)
k=randint(0,1000000)
if k > 975000:
mouth = SMILE
mouth_ep = 'Smile'
elif k > 950000:
mouth = FROWN
mouth_ep = 'Frown'
else:
mouth = none
mouth_ep = 'None'
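# Base 24x24 sprite (the later WIZ_WOODEN, WIZ_BLUE, SPECTRE, DARK_RIDER, DEAMON and
# DARK_LORD grids follow the same general scheme): FR2 = outer frame, BG1 = background,
# FR1 = dark outline, HR1/HR2 = hat and robe, SK1 = skin, SC1 = cheek shading,
# EY1 = eyes, BR1/BR2 = beard. Names inferred from the palette assignments above.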
WIZ_TOWER=[
[FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,HR1,HR1,HR1,SK1,SK1,HR1,HR1,HR1,SK1,SK1,HR1,HR1,HR1,HR1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,HR1,HR1,FR1,SK1,SK1,SK1,HR1,SK1,SK1,SK1,SK1,HR1,HR1,HR1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,HR1,HR1,FR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,HR1,HR1,HR1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,HR1,HR1,FR1,SC1,SC1,SC1,SK1,SK1,SC1,SC1,SC1,HR1,HR1,HR1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,HR1,HR1,FR1,EY1,FR1,SK1,SK1,SK1,EY1,FR1,SK1,HR1,HR1,HR1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,HR1,HR1,FR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,HR1,HR1,HR1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,HR1,HR1,FR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,HR1,HR1,HR1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,HR1,HR1,FR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,HR1,HR1,HR1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,HR1,HR1,FR1,SK1,SK1,FR1,FR1,SK1,SK1,SK1,SK1,HR1,HR1,HR1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,HR1,HR1,FR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,HR1,HR1,HR1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,HR1,HR1,FR1,SK1,SK1,BR1,BR1,BR1,SK1,SK1,SK1,HR1,HR1,HR1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,HR1,FR1,BR1,BR1,BR1,FR1,FR1,FR1,BR1,BR1,BR1,HR1,HR1,HR1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,HR1,HR1,FR1,BR1,BR1,BR1,BR1,BR1,BR1,BR1,BR1,HR1,HR1,HR1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,HR1,HR1,BG1,FR1,BR1,BR1,BR1,BR1,BR1,BR1,FR1,SK1,HR1,HR1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,HR1,BG1,BG1,BG1,FR1,BR1,BR1,BR1,BR1,FR1,SK1,SK1,FR1,HR1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR1,BR1,FR1,FR1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR1,FR2,FR1,SK1,SK1,SK1,FR1,FR2,FR2,FR2,FR2,FR2,FR2]
]
pixels = WIZ_TOWER
elif b > 60000:
race_ep = 'Wizards'
type_ep = 'Wood'
hair_color_ep = 'None'
haircut_ep = 'None'
hair_prop_ep = 'None'
eyes_prop_ep = 'None'
blemishe_ep = 'None'
eyes_color_ep = 'None'
facial_hair_ep = 'None'
mouth_prop_ep = 'None'
mouth_ep = 'None'
tooth_color_ep = 'None'
nose_ep = 'None'
neck_ep = 'None'
ears_ep = 'None'
skin_ep = 'None'
ears = none
hair = none
hair_prop = none
neck = none
blemishes = none
#tooth color
mouth = none
facial_hair = none
rod = none
mouth_prop = none
#eye color
eyes = none
nose = none
seed(b)
c=randint(0,1000000)
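# Skin-tone palettes: SK1/SK2 skin shades, SC1 cheek shading, EY1 eye-socket tone,
# HRG1-HRG5 apparently a grey-hair ramp, RC1 rosy-cheek tint; the mole (MO1) and
# scar (SCR1) colors reuse EY1.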
if c > 750000:
SK1 = (234,217,217)
SC1 = (165,141,141)
EY1 = (201,178,178)
SK2 = (255,255,255)
HRG3 = (220,222,234)
HRG2 = (183,179,191)
HRG4 = (203,200,212)
HRG5 = (87,101,113)
HRG1 = (0,0,0)
RC1 = (226,187,185)
MO1 = EY1
SCR1 = EY1
skin_ep = 'Albino'
elif c > 500000:
SK1 = (219,177,128)
SC1 = (166,110,44)
EY1 = (210,157,96)
SK2 = (235,203,166)
HRG3 = (213,200,183)
HRG2 = (184,163,135)
HRG4 = (209,189,164)
HRG5 = (87,101,113)
HRG1 = (0,0,0)
RC1 = (215,154,104)
MO1 = EY1
SCR1 = EY1
skin_ep = 'Light'
elif c > 250000:
SK1 = (174,139,97)
SC1 = (134,88,30)
EY1 = (167,124,71)
SK2 = (178,138,93)
HRG3 = (188,179,165)
HRG2 = (166,150,128)
HRG4 = (184,171,151)
HRG5 = (87,101,113)
HRG1 = (0,0,0)
RC1 = (191,105,71)
MO1 = EY1
SCR1 = EY1
skin_ep = 'Mid'
else:
SK1 = (113,63,29)
SC1 = (86,39,10)
EY1 = (114,55,17)
SK2 = (146,79,35)
HRG3 = (155,135,127)
HRG2 = (139,121,111)
HRG4 = (156,131,115)
HRG5 = (87,101,113)
HRG1 = (0,0,0)
RC1 = (142,36,2)
MO1 = EY1
SCR1 = EY1
skin_ep = 'Dark'
seed(c)
d=randint(0,1000000)
if d > 750000:
ears = EARS_1
ears_ep = 'Earring'
else:
ears = none
ears_ep = 'None'
seed(d)
e=randint(0,1000000)
if e > 750000:
HR1 = (160,110,30)
HR2 = (130,60,20)
BR2 = (200,230,180)
BR1 = BE2
hair_color_ep = 'Taupe & Cookie Brown'
elif e > 500000:
HR1 = (130,90,10)
HR2 = (70,50,10)
BR2 = (200,230,180)
hair_color_ep = 'Brown & Cookie Brown'
BR1 = BE2
elif e > 250000:
HR1 = (160,110,30)
HR2 = (130,60,20)
BR2 = (60,200,180)
BR1 = (30,20,5)
hair_color_ep = 'Taupe & Graphite'
else:
HR1 = (130,90,10)
HR2 = (70,50,10)
BR2 = (60,200,180)
BR1 = (30,20,5)
hair_color_ep = 'Brown & Graphite'
seed(e)
f=randint(0,1000000)
if f > 900000:
mouth_prop = CIGARETTE
mouth_prop_ep = 'Cigarette'
elif f > 840000:
mouth_prop = PIPE
mouth_prop_ep = 'Pipe'
elif f > 800000:
mouth_prop = VAPE
mouth_prop_ep = 'Vape'
else:
mouth_prop = none
mouth_prop_ep = 'None'
seed(f)
g=randint(0,1000000)
if g > 975000:
mouth = SMILE
mouth_ep = 'Smile'
elif g > 950000:
mouth = FROWN
mouth_ep = 'Frown'
else:
mouth = none
mouth_ep = 'None'
MOLE=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,MO1,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
ROSY_1=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,RC1,0,0,0,0,RC1,RC1,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,RC1,0,0,0,0,0,RC1,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
seed(g)
h=randint(0,1000000)
if h > 970000:
blemishes = ROSY_1
blemishe_ep = 'Rosy Cheeks'
elif h > 900000:
blemishes = MOLE
blemishe_ep = 'Mole'
else:
blemishes = none
blemishe_ep = 'None'
seed(h)
i=randint(0,1000000)
if i > 970000:
eyes = TD_1
eyes_prop_ep ='3D Glasses'
elif i > 930000:
eyes = VR_1
eyes_prop_ep ='VR'
elif i > 880000:
eyes = ClassicShades_1
eyes_prop_ep ='Classic Shades'
elif i > 830000:
eyes = EyePatch_1
eyes_prop_ep ='Eye Patch'
elif i > 780000:
eyes = NerdGlasses_1
eyes_prop_ep ='Nerd Glasses'
elif i > 730000:
eyes = BigShades_1
eyes_prop_ep ='Big Shades'
elif i > 700000:
eyes = EyeMask_1
eyes_prop_ep ='Eye Mask'
elif i > 650000:
eyes = HornedRimGlasses_1
eyes_prop_ep ='Horned Rim Glasses'
elif i > 600000:
eyes = RegularShades_1
eyes_prop_ep ='Regular Shades'
else:
eyes=none
eyes_prop_ep ='None'
seed(i)
j=randint(0,1000000)
if j > 975000:
nose = NOSE_1
nose_ep = 'Clown Nose'
else:
nose = none
nose_ep = 'None'
WIZ_WOODEN=[
[FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,HR2,HR2,HR2,HR2,HR2,HR2,HR2,HR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,HR2,HR2,HR2,HR2,HR2,HR2,HR2,HR2,HR2,HR2,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,HR2,HR2,HR2,HR1,HR1,HR1,HR1,HR2,HR2,HR2,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,HR1,HR1,HR2,HR2,HR2,HR2,HR1,HR1,HR1,HR1,HR1,HR1,HR2,HR2,HR2,HR2,HR1,HR1,BG1,BG1,FR2],
[FR2,BG1,HR1,HR1,HR1,HR1,HR2,HR2,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR2,HR2,HR1,HR1,HR1,HR1,BG1,FR2],
[FR2,BG1,HR1,HR1,HR1,HR1,HR1,HR2,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR2,HR1,HR1,HR1,HR1,HR1,BG1,FR2],
[FR2,BG1,HR1,HR1,HR1,HR1,HR1,BR2,HR2,HR1,HR1,HR1,HR1,HR1,HR1,HR2,HR1,HR1,HR1,HR1,HR1,HR1,BG1,FR2],
[FR2,BG1,HR1,BG1,BG1,HR1,BR2,HR1,HR1,HR1,SK1,SK1,SK1,SK1,HR1,HR1,HR1,HR1,HR1,BG1,BG1,HR1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,HR1,BR2,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,HR1,HR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BR2,SC1,SC1,SK1,SK1,SK1,SC1,SC1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BR2,EY1,FR1,SK1,SK1,SK1,EY1,FR1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BR2,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,BR1,FR1,FR1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BR2,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,BR1,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,BR2,SK1,FR1,FR1,SK1,SK1,SK1,BR1,BR1,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,BR1,BR2,SK1,SK1,SK1,SK1,BR1,BR1,BR1,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,BR2,BR1,BR1,BR1,BR1,BR1,BR1,BR1,BR1,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,BR1,BR1,FR1,FR1,FR1,BR1,BR1,BR1,BR1,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,BR1,BR1,BR1,BR1,BR1,BR1,BR1,BR1,BR1,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,BR1,BR1,BR1,BR1,BR1,BR1,BR1,BR1,BR1,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR1,BR1,BR1,BR1,BR1,BR1,BR1,BR1,FR1,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR1,BR1,BR1,BR1,BR1,BR1,FR1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR1,FR1,FR1,FR1,FR1,SK1,SK1,FR1,FR2,FR2,FR2,FR2,FR2,FR2]
]
pixels = WIZ_WOODEN
elif b > 35000:
race_ep = 'Wizards'
type_ep = 'Blue'
hair_color_ep = 'None'
haircut_ep = 'None'
hair_prop_ep = 'None'
eyes_prop_ep = 'None'
blemishe_ep = 'None'
eyes_color_ep = 'None'
facial_hair_ep = 'None'
mouth_prop_ep = 'None'
mouth_ep = 'None'
tooth_color_ep = 'None'
nose_ep = 'None'
neck_ep = 'None'
ears_ep = 'None'
skin_ep = 'None'
ears = none
hair = none
hair_prop = none
neck = none
blemishes = none
#tooth color
mouth = none
facial_hair = none
rod = none
mouth_prop = none
#eye color
eyes = none
nose = none
seed(b)
c=randint(0,1000000)
if c > 750000:
HR1 = (30,25,200)
HR2 = (255,218,0)
SK1 = (234,217,217)
SC1 = (190,215,240)
BR1 = (190,215,240)
EY1 = (201,178,178)
SK2 = (255,255,255)
HRG3 = (220,222,234)
HRG2 = (183,179,191)
HRG4 = (203,200,212)
HRG5 = (87,101,113)
HRG1 = (0,0,0)
RC1 = (226,187,185)
skin_ep = 'Albino'
MO1 = EY1
SCR1 = EY1
hair_color_ep = 'Persian Blue'
elif c > 500000:
HR1 = (10,50,100)
HR2 = (216,214,203)
SK1 = (219,177,128)
SC1 = (190,215,240)
BR1 = (190,215,240)
EY1 = (210,157,96)
SK2 = (235,203,166)
HRG3 = (213,200,183)
HRG2 = (184,163,135)
HRG4 = (209,189,164)
HRG5 = (87,101,113)
HRG1 = (0,0,0)
RC1 = (215,154,104)
MO1 = EY1
SCR1 = EY1
skin_ep = 'Light'
hair_color_ep = 'Sapphire'
elif c > 250000:
HR1 = (60,10,145)
HR2 = (255,218,0)
SK1 = (174,139,97)
SC1 = (190,215,240)
BR1 = (190,215,240)
EY1 = (167,124,71)
SK2 = (178,138,93)
HRG3 = (188,179,165)
HRG2 = (166,150,128)
HRG4 = (184,171,151)
HRG5 = (87,101,113)
HRG1 = (0,0,0)
RC1 = (191,105,71)
MO1 = EY1
SCR1 = EY1
skin_ep = 'Mid'
hair_color_ep = 'Indigo'
else:
HR1 = (30,180,220)
HR2 = (216,214,203)
SK1 = (113,63,29)
SC1 = (190,215,240)
BR1 = (190,215,240)
EY1 = (114,55,17)
SK2 = (146,79,35)
HRG3 = (155,135,127)
HRG2 = (139,121,111)
HRG4 = (156,131,115)
HRG5 = (87,101,113)
HRG1 = (0,0,0)
RC1 = (142,36,2)
MO1 = EY1
SCR1 = EY1
skin_ep = 'Dark'
hair_color_ep = 'Topaz'
seed(c)
d=randint(0,1000000)
if d > 750000:
ears = EARS_1
ears_ep = 'Earring'
else:
ears = none
ears_ep = 'None'
seed(d)
e=randint(0,1000000)
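# Neck selection is commented out for this race; e is still rolled so the seed chain
# stays aligned with the other branches.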
#if e > 900000:
# neck = GoldChain_1
#elif e > 700000:
# neck = SilverChain_1
#elif e > 500000:
# neck = RING_1
#else:
# neck = none
seed(e)
f=randint(0,1000000)
if f > 900000:
mouth_prop = CIGARETTE
mouth_prop_ep = 'Cigarette'
elif f > 840000:
mouth_prop = PIPE
mouth_prop_ep = 'Pipe'
elif f > 800000:
mouth_prop = VAPE
mouth_prop_ep = 'Vape'
else:
mouth_prop = none
mouth_prop_ep = 'None'
seed(f)
g=randint(0,1000000)
if g > 975000:
mouth = SMILE
mouth_ep = 'Smile'
elif g > 950000:
mouth = FROWN
mouth_ep = 'Frown'
else:
mouth = none
mouth_ep = 'None'
MOLE=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,MO1,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
ROSY_1=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,RC1,0,0,0,0,RC1,RC1,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,RC1,0,0,0,0,0,RC1,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
seed(g)
h=randint(0,1000000)
if h > 970000:
blemishes = ROSY_1
blemishe_ep = 'Rosy Cheeks'
elif h > 900000:
blemishes = MOLE
blemishe_ep = 'Mole'
else:
blemishes = none
blemishe_ep = 'None'
seed(h)
i=randint(0,1000000)
if i > 970000:
eyes = TD_1
eyes_prop_ep ='3D Glasses'
elif i > 930000:
eyes = VR_1
eyes_prop_ep ='VR'
elif i > 880000:
eyes = ClassicShades_1
eyes_prop_ep ='Classic Shades'
elif i > 830000:
eyes = EyePatch_1
eyes_prop_ep ='Eye Patch'
elif i > 780000:
eyes = NerdGlasses_1
eyes_prop_ep ='Nerd Glasses'
elif i > 730000:
eyes = BigShades_1
eyes_prop_ep ='Big Shades'
elif i > 700000:
eyes = EyeMask_1
eyes_prop_ep ='Eye Mask'
elif i > 650000:
eyes = HornedRimGlasses_1
eyes_prop_ep ='Horned Rim Glasses'
elif i > 600000:
eyes = RegularShades_1
eyes_prop_ep ='Regular Shades'
else:
eyes=none
eyes_prop_ep ='None'
seed(i)
j=randint(0,1000000)
if j > 975000:
nose = NOSE_1
nose_ep = 'Clown Nose'
else:
nose = none
nose_ep = 'None'
WIZ_BLUE=[
[FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,HR1,HR1,HR1,HR2,HR2,HR2,HR2,HR2,HR2,HR2,HR1,HR1,HR1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,HR1,HR1,HR1,HR2,SK1,SK1,SK1,SK1,SK1,SK1,SK1,HR2,HR1,HR1,HR1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,HR1,HR1,HR2,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,HR2,HR1,HR1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,HR1,HR1,HR2,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,HR2,HR1,HR1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,HR1,HR1,HR2,SC1,SC1,SC1,SK1,SK1,SC1,SC1,SC1,SK1,HR2,HR1,HR1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,HR1,HR1,HR2,EY1,FR1,SK1,SK1,SK1,EY1,FR1,SK1,SK1,HR2,HR1,HR1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,HR1,HR1,HR2,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,HR2,HR1,HR1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,HR1,HR1,HR2,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,HR2,HR1,HR1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,HR1,HR1,HR2,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,HR2,HR1,HR1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,HR1,HR1,HR2,SK1,SK1,FR1,FR1,SK1,SK1,SK1,SK1,SK1,HR2,HR1,HR1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,HR1,HR1,HR2,BR1,BR1,BR1,BR1,BR1,BR1,BR1,BR1,BR1,BR1,BR1,HR2,HR1,HR1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,HR1,HR1,HR2,BR1,BR1,BR1,BR1,BR1,BR1,BR1,BR1,BR1,BR1,BR1,HR2,HR1,HR1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,HR1,HR1,HR2,BR1,BR1,BR1,FR1,FR1,FR1,BR1,BR1,BR1,BR1,BR1,HR2,HR1,HR1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,HR1,HR1,HR2,BR1,BR1,BR1,BR1,BR1,BR1,BR1,BR1,BR1,BR1,BR1,HR2,HR1,HR1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,HR1,HR1,HR2,BR1,BR1,BR1,BR1,BR1,BR1,BR1,BR1,BR1,FR1,FR1,HR2,HR1,HR1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,HR1,HR1,HR2,BR1,BR1,BR1,BR1,BR1,BR1,BR1,FR1,SK1,HR2,HR1,HR1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,HR1,HR1,HR2,FR1,FR1,BR1,BR1,BR1,FR1,FR1,SK1,HR2,HR1,HR1,HR1,BG1,BG1,BG1,FR2],
[FR2,FR2,FR2,FR2,FR2,FR2,HR1,HR1,HR2,FR2,FR1,BR1,FR1,FR1,FR2,HR2,HR1,HR1,HR1,HR1,FR2,FR2,FR2,FR2]
]
pixels = WIZ_BLUE
elif b > 19000:
race_ep = 'Unknown'
type_ep = 'Male'
hair_color_ep = 'None'
haircut_ep = 'None'
hair_prop_ep = 'None'
eyes_prop_ep = 'None'
blemishe_ep = 'None'
eyes_color_ep = 'None'
facial_hair_ep = 'None'
mouth_prop_ep = 'None'
mouth_ep = 'None'
tooth_color_ep = 'None'
nose_ep = 'None'
neck_ep = 'None'
ears_ep = 'None'
skin_ep = 'None'
ears = none
hair = none
hair_prop = none
neck = none
blemishes = none
#tooth color
mouth = none
facial_hair = none
rod = none
mouth_prop = none
#eye color
eyes = none
nose = none
seed(b)
c=randint(0,1000000)
if c > 750000:
SK1 = (250,200,170)
HR1 = (130,130,130)
skin_ep = 'Peach'
elif c > 500000:
SK1 = (200,170,140)
HR1 = (125,110,90)
skin_ep = 'Dust'
elif c > 250000:
SK1 = (240,210,190)
HR1 = (170,150,120)
skin_ep = 'Bone'
else:
SK1 = (195,175,165)
HR1 = (100,95,85)
skin_ep = 'Silk'
seed(c)
d=randint(0,1000000)
if d > 750000:
ears = EARS_4
ears_ep = 'Earring'
else:
ears = none
ears_ep = 'None'
seed(d)
e=randint(0,1000000)
if e > 950000:
hair_prop = CAP_5
hair_prop_ep = 'Cap'
elif e > 900000:
hair_prop = KNITTED_4
hair_prop_ep = 'Knitted Cap'
elif e > 850000:
hair_prop = HEADBAND_7
hair_prop_ep = 'Headband'
elif e > 800000:
hair_prop = FORCAP_3
hair_prop_ep = 'Cap Forward'
elif e > 750000:
hair_prop = COWBOY_3
hair_prop_ep = 'Cowboy Hat'
elif e > 700000:
hair_prop = TOPHAT_3
hair_prop_ep = 'Top Hat'
else:
hair_prop = none
hair_prop_ep = 'None'
seed(e)
f=randint(0,1000000)
if f > 980000:
neck = RING_3
neck_ep = 'Ring Onchain'
elif f > 880000:
neck = GoldChain_4
neck_ep = 'Gold Chain'
tutu = 99
elif f > 800000:
neck = SilverChain_3
neck_ep = 'Silver Chain'
else:
neck = none
neck_ep = 'None'
seed(f)
g=randint(0,1000000)
if g > 975000:
mouth = SMILE
mouth_ep = 'Smile'
elif g > 950000:
mouth = FROWN
mouth_ep = 'Frown'
tyty = 99
else:
mouth = none
mouth_ep = 'None'
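# If both a gold chain and a frown were rolled, drop the chain (presumably the two sprites overlap).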
if tutu == 99 and tyty == 99:
neck = none
neck_ep = 'None'
seed(g)
h=randint(0,1000000)
if h > 900000:
mouth_prop = CIGARETTE
mouth_prop_ep = 'Cigarette'
elif h > 840000:
mouth_prop = PIPE
mouth_prop_ep = 'Pipe'
elif h > 800000:
mouth_prop = VAPE
mouth_prop_ep = 'Vape'
else:
mouth_prop = none
mouth_prop_ep = 'None'
seed(h)
i=randint(0,1000000)
if i > 200000:
EY1 = (255,255,255)
eyes_color_ep = 'White'
elif i > 80000:
EY1 = (230,180,100)
eyes_color_ep = 'Peach'
else:
EY1 = (78,154,197)
eyes_color_ep = 'Blue'
seed(i)
j=randint(0,1000000)
if j > 950000:
eyes = ClassicShades_4
eyes_prop_ep ='Classic Shades'
elif j > 900000:
eyes = EyePatch_4
eyes_prop_ep ='Eye Patch'
elif j > 850000:
eyes = RegularShades_4
eyes_prop_ep ='Regular Shades'
else:
eyes=none
eyes_prop_ep ='None'
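# 'GOLLUN' base sprite (hunched figure); the bare bl entry on the mouth row is the
# white color global defined earlier (apparently a tooth).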
GOLLUN=[
[FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,HR1,HR1,HR1,HR1,HR1,HR1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,HR1,SK1,HR1,SK1,HR1,SK1,HR1,HR1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,HR1,SK1,SK1,HR1,SK1,HR1,SK1,HR1,SK1,HR1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,HR1,SK1,HR1,SK1,SK1,HR1,SK1,HR1,SK1,HR1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,HR1,SK1,HR1,SK1,HR1,SK1,SK1,HR1,SK1,HR1,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,HR1,EY1,EY1,SK1,SK1,SK1,EY1,EY1,SK1,HR1,SK1,FR1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,HR1,EY1,FR1,SK1,SK1,SK1,EY1,FR1,SK1,HR1,SK1,FR1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,HR1,SK1,HR1,SK1,SK1,SK1,SK1,HR1,SK1,HR1,SK1,FR1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,HR1,SK1,SK1,SK1,FR1,SK1,SK1,SK1,SK1,FR1,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,bl,SK1,SK1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,FR1,FR1,FR1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,SK1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR1,FR1,FR1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR1,SK1,SK1,SK1,FR1,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2]
]
pixels = GOLLUN
elif b > 10000:
race_ep = 'Wraiths'
type_ep = 'None'
hair_color_ep = 'None'
haircut_ep = 'None'
hair_prop_ep = 'None'
eyes_prop_ep = 'None'
blemishe_ep = 'None'
eyes_color_ep = 'None'
facial_hair_ep = 'None'
mouth_prop_ep = 'None'
mouth_ep = 'None'
tooth_color_ep = 'None'
nose_ep = 'None'
neck_ep = 'None'
ears_ep = 'None'
skin_ep = 'None'
ears = none
hair = none
hair_prop = none
neck = none
blemishes = none
#tooth color
mouth = none
facial_hair = none
rod = none
mouth_prop = none
#eye color
eyes = none
nose = none
seed(b)
c=randint(0,1000000)
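# Wraith palettes; nr is the black color global (cf. the Daemon eye-color branch
# further down, where nr is labelled 'Black' and bl 'White').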
if c > 500000:
SK1 = (50,50,50)
HR1 = (100,100,100)
SC1 = nr
MO1 = nr
skin_ep = 'Dark Grey'
elif c > 400000:
SK1 = (128,128,128)
HR1 = (255,193,7) #GOLD
SC1 = nr
MO1 = nr
skin_ep = 'Granite'
elif c > 300000:
SK1 = (128,128,128)
HR1 = (200,130,40) #BRONZE
SC1 = nr
MO1 = nr
skin_ep = 'Granite'
elif c > 250000:
SK1 = (142,36,170) #VIOLET
HR1 = (40,5,55)
SC1 = (74,20,140)
MO1 = SC1
skin_ep = 'Eggplant'
else:
SK1 = (128,128,128)
HR1 = (230,230,230)
SC1 = (30,30,30)
MO1 = SC1
skin_ep = 'Granite'
seed(c)
d=randint(0,1000000)
if d > 750000:
ears = EARS_2
ears_ep = 'Earring'
else:
ears = none
ears_ep = 'None'
MOLE=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,MO1,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
seed(d)
e=randint(0,1000000)
if e > 930000:
blemishes = MOLE
blemishe_ep = 'Mole'
else:
blemishes = none
blemishe_ep = 'None'
seed(e)
f=randint(0,1000000)
if f > 900000:
neck = GoldChain_1
neck_ep = 'Gold Chain'
elif f > 820000:
neck = SilverChain_1
neck_ep = 'Silver Chain'
elif f > 800000:
neck = RING_1
neck_ep = 'Ring Onchain'
else:
neck = none
neck_ep = 'None'
seed(f)
g=randint(0,1000000)
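# g is rolled but unused (Wraiths get no mouth trait); keeping the draw preserves the seed chain.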
seed(g)
h=randint(0,1000000)
if h > 900000:
mouth_prop = CIGARETTE
mouth_prop_ep = 'Cigarette'
elif h > 880000:
mouth_prop = MASK_1
mouth_prop_ep = 'Medical Mask'
elif h > 820000:
mouth_prop = PIPE
mouth_prop_ep = 'Pipe'
elif h > 780000:
mouth_prop = VAPE
mouth_prop_ep = 'Vape'
else:
mouth_prop = none
mouth_prop_ep = 'None'
seed(h)
i=randint(0,1000000)
if i > 400000:
EY1 = (255,255,255)
EY2 = nr
eyes_color_ep = 'White'
elif i > 300000:
EY1 = (214,92,26)
EY2 = nr
eyes_color_ep = "Orange"
elif i > 200000:
EY1 = (176,61,133)
EY2 = nr
eyes_color_ep = "Purple"
elif i > 100000:
EY1 = (255,255,0)
EY2 = nr
eyes_color_ep = 'Yellow'
else:
EY1 = (255,0,0)
EY2 = nr
eyes_color_ep = 'Red'
seed(i)
j=randint(0,1000000)
if j > 970000:
eyes = TD_2
eyes_prop_ep ='3D Glasses'
elif j > 930000:
eyes = VR_2
eyes_prop_ep ='VR'
elif j > 880000:
eyes = ClassicShades_2
eyes_prop_ep ='Classic Shades'
elif j > 830000:
eyes = SmallShades_2
eyes_prop_ep ='Small Shades'
elif j > 780000:
eyes = EyePatch_2
eyes_prop_ep ='Eye Patch'
elif j > 730000:
eyes = NerdGlasses_2
eyes_prop_ep ='Nerd Glasses'
elif j > 700000:
eyes = EyeMask_2
eyes_prop_ep ='Eye Mask'
elif j > 650000:
eyes = RegularShades_2
eyes_prop_ep ='Regular Shades'
else:
eyes=none
eyes_prop_ep ='None'
seed(j)
k=randint(0,1000000)
if k > 975000:
nose = NOSE_1
nose_ep = 'Clown Nose'
else:
nose = none
nose_ep = 'None'
SPECTRE=[
[FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR1,FR1,FR1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR1,FR1,HR1,HR1,HR1,FR1,FR1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,FR1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,FR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,FR1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,FR1,HR1,FR1,FR1,FR1,HR1,HR1,FR1,FR1,FR1,HR1,HR1,HR1,FR1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,FR1,HR1,FR1,SK1,SK1,SK1,FR1,FR1,SK1,SK1,SK1,FR1,HR1,HR1,FR1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,FR1,HR1,FR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,FR1,HR1,FR1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,FR1,HR1,FR1,SC1,SC1,SK1,SK1,SK1,SC1,SC1,SK1,SK1,FR1,HR1,HR1,FR1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,FR1,HR1,FR1,EY1,EY2,SK1,SK1,SK1,EY1,EY2,SK1,SK1,SK1,FR1,HR1,FR1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,FR1,HR1,FR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,FR1,HR1,FR1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,FR1,HR1,FR1,SK1,SK1,SK1,FR1,SK1,SK1,SK1,SK1,SK1,FR1,FR1,HR1,FR1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,FR1,HR1,FR1,SK1,SK1,FR1,SK1,SK1,SK1,SK1,SK1,SK1,FR1,HR1,HR1,FR1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,FR1,HR1,FR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,FR1,HR1,FR1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,FR1,HR1,FR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,FR1,HR1,FR1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,FR1,HR1,FR1,SK1,SK1,FR1,FR1,FR1,SK1,SK1,SK1,SK1,FR1,HR1,FR1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,FR1,HR1,FR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,FR1,SK1,FR1,HR1,FR1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,FR1,HR1,FR1,SK1,SK1,SK1,SK1,SK1,FR1,SK1,SK1,FR1,HR1,FR1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,HR1,FR1,FR1,FR1,FR1,FR1,SK1,SK1,SK1,FR1,HR1,FR1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR1,HR1,HR1,HR1,HR1,FR1,FR1,SK1,FR1,FR1,HR1,FR1,BG1,BG1,BG1,FR2],
[FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR1,FR1,FR1,HR1,HR1,HR1,FR1,HR1,HR1,HR1,FR1,FR2,FR2,FR2,FR2]
]
pixels = SPECTRE
elif b > 7000:
race_ep = 'Dark Riders'
type_ep = 'None'
hair_color_ep = 'None'
haircut_ep = 'None'
hair_prop_ep = 'None'
eyes_prop_ep = 'None'
blemishe_ep = 'None'
eyes_color_ep = 'None'
facial_hair_ep = 'None'
mouth_prop_ep = 'None'
mouth_ep = 'None'
tooth_color_ep = 'None'
nose_ep = 'None'
neck_ep = 'None'
ears_ep = 'None'
skin_ep = 'None'
ears = none
hair = none
hair_prop = none
neck = none
blemishes = none
#tooth color
mouth = none
facial_hair = none
rod = none
mouth_prop = none
#eye color
eyes = none
nose = none
SK1 = (118,113,113)
SK2 = (191,191,191)
SK3 = (223,223,223)
skin_ep = 'None'
seed(b)
c=randint(0,1000000)
if c > 750000:
ears = EARS_1
ears_ep = 'Earring'
else:
ears = none
ears_ep = 'None'
seed(c)
d=randint(0,1000000)
if d > 900000:
neck = GoldChain_1
neck_ep = 'Gold Chain'
elif d > 820000:
neck = SilverChain_1
neck_ep = 'Silver Chain'
elif d > 800000:
neck = RING_1
neck_ep = 'Ring Onchain'
else:
neck = none
neck_ep = 'None'
seed(d)
e=randint(0,1000000)
if e > 900000:
mouth_prop = CIGARETTE
mouth_prop_ep = 'Cigarette'
elif e > 840000:
mouth_prop = PIPE
mouth_prop_ep = 'Pipe'
elif e > 800000:
mouth_prop = VAPE
mouth_prop_ep = 'Vape'
else:
mouth_prop = none
mouth_prop_ep = 'None'
seed(e)
f=randint(0,1000000)
if f > 400000:
EY1 = (255,255,255)
eyes_color_ep = 'White'
elif f > 300000:
EY1 = (214,92,26)
eyes_color_ep = "Orange"
elif f > 200000:
EY1 = (176,61,133)
eyes_color_ep = "Purple"
elif f > 100000:
EY1 = (255,255,0)
eyes_color_ep = 'Yellow'
else:
EY1 = (255,0,0)
eyes_color_ep = 'Red'
seed(f)
g=randint(0,1000000)
if g > 970000:
eyes = TD_1
eyes_prop_ep ='3D Glasses'
elif g > 930000:
eyes = VR_1
eyes_prop_ep ='VR'
elif g > 880000:
eyes = ClassicShades_1
eyes_prop_ep ='Classic Shades'
elif g > 830000:
eyes = EyePatch_1
eyes_prop_ep ='Eye Patch'
elif g > 780000:
eyes = NerdGlasses_1
eyes_prop_ep ='Nerd Glasses'
elif g > 730000:
eyes = BigShades_1
eyes_prop_ep ='Big Shades'
elif g > 700000:
eyes = EyeMask_1
eyes_prop_ep ='Eye Mask'
elif g > 650000:
eyes = HornedRimGlasses_1
eyes_prop_ep ='Horned Rim Glasses'
elif g > 600000:
eyes = RegularShades_1
eyes_prop_ep ='Regular Shades'
else:
eyes=none
eyes_prop_ep ='None'
DARK_RIDER=[
[FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR1,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR1,BG1,FR1,SK1,SK1,SK1,FR1,BG1,FR1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,FR1,BG1,FR1,SK1,SK1,SK1,FR1,BG1,FR1,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,FR1,FR1,FR1,SK1,FR1,FR1,SK1,SK1,SK1,FR1,FR1,SK1,FR1,FR1,FR1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,FR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,FR1,EY1,SK1,SK1,SK1,FR1,EY1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,EY1,FR1,SK1,SK1,SK1,EY1,FR1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,SK1,FR1,SK1,SK1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,FR1,FR1,FR1,SK1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,FR1,FR1,FR1,FR1,FR1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,FR1,FR1,FR1,FR1,FR1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,FR1,FR1,FR1,FR1,FR1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,FR1,FR1,FR1,SK1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,FR1,FR1,FR1,SK1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,FR1,FR1,FR1,SK1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR1,FR1,FR1,FR1,FR1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR1,SK1,SK1,SK1,FR1,FR2,FR2,FR2,FR2,FR2,FR2]
]
pixels = DARK_RIDER
elif b > 1000:
race_ep = 'Daemons'
type_ep = 'None'
hair_color_ep = 'None'
haircut_ep = 'None'
hair_prop_ep = 'None'
eyes_prop_ep = 'None'
blemishe_ep = 'None'
eyes_color_ep = 'None'
facial_hair_ep = 'None'
mouth_prop_ep = 'None'
mouth_ep = 'None'
tooth_color_ep = 'None'
nose_ep = 'None'
neck_ep = 'None'
ears_ep = 'None'
skin_ep = 'None'
ears = none
hair = none
hair_prop = none
neck = none
blemishes = none
#tooth color
mouth = none
facial_hair = none
rod = none
mouth_prop = none
#eye color
eyes = none
nose = none
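# Placeholder daemon palette; every branch of the h roll below reassigns all six values.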
SK1 = (90,90,90)
FR4 = (166,166,166)
FR5 = (225,63,0)
FR3 = (240,114,48)
FR1 = nr
FR2 = bl
seed(b)
c=randint(0,1000000)
seed(c)
d=randint(0,1000000)
if d > 900000:
neck = GoldChain_1
neck_ep = 'Gold Chain'
elif d > 820000:
neck = SilverChain_1
neck_ep = 'Silver Chain'
elif d > 800000:
neck = RING_1
neck_ep = 'Ring Onchain'
else:
neck = none
neck_ep = 'None'
seed(d)
e=randint(0,1000000)
if e > 900000:
mouth_prop = CIGARETTE
mouth_prop_ep = 'Cigarette'
elif e > 840000:
mouth_prop = PIPE
mouth_prop_ep = 'Pipe'
elif e > 800000:
mouth_prop = VAPE
mouth_prop_ep = 'Vape'
else:
mouth_prop = none
mouth_prop_ep = 'None'
seed(e)
f=randint(0,1000000)
if f > 500000:
EY1 = bl
eyes_color_ep = 'White'
else:
EY1 = nr
eyes_color_ep = 'Black'
seed(f)
g=randint(0,1000000)
if g > 970000:
eyes = TD_1
eyes_prop_ep ='3D Glasses'
elif g > 930000:
eyes = VR_1
eyes_prop_ep ='VR'
elif g > 880000:
eyes = ClassicShades_1
eyes_prop_ep ='Classic Shades'
elif g > 830000:
eyes = EyePatch_1
eyes_prop_ep ='Eye Patch'
elif g > 780000:
eyes = NerdGlasses_1
eyes_prop_ep ='Nerd Glasses'
elif g > 730000:
eyes = BigShades_1
eyes_prop_ep ='Big Shades'
elif g > 700000:
eyes = EyeMask_1
eyes_prop_ep ='Eye Mask'
elif g > 650000:
eyes = RegularShades_1
eyes_prop_ep ='Regular Shades'
else:
eyes=none
eyes_prop_ep ='None'
seed(g)
h=randint(0,1000000)
if h > 750000:
SK1 = (60,60,60)
FR4 = (166,166,166)
FR5 = (225,63,0)
FR3 = (240,160,0)
FR1 = nr
FR2 = bl
skin_ep = 'Dark Grey'
hair_color_ep = 'Orange'
elif h > 500000:
SK1 = (30,30,30)
FR4 = (166,166,166)
FR5 = (225,63,0)
FR3 = (240,160,0)
FR1 = nr
FR2 = bl
skin_ep = 'Charcoal'
hair_color_ep = 'Orange'
elif h > 250000:
SK1 = (60,60,60)
FR4 = (166,166,166)
FR5 = (225,63,0)
FR3 = (240,114,48)
FR1 = nr
FR2 = bl
skin_ep = 'Dark Grey'
hair_color_ep = 'Burning Orange'
else:
SK1 = (30,30,30)
FR4 = (166,166,166)
FR5 = (225,63,0)
FR3 = (240,114,48)
FR1 = nr
FR2 = bl
skin_ep = 'Charcoal'
hair_color_ep = 'Burning Orange'
DEAMON=[
[FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,FR3,FR3,FR3,BG1,BG1,BG1,BG1,BG1,FR3,FR3,FR3,FR3,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,FR3,FR1,FR1,FR1,FR3,BG1,BG1,BG1,FR3,FR1,FR1,FR1,FR1,FR3,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,FR3,FR1,FR1,FR1,FR1,FR1,FR3,FR3,FR3,FR1,FR1,FR1,FR1,FR1,FR1,FR3,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,FR3,FR1,FR1,FR1,FR1,FR1,FR1,FR1,FR1,FR1,FR1,FR1,FR1,FR1,FR1,FR1,FR1,FR3,BG1,BG1,FR2],
[FR2,BG1,FR3,FR1,FR1,FR1,FR1,FR1,FR1,FR1,FR1,FR1,FR1,FR1,FR1,FR1,FR1,FR1,FR1,FR1,FR1,FR3,BG1,FR2],
[FR2,FR3,FR1,FR1,FR1,FR1,FR1,FR1,FR1,FR3,FR1,FR1,FR1,FR3,FR1,FR1,FR1,FR1,FR1,FR1,FR1,FR1,FR3,FR2],
[FR2,FR3,FR1,FR1,FR1,FR1,FR1,FR3,FR3,SK1,FR3,FR1,FR3,SK1,FR3,FR3,FR1,FR1,FR1,FR1,FR1,FR1,FR3,FR2],
[FR2,FR3,FR1,FR1,FR1,FR1,FR3,FR1,SK1,SK1,SK1,FR3,SK1,SK1,SK1,SK1,FR3,FR1,FR1,FR1,FR1,FR1,FR3,FR2],
[FR2,FR3,FR1,FR1,FR3,FR3,BG1,FR1,FR4,FR4,SK1,SK1,SK1,FR4,FR4,SK1,SK1,FR3,FR3,FR3,FR1,FR1,FR3,FR2],
[FR2,FR3,FR1,FR1,FR3,BG1,BG1,FR1,FR5,EY1,SK1,SK1,SK1,FR5,EY1,SK1,SK1,FR1,BG1,FR3,FR1,FR1,FR3,FR2],
[FR2,FR3,FR1,FR1,FR3,BG1,BG1,FR1,SK1,SK1,FR3,SK1,FR3,SK1,SK1,SK1,SK1,FR1,BG1,FR3,FR1,FR1,FR3,FR2],
[FR2,FR3,FR1,FR1,FR3,BG1,BG1,FR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,FR1,BG1,FR3,FR1,FR1,FR3,FR2],
[FR2,BG1,FR3,FR1,FR1,FR3,BG1,FR1,SK1,FR3,FR3,FR3,FR3,FR3,SK1,SK1,SK1,FR1,FR3,FR1,FR1,FR3,BG1,FR2],
[FR2,BG1,BG1,FR3,FR1,FR1,FR3,FR1,SK1,FR3,FR3,FR3,FR3,FR3,SK1,SK1,SK1,FR3,FR1,FR1,FR3,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,FR3,FR1,FR3,FR1,SK1,FR3,FR3,FR3,FR3,FR3,SK1,SK1,SK1,FR3,FR1,FR3,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,FR3,BG1,FR1,SK1,SK1,FR3,FR3,FR3,SK1,SK1,SK1,SK1,FR1,FR3,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,FR3,FR3,FR3,SK1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,SK1,FR3,SK1,SK1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,SK1,SK1,SK1,FR1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR1,FR1,FR1,FR1,FR1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR1,SK1,SK1,SK1,FR1,FR2,FR2,FR2,FR2,FR2,FR2]
]
pixels = DEAMON
else:
race_ep = 'Dark Lord'
type_ep = 'None'
hair_color_ep = 'None'
haircut_ep = 'None'
hair_prop_ep = 'None'
eyes_prop_ep = 'None'
blemishe_ep = 'None'
eyes_color_ep = 'None'
facial_hair_ep = 'None'
mouth_prop_ep = 'None'
mouth_ep = 'None'
tooth_color_ep = 'None'
nose_ep = 'None'
neck_ep = 'None'
ears_ep = 'None'
skin_ep = 'None'
ears = none
hair = none
hair_prop = none
neck = none
blemishes = none
#tooth color
mouth = none
facial_hair = none
rod = none
mouth_prop = none
#eye color
eyes = none
nose = none
SK1 = (113,113,113)
SK2 = (160,160,160)
SK3 = (223,223,223)
skin_ep = 'None'
seed(b)
c=randint(0,1000000)
if c > 750000:
ears = EARS_0
ears_ep = 'Earring'
else:
ears = none
ears_ep = 'None'
seed(c)
d=randint(0,1000000)
if d > 700000:
neck = GoldChain_1
neck_ep = 'Gold Chain'
elif d > 400000:
neck = SilverChain_1
neck_ep = 'Silver Chain'
elif d > 100000:
neck = RING_1
neck_ep = 'Ring Onchain'
else:
neck = none
neck_ep = 'None'
seed(d)
e=randint(0,1000000)
if e > 800000:
mouth_prop = CIGARETTE
mouth_prop_ep = 'Cigarette'
elif e > 600000:
mouth_prop = PIPE
mouth_prop_ep = 'Pipe'
elif e > 400000:
mouth_prop = VAPE
mouth_prop_ep = 'Vape'
else:
mouth_prop = none
mouth_prop_ep = 'None'
seed(e)
f=randint(0,1000000)
if f > 400000:
EY1 = (255,255,255)
eyes_color_ep = 'White'
elif f > 300000:
EY1 = (214,92,26)
eyes_color_ep = "Orange"
elif f > 200000:
EY1 = (176,61,133)
eyes_color_ep = "Purple"
elif f > 100000:
EY1 = (255,255,0)
eyes_color_ep = 'Yellow'
else:
EY1 = (255,0,0)
eyes_color_ep = 'Red'
DARK_LORD=[
[FR2,FR2,FR2,FR2,FR2,FR2,FR1,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR1,FR2,FR2,FR2,FR2,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,FR1,BG1,BG1,BG1,BG1,BG1,FR1,BG1,BG1,BG1,BG1,BG1,FR1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,FR1,FR1,BG1,BG1,BG1,BG1,FR1,BG1,BG1,BG1,BG1,FR1,FR1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,FR1,FR1,BG1,FR1,BG1,BG1,FR1,BG1,BG1,FR1,BG1,FR1,FR1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,FR1,FR1,BG1,FR1,BG1,BG1,FR1,BG1,BG1,FR1,BG1,FR1,FR1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,FR1,FR1,BG1,FR1,FR1,BG1,FR1,BG1,FR1,FR1,BG1,FR1,FR1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,FR1,FR1,FR1,FR1,FR1,FR1,FR1,FR1,FR1,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,FR1,FR1,FR1,FR1,FR1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,SK1,FR1,FR1,FR1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,FR1,EY1,SK1,SK1,FR1,SK1,FR1,EY1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,EY1,FR1,SK1,SK1,FR1,SK1,EY1,FR1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,SK3,FR1,SK1,SK1,SK1,SK1,FR1,SK1,SK1,SK1,SK1,FR1,SK3,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,SK3,FR1,SK2,SK1,SK1,SK1,FR1,SK1,SK1,SK1,SK2,FR1,SK3,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,SK3,SK1,SK2,SK1,SK1,FR1,SK1,SK1,SK2,SK1,SK3,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK3,SK1,SK2,SK1,FR1,SK1,SK2,SK1,SK3,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK3,SK1,SK2,FR1,SK2,SK1,SK3,SK1,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK3,SK1,SK2,FR1,SK2,SK1,SK3,SK1,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK3,SK1,SK2,FR1,SK2,SK1,SK3,SK1,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK3,SK1,SK2,SK1,FR1,SK1,SK2,SK1,SK3,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,SK3,FR1,SK1,SK2,SK1,FR1,SK1,SK2,SK1,SK1,SK3,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,SK2,SK1,SK1,FR1,SK1,SK1,SK2,SK1,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,SK2,BG1,FR1,FR1,FR1,FR1,FR1,SK1,SK2,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR1,SK1,SK1,SK1,FR1,FR2,FR2,FR2,FR2,FR2,FR2]
]
pixels = DARK_LORD
newtraitcombo = createCombo()
traits.append(newtraitcombo)
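# Pass-1 summary: RESU1 = combos drawn minus duplicates caught in filterlist1,
# i.e. the number of unique midpunks so far.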
FL01 = len(filterlist1)
TR01 = len(traits)
RESU1 = TR01 - FL01
print(RESU1)
print(FL01)
#########################################
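# Pass 2: createCombo2 rebuilds the trait dict for each kept index; a dict already
# present in traits2 gets its index recorded in filterlist2, otherwise it is returned.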
def createCombo2():
trait = {}
#trait["Name"] = name_ep
trait["Race"] = race_ep
trait["Type"] = type_ep
trait["Skin Tone"] = skin_ep
trait["Ears"] = ears_ep
trait["Hair Color"] = hair_color_ep
trait["Haircut"] = haircut_ep
trait["Hair Prop"] = hair_prop_ep
trait["Neck"] = neck_ep
trait["Facial Hair"] = facial_hair_ep
trait["Mouth Prop"] = mouth_prop_ep
trait["Eyes Color"] = eyes_color_ep
trait["Eyes Prop"] = eyes_prop_ep
trait["Nose"] = nose_ep
trait["Blemishe"] = blemishe_ep
trait["Tooth Color"] = tooth_color_ep
trait["Mouth"] = mouth_ep
if trait in traits2:
filterlist2.append(x)
else:
return trait
traits2 = []
list2 = range(11984)
#To avoid duplicates: the first loop was only there to fill filterlist1 with all the duplicate midpunks.
#Always put the same number in both list ranges and increase it until you reach the desired number of midpunks.
#Always use the same seed "a" in both loops; here we need 11984 iterations to end up with 10K unique midpunks.
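# Illustrative sketch of that two-pass scheme (a hypothetical helper, never called
# by this script): pass 1 finds the indices whose seeded draw collides with an
# earlier one; pass 2 replays the identical seeded stream while skipping those
# indices, yielding a deterministic, duplicate-free set. A single randint stands
# in here for the full trait dictionary.
def _dedup_demo(base_seed=13080698, n=11984):
    seen, dupes = set(), set()
    for x in range(n):                 # pass 1: record colliding indices
        seed(x + base_seed)
        r = randint(0, 1000000)
        if r in seen:
            dupes.add(x)
        seen.add(r)
    uniques = []
    for x in range(n):                 # pass 2: same seeds, skip duplicates
        if x not in dupes:
            seed(x + base_seed)
            uniques.append(randint(0, 1000000))
    return uniques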
filtered=[item for item in list2 if item not in filterlist1]
jpeg = -1
for x in filtered:
a = 13080698
jpeg +=1
seed(x+a)
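# Per-punk conflict flags (e.g. tutu/tyty for the chain-vs-frown clash) reset each iteration.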
titi=0
titin=0
titine=0
toto=0
tata=0
tutu=0
tyty=0
tete=0
toutou=0
toctoc=0
tactac=0
tuctuc=0
tonton=0
tantan=0
neyo=0
neye=0
neya=0
neyh=0
neyu=0
neyw=0
b = randint(0,1000000)
if b > 950000:
race_ep = 'Halflings'
type_ep = 'Male'
hair_color_ep = 'None'
haircut_ep = 'None'
hair_prop_ep = 'None'
eyes_prop_ep = 'None'
blemishe_ep = 'None'
eyes_color_ep = 'None'
facial_hair_ep = 'None'
mouth_prop_ep = 'None'
mouth_ep = 'None'
tooth_color_ep = 'None'
nose_ep = 'None'
neck_ep = 'None'
ears_ep = 'None'
skin_ep = 'None'
ears = none
hair = none
hair_prop = none
neck = none
blemishes = none
#tooth color
mouth = none
facial_hair = none
rod = none
mouth_prop = none
#eye color
eyes = none
nose = none
seed(b)
c=randint(0,1000000)
if c > 800000:
SK1 = (234,217,217)
SC1 = (165,141,141)
EY1 = (201,178,178)
SK2 = (255,255,255)
HRG3 = (220,222,234)
HRG2 = (183,179,191)
HRG4 = (203,200,212)
HRG5 = (87,101,113)
HRG1 = (0,0,0)
RC1 = (226,187,185)
MO1 = EY1
SCR1 = EY1
skin_ep = 'Albino'
elif c > 500000:
SK1 = (219,177,128)
SC1 = (166,110,44)
EY1 = (210,157,96)
SK2 = (235,203,166)
HRG3 = (213,200,183)
HRG2 = (184,163,135)
HRG4 = (209,189,164)
HRG5 = (87,101,113)
HRG1 = (0,0,0)
RC1 = (215,154,104)
MO1 = EY1
SCR1 = EY1
skin_ep = 'Light'
elif c > 200000:
SK1 = (174,139,97)
SC1 = (134,88,30)
EY1 = (167,124,71)
SK2 = (178,138,93)
HRG3 = (188,179,165)
HRG2 = (166,150,128)
HRG4 = (184,171,151)
HRG5 = (87,101,113)
HRG1 = (0,0,0)
RC1 = (191,105,71)
MO1 = EY1
SCR1 = EY1
skin_ep = 'Mid'
else:
SK1 = (113,63,29)
SC1 = (86,39,10)
EY1 = (114,55,17)
SK2 = (146,79,35)
HRG3 = (155,135,127)
HRG2 = (139,121,111)
HRG4 = (156,131,115)
HRG5 = (87,101,113)
HRG1 = (0,0,0)
RC1 = (142,36,2)
MO1 = EY1
SCR1 = EY1
skin_ep = 'Dark'
seed(c)
d=randint(0,1000000)
if d > 750000:
ears = EARS_2
ears_ep = 'Earring'
else:
ears = none
ears_ep = 'None'
seed(d)
e=randint(0,1000000)
HR0 = (255,193,0)
HR2 = (251,114,7)
HR3 = (210,154,0)
HR4 = (166,165,165)
HR5 = (165,108,0)
HR6 = (111,0,48)
HR7 = (85,57,23)
if e > 875000:
HR1 = HR0
hair_color_ep ='Blond'
elif e > 750000:
HR1 = nr
hair_color_ep='Black'
elif e > 625000:
HR1 = HR2
hair_color_ep ='Orange'
elif e > 500000:
HR1 = HR3
hair_color_ep ='Fair'
elif e > 375000:
HR1 = HR4
hair_color_ep ='Grey'
elif e > 250000:
HR1 = HR5
hair_color_ep ='Ginger'
elif e > 125000:
HR1 = HR6
hair_color_ep ='Black Rose'
else:
HR1 = HR7
hair_color_ep ='Brown'
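# Halfling hair overlays (HR1 pixels drawn in the chosen hair color), selected by the
# f roll below: HALFIN_HR1 'Wild Hair', HR2 'Perm Hair', HR3 'Bedhead',
# HR4 'Hockey Hair', HR5 'Bald'.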
HALFIN_HR1=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,HR1,0,HR1,HR1,HR1,0,HR1,HR1,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,0,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,0,HR1,0,0,0,0,0],
[0,0,0,0,0,HR1,0,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,0,0,0,0],
[0,0,0,0,0,HR1,0,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,0,HR1,0,0,0],
[0,0,0,0,0,0,HR1,HR1,HR1,0,0,HR1,0,HR1,0,0,HR1,HR1,HR1,HR1,0,HR1,0,0],
[0,0,0,0,0,HR1,HR1,HR1,HR1,0,HR1,HR1,0,0,HR1,0,HR1,HR1,HR1,HR1,HR1,0,0,0],
[0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,0,0,0,0,HR1,HR1,HR1,0,0],
[0,0,0,0,HR1,0,HR1,0,0,0,0,0,0,0,0,0,0,0,0,HR1,HR1,HR1,0,0],
[0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,0,0,0,0,HR1,HR1,HR1,0,0],
[0,0,0,0,0,0,HR1,0,0,0,0,0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0],
[0,0,0,0,0,0,HR1,0,0,0,0,0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0],
[0,0,0,0,HR1,0,HR1,0,0,0,0,0,0,0,0,0,0,0,HR1,0,HR1,HR1,0,0],
[0,0,0,0,0,HR1,0,0,0,0,0,0,0,0,0,0,0,0,0,HR1,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
HALFIN_HR2=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,HR1,HR1,HR1,0,HR1,0,HR1,HR1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,HR1,0,HR1,HR1,HR1,0,HR1,HR1,0,HR1,0,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,0,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,0,HR1,0,0,0,0,0],
[0,0,0,0,0,HR1,0,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,0,0,0,0],
[0,0,0,0,0,HR1,0,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,0,HR1,0,0,0],
[0,0,0,0,HR1,0,HR1,HR1,HR1,0,0,HR1,0,HR1,0,0,HR1,HR1,HR1,HR1,0,HR1,0,0],
[0,0,0,0,0,HR1,HR1,HR1,HR1,0,HR1,HR1,0,0,HR1,0,HR1,HR1,HR1,HR1,HR1,0,0,0],
[0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,0,0,0,0,HR1,HR1,HR1,0,0],
[0,0,0,0,HR1,0,HR1,0,0,0,0,0,0,0,0,0,0,0,0,HR1,HR1,HR1,0,0],
[0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,0,0,0,0,HR1,HR1,HR1,0,0],
[0,0,0,0,HR1,0,HR1,0,0,0,0,0,0,0,0,0,0,0,HR1,HR1,0,HR1,HR1,0],
[0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,0,0,0,HR1,HR1,HR1,HR1,HR1,0],
[0,0,0,0,HR1,0,HR1,0,0,0,0,0,0,0,0,0,0,0,HR1,0,HR1,HR1,0,0],
[0,0,0,0,0,HR1,0,0,0,0,0,0,0,0,0,0,0,0,0,HR1,0,HR1,HR1,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,HR1,0,HR1,HR1,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,HR1,HR1,0,HR1,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,HR1,HR1,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
HALFIN_HR3=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,0,0,0,0,0,0],
[0,0,0,0,0,0,0,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,0,0,0,0,0],
[0,0,0,0,0,0,0,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,0,HR1,HR1,0,HR1,0,HR1,HR1,HR1,HR1,HR1,HR1,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,0,HR1,HR1,0,0,0,HR1,HR1,HR1,HR1,HR1,HR1,HR1,0,0,0],
[0,0,0,0,0,0,HR1,HR1,0,HR1,0,0,0,0,HR1,HR1,HR1,HR1,HR1,HR1,HR1,0,0,0],
[0,0,0,0,0,0,HR1,0,0,0,0,0,0,0,0,0,HR1,HR1,0,HR1,HR1,0,0,0],
[0,0,0,0,0,0,HR1,0,0,0,0,0,0,0,0,0,0,0,0,HR1,HR1,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,HR1,HR1,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,HR1,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
HALFIN_HR4=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,HR1,HR1,HR1,0,HR1,HR1,HR1,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,0,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,HR1,0,HR1,0,HR1,0,0,HR1,HR1,HR1,HR1,0,0,0,0,0],
[0,0,0,0,0,HR1,HR1,HR1,0,0,HR1,0,0,HR1,0,0,HR1,HR1,HR1,HR1,0,0,0,0],
[0,0,0,0,0,HR1,HR1,0,0,0,HR1,0,0,0,HR1,HR1,0,0,HR1,HR1,0,0,0,0],
[0,0,0,0,0,HR1,HR1,0,0,0,0,HR1,0,0,0,0,0,0,0,HR1,0,0,0,0],
[0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,0,0,0,0,HR1,0,0,0,0],
[0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,0,0,0,0,HR1,0,0,0,0],
[0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,0,0,0,HR1,HR1,0,0,0,0],
[0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,0,0,0,HR1,HR1,0,0,0,0],
[0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,0,0,0,HR1,HR1,0,0,0,0],
[0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,0,0,0,HR1,HR1,0,0,0,0],
[0,0,0,0,0,0,HR1,0,0,0,0,0,0,0,0,0,0,0,HR1,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
HALFIN_HR5=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,HR1,HR1,HR1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,0,HR1,HR1,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,0,HR1,HR1,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,0,HR1,HR1,0,0,0,0,0],
[0,0,0,0,0,0,HR1,0,0,0,0,0,0,0,0,0,0,0,HR1,0,0,0,0,0],
[0,0,0,0,0,0,HR1,0,0,0,0,0,0,0,0,0,0,0,HR1,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
seed(e)
f=randint(0,1000000)
if f > 800000:
hair = HALFIN_HR1
haircut_ep ='Wild Hair'
elif f > 600000:
hair = HALFIN_HR2
haircut_ep ='Perm Hair'
elif f > 400000:
hair = HALFIN_HR3
haircut_ep ='Bedhead'
elif f > 200000:
hair = HALFIN_HR4
haircut_ep ='Hockey Hair'
else:
hair = HALFIN_HR5
haircut_ep ='Bald'
seed(f)
g=randint(0,1000000)
if g > 970000:
hair_prop = POLICE_6
hair_prop_ep = 'Police'
elif g > 950000:
hair_prop = TOPHAT_6
hair_prop_ep = 'Top Hat'
elif g > 900000:
hair_prop = HEADBAND_6
hair_prop_ep = 'Headband'
elif g > 850000:
hair_prop = FORCAP_8
hair_prop_ep = 'Cap Forward'
elif g > 830000:
hair_prop = COWBOY_8
hair_prop_ep = 'Cowboy Hat'
elif g > 790000:
hair_prop = CAP_8
hair_prop_ep = 'Cap'
else:
hair_prop = none
hair_prop_ep = 'None'
seed(g)
h=randint(0,1000000)
if h > 900000:
neck = GoldChain_1
neck_ep = 'Gold Chain'
elif h > 820000:
neck = SilverChain_1
neck_ep = 'Silver Chain'
elif h > 800000:
neck = RING_1
neck_ep = 'Ring Onchain'
elif h > 780000:
neck = BROCHE_1
neck_ep = 'Brooch'
else:
neck = none
neck_ep = 'None'
seed(h)
i=randint(0,1000000)
if i > 900000:
mouth_prop = CIGARETTE
mouth_prop_ep = 'Cigarette'
elif i > 880000:
mouth_prop = MASK_1
facial_hair = none
mouth_prop_ep = 'Medical Mask'
elif i > 820000:
mouth_prop = PIPE
mouth_prop_ep = 'Pipe'
elif i > 780000:
mouth_prop = VAPE
mouth_prop_ep = 'Vape'
else:
mouth_prop = none
mouth_prop_ep = 'None'
seed(i)
j=randint(0,1000000)
if j > 970000:
eyes = TD_6
eyes_prop_ep ='3D Glasses'
elif j > 930000:
eyes = VR_6
eyes_prop_ep ='VR'
elif j > 880000:
eyes = ClassicShades_6
eyes_prop_ep ='Classic Shades'
elif j >830000:
eyes = SmallShades_6
eyes_prop_ep ='Small Shades'
elif j > 780000:
eyes = EyePatch_6
eyes_prop_ep ='Eye Patch'
elif j > 730000:
eyes = NerdGlasses_6
eyes_prop_ep ='Nerd Glasses'
elif j > 680000:
eyes = BigShades_6
eyes_prop_ep ='Big Shades'
elif j > 650000:
eyes = EyeMask_6
eyes_prop_ep ='Eye Mask'
elif j > 600000:
eyes = HornedRimGlasses_6
eyes_prop_ep ='Horned Rim Glasses'
elif j > 550000:
eyes = RegularShades_6
eyes_prop_ep ='Regular Shades'
else:
eyes=none
eyes_prop_ep ='None'
seed(j)
k=randint(0,1000000)
if k > 975000:
mouth = SMILE
mouth_ep = 'Smile'
elif k > 950000:
mouth = FROWN
mouth_ep = 'Frown'
else:
mouth = none
mouth_ep = 'None'
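# Blemish overlays: a single MO1 pixel for the mole, a short SCR1 diagonal
# for the scar, RC1 cheek patches for rosy cheeks.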
MOLE=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,MO1,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
SCARE_2=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,SCR1,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,SCR1,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,SCR1,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,SCR1,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
ROSY_1=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,RC1,0,0,0,0,RC1,RC1,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,RC1,0,0,0,0,0,RC1,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
ROSY_2=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,RC1,RC1,0,0,0,RC1,RC1,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
seed(k)
l=randint(0,1000000)
if l > 970000:
blemishes = ROSY_1
blemishe_ep = 'Rosy Cheeks'
elif l > 900000:
blemishes = MOLE
blemishe_ep = 'Mole'
elif l > 870000:
blemishes = SCARE_2
blemishe_ep = 'Scare'
else:
blemishes = none
blemishe_ep = 'None'
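# HALFIN: 24x24 base face for the male halfling. FR2 is the card frame,
# BG1 the background, FR1 the outline, SK1 the skin; SC1 and EY1 shade the
# brow line and eyes.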
HALFIN=[
[FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR1,FR1,FR1,FR1,FR1,FR1,FR1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SC1,SC1,SK1,SK1,SK1,SC1,SC1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,EY1,FR1,SK1,SK1,SK1,EY1,FR1,SK1,SK1,FR1,FR1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,SK1,FR1,SK1,SK1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,FR1,FR1,FR1,SK1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR1,FR1,FR1,FR1,FR1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR1,SK1,SK1,SK1,FR1,FR2,FR2,FR2,FR2,FR2,FR2]
]
pixels = HALFIN
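# Female halfling branch: reset every trait layer and label to 'None'
# before this variant's draws.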
elif b > 900000:
race_ep = 'Halflings'
type_ep = 'Female'
hair_color_ep = 'None'
haircut_ep = 'None'
hair_prop_ep = 'None'
eyes_prop_ep = 'None'
blemishe_ep = 'None'
eyes_color_ep = 'None'
facial_hair_ep = 'None'
mouth_prop_ep = 'None'
mouth_ep = 'None'
tooth_color_ep = 'None'
nose_ep = 'None'
neck_ep = 'None'
ears_ep = 'None'
skin_ep = 'None'
ears = none
hair = none
hair_prop = none
neck = none
blemishes = none
#tooth color
mouth = none
facial_hair = none
rod = none
mouth_prop = none
#eye color
eyes = none
nose = none
seed(b)
c=randint(0,1000000)
if c > 800000:
SK1 = (234,217,217)
SC1 = (165,141,141)
EY1 = (201,178,178)
SK2 = (255,255,255)
HRG3 = (220,222,234)
HRG2 = (183,179,191)
HRG4 = (203,200,212)
HRG5 = (87,101,113)
HRG1 = (0,0,0)
RC1 = (226,187,185)
LI1 = (113,28,17)
MO1 = EY1
SCR1 = EY1
skin_ep = 'Albino'
elif c > 500000:
SK1 = (219,177,128)
SC1 = (166,110,44)
EY1 = (210,157,96)
SK2 = (235,203,166)
HRG3 = (213,200,183)
HRG2 = (184,163,135)
HRG4 = (209,189,164)
HRG5 = (87,101,113)
HRG1 = (0,0,0)
RC1 = (215,154,104)
LI1 = (113,28,17)
MO1 = EY1
SCR1 = EY1
skin_ep = 'Light'
elif c > 200000:
SK1 = (174,139,97)
SC1 = (134,88,30)
EY1 = (167,124,71)
SK2 = (178,138,93)
HRG3 = (188,179,165)
HRG2 = (166,150,128)
HRG4 = (184,171,151)
HRG5 = (87,101,113)
HRG1 = (0,0,0)
RC1 = (191,105,71)
LI1 = (95,29,13)
MO1 = EY1
SCR1 = EY1
skin_ep = 'Mid'
else:
SK1 = (113,63,29)
SC1 = (86,39,10)
EY1 = (114,55,17)
SK2 = (146,79,35)
HRG3 = (155,135,127)
HRG2 = (139,121,111)
HRG4 = (156,131,115)
HRG5 = (87,101,113)
HRG1 = (0,0,0)
RC1 = (142,36,2)
LI1 = (74,18,8)
MO1 = EY1
SCR1 = EY1
skin_ep = 'Dark'
seed(c)
d=randint(0,1000000)
if d > 750000:
ears = EARS_3
ears_ep = 'Earring'
else:
ears = none
ears_ep = 'None'
seed(d)
e=randint(0,1000000)
HR0 = (255,193,0)
HR2 = (251,114,7)
HR3 = (210,154,0)
HR4 = (166,165,165)
HR5 = (165,108,0)
HR6 = (111,0,48)
HR7 = (85,57,23)
red = (255,0,0)
if e > 875000:
HR1 = HR0
HR2 = red
hair_color_ep ='Blonde'
elif e > 750000:
HR1 = nr
HR2 = red
hair_color_ep ='Black'
elif e > 625000:
HR1 = HR2
HR2 = red
hair_color_ep ='Orange'
elif e > 500000:
HR1 = HR3
HR2 = red
hair_color_ep ='Fair'
elif e > 375000:
HR1 = HR4
HR2 = red
hair_color_ep ='Grey'
elif e > 250000:
HR1 = HR5
HR2 = red
hair_color_ep ='Ginger'
elif e > 125000:
HR1 = HR6
HR2 = red
hair_color_ep ='Black Rose'
else:
HR1 = HR7
HR2 = red
hair_color_ep ='Brown'
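# HALFINE_HR1..HR5: 24x24 hair bitmaps for the female halfling variant.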
HALFINE_HR1=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,HR1,0,0,HR1,0,0,HR1,HR1,HR1,0,0,0,0,0,0,0,0],
[0,0,0,0,0,HR1,0,0,HR1,HR1,0,HR1,HR1,HR1,0,HR1,HR1,HR1,0,0,0,0,0,0],
[0,0,0,0,HR1,0,HR1,HR1,HR1,HR1,HR1,0,HR1,0,HR1,0,HR1,0,HR1,0,0,0,0,0],
[0,0,0,HR1,HR1,HR1,HR1,0,0,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,0,0,0,0],
[0,0,0,0,HR1,0,HR1,HR1,HR1,HR1,0,HR1,0,HR1,0,HR1,HR1,0,HR1,HR1,0,0,0,0],
[0,0,0,0,HR1,HR1,HR1,HR1,HR1,0,HR1,0,0,0,HR1,HR1,0,HR1,HR1,HR1,HR1,0,0,0],
[0,0,0,HR1,0,HR1,HR1,HR1,HR1,0,0,0,0,0,0,HR1,0,HR1,HR1,HR1,0,0,0,0],
[0,0,HR1,0,HR1,HR1,HR1,0,0,0,0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0],
[0,0,HR1,HR1,HR1,0,HR1,0,0,0,0,0,0,0,0,0,0,0,HR1,0,HR1,0,0,0],
[0,0,0,HR1,0,HR1,HR1,0,0,0,0,0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0],
[0,0,HR1,0,HR1,HR1,HR1,0,0,0,0,0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0],
[0,0,HR1,HR1,HR1,HR1,HR1,0,0,0,0,0,0,0,0,0,0,HR1,HR1,0,HR1,0,0,0],
[0,0,HR1,HR1,HR1,HR1,0,0,0,0,0,0,0,0,0,0,0,HR1,HR1,HR1,HR1,HR1,0,0],
[0,0,HR1,0,0,HR1,HR1,0,0,0,0,0,0,0,0,0,0,HR1,HR1,HR1,0,HR1,0,0],
[0,HR1,0,HR1,HR1,HR1,HR1,0,0,0,0,0,0,0,0,0,0,HR1,HR1,0,HR1,HR1,0,0],
[0,HR1,HR1,0,HR1,0,HR1,0,0,0,0,0,0,0,0,0,HR1,HR1,HR1,HR1,HR1,0,0,0],
[0,0,0,HR1,0,HR1,0,HR1,0,0,0,0,0,0,0,0,0,HR1,0,HR1,0,HR1,0,0],
[0,0,0,0,HR1,0,HR1,HR1,0,0,0,0,0,0,0,0,0,0,HR1,0,HR1,0,0,0],
[0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
HALFINE_HR2=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,HR1,0,HR1,HR1,HR1,HR1,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,HR1,HR1,HR1,HR1,HR1,HR1,0,HR1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,HR1,0,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,0,0,0,0,0,0,0],
[0,0,0,0,0,HR1,0,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,0,0,0,0,0],
[0,0,0,0,0,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,0,HR1,HR1,HR1,HR1,0,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,HR1,0,0,HR1,0,HR1,0,HR1,HR1,HR1,HR1,0,0,0,0,0],
[0,0,0,0,0,HR1,HR1,HR1,0,0,0,HR1,0,0,0,HR1,HR1,HR1,0,HR1,0,0,0,0],
[0,0,0,0,HR1,0,HR1,0,0,0,0,HR1,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0],
[0,0,0,0,0,HR1,HR1,0,0,0,HR1,0,0,0,0,0,0,HR1,0,HR1,0,0,0,0],
[0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,0,0,HR1,HR1,HR1,HR1,0,0,0],
[0,0,0,0,HR1,HR1,HR1,0,0,0,0,0,0,0,0,0,0,HR1,HR1,0,0,0,0,0],
[0,0,0,0,0,0,HR1,0,0,0,0,0,0,0,0,0,0,HR1,HR1,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
HALFINE_HR3=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,HR1,HR1,HR1,HR1,HR1,HR1,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,0,0,HR1,HR1,HR1,0,0,HR1,HR1,HR1,0,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,0,0,0,HR1,0,0,0,0,HR1,HR1,0,0,0,0,0,0],
[0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0,0,0,0,0,HR1,HR1,0,0,0,0,0],
[0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0],
[0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0],
[0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0],
[0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0,0],
[0,0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
HALFINE_HR4=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,HR1,HR1,HR1,HR1,HR1,HR1,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,0,0,0,0,0,0],
[0,0,0,0,0,HR1,HR1,HR1,HR1,0,0,0,0,0,0,HR1,HR1,HR1,HR1,0,0,0,0,0],
[0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,0,0,HR1,HR1,0,0,0,0,0],
[0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,0,0,0,HR1,0,0,0,0,0],
[0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,0,0,0,HR1,0,0,0,0,0],
[0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,0,0,0,HR1,0,0,0,0,0],
[0,0,0,0,HR1,HR1,HR1,0,0,0,0,0,0,0,0,0,0,HR1,HR1,0,0,0,0,0],
[0,0,0,0,HR1,HR1,HR1,0,0,0,0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0],
[0,0,0,0,HR1,HR1,HR1,0,0,0,0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0],
[0,0,0,0,HR1,HR1,HR1,0,0,0,0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0],
[0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,0,HR1,HR1,HR1,HR1,0,0,0,0],
[0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
HALFINE_HR5=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,HR1,HR1,HR1,HR1,HR1,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,HR1,HR1,HR1,HR1,HR1,HR1,HR1,0,0,HR1,HR1,0,0,0,0,0],
[0,0,0,0,0,0,0,0,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR2,HR1,HR1,HR1,0,0,0,0],
[0,0,0,0,0,0,0,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR2,0,HR1,HR1,HR1,0,0,0],
[0,0,0,0,0,0,0,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,0,HR1,HR1,HR1,HR1,0,0],
[0,0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,HR1,HR1,HR1,0,HR1,HR1,HR1,HR1,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,HR1,0,0,HR1,HR1,HR1,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
MOLE_2=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,MO1,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
SCARE_2=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,SCR1,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,SCR1,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,SCR1,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,SCR1,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
ROSY_2=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,RC1,RC1,0,0,0,RC1,RC1,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
seed(e)
f=randint(0,1000000)
if f > 800000:
hair = HALFINE_HR1
haircut_ep ='Perm Hair'
elif f > 600000:
hair = HALFINE_HR2
haircut_ep ='Wild Hair'
elif f > 400000:
hair = HALFINE_HR3
haircut_ep ='Wedge Hair'
elif f > 200000:
hair = HALFINE_HR4
haircut_ep ='Feathered Hair'
else:
hair = HALFINE_HR5
haircut_ep ='Ponytail'
toto = 99
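# toto marks the Ponytail cut; the Shire Hat branch below tests it so that
# hat is never drawn over a ponytail.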
seed(f)
g=randint(0,1000000)
if g > 990000:
hair_prop = TIARA_3
hair_prop_ep = 'Tiara'
titine = 99
elif g > 940000:
hair_prop = Flower
hair_prop_ep = 'Flower'
elif g > 900000 and toto != 99:
hair_prop = Hob_Hat
hair_prop_ep = 'Shire Hat'
elif g > 860000:
hair_prop = HEADBAND_4
hair_prop_ep = 'Headband'
elif g > 850000:
hair = none
hair_prop = PILOT_2
hair_prop_ep = 'Pilot Helmet'
titine = 99
else:
hair_prop = none
hair_prop_ep = 'None'
seed(g)
h=randint(0,1000000)
if h > 900000:
EY1 = (110,152,77)
SC1 = (92,133,57)
eyes_color_ep = 'Green Eye Shadow'
elif h > 800000:
EY1 = (93,121,117)
SC1 = (80,106,101)
eyes_color_ep = 'Blue Eye Shadow'
elif h > 700000:
EY1 = (176,61,133)
SC1 = (164,55,117)
eyes_color_ep = 'Purple Eye Shadow'
elif h > 600000:
EY1 = (214,92,26)
SC1 = (194,79,17)
eyes_color_ep = 'Orange Eye Shadow'
else:
eyes_color_ep = 'None'
neya = 99
seed(h)
i=randint(0,1000000)
if i > 900000:
mouth_prop = CIGARETTE
mouth_prop_ep = 'Cigarette'
elif i > 880000:
mouth_prop = MASK_2
mouth_prop_ep = 'Medical Mask'
tactac = 99
elif i > 820000:
mouth_prop = PIPE
mouth_prop_ep = 'Pipe'
elif i > 780000:
mouth_prop = VAPE
mouth_prop_ep = 'Vape'
else:
mouth_prop = none
mouth_prop_ep = 'None'
seed(i)
j=randint(0,1000000)
if j > 970000:
eyes = TD_3
eyes_prop_ep ='3D Glasses'
elif j > 930000:
eyes = VR_3
eyes_prop_ep ='VR'
elif j > 880000:
eyes = ClassicShades_4
eyes_prop_ep ='Classic Shades'
elif j > 830000:
eyes = EyePatch_4
eyes_prop_ep ='Eye Patch'
neyh = 99
elif j > 780000:
eyes = NerdGlasses_4
eyes_prop_ep ='Nerd Glasses'
elif j > 730000:
eyes = BigShades_4
eyes_prop_ep ='Big Shades'
elif j > 700000:
eyes = EyeMask_4
eyes_prop_ep ='Eye Mask'
neyh = 99
elif j > 650000:
eyes = HornedRimGlasses_4
eyes_prop_ep ='Horned Rim Glasses'
elif j > 600000:
eyes = RegularShades_4
eyes_prop_ep ='Regular Shades'
elif j > 590000:
eyes = GOGOLES_2
eyes_prop_ep ='Welding Goggles'
hair_prop = none
hair_prop_ep = 'None'
toctoc = 99
else:
eyes=none
eyes_prop_ep ='None'
neyh = 99
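# Compatibility fixes (these marker variables are assumed reset upstream of
# each mint): a Tiara or Pilot Helmet (titine) evicts eyewear unless Welding
# Goggles (toctoc) were drawn, and eye shadow (neya unset) keeps only an
# Eye Patch or Eye Mask (neyh).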
if titine == 99 and toctoc !=99:
eyes = none
eyes_prop_ep ='None'
if neya != 99 and neyh !=99:
eyes = none
eyes_prop_ep ='None'
seed(j)
k=randint(0,1000000)
if k > 975000:
nose = NOSE_2
nose_ep = 'Clown Nose'
tuctuc = 99
else:
nose = none
nose_ep = 'None'
if tactac == 99 and tuctuc == 99:
mouth_prop = none
mouth_prop_ep = 'None'
seed(k)
l=randint(0,1000000)
if l > 970000:
blemishes = ROSY_2
blemishe_ep = 'Rosy Cheeks'
elif l > 900000:
blemishes = MOLE_2
blemishe_ep = 'Mole'
elif l > 870000:
blemishes = SCARE_2
blemishe_ep = 'Scare'
else:
blemishes = none
blemishe_ep = 'None'
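# Lipstick draw: recolours the LI1 lip pixels of the HALFINE grid
# (7% black, 7% hot, 7% purple, 7% orange, else natural lips).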
seed(l)
m=randint(0,1000000)
if m > 930000:
LI1 = nr
mouth_ep = 'Black Lipstick'
elif m > 860000:
LI1 = (255,0,0)
mouth_ep = 'Hot Lipstick'
elif m > 790000:
LI1 = (208,82,203)
mouth_ep = 'Purple Lipstick'
elif m > 720000:
LI1 = (214,92,26)
mouth_ep = 'Orange Lipstick'
else:
mouth = none
mouth_ep = 'None'
seed(m)
n=randint(0,1000000)
if n > 900000:
neck = GoldChain_3
neck_ep = 'Gold Chain'
elif n > 820000:
neck = SilverChain_3
neck_ep = 'Silver Chain'
elif n > 800000:
neck = RING_3
neck_ep = 'Ring Onchain'
else:
neck = none
neck_ep = 'None'
HALFINE=[
[FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR1,FR1,FR1,FR1,FR1,FR1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,SK1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SC1,SC1,SK1,SK1,SK1,SC1,SC1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,EY1,FR1,SK1,SK1,SK1,EY1,FR1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,FR1,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,SK1,FR1,SK1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,LI1,LI1,LI1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,SK1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR1,FR1,FR1,FR1,FR1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR1,SK1,SK1,SK1,FR1,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2]
]
pixels = HALFINE
elif b > 750000:
race_ep = 'Men'
type_ep = 'Male'
hair_color_ep = 'None'
haircut_ep = 'None'
hair_prop_ep = 'None'
eyes_prop_ep = 'None'
blemishe_ep = 'None'
eyes_color_ep = 'None'
facial_hair_ep = 'None'
mouth_prop_ep = 'None'
mouth_ep = 'None'
tooth_color_ep = 'None'
nose_ep = 'None'
neck_ep = 'None'
ears_ep = 'None'
skin_ep = 'None'
ears = none
hair = none
hair_prop = none
neck = none
blemishes = none
mouth = none
facial_hair = none
rod = none
mouth_prop = none
#eye color
eyes = none
nose = none
BE6 = (40,27,9)
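# BE6 is the fixed dark core of the beard; BE5, set alongside each skin
# tone below, is the beard's per-skin base shade.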
seed(b)
c=randint(0,1000000)
if c > 800000:
SK1 = (234,217,217)
SC1 = (165,141,141)
EY1 = (201,178,178)
SK2 = (255,255,255)
HRG3 = (220,222,234)
HRG2 = (183,179,191)
HRG4 = (203,200,212)
HRG5 = (87,101,113)
HRG1 = (0,0,0)
RC1 = (226,187,185)
BE5 = (163,151,131)
MO1 = EY1
SCR1 = EY1
skin_ep = 'Albino'
elif c > 500000:
SK1 = (219,177,128)
SC1 = (166,110,44)
EY1 = (210,157,96)
SK2 = (235,203,166)
HRG3 = (213,200,183)
HRG2 = (184,163,135)
HRG4 = (209,189,164)
HRG5 = (87,101,113)
HRG1 = (0,0,0)
RC1 = (215,154,104)
BE5 = (153,124,89)
MO1 = EY1
SCR1 = EY1
skin_ep = 'Light'
elif c > 200000:
SK1 = (174,139,97)
SC1 = (134,88,30)
EY1 = (167,124,71)
SK2 = (178,138,93)
HRG3 = (188,179,165)
HRG2 = (166,150,128)
HRG4 = (184,171,151)
HRG5 = (87,101,113)
HRG1 = (0,0,0)
RC1 = (191,105,71)
BE5 = (121,97,68)
MO1 = EY1
SCR1 = EY1
skin_ep = 'Mid'
else:
SK1 = (113,63,29)
SC1 = (86,39,10)
EY1 = (114,55,17)
SK2 = (146,79,35)
HRG3 = (155,135,127)
HRG2 = (139,121,111)
HRG4 = (156,131,115)
HRG5 = (87,101,113)
HRG1 = (0,0,0)
RC1 = (142,36,2)
BE5 = (79,44,20)
MO1 = EY1
SCR1 = EY1
skin_ep = 'Dark'
seed(c)
d=randint(0,1000000)
if d > 750000:
ears = EARS_2
ears_ep = 'Earring'
else:
ears = none
ears_ep = 'None'
seed(d)
e=randint(0,1000000)
HR0 = (255,193,0)
HR2 = (251,114,7)
HR3 = (210,154,0)
HR4 = (166,165,165)
HR5 = (165,108,0)
HR6 = (111,0,48)
HR7 = (85,57,23)
red = (255,0,0)
if e > 875000:
HR1 = HR0
HR2 = red
hair_color_ep ='Blonde'
elif e > 750000:
HR1 = nr
HR2 = red
hair_color_ep ='Black'
elif e > 625000:
HR1 = HR2
HR2 = red
hair_color_ep ='Orange'
elif e > 500000:
HR1 = HR3
HR2 = red
hair_color_ep ='Fair'
elif e > 375000:
HR1 = HR4
HR2 = red
hair_color_ep ='Grey'
elif e > 250000:
HR1 = HR5
HR2 = red
hair_color_ep ='Ginger'
elif e > 125000:
HR1 = HR6
HR2 = red
hair_color_ep ='Black Rose'
else:
HR1 = HR7
HR2 = red
hair_color_ep ='Brown'
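# MAN_HR1..HR5: 24x24 hair bitmaps for the male human variant.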
MAN_HR1=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,HR1,HR1,0,HR1,HR1,HR1,HR1,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,0,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,HR1,HR1,0,HR1,0,HR1,0,HR1,HR1,HR1,HR1,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,HR1,0,0,HR1,0,0,HR1,0,HR1,HR1,HR1,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,0,0,HR1,0,0,0,HR1,0,HR1,HR1,HR1,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,HR1,0,HR1,HR1,HR1,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,0,0,HR1,HR1,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,0,0,HR1,HR1,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,0,0,0,HR1,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,0,0,0,HR1,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,0,0,HR1,HR1,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,0,0,HR1,HR1,0,0,0,0],
[0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0],
[0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0],
[0,0,0,0,HR1,0,HR1,HR1,0,0,0,0,0,0,0,0,0,0,HR1,HR1,0,HR1,0,0],
[0,0,0,0,0,0,HR1,0,0,0,0,0,0,0,0,0,0,0,0,HR1,0,0,0,0],
[0,0,0,0,0,0,HR1,0,0,0,0,0,0,0,0,0,0,0,0,HR1,0,0,0,0],
[0,0,0,0,0,HR1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,HR1,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
MAN_HR2=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,HR1,HR1,HR1,HR1,HR1,HR1,HR1,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,0,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,HR1,HR1,HR1,HR1,HR1,HR1,HR1,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,HR1,HR1,HR1,HR1,HR1,HR1,HR1,0,0,0,0,0],
[0,0,0,0,0,HR1,HR1,HR1,HR1,0,0,0,0,HR1,HR1,HR1,HR1,HR1,HR1,HR1,0,0,0,0],
[0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0,0,0,HR1,HR1,HR1,HR1,HR1,HR1,0,0,0],
[0,0,0,0,HR1,HR1,HR1,HR1,0,0,0,0,0,0,0,0,HR1,HR1,HR1,HR1,HR1,0,0,0],
[0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,0,HR1,HR1,HR1,HR1,0,0,0],
[0,0,0,0,HR1,HR1,HR1,HR1,0,0,0,0,0,0,0,0,0,HR1,HR1,HR1,HR1,HR1,0,0],
[0,0,0,0,HR1,HR1,HR1,HR1,0,0,0,0,0,0,0,0,0,HR1,HR1,HR1,HR1,HR1,0,0],
[0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,0,HR1,HR1,HR1,HR1,HR1,0,0],
[0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,0,HR1,HR1,HR1,HR1,HR1,0,0],
[0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0,0,0,0,0,HR1,HR1,HR1,HR1,HR1,0,0],
[0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0,0,0,0,0,HR1,HR1,HR1,HR1,HR1,0,0],
[0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0,0,0,0,0,HR1,HR1,HR1,HR1,0,0,0],
[0,0,0,0,HR1,HR1,HR1,0,0,0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0],
[0,0,0,0,0,0,HR1,0,0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
MAN_HR3=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,HR1,HR1,HR1,HR1,HR1,HR1,HR1,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,0,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,0,HR1,HR1,HR1,HR1,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,HR1,HR1,HR1,0,0,0,HR1,0,HR1,HR1,HR1,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0,HR1,0,0,HR1,HR1,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,HR1,0,HR1,HR1,HR1,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0],
[0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0],
[0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0],
[0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0],
[0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0],
[0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0],
[0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0],
[0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0,0,0,0,0,0,HR1,HR1,HR1,HR1,0,0],
[0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0,0,0,0,0,0,HR1,HR1,HR1,HR1,0,0],
[0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,0,0,HR1,HR1,HR1,HR1,0,0],
[0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,0,0,0,HR1,HR1,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
MAN_HR4=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,HR1,HR1,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,0,HR1,HR1,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,0,HR1,HR1,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,0,HR1,HR1,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,0,HR1,0,HR1,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,0,0,0,HR1,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,0,0,0,HR1,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,0,0,HR1,HR1,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,0,0,HR1,HR1,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,0,0,HR1,HR1,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,0,0,HR1,HR1,0,0,0,0],
[0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,0,0,0,0,HR1,HR1,0,0,0],
[0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,0,0,0,0,HR1,HR1,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
MAN_HR5=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,HR1,HR1,HR1,HR1,HR1,HR1,HR1,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,0,0,0,0,0,0],
[0,0,0,0,0,0,0,HR1,HR1,0,HR1,0,0,0,0,HR1,HR1,HR1,HR1,0,0,0,0,0],
[0,0,0,0,0,0,0,HR1,0,0,HR1,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0],
[0,0,0,0,0,0,0,HR1,0,0,0,HR1,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0],
[0,0,0,0,0,0,0,HR1,0,0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0],
[0,0,0,0,0,0,0,HR1,0,0,0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0],
[0,0,0,0,0,0,0,HR1,0,0,0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0],
[0,0,0,0,0,0,0,HR1,0,0,0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0],
[0,0,0,0,0,0,0,HR1,0,0,0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0],
[0,0,0,0,0,0,0,HR1,0,0,0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0],
[0,0,0,0,0,0,0,HR1,0,0,0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0],
[0,0,0,0,0,0,0,HR1,0,0,0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0],
[0,0,0,0,0,0,0,HR1,0,0,0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0],
[0,0,0,0,0,0,0,HR1,0,0,0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0],
[0,0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0],
[0,0,0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
seed(e)
f=randint(0,1000000)
if f > 800000:
hair = MAN_HR1
haircut_ep = 'Grunge Hair'
elif f > 600000:
hair = MAN_HR2
haircut_ep = 'Prince Hair'
elif f > 400000:
hair = MAN_HR3
haircut_ep = 'King Hair'
elif f > 200000:
hair = MAN_HR4
haircut_ep = 'Bald'
else:
hair = MAN_HR5
haircut_ep = 'Straight Hair'
seed(f)
g=randint(0,1000000)
if g > 960000:
hair_prop = CAP_2
hair_prop_ep = 'Cap'
elif g > 940000:
hair_prop = COWBOY_2
hair_prop_ep = 'Cowboy Hat'
elif g > 930000:
hair_prop = TOPHAT_2
hair_prop_ep = 'Top Hat'
elif g > 910000:
hair_prop = Gondor_Crown
hair_prop_ep = 'Men Crown'
elif g > 870000:
hair_prop = KNITTED_2
hair_prop_ep = 'Knitted Cap'
elif g > 820000:
hair_prop = HEADBAND_2
hair_prop_ep = 'Headband'
elif g > 790000:
hair_prop = FORCAP_2
hair_prop_ep = 'Cap Forward'
elif g > 760000:
hair_prop = BANDANA_2
hair_prop_ep = 'Bandana'
elif g > 740000:
hair_prop = FEDORA_2
hair_prop_ep = 'Fedora'
elif g > 710000:
hair_prop = POLICE_2
hair_prop_ep = 'Police'
elif g > 700000:
hair_prop = BEANI_2
hair_prop_ep = 'Beanie'
else:
hair_prop = none
hair_prop_ep = 'None'
seed(g)
h=randint(0,1000000)
if h > 900000:
neck = GoldChain_1
neck_ep = 'Gold Chain'
elif h > 820000:
neck = SilverChain_1
neck_ep = 'Silver Chain'
elif h > 800000:
neck = RING_1
neck_ep = 'Ring Onchain'
elif h > 780000:
neck = BROCHE_1
neck_ep = 'Brooch'
else:
neck = none
neck_ep = 'None'
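# ShadowBeard: stubble bitmap drawn in the per-skin BE5 shade with a BE6
# core under the lip.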
ShadowBeard=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,BE5,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,BE5,0,0,0,0,0,0,BE5,BE5,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,BE5,BE5,BE5,BE5,BE5,BE5,BE5,BE5,BE5,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,BE5,BE5,BE6,BE6,BE6,BE5,BE5,BE5,BE5,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,BE5,BE5,BE5,BE5,BE5,BE5,BE5,BE5,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,BE5,BE5,BE5,BE5,BE5,BE5,BE5,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,BE5,BE5,BE5,BE5,BE5,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
seed(h)
i=randint(0,1000000)
if i > 950000:
facial_hair = BigBeard
facial_hair_ep = 'Big Beard'
elif i > 900000:
facial_hair = Muttonchops
facial_hair_ep = 'Muttonchops'
elif i > 890000:
facial_hair = Handlebars
facial_hair_ep = 'Handlebars'
elif i > 850000:
facial_hair = Mustache
facial_hair_ep = 'Mustache'
elif i > 750000:
facial_hair = FrontBeardDark
facial_hair_ep = 'Front Beard Dark'
elif i > 700000:
facial_hair = FrontBeard
facial_hair_ep = 'Front Beard'
elif i > 650000:
facial_hair = NormalBeard
facial_hair_ep = 'Normal Beard'
elif i > 600000:
facial_hair = NormalBeardBlack
facial_hair_ep = 'Normal Beard Black'
elif i > 550000:
facial_hair = LuxuriousBeard
facial_hair_ep = 'Luxurious Beard'
elif i > 500000:
facial_hair = Goat
facial_hair_ep = 'Goat'
elif i > 450000:
facial_hair = Chinstrap
facial_hair_ep = 'Chinstrap'
elif i > 400000:
facial_hair = ShadowBeard
facial_hair_ep = 'Shadow Beard'
else:
facial_hair = none
facial_hair_ep = 'None'
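# Mouth prop draw: as with the halflings, the Medical Mask branch clears
# facial_hair.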
seed(i)
j=randint(0,1000000)
if j > 900000:
mouth_prop = CIGARETTE
mouth_prop_ep = 'Cigarette'
elif j > 880000:
mouth_prop = MASK_1
mouth_prop_ep = 'Medical Mask'
facial_hair = none
elif j > 820000:
mouth_prop = PIPE
mouth_prop_ep = 'Pipe'
elif j > 780000:
mouth_prop = VAPE
mouth_prop_ep = 'Vape'
else:
mouth_prop = none
mouth_prop_ep = 'None'
# seed(h)
# i=randint(0,1000000)
# if i > 300000:
# EY1 = (255,255,255)
# elif i > 50000:
# EY1 = (0,0,255)
# else:
# EY1 = (0,255,0)
seed(j)
k=randint(0,1000000)
if k > 970000:
eyes = TD_2
eyes_prop_ep ='3D Glasses'
elif k > 930000:
eyes = VR_2
eyes_prop_ep ='VR'
elif k > 880000:
eyes = ClassicShades_2
eyes_prop_ep ='Classic Shades'
elif k > 830000:
eyes = SmallShades_2
eyes_prop_ep ='Small Shades'
hair = MAN_HR3
haircut_ep = 'King Hair'
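# Small Shades also swap the haircut to the King Hair bitmap (MAN_HR3).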
elif k > 780000:
eyes = EyePatch_2
eyes_prop_ep ='Eye Patch'
elif k > 730000:
eyes = NerdGlasses_2
eyes_prop_ep ='Nerd Glasses'
elif k > 680000:
eyes = BigShades_2
eyes_prop_ep ='Big Shades'
elif k > 650000:
eyes = EyeMask_2
eyes_prop_ep ='Eye Mask'
elif k > 600000:
eyes = HornedRimGlasses_2
eyes_prop_ep ='Horned Rim Glasses'
elif k > 550000:
eyes = RegularShades_2
eyes_prop_ep ='Regular Shades'
else:
eyes=none
eyes_prop_ep ='None'
seed(k)
l=randint(0,1000000)
if l > 975000:
nose = NOSE_1
nose_ep = 'Clown Nose'
else:
nose = none
nose_ep = 'None'
seed(l)
m=randint(0,1000000)
if m > 975000:
mouth = SMILE
mouth_ep = 'Smile'
elif m > 950000:
mouth = FROWN
mouth_ep = 'Frown'
else:
mouth = none
mouth_ep = 'None'
MOLE=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,MO1,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
SCARE_1=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,SCR1,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,SCR1,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,SCR1,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,SCR1,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,SCR1,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
ROSY_1=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,RC1,0,0,0,0,RC1,RC1,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,RC1,0,0,0,0,0,RC1,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
seed(m)
n=randint(0,1000000)
if n > 970000:
blemishes = ROSY_1
blemishe_ep = 'Rosy Cheeks'
elif n > 900000:
blemishes = MOLE
blemishe_ep = 'Mole'
elif n > 870000:
blemishes = SCARE_1
blemishe_ep = 'Scare'
else:
blemishes = none
blemishe_ep = 'None'
MAN=[
[FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR1,FR1,FR1,FR1,FR1,FR1,FR1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SC1,SC1,SK1,SK1,SK1,SC1,SC1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,EY1,FR1,SK1,SK1,SK1,EY1,FR1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,FR1,FR1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,FR1,FR1,SK1,SK1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,FR1,FR1,FR1,SK1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR1,FR1,FR1,FR1,FR1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR1,SK1,SK1,SK1,FR1,FR2,FR2,FR2,FR2,FR2,FR2]
]
pixels = MAN
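# Female human branch ('Men' race, 'Female' type): same reset-then-draw
# pattern as the branches above.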
elif b > 600000:
race_ep = 'Men'
type_ep = 'Female'
hair_color_ep = 'None'
haircut_ep = 'None'
hair_prop_ep = 'None'
eyes_prop_ep = 'None'
blemishe_ep = 'None'
eyes_color_ep = 'None'
facial_hair_ep = 'None'
mouth_prop_ep = 'None'
mouth_ep = 'None'
tooth_color_ep = 'None'
nose_ep = 'None'
neck_ep = 'None'
ears_ep = 'None'
skin_ep = 'None'
ears = none
hair = none
hair_prop = none
neck = none
blemishes = none
#tooth color
mouth = none
facial_hair = none
rod = none
mouth_prop = none
#eye color
eyes = none
nose = none
seed(b)
c=randint(0,1000000)
if c > 800000:
SK1 = (234,217,217)
SC1 = (165,141,141)
EY1 = (201,178,178)
SK2 = (255,255,255)
HRG3 = (220,222,234)
HRG2 = (183,179,191)
HRG4 = (203,200,212)
HRG5 = (87,101,113)
HRG1 = (0,0,0)
RC1 = (226,187,185)
LI1 = (113,28,17)
MO1 = EY1
SCR1 = EY1
skin_ep = 'Albino'
elif c > 500000:
SK1 = (219,177,128)
SC1 = (166,110,44)
EY1 = (210,157,96)
SK2 = (235,203,166)
HRG3 = (213,200,183)
HRG2 = (184,163,135)
HRG4 = (209,189,164)
HRG5 = (87,101,113)
HRG1 = (0,0,0)
RC1 = (215,154,104)
LI1 = (113,28,17)
MO1 = EY1
SCR1 = EY1
skin_ep = 'Light'
elif c > 200000:
SK1 = (174,139,97)
SC1 = (134,88,30)
EY1 = (167,124,71)
SK2 = (178,138,93)
HRG3 = (188,179,165)
HRG2 = (166,150,128)
HRG4 = (184,171,151)
HRG5 = (87,101,113)
HRG1 = (0,0,0)
RC1 = (191,105,71)
LI1 = (95,29,13)
MO1 = EY1
SCR1 = EY1
skin_ep = 'Mid'
else:
SK1 = (113,63,29)
SC1 = (86,39,10)
EY1 = (114,55,17)
SK2 = (146,79,35)
HRG3 = (155,135,127)
HRG2 = (139,121,111)
HRG4 = (156,131,115)
HRG5 = (87,101,113)
HRG1 = (0,0,0)
RC1 = (142,36,2)
LI1 = (74,18,8)
MO1 = EY1
SCR1 = EY1
skin_ep = 'Dark'
seed(c)
d=randint(0,1000000)
if d > 750000:
ears = EARS_3
ears_ep = 'Earring'
else:
ears = none
ears_ep = 'None'
seed(d)
e=randint(0,1000000)
HR0 = (255,193,0)
HR2 = (251,114,7)
HR3 = (210,154,0)
HR4 = (166,165,165)
HR5 = (165,108,0)
HR6 = (111,0,48)
HR7 = (85,57,23)
red = (255,0,0)
if e > 875000:
HR1 = HR0
HR2 = red
hair_color_ep ='Blonde'
elif e > 750000:
HR1 = nr
HR2 = red
hair_color_ep ='Black'
elif e > 625000:
HR1 = HR2
HR2 = red
hair_color_ep ='Orange'
elif e > 500000:
HR1 = HR3
HR2 = red
hair_color_ep ='Fair'
elif e > 375000:
HR1 = HR4
HR2 = red
hair_color_ep ='Grey'
elif e > 250000:
HR1 = HR5
HR2 = red
hair_color_ep ='Ginger'
elif e > 125000:
HR1 = HR6
HR2 = red
hair_color_ep ='Black Rose'
else:
HR1 = HR7
HR2 = red
hair_color_ep ='Brown'
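# WOMAN_HR1..HR5: 24x24 hair bitmaps for the female human; presumably
# picked later by a 20%-band haircut draw like the other variants.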
WOMAN_HR1=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,HR1,HR1,HR1,HR1,HR1,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,HR1,HR1,HR1,HR1,HR1,HR1,HR1,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,0,0,0,0,0,0],
[0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,HR1,HR1,HR1,HR1,0,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0,0,HR1,HR1,0,HR1,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,HR1,0,HR1,0,0,0,0,0],
[0,0,0,0,0,HR1,0,HR1,0,0,0,0,0,0,0,0,HR1,HR1,0,HR1,0,0,0,0],
[0,0,0,0,0,HR1,0,HR1,0,0,0,0,0,0,0,0,HR1,HR1,0,HR1,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,HR1,0,HR1,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,HR1,0,HR1,0,0,0,0,0],
[0,0,0,0,0,HR1,0,HR1,0,0,0,0,0,0,0,0,HR1,HR1,0,HR1,0,0,0,0],
[0,0,0,0,0,HR1,0,HR1,0,0,0,0,0,0,0,0,HR1,HR1,0,HR1,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0,0,0,HR1,0,HR1,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0,0,0,HR1,0,HR1,0,0,0,0,0],
[0,0,0,0,0,HR1,0,HR1,HR1,0,0,0,0,0,0,0,HR1,HR1,0,HR1,0,0,0,0],
[0,0,0,0,0,HR1,0,HR1,HR1,HR1,0,0,0,0,0,0,HR1,HR1,0,HR1,0,0,0,0],
[0,0,0,0,0,0,HR1,0,HR1,HR1,HR1,0,0,0,0,0,HR1,0,HR1,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
WOMAN_HR2=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,HR1,HR1,HR1,HR1,HR1,HR1,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,0,0,0,0,0,0],
[0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,HR1,HR1,HR1,HR1,0,0,0,0,0,0],
[0,0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0,0],
[0,0,0,0,0,0,0,HR1,0,0,0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,HR1,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,HR1,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,HR1,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,HR1,HR1,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,HR1,HR1,HR1,HR1,HR1,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,HR1,HR1,HR1,HR1,HR1,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,HR1,HR1,HR1,HR1,HR1,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
WOMAN_HR3=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,HR1,HR1,HR1,HR1,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,HR1,0,HR1,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,HR1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,HR1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,HR1,0,0,0,0,0,0,0],
[0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0,0,0,0,HR1,0,0,0,0,0,0,0],
[0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,HR1,HR1,HR1,HR1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,HR1,HR1,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
WOMAN_HR4=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,HR1,HR1,HR1,HR1,HR1,HR1,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,0,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,0,HR1,HR1,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,0,HR1,HR1,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,0,HR1,HR1,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0],
[0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0],
[0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0],
[0,0,0,0,HR1,HR1,HR1,HR1,0,0,0,0,0,0,0,HR1,HR1,HR1,HR1,HR1,0,0,0,0],
[0,0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,HR1,HR1,HR1,HR1,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
WOMAN_HR5=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,HR1,HR1,HR1,HR1,HR1,HR1,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,HR1,HR1,HR1,HR1,0,0,0,0,0,0],
[0,0,0,0,0,0,0,HR1,0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0,0],
[0,0,0,0,0,0,0,HR1,0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0,0],
[0,0,0,0,0,0,0,HR1,0,0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0],
[0,0,0,0,0,0,0,HR1,0,0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0],
[0,0,0,0,0,0,0,HR1,0,0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0],
[0,0,0,0,0,0,0,HR1,0,0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0],
[0,0,0,0,0,0,0,HR1,0,0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0],
[0,0,0,0,0,0,0,HR1,0,0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0],
[0,0,0,0,0,0,0,HR1,0,0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0],
[0,0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0],
[0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0],
[0,0,0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,HR1,HR1,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
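# Blemish overlays for this branch: MOLE_2 places a single MO1 pixel, SCARE_2
# draws a diagonal SCR1 scar line, ROSY_2 adds two RC1 cheek patches.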
MOLE_2=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,MO1,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
SCARE_2=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,SCR1,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,SCR1,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,SCR1,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,SCR1,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
ROSY_2=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,RC1,RC1,0,0,0,RC1,RC1,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
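# Trait rolls: every draw reseeds the PRNG with the previous draw, so one
# starting seed fixes the whole chain deterministically, and the width of each
# threshold band on the 0..1,000,000 draw is that trait's rarity (e.g.
# "> 800000" is a 20% band). A minimal standalone sketch of the same banding
# idea ("pick" and "bands" are illustrative names, not part of this script):
#
#   from random import seed, randint
#
#   def pick(prev_draw, bands):
#       # bands: (threshold, trait) pairs, highest threshold first
#       seed(prev_draw)
#       draw = randint(0, 1000000)
#       for threshold, trait in bands:
#           if draw > threshold:
#               return draw, trait
#       return draw, 'None'  # fall-through band
#
#   # e.g. draw, cut = pick(e, [(800000, 'Curly Hair'), (600000, 'Right Side Hair')])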
seed(e)
f=randint(0,1000000)
if f > 800000:
hair = WOMAN_HR1
haircut_ep = 'Curly Hair'
elif f > 600000:
hair = WOMAN_HR2
haircut_ep = 'Right Side Hair'
elif f > 400000:
hair = WOMAN_HR3
haircut_ep = 'Left Side Hair'
elif f > 200000:
hair = WOMAN_HR4
haircut_ep = 'The Bob'
else:
hair = WOMAN_HR5
haircut_ep = 'Straight Hair'
seed(f)
g=randint(0,1000000)
if g > 960000:
hair_prop = CAP_4
hair_prop_ep = 'Cap'
elif g > 950000:
hair_prop = TIARA_2
hair_prop_ep = 'Tiara'
titi = 99
elif g > 930000:
hair_prop = MILICAP_2
hair_prop_ep = 'Punk Hat'
elif g > 890000:
hair_prop = KNITTED_4
hair_prop_ep = 'Knitted Cap'
elif g > 850000:
hair_prop = HEADBAND_4
hair_prop_ep = 'Headband'
elif g > 840000:
hair = none
hair_prop = PILOT_2
hair_prop_ep = 'Pilot Helmet'
titi = 99
elif g > 810000:
hair_prop = BANDANA_4
hair_prop_ep = 'Bandana'
elif g > 750000:
hair_prop = Wo_Crown
hair_prop_ep = 'Circlet'
else:
hair_prop = none
hair_prop_ep = 'None'
seed(g)
h=randint(0,1000000)
if h > 900000:
EY1 = (110,152,77)
SC1 = (92,133,57)
eyes_color_ep = 'Green Eye Shadow'
elif h > 800000:
EY1 = (93,121,117)
SC1 = (80,106,101)
eyes_color_ep = 'Blue Eye Shadow'
elif h > 700000:
EY1 = (176,61,133)
SC1 = (164,55,117)
eyes_color_ep = 'Purple Eye Shadow'
elif h > 600000:
EY1 = (214,92,26)
SC1 = (194,79,17)
eyes_color_ep = 'Orange Eye Shadow'
else:
eyes_color_ep = 'None'
neyu = 99
seed(h)
i=randint(0,1000000)
if i > 900000:
mouth_prop = CIGARETTE
mouth_prop_ep = 'Cigarette'
elif i > 880000:
mouth_prop = MASK_2
mouth_prop_ep = 'Medical Mask'
tactac = 99
elif i > 820000:
mouth_prop = PIPE
mouth_prop_ep = 'Pipe'
elif i > 780000:
mouth_prop = VAPE
mouth_prop_ep = 'Vape'
else:
mouth_prop = none
mouth_prop_ep = 'None'
seed(i)
j=randint(0,1000000)
if j > 970000:
eyes = TD_3
eyes_prop_ep ='3D Glasses'
elif j > 930000:
eyes = VR_3
eyes_prop_ep ='VR'
elif j > 880000:
eyes = ClassicShades_4
eyes_prop_ep ='Classic Shades'
elif j > 830000:
eyes = EyePatch_4
eyes_prop_ep ='Eye Patch'
neyw = 99
elif j > 780000:
eyes = NerdGlasses_4
eyes_prop_ep ='Nerd Glasses'
elif j > 730000:
eyes = BigShades_4
eyes_prop_ep ='Big Shades'
elif j > 700000:
eyes = EyeMask_4
eyes_prop_ep ='Eye Mask'
neyw = 99
elif j > 650000:
eyes = HornedRimGlasses_4
eyes_prop_ep ='Horned Rim Glasses'
elif j > 600000:
eyes = RegularShades_4
eyes_prop_ep ='Regular Shades'
elif j > 590000:
eyes = GOGOLES_2
eyes_prop_ep ='Welding Goggles'
hair_prop = none
hair_prop_ep = 'None'
tata = 99
else:
eyes=none
eyes_prop_ep ='None'
neyw = 99
if titi == 99 and tata != 99:
eyes = none
eyes_prop_ep ='None'
if neyu != 99 and neyw != 99:
eyes = none
eyes_prop_ep ='None'
seed(j)
k=randint(0,1000000)
if k > 975000:
nose = NOSE_2
nose_ep = 'Clown Nose'
tuctuc = 99
else:
nose = none
nose_ep = 'None'
if tactac == 99 and tuctuc == 99:
mouth_prop = none
mouth_prop_ep = 'None'
seed(k)
l=randint(0,1000000)
if l > 970000:
blemishes = ROSY_2
blemishe_ep = 'Rosy Cheeks'
elif l > 900000:
blemishes = MOLE_2
blemishe_ep = 'Mole'
elif l > 870000:
blemishes = SCARE_2
blemishe_ep = 'Scare'
else:
blemishes = none
blemishe_ep = 'None'
seed(l)
m=randint(0,1000000)
if m > 930000:
LI1 = nr
mouth_ep = 'Black Lipstick'
elif m > 860000:
LI1 = (255,0,0)
mouth_ep = 'Hot Lipstick'
elif m > 790000:
LI1 = (208,82,203)
mouth_ep = 'Purple Lipstick'
elif m > 720000:
LI1 = (214,92,26)
mouth_ep = 'Orange Lipstick'
else:
mouth = none
mouth_ep = 'None'
seed(m)
n=randint(0,1000000)
if n > 900000:
neck = GoldChain_3
neck_ep = 'Gold Chain'
elif n > 820000:
neck = SilverChain_3
neck_ep = 'Silver Chain'
elif n > 800000:
neck = RING_3
neck_ep = 'Ring Onchain'
elif n > 790000:
neck = CHOKER
neck_ep = 'Choker'
elif n > 770000:
neck = BROCHE_3
neck_ep = 'Brooch'
else:
neck = none
neck_ep = 'None'
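# Base 24x24 Woman grid: FR2 = outer frame, BG1 = background, FR1 = outline,
# SK1 = skin, SC1 = eye shadow, EY1 = iris, LI1 = lips. The hair, prop and
# blemish masks selected above are layered over this template.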
WOMAN=[
[FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR1,FR1,FR1,FR1,FR1,FR1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,SK1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SC1,SC1,SK1,SK1,SK1,SC1,SC1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,EY1,FR1,SK1,SK1,SK1,EY1,FR1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,FR1,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,SK1,FR1,SK1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,LI1,LI1,LI1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,SK1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,SK1,FR1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR1,FR1,FR1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR1,SK1,SK1,SK1,FR1,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2]
]
pixels = WOMAN
elif b > 535000:
race_ep = 'Elves'
type_ep = 'Male'
hair_color_ep = 'None'
haircut_ep = 'None'
hair_prop_ep = 'None'
eyes_prop_ep = 'None'
blemishe_ep = 'None'
eyes_color_ep = 'None'
facial_hair_ep = 'None'
mouth_prop_ep = 'None'
mouth_ep = 'None'
tooth_color_ep = 'None'
nose_ep = 'None'
neck_ep = 'None'
ears_ep = 'None'
skin_ep = 'None'
ears = none
hair = none
hair_prop = none
neck = none
blemishes = none
#tooth color
mouth = none
facial_hair = none
rod = none
mouth_prop = none
#eye color
eyes = none
nose = none
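# Skin palette roll: each tier sets a coordinated set of tones (SK1/SK2 skin,
# SC1 shading, EY1 eye area, HRG1-HRG5 grey-hair ramp, RC1/MO1/SCR1 blemishes).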
seed(b)
c=randint(0,1000000)
if c > 800000:
SK1 = (234,217,217)
SC1 = (165,141,141)
EY1 = (201,178,178)
SK2 = (255,255,255)
HRG3 = (220,222,234)
HRG2 = (183,179,191)
HRG4 = (203,200,212)
HRG5 = (87,101,113)
HRG1 = (0,0,0)
RC1 = (226,187,185)
MO1 = EY1
SCR1 = EY1
skin_ep = 'Albino'
elif c > 500000:
SK1 = (219,177,128)
SC1 = (166,110,44)
EY1 = (210,157,96)
SK2 = (235,203,166)
HRG3 = (213,200,183)
HRG2 = (184,163,135)
HRG4 = (209,189,164)
HRG5 = (87,101,113)
HRG1 = (0,0,0)
RC1 = (215,154,104)
MO1 = EY1
SCR1 = EY1
skin_ep = 'Light'
elif c > 200000:
SK1 = (174,139,97)
SC1 = (134,88,30)
EY1 = (167,124,71)
SK2 = (178,138,93)
HRG3 = (188,179,165)
HRG2 = (166,150,128)
HRG4 = (184,171,151)
HRG5 = (87,101,113)
HRG1 = (0,0,0)
RC1 = (191,105,71)
MO1 = EY1
SCR1 = EY1
skin_ep = 'Mid'
else:
SK1 = (113,63,29)
SC1 = (86,39,10)
EY1 = (114,55,17)
SK2 = (146,79,35)
HRG3 = (155,135,127)
HRG2 = (139,121,111)
HRG4 = (156,131,115)
HRG5 = (87,101,113)
HRG1 = (0,0,0)
RC1 = (142,36,2)
MO1 = EY1
SCR1 = EY1
skin_ep = 'Dark'
seed(c)
d=randint(0,1000000)
if d > 750000:
ears = EARS_1
ears_ep = 'Earring'
else:
ears = none
ears_ep = 'None'
seed(d)
e=randint(0,1000000)
HR0 = (255,227,72)
HR2 = (255,255,153)
HR3 = (165,108,0)
HR4 = (61,35,32)
HR5 = (111,0,48)
HR6 = (255,0,0)
if e > 850000:
HR1 = HR0
hair_color_ep ='Blond'
elif e > 700000:
HR1 = HR2
hair_color_ep ='Butter'
elif e > 650000:
HR1 = HR3
hair_color_ep ='Ginger'
elif e > 500000:
HR1 = HR4
hair_color_ep ='Brown'
elif e > 350000:
HR1 = HR5
hair_color_ep ='Black Rose'
elif e > 200000:
HR1 = nr
hair_color_ep='Black'
else:
HR1 = HR6
hair_color_ep ='Red'
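# Male-elf hair masks: same 24x24 HR1 convention; ELF_HR3 also writes BG1
# cells along the hairline.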
ELF_HR1=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,HR1,HR1,HR1,HR1,HR1,HR1,HR1,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,0,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,0,HR1,HR1,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,0,HR1,HR1,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,HR1,HR1,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0],
[0,0,0,0,0,0,HR1,0,0,0,0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0],
[0,0,0,0,0,0,HR1,0,0,0,0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0],
[0,0,0,0,0,0,HR1,0,0,0,0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0],
[0,0,0,0,0,0,HR1,0,0,0,0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0],
[0,0,0,0,0,0,HR1,0,0,0,0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0],
[0,0,0,0,0,0,HR1,0,0,0,0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0],
[0,0,0,0,0,0,HR1,0,0,0,0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0],
[0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
ELF_HR2=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,HR1,HR1,HR1,HR1,HR1,HR1,HR1,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,0,0,0,0,0,0],
[0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0,HR1,HR1,HR1,HR1,0,0,0,0,0],
[0,0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0],
[0,0,0,0,0,0,0,HR1,0,0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,HR1,HR1,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,HR1,HR1,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,HR1,HR1,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,HR1,0,HR1,HR1,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,HR1,0,0,HR1,HR1,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,HR1,HR1,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,HR1,HR1,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,HR1,HR1,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,HR1,HR1,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
ELF_HR3=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,BG1,HR1,HR1,HR1,HR1,BG1,BG1,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,BG1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,BG1,0,0,0,0,0,0],
[0,0,0,0,0,0,0,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,BG1,0,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,HR1,0,HR1,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,0,HR1,0,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,0,HR1,0,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
ELF_HR4=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,HR1,HR1,HR1,HR1,HR1,HR1,HR1,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,0,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,HR1,0,HR1,HR1,HR1,HR1,0,0,HR1,HR1,HR1,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,0,0,0,HR1,HR1,0,0,0,0,HR1,HR1,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,0,HR1,HR1,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,HR1,HR1,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,HR1,HR1,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,HR1,HR1,0,0,0,0,0],
[0,0,0,0,0,0,HR1,0,0,0,0,0,0,0,0,0,0,HR1,HR1,0,0,0,0,0],
[0,0,0,0,0,0,HR1,0,0,0,0,0,0,0,0,0,0,HR1,HR1,0,0,0,0,0],
[0,0,0,0,0,0,HR1,0,0,0,0,0,0,0,0,0,0,HR1,HR1,0,0,0,0,0],
[0,0,0,0,0,0,HR1,0,0,0,0,0,0,0,0,0,0,HR1,HR1,0,0,0,0,0],
[0,0,0,0,0,0,HR1,0,0,0,0,0,0,0,0,0,0,HR1,HR1,0,0,0,0,0],
[0,0,0,0,0,0,HR1,0,0,0,0,0,0,0,0,0,0,HR1,HR1,0,0,0,0,0],
[0,0,0,0,0,0,HR1,0,0,0,0,0,0,0,0,0,0,HR1,HR1,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,0,HR1,HR1,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,0,HR1,HR1,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,0,HR1,HR1,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,0,HR1,HR1,0,0,0,0,0]
]
ELF_HR5=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,HR1,HR1,HR1,HR1,HR1,HR1,HR1,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,0,0,0,0,0,0],
[0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,HR1,HR1,HR1,HR1,0,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0,0,HR1,HR1,HR1,HR1,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0],
[0,0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,HR1,0,0,0,0,0,HR1,0,0,0,0,0,0,0,0]
]
seed(e)
f=randint(0,1000000)
if f > 800000:
hair = ELF_HR1
haircut_ep = 'Straight Hair'
elif f > 600000:
hair = ELF_HR2
haircut_ep = 'Braids'
elif f > 400000:
hair = ELF_HR3
haircut_ep = 'Left Side Hair'
elif f > 200000:
hair = ELF_HR4
haircut_ep = 'Long Hair'
else:
hair = ELF_HR5
haircut_ep = 'Medium Layers'
seed(f)
g=randint(0,1000000)
if g > 960000:
hair_prop = CAP_1
hair_prop_ep = 'Cap'
elif g > 940000:
hair_prop = COWBOY_1
hair_prop_ep = 'Cowboy Hat'
elif g > 910000:
hair_prop = TOPHAT_1
hair_prop_ep = 'Top Hat'
elif g > 870000:
hair_prop = KNITTED_1
hair_prop_ep = 'Knitted Cap'
elif g > 865000:
hair_prop = HEADBAND_1
hair_prop_ep = 'Headband'
hair = ELF_HR1
haircut_ep = 'Straight Hair'
elif g > 850000:
hair_prop = HEADBAND_1
hair_prop_ep = 'Headband'
hair = ELF_HR2
haircut_ep = 'Braids'
elif g > 835000:
hair_prop = HEADBAND_1
hair_prop_ep = 'Headband'
hair = ELF_HR4
haircut_ep = 'Long Hair'
elif g > 820000:
hair_prop = HEADBAND_1
hair_prop_ep = 'Headband'
hair = ELF_HR5
haircut_ep = 'Medium Layers'
elif g > 790000:
hair_prop = FORCAP_1
hair_prop_ep = 'Cap Forward'
elif g > 760000:
hair_prop = BANDANA_1
hair_prop_ep = 'Bandana'
elif g > 750000:
hair_prop = Elf_Crown
hair_prop_ep = 'Elfic Crown'
hair = ELF_HR1
haircut_ep = 'Straight Hair'
elif g > 740000:
hair_prop = Elf_Crown
hair_prop_ep = 'Elfic Crown'
hair = ELF_HR2
haircut_ep = 'Braids'
elif g > 730000:
hair_prop = Elf_Crown
hair_prop_ep = 'Elfic Crown'
hair = ELF_HR4
haircut_ep = 'Long Hair'
elif g > 720000:
hair_prop = Elf_Crown
hair_prop_ep = 'Elfic Crown'
hair = ELF_HR5
haircut_ep = 'Medium Layers'
elif g > 700000:
hair_prop = FEDORA_1
hair_prop_ep = 'Fedora'
elif g > 670000:
hair_prop = POLICE_1
hair_prop_ep = 'Police'
elif g > 660000:
hair_prop = BEANI_1
hair_prop_ep = 'Beanie'
else:
hair_prop = none
hair_prop_ep = 'None'
seed(g)
h=randint(0,1000000)
if h > 900000:
neck = GoldChain_1
neck_ep = 'Gold Chain'
elif h > 820000:
neck = SilverChain_1
neck_ep = 'Silver Chain'
elif h > 800000:
neck = RING_1
neck_ep = 'Ring Onchain'
elif h > 780000:
neck = BROCHE_1
neck_ep = 'Brooch'
else:
neck = none
neck_ep = 'None'
seed(h)
i=randint(0,1000000)
if i > 900000:
mouth_prop = CIGARETTE
mouth_prop_ep = 'Cigarette'
elif i > 880000:
mouth_prop = MASK_1
mouth_prop_ep = 'Medical Mask'
elif i > 820000:
mouth_prop = PIPE
mouth_prop_ep = 'Pipe'
elif i > 780000:
mouth_prop = VAPE
mouth_prop_ep = 'Vape'
else:
mouth_prop = none
mouth_prop_ep = 'None'
seed(i)
j=randint(0,1000000)
if j > 970000:
eyes = TD_1
eyes_prop_ep ='3D Glasses'
elif j > 930000:
eyes = VR_1
eyes_prop_ep ='VR'
elif j > 880000:
eyes = ClassicShades_1
eyes_prop_ep ='Classic Shades'
elif j > 830000:
eyes = EyePatch_1
eyes_prop_ep ='Eye Patch'
elif j > 780000:
eyes = NerdGlasses_1
eyes_prop_ep ='Nerd Glasses'
elif j > 730000:
eyes = BigShades_1
eyes_prop_ep ='Big Shades'
elif j > 700000:
eyes = EyeMask_1
eyes_prop_ep ='Eye Mask'
elif j > 650000:
eyes = HornedRimGlasses_1
eyes_prop_ep ='Horned Rim Glasses'
elif j > 600000:
eyes = RegularShades_1
eyes_prop_ep ='Regular Shades'
else:
eyes=none
eyes_prop_ep ='None'
seed(j)
k=randint(0,1000000)
if k > 975000:
nose = NOSE_1
nose_ep = 'Clown Nose'
else:
nose = none
nose_ep = 'None'
seed(k)
l=randint(0,1000000)
if l > 975000:
mouth = SMILE
mouth_ep = 'Smile'
elif l > 950000:
mouth = FROWN
mouth_ep = 'Frown'
else:
mouth = none
mouth_ep = 'None'
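# Blemish overlays for the male-elf branch: MOLE is a single MO1 pixel,
# SCARE_1 a diagonal SCR1 scar line, ROSY_1 two RC1 cheek marks.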
MOLE=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,MO1,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
SCARE_1=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,SCR1,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,SCR1,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,SCR1,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,SCR1,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,SCR1,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
ROSY_1=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,RC1,0,0,0,0,RC1,RC1,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,RC1,0,0,0,0,0,RC1,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
seed(l)
m=randint(0,1000000)
if m > 970000:
blemishes = ROSY_1
blemishe_ep = 'Rosy Cheeks'
elif m > 900000:
blemishes = MOLE
blemishe_ep = 'Mole'
elif m > 870000:
blemishes = SCARE_1
blemishe_ep = 'Scare'
else:
blemishes = none
blemishe_ep = 'None'
ELF=[
[FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR1,FR1,FR1,FR1,FR1,FR1,FR1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,FR1,FR1,SC1,SC1,SK1,SK1,SK1,SC1,SC1,SK1,SK1,FR1,FR1,FR1,FR1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,FR1,SK1,FR1,EY1,FR1,SK1,SK1,SK1,EY1,FR1,SK1,SK1,SK1,SK1,SK1,SK1,FR1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,FR1,FR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,FR1,FR1,FR1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,FR1,FR1,SK1,SK1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,FR1,FR1,FR1,SK1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR1,FR1,FR1,FR1,FR1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR1,SK1,SK1,SK1,FR1,FR2,FR2,FR2,FR2,FR2,FR2]
]
pixels = ELF
elif b > 470000:
race_ep = 'Elves'
type_ep = 'Female'
hair_color_ep = 'None'
haircut_ep = 'None'
hair_prop_ep = 'None'
eyes_prop_ep = 'None'
blemishe_ep = 'None'
eyes_color_ep = 'None'
facial_hair_ep = 'None'
mouth_prop_ep = 'None'
mouth_ep = 'None'
tooth_color_ep = 'None'
nose_ep = 'None'
neck_ep = 'None'
ears_ep = 'None'
skin_ep = 'None'
ears = none
hair = none
hair_prop = none
neck = none
blemishes = none
#tooth color
mouth = none
facial_hair = none
rod = none
mouth_prop = none
#eye color
eyes = none
nose = none
seed(b)
c=randint(0,1000000)
if c > 800000:
SK1 = (234,217,217)
SC1 = (165,141,141)
EY1 = (201,178,178)
SK2 = SK1
HRG3 = (220,222,234)
HRG2 = (183,179,191)
HRG4 = (203,200,212)
HRG5 = (87,101,113)
HRG1 = (0,0,0)
RC1 = (226,187,185)
LI1 = (113,28,17)
MO1 = EY1
SCR1 = EY1
skin_ep = 'Albino'
elif c > 500000:
SK1 = (219,177,128)
SC1 = (166,110,44)
EY1 = (210,157,96)
SK2 = SK1
HRG3 = (213,200,183)
HRG2 = (184,163,135)
HRG4 = (209,189,164)
HRG5 = (87,101,113)
HRG1 = (0,0,0)
RC1 = (215,154,104)
LI1 = (113,28,17)
MO1 = EY1
SCR1 = EY1
skin_ep = 'Light'
elif c > 200000:
SK1 = (174,139,97)
SC1 = (134,88,30)
EY1 = (167,124,71)
SK2 = SK1
HRG3 = (188,179,165)
HRG2 = (166,150,128)
HRG4 = (184,171,151)
HRG5 = (87,101,113)
HRG1 = (0,0,0)
RC1 = (191,105,71)
LI1 = (95,29,13)
MO1 = EY1
SCR1 = EY1
skin_ep = 'Mid'
else:
SK1 = (113,63,29)
SC1 = (86,39,10)
EY1 = (114,55,17)
SK2 = SK1
HRG3 = (155,135,127)
HRG2 = (139,121,111)
HRG4 = (156,131,115)
HRG5 = (87,101,113)
HRG1 = (0,0,0)
RC1 = (142,36,2)
LI1 = (74,18,8)
MO1 = EY1
SCR1 = EY1
skin_ep = 'Dark'
seed(c)
d=randint(0,1000000)
if d > 750000:
ears = EARS_3
ears_ep = 'Earring'
else:
ears = none
ears_ep = 'None'
seed(d)
e=randint(0,1000000)
HR0 = (255,227,72)
HR2 = (249,255,0)
HR3 = (165,108,0)
HR4 = (61,35,32)
HR5 = (111,0,48)
HR6 = (255,0,0)
if e > 850000:
HR1 = HR0
hair_color_ep ='Blond'
elif e > 700000:
HR1 = HR2
hair_color_ep ='Butter'
elif e > 650000:
HR1 = HR3
hair_color_ep ='Ginger'
elif e > 500000:
HR1 = HR4
hair_color_ep ='Brown'
elif e > 350000:
HR1 = HR5
hair_color_ep ='Black Rose'
elif e > 200000:
HR1 = nr
hair_color_ep='Black'
else:
HR1 = HR6
hair_color_ep ='Red'
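# Female-elf hair masks: same 24x24 HR1 mask convention as the sets above.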
ELFE_HR1=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,HR1,HR1,HR1,HR1,HR1,HR1,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,0,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0],
[0,0,0,0,0,0,HR1,0,0,0,0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0],
[0,0,0,0,0,0,HR1,0,0,0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0],
[0,0,0,0,0,0,HR1,0,0,0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0],
[0,0,0,0,0,0,HR1,0,0,0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0],
[0,0,0,0,0,0,HR1,0,0,0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0],
[0,0,0,0,0,0,HR1,0,0,0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,HR1,HR1,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0],
[0,0,0,0,0,0,0,HR1,HR1,HR1,HR1,HR1,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
ELFE_HR2=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,HR1,HR1,HR1,HR1,HR1,HR1,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,HR1,HR1,HR1,HR1,0,0,0,0,0,0],
[0,0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0,0],
[0,0,0,0,0,0,0,HR1,0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,HR1,HR1,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,HR1,HR1,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,HR1,HR1,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,HR1,HR1,HR1,HR1,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,HR1,HR1,0,HR1,HR1,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,HR1,HR1,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,HR1,HR1,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,HR1,HR1,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,HR1,0,0]
]
ELFE_HR3=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,HR1,HR1,HR1,HR1,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,HR1,0,HR1,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,HR1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,HR1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,HR1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,HR1,0,0,0,0,0,0,0],
[0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,HR1,HR1,HR1,HR1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,HR1,HR1,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
ELFE_HR4=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,HR1,HR1,HR1,HR1,HR1,HR1,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,HR1,0,HR1,HR1,HR1,HR1,0,HR1,HR1,HR1,0,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,0,0,0,HR1,HR1,0,0,0,HR1,HR1,0,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0]
]
ELFE_HR5=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,HR1,HR1,HR1,HR1,HR1,HR1,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0,0,HR1,HR1,HR1,0,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0],
[0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0],
[0,0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,HR1,HR1,0,0,0,0,HR1,HR1,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,HR1,0,0,0,0,HR1,0,0,0,0,0,0,0,0,0]
]
MOLE_2=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,MO1,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
SCARE_2=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,SCR1,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,SCR1,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,SCR1,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,SCR1,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
ROSY_2=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,RC1,RC1,0,0,0,RC1,RC1,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
seed(e)
f=randint(0,1000000)
if f > 800000:
hair = ELFE_HR1
haircut_ep = 'Straight Hair'
elif f > 600000:
hair = ELFE_HR2
haircut_ep = 'Braids'
elif f > 400000:
hair = ELFE_HR3
haircut_ep = 'Left Side Hair'
elif f > 200000:
hair = ELFE_HR4
haircut_ep = 'Long Hair'
else:
hair = ELFE_HR5
haircut_ep = 'Medium Layers'
seed(f)
g=randint(0,1000000)
if g > 900000:
hair_prop = CAP_3
hair_prop_ep = 'Cap'
elif g > 700000:
hair_prop = MILICAP_1
hair_prop_ep = 'Punk Hat'
elif g > 600000:
hair_prop = KNITTED_3
hair_prop_ep = 'Knitted Cap'
elif g > 500000:
hair_prop = HEADBAND_3
hair_prop_ep = 'Headband'
elif g > 400000:
hair = none
hair_prop = PILOT_1
hair_prop_ep = 'Pilot Helmet'
titin = 99
elif g > 300000:
hair_prop = BANDANA_3
hair_prop_ep = 'Bandana'
elif g > 100000:
hair_prop = Elfe_Tiara
hair_prop_ep = 'Elfic Tiara'
else:
hair_prop = none
hair_prop_ep = 'None'
seed(g)
h=randint(0,1000000)
if h > 900000:
EY1 = (110,152,77)
SC1 = (92,133,57)
eyes_color_ep = 'Green Eye Shadow'
elif h > 800000:
EY1 = (93,121,117)
SC1 = (80,106,101)
eyes_color_ep = 'Blue Eye Shadow'
elif h > 700000:
EY1 = (176,61,133)
SC1 = (164,55,117)
eyes_color_ep = 'Purple Eye Shadow'
elif h > 600000:
EY1 = (214,92,26)
SC1 = (194,79,17)
eyes_color_ep = 'Orange Eye Shadow'
else:
eyes_color_ep = 'None'
neyo = 99
seed(h)
i=randint(0,1000000)
if i > 900000:
mouth_prop = CIGARETTE
mouth_prop_ep = 'Cigarette'
elif i > 880000:
mouth_prop = MASK_2
mouth_prop_ep = 'Medical Mask'
tactac = 99
elif i > 820000:
mouth_prop = PIPE
mouth_prop_ep = 'Pipe'
elif i > 780000:
mouth_prop = VAPE
mouth_prop_ep = 'Vape'
else:
mouth_prop = none
mouth_prop_ep = 'None'
seed(i)
j=randint(0,1000000)
if j > 970000:
eyes = TD_3
eyes_prop_ep ='3D Glasses'
elif j > 930000:
eyes = VR_3
eyes_prop_ep ='VR'
elif j > 880000:
eyes = ClassicShades_3
eyes_prop_ep ='Classic Shades'
elif j > 830000:
eyes = EyePatch_3
eyes_prop_ep ='Eye Patch'
neye = 99
elif j > 780000:
eyes = NerdGlasses_3
eyes_prop_ep ='Nerd Glasses'
elif j > 730000:
eyes = BigShades_3
eyes_prop_ep ='Big Shades'
elif j > 700000:
eyes = EyeMask_3
eyes_prop_ep ='Eye Mask'
neye = 99
elif j > 650000:
eyes = HornedRimGlasses_3
eyes_prop_ep ='Horned Rim Glasses'
elif j > 600000:
eyes = RegularShades_3
eyes_prop_ep ='Regular Shades'
elif j > 590000:
eyes = GOGOLES_1
eyes_prop_ep ='Welding Goggles'
hair_prop = none
hair_prop_ep = 'None'
toutou = 99
else:
eyes=none
eyes_prop_ep ='None'
neye = 99
if titin == 99 and toutou != 99:
eyes = none
eyes_prop_ep ='None'
if neyo != 99 and neye != 99:
eyes = none
eyes_prop_ep ='None'
seed(j)
k=randint(0,1000000)
if k > 975000:
nose = NOSE_2
nose_ep = 'Clown Nose'
tuctuc = 99
else:
nose = none
nose_ep = 'None'
if tactac == 99 and tuctuc == 99:
mouth_prop = none
mouth_prop_ep = 'None'
seed(k)
l=randint(0,1000000)
if l > 970000:
blemishes = ROSY_2
blemishe_ep = 'Rosy Cheeks'
elif l > 900000:
blemishes = MOLE_2
blemishe_ep = 'Mole'
elif l > 870000:
blemishes = SCARE_2
blemishe_ep = 'Scare'
else:
blemishes = none
blemishe_ep = 'None'
seed(l)
m=randint(0,1000000)
if m > 930000:
LI1 = nr
mouth_ep = 'Black Lipstick'
elif m > 860000:
LI1 = (255,0,0)
mouth_ep = 'Hot Lipstick'
elif m > 790000:
LI1 = (208,82,203)
mouth_ep = 'Purple Lipstick'
elif m > 720000:
LI1 = (214,92,26)
mouth_ep = 'Orange Lipstick'
else:
mouth = none
mouth_ep = 'None'
seed(m)
n=randint(0,1000000)
if n > 900000:
neck = GoldChain_2
neck_ep = 'Gold Chain'
elif n > 820000:
neck = SilverChain_2
neck_ep = 'Silver Chain'
elif n > 800000:
neck = RING_2
neck_ep = 'Ring Onchain'
elif n > 780000:
neck = BROCHE_2
neck_ep = 'Brooch'
else:
neck = none
neck_ep = 'None'
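# Base 24x24 female-elf grid: same colour legend as the other templates; the
# widened FR1/SK1 rows at eye level draw the pointed ears.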
ELFE=[
[FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR1,FR1,FR1,FR1,FR1,FR1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,SK1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,SK1,SK1,SK1,SK1,SK2,SK1,FR1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,FR1,FR1,SC1,SC1,SK1,SK1,SK1,SC1,SC1,SK1,FR1,FR1,FR1,FR1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,FR1,SK1,FR1,EY1,FR1,SK1,SK1,SK1,EY1,FR1,SK1,SK1,SK1,SK1,SK1,FR1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,FR1,FR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,FR1,FR1,FR1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,SK1,FR1,SK1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,LI1,LI1,LI1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,SK1,FR1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR1,FR1,FR1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR1,SK1,SK1,SK1,FR1,FR2,FR2,FR2,FR2,FR2,FR2,FR2]
]
pixels = ELFE
elif b > 460000:
race_ep = 'Dwarves'
type_ep = 'Firebeards'
hair_color_ep = 'None'
haircut_ep = 'None'
hair_prop_ep = 'None'
eyes_prop_ep = 'None'
blemishe_ep = 'None'
eyes_color_ep = 'None'
facial_hair_ep = 'None'
mouth_prop_ep = 'None'
mouth_ep = 'None'
tooth_color_ep = 'None'
nose_ep = 'None'
neck_ep = 'None'
ears_ep = 'None'
skin_ep = 'None'
ears = none
hair = none
hair_prop = none
neck = none
blemishes = none
#tooth color
mouth = none
facial_hair = none
rod = none
mouth_prop = none
#eye color
eyes = none
nose = none
seed(b)
c=randint(0,1000000)
if c > 800000:
SK1 = (234,217,217)
SC1 = (165,141,141)
EY1 = (201,178,178)
SK2 = (255,255,255)
HRG3 = (220,222,234)
HRG2 = (183,179,191)
HRG4 = (203,200,212)
HRG5 = (87,101,113)
HRG1 = (0,0,0)
RC1 = (226,187,185)
MO1 = EY1
SCR1 = EY1
skin_ep = 'Albino'
elif c > 500000:
SK1 = (219,177,128)
SC1 = (166,110,44)
EY1 = (210,157,96)
SK2 = (235,203,166)
HRG3 = (213,200,183)
HRG2 = (184,163,135)
HRG4 = (209,189,164)
HRG5 = (87,101,113)
HRG1 = (0,0,0)
RC1 = (215,154,104)
MO1 = EY1
SCR1 = EY1
skin_ep = 'Light'
elif c > 200000:
SK1 = (174,139,97)
SC1 = (134,88,30)
EY1 = (167,124,71)
SK2 = (178,138,93)
HRG3 = (188,179,165)
HRG2 = (166,150,128)
HRG4 = (184,171,151)
HRG5 = (87,101,113)
HRG1 = (0,0,0)
RC1 = (191,105,71)
MO1 = EY1
SCR1 = EY1
skin_ep = 'Mid'
else:
SK1 = (113,63,29)
SC1 = (86,39,10)
EY1 = (114,55,17)
SK2 = (146,79,35)
HRG3 = (155,135,127)
HRG2 = (139,121,111)
HRG4 = (156,131,115)
HRG5 = (87,101,113)
HRG1 = (0,0,0)
RC1 = (142,36,2)
MO1 = EY1
SCR1 = EY1
skin_ep = 'Dark'
seed(c)
d=randint(0,1000000)
if d > 750000:
ears = EARS_2
ears_ep = 'Earring'
else:
ears = none
ears_ep = 'None'
seed(d)
e=randint(0,1000000)
HR0 = (255,193,0)
HR8 = (251,114,7)
HR3 = (210,154,0)
HR4 = (166,165,165)
HR5 = (165,108,0)
HR6 = (111,0,48)
HR7 = (85,57,23)
if e > 860000:
HR1 = (50,50,50) #Ok
HR2 = (200,200,200)
hair_color_ep = 'Dark Grey & Silver'
elif e > 720000:
HR1 = HR8
HR2 = (111,0,48) #ok
hair_color_ep = 'Orange & Black Rose'
elif e > 580000:
HR1 = HR3 #ok
HR2 = (210,210,0)
hair_color_ep = 'Fair & Wattle'
elif e > 440000:
HR1 = (80,50,30) #Ok
HR2 = (44,4,9)
hair_color_ep = 'Bronze & Chocolate'
elif e > 300000:
HR1 = HR5
HR2 = HR3
hair_color_ep = 'Ginger & Fair'
elif e > 150000:
HR1 = (220,130,0) #ok
HR2 = (70,40,10)
hair_color_ep = 'Mango & Brown'
else:
HR1 = (210,210,210) #Ok
HR2 = (210,210,210)
hair_color_ep = 'Grey Goose'
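# Dwarves skip the haircut roll: f is still drawn so the seed chain stays in
# step with the other races (beard and hair are baked into the base template).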
seed(e)
f=randint(0,1000000)
seed(f)
g=randint(0,1000000)
if g > 960000:
hair_prop = CAP_2
hair_prop_ep = 'Cap'
elif g > 940000:
hair_prop = COWBOY_2
hair_prop_ep = 'Cowboy Hat'
elif g > 920000:
hair_prop = TOPHAT_2
hair_prop_ep = 'Top Hat'
elif g > 890000:
hair_prop = Helmet
hair_prop_ep = 'Dwarf Helmet'
tete = 99
elif g > 870000:
hair_prop = FORCAP_2
hair_prop_ep = 'Cap Forward'
elif g > 850000:
hair_prop = BANDANA_2
hair_prop_ep = 'Bandana'
elif g > 830000:
hair_prop = FEDORA_2
hair_prop_ep = 'Fedora'
elif g > 800000:
hair_prop = POLICE_2
hair_prop_ep = 'Police'
else:
hair_prop = none
hair_prop_ep = 'None'
seed(g)
h=randint(0,1000000)
if h > 900000:
mouth_prop = CIGARETTE
mouth_prop_ep = 'Cigarette'
elif h > 840000:
mouth_prop = PIPE
mouth_prop_ep = 'Pipe'
elif h > 800000:
mouth_prop = VAPE
mouth_prop_ep = 'Vape'
else:
mouth_prop = none
mouth_prop_ep = 'None'
seed(h)
i=randint(0,1000000)
if i > 970000:
eyes = TD_2
eyes_prop_ep ='3D Glasses'
elif i > 930000:
eyes = VR_2
eyes_prop_ep ='VR'
elif i > 880000:
eyes = ClassicShades_2
eyes_prop_ep ='Classic Shades'
elif i > 830000:
eyes = SmallShades_2
eyes_prop_ep ='Small Shades'
elif i > 780000:
eyes = EyePatch_2
eyes_prop_ep ='Eye Patch'
elif i > 730000:
eyes = NerdGlasses_2
eyes_prop_ep ='Nerd Glasses'
elif i > 680000:
eyes = BigShades_2
eyes_prop_ep ='Big Shades'
elif i > 650000:
eyes = EyeMask_2
eyes_prop_ep ='Eye Mask'
elif i > 600000:
eyes = HornedRimGlasses_2
eyes_prop_ep ='Horned Rim Glasses'
elif i > 550000:
eyes = RegularShades_2
eyes_prop_ep ='Regular Shades'
else:
eyes=none
eyes_prop_ep ='None'
if tete == 99:
eyes = none
eyes_prop_ep ='None'
seed(i)
j=randint(0,1000000)
if j > 975000:
nose = NOSE_1
nose_ep = 'Clown Nose'
else:
nose = none
nose_ep = 'None'
seed(j)
k=randint(0,1000000)
if k > 975000:
mouth = SMILE
mouth_ep = 'Smile'
elif k > 950000:
mouth = FROWN
mouth_ep = 'Frown'
else:
mouth = none
mouth_ep = 'None'
MOLE=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,MO1,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
SCARE_1=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,SCR1,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,SCR1,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,SCR1,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,SCR1,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,SCR1,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
ROSY_1=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,RC1,0,0,0,0,RC1,RC1,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,RC1,0,0,0,0,0,RC1,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
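# Blemish rarities: ~3% Rosy Cheeks, ~7% Mole, ~3% Scare, ~87% none.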
seed(k)
l=randint(0,1000000)
if l > 970000:
blemishes = ROSY_1
blemishe_ep = 'Rosy Cheeks'
elif l > 900000:
blemishes = MOLE
blemishe_ep = 'Mole'
elif l > 870000:
blemishes = SCARE_1
blemishe_ep = 'Scare'
else:
blemishes = none
blemishe_ep = 'None'
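# Base 24x24 sprite for this variant: SK1 skin, SC1 brow shading, EY1 eyes, HR1/HR2
# hair tones; FR1/FR2 (outline/frame) and BG1 (background) are defined elsewhere.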
DWARF_1=[
[FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,HR1,HR1,HR1,HR1,HR1,HR1,HR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,HR2,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR2,HR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,HR2,HR1,HR1,HR1,SK1,SK1,SK1,HR1,HR1,HR1,HR2,HR1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,HR2,HR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,HR1,HR1,HR2,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,HR2,HR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,FR1,HR1,HR2,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,HR2,HR1,FR1,HR1,SK1,SK1,SK1,SK1,SK1,HR1,SK1,SK1,FR1,HR1,HR1,HR2,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,HR2,HR1,FR1,SC1,SC1,SK1,SK1,SK1,SC1,SC1,SK1,SK1,SK1,FR1,HR1,HR1,HR2,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,HR2,HR1,FR1,EY1,FR1,SK1,SK1,SK1,EY1,FR1,SK1,SK1,SK1,FR1,HR1,HR1,HR2,BG1,FR2],
[FR2,BG1,BG1,BG1,HR2,HR1,HR1,FR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,FR1,FR1,HR1,HR1,HR2,BG1,FR2],
[FR2,BG1,BG1,BG1,HR2,HR1,HR1,FR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,FR1,HR1,HR1,HR1,BG1,HR2,FR2],
[FR2,BG1,BG1,HR2,BG1,HR1,HR1,FR1,SK1,SK1,FR1,FR1,SK1,SK1,SK1,SK1,SK1,FR1,HR1,HR1,HR1,BG1,BG1,FR2],
[FR2,BG1,BG1,HR2,BG1,HR1,HR1,FR1,HR1,HR2,HR2,HR2,HR2,HR1,HR1,SK1,SK1,FR1,HR1,HR1,HR1,BG1,BG1,FR2],
[FR2,BG1,HR2,BG1,BG1,HR1,FR1,HR1,HR2,HR1,HR1,HR1,HR1,HR2,HR1,HR1,HR1,FR1,HR1,HR1,HR1,HR1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,HR1,FR1,HR1,HR2,HR1,FR1,FR1,FR1,HR1,HR2,HR1,HR1,HR1,FR1,HR1,HR1,HR1,HR1,FR2],
[FR2,BG1,BG1,BG1,BG1,HR1,FR1,HR1,HR2,HR1,HR1,HR1,HR1,HR1,HR2,HR1,HR1,HR1,FR1,HR1,HR1,HR1,HR1,FR2],
[FR2,BG1,BG1,BG1,BG1,HR1,HR1,FR1,HR2,HR1,HR1,HR1,HR1,HR1,HR2,HR1,HR1,HR1,FR1,HR1,HR1,HR1,HR1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,HR1,HR2,FR1,HR1,HR1,HR1,HR1,HR1,HR1,HR2,FR1,FR1,BG1,BG1,HR1,HR1,HR1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,HR2,HR1,BG1,FR1,HR1,HR1,FR1,FR1,FR1,FR1,HR2,FR1,BG1,BG1,BG1,HR1,HR1,FR2],
[FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR1,FR1,FR2,FR1,SK1,SK1,SK1,FR1,FR2,FR2,FR2,FR2,FR2,FR2]
]
pixels = DWARF_1
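# Each elif below carves a 10,000-wide band out of b (~1% of draws, assuming b is
# uniform over 0..1,000,000). This band selects the Blacklocks dwarf (DWARF_2).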
elif b > 450000:
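# Clear every trait label and overlay before re-rolling them for this variant.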
race_ep = 'Dwarves'
type_ep = 'Blacklocks'
hair_color_ep = 'None'
haircut_ep = 'None'
hair_prop_ep = 'None'
eyes_prop_ep = 'None'
blemishe_ep = 'None'
eyes_color_ep = 'None'
facial_hair_ep = 'None'
mouth_prop_ep = 'None'
mouth_ep = 'None'
tooth_color_ep = 'None'
nose_ep = 'None'
neck_ep = 'None'
ears_ep = 'None'
skin_ep = 'None'
ears = none
hair = none
hair_prop = none
neck = none
blemishes = none
#tooth color
mouth = none
facial_hair = none
rod = none
mouth_prop = none
#eye color
eyes = none
nose = none
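# Skin palette: ~20% Albino, ~30% Light, ~30% Mid, ~20% Dark. MO1 (mole) and SCR1
# (scar) reuse the eye tone EY1 so blemishes match the skin; the HRG* greys are not
# used by this branch's sprite and are presumably shared with other characters.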
seed(b)
c=randint(0,1000000)
if c > 800000:
SK1 = (234,217,217)
SC1 = (165,141,141)
EY1 = (201,178,178)
SK2 = (255,255,255)
HRG3 = (220,222,234)
HRG2 = (183,179,191)
HRG4 = (203,200,212)
HRG5 = (87,101,113)
HRG1 = (0,0,0)
RC1 = (226,187,185)
MO1 = EY1
SCR1 = EY1
skin_ep = 'Albino'
elif c > 500000:
SK1 = (219,177,128)
SC1 = (166,110,44)
EY1 = (210,157,96)
SK2 = (235,203,166)
HRG3 = (213,200,183)
HRG2 = (184,163,135)
HRG4 = (209,189,164)
HRG5 = (87,101,113)
HRG1 = (0,0,0)
RC1 = (215,154,104)
MO1 = EY1
SCR1 = EY1
skin_ep = 'Light'
elif c > 200000:
SK1 = (174,139,97)
SC1 = (134,88,30)
EY1 = (167,124,71)
SK2 = (178,138,93)
HRG3 = (188,179,165)
HRG2 = (166,150,128)
HRG4 = (184,171,151)
HRG5 = (87,101,113)
HRG1 = (0,0,0)
RC1 = (191,105,71)
MO1 = EY1
SCR1 = EY1
skin_ep = 'Mid'
else:
SK1 = (113,63,29)
SC1 = (86,39,10)
EY1 = (114,55,17)
SK2 = (146,79,35)
HRG3 = (155,135,127)
HRG2 = (139,121,111)
HRG4 = (156,131,115)
HRG5 = (87,101,113)
HRG1 = (0,0,0)
RC1 = (142,36,2)
MO1 = EY1
SCR1 = EY1
skin_ep = 'Dark'
seed(c)
d=randint(0,1000000)
if d > 750000:
ears = EARS_2
ears_ep = 'Earring'
else:
ears = none
ears_ep = 'None'
seed(d)
e=randint(0,1000000)
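# Shared hair color constants; only HR3, HR5 and HR8 are referenced by the chain
# below, the rest are presumably used by other branches.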
HR0 = (255,193,0)
HR8 = (251,114,7)
HR3 = (210,154,0)
HR4 = (166,165,165)
HR5 = (165,108,0)
HR6 = (111,0,48)
HR7 = (85,57,23)
if e > 860000:
HR1 = (50,50,50) #Ok
HR2 = (200,200,200)
hair_color_ep = 'Dark Grey & Silver'
elif e > 720000:
HR1 = HR8
HR2 = (111,0,48) #ok
hair_color_ep = 'Orange & Black Rose'
elif e > 580000:
HR1 = HR3 #ok
HR2 = (210,210,0)
hair_color_ep = 'Fair & Wattle'
elif e > 440000:
HR1 = (80,50,30) #Ok
HR2 = (44,4,9)
hair_color_ep = 'Bronze & Chocolate'
elif e > 300000:
HR1 = HR5
HR2 = HR3
hair_color_ep = 'Ginger & Fair'
elif e > 150000:
HR1 = (220,130,0) #ok
HR2 = (70,40,10)
hair_color_ep = 'Mango & Brown'
else:
HR1 = (210,210,210) #Ok
HR2 = (210,210,210)
hair_color_ep = 'Grey Goose'
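# f only advances the seed chain between the hair color (e) and headwear (g) picks;
# its value is never used directly. Headwear lands on ~20% of draws, and the Dwarf
# Helmet (~3%) also flags tete = 99 to suppress eyewear later.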
seed(e)
f=randint(0,1000000)
seed(f)
g=randint(0,1000000)
if g > 960000:
hair_prop = CAP_2
hair_prop_ep = 'Cap'
elif g > 940000:
hair_prop = COWBOY_2
hair_prop_ep = 'Cowboy Hat'
elif g > 920000:
hair_prop = TOPHAT_2
hair_prop_ep = 'Top Hat'
elif g > 890000:
hair_prop = Helmet
hair_prop_ep = 'Dwarf Helmet'
tete = 99
elif g > 870000:
hair_prop = FORCAP_2
hair_prop_ep = 'Cap Forward'
elif g > 850000:
hair_prop = BANDANA_2
hair_prop_ep = 'Bandana'
elif g > 830000:
hair_prop = FEDORA_2
hair_prop_ep = 'Fedora'
elif g > 800000:
hair_prop = POLICE_2
hair_prop_ep = 'Police'
else:
hair_prop = none
hair_prop_ep = 'None'
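# Mouth props: ~10% Cigarette, ~6% Pipe, ~4% Vape, ~80% none.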
seed(g)
h=randint(0,1000000)
if h > 900000:
mouth_prop = CIGARETTE
mouth_prop_ep = 'Cigarette'
elif h > 840000:
mouth_prop = PIPE
mouth_prop_ep = 'Pipe'
elif h > 800000:
mouth_prop = VAPE
mouth_prop_ep = 'Vape'
else:
mouth_prop = none
mouth_prop_ep = 'None'
# seed(h)
# i=randint(0,1000000)
# if i > 300000:
# EY1 = (255,255,255)
# elif i > 50000:
# EY1 = (0,0,255)
# else:
# EY1 = (0,255,0)
seed(h)
i=randint(0,1000000)
if i > 970000:
eyes = TD_2
eyes_prop_ep ='3D Glasses'
elif i > 930000:
eyes = VR_2
eyes_prop_ep ='VR'
elif i > 880000:
eyes = ClassicShades_2
eyes_prop_ep ='Classic Shades'
elif i > 830000:
eyes = SmallShades_2
eyes_prop_ep ='Small Shades'
elif i > 780000:
eyes = EyePatch_2
eyes_prop_ep ='Eye Patch'
elif i > 730000:
eyes = NerdGlasses_2
eyes_prop_ep ='Nerd Glasses'
elif i > 680000:
eyes = BigShades_2
eyes_prop_ep ='Big Shades'
elif i > 650000:
eyes = EyeMask_2
eyes_prop_ep ='Eye Mask'
elif i > 600000:
eyes = HornedRimGlasses_2
eyes_prop_ep ='Horned Rim Glasses'
elif i > 550000:
eyes = RegularShades_2
eyes_prop_ep ='Regular Shades'
else:
eyes = none
eyes_prop_ep ='None'
if tete == 99:
eyes = none
eyes_prop_ep ='None'
seed(i)
j=randint(0,1000000)
if j > 975000:
nose = NOSE_1
nose_ep = 'Clown Nose'
else:
nose = none
nose_ep = 'None'
seed(j)
k=randint(0,1000000)
if k > 975000:
mouth = SMILE
mouth_ep = 'Smile'
elif k > 950000:
mouth = FROWN
mouth_ep = 'Frown'
else:
mouth = none
mouth_ep = 'None'
MOLE=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,MO1,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
SCARE_1=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,SCR1,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,SCR1,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,SCR1,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,SCR1,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,SCR1,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
ROSY_1=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,RC1,0,0,0,0,RC1,RC1,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,RC1,0,0,0,0,0,RC1,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
seed(k)
l=randint(0,1000000)
if l > 970000:
blemishes = ROSY_1
blemishe_ep = 'Rosy Cheeks'
elif l > 900000:
blemishes = MOLE
blemishe_ep = 'Mole'
elif l > 870000:
blemishes = SCARE_1
blemishe_ep = 'Scare'
else:
blemishes = none
blemishe_ep = 'None'
DWARF_2=[
[FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,HR1,HR1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,HR1,HR1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,HR1,HR1,HR1,HR1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,HR1,HR1,HR1,HR1,HR1,HR1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,HR1,HR1,HR1,HR1,SK1,SK1,SK1,SK1,SK1,SK1,HR1,HR1,HR1,HR1,HR1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,HR1,HR1,HR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,HR1,HR1,HR1,HR1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,HR1,HR1,HR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,HR1,HR1,HR1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,HR1,HR1,HR1,SK1,SK1,SK1,SK1,SK1,SK1,HR1,HR1,HR1,HR1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,HR1,HR1,SC1,SC1,SK1,SK1,SK1,SC1,SC1,SK1,HR1,HR1,FR1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,HR1,EY1,FR1,SK1,SK1,SK1,EY1,FR1,SK1,HR1,HR1,FR1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,HR1,HR1,SK1,SK1,SK1,SK1,SK1,SK1,HR1,HR1,HR1,FR1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,HR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,HR1,HR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,HR1,SK1,SK1,FR1,FR1,SK1,SK1,SK1,SK1,HR1,HR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,HR1,SK1,HR2,HR2,HR2,HR2,HR2,SK1,SK1,HR1,HR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,HR1,HR2,HR2,HR2,HR2,HR2,HR2,HR2,HR2,HR1,HR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,HR1,HR2,SK1,FR1,FR1,FR1,SK1,HR2,HR2,HR1,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,HR2,HR2,HR2,SK1,HR2,HR2,HR2,SK1,SK1,HR2,HR2,HR2,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,HR2,BG1,HR2,HR2,SK1,SK1,HR2,SK1,SK1,SK1,HR2,HR2,FR1,HR2,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,HR2,BG1,HR2,HR2,FR1,FR1,HR2,FR1,FR1,SK1,HR2,HR2,FR1,HR2,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,HR2,BG1,BG1,HR2,HR2,BG1,BG1,HR2,BG1,FR1,SK1,HR2,HR2,FR1,BG1,HR2,BG1,BG1,BG1,FR2],
[FR2,FR2,FR2,FR2,FR2,FR2,FR2,HR2,HR2,FR2,FR2,HR2,FR2,FR1,SK1,HR2,HR2,FR1,FR2,FR2,FR2,FR2,FR2,FR2]
]
pixels = DWARF_2
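# b in (440000, 450000]: the Broadbeams dwarf (DWARF_3).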
elif b > 440000:
race_ep = 'Dwarves'
type_ep = 'Broadbeams'
hair_color_ep = 'None'
haircut_ep = 'None'
hair_prop_ep = 'None'
eyes_prop_ep = 'None'
blemishe_ep = 'None'
eyes_color_ep = 'None'
facial_hair_ep = 'None'
mouth_prop_ep = 'None'
mouth_ep = 'None'
tooth_color_ep = 'None'
nose_ep = 'None'
neck_ep = 'None'
ears_ep = 'None'
skin_ep = 'None'
ears = none
hair = none
hair_prop = none
neck = none
blemishes = none
#tooth color
mouth = none
facial_hair = none
rod = none
mouth_prop = none
#eye color
eyes = none
nose = none
seed(b)
c=randint(0,1000000)
if c > 800000:
SK1 = (234,217,217)
SC1 = (165,141,141)
EY1 = (201,178,178)
SK2 = (255,255,255)
HRG3 = (220,222,234)
HRG2 = (183,179,191)
HRG4 = (203,200,212)
HRG5 = (87,101,113)
HRG1 = (0,0,0)
RC1 = (226,187,185)
MO1 = EY1
SCR1 = EY1
skin_ep = 'Albino'
elif c > 500000:
SK1 = (219,177,128)
SC1 = (166,110,44)
EY1 = (210,157,96)
SK2 = (235,203,166)
HRG3 = (213,200,183)
HRG2 = (184,163,135)
HRG4 = (209,189,164)
HRG5 = (87,101,113)
HRG1 = (0,0,0)
RC1 = (215,154,104)
MO1 = EY1
SCR1 = EY1
skin_ep = 'Light'
elif c > 200000:
SK1 = (174,139,97)
SC1 = (134,88,30)
EY1 = (167,124,71)
SK2 = (178,138,93)
HRG3 = (188,179,165)
HRG2 = (166,150,128)
HRG4 = (184,171,151)
HRG5 = (87,101,113)
HRG1 = (0,0,0)
RC1 = (191,105,71)
MO1 = EY1
SCR1 = EY1
skin_ep = 'Mid'
else:
SK1 = (113,63,29)
SC1 = (86,39,10)
EY1 = (114,55,17)
SK2 = (146,79,35)
HRG3 = (155,135,127)
HRG2 = (139,121,111)
HRG4 = (156,131,115)
HRG5 = (87,101,113)
HRG1 = (0,0,0)
RC1 = (142,36,2)
MO1 = EY1
SCR1 = EY1
skin_ep = 'Dark'
seed(c)
d=randint(0,1000000)
if d > 750000:
ears = EARS_2
ears_ep = 'Earring'
else:
ears = none
ears_ep = 'None'
seed(d)
e=randint(0,1000000)
HR0 = (255,193,0)
HR8 = (251,114,7)
HR3 = (210,154,0)
HR4 = (166,165,165)
HR5 = (165,108,0)
HR6 = (111,0,48)
HR7 = (85,57,23)
if e > 860000:
HR1 = (50,50,50) #Ok
HR2 = (200,200,200)
hair_color_ep = 'Dark Grey & Silver'
elif e > 720000:
HR1 = HR8
HR2 = (111,0,48) #ok
hair_color_ep = 'Orange & Black Rose'
elif e > 580000:
HR1 = HR3 #ok
HR2 = (210,210,0)
hair_color_ep = 'Fair & Wattle'
elif e > 440000:
HR1 = (80,50,30) #Ok
HR2 = (44,4,9)
hair_color_ep = 'Bronze & Chocolate'
elif e > 300000:
HR1 = HR5
HR2 = HR3
hair_color_ep = 'Ginger & Fair'
elif e > 150000:
HR1 = (220,130,0) #ok
HR2 = (70,40,10)
hair_color_ep = 'Mango & Brown'
else:
HR1 = (210,210,210) #Ok
HR2 = (210,210,210)
hair_color_ep = 'Grey Goose'
seed(e)
f=randint(0,1000000)
seed(f)
g=randint(0,1000000)
if g > 960000:
hair_prop = CAP_2
hair_prop_ep = 'Cap'
elif g > 940000:
hair_prop = COWBOY_2
hair_prop_ep = 'Cowboy Hat'
elif g > 920000:
hair_prop = TOPHAT_2
hair_prop_ep = 'Top Hat'
elif g > 890000:
hair_prop = Helmet
hair_prop_ep = 'Dwarf Helmet'
tete = 99
elif g > 870000:
hair_prop = FORCAP_2
hair_prop_ep = 'Cap Forward'
elif g > 850000:
hair_prop = BANDANA_2
hair_prop_ep = 'Bandana'
elif g > 830000:
hair_prop = FEDORA_2
hair_prop_ep = 'Fedora'
elif g > 800000:
hair_prop = POLICE_2
hair_prop_ep = 'Police'
else:
hair_prop = none
hair_prop_ep = 'None'
seed(g)
h=randint(0,1000000)
if h > 900000:
mouth_prop = CIGARETTE
mouth_prop_ep = 'Cigarette'
elif h > 840000:
mouth_prop = PIPE
mouth_prop_ep = 'Pipe'
elif h > 800000:
mouth_prop = VAPE
mouth_prop_ep = 'Vape'
else:
mouth_prop = none
mouth_prop_ep = 'None'
# seed(h)
# i=randint(0,1000000)
# if i > 300000:
# EY1 = (255,255,255)
# elif i > 50000:
# EY1 = (0,0,255)
# else:
# EY1 = (0,255,0)
seed(h)
i=randint(0,1000000)
if i > 970000:
eyes = TD_2
eyes_prop_ep ='3D Glasses'
elif i > 930000:
eyes = VR_2
eyes_prop_ep ='VR'
elif i > 880000:
eyes = ClassicShades_2
eyes_prop_ep ='Classic Shades'
elif i > 830000:
eyes = SmallShades_2
eyes_prop_ep ='Small Shades'
elif i > 780000:
eyes = EyePatch_2
eyes_prop_ep ='Eye Patch'
elif i > 730000:
eyes = NerdGlasses_2
eyes_prop_ep ='Nerd Glasses'
elif i > 680000:
eyes = BigShades_2
eyes_prop_ep ='Big Shades'
elif i > 650000:
eyes = EyeMask_2
eyes_prop_ep ='Eye Mask'
elif i > 600000:
eyes = HornedRimGlasses_2
eyes_prop_ep ='Horned Rim Glasses'
elif i > 550000:
eyes = RegularShades_2
eyes_prop_ep ='Regular Shades'
else:
eyes = none
eyes_prop_ep ='None'
if tete == 99:
eyes = none
eyes_prop_ep ='None'
seed(i)
j=randint(0,1000000)
if j > 975000:
nose = NOSE_1
nose_ep = 'Clown Nose'
else:
nose = none
nose_ep = 'None'
seed(j)
k=randint(0,1000000)
if k > 975000:
mouth = SMILE
mouth_ep = 'Smile'
elif k > 950000:
mouth = FROWN
mouth_ep = 'Frown'
else:
mouth = none
mouth_ep = 'None'
MOLE=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,MO1,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
SCARE_1=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,SCR1,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,SCR1,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,SCR1,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,SCR1,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,SCR1,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
ROSY_1=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,RC1,0,0,0,0,RC1,RC1,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,RC1,0,0,0,0,0,RC1,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
seed(k)
l=randint(0,1000000)
if l > 970000:
blemishes = ROSY_1
blemishe_ep = 'Rosy Cheeks'
elif l > 900000:
blemishes = MOLE
blemishe_ep = 'Mole'
elif l > 870000:
blemishes = SCARE_1
blemishe_ep = 'Scare'
else:
blemishes = none
blemishe_ep = 'None'
DWARF_3=[
[FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,HR1,HR1,HR1,HR1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,HR1,HR1,HR1,HR1,SK1,HR1,HR1,SK1,HR1,HR1,HR1,HR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,HR1,HR1,HR1,HR1,SK1,SK1,SK1,SK1,SK1,SK1,HR1,HR1,HR1,HR1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,HR1,HR1,FR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,HR1,HR1,HR1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,HR1,HR1,FR1,SC1,SC1,SK1,SK1,SK1,SC1,SC1,SK1,HR1,HR1,HR1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,HR1,HR1,FR1,EY1,FR1,SK1,SK1,SK1,EY1,FR1,SK1,HR1,HR1,HR1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,HR1,HR1,FR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,HR1,HR1,HR1,HR1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,HR1,HR1,HR1,FR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,HR1,HR1,HR1,HR1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,HR1,HR1,HR1,FR1,HR1,SK1,FR1,FR1,SK1,SK1,SK1,HR1,HR1,HR1,HR1,HR1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,HR1,HR1,BG1,FR1,HR2,HR2,HR2,HR2,HR2,HR2,HR2,FR1,HR1,FR1,HR1,HR1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,HR1,BG1,FR1,HR2,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR2,FR1,FR1,BG1,HR1,HR1,BG1,BG1,FR2],
[FR2,BG1,BG1,HR1,HR1,FR1,HR2,HR1,HR1,HR1,FR1,FR1,FR1,HR1,HR1,HR1,HR2,FR1,BG1,HR1,HR1,BG1,BG1,FR2],
[FR2,BG1,BG1,HR1,HR1,FR1,HR2,HR1,HR1,HR1,HR1,HR2,HR1,HR1,HR1,HR1,HR2,FR1,BG1,HR1,HR1,BG1,BG1,FR2],
[FR2,BG1,BG1,HR1,HR1,FR1,HR2,HR1,HR1,HR1,HR2,HR2,HR2,HR1,HR1,HR1,HR2,FR1,BG1,HR1,HR1,BG1,BG1,FR2],
[FR2,BG1,BG1,HR1,HR1,BG1,FR1,HR2,HR1,HR2,HR2,HR2,HR2,HR2,HR1,HR2,FR1,FR1,BG1,HR1,HR1,BG1,BG1,FR2],
[FR2,BG1,BG1,HR1,HR1,BG1,BG1,FR1,HR2,FR1,FR1,FR1,FR1,FR1,HR2,FR1,SK1,FR1,BG1,HR1,HR1,BG1,BG1,FR2],
[FR2,FR2,FR2,HR1,HR1,FR2,FR2,FR2,FR1,FR2,FR2,FR2,FR2,FR1,FR1,SK1,SK1,FR1,FR2,HR1,HR1,FR2,FR2,FR2]
]
pixels = DWARF_3
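# b in (430000, 440000]: the Stiffbeards dwarf (DWARF_4). Its hair labels drop the
# second color name because the DWARF_4 grid paints hair with HR1 only.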
elif b > 430000:
race_ep = 'Dwarves'
type_ep = 'Stiffbeards'
hair_color_ep = 'None'
haircut_ep = 'None'
hair_prop_ep = 'None'
eyes_prop_ep = 'None'
blemishe_ep = 'None'
eyes_color_ep = 'None'
facial_hair_ep = 'None'
mouth_prop_ep = 'None'
mouth_ep = 'None'
tooth_color_ep = 'None'
nose_ep = 'None'
neck_ep = 'None'
ears_ep = 'None'
skin_ep = 'None'
ears = none
hair = none
hair_prop = none
neck = none
blemishes = none
#tooth color
mouth = none
facial_hair = none
rod = none
mouth_prop = none
#eye color
eyes = none
nose = none
seed(b)
c=randint(0,1000000)
if c > 800000:
SK1 = (234,217,217)
SC1 = (165,141,141)
EY1 = (201,178,178)
SK2 = (255,255,255)
HRG3 = (220,222,234)
HRG2 = (183,179,191)
HRG4 = (203,200,212)
HRG5 = (87,101,113)
HRG1 = (0,0,0)
RC1 = (226,187,185)
MO1 = EY1
SCR1 = EY1
skin_ep = 'Albino'
elif c > 500000:
SK1 = (219,177,128)
SC1 = (166,110,44)
EY1 = (210,157,96)
SK2 = (235,203,166)
HRG3 = (213,200,183)
HRG2 = (184,163,135)
HRG4 = (209,189,164)
HRG5 = (87,101,113)
HRG1 = (0,0,0)
RC1 = (215,154,104)
MO1 = EY1
SCR1 = EY1
skin_ep = 'Light'
elif c > 200000:
SK1 = (174,139,97)
SC1 = (134,88,30)
EY1 = (167,124,71)
SK2 = (178,138,93)
HRG3 = (188,179,165)
HRG2 = (166,150,128)
HRG4 = (184,171,151)
HRG5 = (87,101,113)
HRG1 = (0,0,0)
RC1 = (191,105,71)
MO1 = EY1
SCR1 = EY1
skin_ep = 'Mid'
else:
SK1 = (113,63,29)
SC1 = (86,39,10)
EY1 = (114,55,17)
SK2 = (146,79,35)
HRG3 = (155,135,127)
HRG2 = (139,121,111)
HRG4 = (156,131,115)
HRG5 = (87,101,113)
HRG1 = (0,0,0)
RC1 = (142,36,2)
MO1 = EY1
SCR1 = EY1
skin_ep = 'Dark'
seed(c)
d=randint(0,1000000)
if d > 750000:
ears = EARS_2
ears_ep = 'Earring'
else:
ears = none
ears_ep = 'None'
seed(d)
e=randint(0,1000000)
HR0 = (255,193,0)
HR8 = (251,114,7)
HR3 = (210,154,0)
HR4 = (166,165,165)
HR5 = (165,108,0)
HR6 = (111,0,48)
HR7 = (85,57,23)
if e > 860000:
HR1 = (50,50,50) #Ok
HR2 = (200,200,200)
hair_color_ep = 'Dark Grey'
elif e > 720000:
HR1 = HR8
HR2 = (111,0,48) #ok
hair_color_ep = 'Orange'
elif e > 580000:
HR1 = HR3 #ok
HR2 = (210,210,0)
hair_color_ep = 'Fair'
elif e > 440000:
HR1 = (80,50,30) #Ok
HR2 = (44,4,9)
hair_color_ep = 'Bronze'
elif e > 300000:
HR1 = HR5
HR2 = HR3
hair_color_ep = 'Ginger'
elif e > 150000:
HR1 = (220,130,0) #ok
HR2 = (70,40,10)
hair_color_ep = 'Mango'
else:
HR1 = (210,210,210) #Ok
HR2 = (210,210,210)
hair_color_ep = 'Grey Goose'
seed(e)
f=randint(0,1000000)
seed(f)
g=randint(0,1000000)
if g > 960000:
hair_prop = CAP_2
hair_prop_ep = 'Cap'
elif g > 940000:
hair_prop = COWBOY_2
hair_prop_ep = 'Cowboy Hat'
elif g > 920000:
hair_prop = TOPHAT_2
hair_prop_ep = 'Top Hat'
elif g > 890000:
hair_prop = Helmet
hair_prop_ep = 'Dwarf Helmet'
tete = 99
elif g > 870000:
hair_prop = FORCAP_2
hair_prop_ep = 'Cap Forward'
elif g > 850000:
hair_prop = BANDANA_2
hair_prop_ep = 'Bandana'
elif g > 830000:
hair_prop = FEDORA_2
hair_prop_ep = 'Fedora'
elif g > 800000:
hair_prop = POLICE_2
hair_prop_ep = 'Police'
else:
hair_prop = none
hair_prop_ep = 'None'
seed(g)
h=randint(0,1000000)
if h > 900000:
mouth_prop = CIGARETTE
mouth_prop_ep = 'Cigarette'
elif h > 840000:
mouth_prop = PIPE
mouth_prop_ep = 'Pipe'
elif h > 800000:
mouth_prop = VAPE
mouth_prop_ep = 'Vape'
else:
mouth_prop = none
mouth_prop_ep = 'None'
# seed(h)
# i=randint(0,1000000)
# if i > 300000:
# EY1 = (255,255,255)
# elif i > 50000:
# EY1 = (0,0,255)
# else:
# EY1 = (0,255,0)
seed(h)
i=randint(0,1000000)
if i > 970000:
eyes = TD_2
eyes_prop_ep ='3D Glasses'
elif i > 930000:
eyes = VR_2
eyes_prop_ep ='VR'
elif i > 880000:
eyes = ClassicShades_2
eyes_prop_ep ='Classic Shades'
elif i > 830000:
eyes = SmallShades_2
eyes_prop_ep ='Small Shades'
elif i > 780000:
eyes = EyePatch_2
eyes_prop_ep ='Eye Patch'
elif i > 730000:
eyes = NerdGlasses_2
eyes_prop_ep ='Nerd Glasses'
elif i > 680000:
eyes = BigShades_2
eyes_prop_ep ='Big Shades'
elif i > 650000:
eyes = EyeMask_2
eyes_prop_ep ='Eye Mask'
elif i > 600000:
eyes = HornedRimGlasses_2
eyes_prop_ep ='Horned Rim Glasses'
elif i > 550000:
eyes = RegularShades_2
eyes_prop_ep ='Regular Shades'
else:
eyes = none
eyes_prop_ep ='None'
if tete == 99:
eyes = none
eyes_prop_ep ='None'
seed(i)
j=randint(0,1000000)
if j > 975000:
nose = NOSE_1
nose_ep = 'Clown Nose'
else:
nose = none
nose_ep = 'None'
seed(j)
k=randint(0,1000000)
if k > 975000:
mouth = SMILE
mouth_ep = 'Smile'
elif k > 950000:
mouth = FROWN
mouth_ep = 'Frown'
else:
mouth = none
mouth_ep = 'None'
MOLE=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,MO1,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
SCARE_1=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,SCR1,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,SCR1,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,SCR1,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,SCR1,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,SCR1,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
ROSY_1=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,RC1,0,0,0,0,RC1,RC1,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,RC1,0,0,0,0,0,RC1,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
seed(k)
l=randint(0,1000000)
if l > 970000:
blemishes = ROSY_1
blemishe_ep = 'Rosy Cheeks'
elif l > 900000:
blemishes = MOLE
blemishe_ep = 'Mole'
elif l > 870000:
blemishes = SCARE_1
blemishe_ep = 'Scare'
else:
blemishes = none
blemishe_ep = 'None'
DWARF_4=[
[FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,HR1,HR1,HR1,HR1,HR1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,HR1,HR1,HR1,HR1,HR1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,HR1,HR1,HR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,HR1,HR1,HR1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,HR1,HR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,HR1,HR1,HR1,HR1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,HR1,HR1,SK1,SK1,SK1,SK1,SK1,HR1,HR1,SK1,HR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,HR1,HR1,SC1,SC1,SK1,SK1,SK1,SC1,SC1,SK1,SK1,HR1,HR1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,HR1,HR1,EY1,FR1,SK1,SK1,SK1,EY1,FR1,SK1,SK1,HR1,HR1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,HR1,HR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,HR1,HR1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,HR1,HR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,HR1,HR1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,HR1,HR1,SK1,SK1,FR1,FR1,SK1,SK1,SK1,SK1,HR1,HR1,HR1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,HR1,HR1,SK1,SK1,SK1,SK1,SK1,SK1,HR1,HR1,HR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,HR1,HR1,HR1,SK1,SK1,SK1,SK1,SK1,HR1,HR1,HR1,HR1,HR1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,HR1,HR1,HR1,SK1,FR1,FR1,FR1,SK1,HR1,HR1,SK1,HR1,HR1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,HR1,HR1,SK1,SK1,SK1,HR1,SK1,SK1,SK1,SK1,SK1,HR1,HR1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,HR1,HR1,SK1,HR1,HR1,HR1,SK1,SK1,SK1,HR1,HR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,HR1,HR1,HR1,HR1,HR1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,FR2,FR2,FR2,FR2,FR2,FR2,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,SK1,FR1,FR2,FR2,FR2,FR2,FR2,FR2]
]
pixels = DWARF_4
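# b in (420000, 430000]: the Stonefoots dwarf (DWARF_5).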
elif b > 420000:
race_ep = 'Dwarves'
type_ep = 'Stonefoots'
hair_color_ep = 'None'
haircut_ep = 'None'
hair_prop_ep = 'None'
eyes_prop_ep = 'None'
blemishe_ep = 'None'
eyes_color_ep = 'None'
facial_hair_ep = 'None'
mouth_prop_ep = 'None'
mouth_ep = 'None'
tooth_color_ep = 'None'
nose_ep = 'None'
neck_ep = 'None'
ears_ep = 'None'
skin_ep = 'None'
ears = none
hair = none
hair_prop = none
neck = none
blemishes = none
#tooth color
mouth = none
facial_hair = none
rod = none
mouth_prop = none
#eye color
eyes = none
nose = none
seed(b)
c=randint(0,1000000)
if c > 800000:
SK1 = (234,217,217)
SC1 = (165,141,141)
EY1 = (201,178,178)
SK2 = (255,255,255)
HRG3 = (220,222,234)
HRG2 = (183,179,191)
HRG4 = (203,200,212)
HRG5 = (87,101,113)
HRG1 = (0,0,0)
RC1 = (226,187,185)
MO1 = EY1
SCR1 = EY1
skin_ep = 'Albino'
elif c > 500000:
SK1 = (219,177,128)
SC1 = (166,110,44)
EY1 = (210,157,96)
SK2 = (235,203,166)
HRG3 = (213,200,183)
HRG2 = (184,163,135)
HRG4 = (209,189,164)
HRG5 = (87,101,113)
HRG1 = (0,0,0)
RC1 = (215,154,104)
MO1 = EY1
SCR1 = EY1
skin_ep = 'Light'
elif c > 200000:
SK1 = (174,139,97)
SC1 = (134,88,30)
EY1 = (167,124,71)
SK2 = (178,138,93)
HRG3 = (188,179,165)
HRG2 = (166,150,128)
HRG4 = (184,171,151)
HRG5 = (87,101,113)
HRG1 = (0,0,0)
RC1 = (191,105,71)
MO1 = EY1
SCR1 = EY1
skin_ep = 'Mid'
else:
SK1 = (113,63,29)
SC1 = (86,39,10)
EY1 = (114,55,17)
SK2 = (146,79,35)
HRG3 = (155,135,127)
HRG2 = (139,121,111)
HRG4 = (156,131,115)
HRG5 = (87,101,113)
HRG1 = (0,0,0)
RC1 = (142,36,2)
MO1 = EY1
SCR1 = EY1
skin_ep = 'Dark'
seed(c)
d=randint(0,1000000)
if d > 750000:
ears = EARS_2
ears_ep = 'Earring'
else:
ears = none
ears_ep = 'None'
seed(d)
e=randint(0,1000000)
HR0 = (255,193,0)
HR8 = (251,114,7)
HR3 = (210,154,0)
HR4 = (166,165,165)
HR5 = (165,108,0)
HR6 = (111,0,48)
HR7 = (85,57,23)
if e > 860000:
HR1 = (50,50,50) #Ok
HR2 = (200,200,200)
hair_color_ep = 'Dark Grey & Silver'
elif e > 720000:
HR1 = HR8
HR2 = (111,0,48) #ok
hair_color_ep = 'Orange & Black Rose'
elif e > 580000:
HR1 = HR3 #ok
HR2 = (210,210,0)
hair_color_ep = 'Fair & Wattle'
elif e > 440000:
HR1 = (80,50,30) #Ok
HR2 = (44,4,9)
hair_color_ep = 'Bronze & Chocolate'
elif e > 300000:
HR1 = HR5
HR2 = HR3
hair_color_ep = 'Ginger & Fair'
elif e > 150000:
HR1 = (220,130,0) #ok
HR2 = (70,40,10)
hair_color_ep = 'Mango & Brown'
else:
HR1 = (210,210,210) #Ok
HR2 = (210,210,210)
hair_color_ep = 'Grey Goose'
seed(e)
f=randint(0,1000000)
seed(f)
g=randint(0,1000000)
if g > 960000:
hair_prop = CAP_2
hair_prop_ep = 'Cap'
elif g > 940000:
hair_prop = COWBOY_2
hair_prop_ep = 'Cowboy Hat'
elif g > 920000:
hair_prop = TOPHAT_2
hair_prop_ep = 'Top Hat'
elif g > 890000:
hair_prop = Helmet
hair_prop_ep = 'Dwarf Helmet'
tete = 99
elif g > 870000:
hair_prop = FORCAP_2
hair_prop_ep = 'Cap Forward'
elif g > 850000:
hair_prop = BANDANA_2
hair_prop_ep = 'Bandana'
elif g > 830000:
hair_prop = FEDORA_2
hair_prop_ep = 'Fedora'
elif g > 800000:
hair_prop = POLICE_2
hair_prop_ep = 'Police'
else:
hair_prop = none
hair_prop_ep = 'None'
seed(g)
h=randint(0,1000000)
if h > 900000:
mouth_prop = CIGARETTE
mouth_prop_ep = 'Cigarette'
elif h > 840000:
mouth_prop = PIPE
mouth_prop_ep = 'Pipe'
elif h > 800000:
mouth_prop = VAPE
mouth_prop_ep = 'Vape'
else:
mouth_prop = none
mouth_prop_ep = 'None'
# seed(h)
# i=randint(0,1000000)
# if i > 300000:
# EY1 = (255,255,255)
# elif i > 50000:
# EY1 = (0,0,255)
# else:
# EY1 = (0,255,0)
seed(h)
i=randint(0,1000000)
if i > 970000:
eyes = TD_2
eyes_prop_ep ='3D Glasses'
elif i > 930000:
eyes = VR_2
eyes_prop_ep ='VR'
elif i > 880000:
eyes = ClassicShades_2
eyes_prop_ep ='Classic Shades'
elif i > 830000:
eyes = SmallShades_2
eyes_prop_ep ='Small Shades'
elif i > 780000:
eyes = EyePatch_2
eyes_prop_ep ='Eye Patch'
elif i > 730000:
eyes = NerdGlasses_2
eyes_prop_ep ='Nerd Glasses'
elif i > 680000:
eyes = BigShades_2
eyes_prop_ep ='Big Shades'
elif i > 650000:
eyes = EyeMask_2
eyes_prop_ep ='Eye Mask'
elif i > 600000:
eyes = HornedRimGlasses_2
eyes_prop_ep ='Horned Rim Glasses'
elif i > 550000:
eyes = RegularShades_2
eyes_prop_ep ='Regular Shades'
else:
eyes = none
eyes_prop_ep ='None'
if tete == 99:
eyes = none
eyes_prop_ep ='None'
seed(i)
j=randint(0,1000000)
if j > 975000:
nose = NOSE_1
nose_ep = 'Clown Nose'
else:
nose = none
nose_ep = 'None'
seed(j)
k=randint(0,1000000)
if k > 975000:
mouth = SMILE
mouth_ep = 'Smile'
elif k > 950000:
mouth = FROWN
mouth_ep = 'Frown'
else:
mouth = none
mouth_ep = 'None'
MOLE=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,MO1,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
SCARE_1=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,SCR1,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,SCR1,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,SCR1,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,SCR1,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,SCR1,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
ROSY_1=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,RC1,0,0,0,0,RC1,RC1,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,RC1,0,0,0,0,0,RC1,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
seed(k)
l=randint(0,1000000)
if l > 970000:
blemishes = ROSY_1
blemishe_ep = 'Rosy Cheeks'
elif l > 900000:
blemishes = MOLE
blemishe_ep = 'Mole'
elif l > 870000:
blemishes = SCARE_1
blemishe_ep = 'Scare'
else:
blemishes = none
blemishe_ep = 'None'
DWARF_5=[
[FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR1,FR1,FR1,FR1,FR1,FR1,FR1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,HR1,HR1,HR1,SK1,SK1,SK1,HR1,HR1,HR1,SK1,HR1,HR1,HR1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,HR1,SC1,SC1,HR1,SK1,HR1,SC1,SC1,HR1,SK1,HR1,FR1,HR1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,EY1,FR1,SK1,SK1,SK1,EY1,FR1,SK1,HR1,SK1,FR1,HR1,HR1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,HR1,FR1,FR1,HR1,HR1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,HR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,HR1,FR1,BG1,HR1,HR1,HR1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,HR1,SK1,SK1,FR1,FR1,SK1,SK1,SK1,SK1,HR1,FR1,BG1,HR1,HR1,HR1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,HR1,SK1,SK1,HR2,HR2,SK1,SK1,SK1,HR1,HR1,FR1,BG1,HR1,HR1,HR1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,HR1,HR1,HR2,HR2,HR2,HR2,HR2,HR1,HR1,HR1,FR1,BG1,HR1,HR1,HR1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,HR1,HR2,HR2,FR1,FR1,FR1,HR2,HR2,HR1,HR1,FR1,BG1,HR1,HR1,HR1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,HR2,HR2,HR1,HR2,HR2,HR2,HR1,HR2,HR2,HR1,FR1,BG1,HR1,HR1,HR1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,HR2,HR2,HR1,HR2,HR2,HR2,HR2,HR2,HR1,HR2,HR2,FR1,BG1,HR1,HR1,HR1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,HR2,BG1,HR2,HR2,HR2,HR1,HR2,HR2,HR2,HR1,HR2,FR1,BG1,HR1,HR1,HR1,HR1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,HR2,BG1,HR2,HR2,BG1,BG1,BG1,HR2,HR2,HR1,HR2,FR1,BG1,BG1,HR1,HR1,HR1,FR2],
[FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,HR2,FR2,FR2,FR2,FR2,FR1,HR2,SK1,SK1,FR1,FR2,FR2,FR2,FR2,FR2,FR2]
]
pixels = DWARF_5
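# b in (410000, 420000]: the Ironfists dwarf (DWARF_6).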
elif b > 410000:
race_ep = 'Dwarves'
type_ep = 'Ironfists'
hair_color_ep = 'None'
haircut_ep = 'None'
hair_prop_ep = 'None'
eyes_prop_ep = 'None'
blemishe_ep = 'None'
eyes_color_ep = 'None'
facial_hair_ep = 'None'
mouth_prop_ep = 'None'
mouth_ep = 'None'
tooth_color_ep = 'None'
nose_ep = 'None'
neck_ep = 'None'
ears_ep = 'None'
skin_ep = 'None'
ears = none
hair = none
hair_prop = none
neck = none
blemishes = none
#tooth color
mouth = none
facial_hair = none
rod = none
mouth_prop = none
#eye color
eyes = none
nose = none
seed(b)
c=randint(0,1000000)
if c > 800000:
SK1 = (234,217,217)
SC1 = (165,141,141)
EY1 = (201,178,178)
SK2 = (255,255,255)
HRG3 = (220,222,234)
HRG2 = (183,179,191)
HRG4 = (203,200,212)
HRG5 = (87,101,113)
HRG1 = (0,0,0)
RC1 = (226,187,185)
MO1 = EY1
SCR1 = EY1
skin_ep = 'Albino'
elif c > 500000:
SK1 = (219,177,128)
SC1 = (166,110,44)
EY1 = (210,157,96)
SK2 = (235,203,166)
HRG3 = (213,200,183)
HRG2 = (184,163,135)
HRG4 = (209,189,164)
HRG5 = (87,101,113)
HRG1 = (0,0,0)
RC1 = (215,154,104)
MO1 = EY1
SCR1 = EY1
skin_ep = 'Light'
elif c > 200000:
SK1 = (174,139,97)
SC1 = (134,88,30)
EY1 = (167,124,71)
SK2 = (178,138,93)
HRG3 = (188,179,165)
HRG2 = (166,150,128)
HRG4 = (184,171,151)
HRG5 = (87,101,113)
HRG1 = (0,0,0)
RC1 = (191,105,71)
MO1 = EY1
SCR1 = EY1
skin_ep = 'Mid'
else:
SK1 = (113,63,29)
SC1 = (86,39,10)
EY1 = (114,55,17)
SK2 = (146,79,35)
HRG3 = (155,135,127)
HRG2 = (139,121,111)
HRG4 = (156,131,115)
HRG5 = (87,101,113)
HRG1 = (0,0,0)
RC1 = (142,36,2)
MO1 = EY1
SCR1 = EY1
skin_ep = 'Dark'
seed(c)
d=randint(0,1000000)
if d > 750000:
ears = EARS_2
ears_ep = 'Earring'
else:
ears = none
ears_ep = 'None'
seed(d)
e=randint(0,1000000)
HR0 = (255,193,0)
HR8 = (251,114,7)
HR3 = (210,154,0)
HR4 = (166,165,165)
HR5 = (165,108,0)
HR6 = (111,0,48)
HR7 = (85,57,23)
if e > 860000:
HR1 = (50,50,50) #Ok
HR2 = (200,200,200)
hair_color_ep = 'Dark Grey & Silver'
elif e > 720000:
HR1 = HR8
HR2 = (111,0,48) #ok
hair_color_ep = 'Orange & Black Rose'
elif e > 580000:
HR1 = HR3 #ok
HR2 = (210,210,0)
hair_color_ep = 'Fair & Wattle'
elif e > 440000:
HR1 = (80,50,30) #Ok
HR2 = (44,4,9)
hair_color_ep = 'Bronze & Chocolate'
elif e > 300000:
HR1 = HR5
HR2 = HR3
hair_color_ep = 'Ginger & Fair'
elif e > 150000:
HR1 = (220,130,0) #ok
HR2 = (70,40,10)
hair_color_ep = 'Mango & Brown'
else:
HR1 = (210,210,210) #Ok
HR2 = (210,210,210)
hair_color_ep = 'Grey Goose'
seed(e)
f=randint(0,1000000)
seed(f)
g=randint(0,1000000)
if g > 960000:
hair_prop = CAP_2
hair_prop_ep = 'Cap'
elif g > 940000:
hair_prop = COWBOY_2
hair_prop_ep = 'Cowboy Hat'
elif g > 920000:
hair_prop = TOPHAT_2
hair_prop_ep = 'Top Hat'
elif g > 890000:
hair_prop = Helmet
hair_prop_ep = 'Dwarf Helmet'
tete = 99
elif g > 870000:
hair_prop = FORCAP_2
hair_prop_ep = 'Cap Forward'
elif g > 850000:
hair_prop = BANDANA_2
hair_prop_ep = 'Bandana'
elif g > 830000:
hair_prop = FEDORA_2
hair_prop_ep = 'Fedora'
elif g > 800000:
hair_prop = POLICE_2
hair_prop_ep = 'Police'
else:
hair_prop = none
hair_prop_ep = 'None'
seed(g)
h=randint(0,1000000)
if h > 900000:
mouth_prop = CIGARETTE
mouth_prop_ep = 'Cigarette'
elif h > 840000:
mouth_prop = PIPE
mouth_prop_ep = 'Pipe'
elif h > 800000:
mouth_prop = VAPE
mouth_prop_ep = 'Vape'
else:
mouth_prop = none
mouth_prop_ep = 'None'
# seed(h)
# i=randint(0,1000000)
# if i > 300000:
# EY1 = (255,255,255)
# elif i > 50000:
# EY1 = (0,0,255)
# else:
# EY1 = (0,255,0)
seed(h)
i=randint(0,1000000)
if i > 970000:
eyes = TD_2
eyes_prop_ep ='3D Glasses'
elif i > 930000:
eyes = VR_2
eyes_prop_ep ='VR'
elif i > 880000:
eyes = ClassicShades_2
eyes_prop_ep ='Classic Shades'
elif i > 830000:
eyes = SmallShades_2
eyes_prop_ep ='Small Shades'
elif i > 780000:
eyes = EyePatch_2
eyes_prop_ep ='Eye Patch'
elif i > 730000:
eyes = NerdGlasses_2
eyes_prop_ep ='Nerd Glasses'
elif i > 680000:
eyes = BigShades_2
eyes_prop_ep ='Big Shades'
elif i > 650000:
eyes = EyeMask_2
eyes_prop_ep ='Eye Mask'
elif i > 600000:
eyes = HornedRimGlasses_2
eyes_prop_ep ='Horned Rim Glasses'
elif i > 550000:
eyes = RegularShades_2
eyes_prop_ep ='Regular Shades'
else:
eyes = none
eyes_prop_ep ='None'
if tete == 99:
eyes = none
eyes_prop_ep ='None'
seed(i)
j=randint(0,1000000)
if j > 975000:
nose = NOSE_1
nose_ep = 'Clown Nose'
else:
nose = none
nose_ep = 'None'
seed(j)
k=randint(0,1000000)
if k > 975000:
mouth = SMILE
mouth_ep = 'Smile'
elif k > 950000:
mouth = FROWN
mouth_ep = 'Frown'
else:
mouth = none
mouth_ep = 'None'
MOLE=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,MO1,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
SCARE_1=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,SCR1,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,SCR1,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,SCR1,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,SCR1,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,SCR1,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
ROSY_1=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,RC1,0,0,0,0,RC1,RC1,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,RC1,0,0,0,0,0,RC1,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
seed(k)
l=randint(0,1000000)
if l > 970000:
blemishes = ROSY_1
blemishe_ep = 'Rosy Cheeks'
elif l > 900000:
blemishes = MOLE
blemishe_ep = 'Mole'
elif l > 870000:
blemishes = SCARE_1
blemishe_ep = 'Scare'
else:
blemishes = none
blemishe_ep = 'None'
DWARF_6=[
[FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,HR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,HR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,HR1,FR1,SC1,SC1,SK1,SK1,SK1,SC1,SC1,SK1,SK1,HR2,HR1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,HR1,HR2,FR1,EY1,FR1,SK1,SK1,SK1,EY1,FR1,SK1,SK1,HR1,HR2,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,HR2,HR1,FR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,HR2,HR1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,HR1,HR2,FR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,HR1,HR2,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,HR2,HR1,FR1,SK1,SK1,FR1,FR1,SK1,SK1,SK1,SK1,SK1,HR2,HR1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,HR1,HR2,FR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,FR1,HR2,HR1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,HR1,HR2,BG1,FR1,SK1,SK1,HR1,HR1,HR1,SK1,SK1,SK1,SK1,FR1,HR1,HR2,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,HR2,HR1,BG1,FR1,SK1,HR1,FR1,FR1,FR1,HR1,SK1,SK1,SK1,FR1,BG1,HR1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,HR1,BG1,BG1,HR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,HR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,HR1,HR2,HR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,HR1,HR2,HR1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,HR1,HR2,HR1,HR2,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR2,HR1,HR2,HR1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,HR2,HR1,HR2,HR1,HR1,HR1,HR1,BG1,HR1,HR1,HR1,HR1,HR2,HR1,HR2,BG1,BG1,BG1,FR2],
[FR2,FR2,FR2,FR2,FR2,HR1,HR2,HR1,FR2,HR1,HR1,FR2,FR2,FR1,HR1,HR1,SK1,HR1,HR2,HR1,FR2,FR2,FR2,FR2]
]
pixels = DWARF_6
elif b > 400000:
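# Dwarves / Longbeards branch: every trait label and overlay is reset to a clean
# slate before this race's rolls, so nothing leaks in from another branch; the same
# reset pattern repeats in each race branch below.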
race_ep = 'Dwarves'
type_ep = 'Longbeards'
hair_color_ep = 'None'
haircut_ep = 'None'
hair_prop_ep = 'None'
eyes_prop_ep = 'None'
blemishe_ep = 'None'
eyes_color_ep = 'None'
facial_hair_ep = 'None'
mouth_prop_ep = 'None'
mouth_ep = 'None'
tooth_color_ep = 'None'
nose_ep = 'None'
neck_ep = 'None'
ears_ep = 'None'
skin_ep = 'None'
ears = none
hair = none
hair_prop = none
neck = none
blemishes = none
#tooth color
mouth = none
facial_hair = none
rod = none
mouth_prop = none
#eye color
eyes = none
nose = none
seed(b)
c=randint(0,1000000)
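# Skin palette roll on c in [0, 1000000]: roughly 20% Albino, 30% Light, 30% Mid,
# 20% Dark. MO1 (mole) and SCR1 (scar) reuse the eye tone so blemishes match the skin.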
if c > 800000:
SK1 = (234,217,217)
SC1 = (165,141,141)
EY1 = (201,178,178)
SK2 = (255,255,255)
HRG3 = (220,222,234)
HRG2 = (183,179,191)
HRG4 = (203,200,212)
HRG5 = (87,101,113)
HRG1 = (0,0,0)
RC1 = (226,187,185)
MO1 = EY1
SCR1 = EY1
skin_ep = 'Albino'
elif c > 500000:
SK1 = (219,177,128)
SC1 = (166,110,44)
EY1 = (210,157,96)
SK2 = (235,203,166)
HRG3 = (213,200,183)
HRG2 = (184,163,135)
HRG4 = (209,189,164)
HRG5 = (87,101,113)
HRG1 = (0,0,0)
RC1 = (215,154,104)
MO1 = EY1
SCR1 = EY1
skin_ep = 'Light'
elif c > 200000:
SK1 = (174,139,97)
SC1 = (134,88,30)
EY1 = (167,124,71)
SK2 = (178,138,93)
HRG3 = (188,179,165)
HRG2 = (166,150,128)
HRG4 = (184,171,151)
HRG5 = (87,101,113)
HRG1 = (0,0,0)
RC1 = (191,105,71)
MO1 = EY1
SCR1 = EY1
skin_ep = 'Mid'
else:
SK1 = (113,63,29)
SC1 = (86,39,10)
EY1 = (114,55,17)
SK2 = (146,79,35)
HRG3 = (155,135,127)
HRG2 = (139,121,111)
HRG4 = (156,131,115)
HRG5 = (87,101,113)
HRG1 = (0,0,0)
RC1 = (142,36,2)
MO1 = EY1
SCR1 = EY1
skin_ep = 'Dark'
seed(c)
d=randint(0,1000000)
if d > 750000:
ears = EARS_2
ears_ep = 'Earring'
else:
ears = none
ears_ep = 'None'
seed(d)
e=randint(0,1000000)
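# Shared hair palette constants; the roll below pairs a primary tone (HR1) with a
# secondary tone (HR2).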
HR0 = (255,193,0)
HR8 = (251,114,7)
HR3 = (210,154,0)
HR4 = (166,165,165)
HR5 = (165,108,0)
HR6 = (111,0,48)
HR7 = (85,57,23)
if e > 860000:
HR1 = (50,50,50) #Ok
HR2 = (200,200,200)
hair_color_ep = 'Dark Grey & Silver'
elif e > 720000:
HR1 = HR8
HR2 = (111,0,48) #ok
hair_color_ep = 'Orange & Black Rose'
elif e > 580000:
HR1 = HR3 #ok
HR2 = (210,210,0)
hair_color_ep = 'Fair & Wattle'
elif e > 440000:
HR1 = (80,50,30) #Ok
HR2 = (44,4,9)
hair_color_ep = 'Bronze & Chocolate'
elif e > 300000:
HR1 = HR5
HR2 = HR3
hair_color_ep = 'Ginger & Fair'
elif e > 150000:
HR1 = (220,130,0) #ok
HR2 = (70,40,10)
hair_color_ep = 'Mango & Brown'
else:
HR1 = (210,210,210) #Ok
HR2 = (210,210,210)
hair_color_ep = 'Grey Goose'
seed(e)
f=randint(0,1000000)
seed(f)
g=randint(0,1000000)
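# Headwear roll with cumulative thresholds, e.g. 'Cap' lands on g > 960000, about 4%.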
tete = 0 # assumed init: flagged to 99 below when the eye-covering helmet is rolled, so the check after the eye roll never sees a stale or undefined value
if g > 960000:
hair_prop = CAP_2
hair_prop_ep = 'Cap'
elif g > 940000:
hair_prop = COWBOY_2
hair_prop_ep = 'Cowboy Hat'
elif g > 920000:
hair_prop = TOPHAT_2
hair_prop_ep = 'Top Hat'
elif g > 890000:
hair_prop = Helmet
hair_prop_ep = 'Dwarf Helmet'
tete = 99
elif g > 870000:
hair_prop = FORCAP_2
hair_prop_ep = 'Cap Forward'
elif g > 850000:
hair_prop = BANDANA_2
hair_prop_ep = 'Bandana'
elif g > 830000:
hair_prop = FEDORA_2
hair_prop_ep = 'Fedora'
elif g > 800000:
hair_prop = POLICE_2
hair_prop_ep = 'Police'
else:
hair_prop = none
hair_prop_ep = 'None'
seed(g)
h=randint(0,1000000)
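# Mouth prop roll: ~10% Cigarette, ~6% Pipe, ~4% Vape, otherwise none.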
if h > 900000:
mouth_prop = CIGARETTE
mouth_prop_ep = 'Cigarette'
elif h > 840000:
mouth_prop = PIPE
mouth_prop_ep = 'Pipe'
elif h > 800000:
mouth_prop = VAPE
mouth_prop_ep = 'Vape'
else:
mouth_prop = none
mouth_prop_ep = 'None'
# seed(h)
# i=randint(0,1000000)
# if i > 300000:
# EY1 = (255,255,255)
# elif i > 50000:
# EY1 = (0,0,255)
# else:
# EY1 = (0,255,0)
seed(h)
i=randint(0,1000000)
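# Eye prop roll; the tete check after this chain drops any eyewear that the Dwarf
# Helmet would cover.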
if i > 970000:
eyes = TD_2
eyes_prop_ep ='3D Glasses'
elif i > 930000:
eyes = VR_2
eyes_prop_ep ='VR'
elif i > 880000:
eyes = ClassicShades_2
eyes_prop_ep ='Classic Shades'
elif i > 830000:
eyes = SmallShades_2
eyes_prop_ep ='Small Shades'
elif i > 780000:
eyes = EyePatch_2
eyes_prop_ep ='Eye Patch'
elif i > 730000:
eyes = NerdGlasses_2
eyes_prop_ep ='Nerd Glasses'
elif i > 680000:
eyes = BigShades_2
eyes_prop_ep ='Big Shades'
elif i > 650000:
eyes = EyeMask_2
eyes_prop_ep ='Eye Mask'
elif i > 600000:
eyes = HornedRimGlasses_2
eyes_prop_ep ='Horned Rim Glasses'
elif i > 550000:
eyes = RegularShades_2
eyes_prop_ep ='Regular Shades'
else:
eyes=none
eyes_prop_ep ='None'
if tete == 99:
eyes = none
eyes_prop_ep ='None'
seed(i)
j=randint(0,1000000)
if j > 975000:
nose = NOSE_1
nose_ep = 'Clown Nose'
else:
nose = none
nose_ep = 'None'
seed(j)
k=randint(0,1000000)
if k > 975000:
mouth = SMILE
mouth_ep = 'Smile'
elif k > 950000:
mouth = FROWN
mouth_ep = 'Frown'
else:
mouth = none
mouth_ep = 'None'
MOLE=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,MO1,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
SCARE_1=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,SCR1,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,SCR1,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,SCR1,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,SCR1,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,SCR1,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
ROSY_1=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,RC1,0,0,0,0,RC1,RC1,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,RC1,0,0,0,0,0,RC1,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
seed(k)
l=randint(0,1000000)
if l > 970000:
blemishes = ROSY_1
blemishe_ep = 'Rosy Cheeks'
elif l > 900000:
blemishes = MOLE
blemishe_ep = 'Mole'
elif l > 870000:
blemishes = SCARE_1
blemishe_ep = 'Scare'
else:
blemishes = none
blemishe_ep = 'None'
DWARF_7=[
[FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,HR1,HR1,SK1,HR1,HR1,HR1,HR1,SK1,SK1,HR1,HR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,HR1,HR2,SK1,SK1,SK1,HR1,HR1,SK1,SK1,SK1,SK1,HR2,HR1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,HR2,HR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,HR1,HR2,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,HR1,HR2,HR1,HR2,SK1,SK1,SK1,SK1,SK1,HR2,SK1,SK1,HR1,HR2,HR1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,HR1,HR2,HR1,SC1,SC1,SK1,SK1,SK1,SC1,SC1,SK1,SK1,HR1,HR2,HR1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,HR1,HR2,HR1,EY1,FR1,SK1,SK1,SK1,EY1,FR1,SK1,SK1,HR1,HR2,HR1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,HR1,HR2,HR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,HR1,HR2,HR1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,HR1,HR2,HR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,HR1,HR2,HR1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,HR2,HR1,HR1,SK1,SK1,FR1,FR1,SK1,SK1,SK1,SK1,HR2,HR1,HR1,HR2,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,HR1,HR2,HR1,HR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,HR2,HR1,HR1,HR2,HR1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,HR1,HR2,HR1,HR1,SK1,HR2,HR2,HR2,HR2,HR2,SK1,HR2,HR2,HR1,HR1,HR2,HR1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,HR1,HR2,HR1,HR1,SK1,HR2,FR1,FR1,FR1,HR2,SK1,HR2,HR2,HR1,HR1,HR2,HR1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,HR1,HR2,HR1,HR1,HR2,HR2,HR2,HR2,HR2,HR2,HR2,HR2,SK1,HR1,HR1,HR2,HR1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,HR1,HR2,HR1,HR1,FR1,HR2,SK1,HR2,SK1,HR2,SK1,SK1,SK1,HR1,HR1,HR2,HR1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,HR2,HR1,HR1,HR1,HR2,FR1,FR1,FR1,FR1,FR1,HR2,SK1,SK1,HR1,HR1,HR1,HR2,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,HR2,HR1,BG1,HR2,BG1,BG1,BG1,BG1,BG1,FR1,SK1,HR2,SK1,FR1,BG1,HR1,HR2,BG1,BG1,FR2],
[FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR1,SK1,SK1,SK1,FR1,FR2,FR2,FR2,FR2,FR2,FR2]
]
pixels = DWARF_7
elif b > 250000:
race_ep = 'Gobelins'
type_ep = 'None'
hair_color_ep = 'None'
haircut_ep = 'None'
hair_prop_ep = 'None'
eyes_prop_ep = 'None'
blemishe_ep = 'None'
eyes_color_ep = 'None'
facial_hair_ep = 'None'
mouth_prop_ep = 'None'
mouth_ep = 'None'
tooth_color_ep = 'None'
nose_ep = 'None'
neck_ep = 'None'
ears_ep = 'None'
skin_ep = 'None'
ears = none
hair = none
hair_prop = none
neck = none
blemishes = none
#tooth color
mouth = none
facial_hair = none
rod = none
mouth_prop = none
#eye color
eyes = none
nose = none
seed(b)
c=randint(0,1000000)
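# Gobelin skin palette: ~20% Green, ~10% Purple, ~30% Camel, ~40% Wattle; mole and
# scar tones reuse the darker SC1 shade.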
if c > 800000:
SK1 = (112,168,104) #zombie green
SC1 = (88,117,83)
MO1 = SC1
SCR1 = SC1
skin_ep = 'Green'
elif c > 700000:
SK1 = (145,0,185) #PURPLE
SC1 = (120,0,160)
MO1 = SC1
SCR1 = SC1
skin_ep = 'Purple'
elif c > 400000:
SK1 = (185,160,60) #camel
SC1 = (150,125,25)
MO1 = SC1
SCR1 = SC1
skin_ep = 'Camel'
else:
SK1 = (205,205,57) #yellow
SC1 = (130,119,23)
MO1 = SC1
SCR1 = SC1
skin_ep = 'Wattle'
seed(c)
d=randint(0,1000000)
if d > 750000:
ears = EARS_2
ears_ep = 'Earring'
else:
ears = none
ears_ep = 'None'
seed(d)
e=randint(0,1000000)
if e > 960000:
hair_prop = CAP_2
hair_prop_ep = 'Cap'
elif e > 940000:
hair_prop = COWBOY_5
hair_prop_ep = 'Cowboy Hat'
elif e > 920000:
hair_prop = TOPHAT_2
hair_prop_ep = 'Top Hat'
elif e > 870000:
hair_prop = KNITTED_5
hair_prop_ep = 'Knitted Cap'
elif e > 850000:
hair_prop = Gobelin_Crown
hair_prop_ep = 'Gobelins Crown'
elif e > 830000:
hair_prop = FORCAP_2
hair_prop_ep = 'Cap Forward'
elif e > 800000:
hair_prop = BANDANA_2
hair_prop_ep = 'Bandana'
elif e > 780000:
hair_prop = FEDORA_5
hair_prop_ep = 'Fedora'
elif e > 750000:
hair_prop = POLICE_2
hair_prop_ep = 'Police'
elif e > 740000:
hair_prop = BEANI_2
hair_prop_ep = 'Beanie'
else:
hair_prop = none
hair_prop_ep = 'None'
seed(e)
f=randint(0,1000000)
if f > 900000:
neck = GoldChain_1
neck_ep = 'Gold Chain'
elif f > 820000:
neck = SilverChain_1
neck_ep = 'Silver Chain'
elif f > 800000:
neck = RING_1
neck_ep = 'Ring Onchain'
else:
neck = none
neck_ep = 'None'
seed(f)
g=randint(0,1000000)
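# Tooth color roll: DE1 feeds the tooth pixels in the GOBELIN grid below
# (~70% White, ~10% Brown, ~12% Gold, ~8% Blood).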
if g > 300000:
DE1 = (255,255,255)
tooth_color_ep = 'White'
elif g > 200000:
DE1 = (163,110,16)
tooth_color_ep = 'Brown'
elif g > 80000:
DE1 = (255,203,0)
tooth_color_ep = 'Gold'
else:
DE1 = (200,0,0)
tooth_color_ep = 'Blood'
seed(g)
h=randint(0,1000000)
if h > 900000:
mouth_prop = CIGARETTE
mouth_prop_ep = 'Cigarette'
elif h > 880000:
mouth_prop = MASK_1
mouth_prop_ep = 'Medical Mask'
elif h > 820000:
mouth_prop = PIPE
mouth_prop_ep = 'Pipe'
elif h > 780000:
mouth_prop = VAPE
mouth_prop_ep = 'Vape'
else:
mouth_prop = none
mouth_prop_ep = 'None'
seed(h)
i=randint(0,1000000)
if i > 400000:
EY1 = (255,255,255)
eyes_color_ep = 'White'
elif i > 300000:
EY1 = (214,92,26)
eyes_color_ep = "Orange"
elif i > 200000:
EY1 = (176,61,133)
eyes_color_ep = "Purple"
elif i > 100000:
EY1 = (255,255,0)
eyes_color_ep = 'Yellow'
else:
EY1 = (255,0,0)
eyes_color_ep = 'Red'
seed(i)
j=randint(0,1000000)
if j > 970000:
eyes = TD_2
eyes_prop_ep ='3D Glasses'
elif j > 930000:
eyes = VR_2
eyes_prop_ep ='VR'
elif j > 880000:
eyes = ClassicShades_2
eyes_prop_ep ='Classic Shades'
elif j > 830000:
eyes = SmallShades_2
eyes_prop_ep ='Small Shades'
elif j > 780000:
eyes = EyePatch_2
eyes_prop_ep ='Eye Patch'
elif j > 730000:
eyes = NerdGlasses_2
eyes_prop_ep ='Nerd Glasses'
elif j > 680000:
eyes = BigShades_2
eyes_prop_ep ='Big Shades'
elif j > 650000:
eyes = EyeMask_2
eyes_prop_ep ='Eye Mask'
elif j > 600000:
eyes = HornedRimGlasses_2
eyes_prop_ep ='Horned Rim Glasses'
elif j > 550000:
eyes = RegularShades_2
eyes_prop_ep ='Regular Shades'
else:
eyes=none
eyes_prop_ep ='None'
MOLE=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,MO1,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
SCARE_1=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,SCR1,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,SCR1,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,SCR1,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,SCR1,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,SCR1,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
ROSY_1=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,RC1,0,0,0,0,RC1,RC1,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,RC1,0,0,0,0,0,RC1,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
seed(j)
k=randint(0,1000000)
if k > 970000:
blemishes = MOLE
blemishe_ep = 'Mole'
elif k > 940000:
blemishes = SCARE_1
blemishe_ep = 'Scare'
else:
blemishes = none
blemishe_ep = 'None'
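# GOBELIN: 24x24 base grid; DE1 marks the tooth pixels, SC1 the brow shading, and
# the FR1/SK1 cluster on the right edge appears to be the protruding ear.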
GOBELIN=[
[FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR1,FR1,FR1,FR1,FR1,FR1,FR1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,FR1,BG1,BG1,FR1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,FR1,BG1,FR1,SK1,FR1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SC1,SC1,SK1,SK1,SK1,SC1,SC1,SK1,SK1,SK1,FR1,SK1,FR1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,EY1,FR1,SK1,SK1,SK1,EY1,FR1,SK1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,FR1,FR1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,FR1,SK1,FR1,SK1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,DE1,SK1,SK1,DE1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,FR1,FR1,FR1,FR1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,SK1,SK1,SK1,SK1,FR1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,SK1,SK1,FR1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR1,FR1,FR1,FR1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR1,SK1,SK1,SK1,FR1,FR2,FR2,FR2,FR2,FR2,FR2]
]
pixels = GOBELIN
elif b > 150000:
race_ep = 'Orcs'
type_ep = 'None'
hair_color_ep = 'None'
haircut_ep = 'None'
hair_prop_ep = 'None'
eyes_prop_ep = 'None'
blemishe_ep = 'None'
eyes_color_ep = 'None'
facial_hair_ep = 'None'
mouth_prop_ep = 'None'
mouth_ep = 'None'
tooth_color_ep = 'None'
nose_ep = 'None'
neck_ep = 'None'
ears_ep = 'None'
skin_ep = 'None'
ears = none
hair = none
hair_prop = none
neck = none
blemishes = none
#tooth color
mouth = none
facial_hair = none
rod = none
mouth_prop = none
#eye color
eyes = none
nose = none
seed(b)
c=randint(0,1000000)
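# Orc skin palette: ~15% Smokey Grey, ~25% Moon Grey, ~50% Sand, ~10% Red.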
if c > 850000:
SK1 = (112,112,112) #grey
SC1 = (64,64,64)
MO1 = SC1
SCR1 = SC1
skin_ep = 'Smokey Grey'
elif c > 600000:
SK1 = (220,220,220) #light grey
SC1 = (180,180,180)
MO1 = SC1
SCR1 = SC1
skin_ep = 'Moon Grey'
elif c > 100000:
SK1 = (180,145,115) #Sand
SC1 = (120,100,60)
MO1 = SC1
SCR1 = SC1
skin_ep = 'Sand'
else:
SK1 = (153,0,0) #red
SC1 = (102,0,0)
MO1 = SC1
SCR1 = SC1
skin_ep = 'Red'
seed(c)
d=randint(0,1000000)
if d > 750000:
ears = EARS_1
ears_ep = 'Earring'
else:
ears = none
ears_ep = 'None'
seed(d)
e=randint(0,1000000)
tonton = 0 # assumed init: set to 99 below when the Orc Helmet is rolled
if e > 960000:
hair_prop = CAP_2
hair_prop_ep = 'Cap'
elif e > 940000:
hair_prop = COWBOY_4
hair_prop_ep = 'Cowboy Hat'
elif e > 920000:
hair_prop = TOPHAT_2
hair_prop_ep = 'Top Hat'
elif e > 870000:
hair_prop = KNITTED_6
hair_prop_ep = 'Knitted Cap'
elif e > 860000:
hair_prop = HEADBAND_2
hair_prop_ep = 'Headband'
elif e > 830000:
hair_prop = FORCAP_2
hair_prop_ep = 'Cap Forward'
elif e > 800000:
hair_prop = BANDANA_2
hair_prop_ep = 'Bandana'
elif e > 780000:
hair_prop = FEDORA_2
hair_prop_ep = 'Fedora'
elif e > 750000:
hair_prop = POLICE_2
hair_prop_ep = 'Police'
elif e > 740000:
hair_prop = BEANI_2
hair_prop_ep = 'Beanie'
elif e > 700000:
hair_prop = ORC_HELMET
hair_prop_ep = 'Orc Helmet'
tonton = 99
else:
hair_prop = none
hair_prop_ep = 'None'
seed(e)
f=randint(0,1000000)
if f > 900000:
neck = GoldChain_1
neck_ep = 'Gold Chain'
elif f > 820000:
neck = SilverChain_1
neck_ep = 'Silver Chain'
elif f > 800000:
neck = RING_1
neck_ep = 'Ring Onchain'
else:
neck = none
neck_ep = 'None'
seed(f)
g=randint(0,1000000)
if g > 300000:
DE1 = (255,255,255)
tooth_color_ep = 'White'
elif g > 200000:
DE1 = (163,110,16)
tooth_color_ep = 'Brown'
elif g > 80000:
DE1 = (255,203,0)
tooth_color_ep = 'Gold'
else:
DE1 = (200,0,0)
tooth_color_ep = 'Blood'
seed(g)
h=randint(0,1000000)
if h > 900000:
mouth_prop = CIGARETTE
mouth_prop_ep = 'Cigarette'
elif h > 880000:
mouth_prop = MASK_1
mouth_prop_ep = 'Medical Mask'
elif h > 820000:
mouth_prop = PIPE
mouth_prop_ep = 'Pipe'
elif h > 780000:
mouth_prop = VAPE
mouth_prop_ep = 'Vape'
else:
mouth_prop = none
mouth_prop_ep = 'None'
seed(h)
i=randint(0,1000000)
if i > 400000:
EY1 = (255,255,255)
eyes_color_ep = 'White'
elif i > 300000:
EY1 = (214,92,26)
eyes_color_ep = "Orange"
elif i > 200000:
EY1 = (176,61,133)
eyes_color_ep = "Purple"
elif i > 100000:
EY1 = (255,255,0)
eyes_color_ep = 'Yellow'
else:
EY1 = (255,0,0)
eyes_color_ep = 'Red'
seed(i)
j=randint(0,1000000)
tantan = 0 # assumed init: stays 0 when an eye prop lands, so the check after this chain can strip eyewear whenever the Orc Helmet (tonton == 99) is on
if j > 970000:
eyes = TD_2
eyes_prop_ep ='3D Glasses'
elif j > 930000:
eyes = VR_2
eyes_prop_ep ='VR'
elif j > 880000:
eyes = ClassicShades_2
eyes_prop_ep ='Classic Shades'
elif j > 830000:
eyes = SmallShades_2
eyes_prop_ep ='Small Shades'
elif j > 780000:
eyes = EyePatch_2
eyes_prop_ep ='Eye Patch'
elif j > 730000:
eyes = BigShades_2
eyes_prop_ep ='Big Shades'
elif j > 700000:
eyes = EyeMask_2
eyes_prop_ep ='Eye Mask'
elif j > 650000:
eyes = HornedRimGlasses_2
eyes_prop_ep ='Horned Rim Glasses'
elif j > 600000:
eyes = RegularShades_2
eyes_prop_ep ='Regular Shades'
else:
eyes=none
eyes_prop_ep ='None'
tantan = 99
if tonton == 99 and tantan != 99:
eyes = none
eyes_prop_ep = 'None'
MOLE=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,MO1,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
ROSY_1=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,RC1,0,0,0,0,RC1,RC1,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,RC1,0,0,0,0,0,RC1,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
seed(j)
k=randint(0,1000000)
if k > 970000:
blemishes = MOLE
blemishe_ep = 'Mole'
else:
blemishes = none
blemishe_ep = 'None'
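# ORC: 24x24 base grid; DE1 marks the tusk/tooth pixels, FR1 the outline and facial
# detail.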
ORC=[
[FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR1,FR1,FR1,FR1,FR1,FR1,FR1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,FR1,SK1,SK1,SK1,FR1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,FR1,SK1,SK1,SK1,SK1,SK1,FR1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,FR1,SK1,SK1,FR1,SK1,FR1,SK1,SK1,FR1,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,FR1,SK1,SK1,SK1,FR1,SK1,SK1,FR1,BG1,BG1,BG1,FR1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,FR1,BG1,FR1,FR1,SK1,FR1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SC1,SC1,SK1,SK1,SK1,SC1,SC1,SK1,SK1,FR1,FR1,SK1,SK1,FR1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,EY1,FR1,SK1,SK1,SK1,EY1,FR1,SK1,SK1,SK1,SK1,FR1,FR1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,FR1,SK1,FR1,SK1,SK1,SK1,SK1,FR1,FR1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,FR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,FR1,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,FR1,SK1,SK1,SK1,SK1,SK1,FR1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,DE1,SK1,SK1,DE1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,FR1,FR1,FR1,FR1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,SK1,SK1,SK1,FR1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR1,FR1,FR1,FR1,FR1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR1,SK1,SK1,SK1,FR1,FR2,FR2,FR2,FR2,FR2,FR2]
]
pixels = ORC
elif b > 135000:
race_ep = 'Wizards'
type_ep = 'White'
hair_color_ep = 'None'
haircut_ep = 'None'
hair_prop_ep = 'None'
eyes_prop_ep = 'None'
blemishe_ep = 'None'
eyes_color_ep = 'None'
facial_hair_ep = 'None'
mouth_prop_ep = 'None'
mouth_ep = 'None'
tooth_color_ep = 'None'
nose_ep = 'None'
neck_ep = 'None'
ears_ep = 'None'
skin_ep = 'None'
ears = none
hair = none
hair_prop = none
neck = none
blemishes = none
#tooth color
mouth = none
facial_hair = none
rod = none
mouth_prop = none
#eye color
eyes = none
nose = none
seed(b)
c=randint(0,1000000)
if c > 750000:
SK1 = (234,217,217)
SC1 = (165,141,141)
EY1 = (201,178,178)
SK2 = (255,255,255)
HRG3 = (220,222,234)
HRG2 = (183,179,191)
HRG4 = (203,200,212)
HRG5 = (87,101,113)
HRG1 = (0,0,0)
RC1 = (226,187,185)
MO1 = EY1
SCR1 = EY1
skin_ep = 'Albino'
elif c > 500000:
SK1 = (219,177,128)
SC1 = (166,110,44)
EY1 = (210,157,96)
SK2 = (235,203,166)
HRG3 = (213,200,183)
HRG2 = (184,163,135)
HRG4 = (209,189,164)
HRG5 = (87,101,113)
HRG1 = (0,0,0)
RC1 = (215,154,104)
MO1 = EY1
SCR1 = EY1
skin_ep = 'Light'
elif c > 250000:
SK1 = (174,139,97)
SC1 = (134,88,30)
EY1 = (167,124,71)
SK2 = (178,138,93)
HRG3 = (188,179,165)
HRG2 = (166,150,128)
HRG4 = (184,171,151)
HRG5 = (87,101,113)
HRG1 = (0,0,0)
RC1 = (191,105,71)
MO1 = EY1
SCR1 = EY1
skin_ep = 'Mid'
else:
SK1 = (113,63,29)
SC1 = (86,39,10)
EY1 = (114,55,17)
SK2 = (146,79,35)
HRG3 = (155,135,127)
HRG2 = (139,121,111)
HRG4 = (156,131,115)
HRG5 = (87,101,113)
HRG1 = (0,0,0)
RC1 = (142,36,2)
MO1 = EY1
SCR1 = EY1
skin_ep = 'Dark'
seed(c)
d=randint(0,1000000)
if d > 750000:
ears = EARS_1
ears_ep = 'Earring'
else:
ears = none
ears_ep = 'None'
seed(d)
e=randint(0,1000000)
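# White wizard hair/beard tone: four grey shades at 25% each.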
if e > 750000:
HR1 = (140,140,140)
hair_color_ep = 'Granite'
elif e > 500000:
HR1 = (90,90,90)
hair_color_ep = 'Carbon Grey'
elif e > 250000:
HR1 = (240,240,240)
hair_color_ep = 'Seashell'
else:
HR1 = (190,190,190)
hair_color_ep = 'Silver'
seed(e)
f=randint(0,1000000)
if f > 900000:
mouth_prop = CIGARETTE
mouth_prop_ep = 'Cigarette'
elif f > 840000:
mouth_prop = PIPE
mouth_prop_ep = 'Pipe'
elif f > 800000:
mouth_prop = VAPE
mouth_prop_ep = 'Vape'
else:
mouth_prop = none
mouth_prop_ep = 'None'
seed(f)
g=randint(0,1000000)
if g > 950000:
hair_prop = COWBOY_7
hair_prop_ep = 'Cowboy Hat'
elif g > 900000:
hair_prop = TOPHAT_7
hair_prop_ep = 'Top Hat'
elif g > 850000:
hair_prop = KNITTED_7
hair_prop_ep = 'Knitted Cap'
elif g > 800000:
hair_prop = FORCAP_7
hair_prop_ep = 'Cap Forward'
elif g > 750000:
hair_prop = FEDORA_7
hair_prop_ep = 'Fedora'
elif g > 700000:
hair_prop = BANDANA_7
hair_prop_ep = 'Bandana'
elif g > 650000:
hair_prop = POLICE_7
hair_prop_ep = 'Police'
elif g > 600000:
hair_prop = CAP_7
hair_prop_ep = 'Cap'
else:
hair_prop = none
hair_prop_ep = 'None'
MOLE=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,MO1,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
ROSY_1=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,RC1,0,0,0,0,RC1,RC1,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,RC1,0,0,0,0,0,RC1,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
seed(g)
h=randint(0,1000000)
if h > 970000:
blemishes = ROSY_1
blemishe_ep = 'Rosy Cheeks'
elif h > 900000:
blemishes = MOLE
blemishe_ep = 'Mole'
else:
blemishes = none
blemishe_ep = 'None'
seed(h)
i=randint(0,1000000)
if i > 970000:
eyes = TD_1
eyes_prop_ep ='3D Glasses'
elif i > 930000:
eyes = VR_1
eyes_prop_ep ='VR'
elif i > 880000:
eyes = ClassicShades_1
eyes_prop_ep ='Classic Shades'
elif i > 830000:
eyes = EyePatch_1
eyes_prop_ep ='Eye Patch'
elif i > 780000:
eyes = NerdGlasses_1
eyes_prop_ep ='Nerd Glasses'
elif i > 730000:
eyes = BigShades_1
eyes_prop_ep ='Big Shades'
elif i > 700000:
eyes = EyeMask_1
eyes_prop_ep ='Eye Mask'
elif i > 650000:
eyes = HornedRimGlasses_1
eyes_prop_ep ='Horned Rim Glasses'
elif i > 600000:
eyes = RegularShades_1
eyes_prop_ep ='Regular Shades'
else:
eyes=none
eyes_prop_ep ='None'
seed(i)
j=randint(0,1000000)
if j > 975000:
nose = NOSE_1
nose_ep = 'Clown Nose'
else:
nose = none
nose_ep = 'None'
seed(j)
k=randint(0,1000000)
if k > 975000:
mouth = SMILE
mouth_ep = 'Smile'
elif k > 950000:
mouth = FROWN
mouth_ep = 'Frown'
else:
mouth = none
mouth_ep = 'None'
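# WIZ_WHITE: 24x24 base grid; HR1 appears to form both the long hair and the beard
# mass around the SK1 face.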
WIZ_WHITE=[
[FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,HR1,HR1,BG1,BG1,HR1,HR1,HR1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,HR1,HR1,HR1,SK1,SK1,HR1,HR1,SK1,SK1,HR1,HR1,HR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,HR1,HR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,HR1,HR1,HR1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,HR1,HR1,HR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,HR1,HR1,HR1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,HR1,HR1,FR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,HR1,HR1,HR1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,HR1,HR1,FR1,SC1,SC1,SK1,SK1,SK1,SC1,SC1,SK1,SK1,HR1,HR1,HR1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,HR1,HR1,FR1,EY1,FR1,SK1,SK1,SK1,EY1,FR1,SK1,SK1,HR1,HR1,HR1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,HR1,HR1,FR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,HR1,HR1,HR1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,HR1,HR1,FR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,HR1,HR1,HR1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,HR1,HR1,FR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,HR1,HR1,HR1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,HR1,HR1,FR1,SK1,SK1,FR1,FR1,SK1,SK1,SK1,SK1,SK1,HR1,HR1,HR1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,HR1,HR1,FR1,SK1,HR1,HR1,HR1,HR1,HR1,SK1,SK1,SK1,HR1,HR1,HR1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,HR1,FR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,HR1,FR1,HR1,HR1,HR1,FR1,FR1,FR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,HR1,FR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,HR1,HR1,FR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,HR1,HR1,HR1,FR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,FR1,FR1,HR1,HR1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,HR1,HR1,FR1,HR1,HR1,FR1,FR1,FR1,FR1,SK1,FR1,HR1,HR1,BG1,BG1,BG1,FR2],
[FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR1,FR1,FR2,FR1,FR2,FR2,FR2,FR1,FR2,FR2,FR2,FR2,FR2,FR2]
]
pixels = WIZ_WHITE
elif b > 110000:
race_ep = 'Wizards'
type_ep = 'Grey'
hair_color_ep = 'None'
haircut_ep = 'None'
hair_prop_ep = 'None'
eyes_prop_ep = 'None'
blemishe_ep = 'None'
eyes_color_ep = 'None'
facial_hair_ep = 'None'
mouth_prop_ep = 'None'
mouth_ep = 'None'
tooth_color_ep = 'None'
nose_ep = 'None'
neck_ep = 'None'
ears_ep = 'None'
skin_ep = 'None'
ears = none
hair = none
hair_prop = none
neck = none
blemishes = none
#tooth color
mouth = none
facial_hair = none
rod = none
mouth_prop = none
#eye color
eyes = none
nose = none
seed(b)
c=randint(0,1000000)
if c > 750000:
SK1 = (234,217,217)
SC1 = (165,141,141)
EY1 = (201,178,178)
SK2 = (255,255,255)
HRG3 = (220,222,234)
HRG2 = (183,179,191)
HRG4 = (203,200,212)
HRG5 = (87,101,113)
HRG1 = (0,0,0)
RC1 = (226,187,185)
MO1 = EY1
SCR1 = EY1
skin_ep = 'Albino'
elif c > 500000:
SK1 = (219,177,128)
SC1 = (166,110,44)
EY1 = (210,157,96)
SK2 = (235,203,166)
HRG3 = (213,200,183)
HRG2 = (184,163,135)
HRG4 = (209,189,164)
HRG5 = (87,101,113)
HRG1 = (0,0,0)
RC1 = (215,154,104)
MO1 = EY1
SCR1 = EY1
skin_ep = 'Light'
elif c > 250000:
SK1 = (174,139,97)
SC1 = (134,88,30)
EY1 = (167,124,71)
SK2 = (178,138,93)
HRG3 = (188,179,165)
HRG2 = (166,150,128)
HRG4 = (184,171,151)
HRG5 = (87,101,113)
HRG1 = (0,0,0)
RC1 = (191,105,71)
MO1 = EY1
SCR1 = EY1
skin_ep = 'Mid'
else:
SK1 = (113,63,29)
SC1 = (86,39,10)
EY1 = (114,55,17)
SK2 = (146,79,35)
HRG3 = (155,135,127)
HRG2 = (139,121,111)
HRG4 = (156,131,115)
HRG5 = (87,101,113)
HRG1 = (0,0,0)
RC1 = (142,36,2)
MO1 = EY1
SCR1 = EY1
skin_ep = 'Dark'
seed(c)
d=randint(0,1000000)
if d > 750000:
ears = EARS_1
ears_ep = 'Earring'
else:
ears = none
ears_ep = 'None'
seed(d)
e=randint(0,1000000)
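# Grey wizard palette: CH1/CH2 colour the pointed hat in the grid below, HR1 the
# hair and BR1 the beard; nr is assumed to be a black RGB constant defined earlier
# in the file.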
if e > 750000:
CH1 = nr
CH2 = (130,130,130)
HR1 = (160,160,160)
BR1 = (190,190,190)
hair_color_ep = 'Black & Granite'
elif e > 500000:
CH1 = (50,50,50)
CH2 = (10,10,10)
HR1 = (160,160,160)
BR1 = (190,190,190)
hair_color_ep = 'Dark Grey & Black'
elif e > 250000:
CH1 = (130,130,130)
CH2 = (230,230,230)
HR1 = (160,160,160)
BR1 = (190,190,190)
hair_color_ep = 'Granite & Seashell'
else:
CH1 = (50,50,50)
CH2 = (200,200,200)
HR1 = (160,160,160)
BR1 = (190,190,190)
hair_color_ep = 'Dark Grey & Silver'
seed(e)
f=randint(0,1000000)
if f > 900000:
mouth_prop = CIGARETTE
mouth_prop_ep = 'Cigarette'
elif f > 840000:
mouth_prop = PIPE
mouth_prop_ep = 'Pipe'
elif f > 800000:
mouth_prop = VAPE
mouth_prop_ep = 'Vape'
else:
mouth_prop = none
mouth_prop_ep = 'None'
seed(f)
g=randint(0,1000000)
MOLE=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,MO1,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
ROSY_1=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,RC1,0,0,0,0,RC1,RC1,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,RC1,0,0,0,0,0,RC1,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
seed(g)
h=randint(0,1000000)
if h > 970000:
blemishes = ROSY_1
blemishe_ep = 'Rosy Cheeks'
elif h > 900000:
blemishes = MOLE
blemishe_ep = 'Mole'
else:
blemishes = none
blemishe_ep = 'None'
seed(h)
i=randint(0,1000000)
if i > 970000:
eyes = TD_1
eyes_prop_ep ='3D Glasses'
elif i > 930000:
eyes = VR_1
eyes_prop_ep ='VR'
elif i > 880000:
eyes = ClassicShades_1
eyes_prop_ep ='Classic Shades'
elif i > 830000:
eyes = EyePatch_1
eyes_prop_ep ='Eye Patch'
elif i > 780000:
eyes = NerdGlasses_1
eyes_prop_ep ='Nerd Glasses'
elif i > 730000:
eyes = BigShades_1
eyes_prop_ep ='Big Shades'
elif i > 700000:
eyes = EyeMask_1
eyes_prop_ep ='Eye Mask'
elif i > 650000:
eyes = HornedRimGlasses_1
eyes_prop_ep ='Horned Rim Glasses'
elif i > 600000:
eyes = RegularShades_1
eyes_prop_ep ='Regular Shades'
else:
eyes=none
eyes_prop_ep ='None'
seed(i)
j=randint(0,1000000)
if j > 975000:
nose = NOSE_1
nose_ep = 'Clown Nose'
else:
nose = none
nose_ep = 'None'
seed(j)
k=randint(0,1000000)
if k > 975000:
mouth = SMILE
mouth_ep = 'Smile'
elif k > 950000:
mouth = FROWN
mouth_ep = 'Frown'
else:
mouth = none
mouth_ep = 'None'
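# WIZ_GREY: 24x24 base grid; the CH1/CH2 rows at the top draw the pointed hat and
# brim, the BR1 block at the bottom the beard.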
WIZ_GREY=[
[FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,CH1,CH1,CH1,CH1,CH1,CH1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,CH1,CH1,CH1,CH1,CH1,BG1,CH1,CH1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,BG1,BG1,CH1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,CH2,CH2,CH2,CH2,CH2,CH2,CH2,CH2,CH2,CH2,CH2,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,BG1,BG1,FR2],
[FR2,BG1,BG1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,CH1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,FR1,HR1,HR1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SC1,SC1,SK1,SK1,SK1,SC1,SC1,SK1,SK1,FR1,HR1,HR1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,EY1,FR1,SK1,SK1,SK1,EY1,FR1,SK1,SK1,SK1,FR1,HR1,HR1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,FR1,HR1,HR1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,FR1,FR1,HR1,HR1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,FR1,HR1,HR1,HR1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,FR1,FR1,SK1,SK1,SK1,SK1,SK1,FR1,HR1,HR1,HR1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,BR1,BR1,BR1,BR1,BR1,SK1,SK1,SK1,FR1,HR1,HR1,HR1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,FR1,BR1,BR1,BR1,BR1,BR1,BR1,BR1,BR1,BR1,BR1,FR1,HR1,HR1,HR1,HR1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,FR1,BR1,BR1,BR1,FR1,FR1,FR1,BR1,BR1,BR1,BR1,BR1,FR1,HR1,HR1,HR1,HR1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,FR1,BR1,BR1,BR1,BR1,BR1,BR1,BR1,BR1,BR1,BR1,BR1,FR1,HR1,HR1,HR1,HR1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,BR1,BR1,BR1,BR1,BR1,BR1,BR1,BR1,BR1,BR1,FR1,HR1,HR1,HR1,HR1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR1,BR1,BR1,BR1,BR1,BR1,BR1,BR1,FR1,FR1,BG1,BG1,HR1,HR1,HR1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR1,BR1,BR1,FR1,FR1,FR1,FR1,SK1,FR1,BG1,BG1,BG1,HR1,HR1,FR2],
[FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR1,FR1,FR2,FR1,SK1,SK1,SK1,FR1,FR2,FR2,FR2,FR2,FR2,FR2]
]
pixels = WIZ_GREY
elif b > 85000:
race_ep = 'Wizards'
type_ep = 'Tower'
hair_color_ep = 'None'
haircut_ep = 'None'
hair_prop_ep = 'None'
eyes_prop_ep = 'None'
blemishe_ep = 'None'
eyes_color_ep = 'None'
facial_hair_ep = 'None'
mouth_prop_ep = 'None'
mouth_ep = 'None'
tooth_color_ep = 'None'
nose_ep = 'None'
neck_ep = 'None'
ears_ep = 'None'
skin_ep = 'None'
ears = none
hair = none
hair_prop = none
neck = none
blemishes = none
#tooth color
mouth = none
facial_hair = none
rod = none
mouth_prop = none
#eye color
eyes = none
nose = none
seed(b)
c=randint(0,1000000)
if c > 750000:
SK1 = (234,217,217)
EY1 = (201,178,178)
SK2 = (255,255,255)
HRG3 = (220,222,234)
HRG2 = (183,179,191)
HRG4 = (203,200,212)
HRG5 = (87,101,113)
HRG1 = (0,0,0)
RC1 = (226,187,185)
MO1 = EY1
SCR1 = EY1
skin_ep = 'Albino'
elif c > 500000:
SK1 = (219,177,128)
EY1 = (210,157,96)
SK2 = (235,203,166)
HRG3 = (213,200,183)
HRG2 = (184,163,135)
HRG4 = (209,189,164)
HRG5 = (87,101,113)
HRG1 = (0,0,0)
RC1 = (215,154,104)
MO1 = EY1
SCR1 = EY1
skin_ep = 'Light'
elif c > 250000:
SK1 = (174,139,97)
EY1 = (167,124,71)
SK2 = (178,138,93)
HRG3 = (188,179,165)
HRG2 = (166,150,128)
HRG4 = (184,171,151)
HRG5 = (87,101,113)
HRG1 = (0,0,0)
RC1 = (191,105,71)
MO1 = EY1
SCR1 = EY1
skin_ep = 'Mid'
else:
SK1 = (113,63,29)
EY1 = (114,55,17)
SK2 = (146,79,35)
HRG3 = (155,135,127)
HRG2 = (139,121,111)
HRG4 = (156,131,115)
HRG5 = (87,101,113)
HRG1 = (0,0,0)
RC1 = (142,36,2)
MO1 = EY1
SCR1 = EY1
skin_ep = 'Dark'
seed(c)
d=randint(0,1000000)
if d > 750000:
ears = EARS_1
ears_ep = 'Earring'
else:
ears = none
ears_ep = 'None'
seed(d)
e=randint(0,1000000)
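# Tower wizard hair roll; note the skin roll above does not set SC1 for this type,
# so SC1 here serves as the brow/beard shadow tone alongside BR1 (beard) and HR1
# (hat and hair).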
if e > 750000:
SC1 = (80,80,80)
BR1 = (80,80,80)
HR1 = (160,160,160)
hair_color_ep = 'Grey & Carbon Grey'
elif e > 500000:
SC1 = (30,30,30)
BR1 = (30,30,30)
HR1 = (110,110,110)
hair_color_ep = 'Smokey Grey & Charcoal'
elif e > 250000:
SC1 = (80,80,80)
BR1 = (80,80,80)
HR1 = (235,235,235)
hair_color_ep = 'Seashell & Carbon Grey'
else:
SC1 = (155,155,155)
BR1 = (155,155,155)
HR1 = (235,235,235)
hair_color_ep = 'Seashell & Grey'
seed(e)
f=randint(0,1000000)
if f > 900000:
mouth_prop = CIGARETTE
mouth_prop_ep = 'Cigarette'
elif f > 840000:
mouth_prop = PIPE
mouth_prop_ep = 'Pipe'
elif f > 800000:
mouth_prop = VAPE
mouth_prop_ep = 'Vape'
else:
mouth_prop = none
mouth_prop_ep = 'None'
seed(f)
g=randint(0,1000000)
if g > 950000:
hair_prop = COWBOY_7
hair_prop_ep = 'Cowboy Hat'
elif g > 900000:
hair_prop = TOPHAT_7
hair_prop_ep = 'Top Hat'
elif g > 850000:
hair_prop = KNITTED_7
hair_prop_ep = 'Knitted Cap'
elif g > 800000:
hair_prop = FORCAP_7
hair_prop_ep = 'Cap Forward'
elif g > 750000:
hair_prop = FEDORA_7
hair_prop_ep = 'Fedora'
elif g > 700000:
hair_prop = BANDANA_7
hair_prop_ep = 'Bandana'
elif g > 650000:
hair_prop = POLICE_7
hair_prop_ep = 'Police'
elif g > 600000:
hair_prop = CAP_7
hair_prop_ep = 'Cap'
else:
hair_prop = none
hair_prop_ep = 'None'
MOLE=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,MO1,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
ROSY_1=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,RC1,0,0,0,0,RC1,RC1,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,RC1,0,0,0,0,0,RC1,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
seed(g)
h=randint(0,1000000)
if h > 970000:
blemishes = ROSY_1
blemishe_ep = 'Rosy Cheeks'
elif h > 900000:
blemishes = MOLE
blemishe_ep = 'Mole'
else:
blemishes = none
blemishe_ep = 'None'
seed(h)
i=randint(0,1000000)
if i > 970000:
eyes = TD_1
eyes_prop_ep ='3D Glasses'
elif i > 930000:
eyes = VR_1
eyes_prop_ep ='VR'
elif i > 880000:
eyes = ClassicShades_1
eyes_prop_ep ='Classic Shades'
elif i > 830000:
eyes = EyePatch_1
eyes_prop_ep ='Eye Patch'
elif i > 780000:
eyes = NerdGlasses_1
eyes_prop_ep ='Nerd Glasses'
elif i > 730000:
eyes = BigShades_1
eyes_prop_ep ='Big Shades'
elif i > 700000:
eyes = EyeMask_1
eyes_prop_ep ='Eye Mask'
elif i > 650000:
eyes = HornedRimGlasses_1
eyes_prop_ep ='Horned Rim Glasses'
elif i > 600000:
eyes = RegularShades_1
eyes_prop_ep ='Regular Shades'
else:
eyes=none
eyes_prop_ep ='None'
seed(i)
j=randint(0,1000000)
if j > 975000:
nose = NOSE_1
nose_ep = 'Clown Nose'
else:
nose = none
nose_ep = 'None'
seed(j)
k=randint(0,1000000)
if k > 975000:
mouth = SMILE
mouth_ep = 'Smile'
elif k > 950000:
mouth = FROWN
mouth_ep = 'Frown'
else:
mouth = none
mouth_ep = 'None'
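# WIZ_TOWER: 24x24 base grid; SC1 draws the long brow line and BR1 the beard block.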
WIZ_TOWER=[
[FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,HR1,HR1,HR1,SK1,SK1,HR1,HR1,HR1,SK1,SK1,HR1,HR1,HR1,HR1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,HR1,HR1,FR1,SK1,SK1,SK1,HR1,SK1,SK1,SK1,SK1,HR1,HR1,HR1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,HR1,HR1,FR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,HR1,HR1,HR1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,HR1,HR1,FR1,SC1,SC1,SC1,SK1,SK1,SC1,SC1,SC1,HR1,HR1,HR1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,HR1,HR1,FR1,EY1,FR1,SK1,SK1,SK1,EY1,FR1,SK1,HR1,HR1,HR1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,HR1,HR1,FR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,HR1,HR1,HR1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,HR1,HR1,FR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,HR1,HR1,HR1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,HR1,HR1,FR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,HR1,HR1,HR1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,HR1,HR1,FR1,SK1,SK1,FR1,FR1,SK1,SK1,SK1,SK1,HR1,HR1,HR1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,HR1,HR1,FR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,HR1,HR1,HR1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,HR1,HR1,FR1,SK1,SK1,BR1,BR1,BR1,SK1,SK1,SK1,HR1,HR1,HR1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,HR1,FR1,BR1,BR1,BR1,FR1,FR1,FR1,BR1,BR1,BR1,HR1,HR1,HR1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,HR1,HR1,FR1,BR1,BR1,BR1,BR1,BR1,BR1,BR1,BR1,HR1,HR1,HR1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,HR1,HR1,BG1,FR1,BR1,BR1,BR1,BR1,BR1,BR1,FR1,SK1,HR1,HR1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,HR1,BG1,BG1,BG1,FR1,BR1,BR1,BR1,BR1,FR1,SK1,SK1,FR1,HR1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR1,BR1,FR1,FR1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR1,FR2,FR1,SK1,SK1,SK1,FR1,FR2,FR2,FR2,FR2,FR2,FR2]
]
pixels = WIZ_TOWER
elif b > 60000:
race_ep = 'Wizards'
type_ep = 'Wood'
hair_color_ep = 'None'
haircut_ep = 'None'
hair_prop_ep = 'None'
eyes_prop_ep = 'None'
blemishe_ep = 'None'
eyes_color_ep = 'None'
facial_hair_ep = 'None'
mouth_prop_ep = 'None'
mouth_ep = 'None'
tooth_color_ep = 'None'
nose_ep = 'None'
neck_ep = 'None'
ears_ep = 'None'
skin_ep = 'None'
ears = none
hair = none
hair_prop = none
neck = none
blemishes = none
#tooth color
mouth = none
facial_hair = none
rod = none
mouth_prop = none
#eye color
eyes = none
nose = none
seed(b)
c=randint(0,1000000)
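# Chained PRNG: every roll reseeds with the previous draw, so a token's whole
# look is a deterministic function of the first seed. The thresholds below cut
# the 0..1000000 range into ~25% bands, one per skin palette.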
if c > 750000:
SK1 = (234,217,217)
SC1 = (165,141,141)
EY1 = (201,178,178)
SK2 = (255,255,255)
HRG3 = (220,222,234)
HRG2 = (183,179,191)
HRG4 = (203,200,212)
HRG5 = (87,101,113)
HRG1 = (0,0,0)
RC1 = (226,187,185)
MO1 = EY1
SCR1 = EY1
skin_ep = 'Albino'
elif c > 500000:
SK1 = (219,177,128)
SC1 = (166,110,44)
EY1 = (210,157,96)
SK2 = (235,203,166)
HRG3 = (213,200,183)
HRG2 = (184,163,135)
HRG4 = (209,189,164)
HRG5 = (87,101,113)
HRG1 = (0,0,0)
RC1 = (215,154,104)
MO1 = EY1
SCR1 = EY1
skin_ep = 'Light'
elif c > 250000:
SK1 = (174,139,97)
SC1 = (134,88,30)
EY1 = (167,124,71)
SK2 = (178,138,93)
HRG3 = (188,179,165)
HRG2 = (166,150,128)
HRG4 = (184,171,151)
HRG5 = (87,101,113)
HRG1 = (0,0,0)
RC1 = (191,105,71)
MO1 = EY1
SCR1 = EY1
skin_ep = 'Mid'
else:
SK1 = (113,63,29)
SC1 = (86,39,10)
EY1 = (114,55,17)
SK2 = (146,79,35)
HRG3 = (155,135,127)
HRG2 = (139,121,111)
HRG4 = (156,131,115)
HRG5 = (87,101,113)
HRG1 = (0,0,0)
RC1 = (142,36,2)
MO1 = EY1
SCR1 = EY1
skin_ep = 'Dark'
seed(c)
d=randint(0,1000000)
if d > 750000:
ears = EARS_1
ears_ep = 'Earring'
else:
ears = none
ears_ep = 'None'
seed(d)
e=randint(0,1000000)
if e > 750000:
HR1 = (160,110,30)
HR2 = (130,60,20)
BR2 = (200,230,180)
BR1 = BE2
hair_color_ep = 'Taupe & Cookie Brown'
elif e > 500000:
HR1 = (130,90,10)
HR2 = (70,50,10)
BR2 = (200,230,180)
hair_color_ep = 'Brown & Cookie Brown'
BR1 = BE2
elif e > 250000:
HR1 = (160,110,30)
HR2 = (130,60,20)
BR2 = (60,200,180)
BR1 = (30,20,5)
hair_color_ep = 'Taupe & Graphite'
else:
HR1 = (130,90,10)
HR2 = (70,50,10)
BR2 = (60,200,180)
BR1 = (30,20,5)
hair_color_ep = 'Brown & Graphite'
seed(e)
f=randint(0,1000000)
if f > 900000:
mouth_prop = CIGARETTE
mouth_prop_ep = 'Cigarette'
elif f > 840000:
mouth_prop = PIPE
mouth_prop_ep = 'Pipe'
elif f > 800000:
mouth_prop = VAPE
mouth_prop_ep = 'Vape'
else:
mouth_prop = none
mouth_prop_ep = 'None'
seed(f)
g=randint(0,1000000)
if g > 975000:
mouth = SMILE
mouth_ep = 'Smile'
elif g > 950000:
mouth = FROWN
mouth_ep = 'Frown'
else:
mouth = none
mouth_ep = 'None'
MOLE=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,MO1,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
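# MOLE is a 24x24 overlay: zeros are transparent and the single MO1 cell marks
# the mole pixel; it bakes in the MO1 color current at this point of the branch.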
ROSY_1=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,RC1,0,0,0,0,RC1,RC1,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,RC1,0,0,0,0,0,RC1,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
seed(g)
h=randint(0,1000000)
if h > 970000:
blemishes = ROSY_1
blemishe_ep = 'Rosy Cheeks'
elif h > 900000:
blemishes = MOLE
blemishe_ep = 'Mole'
else:
blemishes = none
blemishe_ep = 'None'
seed(h)
i=randint(0,1000000)
if i > 970000:
eyes = TD_1
eyes_prop_ep ='3D Glasses'
elif i > 930000:
eyes = VR_1
eyes_prop_ep ='VR'
elif i > 880000:
eyes = ClassicShades_1
eyes_prop_ep ='Classic Shades'
elif i > 830000:
eyes = EyePatch_1
eyes_prop_ep ='Eye Patch'
elif i > 780000:
eyes = NerdGlasses_1
eyes_prop_ep ='Nerd Glasses'
elif i > 730000:
eyes = BigShades_1
eyes_prop_ep ='Big Shades'
elif i > 700000:
eyes = EyeMask_1
eyes_prop_ep ='Eye Mask'
elif i > 650000:
eyes = HornedRimGlasses_1
eyes_prop_ep ='Horned Rim Glasses'
elif i > 600000:
eyes = RegularShades_1
eyes_prop_ep ='Regular Shades'
else:
eyes=none
eyes_prop_ep ='None'
seed(i)
j=randint(0,1000000)
if j > 975000:
nose = NOSE_1
nose_ep = 'Clown Nose'
else:
nose = none
nose_ep = 'None'
WIZ_WOODEN=[
[FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,HR2,HR2,HR2,HR2,HR2,HR2,HR2,HR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,HR2,HR2,HR2,HR2,HR2,HR2,HR2,HR2,HR2,HR2,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,HR2,HR2,HR2,HR1,HR1,HR1,HR1,HR2,HR2,HR2,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,HR1,HR1,HR2,HR2,HR2,HR2,HR1,HR1,HR1,HR1,HR1,HR1,HR2,HR2,HR2,HR2,HR1,HR1,BG1,BG1,FR2],
[FR2,BG1,HR1,HR1,HR1,HR1,HR2,HR2,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR2,HR2,HR1,HR1,HR1,HR1,BG1,FR2],
[FR2,BG1,HR1,HR1,HR1,HR1,HR1,HR2,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR2,HR1,HR1,HR1,HR1,HR1,BG1,FR2],
[FR2,BG1,HR1,HR1,HR1,HR1,HR1,BR2,HR2,HR1,HR1,HR1,HR1,HR1,HR1,HR2,HR1,HR1,HR1,HR1,HR1,HR1,BG1,FR2],
[FR2,BG1,HR1,BG1,BG1,HR1,BR2,HR1,HR1,HR1,SK1,SK1,SK1,SK1,HR1,HR1,HR1,HR1,HR1,BG1,BG1,HR1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,HR1,BR2,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,HR1,HR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BR2,SC1,SC1,SK1,SK1,SK1,SC1,SC1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BR2,EY1,FR1,SK1,SK1,SK1,EY1,FR1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BR2,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,BR1,FR1,FR1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BR2,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,BR1,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,BR2,SK1,FR1,FR1,SK1,SK1,SK1,BR1,BR1,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,BR1,BR2,SK1,SK1,SK1,SK1,BR1,BR1,BR1,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,BR2,BR1,BR1,BR1,BR1,BR1,BR1,BR1,BR1,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,BR1,BR1,FR1,FR1,FR1,BR1,BR1,BR1,BR1,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,BR1,BR1,BR1,BR1,BR1,BR1,BR1,BR1,BR1,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,BR1,BR1,BR1,BR1,BR1,BR1,BR1,BR1,BR1,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR1,BR1,BR1,BR1,BR1,BR1,BR1,BR1,FR1,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR1,BR1,BR1,BR1,BR1,BR1,FR1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR1,FR1,FR1,FR1,FR1,SK1,SK1,FR1,FR2,FR2,FR2,FR2,FR2,FR2]
]
pixels = WIZ_WOODEN
elif b > 35000:
race_ep = 'Wizards'
type_ep = 'Blue'
hair_color_ep = 'None'
haircut_ep = 'None'
hair_prop_ep = 'None'
eyes_prop_ep = 'None'
blemishe_ep = 'None'
eyes_color_ep = 'None'
facial_hair_ep = 'None'
mouth_prop_ep = 'None'
mouth_ep = 'None'
tooth_color_ep = 'None'
nose_ep = 'None'
neck_ep = 'None'
ears_ep = 'None'
skin_ep = 'None'
ears = none
hair = none
hair_prop = none
neck = none
blemishes = none
#tooth color
mouth = none
facial_hair = none
rod = none
mouth_prop = none
#eye color
eyes = none
nose = none
seed(b)
c=randint(0,1000000)
if c > 750000:
HR1 = (30,25,200)
HR2 = (255,218,0)
SK1 = (234,217,217)
SC1 = (190,215,240)
BR1 = (190,215,240)
EY1 = (201,178,178)
SK2 = (255,255,255)
HRG3 = (220,222,234)
HRG2 = (183,179,191)
HRG4 = (203,200,212)
HRG5 = (87,101,113)
HRG1 = (0,0,0)
RC1 = (226,187,185)
skin_ep = 'Albino'
MO1 = EY1
SCR1 = EY1
hair_color_ep = 'Persian Blue'
elif c > 500000:
HR1 = (10,50,100)
HR2 = (216,214,203)
SK1 = (219,177,128)
SC1 = (190,215,240)
BR1 = (190,215,240)
EY1 = (210,157,96)
SK2 = (235,203,166)
HRG3 = (213,200,183)
HRG2 = (184,163,135)
HRG4 = (209,189,164)
HRG5 = (87,101,113)
HRG1 = (0,0,0)
RC1 = (215,154,104)
MO1 = EY1
SCR1 = EY1
skin_ep = 'Light'
hair_color_ep = 'Sapphire'
elif c > 250000:
HR1 = (60,10,145)
HR2 = (255,218,0)
SK1 = (174,139,97)
SC1 = (190,215,240)
BR1 = (190,215,240)
EY1 = (167,124,71)
SK2 = (178,138,93)
HRG3 = (188,179,165)
HRG2 = (166,150,128)
HRG4 = (184,171,151)
HRG5 = (87,101,113)
HRG1 = (0,0,0)
RC1 = (191,105,71)
MO1 = EY1
SCR1 = EY1
skin_ep = 'Mid'
hair_color_ep = 'Indigo'
else:
HR1 = (30,180,220)
HR2 = (216,214,203)
SK1 = (113,63,29)
SC1 = (190,215,240)
BR1 = (190,215,240)
EY1 = (114,55,17)
SK2 = (146,79,35)
HRG3 = (155,135,127)
HRG2 = (139,121,111)
HRG4 = (156,131,115)
HRG5 = (87,101,113)
HRG1 = (0,0,0)
RC1 = (142,36,2)
MO1 = EY1
SCR1 = EY1
skin_ep = 'Dark'
hair_color_ep = 'Topaz'
seed(c)
d=randint(0,1000000)
if d > 750000:
ears = EARS_1
ears_ep = 'Earring'
else:
ears = none
ears_ep = 'None'
seed(d)
e=randint(0,1000000)
#if e > 900000:
# neck = GoldChain_1
#elif e > 700000:
# neck = SilverChain_1
#elif e > 500000:
# neck = RING_1
#else:
# neck = none
seed(e)
f=randint(0,1000000)
if f > 900000:
mouth_prop = CIGARETTE
mouth_prop_ep = 'Cigarette'
elif f > 840000:
mouth_prop = PIPE
mouth_prop_ep = 'Pipe'
elif f > 800000:
mouth_prop = VAPE
mouth_prop_ep = 'Vape'
else:
mouth_prop = none
mouth_prop_ep = 'None'
seed(f)
g=randint(0,1000000)
if g > 975000:
mouth = SMILE
mouth_ep = 'Smile'
elif g > 950000:
mouth = FROWN
mouth_ep = 'Frown'
else:
mouth = none
mouth_ep = 'None'
MOLE=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,MO1,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
ROSY_1=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,RC1,0,0,0,0,RC1,RC1,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,RC1,0,0,0,0,0,RC1,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
seed(g)
h=randint(0,1000000)
if h > 970000:
blemishes = ROSY_1
blemishe_ep = 'Rosy Cheeks'
elif h > 900000:
blemishes = MOLE
blemishe_ep = 'Mole'
else:
blemishes = none
blemishe_ep = 'None'
seed(h)
i=randint(0,1000000)
if i > 970000:
eyes = TD_1
eyes_prop_ep ='3D Glasses'
elif i > 930000:
eyes = VR_1
eyes_prop_ep ='VR'
elif i > 880000:
eyes = ClassicShades_1
eyes_prop_ep ='Classic Shades'
elif i > 830000:
eyes = EyePatch_1
eyes_prop_ep ='Eye Patch'
elif i > 780000:
eyes = NerdGlasses_1
eyes_prop_ep ='Nerd Glasses'
elif i > 730000:
eyes = BigShades_1
eyes_prop_ep ='Big Shades'
elif i > 700000:
eyes = EyeMask_1
eyes_prop_ep ='Eye Mask'
elif i > 650000:
eyes = HornedRimGlasses_1
eyes_prop_ep ='Horned Rim Glasses'
elif i > 600000:
eyes = RegularShades_1
eyes_prop_ep ='Regular Shades'
else:
eyes=none
eyes_prop_ep ='None'
seed(i)
j=randint(0,1000000)
if j > 975000:
nose = NOSE_1
nose_ep = 'Clown Nose'
else:
nose = none
nose_ep = 'None'
WIZ_BLUE=[
[FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,HR1,HR1,HR1,HR2,HR2,HR2,HR2,HR2,HR2,HR2,HR1,HR1,HR1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,HR1,HR1,HR1,HR2,SK1,SK1,SK1,SK1,SK1,SK1,SK1,HR2,HR1,HR1,HR1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,HR1,HR1,HR2,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,HR2,HR1,HR1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,HR1,HR1,HR2,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,HR2,HR1,HR1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,HR1,HR1,HR2,SC1,SC1,SC1,SK1,SK1,SC1,SC1,SC1,SK1,HR2,HR1,HR1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,HR1,HR1,HR2,EY1,FR1,SK1,SK1,SK1,EY1,FR1,SK1,SK1,HR2,HR1,HR1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,HR1,HR1,HR2,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,HR2,HR1,HR1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,HR1,HR1,HR2,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,HR2,HR1,HR1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,HR1,HR1,HR2,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,HR2,HR1,HR1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,HR1,HR1,HR2,SK1,SK1,FR1,FR1,SK1,SK1,SK1,SK1,SK1,HR2,HR1,HR1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,HR1,HR1,HR2,BR1,BR1,BR1,BR1,BR1,BR1,BR1,BR1,BR1,BR1,BR1,HR2,HR1,HR1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,HR1,HR1,HR2,BR1,BR1,BR1,BR1,BR1,BR1,BR1,BR1,BR1,BR1,BR1,HR2,HR1,HR1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,HR1,HR1,HR2,BR1,BR1,BR1,FR1,FR1,FR1,BR1,BR1,BR1,BR1,BR1,HR2,HR1,HR1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,HR1,HR1,HR2,BR1,BR1,BR1,BR1,BR1,BR1,BR1,BR1,BR1,BR1,BR1,HR2,HR1,HR1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,HR1,HR1,HR2,BR1,BR1,BR1,BR1,BR1,BR1,BR1,BR1,BR1,FR1,FR1,HR2,HR1,HR1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,HR1,HR1,HR2,BR1,BR1,BR1,BR1,BR1,BR1,BR1,FR1,SK1,HR2,HR1,HR1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,HR1,HR1,HR2,FR1,FR1,BR1,BR1,BR1,FR1,FR1,SK1,HR2,HR1,HR1,HR1,BG1,BG1,BG1,FR2],
[FR2,FR2,FR2,FR2,FR2,FR2,HR1,HR1,HR2,FR2,FR1,BR1,FR1,FR1,FR2,HR2,HR1,HR1,HR1,HR1,FR2,FR2,FR2,FR2]
]
pixels = WIZ_BLUE
elif b > 19000:
race_ep = 'Unknown'
type_ep = 'Male'
hair_color_ep = 'None'
haircut_ep = 'None'
hair_prop_ep = 'None'
eyes_prop_ep = 'None'
blemishe_ep = 'None'
eyes_color_ep = 'None'
facial_hair_ep = 'None'
mouth_prop_ep = 'None'
mouth_ep = 'None'
tooth_color_ep = 'None'
nose_ep = 'None'
neck_ep = 'None'
ears_ep = 'None'
skin_ep = 'None'
ears = none
hair = none
hair_prop = none
neck = none
blemishes = none
#tooth color
mouth = none
facial_hair = none
rod = none
mouth_prop = none
#eye color
eyes = none
nose = none
seed(b)
c=randint(0,1000000)
if c > 750000:
SK1 = (250,200,170)
HR1 = (130,130,130)
skin_ep = 'Peach'
elif c > 500000:
SK1 = (200,170,140)
HR1 = (125,110,90)
skin_ep = 'Dust'
elif c > 250000:
SK1 = (240,210,190)
HR1 = (170,150,120)
skin_ep = 'Bone'
else:
SK1 = (195,175,165)
HR1 = (100,95,85)
skin_ep = 'Silk'
seed(c)
d=randint(0,1000000)
if d > 750000:
ears = EARS_4
ears_ep = 'Earring'
else:
ears = none
ears_ep = 'None'
seed(d)
e=randint(0,1000000)
if e > 950000:
hair_prop = CAP_5
hair_prop_ep = 'Cap'
elif e > 900000:
hair_prop = KNITTED_4
hair_prop_ep = 'Knitted Cap'
elif e > 850000:
hair_prop = HEADBAND_7
hair_prop_ep = 'Headband'
elif e > 800000:
hair_prop = FORCAP_3
hair_prop_ep = 'Cap Forward'
elif e > 750000:
hair_prop = COWBOY_3
hair_prop_ep = 'Cowboy Hat'
elif e > 700000:
hair_prop = TOPHAT_3
hair_prop_ep = 'Top Hat'
else:
hair_prop = none
hair_prop_ep = 'None'
seed(e)
tutu = 0 # reset per token: set to 99 below when a Gold Chain is drawn
tyty = 0 # reset per token: set to 99 below when a Frown is drawn
f=randint(0,1000000)
if f > 980000:
neck = RING_3
neck_ep = 'Ring Onchain'
elif f > 880000:
neck = GoldChain_4
neck_ep = 'Gold Chain'
tutu = 99
elif f > 800000:
neck = SilverChain_3
neck_ep = 'Silver Chain'
else:
neck = none
neck_ep = 'None'
seed(f)
g=randint(0,1000000)
if g > 975000:
mouth = SMILE
mouth_ep = 'Smile'
elif g > 950000:
mouth = FROWN
mouth_ep = 'Frown'
tyty = 99
else:
mouth = none
mouth_ep = 'None'
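# Veto rule: when both flags are set (Gold Chain drawn together with a Frown),
# the neck trait is dropped again below.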
if tutu == 99 and tyty == 99:
neck = none
neck_ep = 'None'
seed(g)
h=randint(0,1000000)
if h > 900000:
mouth_prop = CIGARETTE
mouth_prop_ep = 'Cigarette'
elif h > 840000:
mouth_prop = PIPE
mouth_prop_ep = 'Pipe'
elif h > 800000:
mouth_prop = VAPE
mouth_prop_ep = 'Vape'
else:
mouth_prop = none
mouth_prop_ep = 'None'
seed(h)
i=randint(0,1000000)
if i > 200000:
EY1 = (255,255,255)
eyes_color_ep = 'White'
elif i > 80000:
EY1 = (230,180,100)
eyes_color_ep = 'Peach'
else:
EY1 = (78,154,197)
eyes_color_ep = 'Blue'
seed(i)
j=randint(0,1000000)
if j > 950000:
eyes = ClassicShades_4
eyes_prop_ep ='Classic Shades'
elif j > 900000:
eyes = EyePatch_4
eyes_prop_ep ='Eye Patch'
elif j > 850000:
eyes = RegularShades_4
eyes_prop_ep ='Regular Shades'
else:
eyes=none
eyes_prop_ep ='None'
GOLLUN=[
[FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,HR1,HR1,HR1,HR1,HR1,HR1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,HR1,SK1,HR1,SK1,HR1,SK1,HR1,HR1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,HR1,SK1,SK1,HR1,SK1,HR1,SK1,HR1,SK1,HR1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,HR1,SK1,HR1,SK1,SK1,HR1,SK1,HR1,SK1,HR1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,HR1,SK1,HR1,SK1,HR1,SK1,SK1,HR1,SK1,HR1,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,HR1,EY1,EY1,SK1,SK1,SK1,EY1,EY1,SK1,HR1,SK1,FR1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,HR1,EY1,FR1,SK1,SK1,SK1,EY1,FR1,SK1,HR1,SK1,FR1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,HR1,SK1,HR1,SK1,SK1,SK1,SK1,HR1,SK1,HR1,SK1,FR1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,HR1,SK1,SK1,SK1,FR1,SK1,SK1,SK1,SK1,FR1,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,bl,SK1,SK1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,FR1,FR1,FR1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,SK1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR1,FR1,FR1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR1,SK1,SK1,SK1,FR1,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2]
]
pixels = GOLLUN
elif b > 10000:
race_ep = 'Wraiths'
type_ep = 'None'
hair_color_ep = 'None'
haircut_ep = 'None'
hair_prop_ep = 'None'
eyes_prop_ep = 'None'
blemishe_ep = 'None'
eyes_color_ep = 'None'
facial_hair_ep = 'None'
mouth_prop_ep = 'None'
mouth_ep = 'None'
tooth_color_ep = 'None'
nose_ep = 'None'
neck_ep = 'None'
ears_ep = 'None'
skin_ep = 'None'
ears = none
hair = none
hair_prop = none
neck = none
blemishes = none
#tooth color
mouth = none
facial_hair = none
rod = none
mouth_prop = none
#eye color
eyes = none
nose = none
seed(b)
c=randint(0,1000000)
if c > 500000:
SK1 = (50,50,50)
HR1 = (100,100,100)
SC1 = nr
MO1 = nr
skin_ep = 'Dark Grey'
elif c > 400000:
SK1 = (128,128,128)
HR1 = (255,193,7) # gold (French: or)
SC1 = nr
MO1 = nr
skin_ep = 'Granite'
elif c > 300000:
SK1 = (128,128,128)
HR1 = (200,130,40) #BRONZE
SC1 = nr
MO1 = nr
skin_ep = 'Granite'
elif c > 250000:
SK1 = (142,36,170) #VIOLET
HR1 = (40,5,55)
SC1 = (74,20,140)
MO1 = SC1
skin_ep = 'Eggplant'
else:
SK1 = (128,128,128)
HR1 = (230,230,230)
SC1 = (30,30,30)
MO1 = SC1
skin_ep = 'Granite'
seed(c)
d=randint(0,1000000)
if d > 750000:
ears = EARS_2
ears_ep = 'Earring'
else:
ears = none
ears_ep = 'None'
MOLE=[
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,MO1,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
]
seed(d)
e=randint(0,1000000)
if e > 930000:
blemishes = MOLE
blemishe_ep = 'Mole'
else:
blemishes = none
blemishe_ep = 'None'
seed(e)
f=randint(0,1000000)
if f > 900000:
neck = GoldChain_1
neck_ep = 'Gold Chain'
elif f > 820000:
neck = SilverChain_1
neck_ep = 'Silver Chain'
elif f > 800000:
neck = RING_1
neck_ep = 'Ring Onchain'
else:
neck = none
neck_ep = 'None'
seed(f)
g=randint(0,1000000)
seed(g)
h=randint(0,1000000)
if h > 900000:
mouth_prop = CIGARETTE
mouth_prop_ep = 'Cigarette'
elif h > 880000:
mouth_prop = MASK_1
mouth_prop_ep = 'Medical Mask'
elif h > 820000:
mouth_prop = PIPE
mouth_prop_ep = 'Pipe'
elif h > 780000:
mouth_prop = VAPE
mouth_prop_ep = 'Vape'
else:
mouth_prop = none
mouth_prop_ep = 'None'
seed(h)
i=randint(0,1000000)
if i > 400000:
EY1 = (255,255,255)
EY2 = nr
eyes_color_ep = 'White'
elif i > 300000:
EY1 = (214,92,26)
EY2 = nr
eyes_color_ep = "Orange"
elif i > 200000:
EY1 = (176,61,133)
EY2 = nr
eyes_color_ep = "Purple"
elif i > 100000:
EY1 = (255,255,0)
EY2 = nr
eyes_color_ep = 'Yellow'
else:
EY1 = (255,0,0)
EY2 = nr
eyes_color_ep = 'Red'
seed(i)
j=randint(0,1000000)
if j > 970000:
eyes = TD_2
eyes_prop_ep ='3D Glasses'
elif j > 930000:
eyes = VR_2
eyes_prop_ep ='VR'
elif j > 880000:
eyes = ClassicShades_2
eyes_prop_ep ='Classic Shades'
elif j > 830000:
eyes = SmallShades_2
eyes_prop_ep ='Small Shades'
elif j > 780000:
eyes = EyePatch_2
eyes_prop_ep ='Eye Patch'
elif j > 730000:
eyes = NerdGlasses_2
eyes_prop_ep ='Nerd Glasses'
elif j > 700000:
eyes = EyeMask_2
eyes_prop_ep ='Eye Mask'
elif j > 650000:
eyes = RegularShades_2
eyes_prop_ep ='Regular Shades'
else:
eyes=none
eyes_prop_ep ='None'
seed(j)
k=randint(0,1000000)
if k > 975000:
nose = NOSE_1
nose_ep = 'Clown Nose'
else:
nose = none
nose_ep = 'None'
SPECTRE=[
[FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR1,FR1,FR1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR1,FR1,HR1,HR1,HR1,FR1,FR1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,FR1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,FR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,HR1,FR1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,FR1,HR1,FR1,FR1,FR1,HR1,HR1,FR1,FR1,FR1,HR1,HR1,HR1,FR1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,FR1,HR1,FR1,SK1,SK1,SK1,FR1,FR1,SK1,SK1,SK1,FR1,HR1,HR1,FR1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,FR1,HR1,FR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,FR1,HR1,FR1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,FR1,HR1,FR1,SC1,SC1,SK1,SK1,SK1,SC1,SC1,SK1,SK1,FR1,HR1,HR1,FR1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,FR1,HR1,FR1,EY1,EY2,SK1,SK1,SK1,EY1,EY2,SK1,SK1,SK1,FR1,HR1,FR1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,FR1,HR1,FR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,FR1,HR1,FR1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,FR1,HR1,FR1,SK1,SK1,SK1,FR1,SK1,SK1,SK1,SK1,SK1,FR1,FR1,HR1,FR1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,FR1,HR1,FR1,SK1,SK1,FR1,SK1,SK1,SK1,SK1,SK1,SK1,FR1,HR1,HR1,FR1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,FR1,HR1,FR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,FR1,HR1,FR1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,FR1,HR1,FR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,FR1,HR1,FR1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,FR1,HR1,FR1,SK1,SK1,FR1,FR1,FR1,SK1,SK1,SK1,SK1,FR1,HR1,FR1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,FR1,HR1,FR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,FR1,SK1,FR1,HR1,FR1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,FR1,HR1,FR1,SK1,SK1,SK1,SK1,SK1,FR1,SK1,SK1,FR1,HR1,FR1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,HR1,FR1,FR1,FR1,FR1,FR1,SK1,SK1,SK1,FR1,HR1,FR1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR1,HR1,HR1,HR1,HR1,FR1,FR1,SK1,FR1,FR1,HR1,FR1,BG1,BG1,BG1,FR2],
[FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR1,FR1,FR1,HR1,HR1,HR1,FR1,HR1,HR1,HR1,FR1,FR2,FR2,FR2,FR2]
]
pixels = SPECTRE
elif b > 7000:
race_ep = 'Dark Riders'
type_ep = 'None'
hair_color_ep = 'None'
haircut_ep = 'None'
hair_prop_ep = 'None'
eyes_prop_ep = 'None'
blemishe_ep = 'None'
eyes_color_ep = 'None'
facial_hair_ep = 'None'
mouth_prop_ep = 'None'
mouth_ep = 'None'
tooth_color_ep = 'None'
nose_ep = 'None'
neck_ep = 'None'
ears_ep = 'None'
skin_ep = 'None'
ears = none
hair = none
hair_prop = none
neck = none
blemishes = none
#tooth color
mouth = none
facial_hair = none
rod = none
mouth_prop = none
#eye color
eyes = none
nose = none
SK1 = (118,113,113)
SK2 = (191,191,191)
SK3 = (223,223,223)
skin_ep = 'None'
seed(b)
c=randint(0,1000000)
if c > 750000:
ears = EARS_1
ears_ep = 'Earring'
else:
ears = none
ears_ep = 'None'
seed(c)
d=randint(0,1000000)
if d > 900000:
neck = GoldChain_1
neck_ep = 'Gold Chain'
elif d > 820000:
neck = SilverChain_1
neck_ep = 'Silver Chain'
elif d > 800000:
neck = RING_1
neck_ep = 'Ring Onchain'
else:
neck = none
neck_ep = 'None'
seed(d)
e=randint(0,1000000)
if e > 900000:
mouth_prop = CIGARETTE
mouth_prop_ep = 'Cigarette'
elif e > 840000:
mouth_prop = PIPE
mouth_prop_ep = 'Pipe'
elif e > 800000:
mouth_prop = VAPE
mouth_prop_ep = 'Vape'
else:
mouth_prop = none
mouth_prop_ep = 'None'
seed(e)
f=randint(0,1000000)
if f > 400000:
EY1 = (255,255,255)
eyes_color_ep = 'White'
elif f > 300000:
EY1 = (214,92,26)
eyes_color_ep = "Orange"
elif f > 200000:
EY1 = (176,61,133)
eyes_color_ep = "Purple"
elif f > 100000:
EY1 = (255,255,0)
eyes_color_ep = 'Yellow'
else:
EY1 = (255,0,0)
eyes_color_ep = 'Red'
seed(f)
g=randint(0,1000000)
if g > 970000:
eyes = TD_1
eyes_prop_ep ='3D Glasses'
elif g > 930000:
eyes = VR_1
eyes_prop_ep ='VR'
elif g > 880000:
eyes = ClassicShades_1
eyes_prop_ep ='Classic Shades'
elif g > 830000:
eyes = EyePatch_1
eyes_prop_ep ='Eye Patch'
elif g > 780000:
eyes = NerdGlasses_1
eyes_prop_ep ='Nerd Glasses'
elif g > 730000:
eyes = BigShades_1
eyes_prop_ep ='Big Shades'
elif g > 700000:
eyes = EyeMask_1
eyes_prop_ep ='Eye Mask'
elif g > 650000:
eyes = HornedRimGlasses_1
eyes_prop_ep ='Horned Rim Glasses'
elif g > 600000:
eyes = RegularShades_1
eyes_prop_ep ='Regular Shades'
else:
eyes=none
eyes_prop_ep ='None'
DARK_RIDER=[
[FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR1,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR1,BG1,FR1,SK1,SK1,SK1,FR1,BG1,FR1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,FR1,BG1,FR1,SK1,SK1,SK1,FR1,BG1,FR1,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,FR1,FR1,FR1,SK1,FR1,FR1,SK1,SK1,SK1,FR1,FR1,SK1,FR1,FR1,FR1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,FR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,FR1,EY1,SK1,SK1,SK1,FR1,EY1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,EY1,FR1,SK1,SK1,SK1,EY1,FR1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,SK1,FR1,SK1,SK1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,FR1,FR1,FR1,SK1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,FR1,FR1,FR1,FR1,FR1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,FR1,FR1,FR1,FR1,FR1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,FR1,FR1,FR1,FR1,FR1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,FR1,FR1,FR1,SK1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,FR1,FR1,FR1,SK1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,FR1,FR1,FR1,SK1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR1,FR1,FR1,FR1,FR1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR1,SK1,SK1,SK1,FR1,FR2,FR2,FR2,FR2,FR2,FR2]
]
pixels = DARK_RIDER
elif b > 1000:
race_ep = 'Daemons'
type_ep = 'None'
hair_color_ep = 'None'
haircut_ep = 'None'
hair_prop_ep = 'None'
eyes_prop_ep = 'None'
blemishe_ep = 'None'
eyes_color_ep = 'None'
facial_hair_ep = 'None'
mouth_prop_ep = 'None'
mouth_ep = 'None'
tooth_color_ep = 'None'
nose_ep = 'None'
neck_ep = 'None'
ears_ep = 'None'
skin_ep = 'None'
ears = none
hair = none
hair_prop = none
neck = none
blemishes = none
#tooth color
mouth = none
facial_hair = none
rod = none
mouth_prop = none
#eye color
eyes = none
nose = none
SK1 = (90,90,90)
FR4 = (166,166,166)
FR5 = (225,63,0)
FR3 = (240,114,48)
FR1 = nr
FR2 = bl
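# nr and bl are assumed to be the global black and white RGB tuples defined
# earlier in the script (French 'noir' / 'blanc'; see the eye-color branch below,
# where bl maps to 'White' and nr to 'Black').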
seed(b)
c=randint(0,1000000)
seed(c)
d=randint(0,1000000)
if d > 900000:
neck = GoldChain_1
neck_ep = 'Gold Chain'
elif d > 820000:
neck = SilverChain_1
neck_ep = 'Silver Chain'
elif d > 800000:
neck = RING_1
neck_ep = 'Ring Onchain'
else:
neck = none
neck_ep = 'None'
seed(d)
e=randint(0,1000000)
if e > 900000:
mouth_prop = CIGARETTE
mouth_prop_ep = 'Cigarette'
elif e > 840000:
mouth_prop = PIPE
mouth_prop_ep = 'Pipe'
elif e > 800000:
mouth_prop = VAPE
mouth_prop_ep = 'Vape'
else:
mouth_prop = none
mouth_prop_ep = 'None'
seed(e)
f=randint(0,1000000)
if f > 500000:
EY1 = bl
eyes_color_ep = 'White'
else:
EY1 = nr
eyes_color_ep = 'Black'
seed(f)
g=randint(0,1000000)
if g > 970000:
eyes = TD_1
eyes_prop_ep ='3D Glasses'
elif g > 930000:
eyes = VR_1
eyes_prop_ep ='VR'
elif g > 880000:
eyes = ClassicShades_1
eyes_prop_ep ='Classic Shades'
elif g > 830000:
eyes = EyePatch_1
eyes_prop_ep ='Eye Patch'
elif g > 780000:
eyes = NerdGlasses_1
eyes_prop_ep ='Nerd Glasses'
elif g > 730000:
eyes = BigShades_1
eyes_prop_ep ='Big Shades'
elif g > 700000:
eyes = EyeMask_1
eyes_prop_ep ='Eye Mask'
elif g > 650000:
eyes = RegularShades_1
eyes_prop_ep ='Regular Shades'
else:
eyes=none
eyes_prop_ep ='None'
seed(g)
h=randint(0,1000000)
if h > 750000:
SK1 = (60,60,60)
FR4 = (166,166,166)
FR5 = (225,63,0)
FR3 = (240,160,0)
FR1 = nr
FR2 = bl
skin_ep = 'Dark Grey'
hair_color_ep = 'Orange'
elif h > 500000:
SK1 = (30,30,30)
FR4 = (166,166,166)
FR5 = (225,63,0)
FR3 = (240,160,0)
FR1 = nr
FR2 = bl
skin_ep = 'Charcoal'
hair_color_ep = 'Orange'
elif h > 250000:
SK1 = (60,60,60)
FR4 = (166,166,166)
FR5 = (225,63,0)
FR3 = (240,114,48)
FR1 = nr
FR2 = bl
skin_ep = 'Dark Grey'
hair_color_ep = 'Burning Orange'
else:
SK1 = (30,30,30)
FR4 = (166,166,166)
FR5 = (225,63,0)
FR3 = (240,114,48)
FR1 = nr
FR2 = bl
skin_ep = 'Charcoal'
hair_color_ep = 'Burning Orange'
DAEMON=[
[FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,FR3,FR3,FR3,BG1,BG1,BG1,BG1,BG1,FR3,FR3,FR3,FR3,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,FR3,FR1,FR1,FR1,FR3,BG1,BG1,BG1,FR3,FR1,FR1,FR1,FR1,FR3,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,FR3,FR1,FR1,FR1,FR1,FR1,FR3,FR3,FR3,FR1,FR1,FR1,FR1,FR1,FR1,FR3,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,FR3,FR1,FR1,FR1,FR1,FR1,FR1,FR1,FR1,FR1,FR1,FR1,FR1,FR1,FR1,FR1,FR1,FR3,BG1,BG1,FR2],
[FR2,BG1,FR3,FR1,FR1,FR1,FR1,FR1,FR1,FR1,FR1,FR1,FR1,FR1,FR1,FR1,FR1,FR1,FR1,FR1,FR1,FR3,BG1,FR2],
[FR2,FR3,FR1,FR1,FR1,FR1,FR1,FR1,FR1,FR3,FR1,FR1,FR1,FR3,FR1,FR1,FR1,FR1,FR1,FR1,FR1,FR1,FR3,FR2],
[FR2,FR3,FR1,FR1,FR1,FR1,FR1,FR3,FR3,SK1,FR3,FR1,FR3,SK1,FR3,FR3,FR1,FR1,FR1,FR1,FR1,FR1,FR3,FR2],
[FR2,FR3,FR1,FR1,FR1,FR1,FR3,FR1,SK1,SK1,SK1,FR3,SK1,SK1,SK1,SK1,FR3,FR1,FR1,FR1,FR1,FR1,FR3,FR2],
[FR2,FR3,FR1,FR1,FR3,FR3,BG1,FR1,FR4,FR4,SK1,SK1,SK1,FR4,FR4,SK1,SK1,FR3,FR3,FR3,FR1,FR1,FR3,FR2],
[FR2,FR3,FR1,FR1,FR3,BG1,BG1,FR1,FR5,EY1,SK1,SK1,SK1,FR5,EY1,SK1,SK1,FR1,BG1,FR3,FR1,FR1,FR3,FR2],
[FR2,FR3,FR1,FR1,FR3,BG1,BG1,FR1,SK1,SK1,FR3,SK1,FR3,SK1,SK1,SK1,SK1,FR1,BG1,FR3,FR1,FR1,FR3,FR2],
[FR2,FR3,FR1,FR1,FR3,BG1,BG1,FR1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,SK1,FR1,BG1,FR3,FR1,FR1,FR3,FR2],
[FR2,BG1,FR3,FR1,FR1,FR3,BG1,FR1,SK1,FR3,FR3,FR3,FR3,FR3,SK1,SK1,SK1,FR1,FR3,FR1,FR1,FR3,BG1,FR2],
[FR2,BG1,BG1,FR3,FR1,FR1,FR3,FR1,SK1,FR3,FR3,FR3,FR3,FR3,SK1,SK1,SK1,FR3,FR1,FR1,FR3,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,FR3,FR1,FR3,FR1,SK1,FR3,FR3,FR3,FR3,FR3,SK1,SK1,SK1,FR3,FR1,FR3,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,FR3,BG1,FR1,SK1,SK1,FR3,FR3,FR3,SK1,SK1,SK1,SK1,FR1,FR3,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,FR3,FR3,FR3,SK1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,SK1,FR3,SK1,SK1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,SK1,SK1,SK1,FR1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR1,FR1,FR1,FR1,FR1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR1,SK1,SK1,SK1,FR1,FR2,FR2,FR2,FR2,FR2,FR2]
]
pixels = DAEMON
else:
race_ep = 'Dark Lord'
type_ep = 'None'
hair_color_ep = 'None'
haircut_ep = 'None'
hair_prop_ep = 'None'
eyes_prop_ep = 'None'
blemishe_ep = 'None'
eyes_color_ep = 'None'
facial_hair_ep = 'None'
mouth_prop_ep = 'None'
mouth_ep = 'None'
tooth_color_ep = 'None'
nose_ep = 'None'
neck_ep = 'None'
ears_ep = 'None'
skin_ep = 'None'
ears = none
hair = none
hair_prop = none
neck = none
blemishes = none
#tooth color
mouth = none
facial_hair = none
rod = none
mouth_prop = none
#eye color
eyes = none
nose = none
SK1 = (113,113,113)
SK2 = (160,160,160)
SK3 = (223,223,223)
skin_ep = 'None'
seed(b)
c=randint(0,1000000)
if c > 750000:
ears = EARS_0
ears_ep = 'Earring'
else:
ears = none
ears_ep = 'None'
seed(c)
d=randint(0,1000000)
if d > 900000:
neck = GoldChain_1
neck_ep = 'Gold Chain'
elif d > 820000:
neck = SilverChain_1
neck_ep = 'Silver Chain'
elif d > 800000:
neck = RING_1
neck_ep = 'Ring Onchain'
else:
neck = none
neck_ep = 'None'
seed(d)
e=randint(0,1000000)
if e > 900000:
mouth_prop = CIGARETTE
mouth_prop_ep = 'Cigarette'
elif e > 840000:
mouth_prop = PIPE
mouth_prop_ep = 'Pipe'
elif e > 800000:
mouth_prop = VAPE
mouth_prop_ep = 'Vape'
else:
mouth_prop = none
mouth_prop_ep = 'None'
seed(e)
f=randint(0,1000000)
if f > 400000:
EY1 = (255,255,255)
eyes_color_ep = 'White'
elif f > 300000:
EY1 = (214,92,26)
eyes_color_ep = "Orange"
elif f > 200000:
EY1 = (176,61,133)
eyes_color_ep = "Purple"
elif f > 100000:
EY1 = (255,255,0)
eyes_color_ep = 'Yellow'
else:
EY1 = (255,0,0)
eyes_color_ep = 'Red'
DARK_LORD=[
[FR2,FR2,FR2,FR2,FR2,FR2,FR1,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR1,FR2,FR2,FR2,FR2,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,FR1,BG1,BG1,BG1,BG1,BG1,FR1,BG1,BG1,BG1,BG1,BG1,FR1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,FR1,FR1,BG1,BG1,BG1,BG1,FR1,BG1,BG1,BG1,BG1,FR1,FR1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,FR1,FR1,BG1,FR1,BG1,BG1,FR1,BG1,BG1,FR1,BG1,FR1,FR1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,FR1,FR1,BG1,FR1,BG1,BG1,FR1,BG1,BG1,FR1,BG1,FR1,FR1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,FR1,FR1,BG1,FR1,FR1,BG1,FR1,BG1,FR1,FR1,BG1,FR1,FR1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,FR1,FR1,FR1,FR1,FR1,FR1,FR1,FR1,FR1,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,FR1,FR1,FR1,FR1,FR1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,SK1,FR1,FR1,FR1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,FR1,EY1,SK1,SK1,FR1,SK1,FR1,EY1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,EY1,FR1,SK1,SK1,FR1,SK1,EY1,FR1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,SK3,FR1,SK1,SK1,SK1,SK1,FR1,SK1,SK1,SK1,SK1,FR1,SK3,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,SK3,FR1,SK2,SK1,SK1,SK1,FR1,SK1,SK1,SK1,SK2,FR1,SK3,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,SK3,SK1,SK2,SK1,SK1,FR1,SK1,SK1,SK2,SK1,SK3,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK3,SK1,SK2,SK1,FR1,SK1,SK2,SK1,SK3,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK3,SK1,SK2,FR1,SK2,SK1,SK3,SK1,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK3,SK1,SK2,FR1,SK2,SK1,SK3,SK1,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK3,SK1,SK2,FR1,SK2,SK1,SK3,SK1,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK3,SK1,SK2,SK1,FR1,SK1,SK2,SK1,SK3,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,SK3,FR1,SK1,SK2,SK1,FR1,SK1,SK2,SK1,SK1,SK3,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,SK2,SK1,SK1,FR1,SK1,SK1,SK2,SK1,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,SK2,BG1,FR1,FR1,FR1,FR1,FR1,SK1,SK2,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,BG1,FR1,SK1,SK1,SK1,FR1,BG1,BG1,BG1,BG1,BG1,FR2],
[FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR2,FR1,SK1,SK1,SK1,FR1,FR2,FR2,FR2,FR2,FR2,FR2]
]
pixels = DARK_LORD
newtraitcombo1 = createCombo2()
traits2.append(newtraitcombo1)
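# createCombo2() is assumed (defined earlier in the script) to bundle the *_ep
# trait labels set above into one trait dict for this token.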
######################
# store each attribute layer in a pandas DataFrame for this loop iteration
df = pd.DataFrame(pixels)
df2 = pd.DataFrame(ears)
df3 = pd.DataFrame(hair)
df31 = pd.DataFrame(hair_prop)
df4 = pd.DataFrame(neck)
df5 = pd.DataFrame(blemishes)
df6 = pd.DataFrame(facial_hair)
df7 = pd.DataFrame(mouth)
df8 = pd.DataFrame(rod)
df9 = pd.DataFrame(mouth_prop)
df10 = pd.DataFrame(eyes)
df11 = pd.DataFrame(nose)
# superimpose the attribute layers on the base pixels to compose one MidPunk
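# DataFrame.where keeps the layer's own values where the mask (layer != 0) holds
# and falls back to the canvas df elsewhere, e.g. base [A,B,C] + layer [0,X,0]
# composes to [A,X,C] -- so non-zero trait pixels overwrite the base portrait.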
df = df2.where(df2!=0, other=df)
df = df4.where(df4!=0, other=df)
df = df3.where(df3!=0, other=df)
df = df31.where(df31!=0, other=df)
df = df5.where(df5!=0, other=df)
df = df6.where(df6!=0, other=df)
df = df7.where(df7!=0, other=df)
df = df8.where(df8!=0, other=df)
df = df9.where(df9!=0, other=df)
df = df10.where(df10!=0, other=df)
df = df11.where(df11!=0, other=df)
# convert the RGB values into a PNG with the Pillow library
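# Each DataFrame cell holds an (R,G,B) tuple; round-tripping through tolist()
# lets numpy expand those tuples into a (height, width, 3) uint8 array.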
array = np.asarray(df)
pixels = array.tolist()
# convert the pixels into an array using numpy
array = np.array(pixels, dtype=np.uint8)
# use PIL to create an image from the new array of pixels
new_image = Image.fromarray(array)
new_image = new_image.resize(dimensions, resample=0)
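# resample=0 is PIL.Image.NEAREST, which keeps the pixel-art edges crisp
# when upscaling to the target dimensions.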
imgname = dirname + '/bird_images/' + str(jpeg) + '.png'
new_image.save(imgname)
#
for i, item in enumerate(traits2):
item["tokenId"] = i
# GET TRAIT COUNTS
racescounts = dict.fromkeys(Races, 0)
typescounts = dict.fromkeys(Types, 0)
skinscounts = dict.fromkeys(Skins, 0)
earscounts = dict.fromkeys(Ears, 0)
haircolorscounts = dict.fromkeys(Haircolors, 0)
haircutscounts = dict.fromkeys(Haircuts, 0)
hairpropscounts = dict.fromkeys(Hairprops, 0)
neckscounts = dict.fromkeys(Necks, 0)
facialhairscounts = dict.fromkeys(Facialhairs, 0)
mouthpropscounts = dict.fromkeys(Mouthprops, 0)
eyecolorscounts = dict.fromkeys(Eyecolors, 0)
eyepropscounts = dict.fromkeys(Eyeprops, 0)
nosescounts = dict.fromkeys(Noses, 0)
blemishescounts = dict.fromkeys(Blemishes, 0)
toothcolorscounts = dict.fromkeys(Toothcolors, 0)
mouthscounts = dict.fromkeys(Mouths, 0)
for banana in traits2:
racescounts[banana["Race"]] += 1
typescounts[banana["Type"]] += 1
skinscounts[banana["Skin Tone"]] += 1
earscounts[banana["Ears"]] += 1
haircolorscounts[banana["Hair Color"]] += 1
haircutscounts[banana["Haircut"]] += 1
hairpropscounts[banana["Hair Prop"]] += 1
neckscounts[banana["Neck"]] += 1
facialhairscounts[banana["Facial Hair"]] += 1
mouthpropscounts[banana["Mouth Prop"]] += 1
eyecolorscounts[banana["Eyes Color"]] += 1
eyepropscounts[banana["Eyes Prop"]] += 1
nosescounts[banana["Nose"]] += 1
blemishescounts[banana["Blemishe"]] += 1
toothcolorscounts[banana["Tooth Color"]] += 1
mouthscounts[banana["Mouth"]] += 1
print("race:", racescounts)
print("type:", typescounts)
print("skin:", skinscounts)
print("ears:", earscounts)
print("haircolor:", haircolorscounts)
print("haircut:", haircutscounts)
print("hairprop:", hairpropscounts)
print("neck:", neckscounts)
print("facialhair:", facialhairscounts)
print("mouthprop:", mouthpropscounts)
print("eyecolor:", eyecolorscounts)
print("eyeprop:", eyepropscounts)
print("nose:", nosescounts)
print("blemishe:", blemishescounts)
print("tooth:", toothcolorscounts)
print("mouth:", mouthscounts)
# READ METADATA IF YOU ALREADY HAVE A JSON FILE WITH ALL THE PICTURE HASHES
# IF NOT, JUST COMMENT OUT ALL THE CODE BELOW
# To obtain the JSON file with all the picture hashes, first run the code without the section below,
# then upload the folder with all your pictures to IPFS and collect the hash of each picture.
# Create a "jsonlocation" file like the one in the repo and run the script again:
# you will obtain a JSON metadata file for each MidPunk.
with open("jsonlocation", 'r') as f:
hashes = json.load(f)
hashes2=[]
for k, v in hashes.items():
hashes2.append(v)
for item in traits2:
trait1=[]
for key in item.keys():
trait1.append(key)
#print(trait1)
trait2=[]
for item in traits2:
for value in item.values():
trait2.append(value)
#print(trait2)
def metadata(n):
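# Each token occupies 17 consecutive slots in the flat trait2 list; a..s are
# the per-trait offsets for token n, and t indexes the token's IPFS hash.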
a=0+(17*n)
b=1+(17*n)
c=2+(17*n)
d=3+(17*n)
e=4+(17*n)
f=5+(17*n)
g=6+(17*n)
h=7+(17*n)
i=8+(17*n)
j=9+(17*n)
k=10+(17*n)
l=11+(17*n)
m=12+(17*n)
o=13+(17*n)
p=14+(17*n)
q=15+(17*n)
s=16+(17*n)
t=n
metadata = {
"name": "MidPunk #" + str(trait2[s]),
"description": "Middle Punks NFT, The Return of The Punks!",
"tokenId" : trait2[s],
"image": "https://gateway.pinata.cloud/ipfs/" + str(hashes2[t]),
"external_url":"https://www.middlepunks.com",
"animation_url":"https://ipfs.io/ipfs/QmbUoshVaVxBhZuQ7uy24LTZHXjhpdtHd9UjzaHSd4j37c",
"attributes": [
{
"trait_type": "Race",
"value": trait2[a]
},
{
"trait_type": "Type",
"value": trait2[b]
},
{
"trait_type": "Skin Tone",
"value": trait2[c]
},
{
"trait_type": "Ears",
"value": trait2[d]
},
{
"trait_type": "Hair Color",
"value": trait2[e]
},
{
"trait_type": "Haircut",
"value": trait2[f]
},
{
"trait_type": "Hair Prop",
"value": trait2[g]
},
{
"trait_type": "Neck",
"value": trait2[h]
},
{
"trait_type": "Facial Hair",
"value": trait2[i]
},
{
"trait_type": "Mouth Prop",
"value": trait2[j]
},
{
"trait_type": "Eyes Color",
"value": trait2[k]
},
{
"trait_type": "Eyes Prop",
"value": trait2[l]
},
{
"trait_type": "Nose",
"value": trait2[m]
},
{
"trait_type": "Blemishe",
"value": trait2[o]
},
{
"trait_type": "Tooth Color",
"value": trait2[p]
},
{
"trait_type": "Mouth",
"value": trait2[q]
},
]
}
return metadata
# write one metadata JSON file per token; the filename is the token id
for i in range(10000):
with open(dirname + '/midpunks_json/' + str(i), "w") as outfile:
json.dump(metadata(i), outfile, indent=4)
| 43.449948 | 625 | 0.458678 | 272,995 | 1,039,540 | 1.720416 | 0.003557 | 0.788387 | 1.15889 | 1.515378 | 0.98124 | 0.980797 | 0.980488 | 0.980156 | 0.980035 | 0.979453 | 0 | 0.353658 | 0.266607 | 1,039,540 | 23,925 | 626 | 43.449948 | 0.262384 | 0.008109 | 0 | 0.943416 | 1 | 0 | 0.023691 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.000134 | false | 0 | 0.000401 | 0 | 0.000669 | 0.000803 | 0 | 0 | 1 | null | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 14 |
12929156bac1b72360bee8cf2a62c0b02b6aa731 | 5,533 | py | Python | SmartFoxServer_PRO_1.6.6/Server/lib/Lib/test/test_bisect.py | ChisdealHD/DetlasWorldLinux | 336465a4df1a48c9a273329fc7a09d8099c4e4d5 | [
"MIT"
] | 8 | 2016-11-24T09:38:31.000Z | 2021-04-23T13:04:48.000Z | SmartFoxServer_PRO_1.6.6/Server/lib/Lib/test/test_bisect.py | ChisdealHD/DetlasWorldLinux | 336465a4df1a48c9a273329fc7a09d8099c4e4d5 | [
"MIT"
] | 4 | 2018-02-22T07:42:13.000Z | 2021-12-13T10:53:09.000Z | SmartFoxServer_PRO_1.6.6/Server/lib/Lib/test/test_bisect.py | ChisdealHD/DetlasWorldLinux | 336465a4df1a48c9a273329fc7a09d8099c4e4d5 | [
"MIT"
] | 4 | 2015-09-09T11:54:37.000Z | 2018-05-26T05:08:14.000Z | from test_support import TestFailed
import bisect
import sys
nerrors = 0
def check_bisect(func, list, elt, expected):
global nerrors
got = func(list, elt)
if got != expected:
print >> sys.stderr, \
"expected %s(%s, %s) -> %s, but got %s" % (func.__name__,
list,
elt,
expected,
got)
nerrors += 1
# XXX optional slice arguments need tests.
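# Reminder of the semantics under test: for runs of equal elements,
# bisect_right returns the insertion point *after* the run and
# bisect_left returns the point *before* it.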
check_bisect(bisect.bisect_right, [], 1, 0)
check_bisect(bisect.bisect_right, [1], 0, 0)
check_bisect(bisect.bisect_right, [1], 1, 1)
check_bisect(bisect.bisect_right, [1], 2, 1)
check_bisect(bisect.bisect_right, [1, 1], 0, 0)
check_bisect(bisect.bisect_right, [1, 1], 1, 2)
check_bisect(bisect.bisect_right, [1, 1], 2, 2)
check_bisect(bisect.bisect_right, [1, 1, 1], 0, 0)
check_bisect(bisect.bisect_right, [1, 1, 1], 1, 3)
check_bisect(bisect.bisect_right, [1, 1, 1], 2, 3)
check_bisect(bisect.bisect_right, [1, 1, 1, 1], 0, 0)
check_bisect(bisect.bisect_right, [1, 1, 1, 1], 1, 4)
check_bisect(bisect.bisect_right, [1, 1, 1, 1], 2, 4)
check_bisect(bisect.bisect_right, [1, 2], 0, 0)
check_bisect(bisect.bisect_right, [1, 2], 1, 1)
check_bisect(bisect.bisect_right, [1, 2], 1.5, 1)
check_bisect(bisect.bisect_right, [1, 2], 2, 2)
check_bisect(bisect.bisect_right, [1, 2], 3, 2)
check_bisect(bisect.bisect_right, [1, 1, 2, 2], 0, 0)
check_bisect(bisect.bisect_right, [1, 1, 2, 2], 1, 2)
check_bisect(bisect.bisect_right, [1, 1, 2, 2], 1.5, 2)
check_bisect(bisect.bisect_right, [1, 1, 2, 2], 2, 4)
check_bisect(bisect.bisect_right, [1, 1, 2, 2], 3, 4)
check_bisect(bisect.bisect_right, [1, 2, 3], 0, 0)
check_bisect(bisect.bisect_right, [1, 2, 3], 1, 1)
check_bisect(bisect.bisect_right, [1, 2, 3], 1.5, 1)
check_bisect(bisect.bisect_right, [1, 2, 3], 2, 2)
check_bisect(bisect.bisect_right, [1, 2, 3], 2.5, 2)
check_bisect(bisect.bisect_right, [1, 2, 3], 3, 3)
check_bisect(bisect.bisect_right, [1, 2, 3], 4, 3)
check_bisect(bisect.bisect_right, [1, 2, 2, 3, 3, 3, 4, 4, 4, 4], 0, 0)
check_bisect(bisect.bisect_right, [1, 2, 2, 3, 3, 3, 4, 4, 4, 4], 1, 1)
check_bisect(bisect.bisect_right, [1, 2, 2, 3, 3, 3, 4, 4, 4, 4], 1.5, 1)
check_bisect(bisect.bisect_right, [1, 2, 2, 3, 3, 3, 4, 4, 4, 4], 2, 3)
check_bisect(bisect.bisect_right, [1, 2, 2, 3, 3, 3, 4, 4, 4, 4], 2.5, 3)
check_bisect(bisect.bisect_right, [1, 2, 2, 3, 3, 3, 4, 4, 4, 4], 3, 6)
check_bisect(bisect.bisect_right, [1, 2, 2, 3, 3, 3, 4, 4, 4, 4], 3.5, 6)
check_bisect(bisect.bisect_right, [1, 2, 2, 3, 3, 3, 4, 4, 4, 4], 4, 10)
check_bisect(bisect.bisect_right, [1, 2, 2, 3, 3, 3, 4, 4, 4, 4], 5, 10)
check_bisect(bisect.bisect_left, [], 1, 0)
check_bisect(bisect.bisect_left, [1], 0, 0)
check_bisect(bisect.bisect_left, [1], 1, 0)
check_bisect(bisect.bisect_left, [1], 2, 1)
check_bisect(bisect.bisect_left, [1, 1], 0, 0)
check_bisect(bisect.bisect_left, [1, 1], 1, 0)
check_bisect(bisect.bisect_left, [1, 1], 2, 2)
check_bisect(bisect.bisect_left, [1, 1, 1], 0, 0)
check_bisect(bisect.bisect_left, [1, 1, 1], 1, 0)
check_bisect(bisect.bisect_left, [1, 1, 1], 2, 3)
check_bisect(bisect.bisect_left, [1, 1, 1, 1], 0, 0)
check_bisect(bisect.bisect_left, [1, 1, 1, 1], 1, 0)
check_bisect(bisect.bisect_left, [1, 1, 1, 1], 2, 4)
check_bisect(bisect.bisect_left, [1, 2], 0, 0)
check_bisect(bisect.bisect_left, [1, 2], 1, 0)
check_bisect(bisect.bisect_left, [1, 2], 1.5, 1)
check_bisect(bisect.bisect_left, [1, 2], 2, 1)
check_bisect(bisect.bisect_left, [1, 2], 3, 2)
check_bisect(bisect.bisect_left, [1, 1, 2, 2], 0, 0)
check_bisect(bisect.bisect_left, [1, 1, 2, 2], 1, 0)
check_bisect(bisect.bisect_left, [1, 1, 2, 2], 1.5, 2)
check_bisect(bisect.bisect_left, [1, 1, 2, 2], 2, 2)
check_bisect(bisect.bisect_left, [1, 1, 2, 2], 3, 4)
check_bisect(bisect.bisect_left, [1, 2, 3], 0, 0)
check_bisect(bisect.bisect_left, [1, 2, 3], 1, 0)
check_bisect(bisect.bisect_left, [1, 2, 3], 1.5, 1)
check_bisect(bisect.bisect_left, [1, 2, 3], 2, 1)
check_bisect(bisect.bisect_left, [1, 2, 3], 2.5, 2)
check_bisect(bisect.bisect_left, [1, 2, 3], 3, 2)
check_bisect(bisect.bisect_left, [1, 2, 3], 4, 3)
check_bisect(bisect.bisect_left, [1, 2, 2, 3, 3, 3, 4, 4, 4, 4], 0, 0)
check_bisect(bisect.bisect_left, [1, 2, 2, 3, 3, 3, 4, 4, 4, 4], 1, 0)
check_bisect(bisect.bisect_left, [1, 2, 2, 3, 3, 3, 4, 4, 4, 4], 1.5, 1)
check_bisect(bisect.bisect_left, [1, 2, 2, 3, 3, 3, 4, 4, 4, 4], 2, 1)
check_bisect(bisect.bisect_left, [1, 2, 2, 3, 3, 3, 4, 4, 4, 4], 2.5, 3)
check_bisect(bisect.bisect_left, [1, 2, 2, 3, 3, 3, 4, 4, 4, 4], 3, 3)
check_bisect(bisect.bisect_left, [1, 2, 2, 3, 3, 3, 4, 4, 4, 4], 3.5, 6)
check_bisect(bisect.bisect_left, [1, 2, 2, 3, 3, 3, 4, 4, 4, 4], 4, 6)
check_bisect(bisect.bisect_left, [1, 2, 2, 3, 3, 3, 4, 4, 4, 4], 5, 10)
def check_insort(n):
global nerrors
from random import choice
digits = "0123456789"
raw = []
insorted = []
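# Even digits go through insort_left and odd digits through insort_right;
# either way the incrementally built list must equal a full sort of the input.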
for i in range(n):
digit = choice(digits)
raw.append(digit)
if digit in "02468":
f = bisect.insort_left
else:
f = bisect.insort_right
f(insorted, digit)
sorted = raw[:]
sorted.sort()
if sorted == insorted:
return
print >> sys.stderr, "insort test failed: raw %s got %s" % (raw, insorted)
nerrors += 1
check_insort(500)
if nerrors:
raise TestFailed("%d errors in test_bisect" % nerrors)
| 43.226563 | 78 | 0.617929 | 1,032 | 5,533 | 3.151163 | 0.061047 | 0.575646 | 0.407749 | 0.551661 | 0.817958 | 0.817036 | 0.816728 | 0.78198 | 0.739852 | 0.636839 | 0 | 0.120911 | 0.198807 | 5,533 | 127 | 79 | 43.566929 | 0.612678 | 0.007229 | 0 | 0.051282 | 0 | 0 | 0.019851 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.017094 | false | 0 | 0.042735 | 0 | 0.068376 | 0.017094 | 0 | 0 | 0 | null | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 9 |
12e2914c63b36e679c54382c61cc4291f9c398ca | 81 | py | Python | qittle/types/__init__.py | muffleo/qittle | 6658e11eae9e6d83bcf0e930803c2f41abd3f4a0 | [
"MIT"
] | 2 | 2020-09-15T19:48:13.000Z | 2020-09-16T10:26:17.000Z | qittle/types/__init__.py | cyanlabs-org/qittle | 6658e11eae9e6d83bcf0e930803c2f41abd3f4a0 | [
"MIT"
] | 2 | 2021-05-04T17:15:28.000Z | 2021-05-04T17:20:09.000Z | qittle/types/__init__.py | cyanlabs-org/qittle | 6658e11eae9e6d83bcf0e930803c2f41abd3f4a0 | [
"MIT"
] | null | null | null | from qittle.types.responses import hook
from qittle.types.responses import key
| 27 | 40 | 0.82716 | 12 | 81 | 5.583333 | 0.583333 | 0.298507 | 0.447761 | 0.716418 | 0.895522 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.123457 | 81 | 2 | 41 | 40.5 | 0.943662 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 9 |
4277ebadf601c7820f3b62f88cd848ae7af9082b | 214,710 | py | Python | clients/client/python/ory_client/api/default_api.py | simoneromano96/sdk | a6113d0daefbbb803790297e4b242d4c7cbbcb22 | [
"Apache-2.0"
] | null | null | null | clients/client/python/ory_client/api/default_api.py | simoneromano96/sdk | a6113d0daefbbb803790297e4b242d4c7cbbcb22 | [
"Apache-2.0"
] | null | null | null | clients/client/python/ory_client/api/default_api.py | simoneromano96/sdk | a6113d0daefbbb803790297e4b242d4c7cbbcb22 | [
"Apache-2.0"
] | null | null | null | """
Ory APIs
Documentation for all public and administrative Ory APIs. Administrative APIs can only be accessed with a valid Personal Access Token. Public APIs are mostly used in browsers. # noqa: E501
The version of the OpenAPI document: v0.0.1-alpha.9
Contact: support@ory.sh
Generated by: https://openapi-generator.tech
"""
import re # noqa: F401
import sys # noqa: F401
from ory_client.api_client import ApiClient, Endpoint as _Endpoint
from ory_client.model_utils import ( # noqa: F401
check_allowed_values,
check_validations,
date,
datetime,
file_type,
none_type,
validate_and_convert_types
)
from ory_client.model.create_identity import CreateIdentity
from ory_client.model.create_recovery_link import CreateRecoveryLink
from ory_client.model.generic_error import GenericError
from ory_client.model.identity import Identity
from ory_client.model.inline_response200 import InlineResponse200
from ory_client.model.inline_response2001 import InlineResponse2001
from ory_client.model.inline_response503 import InlineResponse503
from ory_client.model.json_error import JsonError
from ory_client.model.login_flow import LoginFlow
from ory_client.model.login_via_api_response import LoginViaApiResponse
from ory_client.model.recovery_flow import RecoveryFlow
from ory_client.model.recovery_link import RecoveryLink
from ory_client.model.registration_flow import RegistrationFlow
from ory_client.model.registration_via_api_response import RegistrationViaApiResponse
from ory_client.model.revoke_session import RevokeSession
from ory_client.model.self_service_error_container import SelfServiceErrorContainer
from ory_client.model.session import Session
from ory_client.model.settings_flow import SettingsFlow
from ory_client.model.settings_via_api_response import SettingsViaApiResponse
from ory_client.model.submit_self_service_login_flow import SubmitSelfServiceLoginFlow
from ory_client.model.submit_self_service_recovery_flow_with_link_method import SubmitSelfServiceRecoveryFlowWithLinkMethod
from ory_client.model.submit_self_service_registration_flow import SubmitSelfServiceRegistrationFlow
from ory_client.model.submit_self_service_settings_flow import SubmitSelfServiceSettingsFlow
from ory_client.model.update_identity import UpdateIdentity
from ory_client.model.verification_flow import VerificationFlow
class DefaultApi(object):
"""NOTE: This class is auto generated by OpenAPI Generator
Ref: https://openapi-generator.tech
Do not edit the class manually.
"""
def __init__(self, api_client=None):
if api_client is None:
api_client = ApiClient()
self.api_client = api_client
def __create_identity_admin(
self,
**kwargs
):
"""Create an Identity # noqa: E501
This endpoint creates an identity. It is NOT possible to set an identity's credentials (password, ...) using this method! A way to achieve that will be introduced in the future. Learn how identities work in [Ory Kratos' User And Identity Model Documentation](https://www.ory.sh/docs/next/kratos/concepts/identity-user-model). # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.create_identity_admin(async_req=True)
>>> result = thread.get()
Keyword Args:
create_identity (CreateIdentity): [optional]
_return_http_data_only (bool): return the response data only,
without status code and headers. Default is True.
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (float/tuple): timeout setting for this request. If one
number is provided, it will be the total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
async_req (bool): execute request asynchronously
Returns:
Identity
If the method is called asynchronously, returns the request
thread.
"""
kwargs['async_req'] = kwargs.get(
'async_req', False
)
kwargs['_return_http_data_only'] = kwargs.get(
'_return_http_data_only', True
)
kwargs['_preload_content'] = kwargs.get(
'_preload_content', True
)
kwargs['_request_timeout'] = kwargs.get(
'_request_timeout', None
)
kwargs['_check_input_type'] = kwargs.get(
'_check_input_type', True
)
kwargs['_check_return_type'] = kwargs.get(
'_check_return_type', True
)
kwargs['_host_index'] = kwargs.get('_host_index')
return self.call_with_http_info(**kwargs)
self.create_identity_admin = _Endpoint(
settings={
'response_type': (Identity,),
'auth': [
'oryToken'
],
'endpoint_path': '/api/kratos/admin/identities',
'operation_id': 'create_identity_admin',
'http_method': 'POST',
'servers': None,
},
params_map={
'all': [
'create_identity',
],
'required': [],
'nullable': [
],
'enum': [
],
'validation': [
]
},
root_map={
'validations': {
},
'allowed_values': {
},
'openapi_types': {
'create_identity':
(CreateIdentity,),
},
'attribute_map': {
},
'location_map': {
'create_identity': 'body',
},
'collection_format_map': {
}
},
headers_map={
'accept': [
'application/json'
],
'content_type': [
'application/json'
]
},
api_client=api_client,
callable=__create_identity_admin
)
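# Hedged usage sketch (editor's addition, not part of the generated client):
# exercising create_identity_admin as documented above. Assumes the ory-client
# package is installed and a reachable Ory host; the host URL and the
# CreateIdentity fields below are illustrative assumptions, not confirmed values.
#
#   from ory_client.api_client import ApiClient
#   from ory_client.configuration import Configuration
#   from ory_client.api.default_api import DefaultApi
#   from ory_client.model.create_identity import CreateIdentity
#
#   config = Configuration(host="https://example.ory.host")   # assumed host
#   api = DefaultApi(ApiClient(config))
#   body = CreateIdentity(schema_id="default",                # assumed fields
#                         traits={"email": "user@example.com"})
#   identity = api.create_identity_admin(create_identity=body)           # sync
#   thread = api.create_identity_admin(create_identity=body, async_req=True)
#   identity = thread.get()                                   # blocks for result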
def __create_recovery_link_admin(
self,
**kwargs
):
"""Create a Recovery Link # noqa: E501
This endpoint creates a recovery link which should be given to the user in order for them to recover (or activate) their account. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.create_recovery_link_admin(async_req=True)
>>> result = thread.get()
Keyword Args:
create_recovery_link (CreateRecoveryLink): [optional]
_return_http_data_only (bool): return the response data only,
without status code and headers. Default is True.
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (float/tuple): timeout setting for this request. If one
number is provided, it will be the total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
async_req (bool): execute request asynchronously
Returns:
RecoveryLink
If the method is called asynchronously, returns the request
thread.
"""
kwargs['async_req'] = kwargs.get(
'async_req', False
)
kwargs['_return_http_data_only'] = kwargs.get(
'_return_http_data_only', True
)
kwargs['_preload_content'] = kwargs.get(
'_preload_content', True
)
kwargs['_request_timeout'] = kwargs.get(
'_request_timeout', None
)
kwargs['_check_input_type'] = kwargs.get(
'_check_input_type', True
)
kwargs['_check_return_type'] = kwargs.get(
'_check_return_type', True
)
kwargs['_host_index'] = kwargs.get('_host_index')
return self.call_with_http_info(**kwargs)
self.create_recovery_link_admin = _Endpoint(
settings={
'response_type': (RecoveryLink,),
'auth': [
'oryToken'
],
'endpoint_path': '/api/kratos/admin/recovery/link',
'operation_id': 'create_recovery_link_admin',
'http_method': 'POST',
'servers': None,
},
params_map={
'all': [
'create_recovery_link',
],
'required': [],
'nullable': [
],
'enum': [
],
'validation': [
]
},
root_map={
'validations': {
},
'allowed_values': {
},
'openapi_types': {
'create_recovery_link':
(CreateRecoveryLink,),
},
'attribute_map': {
},
'location_map': {
'create_recovery_link': 'body',
},
'collection_format_map': {
}
},
headers_map={
'accept': [
'application/json'
],
'content_type': [
'application/json'
]
},
api_client=api_client,
callable=__create_recovery_link_admin
)
def __delete_identity_admin(
self,
id,
**kwargs
):
"""Delete an Identity # noqa: E501
Calling this endpoint irrecoverably and permanently deletes the identity given its ID. This action cannot be undone. This endpoint returns 204 when the identity was deleted or when the identity was not found, in which case it is assumed that it has been deleted already. Learn how identities work in [Ory Kratos' User And Identity Model Documentation](https://www.ory.sh/docs/next/kratos/concepts/identity-user-model). # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.delete_identity_admin(id, async_req=True)
>>> result = thread.get()
Args:
id (str): ID is the identity's ID.
Keyword Args:
_return_http_data_only (bool): return the response data only,
without status code and headers. Default is True.
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (float/tuple): timeout setting for this request. If one
number is provided, it will be the total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
async_req (bool): execute request asynchronously
Returns:
None
If the method is called asynchronously, returns the request
thread.
"""
kwargs['async_req'] = kwargs.get(
'async_req', False
)
kwargs['_return_http_data_only'] = kwargs.get(
'_return_http_data_only', True
)
kwargs['_preload_content'] = kwargs.get(
'_preload_content', True
)
kwargs['_request_timeout'] = kwargs.get(
'_request_timeout', None
)
kwargs['_check_input_type'] = kwargs.get(
'_check_input_type', True
)
kwargs['_check_return_type'] = kwargs.get(
'_check_return_type', True
)
kwargs['_host_index'] = kwargs.get('_host_index')
kwargs['id'] = \
id
return self.call_with_http_info(**kwargs)
self.delete_identity_admin = _Endpoint(
settings={
'response_type': None,
'auth': [
'oryToken'
],
'endpoint_path': '/api/kratos/admin/identities/{id}',
'operation_id': 'delete_identity_admin',
'http_method': 'DELETE',
'servers': None,
},
params_map={
'all': [
'id',
],
'required': [
'id',
],
'nullable': [
],
'enum': [
],
'validation': [
]
},
root_map={
'validations': {
},
'allowed_values': {
},
'openapi_types': {
'id':
(str,),
},
'attribute_map': {
'id': 'id',
},
'location_map': {
'id': 'path',
},
'collection_format_map': {
}
},
headers_map={
'accept': [
'application/json'
],
'content_type': [],
},
api_client=api_client,
callable=__delete_identity_admin
)
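# Hedged sketch (editor's addition): per the docstring above, the identity ID
# is passed positionally and the call returns None both when the identity was
# deleted and when it was already gone (HTTP 204). The ID value is a
# placeholder assumption; reuses the `api` client from the earlier sketch.
#
#   api.delete_identity_admin("<identity-id>")                 # sync, returns None
#   api.delete_identity_admin("<identity-id>", async_req=True).get()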
def __get_identity_admin(
self,
id,
**kwargs
):
"""Get an Identity # noqa: E501
Learn how identities work in [Ory Kratos' User And Identity Model Documentation](https://www.ory.sh/docs/next/kratos/concepts/identity-user-model). # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_identity_admin(id, async_req=True)
>>> result = thread.get()
Args:
id (str): ID must be set to the ID of the identity you want to get.
Keyword Args:
_return_http_data_only (bool): return the response data only,
without status code and headers. Default is True.
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (float/tuple): timeout setting for this request. If one
number is provided, it will be the total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
async_req (bool): execute request asynchronously
Returns:
Identity
If the method is called asynchronously, returns the request
thread.
"""
kwargs['async_req'] = kwargs.get(
'async_req', False
)
kwargs['_return_http_data_only'] = kwargs.get(
'_return_http_data_only', True
)
kwargs['_preload_content'] = kwargs.get(
'_preload_content', True
)
kwargs['_request_timeout'] = kwargs.get(
'_request_timeout', None
)
kwargs['_check_input_type'] = kwargs.get(
'_check_input_type', True
)
kwargs['_check_return_type'] = kwargs.get(
'_check_return_type', True
)
kwargs['_host_index'] = kwargs.get('_host_index')
kwargs['id'] = \
id
return self.call_with_http_info(**kwargs)
self.get_identity_admin = _Endpoint(
settings={
'response_type': (Identity,),
'auth': [
'oryToken'
],
'endpoint_path': '/api/kratos/admin/identities/{id}',
'operation_id': 'get_identity_admin',
'http_method': 'GET',
'servers': None,
},
params_map={
'all': [
'id',
],
'required': [
'id',
],
'nullable': [
],
'enum': [
],
'validation': [
]
},
root_map={
'validations': {
},
'allowed_values': {
},
'openapi_types': {
'id':
(str,),
},
'attribute_map': {
'id': 'id',
},
'location_map': {
'id': 'path',
},
'collection_format_map': {
}
},
headers_map={
'accept': [
'application/json'
],
'content_type': [],
},
api_client=api_client,
callable=__get_identity_admin
)
def __get_schema(
self,
id,
**kwargs
):
"""get_schema # noqa: E501
Get a Traits Schema Definition # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_schema(id, async_req=True)
>>> result = thread.get()
Args:
id (str): ID must be set to the ID of the schema you want to get.
Keyword Args:
_return_http_data_only (bool): return the response data only,
without status code and headers. Default is True.
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (float/tuple): timeout setting for this request. If one
number is provided, it will be the total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
async_req (bool): execute request asynchronously
Returns:
{str: (bool, date, datetime, dict, float, int, list, str, none_type)}
If the method is called asynchronously, returns the request
thread.
"""
kwargs['async_req'] = kwargs.get(
'async_req', False
)
kwargs['_return_http_data_only'] = kwargs.get(
'_return_http_data_only', True
)
kwargs['_preload_content'] = kwargs.get(
'_preload_content', True
)
kwargs['_request_timeout'] = kwargs.get(
'_request_timeout', None
)
kwargs['_check_input_type'] = kwargs.get(
'_check_input_type', True
)
kwargs['_check_return_type'] = kwargs.get(
'_check_return_type', True
)
kwargs['_host_index'] = kwargs.get('_host_index')
kwargs['id'] = \
id
return self.call_with_http_info(**kwargs)
self.get_schema = _Endpoint(
settings={
'response_type': ({str: (bool, date, datetime, dict, float, int, list, str, none_type)},),
'auth': [],
'endpoint_path': '/api/kratos/public/schemas/{id}',
'operation_id': 'get_schema',
'http_method': 'GET',
'servers': None,
},
params_map={
'all': [
'id',
],
'required': [
'id',
],
'nullable': [
],
'enum': [
],
'validation': [
]
},
root_map={
'validations': {
},
'allowed_values': {
},
'openapi_types': {
'id':
(str,),
},
'attribute_map': {
'id': 'id',
},
'location_map': {
'id': 'path',
},
'collection_format_map': {
}
},
headers_map={
'accept': [
'application/json'
],
'content_type': [],
},
api_client=api_client,
callable=__get_schema
)
def __get_schema_admin(
self,
id,
**kwargs
):
"""get_schema_admin # noqa: E501
Get a Traits Schema Definition # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_schema_admin(id, async_req=True)
>>> result = thread.get()
Args:
id (str): ID must be set to the ID of the schema you want to get.
Keyword Args:
_return_http_data_only (bool): return the response data only,
without status code and headers. Default is True.
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (float/tuple): timeout setting for this request. If one
number is provided, it will be the total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
async_req (bool): execute request asynchronously
Returns:
{str: (bool, date, datetime, dict, float, int, list, str, none_type)}
If the method is called asynchronously, returns the request
thread.
"""
kwargs['async_req'] = kwargs.get(
'async_req', False
)
kwargs['_return_http_data_only'] = kwargs.get(
'_return_http_data_only', True
)
kwargs['_preload_content'] = kwargs.get(
'_preload_content', True
)
kwargs['_request_timeout'] = kwargs.get(
'_request_timeout', None
)
kwargs['_check_input_type'] = kwargs.get(
'_check_input_type', True
)
kwargs['_check_return_type'] = kwargs.get(
'_check_return_type', True
)
kwargs['_host_index'] = kwargs.get('_host_index')
kwargs['id'] = \
id
return self.call_with_http_info(**kwargs)
self.get_schema_admin = _Endpoint(
settings={
'response_type': ({str: (bool, date, datetime, dict, float, int, list, str, none_type)},),
'auth': [
'oryToken'
],
'endpoint_path': '/api/kratos/admin/schemas/{id}',
'operation_id': 'get_schema_admin',
'http_method': 'GET',
'servers': None,
},
params_map={
'all': [
'id',
],
'required': [
'id',
],
'nullable': [
],
'enum': [
],
'validation': [
]
},
root_map={
'validations': {
},
'allowed_values': {
},
'openapi_types': {
'id':
(str,),
},
'attribute_map': {
'id': 'id',
},
'location_map': {
'id': 'path',
},
'collection_format_map': {
}
},
headers_map={
'accept': [
'application/json'
],
'content_type': [],
},
api_client=api_client,
callable=__get_schema_admin
)
def __get_self_service_error(
self,
error,
**kwargs
):
"""Get User-Facing Self-Service Errors # noqa: E501
This endpoint returns the error associated with a user-facing self-service error. This endpoint supports stub values to help you implement the error UI: `?error=stub:500` - returns a stub 500 (Internal Server Error) error. More information can be found at [Ory Kratos User-Facing Error Documentation](https://www.ory.sh/docs/kratos/self-service/flows/user-facing-errors). # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_self_service_error(error, async_req=True)
>>> result = thread.get()
Args:
error (str): Error is the container's ID
Keyword Args:
_return_http_data_only (bool): return the response data only,
without status code and headers. Default is True.
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (float/tuple): timeout setting for this request. If one
number is provided, it will be the total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
async_req (bool): execute request asynchronously
Returns:
SelfServiceErrorContainer
If the method is called asynchronously, returns the request
thread.
"""
kwargs['async_req'] = kwargs.get(
'async_req', False
)
kwargs['_return_http_data_only'] = kwargs.get(
'_return_http_data_only', True
)
kwargs['_preload_content'] = kwargs.get(
'_preload_content', True
)
kwargs['_request_timeout'] = kwargs.get(
'_request_timeout', None
)
kwargs['_check_input_type'] = kwargs.get(
'_check_input_type', True
)
kwargs['_check_return_type'] = kwargs.get(
'_check_return_type', True
)
kwargs['_host_index'] = kwargs.get('_host_index')
kwargs['error'] = \
error
return self.call_with_http_info(**kwargs)
self.get_self_service_error = _Endpoint(
settings={
'response_type': (SelfServiceErrorContainer,),
'auth': [],
'endpoint_path': '/api/kratos/public/self-service/errors',
'operation_id': 'get_self_service_error',
'http_method': 'GET',
'servers': None,
},
params_map={
'all': [
'error',
],
'required': [
'error',
],
'nullable': [
],
'enum': [
],
'validation': [
]
},
root_map={
'validations': {
},
'allowed_values': {
},
'openapi_types': {
'error':
(str,),
},
'attribute_map': {
'error': 'error',
},
'location_map': {
'error': 'query',
},
'collection_format_map': {
}
},
headers_map={
'accept': [
'application/json'
],
'content_type': [],
},
api_client=api_client,
callable=__get_self_service_error
)
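# Hedged sketch (editor's addition): the stub value mentioned in the docstring
# lets the error UI be developed against a synthetic failure without triggering
# a real one. Reuses the `api` client from the create_identity_admin sketch.
#
#   container = api.get_self_service_error(error="stub:500")
#   # -> SelfServiceErrorContainer describing a stubbed 500 Internal Server Error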
def __get_self_service_error_admin(
self,
error,
**kwargs
):
"""Get User-Facing Self-Service Errors # noqa: E501
This endpoint returns the error associated with a user-facing self-service error. This endpoint supports stub values to help you implement the error UI: `?error=stub:500` - returns a stub 500 (Internal Server Error) error. More information can be found at [Ory Kratos User-Facing Error Documentation](https://www.ory.sh/docs/kratos/self-service/flows/user-facing-errors). # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_self_service_error_admin(error, async_req=True)
>>> result = thread.get()
Args:
error (str): Error is the container's ID
Keyword Args:
_return_http_data_only (bool): return the response data only,
without status code and headers. Default is True.
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (float/tuple): timeout setting for this request. If one
number is provided, it will be the total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
async_req (bool): execute request asynchronously
Returns:
SelfServiceErrorContainer
If the method is called asynchronously, returns the request
thread.
"""
kwargs['async_req'] = kwargs.get(
'async_req', False
)
kwargs['_return_http_data_only'] = kwargs.get(
'_return_http_data_only', True
)
kwargs['_preload_content'] = kwargs.get(
'_preload_content', True
)
kwargs['_request_timeout'] = kwargs.get(
'_request_timeout', None
)
kwargs['_check_input_type'] = kwargs.get(
'_check_input_type', True
)
kwargs['_check_return_type'] = kwargs.get(
'_check_return_type', True
)
kwargs['_host_index'] = kwargs.get('_host_index')
kwargs['error'] = \
error
return self.call_with_http_info(**kwargs)
self.get_self_service_error_admin = _Endpoint(
settings={
'response_type': (SelfServiceErrorContainer,),
'auth': [
'oryToken'
],
'endpoint_path': '/api/kratos/admin/self-service/errors',
'operation_id': 'get_self_service_error_admin',
'http_method': 'GET',
'servers': None,
},
params_map={
'all': [
'error',
],
'required': [
'error',
],
'nullable': [
],
'enum': [
],
'validation': [
]
},
root_map={
'validations': {
},
'allowed_values': {
},
'openapi_types': {
'error':
(str,),
},
'attribute_map': {
'error': 'error',
},
'location_map': {
'error': 'query',
},
'collection_format_map': {
}
},
headers_map={
'accept': [
'application/json'
],
'content_type': [],
},
api_client=api_client,
callable=__get_self_service_error_admin
)
def __get_self_service_login_flow(
self,
id,
**kwargs
):
"""Get Login Flow # noqa: E501
This endpoint returns a login flow's context with, for example, error details and other information. :::info This endpoint is EXPERIMENTAL and subject to potential breaking changes in the future. ::: More information can be found at [Ory Kratos User Login and User Registration Documentation](https://www.ory.sh/docs/next/kratos/self-service/flows/user-login-user-registration). # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_self_service_login_flow(id, async_req=True)
>>> result = thread.get()
Args:
id (str): The Login Flow ID. The value for this parameter comes from the `flow` URL query parameter sent to your application (e.g. `/login?flow=abcde`).
Keyword Args:
_return_http_data_only (bool): return the response data only,
without status code and headers. Default is True.
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (float/tuple): timeout setting for this request. If one
number is provided, it will be the total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
async_req (bool): execute request asynchronously
Returns:
LoginFlow
If the method is called asynchronously, returns the request
thread.
"""
kwargs['async_req'] = kwargs.get(
'async_req', False
)
kwargs['_return_http_data_only'] = kwargs.get(
'_return_http_data_only', True
)
kwargs['_preload_content'] = kwargs.get(
'_preload_content', True
)
kwargs['_request_timeout'] = kwargs.get(
'_request_timeout', None
)
kwargs['_check_input_type'] = kwargs.get(
'_check_input_type', True
)
kwargs['_check_return_type'] = kwargs.get(
'_check_return_type', True
)
kwargs['_host_index'] = kwargs.get('_host_index')
kwargs['id'] = \
id
return self.call_with_http_info(**kwargs)
self.get_self_service_login_flow = _Endpoint(
settings={
'response_type': (LoginFlow,),
'auth': [],
'endpoint_path': '/api/kratos/public/self-service/login/flows',
'operation_id': 'get_self_service_login_flow',
'http_method': 'GET',
'servers': None,
},
params_map={
'all': [
'id',
],
'required': [
'id',
],
'nullable': [
],
'enum': [
],
'validation': [
]
},
root_map={
'validations': {
},
'allowed_values': {
},
'openapi_types': {
'id':
(str,),
},
'attribute_map': {
'id': 'id',
},
'location_map': {
'id': 'query',
},
'collection_format_map': {
}
},
headers_map={
'accept': [
'application/json'
],
'content_type': [],
},
api_client=api_client,
callable=__get_self_service_login_flow
)
def __get_self_service_login_flow_admin(
self,
id,
**kwargs
):
"""Get Login Flow # noqa: E501
This endpoint returns a login flow's context with, for example, error details and other information. :::info This endpoint is EXPERIMENTAL and subject to potential breaking changes in the future. ::: More information can be found at [Ory Kratos User Login and User Registration Documentation](https://www.ory.sh/docs/next/kratos/self-service/flows/user-login-user-registration). # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_self_service_login_flow_admin(id, async_req=True)
>>> result = thread.get()
Args:
id (str): The Login Flow ID. The value for this parameter comes from the `flow` URL query parameter sent to your application (e.g. `/login?flow=abcde`).
Keyword Args:
_return_http_data_only (bool): return the response data only,
without status code and headers. Default is True.
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (float/tuple): timeout setting for this request. If one
number is provided, it will be the total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
async_req (bool): execute request asynchronously
Returns:
LoginFlow
If the method is called asynchronously, returns the request
thread.
"""
kwargs['async_req'] = kwargs.get(
'async_req', False
)
kwargs['_return_http_data_only'] = kwargs.get(
'_return_http_data_only', True
)
kwargs['_preload_content'] = kwargs.get(
'_preload_content', True
)
kwargs['_request_timeout'] = kwargs.get(
'_request_timeout', None
)
kwargs['_check_input_type'] = kwargs.get(
'_check_input_type', True
)
kwargs['_check_return_type'] = kwargs.get(
'_check_return_type', True
)
kwargs['_host_index'] = kwargs.get('_host_index')
kwargs['id'] = \
id
return self.call_with_http_info(**kwargs)
self.get_self_service_login_flow_admin = _Endpoint(
settings={
'response_type': (LoginFlow,),
'auth': [
'oryToken'
],
'endpoint_path': '/api/kratos/admin/self-service/login/flows',
'operation_id': 'get_self_service_login_flow_admin',
'http_method': 'GET',
'servers': None,
},
params_map={
'all': [
'id',
],
'required': [
'id',
],
'nullable': [
],
'enum': [
],
'validation': [
]
},
root_map={
'validations': {
},
'allowed_values': {
},
'openapi_types': {
'id':
(str,),
},
'attribute_map': {
'id': 'id',
},
'location_map': {
'id': 'query',
},
'collection_format_map': {
}
},
headers_map={
'accept': [
'application/json'
],
'content_type': [],
},
api_client=api_client,
callable=__get_self_service_login_flow_admin
)
def __get_self_service_recovery_flow(
self,
id,
**kwargs
):
"""Get information about a recovery flow # noqa: E501
This endpoint returns a recovery flow's context with, for example, error details and other information. More information can be found at [Ory Kratos Account Recovery Documentation](../self-service/flows/account-recovery.mdx). # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_self_service_recovery_flow(id, async_req=True)
>>> result = thread.get()
Args:
id (str): The Flow ID. The value for this parameter comes from the `flow` URL query parameter sent to your application (e.g. `/recovery?flow=abcde`).
Keyword Args:
_return_http_data_only (bool): return the response data only,
without status code and headers. Default is True.
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (float/tuple): timeout setting for this request. If one
number is provided, it will be the total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
async_req (bool): execute request asynchronously
Returns:
RecoveryFlow
If the method is called asynchronously, returns the request
thread.
"""
kwargs['async_req'] = kwargs.get(
'async_req', False
)
kwargs['_return_http_data_only'] = kwargs.get(
'_return_http_data_only', True
)
kwargs['_preload_content'] = kwargs.get(
'_preload_content', True
)
kwargs['_request_timeout'] = kwargs.get(
'_request_timeout', None
)
kwargs['_check_input_type'] = kwargs.get(
'_check_input_type', True
)
kwargs['_check_return_type'] = kwargs.get(
'_check_return_type', True
)
kwargs['_host_index'] = kwargs.get('_host_index')
kwargs['id'] = \
id
return self.call_with_http_info(**kwargs)
self.get_self_service_recovery_flow = _Endpoint(
settings={
'response_type': (RecoveryFlow,),
'auth': [],
'endpoint_path': '/api/kratos/public/self-service/recovery/flows',
'operation_id': 'get_self_service_recovery_flow',
'http_method': 'GET',
'servers': None,
},
params_map={
'all': [
'id',
],
'required': [
'id',
],
'nullable': [
],
'enum': [
],
'validation': [
]
},
root_map={
'validations': {
},
'allowed_values': {
},
'openapi_types': {
'id':
(str,),
},
'attribute_map': {
'id': 'id',
},
'location_map': {
'id': 'query',
},
'collection_format_map': {
}
},
headers_map={
'accept': [
'application/json'
],
'content_type': [],
},
api_client=api_client,
callable=__get_self_service_recovery_flow
)
def __get_self_service_recovery_flow_admin(
self,
id,
**kwargs
):
"""Get information about a recovery flow # noqa: E501
This endpoint returns a recovery flow's context with, for example, error details and other information. More information can be found at [Ory Kratos Account Recovery Documentation](../self-service/flows/account-recovery.mdx). # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_self_service_recovery_flow_admin(id, async_req=True)
>>> result = thread.get()
Args:
id (str): The Flow ID. The value for this parameter comes from the `flow` URL query parameter sent to your application (e.g. `/recovery?flow=abcde`).
Keyword Args:
_return_http_data_only (bool): return the response data only,
without status code and headers. Default is True.
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (float/tuple): timeout setting for this request. If one
number is provided, it will be the total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
async_req (bool): execute request asynchronously
Returns:
RecoveryFlow
If the method is called asynchronously, returns the request
thread.
"""
kwargs['async_req'] = kwargs.get(
'async_req', False
)
kwargs['_return_http_data_only'] = kwargs.get(
'_return_http_data_only', True
)
kwargs['_preload_content'] = kwargs.get(
'_preload_content', True
)
kwargs['_request_timeout'] = kwargs.get(
'_request_timeout', None
)
kwargs['_check_input_type'] = kwargs.get(
'_check_input_type', True
)
kwargs['_check_return_type'] = kwargs.get(
'_check_return_type', True
)
kwargs['_host_index'] = kwargs.get('_host_index')
kwargs['id'] = \
id
return self.call_with_http_info(**kwargs)
self.get_self_service_recovery_flow_admin = _Endpoint(
settings={
'response_type': (RecoveryFlow,),
'auth': [
'oryToken'
],
'endpoint_path': '/api/kratos/admin/self-service/recovery/flows',
'operation_id': 'get_self_service_recovery_flow_admin',
'http_method': 'GET',
'servers': None,
},
params_map={
'all': [
'id',
],
'required': [
'id',
],
'nullable': [
],
'enum': [
],
'validation': [
]
},
root_map={
'validations': {
},
'allowed_values': {
},
'openapi_types': {
'id':
(str,),
},
'attribute_map': {
'id': 'id',
},
'location_map': {
'id': 'query',
},
'collection_format_map': {
}
},
headers_map={
'accept': [
'application/json'
],
'content_type': [],
},
api_client=api_client,
callable=__get_self_service_recovery_flow_admin
)
def __get_self_service_registration_flow(
self,
id,
**kwargs
):
"""Get Registration Flow # noqa: E501
This endpoint returns a registration flow's context with, for example, error details and other information. :::info This endpoint is EXPERIMENTAL and subject to potential breaking changes in the future. ::: More information can be found at [Ory Kratos User Login and User Registration Documentation](https://www.ory.sh/docs/next/kratos/self-service/flows/user-login-user-registration). # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_self_service_registration_flow(id, async_req=True)
>>> result = thread.get()
Args:
id (str): The Registration Flow ID. The value for this parameter comes from the `flow` URL query parameter sent to your application (e.g. `/registration?flow=abcde`).
Keyword Args:
_return_http_data_only (bool): return the response data only,
without status code and headers. Default is True.
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (float/tuple): timeout setting for this request. If one
number is provided, it will be the total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
async_req (bool): execute request asynchronously
Returns:
RegistrationFlow
If the method is called asynchronously, returns the request
thread.
"""
kwargs['async_req'] = kwargs.get(
'async_req', False
)
kwargs['_return_http_data_only'] = kwargs.get(
'_return_http_data_only', True
)
kwargs['_preload_content'] = kwargs.get(
'_preload_content', True
)
kwargs['_request_timeout'] = kwargs.get(
'_request_timeout', None
)
kwargs['_check_input_type'] = kwargs.get(
'_check_input_type', True
)
kwargs['_check_return_type'] = kwargs.get(
'_check_return_type', True
)
kwargs['_host_index'] = kwargs.get('_host_index')
kwargs['id'] = \
id
return self.call_with_http_info(**kwargs)
self.get_self_service_registration_flow = _Endpoint(
settings={
'response_type': (RegistrationFlow,),
'auth': [],
'endpoint_path': '/api/kratos/public/self-service/registration/flows',
'operation_id': 'get_self_service_registration_flow',
'http_method': 'GET',
'servers': None,
},
params_map={
'all': [
'id',
],
'required': [
'id',
],
'nullable': [
],
'enum': [
],
'validation': [
]
},
root_map={
'validations': {
},
'allowed_values': {
},
'openapi_types': {
'id':
(str,),
},
'attribute_map': {
'id': 'id',
},
'location_map': {
'id': 'query',
},
'collection_format_map': {
}
},
headers_map={
'accept': [
'application/json'
],
'content_type': [],
},
api_client=api_client,
callable=__get_self_service_registration_flow
)
def __get_self_service_registration_flow_admin(
self,
id,
**kwargs
):
"""Get Registration Flow # noqa: E501
This endpoint returns a registration flow's context with, for example, error details and other information. :::info This endpoint is EXPERIMENTAL and subject to potential breaking changes in the future. ::: More information can be found at [Ory Kratos User Login and User Registration Documentation](https://www.ory.sh/docs/next/kratos/self-service/flows/user-login-user-registration). # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_self_service_registration_flow_admin(id, async_req=True)
>>> result = thread.get()
Args:
id (str): The Registration Flow ID. The value for this parameter comes from the `flow` URL query parameter sent to your application (e.g. `/registration?flow=abcde`).
Keyword Args:
_return_http_data_only (bool): return the response data only,
without status code and headers. Default is True.
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (float/tuple): timeout setting for this request. If one
number is provided, it will be the total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
async_req (bool): execute request asynchronously
Returns:
RegistrationFlow
If the method is called asynchronously, returns the request
thread.
"""
kwargs['async_req'] = kwargs.get(
'async_req', False
)
kwargs['_return_http_data_only'] = kwargs.get(
'_return_http_data_only', True
)
kwargs['_preload_content'] = kwargs.get(
'_preload_content', True
)
kwargs['_request_timeout'] = kwargs.get(
'_request_timeout', None
)
kwargs['_check_input_type'] = kwargs.get(
'_check_input_type', True
)
kwargs['_check_return_type'] = kwargs.get(
'_check_return_type', True
)
kwargs['_host_index'] = kwargs.get('_host_index')
kwargs['id'] = \
id
return self.call_with_http_info(**kwargs)
self.get_self_service_registration_flow_admin = _Endpoint(
settings={
'response_type': (RegistrationFlow,),
'auth': [
'oryToken'
],
'endpoint_path': '/api/kratos/admin/self-service/registration/flows',
'operation_id': 'get_self_service_registration_flow_admin',
'http_method': 'GET',
'servers': None,
},
params_map={
'all': [
'id',
],
'required': [
'id',
],
'nullable': [
],
'enum': [
],
'validation': [
]
},
root_map={
'validations': {
},
'allowed_values': {
},
'openapi_types': {
'id':
(str,),
},
'attribute_map': {
'id': 'id',
},
'location_map': {
'id': 'query',
},
'collection_format_map': {
}
},
headers_map={
'accept': [
'application/json'
],
'content_type': [],
},
api_client=api_client,
callable=__get_self_service_registration_flow_admin
)
def __get_self_service_settings_flow(
self,
id,
**kwargs
):
"""Get Settings Flow # noqa: E501
When accessing this endpoint through Ory Kratos' Public API you must ensure that either the Ory Kratos Session Cookie or the Ory Kratos Session Token is set. The public endpoint does not return 404 status codes but instead 403 or 500 to improve data privacy. You can access this endpoint without credentials when using Ory Kratos' Admin API. More information can be found at [Ory Kratos User Settings & Profile Management Documentation](../self-service/flows/user-settings). # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_self_service_settings_flow(id, async_req=True)
>>> result = thread.get()
Args:
id (str): ID is the Settings Flow ID. The value for this parameter comes from the `flow` URL query parameter sent to your application (e.g. `/settings?flow=abcde`).
Keyword Args:
x_session_token (str): The Session Token of the Identity performing the settings flow. [optional]
_return_http_data_only (bool): return the response data only,
without status code and headers. Default is True.
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (float/tuple): timeout setting for this request. If one
number is provided, it will be the total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
async_req (bool): execute request asynchronously
Returns:
SettingsFlow
If the method is called asynchronously, returns the request
thread.
"""
kwargs['async_req'] = kwargs.get(
'async_req', False
)
kwargs['_return_http_data_only'] = kwargs.get(
'_return_http_data_only', True
)
kwargs['_preload_content'] = kwargs.get(
'_preload_content', True
)
kwargs['_request_timeout'] = kwargs.get(
'_request_timeout', None
)
kwargs['_check_input_type'] = kwargs.get(
'_check_input_type', True
)
kwargs['_check_return_type'] = kwargs.get(
'_check_return_type', True
)
kwargs['_host_index'] = kwargs.get('_host_index')
kwargs['id'] = \
id
return self.call_with_http_info(**kwargs)
self.get_self_service_settings_flow = _Endpoint(
settings={
'response_type': (SettingsFlow,),
'auth': [
'sessionToken'
],
'endpoint_path': '/api/kratos/public/self-service/settings/flows',
'operation_id': 'get_self_service_settings_flow',
'http_method': 'GET',
'servers': None,
},
params_map={
'all': [
'id',
'x_session_token',
],
'required': [
'id',
],
'nullable': [
],
'enum': [
],
'validation': [
]
},
root_map={
'validations': {
},
'allowed_values': {
},
'openapi_types': {
'id':
(str,),
'x_session_token':
(str,),
},
'attribute_map': {
'id': 'id',
'x_session_token': 'X-Session-Token',
},
'location_map': {
'id': 'query',
'x_session_token': 'header',
},
'collection_format_map': {
}
},
headers_map={
'accept': [
'application/json'
],
'content_type': [],
},
api_client=api_client,
callable=__get_self_service_settings_flow
)
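# Hedged sketch (editor's addition): non-browser callers authenticate the
# public settings-flow lookup with the Ory Kratos Session Token, passed via
# the documented x_session_token keyword (sent as the X-Session-Token header).
# Flow ID and token are placeholder assumptions; reuses the `api` client.
#
#   flow = api.get_self_service_settings_flow(
#       "<flow-id>",                               # positional id, per docstring
#       x_session_token="<kratos-session-token>",
#   )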
def __get_self_service_settings_flow_admin(
self,
id,
**kwargs
):
"""Get Settings Flow # noqa: E501
When accessing this endpoint through Ory Kratos' Public API you must ensure that either the Ory Kratos Session Cookie or the Ory Kratos Session Token is set. The public endpoint does not return 404 status codes but instead 403 or 500 to improve data privacy. You can access this endpoint without credentials when using Ory Kratos' Admin API. More information can be found at [Ory Kratos User Settings & Profile Management Documentation](../self-service/flows/user-settings). # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_self_service_settings_flow_admin(id, async_req=True)
>>> result = thread.get()
Args:
id (str): ID is the Settings Flow ID. The value for this parameter comes from the `flow` URL query parameter sent to your application (e.g. `/settings?flow=abcde`).
Keyword Args:
x_session_token (str): The Session Token of the Identity performing the settings flow. [optional]
_return_http_data_only (bool): return the response data only,
without status code and headers. Default is True.
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (float/tuple): timeout setting for this request. If one
number is provided, it will be the total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
async_req (bool): execute request asynchronously
Returns:
SettingsFlow
If the method is called asynchronously, returns the request
thread.
"""
kwargs['async_req'] = kwargs.get(
'async_req', False
)
kwargs['_return_http_data_only'] = kwargs.get(
'_return_http_data_only', True
)
kwargs['_preload_content'] = kwargs.get(
'_preload_content', True
)
kwargs['_request_timeout'] = kwargs.get(
'_request_timeout', None
)
kwargs['_check_input_type'] = kwargs.get(
'_check_input_type', True
)
kwargs['_check_return_type'] = kwargs.get(
'_check_return_type', True
)
kwargs['_host_index'] = kwargs.get('_host_index')
kwargs['id'] = \
id
return self.call_with_http_info(**kwargs)
self.get_self_service_settings_flow_admin = _Endpoint(
settings={
'response_type': (SettingsFlow,),
'auth': [
'oryToken'
],
'endpoint_path': '/api/kratos/admin/self-service/settings/flows',
'operation_id': 'get_self_service_settings_flow_admin',
'http_method': 'GET',
'servers': None,
},
params_map={
'all': [
'id',
'x_session_token',
],
'required': [
'id',
],
'nullable': [
],
'enum': [
],
'validation': [
]
},
root_map={
'validations': {
},
'allowed_values': {
},
'openapi_types': {
'id':
(str,),
'x_session_token':
(str,),
},
'attribute_map': {
'id': 'id',
'x_session_token': 'X-Session-Token',
},
'location_map': {
'id': 'query',
'x_session_token': 'header',
},
'collection_format_map': {
}
},
headers_map={
'accept': [
'application/json'
],
'content_type': [],
},
api_client=api_client,
callable=__get_self_service_settings_flow_admin
)
def __get_self_service_verification_flow(
self,
id,
**kwargs
):
"""Get Verification Flow # noqa: E501
This endpoint returns a verification flow's context with, for example, error details and other information. More information can be found at [Ory Kratos Email and Phone Verification Documentation](https://www.ory.sh/docs/kratos/selfservice/flows/verify-email-account-activation). # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_self_service_verification_flow(id, async_req=True)
>>> result = thread.get()
Args:
id (str): The Flow ID. The value for this parameter comes from the `flow` URL query parameter sent to your application (e.g. `/verification?flow=abcde`).
Keyword Args:
_return_http_data_only (bool): return the response data only,
without status code and headers. Default is True.
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (float/tuple): timeout setting for this request. If one
number is provided, it will be the total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
async_req (bool): execute request asynchronously
Returns:
VerificationFlow
If the method is called asynchronously, returns the request
thread.
"""
kwargs['async_req'] = kwargs.get(
'async_req', False
)
kwargs['_return_http_data_only'] = kwargs.get(
'_return_http_data_only', True
)
kwargs['_preload_content'] = kwargs.get(
'_preload_content', True
)
kwargs['_request_timeout'] = kwargs.get(
'_request_timeout', None
)
kwargs['_check_input_type'] = kwargs.get(
'_check_input_type', True
)
kwargs['_check_return_type'] = kwargs.get(
'_check_return_type', True
)
kwargs['_host_index'] = kwargs.get('_host_index')
kwargs['id'] = \
id
return self.call_with_http_info(**kwargs)
self.get_self_service_verification_flow = _Endpoint(
settings={
'response_type': (VerificationFlow,),
'auth': [],
'endpoint_path': '/api/kratos/public/self-service/verification/flows',
'operation_id': 'get_self_service_verification_flow',
'http_method': 'GET',
'servers': None,
},
params_map={
'all': [
'id',
],
'required': [
'id',
],
'nullable': [
],
'enum': [
],
'validation': [
]
},
root_map={
'validations': {
},
'allowed_values': {
},
'openapi_types': {
'id':
(str,),
},
'attribute_map': {
'id': 'id',
},
'location_map': {
'id': 'query',
},
'collection_format_map': {
}
},
headers_map={
'accept': [
'application/json'
],
'content_type': [],
},
api_client=api_client,
callable=__get_self_service_verification_flow
)
def __get_self_service_verification_flow_admin(
self,
id,
**kwargs
):
"""Get Verification Flow # noqa: E501
This endpoint returns a verification flow's context with, for example, error details and other information. More information can be found at [Ory Kratos Email and Phone Verification Documentation](https://www.ory.sh/docs/kratos/selfservice/flows/verify-email-account-activation). # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_self_service_verification_flow_admin(id, async_req=True)
>>> result = thread.get()
Args:
id (str): The Flow ID. The value for this parameter comes from the `flow` URL query parameter sent to your application (e.g. `/verification?flow=abcde`).
Keyword Args:
_return_http_data_only (bool): return the response data only,
without status code and headers. Default is True.
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (float/tuple): timeout setting for this request. If one
number is provided, it will be the total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
async_req (bool): execute request asynchronously
Returns:
VerificationFlow
If the method is called asynchronously, returns the request
thread.
"""
kwargs['async_req'] = kwargs.get(
'async_req', False
)
kwargs['_return_http_data_only'] = kwargs.get(
'_return_http_data_only', True
)
kwargs['_preload_content'] = kwargs.get(
'_preload_content', True
)
kwargs['_request_timeout'] = kwargs.get(
'_request_timeout', None
)
kwargs['_check_input_type'] = kwargs.get(
'_check_input_type', True
)
kwargs['_check_return_type'] = kwargs.get(
'_check_return_type', True
)
kwargs['_host_index'] = kwargs.get('_host_index')
kwargs['id'] = \
id
return self.call_with_http_info(**kwargs)
self.get_self_service_verification_flow_admin = _Endpoint(
settings={
'response_type': (VerificationFlow,),
'auth': [
'oryToken'
],
'endpoint_path': '/api/kratos/admin/self-service/verification/flows',
'operation_id': 'get_self_service_verification_flow_admin',
'http_method': 'GET',
'servers': None,
},
params_map={
'all': [
'id',
],
'required': [
'id',
],
'nullable': [
],
'enum': [
],
'validation': [
]
},
root_map={
'validations': {
},
'allowed_values': {
},
'openapi_types': {
'id':
(str,),
},
'attribute_map': {
'id': 'id',
},
'location_map': {
'id': 'query',
},
'collection_format_map': {
}
},
headers_map={
'accept': [
'application/json'
],
'content_type': [],
},
api_client=api_client,
callable=__get_self_service_verification_flow_admin
)
def __get_version_admin(
self,
**kwargs
):
"""Return Running Software Version. # noqa: E501
This endpoint returns the version of Ory Kratos. If the service supports TLS Edge Termination, this endpoint does not require the `X-Forwarded-Proto` header to be set. Be aware that if you are running multiple nodes of this service, the version will never refer to the cluster state, only to a single instance. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_version_admin(async_req=True)
>>> result = thread.get()
Keyword Args:
_return_http_data_only (bool): return the response data only,
without status code and headers. Default is True.
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (float/tuple): timeout setting for this request. If one
number is provided, it will be the total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
async_req (bool): execute request asynchronously
Returns:
InlineResponse2001
If the method is called asynchronously, returns the request
thread.
"""
kwargs['async_req'] = kwargs.get(
'async_req', False
)
kwargs['_return_http_data_only'] = kwargs.get(
'_return_http_data_only', True
)
kwargs['_preload_content'] = kwargs.get(
'_preload_content', True
)
kwargs['_request_timeout'] = kwargs.get(
'_request_timeout', None
)
kwargs['_check_input_type'] = kwargs.get(
'_check_input_type', True
)
kwargs['_check_return_type'] = kwargs.get(
'_check_return_type', True
)
kwargs['_host_index'] = kwargs.get('_host_index')
return self.call_with_http_info(**kwargs)
self.get_version_admin = _Endpoint(
settings={
'response_type': (InlineResponse2001,),
'auth': [
'oryToken'
],
'endpoint_path': '/api/kratos/admin/version',
'operation_id': 'get_version_admin',
'http_method': 'GET',
'servers': None,
},
params_map={
'all': [
],
'required': [],
'nullable': [
],
'enum': [
],
'validation': [
]
},
root_map={
'validations': {
},
'allowed_values': {
},
'openapi_types': {
},
'attribute_map': {
},
'location_map': {
},
'collection_format_map': {
}
},
headers_map={
'accept': [
'application/json'
],
'content_type': [],
},
api_client=api_client,
callable=__get_version_admin
)
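# Illustrative usage sketch (not generated code): calling the version
# endpoint synchronously and asynchronously. Assumes `api` is an instance
# of this API class.
#
#   >>> version = api.get_version_admin()
#   >>> thread = api.get_version_admin(async_req=True)
#   >>> version = thread.get()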
def __initialize_self_service_browser_logout_flow(
self,
**kwargs
):
"""Initialize Browser-Based Logout User Flow # noqa: E501
This endpoint initializes a logout flow. > This endpoint is NOT INTENDED for API clients and only works with browsers (Chrome, Firefox, ...). On successful logout, the browser will be redirected (HTTP 302 Found) to the `return_to` parameter of the initial request or fall back to `urls.default_return_to`. More information can be found at [Ory Kratos User Logout Documentation](https://www.ory.sh/docs/next/kratos/self-service/flows/user-logout). # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.initialize_self_service_browser_logout_flow(async_req=True)
>>> result = thread.get()
Keyword Args:
_return_http_data_only (bool): return the response data only,
without the status code and headers. Default is True.
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (float/tuple): timeout setting for this request. If a
single number is provided, it will be the total request timeout. It can
also be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
async_req (bool): execute request asynchronously
Returns:
None
If the method is called asynchronously, returns the request
thread.
"""
kwargs['async_req'] = kwargs.get(
'async_req', False
)
kwargs['_return_http_data_only'] = kwargs.get(
'_return_http_data_only', True
)
kwargs['_preload_content'] = kwargs.get(
'_preload_content', True
)
kwargs['_request_timeout'] = kwargs.get(
'_request_timeout', None
)
kwargs['_check_input_type'] = kwargs.get(
'_check_input_type', True
)
kwargs['_check_return_type'] = kwargs.get(
'_check_return_type', True
)
kwargs['_host_index'] = kwargs.get('_host_index')
return self.call_with_http_info(**kwargs)
self.initialize_self_service_browser_logout_flow = _Endpoint(
settings={
'response_type': None,
'auth': [],
'endpoint_path': '/api/kratos/public/self-service/browser/flows/logout',
'operation_id': 'initialize_self_service_browser_logout_flow',
'http_method': 'GET',
'servers': None,
},
params_map={
'all': [
],
'required': [],
'nullable': [
],
'enum': [
],
'validation': [
]
},
root_map={
'validations': {
},
'allowed_values': {
},
'openapi_types': {
},
'attribute_map': {
},
'location_map': {
},
'collection_format_map': {
}
},
headers_map={
'accept': [
'application/json'
],
'content_type': [],
},
api_client=api_client,
callable=__initialize_self_service_browser_logout_flow
)
def __initialize_self_service_login_for_browsers(
self,
**kwargs
):
"""Initialize Login Flow for Browsers # noqa: E501
This endpoint initializes a browser-based user login flow. This endpoint will set the appropriate cookies and anti-CSRF measures required for browser-based flows. :::info This endpoint is EXPERIMENTAL and subject to potential breaking changes in the future. ::: If this endpoint is opened as a link in the browser, it will be redirected to `selfservice.flows.login.ui_url` with the flow ID set as the query parameter `?flow=`. If a valid user session exists already, the browser will be redirected to `urls.default_redirect_url` unless the query parameter `?refresh=true` was set. If this endpoint is called via an AJAX request, the response contains the login flow without a redirect. This endpoint is NOT INTENDED for clients that do not have a browser (Chrome, Firefox, ...) as cookies are needed. More information can be found at [Ory Kratos User Login and User Registration Documentation](https://www.ory.sh/docs/next/kratos/self-service/flows/user-login-user-registration). # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.initialize_self_service_login_for_browsers(async_req=True)
>>> result = thread.get()
Keyword Args:
refresh (bool): Refresh a login session. If set to true, this will refresh an existing login session by asking the user to sign in again. This will reset the authenticated_at time of the session. [optional]
_return_http_data_only (bool): return the response data only,
without the status code and headers. Default is True.
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (float/tuple): timeout setting for this request. If a
single number is provided, it will be the total request timeout. It can
also be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
async_req (bool): execute request asynchronously
Returns:
LoginFlow
If the method is called asynchronously, returns the request
thread.
"""
kwargs['async_req'] = kwargs.get(
'async_req', False
)
kwargs['_return_http_data_only'] = kwargs.get(
'_return_http_data_only', True
)
kwargs['_preload_content'] = kwargs.get(
'_preload_content', True
)
kwargs['_request_timeout'] = kwargs.get(
'_request_timeout', None
)
kwargs['_check_input_type'] = kwargs.get(
'_check_input_type', True
)
kwargs['_check_return_type'] = kwargs.get(
'_check_return_type', True
)
kwargs['_host_index'] = kwargs.get('_host_index')
return self.call_with_http_info(**kwargs)
self.initialize_self_service_login_for_browsers = _Endpoint(
settings={
'response_type': (LoginFlow,),
'auth': [],
'endpoint_path': '/api/kratos/public/self-service/login/browser',
'operation_id': 'initialize_self_service_login_for_browsers',
'http_method': 'GET',
'servers': None,
},
params_map={
'all': [
'refresh',
],
'required': [],
'nullable': [
],
'enum': [
],
'validation': [
]
},
root_map={
'validations': {
},
'allowed_values': {
},
'openapi_types': {
'refresh':
(bool,),
},
'attribute_map': {
'refresh': 'refresh',
},
'location_map': {
'refresh': 'query',
},
'collection_format_map': {
}
},
headers_map={
'accept': [
'application/json'
],
'content_type': [],
},
api_client=api_client,
callable=__initialize_self_service_login_for_browsers
)
def __initialize_self_service_login_without_browser(
self,
**kwargs
):
"""Initialize Login Flow for APIs, Services, Apps, ... # noqa: E501
This endpoint initiates a login flow for API clients that do not use a browser, such as mobile devices, smart TVs, and so on. :::info This endpoint is EXPERIMENTAL and subject to potential breaking changes in the future. ::: If a valid session cookie or session token is provided, a 400 Bad Request error will be returned unless the URL query parameter `?refresh=true` is set. To fetch an existing login flow call `/self-service/login/flows?flow=<flow_id>`. :::warning You MUST NOT use this endpoint in client-side (Single Page Apps, ReactJS, AngularJS) nor server-side (Java Server Pages, NodeJS, PHP, Golang, ...) browser applications. Using this endpoint in these applications will make you vulnerable to a variety of CSRF attacks, including CSRF login attacks. This endpoint MUST ONLY be used in scenarios such as native mobile apps (React Native, Objective C, Swift, Java, ...). ::: More information can be found at [Ory Kratos User Login and User Registration Documentation](https://www.ory.sh/docs/next/kratos/self-service/flows/user-login-user-registration). # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.initialize_self_service_login_without_browser(async_req=True)
>>> result = thread.get()
Keyword Args:
refresh (bool): Refresh a login session. If set to true, this will refresh an existing login session by asking the user to sign in again. This will reset the authenticated_at time of the session. [optional]
_return_http_data_only (bool): return the response data only,
without the status code and headers. Default is True.
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (float/tuple): timeout setting for this request. If a
single number is provided, it will be the total request timeout. It can
also be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
async_req (bool): execute request asynchronously
Returns:
LoginFlow
If the method is called asynchronously, returns the request
thread.
"""
kwargs['async_req'] = kwargs.get(
'async_req', False
)
kwargs['_return_http_data_only'] = kwargs.get(
'_return_http_data_only', True
)
kwargs['_preload_content'] = kwargs.get(
'_preload_content', True
)
kwargs['_request_timeout'] = kwargs.get(
'_request_timeout', None
)
kwargs['_check_input_type'] = kwargs.get(
'_check_input_type', True
)
kwargs['_check_return_type'] = kwargs.get(
'_check_return_type', True
)
kwargs['_host_index'] = kwargs.get('_host_index')
return self.call_with_http_info(**kwargs)
self.initialize_self_service_login_without_browser = _Endpoint(
settings={
'response_type': (LoginFlow,),
'auth': [],
'endpoint_path': '/api/kratos/public/self-service/login/api',
'operation_id': 'initialize_self_service_login_without_browser',
'http_method': 'GET',
'servers': None,
},
params_map={
'all': [
'refresh',
],
'required': [],
'nullable': [
],
'enum': [
],
'validation': [
]
},
root_map={
'validations': {
},
'allowed_values': {
},
'openapi_types': {
'refresh':
(bool,),
},
'attribute_map': {
'refresh': 'refresh',
},
'location_map': {
'refresh': 'query',
},
'collection_format_map': {
}
},
headers_map={
'accept': [
'application/json'
],
'content_type': [],
},
api_client=api_client,
callable=__initialize_self_service_login_without_browser
)
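# Illustrative usage sketch (not generated code): initializing an API-based
# login flow, optionally refreshing an existing session. Assumes `api` is
# an instance of this API class.
#
#   >>> flow = api.initialize_self_service_login_without_browser()
#   >>> flow = api.initialize_self_service_login_without_browser(refresh=True)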
def __initialize_self_service_recovery_for_browsers(
self,
**kwargs
):
"""Initialize Recovery Flow for Browser Clients # noqa: E501
This endpoint initializes a browser-based account recovery flow. Once initialized, the browser will be redirected to `selfservice.flows.recovery.ui_url` with the flow ID set as the query parameter `?flow=`. If a valid user session exists, the browser is returned to the configured return URL. This endpoint is NOT INTENDED for API clients and only works with browsers (Chrome, Firefox, ...). More information can be found at [Ory Kratos Account Recovery Documentation](../self-service/flows/account-recovery.mdx). # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.initialize_self_service_recovery_for_browsers(async_req=True)
>>> result = thread.get()
Keyword Args:
_return_http_data_only (bool): return the response data only,
without the status code and headers. Default is True.
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (float/tuple): timeout setting for this request. If a
single number is provided, it will be the total request timeout. It can
also be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
async_req (bool): execute request asynchronously
Returns:
None
If the method is called asynchronously, returns the request
thread.
"""
kwargs['async_req'] = kwargs.get(
'async_req', False
)
kwargs['_return_http_data_only'] = kwargs.get(
'_return_http_data_only', True
)
kwargs['_preload_content'] = kwargs.get(
'_preload_content', True
)
kwargs['_request_timeout'] = kwargs.get(
'_request_timeout', None
)
kwargs['_check_input_type'] = kwargs.get(
'_check_input_type', True
)
kwargs['_check_return_type'] = kwargs.get(
'_check_return_type', True
)
kwargs['_host_index'] = kwargs.get('_host_index')
return self.call_with_http_info(**kwargs)
self.initialize_self_service_recovery_for_browsers = _Endpoint(
settings={
'response_type': None,
'auth': [],
'endpoint_path': '/api/kratos/public/self-service/recovery/browser',
'operation_id': 'initialize_self_service_recovery_for_browsers',
'http_method': 'GET',
'servers': None,
},
params_map={
'all': [
],
'required': [],
'nullable': [
],
'enum': [
],
'validation': [
]
},
root_map={
'validations': {
},
'allowed_values': {
},
'openapi_types': {
},
'attribute_map': {
},
'location_map': {
},
'collection_format_map': {
}
},
headers_map={
'accept': [
'application/json'
],
'content_type': [],
},
api_client=api_client,
callable=__initialize_self_service_recovery_for_browsers
)
def __initialize_self_service_recovery_for_native_apps(
self,
**kwargs
):
"""Initialize Recovery Flow for Native Apps and API clients # noqa: E501
This endpoint initiates a recovery flow for API clients such as mobile devices, smart TVs, and so on. If a valid session cookie or session token is provided, a 400 Bad Request error will be returned. To fetch an existing recovery flow call `/self-service/recovery/flows?flow=<flow_id>`. :::warning You MUST NOT use this endpoint in client-side (Single Page Apps, ReactJS, AngularJS) nor server-side (Java Server Pages, NodeJS, PHP, Golang, ...) browser applications. Using this endpoint in these applications will make you vulnerable to a variety of CSRF attacks. This endpoint MUST ONLY be used in scenarios such as native mobile apps (React Native, Objective C, Swift, Java, ...). ::: More information can be found at [Ory Kratos Account Recovery Documentation](../self-service/flows/account-recovery.mdx). # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.initialize_self_service_recovery_for_native_apps(async_req=True)
>>> result = thread.get()
Keyword Args:
_return_http_data_only (bool): return the response data only,
without the status code and headers. Default is True.
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (float/tuple): timeout setting for this request. If a
single number is provided, it will be the total request timeout. It can
also be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
async_req (bool): execute request asynchronously
Returns:
RecoveryFlow
If the method is called asynchronously, returns the request
thread.
"""
kwargs['async_req'] = kwargs.get(
'async_req', False
)
kwargs['_return_http_data_only'] = kwargs.get(
'_return_http_data_only', True
)
kwargs['_preload_content'] = kwargs.get(
'_preload_content', True
)
kwargs['_request_timeout'] = kwargs.get(
'_request_timeout', None
)
kwargs['_check_input_type'] = kwargs.get(
'_check_input_type', True
)
kwargs['_check_return_type'] = kwargs.get(
'_check_return_type', True
)
kwargs['_host_index'] = kwargs.get('_host_index')
return self.call_with_http_info(**kwargs)
self.initialize_self_service_recovery_for_native_apps = _Endpoint(
settings={
'response_type': (RecoveryFlow,),
'auth': [],
'endpoint_path': '/api/kratos/public/self-service/recovery/api',
'operation_id': 'initialize_self_service_recovery_for_native_apps',
'http_method': 'GET',
'servers': None,
},
params_map={
'all': [
],
'required': [],
'nullable': [
],
'enum': [
],
'validation': [
]
},
root_map={
'validations': {
},
'allowed_values': {
},
'openapi_types': {
},
'attribute_map': {
},
'location_map': {
},
'collection_format_map': {
}
},
headers_map={
'accept': [
'application/json'
],
'content_type': [],
},
api_client=api_client,
callable=__initialize_self_service_recovery_for_native_apps
)
def __initialize_self_service_registration_for_browsers(
self,
**kwargs
):
"""Initialize Registration Flow for Browsers # noqa: E501
This endpoint initializes a browser-based user registration flow. This endpoint will set the appropriate cookies and anti-CSRF measures required for browser-based flows. :::info This endpoint is EXPERIMENTAL and subject to potential breaking changes in the future. ::: If this endpoint is opened as a link in the browser, it will be redirected to `selfservice.flows.registration.ui_url` with the flow ID set as the query parameter `?flow=`. If a valid user session exists already, the browser will be redirected to `urls.default_redirect_url`. If this endpoint is called via an AJAX request, the response contains the registration flow without a redirect. This endpoint is NOT INTENDED for clients that do not have a browser (Chrome, Firefox, ...) as cookies are needed. More information can be found at [Ory Kratos User Login and User Registration Documentation](https://www.ory.sh/docs/next/kratos/self-service/flows/user-login-user-registration). # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.initialize_self_service_registration_for_browsers(async_req=True)
>>> result = thread.get()
Keyword Args:
_return_http_data_only (bool): return the response data only,
without the status code and headers. Default is True.
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (float/tuple): timeout setting for this request. If a
single number is provided, it will be the total request timeout. It can
also be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
async_req (bool): execute request asynchronously
Returns:
RegistrationFlow
If the method is called asynchronously, returns the request
thread.
"""
kwargs['async_req'] = kwargs.get(
'async_req', False
)
kwargs['_return_http_data_only'] = kwargs.get(
'_return_http_data_only', True
)
kwargs['_preload_content'] = kwargs.get(
'_preload_content', True
)
kwargs['_request_timeout'] = kwargs.get(
'_request_timeout', None
)
kwargs['_check_input_type'] = kwargs.get(
'_check_input_type', True
)
kwargs['_check_return_type'] = kwargs.get(
'_check_return_type', True
)
kwargs['_host_index'] = kwargs.get('_host_index')
return self.call_with_http_info(**kwargs)
self.initialize_self_service_registration_for_browsers = _Endpoint(
settings={
'response_type': (RegistrationFlow,),
'auth': [],
'endpoint_path': '/api/kratos/public/self-service/registration/browser',
'operation_id': 'initialize_self_service_registration_for_browsers',
'http_method': 'GET',
'servers': None,
},
params_map={
'all': [
],
'required': [],
'nullable': [
],
'enum': [
],
'validation': [
]
},
root_map={
'validations': {
},
'allowed_values': {
},
'openapi_types': {
},
'attribute_map': {
},
'location_map': {
},
'collection_format_map': {
}
},
headers_map={
'accept': [
'application/json'
],
'content_type': [],
},
api_client=api_client,
callable=__initialize_self_service_registration_for_browsers
)
def __initialize_self_service_registration_without_browser(
self,
**kwargs
):
"""Initialize Registration Flow for APIs, Services, Apps, ... # noqa: E501
This endpoint initiates a registration flow for API clients such as mobile devices, smart TVs, and so on. :::info This endpoint is EXPERIMENTAL and subject to potential breaking changes in the future. ::: If a valid session cookie or session token is provided, a 400 Bad Request error will be returned unless the URL query parameter `?refresh=true` is set. To fetch an existing registration flow call `/self-service/registration/flows?flow=<flow_id>`. :::warning You MUST NOT use this endpoint in client-side (Single Page Apps, ReactJS, AngularJS) nor server-side (Java Server Pages, NodeJS, PHP, Golang, ...) browser applications. Using this endpoint in these applications will make you vulnerable to a variety of CSRF attacks. This endpoint MUST ONLY be used in scenarios such as native mobile apps (React Native, Objective C, Swift, Java, ...). ::: More information can be found at [Ory Kratos User Login and User Registration Documentation](https://www.ory.sh/docs/next/kratos/self-service/flows/user-login-user-registration). # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.initialize_self_service_registration_without_browser(async_req=True)
>>> result = thread.get()
Keyword Args:
_return_http_data_only (bool): return the response data only,
without the status code and headers. Default is True.
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (float/tuple): timeout setting for this request. If a
single number is provided, it will be the total request timeout. It can
also be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
async_req (bool): execute request asynchronously
Returns:
RegistrationFlow
If the method is called asynchronously, returns the request
thread.
"""
kwargs['async_req'] = kwargs.get(
'async_req', False
)
kwargs['_return_http_data_only'] = kwargs.get(
'_return_http_data_only', True
)
kwargs['_preload_content'] = kwargs.get(
'_preload_content', True
)
kwargs['_request_timeout'] = kwargs.get(
'_request_timeout', None
)
kwargs['_check_input_type'] = kwargs.get(
'_check_input_type', True
)
kwargs['_check_return_type'] = kwargs.get(
'_check_return_type', True
)
kwargs['_host_index'] = kwargs.get('_host_index')
return self.call_with_http_info(**kwargs)
self.initialize_self_service_registration_without_browser = _Endpoint(
settings={
'response_type': (RegistrationFlow,),
'auth': [],
'endpoint_path': '/api/kratos/public/self-service/registration/api',
'operation_id': 'initialize_self_service_registration_without_browser',
'http_method': 'GET',
'servers': None,
},
params_map={
'all': [
],
'required': [],
'nullable': [
],
'enum': [
],
'validation': [
]
},
root_map={
'validations': {
},
'allowed_values': {
},
'openapi_types': {
},
'attribute_map': {
},
'location_map': {
},
'collection_format_map': {
}
},
headers_map={
'accept': [
'application/json'
],
'content_type': [],
},
api_client=api_client,
callable=__initialize_self_service_registration_without_browser
)
def __initialize_self_service_settings_for_browsers(
self,
**kwargs
):
"""Initialize Settings Flow for Browsers # noqa: E501
This endpoint initializes a browser-based user settings flow. Once initialized, the browser will be redirected to `selfservice.flows.settings.ui_url` with the flow ID set as the query parameter `?flow=`. If no valid Ory Kratos Session Cookie is included in the request, a login flow will be initialized. :::note This endpoint is NOT INTENDED for API clients and only works with browsers (Chrome, Firefox, ...). ::: More information can be found at [Ory Kratos User Settings & Profile Management Documentation](../self-service/flows/user-settings). # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.initialize_self_service_settings_for_browsers(async_req=True)
>>> result = thread.get()
Keyword Args:
_return_http_data_only (bool): return the response data only,
without the status code and headers. Default is True.
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (float/tuple): timeout setting for this request. If a
single number is provided, it will be the total request timeout. It can
also be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
async_req (bool): execute request asynchronously
Returns:
None
If the method is called asynchronously, returns the request
thread.
"""
kwargs['async_req'] = kwargs.get(
'async_req', False
)
kwargs['_return_http_data_only'] = kwargs.get(
'_return_http_data_only', True
)
kwargs['_preload_content'] = kwargs.get(
'_preload_content', True
)
kwargs['_request_timeout'] = kwargs.get(
'_request_timeout', None
)
kwargs['_check_input_type'] = kwargs.get(
'_check_input_type', True
)
kwargs['_check_return_type'] = kwargs.get(
'_check_return_type', True
)
kwargs['_host_index'] = kwargs.get('_host_index')
return self.call_with_http_info(**kwargs)
self.initialize_self_service_settings_for_browsers = _Endpoint(
settings={
'response_type': None,
'auth': [
'sessionToken'
],
'endpoint_path': '/api/kratos/public/self-service/settings/browser',
'operation_id': 'initialize_self_service_settings_for_browsers',
'http_method': 'GET',
'servers': None,
},
params_map={
'all': [
],
'required': [],
'nullable': [
],
'enum': [
],
'validation': [
]
},
root_map={
'validations': {
},
'allowed_values': {
},
'openapi_types': {
},
'attribute_map': {
},
'location_map': {
},
'collection_format_map': {
}
},
headers_map={
'accept': [
'application/json'
],
'content_type': [],
},
api_client=api_client,
callable=__initialize_self_service_settings_for_browsers
)
def __initialize_self_service_settings_for_native_apps(
self,
**kwargs
):
"""Initialize Settings Flow for Native Apps and API clients # noqa: E501
This endpoint initiates a settings flow for API clients such as mobile devices, smart TVs, and so on. You must provide a valid Ory Kratos Session Token for this endpoint to respond with HTTP 200 OK. To fetch an existing settings flow call `/self-service/settings/flows?flow=<flow_id>`. :::warning You MUST NOT use this endpoint in client-side (Single Page Apps, ReactJS, AngularJS) nor server-side (Java Server Pages, NodeJS, PHP, Golang, ...) browser applications. Using this endpoint in these applications will make you vulnerable to a variety of CSRF attacks. This endpoint MUST ONLY be used in scenarios such as native mobile apps (React Native, Objective C, Swift, Java, ...). ::: More information can be found at [Ory Kratos User Settings & Profile Management Documentation](../self-service/flows/user-settings). # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.initialize_self_service_settings_for_native_apps(async_req=True)
>>> result = thread.get()
Keyword Args:
x_session_token (str): The Session Token of the Identity performing the settings flow. [optional]
_return_http_data_only (bool): return the response data only,
without the status code and headers. Default is True.
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (float/tuple): timeout setting for this request. If a
single number is provided, it will be the total request timeout. It can
also be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
async_req (bool): execute request asynchronously
Returns:
SettingsFlow
If the method is called asynchronously, returns the request
thread.
"""
kwargs['async_req'] = kwargs.get(
'async_req', False
)
kwargs['_return_http_data_only'] = kwargs.get(
'_return_http_data_only', True
)
kwargs['_preload_content'] = kwargs.get(
'_preload_content', True
)
kwargs['_request_timeout'] = kwargs.get(
'_request_timeout', None
)
kwargs['_check_input_type'] = kwargs.get(
'_check_input_type', True
)
kwargs['_check_return_type'] = kwargs.get(
'_check_return_type', True
)
kwargs['_host_index'] = kwargs.get('_host_index')
return self.call_with_http_info(**kwargs)
self.initialize_self_service_settings_for_native_apps = _Endpoint(
settings={
'response_type': (SettingsFlow,),
'auth': [
'sessionToken'
],
'endpoint_path': '/api/kratos/public/self-service/settings/api',
'operation_id': 'initialize_self_service_settings_for_native_apps',
'http_method': 'GET',
'servers': None,
},
params_map={
'all': [
'x_session_token',
],
'required': [],
'nullable': [
],
'enum': [
],
'validation': [
]
},
root_map={
'validations': {
},
'allowed_values': {
},
'openapi_types': {
'x_session_token':
(str,),
},
'attribute_map': {
'x_session_token': 'X-Session-Token',
},
'location_map': {
'x_session_token': 'header',
},
'collection_format_map': {
}
},
headers_map={
'accept': [
'application/json'
],
'content_type': [],
},
api_client=api_client,
callable=__initialize_self_service_settings_for_native_apps
)
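# Illustrative usage sketch (not generated code): the session token is
# passed as the `x_session_token` keyword and sent as the `X-Session-Token`
# header per the attribute_map above; the token value is hypothetical.
#
#   >>> flow = api.initialize_self_service_settings_for_native_apps(
#   ...     x_session_token='<session-token>')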
def __initialize_self_service_verification_for_browsers(
self,
**kwargs
):
"""Initialize Verification Flow for Browser Clients # noqa: E501
This endpoint initializes a browser-based account verification flow. Once initialized, the browser will be redirected to `selfservice.flows.verification.ui_url` with the flow ID set as the query parameter `?flow=`. This endpoint is NOT INTENDED for API clients and only works with browsers (Chrome, Firefox, ...). More information can be found at [Ory Kratos Email and Phone Verification Documentation](https://www.ory.sh/docs/kratos/selfservice/flows/verify-email-account-activation). # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.initialize_self_service_verification_for_browsers(async_req=True)
>>> result = thread.get()
Keyword Args:
_return_http_data_only (bool): return the response data only,
without the status code and headers. Default is True.
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (float/tuple): timeout setting for this request. If a
single number is provided, it will be the total request timeout. It can
also be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
async_req (bool): execute request asynchronously
Returns:
None
If the method is called asynchronously, returns the request
thread.
"""
kwargs['async_req'] = kwargs.get(
'async_req', False
)
kwargs['_return_http_data_only'] = kwargs.get(
'_return_http_data_only', True
)
kwargs['_preload_content'] = kwargs.get(
'_preload_content', True
)
kwargs['_request_timeout'] = kwargs.get(
'_request_timeout', None
)
kwargs['_check_input_type'] = kwargs.get(
'_check_input_type', True
)
kwargs['_check_return_type'] = kwargs.get(
'_check_return_type', True
)
kwargs['_host_index'] = kwargs.get('_host_index')
return self.call_with_http_info(**kwargs)
self.initialize_self_service_verification_for_browsers = _Endpoint(
settings={
'response_type': None,
'auth': [],
'endpoint_path': '/api/kratos/public/self-service/verification/browser',
'operation_id': 'initialize_self_service_verification_for_browsers',
'http_method': 'GET',
'servers': None,
},
params_map={
'all': [
],
'required': [],
'nullable': [
],
'enum': [
],
'validation': [
]
},
root_map={
'validations': {
},
'allowed_values': {
},
'openapi_types': {
},
'attribute_map': {
},
'location_map': {
},
'collection_format_map': {
}
},
headers_map={
'accept': [
'application/json'
],
'content_type': [],
},
api_client=api_client,
callable=__initialize_self_service_verification_for_browsers
)
def __initialize_self_service_verification_for_native_apps(
self,
**kwargs
):
"""Initialize Verification Flow for Native Apps and API clients # noqa: E501
This endpoint initiates a verification flow for API clients such as mobile devices, smart TVs, and so on. To fetch an existing verification flow call `/self-service/verification/flows?flow=<flow_id>`. :::warning You MUST NOT use this endpoint in client-side (Single Page Apps, ReactJS, AngularJS) nor server-side (Java Server Pages, NodeJS, PHP, Golang, ...) browser applications. Using this endpoint in these applications will make you vulnerable to a variety of CSRF attacks. This endpoint MUST ONLY be used in scenarios such as native mobile apps (React Native, Objective C, Swift, Java, ...). ::: More information can be found at [Ory Kratos Email and Phone Verification Documentation](https://www.ory.sh/docs/kratos/selfservice/flows/verify-email-account-activation). # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.initialize_self_service_verification_for_native_apps(async_req=True)
>>> result = thread.get()
Keyword Args:
_return_http_data_only (bool): return the response data only,
without the status code and headers. Default is True.
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (float/tuple): timeout setting for this request. If a
single number is provided, it will be the total request timeout. It can
also be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
async_req (bool): execute request asynchronously
Returns:
VerificationFlow
If the method is called asynchronously, returns the request
thread.
"""
kwargs['async_req'] = kwargs.get(
'async_req', False
)
kwargs['_return_http_data_only'] = kwargs.get(
'_return_http_data_only', True
)
kwargs['_preload_content'] = kwargs.get(
'_preload_content', True
)
kwargs['_request_timeout'] = kwargs.get(
'_request_timeout', None
)
kwargs['_check_input_type'] = kwargs.get(
'_check_input_type', True
)
kwargs['_check_return_type'] = kwargs.get(
'_check_return_type', True
)
kwargs['_host_index'] = kwargs.get('_host_index')
return self.call_with_http_info(**kwargs)
self.initialize_self_service_verification_for_native_apps = _Endpoint(
settings={
'response_type': (VerificationFlow,),
'auth': [],
'endpoint_path': '/api/kratos/public/self-service/verification/api',
'operation_id': 'initialize_self_service_verification_for_native_apps',
'http_method': 'GET',
'servers': None,
},
params_map={
'all': [
],
'required': [],
'nullable': [
],
'enum': [
],
'validation': [
]
},
root_map={
'validations': {
},
'allowed_values': {
},
'openapi_types': {
},
'attribute_map': {
},
'location_map': {
},
'collection_format_map': {
}
},
headers_map={
'accept': [
'application/json'
],
'content_type': [],
},
api_client=api_client,
callable=__initialize_self_service_verification_for_native_apps
)
def __is_alive_admin(
self,
**kwargs
):
"""Check HTTP Server Status # noqa: E501
This endpoint returns an HTTP 200 status code when Ory Kratos is accepting incoming HTTP requests. This status does not currently include a check of whether the database connection is working. If the service supports TLS Edge Termination, this endpoint does not require the `X-Forwarded-Proto` header to be set. Be aware that if you are running multiple nodes of this service, the health status will never refer to the cluster state, only to a single instance. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.is_alive_admin(async_req=True)
>>> result = thread.get()
Keyword Args:
_return_http_data_only (bool): return the response data only,
without the status code and headers. Default is True.
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (float/tuple): timeout setting for this request. If a
single number is provided, it will be the total request timeout. It can
also be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
async_req (bool): execute request asynchronously
Returns:
InlineResponse200
If the method is called asynchronously, returns the request
thread.
"""
kwargs['async_req'] = kwargs.get(
'async_req', False
)
kwargs['_return_http_data_only'] = kwargs.get(
'_return_http_data_only', True
)
kwargs['_preload_content'] = kwargs.get(
'_preload_content', True
)
kwargs['_request_timeout'] = kwargs.get(
'_request_timeout', None
)
kwargs['_check_input_type'] = kwargs.get(
'_check_input_type', True
)
kwargs['_check_return_type'] = kwargs.get(
'_check_return_type', True
)
kwargs['_host_index'] = kwargs.get('_host_index')
return self.call_with_http_info(**kwargs)
self.is_alive_admin = _Endpoint(
settings={
'response_type': (InlineResponse200,),
'auth': [
'oryToken'
],
'endpoint_path': '/api/kratos/admin/health/alive',
'operation_id': 'is_alive_admin',
'http_method': 'GET',
'servers': None,
},
params_map={
'all': [
],
'required': [],
'nullable': [
],
'enum': [
],
'validation': [
]
},
root_map={
'validations': {
},
'allowed_values': {
},
'openapi_types': {
},
'attribute_map': {
},
'location_map': {
},
'collection_format_map': {
}
},
headers_map={
'accept': [
'application/json'
],
'content_type': [],
},
api_client=api_client,
callable=__is_alive_admin
)
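# Illustrative usage sketch (not generated code): pass
# _return_http_data_only=False to also receive the status code and headers,
# e.g. for a liveness probe; the (data, status, headers) tuple shape follows
# the generated ApiClient convention. Assumes `api` is an instance of this
# API class.
#
#   >>> (data, status, headers) = api.is_alive_admin(
#   ...     _return_http_data_only=False)
#   >>> assert status == 200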
def __is_ready_admin(
self,
**kwargs
):
"""Check HTTP Server and Database Status # noqa: E501
This endpoint returns an HTTP 200 status code when Ory Kratos is up and running and its environment dependencies (e.g. the database) are responsive as well. If the service supports TLS Edge Termination, this endpoint does not require the `X-Forwarded-Proto` header to be set. Be aware that if you are running multiple nodes of Ory Kratos, the health status will never refer to the cluster state, only to a single instance. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.is_ready_admin(async_req=True)
>>> result = thread.get()
Keyword Args:
_return_http_data_only (bool): return the response data only,
without the status code and headers. Default is True.
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (float/tuple): timeout setting for this request. If a
single number is provided, it will be the total request timeout. It can
also be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
async_req (bool): execute request asynchronously
Returns:
InlineResponse200
If the method is called asynchronously, returns the request
thread.
"""
kwargs['async_req'] = kwargs.get(
'async_req', False
)
kwargs['_return_http_data_only'] = kwargs.get(
'_return_http_data_only', True
)
kwargs['_preload_content'] = kwargs.get(
'_preload_content', True
)
kwargs['_request_timeout'] = kwargs.get(
'_request_timeout', None
)
kwargs['_check_input_type'] = kwargs.get(
'_check_input_type', True
)
kwargs['_check_return_type'] = kwargs.get(
'_check_return_type', True
)
kwargs['_host_index'] = kwargs.get('_host_index')
return self.call_with_http_info(**kwargs)
self.is_ready_admin = _Endpoint(
settings={
'response_type': (InlineResponse200,),
'auth': [
'oryToken'
],
'endpoint_path': '/api/kratos/admin/health/ready',
'operation_id': 'is_ready_admin',
'http_method': 'GET',
'servers': None,
},
params_map={
'all': [
],
'required': [],
'nullable': [
],
'enum': [
],
'validation': [
]
},
root_map={
'validations': {
},
'allowed_values': {
},
'openapi_types': {
},
'attribute_map': {
},
'location_map': {
},
'collection_format_map': {
}
},
headers_map={
'accept': [
'application/json'
],
'content_type': [],
},
api_client=api_client,
callable=__is_ready_admin
)
def __list_identities_admin(
self,
**kwargs
):
"""List Identities # noqa: E501
Lists all identities. Does not support search at the moment. Learn how identities work in [Ory Kratos' User And Identity Model Documentation](https://www.ory.sh/docs/next/kratos/concepts/identity-user-model). # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.list_identities_admin(async_req=True)
>>> result = thread.get()
Keyword Args:
per_page (int): Items per Page. This is the number of items per page. [optional] if omitted the server will use the default value of 100
page (int): Pagination Page. [optional] if omitted the server will use the default value of 0
_return_http_data_only (bool): return the response data only,
without the status code and headers. Default is True.
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (float/tuple): timeout setting for this request. If a
single number is provided, it will be the total request timeout. It can
also be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
async_req (bool): execute request asynchronously
Returns:
[Identity]
If the method is called asynchronously, returns the request
thread.
"""
kwargs['async_req'] = kwargs.get(
'async_req', False
)
kwargs['_return_http_data_only'] = kwargs.get(
'_return_http_data_only', True
)
kwargs['_preload_content'] = kwargs.get(
'_preload_content', True
)
kwargs['_request_timeout'] = kwargs.get(
'_request_timeout', None
)
kwargs['_check_input_type'] = kwargs.get(
'_check_input_type', True
)
kwargs['_check_return_type'] = kwargs.get(
'_check_return_type', True
)
kwargs['_host_index'] = kwargs.get('_host_index')
return self.call_with_http_info(**kwargs)
self.list_identities_admin = _Endpoint(
settings={
'response_type': ([Identity],),
'auth': [
'oryToken'
],
'endpoint_path': '/api/kratos/admin/identities',
'operation_id': 'list_identities_admin',
'http_method': 'GET',
'servers': None,
},
params_map={
'all': [
'per_page',
'page',
],
'required': [],
'nullable': [
],
'enum': [
],
'validation': [
'per_page',
'page',
]
},
root_map={
'validations': {
('per_page',): {
'inclusive_maximum': 500,
'inclusive_minimum': 1,
},
('page',): {
'inclusive_minimum': 0,
},
},
'allowed_values': {
},
'openapi_types': {
'per_page':
(int,),
'page':
(int,),
},
'attribute_map': {
'per_page': 'per_page',
'page': 'page',
},
'location_map': {
'per_page': 'query',
'page': 'query',
},
'collection_format_map': {
}
},
headers_map={
'accept': [
'application/json'
],
'content_type': [],
},
api_client=api_client,
callable=__list_identities_admin
)
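# Illustrative usage sketch (not generated code): paging through identities.
# Per the validations above, `per_page` must be between 1 and 500 and `page`
# must be >= 0. Assumes `api` is an instance of this API class.
#
#   >>> identities = api.list_identities_admin(per_page=100, page=0)
#   >>> for identity in identities:
#   ...     print(identity.id)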
def __prometheus_admin(
self,
**kwargs
):
"""Get snapshot metrics from the Hydra service. If you're using k8s, you can then add annotations to your deployment like so: # noqa: E501
```
metadata:
  annotations:
    prometheus.io/port: \"4434\"
    prometheus.io/path: \"/metrics/prometheus\"
``` # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.prometheus_admin(async_req=True)
>>> result = thread.get()
Keyword Args:
_return_http_data_only (bool): return the response data only,
without the status code and headers. Default is True.
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (float/tuple): timeout setting for this request. If a
single number is provided, it will be the total request timeout. It can
also be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
async_req (bool): execute request asynchronously
Returns:
None
If the method is called asynchronously, returns the request
thread.
"""
kwargs['async_req'] = kwargs.get(
'async_req', False
)
kwargs['_return_http_data_only'] = kwargs.get(
'_return_http_data_only', True
)
kwargs['_preload_content'] = kwargs.get(
'_preload_content', True
)
kwargs['_request_timeout'] = kwargs.get(
'_request_timeout', None
)
kwargs['_check_input_type'] = kwargs.get(
'_check_input_type', True
)
kwargs['_check_return_type'] = kwargs.get(
'_check_return_type', True
)
kwargs['_host_index'] = kwargs.get('_host_index')
return self.call_with_http_info(**kwargs)
self.prometheus_admin = _Endpoint(
settings={
'response_type': None,
'auth': [
'oryToken'
],
'endpoint_path': '/api/kratos/admin/metrics/prometheus',
'operation_id': 'prometheus_admin',
'http_method': 'GET',
'servers': None,
},
params_map={
'all': [
],
'required': [],
'nullable': [
],
'enum': [
],
'validation': [
]
},
root_map={
'validations': {
},
'allowed_values': {
},
'openapi_types': {
},
'attribute_map': {
},
'location_map': {
},
'collection_format_map': {
}
},
headers_map={
'accept': [],
'content_type': [],
},
api_client=api_client,
callable=__prometheus_admin
)
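# Illustrative usage sketch (not generated code): since this endpoint has no
# declared response type, pass _preload_content=False to read the raw
# Prometheus text format from the urllib3.HTTPResponse. Assumes `api` is an
# instance of this API class.
#
#   >>> resp = api.prometheus_admin(_preload_content=False)
#   >>> print(resp.data.decode('utf-8'))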
def __revoke_session(
self,
revoke_session,
**kwargs
):
"""Initialize Logout Flow for API Clients - Revoke a Session # noqa: E501
Use this endpoint to revoke a session using its token. This endpoint is particularly useful for API clients such as mobile apps to log the user out of the system and invalidate the session. This endpoint does not remove any HTTP Cookies - use the Browser-Based Self-Service Logout Flow instead. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.revoke_session(revoke_session, async_req=True)
>>> result = thread.get()
Args:
revoke_session (RevokeSession):
Keyword Args:
_return_http_data_only (bool): return the response data only,
without the status code and headers. Default is True.
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (float/tuple): timeout setting for this request. If a
single number is provided, it will be the total request timeout. It can
also be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
async_req (bool): execute request asynchronously
Returns:
None
If the method is called asynchronously, returns the request
thread.
"""
kwargs['async_req'] = kwargs.get(
'async_req', False
)
kwargs['_return_http_data_only'] = kwargs.get(
'_return_http_data_only', True
)
kwargs['_preload_content'] = kwargs.get(
'_preload_content', True
)
kwargs['_request_timeout'] = kwargs.get(
'_request_timeout', None
)
kwargs['_check_input_type'] = kwargs.get(
'_check_input_type', True
)
kwargs['_check_return_type'] = kwargs.get(
'_check_return_type', True
)
kwargs['_host_index'] = kwargs.get('_host_index')
kwargs['revoke_session'] = \
revoke_session
return self.call_with_http_info(**kwargs)
self.revoke_session = _Endpoint(
settings={
'response_type': None,
'auth': [],
'endpoint_path': '/api/kratos/public/sessions',
'operation_id': 'revoke_session',
'http_method': 'DELETE',
'servers': None,
},
params_map={
'all': [
'revoke_session',
],
'required': [
'revoke_session',
],
'nullable': [
],
'enum': [
],
'validation': [
]
},
root_map={
'validations': {
},
'allowed_values': {
},
'openapi_types': {
'revoke_session':
(RevokeSession,),
},
'attribute_map': {
},
'location_map': {
'revoke_session': 'body',
},
'collection_format_map': {
}
},
headers_map={
'accept': [
'application/json'
],
'content_type': [
'application/json'
]
},
api_client=api_client,
callable=__revoke_session
)
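# Illustrative usage sketch (not generated code): revoking a session from an
# API client. Assumes the RevokeSession model carries the session token to
# invalidate under a `session_token` field (field name taken from the Ory
# Kratos spec); the token value is hypothetical.
#
#   >>> body = RevokeSession(session_token='<session-token>')
#   >>> api.revoke_session(body)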
def __submit_self_service_login_flow(
self,
flow,
**kwargs
):
"""Submit a Login Flow # noqa: E501
:::info This endpoint is EXPERIMENTAL and subject to potential breaking changes in the future. ::: Use this endpoint to complete a login flow. This endpoint behaves differently for API and browser flows. API flows expect `application/json` to be sent in the body and respond with HTTP 200 and an application/json body with the session token on success; an HTTP 302 redirect to a fresh login flow if the original flow expired, with the appropriate error messages set; HTTP 400 on form validation errors. Browser flows expect a Content-Type of `application/x-www-form-urlencoded` or `application/json` to be sent in the body and respond with an HTTP 302 redirect to the post/after login URL or the `return_to` value if it was set and the login succeeded; an HTTP 302 redirect to the login UI URL with the flow ID containing the validation errors otherwise. Browser flows with an accept header of `application/json` will not redirect but instead respond with HTTP 200 and an application/json body with the signed-in identity and a `Set-Cookie` header on success; an HTTP 302 redirect to a fresh login flow if the original flow expired, with the appropriate error messages set; HTTP 400 on form validation errors. More information can be found at [Ory Kratos User Login and User Registration Documentation](https://www.ory.sh/docs/next/kratos/self-service/flows/user-login-user-registration). # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.submit_self_service_login_flow(flow, async_req=True)
>>> result = thread.get()
Args:
flow (str): The Login Flow ID. The value for this parameter comes from the `flow` URL query parameter sent to your application (e.g. `/login?flow=abcde`).
Keyword Args:
submit_self_service_login_flow (SubmitSelfServiceLoginFlow): [optional]
_return_http_data_only (bool): return the response data only,
without the status code and headers. Default is True.
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (float/tuple): timeout setting for this request. If a
single number is provided, it will be the total request timeout. It can
also be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
async_req (bool): execute request asynchronously
Returns:
LoginViaApiResponse
If the method is called asynchronously, returns the request
thread.
"""
kwargs['async_req'] = kwargs.get(
'async_req', False
)
kwargs['_return_http_data_only'] = kwargs.get(
'_return_http_data_only', True
)
kwargs['_preload_content'] = kwargs.get(
'_preload_content', True
)
kwargs['_request_timeout'] = kwargs.get(
'_request_timeout', None
)
kwargs['_check_input_type'] = kwargs.get(
'_check_input_type', True
)
kwargs['_check_return_type'] = kwargs.get(
'_check_return_type', True
)
kwargs['_host_index'] = kwargs.get('_host_index')
kwargs['flow'] = \
flow
return self.call_with_http_info(**kwargs)
self.submit_self_service_login_flow = _Endpoint(
settings={
'response_type': (LoginViaApiResponse,),
'auth': [],
'endpoint_path': '/api/kratos/public/self-service/login',
'operation_id': 'submit_self_service_login_flow',
'http_method': 'POST',
'servers': None,
},
params_map={
'all': [
'flow',
'submit_self_service_login_flow',
],
'required': [
'flow',
],
'nullable': [
],
'enum': [
],
'validation': [
]
},
root_map={
'validations': {
},
'allowed_values': {
},
'openapi_types': {
'flow':
(str,),
'submit_self_service_login_flow':
(SubmitSelfServiceLoginFlow,),
},
'attribute_map': {
'flow': 'flow',
},
'location_map': {
'flow': 'query',
'submit_self_service_login_flow': 'body',
},
'collection_format_map': {
}
},
headers_map={
'accept': [
'application/json'
],
'content_type': [
'application/json',
'application/x-www-form-urlencoded'
]
},
api_client=api_client,
callable=__submit_self_service_login_flow
)
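# Illustrative usage sketch (not generated code): completing a login flow.
# The flow ID comes from the `?flow=` query parameter; the body is a
# SubmitSelfServiceLoginFlow model whose fields depend on the login method
# and are elided here.
#
#   >>> result = api.submit_self_service_login_flow(
#   ...     '<flow-id>',
#   ...     submit_self_service_login_flow=SubmitSelfServiceLoginFlow(...))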
def __submit_self_service_recovery_flow(
self,
flow,
**kwargs
):
"""Complete Recovery Flow # noqa: E501
Use this endpoint to complete a recovery flow. This endpoint behaves differently for API and browser flows and has several states: `choose_method` expects `flow` (in the URL query) and `email` (in the body) to be sent and works with API- and Browser-initiated flows. For API clients it returns an HTTP 200 OK when the form is valid, an HTTP 400 Bad Request when the form is invalid, and an HTTP 302 Found redirect with a fresh recovery flow if the flow was otherwise invalid (e.g. expired). For Browser clients it returns an HTTP 302 Found redirect to the Recovery UI URL with the Recovery Flow ID appended. `sent_email` is the success state after `choose_method` for the `link` method and allows the user to request another recovery email. It works for both API and Browser-initiated flows and returns the same responses as the flow in `choose_method` state. `passed_challenge` expects a `token` to be sent in the URL query and, given the nature of the flow (\"sending a recovery link\"), does not have any API capabilities. The server responds with an HTTP 302 Found redirect either to the Settings UI URL (if the link was valid), instructing the user to update their password, or to the Recovery UI URL with a new Recovery Flow ID which contains an error message that the recovery link was invalid. More information can be found at [Ory Kratos Account Recovery Documentation](../self-service/flows/account-recovery.mdx). # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.submit_self_service_recovery_flow(flow, async_req=True)
>>> result = thread.get()
Args:
flow (str): The Recovery Flow ID The value for this parameter comes from `flow` URL Query parameter sent to your application (e.g. `/recovery?flow=abcde`).
Keyword Args:
body ({str: (bool, date, datetime, dict, float, int, list, str, none_type)}): [optional]
_return_http_data_only (bool): response data without HTTP status
code and headers. Default is True.
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (float/tuple): timeout setting for this request. If one
number provided, it will be total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
async_req (bool): execute request asynchronously
Returns:
None
If the method is called asynchronously, returns the request
thread.
"""
kwargs['async_req'] = kwargs.get(
'async_req', False
)
kwargs['_return_http_data_only'] = kwargs.get(
'_return_http_data_only', True
)
kwargs['_preload_content'] = kwargs.get(
'_preload_content', True
)
kwargs['_request_timeout'] = kwargs.get(
'_request_timeout', None
)
kwargs['_check_input_type'] = kwargs.get(
'_check_input_type', True
)
kwargs['_check_return_type'] = kwargs.get(
'_check_return_type', True
)
kwargs['_host_index'] = kwargs.get('_host_index')
kwargs['flow'] = \
flow
return self.call_with_http_info(**kwargs)
self.submit_self_service_recovery_flow = _Endpoint(
settings={
'response_type': None,
'auth': [],
'endpoint_path': '/api/kratos/public/self-service/recovery',
'operation_id': 'submit_self_service_recovery_flow',
'http_method': 'POST',
'servers': None,
},
params_map={
'all': [
'flow',
'body',
],
'required': [
'flow',
],
'nullable': [
],
'enum': [
],
'validation': [
]
},
root_map={
'validations': {
},
'allowed_values': {
},
'openapi_types': {
'flow':
(str,),
'body':
({str: (bool, date, datetime, dict, float, int, list, str, none_type)},),
},
'attribute_map': {
'flow': 'flow',
},
'location_map': {
'flow': 'query',
'body': 'body',
},
'collection_format_map': {
}
},
headers_map={
'accept': [
'application/json'
],
'content_type': [
'application/json',
'application/x-www-form-urlencoded'
]
},
api_client=api_client,
callable=__submit_self_service_recovery_flow
)
def __submit_self_service_recovery_flow_with_link_method(
self,
**kwargs
):
"""Complete Recovery Flow with Link Method # noqa: E501
Use this endpoint to complete a recovery flow using the link method. This endpoint behaves differently for API and browser flows and has several states: `choose_method` expects `flow` (in the URL query) and `email` (in the body) to be sent and works with API- and Browser-initiated flows. For API clients it returns a HTTP 200 OK when the form is valid, a HTTP 400 Bad Request when the form is invalid, and a HTTP 302 Found redirect with a fresh recovery flow if the flow was otherwise invalid (e.g. expired). For Browser clients it returns a HTTP 302 Found redirect to the Recovery UI URL with the Recovery Flow ID appended. `sent_email` is the success state after `choose_method` and allows the user to request another recovery email. It works for both API and Browser-initiated flows and returns the same responses as the flow in `choose_method` state. `passed_challenge` expects a `token` to be sent in the URL query and given the nature of the flow (\"sending a recovery link\") does not have any API capabilities. The server responds with a HTTP 302 Found redirect either to the Settings UI URL (if the link was valid), instructing the user to update their password, or to the Recovery UI URL with a new Recovery Flow ID which contains an error message that the recovery link was invalid. More information can be found at [Ory Kratos Account Recovery Documentation](../self-service/flows/account-recovery.mdx). # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.submit_self_service_recovery_flow_with_link_method(async_req=True)
>>> result = thread.get()
Keyword Args:
token (str): Recovery Token The recovery token which completes the recovery request. If the token is invalid (e.g. expired) an error will be shown to the end-user. [optional]
flow (str): The Flow ID format: uuid. [optional]
submit_self_service_recovery_flow_with_link_method (SubmitSelfServiceRecoveryFlowWithLinkMethod): [optional]
_return_http_data_only (bool): response data without HTTP status
code and headers. Default is True.
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (float/tuple): timeout setting for this request. If one
number provided, it will be total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
async_req (bool): execute request asynchronously
Returns:
None
If the method is called asynchronously, returns the request
thread.
"""
kwargs['async_req'] = kwargs.get(
'async_req', False
)
kwargs['_return_http_data_only'] = kwargs.get(
'_return_http_data_only', True
)
kwargs['_preload_content'] = kwargs.get(
'_preload_content', True
)
kwargs['_request_timeout'] = kwargs.get(
'_request_timeout', None
)
kwargs['_check_input_type'] = kwargs.get(
'_check_input_type', True
)
kwargs['_check_return_type'] = kwargs.get(
'_check_return_type', True
)
kwargs['_host_index'] = kwargs.get('_host_index')
return self.call_with_http_info(**kwargs)
self.submit_self_service_recovery_flow_with_link_method = _Endpoint(
settings={
'response_type': None,
'auth': [],
'endpoint_path': '/api/kratos/public/self-service/recovery/methods/link',
'operation_id': 'submit_self_service_recovery_flow_with_link_method',
'http_method': 'POST',
'servers': None,
},
params_map={
'all': [
'token',
'flow',
'submit_self_service_recovery_flow_with_link_method',
],
'required': [],
'nullable': [
],
'enum': [
],
'validation': [
]
},
root_map={
'validations': {
},
'allowed_values': {
},
'openapi_types': {
'token':
(str,),
'flow':
(str,),
'submit_self_service_recovery_flow_with_link_method':
(SubmitSelfServiceRecoveryFlowWithLinkMethod,),
},
'attribute_map': {
'token': 'token',
'flow': 'flow',
},
'location_map': {
'token': 'query',
'flow': 'query',
'submit_self_service_recovery_flow_with_link_method': 'body',
},
'collection_format_map': {
}
},
headers_map={
'accept': [
'application/json'
],
'content_type': [
'application/json',
'application/x-www-form-urlencoded'
]
},
api_client=api_client,
callable=__submit_self_service_recovery_flow_with_link_method
)
def __submit_self_service_registration_flow(
self,
flow,
**kwargs
):
"""Submit a Registration Flow # noqa: E501
:::info This endpoint is EXPERIMENTAL and subject to potential breaking changes in the future. ::: Use this endpoint to complete a registration flow by sending an identity's traits and password. This endpoint behaves differently for API and browser flows. API flows expect `application/json` to be sent in the body and respond with HTTP 200 and an application/json body with the created identity on success - if the session hook is configured the `session` and `session_token` will also be included; HTTP 302 redirect to a fresh registration flow if the original flow expired with the appropriate error messages set; HTTP 400 on form validation errors. Browser flows expect a Content-Type of `application/x-www-form-urlencoded` or `application/json` to be sent in the body and respond with a HTTP 302 redirect to the post/after registration URL or the `return_to` value if it was set and if the registration succeeded; a HTTP 302 redirect to the registration UI URL with the flow ID containing the validation errors otherwise. Browser flows with an accept header of `application/json` will not redirect but instead respond with HTTP 200 and an application/json body with the signed in identity and a `Set-Cookie` header on success; HTTP 302 redirect to a fresh login flow if the original flow expired with the appropriate error messages set; HTTP 400 on form validation errors. More information can be found at [Ory Kratos User Login and User Registration Documentation](https://www.ory.sh/docs/next/kratos/self-service/flows/user-login-user-registration). # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.submit_self_service_registration_flow(flow, async_req=True)
>>> result = thread.get()
Args:
flow (str): The Registration Flow ID The value for this parameter comes from `flow` URL Query parameter sent to your application (e.g. `/registration?flow=abcde`).
Keyword Args:
submit_self_service_registration_flow (SubmitSelfServiceRegistrationFlow): [optional]
_return_http_data_only (bool): response data without HTTP status
code and headers. Default is True.
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (float/tuple): timeout setting for this request. If one
number provided, it will be total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
async_req (bool): execute request asynchronously
Returns:
RegistrationViaApiResponse
If the method is called asynchronously, returns the request
thread.
"""
kwargs['async_req'] = kwargs.get(
'async_req', False
)
kwargs['_return_http_data_only'] = kwargs.get(
'_return_http_data_only', True
)
kwargs['_preload_content'] = kwargs.get(
'_preload_content', True
)
kwargs['_request_timeout'] = kwargs.get(
'_request_timeout', None
)
kwargs['_check_input_type'] = kwargs.get(
'_check_input_type', True
)
kwargs['_check_return_type'] = kwargs.get(
'_check_return_type', True
)
kwargs['_host_index'] = kwargs.get('_host_index')
kwargs['flow'] = \
flow
return self.call_with_http_info(**kwargs)
self.submit_self_service_registration_flow = _Endpoint(
settings={
'response_type': (RegistrationViaApiResponse,),
'auth': [],
'endpoint_path': '/api/kratos/public/self-service/registration',
'operation_id': 'submit_self_service_registration_flow',
'http_method': 'POST',
'servers': None,
},
params_map={
'all': [
'flow',
'submit_self_service_registration_flow',
],
'required': [
'flow',
],
'nullable': [
],
'enum': [
],
'validation': [
]
},
root_map={
'validations': {
},
'allowed_values': {
},
'openapi_types': {
'flow':
(str,),
'submit_self_service_registration_flow':
(SubmitSelfServiceRegistrationFlow,),
},
'attribute_map': {
'flow': 'flow',
},
'location_map': {
'flow': 'query',
'submit_self_service_registration_flow': 'body',
},
'collection_format_map': {
}
},
headers_map={
'accept': [
'application/json'
],
'content_type': [
'application/json',
'application/x-www-form-urlencoded'
]
},
api_client=api_client,
callable=__submit_self_service_registration_flow
)
def __submit_self_service_settings_flow(
self,
flow,
**kwargs
):
"""Complete Settings Flow # noqa: E501
Use this endpoint to complete a settings flow by sending an identity's updated password. This endpoint behaves differently for API and browser flows. API-initiated flows expect `application/json` to be sent in the body and respond with HTTP 200 and an application/json body with the session token on success; HTTP 302 redirect to a fresh settings flow if the original flow expired with the appropriate error messages set; HTTP 400 on form validation errors. HTTP 401 when the endpoint is called without a valid session token. HTTP 403 when `selfservice.flows.settings.privileged_session_max_age` was reached. Implies that the user needs to re-authenticate. Browser flows expect `application/x-www-form-urlencoded` to be sent in the body and respond with a HTTP 302 redirect to the post/after settings URL or the `return_to` value if it was set and if the flow succeeded; a HTTP 302 redirect to the Settings UI URL with the flow ID containing the validation errors otherwise; and a HTTP 302 redirect to the login endpoint when `selfservice.flows.settings.privileged_session_max_age` was reached. More information can be found at [Ory Kratos User Settings & Profile Management Documentation](../self-service/flows/user-settings). # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.submit_self_service_settings_flow(flow, async_req=True)
>>> result = thread.get()
Args:
flow (str): The Settings Flow ID The value for this parameter comes from `flow` URL Query parameter sent to your application (e.g. `/settings?flow=abcde`).
Keyword Args:
x_session_token (str): The Session Token of the Identity performing the settings flow.. [optional]
submit_self_service_settings_flow (SubmitSelfServiceSettingsFlow): [optional]
_return_http_data_only (bool): response data without HTTP status
code and headers. Default is True.
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (float/tuple): timeout setting for this request. If one
number provided, it will be total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
async_req (bool): execute request asynchronously
Returns:
SettingsViaApiResponse
If the method is called asynchronously, returns the request
thread.
"""
kwargs['async_req'] = kwargs.get(
'async_req', False
)
kwargs['_return_http_data_only'] = kwargs.get(
'_return_http_data_only', True
)
kwargs['_preload_content'] = kwargs.get(
'_preload_content', True
)
kwargs['_request_timeout'] = kwargs.get(
'_request_timeout', None
)
kwargs['_check_input_type'] = kwargs.get(
'_check_input_type', True
)
kwargs['_check_return_type'] = kwargs.get(
'_check_return_type', True
)
kwargs['_host_index'] = kwargs.get('_host_index')
kwargs['flow'] = \
flow
return self.call_with_http_info(**kwargs)
self.submit_self_service_settings_flow = _Endpoint(
settings={
'response_type': (SettingsViaApiResponse,),
'auth': [
'sessionToken'
],
'endpoint_path': '/api/kratos/public/self-service/settings',
'operation_id': 'submit_self_service_settings_flow',
'http_method': 'POST',
'servers': None,
},
params_map={
'all': [
'flow',
'x_session_token',
'submit_self_service_settings_flow',
],
'required': [
'flow',
],
'nullable': [
],
'enum': [
],
'validation': [
]
},
root_map={
'validations': {
},
'allowed_values': {
},
'openapi_types': {
'flow':
(str,),
'x_session_token':
(str,),
'submit_self_service_settings_flow':
(SubmitSelfServiceSettingsFlow,),
},
'attribute_map': {
'flow': 'flow',
'x_session_token': 'X-Session-Token',
},
'location_map': {
'flow': 'query',
'x_session_token': 'header',
'submit_self_service_settings_flow': 'body',
},
'collection_format_map': {
}
},
headers_map={
'accept': [
'application/json'
],
'content_type': [
'application/json',
'application/x-www-form-urlencoded'
]
},
api_client=api_client,
callable=__submit_self_service_settings_flow
)
def __submit_self_service_verification_flow(
self,
flow,
**kwargs
):
"""Complete Verification Flow # noqa: E501
Use this endpoint to complete a verification flow. This endpoint behaves differently for API and browser flows and has several states: `choose_method` expects `flow` (in the URL query) and `email` (in the body) to be sent and works with API- and Browser-initiated flows. For API clients it returns a HTTP 200 OK when the form is valid, a HTTP 400 Bad Request when the form is invalid, and a HTTP 302 Found redirect with a fresh verification flow if the flow was otherwise invalid (e.g. expired). For Browser clients it returns a HTTP 302 Found redirect to the Verification UI URL with the Verification Flow ID appended. `sent_email` is the success state after `choose_method` when using the `link` method and allows the user to request another verification email. It works for both API and Browser-initiated flows and returns the same responses as the flow in `choose_method` state. `passed_challenge` expects a `token` to be sent in the URL query and given the nature of the flow (\"sending a verification link\") does not have any API capabilities. The server responds with a HTTP 302 Found redirect either to the Settings UI URL (if the link was valid), instructing the user to update their password, or to the Verification UI URL with a new Verification Flow ID which contains an error message that the verification link was invalid. More information can be found at [Ory Kratos Email and Phone Verification Documentation](https://www.ory.sh/docs/kratos/selfservice/flows/verify-email-account-activation). # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.submit_self_service_verification_flow(flow, async_req=True)
>>> result = thread.get()
Args:
flow (str): The Verification Flow ID The value for this parameter comes from `flow` URL Query parameter sent to your application (e.g. `/verification?flow=abcde`).
Keyword Args:
body ({str: (bool, date, datetime, dict, float, int, list, str, none_type)}): [optional]
_return_http_data_only (bool): response data without HTTP status
code and headers. Default is True.
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (float/tuple): timeout setting for this request. If one
number provided, it will be total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
async_req (bool): execute request asynchronously
Returns:
None
If the method is called asynchronously, returns the request
thread.
"""
kwargs['async_req'] = kwargs.get(
'async_req', False
)
kwargs['_return_http_data_only'] = kwargs.get(
'_return_http_data_only', True
)
kwargs['_preload_content'] = kwargs.get(
'_preload_content', True
)
kwargs['_request_timeout'] = kwargs.get(
'_request_timeout', None
)
kwargs['_check_input_type'] = kwargs.get(
'_check_input_type', True
)
kwargs['_check_return_type'] = kwargs.get(
'_check_return_type', True
)
kwargs['_host_index'] = kwargs.get('_host_index')
kwargs['flow'] = \
flow
return self.call_with_http_info(**kwargs)
self.submit_self_service_verification_flow = _Endpoint(
settings={
'response_type': None,
'auth': [],
'endpoint_path': '/api/kratos/public/self-service/verification/flows',
'operation_id': 'submit_self_service_verification_flow',
'http_method': 'POST',
'servers': None,
},
params_map={
'all': [
'flow',
'body',
],
'required': [
'flow',
],
'nullable': [
],
'enum': [
],
'validation': [
]
},
root_map={
'validations': {
},
'allowed_values': {
},
'openapi_types': {
'flow':
(str,),
'body':
({str: (bool, date, datetime, dict, float, int, list, str, none_type)},),
},
'attribute_map': {
'flow': 'flow',
},
'location_map': {
'flow': 'query',
'body': 'body',
},
'collection_format_map': {
}
},
headers_map={
'accept': [
'application/json'
],
'content_type': [
'application/json',
'application/x-www-form-urlencoded'
]
},
api_client=api_client,
callable=__submit_self_service_verification_flow
)
def __to_session(
self,
**kwargs
):
"""Check Who the Current HTTP Session Belongs To # noqa: E501
Uses the HTTP headers in the GET request to determine (e.g. by checking the cookies) who is authenticated. Returns a session object in the body or 401 if the credentials are invalid or no credentials were sent. Additionally, when the request is successful, it adds the user ID to the 'X-Kratos-Authenticated-Identity-Id' header in the response. This endpoint is useful for: AJAX calls (remember to send credentials and set up CORS correctly!); reverse proxies and API gateways; server-side calls - use the `X-Session-Token` header! # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.to_session(async_req=True)
>>> result = thread.get()
Keyword Args:
x_session_token (str): [optional]
_return_http_data_only (bool): response data without HTTP status
code and headers. Default is True.
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (float/tuple): timeout setting for this request. If one
number provided, it will be total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
async_req (bool): execute request asynchronously
Returns:
Session
If the method is called asynchronously, returns the request
thread.
"""
kwargs['async_req'] = kwargs.get(
'async_req', False
)
kwargs['_return_http_data_only'] = kwargs.get(
'_return_http_data_only', True
)
kwargs['_preload_content'] = kwargs.get(
'_preload_content', True
)
kwargs['_request_timeout'] = kwargs.get(
'_request_timeout', None
)
kwargs['_check_input_type'] = kwargs.get(
'_check_input_type', True
)
kwargs['_check_return_type'] = kwargs.get(
'_check_return_type', True
)
kwargs['_host_index'] = kwargs.get('_host_index')
return self.call_with_http_info(**kwargs)
self.to_session = _Endpoint(
settings={
'response_type': (Session,),
'auth': [
'sessionCookie'
],
'endpoint_path': '/api/kratos/public/sessions/whoami',
'operation_id': 'to_session',
'http_method': 'GET',
'servers': None,
},
params_map={
'all': [
'x_session_token',
],
'required': [],
'nullable': [
],
'enum': [
],
'validation': [
]
},
root_map={
'validations': {
},
'allowed_values': {
},
'openapi_types': {
'x_session_token':
(str,),
},
'attribute_map': {
'x_session_token': 'X-Session-Token',
},
'location_map': {
'x_session_token': 'header',
},
'collection_format_map': {
}
},
headers_map={
'accept': [
'application/json'
],
'content_type': [],
},
api_client=api_client,
callable=__to_session
)
def __update_identity_admin(
self,
id,
**kwargs
):
"""Update an Identity # noqa: E501
This endpoint updates an identity. It is NOT possible to set an identity's credentials (password, ...) using this method! A way to achieve that will be introduced in the future. The full identity payload (except credentials) is expected. This endpoint does not support patching. Learn how identities work in [Ory Kratos' User And Identity Model Documentation](https://www.ory.sh/docs/next/kratos/concepts/identity-user-model). # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.update_identity_admin(id, async_req=True)
>>> result = thread.get()
Args:
id (str): ID must be set to the ID of the identity you want to update
Keyword Args:
update_identity (UpdateIdentity): [optional]
_return_http_data_only (bool): response data without HTTP status
code and headers. Default is True.
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (float/tuple): timeout setting for this request. If one
number provided, it will be total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
async_req (bool): execute request asynchronously
Returns:
Identity
If the method is called asynchronously, returns the request
thread.
"""
kwargs['async_req'] = kwargs.get(
'async_req', False
)
kwargs['_return_http_data_only'] = kwargs.get(
'_return_http_data_only', True
)
kwargs['_preload_content'] = kwargs.get(
'_preload_content', True
)
kwargs['_request_timeout'] = kwargs.get(
'_request_timeout', None
)
kwargs['_check_input_type'] = kwargs.get(
'_check_input_type', True
)
kwargs['_check_return_type'] = kwargs.get(
'_check_return_type', True
)
kwargs['_host_index'] = kwargs.get('_host_index')
kwargs['id'] = \
id
return self.call_with_http_info(**kwargs)
self.update_identity_admin = _Endpoint(
settings={
'response_type': (Identity,),
'auth': [
'oryToken'
],
'endpoint_path': '/api/kratos/admin/identities/{id}',
'operation_id': 'update_identity_admin',
'http_method': 'PUT',
'servers': None,
},
params_map={
'all': [
'id',
'update_identity',
],
'required': [
'id',
],
'nullable': [
],
'enum': [
],
'validation': [
]
},
root_map={
'validations': {
},
'allowed_values': {
},
'openapi_types': {
'id':
(str,),
'update_identity':
(UpdateIdentity,),
},
'attribute_map': {
'id': 'id',
},
'location_map': {
'id': 'path',
'update_identity': 'body',
},
'collection_format_map': {
}
},
headers_map={
'accept': [
'application/json'
],
'content_type': [
'application/json'
]
},
api_client=api_client,
callable=__update_identity_admin
)
| 42.575848 | 1,584 | 0.507778 | 20,590 | 214,710 | 5.079796 | 0.02778 | 0.027277 | 0.021378 | 0.0222 | 0.935244 | 0.922213 | 0.897574 | 0.886742 | 0.880078 | 0.871272 | 0 | 0.004336 | 0.414587 | 214,710 | 5,042 | 1,585 | 42.584292 | 0.827788 | 0.421112 | 0 | 0.670181 | 0 | 0 | 0.224641 | 0.059756 | 0 | 0 | 0 | 0 | 0 | 1 | 0.013253 | false | 0 | 0.008735 | 0 | 0.035241 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
c40de7e47a2491a024e633ea2bcb5eff151e0fc5 | 5,005 | py | Python | signal_block/one/tests/test_block.py | spectrum-dev/django-block-monolith | c17a1ef98ae813a4e94581e2e52a4a03f0e65769 | [
"MIT"
] | null | null | null | signal_block/one/tests/test_block.py | spectrum-dev/django-block-monolith | c17a1ef98ae813a4e94581e2e52a4a03f0e65769 | [
"MIT"
] | null | null | null | signal_block/one/tests/test_block.py | spectrum-dev/django-block-monolith | c17a1ef98ae813a4e94581e2e52a4a03f0e65769 | [
"MIT"
] | null | null | null | from django.test import TestCase
from blocks.event import event_ingestor
from signal_block.one.exceptions import SignalBlockOneInvalidInputPayloadException
class PostRun(TestCase):
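"""Signal block one: emits BUY/SELL events where two or more computational-block output series intersect."""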
def setUp(self):
self.payload = {
"blockType": "SIGNAL_BLOCK",
"blockId": 1,
}
def test_intersect_event_two_outputs_single_intersection_ok(self):
payload = {
**self.payload,
"inputs": {"event_action": "BUY"},
"outputs": {
"COMPUTATIONAL_BLOCK-1-1": [
{"timestamp": "2020-01-01", "data": 10.00},
{"timestamp": "2020-01-02", "data": 11.00},
{"timestamp": "2020-01-03", "data": 13.00},
],
"COMPUTATIONAL_BLOCK-1-2": [
{"timestamp": "2020-01-01", "data": 14.00},
{"timestamp": "2020-01-02", "data": 11.00},
{"timestamp": "2020-01-03", "data": 10.00},
],
},
}
response = event_ingestor(payload)
self.assertEqual(response, [{"timestamp": "2020-01-02", "order": "BUY"}])
def test_intersect_event_two_outputs_multiple_intersections_ok(self):
payload = {
**self.payload,
"inputs": {"event_action": "BUY"},
"outputs": {
"COMPUTATIONAL_BLOCK-1-1": [
{"timestamp": "2020-01-01", "data": 10.00},
{"timestamp": "2020-01-02", "data": 11.00},
{"timestamp": "2020-01-03", "data": 13.00},
{"timestamp": "2020-01-04", "data": 9.00},
{"timestamp": "2020-01-5", "data": 7.00},
],
"COMPUTATIONAL_BLOCK-1-2": [
{"timestamp": "2020-01-01", "data": 14.00},
{"timestamp": "2020-01-02", "data": 11.00},
{"timestamp": "2020-01-03", "data": 10.00},
{"timestamp": "2020-01-04", "data": 9.00},
{"timestamp": "2020-01-5", "data": 7.00},
],
},
}
response = event_ingestor(payload)
self.assertEqual(
response,
[
{"timestamp": "2020-01-02", "order": "BUY"},
{"order": "BUY", "timestamp": "2020-01-04"},
],
)
def test_intersect_event_three_outputs_single_intersection_ok(self):
payload = {
**self.payload,
"inputs": {"event_action": "SELL"},
"outputs": {
"COMPUTATIONAL_BLOCK-1-1": [
{"timestamp": "2020-01-01", "data": 10.00},
{"timestamp": "2020-01-02", "data": 11.00},
{"timestamp": "2020-01-03", "data": 13.00},
{"timestamp": "2020-01-04", "data": 12.00},
],
"COMPUTATIONAL_BLOCK-1-2": [
{"timestamp": "2020-01-01", "data": 14.00},
{"timestamp": "2020-01-02", "data": 13.50},
{"timestamp": "2020-01-03", "data": 13.00},
{"timestamp": "2020-01-04", "data": 12.00},
],
"COMPUTATIONAL_BLOCK-1-3": [
{"timestamp": "2020-01-01", "data": 9.00},
{"timestamp": "2020-01-02", "data": 10.00},
{"timestamp": "2020-01-03", "data": 13.00},
{"timestamp": "2020-01-04", "data": 15.00},
],
},
}
response = event_ingestor(payload)
self.assertEqual(
response,
[{"timestamp": "2020-01-03", "order": "SELL"}],
)
def test_failure_invalid_event_action(self):
payload = {
**self.payload,
"inputs": {"event_action": "FOO"},
"outputs": {
"COMPUTATIONAL_BLOCK-1-1": [
{"timestamp": "2020-01-01", "data": 10.00},
{"timestamp": "2020-01-02", "data": 11.00},
{"timestamp": "2020-01-03", "data": 13.00},
{"timestamp": "2020-01-04", "data": 12.00},
],
"COMPUTATIONAL_BLOCK-1-2": [
{"timestamp": "2020-01-01", "data": 14.00},
{"timestamp": "2020-01-02", "data": 13.50},
{"timestamp": "2020-01-03", "data": 13.00},
{"timestamp": "2020-01-04", "data": 12.00},
],
"COMPUTATIONAL_BLOCK-1-3": [
{"timestamp": "2020-01-01", "data": 9.00},
{"timestamp": "2020-01-02", "data": 10.00},
{"timestamp": "2020-01-03", "data": 13.00},
{"timestamp": "2020-01-04", "data": 15.00},
],
},
}
with self.assertRaises(SignalBlockOneInvalidInputPayloadException):
event_ingestor(payload)
| 38.79845 | 82 | 0.432967 | 472 | 5,005 | 4.493644 | 0.131356 | 0.269684 | 0.311174 | 0.224422 | 0.815182 | 0.815182 | 0.815182 | 0.796794 | 0.796794 | 0.796794 | 0 | 0.170676 | 0.385415 | 5,005 | 128 | 83 | 39.101563 | 0.518856 | 0 | 0 | 0.736842 | 0 | 0 | 0.279321 | 0.045954 | 0 | 0 | 0 | 0 | 0.035088 | 1 | 0.04386 | false | 0 | 0.026316 | 0 | 0.078947 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 10 |
c40f5876c3668e77a4de941a87eb95eafb24976f | 8,481 | py | Python | MultiObjectiveProblem.py | yclavinas/adaptative-techniques-for-moea_d | 3e9e31f734ff606de15e0487477c9b1ef4a82bf7 | [
"MIT"
] | 1 | 2020-03-19T20:09:32.000Z | 2020-03-19T20:09:32.000Z | MultiObjectiveProblem.py | yclavinas/adaptative-techniques-for-moea_d | 3e9e31f734ff606de15e0487477c9b1ef4a82bf7 | [
"MIT"
] | null | null | null | MultiObjectiveProblem.py | yclavinas/adaptative-techniques-for-moea_d | 3e9e31f734ff606de15e0487477c9b1ef4a82bf7 | [
"MIT"
] | 1 | 2020-04-19T14:47:02.000Z | 2020-04-19T14:47:02.000Z | import numpy as np
import math
def SCH(x):
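"""Schaffer function N. 1 (SCH): f1 = x^2, f2 = (x - 2)^2; Pareto-optimal solutions lie in x in [0, 2]."""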
f1 = x[0] ** 2
f2 = (x[0] - 2) ** 2
return np.array([f1, f2])
def UF1(x):
"""
adapted from
https://github.com/Project-Platypus/Platypus/blob/master/platypus/problems.py
"""
nvars = len(x)
count1 = 0
count2 = 0
sum1 = 0.0
sum2 = 0.0
for j in range(2, nvars+1):
yj = x[j-1] - math.sin(6.0*math.pi*x[0] + j*math.pi/nvars)
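# distance terms are split by parity: odd-indexed j (set J1) penalize f1 via sum1,
# even-indexed j (set J2) penalize f2 via sum2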
if j % 2 == 1:
sum1 += yj**2
count1 += 1
else:
sum2 += yj**2
count2 += 1
f1 = x[0] + 2.0 * sum1 / count1
f2 = 1.0 - math.sqrt(x[0]) + 2.0 * sum2 / count2
return np.array([f1, f2])
def UF2(x):
"""
adapted from
https://github.com/Project-Platypus/Platypus/blob/master/platypus/problems.py
"""
nvars = len(x)
count1 = 0
count2 = 0
sum1 = 0.0
sum2 = 0.0
for j in range(2, nvars+1):
if j % 2 == 1:
yj = x[j-1] - 0.3*x[0]*(x[0] * math.cos(24.0*math.pi*x[0] + 4.0*j*math.pi/nvars) + 2.0)*math.cos(6.0*math.pi*x[0] + j*math.pi/nvars)
sum1 += yj**2
count1 += 1
else:
yj = x[j-1] - 0.3*x[0]*(x[0] * math.cos(24.0*math.pi*x[0] + 4.0*j*math.pi/nvars) + 2.0)*math.sin(6.0*math.pi*x[0] + j*math.pi/nvars)
sum2 += yj**2
count2 += 1
f1 = x[0] + 2.0 * sum1 / count1
f2 = 1.0 - math.sqrt(x[0]) + 2.0 * sum2 / count2
return np.array([f1, f2])
def UF3(x):
"""
adapted from
https://github.com/Project-Platypus/Platypus/blob/master/platypus/problems.py
"""
nvars = len(x)
count1 = 0
count2 = 0
sum1 = 0.0
sum2 = 0.0
prod1 = 1.0
prod2 = 1.0
for j in range(2, nvars+1):
yj = x[j-1] - math.pow(x[0], 0.5*(1.0 + 3.0*(j - 2.0) / (nvars - 2.0)))
pj = math.cos(20.0*yj*math.pi/math.sqrt(j))
if j % 2 == 1:
sum1 += yj**2
prod1 *= pj
count1 += 1
else:
sum2 += yj**2
prod2 *= pj
count2 += 1
f1 = x[0] + 2.0 * (4.0*sum1 - 2.0*prod1 + 2.0) / count1
f2 = 1.0 - math.sqrt(x[0]) + 2.0 * (4.0*sum2 - 2.0*prod2 + 2.0) / count2
return np.array([f1, f2])
def UF4(x):
"""
adapted from
https://github.com/Project-Platypus/Platypus/blob/master/platypus/problems.py
"""
nvars = len(x)
count1 = 0
count2 = 0
sum1 = 0.0
sum2 = 0.0
for j in range(2, nvars+1):
yj = x[j-1] - math.sin(6.0*math.pi*x[0] + j*math.pi/nvars)
hj = abs(yj) / (1.0 + math.exp(2.0*abs(yj)))
if j % 2 == 1:
sum1 += hj
count1 += 1
else:
sum2 += hj
count2 += 1
f1 = x[0] + 2.0*sum1/count1
f2 = 1.0 - x[0]**2 + 2.0*sum2/count2
return np.array([f1, f2])
def UF5(x):
"""
adapted from
https://github.com/Project-Platypus/Platypus/blob/master/platypus/problems.py
"""
nvars = len(x)
count1 = 0
count2 = 0
sum1 = 0.0
sum2 = 0.0
N = 10.0
E = 0.1
for j in range(2, nvars+1):
yj = x[j-1] - math.sin(6.0*math.pi*x[0] + j*math.pi/nvars)
hj = 2.0*yj**2 - math.cos(4.0*math.pi*yj) + 1.0
if j % 2 == 1:
sum1 += hj
count1 += 1
else:
sum2 += hj
count2 += 1
hj = (0.5/N + E) * abs(math.sin(2.0*N*math.pi*x[0]))
f1 = x[0] + hj + 2.0*sum1/count1
f2 = 1.0 - x[0] + hj + 2.0*sum2/count2
return np.array([f1, f2])
def UF6(x):
"""
adapted from
https://github.com/Project-Platypus/Platypus/blob/master/platypus/problems.py
"""
nvars = len(x)
count1 = 0
count2 = 0
sum1 = 0.0
sum2 = 0.0
prod1 = 1.0
prod2 = 1.0
N = 2.0
E = 0.1
for j in range(2, nvars+1):
yj = x[j-1] - math.sin(6.0*math.pi*x[0] + j*math.pi/nvars)
pj = math.cos(20.0*yj*math.pi/math.sqrt(j))
if j % 2 == 1:
sum1 += yj**2
prod1 *= pj
count1 += 1
else:
sum2 += yj**2
prod2 *= pj
count2 += 1
hj = 2.0 * (0.5/N + E) * math.sin(2.0*N*math.pi*x[0])
hj = max(hj, 0.0)
f1 = x[0] + hj + 2.0*(4.0*sum1 - 2.0*prod1 + 2.0)/count1
f2 = 1.0 - x[0] + hj + 2.0*(4.0*sum2 - 2.0*prod2 + 2.0)/count2
return np.array([f1, f2])
def UF7(x):
"""
adapted from
https://github.com/Project-Platypus/Platypus/blob/master/platypus/problems.py
"""
nvars = len(x)
count1 = 0
count2 = 0
sum1 = 0.0
sum2 = 0.0
for j in range(2, nvars+1):
yj = x[j-1] - math.sin(6.0*math.pi*x[0] + j*math.pi/nvars)
if j % 2 == 1:
sum1 += yj**2
count1 += 1
else:
sum2 += yj**2
count2 += 1
yj = math.pow(x[0], 0.2)
f1 = yj + 2.0*sum1/count1
f2 = 1.0 - yj + 2.0*sum2/count2
return np.array([f1, f2])
def UF8(x):
"""
adapted from
https://github.com/Project-Platypus/Platypus/blob/master/platypus/problems.py
"""
nvars = len(x)
count1 = 0
count2 = 0
count3 = 0
sum1 = 0.0
sum2 = 0.0
sum3 = 0.0
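# UF8 is tri-objective: decision variables are partitioned by j mod 3 across the f1, f2 and f3 penalties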
for j in range(3, nvars+1):
yj = x[j-1] - 2.0*x[1]*math.sin(2.0*math.pi*x[0] + j*math.pi/nvars)
if j % 3 == 1:
sum1 += yj**2
count1 += 1
elif j % 3 == 2:
sum2 += yj**2
count2 += 1
else:
sum3 += yj**2
count3 += 1
f1 = math.cos(0.5*math.pi*x[0]) * math.cos(0.5*math.pi*x[1]) + 2.0*sum1/count1
f2 = math.cos(0.5*math.pi*x[0]) * math.sin(0.5*math.pi*x[1]) + 2.0*sum2/count2
f3 = math.sin(0.5*math.pi*x[0]) + 2.0*sum3/count3
return np.array([f1, f2, f3])
def UF9(x):
"""
adapted from
https://github.com/Project-Platypus/Platypus/blob/master/platypus/problems.py
"""
nvars = len(x)
count1 = 0
count2 = 0
count3 = 0
sum1 = 0.0
sum2 = 0.0
sum3 = 0.0
E = 0.1
for j in range(3, nvars+1):
yj = x[j-1] - 2.0*x[1]*math.sin(2.0*math.pi*x[0] + j*math.pi/nvars)
if j % 3 == 1:
sum1 += yj**2
count1 += 1
elif j % 3 == 2:
sum2 += yj**2
count2 += 1
else:
sum3 += yj**2
count3 += 1
yj = (1.0 + E) * (1.0 - 4.0*(2.0*x[0] - 1.0)**2)
yj = max(yj, 0.0)
f1 = 0.5*(yj + 2.0*x[0])*x[1] + 2.0*sum1/count1
f2 = 0.5*(yj - 2.0*x[0] + 2.0)*x[1] + 2.0*sum2/count2
f3 = 1.0 - x[1] + 2.0*sum3/count3
return np.array([f1, f2, f3])
def UF10(x):
"""
adapted from
https://github.com/Project-Platypus/Platypus/blob/master/platypus/problems.py
"""
nvars = len(x)
count1 = 0
count2 = 0
count3 = 0
sum1 = 0.0
sum2 = 0.0
sum3 = 0.0
for j in range(3, nvars+1):
yj = x[j-1] - 2.0*x[1]*math.sin(2.0*math.pi*x[0] + j*math.pi/nvars)
hj = 4.0*yj**2 - math.cos(8.0*math.pi*yj) + 1.0
if j % 3 == 1:
sum1 += hj
count1 += 1
elif j % 3 == 2:
sum2 += hj
count2 += 1
else:
sum3 += hj
count3 += 1
f1 = math.cos(0.5*math.pi*x[0])*math.cos(0.5*math.pi*x[1]) + 2.0*sum1/count1
f2 = math.cos(0.5*math.pi*x[0])*math.sin(0.5*math.pi*x[1]) + 2.0*sum2/count2
f3 = math.sin(0.5*math.pi*x[0]) + 2.0*sum3/count3
return np.array([f1, f2, f3])
| 22.798387 | 144 | 0.451951 | 1,452 | 8,481 | 2.639807 | 0.05303 | 0.02922 | 0.047482 | 0.045917 | 0.935821 | 0.925385 | 0.915471 | 0.891208 | 0.886773 | 0.880511 | 0 | 0.13676 | 0.36458 | 8,481 | 371 | 145 | 22.859838 | 0.574504 | 0.117911 | 0 | 0.831276 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.049383 | false | 0 | 0.00823 | 0 | 0.106996 | 0.012346 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
c479fba1c0c6367377c2faebf4961f75345a0e66 | 16,866 | py | Python | src/models.py | dczifra/lightly | d8bff271c6951da5b1b28c5d4c31ceba41aead80 | [
"MIT"
] | null | null | null | src/models.py | dczifra/lightly | d8bff271c6951da5b1b28c5d4c31ceba41aead80 | [
"MIT"
] | null | null | null | src/models.py | dczifra/lightly | d8bff271c6951da5b1b28c5d4c31ceba41aead80 | [
"MIT"
] | null | null | null | import torch
import torchvision
import torch.nn as nn
import lightly.data as ldata
import lightly.models as models
import lightly.loss as loss
import pytorch_lightning as pl
from lightly.utils import BenchmarkModule
from lightly.models.resnet import ResNetGenerator
from lightly.loss import NegativeCosineSimilarity, NTXentLoss, SwaVLoss, TsLoss, TwistLoss
from lightly.models.modules.heads import SwaVProjectionHead, SwaVPrototypes, SimCLRProjectionHead, SimSiamPredictionHead, ProjectionHead
gather_distributed = False
# ========================================
# MODELS
# ========================================
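# Self-supervised baselines (SimCLR, SwaV, SimSiam) plus "_ts"/Ts/Twist variants
# that additionally draw prototype batches from a second, labeled dataloader.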
class SimCLRModel(BenchmarkModule):
def __init__(self, dataloader_kNN, num_classes, lr_factor, max_epochs):
super().__init__(dataloader_kNN, num_classes)
# create a ResNet backbone and remove the classification head
#resnet = torchvision.models.resnet18()
resnet = ResNetGenerator('resnet-18')
self.backbone = nn.Sequential(
*list(resnet.children())[:-1],
nn.AdaptiveAvgPool2d(1)
)
self.projection_head = SimCLRProjectionHead(512, 512, 128)
self.criterion = NTXentLoss()
#self.dummy_param.device = 'cuda:0'
self.lr_factor = lr_factor
self.max_epochs = max_epochs
def forward(self, x):
x = self.backbone(x).flatten(start_dim=1)
z = self.projection_head(x)
return z
def training_step(self, batch, batch_index):
(x0, x1), _, _ = batch
z0 = self.forward(x0)
z1 = self.forward(x1)
loss = self.criterion(z0, z1)
self.log('train_loss_ssl', loss)
return loss
def configure_optimizers(self):
optim = torch.optim.SGD(
self.parameters(),
lr=6e-2 * self.lr_factor,
momentum=0.9,
weight_decay=5e-4
)
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optim, self.max_epochs)
return [optim], [scheduler]
class SwaVModel(BenchmarkModule):
def __init__(self, dataloader_kNN, num_classes, lr_factor, max_epochs):
super().__init__(dataloader_kNN, num_classes)
# create a ResNet backbone and remove the classification head
resnet = ResNetGenerator('resnet-18')
self.backbone = nn.Sequential(
*list(resnet.children())[:-1],
nn.AdaptiveAvgPool2d(1)
)
self.projection_head = SwaVProjectionHead(512, 512, 128)
self.prototypes = SwaVPrototypes(128, 512) # use 512 prototypes
self.criterion = SwaVLoss(sinkhorn_gather_distributed=gather_distributed)
self.lr_factor = lr_factor
self.max_epochs = max_epochs
def forward(self, x):
x = self.backbone(x).flatten(start_dim=1)
x = self.projection_head(x)
x = nn.functional.normalize(x, dim=1, p=2)
return self.prototypes(x)
def training_step(self, batch, batch_idx):
# normalize the prototypes so they are on the unit sphere
self.prototypes.normalize()
# the multi-crop dataloader returns a list of image crops where the
# first two items are the high resolution crops and the rest are low
# resolution crops
multi_crops, _, _ = batch
multi_crop_features = [self.forward(x) for x in multi_crops]
# split list of crop features into high and low resolution
high_resolution_features = multi_crop_features[:2]
low_resolution_features = multi_crop_features[2:]
# calculate the SwaV loss
loss = self.criterion(
high_resolution_features,
low_resolution_features
)
self.log('train_loss_ssl', loss)
return loss
def configure_optimizers(self):
optim = torch.optim.Adam(
self.parameters(),
lr=1e-3 * self.lr_factor,
weight_decay=1e-6,
)
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optim, self.max_epochs)
return [optim], [scheduler]
class SimSiamModel(BenchmarkModule):
def __init__(self, dataloader_kNN, num_classes, max_epochs):
super().__init__(dataloader_kNN, num_classes)
# create a ResNet backbone and remove the classification head
resnet = ResNetGenerator('resnet-18')
self.backbone = nn.Sequential(
*list(resnet.children())[:-1],
nn.AdaptiveAvgPool2d(1)
)
self.prediction_head = SimSiamPredictionHead(2048, 512, 2048)
# use a 2-layer projection head for cifar10 as described in the paper
self.projection_head = ProjectionHead([
(
512,
2048,
nn.BatchNorm1d(2048),
nn.ReLU(inplace=True)
),
(
2048,
2048,
nn.BatchNorm1d(2048),
None
)
])
self.criterion = NegativeCosineSimilarity()
self.max_epochs = max_epochs
def forward(self, x):
f = self.backbone(x).flatten(start_dim=1)
z = self.projection_head(f)
p = self.prediction_head(z)
z = z.detach()
return z, p
def training_step(self, batch, batch_idx):
(x0, x1), _, _ = batch
z0, p0 = self.forward(x0)
z1, p1 = self.forward(x1)
loss = 0.5 * (self.criterion(z0, p1) + self.criterion(z1, p0))
self.log('train_loss_ssl', loss)
return loss
def configure_optimizers(self):
optim = torch.optim.SGD(
self.parameters(),
lr=6e-2, # no lr-scaling, results in better training stability
momentum=0.9,
weight_decay=5e-4
)
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optim, self.max_epochs)
return [optim], [scheduler]
class SwaV_ts_Model(BenchmarkModule):
def __init__(self, dataloader_kNN, dataloader_prototype, num_classes, lr_factor, max_epochs):
super().__init__(dataloader_kNN, num_classes)
# create a ResNet backbone and remove the classification head
resnet = ResNetGenerator('resnet-18')
self.backbone = nn.Sequential(
*list(resnet.children())[:-1],
nn.AdaptiveAvgPool2d(1)
)
self.projection_head = SwaVProjectionHead(512, 512, 128)
self.prototypes = SwaVPrototypes(128, 512) # use 512 prototypes
self.criterion = SwaVLoss(sinkhorn_gather_distributed=gather_distributed)
self.lr_factor = lr_factor
self.max_epochs = max_epochs
self.dataloader_prototype = dataloader_prototype
self.supervised_iterator = iter(self.dataloader_prototype)
def next_prototypes(self):
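# fetch the next batch of prototype images, recreating the dataloader iterator
# once it is exhausted; returns the batch on the module's device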
try:
sdata, lab, _ = next(self.supervised_iterator)
except Exception:
self.supervised_iterator = iter(self.dataloader_prototype)
print(f'len.supervised_loader: {len(self.supervised_iterator)}')
sdata,lab, _ = next(self.supervised_iterator)
finally:
pass
#print(len(sdata))
return sdata.to(self.dummy_param.device)
def forward(self, x):
x = self.backbone(x).flatten(start_dim=1)
x = self.projection_head(x)
x = nn.functional.normalize(x, dim=1, p=2)
return self.prototypes(x)
#return x
def training_step(self, batch, batch_idx):
#proto = self.forward(self.next_prototypes())
# normalize the prototypes so they are on the unit sphere
self.prototypes.normalize()
# the multi-crop dataloader returns a list of image crops where the
# first two items are the high resolution crops and the rest are low
# resolution crops
multi_crops, _, _ = batch
multi_crop_features = [self.forward(x) for x in multi_crops]
#multi_crop_features = [self.forward(x)@proto.T for x in multi_crops]
# split list of crop features into high and low resolution
high_resolution_features = multi_crop_features[:2]
low_resolution_features = multi_crop_features[2:]
# calculate the SwaV loss
loss = self.criterion(
high_resolution_features,
low_resolution_features
)
self.log('train_loss_ssl', loss)
return loss
def configure_optimizers(self):
optim = torch.optim.Adam(
self.parameters(),
lr=1e-3 * self.lr_factor,
weight_decay=1e-6,
)
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optim, self.max_epochs)
return [optim], [scheduler]
class SimSiam_ts_Model(BenchmarkModule):
def __init__(self, dataloader_kNN, dataloader_prototype, num_classes, max_epochs):
super().__init__(dataloader_kNN, num_classes)
# create a ResNet backbone and remove the classification head
resnet = ResNetGenerator('resnet-18')
self.backbone = nn.Sequential(
*list(resnet.children())[:-1],
nn.AdaptiveAvgPool2d(1)
)
self.prediction_head = SimSiamPredictionHead(2048, 512, 2048)
# use a 2-layer projection head for cifar10 as described in the paper
self.projection_head = ProjectionHead([
(
512,
2048,
nn.BatchNorm1d(2048),
nn.ReLU(inplace=True)
),
(
2048,
2048,
nn.BatchNorm1d(2048),
None
)
])
self.criterion = NegativeCosineSimilarity()
self.max_epochs = max_epochs
self.dataloader_prototype = dataloader_prototype
self.supervised_iterator = iter(self.dataloader_prototype)
def next_prototypes(self):
try:
sdata, lab, _ = next(self.supervised_iterator)
except Exception:
self.supervised_iterator = iter(self.dataloader_prototype)
print(f'len.supervised_loader: {len(self.supervised_iterator)}')
sdata,lab, _ = next(self.supervised_iterator)
finally:
pass
return sdata.to(self.dummy_param.device)
def forward(self, x):
f = self.backbone(x).flatten(start_dim=1)
z = self.projection_head(f)
p = self.prediction_head(z)
z = z.detach()
return z, p
def training_step(self, batch, batch_idx):
proto_z, proto_p = self.forward(self.next_prototypes())
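# similarities are computed against the prototype embeddings (z @ proto.T)
# before applying the symmetric negative-cosine SimSiam loss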
(x0, x1), _, _ = batch
z0, p0 = self.forward(x0)
z1, p1 = self.forward(x1)
loss = 0.5 * (self.criterion(z0@proto_z.T, p1@proto_p.T) + self.criterion(z1@proto_z.T, p0@proto_p.T))
self.log('train_loss_ssl', loss)
return loss
def configure_optimizers(self):
optim = torch.optim.SGD(
self.parameters(),
lr=6e-2, # no lr-scaling, results in better training stability
momentum=0.9,
weight_decay=5e-4
)
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optim, self.max_epochs)
return [optim], [scheduler]
class TsModel(BenchmarkModule):
def __init__(self, dataloader_kNN, dataloader_prototype, num_classes, lr_factor, max_epochs):
super().__init__(dataloader_kNN, num_classes)
# create a ResNet backbone and remove the classification head
resnet = ResNetGenerator('resnet-18')
self.backbone = nn.Sequential(
*list(resnet.children())[:-1],
nn.AdaptiveAvgPool2d(1)
)
self.prediction_head = SimSiamPredictionHead(2048, 512, 2048)
# use a 2-layer projection head for cifar10 as described in the paper
self.projection_head = ProjectionHead([
(
512,
2048,
nn.BatchNorm1d(2048),
nn.ReLU(inplace=True)
),
(
2048,
2048,
nn.BatchNorm1d(2048),
None
)
])
self.criterion = TsLoss(gather_supports=True)
self.lr_factor = lr_factor
self.max_epochs = max_epochs
self.dataloader_prototype = dataloader_prototype
self.supervised_iterator = iter(self.dataloader_prototype)
def next_prototypes(self):
try:
sdata, lab, _ = next(self.supervised_iterator)
except Exception:
self.supervised_iterator = iter(self.dataloader_prototype)
print(f'len.supervised_loader: {len(self.supervised_iterator)}')
sdata,lab, _ = next(self.supervised_iterator)
finally:
pass
return sdata.to(self.dummy_param.device)
def forward(self, x):
f = self.backbone(x).flatten(start_dim=1)
z = self.projection_head(f)
p = self.prediction_head(z)
z = z.detach()
return z, p
def training_step(self, batch, batch_idx):
proto_z, _ = self.forward(self.next_prototypes())
proto_z = proto_z.float()
(x0, x1), _, _ = batch
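# forward passes and the Ts loss run in full precision: mixed precision is explicitly disabled here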
with torch.cuda.amp.autocast(enabled=False):
z0, p0 = self.forward(x0)
z1, p1 = self.forward(x1)
loss = 0.5 * (self.criterion(z0, p1, proto_z) + self.criterion(z1, p0, proto_z))
self.log('train_loss_ssl', loss)
return loss
def configure_optimizers(self):
optim = torch.optim.SGD(
self.parameters(),
lr=2*6e-2, # no lr-scaling, results in better training stability
#lr=1e-3 * self.lr_factor,
momentum=0.9,
weight_decay=5e-4
)
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optim, self.max_epochs)
return [optim], [scheduler]
class TwistModel(BenchmarkModule):
def __init__(self, dataloader_kNN, dataloader_prototype, num_classes, lr_factor, max_epochs, world_size):
super().__init__(dataloader_kNN, num_classes)
# create a ResNet backbone and remove the classification head
resnet = ResNetGenerator('resnet-18')
self.backbone = nn.Sequential(
*list(resnet.children())[:-1],
nn.AdaptiveAvgPool2d(1)
)
self.prediction_head = SimSiamPredictionHead(2048, 512, 2048)
# use a 2-layer projection head for cifar10 as described in the paper
self.projection_head = ProjectionHead([
(
512,
2048,
nn.BatchNorm1d(2048),
nn.ReLU(inplace=True)
),
(
2048,
2048,
nn.BatchNorm1d(2048),
None
)
])
self.criterion = TwistLoss(0.0, 0.6, world_size = world_size)
self.lr_factor = lr_factor
self.max_epochs = max_epochs
self.dataloader_prototype = dataloader_prototype
self.supervised_iterator = iter(self.dataloader_prototype)
def next_prototypes(self):
try:
sdata, lab, _ = next(self.supervised_iterator)
except Exception:
self.supervised_iterator = iter(self.dataloader_prototype)
print(f'len.supervised_loader: {len(self.supervised_iterator)}')
sdata,lab, _ = next(self.supervised_iterator)
finally:
pass
return sdata.to(self.dummy_param.device)
def forward(self, x):
f = self.backbone(x).flatten(start_dim=1)
z = self.projection_head(f)
p = self.prediction_head(z)
#p = z
z = z.detach()
return z, p
def training_step(self, batch, batch_idx):
(x0, x1), _, _ = batch
with torch.cuda.amp.autocast(enabled=False):
z0, p0 = self.forward(x0)
z1, p1 = self.forward(x1)
loss = 0.5 * (self.criterion(z0, p1) + self.criterion(z1, p0))
self.log('train_loss_ssl', loss)
return loss
def configure_optimizers(self):
optim = torch.optim.SGD(
self.parameters(),
#lr=6e-2, # no lr-scaling, results in better training stability
lr=2*6e-2 * self.lr_factor,
momentum=0.9,
weight_decay=1e-6
)
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optim, self.max_epochs)
return [optim], [scheduler] | 36.427646 | 168 | 0.588521 | 1,858 | 16,866 | 5.153391 | 0.10549 | 0.026319 | 0.045953 | 0.024021 | 0.907572 | 0.898486 | 0.892115 | 0.892115 | 0.880731 | 0.880731 | 0 | 0.03173 | 0.31608 | 16,866 | 463 | 169 | 36.427646 | 0.798353 | 0.110518 | 0 | 0.809917 | 0 | 0 | 0.025199 | 0.01417 | 0 | 0 | 0 | 0 | 0 | 1 | 0.088154 | false | 0.011019 | 0.030303 | 0 | 0.206612 | 0.011019 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
c481e2c21f01781c4a6ebac04833abf124000fd4 | 109 | py | Python | radmin/python/__init__.py | 311labs/SRL | c3f0069270ada3784f2a81d9ec9e390e31e53a59 | [
"MIT"
] | 2 | 2018-12-21T01:55:23.000Z | 2021-11-29T01:30:37.000Z | radmin/python/__init__.py | 311labs/SRL | c3f0069270ada3784f2a81d9ec9e390e31e53a59 | [
"MIT"
] | null | null | null | radmin/python/__init__.py | 311labs/SRL | c3f0069270ada3784f2a81d9ec9e390e31e53a59 | [
"MIT"
] | null | null | null |
from client import ClientPool
from client import Client
from client import Triggers
from log import Logger
| 15.571429 | 29 | 0.834862 | 16 | 109 | 5.6875 | 0.4375 | 0.32967 | 0.527473 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.165138 | 109 | 6 | 30 | 18.166667 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
6714da5d733446e30962036e6f3a3e70dfb9d22d | 87 | py | Python | cerebral/forms/__init__.py | jswinarton/django-cerebral-forms | baf5617b191b8857a6ad41e0e8cd2f8ccf65fbc9 | [
"MIT"
] | null | null | null | cerebral/forms/__init__.py | jswinarton/django-cerebral-forms | baf5617b191b8857a6ad41e0e8cd2f8ccf65fbc9 | [
"MIT"
] | 1 | 2020-07-03T14:39:07.000Z | 2020-07-03T14:39:07.000Z | cerebral/forms/__init__.py | jswinarton/django-cerebral-forms | baf5617b191b8857a6ad41e0e8cd2f8ccf65fbc9 | [
"MIT"
] | null | null | null | from cerebral.forms.fields import * # NOQA
from cerebral.forms.forms import * # NOQA
| 29 | 43 | 0.747126 | 12 | 87 | 5.416667 | 0.5 | 0.369231 | 0.523077 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.16092 | 87 | 2 | 44 | 43.5 | 0.890411 | 0.103448 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
674b0db5b28aa7507accfaabe4b58c80feecb607 | 100 | py | Python | two_thinning/average_based/__init__.py | varikakasandor/dissertation-balls-into-bins | fba69dd5ffd0b4984795c9a5ec119bf8c6f47d9e | [
"Apache-2.0"
] | null | null | null | two_thinning/average_based/__init__.py | varikakasandor/dissertation-balls-into-bins | fba69dd5ffd0b4984795c9a5ec119bf8c6f47d9e | [
"Apache-2.0"
] | null | null | null | two_thinning/average_based/__init__.py | varikakasandor/dissertation-balls-into-bins | fba69dd5ffd0b4984795c9a5ec119bf8c6f47d9e | [
"Apache-2.0"
] | null | null | null | import two_thinning.average_based.simulation
import two_thinning.average_based.RL.basic_neuralnet_RL | 50 | 55 | 0.92 | 15 | 100 | 5.733333 | 0.6 | 0.209302 | 0.395349 | 0.55814 | 0.674419 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.03 | 100 | 2 | 55 | 50 | 0.886598 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 7 |
67d83ee5171c19252094c2f7400a0b2d571cdb3e | 336 | py | Python | Python/Basics-Sentdex/1. Basics with Sentdex/Tutorial 8 - Mutability/quiz_8.py | yorks-dev/Learning-Software-Developement | 4733f782705dda04cc790b0e16297241c23b2504 | [
"MIT"
] | null | null | null | Python/Basics-Sentdex/1. Basics with Sentdex/Tutorial 8 - Mutability/quiz_8.py | yorks-dev/Learning-Software-Developement | 4733f782705dda04cc790b0e16297241c23b2504 | [
"MIT"
] | null | null | null | Python/Basics-Sentdex/1. Basics with Sentdex/Tutorial 8 - Mutability/quiz_8.py | yorks-dev/Learning-Software-Developement | 4733f782705dda04cc790b0e16297241c23b2504 | [
"MIT"
] | null | null | null | x = 1
def test():
x = 2
test()
print(x) # x = 1
x = 1
def test():
global x
x = 2
test()
print(x) # x = 2
x = [1]
def test():
x = [2]
test()
print(x) # x = [1]
x = [1]
def test():
global x
x = [2]
test()
print(x) # x = [2]
x = [1]
def test():
x[0] = 2
test()
print(x) # x = [2]
| 6.339623 | 19 | 0.386905 | 60 | 336 | 2.166667 | 0.133333 | 0.107692 | 0.192308 | 0.346154 | 0.992308 | 0.992308 | 0.892308 | 0.892308 | 0.892308 | 0.892308 | 0 | 0.078818 | 0.395833 | 336 | 52 | 20 | 6.461538 | 0.561576 | 0.104167 | 0 | 0.962963 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.185185 | false | 0 | 0 | 0 | 0.185185 | 0.185185 | 0 | 0 | 0 | null | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 9 |
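The key to the last quiz case is that `x[0] = 2` mutates the list object in place rather than rebinding the name `x`, so no `global` declaration is needed. A short sketch making that distinction explicit with `id()`:

x = [1]

def mutate():
    # No assignment to the bare name x, so Python resolves x to the
    # global list and mutates its contents in place.
    x[0] = 2

before = id(x)
mutate()
assert id(x) == before and x == [2]  # same object, new contents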
db038be129243fc1eaf773ef2cdac27dbc4b4aa5 | 8,811 | py | Python | Pymoe/Anilist/search.py | ni8x/PyMoe | a3326f5a4030f74ee493b7b4131e402f38d3aba0 | [
"MIT"
] | null | null | null | Pymoe/Anilist/search.py | ni8x/PyMoe | a3326f5a4030f74ee493b7b4131e402f38d3aba0 | [
"MIT"
] | null | null | null | Pymoe/Anilist/search.py | ni8x/PyMoe | a3326f5a4030f74ee493b7b4131e402f38d3aba0 | [
"MIT"
] | 1 | 2021-09-21T06:02:12.000Z | 2021-09-21T06:02:12.000Z | import json
import requests
class ASearch:
def __init__(self, settings):
self.settings = settings
def character(self, term, page = 1, perpage = 3):
"""
Search for a character by term.
Results are paginated by default. Page specifies which page we're on.
Perpage specifies how many per page to request. 3 is just the example from the API docs.
:param term str: Name to search by
:param page int: Which page are we requesting? Starts at 1.
:param perpage int: How many results per page are we requesting?
:return: Json object with returned results.
        :rtype: dict or NoneType
"""
query_string = """\
query ($query: String, $page: Int, $perpage: Int) {
Page (page: $page, perPage: $perpage) {
pageInfo {
total
currentPage
lastPage
hasNextPage
}
characters (search: $query) {
name {
full
}
favourites
}
}
}
"""
vars = {"query": term, "page": page, "perpage": perpage}
r = requests.post(self.settings['apiurl'],
headers=self.settings['header'],
json={'query': query_string, 'variables': vars})
jsd = r.text
try:
jsd = json.loads(jsd)
except ValueError:
return None
else:
return jsd
def anime(self, term, page = 1, perpage = 3):
"""
Search for an anime by term.
Results are paginated by default. Page specifies which page we're on.
Perpage specifies how many per page to request. 3 is just the example from the API docs.
:param term str: Name to search by
        :param page int: Which page are we requesting? Starts at 1.
        :param perpage int: How many results per page? Defaults to 3.
:return: List of dictionaries which are anime objects or None
:rtype: list of dict or NoneType
"""
query_string = """\
query ($query: String, $page: Int, $perpage: Int) {
Page (page: $page, perPage: $perpage) {
pageInfo {
total
currentPage
lastPage
hasNextPage
}
media (search: $query, type: ANIME) {
id
title {
romaji
english
}
coverImage {
large
}
averageScore
popularity
episodes
season
hashtag
isAdult
}
}
}
"""
vars = {"query": term, "page": page, "perpage": perpage}
r = requests.post(self.settings['apiurl'],
headers=self.settings['header'],
json={'query': query_string, 'variables': vars})
jsd = r.text
try:
jsd = json.loads(jsd)
except ValueError:
return None
else:
return jsd
def manga(self, term, page = 1, perpage = 3):
"""
Search for a manga by term.
Results are paginated by default. Page specifies which page we're on.
Perpage specifies how many per page to request. 3 is just the example from the API docs.
:param term str: Name to search by
:param page int: Which page are we requesting? Starts at 1.
        :param perpage int: How many results per page? Defaults to 3.
:return: List of dictionaries which are manga objects or None
:rtype: list of dict or NoneType
"""
query_string = """\
query ($query: String, $page: Int, $perpage: Int) {
Page (page: $page, perPage: $perpage) {
pageInfo {
total
currentPage
lastPage
hasNextPage
}
media (search: $query, type: MANGA) {
id
title {
romaji
english
}
coverImage {
large
}
averageScore
popularity
chapters
volumes
season
hashtag
isAdult
}
}
}
"""
vars = {"query": term, "page": page, "perpage": perpage}
r = requests.post(self.settings['apiurl'],
headers=self.settings['header'],
json={'query': query_string, 'variables': vars})
jsd = r.text
try:
jsd = json.loads(jsd)
except ValueError:
return None
else:
return jsd
def staff(self, term, page = 1, perpage = 3):
"""
Search for staff by term. Staff means actors, directors, etc.
Results are paginated by default. Page specifies which page we're on.
Perpage specifies how many per page to request. 3 is just the example from the API docs.
:param term str: Name to search by
:param page int: What page are we requesting? Starts at 1.
:param perpage int: How many results per page? Defaults to 3.
:return: List of dictionaries which are staff objects or None
:rtype: list of dict or NoneType
"""
query_string = """\
query ($query: String, $page: Int, $perpage: Int) {
Page (page: $page, perPage: $perpage) {
pageInfo {
total
currentPage
lastPage
hasNextPage
}
staff (search: $query) {
id
name {
first
last
}
image {
large
}
}
}
}
"""
vars = {"query": term, "page": page, "perpage": perpage}
r = requests.post(self.settings['apiurl'],
headers=self.settings['header'],
json={'query': query_string, 'variables': vars})
jsd = r.text
try:
jsd = json.loads(jsd)
except ValueError:
return None
else:
return jsd
def studio(self, term, page = 1, perpage = 3):
"""
Search for a studio by term.
Results are paginated by default. Page specifies which page we're on.
Perpage specifies how many per page to request. 3 is just the example from the API docs.
:param term str: Name to search by
        :param page int: What page are we requesting? Starts at 1.
        :param perpage int: How many results per page? Defaults to 3.
:return: List of dictionaries which are studio objects or None
:rtype: list of dict or NoneType
"""
query_string = """\
query ($query: String, $page: Int, $perpage: Int) {
Page (page: $page, perPage: $perpage) {
pageInfo {
total
currentPage
lastPage
hasNextPage
}
studios (search: $query) {
id
name
}
}
}
"""
vars = {"query": term, "page": page, "perpage": perpage}
r = requests.post(self.settings['apiurl'],
headers=self.settings['header'],
json={'query': query_string, 'variables': vars})
jsd = r.text
try:
jsd = json.loads(jsd)
except ValueError:
return None
else:
return jsd
| 35.817073 | 96 | 0.432414 | 793 | 8,811 | 4.786885 | 0.145019 | 0.043467 | 0.04215 | 0.057956 | 0.90569 | 0.890411 | 0.890411 | 0.890411 | 0.844573 | 0.820074 | 0 | 0.005401 | 0.49563 | 8,811 | 245 | 97 | 35.963265 | 0.848785 | 0.253433 | 0 | 0.713483 | 0 | 0 | 0.600782 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.033708 | false | 0 | 0.011236 | 0 | 0.106742 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
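A hedged usage sketch for the ASearch class above; the 'apiurl' and 'header' settings keys come from the code, while the concrete values are assumptions based on AniList's public GraphQL endpoint:

# Assumed settings for AniList's public GraphQL API.
settings = {
    "apiurl": "https://graphql.anilist.co",
    "header": {"Content-Type": "application/json",
               "Accept": "application/json"},
}
search = ASearch(settings)
result = search.anime("Cowboy Bebop", page=1, perpage=3)
if result is not None:
    # The query above nests results under data -> Page -> media.
    for media in result["data"]["Page"]["media"]:
        print(media["title"]["romaji"])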
e1dcf18376f29d5e9e3af0245997c05e85eeb388 | 148 | py | Python | be_test/logsystem/models.py | ForeverFancy/USTC-Software-2018-BE-Test | bdf415091f81638aba88f26074b870e91a19e307 | [
"MIT"
] | null | null | null | be_test/logsystem/models.py | ForeverFancy/USTC-Software-2018-BE-Test | bdf415091f81638aba88f26074b870e91a19e307 | [
"MIT"
] | null | null | null | be_test/logsystem/models.py | ForeverFancy/USTC-Software-2018-BE-Test | bdf415091f81638aba88f26074b870e91a19e307 | [
"MIT"
] | null | null | null | from django.db import models
class User(models.Model):
    username = models.CharField(max_length=256)
    password = models.CharField(max_length=256)
| 24.666667 | 45 | 0.783784 | 21 | 148 | 5.428571 | 0.666667 | 0.263158 | 0.315789 | 0.421053 | 0.473684 | 0 | 0 | 0 | 0 | 0 | 0 | 0.045802 | 0.114865 | 148 | 5 | 46 | 29.6 | 0.824427 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.25 | 0.25 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 7 |
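A hedged sketch of using this model (assumes the logsystem app is listed in INSTALLED_APPS and migrations have run). Storing a raw password in a CharField is unsafe outside a toy exercise; real projects use django.contrib.auth, which hashes passwords:

from logsystem.models import User

# Create and look up a row; the password is stored as plain text here.
user = User.objects.create(username="alice", password="s3cret")
found = User.objects.filter(username="alice").first()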
c0062c9f2ba2774577746e91302c19b8cddc7f32 | 33,962 | py | Python | fatiando/gravmag/polyprism.py | XuesongDing/fatiando | 57a0e0802fde2e53628511d3a7a2964e69bb309a | [
"BSD-3-Clause"
] | 179 | 2015-03-08T08:50:45.000Z | 2022-03-20T08:19:05.000Z | fatiando/gravmag/polyprism.py | XuesongDing/fatiando | 57a0e0802fde2e53628511d3a7a2964e69bb309a | [
"BSD-3-Clause"
] | 207 | 2015-01-12T17:04:57.000Z | 2021-01-08T23:36:11.000Z | fatiando/gravmag/polyprism.py | XuesongDing/fatiando | 57a0e0802fde2e53628511d3a7a2964e69bb309a | [
"BSD-3-Clause"
] | 114 | 2015-01-29T18:51:22.000Z | 2022-03-25T12:35:43.000Z | """
The potential fields of a homogeneous 3D prism with polygonal cross-section.
"""
from __future__ import division, absolute_import
from future.builtins import range
import numpy as np
from .. import utils
from ..constants import SI2MGAL, SI2EOTVOS, G, CM, T2NT
from .._our_duecredit import due, Doi
due.cite(Doi("10.1190/1.1440645"),
description='Forward modeling formula for polygonal prisms.',
path='fatiando.gravmag.polyprism')
def tf(xp, yp, zp, prisms, inc, dec, pmag=None):
r"""
The total-field magnetic anomaly of polygonal prisms.
.. note:: The coordinate system of the input parameters is to be
x -> North, y -> East and z -> Down.
.. note:: Input units are SI. Output is in nT
Parameters:
* xp, yp, zp : arrays
Arrays with the x, y, and z coordinates of the computation points.
* prisms : list of :class:`fatiando.mesher.PolygonalPrism`
The model used to calculate the total field anomaly.
Prisms without the physical property ``'magnetization'`` will
be ignored.
* inc : float
The inclination of the regional field (in degrees)
* dec : float
The declination of the regional field (in degrees)
* pmag : [mx, my, mz] or None
A magnetization vector. If not None, will use this value instead of the
``'magnetization'`` property of the prisms. Use this, e.g., for
sensitivity matrix building.
Returns:
* res : array
The field calculated on xp, yp, zp
References:
Plouff, D. , 1976, Gravity and magnetic fields of polygonal prisms and
applications to magnetic terrain corrections, Geophysics, 41(4), 727-741,
doi:10.1190/1.1440645.
"""
if xp.shape != yp.shape != zp.shape:
raise ValueError("Input arrays xp, yp, and zp must have same shape!")
# Calculate the 3 components of the unit vector in the direction of the
# regional field
fx, fy, fz = utils.dircos(inc, dec)
res = 0
for prism in prisms:
if prism is None:
continue
if 'magnetization' not in prism.props and pmag is None:
continue
if pmag is None:
mx, my, mz = prism.props['magnetization']
else:
mx, my, mz = pmag
v1 = kernelxx(xp, yp, zp, prism)
v2 = kernelxy(xp, yp, zp, prism)
v3 = kernelxz(xp, yp, zp, prism)
v4 = kernelyy(xp, yp, zp, prism)
v5 = kernelyz(xp, yp, zp, prism)
v6 = kernelzz(xp, yp, zp, prism)
bx = v1*mx + v2*my + v3*mz
by = v2*mx + v4*my + v5*mz
bz = v3*mx + v5*my + v6*mz
res += fx*bx + fy*by + fz*bz
res *= CM * T2NT
return res
def bx(xp, yp, zp, prisms):
"""
x component of magnetic induction of a polygonal prism.
.. note:: Input units are SI. Output is in nT
Parameters:
* xp, yp, zp : arrays
The x, y, and z coordinates where the anomaly will be calculated
* prisms : list of :class:`fatiando.mesher.PolygonalPrism`
The model used to calculate the total field anomaly.
Prisms without the physical property ``'magnetization'`` will
be ignored. The ``'magnetization'`` must be a vector.
Returns:
* bx: array
The x component of the magnetic induction
References:
Plouff, D. , 1976, Gravity and magnetic fields of polygonal prisms and
applications to magnetic terrain corrections, Geophysics, 41(4), 727-741,
doi:10.1190/1.1440645.
"""
if xp.shape != yp.shape != zp.shape:
raise ValueError("Input arrays xp, yp, and zp must have same shape!")
res = 0
for prism in prisms:
if prism is None or ('magnetization' not in prism.props):
continue
# Get the magnetization vector components
mx, my, mz = prism.props['magnetization']
v1 = kernelxx(xp, yp, zp, prism)
v2 = kernelxy(xp, yp, zp, prism)
v3 = kernelxz(xp, yp, zp, prism)
res += v1*mx + v2*my + v3*mz
res *= CM * T2NT
return res
def by(xp, yp, zp, prisms):
"""
y component of magnetic induction of a polygonal prism.
.. note:: Input units are SI. Output is in nT
Parameters:
* xp, yp, zp : arrays
The x, y, and z coordinates where the anomaly will be calculated
* prisms : list of :class:`fatiando.mesher.PolygonalPrism`
The model used to calculate the total field anomaly.
Prisms without the physical property ``'magnetization'`` will
be ignored. The ``'magnetization'`` must be a vector.
Returns:
* by: array
The y component of the magnetic induction
References:
Plouff, D. , 1976, Gravity and magnetic fields of polygonal prisms and
applications to magnetic terrain corrections, Geophysics, 41(4), 727-741,
doi:10.1190/1.1440645.
"""
if xp.shape != yp.shape != zp.shape:
raise ValueError("Input arrays xp, yp, and zp must have same shape!")
res = 0
for prism in prisms:
if prism is None or ('magnetization' not in prism.props):
continue
# Get the magnetization vector components
mx, my, mz = prism.props['magnetization']
v2 = kernelxy(xp, yp, zp, prism)
v4 = kernelyy(xp, yp, zp, prism)
v5 = kernelyz(xp, yp, zp, prism)
res += v2*mx + v4*my + v5*mz
res *= CM * T2NT
return res
def bz(xp, yp, zp, prisms):
"""
z component of magnetic induction of a polygonal prism.
.. note:: Input units are SI. Output is in nT
Parameters:
* xp, yp, zp : arrays
The x, y, and z coordinates where the anomaly will be calculated
* prisms : list of :class:`fatiando.mesher.PolygonalPrism`
The model used to calculate the total field anomaly.
Prisms without the physical property ``'magnetization'`` will
be ignored. The ``'magnetization'`` must be a vector.
Returns:
* bz: array
The z component of the magnetic induction
References:
Plouff, D. , 1976, Gravity and magnetic fields of polygonal prisms and
applications to magnetic terrain corrections, Geophysics, 41(4), 727-741,
doi:10.1190/1.1440645.
"""
if xp.shape != yp.shape != zp.shape:
raise ValueError("Input arrays xp, yp, and zp must have same shape!")
res = 0
for prism in prisms:
if prism is None or ('magnetization' not in prism.props):
continue
# Get the magnetization vector components
mx, my, mz = prism.props['magnetization']
v3 = kernelxz(xp, yp, zp, prism)
v5 = kernelyz(xp, yp, zp, prism)
v6 = kernelzz(xp, yp, zp, prism)
res += v3*mx + v5*my + v6*mz
res *= CM * T2NT
return res
def gz(xp, yp, zp, prisms):
r"""
z component of gravitational acceleration of a polygonal prism.
.. note:: The coordinate system of the input parameters is to be
x -> North, y -> East and z -> Down.
.. note:: All input values in SI units and output in mGal!
Parameters:
* xp, yp, zp : arrays
The x, y, and z coordinates of the computation points.
* prisms : list of :class:`fatiando.mesher.PolygonalPrism`
The model used to calculate the field.
        Prisms without the physical property ``'density'`` will be
        ignored.
Returns:
* res : array
The effect calculated on the computation points.
References:
Plouff, D. , 1976, Gravity and magnetic fields of polygonal prisms and
applications to magnetic terrain corrections, Geophysics, 41(4), 727-741,
doi:10.1190/1.1440645.
"""
if xp.shape != yp.shape != zp.shape:
raise ValueError("Input arrays xp, yp, and zp must have same shape!")
dummy = 1e-10
res = 0
for prism in prisms:
if prism is None or 'density' not in prism.props:
continue
x, y = prism.x, prism.y
z1, z2 = prism.z1, prism.z2
density = prism.props['density']
nverts = prism.nverts
# Calculate the effect of the prism
Z1 = z1 - zp
Z2 = z2 - zp
Z1_sqr = Z1**2
Z2_sqr = Z2**2
kernel = 0
for k in range(nverts):
Xk1 = x[k] - xp
Yk1 = y[k] - yp
Xk2 = x[(k + 1) % nverts] - xp
Yk2 = y[(k + 1) % nverts] - yp
p = Xk1*Yk2 - Xk2*Yk1
p_sqr = p**2
Qk1 = (Yk2 - Yk1)*Yk1 + (Xk2 - Xk1)*Xk1
Qk2 = (Yk2 - Yk1)*Yk2 + (Xk2 - Xk1)*Xk2
Ak1 = Xk1**2 + Yk1**2
Ak2 = Xk2**2 + Yk2**2
R1k1 = np.sqrt(Ak1 + Z1_sqr)
R1k2 = np.sqrt(Ak2 + Z1_sqr)
R2k1 = np.sqrt(Ak1 + Z2_sqr)
R2k2 = np.sqrt(Ak2 + Z2_sqr)
Ak1 = np.sqrt(Ak1)
Ak2 = np.sqrt(Ak2)
Bk1 = np.sqrt(Qk1**2 + p_sqr)
Bk2 = np.sqrt(Qk2**2 + p_sqr)
E1k1 = R1k1*Bk1
E1k2 = R1k2*Bk2
E2k1 = R2k1*Bk1
E2k2 = R2k2*Bk2
# Simplifying these arctans with, e.g., (Z2 - Z1)*arctan2(Qk2*p -
# Qk1*p, p*p + Qk2*Qk1) doesn't work because of the restrictions
# regarding the angles for that identity. The regression tests
# fail for some points by a large amount.
kernel += (Z2 - Z1)*(np.arctan2(Qk2, p) - np.arctan2(Qk1, p))
kernel += Z2*(np.arctan2(Z2*Qk1, R2k1*p) -
np.arctan2(Z2*Qk2, R2k2*p))
kernel += Z1*(np.arctan2(Z1*Qk2, R1k2*p) -
np.arctan2(Z1*Qk1, R1k1*p))
Ck1 = Qk1*Ak1
Ck2 = Qk2*Ak2
# dummy helps prevent zero division and log(0) errors (that's why I
# need to add it twice)
# Simplifying these two logs with a single one is not worth it
# because it would introduce two pow operations.
kernel += 0.5*p*Ak1/(Bk1 + dummy)*np.log(
(E1k1 - Ck1)*(E2k1 + Ck1)/((E1k1 + Ck1)*(E2k1 - Ck1) + dummy) +
dummy)
kernel += 0.5*p*(Ak2/(Bk2 + dummy))*np.log(
(E2k2 - Ck2)*(E1k2 + Ck2)/((E2k2 + Ck2)*(E1k2 - Ck2) + dummy) +
dummy)
res += kernel*density
res *= G*SI2MGAL
return res
def gxx(xp, yp, zp, prisms):
r"""
xx component of the gravity gradient tensor of a polygonal prism.
.. note:: The coordinate system of the input parameters is to be
x -> North, y -> East and z -> Down.
.. note:: All input values in SI units and output in Eotvos!
Parameters:
* xp, yp, zp : arrays
The x, y, and z coordinates of the computation points.
* prisms : list of :class:`fatiando.mesher.PolygonalPrism`
The model used to calculate the field.
        Prisms without the physical property ``'density'`` will be
        ignored.
Returns:
* res : array
The effect calculated on the computation points.
References:
Plouff, D. , 1976, Gravity and magnetic fields of polygonal prisms and
applications to magnetic terrain corrections, Geophysics, 41(4), 727-741,
doi:10.1190/1.1440645.
"""
if xp.shape != yp.shape != zp.shape:
raise ValueError("Input arrays xp, yp, and zp must have same shape!")
res = 0
for prism in prisms:
if prism is None or 'density' not in prism.props:
continue
density = prism.props['density']
res += kernelxx(xp, yp, zp, prism)*density
res *= G * SI2EOTVOS
return res
def gxy(xp, yp, zp, prisms):
r"""
xy component of the gravity gradient tensor of a polygonal prism.
.. note:: The coordinate system of the input parameters is to be
x -> North, y -> East and z -> Down.
.. note:: All input values in SI units and output in Eotvos!
Parameters:
* xp, yp, zp : arrays
The x, y, and z coordinates of the computation points.
* prisms : list of :class:`fatiando.mesher.PolygonalPrism`
The model used to calculate the field.
        Prisms without the physical property ``'density'`` will be
        ignored.
Returns:
* res : array
The effect calculated on the computation points.
References:
Plouff, D. , 1976, Gravity and magnetic fields of polygonal prisms and
applications to magnetic terrain corrections, Geophysics, 41(4), 727-741,
doi:10.1190/1.1440645.
"""
if xp.shape != yp.shape != zp.shape:
raise ValueError("Input arrays xp, yp, and zp must have same shape!")
res = 0
for prism in prisms:
if prism is None or 'density' not in prism.props:
continue
density = prism.props['density']
res += kernelxy(xp, yp, zp, prism)*density
res *= G * SI2EOTVOS
return res
def gxz(xp, yp, zp, prisms):
r"""
xz component of the gravity gradient tensor of a polygonal prism.
.. note:: The coordinate system of the input parameters is to be
x -> North, y -> East and z -> Down.
.. note:: All input values in SI units and output in Eotvos!
Parameters:
* xp, yp, zp : arrays
The x, y, and z coordinates of the computation points.
* prisms : list of :class:`fatiando.mesher.PolygonalPrism`
The model used to calculate the field.
        Prisms without the physical property ``'density'`` will be
        ignored.
Returns:
* res : array
The effect calculated on the computation points.
References:
Plouff, D. , 1976, Gravity and magnetic fields of polygonal prisms and
applications to magnetic terrain corrections, Geophysics, 41(4), 727-741,
doi:10.1190/1.1440645.
"""
if xp.shape != yp.shape != zp.shape:
raise ValueError("Input arrays xp, yp, and zp must have same shape!")
res = 0
for prism in prisms:
if prism is None or 'density' not in prism.props:
continue
density = prism.props['density']
res += kernelxz(xp, yp, zp, prism)*density
res *= G * SI2EOTVOS
return res
def gyy(xp, yp, zp, prisms):
r"""
yy component of the gravity gradient tensor of a polygonal prism.
.. note:: The coordinate system of the input parameters is to be
x -> North, y -> East and z -> Down.
.. note:: All input values in SI units and output in Eotvos!
Parameters:
* xp, yp, zp : arrays
The x, y, and z coordinates of the computation points.
* prisms : list of :class:`fatiando.mesher.PolygonalPrism`
The model used to calculate the field.
        Prisms without the physical property ``'density'`` will be
        ignored.
Returns:
* res : array
The effect calculated on the computation points.
References:
Plouff, D. , 1976, Gravity and magnetic fields of polygonal prisms and
applications to magnetic terrain corrections, Geophysics, 41(4), 727-741,
doi:10.1190/1.1440645.
"""
if xp.shape != yp.shape != zp.shape:
raise ValueError("Input arrays xp, yp, and zp must have same shape!")
res = 0
for prism in prisms:
if prism is None or 'density' not in prism.props:
continue
density = prism.props['density']
res += kernelyy(xp, yp, zp, prism)*density
res *= G * SI2EOTVOS
return res
def gyz(xp, yp, zp, prisms):
r"""
yz component of the gravity gradient tensor of a polygonal prism.
.. note:: The coordinate system of the input parameters is to be
x -> North, y -> East and z -> Down.
.. note:: All input values in SI units and output in Eotvos!
Parameters:
* xp, yp, zp : arrays
The x, y, and z coordinates of the computation points.
* prisms : list of :class:`fatiando.mesher.PolygonalPrism`
The model used to calculate the field.
        Prisms without the physical property ``'density'`` will be
        ignored.
Returns:
* res : array
The effect calculated on the computation points.
References:
Plouff, D. , 1976, Gravity and magnetic fields of polygonal prisms and
applications to magnetic terrain corrections, Geophysics, 41(4), 727-741,
doi:10.1190/1.1440645.
"""
if xp.shape != yp.shape != zp.shape:
raise ValueError("Input arrays xp, yp, and zp must have same shape!")
res = 0
for prism in prisms:
if prism is None or 'density' not in prism.props:
continue
density = prism.props['density']
res += kernelyz(xp, yp, zp, prism)*density
res *= G * SI2EOTVOS
return res
def gzz(xp, yp, zp, prisms):
r"""
zz component of the gravity gradient tensor of a polygonal prism.
.. note:: The coordinate system of the input parameters is to be
x -> North, y -> East and z -> Down.
.. note:: All input values in SI units and output in Eotvos!
Parameters:
* xp, yp, zp : arrays
The x, y, and z coordinates of the computation points.
* prisms : list of :class:`fatiando.mesher.PolygonalPrism`
The model used to calculate the field.
        Prisms without the physical property ``'density'`` will be
        ignored.
Returns:
* res : array
The effect calculated on the computation points.
References:
Plouff, D. , 1976, Gravity and magnetic fields of polygonal prisms and
applications to magnetic terrain corrections, Geophysics, 41(4), 727-741,
doi:10.1190/1.1440645.
"""
if xp.shape != yp.shape != zp.shape:
raise ValueError("Input arrays xp, yp, and zp must have same shape!")
res = 0
for prism in prisms:
if prism is None or 'density' not in prism.props:
continue
density = prism.props['density']
res += kernelzz(xp, yp, zp, prism)*density
res *= G * SI2EOTVOS
return res
def kernelxx(xp, yp, zp, prism):
r"""
The xx second-derivative of the kernel function :math:`\phi`.
.. math::
\phi(x,y,z) = \iiint_\Omega \frac{1}{r}
\mathrm{d}\nu \mathrm{d}\eta \mathrm{d}\zeta
in which
.. math::
r = \sqrt{(x - \nu)^2 + (y - \eta)^2 + (z - \zeta)^2}.
This function is used to calculate the gravity gradient tensor, magnetic
induction, and total field magnetic anomaly.
.. note:: The coordinate system of the input parameters is to be
x -> North, y -> East and z -> Down.
.. note:: All input and output values in SI!
Parameters:
* xp, yp, zp : arrays
The x, y, and z coordinates of the computation points.
    * prism : object of :class:`fatiando.mesher.PolygonalPrism`
The model used as the integration domain :math:`\Omega` of the kernel
function.
Returns:
* res : array
The effect calculated on the computation points.
References:
Plouff, D. , 1976, Gravity and magnetic fields of polygonal prisms and
applications to magnetic terrain corrections, Geophysics, 41(4), 727-741,
doi:10.1190/1.1440645.
"""
if xp.shape != yp.shape != zp.shape:
raise ValueError("Input arrays xp, yp, and zp must have same shape!")
dummy = 1e-10
x, y = prism.x, prism.y
z1, z2 = prism.z1, prism.z2
nverts = prism.nverts
# Calculate the effect of the prism
Z1 = z1 - zp
Z2 = z2 - zp
Z1_sqr = Z1*Z1
Z2_sqr = Z2*Z2
kernel = 0
for k in range(nverts):
X1 = x[k] - xp
Y1 = y[k] - yp
X2 = x[(k + 1) % nverts] - xp
Y2 = y[(k + 1) % nverts] - yp
deltax = X2 - X1 + dummy
deltay = Y2 - Y1 + dummy
n = deltax/deltay
g = X1 - Y1*n
dist = np.sqrt(deltax*deltax + deltay*deltay)
cross = X1*Y2 - X2*Y1
p = cross/dist + dummy
d1 = (deltax*X1 + deltay*Y1)/dist + dummy
d2 = (deltax*X2 + deltay*Y2)/dist + dummy
vert1_sqr = X1*X1 + Y1*Y1
vert2_sqr = X2*X2 + Y2*Y2
R11 = np.sqrt(vert1_sqr + Z1_sqr)
R12 = np.sqrt(vert1_sqr + Z2_sqr)
R21 = np.sqrt(vert2_sqr + Z1_sqr)
R22 = np.sqrt(vert2_sqr + Z2_sqr)
atan_diff_d2 = np.arctan2(Z2*d2, p*R22) - np.arctan2(Z1*d2, p*R21)
atan_diff_d1 = np.arctan2(Z2*d1, p*R12) - np.arctan2(Z1*d1, p*R11)
tmp = g*Y2*atan_diff_d2/(p*d2) + n*p*atan_diff_d2/(d2)
tmp -= g*Y1*atan_diff_d1/(p*d1) + n*p*atan_diff_d1/(d1)
tmp += n*np.log(
(Z2 + R12)*(Z1 + R21)/((Z1 + R11)*(Z2 + R22) + dummy) + dummy)
tmp *= -1/(1 + n*n)
kernel += tmp
return kernel
def kernelxy(xp, yp, zp, prism):
r"""
The xy second-derivative of the kernel function :math:`\phi`.
.. math::
\phi(x,y,z) = \iiint_\Omega \frac{1}{r}
\mathrm{d}\nu \mathrm{d}\eta \mathrm{d}\zeta
in which
.. math::
r = \sqrt{(x - \nu)^2 + (y - \eta)^2 + (z - \zeta)^2}.
This function is used to calculate the gravity gradient tensor, magnetic
induction, and total field magnetic anomaly.
.. note:: The coordinate system of the input parameters is to be
x -> North, y -> East and z -> Down.
.. note:: All input and output values in SI!
Parameters:
* xp, yp, zp : arrays
The x, y, and z coordinates of the computation points.
    * prism : object of :class:`fatiando.mesher.PolygonalPrism`
The model used as the integration domain :math:`\Omega` of the kernel
function.
Returns:
* res : array
The effect calculated on the computation points.
References:
Plouff, D. , 1976, Gravity and magnetic fields of polygonal prisms and
applications to magnetic terrain corrections, Geophysics, 41(4), 727-741,
doi:10.1190/1.1440645.
"""
if xp.shape != yp.shape != zp.shape:
raise ValueError("Input arrays xp, yp, and zp must have same shape!")
dummy = 1e-10
x, y = prism.x, prism.y
z1, z2 = prism.z1, prism.z2
nverts = prism.nverts
# Calculate the effect of the prism
Z1 = z1 - zp
Z2 = z2 - zp
Z1_sqr = Z1*Z1
Z2_sqr = Z2*Z2
kernel = 0
for k in range(nverts):
X1 = x[k] - xp
Y1 = y[k] - yp
X2 = x[(k + 1) % nverts] - xp
Y2 = y[(k + 1) % nverts] - yp
deltax = X2 - X1 + dummy
deltay = Y2 - Y1 + dummy
n = deltax/deltay
g = X1 - Y1*n
g_sqr = g*g
dist = np.sqrt(deltax*deltax + deltay*deltay)
cross = X1*Y2 - X2*Y1
p = cross/dist + dummy
d1 = (deltax*X1 + deltay*Y1)/dist + dummy
d2 = (deltax*X2 + deltay*Y2)/dist + dummy
vert1_sqr = X1*X1 + Y1*Y1
vert2_sqr = X2*X2 + Y2*Y2
R11 = np.sqrt(vert1_sqr + Z1_sqr)
R12 = np.sqrt(vert1_sqr + Z2_sqr)
R21 = np.sqrt(vert2_sqr + Z1_sqr)
R22 = np.sqrt(vert2_sqr + Z2_sqr)
atan_diff_d2 = np.arctan2(Z2*d2, p*R22) - np.arctan2(Z1*d2, p*R21)
atan_diff_d1 = np.arctan2(Z2*d1, p*R12) - np.arctan2(Z1*d1, p*R11)
tmp = (g_sqr + g*n*Y2)*atan_diff_d2/(p*d2) - p*atan_diff_d2/d2
tmp -= (g_sqr + g*n*Y1)*atan_diff_d1/(p*d1) - p*atan_diff_d1/d1
tmp += np.log(
(Z2 + R22)*(Z1 + R11)/((Z1 + R21)*(Z2 + R12) + dummy) + dummy)
tmp *= 1/(1 + n*n)
kernel += tmp
return kernel
def kernelxz(xp, yp, zp, prism):
r"""
The xz second-derivative of the kernel function :math:`\phi`.
.. math::
\phi(x,y,z) = \iiint_\Omega \frac{1}{r}
\mathrm{d}\nu \mathrm{d}\eta \mathrm{d}\zeta
in which
.. math::
r = \sqrt{(x - \nu)^2 + (y - \eta)^2 + (z - \zeta)^2}.
This function is used to calculate the gravity gradient tensor, magnetic
induction, and total field magnetic anomaly.
.. note:: The coordinate system of the input parameters is to be
x -> North, y -> East and z -> Down.
.. note:: All input and output values in SI!
Parameters:
* xp, yp, zp : arrays
The x, y, and z coordinates of the computation points.
    * prism : object of :class:`fatiando.mesher.PolygonalPrism`
The model used as the integration domain :math:`\Omega` of the kernel
function.
Returns:
* res : array
The effect calculated on the computation points.
References:
Plouff, D. , 1976, Gravity and magnetic fields of polygonal prisms and
applications to magnetic terrain corrections, Geophysics, 41(4), 727-741,
doi:10.1190/1.1440645.
"""
if xp.shape != yp.shape != zp.shape:
raise ValueError("Input arrays xp, yp, and zp must have same shape!")
dummy = 1e-10
x, y = prism.x, prism.y
z1, z2 = prism.z1, prism.z2
nverts = prism.nverts
# Calculate the effect of the prism
Z1 = z1 - zp
Z2 = z2 - zp
Z1_sqr = Z1*Z1
Z2_sqr = Z2*Z2
kernel = 0
for k in range(nverts):
X1 = x[k] - xp
Y1 = y[k] - yp
X2 = x[(k + 1) % nverts] - xp
Y2 = y[(k + 1) % nverts] - yp
deltax = X2 - X1 + dummy
deltay = Y2 - Y1 + dummy
n = deltax/deltay
n_sqr_p1 = n*n + 1
g = X1 - Y1*n
ng = n*g
dist = np.sqrt(deltax*deltax + deltay*deltay)
d1 = (deltax*X1 + deltay*Y1)/dist + dummy
d2 = (deltax*X2 + deltay*Y2)/dist + dummy
vert1_sqr = X1*X1 + Y1*Y1
vert2_sqr = X2*X2 + Y2*Y2
R11 = np.sqrt(vert1_sqr + Z1_sqr)
R12 = np.sqrt(vert1_sqr + Z2_sqr)
R21 = np.sqrt(vert2_sqr + Z1_sqr)
R22 = np.sqrt(vert2_sqr + Z2_sqr)
# Collapsing these logs decreases the precision too much leading to a
# larger difference with the prism code.
log_r22 = np.log((R22 - d2)/(R22 + d2) + dummy)
log_r21 = np.log((R21 - d2)/(R21 + d2) + dummy)
log_r12 = np.log((R12 - d1)/(R12 + d1) + dummy)
log_r11 = np.log((R11 - d1)/(R11 + d1) + dummy)
log_diff_d1 = (0.5/d1)*(log_r12 - log_r11)
log_diff_d2 = (0.5/d2)*(log_r22 - log_r21)
tmp = (Y2*n_sqr_p1 + ng)*log_diff_d2
tmp -= (Y1*n_sqr_p1 + ng)*log_diff_d1
tmp *= -1/n_sqr_p1
kernel += tmp
return kernel
def kernelyy(xp, yp, zp, prism):
r"""
The yy second-derivative of the kernel function :math:`\phi`.
.. math::
\phi(x,y,z) = \iiint_\Omega \frac{1}{r}
\mathrm{d}\nu \mathrm{d}\eta \mathrm{d}\zeta
in which
.. math::
r = \sqrt{(x - \nu)^2 + (y - \eta)^2 + (z - \zeta)^2}.
This function is used to calculate the gravity gradient tensor, magnetic
induction, and total field magnetic anomaly.
.. note:: The coordinate system of the input parameters is to be
x -> North, y -> East and z -> Down.
.. note:: All input and output values in SI!
Parameters:
* xp, yp, zp : arrays
The x, y, and z coordinates of the computation points.
    * prism : object of :class:`fatiando.mesher.PolygonalPrism`
The model used as the integration domain :math:`\Omega` of the kernel
function.
Returns:
* res : array
The effect calculated on the computation points.
References:
Plouff, D. , 1976, Gravity and magnetic fields of polygonal prisms and
applications to magnetic terrain corrections, Geophysics, 41(4), 727-741,
doi:10.1190/1.1440645.
"""
if xp.shape != yp.shape != zp.shape:
raise ValueError("Input arrays xp, yp, and zp must have same shape!")
dummy = 1e-10
x, y = prism.x, prism.y
z1, z2 = prism.z1, prism.z2
nverts = prism.nverts
# Calculate the effect of the prism
Z1 = z1 - zp
Z2 = z2 - zp
Z1_sqr = Z1*Z1
Z2_sqr = Z2*Z2
kernel = 0
for k in range(nverts):
X1 = x[k] - xp
Y1 = y[k] - yp
X2 = x[(k + 1) % nverts] - xp
Y2 = y[(k + 1) % nverts] - yp
deltax = X2 - X1 + dummy
deltay = Y2 - Y1 + dummy
m = deltay/deltax
c = Y1 - X1*m
dist = np.sqrt(deltax*deltax + deltay*deltay)
cross = X1*Y2 - X2*Y1
p = cross/dist + dummy
d1 = (deltax*X1 + deltay*Y1)/dist + dummy
d2 = (deltax*X2 + deltay*Y2)/dist + dummy
vert1_sqr = X1*X1 + Y1*Y1
vert2_sqr = X2*X2 + Y2*Y2
R11 = np.sqrt(vert1_sqr + Z1_sqr)
R12 = np.sqrt(vert1_sqr + Z2_sqr)
R21 = np.sqrt(vert2_sqr + Z1_sqr)
R22 = np.sqrt(vert2_sqr + Z2_sqr)
atan_diff_d2 = np.arctan2(Z2*d2, p*R22) - np.arctan2(Z1*d2, p*R21)
atan_diff_d1 = np.arctan2(Z2*d1, p*R12) - np.arctan2(Z1*d1, p*R11)
tmp = c*X2*atan_diff_d2/(p*d2) + m*p*atan_diff_d2/d2
tmp -= c*X1*atan_diff_d1/(p*d1) + m*p*atan_diff_d1/d1
tmp += m*np.log(
(Z2 + R12)*(Z1 + R21)/((Z2 + R22)*(Z1 + R11)) + dummy)
tmp *= 1/(1 + m*m)
kernel += tmp
return kernel
def kernelyz(xp, yp, zp, prism):
r"""
The yz second-derivative of the kernel function :math:`\phi`.
.. math::
\phi(x,y,z) = \iiint_\Omega \frac{1}{r}
\mathrm{d}\nu \mathrm{d}\eta \mathrm{d}\zeta
in which
.. math::
r = \sqrt{(x - \nu)^2 + (y - \eta)^2 + (z - \zeta)^2}.
This function is used to calculate the gravity gradient tensor, magnetic
induction, and total field magnetic anomaly.
.. note:: The coordinate system of the input parameters is to be
x -> North, y -> East and z -> Down.
.. note:: All input and output values in SI!
Parameters:
* xp, yp, zp : arrays
The x, y, and z coordinates of the computation points.
    * prism : object of :class:`fatiando.mesher.PolygonalPrism`
The model used as the integration domain :math:`\Omega` of the kernel
function.
Returns:
* res : array
The effect calculated on the computation points.
References:
Plouff, D. , 1976, Gravity and magnetic fields of polygonal prisms and
applications to magnetic terrain corrections, Geophysics, 41(4), 727-741,
doi:10.1190/1.1440645.
"""
if xp.shape != yp.shape != zp.shape:
raise ValueError("Input arrays xp, yp, and zp must have same shape!")
dummy = 1e-10
x, y = prism.x, prism.y
z1, z2 = prism.z1, prism.z2
nverts = prism.nverts
# Calculate the effect of the prism
Z1 = z1 - zp
Z2 = z2 - zp
Z1_sqr = Z1*Z1
Z2_sqr = Z2*Z2
kernel = 0
for k in range(nverts):
X1 = x[k] - xp
Y1 = y[k] - yp
X2 = x[(k + 1) % nverts] - xp
Y2 = y[(k + 1) % nverts] - yp
deltax = X2 - X1 + dummy
deltay = Y2 - Y1 + dummy
m = deltay/deltax
m_sqr_p1 = m*m + 1
c = Y1 - X1*m
cm = c*m
dist = np.sqrt(deltax*deltax + deltay*deltay)
d1 = (deltax*X1 + deltay*Y1)/dist + dummy
d2 = (deltax*X2 + deltay*Y2)/dist + dummy
vert1_sqr = X1*X1 + Y1*Y1
vert2_sqr = X2*X2 + Y2*Y2
R11 = np.sqrt(vert1_sqr + Z1_sqr)
R12 = np.sqrt(vert1_sqr + Z2_sqr)
R21 = np.sqrt(vert2_sqr + Z1_sqr)
R22 = np.sqrt(vert2_sqr + Z2_sqr)
# Same remark about collapsing logs as kernelxz
log_r11 = np.log((R11 - d1)/(R11 + d1) + dummy)
log_r12 = np.log((R12 - d1)/(R12 + d1) + dummy)
log_r21 = np.log((R21 - d2)/(R21 + d2) + dummy)
log_r22 = np.log((R22 - d2)/(R22 + d2) + dummy)
tmp = (X2*m_sqr_p1 + cm)*(0.5/d2)*(log_r22 - log_r21)
tmp -= (X1*m_sqr_p1 + cm)*(0.5/d1)*(log_r12 - log_r11)
tmp *= 1/m_sqr_p1
kernel += tmp
return kernel
def kernelzz(xp, yp, zp, prism):
r"""
The zz second-derivative of the kernel function :math:`\phi`.
.. math::
\phi(x,y,z) = \iiint_\Omega \frac{1}{r}
\mathrm{d}\nu \mathrm{d}\eta \mathrm{d}\zeta
in which
.. math::
r = \sqrt{(x - \nu)^2 + (y - \eta)^2 + (z - \zeta)^2}.
This function is used to calculate the gravity gradient tensor, magnetic
induction, and total field magnetic anomaly.
.. note:: The coordinate system of the input parameters is to be
x -> North, y -> East and z -> Down.
.. note:: All input and output values in SI!
Parameters:
* xp, yp, zp : arrays
The x, y, and z coordinates of the computation points.
    * prism : object of :class:`fatiando.mesher.PolygonalPrism`
The model used as the integration domain :math:`\Omega` of the kernel
function.
Returns:
* res : array
The effect calculated on the computation points.
References:
Plouff, D. , 1976, Gravity and magnetic fields of polygonal prisms and
applications to magnetic terrain corrections, Geophysics, 41(4), 727-741,
doi:10.1190/1.1440645.
"""
if xp.shape != yp.shape != zp.shape:
raise ValueError("Input arrays xp, yp, and zp must have same shape!")
dummy = 1e-10
x, y = prism.x, prism.y
z1, z2 = prism.z1, prism.z2
nverts = prism.nverts
# Calculate the effect of the prism
Z1 = z1 - zp
Z2 = z2 - zp
Z1_sqr = Z1*Z1
Z2_sqr = Z2*Z2
kernel = 0
for k in range(nverts):
X1 = x[k] - xp
Y1 = y[k] - yp
X2 = x[(k + 1) % nverts] - xp
Y2 = y[(k + 1) % nverts] - yp
deltax = X2 - X1
deltay = Y2 - Y1
# dist is only used in divisions. Add dummy to avoid zero division
# errors if the two vertices coincide.
dist = np.sqrt(deltax*deltax + deltay*deltay) + dummy
cross = X1*Y2 - X2*Y1
p = cross/dist
d1 = (deltax*X1 + deltay*Y1)/dist
d2 = (deltax*X2 + deltay*Y2)/dist
vert1_sqr = X1*X1 + Y1*Y1
vert2_sqr = X2*X2 + Y2*Y2
R11 = np.sqrt(vert1_sqr + Z1_sqr)
R12 = np.sqrt(vert1_sqr + Z2_sqr)
R21 = np.sqrt(vert2_sqr + Z1_sqr)
R22 = np.sqrt(vert2_sqr + Z2_sqr)
kernel += (np.arctan2(Z2*d2, p*R22) - np.arctan2(Z1*d2, p*R21) -
np.arctan2(Z2*d1, p*R12) + np.arctan2(Z1*d1, p*R11))
return kernel
| 31.53389 | 79 | 0.582857 | 4,945 | 33,962 | 3.968049 | 0.065925 | 0.014881 | 0.017124 | 0.015136 | 0.882224 | 0.865763 | 0.840179 | 0.826114 | 0.822801 | 0.820559 | 0 | 0.058694 | 0.304693 | 33,962 | 1,076 | 80 | 31.563197 | 0.772254 | 0.471527 | 0 | 0.709251 | 0 | 0 | 0.068906 | 0.001594 | 0 | 0 | 0 | 0 | 0 | 1 | 0.037445 | false | 0 | 0.013216 | 0 | 0.088106 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
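A hedged usage sketch for the module above, assuming fatiando's 0.5-era PolygonalPrism, gridder.regular, and polyprism.gz signatures:

from fatiando.mesher import PolygonalPrism
from fatiando import gridder
from fatiando.gravmag import polyprism

# A 1 km thick prism with a square cross-section and 1000 kg/m^3 density.
verts = [[-500, -500], [-500, 500], [500, 500], [500, -500]]
prism = PolygonalPrism(verts, z1=0, z2=1000, props={'density': 1000})

# Regular grid of computation points 100 m above the surface (z points down).
xp, yp, zp = gridder.regular((-2000, 2000, -2000, 2000), (50, 50), z=-100)
gz = polyprism.gz(xp, yp, zp, [prism])  # output in mGal, per the gz docstring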
c05740a4ebeaba0d9cb61ab2fc92d04be4925b9d | 121 | py | Python | 1144.py | luizgallas/uri_iniciante | fd23f2fe1638b373b94b7b4ddb2d906cec8db87b | [
"Apache-2.0"
] | null | null | null | 1144.py | luizgallas/uri_iniciante | fd23f2fe1638b373b94b7b4ddb2d906cec8db87b | [
"Apache-2.0"
] | null | null | null | 1144.py | luizgallas/uri_iniciante | fd23f2fe1638b373b94b7b4ddb2d906cec8db87b | [
"Apache-2.0"
] | null | null | null | num = int(input())
for i in range(1, num+1):
print(i, (i ** 2), (i ** 3))
    print(i, ((i ** 2) + 1), ((i ** 3) + 1)) | 30.25 | 43 | 0.404959 | 24 | 121 | 2.041667 | 0.458333 | 0.244898 | 0.285714 | 0.326531 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.089888 | 0.264463 | 121 | 4 | 43 | 30.25 | 0.460674 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.5 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 7 |
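A worked check of the loop above with assumed input 2: each i yields two lines, the second shifting the square and cube up by one.

1 1 1
1 2 2
2 4 8
2 5 9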