#!/usr/bin/python3
# qsubm.py | mark-caprio/mcscript | MIT
"""qsubm -- generic queue submission for task-oriented batch scripts
Environment variables:
MCSCRIPT_DIR should specify the directory in which the mcscript package is
installed, i.e., the directory where the file qsubm.py is found. (Note that
qsubm uses this information to locate certain auxiliary script files used as
part of the job submission process.)
MCSCRIPT_RUN_HOME must specify the directory in which job files are found.
MCSCRIPT_WORK_HOME should specify the parent directory in which run scratch
directories should be made.
MCSCRIPT_INSTALL_HOME must specify the directory in which executables are found.
MCSCRIPT_LAUNCH_HOME (optional) should specify the parent directory in which
run subdirectories for qsub invocation and output logging should be made.
Otherwise, this will default to MCSCRIPT_WORK_HOME.
MCSCRIPT_PYTHON should give the full qualified filename (i.e., including
path) to the Python 3 executable for running run script files. A typical
value will simply be "python3", assuming the Python 3 executable is in the
shell's command search PATH. However, see note on "Availability of Python"
in INSTALL.md.
MCSCRIPT_RUN_PREFIX should specify the prefix for run names, e.g., set to
"run" if your scripts are to be named run<XXXX>.py.
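As a concrete end-to-end illustration, an environment configured for qsubm
might look like the following shell setup (every path below is a hypothetical
placeholder, not a required location; substitute your own directories):

```shell
# Hypothetical example values -- adjust all paths for your own system.
export MCSCRIPT_DIR="${HOME}/code/mcscript"        # directory containing qsubm.py
export MCSCRIPT_RUN_HOME="${HOME}/runs"            # where run<XXXX>.py scripts live
export MCSCRIPT_WORK_HOME="/scratch/${USER}"       # parent for run scratch directories
export MCSCRIPT_INSTALL_HOME="${HOME}/local/bin"   # where executables are found
export MCSCRIPT_PYTHON="python3"                   # Python 3 interpreter for run scripts
export MCSCRIPT_RUN_PREFIX="run"                   # so run scripts are named run<XXXX>.py
```

With this prefix, "qsubm 0000" would then execute run0000.py from
MCSCRIPT_RUN_HOME as a local interactive run, while "qsubm 0000 <queue> <wall>"
would submit it in batch mode.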
Requires local definitions file config.py to translate options into
arguments for local batch server. See directions in readme.txt. Your local
definitions might not make use of or support all the parallel environment
options.
Language: Python 3
M. A. Caprio
University of Notre Dame
+ 3/6/13 (mac): Based on earlier qsubm csh script.
+ 7/4/13 (mac): Support for multiple cluster flavors via qsubm_local.
+ 1/22/14 (mac): Python 3 update.
+ 10/27/14 (mac): Updates to --archive handling.
+ 5/14/15 (mac):
- Insert "future" statements for Python 2 legacy support.
- Add --noredirect switch.
- Mandatory environment variable QSUBM_PYTHON.
+ 8/4/15 (mac): Make user environment variable definitions into option.
+ 6/13/16 (mac): Rename environment variables to MCSCRIPT_*.
+ 6/22/16 (mac): Update to use config.py for local configuration.
+ 12/14/16 (mac): Add --here option.
+ 12/29/16 (mac):
- Add --spread option.
- Remove --pernode option.
- Make --opt option repeatable.
+ 1/16/17 (mac): Add --serialthreads option.
+ 2/23/17 (mac): Switch from os.mkdir to mcscript.utils.mkdir.
+ 3/16/17 (mac):
- Add --setup option.
- Change environment interface to pass MCSCRIPT_TASK_MODE.
+ 3/18/17 (mac):
- Revise to support updated hybrid run parameters.
- Rename option --setup to --prerun.
+ 5/22/17 (mac): Fix processing of boolean option --redirect.
+ 10/11/17 (pjf): Add --switchwaittime option.
+ 01/05/18 (pjf): Sort arguments into groups.
+ 02/11/18 (pjf):
- Pass through MCSCRIPT_INSTALL_HOME.
- Use job_environ for submission.
+ 07/06/18 (pjf):
- Pass queue via MCSCRIPT_RUN_QUEUE.
- Remove MCSCRIPT_HYBRID_NODESIZE.
+ 06/04/19 (pjf):
- Add hook for individual configurations to add command-line arguments.
- Move --switchwaittime option into config-slurm-nersc.py.
+ 09/11/19 (pjf): Add expert mode argument.
"""
import argparse
import os
import shutil
import subprocess
import sys
import mcscript.config # local configuration (usually symlink)
import mcscript.task # needed below for mcscript.task.TaskMode
import mcscript.utils
################################################################
# argument parsing
################################################################
parser = argparse.ArgumentParser(
    description="Queue submission for numbered run.",
    usage="%(prog)s [option] run queue|RUN wall [var1=val1, ...]\n",
    formatter_class=argparse.ArgumentDefaultsHelpFormatter,
    epilog="""Simply omit the queue name and leave off the wall time for a
    local interactive run.
    Environment variables for qsubm are described in INSTALL.md.
    Note that qsubm relies upon code in the local `config.py`
    configuration file for the system or cluster you are running on, in
    order to interpret the following arguments and translate them into
    arguments for your local batch system.  Your local configuration
    file might not make use of or support all the parallel environment
    options listed below.
    """
)
# general arguments
parser.add_argument("run", help="Run number (e.g., 0000 for run0000)")
# latter arguments are made optional to simplify bare-bones syntax for --toc, etc., calls
parser.add_argument("queue", nargs='?', help="Submission queue, or RUN for direct interactive run", default="RUN")
parser.add_argument("wall", type=int, nargs='?', help="Wall time (minutes)", default=60)
##parser.add_argument("vars", nargs="?", help="Environment variables to pass to script, with optional values, comma delimited (e.g., METHOD2, PARAM=1.0)")
parser.add_argument("--here", action="store_true", help="Force run in current working directory")
parser.add_argument("--vars", help="Environment variables to pass to script, with optional values, comma delimited (e.g., --vars=METHOD2, PARAM=1.0)")
## parser.add_argument("--stat", action="store_true", help="Display queue status information")
parser.add_argument("--num", type=int, default=1, help="Number of repetitions")
parser.add_argument("--opt", action="append", help="Additional option arguments to be passed to job submission command (e.g., --opt=\"-m ae\" or --opt=\"--mail-type=END,FAIL\"), may be repeated (e.g., --opt=\"-A acct\" --opt=\"-a 1200\"); beware the spaces may be important to the job submission command")
parser.add_argument("--expert", action="store_true", help="Run mcscript in expert mode")
# serial run parallelization parameters
serial_group = parser.add_argument_group("serial run options (single-node, non-MPI)")
serial_group.add_argument("--serialthreads", type=int, default=1, help="OMP threads")
# hybrid run parallelization parameters
#
# Not all local configuration files need necessarily require or
# respect all of the following parameters.
hybrid_group = parser.add_argument_group("hybrid run options")
hybrid_group.add_argument("--nodes", type=int, default=1, help="number of nodes")
hybrid_group.add_argument("--ranks", type=int, default=1, help="number of MPI ranks")
hybrid_group.add_argument("--threads", type=int, default=1, help="OMP threads per rank")
hybrid_group.add_argument("--nodesize", type=int, default=0, help="logical threads available per node"
                          " (might instead be interpreted as physical CPUs depending on local config file)")
##hybrid_group.add_argument("--undersubscription", type=int, default=1, help="undersubscription factor (e.g., spread=2 requests twice the cores needed)")
# multi-task interface: invocation modes
task_mode_group = parser.add_mutually_exclusive_group()
task_mode_group.add_argument("--toc", action="store_true", help="Invoke run script to generate task table of contents")
task_mode_group.add_argument("--unlock", action="store_true", help="Delete any .lock or .fail flags for tasks")
task_mode_group.add_argument("--archive", action="store_true", help="Invoke archive-generation run")
task_mode_group.add_argument("--prerun", action="store_true", help="Invoke prerun mode, for argument validation and file staging only")
task_mode_group.add_argument("--offline", action="store_true", help="Invoke offline mode, to create batch scripts for later submission instead of running compute codes")
# multi-task interface: task selection
task_selection_group = parser.add_argument_group("multi-task run options")
task_selection_group.add_argument("--pool", help="Set task pool (or ALL) for task selection")
task_selection_group.add_argument("--phase", type=int, default=0, help="Set task phase for task selection")
task_selection_group.add_argument("--start", type=int, help="Set starting task number for task selection")
task_selection_group.add_argument("--limit", type=int, help="Set task count limit for task selection")
task_selection_group.add_argument("--redirect", default="True", choices=["True", "False"], help="Allow redirection of standard"
                                  " output/error to file (may want to disable for interactive debugging)")
# some special options (deprecated?)
##parser.add_argument("--epar", type=int, default=None, help="Width for embarassingly parallel job")
##parser.add_argument("--nopar", action="store_true", help="Disable parallel resource requests (for use on special serial queues)")
# site-local options
try:
    mcscript.config.qsubm_arguments(parser)
except AttributeError:
    # local config doesn't provide arguments, ignore gracefully
    pass
##parser.print_help()
##print
args = parser.parse_args()
##print(args)
################################################################
# special mode: status display
################################################################
# TODO
# will have to modify argument processing to allow no arguments, local
# customization for qstat
# @ i = 0
# while (($i == 0) || ($loop))
# @ i++
# clear
# echo "****************************************************************"
# qstat -u $user
# if ($loop) sleep 5
# end
## if (args.stat):
## pass
################################################################
# environment processing
################################################################
if (args.here):
    run_home = os.environ["PWD"]
elif ("MCSCRIPT_RUN_HOME" in os.environ):
    run_home = os.environ["MCSCRIPT_RUN_HOME"]
else:
    print("MCSCRIPT_RUN_HOME not found in environment")
    exit(1)

if (args.here):
    work_home = os.environ["PWD"]
elif ("MCSCRIPT_WORK_HOME" in os.environ):
    work_home = os.environ["MCSCRIPT_WORK_HOME"]
else:
    print("MCSCRIPT_WORK_HOME not found in environment")
    exit(1)

if (args.here):
    launch_home = os.environ["PWD"]
elif ("MCSCRIPT_LAUNCH_HOME" in os.environ):
    launch_home = os.environ["MCSCRIPT_LAUNCH_HOME"]
else:
    launch_home = work_home

if ("MCSCRIPT_RUN_PREFIX" in os.environ):
    run_prefix = os.environ["MCSCRIPT_RUN_PREFIX"]
else:
    print("MCSCRIPT_RUN_PREFIX not found in environment")
    exit(1)

if ("MCSCRIPT_PYTHON" in os.environ):
    python_executable = os.environ["MCSCRIPT_PYTHON"]
else:
    print("MCSCRIPT_PYTHON not found in environment")
    exit(1)

if ("MCSCRIPT_DIR" in os.environ):
    qsubm_path = os.environ["MCSCRIPT_DIR"]
else:
    print("MCSCRIPT_DIR not found in environment")
    exit(1)
################################################################
# argument processing
################################################################
# set run name
run = run_prefix + args.run
print("Run:", run)
# ...and process run file
script_extensions = [".py", ".csh"]
job_file = None
for extension in script_extensions:
    filename = os.path.join(run_home, run+extension)
    if (os.path.exists(filename)):  # only accept an extension for which the job file actually exists
        job_file = filename
        job_extension = extension
        break
print(" Run home:", run_home) # useful to report now, in case job file missing
if (job_file is None):
    print("No job file %s.* found with an extension in the set %s." % (run, script_extensions))
    exit(1)
print(" Job file:", job_file)
# set queue and flag batch or local mode
# force local run for task.py toc mode
if ((args.queue == "RUN") or args.toc or args.unlock):
    run_mode = "local"
    run_queue = "local"
    print(" Mode:", run_mode)
else:
    run_mode = "batch"
    run_queue = args.queue
    print(" Mode:", run_mode, "(%s)" % args.queue)
# set wall time
wall_time_min = args.wall
print(" Wall time (min): {:d}".format(wall_time_min))
wall_time_sec = wall_time_min*60
# environment definitions: general run parameters
environment_definitions = [
    "MCSCRIPT_RUN={:s}".format(run),
    "MCSCRIPT_JOB_FILE={:s}".format(job_file),
    "MCSCRIPT_RUN_MODE={:s}".format(run_mode),
    "MCSCRIPT_RUN_QUEUE={:s}".format(run_queue),
    "MCSCRIPT_WALL_SEC={:d}".format(wall_time_sec)
]

# environment definitions: serial run parameters
environment_definitions += [
    "MCSCRIPT_SERIAL_THREADS={:d}".format(args.serialthreads)
]

# environment definitions: hybrid run parameters
environment_definitions += [
    "MCSCRIPT_HYBRID_NODES={:d}".format(args.nodes),
    "MCSCRIPT_HYBRID_RANKS={:d}".format(args.ranks),
    "MCSCRIPT_HYBRID_THREADS={:d}".format(args.threads),
]
# set multi-task run parameters
if (args.toc):
    task_mode = mcscript.task.TaskMode.kTOC
elif (args.unlock):
    task_mode = mcscript.task.TaskMode.kUnlock
elif (args.archive):
    task_mode = mcscript.task.TaskMode.kArchive
elif (args.prerun):
    task_mode = mcscript.task.TaskMode.kPrerun
elif (args.offline):
    task_mode = mcscript.task.TaskMode.kOffline
else:
    task_mode = mcscript.task.TaskMode.kRun
# TODO (mac): neaten up so that these arguments are always provided
# (and simplify this code to a simple list += as above)
environment_definitions.append("MCSCRIPT_TASK_MODE={:d}".format(task_mode.value))
if (args.pool is not None):
    environment_definitions.append("MCSCRIPT_TASK_POOL={:s}".format(args.pool))
if (args.phase is not None):
    environment_definitions.append("MCSCRIPT_TASK_PHASE={:d}".format(args.phase))
if (args.start is not None):
    environment_definitions.append("MCSCRIPT_TASK_START_INDEX={:d}".format(args.start))
if (args.limit is not None):
    environment_definitions.append("MCSCRIPT_TASK_COUNT_LIMIT={:d}".format(args.limit))
environment_definitions.append("MCSCRIPT_TASK_REDIRECT={:s}".format(args.redirect))
# pass through install directory
if os.environ.get("MCSCRIPT_INSTALL_HOME"):
    environment_definitions += [
        "MCSCRIPT_INSTALL_HOME={:s}".format(os.environ["MCSCRIPT_INSTALL_HOME"])
    ]
elif os.environ.get("MCSCRIPT_INSTALL_DIR"):
    # TODO remove deprecated environment variable
    print("****************************************************************")
    print("MCSCRIPT_INSTALL_DIR is now MCSCRIPT_INSTALL_HOME.")
    print("Please update your environment variables.")
    print("****************************************************************")
    environment_definitions += [
        "MCSCRIPT_INSTALL_HOME={:s}".format(os.environ["MCSCRIPT_INSTALL_DIR"])
    ]
else:
    print("MCSCRIPT_INSTALL_HOME not found in environment")
    exit(1)

# include additional environment setup if defined
if os.environ.get("MCSCRIPT_SOURCE"):
    environment_definitions += [
        "MCSCRIPT_SOURCE={:s}".format(os.environ["MCSCRIPT_SOURCE"])
    ]
# set user-specified variable definitions
# Note conditional is required since "".split(", ") is [""] rather than [].
if (args.vars is None):
    user_environment_definitions = []
else:
    user_environment_definitions = args.vars.split(",")
print(" User environment definitions:", user_environment_definitions)
environment_definitions += user_environment_definitions
################################################################
# directory setup
################################################################
# set up scratch directory (for batch job work)
# name is defined here, but creation is left up to job script,
# in case scratch is local to the compute node
work_dir = os.path.join(work_home, run)
## if ( not os.path.exists(work_dir)):
## mcscript.utils.mkdir(work_dir)
environment_definitions.append("MCSCRIPT_WORK_DIR=%s" % work_dir)
# set up run launch directory (for batch job output logging)
launch_dir_parent = os.path.join(launch_home, run)
if ( not os.path.exists(launch_home)):
    mcscript.utils.mkdir(launch_home)
if ( not os.path.exists(launch_dir_parent)):
    mcscript.utils.mkdir(launch_dir_parent)
if (args.archive):
    # archive mode
    # launch in archive directory rather than usual batch job output directory
    # (important since if batch job server directs output to the
    # regular output directory while tar is archiving that directory,
    # tar will return with an error code, torpedoing the archive task)
    launch_dir = os.path.join(launch_home, run, "archive")
else:
    # standard run mode
    launch_dir = os.path.join(launch_home, run, "batch")
if ( not os.path.exists(launch_dir)):
    mcscript.utils.mkdir(launch_dir)
environment_definitions.append("MCSCRIPT_LAUNCH_DIR=%s" % launch_dir)
################################################################
# job environment setup
################################################################
# construct job name
job_name = "%s" % run
##job_name += "-w%d" % args.width
if (args.pool is not None):
    job_name += "-%s" % args.pool
job_name += "-%s" % args.phase
print(" Job name:", job_name)
# process environment definitions
# regularize environment definitions
# Convert all plain variable name definitions "VAR" into definition
# as null string "VAR=". Note that "VAR" would be an environment
# variable pass-through request to qsub, but it causes trouble with
# defining an environment for local execution. So doing this
# regularization simplifies further processing and ensures
# uniformity of the environment between batch and local runs.
for i in range(len(environment_definitions)):
    if (not "=" in environment_definitions[i]):
        environment_definitions[i] += "="
print()
print("Vars:", ",".join(environment_definitions))
# for local run
job_environ = dict(os.environ)  # copy, so definitions do not leak back into qsubm's own environment
environment_keyvalues = [
    entry.split("=", 1)  # split only on the first "=", so values may themselves contain "="
    for entry in environment_definitions
]
job_environ.update(dict(environment_keyvalues))
################################################################
# run invocation
################################################################
# flush script output before invoking job
print()
sys.stdout.flush()
# handle batch run
if (run_mode == "batch"):
    # set local qsub arguments
    (submission_args, submission_input_string, repetitions) = mcscript.config.submission(job_name, job_file, qsubm_path, environment_definitions, args)
    # notes: options must come before command on some platforms (e.g., Univa)
    print(" ".join(submission_args))
    print(submission_input_string)
    print()
    print("-"*64)
    for i in range(repetitions):
        process = subprocess.Popen(
            submission_args,
            stdin=subprocess.PIPE,   # to take input from communicate
            stdout=subprocess.PIPE,  # to send output to communicate -- default merged stderr
            env=job_environ,
            cwd=launch_dir
        )
        stdout_bytes = process.communicate(input=submission_input_string)[0]
        stdout_string = stdout_bytes.decode("utf-8")
        print(stdout_string)
# handle interactive run
# Note: We call interpreter rather than trying to directly execute
# job file since this saves us from bothering with execute permissions.
# But, beware the interpreter enforced by the script's shebang line might
# be different from the version of the interpreter found in the below invocation,
# especially in a "module" environment.
elif (run_mode == "local"):
    if (job_extension == ".py"):
        popen_args = [python_executable, job_file]
    elif (job_extension == ".csh"):
        popen_args = ["csh", job_file]
    print()
    print("-"*64)
    process = subprocess.Popen(popen_args, cwd=launch_dir, env=job_environ)
    process.wait()
| 40.77521 | 305 | 0.679273 |
import argparse
import os
import shutil
import subprocess
import sys
import mcscript.config
import mcscript.utils
######################################################
if (args.here):
run_home = os.environ["PWD"]
elif ("MCSCRIPT_RUN_HOME" in os.environ):
run_home = os.environ["MCSCRIPT_RUN_HOME"]
else:
print("MCSCRIPT_RUN_HOME not found in environment")
exit(1)
if (args.here):
work_home = os.environ["PWD"]
elif ("MCSCRIPT_WORK_HOME" in os.environ):
work_home = os.environ["MCSCRIPT_WORK_HOME"]
else:
print("MCSCRIPT_WORK_HOME not found in environment")
exit(1)
if (args.here):
launch_home = os.environ["PWD"]
elif ("MCSCRIPT_LAUNCH_HOME" in os.environ):
launch_home = os.environ["MCSCRIPT_LAUNCH_HOME"]
else:
launch_home = work_home
if ("MCSCRIPT_RUN_PREFIX" in os.environ):
run_prefix = os.environ["MCSCRIPT_RUN_PREFIX"]
else:
print("MCSCRIPT_RUN_PREFIX not found in environment")
exit(1)
if ("MCSCRIPT_PYTHON" in os.environ):
python_executable = os.environ["MCSCRIPT_PYTHON"]
else:
print("MCSCRIPT_PYTHON not found in environment")
exit(1)
if ("MCSCRIPT_DIR" in os.environ):
qsubm_path = os.environ["MCSCRIPT_DIR"]
else:
print("MCSCRIPT_DIR not found in environment")
exit(1)
################################################################
# argument processing
################################################################
# set run name
run = run_prefix + args.run
print("Run:", run)
# ...and process run file
script_extensions = [".py", ".csh"]
job_file = None
for extension in script_extensions:
filename = os.path.join(run_home, run+extension)
if (filename):
job_file = filename
job_extension = extension
break
print(" Run home:", run_home) # useful to report now, in case job file missing
if (job_file is None):
print("No job file %s.* found with an extension in the set %s." % (run, script_extensions))
exit(1)
print(" Job file:", job_file)
# set queue and flag batch or local mode
# force local run for task.py toc mode
if ((args.queue == "RUN") or args.toc or args.unlock):
run_mode = "local"
run_queue = "local"
print(" Mode:", run_mode)
else:
run_mode = "batch"
run_queue = args.queue
print(" Mode:", run_mode, "(%s)" % args.queue)
# set wall time
wall_time_min = args.wall
print(" Wall time (min): {:d}".format(wall_time_min))
wall_time_sec = wall_time_min*60
# environment definitions: general run parameters
environment_definitions = [
"MCSCRIPT_RUN={:s}".format(run),
"MCSCRIPT_JOB_FILE={:s}".format(job_file),
"MCSCRIPT_RUN_MODE={:s}".format(run_mode),
"MCSCRIPT_RUN_QUEUE={:s}".format(run_queue),
"MCSCRIPT_WALL_SEC={:d}".format(wall_time_sec)
]
# environment definitions: serial run parameters
environment_definitions += [
"MCSCRIPT_SERIAL_THREADS={:d}".format(args.serialthreads)
]
# environment definitions: hybrid run parameters
environment_definitions += [
"MCSCRIPT_HYBRID_NODES={:d}".format(args.nodes),
"MCSCRIPT_HYBRID_RANKS={:d}".format(args.ranks),
"MCSCRIPT_HYBRID_THREADS={:d}".format(args.threads),
]
# set multi-task run parameters
if (args.toc):
task_mode = mcscript.task.TaskMode.kTOC
elif (args.unlock):
task_mode = mcscript.task.TaskMode.kUnlock
elif (args.archive):
task_mode = mcscript.task.TaskMode.kArchive
elif (args.prerun):
task_mode = mcscript.task.TaskMode.kPrerun
elif (args.offline):
task_mode = mcscript.task.TaskMode.kOffline
else:
task_mode = mcscript.task.TaskMode.kRun
# TODO (mac): neaten up so that these arguments are always provided
# (and simplify this code to a simple list += as above)
environment_definitions.append("MCSCRIPT_TASK_MODE={:d}".format(task_mode.value))
if (args.pool is not None):
environment_definitions.append("MCSCRIPT_TASK_POOL={:s}".format(args.pool))
if (args.phase is not None):
environment_definitions.append("MCSCRIPT_TASK_PHASE={:d}".format(args.phase))
if (args.start is not None):
environment_definitions.append("MCSCRIPT_TASK_START_INDEX={:d}".format(args.start))
if (args.limit is not None):
environment_definitions.append("MCSCRIPT_TASK_COUNT_LIMIT={:d}".format(args.limit))
environment_definitions.append("MCSCRIPT_TASK_REDIRECT={:s}".format(args.redirect))
# pass through install directory
if os.environ.get("MCSCRIPT_INSTALL_HOME"):
environment_definitions += [
"MCSCRIPT_INSTALL_HOME={:s}".format(os.environ["MCSCRIPT_INSTALL_HOME"])
]
elif os.environ.get("MCSCRIPT_INSTALL_DIR"):
# TODO remove deprecated environment variable
print("****************************************************************")
print("MCSCRIPT_INSTALL_DIR is now MCSCRIPT_INSTALL_HOME.")
print("Please update your environment variables.")
print("****************************************************************")
environment_definitions += [
"MCSCRIPT_INSTALL_HOME={:s}".format(os.environ["MCSCRIPT_INSTALL_DIR"])
]
else:
print("MCSCRIPT_INSTALL_HOME not found in environment")
exit(1)
# include additional environment setup if defined
if os.environ.get("MCSCRIPT_SOURCE"):
environment_definitions += [
"MCSCRIPT_SOURCE={:s}".format(os.environ["MCSCRIPT_SOURCE"])
]
# set user-specified variable definitions
# Note conditional is required since "".split(", ") is [""] rather than [].
if (args.vars is None):
user_environment_definitions = []
else:
user_environment_definitions = args.vars.split(",")
print(" User environment definitions:", user_environment_definitions)
environment_definitions += user_environment_definitions
################################################################
# directory setup
################################################################
# set up scratch directory (for batch job work)
# name is defined here, but creation is left up to job script,
# in case scratch is local to the compute note
work_dir = os.path.join(work_home, run)
## if ( not os.path.exists(work_dir)):
## mcscript.utils.mkdir(work_dir)
environment_definitions.append("MCSCRIPT_WORK_DIR=%s" % work_dir)
# set up run launch directory (for batch job output logging)
launch_dir_parent = os.path.join(launch_home, run)
if ( not os.path.exists(launch_home)):
mcscript.utils.mkdir(launch_home)
if ( not os.path.exists(launch_dir_parent)):
mcscript.utils.mkdir(launch_dir_parent)
if (args.archive):
# archive mode
# launch in archive directory rather than usual batch job output directory
# (important since if batch job server directs output to the
# regular output directory while tar is archiving that directory,
# tar will return with an error code, torpedoing the archive task)
launch_dir = os.path.join(launch_home, run, "archive")
else:
# standard run mode
launch_dir = os.path.join(launch_home, run, "batch")
if ( not os.path.exists(launch_dir)):
mcscript.utils.mkdir(launch_dir)
environment_definitions.append("MCSCRIPT_LAUNCH_DIR=%s" % launch_dir)
################################################################
# job environment setup
################################################################
# construct job name
job_name = "%s" % run
##job_name += "-w%d" % args.width
if (args.pool is not None):
job_name += "-%s" % args.pool
job_name += "-%s" % args.phase
print(" Job name:", job_name)
# process environment definitions
# regularize environment definitions
# Convert all plain variable name definitions "VAR" into definition
# as null string "VAR=". Note that "VAR" would be an environment
# variable pass-through request to qsub, but it causes trouble with
# defining an environment for local execution. So doing this
# regularization simplifies further processing and ensures
# uniformity of the environment between batch and local runs.
for i in range(len(environment_definitions)):
if (not "=" in environment_definitions[i]):
environment_definitions[i] += "="
print()
print("Vars:", ",".join(environment_definitions))
# for local run
job_environ=os.environ
environment_keyvalues = [
entry.split("=")
for entry in environment_definitions
]
job_environ.update(dict(environment_keyvalues))
################################################################
# run invocation
################################################################
# flush script output before invoking job
print()
sys.stdout.flush()
# handle batch run
if (run_mode == "batch"):
# set local qsub arguments
(submission_args, submission_input_string, repetitions) = mcscript.config.submission(job_name, job_file, qsubm_path, environment_definitions, args)
# notes: options must come before command on some platforms (e.g., Univa)
print(" ".join(submission_args))
print(submission_input_string)
print()
print("-"*64)
for i in range(repetitions):
process = subprocess.Popen(
submission_args,
stdin=subprocess.PIPE, # to take input from communicate
stdout=subprocess.PIPE, # to send output to communicate -- default merged stderr
env=job_environ,
cwd=launch_dir
)
stdout_bytes = process.communicate(input=submission_input_string)[0]
stdout_string = stdout_bytes.decode("utf-8")
print(stdout_string)
# handle interactive run
# Note: We call interpreter rather than trying to directly execute
# job file since this saves us from bothering with execute permissions.
# But, beware the interpreter enforced by the script's shebang line might
elif (run_mode == "local"):
if (extension == ".py"):
popen_args = [python_executable, job_file]
elif (extension == ".csh"):
popen_args = ["csh", job_file]
print()
print("-"*64)
process = subprocess.Popen(popen_args, cwd=launch_dir, env=job_environ)
process.wait()
| true | true |
# openGaussBase/testcase/MOT/Opengauss_Function_MOT_Case0025.py | opengauss-mirror/Yat | MulanPSL-1.0
"""
Copyright (c) 2022 Huawei Technologies Co.,Ltd.
openGauss is licensed under Mulan PSL v2.
You can use this software according to the terms and conditions of the Mulan PSL v2.
You may obtain a copy of Mulan PSL v2 at:
http://license.coscl.org.cn/MulanPSL2
THIS SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND,
EITHER EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO NON-INFRINGEMENT,
MERCHANTABILITY OR FIT FOR A PARTICULAR PURPOSE.
See the Mulan PSL v2 for more details.
"""
'''
Case Type: Data types not supported by MOT
Case Name: Reltime as a data type not supported by MOT
'''
import unittest
import sys
sys.path.append(sys.path[0] + "/../")
from testcase.utils.Constant import Constant
from testcase.utils.CommonSH import CommonSH
from testcase.utils.Logger import Logger
logger = Logger()
class Mot_datatype_test(unittest.TestCase):
def setUp(self):
self.sh_primysh = CommonSH('PrimaryDbUser')
self.constant = Constant()
# logger.info('------------修改配置,并重启数据库------------')
# self.configitem = "enable_incremental_checkpoint=off"
# mod_msg = self.sh_primysh.execute_gsguc('set', self.constant.GSGUC_SUCCESS_MSG, self.configitem)
# stopmsg = str(self.sh_primysh.stop_db_cluster())
# startmsg = str(self.sh_primysh.start_db_cluster())
# self.assertTrue(stopmsg)
# self.assertTrue(startmsg)
def test_mot_none_datatype_array(self):
logger.info("------------------------Opengauss_Function_MOT_Case0025开始执行---------------------")
self.schema = 'schema_mot_test'
self.tablename = 'MOTTable'
self.datatype = 'reltime'
self.sql_cmd = f'''CREATE SCHEMA {self.schema};
CREATE FOREIGN TABLE {self.schema}.{self.tablename}(t1 {self.datatype});
DROP SCHEMA {self.schema} CASCADE;
'''
logger.info("-------------------------开始用例测试:MOT不支持数据类型reltime--------------------------")
msg = self.sh_primysh.execut_db_sql(self.sql_cmd)
logger.info(msg)
self.assertIn(self.constant.NOT_SUPPORTED_TYPE, msg)
def tearDown(self):
        # logger.info('-----------Restore configuration and restart the database-----------')
# self.configitem = "enable_incremental_checkpoint=on"
# mod_msg = self.sh_primysh.execute_gsguc('set', self.constant.GSGUC_SUCCESS_MSG, self.configitem)
# stopmsg = str(self.sh_primysh.stop_db_cluster())
# startmsg = str(self.sh_primysh.start_db_cluster())
# self.assertTrue(stopmsg)
# self.assertTrue(startmsg)
        logger.info('---------------Opengauss_Function_MOT_Case0025 finished---------------')
sloth/backend/generic.py | katelaan/sloth | f487911c4e6850253c592cf65280390f39e24a87 | license: MIT | stars: 2
import z3
from ..utils import logger, utils
from ..model import model_utils
from . import symbols, struct
def const(name, sort):
assert(isinstance(name, str))
return z3.Const(name, sort)
def array(name, ix_sort, cont_sort):
return z3.Array(name, ix_sort, cont_sort)
class SlSort:
"""Representation of a separation logic sort.
Depending on the backend, this could be either an uninterpreted sort or a
built-in sort (Int).
Indexing into the sort object with a string returns a constant of
the sort of the given name.
"""
def __init__(self, ref, set_class):
"""
Create new separation logic sort whose elements are of sort `ref` in Z3.
:param: ref: Reference type to be used for elements of this sort in z3 (of type :class:`z3.SortRef`)
:param: set_class: Subclass of Set used to create sets of this sort.
"""
assert(isinstance(ref, z3.SortRef))
self.ref = ref
self.set_class = set_class
def __eq__(self, other):
try:
return self.ref == other.ref
        except AttributeError:
return False
def __hash__(self):
# TODO: Proper hash for sorts? (Currently simply negating the ref to ensure the hash is different from the one for the wrapped z3 sort)
return ~hash(self.ref)
def to_declare(self):
"""Must this sort be declared in the SMT2 encoding?"""
raise NotImplementedError("Not specified whether sort must be declared")
def set_sort(self):
"""Return the set / footprint sort associated with this sort."""
return SetSort(self.set_class, self)
def __getitem__(self, elem):
"""Return a constant of this sort of the given string name."""
return const(elem, self.ref)
class Set:
"""Representation of a set of locations / footprint."""
def __init__(self, ref, elem_sort):
"""Create a new set"""
self.ref = ref
self.elem_sort = elem_sort
def __repr__(self):
return "{} : SET({})".format(self.ref, self.elem_sort)
@staticmethod
def get_empty(elem_sort):
"""Return encoding of an empty set"""
raise NotImplementedError("")
def is_empty(self):
"""Return constraint expressing that this set is empty"""
raise NotImplementedError("")
def non_empty(self):
"""Return constraint expressing that this set is nonempty"""
raise NotImplementedError("")
def insert(self, elem):
"""Return a new set that additionally contains `elem`"""
raise NotImplementedError("")
def remove(self, elem):
"""Return a new set with `elem` removed"""
raise NotImplementedError("")
def is_singleton(self, elem):
"""Return constraint expressing that `self` is the singleton set containing `elem`"""
raise NotImplementedError("")
def contains(self, elem):
"""Return constraint expressing that this set contains the given element"""
raise NotImplementedError("")
def subset_of(self, other):
"""Return constraint expressing that `self` is a subset of `other`"""
raise NotImplementedError("")
def disjoint_from(self, other):
"""Return constraint expressing that `self` is disjoint from `other`"""
raise NotImplementedError("")
def is_identical(self, other):
"""Return constraint expressing that `self` is identical to `other`"""
raise NotImplementedError("")
def union_of(self, part1, part2):
"""Return constraint expressing that `self` is the union of `part1` and `part2`"""
raise NotImplementedError("")
def union_of_all(self, *parts):
"""Return constraint expressing that `self` is the union of all `parts`"""
raise NotImplementedError("")
def union_without_elem(self, part1, part2, elem):
"""Return constraint expressing that after removing `elem` from `self`,
the result is the union of `part1` and `part2`"""
raise NotImplementedError("")
class SetSort:
"""A separation logic set / footprint sort associated with a
:class:`backend.generic.SlSort`
Indexing into the sort object with a string returns a constant of
the set sort of the given name.
"""
def __init__(self, set_class, elem_sort):
assert(isinstance(elem_sort, SlSort))
self.set_class = set_class
self.elem_sort = elem_sort
self.ref = z3.ArraySort(self.elem_sort.ref, z3.BoolSort())
self.consts = set()
def __getitem__(self, name):
"""Return a constant of this sort of the given string name."""
assert(isinstance(name, str))
set_ref = array(name, self.elem_sort.ref, z3.BoolSort())
return self.set_class(set_ref, self.elem_sort)
def __eq__(self, other):
try:
return self.elem_sort == other.elem_sort
        except AttributeError:
return False
def __hash__(self):
# TODO: Proper hash for sorts? (Currently simply negating the ref to ensure the hash is different from the one for the wrapped z3 sort)
return ~hash(self.elem_sort)
class LocInterpretation:
"""Interpretation of a location sort in a z3 model.
Represents the interpretation of the location sort itself as well
as all (plain and footprint) constants of a
:class:`sloth.backend.struct.Struct` in a :class:`z3.ModelRef`.
The set of constants is restricted to constants that are both
1. known to the :class:`backend.generic.ConstRegistry` passed to
the constructor
2. interpreted by the z3 model (not None in the z3 model)
This makes `LocInterpretation` a safe interface for iterating over
constants, since neither redundant/unused constants in the
encoding (which may not occur in the z3 model) nor internal
constants introduced by z3 (which are in the z3 model but not part
of our encoding) are contained in the `const` and `fp_consts`
attributes.
"""
def __init__(self, struct, const_registry, z3_model):
self.struct = struct
self.z3_model = z3_model
self._locs = [] # Note: Initialized properly in subclasses
self.labeling = {}
# Initialize constants based on the registry & the model
self.consts = list(const_registry.defined_locs(struct, z3_model))
#print("CONSTS IN LOC INTERPRETATION: {}".format(self.consts))
if self.consts:
self.null = model_utils.val_of(struct.null, z3_model)
else:
self.null = None
self.fp_consts = list(const_registry.defined_fps(struct, z3_model))
# TODO: Locs must currently be initialized in the subclass after calling super and before calling _init_node_labeling --> Make less error prone
def _is_used(self):
# TODO: The following isn't true any more, is it?
# The null constant is always there, because it is declared for the parser
# Thus we define that a sort is used if it contains at least one more const
#null_set = set([self.struct.null])
#return self.struct.sort.consts != null_set
return bool(self.consts)
def __bool__(self):
return bool(self._locs)
def __iter__(self):
return iter(sorted(self._locs, key = lambda v: int(str(v))))
def __len__(self):
return len(self._locs)
def __repr__(self):
def node_repr(k,v):
if v:
return "{}:{}".format(k,v)
else:
return str(k)
ordered = sorted(self.labeling.items(),
key = lambda i: int(str(i[0])))
return ", ".join(map(lambda i : node_repr(*i), ordered))
def empty(self):
"""Is this sort interpreted by an empty set of locations (or not at all)?"""
return not bool(self)
def _init_node_labeling(self):
if not self._is_used():
return
labeling = dict([(loc,[]) for loc in self._locs])
for c in self.consts:
try:
loc = model_utils.val_of(c, self.z3_model)
labeling[loc].append(c)
except KeyError as e:
if loc is None:
fmt = "Adding {} to labeling of {} failed --> {} not actually used in model"
logger.debug(fmt.format(c, loc, c))
else:
fmt = ("Inconsistent internal state: location {} interprets {}"
+ "in z3 model, but model adapter contains only locs {}")
raise utils.IllegalSolverState(fmt.format(loc,c,self._locs))
self.labeling = labeling
class ConstRegistry:
"""Cache for keeping track of constants introduced in an encoding.
Use case: Add all constants that appear in an encoding to a
:class:`ConstRegistry` and pass that registry to all
:class:`LocInterpretation` instances you create. This guarantees
that the set of constants accessible through the intepretation is
the intersection of the constants in the encoding and the
:class:`z3.ModelRef` model returned by z3.
"""
LOC = False
FP = True
DATA = "data"
def __init__(self, structs):
self.structs = structs
self._cache = {(struct, typ) : set()
for struct in structs
for typ in [self.LOC, self.FP]}
self._cache.update({(self.DATA,self.LOC) : set()})
def __repr__(self):
lines = []
for (s,t), cs in self._cache.items():
if cs:
typ = "locs" if t == self.LOC else "foots"
lines.append("{}-{} = {}".format(s, typ, cs))
return "consts(\n" + utils.indented("\n".join(lines)) + "\n)"
def add_const(self, struct, const):
"""Add the given const to the cache for the given struct."""
#print("REGISTRY: {}".format(const))
if const.sort() == struct.fp_sort.ref:
self._cache[(struct, self.FP)].add(const)
elif const.sort() == struct.sort.ref:
self._cache[(struct, self.LOC)].add(const)
else:
fmt = "Constant of wrong sort {} added to {} registry"
raise utils.IllegalSolverState(fmt.format(const.__class__, struct))
def add_data_const(self, const):
assert(const.sort() == symbols.data_sort)
self._cache[(self.DATA, self.LOC)].add(const)
# TODO: Memoize?
def _defined_consts(self, key, z3_model):
try:
for c in self._cache[key]:
if model_utils.val_of(c, z3_model) is not None:
yield c
except KeyError:
fmt = "Registry not defined for {} of {}"
typ = "locations" if key[1] == self.LOC else "footprints"
raise utils.IllegalSolverState(fmt.format(typ, key[0]))
def has_consts(self, struct):
"""Does this registry contain any (location) consts of the given struct?"""
return bool(self._cache[(struct, self.LOC)])
def defined_locs(self, struct, z3_model):
"""Generator for location consts of given struct in given model.
No order on the returned consts guaranteed."""
return self._defined_consts((struct, self.LOC), z3_model)
def defined_data(self, z3_model):
return self._defined_consts((self.DATA, self.LOC), z3_model)
def defined_fps(self, struct, z3_model):
"""Generator for footprint consts of given struct in given model.
No order on the returned consts guaranteed."""
return self._defined_consts((struct, self.FP), z3_model)
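The registry/interpretation split above can be exercised without z3. The sketch below is a minimal stand-in (`FakeConst`, `FakeRegistry`, and the dict-based "model" are all hypothetical, not part of this module) showing the `_defined_consts` idea: a constant registered during encoding survives only if the model actually interprets it.

```python
class FakeConst:
    """Hypothetical stand-in for a z3 constant (just a name)."""
    def __init__(self, name):
        self.name = name

class FakeRegistry:
    """Mimics ConstRegistry: collect constants during encoding, then
    keep only those the model assigns a value (val_of not None)."""
    def __init__(self):
        self._locs = set()

    def add_const(self, const):
        self._locs.add(const)

    def defined_locs(self, model):
        # Mirrors _defined_consts: skip constants the model leaves
        # uninterpreted (e.g. redundant constants in the encoding).
        for c in self._locs:
            if model.get(c.name) is not None:
                yield c

reg = FakeRegistry()
reg.add_const(FakeConst("x"))
reg.add_const(FakeConst("y"))
model = {"x": 1}  # "y" occurs in the encoding but not in the model
defined = sorted(c.name for c in reg.defined_locs(model))
print(defined)  # -> ['x']
```

This is why `LocInterpretation` can iterate `self.consts` safely: neither unused encoding constants nor z3-internal constants make it through the filter.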
setup.py | awslabs/lorien | bcd39132e5f0738ee6f4685676ea8628cb4cea1b | license: Apache-2.0 | stars: 40
"""Package Setup"""
import os
import re
from distutils.core import setup
from setuptools import find_packages
CURRENT_DIR = os.path.dirname(__file__)
def read(path):
with open(path, "r") as filep:
return filep.read()
def get_version(package_name):
with open(os.path.join(os.path.dirname(__file__), package_name, "__init__.py")) as fp:
for line in fp:
tokens = re.search(r'^\s*__version__\s*=\s*"(.+)"\s*$', line)
if tokens:
return tokens.group(1)
raise RuntimeError("Unable to find own __version__ string")
setup(
name="lorien",
version=get_version("lorien"),
license="Apache-2.0",
description="A Unified Infrastructure for Efficient Deep Learning Workloads Delivery",
long_description=read(os.path.join(CURRENT_DIR, "README.md")),
long_description_content_type="text/markdown",
author="Lorien Community",
url="https://github.com/awslabs/lorien",
keywords=[],
packages=find_packages(),
install_requires=[p for p in read("requirements.txt").split("\n") if p],
classifiers=[
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"Topic :: Software Development :: Build Tools",
"License :: OSI Approved :: Apache Software License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.6",
],
)
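The version-sniffing regex in `get_version` only matches a line of the exact form `__version__ = "..."`. A quick standalone check of that pattern (the sample `lines` are made up; the pattern is restated from `get_version` above):

```python
import re

# Same pattern get_version() applies line-by-line to a package __init__.py.
PATTERN = r'^\s*__version__\s*=\s*"(.+)"\s*$'

def find_version(lines):
    for line in lines:
        tokens = re.search(PATTERN, line)
        if tokens:
            return tokens.group(1)
    raise RuntimeError("Unable to find own __version__ string")

version = find_version(['from .core import run', '__version__ = "0.4.2"'])
print(version)  # -> 0.4.2
```

Single-quoted assignments (`__version__ = '0.4.2'`) would not match; the pattern requires double quotes.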
libcloud/storage/__init__.py | dupontz/libcloud | 419c69441ea10e7bbf37319e5e8d02e82e7e6b40 | license: Apache-2.0 | stars: 5
"""
Module for working with Storage
"""
bot/help.py | thecryptoundertaker/discord-tip-bot | 533b35530fab8c8e03496183df521ee31f50f612 | license: MIT
from loguru import logger
from bot import embeds
@logger.catch
def help_commands(bot):
@bot.group(invoke_without_command=True)
async def help(ctx):
await ctx.send(embed=embeds.help())
@help.command()
async def balance(ctx):
await ctx.send(embed=embeds.help_balance())
@help.command()
async def deposit(ctx):
await ctx.send(embed=embeds.help_deposit())
@help.command()
async def tip(ctx):
await ctx.send(embed=embeds.help_tip())
@help.command()
async def withdraw(ctx):
await ctx.send(embed=embeds.help_withdraw())
@help.command(name="tokens")
async def _tokens(ctx):
await ctx.send(embed=embeds.help_tokens())
h2o-py/tests/testdir_algos/gbm/pyunit_get_model_gbm.py | Bhanuprakash-ch/h2o-3 | c75bc5d2dc644cc8c09df755185a4cc6e34e0d1a | license: Apache-2.0
import sys
sys.path.insert(1,"../../../")
import h2o
from tests import pyunit_utils
def get_model_gbm():
prostate = h2o.import_file(path=pyunit_utils.locate("smalldata/logreg/prostate.csv"))
prostate.describe()
prostate[1] = prostate[1].asfactor()
from h2o.estimators.gbm import H2OGradientBoostingEstimator
prostate_gbm = H2OGradientBoostingEstimator(distribution="bernoulli")
prostate_gbm.train(x=range(2,9),y=1, training_frame=prostate)
prostate_gbm.show()
prostate_gbm.predict(prostate)
model = h2o.get_model(prostate_gbm.model_id)
model.show()
if __name__ == "__main__":
pyunit_utils.standalone_test(get_model_gbm)
else:
get_model_gbm()
algorithms/stack/is_consecutive.py | zhengli0817/algorithms | 3c98813f0329d9a5fff1107dbcd40e7f38d2275d | license: MIT | stars: 22,426
"""
Given a stack, a function is_consecutive takes a stack as a parameter and that
returns whether or not the stack contains a sequence of consecutive integers
starting from the bottom of the stack (returning true if it does, returning
false if it does not).
For example:
bottom [3, 4, 5, 6, 7] top
Then the call of is_consecutive(s) should return true.
bottom [3, 4, 6, 7] top
Then the call of is_consecutive(s) should return false.
bottom [3, 2, 1] top
The function should return false due to reverse order.
Note: There are 2 solutions:
first_is_consecutive: it uses a single stack as auxiliary storage
second_is_consecutive: it uses a single queue as auxiliary storage
"""
import collections
def first_is_consecutive(stack):
storage_stack = []
for i in range(len(stack)):
first_value = stack.pop()
if len(stack) == 0: # Case odd number of values in stack
return True
second_value = stack.pop()
if first_value - second_value != 1: # Not consecutive
return False
stack.append(second_value) # Backup second value
storage_stack.append(first_value)
# Back up stack from storage stack
for i in range(len(storage_stack)):
stack.append(storage_stack.pop())
return True
def second_is_consecutive(stack):
q = collections.deque()
for i in range(len(stack)):
first_value = stack.pop()
if len(stack) == 0: # Case odd number of values in stack
return True
second_value = stack.pop()
if first_value - second_value != 1: # Not consecutive
return False
stack.append(second_value) # Backup second value
q.append(first_value)
# Back up stack from queue
for i in range(len(q)):
stack.append(q.pop())
for i in range(len(stack)):
q.append(stack.pop())
for i in range(len(q)):
stack.append(q.pop())
return True
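A quick check of the behaviour promised in the module docstring. `first_is_consecutive` is restated verbatim so the snippet runs standalone; note that it mutates its argument, so each call below gets a fresh list:

```python
def first_is_consecutive(stack):
    # Restated from the module above; uses one auxiliary stack.
    storage_stack = []
    for i in range(len(stack)):
        first_value = stack.pop()
        if len(stack) == 0:  # odd number of values in stack
            return True
        second_value = stack.pop()
        if first_value - second_value != 1:  # not consecutive
            return False
        stack.append(second_value)
        storage_stack.append(first_value)
    # Restore the stack from the auxiliary storage
    for i in range(len(storage_stack)):
        stack.append(storage_stack.pop())
    return True

results = [
    first_is_consecutive([3, 4, 5, 6, 7]),  # consecutive bottom-to-top
    first_is_consecutive([3, 4, 6, 7]),     # gap between 4 and 6
    first_is_consecutive([3, 2, 1]),        # reverse order
]
print(results)  # -> [True, False, False]
```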
mezzanine/blog/views.py | ShAlireza/mezzanine | 365b3e3251f341f7337f79838796112f84ab94ad | license: BSD-2-Clause | stars: 6
from calendar import month_name
from django.contrib.auth import get_user_model
from django.http import Http404
from django.shortcuts import get_object_or_404
from django.template.response import TemplateResponse
from django.utils.translation import ugettext_lazy as _
from mezzanine.blog.models import BlogPost, BlogCategory
from mezzanine.blog.feeds import PostsRSS, PostsAtom
from mezzanine.conf import settings
from mezzanine.generic.models import Keyword
from mezzanine.utils.views import paginate
User = get_user_model()
def blog_post_list(
request,
tag=None,
year=None,
month=None,
username=None,
category=None,
template="blog/blog_post_list.html",
extra_context=None,
):
"""
Display a list of blog posts that are filtered by tag, year, month,
author or category. Custom templates are checked for using the name
``blog/blog_post_list_XXX.html`` where ``XXX`` is either the
category slug or author's username if given.
"""
templates = []
blog_posts = BlogPost.objects.published(for_user=request.user)
if tag is not None:
tag = get_object_or_404(Keyword, slug=tag)
blog_posts = blog_posts.filter(keywords__keyword=tag)
if year is not None:
blog_posts = blog_posts.filter(publish_date__year=year)
if month is not None:
blog_posts = blog_posts.filter(publish_date__month=month)
try:
month = _(month_name[int(month)])
except IndexError:
raise Http404()
if category is not None:
category = get_object_or_404(BlogCategory, slug=category)
blog_posts = blog_posts.filter(categories=category)
templates.append(u"blog/blog_post_list_%s.html" % str(category.slug))
author = None
if username is not None:
author = get_object_or_404(User, username=username)
blog_posts = blog_posts.filter(user=author)
templates.append(u"blog/blog_post_list_%s.html" % username)
prefetch = ("categories", "keywords__keyword")
blog_posts = blog_posts.select_related("user").prefetch_related(*prefetch)
blog_posts = paginate(
blog_posts,
request.GET.get("page", 1),
settings.BLOG_POST_PER_PAGE,
settings.MAX_PAGING_LINKS,
)
context = {
"blog_posts": blog_posts,
"year": year,
"month": month,
"tag": tag,
"category": category,
"author": author,
}
context.update(extra_context or {})
templates.append(template)
return TemplateResponse(request, templates, context)
def blog_post_detail(
request,
slug,
year=None,
month=None,
day=None,
template="blog/blog_post_detail.html",
extra_context=None,
):
""". Custom templates are checked for using the name
``blog/blog_post_detail_XXX.html`` where ``XXX`` is the blog
posts's slug.
"""
blog_posts = BlogPost.objects.published(for_user=request.user).select_related()
blog_post = get_object_or_404(blog_posts, slug=slug)
related_posts = blog_post.related_posts.published(for_user=request.user)
context = {
"blog_post": blog_post,
"editable_obj": blog_post,
"related_posts": related_posts,
}
context.update(extra_context or {})
templates = [u"blog/blog_post_detail_%s.html" % str(slug), template]
return TemplateResponse(request, templates, context)
def blog_post_feed(request, format, **kwargs):
"""
Blog posts feeds - maps format to the correct feed view.
"""
try:
return {"rss": PostsRSS, "atom": PostsAtom}[format](**kwargs)(request)
except KeyError:
raise Http404()
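`blog_post_feed` maps the URL's format string to a feed class with a dict lookup, instantiates it, and calls the instance as a view; any unknown format falls through `KeyError` into a 404. The same dispatch shape stripped of Django (all class names here are placeholders, not real Django or Mezzanine API):

```python
class NotFound(Exception):
    """Stand-in for Django's Http404."""

class RSSFeed:
    def __call__(self, request):
        return "rss:" + request

class AtomFeed:
    def __call__(self, request):
        return "atom:" + request

def feed_view(request, fmt, **kwargs):
    # Same shape as blog_post_feed: look the class up by format key,
    # instantiate it with kwargs, call the instance with the request;
    # an unknown format key raises KeyError -> 404.
    try:
        return {"rss": RSSFeed, "atom": AtomFeed}[fmt](**kwargs)(request)
    except KeyError:
        raise NotFound()

print(feed_view("req", "rss"))  # -> rss:req
```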
f73de0cf78faa821e681ebb950cc71213af5e281 | 1,630 | py | Python | pinax/apps/tasks/tests/test_client.py | peiwei/pinax | 34f95b1df4318655fe9bd90dcda8fe824e0c4117 | ["MIT"] | 1 | 2019-02-12T04:45:09.000Z | 2019-02-12T04:45:09.000Z | pinax/apps/tasks/tests/test_client.py | alex/pinax | 37e17ee2e2eb0e387d8809c12e55c20194a7118a | ["MIT"] | null | null | null | pinax/apps/tasks/tests/test_client.py | alex/pinax | 37e17ee2e2eb0e387d8809c12e55c20194a7118a | ["MIT"] | 1 | 2019-02-12T04:45:40.000Z | 2019-02-12T04:45:40.000Z
# coding: utf-8
from django.test import TestCase
rst_markup = """
Sample Header
===============
Blah blah blah
Lower Header
-------------
Blah blah blah
"""
class TestAddForm(TestCase):
fixtures = ["test_tasks.json"]
urls = "tasks.tests.tasks_urls"
def setUp(self):
self.client.login(username="admin", password="test")
def tearDown(self):
pass
def test_add_buttons(self):
response = self.client.get("/tasks/add/")
# Check that the response is 200 OK.
self.failUnlessEqual(response.status_code, 200)
# check that there is an add button
self.assertContains(response, '<input type="submit" value="Add task"/>')
# check that there is an add another task button
self.assertContains(response, "add-another-task")
def test_markup(self):
# create some sample form data
form_data = {
"summary": "my simple test",
"detail": rst_markup,
"markup": "rst",
"assignee": "",
"tags": ""
}
# post the form
response = self.client.post("/tasks/add/", form_data)
# display the resultant task
response = self.client.get("/tasks/task/3/")
# test the markup
self.assertContains(response, '<h1 class="title">Sample Header</h1>')
def test_tag_for_rel(self):
# checking for tag
response = self.client.get("/tasks/")
self.assertContains(response, '<a rel="tag" href="/tasks/tag/test/">test</a>')
f73de13e696bebb11efeeedf0313d4ec29e6ba4e | 1,343 | py | Python | my_classes/.history/ModulesPackages_PackageNamespaces/example1/module1_20210726140241.py | minefarmer/deep-Dive-1 | b0675b853180c5b5781888266ea63a3793b8d855 | ["Unlicense"] | null | null | null | my_classes/.history/ModulesPackages_PackageNamespaces/example1/module1_20210726140241.py | minefarmer/deep-Dive-1 | b0675b853180c5b5781888266ea63a3793b8d855 | ["Unlicense"] | null | null | null | my_classes/.history/ModulesPackages_PackageNamespaces/example1/module1_20210726140241.py | minefarmer/deep-Dive-1 | b0675b853180c5b5781888266ea63a3793b8d855 | ["Unlicense"] | null | null | null
from pprint import pprint
print('----------- Running {0} --------'.format(__name__))
def pprint_dict(header, d):
    print('\n\n-----------------')
    print('****** {0} *****'.format(header))
    for key, value in d.items():
        print(key, value)
# from pprint import pprint
# print('------- Running {0} -----------'.format(__name__))
# def pprint_dict(header, d):
# print('****** {0} *****'.format(header))
# for key, value in d.items():
# print(key, value)
# print('------------------------------\n\n')
# pprint_dict('module1.globals', globals())
print('---------- End of {0} -------------'.format(__name__)) # ------- Running __main__ -----------
# ****** module1.globals *****
# __name__ __main__
# __doc__ None
# __package__ None
# __loader__ <_frozen_importlib_external.SourceFileLoader object at 0x7fd0cedb24c0>
# __spec__ None
# __annotations__ {}
# __builtins__ <module 'builtins' (built-in)>
# __file__ /home/rich/Desktop/carls_hub/deep-Dive-1/my_classes/ModulesPackages_PackageNamespaces/module1.py
# __cached__ None
# pprint <function pprint at 0x7fd0cec5c4c0>
# pprint_dict <function pprint_dict at 0x7fd0cec5c430>
# ------------------------------
# ---------- End of __main__ -------------
f73de266f47a7fbf6575121646eda284072e9940 | 1,569 | py | Python | thefuck/rules/apt_invalid_operation.py | juzim/thefuck | a3b2e6872b9e75b8a259375b9440246fdd181565 | ["MIT"] | 1 | 2021-12-13T18:41:46.000Z | 2021-12-13T18:41:46.000Z | thefuck/rules/apt_invalid_operation.py | juzim/thefuck | a3b2e6872b9e75b8a259375b9440246fdd181565 | ["MIT"] | 4 | 2020-12-23T15:44:08.000Z | 2020-12-23T16:48:59.000Z | thefuck/rules/apt_invalid_operation.py | juzim/thefuck | a3b2e6872b9e75b8a259375b9440246fdd181565 | ["MIT"] | 1 | 2019-12-08T19:23:06.000Z | 2019-12-08T19:23:06.000Z
import subprocess
from thefuck.specific.apt import apt_available
from thefuck.specific.sudo import sudo_support
from thefuck.utils import for_app, eager, replace_command
enabled_by_default = apt_available
@for_app('apt', 'apt-get', 'apt-cache')
@sudo_support
def match(command):
return 'E: Invalid operation' in command.stderr
@eager
def _parse_apt_operations(help_text_lines):
is_commands_list = False
for line in help_text_lines:
line = line.decode().strip()
if is_commands_list and line:
yield line.split()[0]
elif line.startswith('Basic commands:'):
is_commands_list = True
@eager
def _parse_apt_get_and_cache_operations(help_text_lines):
is_commands_list = False
for line in help_text_lines:
line = line.decode().strip()
if is_commands_list:
if not line:
return
yield line.split()[0]
elif line.startswith('Commands:'):
is_commands_list = True
def _get_operations(app):
proc = subprocess.Popen([app, '--help'],
stdout=subprocess.PIPE,
stderr=subprocess.PIPE)
lines = proc.stdout.readlines()
if app == 'apt':
return _parse_apt_operations(lines)
else:
return _parse_apt_get_and_cache_operations(lines)
@sudo_support
def get_new_command(command):
invalid_operation = command.stderr.split()[-1]
operations = _get_operations(command.script_parts[0])
return replace_command(command, invalid_operation, operations)
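Both parsers above follow the same pattern: scan the `--help` output for a commands header, then take the first token of every following line, and `replace_command` later fuzzy-matches the invalid operation against that list. A self-contained sketch of the same flow using only the standard library (the sample help text and `parse_operations` are illustrative stand-ins, not thefuck's API):

```python
import difflib

# Illustrative apt-get-style help text (not captured from a real run).
HELP = b"""Usage: apt-get [options] command

Commands:
  update - Retrieve new lists of packages
  upgrade - Perform an upgrade
  install - Install new packages

See apt-get(8) for more information.
"""

def parse_operations(help_lines):
    """Collect the first word of each line after 'Commands:' until a blank line."""
    ops, in_commands = [], False
    for line in help_lines:
        line = line.decode().strip()
        if in_commands:
            if not line:          # blank line closes the commands block
                break
            ops.append(line.split()[0])
        elif line.startswith('Commands:'):
            in_commands = True
    return ops

ops = parse_operations(HELP.splitlines())
print(ops)                                        # ['update', 'upgrade', 'install']
print(difflib.get_close_matches('instal', ops))   # ['install']
```

thefuck's `replace_command` performs a similar close-match selection to pick the suggested operation.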
f73de47597685b133447c9e04ac8ad2dce7a88e8 | 2,971 | py | Python | setting.py | myWillison/proxy_pool | 2c89db1f976c0754ac4b8502053aa52f232bdf51 | ["MIT"] | null | null | null | setting.py | myWillison/proxy_pool | 2c89db1f976c0754ac4b8502053aa52f232bdf51 | ["MIT"] | null | null | null | setting.py | myWillison/proxy_pool | 2c89db1f976c0754ac4b8502053aa52f232bdf51 | ["MIT"] | null | null | null
# -*- coding: utf-8 -*-
"""
-------------------------------------------------
File Name: setting.py
Description : 配置文件
Author : JHao
date: 2019/2/15
-------------------------------------------------
Change Activity:
2019/2/15:
-------------------------------------------------
"""
BANNER = r"""
****************************************************************
*** ______ ********************* ______ *********** _ ********
*** | ___ \_ ******************** | ___ \ ********* | | ********
*** | |_/ / \__ __ __ _ __ _ | |_/ /___ * ___ | | ********
*** | __/| _// _ \ \ \/ /| | | || __// _ \ / _ \ | | ********
*** | | | | | (_) | > < \ |_| || | | (_) | (_) || |___ ****
*** \_| |_| \___/ /_/\_\ \__ |\_| \___/ \___/ \_____/ ****
**** __ / / *****
************************* /___ / *******************************
************************* ********************************
****************************************************************
"""
# ############### server config ###############
# HOST = "localhost"
HOST = "0.0.0.0"
PORT = 5010
# ############### database config ###################
# db connection uri
# example:
# Redis: redis://:password@ip:port/db
# Ssdb: ssdb://:password@ip:port
DB_CONN = 'redis://:root@127.0.0.1:6379/1'
# proxy table name
TABLE_NAME = 'use_proxy'
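The connection URI above follows the standard `scheme://:password@host:port/db` shape, so its parts can be pulled apart with the standard library. A quick illustration (proxy_pool's own DB layer does its own parsing; this only shows the URI anatomy):

```python
from urllib.parse import urlparse

parts = urlparse('redis://:root@127.0.0.1:6379/1')
print(parts.scheme, parts.password, parts.hostname, parts.port, parts.path)
# redis root 127.0.0.1 6379 /1
```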
# ###### config the proxy fetch function ######
PROXY_FETCHER = [
    # "freeProxy01",  # 无忧代理, 20 proxies, almost none usable
    # # "freeProxy02",  # ---代理66 (count: number of proxies), no results returned
    # # "freeProxy03",  # ---西刺代理 (pagecount=10), site unreachable
    # "freeProxy04",  # 全网代理 guobanjia, 10 proxies
    # "freeProxy05",  # 快代理 (page_count=20/3680, 15 per page) **
    # # "freeProxy06",  # 码农代理 coderbusy, country=cn, 106 proxies, good site ** used to fail to connect; cause found: request headers were missing
    # "freeProxy07",  # 云代理 (pagecount=7, 100 proxies in total)
    # # "freeProxy08",  # ---IP海, site expired and shut down
    # "freeProxy09",  # 免费代理库 (pagecount=8/8, country=China, 15 per page) **
    # # "freeProxy10",  # |||gfw
    # # "freeProxy11",  # |||gfw
    # # "freeProxy12",  # |||gfw
    # # "freeProxy13",  # ---齐云代理, site unreachable
    # "freeProxy14",  # 89免费代理 (maxpage=20/unknown, 15 per page) **
    # "freeProxy15",  # 西拉免费代理 (pagecount=20/2000, 50 per page, regular|http) **
    # All fetchers above are commented out; only the one below is used
    "freeProxy06",  # 码农代理 coderbusy, country=cn, 106 proxies, good site **
]
# ############# proxy validator #################
# VERIFY_URL = "http://www.baidu.com"
VERIFY_URL = "http://httpbin.org/ip"
VERIFY_TIMEOUT = 10
MAX_FAIL_COUNT = 0 # default 0
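A minimal sketch of how a checker might use the three validator settings above; `check_proxy` is illustrative and is not proxy_pool's actual validator:

```python
import urllib.request

VERIFY_URL = "http://httpbin.org/ip"
VERIFY_TIMEOUT = 10

def check_proxy(proxy):
    """Return True if VERIFY_URL is reachable through proxy (e.g. '1.2.3.4:8080')."""
    opener = urllib.request.build_opener(
        urllib.request.ProxyHandler({'http': proxy}))
    try:
        with opener.open(VERIFY_URL, timeout=VERIFY_TIMEOUT) as resp:
            return resp.status == 200
    except OSError:          # URLError/timeout both subclass OSError
        return False
```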
# ############# scheduler config #################
# Set the timezone for the scheduler forcely (optional)
# If it is running on a VM, and
# "ValueError: Timezone offset does not match system offset"
# was raised during scheduling.
# Please uncomment the following line and set a timezone for the scheduler.
# Otherwise it will detect the timezone from the system automatically.
# TIMEZONE = "Asia/Shanghai"
f73de4f06476ba4ca0eb4145e7dcb7a6682462b8 | 3,749 | py | Python | sa_numeric.py | project-k-0-1/project-k | fa5be043a3c82daee992d28db25519e2b1b53289 | ["MIT"] | 1 | 2018-11-30T17:09:11.000Z | 2018-11-30T17:09:11.000Z | sa_numeric.py | Kahroo/kahroo | fa5be043a3c82daee992d28db25519e2b1b53289 | ["MIT"] | null | null | null | sa_numeric.py | Kahroo/kahroo | fa5be043a3c82daee992d28db25519e2b1b53289 | ["MIT"] | 2 | 2020-12-03T04:30:45.000Z | 2021-04-21T09:59:37.000Z
""" Numerical functions """
# -*- coding: utf-8 -*-
#
# Copyright 2007 Zuza Software Foundation
#
# This file is part of translate.
#
# translate is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2 of the License, or
# (at your option) any later version.
#
# translate is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, see <http://www.gnu.org/licenses/>.
"""This module represents the Afrikaans language.
.. seealso:: http://en.wikipedia.org/wiki/Afrikaans_language
"""
import re
from translate.lang import common
articlere = re.compile(r"'n\b")
class af(common.Common):
"""This class represents Afrikaans."""
validdoublewords = [u"u"]
punctuation = u"".join([common.Common.commonpunc, common.Common.quotes,
common.Common.miscpunc])
sentenceend = u".!?…"
sentencere = re.compile(r"""
(?s) # make . also match newlines
.*? # anything, but match non-greedy
[%s] # the puntuation for sentence ending
\s+ # the spacing after the puntuation
(?='n\s[A-Z]|[^'a-z\d]|'[^n])
# lookahead that next part starts with caps or 'n followed by caps
""" % sentenceend, re.VERBOSE
)
specialchars = u"ëïêôûáéíóúý"
def capsstart(cls, text):
"""Modify this for the indefinite article ('n)."""
match = articlere.search(text, 0, 20)
if match:
#construct a list of non-apostrophe punctuation:
nonapos = u"".join(cls.punctuation.split(u"'"))
stripped = text.lstrip().lstrip(nonapos)
match = articlere.match(stripped)
if match:
return common.Common.capsstart(stripped[match.end():])
return common.Common.capsstart(text)
capsstart = classmethod(capsstart)
cyr2lat = {
u"А": "A", u"а": "a",
u"Б": "B", u"б": "b",
u"В": "W", u"в": "w", # Different if at the end of a syllable see rule 2.
u"Г": "G", u"г": "g", # see rule 3 and 4
u"Д": "D", u"д": "d",
u"ДЖ": "Dj", u"дж": "dj",
u"Е": "Je", u"е": "je", # Sometimes e need to check when/why see rule 5.
u"Ё": "Jo", u"ё": "jo", # see rule 6
u"ЕЙ": "Ei", u"ей": "ei",
u"Ж": "Zj", u"ж": "zj",
u"З": "Z", u"з": "z",
u"И": "I", u"и": "i",
u"Й": "J", u"й": "j", # see rule 9 and 10
u"К": "K", u"к": "k", # see note 11
u"Л": "L", u"л": "l",
u"М": "M", u"м": "m",
u"Н": "N", u"н": "n",
u"О": "O", u"о": "o",
u"П": "P", u"п": "p",
u"Р": "R", u"р": "r",
u"С": "S", u"с": "s", # see note 12
u"Т": "T", u"т": "t",
u"У": "Oe", u"у": "oe",
u"Ф": "F", u"ф": "f",
u"Х": "Ch", u"х": "ch", # see rule 12
u"Ц": "Ts", u"ц": "ts",
u"Ч": "Tj", u"ч": "tj",
u"Ш": "Sj", u"ш": "sj",
u"Щ": "Sjtsj", u"щ": "sjtsj",
u"Ы": "I", u"ы": "i", # see note 13
u"Ъ": "", u"ъ": "", # See note 14
u"Ь": "", u"ь": "", # this letter is not in the AWS we assume it is left out as in the previous letter
u"Э": "E", u"э": "e",
u"Ю": "Joe", u"ю": "joe",
u"Я": "Ja", u"я": "ja",
}
"""Mapping of Cyrillic to Latin letters for transliteration in Afrikaans"""
cyr_vowels = u"аеёиоуыэюя"
def tranliterate_cyrillic(text):
"""Convert Cyrillic text to Latin according to the AWS transliteration rules."""
trans = u""
for i in text:
trans += cyr2lat.get(i, i)
return trans
| 33.176991 | 106 | 0.561483 |
import re
from translate.lang import common
articlere = re.compile(r"'n\b")
class af(common.Common):
validdoublewords = [u"u"]
punctuation = u"".join([common.Common.commonpunc, common.Common.quotes,
common.Common.miscpunc])
sentenceend = u".!?…"
sentencere = re.compile(r"""
(?s) # make . also match newlines
.*? # anything, but match non-greedy
[%s] # the puntuation for sentence ending
\s+ # the spacing after the puntuation
(?='n\s[A-Z]|[^'a-z\d]|'[^n])
# lookahead that next part starts with caps or 'n followed by caps
""" % sentenceend, re.VERBOSE
)
specialchars = u"ëïêôûáéíóúý"
def capsstart(cls, text):
match = articlere.search(text, 0, 20)
if match:
#construct a list of non-apostrophe punctuation:
nonapos = u"".join(cls.punctuation.split(u"'"))
stripped = text.lstrip().lstrip(nonapos)
match = articlere.match(stripped)
if match:
return common.Common.capsstart(stripped[match.end():])
return common.Common.capsstart(text)
capsstart = classmethod(capsstart)
cyr2lat = {
u"А": "A", u"а": "a",
u"Б": "B", u"б": "b",
u"В": "W", u"в": "w",
u"Г": "G", u"г": "g",
u"Д": "D", u"д": "d",
u"ДЖ": "Dj", u"дж": "dj",
u"Е": "Je", u"е": "je",
u"Ё": "Jo", u"ё": "jo",
u"ЕЙ": "Ei", u"ей": "ei",
u"Ж": "Zj", u"ж": "zj",
u"З": "Z", u"з": "z",
u"И": "I", u"и": "i",
u"Й": "J", u"й": "j",
u"К": "K", u"к": "k",
u"Л": "L", u"л": "l",
u"М": "M", u"м": "m",
u"Н": "N", u"н": "n",
u"О": "O", u"о": "o",
u"П": "P", u"п": "p",
u"Р": "R", u"р": "r",
u"С": "S", u"с": "s",
u"Т": "T", u"т": "t",
u"У": "Oe", u"у": "oe",
u"Ф": "F", u"ф": "f",
u"Х": "Ch", u"х": "ch",
u"Ц": "Ts", u"ц": "ts",
u"Ч": "Tj", u"ч": "tj",
u"Ш": "Sj", u"ш": "sj",
u"Щ": "Sjtsj", u"щ": "sjtsj",
u"Ы": "I", u"ы": "i",
u"Ъ": "", u"ъ": "",
u"Ь": "", u"ь": "",
u"Э": "E", u"э": "e",
u"Ю": "Joe", u"ю": "joe",
u"Я": "Ja", u"я": "ja",
}
cyr_vowels = u"аеёиоуыэюя"
def tranliterate_cyrillic(text):
trans = u""
for i in text:
trans += cyr2lat.get(i, i)
return trans
| true | true |
f73de554f940b45e634bb15b792ac534ad02242e | 4,333 | py | Python | sa_numeric.py | project-k-0-1/project-k | fa5be043a3c82daee992d28db25519e2b1b53289 | [
"MIT"
] | 1 | 2018-11-30T17:09:11.000Z | 2018-11-30T17:09:11.000Z | sa_numeric.py | Kahroo/kahroo | fa5be043a3c82daee992d28db25519e2b1b53289 | [
"MIT"
] | null | null | null | sa_numeric.py | Kahroo/kahroo | fa5be043a3c82daee992d28db25519e2b1b53289 | [
"MIT"
] | 2 | 2020-12-03T04:30:45.000Z | 2021-04-21T09:59:37.000Z | """ Numerical functions """
import math
import numpy as np
import pymysql.cursors
from sa_db import sa_db_access
ACCESS_OBJ = sa_db_access()
DB_USR = ACCESS_OBJ.username()
DB_PWD = ACCESS_OBJ.password()
DB_NAME = ACCESS_OBJ.db_name()
DB_SRV = ACCESS_OBJ.db_server()
def get_pct_change(ini_val, new_val):
""" xxx """
if not new_val == 0:
if new_val < ini_val:
return_data = ((ini_val - new_val) / ini_val) * (-1)
else:
return_data = (new_val - ini_val) / new_val
else:
return_data = 0
return return_data
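Note the asymmetry in `get_pct_change` above: a drop is measured against the initial value, while a rise is measured against the new one. A worked illustration (the function is restated so the snippet runs standalone):

```python
def get_pct_change(ini_val, new_val):
    if not new_val == 0:
        if new_val < ini_val:
            return (ini_val - new_val) / ini_val * (-1)   # drop: relative to the old value
        return (new_val - ini_val) / new_val              # rise: relative to the new value
    return 0

print(get_pct_change(100, 80))   # -0.2
print(get_pct_change(80, 100))   # 0.2
```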
def get_stdev(sql):
""" xxx """
return_data = 0
#sql with just one numerical value to compute standard deviation
connection = pymysql.connect(host=DB_SRV,
user=DB_USR,
password=DB_PWD,
db=DB_NAME,
charset='utf8mb4',
cursorclass=pymysql.cursors.DictCursor)
cursor = connection.cursor(pymysql.cursors.SSCursor)
cursor.execute(sql)
list_data = list(cursor.fetchall())
return_data = np.std(list_data)
cursor.close()
connection.close()
return return_data
def get_volatility_risk(sql, is_portf, symbol):
""" xxx """
return_data = 0
#sql with one numerical column to compute volatility risk
connection = pymysql.connect(host=DB_SRV,
user=DB_USR,
password=DB_PWD,
db=DB_NAME,
charset='utf8mb4',
cursorclass=pymysql.cursors.DictCursor)
cursor = connection.cursor(pymysql.cursors.SSCursor)
if is_portf:
sql_i = "SELECT account_reference FROM instruments WHERE symbol='"+ str(symbol) +"'"
cursor.execute(sql_i)
res = cursor.fetchall()
for row in res:
reference = row[0]
else:
cursor.execute(sql)
res = cursor.fetchall()
for row in res:
reference = row[0]
cursor.close()
connection.close()
stdev = get_stdev(sql)
ref_price = reference - stdev
return_data = abs(get_pct_change(reference, ref_price))
return return_data
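Because `ref_price = reference - stdev`, the absolute change computed above reduces to one standard deviation divided by the reference price. A standard-library check with made-up prices (`statistics.pstdev` matches the population deviation that `np.std` returns by default):

```python
from statistics import pstdev

prices = [100, 102, 98, 101, 99]     # illustrative closing prices
reference = prices[-1]               # last price plays the role of the DB reference
risk = pstdev(prices) / reference    # == abs(get_pct_change(reference, reference - stdev))
print(round(risk, 4))                # 0.0143
```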
def get_mdd(sql):
""" xxx """
return_data = 0
#sql with just one numerical value to compute maximum drawdown
connection = pymysql.connect(host=DB_SRV,
user=DB_USR,
password=DB_PWD,
db=DB_NAME,
charset='utf8mb4',
cursorclass=pymysql.cursors.DictCursor)
cursor = connection.cursor(pymysql.cursors.SSCursor)
cursor.execute(sql)
res = cursor.fetchall()
top = 0
breset = math.pow(10, 100)
bottom = breset
pct_dd = 0
cur_dd = 0
for row in res:
val = row[0]
if val > top:
top = val
bottom = breset
if val < bottom:
bottom = val
if bottom < top:
cur_dd = abs(get_pct_change(bottom, top))
else:
cur_dd = 0
if cur_dd > pct_dd:
pct_dd = cur_dd
cursor.close()
connection.close()
return_data = pct_dd
return return_data
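The loop above tracks a running peak, resets the trough at every new peak, and keeps the worst peak-to-trough loss seen so far. The same logic without the database round-trip (the series values are made up):

```python
def max_drawdown(values):
    top, bottom, worst = 0, float('inf'), 0
    for val in values:
        if val > top:                 # new peak: reset the trough
            top, bottom = val, float('inf')
        if val < bottom:
            bottom = val
        if bottom < top:              # current drawdown from the running peak
            worst = max(worst, (top - bottom) / top)
    return worst

print(max_drawdown([10, 12, 9, 11, 8, 13]))   # 0.333... (peak 12 -> trough 8)
```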
def get_romad(sql):
""" xxx """
return_data = 0
#sql with one column as numerical value to compute return on maximum drawdown
#ordered by date ASC
connection = pymysql.connect(host=DB_SRV,
user=DB_USR,
password=DB_PWD,
db=DB_NAME,
charset='utf8mb4',
cursorclass=pymysql.cursors.DictCursor)
cursor = connection.cursor(pymysql.cursors.SSCursor)
cursor.execute(sql)
res = cursor.fetchall()
i = 0
first = 0
last = 0
for row in res:
if i == 0:
first = row[0]
last = row[0]
i += 1
cursor.close()
connection.close()
instrument_returns = get_pct_change(first, last)
drawdown = get_mdd(sql)
    if drawdown > 0:
return_data = instrument_returns / drawdown
else:
return_data = 0
return return_data
f73de5bd2fe7733a4afe751e1a41c515f4a54df6 | 6,389 | py | Python | tests/benchmarks/test_benchmark_trpo.py | parachutel/garage | e9d4301278f5dd31e3cbd20df1422befa2d0b6c4 | ["MIT"] | null | null | null | tests/benchmarks/test_benchmark_trpo.py | parachutel/garage | e9d4301278f5dd31e3cbd20df1422befa2d0b6c4 | ["MIT"] | null | null | null | tests/benchmarks/test_benchmark_trpo.py | parachutel/garage | e9d4301278f5dd31e3cbd20df1422befa2d0b6c4 | ["MIT"] | null | null | null
'''
This script creates a regression test over garage-TRPO and baselines-TRPO.
Unlike garage, baselines doesn't set max_path_length. It keeps steps the action
until it's done. So we introduced tests.wrappers.AutoStopEnv wrapper to set
done=True when it reaches max_path_length. We also need to change the
garage.tf.samplers.BatchSampler to smooth the reward curve.
'''
import datetime
import os.path as osp
import random
from baselines import logger as baselines_logger
from baselines.bench import benchmarks
from baselines.common.tf_util import _PLACEHOLDER_CACHE
from baselines.ppo1.mlp_policy import MlpPolicy
from baselines.trpo_mpi import trpo_mpi
import dowel
from dowel import logger as dowel_logger
import gym
import pytest
import tensorflow as tf
from garage.envs import normalize
from garage.experiment import deterministic
from garage.tf.algos import TRPO
from garage.tf.baselines import GaussianMLPBaseline
from garage.tf.envs import TfEnv
from garage.tf.experiment import LocalTFRunner
from garage.tf.policies import GaussianMLPPolicy
import tests.helpers as Rh
from tests.wrappers import AutoStopEnv
class TestBenchmarkPPO:
'''Compare benchmarks between garage and baselines.'''
@pytest.mark.huge
def test_benchmark_trpo(self):
'''
Compare benchmarks between garage and baselines.
:return:
'''
mujoco1m = benchmarks.get_benchmark('Mujoco1M')
timestamp = datetime.datetime.now().strftime('%Y-%m-%d-%H-%M-%S-%f')
benchmark_dir = './data/local/benchmarks/trpo/%s/' % timestamp
result_json = {}
for task in mujoco1m['tasks']:
env_id = task['env_id']
env = gym.make(env_id)
baseline_env = AutoStopEnv(env_name=env_id, max_path_length=100)
seeds = random.sample(range(100), task['trials'])
task_dir = osp.join(benchmark_dir, env_id)
plt_file = osp.join(benchmark_dir,
'{}_benchmark.png'.format(env_id))
baselines_csvs = []
garage_csvs = []
for trial in range(task['trials']):
_PLACEHOLDER_CACHE.clear()
seed = seeds[trial]
trial_dir = task_dir + '/trial_%d_seed_%d' % (trial + 1, seed)
garage_dir = trial_dir + '/garage'
baselines_dir = trial_dir + '/baselines'
with tf.Graph().as_default():
# Run garage algorithms
env.reset()
garage_csv = run_garage(env, seed, garage_dir)
# Run baseline algorithms
baseline_env.reset()
baselines_csv = run_baselines(baseline_env, seed,
baselines_dir)
garage_csvs.append(garage_csv)
baselines_csvs.append(baselines_csv)
Rh.plot(
b_csvs=baselines_csvs,
g_csvs=garage_csvs,
g_x='Iteration',
g_y='AverageReturn',
b_x='EpThisIter',
b_y='EpRewMean',
trials=task['trials'],
seeds=seeds,
plt_file=plt_file,
env_id=env_id,
x_label='Iteration',
y_label='AverageReturn')
result_json[env_id] = Rh.create_json(
b_csvs=baselines_csvs,
g_csvs=garage_csvs,
seeds=seeds,
trails=task['trials'],
g_x='Iteration',
g_y='AverageReturn',
b_x='TimestepsSoFar',
b_y='EpRewMean',
factor_g=1024,
factor_b=1)
env.close()
Rh.write_file(result_json, 'TRPO')
def run_garage(env, seed, log_dir):
'''
Create garage model and training.
Replace the trpo with the algorithm you want to run.
:param env: Environment of the task.
:param seed: Random seed for the trial.
:param log_dir: Log dir path.
    :return:
'''
deterministic.set_seed(seed)
with LocalTFRunner() as runner:
env = TfEnv(normalize(env))
policy = GaussianMLPPolicy(
env_spec=env.spec,
hidden_sizes=(32, 32),
hidden_nonlinearity=tf.nn.tanh,
output_nonlinearity=None,
)
baseline = GaussianMLPBaseline(
env_spec=env.spec,
regressor_args=dict(
hidden_sizes=(32, 32),
use_trust_region=True,
),
)
algo = TRPO(
env_spec=env.spec,
policy=policy,
baseline=baseline,
max_path_length=100,
discount=0.99,
gae_lambda=0.98,
max_kl_step=0.01,
policy_ent_coeff=0.0,
)
# Set up logger since we are not using run_experiment
tabular_log_file = osp.join(log_dir, 'progress.csv')
dowel_logger.add_output(dowel.CsvOutput(tabular_log_file))
dowel_logger.add_output(dowel.StdOutput())
dowel_logger.add_output(dowel.TensorBoardOutput(log_dir))
runner.setup(algo, env)
runner.train(n_epochs=976, batch_size=1024)
dowel_logger.remove_all()
return tabular_log_file
def run_baselines(env, seed, log_dir):
'''
Create baselines model and training.
Replace the trpo and its training with the algorithm you want to run.
:param env: Environment of the task.
:param seed: Random seed for the trial.
:param log_dir: Log dir path.
:return
'''
with tf.compat.v1.Session().as_default():
baselines_logger.configure(log_dir)
def policy_fn(name, ob_space, ac_space):
return MlpPolicy(
name=name,
ob_space=ob_space,
ac_space=ac_space,
hid_size=32,
num_hid_layers=2)
trpo_mpi.learn(
env,
policy_fn,
timesteps_per_batch=1024,
max_kl=0.01,
cg_iters=10,
cg_damping=0.1,
max_timesteps=int(1e6),
gamma=0.99,
lam=0.98,
vf_iters=5,
vf_stepsize=1e-3)
env.close()
return osp.join(log_dir, 'progress.csv')
# File: sir/__main__.py (repo: mwiencek/sir, license: MIT)
# Copyright (c) 2014, 2015, 2019 Wieland Hoffmann, MetaBrainz Foundation
# License: MIT, see LICENSE for details
import argparse
import logging
import multiprocessing
import ConfigParser
import config
from . import init_raven_client
from .amqp.extension_generation import generate_extension
from .amqp.handler import watch
from .amqp.setup import setup_rabbitmq
from .indexing import reindex
from .schema import SCHEMA
from .trigger_generation import generate_func
logger = logging.getLogger("sir")
def main():
parser = argparse.ArgumentParser(prog="sir")
parser.add_argument("-d", "--debug", action="store_true")
parser.add_argument("--sqltimings", action="store_true")
subparsers = parser.add_subparsers()
reindex_parser = subparsers.add_parser("reindex",
help="Reindexes all or a single "
"entity type")
reindex_parser.set_defaults(func=reindex)
reindex_parser.add_argument('--entity-type', action='append',
help="Which entity types to index.",
choices=SCHEMA.keys())
generate_trigger_parser = subparsers.add_parser("triggers",
help="Generate triggers")
generate_trigger_parser.set_defaults(func=generate_func)
generate_trigger_parser.add_argument('-t', '--trigger-file',
action="store",
default="sql/CreateTriggers.sql",
help="The filename to save the "
"triggers into")
generate_trigger_parser.add_argument('-f', '--function-file',
action="store",
default="sql/CreateFunctions.sql",
help="The filename to save the "
"functions into")
generate_trigger_parser.add_argument('-bid', '--broker-id',
action="store",
default="1",
help="ID of the AMQP broker row "
"in the database.")
generate_extension_parser = subparsers.add_parser("extension",
help="Generate extension")
generate_extension_parser.set_defaults(func=generate_extension)
generate_extension_parser.add_argument('-e', '--extension-file',
action="store",
default="sql/CreateExtension.sql",
help="The filename to save the "
"extension into")
amqp_setup_parser = subparsers.add_parser("amqp_setup",
help="Set up AMQP exchanges and "
"queues")
amqp_setup_parser.set_defaults(func=setup_rabbitmq)
amqp_watch_parser = subparsers.add_parser("amqp_watch",
help="Watch AMQP queues for "
"changes")
amqp_watch_parser.set_defaults(func=watch)
args = parser.parse_args()
if args.debug:
logger.setLevel(logging.DEBUG)
else:
logger.setLevel(logging.INFO)
loghandler = logging.StreamHandler()
if args.debug:
formatter = logging.Formatter(fmt="%(processName)s %(asctime)s "
"%(levelname)s: %(message)s")
else:
formatter = logging.Formatter(fmt="%(asctime)s: %(message)s")
loghandler.setFormatter(formatter)
logger.addHandler(loghandler)
mplogger = multiprocessing.get_logger()
mplogger.setLevel(logging.ERROR)
mplogger.addHandler(loghandler)
if args.sqltimings:
from sqlalchemy import event
from sqlalchemy.engine import Engine
import time
sqltimelogger = logging.getLogger("sqltimer")
sqltimelogger.setLevel(logging.DEBUG)
sqltimelogger.addHandler(loghandler)
@event.listens_for(Engine, "before_cursor_execute")
def before_cursor_execute(conn, cursor, statement,
parameters, context, executemany):
conn.info.setdefault('query_start_time', []).append(time.time())
sqltimelogger.debug("Start Query: %s", statement)
sqltimelogger.debug("With Parameters: %s", parameters)
@event.listens_for(Engine, "after_cursor_execute")
def after_cursor_execute(conn, cursor, statement,
parameters, context, executemany):
total = time.time() - conn.info['query_start_time'].pop(-1)
sqltimelogger.debug("Query Complete!")
sqltimelogger.debug("Total Time: %f", total)
config.read_config()
try:
init_raven_client(config.CFG.get("sentry", "dsn"))
except ConfigParser.Error as e:
logger.info("Skipping Raven client initialization. Configuration issue: %s", e)
func = args.func
args = vars(args)
func(args)
if __name__ == '__main__':
main()
# File: try_sum3.py (repo: Sophie-Dai/Coding-Class-03, license: Apache-2.0)
# -*- coding: UTF-8 -*-
num_start = 1
loops = 5
ap_step = 5

# Sum the first 5 terms of the arithmetic progression 1, 6, 11, 16, 21.
sum = 0
y = num_start
for num in range(loops):
    sum += y
    y += ap_step
print(sum)  # 55

# Sum the first 5 terms of the geometric progression 1, 5, 25, 125, 625.
sum = 0
y = num_start
for num in range(loops):
    sum += y
    y *= ap_step
print(sum)  # 781
# File: zerver/tests/test_import_export.py (repo: yasiruRathnayaka97/zulip, license: Apache-2.0)
import os
from typing import Any, Callable, Dict, FrozenSet, List, Optional, Set, Tuple
from unittest.mock import patch
import orjson
from django.conf import settings
from django.db.models import Q
from django.utils.timezone import now as timezone_now
from zerver.lib import upload
from zerver.lib.actions import (
do_add_reaction,
do_change_icon_source,
do_change_logo_source,
do_change_plan_type,
do_create_user,
do_update_user_presence,
)
from zerver.lib.avatar_hash import user_avatar_path
from zerver.lib.bot_config import set_bot_config
from zerver.lib.bot_lib import StateHandler
from zerver.lib.export import do_export_realm, do_export_user, export_usermessages_batch
from zerver.lib.import_realm import do_import_realm, get_incoming_message_ids
from zerver.lib.streams import create_stream_if_needed
from zerver.lib.test_classes import ZulipTestCase
from zerver.lib.test_helpers import create_s3_buckets, get_test_image_file, use_s3_backend
from zerver.lib.topic_mutes import add_topic_mute
from zerver.lib.upload import (
claim_attachment,
upload_avatar_image,
upload_emoji_image,
upload_message_file,
)
from zerver.lib.utils import query_chunker
from zerver.models import (
AlertWord,
Attachment,
BotConfigData,
BotStorageData,
CustomProfileField,
CustomProfileFieldValue,
Huddle,
Message,
MutedTopic,
Reaction,
Realm,
RealmAuditLog,
RealmEmoji,
Recipient,
Stream,
Subscription,
UserGroup,
UserGroupMembership,
UserHotspot,
UserMessage,
UserPresence,
UserProfile,
get_active_streams,
get_client,
get_huddle_hash,
get_realm,
get_stream,
)
class QueryUtilTest(ZulipTestCase):
def _create_messages(self) -> None:
for name in ["cordelia", "hamlet", "iago"]:
user = self.example_user(name)
for _ in range(5):
self.send_personal_message(user, self.example_user("othello"))
def test_query_chunker(self) -> None:
self._create_messages()
cordelia = self.example_user("cordelia")
hamlet = self.example_user("hamlet")
def get_queries() -> List[Any]:
queries = [
Message.objects.filter(sender_id=cordelia.id),
Message.objects.filter(sender_id=hamlet.id),
Message.objects.exclude(sender_id__in=[cordelia.id, hamlet.id]),
]
return queries
for query in get_queries():
# For our test to be meaningful, we want non-empty queries
# at first
assert len(list(query)) > 0
queries = get_queries()
all_msg_ids: Set[int] = set()
chunker = query_chunker(
queries=queries,
id_collector=all_msg_ids,
chunk_size=20,
)
all_row_ids = []
for chunk in chunker:
for row in chunk:
all_row_ids.append(row.id)
self.assertEqual(all_row_ids, sorted(all_row_ids))
self.assertEqual(len(all_msg_ids), len(Message.objects.all()))
# Now just search for cordelia/hamlet. Note that we don't really
# need the order_by here, but it should be harmless.
queries = [
Message.objects.filter(sender_id=cordelia.id).order_by("id"),
Message.objects.filter(sender_id=hamlet.id),
]
all_msg_ids = set()
chunker = query_chunker(
queries=queries,
id_collector=all_msg_ids,
chunk_size=7, # use a different size
)
list(chunker) # exhaust the iterator
self.assertEqual(
len(all_msg_ids),
len(Message.objects.filter(sender_id__in=[cordelia.id, hamlet.id])),
)
# Try just a single query to validate chunking.
queries = [
Message.objects.exclude(sender_id=cordelia.id),
]
all_msg_ids = set()
chunker = query_chunker(
queries=queries,
id_collector=all_msg_ids,
chunk_size=11, # use a different size each time
)
list(chunker) # exhaust the iterator
self.assertEqual(
len(all_msg_ids),
len(Message.objects.exclude(sender_id=cordelia.id)),
)
self.assertTrue(len(all_msg_ids) > 15)
# Verify assertions about disjoint-ness.
queries = [
Message.objects.exclude(sender_id=cordelia.id),
Message.objects.filter(sender_id=hamlet.id),
]
all_msg_ids = set()
chunker = query_chunker(
queries=queries,
id_collector=all_msg_ids,
chunk_size=13, # use a different size each time
)
with self.assertRaises(AssertionError):
list(chunker) # exercise the iterator
# Try to confuse things with ids part of the query...
queries = [
Message.objects.filter(id__lte=10),
Message.objects.filter(id__gt=10),
]
all_msg_ids = set()
chunker = query_chunker(
queries=queries,
id_collector=all_msg_ids,
chunk_size=11, # use a different size each time
)
self.assertEqual(len(all_msg_ids), 0) # until we actually use the iterator
list(chunker) # exhaust the iterator
self.assertEqual(len(all_msg_ids), len(Message.objects.all()))
# Verify that we can just get the first chunk with a next() call.
queries = [
Message.objects.all(),
]
all_msg_ids = set()
chunker = query_chunker(
queries=queries,
id_collector=all_msg_ids,
chunk_size=10, # use a different size each time
)
first_chunk = next(chunker)
self.assertEqual(len(first_chunk), 10)
self.assertEqual(len(all_msg_ids), 10)
expected_msg = Message.objects.all()[0:10][5]
actual_msg = first_chunk[5]
self.assertEqual(actual_msg.content, expected_msg.content)
self.assertEqual(actual_msg.sender_id, expected_msg.sender_id)
class ImportExportTest(ZulipTestCase):
def setUp(self) -> None:
super().setUp()
self.rm_tree(settings.LOCAL_UPLOADS_DIR)
def _make_output_dir(self) -> str:
output_dir = os.path.join(settings.TEST_WORKER_DIR, "test-export")
self.rm_tree(output_dir)
os.makedirs(output_dir, exist_ok=True)
return output_dir
def _export_realm(
self,
realm: Realm,
exportable_user_ids: Optional[Set[int]] = None,
consent_message_id: Optional[int] = None,
) -> Dict[str, Any]:
output_dir = self._make_output_dir()
with patch("logging.info"), patch("zerver.lib.export.create_soft_link"):
do_export_realm(
realm=realm,
output_dir=output_dir,
threads=0,
exportable_user_ids=exportable_user_ids,
consent_message_id=consent_message_id,
)
export_usermessages_batch(
input_path=os.path.join(output_dir, "messages-000001.json.partial"),
output_path=os.path.join(output_dir, "messages-000001.json"),
consent_message_id=consent_message_id,
)
try:
export_usermessages_batch(
input_path=os.path.join(output_dir, "messages-000002.json.partial"),
output_path=os.path.join(output_dir, "messages-000002.json"),
consent_message_id=consent_message_id,
)
except FileNotFoundError:
pass
def read_file(fn: str) -> Any:
full_fn = os.path.join(output_dir, fn)
with open(full_fn, "rb") as f:
return orjson.loads(f.read())
result = {}
result["realm"] = read_file("realm.json")
result["attachment"] = read_file("attachment.json")
result["message"] = read_file("messages-000001.json")
try:
message = read_file("messages-000002.json")
result["message"]["zerver_usermessage"].extend(message["zerver_usermessage"])
result["message"]["zerver_message"].extend(message["zerver_message"])
except FileNotFoundError:
pass
result["uploads_dir"] = os.path.join(output_dir, "uploads")
result["uploads_dir_records"] = read_file(os.path.join("uploads", "records.json"))
result["emoji_dir"] = os.path.join(output_dir, "emoji")
result["emoji_dir_records"] = read_file(os.path.join("emoji", "records.json"))
result["avatar_dir"] = os.path.join(output_dir, "avatars")
result["avatar_dir_records"] = read_file(os.path.join("avatars", "records.json"))
result["realm_icons_dir"] = os.path.join(output_dir, "realm_icons")
result["realm_icons_dir_records"] = read_file(os.path.join("realm_icons", "records.json"))
return result
def _setup_export_files(self, realm: Realm) -> Tuple[str, str, str, bytes]:
message = Message.objects.all()[0]
user_profile = message.sender
url = upload_message_file(
"dummy.txt", len(b"zulip!"), "text/plain", b"zulip!", user_profile
)
attachment_path_id = url.replace("/user_uploads/", "")
claim_attachment(
user_profile=user_profile,
path_id=attachment_path_id,
message=message,
is_message_realm_public=True,
)
avatar_path_id = user_avatar_path(user_profile)
original_avatar_path_id = avatar_path_id + ".original"
emoji_path = RealmEmoji.PATH_ID_TEMPLATE.format(
realm_id=realm.id,
emoji_file_name="1.png",
)
with get_test_image_file("img.png") as img_file:
upload_emoji_image(img_file, "1.png", user_profile)
with get_test_image_file("img.png") as img_file:
upload_avatar_image(img_file, user_profile, user_profile)
with get_test_image_file("img.png") as img_file:
upload.upload_backend.upload_realm_icon_image(img_file, user_profile)
do_change_icon_source(realm, Realm.ICON_UPLOADED)
with get_test_image_file("img.png") as img_file:
upload.upload_backend.upload_realm_logo_image(img_file, user_profile, night=False)
do_change_logo_source(realm, Realm.LOGO_UPLOADED, False, acting_user=user_profile)
with get_test_image_file("img.png") as img_file:
upload.upload_backend.upload_realm_logo_image(img_file, user_profile, night=True)
do_change_logo_source(realm, Realm.LOGO_UPLOADED, True, acting_user=user_profile)
with get_test_image_file("img.png") as img_file:
test_image = img_file.read()
message.sender.avatar_source = "U"
message.sender.save()
realm.refresh_from_db()
return attachment_path_id, emoji_path, original_avatar_path_id, test_image
"""
Tests for export
"""
def test_export_files_from_local(self) -> None:
realm = Realm.objects.get(string_id="zulip")
path_id, emoji_path, original_avatar_path_id, test_image = self._setup_export_files(realm)
full_data = self._export_realm(realm)
data = full_data["attachment"]
self.assertEqual(len(data["zerver_attachment"]), 1)
record = data["zerver_attachment"][0]
self.assertEqual(record["path_id"], path_id)
# Test uploads
fn = os.path.join(full_data["uploads_dir"], path_id)
with open(fn) as f:
self.assertEqual(f.read(), "zulip!")
records = full_data["uploads_dir_records"]
self.assertEqual(records[0]["path"], path_id)
self.assertEqual(records[0]["s3_path"], path_id)
# Test emojis
fn = os.path.join(full_data["emoji_dir"], emoji_path)
fn = fn.replace("1.png", "")
self.assertEqual("1.png", os.listdir(fn)[0])
records = full_data["emoji_dir_records"]
self.assertEqual(records[0]["file_name"], "1.png")
self.assertEqual(records[0]["path"], "2/emoji/images/1.png")
self.assertEqual(records[0]["s3_path"], "2/emoji/images/1.png")
# Test realm logo and icon
records = full_data["realm_icons_dir_records"]
image_files = set()
for record in records:
image_path = os.path.join(full_data["realm_icons_dir"], record["path"])
if image_path[-9:] == ".original":
with open(image_path, "rb") as image_file:
image_data = image_file.read()
self.assertEqual(image_data, test_image)
else:
self.assertTrue(os.path.exists(image_path))
image_files.add(os.path.basename(image_path))
self.assertEqual(
set(image_files),
{
"night_logo.png",
"logo.original",
"logo.png",
"icon.png",
"night_logo.original",
"icon.original",
},
)
# Test avatars
fn = os.path.join(full_data["avatar_dir"], original_avatar_path_id)
with open(fn, "rb") as fb:
fn_data = fb.read()
self.assertEqual(fn_data, test_image)
records = full_data["avatar_dir_records"]
record_path = [record["path"] for record in records]
record_s3_path = [record["s3_path"] for record in records]
self.assertIn(original_avatar_path_id, record_path)
self.assertIn(original_avatar_path_id, record_s3_path)
@use_s3_backend
def test_export_files_from_s3(self) -> None:
create_s3_buckets(settings.S3_AUTH_UPLOADS_BUCKET, settings.S3_AVATAR_BUCKET)
realm = Realm.objects.get(string_id="zulip")
(
attachment_path_id,
emoji_path,
original_avatar_path_id,
test_image,
) = self._setup_export_files(realm)
full_data = self._export_realm(realm)
data = full_data["attachment"]
self.assertEqual(len(data["zerver_attachment"]), 1)
record = data["zerver_attachment"][0]
self.assertEqual(record["path_id"], attachment_path_id)
def check_types(user_profile_id: int, realm_id: int) -> None:
self.assertEqual(type(user_profile_id), int)
self.assertEqual(type(realm_id), int)
# Test uploads
fields = attachment_path_id.split("/")
fn = os.path.join(full_data["uploads_dir"], os.path.join(fields[0], fields[1], fields[2]))
with open(fn) as f:
self.assertEqual(f.read(), "zulip!")
records = full_data["uploads_dir_records"]
self.assertEqual(records[0]["path"], os.path.join(fields[0], fields[1], fields[2]))
self.assertEqual(records[0]["s3_path"], attachment_path_id)
check_types(records[0]["user_profile_id"], records[0]["realm_id"])
# Test emojis
fn = os.path.join(full_data["emoji_dir"], emoji_path)
fn = fn.replace("1.png", "")
self.assertIn("1.png", os.listdir(fn))
records = full_data["emoji_dir_records"]
self.assertEqual(records[0]["file_name"], "1.png")
self.assertTrue("last_modified" in records[0])
self.assertEqual(records[0]["path"], "2/emoji/images/1.png")
self.assertEqual(records[0]["s3_path"], "2/emoji/images/1.png")
check_types(records[0]["user_profile_id"], records[0]["realm_id"])
# Test realm logo and icon
records = full_data["realm_icons_dir_records"]
image_files = set()
for record in records:
image_path = os.path.join(full_data["realm_icons_dir"], record["s3_path"])
if image_path[-9:] == ".original":
with open(image_path, "rb") as image_file:
image_data = image_file.read()
self.assertEqual(image_data, test_image)
else:
self.assertTrue(os.path.exists(image_path))
image_files.add(os.path.basename(image_path))
self.assertEqual(
set(image_files),
{
"night_logo.png",
"logo.original",
"logo.png",
"icon.png",
"night_logo.original",
"icon.original",
},
)
# Test avatars
fn = os.path.join(full_data["avatar_dir"], original_avatar_path_id)
with open(fn, "rb") as file:
fn_data = file.read()
self.assertEqual(fn_data, test_image)
records = full_data["avatar_dir_records"]
record_path = [record["path"] for record in records]
record_s3_path = [record["s3_path"] for record in records]
self.assertIn(original_avatar_path_id, record_path)
self.assertIn(original_avatar_path_id, record_s3_path)
check_types(records[0]["user_profile_id"], records[0]["realm_id"])
def test_zulip_realm(self) -> None:
realm = Realm.objects.get(string_id="zulip")
default_bot = self.example_user("default_bot")
pm_a_msg_id = self.send_personal_message(self.example_user("AARON"), default_bot)
pm_b_msg_id = self.send_personal_message(default_bot, self.example_user("iago"))
pm_c_msg_id = self.send_personal_message(
self.example_user("othello"), self.example_user("hamlet")
)
realm_emoji = RealmEmoji.objects.get(realm=realm)
realm_emoji.delete()
full_data = self._export_realm(realm)
realm_emoji.save()
data = full_data["realm"]
self.assertEqual(len(data["zerver_userprofile_crossrealm"]), 3)
self.assertEqual(len(data["zerver_userprofile_mirrordummy"]), 0)
exported_user_emails = self.get_set(data["zerver_userprofile"], "delivery_email")
self.assertIn(self.example_email("cordelia"), exported_user_emails)
self.assertIn("default-bot@zulip.com", exported_user_emails)
exported_streams = self.get_set(data["zerver_stream"], "name")
self.assertEqual(
exported_streams,
{"Denmark", "Rome", "Scotland", "Venice", "Verona"},
)
exported_alert_words = data["zerver_alertword"]
# We set up 4 alert words for Hamlet, Cordelia, etc.
# when we populate the test database.
num_zulip_users = 10
self.assertEqual(len(exported_alert_words), num_zulip_users * 4)
self.assertIn("robotics", {r["word"] for r in exported_alert_words})
data = full_data["message"]
um = UserMessage.objects.all()[0]
exported_um = self.find_by_id(data["zerver_usermessage"], um.id)
self.assertEqual(exported_um["message"], um.message_id)
self.assertEqual(exported_um["user_profile"], um.user_profile_id)
exported_message = self.find_by_id(data["zerver_message"], um.message_id)
self.assertEqual(exported_message["content"], um.message.content)
exported_message_ids = self.get_set(data["zerver_message"], "id")
self.assertIn(pm_a_msg_id, exported_message_ids)
self.assertIn(pm_b_msg_id, exported_message_ids)
self.assertIn(pm_c_msg_id, exported_message_ids)
def test_export_realm_with_exportable_user_ids(self) -> None:
realm = Realm.objects.get(string_id="zulip")
cordelia = self.example_user("iago")
hamlet = self.example_user("hamlet")
user_ids = {cordelia.id, hamlet.id}
pm_a_msg_id = self.send_personal_message(
self.example_user("AARON"), self.example_user("othello")
)
pm_b_msg_id = self.send_personal_message(
self.example_user("cordelia"), self.example_user("iago")
)
pm_c_msg_id = self.send_personal_message(
self.example_user("hamlet"), self.example_user("othello")
)
pm_d_msg_id = self.send_personal_message(
self.example_user("iago"), self.example_user("hamlet")
)
realm_emoji = RealmEmoji.objects.get(realm=realm)
realm_emoji.delete()
full_data = self._export_realm(realm, exportable_user_ids=user_ids)
realm_emoji.save()
data = full_data["realm"]
exported_user_emails = self.get_set(data["zerver_userprofile"], "delivery_email")
self.assertIn(self.example_email("iago"), exported_user_emails)
self.assertIn(self.example_email("hamlet"), exported_user_emails)
self.assertNotIn("default-bot@zulip.com", exported_user_emails)
self.assertNotIn(self.example_email("cordelia"), exported_user_emails)
dummy_user_emails = self.get_set(data["zerver_userprofile_mirrordummy"], "delivery_email")
self.assertIn(self.example_email("cordelia"), dummy_user_emails)
self.assertIn(self.example_email("othello"), dummy_user_emails)
self.assertIn("default-bot@zulip.com", dummy_user_emails)
self.assertNotIn(self.example_email("iago"), dummy_user_emails)
self.assertNotIn(self.example_email("hamlet"), dummy_user_emails)
data = full_data["message"]
exported_message_ids = self.get_set(data["zerver_message"], "id")
self.assertNotIn(pm_a_msg_id, exported_message_ids)
self.assertIn(pm_b_msg_id, exported_message_ids)
self.assertIn(pm_c_msg_id, exported_message_ids)
self.assertIn(pm_d_msg_id, exported_message_ids)
def test_export_realm_with_member_consent(self) -> None:
realm = Realm.objects.get(string_id="zulip")
# Create private streams and subscribe users for testing export
create_stream_if_needed(realm, "Private A", invite_only=True)
self.subscribe(self.example_user("iago"), "Private A")
self.subscribe(self.example_user("othello"), "Private A")
self.send_stream_message(self.example_user("iago"), "Private A", "Hello Stream A")
create_stream_if_needed(realm, "Private B", invite_only=True)
self.subscribe(self.example_user("prospero"), "Private B")
stream_b_message_id = self.send_stream_message(
self.example_user("prospero"), "Private B", "Hello Stream B"
)
self.subscribe(self.example_user("hamlet"), "Private B")
create_stream_if_needed(realm, "Private C", invite_only=True)
self.subscribe(self.example_user("othello"), "Private C")
self.subscribe(self.example_user("prospero"), "Private C")
stream_c_message_id = self.send_stream_message(
self.example_user("othello"), "Private C", "Hello Stream C"
)
# Create huddles
self.send_huddle_message(
self.example_user("iago"), [self.example_user("cordelia"), self.example_user("AARON")]
)
huddle_a = Huddle.objects.last()
self.send_huddle_message(
self.example_user("ZOE"),
[self.example_user("hamlet"), self.example_user("AARON"), self.example_user("othello")],
)
huddle_b = Huddle.objects.last()
huddle_c_message_id = self.send_huddle_message(
self.example_user("AARON"),
[self.example_user("cordelia"), self.example_user("ZOE"), self.example_user("othello")],
)
# Create PMs
pm_a_msg_id = self.send_personal_message(
self.example_user("AARON"), self.example_user("othello")
)
pm_b_msg_id = self.send_personal_message(
self.example_user("cordelia"), self.example_user("iago")
)
pm_c_msg_id = self.send_personal_message(
self.example_user("hamlet"), self.example_user("othello")
)
pm_d_msg_id = self.send_personal_message(
self.example_user("iago"), self.example_user("hamlet")
)
# Send message advertising export and make users react
self.send_stream_message(
self.example_user("othello"),
"Verona",
topic_name="Export",
content="Thumbs up for export",
)
message = Message.objects.last()
consented_user_ids = [self.example_user(user).id for user in ["iago", "hamlet"]]
do_add_reaction(
self.example_user("iago"), message, "outbox", "1f4e4", Reaction.UNICODE_EMOJI
)
do_add_reaction(
self.example_user("hamlet"), message, "outbox", "1f4e4", Reaction.UNICODE_EMOJI
)
realm_emoji = RealmEmoji.objects.get(realm=realm)
realm_emoji.delete()
full_data = self._export_realm(realm, consent_message_id=message.id)
realm_emoji.save()
data = full_data["realm"]
self.assertEqual(len(data["zerver_userprofile_crossrealm"]), 3)
self.assertEqual(len(data["zerver_userprofile_mirrordummy"]), 0)
exported_user_emails = self.get_set(data["zerver_userprofile"], "delivery_email")
self.assertIn(self.example_email("cordelia"), exported_user_emails)
self.assertIn(self.example_email("hamlet"), exported_user_emails)
self.assertIn(self.example_email("iago"), exported_user_emails)
self.assertIn(self.example_email("othello"), exported_user_emails)
self.assertIn("default-bot@zulip.com", exported_user_emails)
exported_streams = self.get_set(data["zerver_stream"], "name")
self.assertEqual(
exported_streams,
{
"Denmark",
"Rome",
"Scotland",
"Venice",
"Verona",
"Private A",
"Private B",
"Private C",
},
)
data = full_data["message"]
exported_usermessages = UserMessage.objects.filter(
user_profile__in=[self.example_user("iago"), self.example_user("hamlet")]
)
um = exported_usermessages[0]
self.assertEqual(len(data["zerver_usermessage"]), len(exported_usermessages))
exported_um = self.find_by_id(data["zerver_usermessage"], um.id)
self.assertEqual(exported_um["message"], um.message_id)
self.assertEqual(exported_um["user_profile"], um.user_profile_id)
exported_message = self.find_by_id(data["zerver_message"], um.message_id)
self.assertEqual(exported_message["content"], um.message.content)
public_stream_names = ["Denmark", "Rome", "Scotland", "Venice", "Verona"]
public_stream_ids = Stream.objects.filter(name__in=public_stream_names).values_list(
"id", flat=True
)
public_stream_recipients = Recipient.objects.filter(
type_id__in=public_stream_ids, type=Recipient.STREAM
)
public_stream_message_ids = Message.objects.filter(
recipient__in=public_stream_recipients
).values_list("id", flat=True)
# Messages from Private Stream C are not exported since no member gave consent
private_stream_ids = Stream.objects.filter(name__in=["Private A", "Private B"]).values_list(
"id", flat=True
)
private_stream_recipients = Recipient.objects.filter(
type_id__in=private_stream_ids, type=Recipient.STREAM
)
private_stream_message_ids = Message.objects.filter(
recipient__in=private_stream_recipients
).values_list("id", flat=True)
pm_recipients = Recipient.objects.filter(
type_id__in=consented_user_ids, type=Recipient.PERSONAL
)
pm_query = Q(recipient__in=pm_recipients) | Q(sender__in=consented_user_ids)
exported_pm_ids = Message.objects.filter(pm_query).values_list("id", flat=True)
# Third huddle is not exported since none of the members gave consent
huddle_recipients = Recipient.objects.filter(
type_id__in=[huddle_a.id, huddle_b.id], type=Recipient.HUDDLE
)
pm_query = Q(recipient__in=huddle_recipients) | Q(sender__in=consented_user_ids)
exported_huddle_ids = Message.objects.filter(pm_query).values_list("id", flat=True)
exported_msg_ids = (
set(public_stream_message_ids)
| set(private_stream_message_ids)
| set(exported_pm_ids)
| set(exported_huddle_ids)
)
self.assertEqual(self.get_set(data["zerver_message"], "id"), exported_msg_ids)
# TODO: This behavior is wrong and should be fixed. The message should not be exported
# since it was sent before hamlet, the only consented member of the stream, joined it.
self.assertIn(stream_b_message_id, exported_msg_ids)
self.assertNotIn(stream_c_message_id, exported_msg_ids)
self.assertNotIn(huddle_c_message_id, exported_msg_ids)
self.assertNotIn(pm_a_msg_id, exported_msg_ids)
self.assertIn(pm_b_msg_id, exported_msg_ids)
self.assertIn(pm_c_msg_id, exported_msg_ids)
self.assertIn(pm_d_msg_id, exported_msg_ids)
def test_export_single_user(self) -> None:
output_dir = self._make_output_dir()
cordelia = self.example_user("cordelia")
with patch("logging.info"):
do_export_user(cordelia, output_dir)
def read_file(fn: str) -> Any:
full_fn = os.path.join(output_dir, fn)
with open(full_fn, "rb") as f:
return orjson.loads(f.read())
messages = read_file("messages-000001.json")
user = read_file("user.json")
exported_user_id = self.get_set(user["zerver_userprofile"], "id")
self.assertEqual(exported_user_id, {cordelia.id})
exported_user_email = self.get_set(user["zerver_userprofile"], "email")
self.assertEqual(exported_user_email, {cordelia.email})
exported_recipient_type_id = self.get_set(user["zerver_recipient"], "type_id")
self.assertIn(cordelia.id, exported_recipient_type_id)
exported_stream_id = self.get_set(user["zerver_stream"], "id")
self.assertIn(list(exported_stream_id)[0], exported_recipient_type_id)
exported_recipient_id = self.get_set(user["zerver_recipient"], "id")
exported_subscription_recipient = self.get_set(user["zerver_subscription"], "recipient")
self.assertEqual(exported_recipient_id, exported_subscription_recipient)
exported_messages_recipient = self.get_set(messages["zerver_message"], "recipient")
self.assertIn(list(exported_messages_recipient)[0], exported_recipient_id)
"""
Tests for import_realm
"""
def test_import_realm(self) -> None:
original_realm = Realm.objects.get(string_id="zulip")
RealmEmoji.objects.get(realm=original_realm).delete()
# data to test import of huddles
huddle = [
self.example_user("hamlet"),
self.example_user("othello"),
]
self.send_huddle_message(
self.example_user("cordelia"),
huddle,
"test huddle message",
)
user_mention_message = "@**King Hamlet** Hello"
self.send_stream_message(self.example_user("iago"), "Verona", user_mention_message)
stream_mention_message = "Subscribe to #**Denmark**"
self.send_stream_message(self.example_user("hamlet"), "Verona", stream_mention_message)
user_group_mention_message = "Hello @*hamletcharacters*"
self.send_stream_message(self.example_user("othello"), "Verona", user_group_mention_message)
special_characters_message = "```\n'\n```\n@**Polonius**"
self.send_stream_message(self.example_user("iago"), "Denmark", special_characters_message)
sample_user = self.example_user("hamlet")
# data to test import of hotspots
UserHotspot.objects.create(
user=sample_user,
hotspot="intro_streams",
)
# data to test import of muted topic
stream = get_stream("Verona", original_realm)
add_topic_mute(
user_profile=sample_user,
stream_id=stream.id,
recipient_id=stream.recipient.id,
topic_name="Verona2",
)
do_update_user_presence(
sample_user, get_client("website"), timezone_now(), UserPresence.ACTIVE
)
# data to test import of botstoragedata and botconfigdata
bot_profile = do_create_user(
email="bot-1@zulip.com",
password="test",
realm=original_realm,
full_name="bot",
bot_type=UserProfile.EMBEDDED_BOT,
bot_owner=sample_user,
)
storage = StateHandler(bot_profile)
storage.put("some key", "some value")
set_bot_config(bot_profile, "entry 1", "value 1")
self._export_realm(original_realm)
with patch("logging.info"):
with self.settings(BILLING_ENABLED=False):
do_import_realm(os.path.join(settings.TEST_WORKER_DIR, "test-export"), "test-zulip")
# sanity checks
# test realm
self.assertTrue(Realm.objects.filter(string_id="test-zulip").exists())
imported_realm = Realm.objects.get(string_id="test-zulip")
self.assertNotEqual(imported_realm.id, original_realm.id)
def assert_realm_values(f: Callable[[Realm], Any], equal: bool = True) -> None:
orig_realm_result = f(original_realm)
imported_realm_result = f(imported_realm)
# orig_realm_result should be truthy and have some values, otherwise
# the test is kind of meaningless
assert orig_realm_result
if equal:
self.assertEqual(orig_realm_result, imported_realm_result)
else:
self.assertNotEqual(orig_realm_result, imported_realm_result)
# test users
assert_realm_values(
lambda r: {user.email for user in r.get_admin_users_and_bots()},
)
assert_realm_values(
lambda r: {user.email for user in r.get_active_users()},
)
# test stream
assert_realm_values(
lambda r: {stream.name for stream in get_active_streams(r)},
)
# test recipients
def get_recipient_stream(r: Realm) -> Recipient:
return Stream.objects.get(name="Verona", realm=r).recipient
def get_recipient_user(r: Realm) -> Recipient:
return UserProfile.objects.get(full_name="Iago", realm=r).recipient
assert_realm_values(lambda r: get_recipient_stream(r).type)
assert_realm_values(lambda r: get_recipient_user(r).type)
# test subscription
def get_subscribers(recipient: Recipient) -> Set[str]:
subscriptions = Subscription.objects.filter(recipient=recipient)
users = {sub.user_profile.email for sub in subscriptions}
return users
assert_realm_values(
lambda r: get_subscribers(get_recipient_stream(r)),
)
assert_realm_values(
lambda r: get_subscribers(get_recipient_user(r)),
)
# test custom profile fields
def get_custom_profile_field_names(r: Realm) -> Set[str]:
custom_profile_fields = CustomProfileField.objects.filter(realm=r)
custom_profile_field_names = {field.name for field in custom_profile_fields}
return custom_profile_field_names
assert_realm_values(get_custom_profile_field_names)
def get_custom_profile_with_field_type_user(
r: Realm,
) -> Tuple[Set[Any], Set[Any], Set[FrozenSet[str]]]:
fields = CustomProfileField.objects.filter(field_type=CustomProfileField.USER, realm=r)
def get_email(user_id: int) -> str:
return UserProfile.objects.get(id=user_id).email
def get_email_from_value(field_value: CustomProfileFieldValue) -> Set[str]:
user_id_list = orjson.loads(field_value.value)
return {get_email(user_id) for user_id in user_id_list}
def custom_profile_field_values_for(
fields: List[CustomProfileField],
) -> Set[FrozenSet[str]]:
user_emails: Set[FrozenSet[str]] = set()
for field in fields:
values = CustomProfileFieldValue.objects.filter(field=field)
for value in values:
user_emails.add(frozenset(get_email_from_value(value)))
return user_emails
field_names: Set[str] = set()
field_hints: Set[str] = set()
for field in fields:
field_names.add(field.name)
field_hints.add(field.hint)
return (field_hints, field_names, custom_profile_field_values_for(fields))
assert_realm_values(get_custom_profile_with_field_type_user)
# test realmauditlog
def get_realm_audit_log_event_type(r: Realm) -> Set[str]:
realmauditlogs = RealmAuditLog.objects.filter(realm=r).exclude(
event_type=RealmAuditLog.REALM_PLAN_TYPE_CHANGED
)
realmauditlog_event_type = {log.event_type for log in realmauditlogs}
return realmauditlog_event_type
assert_realm_values(get_realm_audit_log_event_type)
cordelia_full_name = "Cordelia Lear"
hamlet_full_name = "King Hamlet"
othello_full_name = "Othello, the Moor of Venice"
def get_user_id(r: Realm, full_name: str) -> int:
return UserProfile.objects.get(realm=r, full_name=full_name).id
# test huddles
def get_huddle_hashes(r: Realm) -> str:
user_id_list = [
get_user_id(r, cordelia_full_name),
get_user_id(r, hamlet_full_name),
get_user_id(r, othello_full_name),
]
huddle_hash = get_huddle_hash(user_id_list)
return huddle_hash
assert_realm_values(get_huddle_hashes, equal=False)
def get_huddle_message(r: Realm) -> str:
huddle_hash = get_huddle_hashes(r)
huddle_id = Huddle.objects.get(huddle_hash=huddle_hash).id
huddle_recipient = Recipient.objects.get(type_id=huddle_id, type=Recipient.HUDDLE)
huddle_message = Message.objects.get(recipient=huddle_recipient)
return huddle_message.content
assert_realm_values(get_huddle_message)
self.assertEqual(get_huddle_message(imported_realm), "test huddle message")
# test alertword
def get_alertwords(r: Realm) -> Set[str]:
return {rec.word for rec in AlertWord.objects.filter(realm_id=r.id)}
assert_realm_values(get_alertwords)
# test userhotspot
def get_user_hotspots(r: Realm) -> Set[str]:
user_id = get_user_id(r, hamlet_full_name)
hotspots = UserHotspot.objects.filter(user_id=user_id)
user_hotspots = {hotspot.hotspot for hotspot in hotspots}
return user_hotspots
assert_realm_values(get_user_hotspots)
# test muted topics
def get_muted_topics(r: Realm) -> Set[str]:
user_profile_id = get_user_id(r, hamlet_full_name)
muted_topics = MutedTopic.objects.filter(user_profile_id=user_profile_id)
topic_names = {muted_topic.topic_name for muted_topic in muted_topics}
return topic_names
assert_realm_values(get_muted_topics)
# test usergroups
assert_realm_values(
lambda r: {group.name for group in UserGroup.objects.filter(realm=r)},
)
def get_user_membership(r: Realm) -> Set[str]:
usergroup = UserGroup.objects.get(realm=r, name="hamletcharacters")
usergroup_membership = UserGroupMembership.objects.filter(user_group=usergroup)
users = {membership.user_profile.email for membership in usergroup_membership}
return users
assert_realm_values(get_user_membership)
# test botstoragedata and botconfigdata
def get_botstoragedata(r: Realm) -> Dict[str, Any]:
bot_profile = UserProfile.objects.get(full_name="bot", realm=r)
bot_storage_data = BotStorageData.objects.get(bot_profile=bot_profile)
return {"key": bot_storage_data.key, "data": bot_storage_data.value}
assert_realm_values(get_botstoragedata)
def get_botconfigdata(r: Realm) -> Dict[str, Any]:
bot_profile = UserProfile.objects.get(full_name="bot", realm=r)
bot_config_data = BotConfigData.objects.get(bot_profile=bot_profile)
return {"key": bot_config_data.key, "data": bot_config_data.value}
assert_realm_values(get_botconfigdata)
# test messages
def get_stream_messages(r: Realm) -> Any:
recipient = get_recipient_stream(r)
messages = Message.objects.filter(recipient=recipient)
return messages
def get_stream_topics(r: Realm) -> Set[str]:
messages = get_stream_messages(r)
topics = {m.topic_name() for m in messages}
return topics
assert_realm_values(get_stream_topics)
# test usermessages
def get_usermessages_user(r: Realm) -> Set[Any]:
messages = get_stream_messages(r).order_by("content")
usermessage = UserMessage.objects.filter(message=messages[0])
usermessage_user = {um.user_profile.email for um in usermessage}
return usermessage_user
assert_realm_values(get_usermessages_user)
# Tests to make sure that the various data-*-ids in rendered_content
# are replaced correctly with the values of the new realm.
def get_user_mention(r: Realm) -> str:
mentioned_user = UserProfile.objects.get(
delivery_email=self.example_email("hamlet"), realm=r
)
data_user_id = f'data-user-id="{mentioned_user.id}"'
mention_message = get_stream_messages(r).get(rendered_content__contains=data_user_id)
return mention_message.content
assert_realm_values(get_user_mention)
def get_stream_mention(r: Realm) -> str:
mentioned_stream = get_stream("Denmark", r)
data_stream_id = f'data-stream-id="{mentioned_stream.id}"'
mention_message = get_stream_messages(r).get(rendered_content__contains=data_stream_id)
return mention_message.content
assert_realm_values(get_stream_mention)
def get_user_group_mention(r: Realm) -> str:
user_group = UserGroup.objects.get(realm=r, name="hamletcharacters")
data_usergroup_id = f'data-user-group-id="{user_group.id}"'
mention_message = get_stream_messages(r).get(
rendered_content__contains=data_usergroup_id
)
return mention_message.content
assert_realm_values(get_user_group_mention)
def get_userpresence_timestamp(r: Realm) -> Set[Any]:
# It should be sufficient to compare UserPresence timestamps to verify
# they got exported/imported correctly.
return set(UserPresence.objects.filter(realm=r).values_list("timestamp", flat=True))
assert_realm_values(get_userpresence_timestamp)
# Test to highlight that bs4, which we use to do data-*-id
# replacements, sometimes modifies the HTML, e.g. replacing <br>
# with </br> and ' with \'. The modifications don't
# affect how the browser displays the rendered_content, so we
# are okay with using bs4 for this. The lxml package has
# similar behavior.
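# An illustrative (not executed) sketch of the kind of round-trip
# normalization meant above, assuming bs4 is installed:
#
#     from bs4 import BeautifulSoup
#     soup = BeautifulSoup("<br>don't", "html.parser")
#     str(soup)  # the void tag and quoting may come back slightly rewritten
#
# The exact output depends on the parser backend, which is why the
# assertions below compare against the normalized form.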
orig_polonius_user = self.example_user("polonius")
original_msg = Message.objects.get(
content=special_characters_message, sender__realm=original_realm
)
self.assertEqual(
original_msg.rendered_content,
'<div class="codehilite"><pre><span></span><code>\'\n</code></pre></div>\n'
f'<p><span class="user-mention" data-user-id="{orig_polonius_user.id}">@Polonius</span></p>',
)
imported_polonius_user = UserProfile.objects.get(
delivery_email=self.example_email("polonius"), realm=imported_realm
)
imported_msg = Message.objects.get(
content=special_characters_message, sender__realm=imported_realm
)
self.assertEqual(
imported_msg.rendered_content,
'<div class="codehilite"><pre><span></span><code>\'\n</code></pre></div>\n'
f'<p><span class="user-mention" data-user-id="{imported_polonius_user.id}">@Polonius</span></p>',
)
# Check recipient_id was generated correctly for the imported users and streams.
for user_profile in UserProfile.objects.filter(realm=imported_realm):
self.assertEqual(
user_profile.recipient_id,
Recipient.objects.get(type=Recipient.PERSONAL, type_id=user_profile.id).id,
)
for stream in Stream.objects.filter(realm=imported_realm):
self.assertEqual(
stream.recipient_id,
Recipient.objects.get(type=Recipient.STREAM, type_id=stream.id).id,
)
for huddle_object in Huddle.objects.all():
# Huddles don't have a realm column, so we just test all Huddles for simplicity.
self.assertEqual(
huddle_object.recipient_id,
Recipient.objects.get(type=Recipient.HUDDLE, type_id=huddle_object.id).id,
)
def test_import_files_from_local(self) -> None:
realm = Realm.objects.get(string_id="zulip")
self._setup_export_files(realm)
self._export_realm(realm)
with self.settings(BILLING_ENABLED=False):
with patch("logging.info"):
do_import_realm(os.path.join(settings.TEST_WORKER_DIR, "test-export"), "test-zulip")
imported_realm = Realm.objects.get(string_id="test-zulip")
# Test attachments
uploaded_file = Attachment.objects.get(realm=imported_realm)
self.assertEqual(len(b"zulip!"), uploaded_file.size)
attachment_file_path = os.path.join(
settings.LOCAL_UPLOADS_DIR, "files", uploaded_file.path_id
)
self.assertTrue(os.path.isfile(attachment_file_path))
# Test emojis
realm_emoji = RealmEmoji.objects.get(realm=imported_realm)
emoji_path = RealmEmoji.PATH_ID_TEMPLATE.format(
realm_id=imported_realm.id,
emoji_file_name=realm_emoji.file_name,
)
emoji_file_path = os.path.join(settings.LOCAL_UPLOADS_DIR, "avatars", emoji_path)
self.assertTrue(os.path.isfile(emoji_file_path))
# Test avatars
user_email = Message.objects.all()[0].sender.email
user_profile = UserProfile.objects.get(email=user_email, realm=imported_realm)
avatar_path_id = user_avatar_path(user_profile) + ".original"
avatar_file_path = os.path.join(settings.LOCAL_UPLOADS_DIR, "avatars", avatar_path_id)
self.assertTrue(os.path.isfile(avatar_file_path))
# Test realm icon and logo
upload_path = upload.upload_backend.realm_avatar_and_logo_path(imported_realm)
full_upload_path = os.path.join(settings.LOCAL_UPLOADS_DIR, upload_path)
with get_test_image_file("img.png") as f:
test_image_data = f.read()
self.assertIsNotNone(test_image_data)
with open(os.path.join(full_upload_path, "icon.original"), "rb") as f:
self.assertEqual(f.read(), test_image_data)
self.assertTrue(os.path.isfile(os.path.join(full_upload_path, "icon.png")))
self.assertEqual(imported_realm.icon_source, Realm.ICON_UPLOADED)
with open(os.path.join(full_upload_path, "logo.original"), "rb") as f:
self.assertEqual(f.read(), test_image_data)
self.assertTrue(os.path.isfile(os.path.join(full_upload_path, "logo.png")))
self.assertEqual(imported_realm.logo_source, Realm.LOGO_UPLOADED)
with open(os.path.join(full_upload_path, "night_logo.original"), "rb") as f:
self.assertEqual(f.read(), test_image_data)
self.assertTrue(os.path.isfile(os.path.join(full_upload_path, "night_logo.png")))
self.assertEqual(imported_realm.night_logo_source, Realm.LOGO_UPLOADED)
@use_s3_backend
def test_import_files_from_s3(self) -> None:
uploads_bucket, avatar_bucket = create_s3_buckets(
settings.S3_AUTH_UPLOADS_BUCKET, settings.S3_AVATAR_BUCKET
)
realm = Realm.objects.get(string_id="zulip")
self._setup_export_files(realm)
self._export_realm(realm)
with self.settings(BILLING_ENABLED=False):
with patch("logging.info"):
do_import_realm(os.path.join(settings.TEST_WORKER_DIR, "test-export"), "test-zulip")
imported_realm = Realm.objects.get(string_id="test-zulip")
with get_test_image_file("img.png") as f:
test_image_data = f.read()
# Test attachments
uploaded_file = Attachment.objects.get(realm=imported_realm)
self.assertEqual(len(b"zulip!"), uploaded_file.size)
attachment_content = uploads_bucket.Object(uploaded_file.path_id).get()["Body"].read()
self.assertEqual(b"zulip!", attachment_content)
# Test emojis
realm_emoji = RealmEmoji.objects.get(realm=imported_realm)
emoji_path = RealmEmoji.PATH_ID_TEMPLATE.format(
realm_id=imported_realm.id,
emoji_file_name=realm_emoji.file_name,
)
emoji_key = avatar_bucket.Object(emoji_path)
self.assertIsNotNone(emoji_key.get()["Body"].read())
self.assertEqual(emoji_key.key, emoji_path)
# Test avatars
user_email = Message.objects.all()[0].sender.email
user_profile = UserProfile.objects.get(email=user_email, realm=imported_realm)
avatar_path_id = user_avatar_path(user_profile) + ".original"
original_image_key = avatar_bucket.Object(avatar_path_id)
self.assertEqual(original_image_key.key, avatar_path_id)
image_data = avatar_bucket.Object(avatar_path_id).get()["Body"].read()
self.assertEqual(image_data, test_image_data)
# Test realm icon and logo
upload_path = upload.upload_backend.realm_avatar_and_logo_path(imported_realm)
original_icon_path_id = os.path.join(upload_path, "icon.original")
original_icon_key = avatar_bucket.Object(original_icon_path_id)
self.assertEqual(original_icon_key.get()["Body"].read(), test_image_data)
resized_icon_path_id = os.path.join(upload_path, "icon.png")
resized_icon_key = avatar_bucket.Object(resized_icon_path_id)
self.assertEqual(resized_icon_key.key, resized_icon_path_id)
self.assertEqual(imported_realm.icon_source, Realm.ICON_UPLOADED)
original_logo_path_id = os.path.join(upload_path, "logo.original")
original_logo_key = avatar_bucket.Object(original_logo_path_id)
self.assertEqual(original_logo_key.get()["Body"].read(), test_image_data)
resized_logo_path_id = os.path.join(upload_path, "logo.png")
resized_logo_key = avatar_bucket.Object(resized_logo_path_id)
self.assertEqual(resized_logo_key.key, resized_logo_path_id)
self.assertEqual(imported_realm.logo_source, Realm.LOGO_UPLOADED)
night_logo_original_path_id = os.path.join(upload_path, "night_logo.original")
night_logo_original_key = avatar_bucket.Object(night_logo_original_path_id)
self.assertEqual(night_logo_original_key.get()["Body"].read(), test_image_data)
resized_night_logo_path_id = os.path.join(upload_path, "night_logo.png")
resized_night_logo_key = avatar_bucket.Object(resized_night_logo_path_id)
self.assertEqual(resized_night_logo_key.key, resized_night_logo_path_id)
self.assertEqual(imported_realm.night_logo_source, Realm.LOGO_UPLOADED)
def test_get_incoming_message_ids(self) -> None:
import_dir = os.path.join(
settings.DEPLOY_ROOT, "zerver", "tests", "fixtures", "import_fixtures"
)
message_ids = get_incoming_message_ids(
import_dir=import_dir,
sort_by_date=True,
)
self.assertEqual(message_ids, [888, 999, 555])
message_ids = get_incoming_message_ids(
import_dir=import_dir,
sort_by_date=False,
)
self.assertEqual(message_ids, [555, 888, 999])
def test_plan_type(self) -> None:
realm = get_realm("zulip")
do_change_plan_type(realm, Realm.LIMITED)
self._setup_export_files(realm)
self._export_realm(realm)
with patch("logging.info"):
with self.settings(BILLING_ENABLED=True):
realm = do_import_realm(
os.path.join(settings.TEST_WORKER_DIR, "test-export"), "test-zulip-1"
)
self.assertEqual(realm.plan_type, Realm.LIMITED)
self.assertEqual(realm.max_invites, 100)
self.assertEqual(realm.upload_quota_gb, 5)
self.assertEqual(realm.message_visibility_limit, 10000)
self.assertTrue(
RealmAuditLog.objects.filter(
realm=realm, event_type=RealmAuditLog.REALM_PLAN_TYPE_CHANGED
).exists()
)
with self.settings(BILLING_ENABLED=False):
realm = do_import_realm(
os.path.join(settings.TEST_WORKER_DIR, "test-export"), "test-zulip-2"
)
self.assertEqual(realm.plan_type, Realm.SELF_HOSTED)
self.assertEqual(realm.max_invites, 100)
self.assertEqual(realm.upload_quota_gb, None)
self.assertEqual(realm.message_visibility_limit, None)
self.assertTrue(
RealmAuditLog.objects.filter(
realm=realm, event_type=RealmAuditLog.REALM_PLAN_TYPE_CHANGED
).exists()
)
| 41.563122 | 109 | 0.644994 | import os
from typing import Any, Callable, Dict, FrozenSet, List, Optional, Set, Tuple
from unittest.mock import patch
import orjson
from django.conf import settings
from django.db.models import Q
from django.utils.timezone import now as timezone_now
from zerver.lib import upload
from zerver.lib.actions import (
do_add_reaction,
do_change_icon_source,
do_change_logo_source,
do_change_plan_type,
do_create_user,
do_update_user_presence,
)
from zerver.lib.avatar_hash import user_avatar_path
from zerver.lib.bot_config import set_bot_config
from zerver.lib.bot_lib import StateHandler
from zerver.lib.export import do_export_realm, do_export_user, export_usermessages_batch
from zerver.lib.import_realm import do_import_realm, get_incoming_message_ids
from zerver.lib.streams import create_stream_if_needed
from zerver.lib.test_classes import ZulipTestCase
from zerver.lib.test_helpers import create_s3_buckets, get_test_image_file, use_s3_backend
from zerver.lib.topic_mutes import add_topic_mute
from zerver.lib.upload import (
claim_attachment,
upload_avatar_image,
upload_emoji_image,
upload_message_file,
)
from zerver.lib.utils import query_chunker
from zerver.models import (
AlertWord,
Attachment,
BotConfigData,
BotStorageData,
CustomProfileField,
CustomProfileFieldValue,
Huddle,
Message,
MutedTopic,
Reaction,
Realm,
RealmAuditLog,
RealmEmoji,
Recipient,
Stream,
Subscription,
UserGroup,
UserGroupMembership,
UserHotspot,
UserMessage,
UserPresence,
UserProfile,
get_active_streams,
get_client,
get_huddle_hash,
get_realm,
get_stream,
)
class QueryUtilTest(ZulipTestCase):
def _create_messages(self) -> None:
for name in ["cordelia", "hamlet", "iago"]:
user = self.example_user(name)
for _ in range(5):
self.send_personal_message(user, self.example_user("othello"))
def test_query_chunker(self) -> None:
self._create_messages()
cordelia = self.example_user("cordelia")
hamlet = self.example_user("hamlet")
def get_queries() -> List[Any]:
queries = [
Message.objects.filter(sender_id=cordelia.id),
Message.objects.filter(sender_id=hamlet.id),
Message.objects.exclude(sender_id__in=[cordelia.id, hamlet.id]),
]
return queries
for query in get_queries():
assert len(list(query)) > 0
queries = get_queries()
all_msg_ids: Set[int] = set()
chunker = query_chunker(
queries=queries,
id_collector=all_msg_ids,
chunk_size=20,
)
all_row_ids = []
for chunk in chunker:
for row in chunk:
all_row_ids.append(row.id)
self.assertEqual(all_row_ids, sorted(all_row_ids))
self.assertEqual(len(all_msg_ids), len(Message.objects.all()))
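# For context: query_chunker (zerver.lib.utils) merges several disjoint
# querysets and yields lists of at most chunk_size rows in ascending id
# order, recording every id it sees in id_collector. A hedged sketch of
# equivalent usage (names as used in this test, not a new API):
#
#     for chunk in query_chunker(queries=queries, id_collector=all_msg_ids, chunk_size=20):
#         ...  # each chunk is a list of model rows, globally sorted by id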
# We don't need the order_by here, but it should be harmless.
queries = [
Message.objects.filter(sender_id=cordelia.id).order_by("id"),
Message.objects.filter(sender_id=hamlet.id),
]
all_msg_ids = set()
chunker = query_chunker(
queries=queries,
id_collector=all_msg_ids,
chunk_size=7, # use a different size
)
list(chunker) # exhaust the iterator
self.assertEqual(
len(all_msg_ids),
len(Message.objects.filter(sender_id__in=[cordelia.id, hamlet.id])),
)
# Try just a single query to validate chunking.
queries = [
Message.objects.exclude(sender_id=cordelia.id),
]
all_msg_ids = set()
chunker = query_chunker(
queries=queries,
id_collector=all_msg_ids,
chunk_size=11, # use a different size each time
)
list(chunker) # exhaust the iterator
self.assertEqual(
len(all_msg_ids),
len(Message.objects.exclude(sender_id=cordelia.id)),
)
self.assertTrue(len(all_msg_ids) > 15)
# Verify assertions about disjoint-ness.
queries = [
Message.objects.exclude(sender_id=cordelia.id),
Message.objects.filter(sender_id=hamlet.id),
]
all_msg_ids = set()
chunker = query_chunker(
queries=queries,
id_collector=all_msg_ids,
chunk_size=13, # use a different size each time
)
with self.assertRaises(AssertionError):
list(chunker) # exercise the iterator
# Try to confuse things with ids part of the query...
queries = [
Message.objects.filter(id__lte=10),
Message.objects.filter(id__gt=10),
]
all_msg_ids = set()
chunker = query_chunker(
queries=queries,
id_collector=all_msg_ids,
chunk_size=11, # use a different size each time
)
self.assertEqual(len(all_msg_ids), 0) # until we actually use the iterator
list(chunker) # exhaust the iterator
self.assertEqual(len(all_msg_ids), len(Message.objects.all()))
# Verify that we can just get the first chunk with a next() call.
queries = [
Message.objects.all(),
]
all_msg_ids = set()
chunker = query_chunker(
queries=queries,
id_collector=all_msg_ids,
chunk_size=10, # use a different size each time
)
first_chunk = next(chunker)
self.assertEqual(len(first_chunk), 10)
self.assertEqual(len(all_msg_ids), 10)
expected_msg = Message.objects.all()[0:10][5]
actual_msg = first_chunk[5]
self.assertEqual(actual_msg.content, expected_msg.content)
self.assertEqual(actual_msg.sender_id, expected_msg.sender_id)
class ImportExportTest(ZulipTestCase):
def setUp(self) -> None:
super().setUp()
self.rm_tree(settings.LOCAL_UPLOADS_DIR)
def _make_output_dir(self) -> str:
output_dir = os.path.join(settings.TEST_WORKER_DIR, "test-export")
self.rm_tree(output_dir)
os.makedirs(output_dir, exist_ok=True)
return output_dir
def _export_realm(
self,
realm: Realm,
exportable_user_ids: Optional[Set[int]] = None,
consent_message_id: Optional[int] = None,
) -> Dict[str, Any]:
output_dir = self._make_output_dir()
with patch("logging.info"), patch("zerver.lib.export.create_soft_link"):
do_export_realm(
realm=realm,
output_dir=output_dir,
threads=0,
exportable_user_ids=exportable_user_ids,
consent_message_id=consent_message_id,
)
export_usermessages_batch(
input_path=os.path.join(output_dir, "messages-000001.json.partial"),
output_path=os.path.join(output_dir, "messages-000001.json"),
consent_message_id=consent_message_id,
)
try:
export_usermessages_batch(
input_path=os.path.join(output_dir, "messages-000002.json.partial"),
output_path=os.path.join(output_dir, "messages-000002.json"),
consent_message_id=consent_message_id,
)
except FileNotFoundError:
pass
def read_file(fn: str) -> Any:
full_fn = os.path.join(output_dir, fn)
with open(full_fn, "rb") as f:
return orjson.loads(f.read())
result = {}
result["realm"] = read_file("realm.json")
result["attachment"] = read_file("attachment.json")
result["message"] = read_file("messages-000001.json")
try:
message = read_file("messages-000002.json")
result["message"]["zerver_usermessage"].extend(message["zerver_usermessage"])
result["message"]["zerver_message"].extend(message["zerver_message"])
except FileNotFoundError:
pass
result["uploads_dir"] = os.path.join(output_dir, "uploads")
result["uploads_dir_records"] = read_file(os.path.join("uploads", "records.json"))
result["emoji_dir"] = os.path.join(output_dir, "emoji")
result["emoji_dir_records"] = read_file(os.path.join("emoji", "records.json"))
result["avatar_dir"] = os.path.join(output_dir, "avatars")
result["avatar_dir_records"] = read_file(os.path.join("avatars", "records.json"))
result["realm_icons_dir"] = os.path.join(output_dir, "realm_icons")
result["realm_icons_dir_records"] = read_file(os.path.join("realm_icons", "records.json"))
return result
def _setup_export_files(self, realm: Realm) -> Tuple[str, str, str, bytes]:
message = Message.objects.all()[0]
user_profile = message.sender
url = upload_message_file(
"dummy.txt", len(b"zulip!"), "text/plain", b"zulip!", user_profile
)
attachment_path_id = url.replace("/user_uploads/", "")
claim_attachment(
user_profile=user_profile,
path_id=attachment_path_id,
message=message,
is_message_realm_public=True,
)
avatar_path_id = user_avatar_path(user_profile)
original_avatar_path_id = avatar_path_id + ".original"
emoji_path = RealmEmoji.PATH_ID_TEMPLATE.format(
realm_id=realm.id,
emoji_file_name="1.png",
)
with get_test_image_file("img.png") as img_file:
upload_emoji_image(img_file, "1.png", user_profile)
with get_test_image_file("img.png") as img_file:
upload_avatar_image(img_file, user_profile, user_profile)
with get_test_image_file("img.png") as img_file:
upload.upload_backend.upload_realm_icon_image(img_file, user_profile)
do_change_icon_source(realm, Realm.ICON_UPLOADED)
with get_test_image_file("img.png") as img_file:
upload.upload_backend.upload_realm_logo_image(img_file, user_profile, night=False)
do_change_logo_source(realm, Realm.LOGO_UPLOADED, False, acting_user=user_profile)
with get_test_image_file("img.png") as img_file:
upload.upload_backend.upload_realm_logo_image(img_file, user_profile, night=True)
do_change_logo_source(realm, Realm.LOGO_UPLOADED, True, acting_user=user_profile)
with get_test_image_file("img.png") as img_file:
test_image = img_file.read()
message.sender.avatar_source = "U"
message.sender.save()
realm.refresh_from_db()
return attachment_path_id, emoji_path, original_avatar_path_id, test_image
def test_export_files_from_local(self) -> None:
realm = Realm.objects.get(string_id="zulip")
path_id, emoji_path, original_avatar_path_id, test_image = self._setup_export_files(realm)
full_data = self._export_realm(realm)
data = full_data["attachment"]
self.assertEqual(len(data["zerver_attachment"]), 1)
record = data["zerver_attachment"][0]
self.assertEqual(record["path_id"], path_id)
# Test uploads
fn = os.path.join(full_data["uploads_dir"], path_id)
with open(fn) as f:
self.assertEqual(f.read(), "zulip!")
records = full_data["uploads_dir_records"]
self.assertEqual(records[0]["path"], path_id)
self.assertEqual(records[0]["s3_path"], path_id)
# Test emojis
fn = os.path.join(full_data["emoji_dir"], emoji_path)
fn = fn.replace("1.png", "")
self.assertEqual("1.png", os.listdir(fn)[0])
records = full_data["emoji_dir_records"]
self.assertEqual(records[0]["file_name"], "1.png")
self.assertEqual(records[0]["path"], "2/emoji/images/1.png")
self.assertEqual(records[0]["s3_path"], "2/emoji/images/1.png")
# Test realm logo and icon
records = full_data["realm_icons_dir_records"]
image_files = set()
for record in records:
image_path = os.path.join(full_data["realm_icons_dir"], record["path"])
            if image_path.endswith(".original"):
with open(image_path, "rb") as image_file:
image_data = image_file.read()
self.assertEqual(image_data, test_image)
else:
self.assertTrue(os.path.exists(image_path))
image_files.add(os.path.basename(image_path))
        self.assertEqual(
            image_files,
{
"night_logo.png",
"logo.original",
"logo.png",
"icon.png",
"night_logo.original",
"icon.original",
},
)
# Test avatars
fn = os.path.join(full_data["avatar_dir"], original_avatar_path_id)
with open(fn, "rb") as fb:
fn_data = fb.read()
self.assertEqual(fn_data, test_image)
records = full_data["avatar_dir_records"]
record_path = [record["path"] for record in records]
record_s3_path = [record["s3_path"] for record in records]
self.assertIn(original_avatar_path_id, record_path)
self.assertIn(original_avatar_path_id, record_s3_path)
@use_s3_backend
def test_export_files_from_s3(self) -> None:
create_s3_buckets(settings.S3_AUTH_UPLOADS_BUCKET, settings.S3_AVATAR_BUCKET)
realm = Realm.objects.get(string_id="zulip")
(
attachment_path_id,
emoji_path,
original_avatar_path_id,
test_image,
) = self._setup_export_files(realm)
full_data = self._export_realm(realm)
data = full_data["attachment"]
self.assertEqual(len(data["zerver_attachment"]), 1)
record = data["zerver_attachment"][0]
self.assertEqual(record["path_id"], attachment_path_id)
def check_types(user_profile_id: int, realm_id: int) -> None:
self.assertEqual(type(user_profile_id), int)
self.assertEqual(type(realm_id), int)
# Test uploads
fields = attachment_path_id.split("/")
        fn = os.path.join(full_data["uploads_dir"], fields[0], fields[1], fields[2])
with open(fn) as f:
self.assertEqual(f.read(), "zulip!")
records = full_data["uploads_dir_records"]
self.assertEqual(records[0]["path"], os.path.join(fields[0], fields[1], fields[2]))
self.assertEqual(records[0]["s3_path"], attachment_path_id)
check_types(records[0]["user_profile_id"], records[0]["realm_id"])
# Test emojis
fn = os.path.join(full_data["emoji_dir"], emoji_path)
fn = fn.replace("1.png", "")
self.assertIn("1.png", os.listdir(fn))
records = full_data["emoji_dir_records"]
self.assertEqual(records[0]["file_name"], "1.png")
self.assertTrue("last_modified" in records[0])
self.assertEqual(records[0]["path"], "2/emoji/images/1.png")
self.assertEqual(records[0]["s3_path"], "2/emoji/images/1.png")
check_types(records[0]["user_profile_id"], records[0]["realm_id"])
# Test realm logo and icon
records = full_data["realm_icons_dir_records"]
image_files = set()
for record in records:
image_path = os.path.join(full_data["realm_icons_dir"], record["s3_path"])
            if image_path.endswith(".original"):
with open(image_path, "rb") as image_file:
image_data = image_file.read()
self.assertEqual(image_data, test_image)
else:
self.assertTrue(os.path.exists(image_path))
image_files.add(os.path.basename(image_path))
        self.assertEqual(
            image_files,
{
"night_logo.png",
"logo.original",
"logo.png",
"icon.png",
"night_logo.original",
"icon.original",
},
)
# Test avatars
fn = os.path.join(full_data["avatar_dir"], original_avatar_path_id)
with open(fn, "rb") as file:
fn_data = file.read()
self.assertEqual(fn_data, test_image)
records = full_data["avatar_dir_records"]
record_path = [record["path"] for record in records]
record_s3_path = [record["s3_path"] for record in records]
self.assertIn(original_avatar_path_id, record_path)
self.assertIn(original_avatar_path_id, record_s3_path)
check_types(records[0]["user_profile_id"], records[0]["realm_id"])
def test_zulip_realm(self) -> None:
realm = Realm.objects.get(string_id="zulip")
default_bot = self.example_user("default_bot")
pm_a_msg_id = self.send_personal_message(self.example_user("AARON"), default_bot)
pm_b_msg_id = self.send_personal_message(default_bot, self.example_user("iago"))
pm_c_msg_id = self.send_personal_message(
self.example_user("othello"), self.example_user("hamlet")
)
realm_emoji = RealmEmoji.objects.get(realm=realm)
realm_emoji.delete()
full_data = self._export_realm(realm)
realm_emoji.save()
data = full_data["realm"]
self.assertEqual(len(data["zerver_userprofile_crossrealm"]), 3)
self.assertEqual(len(data["zerver_userprofile_mirrordummy"]), 0)
exported_user_emails = self.get_set(data["zerver_userprofile"], "delivery_email")
self.assertIn(self.example_email("cordelia"), exported_user_emails)
self.assertIn("default-bot@zulip.com", exported_user_emails)
exported_streams = self.get_set(data["zerver_stream"], "name")
self.assertEqual(
exported_streams,
{"Denmark", "Rome", "Scotland", "Venice", "Verona"},
)
exported_alert_words = data["zerver_alertword"]
# We set up 4 alert words for Hamlet, Cordelia, etc.
# when we populate the test database.
num_zulip_users = 10
self.assertEqual(len(exported_alert_words), num_zulip_users * 4)
self.assertIn("robotics", {r["word"] for r in exported_alert_words})
data = full_data["message"]
um = UserMessage.objects.all()[0]
exported_um = self.find_by_id(data["zerver_usermessage"], um.id)
self.assertEqual(exported_um["message"], um.message_id)
self.assertEqual(exported_um["user_profile"], um.user_profile_id)
exported_message = self.find_by_id(data["zerver_message"], um.message_id)
self.assertEqual(exported_message["content"], um.message.content)
exported_message_ids = self.get_set(data["zerver_message"], "id")
self.assertIn(pm_a_msg_id, exported_message_ids)
self.assertIn(pm_b_msg_id, exported_message_ids)
self.assertIn(pm_c_msg_id, exported_message_ids)
def test_export_realm_with_exportable_user_ids(self) -> None:
realm = Realm.objects.get(string_id="zulip")
        iago = self.example_user("iago")
        hamlet = self.example_user("hamlet")
        user_ids = {iago.id, hamlet.id}
pm_a_msg_id = self.send_personal_message(
self.example_user("AARON"), self.example_user("othello")
)
pm_b_msg_id = self.send_personal_message(
self.example_user("cordelia"), self.example_user("iago")
)
pm_c_msg_id = self.send_personal_message(
self.example_user("hamlet"), self.example_user("othello")
)
pm_d_msg_id = self.send_personal_message(
self.example_user("iago"), self.example_user("hamlet")
)
realm_emoji = RealmEmoji.objects.get(realm=realm)
realm_emoji.delete()
full_data = self._export_realm(realm, exportable_user_ids=user_ids)
realm_emoji.save()
data = full_data["realm"]
exported_user_emails = self.get_set(data["zerver_userprofile"], "delivery_email")
self.assertIn(self.example_email("iago"), exported_user_emails)
self.assertIn(self.example_email("hamlet"), exported_user_emails)
self.assertNotIn("default-bot@zulip.com", exported_user_emails)
self.assertNotIn(self.example_email("cordelia"), exported_user_emails)
dummy_user_emails = self.get_set(data["zerver_userprofile_mirrordummy"], "delivery_email")
self.assertIn(self.example_email("cordelia"), dummy_user_emails)
self.assertIn(self.example_email("othello"), dummy_user_emails)
self.assertIn("default-bot@zulip.com", dummy_user_emails)
self.assertNotIn(self.example_email("iago"), dummy_user_emails)
self.assertNotIn(self.example_email("hamlet"), dummy_user_emails)
data = full_data["message"]
exported_message_ids = self.get_set(data["zerver_message"], "id")
self.assertNotIn(pm_a_msg_id, exported_message_ids)
self.assertIn(pm_b_msg_id, exported_message_ids)
self.assertIn(pm_c_msg_id, exported_message_ids)
self.assertIn(pm_d_msg_id, exported_message_ids)
def test_export_realm_with_member_consent(self) -> None:
realm = Realm.objects.get(string_id="zulip")
# Create private streams and subscribe users for testing export
create_stream_if_needed(realm, "Private A", invite_only=True)
self.subscribe(self.example_user("iago"), "Private A")
self.subscribe(self.example_user("othello"), "Private A")
self.send_stream_message(self.example_user("iago"), "Private A", "Hello Stream A")
create_stream_if_needed(realm, "Private B", invite_only=True)
self.subscribe(self.example_user("prospero"), "Private B")
stream_b_message_id = self.send_stream_message(
self.example_user("prospero"), "Private B", "Hello Stream B"
)
self.subscribe(self.example_user("hamlet"), "Private B")
create_stream_if_needed(realm, "Private C", invite_only=True)
self.subscribe(self.example_user("othello"), "Private C")
self.subscribe(self.example_user("prospero"), "Private C")
stream_c_message_id = self.send_stream_message(
self.example_user("othello"), "Private C", "Hello Stream C"
)
# Create huddles
self.send_huddle_message(
self.example_user("iago"), [self.example_user("cordelia"), self.example_user("AARON")]
)
huddle_a = Huddle.objects.last()
self.send_huddle_message(
self.example_user("ZOE"),
[self.example_user("hamlet"), self.example_user("AARON"), self.example_user("othello")],
)
huddle_b = Huddle.objects.last()
huddle_c_message_id = self.send_huddle_message(
self.example_user("AARON"),
[self.example_user("cordelia"), self.example_user("ZOE"), self.example_user("othello")],
)
# Create PMs
pm_a_msg_id = self.send_personal_message(
self.example_user("AARON"), self.example_user("othello")
)
pm_b_msg_id = self.send_personal_message(
self.example_user("cordelia"), self.example_user("iago")
)
pm_c_msg_id = self.send_personal_message(
self.example_user("hamlet"), self.example_user("othello")
)
pm_d_msg_id = self.send_personal_message(
self.example_user("iago"), self.example_user("hamlet")
)
# Send message advertising export and make users react
self.send_stream_message(
self.example_user("othello"),
"Verona",
topic_name="Export",
content="Thumbs up for export",
)
message = Message.objects.last()
consented_user_ids = [self.example_user(user).id for user in ["iago", "hamlet"]]
do_add_reaction(
self.example_user("iago"), message, "outbox", "1f4e4", Reaction.UNICODE_EMOJI
)
do_add_reaction(
self.example_user("hamlet"), message, "outbox", "1f4e4", Reaction.UNICODE_EMOJI
)
realm_emoji = RealmEmoji.objects.get(realm=realm)
realm_emoji.delete()
full_data = self._export_realm(realm, consent_message_id=message.id)
realm_emoji.save()
data = full_data["realm"]
self.assertEqual(len(data["zerver_userprofile_crossrealm"]), 3)
self.assertEqual(len(data["zerver_userprofile_mirrordummy"]), 0)
exported_user_emails = self.get_set(data["zerver_userprofile"], "delivery_email")
self.assertIn(self.example_email("cordelia"), exported_user_emails)
self.assertIn(self.example_email("hamlet"), exported_user_emails)
self.assertIn(self.example_email("iago"), exported_user_emails)
self.assertIn(self.example_email("othello"), exported_user_emails)
self.assertIn("default-bot@zulip.com", exported_user_emails)
exported_streams = self.get_set(data["zerver_stream"], "name")
self.assertEqual(
exported_streams,
{
"Denmark",
"Rome",
"Scotland",
"Venice",
"Verona",
"Private A",
"Private B",
"Private C",
},
)
data = full_data["message"]
exported_usermessages = UserMessage.objects.filter(
user_profile__in=[self.example_user("iago"), self.example_user("hamlet")]
)
um = exported_usermessages[0]
self.assertEqual(len(data["zerver_usermessage"]), len(exported_usermessages))
exported_um = self.find_by_id(data["zerver_usermessage"], um.id)
self.assertEqual(exported_um["message"], um.message_id)
self.assertEqual(exported_um["user_profile"], um.user_profile_id)
exported_message = self.find_by_id(data["zerver_message"], um.message_id)
self.assertEqual(exported_message["content"], um.message.content)
public_stream_names = ["Denmark", "Rome", "Scotland", "Venice", "Verona"]
public_stream_ids = Stream.objects.filter(name__in=public_stream_names).values_list(
"id", flat=True
)
public_stream_recipients = Recipient.objects.filter(
type_id__in=public_stream_ids, type=Recipient.STREAM
)
public_stream_message_ids = Message.objects.filter(
recipient__in=public_stream_recipients
).values_list("id", flat=True)
# Messages from Private Stream C are not exported since no member gave consent
private_stream_ids = Stream.objects.filter(name__in=["Private A", "Private B"]).values_list(
"id", flat=True
)
private_stream_recipients = Recipient.objects.filter(
type_id__in=private_stream_ids, type=Recipient.STREAM
)
private_stream_message_ids = Message.objects.filter(
recipient__in=private_stream_recipients
).values_list("id", flat=True)
pm_recipients = Recipient.objects.filter(
type_id__in=consented_user_ids, type=Recipient.PERSONAL
)
pm_query = Q(recipient__in=pm_recipients) | Q(sender__in=consented_user_ids)
        exported_pm_ids = Message.objects.filter(pm_query).values_list("id", flat=True)
# Third huddle is not exported since none of the members gave consent
huddle_recipients = Recipient.objects.filter(
type_id__in=[huddle_a.id, huddle_b.id], type=Recipient.HUDDLE
)
pm_query = Q(recipient__in=huddle_recipients) | Q(sender__in=consented_user_ids)
        exported_huddle_ids = Message.objects.filter(pm_query).values_list("id", flat=True)
exported_msg_ids = (
set(public_stream_message_ids)
| set(private_stream_message_ids)
| set(exported_pm_ids)
| set(exported_huddle_ids)
)
self.assertEqual(self.get_set(data["zerver_message"], "id"), exported_msg_ids)
# TODO: This behavior is wrong and should be fixed. The message should not be exported
# since it was sent before the only consented user iago joined the stream.
self.assertIn(stream_b_message_id, exported_msg_ids)
self.assertNotIn(stream_c_message_id, exported_msg_ids)
self.assertNotIn(huddle_c_message_id, exported_msg_ids)
self.assertNotIn(pm_a_msg_id, exported_msg_ids)
self.assertIn(pm_b_msg_id, exported_msg_ids)
self.assertIn(pm_c_msg_id, exported_msg_ids)
self.assertIn(pm_d_msg_id, exported_msg_ids)
def test_export_single_user(self) -> None:
output_dir = self._make_output_dir()
cordelia = self.example_user("cordelia")
with patch("logging.info"):
do_export_user(cordelia, output_dir)
def read_file(fn: str) -> Any:
full_fn = os.path.join(output_dir, fn)
with open(full_fn, "rb") as f:
return orjson.loads(f.read())
messages = read_file("messages-000001.json")
user = read_file("user.json")
exported_user_id = self.get_set(user["zerver_userprofile"], "id")
self.assertEqual(exported_user_id, {cordelia.id})
exported_user_email = self.get_set(user["zerver_userprofile"], "email")
self.assertEqual(exported_user_email, {cordelia.email})
exported_recipient_type_id = self.get_set(user["zerver_recipient"], "type_id")
self.assertIn(cordelia.id, exported_recipient_type_id)
exported_stream_id = self.get_set(user["zerver_stream"], "id")
self.assertIn(list(exported_stream_id)[0], exported_recipient_type_id)
exported_recipient_id = self.get_set(user["zerver_recipient"], "id")
exported_subscription_recipient = self.get_set(user["zerver_subscription"], "recipient")
self.assertEqual(exported_recipient_id, exported_subscription_recipient)
exported_messages_recipient = self.get_set(messages["zerver_message"], "recipient")
self.assertIn(list(exported_messages_recipient)[0], exported_recipient_id)
def test_import_realm(self) -> None:
original_realm = Realm.objects.get(string_id="zulip")
RealmEmoji.objects.get(realm=original_realm).delete()
# data to test import of huddles
huddle = [
self.example_user("hamlet"),
self.example_user("othello"),
]
self.send_huddle_message(
self.example_user("cordelia"),
huddle,
"test huddle message",
)
user_mention_message = "@**King Hamlet** Hello"
self.send_stream_message(self.example_user("iago"), "Verona", user_mention_message)
stream_mention_message = "Subscribe to #**Denmark**"
self.send_stream_message(self.example_user("hamlet"), "Verona", stream_mention_message)
user_group_mention_message = "Hello @*hamletcharacters*"
self.send_stream_message(self.example_user("othello"), "Verona", user_group_mention_message)
special_characters_message = "```\n'\n```\n@**Polonius**"
self.send_stream_message(self.example_user("iago"), "Denmark", special_characters_message)
sample_user = self.example_user("hamlet")
UserHotspot.objects.create(
user=sample_user,
hotspot="intro_streams",
)
stream = get_stream("Verona", original_realm)
add_topic_mute(
user_profile=sample_user,
stream_id=stream.id,
recipient_id=stream.recipient.id,
topic_name="Verona2",
)
do_update_user_presence(
sample_user, get_client("website"), timezone_now(), UserPresence.ACTIVE
)
bot_profile = do_create_user(
email="bot-1@zulip.com",
password="test",
realm=original_realm,
full_name="bot",
bot_type=UserProfile.EMBEDDED_BOT,
bot_owner=sample_user,
)
storage = StateHandler(bot_profile)
storage.put("some key", "some value")
set_bot_config(bot_profile, "entry 1", "value 1")
self._export_realm(original_realm)
with patch("logging.info"):
with self.settings(BILLING_ENABLED=False):
do_import_realm(os.path.join(settings.TEST_WORKER_DIR, "test-export"), "test-zulip")
self.assertTrue(Realm.objects.filter(string_id="test-zulip").exists())
imported_realm = Realm.objects.get(string_id="test-zulip")
self.assertNotEqual(imported_realm.id, original_realm.id)
def assert_realm_values(f: Callable[[Realm], Any], equal: bool = True) -> None:
orig_realm_result = f(original_realm)
imported_realm_result = f(imported_realm)
assert orig_realm_result
if equal:
self.assertEqual(orig_realm_result, imported_realm_result)
else:
self.assertNotEqual(orig_realm_result, imported_realm_result)
assert_realm_values(
lambda r: {user.email for user in r.get_admin_users_and_bots()},
)
assert_realm_values(
lambda r: {user.email for user in r.get_active_users()},
)
assert_realm_values(
lambda r: {stream.name for stream in get_active_streams(r)},
)
def get_recipient_stream(r: Realm) -> Recipient:
return Stream.objects.get(name="Verona", realm=r).recipient
def get_recipient_user(r: Realm) -> Recipient:
return UserProfile.objects.get(full_name="Iago", realm=r).recipient
assert_realm_values(lambda r: get_recipient_stream(r).type)
assert_realm_values(lambda r: get_recipient_user(r).type)
def get_subscribers(recipient: Recipient) -> Set[str]:
subscriptions = Subscription.objects.filter(recipient=recipient)
users = {sub.user_profile.email for sub in subscriptions}
return users
assert_realm_values(
lambda r: get_subscribers(get_recipient_stream(r)),
)
assert_realm_values(
lambda r: get_subscribers(get_recipient_user(r)),
)
def get_custom_profile_field_names(r: Realm) -> Set[str]:
custom_profile_fields = CustomProfileField.objects.filter(realm=r)
custom_profile_field_names = {field.name for field in custom_profile_fields}
return custom_profile_field_names
assert_realm_values(get_custom_profile_field_names)
def get_custom_profile_with_field_type_user(
r: Realm,
) -> Tuple[Set[Any], Set[Any], Set[FrozenSet[str]]]:
fields = CustomProfileField.objects.filter(field_type=CustomProfileField.USER, realm=r)
def get_email(user_id: int) -> str:
return UserProfile.objects.get(id=user_id).email
def get_email_from_value(field_value: CustomProfileFieldValue) -> Set[str]:
user_id_list = orjson.loads(field_value.value)
return {get_email(user_id) for user_id in user_id_list}
def custom_profile_field_values_for(
fields: List[CustomProfileField],
) -> Set[FrozenSet[str]]:
user_emails: Set[FrozenSet[str]] = set()
for field in fields:
values = CustomProfileFieldValue.objects.filter(field=field)
for value in values:
user_emails.add(frozenset(get_email_from_value(value)))
return user_emails
            field_names: Set[str] = set()
            field_hints: Set[str] = set()
for field in fields:
field_names.add(field.name)
field_hints.add(field.hint)
return (field_hints, field_names, custom_profile_field_values_for(fields))
assert_realm_values(get_custom_profile_with_field_type_user)
def get_realm_audit_log_event_type(r: Realm) -> Set[str]:
realmauditlogs = RealmAuditLog.objects.filter(realm=r).exclude(
event_type=RealmAuditLog.REALM_PLAN_TYPE_CHANGED
)
realmauditlog_event_type = {log.event_type for log in realmauditlogs}
return realmauditlog_event_type
assert_realm_values(get_realm_audit_log_event_type)
cordelia_full_name = "Cordelia Lear"
hamlet_full_name = "King Hamlet"
othello_full_name = "Othello, the Moor of Venice"
def get_user_id(r: Realm, full_name: str) -> int:
return UserProfile.objects.get(realm=r, full_name=full_name).id
def get_huddle_hashes(r: Realm) -> str:
user_id_list = [
get_user_id(r, cordelia_full_name),
get_user_id(r, hamlet_full_name),
get_user_id(r, othello_full_name),
]
huddle_hash = get_huddle_hash(user_id_list)
return huddle_hash
assert_realm_values(get_huddle_hashes, equal=False)
def get_huddle_message(r: Realm) -> str:
huddle_hash = get_huddle_hashes(r)
huddle_id = Huddle.objects.get(huddle_hash=huddle_hash).id
            huddle_recipient = Recipient.objects.get(type_id=huddle_id, type=Recipient.HUDDLE)
huddle_message = Message.objects.get(recipient=huddle_recipient)
return huddle_message.content
assert_realm_values(get_huddle_message)
self.assertEqual(get_huddle_message(imported_realm), "test huddle message")
def get_alertwords(r: Realm) -> Set[str]:
return {rec.word for rec in AlertWord.objects.filter(realm_id=r.id)}
assert_realm_values(get_alertwords)
def get_user_hotspots(r: Realm) -> Set[str]:
user_id = get_user_id(r, hamlet_full_name)
hotspots = UserHotspot.objects.filter(user_id=user_id)
user_hotspots = {hotspot.hotspot for hotspot in hotspots}
return user_hotspots
assert_realm_values(get_user_hotspots)
def get_muted_topics(r: Realm) -> Set[str]:
user_profile_id = get_user_id(r, hamlet_full_name)
muted_topics = MutedTopic.objects.filter(user_profile_id=user_profile_id)
topic_names = {muted_topic.topic_name for muted_topic in muted_topics}
return topic_names
assert_realm_values(get_muted_topics)
assert_realm_values(
lambda r: {group.name for group in UserGroup.objects.filter(realm=r)},
)
def get_user_membership(r: Realm) -> Set[str]:
usergroup = UserGroup.objects.get(realm=r, name="hamletcharacters")
usergroup_membership = UserGroupMembership.objects.filter(user_group=usergroup)
users = {membership.user_profile.email for membership in usergroup_membership}
return users
assert_realm_values(get_user_membership)
def get_botstoragedata(r: Realm) -> Dict[str, Any]:
bot_profile = UserProfile.objects.get(full_name="bot", realm=r)
bot_storage_data = BotStorageData.objects.get(bot_profile=bot_profile)
return {"key": bot_storage_data.key, "data": bot_storage_data.value}
assert_realm_values(get_botstoragedata)
def get_botconfigdata(r: Realm) -> Dict[str, Any]:
bot_profile = UserProfile.objects.get(full_name="bot", realm=r)
bot_config_data = BotConfigData.objects.get(bot_profile=bot_profile)
return {"key": bot_config_data.key, "data": bot_config_data.value}
assert_realm_values(get_botconfigdata)
        def get_stream_messages(r: Realm) -> Any:
recipient = get_recipient_stream(r)
messages = Message.objects.filter(recipient=recipient)
return messages
def get_stream_topics(r: Realm) -> Set[str]:
messages = get_stream_messages(r)
topics = {m.topic_name() for m in messages}
return topics
assert_realm_values(get_stream_topics)
def get_usermessages_user(r: Realm) -> Set[Any]:
messages = get_stream_messages(r).order_by("content")
usermessage = UserMessage.objects.filter(message=messages[0])
usermessage_user = {um.user_profile.email for um in usermessage}
return usermessage_user
assert_realm_values(get_usermessages_user)
        def get_user_mention(r: Realm) -> str:
mentioned_user = UserProfile.objects.get(
delivery_email=self.example_email("hamlet"), realm=r
)
data_user_id = f'data-user-id="{mentioned_user.id}"'
mention_message = get_stream_messages(r).get(rendered_content__contains=data_user_id)
return mention_message.content
assert_realm_values(get_user_mention)
        def get_stream_mention(r: Realm) -> str:
mentioned_stream = get_stream("Denmark", r)
data_stream_id = f'data-stream-id="{mentioned_stream.id}"'
mention_message = get_stream_messages(r).get(rendered_content__contains=data_stream_id)
return mention_message.content
assert_realm_values(get_stream_mention)
        def get_user_group_mention(r: Realm) -> str:
user_group = UserGroup.objects.get(realm=r, name="hamletcharacters")
data_usergroup_id = f'data-user-group-id="{user_group.id}"'
mention_message = get_stream_messages(r).get(
rendered_content__contains=data_usergroup_id
)
return mention_message.content
assert_realm_values(get_user_group_mention)
def get_userpresence_timestamp(r: Realm) -> Set[Any]:
return set(UserPresence.objects.filter(realm=r).values_list("timestamp", flat=True))
assert_realm_values(get_userpresence_timestamp)
        orig_polonius_user = self.example_user("polonius")
        original_msg = Message.objects.get(
            content=special_characters_message, sender__realm=original_realm
        )
        self.assertEqual(
            original_msg.rendered_content,
            '<div class="codehilite"><pre><span></span><code>\'\n</code></pre></div>\n'
            f'<p><span class="user-mention" data-user-id="{orig_polonius_user.id}">@Polonius</span></p>',
        )
imported_polonius_user = UserProfile.objects.get(
delivery_email=self.example_email("polonius"), realm=imported_realm
)
imported_msg = Message.objects.get(
content=special_characters_message, sender__realm=imported_realm
)
self.assertEqual(
imported_msg.rendered_content,
'<div class="codehilite"><pre><span></span><code>\'\n</code></pre></div>\n'
f'<p><span class="user-mention" data-user-id="{imported_polonius_user.id}">@Polonius</span></p>',
)
# Check recipient_id was generated correctly for the imported users and streams.
for user_profile in UserProfile.objects.filter(realm=imported_realm):
self.assertEqual(
user_profile.recipient_id,
Recipient.objects.get(type=Recipient.PERSONAL, type_id=user_profile.id).id,
)
for stream in Stream.objects.filter(realm=imported_realm):
self.assertEqual(
stream.recipient_id,
Recipient.objects.get(type=Recipient.STREAM, type_id=stream.id).id,
)
for huddle_object in Huddle.objects.all():
# Huddles don't have a realm column, so we just test all Huddles for simplicity.
self.assertEqual(
huddle_object.recipient_id,
Recipient.objects.get(type=Recipient.HUDDLE, type_id=huddle_object.id).id,
)
def test_import_files_from_local(self) -> None:
realm = Realm.objects.get(string_id="zulip")
self._setup_export_files(realm)
self._export_realm(realm)
with self.settings(BILLING_ENABLED=False):
with patch("logging.info"):
do_import_realm(os.path.join(settings.TEST_WORKER_DIR, "test-export"), "test-zulip")
imported_realm = Realm.objects.get(string_id="test-zulip")
uploaded_file = Attachment.objects.get(realm=imported_realm)
self.assertEqual(len(b"zulip!"), uploaded_file.size)
attachment_file_path = os.path.join(
settings.LOCAL_UPLOADS_DIR, "files", uploaded_file.path_id
)
self.assertTrue(os.path.isfile(attachment_file_path))
realm_emoji = RealmEmoji.objects.get(realm=imported_realm)
emoji_path = RealmEmoji.PATH_ID_TEMPLATE.format(
realm_id=imported_realm.id,
emoji_file_name=realm_emoji.file_name,
)
emoji_file_path = os.path.join(settings.LOCAL_UPLOADS_DIR, "avatars", emoji_path)
self.assertTrue(os.path.isfile(emoji_file_path))
user_email = Message.objects.all()[0].sender.email
user_profile = UserProfile.objects.get(email=user_email, realm=imported_realm)
avatar_path_id = user_avatar_path(user_profile) + ".original"
avatar_file_path = os.path.join(settings.LOCAL_UPLOADS_DIR, "avatars", avatar_path_id)
self.assertTrue(os.path.isfile(avatar_file_path))
upload_path = upload.upload_backend.realm_avatar_and_logo_path(imported_realm)
full_upload_path = os.path.join(settings.LOCAL_UPLOADS_DIR, upload_path)
with get_test_image_file("img.png") as f:
test_image_data = f.read()
self.assertIsNotNone(test_image_data)
with open(os.path.join(full_upload_path, "icon.original"), "rb") as f:
self.assertEqual(f.read(), test_image_data)
self.assertTrue(os.path.isfile(os.path.join(full_upload_path, "icon.png")))
self.assertEqual(imported_realm.icon_source, Realm.ICON_UPLOADED)
with open(os.path.join(full_upload_path, "logo.original"), "rb") as f:
self.assertEqual(f.read(), test_image_data)
self.assertTrue(os.path.isfile(os.path.join(full_upload_path, "logo.png")))
self.assertEqual(imported_realm.logo_source, Realm.LOGO_UPLOADED)
with open(os.path.join(full_upload_path, "night_logo.original"), "rb") as f:
self.assertEqual(f.read(), test_image_data)
self.assertTrue(os.path.isfile(os.path.join(full_upload_path, "night_logo.png")))
self.assertEqual(imported_realm.night_logo_source, Realm.LOGO_UPLOADED)
@use_s3_backend
def test_import_files_from_s3(self) -> None:
uploads_bucket, avatar_bucket = create_s3_buckets(
settings.S3_AUTH_UPLOADS_BUCKET, settings.S3_AVATAR_BUCKET
)
realm = Realm.objects.get(string_id="zulip")
self._setup_export_files(realm)
self._export_realm(realm)
with self.settings(BILLING_ENABLED=False):
with patch("logging.info"):
do_import_realm(os.path.join(settings.TEST_WORKER_DIR, "test-export"), "test-zulip")
imported_realm = Realm.objects.get(string_id="test-zulip")
with get_test_image_file("img.png") as f:
test_image_data = f.read()
uploaded_file = Attachment.objects.get(realm=imported_realm)
self.assertEqual(len(b"zulip!"), uploaded_file.size)
attachment_content = uploads_bucket.Object(uploaded_file.path_id).get()["Body"].read()
self.assertEqual(b"zulip!", attachment_content)
realm_emoji = RealmEmoji.objects.get(realm=imported_realm)
emoji_path = RealmEmoji.PATH_ID_TEMPLATE.format(
realm_id=imported_realm.id,
emoji_file_name=realm_emoji.file_name,
)
emoji_key = avatar_bucket.Object(emoji_path)
self.assertIsNotNone(emoji_key.get()["Body"].read())
self.assertEqual(emoji_key.key, emoji_path)
user_email = Message.objects.all()[0].sender.email
user_profile = UserProfile.objects.get(email=user_email, realm=imported_realm)
avatar_path_id = user_avatar_path(user_profile) + ".original"
original_image_key = avatar_bucket.Object(avatar_path_id)
self.assertEqual(original_image_key.key, avatar_path_id)
image_data = avatar_bucket.Object(avatar_path_id).get()["Body"].read()
self.assertEqual(image_data, test_image_data)
upload_path = upload.upload_backend.realm_avatar_and_logo_path(imported_realm)
original_icon_path_id = os.path.join(upload_path, "icon.original")
original_icon_key = avatar_bucket.Object(original_icon_path_id)
self.assertEqual(original_icon_key.get()["Body"].read(), test_image_data)
resized_icon_path_id = os.path.join(upload_path, "icon.png")
resized_icon_key = avatar_bucket.Object(resized_icon_path_id)
self.assertEqual(resized_icon_key.key, resized_icon_path_id)
self.assertEqual(imported_realm.icon_source, Realm.ICON_UPLOADED)
original_logo_path_id = os.path.join(upload_path, "logo.original")
original_logo_key = avatar_bucket.Object(original_logo_path_id)
self.assertEqual(original_logo_key.get()["Body"].read(), test_image_data)
resized_logo_path_id = os.path.join(upload_path, "logo.png")
resized_logo_key = avatar_bucket.Object(resized_logo_path_id)
self.assertEqual(resized_logo_key.key, resized_logo_path_id)
self.assertEqual(imported_realm.logo_source, Realm.LOGO_UPLOADED)
night_logo_original_path_id = os.path.join(upload_path, "night_logo.original")
night_logo_original_key = avatar_bucket.Object(night_logo_original_path_id)
self.assertEqual(night_logo_original_key.get()["Body"].read(), test_image_data)
resized_night_logo_path_id = os.path.join(upload_path, "night_logo.png")
resized_night_logo_key = avatar_bucket.Object(resized_night_logo_path_id)
self.assertEqual(resized_night_logo_key.key, resized_night_logo_path_id)
self.assertEqual(imported_realm.night_logo_source, Realm.LOGO_UPLOADED)
def test_get_incoming_message_ids(self) -> None:
import_dir = os.path.join(
settings.DEPLOY_ROOT, "zerver", "tests", "fixtures", "import_fixtures"
)
message_ids = get_incoming_message_ids(
import_dir=import_dir,
sort_by_date=True,
)
self.assertEqual(message_ids, [888, 999, 555])
message_ids = get_incoming_message_ids(
import_dir=import_dir,
sort_by_date=False,
)
self.assertEqual(message_ids, [555, 888, 999])
def test_plan_type(self) -> None:
realm = get_realm("zulip")
do_change_plan_type(realm, Realm.LIMITED)
self._setup_export_files(realm)
self._export_realm(realm)
with patch("logging.info"):
with self.settings(BILLING_ENABLED=True):
realm = do_import_realm(
os.path.join(settings.TEST_WORKER_DIR, "test-export"), "test-zulip-1"
)
self.assertEqual(realm.plan_type, Realm.LIMITED)
self.assertEqual(realm.max_invites, 100)
self.assertEqual(realm.upload_quota_gb, 5)
self.assertEqual(realm.message_visibility_limit, 10000)
self.assertTrue(
RealmAuditLog.objects.filter(
realm=realm, event_type=RealmAuditLog.REALM_PLAN_TYPE_CHANGED
).exists()
)
with self.settings(BILLING_ENABLED=False):
realm = do_import_realm(
os.path.join(settings.TEST_WORKER_DIR, "test-export"), "test-zulip-2"
)
self.assertEqual(realm.plan_type, Realm.SELF_HOSTED)
self.assertEqual(realm.max_invites, 100)
self.assertEqual(realm.upload_quota_gb, None)
self.assertEqual(realm.message_visibility_limit, None)
self.assertTrue(
RealmAuditLog.objects.filter(
realm=realm, event_type=RealmAuditLog.REALM_PLAN_TYPE_CHANGED
).exists()
)
f73de756b772563bfb53c4366c455e3be0662e8f | 141 | py | Python | pandana/utils/__init__.py | HEPonHPC/pandana | 8ee68071892f2a34b54a09ac54033f5d14d42019 | ["Apache-2.0"] | 2 | 2021-04-23T19:36:57.000Z | 2021-06-30T15:57:35.000Z | pandana/utils/__init__.py | HEPonHPC/pandana | 8ee68071892f2a34b54a09ac54033f5d14d42019 | ["Apache-2.0"] | null | null | null | pandana/utils/__init__.py | HEPonHPC/pandana | 8ee68071892f2a34b54a09ac54033f5d14d42019 | ["Apache-2.0"] | null | null | null |
"""Make everything from submodules appear at the top level.
"""
from pandana.utils.mpiutils import *
from pandana.utils.pandasutils import *
f73de7a43b3fe8640e33e83876139c4b6df893b6 | 2,158 | py | Python | pajbot/models/web_sockets.py | sgaweda/troybot | 7153c0ad387e31de57c71172893fd92c85259d1b | ["MIT"] | null | null | null | pajbot/models/web_sockets.py | sgaweda/troybot | 7153c0ad387e31de57c71172893fd92c85259d1b | ["MIT"] | 2 | 2020-02-18T03:30:30.000Z | 2020-02-18T03:31:44.000Z | pajbot/models/web_sockets.py | sgaweda/troybot | 7153c0ad387e31de57c71172893fd92c85259d1b | ["MIT"] | null | null | null |
import logging
import random
from sqlalchemy import TEXT, INT
from sqlalchemy import Column
from sqlalchemy import ForeignKey
from sqlalchemy.orm import relationship
from pajbot.managers.db import Base
log = logging.getLogger(__name__)
def salt_gen():
ALPHABET = "0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ"
return "".join(random.choice(ALPHABET) for i in range(8))
class WebSocket(Base):
__tablename__ = "websockets"
id = Column(INT, primary_key=True, autoincrement=True)
salt = Column(TEXT, nullable=False, unique=True)
widget_id = Column(INT, ForeignKey("widgets.id", ondelete="CASCADE"))
widget = relationship("Widget")
def jsonify(self):
return {"id": self.id, "salt": self.salt, "widget_id": self.widget_id, "widget_name": self.widget.name}
def _new_salt(self, db_session, salt=None):
if not salt:
salt = salt_gen()
self.salt = salt
db_session.merge(self)
return self
def _remove(self, db_session):
db_session.delete(self)
@staticmethod
def _create(db_session, widget_id, salt=None):
if not salt:
salt = salt_gen()
websocket = WebSocket(widget_id=widget_id, salt=salt)
db_session.add(websocket)
return websocket
@staticmethod
def _by_id(db_session, id):
return db_session.query(WebSocket).filter_by(id=id).one_or_none()
@staticmethod
def _by_salt(db_session, salt):
return db_session.query(WebSocket).filter_by(salt=salt).one_or_none()
@staticmethod
def _all(db_session):
return db_session.query(WebSocket).order_by(WebSocket.widget_id, WebSocket.id).all()
class Widget(Base):
__tablename__ = "widgets"
id = Column(INT, primary_key=True, autoincrement=True)
name = Column(TEXT, nullable=False)
def jsonify(self):
return {"id": self.id, "name": self.name}
@staticmethod
def _all(db_session):
return db_session.query(Widget).order_by(Widget.id).all()
@staticmethod
def _by_id(db_session, id):
return db_session.query(Widget).filter_by(id=id).one_or_none()
f73de873548cc15e656045753af17fe617e5347b | 401 | py | Python | test/test_situation_code.py | a-hacker/PyBall | ed88b28dceddf4c8b9f1370d931e4cfa74ce5fda | ["MIT"] | null | null | null | test/test_situation_code.py | a-hacker/PyBall | ed88b28dceddf4c8b9f1370d931e4cfa74ce5fda | ["MIT"] | null | null | null | test/test_situation_code.py | a-hacker/PyBall | ed88b28dceddf4c8b9f1370d931e4cfa74ce5fda | ["MIT"] | null | null | null |
import pytest
from PyBall import PyBall
from PyBall.models.config import SituationCode
@pytest.fixture(scope='module')
def test_situation_codes():
pyball = PyBall()
return pyball.get_situation_codes()
def test_get_situation_codes_returns_situation_codes(test_situation_codes):
assert isinstance(test_situation_codes, list)
assert isinstance(test_situation_codes[0], SituationCode)
f73dea1a4ea7ab8c5b58eb5f3787601ddd3117f4 | 21,919 | py | Python | shoptimizer_api/optimizers_builtin/image_link_optimizer_test.py | mitzaM/shoptimizer | 29fdea8e0b7e32fabef6a433bfb5d3b8060a4f36 | ["Apache-2.0"] | null | null | null | shoptimizer_api/optimizers_builtin/image_link_optimizer_test.py | mitzaM/shoptimizer | 29fdea8e0b7e32fabef6a433bfb5d3b8060a4f36 | ["Apache-2.0"] | null | null | null | shoptimizer_api/optimizers_builtin/image_link_optimizer_test.py | mitzaM/shoptimizer | 29fdea8e0b7e32fabef6a433bfb5d3b8060a4f36 | ["Apache-2.0"] | null | null | null |
# coding=utf-8
# Copyright 2021 Google LLC.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Tests for image_link_optimizer.py."""
import json
import time
from typing import Any, Dict, List, Sequence
from unittest import mock
import urllib.error
from absl.testing import absltest
from shoptimizer_api import constants
import flask
from shoptimizer_api.optimizers_builtin import image_link_optimizer
from shoptimizer_api.test_data import requests_bodies
from shoptimizer_api.util import app_util
from shoptimizer_api.util import image_util
from shoptimizer_api.util import networking
def _build_list_of_image_links(num_links: int,
                               file_type: str = 'jpg') -> List[str]:
  return [f'https://examples.com/image{n}.{file_type}'
          for n in range(num_links)]
def _request_body_from_image_links(links: Sequence[str]) -> Dict[str, Any]:
return requests_bodies.build_request_body(properties_to_be_updated={
'imageLink': links[0],
'additionalImageLink': links[1:]
})
def _setup_flask_with_configs_only():
app = flask.Flask(__name__)
app.config['CONFIGS'] = app_util._load_all_configs()
app.app_context().push()
@mock.patch.object(image_link_optimizer, '_CONFIG_FILE_NAME',
new='image_link_optimizer_config_test')
class ImageLinkOptimizerTest(absltest.TestCase):
def setUp(self):
super().setUp()
_setup_flask_with_configs_only()
# By default, mock load_bytes_at_url to return empty bytes
self.mock_urlopen = self.enter_context(
mock.patch.object(networking, 'load_bytes_at_url', return_value=b'',
autospec=True))
# By default, mock the ML model to avoid scoring each image
self.mock_model = self.enter_context(
mock.patch.object(image_util, 'score_image', return_value=float('inf'),
autospec=True))
self.optimizer = image_link_optimizer.ImageLinkOptimizer(
image_link_optimizer.CONFIGURATION_DEFAULTS)
def test_config_uses_defaults_if_no_config_file_or_assignment(self):
with mock.patch.object(image_link_optimizer, '_CONFIG_FILE_NAME', 'file'):
optimizer = image_link_optimizer.ImageLinkOptimizer()
self.assertEqual(
image_link_optimizer
.CONFIGURATION_DEFAULTS['require_image_can_be_downloaded'],
optimizer.require_image_can_be_downloaded)
self.assertEqual(
image_link_optimizer
.CONFIGURATION_DEFAULTS['require_image_score_quality_better_than'],
optimizer.require_image_score_quality_better_than)
def test_config_uses_config_file_if_no_assignment(self):
with open(f'shoptimizer_api/config/{image_link_optimizer._CONFIG_FILE_NAME}.json') as f:
file_config = json.load(f)
optimizer = image_link_optimizer.ImageLinkOptimizer()
self.assertEqual(
file_config['require_image_can_be_downloaded'],
optimizer.require_image_can_be_downloaded)
self.assertEqual(
file_config['require_image_score_quality_better_than'],
optimizer.require_image_score_quality_better_than)
def test_config_uses_assignment_if_available(self):
assignments = {
'require_image_can_be_downloaded': True,
'require_image_score_quality_better_than': float('inf')
}
optimizer = image_link_optimizer.ImageLinkOptimizer(assignments)
self.assertEqual(
assignments['require_image_can_be_downloaded'],
optimizer.require_image_can_be_downloaded)
self.assertEqual(
assignments['require_image_score_quality_better_than'],
optimizer.require_image_score_quality_better_than)
def test_negative_require_image_score_quality_better_than_set_to_zero(self):
optimizer = image_link_optimizer.ImageLinkOptimizer({
'require_image_score_quality_better_than': -1
})
self.assertEqual(0, optimizer.require_image_score_quality_better_than)
def test_raises_if_invalid_require_image_score_quality_better_than(self):
with self.assertRaises(ValueError):
image_link_optimizer.ImageLinkOptimizer({
'require_image_score_quality_better_than': 'some string'
})
def test_optimizer_does_nothing_when_alternate_image_links_missing(self):
original_data = requests_bodies.build_request_body(
properties_to_be_removed=['additionalImageLink'])
optimized_data, optimization_result = self.optimizer.process(original_data)
product = optimized_data['entries'][0]['product']
self.assertNotIn('additionalImageLink', product)
self.assertEqual(0, optimization_result.num_of_products_optimized)
def test_optimizer_does_nothing_when_alternate_image_links_valid(self):
image_links = _build_list_of_image_links(3)
original_data = requests_bodies.build_request_body(
properties_to_be_updated={'additionalImageLink': image_links})
optimized_data, optimization_result = self.optimizer.process(original_data)
product = optimized_data['entries'][0]['product']
self.assertEqual(image_links, product['additionalImageLink'])
self.assertEqual(0, optimization_result.num_of_products_optimized)
def test_optimizer_does_not_remove_image_links_when_not_above_maximum(self):
image_links = _build_list_of_image_links(constants.MAX_ALTERNATE_IMAGE_URLS)
original_data = requests_bodies.build_request_body(
properties_to_be_updated={'additionalImageLink': image_links})
optimized_data, optimization_result = self.optimizer.process(original_data)
product = optimized_data['entries'][0]['product']
self.assertEqual(image_links, product['additionalImageLink'])
self.assertEqual(0, optimization_result.num_of_products_optimized)
def test_optimizer_truncates_additional_images_above_maximum(self):
image_links = _build_list_of_image_links(
constants.MAX_ALTERNATE_IMAGE_URLS + 1)
original_data = requests_bodies.build_request_body(
properties_to_be_updated={'additionalImageLink': image_links})
optimized_data, optimization_result = self.optimizer.process(original_data)
product = optimized_data['entries'][0]['product']
self.assertEqual(image_links[:constants.MAX_ALTERNATE_IMAGE_URLS],
product['additionalImageLink'])
self.assertEqual(1, optimization_result.num_of_products_optimized)
def test_optimizer_requests_data_from_all_image_urls(self):
image_links = _build_list_of_image_links(3)
self.optimizer.process(_request_body_from_image_links(image_links))
self.mock_urlopen.assert_has_calls(
[mock.call(image_links[0]),
mock.call(image_links[1]),
mock.call(image_links[2])],
any_order=True)
def test_doesnt_download_urls_if_not_require_image_can_be_downloaded(self):
image_links = _build_list_of_image_links(3)
optimizer = image_link_optimizer.ImageLinkOptimizer({
'require_image_can_be_downloaded': False
})
optimizer.process(_request_body_from_image_links(image_links))
self.mock_urlopen.assert_not_called()
def test_doesnt_attempt_scoring_if_not_require_image_can_be_downloaded(self):
image_links = _build_list_of_image_links(3)
optimizer = image_link_optimizer.ImageLinkOptimizer({
'require_image_can_be_downloaded': False
})
optimizer.process(_request_body_from_image_links(image_links))
self.mock_model.assert_not_called()
def test_optimizer_does_not_request_from_nonhttp_urls(self):
image_links = _build_list_of_image_links(2)
image_links[0] = 'ftp://google.com/image.jpg'
self.optimizer.process(_request_body_from_image_links(image_links))
self.assertNotIn(
mock.call(image_links[0]), self.mock_urlopen.call_args_list)
def test_optimizer_does_not_request_from_long_urls(self):
image_links = _build_list_of_image_links(2)
many_zeros = '0' * constants.MAX_IMAGE_URL_LENGTH
image_links[0] = f'https://google.com/image{many_zeros}.jpg'
self.optimizer.process(_request_body_from_image_links(image_links))
self.assertNotIn(
mock.call(image_links[0]), self.mock_urlopen.call_args_list)
def test_does_not_remove_additional_images_with_errors_below_max(self):
image_links = _build_list_of_image_links(3)
responses = [b''] * len(image_links)
responses[1] = urllib.error.HTTPError(image_links[1], 500, 'Internal Error',
{}, None)
with mock.patch.object(networking, 'load_bytes_at_url') as mock_request:
mock_request.side_effect = responses
optimized_data, optimization_result = self.optimizer.process(
_request_body_from_image_links(image_links))
product = optimized_data['entries'][0]['product']
self.assertEqual(image_links[0], product['imageLink'])
self.assertEqual(image_links[1:], product['additionalImageLink'])
self.assertEqual(0, optimization_result.num_of_products_optimized)
def test_scores_all_valid_images(self):
image_links = _build_list_of_image_links(3)
responses = bytearray('ABCDEF', 'ASCII')
with mock.patch.object(networking, 'load_bytes_at_url') as mock_request:
mock_request.side_effect = responses
self.optimizer.process(_request_body_from_image_links(image_links))
self.mock_model.assert_has_calls([
mock.call(responses[0]),
mock.call(responses[1]),
mock.call(responses[2])
], any_order=True)
def test_does_not_score_images_with_no_content(self):
image_links = _build_list_of_image_links(3)
responses = [b''] * len(image_links)
with mock.patch.object(networking, 'load_bytes_at_url') as mock_request:
mock_request.side_effect = responses
self.optimizer.process(_request_body_from_image_links(image_links))
self.mock_model.assert_not_called()
def test_does_not_score_images_if_minimum_score_is_infinite(self):
image_links = _build_list_of_image_links(3)
assignments = {
'require_image_can_be_downloaded': True,
'require_image_score_quality_better_than': float('inf')
}
optimizer = image_link_optimizer.ImageLinkOptimizer(assignments)
responses = bytearray('ABCDEF', 'ASCII')
with mock.patch.object(networking, 'load_bytes_at_url') as mock_request:
mock_request.side_effect = responses
optimizer.process(_request_body_from_image_links(image_links))
self.mock_model.assert_not_called()
def test_does_not_score_images_with_url_errors(self):
image_links = _build_list_of_image_links(3)
responses = [urllib.error.HTTPError(link, 500, 'Internal Error', {}, None)
for link in image_links]
with mock.patch.object(networking, 'load_bytes_at_url') as mock_request:
mock_request.side_effect = responses
self.optimizer.process(_request_body_from_image_links(image_links))
self.mock_model.assert_not_called()
def test_preferentially_removes_images_with_invalid_urls(self):
image_links = _build_list_of_image_links(
constants.MAX_ALTERNATE_IMAGE_URLS + 2)
image_links[1] = 'ftp://google.com/image.jpg'
responses = [b''] * len(image_links)
with mock.patch.object(networking, 'load_bytes_at_url') as mock_request:
mock_request.side_effect = responses
optimized_data, optimization_result = self.optimizer.process(
_request_body_from_image_links(image_links))
product = optimized_data['entries'][0]['product']
# Expect to remove the 1st additional image link
expected_links = image_links[2:]
self.assertEqual(image_links[0], product['imageLink'])
self.assertEqual(expected_links, product['additionalImageLink'])
self.assertEqual(1, optimization_result.num_of_products_optimized)
def test_preferentially_removes_images_above_size_limit(self):
image_links = _build_list_of_image_links(
constants.MAX_ALTERNATE_IMAGE_URLS + 2)
responses = [b''] * len(image_links)
responses[1] = b'0' * (constants.MAX_IMAGE_FILE_SIZE_BYTES + 1)
with mock.patch.object(networking, 'load_bytes_at_url') as mock_request:
mock_request.side_effect = responses
optimized_data, optimization_result = self.optimizer.process(
_request_body_from_image_links(image_links))
product = optimized_data['entries'][0]['product']
# Expect to remove the 1st additional image link
expected_links = image_links[2:]
self.assertEqual(image_links[0], product['imageLink'])
self.assertEqual(expected_links, product['additionalImageLink'])
self.assertEqual(1, optimization_result.num_of_products_optimized)
def test_preferentially_removes_images_with_errors_above_max(self):
image_links = _build_list_of_image_links(13)
responses = [b''] * len(image_links)
responses[4] = urllib.error.HTTPError(image_links[4], 500,
'Internal Error', {}, None)
responses[8] = urllib.error.HTTPError(image_links[8], 500,
'Internal Error', {}, None)
with mock.patch.object(networking, 'load_bytes_at_url') as mock_request:
mock_request.side_effect = responses
optimized_data, optimization_result = self.optimizer.process(
_request_body_from_image_links(image_links))
product = optimized_data['entries'][0]['product']
# Expect to remove the 4th and 8th image due to errors
expected_links = image_links[1:4] + image_links[5:8] + image_links[9:]
self.assertEqual(image_links[0], product['imageLink'])
self.assertEqual(expected_links, product['additionalImageLink'])
self.assertEqual(1, optimization_result.num_of_products_optimized)
def test_first_removes_errors_above_max_then_truncates_at_max(self):
image_links = _build_list_of_image_links(13)
responses = [b''] * len(image_links)
responses[4] = urllib.error.HTTPError(image_links[1], 500,
'Internal Error', {}, None)
with mock.patch.object(networking, 'load_bytes_at_url') as mock_request:
mock_request.side_effect = responses
optimized_data, optimization_result = self.optimizer.process(
_request_body_from_image_links(image_links))
product = optimized_data['entries'][0]['product']
# Expect to remove the 4th image due to error and the last from truncation
expected_links = image_links[1:4] + image_links[5:-1]
self.assertEqual(image_links[0], product['imageLink'])
self.assertEqual(expected_links, product['additionalImageLink'])
self.assertEqual(1, optimization_result.num_of_products_optimized)
def test_swaps_on_primary_image_error_with_alternate_available(self):
image_links = _build_list_of_image_links(3)
responses = [b''] * len(image_links)
responses[0] = urllib.error.HTTPError(image_links[0], 500,
'Internal Error', {}, None)
with mock.patch.object(networking, 'load_bytes_at_url') as mock_request:
mock_request.side_effect = responses
optimized_data, optimization_result = self.optimizer.process(
_request_body_from_image_links(image_links))
product = optimized_data['entries'][0]['product']
self.assertEqual(image_links[1], product['imageLink'])
expected_links = [image_links[0]] + image_links[2:]
self.assertEqual(expected_links, product['additionalImageLink'])
self.assertEqual(1, optimization_result.num_of_products_optimized)
def test_swaps_on_primary_image_error_with_any_alternate_available(self):
image_links = _build_list_of_image_links(3)
responses = [b''] * len(image_links)
responses[0] = urllib.error.HTTPError(image_links[0], 500,
'Internal Error', {}, None)
responses[1] = urllib.error.HTTPError(image_links[1], 500,
'Internal Error', {}, None)
with mock.patch.object(networking, 'load_bytes_at_url') as mock_request:
mock_request.side_effect = responses
optimized_data, optimization_result = self.optimizer.process(
_request_body_from_image_links(image_links))
product = optimized_data['entries'][0]['product']
self.assertEqual(image_links[2], product['imageLink'])
# Ensure imageLink swapped with 2nd alternate, since the 1st is an error
expected_links = [image_links[1], image_links[0]]
self.assertEqual(expected_links, product['additionalImageLink'])
self.assertEqual(1, optimization_result.num_of_products_optimized)
def test_preferentially_chooses_lowest_scoring_image(self):
image_links = _build_list_of_image_links(5)
image_responses = [b'101010'] * len(image_links)
image_responses[0] = urllib.error.HTTPError(image_links[0], 500,
'Internal Error', {}, None)
score_responses = [0.75, 0.5, 0.25, 1.0]
with mock.patch.object(networking, 'load_bytes_at_url') as mock_network:
mock_network.side_effect = image_responses
with mock.patch.object(image_util, 'score_image') as mock_model:
mock_model.side_effect = score_responses
optimized_data, optimization_result = self.optimizer.process(
_request_body_from_image_links(image_links))
product = optimized_data['entries'][0]['product']
# Ensure imageLink swapped with 3rd alternate; that has the lowest score
self.assertEqual(image_links[3], product['imageLink'])
expected_links = [image_links[1], image_links[2],
image_links[0], image_links[4]]
self.assertEqual(expected_links, product['additionalImageLink'])
self.assertEqual(1, optimization_result.num_of_products_optimized)
def test_images_scoring_below_threshold_are_considered_invalid(self):
image_links = _build_list_of_image_links(3)
image_responses = [b'101010'] * len(image_links)
score_responses = [0.75, 0.25, 1.0]
assignments = {
'require_image_can_be_downloaded': True,
'require_image_score_quality_better_than': 0.5
}
optimizer = image_link_optimizer.ImageLinkOptimizer(assignments)
with mock.patch.object(networking, 'load_bytes_at_url') as mock_network:
mock_network.side_effect = image_responses
with mock.patch.object(image_util, 'score_image') as mock_model:
mock_model.side_effect = score_responses
optimized_data, optimization_result = optimizer.process(
_request_body_from_image_links(image_links))
product = optimized_data['entries'][0]['product']
# Ensure imageLink swapped with 1st alternate; that has the lowest score
self.assertEqual(image_links[1], product['imageLink'])
expected_links = [image_links[0], image_links[2]]
self.assertEqual(expected_links, product['additionalImageLink'])
self.assertEqual(1, optimization_result.num_of_products_optimized)
def test_do_not_swap_images_if_better_alternates_score_below_threshold(self):
image_links = _build_list_of_image_links(3)
image_responses = [b'101010'] * len(image_links)
score_responses = [0.75, 0.6, 0.7]
assignments = {
'require_image_can_be_downloaded': True,
'require_image_score_quality_better_than': 0.5
}
optimizer = image_link_optimizer.ImageLinkOptimizer(assignments)
with mock.patch.object(networking, 'load_bytes_at_url') as mock_network:
mock_network.side_effect = image_responses
with mock.patch.object(image_util, 'score_image') as mock_model:
mock_model.side_effect = score_responses
optimized_data, optimization_result = optimizer.process(
_request_body_from_image_links(image_links))
product = optimized_data['entries'][0]['product']
self.assertEqual(image_links[0], product['imageLink'])
self.assertEqual(image_links[1:], product['additionalImageLink'])
self.assertEqual(0, optimization_result.num_of_products_optimized)
def test_does_not_swap_on_primary_image_error_if_no_alternate_available(self):
image_links = _build_list_of_image_links(3)
responses = [urllib.error.HTTPError(link, 500, 'Internal Error', {}, None)
for link in image_links]
with mock.patch.object(networking, 'load_bytes_at_url') as mock_request:
mock_request.side_effect = responses
optimized_data, optimization_result = self.optimizer.process(
_request_body_from_image_links(image_links))
product = optimized_data['entries'][0]['product']
self.assertEqual(image_links[0], product['imageLink'])
self.assertEqual(image_links[1:], product['additionalImageLink'])
self.assertEqual(0, optimization_result.num_of_products_optimized)
def test_downloads_images_in_parallel(self):
sleep_amount_secs = 0.25
image_links = _build_list_of_image_links(3)
def _wait_before_responding(*_args):
time.sleep(sleep_amount_secs)
return b''
with mock.patch.object(networking, 'load_bytes_at_url') as mock_request:
mock_request.side_effect = _wait_before_responding
start_time = time.time()
self.optimizer.process(_request_body_from_image_links(image_links))
end_time = time.time()
# Elapsed time < sum of the sleep times iff requests are in parallel
self.assertLess(end_time - start_time,
len(image_links) * sleep_amount_secs)
product = optimized_data['entries'][0]['product']
self.assertEqual(image_links[:constants.MAX_ALTERNATE_IMAGE_URLS],
product['additionalImageLink'])
self.assertEqual(1, optimization_result.num_of_products_optimized)
def test_optimizer_requests_data_from_all_image_urls(self):
image_links = _build_list_of_image_links(3)
self.optimizer.process(_request_body_from_image_links(image_links))
self.mock_urlopen.assert_has_calls(
[mock.call(image_links[0]),
mock.call(image_links[1]),
mock.call(image_links[2])],
any_order=True)
def test_doesnt_download_urls_if_not_require_image_can_be_downloaded(self):
image_links = _build_list_of_image_links(3)
optimizer = image_link_optimizer.ImageLinkOptimizer({
'require_image_can_be_downloaded': False
})
optimizer.process(_request_body_from_image_links(image_links))
self.mock_urlopen.assert_not_called()
def test_doesnt_attempt_scoring_if_not_require_image_can_be_downloaded(self):
image_links = _build_list_of_image_links(3)
optimizer = image_link_optimizer.ImageLinkOptimizer({
'require_image_can_be_downloaded': False
})
optimizer.process(_request_body_from_image_links(image_links))
self.mock_model.assert_not_called()
def test_optimizer_does_not_request_from_nonhttp_urls(self):
image_links = _build_list_of_image_links(2)
image_links[0] = 'ftp://google.com/image.jpg'
self.optimizer.process(_request_body_from_image_links(image_links))
self.assertNotIn(
mock.call(image_links[0]), self.mock_urlopen.call_args_list)
def test_optimizer_does_not_request_from_long_urls(self):
image_links = _build_list_of_image_links(2)
many_zeros = '0' * constants.MAX_IMAGE_URL_LENGTH
image_links[0] = f'https://google.com/image{many_zeros}.jpg'
self.optimizer.process(_request_body_from_image_links(image_links))
self.assertNotIn(
mock.call(image_links[0]), self.mock_urlopen.call_args_list)
def test_does_not_remove_additional_images_with_errors_below_max(self):
image_links = _build_list_of_image_links(3)
responses = [b''] * len(image_links)
responses[1] = urllib.error.HTTPError(image_links[1], 500, 'Internal Error',
{}, None)
with mock.patch.object(networking, 'load_bytes_at_url') as mock_request:
mock_request.side_effect = responses
optimized_data, optimization_result = self.optimizer.process(
_request_body_from_image_links(image_links))
product = optimized_data['entries'][0]['product']
self.assertEqual(image_links[0], product['imageLink'])
self.assertEqual(image_links[1:], product['additionalImageLink'])
self.assertEqual(0, optimization_result.num_of_products_optimized)
def test_scores_all_valid_images(self):
image_links = _build_list_of_image_links(3)
responses = bytearray('ABCDEF', 'ASCII')
with mock.patch.object(networking, 'load_bytes_at_url') as mock_request:
mock_request.side_effect = responses
self.optimizer.process(_request_body_from_image_links(image_links))
self.mock_model.assert_has_calls([
mock.call(responses[0]),
mock.call(responses[1]),
mock.call(responses[2])
], any_order=True)
def test_does_not_score_images_with_no_content(self):
image_links = _build_list_of_image_links(3)
responses = [b''] * len(image_links)
with mock.patch.object(networking, 'load_bytes_at_url') as mock_request:
mock_request.side_effect = responses
self.optimizer.process(_request_body_from_image_links(image_links))
self.mock_model.assert_not_called()
def test_does_not_score_images_if_minimum_score_is_infinite(self):
image_links = _build_list_of_image_links(3)
assignments = {
'require_image_can_be_downloaded': True,
'require_image_score_quality_better_than': float('inf')
}
optimizer = image_link_optimizer.ImageLinkOptimizer(assignments)
responses = bytearray('ABCDEF', 'ASCII')
with mock.patch.object(networking, 'load_bytes_at_url') as mock_request:
mock_request.side_effect = responses
optimizer.process(_request_body_from_image_links(image_links))
self.mock_model.assert_not_called()
def test_does_not_score_images_with_url_errors(self):
image_links = _build_list_of_image_links(3)
responses = [urllib.error.HTTPError(link, 500, 'Internal Error', {}, None)
for link in image_links]
with mock.patch.object(networking, 'load_bytes_at_url') as mock_request:
mock_request.side_effect = responses
self.optimizer.process(_request_body_from_image_links(image_links))
self.mock_model.assert_not_called()
def test_preferentially_removes_images_with_invalid_urls(self):
image_links = _build_list_of_image_links(
constants.MAX_ALTERNATE_IMAGE_URLS + 2)
image_links[1] = 'ftp://google.com/image.jpg'
responses = [b''] * len(image_links)
with mock.patch.object(networking, 'load_bytes_at_url') as mock_request:
mock_request.side_effect = responses
optimized_data, optimization_result = self.optimizer.process(
_request_body_from_image_links(image_links))
product = optimized_data['entries'][0]['product']
expected_links = image_links[2:]
self.assertEqual(image_links[0], product['imageLink'])
self.assertEqual(expected_links, product['additionalImageLink'])
self.assertEqual(1, optimization_result.num_of_products_optimized)
def test_preferentially_removes_images_above_size_limit(self):
image_links = _build_list_of_image_links(
constants.MAX_ALTERNATE_IMAGE_URLS + 2)
responses = [b''] * len(image_links)
responses[1] = b'0' * (constants.MAX_IMAGE_FILE_SIZE_BYTES + 1)
with mock.patch.object(networking, 'load_bytes_at_url') as mock_request:
mock_request.side_effect = responses
optimized_data, optimization_result = self.optimizer.process(
_request_body_from_image_links(image_links))
product = optimized_data['entries'][0]['product']
expected_links = image_links[2:]
self.assertEqual(image_links[0], product['imageLink'])
self.assertEqual(expected_links, product['additionalImageLink'])
self.assertEqual(1, optimization_result.num_of_products_optimized)
def test_preferentially_removes_images_with_errors_above_max(self):
image_links = _build_list_of_image_links(13)
responses = [b''] * len(image_links)
responses[4] = urllib.error.HTTPError(image_links[4], 500,
'Internal Error', {}, None)
responses[8] = urllib.error.HTTPError(image_links[8], 500,
'Internal Error', {}, None)
with mock.patch.object(networking, 'load_bytes_at_url') as mock_request:
mock_request.side_effect = responses
optimized_data, optimization_result = self.optimizer.process(
_request_body_from_image_links(image_links))
product = optimized_data['entries'][0]['product']
expected_links = image_links[1:4] + image_links[5:8] + image_links[9:]
self.assertEqual(image_links[0], product['imageLink'])
self.assertEqual(expected_links, product['additionalImageLink'])
self.assertEqual(1, optimization_result.num_of_products_optimized)
def test_first_removes_errors_above_max_then_truncates_at_max(self):
image_links = _build_list_of_image_links(13)
responses = [b''] * len(image_links)
responses[4] = urllib.error.HTTPError(image_links[1], 500,
'Internal Error', {}, None)
with mock.patch.object(networking, 'load_bytes_at_url') as mock_request:
mock_request.side_effect = responses
optimized_data, optimization_result = self.optimizer.process(
_request_body_from_image_links(image_links))
product = optimized_data['entries'][0]['product']
expected_links = image_links[1:4] + image_links[5:-1]
self.assertEqual(image_links[0], product['imageLink'])
self.assertEqual(expected_links, product['additionalImageLink'])
self.assertEqual(1, optimization_result.num_of_products_optimized)
def test_swaps_on_primary_image_error_with_alternate_available(self):
image_links = _build_list_of_image_links(3)
responses = [b''] * len(image_links)
responses[0] = urllib.error.HTTPError(image_links[0], 500,
'Internal Error', {}, None)
with mock.patch.object(networking, 'load_bytes_at_url') as mock_request:
mock_request.side_effect = responses
optimized_data, optimization_result = self.optimizer.process(
_request_body_from_image_links(image_links))
product = optimized_data['entries'][0]['product']
self.assertEqual(image_links[1], product['imageLink'])
expected_links = [image_links[0]] + image_links[2:]
self.assertEqual(expected_links, product['additionalImageLink'])
self.assertEqual(1, optimization_result.num_of_products_optimized)
def test_swaps_on_primary_image_error_with_any_alternate_available(self):
image_links = _build_list_of_image_links(3)
responses = [b''] * len(image_links)
responses[0] = urllib.error.HTTPError(image_links[0], 500,
'Internal Error', {}, None)
responses[1] = urllib.error.HTTPError(image_links[1], 500,
'Internal Error', {}, None)
with mock.patch.object(networking, 'load_bytes_at_url') as mock_request:
mock_request.side_effect = responses
optimized_data, optimization_result = self.optimizer.process(
_request_body_from_image_links(image_links))
product = optimized_data['entries'][0]['product']
self.assertEqual(image_links[2], product['imageLink'])
expected_links = [image_links[1], image_links[0]]
self.assertEqual(expected_links, product['additionalImageLink'])
self.assertEqual(1, optimization_result.num_of_products_optimized)
def test_preferentially_chooses_lowest_scoring_image(self):
image_links = _build_list_of_image_links(5)
image_responses = [b'101010'] * len(image_links)
image_responses[0] = urllib.error.HTTPError(image_links[0], 500,
'Internal Error', {}, None)
score_responses = [0.75, 0.5, 0.25, 1.0]
with mock.patch.object(networking, 'load_bytes_at_url') as mock_network:
mock_network.side_effect = image_responses
with mock.patch.object(image_util, 'score_image') as mock_model:
mock_model.side_effect = score_responses
optimized_data, optimization_result = self.optimizer.process(
_request_body_from_image_links(image_links))
product = optimized_data['entries'][0]['product']
self.assertEqual(image_links[3], product['imageLink'])
expected_links = [image_links[1], image_links[2],
image_links[0], image_links[4]]
self.assertEqual(expected_links, product['additionalImageLink'])
self.assertEqual(1, optimization_result.num_of_products_optimized)
def test_images_scoring_below_threshold_are_considered_invalid(self):
image_links = _build_list_of_image_links(3)
image_responses = [b'101010'] * len(image_links)
score_responses = [0.75, 0.25, 1.0]
assignments = {
'require_image_can_be_downloaded': True,
'require_image_score_quality_better_than': 0.5
}
optimizer = image_link_optimizer.ImageLinkOptimizer(assignments)
with mock.patch.object(networking, 'load_bytes_at_url') as mock_network:
mock_network.side_effect = image_responses
with mock.patch.object(image_util, 'score_image') as mock_model:
mock_model.side_effect = score_responses
optimized_data, optimization_result = optimizer.process(
_request_body_from_image_links(image_links))
product = optimized_data['entries'][0]['product']
self.assertEqual(image_links[1], product['imageLink'])
expected_links = [image_links[0], image_links[2]]
self.assertEqual(expected_links, product['additionalImageLink'])
self.assertEqual(1, optimization_result.num_of_products_optimized)
def test_do_not_swap_images_if_better_alternates_score_below_threshold(self):
image_links = _build_list_of_image_links(3)
image_responses = [b'101010'] * len(image_links)
score_responses = [0.75, 0.6, 0.7]
assignments = {
'require_image_can_be_downloaded': True,
'require_image_score_quality_better_than': 0.5
}
optimizer = image_link_optimizer.ImageLinkOptimizer(assignments)
with mock.patch.object(networking, 'load_bytes_at_url') as mock_network:
mock_network.side_effect = image_responses
with mock.patch.object(image_util, 'score_image') as mock_model:
mock_model.side_effect = score_responses
optimized_data, optimization_result = optimizer.process(
_request_body_from_image_links(image_links))
product = optimized_data['entries'][0]['product']
self.assertEqual(image_links[0], product['imageLink'])
self.assertEqual(image_links[1:], product['additionalImageLink'])
self.assertEqual(0, optimization_result.num_of_products_optimized)
def test_does_not_swap_on_primary_image_error_if_no_alternate_available(self):
image_links = _build_list_of_image_links(3)
responses = [urllib.error.HTTPError(link, 500, 'Internal Error', {}, None)
for link in image_links]
with mock.patch.object(networking, 'load_bytes_at_url') as mock_request:
mock_request.side_effect = responses
optimized_data, optimization_result = self.optimizer.process(
_request_body_from_image_links(image_links))
product = optimized_data['entries'][0]['product']
self.assertEqual(image_links[0], product['imageLink'])
self.assertEqual(image_links[1:], product['additionalImageLink'])
self.assertEqual(0, optimization_result.num_of_products_optimized)
def test_downloads_images_in_parallel(self):
sleep_amount_secs = 0.25
image_links = _build_list_of_image_links(3)
def _wait_before_responding(*_args):
time.sleep(sleep_amount_secs)
return b''
with mock.patch.object(networking, 'load_bytes_at_url') as mock_request:
mock_request.side_effect = _wait_before_responding
start_time = time.time()
self.optimizer.process(_request_body_from_image_links(image_links))
end_time = time.time()
self.assertLess(end_time - start_time,
len(image_links) * sleep_amount_secs)
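The failure-injection pattern used throughout these tests — a `side_effect` list that mixes return values and exceptions, consumed one element per call — can be sketched standalone with only the standard library. `Fetcher`, `download_all`, and the URLs below are hypothetical stand-ins for illustration, not part of the module under test:

```python
import urllib.error
from unittest import mock


class Fetcher:
    """Tiny stand-in for a module that downloads image bytes."""

    @staticmethod
    def load_bytes_at_url(url):
        raise NotImplementedError  # real network call in production


def download_all(urls):
    """Return the bytes for each URL, or None where the download failed."""
    results = []
    for url in urls:
        try:
            results.append(Fetcher.load_bytes_at_url(url))
        except urllib.error.HTTPError:
            results.append(None)
    return results


# Each element of side_effect is consumed per call; exception instances are raised.
responses = [
    b'ok',
    urllib.error.HTTPError('http://x/1.jpg', 500, 'Internal Error', {}, None),
    b'ok',
]
with mock.patch.object(Fetcher, 'load_bytes_at_url') as mock_fetch:
    mock_fetch.side_effect = responses
    out = download_all(['http://x/0.jpg', 'http://x/1.jpg', 'http://x/2.jpg'])

assert out == [b'ok', None, b'ok']
```

Mixing exception instances into a `side_effect` list is what lets the tests above simulate one failing image among several successful downloads in a single pass.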
f73dec194f26d063bec326887cddf93b66f8463b | 3,302 | py | Python | codes/SOM.py | zahrag/3DHARSOM | f934d0b5786d2edac29a7a18be31fa74aafcb881 | ["MIT"] | null | null | null | codes/SOM.py | zahrag/3DHARSOM | f934d0b5786d2edac29a7a18be31fa74aafcb881 | ["MIT"] | null | null | null | codes/SOM.py | zahrag/3DHARSOM | f934d0b5786d2edac29a7a18be31fa74aafcb881 | ["MIT"] | null | null | null |
"""
Author: Zahra Gharaee.
This code is written for the 3D-Human-Action-Recognition Project, started March 14 2014.
"""
import numpy as np
from numpy import linalg as LA
class SOM:
def __init__(self, learning, outputsize_x, outputsize_y, inputsize, sigma, softmax_exponent, max_epoch):
self.name = 'SOM'
self.learning = learning
self.outputsize_x = outputsize_x
self.outputsize_y = outputsize_y
self.inputsize = inputsize
self.sigma = sigma
self.softmax_exponent = softmax_exponent
self.max_epoch = max_epoch
self.metric = 'Euclidean'
self.normalize_input = False
self.normalize_weights = False
self.softmax_normalization = True
self.neighborhood_decay = 0.9999
self.neighborhood_min = 1
self.learningRate = 0.1
self.learningRate_decay = 0.9999
self.learningRate_min = 0.01
self.neighborhood_radius = outputsize_x
self.node_map = np.zeros((outputsize_x, outputsize_y, 2))
self.weights = np.random.rand(outputsize_x, outputsize_y, inputsize) # Rows, Columns, Depth
for i in range(outputsize_x):
for j in range(outputsize_y):
self.node_map[i, j, 0] = i
self.node_map[i, j, 1] = j
def normalize(self, state):
if self.normalize_input:
state /= LA.norm(np.expand_dims(state, axis=0))
return state
def soft_max_normalization(self, state):
m = np.max(state)
if m != 0:
state /= m
return state
def set_activity(self, state):
if self.metric == 'Euclidean':
dist = np.sum((state - self.weights) ** 2, axis=2)
activity = np.exp(-dist / self.sigma)
else:
# Scalar Product
mat_mul = state * self.weights
activity = mat_mul.sum(axis=2)
if self.softmax_exponent != 1:
activity = activity ** self.softmax_exponent
if self.softmax_normalization:
activity = self.soft_max_normalization(activity)
return activity
def find_winning_node(self, activity):
winner_x, winner_y = np.unravel_index(np.argmax(activity, axis=None), activity.shape)
winning_node = np.array([winner_x, winner_y])
return winning_node
def learn(self, state, winner):
dis = np.sum((self.node_map - winner) ** 2, axis=2)
gus = np.exp(-dis / (2 * self.neighborhood_radius ** 2))
err = state - self.weights
self.weights += self.learningRate * (err.T * gus.T).T
def learning_decay(self):
self.learningRate *= self.learningRate_decay
if self.learningRate < self.learningRate_min:
self.learningRate = self.learningRate_min
self.neighborhood_radius *= self.neighborhood_decay
if self.neighborhood_radius < self.neighborhood_min:
self.neighborhood_radius = self.neighborhood_min
def run_SOM(self, state):
state = self.normalize(state)
activity = self.set_activity(state)
winner = self.find_winning_node(activity)
if self.learning:
self.learn(state, winner)
self.learning_decay()
return activity, winner
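A single competitive-learning step of the `SOM.learn` update above — a Gaussian neighborhood over grid distance around the winning node, pulling weights toward the input — can be reproduced standalone with numpy. The 4x4 map, 3-D input, and rate/radius values below are illustrative toy choices, not defaults taken from the class:

```python
import numpy as np

rng = np.random.default_rng(0)
out_x, out_y, in_dim = 4, 4, 3
weights = rng.random((out_x, out_y, in_dim))
# node_map[i, j] == [i, j], as built in SOM.__init__
node_map = np.stack(
    np.meshgrid(np.arange(out_x), np.arange(out_y), indexing='ij'), axis=-1
).astype(float)

state = np.array([0.2, 0.8, 0.5])
learning_rate, radius = 0.1, 2.0

# Winner = node whose weight vector is closest to the input (Euclidean metric)
dist = np.sum((state - weights) ** 2, axis=2)
winner = np.array(np.unravel_index(np.argmin(dist), dist.shape))

# Gaussian neighborhood over grid distance, exactly as in SOM.learn
dis = np.sum((node_map - winner) ** 2, axis=2)
gus = np.exp(-dis / (2 * radius ** 2))
err = state - weights

before = np.sum((state - weights[tuple(winner)]) ** 2)
weights += learning_rate * (err.T * gus.T).T
after = np.sum((state - weights[tuple(winner)]) ** 2)
assert after < before  # the winner moved toward the input
```

The transpose trick `(err.T * gus.T).T` broadcasts the 2-D neighborhood factor across the trailing weight dimension, which is why the class can update the whole map in one vectorized expression.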
f73dede5cba7b95f066d85a6de3f25d10bd46121 | 6,700 | py | Python | pims/els_data.py | JPLMLIA/libeos | 3ad25c22159edf79d407454e32b8f07333cb57c2 | ["Apache-2.0"] | null | null | null | pims/els_data.py | JPLMLIA/libeos | 3ad25c22159edf79d407454e32b8f07333cb57c2 | ["Apache-2.0"] | null | null | null | pims/els_data.py | JPLMLIA/libeos | 3ad25c22159edf79d407454e32b8f07333cb57c2 | ["Apache-2.0"] | null | null | null |
# Cassini CAPS ELS data reader
# Modeled after Gary's MDIS reader
# Kiri Wagstaff, 11/28/18
import os
from datetime import datetime
from collections import defaultdict
import numpy as np
from pds.core.parser import Parser
from scipy.interpolate import interp1d
GEOMFILE = os.path.join(
os.path.dirname(os.path.realpath(__file__)),
'ref',
'geometricfactor.npz'
)
_EARRAY = None
_GEOM = None
E_CHARGE_COULOMBS = 1.602176487e-19
E_MASS_KG = 9.10938188e-31
def _load_gfactors():
"""
Using global variables here because we only want to read these values from
file once, then cache them at the module level
"""
global _EARRAY
global _GEOM
if _EARRAY is None:
sav = np.load(GEOMFILE)
_EARRAY = sav['earray']
_GEOM = sav['geom']
def needs_gfactors(f):
"""
Decorator for any function that needs to have the geometric factors loaded
first (calls `_load_gfactors` prior to calling the function).
"""
def fprime(*args, **kwargs):
_load_gfactors()
return f(*args, **kwargs)
return fprime
@needs_gfactors
def compute_def(e, counts):
"""
Computes the Differential Energy Flux (DEF)
Units: m^-2 sr^-1 s^-1
According to Abi's script and the CAPS User Guide, this is done by dividing
the counts by the anode- and energy-specific geometric factors.
"""
# According to section 9.2 of the CAPS PDS User Guide, the proper thing to
# do is interpolate the geometric factors: "If the ELS data record you are
# working with has energy summing ... then you can use the above table to
# interpolate the value you need for G."
geom_interp = interp1d(
_EARRAY, _GEOM, axis=0,
fill_value='extrapolate',
bounds_error=False,
assume_sorted=True,
)
G = geom_interp(e)
# newaxis is for the "phi" dimension of the data
return counts / G[..., np.newaxis]
def compute_dnf(e, def_data):
"""
Computes the Differential Number Flux (DNF)
Units: m^-2 sr^-1 s^-1 J^-1
Following Abi's script and the CAPS User Guide, this is the DEF divided by
the product of the energy and the charge of the particle (electron).
"""
# Add the new axes to broadcast across the theta/phi dimensions
return def_data / (E_CHARGE_COULOMBS*e[..., np.newaxis, np.newaxis])
def compute_psd(e, def_data):
"""
Computes the Phase Space Density (PSD)
Units: m^-6 s^-3
Following Abi's script and the CAPS User Guide, this is the DEF times a
    factor of (mass^2 / (2 q^2 E^2)).
"""
qE_squared = (E_CHARGE_COULOMBS*e)**2
# Add the new axes to broadcast across the theta/phi dimensions
return (
def_data * (E_MASS_KG**2) /
(2 * qE_squared[..., np.newaxis, np.newaxis])
)
def parse_dates(datearray):
return np.array([
        datetime.strptime(row.tobytes().decode('ascii'), '%Y-%jT%H:%M:%S.%f')
for row in datearray
])
def reshape_data(data):
# Dimensions taken from ELS_V01.FMT
# (records, energy, theta, phi)
return data.reshape((-1, 63, 8, 1))
class ELS(object):
COLUMNS = (
# Values obtained from ELS_V01.FMT
# Name, start byte, dtype, items, missing constant
('start_date', 1, np.uint8, 21, None),
('dead_time_method', 22, np.uint8, 1, None),
('record_dur', 25, np.float32, 1, 65535.0),
('acc_time', 29, np.float32, 63, 65535.0),
('data', 281, np.float32, 504, 65535.0),
('dim1_e', 2297, np.float32, 63, 65535.0),
('dim1_e_upper', 2549, np.float32, 63, 65535.0),
('dim1_e_lower', 2801, np.float32, 63, 65535.0),
('dim2_theta', 3053, np.float32, 8, 65535.0),
('dim2_theta_upper', 3085, np.float32, 8, 65535.0),
('dim2_theta_lower', 3117, np.float32, 8, 65535.0),
('dim3_phi', 3149, np.float32, 1, 65535.0),
('dim3_phi_upper', 3153, np.float32, 1, 65535.0),
('dim3_phi_lower', 3157, np.float32, 1, 65535.0),
)
POSTPROCESS = {
'start_date': parse_dates,
'data': reshape_data,
}
def __init__(self, data_path, lbl_path=None, verbose=False):
"""
If the LBL file path is not specified, we'll assume that it is
sitting right next to the DAT file (and raise an Error if not).
"""
self.data_path = data_path
if lbl_path is None:
# Infer the LBL path if not supplied
data_base, data_ext = os.path.splitext(data_path)
if data_ext.lower() == data_ext:
lbl_path = data_base + '.lbl'
else:
lbl_path = data_base + '.LBL'
if not os.path.exists(lbl_path):
raise ValueError('Expected LBL file "%s" does not exist' % lbl_path)
self.lbl_path = lbl_path
self.verbose = verbose
self._load()
def _log(self, msg):
if self.verbose:
print(msg)
def _load(self):
with open(self.lbl_path, 'r') as f:
parser = Parser()
labels = parser.parse(f)
record_bytes = int(labels['RECORD_BYTES'])
nrecords = int(labels['FILE_RECORDS'])
columns = defaultdict(list)
with open(self.data_path, 'rb') as f:
for i in range(nrecords):
for cname, cstart, ctype, citems, _ in ELS.COLUMNS:
# Subtract 1 because they are indexed from 1 in the .FMT
f.seek(i*record_bytes + cstart - 1)
columns[cname].append(f.read(np.dtype(ctype).itemsize*citems))
for cname, _, ctype, citems, missing in ELS.COLUMNS:
            cstr = b''.join(columns[cname])
            # np.frombuffer returns a read-only view; copy so the NaN masking below works
            col = np.frombuffer(cstr, dtype=ctype, count=nrecords*citems).copy()
col = np.squeeze(col.reshape((nrecords, citems)))
# Replace missing value with NaN
if missing is not None:
col[col == missing] = np.nan
# Apply post-processing steps to appropriate columns
if cname in ELS.POSTPROCESS:
col = ELS.POSTPROCESS[cname](col)
# Store column as object attribute
setattr(self, cname, col)
# Add iso_data by summing across theta/phi
self.iso_data = np.sum(self.data, axis=(-2, -1))
# Compute DEF, DNF, and PSD
self.def_data = compute_def(self.dim1_e, self.data)
self.dnf_data = compute_dnf(self.dim1_e, self.def_data)
self.psd_data = compute_psd(self.dim1_e, self.def_data)
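The DEF → DNF/PSD conversion chain implemented by `compute_dnf` and `compute_psd` can be checked on synthetic arrays with the same broadcasting: the energy table is added trailing axes so it divides across the theta/phi dimensions. The energy values and the uniform DEF below are made-up stand-ins for real ELS records:

```python
import numpy as np

E_CHARGE_COULOMBS = 1.602176487e-19
E_MASS_KG = 9.10938188e-31

# Synthetic stand-ins: 2 records, 63 energy bins, 8 thetas, 1 phi
e = np.linspace(1.0, 2.6e4, 63)       # eV-scale energy table (illustrative)
def_data = np.ones((2, 63, 8, 1))     # uniform DEF for illustration

# DNF = DEF / (q * E); newaxis broadcasts E across the theta/phi axes
dnf = def_data / (E_CHARGE_COULOMBS * e[..., np.newaxis, np.newaxis])

# PSD = DEF * m^2 / (2 (q E)^2)
qE_squared = (E_CHARGE_COULOMBS * e) ** 2
psd = def_data * (E_MASS_KG ** 2) / (2 * qE_squared[..., np.newaxis, np.newaxis])

assert dnf.shape == def_data.shape and psd.shape == def_data.shape
# For a fixed DEF, higher-energy bins yield smaller DNF and PSD
assert dnf[0, 0, 0, 0] > dnf[0, -1, 0, 0]
assert psd[0, 0, 0, 0] > psd[0, -1, 0, 0]
```

Because `e` has shape `(63,)` and the data has shape `(records, 63, 8, 1)`, the two `np.newaxis` insertions align the energy axis correctly without any explicit tiling.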
f73def6e2f12a1497b1f9d7e2b31b68b2acc60ee | 10,842 | py | Python | lib/loss/loss_helper.py | Shuai-Xie/openseg.pytorch | 79116a58782ccd2150f9eb9054e70cfd42fc9773 | ["MIT"] | 1 | 2021-07-02T11:54:57.000Z | 2021-07-02T11:54:57.000Z | lib/loss/loss_helper.py | Shuai-Xie/openseg.pytorch | 79116a58782ccd2150f9eb9054e70cfd42fc9773 | ["MIT"] | null | null | null | lib/loss/loss_helper.py | Shuai-Xie/openseg.pytorch | 79116a58782ccd2150f9eb9054e70cfd42fc9773 | ["MIT"] | null | null | null |
##+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
## Created by: Donny You, RainbowSecret
## Microsoft Research
## yuyua@microsoft.com
## Copyright (c) 2019
##
## This source code is licensed under the MIT-style license found in the
## LICENSE file in the root directory of this source tree
##+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import os
import pdb
import torch
import torch.nn as nn
import numpy as np
import torch.nn.functional as F
from torch.autograd import Variable
from lib.utils.tools.logger import Logger as Log
class WeightedFSOhemCELoss(nn.Module):
def __init__(self, configer):
super().__init__()
self.configer = configer
self.thresh = self.configer.get('loss', 'params')['ohem_thresh']
self.reduction = 'elementwise_mean'
if self.configer.exists('loss', 'params') and 'ce_reduction' in self.configer.get('loss', 'params'):
self.reduction = self.configer.get('loss', 'params')['ce_reduction']
def forward(self, predict, target, min_kept=1, weight=None, ignore_index=-1, **kwargs):
"""
Args:
predict:(n, c, h, w)
target:(n, h, w)
"""
prob_out = F.softmax(predict, dim=1)
tmp_target = target.clone()
tmp_target[tmp_target == ignore_index] = 0
prob = prob_out.gather(1, tmp_target.unsqueeze(1))
mask = target.contiguous().view(-1,) != ignore_index
sort_prob, sort_indices = prob.contiguous().view(-1,)[mask].contiguous().sort()
min_threshold = sort_prob[min(min_kept, sort_prob.numel() - 1)]
threshold = max(min_threshold, self.thresh)
loss_matrix = F.cross_entropy(predict, target, weight=weight, ignore_index=ignore_index, reduction='none').contiguous().view(-1,)
sort_loss_matrix = loss_matrix[mask][sort_indices]
select_loss_matrix = sort_loss_matrix[sort_prob < threshold]
if self.reduction == 'sum':
return select_loss_matrix.sum()
elif self.reduction == 'elementwise_mean':
return select_loss_matrix.mean()
else:
raise NotImplementedError('Reduction Error!')
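The selection rule in `forward` above is easy to lose in the tensor plumbing; here is a minimal plain-Python sketch of the same online hard example mining (OHEM) step, with made-up probabilities and losses that are not taken from the repository:

```python
def ohem_select(probs, losses, thresh, min_kept=1):
    # Sort pixels by the predicted probability of their ground-truth class
    # (ascending, so the hardest pixels come first).
    order = sorted(range(len(probs)), key=lambda i: probs[i])
    sort_prob = [probs[i] for i in order]
    # Guarantee at least min_kept pixels can survive, mirroring the class above.
    min_threshold = sort_prob[min(min_kept, len(sort_prob) - 1)]
    threshold = max(min_threshold, thresh)
    # Keep only the losses of "hard" pixels whose probability is below threshold.
    return [losses[i] for i in order if probs[i] < threshold]

hard_losses = ohem_select([0.05, 0.9, 0.3, 0.99], [3.0, 0.1, 1.2, 0.01], thresh=0.7)
# keeps the two hard pixels (probs 0.05 and 0.3) -> [3.0, 1.2]
```

The mean (or sum) of the surviving losses corresponds to the `elementwise_mean`/`sum` reduction branch at the end of `forward`.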
# Cross-entropy Loss
class FSCELoss(nn.Module):
def __init__(self, configer=None):
super(FSCELoss, self).__init__()
self.configer = configer
weight = None
if self.configer.exists('loss', 'params') and 'ce_weight' in self.configer.get('loss', 'params'):
weight = self.configer.get('loss', 'params')['ce_weight']
weight = torch.FloatTensor(weight).cuda()
reduction = 'elementwise_mean'
if self.configer.exists('loss', 'params') and 'ce_reduction' in self.configer.get('loss', 'params'):
reduction = self.configer.get('loss', 'params')['ce_reduction']
ignore_index = -1
if self.configer.exists('loss', 'params') and 'ce_ignore_index' in self.configer.get('loss', 'params'):
ignore_index = self.configer.get('loss', 'params')['ce_ignore_index']
self.ce_loss = nn.CrossEntropyLoss(weight=weight, ignore_index=ignore_index, reduction=reduction)
def forward(self, inputs, *targets, weights=None, **kwargs):
loss = 0.0
if isinstance(inputs, tuple) or isinstance(inputs, list):
if weights is None:
weights = [1.0] * len(inputs)
for i in range(len(inputs)):
if len(targets) > 1:
target = self._scale_target(targets[i], (inputs[i].size(2), inputs[i].size(3)))
loss += weights[i] * self.ce_loss(inputs[i], target)
else:
target = self._scale_target(targets[0], (inputs[i].size(2), inputs[i].size(3)))
loss += weights[i] * self.ce_loss(inputs[i], target)
else:
target = self._scale_target(targets[0], (inputs.size(2), inputs.size(3)))
loss = self.ce_loss(inputs, target)
return loss
@staticmethod
def _scale_target(targets_, scaled_size):
targets = targets_.clone().unsqueeze(1).float()
targets = F.interpolate(targets, size=scaled_size, mode='nearest')
return targets.squeeze(1).long()
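`_scale_target` resizes label maps with `mode='nearest'` so that class indices are copied rather than blended. A 1-D analogue of that design choice (illustrative only, not the torch implementation):

```python
def scale_labels_nearest(row, new_w):
    # Nearest-neighbour resize: each output cell copies exactly one input label,
    # so the result stays a valid integer class map (no interpolated values).
    old_w = len(row)
    return [row[int(i * old_w / new_w)] for i in range(new_w)]

scale_labels_nearest([0, 0, 1, 1], 2)   # downsampling -> [0, 1]
scale_labels_nearest([0, 2, 2], 6)      # upsampling repeats labels -> [0, 0, 2, 2, 2, 2]
```

Bilinear interpolation here would manufacture nonexistent classes (e.g. 0.5 between class 0 and class 1), which is why `nearest` is the right mode for targets.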
class FSOhemCELoss(nn.Module):
def __init__(self, configer):
super(FSOhemCELoss, self).__init__()
self.configer = configer
self.thresh = self.configer.get('loss', 'params')['ohem_thresh']
self.min_kept = max(1, self.configer.get('loss', 'params')['ohem_minkeep'])
weight = None
if self.configer.exists('loss', 'params') and 'ce_weight' in self.configer.get('loss', 'params'):
weight = self.configer.get('loss', 'params')['ce_weight']
weight = torch.FloatTensor(weight).cuda()
self.reduction = 'elementwise_mean'
if self.configer.exists('loss', 'params') and 'ce_reduction' in self.configer.get('loss', 'params'):
self.reduction = self.configer.get('loss', 'params')['ce_reduction']
ignore_index = -1
if self.configer.exists('loss', 'params') and 'ce_ignore_index' in self.configer.get('loss', 'params'):
ignore_index = self.configer.get('loss', 'params')['ce_ignore_index']
self.ignore_label = ignore_index
self.ce_loss = nn.CrossEntropyLoss(weight=weight, ignore_index=ignore_index, reduction='none')
def forward(self, predict, target, **kwargs):
"""
Args:
predict:(n, c, h, w)
target:(n, h, w)
weight (Tensor, optional): a manual rescaling weight given to each class.
If given, has to be a Tensor of size "nclasses"
"""
prob_out = F.softmax(predict, dim=1)
tmp_target = target.clone()
tmp_target[tmp_target == self.ignore_label] = 0
prob = prob_out.gather(1, tmp_target.unsqueeze(1))
mask = target.contiguous().view(-1,) != self.ignore_label
sort_prob, sort_indices = prob.contiguous().view(-1,)[mask].contiguous().sort()
min_threshold = sort_prob[min(self.min_kept, sort_prob.numel() - 1)]
threshold = max(min_threshold, self.thresh)
loss_matrix = self.ce_loss(predict, target).contiguous().view(-1,)
sort_loss_matrix = loss_matrix[mask][sort_indices]
select_loss_matrix = sort_loss_matrix[sort_prob < threshold]
if self.reduction == 'sum':
return select_loss_matrix.sum()
elif self.reduction == 'elementwise_mean':
return select_loss_matrix.mean()
else:
raise NotImplementedError('Reduction Error!')
class FSAuxOhemCELoss(nn.Module):
def __init__(self, configer=None):
super(FSAuxOhemCELoss, self).__init__()
self.configer = configer
self.ce_loss = FSCELoss(self.configer)
if self.configer.get('loss', 'loss_type') == 'fs_auxohemce_loss':
self.ohem_ce_loss = FSOhemCELoss(self.configer)
else:
assert self.configer.get('loss', 'loss_type') == 'fs_auxslowohemce_loss'
self.ohem_ce_loss = FSSlowOhemCELoss(self.configer)
def forward(self, inputs, targets, **kwargs):
aux_out, seg_out = inputs
seg_loss = self.ohem_ce_loss(seg_out, targets)
aux_loss = self.ce_loss(aux_out, targets)
loss = self.configer.get('network', 'loss_weights')['seg_loss'] * seg_loss
loss = loss + self.configer.get('network', 'loss_weights')['aux_loss'] * aux_loss
return loss
class FSAuxCELoss(nn.Module):
def __init__(self, configer=None):
super(FSAuxCELoss, self).__init__()
self.configer = configer
self.ce_loss = FSCELoss(self.configer)
def forward(self, inputs, targets, **kwargs):
aux_out, seg_out = inputs
seg_loss = self.ce_loss(seg_out, targets)
aux_loss = self.ce_loss(aux_out, targets)
loss = self.configer.get('network', 'loss_weights')['seg_loss'] * seg_loss
loss = loss + self.configer.get('network', 'loss_weights')['aux_loss'] * aux_loss
return loss
class SegFixLoss(nn.Module):
"""
We predict a binary mask to categorize the boundary pixels as class 1 and otherwise as class 0
Based on the pixels predicted as 1 within the binary mask, we further predict the direction for these
pixels.
"""
def __init__(self, configer=None):
super().__init__()
self.configer = configer
self.ce_loss = FSCELoss(self.configer)
def calc_weights(self, label_map, num_classes):
weights = []
for i in range(num_classes):
weights.append((label_map == i).sum().data)
weights = torch.FloatTensor(weights)
weights_sum = weights.sum()
return (1 - weights / weights_sum).cuda()
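`calc_weights` above implements inverse-frequency class weighting: each class weight is one minus its share of the pixels. The same arithmetic on plain lists (hypothetical label map, no CUDA), as a sanity check of the rule:

```python
def inverse_frequency_weights(labels, num_classes):
    # Count occurrences per class, then weight each class by 1 - count/total,
    # so rarer classes receive larger weights in the cross-entropy.
    counts = [labels.count(c) for c in range(num_classes)]
    total = sum(counts)
    return [1.0 - c / total for c in counts]

inverse_frequency_weights([0, 0, 0, 1], 2)  # -> [0.25, 0.75]: the rare class is up-weighted
```

A useful property to notice: the weights always sum to `num_classes - 1`.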
def forward(self, inputs, targets, **kwargs):
from lib.utils.helpers.offset_helper import DTOffsetHelper
pred_mask, pred_direction = inputs
seg_label_map, distance_map, angle_map = targets[0], targets[1], targets[2]
gt_mask = DTOffsetHelper.distance_to_mask_label(distance_map, seg_label_map, return_tensor=True)
gt_size = gt_mask.shape[1:]
mask_weights = self.calc_weights(gt_mask, 2)
pred_direction = F.interpolate(pred_direction, size=gt_size, mode="bilinear", align_corners=True)
pred_mask = F.interpolate(pred_mask, size=gt_size, mode="bilinear", align_corners=True)
mask_loss = F.cross_entropy(pred_mask, gt_mask, weight=mask_weights, ignore_index=-1)
mask_threshold = float(os.environ.get('mask_threshold', 0.5))
binary_pred_mask = torch.softmax(pred_mask, dim=1)[:, 1, :, :] > mask_threshold
gt_direction = DTOffsetHelper.angle_to_direction_label(
angle_map,
seg_label_map=seg_label_map,
extra_ignore_mask=(binary_pred_mask == 0),
return_tensor=True
)
direction_loss_mask = gt_direction != -1
direction_weights = self.calc_weights(gt_direction[direction_loss_mask], pred_direction.size(1))
direction_loss = F.cross_entropy(pred_direction, gt_direction, weight=direction_weights, ignore_index=-1)
if self.training \
and self.configer.get('iters') % self.configer.get('solver', 'display_iter') == 0 \
and torch.cuda.current_device() == 0:
Log.info('mask loss: {} direction loss: {}.'.format(mask_loss, direction_loss))
mask_weight = float(os.environ.get('mask_weight', 1))
direction_weight = float(os.environ.get('direction_weight', 1))
return mask_weight * mask_loss + direction_weight * direction_loss
| 43.542169 | 137 | 0.632356 |
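`SegFixLoss.forward` blends its two terms with weights read from environment variables, defaulting to 1.0 each. A small sketch of just that final blend (the `env` parameter is added here for testability and is not in the original):

```python
import os

def blended_segfix_loss(mask_loss, direction_loss, env=None):
    # Same lookup pattern as forward(): env vars override, default weight is 1.
    env = os.environ if env is None else env
    mask_weight = float(env.get('mask_weight', 1))
    direction_weight = float(env.get('direction_weight', 1))
    return mask_weight * mask_loss + direction_weight * direction_loss

blended_segfix_loss(2.0, 3.0, env={})                       # defaults -> 5.0
blended_segfix_loss(2.0, 3.0, env={'mask_weight': '0.5'})   # 0.5*2 + 3 -> 4.0
```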
f73df083c3b21548aa20ef6947039d25b38977dd | 1,918 | py | Python | dokang/utils.py | Polyconseil/dokang | b0ab3e4aabfb97adb2a2e877a42fc1896e5fcf08 | ["BSD-3-Clause"] | 6 | 2016-07-04T17:16:42.000Z | 2018-11-13T08:10:21.000Z | dokang/utils.py | Polyconseil/dokang | b0ab3e4aabfb97adb2a2e877a42fc1896e5fcf08 | ["BSD-3-Clause"] | 6 | 2016-02-23T15:08:51.000Z | 2017-01-02T11:57:45.000Z | dokang/utils.py | Polyconseil/dokang | b0ab3e4aabfb97adb2a2e877a42fc1896e5fcf08 | ["BSD-3-Clause"] | 5 | 2015-04-05T14:07:11.000Z | 2017-04-13T14:08:02.000Z |
# -*- coding: utf-8 -*-
# Copyright (c) Polyconseil SAS. All rights reserved.
from __future__ import unicode_literals
import json
import os
import os.path
from dokang import api
from . import compat
def get_harvester(fqn):
module_fqn, function_fqn = fqn.rsplit('.', 1)
# Hack around https://bugs.python.org/issue21720
if compat.PY2 and not isinstance(module_fqn, bytes):
module_fqn = module_fqn.encode()
function_fqn = function_fqn.encode()
module = __import__(module_fqn, fromlist=[function_fqn])
return getattr(module, function_fqn)
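`get_harvester` resolves a dotted `module.attribute` path via `__import__`. On Python 3 alone, `importlib` gives the same behaviour without the `fromlist` quirk; this is a sketch, not a drop-in replacement for the Python 2 branch above:

```python
import importlib

def resolve_dotted(fqn):
    # Split "package.module.attr" into module path and attribute name,
    # import the module, then fetch the attribute from it.
    module_fqn, attr = fqn.rsplit('.', 1)
    return getattr(importlib.import_module(module_fqn), attr)

resolve_dotted('json.dumps')({'a': 1})  # -> '{"a": 1}'
```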
def doc_set(settings, uploaded):
harvester = get_harvester(settings['dokang.uploaded_docs.harvester'])
upload_dir = settings.get('dokang.uploaded_docs.dir')
uploaded_path = os.path.join(upload_dir, uploaded)
title = None
info_file = os.path.join(uploaded_path, '.dokang')
if os.path.exists(info_file):
with open(info_file) as fp:
info = json.load(fp)
title = info.get('title') if isinstance(info, dict) else None
return {
'id': uploaded,
'title': title or uploaded,
'path': uploaded_path,
'harvester': harvester(),
}
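The `.dokang` info-file convention used by `doc_set` (a JSON file whose `title` key overrides the directory name) can be exercised end to end. This sketch re-implements just that lookup against a temporary directory:

```python
import json
import os
import tempfile

def read_title(doc_dir):
    # Mirrors doc_set(): prefer the "title" key of <dir>/.dokang, else the dir name.
    info_file = os.path.join(doc_dir, '.dokang')
    if os.path.exists(info_file):
        with open(info_file) as fp:
            info = json.load(fp)
        if isinstance(info, dict) and info.get('title'):
            return info['title']
    return os.path.basename(doc_dir)

with tempfile.TemporaryDirectory() as tmp:
    doc = os.path.join(tmp, 'manual')
    os.makedirs(doc)
    default_title = read_title(doc)            # no .dokang yet -> 'manual'
    with open(os.path.join(doc, '.dokang'), 'w') as fp:
        json.dump({'title': 'User Manual'}, fp)
    json_title = read_title(doc)               # -> 'User Manual'
```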
def get_doc_sets(settings):
"""
Get doc sets using path of doc sets file defined in settings.
"""
index_path = settings['dokang.index_path']
if not os.path.exists(index_path):
try:
os.makedirs(os.path.dirname(index_path))
except OSError: # It's ok if the parent dir exists already
pass
api.initialize_index(index_path)
upload_dir = settings['dokang.uploaded_docs.dir']
if not os.path.exists(upload_dir):
os.makedirs(upload_dir)
return {
uploaded: doc_set(settings, uploaded)
for uploaded in (
x.decode('utf-8') if isinstance(x, bytes) else x
for x in os.listdir(upload_dir)
)
}
| 27.797101 | 73 | 0.650678 |
f73df14d27d4c328ef0f6c514247fac68d5abf5c | 354 | py | Python | pywolf/migrations/0045_remove_villageparticipant_system_user_flg.py | tevawolf/pywolf | 94e3c26d8c3b279990624f23658e22ab00eead46 | ["BSD-3-Clause"] | null | null | null | pywolf/migrations/0045_remove_villageparticipant_system_user_flg.py | tevawolf/pywolf | 94e3c26d8c3b279990624f23658e22ab00eead46 | ["BSD-3-Clause"] | null | null | null | pywolf/migrations/0045_remove_villageparticipant_system_user_flg.py | tevawolf/pywolf | 94e3c26d8c3b279990624f23658e22ab00eead46 | ["BSD-3-Clause"] | null | null | null |
# Generated by Django 2.1.2 on 2018-11-13 08:36
from django.db import migrations
class Migration(migrations.Migration):
dependencies = [
('pywolf', '0044_placcount_dummy_user_flg'),
]
operations = [
migrations.RemoveField(
model_name='villageparticipant',
name='system_user_flg',
),
]
| 19.666667 | 52 | 0.621469 |
f73df18cbf678db43a3d0fa1e22a44a487d2d050 | 1,023 | py | Python | droidlet/interpreter/craftassist/dummy_interpreter.py | adamlerer/droidlet | ada38d191dadcea9aba12330e35e8e7d6d1663d9 | ["MIT"] | null | null | null | droidlet/interpreter/craftassist/dummy_interpreter.py | adamlerer/droidlet | ada38d191dadcea9aba12330e35e8e7d6d1663d9 | ["MIT"] | null | null | null | droidlet/interpreter/craftassist/dummy_interpreter.py | adamlerer/droidlet | ada38d191dadcea9aba12330e35e8e7d6d1663d9 | ["MIT"] | null | null | null |
"""
Copyright (c) Facebook, Inc. and its affiliates.
"""
from typing import Tuple, Dict, Any, Optional
from droidlet.dialog.dialogue_objects import DialogueObject
from ..interpreter import ReferenceObjectInterpreter, FilterInterpreter, interpret_reference_object
from ..condition_helper import ConditionInterpreter
from .attribute_helper import MCAttributeInterpreter
class DummyInterpreter(DialogueObject):
def __init__(self, speaker: str, **kwargs):
super().__init__(**kwargs)
self.speaker = speaker
self.provisional: Dict = {}
self.action_dict_frozen = False
self.loop_data = None
self.subinterpret = {
"attribute": MCAttributeInterpreter(),
"filters": FilterInterpreter(),
"reference_objects": ReferenceObjectInterpreter(interpret_reference_object),
"condition": ConditionInterpreter(),
}
self.action_handlers = {} # noqa
def step(self) -> Tuple[Optional[str], Any]:
return None, None
| 34.1 | 99 | 0.69697 |
f73df4096ea90bc7768f2653f74a9e39c91640a7 | 35,820 | py | Python | test_shaders.py | adrianrodriguesm/SPIRV-Cross | 1e8caa97d81a16ba6d6bd048cd5891cdef463b84 | ["Apache-2.0"] | 3 | 2022-03-02T08:55:54.000Z | 2022-03-23T05:56:43.000Z | test_shaders.py | adrianrodriguesm/SPIRV-Cross | 1e8caa97d81a16ba6d6bd048cd5891cdef463b84 | ["Apache-2.0"] | 1 | 2021-09-07T07:36:59.000Z | 2021-09-07T07:36:59.000Z | test_shaders.py | adrianrodriguesm/SPIRV-Cross | 1e8caa97d81a16ba6d6bd048cd5891cdef463b84 | ["Apache-2.0"] | null | null | null |
#!/usr/bin/env python3
# Copyright 2015-2021 Arm Limited
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import sys
import os
import os.path
import subprocess
import tempfile
import re
import itertools
import hashlib
import shutil
import argparse
import codecs
import json
import multiprocessing
import errno
from functools import partial
class Paths():
def __init__(self, spirv_cross, glslang, spirv_as, spirv_val, spirv_opt):
self.spirv_cross = spirv_cross
self.glslang = glslang
self.spirv_as = spirv_as
self.spirv_val = spirv_val
self.spirv_opt = spirv_opt
def remove_file(path):
#print('Removing file:', path)
os.remove(path)
def create_temporary(suff = ''):
f, path = tempfile.mkstemp(suffix = suff)
os.close(f)
#print('Creating temporary:', path)
return path
def parse_stats(stats):
m = re.search('([0-9]+) work registers', stats)
registers = int(m.group(1)) if m else 0
m = re.search('([0-9]+) uniform registers', stats)
uniform_regs = int(m.group(1)) if m else 0
m_list = re.findall(r'(-?[0-9]+)\s+(-?[0-9]+)\s+(-?[0-9]+)', stats)
alu_short = float(m_list[1][0]) if m_list else 0
ls_short = float(m_list[1][1]) if m_list else 0
tex_short = float(m_list[1][2]) if m_list else 0
alu_long = float(m_list[2][0]) if m_list else 0
ls_long = float(m_list[2][1]) if m_list else 0
tex_long = float(m_list[2][2]) if m_list else 0
return (registers, uniform_regs, alu_short, ls_short, tex_short, alu_long, ls_long, tex_long)
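`parse_stats` depends on two regex shapes: a `N work registers` phrase and rows of three whitespace-separated integers. The malisc output below is invented for illustration; only the regex patterns match the real function:

```python
import re

# Hypothetical malisc-style report, invented for this sketch.
sample = (
    "2 work registers used, 1 uniform registers used\n"
    "Shortest path cycles: 4 2 1\n"
    "Longest path cycles:  8 3 2\n"
)
# Same register-count extraction as parse_stats().
registers = int(re.search('([0-9]+) work registers', sample).group(1))
# Same triple-of-integers extraction; each match is a tuple of three strings.
triples = re.findall(r'(-?[0-9]+)\s+(-?[0-9]+)\s+(-?[0-9]+)', sample)
# registers == 2; triples == [('4', '2', '1'), ('8', '3', '2')]
```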
def get_shader_type(shader):
_, ext = os.path.splitext(shader)
if ext == '.vert':
return '--vertex'
elif ext == '.frag':
return '--fragment'
elif ext == '.comp':
return '--compute'
elif ext == '.tesc':
return '--tessellation_control'
elif ext == '.tese':
return '--tessellation_evaluation'
elif ext == '.geom':
return '--geometry'
else:
return ''
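`get_shader_type` maps a file extension to a malisc stage flag via an `if`/`elif` chain; the same mapping can be written table-driven, which is behaviour-equivalent for the extensions listed above:

```python
import os.path

STAGE_FLAGS = {
    '.vert': '--vertex',
    '.frag': '--fragment',
    '.comp': '--compute',
    '.tesc': '--tessellation_control',
    '.tese': '--tessellation_evaluation',
    '.geom': '--geometry',
}

def stage_flag(shader):
    # Unknown extensions fall back to '' exactly like the else branch above.
    return STAGE_FLAGS.get(os.path.splitext(shader)[1], '')

stage_flag('shaders/foo.frag')  # '--fragment'
stage_flag('foo.glsl')          # '' (unknown stage)
```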
def get_shader_stats(shader):
path = create_temporary()
p = subprocess.Popen(['malisc', get_shader_type(shader), '--core', 'Mali-T760', '-V', shader], stdout = subprocess.PIPE, stderr = subprocess.PIPE)
stdout, stderr = p.communicate()
remove_file(path)
if p.returncode != 0:
print(stderr.decode('utf-8'))
raise OSError('malisc failed')
p.wait()
returned = stdout.decode('utf-8')
return parse_stats(returned)
def print_msl_compiler_version():
try:
subprocess.check_call(['xcrun', '--sdk', 'iphoneos', 'metal', '--version'])
print('...are the Metal compiler characteristics.\n') # display after so xcrun FNF is silent
except OSError as e:
if (e.errno != errno.ENOENT): # Ignore xcrun not found error
raise
except subprocess.CalledProcessError:
pass
def msl_compiler_supports_version(version):
try:
subprocess.check_call(['xcrun', '--sdk', 'macosx', 'metal', '-x', 'metal', '-std=macos-metal' + version, '-'],
stdin = subprocess.DEVNULL, stdout = subprocess.DEVNULL, stderr = subprocess.DEVNULL)
print('Current SDK supports MSL {0}. Enabling validation for MSL {0} shaders.'.format(version))
return True
except OSError as e:
print('Failed to check if MSL {} is not supported. It probably is not.'.format(version))
return False
except subprocess.CalledProcessError:
print('Current SDK does NOT support MSL {0}. Disabling validation for MSL {0} shaders.'.format(version))
return False
def path_to_msl_standard(shader):
if '.ios.' in shader:
if '.msl2.' in shader:
return '-std=ios-metal2.0'
elif '.msl21.' in shader:
return '-std=ios-metal2.1'
elif '.msl22.' in shader:
return '-std=ios-metal2.2'
elif '.msl23.' in shader:
return '-std=ios-metal2.3'
elif '.msl11.' in shader:
return '-std=ios-metal1.1'
elif '.msl10.' in shader:
return '-std=ios-metal1.0'
else:
return '-std=ios-metal1.2'
else:
if '.msl2.' in shader:
return '-std=macos-metal2.0'
elif '.msl21.' in shader:
return '-std=macos-metal2.1'
elif '.msl22.' in shader:
return '-std=macos-metal2.2'
elif '.msl23.' in shader:
return '-std=macos-metal2.3'
elif '.msl11.' in shader:
return '-std=macos-metal1.1'
else:
return '-std=macos-metal1.2'
def path_to_msl_standard_cli(shader):
if '.msl2.' in shader:
return '20000'
elif '.msl21.' in shader:
return '20100'
elif '.msl22.' in shader:
return '20200'
elif '.msl23.' in shader:
return '20300'
elif '.msl11.' in shader:
return '10100'
else:
return '10200'
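`path_to_msl_standard_cli` dispatches on marker substrings embedded in the shader filename; note that `'.msl2.'` cannot accidentally match `'.msl21.'` because of the trailing dot. A compact, behaviour-equivalent sketch of the same dispatch:

```python
_MSL_CLI_VERSIONS = (
    ('.msl2.', '20000'),
    ('.msl21.', '20100'),
    ('.msl22.', '20200'),
    ('.msl23.', '20300'),
    ('.msl11.', '10100'),
)

def msl_cli_version(shader):
    # Check markers in the same order as the elif chain above.
    for tag, version in _MSL_CLI_VERSIONS:
        if tag in shader:
            return version
    return '10200'  # default, as in the final else branch

msl_cli_version('shader.msl21.frag')  # '20100'
msl_cli_version('shader.frag')        # '10200'
```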
def validate_shader_msl(shader, opt):
msl_path = reference_path(shader[0], shader[1], opt)
try:
if '.ios.' in msl_path:
msl_os = 'iphoneos'
else:
msl_os = 'macosx'
subprocess.check_call(['xcrun', '--sdk', msl_os, 'metal', '-x', 'metal', path_to_msl_standard(msl_path), '-Werror', '-Wno-unused-variable', msl_path])
print('Compiled Metal shader: ' + msl_path) # display after so xcrun FNF is silent
except OSError as oe:
if (oe.errno != errno.ENOENT): # Ignore xcrun not found error
raise
except subprocess.CalledProcessError:
print('Error compiling Metal shader: ' + msl_path)
raise RuntimeError('Failed to compile Metal shader')
def cross_compile_msl(shader, spirv, opt, iterations, paths):
spirv_path = create_temporary()
msl_path = create_temporary(os.path.basename(shader))
spirv_env = 'vulkan1.1spv1.4' if ('.spv14.' in shader) else 'vulkan1.1'
spirv_cmd = [paths.spirv_as, '--target-env', spirv_env, '-o', spirv_path, shader]
if '.preserve.' in shader:
spirv_cmd.append('--preserve-numeric-ids')
if spirv:
subprocess.check_call(spirv_cmd)
else:
subprocess.check_call([paths.glslang, '--amb' ,'--target-env', 'vulkan1.1', '-V', '-o', spirv_path, shader])
if opt and (not shader_is_invalid_spirv(shader)):
if '.graphics-robust-access.' in shader:
subprocess.check_call([paths.spirv_opt, '--skip-validation', '-O', '--graphics-robust-access', '-o', spirv_path, spirv_path])
else:
subprocess.check_call([paths.spirv_opt, '--skip-validation', '-O', '-o', spirv_path, spirv_path])
spirv_cross_path = paths.spirv_cross
msl_args = [spirv_cross_path, '--output', msl_path, spirv_path, '--msl', '--iterations', str(iterations)]
msl_args.append('--msl-version')
msl_args.append(path_to_msl_standard_cli(shader))
if not '.nomain.' in shader:
msl_args.append('--entry')
msl_args.append('main')
if '.swizzle.' in shader:
msl_args.append('--msl-swizzle-texture-samples')
if '.ios.' in shader:
msl_args.append('--msl-ios')
if '.pad-fragment.' in shader:
msl_args.append('--msl-pad-fragment-output')
if '.capture.' in shader:
msl_args.append('--msl-capture-output')
if '.domain.' in shader:
msl_args.append('--msl-domain-lower-left')
if '.argument.' in shader:
msl_args.append('--msl-argument-buffers')
if '.texture-buffer-native.' in shader:
msl_args.append('--msl-texture-buffer-native')
if '.framebuffer-fetch.' in shader:
msl_args.append('--msl-framebuffer-fetch')
if '.invariant-float-math.' in shader:
msl_args.append('--msl-invariant-float-math')
if '.emulate-cube-array.' in shader:
msl_args.append('--msl-emulate-cube-array')
if '.discrete.' in shader:
# Arbitrary for testing purposes.
msl_args.append('--msl-discrete-descriptor-set')
msl_args.append('2')
msl_args.append('--msl-discrete-descriptor-set')
msl_args.append('3')
if '.force-active.' in shader:
msl_args.append('--msl-force-active-argument-buffer-resources')
if '.line.' in shader:
msl_args.append('--emit-line-directives')
if '.multiview.' in shader:
msl_args.append('--msl-multiview')
if '.no-layered.' in shader:
msl_args.append('--msl-multiview-no-layered-rendering')
if '.viewfromdev.' in shader:
msl_args.append('--msl-view-index-from-device-index')
if '.dispatchbase.' in shader:
msl_args.append('--msl-dispatch-base')
if '.dynamic-buffer.' in shader:
# Arbitrary for testing purposes.
msl_args.append('--msl-dynamic-buffer')
msl_args.append('0')
msl_args.append('0')
msl_args.append('--msl-dynamic-buffer')
msl_args.append('1')
msl_args.append('2')
if '.inline-block.' in shader:
# Arbitrary for testing purposes.
msl_args.append('--msl-inline-uniform-block')
msl_args.append('0')
msl_args.append('0')
if '.device-argument-buffer.' in shader:
msl_args.append('--msl-device-argument-buffer')
msl_args.append('0')
msl_args.append('--msl-device-argument-buffer')
msl_args.append('1')
if '.force-native-array.' in shader:
msl_args.append('--msl-force-native-arrays')
if '.zero-initialize.' in shader:
msl_args.append('--force-zero-initialized-variables')
if '.frag-output.' in shader:
# Arbitrary for testing purposes.
msl_args.append('--msl-disable-frag-depth-builtin')
msl_args.append('--msl-disable-frag-stencil-ref-builtin')
msl_args.append('--msl-enable-frag-output-mask')
msl_args.append('0x000000ca')
if '.no-user-varying.' in shader:
msl_args.append('--msl-no-clip-distance-user-varying')
if '.shader-inputs.' in shader:
# Arbitrary for testing purposes.
msl_args.append('--msl-shader-input')
msl_args.append('0')
msl_args.append('u8')
msl_args.append('2')
msl_args.append('--msl-shader-input')
msl_args.append('1')
msl_args.append('u16')
msl_args.append('3')
msl_args.append('--msl-shader-input')
msl_args.append('6')
msl_args.append('other')
msl_args.append('4')
if '.multi-patch.' in shader:
msl_args.append('--msl-multi-patch-workgroup')
# Arbitrary for testing purposes.
msl_args.append('--msl-shader-input')
msl_args.append('0')
msl_args.append('any32')
msl_args.append('3')
msl_args.append('--msl-shader-input')
msl_args.append('1')
msl_args.append('any16')
msl_args.append('2')
if '.for-tess.' in shader:
msl_args.append('--msl-vertex-for-tessellation')
if '.fixed-sample-mask.' in shader:
msl_args.append('--msl-additional-fixed-sample-mask')
msl_args.append('0x00000022')
if '.arrayed-subpass.' in shader:
msl_args.append('--msl-arrayed-subpass-input')
if '.1d-as-2d.' in shader:
msl_args.append('--msl-texture-1d-as-2d')
if '.simd.' in shader:
msl_args.append('--msl-ios-use-simdgroup-functions')
if '.emulate-subgroup.' in shader:
msl_args.append('--msl-emulate-subgroups')
if '.fixed-subgroup.' in shader:
# Arbitrary for testing purposes.
msl_args.append('--msl-fixed-subgroup-size')
msl_args.append('32')
if '.force-sample.' in shader:
msl_args.append('--msl-force-sample-rate-shading')
if '.decoration-binding.' in shader:
msl_args.append('--msl-decoration-binding')
subprocess.check_call(msl_args)
if not shader_is_invalid_spirv(msl_path):
subprocess.check_call([paths.spirv_val, '--scalar-block-layout', '--target-env', spirv_env, spirv_path])
return (spirv_path, msl_path)
def shader_model_hlsl(shader):
if '.vert' in shader:
if '.sm30.' in shader:
return '-Tvs_3_0'
else:
return '-Tvs_5_1'
elif '.frag' in shader:
if '.sm30.' in shader:
return '-Tps_3_0'
else:
return '-Tps_5_1'
elif '.comp' in shader:
return '-Tcs_5_1'
else:
return None
def shader_to_win_path(shader):
# It's (very) convenient to be able to run HLSL testing in wine on Unix-likes, so support that.
try:
with subprocess.Popen(['winepath', '-w', shader], stdout = subprocess.PIPE, stderr = subprocess.PIPE) as f:
stdout_data, stderr_data = f.communicate()
return stdout_data.decode('utf-8')
except OSError as oe:
if (oe.errno != errno.ENOENT): # Ignore not found errors
return shader
except subprocess.CalledProcessError:
raise
return shader
ignore_fxc = False
def validate_shader_hlsl(shader, force_no_external_validation, paths):
test_glslang = True
if '.nonuniformresource.' in shader:
test_glslang = False
if '.fxconly.' in shader:
test_glslang = False
hlsl_args = [paths.glslang, '--amb', '-e', 'main', '-D', '--target-env', 'vulkan1.1', '-V', shader]
if '.sm30.' in shader:
hlsl_args.append('--hlsl-dx9-compatible')
if test_glslang:
subprocess.check_call(hlsl_args)
is_no_fxc = '.nofxc.' in shader
global ignore_fxc
if (not ignore_fxc) and (not force_no_external_validation) and (not is_no_fxc):
try:
win_path = shader_to_win_path(shader)
args = ['fxc', '-nologo', shader_model_hlsl(shader), win_path]
if '.nonuniformresource.' in shader:
args.append('/enable_unbounded_descriptor_tables')
subprocess.check_call(args)
except OSError as oe:
if (oe.errno != errno.ENOENT): # Ignore not found errors
print('Failed to run FXC.')
ignore_fxc = True
raise
else:
print('Could not find FXC.')
ignore_fxc = True
except subprocess.CalledProcessError:
print('Failed compiling HLSL shader:', shader, 'with FXC.')
raise RuntimeError('Failed compiling HLSL shader')
def shader_to_sm(shader):
if '.sm62.' in shader:
return '62'
elif '.sm60.' in shader:
return '60'
elif '.sm51.' in shader:
return '51'
elif '.sm30.' in shader:
return '30'
else:
return '50'
def cross_compile_hlsl(shader, spirv, opt, force_no_external_validation, iterations, paths):
spirv_path = create_temporary()
hlsl_path = create_temporary(os.path.basename(shader))
spirv_env = 'vulkan1.1spv1.4' if '.spv14.' in shader else 'vulkan1.1'
spirv_cmd = [paths.spirv_as, '--target-env', spirv_env, '-o', spirv_path, shader]
if '.preserve.' in shader:
spirv_cmd.append('--preserve-numeric-ids')
if spirv:
subprocess.check_call(spirv_cmd)
else:
subprocess.check_call([paths.glslang, '--amb', '--target-env', 'vulkan1.1', '-V', '-o', spirv_path, shader])
if opt and (not shader_is_invalid_spirv(hlsl_path)):
subprocess.check_call([paths.spirv_opt, '--skip-validation', '-O', '-o', spirv_path, spirv_path])
spirv_cross_path = paths.spirv_cross
sm = shader_to_sm(shader)
hlsl_args = [spirv_cross_path, '--entry', 'main', '--output', hlsl_path, spirv_path, '--hlsl-enable-compat', '--hlsl', '--shader-model', sm, '--iterations', str(iterations)]
if '.line.' in shader:
hlsl_args.append('--emit-line-directives')
if '.force-uav.' in shader:
hlsl_args.append('--hlsl-force-storage-buffer-as-uav')
if '.zero-initialize.' in shader:
hlsl_args.append('--force-zero-initialized-variables')
if '.nonwritable-uav-texture.' in shader:
hlsl_args.append('--hlsl-nonwritable-uav-texture-as-srv')
if '.native-16bit.' in shader:
hlsl_args.append('--hlsl-enable-16bit-types')
if '.flatten-matrix-vertex-input.' in shader:
hlsl_args.append('--hlsl-flatten-matrix-vertex-input-semantics')
subprocess.check_call(hlsl_args)
if not shader_is_invalid_spirv(hlsl_path):
subprocess.check_call([paths.spirv_val, '--scalar-block-layout', '--target-env', spirv_env, spirv_path])
validate_shader_hlsl(hlsl_path, force_no_external_validation, paths)
return (spirv_path, hlsl_path)
def cross_compile_reflect(shader, spirv, opt, iterations, paths):
spirv_path = create_temporary()
reflect_path = create_temporary(os.path.basename(shader))
spirv_cmd = [paths.spirv_as, '--target-env', 'vulkan1.1', '-o', spirv_path, shader]
if '.preserve.' in shader:
spirv_cmd.append('--preserve-numeric-ids')
if spirv:
subprocess.check_call(spirv_cmd)
else:
subprocess.check_call([paths.glslang, '--amb', '--target-env', 'vulkan1.1', '-V', '-o', spirv_path, shader])
if opt and (not shader_is_invalid_spirv(reflect_path)):
subprocess.check_call([paths.spirv_opt, '--skip-validation', '-O', '-o', spirv_path, spirv_path])
spirv_cross_path = paths.spirv_cross
sm = shader_to_sm(shader)
subprocess.check_call([spirv_cross_path, '--entry', 'main', '--output', reflect_path, spirv_path, '--reflect', '--iterations', str(iterations)])
return (spirv_path, reflect_path)
def validate_shader(shader, vulkan, paths):
if vulkan:
spirv_14 = '.spv14.' in shader
glslang_env = 'spirv1.4' if spirv_14 else 'vulkan1.1'
subprocess.check_call([paths.glslang, '--amb', '--target-env', glslang_env, '-V', shader])
else:
subprocess.check_call([paths.glslang, shader])
def cross_compile(shader, vulkan, spirv, invalid_spirv, eliminate, is_legacy, flatten_ubo, sso, flatten_dim, opt, push_ubo, iterations, paths):
spirv_path = create_temporary()
glsl_path = create_temporary(os.path.basename(shader))
spirv_14 = '.spv14.' in shader
spirv_env = 'vulkan1.1spv1.4' if spirv_14 else 'vulkan1.1'
if vulkan or spirv:
vulkan_glsl_path = create_temporary('vk' + os.path.basename(shader))
spirv_cmd = [paths.spirv_as, '--target-env', spirv_env, '-o', spirv_path, shader]
if '.preserve.' in shader:
spirv_cmd.append('--preserve-numeric-ids')
if spirv:
subprocess.check_call(spirv_cmd)
else:
glslang_env = 'spirv1.4' if spirv_14 else 'vulkan1.1'
subprocess.check_call([paths.glslang, '--amb', '--target-env', glslang_env, '-V', '-o', spirv_path, shader])
if opt and (not invalid_spirv):
subprocess.check_call([paths.spirv_opt, '--skip-validation', '-O', '-o', spirv_path, spirv_path])
if not invalid_spirv:
subprocess.check_call([paths.spirv_val, '--scalar-block-layout', '--target-env', spirv_env, spirv_path])
extra_args = ['--iterations', str(iterations)]
if eliminate:
extra_args += ['--remove-unused-variables']
if is_legacy:
extra_args += ['--version', '100', '--es']
if flatten_ubo:
extra_args += ['--flatten-ubo']
if sso:
extra_args += ['--separate-shader-objects']
if flatten_dim:
extra_args += ['--flatten-multidimensional-arrays']
if push_ubo:
extra_args += ['--glsl-emit-push-constant-as-ubo']
if '.line.' in shader:
extra_args += ['--emit-line-directives']
if '.no-samplerless.' in shader:
extra_args += ['--vulkan-glsl-disable-ext-samplerless-texture-functions']
if '.no-qualifier-deduction.' in shader:
extra_args += ['--disable-storage-image-qualifier-deduction']
if '.framebuffer-fetch.' in shader:
extra_args += ['--glsl-remap-ext-framebuffer-fetch', '0', '0']
extra_args += ['--glsl-remap-ext-framebuffer-fetch', '1', '1']
extra_args += ['--glsl-remap-ext-framebuffer-fetch', '2', '2']
extra_args += ['--glsl-remap-ext-framebuffer-fetch', '3', '3']
if '.zero-initialize.' in shader:
extra_args += ['--force-zero-initialized-variables']
if '.force-flattened-io.' in shader:
extra_args += ['--glsl-force-flattened-io-blocks']
spirv_cross_path = paths.spirv_cross
    # Some shaders cannot be turned into valid GLSL; skip validation in that case.
if (not ('nocompat' in glsl_path)) or (not vulkan):
subprocess.check_call([spirv_cross_path, '--entry', 'main', '--output', glsl_path, spirv_path] + extra_args)
if not 'nocompat' in glsl_path:
validate_shader(glsl_path, False, paths)
else:
remove_file(glsl_path)
glsl_path = None
if (vulkan or spirv) and (not is_legacy):
subprocess.check_call([spirv_cross_path, '--entry', 'main', '-V', '--output', vulkan_glsl_path, spirv_path] + extra_args)
validate_shader(vulkan_glsl_path, True, paths)
        # SPIR-V shaders might only need their Vulkan GLSL output validated; we don't always care about keeping the output.
if not vulkan:
remove_file(vulkan_glsl_path)
return (spirv_path, glsl_path, vulkan_glsl_path if vulkan else None)
def make_unix_newline(buf):
decoded = codecs.decode(buf, 'utf-8')
decoded = decoded.replace('\r', '')
return codecs.encode(decoded, 'utf-8')
def md5_for_file(path):
md5 = hashlib.md5()
with open(path, 'rb') as f:
for chunk in iter(lambda: make_unix_newline(f.read(8192)), b''):
md5.update(chunk)
return md5.digest()
def make_reference_dir(path):
base = os.path.dirname(path)
if not os.path.exists(base):
os.makedirs(base)
def reference_path(directory, relpath, opt):
split_paths = os.path.split(directory)
reference_dir = os.path.join(split_paths[0], 'reference/' + ('opt/' if opt else ''))
reference_dir = os.path.join(reference_dir, split_paths[1])
return os.path.join(reference_dir, relpath)
def regression_check_reflect(shader, json_file, args):
reference = reference_path(shader[0], shader[1], args.opt) + '.json'
joined_path = os.path.join(shader[0], shader[1])
print('Reference shader reflection path:', reference)
if os.path.exists(reference):
actual = md5_for_file(json_file)
expected = md5_for_file(reference)
if actual != expected:
if args.update:
print('Generated reflection json has changed for {}!'.format(reference))
# If we expect changes, update the reference file.
if os.path.exists(reference):
remove_file(reference)
make_reference_dir(reference)
shutil.move(json_file, reference)
else:
print('Generated reflection json in {} does not match reference {}!'.format(json_file, reference))
with open(json_file, 'r') as f:
print('')
print('Generated:')
print('======================')
print(f.read())
print('======================')
print('')
# Otherwise, fail the test. Keep the shader file around so we can inspect.
if not args.keep:
remove_file(json_file)
raise RuntimeError('Does not match reference')
else:
remove_file(json_file)
else:
print('Found new shader {}. Placing generated source code in {}'.format(joined_path, reference))
make_reference_dir(reference)
shutil.move(json_file, reference)
def regression_check(shader, glsl, args):
reference = reference_path(shader[0], shader[1], args.opt)
joined_path = os.path.join(shader[0], shader[1])
print('Reference shader path:', reference)
if os.path.exists(reference):
if md5_for_file(glsl) != md5_for_file(reference):
if args.update:
print('Generated source code has changed for {}!'.format(reference))
# If we expect changes, update the reference file.
if os.path.exists(reference):
remove_file(reference)
make_reference_dir(reference)
shutil.move(glsl, reference)
else:
print('Generated source code in {} does not match reference {}!'.format(glsl, reference))
with open(glsl, 'r') as f:
print('')
print('Generated:')
print('======================')
print(f.read())
print('======================')
print('')
# Otherwise, fail the test. Keep the shader file around so we can inspect.
if not args.keep:
remove_file(glsl)
raise RuntimeError('Does not match reference')
else:
remove_file(glsl)
else:
print('Found new shader {}. Placing generated source code in {}'.format(joined_path, reference))
make_reference_dir(reference)
shutil.move(glsl, reference)
def shader_is_vulkan(shader):
return '.vk.' in shader
def shader_is_desktop(shader):
return '.desktop.' in shader
def shader_is_eliminate_dead_variables(shader):
return '.noeliminate.' not in shader
def shader_is_spirv(shader):
return '.asm.' in shader
def shader_is_invalid_spirv(shader):
return '.invalid.' in shader
def shader_is_legacy(shader):
return '.legacy.' in shader
def shader_is_flatten_ubo(shader):
return '.flatten.' in shader
def shader_is_sso(shader):
return '.sso.' in shader
def shader_is_flatten_dimensions(shader):
return '.flatten_dim.' in shader
def shader_is_noopt(shader):
return '.noopt.' in shader
def shader_is_push_ubo(shader):
return '.push-ubo.' in shader
def test_shader(stats, shader, args, paths):
joined_path = os.path.join(shader[0], shader[1])
vulkan = shader_is_vulkan(shader[1])
desktop = shader_is_desktop(shader[1])
eliminate = shader_is_eliminate_dead_variables(shader[1])
is_spirv = shader_is_spirv(shader[1])
invalid_spirv = shader_is_invalid_spirv(shader[1])
is_legacy = shader_is_legacy(shader[1])
flatten_ubo = shader_is_flatten_ubo(shader[1])
sso = shader_is_sso(shader[1])
flatten_dim = shader_is_flatten_dimensions(shader[1])
noopt = shader_is_noopt(shader[1])
push_ubo = shader_is_push_ubo(shader[1])
print('Testing shader:', joined_path)
spirv, glsl, vulkan_glsl = cross_compile(joined_path, vulkan, is_spirv, invalid_spirv, eliminate, is_legacy, flatten_ubo, sso, flatten_dim, args.opt and (not noopt), push_ubo, args.iterations, paths)
# Only test GLSL stats if we have a shader following GL semantics.
if stats and (not vulkan) and (not is_spirv) and (not desktop):
cross_stats = get_shader_stats(glsl)
if glsl:
regression_check(shader, glsl, args)
if vulkan_glsl:
regression_check((shader[0], shader[1] + '.vk'), vulkan_glsl, args)
remove_file(spirv)
if stats and (not vulkan) and (not is_spirv) and (not desktop):
pristine_stats = get_shader_stats(joined_path)
a = []
a.append(shader[1])
for i in pristine_stats:
a.append(str(i))
for i in cross_stats:
a.append(str(i))
print(','.join(a), file = stats)
def test_shader_msl(stats, shader, args, paths):
joined_path = os.path.join(shader[0], shader[1])
print('\nTesting MSL shader:', joined_path)
is_spirv = shader_is_spirv(shader[1])
noopt = shader_is_noopt(shader[1])
spirv, msl = cross_compile_msl(joined_path, is_spirv, args.opt and (not noopt), args.iterations, paths)
regression_check(shader, msl, args)
# Uncomment the following line to print the temp SPIR-V file path.
# This temp SPIR-V file is not deleted until after the Metal validation step below.
# If Metal validation fails, the temp SPIR-V file can be copied out and
# used as input to an invocation of spirv-cross to debug from Xcode directly.
    # To do so, build spirv-cross using `make DEBUG=1`, then run the spirv-cross
    # executable from Xcode using args: `--msl --entry main --output msl_path spirv_path`.
    # print('SPIR-V shader: ' + spirv)
shader_is_msl22 = 'msl22' in joined_path
shader_is_msl23 = 'msl23' in joined_path
skip_validation = (shader_is_msl22 and (not args.msl22)) or (shader_is_msl23 and (not args.msl23))
if '.invalid.' in joined_path:
skip_validation = True
if (not args.force_no_external_validation) and (not skip_validation):
validate_shader_msl(shader, args.opt)
remove_file(spirv)
def test_shader_hlsl(stats, shader, args, paths):
joined_path = os.path.join(shader[0], shader[1])
print('Testing HLSL shader:', joined_path)
is_spirv = shader_is_spirv(shader[1])
noopt = shader_is_noopt(shader[1])
spirv, hlsl = cross_compile_hlsl(joined_path, is_spirv, args.opt and (not noopt), args.force_no_external_validation, args.iterations, paths)
regression_check(shader, hlsl, args)
remove_file(spirv)
def test_shader_reflect(stats, shader, args, paths):
joined_path = os.path.join(shader[0], shader[1])
print('Testing shader reflection:', joined_path)
is_spirv = shader_is_spirv(shader[1])
noopt = shader_is_noopt(shader[1])
spirv, reflect = cross_compile_reflect(joined_path, is_spirv, args.opt and (not noopt), args.iterations, paths)
regression_check_reflect(shader, reflect, args)
remove_file(spirv)
def test_shader_file(relpath, stats, args, backend):
paths = Paths(args.spirv_cross, args.glslang, args.spirv_as, args.spirv_val, args.spirv_opt)
try:
if backend == 'msl':
test_shader_msl(stats, (args.folder, relpath), args, paths)
elif backend == 'hlsl':
test_shader_hlsl(stats, (args.folder, relpath), args, paths)
elif backend == 'reflect':
test_shader_reflect(stats, (args.folder, relpath), args, paths)
else:
test_shader(stats, (args.folder, relpath), args, paths)
return None
except Exception as e:
return e
def test_shaders_helper(stats, backend, args):
all_files = []
for root, dirs, files in os.walk(os.path.join(args.folder)):
        files = [f for f in files if not f.startswith(".")]  # ignore hidden system files (especially on macOS)
for i in files:
path = os.path.join(root, i)
relpath = os.path.relpath(path, args.folder)
all_files.append(relpath)
# The child processes in parallel execution mode don't have the proper state for the global args variable, so
# at this point we need to switch to explicit arguments
if args.parallel:
with multiprocessing.Pool(multiprocessing.cpu_count()) as pool:
results = []
for f in all_files:
results.append(pool.apply_async(test_shader_file,
args = (f, stats, args, backend)))
pool.close()
pool.join()
results_completed = [res.get() for res in results]
for error in results_completed:
if error is not None:
print('Error:', error)
sys.exit(1)
else:
for i in all_files:
e = test_shader_file(i, stats, args, backend)
if e is not None:
print('Error:', e)
sys.exit(1)
def test_shaders(backend, args):
if args.malisc:
with open('stats.csv', 'w') as stats:
print('Shader,OrigRegs,OrigUniRegs,OrigALUShort,OrigLSShort,OrigTEXShort,OrigALULong,OrigLSLong,OrigTEXLong,CrossRegs,CrossUniRegs,CrossALUShort,CrossLSShort,CrossTEXShort,CrossALULong,CrossLSLong,CrossTEXLong', file = stats)
test_shaders_helper(stats, backend, args)
else:
test_shaders_helper(None, backend, args)
def main():
parser = argparse.ArgumentParser(description = 'Script for regression testing.')
parser.add_argument('folder',
help = 'Folder containing shader files to test.')
parser.add_argument('--update',
action = 'store_true',
            help = 'Updates reference files if there is a mismatch. Use when legitimate changes in output are found.')
parser.add_argument('--keep',
action = 'store_true',
help = 'Leave failed GLSL shaders on disk if they fail regression. Useful for debugging.')
parser.add_argument('--malisc',
action = 'store_true',
help = 'Use malisc offline compiler to determine static cycle counts before and after spirv-cross.')
parser.add_argument('--msl',
action = 'store_true',
help = 'Test Metal backend.')
parser.add_argument('--metal',
action = 'store_true',
help = 'Deprecated Metal option. Use --msl instead.')
parser.add_argument('--hlsl',
action = 'store_true',
help = 'Test HLSL backend.')
parser.add_argument('--force-no-external-validation',
action = 'store_true',
help = 'Disable all external validation.')
parser.add_argument('--opt',
action = 'store_true',
help = 'Run SPIRV-Tools optimization passes as well.')
parser.add_argument('--reflect',
action = 'store_true',
help = 'Test reflection backend.')
parser.add_argument('--parallel',
action = 'store_true',
help = 'Execute tests in parallel. Useful for doing regression quickly, but bad for debugging and stat output.')
parser.add_argument('--spirv-cross',
default = './spirv-cross',
help = 'Explicit path to spirv-cross')
parser.add_argument('--glslang',
default = 'glslangValidator',
help = 'Explicit path to glslangValidator')
parser.add_argument('--spirv-as',
default = 'spirv-as',
help = 'Explicit path to spirv-as')
parser.add_argument('--spirv-val',
default = 'spirv-val',
help = 'Explicit path to spirv-val')
parser.add_argument('--spirv-opt',
default = 'spirv-opt',
help = 'Explicit path to spirv-opt')
parser.add_argument('--iterations',
default = 1,
type = int,
help = 'Number of iterations to run SPIRV-Cross (benchmarking)')
args = parser.parse_args()
if not args.folder:
sys.stderr.write('Need shader folder.\n')
sys.exit(1)
if (args.parallel and (args.malisc or args.force_no_external_validation or args.update)):
sys.stderr.write('Parallel execution is disabled when using the flags --update, --malisc or --force-no-external-validation\n')
args.parallel = False
args.msl22 = False
args.msl23 = False
if args.msl:
print_msl_compiler_version()
args.msl22 = msl_compiler_supports_version('2.2')
args.msl23 = msl_compiler_supports_version('2.3')
backend = 'glsl'
if (args.msl or args.metal):
backend = 'msl'
elif args.hlsl:
backend = 'hlsl'
elif args.reflect:
backend = 'reflect'
test_shaders(backend, args)
if args.malisc:
print('Stats in stats.csv!')
print('Tests completed!')
if __name__ == '__main__':
main()
import sys
import os
import os.path
import subprocess
import tempfile
import re
import itertools
import hashlib
import shutil
import argparse
import codecs
import json
import multiprocessing
import errno
from functools import partial
class Paths():
def __init__(self, spirv_cross, glslang, spirv_as, spirv_val, spirv_opt):
self.spirv_cross = spirv_cross
self.glslang = glslang
self.spirv_as = spirv_as
self.spirv_val = spirv_val
self.spirv_opt = spirv_opt
def remove_file(path):
os.remove(path)
def create_temporary(suff = ''):
f, path = tempfile.mkstemp(suffix = suff)
os.close(f)
return path
def parse_stats(stats):
m = re.search('([0-9]+) work registers', stats)
registers = int(m.group(1)) if m else 0
m = re.search('([0-9]+) uniform registers', stats)
uniform_regs = int(m.group(1)) if m else 0
m_list = re.findall('(-?[0-9]+)\s+(-?[0-9]+)\s+(-?[0-9]+)', stats)
alu_short = float(m_list[1][0]) if m_list else 0
ls_short = float(m_list[1][1]) if m_list else 0
tex_short = float(m_list[1][2]) if m_list else 0
alu_long = float(m_list[2][0]) if m_list else 0
ls_long = float(m_list[2][1]) if m_list else 0
tex_long = float(m_list[2][2]) if m_list else 0
return (registers, uniform_regs, alu_short, ls_short, tex_short, alu_long, ls_long, tex_long)
def get_shader_type(shader):
_, ext = os.path.splitext(shader)
if ext == '.vert':
return '--vertex'
elif ext == '.frag':
return '--fragment'
elif ext == '.comp':
return '--compute'
elif ext == '.tesc':
return '--tessellation_control'
elif ext == '.tese':
return '--tessellation_evaluation'
elif ext == '.geom':
return '--geometry'
else:
return ''
def get_shader_stats(shader):
path = create_temporary()
p = subprocess.Popen(['malisc', get_shader_type(shader), '--core', 'Mali-T760', '-V', shader], stdout = subprocess.PIPE, stderr = subprocess.PIPE)
stdout, stderr = p.communicate()
remove_file(path)
if p.returncode != 0:
print(stderr.decode('utf-8'))
raise OSError('malisc failed')
p.wait()
returned = stdout.decode('utf-8')
return parse_stats(returned)
def print_msl_compiler_version():
try:
subprocess.check_call(['xcrun', '--sdk', 'iphoneos', 'metal', '--version'])
print('...are the Metal compiler characteristics.\n')
except OSError as e:
if (e.errno != errno.ENOENT):
raise
except subprocess.CalledProcessError:
pass
def msl_compiler_supports_version(version):
try:
subprocess.check_call(['xcrun', '--sdk', 'macosx', 'metal', '-x', 'metal', '-std=macos-metal' + version, '-'],
stdin = subprocess.DEVNULL, stdout = subprocess.DEVNULL, stderr = subprocess.DEVNULL)
print('Current SDK supports MSL {0}. Enabling validation for MSL {0} shaders.'.format(version))
return True
except OSError as e:
print('Failed to check if MSL {} is not supported. It probably is not.'.format(version))
return False
except subprocess.CalledProcessError:
print('Current SDK does NOT support MSL {0}. Disabling validation for MSL {0} shaders.'.format(version))
return False
def path_to_msl_standard(shader):
if '.ios.' in shader:
if '.msl2.' in shader:
return '-std=ios-metal2.0'
elif '.msl21.' in shader:
return '-std=ios-metal2.1'
elif '.msl22.' in shader:
return '-std=ios-metal2.2'
elif '.msl23.' in shader:
return '-std=ios-metal2.3'
elif '.msl11.' in shader:
return '-std=ios-metal1.1'
elif '.msl10.' in shader:
return '-std=ios-metal1.0'
else:
return '-std=ios-metal1.2'
else:
if '.msl2.' in shader:
return '-std=macos-metal2.0'
elif '.msl21.' in shader:
return '-std=macos-metal2.1'
elif '.msl22.' in shader:
return '-std=macos-metal2.2'
elif '.msl23.' in shader:
return '-std=macos-metal2.3'
elif '.msl11.' in shader:
return '-std=macos-metal1.1'
else:
return '-std=macos-metal1.2'
def path_to_msl_standard_cli(shader):
if '.msl2.' in shader:
return '20000'
elif '.msl21.' in shader:
return '20100'
elif '.msl22.' in shader:
return '20200'
elif '.msl23.' in shader:
return '20300'
elif '.msl11.' in shader:
return '10100'
else:
return '10200'
def validate_shader_msl(shader, opt):
msl_path = reference_path(shader[0], shader[1], opt)
try:
if '.ios.' in msl_path:
msl_os = 'iphoneos'
else:
msl_os = 'macosx'
subprocess.check_call(['xcrun', '--sdk', msl_os, 'metal', '-x', 'metal', path_to_msl_standard(msl_path), '-Werror', '-Wno-unused-variable', msl_path])
print('Compiled Metal shader: ' + msl_path)
except OSError as oe:
if (oe.errno != errno.ENOENT):
raise
except subprocess.CalledProcessError:
print('Error compiling Metal shader: ' + msl_path)
raise RuntimeError('Failed to compile Metal shader')
def cross_compile_msl(shader, spirv, opt, iterations, paths):
spirv_path = create_temporary()
msl_path = create_temporary(os.path.basename(shader))
spirv_env = 'vulkan1.1spv1.4' if ('.spv14.' in shader) else 'vulkan1.1'
spirv_cmd = [paths.spirv_as, '--target-env', spirv_env, '-o', spirv_path, shader]
if '.preserve.' in shader:
spirv_cmd.append('--preserve-numeric-ids')
if spirv:
subprocess.check_call(spirv_cmd)
else:
subprocess.check_call([paths.glslang, '--amb' ,'--target-env', 'vulkan1.1', '-V', '-o', spirv_path, shader])
if opt and (not shader_is_invalid_spirv(shader)):
if '.graphics-robust-access.' in shader:
subprocess.check_call([paths.spirv_opt, '--skip-validation', '-O', '--graphics-robust-access', '-o', spirv_path, spirv_path])
else:
subprocess.check_call([paths.spirv_opt, '--skip-validation', '-O', '-o', spirv_path, spirv_path])
spirv_cross_path = paths.spirv_cross
msl_args = [spirv_cross_path, '--output', msl_path, spirv_path, '--msl', '--iterations', str(iterations)]
msl_args.append('--msl-version')
msl_args.append(path_to_msl_standard_cli(shader))
if not '.nomain.' in shader:
msl_args.append('--entry')
msl_args.append('main')
if '.swizzle.' in shader:
msl_args.append('--msl-swizzle-texture-samples')
if '.ios.' in shader:
msl_args.append('--msl-ios')
if '.pad-fragment.' in shader:
msl_args.append('--msl-pad-fragment-output')
if '.capture.' in shader:
msl_args.append('--msl-capture-output')
if '.domain.' in shader:
msl_args.append('--msl-domain-lower-left')
if '.argument.' in shader:
msl_args.append('--msl-argument-buffers')
if '.texture-buffer-native.' in shader:
msl_args.append('--msl-texture-buffer-native')
if '.framebuffer-fetch.' in shader:
msl_args.append('--msl-framebuffer-fetch')
if '.invariant-float-math.' in shader:
msl_args.append('--msl-invariant-float-math')
if '.emulate-cube-array.' in shader:
msl_args.append('--msl-emulate-cube-array')
if '.discrete.' in shader:
msl_args.append('--msl-discrete-descriptor-set')
msl_args.append('2')
msl_args.append('--msl-discrete-descriptor-set')
msl_args.append('3')
if '.force-active.' in shader:
msl_args.append('--msl-force-active-argument-buffer-resources')
if '.line.' in shader:
msl_args.append('--emit-line-directives')
if '.multiview.' in shader:
msl_args.append('--msl-multiview')
if '.no-layered.' in shader:
msl_args.append('--msl-multiview-no-layered-rendering')
if '.viewfromdev.' in shader:
msl_args.append('--msl-view-index-from-device-index')
if '.dispatchbase.' in shader:
msl_args.append('--msl-dispatch-base')
if '.dynamic-buffer.' in shader:
msl_args.append('--msl-dynamic-buffer')
msl_args.append('0')
msl_args.append('0')
msl_args.append('--msl-dynamic-buffer')
msl_args.append('1')
msl_args.append('2')
if '.inline-block.' in shader:
msl_args.append('--msl-inline-uniform-block')
msl_args.append('0')
msl_args.append('0')
if '.device-argument-buffer.' in shader:
msl_args.append('--msl-device-argument-buffer')
msl_args.append('0')
msl_args.append('--msl-device-argument-buffer')
msl_args.append('1')
if '.force-native-array.' in shader:
msl_args.append('--msl-force-native-arrays')
if '.zero-initialize.' in shader:
msl_args.append('--force-zero-initialized-variables')
if '.frag-output.' in shader:
msl_args.append('--msl-disable-frag-depth-builtin')
msl_args.append('--msl-disable-frag-stencil-ref-builtin')
msl_args.append('--msl-enable-frag-output-mask')
msl_args.append('0x000000ca')
if '.no-user-varying.' in shader:
msl_args.append('--msl-no-clip-distance-user-varying')
if '.shader-inputs.' in shader:
msl_args.append('--msl-shader-input')
msl_args.append('0')
msl_args.append('u8')
msl_args.append('2')
msl_args.append('--msl-shader-input')
msl_args.append('1')
msl_args.append('u16')
msl_args.append('3')
msl_args.append('--msl-shader-input')
msl_args.append('6')
msl_args.append('other')
msl_args.append('4')
if '.multi-patch.' in shader:
msl_args.append('--msl-multi-patch-workgroup')
msl_args.append('--msl-shader-input')
msl_args.append('0')
msl_args.append('any32')
msl_args.append('3')
msl_args.append('--msl-shader-input')
msl_args.append('1')
msl_args.append('any16')
msl_args.append('2')
if '.for-tess.' in shader:
msl_args.append('--msl-vertex-for-tessellation')
if '.fixed-sample-mask.' in shader:
msl_args.append('--msl-additional-fixed-sample-mask')
msl_args.append('0x00000022')
if '.arrayed-subpass.' in shader:
msl_args.append('--msl-arrayed-subpass-input')
if '.1d-as-2d.' in shader:
msl_args.append('--msl-texture-1d-as-2d')
if '.simd.' in shader:
msl_args.append('--msl-ios-use-simdgroup-functions')
if '.emulate-subgroup.' in shader:
msl_args.append('--msl-emulate-subgroups')
if '.fixed-subgroup.' in shader:
msl_args.append('--msl-fixed-subgroup-size')
msl_args.append('32')
if '.force-sample.' in shader:
msl_args.append('--msl-force-sample-rate-shading')
if '.decoration-binding.' in shader:
msl_args.append('--msl-decoration-binding')
subprocess.check_call(msl_args)
if not shader_is_invalid_spirv(msl_path):
subprocess.check_call([paths.spirv_val, '--scalar-block-layout', '--target-env', spirv_env, spirv_path])
return (spirv_path, msl_path)
def shader_model_hlsl(shader):
    if '.vert' in shader:
        if '.sm30.' in shader:
            return '-Tvs_3_0'
        else:
            return '-Tvs_5_1'
    elif '.frag' in shader:
        if '.sm30.' in shader:
            return '-Tps_3_0'
        else:
            return '-Tps_5_1'
    elif '.comp' in shader:
        return '-Tcs_5_1'
    else:
        return None

def shader_to_win_path(shader):
    try:
        with subprocess.Popen(['winepath', '-w', shader], stdout = subprocess.PIPE, stderr = subprocess.PIPE) as f:
            stdout_data, stderr_data = f.communicate()
            return stdout_data.decode('utf-8')
    except OSError as oe:
        if (oe.errno != errno.ENOENT): # Ignore not found errors
            return shader
    except subprocess.CalledProcessError:
        raise

    return shader

ignore_fxc = False

def validate_shader_hlsl(shader, force_no_external_validation, paths):
    test_glslang = True
    if '.nonuniformresource.' in shader:
        test_glslang = False
    if '.fxconly.' in shader:
        test_glslang = False

    hlsl_args = [paths.glslang, '--amb', '-e', 'main', '-D', '--target-env', 'vulkan1.1', '-V', shader]
    if '.sm30.' in shader:
        hlsl_args.append('--hlsl-dx9-compatible')

    if test_glslang:
        subprocess.check_call(hlsl_args)

    is_no_fxc = '.nofxc.' in shader
    global ignore_fxc
    if (not ignore_fxc) and (not force_no_external_validation) and (not is_no_fxc):
        try:
            win_path = shader_to_win_path(shader)
            args = ['fxc', '-nologo', shader_model_hlsl(shader), win_path]
            if '.nonuniformresource.' in shader:
                args.append('/enable_unbounded_descriptor_tables')
            subprocess.check_call(args)
        except OSError as oe:
            if (oe.errno != errno.ENOENT): # Ignore not found errors
                print('Failed to run FXC.')
                ignore_fxc = True
                raise
            else:
                print('Could not find FXC.')
                ignore_fxc = True
        except subprocess.CalledProcessError:
            print('Failed compiling HLSL shader:', shader, 'with FXC.')
            raise RuntimeError('Failed compiling HLSL shader')

def shader_to_sm(shader):
    if '.sm62.' in shader:
        return '62'
    elif '.sm60.' in shader:
        return '60'
    elif '.sm51.' in shader:
        return '51'
    elif '.sm30.' in shader:
        return '30'
    else:
        return '50'
def cross_compile_hlsl(shader, spirv, opt, force_no_external_validation, iterations, paths):
    spirv_path = create_temporary()
    hlsl_path = create_temporary(os.path.basename(shader))

    spirv_env = 'vulkan1.1spv1.4' if '.spv14.' in shader else 'vulkan1.1'

    spirv_cmd = [paths.spirv_as, '--target-env', spirv_env, '-o', spirv_path, shader]
    if '.preserve.' in shader:
        spirv_cmd.append('--preserve-numeric-ids')

    if spirv:
        subprocess.check_call(spirv_cmd)
    else:
        subprocess.check_call([paths.glslang, '--amb', '--target-env', 'vulkan1.1', '-V', '-o', spirv_path, shader])

    if opt and (not shader_is_invalid_spirv(hlsl_path)):
        subprocess.check_call([paths.spirv_opt, '--skip-validation', '-O', '-o', spirv_path, spirv_path])

    spirv_cross_path = paths.spirv_cross

    sm = shader_to_sm(shader)
    hlsl_args = [spirv_cross_path, '--entry', 'main', '--output', hlsl_path, spirv_path, '--hlsl-enable-compat', '--hlsl', '--shader-model', sm, '--iterations', str(iterations)]
    if '.line.' in shader:
        hlsl_args.append('--emit-line-directives')
    if '.force-uav.' in shader:
        hlsl_args.append('--hlsl-force-storage-buffer-as-uav')
    if '.zero-initialize.' in shader:
        hlsl_args.append('--force-zero-initialized-variables')
    if '.nonwritable-uav-texture.' in shader:
        hlsl_args.append('--hlsl-nonwritable-uav-texture-as-srv')
    if '.native-16bit.' in shader:
        hlsl_args.append('--hlsl-enable-16bit-types')
    if '.flatten-matrix-vertex-input.' in shader:
        hlsl_args.append('--hlsl-flatten-matrix-vertex-input-semantics')
    subprocess.check_call(hlsl_args)

    if not shader_is_invalid_spirv(hlsl_path):
        subprocess.check_call([paths.spirv_val, '--scalar-block-layout', '--target-env', spirv_env, spirv_path])

    validate_shader_hlsl(hlsl_path, force_no_external_validation, paths)

    return (spirv_path, hlsl_path)

def cross_compile_reflect(shader, spirv, opt, iterations, paths):
    spirv_path = create_temporary()
    reflect_path = create_temporary(os.path.basename(shader))

    spirv_cmd = [paths.spirv_as, '--target-env', 'vulkan1.1', '-o', spirv_path, shader]
    if '.preserve.' in shader:
        spirv_cmd.append('--preserve-numeric-ids')

    if spirv:
        subprocess.check_call(spirv_cmd)
    else:
        subprocess.check_call([paths.glslang, '--amb', '--target-env', 'vulkan1.1', '-V', '-o', spirv_path, shader])

    if opt and (not shader_is_invalid_spirv(reflect_path)):
        subprocess.check_call([paths.spirv_opt, '--skip-validation', '-O', '-o', spirv_path, spirv_path])

    spirv_cross_path = paths.spirv_cross

    sm = shader_to_sm(shader)
    subprocess.check_call([spirv_cross_path, '--entry', 'main', '--output', reflect_path, spirv_path, '--reflect', '--iterations', str(iterations)])
    return (spirv_path, reflect_path)

def validate_shader(shader, vulkan, paths):
    if vulkan:
        spirv_14 = '.spv14.' in shader
        glslang_env = 'spirv1.4' if spirv_14 else 'vulkan1.1'
        subprocess.check_call([paths.glslang, '--amb', '--target-env', glslang_env, '-V', shader])
    else:
        subprocess.check_call([paths.glslang, shader])
def cross_compile(shader, vulkan, spirv, invalid_spirv, eliminate, is_legacy, flatten_ubo, sso, flatten_dim, opt, push_ubo, iterations, paths):
    spirv_path = create_temporary()
    glsl_path = create_temporary(os.path.basename(shader))

    spirv_14 = '.spv14.' in shader
    spirv_env = 'vulkan1.1spv1.4' if spirv_14 else 'vulkan1.1'

    if vulkan or spirv:
        vulkan_glsl_path = create_temporary('vk' + os.path.basename(shader))

    spirv_cmd = [paths.spirv_as, '--target-env', spirv_env, '-o', spirv_path, shader]
    if '.preserve.' in shader:
        spirv_cmd.append('--preserve-numeric-ids')

    if spirv:
        subprocess.check_call(spirv_cmd)
    else:
        glslang_env = 'spirv1.4' if spirv_14 else 'vulkan1.1'
        subprocess.check_call([paths.glslang, '--amb', '--target-env', glslang_env, '-V', '-o', spirv_path, shader])

    if opt and (not invalid_spirv):
        subprocess.check_call([paths.spirv_opt, '--skip-validation', '-O', '-o', spirv_path, spirv_path])

    if not invalid_spirv:
        subprocess.check_call([paths.spirv_val, '--scalar-block-layout', '--target-env', spirv_env, spirv_path])

    extra_args = ['--iterations', str(iterations)]
    if eliminate:
        extra_args += ['--remove-unused-variables']
    if is_legacy:
        extra_args += ['--version', '100', '--es']
    if flatten_ubo:
        extra_args += ['--flatten-ubo']
    if sso:
        extra_args += ['--separate-shader-objects']
    if flatten_dim:
        extra_args += ['--flatten-multidimensional-arrays']
    if push_ubo:
        extra_args += ['--glsl-emit-push-constant-as-ubo']
    if '.line.' in shader:
        extra_args += ['--emit-line-directives']
    if '.no-samplerless.' in shader:
        extra_args += ['--vulkan-glsl-disable-ext-samplerless-texture-functions']
    if '.no-qualifier-deduction.' in shader:
        extra_args += ['--disable-storage-image-qualifier-deduction']
    if '.framebuffer-fetch.' in shader:
        extra_args += ['--glsl-remap-ext-framebuffer-fetch', '0', '0']
        extra_args += ['--glsl-remap-ext-framebuffer-fetch', '1', '1']
        extra_args += ['--glsl-remap-ext-framebuffer-fetch', '2', '2']
        extra_args += ['--glsl-remap-ext-framebuffer-fetch', '3', '3']
    if '.zero-initialize.' in shader:
        extra_args += ['--force-zero-initialized-variables']
    if '.force-flattened-io.' in shader:
        extra_args += ['--glsl-force-flattened-io-blocks']

    spirv_cross_path = paths.spirv_cross

    # A shader might not be possible to make valid GLSL from, skip validation for this case.
    if (not ('nocompat' in glsl_path)) or (not vulkan):
        subprocess.check_call([spirv_cross_path, '--entry', 'main', '--output', glsl_path, spirv_path] + extra_args)
        if not 'nocompat' in glsl_path:
            validate_shader(glsl_path, False, paths)
    else:
        remove_file(glsl_path)
        glsl_path = None

    if (vulkan or spirv) and (not is_legacy):
        subprocess.check_call([spirv_cross_path, '--entry', 'main', '-V', '--output', vulkan_glsl_path, spirv_path] + extra_args)
        validate_shader(vulkan_glsl_path, True, paths)
        # SPIR-V shaders might just want to validate Vulkan GLSL output, we don't always care about the output.
        if not vulkan:
            remove_file(vulkan_glsl_path)

    return (spirv_path, glsl_path, vulkan_glsl_path if vulkan else None)
def make_unix_newline(buf):
    decoded = codecs.decode(buf, 'utf-8')
    decoded = decoded.replace('\r', '')
    return codecs.encode(decoded, 'utf-8')

def md5_for_file(path):
    md5 = hashlib.md5()
    with open(path, 'rb') as f:
        for chunk in iter(lambda: make_unix_newline(f.read(8192)), b''):
            md5.update(chunk)
    return md5.digest()

def make_reference_dir(path):
    base = os.path.dirname(path)
    if not os.path.exists(base):
        os.makedirs(base)

def reference_path(directory, relpath, opt):
    split_paths = os.path.split(directory)
    reference_dir = os.path.join(split_paths[0], 'reference/' + ('opt/' if opt else ''))
    reference_dir = os.path.join(reference_dir, split_paths[1])
    return os.path.join(reference_dir, relpath)
def regression_check_reflect(shader, json_file, args):
    reference = reference_path(shader[0], shader[1], args.opt) + '.json'
    joined_path = os.path.join(shader[0], shader[1])
    print('Reference shader reflection path:', reference)

    if os.path.exists(reference):
        actual = md5_for_file(json_file)
        expected = md5_for_file(reference)
        if actual != expected:
            if args.update:
                print('Generated reflection json has changed for {}!'.format(reference))
                if os.path.exists(reference):
                    remove_file(reference)
                make_reference_dir(reference)
                shutil.move(json_file, reference)
            else:
                print('Generated reflection json in {} does not match reference {}!'.format(json_file, reference))
                with open(json_file, 'r') as f:
                    print('')
                    print('Generated:')
                    print('======================')
                    print(f.read())
                    print('======================')
                    print('')

                if not args.keep:
                    remove_file(json_file)
                raise RuntimeError('Does not match reference')
        else:
            remove_file(json_file)
    else:
        print('Found new shader {}. Placing generated source code in {}'.format(joined_path, reference))
        make_reference_dir(reference)
        shutil.move(json_file, reference)

def regression_check(shader, glsl, args):
    reference = reference_path(shader[0], shader[1], args.opt)
    joined_path = os.path.join(shader[0], shader[1])
    print('Reference shader path:', reference)

    if os.path.exists(reference):
        if md5_for_file(glsl) != md5_for_file(reference):
            if args.update:
                print('Generated source code has changed for {}!'.format(reference))
                if os.path.exists(reference):
                    remove_file(reference)
                make_reference_dir(reference)
                shutil.move(glsl, reference)
            else:
                print('Generated source code in {} does not match reference {}!'.format(glsl, reference))
                with open(glsl, 'r') as f:
                    print('')
                    print('Generated:')
                    print('======================')
                    print(f.read())
                    print('======================')
                    print('')

                if not args.keep:
                    remove_file(glsl)
                raise RuntimeError('Does not match reference')
        else:
            remove_file(glsl)
    else:
        print('Found new shader {}. Placing generated source code in {}'.format(joined_path, reference))
        make_reference_dir(reference)
        shutil.move(glsl, reference)
def shader_is_vulkan(shader):
    return '.vk.' in shader

def shader_is_desktop(shader):
    return '.desktop.' in shader

def shader_is_eliminate_dead_variables(shader):
    return '.noeliminate.' not in shader

def shader_is_spirv(shader):
    return '.asm.' in shader

def shader_is_invalid_spirv(shader):
    return '.invalid.' in shader

def shader_is_legacy(shader):
    return '.legacy.' in shader

def shader_is_flatten_ubo(shader):
    return '.flatten.' in shader

def shader_is_sso(shader):
    return '.sso.' in shader

def shader_is_flatten_dimensions(shader):
    return '.flatten_dim.' in shader

def shader_is_noopt(shader):
    return '.noopt.' in shader

def shader_is_push_ubo(shader):
    return '.push-ubo.' in shader
def test_shader(stats, shader, args, paths):
    joined_path = os.path.join(shader[0], shader[1])
    vulkan = shader_is_vulkan(shader[1])
    desktop = shader_is_desktop(shader[1])
    eliminate = shader_is_eliminate_dead_variables(shader[1])
    is_spirv = shader_is_spirv(shader[1])
    invalid_spirv = shader_is_invalid_spirv(shader[1])
    is_legacy = shader_is_legacy(shader[1])
    flatten_ubo = shader_is_flatten_ubo(shader[1])
    sso = shader_is_sso(shader[1])
    flatten_dim = shader_is_flatten_dimensions(shader[1])
    noopt = shader_is_noopt(shader[1])
    push_ubo = shader_is_push_ubo(shader[1])

    print('Testing shader:', joined_path)
    spirv, glsl, vulkan_glsl = cross_compile(joined_path, vulkan, is_spirv, invalid_spirv, eliminate, is_legacy, flatten_ubo, sso, flatten_dim, args.opt and (not noopt), push_ubo, args.iterations, paths)

    if stats and (not vulkan) and (not is_spirv) and (not desktop):
        cross_stats = get_shader_stats(glsl)

    if glsl:
        regression_check(shader, glsl, args)
    if vulkan_glsl:
        regression_check((shader[0], shader[1] + '.vk'), vulkan_glsl, args)
    remove_file(spirv)

    if stats and (not vulkan) and (not is_spirv) and (not desktop):
        pristine_stats = get_shader_stats(joined_path)

        a = []
        a.append(shader[1])
        for i in pristine_stats:
            a.append(str(i))
        for i in cross_stats:
            a.append(str(i))
        print(','.join(a), file = stats)

def test_shader_msl(stats, shader, args, paths):
    joined_path = os.path.join(shader[0], shader[1])
    print('\nTesting MSL shader:', joined_path)
    is_spirv = shader_is_spirv(shader[1])
    noopt = shader_is_noopt(shader[1])
    spirv, msl = cross_compile_msl(joined_path, is_spirv, args.opt and (not noopt), args.iterations, paths)
    regression_check(shader, msl, args)

    shader_is_msl22 = 'msl22' in joined_path
    shader_is_msl23 = 'msl23' in joined_path
    skip_validation = (shader_is_msl22 and (not args.msl22)) or (shader_is_msl23 and (not args.msl23))
    if '.invalid.' in joined_path:
        skip_validation = True

    if (not args.force_no_external_validation) and (not skip_validation):
        validate_shader_msl(shader, args.opt)

    remove_file(spirv)

def test_shader_hlsl(stats, shader, args, paths):
    joined_path = os.path.join(shader[0], shader[1])
    print('Testing HLSL shader:', joined_path)
    is_spirv = shader_is_spirv(shader[1])
    noopt = shader_is_noopt(shader[1])
    spirv, hlsl = cross_compile_hlsl(joined_path, is_spirv, args.opt and (not noopt), args.force_no_external_validation, args.iterations, paths)
    regression_check(shader, hlsl, args)
    remove_file(spirv)

def test_shader_reflect(stats, shader, args, paths):
    joined_path = os.path.join(shader[0], shader[1])
    print('Testing shader reflection:', joined_path)
    is_spirv = shader_is_spirv(shader[1])
    noopt = shader_is_noopt(shader[1])
    spirv, reflect = cross_compile_reflect(joined_path, is_spirv, args.opt and (not noopt), args.iterations, paths)
    regression_check_reflect(shader, reflect, args)
    remove_file(spirv)
def test_shader_file(relpath, stats, args, backend):
    paths = Paths(args.spirv_cross, args.glslang, args.spirv_as, args.spirv_val, args.spirv_opt)
    try:
        if backend == 'msl':
            test_shader_msl(stats, (args.folder, relpath), args, paths)
        elif backend == 'hlsl':
            test_shader_hlsl(stats, (args.folder, relpath), args, paths)
        elif backend == 'reflect':
            test_shader_reflect(stats, (args.folder, relpath), args, paths)
        else:
            test_shader(stats, (args.folder, relpath), args, paths)
        return None
    except Exception as e:
        return e

def test_shaders_helper(stats, backend, args):
    all_files = []
    for root, dirs, files in os.walk(os.path.join(args.folder)):
        files = [ f for f in files if not f.startswith(".") ]
        for i in files:
            path = os.path.join(root, i)
            relpath = os.path.relpath(path, args.folder)
            all_files.append(relpath)

    # at this point we need to switch to explicit arguments
    if args.parallel:
        with multiprocessing.Pool(multiprocessing.cpu_count()) as pool:
            results = []
            for f in all_files:
                results.append(pool.apply_async(test_shader_file,
                        args = (f, stats, args, backend)))
            pool.close()
            pool.join()
            results_completed = [res.get() for res in results]

            for error in results_completed:
                if error is not None:
                    print('Error:', error)
                    sys.exit(1)
    else:
        for i in all_files:
            e = test_shader_file(i, stats, args, backend)
            if e is not None:
                print('Error:', e)
                sys.exit(1)

def test_shaders(backend, args):
    if args.malisc:
        with open('stats.csv', 'w') as stats:
            print('Shader,OrigRegs,OrigUniRegs,OrigALUShort,OrigLSShort,OrigTEXShort,OrigALULong,OrigLSLong,OrigTEXLong,CrossRegs,CrossUniRegs,CrossALUShort,CrossLSShort,CrossTEXShort,CrossALULong,CrossLSLong,CrossTEXLong', file = stats)
            test_shaders_helper(stats, backend, args)
    else:
        test_shaders_helper(None, backend, args)
def main():
    parser = argparse.ArgumentParser(description = 'Script for regression testing.')
    parser.add_argument('folder',
            help = 'Folder containing shader files to test.')
    parser.add_argument('--update',
            action = 'store_true',
            help = 'Updates reference files if there is a mismatch. Use when legitimate changes in output is found.')
    parser.add_argument('--keep',
            action = 'store_true',
            help = 'Leave failed GLSL shaders on disk if they fail regression. Useful for debugging.')
    parser.add_argument('--malisc',
            action = 'store_true',
            help = 'Use malisc offline compiler to determine static cycle counts before and after spirv-cross.')
    parser.add_argument('--msl',
            action = 'store_true',
            help = 'Test Metal backend.')
    parser.add_argument('--metal',
            action = 'store_true',
            help = 'Deprecated Metal option. Use --msl instead.')
    parser.add_argument('--hlsl',
            action = 'store_true',
            help = 'Test HLSL backend.')
    parser.add_argument('--force-no-external-validation',
            action = 'store_true',
            help = 'Disable all external validation.')
    parser.add_argument('--opt',
            action = 'store_true',
            help = 'Run SPIRV-Tools optimization passes as well.')
    parser.add_argument('--reflect',
            action = 'store_true',
            help = 'Test reflection backend.')
    parser.add_argument('--parallel',
            action = 'store_true',
            help = 'Execute tests in parallel. Useful for doing regression quickly, but bad for debugging and stat output.')
    parser.add_argument('--spirv-cross',
            default = './spirv-cross',
            help = 'Explicit path to spirv-cross')
    parser.add_argument('--glslang',
            default = 'glslangValidator',
            help = 'Explicit path to glslangValidator')
    parser.add_argument('--spirv-as',
            default = 'spirv-as',
            help = 'Explicit path to spirv-as')
    parser.add_argument('--spirv-val',
            default = 'spirv-val',
            help = 'Explicit path to spirv-val')
    parser.add_argument('--spirv-opt',
            default = 'spirv-opt',
            help = 'Explicit path to spirv-opt')
    parser.add_argument('--iterations',
            default = 1,
            type = int,
            help = 'Number of iterations to run SPIRV-Cross (benchmarking)')

    args = parser.parse_args()
    if not args.folder:
        sys.stderr.write('Need shader folder.\n')
        sys.exit(1)

    if (args.parallel and (args.malisc or args.force_no_external_validation or args.update)):
        sys.stderr.write('Parallel execution is disabled when using the flags --update, --malisc or --force-no-external-validation\n')
        args.parallel = False

    args.msl22 = False
    args.msl23 = False
    if args.msl:
        print_msl_compiler_version()
        args.msl22 = msl_compiler_supports_version('2.2')
        args.msl23 = msl_compiler_supports_version('2.3')

    backend = 'glsl'
    if (args.msl or args.metal):
        backend = 'msl'
    elif args.hlsl:
        backend = 'hlsl'
    elif args.reflect:
        backend = 'reflect'

    test_shaders(backend, args)
    if args.malisc:
        print('Stats in stats.csv!')
    print('Tests completed!')

if __name__ == '__main__':
    main()
f73df42c6e52358e430d099293f2ac6a02b6ef0d | 2711 | py | Python | Unit-7-The-Cartpole/q_learning.py | paulfioravanti/Reinforcement-Learning-In-Motion | e09afd23b82040d76c95875b077ba0a5af517470 | ["MIT"]

import gym
import numpy as np

from util import plot_running_average

# pylint: disable-msg=redefined-outer-name

def max_action(estimates, state):
    values = np.array([estimates[state, i] for i in range(2)])
    action = np.argmax(values)
    return action

def get_state(observation):
    cart_x, cart_x_dot, cart_theta, cart_theta_dot = observation
    cart_x = int(np.digitize(cart_x, CART_POS_SPACE))
    cart_x_dot = int(np.digitize(cart_x_dot, CART_VEL_SPACE))
    cart_theta = int(np.digitize(cart_theta, POLE_THETA_SPACE))
    cart_theta_dot = int(np.digitize(cart_theta_dot, POLE_THETA_VEL_SPACE))
    return (cart_x, cart_x_dot, cart_theta, cart_theta_dot)

# discretize the spaces
POLE_THETA_SPACE = np.linspace(-0.20943951, 0.20943951, 10)
POLE_THETA_VEL_SPACE = np.linspace(-4, 4, 10)
CART_POS_SPACE = np.linspace(-2.4, 2.4, 10)
CART_VEL_SPACE = np.linspace(-4, 4, 10)

if __name__ == "__main__":
    ENV = gym.make("CartPole-v0")
    # model hyperparameters
    STEP_SIZE = 0.1
    DISCOUNT = 1.0
    EPSILON = 1.0

    # construct state space
    STATES = []
    for i in range(len(CART_POS_SPACE) + 1):
        for j in range(len(CART_VEL_SPACE) + 1):
            for k in range(len(POLE_THETA_SPACE) + 1):
                for l in range(len(POLE_THETA_VEL_SPACE) + 1):
                    STATES.append((i, j, k, l))

    ESTIMATES = {}
    for state in STATES:
        for action in range(2):
            ESTIMATES[state, action] = 0

    NUM_EPISODES = 50000
    REPORT_INTERVAL = 5000
    TOTAL_REWARDS = np.zeros(NUM_EPISODES)
    for i in range(NUM_EPISODES):
        if i % REPORT_INTERVAL == 0:
            print("starting game ", i)
        done = False
        episode_rewards = 0
        observation = ENV.reset()
        while not done:
            state = get_state(observation)
            rand = np.random.random()
            if rand < (1 - EPSILON):
                action = max_action(ESTIMATES, state)
            else:
                action = ENV.action_space.sample()
            observation_, reward, done, info = ENV.step(action)
            episode_rewards += reward
            state_ = get_state(observation_)
            action_ = max_action(ESTIMATES, state_)
            ESTIMATES[state, action] = (
                ESTIMATES[state, action] + STEP_SIZE
                * (
                    reward + DISCOUNT
                    * ESTIMATES[state_, action_] - ESTIMATES[state, action]
                )
            )
            observation = observation_
        if EPSILON - 2 / NUM_EPISODES > 0:
            EPSILON -= 2 / NUM_EPISODES
        else:
            EPSILON = 0
        TOTAL_REWARDS[i] = episode_rewards
    plot_running_average(TOTAL_REWARDS)
f73df575c8d7a5c3b234b986925a4420b25923c6 | 1401 | py | Python | src/app.py | taller-de-programacion-2/rest-python-flask | ff2567c204a3fb9cf3d8c7013fa5c4a0470f6501 | ["MIT"]

import os
from flask import Flask, escape, request, jsonify
from marshmallow import ValidationError
from flask_pymongo import PyMongo

from src.auth.auth_exception import UserExistsException, UserNotFoundException, AccessDeniedException
from src.auth.controllers.auth import auth_blueprint
import src.settings
from src.secret.controllers.secret import secret_blueprint

app = Flask(__name__)
app.config["MONGO_URI"] = os.environ.get('MONGO_URL', 'mongodb://localhost:27017/db')
print(os.environ.get('MONGO_URL'))
mongo = PyMongo(app)

# set default version to v1
version = os.environ.get('API_VERSION', 'v1')
prefix = f"/api/{version}"

@app.errorhandler(ValidationError)
def validation_error_handler(err):
    errors = err.messages
    return jsonify(errors), 400

@app.errorhandler(UserExistsException)
def user_exists_error_handler(e):
    return jsonify({"error": e.msg}), 400

@app.errorhandler(AccessDeniedException)
def access_denied_error_handler(e):
    return jsonify({"error": e.msg}), 401

@app.errorhandler(UserNotFoundException)
def user_not_found_error_handler(e):
    return jsonify({"error": e.msg}), 404

app.register_blueprint(auth_blueprint, url_prefix=f'{prefix}/auth')
app.register_blueprint(secret_blueprint, url_prefix=f'{prefix}/secret')

@app.route(f'{prefix}/ping', methods=['GET'])
def ping():
    """
    Check if server is alive
    :return: "pong"
    """
    return "pong"
f73df60e449cd6eb00d2d05e8890a9778c9d095c | 4570 | py | Python | hub/urls.py | efforia/hub (forks: williamlagos/hub) | c51a5e9523e54e8cb4795a779b4e22c17599f92a | ["MIT"]

from __future__ import unicode_literals
from django.conf import settings
from django.conf.urls import include, url as django_url
from django.conf.urls.i18n import i18n_patterns
from django.contrib import admin
from django.conf.urls.static import static
from django.views.generic.base import TemplateView, RedirectView
from django.utils.translation import ugettext_lazy as _
from django_distill import distill_url as url

# import .views

admin.autodiscover()

def getNone(): return None

# urlpatterns = i18n_patterns(django_url(u'^admin/', include(admin.site.urls)))
urlpatterns = [
    # url(_(r'^shop/volumes/'),TemplateView.as_view(template_name='volumes.html'),name='volumes', distill_func=getNone),
    # url(_(u'^shop/'), include(u'cartridge.shop.urls'), name='shop'),
    url(u'^store/services', TemplateView.as_view(template_name='pages/services.html'), name='store_services', distill_func=getNone),
    url(u'^store/choose', TemplateView.as_view(template_name='pages/choose.html'), name='store_choose', distill_func=getNone),
    # url(u'^store/slip', invent.views.payment_slip, name='store_slip', distill_func=getNone),
    # url(u'^store/bank', invent.views.payment_bank, name='store_bank', distill_func=getNone),
    # url(u'^store/cancel', invent.views.payment_cancel, name='store_cancel', distill_func=getNone),
    # url(u'^store/execute', invent.views.payment_execute, name=u'payment_execute', distill_func=getNone),
    # url(u'^store/pay/(?P<order_id>\\d+)/$', invent.views.payment_redirect, name=u'payment_redirect', distill_func=getNone),
    # url(u'^store/orders/$', "cartridge.shop.views.order_history", name=u'order_history'),
    # url(r'^i18n/', include('django.conf.urls.i18n'), name='set_language', distill_func=getNone),
    url(_(r'^atom/'), TemplateView.as_view(template_name='atom/index.html'), name='sensors', distill_func=getNone),
    url(_(r'^hub/iot'), TemplateView.as_view(template_name='hub/iot.html'), name='iot', distill_func=getNone),
    url(_(r'^hub/eos'), TemplateView.as_view(template_name='hub/eos.html'), name='eos', distill_func=getNone),
    url(_(r'^hub/server'), TemplateView.as_view(template_name='hub/server.html'), name='server', distill_func=getNone),
    url(r'^hub/', TemplateView.as_view(template_name='hub/index.html'), name='hub', distill_func=getNone),
    url(_(r'^tv/eosd'), TemplateView.as_view(template_name='tv/eosd.html'), name='eosd', distill_func=getNone),
    url(_(r'^tv/mediacenter'), TemplateView.as_view(template_name='tv/mediacenter.html'), name='mediacenter', distill_func=getNone),
    url(_(r'^tv/videogame'), TemplateView.as_view(template_name='tv/videogame.html'), name='videogame', distill_func=getNone),
    url(_(r'^tv/'), TemplateView.as_view(template_name='tv/index.html'), name='hubpro', distill_func=getNone),
    # url(_(r'^services/design'), TemplateView.as_view(template_name='services/design.html'), name='hubdesign'),
    # url(_(r'^services/plans'), TemplateView.as_view(template_name='services/plans.html'), name='plans'),
    # url(_(r'^services/cloud'), TemplateView.as_view(template_name='services/cloud.html'), name='services'),
    # url(_(r'^services/partners'), TemplateView.as_view(template_name='services/partners.html'), name='partners'),
    # url(_(r'^services/apps'), TemplateView.as_view(template_name='services/apps.html'), name='apps'),
    # url(_(r'^services/developer'), TemplateView.as_view(template_name='services/developer.html'), name='developer'),
    # url(_(r'^services/'), TemplateView.as_view(template_name='services/index.html'), name='about'),
    url(_(r'^help/localization'), TemplateView.as_view(template_name='help/localization.html'), name='localization', distill_func=getNone),
    url(_(r'^help/warranty'), TemplateView.as_view(template_name='help/warranty.html'), name='warranty', distill_func=getNone),
    url(_(r'^help/documentation'), TemplateView.as_view(template_name='help/documentation.html'), name='documentation', distill_func=getNone),
    url(_(r'^help/'), TemplateView.as_view(template_name='help/index.html'), name='support', distill_func=getNone),
    url(u'^$', TemplateView.as_view(template_name='index.html'), name=u'home', distill_func=getNone),
    # url(r'^accountold/', RedirectView.as_view(url='/'), name=u'old_account_redirect', distill_func=getNone),
    # url(u'^', include(u'mezzanine.urls'))
] + static(settings.STATIC_URL, document_root=settings.STATIC_ROOT)

# handler404 = u'mezzanine.core.views.page_not_found'
# handler500 = u'mezzanine.core.views.server_error'
f73df6b323b883a4881d9651e3c80ae148188dbf | 6,078 | py | Python | utility/mad_api.py | Tabbomat/MADUtilities | 74f45820bfa1864f92f2eaf20a27308bd15f35c6 | ["MIT"] | 2 stars (2020-11-21 to 2021-02-07) | no issues | 1 fork (2021-02-07)
import time
from typing import Dict, List, Optional, Tuple
import requests
import utility.args
class MadObj:
def __init__(self, api, obj_id: int):
assert obj_id >= 0
self.id = obj_id
self._data = {}
self._api = api # type:Api
def _update_data(self):
raise NotImplementedError
@property
def raw_data(self) -> dict:
if not self._data:
self._update_data()
return self._data
class Geofence(MadObj):
def __init__(self, api, geofence_id: int):
super().__init__(api, geofence_id)
self._sa = {}
def _update_data(self):
self._data = self._api.get_json(f'/api/geofence/{self.id}')
@property
def name(self) -> str:
return self.raw_data['name']
@property
def fence_type(self) -> str:
return self.raw_data['fence_type']
@property
def sub_areas(self) -> Dict[str, List[Tuple[float, float]]]:
if not self._sa:
self._sa = {}
name = ''
points = []
for line in self.raw_data['fence_data']: # type:str
if line[0] == '[' and line[-1] == ']':
# save previous sub area
if points:
self._sa[name] = points
name = line[1:-1]
points = []
else:
p, q = line.split(',')
points.append((float(p), float(q)))
if points:
self._sa[name] = points
return self._sa
class Area(MadObj):
def __init__(self, api, area_id: int, name: Optional[str] = None):
super().__init__(api, area_id)
self._name: Optional[str] = name
self._sp: List[dict] = []
self._gi: Optional[Geofence] = None
def __repr__(self):
return f"{self.name} ({self.id})"
def _update_data(self):
self._data = self._api.get_json(f'/api/area/{self.id}')
@property
def init(self) -> bool:
return self.raw_data['init']
@property
def name(self) -> str:
return self._name or self.raw_data['name']
@property
def mode(self):
return self.raw_data['mode']
@property
def geofence_included(self) -> Optional[Geofence]:
if not self._gi:
id_ = self.raw_data.get('geofence_included', None) # type:Optional[str]
if id_ is None:
return None
self._gi = Geofence(self._api, int(id_[id_.rfind('/') + 1:]))
return self._gi
def recalculate(self, wait: bool = True, wait_initial: float = 5, wait_interval: float = 1):
self._api.post(f'/api/area/{self.id}', call="recalculate")
if wait:
wait_interval = min(wait_initial, wait_interval)
if not self.is_recalculating:
# either, recalculation was incredibly quick (and we will waste wait_initial seconds), or it has not started yet
wait_start = time.time()
while time.time() - wait_start < wait_initial:
time.sleep(wait_interval)
if self.is_recalculating:
break
# at this point recalculation should be running
while self.is_recalculating:
time.sleep(wait_interval)
@property
def is_recalculating(self) -> bool:
return self.id in self._api.get_json('/recalc_status')
@property
def spawnpoints(self) -> List[dict]:
if not self._sp:
self._sp = []
for index in range(len(self.geofence_included.sub_areas)):
self._sp.extend(self._api.get_json('/get_spawn_details', area_id=self.id, event_id=1, mode='ALL', index=index))
return self._sp
@property
def routecalc_id(self) -> int:
id_ = self.raw_data['routecalc'] # type:str
return int(id_[id_.rfind('/') + 1:])
@property
def routecalc(self) -> List[Tuple[float, float]]:
data = [line.split(',') for line in self._api.get_json(f'/api/routecalc/{self.routecalc_id}')['routefile']]
return [(float(lat), float(lon)) for lat, lon in data]
@routecalc.setter
def routecalc(self, data: List[Tuple[float, float]]):
data = [','.join(map(str, line)) for line in data]
self._api.patch(f'/api/routecalc/{self.routecalc_id}', routefile=data)
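`Area.recalculate` above uses a grace window before polling: recalculation may not have started yet when the POST returns. A stand-alone sketch of that wait loop, with the clock and status check injected so it can be exercised without MADmin (`wait_for_recalc` and its parameters are illustrative names, not part of the API):

```python
def wait_for_recalc(is_running, sleep, now, wait_initial=5.0, wait_interval=1.0):
    """Wait for a job that may not have started yet, then for it to finish.

    is_running: callable returning True while the job runs.
    sleep, now: injected clock functions (time.sleep/time.time in real use).
    """
    wait_interval = min(wait_initial, wait_interval)
    if not is_running():
        # Either the job finished instantly or it has not started yet:
        # give it up to wait_initial seconds to show up.
        start = now()
        while now() - start < wait_initial:
            sleep(wait_interval)
            if is_running():
                break
    # At this point the job (if any) should be running; poll until done.
    while is_running():
        sleep(wait_interval)
```

In `Area.recalculate` the status check is a MADmin request (`/recalc_status`), so `wait_interval` also rate-limits the API calls.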
class Api:
def __init__(self):
args = utility.args.parse_args()
self._mad_url: str = args['madmin_url']
self._mad_auth = (args['madmin_user'], args['madmin_password']) if args['madmin_user'] else None
self._areas = {}
try:
if requests.get(self._mad_url + '/settings/areas', auth=self._mad_auth).status_code != 200:
raise ValueError("Error trying to access MAD Api. Please check your config.")
except requests.exceptions.ConnectionError:
raise ValueError("Could not reach MAD. Please check your config, especially madmin_url")
def _update_areas(self):
areas = self.get_json('/api/area')['results']
areas = {int(area_id[area_id.rfind('/') + 1:]): name for area_id, name in areas.items()}
self._areas = {area_id: Area(self, area_id, name) for area_id, name in sorted(areas.items(), key=lambda k: k[0])}
@property
def areas(self) -> Dict[int, Area]:
if not self._areas:
self._update_areas()
return self._areas
def get(self, path: str, **kwargs):
requests.get(self._mad_url + path, params=kwargs, auth=self._mad_auth)
def get_json(self, path: str, **kwargs):
return requests.get(self._mad_url + path, params=kwargs, auth=self._mad_auth).json()
def post(self, path: str, **kwargs):
requests.post(self._mad_url + path, json=kwargs, headers={'Content-Type': 'application/json-rpc'}, auth=self._mad_auth)
def patch(self, path: str, **kwargs):
requests.patch(self._mad_url + path, json=kwargs, auth=self._mad_auth)
def apply_settings(self):
self.get('/reload')
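The `Geofence.sub_areas` property above parses MADmin fence data, where a `[name]` line opens a sub-area and the following `lat,lon` lines are its points. The same logic as a stand-alone function (`parse_fence` is an illustrative name):

```python
from typing import Dict, List, Tuple

def parse_fence(lines: List[str]) -> Dict[str, List[Tuple[float, float]]]:
    """Split '[name]' headers and 'lat,lon' rows into named point lists."""
    areas: Dict[str, List[Tuple[float, float]]] = {}
    name = ''
    points: List[Tuple[float, float]] = []
    for line in lines:
        if line[0] == '[' and line[-1] == ']':
            if points:  # close the previous sub-area before starting a new one
                areas[name] = points
            name = line[1:-1]
            points = []
        else:
            lat, lon = line.split(',')
            points.append((float(lat), float(lon)))
    if points:  # flush the last sub-area
        areas[name] = points
    return areas
```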
f73df82574b71212ffed937c7c167b4ea765bcd6 | 301 | py | Python | src/year2019/day09a.py | lancelote/advent_of_code | 06dda6ca034bc1e86addee7798bb9b2a34ff565b | ["Unlicense"] | 10 stars (2017-12-11 to 2021-12-09) | 260 issues (2015-12-09 to 2021-12-12) | no forks
"""2019 - Day 9 Part 1: Sensor Boost."""
from src.year2019.intcode import Computer
def solve(task: str) -> int:
"""Find BOOST key code."""
computer = Computer()
computer.load_program(task)
computer.stdin.append(1) # test mode
computer.execute()
return computer.stdout.pop()
f73df856f61cc2ca3ec3941f6a7b4862c59e86fa | 18,652 | py | Python | wo/cli/plugins/stack_upgrade.py | searchboy-sudo/WordOps | 71926580fd396acb2535b15aafe330aa244601df | ["MIT"] | no stars | no issues | no forks
import os
import shutil
from cement.core.controller import CementBaseController, expose
from wo.cli.plugins.stack_pref import post_pref, pre_pref, pre_stack
from wo.core.aptget import WOAptGet
from wo.core.download import WODownload
from wo.core.extract import WOExtract
from wo.core.fileutils import WOFileUtils
from wo.core.logging import Log
from wo.core.shellexec import WOShellExec
from wo.core.variables import WOVar
from wo.core.services import WOService
class WOStackUpgradeController(CementBaseController):
class Meta:
label = 'upgrade'
stacked_on = 'stack'
stacked_type = 'nested'
description = ('Upgrade stack safely')
arguments = [
(['--all'],
dict(help='Upgrade all stack', action='store_true')),
(['--web'],
dict(help='Upgrade web stack', action='store_true')),
(['--admin'],
dict(help='Upgrade admin tools stack', action='store_true')),
(['--security'],
dict(help='Upgrade security stack', action='store_true')),
(['--nginx'],
dict(help='Upgrade Nginx stack', action='store_true')),
(['--php'],
dict(help='Upgrade PHP 7.2 stack', action='store_true')),
(['--php72'],
dict(help='Upgrade PHP 7.2 stack', action='store_true')),
(['--php73'],
dict(help='Upgrade PHP 7.3 stack', action='store_true')),
(['--php74'],
dict(help='Upgrade PHP 7.4 stack', action='store_true')),
(['--mysql'],
dict(help='Upgrade MySQL stack', action='store_true')),
(['--wpcli'],
dict(help='Upgrade WPCLI', action='store_true')),
(['--redis'],
dict(help='Upgrade Redis', action='store_true')),
(['--netdata'],
dict(help='Upgrade Netdata', action='store_true')),
(['--fail2ban'],
dict(help='Upgrade Fail2Ban', action='store_true')),
(['--dashboard'],
dict(help='Upgrade WordOps Dashboard', action='store_true')),
(['--composer'],
dict(help='Upgrade Composer', action='store_true')),
        (['--mysqltuner'],
            dict(help='Upgrade MySQLTuner', action='store_true')),
(['--phpmyadmin'],
dict(help='Upgrade phpMyAdmin', action='store_true')),
(['--adminer'],
dict(help='Upgrade Adminer', action='store_true')),
        (['--ngxblocker'],
            dict(help='Upgrade ngxblocker', action='store_true')),
(['--no-prompt'],
dict(help="Upgrade Packages without any prompt",
action='store_true')),
(['--force'],
dict(help="Force Packages upgrade without any prompt",
action='store_true')),
]
@expose(hide=True)
def default(self, disp_msg=False):
# All package update
apt_packages = []
packages = []
self.msg = []
pargs = self.app.pargs
wo_phpmyadmin = WODownload.pma_release(self)
if not (pargs.web or pargs.nginx or pargs.php or
pargs.php72 or pargs.php73 or pargs.php74 or pargs.mysql or
pargs.ngxblocker or pargs.all or pargs.netdata or
pargs.wpcli or pargs.composer or pargs.phpmyadmin or
pargs.adminer or pargs.dashboard or pargs.mysqltuner or
pargs.redis or pargs.fail2ban or pargs.security):
pargs.web = True
pargs.admin = True
pargs.security = True
if pargs.php:
pargs.php72 = True
if pargs.all:
pargs.web = True
pargs.admin = True
pargs.security = True
pargs.redis = True
if pargs.web:
pargs.nginx = True
pargs.php72 = True
pargs.php73 = True
pargs.php74 = True
pargs.mysql = True
pargs.wpcli = True
if pargs.admin:
pargs.netdata = True
pargs.composer = True
pargs.dashboard = True
pargs.phpmyadmin = True
pargs.wpcli = True
pargs.adminer = True
pargs.mysqltuner = True
if pargs.security:
pargs.ngxblocker = True
pargs.fail2ban = True
# nginx
if pargs.nginx:
if WOAptGet.is_installed(self, 'nginx-custom'):
apt_packages = apt_packages + WOVar.wo_nginx
else:
if os.path.isfile('/usr/sbin/nginx'):
Log.info(self, "Updating Nginx templates")
post_pref(self, WOVar.wo_nginx, [])
else:
Log.info(self, "Nginx Stable is not already installed")
# php 7.2
if pargs.php72:
if WOAptGet.is_installed(self, 'php7.2-fpm'):
apt_packages = apt_packages + WOVar.wo_php72 + \
WOVar.wo_php_extra
# php 7.3
if pargs.php73:
if WOAptGet.is_installed(self, 'php7.3-fpm'):
apt_packages = apt_packages + WOVar.wo_php73 + \
WOVar.wo_php_extra
# php 7.4
if pargs.php74:
if WOAptGet.is_installed(self, 'php7.4-fpm'):
apt_packages = apt_packages + WOVar.wo_php74 + \
WOVar.wo_php_extra
# mysql
if pargs.mysql:
if WOShellExec.cmd_exec(self, 'mysqladmin ping'):
apt_packages = apt_packages + ['mariadb-server']
# redis
if pargs.redis:
if WOAptGet.is_installed(self, 'redis-server'):
apt_packages = apt_packages + ['redis-server']
# fail2ban
if pargs.fail2ban:
if WOAptGet.is_installed(self, 'fail2ban'):
apt_packages = apt_packages + ['fail2ban']
# wp-cli
if pargs.wpcli:
if os.path.isfile('/usr/local/bin/wp'):
packages = packages + [[
"https://github.com/wp-cli/wp-cli/"
"releases/download/v{0}/"
"wp-cli-{0}.phar".format(WOVar.wo_wp_cli),
"/usr/local/bin/wp",
"WP-CLI"]]
else:
Log.info(self, "WPCLI is not installed with WordOps")
# netdata
if pargs.netdata:
# detect static binaries install
if os.path.isdir('/opt/netdata'):
packages = packages + [[
'https://my-netdata.io/kickstart-static64.sh',
'/var/lib/wo/tmp/kickstart.sh', 'Netdata']]
# detect install from source
elif os.path.isdir('/etc/netdata'):
packages = packages + [[
'https://my-netdata.io/kickstart.sh',
'/var/lib/wo/tmp/kickstart.sh', 'Netdata']]
else:
Log.info(self, 'Netdata is not installed')
# wordops dashboard
if pargs.dashboard:
if (os.path.isfile('/var/www/22222/htdocs/index.php') or
os.path.isfile('/var/www/22222/htdocs/index.html')):
packages = packages + [[
"https://github.com/WordOps/wordops-dashboard/"
"releases/download/v{0}/wordops-dashboard.tar.gz"
.format(WOVar.wo_dashboard),
"/var/lib/wo/tmp/wo-dashboard.tar.gz",
"WordOps Dashboard"]]
else:
Log.info(self, 'WordOps dashboard is not installed')
# phpmyadmin
if pargs.phpmyadmin:
if os.path.isdir('/var/www/22222/htdocs/db/pma'):
packages = packages + [[
"https://files.phpmyadmin.net"
"/phpMyAdmin/{0}/phpMyAdmin-{0}-"
"all-languages.tar.gz"
.format(wo_phpmyadmin),
"/var/lib/wo/tmp/pma.tar.gz",
"PHPMyAdmin"]]
else:
Log.info(self, "phpMyAdmin isn't installed")
# adminer
if pargs.adminer:
if os.path.isfile("{0}22222/htdocs/db/"
"adminer/index.php"
.format(WOVar.wo_webroot)):
Log.debug(self, "Setting packages variable for Adminer ")
packages = packages + [[
"https://www.adminer.org/latest.php",
"{0}22222/"
"htdocs/db/adminer/index.php"
.format(WOVar.wo_webroot),
"Adminer"],
["https://raw.githubusercontent.com"
"/vrana/adminer/master/designs/"
"pepa-linha/adminer.css",
"{0}22222/"
"htdocs/db/adminer/adminer.css"
.format(WOVar.wo_webroot),
"Adminer theme"]]
else:
Log.debug(self, "Adminer isn't installed")
Log.info(self, "Adminer isn't installed")
# composer
if pargs.composer:
if os.path.isfile('/usr/local/bin/composer'):
packages = packages + [[
"https://getcomposer.org/installer",
"/var/lib/wo/tmp/composer-install",
"Composer"]]
else:
Log.info(self, "Composer isn't installed")
# mysqltuner
if pargs.mysqltuner:
if WOAptGet.is_exec(self, 'mysqltuner'):
Log.debug(self, "Setting packages variable "
"for MySQLTuner ")
packages = packages + [["https://raw."
"githubusercontent.com/"
"major/MySQLTuner-perl"
"/master/mysqltuner.pl",
"/usr/bin/mysqltuner",
"MySQLTuner"]]
# ngxblocker
if pargs.ngxblocker:
if os.path.exists('/usr/local/sbin/install-ngxblocker'):
packages = packages + [[
'https://raw.githubusercontent.com/mitchellkrogza/'
'nginx-ultimate-bad-bot-blocker/master/update-ngxblocker',
'/usr/local/sbin/update-ngxblocker',
'ngxblocker'
]]
        if not apt_packages and not packages:
self.app.args.print_help()
else:
pre_stack(self)
if (apt_packages):
if not ("php7.2-fpm" in apt_packages or
"php7.3-fpm" in apt_packages or
"php7.4-fpm" in apt_packages or
"redis-server" in apt_packages or
"nginx-custom" in apt_packages or
"mariadb-server" in apt_packages):
pass
else:
Log.warn(
self, "Your sites may be down for few seconds if "
"you are upgrading Nginx, PHP-FPM, MariaDB or Redis")
# Check prompt
if not (pargs.no_prompt or pargs.force):
start_upgrade = input("Do you want to continue:[y/N]")
if start_upgrade != "Y" and start_upgrade != "y":
Log.error(self, "Not starting package update")
Log.wait(self, "Updating APT cache")
# apt-get update
WOAptGet.update(self)
Log.valide(self, "Updating APT cache")
# additional pre_pref
if "nginx-custom" in apt_packages:
pre_pref(self, WOVar.wo_nginx)
if "php7.2-fpm" in apt_packages:
WOAptGet.remove(self, ['php7.2-fpm'],
auto=False, purge=True)
if "php7.3-fpm" in apt_packages:
WOAptGet.remove(self, ['php7.3-fpm'],
auto=False, purge=True)
if "php7.4-fpm" in apt_packages:
WOAptGet.remove(self, ['php7.4-fpm'],
auto=False, purge=True)
# check if nginx upgrade is blocked
if os.path.isfile(
'/etc/apt/preferences.d/nginx-block'):
post_pref(self, WOVar.wo_nginx, [], True)
# upgrade packages
WOAptGet.install(self, apt_packages)
Log.wait(self, "Configuring APT Packages")
post_pref(self, apt_packages, [], True)
if "mariadb-server" in apt_packages:
WOShellExec.cmd_exec(self, 'mysql_upgrade')
Log.valide(self, "Configuring APT Packages")
# Post Actions after package updates
if (packages):
if WOAptGet.is_selected(self, 'WP-CLI', packages):
WOFileUtils.rm(self, '/usr/local/bin/wp')
if WOAptGet.is_selected(self, 'Netdata', packages):
WOFileUtils.rm(self, '/var/lib/wo/tmp/kickstart.sh')
if WOAptGet.is_selected(self, 'ngxblocker', packages):
WOFileUtils.rm(self, '/usr/local/sbin/update-ngxblocker')
if WOAptGet.is_selected(self, 'WordOps Dashboard', packages):
if os.path.isfile('/var/www/22222/htdocs/index.php'):
WOFileUtils.rm(self, '/var/www/22222/htdocs/index.php')
if os.path.isfile('/var/www/22222/htdocs/index.html'):
WOFileUtils.rm(
self, '/var/www/22222/htdocs/index.html')
Log.debug(self, "Downloading following: {0}".format(packages))
WODownload.download(self, packages)
if WOAptGet.is_selected(self, 'WP-CLI', packages):
WOFileUtils.chmod(self, "/usr/local/bin/wp", 0o775)
if WOAptGet.is_selected(self, 'ngxblocker', packages):
if os.path.exists('/etc/nginx/conf.d/variables-hash.conf'):
WOFileUtils.rm(
self, '/etc/nginx/conf.d/variables-hash.conf')
WOFileUtils.chmod(
self, '/usr/local/sbin/update-ngxblocker', 0o775)
WOShellExec.cmd_exec(
self, '/usr/local/sbin/update-ngxblocker -nq')
if WOAptGet.is_selected(self, 'MySQLTuner', packages):
WOFileUtils.chmod(self, "/usr/bin/mysqltuner", 0o775)
if os.path.exists('/usr/local/bin/mysqltuner'):
WOFileUtils.rm(self, '/usr/local/bin/mysqltuner')
# Netdata
if WOAptGet.is_selected(self, 'Netdata', packages):
WOService.stop_service(self, 'netdata')
Log.wait(self, "Upgrading Netdata")
# detect static binaries install
WOShellExec.cmd_exec(
self,
"bash /var/lib/wo/tmp/kickstart.sh "
"--dont-wait --no-updates",
errormsg='', log=False)
Log.valide(self, "Upgrading Netdata")
if WOAptGet.is_selected(self, 'WordOps Dashboard', packages):
post_pref(
self, [], [["https://github.com/WordOps"
"/wordops-dashboard/"
"releases/download/v{0}/"
"wordops-dashboard.tar.gz"
.format(WOVar.wo_dashboard),
"/var/lib/wo/tmp/wo-dashboard.tar.gz",
"WordOps Dashboard"]])
if WOAptGet.is_selected(self, 'Composer', packages):
Log.wait(self, "Upgrading Composer")
if WOShellExec.cmd_exec(
self, '/usr/bin/php -v'):
WOShellExec.cmd_exec(
self, "php -q /var/lib/wo"
"/tmp/composer-install "
"--install-dir=/var/lib/wo/tmp/")
shutil.copyfile('/var/lib/wo/tmp/composer.phar',
'/usr/local/bin/composer')
WOFileUtils.chmod(self, "/usr/local/bin/composer", 0o775)
Log.valide(self, "Upgrading Composer ")
if WOAptGet.is_selected(self, 'PHPMyAdmin', packages):
Log.wait(self, "Upgrading phpMyAdmin")
WOExtract.extract(self, '/var/lib/wo/tmp/pma.tar.gz',
'/var/lib/wo/tmp/')
shutil.copyfile(('{0}22222/htdocs/db/pma'
'/config.inc.php'
.format(WOVar.wo_webroot)),
('/var/lib/wo/tmp/phpMyAdmin-{0}'
'-all-languages/config.inc.php'
.format(wo_phpmyadmin))
)
WOFileUtils.rm(self, '{0}22222/htdocs/db/pma'
.format(WOVar.wo_webroot))
shutil.move('/var/lib/wo/tmp/phpMyAdmin-{0}'
'-all-languages/'
.format(wo_phpmyadmin),
'{0}22222/htdocs/db/pma/'
.format(WOVar.wo_webroot))
WOFileUtils.chown(self, "{0}22222/htdocs"
.format(WOVar.wo_webroot),
'www-data',
'www-data', recursive=True)
Log.valide(self, "Upgrading phpMyAdmin")
if os.path.exists('{0}22222/htdocs'.format(WOVar.wo_webroot)):
WOFileUtils.chown(self, "{0}22222/htdocs"
.format(WOVar.wo_webroot),
'www-data',
'www-data', recursive=True)
Log.info(self, "Successfully updated packages")
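Throughout the upgrade flow above, downloads are described as `[url, destination, label]` triples and queried with `WOAptGet.is_selected`. That helper is not shown in this chunk; a plausible stand-alone sketch of the check (the real implementation lives in `wo.core.aptget` and may differ):

```python
from typing import List, Sequence

def is_selected(label: str, packages: Sequence[List[str]]) -> bool:
    """Return True if any [url, destination, label] triple carries this label."""
    return any(entry[2] == label for entry in packages)
```

This is why each post-download action (chmod for WP-CLI, extraction for phpMyAdmin, and so on) keys off the human-readable label in the third slot of the triple.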
| 43.887059 | 79 | 0.477965 | import os
import shutil
from cement.core.controller import CementBaseController, expose
from wo.cli.plugins.stack_pref import post_pref, pre_pref, pre_stack
from wo.core.aptget import WOAptGet
from wo.core.download import WODownload
from wo.core.extract import WOExtract
from wo.core.fileutils import WOFileUtils
from wo.core.logging import Log
from wo.core.shellexec import WOShellExec
from wo.core.variables import WOVar
from wo.core.services import WOService
class WOStackUpgradeController(CementBaseController):
class Meta:
label = 'upgrade'
stacked_on = 'stack'
stacked_type = 'nested'
description = ('Upgrade stack safely')
arguments = [
(['--all'],
dict(help='Upgrade all stack', action='store_true')),
(['--web'],
dict(help='Upgrade web stack', action='store_true')),
(['--admin'],
dict(help='Upgrade admin tools stack', action='store_true')),
(['--security'],
dict(help='Upgrade security stack', action='store_true')),
(['--nginx'],
dict(help='Upgrade Nginx stack', action='store_true')),
(['--php'],
dict(help='Upgrade PHP 7.2 stack', action='store_true')),
(['--php72'],
dict(help='Upgrade PHP 7.2 stack', action='store_true')),
(['--php73'],
dict(help='Upgrade PHP 7.3 stack', action='store_true')),
(['--php74'],
dict(help='Upgrade PHP 7.4 stack', action='store_true')),
(['--mysql'],
dict(help='Upgrade MySQL stack', action='store_true')),
(['--wpcli'],
dict(help='Upgrade WPCLI', action='store_true')),
(['--redis'],
dict(help='Upgrade Redis', action='store_true')),
(['--netdata'],
dict(help='Upgrade Netdata', action='store_true')),
(['--fail2ban'],
dict(help='Upgrade Fail2Ban', action='store_true')),
(['--dashboard'],
dict(help='Upgrade WordOps Dashboard', action='store_true')),
(['--composer'],
dict(help='Upgrade Composer', action='store_true')),
(['--mysqltuner'],
dict(help='Upgrade Composer', action='store_true')),
(['--phpmyadmin'],
dict(help='Upgrade phpMyAdmin', action='store_true')),
(['--adminer'],
dict(help='Upgrade Adminer', action='store_true')),
(['--ngxblocker'],
dict(help='Upgrade phpMyAdmin', action='store_true')),
(['--no-prompt'],
dict(help="Upgrade Packages without any prompt",
action='store_true')),
(['--force'],
dict(help="Force Packages upgrade without any prompt",
action='store_true')),
]
@expose(hide=True)
def default(self, disp_msg=False):
apt_packages = []
packages = []
self.msg = []
pargs = self.app.pargs
wo_phpmyadmin = WODownload.pma_release(self)
if not (pargs.web or pargs.nginx or pargs.php or
pargs.php72 or pargs.php73 or pargs.php74 or pargs.mysql or
pargs.ngxblocker or pargs.all or pargs.netdata or
pargs.wpcli or pargs.composer or pargs.phpmyadmin or
pargs.adminer or pargs.dashboard or pargs.mysqltuner or
pargs.redis or pargs.fail2ban or pargs.security):
pargs.web = True
pargs.admin = True
pargs.security = True
if pargs.php:
pargs.php72 = True
if pargs.all:
pargs.web = True
pargs.admin = True
pargs.security = True
pargs.redis = True
if pargs.web:
pargs.nginx = True
pargs.php72 = True
pargs.php73 = True
pargs.php74 = True
pargs.mysql = True
pargs.wpcli = True
if pargs.admin:
pargs.netdata = True
pargs.composer = True
pargs.dashboard = True
pargs.phpmyadmin = True
pargs.wpcli = True
pargs.adminer = True
pargs.mysqltuner = True
if pargs.security:
pargs.ngxblocker = True
pargs.fail2ban = True
if pargs.nginx:
if WOAptGet.is_installed(self, 'nginx-custom'):
apt_packages = apt_packages + WOVar.wo_nginx
else:
if os.path.isfile('/usr/sbin/nginx'):
Log.info(self, "Updating Nginx templates")
post_pref(self, WOVar.wo_nginx, [])
else:
Log.info(self, "Nginx Stable is not already installed")
if pargs.php72:
if WOAptGet.is_installed(self, 'php7.2-fpm'):
apt_packages = apt_packages + WOVar.wo_php72 + \
WOVar.wo_php_extra
if pargs.php73:
if WOAptGet.is_installed(self, 'php7.3-fpm'):
apt_packages = apt_packages + WOVar.wo_php73 + \
WOVar.wo_php_extra
if pargs.php74:
if WOAptGet.is_installed(self, 'php7.4-fpm'):
apt_packages = apt_packages + WOVar.wo_php74 + \
WOVar.wo_php_extra
if pargs.mysql:
if WOShellExec.cmd_exec(self, 'mysqladmin ping'):
apt_packages = apt_packages + ['mariadb-server']
if pargs.redis:
if WOAptGet.is_installed(self, 'redis-server'):
apt_packages = apt_packages + ['redis-server']
if pargs.fail2ban:
if WOAptGet.is_installed(self, 'fail2ban'):
apt_packages = apt_packages + ['fail2ban']
if pargs.wpcli:
if os.path.isfile('/usr/local/bin/wp'):
packages = packages + [[
"https://github.com/wp-cli/wp-cli/"
"releases/download/v{0}/"
"wp-cli-{0}.phar".format(WOVar.wo_wp_cli),
"/usr/local/bin/wp",
"WP-CLI"]]
else:
Log.info(self, "WPCLI is not installed with WordOps")
if pargs.netdata:
if os.path.isdir('/opt/netdata'):
packages = packages + [[
'https://my-netdata.io/kickstart-static64.sh',
'/var/lib/wo/tmp/kickstart.sh', 'Netdata']]
elif os.path.isdir('/etc/netdata'):
packages = packages + [[
'https://my-netdata.io/kickstart.sh',
'/var/lib/wo/tmp/kickstart.sh', 'Netdata']]
else:
Log.info(self, 'Netdata is not installed')
if pargs.dashboard:
if (os.path.isfile('/var/www/22222/htdocs/index.php') or
os.path.isfile('/var/www/22222/htdocs/index.html')):
packages = packages + [[
"https://github.com/WordOps/wordops-dashboard/"
"releases/download/v{0}/wordops-dashboard.tar.gz"
.format(WOVar.wo_dashboard),
"/var/lib/wo/tmp/wo-dashboard.tar.gz",
"WordOps Dashboard"]]
else:
Log.info(self, 'WordOps dashboard is not installed')
if pargs.phpmyadmin:
if os.path.isdir('/var/www/22222/htdocs/db/pma'):
packages = packages + [[
"https://files.phpmyadmin.net"
"/phpMyAdmin/{0}/phpMyAdmin-{0}-"
"all-languages.tar.gz"
.format(wo_phpmyadmin),
"/var/lib/wo/tmp/pma.tar.gz",
"PHPMyAdmin"]]
else:
Log.info(self, "phpMyAdmin isn't installed")
# adminer
if pargs.adminer:
if os.path.isfile("{0}22222/htdocs/db/"
"adminer/index.php"
.format(WOVar.wo_webroot)):
Log.debug(self, "Setting packages variable for Adminer ")
packages = packages + [[
"https://www.adminer.org/latest.php",
"{0}22222/"
"htdocs/db/adminer/index.php"
.format(WOVar.wo_webroot),
"Adminer"],
["https://raw.githubusercontent.com"
"/vrana/adminer/master/designs/"
"pepa-linha/adminer.css",
"{0}22222/"
"htdocs/db/adminer/adminer.css"
.format(WOVar.wo_webroot),
"Adminer theme"]]
else:
Log.debug(self, "Adminer isn't installed")
Log.info(self, "Adminer isn't installed")
# composer
if pargs.composer:
if os.path.isfile('/usr/local/bin/composer'):
packages = packages + [[
"https://getcomposer.org/installer",
"/var/lib/wo/tmp/composer-install",
"Composer"]]
else:
Log.info(self, "Composer isn't installed")
if pargs.mysqltuner:
if WOAptGet.is_exec(self, 'mysqltuner'):
Log.debug(self, "Setting packages variable "
"for MySQLTuner ")
packages = packages + [["https://raw."
"githubusercontent.com/"
"major/MySQLTuner-perl"
"/master/mysqltuner.pl",
"/usr/bin/mysqltuner",
"MySQLTuner"]]
if pargs.ngxblocker:
if os.path.exists('/usr/local/sbin/install-ngxblocker'):
packages = packages + [[
'https://raw.githubusercontent.com/mitchellkrogza/'
'nginx-ultimate-bad-bot-blocker/master/update-ngxblocker',
'/usr/local/sbin/update-ngxblocker',
'ngxblocker'
]]
        if not apt_packages and not packages:
self.app.args.print_help()
else:
pre_stack(self)
            if apt_packages:
                if ("php7.2-fpm" in apt_packages or
                        "php7.3-fpm" in apt_packages or
                        "php7.4-fpm" in apt_packages or
                        "redis-server" in apt_packages or
                        "nginx-custom" in apt_packages or
                        "mariadb-server" in apt_packages):
                    Log.warn(
                        self, "Your sites may be down for a few seconds if "
                        "you are upgrading Nginx, PHP-FPM, MariaDB or Redis")
if not (pargs.no_prompt or pargs.force):
                    start_upgrade = input("Do you want to continue? [y/N] ")
                    if start_upgrade.lower() != "y":
Log.error(self, "Not starting package update")
Log.wait(self, "Updating APT cache")
WOAptGet.update(self)
Log.valide(self, "Updating APT cache")
if "nginx-custom" in apt_packages:
pre_pref(self, WOVar.wo_nginx)
if "php7.2-fpm" in apt_packages:
WOAptGet.remove(self, ['php7.2-fpm'],
auto=False, purge=True)
if "php7.3-fpm" in apt_packages:
WOAptGet.remove(self, ['php7.3-fpm'],
auto=False, purge=True)
if "php7.4-fpm" in apt_packages:
WOAptGet.remove(self, ['php7.4-fpm'],
auto=False, purge=True)
if os.path.isfile(
'/etc/apt/preferences.d/nginx-block'):
post_pref(self, WOVar.wo_nginx, [], True)
WOAptGet.install(self, apt_packages)
Log.wait(self, "Configuring APT Packages")
post_pref(self, apt_packages, [], True)
if "mariadb-server" in apt_packages:
WOShellExec.cmd_exec(self, 'mysql_upgrade')
Log.valide(self, "Configuring APT Packages")
            if packages:
if WOAptGet.is_selected(self, 'WP-CLI', packages):
WOFileUtils.rm(self, '/usr/local/bin/wp')
if WOAptGet.is_selected(self, 'Netdata', packages):
WOFileUtils.rm(self, '/var/lib/wo/tmp/kickstart.sh')
if WOAptGet.is_selected(self, 'ngxblocker', packages):
WOFileUtils.rm(self, '/usr/local/sbin/update-ngxblocker')
if WOAptGet.is_selected(self, 'WordOps Dashboard', packages):
if os.path.isfile('/var/www/22222/htdocs/index.php'):
WOFileUtils.rm(self, '/var/www/22222/htdocs/index.php')
if os.path.isfile('/var/www/22222/htdocs/index.html'):
WOFileUtils.rm(
self, '/var/www/22222/htdocs/index.html')
Log.debug(self, "Downloading following: {0}".format(packages))
WODownload.download(self, packages)
if WOAptGet.is_selected(self, 'WP-CLI', packages):
WOFileUtils.chmod(self, "/usr/local/bin/wp", 0o775)
if WOAptGet.is_selected(self, 'ngxblocker', packages):
if os.path.exists('/etc/nginx/conf.d/variables-hash.conf'):
WOFileUtils.rm(
self, '/etc/nginx/conf.d/variables-hash.conf')
WOFileUtils.chmod(
self, '/usr/local/sbin/update-ngxblocker', 0o775)
WOShellExec.cmd_exec(
self, '/usr/local/sbin/update-ngxblocker -nq')
if WOAptGet.is_selected(self, 'MySQLTuner', packages):
WOFileUtils.chmod(self, "/usr/bin/mysqltuner", 0o775)
if os.path.exists('/usr/local/bin/mysqltuner'):
WOFileUtils.rm(self, '/usr/local/bin/mysqltuner')
if WOAptGet.is_selected(self, 'Netdata', packages):
WOService.stop_service(self, 'netdata')
Log.wait(self, "Upgrading Netdata")
WOShellExec.cmd_exec(
self,
"bash /var/lib/wo/tmp/kickstart.sh "
"--dont-wait --no-updates",
errormsg='', log=False)
Log.valide(self, "Upgrading Netdata")
if WOAptGet.is_selected(self, 'WordOps Dashboard', packages):
post_pref(
self, [], [["https://github.com/WordOps"
"/wordops-dashboard/"
"releases/download/v{0}/"
"wordops-dashboard.tar.gz"
.format(WOVar.wo_dashboard),
"/var/lib/wo/tmp/wo-dashboard.tar.gz",
"WordOps Dashboard"]])
if WOAptGet.is_selected(self, 'Composer', packages):
Log.wait(self, "Upgrading Composer")
if WOShellExec.cmd_exec(
self, '/usr/bin/php -v'):
WOShellExec.cmd_exec(
self, "php -q /var/lib/wo"
"/tmp/composer-install "
"--install-dir=/var/lib/wo/tmp/")
shutil.copyfile('/var/lib/wo/tmp/composer.phar',
'/usr/local/bin/composer')
WOFileUtils.chmod(self, "/usr/local/bin/composer", 0o775)
                Log.valide(self, "Upgrading Composer")
if WOAptGet.is_selected(self, 'PHPMyAdmin', packages):
Log.wait(self, "Upgrading phpMyAdmin")
WOExtract.extract(self, '/var/lib/wo/tmp/pma.tar.gz',
'/var/lib/wo/tmp/')
shutil.copyfile(('{0}22222/htdocs/db/pma'
'/config.inc.php'
.format(WOVar.wo_webroot)),
('/var/lib/wo/tmp/phpMyAdmin-{0}'
'-all-languages/config.inc.php'
.format(wo_phpmyadmin))
)
WOFileUtils.rm(self, '{0}22222/htdocs/db/pma'
.format(WOVar.wo_webroot))
shutil.move('/var/lib/wo/tmp/phpMyAdmin-{0}'
'-all-languages/'
.format(wo_phpmyadmin),
'{0}22222/htdocs/db/pma/'
.format(WOVar.wo_webroot))
WOFileUtils.chown(self, "{0}22222/htdocs"
.format(WOVar.wo_webroot),
'www-data',
'www-data', recursive=True)
Log.valide(self, "Upgrading phpMyAdmin")
if os.path.exists('{0}22222/htdocs'.format(WOVar.wo_webroot)):
WOFileUtils.chown(self, "{0}22222/htdocs"
.format(WOVar.wo_webroot),
'www-data',
'www-data', recursive=True)
Log.info(self, "Successfully updated packages")
| true | true |
f73df968619061098e13b4fa7b996adf00b27af2 | 3,540 | py | Python | grpc_demo/pi_pb2.py | ResolveWang/rpc_demo | 1585404377f13d1d366b14917c0ec0b92466f428 | ["MIT"] | 10 | 2019-01-15T02:32:19.000Z | 2021-04-28T07:00:30.000Z | grpc_demo/pi_pb2.py | ResolveWang/rpc_demo | 1585404377f13d1d366b14917c0ec0b92466f428 | ["MIT"] | null | null | null | grpc_demo/pi_pb2.py | ResolveWang/rpc_demo | 1585404377f13d1d366b14917c0ec0b92466f428 | ["MIT"] | 7 | 2019-02-20T15:54:24.000Z | 2022-01-05T07:44:01.000Z |
# source: pi.proto
import sys
_b=sys.version_info[0]<3 and (lambda x:x) or (lambda x:x.encode('latin1'))
from google.protobuf import descriptor as _descriptor
from google.protobuf import message as _message
from google.protobuf import reflection as _reflection
from google.protobuf import symbol_database as _symbol_database
# @@protoc_insertion_point(imports)
_sym_db = _symbol_database.Default()
DESCRIPTOR = _descriptor.FileDescriptor(
name='pi.proto',
package='pi',
syntax='proto3',
serialized_options=None,
serialized_pb=_b('\n\x08pi.proto\x12\x02pi\"\x16\n\tPiRequest\x12\t\n\x01n\x18\x01 \x01(\x05\"\x1b\n\nPiResponse\x12\r\n\x05value\x18\x01 \x01(\x01\x32\x37\n\x0cPiCalculator\x12\'\n\x04\x43\x61lc\x12\r.pi.PiRequest\x1a\x0e.pi.PiResponse\"\x00\x62\x06proto3')
)
_PIREQUEST = _descriptor.Descriptor(
name='PiRequest',
full_name='pi.PiRequest',
filename=None,
file=DESCRIPTOR,
containing_type=None,
fields=[
_descriptor.FieldDescriptor(
name='n', full_name='pi.PiRequest.n', index=0,
number=1, type=5, cpp_type=1, label=1,
has_default_value=False, default_value=0,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
],
extensions=[
],
nested_types=[],
enum_types=[
],
serialized_options=None,
is_extendable=False,
syntax='proto3',
extension_ranges=[],
oneofs=[
],
serialized_start=16,
serialized_end=38,
)
_PIRESPONSE = _descriptor.Descriptor(
name='PiResponse',
full_name='pi.PiResponse',
filename=None,
file=DESCRIPTOR,
containing_type=None,
fields=[
_descriptor.FieldDescriptor(
name='value', full_name='pi.PiResponse.value', index=0,
number=1, type=1, cpp_type=5, label=1,
has_default_value=False, default_value=float(0),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
],
extensions=[
],
nested_types=[],
enum_types=[
],
serialized_options=None,
is_extendable=False,
syntax='proto3',
extension_ranges=[],
oneofs=[
],
serialized_start=40,
serialized_end=67,
)
DESCRIPTOR.message_types_by_name['PiRequest'] = _PIREQUEST
DESCRIPTOR.message_types_by_name['PiResponse'] = _PIRESPONSE
_sym_db.RegisterFileDescriptor(DESCRIPTOR)
PiRequest = _reflection.GeneratedProtocolMessageType('PiRequest', (_message.Message,), dict(
DESCRIPTOR = _PIREQUEST,
__module__ = 'pi_pb2'
# @@protoc_insertion_point(class_scope:pi.PiRequest)
))
_sym_db.RegisterMessage(PiRequest)
PiResponse = _reflection.GeneratedProtocolMessageType('PiResponse', (_message.Message,), dict(
DESCRIPTOR = _PIRESPONSE,
__module__ = 'pi_pb2'
# @@protoc_insertion_point(class_scope:pi.PiResponse)
))
_sym_db.RegisterMessage(PiResponse)
_PICALCULATOR = _descriptor.ServiceDescriptor(
name='PiCalculator',
full_name='pi.PiCalculator',
file=DESCRIPTOR,
index=0,
serialized_options=None,
serialized_start=69,
serialized_end=124,
methods=[
_descriptor.MethodDescriptor(
name='Calc',
full_name='pi.PiCalculator.Calc',
index=0,
containing_service=None,
input_type=_PIREQUEST,
output_type=_PIRESPONSE,
serialized_options=None,
),
])
_sym_db.RegisterServiceDescriptor(_PICALCULATOR)
DESCRIPTOR.services_by_name['PiCalculator'] = _PICALCULATOR
# @@protoc_insertion_point(module_scope)
| 26.616541 | 260 | 0.742938 |
| true | true |
f73dfb23c6b523f62cb80af67168bb1a2d0c4d1b | 337 | py | Python | __init__.py | Aran-Fey/introspection | 0ce3a16688b51bdcb72c7b070d571a1004f5151b | ["MIT"] | 1 | 2022-03-02T23:13:06.000Z | 2022-03-02T23:13:06.000Z | __init__.py | Aran-Fey/introspection | 0ce3a16688b51bdcb72c7b070d571a1004f5151b | ["MIT"] | null | null | null | __init__.py | Aran-Fey/introspection | 0ce3a16688b51bdcb72c7b070d571a1004f5151b | ["MIT"] | null | null | null |
# fake module that resides in my lib folder and
# imports the actual implementation
from pathlib import Path
here = Path(__file__).absolute().parent
name = here.stem
import sys
sys.path.insert(0, str(here))
del sys.modules[name]
module = __import__(name)
del sys.path[0]
del Path, here, name, sys
globals().update(module.__dict__)
| 17.736842 | 47 | 0.750742 |
| true | true |
f73dfb49ca273c2cd622cca28f41629a38850154 | 992 | py | Python | azure-mgmt-network/azure/mgmt/network/v2017_09_01/models/network_security_group_paged.py | JonathanGailliez/azure-sdk-for-python | f0f051bfd27f8ea512aea6fc0c3212ee9ee0029b | ["MIT"] | 4 | 2016-06-17T23:25:29.000Z | 2022-03-30T22:37:45.000Z | azure-mgmt-network/azure/mgmt/network/v2017_09_01/models/network_security_group_paged.py | JonathanGailliez/azure-sdk-for-python | f0f051bfd27f8ea512aea6fc0c3212ee9ee0029b | ["MIT"] | 54 | 2016-03-25T17:25:01.000Z | 2018-10-22T17:27:54.000Z | azure-mgmt-network/azure/mgmt/network/v2017_09_01/models/network_security_group_paged.py | JonathanGailliez/azure-sdk-for-python | f0f051bfd27f8ea512aea6fc0c3212ee9ee0029b | ["MIT"] | 3 | 2016-05-03T20:49:46.000Z | 2017-10-05T21:05:27.000Z |
# --------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for
# license information.
#
# Code generated by Microsoft (R) AutoRest Code Generator.
# Changes may cause incorrect behavior and will be lost if the code is
# regenerated.
# --------------------------------------------------------------------------
from msrest.paging import Paged
class NetworkSecurityGroupPaged(Paged):
"""
A paging container for iterating over a list of :class:`NetworkSecurityGroup <azure.mgmt.network.v2017_09_01.models.NetworkSecurityGroup>` object
"""
_attribute_map = {
'next_link': {'key': 'nextLink', 'type': 'str'},
'current_page': {'key': 'value', 'type': '[NetworkSecurityGroup]'}
}
def __init__(self, *args, **kwargs):
super(NetworkSecurityGroupPaged, self).__init__(*args, **kwargs)
| 35.428571 | 149 | 0.597782 |
| true | true |
f73dfb8958d4e3f9b93c88e039c211837ce20f88 | 2,772 | py | Python | example_mariokart.py | Cuyler36/NintendoClients | d38986674ecc4dec624694649361f1f334901020 | ["MIT"] | null | null | null | example_mariokart.py | Cuyler36/NintendoClients | d38986674ecc4dec624694649361f1f334901020 | ["MIT"] | null | null | null | example_mariokart.py | Cuyler36/NintendoClients | d38986674ecc4dec624694649361f1f334901020 | ["MIT"] | null | null | null |
from nintendo.nex import backend, authentication, ranking, datastore
from nintendo.games import MK8
from nintendo import account
import requests
import logging
logging.basicConfig(level=logging.INFO)
# Device id can be retrieved with a call to MCP_GetDeviceId on the Wii U
# Serial number can be found on the back of the Wii U
DEVICE_ID = 12345678
SERIAL_NUMBER = "..."
SYSTEM_VERSION = 0x220
REGION_ID = 4
COUNTRY_ID = 94
REGION_NAME = "EUR"
COUNTRY_NAME = "NL"
USERNAME = "..."  # Nintendo Network ID
PASSWORD = "..."  # Nintendo Network password
TRACK_ID = 27  # Mario Kart Stadium
api = account.AccountAPI()
api.set_device(DEVICE_ID, SERIAL_NUMBER, SYSTEM_VERSION, REGION_ID, COUNTRY_NAME)
api.set_title(MK8.TITLE_ID_EUR, MK8.LATEST_VERSION)
api.login(USERNAME, PASSWORD)
nex_token = api.get_nex_token(MK8.GAME_SERVER_ID)
backend = backend.BackEndClient(MK8.ACCESS_KEY, MK8.NEX_VERSION)
backend.connect(nex_token.host, nex_token.port)
backend.login(nex_token.username, nex_token.password)
ranking_client = ranking.RankingClient(backend.secure_client)
order_param = ranking.RankingOrderParam()
order_param.order_calc = ranking.RankingOrderCalc.ORDINAL
order_param.offset = 499  # Start at 500th place
order_param.count = 20  # Download 20 highscores
rankings = ranking_client.get_ranking(
ranking.RankingMode.GLOBAL, TRACK_ID,
order_param, 0, 0
)
stats = ranking_client.get_stats(
TRACK_ID, order_param, ranking.RankingStatFlags.ALL
).stats
def format_time(score):
millisec = score % 1000
seconds = score // 1000 % 60
minutes = score // 1000 // 60
return "%i:%02i.%03i" %(minutes, seconds, millisec)
names = api.get_nnids([data.pid for data in rankings.datas])
# Print some interesting stats
print("Total:", int(stats[0]))
print("Total time:", format_time(stats[1]))
print("Average time:", format_time(stats[2]))
print("Lowest time:", format_time(stats[3]))
print("Highest time:", format_time(stats[4]))
print("Rankings:")
for rankdata in rankings.datas:
time = format_time(rankdata.score)
print("\t%5i %20s %s" %(rankdata.rank, names[rankdata.pid], time))
# Let's download the replay file of whoever is in 500th place
store = datastore.DataStoreClient(backend.secure_client)
rankdata = rankings.datas[0]
get_param = datastore.DataStorePrepareGetParam()
get_param.persistence_target.owner_id = rankdata.pid
get_param.persistence_target.persistence_id = TRACK_ID - 16
get_param.extra_data = ["WUP", str(REGION_ID), REGION_NAME, str(COUNTRY_ID), COUNTRY_NAME, ""]
req_info = store.prepare_get_object(get_param)
headers = {header.key: header.value for header in req_info.headers}
replay_data = requests.get("http://" + req_info.url, headers=headers).content
with open("replay.bin", "wb") as f:
f.write(replay_data)
# Close connection
backend.close()
| 30.8 | 94 | 0.772367 |
| true | true |
f73dfbb3ad628d6d9456e84ab79764156db7292d | 7,508 | py | Python | mtp_noms_ops/apps/settings/views.py | uk-gov-mirror/ministryofjustice.money-to-prisoners-noms-ops | eb537fb8a8e3adc588d50af1b000402c957b32a7 | ["MIT"] | 3 | 2016-12-22T15:56:57.000Z | 2020-03-10T10:37:40.000Z | mtp_noms_ops/apps/settings/views.py | uk-gov-mirror/ministryofjustice.money-to-prisoners-noms-ops | eb537fb8a8e3adc588d50af1b000402c957b32a7 | ["MIT"] | 61 | 2016-06-10T08:37:23.000Z | 2022-01-28T12:41:29.000Z | mtp_noms_ops/apps/settings/views.py | uk-gov-mirror/ministryofjustice.money-to-prisoners-noms-ops | eb537fb8a8e3adc588d50af1b000402c957b32a7 | ["MIT"] | 1 | 2021-04-11T06:13:53.000Z | 2021-04-11T06:13:53.000Z |
from django.contrib.auth import REDIRECT_FIELD_NAME
from django.contrib.auth.views import SuccessURLAllowedHostsMixin
from django.shortcuts import redirect
from django.urls import reverse, reverse_lazy
from django.utils.http import is_safe_url
from django.utils.translation import gettext_lazy as _
from django.views.generic import FormView, TemplateView
from mtp_common.auth.api_client import get_api_session
from mtp_common.views import SettingsView
from security import confirmed_prisons_flag, provided_job_info_flag
from settings.forms import ConfirmPrisonForm, ChangePrisonForm, ALL_PRISONS_CODE, JobInformationForm
from security.models import EmailNotifications
from security.utils import save_user_flags, can_skip_confirming_prisons, has_provided_job_information
class NomsOpsSettingsView(SettingsView):
template_name = 'settings/settings.html'
def get_context_data(self, **kwargs):
context = super().get_context_data(**kwargs)
if self.request.can_access_security:
session = get_api_session(self.request)
email_preferences = session.get('/emailpreferences/').json()
context['email_notifications'] = email_preferences['frequency'] != EmailNotifications.never
return context
def post(self, *args, **kwargs):
if self.request.can_access_security and 'email_notifications' in self.request.POST:
session = get_api_session(self.request)
if self.request.POST['email_notifications'] == 'True':
session.post('/emailpreferences/', json={'frequency': EmailNotifications.daily})
else:
session.post('/emailpreferences/', json={'frequency': EmailNotifications.never})
return redirect(reverse_lazy('settings'))
class ConfirmPrisonsView(FormView):
title = _('Confirm your prisons')
template_name = 'settings/confirm-prisons.html'
form_class = ConfirmPrisonForm
success_url = reverse_lazy('confirm_prisons_confirmation')
def get_context_data(self, **kwargs):
context = super().get_context_data(**kwargs)
context['current_prisons'] = ','.join([
p['nomis_id'] for p in self.request.user.user_data['prisons']
] if self.request.user.user_data.get('prisons') else ['ALL'])
selected_prisons = self.request.GET.getlist('prisons')
if not selected_prisons:
selected_prisons = [
p['nomis_id'] for p in self.request.user.user_data['prisons']
]
if not selected_prisons:
selected_prisons = [ALL_PRISONS_CODE]
query_dict = self.request.GET.copy()
query_dict['prisons'] = selected_prisons
context['change_prison_query'] = urlencode(query_dict, doseq=True)
self.request.cannot_navigate_away = not can_skip_confirming_prisons(self.request.user)
return context
def get_form_kwargs(self):
kwargs = super().get_form_kwargs()
kwargs['request'] = self.request
return kwargs
def form_valid(self, form):
form.save()
save_user_flags(self.request, confirmed_prisons_flag)
return redirect(self.get_success_url())
def get_success_url(self):
if 'next' in self.request.GET:
return '{path}?{query}'.format(
path=self.success_url,
query=urlencode({'next': self.request.GET['next']})
)
return self.success_url
class ChangePrisonsView(SuccessURLAllowedHostsMixin, FormView):
title = _('Change prisons')
template_name = 'settings/confirm-prisons-change.html'
form_class = ChangePrisonForm
def get_success_url(self):
"""
Returns the REDIRECT_FIELD_NAME value in GET if it exists and it's valid
or the url to the settings page otherwise.
"""
if REDIRECT_FIELD_NAME in self.request.GET:
next_page = self.request.GET[REDIRECT_FIELD_NAME]
url_is_safe = is_safe_url(
url=next_page,
allowed_hosts=self.get_success_url_allowed_hosts(),
require_https=self.request.is_secure(),
)
if url_is_safe:
return next_page
return reverse('settings')
def get_context_data(self, **kwargs):
context = super().get_context_data(**kwargs)
context['data_attrs'] = {
'data-autocomplete-error-empty': _('Type a prison name'),
'data-autocomplete-error-summary': _('There was a problem'),
'data-event-category': 'PrisonConfirmation',
}
context['current_prisons'] = ','.join([
p['nomis_id'] for p in self.request.user.user_data['prisons']
] if self.request.user.user_data.get('prisons') else ['ALL'])
return context
def get_form_kwargs(self):
kwargs = super().get_form_kwargs()
kwargs['request'] = self.request
return kwargs
def form_valid(self, form):
form.save()
save_user_flags(self.request, confirmed_prisons_flag)
return redirect(self.get_success_url())
class AddOrRemovePrisonsView(ChangePrisonsView):
title = _('Add or remove prisons')
template_name = 'settings/confirm-prisons-change.html'
form_class = ChangePrisonForm
success_url = reverse_lazy('confirm_prisons')
def get_context_data(self, **kwargs):
context = super().get_context_data(**kwargs)
self.request.cannot_navigate_away = not can_skip_confirming_prisons(self.request.user)
return context
def form_valid(self, form):
return redirect('{path}?{query}'.format(
path=self.get_success_url(),
query=form.get_confirmation_query_string()
))
class ConfirmPrisonsConfirmationView(TemplateView):
title = _('Your prisons have been saved')
template_name = 'settings/confirm-prisons-confirmation.html'
def get_context_data(self, **kwargs):
context = super().get_context_data(**kwargs)
context['prisons'] = self.request.user_prisons
return context
class JobInformationView(SuccessURLAllowedHostsMixin, FormView):
title = _('Help us improve this service')
template_name = 'settings/job-information.html'
form_class = JobInformationForm
def dispatch(self, request, *args, **kwargs):
request.cannot_navigate_away = True
return super().dispatch(request, *args, **kwargs)
def get_success_url(self):
if REDIRECT_FIELD_NAME in self.request.GET:
next_page = self.request.GET[REDIRECT_FIELD_NAME]
url_is_safe = is_safe_url(
url=next_page,
allowed_hosts=self.get_success_url_allowed_hosts(),
require_https=self.request.is_secure(),
)
if url_is_safe:
return next_page
return reverse('security:dashboard')
def form_valid(self, form):
if has_provided_job_information(self.request.user):
return redirect(self.get_success_url())
session = get_api_session(self.request)
        session.post('/job-information/', json={
            'title': form.cleaned_data['job_title_or_other'],
            'prison_estate': form.cleaned_data['prison_estate'],
            'tasks': form.cleaned_data['tasks'],
        })
save_user_flags(self.request, provided_job_info_flag)
return super().form_valid(form)
| 39.515789 | 103 | 0.668221 | from urllib.parse import urlencode
from django.contrib.auth import REDIRECT_FIELD_NAME
from django.contrib.auth.views import SuccessURLAllowedHostsMixin
from django.shortcuts import redirect
from django.urls import reverse, reverse_lazy
from django.utils.http import is_safe_url
from django.utils.translation import gettext_lazy as _
from django.views.generic import FormView, TemplateView
from mtp_common.auth.api_client import get_api_session
from mtp_common.views import SettingsView
from security import confirmed_prisons_flag, provided_job_info_flag
from settings.forms import ConfirmPrisonForm, ChangePrisonForm, ALL_PRISONS_CODE, JobInformationForm
from security.models import EmailNotifications
from security.utils import save_user_flags, can_skip_confirming_prisons, has_provided_job_information
class NomsOpsSettingsView(SettingsView):
template_name = 'settings/settings.html'
def get_context_data(self, **kwargs):
context = super().get_context_data(**kwargs)
if self.request.can_access_security:
session = get_api_session(self.request)
email_preferences = session.get('/emailpreferences/').json()
context['email_notifications'] = email_preferences['frequency'] != EmailNotifications.never
return context
def post(self, *args, **kwargs):
if self.request.can_access_security and 'email_notifications' in self.request.POST:
session = get_api_session(self.request)
if self.request.POST['email_notifications'] == 'True':
session.post('/emailpreferences/', json={'frequency': EmailNotifications.daily})
else:
session.post('/emailpreferences/', json={'frequency': EmailNotifications.never})
return redirect(reverse_lazy('settings'))
class ConfirmPrisonsView(FormView):
title = _('Confirm your prisons')
template_name = 'settings/confirm-prisons.html'
form_class = ConfirmPrisonForm
success_url = reverse_lazy('confirm_prisons_confirmation')
def get_context_data(self, **kwargs):
context = super().get_context_data(**kwargs)
context['current_prisons'] = ','.join([
p['nomis_id'] for p in self.request.user.user_data['prisons']
] if self.request.user.user_data.get('prisons') else ['ALL'])
selected_prisons = self.request.GET.getlist('prisons')
if not selected_prisons:
selected_prisons = [
p['nomis_id'] for p in self.request.user.user_data['prisons']
]
if not selected_prisons:
selected_prisons = [ALL_PRISONS_CODE]
query_dict = self.request.GET.copy()
query_dict['prisons'] = selected_prisons
context['change_prison_query'] = urlencode(query_dict, doseq=True)
self.request.cannot_navigate_away = not can_skip_confirming_prisons(self.request.user)
return context
def get_form_kwargs(self):
kwargs = super().get_form_kwargs()
kwargs['request'] = self.request
return kwargs
def form_valid(self, form):
form.save()
save_user_flags(self.request, confirmed_prisons_flag)
return redirect(self.get_success_url())
def get_success_url(self):
if 'next' in self.request.GET:
return '{path}?{query}'.format(
path=self.success_url,
query=urlencode({'next': self.request.GET['next']})
)
return self.success_url
class ChangePrisonsView(SuccessURLAllowedHostsMixin, FormView):
title = _('Change prisons')
template_name = 'settings/confirm-prisons-change.html'
form_class = ChangePrisonForm
def get_success_url(self):
if REDIRECT_FIELD_NAME in self.request.GET:
next_page = self.request.GET[REDIRECT_FIELD_NAME]
url_is_safe = is_safe_url(
url=next_page,
allowed_hosts=self.get_success_url_allowed_hosts(),
require_https=self.request.is_secure(),
)
if url_is_safe:
return next_page
return reverse('settings')
def get_context_data(self, **kwargs):
context = super().get_context_data(**kwargs)
context['data_attrs'] = {
'data-autocomplete-error-empty': _('Type a prison name'),
'data-autocomplete-error-summary': _('There was a problem'),
'data-event-category': 'PrisonConfirmation',
}
context['current_prisons'] = ','.join([
p['nomis_id'] for p in self.request.user.user_data['prisons']
] if self.request.user.user_data.get('prisons') else ['ALL'])
return context
def get_form_kwargs(self):
kwargs = super().get_form_kwargs()
kwargs['request'] = self.request
return kwargs
def form_valid(self, form):
form.save()
save_user_flags(self.request, confirmed_prisons_flag)
return redirect(self.get_success_url())
class AddOrRemovePrisonsView(ChangePrisonsView):
title = _('Add or remove prisons')
template_name = 'settings/confirm-prisons-change.html'
form_class = ChangePrisonForm
success_url = reverse_lazy('confirm_prisons')
def get_context_data(self, **kwargs):
context = super().get_context_data(**kwargs)
self.request.cannot_navigate_away = not can_skip_confirming_prisons(self.request.user)
return context
def form_valid(self, form):
return redirect('{path}?{query}'.format(
path=self.get_success_url(),
query=form.get_confirmation_query_string()
))
class ConfirmPrisonsConfirmationView(TemplateView):
title = _('Your prisons have been saved')
template_name = 'settings/confirm-prisons-confirmation.html'
def get_context_data(self, **kwargs):
context = super().get_context_data(**kwargs)
context['prisons'] = self.request.user_prisons
return context
class JobInformationView(SuccessURLAllowedHostsMixin, FormView):
title = _('Help us improve this service')
template_name = 'settings/job-information.html'
form_class = JobInformationForm
def dispatch(self, request, *args, **kwargs):
request.cannot_navigate_away = True
return super().dispatch(request, *args, **kwargs)
def get_success_url(self):
if REDIRECT_FIELD_NAME in self.request.GET:
next_page = self.request.GET[REDIRECT_FIELD_NAME]
url_is_safe = is_safe_url(
url=next_page,
allowed_hosts=self.get_success_url_allowed_hosts(),
require_https=self.request.is_secure(),
)
if url_is_safe:
return next_page
return reverse('security:dashboard')
def form_valid(self, form):
if has_provided_job_information(self.request.user):
return redirect(self.get_success_url())
session = get_api_session(self.request)
session.post('/job-information/', json={'title': form.cleaned_data['job_title_or_other'],
'prison_estate': form.cleaned_data['prison_estate'],
'tasks': form.cleaned_data['tasks']})
save_user_flags(self.request, provided_job_info_flag)
return super().form_valid(form)
f73dfc126fee45a0819a344377cce6d1299075ba | 623 | py | Python | sidserver/common/sql/migrate_repo/__init__.py | UTSA-ICS/sid-server | 21a09204975dc54d6cba843233708e43bf9830d0 | ["Apache-2.0"] | null | null | null | sidserver/common/sql/migrate_repo/__init__.py | UTSA-ICS/sid-server | 21a09204975dc54d6cba843233708e43bf9830d0 | ["Apache-2.0"] | null | null | null | sidserver/common/sql/migrate_repo/__init__.py | UTSA-ICS/sid-server | 21a09204975dc54d6cba843233708e43bf9830d0 | ["Apache-2.0"] | 1 | 2020-07-02T09:12:28.000Z | 2020-07-02T09:12:28.000Z |
# Copyright 2014 Mirantis.inc
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
DB_INIT_VERSION = 43
f73dfcc5c30ddd6aad32958b4c3f0e278dd99d4a | 852 | py | Python | classification-multi/a/a.py | fishduke/vision | 0f7914d09a293d14f5ed91fb75068d5dc521b9c9 | ["MIT"] | 2 | 2020-10-05T05:37:55.000Z | 2020-10-07T04:30:04.000Z | classification-multi/a/a.py | fishduke/vision | 0f7914d09a293d14f5ed91fb75068d5dc521b9c9 | ["MIT"] | null | null | null | classification-multi/a/a.py | fishduke/vision | 0f7914d09a293d14f5ed91fb75068d5dc521b9c9 | ["MIT"] | null | null | null |
from bs4 import BeautifulSoup
from datetime import datetime
import requests
import time
def get_code(company_code):
    # Fetch and parse the Naver Finance page for the given stock code
    url = "https://finance.naver.com/item/main.nhn?code=" + company_code
    result = requests.get(url)
    bs_obj = BeautifulSoup(result.content, "html.parser")
    return bs_obj
def get_price(company_code):
    # The current price is the "blind" span inside the "no_today" block
    bs_obj = get_code(company_code)
    no_today = bs_obj.find("p", {"class": "no_today"})
    blind = no_today.find("span", {"class": "blind"})
    now_price = blind.text
    return now_price
company_codes = ["175250", "153490"]
prices = []
while True:
    now = datetime.now()
    print(now)
    for item in company_codes:
        now_price = get_price(item)
        # print(now_price, company_codes)
        prices.append(now_price)
        # print("------------------------")
    print(prices)
    prices = []
    time.sleep(60)
f73dfd57a69880f0c4bb2960b392d495084eec6e | 221 | py | Python | DesignPatterns/01_Facade/2_facade/__main__.py | eduardormonteiro/PythonPersonalLibrary | 561733bb8305c4e25a08f99c28b60ec77251ad67 | ["MIT"] | null | null | null | DesignPatterns/01_Facade/2_facade/__main__.py | eduardormonteiro/PythonPersonalLibrary | 561733bb8305c4e25a08f99c28b60ec77251ad67 | ["MIT"] | null | null | null | DesignPatterns/01_Facade/2_facade/__main__.py | eduardormonteiro/PythonPersonalLibrary | 561733bb8305c4e25a08f99c28b60ec77251ad67 | ["MIT"] | null | null | null |
from get_employees import PROVIDER
from get_employees.facade_factory import FacadeFactory
def main():
facade = FacadeFactory.create_facade(PROVIDER)
facade.get_employees()
if __name__ == '__main__':
main()
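The `get_employees` package used by this `__main__.py` is not included in the dump. A self-contained sketch of the same factory-plus-facade idea (all names here — `HRFacade`, the `"memory"` provider — are illustrative, not from the original package):

```python
# Minimal facade-pattern sketch: the factory hides which concrete facade is
# built, and the facade hides the subsystem calls behind one method.
class HRFacade:
    def __init__(self, provider):
        self.provider = provider

    def get_employees(self):
        # A real facade would coordinate several subsystems here
        return ["alice", "bob"] if self.provider == "memory" else []

class FacadeFactory:
    @staticmethod
    def create_facade(provider):
        return HRFacade(provider)

facade = FacadeFactory.create_facade("memory")
print(facade.get_employees())  # ['alice', 'bob']
```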
f73e00081d066126ef18de306ea91dd146dec201 | 591 | py | Python | hazelcast/protocol/codec/multi_map_message_type.py | buraksezer/hazelcast-python-client | 4cc593ef7de994bd84fdac8331b81b309cce30a0 | ["Apache-2.0"] | 3 | 2020-05-01T15:01:54.000Z | 2021-01-27T14:51:45.000Z | hazelcast/protocol/codec/multi_map_message_type.py | buraksezer/hazelcast-python-client | 4cc593ef7de994bd84fdac8331b81b309cce30a0 | ["Apache-2.0"] | null | null | null | hazelcast/protocol/codec/multi_map_message_type.py | buraksezer/hazelcast-python-client | 4cc593ef7de994bd84fdac8331b81b309cce30a0 | ["Apache-2.0"] | 1 | 2020-12-01T20:00:35.000Z | 2020-12-01T20:00:35.000Z |
MULTIMAP_PUT = 0x0201
MULTIMAP_GET = 0x0202
MULTIMAP_REMOVE = 0x0203
MULTIMAP_KEYSET = 0x0204
MULTIMAP_VALUES = 0x0205
MULTIMAP_ENTRYSET = 0x0206
MULTIMAP_CONTAINSKEY = 0x0207
MULTIMAP_CONTAINSVALUE = 0x0208
MULTIMAP_CONTAINSENTRY = 0x0209
MULTIMAP_SIZE = 0x020a
MULTIMAP_CLEAR = 0x020b
MULTIMAP_VALUECOUNT = 0x020c
MULTIMAP_ADDENTRYLISTENERTOKEY = 0x020d
MULTIMAP_ADDENTRYLISTENER = 0x020e
MULTIMAP_REMOVEENTRYLISTENER = 0x020f
MULTIMAP_LOCK = 0x0210
MULTIMAP_TRYLOCK = 0x0211
MULTIMAP_ISLOCKED = 0x0212
MULTIMAP_UNLOCK = 0x0213
MULTIMAP_FORCEUNLOCK = 0x0214
MULTIMAP_REMOVEENTRY = 0x0215
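The constants above enumerate the MultiMap operation codes of the client protocol. A small helper (not part of the Hazelcast client; the subset of codes below is copied for illustration) shows how such a module can be inverted into a code-to-name lookup for logging:

```python
# Subset of the operation codes from the module above, for illustration only
MULTIMAP_CODES = {
    "MULTIMAP_PUT": 0x0201,
    "MULTIMAP_GET": 0x0202,
    "MULTIMAP_REMOVE": 0x0203,
}

# Reverse lookup: message-type code -> constant name (handy when tracing frames)
CODE_TO_NAME = {code: name for name, code in MULTIMAP_CODES.items()}

def message_type_name(code):
    return CODE_TO_NAME.get(code, "UNKNOWN(0x%04x)" % code)

print(message_type_name(0x0202))  # MULTIMAP_GET
print(message_type_name(0x9999))  # UNKNOWN(0x9999)
```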
f73e0292de7fccecb6091cd3c047353f94542b75 | 4,360 | py | Python | gfootball/examples/run_multiagent_rllib.py | Ruboninov/football | 9bdb4c2ec12b4b99f9132b578f839f05e2f950a6 | ["Apache-2.0"] | 2 | 2021-10-31T01:06:15.000Z | 2021-11-08T09:43:23.000Z | gfootball/examples/run_multiagent_rllib.py | Ruboninov/football | 9bdb4c2ec12b4b99f9132b578f839f05e2f950a6 | ["Apache-2.0"] | 20 | 2021-04-14T15:48:28.000Z | 2021-04-28T14:13:57.000Z | gfootball/examples/run_multiagent_rllib.py | Ruboninov/football | 9bdb4c2ec12b4b99f9132b578f839f05e2f950a6 | ["Apache-2.0"] | 2 | 2020-10-27T05:06:05.000Z | 2020-12-11T20:57:48.000Z |
# coding=utf-8
# Copyright 2019 Google LLC
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""A simple example of setting up a multi-agent version of GFootball with rllib.
"""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import argparse
import gfootball.env as football_env
import gym
import ray
from ray import tune
from ray.rllib.env.multi_agent_env import MultiAgentEnv
from ray.tune.registry import register_env
parser = argparse.ArgumentParser()
parser.add_argument('--num-agents', type=int, default=3)
parser.add_argument('--num-policies', type=int, default=3)
parser.add_argument('--num-iters', type=int, default=100000)
parser.add_argument('--simple', action='store_true')
class RllibGFootball(MultiAgentEnv):
"""An example of a wrapper for GFootball to make it compatible with rllib."""
def __init__(self, num_agents):
self.env = football_env.create_environment(
env_name='test_example_multiagent', stacked=False,
logdir='/tmp/rllib_test',
write_goal_dumps=False, write_full_episode_dumps=False, render=True,
dump_frequency=0,
number_of_left_players_agent_controls=num_agents,
channel_dimensions=(42, 42))
self.action_space = gym.spaces.Discrete(self.env.action_space.nvec[1])
self.observation_space = gym.spaces.Box(
low=self.env.observation_space.low[0],
high=self.env.observation_space.high[0],
dtype=self.env.observation_space.dtype)
self.num_agents = num_agents
def reset(self):
original_obs = self.env.reset()
obs = {}
for x in range(self.num_agents):
if self.num_agents > 1:
obs['agent_%d' % x] = original_obs[x]
else:
obs['agent_%d' % x] = original_obs
return obs
def step(self, action_dict):
actions = []
for key, value in sorted(action_dict.items()):
actions.append(value)
o, r, d, i = self.env.step(actions)
rewards = {}
obs = {}
infos = {}
for pos, key in enumerate(sorted(action_dict.keys())):
infos[key] = i
if self.num_agents > 1:
rewards[key] = r[pos]
obs[key] = o[pos]
else:
rewards[key] = r
obs[key] = o
dones = {'__all__': d}
return obs, rewards, dones, infos
if __name__ == '__main__':
args = parser.parse_args()
ray.init(num_gpus=1)
# Simple environment with `num_agents` independent players
register_env('gfootball', lambda _: RllibGFootball(args.num_agents))
single_env = RllibGFootball(args.num_agents)
obs_space = single_env.observation_space
act_space = single_env.action_space
def gen_policy(_):
return (None, obs_space, act_space, {})
# Setup PPO with an ensemble of `num_policies` different policies
policies = {
'policy_{}'.format(i): gen_policy(i) for i in range(args.num_policies)
}
policy_ids = list(policies.keys())
tune.run(
'PPO',
stop={'training_iteration': args.num_iters},
checkpoint_freq=50,
config={
'env': 'gfootball',
'lambda': 0.95,
'kl_coeff': 0.2,
'clip_rewards': False,
'vf_clip_param': 10.0,
'entropy_coeff': 0.01,
'train_batch_size': 2000,
'sample_batch_size': 100,
'sgd_minibatch_size': 500,
'num_sgd_iter': 10,
'num_workers': 10,
'num_envs_per_worker': 1,
'batch_mode': 'truncate_episodes',
'observation_filter': 'NoFilter',
'vf_share_layers': 'true',
'num_gpus': 1,
'lr': 2.5e-4,
'log_level': 'DEBUG',
'simple_optimizer': args.simple,
'multiagent': {
'policies': policies,
'policy_mapping_fn': tune.function(
lambda agent_id: policy_ids[int(agent_id[6:])]),
},
},
)
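The `policy_mapping_fn` passed in the `multiagent` config above routes each agent to a policy by its integer suffix. A minimal illustration of that slicing, outside of RLlib:

```python
# Agent ids look like "agent_0", "agent_1", ...; agent_id[6:] strips the
# "agent_" prefix (6 characters) and indexes into the policy-id list.
policy_ids = ["policy_0", "policy_1", "policy_2"]

def policy_mapping_fn(agent_id):
    return policy_ids[int(agent_id[6:])]

print(policy_mapping_fn("agent_2"))  # policy_2
```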
f73e032135d1ebb0780d1d23e6f8c1fe756a3508 | 10,999 | py | Python | netket/driver/abstract_variational_driver.py | rbktech/netket | 847e120cad48f9c92d394e2078370e452f268a3d | ["Apache-2.0"] | null | null | null | netket/driver/abstract_variational_driver.py | rbktech/netket | 847e120cad48f9c92d394e2078370e452f268a3d | ["Apache-2.0"] | 8 | 2022-01-17T17:24:53.000Z | 2022-03-28T17:31:04.000Z | netket/driver/abstract_variational_driver.py | rbktech/netket | 847e120cad48f9c92d394e2078370e452f268a3d | ["Apache-2.0"] | null | null | null |
# Copyright 2021 The NetKet Authors - All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import abc
import numbers
from functools import partial
import numpy as np
from tqdm import tqdm
import warnings
import jax
from jax.tree_util import tree_map
from netket.logging import JsonLog
from netket.utils import node_number, n_nodes
def _to_iterable(maybe_iterable):
"""
_to_iterable(maybe_iterable)
Ensure the result is iterable. If the input is not iterable, it is wrapped into a tuple.
"""
if hasattr(maybe_iterable, "__iter__"):
surely_iterable = maybe_iterable
else:
surely_iterable = (maybe_iterable,)
return surely_iterable
# Note: to implement a new Driver (see also _vmc.py for an example)
# If you want to inherit the nice interface of AbstractMCDriver, you should
# subclass it, defining the following methods:
# - Either _forward_and_backward or individually _forward, _backward, that should
# compute the loss function and the gradient. If the driver is minimizing or
# maximising some loss function, this quantity should be assigned to self._stats
# in order to monitor it.
# - _estimate_stats should return the MC estimate of a single operator
# - reset should reset the driver (usually the sampler).
# - info should return a string with an overview of the driver.
# - The __init__ method should be called with the machine and the optimizer. If this
#   driver is minimising a loss function and you want its name to show up automatically
#   in the progress bar/output files you should pass the optional keyword argument
# minimized_quantity_name.
class AbstractVariationalDriver(abc.ABC):
"""Abstract base class for NetKet Variational Monte Carlo drivers"""
def __init__(self, variational_state, optimizer, minimized_quantity_name=""):
self._mynode = node_number
self._mpi_nodes = n_nodes
self._loss_stats = None
self._loss_name = minimized_quantity_name
self._step_count = 0
self._variational_state = variational_state
self.optimizer = optimizer
def _forward_and_backward(self):
"""
Performs the forward and backward pass at the same time.
Concrete drivers should either override this method, or override individually
_forward and _backward.
Returns:
the update for the weights.
"""
self._forward()
dp = self._backward()
return dp
def _forward(self):
"""
Performs the forward pass, computing the loss function.
Concrete should either implement _forward and _backward or the joint method
_forward_and_backward.
"""
raise NotImplementedError()
def _backward(self):
"""
Performs the backward pass, computing the update for the parameters.
Concrete should either implement _forward and _backward or the joint method
_forward_and_backward.
"""
raise NotImplementedError()
def _estimate_stats(self, observable):
"""
Returns the MCMC statistics for the expectation value of an observable.
Must be implemented by super-classes of AbstractVMC.
:param observable: A quantum operator (netket observable)
:return:
"""
return self.state.expect(observable)
def reset(self):
"""
Resets the driver.
Concrete drivers should also call super().reset() to ensure that the step
count is set to 0.
"""
self.state.reset()
self.step_count = 0
pass
@abc.abstractmethod
def info(self, depth=0):
"""
Returns an info string used to print information to screen about this driver.
"""
pass
@property
def state(self):
"""
Returns the machine that is optimized by this driver.
"""
return self._variational_state
@property
def optimizer(self):
return self._optimizer
@optimizer.setter
def optimizer(self, optimizer):
self._optimizer = optimizer
self._optimizer_state = optimizer.init(self.state.parameters)
@property
def step_count(self):
"""
Returns a monotonic integer labelling all the steps performed by this driver.
This can be used, for example, to identify the line in a log file.
"""
return self._step_count
def iter(self, n_steps: int, step: int = 1):
"""
        Returns a generator which advances the VMC optimization, yielding
        after every `step` steps.
        Args:
            n_steps: The total number of steps to perform.
            step: The number of internal steps the simulation
                is advanced every turn.
Yields:
int: The current step.
"""
for _ in range(0, n_steps, step):
for i in range(0, step):
dp = self._forward_and_backward()
if i == 0:
yield self.step_count
self._step_count += 1
self.update_parameters(dp)
def advance(self, steps: int = 1):
"""
Performs `steps` optimization steps.
steps: (Default=1) number of steps
"""
for _ in self.iter(steps):
pass
def run(
self,
n_iter,
out=None,
obs=None,
show_progress=True,
save_params_every=50, # for default logger
write_every=50, # for default logger
step_size=1, # for default logger
callback=lambda *x: True,
):
"""
Executes the Monte Carlo Variational optimization, updating the weights of the network
stored in this driver for `n_iter` steps and dumping values of the observables `obs`
in the output `logger`. If no logger is specified, creates a json file at `out`,
overwriting files with the same prefix.
By default uses :ref:`netket.logging.JsonLogger`. To know about the output format
        check its documentation. The logger object is also returned at the end of this function
so that you can inspect the results without reading the json output.
Args:
n_iter: the total number of iterations
out: A logger object, or an iterable of loggers, to be used to store simulation log and data.
If this argument is a string, it will be used as output prefix for the standard JSON logger.
obs: An iterable containing all observables that should be computed
save_params_every: Every how many steps the parameters of the network should be
serialized to disk (ignored if logger is provided)
write_every: Every how many steps the json data should be flushed to disk (ignored if
logger is provided)
step_size: Every how many steps should observables be logged to disk (default=1)
show_progress: If true displays a progress bar (default=True)
callback: Callable or list of callable callback functions to stop training given a condition
"""
if not isinstance(n_iter, numbers.Number):
raise ValueError(
"n_iter, the first positional argument to `run`, must be a number!"
)
if obs is None:
obs = {}
if out is None:
out = tuple()
print(
"No output specified (out=[apath|nk.logging.JsonLogger(...)])."
"Running the optimization but not saving the output."
)
        # Attach loggers only on the root node (rank 0)
if self._mynode == 0:
# if out is a path, create an overwriting Json Log for output
if isinstance(out, str):
loggers = (JsonLog(out, "w", save_params_every, write_every),)
else:
loggers = _to_iterable(out)
else:
loggers = tuple()
show_progress = False
callbacks = _to_iterable(callback)
callback_stop = False
with tqdm(
self.iter(n_iter, step_size), total=n_iter, disable=not show_progress
) as itr:
first_step = True
for step in itr:
log_data = self.estimate(obs)
# if the cost-function is defined then report it in the progress bar
if self._loss_stats is not None:
itr.set_postfix_str(self._loss_name + "=" + str(self._loss_stats))
log_data[self._loss_name] = self._loss_stats
for callback in callbacks:
if not callback(step, log_data, self):
callback_stop = True
for logger in loggers:
logger(self.step_count, log_data, self.state)
if callback_stop:
break
# Reset the timing of tqdm after the first step, to ignore compilation time
if first_step:
first_step = False
itr.unpause()
# flush at the end of the evolution so that final values are saved to
# file
for logger in loggers:
logger.flush(self.state)
return loggers
def estimate(self, observables):
"""
Return MCMC statistics for the expectation value of observables in the
current state of the driver.
Args:
observables: A pytree of operators for which statistics should be computed.
Returns:
A pytree of the same structure as the input, containing MCMC statistics
for the corresponding operators as leaves.
"""
return tree_map(self._estimate_stats, observables)
def update_parameters(self, dp):
"""
Updates the parameters of the machine using the optimizer in this driver
Args:
dp: the pytree containing the updates to the parameters
"""
self._optimizer_state, self.state.parameters = apply_gradient(
self._optimizer.update, self._optimizer_state, dp, self.state.parameters
)
@partial(jax.jit, static_argnums=0)
def apply_gradient(optimizer_fun, optimizer_state, dp, params):
import optax
updates, new_optimizer_state = optimizer_fun(dp, optimizer_state, params)
new_params = optax.apply_updates(params, updates)
return new_optimizer_state, new_params
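The comment block near the top of this file spells out the subclassing contract: `_forward_and_backward` computes the loss and returns the parameter update, and the loss is tracked for monitoring. A toy, NetKet-free sketch of that contract, with a quadratic loss and a plain gradient step standing in for the optax optimizer (all names here are illustrative, not NetKet API):

```python
# Toy driver following the documented contract: _forward_and_backward computes
# the loss and returns the update direction; advance() applies it in a loop.
class ToyDriver:
    def __init__(self, x0, lr=0.1):
        self.x = x0
        self.lr = lr
        self._loss_stats = None

    def _forward_and_backward(self):
        self._loss_stats = self.x ** 2   # loss being minimised
        return 2.0 * self.x              # its gradient

    def advance(self, steps=1):
        for _ in range(steps):
            dp = self._forward_and_backward()
            self.x -= self.lr * dp       # gradient step in place of optax

driver = ToyDriver(1.0)
driver.advance(100)
print(abs(driver.x) < 1e-6)  # True: x shrinks by a factor 0.8 per step
```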
f73e035d9e970958ec45eb9000148a6761b22927 | 5,776 | py | Python | airflow/contrib/auth/backends/google_auth.py | diggzhang/airflow-dingit | 41482b83130d5815b772840681fb36eb9bfa69b9 | ["Apache-2.0", "BSD-2-Clause", "MIT", "ECL-2.0", "BSD-3-Clause"] | 1 | 2017-12-10T03:23:05.000Z | 2017-12-10T03:23:05.000Z | airflow/contrib/auth/backends/google_auth.py | diggzhang/airflow-dingit | 41482b83130d5815b772840681fb36eb9bfa69b9 | ["Apache-2.0", "BSD-2-Clause", "MIT", "ECL-2.0", "BSD-3-Clause"] | null | null | null | airflow/contrib/auth/backends/google_auth.py | diggzhang/airflow-dingit | 41482b83130d5815b772840681fb36eb9bfa69b9 | ["Apache-2.0", "BSD-2-Clause", "MIT", "ECL-2.0", "BSD-3-Clause"] | 3 | 2017-09-05T23:23:19.000Z | 2018-02-07T23:08:03.000Z |
# Copyright 2016 Ananya Mishra (am747@cornell.edu)
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import flask_login
# Need to expose these downstream
# pylint: disable=unused-import
from flask_login import (current_user,
logout_user,
login_required,
login_user)
# pylint: enable=unused-import
from flask import url_for, redirect, request
from flask_oauthlib.client import OAuth
from airflow import models, configuration, settings
from airflow.utils.db import provide_session
from airflow.utils.log.logging_mixin import LoggingMixin
log = LoggingMixin().log
def get_config_param(param):
return str(configuration.get('google', param))
class GoogleUser(models.User):
def __init__(self, user):
self.user = user
def is_active(self):
'''Required by flask_login'''
return True
def is_authenticated(self):
'''Required by flask_login'''
return True
def is_anonymous(self):
'''Required by flask_login'''
return False
def get_id(self):
'''Returns the current user id as required by flask_login'''
return self.user.get_id()
def data_profiling(self):
'''Provides access to data profiling tools'''
return True
def is_superuser(self):
'''Access all the things'''
return True
class AuthenticationError(Exception):
pass
class GoogleAuthBackend(object):
def __init__(self):
# self.google_host = get_config_param('host')
self.login_manager = flask_login.LoginManager()
self.login_manager.login_view = 'airflow.login'
self.flask_app = None
self.google_oauth = None
self.api_rev = None
def init_app(self, flask_app):
self.flask_app = flask_app
self.login_manager.init_app(self.flask_app)
self.google_oauth = OAuth(self.flask_app).remote_app(
'google',
consumer_key=get_config_param('client_id'),
consumer_secret=get_config_param('client_secret'),
request_token_params={'scope': [
'https://www.googleapis.com/auth/userinfo.profile',
'https://www.googleapis.com/auth/userinfo.email']},
base_url='https://www.google.com/accounts/',
request_token_url=None,
access_token_method='POST',
access_token_url='https://accounts.google.com/o/oauth2/token',
authorize_url='https://accounts.google.com/o/oauth2/auth')
self.login_manager.user_loader(self.load_user)
self.flask_app.add_url_rule(get_config_param('oauth_callback_route'),
'google_oauth_callback',
self.oauth_callback)
def login(self, request):
log.debug('Redirecting user to Google login')
return self.google_oauth.authorize(callback=url_for(
'google_oauth_callback',
_external=True,
next=request.args.get('next') or request.referrer or None))
def get_google_user_profile_info(self, google_token):
resp = self.google_oauth.get('https://www.googleapis.com/oauth2/v1/userinfo',
token=(google_token, ''))
if not resp or resp.status != 200:
raise AuthenticationError(
'Failed to fetch user profile, status ({0})'.format(
resp.status if resp else 'None'))
return resp.data['name'], resp.data['email']
def domain_check(self, email):
domain = email.split('@')[1]
domains = get_config_param('domain').split(',')
if domain in domains:
return True
return False
@provide_session
def load_user(self, userid, session=None):
if not userid or userid == 'None':
return None
user = session.query(models.User).filter(
models.User.id == int(userid)).first()
return GoogleUser(user)
@provide_session
def oauth_callback(self, session=None):
log.debug('Google OAuth callback called')
next_url = request.args.get('next') or url_for('admin.index')
resp = self.google_oauth.authorized_response()
try:
if resp is None:
raise AuthenticationError(
'Null response from Google, denying access.'
)
google_token = resp['access_token']
username, email = self.get_google_user_profile_info(google_token)
if not self.domain_check(email):
return redirect(url_for('airflow.noaccess'))
except AuthenticationError:
return redirect(url_for('airflow.noaccess'))
user = session.query(models.User).filter(
models.User.username == username).first()
if not user:
user = models.User(
username=username,
email=email,
is_superuser=False)
session.merge(user)
session.commit()
login_user(GoogleUser(user))
session.commit()
return redirect(next_url)
login_manager = GoogleAuthBackend()
def login(self, request):
return login_manager.login(request)
| 30.887701 | 85 | 0.629501 |
import flask_login
from flask_login import (current_user,
logout_user,
login_required,
login_user)
from flask import url_for, redirect, request
from flask_oauthlib.client import OAuth
from airflow import models, configuration, settings
from airflow.utils.db import provide_session
from airflow.utils.log.logging_mixin import LoggingMixin
log = LoggingMixin().log
def get_config_param(param):
return str(configuration.get('google', param))
class GoogleUser(models.User):
def __init__(self, user):
self.user = user
def is_active(self):
return True
def is_authenticated(self):
return True
def is_anonymous(self):
return False
def get_id(self):
return self.user.get_id()
def data_profiling(self):
return True
def is_superuser(self):
return True
class AuthenticationError(Exception):
pass
class GoogleAuthBackend(object):
def __init__(self):
self.login_manager = flask_login.LoginManager()
self.login_manager.login_view = 'airflow.login'
self.flask_app = None
self.google_oauth = None
self.api_rev = None
def init_app(self, flask_app):
self.flask_app = flask_app
self.login_manager.init_app(self.flask_app)
self.google_oauth = OAuth(self.flask_app).remote_app(
'google',
consumer_key=get_config_param('client_id'),
consumer_secret=get_config_param('client_secret'),
request_token_params={'scope': [
'https://www.googleapis.com/auth/userinfo.profile',
'https://www.googleapis.com/auth/userinfo.email']},
base_url='https://www.google.com/accounts/',
request_token_url=None,
access_token_method='POST',
access_token_url='https://accounts.google.com/o/oauth2/token',
authorize_url='https://accounts.google.com/o/oauth2/auth')
self.login_manager.user_loader(self.load_user)
self.flask_app.add_url_rule(get_config_param('oauth_callback_route'),
'google_oauth_callback',
self.oauth_callback)
def login(self, request):
log.debug('Redirecting user to Google login')
return self.google_oauth.authorize(callback=url_for(
'google_oauth_callback',
_external=True,
next=request.args.get('next') or request.referrer or None))
def get_google_user_profile_info(self, google_token):
resp = self.google_oauth.get('https://www.googleapis.com/oauth2/v1/userinfo',
token=(google_token, ''))
if not resp or resp.status != 200:
raise AuthenticationError(
'Failed to fetch user profile, status ({0})'.format(
resp.status if resp else 'None'))
return resp.data['name'], resp.data['email']
def domain_check(self, email):
domain = email.split('@')[1]
domains = get_config_param('domain').split(',')
if domain in domains:
return True
return False
@provide_session
def load_user(self, userid, session=None):
if not userid or userid == 'None':
return None
user = session.query(models.User).filter(
models.User.id == int(userid)).first()
return GoogleUser(user)
@provide_session
def oauth_callback(self, session=None):
log.debug('Google OAuth callback called')
next_url = request.args.get('next') or url_for('admin.index')
resp = self.google_oauth.authorized_response()
try:
if resp is None:
raise AuthenticationError(
'Null response from Google, denying access.'
)
google_token = resp['access_token']
username, email = self.get_google_user_profile_info(google_token)
if not self.domain_check(email):
return redirect(url_for('airflow.noaccess'))
except AuthenticationError:
return redirect(url_for('airflow.noaccess'))
user = session.query(models.User).filter(
models.User.username == username).first()
if not user:
user = models.User(
username=username,
email=email,
is_superuser=False)
session.merge(user)
session.commit()
login_user(GoogleUser(user))
session.commit()
return redirect(next_url)
login_manager = GoogleAuthBackend()
def login(self, request):
return login_manager.login(request)
| true | true |
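The `domain_check` method in the Airflow Google auth backend above reduces to a small pure function. The sketch below mirrors its logic, with the comma-separated allowlist passed in directly rather than read from Airflow's `google` config section (a simplification for illustration; the function name is ours, not Airflow's):

```python
def domain_check(email: str, allowed_domains: str) -> bool:
    # Mirrors GoogleAuthBackend.domain_check: the part of the address
    # after '@' must appear in the comma-separated allowlist.
    domain = email.split("@")[1]
    return domain in allowed_domains.split(",")

print(domain_check("alice@example.com", "example.com,example.org"))  # True
print(domain_check("mallory@evil.test", "example.com,example.org"))  # False
```

Note that, like the original, this trusts `email` to contain exactly one `@`; addresses without one would raise `IndexError`.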
f73e03bea4c81cd549408b5d220ed96a9d999643 | 2,370 | py | Python | setup.py | xinrong-databricks/dask | d23d40b14bfe0c9d77577e86fa0bc2488b5c8092 | [
"BSD-3-Clause"
] | null | null | null | setup.py | xinrong-databricks/dask | d23d40b14bfe0c9d77577e86fa0bc2488b5c8092 | [
"BSD-3-Clause"
] | null | null | null | setup.py | xinrong-databricks/dask | d23d40b14bfe0c9d77577e86fa0bc2488b5c8092 | [
"BSD-3-Clause"
] | null | null | null | #!/usr/bin/env python
import sys
from os.path import exists
from setuptools import setup
import versioneer
# NOTE: These are tested in `continuous_integration/test_imports.sh` If
# you modify these, make sure to change the corresponding line there.
extras_require = {
"array": ["numpy >= 1.18"],
"bag": [], # keeping for backwards compatibility
"dataframe": ["numpy >= 1.18", "pandas >= 1.0"],
"distributed": ["distributed == 2022.01.0"],
"diagnostics": [
"bokeh >= 2.1.1",
"jinja2",
],
"delayed": [], # keeping for backwards compatibility
}
extras_require["complete"] = sorted({v for req in extras_require.values() for v in req})
# after complete is set, add in test
extras_require["test"] = [
"pytest",
"pytest-rerunfailures",
"pytest-xdist",
"pre-commit",
]
install_requires = [
"cloudpickle >= 1.1.1",
"fsspec >= 0.6.0",
"packaging >= 20.0",
"partd >= 0.3.10",
"pyyaml >= 5.3.1",
"toolz >= 0.8.2",
]
packages = [
"dask",
"dask.array",
"dask.bag",
"dask.bytes",
"dask.dataframe",
"dask.dataframe.io",
"dask.dataframe.tseries",
"dask.diagnostics",
]
tests = [p + ".tests" for p in packages]
# Only include pytest-runner in setup_requires if we're invoking tests
if {"pytest", "test", "ptr"}.intersection(sys.argv):
setup_requires = ["pytest-runner"]
else:
setup_requires = []
setup(
name="dask",
version=versioneer.get_version(),
cmdclass=versioneer.get_cmdclass(),
description="Parallel PyData with Task Scheduling",
url="https://github.com/dask/dask/",
maintainer="Matthew Rocklin",
maintainer_email="mrocklin@gmail.com",
license="BSD",
keywords="task-scheduling parallel numpy pandas pydata",
classifiers=[
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.7",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"License :: OSI Approved :: BSD License",
],
packages=packages + tests,
long_description=open("README.rst").read() if exists("README.rst") else "",
python_requires=">=3.7",
install_requires=install_requires,
setup_requires=setup_requires,
tests_require=["pytest"],
extras_require=extras_require,
include_package_data=True,
zip_safe=False,
)
| 27.241379 | 88 | 0.636287 |
import sys
from os.path import exists
from setuptools import setup
import versioneer
extras_require = {
"array": ["numpy >= 1.18"],
"bag": [],
"dataframe": ["numpy >= 1.18", "pandas >= 1.0"],
"distributed": ["distributed == 2022.01.0"],
"diagnostics": [
"bokeh >= 2.1.1",
"jinja2",
],
"delayed": [],
}
extras_require["complete"] = sorted({v for req in extras_require.values() for v in req})
extras_require["test"] = [
"pytest",
"pytest-rerunfailures",
"pytest-xdist",
"pre-commit",
]
install_requires = [
"cloudpickle >= 1.1.1",
"fsspec >= 0.6.0",
"packaging >= 20.0",
"partd >= 0.3.10",
"pyyaml >= 5.3.1",
"toolz >= 0.8.2",
]
packages = [
"dask",
"dask.array",
"dask.bag",
"dask.bytes",
"dask.dataframe",
"dask.dataframe.io",
"dask.dataframe.tseries",
"dask.diagnostics",
]
tests = [p + ".tests" for p in packages]
if {"pytest", "test", "ptr"}.intersection(sys.argv):
setup_requires = ["pytest-runner"]
else:
setup_requires = []
setup(
name="dask",
version=versioneer.get_version(),
cmdclass=versioneer.get_cmdclass(),
description="Parallel PyData with Task Scheduling",
url="https://github.com/dask/dask/",
maintainer="Matthew Rocklin",
maintainer_email="mrocklin@gmail.com",
license="BSD",
keywords="task-scheduling parallel numpy pandas pydata",
classifiers=[
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.7",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"License :: OSI Approved :: BSD License",
],
packages=packages + tests,
long_description=open("README.rst").read() if exists("README.rst") else "",
python_requires=">=3.7",
install_requires=install_requires,
setup_requires=setup_requires,
tests_require=["pytest"],
extras_require=extras_require,
include_package_data=True,
zip_safe=False,
)
| true | true |
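The `extras_require["complete"]` line in the dask `setup.py` above flattens every extra into one sorted, de-duplicated list via a set comprehension. The same expression run on a trimmed-down copy of the dict:

```python
extras_require = {
    "array": ["numpy >= 1.18"],
    "dataframe": ["numpy >= 1.18", "pandas >= 1.0"],
    "diagnostics": ["bokeh >= 2.1.1", "jinja2"],
}
# The set comprehension de-duplicates "numpy >= 1.18", which appears twice.
complete = sorted({v for req in extras_require.values() for v in req})
print(complete)
# ['bokeh >= 2.1.1', 'jinja2', 'numpy >= 1.18', 'pandas >= 1.0']
```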
f73e03e941f8e6db885abfe692d4befc04bc21f6 | 1,191 | py | Python | ocean_utils/exceptions.py | oceanprotocol/common-utils-py | f577f4762841496584e114baaec0d476e73c700e | [
"Apache-2.0"
] | null | null | null | ocean_utils/exceptions.py | oceanprotocol/common-utils-py | f577f4762841496584e114baaec0d476e73c700e | [
"Apache-2.0"
] | 2 | 2019-12-16T11:26:21.000Z | 2021-03-18T13:06:31.000Z | ocean_utils/exceptions.py | oceanprotocol/common-utils-py | f577f4762841496584e114baaec0d476e73c700e | [
"Apache-2.0"
] | null | null | null | """Exceptions for ocean_utils """
# Copyright 2018 Ocean Protocol Foundation
# SPDX-License-Identifier: Apache-2.0
class OceanInvalidContractAddress(Exception):
"""Raised when an invalid address is passed to the contract loader."""
class OceanDIDUnknownValueType(Exception):
"""Raised when a requested DID or a DID in the chain cannot be found."""
class OceanDIDAlreadyExist(Exception):
"""Raised when a requested DID is already published in OceanDB."""
class OceanInvalidMetadata(Exception):
"""Raised when some value in the metadata is invalid."""
class OceanInvalidServiceAgreementSignature(Exception):
"""Raised when the SLA signature is not valid."""
class OceanServiceAgreementExists(Exception):
"""Raised when the SLA already exists."""
class OceanInitializeServiceAgreementError(Exception):
"""Error on invoking purchase endpoint"""
class OceanEncryptAssetUrlsError(Exception):
"""Error invoking the encrypt endpoint"""
class OceanServiceConsumeError(Exception):
""" Error invoking a purchase endpoint"""
class OceanInvalidAgreementTemplate(Exception):
""" Error when agreement template is not valid or not approved"""
| 25.891304 | 76 | 0.755668 |
class OceanInvalidContractAddress(Exception):
class OceanDIDUnknownValueType(Exception):
class OceanDIDAlreadyExist(Exception):
class OceanInvalidMetadata(Exception):
class OceanInvalidServiceAgreementSignature(Exception):
class OceanServiceAgreementExists(Exception):
class OceanInitializeServiceAgreementError(Exception):
class OceanEncryptAssetUrlsError(Exception):
class OceanServiceConsumeError(Exception):
class OceanInvalidAgreementTemplate(Exception):
| true | true |
f73e03f42712e062871e9bdea75edb4b1cd8e4b1 | 2,107 | py | Python | tcex/threat_intelligence/mappings/group/group_types/event.py | kdeltared/tcex | 818c0d09256764f871e42d9ca5916f92d941d882 | [
"Apache-2.0"
] | null | null | null | tcex/threat_intelligence/mappings/group/group_types/event.py | kdeltared/tcex | 818c0d09256764f871e42d9ca5916f92d941d882 | [
"Apache-2.0"
] | null | null | null | tcex/threat_intelligence/mappings/group/group_types/event.py | kdeltared/tcex | 818c0d09256764f871e42d9ca5916f92d941d882 | [
"Apache-2.0"
] | null | null | null | """ThreatConnect TI Event"""
from ..group import Group
class Event(Group):
"""Unique API calls for Event API Endpoints
Valid status:
+ Escalated
+ False Positive
+ Needs Review
+ No Further Action
Args:
tcex (TcEx): An instantiated instance of TcEx object.
event_date (str, kwargs): The event "event date" datetime expression for this Group.
name (str, kwargs): [Required for Create] The name for this Group.
        owner (str, kwargs): The owner for this Group. Defaults to the default Org when not provided
status (str, kwargs): The status for this Group.
"""
def __init__(self, tcex, **kwargs):
"""Initialize Class Properties."""
super().__init__(tcex, sub_type='Event', api_entity='event', api_branch='events', **kwargs)
def event_date(self, event_date):
"""Update the event date for the Event.
Args:
event_date (str): The event datetime expression for this Group.
Returns:
requests.Response: The response from the API call.
"""
if not self.can_update():
self._tcex.handle_error(910, [self.type])
event_date = self._utils.datetime.format_datetime(
event_date, date_format='%Y-%m-%dT%H:%M:%SZ'
)
self._data['eventDate'] = event_date
request = {'eventDate': event_date}
return self.tc_requests.update(self.api_type, self.api_branch, self.unique_id, request)
def status(self, status):
        """Update the status for the Event.
Valid status:
+ Escalated
+ False Positive
+ Needs Review
+ No Further Action
Args:
status (str, kwargs): The status for this Group.
Returns:
requests.Response: The response from the API call.
"""
if not self.can_update():
self._tcex.handle_error(910, [self.type])
self._data['status'] = status
request = {'status': status}
return self.tc_requests.update(self.api_type, self.api_branch, self.unique_id, request)
| 31.924242 | 99 | 0.615567 | from ..group import Group
class Event(Group):
def __init__(self, tcex, **kwargs):
super().__init__(tcex, sub_type='Event', api_entity='event', api_branch='events', **kwargs)
def event_date(self, event_date):
if not self.can_update():
self._tcex.handle_error(910, [self.type])
event_date = self._utils.datetime.format_datetime(
event_date, date_format='%Y-%m-%dT%H:%M:%SZ'
)
self._data['eventDate'] = event_date
request = {'eventDate': event_date}
return self.tc_requests.update(self.api_type, self.api_branch, self.unique_id, request)
def status(self, status):
if not self.can_update():
self._tcex.handle_error(910, [self.type])
self._data['status'] = status
request = {'status': status}
return self.tc_requests.update(self.api_type, self.api_branch, self.unique_id, request)
| true | true |
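`Event.event_date` above normalizes its input through tcex's `format_datetime` helper before sending it to the API. A dependency-free stand-in (the function below is a hypothetical sketch, not the tcex API) shows the target `%Y-%m-%dT%H:%M:%SZ` shape:

```python
from datetime import datetime, timezone

def format_event_date(dt: datetime) -> str:
    # Hypothetical stand-in for tcex's utils.datetime.format_datetime:
    # convert to UTC and render in the form the API expects.
    return dt.astimezone(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")

print(format_event_date(datetime(2021, 6, 30, 12, 0, tzinfo=timezone.utc)))
# 2021-06-30T12:00:00Z
```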
f73e06194e11ecd7a8cb57365942e59471680cb6 | 3,416 | py | Python | models/R2D2_embedding.py | vardhanaleti/AdversarialQuerying | f2ed5960f345ba448eeb4c9a1f5c819c41d092da | [
"MIT"
] | 37 | 2019-10-02T23:05:54.000Z | 2022-03-03T07:41:14.000Z | models/R2D2_embedding.py | vardhanaleti/AdversarialQuerying | f2ed5960f345ba448eeb4c9a1f5c819c41d092da | [
"MIT"
] | 2 | 2020-04-28T06:09:16.000Z | 2020-11-10T14:52:58.000Z | models/R2D2_embedding.py | vardhanaleti/AdversarialQuerying | f2ed5960f345ba448eeb4c9a1f5c819c41d092da | [
"MIT"
] | 8 | 2020-02-12T11:16:51.000Z | 2021-12-08T18:02:55.000Z | import torch.nn as nn
import torch
import math
# Embedding network used in Meta-learning with differentiable closed-form solvers
# (Bertinetto et al., in submission to NIPS 2018).
# They call the ridge rigressor version as "Ridge Regression Differentiable Discriminator (R2D2)."
# Note that they use a peculiar ordering of functions, namely conv-BN-pooling-lrelu,
# as opposed to the conventional one (conv-BN-lrelu-pooling).
def R2D2_conv_block(in_channels, out_channels, retain_activation=True, keep_prob=1.0, activation='LeakyReLU'):
block = nn.Sequential(
nn.Conv2d(in_channels, out_channels, 3, padding=1),
nn.BatchNorm2d(out_channels),
nn.MaxPool2d(2)
)
if retain_activation:
if activation == 'LeakyReLU':
block.add_module("LeakyReLU", nn.LeakyReLU(0.1))
elif activation == 'ReLU':
block.add_module("ReLU", nn.ReLU())
elif activation == 'Softplus':
block.add_module("Softplus", nn.Softplus())
if keep_prob < 1.0:
block.add_module("Dropout", nn.Dropout(p=1 - keep_prob, inplace=False))
return block
class R2D2Embedding(nn.Module):
def __init__(self, x_dim=3, h1_dim=96, h2_dim=192, h3_dim=384, z_dim=512, \
retain_last_activation=False, denoise = False, activation='LeakyReLU'):
super(R2D2Embedding, self).__init__()
self.block1 = R2D2_conv_block(x_dim, h1_dim, activation=activation)
self.block2 = R2D2_conv_block(h1_dim, h2_dim, activation=activation)
self.block3 = R2D2_conv_block(h2_dim, h3_dim, keep_prob=0.9, activation=activation)
self.denoise = denoise
# In the last conv block, we disable activation function to boost the classification accuracy.
# This trick was proposed by Gidaris et al. (CVPR 2018).
# With this trick, the accuracy goes up from 50% to 51%.
# Although the authors of R2D2 did not mention this trick in the paper,
# we were unable to reproduce the result of Bertinetto et al. without resorting to this trick.
self.block4 = R2D2_conv_block(h3_dim, z_dim, retain_activation=retain_last_activation, keep_prob=0.7)
def forward(self, x):
b1 = self.block1(x)
b2 = self.block2(b1)
if self.denoise:
#print("before denoise", b2.size())
_, n_in, H, W = b2.size()
theta = nn.Conv2d(n_in, int(n_in / 2), 1,
stride=1, bias=False).to('cuda')
phi = nn.Conv2d(n_in, int(n_in / 2), 1,
stride=1, bias=False).to('cuda')
g = b2
f = torch.einsum('niab,nicd->nabcd', theta(b2), phi(b2))
orig_shape = f.size()
f = torch.reshape(f, (-1, H * W, H * W))
f = f / math.sqrt(n_in)
softmax = torch.nn.Softmax(dim = 0)
f = softmax(f)
f = torch.reshape(f, orig_shape)
f = torch.einsum('nabcd,nicd->niab', f, g)
final_conv = nn.Conv2d(f.size()[1], f.size()[1], 1, stride=1, bias=False).to('cuda')
f = final_conv(f)
b2 = b2 + f
#print("after denoise", b2.size())
b3 = self.block3(b2)
b4 = self.block4(b3)
# Flatten and concatenate the output of the 3rd and 4th conv blocks as proposed in R2D2 paper.
return torch.cat((b3.view(b3.size(0), -1), b4.view(b4.size(0), -1)), 1)
| 46.162162 | 110 | 0.619731 | import torch.nn as nn
import torch
import math
def R2D2_conv_block(in_channels, out_channels, retain_activation=True, keep_prob=1.0, activation='LeakyReLU'):
block = nn.Sequential(
nn.Conv2d(in_channels, out_channels, 3, padding=1),
nn.BatchNorm2d(out_channels),
nn.MaxPool2d(2)
)
if retain_activation:
if activation == 'LeakyReLU':
block.add_module("LeakyReLU", nn.LeakyReLU(0.1))
elif activation == 'ReLU':
block.add_module("ReLU", nn.ReLU())
elif activation == 'Softplus':
block.add_module("Softplus", nn.Softplus())
if keep_prob < 1.0:
block.add_module("Dropout", nn.Dropout(p=1 - keep_prob, inplace=False))
return block
class R2D2Embedding(nn.Module):
def __init__(self, x_dim=3, h1_dim=96, h2_dim=192, h3_dim=384, z_dim=512, \
retain_last_activation=False, denoise = False, activation='LeakyReLU'):
super(R2D2Embedding, self).__init__()
self.block1 = R2D2_conv_block(x_dim, h1_dim, activation=activation)
self.block2 = R2D2_conv_block(h1_dim, h2_dim, activation=activation)
self.block3 = R2D2_conv_block(h2_dim, h3_dim, keep_prob=0.9, activation=activation)
self.denoise = denoise
self.block4 = R2D2_conv_block(h3_dim, z_dim, retain_activation=retain_last_activation, keep_prob=0.7)
def forward(self, x):
b1 = self.block1(x)
b2 = self.block2(b1)
if self.denoise:
_, n_in, H, W = b2.size()
theta = nn.Conv2d(n_in, int(n_in / 2), 1,
stride=1, bias=False).to('cuda')
phi = nn.Conv2d(n_in, int(n_in / 2), 1,
stride=1, bias=False).to('cuda')
g = b2
f = torch.einsum('niab,nicd->nabcd', theta(b2), phi(b2))
orig_shape = f.size()
f = torch.reshape(f, (-1, H * W, H * W))
f = f / math.sqrt(n_in)
softmax = torch.nn.Softmax(dim = 0)
f = softmax(f)
f = torch.reshape(f, orig_shape)
f = torch.einsum('nabcd,nicd->niab', f, g)
final_conv = nn.Conv2d(f.size()[1], f.size()[1], 1, stride=1, bias=False).to('cuda')
f = final_conv(f)
b2 = b2 + f
b3 = self.block3(b2)
b4 = self.block4(b3)
return torch.cat((b3.view(b3.size(0), -1), b4.view(b4.size(0), -1)), 1)
| true | true |
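The `denoise` branch of `R2D2Embedding.forward` above is a non-local (attention-style) block: an affinity map from `theta` and `phi`, scaled by `sqrt(n_in)`, softmaxed, then applied to `g` and added back residually. A shape walk-through in NumPy keeps the sketch dependency-light; one deliberate simplification is flagged in the comments, and the einsum subscripts match the original exactly:

```python
import math
import numpy as np

# Shape walk-through of the non-local "denoise" branch. Simplification:
# the softmax here is row-wise over the last axis, whereas the original
# applies torch.nn.Softmax(dim=0); shapes are unaffected either way.
n, c, H, W = 2, 4, 3, 3
theta = np.random.randn(n, c // 2, H, W)   # theta(b2)
phi = np.random.randn(n, c // 2, H, W)     # phi(b2)
g = np.random.randn(n, c, H, W)            # g = b2

f = np.einsum('niab,nicd->nabcd', theta, phi) / math.sqrt(c)
f = f.reshape(n, H * W, H * W)
f = np.exp(f) / np.exp(f).sum(axis=-1, keepdims=True)   # softmax
f = f.reshape(n, H, W, H, W)
out = np.einsum('nabcd,nicd->niab', f, g)
print(out.shape)  # (2, 4, 3, 3) -- matches g, so the residual add b2 + f works
```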
f73e080c7818869a738706a40df7132dc8e096cc | 914 | py | Python | django_vali/urls.py | cnanyi/django-theme-vali-admin | fc1781ebdf2dbb456ca0aa35d18e81eb62f7789d | [
"MIT"
] | 3 | 2018-06-09T09:53:26.000Z | 2020-05-02T21:47:26.000Z | django_vali/urls.py | cnanyi/django-theme-vali-admin | fc1781ebdf2dbb456ca0aa35d18e81eb62f7789d | [
"MIT"
] | 1 | 2020-02-03T05:47:59.000Z | 2020-02-03T05:47:59.000Z | django_vali/urls.py | cnanyi/django-theme-vali-admin | fc1781ebdf2dbb456ca0aa35d18e81eb62f7789d | [
"MIT"
] | 2 | 2019-03-07T20:08:17.000Z | 2020-05-02T21:47:14.000Z | """django_vali URL Configuration
The `urlpatterns` list routes URLs to views. For more information please see:
https://docs.djangoproject.com/en/1.10/topics/http/urls/
Examples:
Function views
1. Add an import: from my_app import views
2. Add a URL to urlpatterns: url(r'^$', views.home, name='home')
Class-based views
1. Add an import: from other_app.views import Home
2. Add a URL to urlpatterns: url(r'^$', Home.as_view(), name='home')
Including another URLconf
1. Import the include() function: from django.conf.urls import url, include
2. Add a URL to urlpatterns: url(r'^blog/', include('blog.urls'))
"""
from django.conf.urls import url, include
from django.contrib import admin
from django.views.generic import RedirectView
urlpatterns = [
url(r'^admin/', admin.site.urls),
url(r'^vali/', include('vali.urls')),
url(r'', RedirectView.as_view(url='/admin/'))
]
| 38.083333 | 79 | 0.703501 | from django.conf.urls import url, include
from django.contrib import admin
from django.views.generic import RedirectView
urlpatterns = [
url(r'^admin/', admin.site.urls),
url(r'^vali/', include('vali.urls')),
url(r'', RedirectView.as_view(url='/admin/'))
]
| true | true |
f73e086dea4534e4b072d209cfde8542e77f5905 | 891 | py | Python | apps/movie/schema.py | aram2726/django_graphql | b915d148b0266628b0c03d8ff8932b16cce7b844 | [
"MIT"
] | null | null | null | apps/movie/schema.py | aram2726/django_graphql | b915d148b0266628b0c03d8ff8932b16cce7b844 | [
"MIT"
] | 6 | 2020-06-05T23:08:34.000Z | 2021-06-09T18:29:09.000Z | apps/movie/schema.py | aram2726/django_graphql | b915d148b0266628b0c03d8ff8932b16cce7b844 | [
"MIT"
] | null | null | null | import graphene
from apps.movie.mutators import MovieType, CreateMovie, UpdateMovie, DeleteMovie
from .models import Movie
class MovieInput(graphene.InputObjectType):
name = graphene.String(required=True)
class MovieMutations(graphene.ObjectType):
create_movie = CreateMovie.Field()
update_movie = UpdateMovie.Field()
delete_movie = DeleteMovie.Field()
class MovieQuery(graphene.ObjectType):
movies = graphene.List(MovieType)
movie = graphene.Field(MovieType, id=graphene.Int())
@staticmethod
def resolve_movies(info, **kwargs):
return Movie.objects.all()
@staticmethod
def resolve_movie(info, **kwargs):
movie_id = kwargs.get("id")
try:
return Movie.objects.get(pk=movie_id)
except Movie.DoesNotExist:
return None
schema = graphene.Schema(query=MovieQuery, mutation=MovieMutations)
| 24.75 | 80 | 0.71156 | import graphene
from apps.movie.mutators import MovieType, CreateMovie, UpdateMovie, DeleteMovie
from .models import Movie
class MovieInput(graphene.InputObjectType):
name = graphene.String(required=True)
class MovieMutations(graphene.ObjectType):
create_movie = CreateMovie.Field()
update_movie = UpdateMovie.Field()
delete_movie = DeleteMovie.Field()
class MovieQuery(graphene.ObjectType):
movies = graphene.List(MovieType)
movie = graphene.Field(MovieType, id=graphene.Int())
@staticmethod
def resolve_movies(info, **kwargs):
return Movie.objects.all()
@staticmethod
def resolve_movie(info, **kwargs):
movie_id = kwargs.get("id")
try:
return Movie.objects.get(pk=movie_id)
except Movie.DoesNotExist:
return None
schema = graphene.Schema(query=MovieQuery, mutation=MovieMutations)
| true | true |
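`MovieQuery.resolve_movie` above converts a missing row into `None` rather than letting `Movie.DoesNotExist` surface as a GraphQL error. The same try/except fallback, with a hypothetical in-memory store standing in for the Django ORM:

```python
class DoesNotExist(Exception):
    pass

MOVIES = {1: "Arrival", 2: "Heat"}  # hypothetical stand-in for Movie.objects

def get_movie(pk):
    if pk not in MOVIES:
        raise DoesNotExist(pk)
    return MOVIES[pk]

def resolve_movie(pk):
    # Mirror of MovieQuery.resolve_movie: absent rows become None.
    try:
        return get_movie(pk)
    except DoesNotExist:
        return None

print(resolve_movie(1))   # Arrival
print(resolve_movie(99))  # None
```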
f73e09a9615b3bb6c931b03cb352c21f6692e13b | 575 | py | Python | Modules/fibonacci_05/fib_fn.py | MihailMarkovski/Python-Advanced-2020 | 8edea78cbe5588a409ba9bc3767861250f58c1a6 | [
"MIT"
] | 4 | 2020-09-19T13:53:19.000Z | 2020-11-01T18:34:53.000Z | Modules/fibonacci_05/fib_fn.py | MNikov/Python-Advanced-September-2020 | 1d65039de7f094d908411afffa8aee9689ab4220 | [
"MIT"
] | null | null | null | Modules/fibonacci_05/fib_fn.py | MNikov/Python-Advanced-September-2020 | 1d65039de7f094d908411afffa8aee9689ab4220 | [
"MIT"
] | null | null | null | def create_sequence(count):
sequence = [0, 1, 1]
for n in range(3, count):
next_n = sequence[n - 1] + sequence[n - 2]
sequence.append(next_n)
print(' '.join([str(x) for x in sequence]))
def locate_number(number):
x, y = 0, 1
index = 0
while x < number:
x, y = y, x + y
index += 1
if number == x:
print(f"The number - {number} is at index {index}")
else:
print(f"The number {number} is not in the sequence")
# Python Advanced September 2020 - Jordan`s solution: https://pastebin.com/uQwzF9tB
| 26.136364 | 83 | 0.577391 | def create_sequence(count):
sequence = [0, 1, 1]
for n in range(3, count):
next_n = sequence[n - 1] + sequence[n - 2]
sequence.append(next_n)
print(' '.join([str(x) for x in sequence]))
def locate_number(number):
x, y = 0, 1
index = 0
while x < number:
x, y = y, x + y
index += 1
if number == x:
print(f"The number - {number} is at index {index}")
else:
print(f"The number {number} is not in the sequence")
| true | true |
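`create_sequence` above prints its result and, because it seeds the list with three terms, emits too many values when `count < 3`. A return-based variant that slices to the requested length (a sketch with our own name, not a drop-in replacement for the printing API):

```python
def fibonacci(count):
    # Same recurrence as create_sequence, but return a list and slice so
    # counts below 3 are handled correctly.
    sequence = [0, 1, 1]
    for n in range(3, count):
        sequence.append(sequence[n - 1] + sequence[n - 2])
    return sequence[:count]

print(fibonacci(8))  # [0, 1, 1, 2, 3, 5, 8, 13]
print(fibonacci(2))  # [0, 1]
```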
f73e0a2e838f37accfc12698b815339f989010ea | 319,713 | py | Python | torch/testing/_internal/distributed/distributed_test.py | brianjo/pytorch | 3bda4ea84228587fd67eddafb1c6637c52605dae | [
"Intel"
] | 1 | 2021-06-30T22:21:28.000Z | 2021-06-30T22:21:28.000Z | torch/testing/_internal/distributed/distributed_test.py | xiezhq-hermann/pytorch | fd8004b42e2a2348ec8837e3fb524b960c1b4cdb | [
"Intel"
] | null | null | null | torch/testing/_internal/distributed/distributed_test.py | xiezhq-hermann/pytorch | fd8004b42e2a2348ec8837e3fb524b960c1b4cdb | [
"Intel"
] | null | null | null | import copy
import itertools
import math
import os
import random
import sys
import tempfile
import time
from collections import namedtuple
from contextlib import contextmanager, suppress
from datetime import timedelta
from functools import reduce
from typing import Union, NamedTuple, Callable, Any
import torch
import torch.cuda
import torch.distributed as dist
import torch.distributed.algorithms.ddp_comm_hooks.post_localSGD_hook as post_localSGD
import torch.distributed.algorithms.ddp_comm_hooks.powerSGD_hook as powerSGD
import torch.distributed.algorithms.model_averaging.averagers as averagers
import torch.distributed.algorithms.model_averaging.utils as model_averaging_utils
import torch.nn as nn
import torch.nn.functional as F
from torch._utils_internal import TEST_MASTER_ADDR as MASTER_ADDR
from torch._utils_internal import TEST_MASTER_PORT as MASTER_PORT
from torch.cuda.amp import GradScaler, autocast
from torch.distributed.algorithms.ddp_comm_hooks import default_hooks as default
from torch.distributed.algorithms.ddp_comm_hooks import (
quantization as quantization_hooks,
)
from torch.distributed.distributed_c10d import (
get_world_size,
_get_default_group,
AllreduceOptions,
GroupMember,
)
from torch.nn.parallel import DistributedDataParallel
from torch.nn.parallel.distributed import _dump_DDP_relevant_env_vars
from torch.testing._internal.common_distributed import (
MultiProcessTestCase,
TEST_SKIPS,
initialize_temp_directories,
cleanup_temp_dir,
simple_sparse_reduce_tests,
skip_if_rocm,
skip_if_small_worldsize,
skip_if_lt_x_gpu,
nccl_skip_if_lt_x_gpu,
skip_if_no_gpu,
require_n_gpus_for_nccl_backend,
requires_nccl_version,
captured_output,
with_nccl_blocking_wait,
with_dist_debug_levels,
verify_ddp_error_logged,
)
from torch.testing._internal.common_utils import (
IS_MACOS,
IS_WINDOWS,
FILE_SCHEMA,
IS_FBCODE,
NO_MULTIPROCESSING_SPAWN,
sandcastle_skip,
sandcastle_skip_if,
)
if not IS_WINDOWS:
import torch.distributed.optim.post_localSGD_optimizer as post_localSGD_optimizer
from torch.distributed.optim.functional_sgd import _FunctionalSGD
from torch.utils.data.distributed import DistributedSampler
try:
import torchvision
HAS_TORCHVISION = True
except ImportError:
HAS_TORCHVISION = False
if sys.platform == "win32":
import msvcrt
else:
import fcntl
class Foo:
def __init__(self, x):
# Can be tensor or int
self.x = x
def __eq__(self, other):
def eq(value, other):
if isinstance(value, torch.Tensor):
return torch.equal(value, other)
return value == other
for attr, value in self.__dict__.items():
other_value = other.__dict__[attr]
if not eq(value, other_value):
return False
return True
f = Foo(10)
f.bar = 1
foo_cpu_tensor = Foo(torch.randn(3, 3))
COLLECTIVES_OBJECT_TEST_LIST = [
{"key1": 3, "key2": 4, "key3": {"nested": True}},
f,
foo_cpu_tensor,
"foo",
[1, 2, True, "string", [4, 5, "nested"]],
]
# Allowlist of distributed backends where profiling collectives is supported.
PROFILING_SUPPORTED_BACKENDS = [
dist.Backend.NCCL,
dist.Backend.GLOO,
dist.Backend.MPI,
]
# Allowlist of distributed backends where profiling is supported with use_cuda=True
CUDA_PROFILING_SUPPORTED_BACKENDS = [
dist.Backend.GLOO,
dist.Backend.MPI,
dist.Backend.NCCL,
]
# Allowlist of distributed backends where profiling is supported for p2p ops
SEND_RECV_PROFILING_SUPPORTED_BACKENDS = [
dist.Backend.MPI,
dist.Backend.GLOO,
dist.Backend.NCCL,
]
# Dummy NamedTuple data structures to test DDP support for NamedTuple types.
EXPECTED_FIELDS = ("a", "b")
TestNamedTupleInput_0 = namedtuple("NamedTuple", EXPECTED_FIELDS)
class TestNamedTupleInput_1(NamedTuple):
a: torch.tensor
b: torch.tensor
skipIfNoTorchVision = sandcastle_skip_if(not HAS_TORCHVISION, "no torchvision")
BACKEND = os.environ["BACKEND"]
INIT_METHOD = os.getenv("INIT_METHOD", "env://")
DEFAULT_TIMEOUT = 300
CUSTOMIZED_TIMEOUT = {"test_DistributedDataParallel": 500}
def get_profiling_event(postfix, profiler):
event_list = (
profiler.events()
if isinstance(profiler, torch.profiler.profile)
else profiler.function_events
)
return [event for event in event_list if event.name.endswith(postfix)]
# Base error message substring on unfinished reductions.
ddp_prev_reduction_unfinished_str = (
"Expected to have finished reduction in the prior iteration"
)
# Error message substring when find_unused_parameters=True has not been passed
ddp_recommend_find_unused_params_str = (
"passing the keyword argument `find_unused_parameters=True`"
)
# Error message substring when find_unused_parameters=True is enabled
ddp_find_unused_params_enabled_str = "Since `find_unused_parameters=True` is enabled"
# Error message substring for possibility of not all model outputs being used
# in loss computation
ddp_outputs_not_used_in_loss_str = (
"`forward` function outputs participate in calculating loss"
)
# Error message substring suggesting to use TORCH_DISTRIBUTED_DEBUG
ddp_suggest_debug_mode_str = (
"set the environment variable TORCH_DISTRIBUTED_DEBUG to either INFO or DETAIL"
)
class DDPUnevenTestInput(NamedTuple):
name: str
model: nn.Module
    inp: Union[torch.Tensor, tuple]
sync_interval: int
throw_on_early_termination: bool = False
hook: Callable = None
state: Any = None
class _FC2(nn.Module):
def __init__(self):
super(_FC2, self).__init__()
self.fc = nn.Linear(10, 50, bias=True)
self.fc.bias.requires_grad = False
def forward(self, x):
x = self.fc(x)
return x
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
self.fc1 = nn.Linear(2, 10, bias=False)
self.fc2 = _FC2()
self.fc3 = nn.Linear(50, 4, bias=False)
self.relu = nn.ReLU()
self.no_grad_param = nn.Parameter(
torch.tensor([2, 2]).long(), requires_grad=False
)
def forward(self, x):
x = self.relu(self.fc1(x))
x = self.relu(self.fc2(x))
x = self.fc3(x)
return F.softmax(x, dim=1)
class LargeNet(nn.Module):
def __init__(self):
super(LargeNet, self).__init__()
self.fc1 = nn.Linear(1000, 2000, bias=False)
self.fc2 = nn.Linear(2000, 500, bias=False)
def forward(self, x):
x = self.fc1(x)
x = self.fc2(x)
return x
class Task(nn.Module):
def __init__(self):
super().__init__()
self.p = nn.Parameter(torch.ones(2, 2))
def forward(self, x):
return self.p + x
class BatchNormNet(nn.Module):
def __init__(self, affine=True):
super(BatchNormNet, self).__init__()
self.fc1 = nn.Linear(2, 40, bias=False)
self.bn = nn.BatchNorm1d(4, affine=affine)
self.fc2 = nn.Linear(40, 4, bias=False)
def forward(self, x):
x = torch.reshape(self.fc1(x), (-1, 4, 10))
x = self.bn(x)
x = torch.reshape(x, (-1, 40))
x = self.fc2(x)
return F.softmax(x, dim=1)
class TwoLinLayerNet(nn.Module):
def __init__(self):
super().__init__()
self.a = nn.Linear(10, 10, bias=False)
self.b = nn.Linear(10, 10, bias=False)
def forward(self, x):
a = self.a(x)
b = self.b(x)
return (a, b)
class EmbeddingNet(nn.Module):
def __init__(self, rank):
super().__init__()
embedding_dim = 500 if rank == 0 else 50
self.embedding = nn.Embedding(num_embeddings=10, embedding_dim=embedding_dim)
self.lin = nn.Linear(embedding_dim, 1)
def forward(self, x):
x = self.embedding(x)
return self.lin(x)
class ControlFlowToyModel(nn.Module):
def __init__(self):
super(ControlFlowToyModel, self).__init__()
self.lin1 = nn.Linear(10, 10, bias=False)
self.lin2 = nn.Linear(10, 10, bias=False)
def forward(self, x):
# Second layer is used dependent on input x.
use_second_layer = torch.equal(x, torch.ones(20, 10, device=x.device))
if use_second_layer:
return self.lin2(F.relu(self.lin1(x)))
else:
return F.relu(self.lin1(x))
DDP_NET = Net()
BN_NET = BatchNormNet()
BN_NET_NO_AFFINE = BatchNormNet(affine=False)
ONLY_SBN_NET = nn.SyncBatchNorm(2, momentum=0.99)
def get_timeout(test_id):
test_name = test_id.split(".")[-1]
if test_name in CUSTOMIZED_TIMEOUT:
return CUSTOMIZED_TIMEOUT[test_name]
else:
return DEFAULT_TIMEOUT
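A minimal, self-contained sketch of the timeout lookup above. It assumes a test id of the form `"TestSuite.test_name"`, where everything after the last `.` is the key into `CUSTOMIZED_TIMEOUT`:

```python
# Self-contained sketch of the per-test timeout lookup.
DEFAULT_TIMEOUT = 300
CUSTOMIZED_TIMEOUT = {"test_DistributedDataParallel": 500}

def get_timeout(test_id):
    # Everything after the last "." is the test name used as the lookup key.
    test_name = test_id.split(".")[-1]
    return CUSTOMIZED_TIMEOUT.get(test_name, DEFAULT_TIMEOUT)

print(get_timeout("TestSuite.test_DistributedDataParallel"))  # 500
print(get_timeout("TestSuite.test_broadcast"))                # 300
```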
default_pg_timeout = 60
CUSTOM_PG_TIMEOUT = {
    # This test runs slowly and needs additional time to complete; otherwise
    # it can be taken down by NCCL_ASYNC_ERROR_HANDLING.
"test_ddp_uneven_inputs": 300,
# This test has a short timeout since it tests being taken down by
# NCCL_ASYNC_ERROR_HANDLING which we want to happen quickly.
"test_ddp_model_diff_across_ranks": 5,
}
def require_backend(backends):
if BACKEND not in backends:
return sandcastle_skip("Test requires backend to be one of %s" % backends)
return lambda func: func
def require_backends_available(backends):
def check(backend):
if backend == dist.Backend.GLOO:
return dist.is_gloo_available()
if backend == dist.Backend.NCCL:
return dist.is_nccl_available()
if backend == dist.Backend.MPI:
return dist.is_mpi_available()
return False
if not all(check(dist.Backend(backend)) for backend in backends):
return sandcastle_skip("Test requires backends to be available %s" % backends)
return lambda func: func
def require_world_size(world_size):
if int(os.environ["WORLD_SIZE"]) < world_size:
return sandcastle_skip("Test requires world size of %d" % world_size)
return lambda func: func
def apply_hack_for_nccl():
    # This is a hack for a known NCCL issue: using multiprocessing in
    # conjunction with multiple threads to manage different GPUs may cause
    # ncclCommInitRank to fail.
    # http://docs.nvidia.com/deeplearning/sdk/nccl-release-notes/rel_2.1.4.html#rel_2.1.4
    # It slows down the performance of collective operations.
    # Without this setting, NCCL might throw an unhandled error.
os.environ["NCCL_MAX_NRINGS"] = "1"
@contextmanager
def _lock():
TEMP_DIR = os.environ["TEMP_DIR"]
lockfile = os.path.join(TEMP_DIR, "lockfile")
with open(lockfile, "w") as lf:
try:
if sys.platform == "win32":
msvcrt.locking(lf.fileno(), msvcrt.LK_RLCK, 1)
yield
else:
fcntl.flock(lf.fileno(), fcntl.LOCK_EX)
yield
finally:
if sys.platform == "win32":
msvcrt.locking(lf.fileno(), msvcrt.LK_UNLCK, 1)
else:
fcntl.flock(lf.fileno(), fcntl.LOCK_UN)
lf.close()
def _build_tensor(size, value=None, dtype=torch.float, device_id=None):
if value is None:
value = size
if device_id is None:
return torch.empty(size, size, size, dtype=dtype).fill_(value)
else:
return torch.empty(size, size, size, dtype=dtype).fill_(value).cuda(device_id)
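A pure-Python sketch of `_build_tensor`'s shape and fill-value semantics, so the behavior can be checked without torch or a GPU (the `build_tensor_spec` helper is hypothetical, for illustration only): the result is a cube of side `size` whose every element equals `value`, which defaults to `size` itself.

```python
# Hypothetical helper mirroring _build_tensor's (shape, fill value) semantics.
def build_tensor_spec(size, value=None):
    if value is None:
        value = size
    return ((size, size, size), value)

print(build_tensor_spec(3))            # ((3, 3, 3), 3)
print(build_tensor_spec(2, value=-1))  # ((2, 2, 2), -1)
```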
def _build_multidim_tensor(dim, dim_size, value=None, dtype=torch.float):
if value is None:
        value = dim_size
return torch.empty(size=[dim_size for _ in range(dim)], dtype=dtype).fill_(value)
def _create_autograd_profiler():
return torch.autograd.profiler.profile(record_shapes=True)
def _create_torch_profiler():
return torch.profiler.profile(
activities=[
torch.profiler.ProfilerActivity.CPU,
],
record_shapes=True,
)
class Barrier(object):
barrier_id = 0
@classmethod
def init(cls):
cls.barrier_id = 0
barrier_dir = os.path.join(os.environ["TEMP_DIR"], "barrier")
for f_name in os.listdir(barrier_dir):
os.unlink(os.path.join(barrier_dir, f_name))
@classmethod
def sync(cls, wait_for=None, timeout=10):
if wait_for is None:
wait_for = dist.get_world_size()
cls.barrier_id += 1
barrier_dir = os.path.join(os.environ["TEMP_DIR"], "barrier")
pid = str(os.getpid())
barrier_file = os.path.join(barrier_dir, pid)
with _lock():
with open(barrier_file, "w") as f:
f.write(str(cls.barrier_id))
start_time = time.time()
while True:
arrived = 0
with _lock():
for f_name in os.listdir(barrier_dir):
with open(os.path.join(barrier_dir, f_name), "r") as f:
data = f.read()
if int(data) >= cls.barrier_id:
arrived += 1
if arrived == wait_for:
break
if time.time() - start_time > timeout:
raise RuntimeError("barrier timeout")
time.sleep(0.1)
class TestDistBackend(MultiProcessTestCase):
@classmethod
def setUpClass(cls):
os.environ["MASTER_ADDR"] = str(MASTER_ADDR)
os.environ["MASTER_PORT"] = str(MASTER_PORT)
# NCCL_BLOCKING_WAIT overrides NCCL_ASYNC_ERROR_HANDLING hence tests
# such as test_batch_isend_irecv_nccl will test NCCL_BLOCKING_WAIT as
# expected.
os.environ["NCCL_ASYNC_ERROR_HANDLING"] = "1"
super().setUpClass()
def setUp(self):
super().setUp()
# initialize temp directories
initialize_temp_directories()
# initialize Barrier
Barrier.init()
# Skip return code checking for following tests as they are expected to
# crash a process due to NCCL_ASYNC_ERROR_HANDLING.
self.skip_return_code_checks = []
def tearDown(self):
cleanup_temp_dir()
super().tearDown()
@property
def init_method(self):
return "{}{file_name}".format(FILE_SCHEMA, file_name=self.file_name)
@classmethod
def _run(cls, rank, test_name, file_name, pipe):
if BACKEND == "nccl" and not torch.cuda.is_available():
sys.exit(TEST_SKIPS["no_cuda"].exit_code)
self = cls(test_name)
self.rank = rank
self.file_name = file_name
if torch.cuda.is_available() and torch.cuda.device_count() < int(
self.world_size
):
sys.exit(TEST_SKIPS[f"multi-gpu-{self.world_size}"].exit_code)
try:
pg_timeout_seconds = CUSTOM_PG_TIMEOUT.get(test_name, default_pg_timeout)
timeout = timedelta(seconds=pg_timeout_seconds)
dist.init_process_group(
init_method=self.init_method,
backend=BACKEND,
world_size=int(self.world_size),
rank=self.rank,
timeout=timeout,
)
except RuntimeError as e:
if "recompile" in e.args[0]:
sys.exit(TEST_SKIPS["backend_unavailable"].exit_code)
raise
# Execute barrier prior to running test to ensure that every process
# has finished initialization and that the following test
# immediately exiting due to a skip doesn't cause flakiness.
self._barrier()
self.run_test(test_name, pipe)
self._barrier()
dist.destroy_process_group()
sys.exit(0)
# Needed since MultiProcessTestCase assumes a world_size of 4, but we
# run these tests under other various world_sizes.
@property
def world_size(self):
return os.environ["WORLD_SIZE"]
class DistributedTest:
class _DistTestBase:
def _barrier(self, *args, **kwargs):
Barrier.sync(*args, **kwargs)
def _init_group_test(self, **kwargs):
group = [1, 2]
group_id = dist.new_group(group, **kwargs)
rank = dist.get_rank()
if rank not in group:
return ([], None, rank)
return (group, group_id, rank)
def _init_full_group_test(self, **kwargs):
group = list(range(0, dist.get_world_size()))
group_id = dist.new_group(**kwargs)
rank = dist.get_rank()
return (group, group_id, rank)
def _init_global_test(self):
group = list(range(0, dist.get_world_size()))
group_id = dist.group.WORLD
rank = dist.get_rank()
return (group, group_id, rank)
# HELPER FOR MULTIGPU TESTS
def _init_multigpu_helper(self):
"""Multigpu tests are designed to simulate the multi nodes with multi
GPUs on each node. Nccl backend requires equal #GPUs in each process.
On a single node, all visible GPUs are evenly
divided to subsets, each process only uses a subset.
"""
nGPUs = torch.cuda.device_count()
world_size = dist.get_world_size()
visible_devices = range(nGPUs)
if BACKEND == "nccl":
apply_hack_for_nccl()
            # If the world size does not exceed the number of available GPUs,
            # each rank can be mapped to a corresponding GPU.
nGPUs_per_process = 1
if world_size > nGPUs:
nGPUs_per_process = nGPUs // world_size
rank_to_GPU = {
i: list(
visible_devices[i * nGPUs_per_process : (i + 1) * nGPUs_per_process]
)
for i in range(world_size)
}
return rank_to_GPU
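The mapping built above can be sketched in pure Python (a hypothetical `rank_to_gpu_map` helper, not part of the test suite), mirroring the slicing logic of `_init_multigpu_helper` without requiring CUDA:

```python
# Pure-Python sketch of the rank -> GPU assignment computed above.
def rank_to_gpu_map(world_size, n_gpus):
    visible_devices = list(range(n_gpus))
    n_gpus_per_process = 1
    if world_size > n_gpus:
        n_gpus_per_process = n_gpus // world_size
    return {
        i: visible_devices[i * n_gpus_per_process:(i + 1) * n_gpus_per_process]
        for i in range(world_size)
    }

print(rank_to_gpu_map(2, 4))  # {0: [0], 1: [1]}
```

Note that, as in the code above, a world size larger than the GPU count yields zero GPUs per process; configurations like that are skipped by the multi-GPU decorators.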
def test_dump_DDP_relevant_env_vars(self):
with captured_output() as (out, _):
_dump_DDP_relevant_env_vars()
lines = out.getvalue().splitlines()
def format_line(var):
return "env:%s=%s" % (
var,
os.environ[var] if var in os.environ else "N/A",
)
# Check relevant env vars
vars = [
"MASTER_ADDR",
"MASTER_PORT",
"WORLD_SIZE",
"NCCL_TOPO_DUMP_FILE", # N/A
"NCCL_ASYNC_ERROR_HANDLING",
]
for var in vars:
line = format_line(var)
self.assertIn(line, lines)
# Check irrelevant env vars
vars = [
"xxx",
"yyy",
"zzz",
]
for var in vars:
line = format_line(var)
self.assertNotIn(line, lines)
# GET RANK
def test_get_rank(self):
test_dir = os.path.join(os.environ["TEMP_DIR"], "test_dir")
pid = str(os.getpid())
num_processes = dist.get_world_size()
with open(os.path.join(test_dir, pid), "w") as f:
f.write(str(dist.get_rank()))
self._barrier()
all_ranks = set()
for f_name in os.listdir(test_dir):
with open(os.path.join(test_dir, f_name), "r") as f:
all_ranks.add(int(f.read()))
self.assertEqual(len(all_ranks), num_processes)
self._barrier()
if dist.get_rank() == 0:
for f_name in os.listdir(test_dir):
os.unlink(os.path.join(test_dir, f_name))
self._barrier()
def test_get_backend(self):
if dist.get_world_size() > 2:
group = [1, 2]
else:
group = [0, 1]
group_id = dist.new_group(group)
backend_str = BACKEND.lower()
self.assertEqual(dist.get_backend(), backend_str)
if dist.get_rank() in group:
self.assertEqual(dist.get_backend(group_id), backend_str)
else:
with self.assertRaisesRegex(
RuntimeError, "Invalid process group specified"
):
dist.get_backend(group_id)
def test_Backend_enum_class(self):
# test parsing
backend = BACKEND.lower()
self.assertEqual(dist.Backend(BACKEND.upper()), backend)
self.assertEqual(dist.Backend(BACKEND), backend)
with self.assertRaisesRegex(ValueError, "Invalid backend: 'undefined'"):
dist.Backend("undefined")
with self.assertRaisesRegex(ValueError, "Invalid backend: 'xYz'"):
dist.Backend("xYz")
with self.assertRaises(ValueError):
dist.Backend(None)
with self.assertRaises(ValueError):
dist.Backend(3)
with self.assertRaises(ValueError):
dist.Backend(["gloo"])
# Test destroy
def test_destroy_group(self):
if dist.get_world_size() > 2:
group = [1, 2]
else:
group = [0, 1]
group_id = dist.new_group(group)
self._barrier()
dist.destroy_process_group(group_id)
# Test get rank and size of group
def test_get_rank_size_group(self):
if dist.get_world_size() > 2:
group = [1, 2]
else:
group = [0, 1]
group_id = dist.new_group(group)
if dist.get_rank() in group:
self.assertEqual(dist.get_world_size(group_id), 2)
self.assertTrue(dist.get_rank(group_id) in list(range(2)))
else:
self.assertEqual(dist.get_world_size(group_id), -1)
self.assertEqual(dist.get_rank(group_id), -1)
# Test destroy full groups
def test_destroy_full_group(self):
_, group_id, _ = self._init_full_group_test()
self._barrier()
dist.destroy_process_group(group_id)
# Test get rank and size of full group
def test_get_rank_size_full_group(self):
_, group_id, _ = self._init_full_group_test()
self.assertEqual(dist.get_world_size(group_id), dist.get_world_size())
self.assertEqual(dist.get_rank(group_id), dist.get_rank())
def _test_barrier_timeout(self, group_id, timeout):
local_rank = dist.get_rank(group_id)
# Only execute barrier on rank == 0, causing it to timeout
if local_rank == 0:
expected_time = time.time() + timeout.total_seconds()
# In debug mode, we execute a monitored_barrier before the
# collective, so assert on that.
if dist._get_debug_mode() == dist._DistributedDebugLevel.DETAIL:
exception_ctx = self.assertRaisesRegex(
Exception, "failed to pass monitoredBarrier"
)
else:
exception_ctx = self.assertRaisesRegex(
Exception, " (Timed out|closed|timeout) "
)
with exception_ctx:
dist.barrier(group_id)
self.assertGreaterAlmostEqual(time.time(), expected_time, delta=0.1)
else:
pass
@sandcastle_skip_if(BACKEND != "gloo", "Only gloo backend supports timeouts")
@sandcastle_skip_if(
not INIT_METHOD.startswith("file://"),
"Requires file:// initialization method. "
+ "Both tcp:// and env:// rely on the TCP store for which "
"reinitialization has proven racy.",
)
def test_barrier_timeout_global(self):
dist.destroy_process_group()
# Explicitly pass world size to the barrier because we've
# just destroyed any state in torch.distributed.
self._barrier(wait_for=int(os.environ["WORLD_SIZE"]))
# Reinitialize global process group
timeout = timedelta(seconds=1)
dist.init_process_group(
init_method=INIT_METHOD,
backend=BACKEND,
world_size=int(os.environ["WORLD_SIZE"]),
rank=self.rank,
timeout=timeout,
)
self._test_barrier_timeout(dist.group.WORLD, timeout)
@skip_if_small_worldsize
@sandcastle_skip_if(BACKEND != "gloo", "Only gloo backend supports timeouts")
def test_barrier_timeout_group(self):
timeout = timedelta(seconds=5)
_, group_id, _ = self._init_group_test(timeout=timeout)
if group_id is not None:
self._test_barrier_timeout(group_id, timeout)
@sandcastle_skip_if(BACKEND != "gloo", "Only gloo backend supports timeouts")
def test_barrier_timeout_full_group(self):
timeout = timedelta(seconds=1)
_, group_id, _ = self._init_full_group_test(timeout=timeout)
if group_id is not None:
self._test_barrier_timeout(group_id, timeout)
# This test helper can only be used when using the Gloo or NCCL backend
# **and** both the Gloo and NCCL backends are available.
# See the @skip annotations below.
def _test_group_override_backend(self, initializer):
if BACKEND == "gloo":
new_backend = "nccl"
if BACKEND == "nccl":
new_backend = "gloo"
group, group_id, rank = initializer(backend=new_backend)
if group_id is None:
return
if new_backend == "gloo":
self.assertTrue(isinstance(group_id, dist.ProcessGroupGloo))
if new_backend == "nccl":
self.assertTrue(isinstance(group_id, dist.ProcessGroupNCCL))
self.assertEqual(rank, group[dist.get_rank(group_id)])
self.assertEqual(len(group), dist.get_world_size(group_id))
# Pin device (so we avoid NCCL race conditions/deadlocks).
group_rank = dist.get_rank(group_id)
torch.cuda.set_device(group_rank)
# Run broadcast of CUDA tensor (so it works for both Gloo and NCCL).
tensor = _build_tensor(2, value=group_rank).cuda()
dist.broadcast(tensor, src=group[0], group=group_id)
self.assertEqual(_build_tensor(2, value=0), tensor.to("cpu"))
@require_backend({"gloo", "nccl"})
@require_backends_available({"gloo", "nccl"})
@require_world_size(3)
@skip_if_lt_x_gpu(2)
def test_backend_group(self):
self._test_group_override_backend(self._init_group_test)
@require_backend({"gloo", "nccl"})
@require_backends_available({"gloo", "nccl"})
@skip_if_lt_x_gpu(3)
def test_backend_full_group(self):
self._test_group_override_backend(self._init_full_group_test)
@sandcastle_skip_if(
BACKEND != "nccl" and BACKEND != "gloo",
"MPI backend does not support creating subgroups on CUDA devices",
)
@require_world_size(4)
@skip_if_lt_x_gpu(2)
def test_new_subgroups(self):
subgroup_size = 2
cur_subgroup, subgroups = dist.new_subgroups(subgroup_size)
world_size = dist.get_world_size()
self.assertEqual(cur_subgroup.size(), subgroup_size)
            self.assertEqual(len(subgroups), world_size // subgroup_size)
self.assertFalse(dist._rank_not_in_group(cur_subgroup))
for subgroup in subgroups:
dist.destroy_process_group(subgroup)
@sandcastle_skip_if(
BACKEND != "nccl" and BACKEND != "gloo",
"MPI backend does not support creating subgroups on CUDA devices",
)
@skip_if_no_gpu
def test_new_subgroups_group_size_exceeds_world_size(self):
with self.assertRaisesRegex(
ValueError, "The arg 'group_size' must not exceed the world size"
):
dist.new_subgroups(100)
@sandcastle_skip_if(
BACKEND != "nccl" and BACKEND != "gloo",
"MPI backend does not support creating subgroups on CUDA devices",
)
@require_world_size(4)
@skip_if_lt_x_gpu(4)
def test_new_subgroups_world_size_not_divisible_by_group_size(self):
with self.assertRaisesRegex(
ValueError, "The world size must be divisible by 'group_size'"
):
dist.new_subgroups(3)
@sandcastle_skip_if(
BACKEND != "nccl" and BACKEND != "gloo",
"MPI backend does not support creating subgroups on CUDA devices",
)
@require_world_size(4)
@skip_if_lt_x_gpu(4)
def test_new_subgroups_by_enumeration(self):
group, group_id, rank = self._init_global_test()
rank_to_GPU = self._init_multigpu_helper()
device_id = rank_to_GPU[rank][0]
cur_subgroup, subgroups = dist.new_subgroups_by_enumeration(
ranks_per_subgroup_list=[[0, 2], [1, 3]]
)
if device_id >= 4:
self.assertIsNone(cur_subgroup)
else:
self.assertEqual(cur_subgroup.size(), 2)
self.assertEqual(len(subgroups), 2)
if device_id == 0 or device_id == 2:
self.assertEqual(cur_subgroup, subgroups[0])
else:
self.assertEqual(cur_subgroup, subgroups[1])
for subgroup in subgroups:
dist.destroy_process_group(subgroup)
@sandcastle_skip_if(
BACKEND != "nccl" and BACKEND != "gloo",
"MPI backend does not support creating subgroups on CUDA devices",
)
@require_world_size(4)
@skip_if_lt_x_gpu(4)
def test_new_subgroups_by_enumeration_input_rank_exceeds_world_size(self):
group, group_id, rank = self._init_global_test()
rank_to_GPU = self._init_multigpu_helper()
device_id = rank_to_GPU[rank][0]
world_size = get_world_size(group_id)
with self.assertRaisesRegex(
RuntimeError,
"The new group's rank should be within the the world_size set by init_process_group",
):
dist.new_subgroups_by_enumeration(
ranks_per_subgroup_list=[[0, 1], [world_size, 2]]
)
@sandcastle_skip_if(
BACKEND != "nccl" and BACKEND != "gloo",
"MPI backend does not support creating subgroups on CUDA devices",
)
@skip_if_no_gpu
def test_new_subgroups_by_enumeration_negative_input_rank(self):
group, group_id, rank = self._init_global_test()
with self.assertRaisesRegex(
RuntimeError,
"The new group's rank should be within the the world_size set by init_process_group",
):
dist.new_subgroups_by_enumeration(
ranks_per_subgroup_list=[[-1, -2], [-3, -4]]
)
@sandcastle_skip_if(
BACKEND != "nccl" and BACKEND != "gloo",
"MPI backend does not support creating subgroups on CUDA devices",
)
@require_world_size(4)
@skip_if_lt_x_gpu(4)
def test_new_subgroups_overlap_not_allowed(self):
with self.assertRaisesRegex(
ValueError, "Rank 1 has appeared in both subgroup"
):
dist.new_subgroups_by_enumeration(
ranks_per_subgroup_list=[[0], [1, 2], [1, 3]]
)
@sandcastle_skip_if(
BACKEND != "nccl" and BACKEND != "gloo",
"MPI backend does not support creating subgroups on CUDA devices",
)
@skip_if_lt_x_gpu(2)
def test_average_parameters(self):
rank = dist.get_rank()
rank_to_GPU = self._init_multigpu_helper()
device_id = rank_to_GPU[rank][0]
model = nn.Sequential(
nn.Conv2d(3, 3, kernel_size=3, padding=1),
nn.ReLU(),
nn.Linear(1, 5, bias=False),
).cuda(device_id)
# Test global model averaging
for p in model.parameters():
p.data = torch.ones_like(p.data)
model_averaging_utils.average_parameters(
params=model.parameters(), process_group=None
)
# Every element will be the same as the input.
for p in model.parameters():
self.assertEqual(p.data, torch.ones_like(p.data))
# Test partial model averaging
for p in model.parameters():
p.data = torch.ones_like(p.data) * rank
group_nccl = dist.new_group(ranks=[0, 1], backend="nccl")
model_averaging_utils.average_parameters(
params=model.parameters(), process_group=group_nccl
)
if not dist._rank_not_in_group(group_nccl):
# Every element on device 0 or 1 should be the average of 0 and 1, i.e., 0.5.
for p in model.parameters():
self.assertEqual(p.data, torch.ones_like(p.data) * 0.5)
else:
# Every element on device not in the subgroup should remain the same.
for p in model.parameters():
self.assertEqual(p.data, torch.ones_like(p.data) * rank)
@sandcastle_skip_if(
BACKEND != "nccl" and BACKEND != "gloo",
"MPI backend does not support creating subgroups on CUDA devices",
)
@skip_if_lt_x_gpu(2)
def test_periodic_model_averager(self):
rank = dist.get_rank()
rank_to_GPU = self._init_multigpu_helper()
device_id = rank_to_GPU[rank][0]
world_size = dist.get_world_size()
model = nn.Linear(1, 5, bias=False).cuda(device_id)
param = next(model.parameters())
tensor = torch.ones_like(param.data) * rank
expected_avg_tensor = (
torch.ones_like(param.data) * sum(range(world_size)) / world_size
)
period = 4
for warmup_steps in [12, 13, 14, 15]:
averager = averagers.PeriodicModelAverager(period=period, warmup_steps=warmup_steps)
for step in range(0, 20):
# Reset the parameters at every step.
param.data = copy.deepcopy(tensor)
averager.average_parameters(model.parameters())
if step >= warmup_steps and (step - warmup_steps) % period == 0:
self.assertEqual(param.data, expected_avg_tensor)
else:
# No model averaging, so the parameters are not updated.
self.assertEqual(param.data, tensor)
# NCCL Batch SEND RECV
@skip_if_no_gpu
@sandcastle_skip_if(BACKEND != "nccl", "NCCL Batch Send Recv Only")
@requires_nccl_version(2700, "Need NCCL 2.7+ for send/recv")
def test_batch_isend_irecv_nccl(self):
self._barrier()
rank = dist.get_rank()
rank_to_GPU = self._init_multigpu_helper()
device_id = rank_to_GPU[rank][0]
torch.cuda.set_device(device_id)
p2p_op_list = []
for val in ["1", "0"]:
os.environ["NCCL_BLOCKING_WAIT"] = val
for src in range(0, dist.get_world_size()):
send_tensor = _build_tensor(rank + 1, device_id=device_id)
recv_tensor = _build_tensor(src + 1, value=-1, device_id=device_id)
recv_op = dist.P2POp(dist.irecv, recv_tensor, src)
p2p_op_list.append(recv_op)
send_op = dist.P2POp(dist.isend, send_tensor, src)
p2p_op_list.append(send_op)
reqs = dist.batch_isend_irecv(p2p_op_list)
for req in reqs:
req.wait()
self._barrier()
@skip_if_no_gpu
@sandcastle_skip_if(BACKEND != "nccl", "NCCL Batch Send Recv Only")
@requires_nccl_version(2700, "Need NCCL 2.7+ for send/recv")
def test_batch_isend_irecv_self_nccl(self):
self._barrier()
rank = dist.get_rank()
rank_to_GPU = self._init_multigpu_helper()
device_id = rank_to_GPU[rank][0]
p2p_op_list = []
if rank == 0:
send_tensor = _build_tensor(rank + 1, device_id=device_id)
recv_tensor = _build_tensor(rank + 1, value=-1, device_id=device_id)
recv_op = dist.P2POp(dist.irecv, recv_tensor, 0)
p2p_op_list.append(recv_op)
send_op = dist.P2POp(dist.isend, send_tensor, 0)
p2p_op_list.append(send_op)
reqs = dist.batch_isend_irecv(p2p_op_list)
for req in reqs:
req.wait()
self._barrier()
@skip_if_no_gpu
@skip_if_small_worldsize
@sandcastle_skip_if(BACKEND != "nccl", "NCCL Batch Send Recv Only")
@requires_nccl_version(2700, "Need NCCL 2.7+ for send/recv")
def test_batch_isend_irecv_no_rank_zero_nccl(self):
self._barrier()
rank = dist.get_rank()
rank_to_GPU = self._init_multigpu_helper()
device_id = rank_to_GPU[rank][0]
torch.cuda.set_device(device_id)
p2p_op_list = []
if rank == 1:
peer = 2
elif rank == 2:
peer = 1
if rank in [1, 2]:
send_tensor = _build_tensor(rank + 1, device_id=device_id)
recv_tensor = _build_tensor(peer + 1, value=-1, device_id=device_id)
recv_op = dist.P2POp(dist.irecv, recv_tensor, peer)
p2p_op_list.append(recv_op)
send_op = dist.P2POp(dist.isend, send_tensor, peer)
p2p_op_list.append(send_op)
reqs = dist.batch_isend_irecv(p2p_op_list)
for req in reqs:
req.wait()
self._barrier()
# GLOO Batch SEND RECV CPU
@sandcastle_skip_if(BACKEND != "gloo", "GLOO Batch Send Recv CPU")
def test_batch_isend_irecv_gloo(self):
self._barrier()
rank = dist.get_rank()
p2p_op_list = []
for src in range(0, dist.get_world_size()):
if src == rank:
continue
send_tensor = _build_tensor(rank + 1)
recv_tensor = _build_tensor(src + 1, value=-1)
recv_op = dist.P2POp(dist.irecv, recv_tensor, src)
p2p_op_list.append(recv_op)
send_op = dist.P2POp(dist.isend, send_tensor, src)
p2p_op_list.append(send_op)
reqs = dist.batch_isend_irecv(p2p_op_list)
for req in reqs:
req.wait()
self._barrier()
# GLOO Batch SEND RECV CPU with provided tags
@sandcastle_skip_if(BACKEND != "gloo", "GLOO Batch Send Recv CPU")
def test_batch_isend_irecv_gloo_tags(self):
self._barrier()
rank = dist.get_rank()
p2p_op_list = []
for src in range(0, dist.get_world_size()):
if src == rank:
continue
send_tensor = _build_tensor(rank + 1)
recv_tensor = _build_tensor(src + 1, value=-1)
recv_op = dist.P2POp(dist.irecv, recv_tensor, src, tag=src)
p2p_op_list.append(recv_op)
send_op = dist.P2POp(dist.isend, send_tensor, src, tag=rank)
p2p_op_list.append(send_op)
reqs = dist.batch_isend_irecv(p2p_op_list)
for req in reqs:
req.wait()
self._barrier()
# NCCL Batch SEND RECV Tensor Error
@sandcastle_skip_if(BACKEND != "nccl", "NCCL Batch Send Recv Only")
@requires_nccl_version(2700, "Need NCCL 2.7+ for send/recv")
def test_batch_isend_irecv_tensor_err(self):
self._barrier()
rank = dist.get_rank()
if rank == 0:
rank_to_GPU = self._init_multigpu_helper()
device_id = rank_to_GPU[rank][0]
with self.assertRaisesRegex(
RuntimeError, "Tensors must be CUDA and dense"
):
send_tensor = _build_tensor(rank + 1)
send_op = dist.P2POp(dist.isend, send_tensor, 1)
req = dist.batch_isend_irecv([send_op])
req.wait()
# NCCL Batch SEND RECV Op Error
@sandcastle_skip_if(BACKEND != "nccl", "NCCL Batch Send Recv Only")
@requires_nccl_version(2700, "Need NCCL 2.7+ for send/recv")
def test_batch_isend_irecv_op_err(self):
self._barrier()
rank = dist.get_rank()
if rank == 0:
rank_to_GPU = self._init_multigpu_helper()
device_id = rank_to_GPU[rank][0]
with self.assertRaisesRegex(RuntimeError, "^Invalid ``op``"):
send_tensor = _build_tensor(rank + 1, device_id=device_id)
send_op = dist.P2POp(dist.broadcast, send_tensor, 1)
req = dist.batch_isend_irecv([send_op])
req.wait()
# NCCL Batch SEND RECV p2p_op_list Error
@sandcastle_skip_if(BACKEND != "nccl", "NCCL Batch Send Recv Only")
@requires_nccl_version(2700, "Need NCCL 2.7+ for send/recv")
def test_batch_isend_irecv_op_list_err(self):
self._barrier()
rank = dist.get_rank()
if rank == 0:
rank_to_GPU = self._init_multigpu_helper()
device_id = rank_to_GPU[rank][0]
with self.assertRaisesRegex(RuntimeError, "^Invalid ``p2p_op_list``"):
send_tensor = _build_tensor(rank + 1)
req = dist.batch_isend_irecv([1, 2])
req.wait()
# NCCL Batch SEND RECV Mixed Backend Error
@sandcastle_skip_if(BACKEND != "nccl", "NCCL Batch Send Recv Only")
@requires_nccl_version(2700, "Need NCCL 2.7+ for send/recv")
def test_batch_isend_irecv_mixed_backend_err(self):
self._barrier()
rank = dist.get_rank()
rank_to_GPU = self._init_multigpu_helper()
device_id = rank_to_GPU[rank][0]
group_gloo = dist.new_group(ranks=[0, 1], backend="gloo")
group_nccl = dist.new_group(ranks=[0, 1], backend="nccl")
if rank == 0:
with self.assertRaisesRegex(
RuntimeError, "All groups need to use the same backend"
):
send_tensor = _build_tensor(rank + 1)
send_op_gloo = dist.P2POp(dist.isend, send_tensor, 1, group_gloo)
send_op_nccl = dist.P2POp(dist.isend, send_tensor, 1, group_nccl)
req = dist.batch_isend_irecv([send_op_gloo, send_op_nccl])
req.wait()
# NCCL SEND RECV
@skip_if_no_gpu
@sandcastle_skip_if(BACKEND != "nccl", "NCCL Send Recv Only")
@requires_nccl_version(2700, "Need NCCL 2.7+ for send/recv")
def _test_send_recv_nccl(self, profiler_ctx=None):
# TODO: now that nccl send/recv is supported, there does not seem to
# be a need to have nccl send/recv be tested separately.
rank = dist.get_rank()
rank_to_GPU = self._init_multigpu_helper()
device_id = rank_to_GPU[rank][0]
torch.cuda.set_device(device_id)
tensor = _build_tensor(rank + 1, device_id=device_id)
profiler_cls = profiler_ctx if profiler_ctx is not None else suppress()
with profiler_cls as prof:
for src in range(0, dist.get_world_size()):
if src == rank:
# Send mode
for dst in range(0, dist.get_world_size()):
if dst == rank:
continue
dist.send(tensor, dst)
else:
# Recv mode
expected_tensor = _build_tensor(src + 1)
output_tensor = _build_tensor(
src + 1, value=-1, device_id=device_id
)
dist.recv(output_tensor, src)
self.assertEqual(output_tensor, expected_tensor)
self._barrier()
if profiler_ctx is not None:
backend = dist.get_backend()
if backend in SEND_RECV_PROFILING_SUPPORTED_BACKENDS:
for event_name in [f"{backend}:send", f"{backend}:recv"]:
events = get_profiling_event(event_name, prof)
self.assertTrue(events)
# Event order is not deterministic, so simply assert their shape
# is found in the following list.
expected_shapes = [
[[rank + 1] * 3] for rank in range(dist.get_world_size())
]
for event in events:
self.assertTrue(event.input_shapes in expected_shapes)
@skip_if_no_gpu
@sandcastle_skip_if(BACKEND != "nccl", "NCCL Send Recv Only")
@requires_nccl_version(2700, "Need NCCL 2.7+ for send/recv")
def test_send_recv_nccl(self):
self._test_send_recv_nccl()
@skip_if_no_gpu
@sandcastle_skip_if(BACKEND != "nccl", "NCCL Send Recv Only")
@requires_nccl_version(2700, "Need NCCL 2.7+ for send/recv")
def test_send_recv_nccl_autograd_profiler(self):
profiler_ctx = torch.autograd.profiler.profile(record_shapes=True)
self._test_send_recv_nccl(profiler_ctx)
@skip_if_no_gpu
@sandcastle_skip_if(BACKEND != "nccl", "NCCL Send Recv Only")
@requires_nccl_version(2700, "Need NCCL 2.7+ for send/recv")
@sandcastle_skip_if(IS_FBCODE, "Kineto in fbcode causes hang")
@sandcastle_skip_if(
IS_MACOS or IS_WINDOWS,
"torch.profiler not enabled for mac/windows: https://github.com/pytorch/pytorch/pull/56124",
)
def test_send_recv_nccl_torch_profiler(self):
profiler_ctx = torch.profiler.profile(
activities=[
torch.profiler.ProfilerActivity.CPU,
torch.profiler.ProfilerActivity.CUDA,
],
record_shapes=True,
)
self._test_send_recv_nccl(profiler_ctx)
# SEND RECV
def _test_send_recv(self, profiler_ctx):
rank = dist.get_rank()
send_size = rank + 1
tensor = _build_tensor(send_size)
ctx = profiler_ctx if profiler_ctx is not None else suppress()
with ctx as prof:
for src in range(0, dist.get_world_size()):
if src == rank:
# Send mode
for dst in range(0, dist.get_world_size()):
if dst == rank:
continue
dist.send(tensor, dst)
else:
# Recv mode
recv_size = src + 1
expected_tensor = _build_tensor(recv_size)
output_tensor = _build_tensor(recv_size, value=-1)
dist.recv(output_tensor, src)
self.assertEqual(output_tensor, expected_tensor)
if profiler_ctx is not None:
backend = dist.get_backend()
if backend in SEND_RECV_PROFILING_SUPPORTED_BACKENDS:
for event_name in [f"{backend}:send", f"{backend}:recv"]:
events = get_profiling_event(event_name, prof)
# Each rank sends/recvs from all other ranks.
event_count = sum(e.count for e in events)
expected_event_count = dist.get_world_size() - 1
self.assertEqual(event_count, expected_event_count)
                    # Event order is not deterministic, so simply assert that
                    # each event's shape is found in the following list.
expected_shapes = [
[[rank + 1] * 3] for rank in range(dist.get_world_size())
]
for event in events:
self.assertTrue(event.is_async)
self.assertTrue(event.input_shapes in expected_shapes)
@sandcastle_skip_if(
BACKEND == "nccl", "Nccl send/recv tested by test_send_recv_nccl"
)
def test_send_recv(self):
self._test_send_recv(profiler_ctx=None)
@sandcastle_skip_if(
BACKEND == "nccl", "NCCL send/recv tested by test_send_recv_nccl"
)
def test_send_recv_autograd_profiler(self):
autograd_profiler_ctx = _create_autograd_profiler()
self._test_send_recv(profiler_ctx=autograd_profiler_ctx)
@sandcastle_skip_if(
BACKEND == "nccl", "NCCL send/recv tested by test_send_recv_nccl"
)
@sandcastle_skip_if(IS_FBCODE, "Kineto in fbcode causes hang")
@sandcastle_skip_if(
IS_MACOS or IS_WINDOWS,
"torch.profiler not enabled for mac/windows: https://github.com/pytorch/pytorch/pull/56124",
)
def test_send_recv_torch_profiler(self):
torch_profiler_ctx = _create_torch_profiler()
return self._test_send_recv(profiler_ctx=torch_profiler_ctx)
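    # Illustrative sketch (not part of the original tests): the point-to-point
    # pattern exercised above pairs a blocking dist.send on one rank with a
    # matching blocking dist.recv on another, e.g. in a hypothetical
    # two-rank job:
    #
    #   if dist.get_rank() == 0:
    #       dist.send(torch.ones(3), dst=1)   # blocks until rank 1 posts a recv
    #   else:
    #       buf = torch.empty(3)
    #       dist.recv(buf, src=0)             # blocks until the message arrives
    #
    # dist.send/dist.recv are real torch.distributed APIs; the shape and ranks
    # here are chosen only for illustration.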
# SEND RECV ANY SOURCE
def _test_send_recv_any_source(self, profiler_ctx):
rank = dist.get_rank()
send_recv_size = 10
tensor = _build_tensor(send_recv_size, value=rank)
recv_ranks = list()
irecv_ranks = list()
ctx = profiler_ctx if profiler_ctx is not None else suppress()
with ctx as prof:
for dst in range(0, dist.get_world_size()):
if dst == rank:
# Recv mode
                    # Use a distinct loop variable so the outer "dst" is not
                    # shadowed.
                    for src in range(0, dist.get_world_size()):
                        if src == rank:
                            continue
for recv in ["recv", "irecv"]:
output_tensor = _build_tensor(send_recv_size, value=-1)
if recv == "recv":
sender = dist.recv(output_tensor)
recv_ranks.append(sender)
elif recv == "irecv":
work = dist.irecv(output_tensor)
work.wait()
sender = work._source_rank()
irecv_ranks.append(sender)
                            # Assert that every value in the received tensor
                            # equals "sender", i.e. the rank of the sending
                            # process.
self.assertTrue(output_tensor.eq(sender).all())
else:
# Send mode
dist.send(tensor, dst) # recv
dist.send(tensor, dst) # irecv
if profiler_ctx is not None:
backend = dist.get_backend()
if backend in SEND_RECV_PROFILING_SUPPORTED_BACKENDS:
for event_name in [f"{backend}:send", f"{backend}:recvAnySource"]:
events = get_profiling_event(event_name, prof)
# Each rank sends/recvs from other rank twice.
self.assertEqual(
sum(event.count for event in events),
2 * (dist.get_world_size() - 1),
)
for event in events:
self.assertTrue(event.is_async)
self.assertEqual(event.input_shapes, [[send_recv_size] * 3])
        # Each rank performs 2 * (world_size - 1) sends; verify that, globally,
        # the same number of messages was received on the other end.
recv_ranks_tensor = torch.cat(
(torch.tensor(recv_ranks), torch.tensor(irecv_ranks)), 0
)
global_recv_ranks = [
torch.empty_like(recv_ranks_tensor)
for _ in range(dist.get_world_size())
]
dist.all_gather(global_recv_ranks, recv_ranks_tensor)
global_recv_ranks_list = []
for tensor in global_recv_ranks:
global_recv_ranks_list += tensor.tolist()
from itertools import groupby
global_recv_ranks_list.sort()
frequency = [
len(list(group)) for key, group in groupby(global_recv_ranks_list)
]
self.assertEqual(dist.get_world_size(), len(frequency))
self.assertEqual(
[2 * (dist.get_world_size() - 1)] * dist.get_world_size(), frequency
)
self._barrier()
@sandcastle_skip_if(
BACKEND == "nccl", "Nccl does not support send/recv from any source"
)
def test_send_recv_any_source(self):
self._test_send_recv_any_source(profiler_ctx=None)
@sandcastle_skip_if(
BACKEND == "nccl", "Nccl does not support send/recv from any source"
)
def test_send_recv_any_source_autograd_profiler(self):
autograd_profiler_ctx = _create_autograd_profiler()
self._test_send_recv_any_source(profiler_ctx=autograd_profiler_ctx)
@sandcastle_skip_if(
BACKEND == "nccl", "Nccl does not support send/recv from any source"
)
    @sandcastle_skip_if(IS_FBCODE, "Kineto in fbcode causes hang")
@sandcastle_skip_if(
IS_MACOS or IS_WINDOWS,
"torch.profiler not enabled for mac/windows: https://github.com/pytorch/pytorch/pull/56124",
)
def test_send_recv_any_source_torch_profiler(self):
torch_profiler_ctx = _create_torch_profiler()
return self._test_send_recv_any_source(profiler_ctx=torch_profiler_ctx)
# SEND RECV WITH TAG
def _test_send_recv_with_tag(self, profiler_ctx):
rank = dist.get_rank()
world_size = dist.get_world_size()
send_recv_size = 10
tensor = _build_tensor(send_recv_size, value=rank)
ctx = profiler_ctx if profiler_ctx is not None else suppress()
with ctx as prof:
for dst in range(0, world_size):
if dst == rank:
# Recv mode
for src in range(0, world_size):
if src == rank:
continue
output_tensor = _build_tensor(send_recv_size, value=-1)
dist.recv(output_tensor, src, tag=src)
self.assertTrue(output_tensor.eq(src).all())
else:
# Send mode
dist.send(tensor, dst, tag=rank)
if profiler_ctx is not None:
backend = dist.get_backend()
if backend in SEND_RECV_PROFILING_SUPPORTED_BACKENDS:
for event_name in [f"{backend}:send", f"{backend}:recv"]:
events = get_profiling_event(event_name, prof)
# Each rank sends/recvs from all other ranks
event_count = sum(e.count for e in events)
expected_event_count = dist.get_world_size() - 1
self.assertEqual(event_count, expected_event_count)
for event in events:
self.assertTrue(event.is_async)
self.assertEqual(event.name, event_name)
self.assertEqual(event.input_shapes, [[send_recv_size] * 3])
@sandcastle_skip_if(
BACKEND == "nccl", "NCCL send/recv tested by test_send_recv_nccl"
)
def test_send_recv_with_tag(self):
self._test_send_recv_with_tag(profiler_ctx=None)
@sandcastle_skip_if(
BACKEND == "nccl", "NCCL send/recv tested by test_send_recv_nccl"
)
def test_send_recv_with_tag_autograd_profiler(self):
autograd_profiler_ctx = _create_autograd_profiler()
return self._test_send_recv_with_tag(profiler_ctx=autograd_profiler_ctx)
@sandcastle_skip_if(
BACKEND == "nccl", "NCCL send/recv tested by test_send_recv_nccl"
)
    @sandcastle_skip_if(IS_FBCODE, "Kineto in fbcode causes hang")
@sandcastle_skip_if(
IS_MACOS or IS_WINDOWS,
"torch.profiler not enabled for mac/windows: https://github.com/pytorch/pytorch/pull/56124",
)
def test_send_recv_with_tag_torch_profiler(self):
torch_profiler_ctx = _create_torch_profiler()
return self._test_send_recv_with_tag(profiler_ctx=torch_profiler_ctx)
# ISEND
def _test_isend(self, profiler_ctx):
rank = dist.get_rank()
world_size = dist.get_world_size()
ctx = profiler_ctx if profiler_ctx is not None else suppress()
with ctx as prof:
if rank == 0:
requests = [
dist.isend(_build_tensor(dest, 10), dest)
for dest in range(1, world_size)
]
for request in requests:
request.wait()
self.assertTrue(request.is_completed())
else:
tensor = _build_tensor(rank, -1)
dist.recv(tensor, 0)
self.assertEqual(tensor, _build_tensor(rank, 10))
self._barrier()
if profiler_ctx is not None:
backend = dist.get_backend()
if backend in SEND_RECV_PROFILING_SUPPORTED_BACKENDS:
expected_event_name = (
f"{backend}:send" if rank == 0 else f"{backend}:recv"
)
events = get_profiling_event(expected_event_name, prof)
event_count = sum(e.count for e in events)
expected_count = dist.get_world_size() - 1 if rank == 0 else 1
self.assertEqual(expected_count, event_count)
# Event ordering is not guaranteed, so simply ensure the shapes are
# found in the following map.
expected_shapes = {
r: [[r] * 3] for r in range(1, dist.get_world_size())
}
for event in events:
self.assertTrue(event.is_async)
self.assertEqual(event.name, expected_event_name)
if rank == 0:
self.assertTrue(
event.input_shapes in expected_shapes.values()
)
else:
self.assertEqual(event.input_shapes, expected_shapes[rank])
@sandcastle_skip_if(BACKEND == "nccl", "Nccl does not support isend")
def test_isend(self):
self._test_isend(profiler_ctx=None)
@sandcastle_skip_if(BACKEND == "nccl", "Nccl does not support isend")
def test_isend_autograd_profiler(self):
autograd_profiler_ctx = _create_autograd_profiler()
self._test_isend(profiler_ctx=autograd_profiler_ctx)
@sandcastle_skip_if(BACKEND == "nccl", "Nccl does not support isend")
    @sandcastle_skip_if(IS_FBCODE, "Kineto in fbcode causes hang")
@sandcastle_skip_if(
IS_MACOS or IS_WINDOWS,
"torch.profiler not enabled for mac/windows: https://github.com/pytorch/pytorch/pull/56124",
)
def test_isend_torch_profiler(self):
torch_profiler_ctx = _create_torch_profiler()
self._test_isend(profiler_ctx=torch_profiler_ctx)
# IRECV
@sandcastle_skip_if(BACKEND == "nccl", "Nccl does not support irecv")
def test_irecv(self):
rank = dist.get_rank()
world_size = dist.get_world_size()
if rank == 0:
expected_tensors = [
_build_tensor(src, -1) for src in range(1, world_size)
]
requests = [
dist.irecv(expected_tensors[src - 1], src)
for src in range(1, world_size)
]
for src in range(1, world_size):
requests[src - 1].wait()
self.assertTrue(requests[src - 1].is_completed())
self.assertEqual(expected_tensors[src - 1], _build_tensor(src, 10))
else:
tensor = _build_tensor(rank, 10)
dist.send(tensor, 0)
self._barrier()
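    # Illustrative sketch (not part of the original tests): isend/irecv are the
    # nonblocking counterparts exercised above. Each returns a Work handle that
    # must be wait()ed on before the tensor may be read or reused:
    #
    #   req = dist.irecv(buf, src=0)   # returns immediately
    #   req.wait()                     # completes the receive
    #   assert req.is_completed()
    #
    # The buffer contents are undefined until wait() returns; "buf" and the
    # source rank here are hypothetical.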
# BROADCAST
def _test_broadcast_helper(
self,
group,
group_id,
rank,
cuda=False,
rank_to_GPU=None,
with_options=False,
):
for dtype, value, requires_cuda in [
(torch.float, -1e-10, False),
(torch.double, -1e-100, False),
(torch.half, -0.1, True),
(torch.int8, -2, False),
(torch.uint8, 129, False),
(torch.int, -1e5, False),
(torch.long, -1e15, False),
]:
if requires_cuda and not cuda:
continue
for src in group:
expected_tensor = _build_tensor(src + 1, value, dtype)
if cuda:
expected_tensor = expected_tensor.cuda(rank_to_GPU[rank][0])
if rank == src:
if with_options:
opts = dist.BroadcastOptions()
opts.rootTensor = 0
opts.rootRank = src
self.call_dist_op(
":broadcast",
True,
group_id.broadcast,
[expected_tensor],
opts,
)
else:
self.call_dist_op(
":broadcast",
False,
dist.broadcast,
expected_tensor,
src,
group_id,
)
else:
tensor = _build_tensor(src + 1, -1, dtype)
if cuda:
tensor = tensor.cuda(rank_to_GPU[rank][0])
if with_options:
opts = dist.BroadcastOptions()
opts.rootTensor = 0
opts.rootRank = src
self.call_dist_op(
":broadcast", True, group_id.broadcast, [tensor], opts
)
else:
self.call_dist_op(
":broadcast",
False,
dist.broadcast,
tensor,
src,
group_id,
)
self.assertEqual(tensor.size(), expected_tensor.size())
self.assertEqual(
tensor.ne(expected_tensor).max(), torch.tensor(False)
)
self._barrier()
@sandcastle_skip_if(BACKEND == "nccl", "Nccl does not support CPU tensors")
def test_broadcast(self):
group, group_id, rank = self._init_global_test()
self._test_broadcast_helper(group, group_id, rank)
@sandcastle_skip_if(
BACKEND != "gloo" and BACKEND != "nccl",
"Only Gloo and Nccl backend supports CUDA allReduce",
)
@skip_if_no_gpu
def test_broadcast_cuda(self):
group, group_id, rank = self._init_global_test()
rank_to_GPU = self._init_multigpu_helper()
device_id = rank_to_GPU[rank][0]
torch.cuda.set_device(device_id)
self._test_broadcast_helper(group, group_id, rank, True, rank_to_GPU)
@skip_if_small_worldsize
@sandcastle_skip_if(BACKEND == "nccl", "Nccl does not support CPU tensors")
def test_broadcast_group(self):
group, group_id, rank = self._init_group_test()
self._test_broadcast_helper(group, group_id, rank)
@sandcastle_skip_if(BACKEND == "nccl", "Nccl does not support CPU tensors")
def test_broadcast_full_group(self):
group, group_id, rank = self._init_full_group_test()
self._test_broadcast_helper(group, group_id, rank)
@sandcastle_skip_if(
BACKEND != "nccl",
"Only NCCL backend supports high priority stream",
)
@skip_if_no_gpu
def test_nccl_high_priority_stream(self):
group, _, rank = self._init_global_test()
rank_to_GPU = self._init_multigpu_helper()
device_id = rank_to_GPU[rank][0]
torch.cuda.set_device(device_id)
new_port = str(MASTER_PORT + 1)
os.environ["MASTER_PORT"] = new_port
gen_iterator = dist.rendezvous("env://", rank, dist.get_world_size())
store, rank, size = next(gen_iterator)
store = dist.PrefixStore(new_port, store)
opts = dist.ProcessGroupNCCL.Options()
opts.is_high_priority_stream = False
group_id = dist.ProcessGroupNCCL(store, rank, size, opts)
self._test_broadcast_helper(group, group_id, rank, True, rank_to_GPU, True)
# REDUCE
def _test_reduce_helper(
self,
group,
group_id,
rank,
op,
master_value,
worker_value,
expected_value,
cuda=False,
rank_to_GPU=None,
):
for src in group:
tensor = _build_tensor(src + 1).fill_(
master_value if rank == src else worker_value
)
if cuda:
tensor = tensor.cuda(rank_to_GPU[rank][0])
self.call_dist_op(
":reduce",
False,
dist.reduce,
tensor,
src,
op,
group_id,
tensor_shapes=[tensor.shape],
)
if rank == src:
self.assertEqual(tensor, _build_tensor(src + 1, expected_value))
self._barrier()
@sandcastle_skip_if(BACKEND == "nccl", "Nccl does not support CPU tensors")
def test_reduce_sum(self):
group, group_id, rank = self._init_global_test()
self._test_reduce_helper(
group,
group_id,
rank,
dist.ReduceOp.SUM,
2,
10,
2 + (10 * (len(group) - 1)),
)
@sandcastle_skip_if(BACKEND != "nccl", "Only Nccl supports CUDA reduce")
@skip_if_no_gpu
def test_reduce_sum_cuda(self):
group, group_id, rank = self._init_global_test()
rank_to_GPU = self._init_multigpu_helper()
device_id = rank_to_GPU[rank][0]
torch.cuda.set_device(device_id)
self._test_reduce_helper(
group,
group_id,
rank,
dist.ReduceOp.SUM,
2,
10,
2 + 10 * (len(group) - 1),
True,
rank_to_GPU,
)
@sandcastle_skip_if(BACKEND == "nccl", "Nccl does not support CPU tensors")
def test_reduce_product(self):
group, group_id, rank = self._init_global_test()
self._test_reduce_helper(
group,
group_id,
rank,
dist.ReduceOp.PRODUCT,
2,
10,
reduce((lambda x, y: x * y), [10] * (len(group) - 1), 2),
)
@sandcastle_skip_if(BACKEND == "nccl", "Nccl does not support CPU tensors")
def test_reduce_min(self):
group, group_id, rank = self._init_global_test()
self._test_reduce_helper(
group, group_id, rank, dist.ReduceOp.MIN, 1010, 1, 1
)
@sandcastle_skip_if(BACKEND == "nccl", "Nccl does not support CPU tensors")
def test_reduce_max(self):
group, group_id, rank = self._init_global_test()
self._test_reduce_helper(
group, group_id, rank, dist.ReduceOp.MAX, -1, 10, 10
)
@sandcastle_skip_if(BACKEND == "nccl", "Nccl does not support CPU tensors")
@skip_if_small_worldsize
def test_reduce_group_sum(self):
group, group_id, rank = self._init_group_test()
self._test_reduce_helper(
group,
group_id,
rank,
dist.ReduceOp.SUM,
2,
10,
2 + (10 * (len(group) - 1)),
)
@sandcastle_skip_if(BACKEND == "nccl", "Nccl does not support CPU tensors")
@skip_if_small_worldsize
def test_reduce_group_product(self):
group, group_id, rank = self._init_group_test()
self._test_reduce_helper(
group,
group_id,
rank,
dist.ReduceOp.PRODUCT,
2,
10,
reduce((lambda x, y: x * y), [10] * (len(group) - 1), 2),
)
@sandcastle_skip_if(BACKEND == "nccl", "Nccl does not support CPU tensors")
@skip_if_small_worldsize
def test_reduce_group_min(self):
group, group_id, rank = self._init_group_test()
self._test_reduce_helper(
group, group_id, rank, dist.ReduceOp.MIN, 1010, 1, 1
)
@sandcastle_skip_if(BACKEND == "nccl", "Nccl does not support CPU tensors")
@skip_if_small_worldsize
def test_reduce_group_max(self):
group, group_id, rank = self._init_group_test()
self._test_reduce_helper(
group, group_id, rank, dist.ReduceOp.MAX, -1, 10, 10
)
@sandcastle_skip_if(BACKEND == "nccl", "Nccl does not support CPU tensors")
def test_reduce_full_group_sum(self):
group, group_id, rank = self._init_full_group_test()
self._test_reduce_helper(
group,
group_id,
rank,
dist.ReduceOp.SUM,
2,
10,
2 + (10 * (len(group) - 1)),
)
@sandcastle_skip_if(BACKEND == "nccl", "Nccl does not support CPU tensors")
def test_reduce_full_group_product(self):
group, group_id, rank = self._init_full_group_test()
self._test_reduce_helper(
group,
group_id,
rank,
dist.ReduceOp.PRODUCT,
2,
10,
reduce((lambda x, y: x * y), [10] * (len(group) - 1), 2),
)
@sandcastle_skip_if(BACKEND == "nccl", "Nccl does not support CPU tensors")
def test_reduce_full_group_min(self):
group, group_id, rank = self._init_full_group_test()
self._test_reduce_helper(
group, group_id, rank, dist.ReduceOp.MIN, 1010, 1, 1
)
@sandcastle_skip_if(BACKEND == "nccl", "Nccl does not support CPU tensors")
def test_reduce_full_group_max(self):
group, group_id, rank = self._init_full_group_test()
self._test_reduce_helper(
group, group_id, rank, dist.ReduceOp.MAX, -1, 10, 10
)
# REDUCE TWICE
def _test_reduce_twice_helper(
self,
group,
group_id,
rank,
op,
master_value,
worker_value,
expected_value,
cuda=False,
rank_to_GPU=None,
):
for src in group:
tensors = [
_build_tensor(src + 1).fill_(
master_value if rank == src else worker_value
)
for i in range(2)
]
if cuda:
for i in range(2):
tensors[i] = tensors[i].cuda(rank_to_GPU[rank][0])
self.call_dist_op(
":reduce",
False,
dist.reduce,
tensors[0],
src,
op,
group_id,
secondary_op_call=lambda: dist.reduce(
tensors[1], src, op, group_id
),
tensor_shapes=[tensors[0].shape],
)
if rank == src:
for tensor in tensors:
self.assertEqual(tensor, _build_tensor(src + 1, expected_value))
self._barrier()
@sandcastle_skip_if(BACKEND == "nccl", "Nccl does not support CPU tensors")
def test_reduce_sum_twice(self):
group, group_id, rank = self._init_global_test()
self._test_reduce_twice_helper(
group,
group_id,
rank,
dist.ReduceOp.SUM,
2,
10,
2 + (10 * (len(group) - 1)),
)
@sandcastle_skip_if(BACKEND != "nccl", "Only Nccl supports CUDA reduce")
@skip_if_no_gpu
def test_reduce_sum_cuda_twice(self):
group, group_id, rank = self._init_global_test()
rank_to_GPU = self._init_multigpu_helper()
device_id = rank_to_GPU[rank][0]
torch.cuda.set_device(device_id)
self._test_reduce_twice_helper(
group,
group_id,
rank,
dist.ReduceOp.SUM,
2,
10,
2 + 10 * (len(group) - 1),
True,
rank_to_GPU,
)
@skip_if_no_gpu
@require_backend({"gloo", "nccl"})
def test_all_reduce_result_cuda(self):
group, group_id, rank = self._init_global_test()
rank_to_GPU = self._init_multigpu_helper()
for src in group:
if rank == src:
tensor = _build_tensor(src + 1, 2)
else:
tensor = _build_tensor(src + 1, 10)
tensor = tensor.cuda(rank_to_GPU[rank][0])
opts = AllreduceOptions()
opts.reduceOp = dist.ReduceOp.SUM
if group_id == GroupMember.WORLD:
work = _get_default_group().allreduce([tensor], opts)
else:
work = group_id.allreduce([tensor], opts)
if BACKEND == "gloo":
                # Calling result() before the work is finished should raise an
                # exception. There is a race here: we cannot assume the work is
                # still unfinished by the time we reach the next lines.
try:
with self.assertRaisesRegex(
RuntimeError,
"Work needs to be completed before calling result",
):
work.result()
except AssertionError:
# Exception was not raised, ensure is_completed()
self.assertTrue(work.is_completed())
work.wait()
result = work.result()
else:
                # In the NCCL case we should be able to retrieve a pointer to
                # the result even before the work has finished.
result = work.result()
work.wait()
expected_value = 2 + (10 * (len(group) - 1))
self.assertEqual(result, [_build_tensor(src + 1, expected_value)])
self._barrier()
def call_dist_op(
self,
profiling_title_postfix,
is_async,
op,
*args,
expect_event=True,
secondary_op_call=None,
profile_cuda=False,
tensor_shapes=None,
**kwargs,
):
op_calls = [lambda: op(*args, **kwargs)]
if secondary_op_call is not None:
op_calls.append(secondary_op_call)
autograd_profiler_ctx = torch.autograd.profiler.profile(
use_cuda=profile_cuda, record_shapes=True
)
# TODO: move this test to use torch.profiler once kineto issues are
# fixed internally.
with autograd_profiler_ctx as prof:
works = [op_call() for op_call in op_calls]
if is_async:
for work in works:
work.wait()
if expect_event and dist.get_backend() in PROFILING_SUPPORTED_BACKENDS:
events = get_profiling_event(
profiling_title_postfix, autograd_profiler_ctx
)
# DETAIL debug mode can use a pg wrapper that issues more collectives
# under the hood
if dist._get_debug_mode() != dist._DistributedDebugLevel.DETAIL:
self.assertEqual(len(events), len(op_calls))
for e in events:
self.assertTrue(e.is_async)
self.assertEqual(e.count, 1)
self.assertGreaterEqual(e.cpu_time, 0)
# Verify tensor shapes if given
# DETAIL debug mode can use a pg wrapper that issues more collectives
# under the hood
if (
tensor_shapes is not None
and dist._get_debug_mode() != dist._DistributedDebugLevel.DETAIL
):
self.assertEqual(
e.input_shapes,
tensor_shapes,
f"event shape: {e.input_shapes} vs tensor {tensor_shapes}",
)
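    # Illustrative sketch (not part of the original tests): call_dist_op wraps
    # a collective in torch.autograd.profiler.profile so tests can assert on
    # the recorded events, roughly:
    #
    #   with torch.autograd.profiler.profile(record_shapes=True) as prof:
    #       dist.all_reduce(tensor)
    #   events = [e for e in prof.function_events if "all_reduce" in e.name]
    #
    # The recorded event names are backend-prefixed (e.g. "gloo:all_reduce"),
    # which is why callers pass a ":<op>" postfix; "tensor" here is
    # hypothetical.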
# ALL REDUCE
def _test_all_reduce_helper(
self,
group,
group_id,
rank,
op,
master_value,
worker_value,
expected_value,
cuda=False,
rank_to_GPU=None,
dtype=torch.float,
async_op=False,
):
for src in group:
curr_value = master_value if rank == src else worker_value
tensor = _build_tensor(src + 1, dtype=dtype).fill_(curr_value)
if cuda:
tensor = tensor.cuda(rank_to_GPU[rank][0])
if tensor.dtype == torch.complex64:
tensor_shapes = [torch.view_as_real(tensor).shape]
else:
tensor_shapes = [tensor.shape]
self.call_dist_op(
":all_reduce",
async_op,
dist.all_reduce,
tensor,
op,
group_id,
async_op=async_op,
tensor_shapes=tensor_shapes,
)
# Currently, only Gloo backend has profiling tested with CUDA enabled.
# Only run cuda profiling test for one rank to speed up since
# running with different src_rank does not affect the correctness.
if (
src == 0
and cuda
and dist.get_backend() in CUDA_PROFILING_SUPPORTED_BACKENDS
):
self.call_dist_op(
":all_reduce",
async_op,
dist.all_reduce,
tensor,
op,
group_id,
async_op=async_op,
profile_cuda=True,
tensor_shapes=tensor_shapes,
)
self._barrier()
@sandcastle_skip_if(BACKEND == "nccl", "Nccl does not support CPU tensors")
def test_all_reduce_sum(self):
group, group_id, rank = self._init_global_test()
self._test_all_reduce_helper(
group,
group_id,
rank,
dist.ReduceOp.SUM,
2,
10,
2 + (10 * (len(group) - 1)),
)
@sandcastle_skip_if(BACKEND == "nccl", "Nccl does not support CPU tensors")
def test_all_reduce_sum_async(self):
group, group_id, rank = self._init_global_test()
self._test_all_reduce_helper(
group,
group_id,
rank,
dist.ReduceOp.SUM,
2,
10,
2 + (10 * (len(group) - 1)),
async_op=True,
)
@sandcastle_skip_if(
BACKEND != "gloo" and BACKEND != "nccl",
"Only Gloo and NCCL backends will have CUDA allReduce tested",
)
@skip_if_no_gpu
def test_all_reduce_sum_cuda(self):
torch.cuda.set_device(self.rank)
group, group_id, rank = self._init_global_test()
rank_to_GPU = self._init_multigpu_helper()
self._test_all_reduce_helper(
group,
group_id,
rank,
dist.ReduceOp.SUM,
2,
10,
2 + (10 * (len(group) - 1)),
True,
rank_to_GPU,
)
@sandcastle_skip_if(
BACKEND != "gloo" and BACKEND != "nccl",
"Only Gloo and NCCL backends will have CUDA allReduce tested",
)
@skip_if_no_gpu
def test_all_reduce_sum_cuda_async(self):
torch.cuda.set_device(self.rank)
group, group_id, rank = self._init_global_test()
rank_to_GPU = self._init_multigpu_helper()
self._test_all_reduce_helper(
group,
group_id,
rank,
dist.ReduceOp.SUM,
2,
10,
2 + (10 * (len(group) - 1)),
True,
rank_to_GPU,
async_op=True,
)
@sandcastle_skip_if(BACKEND == "nccl", "Nccl does not support CPU tensors")
def test_all_reduce_sum_complex(self):
group, group_id, rank = self._init_global_test()
self._test_all_reduce_helper(
group,
group_id,
rank,
dist.ReduceOp.SUM,
complex(2, 3),
complex(10, 11),
complex(2, 3) + (complex(10, 11) * (len(group) - 1)),
dtype=torch.cfloat,
)
@sandcastle_skip_if(BACKEND == "nccl", "Nccl does not support CPU tensors")
def test_all_reduce_complex_unsupported_ops(self):
unsupported_ops = [
dist.ReduceOp.MAX,
dist.ReduceOp.MIN,
dist.ReduceOp.PRODUCT,
dist.ReduceOp.BAND,
dist.ReduceOp.BOR,
dist.ReduceOp.BXOR,
]
group, group_id, rank = self._init_global_test()
for unsupported_op in unsupported_ops:
with self.assertRaisesRegex(
RuntimeError, "all_reduce does not support"
):
dist.all_reduce(
_build_tensor(1, dtype=torch.cfloat), unsupported_op, group_id
)
@sandcastle_skip_if(
BACKEND != "gloo" and BACKEND != "nccl",
"Only Gloo and NCCL backends will have CUDA allReduce tested",
)
@skip_if_no_gpu
def test_all_reduce_sum_cuda_complex(self):
torch.cuda.set_device(self.rank)
group, group_id, rank = self._init_global_test()
rank_to_GPU = self._init_multigpu_helper()
self._test_all_reduce_helper(
group,
group_id,
rank,
dist.ReduceOp.SUM,
complex(2, 3),
complex(10, 11),
complex(2, 3) + (complex(10, 11) * (len(group) - 1)),
True,
rank_to_GPU,
dtype=torch.cfloat,
)
@sandcastle_skip_if(BACKEND == "nccl", "Nccl does not support CPU tensors")
def test_all_reduce_product(self):
group, group_id, rank = self._init_global_test()
self._test_all_reduce_helper(
group,
group_id,
rank,
dist.ReduceOp.PRODUCT,
2,
10,
reduce((lambda x, y: x * y), [10] * (len(group) - 1), 2),
)
@sandcastle_skip_if(BACKEND == "nccl", "Nccl does not support CPU tensors")
def test_all_reduce_min(self):
group, group_id, rank = self._init_global_test()
self._test_all_reduce_helper(
group, group_id, rank, dist.ReduceOp.MIN, 1010, 1, 1
)
@sandcastle_skip_if(BACKEND == "nccl", "Nccl does not support CPU tensors")
def test_all_reduce_max(self):
group, group_id, rank = self._init_global_test()
self._test_all_reduce_helper(
group, group_id, rank, dist.ReduceOp.MAX, -1, 10, 10
)
@skip_if_small_worldsize
@sandcastle_skip_if(BACKEND == "nccl", "Nccl does not support CPU tensors")
def test_all_reduce_group_sum(self):
group, group_id, rank = self._init_group_test()
self._test_all_reduce_helper(
group,
group_id,
rank,
dist.ReduceOp.SUM,
2,
10,
2 + (10 * (len(group) - 1)),
)
@skip_if_small_worldsize
@sandcastle_skip_if(BACKEND == "nccl", "Nccl does not support CPU tensors")
def test_all_reduce_group_product(self):
group, group_id, rank = self._init_group_test()
self._test_all_reduce_helper(
group,
group_id,
rank,
dist.ReduceOp.PRODUCT,
2,
10,
reduce((lambda x, y: x * y), [10] * (len(group) - 1), 2),
)
@skip_if_small_worldsize
@sandcastle_skip_if(BACKEND == "nccl", "Nccl does not support CPU tensors")
def test_all_reduce_group_min(self):
group, group_id, rank = self._init_group_test()
self._test_all_reduce_helper(
group, group_id, rank, dist.ReduceOp.MIN, 1010, 1, 1
)
@skip_if_small_worldsize
@sandcastle_skip_if(BACKEND == "nccl", "Nccl does not support CPU tensors")
def test_all_reduce_group_max(self):
group, group_id, rank = self._init_group_test()
self._test_all_reduce_helper(
group, group_id, rank, dist.ReduceOp.MAX, -1, 10, 10
)
@sandcastle_skip_if(BACKEND == "nccl", "Nccl does not support CPU tensors")
def test_all_reduce_full_group_sum(self):
group, group_id, rank = self._init_full_group_test()
self._test_all_reduce_helper(
group,
group_id,
rank,
dist.ReduceOp.SUM,
2,
10,
2 + (10 * (len(group) - 1)),
)
@sandcastle_skip_if(BACKEND == "nccl", "Nccl does not support CPU tensors")
def test_all_reduce_full_group_product(self):
group, group_id, rank = self._init_full_group_test()
self._test_all_reduce_helper(
group,
group_id,
rank,
dist.ReduceOp.PRODUCT,
2,
10,
reduce((lambda x, y: x * y), [10] * (len(group) - 1), 2),
)
@sandcastle_skip_if(BACKEND == "nccl", "Nccl does not support CPU tensors")
def test_all_reduce_full_group_min(self):
group, group_id, rank = self._init_full_group_test()
self._test_all_reduce_helper(
group, group_id, rank, dist.ReduceOp.MIN, 1010, 1, 1
)
@sandcastle_skip_if(BACKEND == "nccl", "Nccl does not support CPU tensors")
def test_all_reduce_full_group_max(self):
group, group_id, rank = self._init_full_group_test()
self._test_all_reduce_helper(
group, group_id, rank, dist.ReduceOp.MAX, -1, 10, 10
)
# SPARSE ALL REDUCE
def _test_sparse_all_reduce_sum(self, fn):
group, group_id, rank = self._init_global_test()
tests = simple_sparse_reduce_tests(
rank, dist.get_world_size(), num_inputs=1
)
for (inputs, outputs) in tests:
tensors = [fn(input) for input in inputs]
dist.all_reduce(tensors[0], dist.ReduceOp.SUM, group_id)
self.assertEqual(tensors[0], outputs[0])
@sandcastle_skip_if(
BACKEND != "gloo", "Only Gloo backend support sparse all reduce"
)
def test_sparse_all_reduce_sum(self):
self._test_sparse_all_reduce_sum(lambda t: t)
@sandcastle_skip_if(
BACKEND != "gloo", "Only Gloo backend support sparse all reduce"
)
@skip_if_no_gpu
def test_sparse_all_reduce_sum_cuda(self):
self._test_sparse_all_reduce_sum(lambda t: t.clone().cuda())
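    # Illustrative sketch (not part of the original tests): a sparse all_reduce
    # under Gloo sums sparse COO tensors elementwise across ranks, roughly:
    #
    #   t = torch.sparse_coo_tensor([[0], [0]], [1.0], (2, 2))
    #   dist.all_reduce(t, dist.ReduceOp.SUM)   # t now holds the global sum
    #
    # The indices, values, and shape above are made up for illustration; as the
    # skip messages note, Gloo is the only backend these tests exercise for
    # sparse all_reduce.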
# ALL REDUCE - COALESCED
@staticmethod
def _all_reduce_coalesced_sum_test_cases(group_size):
return (
[2, 3, complex(2, 3)],
[10, 11, complex(10, 11)],
[
2 + 10 * (group_size - 1),
3 + 11 * (group_size - 1),
complex(2, 3) + complex(10, 11) * (group_size - 1),
],
[torch.float, torch.float, torch.cfloat],
)
@staticmethod
def _all_reduce_coalesced_product_test_cases(group_size):
return (
[1, 2],
[3, 4],
[1 * 3 ** (group_size - 1), 2 * 4 ** (group_size - 1)],
[torch.float, torch.float],
)
@staticmethod
def _all_reduce_coalesced_min_test_cases(group_size):
return (
[1, 4],
[2, 3],
[1, 3],
[torch.float, torch.float],
)
@staticmethod
def _all_reduce_coalesced_max_test_cases(group_size):
return (
[1, 4],
[2, 3],
[2, 4],
[torch.float, torch.float],
)
@sandcastle_skip_if(BACKEND == "nccl", "Nccl does not support CPU tensors")
def test_all_reduce_coalesced_max_complex_unsupported(self):
group, group_id, rank = self._init_global_test()
with self.assertRaisesRegex(RuntimeError, "all_reduce does not support"):
dist.all_reduce_coalesced(
[_build_tensor(1, dtype=torch.cfloat)], dist.ReduceOp.MAX, group_id
)
def _test_all_reduce_coalesced_helper(
self,
group,
group_id,
rank,
op,
cuda=False,
rank_to_GPU=None,
):
test_case_func = {
dist.ReduceOp.SUM: self._all_reduce_coalesced_sum_test_cases,
dist.ReduceOp.PRODUCT: self._all_reduce_coalesced_product_test_cases,
dist.ReduceOp.MIN: self._all_reduce_coalesced_min_test_cases,
dist.ReduceOp.MAX: self._all_reduce_coalesced_max_test_cases,
}[op]
master_values, worker_values, expected_values, dtypes = test_case_func(
len(group)
)
for src in group:
curr_values = master_values if rank == src else worker_values
tensors = [
_build_tensor(src + 1, val, dtype=dtype)
for dtype, val in zip(dtypes, curr_values)
]
if cuda:
tensors = [t.cuda(rank_to_GPU[rank][0]) for t in tensors]
tensor_shapes = []
for tensor in tensors:
if tensor.dtype == torch.complex64:
tensor_shapes.append(torch.view_as_real(tensor).shape)
else:
tensor_shapes.append(tensor.shape)
self.call_dist_op(
":all_reduce",
False,
dist.all_reduce_coalesced,
tensors,
op,
group_id,
tensor_shapes=tensor_shapes,
)
expected_tensors = [
_build_tensor(src + 1, expected_value, dtype=dtype)
for dtype, expected_value in zip(dtypes, expected_values)
]
self.assertEqual(tensors, expected_tensors)
self._barrier()
@require_backend({"gloo"})
def test_all_reduce_coalesced_sum(self):
group, group_id, rank = self._init_global_test()
self._test_all_reduce_coalesced_helper(
group,
group_id,
rank,
dist.ReduceOp.SUM,
cuda=False,
rank_to_GPU=None,
)
@require_backend({"gloo"})
def test_all_reduce_coalesced_product(self):
group, group_id, rank = self._init_global_test()
self._test_all_reduce_coalesced_helper(
group,
group_id,
rank,
dist.ReduceOp.PRODUCT,
cuda=False,
rank_to_GPU=None,
)
@require_backend({"gloo"})
def test_all_reduce_coalesced_min(self):
group, group_id, rank = self._init_global_test()
self._test_all_reduce_coalesced_helper(
group,
group_id,
rank,
dist.ReduceOp.MIN,
cuda=False,
rank_to_GPU=None,
)
@require_backend({"gloo"})
def test_all_reduce_coalesced_max(self):
group, group_id, rank = self._init_global_test()
self._test_all_reduce_coalesced_helper(
group, group_id, rank, dist.ReduceOp.MAX, cuda=False, rank_to_GPU=None
)
@skip_if_small_worldsize
@require_backend({"gloo"})
def test_all_reduce_coalesced_group_sum(self):
group, group_id, rank = self._init_group_test()
self._test_all_reduce_coalesced_helper(
group, group_id, rank, dist.ReduceOp.SUM, cuda=False, rank_to_GPU=None
)
@skip_if_small_worldsize
@require_backend({"gloo"})
def test_all_reduce_coalesced_group_product(self):
group, group_id, rank = self._init_group_test()
self._test_all_reduce_coalesced_helper(
group,
group_id,
rank,
dist.ReduceOp.PRODUCT,
cuda=False,
rank_to_GPU=None,
)
@skip_if_small_worldsize
@require_backend({"gloo"})
def test_all_reduce_coalesced_group_min(self):
group, group_id, rank = self._init_group_test()
self._test_all_reduce_coalesced_helper(
group, group_id, rank, dist.ReduceOp.MIN, cuda=False, rank_to_GPU=None
)
@skip_if_small_worldsize
@require_backend({"gloo"})
def test_all_reduce_coalesced_group_max(self):
group, group_id, rank = self._init_group_test()
self._test_all_reduce_coalesced_helper(
group, group_id, rank, dist.ReduceOp.MAX, cuda=False, rank_to_GPU=None
)
@require_backend({"gloo"})
def test_all_reduce_coalesced_full_group_sum(self):
group, group_id, rank = self._init_full_group_test()
self._test_all_reduce_coalesced_helper(
group, group_id, rank, dist.ReduceOp.SUM, cuda=False, rank_to_GPU=None
)
@require_backend({"gloo"})
def test_all_reduce_coalesced_full_group_product(self):
group, group_id, rank = self._init_full_group_test()
self._test_all_reduce_coalesced_helper(
group,
group_id,
rank,
dist.ReduceOp.PRODUCT,
cuda=False,
rank_to_GPU=None,
)
@require_backend({"gloo"})
def test_all_reduce_coalesced_full_group_min(self):
group, group_id, rank = self._init_full_group_test()
self._test_all_reduce_coalesced_helper(
group,
group_id,
rank,
dist.ReduceOp.MIN,
cuda=False,
rank_to_GPU=None,
)
@require_backend({"gloo"})
def test_all_reduce_coalesced_full_group_max(self):
group, group_id, rank = self._init_full_group_test()
self._test_all_reduce_coalesced_helper(
group, group_id, rank, dist.ReduceOp.MAX, cuda=False, rank_to_GPU=None
)
# SCATTER
def _test_scatter_helper(self, group, group_id, rank, dtype=torch.float):
for dest in group:
tensor = _build_tensor(dest + 1, -1, dtype=dtype)
expected_tensor = _build_tensor(dest + 1, rank, dtype=dtype)
tensors = (
[_build_tensor(dest + 1, i, dtype=dtype) for i in group]
if rank == dest
else []
)
if dtype == torch.complex64:
tensor_shapes = [torch.view_as_real(t).shape for t in tensors]
else:
tensor_shapes = [t.shape for t in tensors]
self.call_dist_op(
":scatter",
False,
dist.scatter,
tensor,
src=dest,
scatter_list=tensors,
group=group_id,
tensor_shapes=tensor_shapes,
)
self.assertEqual(tensor, expected_tensor)
self._barrier()
@sandcastle_skip_if(BACKEND == "nccl", "Nccl does not support CPU tensors")
def test_scatter_checks(self):
group, group_id, rank = self._init_global_test()
one = torch.ones([1])
# Specify scatter_list argument only on source rank.
output = one.clone() * -1
if rank == 0:
scatter_list = [one.clone() * i for i in group]
dist.scatter(output, src=0, scatter_list=scatter_list)
else:
dist.scatter(output, src=0)
self.assertEqual(output, one * rank)
# Don't specify src argument.
output = one.clone() * -1
if rank == 0:
scatter_list = [one.clone() * i for i in group]
dist.scatter(output, scatter_list=scatter_list)
else:
dist.scatter(output)
self.assertEqual(output, one * rank)
@sandcastle_skip_if(BACKEND == "nccl", "Nccl does not support scatter")
def test_scatter(self):
group, group_id, rank = self._init_global_test()
self._test_scatter_helper(group, group_id, rank)
@sandcastle_skip_if(BACKEND == "nccl", "Nccl does not support scatter")
def test_scatter_complex(self):
group, group_id, rank = self._init_global_test()
self._test_scatter_helper(group, group_id, rank, dtype=torch.cfloat)
@sandcastle_skip_if(BACKEND == "nccl", "Nccl does not support scatter")
@skip_if_small_worldsize
def test_scatter_group(self):
group, group_id, rank = self._init_group_test()
self._test_scatter_helper(group, group_id, rank)
@sandcastle_skip_if(BACKEND == "nccl", "Nccl does not support scatter")
def test_scatter_full_group(self):
group, group_id, rank = self._init_full_group_test()
self._test_scatter_helper(group, group_id, rank)
# GATHER
def _test_gather_helper(self, group, group_id, rank):
for dest in group:
tensor = _build_tensor(dest + 1, rank)
tensors = (
[_build_tensor(dest + 1, -1) for _ in group] if rank == dest else []
)
self.call_dist_op(
":gather",
False,
dist.gather,
tensor,
dst=dest,
gather_list=tensors,
group=group_id,
tensor_shapes=[tensors[0].shape] if len(tensors) > 0 else None,
)
if rank == dest:
expected_tensors = [_build_tensor(dest + 1, i) for i in group]
for t1, t2 in zip(tensors, expected_tensors):
self.assertEqual(t1, t2)
self._barrier()
@sandcastle_skip_if(BACKEND == "nccl", "Nccl does not support CPU tensors")
def test_gather_checks(self):
group, group_id, rank = self._init_global_test()
one = torch.ones([1])
# Specify gather_list argument only on destination rank.
if rank == 0:
gather_list = [one.clone() for _ in group]
dist.gather(one * rank, dst=0, gather_list=gather_list)
for i in group:
self.assertEqual(gather_list[i], one * i)
else:
dist.gather(one * rank, dst=0)
# Don't specify dst argument.
if rank == 0:
gather_list = [one.clone() for _ in group]
dist.gather(one * rank, gather_list=gather_list)
for i in group:
self.assertEqual(gather_list[i], one * i)
else:
dist.gather(one * rank)
@sandcastle_skip_if(BACKEND == "nccl", "Nccl does not support CPU tensors")
def test_gather(self):
group, group_id, rank = self._init_global_test()
self._test_gather_helper(group, group_id, rank)
@sandcastle_skip_if(BACKEND == "nccl", "Nccl does not support CPU tensors")
@skip_if_small_worldsize
def test_gather_group(self):
group, group_id, rank = self._init_group_test()
self._test_gather_helper(group, group_id, rank)
@sandcastle_skip_if(BACKEND == "nccl", "Nccl does not support CPU tensors")
def test_gather_full_group(self):
group, group_id, rank = self._init_full_group_test()
self._test_gather_helper(group, group_id, rank)
# ALL GATHER
def _test_all_gather_helper(
self, group, group_id, rank, cuda=False, rank_to_GPU=None, dtype=torch.float
):
for dest in group:
tensor = _build_tensor(dest + 1, rank, dtype=dtype)
tensors = [_build_tensor(dest + 1, -1, dtype=dtype) for _ in group]
if cuda:
tensor = tensor.cuda(rank_to_GPU[rank][0])
tensors = [t.cuda(rank_to_GPU[rank][0]) for t in tensors]
if tensors[0].dtype == torch.complex64:
tensor_shapes = [torch.view_as_real(tensors[0]).shape]
else:
tensor_shapes = [tensors[0].shape]
self.call_dist_op(
":all_gather",
False,
dist.all_gather,
tensors,
tensor,
group_id,
tensor_shapes=tensor_shapes,
)
expected_tensors = [
_build_tensor(dest + 1, i, dtype=dtype) for i in group
]
for t1, t2 in zip(tensors, expected_tensors):
self.assertEqual(t1, t2)
self._barrier()
@sandcastle_skip_if(BACKEND == "nccl", "Nccl does not support CPU tensors")
def test_all_gather(self):
group, group_id, rank = self._init_global_test()
self._test_all_gather_helper(group, group_id, rank)
@sandcastle_skip_if(BACKEND != "nccl", "Only Nccl supports CUDA all gather")
@sandcastle_skip_if(BACKEND == "nccl", "CUDA all gather skipped for NCCL")
@skip_if_no_gpu
def test_all_gather_cuda(self):
group, group_id, rank = self._init_global_test()
rank_to_GPU = self._init_multigpu_helper()
self._test_all_gather_helper(group, group_id, rank, True, rank_to_GPU)
@sandcastle_skip_if(BACKEND == "nccl", "Nccl does not support CPU tensors")
def test_all_gather_complex(self):
group, group_id, rank = self._init_global_test()
self._test_all_gather_helper(group, group_id, rank, dtype=torch.cfloat)
@sandcastle_skip_if(BACKEND != "nccl", "Only Nccl supports CUDA all gather")
@sandcastle_skip_if(BACKEND == "nccl", "CUDA all gather skipped for NCCL")
@skip_if_no_gpu
def test_all_gather_cuda_complex(self):
group, group_id, rank = self._init_global_test()
rank_to_GPU = self._init_multigpu_helper()
self._test_all_gather_helper(
group, group_id, rank, True, rank_to_GPU, dtype=torch.cfloat
)
@skip_if_small_worldsize
@sandcastle_skip_if(BACKEND == "nccl", "Nccl does not support CPU tensors")
def test_all_gather_group(self):
group, group_id, rank = self._init_group_test()
self._test_all_gather_helper(group, group_id, rank)
@sandcastle_skip_if(BACKEND == "nccl", "Nccl does not support CPU tensors")
def test_all_gather_full_group(self):
group, group_id, rank = self._init_full_group_test()
self._test_all_gather_helper(group, group_id, rank)
def _run_all_gather_coalesced_and_verify(
self, output_tensor_lists, input_tensors, expected_tensors, group_id
):
"""
Helper that runs all_gather_coalesced and returns True if the gathered
outputs match the expected tensors.
"""
tensor_shapes = []
for input_tensor in input_tensors:
if input_tensor.dtype == torch.complex64:
tensor_shapes.append(torch.view_as_real(input_tensor).shape)
else:
tensor_shapes.append(input_tensor.shape)
self.call_dist_op(
":all_gather",
False,
dist.all_gather_coalesced,
output_tensor_lists,
input_tensors,
group_id,
tensor_shapes=tensor_shapes,
)
for l1, l2 in zip(output_tensor_lists, expected_tensors):
for t1, t2 in zip(l1, l2):
if not torch.equal(t1, t2):
return False
return True
def _test_all_gather_coalesced_helper(
self, group, group_id, rank, dtype=torch.float
):
# TODO: Instead we should probably go through _rank_not_in_group
# mechanism to disable sending tensors
if group_id is not None:
for test_case_id in range(2, 5):
# Make sure we create tensors of incompatible sizes, e.g.
# [1], [2x2], [3x3x3] ... to be sent in one batch
input_tensors = [
_build_multidim_tensor(
tensor_id, tensor_id, rank + tensor_id, dtype=dtype
)
for tensor_id in range(1, test_case_id)
]
output_tensor_lists = [
[
_build_multidim_tensor(
tensor_id, tensor_id, -1, dtype=dtype
)
for tensor_id in range(1, test_case_id)
]
for _ in group
]
expected_tensors = [
[
_build_multidim_tensor(
tensor_id, tensor_id, rank_iter + tensor_id, dtype=dtype
)
for tensor_id in range(1, test_case_id)
]
for rank_iter in group
]
assert self._run_all_gather_coalesced_and_verify(
output_tensor_lists, input_tensors, expected_tensors, group_id
), "output tensors do not match expected outputs"
self._barrier()
@sandcastle_skip_if(
BACKEND == "nccl", "all_gather_coalesced does not support NCCL"
)
@sandcastle_skip_if(BACKEND == "mpi", "all_gather_coalesced does not support MPI")
def test_all_gather_coalesced_simple(self):
group, group_id, rank = self._init_global_test()
self._test_all_gather_coalesced_helper(group, group_id, rank)
@sandcastle_skip_if(
BACKEND == "nccl", "all_gather_coalesced does not support NCCL"
)
@sandcastle_skip_if(BACKEND == "mpi", "all_gather_coalesced does not support MPI")
def test_all_gather_coalesced_complex(self):
group, group_id, rank = self._init_global_test()
self._test_all_gather_coalesced_helper(
group, group_id, rank, dtype=torch.cfloat
)
@skip_if_small_worldsize
@sandcastle_skip_if(
BACKEND == "nccl", "all_gather_coalesced does not support NCCL"
)
@sandcastle_skip_if(BACKEND == "mpi", "all_gather_coalesced does not support MPI")
def test_all_gather_coalesced_group(self):
group, group_id, rank = self._init_group_test()
self._test_all_gather_coalesced_helper(group, group_id, rank)
@sandcastle_skip_if(
BACKEND == "nccl", "all_gather_coalesced does not support NCCL"
)
@sandcastle_skip_if(BACKEND == "mpi", "all_gather_coalesced does not support MPI")
def test_all_gather_coalesced_full_group(self):
group, group_id, rank = self._init_full_group_test()
self._test_all_gather_coalesced_helper(group, group_id, rank)
@sandcastle_skip_if(
BACKEND == "nccl", "all_gather_coalesced does not support NCCL"
)
@sandcastle_skip_if(BACKEND == "mpi", "all_gather_coalesced does not support MPI")
def test_all_gather_coalesced_with_empty(self):
group, group_id, rank = self._init_global_test()
input_tensors = [
rank * torch.ones([2, 2]),
torch.ones([0]),
(rank + 1) * torch.ones([3, 3]),
torch.ones([0]),
torch.ones([0]),
]
output_tensors_lists = [
[
-1 * torch.ones([2, 2]),
-1 * torch.ones([0]),
-1 * torch.ones([3, 3]),
-1 * torch.ones([0]),
-1 * torch.ones([0]),
]
for _ in group
]
expected_tensors = [
[
r * torch.ones([2, 2]),
torch.ones([0]),
(r + 1) * torch.ones([3, 3]),
torch.ones([0]),
torch.ones([0]),
]
for r in group
]
assert self._run_all_gather_coalesced_and_verify(
output_tensors_lists, input_tensors, expected_tensors, group_id
)
self._barrier()
# AllToAll
def _test_all_to_all_single_equal_split_helper(
self, group, group_id, rank, cuda=False, rank_to_GPU=None, dtype=torch.float
):
if group_id is not None:
size = len(group)
in_tensor = torch.ones([size, size], dtype=dtype) * rank
expected_tensor = torch.cat(
[torch.ones([1, size], dtype=dtype) * i for i in group]
)
out_tensor = torch.ones([size, size], dtype=dtype) * -1
if cuda:
in_tensor = in_tensor.cuda(rank_to_GPU[rank][0])
expected_tensor = expected_tensor.cuda(rank_to_GPU[rank][0])
out_tensor = out_tensor.cuda(rank_to_GPU[rank][0])
if dtype == torch.complex64:
tensor_shapes = [torch.view_as_real(in_tensor).shape]
else:
tensor_shapes = [in_tensor.shape]
self.call_dist_op(
":all_to_all",
False,
dist.all_to_all_single,
out_tensor,
in_tensor,
group=group_id,
tensor_shapes=tensor_shapes,
)
self.assertEqual(out_tensor, expected_tensor)
self._barrier()
def _test_all_to_all_single_unequal_split_helper(
self, group, group_id, rank, cuda=False, rank_to_GPU=None, dtype=torch.float
):
if group_id is not None:
size = len(group)
in_splits = [i + 1 for i in group]
out_splits = [rank + 1 for _ in group]
in_tensor = torch.ones([sum(in_splits), size], dtype=dtype) * rank
out_tensor = torch.ones([(rank + 1) * size, size], dtype=dtype)
expected_tensor = torch.cat(
[torch.ones([rank + 1, size], dtype=dtype) * i for i in group]
)
if cuda:
in_tensor = in_tensor.cuda(rank_to_GPU[rank][0])
expected_tensor = expected_tensor.cuda(rank_to_GPU[rank][0])
out_tensor = out_tensor.cuda(rank_to_GPU[rank][0])
dist.all_to_all_single(
out_tensor, in_tensor, out_splits, in_splits, group=group_id
)
self.assertEqual(out_tensor, expected_tensor)
self._barrier()
def _test_all_to_all_helper(
self,
group,
group_id,
rank,
cuda=False,
rank_to_GPU=None,
dtype=torch.float,
):
if group_id is not None:
size = len(group)
in_splits = [i + 1 for i in group]
in_tensors = [
torch.ones([in_splits[i], size], dtype=dtype) * rank
for i, _ in enumerate(group)
]
out_tensors = [
torch.ones([(rank + 1), size], dtype=dtype) for _ in group
]
expected_tensors = [
torch.ones([rank + 1, size], dtype=dtype) * i for i in group
]
if cuda:
in_tensors = [t.cuda(rank_to_GPU[rank][0]) for t in in_tensors]
expected_tensors = [
t.cuda(rank_to_GPU[rank][0]) for t in expected_tensors
]
out_tensors = [t.cuda(rank_to_GPU[rank][0]) for t in out_tensors]
dist.all_to_all(out_tensors, in_tensors, group=group_id)
for t1, t2 in zip(out_tensors, expected_tensors):
self.assertEqual(t1, t2)
self._barrier()
@sandcastle_skip_if(BACKEND != "mpi", "Only MPI supports CPU all_to_all_single")
def test_all_to_all_single_equal_split(self):
group, group_id, rank = self._init_global_test()
self._test_all_to_all_single_equal_split_helper(group, group_id, rank)
@sandcastle_skip_if(BACKEND != "nccl", "Only Nccl supports CUDA all_to_all_single")
@skip_if_no_gpu
def test_all_to_all_single_equal_split_cuda(self):
group, group_id, rank = self._init_global_test()
rank_to_GPU = self._init_multigpu_helper()
self._test_all_to_all_single_equal_split_helper(
group,
group_id,
rank,
True,
rank_to_GPU,
)
@sandcastle_skip_if(BACKEND != "mpi", "Only MPI supports CPU all_to_all_single")
def test_all_to_all_single_equal_split_complex(self):
group, group_id, rank = self._init_global_test()
self._test_all_to_all_single_equal_split_helper(
group, group_id, rank, dtype=torch.cfloat
)
@sandcastle_skip_if(BACKEND != "nccl", "Only Nccl supports CUDA all_to_all_single")
@skip_if_no_gpu
def test_all_to_all_single_equal_split_cuda_complex(self):
group, group_id, rank = self._init_global_test()
rank_to_GPU = self._init_multigpu_helper()
self._test_all_to_all_single_equal_split_helper(
group, group_id, rank, True, rank_to_GPU, dtype=torch.cfloat
)
@sandcastle_skip_if(BACKEND != "mpi", "Only MPI supports CPU all_to_all_single")
def test_all_to_all_single_unequal_split(self):
group, group_id, rank = self._init_global_test()
self._test_all_to_all_single_unequal_split_helper(group, group_id, rank)
@sandcastle_skip_if(BACKEND != "nccl", "Only Nccl supports CUDA all_to_all_single")
@skip_if_no_gpu
def test_all_to_all_single_unequal_split_cuda(self):
group, group_id, rank = self._init_global_test()
rank_to_GPU = self._init_multigpu_helper()
self._test_all_to_all_single_unequal_split_helper(
group,
group_id,
rank,
True,
rank_to_GPU,
)
@sandcastle_skip_if(BACKEND != "mpi", "Only MPI supports CPU all_to_all_single")
def test_all_to_all_single_unequal_split_complex(self):
group, group_id, rank = self._init_global_test()
self._test_all_to_all_single_unequal_split_helper(
group, group_id, rank, dtype=torch.cfloat
)
@sandcastle_skip_if(BACKEND != "nccl", "Only Nccl supports CUDA all_to_all_single")
@skip_if_no_gpu
def test_all_to_all_single_unequal_split_cuda_complex(self):
group, group_id, rank = self._init_global_test()
rank_to_GPU = self._init_multigpu_helper()
self._test_all_to_all_single_unequal_split_helper(
group,
group_id,
rank,
True,
rank_to_GPU,
dtype=torch.cfloat,
)
@sandcastle_skip_if(BACKEND != "mpi", "Only MPI supports all_to_all")
def test_all_to_all(self):
group, group_id, rank = self._init_global_test()
self._test_all_to_all_helper(group, group_id, rank)
@sandcastle_skip_if(BACKEND != "nccl", "Only NCCL supports CUDA all_to_all")
@skip_if_rocm
def test_all_to_all_cuda(self):
group, group_id, rank = self._init_global_test()
rank_to_GPU = self._init_multigpu_helper()
self._test_all_to_all_helper(group, group_id, rank, True, rank_to_GPU)
@sandcastle_skip_if(BACKEND != "mpi", "Only MPI supports all_to_all")
def test_all_to_all_complex(self):
group, group_id, rank = self._init_global_test()
self._test_all_to_all_helper(group, group_id, rank, dtype=torch.cfloat)
@sandcastle_skip_if(BACKEND != "nccl", "Only NCCL supports CUDA all_to_all")
@skip_if_rocm
def test_all_to_all_cuda_complex(self):
group, group_id, rank = self._init_global_test()
rank_to_GPU = self._init_multigpu_helper()
self._test_all_to_all_helper(
group, group_id, rank, True, rank_to_GPU, dtype=torch.cfloat
)
@sandcastle_skip_if(BACKEND != "mpi", "Only MPI supports CPU all_to_all_single")
@skip_if_small_worldsize
def test_all_to_all_single_equal_split_group(self):
group, group_id, rank = self._init_group_test()
self._test_all_to_all_single_equal_split_helper(group, group_id, rank)
@sandcastle_skip_if(BACKEND != "nccl", "Only Nccl supports CUDA all_to_all_single")
@skip_if_no_gpu
@skip_if_small_worldsize
def test_all_to_all_single_equal_split_group_cuda(self):
group, group_id, rank = self._init_group_test()
rank_to_GPU = self._init_multigpu_helper()
self._test_all_to_all_single_equal_split_helper(
group,
group_id,
rank,
True,
rank_to_GPU,
)
@sandcastle_skip_if(BACKEND != "mpi", "Only MPI supports CPU all_to_all_single")
@skip_if_small_worldsize
def test_all_to_all_single_unequal_split_group(self):
group, group_id, rank = self._init_group_test()
self._test_all_to_all_single_unequal_split_helper(group, group_id, rank)
@sandcastle_skip_if(BACKEND != "nccl", "Only Nccl supports CUDA all_to_all_single")
@skip_if_no_gpu
@skip_if_small_worldsize
def test_all_to_all_single_unequal_split_group_cuda(self):
group, group_id, rank = self._init_group_test()
rank_to_GPU = self._init_multigpu_helper()
self._test_all_to_all_single_unequal_split_helper(
group,
group_id,
rank,
True,
rank_to_GPU,
)
@sandcastle_skip_if(BACKEND != "mpi", "Only MPI supports all_to_all")
@skip_if_small_worldsize
def test_all_to_all_group(self):
group, group_id, rank = self._init_group_test()
self._test_all_to_all_helper(group, group_id, rank)
@sandcastle_skip_if(BACKEND != "nccl", "Only Nccl supports CUDA all_to_all_single")
@skip_if_small_worldsize
@skip_if_rocm
def test_all_to_all_group_cuda(self):
group, group_id, rank = self._init_group_test()
rank_to_GPU = self._init_multigpu_helper()
self._test_all_to_all_helper(group, group_id, rank, True, rank_to_GPU)
@sandcastle_skip_if(BACKEND != "mpi", "Only MPI supports CPU all_to_all_single")
def test_all_to_all_single_equal_split_full_group(self):
group, group_id, rank = self._init_full_group_test()
self._test_all_to_all_single_equal_split_helper(group, group_id, rank)
@sandcastle_skip_if(BACKEND != "nccl", "Only Nccl supports CUDA all_to_all_single")
@skip_if_no_gpu
def test_all_to_all_single_equal_split_full_group_cuda(self):
group, group_id, rank = self._init_full_group_test()
rank_to_GPU = self._init_multigpu_helper()
self._test_all_to_all_single_equal_split_helper(
group,
group_id,
rank,
True,
rank_to_GPU,
)
@sandcastle_skip_if(BACKEND != "mpi", "Only MPI supports CPU all_to_all_single")
def test_all_to_all_single_unequal_split_full_group(self):
group, group_id, rank = self._init_full_group_test()
self._test_all_to_all_single_unequal_split_helper(group, group_id, rank)
@sandcastle_skip_if(BACKEND != "nccl", "Only Nccl supports CUDA all_to_all_single")
@skip_if_no_gpu
def test_all_to_all_single_unequal_split_full_group_cuda(self):
group, group_id, rank = self._init_full_group_test()
rank_to_GPU = self._init_multigpu_helper()
self._test_all_to_all_single_unequal_split_helper(
group,
group_id,
rank,
True,
rank_to_GPU,
)
@sandcastle_skip_if(BACKEND != "mpi", "Only MPI supports all_to_all")
def test_all_to_all_full_group(self):
group, group_id, rank = self._init_full_group_test()
self._test_all_to_all_helper(group, group_id, rank)
@sandcastle_skip_if(BACKEND != "nccl", "Only NCCL supports CUDA all_to_all")
@skip_if_rocm
def test_all_to_all_full_group_cuda(self):
group, group_id, rank = self._init_full_group_test()
rank_to_GPU = self._init_multigpu_helper()
self._test_all_to_all_helper(group, group_id, rank, True, rank_to_GPU)
# BARRIER
def _test_barrier_helper(
self, group, group_id, rank, cuda=False, rank_to_GPU=None
):
WAIT_TIME = 0.3 # seconds
for dest in group:
expected_time = torch.DoubleTensor(1).fill_(0.0)
if cuda:
expected_time = expected_time.cuda(rank_to_GPU[rank][0])
if dest == rank:
expected_time.fill_(time.time() + WAIT_TIME)
dist.broadcast(expected_time, dest, group_id)
time.sleep(WAIT_TIME + 0.1) # sleep a little bit longer
dist.barrier(group_id)
else:
dist.broadcast(expected_time, dest, group_id)
dist.barrier(group_id)
self.assertGreaterAlmostEqual(
float(time.time()),
float(expected_time[0]),
"destination rank: %d, my rank: %d" % (dest, rank)
+ " (if you see this failure, please report in #14554)",
)
# Use higher timeout for the instance where the test runs
# against a subgroup and uses a CUDA tensor for expected time.
# The CUDA initialization for the participating processes can
# take long enough for the barrier timeout to trigger on the
# process that doesn't participate in the group.
self._barrier(timeout=20)
@skip_if_no_gpu
@sandcastle_skip_if(BACKEND == "mpi", "MPI doesn't support GPU barrier")
def test_barrier_cuda(self):
group, group_id, rank = self._init_global_test()
rank_to_GPU = self._init_multigpu_helper()
self._test_barrier_helper(group, group_id, rank, True, rank_to_GPU)
@skip_if_small_worldsize
@skip_if_no_gpu
@sandcastle_skip_if(BACKEND == "mpi", "MPI doesn't support GPU barrier")
def test_barrier_group_cuda(self):
group, group_id, rank = self._init_group_test()
rank_to_GPU = self._init_multigpu_helper()
self._test_barrier_helper(group, group_id, rank, True, rank_to_GPU)
@skip_if_small_worldsize
@skip_if_no_gpu
@sandcastle_skip_if(BACKEND == "mpi", "MPI doesn't support GPU barrier")
def test_barrier_full_group_cuda(self):
group, group_id, rank = self._init_full_group_test()
rank_to_GPU = self._init_multigpu_helper()
self._test_barrier_helper(group, group_id, rank, True, rank_to_GPU)
@sandcastle_skip_if(BACKEND == "nccl", "NCCL does not support CPU barrier")
def test_barrier(self):
group, group_id, rank = self._init_global_test()
self._test_barrier_helper(group, group_id, rank)
@skip_if_small_worldsize
@sandcastle_skip_if(BACKEND == "nccl", "NCCL does not support CPU barrier")
def test_barrier_group(self):
group, group_id, rank = self._init_group_test()
self._test_barrier_helper(group, group_id, rank)
@sandcastle_skip_if(BACKEND == "nccl", "NCCL does not support CPU barrier")
def test_barrier_full_group(self):
group, group_id, rank = self._init_full_group_test()
self._test_barrier_helper(group, group_id, rank)
def _test_broadcast_multigpu_helper(self, group, group_id, rank, rank_to_GPU):
for src in group:
expected_tensor = _build_tensor(src + 1)
tensors = [
_build_tensor(src + 1, -1).cuda(device=i) for i in rank_to_GPU[rank]
]
if rank == src:
tensors[0] = expected_tensor.cuda(device=rank_to_GPU[rank][0])
dist.broadcast_multigpu(tensors, src, group_id)
for tensor in tensors:
self.assertEqual(tensor, expected_tensor)
self._barrier()
@sandcastle_skip_if(BACKEND == "mpi", "MPI doesn't support broadcast multigpu")
@sandcastle_skip_if(BACKEND == "nccl", "NCCL broadcast multigpu skipped")
@skip_if_no_gpu
def test_broadcast_multigpu(self):
group, group_id, rank = self._init_global_test()
rank_to_GPU = self._init_multigpu_helper()
self._test_broadcast_multigpu_helper(group, group_id, rank, rank_to_GPU)
def _test_all_reduce_multigpu_helper(
self,
group,
group_id,
rank,
rank_to_GPU,
op,
master_value,
worker_value,
expected_value,
dtype=torch.float,
):
for src in group:
curr_value = master_value if rank == src else worker_value
tensors = [
_build_tensor(src + 1, curr_value, dtype=dtype).cuda(device=i)
for i in rank_to_GPU[rank]
]
self.call_dist_op(
":all_reduce",
False,
dist.all_reduce_multigpu,
tensors,
op,
group_id,
)
expected_tensor = _build_tensor(src + 1, expected_value, dtype=dtype)
for tensor in tensors:
self.assertEqual(tensor, expected_tensor)
self._barrier()
@sandcastle_skip_if(BACKEND == "mpi", "MPI doesn't support broadcast multigpu")
@sandcastle_skip_if(BACKEND == "nccl", "CUDA all_reduce multigpu skipped for NCCL")
@skip_if_no_gpu
def test_all_reduce_multigpu(self):
group, group_id, rank = self._init_global_test()
rank_to_GPU = self._init_multigpu_helper()
self._test_all_reduce_multigpu_helper(
group,
group_id,
rank,
rank_to_GPU,
dist.ReduceOp.SUM,
2,
10,
(2 + 10 * (len(group) - 1)) * len(rank_to_GPU[0]),
)
@sandcastle_skip_if(BACKEND == "mpi", "MPI doesn't support broadcast multigpu")
@sandcastle_skip_if(BACKEND == "nccl", "CUDA all_reduce multigpu skipped for NCCL")
@skip_if_no_gpu
def test_all_reduce_multigpu_complex(self):
group, group_id, rank = self._init_global_test()
rank_to_GPU = self._init_multigpu_helper()
self._test_all_reduce_multigpu_helper(
group,
group_id,
rank,
rank_to_GPU,
dist.ReduceOp.SUM,
complex(2, 3),
complex(10, 11),
(complex(2, 3) + complex(10, 11) * (len(group) - 1))
* len(rank_to_GPU[0]),
dtype=torch.cfloat,
)
def _test_reduce_multigpu_helper(
self,
group,
group_id,
rank,
rank_to_GPU,
op,
master_value,
worker_value,
expected_value,
):
for src in group:
tensor_value = master_value if rank == src else worker_value
tensors = [
_build_tensor(src + 1, tensor_value).cuda(device=i)
for i in rank_to_GPU[rank]
]
self.call_dist_op(
"reduce",
False,
dist.reduce_multigpu,
tensors,
src,
op,
group_id,
expect_event=len(tensors) == 1,
tensor_shapes=[tensors[0].shape],
)
if rank == src:
expected_tensor = _build_tensor(src + 1, expected_value)
self.assertEqual(tensors[0], expected_tensor)
self._barrier()
@sandcastle_skip_if(
BACKEND != "nccl", "Only Nccl backend supports reduce multigpu"
)
@skip_if_no_gpu
def test_reduce_multigpu(self):
group, group_id, rank = self._init_global_test()
rank_to_GPU = self._init_multigpu_helper()
device_id = rank_to_GPU[rank][0]
torch.cuda.set_device(device_id)
self._test_reduce_multigpu_helper(
group,
group_id,
rank,
rank_to_GPU,
dist.ReduceOp.SUM,
2,
10,
(2 + 10 * (len(group) - 1)) * len(rank_to_GPU[0]),
)
def _test_all_gather_multigpu_helper(
self, group, group_id, rank, rank_to_GPU, dtype=torch.float
):
for dest in group:
tensors = [
_build_tensor(dest + 1, dtype=dtype).cuda(device=i)
for i in rank_to_GPU[rank]
]
# construct the expected output along with
# a placeholder to receive the all_gather results
output_tensors = []
expected_output = []
output_per_gpu = (
[_build_tensor(dest + 1, -1, dtype=dtype)]
* len(rank_to_GPU[0])
* len(group)
)
expected_per_gpu = (
[_build_tensor(dest + 1, dtype=dtype)]
* len(rank_to_GPU[0])
* len(group)
)
for gpu in rank_to_GPU[rank]:
output_tensors.append([t.cuda(device=gpu) for t in output_per_gpu])
expected_output.append(
[t.cuda(device=gpu) for t in expected_per_gpu]
)
self.call_dist_op(
"all_gather",
False,
dist.all_gather_multigpu,
output_tensors,
tensors,
group_id,
expect_event=len(expected_output) == 1,
)
self.assertEqual(output_tensors, expected_output)
self._barrier()
@sandcastle_skip_if(
BACKEND != "nccl", "Only Nccl backend supports allgather multigpu"
)
@skip_if_no_gpu
def test_all_gather_multigpu(self):
group, group_id, rank = self._init_global_test()
rank_to_GPU = self._init_multigpu_helper()
device_id = rank_to_GPU[rank][0]
torch.cuda.set_device(device_id)
self._test_all_gather_multigpu_helper(group, group_id, rank, rank_to_GPU)
@sandcastle_skip_if(
BACKEND != "nccl", "Only Nccl backend supports allgather multigpu"
)
@skip_if_no_gpu
def test_all_gather_multigpu_complex(self):
group, group_id, rank = self._init_global_test()
rank_to_GPU = self._init_multigpu_helper()
device_id = rank_to_GPU[rank][0]
torch.cuda.set_device(device_id)
self._test_all_gather_multigpu_helper(
group, group_id, rank, rank_to_GPU, dtype=torch.cfloat
)
def _model_step(self, model):
    """Apply accumulated gradients as an in-place parameter update, then clear them."""
for param in model.parameters():
if param.grad is not None:
with torch.no_grad():
param += param.grad
param.grad = None
def _model_step_with_zero_grad(self, model):
    """Apply accumulated gradients, then zero .grad in place instead of dropping it."""
for param in model.parameters():
if param.grad is not None:
with torch.no_grad():
param += param.grad
param.grad.requires_grad_(False)
param.grad.zero_()
def _prepare_dummy_data(self, local_bs):
# global_bs for DDP should be divisible by WORLD_SIZE
world_size = int(os.environ["WORLD_SIZE"])
global_bs = world_size * local_bs
input_cpu = torch.randn(global_bs, 2)
target = torch.randn(global_bs, 4)
loss = nn.MSELoss()
return global_bs, input_cpu, target, loss
# END TO END TEST FOR DISTRIBUTEDDATAPARALLEL
def _test_DDP_helper(
self, model, input_var, target, loss, scale_factor=1.0, memory_format=None
):
model.train()
output = model(input_var)
loss_value = loss(output, target) * scale_factor
loss_value.backward()
if memory_format is not None:
self.assertTrue(output.is_contiguous(memory_format=memory_format))
def _assert_equal_param(self, param_gpu, param_DDP):
self.assertEqual(len(param_gpu), len(param_DDP))
for p_gpu, p_DDP in zip(param_gpu, param_DDP):
self.assertEqual(p_gpu, p_DDP)
def _test_DDP_niter(
self,
model_base,
model_DDP,
input,
target,
loss,
local_bs,
rank,
batch_size,
test_save,
offset=None,
world_size=0,
zero_grad=False,
memory_format=None,
n_iter=5,
):
for idx in range(n_iter):
# single cpu/gpu training
self._test_DDP_helper(
model_base, input, target, loss, memory_format=memory_format
)
if offset is None:
offset = rank * local_bs
# DDP training, DDP scatters subsets of input_cpu to nodes/GPUs
self._test_DDP_helper(
model_DDP,
input[offset : offset + local_bs],
target[offset : offset + local_bs],
loss,
world_size * local_bs / batch_size if world_size != 0 else 1,
memory_format=memory_format,
)
# Update weights and run a second iteration to shake out errors
if zero_grad:
self._model_step_with_zero_grad(model_base)
self._model_step_with_zero_grad(model_DDP)
else:
self._model_step(model_base)
self._model_step(model_DDP)
self._assert_equal_param(
list(model_base.parameters()), list(model_DDP.module.parameters())
)
# Shuffle the input so that DDP input is different
input = input[torch.randperm(batch_size)]
# save the model in the middle and reload
if test_save and idx == 2 and INIT_METHOD.startswith("file://"):
with tempfile.NamedTemporaryFile() as tmp:
if sys.platform == "win32":
torch.save(model_DDP, tmp)
tmp.seek(0)
model_DDP = torch.load(tmp)
else:
torch.save(model_DDP, tmp.name)
model_DDP = torch.load(tmp.name)
with tempfile.TemporaryFile() as tmp_file:
torch.save(model_DDP, tmp_file)
tmp_file.seek(0)
saved_model = torch.load(tmp_file)
for k in model_DDP.state_dict():
self.assertEqual(model_DDP.state_dict()[k], saved_model.state_dict()[k])
def _test_DistributedDataParallel(
self,
gpu_subset,
rank,
output_device=None,
gradient_as_bucket_view=False,
static_graph=False,
):
# Run a simple end-to-end DDP model, using the result of the
# single-node model as the baseline
# cpu training setup
model = DDP_NET
# single gpu training setup
model_gpu = copy.deepcopy(model)
model_gpu.cuda(gpu_subset[0])
# DDP training setup
model_DDP = copy.deepcopy(model)
model_DDP.cuda(gpu_subset[0])
model_DDP = nn.parallel.DistributedDataParallel(
model_DDP,
device_ids=gpu_subset,
gradient_as_bucket_view=gradient_as_bucket_view,
)
if static_graph:
model_DDP._set_static_graph()
        # test that the model can be serialized and deserialized
with tempfile.NamedTemporaryFile() as tmp:
if sys.platform == "win32":
torch.save(model_DDP, tmp)
tmp.seek(0)
model_DDP = torch.load(tmp)
else:
torch.save(model_DDP, tmp.name)
model_DDP = torch.load(tmp.name)
# dummy data initialization
local_bs = len(gpu_subset)
global_bs, input_cpu, target, loss = self._prepare_dummy_data(local_bs)
# check two model parameters over 5 iterations
self._test_DDP_niter(
model_gpu,
model_DDP,
input_cpu.cuda(gpu_subset[0]),
target.cuda(gpu_subset[0]),
loss,
local_bs,
rank,
global_bs,
True,
)
self._barrier()
def _test_DistributedDataParallelCPU(self, gradient_as_bucket_view=False):
        # Run a simple end-to-end DDP-CPU model and use the result of the
        # single-node model as the baseline
group, group_id, rank = self._init_global_test()
# cpu training setup
model_base = DDP_NET
# DDP-CPU training setup
model_DDP = copy.deepcopy(model_base)
model_DDP = nn.parallel.DistributedDataParallel(
model_DDP, gradient_as_bucket_view=gradient_as_bucket_view
)
# dummy data initialization
local_bs = 2
global_bs, input_cpu, target, loss = self._prepare_dummy_data(local_bs)
# check two model parameters over 5 iterations
self._test_DDP_niter(
model_base,
model_DDP,
input_cpu,
target,
loss,
local_bs,
rank,
global_bs,
False,
zero_grad=True,
)
self._barrier()
return model_DDP
@sandcastle_skip_if(BACKEND == "nccl", "nccl does not support DDP on CPU models")
def test_DistributedDataParallelCPU(self):
self._test_DistributedDataParallelCPU()
@sandcastle_skip_if(BACKEND == "nccl", "nccl does not support DDP on CPU models")
def test_DistributedDataParallelCPU_grad_is_view(self):
self._test_DistributedDataParallelCPU(gradient_as_bucket_view=True)
@sandcastle_skip_if(
BACKEND != "nccl" and BACKEND != "gloo",
"Only Nccl & Gloo backend support DistributedDataParallel",
)
def test_DistributedDataParallel_requires_grad(self):
# a module without gradients shouldn't be accepted
self.assertRaises(
RuntimeError, lambda: nn.parallel.DistributedDataParallel(nn.Module())
)
self._barrier()
@sandcastle_skip_if(
BACKEND != "nccl" and BACKEND != "gloo",
"Only NCCL and GLOO backend support DistributedDataParallel",
)
@skip_if_lt_x_gpu(int(os.environ["WORLD_SIZE"]))
@skip_if_rocm
def test_DistributedDataParallel_non_default_stream(self):
stream = torch.cuda.Stream(self.rank)
rank = self.rank
with torch.cuda.stream(stream):
net = torch.nn.parallel.DistributedDataParallel(
torch.nn.Linear(1, 1, bias=False).cuda(rank), device_ids=[rank]
)
for i in range(1000):
# Clear gradients manually
grad = net.module.weight.grad
if grad is not None:
grad.requires_grad_(False)
grad.zero_()
# Forward + BW
batch = torch.tensor([rank]).float().cuda(rank)
loss = net(batch).sum()
loss.backward()
# For each worker, the gradient on the weight should be worker_rank.
grad = net.module.weight.grad
avg = grad.clone()
                # All-reducing the already-averaged gradients should reproduce
                # the same average. If not, then one of the workers has not
                # correctly written back the averaged gradient before this
                # all-reduce call.
dist.all_reduce(avg)
world_size = int(os.environ["WORLD_SIZE"])
avg.div_(world_size)
expected_grad = sum(i for i in range(world_size)) / world_size
self.assertEqual(
avg[0, 0],
expected_grad,
msg=f"Expected gradient of {expected_grad} but got {avg} on rank {self.rank}",
)
@sandcastle_skip_if(
BACKEND != "nccl" and BACKEND != "gloo",
"MPI backend does not support DDP communication hook on CUDA devices",
)
@skip_if_lt_x_gpu(int(os.environ["WORLD_SIZE"]))
@skip_if_rocm
def test_ddp_comm_hook_logging(self):
hooks = [
default.allreduce_hook,
default.fp16_compress_hook,
powerSGD.powerSGD_hook,
powerSGD.batched_powerSGD_hook,
quantization_hooks.quantization_pertensor_hook,
quantization_hooks.quantization_perchannel_hook,
]
cpp_builtin_hooks = [
dist.BuiltinCommHookType.ALLREDUCE,
dist.BuiltinCommHookType.FP16_COMPRESS,
]
for hook in hooks:
ddp_model = torch.nn.parallel.DistributedDataParallel(
torch.nn.Linear(1, 1, bias=False).cuda(self.rank),
device_ids=[self.rank],
)
ddp_logging_data = ddp_model._get_ddp_logging_data()
# Hook not registered yet, so should be empty
self.assertEqual(ddp_logging_data.get("comm_hook"), None)
ddp_model.register_comm_hook(None, hook)
ddp_logging_data = ddp_model._get_ddp_logging_data()
self.assertEqual(ddp_logging_data.get("comm_hook"), hook.__qualname__)
for hook in cpp_builtin_hooks:
ddp_model = torch.nn.parallel.DistributedDataParallel(
torch.nn.Linear(1, 1, bias=False).cuda(self.rank),
device_ids=[self.rank],
)
ddp_logging_data = ddp_model._get_ddp_logging_data()
# Hook not registered yet, so should be empty
self.assertEqual(ddp_logging_data.get("comm_hook"), None)
ddp_model._register_builtin_comm_hook(hook)
ddp_logging_data = ddp_model._get_ddp_logging_data()
self.assertEqual(ddp_logging_data.get("comm_hook"), str(hook))
# No hook registered
ddp_model = torch.nn.parallel.DistributedDataParallel(
torch.nn.Linear(1, 1, bias=False).cuda(self.rank),
device_ids=[self.rank],
)
ddp_logging_data = ddp_model._get_ddp_logging_data()
# Hook not registered yet, so should be empty
self.assertEqual(ddp_logging_data.get("comm_hook"), None)
# After second forward pass, hook should still be empty string
for i in range(2):
inp = torch.ones(1, 1, device=self.rank)
loss = ddp_model(inp).sum()
loss.backward()
ddp_logging_data = ddp_model._get_ddp_logging_data()
# Note: DETAIL debug mode logs DDP logging data to stdout and
# thus accesses std::map, which fills in a default value for the
# type if it didn't exist.
self.assertEqual(ddp_logging_data.get("comm_hook", ""), "")
def _test_ddp_hook_with_optimizer_parity(
self, grad_as_bucket_view, static_graph
):
rank = self.rank
torch.cuda.set_device(rank)
torch.manual_seed(rank)
torch.cuda.manual_seed(rank)
models_to_test = [
(LargeNet(), torch.randn(1, 1000).cuda()),
]
if HAS_TORCHVISION:
models_to_test.append(
(torchvision.models.resnet50(), torch.randn(1, 3, 3, 1000).cuda())
)
# Enable determinism in cudnn operators
for (model, inp) in models_to_test:
with torch.backends.cudnn.flags(
enabled=True, deterministic=True, benchmark=False
):
sgd_lr = 1e-2
sgd_momentum = 0.9
sgd_weight_decay = 0.01
ddp_model_with_optimizer_hook = (
torch.nn.parallel.DistributedDataParallel(
copy.deepcopy(model).cuda(),
device_ids=[self.rank],
gradient_as_bucket_view=grad_as_bucket_view,
)
)
if static_graph:
ddp_model_with_optimizer_hook._set_static_graph()
# Register hook that runs allreduce + functional SGD step.
allreduce_hook = default.allreduce_hook
opt_hook_state = default._OptimizerHookState(
_FunctionalSGD,
sgd_lr,
momentum=sgd_momentum,
weight_decay=sgd_weight_decay,
)
ddp_model_with_optimizer_hook.register_comm_hook(
None,
default._hook_then_optimizer(allreduce_hook, opt_hook_state),
)
                # Create a DDP model with no hook; it runs the optimizer
                # step after backward.
ddp_model_with_no_hook = torch.nn.parallel.DistributedDataParallel(
copy.deepcopy(model).cuda(),
device_ids=[self.rank],
gradient_as_bucket_view=grad_as_bucket_view,
)
if static_graph:
ddp_model_with_no_hook._set_static_graph()
sgd_no_hook = torch.optim.SGD(
ddp_model_with_no_hook.parameters(),
lr=sgd_lr,
momentum=sgd_momentum,
weight_decay=sgd_weight_decay,
)
# Verify parameters are equal initially.
for hook_param, allreduce_param in zip(
ddp_model_with_optimizer_hook.parameters(),
ddp_model_with_no_hook.parameters(),
):
self.assertEqual(hook_param, allreduce_param)
# Save old parameters to later verify optimizer modified them.
opt_hook_init_params = copy.deepcopy(
list(ddp_model_with_optimizer_hook.parameters())
)
# Run optimizer with hook model.
for i in range(6):
ddp_model_with_optimizer_hook.zero_grad()
out = ddp_model_with_optimizer_hook(inp)
loss = out.sum()
loss.backward()
dist.barrier()
# Run regular model.
for i in range(6):
ddp_model_with_no_hook.zero_grad()
out = ddp_model_with_no_hook(inp)
loss = out.sum()
loss.backward()
sgd_no_hook.step()
dist.barrier()
# Now verify parameters are equal.
for hook_param, allreduce_param in zip(
ddp_model_with_optimizer_hook.parameters(),
ddp_model_with_no_hook.parameters(),
):
self.assertEqual(hook_param, allreduce_param)
                # Verify the optimizer modified the parameters; otherwise
                # they would be trivially equal above.
self.assertNotEqual(
opt_hook_init_params,
list(ddp_model_with_optimizer_hook.parameters()),
)
dist.barrier()
@sandcastle_skip_if(
BACKEND != "nccl" and BACKEND != "gloo",
"Only Nccl & Gloo backend support DistributedDataParallel",
)
@sandcastle_skip_if(IS_WINDOWS, "FunctionalSGD not yet supported with Windows.")
@skip_if_lt_x_gpu(2)
@skip_if_rocm
def test_ddp_hook_with_optimizer_parity(self):
for grad_as_bucket_view, static_graph in itertools.product(
[True, False], [True, False]
):
self._test_ddp_hook_with_optimizer_parity(
grad_as_bucket_view=grad_as_bucket_view, static_graph=static_graph
)
def _test_ddp_hook_parity(self, state, hook):
rank = self.rank
m = torch.nn.Linear(1, 5)
try:
process_group = state.process_group
except AttributeError:
process_group = state
net_with_hook = torch.nn.parallel.DistributedDataParallel(
copy.deepcopy(m).to(rank),
device_ids=[rank],
process_group=process_group,
)
net_with_hook.register_comm_hook(state=state, hook=hook)
net_without_hook = torch.nn.parallel.DistributedDataParallel(
copy.deepcopy(m).to(rank),
device_ids=[rank],
process_group=process_group,
)
for i in range(100):
# Clear gradients manually.
for g in [
net_without_hook.module.weight.grad,
net_with_hook.module.weight.grad,
]:
if g is not None:
g.requires_grad_(False)
g.zero_()
# Forward + BW
batch = torch.tensor([rank]).float().cuda(rank)
loss = net_without_hook(batch).sum()
loss.backward()
# For each worker, the gradient on the weight should be worker_rank.
grad = net_without_hook.module.weight.grad
avg = grad.clone()
expected_grad = (
sum(i for i in range(dist.get_world_size())) / dist.get_world_size()
)
loss_hook = net_with_hook(batch).sum()
loss_hook.backward()
grad_hook = net_with_hook.module.weight.grad
avg_hook = grad_hook.clone()
# Verify hook grad with expected.
            # Cannot use an exact match here due to a very small accuracy
            # loss, e.g. 1e-05, in the powerSGD hook case.
assert_func = (
self.assertEqual
if hook == default.allreduce_hook
else torch.testing.assert_allclose
)
assert_func(
avg_hook[0, 0],
expected_grad,
msg=f"Expected hook grad of {expected_grad} but got {avg_hook[0, 0]}",
)
# Verify hook grad with vanilla allreduce
assert_func(
avg_hook[0, 0],
avg[0, 0],
msg=f"Expected hook grad to be close to allreduce {avg[0, 0]}, but got {avg_hook[0, 0]}",
)
@sandcastle_skip_if(
BACKEND != "nccl" and BACKEND != "gloo",
"MPI backend does not support DDP communication hook on CUDA devices",
)
@skip_if_lt_x_gpu(int(os.environ["WORLD_SIZE"]))
@skip_if_rocm
def test_ddp_hook_parity_allreduce(self):
self._test_ddp_hook_parity(state=None, hook=default.allreduce_hook)
@sandcastle_skip_if(
BACKEND != "nccl" and BACKEND != "gloo",
"MPI backend does not support DDP communication hook on CUDA devices",
)
@skip_if_lt_x_gpu(int(os.environ["WORLD_SIZE"]))
@skip_if_rocm
def test_ddp_hook_parity_allreduce_process_group(self):
# process_group is passed in to both DDP and comm. hook
rank_to_GPU = self._init_multigpu_helper()
gpus = [rank_to_GPU[int(r)][0] for r in range(dist.get_world_size())]
process_group = torch.distributed.new_group(gpus)
self._test_ddp_hook_parity(state=process_group, hook=default.allreduce_hook)
@sandcastle_skip_if(
BACKEND != "nccl" and BACKEND != "gloo",
"MPI backend does not support DDP communication hook on CUDA devices",
)
@skip_if_lt_x_gpu(int(os.environ["WORLD_SIZE"]))
@skip_if_rocm
def test_ddp_hook_parity_powerSGD(self):
for warm_start in [True, False]:
powersgd_state = powerSGD.PowerSGDState(
process_group=None,
matrix_approximation_rank=1,
start_powerSGD_iter=2,
warm_start=warm_start,
)
self._test_ddp_hook_parity(
state=powersgd_state, hook=powerSGD.powerSGD_hook
)
@sandcastle_skip_if(
BACKEND != "nccl" and BACKEND != "gloo",
"MPI backend does not support DDP communication hook on CUDA devices",
)
    @sandcastle_skip_if(
        NO_MULTIPROCESSING_SPAWN,
        "Disabled for environments that don't support multiprocessing with spawn start method",
    )
@skip_if_lt_x_gpu(int(os.environ["WORLD_SIZE"]))
@skip_if_rocm
def test_ddp_hook_parity_post_localSGD(self):
        # Although local SGD starts at iteration 10, it still runs on the global
        # process group, so post-local SGD still all-reduces gradients globally
        # for the remaining iterations.
state = post_localSGD.PostLocalSGDState(
process_group=None, subgroup=dist.group.WORLD, start_localSGD_iter=10
)
self._test_ddp_hook_parity(
state=state, hook=post_localSGD.post_localSGD_hook
)
        # Since local SGD is set to start after all 100 iterations have run,
        # no local SGD is actually executed, and no subgroup needs to be
        # provided in this case.
state = post_localSGD.PostLocalSGDState(
process_group=None, subgroup=None, start_localSGD_iter=1000
)
self._test_ddp_hook_parity(
state=state, hook=post_localSGD.post_localSGD_hook
)
def _prepare_single_device_module(
self,
rank,
process_group,
devices,
device_ids,
global_batch_size,
gradient_as_bucket_view=False,
):
model = Net()
device = devices[0] if devices else torch.device("cuda:%d" % rank)
ddp_model = DistributedDataParallel(
copy.deepcopy(model).to(device),
device_ids=device_ids,
process_group=process_group,
bucket_cap_mb=0.001,
gradient_as_bucket_view=gradient_as_bucket_view,
)
model.to(device)
input = torch.randn(global_batch_size, 2).to(device)
target = torch.randn(global_batch_size, 4).to(device)
return model, ddp_model, input, target
def _prepare_cpu_module(
self,
process_group,
global_batch_size,
gradient_as_bucket_view=False,
):
model = Net()
ddp_model = DistributedDataParallel(
copy.deepcopy(model),
process_group=process_group,
bucket_cap_mb=0.001,
gradient_as_bucket_view=gradient_as_bucket_view,
)
input = torch.randn(global_batch_size, 2)
target = torch.randn(global_batch_size, 4)
return model, ddp_model, input, target
def _test_accumulate_gradients_no_sync(
self, num_iters=2, ddp_comm_hook=None, gradient_as_bucket_view=False
):
"""
This is the recommended way to implement accumulate grads.
If ``ddp_comm_hook`` input was specified, it will also register that hook
to the ``ddp_model``. The hook fed into this function should not change
the resulting gradients.
"""
group, group_id, rank = self._init_global_test()
world_size = get_world_size()
# FIXME: Add testing for gloo/CUDA
if BACKEND == "mpi" or BACKEND == "gloo":
global_batch_size = world_size
local_batch_size = 1
model, ddp_model, input, target = self._prepare_cpu_module(
group_id, global_batch_size, gradient_as_bucket_view
)
if BACKEND == "nccl":
rank_to_GPU = self._init_multigpu_helper()
int_devices = rank_to_GPU[rank][:1]
devices = [torch.device("cuda:" + str(i)) for i in int_devices]
global_batch_size = world_size
local_batch_size = len(devices)
model, ddp_model, input, target = self._prepare_single_device_module(
rank,
group_id,
devices,
devices,
global_batch_size,
gradient_as_bucket_view,
)
if ddp_comm_hook is not None:
ddp_model.register_comm_hook(group_id, ddp_comm_hook)
def step_model(model, input, target):
model.train()
output = model(input)
loss = F.mse_loss(output, target.to(output.device))
loss.backward()
        # Ensure gradient accumulation works under no_grad: no grads are accumulated.
with torch.no_grad():
with ddp_model.no_sync():
ddp_model.train()
ddp_model(input)
# check two model parameters over num_iters iterations
for iteration in range(num_iters):
step_model(model, input, target)
ddp_input = input[
rank * local_batch_size : (rank + 1) * local_batch_size
]
ddp_target = target[
rank * local_batch_size : (rank + 1) * local_batch_size
]
if iteration % num_iters == 0:
# accumulate grads locally
with ddp_model.no_sync():
step_model(ddp_model, ddp_input, ddp_target)
else:
# sync grads
step_model(ddp_model, ddp_input, ddp_target)
for i, j in zip(model.parameters(), ddp_model.parameters()):
if not i.requires_grad:
continue
if iteration % num_iters == 0:
self.assertNotEqual(i.grad, j.grad)
else:
self.assertEqual(i.grad, j.grad)
# Shuffle the input so that DDP input is different
torch.manual_seed(1337 + iteration)
input = input[torch.randperm(global_batch_size)]
@sandcastle_skip_if(
BACKEND != "mpi" and BACKEND != "nccl" and BACKEND != "gloo",
"get_future is only supported on mpi, nccl and gloo",
)
@nccl_skip_if_lt_x_gpu(BACKEND, 2)
def test_accumulate_gradients_no_sync(self):
"""
Runs _test_accumulate_gradients_no_sync using default inputs
"""
self._test_accumulate_gradients_no_sync()
@sandcastle_skip_if(
BACKEND != "mpi" and BACKEND != "nccl" and BACKEND != "gloo",
"get_future is only supported on mpi, nccl and gloo",
)
@nccl_skip_if_lt_x_gpu(BACKEND, 2)
def test_accumulate_gradients_no_sync_grad_is_view(self):
"""
Runs _test_accumulate_gradients_no_sync using default inputs
"""
self._test_accumulate_gradients_no_sync(gradient_as_bucket_view=True)
@sandcastle_skip_if(
BACKEND != "mpi" and BACKEND != "nccl" and BACKEND != "gloo",
"get_future is only supported on mpi, nccl and gloo",
)
@nccl_skip_if_lt_x_gpu(BACKEND, 2)
def test_accumulate_gradients_no_sync_allreduce_hook(self):
"""
Runs multiple iterations on _test_accumulate_gradients_no_sync
using allreduce hook and validates whether future result was properly
passed as gradients in reducer.
"""
world_size = get_world_size()
def allreduce_hook(
group_id: object, bucket: dist.GradBucket
) -> torch.futures.Future[torch.Tensor]:
tensors = [bucket.get_tensor() / world_size]
return (
group_id.allreduce(tensors)
.get_future()
.then(lambda fut: fut.value()[0])
)
self._test_accumulate_gradients_no_sync(
num_iters=4, ddp_comm_hook=allreduce_hook
)
@sandcastle_skip_if(
BACKEND != "mpi" and BACKEND != "nccl" and BACKEND != "gloo",
"get_future is only supported on mpi, nccl and gloo",
)
@nccl_skip_if_lt_x_gpu(BACKEND, 2)
def test_accumulate_gradients_no_sync_allreduce_with_then_hook(self):
"""
Runs multiple iterations on _test_accumulate_gradients_no_sync using allreduce
hook that also uses then callbacks. In first then callback result is multiplied
by 2, and the second callback divides the result by 2 * world_size. It validates
whether final result was properly passed as gradients in reducer.
"""
world_size = get_world_size()
def allreduce_with_then_hook(
group_id: object, bucket: dist.GradBucket
) -> torch.futures.Future[torch.Tensor]:
fut = group_id.allreduce([bucket.get_tensor()]).get_future()
def mult(fut):
# Multiply the result by 2.
return 2 * fut.wait()[0]
def div(fut):
# Divide the result by 2 * world_size.
return fut.wait() / (2 * world_size)
return fut.then(mult).then(div)
self._test_accumulate_gradients_no_sync(
num_iters=4, ddp_comm_hook=allreduce_with_then_hook
)
@sandcastle_skip_if(
BACKEND != "mpi" and BACKEND != "nccl" and BACKEND != "gloo",
"get_future is only supported on mpi, nccl and gloo",
)
@nccl_skip_if_lt_x_gpu(BACKEND, 2)
def test_get_future(self):
def mult(fut):
return [t * 3 for t in fut.wait()]
def add(fut):
return [t + 1 for t in fut.wait()]
group, group_id, rank = self._init_global_test()
input = _build_tensor(3, 2)
if BACKEND == "nccl":
rank_to_GPU = self._init_multigpu_helper()
device_id = rank_to_GPU[rank][0]
input = input.to(device_id)
fut = group_id.allreduce([input]).get_future()
res = fut.then(mult).then(add).wait()
expected = _build_tensor(3, 2 * len(group) * 3 + 1)
self.assertEqual(res[0], expected)
@sandcastle_skip_if(
BACKEND != "nccl" and BACKEND != "gloo",
"Only Nccl & Gloo backend support DistributedDataParallel",
)
@skip_if_no_gpu
def test_DistributedDataParallel(self):
group, group_id, rank = self._init_global_test()
rank_to_GPU = self._init_multigpu_helper()
gpus = list(rank_to_GPU[rank])
for use_bucket_view, static_graph in itertools.product(
(False, True), (False, True)
):
self._test_DistributedDataParallel(
gpu_subset=gpus,
rank=rank,
gradient_as_bucket_view=use_bucket_view,
static_graph=static_graph,
)
# test output_device
self._test_DistributedDataParallel(
gpu_subset=gpus,
rank=rank,
output_device=torch.device("cuda"),
gradient_as_bucket_view=use_bucket_view,
static_graph=static_graph,
)
# test device_ids
gpus_list = [torch.device("cuda:" + str(i)) for i in gpus]
self._test_DistributedDataParallel(
gpu_subset=gpus_list,
rank=rank,
output_device=torch.device("cuda"),
gradient_as_bucket_view=use_bucket_view,
static_graph=static_graph,
)
def _test_DistributedDataParallel_with_amp(self, grad_is_view=False):
torch.manual_seed(31415)
# Creates model and optimizer in default precision
model = copy.deepcopy(DDP_NET).cuda()
optimizer = torch.optim.SGD(model.parameters(), lr=0.03)
# Creates a GradScaler once at the beginning of training.
scaler = GradScaler()
ddp_model = nn.parallel.DistributedDataParallel(
model, device_ids=[self.rank], gradient_as_bucket_view=grad_is_view
)
input = torch.randn(dist.get_world_size() * 2, 2).cuda()
target = torch.randn(dist.get_world_size() * 2, 4).cuda()
loss_fn = nn.MSELoss()
# verify grads are none before training
for p in ddp_model.parameters():
self.assertTrue(p is not None)
self.assertTrue(p.grad is None)
for idx in range(20):
optimizer.zero_grad()
# Runs the forward pass with autocasting.
with autocast():
output = ddp_model(input)
loss = loss_fn(output, target)
# Scales loss. Calls backward() on scaled loss to create scaled gradients.
# Backward passes under autocast are not recommended.
# Backward ops run in the same dtype autocast chose for corresponding forward ops.
scaler.scale(loss).backward()
# verify grads are not none and are valid during training
for p in ddp_model.parameters():
if p.requires_grad:
self.assertTrue(p.grad is not None)
self.assertFalse(p.grad.isnan().any())
self.assertFalse(p.grad.isinf().any())
# scaler.step() first unscales the gradients of the optimizer's assigned params.
# If these gradients do not contain infs or NaNs, optimizer.step() is then called,
# otherwise, optimizer.step() is skipped.
scaler.step(optimizer)
# Updates the scale for next iteration.
scaler.update()
# Shuffle the input so that DDP input is different
torch.manual_seed(1337 + idx)
input = input[torch.randperm(dist.get_world_size() * 2)]
return ddp_model
@sandcastle_skip_if(
BACKEND != "nccl" and BACKEND != "gloo",
"Only Nccl & Gloo backend support DistributedDataParallel",
)
@skip_if_no_gpu
def test_DistributedDataParallel_with_amp_and_grad_is_view(self):
torch.cuda.set_device(self.rank)
ddp_model_grad_not_view = self._test_DistributedDataParallel_with_amp(
grad_is_view=False
)
ddp_model_grad_is_view = self._test_DistributedDataParallel_with_amp(
grad_is_view=True
)
for i, j in zip(
ddp_model_grad_not_view.parameters(),
ddp_model_grad_is_view.parameters(),
):
self.assertEqual(i, j)
def _test_DistributedDataParallel_SyncBatchNorm(
self,
gpu_subset,
rank,
local_bs,
global_bs,
offset,
output_device=None,
affine=True,
):
        # Run a simple end-to-end DDP model and use the result of the
        # single-node model as the baseline
# cpu training setup
model = BN_NET if affine else BN_NET_NO_AFFINE
# single gpu training setup
model_gpu = copy.deepcopy(model)
model_gpu.cuda(gpu_subset[0])
# DDP training setup
model_DDP = nn.SyncBatchNorm.convert_sync_batchnorm(copy.deepcopy(model))
model_DDP.cuda(gpu_subset[0])
model_DDP = nn.parallel.DistributedDataParallel(
model_DDP, device_ids=gpu_subset
)
        # test that the model can be serialized and deserialized
with tempfile.NamedTemporaryFile() as tmp:
if sys.platform == "win32":
torch.save(model_DDP, tmp)
tmp.seek(0)
model_DDP = torch.load(tmp)
else:
torch.save(model_DDP, tmp.name)
model_DDP = torch.load(tmp.name)
# data initialization
input_cpu = torch.randn(global_bs, 2)
target = torch.randn(global_bs, 4)
loss = nn.MSELoss()
# check two model parameters over 5 iterations
self._test_DDP_niter(
model_gpu,
model_DDP,
input_cpu.cuda(gpu_subset[0]),
target.cuda(gpu_subset[0]),
loss,
local_bs,
rank,
global_bs,
True,
offset,
dist.get_world_size(),
5 if affine else 2,
)
self._barrier()
@skip_if_lt_x_gpu(2)
@sandcastle_skip_if(
BACKEND != "nccl" and BACKEND != "gloo",
"Only NCCL and GLOO backend support DistributedDataParallel",
)
@sandcastle_skip_if(
IS_WINDOWS, "PostLocalSGDOptimizer not yet supported with Windows."
)
def test_post_localSGD_optimizer_parity(self, grad_is_view=False):
learning_rate = 0.03
period = 4
warmup_steps = 10
torch.cuda.set_device(self.rank)
net = torch.nn.parallel.DistributedDataParallel(
copy.deepcopy(DDP_NET).cuda(),
device_ids=[self.rank],
gradient_as_bucket_view=grad_is_view,
)
opt = torch.optim.SGD(net.parameters(), lr=learning_rate)
averager = averagers.PeriodicModelAverager(
period=period, warmup_steps=warmup_steps
)
post_localSGD_net = torch.nn.parallel.DistributedDataParallel(
copy.deepcopy(DDP_NET).cuda(),
device_ids=[self.rank],
gradient_as_bucket_view=grad_is_view,
)
post_localSGD_opt = post_localSGD_optimizer.PostLocalSGDOptimizer(
params=post_localSGD_net.parameters(),
optimizer_class=torch.optim.SGD,
averager=averagers.PeriodicModelAverager(
period=period, warmup_steps=warmup_steps
),
lr=learning_rate,
)
input = torch.randn(dist.get_world_size() * 2, 2).cuda()
target = torch.randn(dist.get_world_size() * 2, 4).cuda()
loss_fn = nn.MSELoss()
for _ in range(20):
opt.zero_grad()
output = net(input)
loss = loss_fn(output, target)
loss.backward()
opt.step()
averager.average_parameters(net.parameters())
post_localSGD_opt.zero_grad()
post_localSGD_output = post_localSGD_net(input)
post_localSGD_loss = loss_fn(post_localSGD_output, target)
post_localSGD_loss.backward()
post_localSGD_opt.step()
for p1, p2 in zip(net.parameters(), post_localSGD_net.parameters()):
self.assertEqual(p1.data, p2.data)
@sandcastle_skip_if(
BACKEND != "nccl" and BACKEND != "gloo",
"Only Nccl & Gloo backend support DistributedDataParallel",
)
@skip_if_no_gpu
def test_DistributedDataParallel_SyncBatchNorm_Channels_Last(self):
group, group_id, rank = self._init_global_test()
num_processes = dist.get_world_size()
local_bs = 2
bs_offset = int(rank * 2)
global_bs = int(num_processes * 2)
model = ONLY_SBN_NET
model_gpu = copy.deepcopy(model).cuda(rank)
model_DDP = nn.parallel.DistributedDataParallel(
model_gpu, device_ids=[rank]
)
memory_format = torch.channels_last
input_gpu = (
torch.randn(global_bs, 2, 4, 4, dtype=torch.float)
.cuda(rank)
.to(memory_format=memory_format)
)
target_gpu = (
torch.randn(global_bs, 2, 4, 4, dtype=torch.float)
.cuda(rank)
.to(memory_format=memory_format)
)
loss = nn.MSELoss()
# check two model parameters over 5 iterations
self._test_DDP_niter(
model_gpu,
model_DDP,
input_gpu,
target_gpu,
loss,
local_bs,
rank,
global_bs,
True,
bs_offset,
dist.get_world_size(),
memory_format=memory_format,
)
self._barrier()
@sandcastle_skip_if(
BACKEND != "nccl" and BACKEND != "gloo",
"Only Nccl & Gloo backend support DistributedDataParallel",
)
@skip_if_no_gpu
def test_DistributedDataParallel_SyncBatchNorm(self):
group, group_id, rank = self._init_global_test()
rank_to_GPU = self._init_multigpu_helper()
# DDP does not support replicating BN layers within a process, hence
# testing with one module replica per process
gpus = [rank]
num_processes = dist.get_world_size()
local_bs = 2
bs_offset = int(rank * 2)
global_bs = int(num_processes * 2)
self._test_DistributedDataParallel_SyncBatchNorm(
gpu_subset=gpus,
rank=rank,
local_bs=local_bs,
global_bs=global_bs,
offset=bs_offset,
)
# test output_device
self._test_DistributedDataParallel_SyncBatchNorm(
gpu_subset=gpus,
rank=rank,
local_bs=local_bs,
global_bs=global_bs,
offset=bs_offset,
output_device=torch.device("cuda"),
)
# test device_ids
gpus = [torch.device("cuda:" + str(i)) for i in gpus]
self._test_DistributedDataParallel_SyncBatchNorm(
gpu_subset=gpus,
rank=rank,
local_bs=local_bs,
global_bs=global_bs,
offset=bs_offset,
output_device=torch.device("cuda"),
)
@sandcastle_skip_if(
BACKEND != "nccl" and BACKEND != "gloo",
"Only Nccl & Gloo backend support DistributedDataParallel",
)
@skip_if_no_gpu
def test_DistributedDataParallel_SyncBatchNorm_No_Affine(self):
group, group_id, rank = self._init_global_test()
rank_to_GPU = self._init_multigpu_helper()
# DDP does not support replicating BN layers within a process, hence
# testing with one module replica per process
gpus = [rank]
num_processes = dist.get_world_size()
local_bs = 2
bs_offset = int(rank * 2)
global_bs = int(num_processes * 2)
self._test_DistributedDataParallel_SyncBatchNorm(
gpu_subset=gpus,
rank=rank,
local_bs=local_bs,
global_bs=global_bs,
offset=bs_offset,
affine=False,
)
@sandcastle_skip_if(
BACKEND != "nccl" and BACKEND != "gloo",
"Only Nccl & Gloo backend support DistributedDataParallel",
)
@skip_if_no_gpu
def test_DistributedDataParallel_SyncBatchNorm_2D_Input(self):
group, group_id, rank = self._init_global_test()
rank_to_GPU = self._init_multigpu_helper()
# DDP does not support replicating BN layers within a process, hence
# testing with one module replica per process
gpus = [rank]
model = nn.BatchNorm1d(2)
# single gpu training setup
model_gpu = copy.deepcopy(model)
model_gpu.cuda(gpus[0])
# DDP training setup
model_DDP = nn.SyncBatchNorm.convert_sync_batchnorm(copy.deepcopy(model))
model_DDP.cuda(gpus[0])
model_DDP = nn.parallel.DistributedDataParallel(model_DDP, device_ids=gpus)
local_bs = len(gpus) * 2
global_bs = dist.get_world_size() * local_bs
input_cpu = torch.randn(global_bs, 2)
target = torch.randn(global_bs, 2)
loss = nn.MSELoss()
        # Disable cudnn: SyncBatchNorm goes through the native_batch_norm
        # kernel, which avoids the numerical issue created by the divergent
        # code path.
with torch.backends.cudnn.flags(False):
# check two model parameters over 5 iterations
self._test_DDP_niter(
model_gpu,
model_DDP,
input_cpu.cuda(gpus[0]),
target.cuda(gpus[0]),
loss,
local_bs,
rank,
global_bs,
True,
)
self._barrier()
@sandcastle_skip_if(
BACKEND != "nccl" and BACKEND != "gloo",
"Only Nccl & Gloo backend support DistributedDataParallel",
)
@skip_if_no_gpu
@require_world_size(2)
def test_DistributedDataParallel_SyncBatchNorm_Single_Input_Per_Process(self):
group, group_id, rank = self._init_global_test()
rank_to_GPU = self._init_multigpu_helper()
# DDP does not support replicating BN layers within a process, hence
# testing with one module replica per process
gpus = [rank]
model = nn.BatchNorm1d(2)
# single gpu training setup
model_gpu = copy.deepcopy(model)
model_gpu.cuda(gpus[0])
# DDP training setup
model_DDP = nn.SyncBatchNorm.convert_sync_batchnorm(copy.deepcopy(model))
model_DDP.cuda(gpus[0])
model_DDP = nn.parallel.DistributedDataParallel(model_DDP, device_ids=gpus)
local_bs = 1
global_bs = dist.get_world_size()
input_cpu = torch.randn(global_bs, 2)
target = torch.randn(global_bs, 2)
loss = nn.MSELoss()
        # Disable cudnn: SyncBatchNorm goes through the native_batch_norm
        # kernel, which avoids the numerical issue created by the divergent
        # code path.
with torch.backends.cudnn.flags(False):
# check two model parameters over 5 iterations
self._test_DDP_niter(
model_gpu,
model_DDP,
input_cpu.cuda(gpus[0]),
target.cuda(gpus[0]),
loss,
local_bs,
rank,
global_bs,
True,
)
self._barrier()
@sandcastle_skip_if(
BACKEND != "nccl" and BACKEND != "gloo",
"Only Nccl & Gloo backend support DistributedDataParallel",
)
@skip_if_no_gpu
def test_DistributedDataParallel_SyncBatchNorm_Diff_Input_Sizes_Running_Value(
self,
):
group, group_id, rank = self._init_global_test()
rank_to_GPU = self._init_multigpu_helper()
model = nn.parallel.DistributedDataParallel(
ONLY_SBN_NET.cuda(rank), device_ids=[rank]
)
input_var = []
for i in range(dist.get_world_size()):
input_var_rank = torch.cat(
[
torch.ones(2, 1, 10 ** (i + 1)) * (0.1 ** (i - 1)),
torch.ones(2, 1, 10 ** (i + 1)) * (0.3 ** (i - 1)),
],
dim=1,
)
input_var.append(input_var_rank)
all_input_var = torch.cat(
[
x.permute(1, 0, 2).contiguous().view(ONLY_SBN_NET.num_features, -1)
for x in input_var
],
dim=1,
).cuda(rank)
for i in range(100):
y = model(input_var[rank].cuda(rank))
y.mean().backward()
running_mean, running_var = (
model.module.running_mean,
model.module.running_var,
)
torch.testing.assert_allclose(running_mean, all_input_var.mean(1))
torch.testing.assert_allclose(running_var, all_input_var.var(1))
@sandcastle_skip_if(
BACKEND != "nccl" and BACKEND != "gloo",
"Only Nccl & Gloo backend support DistributedDataParallel",
)
@skip_if_no_gpu
def test_DistributedDataParallel_SyncBatchNorm_Diff_Input_Sizes_gradient(self):
group, group_id, rank = self._init_global_test()
# only do single GPU per process
gpus = [rank]
# cpu training setup
model = BN_NET
num_processes = dist.get_world_size()
local_bs = rank + 2
bs_offset = int((rank + 3) * rank / 2)
global_bs = int((num_processes + 3) * num_processes / 2)
self._test_DistributedDataParallel_SyncBatchNorm(
gpu_subset=gpus,
rank=rank,
local_bs=local_bs,
global_bs=global_bs,
offset=bs_offset,
)
def _test_ddp_logging_data(self, is_gpu):
rank = dist.get_rank()
model_DDP = copy.deepcopy(DDP_NET)
if is_gpu:
model_DDP = nn.parallel.DistributedDataParallel(
model_DDP.cuda(rank), device_ids=[rank]
)
else:
model_DDP = nn.parallel.DistributedDataParallel(model_DDP)
# dummy data initialization
local_bs = 2
batch_size, input, target, loss = self._prepare_dummy_data(local_bs)
if is_gpu:
input = input.cuda(rank)
target = target.cuda(rank)
model_DDP._set_ddp_runtime_logging_sample_rate(2)
for idx in range(20):
offset = rank * local_bs
# DDP training, DDP scatters subsets of input to nodes/GPUs
self._test_DDP_helper(
model_DDP,
input[offset : offset + local_bs],
target[offset : offset + local_bs],
loss,
1,
)
self._model_step_with_zero_grad(model_DDP)
# Verify DDP logging data is sampled as expected
# If it has run more than 10 iterations and this is
# the sampled iteration for measuring run time stats,
# the run time stats for this idx-th iteration will not
# be zeros.
ddp_logging_data = model_DDP._get_ddp_logging_data()
if idx > 0 and (idx < 10 or idx % 2 == 0):
self.assertGreaterEqual(
ddp_logging_data.get("forward_compute_time"), 1
)
self.assertGreaterEqual(
ddp_logging_data.get("backward_compute_time"), 1
)
self.assertGreaterEqual(
ddp_logging_data.get("backward_comm_time"), 1
)
self.assertGreaterEqual(
ddp_logging_data.get("backward_compute_time"),
ddp_logging_data.get("backward_compute_comm_overlap_time"),
)
self.assertGreaterEqual(
ddp_logging_data.get("backward_comm_time"),
ddp_logging_data.get("backward_compute_comm_overlap_time"),
)
self.assertEqual(ddp_logging_data.get("iteration"), idx)
elif idx > 0:
# if the idx-th iteration is not sampled to set runtime stats,
# ddp_logging_data.iteration will not be updated to current
# iteration.
self.assertNotEqual(ddp_logging_data.get("iteration"), idx)
# Shuffle the input so that DDP input is different
input = input[torch.randperm(batch_size)]
return model_DDP
@sandcastle_skip_if(BACKEND == "nccl", "nccl does not support DDP on CPU models")
def test_ddp_logging_data_cpu(self):
def parse_env(var):
return os.environ.get(var, "N/A")
os.environ["TORCH_DISTRIBUTED_DEBUG"] = "INFO"
group, group_id, rank = self._init_global_test()
model_DDP = self._test_ddp_logging_data(is_gpu=False)
ddp_logging_data = model_DDP._get_ddp_logging_data()
self.assertEqual(ddp_logging_data.get("world_size"), dist.get_world_size())
self.assertEqual(ddp_logging_data.get("rank"), dist.get_rank())
self.assertEqual(ddp_logging_data.get("module_name"), "Net")
self.assertEqual(ddp_logging_data.get("device_ids"), "")
# output_device defaults to -1 if it is not set; e.g. the
# output_device for CPU training is -1.
self.assertEqual(ddp_logging_data.get("output_device"), -1)
self.assertEqual(ddp_logging_data.get("broadcast_buffers"), 1)
self.assertEqual(ddp_logging_data.get("bucket_cap_bytes"), 25 * 1024 * 1024)
self.assertEqual(ddp_logging_data.get("find_unused_parameters"), 0)
self.assertEqual(ddp_logging_data.get("gradient_as_bucket_view"), 0)
self.assertEqual(
ddp_logging_data.get("backend_name"), dist.get_backend(group_id)
)
self.assertEqual(ddp_logging_data.get("iteration"), 18)
params = list(model_DDP.parameters())
num_params = 0
param_size = 0
params = [p for p in params if p.requires_grad]
for p in params:
num_params += 1
param_size += p.numel() * p.element_size()
self.assertEqual(ddp_logging_data.get("dtypes"), "float")
self.assertEqual(
ddp_logging_data.get("total_parameter_size_bytes"), param_size
)
self.assertEqual(ddp_logging_data.get("num_parameter_tensors"), num_params)
self.assertEqual(ddp_logging_data.get("bucket_sizes"), str(param_size))
self.assertEqual(
ddp_logging_data.get("master_port"), parse_env("MASTER_PORT")
)
self.assertEqual(
ddp_logging_data.get("master_addr"), parse_env("MASTER_ADDR")
)
self.assertEqual(
ddp_logging_data.get("torch_distributed_debug"),
parse_env("TORCH_DISTRIBUTED_DEBUG"),
)
self.assertEqual(
ddp_logging_data.get("cuda_visible_devices"),
parse_env("CUDA_VISIBLE_DEVICES"),
)
if ddp_logging_data.get("backend_name") == "gloo":
self.assertEqual(
ddp_logging_data.get("gloo_socket_ifname"),
parse_env("GLOO_SOCKET_IFNAME"),
)
self.assertEqual(
ddp_logging_data.get("gloo_device_transport"),
parse_env("GLOO_DEVICE_TRANSPORT"),
)
self.assertEqual(ddp_logging_data.get("nccl_socket_ifname"), None)
self.assertEqual(ddp_logging_data.get("nccl_blocking_wait"), None)
self.assertEqual(ddp_logging_data.get("nccl_async_error_handling"), None)
self.assertEqual(ddp_logging_data.get("nccl_debug"), None)
self.assertEqual(ddp_logging_data.get("nccl_nthreads"), None)
self.assertEqual(ddp_logging_data.get("nccl_ib_timeout"), None)
# test runtime logging fields
# Note: DETAIL debug mode logs DDP logging data to stdout and
# thus accesses std::map, which fills in a default value for the
# type if it didn't exist.
self.assertEqual(ddp_logging_data.get("unused_parameter_size", 0), 0)
self.assertEqual(ddp_logging_data.get("has_rebuilt_buckets"), 1)
self.assertEqual(
ddp_logging_data.get("rebuilt_bucket_sizes"), str(param_size)
)
# It is hard to test for exact latency, but we can verify that the
# measured latency is a valid value within the expected range.
self.assertGreaterEqual(ddp_logging_data.get("avg_forward_compute_time"), 1)
self.assertGreaterEqual(
ddp_logging_data.get("avg_backward_compute_time"), 1
)
self.assertGreaterEqual(ddp_logging_data.get("avg_backward_comm_time"), 1)
self.assertGreaterEqual(
ddp_logging_data.get("avg_backward_compute_time"),
ddp_logging_data.get("avg_backward_compute_comm_overlap_time"),
)
self.assertGreaterEqual(
ddp_logging_data.get("avg_backward_comm_time"),
ddp_logging_data.get("avg_backward_compute_comm_overlap_time"),
)
# test larger net with mixed data types, verify multiple bucket sizes
model = LargeNet()
model.float()
model.fc1.double()
model_DDP = nn.parallel.DistributedDataParallel(model, bucket_cap_mb=1.5)
ddp_logging_data = model_DDP._get_ddp_logging_data()
params = list(model_DDP.parameters())
self.assertEqual(
ddp_logging_data.get("bucket_cap_bytes"), int(1.5 * 1024 * 1024)
)
bucket_sizes = [
params[1].numel() * params[1].element_size(),
params[0].numel() * params[0].element_size(),
]
self.assertEqual(
ddp_logging_data.get("bucket_sizes"),
", ".join(str(x) for x in bucket_sizes),
)
self.assertEqual(ddp_logging_data.get("dtypes"), "double, float")
@sandcastle_skip_if(
BACKEND != "nccl" and BACKEND != "gloo",
"Only Nccl & Gloo backend support DistributedDataParallel",
)
@skip_if_no_gpu
def test_ddp_logging_data_gpu(self):
group, group_id, rank = self._init_global_test()
model_DDP = self._test_ddp_logging_data(is_gpu=True)
ddp_logging_data = model_DDP._get_ddp_logging_data()
self.assertEqual(ddp_logging_data.get("device_ids"), str(rank))
self.assertEqual(ddp_logging_data.get("output_device"), rank)
# test runtime logging fields
# It is hard to test for exact latency, but we can verify that the
# measured latency is a valid value within the expected range.
self.assertGreaterEqual(ddp_logging_data.get("avg_forward_compute_time"), 1)
self.assertGreaterEqual(
ddp_logging_data.get("avg_backward_compute_comm_overlap_time"), 1
)
self.assertGreaterEqual(
ddp_logging_data.get("avg_backward_compute_time"),
ddp_logging_data.get("avg_backward_compute_comm_overlap_time"),
)
self.assertGreaterEqual(
ddp_logging_data.get("avg_backward_comm_time"),
ddp_logging_data.get("avg_backward_compute_comm_overlap_time"),
)
@sandcastle_skip_if(BACKEND == "nccl", "nccl does not support DDP on CPU models")
def test_static_graph_api_cpu(self):
model_DDP = nn.parallel.DistributedDataParallel(DDP_NET)
model_DDP._set_static_graph()
self.assertEqual(
model_DDP._get_ddp_logging_data().get("static_graph"), True
)
expected_err = "should be called before training loop starts"
with self.assertRaisesRegex(RuntimeError, expected_err):
local_bs = 2
batch_size, input, target, loss = self._prepare_dummy_data(local_bs)
offset = dist.get_rank() * local_bs
# DDP training, DDP scatters subsets of input to nodes/GPUs
self._test_DDP_helper(
model_DDP,
input[offset : offset + local_bs],
target[offset : offset + local_bs],
loss,
1,
)
model_DDP._set_static_graph()
# Verify error was logged in ddp_logging_data.
verify_ddp_error_logged(model_DDP, expected_err)
@skipIfNoTorchVision
def test_SyncBatchNorm_process_group(self):
# When using `convert_sync_batchnorm` to convert an `nn.Module`,
# it needs to recursively pass the `process_group` into the module when the
# `SyncBatchNorm` is nested in a sub-module or sub-sub-module
# (e.g. resnet50 in torchvision.models).
process_ids = 0
process_group = torch.distributed.new_group([process_ids])
res50_model = torchvision.models.resnet50()
res50_model_sync = nn.SyncBatchNorm.convert_sync_batchnorm(
copy.deepcopy(res50_model), process_group
)
process_group_sync = res50_model_sync.layer1[0].bn1.process_group
self.assertEqual(process_group_sync, process_group)
def _run_reduction_test(
self, tensor, expected_tensor, op, reduction_fn=dist.all_reduce, dst=None
):
if reduction_fn != dist.all_reduce and dst is None:
raise ValueError(f"Reduction fn {reduction_fn} must specify dst!")
if dst is not None:
reduction_fn(tensor, dst, op)
# Only destination rank tensor is expected to have final result.
if dist.get_rank() == dst:
self.assertEqual(tensor, expected_tensor)
else:
reduction_fn(tensor, op)
self.assertEqual(tensor, expected_tensor)
@require_backend({"nccl"})
@require_backends_available({"nccl"})
@skip_if_lt_x_gpu(2)
def test_nccl_backend_bool_allreduce(self):
torch.cuda.set_device(self.rank)
# Run all_reduce with PRODUCT
element = self.rank % 2 == 0
for op in [dist.ReduceOp.PRODUCT, dist.ReduceOp.MIN]:
input_tensor = torch.tensor([element, element]).to(self.rank)
self._run_reduction_test(
input_tensor, torch.tensor([False, False]).to(self.rank), op
)
# Ensure that all ranks contributing True (cast to 1) results in the
# correct reduction.
input_tensor = torch.tensor([True, True]).to(self.rank)
expected_tensor = input_tensor.clone()
self._run_reduction_test(input_tensor, expected_tensor, op)
# Run all_reduce with SUM
for op in [dist.ReduceOp.SUM, dist.ReduceOp.MAX]:
input_tensor = torch.tensor([element, element]).to(self.rank)
self._run_reduction_test(
input_tensor, torch.tensor([True, True]).to(self.rank), op
)
# TODO: NCCL backend does not work correctly for bitwise reduction ops
# (see https://github.com/pytorch/pytorch/issues/41362). Add tests for
# these once it is supported.
@require_backend({"nccl"})
@require_backends_available({"nccl"})
@skip_if_lt_x_gpu(2)
def test_nccl_backend_bool_allgather(self):
torch.cuda.set_device(self.rank)
inp = {0: [True, True], 1: [False, True]}
input_tensor = torch.tensor(inp[self.rank % 2]).to(self.rank)
# Preserve a copy of the tensor to compare against after allgather.
input_tensor_copy = input_tensor.clone()
tensor_list = [
torch.tensor([False, False]).to(self.rank)
for _ in range(dist.get_world_size())
]
dist.all_gather(tensor_list, input_tensor)
self.assertEqual(len(tensor_list), dist.get_world_size())
for i, t in enumerate(tensor_list):
expected = torch.tensor(inp[i % 2]).to(self.rank)
self.assertEqual(t, expected)
# Ensure that the input tensor is not modified, since this collective
# does not modify its input.
self.assertEqual(input_tensor_copy, input_tensor)
@require_backend({"nccl"})
@require_backends_available({"nccl"})
@skip_if_lt_x_gpu(int(os.environ["WORLD_SIZE"]))
def test_nccl_backend_bool_reduce(self):
torch.cuda.set_device(self.rank)
inp = {0: [True, True], 1: [False, False]}
# Run reduce() with product op
for op in [dist.ReduceOp.PRODUCT, dist.ReduceOp.MIN]:
input_tensor = torch.tensor(inp[self.rank % 2]).to(self.rank)
expected = torch.tensor([False, False]).to(self.rank)
self._run_reduction_test(input_tensor, expected, op, dist.reduce, dst=0)
# Ensure that all ranks contributing True (cast to 1) results in the
# correct reduction.
input_tensor = torch.tensor([True, True]).to(self.rank)
expected_tensor = input_tensor.clone()
self._run_reduction_test(
input_tensor, expected_tensor, op, dist.reduce, dst=0
)
for op in [dist.ReduceOp.SUM, dist.ReduceOp.MAX]:
input_tensor = torch.tensor(inp[self.rank % 2]).to(self.rank)
expected = (
torch.tensor([True, True]).to(self.rank)
if self.rank == 0
else input_tensor.clone()
)
self._run_reduction_test(input_tensor, expected, op, dist.reduce, dst=0)
@require_backend({"nccl"})
@require_backends_available({"nccl"})
@skip_if_lt_x_gpu(2)
def test_nccl_backend_bool_broadcast(self):
tensor_size = 10
bcast_tensor = torch.tensor(
[
(random.random() < 0.5 if self.rank == 0 else False)
for _ in range(tensor_size)
]
).to(self.rank)
dist.broadcast(bcast_tensor, src=0)
# Now allgather and ensure the tensors are equal.
tensor_list = [
torch.tensor([False for _ in range(tensor_size)]).to(self.rank)
for _ in range(dist.get_world_size())
]
dist.all_gather(tensor_list, bcast_tensor)
expected = tensor_list[0]
for tensor in tensor_list[1:]:
self.assertEqual(tensor, expected)
@sandcastle_skip_if(
BACKEND != "nccl" and BACKEND != "gloo",
"Only NCCL and GLOO backend support DistributedDataParallel",
)
@skip_if_lt_x_gpu(int(os.environ["WORLD_SIZE"]))
def test_DistributedSampler_padding(self):
# Tests padding of distributed sampler.
world_size = dist.get_world_size()
# Simulates a typical dataset size
dataset_size = 100 + world_size + 1
dataset = [torch.ones(1).to(self.rank) * i for i in range(dataset_size)]
# Simulates the 'tiny' dataset size
dataset_tiny_size = max(world_size // 2 - 1, 1)
dataset_tiny = [
torch.ones(1).to(self.rank) * i for i in range(dataset_tiny_size)
]
# Specifying drop_last=True will cause the tail of the data to be dropped.
dist_sampler = DistributedSampler(dataset=dataset, drop_last=True)
local_num_samples, local_dataset_size = (
dist_sampler.num_samples,
dist_sampler.total_size,
)
# The effective dataset size should be the greatest multiple of world_size
# that is <= dataset_size. This is to ensure each rank processes the same
# number of samples.
effective_dataset_size = (
math.ceil((dataset_size - world_size) / world_size)
if dataset_size % world_size != 0
else dataset_size / world_size
)
self.assertEqual(local_num_samples, effective_dataset_size)
self.assertEqual(local_dataset_size, local_num_samples * world_size)
indices_list = list(iter(dist_sampler))
self.assertEqual(len(indices_list), local_num_samples)
def validate_global_samples(local_num_samples):
# Ensure that each rank processes the same number of samples.
world_samples = [
torch.LongTensor([0]).to(self.rank) for _ in range(world_size)
]
dist.all_gather(
world_samples, torch.tensor([local_num_samples]).to(self.rank)
)
world_samples = [sample.item() for sample in world_samples]
self.assertEqual(len(set(world_samples)), 1)
validate_global_samples(local_num_samples)
# drop_last=False is the default and will add additional indices to be sampled,
# increasing the effective dataset size.
dist_sampler_added_samples = DistributedSampler(dataset=dataset)
local_num_samples, local_dataset_size = (
dist_sampler_added_samples.num_samples,
dist_sampler_added_samples.total_size,
)
# The effective dataset size is the smallest integer that is >= dataset_size
# and divisible by the world size.
self.assertEqual(local_num_samples, math.ceil(dataset_size / world_size))
self.assertEqual(local_dataset_size, local_num_samples * world_size)
indices_list = list(iter(dist_sampler_added_samples))
self.assertEqual(len(indices_list), local_num_samples)
# Ensure that each rank processes the same number of samples.
validate_global_samples(local_num_samples)
# Ensure that additional samples are padded even when
# an extremely small dataset is given.
dist_sampler_added_samples_tiny = DistributedSampler(dataset=dataset_tiny)
local_num_samples, local_dataset_size = (
dist_sampler_added_samples_tiny.num_samples,
dist_sampler_added_samples_tiny.total_size,
)
self.assertEqual(
local_num_samples, math.ceil(dataset_tiny_size / world_size)
)
self.assertEqual(local_dataset_size, local_num_samples * world_size)
indices_list = list(iter(dist_sampler_added_samples_tiny))
self.assertEqual(len(indices_list), local_num_samples)
validate_global_samples(local_num_samples)
@require_backend({"nccl", "gloo"})
@require_n_gpus_for_nccl_backend(
int(os.environ["WORLD_SIZE"]), os.environ["BACKEND"]
)
def test_allgather_object(self):
# Only set device for NCCL backend since it must use GPUs.
backend = os.environ["BACKEND"]
if backend == "nccl":
# Case where rank != GPU device.
next_rank = (self.rank + 1) % int(self.world_size)
torch.cuda.set_device(next_rank)
# If GPU test, add object with GPU tensor
if backend == "nccl":
COLLECTIVES_OBJECT_TEST_LIST.append(Foo(torch.randn(3, 3, device=0)))
gather_objects = COLLECTIVES_OBJECT_TEST_LIST
output_gathered = [None for _ in range(dist.get_world_size())]
dist.all_gather_object(
output_gathered, gather_objects[self.rank % len(gather_objects)]
)
for i, val in enumerate(output_gathered):
expected = gather_objects[i % len(gather_objects)]
self.assertEqual(val, expected)
output_gathered = [None for _ in range(dist.get_world_size())]
dist.all_gather_object(
output_gathered, gather_objects[self.rank % len(gather_objects)]
)
@require_backend({"gloo"})
@sandcastle_skip_if(BACKEND == "nccl", "NCCL does not support gather")
def test_gather_object(self):
# Ensure stateful objects can be gathered
gather_objects = COLLECTIVES_OBJECT_TEST_LIST
output_gathered = [None for _ in range(dist.get_world_size())]
gather_on_rank = 0
my_rank = dist.get_rank()
dist.gather_object(
gather_objects[self.rank % len(gather_objects)],
object_gather_list=output_gathered
if my_rank == gather_on_rank
else None,
dst=gather_on_rank,
)
if my_rank != gather_on_rank:
self.assertEqual(
output_gathered, [None for _ in range(dist.get_world_size())]
)
else:
for i, val in enumerate(output_gathered):
expected = gather_objects[i % len(gather_objects)]
self.assertEqual(val, expected)
# Validate errors when objects can't be pickled.
class Bar:
pass
b = Bar()
gather_objects = [b for _ in range(dist.get_world_size())]
with self.assertRaisesRegex(AttributeError, "Can't pickle local object"):
dist.all_gather_object(
[None for _ in range(dist.get_world_size())],
gather_objects[self.rank],
)
@require_backend({"nccl"})
@require_backends_available({"nccl"})
@skip_if_lt_x_gpu(2)
def test_nccl_gather_object_err(self):
output_gathered = [None for _ in range(dist.get_world_size())]
gather_on_rank = 0
# Case where rank != GPU device.
my_rank = dist.get_rank()
next_rank = (my_rank + 1) % dist.get_world_size()
torch.cuda.set_device(next_rank)
with self.assertRaisesRegex(
RuntimeError, "ProcessGroupNCCL does not support gather"
):
dist.gather_object(
"foo",
object_gather_list=output_gathered
if my_rank == gather_on_rank
else None,
dst=gather_on_rank,
)
def validate_net_equivalence(self, net):
# Helper to validate synchronization of nets across ranks.
net_module_states = list(net.module.state_dict().values())
# Check that all tensors in module's state_dict() are equal.
for t in net_module_states:
tensor_list = [
torch.zeros_like(t) for _ in range(dist.get_world_size())
]
dist.all_gather(tensor_list, t)
for tensor in tensor_list:
self.assertEqual(tensor, t)
@skip_if_lt_x_gpu(2)
@sandcastle_skip_if(
BACKEND != "nccl" and BACKEND != "gloo",
"Only NCCL and GLOO backend support DistributedDataParallel",
)
def test_ddp_sync_params_and_buffers(self):
# Test that after calling _sync_params_and_buffers, models across ranks
# are the same and are equal to the model on the input rank.
dim = 2
rank = self.rank
rank_to_broadcast = 1
# Seed to ensure that ranks are initialized with different initial models.
torch.manual_seed(rank)
model = nn.Linear(dim, dim, bias=False)
net = torch.nn.parallel.DistributedDataParallel(
model.cuda(rank), device_ids=[self.rank], bucket_cap_mb=1
)
new_model = nn.Linear(dim, dim, bias=False).cuda(rank)
net.module = copy.deepcopy(new_model)
# Assert params are different
net_module_states = list(net.module.state_dict().values())
for t in net_module_states:
tensor_list = [
torch.zeros_like(t) for _ in range(dist.get_world_size())
]
dist.all_gather(tensor_list, t)
for i, tensor in enumerate(tensor_list):
if i == rank:
self.assertEqual(t, tensor)
else:
# tensor from another rank should be different.
self.assertNotEqual(t, tensor)
net._sync_params_and_buffers(authoritative_rank=rank_to_broadcast)
# Now all model params should be the same.
self.validate_net_equivalence(net)
# Since the network params were broadcast from rank_to_broadcast, validate that
# they are the same as new_model on rank_to_broadcast.
if rank == rank_to_broadcast:
expected_states = new_model.state_dict().values()
for t, expected in zip(net_module_states, expected_states):
self.assertEqual(t, expected)
@skip_if_lt_x_gpu(2)
@sandcastle_skip_if(
BACKEND != "nccl" and BACKEND != "gloo",
"Only NCCL and GLOO backend support DistributedDataParallel",
)
def test_ddp_grad_div_uneven_inputs(self):
# Test gradient division during training with join() API. If
# divide_by_initial_world_size=False, we scale by the effective world
# size when allreducing grads.
dim = 5
batch = 1
grad_scale = 50
rank = self.rank
model = nn.Linear(dim, dim, bias=False)
inp = torch.ones(batch, dim, device=self.rank) * grad_scale
net = torch.nn.parallel.DistributedDataParallel(
model.cuda(rank), device_ids=[self.rank], bucket_cap_mb=1
)
n_iters = 3
if self.rank > 0:
n_iters += 2
with net.join(divide_by_initial_world_size=False):
for _ in range(n_iters):
loss = net(inp).sum()
loss.backward()
# The grad is always expected_grad, since we divide by the number
# of currently active processes and inactive processes contribute
# zero gradient. If we kept dividing by static initial world
# size as processes leave, the grad would be smaller.
expected_grad = torch.ones(dim, dim, device=self.rank) * grad_scale
param = list(net.parameters())[0]
self.assertEqual(expected_grad, param.grad)
# Avoid accumulating grads so that it's the same every iteration
net.zero_grad()
torch.cuda.synchronize(device=self.rank)
# If divide_by_initial_world_size=True (default), we always scale grads
# by the initial world_size.
with net.join(divide_by_initial_world_size=True):
for i in range(n_iters):
loss = net(inp).sum()
loss.backward()
effective_ws = dist.get_world_size()
if i >= 3:
effective_ws -= 1
expected_grad = (
torch.ones(dim, dim, device=self.rank)
* grad_scale
* effective_ws
) / dist.get_world_size()
param = list(net.parameters())[0]
self.assertEqual(expected_grad, param.grad)
# Avoid accumulating grad so that it's the same every iteration.
net.zero_grad()
torch.cuda.synchronize(device=self.rank)
def _test_ddp_profiling(self, profiler_ctx):
batch = 3
dim = 10
num_iters = 6
torch.cuda.set_device(self.rank)
model = nn.Linear(dim, dim, bias=False)
inp = torch.rand(batch, dim, device=self.rank)
net = torch.nn.parallel.DistributedDataParallel(
model.cuda(self.rank),
device_ids=[self.rank],
)
profiler_ctx_copy = copy.deepcopy(profiler_ctx)
with profiler_ctx as prof:
for i in range(num_iters):
loss = net(inp).sum()
loss.backward()
all_reduce_event_name = f"{dist.get_backend()}:all_reduce"
events = get_profiling_event(all_reduce_event_name, prof)
event_count = sum(e.count for e in events)
self.assertEqual(event_count, num_iters)
for event in events:
self.assertTrue(event.is_async)
self.assertEqual(event.name, all_reduce_event_name)
broadcast_event_name = f"{dist.get_backend()}:broadcast"
broadcast_events = get_profiling_event(broadcast_event_name, prof)
event_count = sum(e.count for e in broadcast_events)
# Broadcast is called during rebuild_buckets
self.assertGreaterEqual(event_count, 1)
for event in broadcast_events:
self.assertEqual(event.name, broadcast_event_name)
# Run DDP with profiling for a few iterations, then enable profiling
# for a single pass, and ensure it is recorded. This tests that the
# thread local state is correctly updated.
net = torch.nn.parallel.DistributedDataParallel(
model.cuda(self.rank),
device_ids=[self.rank],
find_unused_parameters=True,
)
for i in range(3):
loss = net(inp).sum()
loss.backward()
# Now enable the profiler.
with profiler_ctx_copy as prof:
loss = net(inp).sum()
loss.backward()
events = get_profiling_event(all_reduce_event_name, prof)
self.assertGreaterEqual(len(events), 1)
self.assertGreaterEqual(events[0].count, 1)
self.assertEqual(events[0].name, all_reduce_event_name)
for event in events:
self.assertTrue(event.is_async)
# Ensure searching unused parameters was profiled
events = get_profiling_event("search_unused_parameters", prof)
self.assertEqual(len(events), 1)
@require_backend({"gloo", "nccl"})
@require_backends_available({"gloo", "nccl"})
@skip_if_lt_x_gpu(2)
def test_ddp_profiling_autograd_profiler(self):
autograd_profiler_ctx = torch.autograd.profiler.profile()
return self._test_ddp_profiling(profiler_ctx=autograd_profiler_ctx)
@require_backend({"gloo", "nccl"})
@require_backends_available({"gloo", "nccl"})
@skip_if_lt_x_gpu(2)
@sandcastle_skip_if(IS_FBCODE, "Kineto in fbcode code causes hang")
@sandcastle_skip_if(
IS_MACOS or IS_WINDOWS,
"torch.profiler not enabled for mac/windows: https://github.com/pytorch/pytorch/pull/56124",
)
def test_ddp_profiling_torch_profiler(self):
cpu_act = torch.profiler.ProfilerActivity.CPU
cuda_act = torch.profiler.ProfilerActivity.CUDA
torch_profiler_ctx = torch.profiler.profile(activities=[cpu_act, cuda_act])
self._test_ddp_profiling(profiler_ctx=torch_profiler_ctx)
@skip_if_lt_x_gpu(2)
@sandcastle_skip_if(
BACKEND != "nccl" and BACKEND != "gloo",
"Only NCCL and GLOO backend support DistributedDataParallel",
)
def test_ddp_join_model_equivalence(self):
# Verifies equivalence between training the model locally and training
# it with DDP under the join context manager.
batch = 3
dim = 10
learning_rate = 0.03
model = nn.Linear(dim, dim, bias=False)
inp = torch.rand(batch, dim, device=self.rank)
local_model = copy.deepcopy(model)
local_model = local_model.cuda(self.rank)
rank_to_iter_mapping = {
rank: 2 * (rank + 1) for rank in range(dist.get_world_size())
}
# run local model
local_iters = sum(rank_to_iter_mapping.values())
local_optim = torch.optim.SGD(local_model.parameters(), lr=learning_rate)
for _ in range(local_iters):
local_optim.zero_grad()
out = local_model(inp)
loss = out.sum()
loss.backward()
local_optim.step()
# run DDP model with join API
num_iters = rank_to_iter_mapping[self.rank]
net = torch.nn.parallel.DistributedDataParallel(
model.cuda(self.rank), device_ids=[self.rank]
)
ddp_optim = torch.optim.SGD(
model.parameters(), lr=learning_rate * dist.get_world_size()
)
with net.join():
for i in range(num_iters):
ddp_optim.zero_grad()
out = net(inp)
loss = out.sum()
loss.backward()
torch.cuda.synchronize(device=self.rank)
ddp_optim.step()
# Validate model state dicts are equal
for (_, local_tensor), (_, dist_tensor) in zip(
local_model.state_dict().items(), net.module.state_dict().items()
):
self.assertEqual(local_tensor, dist_tensor)
def _run_uneven_inputs_test(
self,
test_case,
iteration_mapping,
find_unused_params,
):
model = test_case.model
inp = test_case.inp
rank = self.rank
sync_interval = test_case.sync_interval
torch.cuda.set_device(rank)
# Ensure all outstanding GPU work is complete so this test runs independently.
dist.barrier()
# Bucket_cap_mb is intentionally low to test allreduce scheduling when
# there are many buckets.
net = torch.nn.parallel.DistributedDataParallel(
model.cuda(rank),
device_ids=[rank],
bucket_cap_mb=1,
find_unused_parameters=find_unused_params,
)
# Register hook if specified
if test_case.hook is not None:
net.register_comm_hook(test_case.state, test_case.hook)
print(f"registered hook {test_case.hook}")
# Determine num iters for this rank via the passed in mapping.
num_iters = iteration_mapping[rank]
# If we throw when the earliest rank terminates, we should ensure
# that we iterate for that minimum number of times.
num_iters_tensor = torch.tensor(
[num_iters], device=torch.cuda.current_device()
)
dist.all_reduce(num_iters_tensor, op=dist.ReduceOp.MIN)
min_num_iters = num_iters_tensor.item()
total_iters = 0
if test_case.throw_on_early_termination:
if min_num_iters == num_iters:
# Early termination rank(s)
exception_ctx = self.assertRaisesRegex(
RuntimeError, f"Rank {self.rank} exhausted all inputs"
)
else:
# Non early termination rank
exception_ctx = self.assertRaisesRegex(
RuntimeError,
"Detected at least one rank that exhausted inputs.",
)
else:
exception_ctx = suppress()
with exception_ctx:
with net.join(
throw_on_early_termination=test_case.throw_on_early_termination
):
for i in range(num_iters):
# Use net.no_sync() to disable grad synchronization on all but
# every sync_interval-th iteration.
if i % sync_interval != 0:
context = net.no_sync()
else:
context = suppress()
with context:
if isinstance(inp, tuple):
loss = net(*inp).sum()
else:
loss = net(inp).sum()
loss.backward()
self._model_step(net)
# Ensure completion of GPU kernels (including allreduce). If the
# join API is not properly implemented, then this should hang
# since the allreduce will hang.
torch.cuda.synchronize(device=rank)
total_iters += 1
if test_case.throw_on_early_termination:
# Ensure we iterated min_num_iters times.
self.assertEqual(total_iters, min_num_iters)
else:
# Ensure we iterated at least min_num_iters times.
self.assertGreaterEqual(total_iters, min_num_iters)
# Ensure completion of all GPU kernels.
torch.cuda.synchronize(device=rank)
# When throwing on early rank termination, we do not
# broadcast model state from an authoritative rank. All models
# should already be in sync.
if not test_case.throw_on_early_termination:
self.assertTrue(net._authoritative_rank)
# All ranks should have agreed on the same authoritative_rank!
final_rank_tensor = torch.tensor(
[net._authoritative_rank], device=self.rank
)
tensor_list = [
torch.zeros_like(final_rank_tensor)
for _ in range(dist.get_world_size())
]
dist.all_gather(tensor_list, final_rank_tensor)
max_rank = dist.get_world_size() - 1
self.assertSetEqual(
{max_rank}, set(tensor.item() for tensor in tensor_list)
)
# Ensure that all models are the same across ranks after all have joined.
self.validate_net_equivalence(net)
# Ensure that running with DDP uneven inputs was logged.
ddp_logging_data = net._get_ddp_logging_data()
self.assertTrue(ddp_logging_data.get("join_uneven_inputs"))
dist.barrier()
@skip_if_lt_x_gpu(2)
@sandcastle_skip_if(
BACKEND != "nccl" and BACKEND != "gloo",
"Only NCCL and GLOO backend support DistributedDataParallel",
)
def test_ddp_uneven_inputs_stop_iteration_sync_bn(self):
# Tests that uneven inputs join handler correctly throws StopIteration
# for models with SyncBN or general collective comm when
# throw_on_early_termination=True.
class ModelWithComm(torch.nn.Module):
def __init__(self):
super().__init__()
self.lin = nn.Linear(2, 40, bias=False)
def forward(self, x):
x = self.lin(x)
dist.all_reduce(x)
return x
torch.cuda.set_device(self.rank)
model_bn = BN_NET
model_bn = nn.SyncBatchNorm.convert_sync_batchnorm(
copy.deepcopy(model_bn)
).cuda(self.rank)
comm_model = ModelWithComm().cuda(self.rank)
model_input = torch.randn(10, 2).cuda(torch.cuda.current_device())
for model in [model_bn, comm_model]:
model = torch.nn.parallel.DistributedDataParallel(
model,
device_ids=[self.rank],
)
min_num_iters = 5
if self.rank != 0:
# Early termination rank(s)
num_iters = min_num_iters
exception_ctx = self.assertRaisesRegex(
RuntimeError, f"Rank {self.rank} exhausted all inputs"
)
else:
# Non early termination rank
num_iters = min_num_iters * 2
exception_ctx = self.assertRaisesRegex(
RuntimeError,
"Detected at least one rank that exhausted inputs.",
)
n = 0
with exception_ctx:
with model.join(throw_on_early_termination=True):
for i in range(num_iters):
loss = model(model_input).sum()
loss.backward()
self._model_step(model)
n += 1
self.assertEqual(n, min_num_iters)
# Verify model equivalence
self.validate_net_equivalence(model)
@skip_if_lt_x_gpu(2)
@sandcastle_skip_if(
BACKEND != "nccl" and BACKEND != "gloo",
"Only NCCL and GLOO backend support DistributedDataParallel",
)
def test_ddp_uneven_inputs(self):
dim = 1000
batch = 1
# Create a variety of models to run uneven input tests on.
large_model = nn.Sequential(
nn.Conv2d(1, 20, 5),
nn.ReLU(),
nn.Conv2d(20, 32, 5),
nn.ReLU(),
nn.Conv2d(32, 256, 5),
nn.ReLU(),
)
small_model = nn.Linear(dim, dim, bias=False)
bn_net = BatchNormNet()
class UnusedParamModule(nn.Module):
def __init__(self, unused_params_rank):
super().__init__()
self.t0 = Task()
self.t1 = Task()
self.unused_params_rank = unused_params_rank
def task_parameters(self):
return (self.t0.p, self.t1.p)
def forward(self, x, rank):
return (
self.t1(self.t0(x))
if rank != self.unused_params_rank
else self.t1(x)
)
unjoined_rank_with_unused_params_model = UnusedParamModule(1)
joined_rank_with_unused_params_model = UnusedParamModule(0)
rank = self.rank
models_to_test = [
# Network with batchnorm
DDPUnevenTestInput(
name="batch_norm_net",
model=bn_net,
inp=torch.ones(batch, 2, device=rank),
sync_interval=1,
),
DDPUnevenTestInput(
name="large_conv_model",
model=large_model,
inp=torch.ones(batch, batch, dim, dim, device=rank),
sync_interval=1,
),
DDPUnevenTestInput(
name="small_model",
model=small_model,
inp=torch.ones(batch, dim, device=rank),
sync_interval=1,
),
# Unused parameter test where rank that does not join early has unused params
DDPUnevenTestInput(
name="unjoined_rank_with_unused_params_model",
model=unjoined_rank_with_unused_params_model,
inp=(torch.ones(batch, 2, device=rank), rank),
sync_interval=1,
),
# Unused parameter test where rank that does join early has unused params
DDPUnevenTestInput(
name="joined_rank_with_unused_params_model",
model=joined_rank_with_unused_params_model,
inp=(torch.ones(batch, 2, device=rank), rank),
sync_interval=1,
),
]
# Test models that have hook installed.
models_with_hook = [
DDPUnevenTestInput(
name="small_model_allreduce_hook",
model=small_model,
hook=default.allreduce_hook,
state=None,
inp=torch.ones(batch, dim, device=rank),
sync_interval=1,
),
DDPUnevenTestInput(
name="small_model_power_sgd_hook",
model=small_model,
hook=powerSGD.powerSGD_hook,
state=powerSGD.PowerSGDState(
process_group=None,
matrix_approximation_rank=1,
# Config so that powerSGD runs immediately instead of
# allreduce.
start_powerSGD_iter=1,
warm_start=False,
use_error_feedback=False,
),
inp=torch.ones(batch, dim, device=rank),
sync_interval=1,
),
]
models_to_test.extend(models_with_hook)
# Add resnet model if we have torchvision installed.
if HAS_TORCHVISION:
resnet_model = torchvision.models.resnet50()
models_to_test.append(
DDPUnevenTestInput(
name="resnet_model",
model=resnet_model,
inp=torch.ones(1, 3, 1000, 1000),
sync_interval=1,
)
)
# Test with no_sync every 2, 3, 4, ... iterations.
models_with_sync = []
for i, test_input in enumerate(models_to_test):
models_with_sync.append(
DDPUnevenTestInput(
name=test_input.name,
model=test_input.model,
inp=test_input.inp,
sync_interval=i + 2,
)
)
throw_on_early_term_tests = []
for test_input in models_to_test:
throw_on_early_term_tests.append(
DDPUnevenTestInput(
name=test_input.name,
model=test_input.model,
inp=test_input.inp,
sync_interval=test_input.sync_interval,
throw_on_early_termination=True,
)
)
models_to_test.extend(models_with_sync)
models_to_test.extend(throw_on_early_term_tests)
# 0 iteration tests for when one process does not train model at all, so
# we must shadow the broadcast calls made when rebuilding buckets.
baseline_num_iters = [0, 5]
iteration_offsets = [2, 3, 10]
num_uneven_ranks = [1]
if dist.get_world_size() > 2:
num_uneven_ranks.append(2)
iteration_mappings = []
# Generate rank : num_iters mappings for various uneven input scenarios.
# This includes cases where rank 0 joins early and all other ranks join
# later, and scenarios where multiple ranks join early, but at different
# iterations, and later ranks join later.
for num_early_join_ranks in num_uneven_ranks:
for baseline_iter in baseline_num_iters:
for offset in iteration_offsets:
mapping = {
rank: baseline_iter
for rank in range(0, num_early_join_ranks)
}
# If num_early_join_ranks > 1, ranks > 0 that will join early
# iterate offset // 2 more times than rank 0, to test nodes
# depleting inputs at different times.
if num_early_join_ranks > 1:
for rank in mapping.keys():
if rank > 0:
mapping[rank] += offset // 2
mapping.update(
{
rank: baseline_iter + offset
for rank in range(
num_early_join_ranks, dist.get_world_size()
)
}
)
iteration_mappings.append(mapping)
for (test_case, iteration_mapping) in itertools.product(
models_to_test, iteration_mappings
):
if self.rank == 0:
print(
f"""Running test: {test_case.name} sync interval
{test_case.sync_interval} with iteration mapping
{iteration_mapping}"""
)
self._run_uneven_inputs_test(
test_case,
iteration_mapping,
find_unused_params=("unused_params_model" in test_case.name),
)
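The nested loops above that build `iteration_mappings` can be condensed with `itertools.product`; this standalone sketch (the helper name is hypothetical) reproduces the same rank -> num_iters mappings.

```python
import itertools

def build_iteration_mappings(world_size, num_uneven_ranks,
                             baseline_num_iters, iteration_offsets):
    """Build rank -> num_iters mappings: the first `num_early` ranks
    exhaust inputs early (staggered by offset // 2 when more than one
    rank joins early), and the remaining ranks iterate `offset` longer."""
    mappings = []
    for num_early, baseline, offset in itertools.product(
        num_uneven_ranks, baseline_num_iters, iteration_offsets
    ):
        mapping = {rank: baseline for rank in range(num_early)}
        if num_early > 1:
            # Stagger early-joining ranks other than rank 0.
            for rank in mapping:
                if rank > 0:
                    mapping[rank] += offset // 2
        # All remaining ranks join later, after `offset` extra iterations.
        mapping.update(
            {rank: baseline + offset for rank in range(num_early, world_size)}
        )
        mappings.append(mapping)
    return mappings
```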
@skip_if_lt_x_gpu(2)
@sandcastle_skip_if(
BACKEND != "nccl" and BACKEND != "gloo",
"Only NCCL and GLOO backend support DistributedDataParallel",
)
def test_ddp_uneven_input_join_disable(self):
# Tests that when net.join() is entered with enable=False, DDP works as
# expected with even inputs.
torch.manual_seed(self.rank)
net = torch.nn.parallel.DistributedDataParallel(
torch.nn.Linear(1, 1).cuda(self.rank), device_ids=[self.rank]
)
inp = torch.ones(1) * self.rank
n_iters = 5
world_size = dist.get_world_size()
with net.join(enable=False):
for _ in range(n_iters):
# Clear grads
grad = net.module.weight.grad
if grad is not None:
grad.requires_grad_(False)
grad.zero_()
out = net(inp)
loss = out.sum()
loss.backward()
# Validate gradients to ensure that we divide by the correct
# world_size when join mode is disabled.
expected_grad = sum(i for i in range(world_size)) / world_size
self.assertEqual(net.module.weight.grad.item(), expected_grad)
join_config = net._join_config
self.assertFalse(join_config.enable)
self.validate_net_equivalence(net)
@skip_if_lt_x_gpu(2)
@sandcastle_skip_if(
BACKEND != "nccl" and BACKEND != "gloo",
"Only NCCL and GLOO backend support DistributedDataParallel",
)
def test_ddp_uneven_input_exception(self):
# Tests that exceptions during training are correctly propagated by the
# context manager.
error_str = "Intentional error"
class ExceptionModule(nn.Module):
def __init__(self):
super().__init__()
self.param = nn.Parameter(torch.ones(1, requires_grad=True))
def forward(self, _):
raise ValueError(error_str)
exception_module = ExceptionModule()
net = torch.nn.parallel.DistributedDataParallel(
exception_module.cuda(self.rank), device_ids=[self.rank]
)
inp = torch.ones(1)
with self.assertRaisesRegex(ValueError, error_str):
with net.join():
out = net(inp)
loss = out.sum()
loss.backward()
@require_backend({"nccl", "gloo"})
@require_n_gpus_for_nccl_backend(
int(os.environ["WORLD_SIZE"]), os.environ["BACKEND"]
)
def test_broadcast_object_list(self):
# Only set device for NCCL backend since it must use GPUs.
# Case where rank != GPU device.
next_rank = (self.rank + 1) % int(self.world_size)
backend = os.environ["BACKEND"]
if backend == "nccl":
torch.cuda.set_device(next_rank)
src_rank = 0
# If GPU test, add object with GPU tensor
if backend == "nccl":
COLLECTIVES_OBJECT_TEST_LIST.append(Foo(torch.randn(3, 3, device=0)))
objects = (
COLLECTIVES_OBJECT_TEST_LIST
if self.rank == src_rank
else [None for _ in COLLECTIVES_OBJECT_TEST_LIST]
)
# Single object test with device specified. Backend="gloo", device=cpu
if backend != "nccl":
single_obj_list = [objects[0]]
if self.rank != src_rank:
self.assertNotEqual(
single_obj_list[0], COLLECTIVES_OBJECT_TEST_LIST[0]
)
dist.broadcast_object_list(
single_obj_list, src=0, group=None, device=torch.device("cpu")
)
self.assertEqual(single_obj_list[0], COLLECTIVES_OBJECT_TEST_LIST[0])
# Single object test with device specified. Backend="gloo", device=current_device+1
# The test is gated on GPU count equaling world size, to avoid the case
# where the backend is gloo but there are not multiple GPU devices.
if backend != "nccl" and torch.cuda.device_count() == int(self.world_size):
single_obj_list = [objects[0]]
if self.rank != src_rank:
self.assertNotEqual(
single_obj_list[0], COLLECTIVES_OBJECT_TEST_LIST[0]
)
dist.broadcast_object_list(
single_obj_list, src=0, group=None, device=torch.device(next_rank)
)
self.assertEqual(single_obj_list[0], COLLECTIVES_OBJECT_TEST_LIST[0])
# Single object test with device specified. Backend="nccl", device=current_device+1
if backend == "nccl" and torch.cuda.device_count() == int(self.world_size):
single_obj_list = [objects[0]]
if self.rank != src_rank:
self.assertNotEqual(
single_obj_list[0], COLLECTIVES_OBJECT_TEST_LIST[0]
)
dist.broadcast_object_list(
single_obj_list, src=0, group=None, device=torch.device(next_rank)
)
self.assertEqual(single_obj_list[0], COLLECTIVES_OBJECT_TEST_LIST[0])
# Single object test: backward compatibility with device unspecified
single_obj_list = [objects[0]]
if self.rank != src_rank:
self.assertNotEqual(single_obj_list[0], COLLECTIVES_OBJECT_TEST_LIST[0])
dist.broadcast_object_list(single_obj_list, src=0)
self.assertEqual(single_obj_list[0], COLLECTIVES_OBJECT_TEST_LIST[0])
# Multiple input objects test
if self.rank != src_rank:
self.assertNotEqual(objects, COLLECTIVES_OBJECT_TEST_LIST)
dist.broadcast_object_list(objects, src=0)
self.assertEqual(objects, COLLECTIVES_OBJECT_TEST_LIST)
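Under the hood, `broadcast_object_list` pickles each object, broadcasts the per-object sizes, then broadcasts one flat byte buffer (packed into a uint8 tensor). A simplified stdlib-only sketch of that serialize/split round trip, with the actual broadcast elided (helper names are illustrative):

```python
import pickle

def serialize_objects(objects):
    """Pickle each object and concatenate the payloads, keeping per-object
    sizes so the receiver can split the flat buffer back apart."""
    blobs = [pickle.dumps(obj) for obj in objects]
    return [len(b) for b in blobs], b"".join(blobs)

def deserialize_objects(sizes, buffer):
    """Split the flat byte buffer using the size list and unpickle each
    object, reversing serialize_objects."""
    objects, offset = [], 0
    for size in sizes:
        objects.append(pickle.loads(buffer[offset:offset + size]))
        offset += size
    return objects
```

In the real collective, `sizes` and `buffer` each travel over one broadcast, which is why non-source ranks must pre-allocate a list of the right length.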
def _test_ddp_ignore_params_arg(self, static_graph=False):
class TestModel(nn.Module):
def __init__(self, rank):
self.rank = rank
super(TestModel, self).__init__()
self.fc1 = nn.Linear(1, 1, bias=False)
# Proxy that will be materialized to another architecture later.
# (after wrapping model with DDP)
if self.rank == 0:
self.fc2 = nn.Linear(1, 10, bias=False)
else:
self.fc2 = nn.Linear(10, 10, bias=False)
def forward(self, x):
x = self.fc1(x)
x = self.fc2(x)
return x
device_id = self.rank
# Ensure the test works for both find_unused_parameters and broadcast_buffers settings.
for (find_unused, broadcast_buffers) in itertools.product(
[False, True], [False, True]
):
model = TestModel(self.rank).float().to(device_id)
# Note that the model can have different shape buffers if we pass
# them in to be ignored as well.
model.fc2.register_buffer(
"ignore_buffer", torch.zeros(5 + self.rank, device=self.rank)
)
proxy_params = list(model.fc2.parameters())
proxy_buffers = list(model.fc2.buffers())
model_fc2_name = [
module_name
for module_name, module in model.named_modules()
if module is model.fc2
][0]
proxy_param_names = [
f"{model_fc2_name}.{param_name}"
for param_name, _ in model.fc2.named_parameters()
]
proxy_buffer_names = [
f"{model_fc2_name}.{buf_name}"
for buf_name, _ in model.fc2.named_buffers()
]
# Specify that we should ignore proxy_params since it will be
# materialized later.
torch.nn.parallel.DistributedDataParallel._set_params_and_buffers_to_ignore_for_model(
model, proxy_param_names + proxy_buffer_names
)
ddp = torch.nn.parallel.DistributedDataParallel(
model,
device_ids=[device_id],
find_unused_parameters=find_unused,
broadcast_buffers=broadcast_buffers,
)
if static_graph:
ddp._set_static_graph()
# Materialize new params. These are not registered in DDP and thus
# don't have autograd hooks installed on them.
ddp.module.fc2 = nn.Linear(1, 1, bias=False).to(device_id)
# local model with the new materialized parameters.
local_model = copy.deepcopy(ddp.module).cuda(self.rank)
inp = torch.ones(1, dtype=torch.float).to(device_id) * (self.rank + 1)
for i in range(6):
ddp(inp).sum().backward()
local_model(inp).sum().backward()
# materialized param grad is not touched by DDP, so its grad should
# be the same as if running locally.
for materialized_param, local_param in zip(
ddp.module.fc2.parameters(), local_model.fc2.parameters()
):
self.assertEqual(materialized_param.grad, local_param.grad)
# fc1 parameter grad should still be different, due to allreduce.
for synced_param, local_param in zip(
ddp.module.fc1.parameters(), local_model.fc1.parameters()
):
self.assertFalse(synced_param.grad == local_param.grad)
# Proxy module grad should not be touched
for proxy_param in proxy_params:
self.assertIsNone(proxy_param.grad)
# Synchronize since we run multiple iterations of this test, to
# isolate failure hangs.
torch.cuda.synchronize(device=self.rank)
@require_backend({"gloo", "nccl"})
@require_backends_available({"gloo", "nccl"})
@skip_if_lt_x_gpu(2)
def test_ddp_ignore_params_arg(self):
self._test_ddp_ignore_params_arg(static_graph=False)
self._test_ddp_ignore_params_arg(static_graph=True)
@with_dist_debug_levels(levels=["OFF", "INFO", "DETAIL"])
@require_backend({"gloo", "nccl"})
@require_backends_available({"gloo", "nccl"})
@skip_if_lt_x_gpu(2)
def test_ddp_unused_params_rebuild_buckets_exception(self):
class ToyModel(nn.Module):
def __init__(self):
super(ToyModel, self).__init__()
self.net1 = nn.Linear(10, 10, bias=False)
self.net2 = nn.Linear(10, 10, bias=False)
def forward(self, x):
return self.net1(x)
ddp = torch.nn.parallel.DistributedDataParallel(
ToyModel().cuda(self.rank), device_ids=[self.rank]
)
for i in range(2):
inp = torch.rand(1, 10)
if i > 0:
# On 2nd iteration, this will fail during rebuild_buckets,
# but we should report an error regarding unused parameters
# since that is the underlying root cause.
try:
ddp(inp).sum().backward()
except RuntimeError as e:
msg = str(e)
verify_ddp_error_logged(ddp, msg)
expected_strs = [
ddp_prev_reduction_unfinished_str,
ddp_recommend_find_unused_params_str,
ddp_outputs_not_used_in_loss_str,
]
# In debug mode, should show parameters that weren't reduced.
# Without debug mode, should show suggestion to use debug mode.
if dist._get_debug_mode() == dist._DistributedDebugLevel.OFF:
expected_strs.append(ddp_suggest_debug_mode_str)
else:
unreduced_params = ", ".join(["net2.weight"])
expected_strs.append(
f"did not receive grad for rank {self.rank}: {unreduced_params}"
)
for s in expected_strs:
self.assertIn(s, msg, f"Expected {s} to be in {msg}")
self.assertNotIn(ddp_find_unused_params_enabled_str, msg)
else:
self.fail("DDP unused parameters error not raised.")
else:
ddp(inp).sum().backward()
dist.barrier()
@require_backend({"gloo", "nccl"})
@require_backends_available({"gloo", "nccl"})
@skip_if_lt_x_gpu(2)
def test_ddp_shared_grad_acc_unused_params(self):
# When find_unused_parameters=True, ensure we mark unused parameters
# even if they share gradient accumulators.
class ToyModel(nn.Module):
def __init__(self):
super(ToyModel, self).__init__()
# net1, bias, and net1.bias are all unused params.
self.net1 = nn.Linear(10, 5, bias=False)
self.bias = nn.Parameter(torch.zeros(5))
# net1.bias and self.bias are names for the same underlying
# parameter, so they share the same grad acc. This caused
# the bug reported in https://github.com/pytorch/pytorch/issues/41324.
self.net1.bias = self.bias
self.net2 = nn.Linear(10, 5)
def forward(self, x):
return self.net2(x)
torch.cuda.set_device(self.rank)
model = ToyModel().to(torch.cuda.current_device())
ddp_model = torch.nn.parallel.DistributedDataParallel(
model, device_ids=[self.rank], find_unused_parameters=True
)
inp = torch.randn(20, 10, device=self.rank)
for i in range(6):
out = ddp_model(inp)
loss = out.sum()
loss.backward()
@require_backend({"gloo", "nccl"})
@require_backends_available({"gloo", "nccl"})
@skip_if_lt_x_gpu(2)
def test_ddp_device(self):
m = nn.Linear(10, 10).to(self.rank)
expected_len = 2
class TensorWrapper:
__slots__ = ["t", "moved_to_gpu"]
def __init__(self, t):
self.t = t
self.moved_to_gpu = False
# Handlers for specific types of validation we want to do based on
# the input type.
def tuple_and_list_validator(x):
self.assertEqual(len(x), expected_len)
self.assertEqual(1, len(set(t.device for t in x)))
self.assertEqual(x[0].device.index, self.rank)
return x[0] + x[1]
def namedtuple_validator(x):
self.assertEqual(x._fields, EXPECTED_FIELDS)
self.assertEqual(x.a.device.index, x.b.device.index)
self.assertEqual(x.a.device.index, self.rank)
return x.a + x.b
def custom_type_validator(x):
self.assertTrue(x.moved_to_gpu or (str(x.t.device) == "cpu"))
x.t = x.t.to(self.rank)
x.moved_to_gpu = True
return x.t
def dict_validator(x):
self.assertIn(EXPECTED_FIELDS[0], x.keys())
self.assertIn(EXPECTED_FIELDS[1], x.keys())
self.assertEqual(1, len(set(t.device for t in x.values())))
self.assertEqual(x[EXPECTED_FIELDS[0]].device.index, self.rank)
return x[EXPECTED_FIELDS[0]] + x[EXPECTED_FIELDS[1]]
validators = {
TensorWrapper: custom_type_validator,
tuple: tuple_and_list_validator,
list: tuple_and_list_validator,
TestNamedTupleInput_0: namedtuple_validator,
TestNamedTupleInput_1: namedtuple_validator,
dict: dict_validator,
}
class ToyModel(torch.nn.Module):
def __init__(_self): # noqa: B902
super().__init__()
_self.lin = nn.Linear(10, 10, bias=False)
def forward(_self, x, expected_type): # noqa: B902
# Similar to scatter, the recursive to in the single-device
# case does not move tensors if they are in a custom type.
self.assertTrue(isinstance(x, expected_type))
fwd_tensor = validators[expected_type](x)
return _self.lin(fwd_tensor)
model = torch.nn.parallel.DistributedDataParallel(
ToyModel().to(self.rank), device_ids=[self.rank]
)
def train_iter(inp, input_type):
for _ in range(4):
out = model(inp, input_type)
out.sum().backward()
# CPU tuple input, should be moved to the proper device before call
# to forward.
inp = tuple(torch.randn(10, 10) for _ in range(expected_len))
train_iter(inp, tuple)
# List CPU input, should be moved to proper device before call to
# forward.
inp = [torch.randn(10, 10) for _ in range(expected_len)]
train_iter(inp, list)
# Custom type containing tensor. The type is maintained, but the
# device is not propagated (which is what happens with scatter too)
inp = TensorWrapper(torch.randn(10, 10))
train_iter(inp, TensorWrapper)
# NamedTuple input. The type should be maintained and tensor inputs
# should be moved to the correct device as in scatter.
batch = 5
dim = 10
a = torch.rand(batch, dim)
b = torch.rand(batch, dim)
inp = TestNamedTupleInput_0(a, b)
train_iter(inp, type(inp))
inp = TestNamedTupleInput_1(a, b)
train_iter(inp, type(inp))
# dictionary input.
inp = {
EXPECTED_FIELDS[0]: a,
EXPECTED_FIELDS[1]: b,
}
train_iter(inp, type(inp))
@require_backend({"gloo", "nccl"})
@require_backends_available({"gloo", "nccl"})
@skip_if_lt_x_gpu(2)
def test_ddp_namedtuple(self):
batch = 5
dim = 10
a = torch.rand(batch, dim, device=self.rank)
b = torch.rand(batch, dim, device=self.rank)
class NamedTupleModule(torch.nn.Module):
def __init__(_self): # noqa: B902
super().__init__()
_self.lin = nn.Linear(10, 1)
def forward(_self, input, expected_type): # noqa: B902
# Without NamedTuple support, this would be of type tuple.
self.assertTrue(
isinstance(input, expected_type),
f"Expected type {expected_type} but got {type(input)}",
)
self.assertEqual(input._fields, EXPECTED_FIELDS)
self.assertEqual(a, input.a)
self.assertEqual(b, input.b)
return _self.lin(torch.mul(input.a, input.b))
model = torch.nn.parallel.DistributedDataParallel(
NamedTupleModule().cuda(self.rank), device_ids=[self.rank]
)
inp = TestNamedTupleInput_0(a, b)
# The following would fail if DDP does not propagate NamedTuples correctly.
model(inp, type(inp))
inp = TestNamedTupleInput_1(a, b)
model(inp, type(inp))
@with_dist_debug_levels(levels=["OFF", "INFO", "DETAIL"])
@require_backend({"gloo", "nccl"})
@require_backends_available({"gloo", "nccl"})
@skip_if_lt_x_gpu(2)
def test_ddp_control_flow_same_across_ranks(self):
# Control flow that is the same across ranks.
batch = 20
dim = 10
world_size = dist.get_world_size()
torch.cuda.set_device(self.rank)
model = torch.nn.parallel.DistributedDataParallel(
ControlFlowToyModel().cuda(self.rank),
device_ids=[self.rank],
find_unused_parameters=True,
)
random_input = torch.randn(batch, dim, device=self.rank)
ones_input = torch.ones(batch, dim, device=self.rank)
for i in range(6):
if i % 2 == 0:
out = model(random_input)
else:
out = model(ones_input)
loss = out.sum()
loss.backward()
# On even iterations, 2nd param goes unused, on odd iterations,
# it is used.
local_used_maps = model.reducer._get_local_used_maps()
if i % 2 == 0:
expected = torch.tensor(
[world_size, 0], device=self.rank, dtype=torch.int32
)
else:
expected = torch.tensor(
[world_size, world_size], device=self.rank, dtype=torch.int32
)
# Validate parameter usage.
variable_usage_tensor = local_used_maps[0]
self.assertEqual(variable_usage_tensor, expected)
# Validate appropriate error message when DDP is used with
# find_unused_parameters=False.
model = torch.nn.parallel.DistributedDataParallel(
ControlFlowToyModel().cuda(self.rank),
device_ids=[self.rank],
find_unused_parameters=False,
)
for i in range(2):
if i == 0:
loss = model(random_input).sum()
loss.backward()
else:
try:
loss = model(random_input).sum()
loss.backward()
except RuntimeError as e:
msg = str(e)
verify_ddp_error_logged(model, msg)
# 2nd linear layer is unused
unused_param_index = 1
expected_strs = [
ddp_prev_reduction_unfinished_str,
ddp_recommend_find_unused_params_str,
ddp_outputs_not_used_in_loss_str,
f"Parameter indices which did not receive grad for rank {self.rank}: {unused_param_index}",
]
# In debug mode, should show parameters that weren't reduced.
# Without debug mode, should show suggestion to use debug mode.
if dist._get_debug_mode() == dist._DistributedDebugLevel.OFF:
expected_strs.append(ddp_suggest_debug_mode_str)
else:
unreduced_params = ", ".join(["lin2.weight"])
expected_strs.append(
f"did not receive grad for rank {self.rank}: {unreduced_params}"
)
for s in expected_strs:
self.assertIn(s, msg, f"Expected {s} to be in {msg}")
self.assertNotIn(ddp_find_unused_params_enabled_str, msg)
else:
self.fail("DDP error not raised")
dist.barrier()
@require_backend({"gloo", "nccl"})
@require_backends_available({"gloo", "nccl"})
@skip_if_lt_x_gpu(2)
def test_invalid_static_graph(self):
world_size = dist.get_world_size()
torch.cuda.set_device(self.rank)
model = torch.nn.parallel.DistributedDataParallel(
ControlFlowToyModel().cuda(self.rank),
device_ids=[self.rank],
)
model._set_static_graph()
random_input = torch.randn(20, 10, device=self.rank)
ones_input = torch.ones(20, 10, device=self.rank)
# An unused parameter in the first iteration becomes used in the
# second iteration.
expected_err = "Your training graph has changed in this iteration"
with self.assertRaisesRegex(RuntimeError, expected_err):
for i in range(2):
if i % 2 == 0:
out = model(random_input)
else:
out = model(ones_input)
loss = out.sum()
loss.backward()
verify_ddp_error_logged(model, expected_err)
# A used parameter in the first iteration becomes unused in the
# second iteration.
with self.assertRaisesRegex(
RuntimeError,
"Expected to have finished reduction in the prior iteration "
"before starting a new one. This error indicates that your "
"training graph has changed in this iteration",
):
for i in range(2):
if i % 2 != 0:
out = model(random_input)
else:
out = model(ones_input)
loss = out.sum()
loss.backward()
verify_ddp_error_logged(model, "Expected to have finished reduction")
@with_dist_debug_levels(levels=["OFF", "INFO", "DETAIL"])
@require_backend({"gloo", "nccl"})
@require_backends_available({"gloo", "nccl"})
@skip_if_lt_x_gpu(2)
def test_ddp_control_flow_different_across_ranks(self):
# Control flow that is different across ranks.
batch = 20
dim = 10
class ToyModel(nn.Module):
def __init__(self, rank):
super(ToyModel, self).__init__()
self.lin1 = nn.Linear(10, 10, bias=False)
self.lin2 = nn.Linear(10, 10, bias=False)
self.rank = rank
def forward(self, x):
# Control-flow that is rank and input dependent for the
# model.
use_second_layer = (
torch.equal(x, torch.ones(batch, dim, device=x.device))
and self.rank == 1
)
if use_second_layer:
return self.lin2(F.relu(self.lin1(x)))
else:
return F.relu(self.lin1(x))
world_size = dist.get_world_size()
torch.cuda.set_device(self.rank)
model = torch.nn.parallel.DistributedDataParallel(
ToyModel(self.rank).cuda(self.rank),
device_ids=[self.rank],
find_unused_parameters=True,
)
random_input = torch.randn(batch, dim, device=self.rank)
ones_input = torch.ones(batch, dim, device=self.rank)
for i in range(6):
if i % 2 == 0:
out = model(random_input)
else:
out = model(ones_input)
loss = out.sum()
loss.backward()
# On even iterations, 2nd param goes unused, on odd iterations,
# it is used only on rank 1.
local_used_maps = model.reducer._get_local_used_maps()
if i % 2 == 0:
expected = torch.tensor(
[world_size, 0], device=self.rank, dtype=torch.int32
)
else:
expected = torch.tensor(
[world_size, 1], device=self.rank, dtype=torch.int32
)
variable_usage_tensor = local_used_maps[0]
# Validate parameter usage. On odd iterations, 2nd param is only
# used on rank 1.
self.assertEqual(variable_usage_tensor, expected)
# Validate appropriate error message when DDP is used with
# find_unused_parameters=False.
model = torch.nn.parallel.DistributedDataParallel(
ToyModel(self.rank).cuda(self.rank),
device_ids=[self.rank],
find_unused_parameters=False,
)
for i in range(2):
if i == 0:
loss = model(random_input).sum()
loss.backward()
else:
try:
loss = model(random_input).sum()
loss.backward()
except RuntimeError as e:
msg = str(e)
verify_ddp_error_logged(model, msg)
unused_param_index = 1
expected_strs = [
ddp_prev_reduction_unfinished_str,
ddp_recommend_find_unused_params_str,
ddp_outputs_not_used_in_loss_str,
f"Parameter indices which did not receive grad for rank {self.rank}: {unused_param_index}",
]
# In debug mode, should show parameters that weren't reduced.
# Without debug mode, should show suggestion to use debug mode.
if dist._get_debug_mode() == dist._DistributedDebugLevel.OFF:
expected_strs.append(ddp_suggest_debug_mode_str)
else:
unreduced_params = ", ".join(["lin2.weight"])
expected_strs.append(
f"did not receive grad for rank {self.rank}: {unreduced_params}"
)
for s in expected_strs:
self.assertIn(s, msg, f"Expected {s} to be in {msg}")
self.assertNotIn(ddp_find_unused_params_enabled_str, msg)
else:
self.fail("DDP error not raised")
dist.barrier()
@require_backend({"gloo"})
@sandcastle_skip_if(BACKEND == "nccl", "NCCL does not support scatter")
def test_scatter_object_list(self):
src_rank = 0
scatter_list = (
COLLECTIVES_OBJECT_TEST_LIST
if self.rank == src_rank
else [None for _ in COLLECTIVES_OBJECT_TEST_LIST]
)
world_size = dist.get_world_size()
scatter_list = scatter_list[:world_size]
i = 0
while len(scatter_list) < world_size:
scatter_list.append(scatter_list[i])
i += 1
output_obj_list = [None]
dist.scatter_object_list(output_obj_list, scatter_list, src=src_rank)
self.assertEqual(
output_obj_list[0],
COLLECTIVES_OBJECT_TEST_LIST[
self.rank % len(COLLECTIVES_OBJECT_TEST_LIST)
],
)
# Ensure errors are raised upon incorrect arguments.
with self.assertRaisesRegex(
RuntimeError,
"Expected argument scatter_object_output_list to be a list of size at least 1.",
):
dist.scatter_object_list([], scatter_list, src=src_rank)
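`scatter_object_list` expects exactly one input object per rank, so the test above truncates and then cycles earlier entries to pad the list up to world size. The padding logic in isolation (the helper name is hypothetical):

```python
def pad_scatter_list(scatter_list, world_size):
    """Truncate to world_size, then cycle earlier entries until there is
    exactly one object per rank -- mirroring the test's while-loop."""
    padded = list(scatter_list[:world_size])
    i = 0
    while len(padded) < world_size:
        padded.append(padded[i])
        i += 1
    return padded
```

This is why rank r checks its output against `COLLECTIVES_OBJECT_TEST_LIST[r % len(COLLECTIVES_OBJECT_TEST_LIST)]`: padding repeats the list cyclically.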
@require_backend({"gloo", "nccl"})
@require_backends_available({"gloo", "nccl"})
@skip_if_lt_x_gpu(2)
@skip_if_rocm
def test_ddp_model_diff_across_ranks(self):
group_gloo = dist.new_group(
timeout=timedelta(seconds=60), backend=dist.Backend.GLOO
)
# Set NCCL_BLOCKING_WAIT and use a new NCCL group to improve test
# determinism.
os.environ["NCCL_BLOCKING_WAIT"] = "1"
group_to_use = dist.new_group(
backend=dist.get_backend(), timeout=timedelta(seconds=5)
)
torch.cuda.set_device(self.rank)
# Creates network with different sized embedding table on different
# ranks. This should throw an error during DDP init.
net = EmbeddingNet(self.rank)
# When running with NCCL backend, we don't expect an error on rank 0,
# rather, it will be taken down by NCCL_ASYNC_ERROR_HANDLING. When
# running with Gloo or with debug mode wrapper, we expect the error
# to be caught inline.
is_detail_dbg_mode = (
dist._get_debug_mode() == dist._DistributedDebugLevel.DETAIL
)
rank_0_ctx = (
self.assertRaisesRegex(
RuntimeError, "Caught collective operation timeout"
)
if dist.get_backend(group_to_use) == dist.Backend.NCCL
and not is_detail_dbg_mode
# Gloo can raise various exception messages, so just assert
# Runtime error here.
else self.assertRaises(RuntimeError)
)
ctx = (
rank_0_ctx
if self.rank == 0
else self.assertRaisesRegex(RuntimeError, "appears not to match")
)
with ctx:
net = torch.nn.parallel.DistributedDataParallel(
net.to(self.rank),
device_ids=[self.rank],
process_group=group_to_use,
)
# Should only be run by rank 0, and blocking_wait catches and
# reports exception.
dist.barrier(group_to_use)
# Perform gloo-based barrier to ensure one rank doesn't exit test
# early which causes failure with Barrier.sync.
dist.barrier(group_gloo)
@with_dist_debug_levels(levels=["OFF", "INFO", "DETAIL"])
@require_backend({"gloo", "nccl"})
@require_backends_available({"gloo", "nccl"})
@skip_if_lt_x_gpu(2)
def test_output_unused_in_loss(self):
model = TwoLinLayerNet()
# Need a copy of the model to pass into the 2nd DDP ctor, otherwise
# autograd hooks on the first DDP reducer will execute!
model_copy = copy.deepcopy(model)
net = torch.nn.parallel.DistributedDataParallel(
copy.deepcopy(model).cuda(self.rank),
device_ids=[self.rank],
)
net_with_find_unused = torch.nn.parallel.DistributedDataParallel(
model_copy.cuda(self.rank),
device_ids=[self.rank],
find_unused_parameters=True,
)
inp = torch.randn(10, 10)
for ddp in [net, net_with_find_unused]:
for i in range(2):
if i == 0:
a, b = ddp(inp)
loss = b.sum()
loss.backward()
else:
try:
a, b = ddp(inp)
loss = b.sum()
loss.backward()
except RuntimeError as e:
msg = str(e)
unused_index = 0
unused_index_substr = (
f"Parameter indices which did not receive grad for rank {self.rank}: {unused_index}"
)
if ddp == net:
expected_strs = [
ddp_prev_reduction_unfinished_str,
ddp_recommend_find_unused_params_str,
ddp_outputs_not_used_in_loss_str,
unused_index_substr,
]
unexpected_strs = [
ddp_find_unused_params_enabled_str,
]
elif ddp == net_with_find_unused:
expected_strs = [
ddp_prev_reduction_unfinished_str,
ddp_outputs_not_used_in_loss_str,
ddp_find_unused_params_enabled_str,
unused_index_substr,
]
unexpected_strs = [
ddp_recommend_find_unused_params_str,
]
# In debug mode, should show parameters that weren't reduced.
# Without debug mode, should show suggestion to use debug mode.
if (
dist._get_debug_mode()
== dist._DistributedDebugLevel.OFF
):
expected_strs.append(ddp_suggest_debug_mode_str)
else:
unreduced_params = ", ".join(["a.weight"])
expected_strs.append(
f"did not receive grad for rank {self.rank}: {unreduced_params}"
)
for s in expected_strs:
self.assertIn(s, msg, f"Expected {s} to be in {msg}")
for s in unexpected_strs:
self.assertNotIn(s, msg, f"Expected {s} not to be in {msg}")
else:
self.fail("DDP error not raised")
dist.barrier()
def _test_different_graph_across_ranks(
self, find_unused_parameters=False, static_graph=False
):
class ToyModel(nn.Module):
def __init__(self, rank):
super(ToyModel, self).__init__()
self.lin1 = nn.Linear(10, 10, bias=False)
self.lin2 = nn.Linear(10, 10, bias=False)
self.rank = rank
def forward(self, x):
if self.rank == 0:
return self.lin2(F.relu(self.lin1(x)))
else:
return F.relu(self.lin1(x))
torch.manual_seed(31415)
world_size = dist.get_world_size()
torch.cuda.set_device(self.rank)
model = ToyModel(self.rank).cuda(self.rank)
ddp_model = torch.nn.parallel.DistributedDataParallel(
model,
device_ids=[self.rank],
find_unused_parameters=find_unused_parameters,
gradient_as_bucket_view=True,
)
if static_graph:
ddp_model._set_static_graph()
random_input = torch.randn(20, 10, device=self.rank)
for i in range(10):
out = ddp_model(random_input)
loss = out.sum()
loss.backward()
return ddp_model
@require_backend({"gloo", "nccl"})
@require_backends_available({"gloo", "nccl"})
@skip_if_lt_x_gpu(2)
def test_different_graph_across_ranks(self):
base_model = self._test_different_graph_across_ranks(
find_unused_parameters=True
)
self.assertFalse(
base_model._get_ddp_logging_data().get("has_rebuilt_buckets", 0)
)
static_model = self._test_different_graph_across_ranks(static_graph=True)
self.assertTrue(
static_model._get_ddp_logging_data().get("has_rebuilt_buckets", 0)
)
for i, j in zip(base_model.parameters(), static_model.parameters()):
self.assertEqual(i, j)
@require_backend({"gloo"})
@require_backends_available({"gloo"})
@sandcastle_skip_if(
IS_MACOS or IS_WINDOWS,
"MacOS uses uv transport which does not have as robust error handling as tcp transport",
)
def test_monitored_barrier_gloo(self):
tensors = [torch.ones(10) * self.rank]
# Kick off some allreduce work on all ranks
for _ in range(10):
dist.all_reduce(torch.cat(tensors))
        # Run monitored barrier and ensure it passes
timeout = timedelta(seconds=2)
dist.monitored_barrier(timeout=timeout)
# Check monitored_barrier success with wait_all_ranks=True
for _ in range(10):
dist.all_reduce(torch.cat(tensors))
dist.monitored_barrier(timeout=timeout, wait_all_ranks=True)
        # All ranks except rank 1 call into the barrier; rank 0 should report
        # failure while the others report a gloo error.
failed_rank = 1
src_rank = 0
if self.rank == src_rank:
with self.assertRaisesRegex(
RuntimeError, f"Rank {failed_rank} failed to pass monitoredBarrier"
):
dist.monitored_barrier(timeout=timeout)
elif self.rank != failed_rank:
# Other ranks should not pass barrier since rank 0 failed.
err_regex = (
f"Rank {self.rank} successfully reached monitoredBarrier,"
f" but received errors while waiting to be unblocked by rank"
f" {src_rank}"
)
with self.assertRaisesRegex(RuntimeError, err_regex):
dist.monitored_barrier(timeout=timeout)
        # We need a barrier since otherwise failed_rank exits too early
        # and causes a timeout.
self._barrier(timeout=30)
@require_backend({"gloo"})
@require_backends_available({"gloo"})
def test_monitored_barrier_gloo_subgroup(self):
# Tests that monitored_barrier works as expected on non-default
# process groups.
failed_rank = 1
timeout = 0.1
subgroup = dist.new_group(ranks=[0, 1])
if self.rank == failed_rank:
return
if self.rank == 0:
with self.assertRaisesRegex(
RuntimeError, f"Rank {failed_rank} failed to pass monitoredBarrier"
):
dist.monitored_barrier(subgroup, timeout)
else:
# Other ranks call into monitored_barrier, but this should be a
# noop because they are not part of the subgroup. Verify that
# there are no errors here.
dist.monitored_barrier(subgroup, timeout)
def _test_monitored_barrier_allreduce_hang(self, wait_all_ranks):
        # Tests expected behavior when a nonzero rank hangs.
        nccl_pg = dist.new_group(
            ranks=list(range(int(self.world_size))),
            timeout=timedelta(seconds=2),
            backend=dist.Backend.NCCL,
        )
        gloo_pg = dist.new_group(
            ranks=list(range(int(self.world_size))),
            backend=dist.Backend.GLOO,
        )
tensors = [torch.ones(10, device=self.rank) * self.rank]
# Let all ranks call allreduce first to set up communicators etc.
# Directly simulating error here will run into store issue described
# in https://github.com/pytorch/pytorch/issues/54524.
nccl_pg.allreduce(tensors).wait()
# All ranks besides 0 call into allreduce. This is to simulate a
# desync across the world, where some ranks call into
# monitored_barrier() and others are stuck in collective comm. In
# practice, we don't need NCCL_BLOCKING_WAIT, but we use it in this
# test to ensure it exits cleanly.
if self.rank != 0:
# Can get different errors here depending on whether gloo-based
# wrapper PG is enabled or not, since with wrapper pg, it will
# fail in a collective synchronization check and not actually
# call into the nccl pg.
if dist._get_debug_mode() == dist._DistributedDebugLevel.DETAIL:
err_regex = "Timed out waiting"
else:
err_regex = "Caught collective operation timeout"
with self.assertRaisesRegex(RuntimeError, err_regex):
nccl_pg.allreduce(tensors).wait(timedelta(seconds=0.1))
else:
# Rank 0 should report first (in order) timed out rank or all ranks
# depending on wait_all_ranks flag passed into monitored_barrier.
if wait_all_ranks:
rank_str = ", ".join(
[str(i) for i in range(1, int(self.world_size))]
)
err_regex = f"Ranks {rank_str} failed to pass monitoredBarrier"
else:
expected_first_fail_rank = 1
err_regex = f"Rank {expected_first_fail_rank} failed to pass monitoredBarrier"
monitored_barrier_timeout_seconds = timedelta(seconds=0.1)
with self.assertRaisesRegex(RuntimeError, err_regex):
gloo_pg.monitored_barrier(
monitored_barrier_timeout_seconds, wait_all_ranks=wait_all_ranks
)
@with_nccl_blocking_wait
@require_backend({"gloo", "nccl"})
@require_backends_available({"gloo", "nccl"})
@skip_if_rocm
@skip_if_lt_x_gpu(int(os.environ["WORLD_SIZE"]))
def test_monitored_barrier_allreduce_hang(self):
# tests expected behavior when nonzero rank hangs and we want to
# report first timed out rank.
self._test_monitored_barrier_allreduce_hang(wait_all_ranks=False)
@with_nccl_blocking_wait
@require_backend({"gloo", "nccl"})
@require_backends_available({"gloo", "nccl"})
@skip_if_rocm
@skip_if_lt_x_gpu(int(os.environ["WORLD_SIZE"]))
def test_monitored_barrier_allreduce_hang_wait_all_ranks(self):
# tests expected behavior when nonzero rank hangs and we want to
# report all timed out ranks.
self._test_monitored_barrier_allreduce_hang(wait_all_ranks=True)
@require_backend({"gloo"})
@require_backends_available({"gloo"})
def test_monitored_barrier_gloo_rank_0_timeout(self):
# tests error when rank 0 exhausts its given timeout.
        process_group = dist.new_group(
            ranks=list(range(int(self.world_size)))
        )
timeout = timedelta(seconds=0)
if self.rank == 0:
with self.assertRaisesRegex(
RuntimeError, f"Rank {self.rank} timed out in monitoredBarrier"
):
process_group.monitored_barrier(timeout)
@require_backend({"gloo"})
@require_backends_available({"gloo"})
@skip_if_small_worldsize
@sandcastle_skip_if(
IS_MACOS or IS_WINDOWS,
"MacOS uses uv transport which does not have as robust error handling as tcp transport",
)
def test_monitored_barrier_failure_order(self):
# Ensure that the first (in sorted order) rank is reported when
# multiple ranks fail to pass the monitored_barrier.
# TODO(#54879): Provide ability to wait and report all failed ranks
expected_first_failed_rank = 2
timeout = timedelta(seconds=2)
src_rank = 0
if self.rank == src_rank:
with self.assertRaisesRegex(
RuntimeError, f"Rank {expected_first_failed_rank}"
):
dist.monitored_barrier(timeout=timeout)
elif self.rank == 1:
err_regex = (
f"Rank {self.rank} successfully reached monitoredBarrier,"
f" but received errors while waiting to be unblocked by rank"
f" {src_rank}"
)
with self.assertRaisesRegex(RuntimeError, err_regex):
dist.monitored_barrier(timeout=timeout)
@require_backend({"gloo"})
@require_backends_available({"gloo"})
@skip_if_small_worldsize
def test_monitored_barrier_wait_all_ranks(self):
# Tests simple case where > 1 rank does not call into monitored
# barrier and verifies all ranks are reported by rank 0.
if self.rank == 0:
timeout = timedelta(seconds=0.1)
rank_str = ", ".join([str(i) for i in range(1, int(self.world_size))])
err_regex = f"Ranks {rank_str} failed to pass monitoredBarrier"
with self.assertRaisesRegex(RuntimeError, err_regex):
dist.monitored_barrier(timeout=timeout, wait_all_ranks=True)
@require_backend({"gloo", "nccl"})
@require_backends_available({"gloo", "nccl"})
@skip_if_lt_x_gpu(2)
def test_ddp_build_param_to_name_mapping(self):
model = TwoLinLayerNet()
net = torch.nn.parallel.DistributedDataParallel(
model.cuda(self.rank),
device_ids=[self.rank],
)
expected_mapping = {0: "a.weight", 1: "b.weight"}
net_params, _ = net._build_params_for_reducer()
param_to_name_mapping = net._build_param_to_name_mapping(net_params)
self.assertDictEqual(expected_mapping, param_to_name_mapping)
# Test when DDP is used with ignored parameters.
model = TwoLinLayerNet()
# Parameters to ignore are in the format {module_name}.{param_name}
params_to_ignore = ["a.weight"]
torch.nn.parallel.DistributedDataParallel._set_params_and_buffers_to_ignore_for_model(
model, params_to_ignore
)
net = torch.nn.parallel.DistributedDataParallel(
model.cuda(self.rank),
device_ids=[self.rank],
)
expected_mapping = {0: "b.weight"}
net_params, _ = net._build_params_for_reducer()
param_to_name_mapping = net._build_param_to_name_mapping(net_params)
self.assertDictEqual(expected_mapping, param_to_name_mapping)
# Test errors are raised when DDP and module parameters mismatch.
# This generally indicates a bug with DDP and is not expected to
# happen in user applications.
model = TwoLinLayerNet()
net = torch.nn.parallel.DistributedDataParallel(
model.cuda(self.rank),
device_ids=[self.rank],
)
net_params, _ = net._build_params_for_reducer()
if self.rank == 0:
print(type(net_params[0][0]))
net_params[0].extend(
[
torch.nn.Parameter(torch.ones(1)),
torch.nn.Parameter(torch.ones(1)),
]
)
with self.assertRaisesRegex(ValueError, "Expected param to name mapping"):
net._build_param_to_name_mapping(net_params)
net_params[0] = net_params[0][:-3]
with self.assertRaisesRegex(ValueError, "Param with name"):
net._build_param_to_name_mapping(net_params)
net_params[0].extend(
[
torch.nn.Parameter(torch.ones(1)),
torch.nn.Parameter(torch.ones(1)),
]
)
@sandcastle_skip_if(
BACKEND != "nccl" and BACKEND != "gloo",
"Only Nccl & Gloo backend support DistributedDataParallel",
)
@skip_if_lt_x_gpu(2)
def test_ddp_build_param_to_name_mapping_requires_grad(self):
class Net(nn.Module):
def __init__(self):
super().__init__()
self.lin = nn.Linear(10, 10)
# Is not tracked by DDP and should not show up in param to
# name mapping.
self.lin.bias.requires_grad_(False)
def forward(self, x):
return self.lin(x)
model = Net()
net = torch.nn.parallel.DistributedDataParallel(
model.cuda(self.rank), device_ids=[self.rank]
)
expected_mapping = {
0: "lin.weight",
}
net_params, _ = net._build_params_for_reducer()
param_to_name_mapping = net._build_param_to_name_mapping(net_params)
self.assertEqual(param_to_name_mapping, expected_mapping)
def _test_ddp_multiple_nested_unused_params_error(self, ignore_sparse):
debug_mode_off = dist._get_debug_mode() == dist._DistributedDebugLevel.OFF
class SubModule(nn.Module):
def __init__(self):
super().__init__()
self.embedding_net = EmbeddingNet(0)
self.lin = TwoLinLayerNet()
self.bn = BatchNormNet()
self.lin_layer = nn.Linear(4, 10, bias=False)
def forward(self, x):
x = self.bn(x)
x = self.lin_layer(x)
x = self.lin.a(x) # self.lin.b param unused
# EmbeddingNet entirely unused: self.embedding_net.embedding and
# self.embedding_net.lin unused.
return x
class MyModel(nn.Module):
def __init__(self):
super().__init__()
self.sub_module = SubModule()
def forward(self, x):
return self.sub_module(x)
model = MyModel()
sparse_embedding_fqns = []
if ignore_sparse:
for module_name, module in model.named_modules():
if module == model.sub_module.embedding_net.embedding:
for parameter_name, param in module.named_parameters(
recurse=False
):
fqn = f"{module_name}.{parameter_name}"
sparse_embedding_fqns.append(fqn)
torch.nn.parallel.DistributedDataParallel._set_params_and_buffers_to_ignore_for_model(
model, sparse_embedding_fqns
)
unused_modules = [
model.sub_module.embedding_net.lin,
model.sub_module.lin.b,
]
else:
unused_modules = list(model.sub_module.embedding_net.modules()) + [
model.sub_module.lin.b,
]
expected_unused_param_fqns = []
used_param_fqns = [] # Validate that these don't mistakenly show up.
fqn_to_param_index = {}
index = 0
for module_name, module in model.named_modules():
for parameter_name, param in module.named_parameters(recurse=False):
fqn = f"{module_name}.{parameter_name}"
fqn_to_param_index[fqn] = index
if fqn not in sparse_embedding_fqns:
index += 1
if module in unused_modules:
expected_unused_param_fqns.append(fqn)
else:
if (
not ignore_sparse
or module != model.sub_module.embedding_net.embedding
):
used_param_fqns.append(fqn)
net = torch.nn.parallel.DistributedDataParallel(
model.cuda(self.rank),
device_ids=[self.rank],
)
batch, dim = 10, 2
inp = torch.ones(batch, dim)
for i in range(2):
if i == 0:
out = net(inp)
loss = out.sum()
loss.backward()
else:
try:
out = net(inp)
loss = out.sum()
loss.backward()
except RuntimeError as e:
e = str(e)
unused_param_substr = e[e.find("did not receive grad") :]
# Validate that each unused param fully qualified name
# shows up in error logs. We do this instead of
# constructing a joined string since order of parameters
# can be different in Reducer. In addition, validate
# param indices show up as well.
for unused_param_fqn in expected_unused_param_fqns:
self.assertTrue(
unused_param_fqn in unused_param_substr
or debug_mode_off
)
self.assertTrue(
str(fqn_to_param_index[unused_param_fqn])
in unused_param_substr,
f"Did not find index {fqn_to_param_index[unused_param_fqn]} for {unused_param_fqn}",
)
# Validate that used param fqns don't show up in error
# logs.
for used_param_fqn in used_param_fqns:
self.assertFalse(used_param_fqn in unused_param_substr)
# Validate that ignored param fqns don't show up as unused
# (since DDP does not track them)
for sparse_param_fqn in sparse_embedding_fqns:
self.assertFalse(sparse_param_fqn in unused_param_substr)
                else:
                    self.fail("Expected error was not raised!")
@with_dist_debug_levels(levels=["OFF", "INFO", "DETAIL"])
@require_backend({"gloo", "nccl"})
@require_backends_available({"gloo", "nccl"})
@skip_if_lt_x_gpu(2)
def test_ddp_multiple_nested_unused_params_error(self):
self._test_ddp_multiple_nested_unused_params_error(ignore_sparse=False)
@with_dist_debug_levels(levels=["OFF", "INFO", "DETAIL"])
@require_backend({"gloo", "nccl"})
@require_backends_available({"gloo", "nccl"})
@skip_if_lt_x_gpu(2)
def test_ddp_multiple_nested_unused_params_err_ignore_params(self):
# Tests unused parameter reporting when DDP is configured to ignore
# certain parameters.
self._test_ddp_multiple_nested_unused_params_error(ignore_sparse=True)
@sandcastle_skip_if(
BACKEND != "nccl" and BACKEND != "gloo",
"Only Nccl & Gloo backend support DistributedDataParallel",
)
@skip_if_lt_x_gpu(2)
def test_ddp_inference(self):
# tests that DDP module can be run on a single node with no_grad
# or eval setting and there is no hang.
rank = self.rank
torch.cuda.set_device(rank)
model = Net().cuda()
local_model = copy.deepcopy(model)
model = torch.nn.parallel.DistributedDataParallel(
model,
device_ids=[rank],
)
syncbn_model = nn.SyncBatchNorm(
2, momentum=0.99, track_running_stats=False
).cuda()
local_syncbn_model = copy.deepcopy(syncbn_model)
syncbn_model = torch.nn.parallel.DistributedDataParallel(
syncbn_model, device_ids=[rank]
)
inp = torch.randn(10, 2, device=rank)
inp_syncbn = torch.randn(10, 2, 4, 4, device=rank)
tests = [
(model, local_model, inp),
(syncbn_model, local_syncbn_model, inp_syncbn),
]
for test in tests:
test_model, test_local_model, test_inp = test
if self.rank == 0:
test_model.eval()
test_local_model.eval()
for _ in range(6):
self.assertEqual(
test_model(test_inp), test_local_model(test_inp)
)
# Barrier since only rank 0 runs inference. Test should be
# much faster than 30s, but this is to avoid flakiness.
self._barrier(timeout=30)
@sandcastle_skip_if(
BACKEND != "nccl" and BACKEND != "gloo",
"Only Nccl & Gloo backend support DistributedDataParallel",
)
@skip_if_lt_x_gpu(2)
def test_ddp_sync_bn_training_vs_eval(self):
rank = self.rank
torch.cuda.set_device(rank)
        # Need to set track_running_stats=False; when track_running_stats=True,
        # bn_training is False and sync does not occur in eval mode.
model = nn.SyncBatchNorm(2, momentum=0.99, track_running_stats=False).cuda(
rank
)
model = torch.nn.parallel.DistributedDataParallel(model, device_ids=[rank])
# Test sync occurs in training mode.
with torch.autograd.profiler.profile() as prof:
for i in range(6):
inp = torch.randn(10, 2, 4, 4).cuda(rank)
out = model(inp)
loss = out.sum()
loss.backward()
# SyncBN allgathers stats across all ranks, so verify call to
# all_gather in profiler.
if BACKEND == "nccl":
all_gather_calls = get_profiling_event("_all_gather_base", prof)
else:
all_gather_calls = get_profiling_event("all_gather", prof)
self.assertNotEqual([], all_gather_calls)
# Only do inference on one rank. If SyncBN did collective stats sync,
# this would hang/error.
model_inference = model.module
if self.rank == 0:
model_inference.eval()
with torch.autograd.profiler.profile() as prof:
for i in range(6):
inp = torch.randn(10, 2, 4, 4).cuda(rank)
out = model_inference(inp)
loss = out.sum()
loss.backward()
# Ensure sync does not occur in eval() mode.
if BACKEND == "nccl":
all_gather_calls = get_profiling_event("_all_gather_base", prof)
else:
all_gather_calls = get_profiling_event("all_gather", prof)
self.assertEqual([], all_gather_calls)
@skip_if_lt_x_gpu(2)
@sandcastle_skip_if(
BACKEND != "nccl" and BACKEND != "gloo",
"Only Nccl & Gloo backend support DistributedDataParallel",
)
def test_ddp_python_error_logged(self):
# Most python exceptions in DDP are raised during init before
# reducer is constructed, so we don't have a logger in those cases.
# However, the below is one example where a python error is thrown
# after reducer is constructed.
model = TwoLinLayerNet().cuda(self.rank)
model = torch.nn.parallel.DistributedDataParallel(
model,
device_ids=[self.rank],
)
expected_err = "must be callable"
with self.assertRaisesRegex(TypeError, expected_err):
model.register_comm_hook({}, {})
verify_ddp_error_logged(model, expected_err)
@skip_if_lt_x_gpu(2)
@sandcastle_skip_if(
BACKEND != "nccl" and BACKEND != "gloo",
"Only Nccl & Gloo backend support DistributedDataParallel",
)
def test_ddp_static_graph_nested_types(self):
# Tests for static graph training when outputs are not just tensors
# but can be (nested) tuple, list, dict, etc.
rank = self.rank
torch.cuda.set_device(rank)
class NestedOutputModule(torch.nn.Module):
def __init__(self):
super().__init__()
self.lin = nn.Linear(100, 1, bias=False)
def forward(self, inp, output_type):
if output_type == "tuple":
return (
self.lin(inp),
(
self.lin(inp),
self.lin(inp),
),
)
elif output_type == "list":
return [
self.lin(inp),
[
self.lin(inp),
self.lin(inp),
],
]
elif output_type == "dict":
return {
"a": self.lin(inp),
"b": {
"c": self.lin(inp),
},
}
def get_loss(model_output):
loss = 0.0
if isinstance(model_output, torch.Tensor):
return model_output.sum()
elif isinstance(model_output, dict):
for value in model_output.values():
loss += get_loss(value)
            elif isinstance(model_output, (tuple, list)):
for x in model_output:
loss += get_loss(x)
else:
raise ValueError(f"Unknown model output type {type(model_output)}")
return loss
model = NestedOutputModule().cuda(rank)
model_static_graph = copy.deepcopy(model)
model = torch.nn.parallel.DistributedDataParallel(
model,
device_ids=[rank],
)
model_static_graph = torch.nn.parallel.DistributedDataParallel(
            model_static_graph,
device_ids=[rank],
)
model_static_graph._set_static_graph()
inp = torch.randn(10, 100)
type_mapping = {
"list": list,
"tuple": tuple,
"dict": dict,
}
for output_type in type_mapping.keys():
for i in range(6):
out = model(inp, output_type=output_type)
loss = get_loss(out)
loss.backward()
self._model_step(model)
out_static = model_static_graph(inp, output_type=output_type)
self.assertTrue(isinstance(out_static, type_mapping[output_type]))
loss_static = get_loss(out_static)
loss_static.backward()
self._model_step(model_static_graph)
for (p, p_static) in zip(
model.parameters(), model_static_graph.parameters()
):
self.assertEqual(p, p_static)
@skip_if_lt_x_gpu(2)
@sandcastle_skip_if(
BACKEND != "nccl" and BACKEND != "gloo",
"Only Nccl & Gloo backend support DistributedDataParallel",
)
def test_detect_ddp_is_actually_static(self):
class ToyModel(nn.Module):
def __init__(self):
super(ToyModel, self).__init__()
self.net1 = nn.Linear(10, 10, bias=False)
self.net2 = nn.Linear(10, 10)
def forward(self, x, find_unused, dynamic):
if find_unused:
if dynamic:
return self.net2(self.net1(x))
else:
return self.net2(x)
else:
return self.net2(self.net1(x))
        # The set of unused parameters does not change across iterations.
torch.cuda.set_device(self.rank)
model = ToyModel().cuda()
for find_unused in [True, False]:
ddp = torch.nn.parallel.DistributedDataParallel(
model,
device_ids=[self.rank],
find_unused_parameters=find_unused,
)
inp = torch.randn(1, 10, device="cuda")
for _ in range(6):
out = ddp(inp, find_unused=find_unused, dynamic=False)
loss = out.sum()
loss.backward()
self.assertTrue(ddp.reducer._ddp_graph_static())
        # The set of unused parameters changes dynamically across iterations.
ddp = torch.nn.parallel.DistributedDataParallel(
model,
device_ids=[self.rank],
find_unused_parameters=True,
)
inp = torch.randn(1, 10, device="cuda")
for i in range(6):
out = ddp(inp, find_unused=True, dynamic=i % 2 == 0)
loss = out.sum()
loss.backward()
self.assertFalse(ddp.reducer._ddp_graph_static())
def _test_ddp_new_tensor_in_fwd(self, static_graph):
# Test from https://github.com/pytorch/pytorch/issues/60733
class MyModel(nn.Module):
def __init__(self):
super().__init__()
self.fc1 = nn.Linear(10, 10, bias=False)
self.fc2 = nn.Linear(10, 10, bias=False)
def __init_opt(self):
param = next(self.parameters())
opt = torch.randn(1, 10, device=param.device)
return opt
def forward(self, x, opt_1, opt_2, opt_nested):
x = F.relu(self.fc1(x))
x = self.fc2(x)
if opt_1 is None:
opt_1 = self.__init_opt()
if opt_2 is None:
opt_2 = self.__init_opt()
if opt_nested is None or not torch.is_tensor(opt_nested):
opt_nested = self.__init_opt()
# Test multiple tensors as well as newly created tensors
# within a struct.
return x, opt_1, opt_2, {"tensor": opt_nested}
model = MyModel().to(self.rank)
for find_unused in [True, False]:
ddp = DistributedDataParallel(
model,
device_ids=[self.rank],
output_device=self.rank,
broadcast_buffers=False,
find_unused_parameters=find_unused,
)
if static_graph:
ddp._set_static_graph()
opt = [None for _ in range(3)]
for i in range(2):
ddp.zero_grad()
x = torch.randn(1, 10, device=self.rank)
out, opt[0], opt[1], opt[2] = ddp(
x, opt_1=opt[0], opt_2=opt[1], opt_nested=opt[2]
)
for i in range(len(opt)):
if torch.is_tensor(opt[i]):
self.assertEqual(opt[i].grad_fn, None)
else:
self.assertEqual(opt[i]["tensor"].grad_fn, None)
out.mean().backward()
@skip_if_lt_x_gpu(2)
@sandcastle_skip_if(
BACKEND != "nccl" and BACKEND != "gloo",
"Only Nccl & Gloo backend support DistributedDataParallel",
)
def test_ddp_new_tensor_in_fwd(self):
return self._test_ddp_new_tensor_in_fwd(static_graph=False)
@skip_if_lt_x_gpu(2)
@sandcastle_skip_if(
BACKEND != "nccl" and BACKEND != "gloo",
"Only Nccl & Gloo backend support DistributedDataParallel",
)
def test_ddp_new_tensor_in_fwd_static_graph(self):
return self._test_ddp_new_tensor_in_fwd(static_graph=True)
import copy
import itertools
import math
import os
import random
import sys
import tempfile
import time
from collections import namedtuple
from contextlib import contextmanager, suppress
from datetime import timedelta
from functools import reduce
from typing import Union, NamedTuple, Callable, Any
import torch
import torch.cuda
import torch.distributed as dist
import torch.distributed.algorithms.ddp_comm_hooks.post_localSGD_hook as post_localSGD
import torch.distributed.algorithms.ddp_comm_hooks.powerSGD_hook as powerSGD
import torch.distributed.algorithms.model_averaging.averagers as averagers
import torch.distributed.algorithms.model_averaging.utils as model_averaging_utils
import torch.nn as nn
import torch.nn.functional as F
from torch._utils_internal import TEST_MASTER_ADDR as MASTER_ADDR
from torch._utils_internal import TEST_MASTER_PORT as MASTER_PORT
from torch.cuda.amp import GradScaler, autocast
from torch.distributed.algorithms.ddp_comm_hooks import default_hooks as default
from torch.distributed.algorithms.ddp_comm_hooks import (
quantization as quantization_hooks,
)
from torch.distributed.distributed_c10d import (
get_world_size,
_get_default_group,
AllreduceOptions,
GroupMember,
)
from torch.nn.parallel import DistributedDataParallel
from torch.nn.parallel.distributed import _dump_DDP_relevant_env_vars
from torch.testing._internal.common_distributed import (
MultiProcessTestCase,
TEST_SKIPS,
initialize_temp_directories,
cleanup_temp_dir,
simple_sparse_reduce_tests,
skip_if_rocm,
skip_if_small_worldsize,
skip_if_lt_x_gpu,
nccl_skip_if_lt_x_gpu,
skip_if_no_gpu,
require_n_gpus_for_nccl_backend,
requires_nccl_version,
captured_output,
with_nccl_blocking_wait,
with_dist_debug_levels,
verify_ddp_error_logged,
)
from torch.testing._internal.common_utils import (
IS_MACOS,
IS_WINDOWS,
FILE_SCHEMA,
IS_FBCODE,
NO_MULTIPROCESSING_SPAWN,
sandcastle_skip,
sandcastle_skip_if,
)
if not IS_WINDOWS:
import torch.distributed.optim.post_localSGD_optimizer as post_localSGD_optimizer
from torch.distributed.optim.functional_sgd import _FunctionalSGD
from torch.utils.data.distributed import DistributedSampler
try:
import torchvision
HAS_TORCHVISION = True
except ImportError:
HAS_TORCHVISION = False
if sys.platform == "win32":
import msvcrt
else:
import fcntl
class Foo:
def __init__(self, x):
self.x = x
def __eq__(self, other):
def eq(value, other):
if isinstance(value, torch.Tensor):
return torch.equal(value, other)
return value == other
for attr, value in self.__dict__.items():
other_value = other.__dict__[attr]
if not eq(value, other_value):
return False
return True
f = Foo(10)
f.bar = 1
foo_cpu_tensor = Foo(torch.randn(3, 3))
COLLECTIVES_OBJECT_TEST_LIST = [
{"key1": 3, "key2": 4, "key3": {"nested": True}},
f,
foo_cpu_tensor,
"foo",
[1, 2, True, "string", [4, 5, "nested"]],
]
PROFILING_SUPPORTED_BACKENDS = [
dist.Backend.NCCL,
dist.Backend.GLOO,
dist.Backend.MPI,
]
CUDA_PROFILING_SUPPORTED_BACKENDS = [
dist.Backend.GLOO,
dist.Backend.MPI,
dist.Backend.NCCL,
]
SEND_RECV_PROFILING_SUPPORTED_BACKENDS = [
dist.Backend.MPI,
dist.Backend.GLOO,
dist.Backend.NCCL,
]
EXPECTED_FIELDS = ("a", "b")
TestNamedTupleInput_0 = namedtuple("NamedTuple", EXPECTED_FIELDS)
class TestNamedTupleInput_1(NamedTuple):
    a: torch.Tensor
    b: torch.Tensor
skipIfNoTorchVision = sandcastle_skip_if(not HAS_TORCHVISION, "no torchvision")
BACKEND = os.environ["BACKEND"]
INIT_METHOD = os.getenv("INIT_METHOD", "env://")
DEFAULT_TIMEOUT = 300
CUSTOMIZED_TIMEOUT = {"test_DistributedDataParallel": 500}
def get_profiling_event(postfix, profiler):
event_list = (
profiler.events()
if isinstance(profiler, torch.profiler.profile)
else profiler.function_events
)
return [event for event in event_list if event.name.endswith(postfix)]
ddp_prev_reduction_unfinished_str = (
"Expected to have finished reduction in the prior iteration"
)
ddp_recommend_find_unused_params_str = (
"passing the keyword argument `find_unused_parameters=True`"
)
ddp_find_unused_params_enabled_str = "Since `find_unused_parameters=True` is enabled"
ddp_outputs_not_used_in_loss_str = (
"`forward` function outputs participate in calculating loss"
)
ddp_suggest_debug_mode_str = (
"set the environment variable TORCH_DISTRIBUTED_DEBUG to either INFO or DETAIL"
)
class DDPUnevenTestInput(NamedTuple):
name: str
model: nn.Module
    inp: Union[torch.Tensor, tuple]
sync_interval: int
throw_on_early_termination: bool = False
hook: Callable = None
state: Any = None
class _FC2(nn.Module):
def __init__(self):
super(_FC2, self).__init__()
self.fc = nn.Linear(10, 50, bias=True)
self.fc.bias.requires_grad = False
def forward(self, x):
x = self.fc(x)
return x
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
self.fc1 = nn.Linear(2, 10, bias=False)
self.fc2 = _FC2()
self.fc3 = nn.Linear(50, 4, bias=False)
self.relu = nn.ReLU()
self.no_grad_param = nn.Parameter(
torch.tensor([2, 2]).long(), requires_grad=False
)
def forward(self, x):
x = self.relu(self.fc1(x))
x = self.relu(self.fc2(x))
x = self.fc3(x)
return F.softmax(x, dim=1)
class LargeNet(nn.Module):
def __init__(self):
super(LargeNet, self).__init__()
self.fc1 = nn.Linear(1000, 2000, bias=False)
self.fc2 = nn.Linear(2000, 500, bias=False)
def forward(self, x):
x = self.fc1(x)
x = self.fc2(x)
return x
class Task(nn.Module):
def __init__(self):
super().__init__()
self.p = nn.Parameter(torch.ones(2, 2))
def forward(self, x):
return self.p + x
class BatchNormNet(nn.Module):
def __init__(self, affine=True):
super(BatchNormNet, self).__init__()
self.fc1 = nn.Linear(2, 40, bias=False)
self.bn = nn.BatchNorm1d(4, affine=affine)
self.fc2 = nn.Linear(40, 4, bias=False)
def forward(self, x):
x = torch.reshape(self.fc1(x), (-1, 4, 10))
x = self.bn(x)
x = torch.reshape(x, (-1, 40))
x = self.fc2(x)
return F.softmax(x, dim=1)
class TwoLinLayerNet(nn.Module):
def __init__(self):
super().__init__()
self.a = nn.Linear(10, 10, bias=False)
self.b = nn.Linear(10, 10, bias=False)
def forward(self, x):
a = self.a(x)
b = self.b(x)
return (a, b)
class EmbeddingNet(nn.Module):
def __init__(self, rank):
super().__init__()
embedding_dim = 500 if rank == 0 else 50
self.embedding = nn.Embedding(num_embeddings=10, embedding_dim=embedding_dim)
self.lin = nn.Linear(embedding_dim, 1)
def forward(self, x):
x = self.embedding(x)
return self.lin(x)
class ControlFlowToyModel(nn.Module):
def __init__(self):
super(ControlFlowToyModel, self).__init__()
self.lin1 = nn.Linear(10, 10, bias=False)
self.lin2 = nn.Linear(10, 10, bias=False)
def forward(self, x):
use_second_layer = torch.equal(x, torch.ones(20, 10, device=x.device))
if use_second_layer:
return self.lin2(F.relu(self.lin1(x)))
else:
return F.relu(self.lin1(x))
DDP_NET = Net()
BN_NET = BatchNormNet()
BN_NET_NO_AFFINE = BatchNormNet(affine=False)
ONLY_SBN_NET = nn.SyncBatchNorm(2, momentum=0.99)
def get_timeout(test_id):
test_name = test_id.split(".")[-1]
if test_name in CUSTOMIZED_TIMEOUT:
return CUSTOMIZED_TIMEOUT[test_name]
else:
return DEFAULT_TIMEOUT
default_pg_timeout = 60
CUSTOM_PG_TIMEOUT = {
"test_ddp_uneven_inputs": 300,
"test_ddp_model_diff_across_ranks": 5,
}
def require_backend(backends):
if BACKEND not in backends:
return sandcastle_skip("Test requires backend to be one of %s" % backends)
return lambda func: func
def require_backends_available(backends):
def check(backend):
if backend == dist.Backend.GLOO:
return dist.is_gloo_available()
if backend == dist.Backend.NCCL:
return dist.is_nccl_available()
if backend == dist.Backend.MPI:
return dist.is_mpi_available()
return False
if not all(check(dist.Backend(backend)) for backend in backends):
return sandcastle_skip("Test requires backends to be available %s" % backends)
return lambda func: func
def require_world_size(world_size):
if int(os.environ["WORLD_SIZE"]) < world_size:
return sandcastle_skip("Test requires world size of %d" % world_size)
return lambda func: func
def apply_hack_for_nccl():
os.environ["NCCL_MAX_NRINGS"] = "1"
@contextmanager
def _lock():
TEMP_DIR = os.environ["TEMP_DIR"]
lockfile = os.path.join(TEMP_DIR, "lockfile")
with open(lockfile, "w") as lf:
try:
if sys.platform == "win32":
msvcrt.locking(lf.fileno(), msvcrt.LK_RLCK, 1)
yield
else:
fcntl.flock(lf.fileno(), fcntl.LOCK_EX)
yield
finally:
if sys.platform == "win32":
msvcrt.locking(lf.fileno(), msvcrt.LK_UNLCK, 1)
else:
fcntl.flock(lf.fileno(), fcntl.LOCK_UN)
lf.close()
def _build_tensor(size, value=None, dtype=torch.float, device_id=None):
if value is None:
value = size
if device_id is None:
return torch.empty(size, size, size, dtype=dtype).fill_(value)
else:
return torch.empty(size, size, size, dtype=dtype).fill_(value).cuda(device_id)
def _build_multidim_tensor(dim, dim_size, value=None, dtype=torch.float):
if value is None:
        value = dim_size
return torch.empty(size=[dim_size for _ in range(dim)], dtype=dtype).fill_(value)
def _create_autograd_profiler():
return torch.autograd.profiler.profile(record_shapes=True)
def _create_torch_profiler():
return torch.profiler.profile(
activities=[
torch.profiler.ProfilerActivity.CPU,
],
record_shapes=True,
)
class Barrier(object):
barrier_id = 0
@classmethod
def init(cls):
cls.barrier_id = 0
barrier_dir = os.path.join(os.environ["TEMP_DIR"], "barrier")
for f_name in os.listdir(barrier_dir):
os.unlink(os.path.join(barrier_dir, f_name))
@classmethod
def sync(cls, wait_for=None, timeout=10):
if wait_for is None:
wait_for = dist.get_world_size()
cls.barrier_id += 1
barrier_dir = os.path.join(os.environ["TEMP_DIR"], "barrier")
pid = str(os.getpid())
barrier_file = os.path.join(barrier_dir, pid)
with _lock():
with open(barrier_file, "w") as f:
f.write(str(cls.barrier_id))
start_time = time.time()
while True:
arrived = 0
with _lock():
for f_name in os.listdir(barrier_dir):
with open(os.path.join(barrier_dir, f_name), "r") as f:
data = f.read()
if int(data) >= cls.barrier_id:
arrived += 1
if arrived == wait_for:
break
if time.time() - start_time > timeout:
raise RuntimeError("barrier timeout")
time.sleep(0.1)
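The `Barrier` class coordinates the spawned test processes through the filesystem rather than through `torch.distributed`, so it works even before a process group exists. A stripped-down, stdlib-only sketch of the same idea is below (`file_barrier` and its signature are illustrative, not part of the test suite):

```python
import os
import tempfile
import time


def file_barrier(barrier_dir, barrier_id, pid, wait_for, timeout=10.0):
    # Announce arrival: write the current barrier id to a per-pid file.
    with open(os.path.join(barrier_dir, str(pid)), "w") as f:
        f.write(str(barrier_id))
    # Poll until `wait_for` files report an id >= barrier_id, or time out.
    deadline = time.time() + timeout
    while True:
        arrived = 0
        for name in os.listdir(barrier_dir):
            with open(os.path.join(barrier_dir, name)) as f:
                if int(f.read()) >= barrier_id:
                    arrived += 1
        if arrived >= wait_for:
            return arrived
        if time.time() > deadline:
            raise RuntimeError("barrier timeout")
        time.sleep(0.05)


# Single-process demo: with wait_for=1 the barrier is satisfied immediately.
with tempfile.TemporaryDirectory() as d:
    arrived = file_barrier(d, barrier_id=1, pid=os.getpid(), wait_for=1)
```

The real implementation additionally serializes file access with the `_lock()` context manager above; that is omitted here since a single process needs no lock.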
class TestDistBackend(MultiProcessTestCase):
@classmethod
def setUpClass(cls):
os.environ["MASTER_ADDR"] = str(MASTER_ADDR)
os.environ["MASTER_PORT"] = str(MASTER_PORT)
os.environ["NCCL_ASYNC_ERROR_HANDLING"] = "1"
super().setUpClass()
def setUp(self):
super().setUp()
initialize_temp_directories()
Barrier.init()
self.skip_return_code_checks = []
def tearDown(self):
cleanup_temp_dir()
super().tearDown()
@property
def init_method(self):
return "{}{file_name}".format(FILE_SCHEMA, file_name=self.file_name)
@classmethod
def _run(cls, rank, test_name, file_name, pipe):
if BACKEND == "nccl" and not torch.cuda.is_available():
sys.exit(TEST_SKIPS["no_cuda"].exit_code)
self = cls(test_name)
self.rank = rank
self.file_name = file_name
if torch.cuda.is_available() and torch.cuda.device_count() < int(
self.world_size
):
sys.exit(TEST_SKIPS[f"multi-gpu-{self.world_size}"].exit_code)
try:
pg_timeout_seconds = CUSTOM_PG_TIMEOUT.get(test_name, default_pg_timeout)
timeout = timedelta(seconds=pg_timeout_seconds)
dist.init_process_group(
init_method=self.init_method,
backend=BACKEND,
world_size=int(self.world_size),
rank=self.rank,
timeout=timeout,
)
except RuntimeError as e:
if "recompile" in e.args[0]:
sys.exit(TEST_SKIPS["backend_unavailable"].exit_code)
raise
self._barrier()
self.run_test(test_name, pipe)
self._barrier()
dist.destroy_process_group()
sys.exit(0)
# Needed since MultiProcessTestCase assumes a world_size of 4, but we
# run these tests under other various world_sizes.
@property
def world_size(self):
return os.environ["WORLD_SIZE"]
class DistributedTest:
class _DistTestBase:
def _barrier(self, *args, **kwargs):
Barrier.sync(*args, **kwargs)
def _init_group_test(self, **kwargs):
group = [1, 2]
group_id = dist.new_group(group, **kwargs)
rank = dist.get_rank()
if rank not in group:
return ([], None, rank)
return (group, group_id, rank)
def _init_full_group_test(self, **kwargs):
group = list(range(0, dist.get_world_size()))
group_id = dist.new_group(**kwargs)
rank = dist.get_rank()
return (group, group_id, rank)
def _init_global_test(self):
group = list(range(0, dist.get_world_size()))
group_id = dist.group.WORLD
rank = dist.get_rank()
return (group, group_id, rank)
# HELPER FOR MULTIGPU TESTS
def _init_multigpu_helper(self):
nGPUs = torch.cuda.device_count()
world_size = dist.get_world_size()
visible_devices = range(nGPUs)
if BACKEND == "nccl":
apply_hack_for_nccl()
            # Each rank is assigned a contiguous block of nGPUs_per_process visible devices.
nGPUs_per_process = 1
if world_size > nGPUs:
nGPUs_per_process = nGPUs // world_size
rank_to_GPU = {
i: list(
visible_devices[i * nGPUs_per_process : (i + 1) * nGPUs_per_process]
)
for i in range(world_size)
}
return rank_to_GPU
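`_init_multigpu_helper` hands each rank a contiguous slice of the visible devices. The slicing arithmetic can be reproduced without CUDA; in this sketch `n_gpus` is passed in as a plain argument (an assumption for illustration — the real helper queries `torch.cuda.device_count()`):

```python
def rank_to_gpu_map(n_gpus, world_size):
    # Mirror of the slicing in _init_multigpu_helper: each rank gets a
    # contiguous block of n_per_process visible device indices.
    visible = list(range(n_gpus))
    n_per_process = 1
    if world_size > n_gpus:
        n_per_process = n_gpus // world_size
    return {
        i: visible[i * n_per_process : (i + 1) * n_per_process]
        for i in range(world_size)
    }


# 4 ranks over 8 GPUs: each rank is pinned to exactly one device.
mapping = rank_to_gpu_map(8, 4)  # {0: [0], 1: [1], 2: [2], 3: [3]}
```

Note that when `world_size` exceeds `n_gpus` the integer quotient is zero and every rank receives an empty slice, which is one reason the GPU tests above are guarded by decorators such as `@skip_if_lt_x_gpu`.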
def test_dump_DDP_relevant_env_vars(self):
with captured_output() as (out, _):
_dump_DDP_relevant_env_vars()
lines = out.getvalue().splitlines()
def format_line(var):
return "env:%s=%s" % (
var,
os.environ[var] if var in os.environ else "N/A",
)
vars = [
"MASTER_ADDR",
"MASTER_PORT",
"WORLD_SIZE",
"NCCL_TOPO_DUMP_FILE",
"NCCL_ASYNC_ERROR_HANDLING",
]
for var in vars:
line = format_line(var)
self.assertIn(line, lines)
vars = [
"xxx",
"yyy",
"zzz",
]
for var in vars:
line = format_line(var)
self.assertNotIn(line, lines)
def test_get_rank(self):
test_dir = os.path.join(os.environ["TEMP_DIR"], "test_dir")
pid = str(os.getpid())
num_processes = dist.get_world_size()
with open(os.path.join(test_dir, pid), "w") as f:
f.write(str(dist.get_rank()))
self._barrier()
all_ranks = set()
for f_name in os.listdir(test_dir):
with open(os.path.join(test_dir, f_name), "r") as f:
all_ranks.add(int(f.read()))
self.assertEqual(len(all_ranks), num_processes)
self._barrier()
if dist.get_rank() == 0:
for f_name in os.listdir(test_dir):
os.unlink(os.path.join(test_dir, f_name))
self._barrier()
def test_get_backend(self):
if dist.get_world_size() > 2:
group = [1, 2]
else:
group = [0, 1]
group_id = dist.new_group(group)
backend_str = BACKEND.lower()
self.assertEqual(dist.get_backend(), backend_str)
if dist.get_rank() in group:
self.assertEqual(dist.get_backend(group_id), backend_str)
else:
with self.assertRaisesRegex(
RuntimeError, "Invalid process group specified"
):
dist.get_backend(group_id)
def test_Backend_enum_class(self):
backend = BACKEND.lower()
self.assertEqual(dist.Backend(BACKEND.upper()), backend)
self.assertEqual(dist.Backend(BACKEND), backend)
with self.assertRaisesRegex(ValueError, "Invalid backend: 'undefined'"):
dist.Backend("undefined")
with self.assertRaisesRegex(ValueError, "Invalid backend: 'xYz'"):
dist.Backend("xYz")
with self.assertRaises(ValueError):
dist.Backend(None)
with self.assertRaises(ValueError):
dist.Backend(3)
with self.assertRaises(ValueError):
dist.Backend(["gloo"])
def test_destroy_group(self):
if dist.get_world_size() > 2:
group = [1, 2]
else:
group = [0, 1]
group_id = dist.new_group(group)
self._barrier()
dist.destroy_process_group(group_id)
def test_get_rank_size_group(self):
if dist.get_world_size() > 2:
group = [1, 2]
else:
group = [0, 1]
group_id = dist.new_group(group)
if dist.get_rank() in group:
self.assertEqual(dist.get_world_size(group_id), 2)
self.assertTrue(dist.get_rank(group_id) in list(range(2)))
else:
self.assertEqual(dist.get_world_size(group_id), -1)
self.assertEqual(dist.get_rank(group_id), -1)
def test_destroy_full_group(self):
_, group_id, _ = self._init_full_group_test()
self._barrier()
dist.destroy_process_group(group_id)
def test_get_rank_size_full_group(self):
_, group_id, _ = self._init_full_group_test()
self.assertEqual(dist.get_world_size(group_id), dist.get_world_size())
self.assertEqual(dist.get_rank(group_id), dist.get_rank())
def _test_barrier_timeout(self, group_id, timeout):
local_rank = dist.get_rank(group_id)
if local_rank == 0:
expected_time = time.time() + timeout.total_seconds()
if dist._get_debug_mode() == dist._DistributedDebugLevel.DETAIL:
exception_ctx = self.assertRaisesRegex(
Exception, "failed to pass monitoredBarrier"
)
else:
exception_ctx = self.assertRaisesRegex(
Exception, " (Timed out|closed|timeout) "
)
with exception_ctx:
dist.barrier(group_id)
self.assertGreaterAlmostEqual(time.time(), expected_time, delta=0.1)
else:
pass
@sandcastle_skip_if(BACKEND != "gloo", "Only gloo backend supports timeouts")
@sandcastle_skip_if(
not INIT_METHOD.startswith("file://"),
            "Requires file:// initialization method. "
            "Both tcp:// and env:// rely on the TCP store for which "
            "reinitialization has proven racy.",
)
def test_barrier_timeout_global(self):
dist.destroy_process_group()
            # Explicitly pass world size to the barrier because we've
            # just destroyed any state in torch.distributed.
self._barrier(wait_for=int(os.environ["WORLD_SIZE"]))
# Reinitialize global process group
timeout = timedelta(seconds=1)
dist.init_process_group(
init_method=INIT_METHOD,
backend=BACKEND,
world_size=int(os.environ["WORLD_SIZE"]),
rank=self.rank,
timeout=timeout,
)
self._test_barrier_timeout(dist.group.WORLD, timeout)
@skip_if_small_worldsize
@sandcastle_skip_if(BACKEND != "gloo", "Only gloo backend supports timeouts")
def test_barrier_timeout_group(self):
timeout = timedelta(seconds=5)
_, group_id, _ = self._init_group_test(timeout=timeout)
if group_id is not None:
self._test_barrier_timeout(group_id, timeout)
@sandcastle_skip_if(BACKEND != "gloo", "Only gloo backend supports timeouts")
def test_barrier_timeout_full_group(self):
timeout = timedelta(seconds=1)
_, group_id, _ = self._init_full_group_test(timeout=timeout)
if group_id is not None:
self._test_barrier_timeout(group_id, timeout)
# This test helper can only be used when using the Gloo or NCCL backend
# **and** both the Gloo and NCCL backends are available.
# See the @skip annotations below.
def _test_group_override_backend(self, initializer):
if BACKEND == "gloo":
new_backend = "nccl"
if BACKEND == "nccl":
new_backend = "gloo"
group, group_id, rank = initializer(backend=new_backend)
if group_id is None:
return
if new_backend == "gloo":
self.assertTrue(isinstance(group_id, dist.ProcessGroupGloo))
if new_backend == "nccl":
self.assertTrue(isinstance(group_id, dist.ProcessGroupNCCL))
self.assertEqual(rank, group[dist.get_rank(group_id)])
self.assertEqual(len(group), dist.get_world_size(group_id))
# Pin device (so we avoid NCCL race conditions/deadlocks).
group_rank = dist.get_rank(group_id)
torch.cuda.set_device(group_rank)
# Run broadcast of CUDA tensor (so it works for both Gloo and NCCL).
tensor = _build_tensor(2, value=group_rank).cuda()
dist.broadcast(tensor, src=group[0], group=group_id)
self.assertEqual(_build_tensor(2, value=0), tensor.to("cpu"))
@require_backend({"gloo", "nccl"})
@require_backends_available({"gloo", "nccl"})
@require_world_size(3)
@skip_if_lt_x_gpu(2)
def test_backend_group(self):
self._test_group_override_backend(self._init_group_test)
@require_backend({"gloo", "nccl"})
@require_backends_available({"gloo", "nccl"})
@skip_if_lt_x_gpu(3)
def test_backend_full_group(self):
self._test_group_override_backend(self._init_full_group_test)
@sandcastle_skip_if(
BACKEND != "nccl" and BACKEND != "gloo",
"MPI backend does not support creating subgroups on CUDA devices",
)
@require_world_size(4)
@skip_if_lt_x_gpu(2)
def test_new_subgroups(self):
subgroup_size = 2
cur_subgroup, subgroups = dist.new_subgroups(subgroup_size)
world_size = dist.get_world_size()
self.assertEqual(cur_subgroup.size(), subgroup_size)
self.assertEqual(len(subgroups), world_size / subgroup_size)
self.assertFalse(dist._rank_not_in_group(cur_subgroup))
for subgroup in subgroups:
dist.destroy_process_group(subgroup)
@sandcastle_skip_if(
BACKEND != "nccl" and BACKEND != "gloo",
"MPI backend does not support creating subgroups on CUDA devices",
)
@skip_if_no_gpu
def test_new_subgroups_group_size_exceeds_world_size(self):
with self.assertRaisesRegex(
ValueError, "The arg 'group_size' must not exceed the world size"
):
dist.new_subgroups(100)
@sandcastle_skip_if(
BACKEND != "nccl" and BACKEND != "gloo",
"MPI backend does not support creating subgroups on CUDA devices",
)
@require_world_size(4)
@skip_if_lt_x_gpu(4)
def test_new_subgroups_world_size_not_divisible_by_group_size(self):
with self.assertRaisesRegex(
ValueError, "The world size must be divisible by 'group_size'"
):
dist.new_subgroups(3)
@sandcastle_skip_if(
BACKEND != "nccl" and BACKEND != "gloo",
"MPI backend does not support creating subgroups on CUDA devices",
)
@require_world_size(4)
@skip_if_lt_x_gpu(4)
def test_new_subgroups_by_enumeration(self):
group, group_id, rank = self._init_global_test()
rank_to_GPU = self._init_multigpu_helper()
device_id = rank_to_GPU[rank][0]
cur_subgroup, subgroups = dist.new_subgroups_by_enumeration(
ranks_per_subgroup_list=[[0, 2], [1, 3]]
)
if device_id >= 4:
self.assertIsNone(cur_subgroup)
else:
self.assertEqual(cur_subgroup.size(), 2)
self.assertEqual(len(subgroups), 2)
if device_id == 0 or device_id == 2:
self.assertEqual(cur_subgroup, subgroups[0])
else:
self.assertEqual(cur_subgroup, subgroups[1])
for subgroup in subgroups:
dist.destroy_process_group(subgroup)
@sandcastle_skip_if(
BACKEND != "nccl" and BACKEND != "gloo",
"MPI backend does not support creating subgroups on CUDA devices",
)
@require_world_size(4)
@skip_if_lt_x_gpu(4)
def test_new_subgroups_by_enumeration_input_rank_exceeds_world_size(self):
group, group_id, rank = self._init_global_test()
rank_to_GPU = self._init_multigpu_helper()
device_id = rank_to_GPU[rank][0]
world_size = get_world_size(group_id)
with self.assertRaisesRegex(
RuntimeError,
"The new group's rank should be within the the world_size set by init_process_group",
):
dist.new_subgroups_by_enumeration(
ranks_per_subgroup_list=[[0, 1], [world_size, 2]]
)
@sandcastle_skip_if(
BACKEND != "nccl" and BACKEND != "gloo",
"MPI backend does not support creating subgroups on CUDA devices",
)
@skip_if_no_gpu
def test_new_subgroups_by_enumeration_negative_input_rank(self):
group, group_id, rank = self._init_global_test()
with self.assertRaisesRegex(
RuntimeError,
"The new group's rank should be within the the world_size set by init_process_group",
):
dist.new_subgroups_by_enumeration(
ranks_per_subgroup_list=[[-1, -2], [-3, -4]]
)
@sandcastle_skip_if(
BACKEND != "nccl" and BACKEND != "gloo",
"MPI backend does not support creating subgroups on CUDA devices",
)
@require_world_size(4)
@skip_if_lt_x_gpu(4)
def test_new_subgroups_overlap_not_allowed(self):
with self.assertRaisesRegex(
ValueError, "Rank 1 has appeared in both subgroup"
):
dist.new_subgroups_by_enumeration(
ranks_per_subgroup_list=[[0], [1, 2], [1, 3]]
)
@sandcastle_skip_if(
BACKEND != "nccl" and BACKEND != "gloo",
"MPI backend does not support creating subgroups on CUDA devices",
)
@skip_if_lt_x_gpu(2)
def test_average_parameters(self):
rank = dist.get_rank()
rank_to_GPU = self._init_multigpu_helper()
device_id = rank_to_GPU[rank][0]
model = nn.Sequential(
nn.Conv2d(3, 3, kernel_size=3, padding=1),
nn.ReLU(),
nn.Linear(1, 5, bias=False),
).cuda(device_id)
# Test global model averaging
for p in model.parameters():
p.data = torch.ones_like(p.data)
model_averaging_utils.average_parameters(
params=model.parameters(), process_group=None
)
# Every element will be the same as the input.
for p in model.parameters():
self.assertEqual(p.data, torch.ones_like(p.data))
# Test partial model averaging
for p in model.parameters():
p.data = torch.ones_like(p.data) * rank
group_nccl = dist.new_group(ranks=[0, 1], backend="nccl")
model_averaging_utils.average_parameters(
params=model.parameters(), process_group=group_nccl
)
if not dist._rank_not_in_group(group_nccl):
# Every element on device 0 or 1 should be the average of 0 and 1, i.e., 0.5.
for p in model.parameters():
self.assertEqual(p.data, torch.ones_like(p.data) * 0.5)
else:
# Every element on device not in the subgroup should remain the same.
for p in model.parameters():
self.assertEqual(p.data, torch.ones_like(p.data) * rank)
@sandcastle_skip_if(
BACKEND != "nccl" and BACKEND != "gloo",
"MPI backend does not support creating subgroups on CUDA devices",
)
@skip_if_lt_x_gpu(2)
def test_periodic_model_averager(self):
rank = dist.get_rank()
rank_to_GPU = self._init_multigpu_helper()
device_id = rank_to_GPU[rank][0]
world_size = dist.get_world_size()
model = nn.Linear(1, 5, bias=False).cuda(device_id)
param = next(model.parameters())
tensor = torch.ones_like(param.data) * rank
expected_avg_tensor = (
torch.ones_like(param.data) * sum(range(world_size)) / world_size
)
period = 4
for warmup_steps in [12, 13, 14, 15]:
averager = averagers.PeriodicModelAverager(period=period, warmup_steps=warmup_steps)
for step in range(0, 20):
# Reset the parameters at every step.
param.data = copy.deepcopy(tensor)
averager.average_parameters(model.parameters())
if step >= warmup_steps and (step - warmup_steps) % period == 0:
self.assertEqual(param.data, expected_avg_tensor)
else:
# No model averaging, so the parameters are not updated.
self.assertEqual(param.data, tensor)
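The assertion inside the loop encodes the averaging schedule: nothing happens during warmup, then parameters are averaged every `period` steps. A plain-Python sketch of that schedule (`averaging_steps` is a hypothetical helper, used here only to make the condition concrete):

```python
def averaging_steps(total_steps, warmup_steps, period):
    # Steps at which PeriodicModelAverager is expected to average:
    # warmup is over and the offset from warmup is a multiple of period.
    return [
        step
        for step in range(total_steps)
        if step >= warmup_steps and (step - warmup_steps) % period == 0
    ]


# 20 steps, 12-step warmup, period 4: averaging fires at steps 12 and 16.
fired = averaging_steps(20, 12, 4)
```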
# NCCL Batch SEND RECV
@skip_if_no_gpu
@sandcastle_skip_if(BACKEND != "nccl", "NCCL Batch Send Recv Only")
@requires_nccl_version(2700, "Need NCCL 2.7+ for send/recv")
def test_batch_isend_irecv_nccl(self):
self._barrier()
rank = dist.get_rank()
rank_to_GPU = self._init_multigpu_helper()
device_id = rank_to_GPU[rank][0]
torch.cuda.set_device(device_id)
p2p_op_list = []
for val in ["1", "0"]:
os.environ["NCCL_BLOCKING_WAIT"] = val
for src in range(0, dist.get_world_size()):
send_tensor = _build_tensor(rank + 1, device_id=device_id)
recv_tensor = _build_tensor(src + 1, value=-1, device_id=device_id)
recv_op = dist.P2POp(dist.irecv, recv_tensor, src)
p2p_op_list.append(recv_op)
send_op = dist.P2POp(dist.isend, send_tensor, src)
p2p_op_list.append(send_op)
reqs = dist.batch_isend_irecv(p2p_op_list)
for req in reqs:
req.wait()
self._barrier()
@skip_if_no_gpu
@sandcastle_skip_if(BACKEND != "nccl", "NCCL Batch Send Recv Only")
@requires_nccl_version(2700, "Need NCCL 2.7+ for send/recv")
def test_batch_isend_irecv_self_nccl(self):
self._barrier()
rank = dist.get_rank()
rank_to_GPU = self._init_multigpu_helper()
device_id = rank_to_GPU[rank][0]
p2p_op_list = []
if rank == 0:
send_tensor = _build_tensor(rank + 1, device_id=device_id)
recv_tensor = _build_tensor(rank + 1, value=-1, device_id=device_id)
recv_op = dist.P2POp(dist.irecv, recv_tensor, 0)
p2p_op_list.append(recv_op)
send_op = dist.P2POp(dist.isend, send_tensor, 0)
p2p_op_list.append(send_op)
reqs = dist.batch_isend_irecv(p2p_op_list)
for req in reqs:
req.wait()
self._barrier()
@skip_if_no_gpu
@skip_if_small_worldsize
@sandcastle_skip_if(BACKEND != "nccl", "NCCL Batch Send Recv Only")
@requires_nccl_version(2700, "Need NCCL 2.7+ for send/recv")
def test_batch_isend_irecv_no_rank_zero_nccl(self):
self._barrier()
rank = dist.get_rank()
rank_to_GPU = self._init_multigpu_helper()
device_id = rank_to_GPU[rank][0]
torch.cuda.set_device(device_id)
p2p_op_list = []
if rank == 1:
peer = 2
elif rank == 2:
peer = 1
if rank in [1, 2]:
send_tensor = _build_tensor(rank + 1, device_id=device_id)
recv_tensor = _build_tensor(peer + 1, value=-1, device_id=device_id)
recv_op = dist.P2POp(dist.irecv, recv_tensor, peer)
p2p_op_list.append(recv_op)
send_op = dist.P2POp(dist.isend, send_tensor, peer)
p2p_op_list.append(send_op)
reqs = dist.batch_isend_irecv(p2p_op_list)
for req in reqs:
req.wait()
self._barrier()
# GLOO Batch SEND RECV CPU
@sandcastle_skip_if(BACKEND != "gloo", "GLOO Batch Send Recv CPU")
def test_batch_isend_irecv_gloo(self):
self._barrier()
rank = dist.get_rank()
p2p_op_list = []
for src in range(0, dist.get_world_size()):
if src == rank:
continue
send_tensor = _build_tensor(rank + 1)
recv_tensor = _build_tensor(src + 1, value=-1)
recv_op = dist.P2POp(dist.irecv, recv_tensor, src)
p2p_op_list.append(recv_op)
send_op = dist.P2POp(dist.isend, send_tensor, src)
p2p_op_list.append(send_op)
reqs = dist.batch_isend_irecv(p2p_op_list)
for req in reqs:
req.wait()
self._barrier()
# GLOO Batch SEND RECV CPU with provided tags
@sandcastle_skip_if(BACKEND != "gloo", "GLOO Batch Send Recv CPU")
def test_batch_isend_irecv_gloo_tags(self):
self._barrier()
rank = dist.get_rank()
p2p_op_list = []
for src in range(0, dist.get_world_size()):
if src == rank:
continue
send_tensor = _build_tensor(rank + 1)
recv_tensor = _build_tensor(src + 1, value=-1)
recv_op = dist.P2POp(dist.irecv, recv_tensor, src, tag=src)
p2p_op_list.append(recv_op)
send_op = dist.P2POp(dist.isend, send_tensor, src, tag=rank)
p2p_op_list.append(send_op)
reqs = dist.batch_isend_irecv(p2p_op_list)
for req in reqs:
req.wait()
self._barrier()
# NCCL Batch SEND RECV Tensor Error
@sandcastle_skip_if(BACKEND != "nccl", "NCCL Batch Send Recv Only")
@requires_nccl_version(2700, "Need NCCL 2.7+ for send/recv")
def test_batch_isend_irecv_tensor_err(self):
self._barrier()
rank = dist.get_rank()
if rank == 0:
rank_to_GPU = self._init_multigpu_helper()
device_id = rank_to_GPU[rank][0]
with self.assertRaisesRegex(
RuntimeError, "Tensors must be CUDA and dense"
):
send_tensor = _build_tensor(rank + 1)
send_op = dist.P2POp(dist.isend, send_tensor, 1)
req = dist.batch_isend_irecv([send_op])
req.wait()
# NCCL Batch SEND RECV Op Error
@sandcastle_skip_if(BACKEND != "nccl", "NCCL Batch Send Recv Only")
@requires_nccl_version(2700, "Need NCCL 2.7+ for send/recv")
def test_batch_isend_irecv_op_err(self):
self._barrier()
rank = dist.get_rank()
if rank == 0:
rank_to_GPU = self._init_multigpu_helper()
device_id = rank_to_GPU[rank][0]
with self.assertRaisesRegex(RuntimeError, "^Invalid ``op``"):
send_tensor = _build_tensor(rank + 1, device_id=device_id)
send_op = dist.P2POp(dist.broadcast, send_tensor, 1)
req = dist.batch_isend_irecv([send_op])
req.wait()
# NCCL Batch SEND RECV p2p_op_list Error
@sandcastle_skip_if(BACKEND != "nccl", "NCCL Batch Send Recv Only")
@requires_nccl_version(2700, "Need NCCL 2.7+ for send/recv")
def test_batch_isend_irecv_op_list_err(self):
self._barrier()
rank = dist.get_rank()
if rank == 0:
rank_to_GPU = self._init_multigpu_helper()
device_id = rank_to_GPU[rank][0]
with self.assertRaisesRegex(RuntimeError, "^Invalid ``p2p_op_list``"):
send_tensor = _build_tensor(rank + 1)
req = dist.batch_isend_irecv([1, 2])
req.wait()
# NCCL Batch SEND RECV Mixed Backend Error
@sandcastle_skip_if(BACKEND != "nccl", "NCCL Batch Send Recv Only")
@requires_nccl_version(2700, "Need NCCL 2.7+ for send/recv")
def test_batch_isend_irecv_mixed_backend_err(self):
self._barrier()
rank = dist.get_rank()
rank_to_GPU = self._init_multigpu_helper()
device_id = rank_to_GPU[rank][0]
group_gloo = dist.new_group(ranks=[0, 1], backend="gloo")
group_nccl = dist.new_group(ranks=[0, 1], backend="nccl")
if rank == 0:
with self.assertRaisesRegex(
RuntimeError, "All groups need to use the same backend"
):
send_tensor = _build_tensor(rank + 1)
send_op_gloo = dist.P2POp(dist.isend, send_tensor, 1, group_gloo)
send_op_nccl = dist.P2POp(dist.isend, send_tensor, 1, group_nccl)
req = dist.batch_isend_irecv([send_op_gloo, send_op_nccl])
req.wait()
# NCCL SEND RECV
@skip_if_no_gpu
@sandcastle_skip_if(BACKEND != "nccl", "NCCL Send Recv Only")
@requires_nccl_version(2700, "Need NCCL 2.7+ for send/recv")
def _test_send_recv_nccl(self, profiler_ctx=None):
# TODO: now that nccl send/recv is supported, there does not seem to
# be a need to have nccl send/recv be tested separately.
rank = dist.get_rank()
rank_to_GPU = self._init_multigpu_helper()
device_id = rank_to_GPU[rank][0]
torch.cuda.set_device(device_id)
tensor = _build_tensor(rank + 1, device_id=device_id)
profiler_cls = profiler_ctx if profiler_ctx is not None else suppress()
with profiler_cls as prof:
for src in range(0, dist.get_world_size()):
if src == rank:
# Send mode
for dst in range(0, dist.get_world_size()):
if dst == rank:
continue
dist.send(tensor, dst)
else:
# Recv mode
expected_tensor = _build_tensor(src + 1)
output_tensor = _build_tensor(
src + 1, value=-1, device_id=device_id
)
dist.recv(output_tensor, src)
self.assertEqual(output_tensor, expected_tensor)
self._barrier()
if profiler_ctx is not None:
backend = dist.get_backend()
if backend in SEND_RECV_PROFILING_SUPPORTED_BACKENDS:
for event_name in [f"{backend}:send", f"{backend}:recv"]:
events = get_profiling_event(event_name, prof)
self.assertTrue(events)
# Event order is not deterministic, so simply assert their shape
# is found in the following list.
expected_shapes = [
[[rank + 1] * 3] for rank in range(dist.get_world_size())
]
for event in events:
self.assertTrue(event.input_shapes in expected_shapes)
@skip_if_no_gpu
@sandcastle_skip_if(BACKEND != "nccl", "NCCL Send Recv Only")
@requires_nccl_version(2700, "Need NCCL 2.7+ for send/recv")
def test_send_recv_nccl(self):
self._test_send_recv_nccl()
@skip_if_no_gpu
@sandcastle_skip_if(BACKEND != "nccl", "NCCL Send Recv Only")
@requires_nccl_version(2700, "Need NCCL 2.7+ for send/recv")
def test_send_recv_nccl_autograd_profiler(self):
profiler_ctx = torch.autograd.profiler.profile(record_shapes=True)
self._test_send_recv_nccl(profiler_ctx)
@skip_if_no_gpu
@sandcastle_skip_if(BACKEND != "nccl", "NCCL Send Recv Only")
@requires_nccl_version(2700, "Need NCCL 2.7+ for send/recv")
@sandcastle_skip_if(IS_FBCODE, "Kineto in fbcode causes hang")
@sandcastle_skip_if(
IS_MACOS or IS_WINDOWS,
"torch.profiler not enabled for mac/windows: https://github.com/pytorch/pytorch/pull/56124",
)
def test_send_recv_nccl_torch_profiler(self):
profiler_ctx = torch.profiler.profile(
activities=[
torch.profiler.ProfilerActivity.CPU,
torch.profiler.ProfilerActivity.CUDA,
],
record_shapes=True,
)
self._test_send_recv_nccl(profiler_ctx)
# SEND RECV
def _test_send_recv(self, profiler_ctx):
rank = dist.get_rank()
send_size = rank + 1
tensor = _build_tensor(send_size)
ctx = profiler_ctx if profiler_ctx is not None else suppress()
with ctx as prof:
for src in range(0, dist.get_world_size()):
if src == rank:
# Send mode
for dst in range(0, dist.get_world_size()):
if dst == rank:
continue
dist.send(tensor, dst)
else:
# Recv mode
recv_size = src + 1
expected_tensor = _build_tensor(recv_size)
output_tensor = _build_tensor(recv_size, value=-1)
dist.recv(output_tensor, src)
self.assertEqual(output_tensor, expected_tensor)
if profiler_ctx is not None:
backend = dist.get_backend()
if backend in SEND_RECV_PROFILING_SUPPORTED_BACKENDS:
for event_name in [f"{backend}:send", f"{backend}:recv"]:
events = get_profiling_event(event_name, prof)
# Each rank sends/recvs from all other ranks.
event_count = sum(e.count for e in events)
expected_event_count = dist.get_world_size() - 1
self.assertEqual(event_count, expected_event_count)
# Event order is not deterministic, so simply assert their shape
# is found in the following list.
expected_shapes = [
[[rank + 1] * 3] for rank in range(dist.get_world_size())
]
for event in events:
self.assertTrue(event.is_async)
self.assertTrue(event.input_shapes in expected_shapes)
@sandcastle_skip_if(
BACKEND == "nccl", "Nccl send/recv tested by test_send_recv_nccl"
)
def test_send_recv(self):
self._test_send_recv(profiler_ctx=None)
@sandcastle_skip_if(
BACKEND == "nccl", "NCCL send/recv tested by test_send_recv_nccl"
)
def test_send_recv_autograd_profiler(self):
autograd_profiler_ctx = _create_autograd_profiler()
self._test_send_recv(profiler_ctx=autograd_profiler_ctx)
@sandcastle_skip_if(
BACKEND == "nccl", "NCCL send/recv tested by test_send_recv_nccl"
)
@sandcastle_skip_if(IS_FBCODE, "Kineto in fbcode causes hang")
@sandcastle_skip_if(
IS_MACOS or IS_WINDOWS,
"torch.profiler not enabled for mac/windows: https://github.com/pytorch/pytorch/pull/56124",
)
def test_send_recv_torch_profiler(self):
torch_profiler_ctx = _create_torch_profiler()
return self._test_send_recv(profiler_ctx=torch_profiler_ctx)
# SEND RECV ANY SOURCE
def _test_send_recv_any_source(self, profiler_ctx):
rank = dist.get_rank()
send_recv_size = 10
tensor = _build_tensor(send_recv_size, value=rank)
recv_ranks = list()
irecv_ranks = list()
ctx = profiler_ctx if profiler_ctx is not None else suppress()
with ctx as prof:
for dst in range(0, dist.get_world_size()):
if dst == rank:
# Recv mode
                    for src in range(0, dist.get_world_size()):
                        if src == rank:
                            continue
for recv in ["recv", "irecv"]:
output_tensor = _build_tensor(send_recv_size, value=-1)
if recv == "recv":
sender = dist.recv(output_tensor)
recv_ranks.append(sender)
elif recv == "irecv":
work = dist.irecv(output_tensor)
work.wait()
sender = work._source_rank()
irecv_ranks.append(sender)
                            # "sender" is the rank of the sending process;
                            # every element of the received tensor should
                            # equal that rank.
self.assertTrue(output_tensor.eq(sender).all())
else:
# Send mode
dist.send(tensor, dst) # recv
dist.send(tensor, dst) # irecv
if profiler_ctx is not None:
backend = dist.get_backend()
if backend in SEND_RECV_PROFILING_SUPPORTED_BACKENDS:
for event_name in [f"{backend}:send", f"{backend}:recvAnySource"]:
events = get_profiling_event(event_name, prof)
# Each rank sends/recvs from other rank twice.
self.assertEqual(
sum(event.count for event in events),
2 * (dist.get_world_size() - 1),
)
for event in events:
self.assertTrue(event.is_async)
self.assertEqual(event.input_shapes, [[send_recv_size] * 3])
# Each rank would have 2 * (world_size - 1) sends, verify that
# globally we receive the same amount on the other end.
recv_ranks_tensor = torch.cat(
(torch.tensor(recv_ranks), torch.tensor(irecv_ranks)), 0
)
global_recv_ranks = [
torch.empty_like(recv_ranks_tensor)
for _ in range(dist.get_world_size())
]
dist.all_gather(global_recv_ranks, recv_ranks_tensor)
global_recv_ranks_list = []
for tensor in global_recv_ranks:
global_recv_ranks_list += tensor.tolist()
from itertools import groupby
global_recv_ranks_list.sort()
frequency = [
len(list(group)) for key, group in groupby(global_recv_ranks_list)
]
self.assertEqual(dist.get_world_size(), len(frequency))
self.assertEqual(
[2 * (dist.get_world_size() - 1)] * dist.get_world_size(), frequency
)
self._barrier()
@sandcastle_skip_if(
BACKEND == "nccl", "Nccl does not support send/recv from any source"
)
def test_send_recv_any_source(self):
self._test_send_recv_any_source(profiler_ctx=None)
@sandcastle_skip_if(
BACKEND == "nccl", "Nccl does not support send/recv from any source"
)
def test_send_recv_any_source_autograd_profiler(self):
autograd_profiler_ctx = _create_autograd_profiler()
self._test_send_recv_any_source(profiler_ctx=autograd_profiler_ctx)
@sandcastle_skip_if(
BACKEND == "nccl", "Nccl does not support send/recv from any source"
)
        @sandcastle_skip_if(IS_FBCODE, "Kineto in fbcode causes hang")
@sandcastle_skip_if(
IS_MACOS or IS_WINDOWS,
"torch.profiler not enabled for mac/windows: https://github.com/pytorch/pytorch/pull/56124",
)
def test_send_recv_any_source_torch_profiler(self):
torch_profiler_ctx = _create_torch_profiler()
return self._test_send_recv_any_source(profiler_ctx=torch_profiler_ctx)
# SEND RECV WITH TAG
def _test_send_recv_with_tag(self, profiler_ctx):
rank = dist.get_rank()
world_size = dist.get_world_size()
send_recv_size = 10
tensor = _build_tensor(send_recv_size, value=rank)
ctx = profiler_ctx if profiler_ctx is not None else suppress()
with ctx as prof:
for dst in range(0, world_size):
if dst == rank:
# Recv mode
for src in range(0, world_size):
if src == rank:
continue
output_tensor = _build_tensor(send_recv_size, value=-1)
dist.recv(output_tensor, src, tag=src)
self.assertTrue(output_tensor.eq(src).all())
else:
# Send mode
dist.send(tensor, dst, tag=rank)
if profiler_ctx is not None:
backend = dist.get_backend()
if backend in SEND_RECV_PROFILING_SUPPORTED_BACKENDS:
for event_name in [f"{backend}:send", f"{backend}:recv"]:
events = get_profiling_event(event_name, prof)
# Each rank sends/recvs from all other ranks
event_count = sum(e.count for e in events)
expected_event_count = dist.get_world_size() - 1
self.assertEqual(event_count, expected_event_count)
for event in events:
self.assertTrue(event.is_async)
self.assertEqual(event.name, event_name)
self.assertEqual(event.input_shapes, [[send_recv_size] * 3])
@sandcastle_skip_if(
BACKEND == "nccl", "NCCL send/recv tested by test_send_recv_nccl"
)
def test_send_recv_with_tag(self):
self._test_send_recv_with_tag(profiler_ctx=None)
@sandcastle_skip_if(
BACKEND == "nccl", "NCCL send/recv tested by test_send_recv_nccl"
)
def test_send_recv_with_tag_autograd_profiler(self):
autograd_profiler_ctx = _create_autograd_profiler()
return self._test_send_recv_with_tag(profiler_ctx=autograd_profiler_ctx)
@sandcastle_skip_if(
BACKEND == "nccl", "NCCL send/recv tested by test_send_recv_nccl"
)
        @sandcastle_skip_if(IS_FBCODE, "Kineto in fbcode causes hang")
@sandcastle_skip_if(
IS_MACOS or IS_WINDOWS,
"torch.profiler not enabled for mac/windows: https://github.com/pytorch/pytorch/pull/56124",
)
def test_send_recv_with_tag_torch_profiler(self):
torch_profiler_ctx = _create_torch_profiler()
return self._test_send_recv_with_tag(profiler_ctx=torch_profiler_ctx)
# ISEND
def _test_isend(self, profiler_ctx):
rank = dist.get_rank()
world_size = dist.get_world_size()
ctx = profiler_ctx if profiler_ctx is not None else suppress()
with ctx as prof:
if rank == 0:
requests = [
dist.isend(_build_tensor(dest, 10), dest)
for dest in range(1, world_size)
]
for request in requests:
request.wait()
self.assertTrue(request.is_completed())
else:
tensor = _build_tensor(rank, -1)
dist.recv(tensor, 0)
self.assertEqual(tensor, _build_tensor(rank, 10))
self._barrier()
if profiler_ctx is not None:
backend = dist.get_backend()
if backend in SEND_RECV_PROFILING_SUPPORTED_BACKENDS:
expected_event_name = (
f"{backend}:send" if rank == 0 else f"{backend}:recv"
)
events = get_profiling_event(expected_event_name, prof)
event_count = sum(e.count for e in events)
expected_count = dist.get_world_size() - 1 if rank == 0 else 1
self.assertEqual(expected_count, event_count)
# Event ordering is not guaranteed, so simply ensure the shapes are
# found in the following map.
expected_shapes = {
r: [[r] * 3] for r in range(1, dist.get_world_size())
}
for event in events:
self.assertTrue(event.is_async)
self.assertEqual(event.name, expected_event_name)
if rank == 0:
self.assertTrue(
event.input_shapes in expected_shapes.values()
)
else:
self.assertEqual(event.input_shapes, expected_shapes[rank])
@sandcastle_skip_if(BACKEND == "nccl", "Nccl does not support isend")
def test_isend(self):
self._test_isend(profiler_ctx=None)
@sandcastle_skip_if(BACKEND == "nccl", "Nccl does not support isend")
def test_isend_autograd_profiler(self):
autograd_profiler_ctx = _create_autograd_profiler()
self._test_isend(profiler_ctx=autograd_profiler_ctx)
@sandcastle_skip_if(BACKEND == "nccl", "Nccl does not support isend")
@sandcastle_skip_if(IS_FBCODE, "Kineto in fbcode code causes hang")
@sandcastle_skip_if(
IS_MACOS or IS_WINDOWS,
"torch.profiler not enabled for mac/windows: https://github.com/pytorch/pytorch/pull/56124",
)
def test_isend_torch_profiler(self):
torch_profiler_ctx = _create_torch_profiler()
self._test_isend(profiler_ctx=torch_profiler_ctx)
# IRECV
@sandcastle_skip_if(BACKEND == "nccl", "Nccl does not support irecv")
def test_irecv(self):
rank = dist.get_rank()
world_size = dist.get_world_size()
if rank == 0:
expected_tensors = [
_build_tensor(src, -1) for src in range(1, world_size)
]
requests = [
dist.irecv(expected_tensors[src - 1], src)
for src in range(1, world_size)
]
for src in range(1, world_size):
requests[src - 1].wait()
self.assertTrue(requests[src - 1].is_completed())
self.assertEqual(expected_tensors[src - 1], _build_tensor(src, 10))
else:
tensor = _build_tensor(rank, 10)
dist.send(tensor, 0)
self._barrier()
# BROADCAST
def _test_broadcast_helper(
self,
group,
group_id,
rank,
cuda=False,
rank_to_GPU=None,
with_options=False,
):
for dtype, value, requires_cuda in [
(torch.float, -1e-10, False),
(torch.double, -1e-100, False),
(torch.half, -0.1, True),
(torch.int8, -2, False),
(torch.uint8, 129, False),
(torch.int, -1e5, False),
(torch.long, -1e15, False),
]:
if requires_cuda and not cuda:
continue
for src in group:
expected_tensor = _build_tensor(src + 1, value, dtype)
if cuda:
expected_tensor = expected_tensor.cuda(rank_to_GPU[rank][0])
if rank == src:
if with_options:
opts = dist.BroadcastOptions()
opts.rootTensor = 0
opts.rootRank = src
self.call_dist_op(
":broadcast",
True,
group_id.broadcast,
[expected_tensor],
opts,
)
else:
self.call_dist_op(
":broadcast",
False,
dist.broadcast,
expected_tensor,
src,
group_id,
)
else:
tensor = _build_tensor(src + 1, -1, dtype)
if cuda:
tensor = tensor.cuda(rank_to_GPU[rank][0])
if with_options:
opts = dist.BroadcastOptions()
opts.rootTensor = 0
opts.rootRank = src
self.call_dist_op(
":broadcast", True, group_id.broadcast, [tensor], opts
)
else:
self.call_dist_op(
":broadcast",
False,
dist.broadcast,
tensor,
src,
group_id,
)
self.assertEqual(tensor.size(), expected_tensor.size())
self.assertEqual(
tensor.ne(expected_tensor).max(), torch.tensor(False)
)
self._barrier()
@sandcastle_skip_if(BACKEND == "nccl", "Nccl does not support CPU tensors")
def test_broadcast(self):
group, group_id, rank = self._init_global_test()
self._test_broadcast_helper(group, group_id, rank)
@sandcastle_skip_if(
BACKEND != "gloo" and BACKEND != "nccl",
        "Only Gloo and NCCL backends support CUDA broadcast",
)
@skip_if_no_gpu
def test_broadcast_cuda(self):
group, group_id, rank = self._init_global_test()
rank_to_GPU = self._init_multigpu_helper()
device_id = rank_to_GPU[rank][0]
torch.cuda.set_device(device_id)
self._test_broadcast_helper(group, group_id, rank, True, rank_to_GPU)
@skip_if_small_worldsize
@sandcastle_skip_if(BACKEND == "nccl", "Nccl does not support CPU tensors")
def test_broadcast_group(self):
group, group_id, rank = self._init_group_test()
self._test_broadcast_helper(group, group_id, rank)
@sandcastle_skip_if(BACKEND == "nccl", "Nccl does not support CPU tensors")
def test_broadcast_full_group(self):
group, group_id, rank = self._init_full_group_test()
self._test_broadcast_helper(group, group_id, rank)
@sandcastle_skip_if(
BACKEND != "nccl",
"Only NCCL backend supports high priority stream",
)
@skip_if_no_gpu
def test_nccl_high_priority_stream(self):
group, _, rank = self._init_global_test()
rank_to_GPU = self._init_multigpu_helper()
device_id = rank_to_GPU[rank][0]
torch.cuda.set_device(device_id)
new_port = str(MASTER_PORT + 1)
os.environ["MASTER_PORT"] = new_port
gen_iterator = dist.rendezvous("env://", rank, dist.get_world_size())
store, rank, size = next(gen_iterator)
store = dist.PrefixStore(new_port, store)
opts = dist.ProcessGroupNCCL.Options()
opts.is_high_priority_stream = False
group_id = dist.ProcessGroupNCCL(store, rank, size, opts)
self._test_broadcast_helper(group, group_id, rank, True, rank_to_GPU, True)
# REDUCE
def _test_reduce_helper(
self,
group,
group_id,
rank,
op,
master_value,
worker_value,
expected_value,
cuda=False,
rank_to_GPU=None,
):
for src in group:
tensor = _build_tensor(src + 1).fill_(
master_value if rank == src else worker_value
)
if cuda:
tensor = tensor.cuda(rank_to_GPU[rank][0])
self.call_dist_op(
":reduce",
False,
dist.reduce,
tensor,
src,
op,
group_id,
tensor_shapes=[tensor.shape],
)
if rank == src:
self.assertEqual(tensor, _build_tensor(src + 1, expected_value))
self._barrier()
@sandcastle_skip_if(BACKEND == "nccl", "Nccl does not support CPU tensors")
def test_reduce_sum(self):
group, group_id, rank = self._init_global_test()
self._test_reduce_helper(
group,
group_id,
rank,
dist.ReduceOp.SUM,
2,
10,
2 + (10 * (len(group) - 1)),
)
@sandcastle_skip_if(BACKEND != "nccl", "Only Nccl supports CUDA reduce")
@skip_if_no_gpu
def test_reduce_sum_cuda(self):
group, group_id, rank = self._init_global_test()
rank_to_GPU = self._init_multigpu_helper()
device_id = rank_to_GPU[rank][0]
torch.cuda.set_device(device_id)
self._test_reduce_helper(
group,
group_id,
rank,
dist.ReduceOp.SUM,
2,
10,
2 + 10 * (len(group) - 1),
True,
rank_to_GPU,
)
@sandcastle_skip_if(BACKEND == "nccl", "Nccl does not support CPU tensors")
def test_reduce_product(self):
group, group_id, rank = self._init_global_test()
self._test_reduce_helper(
group,
group_id,
rank,
dist.ReduceOp.PRODUCT,
2,
10,
reduce((lambda x, y: x * y), [10] * (len(group) - 1), 2),
)
@sandcastle_skip_if(BACKEND == "nccl", "Nccl does not support CPU tensors")
def test_reduce_min(self):
group, group_id, rank = self._init_global_test()
self._test_reduce_helper(
group, group_id, rank, dist.ReduceOp.MIN, 1010, 1, 1
)
@sandcastle_skip_if(BACKEND == "nccl", "Nccl does not support CPU tensors")
def test_reduce_max(self):
group, group_id, rank = self._init_global_test()
self._test_reduce_helper(
group, group_id, rank, dist.ReduceOp.MAX, -1, 10, 10
)
@sandcastle_skip_if(BACKEND == "nccl", "Nccl does not support CPU tensors")
@skip_if_small_worldsize
def test_reduce_group_sum(self):
group, group_id, rank = self._init_group_test()
self._test_reduce_helper(
group,
group_id,
rank,
dist.ReduceOp.SUM,
2,
10,
2 + (10 * (len(group) - 1)),
)
@sandcastle_skip_if(BACKEND == "nccl", "Nccl does not support CPU tensors")
@skip_if_small_worldsize
def test_reduce_group_product(self):
group, group_id, rank = self._init_group_test()
self._test_reduce_helper(
group,
group_id,
rank,
dist.ReduceOp.PRODUCT,
2,
10,
reduce((lambda x, y: x * y), [10] * (len(group) - 1), 2),
)
@sandcastle_skip_if(BACKEND == "nccl", "Nccl does not support CPU tensors")
@skip_if_small_worldsize
def test_reduce_group_min(self):
group, group_id, rank = self._init_group_test()
self._test_reduce_helper(
group, group_id, rank, dist.ReduceOp.MIN, 1010, 1, 1
)
@sandcastle_skip_if(BACKEND == "nccl", "Nccl does not support CPU tensors")
@skip_if_small_worldsize
def test_reduce_group_max(self):
group, group_id, rank = self._init_group_test()
self._test_reduce_helper(
group, group_id, rank, dist.ReduceOp.MAX, -1, 10, 10
)
@sandcastle_skip_if(BACKEND == "nccl", "Nccl does not support CPU tensors")
def test_reduce_full_group_sum(self):
group, group_id, rank = self._init_full_group_test()
self._test_reduce_helper(
group,
group_id,
rank,
dist.ReduceOp.SUM,
2,
10,
2 + (10 * (len(group) - 1)),
)
@sandcastle_skip_if(BACKEND == "nccl", "Nccl does not support CPU tensors")
def test_reduce_full_group_product(self):
group, group_id, rank = self._init_full_group_test()
self._test_reduce_helper(
group,
group_id,
rank,
dist.ReduceOp.PRODUCT,
2,
10,
reduce((lambda x, y: x * y), [10] * (len(group) - 1), 2),
)
@sandcastle_skip_if(BACKEND == "nccl", "Nccl does not support CPU tensors")
def test_reduce_full_group_min(self):
group, group_id, rank = self._init_full_group_test()
self._test_reduce_helper(
group, group_id, rank, dist.ReduceOp.MIN, 1010, 1, 1
)
@sandcastle_skip_if(BACKEND == "nccl", "Nccl does not support CPU tensors")
def test_reduce_full_group_max(self):
group, group_id, rank = self._init_full_group_test()
self._test_reduce_helper(
group, group_id, rank, dist.ReduceOp.MAX, -1, 10, 10
)
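The literal expected values passed to _test_reduce_helper above all follow one pattern: the src rank contributes master_value and each of the remaining world_size - 1 ranks contributes worker_value. A small pure-Python sketch of that arithmetic (the function name is illustrative, not part of the test suite):

```python
from functools import reduce as functools_reduce

def expected_reduce_value(op, master_value, worker_value, world_size):
    # One rank holds master_value; the other world_size - 1 ranks hold
    # worker_value, matching the constants used by the reduce tests.
    values = [master_value] + [worker_value] * (world_size - 1)
    if op == "sum":
        return sum(values)
    if op == "product":
        return functools_reduce(lambda x, y: x * y, values)
    if op == "min":
        return min(values)
    if op == "max":
        return max(values)
    raise ValueError(f"unknown op: {op}")
```

For example, with a world size of 4 the SUM case reduces to 2 + 10 * 3 = 32, matching the `2 + (10 * (len(group) - 1))` expression hard-coded in the tests.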
# REDUCE TWICE
def _test_reduce_twice_helper(
self,
group,
group_id,
rank,
op,
master_value,
worker_value,
expected_value,
cuda=False,
rank_to_GPU=None,
):
for src in group:
tensors = [
_build_tensor(src + 1).fill_(
master_value if rank == src else worker_value
)
for i in range(2)
]
if cuda:
for i in range(2):
tensors[i] = tensors[i].cuda(rank_to_GPU[rank][0])
self.call_dist_op(
":reduce",
False,
dist.reduce,
tensors[0],
src,
op,
group_id,
secondary_op_call=lambda: dist.reduce(
tensors[1], src, op, group_id
),
tensor_shapes=[tensors[0].shape],
)
if rank == src:
for tensor in tensors:
self.assertEqual(tensor, _build_tensor(src + 1, expected_value))
self._barrier()
@sandcastle_skip_if(BACKEND == "nccl", "Nccl does not support CPU tensors")
def test_reduce_sum_twice(self):
group, group_id, rank = self._init_global_test()
self._test_reduce_twice_helper(
group,
group_id,
rank,
dist.ReduceOp.SUM,
2,
10,
2 + (10 * (len(group) - 1)),
)
@sandcastle_skip_if(BACKEND != "nccl", "Only Nccl supports CUDA reduce")
@skip_if_no_gpu
def test_reduce_sum_cuda_twice(self):
group, group_id, rank = self._init_global_test()
rank_to_GPU = self._init_multigpu_helper()
device_id = rank_to_GPU[rank][0]
torch.cuda.set_device(device_id)
self._test_reduce_twice_helper(
group,
group_id,
rank,
dist.ReduceOp.SUM,
2,
10,
2 + 10 * (len(group) - 1),
True,
rank_to_GPU,
)
@skip_if_no_gpu
@require_backend({"gloo", "nccl"})
def test_all_reduce_result_cuda(self):
group, group_id, rank = self._init_global_test()
rank_to_GPU = self._init_multigpu_helper()
for src in group:
if rank == src:
tensor = _build_tensor(src + 1, 2)
else:
tensor = _build_tensor(src + 1, 10)
tensor = tensor.cuda(rank_to_GPU[rank][0])
opts = AllreduceOptions()
opts.reduceOp = dist.ReduceOp.SUM
if group_id == GroupMember.WORLD:
work = _get_default_group().allreduce([tensor], opts)
else:
work = group_id.allreduce([tensor], opts)
if BACKEND == "gloo":
                # Calling result() before the work is finished should throw an
                # exception. There is a race here, though: the work may already
                # have finished by the time the next lines run, so accept that
                # case as well.
try:
with self.assertRaisesRegex(
RuntimeError,
"Work needs to be completed before calling result",
):
work.result()
except AssertionError:
# Exception was not raised, ensure is_completed()
self.assertTrue(work.is_completed())
work.wait()
result = work.result()
else:
# In case of NCCL we should be able to retrieve pointer to the result
# even before work is finished.
result = work.result()
work.wait()
expected_value = 2 + (10 * (len(group) - 1))
self.assertEqual(result, [_build_tensor(src + 1, expected_value)])
self._barrier()
def call_dist_op(
self,
profiling_title_postfix,
is_async,
op,
*args,
expect_event=True,
secondary_op_call=None,
profile_cuda=False,
tensor_shapes=None,
**kwargs,
):
op_calls = [lambda: op(*args, **kwargs)]
if secondary_op_call is not None:
op_calls.append(secondary_op_call)
autograd_profiler_ctx = torch.autograd.profiler.profile(
use_cuda=profile_cuda, record_shapes=True
)
# TODO: move this test to use torch.profiler once kineto issues are
# fixed internally.
with autograd_profiler_ctx as prof:
works = [op_call() for op_call in op_calls]
if is_async:
for work in works:
work.wait()
if expect_event and dist.get_backend() in PROFILING_SUPPORTED_BACKENDS:
events = get_profiling_event(
profiling_title_postfix, autograd_profiler_ctx
)
# DETAIL debug mode can use a pg wrapper that issues more collectives
# under the hood
if dist._get_debug_mode() != dist._DistributedDebugLevel.DETAIL:
self.assertEqual(len(events), len(op_calls))
for e in events:
self.assertTrue(e.is_async)
self.assertEqual(e.count, 1)
self.assertGreaterEqual(e.cpu_time, 0)
# Verify tensor shapes if given
# DETAIL debug mode can use a pg wrapper that issues more collectives
# under the hood
if (
tensor_shapes is not None
and dist._get_debug_mode() != dist._DistributedDebugLevel.DETAIL
):
self.assertEqual(
e.input_shapes,
tensor_shapes,
f"event shape: {e.input_shapes} vs tensor {tensor_shapes}",
)
# ALL REDUCE
def _test_all_reduce_helper(
self,
group,
group_id,
rank,
op,
master_value,
worker_value,
expected_value,
cuda=False,
rank_to_GPU=None,
dtype=torch.float,
async_op=False,
):
for src in group:
curr_value = master_value if rank == src else worker_value
tensor = _build_tensor(src + 1, dtype=dtype).fill_(curr_value)
if cuda:
tensor = tensor.cuda(rank_to_GPU[rank][0])
if tensor.dtype == torch.complex64:
tensor_shapes = [torch.view_as_real(tensor).shape]
else:
tensor_shapes = [tensor.shape]
self.call_dist_op(
":all_reduce",
async_op,
dist.all_reduce,
tensor,
op,
group_id,
async_op=async_op,
tensor_shapes=tensor_shapes,
)
            # Currently, only the Gloo backend has profiling tested with CUDA
            # enabled. Run the CUDA profiling test for a single src rank only,
            # to save time; the choice of src_rank does not affect correctness.
if (
src == 0
and cuda
and dist.get_backend() in CUDA_PROFILING_SUPPORTED_BACKENDS
):
self.call_dist_op(
":all_reduce",
async_op,
dist.all_reduce,
tensor,
op,
group_id,
async_op=async_op,
profile_cuda=True,
tensor_shapes=tensor_shapes,
)
self._barrier()
@sandcastle_skip_if(BACKEND == "nccl", "Nccl does not support CPU tensors")
def test_all_reduce_sum(self):
group, group_id, rank = self._init_global_test()
self._test_all_reduce_helper(
group,
group_id,
rank,
dist.ReduceOp.SUM,
2,
10,
2 + (10 * (len(group) - 1)),
)
@sandcastle_skip_if(BACKEND == "nccl", "Nccl does not support CPU tensors")
def test_all_reduce_sum_async(self):
group, group_id, rank = self._init_global_test()
self._test_all_reduce_helper(
group,
group_id,
rank,
dist.ReduceOp.SUM,
2,
10,
2 + (10 * (len(group) - 1)),
async_op=True,
)
@sandcastle_skip_if(
BACKEND != "gloo" and BACKEND != "nccl",
"Only Gloo and NCCL backends will have CUDA allReduce tested",
)
@skip_if_no_gpu
def test_all_reduce_sum_cuda(self):
torch.cuda.set_device(self.rank)
group, group_id, rank = self._init_global_test()
rank_to_GPU = self._init_multigpu_helper()
self._test_all_reduce_helper(
group,
group_id,
rank,
dist.ReduceOp.SUM,
2,
10,
2 + (10 * (len(group) - 1)),
True,
rank_to_GPU,
)
@sandcastle_skip_if(
BACKEND != "gloo" and BACKEND != "nccl",
"Only Gloo and NCCL backends will have CUDA allReduce tested",
)
@skip_if_no_gpu
def test_all_reduce_sum_cuda_async(self):
torch.cuda.set_device(self.rank)
group, group_id, rank = self._init_global_test()
rank_to_GPU = self._init_multigpu_helper()
self._test_all_reduce_helper(
group,
group_id,
rank,
dist.ReduceOp.SUM,
2,
10,
2 + (10 * (len(group) - 1)),
True,
rank_to_GPU,
async_op=True,
)
@sandcastle_skip_if(BACKEND == "nccl", "Nccl does not support CPU tensors")
def test_all_reduce_sum_complex(self):
group, group_id, rank = self._init_global_test()
self._test_all_reduce_helper(
group,
group_id,
rank,
dist.ReduceOp.SUM,
complex(2, 3),
complex(10, 11),
complex(2, 3) + (complex(10, 11) * (len(group) - 1)),
dtype=torch.cfloat,
)
@sandcastle_skip_if(BACKEND == "nccl", "Nccl does not support CPU tensors")
def test_all_reduce_complex_unsupported_ops(self):
unsupported_ops = [
dist.ReduceOp.MAX,
dist.ReduceOp.MIN,
dist.ReduceOp.PRODUCT,
dist.ReduceOp.BAND,
dist.ReduceOp.BOR,
dist.ReduceOp.BXOR,
]
group, group_id, rank = self._init_global_test()
for unsupported_op in unsupported_ops:
with self.assertRaisesRegex(
RuntimeError, "all_reduce does not support"
):
dist.all_reduce(
_build_tensor(1, dtype=torch.cfloat), unsupported_op, group_id
)
@sandcastle_skip_if(
BACKEND != "gloo" and BACKEND != "nccl",
"Only Gloo and NCCL backends will have CUDA allReduce tested",
)
@skip_if_no_gpu
def test_all_reduce_sum_cuda_complex(self):
torch.cuda.set_device(self.rank)
group, group_id, rank = self._init_global_test()
rank_to_GPU = self._init_multigpu_helper()
self._test_all_reduce_helper(
group,
group_id,
rank,
dist.ReduceOp.SUM,
complex(2, 3),
complex(10, 11),
complex(2, 3) + (complex(10, 11) * (len(group) - 1)),
True,
rank_to_GPU,
dtype=torch.cfloat,
)
@sandcastle_skip_if(BACKEND == "nccl", "Nccl does not support CPU tensors")
def test_all_reduce_product(self):
group, group_id, rank = self._init_global_test()
self._test_all_reduce_helper(
group,
group_id,
rank,
dist.ReduceOp.PRODUCT,
2,
10,
reduce((lambda x, y: x * y), [10] * (len(group) - 1), 2),
)
@sandcastle_skip_if(BACKEND == "nccl", "Nccl does not support CPU tensors")
def test_all_reduce_min(self):
group, group_id, rank = self._init_global_test()
self._test_all_reduce_helper(
group, group_id, rank, dist.ReduceOp.MIN, 1010, 1, 1
)
@sandcastle_skip_if(BACKEND == "nccl", "Nccl does not support CPU tensors")
def test_all_reduce_max(self):
group, group_id, rank = self._init_global_test()
self._test_all_reduce_helper(
group, group_id, rank, dist.ReduceOp.MAX, -1, 10, 10
)
@skip_if_small_worldsize
@sandcastle_skip_if(BACKEND == "nccl", "Nccl does not support CPU tensors")
def test_all_reduce_group_sum(self):
group, group_id, rank = self._init_group_test()
self._test_all_reduce_helper(
group,
group_id,
rank,
dist.ReduceOp.SUM,
2,
10,
2 + (10 * (len(group) - 1)),
)
@skip_if_small_worldsize
@sandcastle_skip_if(BACKEND == "nccl", "Nccl does not support CPU tensors")
def test_all_reduce_group_product(self):
group, group_id, rank = self._init_group_test()
self._test_all_reduce_helper(
group,
group_id,
rank,
dist.ReduceOp.PRODUCT,
2,
10,
reduce((lambda x, y: x * y), [10] * (len(group) - 1), 2),
)
@skip_if_small_worldsize
@sandcastle_skip_if(BACKEND == "nccl", "Nccl does not support CPU tensors")
def test_all_reduce_group_min(self):
group, group_id, rank = self._init_group_test()
self._test_all_reduce_helper(
group, group_id, rank, dist.ReduceOp.MIN, 1010, 1, 1
)
@skip_if_small_worldsize
@sandcastle_skip_if(BACKEND == "nccl", "Nccl does not support CPU tensors")
def test_all_reduce_group_max(self):
group, group_id, rank = self._init_group_test()
self._test_all_reduce_helper(
group, group_id, rank, dist.ReduceOp.MAX, -1, 10, 10
)
@sandcastle_skip_if(BACKEND == "nccl", "Nccl does not support CPU tensors")
def test_all_reduce_full_group_sum(self):
group, group_id, rank = self._init_full_group_test()
self._test_all_reduce_helper(
group,
group_id,
rank,
dist.ReduceOp.SUM,
2,
10,
2 + (10 * (len(group) - 1)),
)
@sandcastle_skip_if(BACKEND == "nccl", "Nccl does not support CPU tensors")
def test_all_reduce_full_group_product(self):
group, group_id, rank = self._init_full_group_test()
self._test_all_reduce_helper(
group,
group_id,
rank,
dist.ReduceOp.PRODUCT,
2,
10,
reduce((lambda x, y: x * y), [10] * (len(group) - 1), 2),
)
@sandcastle_skip_if(BACKEND == "nccl", "Nccl does not support CPU tensors")
def test_all_reduce_full_group_min(self):
group, group_id, rank = self._init_full_group_test()
self._test_all_reduce_helper(
group, group_id, rank, dist.ReduceOp.MIN, 1010, 1, 1
)
@sandcastle_skip_if(BACKEND == "nccl", "Nccl does not support CPU tensors")
def test_all_reduce_full_group_max(self):
group, group_id, rank = self._init_full_group_test()
self._test_all_reduce_helper(
group, group_id, rank, dist.ReduceOp.MAX, -1, 10, 10
)
# SPARSE ALL REDUCE
def _test_sparse_all_reduce_sum(self, fn):
group, group_id, rank = self._init_global_test()
tests = simple_sparse_reduce_tests(
rank, dist.get_world_size(), num_inputs=1
)
for (inputs, outputs) in tests:
tensors = [fn(input) for input in inputs]
dist.all_reduce(tensors[0], dist.ReduceOp.SUM, group_id)
self.assertEqual(tensors[0], outputs[0])
@sandcastle_skip_if(
        BACKEND != "gloo", "Only the Gloo backend supports sparse all reduce"
)
def test_sparse_all_reduce_sum(self):
self._test_sparse_all_reduce_sum(lambda t: t)
@sandcastle_skip_if(
        BACKEND != "gloo", "Only the Gloo backend supports sparse all reduce"
)
@skip_if_no_gpu
def test_sparse_all_reduce_sum_cuda(self):
self._test_sparse_all_reduce_sum(lambda t: t.clone().cuda())
# ALL REDUCE - COALESCED
@staticmethod
def _all_reduce_coalesced_sum_test_cases(group_size):
return (
[2, 3, complex(2, 3)],
[10, 11, complex(10, 11)],
[
2 + 10 * (group_size - 1),
3 + 11 * (group_size - 1),
complex(2, 3) + complex(10, 11) * (group_size - 1),
],
[torch.float, torch.float, torch.cfloat],
)
@staticmethod
def _all_reduce_coalesced_product_test_cases(group_size):
return (
[1, 2],
[3, 4],
[1 * 3 ** (group_size - 1), 2 * 4 ** (group_size - 1)],
[torch.float, torch.float],
)
@staticmethod
def _all_reduce_coalesced_min_test_cases(group_size):
return (
[1, 4],
[2, 3],
[1, 3],
[torch.float, torch.float],
)
@staticmethod
def _all_reduce_coalesced_max_test_cases(group_size):
return (
[1, 4],
[2, 3],
[2, 4],
[torch.float, torch.float],
)
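The four *_test_cases helpers above hard-code per-tensor expected values; for SUM, each coalesced entry reduces independently to master + worker * (group_size - 1), including the complex entry. A sketch of that derivation (the function name is illustrative, not part of the test suite):

```python
def coalesced_sum_expected(master_values, worker_values, group_size):
    # Each coalesced tensor reduces independently: the src rank
    # contributes its master value, every other rank its worker value.
    return [
        m + w * (group_size - 1)
        for m, w in zip(master_values, worker_values)
    ]
```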
@sandcastle_skip_if(BACKEND == "nccl", "Nccl does not support CPU tensors")
def test_all_reduce_coalesced_max_complex_unsupported(self):
group, group_id, rank = self._init_global_test()
with self.assertRaisesRegex(RuntimeError, "all_reduce does not support"):
dist.all_reduce_coalesced(
[_build_tensor(1, dtype=torch.cfloat)], dist.ReduceOp.MAX, group_id
)
def _test_all_reduce_coalesced_helper(
self,
group,
group_id,
rank,
op,
cuda=False,
rank_to_GPU=None,
):
test_case_func = {
dist.ReduceOp.SUM: self._all_reduce_coalesced_sum_test_cases,
dist.ReduceOp.PRODUCT: self._all_reduce_coalesced_product_test_cases,
dist.ReduceOp.MIN: self._all_reduce_coalesced_min_test_cases,
dist.ReduceOp.MAX: self._all_reduce_coalesced_max_test_cases,
}[op]
master_values, worker_values, expected_values, dtypes = test_case_func(
len(group)
)
for src in group:
curr_values = master_values if rank == src else worker_values
tensors = [
_build_tensor(src + 1, val, dtype=dtype)
for dtype, val in zip(dtypes, curr_values)
]
if cuda:
tensors = [t.cuda(rank_to_GPU[rank][0]) for t in tensors]
tensor_shapes = []
for tensor in tensors:
if tensor.dtype == torch.complex64:
tensor_shapes.append(torch.view_as_real(tensor).shape)
else:
tensor_shapes.append(tensor.shape)
self.call_dist_op(
":all_reduce",
False,
dist.all_reduce_coalesced,
tensors,
op,
group_id,
tensor_shapes=tensor_shapes,
)
expected_tensors = [
_build_tensor(src + 1, expected_value, dtype=dtype)
for dtype, expected_value in zip(dtypes, expected_values)
]
self.assertEqual(tensors, expected_tensors)
self._barrier()
@require_backend({"gloo"})
def test_all_reduce_coalesced_sum(self):
group, group_id, rank = self._init_global_test()
self._test_all_reduce_coalesced_helper(
group,
group_id,
rank,
dist.ReduceOp.SUM,
cuda=False,
rank_to_GPU=None,
)
@require_backend({"gloo"})
def test_all_reduce_coalesced_product(self):
group, group_id, rank = self._init_global_test()
self._test_all_reduce_coalesced_helper(
group,
group_id,
rank,
dist.ReduceOp.PRODUCT,
cuda=False,
rank_to_GPU=None,
)
@require_backend({"gloo"})
def test_all_reduce_coalesced_min(self):
group, group_id, rank = self._init_global_test()
self._test_all_reduce_coalesced_helper(
group,
group_id,
rank,
dist.ReduceOp.MIN,
cuda=False,
rank_to_GPU=None,
)
@require_backend({"gloo"})
def test_all_reduce_coalesced_max(self):
group, group_id, rank = self._init_global_test()
self._test_all_reduce_coalesced_helper(
group, group_id, rank, dist.ReduceOp.MAX, cuda=False, rank_to_GPU=None
)
@skip_if_small_worldsize
@require_backend({"gloo"})
def test_all_reduce_coalesced_group_sum(self):
group, group_id, rank = self._init_group_test()
self._test_all_reduce_coalesced_helper(
group, group_id, rank, dist.ReduceOp.SUM, cuda=False, rank_to_GPU=None
)
@skip_if_small_worldsize
@require_backend({"gloo"})
def test_all_reduce_coalesced_group_product(self):
group, group_id, rank = self._init_group_test()
self._test_all_reduce_coalesced_helper(
group,
group_id,
rank,
dist.ReduceOp.PRODUCT,
cuda=False,
rank_to_GPU=None,
)
@skip_if_small_worldsize
@require_backend({"gloo"})
def test_all_reduce_coalesced_group_min(self):
group, group_id, rank = self._init_group_test()
self._test_all_reduce_coalesced_helper(
group, group_id, rank, dist.ReduceOp.MIN, cuda=False, rank_to_GPU=None
)
@skip_if_small_worldsize
@require_backend({"gloo"})
def test_all_reduce_coalesced_group_max(self):
group, group_id, rank = self._init_group_test()
self._test_all_reduce_coalesced_helper(
group, group_id, rank, dist.ReduceOp.MAX, cuda=False, rank_to_GPU=None
)
@require_backend({"gloo"})
def test_all_reduce_coalesced_full_group_sum(self):
group, group_id, rank = self._init_full_group_test()
self._test_all_reduce_coalesced_helper(
group, group_id, rank, dist.ReduceOp.SUM, cuda=False, rank_to_GPU=None
)
@require_backend({"gloo"})
def test_all_reduce_coalesced_full_group_product(self):
group, group_id, rank = self._init_full_group_test()
self._test_all_reduce_coalesced_helper(
group,
group_id,
rank,
dist.ReduceOp.PRODUCT,
cuda=False,
rank_to_GPU=None,
)
@require_backend({"gloo"})
def test_all_reduce_coalesced_full_group_min(self):
group, group_id, rank = self._init_full_group_test()
self._test_all_reduce_coalesced_helper(
group,
group_id,
rank,
dist.ReduceOp.MIN,
cuda=False,
rank_to_GPU=None,
)
@require_backend({"gloo"})
def test_all_reduce_coalesced_full_group_max(self):
group, group_id, rank = self._init_full_group_test()
self._test_all_reduce_coalesced_helper(
group, group_id, rank, dist.ReduceOp.MAX, cuda=False, rank_to_GPU=None
)
# SCATTER
def _test_scatter_helper(self, group, group_id, rank, dtype=torch.float):
for dest in group:
tensor = _build_tensor(dest + 1, -1, dtype=dtype)
expected_tensor = _build_tensor(dest + 1, rank, dtype=dtype)
tensors = (
[_build_tensor(dest + 1, i, dtype=dtype) for i in group]
if rank == dest
else []
)
if dtype == torch.complex64:
tensor_shapes = [torch.view_as_real(t).shape for t in tensors]
else:
tensor_shapes = [t.shape for t in tensors]
self.call_dist_op(
":scatter",
False,
dist.scatter,
tensor,
src=dest,
scatter_list=tensors,
group=group_id,
tensor_shapes=tensor_shapes,
)
self.assertEqual(tensor, expected_tensor)
self._barrier()
@sandcastle_skip_if(BACKEND == "nccl", "Nccl does not support CPU tensors")
def test_scatter_checks(self):
group, group_id, rank = self._init_global_test()
one = torch.ones([1])
# Specify scatter_list argument only on source rank.
output = one.clone() * -1
if rank == 0:
scatter_list = [one.clone() * i for i in group]
dist.scatter(output, src=0, scatter_list=scatter_list)
else:
dist.scatter(output, src=0)
self.assertEqual(output, one * rank)
# Don't specify src argument.
output = one.clone() * -1
if rank == 0:
scatter_list = [one.clone() * i for i in group]
dist.scatter(output, scatter_list=scatter_list)
else:
dist.scatter(output)
self.assertEqual(output, one * rank)
@sandcastle_skip_if(BACKEND == "nccl", "Nccl does not support scatter")
def test_scatter(self):
group, group_id, rank = self._init_global_test()
self._test_scatter_helper(group, group_id, rank)
@sandcastle_skip_if(BACKEND == "nccl", "Nccl does not support scatter")
def test_scatter_complex(self):
group, group_id, rank = self._init_global_test()
self._test_scatter_helper(group, group_id, rank, dtype=torch.cfloat)
@sandcastle_skip_if(BACKEND == "nccl", "Nccl does not support scatter")
@skip_if_small_worldsize
def test_scatter_group(self):
group, group_id, rank = self._init_group_test()
self._test_scatter_helper(group, group_id, rank)
@sandcastle_skip_if(BACKEND == "nccl", "Nccl does not support scatter")
def test_scatter_full_group(self):
group, group_id, rank = self._init_full_group_test()
self._test_scatter_helper(group, group_id, rank)
def _test_gather_helper(self, group, group_id, rank):
for dest in group:
tensor = _build_tensor(dest + 1, rank)
            tensors = (
                [_build_tensor(dest + 1, -1) for _ in group] if rank == dest else []
            )
self.call_dist_op(
":gather",
False,
dist.gather,
tensor,
dst=dest,
gather_list=tensors,
group=group_id,
tensor_shapes=[tensors[0].shape] if len(tensors) > 0 else None,
)
if rank == dest:
expected_tensors = [_build_tensor(dest + 1, i) for i in group]
for t1, t2 in zip(tensors, expected_tensors):
self.assertEqual(t1, t2)
self._barrier()
@sandcastle_skip_if(BACKEND == "nccl", "Nccl does not support CPU tensors")
def test_gather_checks(self):
group, group_id, rank = self._init_global_test()
one = torch.ones([1])
if rank == 0:
gather_list = [one.clone() for _ in group]
dist.gather(one * rank, dst=0, gather_list=gather_list)
for i in group:
self.assertEqual(gather_list[i], one * i)
else:
dist.gather(one * rank, dst=0)
if rank == 0:
gather_list = [one.clone() for _ in group]
dist.gather(one * rank, gather_list=gather_list)
for i in group:
self.assertEqual(gather_list[i], one * i)
else:
dist.gather(one * rank)
@sandcastle_skip_if(BACKEND == "nccl", "Nccl does not support CPU tensors")
def test_gather(self):
group, group_id, rank = self._init_global_test()
self._test_gather_helper(group, group_id, rank)
@sandcastle_skip_if(BACKEND == "nccl", "Nccl does not support CPU tensors")
@skip_if_small_worldsize
def test_gather_group(self):
group, group_id, rank = self._init_group_test()
self._test_gather_helper(group, group_id, rank)
@sandcastle_skip_if(BACKEND == "nccl", "Nccl does not support CPU tensors")
def test_gather_full_group(self):
group, group_id, rank = self._init_full_group_test()
self._test_gather_helper(group, group_id, rank)
# ALL GATHER
def _test_all_gather_helper(
self, group, group_id, rank, cuda=False, rank_to_GPU=None, dtype=torch.float
):
for dest in group:
tensor = _build_tensor(dest + 1, rank, dtype=dtype)
            tensors = [_build_tensor(dest + 1, -1, dtype=dtype) for _ in group]
if cuda:
tensor = tensor.cuda(rank_to_GPU[rank][0])
tensors = [t.cuda(rank_to_GPU[rank][0]) for t in tensors]
if tensors[0].dtype == torch.complex64:
tensor_shapes = [torch.view_as_real(tensors[0]).shape]
else:
tensor_shapes = [tensors[0].shape]
self.call_dist_op(
":all_gather",
False,
dist.all_gather,
tensors,
tensor,
group_id,
tensor_shapes=tensor_shapes,
)
expected_tensors = [
_build_tensor(dest + 1, i, dtype=dtype) for i in group
]
for t1, t2 in zip(tensors, expected_tensors):
self.assertEqual(t1, t2)
self._barrier()
@sandcastle_skip_if(BACKEND == "nccl", "Nccl does not support CPU tensors")
def test_all_gather(self):
group, group_id, rank = self._init_global_test()
self._test_all_gather_helper(group, group_id, rank)
    # Note: the two decorators below skip for every backend (non-NCCL
    # backends lack CUDA all_gather here, and NCCL is skipped explicitly),
    # so this test is effectively disabled.
@sandcastle_skip_if(BACKEND != "nccl", "Only Nccl supports CUDA all gather")
@sandcastle_skip_if(BACKEND == "nccl", "CUDA all gather skipped for NCCL")
@skip_if_no_gpu
def test_all_gather_cuda(self):
group, group_id, rank = self._init_global_test()
rank_to_GPU = self._init_multigpu_helper()
self._test_all_gather_helper(group, group_id, rank, True, rank_to_GPU)
@sandcastle_skip_if(BACKEND == "nccl", "Nccl does not support CPU tensors")
def test_all_gather_complex(self):
group, group_id, rank = self._init_global_test()
self._test_all_gather_helper(group, group_id, rank, dtype=torch.cfloat)
@sandcastle_skip_if(BACKEND != "nccl", "Only Nccl supports CUDA all gather")
@sandcastle_skip_if(BACKEND == "nccl", "CUDA all gather skipped for NCCL")
@skip_if_no_gpu
def test_all_gather_cuda_complex(self):
group, group_id, rank = self._init_global_test()
rank_to_GPU = self._init_multigpu_helper()
self._test_all_gather_helper(
group, group_id, rank, True, rank_to_GPU, dtype=torch.cfloat
)
@skip_if_small_worldsize
@sandcastle_skip_if(BACKEND == "nccl", "Nccl does not support CPU tensors")
def test_all_gather_group(self):
group, group_id, rank = self._init_group_test()
self._test_all_gather_helper(group, group_id, rank)
@sandcastle_skip_if(BACKEND == "nccl", "Nccl does not support CPU tensors")
def test_all_gather_full_group(self):
group, group_id, rank = self._init_full_group_test()
self._test_all_gather_helper(group, group_id, rank)
def _run_all_gather_coalesced_and_verify(
self, output_tensor_lists, input_tensors, expected_tensors, group_id
):
"""Run dist.all_gather_coalesced on the inputs and return True iff every gathered tensor matches expected_tensors."""
tensor_shapes = []
for input_tensor in input_tensors:
if input_tensor.dtype == torch.complex64:
tensor_shapes.append(torch.view_as_real(input_tensor).shape)
else:
tensor_shapes.append(input_tensor.shape)
self.call_dist_op(
":all_gather",
False,
dist.all_gather_coalesced,
output_tensor_lists,
input_tensors,
group_id,
tensor_shapes=tensor_shapes,
)
for l1, l2 in zip(output_tensor_lists, expected_tensors):
for t1, t2 in zip(l1, l2):
if not torch.equal(t1, t2):
return False
return True
def _test_all_gather_coalesced_helper(
self, group, group_id, rank, dtype=torch.float
):
# TODO: Instead we should probably go through _rank_not_in_group
# mechanism to disable sending tensors
if group_id is not None:
for test_case_id in range(2, 5):
# Make sure we create tensors of incompatible sizes, e.g.
# [1], [2x2], [3x3x3] ... to be sent in one batch
input_tensors = [
_build_multidim_tensor(
tensor_id, tensor_id, rank + tensor_id, dtype=dtype
)
for tensor_id in range(1, test_case_id)
]
output_tensor_lists = [
[
_build_multidim_tensor(
tensor_id, tensor_id, -1, dtype=dtype
)
for tensor_id in range(1, test_case_id)
]
for _ in group
]
expected_tensors = [
[
_build_multidim_tensor(
tensor_id, tensor_id, rank_iter + tensor_id, dtype=dtype
)
for tensor_id in range(1, test_case_id)
]
for rank_iter in group
]
assert self._run_all_gather_coalesced_and_verify(
output_tensor_lists, input_tensors, expected_tensors, group_id
), "output tensors do not match expected outputs"
self._barrier()
@sandcastle_skip_if(
BACKEND == "nccl", "all_gather_coalesced does not support NCCL"
)
@sandcastle_skip_if(BACKEND == "mpi", "all_gather_coalesced does not support MPI")
def test_all_gather_coalesced_simple(self):
group, group_id, rank = self._init_global_test()
self._test_all_gather_coalesced_helper(group, group_id, rank)
@sandcastle_skip_if(
BACKEND == "nccl", "all_gather_coalesced does not support NCCL"
)
@sandcastle_skip_if(BACKEND == "mpi", "all_gather_coalesced does not support MPI")
def test_all_gather_coalesced_complex(self):
group, group_id, rank = self._init_global_test()
self._test_all_gather_coalesced_helper(
group, group_id, rank, dtype=torch.cfloat
)
@skip_if_small_worldsize
@sandcastle_skip_if(
BACKEND == "nccl", "all_gather_coalesced does not support NCCL"
)
@sandcastle_skip_if(BACKEND == "mpi", "all_gather_coalesced does not support MPI")
def test_all_gather_coalesced_group(self):
group, group_id, rank = self._init_group_test()
self._test_all_gather_coalesced_helper(group, group_id, rank)
@sandcastle_skip_if(
BACKEND == "nccl", "all_gather_coalesced does not support NCCL"
)
@sandcastle_skip_if(BACKEND == "mpi", "all_gather_coalesced does not support MPI")
def test_all_gather_coalesced_full_group(self):
group, group_id, rank = self._init_full_group_test()
self._test_all_gather_coalesced_helper(group, group_id, rank)
@sandcastle_skip_if(
BACKEND == "nccl", "all_gather_coalesced does not support NCCL"
)
@sandcastle_skip_if(BACKEND == "mpi", "all_gather_coalesced does not support MPI")
def test_all_gather_coalesced_with_empty(self):
group, group_id, rank = self._init_global_test()
input_tensors = [
rank * torch.ones([2, 2]),
torch.ones([0]),
(rank + 1) * torch.ones([3, 3]),
torch.ones([0]),
torch.ones([0]),
]
output_tensors_lists = [
[
-1 * torch.ones([2, 2]),
-1 * torch.ones([0]),
-1 * torch.ones([3, 3]),
-1 * torch.ones([0]),
-1 * torch.ones([0]),
]
for _ in group
]
expected_tensors = [
[
r * torch.ones([2, 2]),
torch.ones([0]),
(r + 1) * torch.ones([3, 3]),
torch.ones([0]),
torch.ones([0]),
]
for r in group
]
assert self._run_all_gather_coalesced_and_verify(
output_tensors_lists, input_tensors, expected_tensors, group_id
)
self._barrier()
# AllToAll
def _test_all_to_all_single_equal_split_helper(
self, group, group_id, rank, cuda=False, rank_to_GPU=None, dtype=torch.float
):
if group_id is not None:
size = len(group)
in_tensor = torch.ones([size, size], dtype=dtype) * rank
expected_tensor = torch.cat(
[torch.ones([1, size], dtype=dtype) * i for i in group]
)
out_tensor = torch.ones([size, size], dtype=dtype) * -1
if cuda:
in_tensor = in_tensor.cuda(rank_to_GPU[rank][0])
expected_tensor = expected_tensor.cuda(rank_to_GPU[rank][0])
out_tensor = out_tensor.cuda(rank_to_GPU[rank][0])
if dtype == torch.complex64:
tensor_shapes = [torch.view_as_real(in_tensor).shape]
else:
tensor_shapes = [in_tensor.shape]
self.call_dist_op(
":all_to_all",
False,
dist.all_to_all_single,
out_tensor,
in_tensor,
group=group_id,
tensor_shapes=tensor_shapes,
)
self.assertEqual(out_tensor, expected_tensor)
self._barrier()
def _test_all_to_all_single_unequal_split_helper(
self, group, group_id, rank, cuda=False, rank_to_GPU=None, dtype=torch.float
):
if group_id is not None:
size = len(group)
in_splits = [i + 1 for i in group]
out_splits = [rank + 1 for _ in group]
in_tensor = torch.ones([sum(in_splits), size], dtype=dtype) * rank
out_tensor = torch.ones([(rank + 1) * size, size], dtype=dtype)
expected_tensor = torch.cat(
[torch.ones([rank + 1, size], dtype=dtype) * i for i in group]
)
if cuda:
in_tensor = in_tensor.cuda(rank_to_GPU[rank][0])
expected_tensor = expected_tensor.cuda(rank_to_GPU[rank][0])
out_tensor = out_tensor.cuda(rank_to_GPU[rank][0])
dist.all_to_all_single(
out_tensor, in_tensor, out_splits, in_splits, group=group_id
)
self.assertEqual(out_tensor, expected_tensor)
self._barrier()
def _test_all_to_all_helper(
self,
group,
group_id,
rank,
cuda=False,
rank_to_GPU=None,
dtype=torch.float,
):
if group_id is not None:
size = len(group)
in_splits = [i + 1 for i in group]
in_tensors = [
torch.ones([in_splits[i], size], dtype=dtype) * rank
for i, _ in enumerate(group)
]
out_tensors = [
torch.ones([(rank + 1), size], dtype=dtype) for _ in group
]
expected_tensors = [
torch.ones([rank + 1, size], dtype=dtype) * i for i in group
]
if cuda:
in_tensors = [t.cuda(rank_to_GPU[rank][0]) for t in in_tensors]
expected_tensors = [
t.cuda(rank_to_GPU[rank][0]) for t in expected_tensors
]
out_tensors = [t.cuda(rank_to_GPU[rank][0]) for t in out_tensors]
dist.all_to_all(out_tensors, in_tensors, group=group_id)
for t1, t2 in zip(out_tensors, expected_tensors):
self.assertEqual(t1, t2)
self._barrier()
@sandcastle_skip_if(BACKEND != "mpi", "Only MPI supports CPU all_to_all_single")
def test_all_to_all_single_equal_split(self):
group, group_id, rank = self._init_global_test()
self._test_all_to_all_single_equal_split_helper(group, group_id, rank)
@sandcastle_skip_if(BACKEND != "nccl", "Only Nccl supports CUDA all_to_all_single")
@skip_if_no_gpu
def test_all_to_all_single_equal_split_cuda(self):
group, group_id, rank = self._init_global_test()
rank_to_GPU = self._init_multigpu_helper()
self._test_all_to_all_single_equal_split_helper(
group,
group_id,
rank,
True,
rank_to_GPU,
)
@sandcastle_skip_if(BACKEND != "mpi", "Only MPI supports CPU all_to_all_single")
def test_all_to_all_single_equal_split_complex(self):
group, group_id, rank = self._init_global_test()
self._test_all_to_all_single_equal_split_helper(
group, group_id, rank, dtype=torch.cfloat
)
@sandcastle_skip_if(BACKEND != "nccl", "Only Nccl supports CUDA all_to_all_single")
@skip_if_no_gpu
def test_all_to_all_single_equal_split_cuda_complex(self):
group, group_id, rank = self._init_global_test()
rank_to_GPU = self._init_multigpu_helper()
self._test_all_to_all_single_equal_split_helper(
group, group_id, rank, True, rank_to_GPU, dtype=torch.cfloat
)
@sandcastle_skip_if(BACKEND != "mpi", "Only MPI supports CPU all_to_all_single")
def test_all_to_all_single_unequal_split(self):
group, group_id, rank = self._init_global_test()
self._test_all_to_all_single_unequal_split_helper(group, group_id, rank)
@sandcastle_skip_if(BACKEND != "nccl", "Only Nccl supports CUDA all_to_all_single")
@skip_if_no_gpu
def test_all_to_all_single_unequal_split_cuda(self):
group, group_id, rank = self._init_global_test()
rank_to_GPU = self._init_multigpu_helper()
self._test_all_to_all_single_unequal_split_helper(
group,
group_id,
rank,
True,
rank_to_GPU,
)
@sandcastle_skip_if(BACKEND != "mpi", "Only MPI supports CPU all_to_all_single")
def test_all_to_all_single_unequal_split_complex(self):
group, group_id, rank = self._init_global_test()
self._test_all_to_all_single_unequal_split_helper(
group, group_id, rank, dtype=torch.cfloat
)
@sandcastle_skip_if(BACKEND != "nccl", "Only Nccl supports CUDA all_to_all_single")
@skip_if_no_gpu
def test_all_to_all_single_unequal_split_cuda_complex(self):
group, group_id, rank = self._init_global_test()
rank_to_GPU = self._init_multigpu_helper()
self._test_all_to_all_single_unequal_split_helper(
group,
group_id,
rank,
True,
rank_to_GPU,
dtype=torch.cfloat,
)
@sandcastle_skip_if(BACKEND != "mpi", "Only MPI supports all_to_all")
def test_all_to_all(self):
group, group_id, rank = self._init_global_test()
self._test_all_to_all_helper(group, group_id, rank)
@sandcastle_skip_if(BACKEND != "nccl", "Only NCCL supports CUDA all_to_all")
@skip_if_rocm
def test_all_to_all_cuda(self):
group, group_id, rank = self._init_global_test()
rank_to_GPU = self._init_multigpu_helper()
self._test_all_to_all_helper(group, group_id, rank, True, rank_to_GPU)
@sandcastle_skip_if(BACKEND != "mpi", "Only MPI supports all_to_all")
def test_all_to_all_complex(self):
group, group_id, rank = self._init_global_test()
self._test_all_to_all_helper(group, group_id, rank, dtype=torch.cfloat)
@sandcastle_skip_if(BACKEND != "nccl", "Only NCCL supports CUDA all_to_all")
@skip_if_rocm
def test_all_to_all_cuda_complex(self):
group, group_id, rank = self._init_global_test()
rank_to_GPU = self._init_multigpu_helper()
self._test_all_to_all_helper(
group, group_id, rank, True, rank_to_GPU, dtype=torch.cfloat
)
@sandcastle_skip_if(BACKEND != "mpi", "Only MPI supports CPU all_to_all_single")
@skip_if_small_worldsize
def test_all_to_all_single_equal_split_group(self):
group, group_id, rank = self._init_group_test()
self._test_all_to_all_single_equal_split_helper(group, group_id, rank)
@sandcastle_skip_if(BACKEND != "nccl", "Only Nccl supports CUDA all_to_all_single")
@skip_if_no_gpu
@skip_if_small_worldsize
def test_all_to_all_single_equal_split_group_cuda(self):
group, group_id, rank = self._init_group_test()
rank_to_GPU = self._init_multigpu_helper()
self._test_all_to_all_single_equal_split_helper(
group,
group_id,
rank,
True,
rank_to_GPU,
)
@sandcastle_skip_if(BACKEND != "mpi", "Only MPI supports CPU all_to_all_single")
@skip_if_small_worldsize
def test_all_to_all_single_unequal_split_group(self):
group, group_id, rank = self._init_group_test()
self._test_all_to_all_single_unequal_split_helper(group, group_id, rank)
@sandcastle_skip_if(BACKEND != "nccl", "Only Nccl supports CUDA all_to_all_single")
@skip_if_no_gpu
@skip_if_small_worldsize
def test_all_to_all_single_unequal_split_group_cuda(self):
group, group_id, rank = self._init_group_test()
rank_to_GPU = self._init_multigpu_helper()
self._test_all_to_all_single_unequal_split_helper(
group,
group_id,
rank,
True,
rank_to_GPU,
)
@sandcastle_skip_if(BACKEND != "mpi", "Only MPI supports all_to_all")
@skip_if_small_worldsize
def test_all_to_all_group(self):
group, group_id, rank = self._init_group_test()
self._test_all_to_all_helper(group, group_id, rank)
@sandcastle_skip_if(BACKEND != "nccl", "Only NCCL supports CUDA all_to_all")
@skip_if_small_worldsize
@skip_if_rocm
def test_all_to_all_group_cuda(self):
group, group_id, rank = self._init_group_test()
rank_to_GPU = self._init_multigpu_helper()
self._test_all_to_all_helper(group, group_id, rank, True, rank_to_GPU)
@sandcastle_skip_if(BACKEND != "mpi", "Only MPI supports CPU all_to_all_single")
def test_all_to_all_single_equal_split_full_group(self):
group, group_id, rank = self._init_full_group_test()
self._test_all_to_all_single_equal_split_helper(group, group_id, rank)
@sandcastle_skip_if(BACKEND != "nccl", "Only Nccl supports CUDA all_to_all_single")
@skip_if_no_gpu
def test_all_to_all_single_equal_split_full_group_cuda(self):
group, group_id, rank = self._init_full_group_test()
rank_to_GPU = self._init_multigpu_helper()
self._test_all_to_all_single_equal_split_helper(
group,
group_id,
rank,
True,
rank_to_GPU,
)
@sandcastle_skip_if(BACKEND != "mpi", "Only MPI supports CPU all_to_all_single")
def test_all_to_all_single_unequal_split_full_group(self):
group, group_id, rank = self._init_full_group_test()
self._test_all_to_all_single_unequal_split_helper(group, group_id, rank)
@sandcastle_skip_if(BACKEND != "nccl", "Only Nccl supports CUDA all_to_all_single")
@skip_if_no_gpu
def test_all_to_all_single_unequal_split_full_group_cuda(self):
group, group_id, rank = self._init_full_group_test()
rank_to_GPU = self._init_multigpu_helper()
self._test_all_to_all_single_unequal_split_helper(
group,
group_id,
rank,
True,
rank_to_GPU,
)
@sandcastle_skip_if(BACKEND != "mpi", "Only MPI supports all_to_all")
def test_all_to_all_full_group(self):
group, group_id, rank = self._init_full_group_test()
self._test_all_to_all_helper(group, group_id, rank)
@sandcastle_skip_if(BACKEND != "nccl", "Only NCCL supports CUDA all_to_all")
@skip_if_rocm
def test_all_to_all_full_group_cuda(self):
group, group_id, rank = self._init_full_group_test()
rank_to_GPU = self._init_multigpu_helper()
self._test_all_to_all_helper(group, group_id, rank, True, rank_to_GPU)
# BARRIER
def _test_barrier_helper(
self, group, group_id, rank, cuda=False, rank_to_GPU=None
):
"""Each rank in turn broadcasts a future timestamp and sleeps past it; the other ranks verify the barrier held them until at least that time."""
WAIT_TIME = 0.3 # seconds
for dest in group:
expected_time = torch.DoubleTensor(1).fill_(0.0)
if cuda:
expected_time = expected_time.cuda(rank_to_GPU[rank][0])
if dest == rank:
expected_time.fill_(time.time() + WAIT_TIME)
dist.broadcast(expected_time, dest, group_id)
time.sleep(WAIT_TIME + 0.1) # sleep a little bit longer
dist.barrier(group_id)
else:
dist.broadcast(expected_time, dest, group_id)
dist.barrier(group_id)
self.assertGreaterAlmostEqual(
float(time.time()),
float(expected_time[0]),
"destination rank: %d, my rank: %d" % (dest, rank)
+ " (if you see this failure, please report in #14554)",
)
# Use higher timeout for the instance where the test runs
# against a subgroup and uses a CUDA tensor for expected time.
# The CUDA initialization for the participating processes can
# take long enough for the barrier timeout to trigger on the
# process that doesn't participate in the group.
self._barrier(timeout=20)
@skip_if_no_gpu
@sandcastle_skip_if(BACKEND == "mpi", "MPI doesn't support GPU barrier")
def test_barrier_cuda(self):
group, group_id, rank = self._init_global_test()
rank_to_GPU = self._init_multigpu_helper()
self._test_barrier_helper(group, group_id, rank, True, rank_to_GPU)
@skip_if_small_worldsize
@skip_if_no_gpu
@sandcastle_skip_if(BACKEND == "mpi", "MPI doesn't support GPU barrier")
def test_barrier_group_cuda(self):
group, group_id, rank = self._init_group_test()
rank_to_GPU = self._init_multigpu_helper()
self._test_barrier_helper(group, group_id, rank, True, rank_to_GPU)
@skip_if_small_worldsize
@skip_if_no_gpu
@sandcastle_skip_if(BACKEND == "mpi", "MPI doesn't support GPU barrier")
def test_barrier_full_group_cuda(self):
group, group_id, rank = self._init_full_group_test()
rank_to_GPU = self._init_multigpu_helper()
self._test_barrier_helper(group, group_id, rank, True, rank_to_GPU)
@sandcastle_skip_if(BACKEND == "nccl", "NCCL does not support CPU barrier")
def test_barrier(self):
group, group_id, rank = self._init_global_test()
self._test_barrier_helper(group, group_id, rank)
@skip_if_small_worldsize
@sandcastle_skip_if(BACKEND == "nccl", "NCCL does not support CPU barrier")
def test_barrier_group(self):
group, group_id, rank = self._init_group_test()
self._test_barrier_helper(group, group_id, rank)
@sandcastle_skip_if(BACKEND == "nccl", "NCCL does not support CPU barrier")
def test_barrier_full_group(self):
group, group_id, rank = self._init_full_group_test()
self._test_barrier_helper(group, group_id, rank)
def _test_broadcast_multigpu_helper(self, group, group_id, rank, rank_to_GPU):
for src in group:
expected_tensor = _build_tensor(src + 1)
tensors = [
_build_tensor(src + 1, -1).cuda(device=i) for i in rank_to_GPU[rank]
]
if rank == src:
tensors[0] = expected_tensor.cuda(device=rank_to_GPU[rank][0])
dist.broadcast_multigpu(tensors, src, group_id)
for tensor in tensors:
self.assertEqual(tensor, expected_tensor)
self._barrier()
@sandcastle_skip_if(BACKEND == "mpi", "MPI doesn't support broadcast multigpu")
@sandcastle_skip_if(BACKEND == "nccl", "NCCL broadcast multigpu skipped")
@skip_if_no_gpu
def test_broadcast_multigpu(self):
group, group_id, rank = self._init_global_test()
rank_to_GPU = self._init_multigpu_helper()
self._test_broadcast_multigpu_helper(group, group_id, rank, rank_to_GPU)
def _test_all_reduce_multigpu_helper(
self,
group,
group_id,
rank,
rank_to_GPU,
op,
master_value,
worker_value,
expected_value,
dtype=torch.float,
):
for src in group:
curr_value = master_value if rank == src else worker_value
tensors = [
_build_tensor(src + 1, curr_value, dtype=dtype).cuda(device=i)
for i in rank_to_GPU[rank]
]
self.call_dist_op(
":all_reduce",
False,
dist.all_reduce_multigpu,
tensors,
op,
group_id,
)
expected_tensor = _build_tensor(src + 1, expected_value, dtype=dtype)
for tensor in tensors:
self.assertEqual(tensor, expected_tensor)
self._barrier()
@sandcastle_skip_if(BACKEND == "mpi", "MPI doesn't support broadcast multigpu")
@sandcastle_skip_if(BACKEND == "nccl", "CUDA all_reduce multigpu skipped for NCCL")
@skip_if_no_gpu
def test_all_reduce_multigpu(self):
group, group_id, rank = self._init_global_test()
rank_to_GPU = self._init_multigpu_helper()
self._test_all_reduce_multigpu_helper(
group,
group_id,
rank,
rank_to_GPU,
dist.ReduceOp.SUM,
2,
10,
(2 + 10 * (len(group) - 1)) * len(rank_to_GPU[0]),
)
@sandcastle_skip_if(BACKEND == "mpi", "MPI doesn't support broadcast multigpu")
@sandcastle_skip_if(BACKEND == "nccl", "CUDA all_reduce multigpu skipped for NCCL")
@skip_if_no_gpu
def test_all_reduce_multigpu_complex(self):
group, group_id, rank = self._init_global_test()
rank_to_GPU = self._init_multigpu_helper()
self._test_all_reduce_multigpu_helper(
group,
group_id,
rank,
rank_to_GPU,
dist.ReduceOp.SUM,
complex(2, 3),
complex(10, 11),
(complex(2, 3) + complex(10, 11) * (len(group) - 1))
* len(rank_to_GPU[0]),
dtype=torch.cfloat,
)
def _test_reduce_multigpu_helper(
self,
group,
group_id,
rank,
rank_to_GPU,
op,
master_value,
worker_value,
expected_value,
):
for src in group:
tensor_value = master_value if rank == src else worker_value
tensors = [
_build_tensor(src + 1, tensor_value).cuda(device=i)
for i in rank_to_GPU[rank]
]
self.call_dist_op(
"reduce",
False,
dist.reduce_multigpu,
tensors,
src,
op,
group_id,
expect_event=len(tensors) == 1,
tensor_shapes=[tensors[0].shape],
)
if rank == src:
expected_tensor = _build_tensor(src + 1, expected_value)
self.assertEqual(tensors[0], expected_tensor)
self._barrier()
@sandcastle_skip_if(
BACKEND != "nccl", "Only Nccl backend supports reduce multigpu"
)
@skip_if_no_gpu
def test_reduce_multigpu(self):
group, group_id, rank = self._init_global_test()
rank_to_GPU = self._init_multigpu_helper()
device_id = rank_to_GPU[rank][0]
torch.cuda.set_device(device_id)
self._test_reduce_multigpu_helper(
group,
group_id,
rank,
rank_to_GPU,
dist.ReduceOp.SUM,
2,
10,
(2 + 10 * (len(group) - 1)) * len(rank_to_GPU[0]),
)
def _test_all_gather_multigpu_helper(
self, group, group_id, rank, rank_to_GPU, dtype=torch.float
):
for dest in group:
tensors = [
_build_tensor(dest + 1, dtype=dtype).cuda(device=i)
for i in rank_to_GPU[rank]
]
output_tensors = []
expected_output = []
output_per_gpu = (
[_build_tensor(dest + 1, -1, dtype=dtype)]
* len(rank_to_GPU[0])
* len(group)
)
expected_per_gpu = (
[_build_tensor(dest + 1, dtype=dtype)]
* len(rank_to_GPU[0])
* len(group)
)
for gpu in rank_to_GPU[rank]:
output_tensors.append([t.cuda(device=gpu) for t in output_per_gpu])
expected_output.append(
[t.cuda(device=gpu) for t in expected_per_gpu]
)
self.call_dist_op(
"all_gather",
False,
dist.all_gather_multigpu,
output_tensors,
tensors,
group_id,
expect_event=len(expected_output) == 1,
)
self.assertEqual(output_tensors, expected_output)
self._barrier()
@sandcastle_skip_if(
BACKEND != "nccl", "Only Nccl backend supports allgather multigpu"
)
@skip_if_no_gpu
def test_all_gather_multigpu(self):
group, group_id, rank = self._init_global_test()
rank_to_GPU = self._init_multigpu_helper()
device_id = rank_to_GPU[rank][0]
torch.cuda.set_device(device_id)
self._test_all_gather_multigpu_helper(group, group_id, rank, rank_to_GPU)
@sandcastle_skip_if(
BACKEND != "nccl", "Only Nccl backend supports allgather multigpu"
)
@skip_if_no_gpu
def test_all_gather_multigpu_complex(self):
group, group_id, rank = self._init_global_test()
rank_to_GPU = self._init_multigpu_helper()
device_id = rank_to_GPU[rank][0]
torch.cuda.set_device(device_id)
self._test_all_gather_multigpu_helper(
group, group_id, rank, rank_to_GPU, dtype=torch.cfloat
)
def _model_step(self, model):
"""Apply each parameter's accumulated gradient as an in-place additive update, then clear the gradient."""
for param in model.parameters():
if param.grad is not None:
with torch.no_grad():
param += param.grad
param.grad = None
def _model_step_with_zero_grad(self, model):
for param in model.parameters():
if param.grad is not None:
with torch.no_grad():
param += param.grad
param.grad.requires_grad_(False)
param.grad.zero_()
def _prepare_dummy_data(self, local_bs):
world_size = int(os.environ["WORLD_SIZE"])
global_bs = world_size * local_bs
input_cpu = torch.randn(global_bs, 2)
target = torch.randn(global_bs, 4)
loss = nn.MSELoss()
return global_bs, input_cpu, target, loss
def _test_DDP_helper(
self, model, input_var, target, loss, scale_factor=1.0, memory_format=None
):
model.train()
output = model(input_var)
loss_value = loss(output, target) * scale_factor
loss_value.backward()
if memory_format is not None:
self.assertTrue(output.is_contiguous(memory_format=memory_format))
def _assert_equal_param(self, param_gpu, param_DDP):
self.assertEqual(len(param_gpu), len(param_DDP))
for p_gpu, p_DDP in zip(param_gpu, param_DDP):
self.assertEqual(p_gpu, p_DDP)
def _test_DDP_niter(
self,
model_base,
model_DDP,
input,
target,
loss,
local_bs,
rank,
batch_size,
test_save,
offset=None,
world_size=0,
zero_grad=False,
memory_format=None,
n_iter=5,
):
for idx in range(n_iter):
self._test_DDP_helper(
model_base, input, target, loss, memory_format=memory_format
)
if offset is None:
offset = rank * local_bs
self._test_DDP_helper(
model_DDP,
input[offset : offset + local_bs],
target[offset : offset + local_bs],
loss,
world_size * local_bs / batch_size if world_size != 0 else 1,
memory_format=memory_format,
)
if zero_grad:
self._model_step_with_zero_grad(model_base)
self._model_step_with_zero_grad(model_DDP)
else:
self._model_step(model_base)
self._model_step(model_DDP)
self._assert_equal_param(
list(model_base.parameters()), list(model_DDP.module.parameters())
)
input = input[torch.randperm(batch_size)]
if test_save and idx == 2 and INIT_METHOD.startswith("file://"):
with tempfile.NamedTemporaryFile() as tmp:
if sys.platform == "win32":
torch.save(model_DDP, tmp)
tmp.seek(0)
model_DDP = torch.load(tmp)
else:
torch.save(model_DDP, tmp.name)
model_DDP = torch.load(tmp.name)
with tempfile.TemporaryFile() as tmp_file:
torch.save(model_DDP, tmp_file)
tmp_file.seek(0)
saved_model = torch.load(tmp_file)
for k in model_DDP.state_dict():
self.assertEqual(model_DDP.state_dict()[k], saved_model.state_dict()[k])
def _test_DistributedDataParallel(
self,
gpu_subset,
rank,
output_device=None,
gradient_as_bucket_view=False,
static_graph=False,
):
model = DDP_NET
model_gpu = copy.deepcopy(model)
model_gpu.cuda(gpu_subset[0])
model_DDP = copy.deepcopy(model)
model_DDP.cuda(gpu_subset[0])
model_DDP = nn.parallel.DistributedDataParallel(
model_DDP,
device_ids=gpu_subset,
gradient_as_bucket_view=gradient_as_bucket_view,
)
if static_graph:
model_DDP._set_static_graph()
with tempfile.NamedTemporaryFile() as tmp:
if sys.platform == "win32":
torch.save(model_DDP, tmp)
tmp.seek(0)
model_DDP = torch.load(tmp)
else:
torch.save(model_DDP, tmp.name)
model_DDP = torch.load(tmp.name)
local_bs = len(gpu_subset)
global_bs, input_cpu, target, loss = self._prepare_dummy_data(local_bs)
self._test_DDP_niter(
model_gpu,
model_DDP,
input_cpu.cuda(gpu_subset[0]),
target.cuda(gpu_subset[0]),
loss,
local_bs,
rank,
global_bs,
True,
)
self._barrier()
def _test_DistributedDataParallelCPU(self, gradient_as_bucket_view=False):
group, group_id, rank = self._init_global_test()
model_base = DDP_NET
model_DDP = copy.deepcopy(model_base)
model_DDP = nn.parallel.DistributedDataParallel(
model_DDP, gradient_as_bucket_view=gradient_as_bucket_view
)
local_bs = 2
global_bs, input_cpu, target, loss = self._prepare_dummy_data(local_bs)
self._test_DDP_niter(
model_base,
model_DDP,
input_cpu,
target,
loss,
local_bs,
rank,
global_bs,
False,
zero_grad=True,
)
self._barrier()
return model_DDP
@sandcastle_skip_if(BACKEND == "nccl", "nccl does not support DDP on CPU models")
def test_DistributedDataParallelCPU(self):
self._test_DistributedDataParallelCPU()
@sandcastle_skip_if(BACKEND == "nccl", "nccl does not support DDP on CPU models")
def test_DistributedDataParallelCPU_grad_is_view(self):
self._test_DistributedDataParallelCPU(gradient_as_bucket_view=True)
@sandcastle_skip_if(
BACKEND != "nccl" and BACKEND != "gloo",
"Only Nccl & Gloo backend support DistributedDataParallel",
)
def test_DistributedDataParallel_requires_grad(self):
self.assertRaises(
RuntimeError, lambda: nn.parallel.DistributedDataParallel(nn.Module())
)
self._barrier()
@sandcastle_skip_if(
BACKEND != "nccl" and BACKEND != "gloo",
"Only NCCL and GLOO backend support DistributedDataParallel",
)
@skip_if_lt_x_gpu(int(os.environ["WORLD_SIZE"]))
@skip_if_rocm
def test_DistributedDataParallel_non_default_stream(self):
stream = torch.cuda.Stream(self.rank)
rank = self.rank
with torch.cuda.stream(stream):
net = torch.nn.parallel.DistributedDataParallel(
torch.nn.Linear(1, 1, bias=False).cuda(rank), device_ids=[rank]
)
for i in range(1000):
# Clear gradients manually
grad = net.module.weight.grad
if grad is not None:
grad.requires_grad_(False)
grad.zero_()
# Forward + BW
batch = torch.tensor([rank]).float().cuda(rank)
loss = net(batch).sum()
loss.backward()
# For each worker, the gradient on the weight should be worker_rank.
grad = net.module.weight.grad
avg = grad.clone()
# All-reducing the gradient averages should give us the gradient
# average. If not, then one of the workers has not correctly
# written back the averaged gradient before this all-reduce call.
dist.all_reduce(avg)
world_size = int(os.environ["WORLD_SIZE"])
avg.div_(world_size)
expected_grad = sum(range(world_size)) / world_size
self.assertEqual(
avg[0, 0],
expected_grad,
msg=f"Expected gradient of {expected_grad} but got {avg} on rank {self.rank}",
)
@sandcastle_skip_if(
BACKEND != "nccl" and BACKEND != "gloo",
"MPI backend does not support DDP communication hook on CUDA devices",
)
@skip_if_lt_x_gpu(int(os.environ["WORLD_SIZE"]))
@skip_if_rocm
def test_ddp_comm_hook_logging(self):
hooks = [
default.allreduce_hook,
default.fp16_compress_hook,
powerSGD.powerSGD_hook,
powerSGD.batched_powerSGD_hook,
quantization_hooks.quantization_pertensor_hook,
quantization_hooks.quantization_perchannel_hook,
]
cpp_builtin_hooks = [
dist.BuiltinCommHookType.ALLREDUCE,
dist.BuiltinCommHookType.FP16_COMPRESS,
]
for hook in hooks:
ddp_model = torch.nn.parallel.DistributedDataParallel(
torch.nn.Linear(1, 1, bias=False).cuda(self.rank),
device_ids=[self.rank],
)
ddp_logging_data = ddp_model._get_ddp_logging_data()
# Hook not registered yet, so should be empty
self.assertEqual(ddp_logging_data.get("comm_hook"), None)
ddp_model.register_comm_hook(None, hook)
ddp_logging_data = ddp_model._get_ddp_logging_data()
self.assertEqual(ddp_logging_data.get("comm_hook"), hook.__qualname__)
for hook in cpp_builtin_hooks:
ddp_model = torch.nn.parallel.DistributedDataParallel(
torch.nn.Linear(1, 1, bias=False).cuda(self.rank),
device_ids=[self.rank],
)
ddp_logging_data = ddp_model._get_ddp_logging_data()
# Hook not registered yet, so should be empty
self.assertEqual(ddp_logging_data.get("comm_hook"), None)
ddp_model._register_builtin_comm_hook(hook)
ddp_logging_data = ddp_model._get_ddp_logging_data()
self.assertEqual(ddp_logging_data.get("comm_hook"), str(hook))
# No hook registered
ddp_model = torch.nn.parallel.DistributedDataParallel(
torch.nn.Linear(1, 1, bias=False).cuda(self.rank),
device_ids=[self.rank],
)
ddp_logging_data = ddp_model._get_ddp_logging_data()
# Hook not registered yet, so should be empty
self.assertEqual(ddp_logging_data.get("comm_hook"), None)
# After second forward pass, hook should still be empty string
for i in range(2):
inp = torch.ones(1, 1, device=self.rank)
loss = ddp_model(inp).sum()
loss.backward()
ddp_logging_data = ddp_model._get_ddp_logging_data()
# Note: DETAIL debug mode logs DDP logging data to stdout and
# thus accesses std::map, which fills in a default value for the
# type if it didn't exist.
self.assertEqual(ddp_logging_data.get("comm_hook", ""), "")
def _test_ddp_hook_with_optimizer_parity(
self, grad_as_bucket_view, static_graph
):
rank = self.rank
torch.cuda.set_device(rank)
torch.manual_seed(rank)
torch.cuda.manual_seed(rank)
models_to_test = [
(LargeNet(), torch.randn(1, 1000).cuda()),
]
if HAS_TORCHVISION:
models_to_test.append(
(torchvision.models.resnet50(), torch.randn(1, 3, 3, 1000).cuda())
)
for (model, inp) in models_to_test:
with torch.backends.cudnn.flags(
enabled=True, deterministic=True, benchmark=False
):
sgd_lr = 1e-2
sgd_momentum = 0.9
sgd_weight_decay = 0.01
ddp_model_with_optimizer_hook = (
torch.nn.parallel.DistributedDataParallel(
copy.deepcopy(model).cuda(),
device_ids=[self.rank],
gradient_as_bucket_view=grad_as_bucket_view,
)
)
if static_graph:
ddp_model_with_optimizer_hook._set_static_graph()
allreduce_hook = default.allreduce_hook
opt_hook_state = default._OptimizerHookState(
_FunctionalSGD,
sgd_lr,
momentum=sgd_momentum,
weight_decay=sgd_weight_decay,
)
ddp_model_with_optimizer_hook.register_comm_hook(
None,
default._hook_then_optimizer(allreduce_hook, opt_hook_state),
)
ddp_model_with_no_hook = torch.nn.parallel.DistributedDataParallel(
copy.deepcopy(model).cuda(),
device_ids=[self.rank],
gradient_as_bucket_view=grad_as_bucket_view,
)
if static_graph:
ddp_model_with_no_hook._set_static_graph()
sgd_no_hook = torch.optim.SGD(
ddp_model_with_no_hook.parameters(),
lr=sgd_lr,
momentum=sgd_momentum,
weight_decay=sgd_weight_decay,
)
for hook_param, allreduce_param in zip(
ddp_model_with_optimizer_hook.parameters(),
ddp_model_with_no_hook.parameters(),
):
self.assertEqual(hook_param, allreduce_param)
opt_hook_init_params = copy.deepcopy(
list(ddp_model_with_optimizer_hook.parameters())
)
for i in range(6):
ddp_model_with_optimizer_hook.zero_grad()
out = ddp_model_with_optimizer_hook(inp)
loss = out.sum()
loss.backward()
dist.barrier()
for i in range(6):
ddp_model_with_no_hook.zero_grad()
out = ddp_model_with_no_hook(inp)
loss = out.sum()
loss.backward()
sgd_no_hook.step()
dist.barrier()
for hook_param, allreduce_param in zip(
ddp_model_with_optimizer_hook.parameters(),
ddp_model_with_no_hook.parameters(),
):
self.assertEqual(hook_param, allreduce_param)
self.assertNotEqual(
opt_hook_init_params,
list(ddp_model_with_optimizer_hook.parameters()),
)
dist.barrier()
@sandcastle_skip_if(
BACKEND != "nccl" and BACKEND != "gloo",
"Only Nccl & Gloo backend support DistributedDataParallel",
)
@sandcastle_skip_if(IS_WINDOWS, "FunctionalSGD not yet supported with Windows.")
@skip_if_lt_x_gpu(2)
@skip_if_rocm
def test_ddp_hook_with_optimizer_parity(self):
for grad_as_bucket_view, static_graph in itertools.product(
[True, False], [True, False]
):
self._test_ddp_hook_with_optimizer_parity(
grad_as_bucket_view=grad_as_bucket_view, static_graph=static_graph
)
def _test_ddp_hook_parity(self, state, hook):
rank = self.rank
m = torch.nn.Linear(1, 5)
try:
process_group = state.process_group
except AttributeError:
process_group = state
net_with_hook = torch.nn.parallel.DistributedDataParallel(
copy.deepcopy(m).to(rank),
device_ids=[rank],
process_group=process_group,
)
net_with_hook.register_comm_hook(state=state, hook=hook)
net_without_hook = torch.nn.parallel.DistributedDataParallel(
copy.deepcopy(m).to(rank),
device_ids=[rank],
process_group=process_group,
)
for i in range(100):
for g in [
net_without_hook.module.weight.grad,
net_with_hook.module.weight.grad,
]:
if g is not None:
g.requires_grad_(False)
g.zero_()
batch = torch.tensor([rank]).float().cuda(rank)
loss = net_without_hook(batch).sum()
loss.backward()
grad = net_without_hook.module.weight.grad
avg = grad.clone()
expected_grad = (
sum(i for i in range(dist.get_world_size())) / dist.get_world_size()
)
loss_hook = net_with_hook(batch).sum()
loss_hook.backward()
grad_hook = net_with_hook.module.weight.grad
avg_hook = grad_hook.clone()
assert_func = (
self.assertEqual
if hook == default.allreduce_hook
else torch.testing.assert_allclose
)
assert_func(
avg_hook[0, 0],
expected_grad,
msg=f"Expected hook grad of {expected_grad} but got {avg_hook[0, 0]}",
)
assert_func(
avg_hook[0, 0],
avg[0, 0],
msg=f"Expected hook grad to be close to allreduce {avg[0, 0]}, but got {avg_hook[0, 0]}",
)
@sandcastle_skip_if(
BACKEND != "nccl" and BACKEND != "gloo",
"MPI backend does not support DDP communication hook on CUDA devices",
)
@skip_if_lt_x_gpu(int(os.environ["WORLD_SIZE"]))
@skip_if_rocm
def test_ddp_hook_parity_allreduce(self):
self._test_ddp_hook_parity(state=None, hook=default.allreduce_hook)
@sandcastle_skip_if(
BACKEND != "nccl" and BACKEND != "gloo",
"MPI backend does not support DDP communication hook on CUDA devices",
)
@skip_if_lt_x_gpu(int(os.environ["WORLD_SIZE"]))
@skip_if_rocm
def test_ddp_hook_parity_allreduce_process_group(self):
rank_to_GPU = self._init_multigpu_helper()
gpus = [rank_to_GPU[int(r)][0] for r in range(dist.get_world_size())]
process_group = torch.distributed.new_group(gpus)
self._test_ddp_hook_parity(state=process_group, hook=default.allreduce_hook)
@sandcastle_skip_if(
BACKEND != "nccl" and BACKEND != "gloo",
"MPI backend does not support DDP communication hook on CUDA devices",
)
@skip_if_lt_x_gpu(int(os.environ["WORLD_SIZE"]))
@skip_if_rocm
def test_ddp_hook_parity_powerSGD(self):
for warm_start in [True, False]:
powersgd_state = powerSGD.PowerSGDState(
process_group=None,
matrix_approximation_rank=1,
start_powerSGD_iter=2,
warm_start=warm_start,
)
self._test_ddp_hook_parity(
state=powersgd_state, hook=powerSGD.powerSGD_hook
)
@sandcastle_skip_if(
BACKEND != "nccl" and BACKEND != "gloo",
"MPI backend does not support DDP communication hook on CUDA devices",
)
@sandcastle_skip_if(
NO_MULTIPROCESSING_SPAWN,
"Disabled for environments that \
don't support multiprocessing with spawn start method",
)
@skip_if_lt_x_gpu(int(os.environ["WORLD_SIZE"]))
@skip_if_rocm
def test_ddp_hook_parity_post_localSGD(self):
        # Although we start running local SGD at iteration 10, since we still use the
        # global process group to run it, post-local SGD still allreduces gradients
        # globally for the remaining iterations.
state = post_localSGD.PostLocalSGDState(
process_group=None, subgroup=dist.group.WORLD, start_localSGD_iter=10
)
self._test_ddp_hook_parity(
state=state, hook=post_localSGD.post_localSGD_hook
)
        # Since local SGD would only start after the total of 100 iterations,
        # no local SGD is actually executed, and we don't even need to provide a subgroup for this case.
state = post_localSGD.PostLocalSGDState(
process_group=None, subgroup=None, start_localSGD_iter=1000
)
self._test_ddp_hook_parity(
state=state, hook=post_localSGD.post_localSGD_hook
)
def _prepare_single_device_module(
self,
rank,
process_group,
devices,
device_ids,
global_batch_size,
gradient_as_bucket_view=False,
):
model = Net()
device = devices[0] if devices else torch.device("cuda:%d" % rank)
ddp_model = DistributedDataParallel(
copy.deepcopy(model).to(device),
device_ids=device_ids,
process_group=process_group,
bucket_cap_mb=0.001,
gradient_as_bucket_view=gradient_as_bucket_view,
)
model.to(device)
input = torch.randn(global_batch_size, 2).to(device)
target = torch.randn(global_batch_size, 4).to(device)
return model, ddp_model, input, target
def _prepare_cpu_module(
self,
process_group,
global_batch_size,
gradient_as_bucket_view=False,
):
model = Net()
ddp_model = DistributedDataParallel(
copy.deepcopy(model),
process_group=process_group,
bucket_cap_mb=0.001,
gradient_as_bucket_view=gradient_as_bucket_view,
)
input = torch.randn(global_batch_size, 2)
target = torch.randn(global_batch_size, 4)
return model, ddp_model, input, target
def _test_accumulate_gradients_no_sync(
self, num_iters=2, ddp_comm_hook=None, gradient_as_bucket_view=False
):
group, group_id, rank = self._init_global_test()
world_size = get_world_size()
if BACKEND == "mpi" or BACKEND == "gloo":
global_batch_size = world_size
local_batch_size = 1
model, ddp_model, input, target = self._prepare_cpu_module(
group_id, global_batch_size, gradient_as_bucket_view
)
if BACKEND == "nccl":
rank_to_GPU = self._init_multigpu_helper()
int_devices = rank_to_GPU[rank][:1]
devices = [torch.device("cuda:" + str(i)) for i in int_devices]
global_batch_size = world_size
local_batch_size = len(devices)
model, ddp_model, input, target = self._prepare_single_device_module(
rank,
group_id,
devices,
devices,
global_batch_size,
gradient_as_bucket_view,
)
if ddp_comm_hook is not None:
ddp_model.register_comm_hook(group_id, ddp_comm_hook)
def step_model(model, input, target):
model.train()
output = model(input)
loss = F.mse_loss(output, target.to(output.device))
loss.backward()
with torch.no_grad():
with ddp_model.no_sync():
ddp_model.train()
ddp_model(input)
for iteration in range(num_iters):
step_model(model, input, target)
ddp_input = input[
rank * local_batch_size : (rank + 1) * local_batch_size
]
ddp_target = target[
rank * local_batch_size : (rank + 1) * local_batch_size
]
if iteration % num_iters == 0:
with ddp_model.no_sync():
step_model(ddp_model, ddp_input, ddp_target)
else:
step_model(ddp_model, ddp_input, ddp_target)
for i, j in zip(model.parameters(), ddp_model.parameters()):
if not i.requires_grad:
continue
if iteration % num_iters == 0:
self.assertNotEqual(i.grad, j.grad)
else:
self.assertEqual(i.grad, j.grad)
torch.manual_seed(1337 + iteration)
input = input[torch.randperm(global_batch_size)]
@sandcastle_skip_if(
BACKEND != "mpi" and BACKEND != "nccl" and BACKEND != "gloo",
"get_future is only supported on mpi, nccl and gloo",
)
@nccl_skip_if_lt_x_gpu(BACKEND, 2)
def test_accumulate_gradients_no_sync(self):
self._test_accumulate_gradients_no_sync()
@sandcastle_skip_if(
BACKEND != "mpi" and BACKEND != "nccl" and BACKEND != "gloo",
"get_future is only supported on mpi, nccl and gloo",
)
@nccl_skip_if_lt_x_gpu(BACKEND, 2)
def test_accumulate_gradients_no_sync_grad_is_view(self):
self._test_accumulate_gradients_no_sync(gradient_as_bucket_view=True)
@sandcastle_skip_if(
BACKEND != "mpi" and BACKEND != "nccl" and BACKEND != "gloo",
"get_future is only supported on mpi, nccl and gloo",
)
@nccl_skip_if_lt_x_gpu(BACKEND, 2)
def test_accumulate_gradients_no_sync_allreduce_hook(self):
world_size = get_world_size()
def allreduce_hook(
group_id: object, bucket: dist.GradBucket
) -> torch.futures.Future[torch.Tensor]:
tensors = [bucket.get_tensor() / world_size]
return (
group_id.allreduce(tensors)
.get_future()
.then(lambda fut: fut.value()[0])
)
self._test_accumulate_gradients_no_sync(
num_iters=4, ddp_comm_hook=allreduce_hook
)
@sandcastle_skip_if(
BACKEND != "mpi" and BACKEND != "nccl" and BACKEND != "gloo",
"get_future is only supported on mpi, nccl and gloo",
)
@nccl_skip_if_lt_x_gpu(BACKEND, 2)
def test_accumulate_gradients_no_sync_allreduce_with_then_hook(self):
world_size = get_world_size()
def allreduce_with_then_hook(
group_id: object, bucket: dist.GradBucket
) -> torch.futures.Future[torch.Tensor]:
fut = group_id.allreduce([bucket.get_tensor()]).get_future()
def mult(fut):
return 2 * fut.wait()[0]
def div(fut):
return fut.wait() / (2 * world_size)
return fut.then(mult).then(div)
self._test_accumulate_gradients_no_sync(
num_iters=4, ddp_comm_hook=allreduce_with_then_hook
)
@sandcastle_skip_if(
BACKEND != "mpi" and BACKEND != "nccl" and BACKEND != "gloo",
"get_future is only supported on mpi, nccl and gloo",
)
@nccl_skip_if_lt_x_gpu(BACKEND, 2)
def test_get_future(self):
def mult(fut):
return [t * 3 for t in fut.wait()]
def add(fut):
return [t + 1 for t in fut.wait()]
group, group_id, rank = self._init_global_test()
input = _build_tensor(3, 2)
if BACKEND == "nccl":
rank_to_GPU = self._init_multigpu_helper()
device_id = rank_to_GPU[rank][0]
input = input.to(device_id)
fut = group_id.allreduce([input]).get_future()
res = fut.then(mult).then(add).wait()
expected = _build_tensor(3, 2 * len(group) * 3 + 1)
self.assertEqual(res[0], expected)
@sandcastle_skip_if(
BACKEND != "nccl" and BACKEND != "gloo",
"Only Nccl & Gloo backend support DistributedDataParallel",
)
@skip_if_no_gpu
def test_DistributedDataParallel(self):
group, group_id, rank = self._init_global_test()
rank_to_GPU = self._init_multigpu_helper()
gpus = list(rank_to_GPU[rank])
for use_bucket_view, static_graph in itertools.product(
(False, True), (False, True)
):
self._test_DistributedDataParallel(
gpu_subset=gpus,
rank=rank,
gradient_as_bucket_view=use_bucket_view,
static_graph=static_graph,
)
self._test_DistributedDataParallel(
gpu_subset=gpus,
rank=rank,
output_device=torch.device("cuda"),
gradient_as_bucket_view=use_bucket_view,
static_graph=static_graph,
)
gpus_list = [torch.device("cuda:" + str(i)) for i in gpus]
self._test_DistributedDataParallel(
gpu_subset=gpus_list,
rank=rank,
output_device=torch.device("cuda"),
gradient_as_bucket_view=use_bucket_view,
static_graph=static_graph,
)
def _test_DistributedDataParallel_with_amp(self, grad_is_view=False):
torch.manual_seed(31415)
model = copy.deepcopy(DDP_NET).cuda()
optimizer = torch.optim.SGD(model.parameters(), lr=0.03)
scaler = GradScaler()
ddp_model = nn.parallel.DistributedDataParallel(
model, device_ids=[self.rank], gradient_as_bucket_view=grad_is_view
)
input = torch.randn(dist.get_world_size() * 2, 2).cuda()
target = torch.randn(dist.get_world_size() * 2, 4).cuda()
loss_fn = nn.MSELoss()
for p in ddp_model.parameters():
self.assertTrue(p is not None)
self.assertTrue(p.grad is None)
for idx in range(20):
optimizer.zero_grad()
with autocast():
output = ddp_model(input)
loss = loss_fn(output, target)
scaler.scale(loss).backward()
for p in ddp_model.parameters():
if p.requires_grad:
self.assertTrue(p.grad is not None)
self.assertFalse(p.grad.isnan().any())
self.assertFalse(p.grad.isinf().any())
                # scaler.step() first unscales the gradients; if they do not
                # contain infs or NaNs, optimizer.step() is then called,
                # otherwise optimizer.step() is skipped.
scaler.step(optimizer)
# Updates the scale for next iteration.
scaler.update()
# Shuffle the input so that DDP input is different
torch.manual_seed(1337 + idx)
input = input[torch.randperm(dist.get_world_size() * 2)]
return ddp_model
@sandcastle_skip_if(
BACKEND != "nccl" and BACKEND != "gloo",
"Only Nccl & Gloo backend support DistributedDataParallel",
)
@skip_if_no_gpu
def test_DistributedDataParallel_with_amp_and_grad_is_view(self):
torch.cuda.set_device(self.rank)
ddp_model_grad_not_view = self._test_DistributedDataParallel_with_amp(
grad_is_view=False
)
ddp_model_grad_is_view = self._test_DistributedDataParallel_with_amp(
grad_is_view=True
)
for i, j in zip(
ddp_model_grad_not_view.parameters(),
ddp_model_grad_is_view.parameters(),
):
self.assertEqual(i, j)
def _test_DistributedDataParallel_SyncBatchNorm(
self,
gpu_subset,
rank,
local_bs,
global_bs,
offset,
output_device=None,
affine=True,
):
        # Run a simple end-to-end DDP model, using the result of the
        # single-node model as a baseline
        # CPU training setup
model = BN_NET if affine else BN_NET_NO_AFFINE
# single gpu training setup
model_gpu = copy.deepcopy(model)
model_gpu.cuda(gpu_subset[0])
# DDP training setup
model_DDP = nn.SyncBatchNorm.convert_sync_batchnorm(copy.deepcopy(model))
model_DDP.cuda(gpu_subset[0])
model_DDP = nn.parallel.DistributedDataParallel(
model_DDP, device_ids=gpu_subset
)
# test serializable/unserializable
with tempfile.NamedTemporaryFile() as tmp:
if sys.platform == "win32":
torch.save(model_DDP, tmp)
tmp.seek(0)
model_DDP = torch.load(tmp)
else:
torch.save(model_DDP, tmp.name)
model_DDP = torch.load(tmp.name)
# data initialization
input_cpu = torch.randn(global_bs, 2)
target = torch.randn(global_bs, 4)
loss = nn.MSELoss()
# check two model parameters over 5 iterations
self._test_DDP_niter(
model_gpu,
model_DDP,
input_cpu.cuda(gpu_subset[0]),
target.cuda(gpu_subset[0]),
loss,
local_bs,
rank,
global_bs,
True,
offset,
dist.get_world_size(),
5 if affine else 2,
)
self._barrier()
@skip_if_lt_x_gpu(2)
@sandcastle_skip_if(
BACKEND != "nccl" and BACKEND != "gloo",
"Only NCCL and GLOO backend support DistributedDataParallel",
)
@sandcastle_skip_if(
IS_WINDOWS, "PostLocalSGDOptimizer not yet supported with Windows."
)
def test_post_localSGD_optimizer_parity(self, grad_is_view=False):
learning_rate = 0.03
period = 4
warmup_steps = 10
torch.cuda.set_device(self.rank)
net = torch.nn.parallel.DistributedDataParallel(
copy.deepcopy(DDP_NET).cuda(),
device_ids=[self.rank],
gradient_as_bucket_view=grad_is_view,
)
opt = torch.optim.SGD(net.parameters(), lr=learning_rate)
averager = averagers.PeriodicModelAverager(
period=period, warmup_steps=warmup_steps
)
post_localSGD_net = torch.nn.parallel.DistributedDataParallel(
copy.deepcopy(DDP_NET).cuda(),
device_ids=[self.rank],
gradient_as_bucket_view=grad_is_view,
)
post_localSGD_opt = post_localSGD_optimizer.PostLocalSGDOptimizer(
params=post_localSGD_net.parameters(),
optimizer_class=torch.optim.SGD,
averager=averagers.PeriodicModelAverager(
period=period, warmup_steps=warmup_steps
),
lr=learning_rate,
)
input = torch.randn(dist.get_world_size() * 2, 2).cuda()
target = torch.randn(dist.get_world_size() * 2, 4).cuda()
loss_fn = nn.MSELoss()
for _ in range(20):
opt.zero_grad()
output = net(input)
loss = loss_fn(output, target)
loss.backward()
opt.step()
averager.average_parameters(net.parameters())
post_localSGD_opt.zero_grad()
post_localSGD_output = post_localSGD_net(input)
post_localSGD_loss = loss_fn(post_localSGD_output, target)
post_localSGD_loss.backward()
post_localSGD_opt.step()
for p1, p2 in zip(net.parameters(), post_localSGD_net.parameters()):
self.assertEqual(p1.data, p2.data)
@sandcastle_skip_if(
BACKEND != "nccl" and BACKEND != "gloo",
"Only Nccl & Gloo backend support DistributedDataParallel",
)
@skip_if_no_gpu
def test_DistributedDataParallel_SyncBatchNorm_Channels_Last(self):
group, group_id, rank = self._init_global_test()
num_processes = dist.get_world_size()
local_bs = 2
bs_offset = int(rank * 2)
global_bs = int(num_processes * 2)
model = ONLY_SBN_NET
model_gpu = copy.deepcopy(model).cuda(rank)
model_DDP = nn.parallel.DistributedDataParallel(
model_gpu, device_ids=[rank]
)
memory_format = torch.channels_last
input_gpu = (
torch.randn(global_bs, 2, 4, 4, dtype=torch.float)
.cuda(rank)
.to(memory_format=memory_format)
)
target_gpu = (
torch.randn(global_bs, 2, 4, 4, dtype=torch.float)
.cuda(rank)
.to(memory_format=memory_format)
)
loss = nn.MSELoss()
# check two model parameters over 5 iterations
self._test_DDP_niter(
model_gpu,
model_DDP,
input_gpu,
target_gpu,
loss,
local_bs,
rank,
global_bs,
True,
bs_offset,
dist.get_world_size(),
memory_format=memory_format,
)
self._barrier()
@sandcastle_skip_if(
BACKEND != "nccl" and BACKEND != "gloo",
"Only Nccl & Gloo backend support DistributedDataParallel",
)
@skip_if_no_gpu
def test_DistributedDataParallel_SyncBatchNorm(self):
group, group_id, rank = self._init_global_test()
rank_to_GPU = self._init_multigpu_helper()
# DDP does not support replicating BN layers within a process, hence
# testing with one module replica per process
gpus = [rank]
num_processes = dist.get_world_size()
local_bs = 2
bs_offset = int(rank * 2)
global_bs = int(num_processes * 2)
self._test_DistributedDataParallel_SyncBatchNorm(
gpu_subset=gpus,
rank=rank,
local_bs=local_bs,
global_bs=global_bs,
offset=bs_offset,
)
# test output_device
self._test_DistributedDataParallel_SyncBatchNorm(
gpu_subset=gpus,
rank=rank,
local_bs=local_bs,
global_bs=global_bs,
offset=bs_offset,
output_device=torch.device("cuda"),
)
# test device_ids
gpus = [torch.device("cuda:" + str(i)) for i in gpus]
self._test_DistributedDataParallel_SyncBatchNorm(
gpu_subset=gpus,
rank=rank,
local_bs=local_bs,
global_bs=global_bs,
offset=bs_offset,
output_device=torch.device("cuda"),
)
@sandcastle_skip_if(
BACKEND != "nccl" and BACKEND != "gloo",
"Only Nccl & Gloo backend support DistributedDataParallel",
)
@skip_if_no_gpu
def test_DistributedDataParallel_SyncBatchNorm_No_Affine(self):
group, group_id, rank = self._init_global_test()
rank_to_GPU = self._init_multigpu_helper()
# DDP does not support replicating BN layers within a process, hence
# testing with one module replica per process
gpus = [rank]
num_processes = dist.get_world_size()
local_bs = 2
bs_offset = int(rank * 2)
global_bs = int(num_processes * 2)
self._test_DistributedDataParallel_SyncBatchNorm(
gpu_subset=gpus,
rank=rank,
local_bs=local_bs,
global_bs=global_bs,
offset=bs_offset,
affine=False,
)
@sandcastle_skip_if(
BACKEND != "nccl" and BACKEND != "gloo",
"Only Nccl & Gloo backend support DistributedDataParallel",
)
@skip_if_no_gpu
def test_DistributedDataParallel_SyncBatchNorm_2D_Input(self):
group, group_id, rank = self._init_global_test()
rank_to_GPU = self._init_multigpu_helper()
# DDP does not support replicating BN layers within a process, hence
# testing with one module replica per process
gpus = [rank]
model = nn.BatchNorm1d(2)
# single gpu training setup
model_gpu = copy.deepcopy(model)
model_gpu.cuda(gpus[0])
# DDP training setup
model_DDP = nn.SyncBatchNorm.convert_sync_batchnorm(copy.deepcopy(model))
model_DDP.cuda(gpus[0])
model_DDP = nn.parallel.DistributedDataParallel(model_DDP, device_ids=gpus)
local_bs = len(gpus) * 2
global_bs = dist.get_world_size() * local_bs
input_cpu = torch.randn(global_bs, 2)
target = torch.randn(global_bs, 2)
loss = nn.MSELoss()
        # Disable cuDNN: SyncBatchNorm then goes through the native_batch_norm
        # kernel, which avoids the numerical issue created by the divergent
        # code path.
with torch.backends.cudnn.flags(False):
# check two model parameters over 5 iterations
self._test_DDP_niter(
model_gpu,
model_DDP,
input_cpu.cuda(gpus[0]),
target.cuda(gpus[0]),
loss,
local_bs,
rank,
global_bs,
True,
)
self._barrier()
@sandcastle_skip_if(
BACKEND != "nccl" and BACKEND != "gloo",
"Only Nccl & Gloo backend support DistributedDataParallel",
)
@skip_if_no_gpu
@require_world_size(2)
def test_DistributedDataParallel_SyncBatchNorm_Single_Input_Per_Process(self):
group, group_id, rank = self._init_global_test()
rank_to_GPU = self._init_multigpu_helper()
# DDP does not support replicating BN layers within a process, hence
# testing with one module replica per process
gpus = [rank]
model = nn.BatchNorm1d(2)
# single gpu training setup
model_gpu = copy.deepcopy(model)
model_gpu.cuda(gpus[0])
# DDP training setup
model_DDP = nn.SyncBatchNorm.convert_sync_batchnorm(copy.deepcopy(model))
model_DDP.cuda(gpus[0])
model_DDP = nn.parallel.DistributedDataParallel(model_DDP, device_ids=gpus)
local_bs = 1
global_bs = dist.get_world_size()
input_cpu = torch.randn(global_bs, 2)
target = torch.randn(global_bs, 2)
loss = nn.MSELoss()
        # Disable cuDNN: SyncBatchNorm then goes through the native_batch_norm
        # kernel, which avoids the numerical issue created by the divergent
        # code path.
with torch.backends.cudnn.flags(False):
# check two model parameters over 5 iterations
self._test_DDP_niter(
model_gpu,
model_DDP,
input_cpu.cuda(gpus[0]),
target.cuda(gpus[0]),
loss,
local_bs,
rank,
global_bs,
True,
)
self._barrier()
@sandcastle_skip_if(
BACKEND != "nccl" and BACKEND != "gloo",
"Only Nccl & Gloo backend support DistributedDataParallel",
)
@skip_if_no_gpu
def test_DistributedDataParallel_SyncBatchNorm_Diff_Input_Sizes_Running_Value(
self,
):
group, group_id, rank = self._init_global_test()
rank_to_GPU = self._init_multigpu_helper()
model = nn.parallel.DistributedDataParallel(
ONLY_SBN_NET.cuda(rank), device_ids=[rank]
)
input_var = []
for i in range(dist.get_world_size()):
input_var_rank = torch.cat(
[
torch.ones(2, 1, 10 ** (i + 1)) * (0.1 ** (i - 1)),
torch.ones(2, 1, 10 ** (i + 1)) * (0.3 ** (i - 1)),
],
dim=1,
)
input_var.append(input_var_rank)
all_input_var = torch.cat(
[
x.permute(1, 0, 2).contiguous().view(ONLY_SBN_NET.num_features, -1)
for x in input_var
],
dim=1,
).cuda(rank)
for i in range(100):
y = model(input_var[rank].cuda(rank))
y.mean().backward()
running_mean, running_var = (
model.module.running_mean,
model.module.running_var,
)
torch.testing.assert_allclose(running_mean, all_input_var.mean(1))
torch.testing.assert_allclose(running_var, all_input_var.var(1))
@sandcastle_skip_if(
BACKEND != "nccl" and BACKEND != "gloo",
"Only Nccl & Gloo backend support DistributedDataParallel",
)
@skip_if_no_gpu
def test_DistributedDataParallel_SyncBatchNorm_Diff_Input_Sizes_gradient(self):
group, group_id, rank = self._init_global_test()
# only do single GPU per process
gpus = [rank]
# cpu training setup
model = BN_NET
num_processes = dist.get_world_size()
local_bs = rank + 2
bs_offset = int((rank + 3) * rank / 2)
global_bs = int((num_processes + 3) * num_processes / 2)
self._test_DistributedDataParallel_SyncBatchNorm(
gpu_subset=gpus,
rank=rank,
local_bs=local_bs,
global_bs=global_bs,
offset=bs_offset,
)
def _test_ddp_logging_data(self, is_gpu):
rank = dist.get_rank()
model_DDP = copy.deepcopy(DDP_NET)
if is_gpu:
model_DDP = nn.parallel.DistributedDataParallel(
model_DDP.cuda(rank), device_ids=[rank]
)
else:
model_DDP = nn.parallel.DistributedDataParallel(model_DDP)
# dummy data initialization
local_bs = 2
batch_size, input, target, loss = self._prepare_dummy_data(local_bs)
if is_gpu:
input = input.cuda(rank)
target = target.cuda(rank)
model_DDP._set_ddp_runtime_logging_sample_rate(2)
for idx in range(20):
offset = rank * local_bs
# DDP training, DDP scatters subsets of input to nodes/GPUs
self._test_DDP_helper(
model_DDP,
input[offset : offset + local_bs],
target[offset : offset + local_bs],
loss,
1,
)
self._model_step_with_zero_grad(model_DDP)
# Verify DDP logging data is sampled as expected
            # If it has run more than 10 iterations and this is
            # the sampled iteration for measuring runtime stats,
            # the runtime stats for this idx-th iteration will not
            # be zeros.
ddp_logging_data = model_DDP._get_ddp_logging_data()
if idx > 0 and (idx < 10 or idx % 2 == 0):
self.assertGreaterEqual(
ddp_logging_data.get("forward_compute_time"), 1
)
self.assertGreaterEqual(
ddp_logging_data.get("backward_compute_time"), 1
)
self.assertGreaterEqual(
ddp_logging_data.get("backward_comm_time"), 1
)
self.assertGreaterEqual(
ddp_logging_data.get("backward_compute_time"),
ddp_logging_data.get("backward_compute_comm_overlap_time"),
)
self.assertGreaterEqual(
ddp_logging_data.get("backward_comm_time"),
ddp_logging_data.get("backward_compute_comm_overlap_time"),
)
self.assertEqual(ddp_logging_data.get("iteration"), idx)
elif idx > 0:
                # If the idx-th iteration is not sampled to set runtime stats,
                # ddp_logging_data.iteration will not be updated to the
                # current iteration.
self.assertNotEqual(ddp_logging_data.get("iteration"), idx)
# Shuffle the input so that DDP input is different
input = input[torch.randperm(batch_size)]
return model_DDP
@sandcastle_skip_if(BACKEND == "nccl", "nccl does not support DDP on CPU models")
def test_ddp_logging_data_cpu(self):
        def parse_env(var):
            return os.environ.get(var, "N/A")
os.environ["TORCH_DISTRIBUTED_DEBUG"] = "INFO"
group, group_id, rank = self._init_global_test()
model_DDP = self._test_ddp_logging_data(is_gpu=False)
ddp_logging_data = model_DDP._get_ddp_logging_data()
self.assertEqual(ddp_logging_data.get("world_size"), dist.get_world_size())
self.assertEqual(ddp_logging_data.get("rank"), dist.get_rank())
self.assertEqual(ddp_logging_data.get("module_name"), "Net")
self.assertEqual(ddp_logging_data.get("device_ids"), "")
        # output_device defaults to -1 if it is not set, e.g. the
        # output_device of CPU training is -1.
self.assertEqual(ddp_logging_data.get("output_device"), -1)
self.assertEqual(ddp_logging_data.get("broadcast_buffers"), 1)
self.assertEqual(ddp_logging_data.get("bucket_cap_bytes"), 25 * 1024 * 1024)
self.assertEqual(ddp_logging_data.get("find_unused_parameters"), 0)
self.assertEqual(ddp_logging_data.get("gradient_as_bucket_view"), 0)
self.assertEqual(
ddp_logging_data.get("backend_name"), dist.get_backend(group_id)
)
self.assertEqual(ddp_logging_data.get("iteration"), 18)
params = list(model_DDP.parameters())
num_params = 0
param_size = 0
        params = [p for p in params if p.requires_grad]
for p in params:
num_params += 1
param_size += p.numel() * p.element_size()
self.assertEqual(ddp_logging_data.get("dtypes"), "float")
self.assertEqual(
ddp_logging_data.get("total_parameter_size_bytes"), param_size
)
self.assertEqual(ddp_logging_data.get("num_parameter_tensors"), num_params)
self.assertEqual(ddp_logging_data.get("bucket_sizes"), str(param_size))
self.assertEqual(
ddp_logging_data.get("master_port"), parse_env("MASTER_PORT")
)
self.assertEqual(
ddp_logging_data.get("master_addr"), parse_env("MASTER_ADDR")
)
self.assertEqual(
ddp_logging_data.get("torch_distributed_debug"),
parse_env("TORCH_DISTRIBUTED_DEBUG"),
)
self.assertEqual(
ddp_logging_data.get("cuda_visible_devices"),
parse_env("CUDA_VISIBLE_DEVICES"),
)
if ddp_logging_data.get("backend_name") == "gloo":
self.assertEqual(
ddp_logging_data.get("gloo_socket_ifname"),
parse_env("GLOO_SOCKET_IFNAME"),
)
self.assertEqual(
ddp_logging_data.get("gloo_device_transport"),
parse_env("GLOO_DEVICE_TRANSPORT"),
)
self.assertEqual(ddp_logging_data.get("nccl_socket_ifname"), None)
self.assertEqual(ddp_logging_data.get("nccl_blocking_wait"), None)
self.assertEqual(ddp_logging_data.get("nccl_async_error_handling"), None)
self.assertEqual(ddp_logging_data.get("nccl_debug"), None)
self.assertEqual(ddp_logging_data.get("nccl_nthreads"), None)
self.assertEqual(ddp_logging_data.get("nccl_ib_timeout"), None)
# test runtime logging fields
# Note: DETAIL debug mode logs DDP logging data to stdout and
# thus accesses std::map, which fills in a default value for the
# type if it didn't exist.
self.assertEqual(ddp_logging_data.get("unused_parameter_size", 0), 0)
self.assertEqual(ddp_logging_data.get("has_rebuilt_buckets"), 1)
self.assertEqual(
ddp_logging_data.get("rebuilt_bucket_sizes"), str(param_size)
)
self.assertGreaterEqual(ddp_logging_data.get("avg_forward_compute_time"), 1)
self.assertGreaterEqual(
ddp_logging_data.get("avg_backward_compute_time"), 1
)
self.assertGreaterEqual(ddp_logging_data.get("avg_backward_comm_time"), 1)
self.assertGreaterEqual(
ddp_logging_data.get("avg_backward_compute_time"),
ddp_logging_data.get("avg_backward_compute_comm_overlap_time"),
)
self.assertGreaterEqual(
ddp_logging_data.get("avg_backward_comm_time"),
ddp_logging_data.get("avg_backward_compute_comm_overlap_time"),
)
model = LargeNet()
model.float()
model.fc1.double()
model_DDP = nn.parallel.DistributedDataParallel(model, bucket_cap_mb=1.5)
ddp_logging_data = model_DDP._get_ddp_logging_data()
params = list(model_DDP.parameters())
self.assertEqual(
ddp_logging_data.get("bucket_cap_bytes"), int(1.5 * 1024 * 1024)
)
bucket_sizes = [
params[1].numel() * params[1].element_size(),
params[0].numel() * params[0].element_size(),
]
self.assertEqual(
ddp_logging_data.get("bucket_sizes"),
", ".join(str(x) for x in bucket_sizes),
)
self.assertEqual(ddp_logging_data.get("dtypes"), "double, float")
@sandcastle_skip_if(
BACKEND != "nccl" and BACKEND != "gloo",
"Only Nccl & Gloo backend support DistributedDataParallel",
)
@skip_if_no_gpu
def test_ddp_logging_data_gpu(self):
group, group_id, rank = self._init_global_test()
model_DDP = self._test_ddp_logging_data(is_gpu=True)
ddp_logging_data = model_DDP._get_ddp_logging_data()
self.assertEqual(ddp_logging_data.get("device_ids"), str(rank))
self.assertEqual(ddp_logging_data.get("output_device"), rank)
self.assertGreaterEqual(ddp_logging_data.get("avg_forward_compute_time"), 1)
self.assertGreaterEqual(
ddp_logging_data.get("avg_backward_compute_comm_overlap_time"), 1
)
self.assertGreaterEqual(
ddp_logging_data.get("avg_backward_compute_time"),
ddp_logging_data.get("avg_backward_compute_comm_overlap_time"),
)
self.assertGreaterEqual(
ddp_logging_data.get("avg_backward_comm_time"),
ddp_logging_data.get("avg_backward_compute_comm_overlap_time"),
)
@sandcastle_skip_if(BACKEND == "nccl", "nccl does not support DDP on CPU models")
def test_static_graph_api_cpu(self):
model_DDP = nn.parallel.DistributedDataParallel(DDP_NET)
model_DDP._set_static_graph()
self.assertEqual(
model_DDP._get_ddp_logging_data().get("static_graph"), True
)
expected_err = "should be called before training loop starts"
with self.assertRaisesRegex(RuntimeError, expected_err):
local_bs = 2
batch_size, input, target, loss = self._prepare_dummy_data(local_bs)
offset = dist.get_rank() * local_bs
self._test_DDP_helper(
model_DDP,
input[offset : offset + local_bs],
target[offset : offset + local_bs],
loss,
1,
)
model_DDP._set_static_graph()
verify_ddp_error_logged(model_DDP, expected_err)
@skipIfNoTorchVision
def test_SyncBatchNorm_process_group(self):
process_ids = 0
process_group = torch.distributed.new_group([process_ids])
res50_model = torchvision.models.resnet50()
res50_model_sync = nn.SyncBatchNorm.convert_sync_batchnorm(
copy.deepcopy(res50_model), process_group
)
process_group_sync = res50_model_sync.layer1[0].bn1.process_group
self.assertEqual(process_group_sync, process_group)
def _run_reduction_test(
self, tensor, expected_tensor, op, reduction_fn=dist.all_reduce, dst=None
):
if reduction_fn != dist.all_reduce and dst is None:
raise ValueError(f"Reduction fn {reduction_fn} must specify dst!")
if dst is not None:
reduction_fn(tensor, dst, op)
if dist.get_rank() == dst:
self.assertEqual(tensor, expected_tensor)
else:
reduction_fn(tensor, op)
self.assertEqual(tensor, expected_tensor)
@require_backend({"nccl"})
@require_backends_available({"nccl"})
@skip_if_lt_x_gpu(2)
def test_nccl_backend_bool_allreduce(self):
torch.cuda.set_device(self.rank)
element = self.rank % 2 == 0
for op in [dist.ReduceOp.PRODUCT, dist.ReduceOp.MIN]:
input_tensor = torch.tensor([element, element]).to(self.rank)
self._run_reduction_test(
input_tensor, torch.tensor([False, False]).to(self.rank), op
)
input_tensor = torch.tensor([True, True]).to(self.rank)
expected_tensor = input_tensor.clone()
self._run_reduction_test(input_tensor, expected_tensor, op)
for op in [dist.ReduceOp.SUM, dist.ReduceOp.MAX]:
input_tensor = torch.tensor([element, element]).to(self.rank)
self._run_reduction_test(
input_tensor, torch.tensor([True, True]).to(self.rank), op
)
@require_backend({"nccl"})
@require_backends_available({"nccl"})
@skip_if_lt_x_gpu(2)
def test_nccl_backend_bool_allgather(self):
torch.cuda.set_device(self.rank)
inp = {0: [True, True], 1: [False, True]}
input_tensor = torch.tensor(inp[self.rank % 2]).to(self.rank)
input_tensor_copy = input_tensor.clone()
tensor_list = [
torch.tensor([False, False]).to(self.rank)
for _ in range(dist.get_world_size())
]
dist.all_gather(tensor_list, input_tensor)
self.assertEqual(len(tensor_list), dist.get_world_size())
for i, t in enumerate(tensor_list):
expected = torch.tensor(inp[i % 2]).to(self.rank)
self.assertEqual(t, expected)
self.assertEqual(input_tensor_copy, input_tensor)
@require_backend({"nccl"})
@require_backends_available({"nccl"})
@skip_if_lt_x_gpu(int(os.environ["WORLD_SIZE"]))
def test_nccl_backend_bool_reduce(self):
torch.cuda.set_device(self.rank)
inp = {0: [True, True], 1: [False, False]}
for op in [dist.ReduceOp.PRODUCT, dist.ReduceOp.MIN]:
input_tensor = torch.tensor(inp[self.rank % 2]).to(self.rank)
expected = torch.tensor([False, False]).to(self.rank)
self._run_reduction_test(input_tensor, expected, op, dist.reduce, dst=0)
input_tensor = torch.tensor([True, True]).to(self.rank)
expected_tensor = input_tensor.clone()
self._run_reduction_test(
input_tensor, expected_tensor, op, dist.reduce, dst=0
)
for op in [dist.ReduceOp.SUM, dist.ReduceOp.MAX]:
input_tensor = torch.tensor(inp[self.rank % 2]).to(self.rank)
expected = (
torch.tensor([True, True]).to(self.rank)
if self.rank == 0
else input_tensor.clone()
)
self._run_reduction_test(input_tensor, expected, op, dist.reduce, dst=0)
@require_backend({"nccl"})
@require_backends_available({"nccl"})
@skip_if_lt_x_gpu(2)
def test_nccl_backend_bool_broadcast(self):
tensor_size = 10
bcast_tensor = torch.tensor(
[
(random.random() < 0.5 if self.rank == 0 else False)
for _ in range(tensor_size)
]
).to(self.rank)
dist.broadcast(bcast_tensor, src=0)
tensor_list = [
torch.tensor([False for _ in range(tensor_size)]).to(self.rank)
for _ in range(dist.get_world_size())
]
dist.all_gather(tensor_list, bcast_tensor)
expected = tensor_list[0]
for tensor in tensor_list[1:]:
self.assertEqual(tensor, expected)
@sandcastle_skip_if(
BACKEND != "nccl" and BACKEND != "gloo",
"Only NCCL and GLOO backend support DistributedDataParallel",
)
@skip_if_lt_x_gpu(int(os.environ["WORLD_SIZE"]))
def test_DistributedSampler_padding(self):
world_size = dist.get_world_size()
dataset_size = 100 + world_size + 1
dataset = [torch.ones(1).to(self.rank) * i for i in range(dataset_size)]
dataset_tiny_size = max(world_size // 2 - 1, 1)
dataset_tiny = [
torch.ones(1).to(self.rank) * i for i in range(dataset_tiny_size)
]
dist_sampler = DistributedSampler(dataset=dataset, drop_last=True)
local_num_samples, local_dataset_size = (
dist_sampler.num_samples,
dist_sampler.total_size,
)
effective_dataset_size = (
math.ceil((dataset_size - world_size) / world_size)
if dataset_size % world_size != 0
else dataset_size / world_size
)
self.assertEqual(local_num_samples, effective_dataset_size)
self.assertEqual(local_dataset_size, local_num_samples * world_size)
indices_list = list(iter(dist_sampler))
self.assertEqual(len(indices_list), local_num_samples)
def validate_global_samples(local_num_samples):
world_samples = [
torch.LongTensor([0]).to(self.rank) for _ in range(world_size)
]
dist.all_gather(
world_samples, torch.tensor([local_num_samples]).to(self.rank)
)
world_samples = [sample.item() for sample in world_samples]
self.assertEqual(len(set(world_samples)), 1)
validate_global_samples(local_num_samples)
dist_sampler_added_samples = DistributedSampler(dataset=dataset)
local_num_samples, local_dataset_size = (
dist_sampler_added_samples.num_samples,
dist_sampler_added_samples.total_size,
)
self.assertEqual(local_num_samples, math.ceil(dataset_size / world_size))
self.assertEqual(local_dataset_size, local_num_samples * world_size)
indices_list = list(iter(dist_sampler_added_samples))
self.assertEqual(len(indices_list), local_num_samples)
validate_global_samples(local_num_samples)
dist_sampler_added_samples_tiny = DistributedSampler(dataset=dataset_tiny)
local_num_samples, local_dataset_size = (
dist_sampler_added_samples_tiny.num_samples,
dist_sampler_added_samples_tiny.total_size,
)
self.assertEqual(
local_num_samples, math.ceil(dataset_tiny_size / world_size)
)
self.assertEqual(local_dataset_size, local_num_samples * world_size)
indices_list = list(iter(dist_sampler_added_samples_tiny))
self.assertEqual(len(indices_list), local_num_samples)
validate_global_samples(local_num_samples)
@require_backend({"nccl", "gloo"})
@require_n_gpus_for_nccl_backend(
int(os.environ["WORLD_SIZE"]), os.environ["BACKEND"]
)
def test_allgather_object(self):
backend = os.environ["BACKEND"]
if backend == "nccl":
next_rank = (self.rank + 1) % int(self.world_size)
torch.cuda.set_device(next_rank)
if backend == "nccl":
COLLECTIVES_OBJECT_TEST_LIST.append(Foo(torch.randn(3, 3, device=0)))
gather_objects = COLLECTIVES_OBJECT_TEST_LIST
output_gathered = [None for _ in range(dist.get_world_size())]
dist.all_gather_object(
output_gathered, gather_objects[self.rank % len(gather_objects)]
)
for i, val in enumerate(output_gathered):
expected = gather_objects[i % len(gather_objects)]
self.assertEqual(val, expected)
output_gathered = [None for _ in range(dist.get_world_size())]
dist.all_gather_object(
output_gathered, gather_objects[self.rank % len(gather_objects)]
)
@require_backend({"gloo"})
@sandcastle_skip_if(BACKEND == "nccl", "NCCL does not support gather")
def test_gather_object(self):
gather_objects = COLLECTIVES_OBJECT_TEST_LIST
output_gathered = [None for _ in range(dist.get_world_size())]
gather_on_rank = 0
my_rank = dist.get_rank()
dist.gather_object(
gather_objects[self.rank % len(gather_objects)],
object_gather_list=output_gathered
if my_rank == gather_on_rank
else None,
dst=gather_on_rank,
)
if my_rank != gather_on_rank:
self.assertEqual(
output_gathered, [None for _ in range(dist.get_world_size())]
)
else:
for i, val in enumerate(output_gathered):
expected = gather_objects[i % len(gather_objects)]
self.assertEqual(val, expected)
class Bar:
pass
b = Bar()
gather_objects = [b for _ in range(dist.get_world_size())]
with self.assertRaisesRegex(AttributeError, "Can't pickle local object"):
dist.all_gather_object(
[None for _ in range(dist.get_world_size())],
gather_objects[self.rank],
)
@require_backend({"nccl"})
@require_backends_available({"nccl"})
@skip_if_lt_x_gpu(2)
def test_nccl_gather_object_err(self):
output_gathered = [None for _ in range(dist.get_world_size())]
gather_on_rank = 0
my_rank = dist.get_rank()
next_rank = (my_rank + 1) % dist.get_world_size()
torch.cuda.set_device(next_rank)
with self.assertRaisesRegex(
RuntimeError, "ProcessGroupNCCL does not support gather"
):
dist.gather_object(
"foo",
object_gather_list=output_gathered
if my_rank == gather_on_rank
else None,
dst=gather_on_rank,
)
def validate_net_equivalence(self, net):
net_module_states = list(net.module.state_dict().values())
for t in net_module_states:
tensor_list = [
torch.zeros_like(t) for _ in range(dist.get_world_size())
]
dist.all_gather(tensor_list, t)
for tensor in tensor_list:
self.assertEqual(tensor, t)
@skip_if_lt_x_gpu(2)
@sandcastle_skip_if(
BACKEND != "nccl" and BACKEND != "gloo",
"Only NCCL and GLOO backend support DistributedDataParallel",
)
def test_ddp_sync_params_and_buffers(self):
# Test that after calling _sync_params_and_buffers, models across ranks
# are the same and are equal to the model on the input rank.
dim = 2
rank = self.rank
rank_to_broadcast = 1
# Seed to ensure that ranks are initialized with different initial models.
torch.manual_seed(rank)
model = nn.Linear(dim, dim, bias=False)
net = torch.nn.parallel.DistributedDataParallel(
model.cuda(rank), device_ids=[self.rank], bucket_cap_mb=1
)
new_model = nn.Linear(dim, dim, bias=False).cuda(rank)
net.module = copy.deepcopy(new_model)
# Assert params are different
net_module_states = list(net.module.state_dict().values())
for t in net_module_states:
tensor_list = [
torch.zeros_like(t) for _ in range(dist.get_world_size())
]
dist.all_gather(tensor_list, t)
for i, tensor in enumerate(tensor_list):
if i == rank:
self.assertEqual(t, tensor)
else:
# tensor from another rank should be different.
self.assertNotEqual(t, tensor)
net._sync_params_and_buffers(authoritative_rank=rank_to_broadcast)
# Now all model params should be the same.
self.validate_net_equivalence(net)
# Since the network params were broadcast from rank_to_broadcast, validate that
# they are the same as new_model on rank_to_broadcast.
if rank == rank_to_broadcast:
expected_states = new_model.state_dict().values()
for t, expected in zip(net_module_states, expected_states):
self.assertEqual(t, expected)
@skip_if_lt_x_gpu(2)
@sandcastle_skip_if(
BACKEND != "nccl" and BACKEND != "gloo",
"Only NCCL and GLOO backend support DistributedDataParallel",
)
def test_ddp_grad_div_uneven_inputs(self):
# Test gradient division during training with join() API. If
# divide_by_initial_world_size=False, we scale by the effective world
# size when allreducing grads.
dim = 5
batch = 1
grad_scale = 50
rank = self.rank
model = nn.Linear(dim, dim, bias=False)
inp = torch.ones(batch, dim, device=self.rank) * grad_scale
net = torch.nn.parallel.DistributedDataParallel(
model.cuda(rank), device_ids=[self.rank], bucket_cap_mb=1
)
n_iters = 3
if self.rank > 0:
n_iters += 2
with net.join(divide_by_initial_world_size=False):
for _ in range(n_iters):
loss = net(inp).sum()
loss.backward()
# The grad is always expected_grad, since we divide by the number
# of currently active processes and inactive processes contribute
# zero gradient. If we kept dividing by static initial world
# size as processes leave, the grad would be smaller.
expected_grad = torch.ones(dim, dim, device=self.rank) * grad_scale
param = list(net.parameters())[0]
self.assertEqual(expected_grad, param.grad)
# Avoid accumulating grads so that it's the same every iteration
net.zero_grad()
torch.cuda.synchronize(device=self.rank)
with net.join(divide_by_initial_world_size=True):
for i in range(n_iters):
loss = net(inp).sum()
loss.backward()
effective_ws = dist.get_world_size()
if i >= 3:
effective_ws -= 1
expected_grad = (
torch.ones(dim, dim, device=self.rank)
* grad_scale
* effective_ws
) / dist.get_world_size()
param = list(net.parameters())[0]
self.assertEqual(expected_grad, param.grad)
net.zero_grad()
torch.cuda.synchronize(device=self.rank)
def _test_ddp_profiling(self, profiler_ctx):
batch = 3
dim = 10
num_iters = 6
torch.cuda.set_device(self.rank)
model = nn.Linear(dim, dim, bias=False)
inp = torch.rand(batch, dim, device=self.rank)
net = torch.nn.parallel.DistributedDataParallel(
model.cuda(self.rank),
device_ids=[self.rank],
)
profiler_ctx_copy = copy.deepcopy(profiler_ctx)
with profiler_ctx as prof:
for i in range(num_iters):
loss = net(inp).sum()
loss.backward()
all_reduce_event_name = f"{dist.get_backend()}:all_reduce"
events = get_profiling_event(all_reduce_event_name, prof)
event_count = sum(e.count for e in events)
self.assertEqual(event_count, num_iters)
for event in events:
self.assertTrue(event.is_async)
self.assertEqual(event.name, all_reduce_event_name)
broadcast_event_name = f"{dist.get_backend()}:broadcast"
broadcast_events = get_profiling_event(broadcast_event_name, prof)
event_count = sum(e.count for e in broadcast_events)
# Broadcast is called during rebuild_buckets
self.assertGreaterEqual(event_count, 1)
for event in broadcast_events:
self.assertEqual(event.name, broadcast_event_name)
# Run DDP with profiling for a few iterations, then enable profiling
# for a single pass, and ensure it is recorded. This tests that the
# thread local state is correctly updated.
net = torch.nn.parallel.DistributedDataParallel(
model.cuda(self.rank),
device_ids=[self.rank],
find_unused_parameters=True,
)
for i in range(3):
loss = net(inp).sum()
loss.backward()
# Now enable the profiler.
with profiler_ctx_copy as prof:
loss = net(inp).sum()
loss.backward()
events = get_profiling_event(all_reduce_event_name, prof)
self.assertGreaterEqual(len(events), 1)
self.assertGreaterEqual(events[0].count, 1)
self.assertEqual(events[0].name, all_reduce_event_name)
for event in events:
self.assertTrue(event.is_async)
# Ensure searching unused parameters was profiled
events = get_profiling_event("search_unused_parameters", prof)
self.assertEqual(len(events), 1)
@require_backend({"gloo", "nccl"})
@require_backends_available({"gloo", "nccl"})
@skip_if_lt_x_gpu(2)
def test_ddp_profiling_autograd_profiler(self):
autograd_profiler_ctx = torch.autograd.profiler.profile()
return self._test_ddp_profiling(profiler_ctx=autograd_profiler_ctx)
@require_backend({"gloo", "nccl"})
@require_backends_available({"gloo", "nccl"})
@skip_if_lt_x_gpu(2)
@sandcastle_skip_if(IS_FBCODE, "Kineto in fbcode code causes hang")
@sandcastle_skip_if(
IS_MACOS or IS_WINDOWS,
"torch.profiler not enabled for mac/windows: https://github.com/pytorch/pytorch/pull/56124",
)
def test_ddp_profiling_torch_profiler(self):
cpu_act = torch.profiler.ProfilerActivity.CPU
cuda_act = torch.profiler.ProfilerActivity.CUDA
torch_profiler_ctx = torch.profiler.profile(activities=[cpu_act, cuda_act])
self._test_ddp_profiling(profiler_ctx=torch_profiler_ctx)
@skip_if_lt_x_gpu(2)
@sandcastle_skip_if(
BACKEND != "nccl" and BACKEND != "gloo",
"Only NCCL and GLOO backend support DistributedDataParallel",
)
def test_ddp_join_model_equivalence(self):
# Verifies equivalence with model training locally and with DDP under
# the join context manager.
batch = 3
dim = 10
learning_rate = 0.03
model = nn.Linear(dim, dim, bias=False)
inp = torch.rand(batch, dim, device=self.rank)
local_model = copy.deepcopy(model)
local_model = local_model.cuda(self.rank)
rank_to_iter_mapping = {
rank: 2 * (rank + 1) for rank in range(dist.get_world_size())
}
# run local model
local_iters = sum(rank_to_iter_mapping.values())
local_optim = torch.optim.SGD(local_model.parameters(), lr=learning_rate)
for _ in range(local_iters):
local_optim.zero_grad()
out = local_model(inp)
loss = out.sum()
loss.backward()
local_optim.step()
# run DDP model with join API
num_iters = rank_to_iter_mapping[self.rank]
net = torch.nn.parallel.DistributedDataParallel(
model.cuda(self.rank), device_ids=[self.rank]
)
ddp_optim = torch.optim.SGD(
model.parameters(), lr=learning_rate * dist.get_world_size()
)
with net.join():
for i in range(num_iters):
ddp_optim.zero_grad()
out = net(inp)
loss = out.sum()
loss.backward()
torch.cuda.synchronize(device=self.rank)
ddp_optim.step()
# Validate model state dicts are equal
for (_, local_tensor), (_, dist_tensor) in zip(
local_model.state_dict().items(), net.module.state_dict().items()
):
self.assertEqual(local_tensor, dist_tensor)
def _run_uneven_inputs_test(
self,
test_case,
iteration_mapping,
find_unused_params,
):
model = test_case.model
inp = test_case.inp
rank = self.rank
sync_interval = test_case.sync_interval
torch.cuda.set_device(rank)
# Ensure all outstanding GPU work is complete so this test runs independently.
dist.barrier()
# Bucket_cap_mb is intentionally low to test allreduce scheduling when
# there are many buckets.
net = torch.nn.parallel.DistributedDataParallel(
model.cuda(rank),
device_ids=[rank],
bucket_cap_mb=1,
find_unused_parameters=find_unused_params,
)
# Register hook if specified
if test_case.hook is not None:
net.register_comm_hook(test_case.state, test_case.hook)
print(f"registered hook {test_case.hook}")
# Determine num iters for this rank via the passed in mapping.
num_iters = iteration_mapping[rank]
# If we throw when the earliest rank terminates, ensure that we
# iterate for that minimum number of iterations.
num_iters_tensor = torch.tensor(
[num_iters], device=torch.cuda.current_device()
)
dist.all_reduce(num_iters_tensor, op=dist.ReduceOp.MIN)
min_num_iters = num_iters_tensor.item()
total_iters = 0
if test_case.throw_on_early_termination:
if min_num_iters == num_iters:
# Early termination rank(s)
exception_ctx = self.assertRaisesRegex(
RuntimeError, f"Rank {self.rank} exhausted all inputs"
)
else:
# Non early termination rank
exception_ctx = self.assertRaisesRegex(
RuntimeError,
"Detected at least one rank that exhausted inputs.",
)
else:
exception_ctx = suppress()
with exception_ctx:
with net.join(
throw_on_early_termination=test_case.throw_on_early_termination
):
for i in range(num_iters):
# Use net.no_sync() to disable grad synchronization on every
# iteration that is not a multiple of sync_interval.
if i % sync_interval != 0:
context = net.no_sync()
else:
context = suppress()
with context:
if isinstance(inp, tuple):
loss = net(*inp).sum()
else:
loss = net(inp).sum()
loss.backward()
self._model_step(net)
# Ensure completion of GPU kernels (including allreduce). If the
# join API is not properly implemented, then this should hang
# since the allreduce will hang.
torch.cuda.synchronize(device=rank)
total_iters += 1
if test_case.throw_on_early_termination:
# Ensure we iterated min_num_iters times.
self.assertEqual(total_iters, min_num_iters)
else:
# Ensure we iterated at least min_num_iters times.
self.assertGreaterEqual(total_iters, min_num_iters)
# Ensure completion of all GPU kernels.
torch.cuda.synchronize(device=rank)
# When throwing on early rank termination, we do not
# broadcast model state from an authoritative rank. All models
# should already be in sync.
if not test_case.throw_on_early_termination:
self.assertTrue(net._authoritative_rank)
# All ranks should have agreed on the same authoritative_rank!
final_rank_tensor = torch.tensor(
[net._authoritative_rank], device=self.rank
)
tensor_list = [
torch.zeros_like(final_rank_tensor)
for _ in range(dist.get_world_size())
]
dist.all_gather(tensor_list, final_rank_tensor)
max_rank = dist.get_world_size() - 1
self.assertSetEqual(
{max_rank}, set(tensor.item() for tensor in tensor_list)
)
# Ensure that all models are the same across ranks after all have joined.
self.validate_net_equivalence(net)
# Ensure that running with DDP uneven inputs was logged.
ddp_logging_data = net._get_ddp_logging_data()
self.assertTrue(ddp_logging_data.get("join_uneven_inputs"))
dist.barrier()
@skip_if_lt_x_gpu(2)
@sandcastle_skip_if(
BACKEND != "nccl" and BACKEND != "gloo",
"Only NCCL and GLOO backend support DistributedDataParallel",
)
def test_ddp_uneven_inputs_stop_iteration_sync_bn(self):
# Tests that uneven inputs join handler correctly throws StopIteration
# for models with SyncBN or general collective comm when
# throw_on_early_termination=True.
class ModelWithComm(torch.nn.Module):
def __init__(self):
super().__init__()
self.lin = nn.Linear(2, 40, bias=False)
def forward(self, x):
x = self.lin(x)
dist.all_reduce(x)
return x
torch.cuda.set_device(self.rank)
model_bn = BN_NET
model_bn = nn.SyncBatchNorm.convert_sync_batchnorm(
copy.deepcopy(model_bn)
).cuda(self.rank)
comm_model = ModelWithComm().cuda(self.rank)
model_input = torch.randn(10, 2).cuda(torch.cuda.current_device())
for model in [model_bn, comm_model]:
model = torch.nn.parallel.DistributedDataParallel(
model,
device_ids=[self.rank],
)
min_num_iters = 5
if self.rank != 0:
# Early termination rank(s)
num_iters = min_num_iters
exception_ctx = self.assertRaisesRegex(
RuntimeError, f"Rank {self.rank} exhausted all inputs"
)
else:
# Non early termination rank
num_iters = min_num_iters * 2
exception_ctx = self.assertRaisesRegex(
RuntimeError,
"Detected at least one rank that exhausted inputs.",
)
n = 0
with exception_ctx:
with model.join(throw_on_early_termination=True):
for i in range(num_iters):
loss = model(model_input).sum()
loss.backward()
self._model_step(model)
n += 1
self.assertEqual(n, min_num_iters)
# Verify model equivalence
self.validate_net_equivalence(model)
@skip_if_lt_x_gpu(2)
@sandcastle_skip_if(
BACKEND != "nccl" and BACKEND != "gloo",
"Only NCCL and GLOO backend support DistributedDataParallel",
)
def test_ddp_uneven_inputs(self):
dim = 1000
batch = 1
# Create a variety of models to run uneven input tests on.
large_model = nn.Sequential(
nn.Conv2d(1, 20, 5),
nn.ReLU(),
nn.Conv2d(20, 32, 5),
nn.ReLU(),
nn.Conv2d(32, 256, 5),
nn.ReLU(),
)
small_model = nn.Linear(dim, dim, bias=False)
bn_net = BatchNormNet()
class UnusedParamModule(nn.Module):
def __init__(self, unused_params_rank):
super().__init__()
self.t0 = Task()
self.t1 = Task()
self.unused_params_rank = unused_params_rank
def task_parameters(self):
return (self.t0.p, self.t1.p)
def forward(self, x, rank):
return (
self.t1(self.t0(x))
if rank != self.unused_params_rank
else self.t1(x)
)
unjoined_rank_with_unused_params_model = UnusedParamModule(1)
joined_rank_with_unused_params_model = UnusedParamModule(0)
rank = self.rank
models_to_test = [
# Network with batchnorm
DDPUnevenTestInput(
name="batch_norm_net",
model=bn_net,
inp=torch.ones(batch, 2, device=rank),
sync_interval=1,
),
DDPUnevenTestInput(
name="large_conv_model",
model=large_model,
inp=torch.ones(batch, batch, dim, dim, device=rank),
sync_interval=1,
),
DDPUnevenTestInput(
name="small_model",
model=small_model,
inp=torch.ones(batch, dim, device=rank),
sync_interval=1,
),
# Unused parameter test where rank that does not join early has unused params
DDPUnevenTestInput(
name="unjoined_rank_with_unused_params_model",
model=unjoined_rank_with_unused_params_model,
inp=(torch.ones(batch, 2, device=rank), rank),
sync_interval=1,
),
# Unused parameter test where rank that does join early has unused params
DDPUnevenTestInput(
name="joined_rank_with_unused_params_model",
model=joined_rank_with_unused_params_model,
inp=(torch.ones(batch, 2, device=rank), rank),
sync_interval=1,
),
]
# Test models that have hook installed.
models_with_hook = [
DDPUnevenTestInput(
name="small_model_allreduce_hook",
model=small_model,
hook=default.allreduce_hook,
state=None,
inp=torch.ones(batch, dim, device=rank),
sync_interval=1,
),
DDPUnevenTestInput(
name="small_model_power_sgd_hook",
model=small_model,
hook=powerSGD.powerSGD_hook,
state=powerSGD.PowerSGDState(
process_group=None,
matrix_approximation_rank=1,
# Config so that powerSGD runs immediately instead of
# allreduce.
start_powerSGD_iter=1,
warm_start=False,
use_error_feedback=False,
),
inp=torch.ones(batch, dim, device=rank),
sync_interval=1,
),
]
models_to_test.extend(models_with_hook)
# Add resnet model if we have torchvision installed.
if HAS_TORCHVISION:
resnet_model = torchvision.models.resnet50()
models_to_test.append(
DDPUnevenTestInput(
name="resnet_model",
model=resnet_model,
inp=torch.ones(1, 3, 1000, 1000),
sync_interval=1,
)
)
# Test with no_sync every 2, 3, 4, ... iterations.
models_with_sync = []
for i, test_input in enumerate(models_to_test):
models_with_sync.append(
DDPUnevenTestInput(
name=test_input.name,
model=test_input.model,
inp=test_input.inp,
sync_interval=i + 2,
)
)
throw_on_early_term_tests = []
for test_input in models_to_test:
throw_on_early_term_tests.append(
DDPUnevenTestInput(
name=test_input.name,
model=test_input.model,
inp=test_input.inp,
sync_interval=test_input.sync_interval,
throw_on_early_termination=True,
)
)
models_to_test.extend(models_with_sync)
models_to_test.extend(throw_on_early_term_tests)
# 0-iteration tests for when one process does not train the model at all,
# so we must shadow the broadcast calls made when rebuilding buckets.
baseline_num_iters = [0, 5]
iteration_offsets = [2, 3, 10]
num_uneven_ranks = [1]
if dist.get_world_size() > 2:
num_uneven_ranks.append(2)
iteration_mappings = []
# Generate rank : num_iters mappings for various uneven input scenarios.
# This includes cases where rank 0 joins early and all other ranks join
# later, and scenarios where multiple ranks join early, but at different
# iterations, and later ranks join later.
for num_early_join_ranks in num_uneven_ranks:
for baseline_iter in baseline_num_iters:
for offset in iteration_offsets:
mapping = {
rank: baseline_iter
for rank in range(0, num_early_join_ranks)
}
# if num_early_join_ranks > 1, ranks > 0 that will join early
# iterate offset//2 more times than rank 0, to test nodes
# depleting inputs at different times.
if num_early_join_ranks > 1:
for rank in mapping.keys():
if rank > 0:
mapping[rank] += offset // 2
mapping.update(
{
rank: baseline_iter + offset
for rank in range(
num_early_join_ranks, dist.get_world_size()
)
}
)
iteration_mappings.append(mapping)
for (test_case, iteration_mapping) in itertools.product(
models_to_test, iteration_mappings
):
if self.rank == 0:
print(
f"""Running test: {test_case.name} sync interval
{test_case.sync_interval} with iteration mapping
{iteration_mapping}"""
)
self._run_uneven_inputs_test(
test_case,
iteration_mapping,
find_unused_params=("unused_params_model" in test_case.name),
)
@skip_if_lt_x_gpu(2)
@sandcastle_skip_if(
BACKEND != "nccl" and BACKEND != "gloo",
"Only NCCL and GLOO backend support DistributedDataParallel",
)
def test_ddp_uneven_input_join_disable(self):
# Tests that if net.join() is entered with enable=False, DDP works as
# expected with even inputs.
torch.manual_seed(self.rank)
net = torch.nn.parallel.DistributedDataParallel(
torch.nn.Linear(1, 1).cuda(self.rank), device_ids=[self.rank]
)
inp = torch.ones(1) * self.rank
n_iters = 5
world_size = dist.get_world_size()
with net.join(enable=False):
for _ in range(n_iters):
# Clear grads
grad = net.module.weight.grad
if grad is not None:
grad.requires_grad_(False)
grad.zero_()
out = net(inp)
loss = out.sum()
loss.backward()
# Validate gradients to ensure that we divide by the correct
# world_size when join mode is disabled.
expected_grad = sum(i for i in range(world_size)) / world_size
self.assertEqual(net.module.weight.grad.item(), expected_grad)
join_config = net._join_config
self.assertFalse(join_config.enable)
self.validate_net_equivalence(net)
@skip_if_lt_x_gpu(2)
@sandcastle_skip_if(
BACKEND != "nccl" and BACKEND != "gloo",
"Only NCCL and GLOO backend support DistributedDataParallel",
)
def test_ddp_uneven_input_exception(self):
# Tests that exceptions during training are correctly propagated by the
# context manager.
error_str = "Intentional error"
class ExceptionModule(nn.Module):
def __init__(self):
super().__init__()
self.param = nn.Parameter(torch.ones(1, requires_grad=True))
def forward(self, _):
raise ValueError(error_str)
exception_module = ExceptionModule()
net = torch.nn.parallel.DistributedDataParallel(
exception_module.cuda(self.rank), device_ids=[self.rank]
)
inp = torch.ones(1)
with self.assertRaisesRegex(ValueError, error_str):
with net.join():
out = net(inp)
loss = out.sum()
loss.backward()
@require_backend({"nccl", "gloo"})
@require_n_gpus_for_nccl_backend(
int(os.environ["WORLD_SIZE"]), os.environ["BACKEND"]
)
def test_broadcast_object_list(self):
# Only set device for NCCL backend since it must use GPUs.
# Case where rank != GPU device.
next_rank = (self.rank + 1) % int(self.world_size)
backend = os.environ["BACKEND"]
if backend == "nccl":
torch.cuda.set_device(next_rank)
src_rank = 0
# If GPU test, add object with GPU tensor
if backend == "nccl":
COLLECTIVES_OBJECT_TEST_LIST.append(Foo(torch.randn(3, 3, device=0)))
objects = (
COLLECTIVES_OBJECT_TEST_LIST
if self.rank == src_rank
else [None for _ in COLLECTIVES_OBJECT_TEST_LIST]
)
# Single object test with device specified. Backend="gloo", device=cpu
if backend != "nccl":
single_obj_list = [objects[0]]
if self.rank != src_rank:
self.assertNotEqual(
single_obj_list[0], COLLECTIVES_OBJECT_TEST_LIST[0]
)
dist.broadcast_object_list(
single_obj_list, src=0, group=None, device=torch.device("cpu")
)
self.assertEqual(single_obj_list[0], COLLECTIVES_OBJECT_TEST_LIST[0])
# Single object test with device specified. Backend="gloo", device=current_device+1
# The test is gated on the GPU count matching the world size, to avoid
# the case where the backend is gloo but there are not multiple GPU devices.
if backend != "nccl" and torch.cuda.device_count() == int(self.world_size):
single_obj_list = [objects[0]]
if self.rank != src_rank:
self.assertNotEqual(
single_obj_list[0], COLLECTIVES_OBJECT_TEST_LIST[0]
)
dist.broadcast_object_list(
single_obj_list, src=0, group=None, device=torch.device(next_rank)
)
self.assertEqual(single_obj_list[0], COLLECTIVES_OBJECT_TEST_LIST[0])
# Single object test with device specified. Backend="nccl", device=current_device+1
if backend == "nccl" and torch.cuda.device_count() == int(self.world_size):
single_obj_list = [objects[0]]
if self.rank != src_rank:
self.assertNotEqual(
single_obj_list[0], COLLECTIVES_OBJECT_TEST_LIST[0]
)
dist.broadcast_object_list(
single_obj_list, src=0, group=None, device=torch.device(next_rank)
)
self.assertEqual(single_obj_list[0], COLLECTIVES_OBJECT_TEST_LIST[0])
# Single object test: backward compatibility with device unspecified
single_obj_list = [objects[0]]
if self.rank != src_rank:
self.assertNotEqual(single_obj_list[0], COLLECTIVES_OBJECT_TEST_LIST[0])
dist.broadcast_object_list(single_obj_list, src=0)
self.assertEqual(single_obj_list[0], COLLECTIVES_OBJECT_TEST_LIST[0])
# Multiple input objects test
if self.rank != src_rank:
self.assertNotEqual(objects, COLLECTIVES_OBJECT_TEST_LIST)
dist.broadcast_object_list(objects, src=0)
self.assertEqual(objects, COLLECTIVES_OBJECT_TEST_LIST)
def _test_ddp_ignore_params_arg(self, static_graph=False):
class TestModel(nn.Module):
def __init__(self, rank):
self.rank = rank
super(TestModel, self).__init__()
self.fc1 = nn.Linear(1, 1, bias=False)
# Proxy that will be materialized to another architecture later.
# (after wrapping model with DDP)
if self.rank == 0:
self.fc2 = nn.Linear(1, 10, bias=False)
else:
self.fc2 = nn.Linear(10, 10, bias=False)
def forward(self, x):
x = self.fc1(x)
x = self.fc2(x)
return x
device_id = self.rank
# Ensure the test works for both the find_unused_parameters and broadcast_buffers settings.
for (find_unused, broadcast_buffers) in itertools.product(
[False, True], [False, True]
):
model = TestModel(self.rank).float().to(device_id)
# Note that the model can have different shape buffers if we pass
# them in to be ignored as well.
model.fc2.register_buffer(
"ignore_buffer", torch.zeros(5 + self.rank, device=self.rank)
)
proxy_params = list(model.fc2.parameters())
proxy_buffers = list(model.fc2.buffers())
model_fc2_name = [
module_name
for module_name, module in model.named_modules()
if module is model.fc2
][0]
proxy_param_names = [
f"{model_fc2_name}.{param_name}"
for param_name, _ in model.fc2.named_parameters()
]
proxy_buffer_names = [
f"{model_fc2_name}.{buf_name}"
for buf_name, _ in model.fc2.named_buffers()
]
# Specify that we should ignore proxy_params since it will be
# materialized later.
torch.nn.parallel.DistributedDataParallel._set_params_and_buffers_to_ignore_for_model(
model, proxy_param_names + proxy_buffer_names
)
ddp = torch.nn.parallel.DistributedDataParallel(
model,
device_ids=[device_id],
find_unused_parameters=find_unused,
broadcast_buffers=broadcast_buffers,
)
if static_graph:
ddp._set_static_graph()
# Materialize new params. These are not registered in DDP and thus
# don't have autograd hooks installed on them.
ddp.module.fc2 = nn.Linear(1, 1, bias=False).to(device_id)
local_model = copy.deepcopy(ddp.module).cuda(self.rank)
inp = torch.ones(1, dtype=torch.float).to(device_id) * (self.rank + 1)
for i in range(6):
ddp(inp).sum().backward()
local_model(inp).sum().backward()
for materialized_param, local_param in zip(
ddp.module.fc2.parameters(), local_model.fc2.parameters()
):
self.assertEqual(materialized_param.grad, local_param.grad)
for synced_param, local_param in zip(
ddp.module.fc1.parameters(), local_model.fc1.parameters()
):
self.assertFalse(synced_param.grad == local_param.grad)
for proxy_param in proxy_params:
self.assertTrue(proxy_param.grad is None)
torch.cuda.synchronize(device=self.rank)
@require_backend({"gloo", "nccl"})
@require_backends_available({"gloo", "nccl"})
@skip_if_lt_x_gpu(2)
def test_ddp_ignore_params_arg(self):
self._test_ddp_ignore_params_arg(static_graph=False)
self._test_ddp_ignore_params_arg(static_graph=True)
@with_dist_debug_levels(levels=["OFF", "INFO", "DETAIL"])
@require_backend({"gloo", "nccl"})
@require_backends_available({"gloo", "nccl"})
@skip_if_lt_x_gpu(2)
def test_ddp_unused_params_rebuild_buckets_exception(self):
class ToyModel(nn.Module):
def __init__(self):
super(ToyModel, self).__init__()
self.net1 = nn.Linear(10, 10, bias=False)
self.net2 = nn.Linear(10, 10, bias=False)
def forward(self, x):
return self.net1(x)
ddp = torch.nn.parallel.DistributedDataParallel(
ToyModel().cuda(self.rank), device_ids=[self.rank]
)
for i in range(2):
inp = torch.rand(1, 10)
if i > 0:
try:
ddp(inp).sum().backward()
except RuntimeError as e:
msg = str(e)
verify_ddp_error_logged(ddp, msg)
expected_strs = [
ddp_prev_reduction_unfinished_str,
ddp_recommend_find_unused_params_str,
ddp_outputs_not_used_in_loss_str,
]
# Without debug mode, should show suggestion to use debug mode.
if dist._get_debug_mode() == dist._DistributedDebugLevel.OFF:
expected_strs.append(ddp_suggest_debug_mode_str)
else:
unreduced_params = ", ".join(["net2.weight"])
expected_strs.append(
f"did not receive grad for rank {self.rank}: {unreduced_params}"
)
for s in expected_strs:
self.assertTrue(s in msg, f"Expected {s} to be in {msg}")
self.assertFalse(ddp_find_unused_params_enabled_str in msg)
                else:
                    self.fail("DDP unused parameters error not raised.")
else:
ddp(inp).sum().backward()
dist.barrier()
@require_backend({"gloo", "nccl"})
@require_backends_available({"gloo", "nccl"})
@skip_if_lt_x_gpu(2)
def test_ddp_shared_grad_acc_unused_params(self):
# When find_unused_parameters=True, ensure we mark unused parameters
# even if they share gradient accumulators.
class ToyModel(nn.Module):
def __init__(self):
super(ToyModel, self).__init__()
# net1, bias, and net1.bias are all unused params.
self.net1 = nn.Linear(10, 5, bias=False)
self.bias = nn.Parameter(torch.zeros(5))
# net1.bias and self.bias are names for the same underlying
# parameter, so they share the same grad acc. This caused
# the bug reported in https://github.com/pytorch/pytorch/issues/41324.
self.net1.bias = self.bias
self.net2 = nn.Linear(10, 5)
def forward(self, x):
return self.net2(x)
torch.cuda.set_device(self.rank)
model = ToyModel().to(torch.cuda.current_device())
ddp_model = torch.nn.parallel.DistributedDataParallel(
model, device_ids=[self.rank], find_unused_parameters=True
)
inp = torch.randn(20, 10, device=self.rank)
for i in range(6):
out = ddp_model(inp)
loss = out.sum()
loss.backward()
@require_backend({"gloo", "nccl"})
@require_backends_available({"gloo", "nccl"})
@skip_if_lt_x_gpu(2)
def test_ddp_device(self):
m = nn.Linear(10, 10).to(self.rank)
expected_len = 2
class TensorWrapper:
__slots__ = ["t", "moved_to_gpu"]
def __init__(self, t):
self.t = t
self.moved_to_gpu = False
# Handlers for specific types of validation we want to do based on
# the input type.
def tuple_and_list_validator(x):
            self.assertEqual(len(x), expected_len)
self.assertEqual(1, len(set(t.device for t in x)))
self.assertEqual(x[0].device.index, self.rank)
return x[0] + x[1]
def namedtuple_validator(x):
self.assertEqual(x._fields, EXPECTED_FIELDS)
self.assertEqual(x.a.device.index, x.b.device.index)
self.assertEqual(x.a.device.index, self.rank)
return x.a + x.b
def custom_type_validator(x):
self.assertTrue(x.moved_to_gpu or (str(x.t.device) == "cpu"))
x.t = x.t.to(self.rank)
x.moved_to_gpu = True
return x.t
def dict_validator(x):
self.assertTrue(EXPECTED_FIELDS[0] in x.keys())
self.assertTrue(EXPECTED_FIELDS[1] in x.keys())
self.assertEqual(1, len(set(t.device for t in x.values())))
self.assertEqual(x[EXPECTED_FIELDS[0]].device.index, self.rank)
return x[EXPECTED_FIELDS[0]] + x[EXPECTED_FIELDS[1]]
validators = {
TensorWrapper: custom_type_validator,
tuple: tuple_and_list_validator,
list: tuple_and_list_validator,
TestNamedTupleInput_0: namedtuple_validator,
TestNamedTupleInput_1: namedtuple_validator,
dict: dict_validator,
}
class ToyModel(torch.nn.Module):
def __init__(_self): # noqa: B902
super().__init__()
_self.lin = nn.Linear(10, 10, bias=False)
def forward(_self, x, expected_type): # noqa: B902
                # Similar to scatter, the recursive to() in the single-device
                # case does not move tensors that are wrapped in a custom type.
self.assertTrue(isinstance(x, expected_type))
fwd_tensor = validators[expected_type](x)
return _self.lin(fwd_tensor)
model = torch.nn.parallel.DistributedDataParallel(
ToyModel().to(self.rank), device_ids=[self.rank]
)
def train_iter(inp, input_type):
for _ in range(4):
out = model(inp, input_type)
out.sum().backward()
# CPU tuple input, should be moved to the proper device before call
# to forward.
inp = tuple(torch.randn(10, 10) for _ in range(expected_len))
train_iter(inp, tuple)
# List CPU input, should be moved to proper device before call to
# forward.
inp = [torch.randn(10, 10) for _ in range(expected_len)]
train_iter(inp, list)
# Custom type containing tensor. The type is maintained, but the
# device is not propagated (which is what happens with scatter too)
inp = TensorWrapper(torch.randn(10, 10))
train_iter(inp, TensorWrapper)
# NamedTuple input. The type should be maintained and tensor inputs
# should be moved to the correct device as in scatter.
batch = 5
dim = 10
a = torch.rand(batch, dim)
b = torch.rand(batch, dim)
inp = TestNamedTupleInput_0(a, b)
train_iter(inp, type(inp))
inp = TestNamedTupleInput_1(a, b)
train_iter(inp, type(inp))
# dictionary input.
inp = {
EXPECTED_FIELDS[0]: a,
EXPECTED_FIELDS[1]: b,
}
train_iter(inp, type(inp))
@require_backend({"gloo", "nccl"})
@require_backends_available({"gloo", "nccl"})
@skip_if_lt_x_gpu(2)
def test_ddp_namedtuple(self):
batch = 5
dim = 10
a = torch.rand(batch, dim, device=self.rank)
b = torch.rand(batch, dim, device=self.rank)
class NamedTupleModule(torch.nn.Module):
def __init__(_self): # noqa: B902
super().__init__()
_self.lin = nn.Linear(10, 1)
def forward(_self, input, expected_type): # noqa: B902
# Without NamedTuple support, this would be of type tuple.
self.assertTrue(
isinstance(input, expected_type),
f"Expected type {expected_type} but got {type(input)}",
)
self.assertEqual(input._fields, EXPECTED_FIELDS)
self.assertEqual(a, input.a)
self.assertEqual(b, input.b)
return _self.lin(torch.mul(input.a, input.b))
model = torch.nn.parallel.DistributedDataParallel(
NamedTupleModule().cuda(self.rank), device_ids=[self.rank]
)
inp = TestNamedTupleInput_0(a, b)
# The following would fail if DDP does not propagate NamedTuples correctly.
model(inp, type(inp))
inp = TestNamedTupleInput_1(a, b)
model(inp, type(inp))
@with_dist_debug_levels(levels=["OFF", "INFO", "DETAIL"])
@require_backend({"gloo", "nccl"})
@require_backends_available({"gloo", "nccl"})
@skip_if_lt_x_gpu(2)
def test_ddp_control_flow_same_across_ranks(self):
# Control flow that is the same across ranks.
batch = 20
dim = 10
world_size = dist.get_world_size()
torch.cuda.set_device(self.rank)
model = torch.nn.parallel.DistributedDataParallel(
ControlFlowToyModel().cuda(self.rank),
device_ids=[self.rank],
find_unused_parameters=True,
)
random_input = torch.randn(batch, dim, device=self.rank)
ones_input = torch.ones(batch, dim, device=self.rank)
for i in range(6):
if i % 2 == 0:
out = model(random_input)
else:
out = model(ones_input)
loss = out.sum()
loss.backward()
# On even iterations, 2nd param goes unused, on odd iterations,
# it is used.
local_used_maps = model.reducer._get_local_used_maps()
if i % 2 == 0:
expected = torch.tensor(
[world_size, 0], device=self.rank, dtype=torch.int32
)
else:
expected = torch.tensor(
[world_size, world_size], device=self.rank, dtype=torch.int32
)
# Validate parameter usage.
variable_usage_tensor = local_used_maps[0]
self.assertEqual(variable_usage_tensor, expected)
# Validate appropriate error message when DDP is used with
# find_unused_parameters=False.
model = torch.nn.parallel.DistributedDataParallel(
ControlFlowToyModel().cuda(self.rank),
device_ids=[self.rank],
find_unused_parameters=False,
)
for i in range(2):
if i == 0:
loss = model(random_input).sum()
loss.backward()
else:
try:
loss = model(random_input).sum()
loss.backward()
except RuntimeError as e:
msg = str(e)
verify_ddp_error_logged(model, msg)
# 2nd linear layer is unused
unused_param_index = 1
expected_strs = [
ddp_prev_reduction_unfinished_str,
ddp_recommend_find_unused_params_str,
ddp_outputs_not_used_in_loss_str,
f"Parameter indices which did not receive grad for rank {self.rank}: {unused_param_index}",
]
# In debug mode, should show parameters that weren't reduced.
if dist._get_debug_mode() == dist._DistributedDebugLevel.OFF:
expected_strs.append(ddp_suggest_debug_mode_str)
else:
unreduced_params = ", ".join(["lin2.weight"])
expected_strs.append(
f"did not receive grad for rank {self.rank}: {unreduced_params}"
)
for s in expected_strs:
self.assertTrue(s in msg, f"Expected {s} to be in {msg}")
self.assertFalse(ddp_find_unused_params_enabled_str in msg)
                else:
                    self.fail("DDP error not raised")
dist.barrier()
@require_backend({"gloo", "nccl"})
@require_backends_available({"gloo", "nccl"})
@skip_if_lt_x_gpu(2)
def test_invalid_static_graph(self):
world_size = dist.get_world_size()
torch.cuda.set_device(self.rank)
model = torch.nn.parallel.DistributedDataParallel(
ControlFlowToyModel().cuda(self.rank),
device_ids=[self.rank],
)
model._set_static_graph()
random_input = torch.randn(20, 10, device=self.rank)
ones_input = torch.ones(20, 10, device=self.rank)
expected_err = "Your training graph has changed in this iteration"
with self.assertRaisesRegex(RuntimeError, expected_err):
for i in range(2):
if i % 2 == 0:
out = model(random_input)
else:
out = model(ones_input)
loss = out.sum()
loss.backward()
verify_ddp_error_logged(model, expected_err)
with self.assertRaisesRegex(
RuntimeError,
"Expected to have finished reduction in the prior iteration "
"before starting a new one. This error indicates that your "
"training graph has changed in this iteration",
):
for i in range(2):
if i % 2 != 0:
out = model(random_input)
else:
out = model(ones_input)
loss = out.sum()
loss.backward()
verify_ddp_error_logged(model, "Expected to have finished reduction")
@with_dist_debug_levels(levels=["OFF", "INFO", "DETAIL"])
@require_backend({"gloo", "nccl"})
@require_backends_available({"gloo", "nccl"})
@skip_if_lt_x_gpu(2)
def test_ddp_control_flow_different_across_ranks(self):
batch = 20
dim = 10
class ToyModel(nn.Module):
def __init__(self, rank):
super(ToyModel, self).__init__()
self.lin1 = nn.Linear(10, 10, bias=False)
self.lin2 = nn.Linear(10, 10, bias=False)
self.rank = rank
def forward(self, x):
use_second_layer = (
torch.equal(x, torch.ones(batch, dim, device=x.device))
and self.rank == 1
)
if use_second_layer:
return self.lin2(F.relu(self.lin1(x)))
else:
return F.relu(self.lin1(x))
world_size = dist.get_world_size()
torch.cuda.set_device(self.rank)
model = torch.nn.parallel.DistributedDataParallel(
ToyModel(self.rank).cuda(self.rank),
device_ids=[self.rank],
find_unused_parameters=True,
)
random_input = torch.randn(batch, dim, device=self.rank)
ones_input = torch.ones(batch, dim, device=self.rank)
for i in range(6):
if i % 2 == 0:
out = model(random_input)
else:
out = model(ones_input)
loss = out.sum()
loss.backward()
local_used_maps = model.reducer._get_local_used_maps()
if i % 2 == 0:
expected = torch.tensor(
[world_size, 0], device=self.rank, dtype=torch.int32
)
else:
expected = torch.tensor(
[world_size, 1], device=self.rank, dtype=torch.int32
)
variable_usage_tensor = local_used_maps[0]
self.assertEqual(variable_usage_tensor, expected)
model = torch.nn.parallel.DistributedDataParallel(
ToyModel(self.rank).cuda(self.rank),
device_ids=[self.rank],
find_unused_parameters=False,
)
for i in range(2):
if i == 0:
loss = model(random_input).sum()
loss.backward()
else:
try:
loss = model(random_input).sum()
loss.backward()
except RuntimeError as e:
msg = str(e)
verify_ddp_error_logged(model, msg)
unused_param_index = 1
expected_strs = [
ddp_prev_reduction_unfinished_str,
ddp_recommend_find_unused_params_str,
ddp_outputs_not_used_in_loss_str,
f"Parameter indices which did not receive grad for rank {self.rank}: {unused_param_index}",
]
# Without debug mode, should show suggestion to use debug mode.
if dist._get_debug_mode() == dist._DistributedDebugLevel.OFF:
expected_strs.append(ddp_suggest_debug_mode_str)
else:
unreduced_params = ", ".join(["lin2.weight"])
expected_strs.append(
f"did not receive grad for rank {self.rank}: {unreduced_params}"
)
for s in expected_strs:
self.assertTrue(s in msg, f"Expected {s} to be in {msg}")
self.assertFalse(ddp_find_unused_params_enabled_str in msg)
                else:
                    self.fail("DDP error not raised")
dist.barrier()
@require_backend({"gloo"})
@sandcastle_skip_if(BACKEND == "nccl", "NCCL does not support scatter")
def test_scatter_object_list(self):
src_rank = 0
scatter_list = (
COLLECTIVES_OBJECT_TEST_LIST
if self.rank == src_rank
else [None for _ in COLLECTIVES_OBJECT_TEST_LIST]
)
world_size = dist.get_world_size()
scatter_list = scatter_list[:world_size]
i = 0
while len(scatter_list) < world_size:
scatter_list.append(scatter_list[i])
i += 1
output_obj_list = [None]
dist.scatter_object_list(output_obj_list, scatter_list, src=src_rank)
self.assertEqual(
output_obj_list[0],
COLLECTIVES_OBJECT_TEST_LIST[
self.rank % len(COLLECTIVES_OBJECT_TEST_LIST)
],
)
# Ensure errors are raised upon incorrect arguments.
with self.assertRaisesRegex(
RuntimeError,
"Expected argument scatter_object_output_list to be a list of size at least 1.",
):
dist.scatter_object_list([], scatter_list, src=src_rank)
@require_backend({"gloo", "nccl"})
@require_backends_available({"gloo", "nccl"})
@skip_if_lt_x_gpu(2)
@skip_if_rocm
def test_ddp_model_diff_across_ranks(self):
group_gloo = dist.new_group(
timeout=timedelta(seconds=60), backend=dist.Backend.GLOO
)
# Set NCCL_BLOCKING_WAIT and use a new NCCL group to improve test
# determinism.
os.environ["NCCL_BLOCKING_WAIT"] = "1"
group_to_use = dist.new_group(
backend=dist.get_backend(), timeout=timedelta(seconds=5)
)
torch.cuda.set_device(self.rank)
# Creates network with different sized embedding table on different
# ranks. This should throw an error during DDP init.
net = EmbeddingNet(self.rank)
        # When running with the NCCL backend (outside detail debug mode), rank 0
        # is expected to hit a collective operation timeout rather than a shape
        # mismatch error.
is_detail_dbg_mode = (
dist._get_debug_mode() == dist._DistributedDebugLevel.DETAIL
)
rank_0_ctx = (
self.assertRaisesRegex(
RuntimeError, "Caught collective operation timeout"
)
if dist.get_backend(group_to_use) == dist.Backend.NCCL
and not is_detail_dbg_mode
else self.assertRaises(RuntimeError)
)
ctx = (
rank_0_ctx
if self.rank == 0
else self.assertRaisesRegex(RuntimeError, "appears not to match")
)
with ctx:
net = torch.nn.parallel.DistributedDataParallel(
net.to(self.rank),
device_ids=[self.rank],
process_group=group_to_use,
)
dist.barrier(group_to_use)
        # Use a gloo barrier so that no rank exits the test
        # early, which causes failure with Barrier.sync.
dist.barrier(group_gloo)
@with_dist_debug_levels(levels=["OFF", "INFO", "DETAIL"])
@require_backend({"gloo", "nccl"})
@require_backends_available({"gloo", "nccl"})
@skip_if_lt_x_gpu(2)
def test_output_unused_in_loss(self):
model = TwoLinLayerNet()
# Need copy of model to pass into 2nd DDP ctor otherwise autograd hooks
# on first DDP reducer will execute!
model_copy = copy.deepcopy(model)
net = torch.nn.parallel.DistributedDataParallel(
copy.deepcopy(model).cuda(self.rank),
device_ids=[self.rank],
)
net_with_find_unused = torch.nn.parallel.DistributedDataParallel(
model_copy.cuda(self.rank),
device_ids=[self.rank],
find_unused_parameters=True,
)
inp = torch.randn(10, 10)
for ddp in [net, net_with_find_unused]:
for i in range(2):
if i == 0:
a, b = ddp(inp)
loss = b.sum()
loss.backward()
else:
try:
a, b = ddp(inp)
loss = b.sum()
loss.backward()
except RuntimeError as e:
msg = str(e)
unused_index = 0
unused_index_substr = (
f"Parameter indices which did not receive grad for rank {self.rank}: {unused_index}"
)
if ddp == net:
expected_strs = [
ddp_prev_reduction_unfinished_str,
ddp_recommend_find_unused_params_str,
ddp_outputs_not_used_in_loss_str,
unused_index_substr,
]
unexpected_strs = [
ddp_find_unused_params_enabled_str,
]
elif ddp == net_with_find_unused:
expected_strs = [
ddp_prev_reduction_unfinished_str,
ddp_outputs_not_used_in_loss_str,
ddp_find_unused_params_enabled_str,
unused_index_substr,
]
unexpected_strs = [
ddp_recommend_find_unused_params_str,
]
# In debug mode, should show parameters that weren't reduced.
if (
dist._get_debug_mode()
== dist._DistributedDebugLevel.OFF
):
expected_strs.append(ddp_suggest_debug_mode_str)
else:
unreduced_params = ", ".join(["a.weight"])
expected_strs.append(
f"did not receive grad for rank {self.rank}: {unreduced_params}"
)
for s in expected_strs:
self.assertTrue(
s in msg, f"Expected {s} to be in {msg}"
)
for s in unexpected_strs:
self.assertFalse(
s in msg, f"Expected {s} not to be in {msg}"
)
                    else:
                        self.fail("DDP error not raised")
dist.barrier()
def _test_different_graph_across_ranks(
self, find_unused_parameters=False, static_graph=False
):
class ToyModel(nn.Module):
def __init__(self, rank):
super(ToyModel, self).__init__()
self.lin1 = nn.Linear(10, 10, bias=False)
self.lin2 = nn.Linear(10, 10, bias=False)
self.rank = rank
def forward(self, x):
if self.rank == 0:
return self.lin2(F.relu(self.lin1(x)))
else:
return F.relu(self.lin1(x))
torch.manual_seed(31415)
world_size = dist.get_world_size()
torch.cuda.set_device(self.rank)
model = ToyModel(self.rank).cuda(self.rank)
ddp_model = torch.nn.parallel.DistributedDataParallel(
model,
device_ids=[self.rank],
find_unused_parameters=find_unused_parameters,
gradient_as_bucket_view=True,
)
if static_graph:
ddp_model._set_static_graph()
random_input = torch.randn(20, 10, device=self.rank)
for i in range(10):
out = ddp_model(random_input)
loss = out.sum()
loss.backward()
return ddp_model
@require_backend({"gloo", "nccl"})
@require_backends_available({"gloo", "nccl"})
@skip_if_lt_x_gpu(2)
def test_different_graph_across_ranks(self):
base_model = self._test_different_graph_across_ranks(
find_unused_parameters=True
)
self.assertFalse(
base_model._get_ddp_logging_data().get("has_rebuilt_buckets", 0)
)
static_model = self._test_different_graph_across_ranks(static_graph=True)
self.assertTrue(
static_model._get_ddp_logging_data().get("has_rebuilt_buckets", 0)
)
for i, j in zip(base_model.parameters(), static_model.parameters()):
self.assertEqual(i, j)
@require_backend({"gloo"})
@require_backends_available({"gloo"})
@sandcastle_skip_if(
IS_MACOS or IS_WINDOWS,
"MacOS uses uv transport which does not have as robust error handling as tcp transport",
)
def test_monitored_barrier_gloo(self):
tensors = [torch.ones(10) * self.rank]
for _ in range(10):
dist.all_reduce(torch.cat(tensors))
timeout = timedelta(seconds=2)
dist.monitored_barrier(timeout=timeout)
for _ in range(10):
dist.all_reduce(torch.cat(tensors))
dist.monitored_barrier(timeout=timeout, wait_all_ranks=True)
failed_rank = 1
src_rank = 0
if self.rank == src_rank:
with self.assertRaisesRegex(
RuntimeError, f"Rank {failed_rank} failed to pass monitoredBarrier"
):
dist.monitored_barrier(timeout=timeout)
elif self.rank != failed_rank:
err_regex = (
f"Rank {self.rank} successfully reached monitoredBarrier,"
f" but received errors while waiting to be unblocked by rank"
f" {src_rank}"
)
with self.assertRaisesRegex(RuntimeError, err_regex):
dist.monitored_barrier(timeout=timeout)
self._barrier(timeout=30)
@require_backend({"gloo"})
@require_backends_available({"gloo"})
def test_monitored_barrier_gloo_subgroup(self):
failed_rank = 1
timeout = 0.1
subgroup = dist.new_group(ranks=[0, 1])
if self.rank == failed_rank:
return
if self.rank == 0:
with self.assertRaisesRegex(
RuntimeError, f"Rank {failed_rank} failed to pass monitoredBarrier"
):
dist.monitored_barrier(subgroup, timeout)
else:
dist.monitored_barrier(subgroup, timeout)
def _test_monitored_barrier_allreduce_hang(self, wait_all_ranks):
nccl_pg = dist.new_group(
            ranks=list(range(int(self.world_size))),
timeout=timedelta(seconds=2),
backend=dist.Backend.NCCL,
)
gloo_pg = dist.new_group(
            ranks=list(range(int(self.world_size))),
backend=dist.Backend.GLOO,
)
tensors = [torch.ones(10, device=self.rank) * self.rank]
nccl_pg.allreduce(tensors).wait()
        # Simulate a hang on nonzero ranks: they wait on the allreduce with a
        # short timeout in this test to ensure it exits cleanly.
if self.rank != 0:
# Can get different errors here depending on whether gloo-based
# wrapper PG is enabled or not, since with wrapper pg, it will
# fail in a collective synchronization check and not actually
# call into the nccl pg.
if dist._get_debug_mode() == dist._DistributedDebugLevel.DETAIL:
err_regex = "Timed out waiting"
else:
err_regex = "Caught collective operation timeout"
with self.assertRaisesRegex(RuntimeError, err_regex):
nccl_pg.allreduce(tensors).wait(timedelta(seconds=0.1))
else:
# Rank 0 should report first (in order) timed out rank or all ranks
# depending on wait_all_ranks flag passed into monitored_barrier.
if wait_all_ranks:
rank_str = ", ".join(
[str(i) for i in range(1, int(self.world_size))]
)
err_regex = f"Ranks {rank_str} failed to pass monitoredBarrier"
else:
expected_first_fail_rank = 1
err_regex = f"Rank {expected_first_fail_rank} failed to pass monitoredBarrier"
monitored_barrier_timeout_seconds = timedelta(seconds=0.1)
with self.assertRaisesRegex(RuntimeError, err_regex):
gloo_pg.monitored_barrier(
monitored_barrier_timeout_seconds, wait_all_ranks=wait_all_ranks
)
@with_nccl_blocking_wait
@require_backend({"gloo", "nccl"})
@require_backends_available({"gloo", "nccl"})
@skip_if_rocm
@skip_if_lt_x_gpu(int(os.environ["WORLD_SIZE"]))
def test_monitored_barrier_allreduce_hang(self):
# tests expected behavior when nonzero rank hangs and we want to
# report first timed out rank.
self._test_monitored_barrier_allreduce_hang(wait_all_ranks=False)
@with_nccl_blocking_wait
@require_backend({"gloo", "nccl"})
@require_backends_available({"gloo", "nccl"})
@skip_if_rocm
@skip_if_lt_x_gpu(int(os.environ["WORLD_SIZE"]))
def test_monitored_barrier_allreduce_hang_wait_all_ranks(self):
# tests expected behavior when nonzero rank hangs and we want to
# report all timed out ranks.
self._test_monitored_barrier_allreduce_hang(wait_all_ranks=True)
@require_backend({"gloo"})
@require_backends_available({"gloo"})
def test_monitored_barrier_gloo_rank_0_timeout(self):
# tests error when rank 0 exhausts its given timeout.
process_group = dist.new_group(
            ranks=list(range(int(self.world_size)))
)
timeout = timedelta(seconds=0)
if self.rank == 0:
with self.assertRaisesRegex(
RuntimeError, f"Rank {self.rank} timed out in monitoredBarrier"
):
process_group.monitored_barrier(timeout)
@require_backend({"gloo"})
@require_backends_available({"gloo"})
@skip_if_small_worldsize
@sandcastle_skip_if(
IS_MACOS or IS_WINDOWS,
"MacOS uses uv transport which does not have as robust error handling as tcp transport",
)
def test_monitored_barrier_failure_order(self):
# Ensure that the first (in sorted order) rank is reported when
# multiple ranks fail to pass the monitored_barrier.
# TODO(#54879): Provide ability to wait and report all failed ranks
expected_first_failed_rank = 2
timeout = timedelta(seconds=2)
src_rank = 0
if self.rank == src_rank:
with self.assertRaisesRegex(
RuntimeError, f"Rank {expected_first_failed_rank}"
):
dist.monitored_barrier(timeout=timeout)
elif self.rank == 1:
err_regex = (
f"Rank {self.rank} successfully reached monitoredBarrier,"
f" but received errors while waiting to be unblocked by rank"
f" {src_rank}"
)
with self.assertRaisesRegex(RuntimeError, err_regex):
dist.monitored_barrier(timeout=timeout)
@require_backend({"gloo"})
@require_backends_available({"gloo"})
@skip_if_small_worldsize
def test_monitored_barrier_wait_all_ranks(self):
# Tests simple case where > 1 rank does not call into monitored
# barrier and verifies all ranks are reported by rank 0.
if self.rank == 0:
timeout = timedelta(seconds=0.1)
rank_str = ", ".join([str(i) for i in range(1, int(self.world_size))])
err_regex = f"Ranks {rank_str} failed to pass monitoredBarrier"
with self.assertRaisesRegex(RuntimeError, err_regex):
dist.monitored_barrier(timeout=timeout, wait_all_ranks=True)
@require_backend({"gloo", "nccl"})
@require_backends_available({"gloo", "nccl"})
@skip_if_lt_x_gpu(2)
def test_ddp_build_param_to_name_mapping(self):
model = TwoLinLayerNet()
net = torch.nn.parallel.DistributedDataParallel(
model.cuda(self.rank),
device_ids=[self.rank],
)
expected_mapping = {0: "a.weight", 1: "b.weight"}
net_params, _ = net._build_params_for_reducer()
param_to_name_mapping = net._build_param_to_name_mapping(net_params)
self.assertDictEqual(expected_mapping, param_to_name_mapping)
# Test when DDP is used with ignored parameters.
model = TwoLinLayerNet()
# Parameters to ignore are in the format {module_name}.{param_name}
params_to_ignore = ["a.weight"]
torch.nn.parallel.DistributedDataParallel._set_params_and_buffers_to_ignore_for_model(
model, params_to_ignore
)
net = torch.nn.parallel.DistributedDataParallel(
model.cuda(self.rank),
device_ids=[self.rank],
)
expected_mapping = {0: "b.weight"}
net_params, _ = net._build_params_for_reducer()
param_to_name_mapping = net._build_param_to_name_mapping(net_params)
self.assertDictEqual(expected_mapping, param_to_name_mapping)
# Test errors are raised when DDP and module parameters mismatch.
# This generally indicates a bug with DDP and is not expected to
# happen in user applications.
model = TwoLinLayerNet()
net = torch.nn.parallel.DistributedDataParallel(
model.cuda(self.rank),
device_ids=[self.rank],
)
net_params, _ = net._build_params_for_reducer()
net_params[0].extend(
[
torch.nn.Parameter(torch.ones(1)),
torch.nn.Parameter(torch.ones(1)),
]
)
with self.assertRaisesRegex(ValueError, "Expected param to name mapping"):
net._build_param_to_name_mapping(net_params)
net_params[0] = net_params[0][:-3]
with self.assertRaisesRegex(ValueError, "Param with name"):
net._build_param_to_name_mapping(net_params)
net_params[0].extend(
[
torch.nn.Parameter(torch.ones(1)),
torch.nn.Parameter(torch.ones(1)),
]
)
@sandcastle_skip_if(
BACKEND != "nccl" and BACKEND != "gloo",
"Only Nccl & Gloo backend support DistributedDataParallel",
)
@skip_if_lt_x_gpu(2)
def test_ddp_build_param_to_name_mapping_requires_grad(self):
class Net(nn.Module):
def __init__(self):
super().__init__()
self.lin = nn.Linear(10, 10)
                # lin.bias has requires_grad=False, so it is not tracked by DDP
                # and should not show up in the param-to-name mapping.
self.lin.bias.requires_grad_(False)
def forward(self, x):
return self.lin(x)
model = Net()
net = torch.nn.parallel.DistributedDataParallel(
model.cuda(self.rank), device_ids=[self.rank]
)
expected_mapping = {
0: "lin.weight",
}
net_params, _ = net._build_params_for_reducer()
param_to_name_mapping = net._build_param_to_name_mapping(net_params)
self.assertEqual(param_to_name_mapping, expected_mapping)
def _test_ddp_multiple_nested_unused_params_error(self, ignore_sparse):
debug_mode_off = dist._get_debug_mode() == dist._DistributedDebugLevel.OFF
class SubModule(nn.Module):
def __init__(self):
super().__init__()
self.embedding_net = EmbeddingNet(0)
self.lin = TwoLinLayerNet()
self.bn = BatchNormNet()
self.lin_layer = nn.Linear(4, 10, bias=False)
def forward(self, x):
x = self.bn(x)
x = self.lin_layer(x)
x = self.lin.a(x) # self.lin.b param unused
# EmbeddingNet entirely unused: self.embedding_net.embedding and
# self.embedding_net.lin unused.
return x
class MyModel(nn.Module):
def __init__(self):
super().__init__()
self.sub_module = SubModule()
def forward(self, x):
return self.sub_module(x)
model = MyModel()
sparse_embedding_fqns = []
if ignore_sparse:
for module_name, module in model.named_modules():
if module == model.sub_module.embedding_net.embedding:
for parameter_name, param in module.named_parameters(
recurse=False
):
fqn = f"{module_name}.{parameter_name}"
sparse_embedding_fqns.append(fqn)
torch.nn.parallel.DistributedDataParallel._set_params_and_buffers_to_ignore_for_model(
model, sparse_embedding_fqns
)
unused_modules = [
model.sub_module.embedding_net.lin,
model.sub_module.lin.b,
]
else:
unused_modules = list(model.sub_module.embedding_net.modules()) + [
model.sub_module.lin.b,
]
expected_unused_param_fqns = []
used_param_fqns = [] # Validate that these don't mistakenly show up.
fqn_to_param_index = {}
index = 0
for module_name, module in model.named_modules():
for parameter_name, param in module.named_parameters(recurse=False):
fqn = f"{module_name}.{parameter_name}"
fqn_to_param_index[fqn] = index
if fqn not in sparse_embedding_fqns:
index += 1
if module in unused_modules:
expected_unused_param_fqns.append(fqn)
else:
if (
not ignore_sparse
or module != model.sub_module.embedding_net.embedding
):
used_param_fqns.append(fqn)
net = torch.nn.parallel.DistributedDataParallel(
model.cuda(self.rank),
device_ids=[self.rank],
)
batch, dim = 10, 2
inp = torch.ones(batch, dim)
for i in range(2):
if i == 0:
out = net(inp)
loss = out.sum()
loss.backward()
else:
try:
out = net(inp)
loss = out.sum()
loss.backward()
except RuntimeError as e:
e = str(e)
unused_param_substr = e[e.find("did not receive grad") :]
for unused_param_fqn in expected_unused_param_fqns:
self.assertTrue(
unused_param_fqn in unused_param_substr
or debug_mode_off
)
self.assertTrue(
str(fqn_to_param_index[unused_param_fqn])
in unused_param_substr,
f"Did not find index {fqn_to_param_index[unused_param_fqn]} for {unused_param_fqn}",
)
                    # Validate that used param fqns don't show up in the error
                    # logs.
for used_param_fqn in used_param_fqns:
self.assertFalse(used_param_fqn in unused_param_substr)
                    # Validate that ignored param fqns don't show up as unused
                    # params in the error.
for sparse_param_fqn in sparse_embedding_fqns:
self.assertFalse(sparse_param_fqn in unused_param_substr)
                else:
                    self.fail("Expected error was not raised!")
@with_dist_debug_levels(levels=["OFF", "INFO", "DETAIL"])
@require_backend({"gloo", "nccl"})
@require_backends_available({"gloo", "nccl"})
@skip_if_lt_x_gpu(2)
def test_ddp_multiple_nested_unused_params_error(self):
self._test_ddp_multiple_nested_unused_params_error(ignore_sparse=False)
@with_dist_debug_levels(levels=["OFF", "INFO", "DETAIL"])
@require_backend({"gloo", "nccl"})
@require_backends_available({"gloo", "nccl"})
@skip_if_lt_x_gpu(2)
def test_ddp_multiple_nested_unused_params_err_ignore_params(self):
self._test_ddp_multiple_nested_unused_params_error(ignore_sparse=True)
@sandcastle_skip_if(
BACKEND != "nccl" and BACKEND != "gloo",
"Only Nccl & Gloo backend support DistributedDataParallel",
)
@skip_if_lt_x_gpu(2)
def test_ddp_inference(self):
rank = self.rank
torch.cuda.set_device(rank)
model = Net().cuda()
local_model = copy.deepcopy(model)
model = torch.nn.parallel.DistributedDataParallel(
model,
device_ids=[rank],
)
syncbn_model = nn.SyncBatchNorm(
2, momentum=0.99, track_running_stats=False
).cuda()
local_syncbn_model = copy.deepcopy(syncbn_model)
syncbn_model = torch.nn.parallel.DistributedDataParallel(
syncbn_model, device_ids=[rank]
)
inp = torch.randn(10, 2, device=rank)
inp_syncbn = torch.randn(10, 2, 4, 4, device=rank)
tests = [
(model, local_model, inp),
(syncbn_model, local_syncbn_model, inp_syncbn),
]
for test in tests:
test_model, test_local_model, test_inp = test
if self.rank == 0:
test_model.eval()
test_local_model.eval()
for _ in range(6):
self.assertEqual(
test_model(test_inp), test_local_model(test_inp)
)
self._barrier(timeout=30)
@sandcastle_skip_if(
BACKEND != "nccl" and BACKEND != "gloo",
"Only Nccl & Gloo backend support DistributedDataParallel",
)
@skip_if_lt_x_gpu(2)
def test_ddp_sync_bn_training_vs_eval(self):
rank = self.rank
torch.cuda.set_device(rank)
model = nn.SyncBatchNorm(2, momentum=0.99, track_running_stats=False).cuda(
rank
)
model = torch.nn.parallel.DistributedDataParallel(model, device_ids=[rank])
with torch.autograd.profiler.profile() as prof:
for i in range(6):
inp = torch.randn(10, 2, 4, 4).cuda(rank)
out = model(inp)
loss = out.sum()
loss.backward()
if BACKEND == "nccl":
all_gather_calls = get_profiling_event("_all_gather_base", prof)
else:
all_gather_calls = get_profiling_event("all_gather", prof)
self.assertNotEqual([], all_gather_calls)
model_inference = model.module
if self.rank == 0:
model_inference.eval()
with torch.autograd.profiler.profile() as prof:
for i in range(6):
inp = torch.randn(10, 2, 4, 4).cuda(rank)
out = model_inference(inp)
loss = out.sum()
loss.backward()
if BACKEND == "nccl":
all_gather_calls = get_profiling_event("_all_gather_base", prof)
else:
all_gather_calls = get_profiling_event("all_gather", prof)
self.assertEqual([], all_gather_calls)
@skip_if_lt_x_gpu(2)
@sandcastle_skip_if(
BACKEND != "nccl" and BACKEND != "gloo",
"Only Nccl & Gloo backend support DistributedDataParallel",
)
def test_ddp_python_error_logged(self):
        # The following is one example where a python error is thrown after
        # the reducer is constructed.
model = TwoLinLayerNet().cuda(self.rank)
model = torch.nn.parallel.DistributedDataParallel(
model,
device_ids=[self.rank],
)
expected_err = "must be callable"
with self.assertRaisesRegex(TypeError, expected_err):
model.register_comm_hook({}, {})
verify_ddp_error_logged(model, expected_err)
@skip_if_lt_x_gpu(2)
@sandcastle_skip_if(
BACKEND != "nccl" and BACKEND != "gloo",
"Only Nccl & Gloo backend support DistributedDataParallel",
)
def test_ddp_static_graph_nested_types(self):
# Tests for static graph training when outputs are not just tensors
# but can be (nested) tuple, list, dict, etc.
rank = self.rank
torch.cuda.set_device(rank)
class NestedOutputModule(torch.nn.Module):
def __init__(self):
super().__init__()
self.lin = nn.Linear(100, 1, bias=False)
def forward(self, inp, output_type):
if output_type == "tuple":
return (
self.lin(inp),
(
self.lin(inp),
self.lin(inp),
),
)
elif output_type == "list":
return [
self.lin(inp),
[
self.lin(inp),
self.lin(inp),
],
]
elif output_type == "dict":
return {
"a": self.lin(inp),
"b": {
"c": self.lin(inp),
},
}
def get_loss(model_output):
loss = 0.0
if isinstance(model_output, torch.Tensor):
return model_output.sum()
elif isinstance(model_output, dict):
for value in model_output.values():
loss += get_loss(value)
elif isinstance(model_output, tuple) or isinstance(model_output, list):
for x in model_output:
loss += get_loss(x)
else:
raise ValueError(f"Unknown model output type {type(model_output)}")
return loss
model = NestedOutputModule().cuda(rank)
model_static_graph = copy.deepcopy(model)
model = torch.nn.parallel.DistributedDataParallel(
model,
device_ids=[rank],
)
model_static_graph = torch.nn.parallel.DistributedDataParallel(
            model_static_graph,
device_ids=[rank],
)
model_static_graph._set_static_graph()
inp = torch.randn(10, 100)
type_mapping = {
"list": list,
"tuple": tuple,
"dict": dict,
}
for output_type in type_mapping.keys():
for i in range(6):
out = model(inp, output_type=output_type)
loss = get_loss(out)
loss.backward()
self._model_step(model)
out_static = model_static_graph(inp, output_type=output_type)
self.assertTrue(isinstance(out_static, type_mapping[output_type]))
loss_static = get_loss(out_static)
loss_static.backward()
self._model_step(model_static_graph)
for (p, p_static) in zip(
model.parameters(), model_static_graph.parameters()
):
self.assertEqual(p, p_static)
@skip_if_lt_x_gpu(2)
@sandcastle_skip_if(
BACKEND != "nccl" and BACKEND != "gloo",
"Only Nccl & Gloo backend support DistributedDataParallel",
)
def test_detect_ddp_is_actually_static(self):
class ToyModel(nn.Module):
def __init__(self):
super(ToyModel, self).__init__()
self.net1 = nn.Linear(10, 10, bias=False)
self.net2 = nn.Linear(10, 10)
def forward(self, x, find_unused, dynamic):
if find_unused:
if dynamic:
return self.net2(self.net1(x))
else:
return self.net2(x)
else:
return self.net2(self.net1(x))
        # The set of unused parameters doesn't change across iterations.
torch.cuda.set_device(self.rank)
model = ToyModel().cuda()
for find_unused in [True, False]:
ddp = torch.nn.parallel.DistributedDataParallel(
model,
device_ids=[self.rank],
find_unused_parameters=find_unused,
)
inp = torch.randn(1, 10, device="cuda")
for _ in range(6):
out = ddp(inp, find_unused=find_unused, dynamic=False)
loss = out.sum()
loss.backward()
self.assertTrue(ddp.reducer._ddp_graph_static())
ddp = torch.nn.parallel.DistributedDataParallel(
model,
device_ids=[self.rank],
find_unused_parameters=True,
)
inp = torch.randn(1, 10, device="cuda")
for i in range(6):
out = ddp(inp, find_unused=True, dynamic=i % 2 == 0)
loss = out.sum()
loss.backward()
self.assertFalse(ddp.reducer._ddp_graph_static())
def _test_ddp_new_tensor_in_fwd(self, static_graph):
class MyModel(nn.Module):
def __init__(self):
super().__init__()
self.fc1 = nn.Linear(10, 10, bias=False)
self.fc2 = nn.Linear(10, 10, bias=False)
def __init_opt(self):
param = next(self.parameters())
opt = torch.randn(1, 10, device=param.device)
return opt
def forward(self, x, opt_1, opt_2, opt_nested):
x = F.relu(self.fc1(x))
x = self.fc2(x)
if opt_1 is None:
opt_1 = self.__init_opt()
if opt_2 is None:
opt_2 = self.__init_opt()
if opt_nested is None or not torch.is_tensor(opt_nested):
opt_nested = self.__init_opt()
return x, opt_1, opt_2, {"tensor": opt_nested}
model = MyModel().to(self.rank)
for find_unused in [True, False]:
ddp = DistributedDataParallel(
model,
device_ids=[self.rank],
output_device=self.rank,
broadcast_buffers=False,
find_unused_parameters=find_unused,
)
if static_graph:
ddp._set_static_graph()
opt = [None for _ in range(3)]
for i in range(2):
ddp.zero_grad()
x = torch.randn(1, 10, device=self.rank)
out, opt[0], opt[1], opt[2] = ddp(
x, opt_1=opt[0], opt_2=opt[1], opt_nested=opt[2]
)
for i in range(len(opt)):
if torch.is_tensor(opt[i]):
self.assertEqual(opt[i].grad_fn, None)
else:
self.assertEqual(opt[i]["tensor"].grad_fn, None)
out.mean().backward()
@skip_if_lt_x_gpu(2)
@sandcastle_skip_if(
BACKEND != "nccl" and BACKEND != "gloo",
"Only Nccl & Gloo backend support DistributedDataParallel",
)
def test_ddp_new_tensor_in_fwd(self):
return self._test_ddp_new_tensor_in_fwd(static_graph=False)
@skip_if_lt_x_gpu(2)
@sandcastle_skip_if(
BACKEND != "nccl" and BACKEND != "gloo",
"Only Nccl & Gloo backend support DistributedDataParallel",
)
def test_ddp_new_tensor_in_fwd_static_graph(self):
return self._test_ddp_new_tensor_in_fwd(static_graph=True)
| true | true |
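The `test_detect_ddp_is_actually_static` case above exercises DDP's internal `_ddp_graph_static()` flag. A toy, pure-Python sketch of the underlying bookkeeping idea — not PyTorch's actual reducer; the class and method names below are illustrative:

```python
class StaticGraphDetector:
    """Toy model of the check: a graph counts as 'static' when the set of
    parameters that received gradients is identical in every iteration."""

    def __init__(self):
        self.seen = None      # parameter set observed in the previous iteration
        self.static = True

    def record(self, used_params):
        used = frozenset(used_params)
        if self.seen is not None and used != self.seen:
            self.static = False
        self.seen = used


static_run = StaticGraphDetector()
for i in range(6):
    static_run.record({"net1.weight", "net2.weight"})   # same set every time

dynamic_run = StaticGraphDetector()
for i in range(6):
    # the used-parameter set alternates between iterations, as in the dynamic test
    dynamic_run.record({"net2.weight"} if i % 2 else {"net1.weight", "net2.weight"})

print(static_run.static, dynamic_run.static)  # True False
```

This mirrors the structure of the test: six identical iterations leave the flag set, while alternating unused parameters clear it.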
f73e0b4fe771dc6c907d03f59429e849d5a2c875 | 744 | py | Python | hard-gists/fbadce83877a36622720/snippet.py | jjhenkel/dockerizeme | eaa4fe5366f6b9adf74399eab01c712cacaeb279 | [
"Apache-2.0"
] | 21 | 2019-07-08T08:26:45.000Z | 2022-01-24T23:53:25.000Z | hard-gists/fbadce83877a36622720/snippet.py | jjhenkel/dockerizeme | eaa4fe5366f6b9adf74399eab01c712cacaeb279 | [
"Apache-2.0"
] | 5 | 2019-06-15T14:47:47.000Z | 2022-02-26T05:02:56.000Z | hard-gists/fbadce83877a36622720/snippet.py | jjhenkel/dockerizeme | eaa4fe5366f6b9adf74399eab01c712cacaeb279 | [
"Apache-2.0"
] | 17 | 2019-05-16T03:50:34.000Z | 2021-01-14T14:35:12.000Z | # for http://blender.stackexchange.com/questions/32787/example-of-creating-and-setting-a-cycles-material-node-with-the-python-api
import bpy
# get the material
mat = bpy.data.materials['Material']
# get the nodes
nodes = mat.node_tree.nodes
# clear all nodes to start clean
for node in nodes:
nodes.remove(node)
# create anisotropic BSDF node
node_ani = nodes.new(type='ShaderNodeBsdfAnisotropic')
node_ani.inputs[0].default_value = (0,1,0,1) # green RGBA
node_ani.inputs[1].default_value = 5.0 # roughness
node_ani.location = 0,0
# create output node
node_output = nodes.new(type='ShaderNodeOutputMaterial')
node_output.location = 400,0
# link nodes
links = mat.node_tree.links
link = links.new(node_ani.outputs[0], node_output.inputs[0]) | 28.615385 | 129 | 0.760753 |
import bpy
mat = bpy.data.materials['Material']
nodes = mat.node_tree.nodes
for node in nodes:
nodes.remove(node)
node_ani = nodes.new(type='ShaderNodeBsdfAnisotropic')
node_ani.inputs[0].default_value = (0,1,0,1)
node_ani.inputs[1].default_value = 5.0
node_ani.location = 0,0
node_output = nodes.new(type='ShaderNodeOutputMaterial')
node_output.location = 400,0
links = mat.node_tree.links
link = links.new(node_ani.outputs[0], node_output.inputs[0]) | true | true |
f73e0f0cf3bc12365339b9389c9861feac0c5c75 | 14,741 | py | Python | depth_and_motion_learning/consistency_losses.py | egonrian/google-research | 8177adbe9ca0d7e5a9463b54581fe6dd27be0974 | [
"Apache-2.0"
] | 3 | 2021-01-18T04:46:49.000Z | 2021-03-05T09:21:40.000Z | depth_and_motion_learning/consistency_losses.py | Alfaxad/google-research | 2c0043ecd507e75e2df9973a3015daf9253e1467 | [
"Apache-2.0"
] | 7 | 2021-11-10T19:44:38.000Z | 2022-02-10T06:48:39.000Z | depth_and_motion_learning/consistency_losses.py | Alfaxad/google-research | 2c0043ecd507e75e2df9973a3015daf9253e1467 | [
"Apache-2.0"
] | 4 | 2021-02-08T10:25:45.000Z | 2021-04-17T14:46:26.000Z | # coding=utf-8
# Copyright 2020 The Google Research Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Loss functions that impose RGB and depth motion-consistency across frames."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import tensorflow.compat.v1 as tf
from depth_and_motion_learning import resampler
from depth_and_motion_learning import transform_utils
def rgbd_consistency_loss(frame1transformed_depth,
frame1rgb,
frame2depth,
frame2rgb,
validity_mask=None):
"""Computes a loss that penalizes RGBD inconsistencies between frames.
This function computes 3 losses that penalize inconsistencies between two
frames: depth, RGB, and structural similarity. It IS NOT SYMMETRIC with
respect to both frames. In particular, to address occlusions, it only
penalizes depth and RGB inconsistencies at pixels where frame1 is closer to
the camera than frame2 (Why? see https://arxiv.org/abs/1904.04998). Therefore
the intended usage pattern is running it twice - second time with the two
frames swapped.
Args:
frame1transformed_depth: A transform_depth_map.TransformedDepthMap object
representing the depth map of frame 1 after it was motion-transformed to
frame 2, a motion transform that accounts for all camera and object motion
that occurred between frame1 and frame2. The tensors inside
frame1transformed_depth are of shape [B, H, W].
frame1rgb: A tf.Tensor of shape [B, H, W, C] containing the RGB image at
frame1.
frame2depth: A tf.Tensor of shape [B, H, W] containing the depth map at
frame2.
frame2rgb: A tf.Tensor of shape [B, H, W, C] containing the RGB image at
frame2.
validity_mask: a tf.Tensor of a floating point type and a shape of
[B, H, W, 1] containing a validity mask.
Returns:
    A dictionary from string to tf.Tensor, with the following entries:
depth_error: A tf scalar, the depth mismatch error between the two frames.
rgb_error: A tf scalar, the rgb mismatch error between the two frames.
      ssim_error: A tf scalar, the structural similarity mismatch error between
the two frames.
depth_proximity_weight: A tf.Tensor of shape [B, H, W], representing a
function that peaks (at 1.0) for pixels where there is depth consistency
between the two frames, and is small otherwise.
frame1_closer_to_camera: A tf.Tensor of shape [B, H, W, 1], a mask that is
1.0 when the depth map of frame 1 has smaller depth than frame 2.
"""
frame2rgbd = tf.concat(
[frame2rgb, tf.expand_dims((frame2depth), -1)], axis=-1)
frame2rgbd_resampled = resampler.resampler_with_unstacked_warp(
frame2rgbd,
frame1transformed_depth.pixel_x,
frame1transformed_depth.pixel_y,
safe=False)
frame2rgb_resampled, frame2depth_resampled = tf.split(
frame2rgbd_resampled, [3, 1], axis=-1)
frame2depth_resampled = tf.squeeze(frame2depth_resampled, axis=-1)
# f1td.depth is the predicted depth at [pixel_y, pixel_x] for frame2. Now we
# generate (by interpolation) the actual depth values for frame2's depth, at
# the same locations, so that we can compare the two depths.
# We penalize inconsistencies between the two frames' depth maps only if the
# transformed depth map (of frame 1) falls closer to the camera than the
# actual depth map (of frame 2). This is intended for avoiding penalizing
# points that become occluded because of the transform.
# So what about depth inconsistencies where frame1's depth map is FARTHER from
# the camera than frame2's? These will be handled when we swap the roles of
# frame 1 and 2 (more in https://arxiv.org/abs/1904.04998).
frame1_closer_to_camera = tf.to_float(
tf.logical_and(
frame1transformed_depth.mask,
tf.less(frame1transformed_depth.depth, frame2depth_resampled)))
frames_l1_diff = tf.abs(frame2depth_resampled - frame1transformed_depth.depth)
if validity_mask is not None:
frames_l1_diff = frames_l1_diff * tf.squeeze(validity_mask, axis=[3])
depth_error = tf.reduce_mean(
tf.math.multiply_no_nan(frames_l1_diff, frame1_closer_to_camera))
frames_rgb_l1_diff = tf.abs(frame2rgb_resampled - frame1rgb)
if validity_mask is not None:
frames_rgb_l1_diff = frames_rgb_l1_diff * validity_mask
rgb_error = tf.math.multiply_no_nan(
frames_rgb_l1_diff, tf.expand_dims(frame1_closer_to_camera, -1))
rgb_error = tf.reduce_mean(rgb_error)
# We generate a weight function that peaks (at 1.0) for pixels where when the
# depth difference is less than its standard deviation across the frame, and
# fall off to zero otherwise. This function is used later for weighing the
# structural similarity loss term. We only want to demand structural
# similarity for surfaces that are close to one another in the two frames.
depth_error_second_moment = _weighted_average(
tf.square(frame2depth_resampled - frame1transformed_depth.depth),
frame1_closer_to_camera) + 1e-4
depth_proximity_weight = tf.math.multiply_no_nan(
depth_error_second_moment /
(tf.square(frame2depth_resampled - frame1transformed_depth.depth) +
depth_error_second_moment), tf.to_float(frame1transformed_depth.mask))
if validity_mask is not None:
depth_proximity_weight = depth_proximity_weight * tf.squeeze(
validity_mask, axis=[3])
# If we don't stop the gradient training won't start. The reason is presumably
# that then the network can push the depths apart instead of seeking RGB
# consistency.
depth_proximity_weight = tf.stop_gradient(depth_proximity_weight)
ssim_error, avg_weight = weighted_ssim(
frame2rgb_resampled,
frame1rgb,
depth_proximity_weight,
c1=float('inf'), # These values of c1 and c2 seemed to work better than
c2=9e-6) # defaults. TODO(gariel): Make them parameters rather
# than hard coded.
ssim_error_mean = tf.reduce_mean(
tf.math.multiply_no_nan(ssim_error, avg_weight))
endpoints = {
'depth_error': depth_error,
'rgb_error': rgb_error,
'ssim_error': ssim_error_mean,
'depth_proximity_weight': depth_proximity_weight,
'frame1_closer_to_camera': frame1_closer_to_camera
}
return endpoints
def motion_field_consistency_loss(frame1transformed_pixelx,
frame1transformed_pixely, mask, rotation1,
translation1, rotation2, translation2):
"""Computes a cycle consistency loss between two motion maps.
Given two rotation and translation maps (of two frames), and a mapping from
one frame to the other, this function assists in imposing that the fields at
frame 1 represent the opposite motion of the ones in frame 2.
In other words: At any given pixel on frame 1, if we apply the translation and
rotation designated at that pixel, we land on some pixel in frame 2, and if we
apply the translation and rotation designated there, we land back at the
original pixel at frame 1.
Args:
frame1transformed_pixelx: A tf.Tensor of shape [B, H, W] representing the
motion-transformed x-location of each pixel in frame 1.
frame1transformed_pixely: A tf.Tensor of shape [B, H, W] representing the
motion-transformed y-location of each pixel in frame 1.
mask: A tf.Tensor of shape [B, H, W, 2] expressing the weight of each pixel
in the calculation of the consistency loss.
rotation1: A tf.Tensor of shape [B, 3] representing rotation angles.
translation1: A tf.Tensor of shape [B, H, W, 3] representing translation
vectors.
rotation2: A tf.Tensor of shape [B, 3] representing rotation angles.
translation2: A tf.Tensor of shape [B, H, W, 3] representing translation
vectors.
Returns:
    A dictionary from string to tf.Tensor, with the following entries:
rotation_error: A tf scalar, the rotation consistency error.
translation_error: A tf scalar, the translation consistency error.
"""
translation2resampled = resampler.resampler_with_unstacked_warp(
translation2,
tf.stop_gradient(frame1transformed_pixelx),
tf.stop_gradient(frame1transformed_pixely),
safe=False)
rotation1field = tf.broadcast_to(
_expand_dims_twice(rotation1, -2), tf.shape(translation1))
rotation2field = tf.broadcast_to(
_expand_dims_twice(rotation2, -2), tf.shape(translation2))
rotation1matrix = transform_utils.matrix_from_angles(rotation1field)
rotation2matrix = transform_utils.matrix_from_angles(rotation2field)
rot_unit, trans_zero = transform_utils.combine(rotation2matrix,
translation2resampled,
rotation1matrix, translation1)
eye = tf.eye(3, batch_shape=tf.shape(rot_unit)[:-2])
# We normalize the product of rotations by the product of their norms, to make
# the loss agnostic of their magnitudes, only wanting them to be opposite in
# directions. Otherwise the loss has a tendency to drive the rotations to
# zero.
rot_error = tf.reduce_mean(tf.square(rot_unit - eye), axis=(3, 4))
rot1_scale = tf.reduce_mean(tf.square(rotation1matrix - eye), axis=(3, 4))
rot2_scale = tf.reduce_mean(tf.square(rotation2matrix - eye), axis=(3, 4))
rot_error /= (1e-24 + rot1_scale + rot2_scale)
rotation_error = tf.reduce_mean(rot_error)
def norm(x):
return tf.reduce_sum(tf.square(x), axis=-1)
# Here again, we normalize by the magnitudes, for the same reason.
translation_error = tf.reduce_mean(tf.math.multiply_no_nan(
mask, norm(trans_zero) /
(1e-24 + norm(translation1) + norm(translation2resampled))))
return {
'rotation_error': rotation_error,
'translation_error': translation_error
}
def rgbd_and_motion_consistency_loss(frame1transformed_depth,
frame1rgb,
frame2depth,
frame2rgb,
rotation1,
translation1,
rotation2,
translation2,
validity_mask=None):
"""A helper that bundles rgbd and motion consistency losses together."""
endpoints = rgbd_consistency_loss(
frame1transformed_depth,
frame1rgb,
frame2depth,
frame2rgb,
validity_mask=validity_mask)
# We calculate the loss only for when frame1transformed_depth is closer to the
# camera than frame2 (occlusion-awareness). See explanation in
# rgbd_consistency_loss above.
mask = endpoints['frame1_closer_to_camera']
if validity_mask is not None:
mask *= tf.squeeze(validity_mask, axis=3)
endpoints.update(
motion_field_consistency_loss(frame1transformed_depth.pixel_x,
frame1transformed_depth.pixel_y, mask,
rotation1, translation1, rotation2,
translation2))
return endpoints
def weighted_ssim(x, y, weight, c1=0.01**2, c2=0.03**2, weight_epsilon=0.01):
"""Computes a weighted structured image similarity measure.
See https://en.wikipedia.org/wiki/Structural_similarity#Algorithm. The only
difference here is that not all pixels are weighted equally when calculating
the moments - they are weighted by a weight function.
Args:
x: A tf.Tensor representing a batch of images, of shape [B, H, W, C].
y: A tf.Tensor representing a batch of images, of shape [B, H, W, C].
weight: A tf.Tensor of shape [B, H, W], representing the weight of each
pixel in both images when we come to calculate moments (means and
correlations).
c1: A floating point number, regularizes division by zero of the means.
c2: A floating point number, regularizes division by zero of the second
moments.
weight_epsilon: A floating point number, used to regularize division by the
weight.
Returns:
    A tuple of two tf.Tensors. First, of shape [B, H-2, W-2, C], is the scalar
    similarity loss per pixel per channel, and the second, of shape
    [B, H-2, W-2, 1], is the average pooled `weight`. It is needed so that we
know how much to weigh each pixel in the first tensor. For example, if
    `weight` was very small in some area of the images, the first tensor will
still assign a loss to these pixels, but we shouldn't take the result too
seriously.
"""
if c1 == float('inf') and c2 == float('inf'):
raise ValueError('Both c1 and c2 are infinite, SSIM loss is zero. This is '
'likely unintended.')
weight = tf.expand_dims(weight, -1)
average_pooled_weight = _avg_pool3x3(weight)
weight_plus_epsilon = weight + weight_epsilon
inverse_average_pooled_weight = 1.0 / (average_pooled_weight + weight_epsilon)
def weighted_avg_pool3x3(z):
    weighted_avg = _avg_pool3x3(z * weight_plus_epsilon)
    return weighted_avg * inverse_average_pooled_weight
mu_x = weighted_avg_pool3x3(x)
mu_y = weighted_avg_pool3x3(y)
sigma_x = weighted_avg_pool3x3(x**2) - mu_x**2
sigma_y = weighted_avg_pool3x3(y**2) - mu_y**2
sigma_xy = weighted_avg_pool3x3(x * y) - mu_x * mu_y
if c1 == float('inf'):
ssim_n = (2 * sigma_xy + c2)
ssim_d = (sigma_x + sigma_y + c2)
elif c2 == float('inf'):
ssim_n = 2 * mu_x * mu_y + c1
ssim_d = mu_x**2 + mu_y**2 + c1
else:
ssim_n = (2 * mu_x * mu_y + c1) * (2 * sigma_xy + c2)
ssim_d = (mu_x**2 + mu_y**2 + c1) * (sigma_x + sigma_y + c2)
result = ssim_n / ssim_d
return tf.clip_by_value((1 - result) / 2, 0, 1), average_pooled_weight
def _avg_pool3x3(x):
return tf.nn.avg_pool(x, [1, 3, 3, 1], [1, 1, 1, 1], 'VALID')
def _weighted_average(x, w, epsilon=1.0):
weighted_sum = tf.reduce_sum(x * w, axis=(1, 2), keepdims=True)
sum_of_weights = tf.reduce_sum(w, axis=(1, 2), keepdims=True)
return weighted_sum / (sum_of_weights + epsilon)
def _expand_dims_twice(x, dim):
return tf.expand_dims(tf.expand_dims(x, dim), dim)
| 45.079511 | 80 | 0.703412 |
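The `depth_proximity_weight` in `rgbd_consistency_loss` above implements w = s² / (Δd² + s²), where s² is the weighted second moment of the depth error. A quick pure-Python check of the docstring's claim that it peaks at 1.0 where the two depth maps agree (the numbers below are made up for illustration):

```python
s2 = 0.01  # stand-in for depth_error_second_moment (which includes a 1e-4 floor)
depth_diff_sq = [0.0, 0.01, 1.0]          # squared depth disagreement per pixel
weights = [s2 / (d + s2) for d in depth_diff_sq]
print(weights)  # [1.0, 0.5, ~0.0099]: 1.0 where depths agree, falls toward 0 as they diverge
```

The weight equals exactly 1.0 at zero disagreement, drops to 0.5 when the squared difference equals the second moment, and vanishes for large mismatches — which is why it can gate the SSIM term to surfaces that are close in both frames.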
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import tensorflow.compat.v1 as tf
from depth_and_motion_learning import resampler
from depth_and_motion_learning import transform_utils
def rgbd_consistency_loss(frame1transformed_depth,
frame1rgb,
frame2depth,
frame2rgb,
validity_mask=None):
frame2rgbd = tf.concat(
[frame2rgb, tf.expand_dims((frame2depth), -1)], axis=-1)
frame2rgbd_resampled = resampler.resampler_with_unstacked_warp(
frame2rgbd,
frame1transformed_depth.pixel_x,
frame1transformed_depth.pixel_y,
safe=False)
frame2rgb_resampled, frame2depth_resampled = tf.split(
frame2rgbd_resampled, [3, 1], axis=-1)
frame2depth_resampled = tf.squeeze(frame2depth_resampled, axis=-1)
# the same locations, so that we can compare the two depths.
# We penalize inconsistencies between the two frames' depth maps only if the
# the camera than frame2's? These will be handled when we swap the roles of
frame1_closer_to_camera = tf.to_float(
tf.logical_and(
frame1transformed_depth.mask,
tf.less(frame1transformed_depth.depth, frame2depth_resampled)))
frames_l1_diff = tf.abs(frame2depth_resampled - frame1transformed_depth.depth)
if validity_mask is not None:
frames_l1_diff = frames_l1_diff * tf.squeeze(validity_mask, axis=[3])
depth_error = tf.reduce_mean(
tf.math.multiply_no_nan(frames_l1_diff, frame1_closer_to_camera))
frames_rgb_l1_diff = tf.abs(frame2rgb_resampled - frame1rgb)
if validity_mask is not None:
frames_rgb_l1_diff = frames_rgb_l1_diff * validity_mask
rgb_error = tf.math.multiply_no_nan(
frames_rgb_l1_diff, tf.expand_dims(frame1_closer_to_camera, -1))
rgb_error = tf.reduce_mean(rgb_error)
depth_error_second_moment = _weighted_average(
tf.square(frame2depth_resampled - frame1transformed_depth.depth),
frame1_closer_to_camera) + 1e-4
depth_proximity_weight = tf.math.multiply_no_nan(
depth_error_second_moment /
(tf.square(frame2depth_resampled - frame1transformed_depth.depth) +
depth_error_second_moment), tf.to_float(frame1transformed_depth.mask))
if validity_mask is not None:
depth_proximity_weight = depth_proximity_weight * tf.squeeze(
validity_mask, axis=[3])
depth_proximity_weight = tf.stop_gradient(depth_proximity_weight)
ssim_error, avg_weight = weighted_ssim(
frame2rgb_resampled,
frame1rgb,
depth_proximity_weight,
c1=float('inf'),
c2=9e-6)
ssim_error_mean = tf.reduce_mean(
tf.math.multiply_no_nan(ssim_error, avg_weight))
endpoints = {
'depth_error': depth_error,
'rgb_error': rgb_error,
'ssim_error': ssim_error_mean,
'depth_proximity_weight': depth_proximity_weight,
'frame1_closer_to_camera': frame1_closer_to_camera
}
return endpoints
def motion_field_consistency_loss(frame1transformed_pixelx,
frame1transformed_pixely, mask, rotation1,
translation1, rotation2, translation2):
translation2resampled = resampler.resampler_with_unstacked_warp(
translation2,
tf.stop_gradient(frame1transformed_pixelx),
tf.stop_gradient(frame1transformed_pixely),
safe=False)
rotation1field = tf.broadcast_to(
_expand_dims_twice(rotation1, -2), tf.shape(translation1))
rotation2field = tf.broadcast_to(
_expand_dims_twice(rotation2, -2), tf.shape(translation2))
rotation1matrix = transform_utils.matrix_from_angles(rotation1field)
rotation2matrix = transform_utils.matrix_from_angles(rotation2field)
rot_unit, trans_zero = transform_utils.combine(rotation2matrix,
translation2resampled,
rotation1matrix, translation1)
eye = tf.eye(3, batch_shape=tf.shape(rot_unit)[:-2])
rot_error = tf.reduce_mean(tf.square(rot_unit - eye), axis=(3, 4))
rot1_scale = tf.reduce_mean(tf.square(rotation1matrix - eye), axis=(3, 4))
rot2_scale = tf.reduce_mean(tf.square(rotation2matrix - eye), axis=(3, 4))
rot_error /= (1e-24 + rot1_scale + rot2_scale)
rotation_error = tf.reduce_mean(rot_error)
def norm(x):
return tf.reduce_sum(tf.square(x), axis=-1)
translation_error = tf.reduce_mean(tf.math.multiply_no_nan(
mask, norm(trans_zero) /
(1e-24 + norm(translation1) + norm(translation2resampled))))
return {
'rotation_error': rotation_error,
'translation_error': translation_error
}
def rgbd_and_motion_consistency_loss(frame1transformed_depth,
frame1rgb,
frame2depth,
frame2rgb,
rotation1,
translation1,
rotation2,
translation2,
validity_mask=None):
endpoints = rgbd_consistency_loss(
frame1transformed_depth,
frame1rgb,
frame2depth,
frame2rgb,
validity_mask=validity_mask)
mask = endpoints['frame1_closer_to_camera']
if validity_mask is not None:
mask *= tf.squeeze(validity_mask, axis=3)
endpoints.update(
motion_field_consistency_loss(frame1transformed_depth.pixel_x,
frame1transformed_depth.pixel_y, mask,
rotation1, translation1, rotation2,
translation2))
return endpoints
def weighted_ssim(x, y, weight, c1=0.01**2, c2=0.03**2, weight_epsilon=0.01):
if c1 == float('inf') and c2 == float('inf'):
raise ValueError('Both c1 and c2 are infinite, SSIM loss is zero. This is '
'likely unintended.')
weight = tf.expand_dims(weight, -1)
average_pooled_weight = _avg_pool3x3(weight)
weight_plus_epsilon = weight + weight_epsilon
inverse_average_pooled_weight = 1.0 / (average_pooled_weight + weight_epsilon)
def weighted_avg_pool3x3(z):
    weighted_avg = _avg_pool3x3(z * weight_plus_epsilon)
    return weighted_avg * inverse_average_pooled_weight
mu_x = weighted_avg_pool3x3(x)
mu_y = weighted_avg_pool3x3(y)
sigma_x = weighted_avg_pool3x3(x**2) - mu_x**2
sigma_y = weighted_avg_pool3x3(y**2) - mu_y**2
sigma_xy = weighted_avg_pool3x3(x * y) - mu_x * mu_y
if c1 == float('inf'):
ssim_n = (2 * sigma_xy + c2)
ssim_d = (sigma_x + sigma_y + c2)
elif c2 == float('inf'):
ssim_n = 2 * mu_x * mu_y + c1
ssim_d = mu_x**2 + mu_y**2 + c1
else:
ssim_n = (2 * mu_x * mu_y + c1) * (2 * sigma_xy + c2)
ssim_d = (mu_x**2 + mu_y**2 + c1) * (sigma_x + sigma_y + c2)
result = ssim_n / ssim_d
return tf.clip_by_value((1 - result) / 2, 0, 1), average_pooled_weight
def _avg_pool3x3(x):
return tf.nn.avg_pool(x, [1, 3, 3, 1], [1, 1, 1, 1], 'VALID')
def _weighted_average(x, w, epsilon=1.0):
weighted_sum = tf.reduce_sum(x * w, axis=(1, 2), keepdims=True)
sum_of_weights = tf.reduce_sum(w, axis=(1, 2), keepdims=True)
return weighted_sum / (sum_of_weights + epsilon)
def _expand_dims_twice(x, dim):
return tf.expand_dims(tf.expand_dims(x, dim), dim)
| true | true |
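The translation term of `motion_field_consistency_loss` above normalizes the residual |t₁ + t₂|² by |t₁|² + |t₂|², so it penalizes direction rather than magnitude. A toy 1-D check (with made-up numbers) that perfectly inverse motions produce zero error:

```python
t1 = [0.5, -0.2, 0.1]              # frame-1 translations at three pixels
t2_resampled = [-0.5, 0.2, -0.1]   # frame-2 translations sampled at the mapped pixels

num = sum((a + b) ** 2 for a, b in zip(t1, t2_resampled))
den = 1e-24 + sum(a * a for a in t1) + sum(b * b for b in t2_resampled)
translation_error = num / den
print(translation_error)  # 0.0: the two motion fields cancel exactly
```

Scaling both fields by the same factor leaves the error unchanged, which is the point of dividing by the magnitudes.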
f73e1059be87d93b7cb1181cf8bdcdc9331d220d | 18,312 | py | Python | sklearn/datasets/_twenty_newsgroups.py | lacrosse91/scikit-learn | 2325b19a86bd5b6e4b0bfb4eff4ee46a3343cf65 | [
"BSD-3-Clause"
] | 27 | 2015-01-22T22:30:09.000Z | 2022-02-15T07:33:06.000Z | sklearn/datasets/_twenty_newsgroups.py | ryanyu9/scikit-learn | 2a67d88258264eb2b6dfad221be8f8d61684dcba | [
"BSD-3-Clause"
] | 6 | 2021-07-05T15:38:00.000Z | 2022-02-27T13:35:19.000Z | sklearn/datasets/_twenty_newsgroups.py | ryanyu9/scikit-learn | 2a67d88258264eb2b6dfad221be8f8d61684dcba | [
"BSD-3-Clause"
] | 25 | 2015-07-30T13:47:25.000Z | 2021-08-03T07:48:38.000Z | """Caching loader for the 20 newsgroups text classification dataset.
The description of the dataset is available on the official website at:
http://people.csail.mit.edu/jrennie/20Newsgroups/
Quoting the introduction:
The 20 Newsgroups data set is a collection of approximately 20,000
newsgroup documents, partitioned (nearly) evenly across 20 different
newsgroups. To the best of my knowledge, it was originally collected
by Ken Lang, probably for his Newsweeder: Learning to filter netnews
paper, though he does not explicitly mention this collection. The 20
newsgroups collection has become a popular data set for experiments
in text applications of machine learning techniques, such as text
classification and text clustering.
This dataset loader will download the recommended "by date" variant of the
dataset and which features a point in time split between the train and
test sets. The compressed dataset size is around 14 Mb compressed. Once
uncompressed the train set is 52 MB and the test set is 34 MB.
"""
# Copyright (c) 2011 Olivier Grisel <olivier.grisel@ensta.org>
# License: BSD 3 clause
import os
from os.path import dirname, join
import logging
import tarfile
import pickle
import shutil
import re
import codecs
import numpy as np
import scipy.sparse as sp
import joblib
from . import get_data_home
from . import load_files
from ._base import _convert_data_dataframe
from ._base import _pkl_filepath
from ._base import _fetch_remote
from ._base import RemoteFileMetadata
from ..feature_extraction.text import CountVectorizer
from .. import preprocessing
from ..utils import check_random_state, Bunch
logger = logging.getLogger(__name__)
# The original data can be found at:
# https://people.csail.mit.edu/jrennie/20Newsgroups/20news-bydate.tar.gz
ARCHIVE = RemoteFileMetadata(
filename="20news-bydate.tar.gz",
url="https://ndownloader.figshare.com/files/5975967",
checksum=("8f1b2514ca22a5ade8fbb9cfa5727df9" "5fa587f4c87b786e15c759fa66d95610"),
)
CACHE_NAME = "20news-bydate.pkz"
TRAIN_FOLDER = "20news-bydate-train"
TEST_FOLDER = "20news-bydate-test"
def _download_20newsgroups(target_dir, cache_path):
"""Download the 20 newsgroups data and stored it as a zipped pickle."""
train_path = os.path.join(target_dir, TRAIN_FOLDER)
test_path = os.path.join(target_dir, TEST_FOLDER)
if not os.path.exists(target_dir):
os.makedirs(target_dir)
logger.info("Downloading dataset from %s (14 MB)", ARCHIVE.url)
archive_path = _fetch_remote(ARCHIVE, dirname=target_dir)
logger.debug("Decompressing %s", archive_path)
tarfile.open(archive_path, "r:gz").extractall(path=target_dir)
os.remove(archive_path)
# Store a zipped pickle
cache = dict(
train=load_files(train_path, encoding="latin1"),
test=load_files(test_path, encoding="latin1"),
)
compressed_content = codecs.encode(pickle.dumps(cache), "zlib_codec")
with open(cache_path, "wb") as f:
f.write(compressed_content)
shutil.rmtree(target_dir)
return cache
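`_download_20newsgroups` above caches both splits as a zlib-compressed pickle via the bytes-to-bytes `zlib_codec`. The round trip it relies on, shown in isolation with a dummy cache dict:

```python
import codecs
import pickle

cache = {"train": ["first document"], "test": ["second document"]}

# store: pickle then zlib-compress, as the loader writes the CACHE_NAME file
compressed = codecs.encode(pickle.dumps(cache), "zlib_codec")

# load: decompress then unpickle, as fetch_20newsgroups reads the cache back
restored = pickle.loads(codecs.decode(compressed, "zlib_codec"))
print(restored == cache)  # True
```

Using `codecs.encode`/`codecs.decode` (rather than `str.encode`) is what makes the bytes-to-bytes codec legal under Python 3.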
def strip_newsgroup_header(text):
"""
Given text in "news" format, strip the headers, by removing everything
before the first blank line.
Parameters
----------
text : str
        The text from which to remove the headers.
"""
_before, _blankline, after = text.partition("\n\n")
return after
_QUOTE_RE = re.compile(
r"(writes in|writes:|wrote:|says:|said:" r"|^In article|^Quoted from|^\||^>)"
)
def strip_newsgroup_quoting(text):
"""
Given text in "news" format, strip lines beginning with the quote
characters > or |, plus lines that often introduce a quoted section
    (for example, because they contain the string 'writes:').
Parameters
----------
text : str
        The text from which to remove the quoted lines.
"""
good_lines = [line for line in text.split("\n") if not _QUOTE_RE.search(line)]
return "\n".join(good_lines)
def strip_newsgroup_footer(text):
"""
Given text in "news" format, attempt to remove a signature block.
As a rough heuristic, we assume that signatures are set apart by either
a blank line or a line made of hyphens, and that it is the last such line
in the file (disregarding blank lines at the end).
Parameters
----------
text : str
The text from which to remove the signature block.
"""
lines = text.strip().split("\n")
for line_num in range(len(lines) - 1, -1, -1):
line = lines[line_num]
if line.strip().strip("-") == "":
break
if line_num > 0:
return "\n".join(lines[:line_num])
else:
return text
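# A self-contained sketch of how the header and quote filters compose on a
# small made-up post (logic copied from the functions above; the sample text
# is invented for illustration):

```python
import re

# Quote heuristic, as defined above.
_QUOTE_RE = re.compile(
    r"(writes in|writes:|wrote:|says:|said:|^In article|^Quoted from|^\||^>)"
)

def strip_header(text):
    # Headers are everything before the first blank line.
    return text.partition("\n\n")[2]

def strip_quoting(text):
    # Keep only lines that do not match the quote heuristic.
    return "\n".join(
        line for line in text.split("\n") if not _QUOTE_RE.search(line)
    )

post = (
    "From: someone@example.com\n"
    "Subject: hello\n"
    "\n"
    "> a quoted line\n"
    "the actual body\n"
)
print(strip_quoting(strip_header(post)))  # the actual body
```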
def fetch_20newsgroups(
*,
data_home=None,
subset="train",
categories=None,
shuffle=True,
random_state=42,
remove=(),
download_if_missing=True,
return_X_y=False,
):
"""Load the filenames and data from the 20 newsgroups dataset \
(classification).
Download it if necessary.
================= ==========
Classes 20
Samples total 18846
Dimensionality 1
Features text
================= ==========
Read more in the :ref:`User Guide <20newsgroups_dataset>`.
Parameters
----------
data_home : str, default=None
Specify a download and cache folder for the datasets. If None,
all scikit-learn data is stored in '~/scikit_learn_data' subfolders.
subset : {'train', 'test', 'all'}, default='train'
Select the dataset to load: 'train' for the training set, 'test'
for the test set, 'all' for both, with shuffled ordering.
categories : array-like, dtype=str or unicode, default=None
If None (default), load all the categories.
If not None, list of category names to load (other categories
ignored).
shuffle : bool, default=True
Whether or not to shuffle the data: might be important for models that
make the assumption that the samples are independent and identically
distributed (i.i.d.), such as stochastic gradient descent.
    random_state : int, RandomState instance or None, default=42
Determines random number generation for dataset shuffling. Pass an int
for reproducible output across multiple function calls.
See :term:`Glossary <random_state>`.
remove : tuple, default=()
May contain any subset of ('headers', 'footers', 'quotes'). Each of
these are kinds of text that will be detected and removed from the
newsgroup posts, preventing classifiers from overfitting on
metadata.
'headers' removes newsgroup headers, 'footers' removes blocks at the
ends of posts that look like signatures, and 'quotes' removes lines
that appear to be quoting another post.
'headers' follows an exact standard; the other filters are not always
correct.
download_if_missing : bool, default=True
If False, raise an IOError if the data is not locally available
instead of trying to download the data from the source site.
return_X_y : bool, default=False
If True, returns `(data.data, data.target)` instead of a Bunch
object.
.. versionadded:: 0.22
Returns
-------
bunch : :class:`~sklearn.utils.Bunch`
Dictionary-like object, with the following attributes.
data : list of shape (n_samples,)
The data list to learn.
target: ndarray of shape (n_samples,)
The target labels.
filenames: list of shape (n_samples,)
The path to the location of the data.
DESCR: str
The full description of the dataset.
target_names: list of shape (n_classes,)
The names of target classes.
(data, target) : tuple if `return_X_y=True`
.. versionadded:: 0.22
"""
data_home = get_data_home(data_home=data_home)
cache_path = _pkl_filepath(data_home, CACHE_NAME)
twenty_home = os.path.join(data_home, "20news_home")
cache = None
if os.path.exists(cache_path):
try:
with open(cache_path, "rb") as f:
compressed_content = f.read()
uncompressed_content = codecs.decode(compressed_content, "zlib_codec")
cache = pickle.loads(uncompressed_content)
except Exception as e:
print(80 * "_")
print("Cache loading failed")
print(80 * "_")
print(e)
if cache is None:
if download_if_missing:
logger.info("Downloading 20news dataset. " "This may take a few minutes.")
cache = _download_20newsgroups(
target_dir=twenty_home, cache_path=cache_path
)
else:
raise IOError("20Newsgroups dataset not found")
if subset in ("train", "test"):
data = cache[subset]
elif subset == "all":
data_lst = list()
target = list()
filenames = list()
for subset in ("train", "test"):
data = cache[subset]
data_lst.extend(data.data)
target.extend(data.target)
filenames.extend(data.filenames)
data.data = data_lst
data.target = np.array(target)
data.filenames = np.array(filenames)
else:
raise ValueError(
"subset can only be 'train', 'test' or 'all', got '%s'" % subset
)
module_path = dirname(__file__)
with open(join(module_path, "descr", "twenty_newsgroups.rst")) as rst_file:
fdescr = rst_file.read()
data.DESCR = fdescr
if "headers" in remove:
data.data = [strip_newsgroup_header(text) for text in data.data]
if "footers" in remove:
data.data = [strip_newsgroup_footer(text) for text in data.data]
if "quotes" in remove:
data.data = [strip_newsgroup_quoting(text) for text in data.data]
if categories is not None:
labels = [(data.target_names.index(cat), cat) for cat in categories]
# Sort the categories to have the ordering of the labels
labels.sort()
labels, categories = zip(*labels)
mask = np.in1d(data.target, labels)
data.filenames = data.filenames[mask]
data.target = data.target[mask]
# searchsorted to have continuous labels
data.target = np.searchsorted(labels, data.target)
data.target_names = list(categories)
# Use an object array to shuffle: avoids memory copy
data_lst = np.array(data.data, dtype=object)
data_lst = data_lst[mask]
data.data = data_lst.tolist()
if shuffle:
random_state = check_random_state(random_state)
indices = np.arange(data.target.shape[0])
random_state.shuffle(indices)
data.filenames = data.filenames[indices]
data.target = data.target[indices]
# Use an object array to shuffle: avoids memory copy
data_lst = np.array(data.data, dtype=object)
data_lst = data_lst[indices]
data.data = data_lst.tolist()
if return_X_y:
return data.data, data.target
return data
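# The on-disk cache used above is just a zlib-compressed pickle; a minimal
# round-trip sketch of that scheme (in memory, without the file I/O):

```python
import codecs
import pickle

# The dataset dict is pickled, then compressed with the "zlib_codec"
# codec; loading simply reverses the two steps.
cache = {"train": ["doc one", "doc two"], "test": ["doc three"]}
compressed = codecs.encode(pickle.dumps(cache), "zlib_codec")

restored = pickle.loads(codecs.decode(compressed, "zlib_codec"))
print(restored["train"])  # ['doc one', 'doc two']
```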
def fetch_20newsgroups_vectorized(
*,
subset="train",
remove=(),
data_home=None,
download_if_missing=True,
return_X_y=False,
normalize=True,
as_frame=False,
):
"""Load and vectorize the 20 newsgroups dataset (classification).
Download it if necessary.
This is a convenience function; the transformation is done using the
default settings for
:class:`~sklearn.feature_extraction.text.CountVectorizer`. For more
advanced usage (stopword filtering, n-gram extraction, etc.), combine
fetch_20newsgroups with a custom
:class:`~sklearn.feature_extraction.text.CountVectorizer`,
:class:`~sklearn.feature_extraction.text.HashingVectorizer`,
:class:`~sklearn.feature_extraction.text.TfidfTransformer` or
:class:`~sklearn.feature_extraction.text.TfidfVectorizer`.
The resulting counts are normalized using
:func:`sklearn.preprocessing.normalize` unless normalize is set to False.
================= ==========
Classes 20
Samples total 18846
Dimensionality 130107
Features real
================= ==========
Read more in the :ref:`User Guide <20newsgroups_dataset>`.
Parameters
----------
subset : {'train', 'test', 'all'}, default='train'
Select the dataset to load: 'train' for the training set, 'test'
for the test set, 'all' for both, with shuffled ordering.
remove : tuple, default=()
May contain any subset of ('headers', 'footers', 'quotes'). Each of
these are kinds of text that will be detected and removed from the
newsgroup posts, preventing classifiers from overfitting on
metadata.
'headers' removes newsgroup headers, 'footers' removes blocks at the
ends of posts that look like signatures, and 'quotes' removes lines
that appear to be quoting another post.
data_home : str, default=None
        Specify a download and cache folder for the datasets. If None,
all scikit-learn data is stored in '~/scikit_learn_data' subfolders.
download_if_missing : bool, default=True
If False, raise an IOError if the data is not locally available
instead of trying to download the data from the source site.
return_X_y : bool, default=False
If True, returns ``(data.data, data.target)`` instead of a Bunch
object.
.. versionadded:: 0.20
normalize : bool, default=True
If True, normalizes each document's feature vector to unit norm using
:func:`sklearn.preprocessing.normalize`.
.. versionadded:: 0.22
as_frame : bool, default=False
If True, the data is a pandas DataFrame including columns with
appropriate dtypes (numeric, string, or categorical). The target is
a pandas DataFrame or Series depending on the number of
`target_columns`.
.. versionadded:: 0.24
Returns
-------
bunch : :class:`~sklearn.utils.Bunch`
Dictionary-like object, with the following attributes.
data: {sparse matrix, dataframe} of shape (n_samples, n_features)
The input data matrix. If ``as_frame`` is `True`, ``data`` is
a pandas DataFrame with sparse columns.
target: {ndarray, series} of shape (n_samples,)
The target labels. If ``as_frame`` is `True`, ``target`` is a
pandas Series.
target_names: list of shape (n_classes,)
The names of target classes.
DESCR: str
The full description of the dataset.
frame: dataframe of shape (n_samples, n_features + 1)
Only present when `as_frame=True`. Pandas DataFrame with ``data``
and ``target``.
.. versionadded:: 0.24
(data, target) : tuple if ``return_X_y`` is True
`data` and `target` would be of the format defined in the `Bunch`
description above.
.. versionadded:: 0.20
"""
data_home = get_data_home(data_home=data_home)
filebase = "20newsgroup_vectorized"
if remove:
filebase += "remove-" + ("-".join(remove))
target_file = _pkl_filepath(data_home, filebase + ".pkl")
# we shuffle but use a fixed seed for the memoization
data_train = fetch_20newsgroups(
data_home=data_home,
subset="train",
categories=None,
shuffle=True,
random_state=12,
remove=remove,
download_if_missing=download_if_missing,
)
data_test = fetch_20newsgroups(
data_home=data_home,
subset="test",
categories=None,
shuffle=True,
random_state=12,
remove=remove,
download_if_missing=download_if_missing,
)
if os.path.exists(target_file):
try:
X_train, X_test, feature_names = joblib.load(target_file)
except ValueError as e:
raise ValueError(
f"The cached dataset located in {target_file} was fetched "
f"with an older scikit-learn version and it is not compatible "
f"with the scikit-learn version imported. You need to "
f"manually delete the file: {target_file}."
) from e
else:
vectorizer = CountVectorizer(dtype=np.int16)
X_train = vectorizer.fit_transform(data_train.data).tocsr()
X_test = vectorizer.transform(data_test.data).tocsr()
feature_names = vectorizer.get_feature_names()
joblib.dump((X_train, X_test, feature_names), target_file, compress=9)
# the data is stored as int16 for compactness
# but normalize needs floats
if normalize:
X_train = X_train.astype(np.float64)
X_test = X_test.astype(np.float64)
preprocessing.normalize(X_train, copy=False)
preprocessing.normalize(X_test, copy=False)
target_names = data_train.target_names
if subset == "train":
data = X_train
target = data_train.target
elif subset == "test":
data = X_test
target = data_test.target
elif subset == "all":
data = sp.vstack((X_train, X_test)).tocsr()
target = np.concatenate((data_train.target, data_test.target))
else:
raise ValueError(
"%r is not a valid subset: should be one of "
"['train', 'test', 'all']" % subset
)
module_path = dirname(__file__)
with open(join(module_path, "descr", "twenty_newsgroups.rst")) as rst_file:
fdescr = rst_file.read()
frame = None
target_name = ["category_class"]
if as_frame:
frame, data, target = _convert_data_dataframe(
"fetch_20newsgroups_vectorized",
data,
target,
feature_names,
target_names=target_name,
sparse_data=True,
)
if return_X_y:
return data, target
return Bunch(
data=data,
target=target,
frame=frame,
target_names=target_names,
feature_names=feature_names,
DESCR=fdescr,
)
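# When ``normalize=True``, each document vector is scaled to unit L2 norm;
# the effect on a single dense row can be sketched with the standard library
# alone (sparse-matrix handling omitted):

```python
import math

def l2_normalize(row):
    # Divide each component by the row's Euclidean norm.
    norm = math.sqrt(sum(v * v for v in row))
    return [v / norm for v in row]

print(l2_normalize([3.0, 4.0]))  # [0.6, 0.8]
```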
# freq.py (soulruler01/print-the-letters-in-decreasing-order-of-frequency-using-dictionaries, MIT License)
def most_frequent(s):
    # Tally occurrences of each character.
    dic = {}
    for i in s:
        if i in dic:
            dic[i] += 1
        else:
            dic[i] = 1
    # Sort (char, count) pairs by count, highest first, and print them.
    z = sorted(dic.items(), key=lambda x: x[1], reverse=True)
    for i in z:
        print(i[0] + "=" + str(i[1]))
most_frequent('mississippi')
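# An equivalent sketch using ``collections.Counter``, which already returns
# pairs sorted by descending count (the function name here is invented to
# avoid clashing with the original):

```python
from collections import Counter

def most_frequent_counter(s):
    # most_common() yields (char, count) pairs, highest count first;
    # ties keep first-seen order in CPython 3.7+.
    for ch, count in Counter(s).most_common():
        print(ch + "=" + str(count))

most_frequent_counter('mississippi')  # i=4, s=4, p=2, m=1
```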
# colour_datasets/loaders/jakob2019.py (colour-science/colour-datasets, BSD-3-Clause License)
# -*- coding: utf-8 -*-
"""
Spectral Upsampling Coefficient Tables - Jakob and Hanika (2019)
================================================================
Defines the objects implementing support for *Jakob and Hanika (2019)*
*Spectral Upsampling Coefficient Tables* dataset loading:
- :class:`colour_datasets.loaders.DatasetLoader_Jakob2019`
- :func:`colour_datasets.loaders.build_Jakob2019`
References
----------
- :cite:`Jakob2019` : Jakob, W., & Hanika, J. (2019). A Low‐Dimensional
Function Space for Efficient Spectral Upsampling. Computer Graphics Forum,
38(2), 147-155. doi:10.1111/cgf.13626
"""
import glob
import os
from collections import OrderedDict
from colour.recovery import LUT3D_Jakob2019
from colour_datasets.loaders import AbstractDatasetLoader
from colour_datasets.records import datasets
__author__ = 'Colour Developers'
__copyright__ = 'Copyright (C) 2019-2021 - Colour Developers'
__license__ = 'New BSD License - https://opensource.org/licenses/BSD-3-Clause'
__maintainer__ = 'Colour Developers'
__email__ = 'colour-developers@colour-science.org'
__status__ = 'Production'
__all__ = ['DatasetLoader_Jakob2019', 'build_Jakob2019']
class DatasetLoader_Jakob2019(AbstractDatasetLoader):
"""
Defines the *Jakob and Hanika (2019)*
*Spectral Upsampling Coefficient Tables* dataset loader.
Attributes
----------
- :attr:`colour_datasets.loaders.DatasetLoader_Jakob2019.ID`
Methods
-------
- :meth:`colour_datasets.loaders.DatasetLoader_Jakob2019.__init__`
- :meth:`colour_datasets.loaders.DatasetLoader_Jakob2019.load`
References
----------
:cite:`Jakob2019`
"""
ID = '4050598'
"""
Dataset record id, i.e. the *Zenodo* record number.
ID : unicode
"""
def __init__(self):
super(DatasetLoader_Jakob2019,
self).__init__(datasets()[DatasetLoader_Jakob2019.ID])
def load(self):
"""
Syncs, parses, converts and returns the *Jakob and Hanika (2019)*
*Spectral Upsampling Coefficient Tables* dataset content.
Returns
-------
OrderedDict
*Jakob and Hanika (2019)* *Spectral Upsampling Coefficient Tables*
dataset content.
Examples
--------
>>> from colour_datasets.utilities import suppress_stdout
>>> dataset = DatasetLoader_Jakob2019()
>>> with suppress_stdout():
... dataset.load()
>>> len(dataset.content.keys())
4
"""
super(DatasetLoader_Jakob2019, self).sync()
self._content = OrderedDict()
tables_path = os.path.join(self.record.repository, 'dataset',
'Jakob2019Spectral', 'supplement', 'tables')
coeff_file_to_RGB_colourspace = {
'rec2020': 'ITU-R BT.2020',
'srgb': 'sRGB',
'aces2065_1': 'ACES2065-1',
'prophotorgb': 'ProPhoto RGB',
}
for coeff_file in glob.glob('{0}/*.coeff'.format(tables_path)):
key = os.path.splitext(os.path.basename(coeff_file))[0]
key = coeff_file_to_RGB_colourspace.get(key, key)
LUT = LUT3D_Jakob2019()
LUT.read(coeff_file)
self._content[key] = LUT
return self._content
_DATASET_LOADER_JAKOB2019 = None
"""
Singleton instance of the *Jakob and Hanika (2019)*
*Spectral Upsampling Coefficient Tables* dataset loader.
_DATASET_LOADER_JAKOB2019 : DatasetLoader_Jakob2019
"""
def build_Jakob2019(load=True):
"""
Singleton factory that builds the *Jakob and Hanika (2019)*
*Spectral Upsampling Coefficient Tables* dataset loader.
Parameters
----------
load : bool, optional
Whether to load the dataset upon instantiation.
Returns
-------
DatasetLoader_Jakob2019
Singleton instance of the *Jakob and Hanika (2019)*
*Spectral Upsampling Coefficient Tables* dataset loader.
References
----------
:cite:`Jakob2019`
"""
global _DATASET_LOADER_JAKOB2019
if _DATASET_LOADER_JAKOB2019 is None:
_DATASET_LOADER_JAKOB2019 = DatasetLoader_Jakob2019()
if load:
_DATASET_LOADER_JAKOB2019.load()
return _DATASET_LOADER_JAKOB2019
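# The lazy-singleton pattern used by ``build_Jakob2019`` in isolation, with a
# hypothetical stand-in loader class (names here are illustrative, not part of
# the dataset API):

```python
# Hypothetical stand-in for a dataset loader.
class _DummyLoader:
    def __init__(self):
        self.loaded = False

    def load(self):
        self.loaded = True

_INSTANCE = None

def build_dummy(load=True):
    # Create the loader once; optionally trigger the (expensive) load.
    global _INSTANCE
    if _INSTANCE is None:
        _INSTANCE = _DummyLoader()
    if load:
        _INSTANCE.load()
    return _INSTANCE

a = build_dummy()
b = build_dummy(load=False)
print(a is b, a.loaded)  # True True
```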
# tensorflow_federated/python/core/impl/computation_impl_test.py (justin1121/federated, Apache-2.0 License)
# Copyright 2018, The TensorFlow Federated Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Tests for ComputationImpl."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from absl.testing import absltest
import tensorflow as tf
from tensorflow_federated.proto.v0 import computation_pb2 as pb
from tensorflow_federated.python.core.api import computation_types
from tensorflow_federated.python.core.impl import computation_impl
from tensorflow_federated.python.core.impl import context_stack_impl
from tensorflow_federated.python.core.impl import type_serialization
class ComputationImplTest(absltest.TestCase):
def test_something(self):
# TODO(b/113112108): Revise these tests after a more complete implementation
# is in place.
# At the moment, this should succeed, as both the computation body and the
# type are well-formed.
computation_impl.ComputationImpl(
pb.Computation(
**{
'type':
type_serialization.serialize_type(
computation_types.FunctionType(tf.int32, tf.int32)),
'intrinsic':
pb.Intrinsic(uri='whatever')
}), context_stack_impl.context_stack)
# This should fail, as the proto is not well-formed.
self.assertRaises(TypeError, computation_impl.ComputationImpl,
pb.Computation(), context_stack_impl.context_stack)
# This should fail, as "10" is not an instance of pb.Computation.
self.assertRaises(TypeError, computation_impl.ComputationImpl, 10,
context_stack_impl.context_stack)
if __name__ == '__main__':
absltest.main()
# groupdocs_conversion_cloud/models/jpeg_convert_options.py (groupdocs-conversion-cloud/groupdocs-conversion-cloud-python, MIT License)
# coding: utf-8
# -----------------------------------------------------------------------------------
# <copyright company="Aspose Pty Ltd" file="JpegConvertOptions.py">
# Copyright (c) 2003-2021 Aspose Pty Ltd
# </copyright>
# <summary>
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in all
# copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.
# </summary>
# -----------------------------------------------------------------------------------
import pprint
import re # noqa: F401
import six
from groupdocs_conversion_cloud.models import JpgConvertOptions
class JpegConvertOptions(JpgConvertOptions):
"""
Jpeg convert options
"""
"""
Attributes:
swagger_types (dict): The key is attribute name
and the value is attribute type.
attribute_map (dict): The key is attribute name
and the value is json key in definition.
"""
swagger_types = {
}
attribute_map = {
}
def __init__(self, **kwargs): # noqa: E501
"""Initializes new instance of JpegConvertOptions""" # noqa: E501
base = super(JpegConvertOptions, self)
base.__init__(**kwargs)
self.swagger_types.update(base.swagger_types)
self.attribute_map.update(base.attribute_map)
def to_dict(self):
"""Returns the model properties as a dict"""
result = {}
for attr, _ in six.iteritems(self.swagger_types):
value = getattr(self, attr)
if isinstance(value, list):
result[attr] = list(map(
lambda x: x.to_dict() if hasattr(x, "to_dict") else x,
value
))
elif hasattr(value, "to_dict"):
result[attr] = value.to_dict()
elif isinstance(value, dict):
result[attr] = dict(map(
lambda item: (item[0], item[1].to_dict())
if hasattr(item[1], "to_dict") else item,
value.items()
))
else:
result[attr] = value
return result
def to_str(self):
"""Returns the string representation of the model"""
return pprint.pformat(self.to_dict())
def __repr__(self):
"""For `print` and `pprint`"""
return self.to_str()
def __eq__(self, other):
"""Returns true if both objects are equal"""
if not isinstance(other, JpegConvertOptions):
return False
return self.__dict__ == other.__dict__
def __ne__(self, other):
"""Returns true if both objects are not equal"""
return not self == other
| 34.942308 | 85 | 0.59934 |
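The `to_dict()` implementation above walks `swagger_types` and recursively serializes nested models, lists, and dicts. A minimal standalone sketch of the same pattern (the `Model`, `Size`, and `Options` classes here are illustrative, not part of the GroupDocs SDK):

```python
class Model:
    # Maps attribute name -> declared type; subclasses provide their own.
    swagger_types = {}

    def to_dict(self):
        """Recursively serialize every attribute listed in swagger_types."""
        result = {}
        for attr in self.swagger_types:
            value = getattr(self, attr)
            if isinstance(value, list):
                result[attr] = [v.to_dict() if hasattr(v, "to_dict") else v
                                for v in value]
            elif hasattr(value, "to_dict"):
                result[attr] = value.to_dict()
            elif isinstance(value, dict):
                result[attr] = {k: v.to_dict() if hasattr(v, "to_dict") else v
                                for k, v in value.items()}
            else:
                result[attr] = value
        return result


class Size(Model):
    swagger_types = {"width": "int", "height": "int"}

    def __init__(self, width, height):
        self.width = width
        self.height = height


class Options(Model):
    swagger_types = {"quality": "int", "size": "Size"}

    def __init__(self, quality, size):
        self.quality = quality
        self.size = size


opts = Options(90, Size(800, 600))
print(opts.to_dict())  # {'quality': 90, 'size': {'width': 800, 'height': 600}}
```

The list and dict branches are what let arbitrarily nested option objects flatten into plain, JSON-ready dictionaries.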
import pprint
import re
import six
from groupdocs_conversion_cloud.models import JpgConvertOptions
class JpegConvertOptions(JpgConvertOptions):
swagger_types = {
}
attribute_map = {
}
def __init__(self, **kwargs):
base = super(JpegConvertOptions, self)
base.__init__(**kwargs)
self.swagger_types.update(base.swagger_types)
self.attribute_map.update(base.attribute_map)
def to_dict(self):
result = {}
for attr, _ in six.iteritems(self.swagger_types):
value = getattr(self, attr)
if isinstance(value, list):
result[attr] = list(map(
lambda x: x.to_dict() if hasattr(x, "to_dict") else x,
value
))
elif hasattr(value, "to_dict"):
result[attr] = value.to_dict()
elif isinstance(value, dict):
result[attr] = dict(map(
lambda item: (item[0], item[1].to_dict())
if hasattr(item[1], "to_dict") else item,
value.items()
))
else:
result[attr] = value
return result
def to_str(self):
return pprint.pformat(self.to_dict())
def __repr__(self):
return self.to_str()
def __eq__(self, other):
if not isinstance(other, JpegConvertOptions):
return False
return self.__dict__ == other.__dict__
def __ne__(self, other):
return not self == other
| true | true |
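`JpegConvertOptions.__init__` starts from empty class-level `swagger_types` / `attribute_map` dicts and merges the parent's declarations into them at construction time. A stripped-down sketch of that merge (class names are hypothetical; the real base class is `JpgConvertOptions`, and the sketch references the base class directly instead of going through a `super()` proxy):

```python
class Base:
    swagger_types = {"width": "int"}
    attribute_map = {"width": "Width"}

    def __init__(self, **kwargs):
        for name, value in kwargs.items():
            setattr(self, name, value)


class Derived(Base):
    # Fresh dicts so the update() calls below fill these, not Base's.
    swagger_types = {}
    attribute_map = {}

    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        # Pull the parent's declarations into this class's own dicts.
        self.swagger_types.update(Base.swagger_types)
        self.attribute_map.update(Base.attribute_map)


d = Derived(width=800)
print(d.swagger_types)  # {'width': 'int'}
print(d.width)          # 800
```

Note that `self.swagger_types.update(...)` mutates the class-level dict, so the merged view is shared by all `Derived` instances after the first construction.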
f73e168829aff96aebe0833e5d64b250145dd594 | 28,598 | py | Python | mesonbuild/modules/pkgconfig.py | xggrnx/meson | af8b55d49b64e72dbefbd40d613b93f56d17b855 | [
"Apache-2.0"
] | null | null | null | mesonbuild/modules/pkgconfig.py | xggrnx/meson | af8b55d49b64e72dbefbd40d613b93f56d17b855 | [
"Apache-2.0"
] | null | null | null | mesonbuild/modules/pkgconfig.py | xggrnx/meson | af8b55d49b64e72dbefbd40d613b93f56d17b855 | [
"Apache-2.0"
] | null | null | null | # Copyright 2015 The Meson development team
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
# http://www.apache.org/licenses/LICENSE-2.0
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import os
from pathlib import PurePath
from .. import build
from .. import dependencies
from ..dependencies import ThreadDependency
from .. import mesonlib
from .. import mlog
from . import ModuleReturnValue
from . import ExtensionModule
from ..interpreterbase import permittedKwargs, FeatureNew, FeatureNewKwargs
already_warned_objs = set()
class DependenciesHelper:
def __init__(self, state, name):
self.state = state
self.name = name
self.pub_libs = []
self.pub_reqs = []
self.priv_libs = []
self.priv_reqs = []
self.cflags = []
self.version_reqs = {}
self.link_whole_targets = []
def add_pub_libs(self, libs):
libs, reqs, cflags = self._process_libs(libs, True)
self.pub_libs = libs + self.pub_libs # prepend to preserve dependencies
self.pub_reqs += reqs
self.cflags += cflags
def add_priv_libs(self, libs):
libs, reqs, _ = self._process_libs(libs, False)
self.priv_libs = libs + self.priv_libs
self.priv_reqs += reqs
def add_pub_reqs(self, reqs):
self.pub_reqs += self._process_reqs(reqs)
def add_priv_reqs(self, reqs):
self.priv_reqs += self._process_reqs(reqs)
def _check_generated_pc_deprecation(self, obj):
if not hasattr(obj, 'generated_pc_warn'):
return
name = obj.generated_pc_warn[0]
if (name, obj.name) in already_warned_objs:
return
mlog.deprecation('Library', mlog.bold(obj.name), 'was passed to the '
'"libraries" keyword argument of a previous call '
'to generate() method instead of first positional '
'argument.', 'Adding', mlog.bold(obj.generated_pc),
'to "Requires" field, but this is a deprecated '
'behaviour that will change in a future version '
'of Meson. Please report the issue if this '
'warning cannot be avoided in your case.',
location=obj.generated_pc_warn[1])
already_warned_objs.add((name, obj.name))
def _process_reqs(self, reqs):
'''Returns string names of requirements'''
processed_reqs = []
for obj in mesonlib.listify(reqs):
if not isinstance(obj, str):
FeatureNew.single_use('pkgconfig.generate requirement from non-string object', '0.46.0', self.state.subproject)
if hasattr(obj, 'generated_pc'):
self._check_generated_pc_deprecation(obj)
processed_reqs.append(obj.generated_pc)
elif isinstance(obj, dependencies.PkgConfigDependency):
if obj.found():
processed_reqs.append(obj.name)
self.add_version_reqs(obj.name, obj.version_reqs)
elif isinstance(obj, str):
name, version_req = self.split_version_req(obj)
processed_reqs.append(name)
self.add_version_reqs(name, version_req)
elif isinstance(obj, dependencies.Dependency) and not obj.found():
pass
elif isinstance(obj, ThreadDependency):
pass
else:
raise mesonlib.MesonException('requires argument not a string, '
'library with pkgconfig-generated file '
'or pkgconfig-dependency object, '
'got {!r}'.format(obj))
return processed_reqs
def add_cflags(self, cflags):
self.cflags += mesonlib.stringlistify(cflags)
def _process_libs(self, libs, public: bool):
libs = mesonlib.listify(libs)
processed_libs = []
processed_reqs = []
processed_cflags = []
for obj in libs:
if hasattr(obj, 'generated_pc'):
self._check_generated_pc_deprecation(obj)
processed_reqs.append(obj.generated_pc)
elif isinstance(obj, dependencies.PkgConfigDependency):
if obj.found():
processed_reqs.append(obj.name)
self.add_version_reqs(obj.name, obj.version_reqs)
elif isinstance(obj, dependencies.InternalDependency):
if obj.found():
processed_libs += obj.get_link_args()
processed_cflags += obj.get_compile_args()
self._add_lib_dependencies(obj.libraries, obj.whole_libraries, obj.ext_deps, public, private_external_deps=True)
elif isinstance(obj, dependencies.Dependency):
if obj.found():
processed_libs += obj.get_link_args()
processed_cflags += obj.get_compile_args()
elif isinstance(obj, build.SharedLibrary) and obj.shared_library_only:
# Do not pull dependencies for shared libraries because they are
# only required for static linking. Adding private requires has
# the side effect of exposing their cflags, which is the
                # intended behaviour of pkg-config but forces Debian to add
                # more build deps than needed.
# See https://bugs.freedesktop.org/show_bug.cgi?id=105572
processed_libs.append(obj)
elif isinstance(obj, (build.SharedLibrary, build.StaticLibrary)):
processed_libs.append(obj)
# If there is a static library in `Libs:` all its deps must be
# public too, otherwise the generated pc file will never be
# usable without --static.
self._add_lib_dependencies(obj.link_targets,
obj.link_whole_targets,
obj.external_deps,
isinstance(obj, build.StaticLibrary) and public)
elif isinstance(obj, (build.CustomTarget, build.CustomTargetIndex)):
if not obj.is_linkable_target():
raise mesonlib.MesonException('library argument contains a not linkable custom_target.')
FeatureNew.single_use('custom_target in pkgconfig.generate libraries', '0.58.0', self.state.subproject)
processed_libs.append(obj)
elif isinstance(obj, str):
processed_libs.append(obj)
else:
raise mesonlib.MesonException(f'library argument of type {type(obj).__name__} not a string, library or dependency object.')
return processed_libs, processed_reqs, processed_cflags
def _add_lib_dependencies(self, link_targets, link_whole_targets, external_deps, public, private_external_deps=False):
add_libs = self.add_pub_libs if public else self.add_priv_libs
# Recursively add all linked libraries
for t in link_targets:
# Internal libraries (uninstalled static library) will be promoted
# to link_whole, treat them as such here.
if t.is_internal():
self._add_link_whole(t, public)
else:
add_libs([t])
for t in link_whole_targets:
self._add_link_whole(t, public)
# And finally its external dependencies
if private_external_deps:
self.add_priv_libs(external_deps)
else:
add_libs(external_deps)
def _add_link_whole(self, t, public):
# Don't include static libraries that we link_whole. But we still need to
# include their dependencies: a static library we link_whole
# could itself link to a shared library or an installed static library.
# Keep track of link_whole_targets so we can remove them from our
# lists in case a library is link_with and link_whole at the same time.
# See remove_dups() below.
self.link_whole_targets.append(t)
self._add_lib_dependencies(t.link_targets, t.link_whole_targets, t.external_deps, public)
def add_version_reqs(self, name, version_reqs):
if version_reqs:
if name not in self.version_reqs:
self.version_reqs[name] = set()
# Note that pkg-config is picky about whitespace.
# 'foo > 1.2' is ok but 'foo>1.2' is not.
            # 'foo, bar' is ok, but 'foo,bar' is not.
            new_vreqs = mesonlib.stringlistify(version_reqs)
self.version_reqs[name].update(new_vreqs)
def split_version_req(self, s):
for op in ['>=', '<=', '!=', '==', '=', '>', '<']:
pos = s.find(op)
if pos > 0:
return s[0:pos].strip(), s[pos:].strip()
return s, None
def format_vreq(self, vreq):
# vreq are '>=1.0' and pkgconfig wants '>= 1.0'
for op in ['>=', '<=', '!=', '==', '=', '>', '<']:
if vreq.startswith(op):
return op + ' ' + vreq[len(op):]
return vreq
def format_reqs(self, reqs):
result = []
for name in reqs:
vreqs = self.version_reqs.get(name, None)
if vreqs:
result += [name + ' ' + self.format_vreq(vreq) for vreq in vreqs]
else:
result += [name]
return ', '.join(result)
def remove_dups(self):
# Set of ids that have already been handled and should not be added any more
exclude = set()
# We can't just check if 'x' is excluded because we could have copies of
# the same SharedLibrary object for example.
def _ids(x):
if hasattr(x, 'generated_pc'):
yield x.generated_pc
if isinstance(x, build.Target):
yield x.get_id()
yield x
# Exclude 'x' in all its forms and return if it was already excluded
def _add_exclude(x):
was_excluded = False
for i in _ids(x):
if i in exclude:
was_excluded = True
else:
exclude.add(i)
return was_excluded
# link_whole targets are already part of other targets, exclude them all.
for t in self.link_whole_targets:
_add_exclude(t)
def _fn(xs, libs=False):
# Remove duplicates whilst preserving original order
result = []
for x in xs:
# Don't de-dup unknown strings to avoid messing up arguments like:
# ['-framework', 'CoreAudio', '-framework', 'CoreMedia']
known_flags = ['-pthread']
cannot_dedup = libs and isinstance(x, str) and \
not x.startswith(('-l', '-L')) and \
x not in known_flags
if not cannot_dedup and _add_exclude(x):
continue
result.append(x)
return result
# Handle lists in priority order: public items can be excluded from
# private and Requires can excluded from Libs.
self.pub_reqs = _fn(self.pub_reqs)
self.pub_libs = _fn(self.pub_libs, True)
self.priv_reqs = _fn(self.priv_reqs)
self.priv_libs = _fn(self.priv_libs, True)
# Reset exclude list just in case some values can be both cflags and libs.
exclude = set()
self.cflags = _fn(self.cflags)
class PkgConfigModule(ExtensionModule):
def __init__(self, interpreter):
super().__init__(interpreter)
self.methods.update({
'generate': self.generate,
})
def _get_lname(self, l, msg, pcfile, is_custom_target):
if is_custom_target:
basename = os.path.basename(l.get_filename())
name = os.path.splitext(basename)[0]
if name.startswith('lib'):
name = name[3:]
return name
# Nothing special
if not l.name_prefix_set:
return l.name
# Sometimes people want the library to start with 'lib' everywhere,
# which is achieved by setting name_prefix to '' and the target name to
# 'libfoo'. In that case, try to get the pkg-config '-lfoo' arg correct.
if l.prefix == '' and l.name.startswith('lib'):
return l.name[3:]
# If the library is imported via an import library which is always
# named after the target name, '-lfoo' is correct.
if isinstance(l, build.SharedLibrary) and l.import_filename:
return l.name
# In other cases, we can't guarantee that the compiler will be able to
# find the library via '-lfoo', so tell the user that.
mlog.warning(msg.format(l.name, 'name_prefix', l.name, pcfile))
return l.name
def _escape(self, value):
'''
We cannot use quote_arg because it quotes with ' and " which does not
work with pkg-config and pkgconf at all.
'''
# We should always write out paths with / because pkg-config requires
# spaces to be quoted with \ and that messes up on Windows:
# https://bugs.freedesktop.org/show_bug.cgi?id=103203
if isinstance(value, PurePath):
value = value.as_posix()
return value.replace(' ', r'\ ')
def _make_relative(self, prefix, subdir):
prefix = PurePath(prefix)
subdir = PurePath(subdir)
try:
return subdir.relative_to(prefix).as_posix()
except ValueError:
return subdir.as_posix()
def _generate_pkgconfig_file(self, state, deps, subdirs, name, description,
url, version, pcfile, conflicts, variables,
unescaped_variables, uninstalled=False, dataonly=False):
coredata = state.environment.get_coredata()
if uninstalled:
outdir = os.path.join(state.environment.build_dir, 'meson-uninstalled')
if not os.path.exists(outdir):
os.mkdir(outdir)
prefix = PurePath(state.environment.get_build_dir())
srcdir = PurePath(state.environment.get_source_dir())
else:
outdir = state.environment.scratch_dir
prefix = PurePath(coredata.get_option(mesonlib.OptionKey('prefix')))
# These always return paths relative to prefix
libdir = PurePath(coredata.get_option(mesonlib.OptionKey('libdir')))
incdir = PurePath(coredata.get_option(mesonlib.OptionKey('includedir')))
fname = os.path.join(outdir, pcfile)
with open(fname, 'w', encoding='utf-8') as ofile:
if not dataonly:
ofile.write('prefix={}\n'.format(self._escape(prefix)))
if uninstalled:
ofile.write('srcdir={}\n'.format(self._escape(srcdir)))
ofile.write('libdir={}\n'.format(self._escape('${prefix}' / libdir)))
ofile.write('includedir={}\n'.format(self._escape('${prefix}' / incdir)))
if variables or unescaped_variables:
ofile.write('\n')
for k, v in variables:
ofile.write('{}={}\n'.format(k, self._escape(v)))
for k, v in unescaped_variables:
ofile.write(f'{k}={v}\n')
ofile.write('\n')
ofile.write('Name: %s\n' % name)
if len(description) > 0:
ofile.write('Description: %s\n' % description)
if len(url) > 0:
ofile.write('URL: %s\n' % url)
ofile.write('Version: %s\n' % version)
reqs_str = deps.format_reqs(deps.pub_reqs)
if len(reqs_str) > 0:
ofile.write(f'Requires: {reqs_str}\n')
reqs_str = deps.format_reqs(deps.priv_reqs)
if len(reqs_str) > 0:
ofile.write(f'Requires.private: {reqs_str}\n')
if len(conflicts) > 0:
ofile.write('Conflicts: {}\n'.format(' '.join(conflicts)))
def generate_libs_flags(libs):
msg = 'Library target {0!r} has {1!r} set. Compilers ' \
'may not find it from its \'-l{2}\' linker flag in the ' \
'{3!r} pkg-config file.'
Lflags = []
for l in libs:
if isinstance(l, str):
yield l
else:
if uninstalled:
install_dir = os.path.dirname(state.backend.get_target_filename_abs(l))
else:
install_dir = l.get_custom_install_dir()[0]
if install_dir is False:
continue
is_custom_target = isinstance(l, (build.CustomTarget, build.CustomTargetIndex))
if not is_custom_target and 'cs' in l.compilers:
if isinstance(install_dir, str):
Lflag = '-r${{prefix}}/{}/{}'.format(self._escape(self._make_relative(prefix, install_dir)), l.filename)
else: # install_dir is True
Lflag = '-r${libdir}/%s' % l.filename
else:
if isinstance(install_dir, str):
Lflag = '-L${prefix}/%s' % self._escape(self._make_relative(prefix, install_dir))
else: # install_dir is True
Lflag = '-L${libdir}'
if Lflag not in Lflags:
Lflags.append(Lflag)
yield Lflag
lname = self._get_lname(l, msg, pcfile, is_custom_target)
# If using a custom suffix, the compiler may not be able to
# find the library
if not is_custom_target and l.name_suffix_set:
mlog.warning(msg.format(l.name, 'name_suffix', lname, pcfile))
if is_custom_target or 'cs' not in l.compilers:
yield '-l%s' % lname
def get_uninstalled_include_dirs(libs):
result = []
for l in libs:
if isinstance(l, (str, build.CustomTarget, build.CustomTargetIndex)):
continue
if l.get_subdir() not in result:
result.append(l.get_subdir())
for i in l.get_include_dirs():
curdir = i.get_curdir()
for d in i.get_incdirs():
path = os.path.join(curdir, d)
if path not in result:
result.append(path)
return result
def generate_uninstalled_cflags(libs):
for d in get_uninstalled_include_dirs(libs):
for basedir in ['${prefix}', '${srcdir}']:
path = PurePath(basedir, d)
yield '-I%s' % self._escape(path.as_posix())
if len(deps.pub_libs) > 0:
ofile.write('Libs: {}\n'.format(' '.join(generate_libs_flags(deps.pub_libs))))
if len(deps.priv_libs) > 0:
ofile.write('Libs.private: {}\n'.format(' '.join(generate_libs_flags(deps.priv_libs))))
cflags = []
if uninstalled:
cflags += generate_uninstalled_cflags(deps.pub_libs + deps.priv_libs)
else:
for d in subdirs:
if d == '.':
cflags.append('-I${includedir}')
else:
cflags.append(self._escape(PurePath('-I${includedir}') / d))
cflags += [self._escape(f) for f in deps.cflags]
if cflags and not dataonly:
ofile.write('Cflags: {}\n'.format(' '.join(cflags)))
@FeatureNewKwargs('pkgconfig.generate', '0.59.0', ['unescaped_variables', 'unescaped_uninstalled_variables'])
@FeatureNewKwargs('pkgconfig.generate', '0.54.0', ['uninstalled_variables'])
@FeatureNewKwargs('pkgconfig.generate', '0.42.0', ['extra_cflags'])
@FeatureNewKwargs('pkgconfig.generate', '0.41.0', ['variables'])
@FeatureNewKwargs('pkgconfig.generate', '0.54.0', ['dataonly'])
@permittedKwargs({'libraries', 'version', 'name', 'description', 'filebase',
'subdirs', 'requires', 'requires_private', 'libraries_private',
'install_dir', 'extra_cflags', 'variables', 'url', 'd_module_versions',
'dataonly', 'conflicts', 'uninstalled_variables',
'unescaped_variables', 'unescaped_uninstalled_variables'})
def generate(self, state, args, kwargs):
default_version = state.project_version['version']
default_install_dir = None
default_description = None
default_name = None
mainlib = None
default_subdirs = ['.']
if not args and 'version' not in kwargs:
FeatureNew.single_use('pkgconfig.generate implicit version keyword', '0.46.0', state.subproject)
elif len(args) == 1:
FeatureNew.single_use('pkgconfig.generate optional positional argument', '0.46.0', state.subproject)
mainlib = args[0]
if not isinstance(mainlib, (build.StaticLibrary, build.SharedLibrary)):
raise mesonlib.MesonException('Pkgconfig_gen first positional argument must be a library object')
default_name = mainlib.name
default_description = state.project_name + ': ' + mainlib.name
install_dir = mainlib.get_custom_install_dir()[0]
if isinstance(install_dir, str):
default_install_dir = os.path.join(install_dir, 'pkgconfig')
elif len(args) > 1:
raise mesonlib.MesonException('Too many positional arguments passed to Pkgconfig_gen.')
dataonly = kwargs.get('dataonly', False)
if not isinstance(dataonly, bool):
raise mesonlib.MesonException('dataonly must be boolean.')
if dataonly:
default_subdirs = []
            blocked_vars = ['libraries', 'libraries_private', 'requires_private', 'extra_cflags', 'subdirs']
if any(k in kwargs for k in blocked_vars):
raise mesonlib.MesonException(f'Cannot combine dataonly with any of {blocked_vars}')
subdirs = mesonlib.stringlistify(kwargs.get('subdirs', default_subdirs))
version = kwargs.get('version', default_version)
if not isinstance(version, str):
raise mesonlib.MesonException('Version must be specified.')
name = kwargs.get('name', default_name)
if not isinstance(name, str):
raise mesonlib.MesonException('Name not specified.')
filebase = kwargs.get('filebase', name)
if not isinstance(filebase, str):
raise mesonlib.MesonException('Filebase must be a string.')
description = kwargs.get('description', default_description)
if not isinstance(description, str):
raise mesonlib.MesonException('Description is not a string.')
url = kwargs.get('url', '')
if not isinstance(url, str):
raise mesonlib.MesonException('URL is not a string.')
conflicts = mesonlib.stringlistify(kwargs.get('conflicts', []))
# Prepend the main library to public libraries list. This is required
        # so deps.add_pub_libs() can handle dependency ordering correctly and put
# extra libraries after the main library.
libraries = mesonlib.extract_as_list(kwargs, 'libraries')
if mainlib:
libraries = [mainlib] + libraries
deps = DependenciesHelper(state, filebase)
deps.add_pub_libs(libraries)
deps.add_priv_libs(kwargs.get('libraries_private', []))
deps.add_pub_reqs(kwargs.get('requires', []))
deps.add_priv_reqs(kwargs.get('requires_private', []))
deps.add_cflags(kwargs.get('extra_cflags', []))
dversions = kwargs.get('d_module_versions', None)
if dversions:
compiler = state.environment.coredata.compilers.host.get('d')
if compiler:
deps.add_cflags(compiler.get_feature_args({'versions': dversions}, None))
deps.remove_dups()
def parse_variable_list(vardict):
reserved = ['prefix', 'libdir', 'includedir']
variables = []
for name, value in vardict.items():
if not dataonly and name in reserved:
raise mesonlib.MesonException(f'Variable "{name}" is reserved')
variables.append((name, value))
return variables
variables = self.interpreter.extract_variables(kwargs, dict_new=True)
variables = parse_variable_list(variables)
unescaped_variables = self.interpreter.extract_variables(kwargs, argname='unescaped_variables')
unescaped_variables = parse_variable_list(unescaped_variables)
pcfile = filebase + '.pc'
pkgroot = pkgroot_name = kwargs.get('install_dir', default_install_dir)
if pkgroot is None:
if mesonlib.is_freebsd():
pkgroot = os.path.join(state.environment.coredata.get_option(mesonlib.OptionKey('prefix')), 'libdata', 'pkgconfig')
pkgroot_name = os.path.join('{prefix}', 'libdata', 'pkgconfig')
else:
pkgroot = os.path.join(state.environment.coredata.get_option(mesonlib.OptionKey('libdir')), 'pkgconfig')
pkgroot_name = os.path.join('{libdir}', 'pkgconfig')
if not isinstance(pkgroot, str):
raise mesonlib.MesonException('Install_dir must be a string.')
self._generate_pkgconfig_file(state, deps, subdirs, name, description, url,
version, pcfile, conflicts, variables,
unescaped_variables, False, dataonly)
res = build.Data([mesonlib.File(True, state.environment.get_scratch_dir(), pcfile)], pkgroot, pkgroot_name, None, state.subproject, install_tag='devel')
variables = self.interpreter.extract_variables(kwargs, argname='uninstalled_variables', dict_new=True)
variables = parse_variable_list(variables)
unescaped_variables = self.interpreter.extract_variables(kwargs, argname='unescaped_uninstalled_variables')
unescaped_variables = parse_variable_list(unescaped_variables)
pcfile = filebase + '-uninstalled.pc'
self._generate_pkgconfig_file(state, deps, subdirs, name, description, url,
version, pcfile, conflicts, variables,
unescaped_variables, uninstalled=True, dataonly=dataonly)
# Associate the main library with this generated pc file. If the library
        # is used in any subsequent call to the generate() method, it will
        # produce a 'Requires:' or 'Requires.private:' entry.
# Backward compatibility: We used to set 'generated_pc' on all public
# libraries instead of just the main one. Keep doing that but warn if
# anyone is relying on that deprecated behaviour.
if mainlib:
if not hasattr(mainlib, 'generated_pc'):
mainlib.generated_pc = filebase
else:
mlog.warning('Already generated a pkg-config file for', mlog.bold(mainlib.name))
else:
for lib in deps.pub_libs:
if not isinstance(lib, str) and not hasattr(lib, 'generated_pc'):
lib.generated_pc = filebase
location = state.current_node
lib.generated_pc_warn = [name, location]
return ModuleReturnValue(res, [res])
def initialize(*args, **kwargs):
return PkgConfigModule(*args, **kwargs)
| 48.969178 | 160 | 0.582313 |
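`DependenciesHelper.split_version_req` and `format_vreq` above parse strings like `'glib-2.0>=2.50'` into a package name plus a version constraint, then re-insert the space that pkg-config insists on. A self-contained sketch of those two helpers (extracted and simplified from the class above; the package name is just an example):

```python
def split_version_req(s):
    # Longest operators first so '>=' is not matched as a bare '>'.
    for op in ['>=', '<=', '!=', '==', '=', '>', '<']:
        pos = s.find(op)
        if pos > 0:
            return s[0:pos].strip(), s[pos:].strip()
    return s, None


def format_vreq(vreq):
    # Version reqs arrive as '>=1.0' but pkg-config wants '>= 1.0'.
    for op in ['>=', '<=', '!=', '==', '=', '>', '<']:
        if vreq.startswith(op):
            return op + ' ' + vreq[len(op):]
    return vreq


name, vreq = split_version_req('glib-2.0>=2.50')
print(name, format_vreq(vreq))  # glib-2.0 >= 2.50
```

Requiring `pos > 0` means an operator must follow at least one character of name, so a bare package name like `'zlib'` falls through and returns `('zlib', None)`.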
import os
from pathlib import PurePath
from .. import build
from .. import dependencies
from ..dependencies import ThreadDependency
from .. import mesonlib
from .. import mlog
from . import ModuleReturnValue
from . import ExtensionModule
from ..interpreterbase import permittedKwargs, FeatureNew, FeatureNewKwargs
already_warned_objs = set()
class DependenciesHelper:
def __init__(self, state, name):
self.state = state
self.name = name
self.pub_libs = []
self.pub_reqs = []
self.priv_libs = []
self.priv_reqs = []
self.cflags = []
self.version_reqs = {}
self.link_whole_targets = []
def add_pub_libs(self, libs):
libs, reqs, cflags = self._process_libs(libs, True)
self.pub_libs = libs + self.pub_libs
self.pub_reqs += reqs
self.cflags += cflags
def add_priv_libs(self, libs):
libs, reqs, _ = self._process_libs(libs, False)
self.priv_libs = libs + self.priv_libs
self.priv_reqs += reqs
def add_pub_reqs(self, reqs):
self.pub_reqs += self._process_reqs(reqs)
def add_priv_reqs(self, reqs):
self.priv_reqs += self._process_reqs(reqs)
def _check_generated_pc_deprecation(self, obj):
if not hasattr(obj, 'generated_pc_warn'):
return
name = obj.generated_pc_warn[0]
if (name, obj.name) in already_warned_objs:
return
mlog.deprecation('Library', mlog.bold(obj.name), 'was passed to the '
'"libraries" keyword argument of a previous call '
'to generate() method instead of first positional '
'argument.', 'Adding', mlog.bold(obj.generated_pc),
'to "Requires" field, but this is a deprecated '
'behaviour that will change in a future version '
'of Meson. Please report the issue if this '
'warning cannot be avoided in your case.',
location=obj.generated_pc_warn[1])
already_warned_objs.add((name, obj.name))
def _process_reqs(self, reqs):
processed_reqs = []
for obj in mesonlib.listify(reqs):
if not isinstance(obj, str):
FeatureNew.single_use('pkgconfig.generate requirement from non-string object', '0.46.0', self.state.subproject)
if hasattr(obj, 'generated_pc'):
self._check_generated_pc_deprecation(obj)
processed_reqs.append(obj.generated_pc)
elif isinstance(obj, dependencies.PkgConfigDependency):
if obj.found():
processed_reqs.append(obj.name)
self.add_version_reqs(obj.name, obj.version_reqs)
elif isinstance(obj, str):
name, version_req = self.split_version_req(obj)
processed_reqs.append(name)
self.add_version_reqs(name, version_req)
elif isinstance(obj, dependencies.Dependency) and not obj.found():
pass
elif isinstance(obj, ThreadDependency):
pass
else:
raise mesonlib.MesonException('requires argument not a string, '
'library with pkgconfig-generated file '
'or pkgconfig-dependency object, '
'got {!r}'.format(obj))
return processed_reqs
def add_cflags(self, cflags):
self.cflags += mesonlib.stringlistify(cflags)
def _process_libs(self, libs, public: bool):
libs = mesonlib.listify(libs)
processed_libs = []
processed_reqs = []
processed_cflags = []
for obj in libs:
if hasattr(obj, 'generated_pc'):
self._check_generated_pc_deprecation(obj)
processed_reqs.append(obj.generated_pc)
elif isinstance(obj, dependencies.PkgConfigDependency):
if obj.found():
processed_reqs.append(obj.name)
self.add_version_reqs(obj.name, obj.version_reqs)
elif isinstance(obj, dependencies.InternalDependency):
if obj.found():
processed_libs += obj.get_link_args()
processed_cflags += obj.get_compile_args()
self._add_lib_dependencies(obj.libraries, obj.whole_libraries, obj.ext_deps, public, private_external_deps=True)
elif isinstance(obj, dependencies.Dependency):
if obj.found():
processed_libs += obj.get_link_args()
processed_cflags += obj.get_compile_args()
elif isinstance(obj, build.SharedLibrary) and obj.shared_library_only:
processed_libs.append(obj)
elif isinstance(obj, (build.SharedLibrary, build.StaticLibrary)):
processed_libs.append(obj)
self._add_lib_dependencies(obj.link_targets,
obj.link_whole_targets,
obj.external_deps,
isinstance(obj, build.StaticLibrary) and public)
elif isinstance(obj, (build.CustomTarget, build.CustomTargetIndex)):
if not obj.is_linkable_target():
raise mesonlib.MesonException('library argument contains a not linkable custom_target.')
FeatureNew.single_use('custom_target in pkgconfig.generate libraries', '0.58.0', self.state.subproject)
processed_libs.append(obj)
elif isinstance(obj, str):
processed_libs.append(obj)
else:
raise mesonlib.MesonException(f'library argument of type {type(obj).__name__} not a string, library or dependency object.')
return processed_libs, processed_reqs, processed_cflags
def _add_lib_dependencies(self, link_targets, link_whole_targets, external_deps, public, private_external_deps=False):
add_libs = self.add_pub_libs if public else self.add_priv_libs
for t in link_targets:
if t.is_internal():
self._add_link_whole(t, public)
else:
add_libs([t])
for t in link_whole_targets:
self._add_link_whole(t, public)
if private_external_deps:
self.add_priv_libs(external_deps)
else:
add_libs(external_deps)
def _add_link_whole(self, t, public):
        # Don't include static libraries that we link_whole. But we still need to
        # include their dependencies: a static library we link_whole
# could itself link to a shared library or an installed static library.
# Keep track of link_whole_targets so we can remove them from our
# lists in case a library is link_with and link_whole at the same time.
# See remove_dups() below.
self.link_whole_targets.append(t)
self._add_lib_dependencies(t.link_targets, t.link_whole_targets, t.external_deps, public)
def add_version_reqs(self, name, version_reqs):
if version_reqs:
if name not in self.version_reqs:
self.version_reqs[name] = set()
# Note that pkg-config is picky about whitespace.
# 'foo > 1.2' is ok but 'foo>1.2' is not.
            # 'foo, bar' is ok, but 'foo,bar' is not.
            new_vreqs = mesonlib.stringlistify(version_reqs)
self.version_reqs[name].update(new_vreqs)
def split_version_req(self, s):
for op in ['>=', '<=', '!=', '==', '=', '>', '<']:
pos = s.find(op)
if pos > 0:
return s[0:pos].strip(), s[pos:].strip()
return s, None
def format_vreq(self, vreq):
for op in ['>=', '<=', '!=', '==', '=', '>', '<']:
if vreq.startswith(op):
return op + ' ' + vreq[len(op):]
return vreq
def format_reqs(self, reqs):
result = []
for name in reqs:
vreqs = self.version_reqs.get(name, None)
if vreqs:
result += [name + ' ' + self.format_vreq(vreq) for vreq in vreqs]
else:
result += [name]
return ', '.join(result)
def remove_dups(self):
exclude = set()
        # We can't just check if 'x' is excluded because we could have copies of
        # the same SharedLibrary object for example.
def _ids(x):
if hasattr(x, 'generated_pc'):
yield x.generated_pc
if isinstance(x, build.Target):
yield x.get_id()
yield x
# Exclude 'x' in all its forms and return if it was already excluded
def _add_exclude(x):
was_excluded = False
for i in _ids(x):
if i in exclude:
was_excluded = True
else:
exclude.add(i)
return was_excluded
# link_whole targets are already part of other targets, exclude them all.
for t in self.link_whole_targets:
_add_exclude(t)
def _fn(xs, libs=False):
# Remove duplicates whilst preserving original order
result = []
for x in xs:
                # Don't de-dup unknown strings to avoid messing up arguments like:
                # ['-framework', 'CoreAudio', '-framework', 'CoreMedia']
known_flags = ['-pthread']
cannot_dedup = libs and isinstance(x, str) and \
not x.startswith(('-l', '-L')) and \
x not in known_flags
if not cannot_dedup and _add_exclude(x):
continue
result.append(x)
return result
self.pub_reqs = _fn(self.pub_reqs)
self.pub_libs = _fn(self.pub_libs, True)
self.priv_reqs = _fn(self.priv_reqs)
self.priv_libs = _fn(self.priv_libs, True)
exclude = set()
self.cflags = _fn(self.cflags)
class PkgConfigModule(ExtensionModule):
def __init__(self, interpreter):
super().__init__(interpreter)
self.methods.update({
'generate': self.generate,
})
def _get_lname(self, l, msg, pcfile, is_custom_target):
if is_custom_target:
basename = os.path.basename(l.get_filename())
name = os.path.splitext(basename)[0]
if name.startswith('lib'):
name = name[3:]
return name
if not l.name_prefix_set:
return l.name
if l.prefix == '' and l.name.startswith('lib'):
return l.name[3:]
if isinstance(l, build.SharedLibrary) and l.import_filename:
return l.name
        # In other cases, we can't guarantee that the compiler will be able to
        # find the library via '-lfoo', so tell the user that.
mlog.warning(msg.format(l.name, 'name_prefix', l.name, pcfile))
return l.name
def _escape(self, value):
# We should always write out paths with / because pkg-config requires
# spaces to be quoted with \ and that messes up on Windows:
# https://bugs.freedesktop.org/show_bug.cgi?id=103203
if isinstance(value, PurePath):
value = value.as_posix()
return value.replace(' ', r'\ ')
def _make_relative(self, prefix, subdir):
prefix = PurePath(prefix)
subdir = PurePath(subdir)
try:
return subdir.relative_to(prefix).as_posix()
except ValueError:
return subdir.as_posix()
def _generate_pkgconfig_file(self, state, deps, subdirs, name, description,
url, version, pcfile, conflicts, variables,
unescaped_variables, uninstalled=False, dataonly=False):
coredata = state.environment.get_coredata()
if uninstalled:
outdir = os.path.join(state.environment.build_dir, 'meson-uninstalled')
if not os.path.exists(outdir):
os.mkdir(outdir)
prefix = PurePath(state.environment.get_build_dir())
srcdir = PurePath(state.environment.get_source_dir())
else:
outdir = state.environment.scratch_dir
prefix = PurePath(coredata.get_option(mesonlib.OptionKey('prefix')))
# These always return paths relative to prefix
libdir = PurePath(coredata.get_option(mesonlib.OptionKey('libdir')))
incdir = PurePath(coredata.get_option(mesonlib.OptionKey('includedir')))
fname = os.path.join(outdir, pcfile)
with open(fname, 'w', encoding='utf-8') as ofile:
if not dataonly:
ofile.write('prefix={}\n'.format(self._escape(prefix)))
if uninstalled:
ofile.write('srcdir={}\n'.format(self._escape(srcdir)))
ofile.write('libdir={}\n'.format(self._escape('${prefix}' / libdir)))
ofile.write('includedir={}\n'.format(self._escape('${prefix}' / incdir)))
if variables or unescaped_variables:
ofile.write('\n')
for k, v in variables:
ofile.write('{}={}\n'.format(k, self._escape(v)))
for k, v in unescaped_variables:
ofile.write(f'{k}={v}\n')
ofile.write('\n')
ofile.write('Name: %s\n' % name)
if len(description) > 0:
ofile.write('Description: %s\n' % description)
if len(url) > 0:
ofile.write('URL: %s\n' % url)
ofile.write('Version: %s\n' % version)
reqs_str = deps.format_reqs(deps.pub_reqs)
if len(reqs_str) > 0:
ofile.write(f'Requires: {reqs_str}\n')
reqs_str = deps.format_reqs(deps.priv_reqs)
if len(reqs_str) > 0:
ofile.write(f'Requires.private: {reqs_str}\n')
if len(conflicts) > 0:
ofile.write('Conflicts: {}\n'.format(' '.join(conflicts)))
def generate_libs_flags(libs):
msg = 'Library target {0!r} has {1!r} set. Compilers ' \
'may not find it from its \'-l{2}\' linker flag in the ' \
'{3!r} pkg-config file.'
Lflags = []
for l in libs:
if isinstance(l, str):
yield l
else:
if uninstalled:
install_dir = os.path.dirname(state.backend.get_target_filename_abs(l))
else:
install_dir = l.get_custom_install_dir()[0]
if install_dir is False:
continue
is_custom_target = isinstance(l, (build.CustomTarget, build.CustomTargetIndex))
if not is_custom_target and 'cs' in l.compilers:
if isinstance(install_dir, str):
Lflag = '-r${{prefix}}/{}/{}'.format(self._escape(self._make_relative(prefix, install_dir)), l.filename)
else: # install_dir is True
Lflag = '-r${libdir}/%s' % l.filename
else:
if isinstance(install_dir, str):
Lflag = '-L${prefix}/%s' % self._escape(self._make_relative(prefix, install_dir))
else: # install_dir is True
Lflag = '-L${libdir}'
if Lflag not in Lflags:
Lflags.append(Lflag)
yield Lflag
lname = self._get_lname(l, msg, pcfile, is_custom_target)
# If using a custom suffix, the compiler may not be able to
# find the library
if not is_custom_target and l.name_suffix_set:
mlog.warning(msg.format(l.name, 'name_suffix', lname, pcfile))
if is_custom_target or 'cs' not in l.compilers:
yield '-l%s' % lname
def get_uninstalled_include_dirs(libs):
result = []
for l in libs:
if isinstance(l, (str, build.CustomTarget, build.CustomTargetIndex)):
continue
if l.get_subdir() not in result:
result.append(l.get_subdir())
for i in l.get_include_dirs():
curdir = i.get_curdir()
for d in i.get_incdirs():
path = os.path.join(curdir, d)
if path not in result:
result.append(path)
return result
def generate_uninstalled_cflags(libs):
for d in get_uninstalled_include_dirs(libs):
for basedir in ['${prefix}', '${srcdir}']:
path = PurePath(basedir, d)
yield '-I%s' % self._escape(path.as_posix())
if len(deps.pub_libs) > 0:
ofile.write('Libs: {}\n'.format(' '.join(generate_libs_flags(deps.pub_libs))))
if len(deps.priv_libs) > 0:
ofile.write('Libs.private: {}\n'.format(' '.join(generate_libs_flags(deps.priv_libs))))
cflags = []
if uninstalled:
cflags += generate_uninstalled_cflags(deps.pub_libs + deps.priv_libs)
else:
for d in subdirs:
if d == '.':
cflags.append('-I${includedir}')
else:
cflags.append(self._escape(PurePath('-I${includedir}') / d))
cflags += [self._escape(f) for f in deps.cflags]
if cflags and not dataonly:
ofile.write('Cflags: {}\n'.format(' '.join(cflags)))
@FeatureNewKwargs('pkgconfig.generate', '0.59.0', ['unescaped_variables', 'unescaped_uninstalled_variables'])
@FeatureNewKwargs('pkgconfig.generate', '0.54.0', ['uninstalled_variables'])
@FeatureNewKwargs('pkgconfig.generate', '0.42.0', ['extra_cflags'])
@FeatureNewKwargs('pkgconfig.generate', '0.41.0', ['variables'])
@FeatureNewKwargs('pkgconfig.generate', '0.54.0', ['dataonly'])
@permittedKwargs({'libraries', 'version', 'name', 'description', 'filebase',
'subdirs', 'requires', 'requires_private', 'libraries_private',
'install_dir', 'extra_cflags', 'variables', 'url', 'd_module_versions',
'dataonly', 'conflicts', 'uninstalled_variables',
'unescaped_variables', 'unescaped_uninstalled_variables'})
def generate(self, state, args, kwargs):
default_version = state.project_version['version']
default_install_dir = None
default_description = None
default_name = None
mainlib = None
default_subdirs = ['.']
if not args and 'version' not in kwargs:
FeatureNew.single_use('pkgconfig.generate implicit version keyword', '0.46.0', state.subproject)
elif len(args) == 1:
FeatureNew.single_use('pkgconfig.generate optional positional argument', '0.46.0', state.subproject)
mainlib = args[0]
if not isinstance(mainlib, (build.StaticLibrary, build.SharedLibrary)):
raise mesonlib.MesonException('Pkgconfig_gen first positional argument must be a library object')
default_name = mainlib.name
default_description = state.project_name + ': ' + mainlib.name
install_dir = mainlib.get_custom_install_dir()[0]
if isinstance(install_dir, str):
default_install_dir = os.path.join(install_dir, 'pkgconfig')
elif len(args) > 1:
raise mesonlib.MesonException('Too many positional arguments passed to Pkgconfig_gen.')
dataonly = kwargs.get('dataonly', False)
if not isinstance(dataonly, bool):
raise mesonlib.MesonException('dataonly must be boolean.')
if dataonly:
default_subdirs = []
blocked_vars = ['libraries', 'libraries_private', 'require_private', 'extra_cflags', 'subdirs']
if any(k in kwargs for k in blocked_vars):
raise mesonlib.MesonException(f'Cannot combine dataonly with any of {blocked_vars}')
subdirs = mesonlib.stringlistify(kwargs.get('subdirs', default_subdirs))
version = kwargs.get('version', default_version)
if not isinstance(version, str):
raise mesonlib.MesonException('Version must be specified.')
name = kwargs.get('name', default_name)
if not isinstance(name, str):
raise mesonlib.MesonException('Name not specified.')
filebase = kwargs.get('filebase', name)
if not isinstance(filebase, str):
raise mesonlib.MesonException('Filebase must be a string.')
description = kwargs.get('description', default_description)
if not isinstance(description, str):
raise mesonlib.MesonException('Description is not a string.')
url = kwargs.get('url', '')
if not isinstance(url, str):
raise mesonlib.MesonException('URL is not a string.')
conflicts = mesonlib.stringlistify(kwargs.get('conflicts', []))
# Prepend the main library to public libraries list. This is required
# so dep.add_pub_libs() can handle dependency ordering correctly and put
# extra libraries after the main library.
libraries = mesonlib.extract_as_list(kwargs, 'libraries')
if mainlib:
libraries = [mainlib] + libraries
deps = DependenciesHelper(state, filebase)
deps.add_pub_libs(libraries)
deps.add_priv_libs(kwargs.get('libraries_private', []))
deps.add_pub_reqs(kwargs.get('requires', []))
deps.add_priv_reqs(kwargs.get('requires_private', []))
deps.add_cflags(kwargs.get('extra_cflags', []))
dversions = kwargs.get('d_module_versions', None)
if dversions:
compiler = state.environment.coredata.compilers.host.get('d')
if compiler:
deps.add_cflags(compiler.get_feature_args({'versions': dversions}, None))
deps.remove_dups()
def parse_variable_list(vardict):
reserved = ['prefix', 'libdir', 'includedir']
variables = []
for name, value in vardict.items():
if not dataonly and name in reserved:
raise mesonlib.MesonException(f'Variable "{name}" is reserved')
variables.append((name, value))
return variables
variables = self.interpreter.extract_variables(kwargs, dict_new=True)
variables = parse_variable_list(variables)
unescaped_variables = self.interpreter.extract_variables(kwargs, argname='unescaped_variables')
unescaped_variables = parse_variable_list(unescaped_variables)
pcfile = filebase + '.pc'
pkgroot = pkgroot_name = kwargs.get('install_dir', default_install_dir)
if pkgroot is None:
if mesonlib.is_freebsd():
pkgroot = os.path.join(state.environment.coredata.get_option(mesonlib.OptionKey('prefix')), 'libdata', 'pkgconfig')
pkgroot_name = os.path.join('{prefix}', 'libdata', 'pkgconfig')
else:
pkgroot = os.path.join(state.environment.coredata.get_option(mesonlib.OptionKey('libdir')), 'pkgconfig')
pkgroot_name = os.path.join('{libdir}', 'pkgconfig')
if not isinstance(pkgroot, str):
raise mesonlib.MesonException('Install_dir must be a string.')
self._generate_pkgconfig_file(state, deps, subdirs, name, description, url,
version, pcfile, conflicts, variables,
unescaped_variables, False, dataonly)
res = build.Data([mesonlib.File(True, state.environment.get_scratch_dir(), pcfile)], pkgroot, pkgroot_name, None, state.subproject, install_tag='devel')
variables = self.interpreter.extract_variables(kwargs, argname='uninstalled_variables', dict_new=True)
variables = parse_variable_list(variables)
unescaped_variables = self.interpreter.extract_variables(kwargs, argname='unescaped_uninstalled_variables')
unescaped_variables = parse_variable_list(unescaped_variables)
pcfile = filebase + '-uninstalled.pc'
self._generate_pkgconfig_file(state, deps, subdirs, name, description, url,
version, pcfile, conflicts, variables,
unescaped_variables, uninstalled=True, dataonly=dataonly)
        # Associate the main library with this generated pc file. If the library
        # is used in any subsequent pkgconfig.generate() call, it will be listed
        # in 'Requires:' or 'Requires.private:' instead of 'Libs:'.
# Backward compatibility: We used to set 'generated_pc' on all public
# libraries instead of just the main one. Keep doing that but warn if
# anyone is relying on that deprecated behaviour.
if mainlib:
if not hasattr(mainlib, 'generated_pc'):
mainlib.generated_pc = filebase
else:
mlog.warning('Already generated a pkg-config file for', mlog.bold(mainlib.name))
else:
for lib in deps.pub_libs:
if not isinstance(lib, str) and not hasattr(lib, 'generated_pc'):
lib.generated_pc = filebase
location = state.current_node
lib.generated_pc_warn = [name, location]
return ModuleReturnValue(res, [res])
def initialize(*args, **kwargs):
return PkgConfigModule(*args, **kwargs)
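For reference, the version-requirement handling above (`split_version_req` and `format_vreq`) can be exercised standalone. The sketch below re-implements just that normalization outside of Meson; pkg-config is whitespace-sensitive about operators, which is why the operator is re-spaced:

```python
# Standalone sketch of the version-requirement normalization used above.
# pkg-config accepts 'foo >= 1.2' but rejects 'foo>=1.2'.
_OPS = ['>=', '<=', '!=', '==', '=', '>', '<']

def split_version_req(s):
    """Split 'foo>=1.2' into ('foo', '>=1.2'); return (s, None) if no operator."""
    for op in _OPS:
        pos = s.find(op)
        if pos > 0:
            return s[0:pos].strip(), s[pos:].strip()
    return s, None

def format_vreq(vreq):
    """Insert the space pkg-config requires between operator and version."""
    for op in _OPS:
        if vreq.startswith(op):
            return op + ' ' + vreq[len(op):]
    return vreq

name, vreq = split_version_req('glib-2.0>=2.64')
print(name, format_vreq(vreq))  # prints: glib-2.0 >= 2.64
```

Note the operator list is ordered so that two-character operators match before their one-character prefixes (`'>='` before `'>'`), exactly as in the module above.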
# File: functions/socketio/stream.py (repo: codions-forks/flask-nginx-rtmp-manager, license: MIT)
from flask_socketio import emit
from flask_security import current_user
from sqlalchemy import update
from classes.shared import db, socketio
from classes import Channel
from classes import Stream
from classes import settings
from functions import system
from functions import webhookFunc
from functions import templateFilters
from functions import xmpp
from functions import cachedDbCalls
from app import r
@socketio.on('getViewerTotal')
def handle_viewer_total_request(streamData, room=None):
channelLoc = str(streamData['data'])
viewers = xmpp.getChannelCounts(channelLoc)
ChannelUpdateStatement = (update(Channel.Channel).where(Channel.Channel.channelLoc == channelLoc).values(channelViewers=viewers))
channelQuery = Channel.Channel.query.filter_by(channelLoc=channelLoc).with_entities(Channel.Channel.id).first()
StreamUpdateStatement = (update(Stream.Stream).where(Stream.Stream.linkedChannel == channelQuery.id).values(currentViewers=viewers))
    db.session.execute(ChannelUpdateStatement)
    db.session.execute(StreamUpdateStatement)
    db.session.commit()
db.session.close()
if room is None:
emit('viewerTotalResponse', {'data': str(viewers)})
else:
emit('viewerTotalResponse', {'data': str(viewers)}, room=room)
return 'OK'
@socketio.on('updateStreamData')
def updateStreamData(message):
channelLoc = message['channel']
sysSettings = cachedDbCalls.getSystemSettings()
channelQuery = Channel.Channel.query.filter_by(channelLoc=channelLoc, owningUser=current_user.id).first()
if channelQuery is not None:
stream = channelQuery.stream[0]
stream.streamName = system.strip_html(message['name'])
stream.topic = int(message['topic'])
db.session.commit()
if channelQuery.imageLocation is None:
channelImage = (sysSettings.siteProtocol + sysSettings.siteAddress + "/static/img/video-placeholder.jpg")
else:
channelImage = (sysSettings.siteProtocol + sysSettings.siteAddress + "/images/" + channelQuery.imageLocation)
webhookFunc.runWebhook(channelQuery.id, 4, channelname=channelQuery.channelName,
channelurl=(sysSettings.siteProtocol + sysSettings.siteAddress + "/channel/" + str(channelQuery.id)),
channeltopic=channelQuery.topic,
channelimage=channelImage, streamer=templateFilters.get_userName(channelQuery.owningUser),
channeldescription=str(channelQuery.description),
streamname=stream.streamName,
streamurl=(sysSettings.siteProtocol + sysSettings.siteAddress + "/view/" + channelQuery.channelLoc),
streamtopic=templateFilters.get_topicName(stream.topic),
streamimage=(sysSettings.siteProtocol + sysSettings.siteAddress + "/stream-thumb/" + channelQuery.channelLoc + ".png"))
db.session.commit()
db.session.close()
db.session.commit()
db.session.close()
    return 'OK'
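The handler above only branches on whether a `room` was supplied before emitting `viewerTotalResponse`. As a hedged, framework-free sketch (the name `make_viewer_responder` is illustrative and not part of this codebase), the same dispatch pattern can be unit-tested by injecting the emit function in place of `flask_socketio.emit`:

```python
# Hypothetical, framework-free sketch of the room-dispatch pattern used in
# handle_viewer_total_request: build the payload once, then emit either
# broadcast-style or to one room. 'emit_fn' stands in for flask_socketio.emit.
def make_viewer_responder(emit_fn):
    def respond(viewers, room=None):
        payload = {'data': str(viewers)}
        if room is None:
            emit_fn('viewerTotalResponse', payload)
        else:
            emit_fn('viewerTotalResponse', payload, room=room)
        return 'OK'
    return respond

# Capture emitted events in a list instead of pushing them over a socket.
sent = []
respond = make_viewer_responder(lambda event, data, **kw: sent.append((event, data, kw)))
respond(42)
respond(7, room='channel-1')
```

Injecting the emitter keeps the test free of a running SocketIO server while still covering both branches.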
# File: sevenseconds/config/bastion.py (repo: jonathanbeber/sevenseconds, license: Apache-2.0)
import time
import socket
import yaml
import datetime
import base64
import difflib
import botocore.exceptions
import requests
import json
from copy import deepcopy
from ..helper import info, warning, error, ActionOnExit, substitute_template_vars
from ..helper.aws import filter_subnets, associate_address, get_tag
from .route53 import configure_dns_record, delete_dns_record
from ..config import AccountData
def configure_bastion_host(account: AccountData, vpc: object, region: str, base_ami_id: str):
ec2 = account.session.resource('ec2', region)
cf = account.session.resource('cloudformation', region)
cfc = account.session.client('cloudformation', region)
re_deploy = account.config['bastion'].get('re_deploy', account.options.get('redeploy_odd_host'))
bastion_version = None
if account.config['bastion'].get('version_url'):
with ActionOnExit('Get last Tag for Bastion Image...') as act:
r = requests.get(account.config['bastion'].get('version_url'))
if r.status_code != 200:
act.error('Error code: {}'.format(r.status_code))
act.error('Error msg: {}'.format(r.text))
return
tags = sorted(r.json(), key=lambda x: x['created'], reverse=True)
bastion_version = tags[0]['name']
act.ok(bastion_version)
config = substitute_template_vars(account.config['bastion'].get('ami_config'),
{'account_name': account.name,
'vpc_net': str(vpc.cidr_block),
'version': bastion_version})
user_data = '#taupage-ami-config\n{}'.format(yaml.safe_dump(config)).encode('utf-8')
# Search all existing hosts (Instances and Cloudformation)
instance_filter = [
{'Name': 'tag:Name',
'Values': ['Odd (SSH Bastion Host)']},
{'Name': 'instance-state-name',
'Values': ['running', 'pending', 'stopping', 'stopped']},
]
legacy_instances = list(vpc.instances.filter(Filters=instance_filter))
for instance in legacy_instances:
# Terminate old (stopped) Odd Systems
if instance.state.get('Name') == 'stopped':
drop_bastionhost(instance)
else:
            # Verify the running version (userdata and AMI id)
inst_user_data = base64.b64decode(instance.describe_attribute(Attribute='userData')['UserData']['Value'])
if instance.image_id != base_ami_id:
                error('{} uses {} instead of {}.'.format(instance.id, instance.image_id, base_ami_id))
                if re_deploy or account.options.get('update_odd_host'):
                    error(' ==> forcing re-deploy')
re_deploy = True
if inst_user_data != user_data:
original = inst_user_data.decode('utf-8')
new = user_data.decode('utf-8')
diff = difflib.ndiff(original.splitlines(1), new.splitlines(1))
                error('{} uses a different UserData\n{}'.format(instance.id, ''.join(diff)))
                if re_deploy or account.options.get('update_odd_host'):
                    error(' ==> forcing re-deploy')
re_deploy = True
launch_time = instance.launch_time
if (not wait_for_ssh_port(instance.public_ip_address, 60) and
datetime.timedelta(minutes=15) < datetime.datetime.now(launch_time.tzinfo) - launch_time):
                error('Bastion host does not respond. Dropping bastion host and creating a new one.')
drop_bastionhost(instance)
legacy_instances = None
# Start migration
if legacy_instances and re_deploy:
for instance in legacy_instances:
drop_bastionhost(instance)
legacy_instances = None
update_needed = False
    # Check CloudFormation-managed Odd hosts
cloudformation_filter = [
{'Name': 'tag:aws:cloudformation:logical-id',
'Values': ['OddServerInstance']},
{'Name': 'instance-state-name',
'Values': ['running', 'pending', 'stopping', 'stopped']},
]
cloudformation_instances = list(vpc.instances.filter(Filters=cloudformation_filter))
if cloudformation_instances:
for instance in cloudformation_instances:
# Terminate old (stopped) Odd Systems
if instance.state.get('Name') == 'stopped':
drop_bastionhost(instance)
else:
                # Verify the running version (CF stack parameters)
oddstack = cf.Stack(get_tag(instance.tags, 'aws:cloudformation:stack-name'))
used_ami_id = get_tag(oddstack.parameters, 'TaupageId', prefix='Parameter')
if used_ami_id != base_ami_id:
                    error('{} uses {} instead of {}.'.format(oddstack.name, used_ami_id, base_ami_id))
if re_deploy or account.options.get('update_odd_host'):
error(' ==> prepare change set')
update_needed = True
used_bastion_version = get_tag(oddstack.parameters, 'OddRelease', prefix='Parameter')
if used_bastion_version != bastion_version:
                    error('{} uses {} instead of {}.'.format(oddstack.name, used_bastion_version, bastion_version))
if re_deploy or account.options.get('update_odd_host'):
error(' ==> prepare change set')
update_needed = True
if update_needed or re_deploy:
update_cf_bastion_host(account, vpc, region, oddstack, base_ami_id, bastion_version)
if not legacy_instances:
info('check old odd security groups')
cleanup_old_security_group(account, region, oddstack, vpc)
if not legacy_instances and not cloudformation_instances:
try:
stack = cf.Stack('Odd')
info('Stack Status: {}'.format(stack.stack_status))
        except Exception:
            create_cf_bastion_host(account, vpc, region, base_ami_id, bastion_version)
            stack = cf.Stack('Odd')
if stack.stack_status in ('UPDATE_IN_PROGRESS', 'CREATE_IN_PROGRESS'):
if stack.stack_status.startswith('UPDATE_'):
waiter = cfc.get_waiter('stack_update_complete')
else:
waiter = cfc.get_waiter('stack_create_complete')
            with ActionOnExit('Waiting for stack operation to complete') as act:
try:
waiter.wait(StackName='Odd')
except botocore.exceptions.WaiterError as e:
act.error('Stack creation failed: {}'.format(e))
return
info('check old odd security groups')
cleanup_old_security_group(account, region, stack, vpc)
instance = ec2.Instance(stack.Resource(logical_id='OddServerInstance').physical_resource_id)
launch_time = instance.launch_time
if (not wait_for_ssh_port(instance.public_ip_address, 60) and
datetime.timedelta(minutes=15) < datetime.datetime.now(launch_time.tzinfo) - launch_time):
            error('Bastion host does not respond. Forcing an update of the bastion host stack.')
update_cf_bastion_host(account, vpc, region, stack, base_ami_id, bastion_version)
def cleanup_old_security_group(account: AccountData, region: str, oddstack: object, vpc: object):
ec2 = account.session.resource('ec2', region)
stack_security_group_id = oddstack.Resource(logical_id='OddSecurityGroup').physical_resource_id
sgs = [x for x in vpc.security_groups.all() if x.group_name == 'Odd (SSH Bastion Host)']
for sg in sgs:
with ActionOnExit('Found old Odd Security Group {}/{}'.format(sg.id, sg.group_name)) as act:
for sg_depency in vpc.meta.client.describe_security_groups(Filters=[
{
'Name': 'ip-permission.group-id',
'Values': [
sg.group_id,
]
},
])['SecurityGroups']:
sg_depency = ec2.SecurityGroup(sg_depency.get('GroupId'))
with ActionOnExit(
'Found old Odd SG depency in Security Group {}/{}'
.format(sg_depency.id, sg_depency.group_name)) as act:
for permission in sg_depency.ip_permissions:
_change_permission(sg_depency, permission, sg.group_id, stack_security_group_id, 'ingress', act)
for permission in sg_depency.ip_permissions_egress:
_change_permission(sg_depency, permission, sg.group_id, stack_security_group_id, 'egress', act)
try:
sg.delete()
act.ok('removed')
except Exception as e:
act.error('Can\'t cleanup old Odd Stack: {}'.format(e))
def _change_permission(sg, permission, old_group_id, new_group_id, direction, act):
old_permission = deepcopy(permission)
replace = False
for user_id_group_pair in permission.get('UserIdGroupPairs', []):
if user_id_group_pair.get('GroupId') == old_group_id:
user_id_group_pair['GroupId'] = new_group_id
replace = True
if permission.get('UserIdGroupPairs'):
permission['UserIdGroupPairs'] = list(
dict(
(v['GroupId'], v) for v in permission['UserIdGroupPairs']
).values()
)
if replace:
try:
if direction == 'egress':
sg.revoke_egress(IpPermissions=[old_permission])
elif direction == 'ingress':
sg.revoke_ingress(IpPermissions=[old_permission])
except Exception as e:
act.error('Can\'t revoke the Permissions: {}'.format(e))
try:
if direction == 'egress':
sg.authorize_egress(IpPermissions=[permission])
elif direction == 'ingress':
sg.authorize_ingress(IpPermissions=[permission])
except Exception as e:
act.error('Can\'t authorize the Permissions: {}'.format(e))
def create_cf_bastion_host(account: AccountData, vpc: object, region: str, ami_id: str, bastion_version: str):
cf = account.session.resource('cloudformation', region)
cfc = account.session.client('cloudformation', region)
ec2c = account.session.client('ec2', region)
subnet_ids = [a.id for a in filter_subnets(vpc, 'dmz')]
if not subnet_ids:
warning('No DMZ subnet found')
return
allocation_id, ip = associate_address(ec2c)
stackname = 'Odd'
stack = cf.create_stack(
StackName=stackname,
TemplateBody=json.dumps(account.config['bastion'].get('cf_template')),
Parameters=[
{
'ParameterKey': 'AccountName',
'ParameterValue': account.name
},
{
'ParameterKey': 'DisableApiTermination',
'ParameterValue': 'false'
},
{
'ParameterKey': 'EIPAllocation',
'ParameterValue': allocation_id
},
{
'ParameterKey': 'OddRelease',
'ParameterValue': bastion_version
},
{
'ParameterKey': 'SubnetId',
'ParameterValue': subnet_ids[0]
},
{
'ParameterKey': 'TaupageId',
'ParameterValue': ami_id
},
{
'ParameterKey': 'VPCNetwork',
'ParameterValue': str(vpc.cidr_block)
},
{
'ParameterKey': 'VpcId',
'ParameterValue': vpc.id
}
],
OnFailure='DELETE',
Tags=[
{'Key': 'LastUpdate', 'Value': time.strftime('%Y-%m-%dT%H:%M:%S%z')},
{'Key': 'InfrastructureComponent', 'Value': 'true'}
]
)
    with ActionOnExit('Waiting for stack create to complete') as act:
waiter = cfc.get_waiter('stack_create_complete')
try:
waiter.wait(StackName=stack.name)
except botocore.exceptions.WaiterError as e:
act.error('Stack creation failed: {}'.format(e))
return
info('SSH Bastion instance is running with public IP {}'.format(ip))
if account.domain is not None:
configure_dns_record(account, 'odd-{}'.format(region), ip)
else:
warning('No DNS domain configured, skipping record creation')
def update_cf_bastion_host(account: AccountData, vpc: object, region: str, stack: object, ami_id: str,
bastion_version: str):
cloudformation = account.session.client('cloudformation', region)
    # Switch the subnet on every update to force reinitialisation
current_subnet = get_tag(stack.parameters, 'SubnetId', prefix='Parameter')
subnet_ids = [a.id for a in filter_subnets(vpc, 'dmz')]
if current_subnet in subnet_ids:
subnet_ids.remove(current_subnet)
if not subnet_ids:
warning('No DMZ subnet found')
return
response = stack.update(
TemplateBody=json.dumps(account.config['bastion'].get('cf_template')),
Parameters=[
{
'ParameterKey': 'AccountName',
'ParameterValue': account.name
},
{
'ParameterKey': 'DisableApiTermination',
'ParameterValue': 'false'
},
{
'ParameterKey': 'EIPAllocation',
'ParameterValue': get_tag(stack.parameters, 'EIPAllocation', prefix='Parameter')
},
{
'ParameterKey': 'OddRelease',
'ParameterValue': bastion_version
},
{
'ParameterKey': 'SubnetId',
'ParameterValue': subnet_ids[0]
},
{
'ParameterKey': 'TaupageId',
'ParameterValue': ami_id
},
{
'ParameterKey': 'VPCNetwork',
'ParameterValue': str(vpc.cidr_block)
},
{
'ParameterKey': 'VpcId',
'ParameterValue': vpc.id
}
],
Tags=[
{'Key': 'LastUpdate', 'Value': time.strftime('%Y-%m-%dT%H:%M:%S%z')},
{'Key': 'InfrastructureComponent', 'Value': 'true'}
]
)
info(response)
    with ActionOnExit('Waiting for stack update to complete') as act:
waiter = cloudformation.get_waiter('stack_update_complete')
try:
waiter.wait(StackName=stack.name)
except botocore.exceptions.WaiterError as e:
            act.error('Stack update failed: {}'.format(e))
return
def drop_bastionhost(instance):
with ActionOnExit('Terminating SSH Bastion host..'):
instance.reload()
if instance.state.get('Name') in ('running', 'pending', 'stopping', 'stopped'):
instance.modify_attribute(Attribute='disableApiTermination', Value='false')
instance.terminate()
instance.wait_until_terminated()
def wait_for_ssh_port(host: str, timeout: int):
start = time.time()
with ActionOnExit('Waiting for SSH port of {}..'.format(host)) as act:
while True:
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
result = sock.connect_ex((host, 22))
except Exception:
result = -1
if result == 0:
return True
if time.time() - start > timeout:
act.error('TIMEOUT')
return False
time.sleep(5)
act.progress()
def delete_bastion_host(account: AccountData, region: str):
ec2 = account.session.resource('ec2', region)
cf = account.session.resource('cloudformation', region)
cfc = account.session.client('cloudformation', region)
for instance in ec2.instances.all():
if get_tag(instance.tags, 'Name') == 'Odd (SSH Bastion Host)':
if instance.state.get('Name') in ('running', 'pending', 'stopping', 'stopped'):
if account.domain is not None and instance.public_ip_address:
try:
delete_dns_record(account, 'odd-{}'.format(region), instance.public_ip_address)
except Exception:
pass
drop_bastionhost(instance)
cloudformation_filter = [
{'Name': 'tag:aws:cloudformation:logical-id',
'Values': ['OddServerInstance']},
{'Name': 'instance-state-name',
'Values': ['running', 'pending', 'stopping', 'stopped']},
]
for instance in ec2.instances.filter(Filters=cloudformation_filter):
if account.domain is not None and instance.public_ip_address:
try:
delete_dns_record(account, 'odd-{}'.format(region), instance.public_ip_address)
except Exception as e:
warning('Can\'t cleanup old Odd host name: {}'.format(e))
oddstack = cf.Stack(get_tag(instance.tags, 'aws:cloudformation:stack-name'))
oddstack.delete()
waiter = cfc.get_waiter('stack_delete_complete')
        with ActionOnExit('Waiting for stack delete to complete') as act:
try:
waiter.wait(StackName=get_tag(instance.tags, 'aws:cloudformation:stack-name'))
except botocore.exceptions.WaiterError as e:
act.error('Stack delete failed: {}'.format(e))
| 43.65679 | 120 | 0.580963 | import time
import socket
import yaml
import datetime
import base64
import difflib
import botocore.exceptions
import requests
import json
from copy import deepcopy
from ..helper import info, warning, error, ActionOnExit, substitute_template_vars
from ..helper.aws import filter_subnets, associate_address, get_tag
from .route53 import configure_dns_record, delete_dns_record
from ..config import AccountData
def configure_bastion_host(account: AccountData, vpc: object, region: str, base_ami_id: str):
ec2 = account.session.resource('ec2', region)
cf = account.session.resource('cloudformation', region)
cfc = account.session.client('cloudformation', region)
re_deploy = account.config['bastion'].get('re_deploy', account.options.get('redeploy_odd_host'))
bastion_version = None
if account.config['bastion'].get('version_url'):
with ActionOnExit('Get last Tag for Bastion Image...') as act:
r = requests.get(account.config['bastion'].get('version_url'))
if r.status_code != 200:
act.error('Error code: {}'.format(r.status_code))
act.error('Error msg: {}'.format(r.text))
return
tags = sorted(r.json(), key=lambda x: x['created'], reverse=True)
bastion_version = tags[0]['name']
act.ok(bastion_version)
config = substitute_template_vars(account.config['bastion'].get('ami_config'),
{'account_name': account.name,
'vpc_net': str(vpc.cidr_block),
'version': bastion_version})
user_data = '#taupage-ami-config\n{}'.format(yaml.safe_dump(config)).encode('utf-8')
instance_filter = [
{'Name': 'tag:Name',
'Values': ['Odd (SSH Bastion Host)']},
{'Name': 'instance-state-name',
'Values': ['running', 'pending', 'stopping', 'stopped']},
]
legacy_instances = list(vpc.instances.filter(Filters=instance_filter))
for instance in legacy_instances:
if instance.state.get('Name') == 'stopped':
drop_bastionhost(instance)
else:
inst_user_data = base64.b64decode(instance.describe_attribute(Attribute='userData')['UserData']['Value'])
if instance.image_id != base_ami_id:
                error('{} uses {} instead of {}.'.format(instance.id, instance.image_id, base_ami_id))
if re_deploy or account.options.get('update_odd_host'):
error(' ==> Make re-deploy')
re_deploy = True
if inst_user_data != user_data:
original = inst_user_data.decode('utf-8')
new = user_data.decode('utf-8')
diff = difflib.ndiff(original.splitlines(1), new.splitlines(1))
error('{} use a different UserData\n{}'.format(instance.id, ''.join(diff)))
if re_deploy or account.options.get('update_odd_host'):
error(' ==> Make re-deploy')
re_deploy = True
launch_time = instance.launch_time
if (not wait_for_ssh_port(instance.public_ip_address, 60) and
datetime.timedelta(minutes=15) < datetime.datetime.now(launch_time.tzinfo) - launch_time):
                error('Bastion host does not respond. Dropping bastion host and creating a new one')
drop_bastionhost(instance)
legacy_instances = None
if legacy_instances and re_deploy:
for instance in legacy_instances:
drop_bastionhost(instance)
legacy_instances = None
update_needed = False
cloudformation_filter = [
{'Name': 'tag:aws:cloudformation:logical-id',
'Values': ['OddServerInstance']},
{'Name': 'instance-state-name',
'Values': ['running', 'pending', 'stopping', 'stopped']},
]
cloudformation_instances = list(vpc.instances.filter(Filters=cloudformation_filter))
if cloudformation_instances:
for instance in cloudformation_instances:
if instance.state.get('Name') == 'stopped':
drop_bastionhost(instance)
else:
oddstack = cf.Stack(get_tag(instance.tags, 'aws:cloudformation:stack-name'))
used_ami_id = get_tag(oddstack.parameters, 'TaupageId', prefix='Parameter')
if used_ami_id != base_ami_id:
                    error('{} uses {} instead of {}.'.format(oddstack.name, used_ami_id, base_ami_id))
if re_deploy or account.options.get('update_odd_host'):
error(' ==> prepare change set')
update_needed = True
used_bastion_version = get_tag(oddstack.parameters, 'OddRelease', prefix='Parameter')
if used_bastion_version != bastion_version:
                    error('{} uses {} instead of {}.'.format(oddstack.name, used_bastion_version, bastion_version))
if re_deploy or account.options.get('update_odd_host'):
error(' ==> prepare change set')
update_needed = True
if update_needed or re_deploy:
update_cf_bastion_host(account, vpc, region, oddstack, base_ami_id, bastion_version)
if not legacy_instances:
info('check old odd security groups')
cleanup_old_security_group(account, region, oddstack, vpc)
if not legacy_instances and not cloudformation_instances:
try:
stack = cf.Stack('Odd')
info('Stack Status: {}'.format(stack.stack_status))
except Exception:
create_cf_bastion_host(account, vpc, region, base_ami_id, bastion_version)
if stack.stack_status in ('UPDATE_IN_PROGRESS', 'CREATE_IN_PROGRESS'):
if stack.stack_status.startswith('UPDATE_'):
waiter = cfc.get_waiter('stack_update_complete')
else:
waiter = cfc.get_waiter('stack_create_complete')
            with ActionOnExit('Waiting for stack') as act:
try:
waiter.wait(StackName='Odd')
except botocore.exceptions.WaiterError as e:
act.error('Stack creation failed: {}'.format(e))
return
info('check old odd security groups')
cleanup_old_security_group(account, region, stack, vpc)
instance = ec2.Instance(stack.Resource(logical_id='OddServerInstance').physical_resource_id)
launch_time = instance.launch_time
if (not wait_for_ssh_port(instance.public_ip_address, 60) and
datetime.timedelta(minutes=15) < datetime.datetime.now(launch_time.tzinfo) - launch_time):
            error('Bastion host does not respond. Forcing an update of the bastion host stack')
update_cf_bastion_host(account, vpc, region, stack, base_ami_id, bastion_version)
def cleanup_old_security_group(account: AccountData, region: str, oddstack: object, vpc: object):
ec2 = account.session.resource('ec2', region)
stack_security_group_id = oddstack.Resource(logical_id='OddSecurityGroup').physical_resource_id
sgs = [x for x in vpc.security_groups.all() if x.group_name == 'Odd (SSH Bastion Host)']
for sg in sgs:
with ActionOnExit('Found old Odd Security Group {}/{}'.format(sg.id, sg.group_name)) as act:
for sg_depency in vpc.meta.client.describe_security_groups(Filters=[
{
'Name': 'ip-permission.group-id',
'Values': [
sg.group_id,
]
},
])['SecurityGroups']:
sg_depency = ec2.SecurityGroup(sg_depency.get('GroupId'))
with ActionOnExit(
                        'Found old Odd SG dependency in Security Group {}/{}'
.format(sg_depency.id, sg_depency.group_name)) as act:
for permission in sg_depency.ip_permissions:
_change_permission(sg_depency, permission, sg.group_id, stack_security_group_id, 'ingress', act)
for permission in sg_depency.ip_permissions_egress:
_change_permission(sg_depency, permission, sg.group_id, stack_security_group_id, 'egress', act)
try:
sg.delete()
act.ok('removed')
except Exception as e:
act.error('Can\'t cleanup old Odd Stack: {}'.format(e))
def _change_permission(sg, permission, old_group_id, new_group_id, direction, act):
old_permission = deepcopy(permission)
replace = False
for user_id_group_pair in permission.get('UserIdGroupPairs', []):
if user_id_group_pair.get('GroupId') == old_group_id:
user_id_group_pair['GroupId'] = new_group_id
replace = True
if permission.get('UserIdGroupPairs'):
permission['UserIdGroupPairs'] = list(
dict(
(v['GroupId'], v) for v in permission['UserIdGroupPairs']
).values()
)
if replace:
try:
if direction == 'egress':
sg.revoke_egress(IpPermissions=[old_permission])
elif direction == 'ingress':
sg.revoke_ingress(IpPermissions=[old_permission])
except Exception as e:
act.error('Can\'t revoke the Permissions: {}'.format(e))
try:
if direction == 'egress':
sg.authorize_egress(IpPermissions=[permission])
elif direction == 'ingress':
sg.authorize_ingress(IpPermissions=[permission])
except Exception as e:
act.error('Can\'t authorize the Permissions: {}'.format(e))
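`_change_permission` above swaps the old Odd security-group id for the new one inside a rule's `UserIdGroupPairs` and then de-duplicates the pairs by `GroupId` (the dict comprehension keeps the last pair seen per id). A minimal, boto3-free sketch of just that rewrite step, with made-up group ids:

```python
def swap_group_id(permission, old_group_id, new_group_id):
    # Point pairs that reference the old group at the new group instead.
    for pair in permission.get('UserIdGroupPairs', []):
        if pair.get('GroupId') == old_group_id:
            pair['GroupId'] = new_group_id
    # De-duplicate by GroupId; the dict keeps the last pair per id.
    if permission.get('UserIdGroupPairs'):
        permission['UserIdGroupPairs'] = list(
            dict((p['GroupId'], p) for p in permission['UserIdGroupPairs']).values()
        )
    return permission

perm = {'IpProtocol': 'tcp', 'FromPort': 22, 'ToPort': 22,
        'UserIdGroupPairs': [{'GroupId': 'sg-old'}, {'GroupId': 'sg-new'}]}
print(swap_group_id(perm, 'sg-old', 'sg-new')['UserIdGroupPairs'])  # -> [{'GroupId': 'sg-new'}]
```

The revoke/authorize round-trip against EC2 is deliberately omitted; only the in-memory rule rewrite is shown.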
def create_cf_bastion_host(account: AccountData, vpc: object, region: str, ami_id: str, bastion_version: str):
cf = account.session.resource('cloudformation', region)
cfc = account.session.client('cloudformation', region)
ec2c = account.session.client('ec2', region)
subnet_ids = [a.id for a in filter_subnets(vpc, 'dmz')]
if not subnet_ids:
warning('No DMZ subnet found')
return
allocation_id, ip = associate_address(ec2c)
stackname = 'Odd'
stack = cf.create_stack(
StackName=stackname,
TemplateBody=json.dumps(account.config['bastion'].get('cf_template')),
Parameters=[
{
'ParameterKey': 'AccountName',
'ParameterValue': account.name
},
{
'ParameterKey': 'DisableApiTermination',
'ParameterValue': 'false'
},
{
'ParameterKey': 'EIPAllocation',
'ParameterValue': allocation_id
},
{
'ParameterKey': 'OddRelease',
'ParameterValue': bastion_version
},
{
'ParameterKey': 'SubnetId',
'ParameterValue': subnet_ids[0]
},
{
'ParameterKey': 'TaupageId',
'ParameterValue': ami_id
},
{
'ParameterKey': 'VPCNetwork',
'ParameterValue': str(vpc.cidr_block)
},
{
'ParameterKey': 'VpcId',
'ParameterValue': vpc.id
}
],
OnFailure='DELETE',
Tags=[
{'Key': 'LastUpdate', 'Value': time.strftime('%Y-%m-%dT%H:%M:%S%z')},
{'Key': 'InfrastructureComponent', 'Value': 'true'}
]
)
    with ActionOnExit('Waiting for stack create to complete') as act:
waiter = cfc.get_waiter('stack_create_complete')
try:
waiter.wait(StackName=stack.name)
except botocore.exceptions.WaiterError as e:
act.error('Stack creation failed: {}'.format(e))
return
info('SSH Bastion instance is running with public IP {}'.format(ip))
if account.domain is not None:
configure_dns_record(account, 'odd-{}'.format(region), ip)
else:
warning('No DNS domain configured, skipping record creation')
def update_cf_bastion_host(account: AccountData, vpc: object, region: str, stack: object, ami_id: str,
bastion_version: str):
cloudformation = account.session.client('cloudformation', region)
    # Switch to a different DMZ subnet on every update to force re-initialisation of the host
current_subnet = get_tag(stack.parameters, 'SubnetId', prefix='Parameter')
subnet_ids = [a.id for a in filter_subnets(vpc, 'dmz')]
if current_subnet in subnet_ids:
subnet_ids.remove(current_subnet)
if not subnet_ids:
warning('No DMZ subnet found')
return
response = stack.update(
TemplateBody=json.dumps(account.config['bastion'].get('cf_template')),
Parameters=[
{
'ParameterKey': 'AccountName',
'ParameterValue': account.name
},
{
'ParameterKey': 'DisableApiTermination',
'ParameterValue': 'false'
},
{
'ParameterKey': 'EIPAllocation',
'ParameterValue': get_tag(stack.parameters, 'EIPAllocation', prefix='Parameter')
},
{
'ParameterKey': 'OddRelease',
'ParameterValue': bastion_version
},
{
'ParameterKey': 'SubnetId',
'ParameterValue': subnet_ids[0]
},
{
'ParameterKey': 'TaupageId',
'ParameterValue': ami_id
},
{
'ParameterKey': 'VPCNetwork',
'ParameterValue': str(vpc.cidr_block)
},
{
'ParameterKey': 'VpcId',
'ParameterValue': vpc.id
}
],
Tags=[
{'Key': 'LastUpdate', 'Value': time.strftime('%Y-%m-%dT%H:%M:%S%z')},
{'Key': 'InfrastructureComponent', 'Value': 'true'}
]
)
info(response)
    with ActionOnExit('Waiting for stack update to complete') as act:
waiter = cloudformation.get_waiter('stack_update_complete')
try:
waiter.wait(StackName=stack.name)
except botocore.exceptions.WaiterError as e:
            act.error('Stack update failed: {}'.format(e))
return
def drop_bastionhost(instance):
with ActionOnExit('Terminating SSH Bastion host..'):
instance.reload()
if instance.state.get('Name') in ('running', 'pending', 'stopping', 'stopped'):
instance.modify_attribute(Attribute='disableApiTermination', Value='false')
instance.terminate()
instance.wait_until_terminated()
def wait_for_ssh_port(host: str, timeout: int):
start = time.time()
with ActionOnExit('Waiting for SSH port of {}..'.format(host)) as act:
while True:
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
result = sock.connect_ex((host, 22))
except Exception:
result = -1
if result == 0:
return True
if time.time() - start > timeout:
act.error('TIMEOUT')
return False
time.sleep(5)
act.progress()
def delete_bastion_host(account: AccountData, region: str):
ec2 = account.session.resource('ec2', region)
cf = account.session.resource('cloudformation', region)
cfc = account.session.client('cloudformation', region)
for instance in ec2.instances.all():
if get_tag(instance.tags, 'Name') == 'Odd (SSH Bastion Host)':
if instance.state.get('Name') in ('running', 'pending', 'stopping', 'stopped'):
if account.domain is not None and instance.public_ip_address:
try:
delete_dns_record(account, 'odd-{}'.format(region), instance.public_ip_address)
except Exception:
pass
drop_bastionhost(instance)
cloudformation_filter = [
{'Name': 'tag:aws:cloudformation:logical-id',
'Values': ['OddServerInstance']},
{'Name': 'instance-state-name',
'Values': ['running', 'pending', 'stopping', 'stopped']},
]
for instance in ec2.instances.filter(Filters=cloudformation_filter):
if account.domain is not None and instance.public_ip_address:
try:
delete_dns_record(account, 'odd-{}'.format(region), instance.public_ip_address)
except Exception as e:
warning('Can\'t cleanup old Odd host name: {}'.format(e))
oddstack = cf.Stack(get_tag(instance.tags, 'aws:cloudformation:stack-name'))
oddstack.delete()
waiter = cfc.get_waiter('stack_delete_complete')
        with ActionOnExit('Waiting for stack delete') as act:
try:
waiter.wait(StackName=get_tag(instance.tags, 'aws:cloudformation:stack-name'))
except botocore.exceptions.WaiterError as e:
act.error('Stack delete failed: {}'.format(e))
| true | true |
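`wait_for_ssh_port` in the file above polls port 22 with `connect_ex` until the port opens or a timeout elapses. A self-contained sketch of the same polling pattern for an arbitrary host and port (the `ActionOnExit` progress reporting from the original is left out):

```python
import socket
import time

def wait_for_port(host, port, timeout, poll_interval=0.1):
    """Poll a TCP port until it accepts connections or the timeout expires."""
    start = time.time()
    while True:
        sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        sock.settimeout(1.0)
        try:
            result = sock.connect_ex((host, port))
        except OSError:
            result = -1
        finally:
            sock.close()
        if result == 0:
            return True
        if time.time() - start > timeout:
            return False
        time.sleep(poll_interval)
```

`connect_ex` returns 0 on success and an errno value (e.g. `ECONNREFUSED`) otherwise, which keeps the loop free of try/except for the common failure case.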
f73e1766a6f2995af45f56ada2c106b95188b432 | 1,075 | py | Python | Math/python/leetcode69_Sqrt_x.py | wenxinjie/leetcode | c459a01040c8fe0783e15a16b8d7cca4baf4612a | [
"Apache-2.0"
] | null | null | null | Math/python/leetcode69_Sqrt_x.py | wenxinjie/leetcode | c459a01040c8fe0783e15a16b8d7cca4baf4612a | [
"Apache-2.0"
] | null | null | null | Math/python/leetcode69_Sqrt_x.py | wenxinjie/leetcode | c459a01040c8fe0783e15a16b8d7cca4baf4612a | [
"Apache-2.0"
] | null | null | null | # Implement int sqrt(int x).
# Compute and return the square root of x, where x is guaranteed to be a non-negative integer.
# Since the return type is an integer, the decimal digits are truncated and only the integer part of the result is returned.
# Example 1:
# Input: 4
# Output: 2
# Example 2:
# Input: 8
# Output: 2
# Explanation: The square root of 8 is 2.82842..., and since
# the decimal part is truncated, 2 is returned.
class Solution:
def mySqrt(self, x):
"""
:type x: int
:rtype: int
"""
if x == 0:
return 0
elif x < 4:
return 1
elif x < 9:
return 2
res = self.helper(x, 0, x//2)
return res
def helper(self, x, left, right):
mid = (left + right)//2
        if mid**2 <= x and (mid+1)**2 > x:
return mid
elif mid**2 > x:
right = mid
elif mid**2 < x:
left = mid
return self.helper(x, left, right)
# Time: O(log(n))
# Space: O(1)
# Difficulty: easy | 23.888889 | 124 | 0.528372 |
class Solution:
def mySqrt(self, x):
if x == 0:
return 0
elif x < 4:
return 1
elif x < 9:
return 2
res = self.helper(x, 0, x//2)
return res
def helper(self, x, left, right):
mid = (left + right)//2
        if mid**2 <= x and (mid+1)**2 > x:
return mid
elif mid**2 > x:
right = mid
elif mid**2 < x:
left = mid
return self.helper(x, left, right)
| true | true |
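The recursive `helper` in the file above brackets the root until `mid**2 <= x < (mid+1)**2`. The same idea written iteratively, as a standalone sketch independent of the LeetCode class:

```python
def isqrt(x):
    """Integer square root via binary search: the largest r with r*r <= x."""
    if x < 2:
        return x
    left, right = 1, x // 2
    while left <= right:
        mid = (left + right) // 2
        if mid * mid <= x < (mid + 1) * (mid + 1):
            return mid
        elif mid * mid > x:
            right = mid - 1
        else:
            left = mid + 1

print(isqrt(8))  # -> 2
```

Shrinking to `right = mid - 1` / `left = mid + 1` (instead of `right = mid` / `left = mid`) guarantees progress on every iteration, avoiding the stall the recursive version risks when `left` and `right` are adjacent.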
f73e18208160fc1d67a3bd81f5452083e16ca14f | 7,247 | py | Python | MV3D_TF_release/lib/datasets/voc_eval.py | ZiningWang/Sparse_Pooling | a160ddf9a03ef53bad630b4ac186a8437bd0475c | [
"Unlicense"
] | 52 | 2018-08-28T03:44:51.000Z | 2022-03-23T16:00:14.000Z | MV3D_TF_release/lib/datasets/voc_eval.py | weidezhang/Sparse_Pooling | a160ddf9a03ef53bad630b4ac186a8437bd0475c | [
"Unlicense"
] | 1 | 2019-06-25T01:32:35.000Z | 2019-07-01T01:34:20.000Z | MV3D_TF_release/lib/datasets/voc_eval.py | weidezhang/Sparse_Pooling | a160ddf9a03ef53bad630b4ac186a8437bd0475c | [
"Unlicense"
] | 20 | 2018-07-31T18:17:35.000Z | 2021-07-09T08:42:06.000Z | # --------------------------------------------------------
# Fast/er R-CNN
# Licensed under The MIT License [see LICENSE for details]
# Written by Bharath Hariharan
# --------------------------------------------------------
import xml.etree.ElementTree as ET
import os
import pickle
import numpy as np
import pdb
def parse_rec(filename):
""" Parse a PASCAL VOC xml file """
tree = ET.parse(filename)
objects = []
for obj in tree.findall('object'):
obj_struct = {}
obj_struct['name'] = obj.find('name').text
obj_struct['pose'] = obj.find('pose').text
obj_struct['truncated'] = int(obj.find('truncated').text)
obj_struct['difficult'] = int(obj.find('difficult').text)
bbox = obj.find('bndbox')
obj_struct['bbox'] = [int(bbox.find('xmin').text),
int(bbox.find('ymin').text),
int(bbox.find('xmax').text),
int(bbox.find('ymax').text)]
objects.append(obj_struct)
return objects
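`parse_rec` above pulls `name`, `difficult` and the `bndbox` corners out of a PASCAL VOC annotation file. The same field extraction can be exercised on an in-memory snippet (the annotation below is made up for illustration):

```python
import xml.etree.ElementTree as ET

voc_xml = """
<annotation>
  <object>
    <name>car</name>
    <difficult>0</difficult>
    <bndbox><xmin>10</xmin><ymin>20</ymin><xmax>110</xmax><ymax>220</ymax></bndbox>
  </object>
</annotation>
"""

root = ET.fromstring(voc_xml)
objects = []
for obj in root.findall('object'):
    bbox = obj.find('bndbox')
    objects.append({
        'name': obj.find('name').text,
        'difficult': int(obj.find('difficult').text),
        'bbox': [int(bbox.find(t).text) for t in ('xmin', 'ymin', 'xmax', 'ymax')],
    })
print(objects)  # one dict per <object> element
```

`ET.fromstring` replaces the file-based `ET.parse` only for this self-contained check; the traversal is otherwise the same.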
def voc_ap(rec, prec, use_07_metric=False):
""" ap = voc_ap(rec, prec, [use_07_metric])
Compute VOC AP given precision and recall.
If use_07_metric is true, uses the
VOC 07 11 point method (default:False).
"""
if use_07_metric:
# 11 point metric
ap = 0.
for t in np.arange(0., 1.1, 0.1):
if np.sum(rec >= t) == 0:
p = 0
else:
p = np.max(prec[rec >= t])
ap = ap + p / 11.
else:
# correct AP calculation
# first append sentinel values at the end
mrec = np.concatenate(([0.], rec, [1.]))
mpre = np.concatenate(([0.], prec, [0.]))
# compute the precision envelope
for i in range(mpre.size - 1, 0, -1):
mpre[i - 1] = np.maximum(mpre[i - 1], mpre[i])
# to calculate area under PR curve, look for points
# where X axis (recall) changes value
i = np.where(mrec[1:] != mrec[:-1])[0]
# and sum (\Delta recall) * prec
ap = np.sum((mrec[i + 1] - mrec[i]) * mpre[i + 1])
return ap
def voc_eval(detpath,
annopath,
imagesetfile,
classname,
cachedir,
ovthresh=0.5,
use_07_metric=False):
"""rec, prec, ap = voc_eval(detpath,
annopath,
imagesetfile,
classname,
[ovthresh],
[use_07_metric])
Top level function that does the PASCAL VOC evaluation.
detpath: Path to detections
detpath.format(classname) should produce the detection results file.
annopath: Path to annotations
annopath.format(imagename) should be the xml annotations file.
imagesetfile: Text file containing the list of images, one image per line.
classname: Category name (duh)
cachedir: Directory for caching the annotations
[ovthresh]: Overlap threshold (default = 0.5)
[use_07_metric]: Whether to use VOC07's 11 point AP computation
(default False)
"""
# assumes detections are in detpath.format(classname)
# assumes annotations are in annopath.format(imagename)
# assumes imagesetfile is a text file with each line an image name
# cachedir caches the annotations in a pickle file
# first load gt
if not os.path.isdir(cachedir):
os.mkdir(cachedir)
cachefile = os.path.join(cachedir, 'annots.pkl')
# read list of images
with open(imagesetfile, 'r') as f:
lines = f.readlines()
imagenames = [x.strip() for x in lines]
if not os.path.isfile(cachefile):
# load annots
recs = {}
for i, imagename in enumerate(imagenames):
recs[imagename] = parse_rec(annopath.format(imagename))
if i % 100 == 0:
print ('Reading annotation for {:d}/{:d}'.format(
i + 1, len(imagenames)))
# save
print ('Saving cached annotations to {:s}'.format(cachefile))
        with open(cachefile, 'wb') as f:
            pickle.dump(recs, f)
else:
# load
        with open(cachefile, 'rb') as f:
            recs = pickle.load(f)
# extract gt objects for this class
class_recs = {}
npos = 0
for imagename in imagenames:
R = [obj for obj in recs[imagename] if obj['name'] == classname]
bbox = np.array([x['bbox'] for x in R])
        difficult = np.array([x['difficult'] for x in R]).astype(bool)
det = [False] * len(R)
npos = npos + sum(~difficult)
class_recs[imagename] = {'bbox': bbox,
'difficult': difficult,
'det': det}
# read dets
detfile = detpath.format(classname)
with open(detfile, 'r') as f:
lines = f.readlines()
    if any(lines):
splitlines = [x.strip().split(' ') for x in lines]
image_ids = [x[0] for x in splitlines]
confidence = np.array([float(x[1]) for x in splitlines])
BB = np.array([[float(z) for z in x[2:]] for x in splitlines])
# sort by confidence
sorted_ind = np.argsort(-confidence)
sorted_scores = np.sort(-confidence)
BB = BB[sorted_ind, :]
image_ids = [image_ids[x] for x in sorted_ind]
# go down dets and mark TPs and FPs
nd = len(image_ids)
tp = np.zeros(nd)
fp = np.zeros(nd)
for d in range(nd):
R = class_recs[image_ids[d]]
bb = BB[d, :].astype(float)
ovmax = -np.inf
BBGT = R['bbox'].astype(float)
if BBGT.size > 0:
# compute overlaps
# intersection
ixmin = np.maximum(BBGT[:, 0], bb[0])
iymin = np.maximum(BBGT[:, 1], bb[1])
ixmax = np.minimum(BBGT[:, 2], bb[2])
iymax = np.minimum(BBGT[:, 3], bb[3])
iw = np.maximum(ixmax - ixmin + 1., 0.)
ih = np.maximum(iymax - iymin + 1., 0.)
inters = iw * ih
# union
uni = ((bb[2] - bb[0] + 1.) * (bb[3] - bb[1] + 1.) +
(BBGT[:, 2] - BBGT[:, 0] + 1.) *
(BBGT[:, 3] - BBGT[:, 1] + 1.) - inters)
overlaps = inters / uni
ovmax = np.max(overlaps)
jmax = np.argmax(overlaps)
if ovmax > ovthresh:
if not R['difficult'][jmax]:
if not R['det'][jmax]:
tp[d] = 1.
R['det'][jmax] = 1
else:
fp[d] = 1.
else:
fp[d] = 1.
# compute precision recall
fp = np.cumsum(fp)
tp = np.cumsum(tp)
rec = tp / float(npos)
# avoid divide by zero in case the first detection matches a difficult
# ground truth
prec = tp / np.maximum(tp + fp, np.finfo(np.float64).eps)
ap = voc_ap(rec, prec, use_07_metric)
else:
rec = -1
prec = -1
ap = -1
return rec, prec, ap
| 35.179612 | 78 | 0.507382 |
import xml.etree.ElementTree as ET
import os
import pickle
import numpy as np
import pdb
def parse_rec(filename):
tree = ET.parse(filename)
objects = []
for obj in tree.findall('object'):
obj_struct = {}
obj_struct['name'] = obj.find('name').text
obj_struct['pose'] = obj.find('pose').text
obj_struct['truncated'] = int(obj.find('truncated').text)
obj_struct['difficult'] = int(obj.find('difficult').text)
bbox = obj.find('bndbox')
obj_struct['bbox'] = [int(bbox.find('xmin').text),
int(bbox.find('ymin').text),
int(bbox.find('xmax').text),
int(bbox.find('ymax').text)]
objects.append(obj_struct)
return objects
def voc_ap(rec, prec, use_07_metric=False):
if use_07_metric:
ap = 0.
for t in np.arange(0., 1.1, 0.1):
if np.sum(rec >= t) == 0:
p = 0
else:
p = np.max(prec[rec >= t])
ap = ap + p / 11.
else:
mrec = np.concatenate(([0.], rec, [1.]))
mpre = np.concatenate(([0.], prec, [0.]))
for i in range(mpre.size - 1, 0, -1):
mpre[i - 1] = np.maximum(mpre[i - 1], mpre[i])
i = np.where(mrec[1:] != mrec[:-1])[0]
ap = np.sum((mrec[i + 1] - mrec[i]) * mpre[i + 1])
return ap
def voc_eval(detpath,
annopath,
imagesetfile,
classname,
cachedir,
ovthresh=0.5,
use_07_metric=False):
if not os.path.isdir(cachedir):
os.mkdir(cachedir)
cachefile = os.path.join(cachedir, 'annots.pkl')
with open(imagesetfile, 'r') as f:
lines = f.readlines()
imagenames = [x.strip() for x in lines]
if not os.path.isfile(cachefile):
recs = {}
for i, imagename in enumerate(imagenames):
recs[imagename] = parse_rec(annopath.format(imagename))
if i % 100 == 0:
print ('Reading annotation for {:d}/{:d}'.format(
i + 1, len(imagenames)))
print ('Saving cached annotations to {:s}'.format(cachefile))
        with open(cachefile, 'wb') as f:
            pickle.dump(recs, f)
else:
        with open(cachefile, 'rb') as f:
            recs = pickle.load(f)
class_recs = {}
npos = 0
for imagename in imagenames:
R = [obj for obj in recs[imagename] if obj['name'] == classname]
bbox = np.array([x['bbox'] for x in R])
        difficult = np.array([x['difficult'] for x in R]).astype(bool)
det = [False] * len(R)
npos = npos + sum(~difficult)
class_recs[imagename] = {'bbox': bbox,
'difficult': difficult,
'det': det}
detfile = detpath.format(classname)
with open(detfile, 'r') as f:
lines = f.readlines()
    if any(lines):
splitlines = [x.strip().split(' ') for x in lines]
image_ids = [x[0] for x in splitlines]
confidence = np.array([float(x[1]) for x in splitlines])
BB = np.array([[float(z) for z in x[2:]] for x in splitlines])
sorted_ind = np.argsort(-confidence)
sorted_scores = np.sort(-confidence)
BB = BB[sorted_ind, :]
image_ids = [image_ids[x] for x in sorted_ind]
nd = len(image_ids)
tp = np.zeros(nd)
fp = np.zeros(nd)
for d in range(nd):
R = class_recs[image_ids[d]]
bb = BB[d, :].astype(float)
ovmax = -np.inf
BBGT = R['bbox'].astype(float)
if BBGT.size > 0:
ixmin = np.maximum(BBGT[:, 0], bb[0])
iymin = np.maximum(BBGT[:, 1], bb[1])
ixmax = np.minimum(BBGT[:, 2], bb[2])
iymax = np.minimum(BBGT[:, 3], bb[3])
iw = np.maximum(ixmax - ixmin + 1., 0.)
ih = np.maximum(iymax - iymin + 1., 0.)
inters = iw * ih
uni = ((bb[2] - bb[0] + 1.) * (bb[3] - bb[1] + 1.) +
(BBGT[:, 2] - BBGT[:, 0] + 1.) *
(BBGT[:, 3] - BBGT[:, 1] + 1.) - inters)
overlaps = inters / uni
ovmax = np.max(overlaps)
jmax = np.argmax(overlaps)
if ovmax > ovthresh:
if not R['difficult'][jmax]:
if not R['det'][jmax]:
tp[d] = 1.
R['det'][jmax] = 1
else:
fp[d] = 1.
else:
fp[d] = 1.
fp = np.cumsum(fp)
tp = np.cumsum(tp)
rec = tp / float(npos)
prec = tp / np.maximum(tp + fp, np.finfo(np.float64).eps)
ap = voc_ap(rec, prec, use_07_metric)
else:
rec = -1
prec = -1
ap = -1
return rec, prec, ap
| true | true |
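The `use_07_metric=False` branch of `voc_ap` in the file above integrates the precision envelope over recall. A dependency-free sketch of that computation on a toy two-detection case:

```python
def voc_ap_envelope(rec, prec):
    """AP as area under the precision envelope (the use_07_metric=False branch)."""
    # Sentinel values, as in voc_ap.
    mrec = [0.0] + list(rec) + [1.0]
    mpre = [0.0] + list(prec) + [0.0]
    # Make precision non-increasing from right to left (the envelope).
    for i in range(len(mpre) - 2, -1, -1):
        mpre[i] = max(mpre[i], mpre[i + 1])
    # Sum (delta recall) * precision over points where recall changes.
    ap = 0.0
    for i in range(len(mrec) - 1):
        if mrec[i + 1] != mrec[i]:
            ap += (mrec[i + 1] - mrec[i]) * mpre[i + 1]
    return ap

# Recall/precision after each of two detections (one TP, then one TP and one FP).
print(voc_ap_envelope([0.5, 1.0], [1.0, 0.5]))  # -> 0.75
```

This mirrors the NumPy version term by term (`np.concatenate`, the backward `np.maximum` pass, and the `np.where` change-point sum) using plain lists.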
f73e196426efceb90395be40386532ec921d821d | 229 | py | Python | src/xsd_auth/forms.py | minyiky/xSACdb | 8c407e9a9da196750a66ad53613ad67c8c56e1c3 | [
"MIT"
] | 2 | 2017-08-14T14:40:17.000Z | 2019-02-07T13:10:23.000Z | src/xsd_auth/forms.py | minyiky/xSACdb | 8c407e9a9da196750a66ad53613ad67c8c56e1c3 | [
"MIT"
] | 19 | 2016-02-07T18:02:53.000Z | 2019-11-03T17:48:13.000Z | src/xsd_auth/forms.py | minyiky/xSACdb | 8c407e9a9da196750a66ad53613ad67c8c56e1c3 | [
"MIT"
] | 4 | 2015-10-19T17:24:35.000Z | 2021-05-12T07:30:32.000Z | from django import forms
from allauth.socialaccount.forms import SignupForm as SocialSignupForm
class SignupForm(SocialSignupForm):
first_name = forms.CharField(max_length=30)
last_name = forms.CharField(max_length=30)
| 28.625 | 70 | 0.812227 | from django import forms
from allauth.socialaccount.forms import SignupForm as SocialSignupForm
class SignupForm(SocialSignupForm):
first_name = forms.CharField(max_length=30)
last_name = forms.CharField(max_length=30)
| true | true |
f73e19b9e95c7ed922b35ea3a9896dd6ff7155c1 | 8,635 | py | Python | uwsgi_sloth/analyzer.py | prafulbagai/uwsgi-sloth | b19b9a7e6a0b8edfdc94bfbe9f7a0030ab95db03 | [
"Apache-2.0"
] | 127 | 2015-01-02T11:57:22.000Z | 2022-03-03T02:23:54.000Z | uwsgi_sloth/analyzer.py | prafulbagai/uwsgi-sloth | b19b9a7e6a0b8edfdc94bfbe9f7a0030ab95db03 | [
"Apache-2.0"
] | 8 | 2015-06-15T12:10:13.000Z | 2019-07-21T23:01:18.000Z | uwsgi_sloth/analyzer.py | prafulbagai/uwsgi-sloth | b19b9a7e6a0b8edfdc94bfbe9f7a0030ab95db03 | [
"Apache-2.0"
] | 20 | 2015-01-06T03:27:25.000Z | 2020-09-04T03:53:46.000Z | # -*- coding: utf-8 -*-
"""Analyzer for uwsgi log"""
import re
import copy
import datetime
from uwsgi_sloth.utils import total_seconds
from uwsgi_sloth.structures import ValuesAggregation
from uwsgi_sloth.settings import FILTER_METHODS, FILTER_STATUS, LIMIT_URL_GROUPS, \
LIMIT_PER_URL_GROUP, ROOT, REALTIME_UPDATE_INTERVAL
class UWSGILogParser(object):
"""Parser for uwsgi log file, support only default log format:
log format: "[pid: 27011|app: 0|req: 16858/537445] 58.251.73.227 () {40 vars in 1030 bytes} \
[Tue Apr 29 00:13:10 2014] POST /trips/2387949771/add_waypoint/ => \
generated 1053 bytes in 2767 msecs (HTTP/1.1 200) 4 headers in 282 bytes \
(1 switches on core 0)"
Returns:
~~~~~~~~
    A dict of parsed log fields, or None if the line does not match.
"""
DATETIME_FORMAT = '%a %b %d %H:%M:%S %Y'
RE_LOG_LINE = re.compile(r'''}\ \[(?P<datetime>.*?)\]\ (?P<request_method>POST|GET|DELETE|PUT|PATCH)\s
(?P<request_uri>[^ ]*?)\ =>\ generated\ (?:.*?)\ in\ (?P<resp_msecs>\d+)\ msecs\s
\(HTTP/[\d.]+\ (?P<resp_status>\d+)\)''', re.VERBOSE)
def __init__(self):
pass
def parse(self, line):
matched = self.RE_LOG_LINE.search(line)
if matched:
matched_dict = matched.groupdict()
method = matched_dict['request_method']
status = matched_dict['resp_status']
if not method in FILTER_METHODS or status not in FILTER_STATUS:
return
url = matched_dict['request_uri'].replace('//', '/')
url_path = url.split('?')[0]
resp_time = int(matched_dict['resp_msecs'])
request_datetime = datetime.datetime.strptime(matched_dict['datetime'],
self.DATETIME_FORMAT)
return {
'method': method,
'url': url,
'url_path': url_path,
'resp_time': resp_time,
'status': status,
'request_datetime': request_datetime
}
return
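The sample line from the docstring can be pushed through the same regular expression (restated here outside the class) to confirm what each named group captures:

```python
import re

RE_LOG_LINE = re.compile(r'''}\ \[(?P<datetime>.*?)\]\ (?P<request_method>POST|GET|DELETE|PUT|PATCH)\s
(?P<request_uri>[^ ]*?)\ =>\ generated\ (?:.*?)\ in\ (?P<resp_msecs>\d+)\ msecs\s
\(HTTP/[\d.]+\ (?P<resp_status>\d+)\)''', re.VERBOSE)

line = ('[pid: 27011|app: 0|req: 16858/537445] 58.251.73.227 () '
        '{40 vars in 1030 bytes} [Tue Apr 29 00:13:10 2014] '
        'POST /trips/2387949771/add_waypoint/ => generated 1053 bytes '
        'in 2767 msecs (HTTP/1.1 200) 4 headers in 282 bytes (1 switches on core 0)')

fields = RE_LOG_LINE.search(line).groupdict()
print(fields['request_method'], fields['resp_msecs'], fields['resp_status'])  # -> POST 2767 200
```

In `re.VERBOSE` mode, unescaped whitespace in the pattern is ignored, which is why the literal spaces are written as `\ `.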
class URLClassifier(object):
"""A simple url classifier, current rules:
- replacing sequential digits part by '(\d+)'
"""
RE_SIMPLIFY_URL = re.compile(r'(?<=/)\d+(/|$)')
def __init__(self, user_defined_rules=[]):
self.user_defined_rules = user_defined_rules
def classify(self, url_path):
"""Classify an url"""
for dict_api_url in self.user_defined_rules:
api_url = dict_api_url['str']
re_api_url = dict_api_url['re']
if re_api_url.match(url_path[1:]):
return api_url
return self.RE_SIMPLIFY_URL.sub(r'(\\d+)/', url_path)
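`URLClassifier.classify` falls back to collapsing purely numeric path segments into a `(\d+)` token. A quick standalone check of that rule:

```python
import re

RE_SIMPLIFY_URL = re.compile(r'(?<=/)\d+(/|$)')

def classify(url_path):
    # The replacement r'(\\d+)/' emits the literal token (\d+) followed by '/'.
    return RE_SIMPLIFY_URL.sub(r'(\\d+)/', url_path)

print(classify('/trips/2387949771/add_waypoint/'))  # -> /trips/(\d+)/add_waypoint/
```

The lookbehind `(?<=/)` keeps the leading slash out of the match, so only the digits and their trailing slash (or end of string) are replaced.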
class LogAnalyzer(object):
"""Log analyzer"""
def __init__(self, url_classifier=None, min_msecs=200, start_from_datetime=None):
self.data = {}
self.requests_counter = {'normal': 0, 'slow': 0}
self.total_slow_duration = 0
self.min_msecs = min_msecs
self.start_from_datetime = start_from_datetime
self.datetime_range = [None, None]
self.url_classifier = url_classifier or URLClassifier()
self.log_parser = UWSGILogParser()
def analyze_line(self, line):
line = line.strip()
result = self.log_parser.parse(line)
# Ignore invalid log
if not result:
return
if self.start_from_datetime and result['request_datetime'] <= self.start_from_datetime:
return
self.requests_counter['normal'] += 1
if not self.datetime_range[0]:
self.datetime_range[0] = result['request_datetime']
self.datetime_range[1] = result['request_datetime']
if result['resp_time'] < self.min_msecs:
return
resp_time = result['resp_time']
# Use url_classifier to classify url
matched_url_rule = self.url_classifier.classify(result['url_path'])
big_d = self.data.setdefault((result['method'], matched_url_rule), {
'urls': {},
'duration_agr_data': ValuesAggregation(),
})
big_d['duration_agr_data'].add_value(resp_time)
big_d['urls'].setdefault(result['url'], ValuesAggregation()).add_value(resp_time)
self.requests_counter['slow'] += 1
self.total_slow_duration += resp_time
def get_data(self):
return {
'requests_counter': self.requests_counter,
'total_slow_duration': self.total_slow_duration,
'datetime_range': self.datetime_range,
'data_details': self.data
}
class RealtimeLogAnalyzer(object):
"""Log analyzer for realtime support"""
default_data = {
'requests_counter': {'normal': 0, 'slow': 0},
'total_slow_duration': 0,
'data_details': {}
}
def __init__(self, url_classifier=None, min_msecs=200, start_from_datetime=None):
self.data = {}
self.min_msecs = min_msecs
self.start_from_datetime = start_from_datetime
self.last_analyzed_datetime = None
self.url_classifier = url_classifier or URLClassifier()
self.log_parser = UWSGILogParser()
def analyze_line(self, line):
line = line.strip()
result = self.log_parser.parse(line)
# Ignore invalid log
if not result:
return
if self.start_from_datetime and result['request_datetime'] <= self.start_from_datetime:
return
request_datetime = result['request_datetime']
self.last_analyzed_datetime = request_datetime
groups = self.get_result_group_names(request_datetime)
if not groups:
return
for group in groups:
if group not in self.data:
self.data[group] = copy.deepcopy(self.default_data)
for group in groups:
self.data[group]['requests_counter']['normal'] += 1
if result['resp_time'] < self.min_msecs:
return
resp_time = result['resp_time']
# Use url_classifier to classify url
matched_url_rule = self.url_classifier.classify(result['url_path'])
for group in groups:
big_d = self.data[group]['data_details'].setdefault((result['method'], matched_url_rule), {
'urls': {},
'duration_agr_data': ValuesAggregation(),
})
big_d['duration_agr_data'].add_value(resp_time)
big_d['urls'].setdefault(result['url'], ValuesAggregation()).add_value(resp_time)
self.data[group]['requests_counter']['slow'] += 1
self.data[group]['total_slow_duration'] += resp_time
def get_result_group_names(self, request_datetime):
"""Only today/yesterday/last interval are valid datetime"""
request_date = request_datetime.date()
today = datetime.date.today()
yesterday = datetime.date.today() - datetime.timedelta(days=1)
result = []
if total_seconds(datetime.datetime.now() - request_datetime) < REALTIME_UPDATE_INTERVAL:
result.append('last_interval')
if request_date == today:
result.append(today.isoformat())
elif request_date == yesterday:
result.append(yesterday.isoformat())
return result
def get_data(self, key=None):
if key:
return self.data.get(key, self.default_data)
return self.data
def clean_data_by_key(self, key):
try:
del self.data[key]
except KeyError:
pass
def format_data(raw_data, limit_per_url_group=LIMIT_PER_URL_GROUP, limit_url_groups=LIMIT_URL_GROUPS):
"""Fomat data from LogAnalyzer for render purpose"""
data = copy.deepcopy(raw_data)
for k, v in list(data['data_details'].items()):
# Only reserve first ``limit_per_url_group`` items
v['urls'] = sorted(list(v['urls'].items()), key=lambda k_v: k_v[1].total,
reverse=True)[:limit_per_url_group]
data_details = sorted(iter(data['data_details'].items()),
key=lambda k_v1: k_v1[1]["duration_agr_data"].total,
reverse=True)[:limit_url_groups]
if data['requests_counter']['normal']:
slow_rate = format(data['requests_counter']['slow'] / \
float(data['requests_counter']['normal']), '.2%')
else:
slow_rate = '-'
data.update({
'slow_rate': slow_rate,
'data_details': data_details,
})
return data
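The `'.2%'` format spec above converts the slow/normal ratio directly into a percentage string; a quick standalone check with made-up counters:

```python
# Hypothetical counters, mirroring data['requests_counter'].
counters = {'normal': 250, 'slow': 7}

if counters['normal']:
    slow_rate = format(counters['slow'] / float(counters['normal']), '.2%')
else:
    slow_rate = '-'   # same fallback format_data uses for zero traffic

print(slow_rate)  # → 2.80%
```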
| 35.244898 | 106 | 0.600579 |
import re
import copy
import datetime
from uwsgi_sloth.utils import total_seconds
from uwsgi_sloth.structures import ValuesAggregation
from uwsgi_sloth.settings import FILTER_METHODS, FILTER_STATUS, LIMIT_URL_GROUPS, \
LIMIT_PER_URL_GROUP, ROOT, REALTIME_UPDATE_INTERVAL
class UWSGILogParser(object):
DATETIME_FORMAT = '%a %b %d %H:%M:%S %Y'
RE_LOG_LINE = re.compile(r'''}\ \[(?P<datetime>.*?)\]\ (?P<request_method>POST|GET|DELETE|PUT|PATCH)\s
(?P<request_uri>[^ ]*?)\ =>\ generated\ (?:.*?)\ in\ (?P<resp_msecs>\d+)\ msecs\s
\(HTTP/[\d.]+\ (?P<resp_status>\d+)\)''', re.VERBOSE)
def __init__(self):
pass
def parse(self, line):
matched = self.RE_LOG_LINE.search(line)
if matched:
matched_dict = matched.groupdict()
method = matched_dict['request_method']
status = matched_dict['resp_status']
if method not in FILTER_METHODS or status not in FILTER_STATUS:
return
url = matched_dict['request_uri'].replace('//', '/')
url_path = url.split('?')[0]
resp_time = int(matched_dict['resp_msecs'])
request_datetime = datetime.datetime.strptime(matched_dict['datetime'],
self.DATETIME_FORMAT)
return {
'method': method,
'url': url,
'url_path': url_path,
'resp_time': resp_time,
'status': status,
'request_datetime': request_datetime
}
return
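Restated standalone, the parse pattern above plucks its named fields out of a standard uwsgi request-log line (the sample line below is invented):

```python
import re

# Same pattern as UWSGILogParser.RE_LOG_LINE, restated for a standalone demo.
RE_LOG_LINE = re.compile(r'''}\ \[(?P<datetime>.*?)\]\ (?P<request_method>POST|GET|DELETE|PUT|PATCH)\s
(?P<request_uri>[^ ]*?)\ =>\ generated\ (?:.*?)\ in\ (?P<resp_msecs>\d+)\ msecs\s
\(HTTP/[\d.]+\ (?P<resp_status>\d+)\)''', re.VERBOSE)

line = ('[pid: 100|app: 0|req: 5/5] 127.0.0.1 () {34 vars in 500 bytes} '
        '[Wed Dec 10 14:08:06 2014] GET /api/users/42/ => generated 912 bytes '
        'in 315 msecs (HTTP/1.1 200) 4 headers in 120 bytes')

matched = RE_LOG_LINE.search(line).groupdict()
print(matched['request_method'], matched['request_uri'], matched['resp_msecs'])
# → GET /api/users/42/ 315
```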
class URLClassifier(object):
RE_SIMPLIFY_URL = re.compile(r'(?<=/)\d+(/|$)')
def __init__(self, user_defined_rules=[]):
self.user_defined_rules = user_defined_rules
def classify(self, url_path):
for dict_api_url in self.user_defined_rules:
api_url = dict_api_url['str']
re_api_url = dict_api_url['re']
if re_api_url.match(url_path[1:]):
return api_url
return self.RE_SIMPLIFY_URL.sub(r'(\\d+)/', url_path)
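The fallback rule collapses purely numeric path segments into a `(\d+)` placeholder, so different ids map to one URL group; restating the regex standalone:

```python
import re

# Same fallback rule as URLClassifier.RE_SIMPLIFY_URL.
RE_SIMPLIFY_URL = re.compile(r'(?<=/)\d+(/|$)')

print(RE_SIMPLIFY_URL.sub(r'(\\d+)/', '/users/42/posts/7'))
# → /users/(\d+)/posts/(\d+)/
```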
class LogAnalyzer(object):
def __init__(self, url_classifier=None, min_msecs=200, start_from_datetime=None):
self.data = {}
self.requests_counter = {'normal': 0, 'slow': 0}
self.total_slow_duration = 0
self.min_msecs = min_msecs
self.start_from_datetime = start_from_datetime
self.datetime_range = [None, None]
self.url_classifier = url_classifier or URLClassifier()
self.log_parser = UWSGILogParser()
def analyze_line(self, line):
line = line.strip()
result = self.log_parser.parse(line)
if not result:
return
if self.start_from_datetime and result['request_datetime'] <= self.start_from_datetime:
return
self.requests_counter['normal'] += 1
if not self.datetime_range[0]:
self.datetime_range[0] = result['request_datetime']
self.datetime_range[1] = result['request_datetime']
if result['resp_time'] < self.min_msecs:
return
resp_time = result['resp_time']
matched_url_rule = self.url_classifier.classify(result['url_path'])
big_d = self.data.setdefault((result['method'], matched_url_rule), {
'urls': {},
'duration_agr_data': ValuesAggregation(),
})
big_d['duration_agr_data'].add_value(resp_time)
big_d['urls'].setdefault(result['url'], ValuesAggregation()).add_value(resp_time)
self.requests_counter['slow'] += 1
self.total_slow_duration += resp_time
def get_data(self):
return {
'requests_counter': self.requests_counter,
'total_slow_duration': self.total_slow_duration,
'datetime_range': self.datetime_range,
'data_details': self.data
}
class RealtimeLogAnalyzer(object):
default_data = {
'requests_counter': {'normal': 0, 'slow': 0},
'total_slow_duration': 0,
'data_details': {}
}
def __init__(self, url_classifier=None, min_msecs=200, start_from_datetime=None):
self.data = {}
self.min_msecs = min_msecs
self.start_from_datetime = start_from_datetime
self.last_analyzed_datetime = None
self.url_classifier = url_classifier or URLClassifier()
self.log_parser = UWSGILogParser()
def analyze_line(self, line):
line = line.strip()
result = self.log_parser.parse(line)
if not result:
return
if self.start_from_datetime and result['request_datetime'] <= self.start_from_datetime:
return
request_datetime = result['request_datetime']
self.last_analyzed_datetime = request_datetime
groups = self.get_result_group_names(request_datetime)
if not groups:
return
for group in groups:
if group not in self.data:
self.data[group] = copy.deepcopy(self.default_data)
for group in groups:
self.data[group]['requests_counter']['normal'] += 1
if result['resp_time'] < self.min_msecs:
return
resp_time = result['resp_time']
matched_url_rule = self.url_classifier.classify(result['url_path'])
for group in groups:
big_d = self.data[group]['data_details'].setdefault((result['method'], matched_url_rule), {
'urls': {},
'duration_agr_data': ValuesAggregation(),
})
big_d['duration_agr_data'].add_value(resp_time)
big_d['urls'].setdefault(result['url'], ValuesAggregation()).add_value(resp_time)
self.data[group]['requests_counter']['slow'] += 1
self.data[group]['total_slow_duration'] += resp_time
def get_result_group_names(self, request_datetime):
request_date = request_datetime.date()
today = datetime.date.today()
yesterday = datetime.date.today() - datetime.timedelta(days=1)
result = []
if total_seconds(datetime.datetime.now() - request_datetime) < REALTIME_UPDATE_INTERVAL:
result.append('last_interval')
if request_date == today:
result.append(today.isoformat())
elif request_date == yesterday:
result.append(yesterday.isoformat())
return result
def get_data(self, key=None):
if key:
return self.data.get(key, self.default_data)
return self.data
def clean_data_by_key(self, key):
try:
del self.data[key]
except KeyError:
pass
def format_data(raw_data, limit_per_url_group=LIMIT_PER_URL_GROUP, limit_url_groups=LIMIT_URL_GROUPS):
data = copy.deepcopy(raw_data)
for k, v in list(data['data_details'].items()):
v['urls'] = sorted(list(v['urls'].items()), key=lambda k_v: k_v[1].total,
reverse=True)[:limit_per_url_group]
data_details = sorted(iter(data['data_details'].items()),
key=lambda k_v1: k_v1[1]["duration_agr_data"].total,
reverse=True)[:limit_url_groups]
if data['requests_counter']['normal']:
slow_rate = format(data['requests_counter']['slow'] / \
float(data['requests_counter']['normal']), '.2%')
else:
slow_rate = '-'
data.update({
'slow_rate': slow_rate,
'data_details': data_details,
})
return data
| true | true |
f73e19e80dfde9f3b5cf491e079c74c1744a629e | 2,182 | py | Python | factors.py | w4jbm/Python-Programs | 3c7c63d3c85e58c80252809f931daab0e67b43b8 | [
"Unlicense",
"MIT"
] | 1 | 2021-07-03T00:21:04.000Z | 2021-07-03T00:21:04.000Z | factors.py | w4jbm/Python-Programs | 3c7c63d3c85e58c80252809f931daab0e67b43b8 | [
"Unlicense",
"MIT"
] | null | null | null | factors.py | w4jbm/Python-Programs | 3c7c63d3c85e58c80252809f931daab0e67b43b8 | [
"Unlicense",
"MIT"
] | null | null | null | #!/usr/bin/python3
#
# factors.py - Find the factors of a positive integer
#
# By Jim McClanahah, W4JBM (Dec 2020)
#
# Find the factors of a provided positive integer.
#
# The function is a modification of one originally
# provided by Harshit Agrawal to the geeksforgeeks.org
# website.
#
# It seems like things stop working at around 18 digits, most
# likely because the float division in primeFactors loses precision
import sys
import math
# The following function creates and returns a list of all
# prime factors of a given number n
#
# It uses three steps to find all prime factors:
#
# 1. While n is divisible by 2, add two to the list and
# divide n by 2.
# 2. After step 1, n must be odd. Now start a loop from
# i = 3 to square root of n. While i divides n, add
# i to the list and divide n by i, increment i by 2
# and continue.
# 3. If n is a prime number and is greater than 2, then
# n will not become 1 by above two steps. So add n
# to the list if it is greater than 2.
def primeFactors(n):
lst=[]
# Find the number of two's that divide n
while n % 2 == 0:
lst.append(2)
n = n / 2
# n must be odd at this point so a skip of 2
# (i.e., i = i + 2) can be used
for i in range(3,int(math.sqrt(n))+1,2):
# while i divides n , add i to list and
# divide n
while n % i== 0:
lst.append(i)
n = n / i
# Check if n is a prime number greater than 2
if n > 2:
lst.append(int(n))
# And return the list of factors
return lst
# Check for command line argument and print an intro if
# none was provided...
if len(sys.argv) != 2:
print('Find the factors for a given positive integer.')
print('USAGE: factor.py integer')
sys.exit(1)
# Make sure the argument is a positive integer...
if sys.argv[1].isdigit():
n = int(sys.argv[1])
# If not, print a warning...
else:
print('Argument must be a positive integer.')
sys.exit(1)
if n > 10**16:
print('Argument cannot be more than 16 digits.')
sys.exit(1)
lst = primeFactors(n)
# Here's where all the work happens... :-)
print('Factors of ' + str(n) + ': ' + ', '.join(map(str,lst)))
| 26.609756 | 62 | 0.617782 |
import sys
import math
def primeFactors(n):
lst=[]
while n % 2 == 0:
lst.append(2)
n = n / 2
# n must be odd at this point so a skip of 2
# (i.e., i = i + 2) can be used
for i in range(3,int(math.sqrt(n))+1,2):
# while i divides n , add i to list and
# divide n
while n % i== 0:
lst.append(i)
n = n / i
# Check if n is a prime number greater than 2
if n > 2:
lst.append(int(n))
# And return the list of factors
return lst
# Check for command line argument and print an intro if
# none was provided...
if len(sys.argv) != 2:
print('Find the factors for a given positive integer.')
print('USAGE: factor.py integer')
sys.exit(1)
# Make sure the argument is a positive integer...
if sys.argv[1].isdigit():
n = int(sys.argv[1])
# If not, print a warning...
else:
print('Argument must be a positive integer.')
sys.exit(1)
if n > 10**16:
print('Argument cannot be more than 16 digits.')
sys.exit(1)
lst = primeFactors(n)
# Here's where all the work happens... :-)
print('Factors of ' + str(n) + ': ' + ', '.join(map(str,lst)))
| true | true |
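A standalone restatement of the three-step algorithm in factors.py; note that switching to floor division (`//`) and `math.isqrt` keeps everything in exact integers, which sidesteps the float precision loss the header comment alludes to:

```python
import math

def prime_factors(n):
    factors = []
    while n % 2 == 0:                          # step 1: strip out factors of 2
        factors.append(2)
        n //= 2                                # floor division keeps n an exact int
    for i in range(3, math.isqrt(n) + 1, 2):   # step 2: odd trial divisors
        while n % i == 0:
            factors.append(i)
            n //= i
    if n > 2:                                  # step 3: leftover prime
        factors.append(n)
    return factors

print(prime_factors(360))  # → [2, 2, 2, 3, 3, 5]
```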
f73e1ab59f0ad3d2ec802ec8ad2285bb719e7f38 | 1,933 | py | Python | hangman.py | KevinCardenasDev/PythonIntermedio | 022fa790ac57263df26cc44c68ea1e8b92e94cff | [
"MIT"
] | null | null | null | hangman.py | KevinCardenasDev/PythonIntermedio | 022fa790ac57263df26cc44c68ea1e8b92e94cff | [
"MIT"
] | null | null | null | hangman.py | KevinCardenasDev/PythonIntermedio | 022fa790ac57263df26cc44c68ea1e8b92e94cff | [
"MIT"
] | null | null | null | import random
NUMBERS = ["1", "2", "3", "4", "5", "6", "7", "8", "9"]
def read_file():
WORDS = []
with open("./archivos/data.txt", "r", encoding="utf-8") as f:
for line in f:
WORDS.append(line.replace("\n", ""))
return WORDS
def random_word(words):
idx = random.randint(0, len(words) - 1)
return words[idx]
def main():
print("* - * - * - * - * - * - * - * - *- *- *")
print("B I E N V E N I D O A H A N G M A N")
print("* - * - * - * - * - * - * - * - *- *- *")
print("\n")
print("¡Adivina la palabra oculta!")
tries = 0
words = read_file()
current_word = random_word(words)
hidden_word = ['-' for i in current_word]
print(hidden_word)
try:
while True:
current_letter = input("Ingresa una letra: ")
for i in range(len(NUMBERS)):
if current_letter == NUMBERS[i]:
raise ValueError("No ingreses números, solamente letras, por favor")
letter_indexes = []
for idx in range(len(current_word)):
if current_letter == current_word[idx]:
letter_indexes.append(idx)
if len(letter_indexes) == 0:
tries += 1
if tries == 7:
print(hidden_word)
print("")
print("¡Perdiste! La palabra correta era {}".format(current_word))
break
else:
for idx in letter_indexes:
hidden_word[idx] = current_letter
print(hidden_word)
letter_indexes = []
try:
hidden_word.index("-")
except ValueError:
print("¡Ganaste! La palabra era {}".format(current_word))
break
except ValueError as ve:
print(ve)
if __name__ == "__main__":
main()
| 27.614286 | 89 | 0.475944 | import random
NUMBERS = ["1", "2", "3", "4", "5", "6", "7", "8", "9"]
def read_file():
WORDS = []
with open("./archivos/data.txt", "r", encoding="utf-8") as f:
for line in f:
WORDS.append(line.replace("\n", ""))
return WORDS
def random_word(words):
idx = random.randint(0, len(words) - 1)
return words[idx]
def main():
print("* - * - * - * - * - * - * - * - *- *- *")
print("B I E N V E N I D O A H A N G M A N")
print("* - * - * - * - * - * - * - * - *- *- *")
print("\n")
print("¡Adivina la palabra oculta!")
tries = 0
words = read_file()
current_word = random_word(words)
hidden_word = ['-' for i in current_word]
print(hidden_word)
try:
while True:
current_letter = input("Ingresa una letra: ")
for i in range(len(NUMBERS)):
if current_letter == NUMBERS[i]:
raise ValueError("No ingreses números, solamente letras, por favor")
letter_indexes = []
for idx in range(len(current_word)):
if current_letter == current_word[idx]:
letter_indexes.append(idx)
if len(letter_indexes) == 0:
tries += 1
if tries == 7:
print(hidden_word)
print("")
print("¡Perdiste! La palabra correta era {}".format(current_word))
break
else:
for idx in letter_indexes:
hidden_word[idx] = current_letter
print(hidden_word)
letter_indexes = []
try:
hidden_word.index("-")
except ValueError:
print("¡Ganaste! La palabra era {}".format(current_word))
break
except ValueError as ve:
print(ve)
if __name__ == "__main__":
main()
| true | true |
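The win check in `main()` relies on `list.index('-')` raising `ValueError` once no dashes remain; the reveal-and-check loop boils down to this (word and guesses fixed for illustration):

```python
word = 'python'
hidden = ['-' for _ in word]

for guess in 'pnyoth':                 # pretend guesses from the player
    for idx, letter in enumerate(word):
        if letter == guess:
            hidden[idx] = guess        # reveal every occurrence
    try:
        hidden.index('-')              # a dash left: keep playing
    except ValueError:                 # no dashes left: the word is complete
        print('won:', ''.join(hidden))
        break
```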
f73e1be033d66fa151dcb1a8eede36e0edcbcf22 | 342 | py | Python | pyleecan/Methods/Machine/CondType13/comp_surface_active.py | Eomys/Pyleecan | 4d7f0cbabf0311006963e7a2f435db2ecd901118 | [
"Apache-2.0"
] | 4 | 2017-11-27T10:14:34.000Z | 2018-09-20T11:30:32.000Z | pyleecan/Methods/Machine/CondType13/comp_surface_active.py | Eomys/Pyleecan | 4d7f0cbabf0311006963e7a2f435db2ecd901118 | [
"Apache-2.0"
] | null | null | null | pyleecan/Methods/Machine/CondType13/comp_surface_active.py | Eomys/Pyleecan | 4d7f0cbabf0311006963e7a2f435db2ecd901118 | [
"Apache-2.0"
] | null | null | null | def comp_surface_active(self):
"""Compute the active surface of the conductor
Parameters
----------
self : CondType13
A CondType13 object
Returns
-------
Sact: float
Surface without insulation [m**2]
"""
Sact = self.Wwire * self.Wwire * self.Nwppc_tan * self.Nwppc_rad
return Sact
| 18 | 68 | 0.596491 | def comp_surface_active(self):
Sact = self.Wwire * self.Wwire * self.Nwppc_tan * self.Nwppc_rad
return Sact
| true | true |
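With made-up dimensions, the formula above is simply the square wire cross-section times the wire counts in both directions:

```python
# Hypothetical conductor: 2 mm square wire, 4 tangential x 5 radial wires.
Wwire = 2e-3       # wire width [m]
Nwppc_tan = 4      # wires per coil, tangential direction
Nwppc_rad = 5      # wires per coil, radial direction

Sact = Wwire * Wwire * Nwppc_tan * Nwppc_rad
print(Sact)        # active surface without insulation [m**2], ~8e-05
```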
f73e1be0cafcb8b796324ceebe9febe3aad4376b | 12,057 | py | Python | pvtools.py | sadams2013/pvtools | 12bd9334a1335972519c81d0c01c6308aa597c39 | [
"MIT"
] | 1 | 2020-12-23T11:11:59.000Z | 2020-12-23T11:11:59.000Z | pvtools.py | sadams2013/pvtools | 12bd9334a1335972519c81d0c01c6308aa597c39 | [
"MIT"
] | null | null | null | pvtools.py | sadams2013/pvtools | 12bd9334a1335972519c81d0c01c6308aa597c39 | [
"MIT"
] | 1 | 2021-01-05T18:37:25.000Z | 2021-01-05T18:37:25.000Z | # Import standard libraries.
import json
# Import external libraries.
import numpy as np
import pandas as pd
class dbSNP:
"""Store dbSNP data for a gene.
Parameters
----------
dbsnp_file : str
Path to a dbSNP file containing variant information.
Attributes
----------
df : pandas.DataFrame
Dataframe containing dbSNP data.
"""
def __init__(self, dbsnp_file):
self.df = pd.read_table(dbsnp_file)
def get_ref(self, start, end):
"""Return reference allele."""
try:
i = (self.df['chromStart'] == start) & (self.df['chromEnd'] == end)
result = self.df[i]['name'].values[0]
except IndexError:
result = None
return result
class LookupTable:
"""Store liftover data for a gene.
Parameters
----------
ng : Sequence
Sequence object for RefSeqGene.
g7 : Sequence
Sequence object for GRCh37.
g8 : Sequence
Sequence object for GRCh38.
Attributes
----------
ng : Sequence
Sequence object for RefSeqGene.
g7 : Sequence
Sequence object for GRCh37.
g8 : Sequence
Sequence object for GRCh38.
df : pandas.DataFrame
Dataframe containing liftover data.
"""
def __init__(self, ng, g7, g8):
self.ng = ng
self.g7 = g7
self.g8 = g8
self.df = self._build_lookup_table(ng, g7, g8)
def _build_lookup_table(self, ng, g7, g8):
ng_pos1 = np.arange(1, len(ng.seq)+1)
ng_pos2 = ng_pos1 - ng.data['CDSStarts'][0]
ng_pos3 = ng.liftover()
g7_pos = list(range(g7.data['Start'], g7.data['End']+1))
g8_pos = list(range(g8.data['Start'], g8.data['End']+1))
allele = np.array(list(ng.seq))
annot1 = ng.annotate(cds=False)
annot2 = ng.annotate(cds=True)
d = {'Start_Position': ng_pos1, 'ATG_Position': ng_pos2,
'Transcript_Position': ng_pos3, 'GRCh37_Position': g7_pos,
'GRCh38_Position': g8_pos, 'Allele': allele,
'Exon_Annotation': annot1, 'CDS_Annotation': annot2}
return pd.DataFrame(d)
def to_tsv(self, f):
self.df.to_csv(f, sep='\t', index=False)
def find(self, system1, system2, value):
try:
result = self.df[self.df[system1] == value][system2].values[0]
except IndexError:
result = None
return result
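`find` is a column-to-column lookup with an `IndexError` guard for misses; the same pattern on a toy dataframe (positions invented):

```python
import pandas as pd

# Toy lookup table; the positions are invented for illustration.
df = pd.DataFrame({'GRCh37_Position': [100, 101, 102],
                   'GRCh38_Position': [200, 201, 202]})

def find(df, system1, system2, value):
    # Same guard as LookupTable.find: a miss produces an empty selection,
    # so indexing [0] raises IndexError and we return None instead.
    try:
        return df[df[system1] == value][system2].values[0]
    except IndexError:
        return None

print(find(df, 'GRCh37_Position', 'GRCh38_Position', 101))  # → 201
print(find(df, 'GRCh37_Position', 'GRCh38_Position', 999))  # → None
```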
class Sequence:
"""Store sequence data for a gene.
Parameters
----------
fasta_file : str
Path to a FASTA file containing the DNA sequence.
json_file : str
Path to a JSON file containing metadata for the DNA sequence.
Attributes
----------
name : str
Sequence identifier with the leading character '>' removed.
seq : str
DNA sequence.
len : int
Length of the DNA sequence.
data : dict
Metadata of the DNA sequence.
"""
def __init__(self, fasta_file, json_file=None):
self.name, self.seq = self._read_fasta_file(fasta_file)
self.len = len(self.seq)
self.data = self._read_json_file(json_file)
def _read_fasta_file(self, fasta_file):
name = ''
seq = ''
with open(fasta_file) as f:
name = next(f).strip().replace('>', '')
for line in f:
seq += line.strip()
return name, seq
def _read_json_file(self, json_file):
if json_file is None:
return None
with open(json_file) as f:
return json.load(f)
def transcribe(self):
"""Transcribe the DNA sequence.
Returns
-------
str
mRNA sequence.
"""
rna = ''
for i in range(self.data['ExonCount']):
start = self.data['ExonStarts'][i]
end = self.data['ExonEnds'][i]
rna += self.seq[start-1:end]
return rna
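`transcribe` splices the exons together by concatenating 1-based inclusive slices; the slice arithmetic on a toy sequence (coordinates invented):

```python
seq = 'AAACCCGGGTTT'
exon_starts = [1, 7]     # 1-based, inclusive (toy coordinates)
exon_ends = [3, 9]

rna = ''
for start, end in zip(exon_starts, exon_ends):
    rna += seq[start - 1:end]   # same slice arithmetic as transcribe()

print(rna)  # → AAAGGG
```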
def get_exon_dataframe(self):
"""Tabulate Exon data.
Returns
-------
pandas.DataFrame
Dataframe containing Exon data.
"""
exon_starts = self.data['ExonStarts']
exon_ends = self.data['ExonEnds']
exon_names = [f'Exon {x+1}' for x in range(len(exon_starts))]
intron_starts = [x+1 for x in exon_ends[:-1]]
intron_ends = [x-1 for x in exon_starts[1:]]
intron_names = [f'Intron {x+1}' for x in range(len(intron_starts))]
upstream_start = 1
upstream_end = exon_starts[0] - 1
upstream_name = 'Upstream'
downstream_start = exon_ends[-1] + 1
downstream_end = len(self.seq)
downstream_name = 'Downstream'
starts = exon_starts + intron_starts + [upstream_start, downstream_start]
ends = exon_ends + intron_ends + [upstream_end, downstream_end]
names = exon_names + intron_names + [upstream_name, downstream_name]
df = pd.DataFrame({'Name': names, 'Start': starts, 'End': ends})
df = df.sort_values('Start')
df = df.reset_index(drop=True)
return df
def get_cds_dataframe(self):
"""Tabulate CDS data.
Returns
-------
pandas.DataFrame
Dataframe containing CDS data.
"""
cds_starts = self.data['CDSStarts']
cds_ends = self.data['CDSEnds']
cds_names = [f'CDS {x+1}' for x in range(len(cds_starts))]
intron_starts = [x+1 for x in cds_ends[:-1]]
intron_ends = [x-1 for x in cds_starts[1:]]
intron_names = [f'Intron {x+1}' for x in range(len(intron_starts))]
exon_df = self.get_exon_dataframe()
upstream_start = 1
upstream_end = exon_df[exon_df.Name == 'Upstream'].End.values[0]
upstream_name = 'Upstream'
utr5_starts = []
utr5_ends = []
atg_pos = self.get_atg_pos()
i = self.get_atg_exon_index()
for x in range(self.data['ExonCount']):
start = self.data['ExonStarts'][x]
end = self.data['ExonEnds'][x]
if x < i:
utr5_starts.append(start)
utr5_ends.append(end)
elif x == i:
utr5_starts.append(start)
utr5_ends.append(atg_pos-1)
else:
break
utr5_names = [f"5' UTR Exon {x+1}" for x in range(len(utr5_starts))]
utr5_intron_starts = []
utr5_intron_ends = []
for utr5_end in utr5_ends[:-1]:
utr5_intron_starts.append(utr5_end+1)
for utr5_start in utr5_starts[1:]:
utr5_intron_ends.append(utr5_start-1)
utr5_intron_names = [f"5' UTR Intron {x+1}" for x in range(len(utr5_intron_starts))]
utr3_starts = []
utr3_ends = []
stop_pos = self.get_stop_pos()
i = self.get_stop_exon_index()
for x in range(self.data['ExonCount']):
start = self.data['ExonStarts'][x]
end = self.data['ExonEnds'][x]
if x < i:
pass
elif x == i:
utr3_starts.append(stop_pos+1)
utr3_ends.append(end)
else:
utr3_starts.append(start)
utr3_ends.append(end)
utr3_names = [f"3' UTR Exon {x+1}" for x in range(len(utr3_starts))]
utr3_intron_starts = []
utr3_intron_ends = []
for utr3_end in utr3_ends[:-1]:
utr3_intron_starts.append(utr3_end+1)
for utr3_start in utr3_starts[1:]:
utr3_intron_ends.append(utr3_start-1)
utr3_intron_names = [f"3' UTR Intron {x+1}" for x in range(len(utr3_intron_starts))]
downstream_start = exon_df[exon_df.Name == 'Downstream'].Start.values[0]
downstream_end = len(self.seq)
downstream_name = 'Downstream'
starts = cds_starts + intron_starts + utr5_starts + utr5_intron_starts + utr3_starts + utr3_intron_starts + [upstream_start, downstream_start]
ends = cds_ends + intron_ends + utr5_ends + utr5_intron_ends + utr3_ends + utr3_intron_ends + [upstream_end, downstream_end]
names = cds_names + intron_names + utr5_names + utr5_intron_names + utr3_names + utr3_intron_names + [upstream_name, downstream_name]
df = pd.DataFrame({'Name': names, 'Start': starts, 'End': ends})
df = df.sort_values('Start')
df = df.reset_index(drop=True)
return df
def annotate(self, cds=False):
if cds:
df = self.get_cds_dataframe()
else:
df = self.get_exon_dataframe()
annotations = []
for i, r in df.iterrows():
n = r.End - r.Start + 1
annotations += [r.Name] * n
return annotations
def liftover(self):
"""Map each sequence position to transcript-style ('c.') coordinates."""
cds_df = self.get_cds_dataframe()
cds_pos = []
cds_sum = 1
atg_start = self.data['CDSStarts'][0]
utr5_exon_offset = -1 * self.get_utr5_exon_len()
utr3_exon_sum = 1
for i, r in cds_df.iterrows():
cds_len = r.End - r.Start + 1
if r.Name.startswith('CDS'):
cds_pos += list(range(cds_sum, cds_sum + cds_len))
cds_sum += cds_len
elif r.Name.startswith('Intron'):
cds_pos += [f'{cds_sum-1}+{x}' for x in range(1, cds_len+1)]
elif r.Name == 'Upstream':
a = self.get_atg_pos() - self.get_utr5_intron_len()
cds_pos += [x-a for x in range(1, r.End+1)]
elif r.Name.startswith("5' UTR Exon"):
a = r.End - r.Start + 1
cds_pos += [x for x in range(utr5_exon_offset, utr5_exon_offset+a)]
utr5_exon_offset += a
elif r.Name.startswith("5' UTR Intron"):
cds_pos += [f'{utr5_exon_offset-1}+{x}' for x in range(1, cds_len+1)]
elif r.Name == 'Downstream':
a = self.get_utr3_exon_len() + 1
b = r.End - r.Start + 1
cds_pos += [f'*{x+a}' for x in range(b)]
elif r.Name.startswith("3' UTR Exon"):
a = r.End - r.Start + 1
cds_pos += [f'*{x}' for x in list(range(utr3_exon_sum, utr3_exon_sum+a))]
utr3_exon_sum += a
elif r.Name.startswith("3' UTR Intron"):
cds_pos += [f'*{utr3_exon_sum-1}+{x}' for x in range(1, cds_len+1)]
else:
cds_pos += ['.' for x in range(cds_len)]
if len(cds_pos) != self.len:
raise ValueError(f"LiftOver length error: expected {self.len} bp, "
f"but generated: {len(cds_pos)} bp")
return [f'c.{x}' for x in cds_pos]
def get_atg_pos(self):
return self.data['CDSStarts'][0]
def get_atg_exon_index(self):
exon_starts = self.data['ExonStarts']
exon_ends = self.data['ExonEnds']
atg_pos = self.get_atg_pos()
for i in range(self.data['ExonCount']):
if exon_starts[i] <= atg_pos <= exon_ends[i]:
return i
def get_stop_pos(self):
return self.data['CDSEnds'][-1]
def get_stop_exon_index(self):
exon_starts = self.data['ExonStarts']
exon_ends = self.data['ExonEnds']
stop_pos = self.get_stop_pos()
for i in range(self.data['ExonCount']):
if exon_starts[i] <= stop_pos <= exon_ends[i]:
return i
def get_utr5_intron_len(self):
df = self.get_cds_dataframe()
df = df[df.Name.str.contains("5' UTR Intron")]
return sum(df.End - df.Start + 1)
def get_utr5_exon_len(self):
df = self.get_cds_dataframe()
df = df[df.Name.str.contains("5' UTR Exon")]
return sum(df.End - df.Start + 1)
def get_utr3_intron_len(self):
df = self.get_cds_dataframe()
df = df[df.Name.str.contains("3' UTR Intron")]
return sum(df.End - df.Start + 1)
def get_utr3_exon_len(self):
df = self.get_cds_dataframe()
df = df[df.Name.str.contains("3' UTR Exon")]
return sum(df.End - df.Start + 1)
| 34.746398 | 150 | 0.562744 |
import json
import numpy as np
import pandas as pd
class dbSNP:
def __init__(self, dbsnp_file):
self.df = pd.read_table(dbsnp_file)
def get_ref(self, start, end):
try:
i = (self.df['chromStart'] == start) & (self.df['chromEnd'] == end)
result = self.df[i]['name'].values[0]
except IndexError:
result = None
return result
class LookupTable:
def __init__(self, ng, g7, g8):
self.ng = ng
self.g7 = g7
self.g8 = g8
self.df = self._build_lookup_table(ng, g7, g8)
def _build_lookup_table(self, ng, g7, g8):
ng_pos1 = np.arange(1, len(ng.seq)+1)
ng_pos2 = ng_pos1 - ng.data['CDSStarts'][0]
ng_pos3 = ng.liftover()
g7_pos = list(range(g7.data['Start'], g7.data['End']+1))
g8_pos = list(range(g8.data['Start'], g8.data['End']+1))
allele = np.array(list(ng.seq))
annot1 = ng.annotate(cds=False)
annot2 = ng.annotate(cds=True)
d = {'Start_Position': ng_pos1, 'ATG_Position': ng_pos2,
'Transcript_Position': ng_pos3, 'GRCh37_Position': g7_pos,
'GRCh38_Position': g8_pos, 'Allele': allele,
'Exon_Annotation': annot1, 'CDS_Annotation': annot2}
return pd.DataFrame(d)
def to_tsv(self, f):
self.df.to_csv(f, sep='\t', index=False)
def find(self, system1, system2, value):
try:
result = self.df[self.df[system1] == value][system2].values[0]
except IndexError:
result = None
return result
class Sequence:
def __init__(self, fasta_file, json_file=None):
self.name, self.seq = self._read_fasta_file(fasta_file)
self.len = len(self.seq)
self.data = self._read_json_file(json_file)
def _read_fasta_file(self, fasta_file):
name = ''
seq = ''
with open(fasta_file) as f:
name = next(f).strip().replace('>', '')
for line in f:
seq += line.strip()
return name, seq
def _read_json_file(self, json_file):
if json_file is None:
return None
with open(json_file) as f:
return json.load(f)
def transcribe(self):
rna = ''
for i in range(self.data['ExonCount']):
start = self.data['ExonStarts'][i]
end = self.data['ExonEnds'][i]
rna += self.seq[start-1:end]
return rna
def get_exon_dataframe(self):
exon_starts = self.data['ExonStarts']
exon_ends = self.data['ExonEnds']
exon_names = [f'Exon {x+1}' for x in range(len(exon_starts))]
intron_starts = [x+1 for x in exon_ends[:-1]]
intron_ends = [x-1 for x in exon_starts[1:]]
intron_names = [f'Intron {x+1}' for x in range(len(intron_starts))]
upstream_start = 1
upstream_end = exon_starts[0] - 1
upstream_name = 'Upstream'
downstream_start = exon_ends[-1] + 1
downstream_end = len(self.seq)
downstream_name = 'Downstream'
starts = exon_starts + intron_starts + [upstream_start, downstream_start]
ends = exon_ends + intron_ends + [upstream_end, downstream_end]
names = exon_names + intron_names + [upstream_name, downstream_name]
df = pd.DataFrame({'Name': names, 'Start': starts, 'End': ends})
df = df.sort_values('Start')
df = df.reset_index(drop=True)
return df
def get_cds_dataframe(self):
cds_starts = self.data['CDSStarts']
cds_ends = self.data['CDSEnds']
cds_names = [f'CDS {x+1}' for x in range(len(cds_starts))]
intron_starts = [x+1 for x in cds_ends[:-1]]
intron_ends = [x-1 for x in cds_starts[1:]]
intron_names = [f'Intron {x+1}' for x in range(len(intron_starts))]
exon_df = self.get_exon_dataframe()
upstream_start = 1
upstream_end = exon_df[exon_df.Name == 'Upstream'].End.values[0]
upstream_name = 'Upstream'
utr5_starts = []
utr5_ends = []
atg_pos = self.get_atg_pos()
i = self.get_atg_exon_index()
for x in range(self.data['ExonCount']):
start = self.data['ExonStarts'][x]
end = self.data['ExonEnds'][x]
if x < i:
utr5_starts.append(start)
utr5_ends.append(end)
elif x == i:
utr5_starts.append(start)
utr5_ends.append(atg_pos-1)
else:
break
utr5_names = [f"5' UTR Exon {x+1}" for x in range(len(utr5_starts))]
utr5_intron_starts = []
utr5_intron_ends = []
for utr5_end in utr5_ends[:-1]:
utr5_intron_starts.append(utr5_end+1)
for utr5_start in utr5_starts[1:]:
utr5_intron_ends.append(utr5_start-1)
utr5_intron_names = [f"5' UTR Intron {x+1}" for x in range(len(utr5_intron_starts))]
utr3_starts = []
utr3_ends = []
stop_pos = self.get_stop_pos()
i = self.get_stop_exon_index()
for x in range(self.data['ExonCount']):
start = self.data['ExonStarts'][x]
end = self.data['ExonEnds'][x]
if x < i:
pass
elif x == i:
utr3_starts.append(stop_pos+1)
utr3_ends.append(end)
else:
utr3_starts.append(start)
utr3_ends.append(end)
utr3_names = [f"3' UTR Exon {x+1}" for x in range(len(utr3_starts))]
utr3_intron_starts = []
utr3_intron_ends = []
for utr3_end in utr3_ends[:-1]:
utr3_intron_starts.append(utr3_end+1)
for utr3_start in utr3_starts[1:]:
utr3_intron_ends.append(utr3_start-1)
utr3_intron_names = [f"3' UTR Intron {x+1}" for x in range(len(utr3_intron_starts))]
downstream_start = exon_df[exon_df.Name == 'Downstream'].Start.values[0]
downstream_end = len(self.seq)
downstream_name = 'Downstream'
starts = cds_starts + intron_starts + utr5_starts + utr5_intron_starts + utr3_starts + utr3_intron_starts + [upstream_start, downstream_start]
ends = cds_ends + intron_ends + utr5_ends + utr5_intron_ends + utr3_ends + utr3_intron_ends + [upstream_end, downstream_end]
names = cds_names + intron_names + utr5_names + utr5_intron_names + utr3_names + utr3_intron_names + [upstream_name, downstream_name]
df = pd.DataFrame({'Name': names, 'Start': starts, 'End': ends})
df = df.sort_values('Start')
df = df.reset_index(drop=True)
return df
def annotate(self, cds=False):
if cds:
df = self.get_cds_dataframe()
else:
df = self.get_exon_dataframe()
annotations = []
for i, r in df.iterrows():
n = r.End - r.Start + 1
annotations += [r.Name] * n
return annotations
def liftover(self):
cds_df = self.get_cds_dataframe()
cds_pos = []
cds_sum = 1
atg_start = self.data['CDSStarts'][0]
utr5_exon_offset = -1 * self.get_utr5_exon_len()
utr3_exon_sum = 1
for i, r in cds_df.iterrows():
cds_len = r.End - r.Start + 1
if r.Name.startswith('CDS'):
cds_pos += list(range(cds_sum, cds_sum + cds_len))
cds_sum += cds_len
elif r.Name.startswith('Intron'):
cds_pos += [f'{cds_sum-1}+{x}' for x in range(1, cds_len+1)]
elif r.Name == 'Upstream':
a = self.get_atg_pos() - self.get_utr5_intron_len()
cds_pos += [x-a for x in range(1, r.End+1)]
elif r.Name.startswith("5' UTR Exon"):
a = r.End - r.Start + 1
cds_pos += [x for x in range(utr5_exon_offset, utr5_exon_offset+a)]
utr5_exon_offset += a
elif r.Name.startswith("5' UTR Intron"):
cds_pos += [f'{utr5_exon_offset-1}+{x}' for x in range(1, cds_len+1)]
elif r.Name == 'Downstream':
a = self.get_utr3_exon_len() + 1
b = r.End - r.Start + 1
cds_pos += [f'*{x+a}' for x in range(b)]
elif r.Name.startswith("3' UTR Exon"):
a = r.End - r.Start + 1
cds_pos += [f'*{x}' for x in list(range(utr3_exon_sum, utr3_exon_sum+a))]
utr3_exon_sum += a
elif r.Name.startswith("3' UTR Intron"):
cds_pos += [f'*{utr3_exon_sum-1}+{x}' for x in range(1, cds_len+1)]
else:
cds_pos += ['.' for x in range(cds_len)]
if len(cds_pos) != self.len:
raise ValueError(f"LiftOver length error: expected {self.len} bp, "
f"but generated: {len(cds_pos)} bp")
return [f'c.{x}' for x in cds_pos]
def get_atg_pos(self):
return self.data['CDSStarts'][0]
def get_atg_exon_index(self):
exon_starts = self.data['ExonStarts']
exon_ends = self.data['ExonEnds']
atg_pos = self.get_atg_pos()
for i in range(self.data['ExonCount']):
if exon_starts[i] <= atg_pos <= exon_ends[i]:
return i
def get_stop_pos(self):
return self.data['CDSEnds'][-1]
def get_stop_exon_index(self):
exon_starts = self.data['ExonStarts']
exon_ends = self.data['ExonEnds']
stop_pos = self.get_stop_pos()
for i in range(self.data['ExonCount']):
if exon_starts[i] <= stop_pos <= exon_ends[i]:
return i
def get_utr5_intron_len(self):
df = self.get_cds_dataframe()
df = df[df.Name.str.contains("5' UTR Intron")]
return sum(df.End - df.Start + 1)
def get_utr5_exon_len(self):
df = self.get_cds_dataframe()
df = df[df.Name.str.contains("5' UTR Exon")]
return sum(df.End - df.Start + 1)
def get_utr3_intron_len(self):
df = self.get_cds_dataframe()
df = df[df.Name.str.contains("3' UTR Intron")]
return sum(df.End - df.Start + 1)
def get_utr3_exon_len(self):
df = self.get_cds_dataframe()
df = df[df.Name.str.contains("3' UTR Exon")]
return sum(df.End - df.Start + 1)
f73e1c0a7688518e63f8e8d783b887280d8de5dd | 3,830 | py | Python | src/main/python/thalesians/tsa/optimization/visual.py | vishalbelsare/tsa | Apache-2.0 | 117 stars, 2 issues, 37 forks
import itertools
import time
import warnings
import numpy as np
import matplotlib.colors
import matplotlib.pyplot as plt
import thalesians.tsa.checks as checks
import thalesians.tsa.numpyutils as npu
import thalesians.tsa.utils as utils
def _aggregate(aggregate_func, data, empty_aggregate):
if empty_aggregate != 'none':
return npu.apply(lambda x: empty_aggregate if len(x) == 0 else aggregate_func(x), data)
else:
return npu.apply(aggregate_func, data)
def visualize_grid_search(grid_search_result,
aggregate_func=np.nanmean, empty_aggregate='none',
fig=None, title=None,
refresh_until_ready=False):
if fig is None: fig = plt.figure()
if title is None: title = grid_search_result.optimization_id
fig.suptitle(title)
param_names = list(grid_search_result.param_ranges.keys())
subplots = {}
heatmaps = {}
datas = {}
for i1 in range(len(param_names)):
param_name1 = param_names[i1]
param_values1 = grid_search_result.param_ranges[param_name1]
for i2 in range(i1):
param_name2 = param_names[i2]
param_values2 = grid_search_result.param_ranges[param_name2]
data = np.empty((len(param_values1), len(param_values2)), dtype=object)
for i in range(np.size(data)): data.flat[i] = []
datas[(i1, i2)] = data
ax = fig.add_subplot(len(param_names) - 1, len(param_names) - 1, (i1 - 1) * (len(param_names) - 1) + i2 + 1)
subplots[(i1, i2)] = ax
initial_data = _aggregate(aggregate_func, datas[(i1, i2)], empty_aggregate)
            heatmaps[(i1, i2)] = ax.matshow(initial_data, cmap='coolwarm')
if i2 == i1 - 1:
ax.set_xticklabels([np.nan] + [0. if x == 1e-06 else x for x in param_values2], fontsize=6, rotation='vertical', verticalalignment='bottom')
ax.xaxis.set_ticks_position('top')
ax.set_yticklabels([np.nan] + [0. if x == 1e-06 else x for x in param_values1], fontsize=6)
ax.yaxis.set_ticks_position('right')
else:
ax.set_xticks([])
ax.set_yticks([])
if i1 == len(param_names) - 1: ax.set_xlabel(param_name2)
if i2 == 0: ax.set_ylabel(param_name1)
while True:
all_ready = True
for status in grid_search_result.evaluation_statuses:
if not status.ready: all_ready = False
else:
checks.check(utils.sequence_eq(param_names, status.work.info['param_names']))
param_value_index_combinations = itertools.combinations(range(len(param_names)), 2)
param_value_index_combinations = [(i2, i1) for (i1, i2) in param_value_index_combinations if i1 != i2]
for i1, i2 in param_value_index_combinations:
param_value_index1 = status.work.info['param_value_indices'][i1]
param_value_index2 = status.work.info['param_value_indices'][i2]
if status.result.exception is not None:
result = np.nan
elif status.result.result is None:
result = np.nan
else:
result = status.result.result
datas[(i1, i2)][param_value_index1, param_value_index2].append(result)
for i1 in range(len(param_names)):
for i2 in range(i1):
new_data = _aggregate(aggregate_func, datas[(i1, i2)], empty_aggregate)
heatmaps[(i1, i2)].set_data(new_data)
heatmaps[(i1, i2)].autoscale()
if (not refresh_until_ready) or all_ready: break
else:
fig.canvas.draw()
time.sleep(1)
return fig
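The grid above is driven by `_aggregate`: every cell holds a list of raw sample values, and the heatmap shows one aggregate per cell (mean by default), with a placeholder where a cell has no samples yet. A numpy-free sketch of that per-cell aggregation (the real code maps the function over a 2-D object array via `npu.apply`):

```python
# Per-cell aggregation as done by `_aggregate` above: each cell is a list of
# samples; empty cells get a placeholder instead of an aggregate value.
def aggregate_cells(cells, aggregate=lambda xs: sum(xs) / len(xs), empty="n/a"):
    return [[empty if not cell else aggregate(cell) for cell in row]
            for row in cells]

cells = [[[1.0, 3.0], []],
         [[2.0], [4.0, 6.0]]]
print(aggregate_cells(cells))
# [[2.0, 'n/a'], [2.0, 5.0]]
```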
f73e1d51694172dda5cb6550512fac7b3b4e4bd0 | 12,313 | py | Python | code/python/StocksAPIforDigitalPortals/v2/fds/sdk/StocksAPIforDigitalPortals/model/stock_notation_screener_search_data_performance_end_of_day_week1.py | factset/enterprise-sdk | Apache-2.0 | 6 stars, 2 issues
"""
Prime Developer Trial
No description provided (generated by Openapi Generator https://github.com/openapitools/openapi-generator) # noqa: E501
The version of the OpenAPI document: v1
Generated by: https://openapi-generator.tech
"""
import re # noqa: F401
import sys # noqa: F401
from fds.sdk.StocksAPIforDigitalPortals.model_utils import ( # noqa: F401
ApiTypeError,
ModelComposed,
ModelNormal,
ModelSimple,
cached_property,
change_keys_js_to_python,
convert_js_args_to_python_args,
date,
datetime,
file_type,
none_type,
validate_get_composed_info,
OpenApiModel
)
from fds.sdk.StocksAPIforDigitalPortals.exceptions import ApiAttributeError
def lazy_import():
from fds.sdk.StocksAPIforDigitalPortals.model.stock_notation_screener_search_data_ebit_margin_maximum import StockNotationScreenerSearchDataEbitMarginMaximum
from fds.sdk.StocksAPIforDigitalPortals.model.stock_notation_screener_search_data_ebit_margin_minimum import StockNotationScreenerSearchDataEbitMarginMinimum
globals()['StockNotationScreenerSearchDataEbitMarginMaximum'] = StockNotationScreenerSearchDataEbitMarginMaximum
globals()['StockNotationScreenerSearchDataEbitMarginMinimum'] = StockNotationScreenerSearchDataEbitMarginMinimum
class StockNotationScreenerSearchDataPerformanceEndOfDayWeek1(ModelNormal):
"""NOTE: This class is auto generated by OpenAPI Generator.
Ref: https://openapi-generator.tech
Do not edit the class manually.
Attributes:
allowed_values (dict): The key is the tuple path to the attribute
and the for var_name this is (var_name,). The value is a dict
with a capitalized key describing the allowed value and an allowed
value. These dicts store the allowed enum values.
attribute_map (dict): The key is attribute name
and the value is json key in definition.
discriminator_value_class_map (dict): A dict to go from the discriminator
variable value to the discriminator class name.
validations (dict): The key is the tuple path to the attribute
and the for var_name this is (var_name,). The value is a dict
that stores validations for max_length, min_length, max_items,
min_items, exclusive_maximum, inclusive_maximum, exclusive_minimum,
inclusive_minimum, and regex.
additional_properties_type (tuple): A tuple of classes accepted
as additional properties values.
"""
allowed_values = {
}
validations = {
}
@cached_property
def additional_properties_type():
"""
This must be a method because a model may have properties that are
of type self, this must run after the class is loaded
"""
lazy_import()
return (bool, date, datetime, dict, float, int, list, str, none_type,) # noqa: E501
_nullable = False
@cached_property
def openapi_types():
"""
This must be a method because a model may have properties that are
of type self, this must run after the class is loaded
Returns
openapi_types (dict): The key is attribute name
and the value is attribute type.
"""
lazy_import()
return {
'minimum': (StockNotationScreenerSearchDataEbitMarginMinimum,), # noqa: E501
'maximum': (StockNotationScreenerSearchDataEbitMarginMaximum,), # noqa: E501
}
@cached_property
def discriminator():
return None
attribute_map = {
'minimum': 'minimum', # noqa: E501
'maximum': 'maximum', # noqa: E501
}
read_only_vars = {
}
_composed_schemas = {}
@classmethod
@convert_js_args_to_python_args
def _from_openapi_data(cls, *args, **kwargs): # noqa: E501
"""StockNotationScreenerSearchDataPerformanceEndOfDayWeek1 - a model defined in OpenAPI
Keyword Args:
_check_type (bool): if True, values for parameters in openapi_types
will be type checked and a TypeError will be
raised if the wrong type is input.
Defaults to True
_path_to_item (tuple/list): This is a list of keys or values to
drill down to the model in received_data
when deserializing a response
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_configuration (Configuration): the instance to use when
deserializing a file_type parameter.
If passed, type conversion is attempted
If omitted no type conversion is done.
_visited_composed_classes (tuple): This stores a tuple of
classes that we have traveled through so that
if we see that class again we will not use its
discriminator again.
When traveling through a discriminator, the
composed schema that is
is traveled through is added to this set.
For example if Animal has a discriminator
petType and we pass in "Dog", and the class Dog
allOf includes Animal, we move through Animal
once using the discriminator, and pick Dog.
Then in Dog, we will make an instance of the
Animal class but this time we won't travel
through its discriminator because we passed in
_visited_composed_classes = (Animal,)
minimum (StockNotationScreenerSearchDataEbitMarginMinimum): [optional] # noqa: E501
maximum (StockNotationScreenerSearchDataEbitMarginMaximum): [optional] # noqa: E501
"""
_check_type = kwargs.pop('_check_type', True)
_spec_property_naming = kwargs.pop('_spec_property_naming', False)
_path_to_item = kwargs.pop('_path_to_item', ())
_configuration = kwargs.pop('_configuration', None)
_visited_composed_classes = kwargs.pop('_visited_composed_classes', ())
self = super(OpenApiModel, cls).__new__(cls)
if args:
raise ApiTypeError(
"Invalid positional arguments=%s passed to %s. Remove those invalid positional arguments." % (
args,
self.__class__.__name__,
),
path_to_item=_path_to_item,
valid_classes=(self.__class__,),
)
self._data_store = {}
self._check_type = _check_type
self._spec_property_naming = _spec_property_naming
self._path_to_item = _path_to_item
self._configuration = _configuration
self._visited_composed_classes = _visited_composed_classes + (self.__class__,)
for var_name, var_value in kwargs.items():
if var_name not in self.attribute_map and \
self._configuration is not None and \
self._configuration.discard_unknown_keys and \
self.additional_properties_type is None:
# discard variable.
continue
setattr(self, var_name, var_value)
return self
required_properties = set([
'_data_store',
'_check_type',
'_spec_property_naming',
'_path_to_item',
'_configuration',
'_visited_composed_classes',
])
@convert_js_args_to_python_args
def __init__(self, *args, **kwargs): # noqa: E501
"""StockNotationScreenerSearchDataPerformanceEndOfDayWeek1 - a model defined in OpenAPI
Keyword Args:
_check_type (bool): if True, values for parameters in openapi_types
will be type checked and a TypeError will be
raised if the wrong type is input.
Defaults to True
_path_to_item (tuple/list): This is a list of keys or values to
drill down to the model in received_data
when deserializing a response
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_configuration (Configuration): the instance to use when
deserializing a file_type parameter.
If passed, type conversion is attempted
If omitted no type conversion is done.
_visited_composed_classes (tuple): This stores a tuple of
classes that we have traveled through so that
if we see that class again we will not use its
discriminator again.
When traveling through a discriminator, the
composed schema that is
is traveled through is added to this set.
For example if Animal has a discriminator
petType and we pass in "Dog", and the class Dog
allOf includes Animal, we move through Animal
once using the discriminator, and pick Dog.
Then in Dog, we will make an instance of the
Animal class but this time we won't travel
through its discriminator because we passed in
_visited_composed_classes = (Animal,)
minimum (StockNotationScreenerSearchDataEbitMarginMinimum): [optional] # noqa: E501
maximum (StockNotationScreenerSearchDataEbitMarginMaximum): [optional] # noqa: E501
"""
_check_type = kwargs.pop('_check_type', True)
_spec_property_naming = kwargs.pop('_spec_property_naming', False)
_path_to_item = kwargs.pop('_path_to_item', ())
_configuration = kwargs.pop('_configuration', None)
_visited_composed_classes = kwargs.pop('_visited_composed_classes', ())
if args:
raise ApiTypeError(
"Invalid positional arguments=%s passed to %s. Remove those invalid positional arguments." % (
args,
self.__class__.__name__,
),
path_to_item=_path_to_item,
valid_classes=(self.__class__,),
)
self._data_store = {}
self._check_type = _check_type
self._spec_property_naming = _spec_property_naming
self._path_to_item = _path_to_item
self._configuration = _configuration
self._visited_composed_classes = _visited_composed_classes + (self.__class__,)
for var_name, var_value in kwargs.items():
if var_name not in self.attribute_map and \
self._configuration is not None and \
self._configuration.discard_unknown_keys and \
self.additional_properties_type is None:
# discard variable.
continue
setattr(self, var_name, var_value)
if var_name in self.read_only_vars:
raise ApiAttributeError(f"`{var_name}` is a read-only attribute. Use `from_openapi_data` to instantiate "
f"class with read only attributes.")
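The generated model above keeps an `attribute_map` translating pythonic attribute names to the JSON key names used on the wire. A simplified sketch of what that mapping is for (this is not the SDK's actual serializer, just the core idea):

```python
# Simplified sketch of how `attribute_map` is used: pythonic attribute names
# are translated to JSON key names when serializing; unknown attributes are
# dropped. The helper name and sample values are hypothetical.
def to_json_dict(obj_attrs, attribute_map):
    return {attribute_map[name]: value
            for name, value in obj_attrs.items()
            if name in attribute_map}

attribute_map = {"minimum": "minimum", "maximum": "maximum"}
print(to_json_dict({"minimum": 1.5, "maximum": 9.0, "_private": None}, attribute_map))
# {'minimum': 1.5, 'maximum': 9.0}
```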
f73e1e62965a6e32a612a32cdaf83a3e565b2aba | 997 | py | Python | azure-mgmt-network/azure/mgmt/network/v2017_09_01/models/virtual_network_gateway_paged.py | JonathanGailliez/azure-sdk-for-python | MIT | 4 stars, 54 issues, 3 forks
# coding=utf-8
# --------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for
# license information.
#
# Code generated by Microsoft (R) AutoRest Code Generator.
# Changes may cause incorrect behavior and will be lost if the code is
# regenerated.
# --------------------------------------------------------------------------
from msrest.paging import Paged
class VirtualNetworkGatewayPaged(Paged):
"""
A paging container for iterating over a list of :class:`VirtualNetworkGateway <azure.mgmt.network.v2017_09_01.models.VirtualNetworkGateway>` object
"""
_attribute_map = {
'next_link': {'key': 'nextLink', 'type': 'str'},
'current_page': {'key': 'value', 'type': '[VirtualNetworkGateway]'}
}
def __init__(self, *args, **kwargs):
super(VirtualNetworkGatewayPaged, self).__init__(*args, **kwargs)
| 35.607143 | 151 | 0.599799 |
from msrest.paging import Paged
class VirtualNetworkGatewayPaged(Paged):
_attribute_map = {
'next_link': {'key': 'nextLink', 'type': 'str'},
'current_page': {'key': 'value', 'type': '[VirtualNetworkGateway]'}
}
def __init__(self, *args, **kwargs):
super(VirtualNetworkGatewayPaged, self).__init__(*args, **kwargs)
| true | true |
f73e1e875eae6a7f4616a8ff224bd7dc4e109132 | 1,487 | py | Python | projects/1-molecular-dynamics/check.py | MUYANGGUO/HPC | MIT | 2 stars, 2 forks
if __name__ == "__main__":
import sys
import json
import numpy as np
firstline = sys.stdin.readline()
obj = json.loads(firstline)
Np = obj['num_points']
dt = obj['dt']
L = obj['L']
Nt = obj['num_steps']
Nint = obj['step_chunk']
k = obj['k']
d = obj['d']
gifname = obj['gifname']
numframes = int(Nt) // int(Nint) + 1
maxinterv = 100
maxinterv = min(maxinterv,numframes -1)
accum = np.zeros((maxinterv,1))
denom = np.zeros((maxinterv,1))
for i in range(numframes):
try:
line = sys.stdin.readline()
obj = json.loads(line)
X = np.array(obj['X'])
        except Exception:  # stop at end of input or on a malformed frame
            break
center = np.mean(X,axis=1)
X = X - center.reshape((3,1)) * np.ones((1,X.shape[1]))
if not i:
X0 = np.ndarray((maxinterv,X.shape[0],X.shape[1]))
for j in range(maxinterv):
X0[j,:,:] = X[:,:]
continue
for interv in range(1,maxinterv+1):
if i % interv:
continue
r = X[:,:] - X0[interv-1,:,:]
s_pro = r[0,:]*r[0,:] + r[1,:]*r[1,:] + r[2,:]*r[2,:]
accum[interv-1] = accum[interv-1] + np.mean(s_pro)
denom[interv-1] = denom[interv-1] + 1
X0[interv-1,:,:] = X[:,:]
out = accum / denom
x = np.linspace(dt*Nint,dt*Nint*maxinterv,maxinterv)
    p = np.polyfit(x, out.ravel(), 1)
print(f'Diffusion constant: {p[0] / 6.}')
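The script estimates the diffusion constant from the Einstein relation for 3-D Brownian motion, MSD(t) = 6Dt, so D is the slope of the mean-squared displacement versus lag time divided by 6. A numpy-free check of that relation on synthetic data with D = 0.5:

```python
# The script above uses the Einstein relation MSD(t) = 6 D t in 3D, so the
# diffusion constant is the MSD-vs-time slope divided by 6. Synthetic check
# with a known D = 0.5:
D_true = 0.5
times = [0.1 * (i + 1) for i in range(10)]
msd = [6.0 * D_true * t for t in times]

# Least-squares slope through the points (exact here, since the data is linear).
n = len(times)
mean_t = sum(times) / n
mean_m = sum(msd) / n
slope = sum((t - mean_t) * (m - mean_m) for t, m in zip(times, msd)) / \
        sum((t - mean_t) ** 2 for t in times)
print(round(slope / 6.0, 6))
# 0.5
```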
f73e1ee2d8bad91d5e8f74197e5063b0b3a1bda4 | 2,460 | py | Python | app/utils.py | jkereako/flask-api-skeleton | MIT | 4 stars, 3 forks
#!/usr/bin/env python
# -*- coding: utf-8 -*-
"""
utils
~~~~~
Utility methods.
I'm including this file in the skeleton because it contains methods I've
found useful.
The goal is to keep this file as lean as possible.
:author: Jeff Kereakoglow
:date: 2014-11-14
:copyright: (c) 2014 by Alexis Digital
:license: MIT, see LICENSE for more details
"""
from hashlib import sha224

from flask import request

import app
from app import cache  # assumed: the Flask cache extension instance is created in app/__init__.py
def prepare_json_response(success, message, data):
response = {"meta":{"success":success, "request":request.url}}
if data:
response["data"] = data
response["meta"]["data_count"] = len(data)
if message:
response["meta"]["message"] = message
return response
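`prepare_json_response` wraps every payload in a `meta`/`data` envelope, adding a count only when data is present. A request-free sketch of the same envelope, with the URL passed in explicitly instead of being read from `flask.request`:

```python
# Request-free sketch of the envelope `prepare_json_response` builds above;
# the URL is an explicit parameter here instead of `flask.request.url`.
def build_envelope(success, message, data, url):
    response = {"meta": {"success": success, "request": url}}
    if data:
        response["data"] = data
        response["meta"]["data_count"] = len(data)
    if message:
        response["meta"]["message"] = message
    return response

env = build_envelope(True, "ok", [{"id": 1}], "http://example.test/api/items")
print(env["meta"]["data_count"], env["meta"]["message"])
# 1 ok
```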
def fetch_cached_data(args=None):
"""
Retrieves a cache object when given an optional cache key.
Because most cache keys within this app are URL dependent, the
code which retrieves the cache has been refactored here to maximize
consistency.
    :param args: optional string appended to the base URL to form the cache key
    :type args: str
:returns: A dictionary of JSON data
:rtype: dict
"""
cache_key = request.base_url
if args:
cache_key += args
    cache_key = sha224(cache_key.encode("utf-8")).hexdigest()
rv = cache.get(cache_key)
# if rv is not None:
# rv = "<!-- served from cache -->" + rv
return rv
def cache_data(data, args=None, timeout=None):
"""
Stores data in the application cache using the base URL as the main
cache key.
To prevent all URLs from being cached, such as
/teams/nba?this_is_not_a_real_param=2
The base URL along with optional arguments are used. This ensures
    that URLs passed with arbitrary query string arguments will not
break the cache.
Because most cache keys within this app are URL dependent, the
code which stores the cache has been refactored here to maximize
consistency.
:param data: The data object to cache
:type data: dict
    :param args: optional string appended to the base URL to form the cache key
    :type args: str
:param timeout: The expiry for the cache
:type timeout: int
:returns: None
:rtype: None
"""
cache_key = request.base_url
if args:
cache_key += args
    cache_key = sha224(cache_key.encode("utf-8")).hexdigest()
timeout = app.config["CACHE_TIMEOUT"] if timeout is None else timeout
cache.set(cache_key, data, timeout)
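Both helpers derive the cache key the same way: the base URL (plus optional args) is hashed with SHA-224, so arbitrary query strings map to fixed-length keys. A self-contained sketch of that derivation:

```python
# How the cache key is derived in the helpers above: hash the base URL plus
# optional args with SHA-224 so every key has a fixed length.
from hashlib import sha224

def make_cache_key(base_url, args=None):
    key = base_url + (args or "")
    return sha224(key.encode("utf-8")).hexdigest()

k1 = make_cache_key("http://example.test/teams/nba")
k2 = make_cache_key("http://example.test/teams/nba", "page=2")
print(len(k1), k1 != k2)
# 56 True
```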
f73e1f2737483267ee57033b50462b9d6e91aabe | 17,652 | py | Python | tools/perf/scripts_smoke_unittest.py | zealoussnow/chromium | BSD-3-Clause-No-Nuclear-License-2014, BSD-3-Clause | 14,668 stars, 86 issues, 5,941 forks
# Copyright 2015 The Chromium Authors. All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.
from __future__ import print_function
import json
import logging
import os
import shutil
import subprocess
import sys
import tempfile
import unittest
from telemetry import decorators
from telemetry.testing import options_for_unittests
RUNNER_SCRIPTS_DIR = os.path.join(os.path.dirname(__file__),
'..', '..', 'testing', 'scripts')
sys.path.append(RUNNER_SCRIPTS_DIR)
import run_performance_tests # pylint: disable=wrong-import-position,import-error
class ScriptsSmokeTest(unittest.TestCase):
perf_dir = os.path.dirname(__file__)
def setUp(self):
self.options = options_for_unittests.GetCopy()
def RunPerfScript(self, args, env=None):
# TODO(crbug.com/985712): Switch all clients to pass a list of args rather
# than a string which we may not be parsing correctly.
if not isinstance(args, list):
args = args.split(' ')
proc = subprocess.Popen([sys.executable] + args, stdout=subprocess.PIPE,
stderr=subprocess.STDOUT, cwd=self.perf_dir,
env=env)
stdout = proc.communicate()[0]
return_code = proc.returncode
return return_code, stdout.decode('utf-8')
def testRunBenchmarkHelp(self):
return_code, stdout = self.RunPerfScript('run_benchmark --help')
self.assertEquals(return_code, 0, stdout)
self.assertIn('usage: run_benchmark', stdout)
@decorators.Disabled('chromeos') # crbug.com/754913
def testRunBenchmarkListBenchmarks(self):
cmdline = ['run_benchmark', 'list', '--browser', self.options.browser_type]
if self.options.browser_type == 'exact':
# If we're running with an exact browser and it was not specified with
# an absolute path, then there's no guarantee that we can actually find it
# now, so make the test a no-op.
if not os.path.isabs(self.options.browser_executable):
return
cmdline.extend(['--browser-executable', self.options.browser_executable])
return_code, stdout = self.RunPerfScript(cmdline)
self.assertRegexpMatches(stdout, r'Available benchmarks .*? are:')
self.assertEqual(return_code, 0)
def testRunBenchmarkRunListsOutBenchmarks(self):
return_code, stdout = self.RunPerfScript('run_benchmark run')
self.assertIn('Pass --browser to list benchmarks', stdout)
self.assertNotEquals(return_code, 0)
def testRunBenchmarkRunNonExistingBenchmark(self):
return_code, stdout = self.RunPerfScript('run_benchmark foo')
self.assertIn('no such benchmark: foo', stdout)
self.assertNotEquals(return_code, 0)
def testRunRecordWprHelp(self):
return_code, stdout = self.RunPerfScript('record_wpr')
self.assertEquals(return_code, 0, stdout)
self.assertIn('optional arguments:', stdout)
@decorators.Disabled('chromeos') # crbug.com/814068
def testRunRecordWprList(self):
return_code, stdout = self.RunPerfScript('record_wpr --list-benchmarks')
# TODO(nednguyen): Remove this once we figure out why importing
# small_profile_extender fails on Android dbg.
# crbug.com/561668
if 'ImportError: cannot import name small_profile_extender' in stdout:
self.skipTest('small_profile_extender is missing')
self.assertEquals(return_code, 0, stdout)
self.assertIn('kraken', stdout)
@decorators.Disabled('chromeos') # crbug.com/754913
def testRunPerformanceTestsTelemetry_end2end(self):
tempdir = tempfile.mkdtemp()
benchmarks = ['dummy_benchmark.stable_benchmark_1',
'dummy_benchmark.noisy_benchmark_1']
cmdline = ('../../testing/scripts/run_performance_tests.py '
'../../tools/perf/run_benchmark '
'--benchmarks=%s '
'--browser=%s '
'--isolated-script-test-also-run-disabled-tests '
'--isolated-script-test-output=%s' %
(','.join(benchmarks), self.options.browser_type,
os.path.join(tempdir, 'output.json')))
if self.options.browser_type == 'exact':
# If the path to the browser executable is not absolute, there is no
# guarantee that we can actually find it at this point, so no-op the
# test.
if not os.path.isabs(self.options.browser_executable):
return
cmdline += ' --browser-executable=%s' % self.options.browser_executable
return_code, stdout = self.RunPerfScript(cmdline)
self.assertEquals(return_code, 0, stdout)
try:
with open(os.path.join(tempdir, 'output.json')) as f:
test_results = json.load(f)
self.assertIsNotNone(
test_results, 'json_test_results should be populated: ' + stdout)
benchmarks_run = [str(b) for b in test_results['tests'].keys()]
self.assertEqual(sorted(benchmarks_run), sorted(benchmarks))
story_runs = test_results['num_failures_by_type']['PASS']
self.assertEqual(
story_runs, 2,
'Total runs should be 2 since each benchmark has one story.')
for benchmark in benchmarks:
with open(os.path.join(tempdir, benchmark, 'test_results.json')) as f:
test_results = json.load(f)
self.assertIsNotNone(
test_results, 'json_test_results should be populated: ' + stdout)
with open(os.path.join(tempdir, benchmark, 'perf_results.json')) as f:
perf_results = json.load(f)
self.assertIsNotNone(
perf_results, 'json perf results should be populated: ' + stdout)
except IOError as e:
self.fail('json_test_results should be populated: ' + stdout + str(e))
except AssertionError as e:
self.fail('Caught assertion error: ' + str(e) + 'With stdout: ' + stdout)
finally:
shutil.rmtree(tempdir)
@decorators.Enabled('linux') # Testing platform-independent code.
def testRunPerformanceTestsTelemetry_NoTestResults(self):
"""Test that test results output gets returned for complete failures."""
tempdir = tempfile.mkdtemp()
benchmarks = ['benchmark1', 'benchmark2']
return_code, stdout = self.RunPerfScript(
'../../testing/scripts/run_performance_tests.py '
'../../tools/perf/testdata/fail_and_do_nothing '
'--benchmarks=%s '
'--browser=%s '
'--isolated-script-test-output=%s' % (
','.join(benchmarks),
self.options.browser_type,
os.path.join(tempdir, 'output.json')
))
self.assertNotEqual(return_code, 0)
try:
with open(os.path.join(tempdir, 'output.json')) as f:
test_results = json.load(f)
self.assertIsNotNone(
test_results, 'json_test_results should be populated: ' + stdout)
self.assertTrue(
test_results['interrupted'],
'if the benchmark does not populate test results, then we should '
'populate it with a failure.')
for benchmark in benchmarks:
with open(os.path.join(tempdir, benchmark, 'test_results.json')) as f:
test_results = json.load(f)
self.assertIsNotNone(
test_results, 'json_test_results should be populated: ' + stdout)
self.assertTrue(
test_results['interrupted'],
'if the benchmark does not populate test results, then we should '
'populate it with a failure.')
except IOError as e:
self.fail('json_test_results should be populated: ' + stdout + str(e))
finally:
shutil.rmtree(tempdir)
# Android: crbug.com/932301
# ChromeOS: crbug.com/754913
# Windows: crbug.com/1024767
# Linux: crbug.com/1024767
# all: Disabled everywhere because the smoke test shard map
# needed to be changed to fix crbug.com/1024767.
@decorators.Disabled('all')
def testRunPerformanceTestsTelemetrySharded_end2end(self):
tempdir = tempfile.mkdtemp()
env = os.environ.copy()
env['GTEST_SHARD_INDEX'] = '0'
env['GTEST_TOTAL_SHARDS'] = '2'
return_code, stdout = self.RunPerfScript(
'../../testing/scripts/run_performance_tests.py '
'../../tools/perf/run_benchmark '
'--test-shard-map-filename=smoke_test_benchmark_shard_map.json '
'--browser=%s '
'--run-ref-build '
'--isolated-script-test-filter=dummy_benchmark.noisy_benchmark_1/'
'dummy_page.html::dummy_benchmark.stable_benchmark_1/dummy_page.html '
'--isolated-script-test-repeat=2 '
'--isolated-script-test-also-run-disabled-tests '
'--isolated-script-test-output=%s' % (
self.options.browser_type,
os.path.join(tempdir, 'output.json')
), env=env)
test_results = None
try:
self.assertEquals(return_code, 0)
expected_benchmark_folders = (
'dummy_benchmark.stable_benchmark_1',
'dummy_benchmark.stable_benchmark_1.reference',
'dummy_gtest')
with open(os.path.join(tempdir, 'output.json')) as f:
test_results = json.load(f)
self.assertIsNotNone(
test_results, 'json_test_results should be populated.')
test_runs = test_results['num_failures_by_type']['PASS']
# 1 gtest runs (since --isolated-script-test-repeat doesn't work for gtest
# yet) plus 2 dummy_benchmark runs = 3 runs.
self.assertEqual(
test_runs, 3, '--isolated-script-test-repeat=2 should work.')
for folder in expected_benchmark_folders:
with open(os.path.join(tempdir, folder, 'test_results.json')) as f:
test_results = json.load(f)
self.assertIsNotNone(
test_results, 'json test results should be populated.')
test_repeats = test_results['num_failures_by_type']['PASS']
if 'dummy_gtest' not in folder: # Repeats don't work for gtest yet.
self.assertEqual(
test_repeats, 2, '--isolated-script-test-repeat=2 should work.')
with open(os.path.join(tempdir, folder, 'perf_results.json')) as f:
perf_results = json.load(f)
self.assertIsNotNone(
perf_results, 'json perf results should be populated.')
except Exception as exc:
logging.error(
'Failed with error: %s\nOutput from run_performance_tests.py:\n\n%s',
exc, stdout)
if test_results is not None:
logging.error(
'Got test_results: %s\n', json.dumps(test_results, indent=2))
raise
finally:
shutil.rmtree(tempdir)
def RunGtest(self, generate_trace):
tempdir = tempfile.mkdtemp()
benchmark = 'dummy_gtest'
return_code, stdout = self.RunPerfScript(
'../../testing/scripts/run_performance_tests.py ' +
('../../tools/perf/run_gtest_benchmark.py ' if generate_trace else '') +
os.path.join('..', '..', 'tools', 'perf', 'testdata',
'dummy_gtest') +
(' --use-gtest-benchmark-script --output-format=histograms'
if generate_trace else '') +
' --non-telemetry=true '
'--this-arg=passthrough '
'--argument-to-check-that-arguments-work '
'--gtest-benchmark-name dummy_gtest '
'--isolated-script-test-output=%s' % (
os.path.join(tempdir, 'output.json')
))
try:
self.assertEquals(return_code, 0, stdout)
except AssertionError:
try:
with open(os.path.join(tempdir, benchmark, 'benchmark_log.txt')) as fh:
print(fh.read())
# pylint: disable=bare-except
except:
# pylint: enable=bare-except
pass
raise
try:
with open(os.path.join(tempdir, 'output.json')) as f:
test_results = json.load(f)
self.assertIsNotNone(
test_results, 'json_test_results should be populated: ' + stdout)
with open(os.path.join(tempdir, benchmark, 'test_results.json')) as f:
test_results = json.load(f)
self.assertIsNotNone(
test_results, 'json_test_results should be populated: ' + stdout)
with open(os.path.join(tempdir, benchmark, 'perf_results.json')) as f:
perf_results = json.load(f)
self.assertIsNotNone(
perf_results, 'json perf results should be populated: ' + stdout)
except IOError as e:
self.fail('json_test_results should be populated: ' + stdout + str(e))
finally:
shutil.rmtree(tempdir)
# Windows: ".exe" is auto-added which breaks Windows.
# ChromeOS: crbug.com/754913.
@decorators.Disabled('win', 'chromeos')
def testRunPerformanceTestsGtest_end2end(self):
self.RunGtest(generate_trace=False)
# Windows: ".exe" is auto-added which breaks Windows.
# ChromeOS: crbug.com/754913.
@decorators.Disabled('win', 'chromeos')
def testRunPerformanceTestsGtestTrace_end2end(self):
self.RunGtest(generate_trace=True)
def testRunPerformanceTestsShardedArgsParser(self):
options = run_performance_tests.parse_arguments([
'../../tools/perf/run_benchmark', '-v', '--browser=release_x64',
'--upload-results', '--run-ref-build',
'--test-shard-map-filename=win-10-perf_map.json',
'--assert-gpu-compositing',
r'--isolated-script-test-output=c:\a\b\c\output.json',
r'--isolated-script-test-perf-output=c:\a\b\c\perftest-output.json',
'--passthrough-arg=--a=b',
])
self.assertIn('--assert-gpu-compositing', options.passthrough_args)
self.assertIn('--browser=release_x64', options.passthrough_args)
self.assertIn('-v', options.passthrough_args)
self.assertIn('--a=b', options.passthrough_args)
self.assertEqual(options.executable, '../../tools/perf/run_benchmark')
self.assertEqual(options.isolated_script_test_output,
r'c:\a\b\c\output.json')
def testRunPerformanceTestsTelemetryCommandGenerator_ReferenceBrowserComeLast(self):
"""This tests for crbug.com/928928."""
options = run_performance_tests.parse_arguments([
'../../tools/perf/run_benchmark', '--browser=release_x64',
'--run-ref-build',
'--test-shard-map-filename=win-10-perf_map.json',
r'--isolated-script-test-output=c:\a\b\c\output.json',
])
self.assertIn('--browser=release_x64', options.passthrough_args)
command = run_performance_tests.TelemetryCommandGenerator(
'fake_benchmark_name', options, is_reference=True).generate(
'fake_output_dir')
original_browser_arg_index = command.index('--browser=release_x64')
reference_browser_arg_index = command.index('--browser=reference')
self.assertTrue(reference_browser_arg_index > original_browser_arg_index)
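The ordering assertion above matters because `argparse` keeps the last value when an optional flag is repeated, so appending `--browser=reference` after the original flag is what makes the reference run use the reference browser. A minimal demonstration:

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("--browser")
# With a repeated flag, argparse stores the last occurrence, which is
# why '--browser=reference' must come after '--browser=release_x64'.
ns = parser.parse_args(["--browser=release_x64", "--browser=reference"])
print(ns.browser)  # -> reference
```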
def testRunPerformanceTestsTelemetryCommandGenerator_StorySelectionConfig_Unabridged(self):
options = run_performance_tests.parse_arguments([
'../../tools/perf/run_benchmark', '--browser=release_x64',
'--run-ref-build',
r'--isolated-script-test-output=c:\a\b\c\output.json',
])
story_selection_config = {
'abridged': False,
'begin': 1,
'end': 5,
}
command = run_performance_tests.TelemetryCommandGenerator(
'fake_benchmark_name', options, story_selection_config).generate(
'fake_output_dir')
self.assertNotIn('--run-abridged-story-set', command)
self.assertIn('--story-shard-begin-index=1', command)
self.assertIn('--story-shard-end-index=5', command)
def testRunPerformanceTestsTelemetryCommandGenerator_StorySelectionConfig_Abridged(self):
options = run_performance_tests.parse_arguments([
'../../tools/perf/run_benchmark', '--browser=release_x64',
'--run-ref-build',
r'--isolated-script-test-output=c:\a\b\c\output.json',
])
story_selection_config = {
'abridged': True,
}
command = run_performance_tests.TelemetryCommandGenerator(
'fake_benchmark_name', options, story_selection_config).generate(
'fake_output_dir')
self.assertIn('--run-abridged-story-set', command)
def testRunPerformanceTestsGtestArgsParser(self):
options = run_performance_tests.parse_arguments([
'media_perftests',
'--non-telemetry=true',
'--single-process-tests',
'--test-launcher-retry-limit=0',
'--isolated-script-test-filter=*::-*_unoptimized::*_unaligned::'
'*unoptimized_aligned',
'--gtest-benchmark-name',
'media_perftests',
'--isolated-script-test-output=/x/y/z/output.json',
])
self.assertIn('--single-process-tests', options.passthrough_args)
self.assertIn('--test-launcher-retry-limit=0', options.passthrough_args)
self.assertEqual(options.executable, 'media_perftests')
self.assertEqual(options.isolated_script_test_output, r'/x/y/z/output.json')
def testRunPerformanceTestsExecuteGtest_OSError(self):
class FakeCommandGenerator(object):
def __init__(self):
self.executable_name = 'binary_that_doesnt_exist'
self._ignore_shard_env_vars = False
def generate(self, unused_path):
return [self.executable_name]
tempdir = tempfile.mkdtemp()
try:
fake_command_generator = FakeCommandGenerator()
output_paths = run_performance_tests.OutputFilePaths(
tempdir, 'fake_gtest')
output_paths.SetUp()
return_code = run_performance_tests.execute_gtest_perf_test(
fake_command_generator, output_paths, is_unittest=True)
self.assertEqual(return_code, 1)
with open(output_paths.test_results) as fh:
json_test_results = json.load(fh)
self.assertGreater(json_test_results['num_failures_by_type']['FAIL'], 0)
finally:
shutil.rmtree(tempdir)
| 43.477833 | 93 | 0.669103 |
f73e1f6241b7d6e7a26e4da5c1098b6baf09c872 | 2,873 | py | Python | tests/app_text.py | pglet/pglet-python | 31f65083d68888661d4780322c26bfc57c091e1c | [
"MIT"
] | 12 | 2021-05-01T17:49:57.000Z | 2022-02-12T21:20:56.000Z | tests/app_text.py | pglet/pglet-python | 31f65083d68888661d4780322c26bfc57c091e1c | [
"MIT"
] | 47 | 2021-01-22T18:31:22.000Z | 2022-03-24T00:17:03.000Z | tests/app_text.py | pglet/pglet-python | 31f65083d68888661d4780322c26bfc57c091e1c | [
"MIT"
] | 10 | 2021-02-08T19:13:42.000Z | 2022-03-26T10:40:20.000Z | import os
import sys,inspect
currentdir = os.path.dirname(os.path.abspath(inspect.getfile(inspect.currentframe())))
parentdir = os.path.dirname(currentdir)
sys.path.insert(0,parentdir)
import pglet
from pglet import Stack, Text
def main(page):
page.add(
Text('Squares', size='large'),
Stack(horizontal=True, controls=[
Text('left top', align='left', vertical_align='top', width=100, height=100, bgcolor='salmon', color='white', padding=5),
Text('center top', align='center', vertical_align='top', width=100, height=100, bgcolor='salmon', color='white', padding=5, size='large', border='1px solid #555'),
Text('right top', align='right', vertical_align='top', width=100, height=100, bgcolor='salmon', color='white', padding=5, border='2px solid #555')
]),
Stack(horizontal=True, controls=[
Text('left center', align='left', vertical_align='center', width=100, height=100, bgcolor='PaleGoldenrod', padding=5),
Text('center center', align='center', vertical_align='center', width=100, height=100, bgcolor='PaleGoldenrod', padding=5, size='large', border='1px solid #555'),
Text('right center', align='right', vertical_align='center', width=100, height=100, bgcolor='PaleGoldenrod', padding=5, border='2px solid #555')
]),
Stack(horizontal=True, controls=[
Text('left bottom', align='left', vertical_align='center', width=100, height=100, bgcolor='PaleGreen', padding=5),
Text('center bottom', align='center', vertical_align='center', width=100, height=100, bgcolor='PaleGreen', padding=5, size='large', border='1px solid #555'),
Text('right bottom', align='right', vertical_align='center', width=100, height=100, bgcolor='PaleGreen', padding=5, border='2px solid #555')
]),
Text('Circles', size='large'),
Stack(horizontal=True, controls=[
Text('regular', align='center', vertical_align='center', width=100, height=100, border_radius=50, bgcolor='salmon'),
Text('bold italic', bold=True, italic=True, align='center', vertical_align='center', width=100, height=100, border_radius=50, bgcolor='PaleGoldenrod', size='large', border='1px solid #555'),
Text('bold', bold=True, align='center', vertical_align='center', width=100, height=100, border_radius=50, bgcolor='PaleGreen', border='2px solid #555')
]),
Text('Markdown', size='large'),
Text('''
# GitHub Flavored Markdown
## Autolink literals
www.example.com, https://example.com, and contact@example.com.
## Strikethrough
~one~ or ~~two~~ tildes.
### Code sample
```
import pglet
page = page.page()
```
## Table
| a | b | c | d |
| - | :- | -: | :-: |
## Tasklist
* [ ] to do
* [x] done
''', markdown=True)
)
pglet.app("python-text", target=main) | 42.25 | 202 | 0.641838 | import os
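The nine `Text` squares in this demo repeat the same size, padding, and alignment keyword arguments; one way to factor that out is a small props builder (a sketch — `square_props` is hypothetical, and `pglet.Text(**props)` would consume the result):

```python
def square_props(value, bgcolor, align, vertical_align):
    # Shared keyword arguments for the 100x100 demo squares above.
    return dict(value=value, bgcolor=bgcolor, align=align,
                vertical_align=vertical_align,
                width=100, height=100, padding=5)
```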
import sys,inspect
currentdir = os.path.dirname(os.path.abspath(inspect.getfile(inspect.currentframe())))
parentdir = os.path.dirname(currentdir)
sys.path.insert(0,parentdir)
import pglet
from pglet import Stack, Text
def main(page):
page.add(
Text('Squares', size='large'),
Stack(horizontal=True, controls=[
Text('left top', align='left', vertical_align='top', width=100, height=100, bgcolor='salmon', color='white', padding=5),
Text('center top', align='center', vertical_align='top', width=100, height=100, bgcolor='salmon', color='white', padding=5, size='large', border='1px solid #555'),
Text('right top', align='right', vertical_align='top', width=100, height=100, bgcolor='salmon', color='white', padding=5, border='2px solid #555')
]),
Stack(horizontal=True, controls=[
Text('left center', align='left', vertical_align='center', width=100, height=100, bgcolor='PaleGoldenrod', padding=5),
Text('center center', align='center', vertical_align='center', width=100, height=100, bgcolor='PaleGoldenrod', padding=5, size='large', border='1px solid #555'),
Text('right center', align='right', vertical_align='center', width=100, height=100, bgcolor='PaleGoldenrod', padding=5, border='2px solid #555')
]),
Stack(horizontal=True, controls=[
Text('left bottom', align='left', vertical_align='bottom', width=100, height=100, bgcolor='PaleGreen', padding=5),
Text('center bottom', align='center', vertical_align='bottom', width=100, height=100, bgcolor='PaleGreen', padding=5, size='large', border='1px solid #555'),
Text('right bottom', align='right', vertical_align='bottom', width=100, height=100, bgcolor='PaleGreen', padding=5, border='2px solid #555')
]),
Text('Circles', size='large'),
Stack(horizontal=True, controls=[
Text('regular', align='center', vertical_align='center', width=100, height=100, border_radius=50, bgcolor='salmon'),
Text('bold italic', bold=True, italic=True, align='center', vertical_align='center', width=100, height=100, border_radius=50, bgcolor='PaleGoldenrod', size='large', border='1px solid #555'),
Text('bold', bold=True, align='center', vertical_align='center', width=100, height=100, border_radius=50, bgcolor='PaleGreen', border='2px solid #555')
]),
Text('Markdown', size='large'),
Text('''
# GitHub Flavored Markdown
## Autolink literals
www.example.com, https://example.com, and contact@example.com.
## Strikethrough
~one~ or ~~two~~ tildes.
### Code sample
```
import pglet
page = pglet.page()
```
## Table
| a | b | c | d |
| - | :- | -: | :-: |
## Tasklist
* [ ] to do
* [x] done
''', markdown=True)
)
pglet.app("python-text", target=main)
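The `os`/`sys`/`inspect` lines at the top of this sample make the repository's local `pglet` package importable when the script is run from a subdirectory: they resolve the script's own directory and push its parent onto `sys.path`. The same trick in isolation (a sketch; the surrounding directory layout is an assumption):

```python
import inspect
import os
import sys

# Resolve the directory containing the running script, then add its parent
# to sys.path so sibling packages become importable.
currentdir = os.path.dirname(os.path.abspath(inspect.getfile(inspect.currentframe())))
parentdir = os.path.dirname(currentdir)
if parentdir not in sys.path:
    sys.path.insert(0, parentdir)
print(parentdir in sys.path)  # True
```

With the parent directory at the front of `sys.path`, `import pglet` prefers the checked-out package over any installed copy.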
# scripts/gan/cycle_gan/train.py (repo: hiroyasuakada/ros_start, license: MIT)
import os
import random
import itertools
import numpy as np
import torch
import torch.nn as nn
import torch.utils.data
import torchvision.transforms as transforms
from torchvision.utils import make_grid
from torch.autograd import Variable
from PIL import Image
import matplotlib.pyplot as plt
from tensorboardX import SummaryWriter
import time
import cv2
##################################################################
from dataset import UnalignedDataset
from model_base import ResNetBlock, Generator, Discriminator
from model_cyclegan import CycleGAN
##################################################################
def train(log_dir, device, lr, beta1, lambda_idt, lambda_A, lambda_B, lambda_mask,
num_epoch, num_epoch_resume, save_epoch_freq):
model = CycleGAN(log_dir=log_dir, device=device, lr=lr, beta1=beta1,
lambda_idt=lambda_idt, lambda_A=lambda_A, lambda_B=lambda_B, lambda_mask=lambda_mask)
if num_epoch_resume != 0:
model.log_dir = 'logs'
print('load model {}'.format(num_epoch_resume))
model.load('epoch' + str(num_epoch_resume))
writer = SummaryWriter(log_dir)
for epoch in range(num_epoch):
print('epoch {} started'.format(epoch + 1 + num_epoch_resume))
t1 = time.perf_counter()
losses = model.train(train_loader)
t2 = time.perf_counter()
get_processing_time = t2 - t1
print('epoch: {}, elapsed_time: {} sec losses: {}'
.format(epoch + 1 + num_epoch_resume, get_processing_time, losses))
writer.add_scalar('loss_G_A', losses[0], epoch + 1 + num_epoch_resume)
writer.add_scalar('loss_D_A', losses[1], epoch + 1 + num_epoch_resume)
writer.add_scalar('loss_G_B', losses[2], epoch + 1 + num_epoch_resume)
writer.add_scalar('loss_D_B', losses[3], epoch + 1 + num_epoch_resume)
writer.add_scalar('loss_cycle_A', losses[4], epoch + 1 + num_epoch_resume)
writer.add_scalar('loss_cycle_B', losses[5], epoch + 1 + num_epoch_resume)
writer.add_scalar('loss_idt_A', losses[6], epoch + 1 + num_epoch_resume)
writer.add_scalar('loss_idt_B', losses[7], epoch + 1 + num_epoch_resume)
writer.add_scalar('loss_mask', losses[8], epoch + 1 + num_epoch_resume)
if (epoch + 1 + num_epoch_resume) % save_epoch_freq == 0:
model.save('epoch%d' % (epoch + 1 + num_epoch_resume))
if __name__ == '__main__':
# random seeds
torch.manual_seed(1234)
np.random.seed(1234)
random.seed(1234)
# image
height = 128
width = 256
# training details
batch_size = 1
lr = 0.0002 # initial learning rate for adam
beta1 = 0.5 # momentum term of adam
num_epoch = 100
num_epoch_resume = 0
save_epoch_freq = 1
# weights of loss function
# lambda_idt = 5
# lambda_A = 10.0
# lambda_B = 10.0
# lambda_mask = 10.0
lambda_idt = 5.0
lambda_A = 10.0
lambda_B = 10.0
lambda_mask = 0
# files, dirs
log_dir = 'logs'
# gpu
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
print('device {}'.format(device))
# dataset
train_dataset = UnalignedDataset(is_train=True)
train_loader = torch.utils.data.DataLoader(train_dataset, batch_size=batch_size, shuffle=True)
# train
train(log_dir, device, lr, beta1, lambda_idt, lambda_A, lambda_B, lambda_mask,
num_epoch, num_epoch_resume, save_epoch_freq)
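One detail worth noting in `train()` above: every `add_scalar` call and checkpoint name uses `epoch + 1 + num_epoch_resume`, so a resumed run continues the old epoch numbering instead of restarting at 1. A tiny stdlib sketch of that arithmetic (the values are illustrative):

```python
def logged_epochs(num_epoch, num_epoch_resume):
    # Epoch labels produced by the training loop above.
    return [epoch + 1 + num_epoch_resume for epoch in range(num_epoch)]

# A fresh 3-epoch run, then a resume from epoch 3 for 2 more epochs:
first_run = logged_epochs(3, 0)
resumed = logged_epochs(2, 3)
print(first_run, resumed)  # [1, 2, 3] [4, 5]
```

This keeps the TensorBoard curves continuous across restarts.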
# -*- coding: utf-8 -*-
# setup.py (repo: albert118/jaffle, license: BSD-3-Clause)
# flake8: noqa
import os
from setuptools import find_packages, setup
from jaffle import __version__
long_description = '''
Jaffle is an automation tool for Python software development, which does:
- Instantiate Python applications in a Jupyter kernel and allows them to call
each other
- Launch external processes
- Combine log messages of all Python applications and external processes
enabling filtering and reformatting
Jaffle contains WatchdogApp that can watch filesystem events and call
arbitrary code or command. That allows you to automate testing, reloading
applications, etc.
Examples
========
- `Auto-testing with pytest`_
- `Automatic Sphinx Document Build`_
- `Web Development with Tornado and React`_
- `Jupyter Extension Development`_
.. _`Auto-testing with pytest`: http://jaffle.readthedocs.io/en/latest/cookbook/pytest.html
.. _`Automatic Sphinx Document Build`: http://jaffle.readthedocs.io/en/latest/cookbook/sphinx.html
.. _`Web Development with Tornado and React`: http://jaffle.readthedocs.io/en/latest/cookbook/tornado_spa.html
.. _`Jupyter Extension Development`: http://jaffle.readthedocs.io/en/latest/cookbook/jupyter_ext.html
GitHub Repository
==================
`yatsu/jaffle`_
.. _`yatsu/jaffle`: https://github.com/yatsu/jaffle
Documentation
=============
`Jaffle documentation`_
.. _`Jaffle documentation`: http://jaffle.readthedocs.io
'''.strip()
requirements = [
"filelock>=3.0.0,<4",
"ipython",
"jupyter-client",
"jupyter-console",
"jupyter-core",
"jsonschema>=2.0.0,<3",
"mako>=1.0.0,<2",
"notebook>=5.0.0,<6",
"prompt-toolkit<2",
"pygments",
"pyyaml",
"pyzmq",
"setuptools",
"tornado>=4.5,<5",
"traitlets",
"watchdog>=0.8.0"
]
dev_requirements = [
"flake8>=3.5.0",
"pip",
"pytest>=3.4.0",
"pytest-cov>=2.5.0",
"pytest-tornado>=0.4.0",
"watchdog>=0.8.0"
]
setup(
name='jaffle',
version=__version__,
description='Python app and process orchestration tool for development environment',
long_description=long_description,
author='Jaffle Development Team',
author_email='jaffle@yatsu.info',
url='https://github.com/yatsu/jaffle',
classifiers=[
'Development Status :: 3 - Alpha',
'Environment :: Console',
'Intended Audience :: Developers',
'License :: OSI Approved :: BSD License',
'Natural Language :: English',
'Operating System :: MacOS',
'Operating System :: POSIX',
'Programming Language :: Python :: 3',
'Programming Language :: Python :: 3.4',
'Programming Language :: Python :: 3.5',
'Programming Language :: Python :: 3.6',
'Topic :: Software Development :: Libraries',
'Topic :: Software Development :: Testing',
'Topic :: System :: Monitoring',
'Topic :: System :: Filesystems',
'Topic :: System :: Shells',
'Topic :: Utilities'
],
keywords='orchestration interactive process test pytest watchdog',
packages=find_packages(),
install_requires=requirements,
extras_require={
'dev': dev_requirements,
'pytest': ['pytest>=3.4.0']
},
include_package_data=True,
entry_points={
'console_scripts': [
'jaffle = jaffle.command:main'
]
}
)
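For reference, the `extras_require` mapping above means a plain install pulls in `install_requires` only, while `pip install jaffle[dev]` adds the `dev` list on top, with duplicates collapsed. Conceptually (using shortened, illustrative lists rather than the full ones above):

```python
# Shortened, illustrative versions of the lists defined in setup.py above.
install_requires = ["tornado>=4.5,<5", "pyzmq", "watchdog>=0.8.0"]
extras_require = {"dev": ["pytest>=3.4.0", "flake8>=3.5.0", "watchdog>=0.8.0"]}

# What "jaffle[dev]" resolves to: base requirements plus the named extra.
resolved = sorted(set(install_requires) | set(extras_require["dev"]))
print(resolved)
```

Note that `watchdog>=0.8.0` appears in both lists but is installed once.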
# ansible/ansiblelints/stage/LongStatement.py (repo: KalmanMeth/defect-prediction, license: Apache-2.0)
from ansiblelint import AnsibleLintRule
class LongStatement(AnsibleLintRule):
id = 'ANSIBLE0020'
description = 'Keeping line in YAML file below 160 characters'
severity = 'medium'
tags = {'clarity'}
version_added = 'v1.0.0'
shortdesc = 'Keeping line in YAML file below 160 characters'
def matchlines(self, file, text):
for (prev_line_no, line) in enumerate(text.split("\n")):
if prev_line_no != 0:
if len(line) > 160:
return (prev_line_no, len(line), line)
return []
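A quick, dependency-free check of the `matchlines` logic above (the `AnsibleLintRule` base class is not needed to exercise it): lines are numbered from zero, the first line is always skipped, and the first line longer than 160 characters is reported as a tuple.

```python
# Standalone copy of the rule's core logic, for illustration only.
def find_long_line(text, limit=160):
    for line_no, line in enumerate(text.split("\n")):
        if line_no != 0 and len(line) > limit:
            return (line_no, len(line), line)
    return []

short_doc = "---\nok"
long_doc = "---\n" + "x" * 161 + "\nok"
print(find_long_line(short_doc))  # []
print(find_long_line(long_doc)[:2])  # reports line 1, length 161
```

Note the early return: only the first over-length line is flagged per file.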
# Bio/SeqFeature.py (repo: rchurt/biopython, license: BSD-3-Clause)
# Copyright 2000-2003 Jeff Chang.
# Copyright 2001-2008 Brad Chapman.
# Copyright 2005-2016 by Peter Cock.
# Copyright 2006-2009 Michiel de Hoon.
# All rights reserved.
#
# This file is part of the Biopython distribution and governed by your
# choice of the "Biopython License Agreement" or the "BSD 3-Clause License".
# Please see the LICENSE file that should have been included as part of this
# package.
"""Represent a Sequence Feature holding info about a part of a sequence.
This is heavily modeled after the Biocorba SeqFeature objects, and
may be pretty biased towards GenBank stuff since I'm writing it
for the GenBank parser output...
What's here:
Base class to hold a Feature
----------------------------
Classes:
- SeqFeature
Hold information about a Reference
----------------------------------
This is an attempt to create a General class to hold Reference type
information.
Classes:
- Reference
Specify locations of a feature on a Sequence
--------------------------------------------
This aims to handle, in Ewan Birney's words, 'the dreaded fuzziness issue'.
This has the advantages of allowing us to handle fuzzy stuff in case anyone
needs it, and also be compatible with BioPerl etc and BioSQL.
Classes:
- FeatureLocation - Specify the start and end location of a feature.
- CompoundLocation - Collection of FeatureLocation objects (for joins etc).
- ExactPosition - Specify the position as being exact.
- WithinPosition - Specify a position occurring within some range.
- BetweenPosition - Specify a position occurring between a range (OBSOLETE?).
- BeforePosition - Specify the position as being found before some base.
- AfterPosition - Specify the position as being found after some base.
- OneOfPosition - Specify a position where the location can be multiple positions.
- UncertainPosition - Specify a specific position which is uncertain.
- UnknownPosition - Represents missing information like '?' in UniProt.
"""
from collections import OrderedDict
from Bio.Seq import MutableSeq, reverse_complement
class SeqFeature:
"""Represent a Sequence Feature on an object.
Attributes:
- location - the location of the feature on the sequence (FeatureLocation)
- type - the specified type of the feature (ie. CDS, exon, repeat...)
- location_operator - a string specifying how this SeqFeature may
be related to others. For example, in the example GenBank feature
shown below, the location_operator would be "join". This is a proxy
for feature.location.operator and only applies to compound locations.
- strand - A value specifying on which strand (of a DNA sequence, for
instance) the feature deals with. 1 indicates the plus strand, -1
indicates the minus strand, 0 indicates stranded but unknown (? in GFF3),
while the default of None indicates that strand doesn't apply (dot in GFF3,
e.g. features on proteins). Note this is a shortcut for accessing the
strand property of the feature's location.
- id - A string identifier for the feature.
- ref - A reference to another sequence. This could be an accession
number for some different sequence. Note this is a shortcut for the
reference property of the feature's location.
- ref_db - A different database for the reference accession number.
Note this is a shortcut for the reference property of the location
- qualifiers - A dictionary of qualifiers on the feature. These are
analogous to the qualifiers from a GenBank feature table. The keys of
the dictionary are qualifier names, the values are the qualifier
values. As of Biopython 1.69 this is an ordered dictionary.
"""
def __init__(
self,
location=None,
type="",
location_operator="",
strand=None,
id="<unknown id>",
qualifiers=None,
sub_features=None,
ref=None,
ref_db=None,
):
"""Initialize a SeqFeature on a Sequence.
location can either be a FeatureLocation (with strand argument also
given if required), or None.
e.g. With no strand, on the forward strand, and on the reverse strand:
>>> from Bio.SeqFeature import SeqFeature, FeatureLocation
>>> f1 = SeqFeature(FeatureLocation(5, 10), type="domain")
>>> f1.strand == f1.location.strand == None
True
>>> f2 = SeqFeature(FeatureLocation(7, 110, strand=1), type="CDS")
>>> f2.strand == f2.location.strand == +1
True
>>> f3 = SeqFeature(FeatureLocation(9, 108, strand=-1), type="CDS")
>>> f3.strand == f3.location.strand == -1
True
An invalid strand will trigger an exception:
>>> f4 = SeqFeature(FeatureLocation(50, 60), strand=2)
Traceback (most recent call last):
...
ValueError: Strand should be +1, -1, 0 or None, not 2
Similarly if set via the FeatureLocation directly:
>>> loc4 = FeatureLocation(50, 60, strand=2)
Traceback (most recent call last):
...
ValueError: Strand should be +1, -1, 0 or None, not 2
For exact start/end positions, an integer can be used (as shown above)
as shorthand for the ExactPosition object. For non-exact locations, the
FeatureLocation must be specified via the appropriate position objects.
Note that the strand, ref and ref_db arguments to the SeqFeature are
now obsolete and will be deprecated in a future release (which will
give warning messages) and later removed. Set them via the location
object instead.
Note that location_operator and sub_features arguments can no longer
be used, instead do this via the CompoundLocation object.
"""
if (
location is not None
and not isinstance(location, FeatureLocation)
and not isinstance(location, CompoundLocation)
):
raise TypeError(
"FeatureLocation, CompoundLocation (or None) required for the location"
)
self.location = location
self.type = type
if location_operator:
# TODO - Deprecation warning
self.location_operator = location_operator
if strand is not None:
# TODO - Deprecation warning
self.strand = strand
self.id = id
if qualifiers is None:
qualifiers = OrderedDict()
self.qualifiers = qualifiers
if sub_features is not None:
raise TypeError("Rather than sub_features, use a CompoundFeatureLocation")
if ref is not None:
# TODO - Deprecation warning
self.ref = ref
if ref_db is not None:
# TODO - Deprecation warning
self.ref_db = ref_db
def _get_strand(self):
"""Get function for the strand property (PRIVATE)."""
return self.location.strand
def _set_strand(self, value):
"""Set function for the strand property (PRIVATE)."""
try:
self.location.strand = value
except AttributeError:
if self.location is None:
if value is not None:
raise ValueError("Can't set strand without a location.") from None
else:
raise
strand = property(
fget=_get_strand,
fset=_set_strand,
doc="""Feature's strand
This is a shortcut for feature.location.strand
""",
)
def _get_ref(self):
"""Get function for the reference property (PRIVATE)."""
try:
return self.location.ref
except AttributeError:
return None
def _set_ref(self, value):
"""Set function for the reference property (PRIVATE)."""
try:
self.location.ref = value
except AttributeError:
if self.location is None:
if value is not None:
raise ValueError("Can't set ref without a location.") from None
else:
raise
ref = property(
fget=_get_ref,
fset=_set_ref,
doc="""Feature location reference (e.g. accession).
This is a shortcut for feature.location.ref
""",
)
def _get_ref_db(self):
"""Get function for the database reference property (PRIVATE)."""
try:
return self.location.ref_db
except AttributeError:
return None
def _set_ref_db(self, value):
"""Set function for the database reference property (PRIVATE)."""
self.location.ref_db = value
ref_db = property(
fget=_get_ref_db,
fset=_set_ref_db,
doc="""Feature location reference's database.
This is a shortcut for feature.location.ref_db
""",
)
def _get_location_operator(self):
"""Get function for the location operator property (PRIVATE)."""
try:
return self.location.operator
except AttributeError:
return None
def _set_location_operator(self, value):
"""Set function for the location operator property (PRIVATE)."""
if value:
if isinstance(self.location, CompoundLocation):
self.location.operator = value
elif self.location is None:
raise ValueError(
"Location is None so can't set its operator (to %r)" % value
)
else:
raise ValueError("Only CompoundLocation gets an operator (%r)" % value)
location_operator = property(
fget=_get_location_operator,
fset=_set_location_operator,
doc="Location operator for compound locations (e.g. join).",
)
def __repr__(self):
"""Represent the feature as a string for debugging."""
answer = "%s(%s" % (self.__class__.__name__, repr(self.location))
if self.type:
answer += ", type=%s" % repr(self.type)
if self.location_operator:
answer += ", location_operator=%s" % repr(self.location_operator)
if self.id and self.id != "<unknown id>":
answer += ", id=%s" % repr(self.id)
if self.ref:
answer += ", ref=%s" % repr(self.ref)
if self.ref_db:
answer += ", ref_db=%s" % repr(self.ref_db)
answer += ")"
return answer
def __str__(self):
"""Return the full feature as a python string."""
out = "type: %s\n" % self.type
out += "location: %s\n" % self.location
if self.id and self.id != "<unknown id>":
out += "id: %s\n" % self.id
out += "qualifiers:\n"
for qual_key in sorted(self.qualifiers):
out += " Key: %s, Value: %s\n" % (qual_key, self.qualifiers[qual_key])
return out
def _shift(self, offset):
"""Return a copy of the feature with its location shifted (PRIVATE).
The annotation qualifiers are copied.
"""
return SeqFeature(
location=self.location._shift(offset),
type=self.type,
location_operator=self.location_operator,
id=self.id,
qualifiers=OrderedDict(self.qualifiers.items()),
)
def _flip(self, length):
"""Return a copy of the feature with its location flipped (PRIVATE).
The argument length gives the length of the parent sequence. For
example a location 0..20 (+1 strand) with parent length 30 becomes
after flipping 10..30 (-1 strand). Strandless (None) or unknown
strand (0) remain like that - just their end points are changed.
The annotation qualifiers are copied.
"""
return SeqFeature(
location=self.location._flip(length),
type=self.type,
location_operator=self.location_operator,
id=self.id,
qualifiers=OrderedDict(self.qualifiers.items()),
)
def extract(self, parent_sequence):
"""Extract the feature's sequence from supplied parent sequence.
The parent_sequence can be a Seq like object or a string, and will
generally return an object of the same type. The exception to this is
a MutableSeq as the parent sequence will return a Seq object.
This should cope with complex locations including complements, joins
and fuzzy positions. Even mixed strand features should work! This
also covers features on protein sequences (e.g. domains), although
here reverse strand features are not permitted.
>>> from Bio.Seq import Seq
>>> from Bio.Alphabet import generic_protein
>>> from Bio.SeqFeature import SeqFeature, FeatureLocation
>>> seq = Seq("MKQHKAMIVALIVICITAVVAAL", generic_protein)
>>> f = SeqFeature(FeatureLocation(8, 15), type="domain")
>>> f.extract(seq)
Seq('VALIVIC')
If the FeatureLocation is None, e.g. when parsing invalid locus
locations in the GenBank parser, extract() will raise a ValueError.
>>> from Bio.Seq import Seq
>>> from Bio.SeqFeature import SeqFeature
>>> seq = Seq("MKQHKAMIVALIVICITAVVAAL", generic_protein)
>>> f = SeqFeature(None, type="domain")
>>> f.extract(seq)
Traceback (most recent call last):
...
ValueError: The feature's .location is None. Check the sequence file for a valid location.
Note - currently only compound features of type "join" are supported.
"""
if self.location is None:
raise ValueError(
"The feature's .location is None. Check the "
"sequence file for a valid location."
)
return self.location.extract(parent_sequence)
def translate(
self,
parent_sequence,
table="Standard",
start_offset=None,
stop_symbol="*",
to_stop=False,
cds=None,
gap=None,
):
"""Get a translation of the feature's sequence.
This method is intended for CDS or other features that code proteins
and is a shortcut that will both extract the feature and
translate it, taking into account the codon_start and transl_table
qualifiers, if they are present. If they are not present the
value of the arguments "table" and "start_offset" are used.
The "cds" parameter is set to "True" if the feature is of type
"CDS" but can be overridden by giving an explicit argument.
The arguments stop_symbol, to_stop and gap have the same meaning
as Seq.translate, refer to that documentation for further information.
Arguments:
- parent_sequence - This method will translate DNA or RNA sequences,
and those with a nucleotide or generic alphabet. Trying to
translate a protein sequence raises an exception.
- table - Which codon table to use if there is no transl_table
qualifier for this feature. This can be either a name
(string), an NCBI identifier (integer), or a CodonTable
object (useful for non-standard genetic codes). This
defaults to the "Standard" table.
- start_offset - offset at which the first complete codon of a
coding feature can be found, relative to the first base of
that feature. Has a valid value of 0, 1 or 2. NOTE: this
uses python's 0-based numbering whereas the codon_start
qualifier in files from NCBI use 1-based numbering.
Will override a codon_start qualifier
>>> from Bio.Seq import Seq
>>> from Bio.Alphabet import generic_dna
>>> from Bio.SeqFeature import SeqFeature, FeatureLocation
>>> seq = Seq("GGTTACACTTACCGATAATGTCTCTGATGA", generic_dna)
>>> f = SeqFeature(FeatureLocation(0, 30), type="CDS")
>>> f.qualifiers['transl_table'] = [11]
Note that features of type CDS are subject to the usual
checks at translation. But you can override this behaviour
by giving explicit arguments:
>>> f.translate(seq, cds=False)
Seq('GYTYR*CL**')
Now use the start_offset argument to change the frame. Note
this uses python 0-based numbering.
>>> f.translate(seq, start_offset=1, cds=False)
Seq('VTLTDNVSD')
Alternatively use the codon_start qualifier to do the same
thing. Note: this uses 1-based numbering, which is found
in files from NCBI.
>>> f.qualifiers['codon_start'] = [2]
>>> f.translate(seq, cds=False)
Seq('VTLTDNVSD')
"""
# see if this feature should be translated in a different
# frame using the "codon_start" qualifier
if start_offset is None:
try:
start_offset = int(self.qualifiers["codon_start"][0]) - 1
except KeyError:
start_offset = 0
if start_offset not in [0, 1, 2]:
raise ValueError(
"The start_offset must be 0, 1, or 2. "
f"The supplied value is {start_offset}. "
"Check the value of either the codon_start qualifier "
"or the start_offset argument"
)
feat_seq = self.extract(parent_sequence)[start_offset:]
codon_table = self.qualifiers.get("transl_table", [table])[0]
if cds is None:
cds = self.type == "CDS"
return feat_seq.translate(
table=codon_table,
stop_symbol=stop_symbol,
to_stop=to_stop,
cds=cds,
gap=gap,
)
def __bool__(self):
"""Boolean value of an instance of this class (True).
This behaviour is for backwards compatibility, since until the
__len__ method was added, a SeqFeature always evaluated as True.
Note that in comparison, Seq objects, strings, lists, etc, will all
evaluate to False if they have length zero.
WARNING: The SeqFeature may in future evaluate to False when its
length is zero (in order to better match normal python behaviour)!
"""
return True
def __len__(self):
"""Return the length of the region where the feature is located.
>>> from Bio.Seq import Seq
>>> from Bio.Alphabet import generic_protein
>>> from Bio.SeqFeature import SeqFeature, FeatureLocation
>>> seq = Seq("MKQHKAMIVALIVICITAVVAAL", generic_protein)
>>> f = SeqFeature(FeatureLocation(8, 15), type="domain")
>>> len(f)
7
>>> f.extract(seq)
Seq('VALIVIC')
>>> len(f.extract(seq))
7
This is a proxy for taking the length of the feature's location:
>>> len(f.location)
7
For simple features this is the same as the region spanned (end
position minus start position using Pythonic counting). However, for
a compound location (e.g. a CDS as the join of several exons) the
gaps are not counted (e.g. introns). This ensures that len(f) matches
len(f.extract(parent_seq)), and also makes sure things work properly
with features wrapping the origin etc.
"""
return len(self.location)
def __iter__(self):
"""Iterate over the parent positions within the feature.
The iteration order is strand aware, and can be thought of as moving
along the feature using the parent sequence coordinates:
>>> from Bio.SeqFeature import SeqFeature, FeatureLocation
>>> f = SeqFeature(FeatureLocation(5, 10), type="domain", strand=-1)
>>> len(f)
5
>>> for i in f: print(i)
9
8
7
6
5
>>> list(f)
[9, 8, 7, 6, 5]
This is a proxy for iterating over the location,
>>> list(f.location)
[9, 8, 7, 6, 5]
"""
return iter(self.location)
def __contains__(self, value):
"""Check if an integer position is within the feature.
>>> from Bio.SeqFeature import SeqFeature, FeatureLocation
>>> f = SeqFeature(FeatureLocation(5, 10), type="domain", strand=-1)
>>> len(f)
5
>>> [i for i in range(15) if i in f]
[5, 6, 7, 8, 9]
For example, to see which features include a SNP position, you could
use this:
>>> from Bio import SeqIO
>>> record = SeqIO.read("GenBank/NC_000932.gb", "gb")
>>> for f in record.features:
... if 1750 in f:
... print("%s %s" % (f.type, f.location))
source [0:154478](+)
gene [1716:4347](-)
tRNA join{[4310:4347](-), [1716:1751](-)}
Note that for a feature defined as a join of several subfeatures (e.g.
the union of several exons) the gaps are not checked (e.g. introns).
In this example, the tRNA location is defined in the GenBank file as
complement(join(1717..1751,4311..4347)), so that position 1760 falls
in the gap:
>>> for f in record.features:
... if 1760 in f:
... print("%s %s" % (f.type, f.location))
source [0:154478](+)
gene [1716:4347](-)
Note that additional care may be required with fuzzy locations, for
example just before a BeforePosition:
>>> from Bio.SeqFeature import SeqFeature, FeatureLocation
>>> from Bio.SeqFeature import BeforePosition
>>> f = SeqFeature(FeatureLocation(BeforePosition(3), 8), type="domain")
>>> len(f)
5
>>> [i for i in range(10) if i in f]
[3, 4, 5, 6, 7]
Note that this is a proxy for testing membership on the location.
>>> [i for i in range(10) if i in f.location]
[3, 4, 5, 6, 7]
"""
return value in self.location
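The doctests above rely on Biopython's coordinate convention: a `FeatureLocation(start, end)` uses 0-based, half-open coordinates, exactly like Python slicing, so a simple forward-strand extract is `parent[start:end]`. A dependency-free sketch mirroring the `FeatureLocation(8, 15)` example in the docstrings:

```python
# Plain-Python illustration of the 0-based, half-open convention used by
# FeatureLocation: FeatureLocation(8, 15) on the forward strand selects
# the same residues as parent[8:15].
parent = "MKQHKAMIVALIVICITAVVAAL"
start, end = 8, 15
extracted = parent[start:end]
print(extracted, len(extracted))  # VALIVIC 7
```

This is also why `len(feature)` equals `end - start` for a simple location.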
# --- References
# TODO -- Will this hold PubMed and Medline information decently?
class Reference:
"""Represent a Generic Reference object.
Attributes:
- location - A list of Location objects specifying regions of
the sequence that the references correspond to. If no locations are
specified, the entire sequence is assumed.
- authors - A big old string, or a list split by author, of authors
for the reference.
- title - The title of the reference.
- journal - Journal the reference was published in.
- medline_id - A medline reference for the article.
- pubmed_id - A pubmed reference for the article.
- comment - A place to stick any comments about the reference.
"""
def __init__(self):
"""Initialize the class."""
self.location = []
self.authors = ""
self.consrtm = ""
self.title = ""
self.journal = ""
self.medline_id = ""
self.pubmed_id = ""
self.comment = ""
def __str__(self):
"""Return the full Reference object as a python string."""
out = ""
for single_location in self.location:
out += "location: %s\n" % single_location
out += "authors: %s\n" % self.authors
if self.consrtm:
out += "consrtm: %s\n" % self.consrtm
out += "title: %s\n" % self.title
out += "journal: %s\n" % self.journal
out += "medline id: %s\n" % self.medline_id
out += "pubmed id: %s\n" % self.pubmed_id
out += "comment: %s\n" % self.comment
return out
def __repr__(self):
"""Represent the Reference object as a string for debugging."""
# TODO - Update this if __init__ later accepts values
return "%s(title=%s, ...)" % (self.__class__.__name__, repr(self.title))
def __eq__(self, other):
"""Check if two Reference objects should be considered equal.
Note prior to Biopython 1.70 the location was not compared, as
until then __eq__ for the FeatureLocation class was not defined.
"""
return (
self.authors == other.authors
and self.consrtm == other.consrtm
and self.title == other.title
and self.journal == other.journal
and self.medline_id == other.medline_id
and self.pubmed_id == other.pubmed_id
and self.comment == other.comment
and self.location == other.location
)
# --- Handling feature locations
class FeatureLocation:
"""Specify the location of a feature along a sequence.
The FeatureLocation is used for simple continuous features, which can
be described as running from a start position to an end position
(optionally with a strand and reference information). More complex
locations made up from several non-continuous parts (e.g. a coding
sequence made up of several exons) are described using a SeqFeature
with a CompoundLocation.
Note that the start and end location numbering follow Python's scheme,
thus a GenBank entry of 123..150 (one based counting) becomes a location
of [122:150] (zero based counting).
>>> from Bio.SeqFeature import FeatureLocation
>>> f = FeatureLocation(122, 150)
>>> print(f)
[122:150]
>>> print(f.start)
122
>>> print(f.end)
150
>>> print(f.strand)
None
Note the strand defaults to None. If you are working with nucleotide
sequences you'd want to be explicit if it is the forward strand:
>>> from Bio.SeqFeature import FeatureLocation
>>> f = FeatureLocation(122, 150, strand=+1)
>>> print(f)
[122:150](+)
>>> print(f.strand)
1
Note that for a parent sequence of length n, the FeatureLocation
start and end must satisfy the inequality 0 <= start <= end <= n.
This means even for features on the reverse strand of a nucleotide
sequence, we expect the 'start' coordinate to be less than the
'end'.
>>> from Bio.SeqFeature import FeatureLocation
>>> r = FeatureLocation(122, 150, strand=-1)
>>> print(r)
[122:150](-)
>>> print(r.start)
122
>>> print(r.end)
150
>>> print(r.strand)
-1
i.e. Rather than thinking of the 'start' and 'end' biologically in a
strand aware manner, think of them as the 'left most' or 'minimum'
boundary, and the 'right most' or 'maximum' boundary of the region
being described. This is particularly important with compound
locations describing non-continuous regions.
In the example above we have used standard exact positions, but there
are also specialised position objects used to represent fuzzy positions
as well, for example a GenBank location like complement(<123..150)
would use a BeforePosition object for the start.
"""
def __init__(self, start, end, strand=None, ref=None, ref_db=None):
"""Initialize the class.
start and end arguments specify the values where the feature begins
and ends. These can either be any of the ``*Position`` objects that
inherit from AbstractPosition, or can just be integers specifying the
position. In the case of integers, the values are assumed to be
exact and are converted into ExactPosition arguments. This is meant
to make it easy to deal with non-fuzzy ends.
i.e. Short form:
>>> from Bio.SeqFeature import FeatureLocation
>>> loc = FeatureLocation(5, 10, strand=-1)
>>> print(loc)
[5:10](-)
Explicit form:
>>> from Bio.SeqFeature import FeatureLocation, ExactPosition
>>> loc = FeatureLocation(ExactPosition(5), ExactPosition(10), strand=-1)
>>> print(loc)
[5:10](-)
Other fuzzy positions are used similarly,
>>> from Bio.SeqFeature import FeatureLocation
>>> from Bio.SeqFeature import BeforePosition, AfterPosition
>>> loc2 = FeatureLocation(BeforePosition(5), AfterPosition(10), strand=-1)
>>> print(loc2)
[<5:>10](-)
For nucleotide features you will also want to specify the strand,
use 1 for the forward (plus) strand, -1 for the reverse (negative)
strand, 0 for stranded but strand unknown (? in GFF3), or None for
when the strand does not apply (dot in GFF3), e.g. features on
proteins.
>>> loc = FeatureLocation(5, 10, strand=+1)
>>> print(loc)
[5:10](+)
>>> print(loc.strand)
1
Normally feature locations are given relative to the parent
sequence you are working with, but an explicit accession can
be given with the optional ref and db_ref strings:
>>> loc = FeatureLocation(105172, 108462, ref="AL391218.9", strand=1)
>>> print(loc)
AL391218.9[105172:108462](+)
>>> print(loc.ref)
AL391218.9
"""
# TODO - Check 0 <= start <= end (<= length of reference)
if isinstance(start, AbstractPosition):
self._start = start
elif isinstance(start, int):
self._start = ExactPosition(start)
else:
raise TypeError("start=%r %s" % (start, type(start)))
if isinstance(end, AbstractPosition):
self._end = end
elif isinstance(end, int):
self._end = ExactPosition(end)
else:
raise TypeError("end=%r %s" % (end, type(end)))
if (
isinstance(self.start.position, int)
and isinstance(self.end.position, int)
and self.start > self.end
):
raise ValueError(
f"End location ({self.end}) must be greater than "
f"or equal to start location ({self.start})"
)
self.strand = strand
self.ref = ref
self.ref_db = ref_db
def _get_strand(self):
"""Get function for the strand property (PRIVATE)."""
return self._strand
def _set_strand(self, value):
"""Set function for the strand property (PRIVATE)."""
if value not in [+1, -1, 0, None]:
raise ValueError("Strand should be +1, -1, 0 or None, not %r" % value)
self._strand = value
strand = property(
fget=_get_strand,
fset=_set_strand,
doc="Strand of the location (+1, -1, 0 or None).",
)
def __str__(self):
"""Return a representation of the FeatureLocation object (with python counting).
For the simple case this uses the python splicing syntax, [122:150]
(zero based counting) which GenBank would call 123..150 (one based
counting).
"""
answer = "[%s:%s]" % (self._start, self._end)
if self.ref and self.ref_db:
answer = "%s:%s%s" % (self.ref_db, self.ref, answer)
elif self.ref:
answer = self.ref + answer
# Is ref_db without ref meaningful?
if self.strand is None:
return answer
elif self.strand == +1:
return answer + "(+)"
elif self.strand == -1:
return answer + "(-)"
else:
# strand = 0, stranded but strand unknown, ? in GFF3
return answer + "(?)"
def __repr__(self):
"""Represent the FeatureLocation object as a string for debugging."""
optional = ""
if self.strand is not None:
optional += ", strand=%r" % self.strand
if self.ref is not None:
optional += ", ref=%r" % self.ref
if self.ref_db is not None:
optional += ", ref_db=%r" % self.ref_db
return "%s(%r, %r%s)" % (
self.__class__.__name__,
self.start,
self.end,
optional,
)
def __add__(self, other):
"""Combine location with another FeatureLocation object, or shift it.
You can add two feature locations to make a join CompoundLocation:
>>> from Bio.SeqFeature import FeatureLocation
>>> f1 = FeatureLocation(5, 10)
>>> f2 = FeatureLocation(20, 30)
>>> combined = f1 + f2
>>> print(combined)
join{[5:10], [20:30]}
This is thus equivalent to:
>>> from Bio.SeqFeature import CompoundLocation
>>> join = CompoundLocation([f1, f2])
>>> print(join)
join{[5:10], [20:30]}
You can also use sum(...) in this way:
>>> join = sum([f1, f2])
>>> print(join)
join{[5:10], [20:30]}
Furthermore, you can combine a FeatureLocation with a CompoundLocation
in this way.
Separately, adding an integer will give a new FeatureLocation with
its start and end offset by that amount. For example:
>>> print(f1)
[5:10]
>>> print(f1 + 100)
[105:110]
>>> print(200 + f1)
[205:210]
This can be useful when editing annotation.
"""
if isinstance(other, FeatureLocation):
return CompoundLocation([self, other])
elif isinstance(other, int):
return self._shift(other)
else:
# This will allow CompoundLocation's __radd__ to be called:
return NotImplemented
def __radd__(self, other):
"""Add a feature locationanother FeatureLocation object to the left."""
if isinstance(other, int):
return self._shift(other)
else:
return NotImplemented
def __nonzero__(self):
"""Return True regardless of the length of the feature.
This behaviour is for backwards compatibility, since until the
__len__ method was added, a FeatureLocation always evaluated as True.
Note that in comparison, Seq objects, strings, lists, etc, will all
evaluate to False if they have length zero.
WARNING: The FeatureLocation may in future evaluate to False when its
length is zero (in order to better match normal python behaviour)!
"""
return True
def __len__(self):
"""Return the length of the region described by the FeatureLocation object.
Note that extra care may be needed for fuzzy locations, e.g.
>>> from Bio.SeqFeature import FeatureLocation
>>> from Bio.SeqFeature import BeforePosition, AfterPosition
>>> loc = FeatureLocation(BeforePosition(5), AfterPosition(10))
>>> len(loc)
5
"""
return int(self._end) - int(self._start)
def __contains__(self, value):
"""Check if an integer position is within the FeatureLocation object.
Note that extra care may be needed for fuzzy locations, e.g.
>>> from Bio.SeqFeature import FeatureLocation
>>> from Bio.SeqFeature import BeforePosition, AfterPosition
>>> loc = FeatureLocation(BeforePosition(5), AfterPosition(10))
>>> len(loc)
5
>>> [i for i in range(15) if i in loc]
[5, 6, 7, 8, 9]
"""
if not isinstance(value, int):
raise ValueError(
"Currently we only support checking for integer "
"positions being within a FeatureLocation."
)
if value < self._start or value >= self._end:
return False
else:
return True
def __iter__(self):
"""Iterate over the parent positions within the FeatureLocation object.
>>> from Bio.SeqFeature import FeatureLocation
>>> from Bio.SeqFeature import BeforePosition, AfterPosition
>>> loc = FeatureLocation(BeforePosition(5), AfterPosition(10))
>>> len(loc)
5
>>> for i in loc: print(i)
5
6
7
8
9
>>> list(loc)
[5, 6, 7, 8, 9]
>>> [i for i in range(15) if i in loc]
[5, 6, 7, 8, 9]
Note this is strand aware:
>>> loc = FeatureLocation(BeforePosition(5), AfterPosition(10), strand = -1)
>>> list(loc)
[9, 8, 7, 6, 5]
"""
if self.strand == -1:
yield from range(self._end - 1, self._start - 1, -1)
else:
yield from range(self._start, self._end)
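The strand-aware iteration in ``__iter__`` can be mimicked with a small standalone helper (a sketch mirroring the logic above, not Biopython itself):

```python
def strand_positions(start, end, strand):
    # Mirrors FeatureLocation.__iter__: minus-strand locations yield
    # parent coordinates from high to low, all others from low to high.
    if strand == -1:
        return list(range(end - 1, start - 1, -1))
    return list(range(start, end))

print(strand_positions(5, 10, -1))  # [9, 8, 7, 6, 5]
print(strand_positions(5, 10, +1))  # [5, 6, 7, 8, 9]
```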
def __eq__(self, other):
"""Implement equality by comparing all the location attributes."""
if not isinstance(other, FeatureLocation):
return False
return (
self._start == other.start
and self._end == other.end
and self._strand == other.strand
and self.ref == other.ref
and self.ref_db == other.ref_db
)
def _shift(self, offset):
"""Return a copy of the FeatureLocation shifted by an offset (PRIVATE)."""
# TODO - What if offset is a fuzzy position?
if self.ref or self.ref_db:
# TODO - Return self?
raise ValueError("Feature references another sequence.")
return FeatureLocation(
start=self._start._shift(offset),
end=self._end._shift(offset),
strand=self.strand,
)
def _flip(self, length):
"""Return a copy of the location after the parent is reversed (PRIVATE)."""
if self.ref or self.ref_db:
# TODO - Return self?
raise ValueError("Feature references another sequence.")
# Note this will flip the start and end too!
if self.strand == +1:
flip_strand = -1
elif self.strand == -1:
flip_strand = +1
else:
# 0 or None
flip_strand = self.strand
return FeatureLocation(
start=self._end._flip(length),
end=self._start._flip(length),
strand=flip_strand,
)
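The coordinate arithmetic in ``_shift`` and ``_flip`` can be summarised with a plain-Python sketch (assumed behaviour inferred from the methods above, using bare integers in place of position objects):

```python
def shift(start, end, strand, offset):
    # _shift: both boundaries move by the same offset; strand unchanged.
    return start + offset, end + offset, strand

def flip(start, end, strand, length):
    # _flip: boundaries are mirrored about the parent length (note the
    # swap, so start stays <= end) and a +/-1 strand is inverted.
    new_strand = -strand if strand in (+1, -1) else strand
    return length - end, length - start, new_strand

print(shift(5, 10, 1, 100))  # (105, 110, 1)
print(flip(5, 10, 1, 20))    # (10, 15, -1)
```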
@property
def parts(self):
"""Read only list of sections (always one, the FeatureLocation object).
This is a convenience property allowing you to write code handling
both simple FeatureLocation objects (with one part) and more complex
CompoundLocation objects (with multiple parts) interchangeably.
"""
return [self]
@property
def start(self):
"""Start location - left most (minimum) value, regardless of strand.
Read only, returns an integer like position object, possibly a fuzzy
position.
"""
return self._start
@property
def end(self):
"""End location - right most (maximum) value, regardless of strand.
Read only, returns an integer like position object, possibly a fuzzy
position.
"""
return self._end
@property
def nofuzzy_start(self):
"""Start position (integer, approximated if fuzzy, read only) (OBSOLETE).
This is now an alias for int(feature.start), which should be
used in preference -- unless you are trying to support old
versions of Biopython.
"""
try:
return int(self._start)
except TypeError:
if isinstance(self._start, UnknownPosition):
return None
raise
@property
def nofuzzy_end(self):
"""End position (integer, approximated if fuzzy, read only) (OBSOLETE).
This is now an alias for int(feature.end), which should be
used in preference -- unless you are trying to support old
versions of Biopython.
"""
try:
return int(self._end)
except TypeError:
if isinstance(self._end, UnknownPosition):
return None
raise
def extract(self, parent_sequence):
"""Extract the sequence from supplied parent sequence using the FeatureLocation object.
The parent_sequence can be a Seq like object or a string, and will
generally return an object of the same type. The exception to this is
a MutableSeq parent sequence, which will return a Seq object.
>>> from Bio.Seq import Seq
>>> from Bio.Alphabet import generic_protein
>>> from Bio.SeqFeature import FeatureLocation
>>> seq = Seq("MKQHKAMIVALIVICITAVVAAL", generic_protein)
>>> feature_loc = FeatureLocation(8, 15)
>>> feature_loc.extract(seq)
Seq('VALIVIC')
"""
if self.ref or self.ref_db:
# TODO - Take a dictionary as an optional argument?
raise ValueError("Feature references another sequence.")
if isinstance(parent_sequence, MutableSeq):
# This avoids complications with reverse complements
# (the MutableSeq reverse complement acts in situ)
parent_sequence = parent_sequence.toseq()
f_seq = parent_sequence[self.nofuzzy_start : self.nofuzzy_end]
if self.strand == -1:
try:
f_seq = f_seq.reverse_complement()
except AttributeError:
assert isinstance(f_seq, str)
f_seq = reverse_complement(f_seq)
return f_seq
class CompoundLocation:
"""For handling joins etc where a feature location has several parts."""
def __init__(self, parts, operator="join"):
"""Initialize the class.
>>> from Bio.SeqFeature import FeatureLocation, CompoundLocation
>>> f1 = FeatureLocation(10, 40, strand=+1)
>>> f2 = FeatureLocation(50, 59, strand=+1)
>>> f = CompoundLocation([f1, f2])
>>> len(f) == len(f1) + len(f2) == 39 == len(list(f))
True
>>> print(f.operator)
join
>>> 5 in f
False
>>> 15 in f
True
>>> f.strand
1
Notice that the strand of the compound location is computed
automatically - in the case of mixed strands on the sub-locations
the overall strand is set to None.
>>> f = CompoundLocation([FeatureLocation(3, 6, strand=+1),
... FeatureLocation(10, 13, strand=-1)])
>>> print(f.strand)
None
>>> len(f)
6
>>> list(f)
[3, 4, 5, 12, 11, 10]
The example above doing list(f) iterates over the coordinates within the
feature. This allows you to use max and min on the location, to find the
range covered:
>>> min(f)
3
>>> max(f)
12
More generally, you can use the compound location's start and end which
give the full span covered, 0 <= start <= end <= full sequence length.
>>> f.start == min(f)
True
>>> f.end == max(f) + 1
True
This is consistent with the behaviour of the simple FeatureLocation for
a single region, where again the 'start' and 'end' do not necessarily
give the biological start and end, but rather the 'minimal' and 'maximal'
coordinate boundaries.
Note that adding locations provides a more intuitive method of
construction:
>>> f = FeatureLocation(3, 6, strand=+1) + FeatureLocation(10, 13, strand=-1)
>>> len(f)
6
>>> list(f)
[3, 4, 5, 12, 11, 10]
"""
self.operator = operator
self.parts = list(parts)
for loc in self.parts:
if not isinstance(loc, FeatureLocation):
raise ValueError(
"CompoundLocation should be given a list of "
"FeatureLocation objects, not %s" % loc.__class__
)
if len(parts) < 2:
raise ValueError(
"CompoundLocation should have at least 2 parts, not %r" % parts
)
def __str__(self):
"""Return a representation of the CompoundLocation object (with python counting)."""
return "%s{%s}" % (self.operator, ", ".join(str(loc) for loc in self.parts))
def __repr__(self):
"""Represent the CompoundLocation object as string for debugging."""
return "%s(%r, %r)" % (self.__class__.__name__, self.parts, self.operator)
def _get_strand(self):
"""Get function for the strand property (PRIVATE)."""
# Historically a join on the reverse strand has been represented
# in Biopython with both the parent SeqFeature and its children
# (the exons for a CDS) all given a strand of -1. Likewise, for
# a join feature on the forward strand they all have strand +1.
# However, we must also consider evil mixed strand examples like
# this, join(complement(69611..69724),139856..140087,140625..140650)
if len({loc.strand for loc in self.parts}) == 1:
return self.parts[0].strand
else:
return None # i.e. mixed strands
def _set_strand(self, value):
"""Set function for the strand property (PRIVATE)."""
# Should this be allowed/encouraged?
for loc in self.parts:
loc.strand = value
strand = property(
fget=_get_strand,
fset=_set_strand,
doc="""Overall strand of the compound location.
If all the parts have the same strand, that is returned. Otherwise
for mixed strands, this returns None.
>>> from Bio.SeqFeature import FeatureLocation, CompoundLocation
>>> f1 = FeatureLocation(15, 17, strand=1)
>>> f2 = FeatureLocation(20, 30, strand=-1)
>>> f = f1 + f2
>>> f1.strand
1
>>> f2.strand
-1
>>> f.strand
>>> f.strand is None
True
If you set the strand of a CompoundLocation, this is applied to
all the parts - use with caution:
>>> f.strand = 1
>>> f1.strand
1
>>> f2.strand
1
>>> f.strand
1
""",
)
def __add__(self, other):
"""Combine locations, or shift the location by an integer offset.
>>> from Bio.SeqFeature import FeatureLocation, CompoundLocation
>>> f1 = FeatureLocation(15, 17) + FeatureLocation(20, 30)
>>> print(f1)
join{[15:17], [20:30]}
You can add another FeatureLocation:
>>> print(f1 + FeatureLocation(40, 50))
join{[15:17], [20:30], [40:50]}
>>> print(FeatureLocation(5, 10) + f1)
join{[5:10], [15:17], [20:30]}
You can also add another CompoundLocation:
>>> f2 = FeatureLocation(40, 50) + FeatureLocation(60, 70)
>>> print(f2)
join{[40:50], [60:70]}
>>> print(f1 + f2)
join{[15:17], [20:30], [40:50], [60:70]}
Also, as with the FeatureLocation, adding an integer shifts the
location's co-ordinates by that offset:
>>> print(f1 + 100)
join{[115:117], [120:130]}
>>> print(200 + f1)
join{[215:217], [220:230]}
>>> print(f1 + (-5))
join{[10:12], [15:25]}
"""
if isinstance(other, FeatureLocation):
return CompoundLocation(self.parts + [other], self.operator)
elif isinstance(other, CompoundLocation):
if self.operator != other.operator:
# Handle join+order -> order as a special case?
raise ValueError(
"Mixed operators %s and %s" % (self.operator, other.operator)
)
return CompoundLocation(self.parts + other.parts, self.operator)
elif isinstance(other, int):
return self._shift(other)
else:
raise NotImplementedError
def __radd__(self, other):
"""Add a feature to the left."""
if isinstance(other, FeatureLocation):
return CompoundLocation([other] + self.parts, self.operator)
elif isinstance(other, int):
return self._shift(other)
else:
raise NotImplementedError
def __contains__(self, value):
"""Check if an integer position is within the CompoundLocation object."""
for loc in self.parts:
if value in loc:
return True
return False
def __nonzero__(self):
"""Return True regardless of the length of the feature.
This behaviour is for backwards compatibility, since until the
__len__ method was added, a FeatureLocation always evaluated as True.
Note that in comparison, Seq objects, strings, lists, etc, will all
evaluate to False if they have length zero.
WARNING: The FeatureLocation may in future evaluate to False when its
length is zero (in order to better match normal python behaviour)!
"""
return True
def __len__(self):
"""Return the length of the CompoundLocation object."""
return sum(len(loc) for loc in self.parts)
def __iter__(self):
"""Iterate over the parent positions within the CompoundLocation object."""
for loc in self.parts:
yield from loc
def __eq__(self, other):
"""Check if all parts of CompoundLocation are equal to all parts of other CompoundLocation."""
if not isinstance(other, CompoundLocation):
return False
if len(self.parts) != len(other.parts):
return False
if self.operator != other.operator:
return False
for self_part, other_part in zip(self.parts, other.parts):
if self_part != other_part:
return False
return True
def _shift(self, offset):
"""Return a copy of the CompoundLocation shifted by an offset (PRIVATE)."""
return CompoundLocation(
[loc._shift(offset) for loc in self.parts], self.operator
)
def _flip(self, length):
"""Return a copy of the locations after the parent is reversed (PRIVATE).
Note that the order of the parts is NOT reversed too. Consider a CDS
on the forward strand with exons small, medium and large (in length).
Once we change the frame of reference to the reverse complement strand,
the start codon is still part of the small exon, and the stop codon
still part of the large exon - so the part order remains the same!
Here is an artificial example, where the features map to the two upper
case regions and the lower case runs of n are not used:
>>> from Bio.Seq import Seq
>>> from Bio.SeqFeature import FeatureLocation
>>> dna = Seq("nnnnnAGCATCCTGCTGTACnnnnnnnnGAGAMTGCCATGCCCCTGGAGTGAnnnnn")
>>> small = FeatureLocation(5, 20, strand=1)
>>> large = FeatureLocation(28, 52, strand=1)
>>> location = small + large
>>> print(small)
[5:20](+)
>>> print(large)
[28:52](+)
>>> print(location)
join{[5:20](+), [28:52](+)}
>>> for part in location.parts:
... print(len(part))
...
15
24
As you can see, this is a silly example where each "exon" is a word:
>>> print(small.extract(dna).translate())
SILLY
>>> print(large.extract(dna).translate())
EXAMPLE*
>>> print(location.extract(dna).translate())
SILLYEXAMPLE*
>>> for part in location.parts:
... print(part.extract(dna).translate())
...
SILLY
EXAMPLE*
Now, let's look at this from the reverse strand frame of reference:
>>> flipped_dna = dna.reverse_complement()
>>> flipped_location = location._flip(len(dna))
>>> print(flipped_location.extract(flipped_dna).translate())
SILLYEXAMPLE*
>>> for part in flipped_location.parts:
... print(part.extract(flipped_dna).translate())
...
SILLY
EXAMPLE*
The key point here is the first part of the CompoundFeature is still the
small exon, while the second part is still the large exon:
>>> for part in flipped_location.parts:
... print(len(part))
...
15
24
>>> print(flipped_location)
join{[37:52](-), [5:29](-)}
Notice the parts are not reversed. However, there was a bug here in older
versions of Biopython which would have given join{[5:29](-), [37:52](-)}
and the translation would have wrongly been "EXAMPLE*SILLY" instead.
"""
return CompoundLocation(
[loc._flip(length) for loc in self.parts], self.operator
)
@property
def start(self):
"""Start location - left most (minimum) value, regardless of strand.
Read only, returns an integer like position object, possibly a fuzzy
position.
For the special case of a CompoundLocation wrapping the origin of a
circular genome, this will return zero.
"""
return min(loc.start for loc in self.parts)
@property
def end(self):
"""End location - right most (maximum) value, regardless of strand.
Read only, returns an integer like position object, possibly a fuzzy
position.
For the special case of a CompoundLocation wrapping the origin of
a circular genome this will match the genome length (minus one
given how Python counts from zero).
"""
return max(loc.end for loc in self.parts)
@property
def nofuzzy_start(self):
"""Start position (integer, approximated if fuzzy, read only) (OBSOLETE).
This is an alias for int(feature.start), which should be used in
preference -- unless you are trying to support old versions of
Biopython.
"""
try:
return int(self.start)
except TypeError:
if isinstance(self.start, UnknownPosition):
return None
raise
@property
def nofuzzy_end(self):
"""End position (integer, approximated if fuzzy, read only) (OBSOLETE).
This is an alias for int(feature.end), which should be used in
preference -- unless you are trying to support old versions of
Biopython.
"""
try:
return int(self.end)
except TypeError:
if isinstance(self.end, UnknownPosition):
return None
raise
@property
def ref(self):
"""Not present in CompoundLocation, dummy method for API compatibility."""
return None
@property
def ref_db(self):
"""Not present in CompoundLocation, dummy method for API compatibility."""
return None
def extract(self, parent_sequence):
"""Extract the sequence from supplied parent sequence using the CompoundLocation object.
The parent_sequence can be a Seq like object or a string, and will
generally return an object of the same type. The exception to this is
a MutableSeq parent sequence, which will return a Seq object.
>>> from Bio.Seq import Seq
>>> from Bio.Alphabet import generic_protein
>>> from Bio.SeqFeature import FeatureLocation, CompoundLocation
>>> seq = Seq("MKQHKAMIVALIVICITAVVAAL", generic_protein)
>>> fl1 = FeatureLocation(2, 8)
>>> fl2 = FeatureLocation(10, 15)
>>> fl3 = CompoundLocation([fl1,fl2])
>>> fl3.extract(seq)
Seq('QHKAMILIVIC')
"""
# This copes with mixed strand features & all on reverse:
parts = [loc.extract(parent_sequence) for loc in self.parts]
# We use addition rather than a join to avoid alphabet issues:
f_seq = parts[0]
for part in parts[1:]:
f_seq += part
return f_seq
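The part-by-part extraction above can be mirrored with a standalone sketch using plain strings (a simplification: the real method works on Seq-like objects and supports fuzzy positions):

```python
def revcomp(seq):
    # Reverse complement of a plain DNA string.
    return seq.translate(str.maketrans("ACGTacgt", "TGCAtgca"))[::-1]

def extract_compound(parent, parts):
    # parts: iterable of (start, end, strand). Each part is sliced out
    # (reverse-complemented on the minus strand) and the pieces are
    # concatenated in part order, as CompoundLocation.extract does.
    pieces = []
    for start, end, strand in parts:
        sub = parent[start:end]
        pieces.append(revcomp(sub) if strand == -1 else sub)
    return "".join(pieces)

print(extract_compound("AACCGGTT", [(0, 2, 1), (4, 6, -1)]))  # AACC
```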
class AbstractPosition:
"""Abstract base class representing a position."""
def __repr__(self):
"""Represent the AbstractPosition object as a string for debugging."""
return "%s(...)" % (self.__class__.__name__)
class ExactPosition(int, AbstractPosition):
"""Specify the specific position of a boundary.
Arguments:
- position - The position of the boundary.
- extension - An optional argument which must be zero since we don't
have an extension. The argument is provided so that the same number
of arguments can be passed to all position types.
In this case, there is no fuzziness associated with the position.
>>> p = ExactPosition(5)
>>> p
ExactPosition(5)
>>> print(p)
5
>>> isinstance(p, AbstractPosition)
True
>>> isinstance(p, int)
True
Integer comparisons and operations should work as expected:
>>> p == 5
True
>>> p < 6
True
>>> p <= 5
True
>>> p + 10
15
"""
def __new__(cls, position, extension=0):
"""Create an ExactPosition object."""
if extension != 0:
raise AttributeError(
"Non-zero extension %s for exact position." % extension
)
return int.__new__(cls, position)
# Must define this on Python 3.8 onwards because we redefine __repr__
def __str__(self):
"""Return a representation of the ExactPosition object (with python counting)."""
return str(int(self))
def __repr__(self):
"""Represent the ExactPosition object as a string for debugging."""
return "%s(%i)" % (self.__class__.__name__, int(self))
@property
def position(self):
"""Legacy attribute to get position as integer (OBSOLETE)."""
return int(self)
@property
def extension(self):
"""Not present in this object, return zero (OBSOLETE)."""
return 0
def _shift(self, offset):
"""Return a copy of the position object with its location shifted (PRIVATE)."""
# By default preserve any subclass
return self.__class__(int(self) + offset)
def _flip(self, length):
"""Return a copy of the location after the parent is reversed (PRIVATE)."""
# By default preserve any subclass
return self.__class__(length - int(self))
class UncertainPosition(ExactPosition):
"""Specify a specific position which is uncertain.
This is used in UniProt, e.g. ?222 for uncertain position 222, or in the
XML format explicitly marked as uncertain. Does not apply to GenBank/EMBL.
"""
pass
class UnknownPosition(AbstractPosition):
"""Specify a specific position which is unknown (has no position).
This is used in UniProt, e.g. ? or in the XML as unknown.
"""
def __repr__(self):
"""Represent the UnknownPosition object as a string for debugging."""
return "%s()" % self.__class__.__name__
def __hash__(self):
"""Return the hash value of the UnknownPosition object."""
return hash(None)
@property
def position(self):
"""Legacy attribute to get location (None) (OBSOLETE)."""
return None
@property
def extension(self): # noqa: D402
"""Legacy attribute to get extension (zero) as integer (OBSOLETE)."""
return 0
def _shift(self, offset):
"""Return a copy of the position object with its location shifted (PRIVATE)."""
return self
def _flip(self, length):
"""Return a copy of the location after the parent is reversed (PRIVATE)."""
return self
class WithinPosition(int, AbstractPosition):
"""Specify the position of a boundary within some coordinates.
Arguments:
- position - The default integer position
- left - The start (left) position of the boundary
- right - The end (right) position of the boundary
This allows dealing with a location like ((1.4)..100). This
indicates that the start of the sequence is somewhere between 1
and 4. Since this is a start coordinate, it should act like
it is at position 1 (or in Python counting, 0).
>>> p = WithinPosition(10, 10, 13)
>>> p
WithinPosition(10, left=10, right=13)
>>> print(p)
(10.13)
>>> int(p)
10
Basic integer comparisons and operations should work as though
this were a plain integer:
>>> p == 10
True
>>> p in [9, 10, 11]
True
>>> p < 11
True
>>> p + 10
20
>>> isinstance(p, WithinPosition)
True
>>> isinstance(p, AbstractPosition)
True
>>> isinstance(p, int)
True
Note this also applies for comparison to other position objects,
where again the integer behaviour is used:
>>> p == 10
True
>>> p == ExactPosition(10)
True
>>> p == BeforePosition(10)
True
>>> p == AfterPosition(10)
True
If this were an end point, you would want the position to be 13:
>>> p2 = WithinPosition(13, 10, 13)
>>> p2
WithinPosition(13, left=10, right=13)
>>> print(p2)
(10.13)
>>> int(p2)
13
>>> p2 == 13
True
>>> p2 == ExactPosition(13)
True
The old legacy properties of position and extension give the
starting/lower/left position as an integer, and the distance
to the ending/higher/right position as an integer. Note that
the position object will act like either the left or the right
end-point depending on how it was created:
>>> p.position == p2.position == 10
True
>>> p.extension == p2.extension == 3
True
>>> int(p) == int(p2)
False
>>> p == 10
True
>>> p2 == 13
True
"""
def __new__(cls, position, left, right):
"""Create a WithinPosition object."""
if not (position == left or position == right):
raise RuntimeError(
"WithinPosition: %r should match left %r or "
"right %r" % (position, left, right)
)
obj = int.__new__(cls, position)
obj._left = left
obj._right = right
return obj
def __getnewargs__(self):
"""Return the arguments accepted by __new__.
Necessary to allow pickling and unpickling of class instances.
"""
return (int(self), self._left, self._right)
def __repr__(self):
"""Represent the WithinPosition object as a string for debugging."""
return "%s(%i, left=%i, right=%i)" % (
self.__class__.__name__,
int(self),
self._left,
self._right,
)
def __str__(self):
"""Return a representation of the WithinPosition object (with python counting)."""
return "(%s.%s)" % (self._left, self._right)
@property
def position(self):
"""Legacy attribute to get (left) position as integer (OBSOLETE)."""
return self._left
@property
def extension(self): # noqa: D402
"""Legacy attribute to get extension (from left to right) as an integer (OBSOLETE)."""
return self._right - self._left
def _shift(self, offset):
"""Return a copy of the position object with its location shifted (PRIVATE)."""
return self.__class__(
int(self) + offset, self._left + offset, self._right + offset
)
def _flip(self, length):
"""Return a copy of the location after the parent is reversed (PRIVATE)."""
return self.__class__(
length - int(self), length - self._right, length - self._left
)
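The doctests above all hinge on one pattern: the position subclasses `int`, so the chosen anchor value drives every integer comparison while the fuzzy bounds travel along as attributes. A stripped-down sketch of that pattern (the `Within` class here is a toy illustration, not Biopython itself):

```python
class Within(int):
    """Toy version of WithinPosition: an int carrying left/right bounds."""

    def __new__(cls, position, left, right):
        obj = int.__new__(cls, position)  # the object *is* the anchor int
        obj.left = left
        obj.right = right
        return obj

start = Within(10, 10, 13)  # anchored at the left bound (start coordinate)
end = Within(13, 10, 13)    # anchored at the right bound (end coordinate)
print(start == 10, end == 13)  # True True
print(end - start)             # 3, plain int arithmetic
```

Because equality is inherited from `int`, two positions with identical bounds but different anchors compare unequal, exactly as the `int(p) == int(p2)` doctest shows.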
class BetweenPosition(int, AbstractPosition):
"""Specify the position of a boundary between two coordinates (OBSOLETE?).
Arguments:
- position - The default integer position
- left - The start (left) position of the boundary
- right - The end (right) position of the boundary
This allows dealing with a position like 123^456. This
indicates that the start of the sequence is somewhere between
123 and 456. It is up to the parser to set the position argument
to either boundary point (depending on if this is being used as
a start or end of the feature). For example as a feature end:
>>> p = BetweenPosition(456, 123, 456)
>>> p
BetweenPosition(456, left=123, right=456)
>>> print(p)
(123^456)
>>> int(p)
456
Integer equality and comparison use the given position,
>>> p == 456
True
>>> p in [455, 456, 457]
True
>>> p > 300
True
The old legacy properties of position and extension give the
starting/lower/left position as an integer, and the distance
to the ending/higher/right position as an integer. Note that
the position object will act like either the left or the right
end-point depending on how it was created:
>>> p2 = BetweenPosition(123, left=123, right=456)
>>> p.position == p2.position == 123
True
>>> p.extension
333
>>> p2.extension
333
>>> p.extension == p2.extension == 333
True
>>> int(p) == int(p2)
False
>>> p == 456
True
>>> p2 == 123
True
Note this potentially surprising behaviour:
>>> BetweenPosition(123, left=123, right=456) == ExactPosition(123)
True
>>> BetweenPosition(123, left=123, right=456) == BeforePosition(123)
True
>>> BetweenPosition(123, left=123, right=456) == AfterPosition(123)
True
i.e. For equality (and sorting) the position objects behave like
integers.
"""
def __new__(cls, position, left, right):
"""Create a new instance of the BetweenPosition object."""
assert position == left or position == right
obj = int.__new__(cls, position)
obj._left = left
obj._right = right
return obj
def __getnewargs__(self):
"""Return the arguments accepted by __new__.
Necessary to allow pickling and unpickling of class instances.
"""
return (int(self), self._left, self._right)
def __repr__(self):
"""Represent the BetweenPosition object as a string for debugging."""
return "%s(%i, left=%i, right=%i)" % (
self.__class__.__name__,
int(self),
self._left,
self._right,
)
def __str__(self):
"""Return a representation of the BetweenPosition object (with python counting)."""
return "(%s^%s)" % (self._left, self._right)
@property
def position(self):
"""Legacy attribute to get (left) position as integer (OBSOLETE)."""
return self._left
@property
def extension(self): # noqa: D402
"""Legacy attribute to get extension (from left to right) as an integer (OBSOLETE)."""
return self._right - self._left
def _shift(self, offset):
"""Return a copy of the position object with its location shifted (PRIVATE)."""
return self.__class__(
int(self) + offset, self._left + offset, self._right + offset
)
def _flip(self, length):
"""Return a copy of the location after the parent is reversed (PRIVATE)."""
return self.__class__(
length - int(self), length - self._right, length - self._left
)
class BeforePosition(int, AbstractPosition):
"""Specify a position where the actual location occurs before it.
Arguments:
- position - The upper boundary of where the location can occur.
- extension - An optional argument which must be zero since we don't
have an extension. The argument is provided so that the same number
of arguments can be passed to all position types.
This is used to specify positions like (<10..100) where the location
occurs somewhere before position 10.
>>> p = BeforePosition(5)
>>> p
BeforePosition(5)
>>> print(p)
<5
>>> int(p)
5
>>> p + 10
15
Note this potentially surprising behaviour:
>>> p == ExactPosition(5)
True
>>> p == AfterPosition(5)
True
Just remember that for equality and sorting the position objects act
like integers.
"""
# Subclasses int so can't use __init__
def __new__(cls, position, extension=0):
"""Create a new instance of the BeforePosition object."""
if extension != 0:
raise AttributeError(
"Non-zero extension %s for exact position." % extension
)
return int.__new__(cls, position)
@property
def position(self):
"""Legacy attribute to get position as integer (OBSOLETE)."""
return int(self)
@property
def extension(self): # noqa: D402
"""Legacy attribute to get extension (zero) as integer (OBSOLETE)."""
return 0
def __repr__(self):
"""Represent the location as a string for debugging."""
return "%s(%i)" % (self.__class__.__name__, int(self))
def __str__(self):
"""Return a representation of the BeforePosition object (with python counting)."""
return "<%s" % self.position
def _shift(self, offset):
"""Return a copy of the position object with its location shifted (PRIVATE)."""
return self.__class__(int(self) + offset)
def _flip(self, length):
"""Return a copy of the location after the parent is reversed (PRIVATE)."""
return AfterPosition(length - int(self))
class AfterPosition(int, AbstractPosition):
"""Specify a position where the actual location is found after it.
Arguments:
- position - The lower boundary of where the location can occur.
- extension - An optional argument which must be zero since we don't
have an extension. The argument is provided so that the same number
of arguments can be passed to all position types.
This is used to specify positions like (>10..100) where the location
occurs somewhere after position 10.
>>> p = AfterPosition(7)
>>> p
AfterPosition(7)
>>> print(p)
>7
>>> int(p)
7
>>> p + 10
17
>>> isinstance(p, AfterPosition)
True
>>> isinstance(p, AbstractPosition)
True
>>> isinstance(p, int)
True
Note this potentially surprising behaviour:
>>> p == ExactPosition(7)
True
>>> p == BeforePosition(7)
True
Just remember that for equality and sorting the position objects act
like integers.
"""
# Subclasses int so can't use __init__
def __new__(cls, position, extension=0):
"""Create a new instance of the AfterPosition object."""
if extension != 0:
raise AttributeError(
"Non-zero extension %s for exact position." % extension
)
return int.__new__(cls, position)
@property
def position(self):
"""Legacy attribute to get position as integer (OBSOLETE)."""
return int(self)
@property
def extension(self): # noqa: D402
"""Legacy attribute to get extension (zero) as integer (OBSOLETE)."""
return 0
def __repr__(self):
"""Represent the location as a string for debugging."""
return "%s(%i)" % (self.__class__.__name__, int(self))
def __str__(self):
"""Return a representation of the AfterPosition object (with python counting)."""
return ">%s" % self.position
def _shift(self, offset):
"""Return a copy of the position object with its location shifted (PRIVATE)."""
return self.__class__(int(self) + offset)
def _flip(self, length):
"""Return a copy of the location after the parent is reversed (PRIVATE)."""
return BeforePosition(length - int(self))
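`_flip` mirrors a coordinate through a parent sequence of the given length, which is why `BeforePosition` and `AfterPosition` swap roles on the reverse strand. A hedged standalone sketch of that arithmetic (the tuple encoding below is illustrative only, not Biopython's API):

```python
def flip(kind, position, length):
    """Mirror a fuzzy position through a sequence of the given length."""
    swapped = {"before": "after", "after": "before"}.get(kind, kind)
    return (swapped, length - position)

print(flip("before", 5, 20))  # ('after', 15): "<5" becomes ">15"
print(flip("after", 7, 20))   # ('before', 13): ">7" becomes "<13"
```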
class OneOfPosition(int, AbstractPosition):
"""Specify a position where the location can be multiple positions.
This models the GenBank 'one-of(1888,1901)' function, and tries
to make this fit within the Biopython Position models. If this was
a start position it should act like 1888, but as an end position 1901.
>>> p = OneOfPosition(1888, [ExactPosition(1888), ExactPosition(1901)])
>>> p
OneOfPosition(1888, choices=[ExactPosition(1888), ExactPosition(1901)])
>>> int(p)
1888
Integer comparisons and operators act like using int(p),
>>> p == 1888
True
>>> p <= 1888
True
>>> p > 1888
False
>>> p + 100
1988
>>> isinstance(p, OneOfPosition)
True
>>> isinstance(p, AbstractPosition)
True
>>> isinstance(p, int)
True
The old legacy properties of position and extension give the
starting/lowest/left-most position as an integer, and the
distance to the ending/highest/right-most position as an integer.
Note that the position object will act like one of the list of
possible locations depending on how it was created:
>>> p2 = OneOfPosition(1901, [ExactPosition(1888), ExactPosition(1901)])
>>> p.position == p2.position == 1888
True
>>> p.extension == p2.extension == 13
True
>>> int(p) == int(p2)
False
>>> p == 1888
True
>>> p2 == 1901
True
"""
def __new__(cls, position, choices):
"""Initialize with a set of possible positions.
choices is a list of AbstractPosition derived objects,
specifying possible locations.
position is an integer specifying the default behaviour.
"""
if position not in choices:
raise ValueError(
"OneOfPosition: %r should match one of %r" % (position, choices)
)
obj = int.__new__(cls, position)
obj.position_choices = choices
return obj
def __getnewargs__(self):
"""Return the arguments accepted by __new__.
Necessary to allow pickling and unpickling of class instances.
"""
return (int(self), self.position_choices)
@property
def position(self):
"""Legacy attribute to get (left) position as integer (OBSOLETE)."""
return min(int(pos) for pos in self.position_choices)
@property
def extension(self):
"""Legacy attribute to get extension as integer (OBSOLETE)."""
positions = [int(pos) for pos in self.position_choices]
return max(positions) - min(positions)
def __repr__(self):
"""Represent the OneOfPosition object as a string for debugging."""
return "%s(%i, choices=%r)" % (
self.__class__.__name__,
int(self),
self.position_choices,
)
def __str__(self):
"""Return a representation of the OneOfPosition object (with python counting)."""
out = "one-of("
for position in self.position_choices:
out += "%s," % position
# replace the last comma with the closing parenthesis
return out[:-1] + ")"
def _shift(self, offset):
"""Return a copy of the position object with its location shifted (PRIVATE)."""
return self.__class__(
int(self) + offset, [p._shift(offset) for p in self.position_choices]
)
def _flip(self, length):
"""Return a copy of the location after the parent is reversed (PRIVATE)."""
return self.__class__(
length - int(self), [p._flip(length) for p in self.position_choices[::-1]]
)
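Note that `OneOfPosition._flip` reverses the choices list as well as mirroring each value, so the left-most choice stays left-most after the flip. The same idea in plain Python (assumption: choices are bare ints rather than position objects):

```python
def flip_choices(choices, length):
    """Mirror each choice through the sequence length, keeping sorted order."""
    return [length - c for c in reversed(choices)]

print(flip_choices([1888, 1901], 2000))  # [99, 112]
```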
class PositionGap:
"""Simple class to hold information about a gap between positions."""
def __init__(self, gap_size):
"""Initialize with a position object containing the gap information."""
self.gap_size = gap_size
def __repr__(self):
"""Represent the position gap as a string for debugging."""
return "%s(%s)" % (self.__class__.__name__, repr(self.gap_size))
def __str__(self):
"""Return a representation of the PositionGap object (with python counting)."""
return "gap(%s)" % self.gap_size
if __name__ == "__main__":
from Bio._utils import run_doctest
run_doctest()
| 34.254875 | 102 | 0.606418 |
from collections import OrderedDict
from Bio.Seq import MutableSeq, reverse_complement
class SeqFeature:
def __init__(
self,
location=None,
type="",
location_operator="",
strand=None,
id="<unknown id>",
qualifiers=None,
sub_features=None,
ref=None,
ref_db=None,
):
if (
location is not None
and not isinstance(location, FeatureLocation)
and not isinstance(location, CompoundLocation)
):
raise TypeError(
"FeatureLocation, CompoundLocation (or None) required for the location"
)
self.location = location
self.type = type
if location_operator:
self.location_operator = location_operator
if strand is not None:
self.strand = strand
self.id = id
if qualifiers is None:
qualifiers = OrderedDict()
self.qualifiers = qualifiers
if sub_features is not None:
raise TypeError("Rather than sub_features, use a CompoundFeatureLocation")
if ref is not None:
self.ref = ref
if ref_db is not None:
self.ref_db = ref_db
def _get_strand(self):
return self.location.strand
def _set_strand(self, value):
try:
self.location.strand = value
except AttributeError:
if self.location is None:
if value is not None:
raise ValueError("Can't set strand without a location.") from None
else:
raise
strand = property(
fget=_get_strand,
fset=_set_strand,
doc="""Feature's strand
This is a shortcut for feature.location.strand
""",
)
def _get_ref(self):
try:
return self.location.ref
except AttributeError:
return None
def _set_ref(self, value):
try:
self.location.ref = value
except AttributeError:
if self.location is None:
if value is not None:
raise ValueError("Can't set ref without a location.") from None
else:
raise
ref = property(
fget=_get_ref,
fset=_set_ref,
doc="""Feature location reference (e.g. accession).
This is a shortcut for feature.location.ref
""",
)
def _get_ref_db(self):
try:
return self.location.ref_db
except AttributeError:
return None
def _set_ref_db(self, value):
self.location.ref_db = value
ref_db = property(
fget=_get_ref_db,
fset=_set_ref_db,
doc="""Feature location reference's database.
This is a shortcut for feature.location.ref_db
""",
)
def _get_location_operator(self):
try:
return self.location.operator
except AttributeError:
return None
def _set_location_operator(self, value):
if value:
if isinstance(self.location, CompoundLocation):
self.location.operator = value
elif self.location is None:
raise ValueError(
"Location is None so can't set its operator (to %r)" % value
)
else:
raise ValueError("Only CompoundLocation gets an operator (%r)" % value)
location_operator = property(
fget=_get_location_operator,
fset=_set_location_operator,
doc="Location operator for compound locations (e.g. join).",
)
def __repr__(self):
answer = "%s(%s" % (self.__class__.__name__, repr(self.location))
if self.type:
answer += ", type=%s" % repr(self.type)
if self.location_operator:
answer += ", location_operator=%s" % repr(self.location_operator)
if self.id and self.id != "<unknown id>":
answer += ", id=%s" % repr(self.id)
if self.ref:
answer += ", ref=%s" % repr(self.ref)
if self.ref_db:
answer += ", ref_db=%s" % repr(self.ref_db)
answer += ")"
return answer
def __str__(self):
out = "type: %s\n" % self.type
out += "location: %s\n" % self.location
if self.id and self.id != "<unknown id>":
out += "id: %s\n" % self.id
out += "qualifiers:\n"
for qual_key in sorted(self.qualifiers):
out += " Key: %s, Value: %s\n" % (qual_key, self.qualifiers[qual_key])
return out
def _shift(self, offset):
return SeqFeature(
location=self.location._shift(offset),
type=self.type,
location_operator=self.location_operator,
id=self.id,
qualifiers=OrderedDict(self.qualifiers.items()),
)
def _flip(self, length):
return SeqFeature(
location=self.location._flip(length),
type=self.type,
location_operator=self.location_operator,
id=self.id,
qualifiers=OrderedDict(self.qualifiers.items()),
)
def extract(self, parent_sequence):
if self.location is None:
raise ValueError(
"The feature's .location is None. Check the "
"sequence file for a valid location."
)
return self.location.extract(parent_sequence)
def translate(
self,
parent_sequence,
table="Standard",
start_offset=None,
stop_symbol="*",
to_stop=False,
cds=None,
gap=None,
):
if start_offset is None:
try:
start_offset = int(self.qualifiers["codon_start"][0]) - 1
except KeyError:
start_offset = 0
if start_offset not in [0, 1, 2]:
raise ValueError(
"The start_offset must be 0, 1, or 2. "
f"The supplied value is {start_offset}. "
"Check the value of either the codon_start qualifier "
"or the start_offset argument"
)
feat_seq = self.extract(parent_sequence)[start_offset:]
codon_table = self.qualifiers.get("transl_table", [table])[0]
if cds is None:
cds = self.type == "CDS"
return feat_seq.translate(
table=codon_table,
stop_symbol=stop_symbol,
to_stop=to_stop,
cds=cds,
gap=gap,
)
def __bool__(self):
return True
def __len__(self):
return len(self.location)
def __iter__(self):
return iter(self.location)
def __contains__(self, value):
return value in self.location
class Reference:
def __init__(self):
self.location = []
self.authors = ""
self.consrtm = ""
self.title = ""
self.journal = ""
self.medline_id = ""
self.pubmed_id = ""
self.comment = ""
def __str__(self):
out = ""
for single_location in self.location:
out += "location: %s\n" % single_location
out += "authors: %s\n" % self.authors
if self.consrtm:
out += "consrtm: %s\n" % self.consrtm
out += "title: %s\n" % self.title
out += "journal: %s\n" % self.journal
out += "medline id: %s\n" % self.medline_id
out += "pubmed id: %s\n" % self.pubmed_id
out += "comment: %s\n" % self.comment
return out
def __repr__(self):
return "%s(title=%s, ...)" % (self.__class__.__name__, repr(self.title))
def __eq__(self, other):
return (
self.authors == other.authors
and self.consrtm == other.consrtm
and self.title == other.title
and self.journal == other.journal
and self.medline_id == other.medline_id
and self.pubmed_id == other.pubmed_id
and self.comment == other.comment
and self.location == other.location
)
class FeatureLocation:
def __init__(self, start, end, strand=None, ref=None, ref_db=None):
if isinstance(start, AbstractPosition):
self._start = start
elif isinstance(start, int):
self._start = ExactPosition(start)
else:
raise TypeError("start=%r %s" % (start, type(start)))
if isinstance(end, AbstractPosition):
self._end = end
elif isinstance(end, int):
self._end = ExactPosition(end)
else:
raise TypeError("end=%r %s" % (end, type(end)))
if (
isinstance(self.start.position, int)
and isinstance(self.end.position, int)
and self.start > self.end
):
raise ValueError(
f"End location ({self.end}) must be greater than "
f"or equal to start location ({self.start})"
)
self.strand = strand
self.ref = ref
self.ref_db = ref_db
def _get_strand(self):
return self._strand
def _set_strand(self, value):
if value not in [+1, -1, 0, None]:
raise ValueError("Strand should be +1, -1, 0 or None, not %r" % value)
self._strand = value
strand = property(
fget=_get_strand,
fset=_set_strand,
doc="Strand of the location (+1, -1, 0 or None).",
)
def __str__(self):
answer = "[%s:%s]" % (self._start, self._end)
if self.ref and self.ref_db:
answer = "%s:%s%s" % (self.ref_db, self.ref, answer)
elif self.ref:
answer = self.ref + answer
if self.strand is None:
return answer
elif self.strand == +1:
return answer + "(+)"
elif self.strand == -1:
return answer + "(-)"
else:
return answer + "(?)"
def __repr__(self):
optional = ""
if self.strand is not None:
optional += ", strand=%r" % self.strand
if self.ref is not None:
optional += ", ref=%r" % self.ref
if self.ref_db is not None:
optional += ", ref_db=%r" % self.ref_db
return "%s(%r, %r%s)" % (
self.__class__.__name__,
self.start,
self.end,
optional,
)
def __add__(self, other):
if isinstance(other, FeatureLocation):
return CompoundLocation([self, other])
elif isinstance(other, int):
return self._shift(other)
else:
return NotImplemented
def __radd__(self, other):
if isinstance(other, int):
return self._shift(other)
else:
return NotImplemented
def __nonzero__(self):
return True
def __len__(self):
return int(self._end) - int(self._start)
def __contains__(self, value):
if not isinstance(value, int):
raise ValueError(
"Currently we only support checking for integer "
"positions being within a FeatureLocation."
)
if value < self._start or value >= self._end:
return False
else:
return True
def __iter__(self):
if self.strand == -1:
yield from range(self._end - 1, self._start - 1, -1)
else:
yield from range(self._start, self._end)
def __eq__(self, other):
if not isinstance(other, FeatureLocation):
return False
return (
self._start == other.start
and self._end == other.end
and self._strand == other.strand
and self.ref == other.ref
and self.ref_db == other.ref_db
)
def _shift(self, offset):
# TODO - What if offset is a fuzzy position?
if self.ref or self.ref_db:
# TODO - Return self?
raise ValueError("Feature references another sequence.")
return FeatureLocation(
start=self._start._shift(offset),
end=self._end._shift(offset),
strand=self.strand,
)
def _flip(self, length):
if self.ref or self.ref_db:
# TODO - Return self?
raise ValueError("Feature references another sequence.")
# Note this will flip the start and end too!
if self.strand == +1:
flip_strand = -1
elif self.strand == -1:
flip_strand = +1
else:
# 0 or None
flip_strand = self.strand
return FeatureLocation(
start=self._end._flip(length),
end=self._start._flip(length),
strand=flip_strand,
)
@property
def parts(self):
return [self]
@property
def start(self):
return self._start
@property
def end(self):
return self._end
@property
def nofuzzy_start(self):
try:
return int(self._start)
except TypeError:
if isinstance(self._start, UnknownPosition):
return None
raise
@property
def nofuzzy_end(self):
try:
return int(self._end)
except TypeError:
if isinstance(self._end, UnknownPosition):
return None
raise
def extract(self, parent_sequence):
if self.ref or self.ref_db:
# TODO - Take a dictionary as an optional argument?
raise ValueError("Feature references another sequence.")
if isinstance(parent_sequence, MutableSeq):
# This avoids complications with reverse complements
# (the MutableSeq reverse complement acts in situ)
parent_sequence = parent_sequence.toseq()
f_seq = parent_sequence[self.nofuzzy_start : self.nofuzzy_end]
if self.strand == -1:
try:
f_seq = f_seq.reverse_complement()
except AttributeError:
assert isinstance(f_seq, str)
f_seq = reverse_complement(f_seq)
return f_seq
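Stripped of the `MutableSeq` handling, `extract` reduces to a slice plus, on the minus strand, a reverse complement. Here is how that behaves with plain strings, using a hand-rolled complement table (assumes an ACGT-only sequence; this is a sketch, not the `Bio.Seq` implementation):

```python
def extract(seq, start, end, strand):
    """Slice [start:end) from seq; reverse-complement it when strand == -1."""
    sub = seq[start:end]
    if strand == -1:
        comp = {"A": "T", "T": "A", "G": "C", "C": "G"}
        sub = "".join(comp[b] for b in reversed(sub))
    return sub

print(extract("AAAGGCTTT", 3, 6, +1))  # GGC
print(extract("AAAGGCTTT", 3, 6, -1))  # GCC
```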
class CompoundLocation:
def __init__(self, parts, operator="join"):
self.operator = operator
self.parts = list(parts)
for loc in self.parts:
if not isinstance(loc, FeatureLocation):
raise ValueError(
"CompoundLocation should be given a list of "
"FeatureLocation objects, not %s" % loc.__class__
)
if len(parts) < 2:
raise ValueError(
"CompoundLocation should have at least 2 parts, not %r" % parts
)
def __str__(self):
return "%s{%s}" % (self.operator, ", ".join(str(loc) for loc in self.parts))
def __repr__(self):
return "%s(%r, %r)" % (self.__class__.__name__, self.parts, self.operator)
def _get_strand(self):
# Historically a join on the reverse strand has been represented
# in Biopython with both the parent SeqFeature and its children
# (the exons for a CDS) all given a strand of -1. Likewise, for
# a join feature on the forward strand they all have strand +1.
# However, we must also consider evil mixed strand examples like
# this, join(complement(69611..69724),139856..140087,140625..140650)
if len({loc.strand for loc in self.parts}) == 1:
return self.parts[0].strand
else:
return None # i.e. mixed strands
def _set_strand(self, value):
# Should this be allowed/encouraged?
for loc in self.parts:
loc.strand = value
strand = property(
fget=_get_strand,
fset=_set_strand,
doc="""Overall strand of the compound location.
If all the parts have the same strand, that is returned. Otherwise
for mixed strands, this returns None.
>>> from Bio.SeqFeature import FeatureLocation, CompoundLocation
>>> f1 = FeatureLocation(15, 17, strand=1)
>>> f2 = FeatureLocation(20, 30, strand=-1)
>>> f = f1 + f2
>>> f1.strand
1
>>> f2.strand
-1
>>> f.strand
>>> f.strand is None
True
If you set the strand of a CompoundLocation, this is applied to
all the parts - use with caution:
>>> f.strand = 1
>>> f1.strand
1
>>> f2.strand
1
>>> f.strand
1
""",
)
def __add__(self, other):
if isinstance(other, FeatureLocation):
return CompoundLocation(self.parts + [other], self.operator)
elif isinstance(other, CompoundLocation):
if self.operator != other.operator:
# Handle join+order -> order as a special case?
raise ValueError(
"Mixed operators %s and %s" % (self.operator, other.operator)
)
return CompoundLocation(self.parts + other.parts, self.operator)
elif isinstance(other, int):
return self._shift(other)
else:
raise NotImplementedError
def __radd__(self, other):
if isinstance(other, FeatureLocation):
return CompoundLocation([other] + self.parts, self.operator)
elif isinstance(other, int):
return self._shift(other)
else:
raise NotImplementedError
def __contains__(self, value):
for loc in self.parts:
if value in loc:
return True
return False
def __nonzero__(self):
return True
def __len__(self):
return sum(len(loc) for loc in self.parts)
def __iter__(self):
for loc in self.parts:
yield from loc
def __eq__(self, other):
if not isinstance(other, CompoundLocation):
return False
if len(self.parts) != len(other.parts):
return False
if self.operator != other.operator:
return False
for self_part, other_part in zip(self.parts, other.parts):
if self_part != other_part:
return False
return True
def _shift(self, offset):
return CompoundLocation(
[loc._shift(offset) for loc in self.parts], self.operator
)
def _flip(self, length):
return CompoundLocation(
[loc._flip(length) for loc in self.parts], self.operator
)
@property
def start(self):
return min(loc.start for loc in self.parts)
@property
def end(self):
return max(loc.end for loc in self.parts)
@property
def nofuzzy_start(self):
try:
return int(self.start)
except TypeError:
if isinstance(self.start, UnknownPosition):
return None
raise
@property
def nofuzzy_end(self):
try:
return int(self.end)
except TypeError:
if isinstance(self.end, UnknownPosition):
return None
raise
@property
def ref(self):
return None
@property
def ref_db(self):
return None
def extract(self, parent_sequence):
# This copes with mixed strand features & all on reverse:
parts = [loc.extract(parent_sequence) for loc in self.parts]
# We use addition rather than a join to avoid alphabet issues:
f_seq = parts[0]
for part in parts[1:]:
f_seq += part
return f_seq
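A compound join simply concatenates each part's extraction in part order; for a CDS split across exons this reassembles the coding sequence. A minimal string-based sketch of that loop (`extract_join` is a hypothetical helper, not the Biopython call signature):

```python
def extract_join(seq, parts):
    """Concatenate forward-strand slices; parts is a list of (start, end)."""
    return "".join(seq[s:e] for s, e in parts)

print(extract_join("ATGAAACCCGGGTTT", [(0, 3), (6, 9)]))  # ATGCCC
```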
class AbstractPosition:
def __repr__(self):
return "%s(...)" % (self.__class__.__name__)
class ExactPosition(int, AbstractPosition):
def __new__(cls, position, extension=0):
if extension != 0:
raise AttributeError(
"Non-zero extension %s for exact position." % extension
)
return int.__new__(cls, position)
# Must define this on Python 3.8 onwards because we redefine __repr__
def __str__(self):
return str(int(self))
def __repr__(self):
return "%s(%i)" % (self.__class__.__name__, int(self))
@property
def position(self):
return int(self)
@property
def extension(self):
return 0
def _shift(self, offset):
# By default preserve any subclass
return self.__class__(int(self) + offset)
def _flip(self, length):
# By default preserve any subclass
return self.__class__(length - int(self))
class UncertainPosition(ExactPosition):
pass
class UnknownPosition(AbstractPosition):
def __repr__(self):
return "%s()" % self.__class__.__name__
def __hash__(self):
return hash(None)
@property
def position(self):
return None
@property
def extension(self): # noqa: D402
return 0
def _shift(self, offset):
return self
def _flip(self, length):
return self
class WithinPosition(int, AbstractPosition):
def __new__(cls, position, left, right):
if not (position == left or position == right):
raise RuntimeError(
"WithinPosition: %r should match left %r or "
"right %r" % (position, left, right)
)
obj = int.__new__(cls, position)
obj._left = left
obj._right = right
return obj
def __getnewargs__(self):
return (int(self), self._left, self._right)
def __repr__(self):
return "%s(%i, left=%i, right=%i)" % (
self.__class__.__name__,
int(self),
self._left,
self._right,
)
def __str__(self):
return "(%s.%s)" % (self._left, self._right)
@property
def position(self):
return self._left
@property
def extension(self): # noqa: D402
return self._right - self._left
def _shift(self, offset):
return self.__class__(
int(self) + offset, self._left + offset, self._right + offset
)
def _flip(self, length):
return self.__class__(
length - int(self), length - self._right, length - self._left
)
class BetweenPosition(int, AbstractPosition):
def __new__(cls, position, left, right):
assert position == left or position == right
obj = int.__new__(cls, position)
obj._left = left
obj._right = right
return obj
def __getnewargs__(self):
return (int(self), self._left, self._right)
def __repr__(self):
return "%s(%i, left=%i, right=%i)" % (
self.__class__.__name__,
int(self),
self._left,
self._right,
)
def __str__(self):
return "(%s^%s)" % (self._left, self._right)
@property
def position(self):
return self._left
@property
def extension(self): # noqa: D402
return self._right - self._left
def _shift(self, offset):
return self.__class__(
int(self) + offset, self._left + offset, self._right + offset
)
def _flip(self, length):
return self.__class__(
length - int(self), length - self._right, length - self._left
)
class BeforePosition(int, AbstractPosition):
# Subclasses int so can't use __init__
def __new__(cls, position, extension=0):
if extension != 0:
raise AttributeError(
"Non-zero extension %s for exact position." % extension
)
return int.__new__(cls, position)
@property
def position(self):
return int(self)
@property
def extension(self):
return 0
def __repr__(self):
return "%s(%i)" % (self.__class__.__name__, int(self))
def __str__(self):
return "<%s" % self.position
def _shift(self, offset):
return self.__class__(int(self) + offset)
def _flip(self, length):
return AfterPosition(length - int(self))
class AfterPosition(int, AbstractPosition):
def __new__(cls, position, extension=0):
if extension != 0:
raise AttributeError(
"Non-zero extension %s for exact position." % extension
)
return int.__new__(cls, position)
@property
def position(self):
return int(self)
@property
def extension(self): # noqa: D402
return 0
def __repr__(self):
return "%s(%i)" % (self.__class__.__name__, int(self))
def __str__(self):
return ">%s" % self.position
def _shift(self, offset):
return self.__class__(int(self) + offset)
def _flip(self, length):
return BeforePosition(length - int(self))
class OneOfPosition(int, AbstractPosition):
def __new__(cls, position, choices):
if position not in choices:
raise ValueError(
"OneOfPosition: %r should match one of %r" % (position, choices)
)
obj = int.__new__(cls, position)
obj.position_choices = choices
return obj
def __getnewargs__(self):
return (int(self), self.position_choices)
@property
def position(self):
return min(int(pos) for pos in self.position_choices)
@property
def extension(self):
positions = [int(pos) for pos in self.position_choices]
return max(positions) - min(positions)
def __repr__(self):
return "%s(%i, choices=%r)" % (
self.__class__.__name__,
int(self),
self.position_choices,
)
def __str__(self):
out = "one-of("
for position in self.position_choices:
out += "%s," % position
# replace the last comma with the closing parenthesis
return out[:-1] + ")"
def _shift(self, offset):
return self.__class__(
int(self) + offset, [p._shift(offset) for p in self.position_choices]
)
def _flip(self, length):
return self.__class__(
length - int(self), [p._flip(length) for p in self.position_choices[::-1]]
)
class PositionGap:
def __init__(self, gap_size):
self.gap_size = gap_size
def __repr__(self):
return "%s(%s)" % (self.__class__.__name__, repr(self.gap_size))
def __str__(self):
return "gap(%s)" % self.gap_size
if __name__ == "__main__":
from Bio._utils import run_doctest
run_doctest()
| true | true |
f73e22e6f2a1a6f43caad4aecd36ef7f0884eed1 | 672 | py | Python | setup.py | vinobc/ssncoe | e286cd55e31b0e61954e57e9b7f53251c3f419b7 | [
"MIT"
] | null | null | null | setup.py | vinobc/ssncoe | e286cd55e31b0e61954e57e9b7f53251c3f419b7 | [
"MIT"
] | null | null | null | setup.py | vinobc/ssncoe | e286cd55e31b0e61954e57e9b7f53251c3f419b7 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
from setuptools import setup, find_packages
import re, ast
with open('requirements.txt') as f:
install_requires = f.read().strip().split('\n')
# get version from __version__ variable in ssncoe/__init__.py
_version_re = re.compile(r'__version__\s+=\s+(.*)')
with open('ssncoe/__init__.py', 'rb') as f:
version = str(ast.literal_eval(_version_re.search(
f.read().decode('utf-8')).group(1)))
setup(
name='ssncoe',
version=version,
description='ssncoe',
author='R Vinob Chander',
author_email='vinobchanderr@ssn.edu.in',
packages=find_packages(),
zip_safe=False,
include_package_data=True,
install_requires=install_requires
)
| 25.846154 | 70 | 0.730655 |
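The `install_requires` handling in this `setup.py` is just a read/strip/split over `requirements.txt`; the same pattern is shown here against an in-memory file (the requirement names are invented for illustration):

```python
import io

# Stand-in for open('requirements.txt'): same read/strip/split pattern.
f = io.StringIO("frappe\nrequests>=2.0\n")
install_requires = f.read().strip().split('\n')
print(install_requires)  # ['frappe', 'requests>=2.0']
```

One caveat with this one-liner: a comment line or blank line in the middle of the file would survive as a bogus requirement string, so a `splitlines()` pass with filtering is a more defensive variant.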
from setuptools import setup, find_packages
import re, ast
with open('requirements.txt') as f:
install_requires = f.read().strip().split('\n')
_version_re = re.compile(r'__version__\s+=\s+(.*)')
with open('ssncoe/__init__.py', 'rb') as f:
version = str(ast.literal_eval(_version_re.search(
f.read().decode('utf-8')).group(1)))
setup(
name='ssncoe',
version=version,
description='ssncoe',
author='R Vinob Chander',
author_email='vinobchanderr@ssn.edu.in',
packages=find_packages(),
zip_safe=False,
include_package_data=True,
install_requires=install_requires
)
| true | true |
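The `__version__`-extraction idiom used by this `setup.py` (regex plus `ast.literal_eval`) can be demonstrated on a throwaway module file:

```python
import ast
import os
import re
import tempfile

_version_re = re.compile(r'__version__\s+=\s+(.*)')

with tempfile.TemporaryDirectory() as d:
    init_path = os.path.join(d, '__init__.py')
    with open(init_path, 'w') as f:
        f.write("__version__ = '1.2.3'\n")
    # Same read-bytes/decode/search/literal_eval chain as the setup.py above.
    with open(init_path, 'rb') as f:
        version = str(ast.literal_eval(_version_re.search(
            f.read().decode('utf-8')).group(1)))

print(version)  # 1.2.3
```

`ast.literal_eval` safely evaluates the quoted literal without importing the package itself, which matters because importing could fail before the package's dependencies are installed.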
f73e23632950e7223ada376f90e321f85c077de0 | 462 | py | Python | requirements/docutils-0.18/test/functional/tests/math_output_mathml.py | QuentinTournier40/AnimationFreeCAD | 8eaff8356ec68b948a721b83a6888b652278db8a | [
"Apache-2.0"
] | null | null | null | requirements/docutils-0.18/test/functional/tests/math_output_mathml.py | QuentinTournier40/AnimationFreeCAD | 8eaff8356ec68b948a721b83a6888b652278db8a | [
"Apache-2.0"
] | null | null | null | requirements/docutils-0.18/test/functional/tests/math_output_mathml.py | QuentinTournier40/AnimationFreeCAD | 8eaff8356ec68b948a721b83a6888b652278db8a | [
"Apache-2.0"
] | 1 | 2022-02-03T08:03:30.000Z | 2022-02-03T08:03:30.000Z | # Source and destination file names.
test_source = "data/math.txt"
test_destination = "math_output_mathml.html"
# Keyword parameters passed to publish_file.
reader_name = "standalone"
parser_name = "rst"
writer_name = "html5"
# Settings
settings_overrides['math_output'] = 'MathML'
# local copy of default stylesheet:
# (test runs in ``docutils/test/``, we need relative path from there.)
settings_overrides['stylesheet_dirs'] = ('.', 'functional/input/data')
| 30.8 | 70 | 0.755411 |
test_source = "data/math.txt"
test_destination = "math_output_mathml.html"
reader_name = "standalone"
parser_name = "rst"
writer_name = "html5"
settings_overrides['math_output'] = 'MathML'
settings_overrides['stylesheet_dirs'] = ('.', 'functional/input/data')
| true | true |
f73e23f978c60ba412fc3a443cb0e1e12d71e247 | 14,516 | py | Python | hplip-3.20.3/base/g.py | Deril-Pana/wikiBlackcoinNL | 9633307f0b485c27feae5da242944adf450e8963 | [
"MIT"
] | null | null | null | hplip-3.20.3/base/g.py | Deril-Pana/wikiBlackcoinNL | 9633307f0b485c27feae5da242944adf450e8963 | [
"MIT"
] | 1 | 2021-11-20T16:33:39.000Z | 2021-11-20T16:33:39.000Z | hplip-3.20.3/base/g.py | Deril-Pana/wikiBlackcoinNL | 9633307f0b485c27feae5da242944adf450e8963 | [
"MIT"
] | null | null | null | #!/usr/bin/env python
# -*- coding: utf-8 -*-
#
# (c) Copyright 2003-2015 HP Development Company, L.P.
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
#
# Author: Don Welch
#
# NOTE: This module is safe for 'from g import *'
#
# Std Lib
import sys
import os
import os.path
from .sixext import PY3
from .sixext.moves import configparser
import locale
import pwd
import stat
import re
# Local
from .codes import *
from . import logger
from . import os_utils
from .sixext import to_unicode
if PY3:
QString = type("")
def cmp(a, b):
return (a > b) - (a < b)
# System wide logger
log = logger.Logger('', logger.Logger.LOG_LEVEL_INFO, logger.Logger.LOG_TO_CONSOLE)
log.set_level('info')
MINIMUM_PYQT_MAJOR_VER = 3
MINIMUM_PYQT_MINOR_VER = 14
MINIMUM_QT_MAJOR_VER = 3
MINIMUM_QT_MINOR_VER = 0
def to_bool(s, default=False):
if isinstance(s, str) and s:
if s[0].lower() in ['1', 't', 'y']:
return True
elif s[0].lower() in ['0', 'f', 'n']:
return False
elif isinstance(s, bool):
return s
return default
# System wide properties
class Properties(dict):
def __getattr__(self, attr):
if attr in list(self.keys()):
return self.__getitem__(attr)
else:
return ""
def __setattr__(self, attr, val):
self.__setitem__(attr, val)
prop = Properties()
class ConfigBase(object):
def __init__(self, filename):
self.filename = filename
self.conf = configparser.ConfigParser()
self.read()
def get(self, section, key, default=to_unicode('')):
try:
return self.conf.get(section, key)
except (configparser.NoOptionError, configparser.NoSectionError):
return default
def set(self, section, key, value):
if not self.conf.has_section(section):
self.conf.add_section(section)
self.conf.set(section, key, value)
self.write()
def sections(self):
return self.conf.sections()
def has_section(self, section):
return self.conf.has_section(section)
def options(self, section):
return self.conf.options(section)
keys = options
def read(self):
if self.filename is not None:
filename = self.filename
if filename.startswith("/root/"):
# Don't try opening a file in root's home directory.
log.error("attempted to read from '%s'" % self.filename)
return
try:
fp = open(self.filename, "r")
try:
self.conf.readfp(fp)
except configparser.MissingSectionHeaderError:
print("")
log.error("Found No Section in %s. Please set the http proxy for root and try again." % self.filename)
except (configparser.DuplicateOptionError):
                    log.warn("Found Duplicate Entry in %s" % self.filename)
self.CheckDuplicateEntries()
finally:
fp.close()
except (OSError, IOError, configparser.MissingSectionHeaderError):
log.debug("Unable to open file %s for reading." % self.filename)
def write(self):
if self.filename is not None:
filename = self.filename
if filename.startswith("/root/") or filename.startswith("/etc/"):
# Don't try writing a file in root's home directory or
# the system-wide config file.
# See bug #479178.
log.error("attempted to write to '%s'" % self.filename)
return
try:
fp = open(self.filename, "w")
self.conf.write(fp)
fp.close()
except (OSError, IOError):
log.debug("Unable to open file %s for writing." % self.filename)
def CheckDuplicateEntries(self):
try:
f = open(self.filename,'r')
data = f.read()
f.close()
except IOError:
data =""
final_data =''
for a in data.splitlines():
if not a or a not in final_data:
final_data = final_data +'\n' +a
import tempfile
fd, self.filename = tempfile.mkstemp()
f = open(self.filename,'w')
f.write(final_data)
f.close()
self.read()
os.unlink(self.filename)
class SysConfig(ConfigBase):
def __init__(self):
ConfigBase.__init__(self, '/etc/hp/hplip.conf')
class State(ConfigBase):
def __init__(self):
if not os.path.exists('/var/lib/hp/') and os.geteuid() == 0:
os.makedirs('/var/lib/hp/')
cmd = 'chmod 755 /var/lib/hp/'
os_utils.execute(cmd)
ConfigBase.__init__(self, '/var/lib/hp/hplip.state')
class UserConfig(ConfigBase):
def __init__(self):
sts, prop.user_dir = os_utils.getHPLIPDir()
if not os.geteuid() == 0:
prop.user_config_file = os.path.join(prop.user_dir, 'hplip.conf')
if not os.path.exists(prop.user_config_file):
try:
open(prop.user_config_file, 'w').close()
s = os.stat(os.path.dirname(prop.user_config_file))
os.chown(prop.user_config_file, s[stat.ST_UID], s[stat.ST_GID])
except IOError:
pass
ConfigBase.__init__(self, prop.user_config_file)
else:
# If running as root, conf file is None
prop.user_config_file = None
ConfigBase.__init__(self, None)
def workingDirectory(self):
t = self.get('last_used', 'working_dir', os.path.expanduser("~"))
try:
t = t.decode('utf-8')
except UnicodeError:
log.error("Invalid unicode: %s" % t)
log.debug("working directory: %s" % t)
return t
def setWorkingDirectory(self, t):
self.set('last_used', 'working_dir', t.encode('utf-8'))
log.debug("working directory: %s" % t.encode('utf-8'))
os.umask(0o037)
# System Config File: Directories and build settings. Not altered after installation.
sys_conf = SysConfig()
# System State File: System-wide runtime settings
sys_state = State()
# Per-user Settings File: (Note: For Qt4 code, limit the use of this to non-GUI apps. only)
user_conf = UserConfig()
# Language settings
try:
prop.locale, prop.encoding = locale.getdefaultlocale()
except ValueError:
prop.locale = 'en_US'
prop.encoding = 'UTF8'
prop.version = sys_conf.get('hplip', 'version', '0.0.0') # e.g., 3.9.2b.10
_p, _x = re.compile(r'(\d\w*)', re.I), []
for _y in prop.version.split('.')[:3]:
_z = _p.match(_y)
if _z is not None:
_x.append(_z.group(1))
prop.installed_version = '.'.join(_x) # e.g., '3.9.2'
try:
prop.installed_version_int = int(''.join(['%02x' % int(_y) for _y in _x]), 16) # e.g., 0x030902 -> 198914
except ValueError:
prop.installed_version_int = 0
prop.home_dir = sys_conf.get('dirs', 'home', os.path.realpath(os.path.normpath(os.getcwd())))
prop.username = pwd.getpwuid(os.getuid())[0]
pdb = pwd.getpwnam(prop.username)
prop.userhome = pdb[5]
prop.history_size = 50
prop.data_dir = os.path.join(prop.home_dir, 'data')
prop.image_dir = os.path.join(prop.home_dir, 'data', 'images')
prop.xml_dir = os.path.join(prop.home_dir, 'data', 'xml')
prop.models_dir = os.path.join(prop.home_dir, 'data', 'models')
prop.localization_dir = os.path.join(prop.home_dir, 'data', 'localization')
prop.max_message_len = 8192
prop.max_message_read = 65536
prop.read_timeout = 90
prop.ppd_search_path = '/usr/share;/usr/local/share;/usr/lib;/usr/local/lib;/usr/libexec;/opt;/usr/lib64'
prop.ppd_search_pattern = 'HP-*.ppd.*'
prop.ppd_download_url = 'http://www.linuxprinting.org/ppd-o-matic.cgi'
prop.ppd_file_suffix = '-hpijs.ppd'
# Build and install configurations
prop.gui_build = to_bool(sys_conf.get('configure', 'gui-build', '0'))
prop.net_build = to_bool(sys_conf.get('configure', 'network-build', '0'))
prop.par_build = to_bool(sys_conf.get('configure', 'pp-build', '0'))
prop.usb_build = True
prop.scan_build = to_bool(sys_conf.get('configure', 'scanner-build', '0'))
prop.fax_build = to_bool(sys_conf.get('configure', 'fax-build', '0'))
prop.doc_build = to_bool(sys_conf.get('configure', 'doc-build', '0'))
prop.foomatic_xml_install = to_bool(sys_conf.get('configure', 'foomatic-xml-install', '0'))
prop.foomatic_ppd_install = to_bool(sys_conf.get('configure', 'foomatic-ppd-install', '0'))
prop.hpcups_build = to_bool(sys_conf.get('configure', 'hpcups-install', '0'))
prop.hpijs_build = to_bool(sys_conf.get('configure', 'hpijs-install', '0'))
# Spinner, ala Gentoo Portage
spinner = r"\|/-\|/-"
spinpos = 0
enable_spinner = True
def change_spinner_state(enable =True):
global enable_spinner
enable_spinner = enable
def update_spinner():
global spinner, spinpos, enable_spinner
if enable_spinner and not log.is_debug() and sys.stdout.isatty():
sys.stdout.write("\b" + spinner[spinpos])
spinpos=(spinpos + 1) % 8
sys.stdout.flush()
def cleanup_spinner():
global enable_spinner
if enable_spinner and not log.is_debug() and sys.stdout.isatty():
sys.stdout.write("\b \b")
sys.stdout.flush()
# Convert string to int and return a list.
def xint(ver):
    l = []
    try:
        l = [int(x) for x in ver.split('.')]
    except Exception:
        pass
    return l
# In case of import failure of extension modules, check whether its a mixed python environment issue.
def check_extension_module_env(ext_mod):
flag = 0
ext_mod_so = ext_mod + '.so'
python_ver = xint((sys.version).split(' ')[0]) #find the current python version ; xint() to convert string to int, returns a list
if python_ver[0] == 3 :
python_ver = 3
else :
python_ver = 2
for dirpath, dirname, filenames in os.walk('/usr/lib/'): #find the .so path
if ext_mod_so in filenames:
ext_path = dirpath
flag = 1
if flag == 0:
log.error('%s not present in the system. Please re-install HPLIP.' %ext_mod)
sys.exit(1)
    m = re.search(r'python(\d(\.\d){0,2})', ext_path) #get the python version where the .so file is found
ext_ver = xint(m.group(1))
if ext_ver[0] == 3:
ver = 3
else:
ver = 2
if python_ver != ver : #compare the python version and the version where .so files are present
log.error("%s Extension module is missing from Python's path." %ext_mod)
log.info("To fix this issue, please refer to this 'http://hplipopensource.com/node/372'")
sys.exit(1)
# Internal/messaging errors
ERROR_STRINGS = {
ERROR_SUCCESS : 'No error',
ERROR_UNKNOWN_ERROR : 'Unknown error',
ERROR_DEVICE_NOT_FOUND : 'Device not found',
ERROR_INVALID_DEVICE_ID : 'Unknown/invalid device-id field',
ERROR_INVALID_DEVICE_URI : 'Unknown/invalid device-uri field',
ERROR_DATA_LENGTH_EXCEEDS_MAX : 'Data length exceeds maximum',
ERROR_DEVICE_IO_ERROR : 'Device I/O error',
ERROR_NO_PROBED_DEVICES_FOUND : 'No probed devices found',
ERROR_DEVICE_BUSY : 'Device busy',
ERROR_DEVICE_STATUS_NOT_AVAILABLE : 'DeviceStatus not available',
ERROR_INVALID_SERVICE_NAME : 'Invalid service name',
ERROR_ERROR_INVALID_CHANNEL_ID : 'Invalid channel-id (service name)',
ERROR_CHANNEL_BUSY : 'Channel busy',
ERROR_DEVICE_DOES_NOT_SUPPORT_OPERATION : 'Device does not support operation',
ERROR_DEVICEOPEN_FAILED : 'Device open failed',
ERROR_INVALID_DEVNODE : 'Invalid device node',
ERROR_INVALID_HOSTNAME : "Invalid hostname ip address",
ERROR_INVALID_PORT_NUMBER : "Invalid JetDirect port number",
ERROR_NO_CUPS_QUEUE_FOUND_FOR_DEVICE : "No CUPS queue found for device.",
ERROR_DATFILE_ERROR: "DAT file error",
ERROR_INVALID_TIMEOUT: "Invalid timeout",
ERROR_IO_TIMEOUT: "I/O timeout",
ERROR_FAX_INCOMPATIBLE_OPTIONS: "Incompatible fax options",
ERROR_FAX_INVALID_FAX_FILE: "Invalid fax file",
ERROR_FAX_FILE_NOT_FOUND: "Fax file not found",
ERROR_INTERNAL : 'Unknown internal error',
}
class Error(Exception):
def __init__(self, opt=ERROR_INTERNAL):
self.opt = opt
self.msg = ERROR_STRINGS.get(opt, ERROR_STRINGS[ERROR_INTERNAL])
log.debug("Exception: %d (%s)" % (opt, self.msg))
Exception.__init__(self, self.msg, opt)
# Make sure True and False are avail. in pre-2.2 versions
#try:
# True
#except NameError:
# True = (1==1)
# False = not True
# as new translations are completed, add them here
supported_locales = { 'en_US': ('us', 'en', 'en_us', 'american', 'america', 'usa', 'english'),}
# Localization support was disabled in 3.9.2
#'zh_CN': ('zh', 'cn', 'zh_cn' , 'china', 'chinese', 'prc'),
#'de_DE': ('de', 'de_de', 'german', 'deutsche'),
#'fr_FR': ('fr', 'fr_fr', 'france', 'french', 'français'),
#'it_IT': ('it', 'it_it', 'italy', 'italian', 'italiano'),
#'ru_RU': ('ru', 'ru_ru', 'russian'),
#'pt_BR': ('pt', 'br', 'pt_br', 'brazil', 'brazilian', 'portuguese', 'brasil', 'portuguesa'),
#'es_MX': ('es', 'mx', 'es_mx', 'mexico', 'spain', 'spanish', 'espanol', 'español'),
#}
| 33.915888 | 146 | 0.610361 |
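The `to_bool` helper in the `g.py` above keys off only the first character of a non-empty string (`'1'`/`'t'`/`'y'` vs. `'0'`/`'f'`/`'n'`), passes real booleans through, and falls back to `default` for everything else. A standalone copy for experimentation:

```python
def to_bool(s, default=False):
    # First character decides for non-empty strings; booleans pass through.
    if isinstance(s, str) and s:
        if s[0].lower() in ['1', 't', 'y']:
            return True
        elif s[0].lower() in ['0', 'f', 'n']:
            return False
    elif isinstance(s, bool):
        return s
    return default


print(to_bool('Yes'), to_bool('false'), to_bool('maybe'), to_bool('', default=True))
# True False False True
```

Note that an unrecognized first character (e.g. `'maybe'`) silently yields the default rather than raising, which suits a config parser that must tolerate hand-edited files.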
import sys
import os
import os.path
from .sixext import PY3
from .sixext.moves import configparser
import locale
import pwd
import stat
import re
from .codes import *
from . import logger
from . import os_utils
from .sixext import to_unicode
if PY3:
QString = type("")
def cmp(a, b):
return (a > b) - (a < b)
log = logger.Logger('', logger.Logger.LOG_LEVEL_INFO, logger.Logger.LOG_TO_CONSOLE)
log.set_level('info')
MINIMUM_PYQT_MAJOR_VER = 3
MINIMUM_PYQT_MINOR_VER = 14
MINIMUM_QT_MAJOR_VER = 3
MINIMUM_QT_MINOR_VER = 0
def to_bool(s, default=False):
if isinstance(s, str) and s:
if s[0].lower() in ['1', 't', 'y']:
return True
elif s[0].lower() in ['0', 'f', 'n']:
return False
elif isinstance(s, bool):
return s
return default
class Properties(dict):
def __getattr__(self, attr):
if attr in list(self.keys()):
return self.__getitem__(attr)
else:
return ""
def __setattr__(self, attr, val):
self.__setitem__(attr, val)
prop = Properties()
class ConfigBase(object):
def __init__(self, filename):
self.filename = filename
self.conf = configparser.ConfigParser()
self.read()
def get(self, section, key, default=to_unicode('')):
try:
return self.conf.get(section, key)
except (configparser.NoOptionError, configparser.NoSectionError):
return default
def set(self, section, key, value):
if not self.conf.has_section(section):
self.conf.add_section(section)
self.conf.set(section, key, value)
self.write()
def sections(self):
return self.conf.sections()
def has_section(self, section):
return self.conf.has_section(section)
def options(self, section):
return self.conf.options(section)
keys = options
def read(self):
if self.filename is not None:
filename = self.filename
if filename.startswith("/root/"):
log.error("attempted to read from '%s'" % self.filename)
return
try:
fp = open(self.filename, "r")
try:
self.conf.readfp(fp)
except configparser.MissingSectionHeaderError:
print("")
log.error("Found No Section in %s. Please set the http proxy for root and try again." % self.filename)
except (configparser.DuplicateOptionError):
                    log.warn("Found Duplicate Entry in %s" % self.filename)
self.CheckDuplicateEntries()
finally:
fp.close()
except (OSError, IOError, configparser.MissingSectionHeaderError):
log.debug("Unable to open file %s for reading." % self.filename)
def write(self):
if self.filename is not None:
filename = self.filename
if filename.startswith("/root/") or filename.startswith("/etc/"):
log.error("attempted to write to '%s'" % self.filename)
return
try:
fp = open(self.filename, "w")
self.conf.write(fp)
fp.close()
except (OSError, IOError):
log.debug("Unable to open file %s for writing." % self.filename)
def CheckDuplicateEntries(self):
try:
f = open(self.filename,'r')
data = f.read()
f.close()
except IOError:
data =""
final_data =''
for a in data.splitlines():
if not a or a not in final_data:
final_data = final_data +'\n' +a
import tempfile
fd, self.filename = tempfile.mkstemp()
f = open(self.filename,'w')
f.write(final_data)
f.close()
self.read()
os.unlink(self.filename)
class SysConfig(ConfigBase):
def __init__(self):
ConfigBase.__init__(self, '/etc/hp/hplip.conf')
class State(ConfigBase):
def __init__(self):
if not os.path.exists('/var/lib/hp/') and os.geteuid() == 0:
os.makedirs('/var/lib/hp/')
cmd = 'chmod 755 /var/lib/hp/'
os_utils.execute(cmd)
ConfigBase.__init__(self, '/var/lib/hp/hplip.state')
class UserConfig(ConfigBase):
def __init__(self):
sts, prop.user_dir = os_utils.getHPLIPDir()
if not os.geteuid() == 0:
prop.user_config_file = os.path.join(prop.user_dir, 'hplip.conf')
if not os.path.exists(prop.user_config_file):
try:
open(prop.user_config_file, 'w').close()
s = os.stat(os.path.dirname(prop.user_config_file))
os.chown(prop.user_config_file, s[stat.ST_UID], s[stat.ST_GID])
except IOError:
pass
ConfigBase.__init__(self, prop.user_config_file)
else:
prop.user_config_file = None
ConfigBase.__init__(self, None)
def workingDirectory(self):
t = self.get('last_used', 'working_dir', os.path.expanduser("~"))
try:
t = t.decode('utf-8')
except UnicodeError:
log.error("Invalid unicode: %s" % t)
log.debug("working directory: %s" % t)
return t
def setWorkingDirectory(self, t):
self.set('last_used', 'working_dir', t.encode('utf-8'))
log.debug("working directory: %s" % t.encode('utf-8'))
os.umask(0o037)
sys_conf = SysConfig()
sys_state = State()
user_conf = UserConfig()
try:
prop.locale, prop.encoding = locale.getdefaultlocale()
except ValueError:
prop.locale = 'en_US'
prop.encoding = 'UTF8'
prop.version = sys_conf.get('hplip', 'version', '0.0.0')
_p, _x = re.compile(r'(\d\w*)', re.I), []
for _y in prop.version.split('.')[:3]:
_z = _p.match(_y)
if _z is not None:
_x.append(_z.group(1))
prop.installed_version = '.'.join(_x)
try:
prop.installed_version_int = int(''.join(['%02x' % int(_y) for _y in _x]), 16)
except ValueError:
prop.installed_version_int = 0
prop.home_dir = sys_conf.get('dirs', 'home', os.path.realpath(os.path.normpath(os.getcwd())))
prop.username = pwd.getpwuid(os.getuid())[0]
pdb = pwd.getpwnam(prop.username)
prop.userhome = pdb[5]
prop.history_size = 50
prop.data_dir = os.path.join(prop.home_dir, 'data')
prop.image_dir = os.path.join(prop.home_dir, 'data', 'images')
prop.xml_dir = os.path.join(prop.home_dir, 'data', 'xml')
prop.models_dir = os.path.join(prop.home_dir, 'data', 'models')
prop.localization_dir = os.path.join(prop.home_dir, 'data', 'localization')
prop.max_message_len = 8192
prop.max_message_read = 65536
prop.read_timeout = 90
prop.ppd_search_path = '/usr/share;/usr/local/share;/usr/lib;/usr/local/lib;/usr/libexec;/opt;/usr/lib64'
prop.ppd_search_pattern = 'HP-*.ppd.*'
prop.ppd_download_url = 'http://www.linuxprinting.org/ppd-o-matic.cgi'
prop.ppd_file_suffix = '-hpijs.ppd'
prop.gui_build = to_bool(sys_conf.get('configure', 'gui-build', '0'))
prop.net_build = to_bool(sys_conf.get('configure', 'network-build', '0'))
prop.par_build = to_bool(sys_conf.get('configure', 'pp-build', '0'))
prop.usb_build = True
prop.scan_build = to_bool(sys_conf.get('configure', 'scanner-build', '0'))
prop.fax_build = to_bool(sys_conf.get('configure', 'fax-build', '0'))
prop.doc_build = to_bool(sys_conf.get('configure', 'doc-build', '0'))
prop.foomatic_xml_install = to_bool(sys_conf.get('configure', 'foomatic-xml-install', '0'))
prop.foomatic_ppd_install = to_bool(sys_conf.get('configure', 'foomatic-ppd-install', '0'))
prop.hpcups_build = to_bool(sys_conf.get('configure', 'hpcups-install', '0'))
prop.hpijs_build = to_bool(sys_conf.get('configure', 'hpijs-install', '0'))
spinner = r"\|/-\|/-"
spinpos = 0
enable_spinner = True
def change_spinner_state(enable =True):
global enable_spinner
enable_spinner = enable
def update_spinner():
global spinner, spinpos, enable_spinner
if enable_spinner and not log.is_debug() and sys.stdout.isatty():
sys.stdout.write("\b" + spinner[spinpos])
spinpos=(spinpos + 1) % 8
sys.stdout.flush()
def cleanup_spinner():
global enable_spinner
if enable_spinner and not log.is_debug() and sys.stdout.isatty():
sys.stdout.write("\b \b")
sys.stdout.flush()
def xint(ver):
    l = []
    try:
        l = [int(x) for x in ver.split('.')]
    except Exception:
        pass
    return l
def check_extension_module_env(ext_mod):
flag = 0
ext_mod_so = ext_mod + '.so'
python_ver = xint((sys.version).split(' ')[0])
if python_ver[0] == 3 :
python_ver = 3
else :
python_ver = 2
for dirpath, dirname, filenames in os.walk('/usr/lib/'):
if ext_mod_so in filenames:
ext_path = dirpath
flag = 1
if flag == 0:
log.error('%s not present in the system. Please re-install HPLIP.' %ext_mod)
sys.exit(1)
    m = re.search(r'python(\d(\.\d){0,2})', ext_path)
ext_ver = xint(m.group(1))
if ext_ver[0] == 3:
ver = 3
else:
ver = 2
if python_ver != ver :
log.error("%s Extension module is missing from Python's path." %ext_mod)
log.info("To fix this issue, please refer to this 'http://hplipopensource.com/node/372'")
sys.exit(1)
# Internal/messaging errors
ERROR_STRINGS = {
ERROR_SUCCESS : 'No error',
ERROR_UNKNOWN_ERROR : 'Unknown error',
ERROR_DEVICE_NOT_FOUND : 'Device not found',
ERROR_INVALID_DEVICE_ID : 'Unknown/invalid device-id field',
ERROR_INVALID_DEVICE_URI : 'Unknown/invalid device-uri field',
ERROR_DATA_LENGTH_EXCEEDS_MAX : 'Data length exceeds maximum',
ERROR_DEVICE_IO_ERROR : 'Device I/O error',
ERROR_NO_PROBED_DEVICES_FOUND : 'No probed devices found',
ERROR_DEVICE_BUSY : 'Device busy',
ERROR_DEVICE_STATUS_NOT_AVAILABLE : 'DeviceStatus not available',
ERROR_INVALID_SERVICE_NAME : 'Invalid service name',
ERROR_ERROR_INVALID_CHANNEL_ID : 'Invalid channel-id (service name)',
ERROR_CHANNEL_BUSY : 'Channel busy',
ERROR_DEVICE_DOES_NOT_SUPPORT_OPERATION : 'Device does not support operation',
ERROR_DEVICEOPEN_FAILED : 'Device open failed',
ERROR_INVALID_DEVNODE : 'Invalid device node',
ERROR_INVALID_HOSTNAME : "Invalid hostname ip address",
ERROR_INVALID_PORT_NUMBER : "Invalid JetDirect port number",
ERROR_NO_CUPS_QUEUE_FOUND_FOR_DEVICE : "No CUPS queue found for device.",
ERROR_DATFILE_ERROR: "DAT file error",
ERROR_INVALID_TIMEOUT: "Invalid timeout",
ERROR_IO_TIMEOUT: "I/O timeout",
ERROR_FAX_INCOMPATIBLE_OPTIONS: "Incompatible fax options",
ERROR_FAX_INVALID_FAX_FILE: "Invalid fax file",
ERROR_FAX_FILE_NOT_FOUND: "Fax file not found",
ERROR_INTERNAL : 'Unknown internal error',
}
class Error(Exception):
def __init__(self, opt=ERROR_INTERNAL):
self.opt = opt
self.msg = ERROR_STRINGS.get(opt, ERROR_STRINGS[ERROR_INTERNAL])
log.debug("Exception: %d (%s)" % (opt, self.msg))
Exception.__init__(self, self.msg, opt)
# Make sure True and False are avail. in pre-2.2 versions
#try:
# True
#except NameError:
# True = (1==1)
# False = not True
# as new translations are completed, add them here
supported_locales = { 'en_US': ('us', 'en', 'en_us', 'american', 'america', 'usa', 'english'),}
# Localization support was disabled in 3.9.2
#'zh_CN': ('zh', 'cn', 'zh_cn' , 'china', 'chinese', 'prc'),
#'de_DE': ('de', 'de_de', 'german', 'deutsche'),
#'fr_FR': ('fr', 'fr_fr', 'france', 'french', 'français'),
#'it_IT': ('it', 'it_it', 'italy', 'italian', 'italiano'),
#'ru_RU': ('ru', 'ru_ru', 'russian'),
#'pt_BR': ('pt', 'br', 'pt_br', 'brazil', 'brazilian', 'portuguese', 'brasil', 'portuguesa'),
#'es_MX': ('es', 'mx', 'es_mx', 'mexico', 'spain', 'spanish', 'espanol', 'español'),
#}
| true | true |
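`g.py`'s system-wide `prop` object is a `Properties` instance: a plain `dict` with attribute-style access that reads missing keys as an empty string. A self-contained copy shows the behavior:

```python
class Properties(dict):
    """dict with attribute access; missing attributes read as ''."""

    def __getattr__(self, attr):
        # only called when normal attribute lookup fails
        if attr in list(self.keys()):
            return self.__getitem__(attr)
        else:
            return ""

    def __setattr__(self, attr, val):
        self.__setitem__(attr, val)


prop = Properties()
prop.version = '3.20.3'
print(prop.version, repr(prop.unset))  # 3.20.3 ''
```

The empty-string fallback keeps call sites free of `KeyError` handling, at the cost that a typo like `prop.verison` silently evaluates to `""` instead of failing fast.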
f73e25089adcc007af52ac2d6b44e0e28f19a605 | 4,120 | py | Python | [Kaleido-subs]/Completed/Joshiraku [BD]/JoshirakuBD_NCED2.py | LightArrowsEXE/Encoding-Projects | 4ea96a5b25a7710f615ada5ff25949c496492b53 | [
"MIT"
] | 57 | 2019-01-31T17:32:46.000Z | 2022-03-23T05:46:51.000Z | [Kaleido-subs]/Completed/Joshiraku [BD]/JoshirakuBD_NCED2.py | LightArrowsEXE/Encoding-Projects | 4ea96a5b25a7710f615ada5ff25949c496492b53 | [
"MIT"
] | null | null | null | [Kaleido-subs]/Completed/Joshiraku [BD]/JoshirakuBD_NCED2.py | LightArrowsEXE/Encoding-Projects | 4ea96a5b25a7710f615ada5ff25949c496492b53 | [
"MIT"
] | 12 | 2019-04-30T06:16:13.000Z | 2022-03-14T16:15:07.000Z | from typing import Tuple, Union
import vapoursynth as vs
from lvsfunc.misc import source
from vardautomation import FileInfo, PresetAAC, PresetBD, VPath
from project_module import encoder as enc, flt
core = vs.core
core.num_threads = 4
# Sources
JP_NCED = FileInfo(r'BDMV/120926_JOSHIRAKU_VOL1/BDMV/STREAM/00002.m2ts', (24, -24),
idx=lambda x: source(x, cachedir=''))
JP_BD_13 = FileInfo(r'BDMV/130522_JOSHIRAKU_VOL6/BDMV/STREAM/00000.m2ts', (101493, -29),
idx=lambda x: source(x, cachedir=''),
preset=[PresetBD, PresetAAC])
JP_BD_13.name_file_final = VPath(fr"premux/{JP_NCED.name} (Premux).mkv")
def filterchain() -> Union[vs.VideoNode, Tuple[vs.VideoNode, ...]]:
"""Main filterchain"""
import havsfunc as haf
import lvsfunc as lvf
import rekt
import vardefunc as vdf
from adptvgrnMod import adptvgrnMod
from awsmfunc import bbmod
from ccd import ccd
from vsutil import depth, get_y
from xvs import WarpFixChromaBlend
src = JP_NCED.clip_cut
src_13 = JP_BD_13.clip_cut
src = lvf.rfs(src, src_13, [(2073, None)])
# Edgefixing
rkt = rekt.rektlvls(
src,
[0, 1079], [17, 16],
[0, 1, 2, 3] + [1917, 1918, 1919], [16, 4, -2, 2] + [-2, 5, 14]
)
ef = bbmod(rkt, left=4, right=3, y=False)
ef = depth(ef, 32)
# Descaling + Rescaling
src_y = get_y(ef)
descaled = lvf.kernels.Bicubic().descale(src_y, 1280, 720)
rescaled = vdf.scale.nnedi3_upscale(descaled)
downscaled = lvf.kernels.Bicubic(-1/2, 1/4).scale(rescaled, 1920, 1080)
l_mask = vdf.mask.FDOG().get_mask(src_y, lthr=0.065, hthr=0.065).std.Maximum().std.Minimum()
l_mask = l_mask.std.Median().std.Convolution([1] * 9)
rescaled_masked = core.std.MaskedMerge(src_y, downscaled, l_mask)
scaled = depth(vdf.misc.merge_chroma(rescaled_masked, ef), 16)
unwarp = flt.line_darkening(scaled, 0.145).warp.AWarpSharp2(depth=2)
sharp = haf.LSFmod(unwarp, strength=65, Smode=3, Lmode=1, edgemode=1, edgemaskHQ=True)
mask_sharp = core.std.MaskedMerge(scaled, sharp, depth(l_mask, 16))
upscaled = lvf.kernels.Bicubic().scale(descaled, 1920, 1080)
descale_mask = lvf.scale.descale_detail_mask(src_y, upscaled)
details_merged = core.std.MaskedMerge(mask_sharp, depth(ef, 16), depth(descale_mask, 16))
# Denoising
denoise_y = core.knlm.KNLMeansCL(details_merged, d=1, a=3, s=4, h=0.15, channels='Y')
denoise_uv = ccd(denoise_y, threshold=6, matrix='709')
stab = haf.GSMC(denoise_uv, radius=2, adapt=1, planes=[0])
decs = vdf.noise.decsiz(stab, sigmaS=8, min_in=208 << 8, max_in=232 << 8)
# Fixing chroma
cshift = haf.FixChromaBleedingMod(decs, cx=-.25, cy=0, thr=100, strength=1, blur=True)
cwarp = WarpFixChromaBlend(cshift, thresh=88, blur=3, depth=6)
# Regular debanding + graining
detail_mask = flt.detail_mask(cwarp, brz=(1800, 3500))
deband = vdf.deband.dumb3kdb(cwarp, threshold=32, grain=16)
deband_masked = core.std.MaskedMerge(deband, cwarp, detail_mask)
grain: vs.VideoNode = adptvgrnMod(deband_masked, 0.2, luma_scaling=10, size=1.35, static=True, grain_chroma=False)
return grain
if __name__ == '__main__':
FILTERED = filterchain()
enc.Patcher(JP_BD_13, FILTERED).patch( # type: ignore
ranges=[(1794, 2157)],
external_file=f"premux/{JP_NCED.name[:-1]}1 (Premux).mkv",
clean_up=True)
elif __name__ == '__vapoursynth__':
FILTERED = filterchain()
if not isinstance(FILTERED, vs.VideoNode):
raise ImportError(
f"Input clip has multiple output nodes ({len(FILTERED)})! Please output just 1 clip"
)
else:
enc.dither_down(FILTERED).set_output(0)
else:
JP_NCED.clip_cut.std.SetFrameProp('node', intval=0).set_output(0)
FILTERED = filterchain()
if not isinstance(FILTERED, vs.VideoNode):
for i, clip_filtered in enumerate(FILTERED, start=1):
clip_filtered.std.SetFrameProp('node', intval=i).set_output(i)
else:
FILTERED.std.SetFrameProp('node', intval=1).set_output(1)
| 37.798165 | 118 | 0.673301 | from typing import Tuple, Union
import vapoursynth as vs
from lvsfunc.misc import source
from vardautomation import FileInfo, PresetAAC, PresetBD, VPath
from project_module import encoder as enc, flt
core = vs.core
core.num_threads = 4
JP_NCED = FileInfo(r'BDMV/120926_JOSHIRAKU_VOL1/BDMV/STREAM/00002.m2ts', (24, -24),
idx=lambda x: source(x, cachedir=''))
JP_BD_13 = FileInfo(r'BDMV/130522_JOSHIRAKU_VOL6/BDMV/STREAM/00000.m2ts', (101493, -29),
idx=lambda x: source(x, cachedir=''),
preset=[PresetBD, PresetAAC])
JP_BD_13.name_file_final = VPath(fr"premux/{JP_NCED.name} (Premux).mkv")
def filterchain() -> Union[vs.VideoNode, Tuple[vs.VideoNode, ...]]:
import havsfunc as haf
import lvsfunc as lvf
import rekt
import vardefunc as vdf
from adptvgrnMod import adptvgrnMod
from awsmfunc import bbmod
from ccd import ccd
from vsutil import depth, get_y
from xvs import WarpFixChromaBlend
src = JP_NCED.clip_cut
src_13 = JP_BD_13.clip_cut
src = lvf.rfs(src, src_13, [(2073, None)])
rkt = rekt.rektlvls(
src,
[0, 1079], [17, 16],
[0, 1, 2, 3] + [1917, 1918, 1919], [16, 4, -2, 2] + [-2, 5, 14]
)
ef = bbmod(rkt, left=4, right=3, y=False)
ef = depth(ef, 32)
src_y = get_y(ef)
descaled = lvf.kernels.Bicubic().descale(src_y, 1280, 720)
rescaled = vdf.scale.nnedi3_upscale(descaled)
downscaled = lvf.kernels.Bicubic(-1/2, 1/4).scale(rescaled, 1920, 1080)
l_mask = vdf.mask.FDOG().get_mask(src_y, lthr=0.065, hthr=0.065).std.Maximum().std.Minimum()
l_mask = l_mask.std.Median().std.Convolution([1] * 9)
rescaled_masked = core.std.MaskedMerge(src_y, downscaled, l_mask)
scaled = depth(vdf.misc.merge_chroma(rescaled_masked, ef), 16)
unwarp = flt.line_darkening(scaled, 0.145).warp.AWarpSharp2(depth=2)
sharp = haf.LSFmod(unwarp, strength=65, Smode=3, Lmode=1, edgemode=1, edgemaskHQ=True)
mask_sharp = core.std.MaskedMerge(scaled, sharp, depth(l_mask, 16))
upscaled = lvf.kernels.Bicubic().scale(descaled, 1920, 1080)
descale_mask = lvf.scale.descale_detail_mask(src_y, upscaled)
details_merged = core.std.MaskedMerge(mask_sharp, depth(ef, 16), depth(descale_mask, 16))
denoise_y = core.knlm.KNLMeansCL(details_merged, d=1, a=3, s=4, h=0.15, channels='Y')
denoise_uv = ccd(denoise_y, threshold=6, matrix='709')
stab = haf.GSMC(denoise_uv, radius=2, adapt=1, planes=[0])
decs = vdf.noise.decsiz(stab, sigmaS=8, min_in=208 << 8, max_in=232 << 8)
cshift = haf.FixChromaBleedingMod(decs, cx=-.25, cy=0, thr=100, strength=1, blur=True)
cwarp = WarpFixChromaBlend(cshift, thresh=88, blur=3, depth=6)
detail_mask = flt.detail_mask(cwarp, brz=(1800, 3500))
deband = vdf.deband.dumb3kdb(cwarp, threshold=32, grain=16)
deband_masked = core.std.MaskedMerge(deband, cwarp, detail_mask)
grain: vs.VideoNode = adptvgrnMod(deband_masked, 0.2, luma_scaling=10, size=1.35, static=True, grain_chroma=False)
return grain
if __name__ == '__main__':
FILTERED = filterchain()
enc.Patcher(JP_BD_13, FILTERED).patch(
ranges=[(1794, 2157)],
external_file=f"premux/{JP_NCED.name[:-1]}1 (Premux).mkv",
clean_up=True)
elif __name__ == '__vapoursynth__':
FILTERED = filterchain()
if not isinstance(FILTERED, vs.VideoNode):
raise ImportError(
f"Input clip has multiple output nodes ({len(FILTERED)})! Please output just 1 clip"
)
else:
enc.dither_down(FILTERED).set_output(0)
else:
JP_NCED.clip_cut.std.SetFrameProp('node', intval=0).set_output(0)
FILTERED = filterchain()
if not isinstance(FILTERED, vs.VideoNode):
for i, clip_filtered in enumerate(FILTERED, start=1):
clip_filtered.std.SetFrameProp('node', intval=i).set_output(i)
else:
FILTERED.std.SetFrameProp('node', intval=1).set_output(1)
| true | true |
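The script above splices the two sources with `lvf.rfs(src, src_13, [(2073, None)])`, which replaces frames over inclusive ranges, with `None` meaning "through the last frame". A toy sketch of that range-replacement logic over plain lists (`replace_ranges` is a simplified stand-in, not the lvsfunc implementation):

```python
def replace_ranges(a, b, ranges):
    """Replace items of `a` with the corresponding items of `b`
    over inclusive (start, end) ranges; end=None means 'to the end'."""
    out = list(a)
    for start, end in ranges:
        stop = len(a) - 1 if end is None else end
        for i in range(start, stop + 1):
            out[i] = b[i]
    return out

base = ['src'] * 10
alt = ['alt'] * 10
# Shape mirrors the script's lvf.rfs(src, src_13, [(2073, None)]):
spliced = replace_ranges(base, alt, [(7, None)])
print(spliced.count('alt'))  # 3
```

The same open-ended range shape is what lets the script swap in `src_13` from frame 2073 to the end of the clip.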
f73e255f02cde90e00dbf1c2e3f6da94c3f3a1b3 | 623 | py | Python | smarthome/smarthomeproj/server/migrations/0007_auto_20210121_2155.py | nunocaseiro/smarthome-server-django | 711db6ff360061d861d9985264f753e0f7846327 | ["Apache-2.0"] | null | null | null | smarthome/smarthomeproj/server/migrations/0007_auto_20210121_2155.py | nunocaseiro/smarthome-server-django | 711db6ff360061d861d9985264f753e0f7846327 | ["Apache-2.0"] | null | null | null | smarthome/smarthomeproj/server/migrations/0007_auto_20210121_2155.py | nunocaseiro/smarthome-server-django | 711db6ff360061d861d9985264f753e0f7846327 | ["Apache-2.0"] | null | null | null | # Generated by Django 3.1.3 on 2021-01-21 21:55
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('server', '0006_auto_20210120_2320'),
]
operations = [
migrations.AddField(
model_name='sensor',
name='lux_max',
field=models.DecimalField(blank=True, decimal_places=1, max_digits=3, null=True),
),
migrations.AddField(
model_name='sensor',
name='temp_max',
field=models.DecimalField(blank=True, decimal_places=1, max_digits=3, null=True),
),
]
| 25.958333 | 93 | 0.601926 |
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('server', '0006_auto_20210120_2320'),
]
operations = [
migrations.AddField(
model_name='sensor',
name='lux_max',
field=models.DecimalField(blank=True, decimal_places=1, max_digits=3, null=True),
),
migrations.AddField(
model_name='sensor',
name='temp_max',
field=models.DecimalField(blank=True, decimal_places=1, max_digits=3, null=True),
),
]
| true | true |
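The migration above adds two nullable `DecimalField`s with `max_digits=3, decimal_places=1`, i.e. values such as -99.9 through 99.9. A rough sketch of that constraint using the standard `decimal` module; `fits` is a hypothetical helper that approximates, not reproduces, Django's `DecimalValidator`:

```python
from decimal import Decimal

def fits(value, max_digits, decimal_places):
    """Hypothetical check approximating Django's DecimalField limits:
    total significant digits <= max_digits and fractional digits
    <= decimal_places. (Django additionally caps whole digits at
    max_digits - decimal_places, which this sketch skips.)"""
    tup = Decimal(value).as_tuple()
    return -tup.exponent <= decimal_places and len(tup.digits) <= max_digits

assert fits("99.9", 3, 1)        # the widest value the new columns accept
assert not fits("100.0", 3, 1)   # four significant digits
assert not fits("9.95", 3, 1)    # two decimal places
```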
f73e261a052d28b5820e5eaf0cc2a5369ce07257 | 2,999 | py | Python | lib/prior/priorNet.py | ziyedy/category-priornet | 5aa080eeff936ce3939f0d5458a2936677c15726 | ["MIT"] | null | null | null | lib/prior/priorNet.py | ziyedy/category-priornet | 5aa080eeff936ce3939f0d5458a2936677c15726 | ["MIT"] | null | null | null | lib/prior/priorNet.py | ziyedy/category-priornet | 5aa080eeff936ce3939f0d5458a2936677c15726 | ["MIT"] | null | null | null | import sys
sys.path.append("../../")
import lib.gcn3d as gcn3d
import torch
import torch.nn as nn
import torch.nn.functional as F
class PriorEncoder(nn.Module):
def __init__(self, support_num: int, neighbor_num: int):
super(PriorEncoder, self).__init__()
self.neighbor_num = neighbor_num
self.conv_0 = gcn3d.Conv_surface(kernel_num=32, support_num=support_num)
self.conv_1 = gcn3d.Conv_layer(32, 64, support_num=support_num)
self.pool_1 = gcn3d.Pool_layer(pooling_rate=4, neighbor_num=4)
self.conv_2 = gcn3d.Conv_layer(64, 128, support_num=support_num)
self.conv_3 = gcn3d.Conv_layer(128, 256, support_num=support_num)
self.pool_2 = gcn3d.Pool_layer(pooling_rate=4, neighbor_num=4)
self.conv_4 = gcn3d.Conv_layer(256, 512, support_num=support_num)
self.pool_3 = gcn3d.Pool_layer(pooling_rate=4, neighbor_num=4)
def forward(self, vertices: "(bs, vertice_num, 3)"):
bs, vertice_num, _ = vertices.size()
neighbor_index = gcn3d.get_neighbor_index(vertices, self.neighbor_num)
fm_0 = self.conv_0(neighbor_index, vertices)
fm_0 = F.relu(fm_0, inplace=True)
fm_1 = self.conv_1(neighbor_index, vertices, fm_0)
fm_1 = F.relu(fm_1, inplace=True)
vertices, fm_1 = self.pool_1(vertices, fm_1)
neighbor_index = gcn3d.get_neighbor_index(vertices, self.neighbor_num)
fm_2 = self.conv_2(neighbor_index, vertices, fm_1)
fm_2 = F.relu(fm_2, inplace=True)
fm_3 = self.conv_3(neighbor_index, vertices, fm_2)
fm_3 = F.relu(fm_3, inplace=True)
vertices, fm_3 = self.pool_2(vertices, fm_3)
neighbor_index = gcn3d.get_neighbor_index(vertices, self.neighbor_num)
fm_4 = self.conv_4(neighbor_index, vertices, fm_3)
feature_global = fm_4.max(1)[0]
# fm_4 = F.relu(fm_4, inplace=True)
# vertices, fm_4 = self.pool_3(vertices, fm_4)
return feature_global
class PriorDecoder(nn.Module):
def __init__(self, emb_dim, n_pts):
super(PriorDecoder, self).__init__()
self.fc1 = nn.Linear(emb_dim, 512)
self.fc2 = nn.Linear(512, 1024)
self.fc3 = nn.Linear(1024, 3 * n_pts)
def forward(self, embedding):
"""
Args:
embedding: (B, 512)
"""
bs = embedding.size()[0]
out1 = F.relu(self.fc1(embedding))
out2 = F.relu(self.fc2(out1))
out3 = self.fc3(out2)
out_pc = out3.view(bs, -1, 3)
return out_pc
class PriorNet(nn.Module):
def __init__(self, emb_dim=512, n_pts=1024):
super(PriorNet, self).__init__()
self.encoder = PriorEncoder(1, 20)
self.decoder = PriorDecoder(emb_dim, n_pts)
def forward(self, in_pc):
emb = self.encoder(in_pc)
out_pc = self.decoder(emb)
return emb, out_pc
if __name__ == '__main__':
estimator = PriorEncoder(1, 1)
xyz = torch.randn(32, 2048, 3)
gg = estimator(xyz) | 33.322222 | 80 | 0.646882 | import sys
sys.path.append("../../")
import lib.gcn3d as gcn3d
import torch
import torch.nn as nn
import torch.nn.functional as F
class PriorEncoder(nn.Module):
def __init__(self, support_num: int, neighbor_num: int):
super(PriorEncoder, self).__init__()
self.neighbor_num = neighbor_num
self.conv_0 = gcn3d.Conv_surface(kernel_num=32, support_num=support_num)
self.conv_1 = gcn3d.Conv_layer(32, 64, support_num=support_num)
self.pool_1 = gcn3d.Pool_layer(pooling_rate=4, neighbor_num=4)
self.conv_2 = gcn3d.Conv_layer(64, 128, support_num=support_num)
self.conv_3 = gcn3d.Conv_layer(128, 256, support_num=support_num)
self.pool_2 = gcn3d.Pool_layer(pooling_rate=4, neighbor_num=4)
self.conv_4 = gcn3d.Conv_layer(256, 512, support_num=support_num)
self.pool_3 = gcn3d.Pool_layer(pooling_rate=4, neighbor_num=4)
def forward(self, vertices: "(bs, vertice_num, 3)"):
bs, vertice_num, _ = vertices.size()
neighbor_index = gcn3d.get_neighbor_index(vertices, self.neighbor_num)
fm_0 = self.conv_0(neighbor_index, vertices)
fm_0 = F.relu(fm_0, inplace=True)
fm_1 = self.conv_1(neighbor_index, vertices, fm_0)
fm_1 = F.relu(fm_1, inplace=True)
vertices, fm_1 = self.pool_1(vertices, fm_1)
neighbor_index = gcn3d.get_neighbor_index(vertices, self.neighbor_num)
fm_2 = self.conv_2(neighbor_index, vertices, fm_1)
fm_2 = F.relu(fm_2, inplace=True)
fm_3 = self.conv_3(neighbor_index, vertices, fm_2)
fm_3 = F.relu(fm_3, inplace=True)
vertices, fm_3 = self.pool_2(vertices, fm_3)
neighbor_index = gcn3d.get_neighbor_index(vertices, self.neighbor_num)
fm_4 = self.conv_4(neighbor_index, vertices, fm_3)
feature_global = fm_4.max(1)[0]
return feature_global
class PriorDecoder(nn.Module):
def __init__(self, emb_dim, n_pts):
super(PriorDecoder, self).__init__()
self.fc1 = nn.Linear(emb_dim, 512)
self.fc2 = nn.Linear(512, 1024)
self.fc3 = nn.Linear(1024, 3 * n_pts)
def forward(self, embedding):
bs = embedding.size()[0]
out1 = F.relu(self.fc1(embedding))
out2 = F.relu(self.fc2(out1))
out3 = self.fc3(out2)
out_pc = out3.view(bs, -1, 3)
return out_pc
class PriorNet(nn.Module):
def __init__(self, emb_dim=512, n_pts=1024):
super(PriorNet, self).__init__()
self.encoder = PriorEncoder(1, 20)
self.decoder = PriorDecoder(emb_dim, n_pts)
def forward(self, in_pc):
emb = self.encoder(in_pc)
out_pc = self.decoder(emb)
return emb, out_pc
if __name__ == '__main__':
estimator = PriorEncoder(1, 1)
xyz = torch.randn(32, 2048, 3)
gg = estimator(xyz) | true | true |
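`PriorEncoder` above repeatedly calls `gcn3d.get_neighbor_index(vertices, k)` to build a k-nearest-neighbour graph over the point cloud. A NumPy sketch of that lookup for a single (unbatched) cloud; the real gcn3d version operates on batched PyTorch tensors:

```python
import numpy as np

def get_neighbor_index(vertices, k):
    """Return (n, k) indices of each point's k nearest neighbours
    (self excluded), ranked by squared Euclidean distance."""
    diff = vertices[:, None, :] - vertices[None, :, :]   # (n, n, 3) pairwise offsets
    dist = np.einsum('ijk,ijk->ij', diff, diff)          # squared distances
    np.fill_diagonal(dist, np.inf)                       # never pick the point itself
    return np.argsort(dist, axis=1, kind='stable')[:, :k]

pts = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [5.0, 5.0, 5.0]])
idx = get_neighbor_index(pts, 2)
print(idx[0])  # [1 2] -- the two points nearest the origin
```

Each graph-convolution layer then gathers features along these indices, which is why the encoder recomputes them after every pooling step.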
f73e263128110e3f80e7abc4cd94cb03038c4f6a | 6,386 | py | Python | tests/test_windows_utils.py | haypo/trollius | 2df073c33c7fe8ab0ac60cad4826de730a411a8c | [
"Apache-2.0"
] | 175 | 2015-07-09T00:18:14.000Z | 2017-11-06T17:40:42.000Z | tests/test_windows_utils.py | haypo/trollius | 2df073c33c7fe8ab0ac60cad4826de730a411a8c | [
"Apache-2.0"
] | 9 | 2015-07-17T16:44:13.000Z | 2016-09-14T17:47:13.000Z | tests/test_windows_utils.py | roverdotcom/trollius | fa13a1c0182840b72faa17078eb78c8b5e5f45c4 | [
"Apache-2.0"
] | 21 | 2015-07-14T14:10:15.000Z | 2017-04-05T07:03:48.000Z | """Tests for window_utils"""
import socket
import sys
import warnings
from trollius.test_utils import unittest
if sys.platform != 'win32':
raise unittest.SkipTest('Windows only')
from trollius import _overlapped
from trollius import py33_winapi as _winapi
from trollius import test_support as support
from trollius import test_utils
from trollius import windows_utils
from trollius.test_utils import mock
class WinsocketpairTests(unittest.TestCase):
def check_winsocketpair(self, ssock, csock):
csock.send(b'xxx')
self.assertEqual(b'xxx', ssock.recv(1024))
csock.close()
ssock.close()
def test_winsocketpair(self):
ssock, csock = windows_utils.socketpair()
self.check_winsocketpair(ssock, csock)
@unittest.skipUnless(support.IPV6_ENABLED,
'IPv6 not supported or enabled')
def test_winsocketpair_ipv6(self):
ssock, csock = windows_utils.socketpair(family=socket.AF_INET6)
self.check_winsocketpair(ssock, csock)
@unittest.skipIf(hasattr(socket, 'socketpair'),
'socket.socketpair is available')
@mock.patch('trollius.windows_utils.socket')
def test_winsocketpair_exc(self, m_socket):
m_socket.AF_INET = socket.AF_INET
m_socket.SOCK_STREAM = socket.SOCK_STREAM
m_socket.socket.return_value.getsockname.return_value = ('', 12345)
m_socket.socket.return_value.accept.return_value = object(), object()
m_socket.socket.return_value.connect.side_effect = OSError()
self.assertRaises(OSError, windows_utils.socketpair)
def test_winsocketpair_invalid_args(self):
self.assertRaises(ValueError,
windows_utils.socketpair, family=socket.AF_UNSPEC)
self.assertRaises(ValueError,
windows_utils.socketpair, type=socket.SOCK_DGRAM)
self.assertRaises(ValueError,
windows_utils.socketpair, proto=1)
@unittest.skipIf(hasattr(socket, 'socketpair'),
'socket.socketpair is available')
@mock.patch('trollius.windows_utils.socket')
def test_winsocketpair_close(self, m_socket):
m_socket.AF_INET = socket.AF_INET
m_socket.SOCK_STREAM = socket.SOCK_STREAM
sock = mock.Mock()
m_socket.socket.return_value = sock
sock.bind.side_effect = OSError
self.assertRaises(OSError, windows_utils.socketpair)
self.assertTrue(sock.close.called)
class PipeTests(unittest.TestCase):
def test_pipe_overlapped(self):
h1, h2 = windows_utils.pipe(overlapped=(True, True))
try:
ov1 = _overlapped.Overlapped()
self.assertFalse(ov1.pending)
self.assertEqual(ov1.error, 0)
ov1.ReadFile(h1, 100)
self.assertTrue(ov1.pending)
self.assertEqual(ov1.error, _winapi.ERROR_IO_PENDING)
ERROR_IO_INCOMPLETE = 996
try:
ov1.getresult()
except WindowsError as e:
self.assertEqual(e.winerror, ERROR_IO_INCOMPLETE)
else:
raise RuntimeError('expected ERROR_IO_INCOMPLETE')
ov2 = _overlapped.Overlapped()
self.assertFalse(ov2.pending)
self.assertEqual(ov2.error, 0)
ov2.WriteFile(h2, b"hello")
self.assertIn(ov2.error, set((0, _winapi.ERROR_IO_PENDING)))
res = _winapi.WaitForSingleObject(ov2.event, 100)
self.assertEqual(res, _winapi.WAIT_OBJECT_0)
self.assertFalse(ov1.pending)
self.assertEqual(ov1.error, ERROR_IO_INCOMPLETE)
self.assertFalse(ov2.pending)
self.assertIn(ov2.error, set((0, _winapi.ERROR_IO_PENDING)))
self.assertEqual(ov1.getresult(), b"hello")
finally:
_winapi.CloseHandle(h1)
_winapi.CloseHandle(h2)
def test_pipe_handle(self):
h, _ = windows_utils.pipe(overlapped=(True, True))
_winapi.CloseHandle(_)
p = windows_utils.PipeHandle(h)
self.assertEqual(p.fileno(), h)
self.assertEqual(p.handle, h)
# check garbage collection of p closes handle
with warnings.catch_warnings():
if sys.version_info >= (3, 4):
warnings.filterwarnings("ignore", "", ResourceWarning)
del p
support.gc_collect()
try:
_winapi.CloseHandle(h)
except OSError as e:
self.assertEqual(e.winerror, 6) # ERROR_INVALID_HANDLE
else:
raise RuntimeError('expected ERROR_INVALID_HANDLE')
class PopenTests(unittest.TestCase):
def test_popen(self):
command = r"""if 1:
import sys
s = sys.stdin.readline()
sys.stdout.write(s.upper())
sys.stderr.write('stderr')
"""
msg = b"blah\n"
p = windows_utils.Popen([sys.executable, '-c', command],
stdin=windows_utils.PIPE,
stdout=windows_utils.PIPE,
stderr=windows_utils.PIPE)
for f in [p.stdin, p.stdout, p.stderr]:
self.assertIsInstance(f, windows_utils.PipeHandle)
ovin = _overlapped.Overlapped()
ovout = _overlapped.Overlapped()
overr = _overlapped.Overlapped()
ovin.WriteFile(p.stdin.handle, msg)
ovout.ReadFile(p.stdout.handle, 100)
overr.ReadFile(p.stderr.handle, 100)
events = [ovin.event, ovout.event, overr.event]
# Super-long timeout for slow buildbots.
res = _winapi.WaitForMultipleObjects(events, True, 10000)
self.assertEqual(res, _winapi.WAIT_OBJECT_0)
self.assertFalse(ovout.pending)
self.assertFalse(overr.pending)
self.assertFalse(ovin.pending)
self.assertEqual(ovin.getresult(), len(msg))
out = ovout.getresult().rstrip()
err = overr.getresult().rstrip()
self.assertGreater(len(out), 0)
self.assertGreater(len(err), 0)
# allow for partial reads...
self.assertTrue(msg.upper().rstrip().startswith(out))
self.assertTrue(b"stderr".startswith(err))
p.stdin.close()
p.stdout.close()
p.stderr.close()
p.wait()
if __name__ == '__main__':
unittest.main()
| 34.896175 | 77 | 0.627153 |
import socket
import sys
import warnings
from trollius.test_utils import unittest
if sys.platform != 'win32':
raise unittest.SkipTest('Windows only')
from trollius import _overlapped
from trollius import py33_winapi as _winapi
from trollius import test_support as support
from trollius import test_utils
from trollius import windows_utils
from trollius.test_utils import mock
class WinsocketpairTests(unittest.TestCase):
def check_winsocketpair(self, ssock, csock):
csock.send(b'xxx')
self.assertEqual(b'xxx', ssock.recv(1024))
csock.close()
ssock.close()
def test_winsocketpair(self):
ssock, csock = windows_utils.socketpair()
self.check_winsocketpair(ssock, csock)
@unittest.skipUnless(support.IPV6_ENABLED,
'IPv6 not supported or enabled')
def test_winsocketpair_ipv6(self):
ssock, csock = windows_utils.socketpair(family=socket.AF_INET6)
self.check_winsocketpair(ssock, csock)
@unittest.skipIf(hasattr(socket, 'socketpair'),
'socket.socketpair is available')
@mock.patch('trollius.windows_utils.socket')
def test_winsocketpair_exc(self, m_socket):
m_socket.AF_INET = socket.AF_INET
m_socket.SOCK_STREAM = socket.SOCK_STREAM
m_socket.socket.return_value.getsockname.return_value = ('', 12345)
m_socket.socket.return_value.accept.return_value = object(), object()
m_socket.socket.return_value.connect.side_effect = OSError()
self.assertRaises(OSError, windows_utils.socketpair)
def test_winsocketpair_invalid_args(self):
self.assertRaises(ValueError,
windows_utils.socketpair, family=socket.AF_UNSPEC)
self.assertRaises(ValueError,
windows_utils.socketpair, type=socket.SOCK_DGRAM)
self.assertRaises(ValueError,
windows_utils.socketpair, proto=1)
@unittest.skipIf(hasattr(socket, 'socketpair'),
'socket.socketpair is available')
@mock.patch('trollius.windows_utils.socket')
def test_winsocketpair_close(self, m_socket):
m_socket.AF_INET = socket.AF_INET
m_socket.SOCK_STREAM = socket.SOCK_STREAM
sock = mock.Mock()
m_socket.socket.return_value = sock
sock.bind.side_effect = OSError
self.assertRaises(OSError, windows_utils.socketpair)
self.assertTrue(sock.close.called)
class PipeTests(unittest.TestCase):
def test_pipe_overlapped(self):
h1, h2 = windows_utils.pipe(overlapped=(True, True))
try:
ov1 = _overlapped.Overlapped()
self.assertFalse(ov1.pending)
self.assertEqual(ov1.error, 0)
ov1.ReadFile(h1, 100)
self.assertTrue(ov1.pending)
self.assertEqual(ov1.error, _winapi.ERROR_IO_PENDING)
ERROR_IO_INCOMPLETE = 996
try:
ov1.getresult()
except WindowsError as e:
self.assertEqual(e.winerror, ERROR_IO_INCOMPLETE)
else:
raise RuntimeError('expected ERROR_IO_INCOMPLETE')
ov2 = _overlapped.Overlapped()
self.assertFalse(ov2.pending)
self.assertEqual(ov2.error, 0)
ov2.WriteFile(h2, b"hello")
self.assertIn(ov2.error, set((0, _winapi.ERROR_IO_PENDING)))
res = _winapi.WaitForSingleObject(ov2.event, 100)
self.assertEqual(res, _winapi.WAIT_OBJECT_0)
self.assertFalse(ov1.pending)
self.assertEqual(ov1.error, ERROR_IO_INCOMPLETE)
self.assertFalse(ov2.pending)
self.assertIn(ov2.error, set((0, _winapi.ERROR_IO_PENDING)))
self.assertEqual(ov1.getresult(), b"hello")
finally:
_winapi.CloseHandle(h1)
_winapi.CloseHandle(h2)
def test_pipe_handle(self):
h, _ = windows_utils.pipe(overlapped=(True, True))
_winapi.CloseHandle(_)
p = windows_utils.PipeHandle(h)
self.assertEqual(p.fileno(), h)
self.assertEqual(p.handle, h)
with warnings.catch_warnings():
if sys.version_info >= (3, 4):
warnings.filterwarnings("ignore", "", ResourceWarning)
del p
support.gc_collect()
try:
_winapi.CloseHandle(h)
except OSError as e:
self.assertEqual(e.winerror, 6)
else:
raise RuntimeError('expected ERROR_INVALID_HANDLE')
class PopenTests(unittest.TestCase):
def test_popen(self):
command = r"""if 1:
import sys
s = sys.stdin.readline()
sys.stdout.write(s.upper())
sys.stderr.write('stderr')
"""
msg = b"blah\n"
p = windows_utils.Popen([sys.executable, '-c', command],
stdin=windows_utils.PIPE,
stdout=windows_utils.PIPE,
stderr=windows_utils.PIPE)
for f in [p.stdin, p.stdout, p.stderr]:
self.assertIsInstance(f, windows_utils.PipeHandle)
ovin = _overlapped.Overlapped()
ovout = _overlapped.Overlapped()
overr = _overlapped.Overlapped()
ovin.WriteFile(p.stdin.handle, msg)
ovout.ReadFile(p.stdout.handle, 100)
overr.ReadFile(p.stderr.handle, 100)
events = [ovin.event, ovout.event, overr.event]
res = _winapi.WaitForMultipleObjects(events, True, 10000)
self.assertEqual(res, _winapi.WAIT_OBJECT_0)
self.assertFalse(ovout.pending)
self.assertFalse(overr.pending)
self.assertFalse(ovin.pending)
self.assertEqual(ovin.getresult(), len(msg))
out = ovout.getresult().rstrip()
err = overr.getresult().rstrip()
self.assertGreater(len(out), 0)
self.assertGreater(len(err), 0)
self.assertTrue(msg.upper().rstrip().startswith(out))
self.assertTrue(b"stderr".startswith(err))
p.stdin.close()
p.stdout.close()
p.stderr.close()
p.wait()
if __name__ == '__main__':
unittest.main()
| true | true |
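The socketpair tests above exercise trollius's emulation of `socket.socketpair()` on Windows. A condensed, cross-platform sketch of that emulation pattern (bind a loopback listener, connect, accept); this is an illustration, not the trollius implementation:

```python
import socket

def socketpair(family=socket.AF_INET, type=socket.SOCK_STREAM, proto=0):
    """Emulate socket.socketpair() without AF_UNIX: listen on
    loopback, connect a client, and accept the server side."""
    lsock = socket.socket(family, type, proto)
    try:
        lsock.bind(('127.0.0.1', 0))   # port 0: kernel picks a free port
        lsock.listen(1)
        csock = socket.socket(family, type, proto)
        csock.connect(lsock.getsockname())
        ssock, _ = lsock.accept()
    finally:
        lsock.close()                  # the listener is no longer needed
    return ssock, csock

ssock, csock = socketpair()
csock.send(b'xxx')
print(ssock.recv(1024))  # b'xxx'
csock.close()
ssock.close()
```

This is also why `test_winsocketpair_exc` can simulate a failure simply by making `connect` raise: the whole emulation hinges on that loopback handshake.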
f73e26be5e8339103201e497c6e376922c92d858 | 13,682 | py | Python | django/contrib/gis/db/models/fields.py | Perlence/django | 4f7328ce8a35160d155c41d362c3d674f8ef4d2d | [
"PSF-2.0",
"BSD-3-Clause"
] | 4 | 2017-01-09T10:51:20.000Z | 2020-06-30T14:00:41.000Z | django/contrib/gis/db/models/fields.py | Perlence/django | 4f7328ce8a35160d155c41d362c3d674f8ef4d2d | [
"PSF-2.0",
"BSD-3-Clause"
] | 10 | 2016-05-19T21:54:42.000Z | 2019-08-09T15:59:50.000Z | django/contrib/gis/db/models/fields.py | Perlence/django | 4f7328ce8a35160d155c41d362c3d674f8ef4d2d | [
"PSF-2.0",
"BSD-3-Clause"
] | 2 | 2016-08-02T20:16:08.000Z | 2020-01-07T19:45:38.000Z | from collections import defaultdict, namedtuple
from django.contrib.gis import forms, gdal
from django.contrib.gis.db.models.proxy import SpatialProxy
from django.contrib.gis.gdal.error import GDALException
from django.contrib.gis.geos import (
GeometryCollection, GEOSException, GEOSGeometry, LineString,
MultiLineString, MultiPoint, MultiPolygon, Point, Polygon,
)
from django.core.exceptions import ImproperlyConfigured
from django.db.models.fields import Field
from django.utils.translation import gettext_lazy as _
# Local cache of the spatial_ref_sys table, which holds SRID data for each
# spatial database alias. This cache exists so that the database isn't queried
# for SRID info each time a distance query is constructed.
_srid_cache = defaultdict(dict)
SRIDCacheEntry = namedtuple('SRIDCacheEntry', ['units', 'units_name', 'spheroid', 'geodetic'])
def get_srid_info(srid, connection):
"""
Return the units, unit name, and spheroid WKT associated with the
given SRID from the `spatial_ref_sys` (or equivalent) spatial database
table for the given database connection. These results are cached.
"""
from django.contrib.gis.gdal import SpatialReference
global _srid_cache
try:
# The SpatialRefSys model for the spatial backend.
SpatialRefSys = connection.ops.spatial_ref_sys()
except NotImplementedError:
SpatialRefSys = None
alias, get_srs = (
(connection.alias, lambda srid: SpatialRefSys.objects.using(connection.alias).get(srid=srid).srs)
if SpatialRefSys else
(None, SpatialReference)
)
if srid not in _srid_cache[alias]:
srs = get_srs(srid)
units, units_name = srs.units
_srid_cache[alias][srid] = SRIDCacheEntry(
units=units,
units_name=units_name,
spheroid='SPHEROID["%s",%s,%s]' % (srs['spheroid'], srs.semi_major, srs.inverse_flattening),
geodetic=srs.geographic,
)
return _srid_cache[alias][srid]
class BaseSpatialField(Field):
"""
The Base GIS Field.
It's used as a base class for GeometryField and RasterField. Defines
properties that are common to all GIS fields such as the characteristics
of the spatial reference system of the field.
"""
description = _("The base GIS field.")
empty_strings_allowed = False
def __init__(self, verbose_name=None, srid=4326, spatial_index=True, **kwargs):
"""
The initialization function for base spatial fields. Takes the following
as keyword arguments:
srid:
The spatial reference system identifier, an OGC standard.
Defaults to 4326 (WGS84).
spatial_index:
Indicates whether to create a spatial index. Defaults to True.
Set this instead of 'db_index' for geographic fields since index
creation is different for geometry columns.
"""
# Setting the index flag with the value of the `spatial_index` keyword.
self.spatial_index = spatial_index
# Setting the SRID and getting the units. Unit information must be
# easily available in the field instance for distance queries.
self.srid = srid
# Setting the verbose_name keyword argument with the positional
# first parameter, so this works like normal fields.
kwargs['verbose_name'] = verbose_name
super().__init__(**kwargs)
def deconstruct(self):
name, path, args, kwargs = super().deconstruct()
# Always include SRID for less fragility; include spatial index if it's
# not the default value.
kwargs['srid'] = self.srid
if self.spatial_index is not True:
kwargs['spatial_index'] = self.spatial_index
return name, path, args, kwargs
def db_type(self, connection):
return connection.ops.geo_db_type(self)
def spheroid(self, connection):
return get_srid_info(self.srid, connection).spheroid
def units(self, connection):
return get_srid_info(self.srid, connection).units
def units_name(self, connection):
return get_srid_info(self.srid, connection).units_name
def geodetic(self, connection):
"""
Return true if this field's SRID corresponds with a coordinate
system that uses non-projected units (e.g., latitude/longitude).
"""
return get_srid_info(self.srid, connection).geodetic
def get_placeholder(self, value, compiler, connection):
"""
Return the placeholder for the spatial column for the
given value.
"""
return connection.ops.get_geom_placeholder(self, value, compiler)
def get_srid(self, obj):
"""
Return the default SRID for the given geometry or raster, taking into
account the SRID set for the field. For example, if the input geometry
or raster doesn't have an SRID, then the SRID of the field will be
returned.
"""
srid = obj.srid # SRID of given geometry.
if srid is None or self.srid == -1 or (srid == -1 and self.srid != -1):
return self.srid
else:
return srid
def get_db_prep_value(self, value, connection, *args, **kwargs):
if value is None:
return None
return connection.ops.Adapter(
super().get_db_prep_value(value, connection, *args, **kwargs),
**({'geography': True} if self.geography and connection.ops.geography else {})
)
def get_raster_prep_value(self, value, is_candidate):
"""
Return a GDALRaster if conversion is successful, otherwise return None.
"""
if isinstance(value, gdal.GDALRaster):
return value
elif is_candidate:
try:
return gdal.GDALRaster(value)
except GDALException:
pass
elif isinstance(value, dict):
try:
return gdal.GDALRaster(value)
except GDALException:
raise ValueError("Couldn't create spatial object from lookup value '%s'." % value)
def get_prep_value(self, value):
obj = super().get_prep_value(value)
if obj is None:
return None
# When the input is not a geometry or raster, attempt to construct one
# from the given string input.
if isinstance(obj, GEOSGeometry):
pass
else:
# Check if input is a candidate for conversion to raster or geometry.
is_candidate = isinstance(obj, (bytes, str)) or hasattr(obj, '__geo_interface__')
# Try to convert the input to raster.
raster = self.get_raster_prep_value(obj, is_candidate)
if raster:
obj = raster
elif is_candidate:
try:
obj = GEOSGeometry(obj)
except (GEOSException, GDALException):
raise ValueError("Couldn't create spatial object from lookup value '%s'." % obj)
else:
raise ValueError('Cannot use object with type %s for a spatial lookup parameter.' % type(obj).__name__)
# Assigning the SRID value.
obj.srid = self.get_srid(obj)
return obj
class GeometryField(BaseSpatialField):
"""
The base Geometry field -- maps to the OpenGIS Specification Geometry type.
"""
description = _('The base Geometry field — maps to the OpenGIS Specification Geometry type.')
form_class = forms.GeometryField
# The OpenGIS Geometry name.
geom_type = 'GEOMETRY'
geom_class = None
def __init__(self, verbose_name=None, dim=2, geography=False, *, extent=(-180.0, -90.0, 180.0, 90.0),
tolerance=0.05, **kwargs):
"""
The initialization function for geometry fields. In addition to the
parameters from BaseSpatialField, it takes the following as keyword
arguments:
dim:
The number of dimensions for this geometry. Defaults to 2.
extent:
Customize the extent, in a 4-tuple of WGS 84 coordinates, for the
geometry field entry in the `USER_SDO_GEOM_METADATA` table. Defaults
to (-180.0, -90.0, 180.0, 90.0).
tolerance:
Define the tolerance, in meters, to use for the geometry field
entry in the `USER_SDO_GEOM_METADATA` table. Defaults to 0.05.
"""
# Setting the dimension of the geometry field.
self.dim = dim
# Is this a geography rather than a geometry column?
self.geography = geography
# Oracle-specific private attributes for creating the entry in
# `USER_SDO_GEOM_METADATA`
self._extent = extent
self._tolerance = tolerance
super().__init__(verbose_name=verbose_name, **kwargs)
def deconstruct(self):
name, path, args, kwargs = super().deconstruct()
# Include kwargs if they're not the default values.
if self.dim != 2:
kwargs['dim'] = self.dim
if self.geography is not False:
kwargs['geography'] = self.geography
if self._extent != (-180.0, -90.0, 180.0, 90.0):
kwargs['extent'] = self._extent
if self._tolerance != 0.05:
kwargs['tolerance'] = self._tolerance
return name, path, args, kwargs
def contribute_to_class(self, cls, name, **kwargs):
super().contribute_to_class(cls, name, **kwargs)
# Setup for lazy-instantiated Geometry object.
setattr(cls, self.attname, SpatialProxy(self.geom_class or GEOSGeometry, self, load_func=GEOSGeometry))
def formfield(self, **kwargs):
defaults = {
'form_class': self.form_class,
'geom_type': self.geom_type,
'srid': self.srid,
**kwargs,
}
if self.dim > 2 and not getattr(defaults['form_class'].widget, 'supports_3d', False):
defaults.setdefault('widget', forms.Textarea)
return super().formfield(**defaults)
def select_format(self, compiler, sql, params):
"""
Return the selection format string, depending on the requirements
of the spatial backend. For example, Oracle and MySQL require custom
selection formats in order to retrieve geometries in OGC WKB.
"""
return compiler.connection.ops.select % sql, params
# The OpenGIS Geometry Type Fields
class PointField(GeometryField):
geom_type = 'POINT'
geom_class = Point
form_class = forms.PointField
description = _("Point")
class LineStringField(GeometryField):
geom_type = 'LINESTRING'
geom_class = LineString
form_class = forms.LineStringField
description = _("Line string")
class PolygonField(GeometryField):
geom_type = 'POLYGON'
geom_class = Polygon
form_class = forms.PolygonField
description = _("Polygon")
class MultiPointField(GeometryField):
geom_type = 'MULTIPOINT'
geom_class = MultiPoint
form_class = forms.MultiPointField
description = _("Multi-point")
class MultiLineStringField(GeometryField):
geom_type = 'MULTILINESTRING'
geom_class = MultiLineString
form_class = forms.MultiLineStringField
description = _("Multi-line string")
class MultiPolygonField(GeometryField):
geom_type = 'MULTIPOLYGON'
geom_class = MultiPolygon
form_class = forms.MultiPolygonField
description = _("Multi polygon")
class GeometryCollectionField(GeometryField):
geom_type = 'GEOMETRYCOLLECTION'
geom_class = GeometryCollection
form_class = forms.GeometryCollectionField
description = _("Geometry collection")
class ExtentField(Field):
"Used as a return value from an extent aggregate"
description = _("Extent Aggregate Field")
def get_internal_type(self):
return "ExtentField"
def select_format(self, compiler, sql, params):
select = compiler.connection.ops.select_extent
return select % sql if select else sql, params
class RasterField(BaseSpatialField):
"""
Raster field for GeoDjango -- evaluates into GDALRaster objects.
"""
description = _("Raster Field")
geom_type = 'RASTER'
geography = False
def _check_connection(self, connection):
# Make sure raster fields are used only on backends with raster support.
if not connection.features.gis_enabled or not connection.features.supports_raster:
raise ImproperlyConfigured('Raster fields require backends with raster support.')
def db_type(self, connection):
self._check_connection(connection)
return super().db_type(connection)
def from_db_value(self, value, expression, connection):
return connection.ops.parse_raster(value)
def contribute_to_class(self, cls, name, **kwargs):
super().contribute_to_class(cls, name, **kwargs)
# Setup for lazy-instantiated Raster object. For large querysets, the
# instantiation of all GDALRasters can potentially be expensive. This
# delays the instantiation of the objects to the moment of evaluation
# of the raster attribute.
setattr(cls, self.attname, SpatialProxy(gdal.GDALRaster, self))
def get_transform(self, name):
from django.contrib.gis.db.models.lookups import RasterBandTransform
try:
band_index = int(name)
return type(
'SpecificRasterBandTransform',
(RasterBandTransform,),
{'band_index': band_index}
)
except ValueError:
pass
return super().get_transform(name)
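`RasterField.get_transform` above creates a transform class on the fly with the three-argument `type()` call, baking the requested band index into a new subclass. The same pattern in miniature, with hypothetical class names:

```python
class BandTransform:
    """Toy base transform; band_index is filled in by each subclass."""
    band_index = None

    def describe(self):
        return f'band {self.band_index}'

def get_transform(name):
    """Mirror RasterField.get_transform: when `name` parses as an
    integer band index, build a per-band subclass with type()."""
    try:
        band_index = int(name)
    except ValueError:
        return None
    return type('SpecificBandTransform', (BandTransform,),
                {'band_index': band_index})

cls = get_transform('2')
print(cls().describe())  # band 2
```

Creating the subclass per lookup keeps the base transform generic while letting ORM lookups like `rast__2__...` carry their band index as class state.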
| 35.816754 | 119 | 0.656044 | from collections import defaultdict, namedtuple
from django.contrib.gis import forms, gdal
from django.contrib.gis.db.models.proxy import SpatialProxy
from django.contrib.gis.gdal.error import GDALException
from django.contrib.gis.geos import (
GeometryCollection, GEOSException, GEOSGeometry, LineString,
MultiLineString, MultiPoint, MultiPolygon, Point, Polygon,
)
from django.core.exceptions import ImproperlyConfigured
from django.db.models.fields import Field
from django.utils.translation import gettext_lazy as _
# Local cache of the spatial_ref_sys table, which holds SRID data for each
# spatial database alias. This cache exists so that the database isn't queried
# for SRID info each time a distance query is constructed.
_srid_cache = defaultdict(dict)
SRIDCacheEntry = namedtuple('SRIDCacheEntry', ['units', 'units_name', 'spheroid', 'geodetic'])
def get_srid_info(srid, connection):
from django.contrib.gis.gdal import SpatialReference
global _srid_cache
try:
# The SpatialRefSys model for the spatial backend.
SpatialRefSys = connection.ops.spatial_ref_sys()
except NotImplementedError:
SpatialRefSys = None
alias, get_srs = (
(connection.alias, lambda srid: SpatialRefSys.objects.using(connection.alias).get(srid=srid).srs)
if SpatialRefSys else
(None, SpatialReference)
)
if srid not in _srid_cache[alias]:
srs = get_srs(srid)
units, units_name = srs.units
_srid_cache[alias][srid] = SRIDCacheEntry(
units=units,
units_name=units_name,
spheroid='SPHEROID["%s",%s,%s]' % (srs['spheroid'], srs.semi_major, srs.inverse_flattening),
geodetic=srs.geographic,
)
return _srid_cache[alias][srid]
class BaseSpatialField(Field):
description = _("The base GIS field.")
empty_strings_allowed = False
def __init__(self, verbose_name=None, srid=4326, spatial_index=True, **kwargs):
# Setting the index flag with the value of the `spatial_index` keyword.
self.spatial_index = spatial_index
# Setting the SRID and getting the units. Unit information must be
# easily available in the field instance for distance queries.
self.srid = srid
# Setting the verbose_name keyword argument with the positional
# first parameter, so this works like normal fields.
kwargs['verbose_name'] = verbose_name
super().__init__(**kwargs)
def deconstruct(self):
name, path, args, kwargs = super().deconstruct()
        # Always include SRID for less fragility; include spatial index if it's
        # not the default value.
kwargs['srid'] = self.srid
if self.spatial_index is not True:
kwargs['spatial_index'] = self.spatial_index
return name, path, args, kwargs
def db_type(self, connection):
return connection.ops.geo_db_type(self)
def spheroid(self, connection):
return get_srid_info(self.srid, connection).spheroid
def units(self, connection):
return get_srid_info(self.srid, connection).units
def units_name(self, connection):
return get_srid_info(self.srid, connection).units_name
def geodetic(self, connection):
return get_srid_info(self.srid, connection).geodetic
def get_placeholder(self, value, compiler, connection):
return connection.ops.get_geom_placeholder(self, value, compiler)
def get_srid(self, obj):
srid = obj.srid
if srid is None or self.srid == -1 or (srid == -1 and self.srid != -1):
return self.srid
else:
return srid
def get_db_prep_value(self, value, connection, *args, **kwargs):
if value is None:
return None
return connection.ops.Adapter(
super().get_db_prep_value(value, connection, *args, **kwargs),
**({'geography': True} if self.geography and connection.ops.geography else {})
)
def get_raster_prep_value(self, value, is_candidate):
if isinstance(value, gdal.GDALRaster):
return value
elif is_candidate:
try:
return gdal.GDALRaster(value)
except GDALException:
pass
elif isinstance(value, dict):
try:
return gdal.GDALRaster(value)
except GDALException:
raise ValueError("Couldn't create spatial object from lookup value '%s'." % value)
def get_prep_value(self, value):
obj = super().get_prep_value(value)
if obj is None:
return None
# When the input is not a geometry or raster, attempt to construct one
# from the given string input.
if isinstance(obj, GEOSGeometry):
pass
else:
# Check if input is a candidate for conversion to raster or geometry.
is_candidate = isinstance(obj, (bytes, str)) or hasattr(obj, '__geo_interface__')
# Try to convert the input to raster.
raster = self.get_raster_prep_value(obj, is_candidate)
if raster:
obj = raster
elif is_candidate:
try:
obj = GEOSGeometry(obj)
except (GEOSException, GDALException):
raise ValueError("Couldn't create spatial object from lookup value '%s'." % obj)
else:
raise ValueError('Cannot use object with type %s for a spatial lookup parameter.' % type(obj).__name__)
obj.srid = self.get_srid(obj)
return obj
class GeometryField(BaseSpatialField):
description = _('The base Geometry field — maps to the OpenGIS Specification Geometry type.')
form_class = forms.GeometryField
geom_type = 'GEOMETRY'
geom_class = None
def __init__(self, verbose_name=None, dim=2, geography=False, *, extent=(-180.0, -90.0, 180.0, 90.0),
tolerance=0.05, **kwargs):
self.dim = dim
self.geography = geography
self._extent = extent
self._tolerance = tolerance
super().__init__(verbose_name=verbose_name, **kwargs)
def deconstruct(self):
name, path, args, kwargs = super().deconstruct()
if self.dim != 2:
kwargs['dim'] = self.dim
if self.geography is not False:
kwargs['geography'] = self.geography
if self._extent != (-180.0, -90.0, 180.0, 90.0):
kwargs['extent'] = self._extent
if self._tolerance != 0.05:
kwargs['tolerance'] = self._tolerance
return name, path, args, kwargs
def contribute_to_class(self, cls, name, **kwargs):
super().contribute_to_class(cls, name, **kwargs)
# Setup for lazy-instantiated Geometry object.
setattr(cls, self.attname, SpatialProxy(self.geom_class or GEOSGeometry, self, load_func=GEOSGeometry))
def formfield(self, **kwargs):
defaults = {
'form_class': self.form_class,
'geom_type': self.geom_type,
'srid': self.srid,
**kwargs,
}
if self.dim > 2 and not getattr(defaults['form_class'].widget, 'supports_3d', False):
defaults.setdefault('widget', forms.Textarea)
return super().formfield(**defaults)
def select_format(self, compiler, sql, params):
return compiler.connection.ops.select % sql, params
# The OpenGIS Geometry Type Fields
class PointField(GeometryField):
geom_type = 'POINT'
geom_class = Point
form_class = forms.PointField
description = _("Point")
class LineStringField(GeometryField):
geom_type = 'LINESTRING'
geom_class = LineString
form_class = forms.LineStringField
description = _("Line string")
class PolygonField(GeometryField):
geom_type = 'POLYGON'
geom_class = Polygon
form_class = forms.PolygonField
description = _("Polygon")
class MultiPointField(GeometryField):
geom_type = 'MULTIPOINT'
geom_class = MultiPoint
form_class = forms.MultiPointField
description = _("Multi-point")
class MultiLineStringField(GeometryField):
geom_type = 'MULTILINESTRING'
geom_class = MultiLineString
form_class = forms.MultiLineStringField
description = _("Multi-line string")
class MultiPolygonField(GeometryField):
geom_type = 'MULTIPOLYGON'
geom_class = MultiPolygon
form_class = forms.MultiPolygonField
description = _("Multi polygon")
class GeometryCollectionField(GeometryField):
geom_type = 'GEOMETRYCOLLECTION'
geom_class = GeometryCollection
form_class = forms.GeometryCollectionField
description = _("Geometry collection")
class ExtentField(Field):
description = _("Extent Aggregate Field")
def get_internal_type(self):
return "ExtentField"
def select_format(self, compiler, sql, params):
select = compiler.connection.ops.select_extent
return select % sql if select else sql, params
class RasterField(BaseSpatialField):
description = _("Raster Field")
geom_type = 'RASTER'
geography = False
def _check_connection(self, connection):
# Make sure raster fields are used only on backends with raster support.
if not connection.features.gis_enabled or not connection.features.supports_raster:
raise ImproperlyConfigured('Raster fields require backends with raster support.')
def db_type(self, connection):
self._check_connection(connection)
return super().db_type(connection)
def from_db_value(self, value, expression, connection):
return connection.ops.parse_raster(value)
def contribute_to_class(self, cls, name, **kwargs):
super().contribute_to_class(cls, name, **kwargs)
# Setup for lazy-instantiated Raster object. For large querysets, the
# instantiation of all GDALRasters can potentially be expensive. This
# delays the instantiation of the objects to the moment of evaluation
# of the raster attribute.
setattr(cls, self.attname, SpatialProxy(gdal.GDALRaster, self))
def get_transform(self, name):
from django.contrib.gis.db.models.lookups import RasterBandTransform
try:
band_index = int(name)
return type(
'SpecificRasterBandTransform',
(RasterBandTransform,),
{'band_index': band_index}
)
except ValueError:
pass
return super().get_transform(name)
| true | true |
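A note on the `get_transform` override that closes the file above: for a numeric lookup name it manufactures a one-off transform class pinned to that band index with `type()`. The same pattern can be sketched without Django; `BandTransform` below is a hypothetical stand-in for `RasterBandTransform`:

```python
class BandTransform:
    """Hypothetical stand-in for Django's RasterBandTransform."""
    band_index = 0  # default: first raster band

def get_transform(name):
    # Numeric lookup names select a raster band: build a throwaway
    # subclass carrying that band index as a class attribute.
    try:
        band_index = int(name)
    except ValueError:
        return None  # not a band index; defer to normal lookup resolution
    return type(
        'SpecificBandTransform',
        (BandTransform,),
        {'band_index': band_index},
    )

band3 = get_transform('3')
print(band3.__name__, band3.band_index)  # SpecificBandTransform 3
print(get_transform('contains'))         # None
```

Generating a subclass per lookup keeps the band index on the class itself, so the transform machinery never needs an extra constructor argument.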
f73e27240746dece6f5e7ef28f69fcf3af24de23 | 6,361 | py | Python | mmpose/datasets/datasets/hand/freihand_dataset.py | yulong314/mmpose | cdfce789d0e48dd868c70a405a7d7f3da2b4ebe3 | ["Apache-2.0"] | 1 | 2021-06-01T08:21:32.000Z | 2021-06-01T08:21:32.000Z | mmpose/datasets/datasets/hand/freihand_dataset.py | yulong314/mmpose | cdfce789d0e48dd868c70a405a7d7f3da2b4ebe3 | ["Apache-2.0"] | null | null | null | mmpose/datasets/datasets/hand/freihand_dataset.py | yulong314/mmpose | cdfce789d0e48dd868c70a405a7d7f3da2b4ebe3 | ["Apache-2.0"] | 1 | 2021-06-22T06:41:45.000Z | 2021-06-22T06:41:45.000Z |
# Copyright (c) OpenMMLab. All rights reserved.
import os
from collections import OrderedDict
import numpy as np
from mmpose.datasets.builder import DATASETS
from .hand_base_dataset import HandBaseDataset
@DATASETS.register_module()
class FreiHandDataset(HandBaseDataset):
"""FreiHand dataset for top-down hand pose estimation.
`FreiHAND: A Dataset for Markerless Capture of Hand Pose
and Shape from Single RGB Images' ICCV'2019
More details can be found in the `paper
<https://arxiv.org/pdf/1909.04349.pdf>`__ .
The dataset loads raw features and apply specified transforms
to return a dict containing the image tensors and other information.
FreiHand keypoint indexes::
0: 'wrist',
1: 'thumb1',
2: 'thumb2',
3: 'thumb3',
4: 'thumb4',
5: 'forefinger1',
6: 'forefinger2',
7: 'forefinger3',
8: 'forefinger4',
9: 'middle_finger1',
10: 'middle_finger2',
11: 'middle_finger3',
12: 'middle_finger4',
13: 'ring_finger1',
14: 'ring_finger2',
15: 'ring_finger3',
16: 'ring_finger4',
17: 'pinky_finger1',
18: 'pinky_finger2',
19: 'pinky_finger3',
20: 'pinky_finger4'
Args:
ann_file (str): Path to the annotation file.
img_prefix (str): Path to a directory where images are held.
Default: None.
data_cfg (dict): config
pipeline (list[dict | callable]): A sequence of data transforms.
test_mode (bool): Store True when building test or
validation dataset. Default: False.
"""
def __init__(self,
ann_file,
img_prefix,
data_cfg,
pipeline,
test_mode=False):
super().__init__(
ann_file, img_prefix, data_cfg, pipeline, test_mode=test_mode)
self.ann_info['use_different_joint_weights'] = False
assert self.ann_info['num_joints'] == 21
self.ann_info['joint_weights'] = \
np.ones((self.ann_info['num_joints'], 1), dtype=np.float32)
self.dataset_name = 'freihand'
self.db = self._get_db()
print(f'=> num_images: {self.num_images}')
print(f'=> load {len(self.db)} samples')
def _get_db(self):
"""Load dataset."""
gt_db = []
bbox_id = 0
num_joints = self.ann_info['num_joints']
for img_id in self.img_ids:
ann_ids = self.coco.getAnnIds(imgIds=img_id, iscrowd=False)
objs = self.coco.loadAnns(ann_ids)
for obj in objs:
if max(obj['keypoints']) == 0:
continue
joints_3d = np.zeros((num_joints, 3), dtype=np.float32)
joints_3d_visible = np.zeros((num_joints, 3), dtype=np.float32)
keypoints = np.array(obj['keypoints']).reshape(-1, 3)
joints_3d[:, :2] = keypoints[:, :2]
joints_3d_visible[:, :2] = np.minimum(1, keypoints[:, 2:3])
# the ori image is 224x224
center, scale = self._xywh2cs(0, 0, 224, 224, 0.8)
image_file = os.path.join(self.img_prefix,
self.id2name[img_id])
gt_db.append({
'image_file': image_file,
'center': center,
'scale': scale,
'rotation': 0,
'joints_3d': joints_3d,
'joints_3d_visible': joints_3d_visible,
'dataset': self.dataset_name,
'bbox': obj['bbox'],
'bbox_score': 1,
'bbox_id': bbox_id
})
bbox_id = bbox_id + 1
gt_db = sorted(gt_db, key=lambda x: x['bbox_id'])
return gt_db
def evaluate(self, outputs, res_folder, metric='PCK', **kwargs):
"""Evaluate freihand keypoint results. The pose prediction results will
be saved in `${res_folder}/result_keypoints.json`.
Note:
batch_size: N
num_keypoints: K
heatmap height: H
heatmap width: W
Args:
outputs (list(preds, boxes, image_path, output_heatmap))
:preds (np.ndarray[N,K,3]): The first two dimensions are
coordinates, score is the third dimension of the array.
:boxes (np.ndarray[N,6]): [center[0], center[1], scale[0]
, scale[1],area, score]
:image_paths (list[str]): For example, ['training/rgb/
00031426.jpg']
            :output_heatmap (np.ndarray[N, K, H, W]): model outputs.
res_folder (str): Path of directory to save the results.
metric (str | list[str]): Metric to be performed.
Options: 'PCK', 'AUC', 'EPE'.
Returns:
dict: Evaluation results for evaluation metric.
"""
metrics = metric if isinstance(metric, list) else [metric]
allowed_metrics = ['PCK', 'AUC', 'EPE']
for metric in metrics:
if metric not in allowed_metrics:
raise KeyError(f'metric {metric} is not supported')
res_file = os.path.join(res_folder, 'result_keypoints.json')
kpts = []
for output in outputs:
preds = output['preds']
boxes = output['boxes']
image_paths = output['image_paths']
bbox_ids = output['bbox_ids']
batch_size = len(image_paths)
for i in range(batch_size):
image_id = self.name2id[image_paths[i][len(self.img_prefix):]]
kpts.append({
'keypoints': preds[i].tolist(),
'center': boxes[i][0:2].tolist(),
'scale': boxes[i][2:4].tolist(),
'area': float(boxes[i][4]),
'score': float(boxes[i][5]),
'image_id': image_id,
'bbox_id': bbox_ids[i]
})
kpts = self._sort_and_unique_bboxes(kpts)
self._write_keypoint_results(kpts, res_file)
info_str = self._report_metric(res_file, metrics)
name_value = OrderedDict(info_str)
return name_value
| 34.950549 | 79 | 0.542053 |
import os
from collections import OrderedDict
import numpy as np
from mmpose.datasets.builder import DATASETS
from .hand_base_dataset import HandBaseDataset
@DATASETS.register_module()
class FreiHandDataset(HandBaseDataset):
def __init__(self,
ann_file,
img_prefix,
data_cfg,
pipeline,
test_mode=False):
super().__init__(
ann_file, img_prefix, data_cfg, pipeline, test_mode=test_mode)
self.ann_info['use_different_joint_weights'] = False
assert self.ann_info['num_joints'] == 21
self.ann_info['joint_weights'] = \
np.ones((self.ann_info['num_joints'], 1), dtype=np.float32)
self.dataset_name = 'freihand'
self.db = self._get_db()
print(f'=> num_images: {self.num_images}')
print(f'=> load {len(self.db)} samples')
def _get_db(self):
gt_db = []
bbox_id = 0
num_joints = self.ann_info['num_joints']
for img_id in self.img_ids:
ann_ids = self.coco.getAnnIds(imgIds=img_id, iscrowd=False)
objs = self.coco.loadAnns(ann_ids)
for obj in objs:
if max(obj['keypoints']) == 0:
continue
joints_3d = np.zeros((num_joints, 3), dtype=np.float32)
joints_3d_visible = np.zeros((num_joints, 3), dtype=np.float32)
keypoints = np.array(obj['keypoints']).reshape(-1, 3)
joints_3d[:, :2] = keypoints[:, :2]
joints_3d_visible[:, :2] = np.minimum(1, keypoints[:, 2:3])
center, scale = self._xywh2cs(0, 0, 224, 224, 0.8)
image_file = os.path.join(self.img_prefix,
self.id2name[img_id])
gt_db.append({
'image_file': image_file,
'center': center,
'scale': scale,
'rotation': 0,
'joints_3d': joints_3d,
'joints_3d_visible': joints_3d_visible,
'dataset': self.dataset_name,
'bbox': obj['bbox'],
'bbox_score': 1,
'bbox_id': bbox_id
})
bbox_id = bbox_id + 1
gt_db = sorted(gt_db, key=lambda x: x['bbox_id'])
return gt_db
def evaluate(self, outputs, res_folder, metric='PCK', **kwargs):
metrics = metric if isinstance(metric, list) else [metric]
allowed_metrics = ['PCK', 'AUC', 'EPE']
for metric in metrics:
if metric not in allowed_metrics:
raise KeyError(f'metric {metric} is not supported')
res_file = os.path.join(res_folder, 'result_keypoints.json')
kpts = []
for output in outputs:
preds = output['preds']
boxes = output['boxes']
image_paths = output['image_paths']
bbox_ids = output['bbox_ids']
batch_size = len(image_paths)
for i in range(batch_size):
image_id = self.name2id[image_paths[i][len(self.img_prefix):]]
kpts.append({
'keypoints': preds[i].tolist(),
'center': boxes[i][0:2].tolist(),
'scale': boxes[i][2:4].tolist(),
'area': float(boxes[i][4]),
'score': float(boxes[i][5]),
'image_id': image_id,
'bbox_id': bbox_ids[i]
})
kpts = self._sort_and_unique_bboxes(kpts)
self._write_keypoint_results(kpts, res_file)
info_str = self._report_metric(res_file, metrics)
name_value = OrderedDict(info_str)
return name_value
| true | true |
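The `evaluate` method above ends by funnelling predictions through `self._sort_and_unique_bboxes`, a helper inherited from the base dataset and not shown in this file. Its behaviour (sort by `bbox_id`, then drop duplicate detections) can be sketched as a plain function; this is an illustration, not mmpose's exact implementation:

```python
def sort_and_unique_bboxes(kpts, key='bbox_id'):
    """Sort keypoint results by bbox id and drop duplicate detections."""
    kpts = sorted(kpts, key=lambda x: x[key])
    deduped = []
    for item in kpts:
        # keep only the first occurrence of each bbox id
        if not deduped or item[key] != deduped[-1][key]:
            deduped.append(item)
    return deduped

preds = [{'bbox_id': 2, 'score': 0.7},
         {'bbox_id': 0, 'score': 0.9},
         {'bbox_id': 2, 'score': 0.7}]
print([p['bbox_id'] for p in sort_and_unique_bboxes(preds)])  # [0, 2]
```

Deduplication matters here because distributed evaluation can gather the same sample from several workers, and COCO-style scoring would otherwise double-count it.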
f73e27d7863b60a730756b99681c2762cf347061 | 10,550 | py | Python | training_script/cifar10_keras_sm.py | gonsoomoon/tensorflow-workshop-for-sagemaker | 985ab3853c16f4833caeae6382ccfc4474ac8e98 | ["MIT"] | null | null | null | training_script/cifar10_keras_sm.py | gonsoomoon/tensorflow-workshop-for-sagemaker | 985ab3853c16f4833caeae6382ccfc4474ac8e98 | ["MIT"] | null | null | null | training_script/cifar10_keras_sm.py | gonsoomoon/tensorflow-workshop-for-sagemaker | 985ab3853c16f4833caeae6382ccfc4474ac8e98 | ["MIT"] | 1 | 2020-02-29T04:58:00.000Z | 2020-02-29T04:58:00.000Z |
# Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#
# Permission is hereby granted, free of charge, to any person obtaining a copy of this
# software and associated documentation files (the "Software"), to deal in the Software
# without restriction, including without limitation the rights to use, copy, modify,
# merge, publish, distribute, sublicense, and/or sell copies of the Software, and to
# permit persons to whom the Software is furnished to do so.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED,
# INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A
# PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
# HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
# OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
# SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import argparse
import logging
import os
from keras.callbacks import ModelCheckpoint
from keras.layers import Activation, Conv2D, Dense, Dropout, Flatten, MaxPooling2D, BatchNormalization
from keras.models import Sequential
from keras.optimizers import Adam, SGD, RMSprop
import tensorflow as tf
from keras import backend as K
sess = tf.Session()
K.set_session(sess)
logging.getLogger().setLevel(logging.INFO)
tf.logging.set_verbosity(tf.logging.INFO)
HEIGHT = 32
WIDTH = 32
DEPTH = 3
NUM_CLASSES = 10
NUM_DATA_BATCHES = 5
NUM_EXAMPLES_PER_EPOCH_FOR_TRAIN = 10000 * NUM_DATA_BATCHES
INPUT_TENSOR_NAME = 'inputs_input' # needs to match the name of the first layer + "_input"
def keras_model_fn(learning_rate, weight_decay, optimizer, momentum):
"""keras_model_fn receives hyperparameters from the training job and returns a compiled keras model.
The model will be transformed into a TensorFlow Estimator before training and it will be saved in a
TensorFlow Serving SavedModel at the end of training.
Args:
        learning_rate, weight_decay, optimizer, momentum: Hyperparameters passed to the
            SageMaker TrainingJob that runs your TensorFlow training script.
Returns: A compiled Keras model
"""
model = Sequential()
model.add(Conv2D(32, (3, 3), padding='same', name='inputs', input_shape=(HEIGHT, WIDTH, DEPTH)))
model.add(BatchNormalization())
model.add(Activation('relu'))
model.add(Conv2D(32, (3, 3)))
model.add(BatchNormalization())
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.2))
model.add(Conv2D(64, (3, 3), padding='same'))
model.add(BatchNormalization())
model.add(Activation('relu'))
model.add(Conv2D(64, (3, 3)))
model.add(BatchNormalization())
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.3))
model.add(Conv2D(128, (3, 3), padding='same'))
model.add(BatchNormalization())
model.add(Activation('relu'))
model.add(Conv2D(128, (3, 3)))
model.add(BatchNormalization())
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.4))
model.add(Flatten())
model.add(Dense(512))
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(Dense(NUM_CLASSES))
model.add(Activation('softmax'))
size = 1
if optimizer.lower() == 'sgd':
opt = SGD(lr=learning_rate * size, decay=weight_decay, momentum=momentum)
elif optimizer.lower() == 'rmsprop':
opt = RMSprop(lr=learning_rate * size, decay=weight_decay)
else:
opt = Adam(lr=learning_rate * size, decay=weight_decay)
model.compile(loss='categorical_crossentropy',
optimizer=opt,
metrics=['accuracy'])
return model
def get_filenames(channel_name, channel):
if channel_name in ['train', 'validation', 'eval']:
return [os.path.join(channel, channel_name + '.tfrecords')]
else:
raise ValueError('Invalid data subset "%s"' % channel_name)
def train_input_fn():
return _input(args.epochs, args.batch_size, args.train, 'train')
def eval_input_fn():
return _input(args.epochs, args.batch_size, args.eval, 'eval')
def validation_input_fn():
return _input(args.epochs, args.batch_size, args.validation, 'validation')
def _input(epochs, batch_size, channel, channel_name):
filenames = get_filenames(channel_name, channel)
dataset = tf.data.TFRecordDataset(filenames)
dataset = dataset.repeat(epochs)
dataset = dataset.prefetch(10)
# Parse records.
dataset = dataset.map(
_dataset_parser, num_parallel_calls=10)
# Potentially shuffle records.
if channel_name == 'train':
# Ensure that the capacity is sufficiently large to provide good random
# shuffling.
buffer_size = int(NUM_EXAMPLES_PER_EPOCH_FOR_TRAIN * 0.4) + 3 * batch_size
dataset = dataset.shuffle(buffer_size=buffer_size)
# Batch it up.
dataset = dataset.batch(batch_size, drop_remainder=True)
iterator = dataset.make_one_shot_iterator()
image_batch, label_batch = iterator.get_next()
return {INPUT_TENSOR_NAME: image_batch}, label_batch
def _train_preprocess_fn(image):
"""Preprocess a single training image of layout [height, width, depth]."""
# Resize the image to add four extra pixels on each side.
image = tf.image.resize_image_with_crop_or_pad(image, HEIGHT + 8, WIDTH + 8)
# Randomly crop a [HEIGHT, WIDTH] section of the image.
image = tf.random_crop(image, [HEIGHT, WIDTH, DEPTH])
# Randomly flip the image horizontally.
image = tf.image.random_flip_left_right(image)
return image
def _dataset_parser(value):
"""Parse a CIFAR-10 record from value."""
featdef = {
'image': tf.FixedLenFeature([], tf.string),
'label': tf.FixedLenFeature([], tf.int64),
}
example = tf.parse_single_example(value, featdef)
image = tf.decode_raw(example['image'], tf.uint8)
image.set_shape([DEPTH * HEIGHT * WIDTH])
# Reshape from [depth * height * width] to [depth, height, width].
image = tf.cast(
tf.transpose(tf.reshape(image, [DEPTH, HEIGHT, WIDTH]), [1, 2, 0]),
tf.float32)
label = tf.cast(example['label'], tf.int32)
image = _train_preprocess_fn(image)
return image, tf.one_hot(label, NUM_CLASSES)
def save_model(model, output):
signature = tf.saved_model.signature_def_utils.predict_signature_def(
inputs={'inputs': model.input}, outputs={'scores': model.output})
builder = tf.saved_model.builder.SavedModelBuilder(output+'/1/')
builder.add_meta_graph_and_variables(
sess=K.get_session(),
tags=[tf.saved_model.tag_constants.SERVING],
signature_def_map={"serving_default": signature})
builder.save()
logging.info("Model successfully saved at: {}".format(output))
return
def main(args):
logging.info("getting data")
train_dataset = train_input_fn()
eval_dataset = eval_input_fn()
validation_dataset = validation_input_fn()
logging.info("configuring model")
model = keras_model_fn(args.learning_rate, args.weight_decay, args.optimizer, args.momentum)
callbacks = []
    # ----------- Modified section
# callbacks.append(ModelCheckpoint(args.model_dir + '/checkpoint-{epoch}.h5'))
callbacks.append(ModelCheckpoint(args.model_output_dir + '/checkpoint-{epoch}.h5'))
logging.info("Starting training")
model.fit(x=train_dataset[0], y=train_dataset[1],
steps_per_epoch=(num_examples_per_epoch('train') // args.batch_size),
epochs=args.epochs, validation_data=validation_dataset,
validation_steps=(num_examples_per_epoch('validation') // args.batch_size), callbacks=callbacks)
score = model.evaluate(eval_dataset[0], eval_dataset[1], steps=num_examples_per_epoch('eval') // args.batch_size,
verbose=0)
logging.info('Test loss:{}'.format(score[0]))
logging.info('Test accuracy:{}'.format(score[1]))
    # ------------- Modified section
# return save_model(model, args.model_dir)
return save_model(model, args.model_output_dir)
def num_examples_per_epoch(subset='train'):
if subset == 'train':
return 40000
elif subset == 'validation':
return 10000
elif subset == 'eval':
return 10000
else:
raise ValueError('Invalid data subset "%s"' % subset)
if __name__ == '__main__':
parser = argparse.ArgumentParser()
parser.add_argument(
'--train',
type=str,
required=False,
        default=os.environ.get('SM_CHANNEL_TRAIN'),  # ---- modified
help='The directory where the CIFAR-10 input data is stored.')
parser.add_argument(
'--validation',
type=str,
required=False,
        default=os.environ.get('SM_CHANNEL_VALIDATION'),  # ---- modified
help='The directory where the CIFAR-10 input data is stored.')
parser.add_argument(
'--eval',
type=str,
required=False,
        default=os.environ.get('SM_CHANNEL_EVAL'),  # ---- modified
help='The directory where the CIFAR-10 input data is stored.')
parser.add_argument(
'--model_dir',
type=str,
required=True,
help='The directory where the model will be stored.')
parser.add_argument(
'--weight-decay',
type=float,
default=2e-4,
help='Weight decay for convolutions.')
parser.add_argument(
'--learning-rate',
type=float,
default=0.001,
help="""\
    This is the initial learning rate value. The learning rate will decrease
during training. For more details check the model_fn implementation in
this file.\
""")
parser.add_argument(
'--epochs',
type=int,
default=10,
help='The number of steps to use for training.')
parser.add_argument(
'--batch-size',
type=int,
default=128,
help='Batch size for training.')
parser.add_argument(
'--optimizer',
type=str,
default='adam')
parser.add_argument(
'--momentum',
type=float,
default='0.9')
    # ---------- Added section
parser.add_argument(
'--model_output_dir',
type=str,
default=os.environ.get('SM_MODEL_DIR'))
args = parser.parse_args()
    main(args)
| 34.933775 | 117 | 0.674123 |
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import argparse
import logging
import os
from keras.callbacks import ModelCheckpoint
from keras.layers import Activation, Conv2D, Dense, Dropout, Flatten, MaxPooling2D, BatchNormalization
from keras.models import Sequential
from keras.optimizers import Adam, SGD, RMSprop
import tensorflow as tf
from keras import backend as K
sess = tf.Session()
K.set_session(sess)
logging.getLogger().setLevel(logging.INFO)
tf.logging.set_verbosity(tf.logging.INFO)
HEIGHT = 32
WIDTH = 32
DEPTH = 3
NUM_CLASSES = 10
NUM_DATA_BATCHES = 5
NUM_EXAMPLES_PER_EPOCH_FOR_TRAIN = 10000 * NUM_DATA_BATCHES
INPUT_TENSOR_NAME = 'inputs_input'
def keras_model_fn(learning_rate, weight_decay, optimizer, momentum):
model = Sequential()
model.add(Conv2D(32, (3, 3), padding='same', name='inputs', input_shape=(HEIGHT, WIDTH, DEPTH)))
model.add(BatchNormalization())
model.add(Activation('relu'))
model.add(Conv2D(32, (3, 3)))
model.add(BatchNormalization())
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.2))
model.add(Conv2D(64, (3, 3), padding='same'))
model.add(BatchNormalization())
model.add(Activation('relu'))
model.add(Conv2D(64, (3, 3)))
model.add(BatchNormalization())
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.3))
model.add(Conv2D(128, (3, 3), padding='same'))
model.add(BatchNormalization())
model.add(Activation('relu'))
model.add(Conv2D(128, (3, 3)))
model.add(BatchNormalization())
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.4))
model.add(Flatten())
model.add(Dense(512))
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(Dense(NUM_CLASSES))
model.add(Activation('softmax'))
size = 1
if optimizer.lower() == 'sgd':
opt = SGD(lr=learning_rate * size, decay=weight_decay, momentum=momentum)
elif optimizer.lower() == 'rmsprop':
opt = RMSprop(lr=learning_rate * size, decay=weight_decay)
else:
opt = Adam(lr=learning_rate * size, decay=weight_decay)
model.compile(loss='categorical_crossentropy',
optimizer=opt,
metrics=['accuracy'])
return model
def get_filenames(channel_name, channel):
if channel_name in ['train', 'validation', 'eval']:
return [os.path.join(channel, channel_name + '.tfrecords')]
else:
raise ValueError('Invalid data subset "%s"' % channel_name)
def train_input_fn():
return _input(args.epochs, args.batch_size, args.train, 'train')
def eval_input_fn():
return _input(args.epochs, args.batch_size, args.eval, 'eval')
def validation_input_fn():
return _input(args.epochs, args.batch_size, args.validation, 'validation')
def _input(epochs, batch_size, channel, channel_name):
filenames = get_filenames(channel_name, channel)
dataset = tf.data.TFRecordDataset(filenames)
dataset = dataset.repeat(epochs)
dataset = dataset.prefetch(10)
dataset = dataset.map(
_dataset_parser, num_parallel_calls=10)
if channel_name == 'train':
buffer_size = int(NUM_EXAMPLES_PER_EPOCH_FOR_TRAIN * 0.4) + 3 * batch_size
dataset = dataset.shuffle(buffer_size=buffer_size)
dataset = dataset.batch(batch_size, drop_remainder=True)
iterator = dataset.make_one_shot_iterator()
image_batch, label_batch = iterator.get_next()
return {INPUT_TENSOR_NAME: image_batch}, label_batch
def _train_preprocess_fn(image):
image = tf.image.resize_image_with_crop_or_pad(image, HEIGHT + 8, WIDTH + 8)
image = tf.random_crop(image, [HEIGHT, WIDTH, DEPTH])
image = tf.image.random_flip_left_right(image)
return image
def _dataset_parser(value):
featdef = {
'image': tf.FixedLenFeature([], tf.string),
'label': tf.FixedLenFeature([], tf.int64),
}
example = tf.parse_single_example(value, featdef)
image = tf.decode_raw(example['image'], tf.uint8)
image.set_shape([DEPTH * HEIGHT * WIDTH])
image = tf.cast(
tf.transpose(tf.reshape(image, [DEPTH, HEIGHT, WIDTH]), [1, 2, 0]),
tf.float32)
label = tf.cast(example['label'], tf.int32)
image = _train_preprocess_fn(image)
return image, tf.one_hot(label, NUM_CLASSES)
def save_model(model, output):
signature = tf.saved_model.signature_def_utils.predict_signature_def(
inputs={'inputs': model.input}, outputs={'scores': model.output})
builder = tf.saved_model.builder.SavedModelBuilder(output+'/1/')
builder.add_meta_graph_and_variables(
sess=K.get_session(),
tags=[tf.saved_model.tag_constants.SERVING],
signature_def_map={"serving_default": signature})
builder.save()
logging.info("Model successfully saved at: {}".format(output))
return
def main(args):
logging.info("getting data")
train_dataset = train_input_fn()
eval_dataset = eval_input_fn()
validation_dataset = validation_input_fn()
logging.info("configuring model")
model = keras_model_fn(args.learning_rate, args.weight_decay, args.optimizer, args.momentum)
callbacks = []
callbacks.append(ModelCheckpoint(args.model_output_dir + '/checkpoint-{epoch}.h5'))
logging.info("Starting training")
model.fit(x=train_dataset[0], y=train_dataset[1],
steps_per_epoch=(num_examples_per_epoch('train') // args.batch_size),
epochs=args.epochs, validation_data=validation_dataset,
validation_steps=(num_examples_per_epoch('validation') // args.batch_size), callbacks=callbacks)
score = model.evaluate(eval_dataset[0], eval_dataset[1], steps=num_examples_per_epoch('eval') // args.batch_size,
verbose=0)
logging.info('Test loss:{}'.format(score[0]))
logging.info('Test accuracy:{}'.format(score[1]))
return save_model(model, args.model_output_dir)
def num_examples_per_epoch(subset='train'):
if subset == 'train':
return 40000
elif subset == 'validation':
return 10000
elif subset == 'eval':
return 10000
else:
raise ValueError('Invalid data subset "%s"' % subset)
if __name__ == '__main__':
parser = argparse.ArgumentParser()
parser.add_argument(
'--train',
type=str,
required=False,
default=os.environ.get('SM_CHANNEL_TRAIN'),
help='The directory where the CIFAR-10 input data is stored.')
parser.add_argument(
'--validation',
type=str,
required=False,
default=os.environ.get('SM_CHANNEL_VALIDATION'),
help='The directory where the CIFAR-10 input data is stored.')
parser.add_argument(
'--eval',
type=str,
required=False,
default=os.environ.get('SM_CHANNEL_EVAL'),
help='The directory where the CIFAR-10 input data is stored.')
parser.add_argument(
'--model_dir',
type=str,
required=True,
help='The directory where the model will be stored.')
parser.add_argument(
'--weight-decay',
type=float,
default=2e-4,
help='Weight decay for convolutions.')
parser.add_argument(
'--learning-rate',
type=float,
default=0.001,
help="""\
    This is the initial learning rate value. The learning rate will decrease
during training. For more details check the model_fn implementation in
this file.\
""")
parser.add_argument(
'--epochs',
type=int,
default=10,
help='The number of steps to use for training.')
parser.add_argument(
'--batch-size',
type=int,
default=128,
help='Batch size for training.')
parser.add_argument(
'--optimizer',
type=str,
default='adam')
parser.add_argument(
'--momentum',
type=float,
default='0.9')
parser.add_argument(
'--model_output_dir',
type=str,
default=os.environ.get('SM_MODEL_DIR'))
args = parser.parse_args()
main(args) | true | true |
f73e281fb75e286dad12e996fe2a93e1ad01aba4 | 683 | py | Python | vel/api/metrics/value_metric.py | cclauss/vel | 78a6a20af80ff613898d2983c83fdb223634aaad | [
"MIT"
] | null | null | null | vel/api/metrics/value_metric.py | cclauss/vel | 78a6a20af80ff613898d2983c83fdb223634aaad | [
"MIT"
] | null | null | null | vel/api/metrics/value_metric.py | cclauss/vel | 78a6a20af80ff613898d2983c83fdb223634aaad | [
"MIT"
] | null | null | null | from .base_metric import BaseMetric
class ValueMetric(BaseMetric):
    """ Base class for metrics that don't have state and just calculate a simple value """

    def __init__(self, name):
        super().__init__(name)
        self._metric_value = None

    def calculate(self, data_dict):
        """ Calculate value of a metric based on supplied data """
        self._metric_value = self._value_function(data_dict)

    def reset(self):
        """ Reset value of a metric """
        pass

    def value(self):
        """ Return current value for the metric """
        return self._metric_value

    def _value_function(self, data_dict):
        raise NotImplementedError
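The base class above is used by subclassing it and overriding `_value_function`. A self-contained sketch of the pattern follows; the stand-in `BaseMetric` (the real one lives in `base_metric.py`) and the `LossMetric` subclass are illustrative, not part of vel:

```python
class BaseMetric:
    # Minimal stand-in so this sketch runs on its own;
    # vel's real BaseMetric provides more than just a name.
    def __init__(self, name):
        self.name = name


class ValueMetric(BaseMetric):
    def __init__(self, name):
        super().__init__(name)
        self._metric_value = None

    def calculate(self, data_dict):
        self._metric_value = self._value_function(data_dict)

    def reset(self):
        pass

    def value(self):
        return self._metric_value

    def _value_function(self, data_dict):
        raise NotImplementedError


class LossMetric(ValueMetric):
    # A concrete subclass only has to say how to pull its value
    # out of the supplied data dictionary.
    def _value_function(self, data_dict):
        return data_dict['loss']


metric = LossMetric('loss')
metric.calculate({'loss': 0.25})
print(metric.value())  # -> 0.25
```

Because the metric keeps no running state, `reset` is deliberately a no-op: each `calculate` call simply overwrites the previous value.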
| 25.296296 | 90 | 0.651537 | true | true |
f73e285889cb990cb1dfc381934257b27dc05fca | 227 | py | Python | examples/text_suggest_api.py | sushi-chaaaan/pya3rt | d38cb21df8e476c1268ba039c973a0a2be93df36 | [
"MIT"
] | 12 | 2017-04-28T05:07:15.000Z | 2022-03-11T08:53:30.000Z | examples/text_suggest_api.py | sushi-chaaaan/pya3rt | d38cb21df8e476c1268ba039c973a0a2be93df36 | [
"MIT"
] | 3 | 2017-04-29T09:05:44.000Z | 2019-10-31T06:51:38.000Z | examples/text_suggest_api.py | sushi-chaaaan/pya3rt | d38cb21df8e476c1268ba039c973a0a2be93df36 | [
"MIT"
] | 8 | 2018-07-12T07:23:13.000Z | 2022-03-11T08:53:33.000Z | # -*- coding: utf-8 -*-
import pya3rt
apikey = "{YOUR_API_KEY}"
client = pya3rt.TextSuggestClient(apikey)
print(client.text_suggest("馬"))
print(client.text_suggest("あき", style=1))
print(client.text_suggest("func", style=2))
| 20.636364 | 43 | 0.718062 | true | true |
f73e286a7be57a1440b0a605de9d42d36073af54 | 7,991 | py | Python | hordak/migrations/0001_initial.py | CodeBrew-LTD/django-hordak | efdfe503bf38b0a283790c5b4d27bd6bb28155e4 | [
"MIT"
] | 187 | 2016-12-12T10:58:11.000Z | 2022-03-27T08:14:19.000Z | hordak/migrations/0001_initial.py | CodeBrew-LTD/django-hordak | efdfe503bf38b0a283790c5b4d27bd6bb28155e4 | [
"MIT"
] | 62 | 2016-12-10T00:12:47.000Z | 2022-03-16T09:23:05.000Z | hordak/migrations/0001_initial.py | CodeBrew-LTD/django-hordak | efdfe503bf38b0a283790c5b4d27bd6bb28155e4 | [
"MIT"
] | 47 | 2016-12-12T11:07:31.000Z | 2022-03-15T20:30:07.000Z | # -*- coding: utf-8 -*-
# Generated by Django 1.10.1 on 2016-09-13 11:01
from __future__ import unicode_literals
from django.db import migrations, models
import django.db.models.deletion
import django.utils.timezone
import django_smalluuid.models
import mptt.fields
class Migration(migrations.Migration):
initial = True
dependencies = []
operations = [
migrations.CreateModel(
name="Account",
fields=[
(
"id",
models.AutoField(
auto_created=True, primary_key=True, serialize=False, verbose_name="ID"
),
),
(
"uuid",
django_smalluuid.models.SmallUUIDField(
default=django_smalluuid.models.UUIDDefault(), editable=False, unique=True
),
),
("name", models.CharField(max_length=50)),
("code", models.CharField(max_length=3)),
(
"_type",
models.CharField(
blank=True,
choices=[
("AS", "Asset"),
("LI", "Liability"),
("IN", "Income"),
("EX", "Expense"),
("EQ", "Equity"),
],
max_length=2,
),
),
(
"has_statements",
models.BooleanField(
default=False,
help_text="Does this account have statements to reconcile against. This is typically the case for bank accounts.",
),
),
("lft", models.PositiveIntegerField(db_index=True, editable=False)),
("rght", models.PositiveIntegerField(db_index=True, editable=False)),
("tree_id", models.PositiveIntegerField(db_index=True, editable=False)),
("level", models.PositiveIntegerField(db_index=True, editable=False)),
(
"parent",
mptt.fields.TreeForeignKey(
blank=True,
null=True,
on_delete=django.db.models.deletion.CASCADE,
related_name="children",
to="hordak.Account",
),
),
],
),
migrations.CreateModel(
name="Leg",
fields=[
(
"id",
models.AutoField(
auto_created=True, primary_key=True, serialize=False, verbose_name="ID"
),
),
(
"uuid",
django_smalluuid.models.SmallUUIDField(
default=django_smalluuid.models.UUIDDefault(), editable=False, unique=True
),
),
(
"amount",
models.DecimalField(
decimal_places=2,
help_text="Record debits as positive, credits as negative",
max_digits=13,
),
),
("description", models.TextField(blank=True, default="")),
(
"account",
models.ForeignKey(
on_delete=django.db.models.deletion.CASCADE,
related_name="legs",
to="hordak.Account",
),
),
],
),
migrations.CreateModel(
name="StatementImport",
fields=[
(
"id",
models.AutoField(
auto_created=True, primary_key=True, serialize=False, verbose_name="ID"
),
),
(
"uuid",
django_smalluuid.models.SmallUUIDField(
default=django_smalluuid.models.UUIDDefault(), editable=False, unique=True
),
),
("timestamp", models.DateTimeField(default=django.utils.timezone.now)),
(
"bank_account",
models.ForeignKey(
on_delete=django.db.models.deletion.CASCADE,
related_name="imports",
to="hordak.Account",
),
),
],
),
migrations.CreateModel(
name="StatementLine",
fields=[
(
"id",
models.AutoField(
auto_created=True, primary_key=True, serialize=False, verbose_name="ID"
),
),
(
"uuid",
django_smalluuid.models.SmallUUIDField(
default=django_smalluuid.models.UUIDDefault(), editable=False, unique=True
),
),
("timestamp", models.DateTimeField(default=django.utils.timezone.now)),
("date", models.DateField()),
("amount", models.DecimalField(decimal_places=2, max_digits=13)),
("description", models.TextField(blank=True, default="")),
(
"statement_import",
models.ForeignKey(
on_delete=django.db.models.deletion.CASCADE,
related_name="lines",
to="hordak.StatementImport",
),
),
],
),
migrations.CreateModel(
name="Transaction",
fields=[
(
"id",
models.AutoField(
auto_created=True, primary_key=True, serialize=False, verbose_name="ID"
),
),
(
"uuid",
django_smalluuid.models.SmallUUIDField(
default=django_smalluuid.models.UUIDDefault(), editable=False, unique=True
),
),
(
"timestamp",
models.DateTimeField(
default=django.utils.timezone.now,
help_text="The creation date of this transaction object",
),
),
(
"date",
models.DateField(
default=django.utils.timezone.now,
help_text="The date on which this transaction occurred",
),
),
("description", models.TextField(blank=True, default="")),
],
),
migrations.AddField(
model_name="statementline",
name="transaction",
field=models.ForeignKey(
blank=True,
default=None,
help_text="Reconcile this statement line to this transaction",
null=True,
on_delete=django.db.models.deletion.CASCADE,
to="hordak.Transaction",
),
),
migrations.AddField(
model_name="leg",
name="transaction",
field=models.ForeignKey(
on_delete=django.db.models.deletion.CASCADE,
related_name="legs",
to="hordak.Transaction",
),
),
migrations.AlterUniqueTogether(name="account", unique_together=set([("parent", "code")])),
]
| 36.824885 | 138 | 0.417845 | true | true |
f73e293616698db46a21ac440b641745d4241adc | 551 | py | Python | stage1/process_demo/demo3.py | kaixiang1992/python-review | 7f4f82b453f81b47af7ab1e8f3b3d07d1d75cbe4 | [
"MIT"
] | null | null | null | stage1/process_demo/demo3.py | kaixiang1992/python-review | 7f4f82b453f81b47af7ab1e8f3b3d07d1d75cbe4 | [
"MIT"
] | null | null | null | stage1/process_demo/demo3.py | kaixiang1992/python-review | 7f4f82b453f81b47af7ab1e8f3b3d07d1d75cbe4 | [
"MIT"
] | null | null | null | """
2019/12/08 15:16
142. [Python multitasking] Creating a child process by subclassing (processes)
"""
"""
Creating a child process with a class:
Sometimes you want to define the child process's code as a class. You can define your own class
that inherits from `Process` and implement the `run` method in it; when the child process runs,
it will execute the code in `run`.
"""
from multiprocessing import Process
import os


class zhiliao(Process):
    def run(self):
        print('Child process ID: %s' % os.getpid())
        print('Parent process ID: %s' % os.getppid())
        for x in range(0, 5):
            print('Child process: %s' % x)


if __name__ == '__main__':
    p = zhiliao()
    p.start()
    print('Main process ID: %s' % os.getpid())
    p.join()
    print('All child process code finished executing...')
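Building on the lesson above, a subclassed `Process` can also take its own constructor arguments, as long as it calls `Process.__init__` before `start()` is used. The class and parameter names below are illustrative:

```python
from multiprocessing import Process
import os


class CountProcess(Process):
    """A Process subclass that accepts its own constructor argument."""

    def __init__(self, count):
        super().__init__()  # Process.__init__ must run before start()
        self.count = count

    def run(self):
        # Executed in the child process once start() is called.
        for x in range(self.count):
            print('child %s: %s' % (os.getpid(), x))


if __name__ == '__main__':
    p = CountProcess(3)
    p.start()
    p.join()
    print('child exited with code %s' % p.exitcode)
```

After `join()` returns, `p.exitcode` is 0 when `run` completed without raising.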
| 17.21875 | 48 | 0.607985 | true | true |
f73e29dfcb875a58615903b00ad31196b8d22a4c | 371 | py | Python | tests/__init__.py | denjas/nempy | 3581ef56b399b1578a2881e6aecf5fe25a345158 | [
"Apache-2.0"
] | 2 | 2021-08-12T19:13:11.000Z | 2021-08-16T19:53:35.000Z | tests/__init__.py | denjas/nempy | 3581ef56b399b1578a2881e6aecf5fe25a345158 | [
"Apache-2.0"
] | null | null | null | tests/__init__.py | denjas/nempy | 3581ef56b399b1578a2881e6aecf5fe25a345158 | [
"Apache-2.0"
] | null | null | null | import logging
logging.getLogger('asyncio').setLevel(logging.ERROR)
logging.getLogger('asyncio.coroutines').setLevel(logging.ERROR)
logging.getLogger('websockets').setLevel(logging.ERROR)
logging.getLogger('urllib3').setLevel(logging.ERROR)
log_format = "[%(asctime)s][%(levelname)s] %(name)s - %(message)s"
logging.basicConfig(level=logging.DEBUG, format=log_format)
| 33.727273 | 66 | 0.781671 | true | true |