Description
===========

:Class: `roman.tweakreg.TweakRegStep`
:Alias: tweakreg

Overview
--------
This step uses the coordinates of point-like sources from an input catalog
(i.e. the result from `SourceDetectionStep` saved in the
``meta.tweakreg_catalog`` attribute) and compares them with the coordinates
from a Gaia catalog to compute corrections to the WCS of the input images,
such that sky catalogs obtained from the image catalogs using the corrected
WCS will align on the sky.

Custom Source Catalogs
----------------------
The default catalog used by the ``tweakreg`` step can be disabled by providing
the file name of a custom source catalog in the ``meta.tweakreg_catalog``
attribute of the input data models. The catalog must be in a format
automatically recognized by :py:meth:`~astropy.table.Table.read`. The catalog
must contain either ``'x'`` and ``'y'`` or ``'xcentroid'`` and ``'ycentroid'``
columns, which indicate source *image* coordinates (in pixels). Pixel
coordinates are 0-indexed.

For the ``tweakreg`` step to use user-provided input source catalogs, the
``use_custom_catalogs`` parameter of the ``tweakreg`` step must be set to
`True`.

In addition to setting the ``meta.tweakreg_catalog`` attribute of input data
models to the custom catalog file name, the ``tweakreg_step`` also supports
two other ways of supplying custom source catalogs to the step:

1. Adding a ``tweakreg_catalog`` attribute to the ``members`` of the input ASN
   table - see `~roman.datamodels.ModelContainer` for more details. Catalog
   file names are relative to the ASN file path.

2. Providing a simple two-column text file, specified via the step's
   ``catfile`` parameter, that contains the input data models' file names in
   the first column and the file names of the corresponding catalogs in the
   second column. Catalog file names are relative to the ``catfile`` file
   path.
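To make the two-column ``catfile`` format concrete, here is a minimal,
illustrative parser written in plain Python. The file names and the
``parse_catfile`` helper are hypothetical examples for this sketch only;
romancal's actual parser may differ in details (this sketch simply assumes
whitespace-separated columns, as described above).

```python
# Illustrative only: a minimal parser for the two-column ``catfile``
# format described above. File names here are hypothetical examples.
from pathlib import Path

catfile_text = """\
img1.asdf  catalog1.ecsv
img2.asdf  catalog2.ecsv
img3.asdf
"""

def parse_catfile(text, catfile_dir="."):
    """Map each data model file name to its catalog path (or None)."""
    mapping = {}
    for line in text.splitlines():
        parts = line.split()
        if not parts:
            continue
        model_name = parts[0]
        # An empty second column means "no custom catalog": the step
        # will then generate a source catalog automatically.
        catalog = str(Path(catfile_dir) / parts[1]) if len(parts) > 1 else None
        mapping[model_name] = catalog
    return mapping

mapping = parse_catfile(catfile_text)
```

Note how the missing second column for ``img3.asdf`` maps to `None`, matching
the behavior described in the notes below: the step then generates a source
catalog for that model automatically.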

Specifying custom source catalogs via either the input ASN table or
``catfile`` will update the input data models' ``meta.tweakreg_catalog``
attributes to the catalog file names provided in either the ASN table or
``catfile``.

.. note::
    When custom source catalogs are provided via both ``catfile`` and the ASN
    table members' attributes, ``catfile`` takes precedence and catalogs
    specified via the ASN table are ignored altogether.

.. note::
    1. Providing a data model file name in the ``catfile`` and leaving the
       corresponding source catalog file name empty -- same as setting
       ``'tweakreg_catalog'`` in the ASN table to an empty string ``""`` --
       will set the corresponding input data model's
       ``meta.tweakreg_catalog`` attribute to `None`. In this case,
       ``tweakreg_step`` will automatically generate a source catalog for
       that data model.

    2. If an input data model is not listed in the ``catfile``, or does not
       have a ``'tweakreg_catalog'`` attribute provided in the ASN table,
       then the catalog file name in that model's ``meta.tweakreg_catalog``
       attribute will be used. If ``model.meta.tweakreg_catalog`` is `None`,
       ``tweakreg_step`` will automatically generate a source catalog for
       that data model.

Alignment
---------
The source catalog (either created by `SourceDetectionStep` or provided by
the user) gets cross-matched and fit to an astrometric reference catalog
(set by ``TweakRegStep.abs_refcat``), and the results are stored in
``model.meta.wcs_fit_results``. The pipeline initially supports fitting to
any Gaia Data Release (defaults to `GAIADR3`).

An example of the content of ``model.meta.wcs_fit_results`` is as follows:

.. code-block:: python

    model.meta.wcs_fit_results = {
        "status": "SUCCESS",
        "fitgeom": "rshift",
        "matrix": array([[ 1.00000000e+00,  1.04301609e-13],
                         [-1.04301609e-13,  1.00000000e+00]]),
        "shift": array([ 7.45523163e-11, -1.42718944e-10]),
        "center": array([-183.87997841, -119.38467775]),
        "proper_rot": 5.9760419875149846e-12,
        "proper": True,
        "rot": (5.9760419875149846e-12, 5.9760419875149846e-12),
        "<rot>": 5.9760419875149846e-12,
        "scale": (1.0, 1.0),
        "<scale>": 1.0,
        "skew": 0.0,
        "rmse": 2.854152848489525e-10,
        "mae": 2.3250544963289652e-10,
        "nmatches": 22,
    }

Details about most of the parameters available in
``model.meta.wcs_fit_results`` can be found on the TweakWCS_ webpage, under
its linearfit_ module.

WCS Correction
--------------
The linear coordinate transformation computed in the previous step is used to
define tangent-plane corrections that need to be applied to the GWCS pipeline
in order to correct the input image WCS. This correction is implemented by
inserting a ``v2v3corr`` frame with tangent-plane corrections into the GWCS
pipeline of the image's WCS.

Step Arguments
--------------
``TweakRegStep`` has the following arguments:

**Catalog parameters:**

* ``use_custom_catalogs``: A boolean that indicates whether to ignore the
  source catalog in the input data model's ``meta.tweakreg_catalog``
  attribute (Default=`False`).

  .. note::
      If `True`, the user must provide a valid custom catalog that will be
      assigned to ``meta.tweakreg_catalog`` and used throughout the step.

* ``catalog_format``: A `str` indicating one of the catalog output file
  formats supported by :py:class:`astropy.table.Table`
  (Default='ascii.ecsv').

  .. note::
      - This option must be provided whenever ``use_custom_catalogs = True``.
      - The full list of supported formats can be found on astropy's
        `Built-In Table Readers/Writers`_ webpage.

.. _`Built-In Table Readers/Writers`: https://docs.astropy.org/en/stable/io/unified.html#built-in-table-readers-writers

* ``catfile``: Name of the file with a list of custom user-provided catalogs
  (Default='').

  .. note::
      This option must be provided whenever ``use_custom_catalogs = True``.

* ``catalog_path``: A `str` indicating the catalog output file path
  (Default='').

  .. note::
      All catalogs will be saved to this path. The default value is the
      current working directory.

**Reference Catalog parameters:**

* ``expand_refcat``: A boolean indicating whether or not to expand the
  reference catalog with new sources from other input images that have
  already been aligned to the reference image (Default=False).

**Object matching parameters:**

* ``minobj``: A positive `int` indicating the minimum number of objects
  acceptable for matching (Default=15).

* ``searchrad``: A `float` indicating the search radius in arcsec for a match
  (Default=2.0).

* ``use2dhist``: A boolean indicating whether to use a 2D histogram to find
  the initial offset (Default=True).

* ``separation``: Minimum object separation in arcsec (Default=1.0).

* ``tolerance``: Matching tolerance for ``xyxymatch`` in arcsec
  (Default=0.7).

**Catalog fitting parameters:**

* ``fitgeometry``: A `str` value indicating the type of affine transformation
  to be considered when fitting catalogs. Allowed values:

  - ``'shift'``: x/y shifts only
  - ``'rshift'``: rotation and shifts
  - ``'rscale'``: rotation and scale
  - ``'general'``: shift, rotation, and scale

  The default value is "rshift".

  .. note::
      Mathematically, alignment of images observed in different tangent
      planes requires ``fitgeometry='general'`` in order to fit source
      catalogs in the different images, even if the mis-alignment is caused
      only by a shift or rotation in the tangent plane of one of the images.
      However, under certain circumstances, such as small alignment errors
      or minimal dithering during observations that keep the tangent planes
      of the images to be aligned almost parallel, it may be more robust to
      use a ``fitgeometry`` setting with fewer degrees of freedom, such as
      ``'rshift'``. This is especially true for "ill-conditioned" source
      catalogs, such as catalogs with very few sources, large errors in
      source positions, or sources placed along a line or bunched in a
      corner of the image (not spread across/covering the entire image).

* ``nclip``: A non-negative integer number of clipping iterations to use in
  the fit (Default=3).

* ``sigma``: A positive `float` indicating the clipping limit, in sigma
  units, used when performing the fit (Default=3.0).

**Absolute Astrometric fitting parameters:**

Parameters used for absolute astrometry to a reference catalog.

* ``abs_refcat``: String indicating what astrometric catalog should be used.
  Currently supported options are (Default='GAIADR3'): ``'GAIADR1'``,
  ``'GAIADR2'``, or ``'GAIADR3'``.

  .. note::
      If `None` or an empty string is passed in, `TweakRegStep` will use the
      default catalog as set by ``tweakreg_step.DEFAULT_ABS_REFCAT``.

* ``abs_minobj``: A positive `int` indicating the minimum number of objects
  acceptable for matching (Default=15).

* ``abs_searchrad``: A `float` indicating the search radius in arcsec for a
  match. It is recommended that a value larger than ``searchrad`` be used for
  this parameter (e.g. 3 times larger) (Default=6.0).

* ``abs_use2dhist``: A boolean indicating whether to use a 2D histogram to
  find the initial offset. It is strongly recommended to set this parameter
  to `True`; otherwise the initial guess for the offsets will be set to zero
  (Default=True).

* ``abs_separation``: Minimum object separation in arcsec. It is recommended
  that a value smaller than ``separation`` be used for this parameter
  (e.g. 10 times smaller) (Default=0.1).

* ``abs_tolerance``: Matching tolerance for ``xyxymatch`` in arcsec
  (Default=0.7).

* ``abs_fitgeometry``: A `str` value indicating the type of affine
  transformation to be considered when fitting catalogs. Allowed values:

  - ``'shift'``: x/y shifts only
  - ``'rshift'``: rotation and shifts
  - ``'rscale'``: rotation and scale
  - ``'general'``: shift, rotation, and scale

  The default value is "rshift". Note that the same conditions/restrictions
  that apply to ``fitgeometry`` also apply to ``abs_fitgeometry``.

* ``abs_nclip``: A non-negative integer number of clipping iterations to use
  in the fit (Default=3).

* ``abs_sigma``: A positive `float` indicating the clipping limit, in sigma
  units, used when performing the fit (Default=3.0).

* ``save_abs_catalog``: A boolean specifying whether or not to write out the
  astrometric catalog used for the fit as a separate product (Default=False).

Further Documentation
---------------------
The underlying algorithms, as well as the formats of source catalogs, are
described in more detail on the TweakWCS_ webpage.

.. _TweakWCS: https://tweakwcs.readthedocs.io/en/latest/

.. _linearfit: https://tweakwcs.readthedocs.io/en/latest/source/linearfit.html#tweakwcs.linearfit.iter_linear_fit
/romancal-0.12.0.tar.gz/romancal-0.12.0/docs/roman/tweakreg/README.rst
=====
Steps
=====

.. _writing-a-step:

Writing a step
==============

Writing a new step involves writing a class that has a `process` method to
perform work and a `spec` member to define its configuration parameters.
(Optionally, the `spec` member may be defined in a separate `spec` file.)

Inputs and outputs
------------------

A `Step` provides a full framework for handling I/O. Steps get their inputs
from two sources:

- Configuration parameters come from the parameter file or the command line
  and are set as member variables on the Step object by the stpipe framework.

- Arguments are passed to the Step's `process` function as regular function
  arguments.

Parameters should be used to specify things that must be determined outside
of the code by a user using the class. Arguments should be used to pass data
that needs to go from one step to another as part of a larger pipeline.
Another way to think about this is: if the user would want to examine or
change the value, use a parameter.

The parameters are defined by the :ref:`Step.spec <the-spec-member>` member.

Input Files, Associations, and Directories
``````````````````````````````````````````

It is presumed that all input files are co-resident in the same directory.
This directory is whichever directory the first input file is found in. This
is particularly important for associations. It is assumed that all files
referenced by an association are in the same directory as the association
file itself.

Output Files and Directories
````````````````````````````

The step will generally return its output as a data model. Every step has
implicitly created parameters `output_dir` and `output_file` which the user
can use to specify the directory and file to save this model to. Since the
`stpipe` architecture generally creates output file names, it is expected
that `output_file` will rarely be specified, and that different sets of
outputs will be separated using `output_dir`.

Output Suffix
-------------

There are three ways a step's results can be written to a file:

1. Implicitly, when a step is run from the command line or with
   `Step.from_cmdline`.
2. Explicitly, by specifying the parameter `save_results`.
3. Explicitly, by specifying a file name with the parameter `output_file`.

In all cases, the file or files are created with an added suffix at the end
of the base file name. By default this suffix is the class name of the step
that produced the results. Use the `suffix` parameter to explicitly change
the suffix.

The Python class
----------------

At a minimum, the Python Step class should inherit from `stpipe.Step`,
implement a ``process`` method to do the actual work of the step, and have a
`spec` member to describe its parameters.

1. Objects from other Steps in a pipeline are passed as arguments to the
   ``process`` method.

2. The parameters described in :ref:`configuring-a-step` are available as
   member variables on ``self``.

3. To support the caching suspend/resume feature of pipelines, images must
   be passed between steps as model objects. To ensure you're always getting
   a model object, call the model constructor on the parameter passed in. It
   is a good idea to use a `with` statement here to ensure that if the input
   is a file path, the file will be appropriately closed.

4. Objects to pass to other Steps in the pipeline are simply returned from
   the function. To return multiple objects, return a tuple.

5. The parameters for the step are described in the `spec` member in the
   `configspec` format.

6. Declare any CRDS reference files used by the Step (see
   :ref:`interfacing-with-crds`).

::

    from romancal.stpipe import RomanStep
    from roman_datamodels.datamodels import ImageModel
    from my_awesome_astronomy_library import combine

    class ExampleStep(RomanStep):
        """
        Every step should include a docstring.  This docstring will be
        displayed by `strun --help`.
        """

        # 1.
        def process(self, image1, image2):
            self.log.info("Inside ExampleStep")

            # 2.
            threshold = self.threshold

            # 3.
            with ImageModel(image1) as image1, ImageModel(image2) as image2:
                # 4.
                with self.get_reference_file_model(image1, "flat_field") as flat:
                    new_image = combine(image1, image2, flat, threshold)

                # 5.
                return new_image

        # 6.
        spec = """
        # This is the configspec file for ExampleStep
        threshold = float(default=1.0)  # maximum flux
        """

        # 7.
        reference_file_types = ['flat_field']

The Python Step subclass may be installed anywhere that your Python
installation can find it. It does not need to be installed in the `stpipe`
package.

.. _the-spec-member:

The spec member
---------------

The `spec` member variable is a string containing information about the
parameters. It is in the `configspec` format defined in the `ConfigObj`
library that stpipe uses.

The `configspec` format defines the types of the parameters, as well as
allowing an optional tree structure. The types of parameters are declared
like this::

    n_iterations = integer(1, 100)  # The number of iterations to run
    factor = float()                # A multiplication factor
    author = string()               # The author of the file

Note that each parameter may have a comment. This comment is extracted and
displayed in help messages, docstrings, etc.

Parameters can be grouped into categories using ini-file-like syntax::

    [red]
    offset = float()
    scale = float()

    [green]
    offset = float()
    scale = float()

    [blue]
    offset = float()
    scale = float()

Default values may be specified on any parameter using the `default` keyword
argument::

    name = string(default="John Doe")

While the most commonly useful parts of the configspec format are discussed
here, greater detail can be found in the `configspec documentation
<https://configobj.readthedocs.io/en/latest/>`_.

Configspec types
````````````````

The following is a list of the commonly useful configspec types.

`integer`: matches integer values. Takes optional `min` and `max`
arguments::

    integer()
    integer(3, 9)    # any value from 3 to 9
    integer(min=0)   # any positive value
    integer(max=9)

`float`: matches float values. Has the same parameters as the `integer`
check.

`boolean`: matches boolean values: True or False.

`string`: matches any string. Takes optional keyword args `min` and `max` to
specify the min and max length of the string.

`list`: matches any list. Takes optional keyword args `min` and `max` to
specify the min and max sizes of the list. The list checks always return a
list.

`force_list`: matches any list, but if a single value is passed in, it will
coerce it into a list containing that value.

`int_list`: matches a list of integers. Takes the same arguments as `list`.

`float_list`: matches a list of floats. Takes the same arguments as `list`.

`bool_list`: matches a list of boolean values. Takes the same arguments as
`list`.

`string_list`: matches a list of strings. Takes the same arguments as
`list`.

`option`: matches any value from a list of options. You specify this test
with::

    option('option 1', 'option 2', 'option 3')

Normally, steps will receive input files as parameters and return output
files from their process methods. However, in cases where paths to files
should be specified in the parameter file, there are some extra parameter
types that stpipe provides that aren't part of the core configobj library.

`input_file`: specifies an input file. Relative paths are resolved against
the location of the parameter file. The file must also exist.

`output_file`: specifies an output file. Identical to `input_file`, except
the file doesn't have to already exist.

.. _interfacing-with-crds:

Interfacing with CRDS
---------------------

If a Step uses CRDS to retrieve reference files, there are two things to do:

1. Within the `process` method, call `self.get_reference_file` or
   `self.get_reference_file_model` to get a reference file from CRDS. These
   methods take as input a) a model for the input file, whose metadata is
   used to do a CRDS bestref lookup, and b) a reference file type, which is
   just a string to identify the kind of reference file.

2. Declare the reference file types used by the Step in the
   `reference_file_types` member. This information is used by the stpipe
   framework for two purposes: a) to pre-cache the reference files needed by
   a Pipeline before any of the pipeline processing actually runs, and b) to
   add override parameters to the Step's configspec.

For each reference file type that the Step declares, an `override_*`
parameter is added to the Step's configspec. For example, if a step declares
the following::

    reference_file_types = ['flat_field']

then the user can override the flat field reference file using the parameter
file::

    override_flat_field = /path/to/my_reference_file.asdf

or at the command line::

    --override_flat_field=/path/to/my_reference_file.asdf
/romancal-0.12.0.tar.gz/romancal-0.12.0/docs/roman/stpipe/devel_step.rst
.. _stpipe-user-pipelines:

=========
Pipelines
=========

It is important to note that a Pipeline is also a Step, so everything that
applies to a Step in the :ref:`stpipe-user-steps` chapter also applies to
Pipelines.

Configuring a Pipeline
======================

This section describes how to set parameters on the individual steps in a
pipeline. To change the order of steps in a pipeline, one must write a
Pipeline subclass in Python. That is described in the :ref:`devel-pipelines`
section of the developer documentation.

Just as with Steps, Pipelines can be configured either by a parameter file or
directly from Python.

From a parameter file
---------------------

A Pipeline parameter file follows the same format as a Step parameter file:
:ref:`config_asdf_files`.

Here is an example pipeline parameter file for the `ExposurePipeline` class:

.. code-block:: yaml

    #ASDF 1.0.0
    #ASDF_STANDARD 1.5.0
    %YAML 1.1
    %TAG ! tag:stsci.edu:asdf/
    --- !core/asdf-1.1.0
    asdf_library: !core/software-1.0.0 {author: The ASDF Developers, homepage: 'http://github.com/asdf-format/asdf',
      name: asdf, version: 2.13.0}
    history:
      extensions:
      - !core/extension_metadata-1.0.0
        extension_class: asdf.extension.BuiltinExtension
        software: !core/software-1.0.0 {name: asdf, version: 2.13.0}
    class: romancal.pipeline.exposure_pipeline.ExposurePipeline
    meta:
      author: <SPECIFY>
      date: '2022-09-15T13:59:54'
      description: Parameters for calibration step romancal.pipeline.exposure_pipeline.ExposurePipeline
      instrument: {name: <SPECIFY>}
      origin: <SPECIFY>
      pedigree: <SPECIFY>
      reftype: <SPECIFY>
      telescope: <SPECIFY>
      useafter: <SPECIFY>
    name: ExposurePipeline
    parameters:
      input_dir: ''
      output_dir: null
      output_ext: .asdf
      output_file: null
      output_use_index: true
      output_use_model: false
      post_hooks: []
      pre_hooks: []
      save_calibrated_ramp: false
      save_results: true
      search_output_file: true
      skip: false
      suffix: null
    steps:
    - class: romancal.dq_init.dq_init_step.DQInitStep
      name: dq_init
      parameters:
        input_dir: ''
        output_dir: null
        output_ext: .asdf
        output_file: null
        output_use_index: true
        output_use_model: false
        post_hooks: []
        pre_hooks: []
        save_results: false
        search_output_file: true
        skip: false
        suffix: null
    - class: romancal.saturation.saturation_step.SaturationStep
    ...

Just like a ``Step``, it must have ``name`` and ``class`` values. Here the
``class`` must refer to a subclass of `stpipe.Pipeline`.

Following ``name`` and ``class`` is the ``steps`` section. Under this section
is a subsection for each step in the pipeline. The easiest way to get started
on a parameter file is to call ``Step.export_config`` and then edit the file
that is created. This will generate an ASDF config file that includes every
available parameter, which can then be trimmed to the parameters that require
customization.

For each Step's section, the parameters for that step may either be specified
inline, or specified by referencing an external parameter file just for that
step. For example, a pipeline parameter file that contains:

.. code-block:: yaml

    - class: romancal.jump.jump_step.JumpStep
      name: jump
      parameters:
        flag_4_neighbors: true
        four_group_rejection_threshold: 190.0
        input_dir: ''
        max_jump_to_flag_neighbors: 1000.0
        maximum_cores: none
        min_jump_to_flag_neighbors: 10.0

is equivalent to:

.. code-block:: yaml

    steps:
    - class: romancal.jump.jump_step.JumpStep
      name: jump
      parameters:
        config_file: myjump.asdf

with the file ``myjump.asdf`` in the same directory:

.. code-block:: yaml

    class: romancal.jump.jump_step.JumpStep
    name: jump
    parameters:
      flag_4_neighbors: true
      four_group_rejection_threshold: 190.0

If both a ``config_file`` and additional parameters are specified, the
``config_file`` is loaded first, and then the local parameters override them.

Any optional parameters for each Step may be omitted, in which case defaults
will be used.

From Python
-----------

A pipeline may be configured from Python by passing a nested dictionary of
parameters to the Pipeline's constructor.
Each key is the name of a step, and the value is another dictionary
containing parameters for that step. For example, the following is the
equivalent of the parameter file above:

.. code-block:: python

    from romancal.pipeline import ExposurePipeline

    steps = {
        'jump': {
            'rejection_threshold': 180.,
            'three_group_rejection_threshold': 190.,
            'four_group_rejection_threshold': 195.,
        }
    }

    pipe = ExposurePipeline(steps=steps)

Running a Pipeline
==================

From the commandline
--------------------

The same ``strun`` script used to run Steps from the commandline can also run
Pipelines. The only wrinkle is that any parameters overridden from the
commandline use dot notation to specify the parameter name. For example, to
override the ``rejection_threshold`` value on the ``jump`` step in the
example above, one can do::

    > strun romancal.pipeline.ExposurePipeline --steps.jump.rejection_threshold=180.

From Python
-----------

Once the pipeline has been configured (as above), just call the instance to
run it::

    pipe(input_data)

Caching details
---------------

The results of a Step are cached using Python pickles. This allows most of
the standard Python data types to be cached. In addition, any ASDF models
that are the result of a step are saved as standalone ASDF files to make them
more easily used by external tools. The filenames are based on the name of
the substep within the pipeline.

Hooks
=====

Each Step in a pipeline can also have pre- and post-hooks associated with it.
Hooks themselves are Step instances, but there are some conveniences provided
to make them easier to specify in a parameter file.

Pre-hooks are run right before the Step. The inputs to the pre-hook are the
same as the inputs to their parent Step. Post-hooks are run right after the
Step. The inputs to the post-hook are the return value(s) from the parent
Step. The return values are always passed as a list.
If the return value from the parent Step is a single item, a list containing
this single item is passed to the post-hooks. This allows the post-hooks to
modify the return results, if necessary.

Hooks are specified using the ``pre_hooks`` and ``post_hooks`` parameters
associated with each step. More than one pre- or post-hook may be assigned,
and they are run in the order they are given. There can also be ``pre_hooks``
and ``post_hooks`` on the Pipeline as a whole (since a Pipeline is also a
Step). Each of these parameters is a list of strings, where each entry is one
of:

- An external commandline application. The arguments can be accessed using
  {0}, {1}, etc. (see `stpipe.subproc.SystemCall`).

- A dot-separated path to a Python Step class.

- A dot-separated path to a Python function.
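As a concrete sketch of the third form, a Python-function post-hook receives
the parent Step's return values as a list and may modify them. The function
name and module path below are hypothetical, invented for this example:

```python
# Hypothetical post-hook function; in a parameter file it would be
# referenced by its dot-separated path, e.g. "mypackage.hooks.tag_results".
def tag_results(results):
    """Post-hook sketch: annotate each result with a marker attribute."""
    # ``results`` is always a list, even for a single return value.
    for item in results:
        if hasattr(item, "__dict__"):  # only objects that accept attributes
            item.hooked = True
    return results
```

Because the return values arrive as a list even for a single result, the hook
can treat every invocation uniformly.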
/romancal-0.12.0.tar.gz/romancal-0.12.0/docs/roman/stpipe/user_pipeline.rst
=====
Steps
=====

.. _configuring-a-step:

Configuring a Step
==================

This section describes how to instantiate a Step and set configuration
parameters on it.

Steps can be configured by:

- Instantiating the Step directly from Python
- Reading the input from a parameter file

.. _running_a_step_from_a_configuration_file:

Running a Step from a parameter file
====================================

A parameter file contains one or more of a ``Step``'s parameters. Any
parameter not specified in the file will take its value from the defaults
coded directly into the ``Step``. Note that any parameter specified on the
command line overrides all other values.

The format of parameter files is the :ref:`config_asdf_files` format. Refer
to the :ref:`minimal example <asdf_minimal_file>` for a complete description
of the contents. The rest of this document will focus on the step parameters
themselves.

Every parameter file must contain the key ``class``, followed by the optional
``name``, followed by any parameters that are specific to the step being run.

``class`` specifies the Python class to run. It should be a fully-qualified
Python path to the class. Step classes can ship with ``stpipe`` itself, they
may be part of other Python packages, or they may exist in freestanding
modules alongside the configuration file.

``name`` defines the name of the step. This is distinct from the class of the
step, since the same class of Step may be configured in different ways, and
it is useful to be able to have a way of distinguishing between them. For
example, when Steps are combined into :ref:`stpipe-user-pipelines`, a
Pipeline may use the same Step class multiple times, each with different
configuration parameters.

The parameters specific to the Step all reside under the key ``parameters``.
The set of accepted parameters is defined in the Step's spec member. The
easiest way to get started on a parameter file is to call
``Step.export_config`` and then edit the file that is created.
This will generate an ASDF config file that includes every available
parameter, which can then be trimmed to the parameters that require
customization.

Here is an example parameter file (``do_cleanup.asdf``) that runs the
(imaginary) step ``stpipe.cleanup`` to clean up an image.

.. code-block::

    #ASDF 1.0.0
    #ASDF_STANDARD 1.3.0
    %YAML 1.1
    %TAG ! tag:stsci.edu:asdf/
    --- !core/asdf-1.1.0
    class: stpipe.cleanup
    name: MyCleanup
    parameters:
      threshold: 42.0
      scale: 0.01
    ...

.. _strun:

Running a Step from the commandline
-----------------------------------

The ``strun`` command can be used to run Steps from the commandline. The
first argument may be either:

- A parameter file
- A Python class

Additional parameters may be passed on the commandline. These parameters
override any defaults. Any extra positional parameters on the commandline are
passed to the step's process method. These will often be input filenames.

To display a list of the parameters that are accepted for a given Step class,
pass the ``-h`` parameter along with the name of a Step class or parameter
file::

    $ strun -h romancal.dq_init.DQInitStep
    usage: strun [-h] [--logcfg LOGCFG] [--verbose] [--debug]
                 [--save-parameters SAVE_PARAMETERS] [--disable-crds-steppars]
                 [--pre_hooks] [--post_hooks] [--output_file] [--output_dir]
                 [--output_ext] [--output_use_model] [--output_use_index]
                 [--save_results] [--skip] [--suffix] [--search_output_file]
                 [--input_dir] [--override_mask]
                 cfg_file_or_class [args ...]

    (selected) optional arguments:
      -h, --help            show this help message and exit
      --logcfg LOGCFG       The logging configuration file to load
      --verbose, -v         Turn on all logging messages
      --debug               When an exception occurs, invoke the Python debugger, pdb
      --save-parameters SAVE_PARAMETERS
                            Save step parameters to specified file.
      --disable-crds-steppars
                            Disable retrieval of step parameter references files from CRDS
      --output_file         File to save the output to

Every step has an `--output_file` parameter.
If one is not provided, the output filename is determined based on the input
file by appending the name of the step. For example, in this case
``foo.asdf`` is output to ``foo_cleanup.asdf``.

Finally, the parameters a ``Step`` actually ran with can be saved to a new
parameter file using the `--save-parameters` option. This file will have all
the parameters, specific to the step, and the final values used.

.. _`Parameter Precedence`:

Parameter Precedence
````````````````````

There are a number of places where the value of a parameter can be specified.
The order of precedence, from most to least significant, for parameter value
assignment is as follows:

1. Value specified on the command line:
   ``strun step.asdf --par=value_that_will_be_used``
2. Value found in the user-specified parameter file
3. ``Step``-coded default, determined by the parameter definition
   ``Step.spec``

For pipelines, if a pipeline parameter file specifies a value for a step in
the pipeline, that takes precedence over any step-specific value found from a
step-specific parameter file. The full order of precedence for a pipeline and
its sub-steps is as follows:

1. Value specified on the command line:
   ``strun pipeline.asdf --steps.step.par=value_that_will_be_used``
2. Value found in the user-specified pipeline parameter file:
   ``strun pipeline.asdf``
3. Value found in the parameter file specified in a pipeline parameter file
4. ``Pipeline``-coded default for itself and all sub-steps
5. ``Step``-coded default for each sub-step

Debugging
`````````

To output all logging output from the step, add the `--verbose` option to the
commandline. (If more fine-grained control over logging is required, see
:ref:`user-logging`.)

To start the Python debugger if the step itself raises an exception, pass the
`--debug` option to the commandline.
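The parameter-precedence rules above behave like successive dictionary
overrides: later, more significant sources replace values from earlier ones.
Here is a small, self-contained Python sketch of that idea; the dictionaries
and parameter names are hypothetical, and this is not stpipe's actual
implementation:

```python
# Hypothetical sketch of parameter precedence: the command line beats
# the user parameter file, which beats the Step-coded default.
step_spec_defaults = {"rejection_threshold": 4.0, "maximum_cores": "none"}
parameter_file = {"rejection_threshold": 5.0}
command_line = {"maximum_cores": "all"}

# Later sources override earlier ones, mirroring the precedence order.
effective = {**step_spec_defaults, **parameter_file, **command_line}
```

After merging, ``effective`` holds the parameter-file value for
``rejection_threshold`` and the command-line value for ``maximum_cores``.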

.. _run_step_from_python:

Running a Step in Python
------------------------

There are a number of methods to run a step within a Python interpreter,
depending on how much control one needs.

Step.from_cmdline()
```````````````````

For individuals who are used to using the ``strun`` command,
`Step.from_cmdline` is the most direct method of executing a step or
pipeline. The only argument is a list of strings, representing the command
line arguments one would have used for ``strun``. The call signature is::

    Step.from_cmdline([string, ...])

For example, given the following command line::

    $ strun romancal.pipeline.ExposurePipeline r0000101001001001001_01101_0001_WFI01_uncal.asdf \
        --steps.jump.override_gain=roman_wfi_gain_0033.asdf

the equivalent `from_cmdline` call would be::

    from romancal.pipeline import ExposurePipeline
    ExposurePipeline.from_cmdline(['r0000101001001001001_01101_0001_WFI01_uncal.asdf',
                                   '--steps.jump.override_gain=roman_wfi_gain_0033.asdf'])

call()
``````

The class method `Step.call` is the slightly more programmatic, and
preferred, method of executing a step or pipeline. When using ``call``, one
gets the full configuration initialization that one gets with the ``strun``
command or the ``Step.from_cmdline`` method. The call signature is::

    Step.call(input, logcfg=None, **parameters)

The positional argument ``input`` is the data to be operated on, usually a
string representing a file path or a :ref:`DataModel <datamodels>`. The
optional keyword argument ``config_file`` is used to specify a local
parameter file. The optional keyword argument ``logcfg`` is used to specify a
logging configuration file. Finally, the remaining optional keyword arguments
are the parameters that the particular step accepts. The method returns the
result of the step. A basic example is::

    from romancal.jump import JumpStep
    output = JumpStep.call('r0000101001001001001_01101_0001_WFI01_uncal.asdf')

This makes a new instance of `JumpStep` and executes it using the specified
exposure file.
`JumpStep` has a parameter ``rejection_threshold``. To use a different value than the default, the statement would be::

    output = JumpStep.call('r0000101001001001001_01101_0001_WFI01_uncal.asdf',
                           rejection_threshold=42.0)

If one wishes to use a :ref:`parameter file<parameter_files>`, specify the path to it using the ``config_file`` argument::

    output = JumpStep.call('r0000101001001001001_01101_0001_WFI01_uncal.asdf',
                           config_file='my_jumpstep_config.asdf')

run()
`````

The instance method `Step.run()` is the lowest-level method for executing a step or pipeline. Initialization and parameter settings are left up to the user. An example is::

    from romancal.flatfield import FlatFieldStep

    mystep = FlatFieldStep()
    mystep.override_sflat = 'sflat.asdf'
    output = mystep.run(input)

`input` in this case can be an ASDF file containing the appropriate data, or the output of a previously run step/pipeline, which is an instance of a particular :ref:`datamodel<datamodels>`.

Unlike the ``call`` class method, no parameter initialization occurs, either from a local parameter file or from a CRDS-retrieved parameter reference file. Parameters can be set individually on the instance, as is shown above. Parameters can also be specified as keyword arguments when instantiating the step. The previous example could be re-written as::

    from romancal.flatfield import FlatFieldStep

    mystep = FlatFieldStep(override_sflat='sflat.asdf')
    output = mystep.run(input)

Using the ``.run()`` method is the same as calling the instance directly; they are equivalent::

    output = mystep(input)
Introduction
============

This document is intended to be a core reference guide to the formats, naming conventions, and data quality flags used by the reference files for pipeline steps requiring them, and is not intended to be a detailed description of each of those pipeline steps. It also does not give details on pipeline steps that do not use reference files. The present manual is the living document for the reference file specifications.

Reference File Naming Convention
================================

Before reference files are ingested into CRDS, they are renamed following a convention used by the pipeline. As with any other changes undergone by the reference files, the previous names are kept in the header, so the Instrument Teams can easily track which delivered file is being used by the pipeline in each step.

The naming of reference files uses the following syntax::

    roman_<instrument>_<reftype>_<version>.<extension>

where

- ``instrument`` is currently "WFI"
- ``reftype`` is one of the type names listed in the table below
- ``version`` is a 4-digit version number (e.g. 0042)
- ``extension`` gives the file format, "asdf"

An example WFI FLAT reference file name would be "roman_wfi_flat_0042.asdf".

Reference File Types
====================

Most reference files have a one-to-one relationship with calibration steps, i.e. there is one step that uses one type of reference file. Some steps, however, use several types of reference files, and some reference file types are used by more than one step. The tables below show the correspondence between pipeline steps and reference file types. The first table is ordered by pipeline step, while the second is ordered by reference file type. Links to the reference file types provide detailed documentation on each reference file.
+---------------------------------------------+--------------------------------------------------+ | Pipeline Step | Reference File Type (reftype) | +=============================================+==================================================+ | :ref:`assign_wcs <assign_wcs_step>` | :ref:`DISTORTION <distortion_reffile>` | +---------------------------------------------+--------------------------------------------------+ | :ref:`dark_current <dark_current_step>` | :ref:`DARK <dark_reffile>` | +---------------------------------------------+--------------------------------------------------+ | :ref:`dq_init <dq_init_step>` | :ref:`MASK <mask_reffile>` | +---------------------------------------------+--------------------------------------------------+ | :ref:`flatfield <flatfield_step>` | :ref:`FLAT <flat_reffile>` | +---------------------------------------------+--------------------------------------------------+ | :ref:`jump_detection <jump_step>` | :ref:`GAIN <gain_reffile>` | + +--------------------------------------------------+ | | :ref:`READNOISE <readnoise_reffile>` | +---------------------------------------------+--------------------------------------------------+ | :ref:`linearity <linearity_step>` | :ref:`LINEARITY <linearity_reffile>` | +---------------------------------------------+--------------------------------------------------+ | :ref:`photom <photom_step>` | :ref:`PHOTOM <photom_reffile>` | +---------------------------------------------+--------------------------------------------------+ | :ref:`ramp_fitting <ramp_fitting_step>` | :ref:`GAIN <gain_reffile>` | + +--------------------------------------------------+ | | :ref:`READNOISE <readnoise_reffile>` | +---------------------------------------------+--------------------------------------------------+ | :ref:`saturation <saturation_step>` | :ref:`SATURATION <saturation_reffile>` | +---------------------------------------------+--------------------------------------------------+ 
+--------------------------------------------------+---------------------------------------------+ | Reference File Type (reftype) | Pipeline Step | +==================================================+=============================================+ | :ref:`DARK <dark_reffile>` | :ref:`dark_current <dark_current_step>` | +--------------------------------------------------+---------------------------------------------+ | :ref:`DISTORTION <distortion_reffile>` | :ref:`assign_wcs <assign_wcs_step>` | +--------------------------------------------------+---------------------------------------------+ | :ref:`FLAT <flat_reffile>` | :ref:`flatfield <flatfield_step>` | +--------------------------------------------------+---------------------------------------------+ | :ref:`GAIN <gain_reffile>` | :ref:`jump_detection <jump_step>` | + +---------------------------------------------+ | | :ref:`ramp_fitting <ramp_fitting_step>` | +--------------------------------------------------+---------------------------------------------+ | :ref:`LINEARITY <linearity_reffile>` | :ref:`linearity <linearity_step>` | +--------------------------------------------------+---------------------------------------------+ | :ref:`MASK <mask_reffile>` | :ref:`dq_init <dq_init_step>` | +--------------------------------------------------+---------------------------------------------+ | :ref:`PHOTOM <photom_reffile>` | :ref:`photom <photom_step>` | +--------------------------------------------------+---------------------------------------------+ | :ref:`READNOISE <readnoise_reffile>` | :ref:`jump_detection <jump_step>` | + +---------------------------------------------+ | | :ref:`ramp_fitting <ramp_fitting_step>` | +--------------------------------------------------+---------------------------------------------+ | :ref:`SATURATION <saturation_reffile>` | :ref:`saturation <saturation_step>` | +--------------------------------------------------+---------------------------------------------+ .. 
.. _`Standard ASDF metadata`:

Standard ASDF metadata
======================

All Roman science and reference files are ASDF files.

The required attributes documenting the contents of reference files are:

=========== ==================================================================================
Attribute   Comment
=========== ==================================================================================
reftype     `FLAT` Required values are listed in the discussion of each pipeline step.
description Summary of file content and/or reason for delivery.
author      `Fred Jones` Person(s) who created the file.
useafter    `YYYY-MM-DDThh:mm:ss` Date and time after which the reference file will
            be used. The T is required. The time string may NOT be omitted; use
            T00:00:00 if no meaningful value is available. Astropy Time objects
            are allowed.
pedigree    Options are 'SIMULATION', 'GROUND', 'DUMMY', or
            'INFLIGHT YYYY-MM-DD YYYY-MM-DD'.
history     Description of the reference file creation.
telescope   `ROMAN` Name of the telescope/project.
instrument  `WFI` Instrument name.
=========== ==================================================================================

Observing Mode Attributes
=========================

A pipeline module may require separate reference files for each instrument, detector, optical element, observation date, etc. The values of these parameters must be included in the reference file attributes. The observing-mode attributes are vital to the process of ingesting reference files into CRDS, as they are used to establish the mapping between observing modes and specific reference files. Some observing-mode attributes are also used in the pipeline processing steps.
The keywords documenting the observing mode are:

=============== ================== ==============================================================================
Keyword         Sample Value       Comment
=============== ================== ==============================================================================
detector        WFI01              Allowed values WFI01, WFI02, ... WFI18
optical element F158               Name of the filter element and includes PRISM and GRISM
exposure type   WFI_IMAGE          Allowed values WFI_IMAGE, WFI_GRATING, WFI_PRISM, WFI_DARK, WFI_FLAT, WFI_WFSC
=============== ================== ==============================================================================

Tracking Pipeline Progress
++++++++++++++++++++++++++

As each pipeline step is applied to a science data product, it records a status indicator in a cal_step attribute of the science data product. These statuses may be included in the primary header of reference files, in order to maintain a history of the data that went into creating the reference file.

Allowed values for the status attribute are 'INCOMPLETE', 'COMPLETE', and 'SKIPPED'. The default value is 'INCOMPLETE'. The pipeline modules will set the value to 'COMPLETE' or 'SKIPPED'. If the pipeline steps are run manually and a step is skipped, its cal_step value will remain 'INCOMPLETE'.

Data Quality Flags
==================

Within science data files, the PIXELDQ flags are stored as 32-bit integers; the GROUPDQ flags are 8-bit integers. All calibrated data from a particular instrument and observing mode have the same set of DQ flags in the same (bit) order. The table below lists the allowed DQ flags. Only the first eight entries in the table below are relevant to the GROUPDQ array.

.. include:: dq_flags.inc

Parameter Specification
=======================

There are a number of steps, such as :ref:`OutlierDetectionStep <outlier_detection_step>`, that define which data quality flags a pixel is allowed to have in order to be considered in calculations.
Such parameters can be set in a number of ways.

First, the flag can be defined as the integer sum of all the DQ bit values from the input images' DQ arrays that should be considered "good". For example, if pixels in the DQ array can have combinations of 1, 2, 4, and 8 and one wants to consider DQ flags 2 and 4 as being acceptable for computations, then the parameter value should be set to "6" (2+4). In this case a pixel having DQ values 2, 4, or 6 will be considered a good pixel, while a pixel with a DQ value of, e.g., 1+2=3 or 4+8=12 will be flagged as a "bad" pixel.

Alternatively, one can enter a comma-separated or '+'-separated list of integer bit flags that should be summed to obtain the final "good" bits. For example, both "4,8" and "4+8" are equivalent to a setting of "12".

Finally, instead of integers, the Roman mnemonics, as defined above, may be used. For example, all the following specifications are equivalent: `"12" == "4+8" == "4, 8" == "JUMP_DET, DROPOUT"`

.. note::
    - The default value (0) will make *all* non-zero pixels in the DQ mask be considered "bad" pixels, and the corresponding pixels will not be used in computations.

    - Setting to `None` will turn off the use of the DQ array for computations.

    - In order to reverse the meaning of the flags from indicating values of the "good" DQ flags to indicating the "bad" DQ flags, prepend '~' to the string value. For example, in order to exclude pixels with DQ flags 4 and 8 from computations and to consider as "good" all other pixels (regardless of their DQ flag), use a value of ``~4+8``, or ``~4,8``. A string value of ``~0`` would be equivalent to a setting of ``None``.
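The rules above can be sketched in Python. The helper below is a hypothetical, simplified illustration (it handles integer sums, '+'/',' lists, ``None``, and the '~' prefix, but not the Roman mnemonics):

```python
def interpret_good_bits(spec, dq):
    """Return True if a pixel's DQ value contains only "good" bits.

    `spec` follows the rules described above: an integer sum of good
    bit values, a comma- or '+'-separated list of bit values, or any
    of these prefixed with '~' to invert the meaning. Mnemonic names
    are not handled in this simplified sketch.
    """
    if spec is None:
        return True  # DQ array is ignored entirely
    spec = str(spec).strip()
    invert = spec.startswith("~")
    if invert:
        spec = spec[1:]
    good = sum(int(part) for part in spec.replace("+", ",").split(","))
    if invert:
        good = ~good  # the listed bits are now the *bad* ones
    # A pixel is good when no bits are set outside the good mask.
    return (dq & ~good) == 0

# "6" (2+4) accepts DQ values 2, 4, and 6, but rejects 3 (bit 1 is set);
# "~4,8" rejects any pixel with bit 4 or 8 set and accepts all others.
```

Note that ``interpret_good_bits("~0", dq)`` is true for every ``dq``, matching the statement above that ``~0`` is equivalent to ``None``, and a spec of ``"0"`` accepts only pixels whose DQ value is exactly zero.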
Description
===========

This step determines the mean count rate, in units of counts per second, for each pixel by performing a linear fit to the data in the input file. The fit is done using the "ordinary least squares" method and is performed independently for each pixel.

There can be up to two output files created by the step. The primary output file ("rate") contains the slope at each pixel. A second, optional output product is also available, containing detailed fit information for each pixel. The two types of output files are described in more detail below.

The count rate for each pixel is determined by a linear fit to the cosmic-ray-free and saturation-free ramp intervals for each pixel; hereafter such an interval will be referred to as a "segment." The fitting algorithm uses an 'optimal' weighting scheme, as described by Fixsen et al. 2000, PASP, 112, 1350. Segments are determined using the 3-D GROUPDQ array of the input data set, under the assumption that the jump step will have already flagged CRs. Segments are terminated where saturation flags are found.

Pixels are processed simultaneously in blocks using the array-based functionality of numpy. The size of the block depends on the image size and the number of groups.

The ramp fitting step is also where the :ref:`reference pixels <refpix>` are trimmed, resulting in a smaller array being passed to the subsequent steps.

Multiprocessing
===============

This step has the option of running in multiprocessing mode. In that mode it will split the input data cube into a number of row slices based on the number of available cores on the host computer and the value of the max_cores input parameter. By default the step runs on a single processor. At the other extreme, if max_cores is set to 'all', it will use all available cores (real and virtual). Testing has shown a reduction in the elapsed time for the step proportional to the number of real cores used.
Using the virtual cores also reduces the elapsed time, but at a slightly lower rate than the real cores.

Special Cases
+++++++++++++

If the input dataset has only a single group, the count rate for all unsaturated pixels will be calculated as the value of the science data in that group divided by the group time. If the input dataset has only two groups, the count rate for all unsaturated pixels will be calculated using the difference between the two valid groups of the science data.

For datasets having more than a single group, a ramp having a segment with only a single group is processed differently depending on the number and size of the other segments in the ramp. If a ramp has only one segment and that segment contains a single group, the count rate will be calculated to be the value of the science data in that group divided by the group time. If a ramp has a segment having a single group, and at least one other segment having more than one good group, only data from the segment(s) having more than a single good group will be used to calculate the count rate.

The data are checked for ramps in which there is good data in the first group, but all first differences for the ramp are undefined because the remainder of the groups are either saturated or affected by cosmic rays. For such ramps, the first differences will be set equal to the data in the first group. The first difference is used to estimate the slope of the ramp, as explained in the 'segment-specific computations' section below.

If any input dataset contains ramps saturated in their second group, the count rates for those pixels will be calculated as the value of the science data in the first group divided by the group time.

All Cases
+++++++++

For all input datasets, including the special cases described above, arrays for the primary output (rate) product are computed as follows.
After computing the slopes for all segments for a given pixel, the final slope is determined as a weighted average over all segments and is written to the primary output product. In this output product, the 3-D GROUPDQ is collapsed into 2-D, merged (using a bitwise OR) with the input 2-D PIXELDQ, and stored as a 2-D DQ array.

A second, optional output product is also available and is produced only when the step parameter 'save_opt' is True (the default is False). This optional product contains 3-D arrays called SLOPE, SIGSLOPE, YINT, SIGYINT, WEIGHTS, VAR_POISSON, and VAR_RNOISE, which contain, respectively, the slopes, the uncertainties in the slopes, the y-intercepts, the uncertainties in the y-intercepts, the fitting weights, the variance of the slope due to Poisson noise only, and the variance of the slope due to read noise only, for each segment of each pixel. The y-intercept refers to the result of the fit at an effective exposure time of zero. This product also contains a 2-D array called PEDESTAL, which gives the signal at zero exposure time for each pixel, and the 3-D CRMAG array, which contains the magnitude of each group that was flagged as having a CR hit. By default, the name of this output file will have the suffix "_fitopt".

In this optional output product, the pedestal array is calculated by extrapolating the final slope (the weighted average of the slopes of all ramp segments) for each pixel from its value at the first group to an exposure time of zero. Any pixel that is saturated in the first group is given a pedestal value of 0.

Before compression, the cosmic-ray magnitude array is equivalent to the input SCI array, but with the only nonzero values being those whose pixel locations are flagged in the input GROUPDQ as cosmic ray hits. The array is compressed, removing all groups in which all the values are 0 for pixels having at least one group with a non-zero magnitude. The order of the cosmic rays within the ramp is preserved.
Slope and Variance Calculations
+++++++++++++++++++++++++++++++

Slopes and their variances are calculated for each segment, and for the entire exposure. As defined above, a segment is a set of contiguous groups in which none of the groups is saturated or cosmic-ray-affected. The appropriate slopes and variances are output to the primary output product and to the optional output product. The following is a description of these computations. The notation in the equations is the following: the type of noise (when appropriate) will appear as the superscript 'R', 'P', or 'C' for read noise, Poisson noise, or combined, respectively; and the form of the data will appear as the subscript: 's' or 'o' for segment or overall (for the entire dataset), respectively.

Optimal Weighting Algorithm
---------------------------

The slope of each segment is calculated using the least-squares method with optimal weighting, as described by Fixsen et al. 2000, PASP, 112, 1350; Regan 2007, JWST-STScI-001212. Optimal weighting determines the relative weighting of each sample when calculating the least-squares fit to the ramp. When the data have a low signal-to-noise ratio :math:`S`, the data are read-noise dominated and equal weighting of samples is the best approach. In the high signal-to-noise regime, data are Poisson-noise dominated and the least-squares fit is calculated with the first and last samples. In most practical cases, the data will fall somewhere in between, where the weighting is scaled between the two extremes.

The signal-to-noise ratio :math:`S` used for weighting selection is calculated from the last sample as:

.. math::
    S = \frac{data \times gain} { \sqrt{(read\_noise)^2 + (data \times gain) } } \,,

The weighting for a sample :math:`i` is given as:

.. math::
    w_i = (i - i_{midpoint})^P \,,

where :math:`i_{midpoint}` is the sample number of the midpoint of the sequence, and :math:`P` is the exponent applied to weights, determined by the value of :math:`S`. Fixsen et al.
2000 found that defining a small number of P values to apply to values of S was sufficient; they are given as: +-------------------+------------------------+----------+ | Minimum S | Maximum S | P | +===================+========================+==========+ | 0 | 5 | 0 | +-------------------+------------------------+----------+ | 5 | 10 | 0.4 | +-------------------+------------------------+----------+ | 10 | 20 | 1 | +-------------------+------------------------+----------+ | 20 | 50 | 3 | +-------------------+------------------------+----------+ | 50 | 100 | 6 | +-------------------+------------------------+----------+ | 100 | | 10 | +-------------------+------------------------+----------+ Segment-specific Computations: ------------------------------ The variance of the slope of a segment due to read noise is: .. math:: var^R_{s} = \frac{12 \ R^2 }{ (ngroups_{s}^3 - ngroups_{s})(group_time^2) } \,, where :math:`R` is the noise in the difference between 2 frames, :math:`ngroups_{s}` is the number of groups in the segment, and :math:`group_time` is the group time in seconds (from the exposure.group_time). The variance of the slope in a segment due to Poisson noise is: .. math:: var^P_{s} = \frac{ slope_{est} }{ tgroup \times gain\ (ngroups_{s} -1)} \,, where :math:`gain` is the gain for the pixel (from the GAIN reference file), in e/DN. The :math:`slope_{est}` is an overall estimated slope of the pixel, calculated by taking the median of the first differences of the groups that are unaffected by saturation and cosmic rays. This is a more robust estimate of the slope than the segment-specific slope, which may be noisy for short segments. The combined variance of the slope of a segment is the sum of the variances: .. math:: var^C_{s} = var^R_{s} + var^P_{s} Exposure-level computations: ---------------------------- The variance of the slope due to read noise is: .. math:: var^R_{o} = \frac{1}{ \sum_{s} \frac{1}{ var^R_{s}}} where the sum is over all segments. 
The variance of the slope due to Poisson noise is:

.. math::
    var^P_{o} = \frac{1}{ \sum_{s} \frac{1}{ var^P_{s}}}

The combined variance of the slope is the sum of the variances:

.. math::
    var^C_{o} = var^R_{o} + var^P_{o}

The square root of the combined variance is stored in the ERR array of the primary output.

The overall slope depends on the slope and the combined variance of the slope of each segment, and is computed as an inverse-variance-weighted average over segments:

.. math::
    slope_{o} = \frac{ \sum_{s}{ \frac{slope_{s}} {var^C_{s}}}} { \sum_{s}{ \frac{1} {var^C_{s}}}}

Upon successful completion of this step, the status attribute ramp_fit will be set to "COMPLETE".

Error Propagation
=================

Error propagation in the ramp fitting step is implemented by storing the square root of the exposure-level combined variance in the ERR array of the primary output product. This combined variance of the exposure-level slope is the sum of the variance of the slope due to Poisson noise and the variance of the slope due to read noise. These two variances are also separately written to the arrays VAR_POISSON and VAR_RNOISE in the ASDF output.

For the optional output product, the variance of the segment-specific slope due to Poisson noise is written to the VAR_POISSON array. Similarly, the variance of the segment-specific slope due to read noise is written to the VAR_RNOISE array.
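As an illustration, the key formulas in this section can be collected into a short numerical sketch. The function and variable names below are assumptions made for readability, not the step's actual implementation; each function follows one of the equations given above:

```python
import numpy as np

def weight_exponent(S):
    """Map the signal-to-noise ratio S to the exponent P from the table above."""
    for s_max, P in [(5, 0.0), (10, 0.4), (20, 1.0), (50, 3.0), (100, 6.0)]:
        if S < s_max:
            return P
    return 10.0

def optimal_weights(ngroups, S):
    """w_i = |i - i_midpoint|**P for samples i = 0 .. ngroups-1.

    The absolute value (an implementation detail, not shown in the text)
    avoids fractional powers of negative numbers.
    """
    i = np.arange(ngroups, dtype=float)
    return np.abs(i - (ngroups - 1) / 2.0) ** weight_exponent(S)

def segment_readnoise_var(R, ngroups, group_time):
    """var^R_s = 12 R^2 / ((ngroups^3 - ngroups) * group_time^2)"""
    return 12.0 * R**2 / ((ngroups**3 - ngroups) * group_time**2)

def segment_poisson_var(slope_est, gain, ngroups, group_time):
    """var^P_s = slope_est / (group_time * gain * (ngroups - 1))"""
    return slope_est / (group_time * gain * (ngroups - 1))

def exposure_slope(slopes, combined_vars):
    """Inverse-variance-weighted overall slope and its combined variance."""
    inv = 1.0 / np.asarray(combined_vars, dtype=float)
    var_o = 1.0 / inv.sum()
    slope_o = (np.asarray(slopes, dtype=float) * inv).sum() * var_o
    return slope_o, var_o
```

For example, two segments with slopes 1.0 and 3.0 and equal combined variances average to an overall slope of 2.0, with the combined variance halved, and a low signal-to-noise ratio (:math:`S < 5`) yields equal weights for all samples.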
Description =========== Overview -------- The ``skymatch`` step can be used to compute sky values in a collection of input images that contain both sky and source signal. The sky values can be computed for each image separately or in a way that matches the sky levels amongst the collection of images so as to minimize their differences. This operation is typically applied before doing cosmic-ray rejection and combining multiple images into a mosaic. When running the ``skymatch`` step in a matching mode, it compares *total* signal levels in *the overlap regions* of a set of input images and computes the signal offsets for each image that will minimize -- in a least squares sense -- the residuals across the entire set. This comparison is performed directly on the input images without resampling them onto a common grid. The overlap regions are computed directly on the sky (celestial sphere) for each pair of input images. Matching based on total signal level is especially useful for images that are dominated by large, diffuse sources, where it is difficult -- if not impossible -- to find and measure true sky. Note that the meaning of "sky background" depends on the chosen sky computation method. When the matching method is used, for example, the reported "sky" value is only the offset in levels between images and does not necessarily include the true total sky level. .. note:: Throughout this document the term "sky" is used in a generic sense, referring to any kind of non-source background signal, which may include actual sky, as well as instrumental (e.g. thermal) background, etc. The step records information in three attributes that are included in the output files: - method: records the sky method that was used to compute sky levels - level: the sky level computed for each image - subtract: a boolean indicating whether or not the sky was subtracted from the output images. 
Note that by default the step argument "subtract" is set to ``False``, which means that the sky will *NOT* be subtracted (see the :ref:`skymatch step arguments <skymatch_arguments>` for more details). Both the "subtract" and "level" attributes are important information for downstream tasks, such as outlier detection and resampling. Outlier detection will use the "level" values to internally equalize the images, which is necessary to prevent false detections due to overall differences in signal levels between images, and the resample step will subtract the "level" values from each input image when combining them into a mosaic. Assumptions ----------- When matching sky background, the code needs to compute bounding polygon intersections in world coordinates. The input images, therefore, need to have a valid WCS, generated by the :ref:`assign_wcs <assign_wcs_step>` step. Algorithms ---------- The ``skymatch`` step provides several methods for constant sky background value computations. The first method, called "local", essentially is an enhanced version of the original sky subtraction method used in older versions of `astrodrizzle <https://drizzlepac.readthedocs.io/en/latest/astrodrizzle.html>`_. This method simply computes the mean/median/mode/etc. value of the sky separately in each input image. This method was upgraded to be able to use DQ flags to remove bad pixels from being used in the computations of sky statistics. In addition to the classic "local" method, two other methods have been introduced: "global" and "match", as well as a combination of the two -- "global+match". - The "global" method essentially uses the "local" method to first compute a sky value for each image separately, and then assigns the minimum of those results to all images in the collection. Hence after subtraction of the sky values only one image will have a net sky of zero, while the remaining images will have some small positive residual. 
- The "match" algorithm computes only a correction value for each image, such that, when applied to each image, the mismatch between *all* pairs of images is minimized, in the least-squares sense. For each pair of images, the sky mismatch is computed *only* in the regions in which the two images overlap on the sky. This makes the "match" algorithm particularly useful for equalizing sky values in large mosaics in which one may have only pair-wise intersection of adjacent images without having a common intersection region (on the sky) in all images. Note that if the argument "match_down=True", matching will be done to the image with the lowest sky value, and if "match_down=False" it will be done to the image with the highest value (see :ref:`skymatch step arguments <skymatch_arguments>` for full details). - The "global+match" algorithm combines the "global" and "match" methods. It uses the "global" algorithm to find a baseline sky value common to all input images and the "match" algorithm to equalize sky values among images. The direction of matching (to the lowest or highest) is again controlled by the "match_down" argument. In the "local" and "global" methods, which find sky levels in each image, the calculation of the image statistics takes advantage of sigma clipping to remove contributions from isolated sources. This can work well for accurately determining the true sky level in images that contain semi-large regions of empty sky. The "match" algorithm, on the other hand, compares the *total* signal levels integrated over regions of overlap in each image pair. This method can produce better results when there are no large empty regions of sky in the images. This method cannot measure the true sky level, but instead provides additive corrections that can be used to equalize the signal between overlapping images. Examples -------- To get a better idea of the behavior of these different methods, the tables below show the results for two hypothetical sets of images. 
The first example is for a set of 6 images that form a 2x3 mosaic, with every image having overlap with its immediate neighbors. The first column of the table gives the actual (fake) sky signal that was imposed in each image, and the subsequent columns show the results computed by each method (i.e. the values of the resulting "level" attribute). All results are for the case where the step argument ``match_down = True``, which means matching is done to the image with the lowest sky value. Note that these examples are for the highly simplistic case where each example image contains nothing but the constant sky value. Hence the sky computations are not affected at all by any source content and are therefore able to determine the sky values exactly in each image. Results for real images will of course not be so exact. +-------+-------+--------+-------+--------------+ | Sky | Local | Global | Match | Global+Match | +=======+=======+========+=======+==============+ | 100 | 100 | 100 | 0 | 100 | +-------+-------+--------+-------+--------------+ | 120 | 120 | 100 | 20 | 120 | +-------+-------+--------+-------+--------------+ | 105 | 105 | 100 | 5 | 105 | +-------+-------+--------+-------+--------------+ | 110 | 110 | 100 | 10 | 110 | +-------+-------+--------+-------+--------------+ | 105 | 105 | 100 | 5 | 105 | +-------+-------+--------+-------+--------------+ | 115 | 115 | 100 | 15 | 115 | +-------+-------+--------+-------+--------------+ - "local" finds the sky level of each image independently of the rest. - "global" uses the minimum sky level found by "local" and applies it to all images. - "match" with "match_down=True" finds the offset needed to match all images to the level of the image with the lowest sky level. - "global+match" with "match_down=True" finds the offsets and global value needed to set all images to a sky level of zero. In this trivial example, the results are identical to the "local" method. 
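For this idealized, fully-overlapping example, the table columns can be reproduced with a few lines of Python. (This works only because every image is pure constant sky and all images overlap, so the least-squares matching reduces to differencing against the lowest value; in general the step solves a least-squares problem over pairwise overlap regions.)

```python
sky = [100, 120, 105, 110, 105, 115]   # the imposed sky levels from the table

local = list(sky)                      # each image measured independently
global_ = [min(sky)] * len(sky)        # minimum "local" value applied to all
match = [s - min(sky) for s in sky]    # offset of each image from the lowest
global_match = [min(sky) + m for m in match]  # identical to "local" here

# match == [0, 20, 5, 10, 5, 15], as in the "Match" column above
```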
The second example is for a set of 7 images, where the first 4 form a 2x2 mosaic, with overlaps, and the second set of 3 images forms another mosaic, with internal overlap, but the 2 mosaics do *NOT* overlap one another. +-------+-------+--------+-------+--------------+ | Sky | Local | Global | Match | Global+Match | +=======+=======+========+=======+==============+ | 100 | 100 | 90 | 0 | 86.25 | +-------+-------+--------+-------+--------------+ | 120 | 120 | 90 | 20 | 106.25 | +-------+-------+--------+-------+--------------+ | 105 | 105 | 90 | 5 | 91.25 | +-------+-------+--------+-------+--------------+ | 110 | 110 | 90 | 10 | 96.25 | +-------+-------+--------+-------+--------------+ | 95 | 95 | 90 | 8.75 | 95 | +-------+-------+--------+-------+--------------+ | 90 | 90 | 90 | 3.75 | 90 | +-------+-------+--------+-------+--------------+ | 100 | 100 | 90 | 13.75 | 100 | +-------+-------+--------+-------+--------------+ In this case, the "local" method again computes the sky in each image independently of the rest, and the "global" method sets the result for each image to the minimum value returned by "local". The matching results, however, require some explanation. With "match" only, all of the results give the proper offsets required to equalize the images contained within each mosaic, but the algorithm does not have the information needed to match the two (non-overlapping) mosaics to one another. Similarly, the "global+match" results again provide proper matching within each mosaic, but will leave an overall residual in one of the mosaics. Limitations and Discussions --------------------------- As alluded to above, the best sky computation method depends on the nature of the data in the input images. If the input images contain mostly compact, isolated sources, the "local" and "global" algorithms can do a good job at finding the true sky level in each image. 
If the images contain large, diffuse sources, the "match" algorithm is
more appropriate, assuming of course there is sufficient overlap between
images from which to compute the matching values. In the event that not
all of the images overlap, as illustrated in the second example above,
the "match" method can still provide useful results for matching the
levels within each non-contiguous region covered by the images, but it
will not provide a good overall sky level across all of the images. In
these situations it is more appropriate either to process the
non-contiguous groups independently of one another or to use the
"local" or "global" methods to compute the sky separately in each
image. The latter option will of course only work well if the images
are not dominated by extended, diffuse sources.

The primary reason for introducing the ``skymatch`` algorithm was to
equalize the sky in large mosaics in which computation of the absolute
sky is difficult, due to the presence of large diffuse sources in the
image. As discussed above, the ``skymatch`` step accomplishes this by
comparing the sky values in the overlap regions of each image pair. The
quality of sky matching will obviously depend on how well these sky
values can be estimated. True background may not be present at all in
some images, in which case the computed "sky" may be the surface
brightness of a large galaxy, nebula, etc.

Here is a brief list of possible limitations and factors that can
affect the outcome of the matching (sky subtraction in general)
algorithm:

- Because sky computation is performed on *flat-fielded* but *not
  distortion-corrected* images, it is important to keep in mind that
  flat-fielding is performed to obtain correct surface brightnesses.
  Because the surface brightness of a pixel containing a point-like
  source will change inversely with a change to the pixel area, it is
  advisable to mask point-like sources through user-supplied mask
  files.
  Values different from zero in user-supplied masks indicate good data
  pixels. Alternatively, one can use the ``upper`` parameter to exclude
  pixels containing bright objects from the sky computations.

- The input images may contain cosmic rays. This algorithm does not
  perform CR cleaning. A possible way of minimizing the effect of
  cosmic rays on sky computations is to use clipping (``nclip`` > 0)
  and/or to set the ``upper`` parameter to a value larger than most of
  the sky background (or extended sources) but lower than the values of
  most CR-affected pixels.

- In general, clipping is a good way of eliminating bad pixels: pixels
  affected by CR, hot/dead pixels, etc. However, for images with
  complicated backgrounds (extended galaxies, nebulae, etc.) that are
  affected by CR and noise, the clipping process may mask different
  pixels in different images. If variations in the background are too
  strong, clipping may converge to different sky values in different
  images even when taking into account the true difference in the sky
  background between the two images.

- In general, images can have different true background values (which
  we could measure if the images were not affected by large diffuse
  sources). However, arguments such as ``lower`` and ``upper`` will
  apply to all images regardless of the intrinsic differences in sky
  levels (see :ref:`skymatch step arguments <skymatch_arguments>`).
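As a rough sketch of the clipping idea described in the bullets above, the helper below estimates a sky level with an iteratively clipped mean. It is a hypothetical illustration only, loosely mirroring the step's ``nclip``/``lower``/``upper`` parameters (the ``lsigma``/``usigma`` names are illustrative), not the actual ``skymatch`` code:

```python
import numpy as np

def clipped_sky(pixels, nclip=3, lsigma=3.0, usigma=3.0,
                lower=None, upper=None):
    """Iteratively sigma-clipped mean of the pixel values.

    ``lower``/``upper`` cut fixed value ranges before clipping;
    ``nclip`` bounds the number of clipping iterations.
    """
    data = np.asarray(pixels, dtype=float)
    if lower is not None:
        data = data[data >= lower]
    if upper is not None:
        data = data[data <= upper]
    for _ in range(nclip):
        mean, std = data.mean(), data.std()
        keep = (data >= mean - lsigma * std) & (data <= mean + usigma * std)
        if keep.all():
            break  # converged: nothing left to clip
        data = data[keep]
    return data.mean()

rng = np.random.default_rng(0)
pixels = rng.normal(100.0, 5.0, size=10_000)  # constant sky + noise
pixels[:50] = 10_000.0                        # CR-like spikes
print(clipped_sky(pixels, upper=500.0))       # close to the true sky of 100
```

Setting ``upper`` well below the CR-affected values removes the spikes before clipping even starts, which is the behavior the bullet above recommends for images without CR cleaning.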
import math import copy import time import numpy as np import pandas as pd import torch from torch import nn import torch.nn.functional as F from torchsummary import summary import torch.backends.cudnn as cudnn import torch.optim as optim class conv_block(nn.Module): def __init__(self, in_channels, out_channels, **kwargs): super(conv_block, self).__init__() self.relu = nn.ReLU() self.bathcnorm = nn.BatchNorm1d(out_channels) self.conv = nn.Conv1d(in_channels, out_channels, **kwargs) def forward(self, x): return self.relu(self.bathcnorm(self.conv(x))) class Classifier_FCN(nn.Module): def __init__(self, input_shape, nb_classes): super(Classifier_FCN, self).__init__() self.nb_classes = nb_classes self.conv1 = conv_block(in_channels=1, out_channels=128, kernel_size=8, stride=1, padding='same') self.conv2 = conv_block(in_channels=128, out_channels=256, kernel_size=5, stride=1, padding='same') self.conv3 = conv_block(in_channels=256, out_channels=128, kernel_size=3, stride=1, padding='same') self.avgpool1 = nn.AdaptiveAvgPool1d(1) self.fc1 = nn.Linear(128, self.nb_classes) def forward(self, x): x = self.conv1(x) x = self.conv2(x) x = self.conv3(x) x = self.avgpool1(x) x = x.reshape(x.shape[0], -1) x = self.fc1(x) return x def fit(trainloader, valloader, input_shape, nb_classes): print('Hi from FCN!') model = Classifier_FCN(input_shape, nb_classes) print(model) use_cuda = torch.cuda.is_available() if use_cuda: torch.cuda.set_device(0) model.cuda() cudnn.benchmark = True summary(model, ( input_shape[-2], input_shape[-1])) # Training def train_alone_model(net, epoch): print('\Teaining epoch: %d' % epoch) net.train() train_loss = 0 correct = 0 total = 0 for batch_idx, (inputs, targets) in enumerate(trainloader): if use_cuda: inputs, targets = inputs.cuda(), targets.cuda() optimizer.zero_grad() outputs = net(inputs.float()) loss = criterion_CE(outputs, targets) loss.backward() optimizer.step() train_loss += loss.item() _, predicted = torch.max(outputs.data, 1) total += 
targets.size(0) correct += predicted.eq(targets.data).cpu().sum().float().item() b_idx = batch_idx print('Training Loss: %.3f | Training Acc: %.3f%% (%d/%d)' % (train_loss / (b_idx + 1), 100. * correct / total, correct, total)) return train_loss / (b_idx + 1) def test(net): net.eval() test_loss = 0 correct = 0 total = 0 for batch_idx, (inputs, targets) in enumerate(valloader): if use_cuda: inputs, targets = inputs.cuda(), targets.cuda() outputs = net(inputs.float()) loss = criterion_CE(outputs, targets) test_loss += loss.item() _, predicted = torch.max(outputs.data, 1) total += targets.size(0) correct += predicted.eq(targets.data).cpu().sum().float().item() b_idx = batch_idx print('Test Loss: %.3f | Test Acc: %.3f%% (%d/%d)' % (test_loss / (b_idx + 1), 100. * correct / total, correct, total)) return test_loss / (b_idx + 1), correct / total final_loss = [] learning_rates = [] epochs = 2000 criterion_CE = nn.CrossEntropyLoss() optimizer = optim.Adam(model.parameters(), lr=0.001,) scheduler = optim.lr_scheduler.OneCycleLR(optimizer, 1e-3, epochs=2000, steps_per_epoch=len(trainloader)) use_cuda = torch.cuda.is_available() best_model_wts = copy.deepcopy(model.state_dict()) min_train_loss = np.inf start_time = time.time() for epoch in range(epochs): train_loss = train_alone_model(model, epoch) test(model) if min_train_loss > train_loss: min_train_loss = train_loss best_model_wts = copy.deepcopy(model.state_dict()) scheduler.step() final_loss.append(train_loss) learning_rates.append(optimizer.param_groups[0]['lr']) output_directory = '/' torch.save(best_model_wts, output_directory + 'best_teacher_model.pt') # Save Logs duration = time.time() - start_time best_teacher = Classifier_FCN(input_shape, nb_classes) best_teacher.load_state_dict(best_model_wts) best_teacher.cuda() print('Best Model Accuracy in below ') start_test_time = time.time() test(best_teacher) test_duration = time.time() - start_test_time print(test(best_teacher)) df = pd.DataFrame(list(zip(final_loss, 
learning_rates)), columns =['loss', 'learning_rate']) index_best_model = df['loss'].idxmin() row_best_model = df.loc[index_best_model] df_best_model = pd.DataFrame(list(zip([row_best_model['loss']], [index_best_model+1])), columns =['best_model_train_loss', 'best_model_nb_epoch']) df.to_csv(output_directory + 'history.csv', index=False) df_best_model.to_csv(output_directory + 'df_best_model.csv', index=False) loss_, acc_ = test(best_teacher) df_metrics = pd.DataFrame(list(zip([min_train_loss], [acc_], [duration], [test_duration])), columns =['Loss', 'Accuracy', 'Duration', 'Test Duration']) df_metrics.to_csv(output_directory + 'df_metrics.csv', index=False)
/romaniya_menim-0.0.34-py3-none-any.whl/romaniya_menim/models/fcn.py
0.899949
0.313709
fcn.py
pypi
import math import copy import time import numpy as np import pandas as pd import torch from torch import nn import torch.nn.functional as F from torchsummary import summary import torch.backends.cudnn as cudnn import torch.optim as optim class Residual_block(nn.Module): def __init__(self, in_channels, out_channels, stride=1): super(Residual_block, self).__init__() self.residual = nn.Sequential( nn.Conv1d(in_channels=in_channels, out_channels=out_channels, kernel_size=1, stride=stride, padding=0, bias=False), nn.BatchNorm1d(out_channels) ) def forward(self, x): x = self.residual(x) return x class Inception_block(nn.Module): def __init__(self, in_channels, out_channels, stride=1, use_residual=True, use_bottleneck=True): super(Inception_block, self).__init__() self.use_bottleneck = use_bottleneck self.in_channels = in_channels self.out_channels = out_channels self.bottleneck_size = 32 if self.in_channels == 1: self.in_channels = int(in_channels) else: self.in_channels = int(self.bottleneck_size) self.bottleneck = nn.Conv1d(in_channels, self.bottleneck_size, kernel_size=1, padding='same', bias=False) self.branch1 = nn.Conv1d(in_channels=self.in_channels, out_channels=self.out_channels, kernel_size=40, stride=1, padding='same', bias=False) self.branch2 = nn.Conv1d(in_channels=self.in_channels, out_channels=self.out_channels, kernel_size=20, stride=1, padding='same', bias=False) self.branch3 = nn.Conv1d(in_channels=self.in_channels, out_channels=self.out_channels, kernel_size=10, stride=1, padding='same', bias=False) self.branch4 = nn.Sequential( nn.MaxPool1d(kernel_size=3, stride=1, padding=1), nn.Conv1d(in_channels=in_channels, out_channels=self.out_channels, kernel_size=1, stride=1, padding='same', bias=False), ) self.bn = nn.BatchNorm1d(4*self.out_channels) self.relu = nn.ReLU() def forward(self, x): if self.use_bottleneck and int(x.shape[-2]) > 1: y = self.bottleneck(x) else: y = x x = torch.cat([self.branch1(y), self.branch2(y), self.branch3(y), self.branch4(x)], 
1) x = self.bn(x) x = self.relu(x) return x class Classifier_INCEPTION(nn.Module): def __init__(self, input_shape, nb_classes, nb_filters=32, depth=6, residual=True): super(Classifier_INCEPTION, self).__init__() self.nb_classes = nb_classes self.nb_filters = nb_filters self.residual = residual self.depth = depth self.inception, self.shortcut = nn.ModuleList(), nn.ModuleList() for d in range(depth): self.inception.append(Inception_block(1 if d == 0 else nb_filters * 4, nb_filters)) if self.residual and d % 3 == 2: self.shortcut.append(Residual_block(1 if d == 2 else 4*nb_filters, 4*nb_filters)) self.avgpool1 = nn.AdaptiveAvgPool1d(1) self.fc1 = nn.Linear(4*nb_filters, self.nb_classes) self.relu = nn.ReLU() def forward(self, x): input_res = x for d in range(self.depth): x = self.inception[d](x) if self.residual and d % 3 == 2: y = self.shortcut[d//3](input_res) x = self.relu(x + y) input_res = x x = self.avgpool1(x) x = x.reshape(x.shape[0], -1) x = self.fc1(x) return x def fit(trainloader, valloader, input_shape, nb_classes): print('Hi from Inception!') recupereTeacherLossAccurayTest2 = [] teacher = Classifier_INCEPTION(input_shape, nb_classes) print(teacher) use_cuda = torch.cuda.is_available() if use_cuda: torch.cuda.set_device(0) teacher.cuda() cudnn.benchmark = True summary(teacher, ( input_shape[-2], input_shape[-1])) # Training def train_alone_model(net, epoch): print('\Teaining epoch: %d' % epoch) net.train() train_loss = 0 correct = 0 total = 0 for batch_idx, (inputs, targets) in enumerate(trainloader): if use_cuda: inputs, targets = inputs.cuda(), targets.cuda() optimizer.zero_grad() outputs = net(inputs.float()) loss = criterion_CE(outputs, targets) loss.backward() optimizer.step() train_loss += loss.item() _, predicted = torch.max(outputs.data, 1) total += targets.size(0) correct += predicted.eq(targets.data).cpu().sum().float().item() b_idx = batch_idx print('Training Loss: %.3f | Training Acc: %.3f%% (%d/%d)' % (train_loss / (b_idx + 1), 100. 
* correct / total, correct, total)) return train_loss / (b_idx + 1) def test(net): net.eval() test_loss = 0 correct = 0 total = 0 for batch_idx, (inputs, targets) in enumerate(valloader): if use_cuda: inputs, targets = inputs.cuda(), targets.cuda() outputs = net(inputs.float()) loss = criterion_CE(outputs, targets) test_loss += loss.item() _, predicted = torch.max(outputs.data, 1) total += targets.size(0) correct += predicted.eq(targets.data).cpu().sum().float().item() b_idx = batch_idx print('Test Loss: %.3f | Test Acc: %.3f%% (%d/%d)' % (test_loss / (b_idx + 1), 100. * correct / total, correct, total)) return test_loss / (b_idx + 1), correct / total final_loss = [] learning_rates = [] epochs = 15 criterion_CE = nn.CrossEntropyLoss() optimizer = optim.Adam(teacher.parameters(), lr=0.001,) scheduler = optim.lr_scheduler.OneCycleLR(optimizer, 1e-3, epochs=epochs, steps_per_epoch=len(trainloader)) use_cuda = torch.cuda.is_available() best_model_wts = copy.deepcopy(teacher.state_dict()) min_train_loss = np.inf start_time = time.time() for epoch in range(epochs): train_loss = train_alone_model(teacher, epoch) test(teacher) if min_train_loss > train_loss: min_train_loss = train_loss best_model_wts = copy.deepcopy(teacher.state_dict()) scheduler.step() final_loss.append(train_loss) learning_rates.append(optimizer.param_groups[0]['lr']) output_directory = '/' torch.save(best_model_wts, output_directory + 'best_teacher_model.pt') # Save Logs duration = time.time() - start_time best_teacher = Classifier_INCEPTION(input_shape, nb_classes) best_teacher.load_state_dict(best_model_wts) best_teacher.cuda() print('Best Model Accuracy in below ') start_test_time = time.time() test(best_teacher) test_duration = time.time() - start_test_time print(test(best_teacher)) df = pd.DataFrame(list(zip(final_loss, learning_rates)), columns =['loss', 'learning_rate']) index_best_model = df['loss'].idxmin() row_best_model = df.loc[index_best_model] df_best_model = 
pd.DataFrame(list(zip([row_best_model['loss']], [index_best_model+1])), columns =['best_model_train_loss', 'best_model_nb_epoch']) df.to_csv(output_directory + 'history.csv', index=False) df_best_model.to_csv(output_directory + 'df_best_model.csv', index=False) loss_, acc_ = test(best_teacher) df_metrics = pd.DataFrame(list(zip([min_train_loss], [acc_], [duration], [test_duration])), columns =['Loss', 'Accuracy', 'Duration', 'Test Duration']) df_metrics.to_csv(output_directory + 'df_metrics.csv', index=False)
/romaniya_menim-0.0.34-py3-none-any.whl/romaniya_menim/models/inception.py
0.921627
0.309454
inception.py
pypi
import math import copy import time import numpy as np import pandas as pd import torch from torch import nn import torch.nn.functional as F from torchsummary import summary import torch.backends.cudnn as cudnn import torch.optim as optim class conv_block(nn.Module): def __init__(self, in_channels, out_channels, **kwargs): super(conv_block, self).__init__() self.block = nn.Sequential( nn.Conv1d(in_channels=in_channels, out_channels=out_channels, **kwargs), nn.Sigmoid(), nn.AvgPool1d(kernel_size=3) ) def forward(self, x): return self.block(x) class Classifier_FCN(nn.Module): def __init__(self, input_shape, nb_classes): super(Classifier_FCN, self).__init__() self.nb_classes = nb_classes self.conv1 = conv_block(in_channels=1, out_channels=6, kernel_size=7, stride=1, padding='valid') self.conv2 = conv_block(in_channels=6, out_channels=12, kernel_size=7, stride=1, padding='valid') self.fc1 = nn.Linear(300, nb_classes) self.sigmoid = nn.Sigmoid() def forward(self, x): x = self.conv1(x) x = self.conv2(x) x = x.reshape(x.shape[0], -1) x = self.fc1(x) x = self.sigmoid(x) return x def fit(trainloader, valloader, input_shape, nb_classes): print('Hi from CNN!') model = Classifier_FCN(input_shape, nb_classes) print(model) use_cuda = torch.cuda.is_available() if use_cuda: torch.cuda.set_device(0) model.cuda() cudnn.benchmark = True summary(model, ( input_shape[-2], input_shape[-1])) # Training def train_alone_model(net, epoch): print('\Teaining epoch: %d' % epoch) net.train() train_loss = 0 correct = 0 total = 0 for batch_idx, (inputs, targets) in enumerate(trainloader): if use_cuda: inputs, targets = inputs.cuda(), targets.cuda() optimizer.zero_grad() outputs = net(inputs.float()) loss = criterion_CE(outputs, targets) loss.backward() optimizer.step() train_loss += loss.item() _, predicted = torch.max(outputs.data, 1) total += targets.size(0) correct += predicted.eq(targets.data).cpu().sum().float().item() b_idx = batch_idx print('Training Loss: %.3f | Training Acc: %.3f%% 
(%d/%d)' % (train_loss / (b_idx + 1), 100. * correct / total, correct, total)) return train_loss / (b_idx + 1) def test(net): net.eval() test_loss = 0 correct = 0 total = 0 for batch_idx, (inputs, targets) in enumerate(valloader): if use_cuda: inputs, targets = inputs.cuda(), targets.cuda() outputs = net(inputs.float()) loss = criterion_CE(outputs, targets) test_loss += loss.item() _, predicted = torch.max(outputs.data, 1) total += targets.size(0) correct += predicted.eq(targets.data).cpu().sum().float().item() b_idx = batch_idx print('Test Loss: %.3f | Test Acc: %.3f%% (%d/%d)' % (test_loss / (b_idx + 1), 100. * correct / total, correct, total)) return test_loss / (b_idx + 1), correct / total final_loss = [] learning_rates = [] epochs = 2000 criterion_CE = nn.CrossEntropyLoss() optimizer = optim.Adam(model.parameters(), lr=0.001,) scheduler = optim.lr_scheduler.OneCycleLR(optimizer, 1e-3, epochs=2000, steps_per_epoch=len(trainloader)) use_cuda = torch.cuda.is_available() best_model_wts = copy.deepcopy(model.state_dict()) min_train_loss = np.inf start_time = time.time() for epoch in range(epochs): train_loss = train_alone_model(model, epoch) test(model) if min_train_loss > train_loss: min_train_loss = train_loss best_model_wts = copy.deepcopy(model.state_dict()) scheduler.step() final_loss.append(train_loss) learning_rates.append(optimizer.param_groups[0]['lr']) torch.save(best_model_wts, output_directory + 'best_teacher_model.pt') # Save Logs duration = time.time() - start_time best_teacher = Classifier_FCN(input_shape, nb_classes) best_teacher.load_state_dict(best_model_wts) best_teacher.cuda() print('Best Model Accuracy in below ') start_test_time = time.time() test(best_teacher) test_duration = time.time() - start_test_time print(test(best_teacher)) df = pd.DataFrame(list(zip(final_loss, learning_rates)), columns =['loss', 'learning_rate']) index_best_model = df['loss'].idxmin() row_best_model = df.loc[index_best_model] df_best_model = 
pd.DataFrame(list(zip([row_best_model['loss']], [index_best_model+1])), columns =['best_model_train_loss', 'best_model_nb_epoch']) df.to_csv(output_directory + 'history.csv', index=False) df_best_model.to_csv(output_directory + 'df_best_model.csv', index=False) loss_, acc_ = test(best_teacher) df_metrics = pd.DataFrame(list(zip([min_train_loss], [acc_], [duration], [test_duration])), columns =['Loss', 'Accuracy', 'Duration', 'Test Duration']) df_metrics.to_csv(output_directory + 'df_metrics.csv', index=False)
/romaniya_menim-0.0.34-py3-none-any.whl/romaniya_menim/models/cnn.py
0.898431
0.317797
cnn.py
pypi
import math import copy import time import numpy as np import pandas as pd import torch from torch import nn import torch.nn.functional as F from torchsummary import summary import torch.backends.cudnn as cudnn import torch.optim as optim class conv_block(nn.Module): def __init__(self, in_channels, out_channels, **kwargs): super(conv_block, self).__init__() self.relu = nn.ReLU() self.bathcnorm = nn.BatchNorm1d(out_channels) self.conv = nn.Conv1d(in_channels, out_channels, **kwargs) def forward(self, x): return self.relu(self.bathcnorm(self.conv(x))) class conv_block_2(nn.Module): def __init__(self, in_channels, out_channels, **kwargs): super(conv_block_2, self).__init__() self.block = nn.Sequential( nn.Conv1d(in_channels=in_channels, out_channels=out_channels, **kwargs), nn.BatchNorm1d(out_channels) ) def forward(self, x): return self.block(x) class Classifier_RESNET(nn.Module): def __init__(self, input_shape, nb_classes): super(Classifier_RESNET, self).__init__() self.nb_classes = nb_classes n_feature_maps = 64 # BLOCK 1 self.conv_x = conv_block(in_channels=1, out_channels=n_feature_maps, kernel_size=8, stride=1, padding='same') self.conv_y = conv_block(in_channels=n_feature_maps, out_channels=n_feature_maps, kernel_size=5, stride=1, padding='same') self.conv_z = conv_block_2(in_channels=n_feature_maps, out_channels=n_feature_maps, kernel_size=3, stride=1, padding='same') # expand channels for the sum self.shortcut_y = conv_block_2(in_channels=1, out_channels=n_feature_maps, kernel_size=1, stride=1, padding='same') self.relu = nn.ReLU() # BLOCK 2 self.conv_x_2 = conv_block(in_channels=n_feature_maps, out_channels=n_feature_maps*2, kernel_size=8, stride=1, padding='same') self.conv_y_2 = conv_block(in_channels=n_feature_maps*2, out_channels=n_feature_maps*2, kernel_size=5, stride=1, padding='same') self.conv_z_2 = conv_block_2(in_channels=n_feature_maps*2, out_channels=n_feature_maps*2, kernel_size=3, stride=1, padding='same') # expand channels for the sum 
self.shortcut_y_2 = conv_block_2(in_channels=n_feature_maps, out_channels=n_feature_maps*2, kernel_size=1, stride=1, padding='same') self.relu = nn.ReLU() # BLOCK 3 self.conv_x_3 = conv_block(in_channels=n_feature_maps*2, out_channels=n_feature_maps*2, kernel_size=8, stride=1, padding='same') self.conv_y_3 = conv_block(in_channels=n_feature_maps*2, out_channels=n_feature_maps*2, kernel_size=5, stride=1, padding='same') self.conv_z_3 = conv_block_2(in_channels=n_feature_maps*2, out_channels=n_feature_maps*2, kernel_size=3, stride=1, padding='same') # expand channels for the sum self.shortcut_y_3 = nn.BatchNorm1d(n_feature_maps*2) self.relu = nn.ReLU() self.avgpool1 = nn.AdaptiveAvgPool1d(1) self.fc1 = nn.Linear(2*n_feature_maps, self.nb_classes) def forward(self, x): input_layer = x # BLOCK 1 conv_x = self.conv_x(x) conv_y = self.conv_y(conv_x) conv_z = self.conv_z(conv_y) shortcut_y = self.shortcut_y(input_layer) output_block_1 = shortcut_y + conv_z output_block_1 = self.relu(output_block_1) # BLOCK 2 conv_x_2 = self.conv_x_2(output_block_1) conv_y_2 = self.conv_y_2(conv_x_2) conv_z_2 = self.conv_z_2(conv_y_2) shortcut_y_2 = self.shortcut_y_2(output_block_1) output_block_2 = shortcut_y_2 + conv_z_2 output_block_2 = self.relu(output_block_2) # BLOCK 3 conv_x_3 = self.conv_x_3(output_block_2) conv_y_3 = self.conv_y_3(conv_x_3) conv_z_3 = self.conv_z_3(conv_y_3) shortcut_y_3 = self.shortcut_y_3(output_block_2) output_block_3 = shortcut_y_3 + conv_z_3 output_block_3 = self.relu(output_block_3) x = self.avgpool1(output_block_3) x = x.reshape(x.shape[0], -1) x = self.fc1(x) return x def fit(trainloader, valloader, input_shape, nb_classes): print('Hi from RESNET!') model = Classifier_RESNET(input_shape, nb_classes) print(model) use_cuda = torch.cuda.is_available() if use_cuda: torch.cuda.set_device(0) model.cuda() cudnn.benchmark = True summary(model, ( input_shape[-2], input_shape[-1])) # Training def train_alone_model(net, epoch): print('\Teaining epoch: %d' % epoch) 
net.train() train_loss = 0 correct = 0 total = 0 for batch_idx, (inputs, targets) in enumerate(trainloader): if use_cuda: inputs, targets = inputs.cuda(), targets.cuda() optimizer.zero_grad() outputs = net(inputs.float()) loss = criterion_CE(outputs, targets) loss.backward() optimizer.step() train_loss += loss.item() _, predicted = torch.max(outputs.data, 1) total += targets.size(0) correct += predicted.eq(targets.data).cpu().sum().float().item() b_idx = batch_idx print('Training Loss: %.3f | Training Acc: %.3f%% (%d/%d)' % (train_loss / (b_idx + 1), 100. * correct / total, correct, total)) return train_loss / (b_idx + 1) def test(net): net.eval() test_loss = 0 correct = 0 total = 0 for batch_idx, (inputs, targets) in enumerate(valloader): if use_cuda: inputs, targets = inputs.cuda(), targets.cuda() outputs = net(inputs.float()) loss = criterion_CE(outputs, targets) test_loss += loss.item() _, predicted = torch.max(outputs.data, 1) total += targets.size(0) correct += predicted.eq(targets.data).cpu().sum().float().item() b_idx = batch_idx print('Test Loss: %.3f | Test Acc: %.3f%% (%d/%d)' % (test_loss / (b_idx + 1), 100. 
* correct / total, correct, total)) return test_loss / (b_idx + 1), correct / total final_loss = [] learning_rates = [] epochs = 20 criterion_CE = nn.CrossEntropyLoss() optimizer = optim.Adam(model.parameters(), lr=0.001,) scheduler = optim.lr_scheduler.OneCycleLR(optimizer, 1e-3, epochs=20, steps_per_epoch=len(trainloader)) use_cuda = torch.cuda.is_available() best_model_wts = copy.deepcopy(model.state_dict()) min_train_loss = np.inf start_time = time.time() for epoch in range(epochs): train_loss = train_alone_model(model, epoch) test(model) if min_train_loss > train_loss: min_train_loss = train_loss best_model_wts = copy.deepcopy(model.state_dict()) scheduler.step() final_loss.append(train_loss) learning_rates.append(optimizer.param_groups[0]['lr']) output_directory = '/' torch.save(best_model_wts, output_directory + 'best_teacher_model.pt') # Save Logs duration = time.time() - start_time best_teacher = Classifier_RESNET(input_shape, nb_classes) best_teacher.load_state_dict(best_model_wts) best_teacher.cuda() print('Best Model Accuracy in below ') start_test_time = time.time() test(best_teacher) test_duration = time.time() - start_test_time print(test(best_teacher)) df = pd.DataFrame(list(zip(final_loss, learning_rates)), columns =['loss', 'learning_rate']) index_best_model = df['loss'].idxmin() row_best_model = df.loc[index_best_model] df_best_model = pd.DataFrame(list(zip([row_best_model['loss']], [index_best_model+1])), columns =['best_model_train_loss', 'best_model_nb_epoch']) df.to_csv(output_directory + 'history.csv', index=False) df_best_model.to_csv(output_directory + 'df_best_model.csv', index=False) loss_, acc_ = test(best_teacher) df_metrics = pd.DataFrame(list(zip([min_train_loss], [acc_], [duration], [test_duration])), columns =['Loss', 'Accuracy', 'Duration', 'Test Duration']) df_metrics.to_csv(output_directory + 'df_metrics.csv', index=False)
/romaniya_menim-0.0.34-py3-none-any.whl/romaniya_menim/models/resnet.py
0.913922
0.268249
resnet.py
pypi
import math import copy import time import numpy as np import pandas as pd import torch from torch import nn import torch.nn.functional as F from torchsummary import summary import torch.backends.cudnn as cudnn import torch.optim as optim class mlp_block(nn.Module): def __init__(self, in_channels, out_channels, dropout): super(mlp_block, self).__init__() self.block = nn.Sequential( nn.Dropout(p=dropout), nn.Linear(in_channels, out_channels), nn.ReLU() ) def forward(self, x): return self.block(x) class Classifier_MLP(nn.Module): def __init__(self, input_shape, nb_classes): super(Classifier_MLP, self).__init__() self.nb_classes = nb_classes self.in_channels = 10 self.dense_1 = mlp_block(input_shape[-1], 500, 0.1) self.dense_2 = mlp_block(500, 500, 0.2) self.dense_3 = mlp_block(500, 500, 0.2) self.drop_1 = nn.Dropout(p=0.3) self.dense_4 = nn.Linear(500, nb_classes) def forward(self, x): x = x.reshape(x.shape[0], -1) x = self.dense_1(x) x = self.dense_2(x) x = self.dense_3(x) x = self.drop_1(x) x = self.dense_4(x) return x def fit(trainloader, valloader, input_shape, nb_classes): print('Hi from MLP!') model = Classifier_MLP(input_shape, nb_classes) print(model) use_cuda = torch.cuda.is_available() if use_cuda: torch.cuda.set_device(0) model.cuda() cudnn.benchmark = True summary(model, ( input_shape[-2], input_shape[-1])) # Training def train_alone_model(net, epoch): print('\Teaining epoch: %d' % epoch) net.train() train_loss = 0 correct = 0 total = 0 for batch_idx, (inputs, targets) in enumerate(trainloader): if use_cuda: inputs, targets = inputs.cuda(), targets.cuda() optimizer.zero_grad() outputs = net(inputs.float()) loss = criterion_CE(outputs, targets) loss.backward() optimizer.step() train_loss += loss.item() _, predicted = torch.max(outputs.data, 1) total += targets.size(0) correct += predicted.eq(targets.data).cpu().sum().float().item() b_idx = batch_idx print('Training Loss: %.3f | Training Acc: %.3f%% (%d/%d)' % (train_loss / (b_idx + 1), 100. 
* correct / total, correct, total)) return train_loss / (b_idx + 1) def test(net): net.eval() test_loss = 0 correct = 0 total = 0 for batch_idx, (inputs, targets) in enumerate(valloader): if use_cuda: inputs, targets = inputs.cuda(), targets.cuda() outputs = net(inputs.float()) loss = criterion_CE(outputs, targets) test_loss += loss.item() _, predicted = torch.max(outputs.data, 1) total += targets.size(0) correct += predicted.eq(targets.data).cpu().sum().float().item() b_idx = batch_idx print('Test Loss: %.3f | Test Acc: %.3f%% (%d/%d)' % (test_loss / (b_idx + 1), 100. * correct / total, correct, total)) return test_loss / (b_idx + 1), correct / total final_loss = [] learning_rates = [] epochs = 5000 criterion_CE = nn.CrossEntropyLoss() optimizer = optim.Adam(model.parameters(), lr=0.001,) scheduler = optim.lr_scheduler.OneCycleLR(optimizer, 1e-3, epochs=2000, steps_per_epoch=len(trainloader)) use_cuda = torch.cuda.is_available() best_model_wts = copy.deepcopy(model.state_dict()) min_train_loss = np.inf start_time = time.time() for epoch in range(epochs): train_loss = train_alone_model(model, epoch) test(model) if min_train_loss > train_loss: min_train_loss = train_loss best_model_wts = copy.deepcopy(model.state_dict()) scheduler.step() final_loss.append(train_loss) learning_rates.append(optimizer.param_groups[0]['lr']) output_directory = '/' torch.save(best_model_wts, output_directory + 'best_teacher_model.pt') # Save Logs duration = time.time() - start_time best_teacher = Classifier_MLP(input_shape, nb_classes) best_teacher.load_state_dict(best_model_wts) best_teacher.cuda() print('Best Model Accuracy in below ') start_test_time = time.time() test(best_teacher) test_duration = time.time() - start_test_time print(test(best_teacher)) df = pd.DataFrame(list(zip(final_loss, learning_rates)), columns =['loss', 'learning_rate']) index_best_model = df['loss'].idxmin() row_best_model = df.loc[index_best_model] df_best_model = 
pd.DataFrame(list(zip([row_best_model['loss']], [index_best_model+1])), columns =['best_model_train_loss', 'best_model_nb_epoch']) df.to_csv(output_directory + 'history.csv', index=False) df_best_model.to_csv(output_directory + 'df_best_model.csv', index=False) loss_, acc_ = test(best_teacher) df_metrics = pd.DataFrame(list(zip([min_train_loss], [acc_], [duration], [test_duration])), columns =['Loss', 'Accuracy', 'Duration', 'Test Duration']) df_metrics.to_csv(output_directory + 'df_metrics.csv', index=False)
/romaniya_menim-0.0.34-py3-none-any.whl/romaniya_menim/models/mlp.py
0.887778
0.316303
mlp.py
pypi
import re
from collections import OrderedDict

from .romanizer import romanizer

has_capitals = False
data = OrderedDict()

# https://en.wikipedia.org/wiki/Aramaic_alphabet
# https://en.wikipedia.org/wiki/Brahmi_script

# letters from 𑀅 to 𑀣
# alef:http://en.wiktionary.org/wiki/
data['a'] = dict(letter=[u'𑀅'], name=u'𑀅', segment='vowel', subsegment='', transliteration=u'a', order=1)
# beth:http://en.wiktionary.org/wiki/
data['ba'] = dict(letter=[u'𑀩'], name=u'𑀩', segment='consonant', subsegment='', transliteration=u'b', order=2)
# gimel:http://en.wiktionary.org/wiki/
data['ga'] = dict(letter=[u'𑀕'], name=u'𑀕', segment='consonant', subsegment='', transliteration=u'g', order=3)
# daleth:http://en.wiktionary.org/wiki/
data['dha'] = dict(letter=[u'𑀥'], name=u'𑀥', segment='consonant', subsegment='', transliteration=u'd', order=4)
# he:http://en.wiktionary.org/wiki/
data['ha'] = dict(letter=[u'𑀳'], name=u'𑀳', segment='vowel', subsegment='', transliteration=u'e', order=5)
# vau:http://en.wikipedia.org/wiki/
data['va'] = dict(letter=[u'𑀯'], name=u'𑀯', segment='vowel', subsegment='', transliteration=u'w', order=6)
# zayin:http://en.wiktionary.org/wiki/
data['ja'] = dict(letter=[u'𑀚'], name=u'𑀚', segment='consonant', subsegment='', transliteration=u'z', order=7)
# heth:http://en.wiktionary.org/wiki/
data['gha'] = dict(letter=[u'𑀖'], name=u'𑀖', segment='consonant', subsegment='', transliteration=u'ê', order=8)
# teth:http://en.wiktionary.org/wiki/
data['tha'] = dict(letter=[u'𑀣'], name=u'𑀣', segment='consonant', subsegment='', transliteration=u'h', order=9)

# letters from 𑀬 to 𑀲
# yod:http://en.wiktionary.org/wiki/
data['ya'] = dict(letter=[u'𑀬'], name=u'𑀬', segment='vowel', subsegment='', transliteration=u'i', order=10)
# kaph:http://en.wiktionary.org/wiki/
data['ka'] = dict(letter=[u'𑀓'], name=u'𑀓', segment='consonant', subsegment='', transliteration=u'k', order=11)
# lamed:http://en.wiktionary.org/wiki/
data['la'] = dict(letter=[u'𑀮'], name=u'𑀮', segment='consonant', subsegment='', transliteration=u'l', order=12)
# mem:http://en.wiktionary.org/wiki/
data['ma'] = dict(letter=[u'𑀫'], name=u'𑀫', segment='consonant', subsegment='', transliteration=u'm', order=13)
# num:http://en.wiktionary.org/wiki/
data['na'] = dict(letter=[u'𑀦'], name=u'𑀦', segment='consonant', subsegment='', transliteration=u'n', order=14)
# samekh:http://en.wiktionary.org/wiki/
data['sha'] = dict(letter=[u'𑀰'], name=u'𑀰', segment='consonant', subsegment='', transliteration=u'x', order=15)
# ayin:http://en.wiktionary.org/wiki/
data['e'] = dict(letter=[u'𑀏'], name=u'𑀏', segment='vowel', subsegment='', transliteration=u'o', order=16)
# pe:http://en.wiktionary.org/wiki/
data['pa'] = dict(letter=[u'𑀧'], name=u'𑀧', segment='consonant', subsegment='', transliteration=u'p', order=17)
# tsade:http://en.wikipedia.org/wiki/
data['sa'] = dict(letter=[u'𑀲'], name=u'𑀲', segment='consonant', subsegment='', transliteration=u'c', order=18)

# letters from 𑀔 to 𑀢
# resh:http://en.wiktionary.org/wiki/
data['kha'] = dict(letter=[u'𑀔'], name=u'𑀔', segment='consonant', subsegment='', transliteration=u'q', order=19)
# shin:http://en.wiktionary.org/wiki/
data['ra'] = dict(letter=[u'𑀭'], name=u'𑀭', segment='consonant', subsegment='', transliteration=u'r', order=20)
# tau:http://en.wiktionary.org/wiki/
data['ssa'] = dict(letter=[u'𑀱'], name=u'𑀱', segment='consonant', subsegment='', transliteration=u's', order=21)
# final_tsade:http://en.wiktionary.org/wiki/
data['ta'] = dict(letter=[u'𑀢'], name=u'𑀢', segment='consonant', subsegment='', transliteration=u't', order=22)
# final_kaph:http://en.wiktionary.org/wiki/
#data['final_kap'] = dict(letter=[u'ⲫ'], name=u'ⲫ', segment='consonant', subsegment='', transliteration=u'K', order=23)
# final_mem, chi:http://en.wiktionary.org/wiki/
#data['final_mem'] = dict(letter=[u'ⲭ'], name=u'ⲭ', segment='consonant', subsegment='', transliteration=u'M', order=24)
# final_nun:http://en.wiktionary.org/wiki/
#data['final_nun'] = dict(letter=[u'ⲯ'], name=u'ⲯ', segment='consonant', subsegment='', transliteration=u'N', order=25)
# final_pe:http://en.wiktionary.org/wiki/
#data['final_pe'] = dict(letter=[u'ⲱ'], name=u'ⲱ', segment='consonant', subsegment='', transliteration=u'P', order=26)
# final_tsade:http://en.wiktionary.org/wiki/
#data['final_sadhe'] = dict(letter=[u'ⳁ'], name=u'ⳁ', segment='consonant', subsegment='', transliteration=u'Y', order=27)

r = romanizer(data, has_capitals)

# collect brahmic and transliteration letters from data dictionary for preprocessing function
letters = ''.join([''.join(d['letter']) + d['transliteration'] for key, d in data.items()])
regex = re.compile('[^%s ]+' % letters)
regex2 = re.compile(r'[^%s\s]' % ''.join([''.join(d['letter']) for key, d in data.items()]))


def filter(string):
    """
    Preprocess string to remove all other characters but brahmic ones

    :param string:
    :return:
    """
    # remove all unwanted characters
    return regex2.sub(' ', string)


def preprocess(string):
    """
    Preprocess string to transform all diacritics and remove other special characters

    :param string:
    :return:
    """
    return regex.sub('', string)


def convert(string, sanitize=False):
    """
    Swap characters from script to transliterated version and vice versa.
    Optionally sanitize string by using preprocess function.

    :param sanitize:
    :param string:
    :return:
    """
    return r.convert(string, (preprocess if sanitize else False))
/romanize3-0.1.14.tar.gz/romanize3-0.1.14/romanize/brh.py
0.518302
0.304701
brh.py
pypi
import re
from collections import OrderedDict

from .romanizer import romanizer

has_capitals = False
data = OrderedDict()

# https://en.wikipedia.org/wiki/Aramaic_alphabet
# https://en.wikipedia.org/wiki/Phoenician_alphabet
# 1 = 𐤖
# 2 = 𐤚
# 3 = 𐤛
# 10 = 𐤗
# 20 = 𐤘
# 100 = 𐤙

# letters from 𐤀 to 𐤈
# alef:http://en.wiktionary.org/wiki/
data['alep'] = dict(letter=[u'𐤀'], name=u'𐤀', segment='vowel', subsegment='', transliteration=u'a', order=1)
# beth:http://en.wiktionary.org/wiki/
data['bet'] = dict(letter=[u'𐤁'], name=u'𐤁', segment='consonant', subsegment='', transliteration=u'b', order=2)
# gimel:http://en.wiktionary.org/wiki/
data['giml'] = dict(letter=[u'𐤂'], name=u'𐤂', segment='consonant', subsegment='', transliteration=u'g', order=3)
# daleth:http://en.wiktionary.org/wiki/
data['dalet'] = dict(letter=[u'𐤃'], name=u'𐤃', segment='consonant', subsegment='', transliteration=u'd', order=4)
# he:http://en.wiktionary.org/wiki/
data['he'] = dict(letter=[u'𐤄'], name=u'𐤄', segment='vowel', subsegment='', transliteration=u'h', order=5)
# vau:http://en.wikipedia.org/wiki/
data['waw'] = dict(letter=[u'𐤅'], name=u'𐤅', segment='vowel', subsegment='', transliteration=u'w', order=6)
# zayin:http://en.wiktionary.org/wiki/
data['zayin'] = dict(letter=[u'𐤆'], name=u'𐤆', segment='consonant', subsegment='', transliteration=u'z', order=7)
# heth:http://en.wiktionary.org/wiki/
data['het'] = dict(letter=[u'𐤇'], name=u'𐤇', segment='consonant', subsegment='', transliteration=u'ḥ', order=8)
# teth:http://en.wiktionary.org/wiki/
data['tet'] = dict(letter=[u'𐤈'], name=u'𐤈', segment='consonant', subsegment='', transliteration=u'ṭ', order=9)

# letters from 𐤉 to 𐤑
# yod:http://en.wiktionary.org/wiki/
data['yod'] = dict(letter=[u'𐤉'], name=u'𐤉', segment='vowel', subsegment='', transliteration=u'y', order=10)
# kaph:http://en.wiktionary.org/wiki/
data['kap'] = dict(letter=[u'𐤊'], name=u'𐤊', segment='consonant', subsegment='', transliteration=u'k', order=11)
# lamed:http://en.wiktionary.org/wiki/
data['lamed'] = dict(letter=[u'𐤋'], name=u'𐤋', segment='consonant', subsegment='', transliteration=u'l', order=12)
# mem:http://en.wiktionary.org/wiki/
data['mem'] = dict(letter=[u'𐤌'], name=u'𐤌', segment='consonant', subsegment='', transliteration=u'm', order=13)
# num:http://en.wiktionary.org/wiki/
data['nun'] = dict(letter=[u'𐤍'], name=u'𐤍', segment='consonant', subsegment='', transliteration=u'n', order=14)
# samekh:http://en.wiktionary.org/wiki/
data['semek'] = dict(letter=[u'𐤎'], name=u'𐤎', segment='consonant', subsegment='', transliteration=u's', order=15)
# ayin:http://en.wiktionary.org/wiki/
data['ayin'] = dict(letter=[u'𐤏'], name=u'𐤏', segment='vowel', subsegment='', transliteration=u'o', order=16)
# pe:http://en.wiktionary.org/wiki/
data['pe'] = dict(letter=[u'𐤐'], name=u'𐤐', segment='consonant', subsegment='', transliteration=u'p', order=17)
# tsade:http://en.wikipedia.org/wiki/
data['sade'] = dict(letter=[u'𐤑'], name=u'𐤑', segment='consonant', subsegment='', transliteration=u'ṣ', order=18)

# letters from 𐤒 to 𐤕
# resh:http://en.wiktionary.org/wiki/
data['qop'] = dict(letter=[u'𐤒'], name=u'𐤒', segment='consonant', subsegment='', transliteration=u'q', order=19)
# shin:http://en.wiktionary.org/wiki/
data['res'] = dict(letter=[u'𐤓'], name=u'𐤓', segment='consonant', subsegment='', transliteration=u'r', order=20)
# tau:http://en.wiktionary.org/wiki/
data['sin'] = dict(letter=[u'𐤔'], name=u'𐤔', segment='consonant', subsegment='', transliteration=u'š', order=21)
# final_tsade:http://en.wiktionary.org/wiki/
data['taw'] = dict(letter=[u'𐤕'], name=u'𐤕', segment='consonant', subsegment='', transliteration=u't', order=22)
# final_kaph:http://en.wiktionary.org/wiki/
#data['final_kap'] = dict(letter=[u'ⲫ'], name=u'ⲫ', segment='consonant', subsegment='', transliteration=u'K', order=23)
# final_mem, chi:http://en.wiktionary.org/wiki/
#data['final_mem'] = dict(letter=[u'ⲭ'], name=u'ⲭ', segment='consonant', subsegment='', transliteration=u'M', order=24)
# final_nun:http://en.wiktionary.org/wiki/
#data['final_nun'] = dict(letter=[u'ⲯ'], name=u'ⲯ', segment='consonant', subsegment='', transliteration=u'N', order=25)
# final_pe:http://en.wiktionary.org/wiki/
#data['final_pe'] = dict(letter=[u'ⲱ'], name=u'ⲱ', segment='consonant', subsegment='', transliteration=u'P', order=26)
# final_tsade:http://en.wiktionary.org/wiki/
#data['final_sadhe'] = dict(letter=[u'ⳁ'], name=u'ⳁ', segment='consonant', subsegment='', transliteration=u'Y', order=27)

r = romanizer(data, has_capitals)

# collect phoenician and transliteration letters from data dictionary for preprocessing function
letters = ''.join([''.join(d['letter']) + d['transliteration'] for key, d in data.items()])
regex = re.compile('[^%s ]+' % letters)
regex2 = re.compile(r'[^%s\s]' % ''.join([''.join(d['letter']) for key, d in data.items()]))


def filter(string):
    """
    Preprocess string to remove all other characters but phoenician ones

    :param string:
    :return:
    """
    # remove all unwanted characters
    return regex2.sub(' ', string)


def preprocess(string):
    """
    Preprocess string to transform all diacritics and remove other special characters

    :param string:
    :return:
    """
    return regex.sub('', string)


def convert(string, sanitize=False):
    """
    Swap characters from script to transliterated version and vice versa.
    Optionally sanitize string by using preprocess function.

    :param sanitize:
    :param string:
    :return:
    """
    return r.convert(string, (preprocess if sanitize else False))
/romanize3-0.1.14.tar.gz/romanize3-0.1.14/romanize/phn.py
0.511473
0.233455
phn.py
pypi
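Each of these romanize modules reduces to the same mechanism: a one-to-one character swap between a script and its Latin transliteration, driven by the module's `data` table. Below is a minimal standalone sketch of that mechanism using a hypothetical three-letter Phoenician sample; the real package builds its table from the full `data` dict and delegates the swap to the shared `romanizer` class, which is not shown in this chunk.

```python
# Standalone sketch: the three-letter sample mapping is an assumption for
# illustration, not the package's full table.
sample = {u'𐤀': u'a', u'𐤁': u'b', u'𐤂': u'g'}

# translation tables for each direction (str.translate wants codepoint keys)
to_latin = {ord(src): dst for src, dst in sample.items()}
to_script = {ord(dst): src for src, dst in sample.items()}


def convert(string):
    """Swap script letters to Latin; if nothing changed, swap the other way."""
    swapped = string.translate(to_latin)
    return string.translate(to_script) if swapped == string else swapped
```

This mirrors the two-way behavior documented in each module's `convert` docstring: the same function romanizes script input and re-scripts Latin input.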
import re
from collections import OrderedDict

from .romanizer import romanizer

has_capitals = False
data = OrderedDict()

# https://en.wikipedia.org/wiki/Aramaic_alphabet

# letters from ܐ to ܛ
# alef:http://en.wiktionary.org/wiki/
data['alap'] = dict(letter=[u'ܐ'], name=u'ܐ', segment='vowel', subsegment='', transliteration=u'a', order=1)
# beth:http://en.wiktionary.org/wiki/
data['beth'] = dict(letter=[u'ܒ'], name=u'ܒ', segment='consonant', subsegment='', transliteration=u'b', order=2)
# gimel:http://en.wiktionary.org/wiki/
data['gamal'] = dict(letter=[u'ܓ'], name=u'ܓ', segment='consonant', subsegment='', transliteration=u'g', order=3)
# daleth:http://en.wiktionary.org/wiki/
data['dalath'] = dict(letter=[u'ܕ'], name=u'ܕ', segment='consonant', subsegment='', transliteration=u'd', order=4)
# he:http://en.wiktionary.org/wiki/
data['he'] = dict(letter=[u'ܗ'], name=u'ܗ', segment='vowel', subsegment='', transliteration=u'e', order=5)
# vau:http://en.wikipedia.org/wiki/
data['waw'] = dict(letter=[u'ܘ'], name=u'ܘ', segment='vowel', subsegment='', transliteration=u'w', order=6)
# zayin:http://en.wiktionary.org/wiki/
data['zain'] = dict(letter=[u'ܙ'], name=u'ܙ', segment='consonant', subsegment='', transliteration=u'z', order=7)
# heth:http://en.wiktionary.org/wiki/
data['heth'] = dict(letter=[u'ܚ'], name=u'ܚ', segment='consonant', subsegment='', transliteration=u'ê', order=8)
# teth:http://en.wiktionary.org/wiki/
data['teth'] = dict(letter=[u'ܛ'], name=u'ܛ', segment='consonant', subsegment='', transliteration=u'h', order=9)

# letters from ܝ to ܨ
# yod:http://en.wiktionary.org/wiki/
data['yodh'] = dict(letter=[u'ܝ'], name=u'ܝ', segment='vowel', subsegment='', transliteration=u'i', order=10)
# kaph:http://en.wiktionary.org/wiki/
data['kap'] = dict(letter=[u'ܟ'], name=u'ܟ', segment='consonant', subsegment='', transliteration=u'k', order=11)
# lamed:http://en.wiktionary.org/wiki/
data['lamadh'] = dict(letter=[u'ܠ'], name=u'ܠ', segment='consonant', subsegment='', transliteration=u'l', order=12)
# mem:http://en.wiktionary.org/wiki/
data['mem'] = dict(letter=[u'ܡ'], name=u'ܡ', segment='consonant', subsegment='', transliteration=u'm', order=13)
# num:http://en.wiktionary.org/wiki/
data['num'] = dict(letter=[u'ܢ'], name=u'ܢ', segment='consonant', subsegment='', transliteration=u'n', order=14)
# samekh:http://en.wiktionary.org/wiki/
data['semkath'] = dict(letter=[u'ܣ'], name=u'ܣ', segment='consonant', subsegment='', transliteration=u'x', order=15)
# ayin:http://en.wiktionary.org/wiki/
data['e'] = dict(letter=[u'ܥ'], name=u'ܥ', segment='consonant', subsegment='', transliteration=u'o', order=16)
# pe:http://en.wiktionary.org/wiki/
data['pe'] = dict(letter=[u'ܦ'], name=u'ܦ', segment='consonant', subsegment='', transliteration=u'p', order=17)
# tsade:http://en.wikipedia.org/wiki/
data['sadhe'] = dict(letter=[u'ܨ'], name=u'ܨ', segment='consonant', subsegment='', transliteration=u'c', order=18)

# letters from ܩ to ܬ
# resh:http://en.wiktionary.org/wiki/
data['qop'] = dict(letter=[u'ܩ'], name=u'ܩ', segment='consonant', subsegment='', transliteration=u'q', order=19)
# shin:http://en.wiktionary.org/wiki/
data['resh'] = dict(letter=[u'ܪ'], name=u'ܪ', segment='consonant', subsegment='', transliteration=u'r', order=20)
# tau:http://en.wiktionary.org/wiki/
data['shin'] = dict(letter=[u'ܫ'], name=u'ܫ', segment='consonant', subsegment='', transliteration=u's', order=21)
# final_tsade:http://en.wiktionary.org/wiki/
data['taw'] = dict(letter=[u'ܬ'], name=u'ܬ', segment='consonant', subsegment='', transliteration=u't', order=22)
# final_kaph:http://en.wiktionary.org/wiki/
#data['final_kap'] = dict(letter=[u'ⲫ'], name=u'ⲫ', segment='consonant', subsegment='', transliteration=u'K', order=23)
# final_mem, chi:http://en.wiktionary.org/wiki/
#data['final_mem'] = dict(letter=[u'ⲭ'], name=u'ⲭ', segment='consonant', subsegment='', transliteration=u'M', order=24)
# final_nun:http://en.wiktionary.org/wiki/
#data['final_nun'] = dict(letter=[u'ⲯ'], name=u'ⲯ', segment='consonant', subsegment='', transliteration=u'N', order=25)
# final_pe:http://en.wiktionary.org/wiki/
#data['final_pe'] = dict(letter=[u'ⲱ'], name=u'ⲱ', segment='consonant', subsegment='', transliteration=u'P', order=26)
# final_tsade:http://en.wiktionary.org/wiki/
#data['final_sadhe'] = dict(letter=[u'ⳁ'], name=u'ⳁ', segment='consonant', subsegment='', transliteration=u'Y', order=27)

r = romanizer(data, has_capitals)

# collect syriac and transliteration letters from data dictionary for preprocessing function
letters = ''.join([''.join(d['letter']) + d['transliteration'] for key, d in data.items()])
regex = re.compile('[^%s ]+' % letters)
regex2 = re.compile(r'[^%s\s]' % ''.join([''.join(d['letter']) for key, d in data.items()]))


def filter(string):
    """
    Preprocess string to remove all other characters but syriac ones

    :param string:
    :return:
    """
    # remove all unwanted characters
    return regex2.sub(' ', string)


def preprocess(string):
    """
    Preprocess string to transform all diacritics and remove other special characters

    :param string:
    :return:
    """
    return regex.sub('', string)


def convert(string, sanitize=False):
    """
    Swap characters from script to transliterated version and vice versa.
    Optionally sanitize string by using preprocess function.

    :param sanitize:
    :param string:
    :return:
    """
    return r.convert(string, (preprocess if sanitize else False))
/romanize3-0.1.14.tar.gz/romanize3-0.1.14/romanize/syc.py
0.514156
0.251832
syc.py
pypi
import re
from collections import OrderedDict

from .romanizer import romanizer

has_capitals = False
data = OrderedDict()

# https://en.wikipedia.org/wiki/Aramaic_alphabet

# letters from 𐡀 to 𐡈
# alef:http://en.wiktionary.org/wiki/
data['alap'] = dict(letter=[u'𐡀'], name=u'𐡀', segment='vowel', subsegment='', transliteration=u'a', order=1)
# beth:http://en.wiktionary.org/wiki/
data['beth'] = dict(letter=[u'𐡁'], name=u'𐡁', segment='consonant', subsegment='', transliteration=u'b', order=2)
# gimel:http://en.wiktionary.org/wiki/
data['gamal'] = dict(letter=[u'𐡂'], name=u'𐡂', segment='consonant', subsegment='', transliteration=u'g', order=3)
# daleth:http://en.wiktionary.org/wiki/
data['dalath'] = dict(letter=[u'𐡃'], name=u'𐡃', segment='consonant', subsegment='', transliteration=u'd', order=4)
# he:http://en.wiktionary.org/wiki/
data['he'] = dict(letter=[u'𐡄'], name=u'𐡄', segment='vowel', subsegment='', transliteration=u'h', order=5)
# vau:http://en.wikipedia.org/wiki/
data['waw'] = dict(letter=[u'𐡅'], name=u'𐡅', segment='vowel', subsegment='', transliteration=u'w', order=6)
# zayin:http://en.wiktionary.org/wiki/
data['zain'] = dict(letter=[u'𐡆'], name=u'𐡆', segment='consonant', subsegment='', transliteration=u'z', order=7)
# heth:http://en.wiktionary.org/wiki/
data['heth'] = dict(letter=[u'𐡇'], name=u'𐡇', segment='consonant', subsegment='', transliteration=u'ḥ', order=8)
# teth:http://en.wiktionary.org/wiki/
data['teth'] = dict(letter=[u'𐡈'], name=u'𐡈', segment='consonant', subsegment='', transliteration=u'ṭ', order=9)

# letters from 𐡉 to 𐡑
# yod:http://en.wiktionary.org/wiki/
data['yodh'] = dict(letter=[u'𐡉'], name=u'𐡉', segment='vowel', subsegment='', transliteration=u'i', order=10)
# kaph:http://en.wiktionary.org/wiki/
data['kap'] = dict(letter=[u'𐡊'], name=u'𐡊', segment='consonant', subsegment='', transliteration=u'k', order=11)
# lamed:http://en.wiktionary.org/wiki/
data['lamadh'] = dict(letter=[u'𐡋'], name=u'𐡋', segment='consonant', subsegment='', transliteration=u'l', order=12)
# mem:http://en.wiktionary.org/wiki/
data['mem'] = dict(letter=[u'𐡌'], name=u'𐡌', segment='consonant', subsegment='', transliteration=u'm', order=13)
# num:http://en.wiktionary.org/wiki/
data['num'] = dict(letter=[u'𐡍'], name=u'𐡍', segment='consonant', subsegment='', transliteration=u'n', order=14)
# samekh:http://en.wiktionary.org/wiki/
data['semkath'] = dict(letter=[u'𐡎'], name=u'𐡎', segment='consonant', subsegment='', transliteration=u's', order=15)
# ayin:http://en.wiktionary.org/wiki/
data['e'] = dict(letter=[u'𐡏'], name=u'𐡏', segment='vowel', subsegment='', transliteration=u'o', order=16)
# pe:http://en.wiktionary.org/wiki/
data['pe'] = dict(letter=[u'𐡐'], name=u'𐡐', segment='consonant', subsegment='', transliteration=u'p', order=17)
# tsade:http://en.wikipedia.org/wiki/
data['sadhe'] = dict(letter=[u'𐡑'], name=u'𐡑', segment='consonant', subsegment='', transliteration=u'ṣ', order=18)

# letters from 𐡒 to 𐡕
# resh:http://en.wiktionary.org/wiki/
data['qop'] = dict(letter=[u'𐡒'], name=u'𐡒', segment='consonant', subsegment='', transliteration=u'q', order=19)
# shin:http://en.wiktionary.org/wiki/
data['resh'] = dict(letter=[u'𐡓'], name=u'𐡓', segment='consonant', subsegment='', transliteration=u'r', order=20)
# tau:http://en.wiktionary.org/wiki/
data['shin'] = dict(letter=[u'𐡔'], name=u'𐡔', segment='consonant', subsegment='', transliteration=u'š', order=21)
# final_tsade:http://en.wiktionary.org/wiki/
data['taw'] = dict(letter=[u'𐡕'], name=u'𐡕', segment='consonant', subsegment='', transliteration=u't', order=22)
# final_kaph:http://en.wiktionary.org/wiki/
#data['final_kap'] = dict(letter=[u'ⲫ'], name=u'ⲫ', segment='consonant', subsegment='', transliteration=u'K', order=23)
# final_mem, chi:http://en.wiktionary.org/wiki/
#data['final_mem'] = dict(letter=[u'ⲭ'], name=u'ⲭ', segment='consonant', subsegment='', transliteration=u'M', order=24)
# final_nun:http://en.wiktionary.org/wiki/
#data['final_nun'] = dict(letter=[u'ⲯ'], name=u'ⲯ', segment='consonant', subsegment='', transliteration=u'N', order=25)
# final_pe:http://en.wiktionary.org/wiki/
#data['final_pe'] = dict(letter=[u'ⲱ'], name=u'ⲱ', segment='consonant', subsegment='', transliteration=u'P', order=26)
# final_tsade:http://en.wiktionary.org/wiki/
#data['final_sadhe'] = dict(letter=[u'ⳁ'], name=u'ⳁ', segment='consonant', subsegment='', transliteration=u'Y', order=27)

r = romanizer(data, has_capitals)

# collect aramaic and transliteration letters from data dictionary for preprocessing function
letters = ''.join([''.join(d['letter']) + d['transliteration'] for key, d in data.items()])
regex = re.compile('[^%s ]+' % letters)
regex2 = re.compile(r'[^%s\s]' % ''.join([''.join(d['letter']) for key, d in data.items()]))


def filter(string):
    """
    Preprocess string to remove all other characters but aramaic ones

    :param string:
    :return:
    """
    # remove all unwanted characters
    return regex2.sub(' ', string)


def preprocess(string):
    """
    Preprocess string to transform all diacritics and remove other special characters

    :param string:
    :return:
    """
    return regex.sub('', string)


def convert(string, sanitize=False):
    """
    Swap characters from script to transliterated version and vice versa.
    Optionally sanitize string by using preprocess function.

    :param sanitize:
    :param string:
    :return:
    """
    return r.convert(string, (preprocess if sanitize else False))
/romanize3-0.1.14.tar.gz/romanize3-0.1.14/romanize/arm.py
0.503662
0.300816
arm.py
pypi
import re
from collections import OrderedDict

from .romanizer import romanizer

has_capitals = True
data = OrderedDict()

# http://en.wikipedia.org/wiki/Coptic_alphabet

# letters from ⲁ to ⲑ (1 - 9)
# alef:http://en.wiktionary.org/wiki/
data['alpha'] = dict(letter=[u'ⲁ'], name=u'ⲁ', segment='vowel', subsegment='', transliteration=u'a', order=1)
# beth:http://en.wiktionary.org/wiki/
data['beth'] = dict(letter=[u'ⲃ'], name=u'ⲃ', segment='consonant', subsegment='', transliteration=u'b', order=2)
# gimel:http://en.wiktionary.org/wiki/
data['gamma'] = dict(letter=[u'ⲅ'], name=u'ⲅ', segment='consonant', subsegment='', transliteration=u'g', order=3)
# daleth:http://en.wiktionary.org/wiki/
data['delta'] = dict(letter=[u'ⲇ'], name=u'ⲇ', segment='consonant', subsegment='', transliteration=u'd', order=4)
# he:http://en.wiktionary.org/wiki/
data['ei'] = dict(letter=[u'ⲉ'], name=u'ⲉ', segment='vowel', subsegment='', transliteration=u'e', order=5)
# vau:http://en.wikipedia.org/wiki/
data['so'] = dict(letter=[u'ⲋ'], name=u'ⲋ', segment='numeral', subsegment='', transliteration=u'w', order=6)
# zayin:http://en.wiktionary.org/wiki/
data['zeta'] = dict(letter=[u'ⲍ'], name=u'ⲍ', segment='consonant', subsegment='', transliteration=u'z', order=7)
# heth:http://en.wiktionary.org/wiki/
data['eta'] = dict(letter=[u'ⲏ'], name=u'ⲏ', segment='vowel', subsegment='', transliteration=u'ê', order=8)
# teth:http://en.wiktionary.org/wiki/
data['theta'] = dict(letter=[u'ⲑ'], name=u'ⲑ', segment='consonant', subsegment='', transliteration=u'h', order=9)

# letters from ⲓ to ϥ (10 - 90)
# yod:http://en.wiktionary.org/wiki/
data['yota'] = dict(letter=[u'ⲓ'], name=u'ⲓ', segment='vowel', subsegment='', transliteration=u'i', order=10)
# kaph:http://en.wiktionary.org/wiki/
data['kappa'] = dict(letter=[u'ⲕ'], name=u'ⲕ', segment='consonant', subsegment='', transliteration=u'k', order=11)
# lamed:http://en.wiktionary.org/wiki/
data['lambda'] = dict(letter=[u'ⲗ'], name=u'ⲗ', segment='consonant', subsegment='', transliteration=u'l', order=12)
# mem:http://en.wiktionary.org/wiki/
data['me'] = dict(letter=[u'ⲙ'], name=u'ⲙ', segment='consonant', subsegment='', transliteration=u'm', order=13)
# num:http://en.wiktionary.org/wiki/
data['ne'] = dict(letter=[u'ⲛ'], name=u'ⲛ', segment='consonant', subsegment='', transliteration=u'n', order=14)
# samekh:http://en.wiktionary.org/wiki/
data['eksi'] = dict(letter=[u'ⲝ'], name=u'ⲝ', segment='consonant', subsegment='', transliteration=u'x', order=15)
# ayin:http://en.wiktionary.org/wiki/
data['o'] = dict(letter=[u'ⲟ'], name=u'ⲟ', segment='consonant', subsegment='', transliteration=u'o', order=16)
# pe:http://en.wiktionary.org/wiki/
data['pi'] = dict(letter=[u'ⲡ'], name=u'ⲡ', segment='consonant', subsegment='', transliteration=u'p', order=17)
# tsade:http://en.wikipedia.org/wiki/
data['fay'] = dict(letter=[u'ϥ'], name=u'ϥ', segment='numeral', subsegment='', transliteration=u'q', order=18)

# letters from ⲣ to ⳁ (100 - 900)
# resh:http://en.wiktionary.org/wiki/
data['ro'] = dict(letter=[u'ⲣ'], name=u'ⲣ', segment='consonant', subsegment='', transliteration=u'r', order=19)
# shin:http://en.wiktionary.org/wiki/
data['sima'] = dict(letter=[u'ⲥ'], name=u'ⲥ', segment='consonant', subsegment='', transliteration=u's', order=20)
# tau:http://en.wiktionary.org/wiki/
data['taw'] = dict(letter=[u'ⲧ'], name=u'ⲧ', segment='consonant', subsegment='', transliteration=u't', order=21)
# final_tsade:http://en.wiktionary.org/wiki/Tsade
data['epsilon'] = dict(letter=[u'ⲩ'], name=u'ⲩ', segment='vowel', subsegment='', transliteration=u'u', order=22)
# final_kaph:http://en.wiktionary.org/wiki/
data['fi'] = dict(letter=[u'ⲫ'], name=u'ⲫ', segment='consonant', subsegment='', transliteration=u'f', order=23)
# final_mem, chi:http://en.wiktionary.org/wiki/
data['khe'] = dict(letter=[u'ⲭ'], name=u'ⲭ', segment='consonant', subsegment='', transliteration=u'c', order=24)
# final_nun:http://en.wiktionary.org/wiki/
data['epsi'] = dict(letter=[u'ⲯ'], name=u'ⲯ', segment='consonant', subsegment='', transliteration=u'y', order=25)
# final_pe:http://en.wiktionary.org/wiki/
data['ou'] = dict(letter=[u'ⲱ'], name=u'ⲱ', segment='vowel', subsegment='', transliteration=u'ô', order=26)
# final_tsade:http://en.wiktionary.org/wiki/Tsade
data['nine'] = dict(letter=[u'ⳁ'], name=u'ⳁ', segment='numeral', subsegment='', transliteration=u'j', order=27)

r = romanizer(data, has_capitals)

# collect coptic and transliteration letters from data dictionary for preprocessing function
letters = ''.join([''.join(d['letter']) + d['transliteration'] + ''.join(d['letter']).upper() + d['transliteration'].upper()
                   for key, d in data.items()])
regex = re.compile('[^%s ]+' % letters)
regex2 = re.compile(r'[^%s\s]' % ''.join([''.join(d['letter']) + ''.join(d['letter']).upper() for key, d in data.items()]))


def filter(string):
    """
    Preprocess string to remove all other characters but coptic ones

    :param string:
    :return:
    """
    # remove all unwanted characters
    return regex2.sub(' ', string)


def preprocess(string):
    """
    Preprocess string to transform all diacritics and remove other special characters

    :param string:
    :return:
    """
    return regex.sub('', string)


def convert(string, sanitize=False):
    """
    Swap characters from script to transliterated version and vice versa.
    Optionally sanitize string by using preprocess function.

    :param sanitize:
    :param string:
    :return:
    """
    return r.convert(string, (preprocess if sanitize else False))
/romanize3-0.1.14.tar.gz/romanize3-0.1.14/romanize/cop.py
0.519278
0.302056
cop.py
pypi
import re
from collections import OrderedDict

from .romanizer import romanizer

has_capitals = False
data = OrderedDict()

# https://en.wikipedia.org/wiki/Aramaic_alphabet
# https://en.wikipedia.org/wiki/Abjad_numerals

# letters from ا to ط
# https://en.wikipedia.org/wiki/%D8%A7
data['alif'] = dict(letter=[u'ا'], name=u'ا', segment='vowel', subsegment='', transliteration=u'a', order=1)
# https://en.wikipedia.org/wiki/%D8%A8
data['ba'] = dict(letter=[u'ب'], name=u'ب', segment='consonant', subsegment='', transliteration=u'b', order=2)
# https://en.wikipedia.org/wiki/%D8%AC
data['jim'] = dict(letter=[u'ج'], name=u'ج', segment='consonant', subsegment='', transliteration=u'j', order=3)
# https://en.wikipedia.org/wiki/%D8%AF
data['dal'] = dict(letter=[u'د'], name=u'د', segment='consonant', subsegment='', transliteration=u'd', order=4)
# https://en.wikipedia.org/wiki/%D9%87
data['ha'] = dict(letter=[u'ه'], name=u'ه', segment='vowel', subsegment='', transliteration=u'h', order=5)
# https://en.wikipedia.org/wiki/%D9%88
data['waw'] = dict(letter=[u'و'], name=u'و', segment='vowel', subsegment='', transliteration=u'w', order=6)
# https://en.wikipedia.org/wiki/%D8%B2
data['zayn'] = dict(letter=[u'ز'], name=u'ز', segment='consonant', subsegment='', transliteration=u'z', order=7)
# https://en.wikipedia.org/wiki/%D8%AD
# key renamed from 'ha' so this entry does not overwrite ه above
data['hha'] = dict(letter=[u'ح'], name=u'ح', segment='consonant', subsegment='', transliteration=u'ḥ', order=8)
# https://en.wikipedia.org/wiki/%D8%B7
# key renamed from 'ta' so this entry is not overwritten by ت below
data['tta'] = dict(letter=[u'ط'], name=u'ط', segment='consonant', subsegment='', transliteration=u'ṭ', order=9)

# letters from ى to ص
# https://en.wikipedia.org/wiki/%D9%89
data['ya'] = dict(letter=[u'ى'], name=u'ى', segment='vowel', subsegment='', transliteration=u'i', order=10)
# https://en.wikipedia.org/wiki/%D9%83
data['kaf'] = dict(letter=[u'ك'], name=u'ك', segment='consonant', subsegment='', transliteration=u'k', order=11)
# https://en.wikipedia.org/wiki/%D9%84
data['lam'] = dict(letter=[u'ل'], name=u'ل', segment='consonant', subsegment='', transliteration=u'l', order=12)
# https://en.wikipedia.org/wiki/%D9%85
data['mim'] = dict(letter=[u'م'], name=u'م', segment='consonant', subsegment='', transliteration=u'm', order=13)
# https://en.wikipedia.org/wiki/%D9%86
data['nun'] = dict(letter=[u'ن'], name=u'ن', segment='consonant', subsegment='', transliteration=u'n', order=14)
# https://en.wikipedia.org/wiki/%D8%B3
data['sin'] = dict(letter=[u'س'], name=u'س', segment='consonant', subsegment='', transliteration=u's', order=15)
# https://en.wikipedia.org/wiki/%D8%B9
data['ayn'] = dict(letter=[u'ع'], name=u'ع', segment='consonant', subsegment='', transliteration=u'ʻ', order=16)
# https://en.wikipedia.org/wiki/%D9%81
data['fa'] = dict(letter=[u'ف'], name=u'ف', segment='consonant', subsegment='', transliteration=u'f', order=17)
# https://en.wikipedia.org/wiki/%D8%B5
data['sad'] = dict(letter=[u'ص'], name=u'ص', segment='consonant', subsegment='', transliteration=u'ṣ', order=18)

# letters from ق to غ
# https://en.wikipedia.org/wiki/%D9%82
data['qaf'] = dict(letter=[u'ق'], name=u'ق', segment='consonant', subsegment='', transliteration=u'q', order=19)
# https://en.wikipedia.org/wiki/%D8%B1
data['ra'] = dict(letter=[u'ر'], name=u'ر', segment='consonant', subsegment='', transliteration=u'r', order=20)
# https://en.wikipedia.org/wiki/%D8%B4
data['shin'] = dict(letter=[u'ش'], name=u'ش', segment='consonant', subsegment='', transliteration=u'š', order=21)
# https://en.wikipedia.org/wiki/%D8%AA
data['ta'] = dict(letter=[u'ت'], name=u'ت', segment='consonant', subsegment='', transliteration=u't', order=22)
# https://en.wikipedia.org/wiki/%D8%AB
data['tha'] = dict(letter=[u'ث'], name=u'ث', segment='consonant', subsegment='', transliteration=u'ṯ', order=23)
# https://en.wikipedia.org/wiki/%D8%AE
data['kha'] = dict(letter=[u'خ'], name=u'خ', segment='consonant', subsegment='', transliteration=u'ḵ', order=24)
# https://en.wikipedia.org/wiki/%D8%B0
data['dhal'] = dict(letter=[u'ذ'], name=u'ذ', segment='consonant', subsegment='', transliteration=u'ḏ', order=25)
# https://en.wikipedia.org/wiki/%D8%B6
data['dad'] = dict(letter=[u'ض'], name=u'ض', segment='consonant', subsegment='', transliteration=u'ḍ', order=26)
# https://en.wikipedia.org/wiki/%D8%B8
data['za'] = dict(letter=[u'ظ'], name=u'ظ', segment='consonant', subsegment='', transliteration=u'ẓ', order=27)
# https://en.wikipedia.org/wiki/%D8%BA
data['ghayn'] = dict(letter=[u'غ'], name=u'غ', segment='consonant', subsegment='', transliteration=u'g', order=28)

r = romanizer(data, has_capitals)

# collect arabic and transliteration letters from data dictionary for preprocessing function
letters = ''.join([''.join(d['letter']) + d['transliteration'] for key, d in data.items()])
regex = re.compile('[^%s ]+' % letters)
regex2 = re.compile(r'[^%s\s]' % ''.join([''.join(d['letter']) for key, d in data.items()]))


def filter(string):
    """
    Preprocess string to remove all other characters but arabic ones

    :param string:
    :return:
    """
    # remove all unwanted characters
    return regex2.sub(' ', string)


def preprocess(string):
    """
    Preprocess string to transform all diacritics and remove other special characters

    :param string:
    :return:
    """
    return regex.sub('', string)


def convert(string, sanitize=False):
    """
    Swap characters from script to transliterated version and vice versa.
    Optionally sanitize string by using preprocess function.

    :param sanitize:
    :param string:
    :return:
    """
    return r.convert(string, (preprocess if sanitize else False))
/romanize3-0.1.14.tar.gz/romanize3-0.1.14/romanize/ara.py
0.59561
0.401482
ara.py
pypi
from itertools import count

__version__ = '0.0.3'

_map = {'I': 1, 'V': 5, 'X': 10, 'L': 50, 'C': 100, 'D': 500, 'M': 1000}


class Roman(int):

    def __new__(class_, roman):
        try:
            roman = int(roman)
        except ValueError:
            roman = str(roman).upper().replace(' ', '')
            if not set(_map) >= set(roman):
                raise ValueError('Not a valid Roman numeral: %r' % roman)
            # materialize the mapped values so they can be sliced and
            # iterated more than once (a bare map object cannot be)
            values = list(map(_map.get, roman))
            value = sum(-n if n < max(values[i:]) else n
                        for i, n in enumerate(values))
            return super(Roman, class_).__new__(class_, value)
        else:
            if roman < 1:
                raise ValueError('Only n > 0 allowed, given: %r' % roman)
            return super(Roman, class_).__new__(class_, roman)

    def _negatively(self):
        if self > 1000:
            return ''
        base, s = sorted((v, k) for k, v in _map.items() if v >= self)[0]
        decrement = base - self
        if decrement == 0:
            return s
        else:
            return Roman(decrement)._positively() + s

    def _positively(self):
        value = self
        result = ''
        while value > 0:
            for v, r in reversed(sorted((v, k) for k, v in _map.items())):
                if v <= value:
                    value -= v
                    result += r
                    break
        return result

    def _split(self):
        result = []
        for i in (10 ** i for i in count()):
            if i > self:
                break
            # integer (floor) division keeps each decimal digit an int
            result.append(self % (i * 10) // i * i)
        return result[::-1]

    def __str__(self):
        s = ''
        for n in self._split():
            if n == 0:
                continue
            pos = Roman(n)._positively()
            neg = Roman(n)._negatively()
            s += neg if neg and len(neg) < len(pos) else pos
        return s

    def __repr__(self):
        return '%s(%r)' % (self.__class__.__name__, self.__str__())
/rome-0.0.3.tar.gz/rome-0.0.3/rome.py
0.435421
0.246205
rome.py
pypi
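The subtractive parsing rule in `Roman.__new__` — negate a digit's value whenever a larger digit follows it anywhere later in the string — works on its own. A minimal sketch (`roman_to_int` is an illustrative name, not the package API):

```python
_VALUES = {'I': 1, 'V': 5, 'X': 10, 'L': 50, 'C': 100, 'D': 500, 'M': 1000}

def roman_to_int(roman):
    """Sum digit values, negating any digit that precedes a larger one."""
    values = [_VALUES[ch] for ch in roman.upper()]
    return sum(-n if n < max(values[i:]) else n
               for i, n in enumerate(values))
```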
import math import matplotlib.pyplot as plt from .Generaldistribution import Distribution class Gaussian(Distribution): """ Gaussian distribution class for calculating and visualizing a Gaussian distribution. Attributes: mean (float) representing the mean value of the distribution stdev (float) representing the standard deviation of the distribution data_list (list of floats) a list of floats extracted from the data file """ def __init__(self, mu=0, sigma=1): Distribution.__init__(self, mu, sigma) def calculate_mean(self): """Function to calculate the mean of the data set. Args: None Returns: float: mean of the data set """ avg = 1.0 * sum(self.data) / len(self.data) self.mean = avg return self.mean def calculate_stdev(self, sample=True): """Function to calculate the standard deviation of the data set. Args: sample (bool): whether the data represents a sample or population Returns: float: standard deviation of the data set """ if sample: n = len(self.data) - 1 else: n = len(self.data) mean = self.calculate_mean() sigma = 0 for d in self.data: sigma += (d - mean) ** 2 sigma = math.sqrt(sigma / n) self.stdev = sigma return self.stdev def plot_histogram(self): """Function to output a histogram of the instance variable data using matplotlib pyplot library. Args: None Returns: None """ plt.hist(self.data) plt.title('Histogram of Data') plt.xlabel('data') plt.ylabel('count') def pdf(self, x): """Probability density function calculator for the gaussian distribution. 
Args: x (float): point for calculating the probability density function Returns: float: probability density function output """ return (1.0 / (self.stdev * math.sqrt(2*math.pi))) * math.exp(-0.5*((x - self.mean) / self.stdev) ** 2) def plot_histogram_pdf(self, n_spaces = 50): """Function to plot the normalized histogram of the data and a plot of the probability density function along the same range Args: n_spaces (int): number of data points Returns: list: x values for the pdf plot list: y values for the pdf plot """ mu = self.mean sigma = self.stdev min_range = min(self.data) max_range = max(self.data) # calculates the interval between x values interval = 1.0 * (max_range - min_range) / n_spaces x = [] y = [] # calculate the x values to visualize for i in range(n_spaces): tmp = min_range + interval*i x.append(tmp) y.append(self.pdf(tmp)) # make the plots fig, axes = plt.subplots(2,sharex=True) fig.subplots_adjust(hspace=.5) axes[0].hist(self.data, density=True) axes[0].set_title('Normed Histogram of Data') axes[0].set_ylabel('Density') axes[1].plot(x, y) axes[1].set_title('Normal Distribution for \n Sample Mean and Sample Standard Deviation') axes[0].set_ylabel('Density') plt.show() return x, y def __add__(self, other): """Function to add together two Gaussian distributions Args: other (Gaussian): Gaussian instance Returns: Gaussian: Gaussian distribution """ result = Gaussian() result.mean = self.mean + other.mean result.stdev = math.sqrt(self.stdev ** 2 + other.stdev ** 2) return result def __repr__(self): """Function to output the characteristics of the Gaussian instance Args: None Returns: string: characteristics of the Gaussian """ return "mean {}, standard deviation {}".format(self.mean, self.stdev)
/romeliagurit%3Fimagurit-0.1.tar.gz/romeliagurit?imagurit-0.1/distributions/Gaussiandistribution.py
0.688364
0.853058
Gaussiandistribution.py
pypi
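The pdf and sample-standard-deviation formulas in the class can be sanity-checked without matplotlib. A minimal sketch of the same two formulas as free functions:

```python
import math

def gaussian_pdf(x, mean, stdev):
    """Normal density, matching the formula in Gaussian.pdf."""
    coeff = 1.0 / (stdev * math.sqrt(2 * math.pi))
    return coeff * math.exp(-0.5 * ((x - mean) / stdev) ** 2)

def sample_stdev(data):
    """Sample standard deviation (n - 1 denominator), as in calculate_stdev."""
    mean = sum(data) / len(data)
    var = sum((d - mean) ** 2 for d in data) / (len(data) - 1)
    return math.sqrt(var)
```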
import hashlib from binascii import unhexlify from docopt import docopt def get_md5(source): """Return the MD5 hash of the file `source`.""" m = hashlib.md5() while True: d = source.read(8196) if not d: break m.update(d) return m.hexdigest() def hex_to_bstr(d): """Return the bytestring equivalent of a plain-text hex value.""" if len(d) % 2: d = "0" + d return unhexlify(d) def load_line(s): """Tokenize a tab-delineated string and return as a list.""" return s.strip().split('\t') def load_script(txt="ROM Expander Pro.txt"): script = {} script["file"] = txt with open(script["file"]) as script_file: script_lines = script_file.readlines() # Load the `NAME` line from script. l = load_line(script_lines.pop(0)) assert 'NAME' == l.pop(0) script["source"], script["target"] = l assert script["target"] != script["source"] # Load the `SIZE` and optional `MD5` l = load_line(script_lines.pop(0)) script["old_size"] = eval("0x" + l[1]) script["new_size"] = eval("0x" + l[2]) if l.index(l[-1]) > 2: script["MD5"] = l[3].lower() # Load the replacement `HEADER`. l = load_line(script_lines.pop(0)) assert 'HEADER' == l.pop(0) script["header_size"] = eval("0x" + l.pop(0)) assert script["header_size"] > len(l) # Sanitize and concatenate the header data. new_header = "".join(["0" * (2 - len(x)) + x for x in l]) # Cast to character data and pad with 0x00 to header_size new_header = hex_to_bstr(new_header) script["header"] = new_header + b"\x00" * (script["header_size"] - len(l)) script["ops"] = [] while script_lines: script["ops"].append(load_line(script_lines.pop(0))) script["patches"] = [] # Iterate over a copy: removing from the list being iterated skips elements. for op in script["ops"][:]: if op[0] == "REPLACE": script["patches"].append(op[1:]) script["ops"].remove(op) return script def expand_rom(script): # Check the source file MD5. if "MD5" in script: with open(script["source"], "rb") as s_file: # Don't digest the header. s_file.read(script["header_size"]) assert script["MD5"] == get_md5(s_file) print("MD5... match!") print("Expanding...")
with open(script["source"], "rb") as s, open(script["target"], "wb") as t: def copy(s_offset, t_offset): source_ptr = script["header_size"] + s_offset write_ptr = script["header_size"] + t_offset s.seek(source_ptr) t.seek(write_ptr) t.write(s.read(end_ptr - write_ptr)) def fill(destination, value): write_ptr = script["header_size"] + destination t.seek(write_ptr) t.write(value * (end_ptr - write_ptr)) # Write Header t.write(script["header"]) while script["ops"]: op = script["ops"].pop(0) cmd = op.pop(0) if not script["ops"]: end_ptr = script["header_size"] + script["new_size"] else: end_ptr = eval("0x" + script["ops"][0][1]) + \ script["header_size"] if cmd == "COPY": copy(eval("0x" + op[1]), # Source eval("0x" + op[0])) # Target elif cmd == "FILL": fill(eval("0x" + op[0]), # Destination hex_to_bstr(op[1])) # Value else: raise Exception # REPLACE for patch in script["patches"]: offset = eval("0x" + patch.pop(0)) data = "".join(["0" * (2 - len(x)) + x for x in patch]) t.seek(offset + script['header_size']) t.write(hex_to_bstr(data)) print("Wrote %s successfully." % script["target"]) def run(**kwargs): if kwargs['--txt']: script = load_script(kwargs['--txt']) else: script = load_script() if kwargs["--output"]: script["target"] = kwargs["--output"] if kwargs["INPUT"]: script["source"] = kwargs["INPUT"] expand_rom(script) if __name__ == "__main__": arguments = docopt(__doc__, version='romexpander 0.4') run(**arguments)
/romexpander-0.5.tar.gz/romexpander-0.5/romexpander.py
0.710628
0.374991
romexpander.py
pypi
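The chunked-digest idea behind `get_md5` stands alone: feed fixed-size reads into a `hashlib` object so arbitrarily large files never load into memory at once. A minimal Python 3 sketch:

```python
import hashlib
import io

def get_md5(source, chunk_size=8196):
    """Return the MD5 hex digest of a binary file object, read in chunks."""
    m = hashlib.md5()
    while True:
        chunk = source.read(chunk_size)
        if not chunk:
            break
        m.update(chunk)
    return m.hexdigest()
```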
import sympy as sp from sympy.parsing.sympy_parser import parse_expr def get_stationary_data(fn, symbol="infer"): r"""Get stationary point of the fn w.r.t a symbol. If symbol is infer, fn should only contain one symbol.""" if isinstance(fn, str): fn = parse_expr(fn) if symbol != "infer" and isinstance(symbol, str): symbol = sp.Symbol(symbol) symbols = fn.free_symbols if symbol == "infer": assert len(symbols) == 1, f"Multiple symbols ({symbols}) present in "\ f"function ({fn}). Specify the symbol w.r.t which stationary "\ "point is to be obtained." symbol = next(iter(symbols)) stationary_fn = fn.diff(symbol) stationary_points = sp.solve(stationary_fn, symbol) return symbol, stationary_fn, stationary_points def get_minima_or_maxima_data(fn, symbol="infer", allow_expr_return=True, for_minima=True): r"""Get points, fn values and condition for minima or maxima for the fn w.r.t a symbol. If symbol is infer, fn should only contain one symbol. allow_expr_return is set to be True if the minima or maxima points are to be left in terms of other symbols and the value of minima or maxima are to be obtained by substituting their value by the caller.""" if isinstance(fn, str): fn = parse_expr(fn) symbol, stationary_fn, stationary_points = get_stationary_data(fn, symbol) stationary_fn_diffs = [ stationary_fn.diff(symbol).subs(symbol, p) for p in stationary_points ] relational_type = sp.core.relational.Relational points, fn_values, conditions = [], [], [] for point, expr in zip(stationary_points, stationary_fn_diffs): check = expr > 0 if for_minima else expr < 0 if ((allow_expr_return and isinstance(check, relational_type)) or (not isinstance(check, relational_type) and check)): conditions.append(check) points.append(point) fn_values.append(fn.subs(symbol, point)) return symbol, points, fn_values, conditions def maximize(fn, symbol="infer", allow_expr_return=True): r"""Get maxima points, maxima and maxima conditions for the fn w.r.t a symbol. 
If symbol is infer, fn should only contain one symbol. allow_expr_return is set to be True if the maxima points are to be left in terms of other symbols and the value of the maxima are to be obtained by substituting their value by the caller.""" symbol, maxima_points, maxima, maxima_conditions = get_minima_or_maxima_data( fn, symbol, allow_expr_return, for_minima=False, ) return symbol, maxima_points, maxima, maxima_conditions def minimize(fn, symbol="infer", allow_expr_return=True): r"""Get minima points, minima and minima conditions for the fn w.r.t a symbol. If symbol is infer, fn should only contain one symbol. allow_expr_return is set to be True if the minima points are to be left in terms of other symbols and the value of the minima are to be obtained by substituting their value by the caller.""" symbol, minima_points, minima, minima_conditions = get_minima_or_maxima_data( fn, symbol, allow_expr_return, for_minima=True, ) return symbol, minima_points, minima, minima_conditions
/graph/optimize.py
0.794544
0.722894
optimize.py
pypi
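The flow in `get_minima_or_maxima_data` — differentiate, solve for stationary points, then apply the second-derivative test — can be shown in a few lines, assuming sympy is installed. The example function is arbitrary:

```python
import sympy as sp

x = sp.Symbol('x')
fn = x**2 - 4*x + 7          # convex, so the stationary point is a minimum

# Stationary points: roots of the first derivative, as in get_stationary_data.
stationary_points = sp.solve(fn.diff(x), x)

# Second-derivative test, as in get_minima_or_maxima_data (for_minima=True).
minima = [p for p in stationary_points if fn.diff(x, 2).subs(x, p) > 0]
min_values = [fn.subs(x, p) for p in minima]
```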
# Developing match up function v2 - Move to a 1D model output ``` import xarray as xr import numpy as np import matplotlib.pyplot as plt import pandas as pd import rompy from rompy import utils ## Should we import utils in __init__.py? from shapely.geometry import MultiPoint,Point %matplotlib inline xr.set_options(display_style = 'text') cat = rompy.cat model_ds = cat.csiro.swan.swan_perth_fc.map(fcdate='2021-02').to_dask() x = model_ds.longitude.values y = model_ds.latitude.values xx,yy = np.meshgrid(x,y) points = MultiPoint(list(map(Point,zip(xx.ravel(),yy.ravel())))) geom = points.convex_hull.buffer(0.01).simplify(tolerance=0.01) df=cat.aodn.nrt_wave_buoys(startdt='2021-02',enddt='2021-04',geom=geom.to_wkt()).read() obs = df[['TIME','LATITUDE','LONGITUDE','WHTH']] obs['TIME'] = pd.to_datetime(obs['TIME']) model_ds obs out_ds = rompy.utils.find_matchup_data(obs,model_ds,{'WHTH':'hs'},time_thresh=None,KDtree_kwargs={}) out_ds fig, ax = plt.subplots(figsize=(12,8)) ax.scatter(out_ds['model_hs'],out_ds['meas_whth']) ax.plot([0,3],[0,3],ls='--',c='#252525') ax.set_ylim(0,2.5) ax.set_xlim(0,2.5) ax.set_xlabel('Model') ax.set_ylabel('Measured') ax.set_title('Hs') ```
/rompy-0.1.0.tar.gz/rompy-0.1.0/notebooks/rompy-dev_matchup_code v2.ipynb
0.400515
0.828141
rompy-dev_matchup_code v2.ipynb
pypi
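The notebooks above lean on `rompy.utils.find_matchup_data` and a KD-tree, but the core idea — match each observation to the nearest model grid cell — can be sketched without those dependencies (`nearest_cell` is an illustrative helper, not rompy API):

```python
def nearest_cell(obs_lat, obs_lon, grid_lats, grid_lons):
    """Brute-force nearest model grid point for one observation.

    Returns (i, j) indices into the latitude/longitude axes; a KD-tree
    performs the same lookup far faster for large grids.
    """
    best, best_d2 = None, float('inf')
    for i, glat in enumerate(grid_lats):
        for j, glon in enumerate(grid_lons):
            d2 = (glat - obs_lat) ** 2 + (glon - obs_lon) ** 2
            if d2 < best_d2:
                best, best_d2 = (i, j), d2
    return best
```

Once the indices are known, the model value at `(i, j)` is paired with the observed value for the scatter comparison the notebooks plot.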
# Developing match up function v2 - Should work on altimetry & waveriders! ``` import xarray as xr import numpy as np import matplotlib.pyplot as plt import pandas as pd import rompy from rompy import utils ## Should we import utils in __init__.py? from shapely.geometry import MultiPoint,Point %matplotlib inline xr.set_options(display_style = 'text') cat = rompy.cat model_ds = cat.csiro.swan.swan_perth_fc.map(fcdate='2020-12-21').to_dask() x = model_ds.longitude.values y = model_ds.latitude.values xx,yy = np.meshgrid(x,y) points = MultiPoint(list(map(Point,zip(xx.ravel(),yy.ravel())))) geom = points.convex_hull.buffer(0.01).simplify(tolerance=0.01) geom model_ds ds_alt=rompy.cat.aodn.nrt_wave_altimetry(startdt='2020-12-21', enddt='2020-12-26', geom=geom.to_wkt(), ds_filters={'subset':{'data_vars':['SWH_C']},'sort':{'coords':['TIME']}}).to_dask() out_ds = rompy.utils.find_matchup_data(ds_alt,model_ds,{'swh_c':'hs'},time_thresh=None,KDtree_kwargs={}) out_ds fig, ax = plt.subplots(figsize=(12,8)) ax.scatter(out_ds['model_hs'],out_ds['meas_swh_c']) ax.plot([0,3],[0,3],ls='--',c='#252525') ax.set_ylim(0,2.5) ax.set_xlim(0,2.5) ax.set_xlabel('Model') ax.set_ylabel('Measured') ax.set_title('Hs') ax.grid() ```
/rompy-0.1.0.tar.gz/rompy-0.1.0/notebooks/rompy-dev_matchup_code v2-Alt.ipynb
0.417271
0.665854
rompy-dev_matchup_code v2-Alt.ipynb
pypi
# Developing match up function - Focus on wave rider buoys ``` import xarray as xr import numpy as np import matplotlib.pyplot as plt import pandas as pd import rompy from rompy import utils ## Should we import utils in __init__.py? from shapely.geometry import MultiPoint,Point %matplotlib inline xr.set_options(display_style = 'text') cat = rompy.cat model_ds = cat.csiro.swan.swan_perth_fc.map(fcdate='2021-02').to_dask() x = model_ds.longitude.values.flatten().tolist() y = model_ds.latitude.values.flatten().tolist() points = MultiPoint(list(map(Point,zip(x,y)))) geom = points.convex_hull.buffer(0.5).simplify(tolerance=0.01) df=cat.aodn.nrt_wave_buoys(startdt='2021-02',enddt='2021-04',geom=geom.to_wkt()).read() obs = df[['TIME','LATITUDE','LONGITUDE','WHTH']].rename(columns={'TIME':'time','LATITUDE':'latitude','LONGITUDE':'longitude','WHTH':'hs'}).set_index(['time','latitude','longitude']) obs_ds = xr.Dataset.from_dataframe(obs) obs_ds['time'] = obs_ds['time'].astype(np.datetime64) model_ds obs_ds out_ds = rompy.utils.find_matchup_data(obs_ds,model_ds,{'hs':'hs'},time_thresh=None) out_ds observations = out_ds['observation'].values obs_latlon_inds = out_ds['obs_latlon_inds'].values fig, axes = plt.subplots(nrows=len(observations),sharex=True,figsize=(12,8)) for obs,latlon in zip(observations,obs_latlon_inds): out_ds['model_hs'].isel({'latitude':latlon[0],'longitude':latlon[1]}).plot(ax=axes[obs],marker='o',label='Model') out_ds['meas_hs'].isel({'latitude':latlon[0],'longitude':latlon[1]}).plot(ax=axes[obs],marker='o',label='Measured') axes[obs].set_ylabel('Hs [m]') axes[obs].set_xlabel('') axes[obs].legend() axes[obs].set_ylim(0,3) fig,ax = plt.subplots(figsize=(12,8)) plt.scatter(out_ds['model_lon'].values[out_ds['obs_latlon_inds'].values[:,1]],out_ds['model_lat'].values[out_ds['obs_latlon_inds'].values[:,0]],label='Nearest model cell',c='white') 
plt.scatter(out_ds['longitude'].values[out_ds['obs_latlon_inds'].values[:,1]],out_ds['latitude'].values[out_ds['obs_latlon_inds'].values[:,0]],label='Observations',marker='^',c='red') model_ds['hs'].isel(time=0).plot(zorder=-1) plt.legend(loc='lower left') ```
/rompy-0.1.0.tar.gz/rompy-0.1.0/notebooks/rompy-dev_matchup_code.ipynb
0.461259
0.826991
rompy-dev_matchup_code.ipynb
pypi
import math import matplotlib.pyplot as plt from .Generaldistribution import Distribution class Gaussian(Distribution): """ Gaussian distribution class for calculating and visualizing a Gaussian distribution. Attributes: mean (float) representing the mean value of the distribution stdev (float) representing the standard deviation of the distribution data_list (list of floats) a list of floats extracted from the data file """ def __init__(self, mu=0, sigma=1): Distribution.__init__(self, mu, sigma) def calculate_mean(self): """Function to calculate the mean of the data set. Args: None Returns: float: mean of the data set """ avg = 1.0 * sum(self.data) / len(self.data) self.mean = avg return self.mean def calculate_stdev(self, sample=True): """Function to calculate the standard deviation of the data set. Args: sample (bool): whether the data represents a sample or population Returns: float: standard deviation of the data set """ if sample: n = len(self.data) - 1 else: n = len(self.data) mean = self.calculate_mean() sigma = 0 for d in self.data: sigma += (d - mean) ** 2 sigma = math.sqrt(sigma / n) self.stdev = sigma return self.stdev def plot_histogram(self): """Function to output a histogram of the instance variable data using matplotlib pyplot library. Args: None Returns: None """ plt.hist(self.data) plt.title('Histogram of Data') plt.xlabel('data') plt.ylabel('count') def pdf(self, x): """Probability density function calculator for the gaussian distribution. 
Args: x (float): point for calculating the probability density function Returns: float: probability density function output """ return (1.0 / (self.stdev * math.sqrt(2*math.pi))) * math.exp(-0.5*((x - self.mean) / self.stdev) ** 2) def plot_histogram_pdf(self, n_spaces = 50): """Function to plot the normalized histogram of the data and a plot of the probability density function along the same range Args: n_spaces (int): number of data points Returns: list: x values for the pdf plot list: y values for the pdf plot """ mu = self.mean sigma = self.stdev min_range = min(self.data) max_range = max(self.data) # calculates the interval between x values interval = 1.0 * (max_range - min_range) / n_spaces x = [] y = [] # calculate the x values to visualize for i in range(n_spaces): tmp = min_range + interval*i x.append(tmp) y.append(self.pdf(tmp)) # make the plots fig, axes = plt.subplots(2,sharex=True) fig.subplots_adjust(hspace=.5) axes[0].hist(self.data, density=True) axes[0].set_title('Normed Histogram of Data') axes[0].set_ylabel('Density') axes[1].plot(x, y) axes[1].set_title('Normal Distribution for \n Sample Mean and Sample Standard Deviation') axes[0].set_ylabel('Density') plt.show() return x, y def __add__(self, other): """Function to add together two Gaussian distributions Args: other (Gaussian): Gaussian instance Returns: Gaussian: Gaussian distribution """ result = Gaussian() result.mean = self.mean + other.mean result.stdev = math.sqrt(self.stdev ** 2 + other.stdev ** 2) return result def __repr__(self): """Function to output the characteristics of the Gaussian instance Args: None Returns: string: characteristics of the Gaussian """ return "mean {}, standard deviation {}".format(self.mean, self.stdev)
/romullo_distribution-1.0-py3-none-any.whl/romullo_distribution/Gaussiandistribution.py
0.688364
0.853058
Gaussiandistribution.py
pypi
from ciphers.caeser_cipher import CaeserCipher from ciphers.vigenere_cipher import VigenereCipher import argparse VERSION = "1.0.4" def caeser_cipher(args): # pragma: no cover caeser = CaeserCipher(rotation=args.rotation) if args.action == "encrypt": print(caeser.encrypt(plain_text=args.input)) else: print(caeser.decrypt(encrypted=args.input)) def vigenere_cipher(args): # pragma: no cover vigenere = VigenereCipher(secret=args.secret) if args.action == "encrypt": print(vigenere.encrypt(plain_text=args.input)) else: print(vigenere.decrypt(encrypted=args.input)) def parse_args(): # pragma: no cover parser = argparse.ArgumentParser( description="Determine which cipher, rotation, input string" ) subparsers = parser.add_subparsers(dest="cipher") caeser = subparsers.add_parser("caeser", help="Use the Caeser Cipher") caeser.add_argument( "-r", "--rotation", help="Set the rotation for the cipher", type=int, action="store", nargs="?", const=5, ) caeser.add_argument( "-i", "--input", help="Set the input string for the cipher", type=str, action="store", required=True, ) caeser.add_argument( "-a", "--action", help="Set the action for the cipher", choices=["encrypt", "decrypt"], action="store", required=True, ) vigenere = subparsers.add_parser( "vigenere", help="Use the Vigenere Cipher" ) vigenere.add_argument( "-s", "--secret", help="Set the secret for the cipher", action="store", type=str, nargs="?", const="secret", ) vigenere.add_argument( "-i", "--input", help="Set the input string for the cipher", action="store", required=True, ) vigenere.add_argument( "-a", "--action", help="Set the action for the cipher", choices=["encrypt", "decrypt"], action="store", required=True, ) return parser.parse_args() operations = {"caeser": caeser_cipher, "vigenere": vigenere_cipher} def main(): # pragma: no cover args = parse_args() operations[args.cipher](args)
/ron_cipher-1.0.4.tar.gz/ron_cipher-1.0.4/ciphers/__init__.py
0.481454
0.213131
__init__.py
pypi
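The Caeser cipher the CLI wraps is a fixed alphabetic rotation. A minimal sketch of encrypt/decrypt as a single helper (`caesar` is illustrative, not the package's `CaeserCipher` API):

```python
def caesar(text, rotation, decrypt=False):
    """Shift alphabetic characters by `rotation`, preserving case;
    non-letters pass through unchanged."""
    shift = -rotation if decrypt else rotation
    result = []
    for ch in text:
        if ch.isalpha():
            base = ord('A') if ch.isupper() else ord('a')
            result.append(chr((ord(ch) - base + shift) % 26 + base))
        else:
            result.append(ch)
    return ''.join(result)
```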
# Stochastic Short Rates This brief section illustrates the use of stochastic short rate models for simulation and (risk-neutral) discounting. The class used is called `stochastic_short_rate`. ## The Modelling First, the market environment. As a stochastic short rate model the `square_root_diffusion` class is (currently) available. We therefore need to define the respective parameters for this class in the market environment. ``` from dx import * me = market_environment(name='me', pricing_date=dt.datetime(2015, 1, 1)) me.add_constant('initial_value', 0.01) me.add_constant('volatility', 0.1) me.add_constant('kappa', 2.0) me.add_constant('theta', 0.05) me.add_constant('paths', 1000) me.add_constant('frequency', 'M') me.add_constant('starting_date', me.pricing_date) me.add_constant('final_date', dt.datetime(2015, 12, 31)) me.add_curve('discount_curve', 0.0) # dummy me.add_constant('currency', 0.0) # dummy ``` Second, the instantiation of the class. ``` ssr = stochastic_short_rate('sr', me) ``` The following is an example `list` object containing `datetime` objects. ``` time_list = [dt.datetime(2015, 1, 1), dt.datetime(2015, 4, 1), dt.datetime(2015, 6, 15), dt.datetime(2015, 10, 21)] ``` The call of the method `get_forward_rates()` yields the above `time_list` object and the simulated forward rates. In this case, 10 simulations. ``` ssr.get_forward_rates(time_list, 10) ``` Accordingly, the call of the `get_discount_factors()` method yields simulated zero-coupon bond prices for the time grid. ``` ssr.get_discount_factors(time_list, 10) ``` ## Stochastic Drifts Let us use the stochastic short rate model to simulate a geometric Brownian motion with stochastic short rate. Define the market environment as follows: ``` me.add_constant('initial_value', 36.)
me.add_constant('volatility', 0.2) # time horizon for the simulation me.add_constant('currency', 'EUR') me.add_constant('frequency', 'M') # monthly frequency; parameter according to pandas convention me.add_constant('paths', 10) # number of paths for simulation ``` Then add the `stochastic_short_rate` object as the discount curve. ``` me.add_curve('discount_curve', ssr) ``` Finally, instantiate the `geometric_brownian_motion` object. ``` gbm = geometric_brownian_motion('gbm', me) ``` We get simulated instrument values as usual via the `get_instrument_values()` method. ``` gbm.get_instrument_values() ``` ## Visualization of Simulated Stochastic Short Rate ``` from pylab import plt plt.style.use('seaborn') %matplotlib inline # short rate paths plt.figure(figsize=(10, 6)) plt.plot(ssr.process.instrument_values[:, :10]); ```
/ron-0.0.1.tar.gz/ron-0.0.1/12_dx_stochastic_short_rates.ipynb
0.526343
0.986967
12_dx_stochastic_short_rates.ipynb
pypi
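The `square_root_diffusion` above follows CIR dynamics, dr = κ(θ − r)dt + σ√r dW. A sketch of one simulated path with an Euler full-truncation scheme — `simulate_cir` is illustrative, not the dx API:

```python
import math
import random

def simulate_cir(r0, kappa, theta, sigma, dt, steps, seed=0):
    """One Euler full-truncation path of dr = kappa*(theta - r)*dt + sigma*sqrt(r)*dW."""
    rng = random.Random(seed)
    rates = [r0]
    for _ in range(steps):
        r = rates[-1]
        r_plus = max(r, 0.0)  # full truncation keeps the square root well defined
        dr = (kappa * (theta - r_plus) * dt
              + sigma * math.sqrt(r_plus) * math.sqrt(dt) * rng.gauss(0.0, 1.0))
        rates.append(r + dr)
    return rates
```

With the parameters used above (κ = 2.0, θ = 0.05, σ = 0.1, monthly steps), the path drifts from the 0.01 initial value toward the 0.05 long-run mean.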
import importlib import logging import re import tempfile import time from collections import defaultdict from types import TracebackType from typing import Optional, List, Union, Mapping, Dict, Set, Any, Type import unicodedata from ronds_sdk import error from ronds_sdk.options.pipeline_options import PipelineOptions from ronds_sdk.transforms import ptransform from ronds_sdk.dataframe import pvalue from typing import TYPE_CHECKING if TYPE_CHECKING: from ronds_sdk.runners.runner import PipelineRunner, PipelineResult from ronds_sdk.transforms.ptransform import PTransform __all__ = ['Pipeline', 'AppliedPTransform', 'PipelineVisitor'] class Pipeline(object): """ A Pipeline object represents the DAG, composed of :class:`ronds_sdk.dataframe.pvalue.PValue` and :class:`ronds_sdk.transforms.ptransform.PTransform` objects. PValue objects are the nodes of the DAG, and PTransform operators are its edges. Every PTransform applied to the Pipeline must have a unique full label. """ def __init__(self, options=None, argv=None): # type: (Optional[PipelineOptions], Optional[List[str]]) -> None """ Initialize the Pipeline :param options: the options required to run the Pipeline :param argv: used to build options when options=None """ logging.basicConfig() # parse PipelineOptions if options is not None: if isinstance(options, PipelineOptions): self._options = options else: raise ValueError( 'Parameter options, if specified, must be of type PipelineOptions. ' 'Received : %r' % options) elif argv is not None: if isinstance(argv, list): self._options = PipelineOptions(argv) else: raise ValueError( 'Parameter argv, if specified, must be a list. Received : %r' % argv) else: self._options = PipelineOptions([]) self.local_tempdir = tempfile.mkdtemp(prefix='ronds-pipeline-temp') self.root_transform = AppliedPTransform(None, None, '', None, is_root=True) # Set of transform labels (full labels) applied to the pipeline. # If a transform is applied and the full label is already in the set # then the transform will have to be cloned with a new label.
self.applied_labels = set() # type: Set[str] # Create a ComponentIdMap for assigning IDs to components. self.component_id_map = ComponentIdMap() # Records whether this pipeline contains any external transforms. self.contain_external_transforms = False self.job_context = None @property def options(self): return self._options def _root_transform(self): # type: () -> AppliedPTransform """ 返回 root transform :return: root transform """ return self.root_transform def run(self): # type: () -> PipelineResult from ronds_sdk.runners.spark_runner import SparkRunner runner = SparkRunner(self.options) return runner.run_pipeline(self, self._options) def __enter__(self): return self def __exit__(self, exc_type, # type: Optional[Type[BaseException]] exc_val, # type: Optional[BaseException] exc_tb # type: Optional[TracebackType] ): start = time.time() try: if not exc_type: self.result = self.run() self.result.wait_until_finish() finally: end = time.time() logging.info('pipeline exited, cost: %s ~' % str(end - start)) def visit(self, visitor): # type: (PipelineVisitor) -> None self._root_transform().visit(visitor, ) def apply(self, transform, # type: ptransform.PTransform p_valueish=None, # type: Optional[pvalue.PValue] label=None # type: Optional[str] ): # type: (...) -> pvalue.PValue # noinspection PyProtectedMember if isinstance(transform, ptransform._NamedPTransform): self.apply(transform.transform, p_valueish, label or transform.label) if not isinstance(transform, ptransform.PTransform): raise TypeError("Expected a PTransform object, got %s" % transform) full_label = label or transform.label if full_label in self.applied_labels: raise RuntimeError( 'A transform with label "%s" already exists in the pipeline. 
' 'To apply a transform with a specified label write ' 'pvalue | "label" >> transform' % full_label) self.applied_labels.add(full_label) # noinspection PyProtectedMember p_valueish, inputs = transform._extract_input_p_values(p_valueish) try: if not isinstance(inputs, dict): inputs = {str(ix): inp for (ix, inp) in enumerate(inputs)} except TypeError: raise NotImplementedError( 'Unable to extract PValue inputs from %s; either %s does not accept ' 'inputs of this format, or it does not properly override ' '_extract_input_p_values' % (p_valueish, transform)) for t, leaf_input in inputs.items(): if not isinstance(leaf_input, pvalue.PValue) or not isinstance(t, str): raise NotImplementedError( '%s does not properly override _extract_input_p_values, ' 'returned %s from %s' % (transform, inputs, p_valueish)) current = AppliedPTransform( p_valueish.producer, transform, full_label, inputs ) p_valueish.add_consumer(current) try: p_valueish_result = self.expand(transform, p_valueish, self._options) for tag, result in ptransform.get_named_nested_p_values(p_valueish_result): assert isinstance(result, pvalue.PValue) if result.producer is None: result.producer = current assert isinstance(result.producer.inputs, tuple) base = tag counter = 0 while tag in current.outputs: counter += 1 tag = '%s_%d' % (base, counter) current.add_output(result, tag) except Exception as r: logging.error('unexpected error: %s' % repr(r)) raise r return p_valueish_result @staticmethod def expand(transform, # type: PTransform input, options, # type: PipelineOptions ): # sp_transform = self.load_spark_transform(transform) return transform.expand(input) class AppliedPTransform(object): """ A transform node representing an instance of applying a PTransform """ def __init__(self, parent, # type: Optional[AppliedPTransform] transform, # type: Optional[ptransform], full_label, # type: str main_inputs, # type: Optional[Mapping[str, Union[pvalue.PBegin, pvalue.PCollection]]] environment_id=None, # type: 
Optional[str] annotations=None, # type: Optional[Dict[str, bytes]] is_root=False # type: bool ): # type: (...) -> None self.parent = parent self.transform = transform self.full_label = full_label self.main_input = dict(main_inputs or {}) self.side_input = tuple() if transform is None else transform.side_inputs self.outputs = {} # type: Dict[Union[str, int, None], pvalue.PValue] self.parts = [] # type: List[AppliedPTransform] self.environment_id = environment_id if environment_id else None # type: Optional[str] self._is_root = is_root @property def inputs(self): # type: () -> tuple[Union[pvalue.PBegin, pvalue.PCollection]] return tuple(self.main_input.values()) def is_root(self): # type: () -> bool return self._is_root def __repr__(self): # type: () -> str return "%s(%s, %s)" % ( self.__class__.__name__, self.full_label, type(self.transform).__name__) def add_output(self, output, # type: Union[pvalue.PValue] tag # type: Union[str, int, None] ): # (...) -> None if isinstance(output, pvalue.PValue): if tag not in self.outputs: self.outputs[tag] = output else: raise error.TransformError('tag[%s] has existed in outputs') else: raise TypeError("Unexpected out type: %s" % output) def add_part(self, part): # type: (AppliedPTransform) -> None assert isinstance(part, AppliedPTransform) self.parts.append(part) def is_composite(self): # type: () -> bool return bool(self.parts) or all( p_val.producer is not self for p_val in self.outputs.values() ) def visit(self, visitor, # type: PipelineVisitor ): # type: (...) 
-> None """Visits all nodes reachable from the current node.""" if self._is_root and self.outputs and self.outputs['__root']: root = self.outputs['__root'] assert isinstance(root, pvalue.PValue) for consumer in root.consumers: consumer.visit(visitor) elif visitor.visit_transform(self) and self.outputs: for p_value in self.outputs.values(): assert isinstance(p_value, pvalue.PValue) p_value.visit(visitor) visitor.leave_transform(self) class ComponentIdMap(object): """ A utility for assigning unique component ids to Beam components. Component ID assignments are only guaranteed to be unique and consistent within the scope of a ComponentIdMap instance. """ def __init__(self, namespace="ref"): self.namespace = namespace self._counters = defaultdict(lambda: 0) # type:Dict[type, int] self._obj_to_id = {} # type: Dict[Any, str] def get_or_assign(self, obj=None, obj_type=None, label=None): if obj not in self._obj_to_id: self._obj_to_id[obj] = self._unique_ref(obj, obj_type, label) return self._obj_to_id[obj] def _unique_ref(self, obj=None, obj_type=None, label=None): # Normalize, trim, and unify. prefix = self._normalize( "%s_%s_%s" % (self.namespace, obj_type.__name__, label or type(obj).__name__) )[0:100] self._counters[obj_type] += 1 return '%s_%d' % (prefix, self._counters[obj_type]) @staticmethod def _normalize(str_value): str_value = unicodedata.normalize('NFC', str_value) return re.sub(r'[^a-zA-Z0-9-_]+', '-', str_value) class PipelineVisitor(object): """ Visitor pattern class used to traverse a DAG of transforms """ def __init__(self): self.stream_writers = list() self.module = None self.__trans_cache = dict() # type: dict[PTransform, PTransform] def load_spark_transform(self, # type: PipelineVisitor transform, # type: PTransform, options=None, # type: PipelineOptions **kwargs, ): # type: (...) 
-> PTransform # cache first if self.__trans_cache.__contains__(transform): return self.__trans_cache[transform] # create new PTransform if not self.module: self.module = importlib.import_module(self.transform_package(options)) d = getattr(self.module, transform.__class__.__name__) if d: all_kwargs = dict() if 'options' in d.__dict__['__init__'].__code__.co_varnames: all_kwargs['options'] = options for key in kwargs.keys(): if key in d.__dict__['__init__'].__code__.co_varnames: all_kwargs[key] = kwargs[key] sp_trans = d(transform, **all_kwargs) self.__trans_cache[transform] = sp_trans return sp_trans else: raise NotImplementedError( "transform [%s] not implemented by Spark!" % transform.__class__.__name__) @staticmethod def transform_package(options): if options is None: raise error.PipelineError("load_spark_transform pipeline options is null!") return options.transform_package def visit_value(self, value): # type: (pvalue.PValue) -> bool """ Callback for visiting a PValue in the pipeline DAG. :param value: PValue visited (typically a PCollection instance). :return: """ return True def visit_transform(self, transform_node): # type: (AppliedPTransform) -> bool return True def leave_transform(self, transform_node): # type: (AppliedPTransform) -> None pass
/rondsspark-0.0.4.23.tar.gz/rondsspark-0.0.4.23/ronds_sdk/pipeline.py
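The id scheme in `ComponentIdMap` above (a normalized `namespace_type_label` prefix plus a per-type counter) can be sketched standalone. `SimpleIdMap` below is a hypothetical re-implementation for illustration, not part of `ronds_sdk`:

```python
import re
import unicodedata
from collections import defaultdict


class SimpleIdMap:
    """Standalone sketch of ComponentIdMap's naming scheme: a normalized
    'namespace_type_label' prefix plus a per-type counter suffix."""

    def __init__(self, namespace="ref"):
        self.namespace = namespace
        self._counters = defaultdict(int)
        self._obj_to_id = {}

    def get_or_assign(self, obj, obj_type, label=None):
        # Ids are cached, so the same object always maps to the same id.
        if obj not in self._obj_to_id:
            prefix = self._normalize(
                "%s_%s_%s" % (self.namespace, obj_type.__name__,
                              label or type(obj).__name__))[:100]
            self._counters[obj_type] += 1
            self._obj_to_id[obj] = '%s_%d' % (prefix, self._counters[obj_type])
        return self._obj_to_id[obj]

    @staticmethod
    def _normalize(s):
        # Replace any run of characters outside [a-zA-Z0-9-_] with '-'.
        s = unicodedata.normalize('NFC', s)
        return re.sub(r'[^a-zA-Z0-9-_]+', '-', s)


ids = SimpleIdMap()
a = ids.get_or_assign("obj-a", str, label="My Label")  # counter for str -> 1
b = ids.get_or_assign("obj-b", str, label="My Label")  # counter for str -> 2
```

Note how the per-type counter, not the label, guarantees uniqueness: two objects with identical labels still receive distinct ids.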
import importlib
from typing import Optional

from ronds_sdk.options.pipeline_options import PipelineOptions

from typing import TYPE_CHECKING

if TYPE_CHECKING:
    from ronds_sdk.pipeline import Pipeline
    from ronds_sdk.transforms.ptransform import PTransform
    from ronds_sdk.dataframe import pvalue


class PipelineRunner(object):

    def __init__(self,
                 options  # type: PipelineOptions
                 ):
        self._options = options if options else PipelineOptions()
        self.mod = None

    @property
    def options(self):
        return self._options

    def transform_package(self) -> str:
        raise NotImplementedError(
            'runner [%s] must implement transform_package'
            % self.__class__.__name__)

    def run(self,
            transform,  # type: PTransform
            options=None
            ):
        # type: (...) -> PipelineResult
        """Run the given transform or callable with this runner.

        Blocks until the pipeline is complete.  See also
        `PipelineRunner.run_async`.
        """
        result = self.run_async(transform, options)
        result.wait_until_finish()
        return result

    def run_async(self,
                  transform,  # type: PTransform
                  options=None
                  ):
        # type: (...) -> PipelineResult
        from ronds_sdk.pipeline import Pipeline
        from ronds_sdk.transforms.ptransform import PTransform
        p = Pipeline(runner=self, options=options)
        if isinstance(transform, PTransform):
            p | transform
        return p.run()

    def run_pipeline(self,
                     pipeline,  # type: Pipeline
                     options  # type: PipelineOptions
                     ):
        # type: (...) -> PipelineResult
        """Execute the entire pipeline or the sub-DAG reachable from a node.

        Runners should override this method.
        """
        raise NotImplementedError

    def apply(self,
              transform,  # type: PTransform
              input,  # type: Optional[pvalue.PValue]
              options  # type: PipelineOptions
              ):
        """Runner callback for a pipeline.apply call."""
        raise NotImplementedError(
            'Execution of [%s] not implemented in runner %s.'
            % (transform, self))


class PipelineResult(object):
    """A :class:`PipelineResult` provides access to information about a pipeline."""

    def __init__(self, state):
        self._state = state

    def state(self):
        """Return the current pipeline execution state."""
        return self._state

    def wait_until_finish(self, duration=None):
        """Wait until the pipeline finishes running and return the final state.

        :param duration: time to wait (milliseconds); if set to :data:`None`,
            waits indefinitely.
        :return: the final job state, or :data:`None` on timeout.
        """
        raise NotImplementedError
/rondsspark-0.0.4.23.tar.gz/rondsspark-0.0.4.23/ronds_sdk/runners/runner.py
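The `run`/`run_async` split above is a standard submit-then-block pattern: the synchronous entry point delegates to the asynchronous one and waits on the returned result handle. A minimal standalone sketch (the `Runner`/`Result` names here are illustrative stand-ins, not `ronds_sdk` classes):

```python
class Result:
    """Minimal stand-in for PipelineResult: run() blocks by waiting on it."""

    def __init__(self):
        self.finished = False

    def wait_until_finish(self):
        # A real runner would block here until the job completes.
        self.finished = True
        return "DONE"


class Runner:
    def run_async(self):
        # Submit work and return immediately with a result handle.
        return Result()

    def run(self):
        # The synchronous entry point: delegate, then block on the handle.
        result = self.run_async()
        result.wait_until_finish()
        return result


res = Runner().run()  # returns only after wait_until_finish() completes
```

Keeping the blocking logic in the base class means concrete runners only need to implement the asynchronous submission and the wait primitive.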
import logging

from pyspark.sql import SparkSession

from ronds_sdk.pipeline import PipelineVisitor
from ronds_sdk.dataframe import pvalue
from ronds_sdk.runners.runner import PipelineRunner, PipelineResult
from ronds_sdk.runners.visitors.spark_runner_visitor import SparkRunnerVisitor
from ronds_sdk.options.pipeline_options import PipelineOptions, SparkRunnerOptions

from typing import TYPE_CHECKING

if TYPE_CHECKING:
    from ronds_sdk.transforms.ptransform import PTransform


class SparkRunner(PipelineRunner):

    def __init__(self,
                 options  # type: PipelineOptions
                 ):
        super(SparkRunner, self).__init__(options)
        import findspark
        findspark.init()
        self.spark_options = self.options.view_as(SparkRunnerOptions)
        self.spark = self.new_spark(self.spark_options)

    @staticmethod
    def new_spark(options):
        builder = SparkSession.builder
        logging.info("spark master url: %s" % options.spark_master_url)
        if options.spark_master_url is not None:
            builder.master(options.spark_master_url)
        return builder.getOrCreate()

    def transform_package(self):
        return self.spark_options.spark_transform_package

    def run_pipeline(self, pipeline, options):
        visitor = SparkRunnerVisitor(self.options, self.spark)
        pipeline.visit(visitor)
        stream_writers = visitor.stream_writers
        return SparkPipelineResult(stream_writers, 'STARTED')

    def apply(self,
              transform,  # type: PTransform
              input,
              options,  # type: PipelineOptions
              ):
        # sp_transform = self.load_spark_transform(transform)
        return transform.expand(input)


class SparkPipelineVisitor(PipelineVisitor):

    def visit_value(self, value):
        # type: (pvalue.PValue) -> bool
        if isinstance(value, pvalue.PDone):
            if value.stream_writer:
                self.stream_writers.append(value.stream_writer)
        return super().visit_value(value)


class SparkPipelineResult(PipelineResult):

    def __init__(self,
                 stream_writers,
                 state,
                 ):
        super(SparkPipelineResult, self).__init__(state)
        self._stream_writers = stream_writers

    def wait_until_finish(self, duration=None):
        await_list = list()
        if self._stream_writers:
            for w in self._stream_writers:
                query = w.start()
                await_list.append(query)
                logging.warning("query started, attention please ~")
        if await_list:
            for query in await_list:
                query.awaitTermination()
/rondsspark-0.0.4.23.tar.gz/rondsspark-0.0.4.23/ronds_sdk/runners/spark_runner.py
from typing import TYPE_CHECKING, Union, Callable, List

from ronds_sdk.dataframe import pvalue
from ronds_sdk.tools.utils import RuleParser
from ronds_sdk.transforms.ptransform import PTransform

if TYPE_CHECKING:
    from ronds_sdk.options.pipeline_options import KafkaOptions

__all__ = [
    'RulesCassandraScan',
    'Create',
    'Socket',
    'Filter',
    'Console',
    'Algorithm',
    'Sleep',
    'SendKafka',
    'SendAlgJsonKafka'
]


class Sleep(PTransform):

    def __init__(self,
                 seconds=60,  # type: int
                 ):
        super(Sleep, self).__init__()
        self.seconds = seconds

    def expand(self, input_inputs, action_func=None):
        return input_inputs


class RulesCassandraScan(PTransform):
    """Periodically reads Cassandra tables on the configured time window,
    loading data according to the rule configuration."""

    def __init__(self,
                 rules,  # type: list
                 ):
        super(RulesCassandraScan, self).__init__()
        self._rules = rules

    @property
    def rules(self):
        return self._rules

    def expand(self, p_begin, action_func=None):
        assert isinstance(p_begin, pvalue.PBegin)
        return pvalue.PCollection(p_begin.pipeline,
                                  element_type=pvalue.PBegin,
                                  is_bounded=True)


class Create(PTransform):

    def __init__(self, values):
        """Create a dataframe from an in-memory list, generally for tests.

        :param values: e.g. [(1, 2, 'str'), ...]
        """
        super(Create, self).__init__()
        if isinstance(values, (str, bytes)):
            raise TypeError(
                'PTransform Create: Refusing to treat string as '
                'an iterable. (string=%r)' % values)
        elif isinstance(values, dict):
            values = values.items()
        self.values = values

    def expand(self, p_begin, action_func=None):
        return pvalue.PCollection(p_begin.pipeline,
                                  element_type=pvalue.PBegin,
                                  is_bounded=True)


class Socket(PTransform):

    def __init__(self,
                 host,  # type: str
                 port,  # type: int
                 ):
        """Read socket data, for streaming.

        :param host: socket host
        :param port: socket port
        """
        super(Socket, self).__init__()
        if not host or not port:
            raise ValueError(
                'PTransform Socket: host or port unexpected null, '
                'host: %s , port: %s' % (host, port))
        self.host = host
        self.port = port

    def expand(self, p_begin, action_func=None):
        return pvalue.PCollection(p_begin.pipeline,
                                  element_type=pvalue.PBegin,
                                  is_bounded=False)


class Filter(PTransform):

    def __init__(self,
                 select_cols,  # type: Union[str, list[str]]
                 where  # type: str
                 ):
        """Filter rows and select columns at the same time.

        e.g. pipeline | ... | 'filter data' >> ronds.Filter("col_1", "col_2 > 'xxx'")

        :param select_cols: columns to select
        :param where: filter condition
        """
        super(Filter, self).__init__()
        self.where = where
        self.select_cols = select_cols

    def expand(self, input_inputs, action_func=None):
        assert isinstance(input_inputs, pvalue.PCollection)
        return pvalue.PCollection(input_inputs.pipeline,
                                  element_type=pvalue.PCollection,
                                  is_bounded=input_inputs.is_bounded)


class Console(PTransform):

    def __init__(self,
                 mode='complete',  # type: str
                 ):
        """Print the dataset to the console, for tests.

        :param mode: output mode, defaults to 'complete'
        """
        super(Console, self).__init__()
        self.mode = mode if mode else 'complete'

    def expand(self, input_inputs, action_func=None):
        assert isinstance(input_inputs, pvalue.PCollection)
        return pvalue.PCollection(input_inputs.pipeline,
                                  element_type=pvalue.PCollection,
                                  is_bounded=input_inputs.is_bounded)


class Algorithm(PTransform):

    def __init__(self,  # type: Algorithm
                 alg_path=None,  # type: str
                 func_name=None,  # type: str
                 ):
        """Invoke an external algorithm, given the path of the algorithm file
        and the name of the function to call; the algorithm receives records
        as input and returns records as its processing result.

        :param alg_path: algorithm file path
        :param func_name: algorithm function name
        """
        super(Algorithm, self).__init__()
        self.path = alg_path
        self.func_name = func_name

    def expand(self, input_inputs, action_func=None):
        assert isinstance(input_inputs, pvalue.PCollection)
        return pvalue.PCollection(input_inputs.pipeline,
                                  element_type=pvalue.PCollection,
                                  is_bounded=input_inputs.is_bounded,
                                  )


class SendKafka(PTransform):

    def __init__(self,
                 key_value_generator,  # type: Callable
                 topic=None,  # type: str
                 ):
        """Generic Kafka message-sending component.

        :param topic: kafka topic
        :param key_value_generator: a function generating dict(key, value)
        """
        super(SendKafka, self).__init__()
        self.topic = topic
        self.key_value_generator = key_value_generator


class SendAlgJsonKafka(PTransform):

    def __init__(self,
                 topics,  # type: dict
                 ):
        # noinspection SpellCheckingInspection
        """Take the algorithm-processed JSON data and perform the follow-up
        steps: alerting on ``events`` content, storing ``indices`` metrics, etc.

        :param topics: kafka config and topic info, e.g.
            {
                "eventKafkaSource": {
                    "topics": ["alarm_event_json"],
                    "bootstraps": "192.168.1.186",
                    "port": 9092
                },
                "indiceKafkaSource": {},
                "graphKafkaSource": {}
            }
        """
        super(SendAlgJsonKafka, self).__init__()
        self.topics = topics
/rondsspark-0.0.4.23.tar.gz/rondsspark-0.0.4.23/ronds_sdk/transforms/ronds.py
from typing import TypeVar, Generic, Sequence, Optional, Callable, Union

from ronds_sdk import error
from ronds_sdk.dataframe import pvalue

from typing import TYPE_CHECKING

if TYPE_CHECKING:
    from ronds_sdk.pipeline import Pipeline, AppliedPTransform
    from ronds_sdk.tools.utils import WrapperFunc

InputT = TypeVar('InputT')
OutputT = TypeVar('OutputT')
PTransformT = TypeVar('PTransformT', bound='PTransform')


class PTransform(object):
    # By default, transforms have no side inputs.
    side_inputs = ()  # type: Sequence[pvalue.AsSideInput]

    # used for nullity transforms
    pipeline = None  # type: Optional[Pipeline]

    # Default is unset
    _user_label = None  # type: Optional[str]

    def __init__(self, label=None):
        # type: (Optional[str]) -> None
        super().__init__()
        self.label = label

    @property
    def label(self):
        # type: () -> str
        return self._user_label or self.default_label()

    @label.setter
    def label(self, value):
        # type: (Optional[str]) -> None
        self._user_label = value

    def default_label(self):
        # type: () -> str
        return self.__class__.__name__

    def expand(self,  # type: PTransform
               input_inputs,  # type: InputT
               action_func=None  # type: WrapperFunc
               ):
        # type: (...) -> OutputT
        if not self.validate_input_inputs(input_inputs):
            raise NotImplementedError(
                "transform not implemented: %s" % self.__class__.__name__)
        return pvalue.PCollection(input_inputs.pipeline,
                                  element_type=pvalue.PCollection,
                                  is_bounded=input_inputs.is_bounded,
                                  )

    @staticmethod
    def validate_input_inputs(input_inputs):
        if isinstance(input_inputs, pvalue.PValue):
            return input_inputs.element_value is None
        if isinstance(input_inputs, list):
            for input in input_inputs:
                if not isinstance(input, pvalue.PValue) \
                        or input.element_value is not None:
                    return False
        if isinstance(input_inputs, dict):
            for input in input_inputs.values():
                if not isinstance(input, pvalue.PValue) \
                        or input.element_value is not None:
                    return False
        return True

    def __str__(self):
        return '<%s>' % self._str_internal()

    def __repr__(self):
        return '<%s at %s>' % (self._str_internal(), hex(id(self)))

    def _str_internal(self) -> str:
        return '%s(PTransform)%s%s%s' % (
            self.__class__.__name__,
            ' label=[%s]' % self.label
            if (hasattr(self, 'label') and self.label) else '',
            ' inputs=%s ' % str(self.inputs)
            if (hasattr(self, 'inputs') and self.inputs) else '',
            ' side_inputs=%s' % str(self.side_inputs)
            if self.side_inputs else '')

    def __rrshift__(self, label):
        return _NamedPTransform(self, label)

    def __or__(self, right):
        """Used to compose PTransforms, e.g., ptransform1 | ptransform2."""
        if isinstance(right, PTransform):
            return _ChainedPTransform(self, right)
        return NotImplemented

    def __ror__(self, left, label=None):
        p_valueish, p_values = self._extract_input_p_values(left)
        if isinstance(p_values, dict):
            p_values = tuple(p_values.values())
        pipelines = [v.pipeline for v in p_values
                     if isinstance(v, pvalue.PValue)]
        if not pipelines:
            if self.pipeline is not None:
                p = self.pipeline
            else:
                raise ValueError('"%s" requires a pipeline to be specified '
                                 'as there are no deferred inputs.'
                                 % self.label)
        else:
            p = self.pipeline or pipelines[0]
            for pp in pipelines:
                if p != pp:
                    raise ValueError(
                        'Mixing values in different pipelines is not allowed.'
                        '\n{%r} != {%r}' % (p, pp))
        # deferred = not getattr(p.runner, 'is_eager', False)
        self.pipeline = p
        result = p.apply(self, p_valueish, label)
        return result

    @staticmethod
    def extract_input_if_one_p_values(input_inputs) -> pvalue.PValue:
        """In most scenarios, if the collection contains exactly one element,
        return that element directly.

        :param input_inputs: inputs
        """
        if isinstance(input_inputs, (tuple, list)):
            return input_inputs[0] if len(input_inputs) == 1 else input_inputs
        elif isinstance(input_inputs, dict):
            return next(iter(input_inputs.values())) \
                if len(input_inputs) == 1 else input_inputs
        return input_inputs

    @staticmethod
    def extract_first_input_p_values(input_inputs) -> pvalue.PValue:
        """Extract the first input for use.

        :param input_inputs: inputs
        :return: first input
        """
        if isinstance(input_inputs, (tuple, list)):
            return input_inputs[0]
        elif isinstance(input_inputs, dict):
            return next(iter(input_inputs.values()))
        assert isinstance(input_inputs, pvalue.PValue)
        return input_inputs

    @staticmethod
    def _extract_input_p_values(p_valueish):
        from ronds_sdk import pipeline
        if isinstance(p_valueish, pipeline.Pipeline):
            if not p_valueish.root_transform.outputs:
                p_begin = pvalue.PBegin(p_valueish)
                p_valueish.root_transform.add_output(p_begin, '__root')
            p_valueish = p_valueish.root_transform.outputs.get('__root')
        return p_valueish, {
            str(tag): value
            for (tag, value) in get_named_nested_p_values(p_valueish,
                                                          as_inputs=True)
        }

    @staticmethod
    def _check_pcollection(p_coll):
        # type: (pvalue.PCollection) -> None
        if not isinstance(p_coll, pvalue.PCollection):
            raise error.TransformError('Expecting a PCollection argument.')
        if not p_coll.pipeline:
            raise error.TransformError('PCollection not part of a pipeline')


class ForeachBatchTransform(PTransform):

    def expand(self, input_inputs, action_func=None):
        raise NotImplementedError


def get_named_nested_p_values(p_valueish, as_inputs=False):
    if isinstance(p_valueish, tuple):
        fields = getattr(p_valueish, '_fields', None)
        if fields and len(fields) == len(p_valueish):
            tagged_values = zip(fields, p_valueish)
        else:
            tagged_values = enumerate(p_valueish)
    elif isinstance(p_valueish, list):
        if as_inputs:
            yield None, p_valueish
            return
        tagged_values = enumerate(p_valueish)
    elif isinstance(p_valueish, dict):
        tagged_values = p_valueish.items()
    else:
        if as_inputs or isinstance(p_valueish, pvalue.PValue):
            yield None, p_valueish
        return
    for tag, sub_value in tagged_values:
        for sub_tag, sub_sub_value in get_named_nested_p_values(
                sub_value, as_inputs=as_inputs):
            if sub_tag is None:
                yield tag, sub_sub_value
            else:
                yield '%s.%s' % (tag, sub_tag), sub_sub_value


class _NamedPTransform(PTransform):

    def __init__(self, transform, label):
        super().__init__(label)
        self.transform = transform

    def __ror__(self, p_valueish, _unused=None):
        return self.transform.__ror__(p_valueish, self.label)

    def expand(self, p_value):
        raise RuntimeError("Should never be expanded directly.")


class _ChainedPTransform(PTransform):

    def __init__(self, *parts):
        # type: (*PTransform) -> None
        super().__init__(label=self._chain_label(parts))
        self._parts = parts

    @staticmethod
    def _chain_label(parts):
        return '|'.join(p.label for p in parts)

    def __or__(self, right):
        if isinstance(right, PTransform):
            return _ChainedPTransform(*(self._parts + (right,)))
        return NotImplemented

    def expand(self, p_val):
        raise NotImplementedError
/rondsspark-0.0.4.23.tar.gz/rondsspark-0.0.4.23/ronds_sdk/transforms/ptransform.py
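The operator overloads in `ptransform.py` above (`__or__` building a `_ChainedPTransform`, `__rrshift__` letting `'label' >> transform` attach a user label) can be demonstrated with a minimal standalone class. `T`, `A`, and `B` below are illustrative stand-ins, not `ronds_sdk` types:

```python
class T:
    """Minimal sketch of PTransform composition: `|` chains transforms and
    joins their labels, while `label >> t` wraps t with a user-supplied label
    (mirroring __rrshift__)."""

    def __init__(self, label=None, parts=None):
        self.parts = parts or [self]
        self.label = label or self.__class__.__name__

    def __or__(self, right):
        if isinstance(right, T):
            # Chaining flattens both sides and joins labels with '|'.
            parts = self.parts + right.parts
            return T(label='|'.join(p.label for p in parts), parts=parts)
        return NotImplemented

    def __rrshift__(self, label):
        # Invoked for `label >> transform` because str lacks __rshift__ for T.
        return T(label=label, parts=self.parts)


class A(T):
    pass


class B(T):
    pass


chained = A() | B()       # label becomes 'A|B'
named = 'step1' >> A()    # label becomes 'step1'
```

This is the same trick Apache Beam uses: because the reflected operators fall back to the transform's methods, pipeline authors get a readable DSL without any parser.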
import logging
import time

from ronds_sdk import error
from ronds_sdk.transforms.ptransform import PTransform, ForeachBatchTransform
from ronds_sdk.dataframe import pvalue
from ronds_sdk.runners.spark_runner import SparkRunner

from pyspark.sql import DataFrame

from typing import TYPE_CHECKING

if TYPE_CHECKING:
    from ronds_sdk.transforms import ronds


class Sleep(PTransform):

    def __init__(self,
                 _sleep  # type: ronds.Sleep
                 ):
        super(Sleep, self).__init__()
        self._sleep = _sleep

    def expand(self, input_inputs, action_func=None):
        logging.info("start sleep~")
        time.sleep(self._sleep.seconds)
        logging.info("end sleep~")
        return input_inputs


class Create(PTransform):

    def __init__(self,
                 create,  # type: ronds.Create
                 ):
        super(Create, self).__init__()
        self.values = create.values

    def expand(self, p_begin, action_func=None):
        assert isinstance(p_begin, pvalue.PBegin)
        df = get_spark(p_begin).createDataFrame(self.values)
        return pvalue.PCollection(p_begin.pipeline,
                                  element_value=df,
                                  element_type=DataFrame,
                                  is_bounded=True)


class Socket(ForeachBatchTransform):

    def __init__(self,
                 socket,  # type: ronds.Socket
                 ):
        super(Socket, self).__init__()
        self.host = socket.host
        self.port = socket.port

    def expand(self, p_begin, action_func=None):
        assert isinstance(p_begin, pvalue.PBegin)
        df = get_spark(p_begin).readStream \
            .format("socket") \
            .option("host", self.host) \
            .option("port", self.port) \
            .load()
        if action_func:
            writer = df.writeStream.foreachBatch(action_func.call)
            return pvalue.PDone(p_begin.pipeline,
                                element_type=DataFrame,
                                is_bounded=False,
                                stream_writer=writer)
        return pvalue.PCollection(p_begin.pipeline,
                                  element_type=DataFrame,
                                  element_value=df,
                                  is_bounded=False)


class Filter(PTransform):

    def __init__(self,
                 filter_,  # type: ronds.Filter
                 ):
        super(Filter, self).__init__()
        self.where = filter_.where
        self.select_cols = filter_.select_cols

    def expand(self, input_inputs, action_func=None):
        assert isinstance(input_inputs, pvalue.PCollection)
        if input_inputs.element_value:
            df = input_inputs.element_value
            assert isinstance(df, DataFrame)
            new_df = df.select(self.select_cols).where(self.where)
            return pvalue.PCollection(input_inputs.pipeline,
                                      element_value=new_df,
                                      element_type=DataFrame,
                                      is_bounded=input_inputs.is_bounded)
        raise error.PValueError(
            "unexpected input_inputs.element_value: %s" % input_inputs.tag)


class Console(PTransform):

    def __init__(self,
                 console,  # type: ronds.Console
                 ):
        super(Console, self).__init__('Console')
        self.mode = console.mode

    def expand(self, input_inputs, action_func=None):
        assert isinstance(input_inputs, pvalue.PCollection)
        df = input_inputs.element_value
        assert isinstance(df, DataFrame)
        if not df.isStreaming:
            df.show()
            return pvalue.PDone(input_inputs.pipeline,
                                element_type=DataFrame,
                                is_bounded=True)
        else:
            query = df.writeStream \
                .outputMode(self.mode) \
                .format("console")
            return pvalue.PDone(input_inputs.pipeline,
                                element_type=DataFrame,
                                is_bounded=False,
                                stream_writer=query)


def get_spark(p_coll: pvalue.PValue):
    """Get the SparkSession from the runner.

    :param p_coll: the dataset
    :return: SparkSession
    """
    if p_coll:
        runner = p_coll.pipeline.runner
        if isinstance(runner, SparkRunner):
            return runner.spark
        raise TypeError("expect SparkRunner, but found %s " % runner)
    else:
        raise error.PValueError("get_spark, PValue is null!")
/rondsspark-0.0.4.23.tar.gz/rondsspark-0.0.4.23/ronds_sdk/transforms/spark/transforms.py
from collections import deque
from typing import Union

from ronds_sdk.tools.constants import JsonKey


# noinspection SpellCheckingInspection
class RuleData(object):

    def __init__(self,
                 device_id,  # type: str
                 rule_ids,  # type: list[str]
                 nodes=None,  # type: list
                 edges=None,  # type: list
                 datasources=None,  # type: list
                 indices=None,  # type: list
                 events=None,  # type: list
                 running_time=None,  # type: float
                 datasource_times=None,  # type: str
                 ):
        """Read Cassandra data according to the rules and assemble it into the
        JSON format the algorithm can consume.

        :param nodes: device DAG nodes
        :param edges: device DAG edges
        :param datasources: the concrete data of each datasource
        :param indices: data to be stored in Cassandra
        :param events: data to be sent to the Kafka alarm topic
        :param running_time: running time
        :param datasource_times: generation time of the datasource data batch
        """
        self._data_dict = {
            JsonKey.DEVICE_ID.value: device_id,
            'nodes': nodes if nodes else [
                {
                    'id': device_id,
                    'name': '',
                    'group': 2,
                    'code': 139555,
                    'attributes': {
                        'entcode': 'COM'
                    }
                }
            ],
            'edges': edges if edges else [],
            'datasource': [[]] if datasources is None else datasources,
            'version': '1.0.0',
            'indices': indices if indices else [],
            'events': events if events else [],
            'runningtime': running_time if running_time else [],
            'datasourcetimes': [datasource_times] if datasource_times else [],
            'rules': rule_ids,
            'cache': {
                JsonKey.DEVICE_ID.value: device_id,
                'metadata': [],
                'buffer': ''
            },
        }
        self._data_index = dict()

    def get_data(self) -> dict:
        datasources = self.get_datasource()
        for datasource in datasources:
            if 'value' in datasource \
                    and 'channeldata' in datasource['value'] \
                    and 'data' in datasource['value']['channeldata']:
                data = datasource['value']['channeldata']['data']
                for channel_data in data:
                    if isinstance(channel_data['times'], deque):
                        channel_data['times'] = list(channel_data['times'])
                    if isinstance(channel_data['values'], deque):
                        channel_data['values'] = list(channel_data['values'])
        return self._data_dict

    def get_datasource(self) -> list:
        datasources = self._data_dict['datasource']
        assert isinstance(datasources, list)
        return datasources[0]

    def set_nodes(self, nodes: list) -> None:
        if nodes:
            self._data_dict['nodes'] = nodes

    def set_edges(self, edges: list) -> None:
        if edges:
            self._data_dict['edges'] = edges

    def add_process_data(self,  # type: RuleData
                         asset_id,  # type: str
                         c_time,  # type: str
                         value,  # type: Union[float, int]
                         data_type=106,  # type: int
                         ):
        # type: (...) -> None
        """Add process (industrial) data.

        :param asset_id: measuring point id
        :param c_time: time the data was produced
        :param value: data content
        :param data_type: data type, defaults to process data (data_type=106)
        :return: None
        """
        if asset_id not in self._data_index:
            datasource = {
                'assetid': asset_id,
                'value': {
                    'channeldata': {
                        'code': '',
                        'nodetype': '',
                        'data': [],
                        'assetid': asset_id,
                        'caption': '',
                        'property': {
                            'sernortype': 0,
                            'direct': '',
                            'direction': 1
                        }
                    }
                }
            }
            self._data_index[asset_id] = datasource
            self.get_datasource().append(datasource)
        datasource = self._data_index[asset_id]
        self.fill_channel_data(data_type, asset_id, c_time, value, datasource)

    @staticmethod
    def get_data_key(*args) -> str:
        return '_'.join([str(s) for s in args])

    def fill_channel_data(self,  # type: RuleData
                          data_type,  # type: int
                          asset_id,  # type: str
                          c_time,  # type: str
                          value,  # type: float
                          datasource,  # type: dict
                          ):
        # type: (...) -> None
        """Fill datasource.value.channeldata.data in the expected format.

        :param value: process data value
        :param c_time: time the data was produced
        :param asset_id: measuring point id
        :param data_type: device type
        :param datasource: the datasource to be filled
        """
        channel_data_key = self.get_data_key(asset_id, data_type)
        if channel_data_key not in self._data_index:
            channel_data = {
                'times': deque(),
                'values': deque(),
                'datatype': data_type,
                'indexcode': -1,
                'conditions': [],
                'properties': []
            }
            self._data_index[channel_data_key] = channel_data
            datasource['value']['channeldata']['data'].append(channel_data)
        channel_data = self._data_index[channel_data_key]
        channel_data['times'].appendleft(c_time)
        channel_data['values'].appendleft(value)
/rondsspark-0.0.4.23.tar.gz/rondsspark-0.0.4.23/ronds_sdk/transforms/pandas/rule_merge_data.py
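The core of `RuleData` above is per-channel accumulation: samples are pushed newest-first with `deque.appendleft`, then the deques are materialized as plain lists when the JSON payload is built (deques are not JSON-serializable). A standalone sketch of just that mechanism (`ChannelBuffer` is a hypothetical name, not part of `ronds_sdk`):

```python
from collections import deque


class ChannelBuffer:
    """Sketch of RuleData's per-channel accumulation: newest sample first
    (appendleft), converted to plain lists when the payload is built."""

    def __init__(self):
        self._channels = {}

    def add(self, asset_id, c_time, value):
        # Lazily create the channel, then push the sample to the front.
        ch = self._channels.setdefault(
            asset_id, {'times': deque(), 'values': deque()})
        ch['times'].appendleft(c_time)
        ch['values'].appendleft(value)

    def payload(self):
        # deques are not JSON-serializable; materialize them as lists.
        return {aid: {'times': list(ch['times']),
                      'values': list(ch['values'])}
                for aid, ch in self._channels.items()}


buf = ChannelBuffer()
buf.add('p1', '2023-05-22T14:44:00', 1.0)
buf.add('p1', '2023-05-22T14:45:00', 2.0)
data = buf.payload()  # most recent sample is first in each list
```

Using `appendleft` keeps the newest reading at index 0 in both O(1) time and without a final reverse pass.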
import importlib.machinery import json import logging import sys import time from os.path import dirname, abspath from typing import TYPE_CHECKING, Callable from pathlib import Path import pandas as pd from pyspark.sql import DataFrame, SparkSession from ronds_sdk import error, logger_config from ronds_sdk.dataframe import pvalue from ronds_sdk.datasources.kafka_manager import KafkaManager from ronds_sdk.options.pipeline_options import SparkRunnerOptions, CassandraOptions, AlgorithmOptions, KafkaOptions from ronds_sdk.tools.constants import JsonKey, Constant from ronds_sdk.tools.utils import RuleParser from ronds_sdk.transforms import ronds from ronds_sdk.transforms.ptransform import PTransform, ForeachBatchTransform if TYPE_CHECKING: from ronds_sdk.options.pipeline_options import PipelineOptions logger_config.config() logger = logging.getLogger('executor') class Sleep(PTransform): def __init__(self, _sleep # type: ronds.Sleep ): super(Sleep, self).__init__() self._sleep = _sleep def expand(self, input_inputs, action_func=None): logging.info("start sleep~") time.sleep(self._sleep.seconds) logging.info("end sleep~") return input_inputs class RulesCassandraScan(ForeachBatchTransform): """ 基于配置规则信息, 定时轮询读取 Cassandra 数据, 整合成算法需要的 Graph JSON 结构, 用于后续的算法处理流程. 
""" def __init__(self, rule_load, # type: ronds.RulesCassandraScan options, # type: PipelineOptions spark=None, # type: SparkSession ): super(RulesCassandraScan, self).__init__() self._rules = rule_load.rules self._spark = spark self.__options = options @property def options(self): return self.__options def expand(self, p_begin, action_func=None): from ronds_sdk.transforms.pandas.cassandra_rule import ForeachRule foreach_rule = ForeachRule(self.options.view_as(CassandraOptions), action_func) if self._spark: repartition_num = self.options.view_as(SparkRunnerOptions).spark_repartition_num df = self._spark.createDataFrame(self._rules) df = df.repartition(repartition_num, df.assetId) if action_func: df.foreachPartition(foreach_rule.foreach_rules) return pvalue.PDone(p_begin.pipeline, element_type=DataFrame, is_bounded=True) else: df = pd.DataFrame(self._rules) return pvalue.PCollection(p_begin.pipeline, element_value=df, element_type=pd.DataFrame, is_bounded=True) class Console(PTransform): def __init__(self, console, # type: ronds.Console ): super(Console, self).__init__() self._mode = console.mode def expand(self, input_inputs, action_func=None): assert isinstance(input_inputs, pvalue.PCollection) df = input_inputs.element_value assert isinstance(df, pd.DataFrame) print('*' * 20) print(df.head(10)) return pvalue.PDone(input_inputs.pipeline, element_type=pd.DataFrame, is_bounded=True) class Algorithm(PTransform): def __init__(self, algorithm, # type: ronds.Algorithm options, # type: PipelineOptions ): super(Algorithm, self).__init__() self._options = options.view_as(AlgorithmOptions) if options is not None \ else AlgorithmOptions() self.path = algorithm.path if algorithm.path \ else self._options.algorithm_path self.func_name = algorithm.func_name if algorithm.func_name \ else self._options.algorithm_funcname # directory of RondsSpark/ronds_sdk/transforms/pandas self._base_dir = dirname(dirname(dirname(dirname(dirname(abspath(__file__)))))) # load algorithm as module 
by path self._algorithm_func = self.__load_alg() @staticmethod def is_absolute(path: str) -> bool: p_obj = Path(path) return p_obj.is_absolute() @property def algorithm_func(self): return self._algorithm_func def __load_alg(self): """load algorithm by file path""" # load new algorithm func alg_absolute_path = self.path if self.is_absolute(self.path) \ else '%s/%s' % (self._base_dir, self.path) if alg_absolute_path not in sys.path: sys.path.append(alg_absolute_path) func_paths = self.func_name.split('.') if len(func_paths) <= 1: raise error.TransformError("""algorithm func path expect the format: file.function_name, but found: %s""" % self.func_name) model_path = '.'.join(func_paths[0:-1]) func_name = func_paths[-1] loader = importlib.machinery.SourceFileLoader(model_path, '%s/%s.py' % (alg_absolute_path, model_path)) alg_model = loader.load_module(model_path) alg_func = getattr(alg_model, func_name) if alg_func is None: raise error.TransformError("""failed load algorithm """) return alg_func # noinspection SpellCheckingInspection def algorithm_call(self, row): device_id = row[JsonKey.DEVICE_ID.value] res_row = self.algorithm_func(row) if isinstance(res_row, dict): ret_row = res_row elif isinstance(res_row, str): ret_row = json.loads(res_row) else: raise error.TransformError('unexpected algorithm func return type: %s, value: %s' % (type(res_row), res_row)) assert isinstance(ret_row, dict) if ret_row is not None and not ret_row.__contains__(JsonKey.DEVICE_ID.value): ret_row[JsonKey.DEVICE_ID.value] = device_id return ret_row def expand(self, input_inputs, action_func=None): assert isinstance(input_inputs, pvalue.PCollection) assert isinstance(input_inputs.element_value, pd.DataFrame) df = input_inputs.element_value if len(input_inputs.element_value) > 0: df_dict = input_inputs.element_value.to_dict('records') res_df_list = list() for row in df_dict: res_row = self.algorithm_call(row) res_df_list.append(res_row) logger.info('algorithm data: %s' % 
json.dumps(next(iter(res_df_list)))) df = pd.DataFrame(res_df_list) assert isinstance(df, pd.DataFrame) return pvalue.PCollection(input_inputs.pipeline, element_value=df, element_type=pd.DataFrame, is_bounded=True) class SendKafka(PTransform): def __init__(self, send_kafka, # type: ronds.SendKafka options, # type: PipelineOptions ): super(SendKafka, self).__init__() self.topic = send_kafka.topic self.key_value_generator = send_kafka.key_value_generator self.kafka_manager = KafkaManager(options.view_as(KafkaOptions)) # type: KafkaManager def expand(self, input_inputs, action_func=None): assert isinstance(input_inputs, pvalue.PCollection) assert isinstance(input_inputs.element_value, pd.DataFrame) df = input_inputs.element_value key_values = self.key_value_generator(df) assert isinstance(key_values, dict) for key, value in key_values.items(): self.kafka_manager.send(self.topic, key, value) return pvalue.PCollection(input_inputs.pipeline, element_value=df, element_type=pd.DataFrame, is_bounded=input_inputs.is_bounded) class SendAlgJsonKafka(PTransform): def __init__(self, # type: SendAlgJsonKafka send_alg_json_kafka, # type: ronds.SendAlgJsonKafka options, # type: PipelineOptions ): """ 将算法处理之后的数据发送到 Kafka; 包括: events 告警, indices 指标, graph json 等信息 . :param send_alg_json_kafka: 包含 kafka topic 等基本配置信息 :param options: kafka 等系统配置信息 """ super(SendAlgJsonKafka, self).__init__() self.topics = send_alg_json_kafka.topics self.options = options self.kafka_manager = KafkaManager(options.view_as(KafkaOptions)) # type: KafkaManager # noinspection SpellCheckingInspection self.switch_dict = { 'eventKafkaSource': self.send_events, 'indiceKafkaSource': self.send_indices, 'graphKafkaSource': self.send_graph, 'exceptionKafkaSource': self.send_exception, } # type: dict[str, Callable[[pd.DataFrame, str], None]] def send_events(self, df, # type: pd.DataFrame topic, # type: str ): # type: (...) 
-> None """ "events": [ [ { "assetid": "3a..0", "name": "sh_event", "value": {}, "group": "alarm" } ] ] :param df: :param topic: :return: """ if len(df) == 0 or 'events' not in df.columns: return None all_events = df['events'] for events in all_events: if isinstance(events, list) and len(events) > 0: for in_events in events: match = isinstance(in_events, list) and len(in_events) > 0 if not match: continue for event in in_events: event_str = json.dumps(event) logging.warning('kafka events sending, topic: %s, error: %s' % (topic, event_str)) self.kafka_manager.send(topic, key=None, value=event_str) # noinspection SpellCheckingInspection def send_indices(self, df, # type: pd.DataFrame topic, # type: str ): # type: (...) -> None """ "indices": [ [ { "assetid": "3a00..f", "meastime": "2023-05-22T14:44:00", "names": [], "values": [], "wids": [] } ] ] :param df: json DataFrame after algorithm process :param topic: kafka topic :return: kv for sending to kafka """ if len(df) == 0 or 'indices' not in df.columns: return None all_indices = df['indices'] for indices in all_indices: if isinstance(indices, list) and len(indices) > 0: for in_indices in indices: match = isinstance(in_indices, list) and len(in_indices) > 0 if not match: continue for index in in_indices: assert isinstance(index, dict) if not index.__contains__(JsonKey.NAMES.value): continue msg = list() asset_id = None for i, name in enumerate(index[JsonKey.NAMES.value]): index_value = index['values'][i] if index_value is None or str.lower(index_value) == Constant.NAN.value: continue wid = index['wids'][i] asset_id = index['assetid'] msg.append({ 'assetid': asset_id, 'datatype': name, 'measdate': index['meastime'], 'condition': -1, 'measvalue': float(index_value), 'wid': wid, }) if len(msg) > 0: self.kafka_manager.send(topic, asset_id, json.dumps(msg)) def send_exception(self, # type: SendAlgJsonKafka df, # type: pd.DataFrame topic, # type: str ): # type: (...) 
-> None
        if len(df) == 0 or 'exceptions' not in df.columns:
            return None
        all_ex = df['exceptions']
        if len(all_ex) == 0:
            return None
        for index, row in df.iterrows():
            json_str = row.to_json(orient='index', date_format=RuleParser.datetime_format())
            self.kafka_manager.send(topic, key=None, value=json_str)

    def send_graph(self,
                   df,  # type: pd.DataFrame
                   topic,  # type: str
                   ):
        # type: (...) -> None
        if len(df) == 0:
            return None
        for index, row in df.iterrows():
            if 'exceptions' in df.columns and len(row['exceptions']) > 0:
                continue
            # noinspection SpellCheckingInspection
            device_id = row[JsonKey.DEVICE_ID.value]
            json_str = row.to_json(orient='index', date_format=RuleParser.datetime_format())
            self.kafka_manager.send(topic, key=device_id, value=json_str)

    def expand(self, input_inputs, action_func=None):
        assert isinstance(input_inputs, pvalue.PCollection)
        assert isinstance(input_inputs.element_value, pd.DataFrame)
        df = input_inputs.element_value
        for source, kafka_config in self.topics.items():
            if source not in self.switch_dict:
                continue
            kv_sender = self.switch_dict[source]
            for t in kafka_config['topics']:
                kv_sender(df, t)
        return pvalue.PCollection(input_inputs.pipeline,
                                  element_value=df,
                                  element_type=pd.DataFrame,
                                  is_bounded=input_inputs.is_bounded)
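The `send_events` method above walks a doubly nested `events` column: each DataFrame cell holds a list of lists of event dicts, and only well-formed inner lists are serialized and sent. The same traversal can be sketched standalone (function and variable names here are hypothetical, for illustration only):

```python
import json

def iter_nested_events(all_events):
    """Yield a JSON string for each event dict in a column of [[event, ...], ...] cells."""
    for events in all_events:
        # skip cells that are not non-empty lists (e.g. None)
        if not (isinstance(events, list) and events):
            continue
        for in_events in events:
            # skip inner entries that are not non-empty lists
            if not (isinstance(in_events, list) and in_events):
                continue
            for event in in_events:
                yield json.dumps(event)

cells = [[[{"assetid": "a1", "name": "sh_event"}]], None, [[]]]
messages = list(iter_nested_events(cells))
print(messages)
```

Only the first cell yields a message; the `None` cell and the empty inner list are skipped by the same guards the transform uses.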
/rondsspark-0.0.4.23.tar.gz/rondsspark-0.0.4.23/ronds_sdk/transforms/pandas/transforms.py
0.431704
0.20351
transforms.py
pypi
from typing import TypeVar, Generic, Optional, Union

from ronds_sdk.transforms.core import Windowing

from typing import TYPE_CHECKING

if TYPE_CHECKING:
    from ronds_sdk.pipeline import Pipeline, PipelineVisitor
    from ronds_sdk.pipeline import AppliedPTransform

T = TypeVar('T')


class PValue(object):
    """
    Base class for PCollection.

    Main characteristics:
      (1) Belongs to a Pipeline; it is added to it on initialization
      (2) Owns a Transform used to compute its value
      (3) Owns a value, which is meaningful once the Transform has executed
    """

    def __init__(self,
                 pipeline,  # type: Pipeline
                 tag=None,  # type: Optional[str]
                 element_value=None,  # type: T
                 element_type=None,  # type: Optional[type]
                 windowing=None,  # type: Optional[Windowing]
                 is_bounded=True,
                 ):
        self.pipeline = pipeline
        self.tag = tag
        self.element_value = element_value
        self.element_type = element_type
        # The AppliedPTransform instance for the application of the PTransform
        # generating this PValue. The field gets initialized when a transform
        # gets applied.
        self.producer = None  # type: Optional[AppliedPTransform]
        self.consumers = list()  # type: list[AppliedPTransform]
        self.is_bounded = is_bounded
        if windowing:
            self._windowing = windowing
        self.requires_deterministic_key_coder = None

    def __str__(self):
        return self._str_internal()

    def __repr__(self):
        return '<%s at %s>' % (self._str_internal(), hex(id(self)))

    def _str_internal(self) -> str:
        return "%s[%s.%s]" % (
            self.__class__.__name__,
            self.producer.full_label if self.producer else None,
            self.tag
        )

    def apply(self, *args, **kwargs):
        """Applies a transform or callable to a PValue"""
        arg_list = list(args)
        arg_list.insert(1, self)
        return self.pipeline.apply(*arg_list, **kwargs)

    def visit(self,
              visitor,  # type: PipelineVisitor
              ):
        # type: (...)
-> bool
        if visitor.visit_value(self) and self.consumers:
            for consumer in self.consumers:
                consumer.visit(visitor)
            return True
        return False

    def add_consumer(self,
                     consumer  # type: AppliedPTransform
                     ):
        self.consumers.append(consumer)


class PCollection(PValue, Generic[T]):

    def __init__(self,
                 pipeline,  # type: Pipeline
                 tag=None,  # type: Optional[str]
                 element_value=None,  # type: T
                 element_type=None,  # type: Optional[type]
                 windowing=None,  # type: Optional[Windowing]
                 is_bounded=True,
                 ):
        super(PCollection, self).__init__(pipeline, tag, element_value,
                                          element_type, windowing, is_bounded)

    def __eq__(self, other):
        if isinstance(other, PCollection):
            return self.tag == other.tag and self.producer == other.producer
        # defer to the other operand for non-PCollection comparisons
        return NotImplemented

    def __hash__(self):
        return hash((self.tag, self.producer))


class AsSideInput(object):

    def __init__(self, p_coll):
        # type: (PCollection) -> None
        self.pvalue = p_coll


class PBegin(PValue):
    """Begin marker for pipeline inputs, used by create/read transforms."""
    pass


class PDone(PValue):
    """PDone represents the output of a transform with a trivial result, such as a Write."""

    def __init__(self,
                 pipeline,  # type: Pipeline
                 tag=None,  # type: Optional[str]
                 element_type=None,  # type: Optional[type]
                 windowing=None,  # type: Optional[Windowing]
                 is_bounded=True,
                 stream_writer=None,
                 ):
        super().__init__(pipeline, tag,
                         element_type=element_type,
                         windowing=windowing,
                         is_bounded=is_bounded)
        self.stream_writer = stream_writer
/rondsspark-0.0.4.23.tar.gz/rondsspark-0.0.4.23/ronds_sdk/dataframe/pvalue.py
0.87213
0.208078
pvalue.py
pypi
# pytype: skip-file from functools import wraps from typing import Set from ronds_sdk import error __all__ = [ 'ValueProvider', 'StaticValueProvider', 'RuntimeValueProvider', 'NestedValueProvider', 'check_accessible', ] class ValueProvider(object): """Base class that all other ValueProviders must implement. """ def is_accessible(self): """Whether the contents of this ValueProvider is available to routines that run at graph construction time. """ raise NotImplementedError( 'ValueProvider.is_accessible implemented in derived classes') def get(self): """Return the value wrapped by this ValueProvider. """ raise NotImplementedError( 'ValueProvider.get implemented in derived classes') class StaticValueProvider(ValueProvider): """StaticValueProvider is an implementation of ValueProvider that allows for a static value to be provided. """ def __init__(self, value_type, value): """ Args: value_type: Type of the static value value: Static value """ self.value_type = value_type self.value = value_type(value) def is_accessible(self): return True def get(self): return self.value def __str__(self): return str(self.value) def __eq__(self, other): if self.value == other: return True if isinstance(other, StaticValueProvider): if self.value_type == other.value_type and self.value == other.value: return True return False def __hash__(self): return hash((type(self), self.value_type, self.value)) class RuntimeValueProvider(ValueProvider): """RuntimeValueProvider is an implementation of ValueProvider that allows for a value to be provided at execution time rather than at graph construction time. 
""" runtime_options = None experiments = set() # type: Set[str] def __init__(self, option_name, value_type, default_value): self.option_name = option_name self.default_value = default_value self.value_type = value_type def is_accessible(self): return RuntimeValueProvider.runtime_options is not None @classmethod def get_value(cls, option_name, value_type, default_value): if not RuntimeValueProvider.runtime_options: return default_value candidate = RuntimeValueProvider.runtime_options.get(option_name) if candidate: return value_type(candidate) else: return default_value def get(self): if RuntimeValueProvider.runtime_options is None: raise error.RuntimeValueProviderError( '%s.get() not called from a runtime context' % self) return RuntimeValueProvider.get_value( self.option_name, self.value_type, self.default_value) @classmethod def set_runtime_options(cls, pipeline_options): RuntimeValueProvider.runtime_options = pipeline_options RuntimeValueProvider.experiments = RuntimeValueProvider.get_value( 'experiments', set, set()) def __str__(self): return '%s(option: %s, type: %s, default_value: %s)' % ( self.__class__.__name__, self.option_name, self.value_type.__name__, repr(self.default_value)) class NestedValueProvider(ValueProvider): """NestedValueProvider is an implementation of ValueProvider that allows for wrapping another ValueProvider object. """ def __init__(self, value, translator): """Creates a NestedValueProvider that wraps the provided ValueProvider. Args: value: ValueProvider object to wrap translator: function that is applied to the ValueProvider Raises: ``RuntimeValueProviderError``: if any of the provided objects are not accessible. 
""" self.value = value self.translator = translator self.cached_value = None def is_accessible(self): return self.value.is_accessible() def get(self): try: return self.cached_value except AttributeError: self.cached_value = self.translator(self.value.get()) return self.cached_value def __str__(self): return "%s(value: %s, translator: %s)" % ( self.__class__.__name__, self.value, self.translator.__name__, ) def check_accessible(value_provider_list): """A decorator that checks accessibility of a list of ValueProvider objects. Args: value_provider_list: list of ValueProvider objects Raises: ``RuntimeValueProviderError``: if any of the provided objects are not accessible. """ assert isinstance(value_provider_list, list) def _check_accessible(fnc): @wraps(fnc) def _f(self, *args, **kwargs): for obj in [getattr(self, vp) for vp in value_provider_list]: if not obj.is_accessible(): raise error.RuntimeValueProviderError('%s not accessible' % obj) return fnc(self, *args, **kwargs) return _f return _check_accessible
/rondsspark-0.0.4.23.tar.gz/rondsspark-0.0.4.23/ronds_sdk/options/value_provider.py
0.86587
0.248153
value_provider.py
pypi
# Rong - A console color utility for Python console apps

#### Developed by [Md. Almas Ali][1]

***Version 0.0.1***

[![LICENSE](https://img.shields.io/github/license/dwisiswant0/WiFiID.svg "LICENSE")](LICENSE)

![Image](https://raw.githubusercontent.com/Almas-Ali/rong/master/logo.png)

## Installation

It is very easy to install. As usual, you can install it with `pip`.

```bash
pip install rong
```

## Documentation

***Welcome to the Rong documentation,*** <br>
Here you will learn about a CLI tool that can add color to your CLI-based project. It is easy to use and adaptable for developers of every level. Anyone can learn it in 10 minutes. <br>
<a href="#examples">Give it a try?</a>

### Project index

1. <a href="#color-style-class">Color & Style codes</a><br>
2. <a href="#log-class">Log class</a><br>
3. <a href="#mark-class">Mark class</a><br>
4. <a href="#highlight-class">Highlight class</a><br>
5. <a href="#text-class">Text class</a><br>
6. <a href="#examples">Examples for practice</a><br>

<div id="color-style-class"></div>

- Color & Styles
    - All colors for foreground and background:
        - `black`
        - `red`
        - `green`
        - `yellow`
        - `blue`
        - `purple`
        - `cyan`
        - `white`
    - All styles:
        - `underline` : for underlining text in the console
        - `bold` : for bolding text in the console
        - `clear` : for clearing all set styles

<br>
<div id="log-class"></div>

1.
`Log` : A simple logging text class for coloring text
    - To display primary text `primary(text:str)`
    - To display blue text `blue(text:str)`
    - To display success text `success(text:str)`
    - To display green text `green(text:str)`
    - To display ok text `ok(text:str)`
    - To display warning text `warning(text:str)`
    - To display yellow text `yellow(text:str)`
    - To display help text `help(text:str)`
    - To display danger text `danger(text:str)`
    - To display error text `error(text:str)`
    - To display fail message `fail(text:str)`
    - To display underline `underline(text:str)`
    - To display bold text `bold(text:str)`
    - To display ok message `okmsg(text:str)`
    - To display wait message `waitmsg(text:str)`
    - To display error message `errormsg(text:str)`

<br>
<div id="mark-class"></div>

2. `Mark` : A simple class for coloring manually in line
    - To add color manually, use this class with the constant colors bound to it. You have to start the color manually, as in the example below:

```python
print(f"This is a {Mark.GREEN}sample Mark{Mark.END} test.")
```

<br>
<div id="highlight-class"></div>

3. `Highlight` : A class for highlighting text color
    - To get white color `white(text:str)`
    - To get bold white color `bwhite(text:str)`
    - To get green color `green(text:str)`
    - To get bold green color `bgreen(text:str)`
    - To get blue color `blue(text:str)`
    - To get bold blue color `bblue(text:str)`
    - To get yellow color `yellow(text:str)`
    - To get bold yellow color `byellow(text:str)`
    - To get red color `red(text:str)`
    - To get bold red color `bred(text:str)`

<br>
<div id="text-class"></div>

4.
Most powerful, all-in-one class `Text`
    - To add foreground / text color `foreground(color:str)`
    - To add background color `background(color:str)`
    - To add styles as a list `style(styles:list)`
    - To update the object text `update(text:str)`
    - To show the output text `print()`
    - All in one in a single line:

```python
Text(text="Single line test", styles=["bold", "underline"])
```

<br>
<div id="examples"></div>

Some sample code for practice.

```python
from rong import *

# In-line Log display
print(f"I am {Log.waitmsg('Almas')} Ali")
print(f"I am {Log.errormsg('Almas')} Ali")
print(f"I am {Log.warning('Almas')} Ali")
print(f"I am {Log.primary('Almas')} Ali")

# In-line color with custom parameter
print(f"{Mark.BLUE} Hi, {Mark.END}")
print(f"{Mark.RED} Hi, {Mark.END}")
print(f"{Mark.GREEN} Hi, {Mark.END}")
print(f"{Mark.CYAN} Hi, {Mark.END}")
print(f"This is a {Mark.GREEN}sample Mark{Mark.END} test.")

# In-line text highlighting
print(f"Enjoy {Highlight.red('Almas')}")
print(f"Enjoy {Highlight.bred('Almas')}")
print(f"Enjoy {Highlight.blue('Almas')}")
print(f"Enjoy {Highlight.bblue('Almas')}")
print(f"Enjoy {Highlight.yellow('Almas')}")
print(f"Enjoy {Highlight.byellow('Almas')}")

# Working with Text objects

# Creating a Text class object
text = Text(text='Almas Ali')

# Adding foreground color / text color
text.foreground('blue')
text.foreground('purple')

# Adding background color
text.background('white')

# Adding custom styles
text.style(styles=['bold', 'underline'])

# Updating the object text
text.update(text=' New text ')

# Printing the output in two ways

# Advanced method-based mode
text.print()

# Normal pythonic mode
print(text)

# Doing everything in one line
text1 = Text(text='Demo1', styles=['bold'], fg='blue', bg='white')
text1.print()

# Clearing all styles
text2 = Text(text='Demo', styles=['clear'])
text2.print()
```

> Everything is open source. You can contribute to this project by submitting an issue or fixing a problem and making a pull request.

Made with love by © *Md.
Almas Ali* <br>
Licensed under MIT

[1]: <https://github.com/Almas-Ali> "Md. Almas Ali"
/rong-0.0.1.tar.gz/rong-0.0.1/README.md
0.906366
0.822546
README.md
pypi
Rōnin ===== A straightforward but powerful build system based on [Ninja](https://ninja-build.org/) and [Python](https://www.python.org/), suitable for projects both big and small. Rōnin comes in [frustration-free packaging](https://en.wikipedia.org/wiki/Wrap_rage). Let's build all the things! Features -------- Currently supported out-of-the-box: all [gcc](https://gcc.gnu.org/) languages, [Java](https://www.oracle.com/java/), [Rust](https://www.rust-lang.org/), [Go](https://golang.org/), [Vala](https://wiki.gnome.org/Projects/Vala)/[Genie](https://wiki.gnome.org/Projects/Genie), [pkg-config](https://www.freedesktop.org/wiki/Software/pkg-config/), [Qt tools](https://www.qt.io/), [sdl2-config](https://wiki.libsdl.org/Installation), and [binutils](https://sourceware.org/binutils/docs/binutils/). It's also easy to integrate your favorite [testing framework](https://github.com/tliron/ronin/wiki/Testing%20and%20Running). "Based on Python" means that not only is it written in Python, but also it uses **Python as the DSL** for build scripts. Many build systems invent their own DSLs, but Rōnin intentionally uses a language that already exists. There's no hidden cost to this design choice: build scripts are pretty much as concise and coherent as any specialized DSL. You _don't_ need to be an expert in Python to use Rōnin, but its power is at your fingertips if you need it. Rōnin supports **Unicode** throughout: Ninja files are created in UTF-8 by default and you can include Unicode characters in your build scripts. Python 3 is recommended, but Rōnin can also run on Python 2.7. Download -------- The latest release is available on [PyPI](https://pypi.python.org/pypi/ronin), so you can install with `pip`, `easy_install`, or `setuptools`. On Debian/Ubuntu: sudo apt install python3-pip sudo -H pip3 install ronin Since Ninja is just one small self-contained executable, it's easy to get it by downloading the [latest release](https://github.com/ninja-build/ninja/releases). 
Just make sure it's in your execution path, or run your build script with `--set ninja.command=` and give it the full path to `ninja`. Older versions (they work fine) may also be available in your operating system. On Debian/Ubuntu: sudo apt install ninja-build Documentation ------------- An undocumented system is a broken system. We strive for coherent, comprehensive, and up-to-date documentation. A detailed user manual is available on the [wiki](https://github.com/tliron/ronin/wiki). If you prefer to learn by example, [there are many](https://github.com/tliron/ronin/tree/master/examples). Rich API docs available on [Read the Docs](http://ronin.readthedocs.io/en/latest/). Feelings -------- Guiding lights: 1. **Powerful does not have to mean hard to use**: _optional_ auto-configuration with sensible, _overridable_ defaults. 2. **Complex does not have to mean complicated**: handle cross-compilation and other multi-configuration builds in a single script with minimal duplication of effort. Design principles: 1. **Don't hide functionality behind complexity**: the architecture should be straightforward. For example, if the user wants to manipulate a compiler command line, let them do it easily. Too many build systems bungle this and make it either impossible or very difficult to do something that would be trivial using a shell script. 2. **Pour some sugar on me**: make common tasks easier with sweet utility functions. But make sure that sugar is optional, allowing the script to be more verbose when more control is necessary. 3. **Don't reinvent wheels**: if Python or Ninja do something for us, use it. The build script is a plain Python program without any unnecessary cleverness. The generated Ninja file looks like something you could have created manually. FAQ --- * _Do we really need another build system?_ Yes. The other existing ones have convoluted architectures, impossible to opt-out-from automatic features, or are otherwise hostile to straightforward hacking. 
After so much wasted time fighting build systems to make them work for us, the time came to roll out a new one that does it right. * _Python is too hard. Why not create a simpler DSL?_ Others have done it, and it seems that the costs outweigh the benefits. Making a new language is not trivial. Making a _robust_ language could take years of effort. Python is here right now, with a huge ecosystem of libraries and tools. Yes, it introduces a learning curve, but getting familiar with Python is useful for so many practical reasons beyond writing build scripts for Rōnin. That said, if someone wants to contribute a simple DSL as an optional extra, we will consider! * _Why require Ninja, a binary, instead of building everything in 100% Python?_ Because it's silly to reinvent wheels, especially when the wheels are so good. Ninja is a one-trick pony that does its job extremely well. But it's just too low-level for most users, hence the need for a frontend. * _Why Ninja? It's already yesterday's news! There are even faster builders._ Eh, if you ignore the initial configuration phase, and are properly multithreading your build (`-j` flag in Make), then the time you wait for the build to finish ends up depending on your compiler, not the build system. Ninja was chosen because of its marvelous minimalism, not its speed. Ninja is actually [not much](http://david.rothlis.net/ninja-benchmark/) [faster](http://hamelot.io/programming/make-vs-ninja-performance-comparison/) than Make. For a similarly minimalist build system, see [tup](http://gittup.org/tup/). Similar Projects ---------------- * [bfg9000](https://github.com/jimporter/bfg9000): "bfg9000 is a cross-platform build configuration system with an emphasis on making it easy to define how to build your software." * [emk](https://github.com/kmackay/emk): "A Python-based build tool." 
* [Craftr](https://craftr.net/): "Craftr is a next generation build system based on Ninja and Python that features modular and cross-platform build definitions at the flexibility of a Python script and provides access to multiple levels of build automation abstraction." * [Meson](http://mesonbuild.com/): "Meson is an open source build system meant to be both extremely fast, and, even more importantly, as user friendly as possible." * [pyrate](https://github.com/pyrate-build/pyrate-build): "pyrate is a small python based build file generator targeting ninja(s)." * [Waf](https://waf.io/): "The meta build system."
/ronin-1.1.1.tar.gz/ronin-1.1.1/README.md
0.769427
0.767385
README.md
pypi
import re

import numpy as np


def to_isoformat(tm):
    """
    Returns an ISO 8601 string from a time object (of different types).

    :param tm: Time object
    :return: (str) ISO 8601 time string
    """
    if type(tm) == np.datetime64:
        return str(tm).split(".")[0]
    else:
        return tm.isoformat()


class AnyCalendarDateTime:
    """
    A class to represent a datetime that could be of any calendar.
    Has the ability to add and subtract a day from the input based on
    MAX_DAY, MIN_DAY, MAX_MONTH and MIN_MONTH
    """

    MONTH_RANGE = range(1, 13)
    # 31 is the maximum number of days in any month in any of the calendars supported by cftime
    DAY_RANGE = range(1, 32)
    HOUR_RANGE = range(0, 24)
    MINUTE_RANGE = range(0, 60)
    SECOND_RANGE = range(0, 60)

    def __init__(self, year, month, day, hour, minute, second):
        self.year = year
        self.month = month
        self.validate_input(self.month, "month", self.MONTH_RANGE)
        self.day = day
        self.validate_input(self.day, "day", self.DAY_RANGE)
        self.hour = hour
        self.validate_input(self.hour, "hour", self.HOUR_RANGE)
        self.minute = minute
        self.validate_input(self.minute, "minute", self.MINUTE_RANGE)
        self.second = second
        self.validate_input(self.second, "second", self.SECOND_RANGE)

    def validate_input(self, value, name, valid_range):
        # note: parameter names avoid shadowing the builtins `input` and `range`
        if value not in valid_range:
            raise ValueError(
                f"Invalid input {value} for {name}. Expected value between "
                f"{valid_range[0]} and {valid_range[-1]}."
            )

    def __repr__(self):
        return self.value

    @property
    def value(self):
        return (
            f"{self.year}-{self.month:02d}-{self.day:02d}"
            f"T{self.hour:02d}:{self.minute:02d}:{self.second:02d}"
        )

    def add_day(self):
        """
        Add a day to the input datetime.
        """
        self.day += 1
        if self.day > self.DAY_RANGE[-1]:
            self.month += 1
            self.day = 1
            if self.month > self.MONTH_RANGE[-1]:
                self.year += 1
                self.month = self.MONTH_RANGE[0]

    def sub_day(self, n=1):
        """
        Subtract a day from the input datetime.
""" self.day -= 1 if self.day < self.DAY_RANGE[0]: self.month -= 1 self.day = self.DAY_RANGE[-1] if self.month < self.MONTH_RANGE[0]: self.year -= 1 self.month = self.MONTH_RANGE[-1] def str_to_AnyCalendarDateTime(dt, defaults=None): """ Takes a string representing date/time and returns a DateTimeAnyTime object. String formats should start with Year and go through to Second, but you can miss out anything from month onwards. :param dt: (str) string representing a date/time. :param defaults: (list) The default values to use for year, month, day, hour, minute and second if they cannot be parsed from the string. A default value must be provided for each component. If defaults=None, [-1, 1, 1, 0, 0, 0] is used. :return: AnyCalendarDateTime object """ if not dt and not defaults: raise Exception( "Must provide at least the year as argument, or all defaults, to create date time." ) # Start with most common pattern regex = re.compile(r"^(\d+)-(\d+)-(\d+)[T ](\d+):(\d+):(\d+)$") match = regex.match(dt) if match: items = match.groups() else: # Try a more complex split and build of the time string if not defaults: defaults = [-1, 1, 1, 0, 0, 0] else: if len(defaults) < 6: raise Exception( "A default value must be provided for year, month, day, hour, minute and second." ) components = re.split("[- T:]", dt.strip("Z")) # Build a list of time components items = components + defaults[len(components) :] return AnyCalendarDateTime(*[int(float(i)) for i in items])
/roocs_utils-0.6.4.tar.gz/roocs_utils-0.6.4/roocs_utils/utils/time_utils.py
0.861567
0.611295
time_utils.py
pypi
import os

from roocs_utils import CONFIG
from roocs_utils.exceptions import InvalidProject


class FileMapper:
    """
    Class to represent a set of files that exist in the same directory as one object.

    Args:
        file_list: the list of files to represent. If dirpath is not provided,
            these should be full file paths.
        dirpath: The directory path where the files exist. Default is None.
            If dirpath is not provided it will be deduced from the file paths
            provided in file_list.

    Attributes:
        file_list: list of file names of the files represented.
        file_paths: list of full file paths of the files represented.
        dirpath: The directory path where the files exist. Either deduced or provided.
    """

    def __init__(self, file_list, dirpath=None):
        self.dirpath = dirpath
        self.file_list = file_list
        self._resolve()

    def _resolve(self):
        if not self.dirpath:
            first_dir = os.path.dirname(self.file_list[0])
            if os.path.dirname(os.path.commonprefix(self.file_list)) == first_dir:
                self.dirpath = first_dir
            else:
                raise Exception(
                    "File inputs are not from the same directory so cannot be resolved."
                )

        self.file_list = [os.path.basename(fpath) for fpath in self.file_list]
        self.file_paths = [
            os.path.join(self.dirpath, fname) for fname in self.file_list
        ]

        # check all files exist on file system
        if not all(os.path.isfile(file) for file in self.file_paths):
            raise FileNotFoundError("Some files could not be found.")


def is_file_list(coll):
    """
    Checks whether a collection is a list of files.

    :param coll: (list) collection to check.
    :return: True if collection is a list of files, else False.
    """
    # check if collection is a list of files
    if not isinstance(coll, list):
        raise Exception(f"Expected collection as a list, have received {type(coll)}")
    if coll[0].startswith("/") or coll[0].startswith("http"):
        return True
    return False
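`FileMapper._resolve` decides whether all inputs share one directory by comparing the directory of the first path with the directory of `os.path.commonprefix` over all paths. A quick standalone illustration of that check (the helper name is hypothetical):

```python
import os.path

def deduce_dirpath(file_list):
    """Return the shared directory, mirroring FileMapper._resolve's check."""
    first_dir = os.path.dirname(file_list[0])
    # commonprefix is purely character-wise, so compare dirnames, not prefixes
    if os.path.dirname(os.path.commonprefix(file_list)) == first_dir:
        return first_dir
    raise ValueError("File inputs are not from the same directory.")

print(deduce_dirpath(["/data/a.nc", "/data/b.nc"]))  # /data
```

Note that `commonprefix` works character by character, which is why the result is passed through `dirname` before comparing: `/data/x/a.nc` and `/data/y/b.nc` share the prefix `/data/` but live in different directories, and the check correctly rejects them.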
/roocs_utils-0.6.4.tar.gz/roocs_utils-0.6.4/roocs_utils/utils/file_utils.py
0.469034
0.178902
file_utils.py
pypi
import datetime from roocs_utils.exceptions import InvalidParameterValue from roocs_utils.parameter.base_parameter import _BaseIntervalOrSeriesParameter from roocs_utils.parameter.param_utils import parse_datetime from roocs_utils.parameter.param_utils import time_interval class TimeParameter(_BaseIntervalOrSeriesParameter): """ Class for time parameter used in subsetting operation. | Time can be input as: | A string of slash separated values: "2085-01-01T12:00:00Z/2120-12-30T12:00:00Z" | A sequence of strings: e.g. ("2085-01-01T12:00:00Z", "2120-12-30T12:00:00Z") A time input must be 2 values. If using a string input a trailing slash indicates you want to use the earliest/ latest time of the dataset. e.g. "2085-01-01T12:00:00Z/" will subset from 01/01/2085 to the final time in the dataset. Validates the times input and parses the values into isoformat. """ def _parse_as_interval(self): start, end = self.input.value try: if start is not None: start = parse_datetime( start, defaults=[datetime.MINYEAR, 1, 1, 0, 0, 0] ) if end is not None: end = parse_datetime( end, defaults=[datetime.MAXYEAR, 12, 31, 23, 59, 59] ) except Exception: raise InvalidParameterValue("Unable to parse the time values entered") # Set as None if no start or end, otherwise set as tuple value = (start, end) if set(value) == {None}: value = None return value def _parse_as_series(self): try: value = [parse_datetime(tm) for tm in self.input.value] except Exception: raise InvalidParameterValue("Unable to parse the time values entered") return value def asdict(self): """Returns a dictionary of the time values""" if self.type in ("interval", "none"): value = self._value_as_tuple() return {"start_time": value[0], "end_time": value[1]} elif self.type == "series": return {"time_values": self.value} def get_bounds(self): """Returns a tuple of the (start, end) times, calculated from the value of the parameter. 
Either will default to None.""" if self.type in ("interval", "none"): return self._value_as_tuple() elif self.type == "series": return self.value[0], self.value[-1] def __str__(self): if self.type in ("interval", "none"): value = self._value_as_tuple() return ( f"Time period to subset over" f"\n start time: {value[0]}" f"\n end time: {value[1]}" ) else: return f"Time values to select: {self.value}"
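The trailing-slash convention described in the class docstring (an empty side of the `/` means "open-ended") can be sketched standalone. This mirrors, but does not reuse, the package's range parsing (the function name is hypothetical):

```python
def parse_time_range(value):
    """Split a "start/end" string; an empty side becomes None (open-ended)."""
    # strip() + `or None` turns "" (or whitespace) into None
    start, end = (part.strip() or None for part in value.split("/"))
    return start, end

print(parse_time_range("2085-01-01T12:00:00Z/"))
```

So `"2085-01-01T12:00:00Z/"` yields `('2085-01-01T12:00:00Z', None)`, which the parameter class then fills with the dataset's final time.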
/roocs_utils-0.6.4.tar.gz/roocs_utils-0.6.4/roocs_utils/parameter/time_parameter.py
0.677687
0.564939
time_parameter.py
pypi
import calendar
from collections.abc import Sequence

from roocs_utils.exceptions import InvalidParameterValue
from roocs_utils.utils.file_utils import FileMapper
from roocs_utils.utils.time_utils import str_to_AnyCalendarDateTime

# Global variables that are generally useful
month_map = {name.lower(): num for num, name in enumerate(calendar.month_abbr) if num}

time_comp_limits = {
    "year": None,
    "month": (1, 12),
    "day": (1, 40),  # allowing for strange calendars
    "hour": (0, 23),
    "minute": (0, 59),
    "second": (0, 59),
}


# A set of simple parser functions
def parse_range(x, caller):
    if isinstance(x, Sequence) and len(x) == 1:
        x = x[0]

    if x in ("/", None, ""):
        start = None
        end = None
    elif isinstance(x, str):
        if "/" not in x:
            raise InvalidParameterValue(
                f"{caller} should be passed in as a range separated by /"
            )
        # empty string either side of '/' is converted to None
        start, end = (i.strip() or None for i in x.split("/"))
    elif isinstance(x, Sequence):
        if len(x) != 2:
            raise InvalidParameterValue(
                f"{caller} should be a range. Expected 2 values, "
                f"received {len(x)}"
            )
        start, end = x
    else:
        raise InvalidParameterValue(f"{caller} is not in an accepted format")

    return start, end


def parse_sequence(x, caller):
    if x in (None, ""):
        sequence = None
    # check str or bytes
    elif isinstance(x, (str, bytes)):
        sequence = [i.strip() for i in x.strip().split(",")]
    elif isinstance(x, FileMapper):
        sequence = [x]
    elif isinstance(x, Sequence):
        sequence = x
    else:
        raise InvalidParameterValue(f"{caller} is not in an accepted format")

    return sequence


def parse_datetime(dt, defaults=None):
    """Parses string to datetime and returns isoformat string for it.
    If `defaults` is set, use that in case `dt` is None."""
    return str(str_to_AnyCalendarDateTime(dt, defaults=defaults))


class Series:
    """
    A simple class for handling a series selection, created by any sequence
    as input. It has a `value` that holds the sequence as a list.
    """

    def __init__(self, *data):
        if len(data) == 1:
            data = data[0]

        self.value = parse_sequence(data, caller=self.__class__.__name__)


class Interval:
    """
    A simple class for handling an interval of any type. It holds a `start`
    and `end` but does not try to resolve the range, it is just a container
    to be used by other tools.

    The contents can be of any type, such as datetimes, strings etc.
    """

    def __init__(self, *data):
        self.value = parse_range(data, self.__class__.__name__)


class TimeComponents:
    """
    A simple class for parsing and representing a set of time components.

    The components are stored in a dictionary of {time_comp: values}, such as:

        {"year": [2000, 2001], "month": [1, 2, 3]}

    Note that you can provide month strings as strings or numbers, e.g.:
        "feb", "Feb", "February", 2
    """

    def __init__(
        self, year=None, month=None, day=None, hour=None, minute=None, second=None
    ):
        comps = ("year", "month", "day", "hour", "minute", "second")
        self.value = {}

        for comp in comps:
            if comp in locals():
                value = locals()[comp]

                # Only add to dict if defined
                if value is not None:
                    self.value[comp] = self._parse_component(comp, value)

    def _parse_component(self, time_comp, value):
        limits = time_comp_limits[time_comp]

        if isinstance(value, str):
            if "," in value:
                value = value.split(",")
            else:
                value = [value]

        if not isinstance(value, Sequence):
            value = [value]

        def _month_to_int(month):
            if isinstance(month, str):
                month = month_map.get(month.lower()[:3], month)
            return int(month)

        if time_comp == "month":
            value = [_month_to_int(month) for month in value]
        else:
            value = [int(i) for i in value]

        if limits:
            mn, mx = min(value), max(value)
            if mn < limits[0] or mx > limits[1]:
                raise ValueError(
                    f"Some time components are out of range for {time_comp}: "
                    f"({mn}, {mx})"
                )

        return value


def string_to_dict(s, splitters=("|", ":", ",")):
    """Convert a string to a dictionary of lists, based on
    splitting rules: splitters."""
    dct = {}

    for tdict in s.strip().split(splitters[0]):
        key, value = tdict.split(splitters[1])
        dct[key] = value.split(splitters[2])

    return dct


def to_float(i, allow_none=True):
    try:
        if allow_none and i is None:
            return i
        return float(i)
    except Exception:
        raise InvalidParameterValue("Values must be valid numbers")


# Create some aliases for creating simple selection types
series = time_series = level_series = area = collection = dimensions = Series
interval = time_interval = level_interval = Interval
time_components = TimeComponents
/roocs_utils-0.6.4.tar.gz/roocs_utils-0.6.4/roocs_utils/parameter/param_utils.py
0.79542
0.475849
param_utils.py
pypi
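The `month_map` idiom in the file above (building a name-to-number lookup from `calendar.month_abbr`) can be sketched stand-alone with only the standard library. The `month_to_int` helper below is a simplified re-implementation of the nested `_month_to_int` for illustration, not the packaged function:

```python
import calendar

# Map "jan" -> 1 ... "dec" -> 12; index 0 of calendar.month_abbr is an
# empty string, which the `if num` filter drops.
month_map = {name.lower(): num for num, name in enumerate(calendar.month_abbr) if num}


def month_to_int(month):
    """Accept 'feb', 'Feb', 'February', '2' or 2 and return the month number."""
    if isinstance(month, str):
        # Only the first three letters matter, so full names also work.
        month = month_map.get(month.lower()[:3], month)
    return int(month)


print(month_to_int("February"))  # 2
print(month_to_int("feb"))       # 2
print(month_to_int(12))          # 12
```

Truncating to three lowercase letters is what lets "February" and "feb" share one dictionary entry; numeric strings fall through `get` unchanged and are handled by `int()`.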
from roocs_utils.exceptions import InvalidParameterValue
from roocs_utils.parameter.base_parameter import _BaseParameter
from roocs_utils.parameter.param_utils import string_to_dict
from roocs_utils.parameter.param_utils import time_components


class TimeComponentsParameter(_BaseParameter):
    """
    Class for time components parameter used in subsetting operation.

    The Time Components are any, or none of:
     - year: [list of years]
     - month: [list of months]
     - day: [list of days]
     - hour: [list of hours]
     - minute: [list of minutes]
     - second: [list of seconds]

    `month` is special: you can use either strings or values:
        "feb", "mar" == 2, 3 == "02,03"

    Validates the times input and parses them into a dictionary.
    """

    allowed_input_types = [dict, str, time_components, type(None)]

    def _parse(self):
        try:
            if self.input in (None, ""):
                return None
            elif isinstance(self.input, time_components):
                return self.input.value
            elif isinstance(self.input, str):
                time_comp_dict = string_to_dict(self.input, splitters=("|", ":", ","))
                return time_components(**time_comp_dict).value
            else:
                # Must be a dict to get here
                return time_components(**self.input).value
        except Exception:
            raise InvalidParameterValue(
                f"Cannot create TimeComponentsParameter "
                f"from: {self.input}"
            )

    def asdict(self):
        # Just return the value, either a dict or None
        return {"time_components": self.value}

    def get_bounds(self):
        """Returns a tuple of the (start, end) times, calculated from
        the value of the parameter. Either will default to None."""
        if "year" in self.value:
            start = f"{self.value['year'][0]}-01-01T00:00:00"
            end = f"{self.value['year'][-1]}-12-31T23:59:59"
        else:
            start = end = None

        return (start, end)

    def __str__(self):
        if self.value is None:
            return "No time components specified"

        resp = "Time components to select:"
        for key, value in self.value.items():
            resp += f"\n {key} => {value}"

        return resp
/roocs_utils-0.6.4.tar.gz/roocs_utils-0.6.4/roocs_utils/parameter/time_components_parameter.py
0.769297
0.444505
time_components_parameter.py
pypi
from roocs_utils.exceptions import InvalidParameterValue
from roocs_utils.parameter.param_utils import interval
from roocs_utils.parameter.param_utils import series


class _BaseParameter:
    """
    Base class for parameters used in operations (e.g. subset, average etc.)
    """

    allowed_input_types = None

    def __init__(self, input):
        self.input = self.raw = input

        # If the input is already an instance of this class, call its parse method
        if isinstance(self.input, self.__class__):
            self.value = self.input.value
            self.type = getattr(self.input, "type", "undefined")
        else:
            self._check_input_type()
            self.value = self._parse()

    def _check_input_type(self):
        if not self.allowed_input_types:
            return
        if not isinstance(self.input, tuple(self.allowed_input_types)):
            raise InvalidParameterValue(
                f"Input type of {type(self.input)} not allowed. "
                f"Must be one of: {self.allowed_input_types}"
            )

    def _parse(self):
        raise NotImplementedError

    def get_bounds(self):
        """Returns a tuple of the (start, end) times, calculated from
        the value of the parameter. Either will default to None."""
        raise NotImplementedError

    def __str__(self):
        raise NotImplementedError

    def __repr__(self):
        return str(self)

    def __unicode__(self):
        return str(self)


class _BaseIntervalOrSeriesParameter(_BaseParameter):
    """
    A base class for a parameter that can be instantiated from either an
    `Interval` or `Series` class instance. It has a `type` and a `value`
    reflecting the type.

    E.g.:
        type: "interval" --> value: (start, end)
        type: "series"   --> value: [item1, item2, ..., item_n]
    """

    allowed_input_types = [interval, series, type(None), str]

    def _parse(self):
        if isinstance(self.input, interval):
            self.type = "interval"
            return self._parse_as_interval()
        elif isinstance(self.input, series):
            self.type = "series"
            return self._parse_as_series()
        elif isinstance(self.input, type(None)):
            self.type = "none"
            return None
        elif isinstance(self.input, str):
            if "/" in self.input:
                self.type = "interval"
                self.input = interval(self.input)
                return self._parse_as_interval()
            else:
                self.type = "series"
                self.input = series(self.input)
                return self._parse_as_series()

    def _parse_as_interval(self):
        raise NotImplementedError

    def _parse_as_series(self):
        raise NotImplementedError

    def _value_as_tuple(self):
        value = self.value
        if value is None:
            value = None, None
        return value
/roocs_utils-0.6.4.tar.gz/roocs_utils-0.6.4/roocs_utils/parameter/base_parameter.py
0.881933
0.331742
base_parameter.py
pypi
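The string dispatch in `_BaseIntervalOrSeriesParameter._parse` above (a `/` means an interval, otherwise a comma-separated series) can be sketched as a small stand-alone function. `classify_selection` is a simplified illustration using only the standard library, not the packaged class:

```python
def classify_selection(text):
    """Mimic the string branch of _BaseIntervalOrSeriesParameter._parse:
    a '/' means an interval (start, end); otherwise a comma-separated series.
    An empty side of the '/' becomes None (open-ended range)."""
    if "/" in text:
        start, end = (part.strip() or None for part in text.split("/", 1))
        return "interval", (start, end)
    return "series", [part.strip() for part in text.split(",")]


print(classify_selection("1000/2000"))  # ('interval', ('1000', '2000'))
print(classify_selection("/2000"))      # ('interval', (None, '2000'))
print(classify_selection("a, b, c"))    # ('series', ['a', 'b', 'c'])
```

This is why a trailing or leading slash (as documented in `LevelParameter` below the base class) selects from the lowest or to the highest value of the dataset: the empty side parses to `None`.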
from roocs_utils.parameter.base_parameter import _BaseIntervalOrSeriesParameter
from roocs_utils.parameter.param_utils import to_float
from roocs_utils.exceptions import InvalidParameterValue


class LevelParameter(_BaseIntervalOrSeriesParameter):
    """
    Class for level parameter used in subsetting operation.

    | Level can be input as:
    | A string of slash separated values: "1000/2000"
    | A sequence of strings: e.g. ("1000.50", "2000.60")
    | A sequence of numbers: e.g. (1000.50, 2000.60)

    A level input must be 2 values.

    If using a string input a trailing slash indicates you want to use
    the lowest/highest level of the dataset. e.g. "/2000" will subset
    from the lowest level in the dataset to 2000.

    Validates the level input and parses the values into numbers.
    """

    def _parse_as_interval(self):
        try:
            value = tuple([to_float(i) for i in self.input.value])
        except InvalidParameterValue:
            raise
        except Exception:
            raise InvalidParameterValue("Unable to parse the level values entered")

        if set(value) == {None}:
            value = None

        return value

    def _parse_as_series(self):
        try:
            value = [to_float(i) for i in self.input.value if i is not None]
        except InvalidParameterValue:
            raise
        except Exception:
            raise InvalidParameterValue("Unable to parse the level values entered")

        return value

    def asdict(self):
        """Returns a dictionary of the level values"""
        if self.type in ("interval", "none"):
            value = self._value_as_tuple()
            return {"first_level": value[0], "last_level": value[1]}
        elif self.type == "series":
            return {"level_values": self.value}

    def __str__(self):
        if self.type in ("interval", "none"):
            value = self._value_as_tuple()
            return (
                f"Level range to subset over"
                f"\n first_level: {value[0]}"
                f"\n last_level: {value[1]}"
            )
        else:
            return f"Level values to select: {self.value}"
/roocs_utils-0.6.4.tar.gz/roocs_utils-0.6.4/roocs_utils/parameter/level_parameter.py
0.790732
0.568955
level_parameter.py
pypi
import logging
from typing import List, Optional, Tuple

from .config import CfgNode as CN
from .defaults import _C

__all__ = ["upgrade_config", "downgrade_config"]


def upgrade_config(cfg: CN, to_version: Optional[int] = None) -> CN:
    """
    Upgrade a config from its current version to a newer version.

    Args:
        cfg (CfgNode):
        to_version (int): defaults to the latest version.
    """
    cfg = cfg.clone()
    if to_version is None:
        to_version = _C.VERSION

    assert cfg.VERSION <= to_version, "Cannot upgrade from v{} to v{}!".format(
        cfg.VERSION, to_version
    )
    for k in range(cfg.VERSION, to_version):
        converter = globals()["ConverterV" + str(k + 1)]
        converter.upgrade(cfg)
        cfg.VERSION = k + 1
    return cfg


def downgrade_config(cfg: CN, to_version: int) -> CN:
    """
    Downgrade a config from its current version to an older version.

    Args:
        cfg (CfgNode):
        to_version (int):

    Note:
        A general downgrade of arbitrary configs is not always possible due to the
        different functionalities in different versions.
        The purpose of downgrade is only to recover the defaults in old versions,
        allowing it to load an old partial yaml config.
        Therefore, the implementation only needs to fill in the default values
        in the old version when a general downgrade is not possible.
    """
    cfg = cfg.clone()
    assert cfg.VERSION >= to_version, "Cannot downgrade from v{} to v{}!".format(
        cfg.VERSION, to_version
    )
    for k in range(cfg.VERSION, to_version, -1):
        converter = globals()["ConverterV" + str(k)]
        converter.downgrade(cfg)
        cfg.VERSION = k - 1
    return cfg


def guess_version(cfg: CN, filename: str) -> int:
    """
    Guess the version of a partial config where the VERSION field is not specified.
    Returns the version, or the latest if cannot make a guess.

    This makes it easier for users to migrate.
    """
    logger = logging.getLogger(__name__)

    def _has(name: str) -> bool:
        cur = cfg
        for n in name.split("."):
            if n not in cur:
                return False
            cur = cur[n]
        return True

    # Most users' partial configs have "MODEL.WEIGHT", so guess on it
    ret = None
    if _has("MODEL.WEIGHT") or _has("TEST.AUG_ON"):
        ret = 1

    if ret is not None:
        logger.warning("Config '{}' has no VERSION. Assuming it to be v{}.".format(filename, ret))
    else:
        ret = _C.VERSION
        logger.warning(
            "Config '{}' has no VERSION. Assuming it to be compatible with latest v{}.".format(
                filename, ret
            )
        )
    return ret


def _rename(cfg: CN, old: str, new: str) -> None:
    old_keys = old.split(".")
    new_keys = new.split(".")

    def _set(key_seq: List[str], val: str) -> None:
        cur = cfg
        for k in key_seq[:-1]:
            if k not in cur:
                cur[k] = CN()
            cur = cur[k]
        cur[key_seq[-1]] = val

    def _get(key_seq: List[str]) -> CN:
        cur = cfg
        for k in key_seq:
            cur = cur[k]
        return cur

    def _del(key_seq: List[str]) -> None:
        cur = cfg
        for k in key_seq[:-1]:
            cur = cur[k]
        del cur[key_seq[-1]]
        if len(cur) == 0 and len(key_seq) > 1:
            _del(key_seq[:-1])

    _set(new_keys, _get(old_keys))
    _del(old_keys)


class _RenameConverter:
    """
    A converter that handles simple rename.
    """

    RENAME: List[Tuple[str, str]] = []  # list of tuples of (old name, new name)

    @classmethod
    def upgrade(cls, cfg: CN) -> None:
        for old, new in cls.RENAME:
            _rename(cfg, old, new)

    @classmethod
    def downgrade(cls, cfg: CN) -> None:
        for old, new in cls.RENAME[::-1]:
            _rename(cfg, new, old)


class ConverterV1(_RenameConverter):
    RENAME = [("MODEL.RPN_HEAD.NAME", "MODEL.RPN.HEAD_NAME")]


class ConverterV2(_RenameConverter):
    """
    A large bulk of rename, before public release.
    """

    RENAME = [
        ("MODEL.WEIGHT", "MODEL.WEIGHTS"),
        ("MODEL.PANOPTIC_FPN.SEMANTIC_LOSS_SCALE", "MODEL.SEM_SEG_HEAD.LOSS_WEIGHT"),
        ("MODEL.PANOPTIC_FPN.RPN_LOSS_SCALE", "MODEL.RPN.LOSS_WEIGHT"),
        ("MODEL.PANOPTIC_FPN.INSTANCE_LOSS_SCALE", "MODEL.PANOPTIC_FPN.INSTANCE_LOSS_WEIGHT"),
        ("MODEL.PANOPTIC_FPN.COMBINE_ON", "MODEL.PANOPTIC_FPN.COMBINE.ENABLED"),
        (
            "MODEL.PANOPTIC_FPN.COMBINE_OVERLAP_THRESHOLD",
            "MODEL.PANOPTIC_FPN.COMBINE.OVERLAP_THRESH",
        ),
        (
            "MODEL.PANOPTIC_FPN.COMBINE_STUFF_AREA_LIMIT",
            "MODEL.PANOPTIC_FPN.COMBINE.STUFF_AREA_LIMIT",
        ),
        (
            "MODEL.PANOPTIC_FPN.COMBINE_INSTANCES_CONFIDENCE_THRESHOLD",
            "MODEL.PANOPTIC_FPN.COMBINE.INSTANCES_CONFIDENCE_THRESH",
        ),
        ("MODEL.ROI_HEADS.SCORE_THRESH", "MODEL.ROI_HEADS.SCORE_THRESH_TEST"),
        ("MODEL.ROI_HEADS.NMS", "MODEL.ROI_HEADS.NMS_THRESH_TEST"),
        ("MODEL.RETINANET.INFERENCE_SCORE_THRESHOLD", "MODEL.RETINANET.SCORE_THRESH_TEST"),
        ("MODEL.RETINANET.INFERENCE_TOPK_CANDIDATES", "MODEL.RETINANET.TOPK_CANDIDATES_TEST"),
        ("MODEL.RETINANET.INFERENCE_NMS_THRESHOLD", "MODEL.RETINANET.NMS_THRESH_TEST"),
        ("TEST.DETECTIONS_PER_IMG", "TEST.DETECTIONS_PER_IMAGE"),
        ("TEST.AUG_ON", "TEST.AUG.ENABLED"),
        ("TEST.AUG_MIN_SIZES", "TEST.AUG.MIN_SIZES"),
        ("TEST.AUG_MAX_SIZE", "TEST.AUG.MAX_SIZE"),
        ("TEST.AUG_FLIP", "TEST.AUG.FLIP"),
    ]

    @classmethod
    def upgrade(cls, cfg: CN) -> None:
        super().upgrade(cfg)

        if cfg.MODEL.META_ARCHITECTURE == "RetinaNet":
            _rename(
                cfg, "MODEL.RETINANET.ANCHOR_ASPECT_RATIOS", "MODEL.ANCHOR_GENERATOR.ASPECT_RATIOS"
            )
            _rename(cfg, "MODEL.RETINANET.ANCHOR_SIZES", "MODEL.ANCHOR_GENERATOR.SIZES")
            del cfg["MODEL"]["RPN"]["ANCHOR_SIZES"]
            del cfg["MODEL"]["RPN"]["ANCHOR_ASPECT_RATIOS"]
        else:
            _rename(cfg, "MODEL.RPN.ANCHOR_ASPECT_RATIOS", "MODEL.ANCHOR_GENERATOR.ASPECT_RATIOS")
            _rename(cfg, "MODEL.RPN.ANCHOR_SIZES", "MODEL.ANCHOR_GENERATOR.SIZES")
            del cfg["MODEL"]["RETINANET"]["ANCHOR_SIZES"]
            del cfg["MODEL"]["RETINANET"]["ANCHOR_ASPECT_RATIOS"]
        del cfg["MODEL"]["RETINANET"]["ANCHOR_STRIDES"]

    @classmethod
    def downgrade(cls, cfg: CN) -> None:
        super().downgrade(cfg)

        _rename(cfg, "MODEL.ANCHOR_GENERATOR.ASPECT_RATIOS", "MODEL.RPN.ANCHOR_ASPECT_RATIOS")
        _rename(cfg, "MODEL.ANCHOR_GENERATOR.SIZES", "MODEL.RPN.ANCHOR_SIZES")
        cfg.MODEL.RETINANET.ANCHOR_ASPECT_RATIOS = cfg.MODEL.RPN.ANCHOR_ASPECT_RATIOS
        cfg.MODEL.RETINANET.ANCHOR_SIZES = cfg.MODEL.RPN.ANCHOR_SIZES
        cfg.MODEL.RETINANET.ANCHOR_STRIDES = []  # this is not used anywhere in any version
/roof_mask_Yv8-0.5.9-py3-none-any.whl/detectron2/config/compat.py
0.846895
0.197367
compat.py
pypi
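The dotted-key rename that `compat._rename` performs on `CfgNode` objects (move a value like `MODEL.WEIGHT` to `MODEL.WEIGHTS`, creating intermediate nodes and pruning emptied ones) can be sketched over plain dicts. `rename_key` below is an illustrative simplification, not the detectron2 function:

```python
def rename_key(cfg, old, new):
    """Move a dotted key (e.g. 'MODEL.WEIGHT') to a new dotted location,
    creating intermediate dicts and pruning emptied parents, in the spirit
    of compat._rename."""
    old_keys, new_keys = old.split("."), new.split(".")

    # Fetch and delete the value at the old location.
    cur = cfg
    for k in old_keys[:-1]:
        cur = cur[k]
    val = cur.pop(old_keys[-1])

    # Prune parents that became empty after the pop.
    parents = old_keys[:-1]
    while parents:
        node = cfg
        for k in parents[:-1]:
            node = node[k]
        if node[parents[-1]]:  # still has other children: stop pruning
            break
        del node[parents[-1]]
        parents = parents[:-1]

    # Set the value at the new location, creating intermediate dicts.
    cur = cfg
    for k in new_keys[:-1]:
        cur = cur.setdefault(k, {})
    cur[new_keys[-1]] = val


cfg = {"MODEL": {"WEIGHT": "x.pkl"}}
rename_key(cfg, "MODEL.WEIGHT", "MODEL2.WEIGHTS")
print(cfg)  # {'MODEL2': {'WEIGHTS': 'x.pkl'}}
```

Running every `(old, new)` pair in a converter's `RENAME` list through such a function is exactly how `_RenameConverter.upgrade` works; `downgrade` applies the pairs reversed, in reverse order.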
import functools
import inspect
import logging
from fvcore.common.config import CfgNode as _CfgNode

from detectron2.utils.file_io import PathManager


class CfgNode(_CfgNode):
    """
    The same as `fvcore.common.config.CfgNode`, but different in:

    1. Use unsafe yaml loading by default.
       Note that this may lead to arbitrary code execution: you must not
       load a config file from untrusted sources before manually inspecting
       the content of the file.
    2. Support config versioning.
       When attempting to merge an old config, it will convert the old config automatically.

    .. automethod:: clone
    .. automethod:: freeze
    .. automethod:: defrost
    .. automethod:: is_frozen
    .. automethod:: load_yaml_with_base
    .. automethod:: merge_from_list
    .. automethod:: merge_from_other_cfg
    """

    @classmethod
    def _open_cfg(cls, filename):
        return PathManager.open(filename, "r")

    # Note that the default value of allow_unsafe is changed to True
    def merge_from_file(self, cfg_filename: str, allow_unsafe: bool = True) -> None:
        """
        Load content from the given config file and merge it into self.

        Args:
            cfg_filename: config filename
            allow_unsafe: allow unsafe yaml syntax
        """
        assert PathManager.isfile(cfg_filename), f"Config file '{cfg_filename}' does not exist!"
        loaded_cfg = self.load_yaml_with_base(cfg_filename, allow_unsafe=allow_unsafe)
        loaded_cfg = type(self)(loaded_cfg)

        # defaults.py needs to import CfgNode
        from .defaults import _C

        latest_ver = _C.VERSION
        assert (
            latest_ver == self.VERSION
        ), "CfgNode.merge_from_file is only allowed on a config object of latest version!"

        logger = logging.getLogger(__name__)

        loaded_ver = loaded_cfg.get("VERSION", None)
        if loaded_ver is None:
            from .compat import guess_version

            loaded_ver = guess_version(loaded_cfg, cfg_filename)
        assert loaded_ver <= self.VERSION, "Cannot merge a v{} config into a v{} config.".format(
            loaded_ver, self.VERSION
        )

        if loaded_ver == self.VERSION:
            self.merge_from_other_cfg(loaded_cfg)
        else:
            # compat.py needs to import CfgNode
            from .compat import upgrade_config, downgrade_config

            logger.warning(
                "Loading an old v{} config file '{}' by automatically upgrading to v{}. "
                "See docs/CHANGELOG.md for instructions to update your files.".format(
                    loaded_ver, cfg_filename, self.VERSION
                )
            )
            # To convert, first obtain a full config at an old version
            old_self = downgrade_config(self, to_version=loaded_ver)
            old_self.merge_from_other_cfg(loaded_cfg)
            new_config = upgrade_config(old_self)
            self.clear()
            self.update(new_config)

    def dump(self, *args, **kwargs):
        """
        Returns:
            str: a yaml string representation of the config
        """
        # to make it show up in docs
        return super().dump(*args, **kwargs)


global_cfg = CfgNode()


def get_cfg() -> CfgNode:
    """
    Get a copy of the default config.

    Returns:
        a detectron2 CfgNode instance.
    """
    from .defaults import _C

    return _C.clone()


def set_global_cfg(cfg: CfgNode) -> None:
    """
    Let the global config point to the given cfg.

    Assume that the given "cfg" has the key "KEY", after calling
    `set_global_cfg(cfg)`, the key can be accessed by:
    ::
        from detectron2.config import global_cfg
        print(global_cfg.KEY)

    By using a hacky global config, you can access these configs anywhere,
    without having to pass the config object or the values deep into the code.
    This is a hacky feature introduced for quick prototyping / research exploration.
    """
    global global_cfg
    global_cfg.clear()
    global_cfg.update(cfg)


def configurable(init_func=None, *, from_config=None):
    """
    Decorate a function or a class's __init__ method so that it can be called
    with a :class:`CfgNode` object using a :func:`from_config` function that translates
    :class:`CfgNode` to arguments.

    Examples:
    ::
        # Usage 1: Decorator on __init__:
        class A:
            @configurable
            def __init__(self, a, b=2, c=3):
                pass

            @classmethod
            def from_config(cls, cfg):   # 'cfg' must be the first argument
                # Returns kwargs to be passed to __init__
                return {"a": cfg.A, "b": cfg.B}

        a1 = A(a=1, b=2)  # regular construction
        a2 = A(cfg)       # construct with a cfg
        a3 = A(cfg, b=3, c=4)  # construct with extra overwrite

        # Usage 2: Decorator on any function. Needs an extra from_config argument:
        @configurable(from_config=lambda cfg: {"a": cfg.A, "b": cfg.B})
        def a_func(a, b=2, c=3):
            pass

        a1 = a_func(a=1, b=2)  # regular call
        a2 = a_func(cfg)       # call with a cfg
        a3 = a_func(cfg, b=3, c=4)  # call with extra overwrite

    Args:
        init_func (callable): a class's ``__init__`` method in usage 1. The
            class must have a ``from_config`` classmethod which takes `cfg` as
            the first argument.
        from_config (callable): the from_config function in usage 2. It must take `cfg`
            as its first argument.
    """

    if init_func is not None:
        assert (
            inspect.isfunction(init_func)
            and from_config is None
            and init_func.__name__ == "__init__"
        ), "Incorrect use of @configurable. Check API documentation for examples."

        @functools.wraps(init_func)
        def wrapped(self, *args, **kwargs):
            try:
                from_config_func = type(self).from_config
            except AttributeError as e:
                raise AttributeError(
                    "Class with @configurable must have a 'from_config' classmethod."
                ) from e
            if not inspect.ismethod(from_config_func):
                raise TypeError("Class with @configurable must have a 'from_config' classmethod.")

            if _called_with_cfg(*args, **kwargs):
                explicit_args = _get_args_from_config(from_config_func, *args, **kwargs)
                init_func(self, **explicit_args)
            else:
                init_func(self, *args, **kwargs)

        return wrapped
    else:
        if from_config is None:
            return configurable  # @configurable() is made equivalent to @configurable
        assert inspect.isfunction(
            from_config
        ), "from_config argument of configurable must be a function!"

        def wrapper(orig_func):
            @functools.wraps(orig_func)
            def wrapped(*args, **kwargs):
                if _called_with_cfg(*args, **kwargs):
                    explicit_args = _get_args_from_config(from_config, *args, **kwargs)
                    return orig_func(**explicit_args)
                else:
                    return orig_func(*args, **kwargs)

            wrapped.from_config = from_config
            return wrapped

        return wrapper


def _get_args_from_config(from_config_func, *args, **kwargs):
    """
    Use `from_config` to obtain explicit arguments.

    Returns:
        dict: arguments to be used for cls.__init__
    """
    signature = inspect.signature(from_config_func)
    if list(signature.parameters.keys())[0] != "cfg":
        if inspect.isfunction(from_config_func):
            name = from_config_func.__name__
        else:
            name = f"{from_config_func.__self__}.from_config"
        raise TypeError(f"{name} must take 'cfg' as the first argument!")
    support_var_arg = any(
        param.kind in [param.VAR_POSITIONAL, param.VAR_KEYWORD]
        for param in signature.parameters.values()
    )
    if support_var_arg:
        # forward all arguments to from_config, if from_config accepts them
        ret = from_config_func(*args, **kwargs)
    else:
        # forward supported arguments to from_config
        supported_arg_names = set(signature.parameters.keys())
        extra_kwargs = {}
        for name in list(kwargs.keys()):
            if name not in supported_arg_names:
                extra_kwargs[name] = kwargs.pop(name)
        ret = from_config_func(*args, **kwargs)
        # forward the other arguments to __init__
        ret.update(extra_kwargs)
    return ret


def _called_with_cfg(*args, **kwargs):
    """
    Returns:
        bool: whether the arguments contain CfgNode and should be considered
            forwarded to from_config.
    """
    from omegaconf import DictConfig

    if len(args) and isinstance(args[0], (_CfgNode, DictConfig)):
        return True
    if isinstance(kwargs.pop("cfg", None), (_CfgNode, DictConfig)):
        return True
    # `from_config`'s first argument is forced to be "cfg".
    # So the above check covers all cases.
    return False
/roof_mask_Yv8-0.5.9-py3-none-any.whl/detectron2/config/config.py
0.777131
0.214825
config.py
pypi
import copy
import io
import logging
import numpy as np
from typing import List
import onnx
import torch
from caffe2.proto import caffe2_pb2
from caffe2.python import core
from caffe2.python.onnx.backend import Caffe2Backend
from tabulate import tabulate
from termcolor import colored
from torch.onnx import OperatorExportTypes

from .shared import (
    ScopedWS,
    construct_init_net_from_params,
    fuse_alias_placeholder,
    fuse_copy_between_cpu_and_gpu,
    get_params_from_init_net,
    group_norm_replace_aten_with_caffe2,
    infer_device_type,
    remove_dead_end_ops,
    remove_reshape_for_fc,
    save_graph,
)

logger = logging.getLogger(__name__)


def export_onnx_model(model, inputs):
    """
    Trace and export a model to onnx format.

    Args:
        model (nn.Module):
        inputs (tuple[args]): the model will be called by `model(*inputs)`

    Returns:
        an onnx model
    """
    assert isinstance(model, torch.nn.Module)

    # make sure all modules are in eval mode, onnx may change the training
    # state of the module if the states are not consistent
    def _check_eval(module):
        assert not module.training

    model.apply(_check_eval)

    # Export the model to ONNX
    with torch.no_grad():
        with io.BytesIO() as f:
            torch.onnx.export(
                model,
                inputs,
                f,
                operator_export_type=OperatorExportTypes.ONNX_ATEN_FALLBACK,
                # verbose=True,  # NOTE: uncomment this for debugging
                # export_params=True,
            )
            onnx_model = onnx.load_from_string(f.getvalue())

    # Apply ONNX's Optimization
    all_passes = onnx.optimizer.get_available_passes()
    passes = ["fuse_bn_into_conv"]
    assert all(p in all_passes for p in passes)
    onnx_model = onnx.optimizer.optimize(onnx_model, passes)
    return onnx_model


def _op_stats(net_def):
    type_count = {}
    for t in [op.type for op in net_def.op]:
        type_count[t] = type_count.get(t, 0) + 1
    type_count_list = sorted(type_count.items(), key=lambda kv: kv[0])  # alphabet
    type_count_list = sorted(type_count_list, key=lambda kv: -kv[1])  # count
    return "\n".join("{:>4}x {}".format(count, name) for name, count in type_count_list)


def _assign_device_option(
    predict_net: caffe2_pb2.NetDef, init_net: caffe2_pb2.NetDef, tensor_inputs: List[torch.Tensor]
):
    """
    ONNX exported network doesn't have concept of device, assign necessary
    device option for each op in order to make it runnable on GPU runtime.
    """

    def _get_device_type(torch_tensor):
        assert torch_tensor.device.type in ["cpu", "cuda"]
        assert torch_tensor.device.index == 0
        return torch_tensor.device.type

    def _assign_op_device_option(net_proto, net_ssa, blob_device_types):
        for op, ssa_i in zip(net_proto.op, net_ssa):
            if op.type in ["CopyCPUToGPU", "CopyGPUToCPU"]:
                op.device_option.CopyFrom(core.DeviceOption(caffe2_pb2.CUDA, 0))
            else:
                devices = [blob_device_types[b] for b in ssa_i[0] + ssa_i[1]]
                assert all(d == devices[0] for d in devices)
                if devices[0] == "cuda":
                    op.device_option.CopyFrom(core.DeviceOption(caffe2_pb2.CUDA, 0))

    # update ops in predict_net
    predict_net_input_device_types = {
        (name, 0): _get_device_type(tensor)
        for name, tensor in zip(predict_net.external_input, tensor_inputs)
    }
    predict_net_device_types = infer_device_type(
        predict_net, known_status=predict_net_input_device_types, device_name_style="pytorch"
    )
    predict_net_ssa, _ = core.get_ssa(predict_net)
    _assign_op_device_option(predict_net, predict_net_ssa, predict_net_device_types)

    # update ops in init_net
    init_net_ssa, versions = core.get_ssa(init_net)
    init_net_output_device_types = {
        (name, versions[name]): predict_net_device_types[(name, 0)]
        for name in init_net.external_output
    }
    init_net_device_types = infer_device_type(
        init_net, known_status=init_net_output_device_types, device_name_style="pytorch"
    )
    _assign_op_device_option(init_net, init_net_ssa, init_net_device_types)


def export_caffe2_detection_model(model: torch.nn.Module, tensor_inputs: List[torch.Tensor]):
    """
    Export a caffe2-compatible Detectron2 model to caffe2 format via ONNX.

    Arg:
        model: a caffe2-compatible version of detectron2 model, defined in caffe2_modeling.py
        tensor_inputs: a list of tensors that caffe2 model takes as input.
    """
    model = copy.deepcopy(model)
    assert isinstance(model, torch.nn.Module)
    assert hasattr(model, "encode_additional_info")

    # Export via ONNX
    logger.info(
        "Exporting a {} model via ONNX ...".format(type(model).__name__)
        + " Some warnings from ONNX are expected and are usually not to worry about."
    )
    onnx_model = export_onnx_model(model, (tensor_inputs,))
    # Convert ONNX model to Caffe2 protobuf
    init_net, predict_net = Caffe2Backend.onnx_graph_to_caffe2_net(onnx_model)
    ops_table = [[op.type, op.input, op.output] for op in predict_net.op]
    table = tabulate(ops_table, headers=["type", "input", "output"], tablefmt="pipe")
    logger.info(
        "ONNX export Done. Exported predict_net (before optimizations):\n" + colored(table, "cyan")
    )

    # Apply protobuf optimization
    fuse_alias_placeholder(predict_net, init_net)
    if any(t.device.type != "cpu" for t in tensor_inputs):
        fuse_copy_between_cpu_and_gpu(predict_net)
        remove_dead_end_ops(init_net)
        _assign_device_option(predict_net, init_net, tensor_inputs)
    params, device_options = get_params_from_init_net(init_net)
    predict_net, params = remove_reshape_for_fc(predict_net, params)
    init_net = construct_init_net_from_params(params, device_options)
    group_norm_replace_aten_with_caffe2(predict_net)

    # Record necessary information for running the pb model in Detectron2 system.
    model.encode_additional_info(predict_net, init_net)

    logger.info("Operators used in predict_net: \n{}".format(_op_stats(predict_net)))
    logger.info("Operators used in init_net: \n{}".format(_op_stats(init_net)))

    return predict_net, init_net


def run_and_save_graph(predict_net, init_net, tensor_inputs, graph_save_path):
    """
    Run the caffe2 model on given inputs, recording the shape and draw the graph.

    predict_net/init_net: caffe2 model.
    tensor_inputs: a list of tensors that caffe2 model takes as input.
    graph_save_path: path for saving graph of exported model.
    """
    logger.info("Saving graph of ONNX exported model to {} ...".format(graph_save_path))
    save_graph(predict_net, graph_save_path, op_only=False)

    # Run the exported Caffe2 net
    logger.info("Running ONNX exported model ...")
    with ScopedWS("__ws_tmp__", True) as ws:
        ws.RunNetOnce(init_net)
        initialized_blobs = set(ws.Blobs())
        uninitialized = [inp for inp in predict_net.external_input if inp not in initialized_blobs]
        for name, blob in zip(uninitialized, tensor_inputs):
            ws.FeedBlob(name, blob)

        try:
            ws.RunNetOnce(predict_net)
        except RuntimeError as e:
            logger.warning("Encountered RuntimeError: \n{}".format(str(e)))

        ws_blobs = {b: ws.FetchBlob(b) for b in ws.Blobs()}
        blob_sizes = {b: ws_blobs[b].shape for b in ws_blobs if isinstance(ws_blobs[b], np.ndarray)}

        logger.info("Saving graph with blob shapes to {} ...".format(graph_save_path))
        save_graph(predict_net, graph_save_path, op_only=False, blob_sizes=blob_sizes)

        return ws_blobs
/roof_mask_Yv8-0.5.9-py3-none-any.whl/detectron2/export/caffe2_export.py
0.894703
0.335174
caffe2_export.py
pypi
import os
import torch

from detectron2.utils.file_io import PathManager

from .torchscript_patch import freeze_training_mode, patch_instances

__all__ = ["scripting_with_instances", "dump_torchscript_IR"]


def scripting_with_instances(model, fields):
    """
    Run :func:`torch.jit.script` on a model that uses the :class:`Instances` class. Since
    attributes of :class:`Instances` are "dynamically" added in eager mode, it is difficult
    for scripting to support it out of the box. This function is made to support scripting
    a model that uses :class:`Instances`. It does the following:

    1. Create a scriptable ``new_Instances`` class which behaves similarly to ``Instances``,
       but with all attributes been "static".
       The attributes need to be statically declared in the ``fields`` argument.
    2. Register ``new_Instances``, and force scripting compiler to
       use it when trying to compile ``Instances``.

    After this function, the process will be reverted. User should be able to script another model
    using different fields.

    Example:
        Assume that ``Instances`` in the model consist of two attributes named
        ``proposal_boxes`` and ``objectness_logits`` with type :class:`Boxes` and
        :class:`Tensor` respectively during inference. You can call this function like:
        ::
            fields = {"proposal_boxes": Boxes, "objectness_logits": torch.Tensor}
            torchscipt_model = scripting_with_instances(model, fields)

    Note:
        It only support models in evaluation mode.

    Args:
        model (nn.Module): The input model to be exported by scripting.
        fields (Dict[str, type]): Attribute names and corresponding type that
            ``Instances`` will use in the model. Note that all attributes used in
            ``Instances`` need to be added, regardless of whether they are inputs/outputs
            of the model.
            Data type not defined in detectron2 is not supported for now.

    Returns:
        torch.jit.ScriptModule: the model in torchscript format
    """
    assert (
        not model.training
    ), "Currently we only support exporting models in evaluation mode to torchscript"

    with freeze_training_mode(model), patch_instances(fields):
        scripted_model = torch.jit.script(model)
        return scripted_model


# alias for old name
export_torchscript_with_instances = scripting_with_instances


def dump_torchscript_IR(model, dir):
    """
    Dump IR of a TracedModule/ScriptModule/Function in various format (code, graph,
    inlined graph). Useful for debugging.

    Args:
        model (TracedModule/ScriptModule/ScriptFunction): traced or scripted module
        dir (str): output directory to dump files.
    """
    dir = os.path.expanduser(dir)
    PathManager.mkdirs(dir)

    def _get_script_mod(mod):
        if isinstance(mod, torch.jit.TracedModule):
            return mod._actual_script_module
        return mod

    # Dump pretty-printed code: https://pytorch.org/docs/stable/jit.html#inspecting-code
    with PathManager.open(os.path.join(dir, "model_ts_code.txt"), "w") as f:

        def get_code(mod):
            # Try a few ways to get code using private attributes.
            try:
                # This contains more information than just `mod.code`
                return _get_script_mod(mod)._c.code
            except AttributeError:
                pass
            try:
                return mod.code
            except AttributeError:
                return None

        def dump_code(prefix, mod):
            code = get_code(mod)
            name = prefix or "root model"
            if code is None:
                f.write(f"Could not find code for {name} (type={mod.original_name})\n")
                f.write("\n")
            else:
                f.write(f"\nCode for {name}, type={mod.original_name}:\n")
                f.write(code)
                f.write("\n")
                f.write("-" * 80)

            for name, m in mod.named_children():
                dump_code(prefix + "." + name, m)

        if isinstance(model, torch.jit.ScriptFunction):
            f.write(get_code(model))
        else:
            dump_code("", model)

    def _get_graph(model):
        try:
            # Recursively dump IR of all modules
            return _get_script_mod(model)._c.dump_to_str(True, False, False)
        except AttributeError:
            return model.graph.str()

    with PathManager.open(os.path.join(dir, "model_ts_IR.txt"), "w") as f:
        f.write(_get_graph(model))

    # Dump IR of the entire graph (all submodules inlined)
    with PathManager.open(os.path.join(dir, "model_ts_IR_inlined.txt"), "w") as f:
        f.write(str(model.inlined_graph))

    if not isinstance(model, torch.jit.ScriptFunction):
        # Dump the model structure in pytorch style
        with PathManager.open(os.path.join(dir, "model.txt"), "w") as f:
            f.write(str(model))
/roof_mask_Yv8-0.5.9-py3-none-any.whl/detectron2/export/torchscript.py
0.824285
0.529446
torchscript.py
pypi
import copy
import logging
import os

import torch
from caffe2.proto import caffe2_pb2
from torch import nn

from detectron2.config import CfgNode
from detectron2.utils.file_io import PathManager

from .caffe2_inference import ProtobufDetectionModel
from .caffe2_modeling import META_ARCH_CAFFE2_EXPORT_TYPE_MAP, convert_batched_inputs_to_c2_format
from .shared import get_pb_arg_vali, get_pb_arg_vals, save_graph

__all__ = [
    "add_export_config",
    "Caffe2Model",
    "Caffe2Tracer",
]


def add_export_config(cfg):
    return cfg


class Caffe2Tracer:
    """
    Make a detectron2 model traceable with Caffe2 operators.

    This class creates a traceable version of a detectron2 model which:

    1. Rewrites parts of the model using ops in Caffe2. Note that some ops do
       not have GPU implementation in Caffe2.
    2. Removes post-processing and only produces raw layer outputs.

    After making a traceable model, the class provides methods to export such a
    model to different deployment formats.
    Exported graphs produced by this class take two input tensors:

    1. (1, C, H, W) float "data" which is an image (usually in [0, 255]).
       (H, W) often has to be padded to multiple of 32 (depending on the model
       architecture).
    2. 1x3 float "im_info", each row of which is (height, width, 1.0).
       Height and width are true image shapes before padding.

    The class currently only supports models using builtin meta architectures.
    Batch inference is not supported, and contributions are welcome.
    """

    def __init__(self, cfg: CfgNode, model: nn.Module, inputs):
        """
        Args:
            cfg (CfgNode): a detectron2 config used to construct caffe2-compatible model.
            model (nn.Module): An original pytorch model. Must be among a few official models
                in detectron2 that can be converted to become caffe2-compatible automatically.
                Weights have to be already loaded to this model.
            inputs: sample inputs that the given model takes for inference.
                Will be used to trace the model. For most models, random inputs with
                no detected objects will not work as they lead to wrong traces.
        """
        assert isinstance(cfg, CfgNode), cfg
        assert isinstance(model, torch.nn.Module), type(model)

        # TODO make it support custom models, by passing in c2 model directly
        C2MetaArch = META_ARCH_CAFFE2_EXPORT_TYPE_MAP[cfg.MODEL.META_ARCHITECTURE]
        self.traceable_model = C2MetaArch(cfg, copy.deepcopy(model))
        self.inputs = inputs
        self.traceable_inputs = self.traceable_model.get_caffe2_inputs(inputs)

    def export_caffe2(self):
        """
        Export the model to Caffe2's protobuf format.
        The returned object can be saved with its :meth:`.save_protobuf()` method.
        The result can be loaded and executed using Caffe2 runtime.

        Returns:
            :class:`Caffe2Model`
        """
        from .caffe2_export import export_caffe2_detection_model

        predict_net, init_net = export_caffe2_detection_model(
            self.traceable_model, self.traceable_inputs
        )
        return Caffe2Model(predict_net, init_net)

    def export_onnx(self):
        """
        Export the model to ONNX format.
        Note that the exported model contains custom ops only available in caffe2, therefore it
        cannot be directly executed by other runtimes (such as onnxruntime or TensorRT).
        Post-processing or transformation passes may be applied on the model to accommodate
        different runtimes, but we currently do not provide support for them.

        Returns:
            onnx.ModelProto: an onnx model.
        """
        from .caffe2_export import export_onnx_model as export_onnx_model_impl

        return export_onnx_model_impl(self.traceable_model, (self.traceable_inputs,))

    def export_torchscript(self):
        """
        Export the model to a ``torch.jit.TracedModule`` by tracing.
        The returned object can be saved to a file by ``.save()``.

        Returns:
            torch.jit.TracedModule: a torch TracedModule
        """
        logger = logging.getLogger(__name__)
        logger.info("Tracing the model with torch.jit.trace ...")
        with torch.no_grad():
            return torch.jit.trace(self.traceable_model, (self.traceable_inputs,))


class Caffe2Model(nn.Module):
    """
    A wrapper around the traced model in Caffe2's protobuf format.

    The exported graph has different inputs/outputs from the original Pytorch
    model, as explained in :class:`Caffe2Tracer`. This class wraps around the
    exported graph to simulate the same interface as the original Pytorch model.
    It also provides functions to save/load models in Caffe2's format.

    Examples:
    ::
        c2_model = Caffe2Tracer(cfg, torch_model, inputs).export_caffe2()
        inputs = [{"image": img_tensor_CHW}]
        outputs = c2_model(inputs)
        orig_outputs = torch_model(inputs)
    """

    def __init__(self, predict_net, init_net):
        super().__init__()
        self.eval()  # always in eval mode
        self._predict_net = predict_net
        self._init_net = init_net
        self._predictor = None

    __init__.__HIDE_SPHINX_DOC__ = True

    @property
    def predict_net(self):
        """
        caffe2.core.Net: the underlying caffe2 predict net
        """
        return self._predict_net

    @property
    def init_net(self):
        """
        caffe2.core.Net: the underlying caffe2 init net
        """
        return self._init_net

    def save_protobuf(self, output_dir):
        """
        Save the model as caffe2's protobuf format.
        It saves the following files:

        * "model.pb": definition of the graph. Can be visualized with
          tools like `netron <https://github.com/lutzroeder/netron>`_.
        * "model_init.pb": model parameters
        * "model.pbtxt": human-readable definition of the graph. Not
          needed for deployment.

        Args:
            output_dir (str): the output directory to save protobuf files.
        """
        logger = logging.getLogger(__name__)
        logger.info("Saving model to {} ...".format(output_dir))
        if not PathManager.exists(output_dir):
            PathManager.mkdirs(output_dir)

        with PathManager.open(os.path.join(output_dir, "model.pb"), "wb") as f:
            f.write(self._predict_net.SerializeToString())
        with PathManager.open(os.path.join(output_dir, "model.pbtxt"), "w") as f:
            f.write(str(self._predict_net))
        with PathManager.open(os.path.join(output_dir, "model_init.pb"), "wb") as f:
            f.write(self._init_net.SerializeToString())

    def save_graph(self, output_file, inputs=None):
        """
        Save the graph as SVG format.

        Args:
            output_file (str): a SVG file
            inputs: optional inputs given to the model.
                If given, the inputs will be used to run the graph to record
                shape of every tensor. The shape information will be
                saved together with the graph.
        """
        from .caffe2_export import run_and_save_graph

        if inputs is None:
            save_graph(self._predict_net, output_file, op_only=False)
        else:
            size_divisibility = get_pb_arg_vali(self._predict_net, "size_divisibility", 0)
            device = get_pb_arg_vals(self._predict_net, "device", b"cpu").decode("ascii")
            inputs = convert_batched_inputs_to_c2_format(inputs, size_divisibility, device)
            inputs = [x.cpu().numpy() for x in inputs]
            run_and_save_graph(self._predict_net, self._init_net, inputs, output_file)

    @staticmethod
    def load_protobuf(dir):
        """
        Args:
            dir (str): a directory used to save Caffe2Model with
                :meth:`save_protobuf`.
                The files "model.pb" and "model_init.pb" are needed.

        Returns:
            Caffe2Model: the caffe2 model loaded from this directory.
        """
        predict_net = caffe2_pb2.NetDef()
        with PathManager.open(os.path.join(dir, "model.pb"), "rb") as f:
            predict_net.ParseFromString(f.read())

        init_net = caffe2_pb2.NetDef()
        with PathManager.open(os.path.join(dir, "model_init.pb"), "rb") as f:
            init_net.ParseFromString(f.read())

        return Caffe2Model(predict_net, init_net)

    def __call__(self, inputs):
        """
        An interface that wraps around a Caffe2 model and mimics detectron2's models'
        input/output format. See details about the format at :doc:`/tutorials/models`.
        This is used to compare the outputs of the caffe2 model with its original torch model.

        Due to the extra conversion between Pytorch/Caffe2, this method is not meant for
        benchmark. Because of the conversion, this method also has a dependency
        on detectron2 in order to convert to detectron2's output format.
        """
        if self._predictor is None:
            self._predictor = ProtobufDetectionModel(self._predict_net, self._init_net)
        return self._predictor(inputs)
/roof_mask_Yv8-0.5.9-py3-none-any.whl/detectron2/export/api.py
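The two-tensor input contract described in `Caffe2Tracer`'s docstring (a "data" tensor whose (H, W) is padded up to a multiple of 32, plus one "im_info" row per image holding the true pre-padding shape) can be sketched without detectron2. `pad_to_multiple` and `make_im_info` below are hypothetical helper names used only for illustration, not part of the exported API:

```python
def pad_to_multiple(height, width, divisor=32):
    """Round a (height, width) pair up to the next multiple of `divisor`,
    as required for the (1, C, H, W) "data" input of the traced graph."""
    round_up = lambda v: (v + divisor - 1) // divisor * divisor
    return round_up(height), round_up(width)


def make_im_info(image_sizes):
    """Build the Nx3 "im_info" rows: (true_height, true_width, 1.0),
    where height/width are the image shapes *before* padding."""
    return [[float(h), float(w), 1.0] for h, w in image_sizes]


padded = pad_to_multiple(481, 641)       # -> (512, 672)
im_info = make_im_info([(481, 641)])     # true shape travels separately
```

The padded shape feeds the "data" blob; the un-padded shape travels in "im_info" so the graph can clip outputs back to the real image.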
import contextlib
from unittest import mock

import torch

from detectron2.modeling import poolers
from detectron2.modeling.proposal_generator import rpn
from detectron2.modeling.roi_heads import keypoint_head, mask_head
from detectron2.modeling.roi_heads.fast_rcnn import FastRCNNOutputLayers

from .c10 import (
    Caffe2Compatible,
    Caffe2FastRCNNOutputsInference,
    Caffe2KeypointRCNNInference,
    Caffe2MaskRCNNInference,
    Caffe2ROIPooler,
    Caffe2RPN,
)


class GenericMixin(object):
    pass


class Caffe2CompatibleConverter(object):
    """
    A GenericUpdater which implements the `create_from` interface, by modifying
    the module object and assigning it another class `replaceCls`.
    """

    def __init__(self, replaceCls):
        self.replaceCls = replaceCls

    def create_from(self, module):
        # update module's class to the new class
        assert isinstance(module, torch.nn.Module)
        if issubclass(self.replaceCls, GenericMixin):
            # replaceCls should act as a mixin, create a new class on-the-fly
            new_class = type(
                "{}MixedWith{}".format(self.replaceCls.__name__, module.__class__.__name__),
                (self.replaceCls, module.__class__),
                {},  # {"new_method": lambda self: ...},
            )
            module.__class__ = new_class
        else:
            # replaceCls is a complete class, this allows arbitrary class swap
            module.__class__ = self.replaceCls

        # initialize Caffe2Compatible
        if isinstance(module, Caffe2Compatible):
            module.tensor_mode = False

        return module


def patch(model, target, updater, *args, **kwargs):
    """
    Recursively (post-order) update all modules with the target type and its
    subclasses, making an initialization/composition/inheritance/... via
    ``updater.create_from``.
    """
    for name, module in model.named_children():
        model._modules[name] = patch(module, target, updater, *args, **kwargs)
    if isinstance(model, target):
        return updater.create_from(model, *args, **kwargs)
    return model


def patch_generalized_rcnn(model):
    ccc = Caffe2CompatibleConverter
    model = patch(model, rpn.RPN, ccc(Caffe2RPN))
    model = patch(model, poolers.ROIPooler, ccc(Caffe2ROIPooler))
    return model


@contextlib.contextmanager
def mock_fastrcnn_outputs_inference(
    tensor_mode, check=True, box_predictor_type=FastRCNNOutputLayers
):
    with mock.patch.object(
        box_predictor_type,
        "inference",
        autospec=True,
        side_effect=Caffe2FastRCNNOutputsInference(tensor_mode),
    ) as mocked_func:
        yield
    if check:
        assert mocked_func.call_count > 0


@contextlib.contextmanager
def mock_mask_rcnn_inference(tensor_mode, patched_module, check=True):
    with mock.patch(
        "{}.mask_rcnn_inference".format(patched_module), side_effect=Caffe2MaskRCNNInference()
    ) as mocked_func:
        yield
    if check:
        assert mocked_func.call_count > 0


@contextlib.contextmanager
def mock_keypoint_rcnn_inference(tensor_mode, patched_module, use_heatmap_max_keypoint, check=True):
    with mock.patch(
        "{}.keypoint_rcnn_inference".format(patched_module),
        side_effect=Caffe2KeypointRCNNInference(use_heatmap_max_keypoint),
    ) as mocked_func:
        yield
    if check:
        assert mocked_func.call_count > 0


class ROIHeadsPatcher:
    def __init__(self, heads, use_heatmap_max_keypoint):
        self.heads = heads
        self.use_heatmap_max_keypoint = use_heatmap_max_keypoint

    @contextlib.contextmanager
    def mock_roi_heads(self, tensor_mode=True):
        """
        Patch several inference functions inside ROIHeads and its subclasses.

        Args:
            tensor_mode (bool): whether the inputs/outputs are caffe2's tensor
                format or not. Default to True.
        """
        # NOTE: this requires that `keypoint_rcnn_inference` and `mask_rcnn_inference`
        # are called inside the same file as BaseXxxHead due to using mock.patch.
        kpt_heads_mod = keypoint_head.BaseKeypointRCNNHead.__module__
        mask_head_mod = mask_head.BaseMaskRCNNHead.__module__

        mock_ctx_managers = [
            mock_fastrcnn_outputs_inference(
                tensor_mode=tensor_mode,
                check=True,
                box_predictor_type=type(self.heads.box_predictor),
            )
        ]
        if getattr(self.heads, "keypoint_on", False):
            mock_ctx_managers += [
                mock_keypoint_rcnn_inference(
                    tensor_mode, kpt_heads_mod, self.use_heatmap_max_keypoint
                )
            ]
        if getattr(self.heads, "mask_on", False):
            mock_ctx_managers += [mock_mask_rcnn_inference(tensor_mode, mask_head_mod)]

        with contextlib.ExitStack() as stack:  # python 3.3+
            for mgr in mock_ctx_managers:
                stack.enter_context(mgr)
            yield
/roof_mask_Yv8-0.5.9-py3-none-any.whl/detectron2/export/caffe2_patch.py
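The class-swap trick in `Caffe2CompatibleConverter.create_from` (building a class on the fly with the mixin first in the MRO, then reassigning `__class__` on a live object) works on plain Python objects too. A minimal sketch, with hypothetical `Greeter`/`LoudMixin` names standing in for the real modules and Caffe2 mixins:

```python
class LoudMixin:
    """Stands in for a Caffe2-compatible mixin: overrides greet() and
    delegates to the original class via super()."""
    def greet(self):
        return super().greet().upper()


class Greeter:
    def greet(self):
        return "hello"


def swap_in_mixin(obj, mixin):
    # Same trick as Caffe2CompatibleConverter: create a new class on-the-fly
    # with the mixin first in the MRO, then reassign obj.__class__ in place.
    obj.__class__ = type(
        "{}MixedWith{}".format(mixin.__name__, obj.__class__.__name__),
        (mixin, obj.__class__),
        {},
    )
    return obj


g = swap_in_mixin(Greeter(), LoudMixin)
```

Because the object itself is mutated (not copied), references held elsewhere, e.g. inside a parent `nn.Module`'s `_modules` dict, see the new behavior too, which is why `patch` can rewrite a model tree in place.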
import copy
import itertools
import numpy as np
from typing import Any, Iterator, List, Union
import pycocotools.mask as mask_util
import torch
from torch import device

from detectron2.layers.roi_align import ROIAlign
from detectron2.utils.memory import retry_if_cuda_oom

from .boxes import Boxes


def polygon_area(x, y):
    # Using the shoelace formula
    # https://stackoverflow.com/questions/24467972/calculate-area-of-polygon-given-x-y-coordinates
    return 0.5 * np.abs(np.dot(x, np.roll(y, 1)) - np.dot(y, np.roll(x, 1)))


def polygons_to_bitmask(polygons: List[np.ndarray], height: int, width: int) -> np.ndarray:
    """
    Args:
        polygons (list[ndarray]): each array has shape (Nx2,)
        height, width (int)

    Returns:
        ndarray: a bool mask of shape (height, width)
    """
    if len(polygons) == 0:
        # COCOAPI does not support empty polygons
        return np.zeros((height, width)).astype(bool)
    rles = mask_util.frPyObjects(polygons, height, width)
    rle = mask_util.merge(rles)
    return mask_util.decode(rle).astype(bool)


def rasterize_polygons_within_box(
    polygons: List[np.ndarray], box: np.ndarray, mask_size: int
) -> torch.Tensor:
    """
    Rasterize the polygons into a mask image and crop the mask content
    in the given box. The cropped mask is resized to (mask_size, mask_size).

    This function is used when generating training targets for the mask head in Mask R-CNN.
    Given original ground-truth masks for an image, new ground-truth mask
    training targets in the size of `mask_size x mask_size`
    must be provided for each predicted box. This function will be called to
    produce such targets.

    Args:
        polygons (list[ndarray[float]]): a list of polygons, which represents an instance.
        box: 4-element numpy array
        mask_size (int):

    Returns:
        Tensor: BoolTensor of shape (mask_size, mask_size)
    """
    # 1. Shift the polygons w.r.t the boxes
    w, h = box[2] - box[0], box[3] - box[1]

    polygons = copy.deepcopy(polygons)
    for p in polygons:
        p[0::2] = p[0::2] - box[0]
        p[1::2] = p[1::2] - box[1]

    # 2. Rescale the polygons to the new box size
    # max() to avoid division by small number
    ratio_h = mask_size / max(h, 0.1)
    ratio_w = mask_size / max(w, 0.1)

    if ratio_h == ratio_w:
        for p in polygons:
            p *= ratio_h
    else:
        for p in polygons:
            p[0::2] *= ratio_w
            p[1::2] *= ratio_h

    # 3. Rasterize the polygons with coco api
    mask = polygons_to_bitmask(polygons, mask_size, mask_size)
    mask = torch.from_numpy(mask)
    return mask


class BitMasks:
    """
    This class stores the segmentation masks for all objects in one image,
    in the form of bitmaps.

    Attributes:
        tensor: bool Tensor of N,H,W, representing N instances in the image.
    """

    def __init__(self, tensor: Union[torch.Tensor, np.ndarray]):
        """
        Args:
            tensor: bool Tensor of N,H,W, representing N instances in the image.
        """
        if isinstance(tensor, torch.Tensor):
            tensor = tensor.to(torch.bool)
        else:
            tensor = torch.as_tensor(tensor, dtype=torch.bool, device=torch.device("cpu"))
        assert tensor.dim() == 3, tensor.size()
        self.image_size = tensor.shape[1:]
        self.tensor = tensor

    @torch.jit.unused
    def to(self, *args: Any, **kwargs: Any) -> "BitMasks":
        return BitMasks(self.tensor.to(*args, **kwargs))

    @property
    def device(self) -> torch.device:
        return self.tensor.device

    @torch.jit.unused
    def __getitem__(self, item: Union[int, slice, torch.BoolTensor]) -> "BitMasks":
        """
        Returns:
            BitMasks: Create a new :class:`BitMasks` by indexing.

        The following usage are allowed:

        1. `new_masks = masks[3]`: return a `BitMasks` which contains only one mask.
        2. `new_masks = masks[2:10]`: return a slice of masks.
        3. `new_masks = masks[vector]`, where vector is a torch.BoolTensor
           with `length = len(masks)`. Nonzero elements in the vector will be selected.

        Note that the returned object might share storage with this object,
        subject to Pytorch's indexing semantics.
        """
        if isinstance(item, int):
            return BitMasks(self.tensor[item].unsqueeze(0))
        m = self.tensor[item]
        assert m.dim() == 3, "Indexing on BitMasks with {} returns a tensor with shape {}!".format(
            item, m.shape
        )
        return BitMasks(m)

    @torch.jit.unused
    def __iter__(self) -> torch.Tensor:
        yield from self.tensor

    @torch.jit.unused
    def __repr__(self) -> str:
        s = self.__class__.__name__ + "("
        s += "num_instances={})".format(len(self.tensor))
        return s

    def __len__(self) -> int:
        return self.tensor.shape[0]

    def nonempty(self) -> torch.Tensor:
        """
        Find masks that are non-empty.

        Returns:
            Tensor: a BoolTensor which represents
                whether each mask is empty (False) or non-empty (True).
        """
        return self.tensor.flatten(1).any(dim=1)

    @staticmethod
    def from_polygon_masks(
        polygon_masks: Union["PolygonMasks", List[List[np.ndarray]]], height: int, width: int
    ) -> "BitMasks":
        """
        Args:
            polygon_masks (list[list[ndarray]] or PolygonMasks)
            height, width (int)
        """
        if isinstance(polygon_masks, PolygonMasks):
            polygon_masks = polygon_masks.polygons
        masks = [polygons_to_bitmask(p, height, width) for p in polygon_masks]
        if len(masks):
            return BitMasks(torch.stack([torch.from_numpy(x) for x in masks]))
        else:
            return BitMasks(torch.empty(0, height, width, dtype=torch.bool))

    @staticmethod
    def from_roi_masks(roi_masks: "ROIMasks", height: int, width: int) -> "BitMasks":
        """
        Args:
            roi_masks:
            height, width (int):
        """
        return roi_masks.to_bitmasks(height, width)

    def crop_and_resize(self, boxes: torch.Tensor, mask_size: int) -> torch.Tensor:
        """
        Crop each bitmask by the given box, and resize results to (mask_size, mask_size).
        This can be used to prepare training targets for Mask R-CNN.
        It has less reconstruction error compared to rasterization with polygons.
        However we observe no difference in accuracy,
        but BitMasks requires more memory to store all the masks.

        Args:
            boxes (Tensor): Nx4 tensor storing the boxes for each mask
            mask_size (int): the size of the rasterized mask.

        Returns:
            Tensor:
                A bool tensor of shape (N, mask_size, mask_size), where
                N is the number of predicted boxes for this image.
        """
        assert len(boxes) == len(self), "{} != {}".format(len(boxes), len(self))
        device = self.tensor.device

        batch_inds = torch.arange(len(boxes), device=device).to(dtype=boxes.dtype)[:, None]
        rois = torch.cat([batch_inds, boxes], dim=1)  # Nx5

        bit_masks = self.tensor.to(dtype=torch.float32)
        rois = rois.to(device=device)
        output = (
            ROIAlign((mask_size, mask_size), 1.0, 0, aligned=True)
            .forward(bit_masks[:, None, :, :], rois)
            .squeeze(1)
        )
        output = output >= 0.5
        return output

    def get_bounding_boxes(self) -> Boxes:
        """
        Returns:
            Boxes: tight bounding boxes around bitmasks.
            If a mask is empty, its bounding box will be all zero.
        """
        boxes = torch.zeros(self.tensor.shape[0], 4, dtype=torch.float32)
        x_any = torch.any(self.tensor, dim=1)
        y_any = torch.any(self.tensor, dim=2)
        for idx in range(self.tensor.shape[0]):
            x = torch.where(x_any[idx, :])[0]
            y = torch.where(y_any[idx, :])[0]
            if len(x) > 0 and len(y) > 0:
                boxes[idx, :] = torch.as_tensor(
                    [x[0], y[0], x[-1] + 1, y[-1] + 1], dtype=torch.float32
                )
        return Boxes(boxes)

    @staticmethod
    def cat(bitmasks_list: List["BitMasks"]) -> "BitMasks":
        """
        Concatenates a list of BitMasks into a single BitMasks

        Arguments:
            bitmasks_list (list[BitMasks])

        Returns:
            BitMasks: the concatenated BitMasks
        """
        assert isinstance(bitmasks_list, (list, tuple))
        assert len(bitmasks_list) > 0
        assert all(isinstance(bitmask, BitMasks) for bitmask in bitmasks_list)

        cat_bitmasks = type(bitmasks_list[0])(torch.cat([bm.tensor for bm in bitmasks_list], dim=0))
        return cat_bitmasks


class PolygonMasks:
    """
    This class stores the segmentation masks for all objects in one image, in the form of polygons.

    Attributes:
        polygons: list[list[ndarray]]. Each ndarray is a float64 vector representing a polygon.
    """

    def __init__(self, polygons: List[List[Union[torch.Tensor, np.ndarray]]]):
        """
        Arguments:
            polygons (list[list[np.ndarray]]): The first
                level of the list corresponds to individual instances,
                the second level to all the polygons that compose the
                instance, and the third level to the polygon coordinates.
                The third level array should have the format of
                [x0, y0, x1, y1, ..., xn, yn] (n >= 3).
        """
        if not isinstance(polygons, list):
            raise ValueError(
                "Cannot create PolygonMasks: Expect a list of list of polygons per image. "
                "Got '{}' instead.".format(type(polygons))
            )

        def _make_array(t: Union[torch.Tensor, np.ndarray]) -> np.ndarray:
            # Use float64 for higher precision, because why not?
            # Always put polygons on CPU (self.to is a no-op) since they
            # are supposed to be small tensors.
            # May need to change this assumption if GPU placement becomes useful
            if isinstance(t, torch.Tensor):
                t = t.cpu().numpy()
            return np.asarray(t).astype("float64")

        def process_polygons(
            polygons_per_instance: List[Union[torch.Tensor, np.ndarray]]
        ) -> List[np.ndarray]:
            if not isinstance(polygons_per_instance, list):
                raise ValueError(
                    "Cannot create polygons: Expect a list of polygons per instance. "
                    "Got '{}' instead.".format(type(polygons_per_instance))
                )
            # transform each polygon to a numpy array
            polygons_per_instance = [_make_array(p) for p in polygons_per_instance]
            for polygon in polygons_per_instance:
                if len(polygon) % 2 != 0 or len(polygon) < 6:
                    raise ValueError(f"Cannot create a polygon from {len(polygon)} coordinates.")
            return polygons_per_instance

        self.polygons: List[List[np.ndarray]] = [
            process_polygons(polygons_per_instance) for polygons_per_instance in polygons
        ]

    def to(self, *args: Any, **kwargs: Any) -> "PolygonMasks":
        return self

    @property
    def device(self) -> torch.device:
        return torch.device("cpu")

    def get_bounding_boxes(self) -> Boxes:
        """
        Returns:
            Boxes: tight bounding boxes around polygon masks.
        """
        boxes = torch.zeros(len(self.polygons), 4, dtype=torch.float32)
        for idx, polygons_per_instance in enumerate(self.polygons):
            minxy = torch.as_tensor([float("inf"), float("inf")], dtype=torch.float32)
            maxxy = torch.zeros(2, dtype=torch.float32)
            for polygon in polygons_per_instance:
                coords = torch.from_numpy(polygon).view(-1, 2).to(dtype=torch.float32)
                minxy = torch.min(minxy, torch.min(coords, dim=0).values)
                maxxy = torch.max(maxxy, torch.max(coords, dim=0).values)
            boxes[idx, :2] = minxy
            boxes[idx, 2:] = maxxy
        return Boxes(boxes)

    def nonempty(self) -> torch.Tensor:
        """
        Find masks that are non-empty.

        Returns:
            Tensor:
                a BoolTensor which represents whether each mask is empty (False) or not (True).
        """
        keep = [1 if len(polygon) > 0 else 0 for polygon in self.polygons]
        return torch.from_numpy(np.asarray(keep, dtype=bool))

    def __getitem__(self, item: Union[int, slice, List[int], torch.BoolTensor]) -> "PolygonMasks":
        """
        Support indexing over the instances and return a `PolygonMasks` object.
        `item` can be:

        1. An integer. It will return an object with only one instance.
        2. A slice. It will return an object with the selected instances.
        3. A list[int]. It will return an object with the selected instances,
           corresponding to the indices in the list.
        4. A vector mask of type BoolTensor, whose length is num_instances.
           It will return an object with the instances whose mask is nonzero.
        """
        if isinstance(item, int):
            selected_polygons = [self.polygons[item]]
        elif isinstance(item, slice):
            selected_polygons = self.polygons[item]
        elif isinstance(item, list):
            selected_polygons = [self.polygons[i] for i in item]
        elif isinstance(item, torch.Tensor):
            # Polygons is a list, so we have to move the indices back to CPU.
            if item.dtype == torch.bool:
                assert item.dim() == 1, item.shape
                item = item.nonzero().squeeze(1).cpu().numpy().tolist()
            elif item.dtype in [torch.int32, torch.int64]:
                item = item.cpu().numpy().tolist()
            else:
                raise ValueError("Unsupported tensor dtype={} for indexing!".format(item.dtype))
            selected_polygons = [self.polygons[i] for i in item]
        return PolygonMasks(selected_polygons)

    def __iter__(self) -> Iterator[List[np.ndarray]]:
        """
        Yields:
            list[ndarray]: the polygons for one instance.
            Each Tensor is a float64 vector representing a polygon.
        """
        return iter(self.polygons)

    def __repr__(self) -> str:
        s = self.__class__.__name__ + "("
        s += "num_instances={})".format(len(self.polygons))
        return s

    def __len__(self) -> int:
        return len(self.polygons)

    def crop_and_resize(self, boxes: torch.Tensor, mask_size: int) -> torch.Tensor:
        """
        Crop each mask by the given box, and resize results to (mask_size, mask_size).
        This can be used to prepare training targets for Mask R-CNN.

        Args:
            boxes (Tensor): Nx4 tensor storing the boxes for each mask
            mask_size (int): the size of the rasterized mask.

        Returns:
            Tensor: A bool tensor of shape (N, mask_size, mask_size), where
            N is the number of predicted boxes for this image.
        """
        assert len(boxes) == len(self), "{} != {}".format(len(boxes), len(self))

        device = boxes.device
        # Put boxes on the CPU, as the polygon representation is not efficient GPU-wise
        # (several small tensors for representing a single instance mask)
        boxes = boxes.to(torch.device("cpu"))

        results = [
            rasterize_polygons_within_box(poly, box.numpy(), mask_size)
            for poly, box in zip(self.polygons, boxes)
        ]
        """
        poly: list[list[float]], the polygons for one instance
        box: a tensor of shape (4,)
        """
        if len(results) == 0:
            return torch.empty(0, mask_size, mask_size, dtype=torch.bool, device=device)
        return torch.stack(results, dim=0).to(device=device)

    def area(self):
        """
        Computes area of the mask.
        Only works with Polygons, using the shoelace formula:
        https://stackoverflow.com/questions/24467972/calculate-area-of-polygon-given-x-y-coordinates

        Returns:
            Tensor: a vector, area for each instance
        """
        area = []
        for polygons_per_instance in self.polygons:
            area_per_instance = 0
            for p in polygons_per_instance:
                area_per_instance += polygon_area(p[0::2], p[1::2])
            area.append(area_per_instance)

        return torch.tensor(area)

    @staticmethod
    def cat(polymasks_list: List["PolygonMasks"]) -> "PolygonMasks":
        """
        Concatenates a list of PolygonMasks into a single PolygonMasks

        Arguments:
            polymasks_list (list[PolygonMasks])

        Returns:
            PolygonMasks: the concatenated PolygonMasks
        """
        assert isinstance(polymasks_list, (list, tuple))
        assert len(polymasks_list) > 0
        assert all(isinstance(polymask, PolygonMasks) for polymask in polymasks_list)

        cat_polymasks = type(polymasks_list[0])(
            list(itertools.chain.from_iterable(pm.polygons for pm in polymasks_list))
        )
        return cat_polymasks


class ROIMasks:
    """
    Represent masks by N smaller masks defined in some ROIs. Once ROI boxes are given,
    full-image bitmask can be obtained by "pasting" the mask on the region defined
    by the corresponding ROI box.
    """

    def __init__(self, tensor: torch.Tensor):
        """
        Args:
            tensor: (N, M, M) mask tensor that defines the mask within each ROI.
        """
        if tensor.dim() != 3:
            raise ValueError("ROIMasks must take a masks of 3 dimension.")
        self.tensor = tensor

    def to(self, device: torch.device) -> "ROIMasks":
        return ROIMasks(self.tensor.to(device))

    @property
    def device(self) -> device:
        return self.tensor.device

    def __len__(self):
        return self.tensor.shape[0]

    def __getitem__(self, item) -> "ROIMasks":
        """
        Returns:
            ROIMasks: Create a new :class:`ROIMasks` by indexing.

        The following usage are allowed:

        1. `new_masks = masks[2:10]`: return a slice of masks.
        2. `new_masks = masks[vector]`, where vector is a torch.BoolTensor
           with `length = len(masks)`. Nonzero elements in the vector will be selected.

        Note that the returned object might share storage with this object,
        subject to Pytorch's indexing semantics.
        """
        t = self.tensor[item]
        if t.dim() != 3:
            raise ValueError(
                f"Indexing on ROIMasks with {item} returns a tensor with shape {t.shape}!"
            )
        return ROIMasks(t)

    @torch.jit.unused
    def __repr__(self) -> str:
        s = self.__class__.__name__ + "("
        s += "num_instances={})".format(len(self.tensor))
        return s

    @torch.jit.unused
    def to_bitmasks(self, boxes: torch.Tensor, height, width, threshold=0.5):
        """
        Args: see documentation of :func:`paste_masks_in_image`.
        """
        from detectron2.layers.mask_ops import paste_masks_in_image, _paste_masks_tensor_shape

        if torch.jit.is_tracing():
            if isinstance(height, torch.Tensor):
                paste_func = _paste_masks_tensor_shape
            else:
                paste_func = paste_masks_in_image
        else:
            paste_func = retry_if_cuda_oom(paste_masks_in_image)
        bitmasks = paste_func(self.tensor, boxes.tensor, (height, width), threshold=threshold)
        return BitMasks(bitmasks)
/roof_mask_Yv8-0.5.9-py3-none-any.whl/detectron2/structures/masks.py
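The shoelace formula used by `polygon_area` above can be written without numpy; the sketch below mirrors the same cyclic sum `x[i]*y[i-1] - y[i]*x[i-1]` over plain lists, which is exactly what the `np.dot`/`np.roll` expression computes:

```python
def shoelace_area(xs, ys):
    """Area of a simple polygon given its vertex coordinates in order.

    Pure-Python equivalent of the numpy one-liner in `polygon_area`:
    0.5 * |dot(x, roll(y, 1)) - dot(y, roll(x, 1))|.
    """
    n = len(xs)
    # xs[i - 1] / ys[i - 1] wraps around at i == 0, matching np.roll(..., 1)
    signed_twice_area = sum(xs[i] * ys[i - 1] - ys[i] * xs[i - 1] for i in range(n))
    return abs(signed_twice_area) / 2.0


# unit square (area 1) and a 3-4-5 right triangle (area 6)
square = shoelace_area([0, 1, 1, 0], [0, 0, 1, 1])
triangle = shoelace_area([0, 4, 0], [0, 0, 3])
```

The absolute value makes the result independent of winding order, so clockwise and counter-clockwise vertex lists give the same area.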
import itertools
from typing import Any, Dict, List, Tuple, Union
import torch


class Instances:
    """
    This class represents a list of instances in an image.
    It stores the attributes of instances (e.g., boxes, masks, labels, scores) as "fields".
    All fields must have the same ``__len__`` which is the number of instances.

    All other (non-field) attributes of this class are considered private:
    they must start with '_' and are not modifiable by a user.

    Some basic usage:

    1. Set/get/check a field:

       .. code-block:: python

          instances.gt_boxes = Boxes(...)
          print(instances.pred_masks)  # a tensor of shape (N, H, W)
          print('gt_masks' in instances)

    2. ``len(instances)`` returns the number of instances
    3. Indexing: ``instances[indices]`` will apply the indexing on all the fields
       and returns a new :class:`Instances`.
       Typically, ``indices`` is an integer vector of indices,
       or a binary mask of length ``num_instances``

       .. code-block:: python

          category_3_detections = instances[instances.pred_classes == 3]
          confident_detections = instances[instances.scores > 0.9]
    """

    def __init__(self, image_size: Tuple[int, int], **kwargs: Any):
        """
        Args:
            image_size (height, width): the spatial size of the image.
            kwargs: fields to add to this `Instances`.
        """
        self._image_size = image_size
        self._fields: Dict[str, Any] = {}
        for k, v in kwargs.items():
            self.set(k, v)

    @property
    def image_size(self) -> Tuple[int, int]:
        """
        Returns:
            tuple: height, width
        """
        return self._image_size

    def __setattr__(self, name: str, val: Any) -> None:
        if name.startswith("_"):
            super().__setattr__(name, val)
        else:
            self.set(name, val)

    def __getattr__(self, name: str) -> Any:
        if name == "_fields" or name not in self._fields:
            raise AttributeError("Cannot find field '{}' in the given Instances!".format(name))
        return self._fields[name]

    def set(self, name: str, value: Any) -> None:
        """
        Set the field named `name` to `value`.
        The length of `value` must be the number of instances,
        and must agree with other existing fields in this object.
        """
        data_len = len(value)
        if len(self._fields):
            assert (
                len(self) == data_len
            ), "Adding a field of length {} to a Instances of length {}".format(data_len, len(self))
        self._fields[name] = value

    def has(self, name: str) -> bool:
        """
        Returns:
            bool: whether the field called `name` exists.
        """
        return name in self._fields

    def remove(self, name: str) -> None:
        """
        Remove the field called `name`.
        """
        del self._fields[name]

    def get(self, name: str) -> Any:
        """
        Returns the field called `name`.
        """
        return self._fields[name]

    def get_fields(self) -> Dict[str, Any]:
        """
        Returns:
            dict: a dict which maps names (str) to data of the fields

        Modifying the returned dict will modify this instance.
        """
        return self._fields

    # Tensor-like methods
    def to(self, *args: Any, **kwargs: Any) -> "Instances":
        """
        Returns:
            Instances: all fields are called with a `to(device)`, if the field has this method.
        """
        ret = Instances(self._image_size)
        for k, v in self._fields.items():
            if hasattr(v, "to"):
                v = v.to(*args, **kwargs)
            ret.set(k, v)
        return ret

    def __getitem__(self, item: Union[int, slice, torch.BoolTensor]) -> "Instances":
        """
        Args:
            item: an index-like object and will be used to index all the fields.

        Returns:
            If `item` is a string, return the data in the corresponding field.
            Otherwise, returns an `Instances` where all fields are indexed by `item`.
        """
        if type(item) == int:
            if item >= len(self) or item < -len(self):
                raise IndexError("Instances index out of range!")
            else:
                item = slice(item, None, len(self))

        ret = Instances(self._image_size)
        for k, v in self._fields.items():
            ret.set(k, v[item])
        return ret

    def __len__(self) -> int:
        for v in self._fields.values():
            # use __len__ because len() has to be int and is not friendly to tracing
            return v.__len__()
        raise NotImplementedError("Empty Instances does not support __len__!")

    def __iter__(self):
        raise NotImplementedError("`Instances` object is not iterable!")

    @staticmethod
    def cat(instance_lists: List["Instances"]) -> "Instances":
        """
        Args:
            instance_lists (list[Instances])

        Returns:
            Instances
        """
        assert all(isinstance(i, Instances) for i in instance_lists)
        assert len(instance_lists) > 0
        if len(instance_lists) == 1:
            return instance_lists[0]

        image_size = instance_lists[0].image_size
        if not isinstance(image_size, torch.Tensor):  # could be a tensor in tracing
            for i in instance_lists[1:]:
                assert i.image_size == image_size
        ret = Instances(image_size)
        for k in instance_lists[0]._fields.keys():
            values = [i.get(k) for i in instance_lists]
            v0 = values[0]
            if isinstance(v0, torch.Tensor):
                values = torch.cat(values, dim=0)
            elif isinstance(v0, list):
                values = list(itertools.chain(*values))
            elif hasattr(type(v0), "cat"):
                values = type(v0).cat(values)
            else:
                raise ValueError("Unsupported type {} for concatenation".format(type(v0)))
            ret.set(k, values)
        return ret

    def __str__(self) -> str:
        s = self.__class__.__name__ + "("
        s += "num_instances={}, ".format(len(self))
        s += "image_height={}, ".format(self._image_size[0])
        s += "image_width={}, ".format(self._image_size[1])
        s += "fields=[{}])".format(", ".join((f"{k}: {v}" for k, v in self._fields.items())))
        return s

    __repr__ = __str__
/roof_mask_Yv8-0.5.9-py3-none-any.whl/detectron2/structures/instances.py
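The core idea of `Instances` — named per-instance fields that must share one length and are filtered together — can be sketched without torch. `FieldStore` below is a hypothetical, stripped-down illustration of that pattern, not the real class:

```python
class FieldStore:
    """Sketch of the Instances pattern: every field is a sequence of the
    same length, and selecting indices filters all fields in lockstep."""

    def __init__(self):
        self._fields = {}

    def set(self, name, value):
        # mirror Instances.set: all fields must agree on length
        if self._fields:
            assert len(value) == len(self), "field length mismatch"
        self._fields[name] = value

    def get(self, name):
        return self._fields[name]

    def __len__(self):
        for v in self._fields.values():
            return len(v)
        raise NotImplementedError("empty store has no length")

    def select(self, indices):
        # mirror Instances.__getitem__: index every field with the same indices
        out = FieldStore()
        for k, v in self._fields.items():
            out.set(k, [v[i] for i in indices])
        return out


store = FieldStore()
scores = [0.9, 0.3, 0.7]
store.set("scores", scores)
store.set("labels", ["cat", "dog", "cat"])
# analogous to instances[instances.scores > 0.5]
confident = store.select([i for i, s in enumerate(scores) if s > 0.5])
```

The real class adds attribute-style access (`instances.scores`), tensor-aware indexing, and `cat`, but the length invariant enforced in `set` is the same.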
from __future__ import division
from typing import Any, List, Tuple
import torch
from torch import device
from torch.nn import functional as F

from detectron2.layers.wrappers import move_device_like, shapes_to_tensor


class ImageList(object):
    """
    Structure that holds a list of images (of possibly
    varying sizes) as a single tensor.
    This works by padding the images to the same size.
    The original sizes of each image are stored in `image_sizes`.

    Attributes:
        image_sizes (list[tuple[int, int]]): each tuple is (h, w).
            During tracing, it becomes list[Tensor] instead.
    """

    def __init__(self, tensor: torch.Tensor, image_sizes: List[Tuple[int, int]]):
        """
        Arguments:
            tensor (Tensor): of shape (N, H, W) or (N, C_1, ..., C_K, H, W) where K >= 1
            image_sizes (list[tuple[int, int]]): Each tuple is (h, w). It can
                be smaller than (H, W) due to padding.
        """
        self.tensor = tensor
        self.image_sizes = image_sizes

    def __len__(self) -> int:
        return len(self.image_sizes)

    def __getitem__(self, idx) -> torch.Tensor:
        """
        Access the individual image in its original size.

        Args:
            idx: int or slice

        Returns:
            Tensor: an image of shape (H, W) or (C_1, ..., C_K, H, W) where K >= 1
        """
        size = self.image_sizes[idx]
        return self.tensor[idx, ..., : size[0], : size[1]]

    @torch.jit.unused
    def to(self, *args: Any, **kwargs: Any) -> "ImageList":
        cast_tensor = self.tensor.to(*args, **kwargs)
        return ImageList(cast_tensor, self.image_sizes)

    @property
    def device(self) -> device:
        return self.tensor.device

    @staticmethod
    def from_tensors(
        tensors: List[torch.Tensor], size_divisibility: int = 0, pad_value: float = 0.0
    ) -> "ImageList":
        """
        Args:
            tensors: a tuple or list of `torch.Tensor`, each of shape (Hi, Wi) or
                (C_1, ..., C_K, Hi, Wi) where K >= 1. The Tensors will be padded
                to the same shape with `pad_value`.
            size_divisibility (int): If `size_divisibility > 0`, add padding to ensure
                the common height and width is divisible by `size_divisibility`.
                This depends on the model and many models need a divisibility of 32.
            pad_value (float): value to pad

        Returns:
            an `ImageList`.
        """
        assert len(tensors) > 0
        assert isinstance(tensors, (tuple, list))
        for t in tensors:
            assert isinstance(t, torch.Tensor), type(t)
            assert t.shape[:-2] == tensors[0].shape[:-2], t.shape

        image_sizes = [(im.shape[-2], im.shape[-1]) for im in tensors]
        image_sizes_tensor = [shapes_to_tensor(x) for x in image_sizes]
        max_size = torch.stack(image_sizes_tensor).max(0).values

        if size_divisibility > 1:
            stride = size_divisibility
            # the last two dims are H,W, both subject to divisibility requirement
            max_size = (max_size + (stride - 1)).div(stride, rounding_mode="floor") * stride

        # handle weirdness of scripting and tracing ...
        if torch.jit.is_scripting():
            max_size: List[int] = max_size.to(dtype=torch.long).tolist()
        else:
            if torch.jit.is_tracing():
                image_sizes = image_sizes_tensor

        if len(tensors) == 1:
            # This seems slightly (2%) faster.
            # TODO: check whether it's faster for multiple images as well
            image_size = image_sizes[0]
            padding_size = [0, max_size[-1] - image_size[1], 0, max_size[-2] - image_size[0]]
            batched_imgs = F.pad(tensors[0], padding_size, value=pad_value).unsqueeze_(0)
        else:
            # max_size can be a tensor in tracing mode, therefore convert to list
            batch_shape = [len(tensors)] + list(tensors[0].shape[:-2]) + list(max_size)
            device = (
                None if torch.jit.is_scripting() else ("cpu" if torch.jit.is_tracing() else None)
            )
            batched_imgs = tensors[0].new_full(batch_shape, pad_value, device=device)
            batched_imgs = move_device_like(batched_imgs, tensors[0])
            for img, pad_img in zip(tensors, batched_imgs):
                pad_img[..., : img.shape[-2], : img.shape[-1]].copy_(img)

        return ImageList(batched_imgs.contiguous(), image_sizes)
/roof_mask_Yv8-0.5.9-py3-none-any.whl/detectron2/structures/image_list.py
0.924432
0.686091
image_list.py
pypi
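The `from_tensors` code above rounds the common (H, W) up to the next multiple of `size_divisibility` before padding. A minimal pure-Python sketch of just that shape computation (no torch; the helper name `padded_batch_shape` is invented for illustration):

```python
from typing import List, Tuple


def padded_batch_shape(
    image_sizes: List[Tuple[int, int]], size_divisibility: int = 0
) -> Tuple[int, int]:
    """Common (H, W) all images are padded to, mirroring ImageList.from_tensors."""
    max_h = max(h for h, w in image_sizes)
    max_w = max(w for h, w in image_sizes)
    if size_divisibility > 1:
        stride = size_divisibility
        # round up to the nearest multiple of stride: (x + stride - 1) // stride * stride
        max_h = (max_h + stride - 1) // stride * stride
        max_w = (max_w + stride - 1) // stride * stride
    return max_h, max_w


print(padded_batch_shape([(480, 640), (500, 333)], size_divisibility=32))  # (512, 640)
```

With a divisibility of 32 (typical for FPN backbones), a batch of 480x640 and 500x333 images is padded to a shared 512x640 canvas.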
import numpy as np
from typing import Any, List, Tuple, Union
import torch
from torch.nn import functional as F


class Keypoints:
    """
    Stores keypoint **annotation** data. GT Instances have a `gt_keypoints` property
    containing the x,y location and visibility flag of each keypoint. This tensor has shape
    (N, K, 3) where N is the number of instances and K is the number of keypoints per instance.

    The visibility flag follows the COCO format and must be one of three integers:

    * v=0: not labeled (in which case x=y=0)
    * v=1: labeled but not visible
    * v=2: labeled and visible
    """

    def __init__(self, keypoints: Union[torch.Tensor, np.ndarray, List[List[float]]]):
        """
        Arguments:
            keypoints: A Tensor, numpy array, or list of the x, y, and visibility of each keypoint.
                The shape should be (N, K, 3) where N is the number of
                instances, and K is the number of keypoints per instance.
        """
        device = keypoints.device if isinstance(keypoints, torch.Tensor) else torch.device("cpu")
        keypoints = torch.as_tensor(keypoints, dtype=torch.float32, device=device)
        assert keypoints.dim() == 3 and keypoints.shape[2] == 3, keypoints.shape
        self.tensor = keypoints

    def __len__(self) -> int:
        return self.tensor.size(0)

    def to(self, *args: Any, **kwargs: Any) -> "Keypoints":
        return type(self)(self.tensor.to(*args, **kwargs))

    @property
    def device(self) -> torch.device:
        return self.tensor.device

    def to_heatmap(self, boxes: torch.Tensor, heatmap_size: int) -> torch.Tensor:
        """
        Convert keypoint annotations to a heatmap of one-hot labels for training,
        as described in :paper:`Mask R-CNN`.

        Arguments:
            boxes: Nx4 tensor, the boxes to draw the keypoints to

        Returns:
            heatmaps:
                A tensor of shape (N, K), each element is integer spatial label
                in the range [0, heatmap_size**2 - 1] for each keypoint in the input.
            valid:
                A tensor of shape (N, K) containing whether each keypoint is in the roi or not.
        """
        return _keypoints_to_heatmap(self.tensor, boxes, heatmap_size)

    def __getitem__(self, item: Union[int, slice, torch.BoolTensor]) -> "Keypoints":
        """
        Create a new `Keypoints` by indexing on this `Keypoints`.

        The following usage are allowed:

        1. `new_kpts = kpts[3]`: return a `Keypoints` which contains only one instance.
        2. `new_kpts = kpts[2:10]`: return a slice of key points.
        3. `new_kpts = kpts[vector]`, where vector is a torch.ByteTensor
           with `length = len(kpts)`. Nonzero elements in the vector will be selected.

        Note that the returned Keypoints might share storage with this Keypoints,
        subject to Pytorch's indexing semantics.
        """
        if isinstance(item, int):
            return Keypoints([self.tensor[item]])
        return Keypoints(self.tensor[item])

    def __repr__(self) -> str:
        s = self.__class__.__name__ + "("
        s += "num_instances={})".format(len(self.tensor))
        return s

    @staticmethod
    def cat(keypoints_list: List["Keypoints"]) -> "Keypoints":
        """
        Concatenates a list of Keypoints into a single Keypoints

        Arguments:
            keypoints_list (list[Keypoints])

        Returns:
            Keypoints: the concatenated Keypoints
        """
        assert isinstance(keypoints_list, (list, tuple))
        assert len(keypoints_list) > 0
        assert all(isinstance(keypoints, Keypoints) for keypoints in keypoints_list)
        cat_kpts = type(keypoints_list[0])(
            torch.cat([kpts.tensor for kpts in keypoints_list], dim=0)
        )
        return cat_kpts


# TODO make this nicer, this is a direct translation from C2 (but removing the inner loop)
def _keypoints_to_heatmap(
    keypoints: torch.Tensor, rois: torch.Tensor, heatmap_size: int
) -> Tuple[torch.Tensor, torch.Tensor]:
    """
    Encode keypoint locations into a target heatmap for use in SoftmaxWithLoss across space.

    Maps keypoints from the half-open interval [x1, x2) on continuous image coordinates to the
    closed interval [0, heatmap_size - 1] on discrete image coordinates. We use the
    continuous-discrete conversion from Heckbert 1990 ("What is the coordinate of a pixel?"):
    d = floor(c) and c = d + 0.5, where d is a discrete coordinate and c is a continuous
    coordinate.

    Arguments:
        keypoints: tensor of keypoint locations in of shape (N, K, 3).
        rois: Nx4 tensor of rois in xyxy format
        heatmap_size: integer side length of square heatmap.

    Returns:
        heatmaps: A tensor of shape (N, K) containing an integer spatial label
            in the range [0, heatmap_size**2 - 1] for each keypoint in the input.
        valid: A tensor of shape (N, K) containing whether each keypoint is in
            the roi or not.
    """

    if rois.numel() == 0:
        return rois.new().long(), rois.new().long()
    offset_x = rois[:, 0]
    offset_y = rois[:, 1]
    scale_x = heatmap_size / (rois[:, 2] - rois[:, 0])
    scale_y = heatmap_size / (rois[:, 3] - rois[:, 1])

    offset_x = offset_x[:, None]
    offset_y = offset_y[:, None]
    scale_x = scale_x[:, None]
    scale_y = scale_y[:, None]

    x = keypoints[..., 0]
    y = keypoints[..., 1]

    x_boundary_inds = x == rois[:, 2][:, None]
    y_boundary_inds = y == rois[:, 3][:, None]

    x = (x - offset_x) * scale_x
    x = x.floor().long()
    y = (y - offset_y) * scale_y
    y = y.floor().long()

    x[x_boundary_inds] = heatmap_size - 1
    y[y_boundary_inds] = heatmap_size - 1

    valid_loc = (x >= 0) & (y >= 0) & (x < heatmap_size) & (y < heatmap_size)
    vis = keypoints[..., 2] > 0
    valid = (valid_loc & vis).long()

    lin_ind = y * heatmap_size + x
    heatmaps = lin_ind * valid

    return heatmaps, valid


@torch.jit.script_if_tracing
def heatmaps_to_keypoints(maps: torch.Tensor, rois: torch.Tensor) -> torch.Tensor:
    """
    Extract predicted keypoint locations from heatmaps.

    Args:
        maps (Tensor): (#ROIs, #keypoints, POOL_H, POOL_W). The predicted heatmap of logits for
            each ROI and each keypoint.
        rois (Tensor): (#ROIs, 4). The box of each ROI.

    Returns:
        Tensor of shape (#ROIs, #keypoints, 4) with the last dimension corresponding to
        (x, y, logit, score) for each keypoint.

    When converting discrete pixel indices in an NxN image to a continuous keypoint coordinate,
    we maintain consistency with :meth:`Keypoints.to_heatmap` by using the conversion from
    Heckbert 1990: c = d + 0.5, where d is a discrete coordinate and c is a continuous coordinate.
    """

    # The decorator use of torch.no_grad() was not supported by torchscript.
    # https://github.com/pytorch/pytorch/issues/44768
    maps = maps.detach()
    rois = rois.detach()

    offset_x = rois[:, 0]
    offset_y = rois[:, 1]

    widths = (rois[:, 2] - rois[:, 0]).clamp(min=1)
    heights = (rois[:, 3] - rois[:, 1]).clamp(min=1)
    widths_ceil = widths.ceil()
    heights_ceil = heights.ceil()

    num_rois, num_keypoints = maps.shape[:2]
    xy_preds = maps.new_zeros(rois.shape[0], num_keypoints, 4)

    width_corrections = widths / widths_ceil
    height_corrections = heights / heights_ceil

    keypoints_idx = torch.arange(num_keypoints, device=maps.device)

    for i in range(num_rois):
        outsize = (int(heights_ceil[i]), int(widths_ceil[i]))
        roi_map = F.interpolate(
            maps[[i]], size=outsize, mode="bicubic", align_corners=False
        ).squeeze(0)  # #keypoints x H x W

        # softmax over the spatial region
        max_score, _ = roi_map.view(num_keypoints, -1).max(1)
        max_score = max_score.view(num_keypoints, 1, 1)
        tmp_full_resolution = (roi_map - max_score).exp_()
        tmp_pool_resolution = (maps[i] - max_score).exp_()
        # Produce scores over the region H x W, but normalize with POOL_H x POOL_W,
        # so that the scores of objects of different absolute sizes will be more comparable
        roi_map_scores = tmp_full_resolution / tmp_pool_resolution.sum((1, 2), keepdim=True)

        w = roi_map.shape[2]
        pos = roi_map.view(num_keypoints, -1).argmax(1)

        x_int = pos % w
        y_int = (pos - x_int) // w

        assert (
            roi_map_scores[keypoints_idx, y_int, x_int]
            == roi_map_scores.view(num_keypoints, -1).max(1)[0]
        ).all()

        x = (x_int.float() + 0.5) * width_corrections[i]
        y = (y_int.float() + 0.5) * height_corrections[i]

        xy_preds[i, :, 0] = x + offset_x[i]
        xy_preds[i, :, 1] = y + offset_y[i]
        xy_preds[i, :, 2] = roi_map[keypoints_idx, y_int, x_int]
        xy_preds[i, :, 3] = roi_map_scores[keypoints_idx, y_int, x_int]

    return xy_preds
/roof_mask_Yv8-0.5.9-py3-none-any.whl/detectron2/structures/keypoints.py
0.938674
0.802942
keypoints.py
pypi
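The core of `_keypoints_to_heatmap` above is coordinate arithmetic: scale a continuous keypoint into ROI-relative heatmap coordinates, discretize with the Heckbert 1990 rule `d = floor(c)`, clamp right/bottom-edge points onto the last bin, and flatten to `y * heatmap_size + x`. A pure-Python sketch for a single keypoint (the function name `keypoint_to_heatmap_index` is invented for illustration; the real code vectorizes this over (N, K) with torch):

```python
import math


def keypoint_to_heatmap_index(kx, ky, roi, heatmap_size):
    """Map one continuous keypoint into a flat heatmap bin, mirroring _keypoints_to_heatmap.

    Returns the linear index in [0, heatmap_size**2 - 1], or None if the keypoint
    falls outside the ROI (i.e. it would be marked invalid).
    """
    x1, y1, x2, y2 = roi
    scale_x = heatmap_size / (x2 - x1)
    scale_y = heatmap_size / (y2 - y1)
    # Heckbert 1990: discrete coordinate d = floor(continuous coordinate c)
    x = math.floor((kx - x1) * scale_x)
    y = math.floor((ky - y1) * scale_y)
    # points exactly on the right/bottom ROI edge land in the last bin
    if kx == x2:
        x = heatmap_size - 1
    if ky == y2:
        y = heatmap_size - 1
    if not (0 <= x < heatmap_size and 0 <= y < heatmap_size):
        return None
    return y * heatmap_size + x


# a keypoint at (12, 15) in a (10, 10, 20, 20) ROI, 56x56 heatmap:
# x = floor(2 * 5.6) = 11, y = floor(5 * 5.6) = 28 -> 28 * 56 + 11 = 1579
print(keypoint_to_heatmap_index(12.0, 15.0, (10, 10, 20, 20), 56))  # 1579
```

The visibility flag and vectorization over all instances are omitted here; only the spatial-label arithmetic is shown.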
import logging

from detectron2.utils.file_io import PathHandler, PathManager


class ModelCatalog(object):
    """
    Store mappings from names to third-party models.
    """

    S3_C2_DETECTRON_PREFIX = "https://dl.fbaipublicfiles.com/detectron"

    # MSRA models have STRIDE_IN_1X1=True. False otherwise.
    # NOTE: all BN models here have fused BN into an affine layer.
    # As a result, you should only load them to a model with "FrozenBN".
    # Loading them to a model with regular BN or SyncBN is wrong.
    # Even when loaded to FrozenBN, it is still different from affine by an epsilon,
    # which should be negligible for training.
    # NOTE: all models here uses PIXEL_STD=[1,1,1]
    # NOTE: Most of the BN models here are no longer used. We use the
    # re-converted pre-trained models under detectron2 model zoo instead.
    C2_IMAGENET_MODELS = {
        "MSRA/R-50": "ImageNetPretrained/MSRA/R-50.pkl",
        "MSRA/R-101": "ImageNetPretrained/MSRA/R-101.pkl",
        "FAIR/R-50-GN": "ImageNetPretrained/47261647/R-50-GN.pkl",
        "FAIR/R-101-GN": "ImageNetPretrained/47592356/R-101-GN.pkl",
        "FAIR/X-101-32x8d": "ImageNetPretrained/20171220/X-101-32x8d.pkl",
        "FAIR/X-101-64x4d": "ImageNetPretrained/FBResNeXt/X-101-64x4d.pkl",
        "FAIR/X-152-32x8d-IN5k": "ImageNetPretrained/25093814/X-152-32x8d-IN5k.pkl",
    }

    C2_DETECTRON_PATH_FORMAT = (
        "{prefix}/{url}/output/train/{dataset}/{type}/model_final.pkl"  # noqa B950
    )

    C2_DATASET_COCO = "coco_2014_train%3Acoco_2014_valminusminival"
    C2_DATASET_COCO_KEYPOINTS = "keypoints_coco_2014_train%3Akeypoints_coco_2014_valminusminival"

    # format: {model_name} -> part of the url
    C2_DETECTRON_MODELS = {
        "35857197/e2e_faster_rcnn_R-50-C4_1x": "35857197/12_2017_baselines/e2e_faster_rcnn_R-50-C4_1x.yaml.01_33_49.iAX0mXvW",  # noqa B950
        "35857345/e2e_faster_rcnn_R-50-FPN_1x": "35857345/12_2017_baselines/e2e_faster_rcnn_R-50-FPN_1x.yaml.01_36_30.cUF7QR7I",  # noqa B950
        "35857890/e2e_faster_rcnn_R-101-FPN_1x": "35857890/12_2017_baselines/e2e_faster_rcnn_R-101-FPN_1x.yaml.01_38_50.sNxI7sX7",  # noqa B950
        "36761737/e2e_faster_rcnn_X-101-32x8d-FPN_1x": "36761737/12_2017_baselines/e2e_faster_rcnn_X-101-32x8d-FPN_1x.yaml.06_31_39.5MIHi1fZ",  # noqa B950
        "35858791/e2e_mask_rcnn_R-50-C4_1x": "35858791/12_2017_baselines/e2e_mask_rcnn_R-50-C4_1x.yaml.01_45_57.ZgkA7hPB",  # noqa B950
        "35858933/e2e_mask_rcnn_R-50-FPN_1x": "35858933/12_2017_baselines/e2e_mask_rcnn_R-50-FPN_1x.yaml.01_48_14.DzEQe4wC",  # noqa B950
        "35861795/e2e_mask_rcnn_R-101-FPN_1x": "35861795/12_2017_baselines/e2e_mask_rcnn_R-101-FPN_1x.yaml.02_31_37.KqyEK4tT",  # noqa B950
        "36761843/e2e_mask_rcnn_X-101-32x8d-FPN_1x": "36761843/12_2017_baselines/e2e_mask_rcnn_X-101-32x8d-FPN_1x.yaml.06_35_59.RZotkLKI",  # noqa B950
        "48616381/e2e_mask_rcnn_R-50-FPN_2x_gn": "GN/48616381/04_2018_gn_baselines/e2e_mask_rcnn_R-50-FPN_2x_gn_0416.13_23_38.bTlTI97Q",  # noqa B950
        "37697547/e2e_keypoint_rcnn_R-50-FPN_1x": "37697547/12_2017_baselines/e2e_keypoint_rcnn_R-50-FPN_1x.yaml.08_42_54.kdzV35ao",  # noqa B950
        "35998355/rpn_R-50-C4_1x": "35998355/12_2017_baselines/rpn_R-50-C4_1x.yaml.08_00_43.njH5oD9L",  # noqa B950
        "35998814/rpn_R-50-FPN_1x": "35998814/12_2017_baselines/rpn_R-50-FPN_1x.yaml.08_06_03.Axg0r179",  # noqa B950
        "36225147/fast_R-50-FPN_1x": "36225147/12_2017_baselines/fast_rcnn_R-50-FPN_1x.yaml.08_39_09.L3obSdQ2",  # noqa B950
    }

    @staticmethod
    def get(name):
        if name.startswith("Caffe2Detectron/COCO"):
            return ModelCatalog._get_c2_detectron_baseline(name)
        if name.startswith("ImageNetPretrained/"):
            return ModelCatalog._get_c2_imagenet_pretrained(name)
        raise RuntimeError("model not present in the catalog: {}".format(name))

    @staticmethod
    def _get_c2_imagenet_pretrained(name):
        prefix = ModelCatalog.S3_C2_DETECTRON_PREFIX
        name = name[len("ImageNetPretrained/") :]
        name = ModelCatalog.C2_IMAGENET_MODELS[name]
        url = "/".join([prefix, name])
        return url

    @staticmethod
    def _get_c2_detectron_baseline(name):
        name = name[len("Caffe2Detectron/COCO/") :]
        url = ModelCatalog.C2_DETECTRON_MODELS[name]
        if "keypoint_rcnn" in name:
            dataset = ModelCatalog.C2_DATASET_COCO_KEYPOINTS
        else:
            dataset = ModelCatalog.C2_DATASET_COCO

        if "35998355/rpn_R-50-C4_1x" in name:
            # this one model is somehow different from others ..
            type = "rpn"
        else:
            type = "generalized_rcnn"

        # Detectron C2 models are stored in the structure defined in `C2_DETECTRON_PATH_FORMAT`.
        url = ModelCatalog.C2_DETECTRON_PATH_FORMAT.format(
            prefix=ModelCatalog.S3_C2_DETECTRON_PREFIX, url=url, type=type, dataset=dataset
        )
        return url


class ModelCatalogHandler(PathHandler):
    """
    Resolve URL like catalog://.
    """

    PREFIX = "catalog://"

    def _get_supported_prefixes(self):
        return [self.PREFIX]

    def _get_local_path(self, path, **kwargs):
        logger = logging.getLogger(__name__)
        catalog_path = ModelCatalog.get(path[len(self.PREFIX) :])
        logger.info("Catalog entry {} points to {}".format(path, catalog_path))
        return PathManager.get_local_path(catalog_path, **kwargs)

    def _open(self, path, mode="r", **kwargs):
        return PathManager.open(self._get_local_path(path), mode, **kwargs)


PathManager.register_handler(ModelCatalogHandler())
/roof_mask_Yv8-0.5.9-py3-none-any.whl/detectron2/checkpoint/catalog.py
0.600071
0.286731
catalog.py
pypi
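The baseline URLs above are assembled by plain `str.format` substitution into `C2_DETECTRON_PATH_FORMAT`. A standalone sketch of that assembly (the model id `"12345/example_model"` is a made-up placeholder, not a real catalog entry):

```python
# constants copied from ModelCatalog above
PREFIX = "https://dl.fbaipublicfiles.com/detectron"
PATH_FORMAT = "{prefix}/{url}/output/train/{dataset}/{type}/model_final.pkl"
DATASET_COCO = "coco_2014_train%3Acoco_2014_valminusminival"

# hypothetical model id/path, for illustration only
url = PATH_FORMAT.format(
    prefix=PREFIX,
    url="12345/example_model",
    dataset=DATASET_COCO,
    type="generalized_rcnn",
)
print(url)
```

Every non-RPN, non-keypoint COCO baseline resolves this way, differing only in the `url` fragment stored in `C2_DETECTRON_MODELS`.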
import copy
import logging
import re
from typing import Dict, List
import torch
from tabulate import tabulate


def convert_basic_c2_names(original_keys):
    """
    Apply some basic name conversion to names in C2 weights.
    It only deals with typical backbone models.

    Args:
        original_keys (list[str]):
    Returns:
        list[str]: The same number of strings matching those in original_keys.
    """
    layer_keys = copy.deepcopy(original_keys)
    layer_keys = [
        {"pred_b": "linear_b", "pred_w": "linear_w"}.get(k, k) for k in layer_keys
    ]  # some hard-coded mappings

    layer_keys = [k.replace("_", ".") for k in layer_keys]
    layer_keys = [re.sub("\\.b$", ".bias", k) for k in layer_keys]
    layer_keys = [re.sub("\\.w$", ".weight", k) for k in layer_keys]
    # Uniform both bn and gn names to "norm"
    layer_keys = [re.sub("bn\\.s$", "norm.weight", k) for k in layer_keys]
    layer_keys = [re.sub("bn\\.bias$", "norm.bias", k) for k in layer_keys]
    layer_keys = [re.sub("bn\\.rm", "norm.running_mean", k) for k in layer_keys]
    layer_keys = [re.sub("bn\\.running.mean$", "norm.running_mean", k) for k in layer_keys]
    layer_keys = [re.sub("bn\\.riv$", "norm.running_var", k) for k in layer_keys]
    layer_keys = [re.sub("bn\\.running.var$", "norm.running_var", k) for k in layer_keys]
    layer_keys = [re.sub("bn\\.gamma$", "norm.weight", k) for k in layer_keys]
    layer_keys = [re.sub("bn\\.beta$", "norm.bias", k) for k in layer_keys]
    layer_keys = [re.sub("gn\\.s$", "norm.weight", k) for k in layer_keys]
    layer_keys = [re.sub("gn\\.bias$", "norm.bias", k) for k in layer_keys]

    # stem
    layer_keys = [re.sub("^res\\.conv1\\.norm\\.", "conv1.norm.", k) for k in layer_keys]
    # to avoid mis-matching with "conv1" in other components (e.g. detection head)
    layer_keys = [re.sub("^conv1\\.", "stem.conv1.", k) for k in layer_keys]

    # layer1-4 is used by torchvision, however we follow the C2 naming strategy (res2-5)
    # layer_keys = [re.sub("^res2.", "layer1.", k) for k in layer_keys]
    # layer_keys = [re.sub("^res3.", "layer2.", k) for k in layer_keys]
    # layer_keys = [re.sub("^res4.", "layer3.", k) for k in layer_keys]
    # layer_keys = [re.sub("^res5.", "layer4.", k) for k in layer_keys]

    # blocks
    layer_keys = [k.replace(".branch1.", ".shortcut.") for k in layer_keys]
    layer_keys = [k.replace(".branch2a.", ".conv1.") for k in layer_keys]
    layer_keys = [k.replace(".branch2b.", ".conv2.") for k in layer_keys]
    layer_keys = [k.replace(".branch2c.", ".conv3.") for k in layer_keys]

    # DensePose substitutions
    layer_keys = [re.sub("^body.conv.fcn", "body_conv_fcn", k) for k in layer_keys]
    layer_keys = [k.replace("AnnIndex.lowres", "ann_index_lowres") for k in layer_keys]
    layer_keys = [k.replace("Index.UV.lowres", "index_uv_lowres") for k in layer_keys]
    layer_keys = [k.replace("U.lowres", "u_lowres") for k in layer_keys]
    layer_keys = [k.replace("V.lowres", "v_lowres") for k in layer_keys]
    return layer_keys


def convert_c2_detectron_names(weights):
    """
    Map Caffe2 Detectron weight names to Detectron2 names.

    Args:
        weights (dict): name -> tensor

    Returns:
        dict: detectron2 names -> tensor
        dict: detectron2 names -> C2 names
    """
    logger = logging.getLogger(__name__)
    logger.info("Renaming Caffe2 weights ......")
    original_keys = sorted(weights.keys())
    layer_keys = copy.deepcopy(original_keys)

    layer_keys = convert_basic_c2_names(layer_keys)

    # --------------------------------------------------------------------------
    # RPN hidden representation conv
    # --------------------------------------------------------------------------
    # FPN case
    # In the C2 model, the RPN hidden layer conv is defined for FPN level 2 and then
    # shared for all other levels, hence the appearance of "fpn2"
    layer_keys = [
        k.replace("conv.rpn.fpn2", "proposal_generator.rpn_head.conv") for k in layer_keys
    ]
    # Non-FPN case
    layer_keys = [k.replace("conv.rpn", "proposal_generator.rpn_head.conv") for k in layer_keys]

    # --------------------------------------------------------------------------
    # RPN box transformation conv
    # --------------------------------------------------------------------------
    # FPN case (see note above about "fpn2")
    layer_keys = [
        k.replace("rpn.bbox.pred.fpn2", "proposal_generator.rpn_head.anchor_deltas")
        for k in layer_keys
    ]
    layer_keys = [
        k.replace("rpn.cls.logits.fpn2", "proposal_generator.rpn_head.objectness_logits")
        for k in layer_keys
    ]
    # Non-FPN case
    layer_keys = [
        k.replace("rpn.bbox.pred", "proposal_generator.rpn_head.anchor_deltas")
        for k in layer_keys
    ]
    layer_keys = [
        k.replace("rpn.cls.logits", "proposal_generator.rpn_head.objectness_logits")
        for k in layer_keys
    ]

    # --------------------------------------------------------------------------
    # Fast R-CNN box head
    # --------------------------------------------------------------------------
    layer_keys = [re.sub("^bbox\\.pred", "bbox_pred", k) for k in layer_keys]
    layer_keys = [re.sub("^cls\\.score", "cls_score", k) for k in layer_keys]
    layer_keys = [re.sub("^fc6\\.", "box_head.fc1.", k) for k in layer_keys]
    layer_keys = [re.sub("^fc7\\.", "box_head.fc2.", k) for k in layer_keys]
    # 4conv1fc head tensor names: head_conv1_w, head_conv1_gn_s
    layer_keys = [re.sub("^head\\.conv", "box_head.conv", k) for k in layer_keys]

    # --------------------------------------------------------------------------
    # FPN lateral and output convolutions
    # --------------------------------------------------------------------------
    def fpn_map(name):
        """
        Look for keys with the following patterns:
        1) Starts with "fpn.inner."
           Example: "fpn.inner.res2.2.sum.lateral.weight"
           Meaning: These are lateral pathway convolutions
        2) Starts with "fpn.res"
           Example: "fpn.res2.2.sum.weight"
           Meaning: These are FPN output convolutions
        """
        splits = name.split(".")
        norm = ".norm" if "norm" in splits else ""
        if name.startswith("fpn.inner."):
            # splits example: ['fpn', 'inner', 'res2', '2', 'sum', 'lateral', 'weight']
            stage = int(splits[2][len("res") :])
            return "fpn_lateral{}{}.{}".format(stage, norm, splits[-1])
        elif name.startswith("fpn.res"):
            # splits example: ['fpn', 'res2', '2', 'sum', 'weight']
            stage = int(splits[1][len("res") :])
            return "fpn_output{}{}.{}".format(stage, norm, splits[-1])
        return name

    layer_keys = [fpn_map(k) for k in layer_keys]

    # --------------------------------------------------------------------------
    # Mask R-CNN mask head
    # --------------------------------------------------------------------------
    # roi_heads.StandardROIHeads case
    layer_keys = [k.replace(".[mask].fcn", "mask_head.mask_fcn") for k in layer_keys]
    layer_keys = [re.sub("^\\.mask\\.fcn", "mask_head.mask_fcn", k) for k in layer_keys]
    layer_keys = [k.replace("mask.fcn.logits", "mask_head.predictor") for k in layer_keys]
    # roi_heads.Res5ROIHeads case
    layer_keys = [k.replace("conv5.mask", "mask_head.deconv") for k in layer_keys]

    # --------------------------------------------------------------------------
    # Keypoint R-CNN head
    # --------------------------------------------------------------------------
    # interestingly, the keypoint head convs have blob names that are simply "conv_fcnX"
    layer_keys = [k.replace("conv.fcn", "roi_heads.keypoint_head.conv_fcn") for k in layer_keys]
    layer_keys = [
        k.replace("kps.score.lowres", "roi_heads.keypoint_head.score_lowres") for k in layer_keys
    ]
    layer_keys = [k.replace("kps.score.", "roi_heads.keypoint_head.score.") for k in layer_keys]

    # --------------------------------------------------------------------------
    # Done with replacements
    # --------------------------------------------------------------------------
    assert len(set(layer_keys)) == len(layer_keys)
    assert len(original_keys) == len(layer_keys)

    new_weights = {}
    new_keys_to_original_keys = {}
    for orig, renamed in zip(original_keys, layer_keys):
        new_keys_to_original_keys[renamed] = orig
        if renamed.startswith("bbox_pred.") or renamed.startswith("mask_head.predictor."):
            # remove the meaningless prediction weight for background class
            new_start_idx = 4 if renamed.startswith("bbox_pred.") else 1
            new_weights[renamed] = weights[orig][new_start_idx:]
            logger.info(
                "Remove prediction weight for background class in {}. The shape changes from "
                "{} to {}.".format(
                    renamed, tuple(weights[orig].shape), tuple(new_weights[renamed].shape)
                )
            )
        elif renamed.startswith("cls_score."):
            # move weights of bg class from original index 0 to last index
            logger.info(
                "Move classification weights for background class in {} from index 0 to "
                "index {}.".format(renamed, weights[orig].shape[0] - 1)
            )
            new_weights[renamed] = torch.cat([weights[orig][1:], weights[orig][:1]])
        else:
            new_weights[renamed] = weights[orig]

    return new_weights, new_keys_to_original_keys


# Note the current matching is not symmetric.
# it assumes model_state_dict will have longer names.
def align_and_update_state_dicts(model_state_dict, ckpt_state_dict, c2_conversion=True):
    """
    Match names between the two state-dict, and returns a new chkpt_state_dict with names
    converted to match model_state_dict with heuristics. The returned dict can be later
    loaded with fvcore checkpointer.
    If `c2_conversion==True`, `ckpt_state_dict` is assumed to be a Caffe2 model and will be
    renamed at first.

    Strategy: suppose that the models that we will create will have prefixes appended to
    each of its keys, for example due to an extra level of nesting that the original
    pre-trained weights from ImageNet won't contain. For example, model.state_dict()
    might return backbone[0].body.res2.conv1.weight, while the pre-trained model contains
    res2.conv1.weight. We thus want to match both parameters together. For that, we look
    for each model weight, look among all loaded keys if there is one that is a suffix of
    the current weight name, and use it if that's the case. If multiple matches exist,
    take the one with longest size of the corresponding name. For example, for the same
    model as before, the pretrained weight file can contain both res2.conv1.weight, as
    well as conv1.weight. In this case, we want to match backbone[0].body.conv1.weight to
    conv1.weight, and backbone[0].body.res2.conv1.weight to res2.conv1.weight.
    """
    model_keys = sorted(model_state_dict.keys())
    if c2_conversion:
        ckpt_state_dict, original_keys = convert_c2_detectron_names(ckpt_state_dict)
        # original_keys: the name in the original dict (before renaming)
    else:
        original_keys = {x: x for x in ckpt_state_dict.keys()}
    ckpt_keys = sorted(ckpt_state_dict.keys())

    def match(a, b):
        # Matched ckpt_key should be a complete (starts with '.') suffix.
        # For example, roi_heads.mesh_head.whatever_conv1 does not match conv1,
        # but matches whatever_conv1 or mesh_head.whatever_conv1.
        return a == b or a.endswith("." + b)

    # get a matrix of string matches, where each (i, j) entry correspond to the size of the
    # ckpt_key string, if it matches
    match_matrix = [len(j) if match(i, j) else 0 for i in model_keys for j in ckpt_keys]
    match_matrix = torch.as_tensor(match_matrix).view(len(model_keys), len(ckpt_keys))
    # use the matched one with longest size in case of multiple matches
    max_match_size, idxs = match_matrix.max(1)
    # remove indices that correspond to no-match
    idxs[max_match_size == 0] = -1

    logger = logging.getLogger(__name__)
    # matched_pairs (matched checkpoint key --> matched model key)
    matched_keys = {}
    result_state_dict = {}
    for idx_model, idx_ckpt in enumerate(idxs.tolist()):
        if idx_ckpt == -1:
            continue
        key_model = model_keys[idx_model]
        key_ckpt = ckpt_keys[idx_ckpt]
        value_ckpt = ckpt_state_dict[key_ckpt]
        shape_in_model = model_state_dict[key_model].shape

        if shape_in_model != value_ckpt.shape:
            logger.warning(
                "Shape of {} in checkpoint is {}, while shape of {} in model is {}.".format(
                    key_ckpt, value_ckpt.shape, key_model, shape_in_model
                )
            )
            logger.warning(
                "{} will not be loaded. Please double check and see if this is desired.".format(
                    key_ckpt
                )
            )
            continue

        assert key_model not in result_state_dict
        result_state_dict[key_model] = value_ckpt
        if key_ckpt in matched_keys:  # already added to matched_keys
            logger.error(
                "Ambiguity found for {} in checkpoint!"
                "It matches at least two keys in the model ({} and {}).".format(
                    key_ckpt, key_model, matched_keys[key_ckpt]
                )
            )
            raise ValueError("Cannot match one checkpoint key to multiple keys in the model.")

        matched_keys[key_ckpt] = key_model

    # logging:
    matched_model_keys = sorted(matched_keys.values())
    if len(matched_model_keys) == 0:
        logger.warning("No weights in checkpoint matched with model.")
        return ckpt_state_dict
    common_prefix = _longest_common_prefix(matched_model_keys)
    rev_matched_keys = {v: k for k, v in matched_keys.items()}
    original_keys = {k: original_keys[rev_matched_keys[k]] for k in matched_model_keys}

    model_key_groups = _group_keys_by_module(matched_model_keys, original_keys)
    table = []
    memo = set()
    for key_model in matched_model_keys:
        if key_model in memo:
            continue
        if key_model in model_key_groups:
            group = model_key_groups[key_model]
            memo |= set(group)
            shapes = [tuple(model_state_dict[k].shape) for k in group]
            table.append(
                (
                    _longest_common_prefix([k[len(common_prefix) :] for k in group]) + "*",
                    _group_str([original_keys[k] for k in group]),
                    " ".join([str(x).replace(" ", "") for x in shapes]),
                )
            )
        else:
            key_checkpoint = original_keys[key_model]
            shape = str(tuple(model_state_dict[key_model].shape))
            table.append((key_model[len(common_prefix) :], key_checkpoint, shape))
    table_str = tabulate(
        table, tablefmt="pipe", headers=["Names in Model", "Names in Checkpoint", "Shapes"]
    )
    # logger.info(
    #     "Following weights matched with "
    #     + (f"submodule {common_prefix[:-1]}" if common_prefix else "model")
    #     + ":\n"
    #     + table_str
    # )

    unmatched_ckpt_keys = [k for k in ckpt_keys if k not in set(matched_keys.keys())]
    for k in unmatched_ckpt_keys:
        result_state_dict[k] = ckpt_state_dict[k]
    return result_state_dict


def _group_keys_by_module(keys: List[str], original_names: Dict[str, str]):
    """
    Params in the same submodule are grouped together.

    Args:
        keys: names of all parameters
        original_names: mapping from parameter name to their name in the checkpoint

    Returns:
        dict[name -> all other names in the same group]
    """

    def _submodule_name(key):
        pos = key.rfind(".")
        if pos < 0:
            return None
        prefix = key[: pos + 1]
        return prefix

    all_submodules = [_submodule_name(k) for k in keys]
    all_submodules = [x for x in all_submodules if x]
    all_submodules = sorted(all_submodules, key=len)

    ret = {}
    for prefix in all_submodules:
        group = [k for k in keys if k.startswith(prefix)]
        if len(group) <= 1:
            continue
        original_name_lcp = _longest_common_prefix_str([original_names[k] for k in group])
        if len(original_name_lcp) == 0:
            # don't group weights if original names don't share prefix
            continue

        for k in group:
            if k in ret:
                continue
            ret[k] = group
    return ret


def _longest_common_prefix(names: List[str]) -> str:
    """
    ["abc.zfg", "abc.zef"] -> "abc."
    """
    names = [n.split(".") for n in names]
    m1, m2 = min(names), max(names)
    ret = [a for a, b in zip(m1, m2) if a == b]
    ret = ".".join(ret) + "." if len(ret) else ""
    return ret


def _longest_common_prefix_str(names: List[str]) -> str:
    m1, m2 = min(names), max(names)
    lcp = [a for a, b in zip(m1, m2) if a == b]
    lcp = "".join(lcp)
    return lcp


def _group_str(names: List[str]) -> str:
    """
    Turn "common1", "common2", "common3" into "common{1,2,3}"
    """
    lcp = _longest_common_prefix_str(names)
    rest = [x[len(lcp) :] for x in names]
    rest = "{" + ",".join(rest) + "}"
    ret = lcp + rest

    # add some simplification for BN specifically
    ret = ret.replace("bn_{beta,running_mean,running_var,gamma}", "bn_*")
    ret = ret.replace("bn_beta,bn_running_mean,bn_running_var,bn_gamma", "bn_*")
    return ret
/roof_mask_Yv8-0.5.9-py3-none-any.whl/detectron2/checkpoint/c2_model_loading.py
0.885415
0.549399
c2_model_loading.py
pypi
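The matching strategy in `align_and_update_state_dicts` above hinges on the small `match` predicate: a checkpoint key matches a model key only when it is the whole key or a complete `.`-delimited suffix of it, and ties are broken by the longest matching checkpoint key. A dependency-free sketch of just that heuristic (the helper name `best_ckpt_match` is invented for illustration; the real code builds the same comparison as a torch match matrix):

```python
from typing import List, Optional


def best_ckpt_match(model_key: str, ckpt_keys: List[str]) -> Optional[str]:
    """Longest checkpoint key that equals model_key or is a '.'-delimited suffix of it."""

    def match(a: str, b: str) -> bool:
        # b must be a complete dotted suffix: "head.whatever_conv1" must NOT match "conv1"
        return a == b or a.endswith("." + b)

    candidates = [k for k in ckpt_keys if match(model_key, k)]
    return max(candidates, key=len) if candidates else None


# the longer suffix wins when both "conv1.weight" and "res2.conv1.weight" match
print(best_ckpt_match(
    "backbone.body.res2.conv1.weight", ["conv1.weight", "res2.conv1.weight"]
))  # res2.conv1.weight
```

This is why the docstring's example maps `backbone[0].body.res2.conv1.weight` to `res2.conv1.weight` rather than to the shorter `conv1.weight`.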
import argparse
import logging
import os
import sys
import weakref
from collections import OrderedDict
from typing import Optional
import torch
from fvcore.nn.precise_bn import get_bn_modules
from omegaconf import OmegaConf
from torch.nn.parallel import DistributedDataParallel

import detectron2.data.transforms as T
from detectron2.checkpoint import DetectionCheckpointer
from detectron2.config import CfgNode, LazyConfig
from detectron2.data import (
    MetadataCatalog,
    build_detection_test_loader,
    build_detection_train_loader,
)
from detectron2.evaluation import (
    DatasetEvaluator,
    inference_on_dataset,
    print_csv_format,
    verify_results,
)
from detectron2.modeling import build_model
from detectron2.solver import build_lr_scheduler, build_optimizer
from detectron2.utils import comm
from detectron2.utils.collect_env import collect_env_info
from detectron2.utils.env import seed_all_rng
from detectron2.utils.events import CommonMetricPrinter, JSONWriter, TensorboardXWriter
from detectron2.utils.file_io import PathManager
from detectron2.utils.logger import setup_logger

from . import hooks
from .train_loop import AMPTrainer, SimpleTrainer, TrainerBase

__all__ = [
    "create_ddp_model",
    "default_argument_parser",
    "default_setup",
    "default_writers",
    "DefaultPredictor",
    "DefaultTrainer",
]


def create_ddp_model(model, *, fp16_compression=False, **kwargs):
    """
    Create a DistributedDataParallel model if there are >1 processes.

    Args:
        model: a torch.nn.Module
        fp16_compression: add fp16 compression hooks to the ddp object.
            See more at https://pytorch.org/docs/stable/ddp_comm_hooks.html#torch.distributed.algorithms.ddp_comm_hooks.default_hooks.fp16_compress_hook
        kwargs: other arguments of :module:`torch.nn.parallel.DistributedDataParallel`.
    """  # noqa
    if comm.get_world_size() == 1:
        return model
    if "device_ids" not in kwargs:
        kwargs["device_ids"] = [comm.get_local_rank()]
    ddp = DistributedDataParallel(model, **kwargs)
    if fp16_compression:
        from torch.distributed.algorithms.ddp_comm_hooks import default as comm_hooks

        ddp.register_comm_hook(state=None, hook=comm_hooks.fp16_compress_hook)
    return ddp


def default_argument_parser(epilog=None):
    """
    Create a parser with some common arguments used by detectron2 users.

    Args:
        epilog (str): epilog passed to ArgumentParser describing the usage.

    Returns:
        argparse.ArgumentParser:
    """
    parser = argparse.ArgumentParser(
        epilog=epilog
        or f"""
Examples:

Run on single machine:
    $ {sys.argv[0]} --num-gpus 8 --config-file cfg.yaml

Change some config options:
    $ {sys.argv[0]} --config-file cfg.yaml MODEL.WEIGHTS /path/to/weight.pth SOLVER.BASE_LR 0.001

Run on multiple machines:
    (machine0)$ {sys.argv[0]} --machine-rank 0 --num-machines 2 --dist-url <URL> [--other-flags]
    (machine1)$ {sys.argv[0]} --machine-rank 1 --num-machines 2 --dist-url <URL> [--other-flags]
""",
        formatter_class=argparse.RawDescriptionHelpFormatter,
    )
    parser.add_argument("--config-file", default="", metavar="FILE", help="path to config file")
    parser.add_argument(
        "--resume",
        action="store_true",
        help="Whether to attempt to resume from the checkpoint directory. "
        "See documentation of `DefaultTrainer.resume_or_load()` for what it means.",
    )
    parser.add_argument("--eval-only", action="store_true", help="perform evaluation only")
    parser.add_argument("--num-gpus", type=int, default=1, help="number of gpus *per machine*")
    parser.add_argument("--num-machines", type=int, default=1, help="total number of machines")
    parser.add_argument(
        "--machine-rank", type=int, default=0, help="the rank of this machine (unique per machine)"
    )

    # PyTorch still may leave orphan processes in multi-gpu training.
    # Therefore we use a deterministic way to obtain port,
    # so that users are aware of orphan processes by seeing the port occupied.
    port = 2 ** 15 + 2 ** 14 + hash(os.getuid() if sys.platform != "win32" else 1) % 2 ** 14
    parser.add_argument(
        "--dist-url",
        default="tcp://127.0.0.1:{}".format(port),
        help="initialization URL for pytorch distributed backend. See "
        "https://pytorch.org/docs/stable/distributed.html for details.",
    )
    parser.add_argument(
        "opts",
        help="""
Modify config options at the end of the command. For Yacs configs, use
space-separated "PATH.KEY VALUE" pairs.
For python-based LazyConfig, use "path.key=value".
        """.strip(),
        default=None,
        nargs=argparse.REMAINDER,
    )
    return parser


def _try_get_key(cfg, *keys, default=None):
    """
    Try select keys from cfg until the first key that exists. Otherwise return default.
    """
    if isinstance(cfg, CfgNode):
        cfg = OmegaConf.create(cfg.dump())
    for k in keys:
        none = object()
        p = OmegaConf.select(cfg, k, default=none)
        if p is not none:
            return p
    return default


def _highlight(code, filename):
    try:
        import pygments
    except ImportError:
        return code

    from pygments.lexers import Python3Lexer, YamlLexer
    from pygments.formatters import Terminal256Formatter

    lexer = Python3Lexer() if filename.endswith(".py") else YamlLexer()
    code = pygments.highlight(code, lexer, Terminal256Formatter(style="monokai"))
    return code


def default_setup(cfg, args):
    """
    Perform some basic common setups at the beginning of a job, including:

    1. Set up the detectron2 logger
    2. Log basic information about environment, cmdline arguments, and config
    3. Backup the config to the output directory

    Args:
        cfg (CfgNode or omegaconf.DictConfig): the full config to be used
        args (argparse.NameSpace): the command line arguments to be logged
    """
    output_dir = _try_get_key(cfg, "OUTPUT_DIR", "output_dir", "train.output_dir")
    if comm.is_main_process() and output_dir:
        PathManager.mkdirs(output_dir)

    rank = comm.get_rank()
    setup_logger(output_dir, distributed_rank=rank, name="fvcore")
    logger = setup_logger(output_dir, distributed_rank=rank)

    logger.info("Rank of current process: {}. World size: {}".format(rank, comm.get_world_size()))
    logger.info("Environment info:\n" + collect_env_info())

    logger.info("Command line arguments: " + str(args))
    if hasattr(args, "config_file") and args.config_file != "":
        logger.info(
            "Contents of args.config_file={}:\n{}".format(
                args.config_file,
                _highlight(PathManager.open(args.config_file, "r").read(), args.config_file),
            )
        )

    if comm.is_main_process() and output_dir:
        # Note: some of our scripts may expect the existence of
        # config.yaml in output directory
        path = os.path.join(output_dir, "config.yaml")
        if isinstance(cfg, CfgNode):
            logger.info("Running with full config:\n{}".format(_highlight(cfg.dump(), ".yaml")))
            with PathManager.open(path, "w") as f:
                f.write(cfg.dump())
        else:
            LazyConfig.save(cfg, path)
        logger.info("Full config saved to {}".format(path))

    # make sure each worker has a different, yet deterministic seed if specified
    seed = _try_get_key(cfg, "SEED", "train.seed", default=-1)
    seed_all_rng(None if seed < 0 else seed + rank)

    # cudnn benchmark has large overhead. It shouldn't be used considering the small size of
    # typical validation set.
    if not (hasattr(args, "eval_only") and args.eval_only):
        torch.backends.cudnn.benchmark = _try_get_key(
            cfg, "CUDNN_BENCHMARK", "train.cudnn_benchmark", default=False
        )


def default_writers(output_dir: str, max_iter: Optional[int] = None):
    """
    Build a list of :class:`EventWriter` to be used.
    It now consists of a :class:`CommonMetricPrinter`,
    :class:`TensorboardXWriter` and :class:`JSONWriter`.

    Args:
        output_dir: directory to store JSON metrics and tensorboard events
        max_iter: the total number of iterations

    Returns:
        list[EventWriter]: a list of :class:`EventWriter` objects.
    """
    PathManager.mkdirs(output_dir)
    return [
        # It may not always print what you want to see, since it prints "common" metrics only.
        CommonMetricPrinter(max_iter),
        JSONWriter(os.path.join(output_dir, "metrics.json")),
        TensorboardXWriter(output_dir),
    ]


class DefaultPredictor:
    """
    Create a simple end-to-end predictor with the given config that runs on
    single device for a single input image.

    Compared to using the model directly, this class does the following additions:

    1. Load checkpoint from `cfg.MODEL.WEIGHTS`.
    2. Always take BGR image as the input and apply conversion defined by `cfg.INPUT.FORMAT`.
    3. Apply resizing defined by `cfg.INPUT.{MIN,MAX}_SIZE_TEST`.
    4. Take one input image and produce a single output, instead of a batch.

    This is meant for simple demo purposes, so it does the above steps automatically.
    This is not meant for benchmarks or running complicated inference logic.
    If you'd like to do anything more complicated, please refer to its source code as
    examples to build and use the model manually.

    Attributes:
        metadata (Metadata): the metadata of the underlying dataset, obtained from
            cfg.DATASETS.TEST.

    Examples:
    ::
        pred = DefaultPredictor(cfg)
        inputs = cv2.imread("input.jpg")
        outputs = pred(inputs)
    """

    def __init__(self, cfg):
        self.cfg = cfg.clone()  # cfg can be modified by model
        self.model = build_model(self.cfg)
        self.model.eval()
        if len(cfg.DATASETS.TEST):
            self.metadata = MetadataCatalog.get(cfg.DATASETS.TEST[0])

        checkpointer = DetectionCheckpointer(self.model)
        checkpointer.load(cfg.MODEL.WEIGHTS)

        self.aug = T.ResizeShortestEdge(
            [cfg.INPUT.MIN_SIZE_TEST, cfg.INPUT.MIN_SIZE_TEST], cfg.INPUT.MAX_SIZE_TEST
        )

        self.input_format = cfg.INPUT.FORMAT
        assert self.input_format in ["RGB", "BGR"], self.input_format

    def __call__(self, original_image):
        """
        Args:
            original_image (np.ndarray): an image of shape (H, W, C) (in BGR order).

        Returns:
            predictions (dict):
                the output of the model for one image only.
                See :doc:`/tutorials/models` for details about the format.
        """
        with torch.no_grad():  # https://github.com/sphinx-doc/sphinx/issues/4258
            # Apply pre-processing to image.
            if self.input_format == "RGB":
                # whether the model expects BGR inputs or RGB
                original_image = original_image[:, :, ::-1]
            height, width = original_image.shape[:2]
            image = self.aug.get_transform(original_image).apply_image(original_image)
            image = torch.as_tensor(image.astype("float32").transpose(2, 0, 1))

            inputs = {"image": image, "height": height, "width": width}
            predictions = self.model([inputs])[0]
            return predictions


class DefaultTrainer(TrainerBase):
    """
    A trainer with default training logic. It does the following:

    1. Create a :class:`SimpleTrainer` using model, optimizer, dataloader
       defined by the given config. Create a LR scheduler defined by the config.
    2. Load the last checkpoint or `cfg.MODEL.WEIGHTS`, if exists, when
       `resume_or_load` is called.
    3. Register a few common hooks defined by the config.

    It is created to simplify the **standard model training workflow** and reduce code boilerplate
    for users who only need the standard training workflow, with standard features.
    It means this class makes *many assumptions* about your training logic that
    may easily become invalid in a new research. In fact, any assumptions beyond those made in the
    :class:`SimpleTrainer` are too much for research.

    The code of this class has been annotated about restrictive assumptions it makes.
    When they do not work for you, you're encouraged to:

    1. Overwrite methods of this class, OR:
    2. Use :class:`SimpleTrainer`, which only does minimal SGD training and
       nothing else. You can then add your own hooks if needed. OR:
    3. Write your own training loop similar to `tools/plain_train_net.py`.

    See the :doc:`/tutorials/training` tutorials for more details.

    Note that the behavior of this class, like other functions/classes in
    this file, is not stable, since it is meant to represent the "common default behavior".
    It is only guaranteed to work well with the standard models and training workflow in detectron2.
    To obtain more stable behavior, write your own training logic with other public APIs.

    Examples:
    ::
        trainer = DefaultTrainer(cfg)
        trainer.resume_or_load()  # load last checkpoint or MODEL.WEIGHTS
        trainer.train()

    Attributes:
        scheduler:
        checkpointer (DetectionCheckpointer):
        cfg (CfgNode):
    """

    def __init__(self, cfg):
        """
        Args:
            cfg (CfgNode):
        """
        super().__init__()
        logger = logging.getLogger("detectron2")
        if not logger.isEnabledFor(logging.INFO):  # setup_logger is not called for d2
            setup_logger()
        cfg = DefaultTrainer.auto_scale_workers(cfg, comm.get_world_size())

        # Assume these objects must be constructed in this order.
        model = self.build_model(cfg)
        optimizer = self.build_optimizer(cfg, model)
        data_loader = self.build_train_loader(cfg)

        model = create_ddp_model(model, broadcast_buffers=False)
        self._trainer = (AMPTrainer if cfg.SOLVER.AMP.ENABLED else SimpleTrainer)(
            model, data_loader, optimizer
        )

        self.scheduler = self.build_lr_scheduler(cfg, optimizer)
        self.checkpointer = DetectionCheckpointer(
            # Assume you want to save checkpoints together with logs/statistics
            model,
            cfg.OUTPUT_DIR,
            trainer=weakref.proxy(self),
        )
        self.start_iter = 0
        self.max_iter = cfg.SOLVER.MAX_ITER
        self.cfg = cfg

        self.register_hooks(self.build_hooks())

    def resume_or_load(self, resume=True):
        """
        If `resume==True` and `cfg.OUTPUT_DIR` contains the last checkpoint (defined by
        a `last_checkpoint` file), resume from the file. Resuming means loading all
        available states (eg. optimizer and scheduler) and update iteration counter
        from the checkpoint. ``cfg.MODEL.WEIGHTS`` will not be used.

        Otherwise, this is considered as an independent training. The method will load model
        weights from the file `cfg.MODEL.WEIGHTS` (but will not load other states) and start
        from iteration 0.

        Args:
            resume (bool): whether to do resume or not
        """
        self.checkpointer.resume_or_load(self.cfg.MODEL.WEIGHTS, resume=resume)
        if resume and self.checkpointer.has_checkpoint():
            # The checkpoint stores the training iteration that just finished, thus we start
            # at the next iteration
            self.start_iter = self.iter + 1

    def build_hooks(self):
        """
        Build a list of default hooks, including timing, evaluation,
        checkpointing, lr scheduling, precise BN, writing events.

        Returns:
            list[HookBase]:
        """
        cfg = self.cfg.clone()
        cfg.defrost()
        cfg.DATALOADER.NUM_WORKERS = 0  # save some memory and time for PreciseBN

        ret = [
            hooks.IterationTimer(),
            hooks.LRScheduler(),
            hooks.PreciseBN(
                # Run at the same freq as (but before) evaluation.
                cfg.TEST.EVAL_PERIOD,
                self.model,
                # Build a new data loader to not affect training
                self.build_train_loader(cfg),
                cfg.TEST.PRECISE_BN.NUM_ITER,
            )
            if cfg.TEST.PRECISE_BN.ENABLED and get_bn_modules(self.model)
            else None,
        ]

        # Do PreciseBN before checkpointer, because it updates the model and need to
        # be saved by checkpointer.
        # This is not always the best: if checkpointing has a different frequency,
        # some checkpoints may have more precise statistics than others.
        if comm.is_main_process():
            ret.append(hooks.PeriodicCheckpointer(self.checkpointer, cfg.SOLVER.CHECKPOINT_PERIOD))

        def test_and_save_results():
            self._last_eval_results = self.test(self.cfg, self.model)
            return self._last_eval_results

        # Do evaluation after checkpointer, because then if it fails,
        # we can use the saved checkpoint to debug.
        ret.append(hooks.EvalHook(cfg.TEST.EVAL_PERIOD, test_and_save_results))

        if comm.is_main_process():
            # Here the default print/log frequency of each writer is used.
            # run writers in the end, so that evaluation metrics are written
            ret.append(hooks.PeriodicWriter(self.build_writers(), period=20))
        return ret

    def build_writers(self):
        """
        Build a list of writers to be used using :func:`default_writers()`.
        If you'd like a different list of writers, you can overwrite it in
        your trainer.

        Returns:
            list[EventWriter]: a list of :class:`EventWriter` objects.
        """
        return default_writers(self.cfg.OUTPUT_DIR, self.max_iter)

    def train(self):
        """
        Run training.

        Returns:
            OrderedDict of results, if evaluation is enabled. Otherwise None.
        """
        super().train(self.start_iter, self.max_iter)
        if len(self.cfg.TEST.EXPECTED_RESULTS) and comm.is_main_process():
            assert hasattr(
                self, "_last_eval_results"
            ), "No evaluation results obtained during training!"
            verify_results(self.cfg, self._last_eval_results)
            return self._last_eval_results

    def run_step(self):
        self._trainer.iter = self.iter
        self._trainer.run_step()

    def state_dict(self):
        ret = super().state_dict()
        ret["_trainer"] = self._trainer.state_dict()
        return ret

    def load_state_dict(self, state_dict):
        super().load_state_dict(state_dict)
        self._trainer.load_state_dict(state_dict["_trainer"])

    @classmethod
    def build_model(cls, cfg):
        """
        Returns:
            torch.nn.Module:

        It now calls :func:`detectron2.modeling.build_model`.
        Overwrite it if you'd like a different model.
        """
        model = build_model(cfg)
        logger = logging.getLogger(__name__)
        logger.info("Model:\n{}".format(model))
        return model

    @classmethod
    def build_optimizer(cls, cfg, model):
        """
        Returns:
            torch.optim.Optimizer:

        It now calls :func:`detectron2.solver.build_optimizer`.
        Overwrite it if you'd like a different optimizer.
        """
        return build_optimizer(cfg, model)

    @classmethod
    def build_lr_scheduler(cls, cfg, optimizer):
        """
        It now calls :func:`detectron2.solver.build_lr_scheduler`.
        Overwrite it if you'd like a different scheduler.
        """
        return build_lr_scheduler(cfg, optimizer)

    @classmethod
    def build_train_loader(cls, cfg):
        """
        Returns:
            iterable

        It now calls :func:`detectron2.data.build_detection_train_loader`.
        Overwrite it if you'd like a different data loader.
        """
        return build_detection_train_loader(cfg)

    @classmethod
    def build_test_loader(cls, cfg, dataset_name):
        """
        Returns:
            iterable

        It now calls :func:`detectron2.data.build_detection_test_loader`.
        Overwrite it if you'd like a different data loader.
        """
        return build_detection_test_loader(cfg, dataset_name)

    @classmethod
    def build_evaluator(cls, cfg, dataset_name):
        """
        Returns:
            DatasetEvaluator or None

        It is not implemented by default.
        """
        raise NotImplementedError(
            """
If you want DefaultTrainer to automatically run evaluation,
please implement `build_evaluator()` in subclasses (see train_net.py for example).
Alternatively, you can call evaluation functions yourself (see Colab balloon tutorial for example).
"""
        )

    @classmethod
    def test(cls, cfg, model, evaluators=None):
        """
        Evaluate the given model. The given model is expected to already contain
        weights to evaluate.

        Args:
            cfg (CfgNode):
            model (nn.Module):
            evaluators (list[DatasetEvaluator] or None): if None, will call
                :meth:`build_evaluator`. Otherwise, must have the same length as
                ``cfg.DATASETS.TEST``.

        Returns:
            dict: a dict of result metrics
        """
        logger = logging.getLogger(__name__)
        if isinstance(evaluators, DatasetEvaluator):
            evaluators = [evaluators]
        if evaluators is not None:
            assert len(cfg.DATASETS.TEST) == len(evaluators), "{} != {}".format(
                len(cfg.DATASETS.TEST), len(evaluators)
            )

        results = OrderedDict()
        for idx, dataset_name in enumerate(cfg.DATASETS.TEST):
            data_loader = cls.build_test_loader(cfg, dataset_name)
            # When evaluators are passed in as arguments,
            # implicitly assume that evaluators can be created before data_loader.
            if evaluators is not None:
                evaluator = evaluators[idx]
            else:
                try:
                    evaluator = cls.build_evaluator(cfg, dataset_name)
                except NotImplementedError:
                    logger.warn(
                        "No evaluator found. Use `DefaultTrainer.test(evaluators=)`, "
                        "or implement its `build_evaluator` method."
                    )
                    results[dataset_name] = {}
                    continue
            results_i = inference_on_dataset(model, data_loader, evaluator)
            results[dataset_name] = results_i
            if comm.is_main_process():
                assert isinstance(
                    results_i, dict
                ), "Evaluator must return a dict on the main process. Got {} instead.".format(
                    results_i
                )
                logger.info("Evaluation results for {} in csv format:".format(dataset_name))
                print_csv_format(results_i)

        if len(results) == 1:
            results = list(results.values())[0]
        return results

    @staticmethod
    def auto_scale_workers(cfg, num_workers: int):
        """
        When the config is defined for certain number of workers (according to
        ``cfg.SOLVER.REFERENCE_WORLD_SIZE``) that's different from the number of
        workers currently in use, returns a new cfg where the total batch size
        is scaled so that the per-GPU batch size stays the same as the
        original ``IMS_PER_BATCH // REFERENCE_WORLD_SIZE``.

        Other config options are also scaled accordingly:
        * training steps and warmup steps are scaled inverse proportionally.
        * learning rate are scaled proportionally, following :paper:`ImageNet in 1h`.

        For example, with the original config like the following:

        .. code-block:: yaml

            IMS_PER_BATCH: 16
            BASE_LR: 0.1
            REFERENCE_WORLD_SIZE: 8
            MAX_ITER: 5000
            STEPS: (4000,)
            CHECKPOINT_PERIOD: 1000

        When this config is used on 16 GPUs instead of the reference number 8,
        calling this method will return a new config with:

        .. code-block:: yaml

            IMS_PER_BATCH: 32
            BASE_LR: 0.2
            REFERENCE_WORLD_SIZE: 16
            MAX_ITER: 2500
            STEPS: (2000,)
            CHECKPOINT_PERIOD: 500

        Note that both the original config and this new config can be trained on 16 GPUs.
        It's up to user whether to enable this feature (by setting ``REFERENCE_WORLD_SIZE``).

        Returns:
            CfgNode: a new config. Same as original if ``cfg.SOLVER.REFERENCE_WORLD_SIZE==0``.
        """
        old_world_size = cfg.SOLVER.REFERENCE_WORLD_SIZE
        if old_world_size == 0 or old_world_size == num_workers:
            return cfg
        cfg = cfg.clone()
        frozen = cfg.is_frozen()
        cfg.defrost()

        assert (
            cfg.SOLVER.IMS_PER_BATCH % old_world_size == 0
        ), "Invalid REFERENCE_WORLD_SIZE in config!"
        scale = num_workers / old_world_size
        bs = cfg.SOLVER.IMS_PER_BATCH = int(round(cfg.SOLVER.IMS_PER_BATCH * scale))
        lr = cfg.SOLVER.BASE_LR = cfg.SOLVER.BASE_LR * scale
        max_iter = cfg.SOLVER.MAX_ITER = int(round(cfg.SOLVER.MAX_ITER / scale))
        warmup_iter = cfg.SOLVER.WARMUP_ITERS = int(round(cfg.SOLVER.WARMUP_ITERS / scale))
        cfg.SOLVER.STEPS = tuple(int(round(s / scale)) for s in cfg.SOLVER.STEPS)
        cfg.TEST.EVAL_PERIOD = int(round(cfg.TEST.EVAL_PERIOD / scale))
        cfg.SOLVER.CHECKPOINT_PERIOD = int(round(cfg.SOLVER.CHECKPOINT_PERIOD / scale))
        cfg.SOLVER.REFERENCE_WORLD_SIZE = num_workers  # maintain invariant
        logger = logging.getLogger(__name__)
        logger.info(
            f"Auto-scaling the config to batch_size={bs}, learning_rate={lr}, "
            f"max_iter={max_iter}, warmup={warmup_iter}."
        )

        if frozen:
            cfg.freeze()
        return cfg


# Access basic attributes from the underlying trainer
for _attr in ["model", "data_loader", "optimizer"]:
    setattr(
        DefaultTrainer,
        _attr,
        property(
            # getter
            lambda self, x=_attr: getattr(self._trainer, x),
            # setter
            lambda self, value, x=_attr: setattr(self._trainer, x, value),
        ),
    )
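The arithmetic of `auto_scale_workers` (the linear scaling rule) can be sketched standalone. `scale_config` is a hypothetical helper covering only the fields used in the docstring example above; the real method operates on a `CfgNode` and also rescales warmup, eval, and checkpoint periods:

```python
# Linear scaling rule: batch size and LR grow with the worker count,
# iteration-based options shrink, so per-GPU batch size and total
# epochs stay fixed.
def scale_config(ims_per_batch, base_lr, max_iter, reference_world_size, num_workers):
    scale = num_workers / reference_world_size
    return {
        "IMS_PER_BATCH": int(round(ims_per_batch * scale)),
        "BASE_LR": base_lr * scale,
        "MAX_ITER": int(round(max_iter / scale)),
    }


# The docstring example: going from 8 reference workers to 16 doubles
# the batch size and LR and halves the schedule.
print(scale_config(16, 0.1, 5000, 8, 16))
```

Note that the per-GPU batch size is unchanged (16/8 = 32/16 = 2 images per GPU), which is exactly the invariant the method documents.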
/roof_mask_Yv8-0.5.9-py3-none-any.whl/detectron2/engine/defaults.py
defaults.py
pypi
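The image pre-processing inside `DefaultPredictor.__call__` above (channel flip for RGB models, then HWC to CHW float32) can be illustrated with a numpy-only sketch; the real code additionally applies `ResizeShortestEdge` and wraps the result in a torch tensor:

```python
import numpy as np


def preprocess(original_image: np.ndarray, input_format: str = "BGR") -> np.ndarray:
    """Mirror of the pre-processing in DefaultPredictor.__call__ (resize omitted)."""
    if input_format == "RGB":
        # the input is assumed BGR; flip channels if the model expects RGB
        original_image = original_image[:, :, ::-1]
    # HWC uint8 -> CHW float32, the layout detectron2 models consume
    return original_image.astype("float32").transpose(2, 0, 1)


img = np.zeros((4, 6, 3), dtype=np.uint8)
print(preprocess(img).shape)  # -> (3, 4, 6)
```

The `height`/`width` recorded before resizing let the model rescale its outputs back to the original image coordinates.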
import datetime
import itertools
import logging
import math
import operator
import os
import tempfile
import time
import warnings
from collections import Counter
import torch
from fvcore.common.checkpoint import Checkpointer
from fvcore.common.checkpoint import PeriodicCheckpointer as _PeriodicCheckpointer
from fvcore.common.param_scheduler import ParamScheduler
from fvcore.common.timer import Timer
from fvcore.nn.precise_bn import get_bn_modules, update_bn_stats

import detectron2.utils.comm as comm
from detectron2.evaluation.testing import flatten_results_dict
from detectron2.solver import LRMultiplier
from detectron2.utils.events import EventStorage, EventWriter
from detectron2.utils.file_io import PathManager

from .train_loop import HookBase

__all__ = [
    "CallbackHook",
    "IterationTimer",
    "PeriodicWriter",
    "PeriodicCheckpointer",
    "BestCheckpointer",
    "LRScheduler",
    "AutogradProfiler",
    "EvalHook",
    "PreciseBN",
    "TorchProfiler",
    "TorchMemoryStats",
]


"""
Implement some common hooks.
"""


class CallbackHook(HookBase):
    """
    Create a hook using callback functions provided by the user.
    """

    def __init__(self, *, before_train=None, after_train=None, before_step=None, after_step=None):
        """
        Each argument is a function that takes one argument: the trainer.
        """
        self._before_train = before_train
        self._before_step = before_step
        self._after_step = after_step
        self._after_train = after_train

    def before_train(self):
        if self._before_train:
            self._before_train(self.trainer)

    def after_train(self):
        if self._after_train:
            self._after_train(self.trainer)
        # The functions may be closures that hold reference to the trainer
        # Therefore, delete them to avoid circular reference.
        del self._before_train, self._after_train
        del self._before_step, self._after_step

    def before_step(self):
        if self._before_step:
            self._before_step(self.trainer)

    def after_step(self):
        if self._after_step:
            self._after_step(self.trainer)


class IterationTimer(HookBase):
    """
    Track the time spent for each iteration (each run_step call in the trainer).
    Print a summary in the end of training.

    This hook uses the time between the call to its :meth:`before_step`
    and :meth:`after_step` methods.
    Under the convention that :meth:`before_step` of all hooks should only
    take negligible amount of time, the :class:`IterationTimer` hook should be
    placed at the beginning of the list of hooks to obtain accurate timing.
    """

    def __init__(self, warmup_iter=3):
        """
        Args:
            warmup_iter (int): the number of iterations at the beginning to exclude
                from timing.
        """
        self._warmup_iter = warmup_iter
        self._step_timer = Timer()
        self._start_time = time.perf_counter()
        self._total_timer = Timer()

    def before_train(self):
        self._start_time = time.perf_counter()
        self._total_timer.reset()
        self._total_timer.pause()

    def after_train(self):
        logger = logging.getLogger(__name__)
        total_time = time.perf_counter() - self._start_time
        total_time_minus_hooks = self._total_timer.seconds()
        hook_time = total_time - total_time_minus_hooks

        num_iter = self.trainer.storage.iter + 1 - self.trainer.start_iter - self._warmup_iter

        if num_iter > 0 and total_time_minus_hooks > 0:
            # Speed is meaningful only after warmup
            # NOTE this format is parsed by grep in some scripts
            logger.info(
                "Overall training speed: {} iterations in {} ({:.4f} s / it)".format(
                    num_iter,
                    str(datetime.timedelta(seconds=int(total_time_minus_hooks))),
                    total_time_minus_hooks / num_iter,
                )
            )

        logger.info(
            "Total training time: {} ({} on hooks)".format(
                str(datetime.timedelta(seconds=int(total_time))),
                str(datetime.timedelta(seconds=int(hook_time))),
            )
        )

    def before_step(self):
        self._step_timer.reset()
        self._total_timer.resume()

    def after_step(self):
        # +1 because we're in after_step, the current step is done
        # but not yet counted
        iter_done = self.trainer.storage.iter - self.trainer.start_iter + 1
        if iter_done >= self._warmup_iter:
            sec = self._step_timer.seconds()
            self.trainer.storage.put_scalars(time=sec)
        else:
            self._start_time = time.perf_counter()
            self._total_timer.reset()

        self._total_timer.pause()


class PeriodicWriter(HookBase):
    """
    Write events to EventStorage (by calling ``writer.write()``) periodically.

    It is executed every ``period`` iterations and after the last iteration.
    Note that ``period`` does not affect how data is smoothed by each writer.
    """

    def __init__(self, writers, period=20):
        """
        Args:
            writers (list[EventWriter]): a list of EventWriter objects
            period (int):
        """
        self._writers = writers
        for w in writers:
            assert isinstance(w, EventWriter), w
        self._period = period

    def after_step(self):
        if (self.trainer.iter + 1) % self._period == 0 or (
            self.trainer.iter == self.trainer.max_iter - 1
        ):
            for writer in self._writers:
                writer.write()

    def after_train(self):
        for writer in self._writers:
            # If any new data is found (e.g. produced by other after_train),
            # write them before closing
            writer.write()
            writer.close()


class PeriodicCheckpointer(_PeriodicCheckpointer, HookBase):
    """
    Same as :class:`detectron2.checkpoint.PeriodicCheckpointer`, but as a hook.

    Note that when used as a hook,
    it is unable to save additional data other than what's defined
    by the given `checkpointer`.

    It is executed every ``period`` iterations and after the last iteration.
    """

    def before_train(self):
        self.max_iter = self.trainer.max_iter

    def after_step(self):
        # No way to use **kwargs
        self.step(self.trainer.iter)


class BestCheckpointer(HookBase):
    """
    Checkpoints best weights based off given metric.

    This hook should be used in conjunction to and executed after the hook
    that produces the metric, e.g. `EvalHook`.
    """

    def __init__(
        self,
        eval_period: int,
        checkpointer: Checkpointer,
        val_metric: str,
        mode: str = "max",
        file_prefix: str = "model_best",
    ) -> None:
        """
        Args:
            eval_period (int): the period `EvalHook` is set to run.
            checkpointer: the checkpointer object used to save checkpoints.
            val_metric (str): validation metric to track for best checkpoint, e.g. "bbox/AP50"
            mode (str): one of {'max', 'min'}. controls whether the chosen val metric should be
                maximized or minimized, e.g. for "bbox/AP50" it should be "max"
            file_prefix (str): the prefix of checkpoint's filename, defaults to "model_best"
        """
        self._logger = logging.getLogger(__name__)
        self._period = eval_period
        self._val_metric = val_metric
        assert mode in [
            "max",
            "min",
        ], f'Mode "{mode}" to `BestCheckpointer` is unknown. It should be one of {"max", "min"}.'
        if mode == "max":
            self._compare = operator.gt
        else:
            self._compare = operator.lt
        self._checkpointer = checkpointer
        self._file_prefix = file_prefix
        self.best_metric = None
        self.best_iter = None

    def _update_best(self, val, iteration):
        if math.isnan(val) or math.isinf(val):
            return False
        self.best_metric = val
        self.best_iter = iteration
        return True

    def _best_checking(self):
        metric_tuple = self.trainer.storage.latest().get(self._val_metric)
        if metric_tuple is None:
            self._logger.warning(
                f"Given val metric {self._val_metric} does not seem to be computed/stored."
                "Will not be checkpointing based on it."
            )
            return
        else:
            latest_metric, metric_iter = metric_tuple

        if self.best_metric is None:
            if self._update_best(latest_metric, metric_iter):
                additional_state = {"iteration": metric_iter}
                self._checkpointer.save(f"{self._file_prefix}", **additional_state)
                self._logger.info(
                    f"Saved first model at {self.best_metric:0.5f} @ {self.best_iter} steps"
                )
        elif self._compare(latest_metric, self.best_metric):
            additional_state = {"iteration": metric_iter}
            self._checkpointer.save(f"{self._file_prefix}", **additional_state)
            self._logger.info(
                f"Saved best model as latest eval score for {self._val_metric} is "
                f"{latest_metric:0.5f}, better than last best score "
                f"{self.best_metric:0.5f} @ iteration {self.best_iter}."
            )
            self._update_best(latest_metric, metric_iter)
        else:
            self._logger.info(
                f"Not saving as latest eval score for {self._val_metric} is {latest_metric:0.5f}, "
                f"not better than best score {self.best_metric:0.5f} @ iteration {self.best_iter}."
            )

    def after_step(self):
        # same conditions as `EvalHook`
        next_iter = self.trainer.iter + 1
        if (
            self._period > 0
            and next_iter % self._period == 0
            and next_iter != self.trainer.max_iter
        ):
            self._best_checking()

    def after_train(self):
        # same conditions as `EvalHook`
        if self.trainer.iter + 1 >= self.trainer.max_iter:
            self._best_checking()


class LRScheduler(HookBase):
    """
    A hook which executes a torch builtin LR scheduler and summarizes the LR.
    It is executed after every iteration.
    """

    def __init__(self, optimizer=None, scheduler=None):
        """
        Args:
            optimizer (torch.optim.Optimizer):
            scheduler (torch.optim.LRScheduler or fvcore.common.param_scheduler.ParamScheduler):
                if a :class:`ParamScheduler` object, it defines the multiplier over the base LR
                in the optimizer.

        If any argument is not given, will try to obtain it from the trainer.
        """
        self._optimizer = optimizer
        self._scheduler = scheduler

    def before_train(self):
        self._optimizer = self._optimizer or self.trainer.optimizer
        if isinstance(self.scheduler, ParamScheduler):
            self._scheduler = LRMultiplier(
                self._optimizer,
                self.scheduler,
                self.trainer.max_iter,
                last_iter=self.trainer.iter - 1,
            )
        self._best_param_group_id = LRScheduler.get_best_param_group_id(self._optimizer)

    @staticmethod
    def get_best_param_group_id(optimizer):
        # NOTE: some heuristics on what LR to summarize
        # summarize the param group with most parameters
        largest_group = max(len(g["params"]) for g in optimizer.param_groups)

        if largest_group == 1:
            # If all groups have one parameter,
            # then find the most common initial LR, and use it for summary
            lr_count = Counter([g["lr"] for g in optimizer.param_groups])
            lr = lr_count.most_common()[0][0]
            for i, g in enumerate(optimizer.param_groups):
                if g["lr"] == lr:
                    return i
        else:
            for i, g in enumerate(optimizer.param_groups):
                if len(g["params"]) == largest_group:
                    return i

    def after_step(self):
        lr = self._optimizer.param_groups[self._best_param_group_id]["lr"]
        self.trainer.storage.put_scalar("lr", lr, smoothing_hint=False)
        self.scheduler.step()

    @property
    def scheduler(self):
        return self._scheduler or self.trainer.scheduler

    def state_dict(self):
        if isinstance(self.scheduler, torch.optim.lr_scheduler._LRScheduler):
            return self.scheduler.state_dict()
        return {}

    def load_state_dict(self, state_dict):
        if isinstance(self.scheduler, torch.optim.lr_scheduler._LRScheduler):
            logger = logging.getLogger(__name__)
            logger.info("Loading scheduler from state_dict ...")
            self.scheduler.load_state_dict(state_dict)


class TorchProfiler(HookBase):
    """
    A hook which runs `torch.profiler.profile`.

    Examples:
    ::
        hooks.TorchProfiler(
             lambda trainer: 10 < trainer.iter < 20, self.cfg.OUTPUT_DIR
        )

    The above example will run the profiler for iteration 10~20 and dump
    results to ``OUTPUT_DIR``. We did not profile the first few iterations
    because they are typically slower than the rest.
    The result files can be loaded in the ``chrome://tracing`` page in chrome browser,
    and the tensorboard visualizations can be visualized using
    ``tensorboard --logdir OUTPUT_DIR/log``
    """

    def __init__(self, enable_predicate, output_dir, *, activities=None, save_tensorboard=True):
        """
        Args:
            enable_predicate (callable[trainer -> bool]): a function which takes a trainer,
                and returns whether to enable the profiler.
                It will be called once every step, and can be used to select which steps to profile.
            output_dir (str): the output directory to dump tracing files.
            activities (iterable): same as in `torch.profiler.profile`.
            save_tensorboard (bool): whether to save tensorboard visualizations at (output_dir)/log/
        """
        self._enable_predicate = enable_predicate
        self._activities = activities
        self._output_dir = output_dir
        self._save_tensorboard = save_tensorboard

    def before_step(self):
        if self._enable_predicate(self.trainer):
            if self._save_tensorboard:
                on_trace_ready = torch.profiler.tensorboard_trace_handler(
                    os.path.join(
                        self._output_dir,
                        "log",
                        "profiler-tensorboard-iter{}".format(self.trainer.iter),
                    ),
                    f"worker{comm.get_rank()}",
                )
            else:
                on_trace_ready = None
            self._profiler = torch.profiler.profile(
                activities=self._activities,
                on_trace_ready=on_trace_ready,
                record_shapes=True,
                profile_memory=True,
                with_stack=True,
                with_flops=True,
            )
            self._profiler.__enter__()
        else:
            self._profiler = None

    def after_step(self):
        if self._profiler is None:
            return
        self._profiler.__exit__(None, None, None)
        if not self._save_tensorboard:
            PathManager.mkdirs(self._output_dir)
            out_file = os.path.join(
                self._output_dir, "profiler-trace-iter{}.json".format(self.trainer.iter)
            )
            if "://" not in out_file:
                self._profiler.export_chrome_trace(out_file)
            else:
                # Support non-posix filesystems
                with tempfile.TemporaryDirectory(prefix="detectron2_profiler") as d:
                    tmp_file = os.path.join(d, "tmp.json")
                    self._profiler.export_chrome_trace(tmp_file)
                    with open(tmp_file) as f:
                        content = f.read()
                    with PathManager.open(out_file, "w") as f:
                        f.write(content)


class AutogradProfiler(TorchProfiler):
    """
    A hook which runs `torch.autograd.profiler.profile`.

    Examples:
    ::
        hooks.AutogradProfiler(
             lambda trainer: 10 < trainer.iter < 20, self.cfg.OUTPUT_DIR
        )

    The above example will run the profiler for iteration 10~20 and dump
    results to ``OUTPUT_DIR``. We did not profile the first few iterations
    because they are typically slower than the rest.
    The result files can be loaded in the ``chrome://tracing`` page in chrome browser.

    Note:
        When used together with NCCL on older version of GPUs,
        autograd profiler may cause deadlock because it unnecessarily allocates
        memory on every device it sees. The memory management calls, if
        interleaved with NCCL calls, lead to deadlock on GPUs that do not
        support ``cudaLaunchCooperativeKernelMultiDevice``.
    """

    def __init__(self, enable_predicate, output_dir, *, use_cuda=True):
        """
        Args:
            enable_predicate (callable[trainer -> bool]): a function which takes a trainer,
                and returns whether to enable the profiler.
                It will be called once every step, and can be used to select which steps to profile.
            output_dir (str): the output directory to dump tracing files.
            use_cuda (bool): same as in `torch.autograd.profiler.profile`.
        """
        warnings.warn("AutogradProfiler has been deprecated in favor of TorchProfiler.")
        self._enable_predicate = enable_predicate
        self._use_cuda = use_cuda
        self._output_dir = output_dir

    def before_step(self):
        if self._enable_predicate(self.trainer):
            self._profiler = torch.autograd.profiler.profile(use_cuda=self._use_cuda)
            self._profiler.__enter__()
        else:
            self._profiler = None


class EvalHook(HookBase):
    """
    Run an evaluation function periodically, and at the end of training.

    It is executed every ``eval_period`` iterations and after the last iteration.
    """

    def __init__(self, eval_period, eval_function, eval_after_train=True):
        """
        Args:
            eval_period (int): the period to run `eval_function`. Set to 0 to
                not evaluate periodically (but still evaluate after the last iteration
                if `eval_after_train` is True).
            eval_function (callable): a function which takes no arguments, and
                returns a nested dict of evaluation metrics.
            eval_after_train (bool): whether to evaluate after the last iteration

        Note:
            This hook must be enabled in all or none workers.
            If you would like only certain workers to perform evaluation,
            give other workers a no-op function (`eval_function=lambda: None`).
        """
        self._period = eval_period
        self._func = eval_function
        self._eval_after_train = eval_after_train

    def _do_eval(self):
        results = self._func()

        if results:
            assert isinstance(
                results, dict
            ), "Eval function must return a dict. Got {} instead.".format(results)

            flattened_results = flatten_results_dict(results)
            for k, v in flattened_results.items():
                try:
                    v = float(v)
                except Exception as e:
                    raise ValueError(
                        "[EvalHook] eval_function should return a nested dict of float. "
                        "Got '{}: {}' instead.".format(k, v)
                    ) from e
            self.trainer.storage.put_scalars(**flattened_results, smoothing_hint=False)

        # Evaluation may take different time among workers.
        # A barrier make them start the next iteration together.
        comm.synchronize()

    def after_step(self):
        next_iter = self.trainer.iter + 1
        if self._period > 0 and next_iter % self._period == 0:
            # do the last eval in after_train
            if next_iter != self.trainer.max_iter:
                self._do_eval()

    def after_train(self):
        # This condition is to prevent the eval from running after a failed training
        if self._eval_after_train and self.trainer.iter + 1 >= self.trainer.max_iter:
            self._do_eval()
        # func is likely a closure that holds reference to the trainer
        # therefore we clean it to avoid circular reference in the end
        del self._func


class PreciseBN(HookBase):
    """
    The standard implementation of BatchNorm uses EMA in inference, which is
    sometimes suboptimal.
    This class computes the true average of statistics rather than the moving average,
    and puts the true averages into every BN layer in the given model.

    It is executed every ``period`` iterations and after the last iteration.
    """

    def __init__(self, period, model, data_loader, num_iter):
        """
        Args:
            period (int): the period this hook is run, or 0 to not run during training.
                The hook will always run in the end of training.
            model (nn.Module): a module whose BN layers in training mode will be
                updated by precise BN.
                Note that the user is responsible for ensuring the BN layers to be
                updated are in training mode when this hook is triggered.
            data_loader (iterable): it will produce data to be run by `model(data)`.
            num_iter (int): number of iterations used to compute the precise
                statistics.
        """
        self._logger = logging.getLogger(__name__)
        if len(get_bn_modules(model)) == 0:
            self._logger.info(
                "PreciseBN is disabled because model does not contain BN layers in training mode."
) self._disabled = True return self._model = model self._data_loader = data_loader self._num_iter = num_iter self._period = period self._disabled = False self._data_iter = None def after_step(self): next_iter = self.trainer.iter + 1 is_final = next_iter == self.trainer.max_iter if is_final or (self._period > 0 and next_iter % self._period == 0): self.update_stats() def update_stats(self): """ Update the model with precise statistics. Users can manually call this method. """ if self._disabled: return if self._data_iter is None: self._data_iter = iter(self._data_loader) def data_loader(): for num_iter in itertools.count(1): if num_iter % 100 == 0: self._logger.info( "Running precise-BN ... {}/{} iterations.".format(num_iter, self._num_iter) ) # This way we can reuse the same iterator yield next(self._data_iter) with EventStorage(): # capture events in a new storage to discard them self._logger.info( "Running precise-BN for {} iterations... ".format(self._num_iter) + "Note that this could produce different statistics every time." ) update_bn_stats(self._model, data_loader(), self._num_iter) class TorchMemoryStats(HookBase): """ Writes pytorch's cuda memory statistics periodically. 
""" def __init__(self, period=20, max_runs=10): """ Args: period (int): Output stats each 'period' iterations max_runs (int): Stop the logging after 'max_runs' """ self._logger = logging.getLogger(__name__) self._period = period self._max_runs = max_runs self._runs = 0 def after_step(self): if self._runs > self._max_runs: return if (self.trainer.iter + 1) % self._period == 0 or ( self.trainer.iter == self.trainer.max_iter - 1 ): if torch.cuda.is_available(): max_reserved_mb = torch.cuda.max_memory_reserved() / 1024.0 / 1024.0 reserved_mb = torch.cuda.memory_reserved() / 1024.0 / 1024.0 max_allocated_mb = torch.cuda.max_memory_allocated() / 1024.0 / 1024.0 allocated_mb = torch.cuda.memory_allocated() / 1024.0 / 1024.0 self._logger.info( ( " iter: {} " " max_reserved_mem: {:.0f}MB " " reserved_mem: {:.0f}MB " " max_allocated_mem: {:.0f}MB " " allocated_mem: {:.0f}MB " ).format( self.trainer.iter, max_reserved_mb, reserved_mb, max_allocated_mb, allocated_mb, ) ) self._runs += 1 if self._runs == self._max_runs: mem_summary = torch.cuda.memory_summary() self._logger.info("\n" + mem_summary) torch.cuda.reset_peak_memory_stats()
# ---- source: /roof_mask_Yv8-0.5.9-py3-none-any.whl/detectron2/engine/hooks.py ----
import logging
import math
from bisect import bisect_right
from typing import List

import torch
from fvcore.common.param_scheduler import (
    CompositeParamScheduler,
    ConstantParamScheduler,
    LinearParamScheduler,
    ParamScheduler,
)

logger = logging.getLogger(__name__)


class WarmupParamScheduler(CompositeParamScheduler):
    """
    Add an initial warmup stage to another scheduler.
    """

    def __init__(
        self,
        scheduler: ParamScheduler,
        warmup_factor: float,
        warmup_length: float,
        warmup_method: str = "linear",
    ):
        """
        Args:
            scheduler: warmup will be added at the beginning of this scheduler
            warmup_factor: the factor w.r.t the initial value of ``scheduler``, e.g. 0.001
            warmup_length: the relative length (in [0, 1]) of warmup steps w.r.t the entire
                training, e.g. 0.01
            warmup_method: one of "linear" or "constant"
        """
        end_value = scheduler(warmup_length)  # the value to reach when warmup ends
        start_value = warmup_factor * scheduler(0.0)
        if warmup_method == "constant":
            warmup = ConstantParamScheduler(start_value)
        elif warmup_method == "linear":
            warmup = LinearParamScheduler(start_value, end_value)
        else:
            raise ValueError("Unknown warmup method: {}".format(warmup_method))
        super().__init__(
            [warmup, scheduler],
            interval_scaling=["rescaled", "fixed"],
            lengths=[warmup_length, 1 - warmup_length],
        )


class LRMultiplier(torch.optim.lr_scheduler._LRScheduler):
    """
    An LRScheduler which uses fvcore :class:`ParamScheduler` to multiply the
    learning rate of each param in the optimizer.
    Every step, the learning rate of each parameter becomes its initial value
    multiplied by the output of the given :class:`ParamScheduler`.

    The absolute learning rate value of each parameter can be different.
    This scheduler can be used as long as the relative scale among them does
    not change during training.
Examples: :: LRMultiplier( opt, WarmupParamScheduler( MultiStepParamScheduler( [1, 0.1, 0.01], milestones=[60000, 80000], num_updates=90000, ), 0.001, 100 / 90000 ), max_iter=90000 ) """ # NOTES: in the most general case, every LR can use its own scheduler. # Supporting this requires interaction with the optimizer when its parameter # group is initialized. For example, classyvision implements its own optimizer # that allows different schedulers for every parameter group. # To avoid this complexity, we use this class to support the most common cases # where the relative scale among all LRs stay unchanged during training. In this # case we only need a total of one scheduler that defines the relative LR multiplier. def __init__( self, optimizer: torch.optim.Optimizer, multiplier: ParamScheduler, max_iter: int, last_iter: int = -1, ): """ Args: optimizer, last_iter: See ``torch.optim.lr_scheduler._LRScheduler``. ``last_iter`` is the same as ``last_epoch``. multiplier: a fvcore ParamScheduler that defines the multiplier on every LR of the optimizer max_iter: the total number of training iterations """ if not isinstance(multiplier, ParamScheduler): raise ValueError( "_LRMultiplier(multiplier=) must be an instance of fvcore " f"ParamScheduler. Got {multiplier} instead." ) self._multiplier = multiplier self._max_iter = max_iter super().__init__(optimizer, last_epoch=last_iter) def state_dict(self): # fvcore schedulers are stateless. Only keep pytorch scheduler states return {"base_lrs": self.base_lrs, "last_epoch": self.last_epoch} def get_lr(self) -> List[float]: multiplier = self._multiplier(self.last_epoch / self._max_iter) return [base_lr * multiplier for base_lr in self.base_lrs] """ Content below is no longer needed! """ # NOTE: PyTorch's LR scheduler interface uses names that assume the LR changes # only on epoch boundaries. We typically use iteration based schedules instead. 
# As a result, "epoch" (e.g., as in self.last_epoch) should be understood to mean
# "iteration" instead.

# FIXME: ideally this would be achieved with a CombinedLRScheduler, separating
# MultiStepLR with WarmupLR but the current LRScheduler design doesn't allow it.


class WarmupMultiStepLR(torch.optim.lr_scheduler._LRScheduler):
    def __init__(
        self,
        optimizer: torch.optim.Optimizer,
        milestones: List[int],
        gamma: float = 0.1,
        warmup_factor: float = 0.001,
        warmup_iters: int = 1000,
        warmup_method: str = "linear",
        last_epoch: int = -1,
    ):
        logger.warning(
            "WarmupMultiStepLR is deprecated! Use LRMultiplier with fvcore ParamScheduler instead!"
        )
        if not list(milestones) == sorted(milestones):
            raise ValueError(
                "Milestones should be a list of increasing integers. Got {}".format(milestones)
            )
        self.milestones = milestones
        self.gamma = gamma
        self.warmup_factor = warmup_factor
        self.warmup_iters = warmup_iters
        self.warmup_method = warmup_method
        super().__init__(optimizer, last_epoch)

    def get_lr(self) -> List[float]:
        warmup_factor = _get_warmup_factor_at_iter(
            self.warmup_method, self.last_epoch, self.warmup_iters, self.warmup_factor
        )
        return [
            base_lr
            * warmup_factor
            * self.gamma ** bisect_right(self.milestones, self.last_epoch)
            for base_lr in self.base_lrs
        ]

    def _compute_values(self) -> List[float]:
        # The new interface
        return self.get_lr()


class WarmupCosineLR(torch.optim.lr_scheduler._LRScheduler):
    def __init__(
        self,
        optimizer: torch.optim.Optimizer,
        max_iters: int,
        warmup_factor: float = 0.001,
        warmup_iters: int = 1000,
        warmup_method: str = "linear",
        last_epoch: int = -1,
    ):
        logger.warning(
            "WarmupCosineLR is deprecated! Use LRMultiplier with fvcore ParamScheduler instead!"
) self.max_iters = max_iters self.warmup_factor = warmup_factor self.warmup_iters = warmup_iters self.warmup_method = warmup_method super().__init__(optimizer, last_epoch) def get_lr(self) -> List[float]: warmup_factor = _get_warmup_factor_at_iter( self.warmup_method, self.last_epoch, self.warmup_iters, self.warmup_factor ) # Different definitions of half-cosine with warmup are possible. For # simplicity we multiply the standard half-cosine schedule by the warmup # factor. An alternative is to start the period of the cosine at warmup_iters # instead of at 0. In the case that warmup_iters << max_iters the two are # very close to each other. return [ base_lr * warmup_factor * 0.5 * (1.0 + math.cos(math.pi * self.last_epoch / self.max_iters)) for base_lr in self.base_lrs ] def _compute_values(self) -> List[float]: # The new interface return self.get_lr() def _get_warmup_factor_at_iter( method: str, iter: int, warmup_iters: int, warmup_factor: float ) -> float: """ Return the learning rate warmup factor at a specific iteration. See :paper:`ImageNet in 1h` for more details. Args: method (str): warmup method; either "constant" or "linear". iter (int): iteration at which to calculate the warmup factor. warmup_iters (int): the number of warmup iterations. warmup_factor (float): the base warmup factor (the meaning changes according to the method used). Returns: float: the effective warmup factor at the given iteration. """ if iter >= warmup_iters: return 1.0 if method == "constant": return warmup_factor elif method == "linear": alpha = iter / warmup_iters return warmup_factor * (1 - alpha) + alpha else: raise ValueError("Unknown warmup method: {}".format(method))
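The warmup factor computed by ``_get_warmup_factor_at_iter`` above is pure arithmetic and easy to check by hand. A self-contained restatement (the function name is ours), evaluated at a few iterations of a typical linear warmup:

```python
def warmup_factor_at_iter(method: str, it: int, warmup_iters: int, warmup_factor: float) -> float:
    """Same logic as _get_warmup_factor_at_iter: 1.0 after warmup ends,
    otherwise a constant or linearly interpolated factor."""
    if it >= warmup_iters:
        return 1.0
    if method == "constant":
        return warmup_factor
    elif method == "linear":
        alpha = it / warmup_iters
        # Interpolate from warmup_factor (at iter 0) to 1.0 (at warmup_iters).
        return warmup_factor * (1 - alpha) + alpha
    raise ValueError("Unknown warmup method: {}".format(method))

# Linear warmup from 0.001 over 1000 iterations, sampled at iters 0, 500, 1000.
factors = [warmup_factor_at_iter("linear", i, 1000, 0.001) for i in (0, 500, 1000)]
```

At the midpoint the factor is halfway between 0.001 and 1.0 (0.5005), and it reaches exactly 1.0 once warmup ends.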
# ---- source: /roof_mask_Yv8-0.5.9-py3-none-any.whl/detectron2/solver/lr_scheduler.py ----
import torch

from detectron2.layers import nonzero_tuple

__all__ = ["subsample_labels"]


def subsample_labels(
    labels: torch.Tensor, num_samples: int, positive_fraction: float, bg_label: int
):
    """
    Return `num_samples` (or fewer, if not enough found)
    random samples from `labels` which is a mixture of positives & negatives.
    It will try to return as many positives as possible without
    exceeding `positive_fraction * num_samples`, and then try to
    fill the remaining slots with negatives.

    Args:
        labels (Tensor): (N, ) label vector with values:
            * -1: ignore
            * bg_label: background ("negative") class
            * otherwise: one or more foreground ("positive") classes
        num_samples (int): The total number of labels with value >= 0 to return.
            Values that are not sampled will be filled with -1 (ignore).
        positive_fraction (float): The number of subsampled labels with values > 0
            is `min(num_positives, int(positive_fraction * num_samples))`. The number
            of negatives sampled is `min(num_negatives, num_samples - num_positives_sampled)`.
            In other words, if there are not enough positives, the sample is filled with
            negatives. If there are also not enough negatives, then as many elements are
            sampled as is possible.
        bg_label (int): label index of background ("negative") class.

    Returns:
        pos_idx, neg_idx (Tensor):
            1D vector of indices. The total length of both is `num_samples` or fewer.
""" positive = nonzero_tuple((labels != -1) & (labels != bg_label))[0] negative = nonzero_tuple(labels == bg_label)[0] num_pos = int(num_samples * positive_fraction) # protect against not enough positive examples num_pos = min(positive.numel(), num_pos) num_neg = num_samples - num_pos # protect against not enough negative examples num_neg = min(negative.numel(), num_neg) # randomly select positive and negative examples perm1 = torch.randperm(positive.numel(), device=positive.device)[:num_pos] perm2 = torch.randperm(negative.numel(), device=negative.device)[:num_neg] pos_idx = positive[perm1] neg_idx = negative[perm2] return pos_idx, neg_idx
# ---- source: /roof_mask_Yv8-0.5.9-py3-none-any.whl/detectron2/modeling/sampling.py ----
import itertools import logging import numpy as np from collections import OrderedDict from collections.abc import Mapping from typing import Dict, List, Optional, Tuple, Union import torch from omegaconf import DictConfig, OmegaConf from torch import Tensor, nn from detectron2.layers import ShapeSpec from detectron2.structures import BitMasks, Boxes, ImageList, Instances from detectron2.utils.events import get_event_storage from .backbone import Backbone logger = logging.getLogger(__name__) def _to_container(cfg): """ mmdet will assert the type of dict/list. So convert omegaconf objects to dict/list. """ if isinstance(cfg, DictConfig): cfg = OmegaConf.to_container(cfg, resolve=True) from mmcv.utils import ConfigDict return ConfigDict(cfg) class MMDetBackbone(Backbone): """ Wrapper of mmdetection backbones to use in detectron2. mmdet backbones produce list/tuple of tensors, while detectron2 backbones produce a dict of tensors. This class wraps the given backbone to produce output in detectron2's convention, so it can be used in place of detectron2 backbones. """ def __init__( self, backbone: Union[nn.Module, Mapping], neck: Union[nn.Module, Mapping, None] = None, *, output_shapes: List[ShapeSpec], output_names: Optional[List[str]] = None, ): """ Args: backbone: either a backbone module or a mmdet config dict that defines a backbone. The backbone takes a 4D image tensor and returns a sequence of tensors. neck: either a backbone module or a mmdet config dict that defines a neck. The neck takes outputs of backbone and returns a sequence of tensors. If None, no neck is used. output_shapes: shape for every output of the backbone (or neck, if given). stride and channels are often needed. output_names: names for every output of the backbone (or neck, if given). By default, will use "out0", "out1", ... 
""" super().__init__() if isinstance(backbone, Mapping): from mmdet.models import build_backbone backbone = build_backbone(_to_container(backbone)) self.backbone = backbone if isinstance(neck, Mapping): from mmdet.models import build_neck neck = build_neck(_to_container(neck)) self.neck = neck # "Neck" weights, if any, are part of neck itself. This is the interface # of mmdet so we follow it. Reference: # https://github.com/open-mmlab/mmdetection/blob/master/mmdet/models/detectors/two_stage.py logger.info("Initializing mmdet backbone weights...") self.backbone.init_weights() # train() in mmdet modules is non-trivial, and has to be explicitly # called. Reference: # https://github.com/open-mmlab/mmdetection/blob/master/mmdet/models/backbones/resnet.py self.backbone.train() if self.neck is not None: logger.info("Initializing mmdet neck weights ...") if isinstance(self.neck, nn.Sequential): for m in self.neck: m.init_weights() else: self.neck.init_weights() self.neck.train() self._output_shapes = output_shapes if not output_names: output_names = [f"out{i}" for i in range(len(output_shapes))] self._output_names = output_names def forward(self, x) -> Dict[str, Tensor]: outs = self.backbone(x) if self.neck is not None: outs = self.neck(outs) assert isinstance( outs, (list, tuple) ), "mmdet backbone should return a list/tuple of tensors!" if len(outs) != len(self._output_shapes): raise ValueError( "Length of output_shapes does not match outputs from the mmdet backbone: " f"{len(outs)} != {len(self._output_shapes)}" ) return {k: v for k, v in zip(self._output_names, outs)} def output_shape(self) -> Dict[str, ShapeSpec]: return {k: v for k, v in zip(self._output_names, self._output_shapes)} class MMDetDetector(nn.Module): """ Wrapper of a mmdetection detector model, for detection and instance segmentation. Input/output formats of this class follow detectron2's convention, so a mmdetection model can be trained and evaluated in detectron2. 
""" def __init__( self, detector: Union[nn.Module, Mapping], *, # Default is 32 regardless of model: # https://github.com/open-mmlab/mmdetection/tree/master/configs/_base_/datasets size_divisibility=32, pixel_mean: Tuple[float], pixel_std: Tuple[float], ): """ Args: detector: a mmdet detector, or a mmdet config dict that defines a detector. size_divisibility: pad input images to multiple of this number pixel_mean: per-channel mean to normalize input image pixel_std: per-channel stddev to normalize input image """ super().__init__() if isinstance(detector, Mapping): from mmdet.models import build_detector detector = build_detector(_to_container(detector)) self.detector = detector self.size_divisibility = size_divisibility self.register_buffer("pixel_mean", torch.tensor(pixel_mean).view(-1, 1, 1), False) self.register_buffer("pixel_std", torch.tensor(pixel_std).view(-1, 1, 1), False) assert ( self.pixel_mean.shape == self.pixel_std.shape ), f"{self.pixel_mean} and {self.pixel_std} have different shapes!" 
def forward(self, batched_inputs: List[Dict[str, torch.Tensor]]): images = [x["image"].to(self.device) for x in batched_inputs] images = [(x - self.pixel_mean) / self.pixel_std for x in images] images = ImageList.from_tensors(images, size_divisibility=self.size_divisibility).tensor metas = [] rescale = {"height" in x for x in batched_inputs} if len(rescale) != 1: raise ValueError("Some inputs have original height/width, but some don't!") rescale = list(rescale)[0] output_shapes = [] for input in batched_inputs: meta = {} c, h, w = input["image"].shape meta["img_shape"] = meta["ori_shape"] = (h, w, c) if rescale: scale_factor = np.array( [w / input["width"], h / input["height"]] * 2, dtype="float32" ) ori_shape = (input["height"], input["width"]) output_shapes.append(ori_shape) meta["ori_shape"] = ori_shape + (c,) else: scale_factor = 1.0 output_shapes.append((h, w)) meta["scale_factor"] = scale_factor meta["flip"] = False padh, padw = images.shape[-2:] meta["pad_shape"] = (padh, padw, c) metas.append(meta) if self.training: gt_instances = [x["instances"].to(self.device) for x in batched_inputs] if gt_instances[0].has("gt_masks"): from mmdet.core import PolygonMasks as mm_PolygonMasks, BitmapMasks as mm_BitMasks def convert_mask(m, shape): # mmdet mask format if isinstance(m, BitMasks): return mm_BitMasks(m.tensor.cpu().numpy(), shape[0], shape[1]) else: return mm_PolygonMasks(m.polygons, shape[0], shape[1]) gt_masks = [convert_mask(x.gt_masks, x.image_size) for x in gt_instances] losses_and_metrics = self.detector.forward_train( images, metas, [x.gt_boxes.tensor for x in gt_instances], [x.gt_classes for x in gt_instances], gt_masks=gt_masks, ) else: losses_and_metrics = self.detector.forward_train( images, metas, [x.gt_boxes.tensor for x in gt_instances], [x.gt_classes for x in gt_instances], ) return _parse_losses(losses_and_metrics) else: results = self.detector.simple_test(images, metas, rescale=rescale) results = [ {"instances": _convert_mmdet_result(r, shape)} 
for r, shape in zip(results, output_shapes) ] return results @property def device(self): return self.pixel_mean.device # Reference: show_result() in # https://github.com/open-mmlab/mmdetection/blob/master/mmdet/models/detectors/base.py def _convert_mmdet_result(result, shape: Tuple[int, int]) -> Instances: if isinstance(result, tuple): bbox_result, segm_result = result if isinstance(segm_result, tuple): segm_result = segm_result[0] else: bbox_result, segm_result = result, None bboxes = torch.from_numpy(np.vstack(bbox_result)) # Nx5 bboxes, scores = bboxes[:, :4], bboxes[:, -1] labels = [ torch.full((bbox.shape[0],), i, dtype=torch.int32) for i, bbox in enumerate(bbox_result) ] labels = torch.cat(labels) inst = Instances(shape) inst.pred_boxes = Boxes(bboxes) inst.scores = scores inst.pred_classes = labels if segm_result is not None and len(labels) > 0: segm_result = list(itertools.chain(*segm_result)) segm_result = [torch.from_numpy(x) if isinstance(x, np.ndarray) else x for x in segm_result] segm_result = torch.stack(segm_result, dim=0) inst.pred_masks = segm_result return inst # reference: https://github.com/open-mmlab/mmdetection/blob/master/mmdet/models/detectors/base.py def _parse_losses(losses: Dict[str, Tensor]) -> Dict[str, Tensor]: log_vars = OrderedDict() for loss_name, loss_value in losses.items(): if isinstance(loss_value, torch.Tensor): log_vars[loss_name] = loss_value.mean() elif isinstance(loss_value, list): log_vars[loss_name] = sum(_loss.mean() for _loss in loss_value) else: raise TypeError(f"{loss_name} is not a tensor or list of tensors") if "loss" not in loss_name: # put metrics to storage; don't return them storage = get_event_storage() value = log_vars.pop(loss_name).cpu().item() storage.put_scalar(loss_name, value) return log_vars
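``_parse_losses`` above routes entries whose key contains ``"loss"`` into the dict returned for backprop, and treats everything else as a metric to log and drop. That key routing can be sketched without tensors (the helper name is ours; plain floats stand in for scalar tensor means):

```python
from collections import OrderedDict


def split_losses_and_metrics(log_vars):
    """Mimic the routing in _parse_losses: keys containing 'loss' are kept
    for the total training loss; all other keys are treated as metrics."""
    losses = OrderedDict((k, v) for k, v in log_vars.items() if "loss" in k)
    metrics = OrderedDict((k, v) for k, v in log_vars.items() if "loss" not in k)
    return losses, metrics


losses, metrics = split_losses_and_metrics(
    {"loss_cls": 0.7, "loss_bbox": 0.3, "acc": 0.91}
)
total = sum(losses.values())  # what the trainer would backprop through
```

Here ``acc`` would go to the event storage via ``put_scalar`` rather than into the loss sum.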
# ---- source: /roof_mask_Yv8-0.5.9-py3-none-any.whl/detectron2/modeling/mmdet_wrapper.py ----
import torch from torch.nn import functional as F from detectron2.structures import Instances, ROIMasks # perhaps should rename to "resize_instance" def detector_postprocess( results: Instances, output_height: int, output_width: int, mask_threshold: float = 0.5 ): """ Resize the output instances. The input images are often resized when entering an object detector. As a result, we often need the outputs of the detector in a different resolution from its inputs. This function will resize the raw outputs of an R-CNN detector to produce outputs according to the desired output resolution. Args: results (Instances): the raw outputs from the detector. `results.image_size` contains the input image resolution the detector sees. This object might be modified in-place. output_height, output_width: the desired output resolution. Returns: Instances: the resized output from the model, based on the output resolution """ if isinstance(output_width, torch.Tensor): # This shape might (but not necessarily) be tensors during tracing. # Converts integer tensors to float temporaries to ensure true # division is performed when computing scale_x and scale_y. output_width_tmp = output_width.float() output_height_tmp = output_height.float() new_size = torch.stack([output_height, output_width]) else: new_size = (output_height, output_width) output_width_tmp = output_width output_height_tmp = output_height scale_x, scale_y = ( output_width_tmp / results.image_size[1], output_height_tmp / results.image_size[0], ) results = Instances(new_size, **results.get_fields()) if results.has("pred_boxes"): output_boxes = results.pred_boxes elif results.has("proposal_boxes"): output_boxes = results.proposal_boxes else: output_boxes = None assert output_boxes is not None, "Predictions must contain boxes!" 
    output_boxes.scale(scale_x, scale_y)
    output_boxes.clip(results.image_size)

    results = results[output_boxes.nonempty()]

    if results.has("pred_masks"):
        if isinstance(results.pred_masks, ROIMasks):
            roi_masks = results.pred_masks
        else:
            # pred_masks is a tensor of shape (N, 1, M, M)
            roi_masks = ROIMasks(results.pred_masks[:, 0, :, :])
        results.pred_masks = roi_masks.to_bitmasks(
            results.pred_boxes, output_height, output_width, mask_threshold
        ).tensor  # TODO return ROIMasks/BitMask object in the future

    if results.has("pred_keypoints"):
        results.pred_keypoints[:, :, 0] *= scale_x
        results.pred_keypoints[:, :, 1] *= scale_y

    return results


def sem_seg_postprocess(result, img_size, output_height, output_width):
    """
    Return semantic segmentation predictions in the original resolution.

    The input images are often resized when entering semantic segmentor. Moreover, in
    some cases, they are also padded inside the segmentor to be divisible by the
    maximum network stride. As a result, we often need the predictions of the
    segmentor in a different resolution from its inputs.

    Args:
        result (Tensor): semantic segmentation prediction logits. A tensor of shape
            (C, H, W), where C is the number of classes, and H, W are the height
            and width of the prediction.
        img_size (tuple): image size that segmentor is taking as input.
        output_height, output_width: the desired output resolution.

    Returns:
        semantic segmentation prediction (Tensor): A tensor of the shape
            (C, output_height, output_width) that contains per-pixel soft predictions.
    """
    result = result[:, : img_size[0], : img_size[1]].expand(1, -1, -1, -1)
    result = F.interpolate(
        result, size=(output_height, output_width), mode="bilinear", align_corners=False
    )[0]
    return result
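The box rescaling in ``detector_postprocess`` reduces to multiplying coordinates by the output/input size ratio per axis and clipping to the output image, as ``Boxes.scale`` and ``Boxes.clip`` do on tensors. A tensor-free sketch of that arithmetic (the helper name is ours; boxes are XYXY tuples):

```python
def rescale_and_clip_box(box, input_hw, output_hw):
    """Scale an (x0, y0, x1, y1) box from the resolution the detector saw
    (input_hw, as height/width) to the desired output resolution, then clip
    each coordinate to the output image bounds."""
    scale_x = output_hw[1] / input_hw[1]
    scale_y = output_hw[0] / input_hw[0]
    x0, y0, x1, y1 = box
    x0, x1 = x0 * scale_x, x1 * scale_x
    y0, y1 = y0 * scale_y, y1 * scale_y

    def clamp(v, lo, hi):
        return max(lo, min(hi, v))

    h, w = output_hw
    return (clamp(x0, 0, w), clamp(y0, 0, h), clamp(x1, 0, w), clamp(y1, 0, h))


# Detector ran at 800x1333; original image was 400x667 (roughly half scale).
# The x1 coordinate extends past the output width and gets clipped to 667.
box = rescale_and_clip_box((100.0, 200.0, 1400.0, 300.0), (800, 1333), (400, 667))
```

The real implementation additionally drops boxes that become empty after clipping (``output_boxes.nonempty()``).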
# ---- source: /roof_mask_Yv8-0.5.9-py3-none-any.whl/detectron2/modeling/postprocessing.py ----
import collections import math from typing import List import torch from torch import nn from detectron2.config import configurable from detectron2.layers import ShapeSpec, move_device_like from detectron2.structures import Boxes, RotatedBoxes from detectron2.utils.registry import Registry ANCHOR_GENERATOR_REGISTRY = Registry("ANCHOR_GENERATOR") ANCHOR_GENERATOR_REGISTRY.__doc__ = """ Registry for modules that creates object detection anchors for feature maps. The registered object will be called with `obj(cfg, input_shape)`. """ class BufferList(nn.Module): """ Similar to nn.ParameterList, but for buffers """ def __init__(self, buffers): super().__init__() for i, buffer in enumerate(buffers): # Use non-persistent buffer so the values are not saved in checkpoint self.register_buffer(str(i), buffer, persistent=False) def __len__(self): return len(self._buffers) def __iter__(self): return iter(self._buffers.values()) def _create_grid_offsets( size: List[int], stride: int, offset: float, target_device_tensor: torch.Tensor ): grid_height, grid_width = size shifts_x = move_device_like( torch.arange(offset * stride, grid_width * stride, step=stride, dtype=torch.float32), target_device_tensor, ) shifts_y = move_device_like( torch.arange(offset * stride, grid_height * stride, step=stride, dtype=torch.float32), target_device_tensor, ) shift_y, shift_x = torch.meshgrid(shifts_y, shifts_x) shift_x = shift_x.reshape(-1) shift_y = shift_y.reshape(-1) return shift_x, shift_y def _broadcast_params(params, num_features, name): """ If one size (or aspect ratio) is specified and there are multiple feature maps, we "broadcast" anchors of that single size (or aspect ratio) over all feature maps. If params is list[float], or list[list[float]] with len(params) == 1, repeat it num_features time. Returns: list[list[float]]: param for each feature """ assert isinstance( params, collections.abc.Sequence ), f"{name} in anchor generator has to be a list! Got {params}." 
assert len(params), f"{name} in anchor generator cannot be empty!" if not isinstance(params[0], collections.abc.Sequence): # params is list[float] return [params] * num_features if len(params) == 1: return list(params) * num_features assert len(params) == num_features, ( f"Got {name} of length {len(params)} in anchor generator, " f"but the number of input features is {num_features}!" ) return params @ANCHOR_GENERATOR_REGISTRY.register() class DefaultAnchorGenerator(nn.Module): """ Compute anchors in the standard ways described in "Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks". """ box_dim: torch.jit.Final[int] = 4 """ the dimension of each anchor box. """ @configurable def __init__(self, *, sizes, aspect_ratios, strides, offset=0.5): """ This interface is experimental. Args: sizes (list[list[float]] or list[float]): If ``sizes`` is list[list[float]], ``sizes[i]`` is the list of anchor sizes (i.e. sqrt of anchor area) to use for the i-th feature map. If ``sizes`` is list[float], ``sizes`` is used for all feature maps. Anchor sizes are given in absolute lengths in units of the input image; they do not dynamically scale if the input image size changes. aspect_ratios (list[list[float]] or list[float]): list of aspect ratios (i.e. height / width) to use for anchors. Same "broadcast" rule for `sizes` applies. strides (list[int]): stride of each input feature. offset (float): Relative offset between the center of the first anchor and the top-left corner of the image. Value has to be in [0, 1). Recommend to use 0.5, which means half stride. 
""" super().__init__() self.strides = strides self.num_features = len(self.strides) sizes = _broadcast_params(sizes, self.num_features, "sizes") aspect_ratios = _broadcast_params(aspect_ratios, self.num_features, "aspect_ratios") self.cell_anchors = self._calculate_anchors(sizes, aspect_ratios) self.offset = offset assert 0.0 <= self.offset < 1.0, self.offset @classmethod def from_config(cls, cfg, input_shape: List[ShapeSpec]): return { "sizes": cfg.MODEL.ANCHOR_GENERATOR.SIZES, "aspect_ratios": cfg.MODEL.ANCHOR_GENERATOR.ASPECT_RATIOS, "strides": [x.stride for x in input_shape], "offset": cfg.MODEL.ANCHOR_GENERATOR.OFFSET, } def _calculate_anchors(self, sizes, aspect_ratios): cell_anchors = [ self.generate_cell_anchors(s, a).float() for s, a in zip(sizes, aspect_ratios) ] return BufferList(cell_anchors) @property @torch.jit.unused def num_cell_anchors(self): """ Alias of `num_anchors`. """ return self.num_anchors @property @torch.jit.unused def num_anchors(self): """ Returns: list[int]: Each int is the number of anchors at every pixel location, on that feature map. For example, if at every pixel we use anchors of 3 aspect ratios and 5 sizes, the number of anchors is 15. (See also ANCHOR_GENERATOR.SIZES and ANCHOR_GENERATOR.ASPECT_RATIOS in config) In standard RPN models, `num_anchors` on every feature map is the same. """ return [len(cell_anchors) for cell_anchors in self.cell_anchors] def _grid_anchors(self, grid_sizes: List[List[int]]): """ Returns: list[Tensor]: #featuremap tensors, each is (#locations x #cell_anchors) x 4 """ anchors = [] # buffers() not supported by torchscript. 
        # use named_buffers() instead
        buffers: List[torch.Tensor] = [x[1] for x in self.cell_anchors.named_buffers()]
        for size, stride, base_anchors in zip(grid_sizes, self.strides, buffers):
            shift_x, shift_y = _create_grid_offsets(size, stride, self.offset, base_anchors)
            shifts = torch.stack((shift_x, shift_y, shift_x, shift_y), dim=1)
            anchors.append((shifts.view(-1, 1, 4) + base_anchors.view(1, -1, 4)).reshape(-1, 4))

        return anchors

    def generate_cell_anchors(self, sizes=(32, 64, 128, 256, 512), aspect_ratios=(0.5, 1, 2)):
        """
        Generate a tensor storing canonical anchor boxes, which are all anchor
        boxes of different sizes and aspect_ratios centered at (0, 0).
        We can later build the set of anchors for a full feature map by
        shifting and tiling these tensors (see `meth:_grid_anchors`).

        Args:
            sizes (tuple[float]):
            aspect_ratios (tuple[float]):

        Returns:
            Tensor of shape (len(sizes) * len(aspect_ratios), 4) storing anchor boxes
                in XYXY format.
        """

        # This is different from the anchor generator defined in the original Faster R-CNN
        # code or Detectron. They yield the same AP, however the old version defines cell
        # anchors in a less natural way with a shift relative to the feature grid and
        # quantization that results in slightly different sizes for different aspect ratios.
        # See also https://github.com/facebookresearch/Detectron/issues/227

        anchors = []
        for size in sizes:
            area = size ** 2.0
            for aspect_ratio in aspect_ratios:
                # s * s = w * h
                # a = h / w
                # ... some algebra ...
                # w = sqrt(s * s / a)
                # h = a * w
                w = math.sqrt(area / aspect_ratio)
                h = aspect_ratio * w
                x0, y0, x1, y1 = -w / 2.0, -h / 2.0, w / 2.0, h / 2.0
                anchors.append([x0, y0, x1, y1])
        return torch.tensor(anchors)

    def forward(self, features: List[torch.Tensor]):
        """
        Args:
            features (list[Tensor]): list of backbone feature maps on which to generate anchors.

        Returns:
            list[Boxes]: a list of Boxes containing all the anchors for each feature map
                (i.e. the cell anchors repeated over all locations in the feature map).
The number of anchors of each feature map is Hi x Wi x num_cell_anchors, where Hi, Wi are resolution of the feature map divided by anchor stride. """ grid_sizes = [feature_map.shape[-2:] for feature_map in features] anchors_over_all_feature_maps = self._grid_anchors(grid_sizes) return [Boxes(x) for x in anchors_over_all_feature_maps] @ANCHOR_GENERATOR_REGISTRY.register() class RotatedAnchorGenerator(nn.Module): """ Compute rotated anchors used by Rotated RPN (RRPN), described in "Arbitrary-Oriented Scene Text Detection via Rotation Proposals". """ box_dim: int = 5 """ the dimension of each anchor box. """ @configurable def __init__(self, *, sizes, aspect_ratios, strides, angles, offset=0.5): """ This interface is experimental. Args: sizes (list[list[float]] or list[float]): If sizes is list[list[float]], sizes[i] is the list of anchor sizes (i.e. sqrt of anchor area) to use for the i-th feature map. If sizes is list[float], the sizes are used for all feature maps. Anchor sizes are given in absolute lengths in units of the input image; they do not dynamically scale if the input image size changes. aspect_ratios (list[list[float]] or list[float]): list of aspect ratios (i.e. height / width) to use for anchors. Same "broadcast" rule for `sizes` applies. strides (list[int]): stride of each input feature. angles (list[list[float]] or list[float]): list of angles (in degrees CCW) to use for anchors. Same "broadcast" rule for `sizes` applies. offset (float): Relative offset between the center of the first anchor and the top-left corner of the image. Value has to be in [0, 1). Recommend to use 0.5, which means half stride. 
""" super().__init__() self.strides = strides self.num_features = len(self.strides) sizes = _broadcast_params(sizes, self.num_features, "sizes") aspect_ratios = _broadcast_params(aspect_ratios, self.num_features, "aspect_ratios") angles = _broadcast_params(angles, self.num_features, "angles") self.cell_anchors = self._calculate_anchors(sizes, aspect_ratios, angles) self.offset = offset assert 0.0 <= self.offset < 1.0, self.offset @classmethod def from_config(cls, cfg, input_shape: List[ShapeSpec]): return { "sizes": cfg.MODEL.ANCHOR_GENERATOR.SIZES, "aspect_ratios": cfg.MODEL.ANCHOR_GENERATOR.ASPECT_RATIOS, "strides": [x.stride for x in input_shape], "offset": cfg.MODEL.ANCHOR_GENERATOR.OFFSET, "angles": cfg.MODEL.ANCHOR_GENERATOR.ANGLES, } def _calculate_anchors(self, sizes, aspect_ratios, angles): cell_anchors = [ self.generate_cell_anchors(size, aspect_ratio, angle).float() for size, aspect_ratio, angle in zip(sizes, aspect_ratios, angles) ] return BufferList(cell_anchors) @property def num_cell_anchors(self): """ Alias of `num_anchors`. """ return self.num_anchors @property def num_anchors(self): """ Returns: list[int]: Each int is the number of anchors at every pixel location, on that feature map. For example, if at every pixel we use anchors of 3 aspect ratios, 2 sizes and 5 angles, the number of anchors is 30. (See also ANCHOR_GENERATOR.SIZES, ANCHOR_GENERATOR.ASPECT_RATIOS and ANCHOR_GENERATOR.ANGLES in config) In standard RRPN models, `num_anchors` on every feature map is the same. 
""" return [len(cell_anchors) for cell_anchors in self.cell_anchors] def _grid_anchors(self, grid_sizes): anchors = [] for size, stride, base_anchors in zip(grid_sizes, self.strides, self.cell_anchors): shift_x, shift_y = _create_grid_offsets(size, stride, self.offset, base_anchors) zeros = torch.zeros_like(shift_x) shifts = torch.stack((shift_x, shift_y, zeros, zeros, zeros), dim=1) anchors.append((shifts.view(-1, 1, 5) + base_anchors.view(1, -1, 5)).reshape(-1, 5)) return anchors def generate_cell_anchors( self, sizes=(32, 64, 128, 256, 512), aspect_ratios=(0.5, 1, 2), angles=(-90, -60, -30, 0, 30, 60, 90), ): """ Generate a tensor storing canonical anchor boxes, which are all anchor boxes of different sizes, aspect_ratios, angles centered at (0, 0). We can later build the set of anchors for a full feature map by shifting and tiling these tensors (see `meth:_grid_anchors`). Args: sizes (tuple[float]): aspect_ratios (tuple[float]]): angles (tuple[float]]): Returns: Tensor of shape (len(sizes) * len(aspect_ratios) * len(angles), 5) storing anchor boxes in (x_ctr, y_ctr, w, h, angle) format. """ anchors = [] for size in sizes: area = size ** 2.0 for aspect_ratio in aspect_ratios: # s * s = w * h # a = h / w # ... some algebra ... # w = sqrt(s * s / a) # h = a * w w = math.sqrt(area / aspect_ratio) h = aspect_ratio * w anchors.extend([0, 0, w, h, a] for a in angles) return torch.tensor(anchors) def forward(self, features): """ Args: features (list[Tensor]): list of backbone feature maps on which to generate anchors. Returns: list[RotatedBoxes]: a list of Boxes containing all the anchors for each feature map (i.e. the cell anchors repeated over all locations in the feature map). The number of anchors of each feature map is Hi x Wi x num_cell_anchors, where Hi, Wi are resolution of the feature map divided by anchor stride. 
""" grid_sizes = [feature_map.shape[-2:] for feature_map in features] anchors_over_all_feature_maps = self._grid_anchors(grid_sizes) return [RotatedBoxes(x) for x in anchors_over_all_feature_maps] def build_anchor_generator(cfg, input_shape): """ Built an anchor generator from `cfg.MODEL.ANCHOR_GENERATOR.NAME`. """ anchor_generator = cfg.MODEL.ANCHOR_GENERATOR.NAME return ANCHOR_GENERATOR_REGISTRY.get(anchor_generator)(cfg, input_shape)
# --- end of detectron2/modeling/anchor_generator.py ---
# (packaged in roof_mask_Yv8-0.5.9-py3-none-any.whl, from PyPI)
import math from typing import List, Tuple, Union import torch from fvcore.nn import giou_loss, smooth_l1_loss from torch.nn import functional as F from detectron2.layers import cat, ciou_loss, diou_loss from detectron2.structures import Boxes # Value for clamping large dw and dh predictions. The heuristic is that we clamp # such that dw and dh are no larger than what would transform a 16px box into a # 1000px box (based on a small anchor, 16px, and a typical image size, 1000px). _DEFAULT_SCALE_CLAMP = math.log(1000.0 / 16) __all__ = ["Box2BoxTransform", "Box2BoxTransformRotated", "Box2BoxTransformLinear"] @torch.jit.script class Box2BoxTransform(object): """ The box-to-box transform defined in R-CNN. The transformation is parameterized by 4 deltas: (dx, dy, dw, dh). The transformation scales the box's width and height by exp(dw), exp(dh) and shifts a box's center by the offset (dx * width, dy * height). """ def __init__( self, weights: Tuple[float, float, float, float], scale_clamp: float = _DEFAULT_SCALE_CLAMP ): """ Args: weights (4-element tuple): Scaling factors that are applied to the (dx, dy, dw, dh) deltas. In Fast R-CNN, these were originally set such that the deltas have unit variance; now they are treated as hyperparameters of the system. scale_clamp (float): When predicting deltas, the predicted box scaling factors (dw and dh) are clamped such that they are <= scale_clamp. """ self.weights = weights self.scale_clamp = scale_clamp def get_deltas(self, src_boxes, target_boxes): """ Get box regression transformation deltas (dx, dy, dw, dh) that can be used to transform the `src_boxes` into the `target_boxes`. That is, the relation ``target_boxes == self.apply_deltas(deltas, src_boxes)`` is true (unless any delta is too large and is clamped). Args: src_boxes (Tensor): source boxes, e.g., object proposals target_boxes (Tensor): target of the transformation, e.g., ground-truth boxes. 
""" assert isinstance(src_boxes, torch.Tensor), type(src_boxes) assert isinstance(target_boxes, torch.Tensor), type(target_boxes) src_widths = src_boxes[:, 2] - src_boxes[:, 0] src_heights = src_boxes[:, 3] - src_boxes[:, 1] src_ctr_x = src_boxes[:, 0] + 0.5 * src_widths src_ctr_y = src_boxes[:, 1] + 0.5 * src_heights target_widths = target_boxes[:, 2] - target_boxes[:, 0] target_heights = target_boxes[:, 3] - target_boxes[:, 1] target_ctr_x = target_boxes[:, 0] + 0.5 * target_widths target_ctr_y = target_boxes[:, 1] + 0.5 * target_heights wx, wy, ww, wh = self.weights dx = wx * (target_ctr_x - src_ctr_x) / src_widths dy = wy * (target_ctr_y - src_ctr_y) / src_heights dw = ww * torch.log(target_widths / src_widths) dh = wh * torch.log(target_heights / src_heights) deltas = torch.stack((dx, dy, dw, dh), dim=1) assert (src_widths > 0).all().item(), "Input boxes to Box2BoxTransform are not valid!" return deltas def apply_deltas(self, deltas, boxes): """ Apply transformation `deltas` (dx, dy, dw, dh) to `boxes`. Args: deltas (Tensor): transformation deltas of shape (N, k*4), where k >= 1. deltas[i] represents k potentially different class-specific box transformations for the single box boxes[i]. 
boxes (Tensor): boxes to transform, of shape (N, 4) """ deltas = deltas.float() # ensure fp32 for decoding precision boxes = boxes.to(deltas.dtype) widths = boxes[:, 2] - boxes[:, 0] heights = boxes[:, 3] - boxes[:, 1] ctr_x = boxes[:, 0] + 0.5 * widths ctr_y = boxes[:, 1] + 0.5 * heights wx, wy, ww, wh = self.weights dx = deltas[:, 0::4] / wx dy = deltas[:, 1::4] / wy dw = deltas[:, 2::4] / ww dh = deltas[:, 3::4] / wh # Prevent sending too large values into torch.exp() dw = torch.clamp(dw, max=self.scale_clamp) dh = torch.clamp(dh, max=self.scale_clamp) pred_ctr_x = dx * widths[:, None] + ctr_x[:, None] pred_ctr_y = dy * heights[:, None] + ctr_y[:, None] pred_w = torch.exp(dw) * widths[:, None] pred_h = torch.exp(dh) * heights[:, None] x1 = pred_ctr_x - 0.5 * pred_w y1 = pred_ctr_y - 0.5 * pred_h x2 = pred_ctr_x + 0.5 * pred_w y2 = pred_ctr_y + 0.5 * pred_h pred_boxes = torch.stack((x1, y1, x2, y2), dim=-1) return pred_boxes.reshape(deltas.shape) @torch.jit.script class Box2BoxTransformRotated(object): """ The box-to-box transform defined in Rotated R-CNN. The transformation is parameterized by 5 deltas: (dx, dy, dw, dh, da). The transformation scales the box's width and height by exp(dw), exp(dh), shifts a box's center by the offset (dx * width, dy * height), and rotate a box's angle by da (radians). Note: angles of deltas are in radians while angles of boxes are in degrees. """ def __init__( self, weights: Tuple[float, float, float, float, float], scale_clamp: float = _DEFAULT_SCALE_CLAMP, ): """ Args: weights (5-element tuple): Scaling factors that are applied to the (dx, dy, dw, dh, da) deltas. These are treated as hyperparameters of the system. scale_clamp (float): When predicting deltas, the predicted box scaling factors (dw and dh) are clamped such that they are <= scale_clamp. 
""" self.weights = weights self.scale_clamp = scale_clamp def get_deltas(self, src_boxes, target_boxes): """ Get box regression transformation deltas (dx, dy, dw, dh, da) that can be used to transform the `src_boxes` into the `target_boxes`. That is, the relation ``target_boxes == self.apply_deltas(deltas, src_boxes)`` is true (unless any delta is too large and is clamped). Args: src_boxes (Tensor): Nx5 source boxes, e.g., object proposals target_boxes (Tensor): Nx5 target of the transformation, e.g., ground-truth boxes. """ assert isinstance(src_boxes, torch.Tensor), type(src_boxes) assert isinstance(target_boxes, torch.Tensor), type(target_boxes) src_ctr_x, src_ctr_y, src_widths, src_heights, src_angles = torch.unbind(src_boxes, dim=1) target_ctr_x, target_ctr_y, target_widths, target_heights, target_angles = torch.unbind( target_boxes, dim=1 ) wx, wy, ww, wh, wa = self.weights dx = wx * (target_ctr_x - src_ctr_x) / src_widths dy = wy * (target_ctr_y - src_ctr_y) / src_heights dw = ww * torch.log(target_widths / src_widths) dh = wh * torch.log(target_heights / src_heights) # Angles of deltas are in radians while angles of boxes are in degrees. # the conversion to radians serve as a way to normalize the values da = target_angles - src_angles da = (da + 180.0) % 360.0 - 180.0 # make it in [-180, 180) da *= wa * math.pi / 180.0 deltas = torch.stack((dx, dy, dw, dh, da), dim=1) assert ( (src_widths > 0).all().item() ), "Input boxes to Box2BoxTransformRotated are not valid!" return deltas def apply_deltas(self, deltas, boxes): """ Apply transformation `deltas` (dx, dy, dw, dh, da) to `boxes`. Args: deltas (Tensor): transformation deltas of shape (N, k*5). deltas[i] represents box transformation for the single box boxes[i]. 
boxes (Tensor): boxes to transform, of shape (N, 5) """ assert deltas.shape[1] % 5 == 0 and boxes.shape[1] == 5 boxes = boxes.to(deltas.dtype).unsqueeze(2) ctr_x = boxes[:, 0] ctr_y = boxes[:, 1] widths = boxes[:, 2] heights = boxes[:, 3] angles = boxes[:, 4] wx, wy, ww, wh, wa = self.weights dx = deltas[:, 0::5] / wx dy = deltas[:, 1::5] / wy dw = deltas[:, 2::5] / ww dh = deltas[:, 3::5] / wh da = deltas[:, 4::5] / wa # Prevent sending too large values into torch.exp() dw = torch.clamp(dw, max=self.scale_clamp) dh = torch.clamp(dh, max=self.scale_clamp) pred_boxes = torch.zeros_like(deltas) pred_boxes[:, 0::5] = dx * widths + ctr_x # x_ctr pred_boxes[:, 1::5] = dy * heights + ctr_y # y_ctr pred_boxes[:, 2::5] = torch.exp(dw) * widths # width pred_boxes[:, 3::5] = torch.exp(dh) * heights # height # Following original RRPN implementation, # angles of deltas are in radians while angles of boxes are in degrees. pred_angle = da * 180.0 / math.pi + angles pred_angle = (pred_angle + 180.0) % 360.0 - 180.0 # make it in [-180, 180) pred_boxes[:, 4::5] = pred_angle return pred_boxes class Box2BoxTransformLinear(object): """ The linear box-to-box transform defined in FCOS. The transformation is parameterized by the distance from the center of (square) src box to 4 edges of the target box. """ def __init__(self, normalize_by_size=True): """ Args: normalize_by_size: normalize deltas by the size of src (anchor) boxes. """ self.normalize_by_size = normalize_by_size def get_deltas(self, src_boxes, target_boxes): """ Get box regression transformation deltas (dx1, dy1, dx2, dy2) that can be used to transform the `src_boxes` into the `target_boxes`. That is, the relation ``target_boxes == self.apply_deltas(deltas, src_boxes)`` is true. The center of src must be inside target boxes. Args: src_boxes (Tensor): square source boxes, e.g., anchors target_boxes (Tensor): target of the transformation, e.g., ground-truth boxes. 
""" assert isinstance(src_boxes, torch.Tensor), type(src_boxes) assert isinstance(target_boxes, torch.Tensor), type(target_boxes) src_ctr_x = 0.5 * (src_boxes[:, 0] + src_boxes[:, 2]) src_ctr_y = 0.5 * (src_boxes[:, 1] + src_boxes[:, 3]) target_l = src_ctr_x - target_boxes[:, 0] target_t = src_ctr_y - target_boxes[:, 1] target_r = target_boxes[:, 2] - src_ctr_x target_b = target_boxes[:, 3] - src_ctr_y deltas = torch.stack((target_l, target_t, target_r, target_b), dim=1) if self.normalize_by_size: stride_w = src_boxes[:, 2] - src_boxes[:, 0] stride_h = src_boxes[:, 3] - src_boxes[:, 1] strides = torch.stack([stride_w, stride_h, stride_w, stride_h], axis=1) deltas = deltas / strides return deltas def apply_deltas(self, deltas, boxes): """ Apply transformation `deltas` (dx1, dy1, dx2, dy2) to `boxes`. Args: deltas (Tensor): transformation deltas of shape (N, k*4), where k >= 1. deltas[i] represents k potentially different class-specific box transformations for the single box boxes[i]. boxes (Tensor): boxes to transform, of shape (N, 4) """ # Ensure the output is a valid box. 
        # See Sec 2.1 of https://arxiv.org/abs/2006.09214
        deltas = F.relu(deltas)
        boxes = boxes.to(deltas.dtype)
        ctr_x = 0.5 * (boxes[:, 0] + boxes[:, 2])
        ctr_y = 0.5 * (boxes[:, 1] + boxes[:, 3])
        if self.normalize_by_size:
            stride_w = boxes[:, 2] - boxes[:, 0]
            stride_h = boxes[:, 3] - boxes[:, 1]
            strides = torch.stack([stride_w, stride_h, stride_w, stride_h], axis=1)
            deltas = deltas * strides

        l = deltas[:, 0::4]
        t = deltas[:, 1::4]
        r = deltas[:, 2::4]
        b = deltas[:, 3::4]

        pred_boxes = torch.zeros_like(deltas)
        pred_boxes[:, 0::4] = ctr_x[:, None] - l  # x1
        pred_boxes[:, 1::4] = ctr_y[:, None] - t  # y1
        pred_boxes[:, 2::4] = ctr_x[:, None] + r  # x2
        pred_boxes[:, 3::4] = ctr_y[:, None] + b  # y2
        return pred_boxes


def _dense_box_regression_loss(
    anchors: List[Union[Boxes, torch.Tensor]],
    box2box_transform: Box2BoxTransform,
    pred_anchor_deltas: List[torch.Tensor],
    gt_boxes: List[torch.Tensor],
    fg_mask: torch.Tensor,
    box_reg_loss_type="smooth_l1",
    smooth_l1_beta=0.0,
):
    """
    Compute loss for dense multi-level box regression.
    Loss is accumulated over ``fg_mask``.

    Args:
        anchors: #lvl anchor boxes, each is (HixWixA, 4)
        pred_anchor_deltas: #lvl predictions, each is (N, HixWixA, 4)
        gt_boxes: N ground truth boxes, each has shape (R, 4) (R = sum(Hi * Wi * A))
        fg_mask: the foreground boolean mask of shape (N, R) to compute loss on
        box_reg_loss_type (str): Loss type to use. Supported losses: "smooth_l1", "giou",
            "diou", "ciou".
        smooth_l1_beta (float): beta parameter for the smooth L1 regression loss. Default to
            use L1 loss.
Only used when `box_reg_loss_type` is "smooth_l1" """ if isinstance(anchors[0], Boxes): anchors = type(anchors[0]).cat(anchors).tensor # (R, 4) else: anchors = cat(anchors) if box_reg_loss_type == "smooth_l1": gt_anchor_deltas = [box2box_transform.get_deltas(anchors, k) for k in gt_boxes] gt_anchor_deltas = torch.stack(gt_anchor_deltas) # (N, R, 4) loss_box_reg = smooth_l1_loss( cat(pred_anchor_deltas, dim=1)[fg_mask], gt_anchor_deltas[fg_mask], beta=smooth_l1_beta, reduction="sum", ) elif box_reg_loss_type == "giou": pred_boxes = [ box2box_transform.apply_deltas(k, anchors) for k in cat(pred_anchor_deltas, dim=1) ] loss_box_reg = giou_loss( torch.stack(pred_boxes)[fg_mask], torch.stack(gt_boxes)[fg_mask], reduction="sum" ) elif box_reg_loss_type == "diou": pred_boxes = [ box2box_transform.apply_deltas(k, anchors) for k in cat(pred_anchor_deltas, dim=1) ] loss_box_reg = diou_loss( torch.stack(pred_boxes)[fg_mask], torch.stack(gt_boxes)[fg_mask], reduction="sum" ) elif box_reg_loss_type == "ciou": pred_boxes = [ box2box_transform.apply_deltas(k, anchors) for k in cat(pred_anchor_deltas, dim=1) ] loss_box_reg = ciou_loss( torch.stack(pred_boxes)[fg_mask], torch.stack(gt_boxes)[fg_mask], reduction="sum" ) else: raise ValueError(f"Invalid dense box regression loss type '{box_reg_loss_type}'") return loss_box_reg
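The `Box2BoxTransform` encode/decode pair is exactly invertible (absent clamping), as its docstring promises: `target_boxes == apply_deltas(get_deltas(src, tgt), src)`. The following is my own scalar (single-box, pure-Python) restatement of the two methods, without the scale clamp; the `(10, 10, 5, 5)` weights are just an illustrative choice.

```python
import math

# (wx, wy, ww, wh) weights; illustrative values, not a required config
W = (10.0, 10.0, 5.0, 5.0)


def get_deltas(src, tgt, w=W):
    """Scalar version of Box2BoxTransform.get_deltas for one XYXY box pair."""
    sw, sh = src[2] - src[0], src[3] - src[1]
    scx, scy = src[0] + 0.5 * sw, src[1] + 0.5 * sh
    tw, th = tgt[2] - tgt[0], tgt[3] - tgt[1]
    tcx, tcy = tgt[0] + 0.5 * tw, tgt[1] + 0.5 * th
    return (w[0] * (tcx - scx) / sw, w[1] * (tcy - scy) / sh,
            w[2] * math.log(tw / sw), w[3] * math.log(th / sh))


def apply_deltas(d, box, w=W):
    """Scalar version of Box2BoxTransform.apply_deltas (scale clamp omitted)."""
    bw, bh = box[2] - box[0], box[3] - box[1]
    cx, cy = box[0] + 0.5 * bw, box[1] + 0.5 * bh
    pcx, pcy = d[0] / w[0] * bw + cx, d[1] / w[1] * bh + cy
    pw, ph = math.exp(d[2] / w[2]) * bw, math.exp(d[3] / w[3]) * bh
    return (pcx - 0.5 * pw, pcy - 0.5 * ph, pcx + 0.5 * pw, pcy + 0.5 * ph)


src = (10.0, 10.0, 50.0, 30.0)  # e.g. a proposal box
tgt = (12.0, 8.0, 60.0, 36.0)   # e.g. a ground-truth box
out = apply_deltas(get_deltas(src, tgt), src)
# round trip recovers the target exactly
assert all(abs(o - t) < 1e-6 for o, t in zip(out, tgt))
```

The clamp in the real implementation only matters for predicted (not encoded) deltas, where `dw`/`dh` can be arbitrarily large before `torch.exp`.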
# --- end of detectron2/modeling/box_regression.py ---
# (packaged in roof_mask_Yv8-0.5.9-py3-none-any.whl, from PyPI)
import math import fvcore.nn.weight_init as weight_init import torch import torch.nn.functional as F from torch import nn from detectron2.layers import Conv2d, ShapeSpec, get_norm from .backbone import Backbone from .build import BACKBONE_REGISTRY from .resnet import build_resnet_backbone __all__ = ["build_resnet_fpn_backbone", "build_retinanet_resnet_fpn_backbone", "FPN"] class FPN(Backbone): """ This module implements :paper:`FPN`. It creates pyramid features built on top of some input feature maps. """ _fuse_type: torch.jit.Final[str] def __init__( self, bottom_up, in_features, out_channels, norm="", top_block=None, fuse_type="sum" ): """ Args: bottom_up (Backbone): module representing the bottom up subnetwork. Must be a subclass of :class:`Backbone`. The multi-scale feature maps generated by the bottom up network, and listed in `in_features`, are used to generate FPN levels. in_features (list[str]): names of the input feature maps coming from the backbone to which FPN is attached. For example, if the backbone produces ["res2", "res3", "res4"], any *contiguous* sublist of these may be used; order must be from high to low resolution. out_channels (int): number of channels in the output feature maps. norm (str): the normalization to use. top_block (nn.Module or None): if provided, an extra operation will be performed on the output of the last (smallest resolution) FPN output, and the result will extend the result list. The top_block further downsamples the feature map. It must have an attribute "num_levels", meaning the number of extra FPN levels added by this block, and "in_feature", which is a string representing its input feature (e.g., p5). fuse_type (str): types for fusing the top down features and the lateral ones. It can be "sum" (default), which sums up element-wise; or "avg", which takes the element-wise mean of the two. 
""" super(FPN, self).__init__() assert isinstance(bottom_up, Backbone) assert in_features, in_features # Feature map strides and channels from the bottom up network (e.g. ResNet) input_shapes = bottom_up.output_shape() strides = [input_shapes[f].stride for f in in_features] in_channels_per_feature = [input_shapes[f].channels for f in in_features] _assert_strides_are_log2_contiguous(strides) lateral_convs = [] output_convs = [] use_bias = norm == "" for idx, in_channels in enumerate(in_channels_per_feature): lateral_norm = get_norm(norm, out_channels) output_norm = get_norm(norm, out_channels) lateral_conv = Conv2d( in_channels, out_channels, kernel_size=1, bias=use_bias, norm=lateral_norm ) output_conv = Conv2d( out_channels, out_channels, kernel_size=3, stride=1, padding=1, bias=use_bias, norm=output_norm, ) weight_init.c2_xavier_fill(lateral_conv) weight_init.c2_xavier_fill(output_conv) stage = int(math.log2(strides[idx])) self.add_module("fpn_lateral{}".format(stage), lateral_conv) self.add_module("fpn_output{}".format(stage), output_conv) lateral_convs.append(lateral_conv) output_convs.append(output_conv) # Place convs into top-down order (from low to high resolution) # to make the top-down computation in forward clearer. self.lateral_convs = lateral_convs[::-1] self.output_convs = output_convs[::-1] self.top_block = top_block self.in_features = tuple(in_features) self.bottom_up = bottom_up # Return feature names are "p<stage>", like ["p2", "p3", ..., "p6"] self._out_feature_strides = {"p{}".format(int(math.log2(s))): s for s in strides} # top block output feature maps. 
if self.top_block is not None: for s in range(stage, stage + self.top_block.num_levels): self._out_feature_strides["p{}".format(s + 1)] = 2 ** (s + 1) self._out_features = list(self._out_feature_strides.keys()) self._out_feature_channels = {k: out_channels for k in self._out_features} self._size_divisibility = strides[-1] assert fuse_type in {"avg", "sum"} self._fuse_type = fuse_type @property def size_divisibility(self): return self._size_divisibility def forward(self, x): """ Args: input (dict[str->Tensor]): mapping feature map name (e.g., "res5") to feature map tensor for each feature level in high to low resolution order. Returns: dict[str->Tensor]: mapping from feature map name to FPN feature map tensor in high to low resolution order. Returned feature names follow the FPN paper convention: "p<stage>", where stage has stride = 2 ** stage e.g., ["p2", "p3", ..., "p6"]. """ bottom_up_features = self.bottom_up(x) results = [] prev_features = self.lateral_convs[0](bottom_up_features[self.in_features[-1]]) results.append(self.output_convs[0](prev_features)) # Reverse feature maps into top-down order (from low to high resolution) for idx, (lateral_conv, output_conv) in enumerate( zip(self.lateral_convs, self.output_convs) ): # Slicing of ModuleList is not supported https://github.com/pytorch/pytorch/issues/47336 # Therefore we loop over all modules but skip the first one if idx > 0: features = self.in_features[-idx - 1] features = bottom_up_features[features] top_down_features = F.interpolate(prev_features, scale_factor=2.0, mode="nearest") lateral_features = lateral_conv(features) prev_features = lateral_features + top_down_features if self._fuse_type == "avg": prev_features /= 2 results.insert(0, output_conv(prev_features)) if self.top_block is not None: if self.top_block.in_feature in bottom_up_features: top_block_in_feature = bottom_up_features[self.top_block.in_feature] else: top_block_in_feature = results[self._out_features.index(self.top_block.in_feature)] 
results.extend(self.top_block(top_block_in_feature)) assert len(self._out_features) == len(results) return {f: res for f, res in zip(self._out_features, results)} def output_shape(self): return { name: ShapeSpec( channels=self._out_feature_channels[name], stride=self._out_feature_strides[name] ) for name in self._out_features } def _assert_strides_are_log2_contiguous(strides): """ Assert that each stride is 2x times its preceding stride, i.e. "contiguous in log2". """ for i, stride in enumerate(strides[1:], 1): assert stride == 2 * strides[i - 1], "Strides {} {} are not log2 contiguous".format( stride, strides[i - 1] ) class LastLevelMaxPool(nn.Module): """ This module is used in the original FPN to generate a downsampled P6 feature from P5. """ def __init__(self): super().__init__() self.num_levels = 1 self.in_feature = "p5" def forward(self, x): return [F.max_pool2d(x, kernel_size=1, stride=2, padding=0)] class LastLevelP6P7(nn.Module): """ This module is used in RetinaNet to generate extra layers, P6 and P7 from C5 feature. """ def __init__(self, in_channels, out_channels, in_feature="res5"): super().__init__() self.num_levels = 2 self.in_feature = in_feature self.p6 = nn.Conv2d(in_channels, out_channels, 3, 2, 1) self.p7 = nn.Conv2d(out_channels, out_channels, 3, 2, 1) for module in [self.p6, self.p7]: weight_init.c2_xavier_fill(module) def forward(self, c5): p6 = self.p6(c5) p7 = self.p7(F.relu(p6)) return [p6, p7] @BACKBONE_REGISTRY.register() def build_resnet_fpn_backbone(cfg, input_shape: ShapeSpec): """ Args: cfg: a detectron2 CfgNode Returns: backbone (Backbone): backbone module, must be a subclass of :class:`Backbone`. 
""" bottom_up = build_resnet_backbone(cfg, input_shape) in_features = cfg.MODEL.FPN.IN_FEATURES out_channels = cfg.MODEL.FPN.OUT_CHANNELS backbone = FPN( bottom_up=bottom_up, in_features=in_features, out_channels=out_channels, norm=cfg.MODEL.FPN.NORM, top_block=LastLevelMaxPool(), fuse_type=cfg.MODEL.FPN.FUSE_TYPE, ) return backbone @BACKBONE_REGISTRY.register() def build_retinanet_resnet_fpn_backbone(cfg, input_shape: ShapeSpec): """ Args: cfg: a detectron2 CfgNode Returns: backbone (Backbone): backbone module, must be a subclass of :class:`Backbone`. """ bottom_up = build_resnet_backbone(cfg, input_shape) in_features = cfg.MODEL.FPN.IN_FEATURES out_channels = cfg.MODEL.FPN.OUT_CHANNELS in_channels_p6p7 = bottom_up.output_shape()["res5"].channels backbone = FPN( bottom_up=bottom_up, in_features=in_features, out_channels=out_channels, norm=cfg.MODEL.FPN.NORM, top_block=LastLevelP6P7(in_channels_p6p7, out_channels), fuse_type=cfg.MODEL.FPN.FUSE_TYPE, ) return backbone
# --- end of detectron2/modeling/backbone/fpn.py ---
# (packaged in roof_mask_Yv8-0.5.9-py3-none-any.whl, from PyPI)
import numpy as np
from torch import nn

from detectron2.layers import CNNBlockBase, ShapeSpec, get_norm

from .backbone import Backbone

__all__ = [
    "AnyNet",
    "RegNet",
    "ResStem",
    "SimpleStem",
    "VanillaBlock",
    "ResBasicBlock",
    "ResBottleneckBlock",
]


def conv2d(w_in, w_out, k, *, stride=1, groups=1, bias=False):
    """Helper for building a conv2d layer."""
    assert k % 2 == 1, "Only odd size kernels supported to avoid padding issues."
    s, p, g, b = stride, (k - 1) // 2, groups, bias
    return nn.Conv2d(w_in, w_out, k, stride=s, padding=p, groups=g, bias=b)


def gap2d():
    """Helper for building a global average pooling layer."""
    return nn.AdaptiveAvgPool2d((1, 1))


def pool2d(k, *, stride=1):
    """Helper for building a pool2d layer."""
    assert k % 2 == 1, "Only odd size kernels supported to avoid padding issues."
    return nn.MaxPool2d(k, stride=stride, padding=(k - 1) // 2)


def init_weights(m):
    """Performs ResNet-style weight initialization."""
    if isinstance(m, nn.Conv2d):
        # Note that there is no bias due to BN
        fan_out = m.kernel_size[0] * m.kernel_size[1] * m.out_channels
        m.weight.data.normal_(mean=0.0, std=np.sqrt(2.0 / fan_out))
    elif isinstance(m, nn.BatchNorm2d):
        m.weight.data.fill_(1.0)
        m.bias.data.zero_()
    elif isinstance(m, nn.Linear):
        m.weight.data.normal_(mean=0.0, std=0.01)
        m.bias.data.zero_()


class ResStem(CNNBlockBase):
    """ResNet stem for ImageNet: 7x7, BN, AF, MaxPool."""

    def __init__(self, w_in, w_out, norm, activation_class):
        super().__init__(w_in, w_out, 4)
        self.conv = conv2d(w_in, w_out, 7, stride=2)
        self.bn = get_norm(norm, w_out)
        self.af = activation_class()
        self.pool = pool2d(3, stride=2)

    def forward(self, x):
        for layer in self.children():
            x = layer(x)
        return x


class SimpleStem(CNNBlockBase):
    """Simple stem for ImageNet: 3x3, BN, AF."""

    def __init__(self, w_in, w_out, norm, activation_class):
        super().__init__(w_in, w_out, 2)
        self.conv = conv2d(w_in, w_out, 3, stride=2)
        self.bn = get_norm(norm, w_out)
        self.af = activation_class()

    def forward(self, x):
        for layer in self.children():
            x = layer(x)
        return x


class SE(nn.Module):
    """Squeeze-and-Excitation (SE) block: AvgPool, FC, Act, FC, Sigmoid."""

    def __init__(self, w_in, w_se, activation_class):
        super().__init__()
        self.avg_pool = gap2d()
        self.f_ex = nn.Sequential(
            conv2d(w_in, w_se, 1, bias=True),
            activation_class(),
            conv2d(w_se, w_in, 1, bias=True),
            nn.Sigmoid(),
        )

    def forward(self, x):
        return x * self.f_ex(self.avg_pool(x))


class VanillaBlock(CNNBlockBase):
    """Vanilla block: [3x3 conv, BN, Relu] x2."""

    def __init__(self, w_in, w_out, stride, norm, activation_class, _params):
        super().__init__(w_in, w_out, stride)
        self.a = conv2d(w_in, w_out, 3, stride=stride)
        self.a_bn = get_norm(norm, w_out)
        self.a_af = activation_class()
        self.b = conv2d(w_out, w_out, 3)
        self.b_bn = get_norm(norm, w_out)
        self.b_af = activation_class()

    def forward(self, x):
        for layer in self.children():
            x = layer(x)
        return x


class BasicTransform(nn.Module):
    """Basic transformation: [3x3 conv, BN, Relu] x2."""

    def __init__(self, w_in, w_out, stride, norm, activation_class, _params):
        super().__init__()
        self.a = conv2d(w_in, w_out, 3, stride=stride)
        self.a_bn = get_norm(norm, w_out)
        self.a_af = activation_class()
        self.b = conv2d(w_out, w_out, 3)
        self.b_bn = get_norm(norm, w_out)
        self.b_bn.final_bn = True

    def forward(self, x):
        for layer in self.children():
            x = layer(x)
        return x


class ResBasicBlock(CNNBlockBase):
    """Residual basic block: x + f(x), f = basic transform."""

    def __init__(self, w_in, w_out, stride, norm, activation_class, params):
        super().__init__(w_in, w_out, stride)
        self.proj, self.bn = None, None
        if (w_in != w_out) or (stride != 1):
            self.proj = conv2d(w_in, w_out, 1, stride=stride)
            self.bn = get_norm(norm, w_out)
        self.f = BasicTransform(w_in, w_out, stride, norm, activation_class, params)
        self.af = activation_class()

    def forward(self, x):
        x_p = self.bn(self.proj(x)) if self.proj else x
        return self.af(x_p + self.f(x))


class BottleneckTransform(nn.Module):
    """Bottleneck transformation: 1x1, 3x3 [+SE], 1x1."""

    def __init__(self, w_in, w_out, stride, norm, activation_class, params):
        super().__init__()
        w_b = int(round(w_out * params["bot_mul"]))
        w_se = int(round(w_in * params["se_r"]))
        groups = w_b // params["group_w"]
        self.a = conv2d(w_in, w_b, 1)
        self.a_bn = get_norm(norm, w_b)
        self.a_af = activation_class()
        self.b = conv2d(w_b, w_b, 3, stride=stride, groups=groups)
        self.b_bn = get_norm(norm, w_b)
        self.b_af = activation_class()
        self.se = SE(w_b, w_se, activation_class) if w_se else None
        self.c = conv2d(w_b, w_out, 1)
        self.c_bn = get_norm(norm, w_out)
        self.c_bn.final_bn = True

    def forward(self, x):
        for layer in self.children():
            x = layer(x)
        return x


class ResBottleneckBlock(CNNBlockBase):
    """Residual bottleneck block: x + f(x), f = bottleneck transform."""

    def __init__(self, w_in, w_out, stride, norm, activation_class, params):
        super().__init__(w_in, w_out, stride)
        self.proj, self.bn = None, None
        if (w_in != w_out) or (stride != 1):
            self.proj = conv2d(w_in, w_out, 1, stride=stride)
            self.bn = get_norm(norm, w_out)
        self.f = BottleneckTransform(w_in, w_out, stride, norm, activation_class, params)
        self.af = activation_class()

    def forward(self, x):
        x_p = self.bn(self.proj(x)) if self.proj else x
        return self.af(x_p + self.f(x))


class AnyStage(nn.Module):
    """AnyNet stage (sequence of blocks w/ the same output shape)."""

    def __init__(self, w_in, w_out, stride, d, block_class, norm, activation_class, params):
        super().__init__()
        for i in range(d):
            block = block_class(w_in, w_out, stride, norm, activation_class, params)
            self.add_module("b{}".format(i + 1), block)
            stride, w_in = 1, w_out

    def forward(self, x):
        for block in self.children():
            x = block(x)
        return x


class AnyNet(Backbone):
    """AnyNet model. See :paper:`dds`."""

    def __init__(
        self,
        *,
        stem_class,
        stem_width,
        block_class,
        depths,
        widths,
        group_widths,
        strides,
        bottleneck_ratios,
        se_ratio,
        activation_class,
        freeze_at=0,
        norm="BN",
        out_features=None,
    ):
        """
        Args:
            stem_class (callable): A callable taking 4 arguments (channels in, channels out,
                normalization, callable returning an activation function) that returns
                another callable implementing the stem module.
            stem_width (int): The number of output channels that the stem produces.
            block_class (callable): A callable taking 6 arguments (channels in, channels out,
                stride, normalization, callable returning an activation function, a dict of
                block-specific parameters) that returns another callable implementing the
                repeated block module.
            depths (list[int]): Number of blocks in each stage.
            widths (list[int]): For each stage, the number of output channels of each block.
            group_widths (list[int]): For each stage, the number of channels per group in
                group convolution, if the block uses group convolution.
            strides (list[int]): The stride that each network stage applies to its input.
            bottleneck_ratios (list[float]): For each stage, the ratio of the number of
                bottleneck channels to the number of block input channels (or, equivalently,
                output channels), if the block uses a bottleneck.
            se_ratio (float): The ratio of the number of channels used inside the
                squeeze-excitation (SE) module to its number of input channels,
                if the block uses SE.
            activation_class (callable): A callable taking no arguments that returns another
                callable implementing an activation function.
            freeze_at (int): The number of stages at the beginning to freeze.
                see :meth:`freeze` for detailed explanation.
            norm (str or callable): normalization for all conv layers.
                See :func:`layers.get_norm` for supported format.
            out_features (list[str]): name of the layers whose outputs should
                be returned in forward. RegNets use "stem" and "s1", "s2", etc for the
                stages after the stem. If None, will return the output of the last layer.
        """
        super().__init__()
        self.stem = stem_class(3, stem_width, norm, activation_class)

        current_stride = self.stem.stride
        self._out_feature_strides = {"stem": current_stride}
        self._out_feature_channels = {"stem": self.stem.out_channels}
        self.stages_and_names = []
        prev_w = stem_width

        for i, (d, w, s, b, g) in enumerate(
            zip(depths, widths, strides, bottleneck_ratios, group_widths)
        ):
            params = {"bot_mul": b, "group_w": g, "se_r": se_ratio}
            stage = AnyStage(prev_w, w, s, d, block_class, norm, activation_class, params)
            name = "s{}".format(i + 1)
            self.add_module(name, stage)
            self.stages_and_names.append((stage, name))
            self._out_feature_strides[name] = current_stride = int(
                current_stride * np.prod([k.stride for k in stage.children()])
            )
            self._out_feature_channels[name] = list(stage.children())[-1].out_channels
            prev_w = w

        self.apply(init_weights)

        if out_features is None:
            out_features = [name]
        self._out_features = out_features
        assert len(self._out_features)
        children = [x[0] for x in self.named_children()]
        for out_feature in self._out_features:
            assert out_feature in children, "Available children: {} does not include {}".format(
                ", ".join(children), out_feature
            )
        self.freeze(freeze_at)

    def forward(self, x):
        """
        Args:
            x: Tensor of shape (N,C,H,W). H, W must be a multiple of ``self.size_divisibility``.

        Returns:
            dict[str->Tensor]: names and the corresponding features
        """
        assert x.dim() == 4, f"Model takes an input of shape (N, C, H, W). Got {x.shape} instead!"
        outputs = {}
        x = self.stem(x)
        if "stem" in self._out_features:
            outputs["stem"] = x
        for stage, name in self.stages_and_names:
            x = stage(x)
            if name in self._out_features:
                outputs[name] = x
        return outputs

    def output_shape(self):
        return {
            name: ShapeSpec(
                channels=self._out_feature_channels[name], stride=self._out_feature_strides[name]
            )
            for name in self._out_features
        }

    def freeze(self, freeze_at=0):
        """
        Freeze the first several stages of the model. Commonly used in fine-tuning.

        Layers that produce the same feature map spatial size are defined as one
        "stage" by :paper:`FPN`.

        Args:
            freeze_at (int): number of stages to freeze.
                `1` means freezing the stem. `2` means freezing the stem and
                one residual stage, etc.

        Returns:
            nn.Module: this model itself
        """
        if freeze_at >= 1:
            self.stem.freeze()
        for idx, (stage, _) in enumerate(self.stages_and_names, start=2):
            if freeze_at >= idx:
                for block in stage.children():
                    block.freeze()
        return self


def adjust_block_compatibility(ws, bs, gs):
    """Adjusts the compatibility of widths, bottlenecks, and groups."""
    assert len(ws) == len(bs) == len(gs)
    assert all(w > 0 and b > 0 and g > 0 for w, b, g in zip(ws, bs, gs))
    vs = [int(max(1, w * b)) for w, b in zip(ws, bs)]
    gs = [int(min(g, v)) for g, v in zip(gs, vs)]
    ms = [np.lcm(g, b) if b > 1 else g for g, b in zip(gs, bs)]
    vs = [max(m, int(round(v / m) * m)) for v, m in zip(vs, ms)]
    ws = [int(v / b) for v, b in zip(vs, bs)]
    assert all(w * b % g == 0 for w, b, g in zip(ws, bs, gs))
    return ws, bs, gs


def generate_regnet_parameters(w_a, w_0, w_m, d, q=8):
    """Generates per stage widths and depths from RegNet parameters."""
    assert w_a >= 0 and w_0 > 0 and w_m > 1 and w_0 % q == 0
    # Generate continuous per-block ws
    ws_cont = np.arange(d) * w_a + w_0
    # Generate quantized per-block ws
    ks = np.round(np.log(ws_cont / w_0) / np.log(w_m))
    ws_all = w_0 * np.power(w_m, ks)
    ws_all = np.round(np.divide(ws_all, q)).astype(int) * q
    # Generate per stage ws and ds (assumes ws_all are sorted)
    ws, ds = np.unique(ws_all, return_counts=True)
    # Compute number of actual stages and total possible stages
    num_stages, total_stages = len(ws), ks.max() + 1
    # Convert numpy arrays to lists and return
    ws, ds, ws_all, ws_cont = (x.tolist() for x in (ws, ds, ws_all, ws_cont))
    return ws, ds, num_stages, total_stages, ws_all, ws_cont


class RegNet(AnyNet):
    """RegNet model. See :paper:`dds`."""

    def __init__(
        self,
        *,
        stem_class,
        stem_width,
        block_class,
        depth,
        w_a,
        w_0,
        w_m,
        group_width,
        stride=2,
        bottleneck_ratio=1.0,
        se_ratio=0.0,
        activation_class=None,
        freeze_at=0,
        norm="BN",
        out_features=None,
    ):
        """
        Build a RegNet from the parameterization described in :paper:`dds` Section 3.3.

        Args:
            See :class:`AnyNet` for arguments that are not listed here.

            depth (int): Total number of blocks in the RegNet.
            w_a (float): Factor by which block width would increase prior to quantizing
                block widths by stage. See :paper:`dds` Section 3.3.
            w_0 (int): Initial block width. See :paper:`dds` Section 3.3.
            w_m (float): Parameter controlling block width quantization.
                See :paper:`dds` Section 3.3.
            group_width (int): Number of channels per group in group convolution, if the
                block uses group convolution.
            bottleneck_ratio (float): The ratio of the number of bottleneck channels to the
                number of block input channels (or, equivalently, output channels), if the
                block uses a bottleneck.
            stride (int): The stride that each network stage applies to its input.
        """
        ws, ds = generate_regnet_parameters(w_a, w_0, w_m, depth)[0:2]
        ss = [stride for _ in ws]
        bs = [bottleneck_ratio for _ in ws]
        gs = [group_width for _ in ws]
        ws, bs, gs = adjust_block_compatibility(ws, bs, gs)

        def default_activation_class():
            return nn.ReLU(inplace=True)

        super().__init__(
            stem_class=stem_class,
            stem_width=stem_width,
            block_class=block_class,
            depths=ds,
            widths=ws,
            strides=ss,
            group_widths=gs,
            bottleneck_ratios=bs,
            se_ratio=se_ratio,
            activation_class=default_activation_class
            if activation_class is None
            else activation_class,
            freeze_at=freeze_at,
            norm=norm,
            out_features=out_features,
        )
/roof_mask_Yv8-0.5.9-py3-none-any.whl/detectron2/modeling/backbone/regnet.py
0.983447
0.608478
regnet.py
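The width-quantization logic in `generate_regnet_parameters` and the rounding in `adjust_block_compatibility` above can be exercised on their own, without detectron2. A minimal sketch (function names are mine; the `w_a`/`w_0`/`w_m`/`depth`/group-width values are the published RegNetX-400MF parameters, used purely for illustration):

```python
import numpy as np

def quantize_widths(w_a, w_0, w_m, d, q=8):
    """Per-stage widths/depths from RegNet params (mirrors generate_regnet_parameters)."""
    ws_cont = np.arange(d) * w_a + w_0                       # continuous per-block widths
    ks = np.round(np.log(ws_cont / w_0) / np.log(w_m))       # quantize to powers of w_m
    ws_all = w_0 * np.power(w_m, ks)
    ws_all = np.round(np.divide(ws_all, q)).astype(int) * q  # snap to multiples of q
    ws, ds = np.unique(ws_all, return_counts=True)           # equal widths form one stage
    return ws.tolist(), ds.tolist()

def adjust_compatibility(ws, bs, gs):
    """Round widths so group conv divides evenly (mirrors adjust_block_compatibility)."""
    vs = [int(max(1, w * b)) for w, b in zip(ws, bs)]
    gs = [int(min(g, v)) for g, v in zip(gs, vs)]
    ms = [np.lcm(g, b) if b > 1 else g for g, b in zip(gs, bs)]
    vs = [max(m, int(round(v / m) * m)) for v, m in zip(vs, ms)]
    ws = [int(v / b) for v, b in zip(vs, bs)]
    return ws, bs, gs

# RegNetX-400MF parameterization: w_a=24.48, w_0=24, w_m=2.54, depth=22, group width 16
ws, ds = quantize_widths(24.48, 24, 2.54, 22)
ws, _, _ = adjust_compatibility(ws, [1.0] * len(ws), [16] * len(ws))
print(ws, ds)  # [32, 64, 160, 384] [1, 2, 7, 12]
```

Note that the raw quantized widths (24, 152, 392) are nudged to the nearest multiple of the group width, which is why the final stage widths are all divisible by 16.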
pypi
import numpy as np
import fvcore.nn.weight_init as weight_init
import torch
import torch.nn.functional as F
from torch import nn

from detectron2.layers import (
    CNNBlockBase,
    Conv2d,
    DeformConv,
    ModulatedDeformConv,
    ShapeSpec,
    get_norm,
)

from .backbone import Backbone
from .build import BACKBONE_REGISTRY

__all__ = [
    "ResNetBlockBase",
    "BasicBlock",
    "BottleneckBlock",
    "DeformBottleneckBlock",
    "BasicStem",
    "ResNet",
    "make_stage",
    "build_resnet_backbone",
]


class BasicBlock(CNNBlockBase):
    """
    The basic residual block for ResNet-18 and ResNet-34 defined in :paper:`ResNet`,
    with two 3x3 conv layers and a projection shortcut if needed.
    """

    def __init__(self, in_channels, out_channels, *, stride=1, norm="BN"):
        """
        Args:
            in_channels (int): Number of input channels.
            out_channels (int): Number of output channels.
            stride (int): Stride for the first conv.
            norm (str or callable): normalization for all conv layers.
                See :func:`layers.get_norm` for supported format.
        """
        super().__init__(in_channels, out_channels, stride)

        if in_channels != out_channels:
            self.shortcut = Conv2d(
                in_channels,
                out_channels,
                kernel_size=1,
                stride=stride,
                bias=False,
                norm=get_norm(norm, out_channels),
            )
        else:
            self.shortcut = None

        self.conv1 = Conv2d(
            in_channels,
            out_channels,
            kernel_size=3,
            stride=stride,
            padding=1,
            bias=False,
            norm=get_norm(norm, out_channels),
        )
        self.conv2 = Conv2d(
            out_channels,
            out_channels,
            kernel_size=3,
            stride=1,
            padding=1,
            bias=False,
            norm=get_norm(norm, out_channels),
        )

        for layer in [self.conv1, self.conv2, self.shortcut]:
            if layer is not None:  # shortcut can be None
                weight_init.c2_msra_fill(layer)

    def forward(self, x):
        out = self.conv1(x)
        out = F.relu_(out)
        out = self.conv2(out)

        if self.shortcut is not None:
            shortcut = self.shortcut(x)
        else:
            shortcut = x

        out += shortcut
        out = F.relu_(out)
        return out


class BottleneckBlock(CNNBlockBase):
    """
    The standard bottleneck residual block used by ResNet-50, 101 and 152
    defined in :paper:`ResNet`. It contains 3 conv layers with kernels
    1x1, 3x3, 1x1, and a projection shortcut if needed.
    """

    def __init__(
        self,
        in_channels,
        out_channels,
        *,
        bottleneck_channels,
        stride=1,
        num_groups=1,
        norm="BN",
        stride_in_1x1=False,
        dilation=1,
    ):
        """
        Args:
            bottleneck_channels (int): number of output channels for the 3x3
                "bottleneck" conv layers.
            num_groups (int): number of groups for the 3x3 conv layer.
            norm (str or callable): normalization for all conv layers.
                See :func:`layers.get_norm` for supported format.
            stride_in_1x1 (bool): when stride>1, whether to put stride in the
                first 1x1 convolution or the bottleneck 3x3 convolution.
            dilation (int): the dilation rate of the 3x3 conv layer.
        """
        super().__init__(in_channels, out_channels, stride)

        if in_channels != out_channels:
            self.shortcut = Conv2d(
                in_channels,
                out_channels,
                kernel_size=1,
                stride=stride,
                bias=False,
                norm=get_norm(norm, out_channels),
            )
        else:
            self.shortcut = None

        # The original MSRA ResNet models have stride in the first 1x1 conv
        # The subsequent fb.torch.resnet and Caffe2 ResNe[X]t implementations have
        # stride in the 3x3 conv
        stride_1x1, stride_3x3 = (stride, 1) if stride_in_1x1 else (1, stride)

        self.conv1 = Conv2d(
            in_channels,
            bottleneck_channels,
            kernel_size=1,
            stride=stride_1x1,
            bias=False,
            norm=get_norm(norm, bottleneck_channels),
        )

        self.conv2 = Conv2d(
            bottleneck_channels,
            bottleneck_channels,
            kernel_size=3,
            stride=stride_3x3,
            padding=1 * dilation,
            bias=False,
            groups=num_groups,
            dilation=dilation,
            norm=get_norm(norm, bottleneck_channels),
        )

        self.conv3 = Conv2d(
            bottleneck_channels,
            out_channels,
            kernel_size=1,
            bias=False,
            norm=get_norm(norm, out_channels),
        )

        for layer in [self.conv1, self.conv2, self.conv3, self.shortcut]:
            if layer is not None:  # shortcut can be None
                weight_init.c2_msra_fill(layer)

        # Zero-initialize the last normalization in each residual branch,
        # so that at the beginning, the residual branch starts with zeros,
        # and each residual block behaves like an identity.
        # See Sec 5.1 in "Accurate, Large Minibatch SGD: Training ImageNet in 1 Hour":
        # "For BN layers, the learnable scaling coefficient γ is initialized
        # to be 1, except for each residual block's last BN
        # where γ is initialized to be 0."

        # nn.init.constant_(self.conv3.norm.weight, 0)
        # TODO this somehow hurts performance when training GN models from scratch.
        # Add it as an option when we need to use this code to train a backbone.

    def forward(self, x):
        out = self.conv1(x)
        out = F.relu_(out)

        out = self.conv2(out)
        out = F.relu_(out)

        out = self.conv3(out)

        if self.shortcut is not None:
            shortcut = self.shortcut(x)
        else:
            shortcut = x

        out += shortcut
        out = F.relu_(out)
        return out


class DeformBottleneckBlock(CNNBlockBase):
    """
    Similar to :class:`BottleneckBlock`, but with :paper:`deformable conv <deformconv>`
    in the 3x3 convolution.
    """

    def __init__(
        self,
        in_channels,
        out_channels,
        *,
        bottleneck_channels,
        stride=1,
        num_groups=1,
        norm="BN",
        stride_in_1x1=False,
        dilation=1,
        deform_modulated=False,
        deform_num_groups=1,
    ):
        super().__init__(in_channels, out_channels, stride)
        self.deform_modulated = deform_modulated

        if in_channels != out_channels:
            self.shortcut = Conv2d(
                in_channels,
                out_channels,
                kernel_size=1,
                stride=stride,
                bias=False,
                norm=get_norm(norm, out_channels),
            )
        else:
            self.shortcut = None

        stride_1x1, stride_3x3 = (stride, 1) if stride_in_1x1 else (1, stride)

        self.conv1 = Conv2d(
            in_channels,
            bottleneck_channels,
            kernel_size=1,
            stride=stride_1x1,
            bias=False,
            norm=get_norm(norm, bottleneck_channels),
        )

        if deform_modulated:
            deform_conv_op = ModulatedDeformConv
            # offset channels are 2 or 3 (if with modulated) * kernel_size * kernel_size
            offset_channels = 27
        else:
            deform_conv_op = DeformConv
            offset_channels = 18

        self.conv2_offset = Conv2d(
            bottleneck_channels,
            offset_channels * deform_num_groups,
            kernel_size=3,
            stride=stride_3x3,
            padding=1 * dilation,
            dilation=dilation,
        )
        self.conv2 = deform_conv_op(
            bottleneck_channels,
            bottleneck_channels,
            kernel_size=3,
            stride=stride_3x3,
            padding=1 * dilation,
            bias=False,
            groups=num_groups,
            dilation=dilation,
            deformable_groups=deform_num_groups,
            norm=get_norm(norm, bottleneck_channels),
        )

        self.conv3 = Conv2d(
            bottleneck_channels,
            out_channels,
            kernel_size=1,
            bias=False,
            norm=get_norm(norm, out_channels),
        )

        for layer in [self.conv1, self.conv2, self.conv3, self.shortcut]:
            if layer is not None:  # shortcut can be None
                weight_init.c2_msra_fill(layer)

        nn.init.constant_(self.conv2_offset.weight, 0)
        nn.init.constant_(self.conv2_offset.bias, 0)

    def forward(self, x):
        out = self.conv1(x)
        out = F.relu_(out)

        if self.deform_modulated:
            offset_mask = self.conv2_offset(out)
            offset_x, offset_y, mask = torch.chunk(offset_mask, 3, dim=1)
            offset = torch.cat((offset_x, offset_y), dim=1)
            mask = mask.sigmoid()
            out = self.conv2(out, offset, mask)
        else:
            offset = self.conv2_offset(out)
            out = self.conv2(out, offset)
        out = F.relu_(out)

        out = self.conv3(out)

        if self.shortcut is not None:
            shortcut = self.shortcut(x)
        else:
            shortcut = x

        out += shortcut
        out = F.relu_(out)
        return out


class BasicStem(CNNBlockBase):
    """
    The standard ResNet stem (layers before the first residual block),
    with a conv, relu and max_pool.
    """

    def __init__(self, in_channels=3, out_channels=64, norm="BN"):
        """
        Args:
            norm (str or callable): norm after the first conv layer.
                See :func:`layers.get_norm` for supported format.
        """
        super().__init__(in_channels, out_channels, 4)
        self.in_channels = in_channels
        self.conv1 = Conv2d(
            in_channels,
            out_channels,
            kernel_size=7,
            stride=2,
            padding=3,
            bias=False,
            norm=get_norm(norm, out_channels),
        )
        weight_init.c2_msra_fill(self.conv1)

    def forward(self, x):
        x = self.conv1(x)
        x = F.relu_(x)
        x = F.max_pool2d(x, kernel_size=3, stride=2, padding=1)
        return x


class ResNet(Backbone):
    """
    Implement :paper:`ResNet`.
    """

    def __init__(self, stem, stages, num_classes=None, out_features=None, freeze_at=0):
        """
        Args:
            stem (nn.Module): a stem module
            stages (list[list[CNNBlockBase]]): several (typically 4) stages,
                each contains multiple :class:`CNNBlockBase`.
            num_classes (None or int): if None, will not perform classification.
                Otherwise, will create a linear layer.
            out_features (list[str]): name of the layers whose outputs should
                be returned in forward. Can be anything in "stem", "linear", or "res2" ...
                If None, will return the output of the last layer.
            freeze_at (int): The number of stages at the beginning to freeze.
                see :meth:`freeze` for detailed explanation.
        """
        super().__init__()
        self.stem = stem
        self.num_classes = num_classes

        current_stride = self.stem.stride
        self._out_feature_strides = {"stem": current_stride}
        self._out_feature_channels = {"stem": self.stem.out_channels}

        self.stage_names, self.stages = [], []

        if out_features is not None:
            # Avoid keeping unused layers in this module. They consume extra memory
            # and may cause allreduce to fail
            num_stages = max(
                [{"res2": 1, "res3": 2, "res4": 3, "res5": 4}.get(f, 0) for f in out_features]
            )
            stages = stages[:num_stages]
        for i, blocks in enumerate(stages):
            assert len(blocks) > 0, len(blocks)
            for block in blocks:
                assert isinstance(block, CNNBlockBase), block

            name = "res" + str(i + 2)
            stage = nn.Sequential(*blocks)

            self.add_module(name, stage)
            self.stage_names.append(name)
            self.stages.append(stage)

            self._out_feature_strides[name] = current_stride = int(
                current_stride * np.prod([k.stride for k in blocks])
            )
            self._out_feature_channels[name] = curr_channels = blocks[-1].out_channels
        self.stage_names = tuple(self.stage_names)  # Make it static for scripting

        if num_classes is not None:
            self.avgpool = nn.AdaptiveAvgPool2d((1, 1))
            self.linear = nn.Linear(curr_channels, num_classes)

            # Sec 5.1 in "Accurate, Large Minibatch SGD: Training ImageNet in 1 Hour":
            # "The 1000-way fully-connected layer is initialized by
            # drawing weights from a zero-mean Gaussian with standard deviation of 0.01."
            nn.init.normal_(self.linear.weight, std=0.01)
            name = "linear"

        if out_features is None:
            out_features = [name]
        self._out_features = out_features
        assert len(self._out_features)
        children = [x[0] for x in self.named_children()]
        for out_feature in self._out_features:
            assert out_feature in children, "Available children: {}".format(", ".join(children))
        self.freeze(freeze_at)

    def forward(self, x):
        """
        Args:
            x: Tensor of shape (N,C,H,W). H, W must be a multiple of ``self.size_divisibility``.

        Returns:
            dict[str->Tensor]: names and the corresponding features
        """
        assert x.dim() == 4, f"ResNet takes an input of shape (N, C, H, W). Got {x.shape} instead!"
        outputs = {}
        x = self.stem(x)
        if "stem" in self._out_features:
            outputs["stem"] = x
        for name, stage in zip(self.stage_names, self.stages):
            x = stage(x)
            if name in self._out_features:
                outputs[name] = x
        if self.num_classes is not None:
            x = self.avgpool(x)
            x = torch.flatten(x, 1)
            x = self.linear(x)
            if "linear" in self._out_features:
                outputs["linear"] = x
        return outputs

    def output_shape(self):
        return {
            name: ShapeSpec(
                channels=self._out_feature_channels[name], stride=self._out_feature_strides[name]
            )
            for name in self._out_features
        }

    def freeze(self, freeze_at=0):
        """
        Freeze the first several stages of the ResNet. Commonly used in
        fine-tuning.

        Layers that produce the same feature map spatial size are defined as one
        "stage" by :paper:`FPN`.

        Args:
            freeze_at (int): number of stages to freeze.
                `1` means freezing the stem. `2` means freezing the stem and
                one residual stage, etc.

        Returns:
            nn.Module: this ResNet itself
        """
        if freeze_at >= 1:
            self.stem.freeze()
        for idx, stage in enumerate(self.stages, start=2):
            if freeze_at >= idx:
                for block in stage.children():
                    block.freeze()
        return self

    @staticmethod
    def make_stage(block_class, num_blocks, *, in_channels, out_channels, **kwargs):
        """
        Create a list of blocks of the same type that forms one ResNet stage.

        Args:
            block_class (type): a subclass of CNNBlockBase that's used to create all blocks
                in this stage. A module of this type must not change spatial resolution of
                inputs unless its stride != 1.
            num_blocks (int): number of blocks in this stage
            in_channels (int): input channels of the entire stage.
            out_channels (int): output channels of **every block** in the stage.
            kwargs: other arguments passed to the constructor of
                `block_class`. If the argument name is "xx_per_block", the
                argument is a list of values to be passed to each block in the
                stage. Otherwise, the same argument is passed to every block
                in the stage.

        Returns:
            list[CNNBlockBase]: a list of block module.

        Examples:
        ::
            stage = ResNet.make_stage(
                BottleneckBlock, 3, in_channels=16, out_channels=64,
                bottleneck_channels=16, num_groups=1,
                stride_per_block=[2, 1, 1],
                dilations_per_block=[1, 1, 2]
            )

        Usually, layers that produce the same feature map spatial size are defined as one
        "stage" (in :paper:`FPN`). Under such definition, ``stride_per_block[1:]`` should
        all be 1.
        """
        blocks = []
        for i in range(num_blocks):
            curr_kwargs = {}
            for k, v in kwargs.items():
                if k.endswith("_per_block"):
                    assert len(v) == num_blocks, (
                        f"Argument '{k}' of make_stage should have the "
                        f"same length as num_blocks={num_blocks}."
                    )
                    newk = k[: -len("_per_block")]
                    assert newk not in kwargs, f"Cannot call make_stage with both {k} and {newk}!"
                    curr_kwargs[newk] = v[i]
                else:
                    curr_kwargs[k] = v

            blocks.append(
                block_class(in_channels=in_channels, out_channels=out_channels, **curr_kwargs)
            )
            in_channels = out_channels
        return blocks

    @staticmethod
    def make_default_stages(depth, block_class=None, **kwargs):
        """
        Create a list of ResNet stages from a pre-defined depth (one of 18, 34, 50, 101, 152).
        If it doesn't create the ResNet variant you need, please use :meth:`make_stage`
        instead for fine-grained customization.

        Args:
            depth (int): depth of ResNet
            block_class (type): the CNN block class. Has to accept
                `bottleneck_channels` argument for depth > 50.
                By default it is BasicBlock or BottleneckBlock, based on the
                depth.
            kwargs:
                other arguments to pass to `make_stage`. Should not contain
                stride and channels, as they are predefined for each depth.

        Returns:
            list[list[CNNBlockBase]]: modules in all stages; see arguments of
                :class:`ResNet.__init__`.
        """
        num_blocks_per_stage = {
            18: [2, 2, 2, 2],
            34: [3, 4, 6, 3],
            50: [3, 4, 6, 3],
            101: [3, 4, 23, 3],
            152: [3, 8, 36, 3],
        }[depth]
        if block_class is None:
            block_class = BasicBlock if depth < 50 else BottleneckBlock
        if depth < 50:
            in_channels = [64, 64, 128, 256]
            out_channels = [64, 128, 256, 512]
        else:
            in_channels = [64, 256, 512, 1024]
            out_channels = [256, 512, 1024, 2048]
        ret = []
        for (n, s, i, o) in zip(num_blocks_per_stage, [1, 2, 2, 2], in_channels, out_channels):
            if depth >= 50:
                kwargs["bottleneck_channels"] = o // 4
            ret.append(
                ResNet.make_stage(
                    block_class=block_class,
                    num_blocks=n,
                    stride_per_block=[s] + [1] * (n - 1),
                    in_channels=i,
                    out_channels=o,
                    **kwargs,
                )
            )
        return ret


ResNetBlockBase = CNNBlockBase
"""
Alias for backward compatibility.
"""


def make_stage(*args, **kwargs):
    """
    Deprecated alias for backward compatibility.
    """
    return ResNet.make_stage(*args, **kwargs)


@BACKBONE_REGISTRY.register()
def build_resnet_backbone(cfg, input_shape):
    """
    Create a ResNet instance from config.

    Returns:
        ResNet: a :class:`ResNet` instance.
    """
    # need registration of new blocks/stems?
    norm = cfg.MODEL.RESNETS.NORM
    stem = BasicStem(
        in_channels=input_shape.channels,
        out_channels=cfg.MODEL.RESNETS.STEM_OUT_CHANNELS,
        norm=norm,
    )

    # fmt: off
    freeze_at           = cfg.MODEL.BACKBONE.FREEZE_AT
    out_features        = cfg.MODEL.RESNETS.OUT_FEATURES
    depth               = cfg.MODEL.RESNETS.DEPTH
    num_groups          = cfg.MODEL.RESNETS.NUM_GROUPS
    width_per_group     = cfg.MODEL.RESNETS.WIDTH_PER_GROUP
    bottleneck_channels = num_groups * width_per_group
    in_channels         = cfg.MODEL.RESNETS.STEM_OUT_CHANNELS
    out_channels        = cfg.MODEL.RESNETS.RES2_OUT_CHANNELS
    stride_in_1x1       = cfg.MODEL.RESNETS.STRIDE_IN_1X1
    res5_dilation       = cfg.MODEL.RESNETS.RES5_DILATION
    deform_on_per_stage = cfg.MODEL.RESNETS.DEFORM_ON_PER_STAGE
    deform_modulated    = cfg.MODEL.RESNETS.DEFORM_MODULATED
    deform_num_groups   = cfg.MODEL.RESNETS.DEFORM_NUM_GROUPS
    # fmt: on
    assert res5_dilation in {1, 2}, "res5_dilation cannot be {}.".format(res5_dilation)

    num_blocks_per_stage = {
        18: [2, 2, 2, 2],
        34: [3, 4, 6, 3],
        50: [3, 4, 6, 3],
        101: [3, 4, 23, 3],
        152: [3, 8, 36, 3],
    }[depth]

    if depth in [18, 34]:
        assert out_channels == 64, "Must set MODEL.RESNETS.RES2_OUT_CHANNELS = 64 for R18/R34"
        assert not any(
            deform_on_per_stage
        ), "MODEL.RESNETS.DEFORM_ON_PER_STAGE unsupported for R18/R34"
        assert res5_dilation == 1, "Must set MODEL.RESNETS.RES5_DILATION = 1 for R18/R34"
        assert num_groups == 1, "Must set MODEL.RESNETS.NUM_GROUPS = 1 for R18/R34"

    stages = []

    for idx, stage_idx in enumerate(range(2, 6)):
        # res5_dilation is used this way as a convention in R-FCN & Deformable Conv paper
        dilation = res5_dilation if stage_idx == 5 else 1
        first_stride = 1 if idx == 0 or (stage_idx == 5 and dilation == 2) else 2
        stage_kargs = {
            "num_blocks": num_blocks_per_stage[idx],
            "stride_per_block": [first_stride] + [1] * (num_blocks_per_stage[idx] - 1),
            "in_channels": in_channels,
            "out_channels": out_channels,
            "norm": norm,
        }
        # Use BasicBlock for R18 and R34.
        if depth in [18, 34]:
            stage_kargs["block_class"] = BasicBlock
        else:
            stage_kargs["bottleneck_channels"] = bottleneck_channels
            stage_kargs["stride_in_1x1"] = stride_in_1x1
            stage_kargs["dilation"] = dilation
            stage_kargs["num_groups"] = num_groups
            if deform_on_per_stage[idx]:
                stage_kargs["block_class"] = DeformBottleneckBlock
                stage_kargs["deform_modulated"] = deform_modulated
                stage_kargs["deform_num_groups"] = deform_num_groups
            else:
                stage_kargs["block_class"] = BottleneckBlock
        blocks = ResNet.make_stage(**stage_kargs)
        in_channels = out_channels
        out_channels *= 2
        bottleneck_channels *= 2
        stages.append(blocks)
    return ResNet(stem, stages, out_features=out_features, freeze_at=freeze_at)
/roof_mask_Yv8-0.5.9-py3-none-any.whl/detectron2/modeling/backbone/resnet.py
0.877405
0.386619
resnet.py
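The `"xx_per_block"` keyword convention of `ResNet.make_stage` above can be demonstrated without detectron2 itself. A self-contained sketch of just the argument-splitting step (the function name is mine, not part of the library):

```python
def split_per_block_kwargs(num_blocks, **kwargs):
    """Mirror ResNet.make_stage's kwargs handling: list-valued "xx_per_block"
    entries are distributed one value per block (under the stripped name "xx"),
    while every other argument is repeated unchanged for each block."""
    per_block = []
    for i in range(num_blocks):
        curr = {}
        for k, v in kwargs.items():
            if k.endswith("_per_block"):
                assert len(v) == num_blocks, f"'{k}' must have length {num_blocks}"
                curr[k[: -len("_per_block")]] = v[i]
            else:
                curr[k] = v
        per_block.append(curr)
    return per_block

# First block gets stride 2 (downsampling), the rest stride 1; "norm" is shared.
kwargs_list = split_per_block_kwargs(3, stride_per_block=[2, 1, 1], norm="BN")
print(kwargs_list)
# [{'stride': 2, 'norm': 'BN'}, {'stride': 1, 'norm': 'BN'}, {'stride': 1, 'norm': 'BN'}]
```

This is why `stride_per_block[1:]` should all be 1 under the FPN definition of a "stage": only the first block may change spatial resolution.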
pypi
import logging
import numpy as np
from typing import Dict, List, Optional, Tuple
import torch
from torch import nn

from detectron2.config import configurable
from detectron2.data.detection_utils import convert_image_to_rgb
from detectron2.layers import move_device_like
from detectron2.structures import ImageList, Instances
from detectron2.utils.events import get_event_storage
from detectron2.utils.logger import log_first_n

from ..backbone import Backbone, build_backbone
from ..postprocessing import detector_postprocess
from ..proposal_generator import build_proposal_generator
from ..roi_heads import build_roi_heads
from .build import META_ARCH_REGISTRY

__all__ = ["GeneralizedRCNN", "ProposalNetwork"]


@META_ARCH_REGISTRY.register()
class GeneralizedRCNN(nn.Module):
    """
    Generalized R-CNN. Any model that contains the following three components:
    1. Per-image feature extraction (aka backbone)
    2. Region proposal generation
    3. Per-region feature extraction and prediction
    """

    @configurable
    def __init__(
        self,
        *,
        backbone: Backbone,
        proposal_generator: nn.Module,
        roi_heads: nn.Module,
        pixel_mean: Tuple[float],
        pixel_std: Tuple[float],
        input_format: Optional[str] = None,
        vis_period: int = 0,
    ):
        """
        Args:
            backbone: a backbone module, must follow detectron2's backbone interface
            proposal_generator: a module that generates proposals using backbone features
            roi_heads: a ROI head that performs per-region computation
            pixel_mean, pixel_std: list or tuple with #channels element, representing
                the per-channel mean and std to be used to normalize the input image
            input_format: describe the meaning of channels of input. Needed by visualization
            vis_period: the period to run visualization. Set to 0 to disable.
        """
        super().__init__()
        self.backbone = backbone
        self.proposal_generator = proposal_generator
        self.roi_heads = roi_heads

        self.input_format = input_format
        self.vis_period = vis_period
        if vis_period > 0:
            assert input_format is not None, "input_format is required for visualization!"

        self.register_buffer("pixel_mean", torch.tensor(pixel_mean).view(-1, 1, 1), False)
        self.register_buffer("pixel_std", torch.tensor(pixel_std).view(-1, 1, 1), False)
        assert (
            self.pixel_mean.shape == self.pixel_std.shape
        ), f"{self.pixel_mean} and {self.pixel_std} have different shapes!"

    @classmethod
    def from_config(cls, cfg):
        backbone = build_backbone(cfg)
        return {
            "backbone": backbone,
            "proposal_generator": build_proposal_generator(cfg, backbone.output_shape()),
            "roi_heads": build_roi_heads(cfg, backbone.output_shape()),
            "input_format": cfg.INPUT.FORMAT,
            "vis_period": cfg.VIS_PERIOD,
            "pixel_mean": cfg.MODEL.PIXEL_MEAN,
            "pixel_std": cfg.MODEL.PIXEL_STD,
        }

    @property
    def device(self):
        return self.pixel_mean.device

    def _move_to_current_device(self, x):
        return move_device_like(x, self.pixel_mean)

    def visualize_training(self, batched_inputs, proposals):
        """
        A function used to visualize images and proposals. It shows ground truth
        bounding boxes on the original image and up to 20 top-scoring predicted
        object proposals on the original image. Users can implement different
        visualization functions for different models.

        Args:
            batched_inputs (list): a list that contains input to the model.
            proposals (list): a list that contains predicted proposals. Both
                batched_inputs and proposals should have the same length.
        """
        from detectron2.utils.visualizer import Visualizer

        storage = get_event_storage()
        max_vis_prop = 20

        for input, prop in zip(batched_inputs, proposals):
            img = input["image"]
            img = convert_image_to_rgb(img.permute(1, 2, 0), self.input_format)
            v_gt = Visualizer(img, None)
            v_gt = v_gt.overlay_instances(boxes=input["instances"].gt_boxes)
            anno_img = v_gt.get_image()
            box_size = min(len(prop.proposal_boxes), max_vis_prop)
            v_pred = Visualizer(img, None)
            v_pred = v_pred.overlay_instances(
                boxes=prop.proposal_boxes[0:box_size].tensor.cpu().numpy()
            )
            prop_img = v_pred.get_image()
            vis_img = np.concatenate((anno_img, prop_img), axis=1)
            vis_img = vis_img.transpose(2, 0, 1)
            vis_name = "Left: GT bounding boxes; Right: Predicted proposals"
            storage.put_image(vis_name, vis_img)
            break  # only visualize one image in a batch

    def forward(self, batched_inputs: List[Dict[str, torch.Tensor]]):
        """
        Args:
            batched_inputs: a list, batched outputs of :class:`DatasetMapper` .
                Each item in the list contains the inputs for one image.
                For now, each item in the list is a dict that contains:

                * image: Tensor, image in (C, H, W) format.
                * instances (optional): groundtruth :class:`Instances`
                * proposals (optional): :class:`Instances`, precomputed proposals.

                Other information that's included in the original dicts, such as:

                * "height", "width" (int): the output resolution of the model, used in
                  inference. See :meth:`postprocess` for details.

        Returns:
            list[dict]:
                Each dict is the output for one input image.
                The dict contains one key "instances" whose value is a :class:`Instances`.
                The :class:`Instances` object has the following keys:
                "pred_boxes", "pred_classes", "scores", "pred_masks", "pred_keypoints"
        """
        if not self.training:
            return self.inference(batched_inputs)

        images = self.preprocess_image(batched_inputs)
        if "instances" in batched_inputs[0]:
            gt_instances = [x["instances"].to(self.device) for x in batched_inputs]
        else:
            gt_instances = None

        features = self.backbone(images.tensor)

        if self.proposal_generator is not None:
            proposals, proposal_losses = self.proposal_generator(images, features, gt_instances)
        else:
            assert "proposals" in batched_inputs[0]
            proposals = [x["proposals"].to(self.device) for x in batched_inputs]
            proposal_losses = {}

        _, detector_losses = self.roi_heads(images, features, proposals, gt_instances)
        if self.vis_period > 0:
            storage = get_event_storage()
            if storage.iter % self.vis_period == 0:
                self.visualize_training(batched_inputs, proposals)

        losses = {}
        losses.update(detector_losses)
        losses.update(proposal_losses)
        return losses

    def inference(
        self,
        batched_inputs: List[Dict[str, torch.Tensor]],
        detected_instances: Optional[List[Instances]] = None,
        do_postprocess: bool = True,
    ):
        """
        Run inference on the given inputs.

        Args:
            batched_inputs (list[dict]): same as in :meth:`forward`
            detected_instances (None or list[Instances]): if not None, it
                contains an `Instances` object per image. The `Instances`
                object contains "pred_boxes" and "pred_classes" which are
                known boxes in the image.
                The inference will then skip the detection of bounding boxes,
                and only predict other per-ROI outputs.
            do_postprocess (bool): whether to apply post-processing on the outputs.

        Returns:
            When do_postprocess=True, same as in :meth:`forward`.
            Otherwise, a list[Instances] containing raw network outputs.
""" assert not self.training images = self.preprocess_image(batched_inputs) features = self.backbone(images.tensor) if detected_instances is None: if self.proposal_generator is not None: proposals, _ = self.proposal_generator(images, features, None) else: assert "proposals" in batched_inputs[0] proposals = [x["proposals"].to(self.device) for x in batched_inputs] results, _ = self.roi_heads(images, features, proposals, None) else: detected_instances = [x.to(self.device) for x in detected_instances] results = self.roi_heads.forward_with_given_boxes(features, detected_instances) if do_postprocess: assert not torch.jit.is_scripting(), "Scripting is not supported for postprocess." return GeneralizedRCNN._postprocess(results, batched_inputs, images.image_sizes) else: return results def preprocess_image(self, batched_inputs: List[Dict[str, torch.Tensor]]): """ Normalize, pad and batch the input images. """ images = [self._move_to_current_device(x["image"]) for x in batched_inputs] images = [(x - self.pixel_mean) / self.pixel_std for x in images] images = ImageList.from_tensors(images, self.backbone.size_divisibility) return images @staticmethod def _postprocess(instances, batched_inputs: List[Dict[str, torch.Tensor]], image_sizes): """ Rescale the output instances to the target size. """ # note: private function; subject to changes processed_results = [] for results_per_image, input_per_image, image_size in zip( instances, batched_inputs, image_sizes ): height = input_per_image.get("height", image_size[0]) width = input_per_image.get("width", image_size[1]) r = detector_postprocess(results_per_image, height, width) processed_results.append({"instances": r}) return processed_results @META_ARCH_REGISTRY.register() class ProposalNetwork(nn.Module): """ A meta architecture that only predicts object proposals. 
""" @configurable def __init__( self, *, backbone: Backbone, proposal_generator: nn.Module, pixel_mean: Tuple[float], pixel_std: Tuple[float], ): """ Args: backbone: a backbone module, must follow detectron2's backbone interface proposal_generator: a module that generates proposals using backbone features pixel_mean, pixel_std: list or tuple with #channels element, representing the per-channel mean and std to be used to normalize the input image """ super().__init__() self.backbone = backbone self.proposal_generator = proposal_generator self.register_buffer("pixel_mean", torch.tensor(pixel_mean).view(-1, 1, 1), False) self.register_buffer("pixel_std", torch.tensor(pixel_std).view(-1, 1, 1), False) @classmethod def from_config(cls, cfg): backbone = build_backbone(cfg) return { "backbone": backbone, "proposal_generator": build_proposal_generator(cfg, backbone.output_shape()), "pixel_mean": cfg.MODEL.PIXEL_MEAN, "pixel_std": cfg.MODEL.PIXEL_STD, } @property def device(self): return self.pixel_mean.device def _move_to_current_device(self, x): return move_device_like(x, self.pixel_mean) def forward(self, batched_inputs): """ Args: Same as in :class:`GeneralizedRCNN.forward` Returns: list[dict]: Each dict is the output for one input image. The dict contains one key "proposals" whose value is a :class:`Instances` with keys "proposal_boxes" and "objectness_logits". 
""" images = [self._move_to_current_device(x["image"]) for x in batched_inputs] images = [(x - self.pixel_mean) / self.pixel_std for x in images] images = ImageList.from_tensors(images, self.backbone.size_divisibility) features = self.backbone(images.tensor) if "instances" in batched_inputs[0]: gt_instances = [x["instances"].to(self.device) for x in batched_inputs] elif "targets" in batched_inputs[0]: log_first_n( logging.WARN, "'targets' in the model inputs is now renamed to 'instances'!", n=10 ) gt_instances = [x["targets"].to(self.device) for x in batched_inputs] else: gt_instances = None proposals, proposal_losses = self.proposal_generator(images, features, gt_instances) # In training, the proposals are not useful at all but we generate them anyway. # This makes RPN-only models about 5% slower. if self.training: return proposal_losses processed_results = [] for results_per_image, input_per_image, image_size in zip( proposals, batched_inputs, images.image_sizes ): height = input_per_image.get("height", image_size[0]) width = input_per_image.get("width", image_size[1]) r = detector_postprocess(results_per_image, height, width) processed_results.append({"proposals": r}) return processed_results
# Source: /roof_mask_Yv8-0.5.9-py3-none-any.whl/detectron2/modeling/meta_arch/rcnn.py (pypi)
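`preprocess_image` above normalizes each (C, H, W) image by per-channel mean/std buffers that were reshaped to (C, 1, 1) with `view(-1, 1, 1)`, so subtraction and division broadcast channel-wise over H and W. A minimal NumPy sketch of the same broadcasting; the statistics here are the ImageNet BGR values mentioned in `DenseDetector`'s docstring, used purely as illustrative numbers:

```python
import numpy as np

# Illustrative per-channel statistics (BGR order), playing the role of
# cfg.MODEL.PIXEL_MEAN / cfg.MODEL.PIXEL_STD
pixel_mean = np.array([103.53, 116.28, 123.675]).reshape(-1, 1, 1)  # (C, 1, 1)
pixel_std = np.array([57.375, 57.120, 58.395]).reshape(-1, 1, 1)

image = np.full((3, 4, 4), 116.28)  # a dummy (C, H, W) image
normalized = (image - pixel_mean) / pixel_std  # broadcasts over H and W

print(normalized.shape)  # (3, 4, 4)
```

Channel 1 of the dummy image equals its mean, so that channel normalizes to exactly zero while the other channels shift by their own means.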
import numpy as np
from typing import Dict, List, Optional, Tuple
import torch
from torch import Tensor, nn

from detectron2.data.detection_utils import convert_image_to_rgb
from detectron2.layers import move_device_like
from detectron2.modeling import Backbone
from detectron2.structures import Boxes, ImageList, Instances
from detectron2.utils.events import get_event_storage

from ..postprocessing import detector_postprocess


def permute_to_N_HWA_K(tensor, K: int):
    """
    Transpose/reshape a tensor from (N, (Ai x K), H, W) to (N, (HxWxAi), K)
    """
    assert tensor.dim() == 4, tensor.shape
    N, _, H, W = tensor.shape
    tensor = tensor.view(N, -1, K, H, W)
    tensor = tensor.permute(0, 3, 4, 1, 2)
    tensor = tensor.reshape(N, -1, K)  # Size=(N, HWA, K)
    return tensor


class DenseDetector(nn.Module):
    """
    Base class for dense detector. We define a dense detector as a fully-convolutional model that
    makes per-pixel (i.e. dense) predictions.
    """

    def __init__(
        self,
        backbone: Backbone,
        head: nn.Module,
        head_in_features: Optional[List[str]] = None,
        *,
        pixel_mean,
        pixel_std,
    ):
        """
        Args:
            backbone: backbone module
            head: head module
            head_in_features: backbone features to use in head.
                Default to all backbone features.
            pixel_mean (Tuple[float]):
                Values to be used for image normalization (BGR order).
                To train on images of different number of channels, set different mean & std.
                Default values are the mean pixel value from ImageNet: [103.53, 116.28, 123.675]
            pixel_std (Tuple[float]):
                When using pre-trained models in Detectron1 or any MSRA models,
                std has been absorbed into its conv1 weights, so the std needs to be set 1.
                Otherwise, you can use [57.375, 57.120, 58.395] (ImageNet std)
        """
        super().__init__()

        self.backbone = backbone
        self.head = head
        if head_in_features is None:
            shapes = self.backbone.output_shape()
            self.head_in_features = sorted(shapes.keys(), key=lambda x: shapes[x].stride)
        else:
            self.head_in_features = head_in_features
        self.register_buffer("pixel_mean", torch.tensor(pixel_mean).view(-1, 1, 1), False)
        self.register_buffer("pixel_std", torch.tensor(pixel_std).view(-1, 1, 1), False)

    @property
    def device(self):
        return self.pixel_mean.device

    def _move_to_current_device(self, x):
        return move_device_like(x, self.pixel_mean)

    def forward(self, batched_inputs: List[Dict[str, Tensor]]):
        """
        Args:
            batched_inputs: a list, batched outputs of :class:`DatasetMapper` .
                Each item in the list contains the inputs for one image.
                For now, each item in the list is a dict that contains:

                * image: Tensor, image in (C, H, W) format.
                * instances: Instances

                Other information that's included in the original dicts, such as:

                * "height", "width" (int): the output resolution of the model, used in inference.
                  See :meth:`postprocess` for details.

        Returns:
            In training, dict[str, Tensor]: mapping from a named loss to a tensor storing the
            loss. Used during training only. In inference, the standard output format, described
            in :doc:`/tutorials/models`.
        """
        images = self.preprocess_image(batched_inputs)
        features = self.backbone(images.tensor)
        features = [features[f] for f in self.head_in_features]
        predictions = self.head(features)

        if self.training:
            assert not torch.jit.is_scripting(), "Not supported"
            assert "instances" in batched_inputs[0], "Instance annotations are missing in training!"
            gt_instances = [x["instances"].to(self.device) for x in batched_inputs]
            return self.forward_training(images, features, predictions, gt_instances)
        else:
            results = self.forward_inference(images, features, predictions)
            if torch.jit.is_scripting():
                return results

            processed_results = []
            for results_per_image, input_per_image, image_size in zip(
                results, batched_inputs, images.image_sizes
            ):
                height = input_per_image.get("height", image_size[0])
                width = input_per_image.get("width", image_size[1])
                r = detector_postprocess(results_per_image, height, width)
                processed_results.append({"instances": r})
            return processed_results

    def forward_training(self, images, features, predictions, gt_instances):
        raise NotImplementedError()

    def preprocess_image(self, batched_inputs: List[Dict[str, Tensor]]):
        """
        Normalize, pad and batch the input images.
        """
        images = [self._move_to_current_device(x["image"]) for x in batched_inputs]
        images = [(x - self.pixel_mean) / self.pixel_std for x in images]
        images = ImageList.from_tensors(images, self.backbone.size_divisibility)
        return images

    def _transpose_dense_predictions(
        self, predictions: List[List[Tensor]], dims_per_anchor: List[int]
    ) -> List[List[Tensor]]:
        """
        Transpose the dense per-level predictions.

        Args:
            predictions: a list of outputs, each is a list of per-level predictions with
                shape (N, Ai x K, Hi, Wi), where N is the number of images, Ai is the number
                of anchors per location on level i, K is the dimension of predictions per anchor.
            dims_per_anchor: the value of K for each predictions. e.g. 4 for box prediction,
                #classes for classification prediction.

        Returns:
            List[List[Tensor]]: each prediction is transposed to (N, Hi x Wi x Ai, K).
        """
        assert len(predictions) == len(dims_per_anchor)
        res: List[List[Tensor]] = []
        for pred, dim_per_anchor in zip(predictions, dims_per_anchor):
            pred = [permute_to_N_HWA_K(x, dim_per_anchor) for x in pred]
            res.append(pred)
        return res

    def _ema_update(self, name: str, value: float, initial_value: float, momentum: float = 0.9):
        """
        Apply EMA update to `self.name` using `value`.

        This is mainly used for loss normalizer. In Detectron1, loss is normalized by number
        of foreground samples in the batch. When batch size is 1 per GPU, #foreground has a
        large variance and using it leads to lower performance. Therefore we maintain an EMA of
        #foreground to stabilize the normalizer.

        Args:
            name: name of the normalizer
            value: the new value to update
            initial_value: the initial value to start with
            momentum: momentum of EMA

        Returns:
            float: the updated EMA value
        """
        if hasattr(self, name):
            old = getattr(self, name)
        else:
            old = initial_value
        new = old * momentum + value * (1 - momentum)
        setattr(self, name, new)
        return new

    def _decode_per_level_predictions(
        self,
        anchors: Boxes,
        pred_scores: Tensor,
        pred_deltas: Tensor,
        score_thresh: float,
        topk_candidates: int,
        image_size: Tuple[int, int],
    ) -> Instances:
        """
        Decode boxes and classification predictions of one feature level, by
        the following steps:
        1. filter the predictions based on score threshold and top K scores.
        2. transform the box regression outputs
        3. return the predicted scores, classes and boxes

        Args:
            anchors: Boxes, anchor for this feature level
            pred_scores: HxWxA,K
            pred_deltas: HxWxA,4

        Returns:
            Instances: with field "scores", "pred_boxes", "pred_classes".
        """
        # Apply two filters to make NMS faster.
        # 1. Keep boxes with confidence score higher than threshold
        keep_idxs = pred_scores > score_thresh
        pred_scores = pred_scores[keep_idxs]
        topk_idxs = torch.nonzero(keep_idxs)  # Kx2

        # 2. Keep top k top scoring boxes only
        num_topk = min(topk_candidates, topk_idxs.size(0))
        pred_scores, idxs = pred_scores.topk(num_topk)
        topk_idxs = topk_idxs[idxs]
        anchor_idxs, classes_idxs = topk_idxs.unbind(dim=1)

        pred_boxes = self.box2box_transform.apply_deltas(
            pred_deltas[anchor_idxs], anchors.tensor[anchor_idxs]
        )
        return Instances(
            image_size, pred_boxes=Boxes(pred_boxes), scores=pred_scores, pred_classes=classes_idxs
        )

    def _decode_multi_level_predictions(
        self,
        anchors: List[Boxes],
        pred_scores: List[Tensor],
        pred_deltas: List[Tensor],
        score_thresh: float,
        topk_candidates: int,
        image_size: Tuple[int, int],
    ) -> Instances:
        """
        Run `_decode_per_level_predictions` for all feature levels and concat the results.
        """
        predictions = [
            self._decode_per_level_predictions(
                anchors_i,
                box_cls_i,
                box_reg_i,
                self.test_score_thresh,
                self.test_topk_candidates,
                image_size,
            )
            # Iterate over every feature level
            for box_cls_i, box_reg_i, anchors_i in zip(pred_scores, pred_deltas, anchors)
        ]
        return predictions[0].cat(predictions)  # 'Instances.cat' is not scriptable but this is

    def visualize_training(self, batched_inputs, results):
        """
        A function used to visualize ground truth images and final network predictions.
        It shows ground truth bounding boxes on the original image and up to 20
        predicted object bounding boxes on the original image.

        Args:
            batched_inputs (list): a list that contains input to the model.
            results (List[Instances]): a list of #images elements returned by
                forward_inference().
        """
        from detectron2.utils.visualizer import Visualizer

        assert len(batched_inputs) == len(
            results
        ), "Cannot visualize inputs and results of different sizes"
        storage = get_event_storage()
        max_boxes = 20

        image_index = 0  # only visualize a single image
        img = batched_inputs[image_index]["image"]
        img = convert_image_to_rgb(img.permute(1, 2, 0), self.input_format)
        v_gt = Visualizer(img, None)
        v_gt = v_gt.overlay_instances(boxes=batched_inputs[image_index]["instances"].gt_boxes)
        anno_img = v_gt.get_image()
        processed_results = detector_postprocess(results[image_index], img.shape[0], img.shape[1])
        predicted_boxes = processed_results.pred_boxes.tensor.detach().cpu().numpy()

        v_pred = Visualizer(img, None)
        v_pred = v_pred.overlay_instances(boxes=predicted_boxes[0:max_boxes])
        prop_img = v_pred.get_image()
        vis_img = np.vstack((anno_img, prop_img))
        vis_img = vis_img.transpose(2, 0, 1)
        vis_name = f"Top: GT bounding boxes; Bottom: {max_boxes} Highest Scoring Results"
        storage.put_image(vis_name, vis_img)
# Source: /roof_mask_Yv8-0.5.9-py3-none-any.whl/detectron2/modeling/meta_arch/dense_detector.py (pypi)
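`permute_to_N_HWA_K` above reshapes dense head outputs from (N, A*K, H, W) to (N, H*W*A, K): the channel axis is split into (anchors, values-per-anchor), the spatial axes are moved ahead of the anchor axis, and everything but the last axis is flattened. A NumPy sketch of the same index gymnastics, with torch's `view`/`permute`/`reshape` replaced by `reshape`/`transpose`:

```python
import numpy as np

def permute_to_N_HWA_K_np(x, K):
    # NumPy analogue of permute_to_N_HWA_K: (N, A*K, H, W) -> (N, H*W*A, K)
    assert x.ndim == 4, x.shape
    N, _, H, W = x.shape
    x = x.reshape(N, -1, K, H, W)   # (N, A, K, H, W)
    x = x.transpose(0, 3, 4, 1, 2)  # (N, H, W, A, K)
    return x.reshape(N, -1, K)      # (N, H*W*A, K)

# N=2 images, A=3 anchors with K=4 values each, on a 5x6 feature map
pred = np.random.rand(2, 3 * 4, 5, 6)
out = permute_to_N_HWA_K_np(pred, K=4)
print(out.shape)  # (2, 90, 4)
```

The flattened index is `(h*W + w)*A + a`, so the prediction for anchor `a` at location `(h, w)` lands at `out[n, (h*W + w)*A + a, :]`.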
import numpy as np
from typing import Callable, Dict, Optional, Tuple, Union
import fvcore.nn.weight_init as weight_init
import torch
from torch import nn
from torch.nn import functional as F

from detectron2.config import configurable
from detectron2.layers import Conv2d, ShapeSpec, get_norm
from detectron2.structures import ImageList
from detectron2.utils.registry import Registry

from ..backbone import Backbone, build_backbone
from ..postprocessing import sem_seg_postprocess
from .build import META_ARCH_REGISTRY

__all__ = [
    "SemanticSegmentor",
    "SEM_SEG_HEADS_REGISTRY",
    "SemSegFPNHead",
    "build_sem_seg_head",
]


SEM_SEG_HEADS_REGISTRY = Registry("SEM_SEG_HEADS")
SEM_SEG_HEADS_REGISTRY.__doc__ = """
Registry for semantic segmentation heads, which make semantic segmentation predictions
from feature maps.
"""


@META_ARCH_REGISTRY.register()
class SemanticSegmentor(nn.Module):
    """
    Main class for semantic segmentation architectures.
    """

    @configurable
    def __init__(
        self,
        *,
        backbone: Backbone,
        sem_seg_head: nn.Module,
        pixel_mean: Tuple[float],
        pixel_std: Tuple[float],
    ):
        """
        Args:
            backbone: a backbone module, must follow detectron2's backbone interface
            sem_seg_head: a module that predicts semantic segmentation from backbone features
            pixel_mean, pixel_std: list or tuple with #channels element, representing
                the per-channel mean and std to be used to normalize the input image
        """
        super().__init__()
        self.backbone = backbone
        self.sem_seg_head = sem_seg_head
        self.register_buffer("pixel_mean", torch.tensor(pixel_mean).view(-1, 1, 1), False)
        self.register_buffer("pixel_std", torch.tensor(pixel_std).view(-1, 1, 1), False)

    @classmethod
    def from_config(cls, cfg):
        backbone = build_backbone(cfg)
        sem_seg_head = build_sem_seg_head(cfg, backbone.output_shape())
        return {
            "backbone": backbone,
            "sem_seg_head": sem_seg_head,
            "pixel_mean": cfg.MODEL.PIXEL_MEAN,
            "pixel_std": cfg.MODEL.PIXEL_STD,
        }

    @property
    def device(self):
        return self.pixel_mean.device

    def forward(self, batched_inputs):
        """
        Args:
            batched_inputs: a list, batched outputs of :class:`DatasetMapper`.
                Each item in the list contains the inputs for one image.

                For now, each item in the list is a dict that contains:

                * "image": Tensor, image in (C, H, W) format.
                * "sem_seg": semantic segmentation ground truth
                * Other information that's included in the original dicts, such as:
                  "height", "width" (int): the output resolution of the model (may be different
                  from input resolution), used in inference.

        Returns:
            list[dict]:
              Each dict is the output for one input image. The dict contains one key "sem_seg"
              whose value is a Tensor that represents the per-pixel segmentation predicted
              by the head. The prediction has shape KxHxW that represents the logits of
              each class for each pixel.
        """
        images = [x["image"].to(self.device) for x in batched_inputs]
        images = [(x - self.pixel_mean) / self.pixel_std for x in images]
        images = ImageList.from_tensors(images, self.backbone.size_divisibility)

        features = self.backbone(images.tensor)

        if "sem_seg" in batched_inputs[0]:
            targets = [x["sem_seg"].to(self.device) for x in batched_inputs]
            targets = ImageList.from_tensors(
                targets, self.backbone.size_divisibility, self.sem_seg_head.ignore_value
            ).tensor
        else:
            targets = None
        results, losses = self.sem_seg_head(features, targets)

        if self.training:
            return losses

        processed_results = []
        for result, input_per_image, image_size in zip(results, batched_inputs, images.image_sizes):
            height = input_per_image.get("height", image_size[0])
            width = input_per_image.get("width", image_size[1])
            r = sem_seg_postprocess(result, image_size, height, width)
            processed_results.append({"sem_seg": r})
        return processed_results


def build_sem_seg_head(cfg, input_shape):
    """
    Build a semantic segmentation head from `cfg.MODEL.SEM_SEG_HEAD.NAME`.
    """
    name = cfg.MODEL.SEM_SEG_HEAD.NAME
    return SEM_SEG_HEADS_REGISTRY.get(name)(cfg, input_shape)


@SEM_SEG_HEADS_REGISTRY.register()
class SemSegFPNHead(nn.Module):
    """
    A semantic segmentation head described in :paper:`PanopticFPN`.
    It takes a list of FPN features as input, and applies a sequence of
    3x3 convs and upsampling to scale all of them to the stride defined by
    ``common_stride``. Then these features are added and used to make final
    predictions by another 1x1 conv layer.
    """

    @configurable
    def __init__(
        self,
        input_shape: Dict[str, ShapeSpec],
        *,
        num_classes: int,
        conv_dims: int,
        common_stride: int,
        loss_weight: float = 1.0,
        norm: Optional[Union[str, Callable]] = None,
        ignore_value: int = -1,
    ):
        """
        NOTE: this interface is experimental.

        Args:
            input_shape: shapes (channels and stride) of the input features
            num_classes: number of classes to predict
            conv_dims: number of output channels for the intermediate conv layers.
            common_stride: the common stride that all features will be upscaled to
            loss_weight: loss weight
            norm (str or callable): normalization for all conv layers
            ignore_value: category id to be ignored during training.
        """
        super().__init__()
        input_shape = sorted(input_shape.items(), key=lambda x: x[1].stride)
        if not len(input_shape):
            raise ValueError("SemSegFPNHead(input_shape=) cannot be empty!")
        self.in_features = [k for k, v in input_shape]
        feature_strides = [v.stride for k, v in input_shape]
        feature_channels = [v.channels for k, v in input_shape]

        self.ignore_value = ignore_value
        self.common_stride = common_stride
        self.loss_weight = loss_weight

        self.scale_heads = []
        for in_feature, stride, channels in zip(
            self.in_features, feature_strides, feature_channels
        ):
            head_ops = []
            head_length = max(1, int(np.log2(stride) - np.log2(self.common_stride)))
            for k in range(head_length):
                norm_module = get_norm(norm, conv_dims)
                conv = Conv2d(
                    channels if k == 0 else conv_dims,
                    conv_dims,
                    kernel_size=3,
                    stride=1,
                    padding=1,
                    bias=not norm,
                    norm=norm_module,
                    activation=F.relu,
                )
                weight_init.c2_msra_fill(conv)
                head_ops.append(conv)
                if stride != self.common_stride:
                    head_ops.append(
                        nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False)
                    )
            self.scale_heads.append(nn.Sequential(*head_ops))
            self.add_module(in_feature, self.scale_heads[-1])
        self.predictor = Conv2d(conv_dims, num_classes, kernel_size=1, stride=1, padding=0)
        weight_init.c2_msra_fill(self.predictor)

    @classmethod
    def from_config(cls, cfg, input_shape: Dict[str, ShapeSpec]):
        return {
            "input_shape": {
                k: v for k, v in input_shape.items() if k in cfg.MODEL.SEM_SEG_HEAD.IN_FEATURES
            },
            "ignore_value": cfg.MODEL.SEM_SEG_HEAD.IGNORE_VALUE,
            "num_classes": cfg.MODEL.SEM_SEG_HEAD.NUM_CLASSES,
            "conv_dims": cfg.MODEL.SEM_SEG_HEAD.CONVS_DIM,
            "common_stride": cfg.MODEL.SEM_SEG_HEAD.COMMON_STRIDE,
            "norm": cfg.MODEL.SEM_SEG_HEAD.NORM,
            "loss_weight": cfg.MODEL.SEM_SEG_HEAD.LOSS_WEIGHT,
        }

    def forward(self, features, targets=None):
        """
        Returns:
            In training, returns (None, dict of losses)
            In inference, returns (CxHxW logits, {})
        """
        x = self.layers(features)
        if self.training:
            return None, self.losses(x, targets)
        else:
            x = F.interpolate(
                x, scale_factor=self.common_stride, mode="bilinear", align_corners=False
            )
            return x, {}

    def layers(self, features):
        for i, f in enumerate(self.in_features):
            if i == 0:
                x = self.scale_heads[i](features[f])
            else:
                x = x + self.scale_heads[i](features[f])
        x = self.predictor(x)
        return x

    def losses(self, predictions, targets):
        predictions = predictions.float()  # https://github.com/pytorch/pytorch/issues/48163
        predictions = F.interpolate(
            predictions,
            scale_factor=self.common_stride,
            mode="bilinear",
            align_corners=False,
        )
        loss = F.cross_entropy(
            predictions, targets, reduction="mean", ignore_index=self.ignore_value
        )
        losses = {"loss_sem_seg": loss * self.loss_weight}
        return losses
# Source: /roof_mask_Yv8-0.5.9-py3-none-any.whl/detectron2/modeling/meta_arch/semantic_seg.py (pypi)
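In `SemSegFPNHead.__init__` above, each FPN level gets one 3x3 conv (plus a 2x upsample) per factor-of-two gap between its stride and `common_stride`, via `head_length = max(1, int(np.log2(stride) - np.log2(common_stride)))`. A quick standalone check of that formula for the typical FPN strides and the default `common_stride` of 4:

```python
import numpy as np

def head_length(stride, common_stride=4):
    # Number of conv(+upsample) stages needed to bring `stride` down to `common_stride`;
    # a level already at common_stride still gets one conv (the max(1, ...) floor).
    return max(1, int(np.log2(stride) - np.log2(common_stride)))

print([head_length(s) for s in (4, 8, 16, 32)])  # [1, 1, 2, 3]
```

Each stage after the first halves the stride, so the deepest level (stride 32) needs three upsampling stages to reach stride 4.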
import logging
import math
from typing import List, Tuple
import torch
from fvcore.nn import sigmoid_focal_loss_jit
from torch import Tensor, nn
from torch.nn import functional as F

from detectron2.config import configurable
from detectron2.layers import CycleBatchNormList, ShapeSpec, batched_nms, cat, get_norm
from detectron2.structures import Boxes, ImageList, Instances, pairwise_iou
from detectron2.utils.events import get_event_storage

from ..anchor_generator import build_anchor_generator
from ..backbone import Backbone, build_backbone
from ..box_regression import Box2BoxTransform, _dense_box_regression_loss
from ..matcher import Matcher
from .build import META_ARCH_REGISTRY
from .dense_detector import DenseDetector, permute_to_N_HWA_K  # noqa

__all__ = ["RetinaNet"]


logger = logging.getLogger(__name__)


@META_ARCH_REGISTRY.register()
class RetinaNet(DenseDetector):
    """
    Implement RetinaNet in :paper:`RetinaNet`.
    """

    @configurable
    def __init__(
        self,
        *,
        backbone: Backbone,
        head: nn.Module,
        head_in_features,
        anchor_generator,
        box2box_transform,
        anchor_matcher,
        num_classes,
        focal_loss_alpha=0.25,
        focal_loss_gamma=2.0,
        smooth_l1_beta=0.0,
        box_reg_loss_type="smooth_l1",
        test_score_thresh=0.05,
        test_topk_candidates=1000,
        test_nms_thresh=0.5,
        max_detections_per_image=100,
        pixel_mean,
        pixel_std,
        vis_period=0,
        input_format="BGR",
    ):
        """
        NOTE: this interface is experimental.

        Args:
            backbone: a backbone module, must follow detectron2's backbone interface
            head (nn.Module): a module that predicts logits and regression deltas
                for each level from a list of per-level features
            head_in_features (Tuple[str]): Names of the input feature maps to be used in head
            anchor_generator (nn.Module): a module that creates anchors from a
                list of features. Usually an instance of :class:`AnchorGenerator`
            box2box_transform (Box2BoxTransform): defines the transform from anchors boxes to
                instance boxes
            anchor_matcher (Matcher): label the anchors by matching them with ground truth.
            num_classes (int): number of classes. Used to label background proposals.

            # Loss parameters:
            focal_loss_alpha (float): focal_loss_alpha
            focal_loss_gamma (float): focal_loss_gamma
            smooth_l1_beta (float): smooth_l1_beta
            box_reg_loss_type (str): Options are "smooth_l1", "giou", "diou", "ciou"

            # Inference parameters:
            test_score_thresh (float): Inference cls score threshold, only anchors with
                score > INFERENCE_TH are considered for inference (to improve speed)
            test_topk_candidates (int): Select topk candidates before NMS
            test_nms_thresh (float): Overlap threshold used for non-maximum suppression
                (suppress boxes with IoU >= this threshold)
            max_detections_per_image (int):
                Maximum number of detections to return per image during inference
                (100 is based on the limit established for the COCO dataset).

            pixel_mean, pixel_std: see :class:`DenseDetector`.
        """
        super().__init__(
            backbone, head, head_in_features, pixel_mean=pixel_mean, pixel_std=pixel_std
        )
        self.num_classes = num_classes

        # Anchors
        self.anchor_generator = anchor_generator
        self.box2box_transform = box2box_transform
        self.anchor_matcher = anchor_matcher

        # Loss parameters:
        self.focal_loss_alpha = focal_loss_alpha
        self.focal_loss_gamma = focal_loss_gamma
        self.smooth_l1_beta = smooth_l1_beta
        self.box_reg_loss_type = box_reg_loss_type
        # Inference parameters:
        self.test_score_thresh = test_score_thresh
        self.test_topk_candidates = test_topk_candidates
        self.test_nms_thresh = test_nms_thresh
        self.max_detections_per_image = max_detections_per_image
        # Vis parameters
        self.vis_period = vis_period
        self.input_format = input_format

    @classmethod
    def from_config(cls, cfg):
        backbone = build_backbone(cfg)
        backbone_shape = backbone.output_shape()
        feature_shapes = [backbone_shape[f] for f in cfg.MODEL.RETINANET.IN_FEATURES]
        head = RetinaNetHead(cfg, feature_shapes)
        anchor_generator = build_anchor_generator(cfg, feature_shapes)
        return {
            "backbone": backbone,
            "head": head,
            "anchor_generator": anchor_generator,
            "box2box_transform": Box2BoxTransform(weights=cfg.MODEL.RETINANET.BBOX_REG_WEIGHTS),
            "anchor_matcher": Matcher(
                cfg.MODEL.RETINANET.IOU_THRESHOLDS,
                cfg.MODEL.RETINANET.IOU_LABELS,
                allow_low_quality_matches=True,
            ),
            "pixel_mean": cfg.MODEL.PIXEL_MEAN,
            "pixel_std": cfg.MODEL.PIXEL_STD,
            "num_classes": cfg.MODEL.RETINANET.NUM_CLASSES,
            "head_in_features": cfg.MODEL.RETINANET.IN_FEATURES,
            # Loss parameters:
            "focal_loss_alpha": cfg.MODEL.RETINANET.FOCAL_LOSS_ALPHA,
            "focal_loss_gamma": cfg.MODEL.RETINANET.FOCAL_LOSS_GAMMA,
            "smooth_l1_beta": cfg.MODEL.RETINANET.SMOOTH_L1_LOSS_BETA,
            "box_reg_loss_type": cfg.MODEL.RETINANET.BBOX_REG_LOSS_TYPE,
            # Inference parameters:
            "test_score_thresh": cfg.MODEL.RETINANET.SCORE_THRESH_TEST,
            "test_topk_candidates": cfg.MODEL.RETINANET.TOPK_CANDIDATES_TEST,
            "test_nms_thresh": cfg.MODEL.RETINANET.NMS_THRESH_TEST,
            "max_detections_per_image": cfg.TEST.DETECTIONS_PER_IMAGE,
            # Vis parameters
            "vis_period": cfg.VIS_PERIOD,
            "input_format": cfg.INPUT.FORMAT,
        }

    def forward_training(self, images, features, predictions, gt_instances):
        # Transpose the Hi*Wi*A dimension to the middle:
        pred_logits, pred_anchor_deltas = self._transpose_dense_predictions(
            predictions, [self.num_classes, 4]
        )
        anchors = self.anchor_generator(features)
        gt_labels, gt_boxes = self.label_anchors(anchors, gt_instances)
        return self.losses(anchors, pred_logits, gt_labels, pred_anchor_deltas, gt_boxes)

    def losses(self, anchors, pred_logits, gt_labels, pred_anchor_deltas, gt_boxes):
        """
        Args:
            anchors (list[Boxes]): a list of #feature level Boxes
            gt_labels, gt_boxes: see output of :meth:`RetinaNet.label_anchors`.
                Their shapes are (N, R) and (N, R, 4), respectively, where R is
                the total number of anchors across levels, i.e. sum(Hi x Wi x Ai)
            pred_logits, pred_anchor_deltas: both are list[Tensor]. Each element in the
                list corresponds to one level and has shape (N, Hi * Wi * Ai, K or 4).
                Where K is the number of classes used in `pred_logits`.

        Returns:
            dict[str, Tensor]:
                mapping from a named loss to a scalar tensor storing the loss.
                Used during training only. The dict keys are: "loss_cls" and "loss_box_reg"
        """
        num_images = len(gt_labels)
        gt_labels = torch.stack(gt_labels)  # (N, R)

        valid_mask = gt_labels >= 0
        pos_mask = (gt_labels >= 0) & (gt_labels != self.num_classes)
        num_pos_anchors = pos_mask.sum().item()
        get_event_storage().put_scalar("num_pos_anchors", num_pos_anchors / num_images)
        normalizer = self._ema_update("loss_normalizer", max(num_pos_anchors, 1), 100)

        # classification and regression loss
        gt_labels_target = F.one_hot(gt_labels[valid_mask], num_classes=self.num_classes + 1)[
            :, :-1
        ]  # no loss for the last (background) class
        loss_cls = sigmoid_focal_loss_jit(
            cat(pred_logits, dim=1)[valid_mask],
            gt_labels_target.to(pred_logits[0].dtype),
            alpha=self.focal_loss_alpha,
            gamma=self.focal_loss_gamma,
            reduction="sum",
        )

        loss_box_reg = _dense_box_regression_loss(
            anchors,
            self.box2box_transform,
            pred_anchor_deltas,
            gt_boxes,
            pos_mask,
            box_reg_loss_type=self.box_reg_loss_type,
            smooth_l1_beta=self.smooth_l1_beta,
        )

        return {
            "loss_cls": loss_cls / normalizer,
            "loss_box_reg": loss_box_reg / normalizer,
        }

    @torch.no_grad()
    def label_anchors(self, anchors, gt_instances):
        """
        Args:
            anchors (list[Boxes]): A list of #feature level Boxes.
                The Boxes contains anchors of this image on the specific feature level.
            gt_instances (list[Instances]): a list of N `Instances`s. The i-th
                `Instances` contains the ground-truth per-instance annotations
                for the i-th input image.

        Returns:
            list[Tensor]: List of #img tensors. i-th element is a vector of labels whose length is
                the total number of anchors across all feature maps (sum(Hi * Wi * A)).
                Label values are in {-1, 0, ..., K}, with -1 means ignore, and K means background.
            list[Tensor]: i-th element is a Rx4 tensor, where R is the total number of anchors
                across feature maps. The values are the matched gt boxes for each anchor.
                Values are undefined for those anchors not labeled as foreground.
        """
        anchors = Boxes.cat(anchors)  # Rx4

        gt_labels = []
        matched_gt_boxes = []
        for gt_per_image in gt_instances:
            match_quality_matrix = pairwise_iou(gt_per_image.gt_boxes, anchors)
            matched_idxs, anchor_labels = self.anchor_matcher(match_quality_matrix)
            del match_quality_matrix

            if len(gt_per_image) > 0:
                matched_gt_boxes_i = gt_per_image.gt_boxes.tensor[matched_idxs]

                gt_labels_i = gt_per_image.gt_classes[matched_idxs]
                # Anchors with label 0 are treated as background.
                gt_labels_i[anchor_labels == 0] = self.num_classes
                # Anchors with label -1 are ignored.
                gt_labels_i[anchor_labels == -1] = -1
            else:
                matched_gt_boxes_i = torch.zeros_like(anchors.tensor)
                gt_labels_i = torch.zeros_like(matched_idxs) + self.num_classes

            gt_labels.append(gt_labels_i)
            matched_gt_boxes.append(matched_gt_boxes_i)

        return gt_labels, matched_gt_boxes

    def forward_inference(
        self, images: ImageList, features: List[Tensor], predictions: List[List[Tensor]]
    ):
        pred_logits, pred_anchor_deltas = self._transpose_dense_predictions(
            predictions, [self.num_classes, 4]
        )
        anchors = self.anchor_generator(features)

        results: List[Instances] = []
        for img_idx, image_size in enumerate(images.image_sizes):
            scores_per_image = [x[img_idx].sigmoid_() for x in pred_logits]
            deltas_per_image = [x[img_idx] for x in pred_anchor_deltas]
            results_per_image = self.inference_single_image(
                anchors, scores_per_image, deltas_per_image, image_size
            )
            results.append(results_per_image)
        return results

    def inference_single_image(
        self,
        anchors: List[Boxes],
        box_cls: List[Tensor],
        box_delta: List[Tensor],
        image_size: Tuple[int, int],
    ):
        """
        Single-image inference. Return bounding-box detection results by thresholding
        on scores and applying non-maximum suppression (NMS).

        Arguments:
            anchors (list[Boxes]): list of #feature levels. Each entry contains
                a Boxes object, which contains all the anchors in that feature level.
            box_cls (list[Tensor]): list of #feature levels. Each entry contains
                tensor of size (H x W x A, K)
            box_delta (list[Tensor]): Same shape as 'box_cls' except that K becomes 4.
            image_size (tuple(H, W)): a tuple of the image height and width.

        Returns:
            Same as `inference`, but for only one image.
        """
        pred = self._decode_multi_level_predictions(
            anchors,
            box_cls,
            box_delta,
            self.test_score_thresh,
            self.test_topk_candidates,
            image_size,
        )
        keep = batched_nms(  # per-class NMS
            pred.pred_boxes.tensor, pred.scores, pred.pred_classes, self.test_nms_thresh
        )
        return pred[keep[: self.max_detections_per_image]]


class RetinaNetHead(nn.Module):
    """
    The head used in RetinaNet for object classification and box regression.
    It has two subnets for the two tasks, with a common structure but separate parameters.
    """

    @configurable
    def __init__(
        self,
        *,
        input_shape: List[ShapeSpec],
        num_classes,
        num_anchors,
        conv_dims: List[int],
        norm="",
        prior_prob=0.01,
    ):
        """
        NOTE: this interface is experimental.

        Args:
            input_shape (List[ShapeSpec]): input shape
            num_classes (int): number of classes. Used to label background proposals.
            num_anchors (int): number of generated anchors
            conv_dims (List[int]): dimensions for each convolution layer
            norm (str or callable):
                Normalization for conv layers except for the two output layers.
                See :func:`detectron2.layers.get_norm` for supported types.
            prior_prob (float): Prior weight for computing bias
        """
        super().__init__()
        self._num_features = len(input_shape)
        if norm == "BN" or norm == "SyncBN":
            logger.info(
                f"Using domain-specific {norm} in RetinaNetHead with len={self._num_features}."
            )
            bn_class = nn.BatchNorm2d if norm == "BN" else nn.SyncBatchNorm

            def norm(c):
                return CycleBatchNormList(
                    length=self._num_features, bn_class=bn_class, num_features=c
                )

        else:
            norm_name = str(type(get_norm(norm, 1)))
            if "BN" in norm_name:
                logger.warning(
                    f"Shared BatchNorm (type={norm_name}) may not work well in RetinaNetHead."
                )
) cls_subnet = [] bbox_subnet = [] for in_channels, out_channels in zip( [input_shape[0].channels] + list(conv_dims), conv_dims ): cls_subnet.append( nn.Conv2d(in_channels, out_channels, kernel_size=3, stride=1, padding=1) ) if norm: cls_subnet.append(get_norm(norm, out_channels)) cls_subnet.append(nn.ReLU()) bbox_subnet.append( nn.Conv2d(in_channels, out_channels, kernel_size=3, stride=1, padding=1) ) if norm: bbox_subnet.append(get_norm(norm, out_channels)) bbox_subnet.append(nn.ReLU()) self.cls_subnet = nn.Sequential(*cls_subnet) self.bbox_subnet = nn.Sequential(*bbox_subnet) self.cls_score = nn.Conv2d( conv_dims[-1], num_anchors * num_classes, kernel_size=3, stride=1, padding=1 ) self.bbox_pred = nn.Conv2d( conv_dims[-1], num_anchors * 4, kernel_size=3, stride=1, padding=1 ) # Initialization for modules in [self.cls_subnet, self.bbox_subnet, self.cls_score, self.bbox_pred]: for layer in modules.modules(): if isinstance(layer, nn.Conv2d): torch.nn.init.normal_(layer.weight, mean=0, std=0.01) torch.nn.init.constant_(layer.bias, 0) # Use prior in model initialization to improve stability bias_value = -(math.log((1 - prior_prob) / prior_prob)) torch.nn.init.constant_(self.cls_score.bias, bias_value) @classmethod def from_config(cls, cfg, input_shape: List[ShapeSpec]): num_anchors = build_anchor_generator(cfg, input_shape).num_cell_anchors assert ( len(set(num_anchors)) == 1 ), "Using different number of anchors between levels is not currently supported!" num_anchors = num_anchors[0] return { "input_shape": input_shape, "num_classes": cfg.MODEL.RETINANET.NUM_CLASSES, "conv_dims": [input_shape[0].channels] * cfg.MODEL.RETINANET.NUM_CONVS, "prior_prob": cfg.MODEL.RETINANET.PRIOR_PROB, "norm": cfg.MODEL.RETINANET.NORM, "num_anchors": num_anchors, } def forward(self, features: List[Tensor]): """ Arguments: features (list[Tensor]): FPN feature map tensors in high to low resolution. Each tensor in the list correspond to different feature levels. 
Returns: logits (list[Tensor]): #lvl tensors, each has shape (N, AxK, Hi, Wi). The tensor predicts the classification probability at each spatial position for each of the A anchors and K object classes. bbox_reg (list[Tensor]): #lvl tensors, each has shape (N, Ax4, Hi, Wi). The tensor predicts 4-vector (dx,dy,dw,dh) box regression values for every anchor. These values are the relative offset between the anchor and the ground truth box. """ assert len(features) == self._num_features logits = [] bbox_reg = [] for feature in features: logits.append(self.cls_score(self.cls_subnet(feature))) bbox_reg.append(self.bbox_pred(self.bbox_subnet(feature))) return logits, bbox_reg
/roof_mask_Yv8-0.5.9-py3-none-any.whl/detectron2/modeling/meta_arch/retinanet.py
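The `prior_prob` bias initialization in `RetinaNetHead.__init__` above (`bias_value = -log((1 - prior_prob) / prior_prob)`) can be checked numerically with a short stdlib-only sketch. The value `prior_prob = 0.01` mirrors the detectron2 default `cfg.MODEL.RETINANET.PRIOR_PROB`; the local `sigmoid` helper is illustrative, not detectron2 API:

```python
import math

# Choosing bias = -log((1 - p) / p) makes sigmoid(bias) = p, so every anchor
# starts with foreground probability ~= prior_prob. This keeps the focal loss
# from being dominated by the huge number of background anchors early on.
prior_prob = 0.01
bias_value = -math.log((1 - prior_prob) / prior_prob)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

print(round(bias_value, 3))           # -4.595
print(round(sigmoid(bias_value), 6))  # 0.01: recovers prior_prob exactly
```

The same trick appears in the original focal loss paper; only `cls_score.bias` gets this value, while all conv weights use the usual small-normal init.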
import logging from typing import Dict, List import torch from torch import nn from detectron2.config import configurable from detectron2.structures import ImageList from ..postprocessing import detector_postprocess, sem_seg_postprocess from .build import META_ARCH_REGISTRY from .rcnn import GeneralizedRCNN from .semantic_seg import build_sem_seg_head __all__ = ["PanopticFPN"] @META_ARCH_REGISTRY.register() class PanopticFPN(GeneralizedRCNN): """ Implement the paper :paper:`PanopticFPN`. """ @configurable def __init__( self, *, sem_seg_head: nn.Module, combine_overlap_thresh: float = 0.5, combine_stuff_area_thresh: float = 4096, combine_instances_score_thresh: float = 0.5, **kwargs, ): """ NOTE: this interface is experimental. Args: sem_seg_head: a module for the semantic segmentation head. combine_overlap_thresh: combine masks into one instances if they have enough overlap combine_stuff_area_thresh: ignore stuff areas smaller than this threshold combine_instances_score_thresh: ignore instances whose score is smaller than this threshold Other arguments are the same as :class:`GeneralizedRCNN`. """ super().__init__(**kwargs) self.sem_seg_head = sem_seg_head # options when combining instance & semantic outputs self.combine_overlap_thresh = combine_overlap_thresh self.combine_stuff_area_thresh = combine_stuff_area_thresh self.combine_instances_score_thresh = combine_instances_score_thresh @classmethod def from_config(cls, cfg): ret = super().from_config(cfg) ret.update( { "combine_overlap_thresh": cfg.MODEL.PANOPTIC_FPN.COMBINE.OVERLAP_THRESH, "combine_stuff_area_thresh": cfg.MODEL.PANOPTIC_FPN.COMBINE.STUFF_AREA_LIMIT, "combine_instances_score_thresh": cfg.MODEL.PANOPTIC_FPN.COMBINE.INSTANCES_CONFIDENCE_THRESH, # noqa } ) ret["sem_seg_head"] = build_sem_seg_head(cfg, ret["backbone"].output_shape()) logger = logging.getLogger(__name__) if not cfg.MODEL.PANOPTIC_FPN.COMBINE.ENABLED: logger.warning( "PANOPTIC_FPN.COMBINED.ENABLED is no longer used. 
" " model.inference(do_postprocess=) should be used to toggle postprocessing." ) if cfg.MODEL.PANOPTIC_FPN.INSTANCE_LOSS_WEIGHT != 1.0: w = cfg.MODEL.PANOPTIC_FPN.INSTANCE_LOSS_WEIGHT logger.warning( "PANOPTIC_FPN.INSTANCE_LOSS_WEIGHT should be replaced by weights on each ROI head." ) def update_weight(x): if isinstance(x, dict): return {k: v * w for k, v in x.items()} else: return x * w roi_heads = ret["roi_heads"] roi_heads.box_predictor.loss_weight = update_weight(roi_heads.box_predictor.loss_weight) roi_heads.mask_head.loss_weight = update_weight(roi_heads.mask_head.loss_weight) return ret def forward(self, batched_inputs): """ Args: batched_inputs: a list, batched outputs of :class:`DatasetMapper`. Each item in the list contains the inputs for one image. For now, each item in the list is a dict that contains: * "image": Tensor, image in (C, H, W) format. * "instances": Instances * "sem_seg": semantic segmentation ground truth. * Other information that's included in the original dicts, such as: "height", "width" (int): the output resolution of the model, used in inference. See :meth:`postprocess` for details. Returns: list[dict]: each dict has the results for one image. The dict contains the following keys: * "instances": see :meth:`GeneralizedRCNN.forward` for its format. * "sem_seg": see :meth:`SemanticSegmentor.forward` for its format. * "panoptic_seg": See the return value of :func:`combine_semantic_and_instance_outputs` for its format. 
""" if not self.training: return self.inference(batched_inputs) images = self.preprocess_image(batched_inputs) features = self.backbone(images.tensor) assert "sem_seg" in batched_inputs[0] gt_sem_seg = [x["sem_seg"].to(self.device) for x in batched_inputs] gt_sem_seg = ImageList.from_tensors( gt_sem_seg, self.backbone.size_divisibility, self.sem_seg_head.ignore_value ).tensor sem_seg_results, sem_seg_losses = self.sem_seg_head(features, gt_sem_seg) gt_instances = [x["instances"].to(self.device) for x in batched_inputs] proposals, proposal_losses = self.proposal_generator(images, features, gt_instances) detector_results, detector_losses = self.roi_heads( images, features, proposals, gt_instances ) losses = sem_seg_losses losses.update(proposal_losses) losses.update(detector_losses) return losses def inference(self, batched_inputs: List[Dict[str, torch.Tensor]], do_postprocess: bool = True): """ Run inference on the given inputs. Args: batched_inputs (list[dict]): same as in :meth:`forward` do_postprocess (bool): whether to apply post-processing on the outputs. Returns: When do_postprocess=True, see docs in :meth:`forward`. Otherwise, returns a (list[Instances], list[Tensor]) that contains the raw detector outputs, and raw semantic segmentation outputs. 
""" images = self.preprocess_image(batched_inputs) features = self.backbone(images.tensor) sem_seg_results, sem_seg_losses = self.sem_seg_head(features, None) proposals, _ = self.proposal_generator(images, features, None) detector_results, _ = self.roi_heads(images, features, proposals, None) if do_postprocess: processed_results = [] for sem_seg_result, detector_result, input_per_image, image_size in zip( sem_seg_results, detector_results, batched_inputs, images.image_sizes ): height = input_per_image.get("height", image_size[0]) width = input_per_image.get("width", image_size[1]) sem_seg_r = sem_seg_postprocess(sem_seg_result, image_size, height, width) detector_r = detector_postprocess(detector_result, height, width) processed_results.append({"sem_seg": sem_seg_r, "instances": detector_r}) panoptic_r = combine_semantic_and_instance_outputs( detector_r, sem_seg_r.argmax(dim=0), self.combine_overlap_thresh, self.combine_stuff_area_thresh, self.combine_instances_score_thresh, ) processed_results[-1]["panoptic_seg"] = panoptic_r return processed_results else: return detector_results, sem_seg_results def combine_semantic_and_instance_outputs( instance_results, semantic_results, overlap_threshold, stuff_area_thresh, instances_score_thresh, ): """ Implement a simple combining logic following "combine_semantic_and_instance_predictions.py" in panopticapi to produce panoptic segmentation outputs. Args: instance_results: output of :func:`detector_postprocess`. semantic_results: an (H, W) tensor, each element is the contiguous semantic category id Returns: panoptic_seg (Tensor): of shape (height, width) where the values are ids for each segment. segments_info (list[dict]): Describe each segment in `panoptic_seg`. Each dict contains keys "id", "category_id", "isthing". 
""" panoptic_seg = torch.zeros_like(semantic_results, dtype=torch.int32) # sort instance outputs by scores sorted_inds = torch.argsort(-instance_results.scores) current_segment_id = 0 segments_info = [] instance_masks = instance_results.pred_masks.to(dtype=torch.bool, device=panoptic_seg.device) # Add instances one-by-one, check for overlaps with existing ones for inst_id in sorted_inds: score = instance_results.scores[inst_id].item() if score < instances_score_thresh: break mask = instance_masks[inst_id] # H,W mask_area = mask.sum().item() if mask_area == 0: continue intersect = (mask > 0) & (panoptic_seg > 0) intersect_area = intersect.sum().item() if intersect_area * 1.0 / mask_area > overlap_threshold: continue if intersect_area > 0: mask = mask & (panoptic_seg == 0) current_segment_id += 1 panoptic_seg[mask] = current_segment_id segments_info.append( { "id": current_segment_id, "isthing": True, "score": score, "category_id": instance_results.pred_classes[inst_id].item(), "instance_id": inst_id.item(), } ) # Add semantic results to remaining empty areas semantic_labels = torch.unique(semantic_results).cpu().tolist() for semantic_label in semantic_labels: if semantic_label == 0: # 0 is a special "thing" class continue mask = (semantic_results == semantic_label) & (panoptic_seg == 0) mask_area = mask.sum().item() if mask_area < stuff_area_thresh: continue current_segment_id += 1 panoptic_seg[mask] = current_segment_id segments_info.append( { "id": current_segment_id, "isthing": False, "category_id": semantic_label, "area": mask_area, } ) return panoptic_seg, segments_info
/roof_mask_Yv8-0.5.9-py3-none-any.whl/detectron2/modeling/meta_arch/panoptic_fpn.py
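The overlap test at the heart of `combine_semantic_and_instance_outputs` above — skip an instance when the fraction of its mask already claimed by earlier, higher-scoring segments exceeds `overlap_threshold` — can be sketched without torch. `keep_instance` and the toy boolean grids below are illustrative stand-ins, not detectron2 API:

```python
# Minimal sketch (assumed inputs) of the per-instance overlap check: masks are
# lists of 0/1 rows, panoptic holds the segment id already assigned per pixel.
def keep_instance(mask, panoptic, overlap_threshold):
    mask_area = sum(cell for row in mask for cell in row)
    if mask_area == 0:
        return False  # empty masks are skipped outright
    intersect = sum(
        1
        for r in range(len(mask))
        for c in range(len(mask[0]))
        if mask[r][c] and panoptic[r][c] > 0
    )
    return intersect / mask_area <= overlap_threshold

panoptic = [[1, 1, 0], [0, 0, 0]]  # segment 1 already placed
new_mask = [[1, 0, 0], [1, 1, 0]]  # 1 of its 3 pixels is already claimed
print(keep_instance(new_mask, panoptic, 0.5))  # 1/3 <= 0.5 -> True
```

In the real code a kept instance is additionally trimmed to the still-empty pixels (`mask & (panoptic_seg == 0)`) before being written into `panoptic_seg`.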
import logging import math from typing import List, Tuple, Union import torch from detectron2.layers import batched_nms, cat, move_device_like from detectron2.structures import Boxes, Instances logger = logging.getLogger(__name__) def _is_tracing(): # (fixed in TORCH_VERSION >= 1.9) if torch.jit.is_scripting(): # https://github.com/pytorch/pytorch/issues/47379 return False else: return torch.jit.is_tracing() def find_top_rpn_proposals( proposals: List[torch.Tensor], pred_objectness_logits: List[torch.Tensor], image_sizes: List[Tuple[int, int]], nms_thresh: float, pre_nms_topk: int, post_nms_topk: int, min_box_size: float, training: bool, ): """ For each feature map, select the `pre_nms_topk` highest scoring proposals, apply NMS, clip proposals, and remove small boxes. Return the `post_nms_topk` highest scoring proposals among all the feature maps for each image. Args: proposals (list[Tensor]): A list of L tensors. Tensor i has shape (N, Hi*Wi*A, 4). All proposal predictions on the feature maps. pred_objectness_logits (list[Tensor]): A list of L tensors. Tensor i has shape (N, Hi*Wi*A). image_sizes (list[tuple]): sizes (h, w) for each image nms_thresh (float): IoU threshold to use for NMS pre_nms_topk (int): number of top k scoring proposals to keep before applying NMS. When RPN is run on multiple feature maps (as in FPN) this number is per feature map. post_nms_topk (int): number of top k scoring proposals to keep after applying NMS. When RPN is run on multiple feature maps (as in FPN) this number is total, over all feature maps. min_box_size (float): minimum proposal box side length in pixels (absolute units wrt input images). training (bool): True if proposals are to be used in training, otherwise False. This arg exists only to support a legacy bug; look for the "NB: Legacy bug ..." comment. Returns: list[Instances]: list of N Instances. The i-th Instances stores post_nms_topk object proposals for image i, sorted by their objectness score in descending order. 
""" num_images = len(image_sizes) device = ( proposals[0].device if torch.jit.is_scripting() else ("cpu" if torch.jit.is_tracing() else proposals[0].device) ) # 1. Select top-k anchor for every level and every image topk_scores = [] # #lvl Tensor, each of shape N x topk topk_proposals = [] level_ids = [] # #lvl Tensor, each of shape (topk,) batch_idx = move_device_like(torch.arange(num_images, device=device), proposals[0]) for level_id, (proposals_i, logits_i) in enumerate(zip(proposals, pred_objectness_logits)): Hi_Wi_A = logits_i.shape[1] if isinstance(Hi_Wi_A, torch.Tensor): # it's a tensor in tracing num_proposals_i = torch.clamp(Hi_Wi_A, max=pre_nms_topk) else: num_proposals_i = min(Hi_Wi_A, pre_nms_topk) topk_scores_i, topk_idx = logits_i.topk(num_proposals_i, dim=1) # each is N x topk topk_proposals_i = proposals_i[batch_idx[:, None], topk_idx] # N x topk x 4 topk_proposals.append(topk_proposals_i) topk_scores.append(topk_scores_i) level_ids.append( move_device_like( torch.full((num_proposals_i,), level_id, dtype=torch.int64, device=device), proposals[0], ) ) # 2. Concat all levels together topk_scores = cat(topk_scores, dim=1) topk_proposals = cat(topk_proposals, dim=1) level_ids = cat(level_ids, dim=0) # 3. For each image, run a per-level NMS, and choose topk results. results: List[Instances] = [] for n, image_size in enumerate(image_sizes): boxes = Boxes(topk_proposals[n]) scores_per_img = topk_scores[n] lvl = level_ids valid_mask = torch.isfinite(boxes.tensor).all(dim=1) & torch.isfinite(scores_per_img) if not valid_mask.all(): if training: raise FloatingPointError( "Predicted boxes or scores contain Inf/NaN. Training has diverged." 
                )
            boxes = boxes[valid_mask]
            scores_per_img = scores_per_img[valid_mask]
            lvl = lvl[valid_mask]
        boxes.clip(image_size)

        # filter empty boxes
        keep = boxes.nonempty(threshold=min_box_size)
        if _is_tracing() or keep.sum().item() != len(boxes):
            boxes, scores_per_img, lvl = boxes[keep], scores_per_img[keep], lvl[keep]

        keep = batched_nms(boxes.tensor, scores_per_img, lvl, nms_thresh)
        # In Detectron1, there was different behavior during training vs. testing.
        # (https://github.com/facebookresearch/Detectron/issues/459)
        # During training, topk is over the proposals from *all* images in the training batch.
        # During testing, it is over the proposals for each image separately.
        # As a result, the training behavior becomes batch-dependent,
        # and the configuration "POST_NMS_TOPK_TRAIN" ends up relying on the batch size.
        # This bug is addressed in Detectron2 to make the behavior independent of batch size.
        keep = keep[:post_nms_topk]  # keep is already sorted

        res = Instances(image_size)
        res.proposal_boxes = boxes[keep]
        res.objectness_logits = scores_per_img[keep]
        results.append(res)
    return results


def add_ground_truth_to_proposals(
    gt: Union[List[Instances], List[Boxes]], proposals: List[Instances]
) -> List[Instances]:
    """
    Call `add_ground_truth_to_proposals_single_image` for all images.

    Args:
        gt (Union[List[Instances], List[Boxes]]): list of N elements. Element i is an Instances
            representing the ground-truth for image i.
        proposals (list[Instances]): list of N elements. Element i is an Instances
            representing the proposals for image i.

    Returns:
        list[Instances]: list of N Instances. Each is the proposals for the image,
            with fields "proposal_boxes" and "objectness_logits".
""" assert gt is not None if len(proposals) != len(gt): raise ValueError("proposals and gt should have the same length as the number of images!") if len(proposals) == 0: return proposals return [ add_ground_truth_to_proposals_single_image(gt_i, proposals_i) for gt_i, proposals_i in zip(gt, proposals) ] def add_ground_truth_to_proposals_single_image( gt: Union[Instances, Boxes], proposals: Instances ) -> Instances: """ Augment `proposals` with `gt`. Args: Same as `add_ground_truth_to_proposals`, but with gt and proposals per image. Returns: Same as `add_ground_truth_to_proposals`, but for only one image. """ if isinstance(gt, Boxes): # convert Boxes to Instances gt = Instances(proposals.image_size, gt_boxes=gt) gt_boxes = gt.gt_boxes device = proposals.objectness_logits.device # Assign all ground-truth boxes an objectness logit corresponding to # P(object) = sigmoid(logit) =~ 1. gt_logit_value = math.log((1.0 - 1e-10) / (1 - (1.0 - 1e-10))) gt_logits = gt_logit_value * torch.ones(len(gt_boxes), device=device) # Concatenating gt_boxes with proposals requires them to have the same fields gt_proposal = Instances(proposals.image_size, **gt.get_fields()) gt_proposal.proposal_boxes = gt_boxes gt_proposal.objectness_logits = gt_logits for key in proposals.get_fields().keys(): assert gt_proposal.has( key ), "The attribute '{}' in `proposals` does not exist in `gt`".format(key) # NOTE: Instances.cat only use fields from the first item. Extra fields in latter items # will be thrown away. new_proposals = Instances.cat([proposals, gt_proposal]) return new_proposals
/roof_mask_Yv8-0.5.9-py3-none-any.whl/detectron2/modeling/proposal_generator/proposal_utils.py
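The constant objectness logit that `add_ground_truth_to_proposals_single_image` above assigns to ground-truth boxes is simply the inverse sigmoid of a probability very close to 1, so GT proposals are effectively never filtered out by score. A quick stdlib-only check (no detectron2 or torch needed):

```python
import math

# gt_logit_value = log((1 - eps) / (1 - (1 - eps))) with eps = 1e-10,
# i.e. the logit whose sigmoid is ~= 1 - 1e-10.
eps = 1e-10
gt_logit_value = math.log((1.0 - eps) / (1 - (1.0 - eps)))

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

print(round(gt_logit_value, 2))            # ~23.03
print(sigmoid(gt_logit_value) > 0.999999)  # True: P(object) ~= 1 for GT boxes
```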
import itertools import logging from typing import Dict, List import torch from detectron2.config import configurable from detectron2.layers import ShapeSpec, batched_nms_rotated, cat from detectron2.structures import Instances, RotatedBoxes, pairwise_iou_rotated from detectron2.utils.memory import retry_if_cuda_oom from ..box_regression import Box2BoxTransformRotated from .build import PROPOSAL_GENERATOR_REGISTRY from .proposal_utils import _is_tracing from .rpn import RPN logger = logging.getLogger(__name__) def find_top_rrpn_proposals( proposals, pred_objectness_logits, image_sizes, nms_thresh, pre_nms_topk, post_nms_topk, min_box_size, training, ): """ For each feature map, select the `pre_nms_topk` highest scoring proposals, apply NMS, clip proposals, and remove small boxes. Return the `post_nms_topk` highest scoring proposals among all the feature maps if `training` is True, otherwise, returns the highest `post_nms_topk` scoring proposals for each feature map. Args: proposals (list[Tensor]): A list of L tensors. Tensor i has shape (N, Hi*Wi*A, 5). All proposal predictions on the feature maps. pred_objectness_logits (list[Tensor]): A list of L tensors. Tensor i has shape (N, Hi*Wi*A). image_sizes (list[tuple]): sizes (h, w) for each image nms_thresh (float): IoU threshold to use for NMS pre_nms_topk (int): number of top k scoring proposals to keep before applying NMS. When RRPN is run on multiple feature maps (as in FPN) this number is per feature map. post_nms_topk (int): number of top k scoring proposals to keep after applying NMS. When RRPN is run on multiple feature maps (as in FPN) this number is total, over all feature maps. min_box_size(float): minimum proposal box side length in pixels (absolute units wrt input images). training (bool): True if proposals are to be used in training, otherwise False. This arg exists only to support a legacy bug; look for the "NB: Legacy bug ..." comment. Returns: proposals (list[Instances]): list of N Instances. 
The i-th Instances stores post_nms_topk object proposals for image i. """ num_images = len(image_sizes) device = proposals[0].device # 1. Select top-k anchor for every level and every image topk_scores = [] # #lvl Tensor, each of shape N x topk topk_proposals = [] level_ids = [] # #lvl Tensor, each of shape (topk,) batch_idx = torch.arange(num_images, device=device) for level_id, proposals_i, logits_i in zip( itertools.count(), proposals, pred_objectness_logits ): Hi_Wi_A = logits_i.shape[1] if isinstance(Hi_Wi_A, torch.Tensor): # it's a tensor in tracing num_proposals_i = torch.clamp(Hi_Wi_A, max=pre_nms_topk) else: num_proposals_i = min(Hi_Wi_A, pre_nms_topk) topk_scores_i, topk_idx = logits_i.topk(num_proposals_i, dim=1) # each is N x topk topk_proposals_i = proposals_i[batch_idx[:, None], topk_idx] # N x topk x 5 topk_proposals.append(topk_proposals_i) topk_scores.append(topk_scores_i) level_ids.append(torch.full((num_proposals_i,), level_id, dtype=torch.int64, device=device)) # 2. Concat all levels together topk_scores = cat(topk_scores, dim=1) topk_proposals = cat(topk_proposals, dim=1) level_ids = cat(level_ids, dim=0) # 3. For each image, run a per-level NMS, and choose topk results. results = [] for n, image_size in enumerate(image_sizes): boxes = RotatedBoxes(topk_proposals[n]) scores_per_img = topk_scores[n] lvl = level_ids valid_mask = torch.isfinite(boxes.tensor).all(dim=1) & torch.isfinite(scores_per_img) if not valid_mask.all(): if training: raise FloatingPointError( "Predicted boxes or scores contain Inf/NaN. Training has diverged." 
) boxes = boxes[valid_mask] scores_per_img = scores_per_img[valid_mask] lvl = lvl[valid_mask] boxes.clip(image_size) # filter empty boxes keep = boxes.nonempty(threshold=min_box_size) if _is_tracing() or keep.sum().item() != len(boxes): boxes, scores_per_img, lvl = (boxes[keep], scores_per_img[keep], lvl[keep]) keep = batched_nms_rotated(boxes.tensor, scores_per_img, lvl, nms_thresh) # In Detectron1, there was different behavior during training vs. testing. # (https://github.com/facebookresearch/Detectron/issues/459) # During training, topk is over the proposals from *all* images in the training batch. # During testing, it is over the proposals for each image separately. # As a result, the training behavior becomes batch-dependent, # and the configuration "POST_NMS_TOPK_TRAIN" end up relying on the batch size. # This bug is addressed in Detectron2 to make the behavior independent of batch size. keep = keep[:post_nms_topk] res = Instances(image_size) res.proposal_boxes = boxes[keep] res.objectness_logits = scores_per_img[keep] results.append(res) return results @PROPOSAL_GENERATOR_REGISTRY.register() class RRPN(RPN): """ Rotated Region Proposal Network described in :paper:`RRPN`. """ @configurable def __init__(self, *args, **kwargs): super().__init__(*args, **kwargs) if self.anchor_boundary_thresh >= 0: raise NotImplementedError( "anchor_boundary_thresh is a legacy option not implemented for RRPN." ) @classmethod def from_config(cls, cfg, input_shape: Dict[str, ShapeSpec]): ret = super().from_config(cfg, input_shape) ret["box2box_transform"] = Box2BoxTransformRotated(weights=cfg.MODEL.RPN.BBOX_REG_WEIGHTS) return ret @torch.no_grad() def label_and_sample_anchors(self, anchors: List[RotatedBoxes], gt_instances: List[Instances]): """ Args: anchors (list[RotatedBoxes]): anchors for each feature map. gt_instances: the ground-truth instances for each image. Returns: list[Tensor]: List of #img tensors. 
i-th element is a vector of labels whose length is the total number of anchors across feature maps. Label values are in {-1, 0, 1}, with meanings: -1 = ignore; 0 = negative class; 1 = positive class. list[Tensor]: i-th element is a Nx5 tensor, where N is the total number of anchors across feature maps. The values are the matched gt boxes for each anchor. Values are undefined for those anchors not labeled as 1. """ anchors = RotatedBoxes.cat(anchors) gt_boxes = [x.gt_boxes for x in gt_instances] del gt_instances gt_labels = [] matched_gt_boxes = [] for gt_boxes_i in gt_boxes: """ gt_boxes_i: ground-truth boxes for i-th image """ match_quality_matrix = retry_if_cuda_oom(pairwise_iou_rotated)(gt_boxes_i, anchors) matched_idxs, gt_labels_i = retry_if_cuda_oom(self.anchor_matcher)(match_quality_matrix) # Matching is memory-expensive and may result in CPU tensors. But the result is small gt_labels_i = gt_labels_i.to(device=gt_boxes_i.device) # A vector of labels (-1, 0, 1) for each anchor gt_labels_i = self._subsample_labels(gt_labels_i) if len(gt_boxes_i) == 0: # These values won't be used anyway since the anchor is labeled as background matched_gt_boxes_i = torch.zeros_like(anchors.tensor) else: # TODO wasted indexing computation for ignored boxes matched_gt_boxes_i = gt_boxes_i[matched_idxs].tensor gt_labels.append(gt_labels_i) # N,AHW matched_gt_boxes.append(matched_gt_boxes_i) return gt_labels, matched_gt_boxes @torch.no_grad() def predict_proposals(self, anchors, pred_objectness_logits, pred_anchor_deltas, image_sizes): pred_proposals = self._decode_proposals(anchors, pred_anchor_deltas) return find_top_rrpn_proposals( pred_proposals, pred_objectness_logits, image_sizes, self.nms_thresh, self.pre_nms_topk[self.training], self.post_nms_topk[self.training], self.min_box_size, self.training, )
/roof_mask_Yv8-0.5.9-py3-none-any.whl/detectron2/modeling/proposal_generator/rrpn.py
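Step 1 of `find_top_rrpn_proposals` above — keep only the `pre_nms_topk` highest-scoring proposals per feature level before the much more expensive rotated NMS runs — can be sketched with plain lists. `topk_per_level` is a torch-free, illustrative stand-in; the real code uses `Tensor.topk` batched over all images at once:

```python
# Per-level pre-NMS top-k selection: one list of objectness logits per feature
# level goes in, the indices of the k best proposals per level come out.
def topk_per_level(level_scores, pre_nms_topk):
    out = []
    for scores in level_scores:  # one entry per feature level
        k = min(len(scores), pre_nms_topk)  # levels may have < pre_nms_topk anchors
        idx = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)[:k]
        out.append(idx)
    return out

levels = [[0.1, 0.9, 0.5], [0.7, 0.2]]
print(topk_per_level(levels, 2))  # [[1, 2], [0, 1]]
```

Note the budget is per level before NMS (`pre_nms_topk`) but total across levels after NMS (`post_nms_topk`), as the docstring above spells out.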
import inspect import logging import numpy as np from typing import Dict, List, Optional, Tuple import torch from torch import nn from detectron2.config import configurable from detectron2.layers import ShapeSpec, nonzero_tuple from detectron2.structures import Boxes, ImageList, Instances, pairwise_iou from detectron2.utils.events import get_event_storage from detectron2.utils.registry import Registry from ..backbone.resnet import BottleneckBlock, ResNet from ..matcher import Matcher from ..poolers import ROIPooler from ..proposal_generator.proposal_utils import add_ground_truth_to_proposals from ..sampling import subsample_labels from .box_head import build_box_head from .fast_rcnn import FastRCNNOutputLayers from .keypoint_head import build_keypoint_head from .mask_head import build_mask_head ROI_HEADS_REGISTRY = Registry("ROI_HEADS") ROI_HEADS_REGISTRY.__doc__ = """ Registry for ROI heads in a generalized R-CNN model. ROIHeads take feature maps and region proposals, and perform per-region computation. The registered object will be called with `obj(cfg, input_shape)`. The call is expected to return an :class:`ROIHeads`. """ logger = logging.getLogger(__name__) def build_roi_heads(cfg, input_shape): """ Build ROIHeads defined by `cfg.MODEL.ROI_HEADS.NAME`. """ name = cfg.MODEL.ROI_HEADS.NAME return ROI_HEADS_REGISTRY.get(name)(cfg, input_shape) def select_foreground_proposals( proposals: List[Instances], bg_label: int ) -> Tuple[List[Instances], List[torch.Tensor]]: """ Given a list of N Instances (for N images), each containing a `gt_classes` field, return a list of Instances that contain only instances with `gt_classes != -1 && gt_classes != bg_label`. Args: proposals (list[Instances]): A list of N Instances, where N is the number of images in the batch. bg_label: label index of background class. Returns: list[Instances]: N Instances, each contains only the selected foreground instances. 
list[Tensor]: N boolean vector, correspond to the selection mask of each Instances object. True for selected instances. """ assert isinstance(proposals, (list, tuple)) assert isinstance(proposals[0], Instances) assert proposals[0].has("gt_classes") fg_proposals = [] fg_selection_masks = [] for proposals_per_image in proposals: gt_classes = proposals_per_image.gt_classes fg_selection_mask = (gt_classes != -1) & (gt_classes != bg_label) fg_idxs = fg_selection_mask.nonzero().squeeze(1) fg_proposals.append(proposals_per_image[fg_idxs]) fg_selection_masks.append(fg_selection_mask) return fg_proposals, fg_selection_masks def select_proposals_with_visible_keypoints(proposals: List[Instances]) -> List[Instances]: """ Args: proposals (list[Instances]): a list of N Instances, where N is the number of images. Returns: proposals: only contains proposals with at least one visible keypoint. Note that this is still slightly different from Detectron. In Detectron, proposals for training keypoint head are re-sampled from all the proposals with IOU>threshold & >=1 visible keypoint. Here, the proposals are first sampled from all proposals with IOU>threshold, then proposals with no visible keypoint are filtered out. This strategy seems to make no difference on Detectron and is easier to implement. 
""" ret = [] all_num_fg = [] for proposals_per_image in proposals: # If empty/unannotated image (hard negatives), skip filtering for train if len(proposals_per_image) == 0: ret.append(proposals_per_image) continue gt_keypoints = proposals_per_image.gt_keypoints.tensor # #fg x K x 3 vis_mask = gt_keypoints[:, :, 2] >= 1 xs, ys = gt_keypoints[:, :, 0], gt_keypoints[:, :, 1] proposal_boxes = proposals_per_image.proposal_boxes.tensor.unsqueeze(dim=1) # #fg x 1 x 4 kp_in_box = ( (xs >= proposal_boxes[:, :, 0]) & (xs <= proposal_boxes[:, :, 2]) & (ys >= proposal_boxes[:, :, 1]) & (ys <= proposal_boxes[:, :, 3]) ) selection = (kp_in_box & vis_mask).any(dim=1) selection_idxs = nonzero_tuple(selection)[0] all_num_fg.append(selection_idxs.numel()) ret.append(proposals_per_image[selection_idxs]) storage = get_event_storage() storage.put_scalar("keypoint_head/num_fg_samples", np.mean(all_num_fg)) return ret class ROIHeads(torch.nn.Module): """ ROIHeads perform all per-region computation in an R-CNN. It typically contains logic to 1. (in training only) match proposals with ground truth and sample them 2. crop the regions and extract per-region features using proposals 3. make per-region predictions with different heads It can have many variants, implemented as subclasses of this class. This base class contains the logic to match/sample proposals. But it is not necessary to inherit this class if the sampling logic is not needed. """ @configurable def __init__( self, *, num_classes, batch_size_per_image, positive_fraction, proposal_matcher, proposal_append_gt=True, ): """ NOTE: this interface is experimental. Args: num_classes (int): number of foreground classes (i.e. background is not included) batch_size_per_image (int): number of proposals to sample for training positive_fraction (float): fraction of positive (foreground) proposals to sample for training. 
proposal_matcher (Matcher): matcher that matches proposals and ground truth proposal_append_gt (bool): whether to include ground truth as proposals as well """ super().__init__() self.batch_size_per_image = batch_size_per_image self.positive_fraction = positive_fraction self.num_classes = num_classes self.proposal_matcher = proposal_matcher self.proposal_append_gt = proposal_append_gt @classmethod def from_config(cls, cfg): return { "batch_size_per_image": cfg.MODEL.ROI_HEADS.BATCH_SIZE_PER_IMAGE, "positive_fraction": cfg.MODEL.ROI_HEADS.POSITIVE_FRACTION, "num_classes": cfg.MODEL.ROI_HEADS.NUM_CLASSES, "proposal_append_gt": cfg.MODEL.ROI_HEADS.PROPOSAL_APPEND_GT, # Matcher to assign box proposals to gt boxes "proposal_matcher": Matcher( cfg.MODEL.ROI_HEADS.IOU_THRESHOLDS, cfg.MODEL.ROI_HEADS.IOU_LABELS, allow_low_quality_matches=False, ), } def _sample_proposals( self, matched_idxs: torch.Tensor, matched_labels: torch.Tensor, gt_classes: torch.Tensor ) -> Tuple[torch.Tensor, torch.Tensor]: """ Based on the matching between N proposals and M groundtruth, sample the proposals and set their classification labels. Args: matched_idxs (Tensor): a vector of length N, each is the best-matched gt index in [0, M) for each proposal. matched_labels (Tensor): a vector of length N, the matcher's label (one of cfg.MODEL.ROI_HEADS.IOU_LABELS) for each proposal. gt_classes (Tensor): a vector of length M. Returns: Tensor: a vector of indices of sampled proposals. Each is in [0, N). Tensor: a vector of the same length, the classification label for each sampled proposal. Each sample is labeled as either a category in [0, num_classes) or the background (num_classes). 
""" has_gt = gt_classes.numel() > 0 # Get the corresponding GT for each proposal if has_gt: gt_classes = gt_classes[matched_idxs] # Label unmatched proposals (0 label from matcher) as background (label=num_classes) gt_classes[matched_labels == 0] = self.num_classes # Label ignore proposals (-1 label) gt_classes[matched_labels == -1] = -1 else: gt_classes = torch.zeros_like(matched_idxs) + self.num_classes sampled_fg_idxs, sampled_bg_idxs = subsample_labels( gt_classes, self.batch_size_per_image, self.positive_fraction, self.num_classes ) sampled_idxs = torch.cat([sampled_fg_idxs, sampled_bg_idxs], dim=0) return sampled_idxs, gt_classes[sampled_idxs] @torch.no_grad() def label_and_sample_proposals( self, proposals: List[Instances], targets: List[Instances] ) -> List[Instances]: """ Prepare some proposals to be used to train the ROI heads. It performs box matching between `proposals` and `targets`, and assigns training labels to the proposals. It returns ``self.batch_size_per_image`` random samples from proposals and groundtruth boxes, with a fraction of positives that is no larger than ``self.positive_fraction``. Args: See :meth:`ROIHeads.forward` Returns: list[Instances]: length `N` list of `Instances`s containing the proposals sampled for training. Each `Instances` has the following fields: - proposal_boxes: the proposal boxes - gt_boxes: the ground-truth box that the proposal is assigned to (this is only meaningful if the proposal has a label > 0; if label = 0 then the ground-truth box is random) Other fields such as "gt_classes", "gt_masks", that's included in `targets`. """ # Augment proposals with ground-truth boxes. # In the case of learned proposals (e.g., RPN), when training starts # the proposals will be low quality due to random initialization. # It's possible that none of these initial # proposals have high enough overlap with the gt objects to be used # as positive examples for the second stage components (box head, # cls head, mask head). 
        # Adding the gt boxes to the set of proposals
        # ensures that the second stage components will have some positive
        # examples from the start of training. For RPN, this augmentation improves
        # convergence and empirically improves box AP on COCO by about 0.5
        # points (under one tested configuration).
        if self.proposal_append_gt:
            proposals = add_ground_truth_to_proposals(targets, proposals)

        proposals_with_gt = []

        num_fg_samples = []
        num_bg_samples = []
        for proposals_per_image, targets_per_image in zip(proposals, targets):
            has_gt = len(targets_per_image) > 0
            match_quality_matrix = pairwise_iou(
                targets_per_image.gt_boxes, proposals_per_image.proposal_boxes
            )
            matched_idxs, matched_labels = self.proposal_matcher(match_quality_matrix)
            sampled_idxs, gt_classes = self._sample_proposals(
                matched_idxs, matched_labels, targets_per_image.gt_classes
            )

            # Set target attributes of the sampled proposals:
            proposals_per_image = proposals_per_image[sampled_idxs]
            proposals_per_image.gt_classes = gt_classes

            if has_gt:
                sampled_targets = matched_idxs[sampled_idxs]
                # We index all the attributes of targets that start with "gt_"
                # and have not been added to proposals yet (="gt_classes").
                # NOTE: here the indexing wastes some compute, because heads
                # like masks, keypoints, etc, will filter the proposals again,
                # (by foreground/background, or number of keypoints in the image, etc)
                # so we essentially index the data twice.
                for (trg_name, trg_value) in targets_per_image.get_fields().items():
                    if trg_name.startswith("gt_") and not proposals_per_image.has(trg_name):
                        proposals_per_image.set(trg_name, trg_value[sampled_targets])
            # If no GT is given in the image, we don't know what a dummy gt value can be.
            # Therefore the returned proposals won't have any gt_* fields, except for a
            # gt_classes full of background label.
            num_bg_samples.append((gt_classes == self.num_classes).sum().item())
            num_fg_samples.append(gt_classes.numel() - num_bg_samples[-1])
            proposals_with_gt.append(proposals_per_image)

        # Log the number of fg/bg samples that are selected for training ROI heads
        storage = get_event_storage()
        storage.put_scalar("roi_head/num_fg_samples", np.mean(num_fg_samples))
        storage.put_scalar("roi_head/num_bg_samples", np.mean(num_bg_samples))

        return proposals_with_gt

    def forward(
        self,
        images: ImageList,
        features: Dict[str, torch.Tensor],
        proposals: List[Instances],
        targets: Optional[List[Instances]] = None,
    ) -> Tuple[List[Instances], Dict[str, torch.Tensor]]:
        """
        Args:
            images (ImageList):
            features (dict[str,Tensor]): input data as a mapping from feature
                map name to tensor. Axis 0 represents the number of images `N` in
                the input data; axes 1-3 are channels, height, and width, which may
                vary between feature maps (e.g., if a feature pyramid is used).
            proposals (list[Instances]): length `N` list of `Instances`. The i-th
                `Instances` contains object proposals for the i-th input image,
                with fields "proposal_boxes" and "objectness_logits".
            targets (list[Instances], optional): length `N` list of `Instances`. The i-th
                `Instances` contains the ground-truth per-instance annotations
                for the i-th input image. Specify `targets` during training only.
                It may have the following fields:

                - gt_boxes: the bounding box of each instance.
                - gt_classes: the label for each instance with a category ranging in [0, #class].
                - gt_masks: PolygonMasks or BitMasks, the ground-truth masks of each instance.
                - gt_keypoints: NxKx3, the ground-truth keypoints for each instance.

        Returns:
            list[Instances]: length `N` list of `Instances` containing the
            detected instances. Returned during inference only; may be [] during training.

            dict[str->Tensor]: mapping from a named loss to a tensor storing the
            loss. Used during training only.
""" raise NotImplementedError() @ROI_HEADS_REGISTRY.register() class Res5ROIHeads(ROIHeads): """ The ROIHeads in a typical "C4" R-CNN model, where the box and mask head share the cropping and the per-region feature computation by a Res5 block. See :paper:`ResNet` Appendix A. """ @configurable def __init__( self, *, in_features: List[str], pooler: ROIPooler, res5: nn.Module, box_predictor: nn.Module, mask_head: Optional[nn.Module] = None, **kwargs, ): """ NOTE: this interface is experimental. Args: in_features (list[str]): list of backbone feature map names to use for feature extraction pooler (ROIPooler): pooler to extra region features from backbone res5 (nn.Sequential): a CNN to compute per-region features, to be used by ``box_predictor`` and ``mask_head``. Typically this is a "res5" block from a ResNet. box_predictor (nn.Module): make box predictions from the feature. Should have the same interface as :class:`FastRCNNOutputLayers`. mask_head (nn.Module): transform features to make mask predictions """ super().__init__(**kwargs) self.in_features = in_features self.pooler = pooler if isinstance(res5, (list, tuple)): res5 = nn.Sequential(*res5) self.res5 = res5 self.box_predictor = box_predictor self.mask_on = mask_head is not None if self.mask_on: self.mask_head = mask_head @classmethod def from_config(cls, cfg, input_shape): # fmt: off ret = super().from_config(cfg) in_features = ret["in_features"] = cfg.MODEL.ROI_HEADS.IN_FEATURES pooler_resolution = cfg.MODEL.ROI_BOX_HEAD.POOLER_RESOLUTION pooler_type = cfg.MODEL.ROI_BOX_HEAD.POOLER_TYPE pooler_scales = (1.0 / input_shape[in_features[0]].stride, ) sampling_ratio = cfg.MODEL.ROI_BOX_HEAD.POOLER_SAMPLING_RATIO mask_on = cfg.MODEL.MASK_ON # fmt: on assert not cfg.MODEL.KEYPOINT_ON assert len(in_features) == 1 ret["pooler"] = ROIPooler( output_size=pooler_resolution, scales=pooler_scales, sampling_ratio=sampling_ratio, pooler_type=pooler_type, ) # Compatbility with old moco code. Might be useful. 
# See notes in StandardROIHeads.from_config if not inspect.ismethod(cls._build_res5_block): logger.warning( "The behavior of _build_res5_block may change. " "Please do not depend on private methods." ) cls._build_res5_block = classmethod(cls._build_res5_block) ret["res5"], out_channels = cls._build_res5_block(cfg) ret["box_predictor"] = FastRCNNOutputLayers( cfg, ShapeSpec(channels=out_channels, height=1, width=1) ) if mask_on: ret["mask_head"] = build_mask_head( cfg, ShapeSpec(channels=out_channels, width=pooler_resolution, height=pooler_resolution), ) return ret @classmethod def _build_res5_block(cls, cfg): # fmt: off stage_channel_factor = 2 ** 3 # res5 is 8x res2 num_groups = cfg.MODEL.RESNETS.NUM_GROUPS width_per_group = cfg.MODEL.RESNETS.WIDTH_PER_GROUP bottleneck_channels = num_groups * width_per_group * stage_channel_factor out_channels = cfg.MODEL.RESNETS.RES2_OUT_CHANNELS * stage_channel_factor stride_in_1x1 = cfg.MODEL.RESNETS.STRIDE_IN_1X1 norm = cfg.MODEL.RESNETS.NORM assert not cfg.MODEL.RESNETS.DEFORM_ON_PER_STAGE[-1], \ "Deformable conv is not yet supported in res5 head." # fmt: on blocks = ResNet.make_stage( BottleneckBlock, 3, stride_per_block=[2, 1, 1], in_channels=out_channels // 2, bottleneck_channels=bottleneck_channels, out_channels=out_channels, num_groups=num_groups, norm=norm, stride_in_1x1=stride_in_1x1, ) return nn.Sequential(*blocks), out_channels def _shared_roi_transform(self, features: List[torch.Tensor], boxes: List[Boxes]): x = self.pooler(features, boxes) return self.res5(x) def forward( self, images: ImageList, features: Dict[str, torch.Tensor], proposals: List[Instances], targets: Optional[List[Instances]] = None, ): """ See :meth:`ROIHeads.forward`. 
""" del images if self.training: assert targets proposals = self.label_and_sample_proposals(proposals, targets) del targets proposal_boxes = [x.proposal_boxes for x in proposals] box_features = self._shared_roi_transform( [features[f] for f in self.in_features], proposal_boxes ) predictions = self.box_predictor(box_features.mean(dim=[2, 3])) if self.training: del features losses = self.box_predictor.losses(predictions, proposals) if self.mask_on: proposals, fg_selection_masks = select_foreground_proposals( proposals, self.num_classes ) # Since the ROI feature transform is shared between boxes and masks, # we don't need to recompute features. The mask loss is only defined # on foreground proposals, so we need to select out the foreground # features. mask_features = box_features[torch.cat(fg_selection_masks, dim=0)] del box_features losses.update(self.mask_head(mask_features, proposals)) return [], losses else: pred_instances, _ = self.box_predictor.inference(predictions, proposals) pred_instances = self.forward_with_given_boxes(features, pred_instances) return pred_instances, {} def forward_with_given_boxes( self, features: Dict[str, torch.Tensor], instances: List[Instances] ) -> List[Instances]: """ Use the given boxes in `instances` to produce other (non-box) per-ROI outputs. Args: features: same as in `forward()` instances (list[Instances]): instances to predict other outputs. Expect the keys "pred_boxes" and "pred_classes" to exist. Returns: instances (Instances): the same `Instances` object, with extra fields such as `pred_masks` or `pred_keypoints`. 
""" assert not self.training assert instances[0].has("pred_boxes") and instances[0].has("pred_classes") if self.mask_on: feature_list = [features[f] for f in self.in_features] x = self._shared_roi_transform(feature_list, [x.pred_boxes for x in instances]) return self.mask_head(x, instances) else: return instances @ROI_HEADS_REGISTRY.register() class StandardROIHeads(ROIHeads): """ It's "standard" in a sense that there is no ROI transform sharing or feature sharing between tasks. Each head independently processes the input features by each head's own pooler and head. This class is used by most models, such as FPN and C5. To implement more models, you can subclass it and implement a different :meth:`forward()` or a head. """ @configurable def __init__( self, *, box_in_features: List[str], box_pooler: ROIPooler, box_head: nn.Module, box_predictor: nn.Module, mask_in_features: Optional[List[str]] = None, mask_pooler: Optional[ROIPooler] = None, mask_head: Optional[nn.Module] = None, keypoint_in_features: Optional[List[str]] = None, keypoint_pooler: Optional[ROIPooler] = None, keypoint_head: Optional[nn.Module] = None, train_on_pred_boxes: bool = False, **kwargs, ): """ NOTE: this interface is experimental. Args: box_in_features (list[str]): list of feature names to use for the box head. box_pooler (ROIPooler): pooler to extra region features for box head box_head (nn.Module): transform features to make box predictions box_predictor (nn.Module): make box predictions from the feature. Should have the same interface as :class:`FastRCNNOutputLayers`. mask_in_features (list[str]): list of feature names to use for the mask pooler or mask head. None if not using mask head. mask_pooler (ROIPooler): pooler to extract region features from image features. The mask head will then take region features to make predictions. 
If None, the mask head will directly take the dict of image features defined by `mask_in_features` mask_head (nn.Module): transform features to make mask predictions keypoint_in_features, keypoint_pooler, keypoint_head: similar to ``mask_*``. train_on_pred_boxes (bool): whether to use proposal boxes or predicted boxes from the box head to train other heads. """ super().__init__(**kwargs) # keep self.in_features for backward compatibility self.in_features = self.box_in_features = box_in_features self.box_pooler = box_pooler self.box_head = box_head self.box_predictor = box_predictor self.mask_on = mask_in_features is not None if self.mask_on: self.mask_in_features = mask_in_features self.mask_pooler = mask_pooler self.mask_head = mask_head self.keypoint_on = keypoint_in_features is not None if self.keypoint_on: self.keypoint_in_features = keypoint_in_features self.keypoint_pooler = keypoint_pooler self.keypoint_head = keypoint_head self.train_on_pred_boxes = train_on_pred_boxes @classmethod def from_config(cls, cfg, input_shape): ret = super().from_config(cfg) ret["train_on_pred_boxes"] = cfg.MODEL.ROI_BOX_HEAD.TRAIN_ON_PRED_BOXES # Subclasses that have not been updated to use from_config style construction # may have overridden _init_*_head methods. In this case, those overridden methods # will not be classmethods and we need to avoid trying to call them here. # We test for this with ismethod which only returns True for bound methods of cls. # Such subclasses will need to handle calling their overridden _init_*_head methods. 
if inspect.ismethod(cls._init_box_head): ret.update(cls._init_box_head(cfg, input_shape)) if inspect.ismethod(cls._init_mask_head): ret.update(cls._init_mask_head(cfg, input_shape)) if inspect.ismethod(cls._init_keypoint_head): ret.update(cls._init_keypoint_head(cfg, input_shape)) return ret @classmethod def _init_box_head(cls, cfg, input_shape): # fmt: off in_features = cfg.MODEL.ROI_HEADS.IN_FEATURES pooler_resolution = cfg.MODEL.ROI_BOX_HEAD.POOLER_RESOLUTION pooler_scales = tuple(1.0 / input_shape[k].stride for k in in_features) sampling_ratio = cfg.MODEL.ROI_BOX_HEAD.POOLER_SAMPLING_RATIO pooler_type = cfg.MODEL.ROI_BOX_HEAD.POOLER_TYPE # fmt: on # If StandardROIHeads is applied on multiple feature maps (as in FPN), # then we share the same predictors and therefore the channel counts must be the same in_channels = [input_shape[f].channels for f in in_features] # Check all channel counts are equal assert len(set(in_channels)) == 1, in_channels in_channels = in_channels[0] box_pooler = ROIPooler( output_size=pooler_resolution, scales=pooler_scales, sampling_ratio=sampling_ratio, pooler_type=pooler_type, ) # Here we split "box head" and "box predictor", which is mainly due to historical reasons. # They are used together so the "box predictor" layers should be part of the "box head". # New subclasses of ROIHeads do not need "box predictor"s. 
box_head = build_box_head( cfg, ShapeSpec(channels=in_channels, height=pooler_resolution, width=pooler_resolution) ) box_predictor = FastRCNNOutputLayers(cfg, box_head.output_shape) return { "box_in_features": in_features, "box_pooler": box_pooler, "box_head": box_head, "box_predictor": box_predictor, } @classmethod def _init_mask_head(cls, cfg, input_shape): if not cfg.MODEL.MASK_ON: return {} # fmt: off in_features = cfg.MODEL.ROI_HEADS.IN_FEATURES pooler_resolution = cfg.MODEL.ROI_MASK_HEAD.POOLER_RESOLUTION pooler_scales = tuple(1.0 / input_shape[k].stride for k in in_features) sampling_ratio = cfg.MODEL.ROI_MASK_HEAD.POOLER_SAMPLING_RATIO pooler_type = cfg.MODEL.ROI_MASK_HEAD.POOLER_TYPE # fmt: on in_channels = [input_shape[f].channels for f in in_features][0] ret = {"mask_in_features": in_features} ret["mask_pooler"] = ( ROIPooler( output_size=pooler_resolution, scales=pooler_scales, sampling_ratio=sampling_ratio, pooler_type=pooler_type, ) if pooler_type else None ) if pooler_type: shape = ShapeSpec( channels=in_channels, width=pooler_resolution, height=pooler_resolution ) else: shape = {f: input_shape[f] for f in in_features} ret["mask_head"] = build_mask_head(cfg, shape) return ret @classmethod def _init_keypoint_head(cls, cfg, input_shape): if not cfg.MODEL.KEYPOINT_ON: return {} # fmt: off in_features = cfg.MODEL.ROI_HEADS.IN_FEATURES pooler_resolution = cfg.MODEL.ROI_KEYPOINT_HEAD.POOLER_RESOLUTION pooler_scales = tuple(1.0 / input_shape[k].stride for k in in_features) # noqa sampling_ratio = cfg.MODEL.ROI_KEYPOINT_HEAD.POOLER_SAMPLING_RATIO pooler_type = cfg.MODEL.ROI_KEYPOINT_HEAD.POOLER_TYPE # fmt: on in_channels = [input_shape[f].channels for f in in_features][0] ret = {"keypoint_in_features": in_features} ret["keypoint_pooler"] = ( ROIPooler( output_size=pooler_resolution, scales=pooler_scales, sampling_ratio=sampling_ratio, pooler_type=pooler_type, ) if pooler_type else None ) if pooler_type: shape = ShapeSpec( channels=in_channels, 
width=pooler_resolution, height=pooler_resolution ) else: shape = {f: input_shape[f] for f in in_features} ret["keypoint_head"] = build_keypoint_head(cfg, shape) return ret def forward( self, images: ImageList, features: Dict[str, torch.Tensor], proposals: List[Instances], targets: Optional[List[Instances]] = None, ) -> Tuple[List[Instances], Dict[str, torch.Tensor]]: """ See :class:`ROIHeads.forward`. """ del images if self.training: assert targets, "'targets' argument is required during training" proposals = self.label_and_sample_proposals(proposals, targets) del targets if self.training: losses = self._forward_box(features, proposals) # Usually the original proposals used by the box head are used by the mask, keypoint # heads. But when `self.train_on_pred_boxes is True`, proposals will contain boxes # predicted by the box head. losses.update(self._forward_mask(features, proposals)) losses.update(self._forward_keypoint(features, proposals)) return proposals, losses else: pred_instances = self._forward_box(features, proposals) # During inference cascaded prediction is used: the mask and keypoints heads are only # applied to the top scoring box detections. pred_instances = self.forward_with_given_boxes(features, pred_instances) return pred_instances, {} def forward_with_given_boxes( self, features: Dict[str, torch.Tensor], instances: List[Instances] ) -> List[Instances]: """ Use the given boxes in `instances` to produce other (non-box) per-ROI outputs. This is useful for downstream tasks where a box is known, but need to obtain other attributes (outputs of other heads). Test-time augmentation also uses this. Args: features: same as in `forward()` instances (list[Instances]): instances to predict other outputs. Expect the keys "pred_boxes" and "pred_classes" to exist. Returns: list[Instances]: the same `Instances` objects, with extra fields such as `pred_masks` or `pred_keypoints`. 
""" assert not self.training assert instances[0].has("pred_boxes") and instances[0].has("pred_classes") instances = self._forward_mask(features, instances) instances = self._forward_keypoint(features, instances) return instances def _forward_box(self, features: Dict[str, torch.Tensor], proposals: List[Instances]): """ Forward logic of the box prediction branch. If `self.train_on_pred_boxes is True`, the function puts predicted boxes in the `proposal_boxes` field of `proposals` argument. Args: features (dict[str, Tensor]): mapping from feature map names to tensor. Same as in :meth:`ROIHeads.forward`. proposals (list[Instances]): the per-image object proposals with their matching ground truth. Each has fields "proposal_boxes", and "objectness_logits", "gt_classes", "gt_boxes". Returns: In training, a dict of losses. In inference, a list of `Instances`, the predicted instances. """ features = [features[f] for f in self.box_in_features] box_features = self.box_pooler(features, [x.proposal_boxes for x in proposals]) box_features = self.box_head(box_features) predictions = self.box_predictor(box_features) del box_features if self.training: losses = self.box_predictor.losses(predictions, proposals) # proposals is modified in-place below, so losses must be computed first. if self.train_on_pred_boxes: with torch.no_grad(): pred_boxes = self.box_predictor.predict_boxes_for_gt_classes( predictions, proposals ) for proposals_per_image, pred_boxes_per_image in zip(proposals, pred_boxes): proposals_per_image.proposal_boxes = Boxes(pred_boxes_per_image) return losses else: pred_instances, _ = self.box_predictor.inference(predictions, proposals) return pred_instances def _forward_mask(self, features: Dict[str, torch.Tensor], instances: List[Instances]): """ Forward logic of the mask prediction branch. Args: features (dict[str, Tensor]): mapping from feature map names to tensor. Same as in :meth:`ROIHeads.forward`. 
instances (list[Instances]): the per-image instances to train/predict masks. In training, they can be the proposals. In inference, they can be the boxes predicted by R-CNN box head. Returns: In training, a dict of losses. In inference, update `instances` with new fields "pred_masks" and return it. """ if not self.mask_on: return {} if self.training else instances if self.training: # head is only trained on positive proposals. instances, _ = select_foreground_proposals(instances, self.num_classes) if self.mask_pooler is not None: features = [features[f] for f in self.mask_in_features] boxes = [x.proposal_boxes if self.training else x.pred_boxes for x in instances] features = self.mask_pooler(features, boxes) else: features = {f: features[f] for f in self.mask_in_features} return self.mask_head(features, instances) def _forward_keypoint(self, features: Dict[str, torch.Tensor], instances: List[Instances]): """ Forward logic of the keypoint prediction branch. Args: features (dict[str, Tensor]): mapping from feature map names to tensor. Same as in :meth:`ROIHeads.forward`. instances (list[Instances]): the per-image instances to train/predict keypoints. In training, they can be the proposals. In inference, they can be the boxes predicted by R-CNN box head. Returns: In training, a dict of losses. In inference, update `instances` with new fields "pred_keypoints" and return it. """ if not self.keypoint_on: return {} if self.training else instances if self.training: # head is only trained on positive proposals with >=1 visible keypoints. 
instances, _ = select_foreground_proposals(instances, self.num_classes) instances = select_proposals_with_visible_keypoints(instances) if self.keypoint_pooler is not None: features = [features[f] for f in self.keypoint_in_features] boxes = [x.proposal_boxes if self.training else x.pred_boxes for x in instances] features = self.keypoint_pooler(features, boxes) else: features = {f: features[f] for f in self.keypoint_in_features} return self.keypoint_head(features, instances)
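The fg/bg balance that `ROIHeads._sample_proposals` delegates to `subsample_labels` can be summarized with a small piece of arithmetic. The sketch below is a dependency-free illustration of that budget, assuming the default config values (`BATCH_SIZE_PER_IMAGE=512`, `POSITIVE_FRACTION=0.25`); `sampling_budget` is an illustrative helper, not a detectron2 API.

```python
# Illustrative sketch (not a detectron2 API): the per-image sampling budget
# used when labeling proposals for ROI head training. Mirrors the arithmetic
# of detectron2's subsample_labels: cap foreground samples at
# positive_fraction * batch_size, then fill the remaining slots with
# background proposals (or fewer, if not enough background exists).

def sampling_budget(num_fg_available, num_bg_available,
                    batch_size_per_image=512, positive_fraction=0.25):
    # At most positive_fraction of the batch may be foreground.
    num_fg = min(num_fg_available, int(batch_size_per_image * positive_fraction))
    # Background fills whatever budget remains.
    num_bg = min(num_bg_available, batch_size_per_image - num_fg)
    return num_fg, num_bg

# Plenty of both: 128 fg (25% of 512) and 384 bg are sampled.
print(sampling_budget(300, 5000))  # (128, 384)
# Few positives: all 10 fg are kept and bg fills the rest of the batch.
print(sampling_budget(10, 5000))   # (10, 502)
```

Note that the positive fraction is an upper bound, not a guarantee: early in training, when RPN proposals rarely overlap ground truth, batches are dominated by background (which is why `proposal_append_gt` adds the gt boxes as proposals).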
# (end of detectron2/modeling/roi_heads/roi_heads.py; the mask head module follows)
from typing import List import fvcore.nn.weight_init as weight_init import torch from torch import nn from torch.nn import functional as F from detectron2.config import configurable from detectron2.layers import Conv2d, ConvTranspose2d, ShapeSpec, cat, get_norm from detectron2.layers.wrappers import move_device_like from detectron2.structures import Instances from detectron2.utils.events import get_event_storage from detectron2.utils.registry import Registry __all__ = [ "BaseMaskRCNNHead", "MaskRCNNConvUpsampleHead", "build_mask_head", "ROI_MASK_HEAD_REGISTRY", ] ROI_MASK_HEAD_REGISTRY = Registry("ROI_MASK_HEAD") ROI_MASK_HEAD_REGISTRY.__doc__ = """ Registry for mask heads, which predicts instance masks given per-region features. The registered object will be called with `obj(cfg, input_shape)`. """ @torch.jit.unused def mask_rcnn_loss(pred_mask_logits: torch.Tensor, instances: List[Instances], vis_period: int = 0): """ Compute the mask prediction loss defined in the Mask R-CNN paper. Args: pred_mask_logits (Tensor): A tensor of shape (B, C, Hmask, Wmask) or (B, 1, Hmask, Wmask) for class-specific or class-agnostic, where B is the total number of predicted masks in all images, C is the number of foreground classes, and Hmask, Wmask are the height and width of the mask predictions. The values are logits. instances (list[Instances]): A list of N Instances, where N is the number of images in the batch. These instances are in 1:1 correspondence with the pred_mask_logits. The ground-truth labels (class, box, mask, ...) associated with each instance are stored in fields. vis_period (int): the period (in steps) to dump visualization. Returns: mask_loss (Tensor): A scalar tensor containing the loss. """ cls_agnostic_mask = pred_mask_logits.size(1) == 1 total_num_masks = pred_mask_logits.size(0) mask_side_len = pred_mask_logits.size(2) assert pred_mask_logits.size(2) == pred_mask_logits.size(3), "Mask prediction must be square!" 
gt_classes = [] gt_masks = [] for instances_per_image in instances: if len(instances_per_image) == 0: continue if not cls_agnostic_mask: gt_classes_per_image = instances_per_image.gt_classes.to(dtype=torch.int64) gt_classes.append(gt_classes_per_image) gt_masks_per_image = instances_per_image.gt_masks.crop_and_resize( instances_per_image.proposal_boxes.tensor, mask_side_len ).to(device=pred_mask_logits.device) # A tensor of shape (N, M, M), N=#instances in the image; M=mask_side_len gt_masks.append(gt_masks_per_image) if len(gt_masks) == 0: return pred_mask_logits.sum() * 0 gt_masks = cat(gt_masks, dim=0) if cls_agnostic_mask: pred_mask_logits = pred_mask_logits[:, 0] else: indices = torch.arange(total_num_masks) gt_classes = cat(gt_classes, dim=0) pred_mask_logits = pred_mask_logits[indices, gt_classes] if gt_masks.dtype == torch.bool: gt_masks_bool = gt_masks else: # Here we allow gt_masks to be float as well (depend on the implementation of rasterize()) gt_masks_bool = gt_masks > 0.5 gt_masks = gt_masks.to(dtype=torch.float32) # Log the training accuracy (using gt classes and 0.5 threshold) mask_incorrect = (pred_mask_logits > 0.0) != gt_masks_bool mask_accuracy = 1 - (mask_incorrect.sum().item() / max(mask_incorrect.numel(), 1.0)) num_positive = gt_masks_bool.sum().item() false_positive = (mask_incorrect & ~gt_masks_bool).sum().item() / max( gt_masks_bool.numel() - num_positive, 1.0 ) false_negative = (mask_incorrect & gt_masks_bool).sum().item() / max(num_positive, 1.0) storage = get_event_storage() storage.put_scalar("mask_rcnn/accuracy", mask_accuracy) storage.put_scalar("mask_rcnn/false_positive", false_positive) storage.put_scalar("mask_rcnn/false_negative", false_negative) if vis_period > 0 and storage.iter % vis_period == 0: pred_masks = pred_mask_logits.sigmoid() vis_masks = torch.cat([pred_masks, gt_masks], axis=2) name = "Left: mask prediction; Right: mask GT" for idx, vis_mask in enumerate(vis_masks): vis_mask = torch.stack([vis_mask] * 3, axis=0) 
            storage.put_image(name + f" ({idx})", vis_mask)

    mask_loss = F.binary_cross_entropy_with_logits(pred_mask_logits, gt_masks, reduction="mean")
    return mask_loss


def mask_rcnn_inference(pred_mask_logits: torch.Tensor, pred_instances: List[Instances]):
    """
    Convert pred_mask_logits to estimated foreground probability masks while also
    extracting only the masks for the predicted classes in pred_instances. For each
    predicted box, the mask of the same class is attached to the instance by adding a
    new "pred_masks" field to pred_instances.

    Args:
        pred_mask_logits (Tensor): A tensor of shape (B, C, Hmask, Wmask) or (B, 1, Hmask, Wmask)
            for class-specific or class-agnostic, where B is the total number of predicted masks
            in all images, C is the number of foreground classes, and Hmask, Wmask are the height
            and width of the mask predictions. The values are logits.
        pred_instances (list[Instances]): A list of N Instances, where N is the number of images
            in the batch. Each Instances must have field "pred_classes".

    Returns:
        None. pred_instances will contain an extra "pred_masks" field storing a mask of size
            (Hmask, Wmask) for the predicted class. Note that the masks are returned as soft
            (non-quantized) masks at the resolution predicted by the network; post-processing
            steps, such as resizing the predicted masks to the original image resolution and/or
            binarizing them, are left to the caller.
""" cls_agnostic_mask = pred_mask_logits.size(1) == 1 if cls_agnostic_mask: mask_probs_pred = pred_mask_logits.sigmoid() else: # Select masks corresponding to the predicted classes num_masks = pred_mask_logits.shape[0] class_pred = cat([i.pred_classes for i in pred_instances]) device = ( class_pred.device if torch.jit.is_scripting() else ("cpu" if torch.jit.is_tracing() else class_pred.device) ) indices = move_device_like(torch.arange(num_masks, device=device), class_pred) mask_probs_pred = pred_mask_logits[indices, class_pred][:, None].sigmoid() # mask_probs_pred.shape: (B, 1, Hmask, Wmask) num_boxes_per_image = [len(i) for i in pred_instances] mask_probs_pred = mask_probs_pred.split(num_boxes_per_image, dim=0) for prob, instances in zip(mask_probs_pred, pred_instances): instances.pred_masks = prob # (1, Hmask, Wmask) class BaseMaskRCNNHead(nn.Module): """ Implement the basic Mask R-CNN losses and inference logic described in :paper:`Mask R-CNN` """ @configurable def __init__(self, *, loss_weight: float = 1.0, vis_period: int = 0): """ NOTE: this interface is experimental. Args: loss_weight (float): multiplier of the loss vis_period (int): visualization period """ super().__init__() self.vis_period = vis_period self.loss_weight = loss_weight @classmethod def from_config(cls, cfg, input_shape): return {"vis_period": cfg.VIS_PERIOD} def forward(self, x, instances: List[Instances]): """ Args: x: input region feature(s) provided by :class:`ROIHeads`. instances (list[Instances]): contains the boxes & labels corresponding to the input features. Exact format is up to its caller to decide. Typically, this is the foreground instances in training, with "proposal_boxes" field and other gt annotations. In inference, it contains boxes that are already predicted. Returns: A dict of losses in training. The predicted "instances" in inference. 
""" x = self.layers(x) if self.training: return {"loss_mask": mask_rcnn_loss(x, instances, self.vis_period) * self.loss_weight} else: mask_rcnn_inference(x, instances) return instances def layers(self, x): """ Neural network layers that makes predictions from input features. """ raise NotImplementedError # To get torchscript support, we make the head a subclass of `nn.Sequential`. # Therefore, to add new layers in this head class, please make sure they are # added in the order they will be used in forward(). @ROI_MASK_HEAD_REGISTRY.register() class MaskRCNNConvUpsampleHead(BaseMaskRCNNHead, nn.Sequential): """ A mask head with several conv layers, plus an upsample layer (with `ConvTranspose2d`). Predictions are made with a final 1x1 conv layer. """ @configurable def __init__(self, input_shape: ShapeSpec, *, num_classes, conv_dims, conv_norm="", **kwargs): """ NOTE: this interface is experimental. Args: input_shape (ShapeSpec): shape of the input feature num_classes (int): the number of foreground classes (i.e. background is not included). 1 if using class agnostic prediction. conv_dims (list[int]): a list of N>0 integers representing the output dimensions of N-1 conv layers and the last upsample layer. conv_norm (str or callable): normalization for the conv layers. See :func:`detectron2.layers.get_norm` for supported types. """ super().__init__(**kwargs) assert len(conv_dims) >= 1, "conv_dims have to be non-empty!" 
self.conv_norm_relus = [] cur_channels = input_shape.channels for k, conv_dim in enumerate(conv_dims[:-1]): conv = Conv2d( cur_channels, conv_dim, kernel_size=3, stride=1, padding=1, bias=not conv_norm, norm=get_norm(conv_norm, conv_dim), activation=nn.ReLU(), ) self.add_module("mask_fcn{}".format(k + 1), conv) self.conv_norm_relus.append(conv) cur_channels = conv_dim self.deconv = ConvTranspose2d( cur_channels, conv_dims[-1], kernel_size=2, stride=2, padding=0 ) self.add_module("deconv_relu", nn.ReLU()) cur_channels = conv_dims[-1] self.predictor = Conv2d(cur_channels, num_classes, kernel_size=1, stride=1, padding=0) for layer in self.conv_norm_relus + [self.deconv]: weight_init.c2_msra_fill(layer) # use normal distribution initialization for mask prediction layer nn.init.normal_(self.predictor.weight, std=0.001) if self.predictor.bias is not None: nn.init.constant_(self.predictor.bias, 0) @classmethod def from_config(cls, cfg, input_shape): ret = super().from_config(cfg, input_shape) conv_dim = cfg.MODEL.ROI_MASK_HEAD.CONV_DIM num_conv = cfg.MODEL.ROI_MASK_HEAD.NUM_CONV ret.update( conv_dims=[conv_dim] * (num_conv + 1), # +1 for ConvTranspose conv_norm=cfg.MODEL.ROI_MASK_HEAD.NORM, input_shape=input_shape, ) if cfg.MODEL.ROI_MASK_HEAD.CLS_AGNOSTIC_MASK: ret["num_classes"] = 1 else: ret["num_classes"] = cfg.MODEL.ROI_HEADS.NUM_CLASSES return ret def layers(self, x): for layer in self: x = layer(x) return x def build_mask_head(cfg, input_shape): """ Build a mask head defined by `cfg.MODEL.ROI_MASK_HEAD.NAME`. """ name = cfg.MODEL.ROI_MASK_HEAD.NAME return ROI_MASK_HEAD_REGISTRY.get(name)(cfg, input_shape)
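# For reference, the per-class gather that mask_rcnn_loss performs with
# `pred_mask_logits[indices, gt_classes]` can be sketched with NumPy stand-ins
# for the torch tensors; all shapes and values below are made up for illustration.

```python
import numpy as np

# Toy stand-in for pred_mask_logits: B=3 predicted masks, C=4 classes, 2x2 masks.
rng = np.random.default_rng(0)
pred_mask_logits = rng.normal(size=(3, 4, 2, 2))
gt_classes = np.array([2, 0, 3])  # hypothetical per-mask ground-truth class

# Same advanced-indexing gather as `pred_mask_logits[indices, gt_classes]`:
# for each mask i, keep only the logit plane of its ground-truth class.
indices = np.arange(pred_mask_logits.shape[0])
selected = pred_mask_logits[indices, gt_classes]  # shape (B, 2, 2)

assert selected.shape == (3, 2, 2)
assert np.array_equal(selected[1], pred_mask_logits[1, 0])
```

The same pairing of a row-index vector with a class-index vector is what makes the loss class-specific without a Python loop over instances.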
# File (pypi): /roof_mask_Yv8-0.5.9-py3-none-any.whl/detectron2/modeling/roi_heads/mask_head.py
from typing import List import torch from torch import nn from torch.autograd.function import Function from detectron2.config import configurable from detectron2.layers import ShapeSpec from detectron2.structures import Boxes, Instances, pairwise_iou from detectron2.utils.events import get_event_storage from ..box_regression import Box2BoxTransform from ..matcher import Matcher from ..poolers import ROIPooler from .box_head import build_box_head from .fast_rcnn import FastRCNNOutputLayers, fast_rcnn_inference from .roi_heads import ROI_HEADS_REGISTRY, StandardROIHeads class _ScaleGradient(Function): @staticmethod def forward(ctx, input, scale): ctx.scale = scale return input @staticmethod def backward(ctx, grad_output): return grad_output * ctx.scale, None @ROI_HEADS_REGISTRY.register() class CascadeROIHeads(StandardROIHeads): """ The ROI heads that implement :paper:`Cascade R-CNN`. """ @configurable def __init__( self, *, box_in_features: List[str], box_pooler: ROIPooler, box_heads: List[nn.Module], box_predictors: List[nn.Module], proposal_matchers: List[Matcher], **kwargs, ): """ NOTE: this interface is experimental. Args: box_pooler (ROIPooler): pooler that extracts region features from given boxes box_heads (list[nn.Module]): box head for each cascade stage box_predictors (list[nn.Module]): box predictor for each cascade stage proposal_matchers (list[Matcher]): matcher with different IoU thresholds to match boxes with ground truth for each stage. The first matcher matches RPN proposals with ground truth, the other matchers use boxes predicted by the previous stage as proposals and match them with ground truth. """ assert "proposal_matcher" not in kwargs, ( "CascadeROIHeads takes 'proposal_matchers=' for each stage instead " "of one 'proposal_matcher='." 
) # The first matcher matches RPN proposals with ground truth, done in the base class kwargs["proposal_matcher"] = proposal_matchers[0] num_stages = self.num_cascade_stages = len(box_heads) box_heads = nn.ModuleList(box_heads) box_predictors = nn.ModuleList(box_predictors) assert len(box_predictors) == num_stages, f"{len(box_predictors)} != {num_stages}!" assert len(proposal_matchers) == num_stages, f"{len(proposal_matchers)} != {num_stages}!" super().__init__( box_in_features=box_in_features, box_pooler=box_pooler, box_head=box_heads, box_predictor=box_predictors, **kwargs, ) self.proposal_matchers = proposal_matchers @classmethod def from_config(cls, cfg, input_shape): ret = super().from_config(cfg, input_shape) ret.pop("proposal_matcher") return ret @classmethod def _init_box_head(cls, cfg, input_shape): # fmt: off in_features = cfg.MODEL.ROI_HEADS.IN_FEATURES pooler_resolution = cfg.MODEL.ROI_BOX_HEAD.POOLER_RESOLUTION pooler_scales = tuple(1.0 / input_shape[k].stride for k in in_features) sampling_ratio = cfg.MODEL.ROI_BOX_HEAD.POOLER_SAMPLING_RATIO pooler_type = cfg.MODEL.ROI_BOX_HEAD.POOLER_TYPE cascade_bbox_reg_weights = cfg.MODEL.ROI_BOX_CASCADE_HEAD.BBOX_REG_WEIGHTS cascade_ious = cfg.MODEL.ROI_BOX_CASCADE_HEAD.IOUS assert len(cascade_bbox_reg_weights) == len(cascade_ious) assert cfg.MODEL.ROI_BOX_HEAD.CLS_AGNOSTIC_BBOX_REG, \ "CascadeROIHeads only support class-agnostic regression now!" 
        assert cascade_ious[0] == cfg.MODEL.ROI_HEADS.IOU_THRESHOLDS[0]
        # fmt: on

        in_channels = [input_shape[f].channels for f in in_features]
        # Check all channel counts are equal
        assert len(set(in_channels)) == 1, in_channels
        in_channels = in_channels[0]

        box_pooler = ROIPooler(
            output_size=pooler_resolution,
            scales=pooler_scales,
            sampling_ratio=sampling_ratio,
            pooler_type=pooler_type,
        )
        pooled_shape = ShapeSpec(
            channels=in_channels, width=pooler_resolution, height=pooler_resolution
        )

        box_heads, box_predictors, proposal_matchers = [], [], []
        for match_iou, bbox_reg_weights in zip(cascade_ious, cascade_bbox_reg_weights):
            box_head = build_box_head(cfg, pooled_shape)
            box_heads.append(box_head)
            box_predictors.append(
                FastRCNNOutputLayers(
                    cfg,
                    box_head.output_shape,
                    box2box_transform=Box2BoxTransform(weights=bbox_reg_weights),
                )
            )
            proposal_matchers.append(Matcher([match_iou], [0, 1], allow_low_quality_matches=False))
        return {
            "box_in_features": in_features,
            "box_pooler": box_pooler,
            "box_heads": box_heads,
            "box_predictors": box_predictors,
            "proposal_matchers": proposal_matchers,
        }

    def forward(self, images, features, proposals, targets=None):
        del images
        if self.training:
            proposals = self.label_and_sample_proposals(proposals, targets)

        if self.training:
            # Need targets to box head
            losses = self._forward_box(features, proposals, targets)
            losses.update(self._forward_mask(features, proposals))
            losses.update(self._forward_keypoint(features, proposals))
            return proposals, losses
        else:
            pred_instances = self._forward_box(features, proposals)
            pred_instances = self.forward_with_given_boxes(features, pred_instances)
            return pred_instances, {}

    def _forward_box(self, features, proposals, targets=None):
        """
        Args:
            features, targets: the same as in :meth:`ROIHeads.forward`.
            proposals (list[Instances]): the per-image object proposals with
                their matching ground truth.
                Each has fields "proposal_boxes", and "objectness_logits",
                "gt_classes", "gt_boxes".
""" features = [features[f] for f in self.box_in_features] head_outputs = [] # (predictor, predictions, proposals) prev_pred_boxes = None image_sizes = [x.image_size for x in proposals] for k in range(self.num_cascade_stages): if k > 0: # The output boxes of the previous stage are used to create the input # proposals of the next stage. proposals = self._create_proposals_from_boxes(prev_pred_boxes, image_sizes) if self.training: proposals = self._match_and_label_boxes(proposals, k, targets) predictions = self._run_stage(features, proposals, k) prev_pred_boxes = self.box_predictor[k].predict_boxes(predictions, proposals) head_outputs.append((self.box_predictor[k], predictions, proposals)) if self.training: losses = {} storage = get_event_storage() for stage, (predictor, predictions, proposals) in enumerate(head_outputs): with storage.name_scope("stage{}".format(stage)): stage_losses = predictor.losses(predictions, proposals) losses.update({k + "_stage{}".format(stage): v for k, v in stage_losses.items()}) return losses else: # Each is a list[Tensor] of length #image. Each tensor is Ri x (K+1) scores_per_stage = [h[0].predict_probs(h[1], h[2]) for h in head_outputs] # Average the scores across heads scores = [ sum(list(scores_per_image)) * (1.0 / self.num_cascade_stages) for scores_per_image in zip(*scores_per_stage) ] # Use the boxes of the last head predictor, predictions, proposals = head_outputs[-1] boxes = predictor.predict_boxes(predictions, proposals) pred_instances, _ = fast_rcnn_inference( boxes, scores, image_sizes, predictor.test_score_thresh, predictor.test_nms_thresh, predictor.test_topk_per_image, ) return pred_instances @torch.no_grad() def _match_and_label_boxes(self, proposals, stage, targets): """ Match proposals with groundtruth using the matcher at the given stage. Label the proposals as foreground or background based on the match. Args: proposals (list[Instances]): One Instances for each image, with the field "proposal_boxes". 
stage (int): the current stage targets (list[Instances]): the ground truth instances Returns: list[Instances]: the same proposals, but with fields "gt_classes" and "gt_boxes" """ num_fg_samples, num_bg_samples = [], [] for proposals_per_image, targets_per_image in zip(proposals, targets): match_quality_matrix = pairwise_iou( targets_per_image.gt_boxes, proposals_per_image.proposal_boxes ) # proposal_labels are 0 or 1 matched_idxs, proposal_labels = self.proposal_matchers[stage](match_quality_matrix) if len(targets_per_image) > 0: gt_classes = targets_per_image.gt_classes[matched_idxs] # Label unmatched proposals (0 label from matcher) as background (label=num_classes) gt_classes[proposal_labels == 0] = self.num_classes gt_boxes = targets_per_image.gt_boxes[matched_idxs] else: gt_classes = torch.zeros_like(matched_idxs) + self.num_classes gt_boxes = Boxes( targets_per_image.gt_boxes.tensor.new_zeros((len(proposals_per_image), 4)) ) proposals_per_image.gt_classes = gt_classes proposals_per_image.gt_boxes = gt_boxes num_fg_samples.append((proposal_labels == 1).sum().item()) num_bg_samples.append(proposal_labels.numel() - num_fg_samples[-1]) # Log the number of fg/bg samples in each stage storage = get_event_storage() storage.put_scalar( "stage{}/roi_head/num_fg_samples".format(stage), sum(num_fg_samples) / len(num_fg_samples), ) storage.put_scalar( "stage{}/roi_head/num_bg_samples".format(stage), sum(num_bg_samples) / len(num_bg_samples), ) return proposals def _run_stage(self, features, proposals, stage): """ Args: features (list[Tensor]): #lvl input features to ROIHeads proposals (list[Instances]): #image Instances, with the field "proposal_boxes" stage (int): the current stage Returns: Same output as `FastRCNNOutputLayers.forward()`. """ box_features = self.box_pooler(features, [x.proposal_boxes for x in proposals]) # The original implementation averages the losses among heads, # but scale up the parameter gradients of the heads. 
# This is equivalent to adding the losses among heads, # but scale down the gradients on features. if self.training: box_features = _ScaleGradient.apply(box_features, 1.0 / self.num_cascade_stages) box_features = self.box_head[stage](box_features) return self.box_predictor[stage](box_features) def _create_proposals_from_boxes(self, boxes, image_sizes): """ Args: boxes (list[Tensor]): per-image predicted boxes, each of shape Ri x 4 image_sizes (list[tuple]): list of image shapes in (h, w) Returns: list[Instances]: per-image proposals with the given boxes. """ # Just like RPN, the proposals should not have gradients boxes = [Boxes(b.detach()) for b in boxes] proposals = [] for boxes_per_image, image_size in zip(boxes, image_sizes): boxes_per_image.clip(image_size) if self.training: # do not filter empty boxes at inference time, # because the scores from each stage need to be aligned and added later boxes_per_image = boxes_per_image[boxes_per_image.nonempty()] prop = Instances(image_size) prop.proposal_boxes = boxes_per_image proposals.append(prop) return proposals
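# At inference time, `_forward_box` averages the class scores produced by the
# cascade stages before running NMS. A minimal pure-Python sketch of that
# averaging, with invented toy scores (2 boxes, K+1=3 classes, 3 stages):

```python
# Toy per-stage scores: 3 cascade stages, one image with 2 boxes, 3 classes each.
stage_scores = [
    [[0.7, 0.2, 0.1], [0.1, 0.8, 0.1]],  # stage 0
    [[0.6, 0.3, 0.1], [0.2, 0.7, 0.1]],  # stage 1
    [[0.8, 0.1, 0.1], [0.3, 0.6, 0.1]],  # stage 2
]
num_stages = len(stage_scores)

# Mirrors: sum(list(scores_per_image)) * (1.0 / self.num_cascade_stages)
avg = [
    [sum(s[b][c] for s in stage_scores) / num_stages for c in range(3)]
    for b in range(2)
]

assert abs(avg[0][0] - 0.7) < 1e-9  # (0.7 + 0.6 + 0.8) / 3
assert abs(avg[1][1] - 0.7) < 1e-9  # (0.8 + 0.7 + 0.6) / 3
```

This is also why `_create_proposals_from_boxes` only drops empty boxes during training: at inference, every stage must score the same set of boxes so the per-box averages line up.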
# File (pypi): /roof_mask_Yv8-0.5.9-py3-none-any.whl/detectron2/modeling/roi_heads/cascade_rcnn.py
import logging import numpy as np import torch from detectron2.config import configurable from detectron2.layers import ShapeSpec, batched_nms_rotated from detectron2.structures import Instances, RotatedBoxes, pairwise_iou_rotated from detectron2.utils.events import get_event_storage from ..box_regression import Box2BoxTransformRotated from ..poolers import ROIPooler from ..proposal_generator.proposal_utils import add_ground_truth_to_proposals from .box_head import build_box_head from .fast_rcnn import FastRCNNOutputLayers from .roi_heads import ROI_HEADS_REGISTRY, StandardROIHeads logger = logging.getLogger(__name__) """ Shape shorthand in this module: N: number of images in the minibatch R: number of ROIs, combined over all images, in the minibatch Ri: number of ROIs in image i K: number of foreground classes. E.g.,there are 80 foreground classes in COCO. Naming convention: deltas: refers to the 5-d (dx, dy, dw, dh, da) deltas that parameterize the box2box transform (see :class:`box_regression.Box2BoxTransformRotated`). pred_class_logits: predicted class scores in [-inf, +inf]; use softmax(pred_class_logits) to estimate P(class). gt_classes: ground-truth classification labels in [0, K], where [0, K) represent foreground object classes and K represents the background class. pred_proposal_deltas: predicted rotated box2box transform deltas for transforming proposals to detection box predictions. gt_proposal_deltas: ground-truth rotated box2box transform deltas """ def fast_rcnn_inference_rotated( boxes, scores, image_shapes, score_thresh, nms_thresh, topk_per_image ): """ Call `fast_rcnn_inference_single_image_rotated` for all images. Args: boxes (list[Tensor]): A list of Tensors of predicted class-specific or class-agnostic boxes for each image. Element i has shape (Ri, K * 5) if doing class-specific regression, or (Ri, 5) if doing class-agnostic regression, where Ri is the number of predicted objects for image i. 
This is compatible with the output of :meth:`FastRCNNOutputLayers.predict_boxes`. scores (list[Tensor]): A list of Tensors of predicted class scores for each image. Element i has shape (Ri, K + 1), where Ri is the number of predicted objects for image i. Compatible with the output of :meth:`FastRCNNOutputLayers.predict_probs`. image_shapes (list[tuple]): A list of (width, height) tuples for each image in the batch. score_thresh (float): Only return detections with a confidence score exceeding this threshold. nms_thresh (float): The threshold to use for box non-maximum suppression. Value in [0, 1]. topk_per_image (int): The number of top scoring detections to return. Set < 0 to return all detections. Returns: instances: (list[Instances]): A list of N instances, one for each image in the batch, that stores the topk most confidence detections. kept_indices: (list[Tensor]): A list of 1D tensor of length of N, each element indicates the corresponding boxes/scores index in [0, Ri) from the input, for image i. """ result_per_image = [ fast_rcnn_inference_single_image_rotated( boxes_per_image, scores_per_image, image_shape, score_thresh, nms_thresh, topk_per_image ) for scores_per_image, boxes_per_image, image_shape in zip(scores, boxes, image_shapes) ] return [x[0] for x in result_per_image], [x[1] for x in result_per_image] def fast_rcnn_inference_single_image_rotated( boxes, scores, image_shape, score_thresh, nms_thresh, topk_per_image ): """ Single-image inference. Return rotated bounding-box detection results by thresholding on scores and applying rotated non-maximum suppression (Rotated NMS). Args: Same as `fast_rcnn_inference_rotated`, but with rotated boxes, scores, and image shapes per image. Returns: Same as `fast_rcnn_inference_rotated`, but for only one image. 
""" valid_mask = torch.isfinite(boxes).all(dim=1) & torch.isfinite(scores).all(dim=1) if not valid_mask.all(): boxes = boxes[valid_mask] scores = scores[valid_mask] B = 5 # box dimension scores = scores[:, :-1] num_bbox_reg_classes = boxes.shape[1] // B # Convert to Boxes to use the `clip` function ... boxes = RotatedBoxes(boxes.reshape(-1, B)) boxes.clip(image_shape) boxes = boxes.tensor.view(-1, num_bbox_reg_classes, B) # R x C x B # Filter results based on detection scores filter_mask = scores > score_thresh # R x K # R' x 2. First column contains indices of the R predictions; # Second column contains indices of classes. filter_inds = filter_mask.nonzero() if num_bbox_reg_classes == 1: boxes = boxes[filter_inds[:, 0], 0] else: boxes = boxes[filter_mask] scores = scores[filter_mask] # Apply per-class Rotated NMS keep = batched_nms_rotated(boxes, scores, filter_inds[:, 1], nms_thresh) if topk_per_image >= 0: keep = keep[:topk_per_image] boxes, scores, filter_inds = boxes[keep], scores[keep], filter_inds[keep] result = Instances(image_shape) result.pred_boxes = RotatedBoxes(boxes) result.scores = scores result.pred_classes = filter_inds[:, 1] return result, filter_inds[:, 0] class RotatedFastRCNNOutputLayers(FastRCNNOutputLayers): """ Two linear layers for predicting Rotated Fast R-CNN outputs. """ @classmethod def from_config(cls, cfg, input_shape): args = super().from_config(cfg, input_shape) args["box2box_transform"] = Box2BoxTransformRotated( weights=cfg.MODEL.ROI_BOX_HEAD.BBOX_REG_WEIGHTS ) return args def inference(self, predictions, proposals): """ Returns: list[Instances]: same as `fast_rcnn_inference_rotated`. list[Tensor]: same as `fast_rcnn_inference_rotated`. 
""" boxes = self.predict_boxes(predictions, proposals) scores = self.predict_probs(predictions, proposals) image_shapes = [x.image_size for x in proposals] return fast_rcnn_inference_rotated( boxes, scores, image_shapes, self.test_score_thresh, self.test_nms_thresh, self.test_topk_per_image, ) @ROI_HEADS_REGISTRY.register() class RROIHeads(StandardROIHeads): """ This class is used by Rotated Fast R-CNN to detect rotated boxes. For now, it only supports box predictions but not mask or keypoints. """ @configurable def __init__(self, **kwargs): """ NOTE: this interface is experimental. """ super().__init__(**kwargs) assert ( not self.mask_on and not self.keypoint_on ), "Mask/Keypoints not supported in Rotated ROIHeads." assert not self.train_on_pred_boxes, "train_on_pred_boxes not implemented for RROIHeads!" @classmethod def _init_box_head(cls, cfg, input_shape): # fmt: off in_features = cfg.MODEL.ROI_HEADS.IN_FEATURES pooler_resolution = cfg.MODEL.ROI_BOX_HEAD.POOLER_RESOLUTION pooler_scales = tuple(1.0 / input_shape[k].stride for k in in_features) sampling_ratio = cfg.MODEL.ROI_BOX_HEAD.POOLER_SAMPLING_RATIO pooler_type = cfg.MODEL.ROI_BOX_HEAD.POOLER_TYPE # fmt: on assert pooler_type in ["ROIAlignRotated"], pooler_type # assume all channel counts are equal in_channels = [input_shape[f].channels for f in in_features][0] box_pooler = ROIPooler( output_size=pooler_resolution, scales=pooler_scales, sampling_ratio=sampling_ratio, pooler_type=pooler_type, ) box_head = build_box_head( cfg, ShapeSpec(channels=in_channels, height=pooler_resolution, width=pooler_resolution) ) # This line is the only difference v.s. StandardROIHeads box_predictor = RotatedFastRCNNOutputLayers(cfg, box_head.output_shape) return { "box_in_features": in_features, "box_pooler": box_pooler, "box_head": box_head, "box_predictor": box_predictor, } @torch.no_grad() def label_and_sample_proposals(self, proposals, targets): """ Prepare some proposals to be used to train the RROI heads. 
        It performs box matching between `proposals` and `targets`, and assigns
        training labels to the proposals.
        It returns `self.batch_size_per_image` random samples from proposals and groundtruth boxes,
        with a fraction of positives that is no larger than `self.positive_sample_fraction`.

        Args:
            See :meth:`StandardROIHeads.forward`

        Returns:
            list[Instances]: length `N` list of `Instances`s containing the proposals
                sampled for training. Each `Instances` has the following fields:

                - proposal_boxes: the rotated proposal boxes
                - gt_boxes: the ground-truth rotated boxes that the proposal is assigned to
                  (this is only meaningful if the proposal has a label > 0; if label = 0
                  then the ground-truth box is random)
                - gt_classes: the ground-truth classification label for each proposal
        """
        if self.proposal_append_gt:
            proposals = add_ground_truth_to_proposals(targets, proposals)

        proposals_with_gt = []

        num_fg_samples = []
        num_bg_samples = []
        for proposals_per_image, targets_per_image in zip(proposals, targets):
            has_gt = len(targets_per_image) > 0
            match_quality_matrix = pairwise_iou_rotated(
                targets_per_image.gt_boxes, proposals_per_image.proposal_boxes
            )
            matched_idxs, matched_labels = self.proposal_matcher(match_quality_matrix)
            sampled_idxs, gt_classes = self._sample_proposals(
                matched_idxs, matched_labels, targets_per_image.gt_classes
            )
            proposals_per_image = proposals_per_image[sampled_idxs]
            proposals_per_image.gt_classes = gt_classes

            if has_gt:
                sampled_targets = matched_idxs[sampled_idxs]
                proposals_per_image.gt_boxes = targets_per_image.gt_boxes[sampled_targets]

            num_bg_samples.append((gt_classes == self.num_classes).sum().item())
            num_fg_samples.append(gt_classes.numel() - num_bg_samples[-1])
            proposals_with_gt.append(proposals_per_image)

        # Log the number of fg/bg samples that are selected for training ROI heads
        storage = get_event_storage()
        storage.put_scalar("roi_head/num_fg_samples", np.mean(num_fg_samples))
        storage.put_scalar("roi_head/num_bg_samples", np.mean(num_bg_samples))

        return proposals_with_gt
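# The thresholding step in `fast_rcnn_inference_single_image_rotated` keeps the
# (box, class) pairs whose score clears `score_thresh`. A NumPy sketch of that
# filtering, with toy scores; np.argwhere plays the role of torch's nonzero():

```python
import numpy as np

# Toy foreground class scores for R=3 rotated boxes and K=2 classes
# (the background column has already been dropped, as in `scores[:, :-1]`).
scores = np.array([[0.9, 0.05], [0.2, 0.6], [0.1, 0.1]])
score_thresh = 0.5

# Mirrors `filter_mask = scores > score_thresh` (R x K boolean mask) and
# `filter_inds = filter_mask.nonzero()` (rows of [box index, class index]).
filter_mask = scores > score_thresh
filter_inds = np.argwhere(filter_mask)

assert filter_inds.tolist() == [[0, 0], [1, 1]]
assert scores[filter_mask].tolist() == [0.9, 0.6]
```

Column 1 of `filter_inds` then becomes `pred_classes`, and column 0 indexes back into the input boxes, which is why the function also returns `filter_inds[:, 0]`.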
# File (pypi): /roof_mask_Yv8-0.5.9-py3-none-any.whl/detectron2/modeling/roi_heads/rotated_fast_rcnn.py
import numpy as np from typing import List import fvcore.nn.weight_init as weight_init import torch from torch import nn from detectron2.config import configurable from detectron2.layers import Conv2d, ShapeSpec, get_norm from detectron2.utils.registry import Registry __all__ = ["FastRCNNConvFCHead", "build_box_head", "ROI_BOX_HEAD_REGISTRY"] ROI_BOX_HEAD_REGISTRY = Registry("ROI_BOX_HEAD") ROI_BOX_HEAD_REGISTRY.__doc__ = """ Registry for box heads, which make box predictions from per-region features. The registered object will be called with `obj(cfg, input_shape)`. """ # To get torchscript support, we make the head a subclass of `nn.Sequential`. # Therefore, to add new layers in this head class, please make sure they are # added in the order they will be used in forward(). @ROI_BOX_HEAD_REGISTRY.register() class FastRCNNConvFCHead(nn.Sequential): """ A head with several 3x3 conv layers (each followed by norm & relu) and then several fc layers (each followed by relu). """ @configurable def __init__( self, input_shape: ShapeSpec, *, conv_dims: List[int], fc_dims: List[int], conv_norm="" ): """ NOTE: this interface is experimental. Args: input_shape (ShapeSpec): shape of the input feature. conv_dims (list[int]): the output dimensions of the conv layers fc_dims (list[int]): the output dimensions of the fc layers conv_norm (str or callable): normalization for the conv layers. See :func:`detectron2.layers.get_norm` for supported types. 
""" super().__init__() assert len(conv_dims) + len(fc_dims) > 0 self._output_size = (input_shape.channels, input_shape.height, input_shape.width) self.conv_norm_relus = [] for k, conv_dim in enumerate(conv_dims): conv = Conv2d( self._output_size[0], conv_dim, kernel_size=3, padding=1, bias=not conv_norm, norm=get_norm(conv_norm, conv_dim), activation=nn.ReLU(), ) self.add_module("conv{}".format(k + 1), conv) self.conv_norm_relus.append(conv) self._output_size = (conv_dim, self._output_size[1], self._output_size[2]) self.fcs = [] for k, fc_dim in enumerate(fc_dims): if k == 0: self.add_module("flatten", nn.Flatten()) fc = nn.Linear(int(np.prod(self._output_size)), fc_dim) self.add_module("fc{}".format(k + 1), fc) self.add_module("fc_relu{}".format(k + 1), nn.ReLU()) self.fcs.append(fc) self._output_size = fc_dim for layer in self.conv_norm_relus: weight_init.c2_msra_fill(layer) for layer in self.fcs: weight_init.c2_xavier_fill(layer) @classmethod def from_config(cls, cfg, input_shape): num_conv = cfg.MODEL.ROI_BOX_HEAD.NUM_CONV conv_dim = cfg.MODEL.ROI_BOX_HEAD.CONV_DIM num_fc = cfg.MODEL.ROI_BOX_HEAD.NUM_FC fc_dim = cfg.MODEL.ROI_BOX_HEAD.FC_DIM return { "input_shape": input_shape, "conv_dims": [conv_dim] * num_conv, "fc_dims": [fc_dim] * num_fc, "conv_norm": cfg.MODEL.ROI_BOX_HEAD.NORM, } def forward(self, x): for layer in self: x = layer(x) return x @property @torch.jit.unused def output_shape(self): """ Returns: ShapeSpec: the output feature shape """ o = self._output_size if isinstance(o, int): return ShapeSpec(channels=o) else: return ShapeSpec(channels=o[0], height=o[1], width=o[2]) def build_box_head(cfg, input_shape): """ Build a box head defined by `cfg.MODEL.ROI_BOX_HEAD.NAME`. """ name = cfg.MODEL.ROI_BOX_HEAD.NAME return ROI_BOX_HEAD_REGISTRY.get(name)(cfg, input_shape)
# File (pypi): /roof_mask_Yv8-0.5.9-py3-none-any.whl/detectron2/modeling/roi_heads/box_head.py
from typing import List import torch from torch import nn from torch.nn import functional as F from detectron2.config import configurable from detectron2.layers import Conv2d, ConvTranspose2d, cat, interpolate from detectron2.structures import Instances, heatmaps_to_keypoints from detectron2.utils.events import get_event_storage from detectron2.utils.registry import Registry _TOTAL_SKIPPED = 0 __all__ = [ "ROI_KEYPOINT_HEAD_REGISTRY", "build_keypoint_head", "BaseKeypointRCNNHead", "KRCNNConvDeconvUpsampleHead", ] ROI_KEYPOINT_HEAD_REGISTRY = Registry("ROI_KEYPOINT_HEAD") ROI_KEYPOINT_HEAD_REGISTRY.__doc__ = """ Registry for keypoint heads, which make keypoint predictions from per-region features. The registered object will be called with `obj(cfg, input_shape)`. """ def build_keypoint_head(cfg, input_shape): """ Build a keypoint head from `cfg.MODEL.ROI_KEYPOINT_HEAD.NAME`. """ name = cfg.MODEL.ROI_KEYPOINT_HEAD.NAME return ROI_KEYPOINT_HEAD_REGISTRY.get(name)(cfg, input_shape) def keypoint_rcnn_loss(pred_keypoint_logits, instances, normalizer): """ Arguments: pred_keypoint_logits (Tensor): A tensor of shape (N, K, S, S) where N is the total number of instances in the batch, K is the number of keypoints, and S is the side length of the keypoint heatmap. The values are spatial logits. instances (list[Instances]): A list of M Instances, where M is the batch size. These instances are predictions from the model that are in 1:1 correspondence with pred_keypoint_logits. Each Instances should contain a `gt_keypoints` field containing a `structures.Keypoint` instance. normalizer (float): Normalize the loss by this amount. If not specified, we normalize by the number of visible keypoints in the minibatch. Returns a scalar tensor containing the loss. 
""" heatmaps = [] valid = [] keypoint_side_len = pred_keypoint_logits.shape[2] for instances_per_image in instances: if len(instances_per_image) == 0: continue keypoints = instances_per_image.gt_keypoints heatmaps_per_image, valid_per_image = keypoints.to_heatmap( instances_per_image.proposal_boxes.tensor, keypoint_side_len ) heatmaps.append(heatmaps_per_image.view(-1)) valid.append(valid_per_image.view(-1)) if len(heatmaps): keypoint_targets = cat(heatmaps, dim=0) valid = cat(valid, dim=0).to(dtype=torch.uint8) valid = torch.nonzero(valid).squeeze(1) # torch.mean (in binary_cross_entropy_with_logits) doesn't # accept empty tensors, so handle it separately if len(heatmaps) == 0 or valid.numel() == 0: global _TOTAL_SKIPPED _TOTAL_SKIPPED += 1 storage = get_event_storage() storage.put_scalar("kpts_num_skipped_batches", _TOTAL_SKIPPED, smoothing_hint=False) return pred_keypoint_logits.sum() * 0 N, K, H, W = pred_keypoint_logits.shape pred_keypoint_logits = pred_keypoint_logits.view(N * K, H * W) keypoint_loss = F.cross_entropy( pred_keypoint_logits[valid], keypoint_targets[valid], reduction="sum" ) # If a normalizer isn't specified, normalize by the number of visible keypoints in the minibatch if normalizer is None: normalizer = valid.numel() keypoint_loss /= normalizer return keypoint_loss def keypoint_rcnn_inference(pred_keypoint_logits: torch.Tensor, pred_instances: List[Instances]): """ Post process each predicted keypoint heatmap in `pred_keypoint_logits` into (x, y, score) and add it to the `pred_instances` as a `pred_keypoints` field. Args: pred_keypoint_logits (Tensor): A tensor of shape (R, K, S, S) where R is the total number of instances in the batch, K is the number of keypoints, and S is the side length of the keypoint heatmap. The values are spatial logits. pred_instances (list[Instances]): A list of N Instances, where N is the number of images. Returns: None. 
Each element in pred_instances will contain extra "pred_keypoints" and "pred_keypoint_heatmaps" fields. "pred_keypoints" is a tensor of shape (#instance, K, 3) where the last dimension corresponds to (x, y, score). The scores are larger than 0. "pred_keypoint_heatmaps" contains the raw keypoint logits as passed to this function. """ # flatten all bboxes from all images together (list[Boxes] -> Rx4 tensor) bboxes_flat = cat([b.pred_boxes.tensor for b in pred_instances], dim=0) pred_keypoint_logits = pred_keypoint_logits.detach() keypoint_results = heatmaps_to_keypoints(pred_keypoint_logits, bboxes_flat.detach()) num_instances_per_image = [len(i) for i in pred_instances] keypoint_results = keypoint_results[:, :, [0, 1, 3]].split(num_instances_per_image, dim=0) heatmap_results = pred_keypoint_logits.split(num_instances_per_image, dim=0) for keypoint_results_per_image, heatmap_results_per_image, instances_per_image in zip( keypoint_results, heatmap_results, pred_instances ): # keypoint_results_per_image is (num instances)x(num keypoints)x(x, y, score) # heatmap_results_per_image is (num instances)x(num keypoints)x(side)x(side) instances_per_image.pred_keypoints = keypoint_results_per_image instances_per_image.pred_keypoint_heatmaps = heatmap_results_per_image class BaseKeypointRCNNHead(nn.Module): """ Implement the basic Keypoint R-CNN losses and inference logic described in Sec. 5 of :paper:`Mask R-CNN`. """ @configurable def __init__(self, *, num_keypoints, loss_weight=1.0, loss_normalizer=1.0): """ NOTE: this interface is experimental. Args: num_keypoints (int): number of keypoints to predict loss_weight (float): weight to multiple on the keypoint loss loss_normalizer (float or str): If float, divide the loss by `loss_normalizer * #images`. If 'visible', the loss is normalized by the total number of visible keypoints across images. 
""" super().__init__() self.num_keypoints = num_keypoints self.loss_weight = loss_weight assert loss_normalizer == "visible" or isinstance(loss_normalizer, float), loss_normalizer self.loss_normalizer = loss_normalizer @classmethod def from_config(cls, cfg, input_shape): ret = { "loss_weight": cfg.MODEL.ROI_KEYPOINT_HEAD.LOSS_WEIGHT, "num_keypoints": cfg.MODEL.ROI_KEYPOINT_HEAD.NUM_KEYPOINTS, } normalize_by_visible = ( cfg.MODEL.ROI_KEYPOINT_HEAD.NORMALIZE_LOSS_BY_VISIBLE_KEYPOINTS ) # noqa if not normalize_by_visible: batch_size_per_image = cfg.MODEL.ROI_HEADS.BATCH_SIZE_PER_IMAGE positive_sample_fraction = cfg.MODEL.ROI_HEADS.POSITIVE_FRACTION ret["loss_normalizer"] = ( ret["num_keypoints"] * batch_size_per_image * positive_sample_fraction ) else: ret["loss_normalizer"] = "visible" return ret def forward(self, x, instances: List[Instances]): """ Args: x: input 4D region feature(s) provided by :class:`ROIHeads`. instances (list[Instances]): contains the boxes & labels corresponding to the input features. Exact format is up to its caller to decide. Typically, this is the foreground instances in training, with "proposal_boxes" field and other gt annotations. In inference, it contains boxes that are already predicted. Returns: A dict of losses if in training. The predicted "instances" if in inference. """ x = self.layers(x) if self.training: num_images = len(instances) normalizer = ( None if self.loss_normalizer == "visible" else num_images * self.loss_normalizer ) return { "loss_keypoint": keypoint_rcnn_loss(x, instances, normalizer=normalizer) * self.loss_weight } else: keypoint_rcnn_inference(x, instances) return instances def layers(self, x): """ Neural network layers that makes predictions from regional input features. """ raise NotImplementedError # To get torchscript support, we make the head a subclass of `nn.Sequential`. # Therefore, to add new layers in this head class, please make sure they are # added in the order they will be used in forward(). 
@ROI_KEYPOINT_HEAD_REGISTRY.register() class KRCNNConvDeconvUpsampleHead(BaseKeypointRCNNHead, nn.Sequential): """ A standard keypoint head containing a series of 3x3 convs, followed by a transpose convolution and bilinear interpolation for upsampling. It is described in Sec. 5 of :paper:`Mask R-CNN`. """ @configurable def __init__(self, input_shape, *, num_keypoints, conv_dims, **kwargs): """ NOTE: this interface is experimental. Args: input_shape (ShapeSpec): shape of the input feature conv_dims: an iterable of output channel counts for each conv in the head e.g. (512, 512, 512) for three convs outputting 512 channels. """ super().__init__(num_keypoints=num_keypoints, **kwargs) # default up_scale to 2.0 (this can be made an option) up_scale = 2.0 in_channels = input_shape.channels for idx, layer_channels in enumerate(conv_dims, 1): module = Conv2d(in_channels, layer_channels, 3, stride=1, padding=1) self.add_module("conv_fcn{}".format(idx), module) self.add_module("conv_fcn_relu{}".format(idx), nn.ReLU()) in_channels = layer_channels deconv_kernel = 4 self.score_lowres = ConvTranspose2d( in_channels, num_keypoints, deconv_kernel, stride=2, padding=deconv_kernel // 2 - 1 ) self.up_scale = up_scale for name, param in self.named_parameters(): if "bias" in name: nn.init.constant_(param, 0) elif "weight" in name: # Caffe2 implementation uses MSRAFill, which in fact # corresponds to kaiming_normal_ in PyTorch nn.init.kaiming_normal_(param, mode="fan_out", nonlinearity="relu") @classmethod def from_config(cls, cfg, input_shape): ret = super().from_config(cfg, input_shape) ret["input_shape"] = input_shape ret["conv_dims"] = cfg.MODEL.ROI_KEYPOINT_HEAD.CONV_DIMS return ret def layers(self, x): for layer in self: x = layer(x) x = interpolate(x, scale_factor=self.up_scale, mode="bilinear", align_corners=False) return x
/roof_mask_Yv8-0.5.9-py3-none-any.whl/detectron2/modeling/roi_heads/keypoint_head.py
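`keypoint_rcnn_inference` above delegates the heatmap decoding to `heatmaps_to_keypoints`. As a rough illustration of what that decoding does conceptually — take the argmax of each K×S×S spatial logit map and project the winning cell back into the instance's box — here is a simplified NumPy sketch. It is not the detectron2 implementation (which also does sub-cell refinement on an upscaled map); the function name and the cell-center mapping are this sketch's own simplifications.

```python
import numpy as np

def decode_keypoint_heatmap(logits, box):
    """Simplified sketch of heatmap -> (x, y, score) decoding.

    logits: (K, S, S) array of spatial logits for one instance.
    box:    (x0, y0, x1, y1) box the heatmap was predicted for.
    Returns a (K, 3) array of (x, y, score) in image coordinates.
    """
    K, S, _ = logits.shape
    x0, y0, x1, y1 = box
    results = np.zeros((K, 3))
    for k in range(K):
        flat = logits[k].reshape(-1)
        idx = flat.argmax()
        iy, ix = divmod(idx, S)
        # softmax probability at the peak serves as the score (always > 0)
        p = np.exp(flat - flat.max())
        score = p[idx] / p.sum()
        # map the winning heatmap cell's center back into the box
        results[k, 0] = x0 + (ix + 0.5) * (x1 - x0) / S
        results[k, 1] = y0 + (iy + 0.5) * (y1 - y0) / S
        results[k, 2] = score
    return results
```

The score being a softmax probability explains the docstring's claim that returned scores are always larger than 0.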
import copy import logging import numpy as np import time from pycocotools.cocoeval import COCOeval from detectron2 import _C logger = logging.getLogger(__name__) class COCOeval_opt(COCOeval): """ This is a slightly modified version of the original COCO API, where the functions evaluateImg() and accumulate() are implemented in C++ to speedup evaluation """ def evaluate(self): """ Run per image evaluation on given images and store results in self.evalImgs_cpp, a datastructure that isn't readable from Python but is used by a c++ implementation of accumulate(). Unlike the original COCO PythonAPI, we don't populate the datastructure self.evalImgs because this datastructure is a computational bottleneck. :return: None """ tic = time.time() p = self.params # add backward compatibility if useSegm is specified in params if p.useSegm is not None: p.iouType = "segm" if p.useSegm == 1 else "bbox" logger.info("Evaluate annotation type *{}*".format(p.iouType)) p.imgIds = list(np.unique(p.imgIds)) if p.useCats: p.catIds = list(np.unique(p.catIds)) p.maxDets = sorted(p.maxDets) self.params = p self._prepare() # bottleneck # loop through images, area range, max detection number catIds = p.catIds if p.useCats else [-1] if p.iouType == "segm" or p.iouType == "bbox": computeIoU = self.computeIoU elif p.iouType == "keypoints": computeIoU = self.computeOks self.ious = { (imgId, catId): computeIoU(imgId, catId) for imgId in p.imgIds for catId in catIds } # bottleneck maxDet = p.maxDets[-1] # <<<< Beginning of code differences with original COCO API def convert_instances_to_cpp(instances, is_det=False): # Convert annotations for a list of instances in an image to a format that's fast # to access in C++ instances_cpp = [] for instance in instances: instance_cpp = _C.InstanceAnnotation( int(instance["id"]), instance["score"] if is_det else instance.get("score", 0.0), instance["area"], bool(instance.get("iscrowd", 0)), bool(instance.get("ignore", 0)), ) instances_cpp.append(instance_cpp) 
return instances_cpp

        # Convert GT annotations, detections, and IOUs to a format that's fast to access in C++
        ground_truth_instances = [
            [convert_instances_to_cpp(self._gts[imgId, catId]) for catId in p.catIds]
            for imgId in p.imgIds
        ]
        detected_instances = [
            [convert_instances_to_cpp(self._dts[imgId, catId], is_det=True) for catId in p.catIds]
            for imgId in p.imgIds
        ]
        ious = [[self.ious[imgId, catId] for catId in catIds] for imgId in p.imgIds]

        if not p.useCats:
            # For each image, flatten per-category lists into a single list
            ground_truth_instances = [[[o for c in i for o in c]] for i in ground_truth_instances]
            detected_instances = [[[o for c in i for o in c]] for i in detected_instances]

        # Call C++ implementation of self.evaluateImgs()
        self._evalImgs_cpp = _C.COCOevalEvaluateImages(
            p.areaRng, maxDet, p.iouThrs, ious, ground_truth_instances, detected_instances
        )
        self._evalImgs = None
        self._paramsEval = copy.deepcopy(self.params)
        toc = time.time()
        logger.info("COCOeval_opt.evaluate() finished in {:0.2f} seconds.".format(toc - tic))
        # >>>> End of code differences with original COCO API

    def accumulate(self):
        """
        Accumulate per image evaluation results and store the result in self.eval.
        Does not support changing parameter settings from those used by self.evaluate()
        """
        logger.info("Accumulating evaluation results...")
        tic = time.time()
        assert hasattr(
            self, "_evalImgs_cpp"
        ), "evaluate() must be called before accumulate() is called."
self.eval = _C.COCOevalAccumulate(self._paramsEval, self._evalImgs_cpp) # recall is num_iou_thresholds X num_categories X num_area_ranges X num_max_detections self.eval["recall"] = np.array(self.eval["recall"]).reshape( self.eval["counts"][:1] + self.eval["counts"][2:] ) # precision and scores are num_iou_thresholds X num_recall_thresholds X num_categories X # num_area_ranges X num_max_detections self.eval["precision"] = np.array(self.eval["precision"]).reshape(self.eval["counts"]) self.eval["scores"] = np.array(self.eval["scores"]).reshape(self.eval["counts"]) toc = time.time() logger.info("COCOeval_opt.accumulate() finished in {:0.2f} seconds.".format(toc - tic))
/roof_mask_Yv8-0.5.9-py3-none-any.whl/detectron2/evaluation/fast_eval_api.py
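When `p.useCats` is false, `COCOeval_opt.evaluate()` collapses the per-image, per-category nested lists into a single pseudo-category per image before handing them to the C++ evaluator. The nested comprehension is dense, so here is a standalone sketch of the same transformation (the function name is this sketch's own):

```python
def flatten_categories(per_image):
    """Collapse [image][category][instance] nesting into [image][0][instance].

    Mirrors the `[[o for c in i for o in c]]` comprehension used when
    p.useCats is false: each image keeps one pseudo-category holding all of
    its instances, with category boundaries discarded.
    """
    return [[[obj for cat in img for obj in cat]] for img in per_image]

# two images, two categories each
per_image = [
    [["a1", "a2"], ["b1"]],    # image 0
    [[], ["c1", "c2", "c3"]],  # image 1
]
flat = flatten_categories(per_image)
```

Order within an image is preserved: instances of earlier categories come first.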
import datetime import logging import time from collections import OrderedDict, abc from contextlib import ExitStack, contextmanager from typing import List, Union import torch from torch import nn from detectron2.utils.comm import get_world_size, is_main_process from detectron2.utils.logger import log_every_n_seconds class DatasetEvaluator: """ Base class for a dataset evaluator. The function :func:`inference_on_dataset` runs the model over all samples in the dataset, and have a DatasetEvaluator to process the inputs/outputs. This class will accumulate information of the inputs/outputs (by :meth:`process`), and produce evaluation results in the end (by :meth:`evaluate`). """ def reset(self): """ Preparation for a new round of evaluation. Should be called before starting a round of evaluation. """ pass def process(self, inputs, outputs): """ Process the pair of inputs and outputs. If they contain batches, the pairs can be consumed one-by-one using `zip`: .. code-block:: python for input_, output in zip(inputs, outputs): # do evaluation on single input/output pair ... Args: inputs (list): the inputs that's used to call the model. outputs (list): the return value of `model(inputs)` """ pass def evaluate(self): """ Evaluate/summarize the performance, after processing all input/output pairs. Returns: dict: A new evaluator class can return a dict of arbitrary format as long as the user can process the results. In our train_net.py, we expect the following format: * key: the name of the task (e.g., bbox) * value: a dict of {metric name: score}, e.g.: {"AP50": 80} """ pass class DatasetEvaluators(DatasetEvaluator): """ Wrapper class to combine multiple :class:`DatasetEvaluator` instances. This class dispatches every evaluation call to all of its :class:`DatasetEvaluator`. """ def __init__(self, evaluators): """ Args: evaluators (list): the evaluators to combine. 
""" super().__init__() self._evaluators = evaluators def reset(self): for evaluator in self._evaluators: evaluator.reset() def process(self, inputs, outputs): for evaluator in self._evaluators: evaluator.process(inputs, outputs) def evaluate(self): results = OrderedDict() for evaluator in self._evaluators: result = evaluator.evaluate() if is_main_process() and result is not None: for k, v in result.items(): assert ( k not in results ), "Different evaluators produce results with the same key {}".format(k) results[k] = v return results def inference_on_dataset( model, data_loader, evaluator: Union[DatasetEvaluator, List[DatasetEvaluator], None] ): """ Run model on the data_loader and evaluate the metrics with evaluator. Also benchmark the inference speed of `model.__call__` accurately. The model will be used in eval mode. Args: model (callable): a callable which takes an object from `data_loader` and returns some outputs. If it's an nn.Module, it will be temporarily set to `eval` mode. If you wish to evaluate a model in `training` mode instead, you can wrap the given model and override its behavior of `.eval()` and `.train()`. data_loader: an iterable object with a length. The elements it generates will be the inputs to the model. evaluator: the evaluator(s) to run. Use `None` if you only want to benchmark, but don't want to do any evaluation. 
Returns: The return value of `evaluator.evaluate()` """ num_devices = get_world_size() logger = logging.getLogger(__name__) logger.info("Start inference on {} batches".format(len(data_loader))) total = len(data_loader) # inference data loader must have a fixed length if evaluator is None: # create a no-op evaluator evaluator = DatasetEvaluators([]) if isinstance(evaluator, abc.MutableSequence): evaluator = DatasetEvaluators(evaluator) evaluator.reset() num_warmup = min(5, total - 1) start_time = time.perf_counter() total_data_time = 0 total_compute_time = 0 total_eval_time = 0 with ExitStack() as stack: if isinstance(model, nn.Module): stack.enter_context(inference_context(model)) stack.enter_context(torch.no_grad()) start_data_time = time.perf_counter() for idx, inputs in enumerate(data_loader): total_data_time += time.perf_counter() - start_data_time if idx == num_warmup: start_time = time.perf_counter() total_data_time = 0 total_compute_time = 0 total_eval_time = 0 start_compute_time = time.perf_counter() outputs = model(inputs) if torch.cuda.is_available(): torch.cuda.synchronize() total_compute_time += time.perf_counter() - start_compute_time start_eval_time = time.perf_counter() evaluator.process(inputs, outputs) total_eval_time += time.perf_counter() - start_eval_time iters_after_start = idx + 1 - num_warmup * int(idx >= num_warmup) data_seconds_per_iter = total_data_time / iters_after_start compute_seconds_per_iter = total_compute_time / iters_after_start eval_seconds_per_iter = total_eval_time / iters_after_start total_seconds_per_iter = (time.perf_counter() - start_time) / iters_after_start if idx >= num_warmup * 2 or compute_seconds_per_iter > 5: eta = datetime.timedelta(seconds=int(total_seconds_per_iter * (total - idx - 1))) log_every_n_seconds( logging.INFO, ( f"Inference done {idx + 1}/{total}. " f"Dataloading: {data_seconds_per_iter:.4f} s/iter. " f"Inference: {compute_seconds_per_iter:.4f} s/iter. " f"Eval: {eval_seconds_per_iter:.4f} s/iter. 
" f"Total: {total_seconds_per_iter:.4f} s/iter. " f"ETA={eta}" ), n=5, ) start_data_time = time.perf_counter() # Measure the time only for this worker (before the synchronization barrier) total_time = time.perf_counter() - start_time total_time_str = str(datetime.timedelta(seconds=total_time)) # NOTE this format is parsed by grep logger.info( "Total inference time: {} ({:.6f} s / iter per device, on {} devices)".format( total_time_str, total_time / (total - num_warmup), num_devices ) ) total_compute_time_str = str(datetime.timedelta(seconds=int(total_compute_time))) logger.info( "Total inference pure compute time: {} ({:.6f} s / iter per device, on {} devices)".format( total_compute_time_str, total_compute_time / (total - num_warmup), num_devices ) ) results = evaluator.evaluate() # An evaluator may return None when not in main process. # Replace it by an empty dict instead to make it easier for downstream code to handle if results is None: results = {} return results @contextmanager def inference_context(model): """ A context where the model is temporarily changed to eval mode, and restored to previous mode afterwards. Args: model: a torch Module """ training_mode = model.training model.eval() yield model.train(training_mode)
/roof_mask_Yv8-0.5.9-py3-none-any.whl/detectron2/evaluation/evaluator.py
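The `DatasetEvaluator` protocol above (`reset` → repeated `process` → `evaluate`) is what `inference_on_dataset` drives. A minimal standalone sketch of a concrete evaluator and the driving loop — it mirrors the interface without importing detectron2, omits the timing/warmup/distributed machinery, and the class and function names are this sketch's own:

```python
class AccuracyEvaluator:
    """Minimal evaluator following the reset/process/evaluate protocol."""

    def reset(self):
        # called once before a round of evaluation
        self.correct = 0
        self.total = 0

    def process(self, inputs, outputs):
        # consume one batch of input/output pairs
        for inp, out in zip(inputs, outputs):
            self.total += 1
            if out == inp["label"]:
                self.correct += 1

    def evaluate(self):
        # summarize: {task name: {metric name: score}}
        return {"accuracy": {"top1": self.correct / max(self.total, 1)}}


def run_inference(model, data_loader, evaluator):
    # same control flow as inference_on_dataset, minus timing, warmup,
    # and distributed logic; `model` is any callable here
    evaluator.reset()
    for inputs in data_loader:
        outputs = [model(x) for x in inputs]
        evaluator.process(inputs, outputs)
    return evaluator.evaluate()
```

Multiple such evaluators can be combined via `DatasetEvaluators`, which fans each `process` call out to all of them.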
import glob import logging import numpy as np import os import tempfile from collections import OrderedDict import torch from PIL import Image from detectron2.data import MetadataCatalog from detectron2.utils import comm from detectron2.utils.file_io import PathManager from .evaluator import DatasetEvaluator class CityscapesEvaluator(DatasetEvaluator): """ Base class for evaluation using cityscapes API. """ def __init__(self, dataset_name): """ Args: dataset_name (str): the name of the dataset. It must have the following metadata associated with it: "thing_classes", "gt_dir". """ self._metadata = MetadataCatalog.get(dataset_name) self._cpu_device = torch.device("cpu") self._logger = logging.getLogger(__name__) def reset(self): self._working_dir = tempfile.TemporaryDirectory(prefix="cityscapes_eval_") self._temp_dir = self._working_dir.name # All workers will write to the same results directory # TODO this does not work in distributed training assert ( comm.get_local_size() == comm.get_world_size() ), "CityscapesEvaluator currently do not work with multiple machines." self._temp_dir = comm.all_gather(self._temp_dir)[0] if self._temp_dir != self._working_dir.name: self._working_dir.cleanup() self._logger.info( "Writing cityscapes results to temporary directory {} ...".format(self._temp_dir) ) class CityscapesInstanceEvaluator(CityscapesEvaluator): """ Evaluate instance segmentation results on cityscapes dataset using cityscapes API. Note: * It does not work in multi-machine distributed training. * It contains a synchronization, therefore has to be used on all ranks. * Only the main process runs evaluation. 
""" def process(self, inputs, outputs): from cityscapesscripts.helpers.labels import name2label for input, output in zip(inputs, outputs): file_name = input["file_name"] basename = os.path.splitext(os.path.basename(file_name))[0] pred_txt = os.path.join(self._temp_dir, basename + "_pred.txt") if "instances" in output: output = output["instances"].to(self._cpu_device) num_instances = len(output) with open(pred_txt, "w") as fout: for i in range(num_instances): pred_class = output.pred_classes[i] classes = self._metadata.thing_classes[pred_class] class_id = name2label[classes].id score = output.scores[i] mask = output.pred_masks[i].numpy().astype("uint8") png_filename = os.path.join( self._temp_dir, basename + "_{}_{}.png".format(i, classes) ) Image.fromarray(mask * 255).save(png_filename) fout.write( "{} {} {}\n".format(os.path.basename(png_filename), class_id, score) ) else: # Cityscapes requires a prediction file for every ground truth image. with open(pred_txt, "w") as fout: pass def evaluate(self): """ Returns: dict: has a key "segm", whose value is a dict of "AP" and "AP50". 
""" comm.synchronize() if comm.get_rank() > 0: return import cityscapesscripts.evaluation.evalInstanceLevelSemanticLabeling as cityscapes_eval self._logger.info("Evaluating results under {} ...".format(self._temp_dir)) # set some global states in cityscapes evaluation API, before evaluating cityscapes_eval.args.predictionPath = os.path.abspath(self._temp_dir) cityscapes_eval.args.predictionWalk = None cityscapes_eval.args.JSONOutput = False cityscapes_eval.args.colorized = False cityscapes_eval.args.gtInstancesFile = os.path.join(self._temp_dir, "gtInstances.json") # These lines are adopted from # https://github.com/mcordts/cityscapesScripts/blob/master/cityscapesscripts/evaluation/evalInstanceLevelSemanticLabeling.py # noqa gt_dir = PathManager.get_local_path(self._metadata.gt_dir) groundTruthImgList = glob.glob(os.path.join(gt_dir, "*", "*_gtFine_instanceIds.png")) assert len( groundTruthImgList ), "Cannot find any ground truth images to use for evaluation. Searched for: {}".format( cityscapes_eval.args.groundTruthSearch ) predictionImgList = [] for gt in groundTruthImgList: predictionImgList.append(cityscapes_eval.getPrediction(gt, cityscapes_eval.args)) results = cityscapes_eval.evaluateImgLists( predictionImgList, groundTruthImgList, cityscapes_eval.args )["averages"] ret = OrderedDict() ret["segm"] = {"AP": results["allAp"] * 100, "AP50": results["allAp50%"] * 100} self._working_dir.cleanup() return ret class CityscapesSemSegEvaluator(CityscapesEvaluator): """ Evaluate semantic segmentation results on cityscapes dataset using cityscapes API. Note: * It does not work in multi-machine distributed training. * It contains a synchronization, therefore has to be used on all ranks. * Only the main process runs evaluation. 
""" def process(self, inputs, outputs): from cityscapesscripts.helpers.labels import trainId2label for input, output in zip(inputs, outputs): file_name = input["file_name"] basename = os.path.splitext(os.path.basename(file_name))[0] pred_filename = os.path.join(self._temp_dir, basename + "_pred.png") output = output["sem_seg"].argmax(dim=0).to(self._cpu_device).numpy() pred = 255 * np.ones(output.shape, dtype=np.uint8) for train_id, label in trainId2label.items(): if label.ignoreInEval: continue pred[output == train_id] = label.id Image.fromarray(pred).save(pred_filename) def evaluate(self): comm.synchronize() if comm.get_rank() > 0: return # Load the Cityscapes eval script *after* setting the required env var, # since the script reads CITYSCAPES_DATASET into global variables at load time. import cityscapesscripts.evaluation.evalPixelLevelSemanticLabeling as cityscapes_eval self._logger.info("Evaluating results under {} ...".format(self._temp_dir)) # set some global states in cityscapes evaluation API, before evaluating cityscapes_eval.args.predictionPath = os.path.abspath(self._temp_dir) cityscapes_eval.args.predictionWalk = None cityscapes_eval.args.JSONOutput = False cityscapes_eval.args.colorized = False # These lines are adopted from # https://github.com/mcordts/cityscapesScripts/blob/master/cityscapesscripts/evaluation/evalPixelLevelSemanticLabeling.py # noqa gt_dir = PathManager.get_local_path(self._metadata.gt_dir) groundTruthImgList = glob.glob(os.path.join(gt_dir, "*", "*_gtFine_labelIds.png")) assert len( groundTruthImgList ), "Cannot find any ground truth images to use for evaluation. 
Searched for: {}".format( cityscapes_eval.args.groundTruthSearch ) predictionImgList = [] for gt in groundTruthImgList: predictionImgList.append(cityscapes_eval.getPrediction(cityscapes_eval.args, gt)) results = cityscapes_eval.evaluateImgLists( predictionImgList, groundTruthImgList, cityscapes_eval.args ) ret = OrderedDict() ret["sem_seg"] = { "IoU": 100.0 * results["averageScoreClasses"], "iIoU": 100.0 * results["averageScoreInstClasses"], "IoU_sup": 100.0 * results["averageScoreCategories"], "iIoU_sup": 100.0 * results["averageScoreInstCategories"], } self._working_dir.cleanup() return ret
/roof_mask_Yv8-0.5.9-py3-none-any.whl/detectron2/evaluation/cityscapes_evaluation.py
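`CityscapesSemSegEvaluator.process` remaps the model's contiguous trainIds back to Cityscapes label ids pixel by pixel, leaving unmapped pixels at the ignore value 255. A standalone NumPy sketch of that remapping loop — the mapping dict here is a made-up stand-in for cityscapesscripts' `trainId2label` table:

```python
import numpy as np

def train_ids_to_label_ids(output, train_id_to_label_id, ignore_value=255):
    """Remap a trainId prediction map to dataset label ids.

    output: (H, W) integer array of predicted trainIds.
    Pixels whose trainId has no entry in the mapping keep `ignore_value`,
    matching the `pred = 255 * np.ones(...)` initialization in the evaluator.
    """
    pred = ignore_value * np.ones(output.shape, dtype=np.uint8)
    for train_id, label_id in train_id_to_label_id.items():
        pred[output == train_id] = label_id
    return pred
```

The real evaluator additionally skips labels flagged `ignoreInEval`, which is equivalent to leaving them out of the mapping here.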
import itertools import json import numpy as np import os import torch from pycocotools.cocoeval import COCOeval, maskUtils from detectron2.structures import BoxMode, RotatedBoxes, pairwise_iou_rotated from detectron2.utils.file_io import PathManager from .coco_evaluation import COCOEvaluator class RotatedCOCOeval(COCOeval): @staticmethod def is_rotated(box_list): if type(box_list) == np.ndarray: return box_list.shape[1] == 5 elif type(box_list) == list: if box_list == []: # cannot decide the box_dim return False return np.all( np.array( [ (len(obj) == 5) and ((type(obj) == list) or (type(obj) == np.ndarray)) for obj in box_list ] ) ) return False @staticmethod def boxlist_to_tensor(boxlist, output_box_dim): if type(boxlist) == np.ndarray: box_tensor = torch.from_numpy(boxlist) elif type(boxlist) == list: if boxlist == []: return torch.zeros((0, output_box_dim), dtype=torch.float32) else: box_tensor = torch.FloatTensor(boxlist) else: raise Exception("Unrecognized boxlist type") input_box_dim = box_tensor.shape[1] if input_box_dim != output_box_dim: if input_box_dim == 4 and output_box_dim == 5: box_tensor = BoxMode.convert(box_tensor, BoxMode.XYWH_ABS, BoxMode.XYWHA_ABS) else: raise Exception( "Unable to convert from {}-dim box to {}-dim box".format( input_box_dim, output_box_dim ) ) return box_tensor def compute_iou_dt_gt(self, dt, gt, is_crowd): if self.is_rotated(dt) or self.is_rotated(gt): # TODO: take is_crowd into consideration assert all(c == 0 for c in is_crowd) dt = RotatedBoxes(self.boxlist_to_tensor(dt, output_box_dim=5)) gt = RotatedBoxes(self.boxlist_to_tensor(gt, output_box_dim=5)) return pairwise_iou_rotated(dt, gt) else: # This is the same as the classical COCO evaluation return maskUtils.iou(dt, gt, is_crowd) def computeIoU(self, imgId, catId): p = self.params if p.useCats: gt = self._gts[imgId, catId] dt = self._dts[imgId, catId] else: gt = [_ for cId in p.catIds for _ in self._gts[imgId, cId]] dt = [_ for cId in p.catIds for _ in self._dts[imgId, 
cId]] if len(gt) == 0 and len(dt) == 0: return [] inds = np.argsort([-d["score"] for d in dt], kind="mergesort") dt = [dt[i] for i in inds] if len(dt) > p.maxDets[-1]: dt = dt[0 : p.maxDets[-1]] assert p.iouType == "bbox", "unsupported iouType for iou computation" g = [g["bbox"] for g in gt] d = [d["bbox"] for d in dt] # compute iou between each dt and gt region iscrowd = [int(o["iscrowd"]) for o in gt] # Note: this function is copied from cocoeval.py in cocoapi # and the major difference is here. ious = self.compute_iou_dt_gt(d, g, iscrowd) return ious class RotatedCOCOEvaluator(COCOEvaluator): """ Evaluate object proposal/instance detection outputs using COCO-like metrics and APIs, with rotated boxes support. Note: this uses IOU only and does not consider angle differences. """ def process(self, inputs, outputs): """ Args: inputs: the inputs to a COCO model (e.g., GeneralizedRCNN). It is a list of dict. Each dict corresponds to an image and contains keys like "height", "width", "file_name", "image_id". outputs: the outputs of a COCO model. It is a list of dicts with key "instances" that contains :class:`Instances`. 
""" for input, output in zip(inputs, outputs): prediction = {"image_id": input["image_id"]} if "instances" in output: instances = output["instances"].to(self._cpu_device) prediction["instances"] = self.instances_to_json(instances, input["image_id"]) if "proposals" in output: prediction["proposals"] = output["proposals"].to(self._cpu_device) self._predictions.append(prediction) def instances_to_json(self, instances, img_id): num_instance = len(instances) if num_instance == 0: return [] boxes = instances.pred_boxes.tensor.numpy() if boxes.shape[1] == 4: boxes = BoxMode.convert(boxes, BoxMode.XYXY_ABS, BoxMode.XYWH_ABS) boxes = boxes.tolist() scores = instances.scores.tolist() classes = instances.pred_classes.tolist() results = [] for k in range(num_instance): result = { "image_id": img_id, "category_id": classes[k], "bbox": boxes[k], "score": scores[k], } results.append(result) return results def _eval_predictions(self, predictions, img_ids=None): # img_ids: unused """ Evaluate predictions on the given tasks. Fill self._results with the metrics of the tasks. 
""" self._logger.info("Preparing results for COCO format ...") coco_results = list(itertools.chain(*[x["instances"] for x in predictions])) # unmap the category ids for COCO if hasattr(self._metadata, "thing_dataset_id_to_contiguous_id"): reverse_id_mapping = { v: k for k, v in self._metadata.thing_dataset_id_to_contiguous_id.items() } for result in coco_results: result["category_id"] = reverse_id_mapping[result["category_id"]] if self._output_dir: file_path = os.path.join(self._output_dir, "coco_instances_results.json") self._logger.info("Saving results to {}".format(file_path)) with PathManager.open(file_path, "w") as f: f.write(json.dumps(coco_results)) f.flush() if not self._do_evaluation: self._logger.info("Annotations are not available for evaluation.") return self._logger.info("Evaluating predictions ...") assert self._tasks is None or set(self._tasks) == { "bbox" }, "[RotatedCOCOEvaluator] Only bbox evaluation is supported" coco_eval = ( self._evaluate_predictions_on_coco(self._coco_api, coco_results) if len(coco_results) > 0 else None # cocoapi does not handle empty results very well ) task = "bbox" res = self._derive_coco_results( coco_eval, task, class_names=self._metadata.get("thing_classes") ) self._results[task] = res def _evaluate_predictions_on_coco(self, coco_gt, coco_results): """ Evaluate the coco results using COCOEval API. """ assert len(coco_results) > 0 coco_dt = coco_gt.loadRes(coco_results) # Only bbox is supported for now coco_eval = RotatedCOCOeval(coco_gt, coco_dt, iouType="bbox") coco_eval.evaluate() coco_eval.accumulate() coco_eval.summarize() return coco_eval
/roof_mask_Yv8-0.5.9-py3-none-any.whl/detectron2/evaluation/rotated_coco_evaluation.py
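`RotatedCOCOeval.boxlist_to_tensor` promotes 4-dim axis-aligned boxes to 5-dim rotated boxes via `BoxMode.convert(..., XYWH_ABS, XYWHA_ABS)`. As a rough sketch of what that conversion amounts to — an (x0, y0, w, h) box becomes a (cx, cy, w, h, angle=0) box centered on the original — here is a NumPy version (assuming detectron2's XYWHA convention of a center point plus an angle in degrees; the function name is this sketch's own):

```python
import numpy as np

def xywh_to_xywha(boxes):
    """Convert Nx4 (x0, y0, w, h) boxes to Nx5 (cx, cy, w, h, angle) boxes.

    The angle of an axis-aligned box is 0; the center is the box midpoint.
    """
    boxes = np.asarray(boxes, dtype=np.float64)
    cx = boxes[:, 0] + boxes[:, 2] / 2.0
    cy = boxes[:, 1] + boxes[:, 3] / 2.0
    angles = np.zeros(len(boxes))
    return np.stack([cx, cy, boxes[:, 2], boxes[:, 3], angles], axis=1)
```

This is why mixing rotated ground truth with axis-aligned detections still works in `compute_iou_dt_gt`: both sides end up as 5-dim boxes before `pairwise_iou_rotated` is called.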
import copy import itertools import json import logging import os import pickle from collections import OrderedDict import torch import detectron2.utils.comm as comm from detectron2.config import CfgNode from detectron2.data import MetadataCatalog from detectron2.structures import Boxes, BoxMode, pairwise_iou from detectron2.utils.file_io import PathManager from detectron2.utils.logger import create_small_table from .coco_evaluation import instances_to_coco_json from .evaluator import DatasetEvaluator class LVISEvaluator(DatasetEvaluator): """ Evaluate object proposal and instance detection/segmentation outputs using LVIS's metrics and evaluation API. """ def __init__( self, dataset_name, tasks=None, distributed=True, output_dir=None, *, max_dets_per_image=None, ): """ Args: dataset_name (str): name of the dataset to be evaluated. It must have the following corresponding metadata: "json_file": the path to the LVIS format annotation tasks (tuple[str]): tasks that can be evaluated under the given configuration. A task is one of "bbox", "segm". By default, will infer this automatically from predictions. distributed (True): if True, will collect results from all ranks for evaluation. Otherwise, will evaluate the results in the current process. output_dir (str): optional, an output directory to dump results. max_dets_per_image (None or int): limit on maximum detections per image in evaluating AP This limit, by default of the LVIS dataset, is 300. """ from lvis import LVIS self._logger = logging.getLogger(__name__) if tasks is not None and isinstance(tasks, CfgNode): self._logger.warn( "COCO Evaluator instantiated using config, this is deprecated behavior." " Please pass in explicit arguments instead." 
            )
            self._tasks = None  # Inferring it from predictions should be better
        else:
            self._tasks = tasks

        self._distributed = distributed
        self._output_dir = output_dir
        self._max_dets_per_image = max_dets_per_image

        self._cpu_device = torch.device("cpu")

        self._metadata = MetadataCatalog.get(dataset_name)
        json_file = PathManager.get_local_path(self._metadata.json_file)
        self._lvis_api = LVIS(json_file)
        # Test set json files do not contain annotations (evaluation must be
        # performed using the LVIS evaluation server).
        self._do_evaluation = len(self._lvis_api.get_ann_ids()) > 0

    def reset(self):
        self._predictions = []

    def process(self, inputs, outputs):
        """
        Args:
            inputs: the inputs to a LVIS model (e.g., GeneralizedRCNN).
                It is a list of dict. Each dict corresponds to an image and
                contains keys like "height", "width", "file_name", "image_id".
            outputs: the outputs of a LVIS model. It is a list of dicts with key
                "instances" that contains :class:`Instances`.
        """
        for input, output in zip(inputs, outputs):
            prediction = {"image_id": input["image_id"]}

            if "instances" in output:
                instances = output["instances"].to(self._cpu_device)
                prediction["instances"] = instances_to_coco_json(instances, input["image_id"])
            if "proposals" in output:
                prediction["proposals"] = output["proposals"].to(self._cpu_device)
            self._predictions.append(prediction)

    def evaluate(self):
        if self._distributed:
            comm.synchronize()
            predictions = comm.gather(self._predictions, dst=0)
            predictions = list(itertools.chain(*predictions))

            if not comm.is_main_process():
                return
        else:
            predictions = self._predictions

        if len(predictions) == 0:
            self._logger.warning("[LVISEvaluator] Did not receive valid predictions.")
            return {}

        if self._output_dir:
            PathManager.mkdirs(self._output_dir)
            file_path = os.path.join(self._output_dir, "instances_predictions.pth")
            with PathManager.open(file_path, "wb") as f:
                torch.save(predictions, f)

        self._results = OrderedDict()
        if "proposals" in predictions[0]:
            self._eval_box_proposals(predictions)
        if "instances" in predictions[0]:
            self._eval_predictions(predictions)
        # Copy so the caller can do whatever with results
        return copy.deepcopy(self._results)

    def _tasks_from_predictions(self, predictions):
        for pred in predictions:
            if "segmentation" in pred:
                return ("bbox", "segm")
        return ("bbox",)

    def _eval_predictions(self, predictions):
        """
        Evaluate predictions. Fill self._results with the metrics of the tasks.

        Args:
            predictions (list[dict]): list of outputs from the model
        """
        self._logger.info("Preparing results in the LVIS format ...")
        lvis_results = list(itertools.chain(*[x["instances"] for x in predictions]))
        tasks = self._tasks or self._tasks_from_predictions(lvis_results)

        # LVIS evaluator can be used to evaluate results for COCO dataset categories.
        # In this case `_metadata` variable will have a field with COCO-specific category mapping.
        if hasattr(self._metadata, "thing_dataset_id_to_contiguous_id"):
            reverse_id_mapping = {
                v: k for k, v in self._metadata.thing_dataset_id_to_contiguous_id.items()
            }
            for result in lvis_results:
                result["category_id"] = reverse_id_mapping[result["category_id"]]
        else:
            # unmap the category ids for LVIS (from 0-indexed to 1-indexed)
            for result in lvis_results:
                result["category_id"] += 1

        if self._output_dir:
            file_path = os.path.join(self._output_dir, "lvis_instances_results.json")
            self._logger.info("Saving results to {}".format(file_path))
            with PathManager.open(file_path, "w") as f:
                f.write(json.dumps(lvis_results))
                f.flush()

        if not self._do_evaluation:
            self._logger.info("Annotations are not available for evaluation.")
            return

        self._logger.info("Evaluating predictions ...")
        for task in sorted(tasks):
            res = _evaluate_predictions_on_lvis(
                self._lvis_api,
                lvis_results,
                task,
                max_dets_per_image=self._max_dets_per_image,
                class_names=self._metadata.get("thing_classes"),
            )
            self._results[task] = res

    def _eval_box_proposals(self, predictions):
        """
        Evaluate the box proposals in predictions.
        Fill self._results with the metrics for "box_proposals" task.
        """
        if self._output_dir:
            # Saving generated box proposals to file.
            # Predicted box_proposals are in XYXY_ABS mode.
            bbox_mode = BoxMode.XYXY_ABS.value
            ids, boxes, objectness_logits = [], [], []
            for prediction in predictions:
                ids.append(prediction["image_id"])
                boxes.append(prediction["proposals"].proposal_boxes.tensor.numpy())
                objectness_logits.append(prediction["proposals"].objectness_logits.numpy())

            proposal_data = {
                "boxes": boxes,
                "objectness_logits": objectness_logits,
                "ids": ids,
                "bbox_mode": bbox_mode,
            }
            with PathManager.open(os.path.join(self._output_dir, "box_proposals.pkl"), "wb") as f:
                pickle.dump(proposal_data, f)

        if not self._do_evaluation:
            self._logger.info("Annotations are not available for evaluation.")
            return

        self._logger.info("Evaluating bbox proposals ...")
        res = {}
        areas = {"all": "", "small": "s", "medium": "m", "large": "l"}
        for limit in [100, 1000]:
            for area, suffix in areas.items():
                stats = _evaluate_box_proposals(predictions, self._lvis_api, area=area, limit=limit)
                key = "AR{}@{:d}".format(suffix, limit)
                res[key] = float(stats["ar"].item() * 100)
        self._logger.info("Proposal metrics: \n" + create_small_table(res))
        self._results["box_proposals"] = res


# inspired from Detectron:
# https://github.com/facebookresearch/Detectron/blob/a6a835f5b8208c45d0dce217ce9bbda915f44df7/detectron/datasets/json_dataset_evaluator.py#L255 # noqa
def _evaluate_box_proposals(dataset_predictions, lvis_api, thresholds=None, area="all", limit=None):
    """
    Evaluate detection proposal recall metrics. This function is a much
    faster alternative to the official LVIS API recall evaluation code. However,
    it produces slightly different results.
    """
    # Record max overlap value for each gt box
    # Return vector of overlap values
    areas = {
        "all": 0,
        "small": 1,
        "medium": 2,
        "large": 3,
        "96-128": 4,
        "128-256": 5,
        "256-512": 6,
        "512-inf": 7,
    }
    area_ranges = [
        [0 ** 2, 1e5 ** 2],  # all
        [0 ** 2, 32 ** 2],  # small
        [32 ** 2, 96 ** 2],  # medium
        [96 ** 2, 1e5 ** 2],  # large
        [96 ** 2, 128 ** 2],  # 96-128
        [128 ** 2, 256 ** 2],  # 128-256
        [256 ** 2, 512 ** 2],  # 256-512
        [512 ** 2, 1e5 ** 2],  # 512-inf
    ]
    assert area in areas, "Unknown area range: {}".format(area)
    area_range = area_ranges[areas[area]]
    gt_overlaps = []
    num_pos = 0

    for prediction_dict in dataset_predictions:
        predictions = prediction_dict["proposals"]

        # sort predictions in descending order
        # TODO maybe remove this and make it explicit in the documentation
        inds = predictions.objectness_logits.sort(descending=True)[1]
        predictions = predictions[inds]

        ann_ids = lvis_api.get_ann_ids(img_ids=[prediction_dict["image_id"]])
        anno = lvis_api.load_anns(ann_ids)
        gt_boxes = [
            BoxMode.convert(obj["bbox"], BoxMode.XYWH_ABS, BoxMode.XYXY_ABS) for obj in anno
        ]
        gt_boxes = torch.as_tensor(gt_boxes).reshape(-1, 4)  # guard against no boxes
        gt_boxes = Boxes(gt_boxes)
        gt_areas = torch.as_tensor([obj["area"] for obj in anno])

        if len(gt_boxes) == 0 or len(predictions) == 0:
            continue

        valid_gt_inds = (gt_areas >= area_range[0]) & (gt_areas <= area_range[1])
        gt_boxes = gt_boxes[valid_gt_inds]

        num_pos += len(gt_boxes)

        if len(gt_boxes) == 0:
            continue

        if limit is not None and len(predictions) > limit:
            predictions = predictions[:limit]

        overlaps = pairwise_iou(predictions.proposal_boxes, gt_boxes)

        _gt_overlaps = torch.zeros(len(gt_boxes))
        for j in range(min(len(predictions), len(gt_boxes))):
            # find which proposal box maximally covers each gt box
            # and get the iou amount of coverage for each gt box
            max_overlaps, argmax_overlaps = overlaps.max(dim=0)

            # find which gt box is 'best' covered (i.e. 'best' = most iou)
            gt_ovr, gt_ind = max_overlaps.max(dim=0)
            assert gt_ovr >= 0
            # find the proposal box that covers the best covered gt box
            box_ind = argmax_overlaps[gt_ind]
            # record the iou coverage of this gt box
            _gt_overlaps[j] = overlaps[box_ind, gt_ind]
            assert _gt_overlaps[j] == gt_ovr
            # mark the proposal box and the gt box as used
            overlaps[box_ind, :] = -1
            overlaps[:, gt_ind] = -1

        # append recorded iou coverage level
        gt_overlaps.append(_gt_overlaps)
    gt_overlaps = (
        torch.cat(gt_overlaps, dim=0) if len(gt_overlaps) else torch.zeros(0, dtype=torch.float32)
    )
    gt_overlaps, _ = torch.sort(gt_overlaps)

    if thresholds is None:
        step = 0.05
        thresholds = torch.arange(0.5, 0.95 + 1e-5, step, dtype=torch.float32)
    recalls = torch.zeros_like(thresholds)
    # compute recall for each iou threshold
    for i, t in enumerate(thresholds):
        recalls[i] = (gt_overlaps >= t).float().sum() / float(num_pos)
    # ar = 2 * np.trapz(recalls, thresholds)
    ar = recalls.mean()
    return {
        "ar": ar,
        "recalls": recalls,
        "thresholds": thresholds,
        "gt_overlaps": gt_overlaps,
        "num_pos": num_pos,
    }


def _evaluate_predictions_on_lvis(
    lvis_gt, lvis_results, iou_type, max_dets_per_image=None, class_names=None
):
    """
    Args:
        iou_type (str):
        max_dets_per_image (None or int): limit on maximum detections per image in
            evaluating AP. This limit, by default of the LVIS dataset, is 300.
        class_names (None or list[str]): if provided, will use it to predict
            per-category AP.

    Returns:
        a dict of {metric name: score}
    """
    metrics = {
        "bbox": ["AP", "AP50", "AP75", "APs", "APm", "APl", "APr", "APc", "APf"],
        "segm": ["AP", "AP50", "AP75", "APs", "APm", "APl", "APr", "APc", "APf"],
    }[iou_type]

    logger = logging.getLogger(__name__)

    if len(lvis_results) == 0:  # TODO: check if needed
        logger.warn("No predictions from the model!")
        return {metric: float("nan") for metric in metrics}

    if iou_type == "segm":
        lvis_results = copy.deepcopy(lvis_results)
        # When evaluating mask AP, if the results contain bbox, LVIS API will
        # use the box area as the area of the instance, instead of the mask area.
        # This leads to a different definition of small/medium/large.
        # We remove the bbox field to let mask AP use mask area.
        for c in lvis_results:
            c.pop("bbox", None)

    if max_dets_per_image is None:
        max_dets_per_image = 300  # Default for LVIS dataset

    from lvis import LVISEval, LVISResults

    logger.info(f"Evaluating with max detections per image = {max_dets_per_image}")
    lvis_results = LVISResults(lvis_gt, lvis_results, max_dets=max_dets_per_image)
    lvis_eval = LVISEval(lvis_gt, lvis_results, iou_type)
    lvis_eval.run()
    lvis_eval.print_results()

    # Pull the standard metrics from the LVIS results
    results = lvis_eval.get_results()
    results = {metric: float(results[metric] * 100) for metric in metrics}
    logger.info("Evaluation results for {}: \n".format(iou_type) + create_small_table(results))
    return results
/roof_mask_Yv8-0.5.9-py3-none-any.whl/detectron2/evaluation/lvis_evaluation.py
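The `evaluate` method above gathers per-worker prediction lists and flattens them with `itertools.chain` before scoring. A minimal single-process sketch of that merge step; `fake_gather` is a hypothetical stand-in for `comm.gather`, which is not reproduced here:

```python
import itertools

def fake_gather(per_worker_predictions):
    # Hypothetical stand-in for detectron2's comm.gather: in the real code,
    # each rank contributes its local list and rank 0 receives all of them.
    return per_worker_predictions

# Two "workers", each with its own list of prediction dicts
worker_lists = [
    [{"image_id": 1}, {"image_id": 2}],
    [{"image_id": 3}],
]
gathered = fake_gather(worker_lists)
# Flatten the list of lists exactly as LVISEvaluator.evaluate does
predictions = list(itertools.chain(*gathered))
```

The same `gather` + `chain` idiom appears in the semantic segmentation and panoptic evaluators below.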
import itertools
import json
import logging
import numpy as np
import os
from collections import OrderedDict
from typing import Optional, Union
import pycocotools.mask as mask_util
import torch
from PIL import Image

from detectron2.data import DatasetCatalog, MetadataCatalog
from detectron2.utils.comm import all_gather, is_main_process, synchronize
from detectron2.utils.file_io import PathManager

from .evaluator import DatasetEvaluator


def load_image_into_numpy_array(
    filename: str,
    copy: bool = False,
    dtype: Optional[Union[np.dtype, str]] = None,
) -> np.ndarray:
    with PathManager.open(filename, "rb") as f:
        array = np.array(Image.open(f), copy=copy, dtype=dtype)
    return array


class SemSegEvaluator(DatasetEvaluator):
    """
    Evaluate semantic segmentation metrics.
    """

    def __init__(
        self,
        dataset_name,
        distributed=True,
        output_dir=None,
        *,
        sem_seg_loading_fn=load_image_into_numpy_array,
        num_classes=None,
        ignore_label=None,
    ):
        """
        Args:
            dataset_name (str): name of the dataset to be evaluated.
            distributed (bool): if True, will collect results from all ranks for evaluation.
                Otherwise, will evaluate the results in the current process.
            output_dir (str): an output directory to dump results.
            sem_seg_loading_fn: function to read sem seg file and load into numpy array.
                Default provided, but projects can customize.
            num_classes, ignore_label: deprecated argument
        """
        self._logger = logging.getLogger(__name__)
        if num_classes is not None:
            self._logger.warn(
                "SemSegEvaluator(num_classes) is deprecated! It should be obtained from metadata."
            )
        if ignore_label is not None:
            self._logger.warn(
                "SemSegEvaluator(ignore_label) is deprecated! It should be obtained from metadata."
            )
        self._dataset_name = dataset_name
        self._distributed = distributed
        self._output_dir = output_dir

        self._cpu_device = torch.device("cpu")

        self.input_file_to_gt_file = {
            dataset_record["file_name"]: dataset_record["sem_seg_file_name"]
            for dataset_record in DatasetCatalog.get(dataset_name)
        }

        meta = MetadataCatalog.get(dataset_name)
        # Dict that maps contiguous training ids to COCO category ids
        try:
            c2d = meta.stuff_dataset_id_to_contiguous_id
            self._contiguous_id_to_dataset_id = {v: k for k, v in c2d.items()}
        except AttributeError:
            self._contiguous_id_to_dataset_id = None
        self._class_names = meta.stuff_classes
        self.sem_seg_loading_fn = sem_seg_loading_fn
        self._num_classes = len(meta.stuff_classes)
        if num_classes is not None:
            assert self._num_classes == num_classes, f"{self._num_classes} != {num_classes}"
        self._ignore_label = ignore_label if ignore_label is not None else meta.ignore_label

    def reset(self):
        self._conf_matrix = np.zeros((self._num_classes + 1, self._num_classes + 1), dtype=np.int64)
        self._predictions = []

    def process(self, inputs, outputs):
        """
        Args:
            inputs: the inputs to a model.
                It is a list of dicts. Each dict corresponds to an image and
                contains keys like "height", "width", "file_name".
            outputs: the outputs of a model. It is either list of semantic segmentation predictions
                (Tensor [H, W]) or list of dicts with key "sem_seg" that contains semantic
                segmentation prediction in the same format.
        """
        for input, output in zip(inputs, outputs):
            output = output["sem_seg"].argmax(dim=0).to(self._cpu_device)
            # np.int / np.float aliases were removed in NumPy 1.24; use the builtin types
            pred = np.array(output, dtype=int)
            gt_filename = self.input_file_to_gt_file[input["file_name"]]
            gt = self.sem_seg_loading_fn(gt_filename, dtype=int)

            gt[gt == self._ignore_label] = self._num_classes

            self._conf_matrix += np.bincount(
                (self._num_classes + 1) * pred.reshape(-1) + gt.reshape(-1),
                minlength=self._conf_matrix.size,
            ).reshape(self._conf_matrix.shape)

            self._predictions.extend(self.encode_json_sem_seg(pred, input["file_name"]))

    def evaluate(self):
        """
        Evaluates standard semantic segmentation metrics (http://cocodataset.org/#stuff-eval):

        * Mean intersection-over-union averaged across classes (mIoU)
        * Frequency Weighted IoU (fwIoU)
        * Mean pixel accuracy averaged across classes (mACC)
        * Pixel Accuracy (pACC)
        """
        if self._distributed:
            synchronize()
            conf_matrix_list = all_gather(self._conf_matrix)
            self._predictions = all_gather(self._predictions)
            self._predictions = list(itertools.chain(*self._predictions))
            if not is_main_process():
                return

            self._conf_matrix = np.zeros_like(self._conf_matrix)
            for conf_matrix in conf_matrix_list:
                self._conf_matrix += conf_matrix

        if self._output_dir:
            PathManager.mkdirs(self._output_dir)
            file_path = os.path.join(self._output_dir, "sem_seg_predictions.json")
            with PathManager.open(file_path, "w") as f:
                f.write(json.dumps(self._predictions))

        acc = np.full(self._num_classes, np.nan, dtype=float)
        iou = np.full(self._num_classes, np.nan, dtype=float)
        tp = self._conf_matrix.diagonal()[:-1].astype(float)
        pos_gt = np.sum(self._conf_matrix[:-1, :-1], axis=0).astype(float)
        class_weights = pos_gt / np.sum(pos_gt)
        pos_pred = np.sum(self._conf_matrix[:-1, :-1], axis=1).astype(float)
        acc_valid = pos_gt > 0
        acc[acc_valid] = tp[acc_valid] / pos_gt[acc_valid]
        iou_valid = (pos_gt + pos_pred) > 0
        union = pos_gt + pos_pred - tp
        iou[acc_valid] = tp[acc_valid] / union[acc_valid]
        macc = np.sum(acc[acc_valid]) / np.sum(acc_valid)
        miou = np.sum(iou[acc_valid]) / np.sum(iou_valid)
        fiou = np.sum(iou[acc_valid] * class_weights[acc_valid])
        pacc = np.sum(tp) / np.sum(pos_gt)

        res = {}
        res["mIoU"] = 100 * miou
        res["fwIoU"] = 100 * fiou
        for i, name in enumerate(self._class_names):
            res["IoU-{}".format(name)] = 100 * iou[i]
        res["mACC"] = 100 * macc
        res["pACC"] = 100 * pacc
        for i, name in enumerate(self._class_names):
            res["ACC-{}".format(name)] = 100 * acc[i]

        if self._output_dir:
            file_path = os.path.join(self._output_dir, "sem_seg_evaluation.pth")
            with PathManager.open(file_path, "wb") as f:
                torch.save(res, f)
        results = OrderedDict({"sem_seg": res})
        self._logger.info(results)
        return results

    def encode_json_sem_seg(self, sem_seg, input_file_name):
        """
        Convert semantic segmentation to COCO stuff format with segments encoded as RLEs.
        See http://cocodataset.org/#format-results
        """
        json_list = []
        for label in np.unique(sem_seg):
            if self._contiguous_id_to_dataset_id is not None:
                assert (
                    label in self._contiguous_id_to_dataset_id
                ), "Label {} is not in the metadata info for {}".format(label, self._dataset_name)
                dataset_id = self._contiguous_id_to_dataset_id[label]
            else:
                dataset_id = int(label)
            mask = (sem_seg == label).astype(np.uint8)
            mask_rle = mask_util.encode(np.array(mask[:, :, None], order="F"))[0]
            mask_rle["counts"] = mask_rle["counts"].decode("utf-8")
            json_list.append(
                {"file_name": input_file_name, "category_id": dataset_id, "segmentation": mask_rle}
            )
        return json_list
/roof_mask_Yv8-0.5.9-py3-none-any.whl/detectron2/evaluation/sem_seg_evaluation.py
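`SemSegEvaluator.process` accumulates its confusion matrix by flattening each (pred, gt) pixel pair into a single index, `(num_classes + 1) * pred + gt`, and counting occurrences with `np.bincount`. A dependency-free sketch of that indexing trick (plain Python loops standing in for the vectorized bincount):

```python
def confusion_matrix(pred, gt, num_classes):
    # One extra row/column holds pixels relabeled to the ignore bin,
    # mirroring the (num_classes + 1) sizing in SemSegEvaluator.
    n = num_classes + 1
    counts = [0] * (n * n)
    for p, g in zip(pred, gt):
        counts[n * p + g] += 1  # flattened index, as in the np.bincount call
    # reshape the flat counts back into an n x n matrix
    return [counts[i * n:(i + 1) * n] for i in range(n)]

pred = [0, 1, 1, 2]
gt = [0, 1, 2, 2]
m = confusion_matrix(pred, gt, num_classes=2)
# Diagonal entries m[k][k] are the per-class true positives
```

The row/column sums of this matrix are exactly the `pos_pred` / `pos_gt` terms used in `evaluate` to derive IoU and accuracy.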
import contextlib
import io
import itertools
import json
import logging
import numpy as np
import os
import tempfile
from collections import OrderedDict
from typing import Optional
from PIL import Image
from tabulate import tabulate

from detectron2.data import MetadataCatalog
from detectron2.utils import comm
from detectron2.utils.file_io import PathManager

from .evaluator import DatasetEvaluator

logger = logging.getLogger(__name__)


class COCOPanopticEvaluator(DatasetEvaluator):
    """
    Evaluate Panoptic Quality metrics on COCO using PanopticAPI.
    It saves panoptic segmentation prediction in `output_dir`

    It contains a synchronize call and has to be called from all workers.
    """

    def __init__(self, dataset_name: str, output_dir: Optional[str] = None):
        """
        Args:
            dataset_name: name of the dataset
            output_dir: output directory to save results for evaluation.
        """
        self._metadata = MetadataCatalog.get(dataset_name)
        self._thing_contiguous_id_to_dataset_id = {
            v: k for k, v in self._metadata.thing_dataset_id_to_contiguous_id.items()
        }
        self._stuff_contiguous_id_to_dataset_id = {
            v: k for k, v in self._metadata.stuff_dataset_id_to_contiguous_id.items()
        }

        self._output_dir = output_dir
        if self._output_dir is not None:
            PathManager.mkdirs(self._output_dir)

    def reset(self):
        self._predictions = []

    def _convert_category_id(self, segment_info):
        isthing = segment_info.pop("isthing", None)
        if isthing is None:
            # the model produces panoptic category id directly. No more conversion needed
            return segment_info
        if isthing is True:
            segment_info["category_id"] = self._thing_contiguous_id_to_dataset_id[
                segment_info["category_id"]
            ]
        else:
            segment_info["category_id"] = self._stuff_contiguous_id_to_dataset_id[
                segment_info["category_id"]
            ]
        return segment_info

    def process(self, inputs, outputs):
        from panopticapi.utils import id2rgb

        for input, output in zip(inputs, outputs):
            panoptic_img, segments_info = output["panoptic_seg"]
            panoptic_img = panoptic_img.cpu().numpy()
            if segments_info is None:
                # If "segments_info" is None, we assume "panoptic_img" is a
                # H*W int32 image storing the panoptic_id in the format of
                # category_id * label_divisor + instance_id. We reserve -1 for
                # VOID label, and add 1 to panoptic_img since the official
                # evaluation script uses 0 for VOID label.
                label_divisor = self._metadata.label_divisor
                segments_info = []
                for panoptic_label in np.unique(panoptic_img):
                    if panoptic_label == -1:
                        # VOID region.
                        continue
                    pred_class = panoptic_label // label_divisor
                    isthing = (
                        pred_class in self._metadata.thing_dataset_id_to_contiguous_id.values()
                    )
                    segments_info.append(
                        {
                            "id": int(panoptic_label) + 1,
                            "category_id": int(pred_class),
                            "isthing": bool(isthing),
                        }
                    )
                # Official evaluation script uses 0 for VOID label.
                panoptic_img += 1

            file_name = os.path.basename(input["file_name"])
            file_name_png = os.path.splitext(file_name)[0] + ".png"
            with io.BytesIO() as out:
                Image.fromarray(id2rgb(panoptic_img)).save(out, format="PNG")
                segments_info = [self._convert_category_id(x) for x in segments_info]
                self._predictions.append(
                    {
                        "image_id": input["image_id"],
                        "file_name": file_name_png,
                        "png_string": out.getvalue(),
                        "segments_info": segments_info,
                    }
                )

    def evaluate(self):
        comm.synchronize()

        self._predictions = comm.gather(self._predictions)
        self._predictions = list(itertools.chain(*self._predictions))
        if not comm.is_main_process():
            return

        # PanopticApi requires local files
        gt_json = PathManager.get_local_path(self._metadata.panoptic_json)
        gt_folder = PathManager.get_local_path(self._metadata.panoptic_root)

        with tempfile.TemporaryDirectory(prefix="panoptic_eval") as pred_dir:
            logger.info("Writing all panoptic predictions to {} ...".format(pred_dir))
            for p in self._predictions:
                with open(os.path.join(pred_dir, p["file_name"]), "wb") as f:
                    f.write(p.pop("png_string"))

            with open(gt_json, "r") as f:
                json_data = json.load(f)
            json_data["annotations"] = self._predictions

            output_dir = self._output_dir or pred_dir
            predictions_json = os.path.join(output_dir, "predictions.json")
            with PathManager.open(predictions_json, "w") as f:
                f.write(json.dumps(json_data))

            from panopticapi.evaluation import pq_compute

            with contextlib.redirect_stdout(io.StringIO()):
                pq_res = pq_compute(
                    gt_json,
                    PathManager.get_local_path(predictions_json),
                    gt_folder=gt_folder,
                    pred_folder=pred_dir,
                )

        res = {}
        res["PQ"] = 100 * pq_res["All"]["pq"]
        res["SQ"] = 100 * pq_res["All"]["sq"]
        res["RQ"] = 100 * pq_res["All"]["rq"]
        res["PQ_th"] = 100 * pq_res["Things"]["pq"]
        res["SQ_th"] = 100 * pq_res["Things"]["sq"]
        res["RQ_th"] = 100 * pq_res["Things"]["rq"]
        res["PQ_st"] = 100 * pq_res["Stuff"]["pq"]
        res["SQ_st"] = 100 * pq_res["Stuff"]["sq"]
        res["RQ_st"] = 100 * pq_res["Stuff"]["rq"]

        results = OrderedDict({"panoptic_seg": res})
        _print_panoptic_results(pq_res)

        return results


def _print_panoptic_results(pq_res):
    headers = ["", "PQ", "SQ", "RQ", "#categories"]
    data = []
    for name in ["All", "Things", "Stuff"]:
        row = [name] + [pq_res[name][k] * 100 for k in ["pq", "sq", "rq"]] + [pq_res[name]["n"]]
        data.append(row)
    table = tabulate(
        data, headers=headers, tablefmt="pipe", floatfmt=".3f", stralign="center", numalign="center"
    )
    logger.info("Panoptic Evaluation Results:\n" + table)


if __name__ == "__main__":
    from detectron2.utils.logger import setup_logger

    logger = setup_logger()
    import argparse

    parser = argparse.ArgumentParser()
    parser.add_argument("--gt-json")
    parser.add_argument("--gt-dir")
    parser.add_argument("--pred-json")
    parser.add_argument("--pred-dir")
    args = parser.parse_args()

    from panopticapi.evaluation import pq_compute

    with contextlib.redirect_stdout(io.StringIO()):
        pq_res = pq_compute(
            args.gt_json, args.pred_json, gt_folder=args.gt_dir, pred_folder=args.pred_dir
        )
        _print_panoptic_results(pq_res)
/roof_mask_Yv8-0.5.9-py3-none-any.whl/detectron2/evaluation/panoptic_evaluation.py
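When `segments_info` is None, the panoptic evaluator above decodes ids of the form `category_id * label_divisor + instance_id`. A small sketch of that encoding and its inverse; the divisor of 1000 is an assumed illustrative value (the real code reads `label_divisor` from dataset metadata):

```python
LABEL_DIVISOR = 1000  # assumed value; detectron2 reads it from dataset metadata

def encode_panoptic_id(category_id, instance_id, divisor=LABEL_DIVISOR):
    # Pack category and instance into one integer label
    return category_id * divisor + instance_id

def decode_panoptic_id(panoptic_id, divisor=LABEL_DIVISOR):
    # Same arithmetic as `pred_class = panoptic_label // label_divisor`
    return panoptic_id // divisor, panoptic_id % divisor

pid = encode_panoptic_id(7, 3)
cat, inst = decode_panoptic_id(pid)
```

The evaluator additionally shifts every label by +1 before writing, because the official PanopticAPI script reserves 0 for the VOID label.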
import torch
from torch import nn
from torch.autograd import Function
from torch.autograd.function import once_differentiable
from torch.nn.modules.utils import _pair


class _ROIAlignRotated(Function):
    @staticmethod
    def forward(ctx, input, roi, output_size, spatial_scale, sampling_ratio):
        ctx.save_for_backward(roi)
        ctx.output_size = _pair(output_size)
        ctx.spatial_scale = spatial_scale
        ctx.sampling_ratio = sampling_ratio
        ctx.input_shape = input.size()
        output = torch.ops.detectron2.roi_align_rotated_forward(
            input, roi, spatial_scale, output_size[0], output_size[1], sampling_ratio
        )
        return output

    @staticmethod
    @once_differentiable
    def backward(ctx, grad_output):
        (rois,) = ctx.saved_tensors
        output_size = ctx.output_size
        spatial_scale = ctx.spatial_scale
        sampling_ratio = ctx.sampling_ratio
        bs, ch, h, w = ctx.input_shape
        grad_input = torch.ops.detectron2.roi_align_rotated_backward(
            grad_output,
            rois,
            spatial_scale,
            output_size[0],
            output_size[1],
            bs,
            ch,
            h,
            w,
            sampling_ratio,
        )
        return grad_input, None, None, None, None, None


roi_align_rotated = _ROIAlignRotated.apply


class ROIAlignRotated(nn.Module):
    def __init__(self, output_size, spatial_scale, sampling_ratio):
        """
        Args:
            output_size (tuple): h, w
            spatial_scale (float): scale the input boxes by this number
            sampling_ratio (int): number of inputs samples to take for each output sample.
                0 to take samples densely.

        Note:
            ROIAlignRotated supports continuous coordinate by default:
            Given a continuous coordinate c, its two neighboring pixel indices (in our
            pixel model) are computed by floor(c - 0.5) and ceil(c - 0.5). For example,
            c=1.3 has pixel neighbors with discrete indices [0] and [1] (which are sampled
            from the underlying signal at continuous coordinates 0.5 and 1.5).
        """
        super(ROIAlignRotated, self).__init__()
        self.output_size = output_size
        self.spatial_scale = spatial_scale
        self.sampling_ratio = sampling_ratio

    def forward(self, input, rois):
        """
        Args:
            input: NCHW images
            rois: Bx6 boxes. First column is the index into N.
                The other 5 columns are (x_ctr, y_ctr, width, height, angle_degrees).
        """
        assert rois.dim() == 2 and rois.size(1) == 6
        orig_dtype = input.dtype
        if orig_dtype == torch.float16:
            input = input.float()
            rois = rois.float()
        return roi_align_rotated(
            input, rois, self.output_size, self.spatial_scale, self.sampling_ratio
        ).to(dtype=orig_dtype)

    def __repr__(self):
        tmpstr = self.__class__.__name__ + "("
        tmpstr += "output_size=" + str(self.output_size)
        tmpstr += ", spatial_scale=" + str(self.spatial_scale)
        tmpstr += ", sampling_ratio=" + str(self.sampling_ratio)
        tmpstr += ")"
        return tmpstr
/roof_mask_Yv8-0.5.9-py3-none-any.whl/detectron2/layers/roi_align_rotated.py
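`ROIAlignRotated.forward` expects each ROI in the `(x_ctr, y_ctr, width, height, angle_degrees)` format. A small sketch of the geometry that format describes, expanding a box into its four corners; this is an illustrative helper, not part of detectron2's API, and the rotation direction shown is a generic counter-clockwise convention:

```python
import math

def rotated_box_corners(x_ctr, y_ctr, w, h, angle_degrees):
    # Corners of a box in the (x_ctr, y_ctr, width, height, angle_degrees)
    # format that ROIAlignRotated expects in its `rois` argument.
    theta = math.radians(angle_degrees)
    c, s = math.cos(theta), math.sin(theta)
    corners = []
    for dx, dy in [(-w / 2, -h / 2), (w / 2, -h / 2), (w / 2, h / 2), (-w / 2, h / 2)]:
        # rotate the center-relative offset, then translate to the center
        corners.append((x_ctr + dx * c - dy * s, y_ctr + dx * s + dy * c))
    return corners

# With angle 0 this reduces to the axis-aligned corners of a 4x2 box at (5, 5)
corners = rotated_box_corners(5.0, 5.0, 4.0, 2.0, 0.0)
```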
import math
import torch


def diou_loss(
    boxes1: torch.Tensor,
    boxes2: torch.Tensor,
    reduction: str = "none",
    eps: float = 1e-7,
) -> torch.Tensor:
    """
    Distance Intersection over Union Loss (Zhaohui Zheng et al.)
    https://arxiv.org/abs/1911.08287
    Args:
        boxes1, boxes2 (Tensor): box locations in XYXY format, shape (N, 4) or (4,).
        reduction: 'none' | 'mean' | 'sum'
            'none': No reduction will be applied to the output.
            'mean': The output will be averaged.
            'sum': The output will be summed.
        eps (float): small number to prevent division by zero
    """

    x1, y1, x2, y2 = boxes1.unbind(dim=-1)
    x1g, y1g, x2g, y2g = boxes2.unbind(dim=-1)

    # TODO: use torch._assert_async() when pytorch 1.8 support is dropped
    assert (x2 >= x1).all(), "bad box: x1 larger than x2"
    assert (y2 >= y1).all(), "bad box: y1 larger than y2"

    # Intersection keypoints
    xkis1 = torch.max(x1, x1g)
    ykis1 = torch.max(y1, y1g)
    xkis2 = torch.min(x2, x2g)
    ykis2 = torch.min(y2, y2g)

    intsct = torch.zeros_like(x1)
    mask = (ykis2 > ykis1) & (xkis2 > xkis1)
    intsct[mask] = (xkis2[mask] - xkis1[mask]) * (ykis2[mask] - ykis1[mask])
    union = (x2 - x1) * (y2 - y1) + (x2g - x1g) * (y2g - y1g) - intsct + eps
    iou = intsct / union

    # smallest enclosing box
    xc1 = torch.min(x1, x1g)
    yc1 = torch.min(y1, y1g)
    xc2 = torch.max(x2, x2g)
    yc2 = torch.max(y2, y2g)
    diag_len = ((xc2 - xc1) ** 2) + ((yc2 - yc1) ** 2) + eps

    # centers of boxes
    x_p = (x2 + x1) / 2
    y_p = (y2 + y1) / 2
    x_g = (x1g + x2g) / 2
    y_g = (y1g + y2g) / 2
    distance = ((x_p - x_g) ** 2) + ((y_p - y_g) ** 2)

    # Eqn. (7)
    loss = 1 - iou + (distance / diag_len)
    if reduction == "mean":
        loss = loss.mean() if loss.numel() > 0 else 0.0 * loss.sum()
    elif reduction == "sum":
        loss = loss.sum()

    return loss


def ciou_loss(
    boxes1: torch.Tensor,
    boxes2: torch.Tensor,
    reduction: str = "none",
    eps: float = 1e-7,
) -> torch.Tensor:
    """
    Complete Intersection over Union Loss (Zhaohui Zheng et al.)
    https://arxiv.org/abs/1911.08287
    Args:
        boxes1, boxes2 (Tensor): box locations in XYXY format, shape (N, 4) or (4,).
        reduction: 'none' | 'mean' | 'sum'
            'none': No reduction will be applied to the output.
            'mean': The output will be averaged.
            'sum': The output will be summed.
        eps (float): small number to prevent division by zero
    """

    x1, y1, x2, y2 = boxes1.unbind(dim=-1)
    x1g, y1g, x2g, y2g = boxes2.unbind(dim=-1)

    # TODO: use torch._assert_async() when pytorch 1.8 support is dropped
    assert (x2 >= x1).all(), "bad box: x1 larger than x2"
    assert (y2 >= y1).all(), "bad box: y1 larger than y2"

    # Intersection keypoints
    xkis1 = torch.max(x1, x1g)
    ykis1 = torch.max(y1, y1g)
    xkis2 = torch.min(x2, x2g)
    ykis2 = torch.min(y2, y2g)

    intsct = torch.zeros_like(x1)
    mask = (ykis2 > ykis1) & (xkis2 > xkis1)
    intsct[mask] = (xkis2[mask] - xkis1[mask]) * (ykis2[mask] - ykis1[mask])
    union = (x2 - x1) * (y2 - y1) + (x2g - x1g) * (y2g - y1g) - intsct + eps
    iou = intsct / union

    # smallest enclosing box
    xc1 = torch.min(x1, x1g)
    yc1 = torch.min(y1, y1g)
    xc2 = torch.max(x2, x2g)
    yc2 = torch.max(y2, y2g)
    diag_len = ((xc2 - xc1) ** 2) + ((yc2 - yc1) ** 2) + eps

    # centers of boxes
    x_p = (x2 + x1) / 2
    y_p = (y2 + y1) / 2
    x_g = (x1g + x2g) / 2
    y_g = (y1g + y2g) / 2
    distance = ((x_p - x_g) ** 2) + ((y_p - y_g) ** 2)

    # width and height of boxes
    w_pred = x2 - x1
    h_pred = y2 - y1
    w_gt = x2g - x1g
    h_gt = y2g - y1g
    v = (4 / (math.pi ** 2)) * torch.pow((torch.atan(w_gt / h_gt) - torch.atan(w_pred / h_pred)), 2)
    with torch.no_grad():
        alpha = v / (1 - iou + v + eps)

    # Eqn. (10)
    loss = 1 - iou + (distance / diag_len) + alpha * v
    if reduction == "mean":
        loss = loss.mean() if loss.numel() > 0 else 0.0 * loss.sum()
    elif reduction == "sum":
        loss = loss.sum()

    return loss
/roof_mask_Yv8-0.5.9-py3-none-any.whl/detectron2/layers/losses.py
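`diou_loss` above is 1 - IoU plus a center-distance penalty normalized by the diagonal of the smallest enclosing box (Eqn. 7 of the DIoU paper). A dependency-free sketch of the same formula for a single pair of XYXY boxes, mirroring the tensor code term by term:

```python
def diou_single(b1, b2, eps=1e-7):
    # b1, b2: (x1, y1, x2, y2) boxes; same terms as detectron2's diou_loss.
    x1, y1, x2, y2 = b1
    x1g, y1g, x2g, y2g = b2
    # intersection area (zero when the boxes are disjoint)
    xi1, yi1 = max(x1, x1g), max(y1, y1g)
    xi2, yi2 = min(x2, x2g), min(y2, y2g)
    intsct = max(0.0, xi2 - xi1) * max(0.0, yi2 - yi1)
    union = (x2 - x1) * (y2 - y1) + (x2g - x1g) * (y2g - y1g) - intsct + eps
    iou = intsct / union
    # squared diagonal of the smallest enclosing box
    diag_len = (max(x2, x2g) - min(x1, x1g)) ** 2 + (max(y2, y2g) - min(y1, y1g)) ** 2 + eps
    # squared distance between box centers
    distance = ((x1 + x2) / 2 - (x1g + x2g) / 2) ** 2 + ((y1 + y2) / 2 - (y1g + y2g) / 2) ** 2
    return 1 - iou + distance / diag_len

loss_same = diou_single((0, 0, 2, 2), (0, 0, 2, 2))   # ~0 for identical boxes
loss_apart = diou_single((0, 0, 1, 1), (2, 2, 3, 3))  # > 1 for disjoint boxes
```

`ciou_loss` adds one more term to this expression, an aspect-ratio consistency penalty `alpha * v`.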
from torch import nn
from torchvision.ops import roi_align


# NOTE: torchvision's RoIAlign has a different default aligned=False
class ROIAlign(nn.Module):
    def __init__(self, output_size, spatial_scale, sampling_ratio, aligned=True):
        """
        Args:
            output_size (tuple): h, w
            spatial_scale (float): scale the input boxes by this number
            sampling_ratio (int): number of inputs samples to take for each output sample.
                0 to take samples densely.
            aligned (bool): if False, use the legacy implementation in
                Detectron. If True, align the results more perfectly.

        Note:
            The meaning of aligned=True:

            Given a continuous coordinate c, its two neighboring pixel indices (in our
            pixel model) are computed by floor(c - 0.5) and ceil(c - 0.5). For example,
            c=1.3 has pixel neighbors with discrete indices [0] and [1] (which are sampled
            from the underlying signal at continuous coordinates 0.5 and 1.5). But the original
            roi_align (aligned=False) does not subtract the 0.5 when computing neighboring
            pixel indices and therefore it uses pixels with a slightly incorrect alignment
            (relative to our pixel model) when performing bilinear interpolation.

            With `aligned=True`, we first appropriately scale the ROI and then shift it by -0.5
            prior to calling roi_align. This produces the correct neighbors; see
            detectron2/tests/test_roi_align.py for verification.

            The difference does not make a difference to the model's performance if
            ROIAlign is used together with conv layers.
        """
        super().__init__()
        self.output_size = output_size
        self.spatial_scale = spatial_scale
        self.sampling_ratio = sampling_ratio
        self.aligned = aligned

        from torchvision import __version__

        version = tuple(int(x) for x in __version__.split(".")[:2])
        # https://github.com/pytorch/vision/pull/2438
        assert version >= (0, 7), "Require torchvision >= 0.7"

    def forward(self, input, rois):
        """
        Args:
            input: NCHW images
            rois: Bx5 boxes. First column is the index into N. The other 4 columns are xyxy.
        """
        assert rois.dim() == 2 and rois.size(1) == 5
        if input.is_quantized:
            input = input.dequantize()
        return roi_align(
            input,
            rois.to(dtype=input.dtype),
            self.output_size,
            self.spatial_scale,
            self.sampling_ratio,
            self.aligned,
        )

    def __repr__(self):
        tmpstr = self.__class__.__name__ + "("
        tmpstr += "output_size=" + str(self.output_size)
        tmpstr += ", spatial_scale=" + str(self.spatial_scale)
        tmpstr += ", sampling_ratio=" + str(self.sampling_ratio)
        tmpstr += ", aligned=" + str(self.aligned)
        tmpstr += ")"
        return tmpstr
/roof_mask_Yv8-0.5.9-py3-none-any.whl/detectron2/layers/roi_align.py
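The `aligned=True` note above defines a pixel model in which a continuous coordinate c has bilinear neighbors floor(c - 0.5) and ceil(c - 0.5). A small sketch of that mapping for the non-degenerate case (illustrative only; the real arithmetic lives in the C++/CUDA kernel):

```python
import math

def bilinear_neighbors(c):
    # Neighboring pixel indices of continuous coordinate c under the
    # aligned pixel model described in the ROIAlign docstring; pixel i's
    # center sits at continuous coordinate i + 0.5.
    lo = math.floor(c - 0.5)
    return lo, lo + 1

# The docstring's example: c = 1.3 samples from pixels 0 and 1,
# whose centers sit at continuous coordinates 0.5 and 1.5.
neighbors = bilinear_neighbors(1.3)
```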
from copy import deepcopy
import fvcore.nn.weight_init as weight_init
import torch
from torch import nn
from torch.nn import functional as F

from .batch_norm import get_norm
from .blocks import DepthwiseSeparableConv2d
from .wrappers import Conv2d


class ASPP(nn.Module):
    """
    Atrous Spatial Pyramid Pooling (ASPP).
    """

    def __init__(
        self,
        in_channels,
        out_channels,
        dilations,
        *,
        norm,
        activation,
        pool_kernel_size=None,
        dropout: float = 0.0,
        use_depthwise_separable_conv=False,
    ):
        """
        Args:
            in_channels (int): number of input channels for ASPP.
            out_channels (int): number of output channels.
            dilations (list): a list of 3 dilations in ASPP.
            norm (str or callable): normalization for all conv layers.
                See :func:`layers.get_norm` for supported format. norm is
                applied to all conv layers except the conv following
                global average pooling.
            activation (callable): activation function.
            pool_kernel_size (tuple, list): the average pooling size (kh, kw)
                for image pooling layer in ASPP. If set to None, it always
                performs global average pooling. If not None, it must be
                divisible by the shape of inputs in forward(). It is recommended
                to use a fixed input feature size in training, and set this
                option to match this size, so that it performs global average
                pooling in training, and the size of the pooling window stays
                consistent in inference.
            dropout (float): apply dropout on the output of ASPP. It is used in
                the official DeepLab implementation with a rate of 0.1:
                https://github.com/tensorflow/models/blob/21b73d22f3ed05b650e85ac50849408dd36de32e/research/deeplab/model.py#L532  # noqa
            use_depthwise_separable_conv (bool): use DepthwiseSeparableConv2d
                for 3x3 convs in ASPP, proposed in :paper:`DeepLabV3+`.
        """
        super(ASPP, self).__init__()
        assert len(dilations) == 3, "ASPP expects 3 dilations, got {}".format(len(dilations))
        self.pool_kernel_size = pool_kernel_size
        self.dropout = dropout
        use_bias = norm == ""
        self.convs = nn.ModuleList()
        # conv 1x1
        self.convs.append(
            Conv2d(
                in_channels,
                out_channels,
                kernel_size=1,
                bias=use_bias,
                norm=get_norm(norm, out_channels),
                activation=deepcopy(activation),
            )
        )
        weight_init.c2_xavier_fill(self.convs[-1])
        # atrous convs
        for dilation in dilations:
            if use_depthwise_separable_conv:
                self.convs.append(
                    DepthwiseSeparableConv2d(
                        in_channels,
                        out_channels,
                        kernel_size=3,
                        padding=dilation,
                        dilation=dilation,
                        norm1=norm,
                        activation1=deepcopy(activation),
                        norm2=norm,
                        activation2=deepcopy(activation),
                    )
                )
            else:
                self.convs.append(
                    Conv2d(
                        in_channels,
                        out_channels,
                        kernel_size=3,
                        padding=dilation,
                        dilation=dilation,
                        bias=use_bias,
                        norm=get_norm(norm, out_channels),
                        activation=deepcopy(activation),
                    )
                )
                weight_init.c2_xavier_fill(self.convs[-1])
        # image pooling
        # We do not add BatchNorm because the spatial resolution is 1x1,
        # the original TF implementation has BatchNorm.
        if pool_kernel_size is None:
            image_pooling = nn.Sequential(
                nn.AdaptiveAvgPool2d(1),
                Conv2d(in_channels, out_channels, 1, bias=True, activation=deepcopy(activation)),
            )
        else:
            image_pooling = nn.Sequential(
                nn.AvgPool2d(kernel_size=pool_kernel_size, stride=1),
                Conv2d(in_channels, out_channels, 1, bias=True, activation=deepcopy(activation)),
            )
        weight_init.c2_xavier_fill(image_pooling[1])
        self.convs.append(image_pooling)

        self.project = Conv2d(
            5 * out_channels,
            out_channels,
            kernel_size=1,
            bias=use_bias,
            norm=get_norm(norm, out_channels),
            activation=deepcopy(activation),
        )
        weight_init.c2_xavier_fill(self.project)

    def forward(self, x):
        size = x.shape[-2:]
        if self.pool_kernel_size is not None:
            if size[0] % self.pool_kernel_size[0] or size[1] % self.pool_kernel_size[1]:
                raise ValueError(
                    "`pool_kernel_size` must be divisible by the shape of inputs. "
                    "Input size: {} `pool_kernel_size`: {}".format(size, self.pool_kernel_size)
                )
        res = []
        for conv in self.convs:
            res.append(conv(x))
        res[-1] = F.interpolate(res[-1], size=size, mode="bilinear", align_corners=False)
        res = torch.cat(res, dim=1)
        res = self.project(res)
        res = F.dropout(res, self.dropout, training=self.training) if self.dropout > 0 else res
        return res
/roof_mask_Yv8-0.5.9-py3-none-any.whl/detectron2/layers/aspp.py
0.949599
0.566498
aspp.py
pypi
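The three atrous branches above set `padding=dilation`, which is exactly what keeps a 3x3 dilated conv's spatial size unchanged so that all five branch outputs can be concatenated along channels. A stdlib-only sketch of the standard output-size formula (the helper name is ours, not part of detectron2):

```python
def conv_out_size(in_size, kernel_size, padding, dilation=1, stride=1):
    """Spatial output size of a conv: floor((in + 2p - d*(k-1) - 1)/s) + 1."""
    effective_k = dilation * (kernel_size - 1) + 1  # receptive span of the dilated kernel
    return (in_size + 2 * padding - effective_k) // stride + 1

# With padding == dilation, a 3x3 atrous conv preserves H and W for any dilation,
# which is why ASPP can torch.cat the branch outputs on dim=1.
for dilation in (6, 12, 18):
    assert conv_out_size(65, kernel_size=3, padding=dilation, dilation=dilation) == 65
```

The 1x1 branch needs no padding for the same reason (its effective kernel span is 1).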
from typing import List, Optional

import torch
from torch.nn import functional as F


def shapes_to_tensor(x: List[int], device: Optional[torch.device] = None) -> torch.Tensor:
    """
    Turn a list of integer scalars or integer Tensor scalars into a vector,
    in a way that's both traceable and scriptable.

    In tracing, `x` should be a list of scalar Tensor, so the output can trace to the inputs.
    In scripting or eager, `x` should be a list of int.
    """
    if torch.jit.is_scripting():
        return torch.as_tensor(x, device=device)
    if torch.jit.is_tracing():
        assert all(
            [isinstance(t, torch.Tensor) for t in x]
        ), "Shape should be tensor during tracing!"
        # as_tensor should not be used in tracing because it records a constant
        ret = torch.stack(x)
        if ret.device != device:  # avoid recording a hard-coded device if not necessary
            ret = ret.to(device=device)
        return ret
    return torch.as_tensor(x, device=device)


def cat(tensors: List[torch.Tensor], dim: int = 0):
    """
    Efficient version of torch.cat that avoids a copy if there is only a single element in a list
    """
    assert isinstance(tensors, (list, tuple))
    if len(tensors) == 1:
        return tensors[0]
    return torch.cat(tensors, dim)


def cross_entropy(input, target, *, reduction="mean", **kwargs):
    """
    Same as `torch.nn.functional.cross_entropy`, but returns 0 (instead of nan)
    for empty inputs.
    """
    if target.numel() == 0 and reduction == "mean":
        return input.sum() * 0.0  # connect the gradient
    return F.cross_entropy(input, target, reduction=reduction, **kwargs)


class _NewEmptyTensorOp(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, new_shape):
        ctx.shape = x.shape
        return x.new_empty(new_shape)

    @staticmethod
    def backward(ctx, grad):
        shape = ctx.shape
        return _NewEmptyTensorOp.apply(grad, shape), None


class Conv2d(torch.nn.Conv2d):
    """
    A wrapper around :class:`torch.nn.Conv2d` to support empty inputs and more features.
    """

    def __init__(self, *args, **kwargs):
        """
        Extra keyword arguments supported in addition to those in `torch.nn.Conv2d`:

        Args:
            norm (nn.Module, optional): a normalization layer
            activation (callable(Tensor) -> Tensor): a callable activation function

        It assumes that norm layer is used before activation.
        """
        norm = kwargs.pop("norm", None)
        activation = kwargs.pop("activation", None)
        super().__init__(*args, **kwargs)

        self.norm = norm
        self.activation = activation

    def forward(self, x):
        # torchscript does not support SyncBatchNorm yet
        # https://github.com/pytorch/pytorch/issues/40507
        # and we skip these codes in torchscript since:
        # 1. currently we only support torchscript in evaluation mode
        # 2. features needed by exporting module to torchscript are added in PyTorch 1.6 or
        # later version, `Conv2d` in these PyTorch versions has already supported empty inputs.
        if not torch.jit.is_scripting():
            if x.numel() == 0 and self.training:
                # https://github.com/pytorch/pytorch/issues/12013
                assert not isinstance(
                    self.norm, torch.nn.SyncBatchNorm
                ), "SyncBatchNorm does not support empty inputs!"

        x = F.conv2d(
            x, self.weight, self.bias, self.stride, self.padding, self.dilation, self.groups
        )
        if self.norm is not None:
            x = self.norm(x)
        if self.activation is not None:
            x = self.activation(x)
        return x


ConvTranspose2d = torch.nn.ConvTranspose2d
BatchNorm2d = torch.nn.BatchNorm2d
interpolate = F.interpolate
Linear = torch.nn.Linear


def nonzero_tuple(x):
    """
    An 'as_tuple=True' version of torch.nonzero to support torchscript,
    because of https://github.com/pytorch/pytorch/issues/38718
    """
    if torch.jit.is_scripting():
        if x.dim() == 0:
            return x.unsqueeze(0).nonzero().unbind(1)
        return x.nonzero().unbind(1)
    else:
        return x.nonzero(as_tuple=True)


@torch.jit.script_if_tracing
def move_device_like(src: torch.Tensor, dst: torch.Tensor) -> torch.Tensor:
    """
    Tracing friendly way to cast tensor to another tensor's device. Device will be treated
    as constant during tracing; scripting the casting process as a whole can work around
    this issue.
    """
    return src.to(dst.device)
/roof_mask_Yv8-0.5.9-py3-none-any.whl/detectron2/layers/wrappers.py
0.96796
0.776496
wrappers.py
pypi
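`nonzero_tuple` returns one index tensor per dimension, aligned elementwise, rather than a single `(N, ndim)` matrix. A pure-Python sketch (the function name is ours) of that "as tuple" convention for a 2D input:

```python
def nonzero_tuple_2d(mat):
    """Return (row_indices, col_indices) of nonzero entries of a 2D list of lists,
    mirroring the shape convention of torch.nonzero(as_tuple=True)."""
    rows, cols = [], []
    for i, row in enumerate(mat):
        for j, v in enumerate(row):
            if v != 0:
                rows.append(i)
                cols.append(j)
    return rows, cols

r, c = nonzero_tuple_2d([[0, 3], [5, 0]])
assert (r, c) == ([0, 1], [1, 0])  # mat[0][1] and mat[1][0] are nonzero
```

The tuple form is convenient for fancy indexing: `x[r, c]` selects exactly the nonzero entries.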
import fvcore.nn.weight_init as weight_init
from torch import nn

from .batch_norm import FrozenBatchNorm2d, get_norm
from .wrappers import Conv2d

"""
CNN building blocks.
"""


class CNNBlockBase(nn.Module):
    """
    A CNN block is assumed to have input channels, output channels and a stride.
    The input and output of `forward()` method must be NCHW tensors.
    The method can perform arbitrary computation but must match the given
    channels and stride specification.

    Attribute:
        in_channels (int):
        out_channels (int):
        stride (int):
    """

    def __init__(self, in_channels, out_channels, stride):
        """
        The `__init__` method of any subclass should also contain these arguments.

        Args:
            in_channels (int):
            out_channels (int):
            stride (int):
        """
        super().__init__()
        self.in_channels = in_channels
        self.out_channels = out_channels
        self.stride = stride

    def freeze(self):
        """
        Make this block not trainable.
        This method sets all parameters to `requires_grad=False`,
        and converts all BatchNorm layers to FrozenBatchNorm.

        Returns:
            the block itself
        """
        for p in self.parameters():
            p.requires_grad = False
        FrozenBatchNorm2d.convert_frozen_batchnorm(self)
        return self


class DepthwiseSeparableConv2d(nn.Module):
    """
    A kxk depthwise convolution + a 1x1 convolution.

    In :paper:`xception`, norm & activation are applied on the second conv.
    :paper:`mobilenet` uses norm & activation on both convs.
    """

    def __init__(
        self,
        in_channels,
        out_channels,
        kernel_size=3,
        padding=1,
        dilation=1,
        *,
        norm1=None,
        activation1=None,
        norm2=None,
        activation2=None,
    ):
        """
        Args:
            norm1, norm2 (str or callable): normalization for the two conv layers.
            activation1, activation2 (callable(Tensor) -> Tensor): activation
                function for the two conv layers.
        """
        super().__init__()
        self.depthwise = Conv2d(
            in_channels,
            in_channels,
            kernel_size=kernel_size,
            padding=padding,
            dilation=dilation,
            groups=in_channels,
            bias=not norm1,
            norm=get_norm(norm1, in_channels),
            activation=activation1,
        )
        self.pointwise = Conv2d(
            in_channels,
            out_channels,
            kernel_size=1,
            bias=not norm2,
            norm=get_norm(norm2, out_channels),
            activation=activation2,
        )

        # default initialization
        weight_init.c2_msra_fill(self.depthwise)
        weight_init.c2_msra_fill(self.pointwise)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))
/roof_mask_Yv8-0.5.9-py3-none-any.whl/detectron2/layers/blocks.py
0.962276
0.501648
blocks.py
pypi
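The point of `DepthwiseSeparableConv2d` is parameter (and FLOP) savings: a kxk depthwise conv with `groups=in_channels` plus a 1x1 pointwise conv replaces one dense kxk conv. A stdlib-only sketch of the weight-count arithmetic (ignoring bias terms; the helper names are ours):

```python
def dense_conv_params(c_in, c_out, k):
    # one kxk filter per (input channel, output channel) pair
    return c_in * c_out * k * k

def depthwise_separable_params(c_in, c_out, k):
    # one kxk filter per input channel, then a 1x1 conv mixing channels
    return c_in * k * k + c_in * c_out

dense = dense_conv_params(256, 256, 3)               # 589,824 weights
separable = depthwise_separable_params(256, 256, 3)  # 2,304 + 65,536 = 67,840 weights
assert dense / separable > 8  # roughly an 8.7x reduction at these sizes
```

This is the trade-off DeepLabV3+ exploits when `use_depthwise_separable_conv=True` in ASPP above.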
import copy
from typing import Dict

import numpy as np
import torch
from scipy.optimize import linear_sum_assignment

from detectron2.config import configurable
from detectron2.structures import Boxes, Instances

from ..config.config import CfgNode as CfgNode_
from .base_tracker import BaseTracker


class BaseHungarianTracker(BaseTracker):
    """
    A base class for all Hungarian trackers
    """

    @configurable
    def __init__(
        self,
        video_height: int,
        video_width: int,
        max_num_instances: int = 200,
        max_lost_frame_count: int = 0,
        min_box_rel_dim: float = 0.02,
        min_instance_period: int = 1,
        **kwargs
    ):
        """
        Args:
            video_height: height of the video frame
            video_width: width of the video frame
            max_num_instances: maximum number of IDs allowed to be tracked
            max_lost_frame_count: maximum number of frames an ID can lose tracking;
                past this number, an ID is considered lost forever
            min_box_rel_dim: a percentage; a bbox smaller than this relative
                dimension is removed from tracking
            min_instance_period: an instance will be shown only after this number
                of periods since it first shows up in the video
        """
        super().__init__(**kwargs)
        self._video_height = video_height
        self._video_width = video_width
        self._max_num_instances = max_num_instances
        self._max_lost_frame_count = max_lost_frame_count
        self._min_box_rel_dim = min_box_rel_dim
        self._min_instance_period = min_instance_period

    @classmethod
    def from_config(cls, cfg: CfgNode_) -> Dict:
        raise NotImplementedError("Calling HungarianTracker::from_config")

    def build_cost_matrix(self, instances: Instances, prev_instances: Instances) -> np.ndarray:
        raise NotImplementedError("Calling HungarianTracker::build_matrix")

    def update(self, instances: Instances) -> Instances:
        if instances.has("pred_keypoints"):
            raise NotImplementedError("Need to add support for keypoints")
        instances = self._initialize_extra_fields(instances)
        if self._prev_instances is not None:
            self._untracked_prev_idx = set(range(len(self._prev_instances)))
            cost_matrix = self.build_cost_matrix(instances, self._prev_instances)
            matched_idx, matched_prev_idx = linear_sum_assignment(cost_matrix)
            instances = self._process_matched_idx(instances, matched_idx, matched_prev_idx)
            instances = self._process_unmatched_idx(instances, matched_idx)
            instances = self._process_unmatched_prev_idx(instances, matched_prev_idx)
        self._prev_instances = copy.deepcopy(instances)
        return instances

    def _initialize_extra_fields(self, instances: Instances) -> Instances:
        """
        If input instances don't have ID, ID_period, lost_frame_count fields,
        this method is used to initialize these fields.

        Args:
            instances: D2 Instances, for predictions of the current frame
        Return:
            D2 Instances with extra fields added
        """
        if not instances.has("ID"):
            instances.set("ID", [None] * len(instances))
        if not instances.has("ID_period"):
            instances.set("ID_period", [None] * len(instances))
        if not instances.has("lost_frame_count"):
            instances.set("lost_frame_count", [None] * len(instances))
        if self._prev_instances is None:
            instances.ID = list(range(len(instances)))
            self._id_count += len(instances)
            instances.ID_period = [1] * len(instances)
            instances.lost_frame_count = [0] * len(instances)
        return instances

    def _process_matched_idx(
        self, instances: Instances, matched_idx: np.ndarray, matched_prev_idx: np.ndarray
    ) -> Instances:
        assert matched_idx.size == matched_prev_idx.size
        for i in range(matched_idx.size):
            instances.ID[matched_idx[i]] = self._prev_instances.ID[matched_prev_idx[i]]
            instances.ID_period[matched_idx[i]] = (
                self._prev_instances.ID_period[matched_prev_idx[i]] + 1
            )
            instances.lost_frame_count[matched_idx[i]] = 0
        return instances

    def _process_unmatched_idx(self, instances: Instances, matched_idx: np.ndarray) -> Instances:
        untracked_idx = set(range(len(instances))).difference(set(matched_idx))
        for idx in untracked_idx:
            instances.ID[idx] = self._id_count
            self._id_count += 1
            instances.ID_period[idx] = 1
            instances.lost_frame_count[idx] = 0
        return instances

    def _process_unmatched_prev_idx(
        self, instances: Instances, matched_prev_idx: np.ndarray
    ) -> Instances:
        untracked_instances = Instances(
            image_size=instances.image_size,
            pred_boxes=[],
            pred_masks=[],
            pred_classes=[],
            scores=[],
            ID=[],
            ID_period=[],
            lost_frame_count=[],
        )
        prev_bboxes = list(self._prev_instances.pred_boxes)
        prev_classes = list(self._prev_instances.pred_classes)
        prev_scores = list(self._prev_instances.scores)
        prev_ID_period = self._prev_instances.ID_period
        if instances.has("pred_masks"):
            prev_masks = list(self._prev_instances.pred_masks)
        untracked_prev_idx = set(range(len(self._prev_instances))).difference(set(matched_prev_idx))
        for idx in untracked_prev_idx:
            x_left, y_top, x_right, y_bot = prev_bboxes[idx]
            if (
                (1.0 * (x_right - x_left) / self._video_width < self._min_box_rel_dim)
                or (1.0 * (y_bot - y_top) / self._video_height < self._min_box_rel_dim)
                or self._prev_instances.lost_frame_count[idx] >= self._max_lost_frame_count
                or prev_ID_period[idx] <= self._min_instance_period
            ):
                continue
            untracked_instances.pred_boxes.append(list(prev_bboxes[idx].numpy()))
            untracked_instances.pred_classes.append(int(prev_classes[idx]))
            untracked_instances.scores.append(float(prev_scores[idx]))
            untracked_instances.ID.append(self._prev_instances.ID[idx])
            untracked_instances.ID_period.append(self._prev_instances.ID_period[idx])
            untracked_instances.lost_frame_count.append(
                self._prev_instances.lost_frame_count[idx] + 1
            )
            if instances.has("pred_masks"):
                untracked_instances.pred_masks.append(prev_masks[idx].numpy().astype(np.uint8))

        untracked_instances.pred_boxes = Boxes(torch.FloatTensor(untracked_instances.pred_boxes))
        untracked_instances.pred_classes = torch.IntTensor(untracked_instances.pred_classes)
        untracked_instances.scores = torch.FloatTensor(untracked_instances.scores)
        if instances.has("pred_masks"):
            untracked_instances.pred_masks = torch.IntTensor(untracked_instances.pred_masks)
        else:
            untracked_instances.remove("pred_masks")

        return Instances.cat(
            [
                instances,
                untracked_instances,
            ]
        )
/roof_mask_Yv8-0.5.9-py3-none-any.whl/detectron2/tracking/hungarian_tracker.py
0.842021
0.367242
hungarian_tracker.py
pypi
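`linear_sum_assignment` solves the assignment problem: pick one previous-frame detection per current detection so the total cost is minimal. A brute-force reference for square cost matrices (exponential, for illustration only; the function name is ours) makes the objective concrete:

```python
from itertools import permutations

def brute_force_assignment(cost):
    """For an n x n cost matrix (list of lists), return the column assigned to each
    row under the minimum-total-cost permutation. Same objective that
    scipy.optimize.linear_sum_assignment minimizes, solved by exhaustive search."""
    n = len(cost)
    best = min(
        permutations(range(n)),
        key=lambda cols: sum(cost[r][c] for r, c in enumerate(cols)),
    )
    return list(best)

cost = [
    [4, 1, 3],
    [2, 0, 5],
    [3, 2, 2],
]
# row 0 -> col 1, row 1 -> col 0, row 2 -> col 2: total cost 1 + 2 + 2 = 5
assert brute_force_assignment(cost) == [1, 0, 2]
```

The tracker then treats assigned (row, col) pairs as matched detections, and the unmatched rows/columns go through `_process_unmatched_idx` / `_process_unmatched_prev_idx`.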
from detectron2.config import configurable
from detectron2.utils.registry import Registry

from ..config.config import CfgNode as CfgNode_
from ..structures import Instances

TRACKER_HEADS_REGISTRY = Registry("TRACKER_HEADS")
TRACKER_HEADS_REGISTRY.__doc__ = """
Registry for tracking classes.
"""


class BaseTracker(object):
    """
    A parent class for all trackers
    """

    @configurable
    def __init__(self, **kwargs):
        self._prev_instances = None  # (D2)instances for previous frame
        self._matched_idx = set()  # indices in prev_instances found matching
        self._matched_ID = set()  # identities in prev_instances found matching
        self._untracked_prev_idx = set()  # indices in prev_instances not found matching
        self._id_count = 0  # used to assign new id

    @classmethod
    def from_config(cls, cfg: CfgNode_):
        raise NotImplementedError("Calling BaseTracker::from_config")

    def update(self, predictions: Instances) -> Instances:
        """
        Args:
            predictions: D2 Instances for predictions of the current frame
        Return:
            D2 Instances for predictions of the current frame with ID assigned

            _prev_instances and instances will have the following fields:
              .pred_boxes               (shape=[N, 4])
              .scores                   (shape=[N,])
              .pred_classes             (shape=[N,])
              .pred_keypoints           (shape=[N, M, 3], Optional)
              .pred_masks               (shape=List[2D_MASK], Optional)   2D_MASK: shape=[H, W]
              .ID                       (shape=[N,])

            N: # of detected bboxes
            H and W: height and width of 2D mask
        """
        raise NotImplementedError("Calling BaseTracker::update")


def build_tracker_head(cfg: CfgNode_) -> BaseTracker:
    """
    Build a tracker head from `cfg.TRACKER_HEADS.TRACKER_NAME`.

    Args:
        cfg: D2 CfgNode, config file with tracker information
    Return:
        tracker object
    """
    name = cfg.TRACKER_HEADS.TRACKER_NAME
    tracker_class = TRACKER_HEADS_REGISTRY.get(name)
    return tracker_class(cfg)
/roof_mask_Yv8-0.5.9-py3-none-any.whl/detectron2/tracking/base_tracker.py
0.72086
0.23118
base_tracker.py
pypi
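`build_tracker_head` relies on the registry pattern: concrete trackers register themselves under a name so a config string can select one at runtime. A minimal stdlib sketch of that mechanism (class names are ours; detectron2's `Registry` has more features):

```python
class MiniRegistry:
    """A tiny name -> class mapping, in the spirit of detectron2's Registry."""

    def __init__(self, name):
        self._name = name
        self._obj_map = {}

    def register(self):
        # used as a class decorator: @REGISTRY.register()
        def deco(cls):
            self._obj_map[cls.__name__] = cls
            return cls
        return deco

    def get(self, name):
        return self._obj_map[name]

TRACKERS = MiniRegistry("TRACKER_HEADS")

@TRACKERS.register()
class DummyTracker:
    pass

# a config value like TRACKER_NAME = "DummyTracker" resolves to the class itself
assert TRACKERS.get("DummyTracker") is DummyTracker
```

This is why `BBoxIOUTracker` and `VanillaHungarianBBoxIOUTracker` below carry the `@TRACKER_HEADS_REGISTRY.register()` decorator.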
import copy
from typing import List

import numpy as np
import torch

from detectron2.config import configurable
from detectron2.structures import Boxes, Instances
from detectron2.structures.boxes import pairwise_iou

from ..config.config import CfgNode as CfgNode_
from .base_tracker import TRACKER_HEADS_REGISTRY, BaseTracker


@TRACKER_HEADS_REGISTRY.register()
class BBoxIOUTracker(BaseTracker):
    """
    A bounding box tracker to assign ID based on IoU between current and previous instances
    """

    @configurable
    def __init__(
        self,
        *,
        video_height: int,
        video_width: int,
        max_num_instances: int = 200,
        max_lost_frame_count: int = 0,
        min_box_rel_dim: float = 0.02,
        min_instance_period: int = 1,
        track_iou_threshold: float = 0.5,
        **kwargs,
    ):
        """
        Args:
            video_height: height of the video frame
            video_width: width of the video frame
            max_num_instances: maximum number of IDs allowed to be tracked
            max_lost_frame_count: maximum number of frames an ID can lose tracking;
                past this number, an ID is considered lost forever
            min_box_rel_dim: a percentage; a bbox smaller than this relative
                dimension is removed from tracking
            min_instance_period: an instance will be shown only after this number
                of periods since it first shows up in the video
            track_iou_threshold: IoU threshold; below this number a bbox pair
                is removed from tracking
        """
        super().__init__(**kwargs)
        self._video_height = video_height
        self._video_width = video_width
        self._max_num_instances = max_num_instances
        self._max_lost_frame_count = max_lost_frame_count
        self._min_box_rel_dim = min_box_rel_dim
        self._min_instance_period = min_instance_period
        self._track_iou_threshold = track_iou_threshold

    @classmethod
    def from_config(cls, cfg: CfgNode_):
        """
        Old style initialization using CfgNode

        Args:
            cfg: D2 CfgNode, config file
        Return:
            dictionary storing arguments for __init__ method
        """
        assert "VIDEO_HEIGHT" in cfg.TRACKER_HEADS
        assert "VIDEO_WIDTH" in cfg.TRACKER_HEADS
        video_height = cfg.TRACKER_HEADS.get("VIDEO_HEIGHT")
        video_width = cfg.TRACKER_HEADS.get("VIDEO_WIDTH")
        max_num_instances = cfg.TRACKER_HEADS.get("MAX_NUM_INSTANCES", 200)
        max_lost_frame_count = cfg.TRACKER_HEADS.get("MAX_LOST_FRAME_COUNT", 0)
        min_box_rel_dim = cfg.TRACKER_HEADS.get("MIN_BOX_REL_DIM", 0.02)
        min_instance_period = cfg.TRACKER_HEADS.get("MIN_INSTANCE_PERIOD", 1)
        track_iou_threshold = cfg.TRACKER_HEADS.get("TRACK_IOU_THRESHOLD", 0.5)
        return {
            "_target_": "detectron2.tracking.bbox_iou_tracker.BBoxIOUTracker",
            "video_height": video_height,
            "video_width": video_width,
            "max_num_instances": max_num_instances,
            "max_lost_frame_count": max_lost_frame_count,
            "min_box_rel_dim": min_box_rel_dim,
            "min_instance_period": min_instance_period,
            "track_iou_threshold": track_iou_threshold,
        }

    def update(self, instances: Instances) -> Instances:
        """
        See BaseTracker description
        """
        instances = self._initialize_extra_fields(instances)
        if self._prev_instances is not None:
            # calculate IoU of all bbox pairs
            iou_all = pairwise_iou(
                boxes1=instances.pred_boxes,
                boxes2=self._prev_instances.pred_boxes,
            )
            # sort IoU in descending order
            bbox_pairs = self._create_prediction_pairs(instances, iou_all)
            # assign previous ID to current bbox if IoU > track_iou_threshold
            self._reset_fields()
            for bbox_pair in bbox_pairs:
                idx = bbox_pair["idx"]
                prev_id = bbox_pair["prev_id"]
                if (
                    idx in self._matched_idx
                    or prev_id in self._matched_ID
                    or bbox_pair["IoU"] < self._track_iou_threshold
                ):
                    continue
                instances.ID[idx] = prev_id
                instances.ID_period[idx] = bbox_pair["prev_period"] + 1
                instances.lost_frame_count[idx] = 0
                self._matched_idx.add(idx)
                self._matched_ID.add(prev_id)
                self._untracked_prev_idx.remove(bbox_pair["prev_idx"])
            instances = self._assign_new_id(instances)
            instances = self._merge_untracked_instances(instances)
        self._prev_instances = copy.deepcopy(instances)
        return instances

    def _create_prediction_pairs(self, instances: Instances, iou_all: np.ndarray) -> List:
        """
        For all instances in previous and current frames, create pairs. For each pair,
        store the index of the instance in current frame predictions, the index in
        previous predictions, the ID in previous predictions, the IoU of the bboxes
        in this pair, and the period in previous predictions.

        Args:
            instances: D2 Instances, for predictions of the current frame
            iou_all: IoU for all bbox pairs
        Return:
            A list of IoU for all pairs
        """
        bbox_pairs = []
        for i in range(len(instances)):
            for j in range(len(self._prev_instances)):
                bbox_pairs.append(
                    {
                        "idx": i,
                        "prev_idx": j,
                        "prev_id": self._prev_instances.ID[j],
                        "IoU": iou_all[i, j],
                        "prev_period": self._prev_instances.ID_period[j],
                    }
                )
        return bbox_pairs

    def _initialize_extra_fields(self, instances: Instances) -> Instances:
        """
        If input instances don't have ID, ID_period, lost_frame_count fields,
        this method is used to initialize these fields.

        Args:
            instances: D2 Instances, for predictions of the current frame
        Return:
            D2 Instances with extra fields added
        """
        if not instances.has("ID"):
            instances.set("ID", [None] * len(instances))
        if not instances.has("ID_period"):
            instances.set("ID_period", [None] * len(instances))
        if not instances.has("lost_frame_count"):
            instances.set("lost_frame_count", [None] * len(instances))
        if self._prev_instances is None:
            instances.ID = list(range(len(instances)))
            self._id_count += len(instances)
            instances.ID_period = [1] * len(instances)
            instances.lost_frame_count = [0] * len(instances)
        return instances

    def _reset_fields(self):
        """
        Before each update call, reset fields first
        """
        self._matched_idx = set()
        self._matched_ID = set()
        self._untracked_prev_idx = set(range(len(self._prev_instances)))

    def _assign_new_id(self, instances: Instances) -> Instances:
        """
        For each untracked instance, assign a new id

        Args:
            instances: D2 Instances, for predictions of the current frame
        Return:
            D2 Instances with new ID assigned
        """
        untracked_idx = set(range(len(instances))).difference(self._matched_idx)
        for idx in untracked_idx:
            instances.ID[idx] = self._id_count
            self._id_count += 1
            instances.ID_period[idx] = 1
            instances.lost_frame_count[idx] = 0
        return instances

    def _merge_untracked_instances(self, instances: Instances) -> Instances:
        """
        For untracked previous instances, under certain conditions, still keep them
        in tracking and merge with the current instances.

        Args:
            instances: D2 Instances, for predictions of the current frame
        Return:
            D2 Instances merging current instances and instances from previous
            frame decided to keep tracking
        """
        untracked_instances = Instances(
            image_size=instances.image_size,
            pred_boxes=[],
            pred_classes=[],
            scores=[],
            ID=[],
            ID_period=[],
            lost_frame_count=[],
        )
        prev_bboxes = list(self._prev_instances.pred_boxes)
        prev_classes = list(self._prev_instances.pred_classes)
        prev_scores = list(self._prev_instances.scores)
        prev_ID_period = self._prev_instances.ID_period
        if instances.has("pred_masks"):
            untracked_instances.set("pred_masks", [])
            prev_masks = list(self._prev_instances.pred_masks)
        if instances.has("pred_keypoints"):
            untracked_instances.set("pred_keypoints", [])
            prev_keypoints = list(self._prev_instances.pred_keypoints)
        if instances.has("pred_keypoint_heatmaps"):
            untracked_instances.set("pred_keypoint_heatmaps", [])
            prev_keypoint_heatmaps = list(self._prev_instances.pred_keypoint_heatmaps)
        for idx in self._untracked_prev_idx:
            x_left, y_top, x_right, y_bot = prev_bboxes[idx]
            if (
                (1.0 * (x_right - x_left) / self._video_width < self._min_box_rel_dim)
                or (1.0 * (y_bot - y_top) / self._video_height < self._min_box_rel_dim)
                or self._prev_instances.lost_frame_count[idx] >= self._max_lost_frame_count
                or prev_ID_period[idx] <= self._min_instance_period
            ):
                continue
            untracked_instances.pred_boxes.append(list(prev_bboxes[idx].numpy()))
            untracked_instances.pred_classes.append(int(prev_classes[idx]))
            untracked_instances.scores.append(float(prev_scores[idx]))
            untracked_instances.ID.append(self._prev_instances.ID[idx])
            untracked_instances.ID_period.append(self._prev_instances.ID_period[idx])
            untracked_instances.lost_frame_count.append(
                self._prev_instances.lost_frame_count[idx] + 1
            )
            if instances.has("pred_masks"):
                untracked_instances.pred_masks.append(prev_masks[idx].numpy().astype(np.uint8))
            if instances.has("pred_keypoints"):
                untracked_instances.pred_keypoints.append(
                    prev_keypoints[idx].numpy().astype(np.uint8)
                )
            if instances.has("pred_keypoint_heatmaps"):
                untracked_instances.pred_keypoint_heatmaps.append(
                    prev_keypoint_heatmaps[idx].numpy().astype(np.float32)
                )

        untracked_instances.pred_boxes = Boxes(torch.FloatTensor(untracked_instances.pred_boxes))
        untracked_instances.pred_classes = torch.IntTensor(untracked_instances.pred_classes)
        untracked_instances.scores = torch.FloatTensor(untracked_instances.scores)
        if instances.has("pred_masks"):
            untracked_instances.pred_masks = torch.IntTensor(untracked_instances.pred_masks)
        if instances.has("pred_keypoints"):
            untracked_instances.pred_keypoints = torch.IntTensor(untracked_instances.pred_keypoints)
        if instances.has("pred_keypoint_heatmaps"):
            untracked_instances.pred_keypoint_heatmaps = torch.FloatTensor(
                untracked_instances.pred_keypoint_heatmaps
            )

        return Instances.cat(
            [
                instances,
                untracked_instances,
            ]
        )
/roof_mask_Yv8-0.5.9-py3-none-any.whl/detectron2/tracking/bbox_iou_tracker.py
0.843911
0.318141
bbox_iou_tracker.py
pypi
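The matching in `BBoxIOUTracker.update` hinges on pairwise IoU between current and previous boxes. A stdlib stand-in for detectron2's `pairwise_iou` on a single pair of XYXY boxes (the function name is ours) shows the quantity the `track_iou_threshold` (default 0.5) is compared against:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two (x0, y0, x1, y1) boxes."""
    ax0, ay0, ax1, ay1 = box_a
    bx0, by0, bx1, by1 = box_b
    iw = max(0.0, min(ax1, bx1) - max(ax0, bx0))  # overlap width (0 if disjoint)
    ih = max(0.0, min(ay1, by1) - max(ay0, by0))  # overlap height
    inter = iw * ih
    union = (ax1 - ax0) * (ay1 - ay0) + (bx1 - bx0) * (by1 - by0) - inter
    return inter / union if union > 0 else 0.0

assert iou((0, 0, 10, 10), (0, 0, 10, 10)) == 1.0   # identical boxes: kept as a match
assert iou((0, 0, 10, 10), (20, 20, 30, 30)) == 0.0  # disjoint boxes: skipped
# half-overlapping boxes: 50 / 150 = 1/3, below the default 0.5 threshold
assert abs(iou((0, 0, 10, 10), (5, 0, 15, 10)) - 1 / 3) < 1e-9
```

Pairs whose IoU falls below the threshold are skipped in the ID-assignment loop, so those current-frame boxes receive fresh IDs via `_assign_new_id`.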
from typing import List

import numpy as np

from detectron2.config import CfgNode as CfgNode_
from detectron2.config import configurable
from detectron2.structures import Instances
from detectron2.structures.boxes import pairwise_iou
from detectron2.tracking.utils import LARGE_COST_VALUE, create_prediction_pairs

from .base_tracker import TRACKER_HEADS_REGISTRY
from .hungarian_tracker import BaseHungarianTracker


@TRACKER_HEADS_REGISTRY.register()
class VanillaHungarianBBoxIOUTracker(BaseHungarianTracker):
    """
    Hungarian algorithm based tracker using bbox IoU as metric
    """

    @configurable
    def __init__(
        self,
        *,
        video_height: int,
        video_width: int,
        max_num_instances: int = 200,
        max_lost_frame_count: int = 0,
        min_box_rel_dim: float = 0.02,
        min_instance_period: int = 1,
        track_iou_threshold: float = 0.5,
        **kwargs,
    ):
        """
        Args:
            video_height: height of the video frame
            video_width: width of the video frame
            max_num_instances: maximum number of IDs allowed to be tracked
            max_lost_frame_count: maximum number of frames an ID can lose tracking;
                past this number, an ID is considered lost forever
            min_box_rel_dim: a percentage; a bbox smaller than this relative
                dimension is removed from tracking
            min_instance_period: an instance will be shown only after this number
                of periods since it first shows up in the video
            track_iou_threshold: IoU threshold; below this number a bbox pair
                is removed from tracking
        """
        super().__init__(
            video_height=video_height,
            video_width=video_width,
            max_num_instances=max_num_instances,
            max_lost_frame_count=max_lost_frame_count,
            min_box_rel_dim=min_box_rel_dim,
            min_instance_period=min_instance_period,
        )
        self._track_iou_threshold = track_iou_threshold

    @classmethod
    def from_config(cls, cfg: CfgNode_):
        """
        Old style initialization using CfgNode

        Args:
            cfg: D2 CfgNode, config file
        Return:
            dictionary storing arguments for __init__ method
        """
        assert "VIDEO_HEIGHT" in cfg.TRACKER_HEADS
        assert "VIDEO_WIDTH" in cfg.TRACKER_HEADS
        video_height = cfg.TRACKER_HEADS.get("VIDEO_HEIGHT")
        video_width = cfg.TRACKER_HEADS.get("VIDEO_WIDTH")
        max_num_instances = cfg.TRACKER_HEADS.get("MAX_NUM_INSTANCES", 200)
        max_lost_frame_count = cfg.TRACKER_HEADS.get("MAX_LOST_FRAME_COUNT", 0)
        min_box_rel_dim = cfg.TRACKER_HEADS.get("MIN_BOX_REL_DIM", 0.02)
        min_instance_period = cfg.TRACKER_HEADS.get("MIN_INSTANCE_PERIOD", 1)
        track_iou_threshold = cfg.TRACKER_HEADS.get("TRACK_IOU_THRESHOLD", 0.5)
        return {
            "_target_": "detectron2.tracking.vanilla_hungarian_bbox_iou_tracker.VanillaHungarianBBoxIOUTracker",  # noqa
            "video_height": video_height,
            "video_width": video_width,
            "max_num_instances": max_num_instances,
            "max_lost_frame_count": max_lost_frame_count,
            "min_box_rel_dim": min_box_rel_dim,
            "min_instance_period": min_instance_period,
            "track_iou_threshold": track_iou_threshold,
        }

    def build_cost_matrix(self, instances: Instances, prev_instances: Instances) -> np.ndarray:
        """
        Build the cost matrix for the assignment problem
        (https://en.wikipedia.org/wiki/Assignment_problem)

        Args:
            instances: D2 Instances, for current frame predictions
            prev_instances: D2 Instances, for previous frame predictions
        Return:
            the cost matrix in numpy array
        """
        assert instances is not None and prev_instances is not None
        # calculate IoU of all bbox pairs
        iou_all = pairwise_iou(
            boxes1=instances.pred_boxes,
            boxes2=self._prev_instances.pred_boxes,
        )
        bbox_pairs = create_prediction_pairs(
            instances, self._prev_instances, iou_all, threshold=self._track_iou_threshold
        )
        # assign a large cost value to make sure pairs below the IoU threshold won't be matched
        cost_matrix = np.full((len(instances), len(prev_instances)), LARGE_COST_VALUE)
        return self.assign_cost_matrix_values(cost_matrix, bbox_pairs)

    def assign_cost_matrix_values(self, cost_matrix: np.ndarray, bbox_pairs: List) -> np.ndarray:
        """
        Based on the IoU of each pair of bboxes, assign the associated value in the cost matrix

        Args:
            cost_matrix: np.ndarray, initialized 2D array with target dimensions
            bbox_pairs: list of bbox pairs; in each pair, the IoU value is stored
        Return:
            np.ndarray, cost_matrix with assigned values
        """
        for pair in bbox_pairs:
            # assign -1 for pairs with IoU above threshold; the algorithm will minimize cost
            cost_matrix[pair["idx"]][pair["prev_idx"]] = -1
        return cost_matrix
/roof_mask_Yv8-0.5.9-py3-none-any.whl/detectron2/tracking/vanilla_hungarian_bbox_iou_tracker.py
0.926408
0.440409
vanilla_hungarian_bbox_iou_tracker.py
pypi
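The cost matrix construction above is simple: initialize everything to a large sentinel cost, then write -1 wherever a pair's IoU cleared the threshold, so the minimizer can only select pre-filtered pairs. A stdlib sketch of the same scheme (the sentinel value and function name are ours; detectron2 defines its own `LARGE_COST_VALUE` in `tracking.utils`):

```python
LARGE_COST = 100_000  # assumed sentinel, standing in for detectron2's LARGE_COST_VALUE

def build_cost_matrix_sketch(n_cur, n_prev, pairs):
    """Fill an n_cur x n_prev matrix with a large cost, then mark each
    above-threshold pair with -1 so it becomes the only attractive match."""
    cost = [[LARGE_COST] * n_prev for _ in range(n_cur)]
    for p in pairs:
        cost[p["idx"]][p["prev_idx"]] = -1
    return cost

# one surviving pair: current box 0 matched against previous box 1
m = build_cost_matrix_sketch(2, 2, [{"idx": 0, "prev_idx": 1}])
assert m == [[100_000, -1], [100_000, 100_000]]
```

Because every eligible pair gets the same -1 cost, this "vanilla" variant only encodes eligibility; an IoU-weighted variant would instead write a cost that decreases with IoU.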